The intersection of artificial intelligence and biotechnology is, depending on whom you ask, either a harbinger of our demise or the dawn of a biomedical renaissance. AI is being framed as everything from a rogue scientist’s bioweapon lab to the ultimate safeguard against pandemics. But as someone who sat on the National Academies’ study committee on AI and Biosecurity, I can tell you: the truth is neither so dire nor so utopian.
Last week, the National Academies of Sciences, Engineering, and Medicine released its report, The Age of AI in the Life Sciences: Benefits and Biosecurity Considerations, an in-depth assessment of AI’s role in accelerating biological discovery and the risks it presents to biosecurity. The timing is critical: panic over AI’s potential to enable bioengineered threats has fueled a wave of doomsday speculation. But this report offers a reality check: AI’s impact on biosecurity is complex, and its risks are often overstated relative to the technology’s real-world limitations.
Last week's tech + policy edition also explored mirror life and the risk-reward hype cycle.
Separating Fiction from Reality
Let’s start with the prevailing fear: that AI is lowering the barriers to creating biological weapons. The reality? Today, AI-assisted biological design is still deeply constrained by fundamental scientific limitations. While AI can optimize the design of biomolecules (including potential toxins), it does not replace the need for rigorous experimental validation. Designing a functional, novel virus or significantly enhancing a pathogen’s virulence remains far beyond AI’s current capabilities.
A key finding of the report is that AI-enabled biological tools are strongest in the design phase of synthetic biology, but they do little to eliminate the physical bottlenecks involved in building and testing. In other words, the idea that AI can generate a pandemic-ready pathogen with the push of a button is pure science fiction. The data simply don’t currently exist to train models to predict viral transmissibility and pathogenicity with accuracy.
Moreover, AI’s role in bioengineering remains deeply dependent on human expertise. The notion that AI will enable amateurs or bad actors to develop sophisticated biological threats is highly speculative. Even the most advanced AI-driven drug discovery platforms still require teams of experts to interpret outputs, refine hypotheses, and conduct extensive laboratory work.
Where AI Does Matter: Biosecurity and Pandemic Defense
The report makes another key point that often gets overlooked in fear-mongering headlines: AI’s real impact on biosecurity is in our ability to respond to biological threats, both natural and engineered. AI is already revolutionizing biosurveillance, enabling early detection of emerging pathogens and accelerating countermeasure development.
During COVID-19, AI-assisted drug discovery and protein modeling played a role in developing treatments and vaccines. Future iterations of these technologies could allow us to design vaccine candidates within days of identifying a novel virus, rather than months. AI-driven biosurveillance networks, powered by machine learning, can detect unusual patterns in pathogen evolution, giving us a head start in mitigating outbreaks before they spiral out of control.
One of the report’s recommendations is that the U.S. should continue investing in AI-driven biosurveillance and medical countermeasure development. AI is not just a threat; it is one of our best defenses against the next pandemic.
The Narrative We Choose Matters
Doomsday narratives are easy to sell, especially when it comes to AI. The idea of an AI-generated supervirus is cinematic, tapping into deep-seated fears of technological overreach. But as someone who has spent years at the intersection of AI, biotech, and policy, I see a different story unfolding, one of opportunity, responsibility, and human ingenuity.
Yes, AI presents risks that must be addressed through careful oversight and governance. But if we allow fear to dominate the discourse, we risk missing the bigger picture: AI is a force multiplier for defense, not just for threat creation. The most immediate challenge is not preventing AI from designing the next pandemic; it’s ensuring that AI-driven innovations in synthetic biology are harnessed for the public good.
The Age of AI in the Life Sciences report is a call for nuance in an era of extremes. We can acknowledge the biosecurity risks while also championing the vast potential AI holds for human health. The future of AI in biotechnology will be determined not by fear, but by the choices we make today in how we govern, develop, and apply these transformative tools.
And if we get it right? AI won’t be the end of us. It will be the thing that helps us thrive.
Cheers,
-Titus