The Connected Ideas Project
Tech, Policy, and Our Lives

Ep 08 - AI & Bio: Finding Balance Through Evidence

Exploring the Line Between Innovation and Safety with Insights from Congress, the National Security Commission, and Why We Need a New Approach to Tech Policy

Good morning my friends,

I've spent the last few weeks deep in the weeds of AI and biotechnology policy (that's a lie; I've spent far more than a few weeks, but humor me), and I've got to tell you: no matter how much time I spend thinking about this topic, the pace of progress never ceases to amaze me. Today, I want to share some perspectives from my written testimony last December for the Senate AI Insight Forum, along with the National Security Commission on Emerging Biotechnology's latest thinking on AIxBio. In fact, they've released a great series of white papers covering an introduction, emerging technologies, potential risks, and potential policy options. Grab your favorite caffeinated beverage (I'm on my third espresso as I write this), and let's dive in!

Side note: In my weekly use of these cool newfangled AI podcast generators, I love to see how they handle the material. This week, the hosts can talk about all kinds of interesting and deep topics but can't say AIxBio; they pronounce it "AIsBio." And no matter how much love I show these AI hosts, they never seem to remember me :-P What's a guy to do to get a little love around here? It's like listening to a Spanish friend with amnesia: "Welcome to our podcast on AIsBio. Who are you again…?" Nothing's perfect, but cheers to AI-bred food for thought!

The Perfect Storm We're Living Through

You know that feeling when multiple threads of your life suddenly weave together into a perfect tapestry? That's exactly what's happening right now in the AI-bio space. I've been working at this intersection since my grad school days, watching these fields gradually converge. But folks, the pace of change we're seeing now is unlike anything before. When I was writing my first papers in the space, many reviewers told me it wasn't worth the effort, that the whole direction was a dead end. The "I told you so" feels sooooo good. Not gonna lie.

Here's what makes this moment so crucial: we're approaching what could be a "ChatGPT moment" for biological design tools. Companies across the US, China, and elsewhere are racing to create breakthrough AI systems that could revolutionize how we engineer biology. The potential benefits are staggering - from accelerating drug development to enhancing food production to enabling a more sustainable bioeconomy.

But (and it's a big but), we need to be thoughtful about how we develop and deploy these technologies. In my recent Senate testimony, I emphasized that we can't let fear drive us to either extreme: excessive restriction or inadequate oversight. Both paths could lead to their own forms of catastrophe.

A Tale of Two AIs (And Why It Matters)

Let me break this down in a way that I wish someone had explained to me years ago. When we talk about AI in biology, we're really talking about two fundamentally different things:

  1. Large Language Models (LLMs) like ChatGPT: Think of these as incredibly sophisticated librarians. They can help compile and synthesize biological information that's already out there, but they don't create new biological knowledge per se. They're amazing at what they do, but they have clear limitations. At least today.

  2. Biological Design Tools (BDTs): These are the specialized AI systems trained specifically on biological data. They're like having a team of expert scientists working around the clock, making predictions, and guiding experimental design. This is where the real transformative potential - and need for careful oversight - lies.

Here's why this distinction matters: The risks and opportunities are completely different for each type. When I talk to policymakers, I often see their eyes light up with understanding when I explain this difference. It's not just academic - it has real implications for how we govern these technologies. One of my favorite people, Matt Walsh, wrote a great op-ed on the topic that I recommend you peruse. He also might give me some flak for linking to his LinkedIn, but it's my newsletter!

The Evidence-Based Approach We Need

Through my work with the National Security Commission on Emerging Biotechnology, I've been advocating for what I call a "refined approach" to oversight. The Carnegie Endowment recently released a perspective on an IF-THEN policy approach to AI that I find compelling. I don't personally share all of their concerns, but let me share what a refined approach means in practice:

1. Empirical Assessment

We need hard data, not just speculation. For example, when someone claims an AI model increases biosecurity risks, my first question is always: "Compared to what baseline?" Without measuring against existing risks, we're just guessing. What we need is a set of expert-defined triggers: IF we observe a specific, measurable signal, THEN we take a specific, pre-committed action. The crux, however, is actually knowing what you're measuring and tracking, as in the sketch below.
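To make that concrete, here's a minimal sketch in Python of what a pre-committed IF-THEN rule could look like. Everything here is hypothetical (the rule names, thresholds, and scores are mine for illustration; neither the Carnegie paper nor the Commission prescribes any code), and the point is simply that each trigger is defined against a measured baseline before any policy response kicks in:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical evaluation result for a model. Field names are illustrative,
# not drawn from any real benchmark or policy framework.
@dataclass
class EvalResult:
    model_id: str
    # Measured success rate of the model on a capability eval (0.0 to 1.0).
    capability_score: float
    # Success rate of a non-AI baseline (e.g., web search alone) on the
    # same task: the "compared to what?" question from the text.
    baseline_score: float

@dataclass
class IfThenRule:
    description: str
    # IF: a predicate over an evaluation result.
    condition: Callable[[EvalResult], bool]
    # THEN: the pre-committed policy response.
    action: str

# Illustrative rules. Note they trigger on *uplift over baseline*,
# not on raw capability, so existing risk is the yardstick.
RULES = [
    IfThenRule(
        description="Meaningful uplift over the non-AI baseline",
        condition=lambda r: r.capability_score - r.baseline_score > 0.20,
        action="Escalate to structured expert review before wider release",
    ),
    IfThenRule(
        description="No measurable uplift",
        condition=lambda r: r.capability_score <= r.baseline_score,
        action="Document the result; no new restriction triggered",
    ),
]

def evaluate(result: EvalResult) -> list[str]:
    """Return the actions triggered by a single evaluation result."""
    return [rule.action for rule in RULES if rule.condition(result)]

if __name__ == "__main__":
    # Toy numbers, purely for illustration.
    result = EvalResult("model-x", capability_score=0.55, baseline_score=0.50)
    for action in evaluate(result):
        print(action)
```

The design choice that matters is that the condition compares against a baseline rather than a raw score, which is exactly the "compared to what?" discipline an empirical approach demands.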

2. Violet Teaming

Violet Teaming is one of my favorite concepts (and trust me, I get some weird looks when I first mention it). Instead of just red-teaming for risks, we need diverse perspectives from:

  • Healthcare providers who understand patient needs

  • Economists who can assess opportunity costs

  • Ethicists who can help us navigate moral challenges

  • Social scientists who understand human behavior

  • And many more voices that often get left out of these discussions

3. Targeted Controls

Different tools, different risks, different rules. It's that simple. One-size-fits-all regulation in this space would be like using the same rules for bicycles and jets - it just doesn't make sense.

The Path Forward (And Why I'm Optimistic)

Despite all the challenges, I'm incredibly excited about where we're headed. Here's why:

  1. Infrastructure Development: We're seeing recommendations for new initiatives to establish dedicated computing infrastructure for AIxBio research. This isn't just about bigger computers - it's about creating safe spaces for innovation.

  2. Cloud Labs: The emergence of national networks of cloud labs for safe experimentation is a game-changer. It democratizes access while maintaining security, and we have a rich commercial ecosystem of cloud labs today.

  3. Better Standards: We're developing more sophisticated frameworks for assessing and publishing potentially sensitive algorithms. This is crucial for balancing openness with security.

The Risk We're Not Talking About Enough

Here's something that keeps me up at night (besides my excessive coffee consumption): What if fear causes us to unnecessarily restrict these technologies? This isn't just an abstract concern. We could see:

  • People dying because drug development is hindered

  • Communities going hungry because we can't advance food production fast enough

  • Our economy stagnating because we can't keep pace with sustainable biomanufacturing

This is why I'm pushing so hard for balanced, evidence-based approaches. We can't afford to get this wrong in either direction. I recently started a new role leading AI at a rare disease drug discovery company, and I'm more convinced of this risk than ever. There are patients with zero approved therapies to help them thrive in the face of their rare diseases, and that is a powerful argument for using AI for good!

What's Next? (And How You Can Get Involved)

The decisions we make in the next few years will shape the future of biotechnology. Here's what I'm focusing on:

  1. Continuing Work with Congress: Helping develop frameworks that promote innovation while ensuring safety.

  2. Research Initiatives: Leading projects to establish empirical baselines for assessing AI capabilities in biology.

  3. Public Engagement: Because these decisions affect everyone, not just scientists and policymakers.

Want to get involved? Here are some ways:

  • Subscribe to this newsletter (and share it with your friends!)

  • Follow the ongoing policy discussions (I'll keep sharing updates here!)

  • Participate in public comment periods on proposed regulations (there's an awesome opportunity right now with the US AI Safety Institute's request for input on the responsible development of chem-bio models)

  • Share your perspectives on balancing innovation and safety

  • Join or support organizations working on responsible innovation

A Personal Note

You know what's amazing about this moment? We're not just observers - we're all participants in shaping how these technologies will develop. When I started in this field, studying early applications of AI to biology in grad school, I never imagined we'd be where we are today.

Every time I work with members of Congress or the Commission, I'm reminded of how crucial it is to get this right, not just for us, but for future generations who will live with the consequences of our decisions.

I'd love to hear your thoughts on this. How do you think we should balance innovation and safety? What excites or concerns you about these developments? Drop me a line at newsletter@theinvivogroup.com.

Until next time, keep evolving!

Cheers,

-Titus


The podcast audio was AI-generated using Google’s NotebookLM
