AI designs for dangerous DNA can slip past biosecurity measures, study shows

A new study found that artificial intelligence could design DNA for all kinds of dangerous proteins, and do it in such a way that DNA manufacturers' biosecurity screening measures would not reliably catch them.
Malte Mueller/fStop/Getty Images

Major biotech companies that churn out made-to-order DNA for scientists have protections in place to keep dangerous biological material out of the hands of would-be evildoers. They screen their orders to catch anyone trying to buy, say, smallpox or anthrax genes.

But now, a new study in the journal Science has demonstrated how AI could be used to easily circumvent those biosafety processes.

A team of AI researchers found that protein-design tools could be used to "paraphrase" the DNA codes of toxic proteins, "re-writing them in ways that could preserve their structure, and potentially their function," says Eric Horvitz, Microsoft's chief scientific officer.

The computer scientists used an AI program to generate DNA codes for more than 75,000 variants of hazardous proteins – and the firewalls used by DNA manufacturers weren't consistently able to catch them.

"To our concern," says Horvitz, "these reformulated sequences slipped past the biosecurity screening systems used worldwide by DNA synthesis companies to flag dangerous orders."

A fix was quickly written and added to the biosecurity screening software. But it's not perfect: it still failed to detect a small fraction of the variants.

And it's just the latest episode showing how AI is revving up long-standing concerns about the potential misuse of powerful biological tools.

The perils of open science 

"AI-powered protein design is one of the most exciting frontiers in science. We're already seeing advances in medicine and public health," says Horvitz. "Yet like many powerful technologies, these same tools can often be misused."

For years, biologists have worried that their ever-improving DNA tools might be harnessed to design potent biothreats, like more virulent viruses or easy-to-spread toxins. They've even debated whether it's really wise to openly publish certain experimental results, even though open discussion and independent replication have been the lifeblood of science.

The researchers and the journal that published this new study decided to hold back some of their information, and will restrict who gets access to their data and software. They enlisted a third party, a non-profit called the International Biosecurity and Biosafety Initiative for Science, to make decisions about who has a legitimate need to know.

"This is the first time such a model has been employed to manage risks of sharing hazardous information in a scientific publication," says Horvitz.

Scientists who have long been worried about future biosecurity threats praised this work.

"My overall reaction was favorable," says Arturo Casadevall, a microbiologist and immunologist at Johns Hopkins University. "Here we have a system in which we are identifying vulnerabilities. And what you're seeing is an attempt to correct the known vulnerabilities."

The trouble is, says Casadevall, "what vulnerabilities don't we know about that will require future corrections?"

He notes that this team did not do any lab work to actually generate any of the AI-designed proteins, to see if they would truly mimic the activity of the original biological threats.

Such work would be an important reality check as society grapples with this kind of emerging threat from AI, says Casadevall, but would be tricky to do, as it might be precluded by international treaties prohibiting the development of biological weapons.

Getting ahead of an AI "freight train"

This isn't the first time scientists have explored the potential for malevolent use of AI in a biological setting.

For example, a few years ago, another team wondered if AI could be used to generate novel molecules that would have the same properties as nerve agents. In less than six hours, the AI tool dutifully concocted 40,000 molecules that met the requested criteria.

It not only came up with known chemical warfare agents like the notorious one called VX, but also designed many unknown molecules that looked plausible and were predicted to be more toxic. "We had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules," the researchers wrote.

That team also didn't openly publish the chemical structures that the AI tool had devised, or create them in the lab, "because they thought they were way too dangerous," points out David Relman, a researcher at Stanford University. "They simply said, we're telling you all about this as a warning."

Relman thinks this latest study, showing how AI could be used to evade security screening and finding a way to address that, is laudable. At the same time, he says, it just illustrates that there's an enormous problem brewing.

"I think it leaves us dangling and wondering, 'Well, what exactly are we supposed to do?'" he says. "How do we get ahead of a freight train that is just evermore accelerating and racing down the tracks, in danger of careening off the tracks?"

Despite concerns like these, some biosecurity experts see reasons to be reassured.

Twist Bioscience is a major provider of made-to-order DNA, and in the past ten years, it's had to refer orders to law enforcement fewer than five times, says James Diggans, the head of policy and biosecurity at Twist Bioscience and chair of the board of directors at the International Gene Synthesis Consortium, an industry group.

"This is an incredibly rare thing," he says. "In the cybersecurity world, you have a host of actors that are trying to access systems. That is not the case in biotech. The real number of people who are really trying to create misuse may be very close to zero. And so I think these systems are an important bulwark against that, but we should all find comfort in the fact that this is not a common scenario."

Copyright 2025 NPR

Nell Greenfieldboyce is an NPR science correspondent.