
AI could be used to make the toxin ricin, but this can also be obtained from castor beans, found in many gardens
American Photo Archive/Alamy
Artificial intelligence promises to transform biology, allowing us to design better drugs, vaccines and even synthetic organisms for, say, eating waste plastic. But some fear it could also be used for darker purposes, to create bioweapons that wouldn’t be detected by conventional methods until it was too late. So, how worried should we be?
“AI advances are fuelling breakthroughs in biology and medicine,” says Eric Horvitz, chief scientific officer at Microsoft. “With new power comes responsibility for vigilance.”
His team has published a study looking at whether AI could design proteins that function like known dangerous proteins, yet differ enough in sequence that they wouldn’t be recognised as dangerous. The team didn’t reveal which proteins it attempted to redesign – parts of the study were withheld – but the list probably included toxins such as ricin, famously used in a 1978 assassination, and botulinum, the potent neurotoxin better known as Botox.
To make lots of a protein like botulinum, you need the recipe – the DNA that codes for it. When biologists want a specific piece of DNA, they usually order it from companies that specialise in making any desired piece.
Due to concerns that would-be bioterrorists could order the recipes for making bioweapons this way, some DNA-synthesis companies voluntarily screen orders to check if someone is trying to make something dangerous. Proteins are sequences of amino acids, and the screening checks whether the amino acid sequence matches any “sequences of concern” – that is, potential bioweapons.
But with AI, it is in theory possible to design a version of a protein that has a different amino acid sequence but still does the same thing. Horvitz and his colleagues attempted this with 72 potentially dangerous proteins and showed that screening methods often miss these alternative versions.
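The weakness the study probes can be illustrated with a toy sketch. This is a hypothetical, deliberately simplified model of order screening, not any company’s actual pipeline: real biosecurity screens use far more sophisticated sequence-comparison tools, and the amino acid sequences below are made up. The point is only that a screen based on similarity to known sequences can miss a heavily substituted variant that might still fold and function the same way.

```python
# Toy illustration of "sequence of concern" screening (hypothetical).
# Proteins are written as strings of one-letter amino acid codes.

def identity(seq_a: str, seq_b: str) -> float:
    """Fraction of matching positions between two sequences."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / max(len(seq_a), len(seq_b))

def flag_order(order: str, sequences_of_concern: list[str],
               threshold: float = 0.8) -> bool:
    """Flag a DNA-synthesis order if its protein closely matches
    any known sequence of concern."""
    return any(identity(order, soc) >= threshold
               for soc in sequences_of_concern)

# A made-up sequence of concern, plus two hypothetical orders:
concern_list = ["MKTLLVAAGLLAVSAC"]
exact_copy = "MKTLLVAAGLLAVSAC"    # identical to the listed sequence
ai_variant = "MRSLIVACGILAISAC"    # many substitutions, same length

print(flag_order(exact_copy, concern_list))  # True: caught by the screen
print(flag_order(ai_variant, concern_list))  # False: slips under the threshold
```

The variant shares only 10 of 16 positions with the listed sequence (62.5 per cent identity), so a similarity threshold of 80 per cent lets it through – a crude analogue of how the 72 redesigned proteins in the study often evaded screening.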
This isn’t as alarming as it sounds. Firstly, the team didn’t actually make the redesigned proteins, for obvious reasons. But in a separate study earlier this year, they tested redesigned versions of harmless proteins – and found that, by and large, they didn’t work.
Secondly, while there have been attempted bioterrorist attacks, albeit very few, there is little reason to think this is because of a failing of the voluntary screening system. There are already many ways to get around it without resorting to AI redesigns – for instance, ricin can be obtained from castor oil plants, found in many gardens. This study is the equivalent of warning that a bank could be robbed by some highly sophisticated Mission: Impossible-style plan, when in fact the vault door has been left wide open.
Last but not least, when state actors are excluded, no bioterrorist has ever managed to kill anyone using protein-based bioweapons. The Aum Shinrikyo cult in Japan tried to kill people with botulinum, but succeeded only with chemical agents. The ricin-laced letters sent to the White House didn’t kill anyone. Based on body counts, guns and explosives are wildly more dangerous than biotoxins.
So does that mean we stop worrying about AI-designed bioweapons? Not quite. While Horvitz’s studies looked only at proteins, it is viruses that pose the big threat – and AI is already being used to redesign entire viruses.
Last month, a team at Stanford University in California revealed the results of its efforts to redesign a virus that infects the bacterium E. coli. As with the redesigned proteins, the results were unimpressive – of the 302 AI-designed viruses that were made, just 16 could infect E. coli. But this is just the start.
When asked about AI-designed viruses, James Diggans of the DNA-synthesis firm Twist Bioscience, who is also a member of Horvitz’s team, said it is easier to detect DNA encoding viruses of concern than DNA encoding proteins of concern. “Synthesis screening operates better on more information rather than less. So at the genome scale, it’s incredibly informative.”
But not all DNA-making companies carry out this screening, and benchtop DNA synthesisers are becoming available. There is talk of designing AI tools that will refuse to create dangerous viruses or try to detect malevolent intent, but people have found many ways to get around safeguards meant, for instance, to stop AIs providing bomb-making instructions.
To be clear, history suggests the risk from “wild” viruses is way higher than the risk from bioterrorism. Despite what the current US administration claims, the evidence suggests that SARS-CoV-2 emerged when a bat virus jumped to other wild animals, and then to people at a market – no lab involved.
What’s more, would-be bioterrorists could do an incredible amount of damage simply by releasing a known virus, such as smallpox. With the many gaping holes in bioweapon control efforts, there is little need to resort to AI trickery to get around them.
For all these reasons, the risk of an AI-designed virus being unleashed anytime soon is probably near zero. But this risk is going to grow as the various technologies continue to advance – and the covid-19 pandemic showed just how much havoc a new virus can create, even when it isn’t especially deadly. Increasingly, there will be reason to worry.
Article source: https://www.newscientist.com/article/2498478-should-we-worry-ai-will-create-deadly-bioweapons-not-yet-but-one-day/?utm_campaign=RSS%7CNSNS&utm_source=NSNS&utm_medium=RSS&utm_content=technology