The writer is a science commentator
When it comes to using artificial intelligence to design new drugs, the rules are simple: therapeutic activity is rewarded while toxicity is penalised.
But what happens if the rule is flipped, so that toxicity is rewarded? Those same computational techniques, it turns out, can be repurposed to design potential biochemical weapons. AI-designed drugs now have a dark side: AI-designed toxins.
The unmasking of intelligent drug design as a dual-use technology — obvious in hindsight — was done by a team working at Collaborations Pharmaceuticals, a company in North Carolina. The company uses machine learning to identify drugs for rare and neglected diseases. Its scientists, invited to contribute to a conference on the impact of scientific developments on the Biological and Chemical Weapons Convention, wondered how easy it would be to make its molecule-generating model go rogue.
The AI was trained with a starting set of molecules, including pesticides and known environmental toxins, and left to calculate how to adapt them to become increasingly deadly. The resulting molecules were then scored for lethality. The outcome was chilling: within six hours, the model had sketched out 40,000 potential killer molecules, including VX, a banned nerve agent used to murder the half-brother of North Korean leader Kim Jong Un.
Filippa Lentzos, co-director of the Centre for Science and Security Studies at King’s College London, remembers the sense of shock that rippled through the audience as Sean Ekins, Collaborations Pharmaceuticals’ founder, presented his findings last autumn.
“It was a jaw-drop moment,” she says. “Everyone was thinking, ‘This is awful. What do we do now?’ The potential for misuse has always been a concern in the life sciences, but with AI that potential is on steroids.” Lentzos teamed up with Ekins and others to write up the results of the experiment, published this month in Nature Machine Intelligence.
The modified software also independently came up with other known chemical warfare agents, none of which had been included in the training data. Some compounds scored more highly for toxicity than any known biochemical weapon.
“By inverting the use of our machine-learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules,” the authors note in their paper, which withholds crucial details of the method for security reasons. Some virtual molecules showed few similarities to any existing toxins, suggesting entirely new classes of lethal biochemical weapons could be conjured up that circumvent current watch lists for known precursor chemicals.
The company has deleted its library of death and now plans to restrict the use of its technology. The authors recommend a hotline for reporting suspected misuse to the authorities, so that it can be investigated, and a code of conduct for anyone working in AI-focused drug discovery, akin to The Hague Ethical Guidelines, which promote responsible behaviour in the chemical sciences.
This kind of computer-generated drug discovery does not produce the molecules themselves but instead provides recipes specifying the chemicals needed to make them. Manufacture can be outsourced to other companies offering commercial synthesis, an industry that enjoys minimal regulation and oversight.
Lentzos thinks the level of technical knowledge in both coding and chemistry required for such wrongdoing puts it beyond the reach of amateurs and garage-tinkerers. But minting new poisons is certainly not beyond the capacity of state-sponsored groups and national militaries, as shown by the Soviet-era development of novichok, favoured by Russian assassins. “If there is malicious intent, then the capacity for doing bad things has increased exponentially,” she says.
All it takes is one nation prepared to flout international norms, perhaps one led by an autocrat dedicated to malevolence. It would not be surprising if such projects were already under way.
Source: Financial Times