Can AI create deadly biological weapons?

In the summer of 1990, the Japanese capital, Tokyo, witnessed a series of strange incidents: trucks sprayed a yellow liquid at highly sensitive locations, including American military bases, the international airport, and even the Imperial Palace. These attacks were the opening move of a cult called Aum Shinrikyo, which sought to establish a new world order based on its religious beliefs; the biological attack was only the first step on that path.

Five years after this incident, the group carried out a horrific chemical attack using sarin gas on the Tokyo subway, killing 13 people and injuring thousands.

As later emerged, Aum Shinrikyo had intended its 1990 attacks as a deadly biological strike: the yellow spray was supposed to contain botulinum toxin, one of the world's deadliest biological toxins. But the effort failed, in part because the cult's members lacked scientific knowledge. They could not distinguish between the bacterium that produces the toxin and the toxin itself, a critical mistake that undermined the attack's effectiveness.

These historical events raise an unsettling question today: what might have happened if Aum Shinrikyo had had access to the advanced artificial intelligence tools we use now, such as ChatGPT? It might well have developed its biological weapons far faster and more accurately.

How Do AI Models Amplify the Bioweapons Threat?

This September, OpenAI launched its new o1 series of models, which come in two versions: o1-preview and o1-mini. These models offer unprecedented capabilities thanks to their reliance on reinforcement learning, which enables them to reason through problems, solve complex math, and answer scientific research questions. This is a significant step in the effort to build general artificial intelligence that thinks in a way closer to human thinking.

The ability of these models to reason step by step greatly improves their performance, but it also opens the door to new risks, such as providing advice on illegal activities, producing stereotyped responses, or succumbing to known jailbreaks.

“The new models carry a ‘medium risk’ for chemical, biological, radiological, and nuclear (CBRN) threats,” the company said in its o1 System Card, a report detailing the steps it took to assess potential risks before launching the models. That is the highest risk rating OpenAI has given any of its models to date, underscoring the seriousness of the new capabilities they offer.

The company explained that the two models were able to help bioengineers develop practical plans for reproducing harmful biological agents. This means that the model can provide the essential information that experts need to carry out biological attacks, from identifying the target biological agent to choosing the best methods for delivering it. While implementing these plans requires practical expertise that the new models cannot provide, their ability to generate such plans represents a worrying development in the field of biosecurity.

This development has raised widespread concern in the scientific and security communities, and AI experts have warned that malicious actors could exploit these models to carry out attacks, posing a real threat to global security.

“If OpenAI’s models do indeed exceed the medium CBRN risk level as reported, it would underscore the need for strict laws and regulations, such as California’s AI bill (SB 1047), to regulate this field and protect society from potential risks,” said Yoshua Bengio, a professor of computer science at the University of Montreal and one of the godfathers of AI.


SB 1047 aims to require the largest AI developers to do what they have repeatedly pledged to do: conduct basic safety tests on their most powerful AI models and take safeguards to prevent them from being misused or getting out of control.

Yoshua Bengio stressed the importance of such laws in light of the rapid advancement of artificial intelligence, warning that the lack of protective restrictions could lead to the exploitation of this technology for harmful purposes.

An alarming study:

Researchers from the Massachusetts Institute of Technology revealed, in a study published in the journal Science, that artificial intelligence could facilitate the development of biological weapons. The researchers asked a group of students to use AI models to design a virus capable of starting a global pandemic, and within just one hour the models suggested four potential pandemic pathogens.

In some cases, the models provided detailed plans for creating deadly viruses: they suggested specific genetic mutations to increase a virus's ability to spread, and described in detail the steps needed to produce the virus in a laboratory, including identifying companies that could supply the necessary materials.

But what most worried the researchers was that the AI models provided detailed instructions on how to overcome the obstacles a non-specialist in this field might face.

This study demonstrated that AI can be a powerful tool in the hands of malicious actors, since its capabilities allow anyone, even someone with no scientific background, to acquire the knowledge needed to design biological weapons. This scenario has raised widespread concern among policymakers about the growing security threats posed by the intersection of AI and biology.

Future threats from automation and biological design:

The impact of AI extends beyond access to information into automation and biological design. As AI tools advance, almost anyone will be able to design new biological molecules and develop genetically modified organisms. This means the development of biological weapons will become easier and faster than ever before, and harder to detect and prevent.

How can the risks of the intersection of artificial intelligence and biology be contained?

The intersection of AI and biology poses a new and serious challenge to global biosecurity, requiring work to strengthen biological defenses and develop secure AI systems.

To reduce the risk of misuse of biotechnology, control over the supply chain of genetic materials must be strengthened, especially around gene synthesis: the artificial creation of new gene sequences from scratch, or the modification of existing ones; in effect, genes on demand.

Gene synthesis is a double-edged sword: it can lead to tremendous medical breakthroughs, but it also carries significant risks. Advances in gene synthesis technologies have made it possible to create genetically modified viruses and bacteria with new and dangerous properties, increasing the risk of biological attacks, whether by states or by terrorist groups.

However, many companies offering gene synthesis services do not adequately screen incoming orders, making it easy to obtain the materials needed to produce dangerous biological agents. The development of AI tools compounds the problem: they can help unqualified individuals understand the complex scientific details of gene synthesis and use them for malicious purposes.
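To make the idea of order screening concrete, here is a minimal sketch in Python of the kind of automated check a synthesis provider could run before filling an order: compare the ordered sequence against a curated list of sequences of concern using k-mer overlap. Everything here is hypothetical; the SEQUENCES_OF_CONCERN table, the flag_order function, and the thresholds are illustrative placeholders, and real biosecurity screening relies on curated regulated-pathogen databases and alignment tools such as BLAST rather than a toy substring filter.

```python
# Illustrative sketch of gene-synthesis order screening (hypothetical, not a real tool).

def kmers(seq: str, k: int = 20) -> set:
    """Split a DNA sequence into its overlapping k-mers."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Hypothetical stand-ins; a real system would query a curated pathogen database.
SEQUENCES_OF_CONCERN = {
    "example_flagged_gene": "ATGGCTAAGGCTGCTAAAGGTGCTGCTAAAGCTGGT",
}

def flag_order(order_seq: str, k: int = 20, threshold: float = 0.1) -> list:
    """Return names of flagged sequences whose k-mer overlap with the order exceeds the threshold."""
    order_kmers = kmers(order_seq, k)
    hits = []
    for name, ref_seq in SEQUENCES_OF_CONCERN.items():
        ref_kmers = kmers(ref_seq, k)
        if not ref_kmers:
            continue
        overlap = len(order_kmers & ref_kmers) / len(ref_kmers)
        if overlap >= threshold:
            hits.append(name)
    return hits

# Usage: orders matching a flagged sequence are routed to human review.
if flag_order("ATGGCTAAGGCTGCTAAAGGTGCTGCTAAAGCTGGTTAA"):
    print("Order flagged for manual biosecurity review")
```

In practice, screening also involves verifying customer identity and aggregating orders that are split across providers; the point of the sketch is only that automatically checking every order is technically straightforward.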

The MIT study mentioned above showed that current AI models readily point out these screening gaps and can give instructions on how to exploit such vulnerabilities in supply-chain security.

In addition to strengthening general biosecurity measures, we also need to focus specifically on the risks posed by AI, particularly large language models. These models not only ease access to harmful information but may also contribute to the development of dangerous biological technologies in unexpected ways, so comprehensive strategies must be put in place to mitigate these risks.

These strategies should include developing new criteria for assessing the risks associated with large language models, and developing tools to detect malicious uses of AI.
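As a toy illustration of what such detection tools might look like at the very simplest level, the Python sketch below flags prompts in which several dual-use terms co-occur. The RISK_TERMS list, risk_score, and should_review are hypothetical names invented for this sketch; production safety systems use trained classifiers rather than keyword heuristics.

```python
import re

# Toy heuristic for flagging potentially dual-use prompts (illustrative only;
# real moderation pipelines use trained classifiers, not keyword lists).
RISK_TERMS = [
    r"\bsynthes(?:is|ize)\b",
    r"\bpathogen\b",
    r"\btoxin\b",
    r"\baerosoli[sz]e\b",
]

def risk_score(prompt: str) -> float:
    """Fraction of risk patterns that appear in the prompt."""
    text = prompt.lower()
    hits = sum(1 for pattern in RISK_TERMS if re.search(pattern, text))
    return hits / len(RISK_TERMS)

def should_review(prompt: str, threshold: float = 0.5) -> bool:
    """Route a prompt to human review when enough risk terms co-occur."""
    return risk_score(prompt) >= threshold

# Usage: a prompt combining several dual-use terms crosses the review threshold.
print(should_review("how to synthesize and aerosolize a toxin"))  # True
```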

Therefore, the necessary preventive measures to contain the risks of the intersection of AI and biology currently include:

  • Mandatory screening of gene synthesis orders: There should be a global system for screening all synthesis requests, regardless of company size or geographic location.
  • Update regulations: Existing regulations and laws need to be updated to cover recent developments in biology and artificial intelligence.
  • International cooperation: Countries, companies, and international organizations must cooperate to develop common standards for biosecurity.
  • Developing defensive AI tools: AI itself can be used to develop tools to detect and track new biological threats.

Time is a critical factor here: AI models are evolving rapidly, so all parties must cooperate to develop effective solutions before the situation gets out of control.

Conclusion:

AI represents a great opportunity for scientific advancement, but it must be approached with extreme caution: without proper controls, it could have serious consequences for global biosecurity. The international community must therefore invest in biosecurity research and development, encourage cooperation across borders, and take urgent action to regulate the development and use of AI so that it is not turned to malicious purposes.

