A call for built-in biosecurity safeguards for generative AI tools

2025-04-28 09:49:26

Author: Church, George

Generative AI is changing biotechnology research, accelerating drug discovery, protein design and synthetic biology. It also enhances biomedical imaging, personalized medicine and laboratory automation, enabling faster and more efficient scientific advances. However, these breakthroughs have also raised biosecurity concerns, prompting policy and community discussions1,2,3,4.

The power of generative AI lies in its ability to generalize from known data to the unknown. Deep generative models can predict novel biological molecules that might not resemble existing genome sequences or proteins. This capability introduces dual-use risks and serious biosecurity threats — such models could potentially bypass the established safety screening mechanisms used by nucleic acid synthesis providers5, which presently rely on database matching to identify sequences of concern6. AI-driven tools could be misused to engineer pathogens, toxins or destabilizing biomolecules, and AI science agents could amplify risks by automating experimental designs7.
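The screening gap described above can be illustrated with a toy similarity check. Providers' screens match ordered sequences against a database of sequences of concern6; the sketch below uses simple k-mer overlap as a stand-in for that matching step (all sequences, names and thresholds here are invented for illustration — real screens rely on curated databases and alignment tools):

```python
# Toy sketch of database-matching biosecurity screening.
# A near-copy of a listed sequence is flagged, but a sequence-divergent
# design (the kind a generative model might produce) is not.

def kmers(seq, k=6):
    """Return the set of overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen(order, database, k=6, threshold=0.5):
    """Flag an order whose k-mer overlap with any database entry
    exceeds the threshold; return (flagged, matched_name)."""
    order_kmers = kmers(order, k)
    for name, ref in database.items():
        ref_kmers = kmers(ref, k)
        overlap = len(order_kmers & ref_kmers) / max(len(ref_kmers), 1)
        if overlap >= threshold:
            return True, name
    return False, None

# Invented "sequence of concern" database (illustrative only).
DATABASE = {"toxin_x": "ATGGCTAAACCGGGTTTACCAGATGCATTA"}

# A single-base variant of the listed sequence is caught...
print(screen("ATGGCTAAACCGGGTTTACCAGATGCATTC", DATABASE))  # (True, 'toxin_x')

# ...but a divergent sequence shares no k-mers and passes unflagged.
print(screen("TTGCGTACGATCCGTAGCAATGGCCTTAGC", DATABASE))  # (False, None)
```

The point of the sketch is the failure mode, not the method: any screen keyed to sequence similarity with known entries can, in principle, miss a functionally analogous molecule whose sequence falls outside the database's neighborhood.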

References

  1. Baker, D. & Church, G. Science 383, 349 (2024).

  2. Bloomfield, D. et al. Science 385, 831–833 (2024).

  3. Blau, W. et al. Proc. Natl Acad. Sci. USA 121, e2407886121 (2024).

  4. Bengio, Y. et al. Preprint at https://doi.org/10.48550/arXiv.2501.17805 (2025).

  5. Wittmann, B. J. et al. Preprint at bioRxiv https://doi.org/10.1101/2024.12.02.626439 (2024).

  6. Fast Track Action Committee on Synthetic Nucleic Acid Procurement Screening. Framework for Nucleic Acid Synthesis Screening (White House Office of Science and Technology Policy, 2024).

  7. Boiko, D. A., MacKnight, R., Kline, B. & Gomes, G. Nature 624, 570–578 (2023).

  8. Church, G. A synthetic biohazard non-proliferation proposal. Harvard Medical School https://arep.med.harvard.edu/SBP/Church_Biohazard04c.htm (2004).

  9. Nguyen, E. et al. Science 386, eado9336 (2024).

  10. Huang, K. et al. Preprint at https://doi.org/10.48550/arXiv.2404.18021 (2024).

  11. Zhang, Z. et al. Preprint at https://doi.org/10.1101/2024.10.23.619960 (2024).

  12. Rafailov, R. et al. Direct preference optimization: your language model is secretly a reward model. In NIPS’23: Proc. 37th International Conf. Neural Information Processing Systems (eds Oh, A.) 53728–53741 (Curran Associates, 2023).

  13. Liu, S. et al. Nat. Mach. Intell. 7, 181–194 (2025).

Acknowledgements

Certain tools and software are identified in this Correspondence to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the tools and software identified are necessarily the best available for the purpose.

Author information

Authors and Affiliations

  1. AI Lab, Princeton University, Princeton, NJ, USA

    Mengdi Wang & Zaixi Zhang

  2. Department of Computer Science, University of Central Florida, Orlando, FL, USA

    Amrit Singh Bedi

  3. Defense Advanced Research Projects Agency, Ballston, VA, USA

    Alvaro Velasquez & Stephanie Guerra

  4. National Institute of Standards and Technology, Gaithersburg, MD, USA

    Sheng Lin-Gibson

  5. Department of Pathology, School of Medicine, Stanford University, Stanford, CA, USA

    Le Cong & Yuanhao Qu

  6. Department of Computer Science, University of Maryland, College Park, MD, USA

    Souradip Chakraborty

  7. Iris Medicine, Palo Alto, CA, USA

    Megan Blewett

  8. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA

    Jian Ma & Eric Xing

  9. Mohamed bin Zayed University of AI, Abu Dhabi, United Arab Emirates

    Eric Xing

  10. Department of Genetics, Harvard Medical School, Harvard University, Boston, MA, USA

    George Church

Corresponding author

Correspondence to Mengdi Wang.

Ethics declarations

Competing interests

Z.Z., A.S.B., A.V., S.G., S.L.-G., S.C., M.B. and J.M. have no competing interests. E.X. has equity in GenBio AI. G.C. has biotechnology patents and equity in Lila.AI, DynoTx, Jura.bio, ShapeTx, GC-Tx, ArrivedAI, Nabla.bio, Manifold.bio and Plexresearch. M.W., L.C. and Y.Q. invented some of the technologies mentioned in this Correspondence, with patent applications filed by Princeton University and Stanford University. L.C. is scientific advisor to Acrobat Genomics and Arbor Biotechnologies.

About this article

Cite this article

Wang, M., Zhang, Z., Bedi, A.S. et al. A call for built-in biosecurity safeguards for generative AI tools. Nat Biotechnol (2025). https://doi.org/10.1038/s41587-025-02650-8

Abstract

Generative AI is transforming biotechnology research, speeding up drug discovery and protein design while enhancing biomedical imaging and personalized medicine. However, it raises significant biosecurity concerns: its capability to generate novel biological molecules could allow designs to bypass current safety screening mechanisms. This introduces risks of misuse in creating pathogens or toxins, necessitating discussions on policy and safeguards.