By ABC News
The UN secretary-general has called for "global guardrails" on AI.
As the United Nations General Assembly continues discussions this week about regulating weapons that use artificial intelligence, some experts have raised concerns about the rapid advancement of autonomous military technology and its impact on modern warfare.
Robert Bishop, vice chancellor and dean of engineering at Texas A&M University, addressed the assembly about the urgent need for regulation of these "killer robots."
"The technology has really gotten ahead of us," Bishop told ABC News. "The application of the technology has moved faster than our policies and procedures."
At a recent UN summit, Secretary-General António Guterres warned that the growing technology risked deepening geopolitical divides and called for urgent "global guardrails."
Photo: Robert Bishop urges ethical approach to AI warfare at UN meeting. (ABC News)
Bishop told ABC News that AI technology could actually help reduce conflict if properly regulated.
"We can use this technology to reduce conflict through less-than-lethal action," he explained. "We can also utilize the technology to help inform our policymakers so that they make better decisions."
When asked about the risks of leaving these weapons unregulated, Bishop said there were immediate concerns beyond science-fiction scenarios. "The worst-case scenario in the near term is the gap that countries face," he told ABC News, explaining that this gap could lead to "cybersecurity breaches, data breaches, and infrastructure breaches" by bad actors.
Photo: UN debates rules for AI weapons as experts warn of risks. (Department of Defense)
Bishop explained the reluctance of some countries to embrace regulation in this matter. "We're always cognizant of the fact that folks who may wish to do us harm are ahead of us," he said, citing concerns about hypersonic vehicles capable of carrying nuclear warheads as an example of why nations are cautious about limiting military AI development.
Bishop, however, suggested a different approach. He said Texas A&M University is creating a nonprofit organization to develop ethical approaches to AI use, working closely with industry, the Department of Defense and other academic institutions, and he believes AI could be used to find less violent solutions to conflicts.
"If you know that a group of folks wants to do you harm, can you disrupt their ability to do that without kinetic weapons, without drones? The answer is yes," Bishop said. He proposed using AI to analyze data and present decision-makers with less-violent options for resolving conflicts.
Last week, representatives of 96 countries gathered at the United Nations for the first international meeting focused specifically on autonomous weapons systems.
The meeting, mandated by a General Assembly resolution from last December, brought together representatives from governments, international organizations, civil society, academia, and industry to address the growing concerns about AI-powered weapons.
Guterres is pushing for clear rules on AI weapons; he and the president of the International Committee of the Red Cross have both called for a legally binding agreement by 2026. The recent New York meeting expanded discussions beyond traditional military concerns to include human rights law, criminal law, and broader ethical considerations.
"We must prevent a world of AI 'haves' and 'have-nots,'" Guterres said at AI Action Summit 2025. "We must all work together so that artificial can bridge the gap between developed and developing countries – not widen it. It must accelerate sustainable development – not entrench inequalities."