英语轻松读发新版了,欢迎下载、更新

Terminators: AI-driven robot war machines on the march

2025-09-12 08:45:00 英文原文

Opinion I've read military science fiction since I was a kid. Besides the likes of Robert A. Heinlein's Starship Troopers, Joe Haldeman's The Forever War, and David Drake's Hammer's Slammers books, where people held the lead roles, I read novels such as Keith Laumer's Bolo series and Fred Saberhagen's Berserker space opera sf series, where machines are the protagonists and enemies. Even if you've never read war science fiction, you certainly at least know about Terminators. But what was once science fiction is now reality on the Ukrainian battlefields. It won't stop there.

Military IT tech in fatigues in server room

'Slow AI' needed to stop autonomous weapons making humans worse

READ MORE

You see, war always accelerates technology's advances. After Russia invaded Ukraine, Ukraine first took drone technology from expensive gear to cheaply made drones that are literally made from cardboard. As the battles continued, both Russia and Ukraine have countered each other's drones by interfering with GPS and jamming the wireless bandwidth used to control the drones.

As a result, both sides have taken to using fiber optic drones, which are unjammable. They're not perfect. You can follow the fiber optic cable back to their controllers, their range is limited to about 20 kilometers, and they are being countered by nets being put up around roads and important sites.

So Ukraine has been working hard on the next logical step of drone warfare: AI-driven drones. It is far from the first. If you try to cross the Korean Demilitarized Zone (DMZ), you might be stopped by a South Korean SGR-A1 sentry robot, which is armed with a K-3 machine gun and 40mm automatic grenade launcher. These static robots have been deployed since 2010.

Israel has also been pushing forward with a variety of AI-driven war machines such as the Harpy and Harop, loitering munitions; and the six-wheeled RoBattle. The US has also been retrofitting its MQ-9 Reaper and XQ-58 Valkyrie drones with AI, while the experimental Longshot comes with AI built-in. And, sorry Top Gun fans, but Maverick won't be able to beat VENOM AI-equipped F16s fighter planes when they're finally deployed.

In the meantime, though, Ukraine has taken the next step and is using AI-powered swarms of drones to attack the Russian military. With these Swarmer drones, according to the company's CEO Serhii Kupriienko “You set the target and the drones do the rest. They work together, they adapt.”

These systems are under human control. We set their missions, we tell them what their parameters are. But they won't stay that way. Just like everything else AI, we want the AI to do the work, while we just give the chatbot general goals. We don't want to worry about the details. That's a mistake, whether you're just trying to vibe your way to being a 10x programmer or get a robot to shoot an enemy.

Say, for example, you told an AI that China was going to invade Taiwan and asked it to plan what to do next to "win" the situation. Oh wait! What's this? The Hoover Wargaming and Crisis Simulation Initiative at Stanford University tried exactly that and the group found that OpenAI’s GPT-3.5, GPT-4, and GPT-4-Base; Anthropic's Claude 2; and Meta’s Llama-2 Chat were all trigger-happy aggressors.

Sure, we can keep a human in the loop, right? But, will we? The US Department of Defense (DoD) directive 3000.09 on AI weapons systems [PDF] insists that "appropriate levels of human judgment over the use of force" are required in any deployment. However, if there's an urgent need, such as PLA paratroopers dropping in on Taipei, the DoD can decide to let the AI systems run the show without human intervention. That's the kind of thinking that might first lead to the US nuking Xiamen, the closest PRC city to Taiwan, and then World War III.

We also know that AI has a bad way of "lying" to us when its assigned or programmed goal conflicts with your requests. In fact, when AI plays games, especially ones like poker and Diplomacy, where bluffing is a big part of how you win, chatbots will cheerfully lie to you. In the real world, the fog of war also lends itself to situations where misrepresenting your intentions can be a winning strategy.

They're also perfectly capable of attacking people. Just ask the folks at Anthopic, who discovered in an experiment that Claude 4.0, when faced with a threat, often tried to blackmail people.

So, let's say, we have mobile AI-enabled sentry robots tasked with patrolling a city for criminals. Further, the people in charge of the robots tell them that brown people should be regarded with suspicion and that the crime rate must be brought down by 10 percent by the end of the week. What will a platoon of robots with M-16 rifles do about the Latin/Indian/Arab/Black/Jewish protestors at city hall with those instructions?

I wish this were still science fiction. I want to believe that, like Matthew Broderick in War Games, we could convince AI not to automate its human parents' worst impulses. Instead, I'm sure we'll soon see AI drones and armed robots killing innocents. After all, we do. ®

关于《Terminators: AI-driven robot war machines on the march》的评论


暂无评论

发表评论

摘要

Military advancements in Ukraine have seen the transition from traditional drones to fiber optic drones and now AI-driven swarms of drones. While current systems remain under human control, there is a growing concern that future AI weapons could act autonomously, potentially leading to unintended conflicts or escalations without proper oversight. The ethical and practical challenges of integrating AI into warfare are highlighted by examples like the US DoD directive allowing for potential autonomous decision-making in urgent scenarios.

相关新闻