Weaponized AI: Near Future Warfare

Terminator – Skynet’s Battle Mech

I’m on a panel with the topic “Weaponized AI and Future Warfare” at the upcoming LibertyCon, so I’ll write on that to organize my thoughts. This is Part 2: Near-Future AI in Warfare.

What’s war? Answer: an active struggle between competing authorities over who controls decisions. Where warrior bands once raided and pillaged adjoining villages, advances in both weapons and social organization led to specialized full-time warriors (armies) who would capture and control new territory by killing or driving away the forces that had held it.

When the source of wealth was hunting, fishing, and gathering wild produce, captured territory was typically occupied by the capturing tribe. As agriculture multiplied wealth and increased the food surplus that could support an urban population, ruling classes could have their armies seize territory from a competing city-state, then tax the population that worked the land without displacing it. The more organized the technology of farming, the more damaging warfare became; with the advent of industrial plant and total war, warfare destroyed the very wealth being fought over, making conquest too expensive to be self-sustaining.

That did not stop war, which continued as a strategic option: undertaken to defend access to raw materials, or as the desperation move of an authoritarian society needing to shore up public support, as in today’s Russia, where propaganda is the most important product of the war effort in Ukraine. The budgetary and personnel costs of such optional wars have to be kept down to avoid repercussions, because even authoritarian regimes know their citizens have more access to uncontrollable sources of information than they once did.

The invention of nuclear missiles has curbed total warfare. Since the absolute destruction of one or both sides costs far more than any conventional gain, a typical nation-state won’t break the taboo on using nuclear weapons; their use is most likely by stateless actors who have no infrastructure at risk. This limits warfare to less damaging, more controlled destruction.

There are struggles analogous to classic war going on today, on battlefields as varied as the propaganda wars between the West and Islamists and between NATO countries and Russia, and in cyberspace. Preparations for near-space battle are well advanced, with Chinese, American, and Russian hunter-killer satellite programs and secret kinetic and energy weapons ready to cripple satellite surveillance and communications networks. The “territory” being fought over might be space, communications, or computer systems, but the goal is the same: denying access to a rival authority while defending your own.

How is today’s AI being used in current weapons systems? While there is likely much that is secret, the outlines of what is already in place and what will soon be available can be inferred from leaks and DARPA’s research program of recent years.

Cruise missiles already use GPS and detailed ground maps to chart routes that hug the terrain, avoiding ground radar and defenses. Self-driving car technology currently relies on precompiled models of road landscapes, and similar self-driving tanks and airplanes carrying weapons are already available, though the public emphasis is on remote-controlled drone versions. This Russian tank is basically a remote-controlled drone. The Russians also claim to be planning for substantial remote-controlled and autonomous ground forces in the near future, 30% of their force by 2026, though given the Russian history of big promises and big failures one could be skeptical.
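To make that concrete: terrain-hugging route planning is, at its core, a graph search over a cost map. Here is a toy Python sketch of the idea (my own illustration, not actual guidance code; the grid of radar-exposure costs is assumed to come from terrain and threat databases):

```python
import heapq

def plan_route(grid, start, goal):
    """Dijkstra search over a terrain exposure-cost grid.

    grid[y][x] is the cost of flying through cell (x, y): low in valleys
    that mask the missile from radar, high over open, radar-covered ground.
    start and goal are (x, y) tuples.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start, [start])]   # (accumulated cost, cell, path so far)
    seen = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < cols and 0 <= ny < rows and (nx, ny) not in seen:
                heapq.heappush(frontier,
                               (cost + grid[ny][nx], (nx, ny), path + [(nx, ny)]))
    return None  # goal unreachable
```

Real systems add altitude, fuel, and threat-envelope constraints, but the shape of the problem, cheapest path through a hostile cost field, is the same.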

Phalanx CIWS test firing from the guided-missile cruiser USS Monterey (CG 61), Gulf of Oman, November 2008. US Navy photo.

Autonomous control of deadly weaponry is controversial, though it is no different in principle from cruise missiles or smart bombs, which, while launched at human command, make on-the-fly decisions about exactly where and whether to explode. The Phalanx CIWS automated air-defense system (see photo above) identifies and fires on enemy missiles automatically to defend Navy ships, at speeds far beyond human abilities. Such systems are uncontroversial since no civilian lives are likely to be at risk.
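The core of such a system is a tight detect-classify-engage loop running faster than any human crew could. A toy sketch of the kind of rule involved (the Track fields and thresholds are my invention for illustration, not the actual Phalanx criteria):

```python
from dataclasses import dataclass

@dataclass
class Track:
    range_m: float       # current distance to the radar contact, metres
    closing_ms: float    # closing speed, metres/second (positive = inbound)
    iff_friendly: bool   # identification-friend-or-foe response received

def should_engage(track: Track,
                  max_range_m: float = 4500.0,
                  min_closing_ms: float = 150.0) -> bool:
    """Toy CIWS-style engagement rule: fire only on fast inbound
    contacts inside the engagement envelope that fail IFF checks."""
    if track.iff_friendly:
        return False
    return track.range_m <= max_range_m and track.closing_ms >= min_closing_ms
```

The point of automating this is latency: an inbound sea-skimming missile leaves only seconds to react, too little for a human in the loop.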

DARPA is actively researching Lethal Autonomous Weapons Systems (LAWS). Such systems might be like Neal Asher’s reader guns: fixed or slow-moving sentries equipped to recognize unauthorized presences and cut them to pieces with automatic weapons fire. More mobile platforms might cruise the skies and attack any recognized enemy at will, robotically scouring terrain of enemy forces:

LAWS are expected to be drones that can make decisions on who should be killed without requiring any human interaction, and DARPA currently has two different projects in the works which could lead to these deadly machines becoming a reality.

The first, called Fast Lightweight Autonomy (FLA), will be tiny and able to buzz around inside buildings and complex urban areas at high speeds. The second, Collaborative Operations in Denied Environment (CODE), intends to create teams of drones capable of carrying out a strike mission even if contact with the humans controlling them is lost.
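None of CODE’s actual control logic is public, so this is pure speculation, but the comms-loss behavior the program description implies might have roughly this shape (the class, constants, and return values are all hypothetical):

```python
import time

class StrikeDrone:
    """Speculative sketch of a comms-loss fallback rule: continue only
    what was pre-authorized before the control link dropped, and hold
    on anything else."""

    LINK_TIMEOUT_S = 30.0

    def __init__(self, authorized_targets):
        self.authorized_targets = set(authorized_targets)
        self.last_contact = time.monotonic()

    def link_lost(self) -> bool:
        return time.monotonic() - self.last_contact > self.LINK_TIMEOUT_S

    def on_target_found(self, target_id: str) -> str:
        if not self.link_lost():
            return "request human confirmation"
        # Denied environment: act only on targets cleared before link loss.
        if target_id in self.authorized_targets:
            return "engage"
        return "hold and attempt to re-establish link"
```

The crux of the autonomy debate is what goes in that second branch: which actions a machine may take when no human is reachable.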

Jody Williams, an American political activist who won a Nobel Peace Prize in 1997 for her work banning anti-personnel landmines, has also been an outspoken activist against DARPA’s love affair with artificial intelligence and robots; she leads the Campaign to Stop Killer Robots.

“Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. As a result, fully autonomous weapons would not meet the requirements of the laws of war,” the campaign’s website reads.

The Army has already weaponized bomb-disposal robots, which leads many to believe that robots such as Boston Dynamics’ humanoid Atlas, allegedly developed for disaster relief, will be weaponized as well.

“The United States has not met anything in its military arsenal that it did not want to weaponize, so to say that you have this 6’6 robot who they are working feverishly to make more mobile, to not be still tethered to a cord, etc, etc- you’re going to tell me that they aren’t going to put machine guns on that guy and send him into urban warfare?” Williams told Motherboard last month. “I want to know how they can convince themselves that might be true- and when I’m in a real bad mood, I want to know how they can look you in the face and lie.”

While humans can be given rules of engagement and use their discretion to avoid collateral damage, humans are not perfect in either situational understanding or high-stakes firing decisions. Humans make many mistakes, especially when their own lives are on the line. Some facial- and object-recognition programs are already better than human, especially in noisy and high-stress environments, and quite soon a robotic soldier will be better at sparing civilians while taking out enemy forces. The use of such forces to keep order and suppress guerrilla fighters may be so effective that it spares a civilian population a much longer and more intrusive occupation; but like any destructive weapon, they could be used horrifically by a totalitarian regime to oppress civilians. There is no conceivable way to prevent their development and use by hostile forces, unlike atomic weapons, which require a large state-level effort; so the “good guys” will just have to suck it up and do the work, if only to prevent much worse governments from gaining a strategic advantage.

Most of these existing and proposed weapons systems are simple additions of AI recognition and targeting systems to conventional weapons platforms like tanks and planes. One can imagine battles between opposing forces of this type being largely similar to the human-controlled versions, if faster and more decisive.

But development of small, networked AI control systems removes the need to house human controllers within a weapons platform, freeing up the design space to allow very small mobile platforms. These could use swarm tactics: vast numbers of small, deadly drones acting as one force to attack and destroy much larger forces. Or Iain M. Banks’ knife missiles, small intelligent objects capable of rapid action when required to slice up a threat.
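The coordination half of this is already well-understood code: Craig Reynolds’ 1986 “boids” rules produce flock-like swarming from three local steering behaviors. A minimal NumPy sketch (the coefficients and ranges are arbitrary illustration values, not any fielded system):

```python
import numpy as np

def swarm_step(pos, vel, dt=0.1, see_radius=50.0, keep_clear=5.0):
    """One boids-style update over (n, 2) float arrays of drone positions
    and velocities: steer toward the local centroid (cohesion), match
    neighbors' headings (alignment), repel from anything too close
    (separation)."""
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist > 0) & (dist < see_radius)
        if not near.any():
            continue
        cohesion = pos[near].mean(axis=0) - pos[i]
        alignment = vel[near].mean(axis=0) - vel[i]
        crowded = near & (dist < keep_clear)
        separation = (pos[i] - pos[crowded].mean(axis=0)
                      if crowded.any() else np.zeros(2))
        vel[i] += dt * (0.01 * cohesion + 0.05 * alignment + 0.5 * separation)
    pos += vel * dt
    return pos, vel
```

The military appeal is that a swarm has no single point of failure: every drone runs the same local rules, so shooting down any one of them changes nothing.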

As with the development of smart bombs and missiles, such systems are both more effective and more capable of pinpoint destruction of enemy forces without harming people or structures nearby. Where today a drone strike may kill its intended target but also kill dozens of nearby civilians, an autonomous assassin drone can identify its target using facial recognition software and neatly take them out without harming a person standing nearby. I’m quite sure these drones already exist, though no one has admitted to them as yet. When both AI and power-storage technologies have advanced sufficiently, knife missiles and assassin drones no larger than dragonflies become a real possibility, and countervailing defense systems will require complete sensor coverage and defensive drones to intercept them.
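The discrimination step described here is, technically, ordinary biometric verification: compare a face embedding from the drone’s camera against an enrolled target embedding and refuse to act on anything ambiguous. A hedged sketch of the gate (the threshold, inputs, and bystander check are illustrative; a real system would need far more safeguards):

```python
import numpy as np

def clear_to_engage(seen_embedding, target_embedding,
                    bystander_count, match_threshold=0.92):
    """Toy discrimination gate: act only when the observed face embedding
    matches the authorized target AND no one else is nearby.
    Any ambiguity resolves to hold-fire, never to a guess."""
    a = seen_embedding / np.linalg.norm(seen_embedding)
    b = target_embedding / np.linalg.norm(target_embedding)
    similarity = float(a @ b)  # cosine similarity; 1.0 means identical
    return similarity >= match_threshold and bystander_count == 0
```

Note that the same check that enables the strike is what enforces the restraint: a failed match or a crowded frame means no shot.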

As these technologies advance, war becomes less destructive and more imaginable, with opposing forces unable to hide in civilian populations. The house-to-house searches of the Iraq war, so dangerous for US troops, would be replaced by omnipresent armed quadcopters picking off identified persons whenever they are exposed to the open air; I have a scene featuring such a drone in Nemo’s World. And nothing says states will have a monopoly on these weapons: stateless organizations like terrorist groups, and larger businesses, will be able to employ them as well. Imagine the narco-syndicates with assassin drones….

Battles between equally advanced AI-based forces would be faster and less destructive, with control of the airspace and radio communications critical. Drones and mechs might use laser communication to avoid radio jamming, in which case optical jamming — smoke and fog generators — might be employed.

So in the near term, AI-based weaponry, like today’s remote-controlled drones, will tend to amplify capabilities and reduce collateral damage while saving the lives of operators who remain safely far away. In the hazier far future, AIs will use strategy and tactics discovered through deep learning (as Google DeepMind’s winning Go program AlphaGo does) to outthink and outmaneuver human commanders. Future battles are likely to be faster and harder for humans to understand, either stalemated or decided quickly. The factors that now determine the outcomes of human-led battles, like logistics chains and the training of troops, may change as command and control are turned over to AI programs; centuries of conventional-warfare experience are re-examined with every major new weapons development, and the first side to discover better rules of fighting can win a war before its opponents have time to catch on.

An interesting, if now outdated, appraisal of future battlefields is here.

Next installment: Mil SF and the Real Future of Warfare
