Autonomous weapons systems first appeared in science fiction movies, but in recent years, they have entered engineering laboratories and the battlefield. These are weapons capable of performing advanced military functions with little or no human oversight.
In addition to offering significant strategic and tactical advantages on the battlefield, autonomous weapons systems may be morally preferable to fielding human combatants. Experts claim that substituting robots for humans in high-stress combat situations may have ethical benefits. They contend that under extreme stress, the neural circuits responsible for conscious self-control can malfunction, resulting in sexual assaults and other crimes that soldiers would otherwise be less likely to commit.
For a given mission, fewer warfighters are required, and each is more effective because robots act as a force multiplier. Autonomous weapons systems can also act more quickly than humans and can continue to strike even when communication links are broken. Additionally, they enable combat to reach previously inaccessible regions and lower casualties by removing human warfighters from hazardous missions such as explosive ordnance disposal.
Future autonomous robots may behave more “humanely” in combat because they won’t need to be programmed with a self-preservation instinct, potentially doing away with the need for a “shoot first, ask questions later” mentality. Autonomous weapons systems will be able to process much more incoming sensory information than humans without discarding or distorting it to fit preconceived notions. These systems will also be able to make decisions without being influenced by emotions like fear or hysteria.
There are six areas where advances in autonomy would significantly benefit current systems: perception, planning, learning, human-robot interaction (HRI), natural language, and multi-agent coordination.
Advantages
According to experts, the potential advantages of autonomous weapon systems typically fall into three groups: operational advantages that increase military effectiveness, economic advantages related to resource allocation efficiency, and human advantages resulting from the potential use of such systems to lessen casualties in combat situations.
1. Operational
Autonomous weapon systems have several operational advantages, including speed (they can make decisions faster than human operators) and agility (they do not need to communicate constantly with human operators to function and can therefore adapt more quickly). Because human operators have cognitive and physical limitations, autonomous systems might also be more accurate and resilient than those operators.
Autonomous weapon systems may be more suitable than conventional weapon systems to operate in remote areas because they do not require communication with a human operator. Additionally, autonomous weapon systems might make it simpler for the military to penetrate enemy territory. Fewer casualties may result from such long-distance operations, and supply and communication networks may not be as necessary.
2. Economic
Autonomous weapon systems may have economic advantages, lowering the cost of military operations in various ways despite high initial research and development costs. For instance, with such systems, fewer service members might be required to complete the same task, enabling militaries to reallocate personnel and other resources away from “dull, dirty, or dangerous” jobs and toward more complex tasks. Such systems can also reduce personnel costs: a small armed robot cost US$230,000 in 2013, compared with the US Department of Defense’s roughly $850,000 annual cost to equip and maintain one soldier in Afghanistan.
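To put those two figures on a common footing, here is a minimal back-of-the-envelope sketch in Python. The five-year service life is a hypothetical assumption introduced only for illustration, and the comparison ignores maintenance, operator, and support costs:

```python
# Back-of-the-envelope comparison of the cost figures quoted above.
# SERVICE_LIFE_YEARS is a hypothetical assumption, not a sourced figure.

ROBOT_UNIT_COST = 230_000      # US$, small armed robot (2013 figure)
SOLDIER_ANNUAL_COST = 850_000  # US$/year to equip and maintain one soldier

SERVICE_LIFE_YEARS = 5         # assumed robot service life (illustrative)

robot_annualized = ROBOT_UNIT_COST / SERVICE_LIFE_YEARS
ratio = SOLDIER_ANNUAL_COST / robot_annualized

print(f"Annualized robot cost:  ${robot_annualized:,.0f}")
print(f"Annual soldier cost:    ${SOLDIER_ANNUAL_COST:,.0f}")
print(f"Soldier-to-robot ratio: {ratio:.1f}x")
```

On these assumptions, one soldier’s annual cost would fund roughly eighteen robot-years, though the real trade-off depends heavily on the omitted support costs.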
3. Human
Autonomous weapon systems may benefit humans because their use may result in fewer casualties. For instance, militaries may use such systems to replace manually operated fighter aircraft, removing the risk to the pilots. Some commentators use historical instances of mistakes and atrocities in warfare to support their claim that autonomous weapon systems could decrease civilian casualties. Such systems wouldn’t attack out of retaliation, fear, or anger and wouldn’t make mistakes due to fatigue or stress.
Disadvantages
Critics contend that these weapons should be restricted, if not banned outright, for several moral and legal reasons. They think that while AI has the potential to help humanity, its reputation might be damaged if a military AI arms race develops, and a public backlash could limit the future advantages of AI.
Additionally, they draw attention to the lack of scientific evidence that robots will ever possess “the functionality required for accurate target identification, situational awareness, or decisions regarding the proportional use of force.” Without those capabilities, critics argue, such systems risk causing substantial collateral damage, and giving AI control over targeting would, in their view, almost certainly lead to unacceptable civilian deaths.
The issue of accountability when autonomous weapon systems are used is another significant worry. International humanitarian law is fundamentally predicated on holding someone accountable for the deaths of civilians. As a result, no weapon or other tool of war should be used in combat if doing so makes it impossible to determine who is to blame for the deaths it causes.
Because AI-equipped (so-called smart) machines make decisions independently, it is challenging to tell whether a flawed decision results from programming errors or from the machine’s own autonomous deliberations. When a person decides to use force against a target, there is a clear chain of responsibility, starting with the person who “pulled the trigger” and ending with the commander who gave the order. There is no such clarity in the case of autonomous weapons systems.
Although autonomous systems can respond flexibly to specific combat situations, unforeseen problems can emerge, such as:
- Malfunctions and bugs: As a system becomes more complex, the sheer number of mechanical parts and lines of code grows, increasing the number of elements that could malfunction or be coded improperly.
- System failures: System failures arise not from the breakdown of any single part but from unanticipated interactions between system elements. Verifying all possible combinations of the system’s internal workings becomes increasingly difficult as the system’s complexity increases.
- Systems are not transparent to human operators: While one of the benefits of automation in many cases is reducing the potential for human error, a negative side effect of greater complexity is that the system’s functioning may be increasingly opaque to even trained users.
- Unanticipated interactions with the environment: As the complexity of the system and/or its operating environment increases, the number of potential interactions grows dramatically. This can make testing the autonomous system’s operation under every possible environmental condition effectively impossible (a rough sketch of this combinatorial growth follows this list).
- Adversarial hacking: In an adversarial environment, such as in war, enemies will likely attempt to exploit vulnerabilities of the system, whether through hacking, spoofing (sending false data), or behavioral hacking (taking advantage of predictable behaviors to “trick” the system into performing a certain way). While any computer system is, in principle, susceptible to hacking, greater complexity can make it harder to identify and correct any vulnerabilities.
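The testing point in the fourth item above can be made concrete with a small Python sketch. The factor counts, the four values per factor, and the test-harness throughput are all hypothetical numbers chosen purely to illustrate how quickly exhaustive coverage becomes infeasible:

```python
# A minimal sketch of the combinatorial-explosion argument above. Assume,
# purely for illustration, that a system's behavior depends on n independent
# factors (sensor states, environment conditions, component modes), each of
# which can take k values; exhaustively testing every combination then
# requires k**n distinct cases.

def exhaustive_cases(factors: int, values_per_factor: int) -> int:
    """Number of configurations needed to cover all combinations."""
    return values_per_factor ** factors

TESTS_PER_SECOND = 1_000  # hypothetical test-harness throughput
SECONDS_PER_YEAR = 3600 * 24 * 365

for n in (5, 10, 20, 40):
    cases = exhaustive_cases(n, 4)  # assume 4 possible values per factor
    years = cases / TESTS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{n:2d} factors: {cases:.2e} cases (~{years:.2e} years to test)")
```

Even at a thousand tests per second, forty four-valued factors would take on the order of 10^13 years to enumerate, which is why exhaustive testing of complex autonomous systems is effectively impossible.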