Autonomous Weapon Systems - Existential Threat?

by Michèle Trebo
on March 09, 2023
time to read: 10 minutes

Keypoints

Select and Eliminate Targets without Human Intervention

  • In-the-loop, on-the-loop, and out-of-the-loop systems
  • Relinquishing human control to an AI
  • Smart, precise, powerful, fast, and affordable
  • Calls from experts and renowned AI and robotics researchers to ban autonomous weapon systems
  • A dynamization of warfare that exceeds human responsiveness

The central factor that distinguishes autonomous weapons from remotely controlled weapons is human control. Lethal weapon systems in which target selection and the manner of target engagement are left to artificial intelligence (AI) rather than being subject to human control are problematic from both ethical and legal perspectives. In 2018, Elon Musk, 160 research and business organizations, and experts from the Stop Killer Robots campaign called for a commitment against building autonomous weapon systems. In doing so, they raised broad awareness of the threat posed by autonomous weapons for the first time.

Definition of autonomous weapon systems

In principle, most objects can become weapons (Bendel, 2019, page 321). Examples include bicycle chains, screwdrivers, and scissors. These items were not designed as weapons but can certainly be used as such, which means that a weapon is not necessarily recognizable by its external shape. But at what point is a weapon considered autonomous?

In the military context, a distinction is made between in-the-loop, on-the-loop, and out-of-the-loop systems, depending on the role humans play in the control loop: how a target is identified and who initiates the attack. In an in-the-loop system, all decisions (including possible remote control) are made by a human. An on-the-loop system, by contrast, follows a program and is able to operate in real time, independently of human intervention; however, the human remains in the control loop, monitors the system, and always retains the possibility to intervene. Out-of-the-loop systems operate without any human control or intervention capability. An autonomous weapon system thus has the capability to detect, engage, and disable a target designated as an enemy without human intervention (Lee & Chen, 2022, page 375).

Such a scenario is shown in the short film Slaughterbots (Bendel, 2019, page 320). What appears at first glance to be a documentary is fiction. The mini-drones shown in the film can track a specific person and kill them with a small explosive charge fired through the target's head. As AI and robotics become more accessible and affordable, real robots will be able to perform similar functions in the near future (Lee & Chen, 2022, pages 375-376). The tremendous advances in AI are also accelerating the development of autonomous weapons: these robots are becoming smarter, more accurate, more powerful, faster, and less expensive, and they are acquiring new capabilities such as swarm formation through cooperation and redundancy.
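The control-loop distinction above can be made concrete with a small model. The following Python sketch is purely illustrative: a minimal model of where human authorization enters a generic sense-decide-act loop. ControlMode, engagement_loop, and all callables are hypothetical names introduced here for illustration and do not describe any real system.

from enum import Enum

class ControlMode(Enum):
    """Degree of human control in the engagement loop."""
    IN_THE_LOOP = "human makes every decision"
    ON_THE_LOOP = "system acts, human supervises and can intervene"
    OUT_OF_THE_LOOP = "system acts without human control"

def engagement_loop(mode, detect, human_approves, human_vetoes, act):
    """One pass through a generic sense-decide-act loop.

    All callables are hypothetical placeholders:
      detect()          -> a candidate target, or None
      human_approves(t) -> bool, explicit human authorization
      human_vetoes(t)   -> bool, human interrupt while supervising
      act(t)            -> carries out the engagement
    """
    target = detect()
    if target is None:
        return

    if mode is ControlMode.IN_THE_LOOP:
        # Every single action requires an explicit human decision.
        if human_approves(target):
            act(target)
    elif mode is ControlMode.ON_THE_LOOP:
        # The system decides in real time; the supervising human
        # retains a standing ability to intervene and abort.
        if not human_vetoes(target):
            act(target)
    else:
        # Out of the loop: no human control or intervention capability.
        act(target)

# Illustrative run: an on-the-loop pass in which the supervisor intervenes.
engagement_loop(
    ControlMode.ON_THE_LOOP,
    detect=lambda: "candidate",
    human_approves=lambda t: False,
    human_vetoes=lambda t: True,   # human aborts, so act() is never called
    act=lambda t: print(f"engaging {t}"),
)

The decisive difference in this model lies not in the sensors or effectors but in the position of the human check: before every action (in the loop), as a standing veto (on the loop), or absent entirely (out of the loop).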

Advantages

From a military perspective, an out-of-the-loop system offers the greatest advantages in practical use (Bendel, 2019, page 322). It acts at a speed that far exceeds human responsiveness, which can represent a decisive strategic gain. Moreover, it does not require permanent communication between system and human, making it less likely to be detected, disrupted, or incapacitated by enemies. Cost reduction and scalability in groups are further advantages. A shortage of forces can be compensated for, because tasks are accomplished more efficiently, persistently, and effectively even with few personnel on the front lines. Impaired communications, inaccessible or difficult terrain, and dangerous, grueling, time-consuming missions in which humans reach their limits are not a problem for autonomous weapons. Moreover, they act rationally and can be used defensively against assassins and criminals. Wars could be fought by machines, sparing the lives of large numbers of soldiers (Lee & Chen, 2022, page 376). In the hands of a responsible military, autonomous weapon systems could help prevent the accidental killing of friendly forces, children, and civilians.

Disadvantages

A major disadvantage of autonomous weapon systems is the moral dimension (Lee & Chen, 2022, pages 376-377). Ethical and religious systems consider the killing of a human being a reprehensible act. In the case of an error, there is also the question of who is responsible. This diffusion of responsibility could absolve aggressors of blame for wrongdoing or violations of international humanitarian law, which in turn lowers the threshold for war. Autonomous weapons also simplify the execution of an attack: the self-sacrifice of a human attacker still represents a high hurdle, one that the use of autonomous weapon systems would remove. Through facial or gait recognition, as well as the tracking of phone and network signals from the Internet of Things (IoT), autonomous weapons can locate individuals, enabling both targeted assassination and the genocide of entire groups of people. Furthermore, greater autonomy without an understanding of the larger context increases the tempo of warfare, and with it the probability of escalation up to nuclear war, and thus the number of casualties. As autonomous weapon systems such as small drones become less expensive and more widely available, weak adversaries and non-state actors gain the ability to achieve significant impact at a distance. Defending against such systems involves substantial effort; swarms of different systems in particular are difficult to combat. Distinguishing between enemy, civilian, and friendly systems presents another challenge. Finally, taking advantage of autonomy while maintaining human control over key decisions such as target engagement is difficult.

Limitations

An AI, unlike a human, possesses neither common sense nor the ability to reason across domains (Lee & Chen, 2022, page 377). No matter how well trained an autonomous weapon system is, its limitation to a specific operational domain prevents it from fully understanding the consequences of its actions. Certain actions, such as counterterrorism measures, must therefore still be carried out by humans. Berkeley professor Stuart Russell says the capabilities of autonomous weapons are limited more by the laws of physics than by deficiencies in AI systems.

Threat of autonomous weapon systems

Military superiority has always been a national priority (Lee & Chen, 2022, pages 377-379). Autonomous weapon systems further intensify this competition, because the fastest, best camouflaged, most lethal weapons usually win the war, and their low cost lowers the barrier to entry. Israel, with its powerful technologies, is a good example: it has already entered the arms race with some of the most advanced military robots, some smaller than a fly. Since all countries must assume that their potential adversaries are building autonomous weapons, they are forced to keep up. Berkeley professor Stuart Russell says we must reckon with platforms whose mobility and destructive power humans cannot counter. If this multilateral arms race is given free rein, it could ultimately lead to the extinction of humanity. For nuclear weapons, deterrence theory applies: whoever launches a first strike must expect a counter-reaction and, consequently, their own destruction. Since a first strike by autonomous weapon systems may not be attributable, there is no corresponding danger of a destructive retaliatory strike. A first strike, not necessarily by a country but possibly by terrorists or other non-state actors, could cause an escalation and therefore poses a major threat. In principle, moreover, it can never be ruled out that such weapons will be hacked, which entails unpredictable and uncontrollable risks (Bendel, 2019, page 320).

Possible solutions

In order to avert an existential threat to humanity, various solutions have been proposed (Lee & Chen, 2022, pages 379-380). One of them is the human-at-trigger approach, in which every decision to kill a human must be made by a human. However, this would compromise the performance of autonomous weapon systems in terms of speed and possibly precision, and the requirement is difficult to enforce and leaves room for loopholes. A second solution, proposed by the Stop Killer Robots campaign as well as by 3,000 signatories including Elon Musk, the late Stephen Hawking, and numerous AI experts, would be an outright ban. Biologists, chemists, and physicists have made similar pushes against biological, chemical, and nuclear weapons in the past. Although such a ban is not easily achieved, previous bans on blinding lasers, chemical weapons, and biological weapons appear to be largely observed. Russia, the United States, and the United Kingdom are currently the biggest opponents of a ban on autonomous weapons, arguing that it is too early for one. A third approach is to regulate autonomous weapons. This, too, is difficult, because it requires a technical specification that is precise without being too broad. Clearly defining the term “autonomous weapons” and figuring out how to check for rule violations are major obstacles in the short term. One could nevertheless speculate that an agreement will be reached within a few years. Otherwise, future wars might even be fought only by robots, or purely at the software level, thus avoiding human casualties. But these ideas are currently not feasible.

Summary

Autonomous weapon systems are considered the third revolution in warfare after the introduction of firearms and nuclear weapons, bringing a new quality to armed conflict (Bendel, 2019, page 320). There is a threat of an expansion of armed conflicts and an extreme dynamization of warfare that exceeds human reaction capacity. Autonomous weapons are rapidly becoming smarter, more maneuverable, more lethal, and more affordable, further increasing the threat (Lee & Chen, 2022, page 380). The seemingly inevitable arms race further accelerates their adoption. While relinquishing human control to an AI has its advantages, the disadvantages outweigh them. Ethical value standards collide most clearly with the use of AI in connection with autonomous weapon systems, which threaten the continued existence of humanity. Experts and policymakers have a duty to weigh the various approaches to preventing the proliferation of autonomous weapons and to present an actionable outcome as soon as possible. So far, however, no concrete steps have been taken toward a ban on autonomous weapon systems; only the need for further discussion has been noted (Bendel, 2019, page 321). In addition to the lack of political will, especially among those actors capable of developing autonomous weapon systems, there are definitional difficulties: precisely delineating autonomous weapons from other types of weapons is hard, because an effective ban must account for both the current state of the art and future developments.

Literature

About the Author

Michèle Trebo

Michèle Trebo holds a Bachelor's degree in Information Technology from ZHAW and worked for six years as a police officer in the field of cybercrime investigations. She is responsible for criminal research topics such as darknet analysis, cyber threat intelligence, fraud investigation, and forensics. (ORCID 0000-0002-6968-8785)
