Artificial Intelligence and International Humanitarian Law

Author: Dr. Garima Tiwari

Partially autonomous unmanned military drones, lethal autonomous weapons systems (LAWS), automated defensive systems like Israel’s Iron Dome, and ‘killer robots’ have highlighted the need to urgently regulate the production, distribution and use of weapons based on artificial intelligence. This post raises and reiterates the key issues at the intersection of international humanitarian law and artificial intelligence.

Isaac Asimov set out his famous Three Laws of Robotics in the science-fiction story “Runaround”:[i]

  • Law One – “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
  • Law Two – “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.”
  • Law Three – “A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.”

Asimov later added a “Zeroth Law” to supersede the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”[ii] That was 1942; today, in 2018, fiction no longer seems far from reality. Technology has progressed by leaps and bounds, and the law is still gearing up to catch up. Advances in military technology have introduced new actors into the conduct of warfare that owe their origins to artificial intelligence (AI), posing new challenges for the study, application and development of international humanitarian law (IHL).

The International Committee of the Red Cross (ICRC) defines an autonomous weapon system (AWS) as: “any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention”.[iii]
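
To make “autonomy in its critical functions” concrete, the following is a minimal, purely illustrative Python sketch; every name and the decision structure are assumptions of this post, not any real weapon architecture. It models the two critical functions the ICRC identifies, select and attack, and marks where a human-intervention gate would sit:

```python
# Illustrative sketch only: hypothetical names, not a real weapon architecture.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Track:
    identifier: str
    classification: str  # hypothetical label, e.g. "military_objective" or "civilian"


def select_targets(sensor_picture: List[Track]) -> List[Track]:
    """The 'select' critical function: search for, detect, identify, track."""
    return [t for t in sensor_picture if t.classification == "military_objective"]


def attack(target: Track) -> None:
    """The 'attack' critical function: use force against the target."""
    print(f"engaging {target.identifier}")


def engagement_loop(sensor_picture: List[Track],
                    human_approves: Callable[[Track], bool]) -> None:
    for target in select_targets(sensor_picture):
        # This gate keeps a human in the decision; remove it and both
        # critical functions run "without human intervention" -- which is
        # what the ICRC definition of an autonomous weapon system describes.
        if human_approves(target):
            attack(target)
```

The legal weight rests almost entirely on that single conditional: much of the current debate is, in effect, about whether it may be removed.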

The main concern emerges from the autonomous nature of these weapons: they operate “without human intervention”. The very decision to “kill” using AI invokes an analysis under IHL as to the lawfulness of the use of force. The burden of deciding who is responsible ultimately lies on humans[iv] and may raise issues of superior/command responsibility. Already, semi-autonomous drones have caused severe collateral damage to civilians in the tribal areas of north-west Pakistan and in Afghanistan, especially the city of Kunduz.[v] What would be the impact if these were fully autonomous weapons? What happens if an autonomous weapon system commits a grave breach of international humanitarian law? Who would be liable? Further, would states be comfortable deploying their troops against AWS? While the initial programming may comply with IHL principles, in an era of super-intelligence and machines with learning capabilities, ongoing compliance would require deeper scrutiny.

Meetings of governmental experts and officials are being held in Geneva under the auspices of the Convention on Certain Conventional Weapons (CCW) to find consensus on steps towards regulating AWS.[vi] The discussion essentially requires an assessment of compliance with the following IHL principles (a toy sketch after the list illustrates why these tests resist mechanical encoding):

  1. Principle of distinction: attackers must distinguish combatants from civilians, and even civilians from civilians who participate in hostilities, with respect to both persons and property.
  2. Principle of proportionality: the anticipated loss of civilian life and damage to property incidental to an attack must not be excessive in relation to the anticipated military advantage.
  3. Principle of military necessity: the target must be necessary and essential for securing the submission of the other party, and there must be no illegality in attacking it.
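
To see why these principles resist mechanical encoding, consider this toy Python sketch; the numeric “harm” and “advantage” scores, the set of target classes and the fixed weighing are hypotheticals of this post, not anything IHL provides:

```python
# Toy illustration only: IHL supplies no numeric formula for any of this.

def complies_with_distinction(target_class: str) -> bool:
    # Distinction: only combatants, military objectives, or civilians
    # directly participating in hostilities may be attacked.
    return target_class in {"combatant", "military_objective",
                            "civilian_participating_in_hostilities"}


def complies_with_proportionality(expected_civilian_harm: float,
                                  anticipated_military_advantage: float) -> bool:
    # Proportionality: incidental civilian harm must not be "excessive"
    # relative to the anticipated military advantage. Reducing "excessive"
    # to a fixed comparison is precisely the step the law does not authorise.
    return expected_civilian_harm <= anticipated_military_advantage


def attack_is_lawful(target_class: str,
                     expected_civilian_harm: float,
                     anticipated_military_advantage: float,
                     militarily_necessary: bool) -> bool:
    # Military necessity enters here as a bare boolean -- in reality a
    # contextual commander's judgement, not a machine-readable flag.
    return (militarily_necessary
            and complies_with_distinction(target_class)
            and complies_with_proportionality(expected_civilian_harm,
                                              anticipated_military_advantage))
```

The sketch’s weakness is precisely the point: each input conceals a contextual legal judgement (who counts as participating in hostilities, what harm is “excessive”) that must be made before such a function could even be called.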

Civilians are protected under the Geneva Conventions and their Additional Protocols, and only combatants (until they become hors de combat) are considered a legitimate object of armed attack. Further, IHL requires that an attack be authorised for a valid military purpose, implying military necessity. A target may not be attacked unless it qualifies as a “military objective” and the commanding officer has assessed the ratio of expected collateral damage to military advantage.[vii] The question, therefore, is whether an AWS, even though initially programmed by a human, can understand the nature and urgency of attacking targets and distinguish between civilian and military entities in conflict.[viii] The ICRC has studied civilians who participate in hostilities, particularly in armed conflicts of a non-international character, implying that ‘distinction’ is required not merely between “civilians and combatants” but also between “civilians and civilians who participate in hostilities.”[ix] This raises the question of how far machines can draw such fine lines, which are not purely objective and require a subjective assessment of the situation prior to attack.

In a report titled “Losing Humanity: The Case against Killer Robots”, Human Rights Watch[x] has called for “a pre-emptive prohibition on their development and use.” The report concluded that these revolutionary weapons would not be consistent with IHL and would increase the risk of death or injury to civilians during armed conflict. Another view is offered by Professor Michael Schmitt, who has argued that autonomous weapons may be more compliant with the law of armed conflict than traditional military systems. He suggests that such weapons are not illegal per se and that, “International humanitarian law’s restrictions on the use of weapons would nevertheless limit their employment in certain circumstances. This is true of every weapon, from a rock to a rocket.”[xi]

Thus, issues of autonomy, control, predictability and adaptability in how machines learn, and the resultant liabilities, are of immediate concern. Each new AWS will have to be tested for its compliance with IHL principles. Changing modes of warfare may require adaptable means of regulation; determining who may produce and use AWS could be a starting point. This places a greater burden on those working on the policy and design of AI-based systems to ensure they comply with IHL principles. The fiction is no longer fiction, and the law must urgently find solutions that are forward-looking, adaptable and not rendered redundant by fast-changing technology.

[i] Isaac Asimov, “Runaround” (1942), reprinted in I, Robot, at https://www.ttu.ee/public/m/mart-murdvee/Techno-Psy/Isaac_Asimov_-_I_Robot.pdf

[ii] MIT Technology Review, “Do We Need Asimov’s Laws?” [May 16, 2014], at https://www.technologyreview.com/s/527336/do-we-need-asimovs-laws/

[iii] International Committee of the Red Cross (ICRC), Views of the ICRC on Autonomous Weapon Systems, 11 April 2016, p. 1, at https://www.icrc.org/en/document/views-icrc-autonomous-weapon-system.

[iv] See Interview with Paul Scharre, Senior Fellow and Director, Future of Warfare Initiative, Center for a New American Security [Washington, D.C., Jan. 29, 2016].

[v] J.G. Castel and Matthew E. Castel, “The Road to Artificial Super-intelligence: Has International Law a Role to Play?”, Canadian Journal of Law and Technology, Vol. 14, No. 1 (2016).

[vi] Campaign to Stop Killer Robots, “Support Grows for New International Law on Killer Robots”, 17 November 2017, at https://www.stopkillerrobots.org/?p=6579.

[vii] For an excellent analysis of the issue, see Alan Schuller, “At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law” [May 30, 2017], 8 Harvard National Security Journal 379, at https://ssrn.com/abstract=2978141.

[viii] Herbert Lin, “Will artificially intelligent weapons kill the laws of war?” [18 September 2017], at https://thebulletin.org/will-artificially-intelligent-weapons-kill-laws-war11124

[ix] N. Melzer, “Interpretive Guidance on the Notion of Direct Participation in Hostilities under International Humanitarian Law”, ICRC, Geneva [21 December 2010], at https://www.icrc.org/eng/assets/files/other/icrc-002-0990.pdf

[x] Human Rights Watch, “Losing Humanity: The Case against Killer Robots” [November 19, 2012], at https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots

[xi] Michael N. Schmitt, “Autonomous Weapon Systems and IHL: A Reply to the Critics”, Harvard National Security Journal Features [2013] at http://harvardnsj.org/wp-content/uploads/2013/02/Schmitt-Autonomous-Weapon-Systems-and-IHL-Final.pdf
