Artificial Intelligence and Asian Security
Event Recap
On March 27, the Asia Society Policy Institute (ASPI) hosted a public discussion entitled "Drones, Bots, and Smart Weapons: Artificial Intelligence and Asian Security." The panel discussion from the event can be watched above. It features Joshua Marcuse, Executive Director of the Defense Innovation Board, Office of the Secretary of Defense; Dr. Heather Roff, Associate Fellow at the University of Cambridge Leverhulme Centre for the Future of Intelligence; R. Shashank Reddy, Research Analyst at Carnegie India; Dr. Heigo Sato, Professor and Vice President of the Institute of World Affairs at Takushoku University; and moderator Lindsey Ford, ASPI's Director for Political-Security Affairs. (58 min., 29 sec.)
Ambassador Amandeep Singh Gill, India's Permanent Representative to the United Nations Conference on Disarmament and the Chair of the United Nations Group of Governmental Experts on Lethal Autonomous Weapons Systems, delivered keynote remarks at the event. He delivered his remarks remotely, via a live video feed. Below is a transcript of the speech, lightly edited for clarity.
Keynote Remarks by Amb. Amandeep Singh Gill
Thank you Lindsey. I am grateful to Asia Society for this opportunity and I am delighted to see the distinguished panelists you have gathered for this discussion. Professor Sato, Dr. Heather Roff, Shashank, it is good to be thinking about this issue with you again.
Whether it was the socially disruptive crossbow in medieval times or the dual-use horse stirrup that allowed the Mongols to stand erect and shoot, technology has always mattered in the security space. Paradigm-shifting advances such as nuclear weapons can even rewrite security strategies. As battlespaces expand to cover the globe, as the speed of engagement accelerates, and as peer powers compete in information-degraded battle environments thousands of miles from the home base, autonomy, and intelligent autonomy in particular, will rise in salience. If nuclear weapons helped rewrite strategy – just think of Brodie, Kissinger, Kahn, and Wohlstetter – autonomy might also change the way humans relate to weapons. Throughout the history of conflict, the weapon and the wielder have been separate. What happens when the weapon becomes the wielder? To whom do the laws of war apply, and how? But it is the potential shock and awe of swarms of intelligent autonomous systems "learning" their way through defences that is focusing a lot of minds in the security space.
Some of that focus is on compatibility and compliance with the laws of war, or international humanitarian law (IHL); some on what lethal autonomous systems mean for strategic stability, crisis stability, arms race stability, and good old-fashioned proliferation, including to non-state actors; and some on potential shifts in thinking about the distinction between war and peace and the lowering of the threshold for the use of force in international relations – in other words, are some countries going to be more trigger-happy if body bags are replaced by mangled metal? Then there is no other sector, with the exception perhaps of the cyber domain, where the commercial stakes are so high and so intertwined with the security stakes. As important and cut-throat as the military edge is the economic edge, as advanced economies try to squeeze out an extra percentage point of growth and as emerging economies attempt to catch up with leapfrogging technologies. If we look at the dozen or so countries with the relevant upstream capabilities – the U.S., Canada, the UK, Germany, France, Russia, China, India, Japan, Korea, and Israel – you see this entanglement at work. Eight of these eleven, by the way, are in the Asia-Pacific region.
In the context of the Group that I chair in Geneva, in the framework of the Convention on Certain Conventional Weapons (CCW), we are looking at the risks associated with the potential deployment of autonomous intelligent systems in weapons capabilities, at whether these risks can be managed within existing legal frameworks and how, and at whether some explicit rules of the road – political or legal prohibitions and/or regulations – are required. Our mandate is such that we have to look at emerging technologies in the area of lethal autonomous weapons systems; in other words, we have to focus on the novel and the unexpected – not just what is quantitatively different but what is qualitatively different from existing weapon systems. We are not looking so much at discrete, countable, "feelable" systems as at the penetration of AI/AS into weapons capabilities. As I am fond of saying, we are neither going far back into the past in terms of legacy systems nor far into the future in Hollywoodian terms.
This focus on the mandate for the GGE allows me to segue from the why and the what of the security dimension to the how of our work. How are we tackling the humanitarian and international security concerns arising out of lethal autonomy in a multilateral, inter-governmental setting? We started last year in a formal sense, building on three years of informal discussions that were triggered by the first set of reflections on the issue in the human rights context. What the informal meetings in 2014-2016 did was to raise awareness of the complex dimensions of the issue – humanitarian, ethical, military, legal, and techno-commercial – and to underline the importance of the CCW as a platform for addressing these various dimensions. The CCW sits at the junction of IHL and arms control; it recognises the principle of military necessity alongside the IHL humanitarian principles. Thus it provides comfort to the governmental stakeholders to engage. It has a modular construct and can be updated as the security and technology landscapes evolve. Its rules of procedure allow the participation of a broad range of stakeholders.
I took up the reins in 2017 from my French and German predecessors and started a formal discussion, using a Food for Thought paper and other devices of multilateral diplomacy, on three specific dimensions – the legal/ethical, the military, and the technological. I also brought a broader set of stakeholders into the conversation, in particular industry as a civil society actor. Gender and age representativeness was strengthened, and participation from the Global South was encouraged. The November 2017 meeting was successful in underlining the appropriateness of the CCW as the venue for dealing with this issue. We also managed to lock in some basic principles as a bedrock on which to build. They include:
- One, IHL continues to apply fully to all weapons systems, including the potential development and use of LAWS;
- Two, responsibility for the deployment of any weapons system in armed conflict remains with States, and States must ensure accountability for lethal action by any weapon system used by their forces in armed conflict;
- Three, given the dual-use nature of technologies in the area of intelligent autonomous systems, which continue to develop rapidly, the work on LAWS should not hamper progress in, or access to, civilian R&D and use of these technologies;
- And, four, given the pace of technology development and uncertainty regarding the pathways for the emergence of increased autonomy, there would be a need to keep potential military applications of related technologies under review.
Importantly, we also managed to shape an agenda for the future. This is what will guide our work two weeks from now, in April, and then again in August.
The three plus one agenda we have developed is as follows:
i) characterisation of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the CCW;
ii) further consideration of the human element in the use of lethal force and further assessment of aspects of the human-machine interaction in the development, deployment and use of emerging technologies in the area of LAWS;
iii) possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS – these options could include a politically-binding declaration, a legally-binding instrument, a moratorium or some other pathway; plus
iv) a review of military applications of related technologies in the context of the Group's work.

I see the last two items – the political and technical aspects, if you will – as something of a continuing focus. This year, we hope to be able to make progress on characterisation and the various touch points in the human-machine interface.
Let me conclude by sharing with you my sense of where we are and where we could be at the end of the year. Based on the consultations I have had so far – bilaterally with thirty-odd delegations, and with regional groups, NGOs, academics, and industry – I am optimistic that we can have a high-quality discussion with the participation of all stakeholders, as we did last year. And, like last year, we can capture our work in a text that is acceptable to everyone in the room. Thus we would be able to advance our mandate in a step-by-step and pragmatic manner, while ensuring that everyone stays on board. I am particularly heartened by the Working Papers that countries are bringing to the table after extensive inter-agency work in capitals. I am also heartened by the attention our work is garnering, as this event today underlines.
Thank you. And I look forward to your questions, comments and advice.