Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative – Pre 10 2025
12 May 2025 | 13:00 - 14:15 CEST | Room 10
Consolidated programme 2025
Proposal: #34
This session aims to foster a multi-stakeholder dialogue complementing the ongoing political process on the international regulation of autonomous weapons systems. It brings together relevant constituencies, including the tech sector and industry, parliamentarians, academia and civil society, in order to tackle the issue from various perspectives, including the human rights, ethical, security and technological implications of autonomous weapons systems and the need to establish rules and limits on their development and use.
Session description
Artificial intelligence (AI) applications offer a range of benefits but also pose challenges and risks, and they are driving significant transformations in many sectors, including the military. For more than a decade, we have seen the development of AI-based weapon systems, which have the potential to undermine both international peace and security and the functioning of the internet.
The session will focus on recent developments and new perspectives on the need for regulation of Autonomous Weapons Systems (AWS). In October 2024, the 79th UN General Assembly discussed the report on AWS by UN Secretary-General António Guterres, published in July 2024 pursuant to UN Resolution 78/241 on “Lethal autonomous weapons systems” (LAWS). The resolution was introduced by Austria in 2023 and was supported by the overwhelming majority of UN member states. The UN General Assembly adopted two resolutions in 2024:
- UN Resolution 79/62 on LAWS, sponsored by Austria and a cross-regional group of co-sponsors, which aims to intensify the AWS discussion and address those aspects of the UN Secretary-General's report that have not yet been discussed comprehensively. These aspects include the human rights, ethical, and security perspectives.
- UN Resolution 79/239 on AI in the military domain and its implications for peace and security, sponsored by the Netherlands and the Republic of Korea together with a cross-regional group of co-sponsors.
The Group of Governmental Experts (GGE) in the framework of the Convention on Certain Conventional Weapons (CCW) has continued its work on formulating measures to address AWS, including a set of elements of an instrument, and hopes to finalize it by the end of 2025. Further, the UN will organize a two-day multi-stakeholder informal consultation on AWS pursuant to UN Resolution 79/62 from 12 to 13 May 2025 in New York. The call by the UN Secretary-General and the ICRC President to conclude negotiations on a legally binding instrument on AWS by 2026 is supported by many states and remains on the table.
In a world of new political turbulence and risks, the EuroDIG Open Forum contributes to building public awareness of the challenges regarding AWS. Stakeholders from different communities will discuss the various aspects – political, legal, ethical, humanitarian, technological, etc. – of the development and use of AWS and consider ideas on how to contribute to the development of a regulatory framework for autonomous weapon systems as a contribution to peace and security and the achievement of the development goals.
Format
Keynote Address: Vint Cerf, Vice President and Chief Internet Evangelist at Google (Online)
Message from the AWS Consultation in New York: Aloisia Wörgetter, Permanent Representative of the Republic of Austria to the Council of Europe
Relevant activities of the Council of Europe: Damien Cottier, Representative of Switzerland in the Parliamentary Assembly of the Council of Europe
Further reading
- UNGA Resolution 78/241 on Lethal Autonomous Weapons Systems
- UN Secretary-General's Report on Lethal Autonomous Weapons Systems
- UNGA Resolution 79/62 on Lethal Autonomous Weapons Systems
- UNGA Resolution 79/239 on AI in the Military Domain and its Implications for Peace and Security
- Group of Governmental Experts on Lethal Autonomous Weapons Systems
- 2024 Vienna Conference on Autonomous Weapons Systems
- IGF 2024 Annual Meeting Summary Report
Panelists
- Benjamin Tallis, Senior Manager for Thought Leadership, Helsing
- Anja Kaspersen, Director for Global Markets Development and Frontier Technologies at IEEE SA
- Elena Plexida, Vice President for Government and IGO Engagement at ICANN
- Chris Painter, Former Chair of the Global Forum on Cyber Expertise (Online)
- Marietje Schaake, Member of the Global Commission on Responsible Artificial Intelligence in the Military Domain (Online)
- Angela Mueller, Executive Director, AlgorithmWatch (Online)
Moderation
Wolfgang Kleinwächter, Professor Emeritus, University of Aarhus
Transcript
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Wolfgang Kleinwächter: It’s one o’clock, so we are waiting for Anja Kaspersen. Good afternoon, everyone. Welcome to the session on Regulation of Autonomous Weapon Systems,
Moderator: navigating the legal and ethical imperative. My name is Istimarta and I will be the remote moderator for the session. So for now, I will be reading the rules for our remote audiences. So first, for the remote audiences, please enter with your full name. And to ask questions, raise your hand using the Zoom function, and you will be unmuted when the floor is given to you. And when speaking, please switch on the video, state your name and affiliation, and please do not share the links to Zoom meetings, not even to your colleagues. So for now, I will be giving the floor to our moderator, Professor Wolfgang Kleinwächter from University of Aarhus. Thank you very much and welcome to our session.
Wolfgang Kleinwächter: As you know, we are living in difficult times, and while everybody agreed 20 years ago that the cyberspace and the digital sphere would contribute to a more peaceful world and to better understanding among nations, we have realized in the last 20 years that the cyberspace is also an area for conflict, conflict among nations, and also a process has started where cyberspace becomes weaponized. During the recent Munich Security Conference, we did see a lot of discussion about how this space, cyberspace, but also the outer space, has now been pulled into a discussion for military experts. So we have seen a lot of negotiations already within the United Nations, but also under the umbrella of the Convention on Certain Conventional Weapons, the CCW, where we see a discussion about new types of weapons, which we call autonomous weapon systems, AWS, and the Secretary-General of the United Nations has produced a report last year, which has led to a resolution, which was sponsored by Austria, with the outcome that today and tomorrow there will be informal consultations in New York City about this new type of weapons. And that’s why, with the help of the Austrian government, we have decided to bring this very crucial and delicate and complicated debate to a broader public so that we have a better understanding of what are the consequences of, quote-unquote, weaponization of the cyberspace. And so we started with an outreach workshop during the IGF in Riyadh in December, where we had the first round of discussion, and this is now the second in a series in which we want to reach out more to the European public, and there will be a third workshop in Oslo in June when we have the UN-sponsored IGF. So it means the session here is mainly an informal session so that we inform the public what’s going on, and we hope we’ll have also a very good discussion. We are, Anja is here, okay, great. So unfortunately, we are still missing Vint Cerf, who wanted to give a short opening speech because he also helped us to make the workshop in Riyadh, but he’s in Los Angeles and it’s three o’clock in the morning, probably a little bit too early for him. So that means if he arrives then our remote moderator will give us a signal. So we have a good panel which gives you different perspectives. We have the Ambassador from Austria to the Council of Europe, Madam Wörgetter, who will inform us about the ongoing negotiations. We have Mr. Tallis from Helsing, this is the industry perspective. This is one of the new rising industrial stars in Germany which has specialized in the production of one type of autonomous weapon system, mainly drones. We have Anja Kaspersen, she is from the technical community. She will speak a little bit about the technical perspective and how realistic or unrealistic the debate about human control over all this is, because human control and human oversight is a key issue in the debate. And we have then some comments from the online commentators. We have Chris Painter, who was the first US Cyber Ambassador in Washington. He was then for many years the chair of the Global Forum on Cyber Expertise and is now with a conference in Geneva with UNIDIR and dealing also with these issues. Unfortunately, Marietje Schaake, a former member of the European Parliament from the Netherlands, who is now a member of the Global Commission on AI in the military domain, is conflicted and cannot make it. But we have also Elena Plexida from ICANN online, and we have from the NGO Stop Killer Robots.
She’s from India and will give us a civil society perspective. So this is more or less the program, and now I give the floor to Madam Ambassador. Thank you very much.
Aloisia Wörgetter: Thank you. Yes, that works. Thank you, Professor Kleinwächter. Dear colleagues, ladies and gentlemen, I see many of you here. A special welcome to everybody with a strong connection to Austria. My colleagues in Vienna, disarmament experts, have asked me to speak to you on their behalf. Also, as you know, the Council of Europe deals with human rights, rule of law, and democracy, but has specifically no mandate for defense issues. Still, we found it very important that this topic is dealt with at EuroDIG in connection with the Council of Europe here. I want to thank you, Professor Kleinwächter, for moderating this session, and want to thank all the distinguished speakers, present and online, for joining us and contributing to this timely and important conversation. Like all transformative technologies, the application of artificial intelligence in the military domain is advancing rapidly. These developments promise to make tasks faster, easier, and more accessible. Yet, as in the civilian sector, they demand robust guardrails and limitations to ensure that artificial intelligence is used in a human rights-based, human-centered, ethical, and responsible manner. While the civilian domain is increasingly governed, and thank goodness we do find consensus on these things, with regard to the Council of Europe’s AI Convention, the first legally binding international treaty on AI, and the European Union’s AI Act, the first comprehensive global regulation, the military and defence sectors still lag behind. And let me state here that Austria supported, during the negotiations for the Convention on Artificial Intelligence, that we include the defence sector, but we were not successful in this regard. National security considerations have largely excluded these domains from such instruments, and no similar binding frameworks exist to date. We therefore support ongoing international efforts to promote responsible military use of artificial intelligence. These include the REAIM initiative by the Netherlands and South Korea, and the US Political Declaration on Responsible Military Use of AI and Autonomy. Today, we focus on one of the most critical and sensitive issues in this broader field: autonomous weapon systems, systems that can select and apply force to targets without further human intervention. AWS raises fundamental legal, ethical and security concerns. These include the necessity for meaningful human control to ensure proportionality and distinction, the need for predictability and accountability, and the protection of the right to life and human dignity. There are also serious risks of proliferation and a destabilizing autonomy arms race. These topics will be explored by our panel, and I want to link back also to the panel that started EuroDIG this morning, where the department for the execution of judgments of the Council of Europe reported on the case law of the European Court of Human Rights. We are concerned about these things going on, and therefore Austria has taken a leading role in advancing international regulation on AWS. Last year, Austria hosted the Vienna Conference Humanity at the Crossroads to examine the ethical, legal and security implications of AWS and to build momentum for international regulation. We strongly support the joint call by the UN Secretary-General and the ICRC President to conclude negotiations on a legally binding instrument by 2026.
Over the past decade, valuable discussions have taken place, notably within the Group of Governmental Experts in Geneva and the Human Rights Council, where a growing majority of states agree on the need for international regulation, including prohibitions and restrictions. However, moving from discussion to a formal negotiation mandate remains difficult. Geopolitical tensions, mistrust and the reticence to regulate these fast-paced technologies are slowing progress, even as the window for preventive regulation is closing rapidly. Professor Kleinwächter has just mentioned that we have supported and championed the first-ever resolution on AWS in the UN General Assembly in 2023. You’re aware that this has mandated a UN Secretary-General report, and last year we sponsored also the follow-up resolution, which was supported by 166 UN member states. These consultations complement the Geneva-based efforts. And Professor Kleinwächter has already also mentioned that these negotiations are taking place today and tomorrow in New York. I want to speak briefly about the need for a multi-stakeholder perspective. From our point of view, the global discourse must extend beyond diplomats and beyond military experts. The implications of autonomous weapons systems affect human rights, human security, and sustainable development, and they concern all regions and all people. We therefore advocate a multi-stakeholder approach. Contributions from science, academia, industry, the tech sector, parliamentarians, and civil society are essential to ensure a holistic and inclusive debate. We welcome that the Council of Europe Parliamentary Assembly already in 2023 supported a resolution on the emergence of lethal autonomous weapons systems, which references relevant international and European human rights law. We aim to broaden the discourse through outreach, like we are doing right now, such as the AWS session that we hosted at the Internet Governance Forum in Riyadh last December, and we will continue the conversation at the Internet Governance Forum in Oslo in June. Let me just, in concluding, reiterate the urgency to act. We find humanity is at a crossroads. We must come together to confront the challenges posed by AWS. We think that we are in an Oppenheimer moment. Advocates from across disciplines are warning of the profound risks and irreversible consequences of an unregulated autonomous weapons arms race. There is urgency to finally move from discussions to negotiations on binding rules and limits. And as AWS technologies evolve, the gap between regulation and reality continues to widen. So we need decisive political leadership to shape international rules. We believe that a multi-stakeholder exchange will contribute considerably, and we will remain engaged; my colleagues have been working on disarmament for a long, long time, which is also an element of our active neutrality. We’ll continue the conversation. I’m looking forward to the conversation. Thank you.
Wolfgang Kleinwächter: Thank you, Madam Ambassador. And I will already now announce that we have reserved some time for interactive discussion, because EuroDIG is a dialogue and so we want to get you involved, so please prepare your questions or your comments while we hear from all the panelists. But now a great welcome to Mr. Tallis from Helsing. I think in this context it’s the first time that we have a representative from the industry. But as Madam Ambassador has just said, a multi-stakeholder approach is needed and we have to hear all voices, and it’s bad if some stakeholder groups are sitting in their silos. So you are most welcome and you have the floor.
Benjamin Tallis: Thank you very much indeed, Professor Kleinwächter. And thank you for revealing now that I’m the first representative of the defense industry to speak in this format. I’m braver than I thought in that case. Thank you also to the Ambassador for excellent scene-setting remarks. And coming from industry, I’m obviously here with a very fancy PowerPoint presentation to show you why everything is going to be fine. Well, you’ll notice I don’t have a PowerPoint presentation and I’m not here to tell you everything is going to be fine. My job is at Helsing, which I should clarify is not simply a drone maker. We do make drones, but what we actually do is make battle networks, extending from all sensors to all shooters, using AI to actually enhance the kind of battle networks that we can field, which allow us to make better decisions based on better understanding and take more effective and precise actions. So relating very much to some of the things that the ambassador already mentioned. So we don’t just make drones, and I’m not here to be a salesman for drones or any other technology. My role with Helsing is what they call thought leadership, which involves exactly engaging with third-party stakeholders, with a multitude of different actors, to have that kind of multi-stakeholder dialogue to ensure that we’re aware, first of all, of all the necessary discussions that are going on that affect what we’re doing, but also to make sure that others involved in those discussions are aware of what we’re doing, what we provide, and also where the industry is on these issues. Today I speak on behalf of myself, but you’ll get an idea of where we stand. Now before joining Helsing, I was not a professional defence industry person. I was a think tanker. I was prior to that an academic, and I’ve been a government advisor working on European security in various capacities for about 20 years, including working in the field for the European Union on security missions in the Balkans in the post-conflict period there, and also in Ukraine, going back about 20 years, which is where the start of a long association with that country came from. In those capacities, when I was working also with diplomatic status, I had the chance to engage with people from the Council of Europe, as well as many civil society groups and many others who were deeply concerned with human rights, with the principles of humanitarianism, with upholding the values that actually make our democracies different from the authoritarian regimes by whom we are so clearly challenged at the moment. So with that perspective in mind, that informs the remarks that I’ll make today. It’s no secret that we are in an increasingly competitive and increasingly hostile geopolitical climate. It was mentioned that we’re seeing a destabilizing arms race. Well, I would put it to you, while it’s bad to be in an arms race, it would be worse should we lose that arms race to authoritarian regimes who have far less honorable intentions for their peoples and indeed for the world than our democratic societies do. We can see that one aspect of this competition does involve emerging defense technologies, including autonomous weapon systems, and it’s an area in which we give considerably more care than our adversaries in Russia, in China, and elsewhere do. And that’s good, that’s part of what sustains us as democracies.
And it’s very important that while we work to ensure that we have the military capabilities, as well as the demonstrated resolve to ensure deterrence, we do that without undermining the democratic values that again set us apart and which give our citizens the kind of right to a hopeful future, which is the unique selling point of liberal democracies when they are at their best, and again sets us apart from our authoritarian competitors. Now we’ve seen this competition in emerging defense technology as well as in geopolitical power positioning in microcosm in Ukraine. And while a lot of people would say there’s huge amounts of transferable lessons to be learned from the Ukrainian experience, others would say, well, the Ukrainians have made virtues of many necessities, limitations of their weapons systems and so on, such as lack of air power, that don’t affect us. I think there’s an awful lot we can learn from what’s been happening in Ukraine. Not necessarily, and this might surprise you, not necessarily because there’s something truly new happening. What I would suggest is happening in Ukraine is actually the culmination of a 50-year process of military transformation that began in the 1970s. Many of you will be familiar with William Perry, Undersecretary of Defense at that time in the U.S., who famously said, our aim is to be able to see any high value target on the battlefield, to strike any target we can see, and destroy any target we can strike. That ushered in what was known as the precision networked warfare revolution, which only now do we fully have the technology to be able to exploit through massed precision strike weapons, massed persistent sensors that we can afford to field, and the kind of battle networks that can actually link those things up in a sensible way. What is the evolution there, rather than the revolution, is that because of AI, we’ve been able to make these battle networks efficient in a way that we weren’t before. That means humans are no longer brute-forcing massive amounts of data through networks that can’t handle them. Humans are no longer fat fingering, as the US military calls it, data from one machine to another that can’t talk to each other. We’re now developing the ways that we can get our intelligent machines to talk to each other. So, again, this is not necessarily new. It’s the culmination of that process, but it’s also the beginning of another process, the revolution in military affairs to come from autonomous systems, from robotics, from artificial intelligence, quantum computing, additive manufacturing, and so on. But we don’t know yet what shape that revolution will take, but we need to be prepared for that industrially, governmentally, strategically, and indeed ethically. Focusing today on what we’ve already seen, though, it’s not new in another way either. Everything that we’re seeing in terms of the ethical discussion about autonomous weapon systems, including the strike drones, intelligence surveillance and reconnaissance drones, and other systems being used in Ukraine, and which our militaries are starting slowly to procure, relates to older discussions about military affairs. What we’re essentially talking about is command and control. The whole discussion, or the whole organization of military affairs, has been based on the principle of command and control since time immemorial. What is this? It is the delegation of bounded autonomy to conduct particular tasks. 
And until we get to a stage where we are able to talk about artificial general intelligence, and I’m not that kind of Silicon Valley enthusiast who will tell you it’s just around the corner, I think we’re quite a long way off artificial general intelligence. Until we get to talking about that, what we’re again talking about is the delegation of particular tasks, in this case to machines rather than to humans. Now obviously that has implications for how we understand this, but the principles remain the same. When military commanders delegate to their subordinates, they do so on the basis that those subordinates are trained. They’re trained to do the task required of them. We do it on the basis that they have been tested at doing that. And because they have been trained, and they have been tested in order to be able to be predictable, to be able to be reliable, foreseeable in the things that they do, and thus to be effective also in what they do, they do what they’re supposed to do, we can trust them. And on this basis of training, testing, and trusting, I don’t actually think there is a significant difference between delegation of many of the tasks involved, between delegating to a lower human authority or to a machine. And guess what? We’ve been doing this for a long time. So again, not actually something necessarily new. Any so-called beyond visual range engagement, for example in air-to-air combat, has contained an element of this delegation. Delegation from a pilot, to a radar and targeting system, to a fire-and-forget missile. That’s delegation. Further back still, delegation to dumb bombing. Dropping a bomb over a target to try and hit it, which we were terrible at for an awful long time. Even artillery beyond visual range contains an element of exactly the same question. The difference now is that we can actually be more precise, and we are much more likely to be precise than we were before. And if you do go back and look at the history of strategic bombing, for example, which I doubt is a favorite occupation in this building, but nonetheless, I will prevail upon you. The history of that is that we have been terribly inaccurate and terribly ineffective at that, causing massive amounts of collateral damage. So I would put it to you that actually advances in precision that follow the same rules of delegation are a potential advance for democracies. The other aspect of this, of course, is democracies do not want to fight wars of attrition. We value our people too much. We actually want to have the kind of precise weapons and make use of the kind of asymmetric capabilities that reflect our inherent advantages as societies, our unique selling point of human creativity amplified through the market mechanism and allied to government strategy that give us the edge if we leverage that over our authoritarian rivals. So again, with that said, and I’m happy to talk about an example of this that Professor Kleinwächter asked me to address, what Helsing and others call the drone wall on the eastern flank, but I’d rather do that in questions in order to be able to set out this clear position first of all. So I would put it to you that it’s incumbent upon us to think through these ethical questions, but not to focus or get misdirected when doing so.
Not to confuse means and ends, not to confuse actions with the actors or actants that we delegate them to, and not to confuse quote-unquote killer robots with the kind of battle networks, the kind of technology that can actually put humans where they most need to be by making more informed decisions, faster, in more effective ways that would drive the better kind of actions that democracies seek. Not only to be more precise in doing the awful things that we don’t like to do but we have to do in war, but in order to be able to win and to be able to use our strengths as democracies to actually prevail against the geopolitical and military challenge that we face today, which, if we fail to rise to, would have dire consequences for any of the kind of discussions we’re having today and for our democratic societies more widely. So with that, I’ll leave you there as the opening statement, and I look forward to discussing more on the specifics, including about the drone wall, in the questions.
Wolfgang Kleinwächter: Thank you. Thank you very much. And Anja, you are a representative from an organization of engineers. I think you have 100,000 members in the IEEE around the world. In Riyadh, we had Wim Mohammed, the CTO from Digital Identity, and he gave us a perspective and said, you know, whatever perfect software you have, there are some bugs in it, and so that means don’t trust all this technology. So that means you are dealing with this issue from the technical perspective. So what are your comments on the diplomatic and industry perspectives, and can we trust all this? Thank you.
Anja Kaspersen: Thank you so much, Professor, and I should first, actually, correct you a little bit on the numbers. So we actually are almost half a million members globally, and that just counts the membership, not the larger ecosystem, which is in the millions, and we are across 190 countries around the world. And we have been around for close to 141 years, so this was an initiative that came out of efforts with pioneers like Alexander Graham Bell and Thomas Edison at the time, and that’s why I’m mentioning the history of it, around a core principle of how do you advance technology while keeping humanity safe. And a core part of this work was also then creating standards to make sure that all these good initiatives could also interoperate with one another without, for example, electrocuting us in the process, et cetera, et cetera. So most of you, the way that you’re connecting with one another in this room, you know, be that integrated devices, the Wi-Fi you’re connecting to in the Council of Europe, that’s actually IEEE standards. So almost everything that connects everyone in this room is one of our underlying standards. But I’m just mentioning the history of this organization because we don’t only do that, it’s also about scientific integrity, it’s about dialogue, it’s about scientific collaboration. So that’s what this group is doing worldwide, and why societal issues such as the one that we’re discussing today are not something that we’ve been focusing on only in the last few years, but something that has been at the core of its existence, you know, from the beginning. So if you allow me, Professor, because we all got very strict timelines, unusually for me, I actually prepared some remarks, answering the questions that you just asked me. So first of all, thank you to Austria for the opportunity to intervene on this critical issue. I was lucky enough to be at the inauguration of these efforts, you know, in Vienna last year, in the Grand Palais. And I’m also, I should say, for those of you who may not know me, I have a very varied background, including from diplomacy. And I was also formerly the director for disarmament affairs in Geneva, where I oversaw some of these processes, including the CCW, and tried to make a real push at that time to move, as I called it, away from the 10,000-feet perspective, and down to more practical considerations that allowed actors such as, you know, my colleague at my side here to engage differently in this process. So I think that’s an important thing, how you frame this discussion can be quite alienating, or it can be inclusive, depending, right? And I’m sure from industry, you have experienced that. So I speak today not only from the perspective of the technical community, but also as someone who has long been engaged in international governance, including overseeing these efforts in Geneva, and contributing for decades to initiatives aimed at developing a coherent multilateral framework on the military use of technologies, as well as the broader strategic, operational, tactical, and not least, and I mention this because it’s very important, because it’s often forgotten, cultural and societal impacts, including on civil preparedness. There’s a lot of focus on civil preparedness right now, so what I’m about to say relates to that as much as it relates to the question at hand.
What I want to offer is not a summary of technical challenges, which I think are by now well understood, but I would be happy to field any questions, of course, from any of you after this conference or after this meeting. What I want to focus on instead is a framing of what is structurally at stake and why, from a technical standpoint, some of the most urgent questions remain inadequately addressed. First, we must stop treating AI as a bounded technological tool. AI is not a weapon system in a traditional sense. It is a social, technical, economic methodology, if you will. It reorganizes how war is imagined, operationalized and bureaucratized. It alters the concept of decision making itself, shifting authority away from experience and judgment toward inference and correlation. What this means in practice is that the challenge is not simply how to use AI, but how it reshapes the very infrastructure of responsibility and intent. One concept that is routinely overlooked is commander’s intent. This is not a checklist or an input. It is a deep cognitive and ethical practice about anticipation, discernment and alignment across dynamic conditions. In human-to-human operations, it’s already complex. In human-machine interaction, it becomes nearly impossible. Systems that do not and cannot reason are being asked to infer intent, respond to shifting environments and remain predictable without the contextual understanding this requires. Special forces are trained precisely for this kind of discernment, to override instinct, interpret ambiguity and exercise calibrated judgment. These are human traits, tactical and moral, that no current complex information structure or machine learning system is built to replicate. That brings me to reliability. Reliability is not a static attribute. These systems adapt, drift and behave differently in different contexts. A model may function perfectly and still fail ethically, operationally or politically. It may perform as intended and still degrade trust or escalate instability and trigger proliferation. This is an important point when we discuss compliance with international humanitarian law. Can something be in compliance and still be harmful? Can something be compliant in war but be highly non-compliant in peace? We have to think through these scenarios. Over-reliance is not just a technical risk. It is an operational risk. It is a governance risk. And yet we routinely see systems treated as reliable in ways that ignore context, fragility and institutional constraints. Another important point. Procurement. Not a conversation that happens very often when we discuss these issues. And it’s one of the most overlooked ethical fault lines in my view. Most institutions, military or otherwise, do not build AI systems. They procure them. Increasingly, these systems are pre-trained, modular and abstracted from operational realities. This introduces profound misalignments, especially when end users have little involvement in setting technical specifications. And this relates to any of you that also work in public governance and that may have been involved in your governments’ or companies’ procurement processes. These are very important issues. I’ll do a little flag for work that I think is just important, not because I’m selling anything, but it might provide a lot of insights for those in the room. So IEEE issued something called IEEE P3119.
make a note of it, P3119. It’s a cross-sector global procurement standard, or more like a practitioner’s handbook guideline, that helps organizations, companies, governments, militaries, to interrogate vendor claims, clarify assumptions, and surface hidden risks before integrating or embedding AI features into any form of systems. It includes questions not just for engineers, but policy makers, legal experts, and institutional decision makers. Because this, in my view, and also my institution’s view, is where managing things with ethical considerations and true governance begins. We must also be cautious about the language used to frame the systems. Terms like responsible AI, trustworthy autonomy, or ethical automation suggest a coherence and controllability that do not reflect how these systems actually operate. From a technical perspective, these labels often obscure the fact that many of these systems are built on failed approximations, trained on proxy data, deployed in contexts their designers never anticipated, and governed by assumptions, including about winning, what is winning in today’s battlefield, right? And dynamics that are not always visible to users. The failures that will matter are unlikely to be those we plan for. They will not look like system crashes. They will look like misalignments between logic and lived reality. Instead of projecting responsibility onto the system, we should talk more seriously about responsible decision-making processes at the human and institutional level. Responsibility lies not in the tool, but in the processes and choices that govern its design, deployment, oversight, and use. When that distinction is blurred, the vulnerability becomes harder to trace and governance risks become symbolic rather than substantive. Everyone in this room knows that data is the very backbone of AI-enabled systems. We have heard it here at EuroDIG. And yet, despite this recognition, data often remains backgrounded in this debate, treated as ambient infrastructure rather than a strategic asset. But data is never just there. It is collected, conditioned, labeled and selected, always by someone, for some purpose, under particular constraints. We must therefore ask: whose data is being used? How was it obtained? Why was it chosen? And for what outcome? These are also important questions in this debate. Questions of data integrity, veracity, provenance and security are not academic, nor do they pertain just to the civilian domain. They are central to both performance and trust. The risks of tampering, poisoning and silent drift are real, particularly in military and intelligence contexts. If we do not account for the full data pipeline, we cannot account for the system. It’s very important we talk about weapons reviews. This brings me to infrastructure, because AI systems do not operate in isolation. Most current deployments rely heavily on legacy hardware and network-centric architectures that were not designed for systems with autonomous features. These architectures introduce friction, fragmentation and vulnerabilities, especially when retrofitted to accommodate high-intensity compute loads. This also risks undermining interoperability, particularly in joint or cross-force environments, where systems are expected to function across organizational, national and technical boundaries.
This is precisely why robust, internationally applicable technical standards are so important in this domain, especially where systems must communicate, adapt and escalate decisions across contexts and constraints. And this leads directly to the question of energy. Advanced AI systems, particularly those involving real-time inference or large-scale simulation, are computationally intensive. That means that they’re highly energy intensive. So, any serious conversation about AI, as well as cyber-reliant or network-centric warfare, is not just a conversation about power in the geopolitical or socio-economic sense, it is about power in the literal sense. Electricity, resilience, energy availability, and infrastructure security. Governance frameworks that overlook this are not just incomplete, but strategically short-sighted. This is why our anticipation strategies must change. Governance must shift from a logic of prediction to one of adaptation. Systems need to be designed not only to perform, but to fail safely and visibly. That requires institutions to develop memory, reflexivity, and the ability to surface weak signals before they become structural liabilities. Here I would also flag another process that maybe even some in the room have been involved in, because it’s been large-scale work for years: the IEEE P7000 series. It was developed around how to guide ethically aligned design across sectors by supporting practitioners in identifying stakeholder values and translating them into system requirements from the outset. When this approach was launched, now many years ago, and has been adopted across the world, it caused a critical shift in understanding that ethical considerations must be architected into design, not added later just as an assurance. Because design decisions are never neutral. They determine what is seen, what is measurable, and what forms of harm and risk are rendered invisible. These decisions shape how systems respond to ambiguity, and how power and discretion are distributed. They are political, even when framed as technical. And once baked into architecture, these choices often become inaccessible to oversight or review. Governance must begin by recognizing this. Effective oversight is not simply a matter of control at the point of use. It depends on tracing responsibility back to the layers of abstraction and specification where many of the most consequential decisions are made. This includes questioning who designs, for whom, with what assumptions, against whose values. And I’ll come to the end here. I just want to say that language plays a key role here. As I mentioned before, a few years ago, while working with the CCW state parties, I led what we call a computational text analysis of national statements and working papers. And it revealed a striking difference in how core technical and military concepts were framed, particularly around definitions, system limitations, mission command and human oversight. And I see this divergence still persisting today. And it continues to undermine efforts to build a shared foundation of governance. And I just give this example. And I’ve been in multiple, I’m part of multiple multilateral efforts. And I see this being a common trend. A term like redundancy might refer to fault-tolerant architecture in engineering, but to inefficiency or duplication in policy. Safety might indicate statistical reliability in one field and protection, or humanitarian protection, in another.
Even the term reliability can refer to technical precision, political stability or normative acceptability. These are not minor misunderstandings, they shape procurement, deployment, review and oversight. And they create governance gaps that are filled by assumption. What matters is not just taxonomy, but comprehension. So understanding how terms are used and understood in practice is essential, particularly if we are serious about building a governance framework that focuses on convergence around baseline standards. This is urgent. And I would just want to conclude by saying I want to return to an ethical point, speaking strictly in my personal capacity. In his work, Christopher Coker, my late professor, he was with the London School of Economics, warned of the dangerous illusion that technology could sanitize violence, that increased automation or distance could somehow make war more humane. It cannot, nor can it help us to define what winning means, nor should it. Technology may obscure the moral weight of decision making or create abstraction where there was once contact, but it does not eliminate responsibility. So the challenge before us is not simply one of technical control, it’s about governance and about the kinds of institutions and cultures we want to build. It is about listening, not for consensus, but for the conditions that allow disagreement to be meaningful and oversight to be real. And I think that’s something this conversation could really benefit from. Thank you.
Wolfgang Kleinwächter: Thank you very much, Anja. As you see, if you are digging deeper, complexity is growing. And I think this is a good opportunity in this environment here to get many perspectives so that we get a full picture. We will hear now three shorter comments online and then I hope we can enter into a discussion with Q&A. So, Chris, you have a couple of minutes just to comment on what you have heard, and with your background, you are best positioned. I introduced you already. Chris, you have the floor.
Chris Painter: Great. Thank you. And it’s been a good discussion. Hopefully you can hear me. Can you hear me all right? Yes. Okay. So I come at this from a cybersecurity perspective, and that’s been my background, certainly. And a couple of things. One was just mentioned, you know, the vulnerability of command and control systems, including AI systems, to cybersecurity attacks. And that’s not something that’s new, but that’s something that’s a challenge. We’ve talked about this in the nuclear area, with nuclear command and control, that even when they separate them from the Internet as a whole, there are other dependent systems that could be susceptible to attacks. So aside from all the concerns about how AI is trained and how it’s used, there is also a concern about whether it is made less reliable because of cybersecurity attacks by adversaries, who could make this much less reliable and amp up all the problems we talked about. The other thing, I think, is we’ve also talked in the cyber realm for a long time, in terms of cyber offensive operations, about the speed of the Internet and how we have to respond faster, and about automating cyber offensive operations to take humans out of the middle. Now, those are likely not as destructive as the attacks we’re talking about here with kinetic weapon systems, but they could be destructive. They go after critical infrastructure and others. There’s long been a debate about how autonomous that can be, for all the reasons that we just heard, how it’s trained, how it’s used. And I think that poses a huge problem here. And I don’t think we have a real solution to that without having humans still in the middle, rather than having an entirely automated system. And then the final thing I want to talk about is the geopolitical considerations. And I know there is an OEWG looking at this, or a GGE looking at this in this context. And there’s been an OEWG of all the countries in the cyber context, the cybersecurity context. But the problem there is more true than ever before. And I don’t want to be too much of a damper on this. The geopolitical considerations outweigh any ability to really reach an agreement. And though I applaud the effort to try to do some binding approach to this in the UN, I think that’s going to be, at least in the short term, very, very difficult. And that’s what we’re seeing across the board in cyber and all technological issues, really in all issues, where there’s such division within the UN and other international venues. And we’ve seen the US, for instance, I think, pull back from any kind of AI guidelines that would establish guardrails, for the reasons that were noted of not wanting to constrain themselves, which is coupled with the lean to be more offensive in cyberspace, but also in other areas too. And that complicates this issue as well. So not to paint an overly non-rosy picture, but I think there are a lot of concerns on the horizon. And that doesn’t mean we shouldn’t talk about this. It doesn’t mean we shouldn’t have these efforts. I just don’t have a huge amount of confidence we’re going to make progress in the short term.
Wolfgang Kleinwächter: Thank you very much for your realistic outlook. And anyhow, it’s on the table and we have to discuss it. So Stop Killer Robots as an NGO has been involved in this from the very early days. And Sai is with us from India. Sai, perhaps you could comment on what you have heard this morning.
Speaker: Well, these were really interesting conversations that I heard. I’m really glad to be part of this. Thank you so much for having me here. As part of Stop Killer Robots and from civil society, one of the biggest concerns is that we believe autonomous weapon systems will not be able to deal with the ethical, legal, humanitarian and moral implications that they present. And especially they will not be able to comply with international humanitarian law and various provisions of it, including distinction and proportionality, being able to differentiate between a combatant and a non-combatant, and so on and so forth. Apart from this, military technology historically has had examples of percolating into civilian uses. These then don’t just create problems for international humanitarian law, but also raise questions about the implementation of other international law, like international human rights law, international criminal law and so on and so forth. So I think it is very important at this present state of geopolitics to also assess properly how international law will be upheld with the advent of weapon systems such as autonomous weapon systems. What we believe is that the way forward is through a legally binding instrument on autonomous weapon systems that completely bans autonomous weapon systems that are not able to comply with international humanitarian law, and regulates other weapon systems which cannot be used with meaningful human control, or which otherwise lack basic understandability or the ability to hold people accountable, as international humanitarian law requires. Because there’s a paucity of time, I will stop there, but these largely seem to be our issues with autonomous weapons systems. Thank you.
Wolfgang Kleinwächter: Thank you very much, Sai. My understanding from the discussion in the CCW is that they have agreed on a two-tier approach. They said, okay, probably we could prohibit weapons systems where human control is impossible, and we can regulate weapons systems where you have a certain type of human control. But the question of what type of human control is realistic, this is another question. But I think to have this differentiation is important, to have at least a realistic way forward. So that means, you know, if you cut it into smaller pieces, it’s probably easier to negotiate. We have now the rolling text, and let’s wait and see what will happen until the end of 2025. And, you know, Guterres has set a deadline of 2026 for a legally binding document. Chris has just told us that this is rather unrealistic against the background of the geopolitical tensions. So I think all these are open questions on the table. But before I ask you to prepare your questions, let me move to Elena Plexida from ICANN. I think Anja mentioned also the infrastructure which is needed, and ICANN manages one of the most important infrastructures in the digital world. It’s the domain name system, the root server system, and so that means everything which goes over the Internet needs a functioning ICANN, a functioning IP address and domain name system. So, Elena, you are not directly involved, ICANN is not directly involved in this debate, but you could be affected. So what is your view about this rather, not totally new, but new issue in this Internet community?
Elena Plexida: Thank you, Wolfgang. Thank you very much. Hello everyone. Yes, exactly. As you said, I work for one of the organizations that help maintain what we all know as the global internet. And in fact, the global internet and maintaining it and the work around it is a collective effort. There’s a togetherness in this one. It’s a peace project in fact. So being part of this discussion for me is a little bit remote. But then again, peace and stability is something that you have to work for and safeguard. Hence, the discussion about rules is really relevant. Others did mention the current geopolitical ecosystem and the deterioration and the difficulty in such an ecosystem of agreeing around norms or rules. But I would say that particularly because of this deterioration, adhering to existing norms or creating new ones where they are needed are super, super relevant. As regards technological developments, again, they’re not in our sphere. As you said, Wolfgang, quite rightfully so. But to me, it seems that the technological developments are so fast that, if my understanding is correct, it makes it even more difficult to land on an agreement with respect to the use of autonomous weapon systems. Then we have two challenges, really: the difficulty of creating unbiased AI systems, and the possibility of jailbreaking AI systems through prompt engineering. Here, I want to highlight the undoubted value of and the need to involve technology experts in conversations such as the development of norms or regulation for the use of autonomous weapon systems. As the ambassador said at the beginning, and of course other experts, a holistic debate is indeed needed. Maintaining meaningful human control is one of the problems apparently. Then in addition, the use of such systems by non-state armed groups, if you will. Those are not really issues that we at ICANN are into. So I go directly into the norms, the suggestion of or the idea that there needs to be norms. And I think Chris mentioned that, if I’m not mistaken, kinetic weapons seem to be perceived as weapons that can do much more significant damage, including to the infrastructure that maintains the internet. But those kinds of systems would also do that. So I would say, undoubtedly, the most important thing is to look at the human aspect and look at norms or regulation that make sure that we do not dehumanize, so that we do not harm people. But if I may, I would say that together with that, we should also be looking into norms that are about the infrastructure. And here, I will repeat one of my favorite norms, which comes from the Global Commission on the Stability of Cyberspace that you know very well, Wolfgang. And it’s the norm about the public core of the internet. So to make sure that such systems, and other weapons of course, but such systems do not harm, or if you will, weaponize what we call the public core of the internet, technical parameters that are absolutely essential for the internet to function, such as the protocols, the DNS, the IXPs, cable systems that support entire regions or populations. As that would constitute a threat to the stability of the global internet, and in turn, a threat to the stability of cyberspace. And the internet is a common good. And as I said at the beginning, I think it’s a peace project. So yes, putting some thought into not threatening it, together with other norms that are being considered, is something to add to the conversation. Thank you very much.
Wolfgang Kleinwächter: Thank you, Elena. And good to remember the recommendation from the Global Commission on the Stability of Cyberspace a couple of years ago, that the public core of the internet is seen as a common good, and that an attack against the public core of the internet, this was one of the conclusions from the Global Commission where I had the honor to be a member, would be seen as an attack against mankind. Because it’s like polluting the air or something else. This should be seen as a crime. So the question is, with what we see now with all the attacks against cable systems and other things, how far this will go, and which role AI could play in attacking the public core of the internet. So this is a big challenge, a complicated question, and so we have to do something to avoid this, and law can be an instrument, but as we have seen also from the debate, it’s difficult to reach an agreement in a geopolitical situation where we have more polarization than harmonization. Anyhow, we have reached now the moment where I would ask for questions from the floor. We have also some online questions. So if somebody wants to ask a question from the floor directly, yeah, one and two, and please introduce yourself, and if you direct the question to one of the panelists, make it clear. So it’s always better to ask a panelist directly than to ask a general question, otherwise we will have a certain confusion about who replies best. Okay, you go first.
Audience: Good morning, I’m Brahim Alla, intern at Acedel in Strasbourg. I wanted to ask very quickly a question related to, for example, the recent events in Spain. Would it be possible to imagine shutting down areas or regions or even countries on a voluntary basis as a future modern warfare strategy, and if so, do you have insights about the influence of such behaviours or events on autonomously guided weapon systems? Thank you. My name is Frances, and I’m here with YouthDIG. I had a question, I think, for Benjamin. So I do agree that just because there are major ethical concerns, that doesn’t mean, I mean, obviously that means we need to think about this more, because it could influence warfare and practices in warfare so much, so it’s something that people are going to want to mechanise and utilise. But I’m not asking about war, but rather limited force, because if you think about how America, and especially under Obama, a lot of drone strikes were utilised, we see that democracies, even though they want to protect themselves, even outside of war, they also want to assert their ideologies, right? So, I think that if they have a technology that’s more precise, that doesn’t have any human costs to people of their own country, this, I think, would lead to overuse of these kinds of technologies, because now you don’t have civilian losses, but you have serious damage to people in those countries because of psychological harms, of possible strikes happening at any moment by technologies that aren’t operated by humans. And so it’s not only the precision and the people who are targeted specifically by this, but I think it leads to overuse and also a mental disconnect, right? Because now you think, well, we’re only targeting the bad guys, but also what data is telling you who are the bad guys, and what assumptions are being made by these autonomous weapons? So I think in limited force, do you think this will lead to even democracies overusing this technology? Because I think the difference here is that there’s no human cost. So it’s not like delegation. So then you get massive asymmetries in warfare and limited force because now democracies aren’t losing anyone. And so I think that’s the crucial difference that I would love to hear your opinion on.
Wolfgang Kleinwächter: Thank you. Good questions. Now we go back to the online questions. Could you read them?
Moderator: So yes, a question we have online is: would you consider a scenario wherein an enemy does not buy or make drones, but develops a counter-AI battle system to hack into even an elaborately secured battle AI system? For instance, taking over weapon-mounted systems in the air or on the ground, redirecting and counter-targeting the drones that they do not own. Would such a scenario be even remotely realistic? Okay, thank you. That's a good question. I think it's primarily for Benjamin and Anja, and the first and the last questions are actually for Chris. Okay. Then I would also ask Chris, if you could…
Wolfgang Kleinwächter: Benjamin first. Sure. Benjamin first. Okay, go ahead.
Benjamin Tallis: Thanks. Yeah, something very brief to say to all three. I do have points to come back to on Anja's excellent presentation as well, but we'll see. Do you want responses to the other panelists? Yeah? Okay, very good, this is the right moment. So, very quickly to Brahim: great question. It's about resilience, grid resilience in this case, and it's a classic case of one of Anja's misconstrued or multiply construed terms. Inertia was the key in Spain, which is the ability of a grid to withstand fluctuating power flows. Is that vulnerable to cyber attack? Yes. Is it vulnerable to multiple kinetic attacks by uncrewed systems? Yes, it is. So what is the answer? Build grids with more inertia, and distribute the power across the grid, distribute the control across the grid, which is precisely what edge computing and other advances like that allow you to do in military and non-military networks. That means putting the compute power in distributed locations rather than concentrating it in a central location, which is an easy hit. So, very quickly, that was the first one. Frances, superb question, very much conditioned by the misadventures and terrible things that the West did in the last 25 years. The problem is not with the technology, I would argue. The problem was with the intent, the problem was with the analysis, and the problem was with our hubris there. There are big questions now about how we order a world that is not only safe for democracy but in which free societies can thrive, learning from those huge errors, which had massive human costs. Where the technology comes in relates to the Chris Coker point. Chris I knew as well, and I knew many of his students. There is the whole notion of virtuous asymmetric war: that you are detached, that you wage war through the screen and so on, and that this relieves you of your human responsibility. That was quite widely shown by studies not to be the case for drone operators, who suffered considerable stress. Now, you might say that is nothing compared to what those on the receiving end were getting, but at the same time it shows there is not actually such a disconnect in the same way. And we are not in that situation anymore. We are not in a situation where we are fighting quote-unquote wars of choice. We are not fighting limited wars with much weaker adversaries for marginal interests. We are in a situation of great power conflict; we are in a situation of peer conflict. There is no one in Ukraine who would tell you that the use of drones is a substitute for all the other systems they have; it is not a single silver bullet. And second, there is no one in Ukraine, and no one around the world should believe, that Ukraine is not losing people because it is using drones. We are facing a very, very different combat environment. So while I can see the logic of the question, I don't think it is the logic we should be looking at right now, because I don't think it applies to the combat situations we are actually likely to be in, which also relates to the question about the drone wall. On the comments from Anja, and I will come to this as quickly as I can, there was so much I agreed with, as with the comments from Saeed and others online. And I agree with Chris's point about the geopolitical difficulty of reaching a regulation on this. Normally, we only see regulation of new weapons types when there is an interest of the parties that operate them, when they have actually tried them and found out either that they are massively consequential in human terms, or that they don't work, or that they cause blowback.
So, for example, the regulation of gas warfare after the First World War. But the crucial points that come out of all of this are intention and accountability. I would argue that the use of advanced battle networks now actually gives you the chance to restore mission command; it gives you the chance to restore commanders' intent by allowing commanders to focus on those key decisions. That is something we have been talking to militaries a lot about. They are very keen on restoring that in a way that can actually be communicated, but which copes with a proliferating, very confusing battlefield, full of diverse systems and multiple inputs that they have to deal with in a way they haven't had to before. On procurement and end-user requirements and so on, having been through procurement processes, I disagree with the analysis that was presented. The crucial part that we have certainly experienced, and many others in our position (I mean, Helsing is the biggest new defense company in Europe and the biggest defense AI company in Europe, but there are many others doing similar things), is that we have to work very, very closely with the customer, which is the government, and with the end users, which are the military, in order to understand the capabilities, the technical specifications, and the bounding, the way that we can actually put guardrails on what is being done. You mentioned correctly that most defense companies don't actually build AI, they procure it. We are different; we are AI-first. That is one of the reasons we think this is a better approach, because adding AI or adding software onto hardware has proven to be a very expensive, very ineffective way to build systems that can actually work in dynamic environments. We do it the other way: we are software-defined, we build from the AI out, and that is why we then started building drones, because we realized we could build drones better ourselves than by adding our software to other people's drones. The same thing applies to future systems. We are stuck mentally, when thinking about military things, on tanks, planes and ships. That is not how we should be thinking; we need to be thinking in terms of capabilities, effects and networks. Why software-defined? Because software can be updated and corrected much more easily than hardware. What is crucial with all of this is not only the intent, which we have now discussed quite a bit, but the accountability that you mentioned. And accountability, I think, comes in two ways. First of all, you have to know whose intent it was, what orders they gave, what command was actually given, to which human-machine combination they delegated it, and then what the effects were that they should be held accountable for, and whether you can trace it back. The second part of this is about explicability, as they call it, and this particularly relates to artificial intelligence at the moment. The beauty of artificial intelligence, which is why people want it, is that it reasons in ways that humans don't. We want it to do that because it makes decisions that we can't in the time available. However, that creates the problem that we don't know why it did what it did. Well, newer artificial intelligence builds in explicability, as it's called. This is still a progressing science, which is why we have to be very careful about the steps forward that we take, but it means that the AI will give an account of why it reached a decision.
Now, you could say, well, what if the AI is trying to trick you? Well, can the AI trick another AI that is trained to trace this stuff, and so on? So what we are into is a progressive iteration of explicability, which allows you to get to the reasoning that was used in order to be able to provide correction over time. That is actually better than we can get with some humans; as we have seen over time, it is very difficult for humans to give an account of why they have done certain things. Humans, for all their ethical qualities, can also lie, they can also obscure, and they may not even be sure why they did something. So when thinking about this, we have to again think of those two points of intent and accountability, while recognizing the geopolitical situation we are in and taking advantage of the technologies we have in order to make sure that we can actually defend our democracies. The very last point: why do we actually need this stuff, in military terms? One answer is that our adversaries have it. The second is that technology is advancing in ways we can use to make sure that we don't have to fight wars of attrition. Now, while it is not the case that we simply won't lose anybody on the battlefield, as per Frances's question, we don't want mass casualties, and we do not want mass conscription if we can avoid it. We want to use our technological edge. It used to be the case during the Cold War that Western precision and the Western quality of weapons were used to counteract Soviet mass. Now the equation is different: we can have precise mass, and we can actually afford it, and we have to think about that when we are allocating defense budgets in times of scarce resources. We are going to need to put more money in, but how do we get the most effect for that while still maintaining the kind of democratic societies that we believe in, in other ways? I will leave it there, because that was already a long answer, but there is a lot we could go into further in discussion: how to respect international humanitarian law, the history of that with autonomous and semi-autonomous weapons, including anti-tank mines and so on, and how that is actually enhanced by the kind of sensor and data fusion that is now possible using the new kinds of battle networks that are out there. Thank you.
Chris Painter: Very, very briefly. On the Spain question: absolutely, it is possible; it is already happening. Russia is doing this against Ukraine. The whole reason we have a norm against attacks on critical infrastructure is because that is what happens. So if Spain was a cyber attack, that would hold true there too. And on the issue of attacking drones, or attacking AI systems: absolutely, that is one of the worries. Especially if an adversary does not have the financial wherewithal to build expensive networks, expensive drones, expensive AI systems, then attacking those systems and making them less secure is exactly what an adversary would do.
Wolfgang Kleinwächter: Okay, thanks. Are there more questions in the room? Or Anja, do you want to react to what Ben just said?
Anja Kaspersen: I'll say this: I think it is an honour to Austria and to yourself, Professor, because you actually brought very different views onto this panel. And I always say, when I talk about this issue, that the most important thing you can leave the audience with, both those in the room and those online, is good questions to ask. When you heard me talk, and you heard Chris, and you heard Benjamin, you heard different viewpoints, even though we more or less aligned on some of the technical challenges, and I hope people leave here with really good questions. Is this what's desirable? Is this what we think? Do we believe that commander's intent, that human intent, can be translated in the way that was just described by Benjamin? I will make a small correction; I can't remember who said it. There is a common understanding that these things are being developed in the defence-industrial complex. But what is the big shift? There are two big shifts, right? One is that what used to be the defence-industrial complex has moved increasingly into the civilian commercial space, and more and more of the technologies that are now game-changing are being brought back into the military space. So who is actually creating and setting the parameters has shifted somewhat. I am not saying this is aimed at you; I understand your company operates differently from other companies, and I respect that. There is also a trend, to the point about procurement, that more and more is bought off the shelf, because otherwise it takes too long. There is no time; there is a perception that time is not on our side, geopolitically and otherwise, so you don't invest the same amount of money into doing the specifications and following the traditional methods of procurement and acquisition that were traditionally used in this field. So there are some changes, and I am not saying your company is in that category. Overall, those are just my comments; I have many, many more, which have more to do with the bigger philosophical questions, including the technical issues and some of what Benjamin implied and said. But having such different viewpoints on this panel allows people to really go out with some real considerations. I always say that one of the missing things in our current discourse is the inability, or the diminishing ability, to just sit with contrasting realities and be uncomfortable. I think it is worth being uncomfortable with this space, and we have to be able to sit with contrasting realities and navigate that space without getting upset. We have been smiling at each other the whole time, even when he has been saying things that I fiercely disagree with. I am nodding because we may agree on the technical side, but we may disagree on what the impact would be and how okay we are with that. Those are just different views, and that is what ethics is about: it is about your outlook, it is about navigating uncertainties, it is about sitting with the discomfort of the trade-offs that will inevitably be the result of this discourse, no matter what we do. So thank you.
Wolfgang Kleinwächter: Anja, you are so right. And I hope you will continue the debate in Oslo and beyond Oslo, because this will keep us busy, hopefully before the Digital Winter comes, so that we still have some space we can use to avoid what some people have called a Digital Hiroshima. There is still room to find a consensus to avoid the worst things. But we have one additional question online. And is there a question in the room? Because then, more or less, we have to come to an end, since the big plenary is waiting.
Moderator: If there is no question in the room, then the final question, from Monika online. A question for Ben: delegating the selection of targets to AI programs has resulted in considerable collateral damage in the Israeli war against Gaza. When, would you say, is software safe enough to be delegated such tasks? Who should be held responsible for illegal collateral damage inflicted: the state using the software, or the companies developing and selling the software as precision tools? Who has to take responsibility for such hallucinations of AI tools? Good point. Ben?
Benjamin Tallis: Thanks for that one; it is nice that people are engaging. First, I really want to back up what Anja said: this has been a terrific experience for that reason, that we have had the chance to productively disagree. And I hope the point stands that it is not only about ethics; it is about what democracy is at heart as well: different points of view making their case in an arena. So again, thanks to you for convening this. On that particular case, and without commenting on particular instances, this is a history-of-warfare question; it is nothing new. Is it the supplier of the weapons, the supplier of the bullets, and so on, who is responsible for the effects that they have? And I think we have to be extremely careful here. We do not want to confuse our rightful distaste, our rightful hatred, of the awful outcomes that result from war. War is awful; that is the plain, simple truth. War is something we would rather did not happen, at almost any cost, although, as Ukrainians would tell you, some things are worth fighting for, and that includes their democracy and their freedom. That is what I would hope we would like to see in Europe too, which is why we need to be so well armed that it does not happen, that Putin does not look at us and see an opening. This is part of the point about building up deterrence. Now, in terms of accountability, which is the essence of the question: the same rules apply as to other forms of warfare before. Who was responsible for the My Lai Massacre? Well, you could look up the chain of command, you could look at the individual perpetrating it, you could look at the other individuals who did not stop William Calley and co. doing what they did. It is a complex question that has many, many parts to its answer. The question of whether autonomous targeting is responsible is a question of setting the boundaries, and this is why my company and many others want to work with democracies who set proper boundaries, who actually set proper limits and guardrails for how you use that AI. And if they don't, then that can be the system that is provided; how it is then used is ultimately up to the military and the democratically elected governments concerned. So I think there is a key point there in understanding where the political responsibility lies, as well as the command responsibility, and then the frontline responsibility, which all play into the question. One very last point, because Anja made a really interesting observation about technology shifting from the military world to the civilian world. I would actually argue that what we are seeing now is the true shift of the civilian world into the military world. Anyone who has read Christian Brose's book Kill Chain, which I would highly recommend, despite the title being off-putting to some, or even DIUx, or any of these other books on military innovation, will know that buying off the shelf, exactly as Anja said, is key in many ways. You can now buy off-the-shelf sensors and off-the-shelf interface tools, like phones or iPads, that by using AI you can actually upgrade to military-level quality and effect. I would argue that what we have seen is the military world catching up with the technology of the civilian world, but of course it has different consequences when you are using those systems to strike human and military targets rather than to order an Uber.
So we have to have serious conversations like the one we are having today.
Anja Kaspersen: And thank you all for engaging so richly with that. I have been doing arms control and disarmament work for a long time, even back when we had the Conference on Disarmament fully operational, and there is a very important thing to say about an instrument: as you know yourself, first, it takes time. Some of the most effective arms control instruments did not take just a few years; they took nine years, eighteen years, like the Chemical Weapons Convention and the Biological Weapons Convention. I am not arguing that we should spend that long on anything happening in the process you are leading. But with the Chemical Weapons Convention, the big transformative shift for the conversation at the UN came when the chemical industry started engaging. I mention that because we are trying to reflect creative disagreement here: they saw the benefit of having a regulated space, to make sure that the edge cases and edge uses, whatever was not set up to be transparent, visible and accountable, would be flagged and ruled out. So having all industries, and those proactive industries, involved is, as we have seen with other arms control instruments, very important to make sure that what is agreed upon is implementable. I just wanted to share that observation.
Wolfgang Kleinwächter: So this is an additional argument to involve many stakeholders, to get the full picture and then to find something which could be a dynamic consensus in the future. We have reached the end of our time, and I would now ask for some concluding remarks. Thank you.
Aloisia Wörgetter: Thank you for a fascinating panel. I will take home all the praise that Austria has been receiving for hosting this, and be assured that with your positive motivation we will continue to do it. Fascinating discussions. It is absolutely true: maybe we are not there yet, but I am really optimistic, because we are not in controversy, we are in deliberation, and this is why you are disagreeing and still smiling at each other. It is absolutely about avoiding unintended consequences for human rights, the rule of law and democracy, and of course it is about the question of intent; this could also be an Oppenheimer moment for philosophy as such. I am much more optimistic than you are about whether we will get an agreement, because this is not only about industry and governments. The discussion on artificial intelligence mobilizes different segments of society globally, and therefore, in a global democratic process, we have a chance to go further, because different people are looking at it and are guiding us. So thank you, and enjoy EuroDIG for the rest of the days in Strasbourg. Thank you. The meeting is closed.
Wolfgang Kleinwächter: Thank you to the panelists and to our moderator for the insightful discussion, and thank you to the audience for the active involvement as well. The next session, the opening ceremony, will be at 15:00. We look forward to seeing you then. Thank you.