Innovation and ethical implication – TOPIC 03 Sub 01 2024


19 June 2024 | 10:30 - 11:15 EEST | Auditorium | Video recording | Transcript
Consolidated programme 2024 overview

Proposals: (#27) #40 #42 #48 (see list of proposals)

You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published at the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.

Kindly note that it may take a while until the Org Team is formed and starts working.

To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.

Session teaser

As Europe embraces artificial intelligence (AI) and other emerging technologies, ethical considerations become increasingly crucial. This session delves into the ethical implications of AI, machine learning, and related technologies, exploring issues such as bias in algorithms, privacy concerns, and their impact on human rights.

Session description

This session invites participants to engage in discussions on advancing digital freedoms and ethical technology in Europe at the upcoming EuroDIG meeting. As Europe embraces artificial intelligence (AI) and other emerging technologies, ethical considerations become increasingly important. The aim of the session is to address critical issues at the intersection of technology, human rights and societal well-being. It will examine the ethical implications of AI, including algorithmic bias, privacy concerns and potential impacts on human rights, and advocate for responsible AI frameworks.

Format

To ensure an interactive session, a poll at the beginning will ask participants to share their opinion on what they consider to be the main ethical concerns of AI, or their views on AI's impact on human rights. Based on the answers, key participants will then share 3-minute insights on the topic from their fields of expertise. This will lead to a further, more informed debate between the audience and the key participants, including an exchange of comments and Q&A.

Further reading

Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example of an external link: Main page of EuroDIG

People

Please provide name and institution for all people you list here.

Programme Committee member(s)

  • Meri Baghdasaryan
  • Minda Moreira

The Programme Committee supports the programme planning process throughout the year and works closely with the Secretariat. Members of the committee give advice on the topics, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and monitor the complete programme to avoid repetition.

Focal Point

  • Lina Lelesiene, Head of Information Technology Division at the State Data Protection Inspectorate of the Republic of Lithuania

Focal Points take over responsibility for and lead the session organisation. They work in close cooperation with the respective Programme Committee member(s) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.

Organising Team (Org Team)

List Org Team members here as they sign up.

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

Key Participants

  • Vanja Skoric, Program Director at ECNL | European Center for Not-For-Profit Law Stichting
  • Nicola Palladino, Assistant Professor of Global Governance of Artificial Intelligence, University of Salerno, Founder Member of the Digital Constitutionalism Network (Center for Advanced Internet Studies, Bochum, DE), Expert Member of the ISO JTC 42 on Artificial Intelligence
  • Marine Rabeyrin, EMEA Education Segment Director, Lenovo
  • Thomas Schneider, Ambassador, Chair of the CAI, Vice-Director, Swiss Federal Office of Communication (OFCOM), Swiss Federal Department of the Environment, Transport, Energy and Communications (DETEC)

Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder-balanced dialogue that also considers gender and geographical balance. Please provide short CVs of the Key Participants involved in your session at the Wiki or a link to another source.

Moderator

  • Prof. Dr. Paulius Pakutinskas, UNESCO Chair on AI, Emerging Technologies and Innovations for Society.
    A full Professor at Mykolas Romeris University (MRU) (Vilnius, Lithuania), he is Head of the Legal Tech Centre and Director of the Legal Tech LL.M. and the Law, Technology & Business master’s degree programmes at MRU.
    He is a Board member of the Artificial Intelligence Association of Lithuania and works extensively on ethical and regulatory issues related to AI, including in the medical and military fields.
    He has studied, interned and lectured at the Norwegian School of Management BI (Norway), the University of Cambridge (UK), Tel Aviv University (Israel), Kanagawa University (Japan), the Re:Humanize Institute (Denmark), Singularity University (Denmark), Vilnius University Faculty of Law (Lithuania) and the International School of Management (Lithuania).
    He is an experienced senior legal executive with a long history of working in the telecommunications industry, and holds a Doctor of Philosophy (Ph.D.) focused on technology and business.
    His interdisciplinary research, projects and publications all concern the interrelation of law, business, emerging technologies and innovation, including AI, ICT, new technologies, cybersecurity and harmful content on the internet.
    Together with Lithuanian laser and biotechnology scientists, he founded Vital3D Technologies, a high-tech startup working with cutting-edge technologies in lasers, materials, biotech and AI to develop advanced 3D tools for the future of personalised medicine. He is a Board Member of Vital3D Technologies.

The moderator is the facilitator of the session at the event. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide a short CV of the moderator of your session at the Wiki or a link to another source.

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter

Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

The proliferation of AI-related initiatives and documents and the adoption of regulatory and human rights frameworks are key to fostering users’ trust in AI technologies, to tackling the complexity of AI and its applications, and to providing solutions tailored to the specific needs of diverse stakeholders. A multistakeholder approach to AI governance is crucial to ensure that AI development and use are informed by a variety of perspectives, to minimise bias and to serve the interests of society. A pressing ethical concern is the military use of AI, which is yet to be addressed by existing regulatory frameworks and will need more focused attention in the near future.

Video record

https://youtu.be/LqYTUUbWKw8

Transcript

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Transcripts and more session details were provided by the Geneva Internet Platform


Dr. Paulius Pakutinskas: Yeah, so, hello. I’d like to invite the only panelist participating in person, Thomas Schneider, to the floor. And as was introduced, I’m Paulius Pakutinskas. It’s the same kind of complicated family name as Tomas Lamanauskas has, so you can just call me Paulius. And why am I here? To discuss some very controversial things like innovation and ethics, and maybe some regulation, because ethics is part of regulation at the same time. Our other panelists are online: Vanja Skoric, Nicola Palladino and Marine Rabeyrin, so I hope they will join us in some way. In a few minutes I will ask all our panelists to introduce why they are here and how they are connected with the issue. But as a first step, let’s have a little game. Do you have your phones? Please take them out so we can have our audience poll. Please use Menti: join at Menti.com and use the code, here it is, 12066360. Oh, I see our participants are joining. Okay, hello, Marine, Nicola. And we have a question: what do you consider to be the main ethical concerns of AI? Please vote. The options are privacy and data protection; bias; autonomous decision-making and accountability; job displacement and economic inequality; safety and security; misinformation and manipulation; existential risks; and others. Why others? Because there are many more risks, and some of them were mentioned in the key speakers’ notes. So, wow, we see quite clear leaders. There has been quite a lot of research on this, with quite different answers; in some studies, safety, security and cybersecurity were mentioned as the key point. But here we can see that misinformation and manipulation is the main point, and general bias is rated almost equal to misinformation. Privacy and data protection issues are also a big concern. Maybe we do not think about existential risks so much, but anyway, we can see what concerns our audience here. That’s very interesting. And now a question for our panelists: maybe we can have a short round on what you are doing and how you are connected with the topic. Maybe we can start with Thomas.

Thomas Schneider: Yes, thank you. My name is Thomas. I work for the Swiss government, and I happen to be, as the Secretary General of the Council of Europe has said, the one that led the negotiations on the Framework Convention on AI over the last two years. We have been dealing with AI for a long time, also in another committee at the Council of Europe that deals with media, mass media, and human rights in the information society. And this is probably where AI popped up as one of the first areas, because with Cambridge Analytica and all of these scandals, of course, people realized that algorithms have power. And yeah, that’s it.

Dr. Paulius Pakutinskas: Perfect, thank you. Maybe we can go to Vanja.

Vanja Skoric: Good morning, everyone. I’m Vanja Skoric, Program Director at the European Center for Not-for-Profit Law, a non-profit organization that works on the protection of human rights and civic freedoms, also in the digital space. That’s why we deal with AI as well.

Dr. Paulius Pakutinskas: Perfect. Nicola?

Nicola Palladino: Hello, everybody. I’m very happy to be here. I am an assistant professor of Global Governance of Artificial Intelligence at the University of Salerno, and also an expert member of the ISO Joint Technical Committee on Artificial Intelligence, which is developing several standards to address ethical and human rights-related issues connected with artificial intelligence.

Dr. Paulius Pakutinskas: Perfect. Thank you. Marine?

Marine Rabeyrin: Hello, everyone. Very happy to be here as well. I’m Marine, I work for Lenovo, and today I’m wearing two hats. First, because I work in a company where AI is obviously one of the key development areas, a key trend. The second reason is that I am leading a specific initiative dedicated to an NGO, which is really looking at how to mitigate some risks of AI, and more specifically gender bias risks: how AI can potentially lead to discrimination. So, obviously, I’m very much involved in the ethical aspects of AI.

Dr. Paulius Pakutinskas: Perfect. So, we see that we have really experienced experts, and our task is now to provoke them to talk and reveal all their secrets. We would like to have a very interactive discussion, so if you have any questions, you can raise your hand; the microphone will be brought to you, and you are free to ask anything on the topic. We are talking about two issues that are sometimes a bit controversial together, or maybe not, we will find out. When we talk about AI, it’s innovation: we like to be innovative, we would like to be very technically skilled, and, as Tomas Lamanauskas said, we need to solve big problems which are impossible to solve without AI. But we have some concerns, and so we talk about ethics. Are the two contradictory, or can we find solutions that are good? So my first question for all the panelists, just as a warm-up: why are we talking so much about artificial intelligence ethics, while we do not talk about, you know, internet or social network ethics, where we have a lot of problems too? Somehow we stress AI as more problematic, maybe. Maybe we can start with Thomas.

Thomas Schneider: Thank you. Well, we have also been talking about ethics on social media and so on and so forth. But I guess the difference is that with social media, until and including the Arab Spring in 2011, many of us were thinking that new tools and new technologies would more or less naturally lead to a better world, to more democratization and so on. And then we realized, and we made the same mistake as people 100 years ago with radio and other new communication tools, that you can also use these tools for things other than good things. I think we have simply burned our fingers already in the past 10 years, so that we now know that these tools can be used for good and bad things, and people are more concerned because they realize the power. And of course, if you talk about AI, algorithms are part of all the tools that you refer to. So it’s actually the same, or a component of it.

Dr. Paulius Pakutinskas: Thank you. What about the other panelists?

Marine Rabeyrin: So, yeah, I would maybe say that it concerns more people because it is also related to explainability. Most people don’t master what is behind artificial intelligence, so there is a little bit of the fear of the black box. That could be one reason. And also, I think it’s related to some bad buzz that we heard in the news, where we saw what it could look like if AI is not handled ethically. I’m thinking about some bad buzz that happened a few years ago, when it was shown that AI could discriminate against people, especially when used in legal or employment contexts. I think that has raised some concern; at least it was the case for me.

Dr. Paulius Pakutinskas: OK, thank you. So, Nicola.

Nicola Palladino: Well, I think that we are talking so much about artificial intelligence also because, compared to other digital technologies, artificial intelligence has a more immediate and perceivable impact on people’s lives. We are talking about applications that are used to screen our CVs when we apply for a job or for admission to a university. They are used to assess our risk profile when we request a loan. And we have also seen during recent elections how powerful they can be with regard to the manipulation and polarization of public opinion. We used to think about the Internet as a fundamentally beneficial technology that could help us spread and improve democracy, but in the case of artificial intelligence we are more aware of the risks associated with these technologies, also because they are closer to our experience. This is also why there is all this talk about AI ethics, because in the end, ethics can be defined as an approach to identifying the most proper behaviors and rules in order to maximize the benefits of a situation while minimizing the risks. Thank you.

Dr. Paulius Pakutinskas: Okay. Thank you. Vanja?

Vanja Skoric: Yes. Thank you. In the interest of our audience, and to make it more interactive, I would actually challenge some of the assumptions. First of all, the concept of ethics has been evolving: the conversation has actually moved towards the human rights framework in many of the policy and regulatory efforts, not only concerning AI, but also the Internet and social media. When we talk about potential harms and potential risks, we more and more talk about rights, not only in the policy sphere but also in the academic and research sphere. Also, I think the Internet and social media are part of the conversation together with AI, because we know, and are increasingly aware, that all of those are also being powered by AI in some way. So when we talk about AI and its risks and benefits, and its ethical and human rights dimensions, we actually touch upon all of these topics.

Dr. Paulius Pakutinskas: Yeah.

Thomas Schneider: I would just like to react, because I was waiting to see if somebody would mention it. To give you a simple answer to the question: because we have all seen Terminator and other movies that deal with AI. Social media and other things are not so easy to personify, but everybody knows what a robot is that could destroy mankind. We have all been influenced by this fear, starting with Frankenstein and whatever, of man creating the superman that will then kill us. And of course, that is the hype that the media, the journalists, the big tech industry themselves and politicians all mutually reinforce. This is why we are not talking about data governance, which is equally important, but about AI, and that is because of the creators of movies and stories.

Dr. Paulius Pakutinskas: Thank you. Thank you, Thomas, you did a great job, because my target was to provoke you, but Thomas provoked you instead, so thank you very much. That’s a very good point, because there is quite good research on trust in AI, and we can see quite different levels of acceptance and trust in AI in different societies and cultures. There is a quite clear division: in Asia, let’s say China and India, and also Brazil and Singapore, people trust AI much more, twice as much or sometimes more, than in European countries, or let’s say rich countries in some cases. And when we talk about ethics, ethics can be understood quite differently in different societies and cultures. So how can we increase trust in AI? What can we do, especially when we know that Europe is not the strongest economy in the usage of AI? Maybe we can start with somebody online, maybe Nicola.

Nicola Palladino: Well, on how we can increase trust across different societies and cultural contexts, I would be quite optimistic, because in the last few years we have had a flourishing of initiatives trying to set out principles for the regulation and governance of AI. And we can see that all of them converge around a very narrow set of principles that you would expect: transparency, accountability, fairness, human oversight, safety and security. Of course, these are very high-level principles, and differences in interpretation occur when we move towards their implementation. But anyway, I think it is a very good sign that we have this kind of common ground, a common lexicon around which we can develop our discourses. And I personally think that, as already highlighted by previous speakers, we have to move from ethics to rights, and that we should base our future regulation on international human rights law. I think the Universal Declaration of Human Rights could be a very good basis for overcoming cultural and geographical differences, given that it is a document that has been signed by more than 190 different countries. Yes, we know that there are differences around the world in the degree to which the Universal Declaration of Human Rights has been implemented and made legally enforceable, but I think it is a good starting point. And one last thing: I think that behind the cultural differences, every human being on Earth wants the same things. They don’t want to be discriminated against, they want to communicate freely, they want their privacy respected, they don’t want to be manipulated, and they don’t want to be subjected to completely automated processes without the ability to appeal a decision that could potentially damage their interests. All human beings on Earth have these things in common, and these are exactly the things that are protected, promoted and safeguarded by international human rights law and by the Universal Declaration of Human Rights. Thank you.

Dr. Paulius Pakutinskas: Perfect. So we come to human rights, as was mentioned by Vanja; ethics is somehow related to human rights, and that is easier for us to understand. Marine, what is your opinion on this issue? How can we increase trust, especially in different societies and different cultures?

Marine Rabeyrin: Well, I believe everyone has a role to play in raising awareness about what AI is and how we are starting to organize ourselves, at different levels of governance, to make sure that we put the right safeguards in place. And when I say everyone has a responsibility, on my side I’m thinking about the role of companies, tech companies, and how they are also there to contribute to implementing the right governance, but also to raise awareness about what AI really is and to defuse some of the fear related to AI. Most of the time AI is used in applications which are far from anything related to humans, which are more about processes, optimization of logistics, all those types of things; and then we should explain what is put in place when AI has some impact on humans. So I just want to stress that everyone has this responsibility to increase awareness and understanding, and I fully include the role of tech companies.

Dr. Paulius Pakutinskas: Okay, good. We will come to stakeholders in a few minutes. And Vanja, do you have something to add?

Vanja Skoric: Just to share some concrete data: I posted in the chat the link to the global study Trust in AI from KPMG last year, which showed that institutional safeguards are actually the strongest driver of trust for people globally. People are more trusting of AI systems when they believe that current regulatory frameworks are sufficient to make AI use safe, and when they have confidence in government, technology and commercial entities to develop, use and govern AI. So this clearly shows the pathway we need to take, and of course some regions have already started on this pathway.

Dr. Paulius Pakutinskas: Yeah, this study is really good. Everything is changing by the day and by the hour, so a new study might come out a bit different, but it is really good for capturing the situation and seeing how the world looks, because it was worldwide. So please read it if you have the possibility. Thomas, what would you like to add?

Thomas Schneider: Yes, the comparison to engines was referred to this morning. When ChatGPT came up, I had to talk to many Swiss media, journalists and parliamentarians, because they all wanted to know what AI is, what we can do, and so on and so forth. And I realized, as always when journalists and politicians see a problem, that they want to have a Mr. Digital or a Mr. Cyber, one cybersecurity government office, one law that will solve all the problems. That is one element, and it works less and less. The other thing I realized when discussing what to do is that there are actually similarities in how humankind deals with disruptive technologies, whether it’s this one, previous ones or probably also future ones; and I’m not a lawyer working for a government but an economist. There are so many connections to engines, because the engine was a disruptive technology that replaced human or animal labor, turning energy into motion in vehicles but also in machines that produce something. And do we have one engine law that regulates engines in one way all over the world? Do we have a UN agency for engines? No, because it’s all context-based. We have regulation for cars, for airplanes, for different types of cars, for the people that drive the cars or fly the airplanes, for the infrastructure and so on and so forth. And the same for machines: there are health issues, product security issues and so on. And, so much for harmonization, we have different levels of harmonization depending on the context in which an engine is used. The aviation system is fairly harmonized, so you can land a plane anywhere in the world according to the same procedures. If you take cars, our friends on the island in the northwest of Europe still drive on the other side of the road, and their cars are built to be driven on the other side. We allow them to drive their cars with their expertise here, and it more or less works. The Germans, for instance, still think that they can have roads with no speed limits, because that’s important to their identity, one part of their freedom; of course, it’s also important for the image of the auto industry. And that doesn’t mean they have more road deaths per year, because people have to take responsibility themselves, compared to other countries with very strict rules. In Italy, at every turn you have a sign that says 90, 70, 50, 30, 10, and as a Swiss you think: do I really have to brake down to 10? All the Italians drive through at at least 70; it just means drive a little slower. This is just to say that ways of dealing with risks are cultural, based on your history and so on, and you will never harmonize this unless you harmonize all cultures. But of course you will have to make things interoperable and harmonized, and probably more so in a digital world, because it is much easier to move AI systems and data around the world and to copy them than it was with engines.

Dr. Paulius Pakutinskas: Perfect, a lot of good examples which illustrate the situation. But I have a question. We have a lot of initiatives all over the world, in different formats: UNESCO, the OECD, the G20, the Hiroshima AI Process, and at the professional level, associations and so on. It looks like so many documents. Let’s talk about soft law, about recommendations: why do we have so many, and do we need so many?

Thomas Schneider: We will see many more. And again, look at engines: how many technical standards do we have for engines in different machines? How many laws regulate engine-driven machines? Thousands and thousands of technical norms, legal norms, and then again social and cultural norms. And we are about to build the same for AI. Of course, this is painful, it’s hard work; it goes slightly faster than it did with developing the machines and developing the regulation of the machines, but we have to develop it. Again, there is no one-size-fits-all solution. The Secretary General of the Council of Europe has said it, and Tomas has also explained what they do with partners in the sectors: we will have to have norms for using AI in all kinds of contexts. We have to have some horizontal shared understanding, and maybe some norms about how to do risk assessment across all sectors, but we will have to develop rules for every single application of AI. The question is: do we all have to develop thousands of laws? Or can we maybe, and as a Swiss I would rather say, look at what the problem is and try to find the easiest, least bureaucratic solution, which may not always be a super complicated bureaucratic answer. But basically, we have to find a way to deal with AI in every single area where it is used.

Dr. Paulius Pakutinskas: Marine, what is your opinion here?

Marine Rabeyrin: I was about to make that comment, because I believe there are so many initiatives, as you said, hard and soft guidance, but somehow they talk to different people; there are recommendations or guidance for different types of audience. I don’t believe we can come to a consensus on one ultimate guidance or regulation, because, as we just said, it has to adapt to its own context, to the culture of the audience, to the different types of stakeholders. So I believe it is OK to have so many initiatives, and it probably demonstrates how different stakeholders want to contribute to addressing this topic the right way. And I can share the experience I had myself. As I said before, I am leading an initiative dedicated specifically to how to produce and use AI in a way that does not replicate or amplify gender bias. This is a very specific aspect of ethics, so you could say: if you are so specific, there should not be so many initiatives. But as we started to work on it, we realized that there was already an ecosystem of associations and initiatives working on the ethical aspect and approaching this specific topic, but each in a different way. That was already four years ago. So do we need to align, to speak with only one voice? Comparing the different initiatives, we realized that each one proposed something slightly different. For example, our initiative offered recommendations to support companies; another initiative also talked to companies, but specifically to lead them to a certification, a label; another focused only on the technical aspects of AI, while others looked at different areas of action. Every time there is a little difference, which is fine, because then the audience in front of us has the choice to pick what is best adapted to their own challenge. Rather than trying to merge our initiatives into one, we said: no, let’s acknowledge that there are different approaches and promote each other. Let’s make sure that anyone we talk to gets this panel of choices: if you want to take action on gender bias, here is what we propose, but also what the others are proposing. We join forces by talking to our different audiences with our own specificities, while promoting each other and giving more people the opportunity to know about all those initiatives. Because we multiply the number of interventions and the number of people we talk to, we are probably more effective than if we tried to merge. So I think there is no issue with having more and more initiatives and different people who want to take action.

Dr. Paulius Pakutinskas: Vanja, could you add your thoughts?

Vanja Skoric: I think this is a perfect segue to the issue of diverse stakeholders and who needs to be involved, if you agree. I absolutely agree with both Thomas’s and Marine’s points. The framework that we operate in, technical, legal and policy, is so complex and at the same time so fragile that it really requires robust safeguards to be in place to protect individuals and societies from the negative impacts of AI and to allow us to actually enhance the benefits. But this also means, as Marine was pointing out in her example, that AI development needs to become fully inclusive: to involve diverse disciplines and a variety of expertise. We probably don’t even know yet everyone who is needed, beyond the disciplines we have already identified. It also needs to involve different groups and communities with lived experience and a direct on-the-ground presence, to provide input, examples and warning signs, but also to articulate the needs that can be addressed. And all of that needs to happen throughout the whole AI lifecycle: design, testing, development, piloting, use and evaluation. So it is really important, as we start to agree on how standards should look in various contexts, to embed in the standard-setting processes that AI developers and deployers proactively include external stakeholders and prioritize diverse voices, including those that might actually be affected by AI systems, with the simple goal of making AI better and beneficial for all of us and for society as a whole.

Dr. Paulius Pakutinskas: Thank you. Nicola, maybe you have something short to add?

Nicola Palladino: Yeah, of course. I think that having this variety of initiatives is a necessity, also because artificial intelligence is so powerful that it involves many layers of governance. There is a transnational level, at which we need to harmonize the rules in some way. Then there is a national level, at which states have, I think, the role of refining the high-level principles established at the international level according to the specificities of the national context; they also have to develop accountability and oversight systems for companies and, above all, to provide the enforcement capability. We also need to involve the technical layer, and this is the role of technical community organizations, which have the fundamental role of translating the rules, rights and principles that we have defined at the political level into something that can actually be implemented at the operational level. And then we also need the contribution of NGOs, the media and civil society, because we need to raise awareness about the social and political implications of the technical specifications of the technologies that we are building. I think we also have so many initiatives because there is some kind of power play, a power struggle between different organizations that want to have a say on this very relevant topic. But in the end this seems to be beneficial, because my impression is that all these initiatives are more or less overlapping, and so they are contributing to a common discourse about how to regulate artificial intelligence, which is a fundamental prerequisite for the emergence of what in political science is called a regime complex: that is, a set of rules and institutions around which a particular domain can be regulated.

Dr. Paulius Pakutinskas: Okay, thank you. As I promised, we are waiting for questions from the audience. Oh wow, how many! I think the first was the lady in white. Maybe you can briefly introduce yourself, and address your question to a specific panelist or to all of them.

Audience: Good morning, everyone. Thank you for giving me the floor. I’m Emily Khachatryan, representing the Advisory Council on Youth of the Council of Europe and coming from Armenia. My question is more specifically about AI being used in border control, because right now it is a rising issue, as it is causing a vast amount of discrimination, and I would like to ask your opinion on the ethical usage of it. As you already discussed, ethics and human rights are interchangeable words. These surveillance technologies are also collecting a lot of biometric data on migrants, which is not ethically correct and violates human rights. So do you think this kind of technology should be fully banned, or maybe used with human supervisors in order to prevent this? Thank you.

Dr. Paulius Pakutinskas: Perfect. Would you like to address this to somebody in particular, or to any of the panelists?

Audience: No, whoever wants to answer.

Dr. Paulius Pakutinskas: Okay, so let’s start with Thomas.

Audience: Thank you.

Thomas Schneider: Well, I don’t work for the Ministry of Police, or whatever it’s called in your countries, or of Justice. But this is already a reality. In Switzerland, we could vote about five or ten years ago on whether we wanted to have biometric passports or not, and there were some people who thought this was not such a good idea. Of course, we were under pressure from the US and others, who told us that if we didn’t have biometric passports, things would become much more complicated. So things are normally not so simple. And the proposal was almost turned down, not necessarily because people had a problem with the biometric data stored in the passport; the main problem was that it was decided to store the data in a central database, and people didn’t like the idea of a central database. They would rather have had the data stored in a more decentralized way, because there is a feeling that it is safer, that it is not so easy to give access, wanted or unwanted. So it’s less about the what; sometimes it’s about the how and what the safeguards are. Because, let’s be honest, it’s much quicker and much easier to walk through an airport and just put your passport into a machine than to queue. If these things are used in a trustworthy way, they can actually make your life easier. If they are not used in a trustworthy way, if you don’t know who has access to the data and who doesn’t, then of course you don’t trust it and you try not to use it, though you may be forced to use it every now and then. The same goes for the whole face recognition discussion: there are areas, also in the medical field, where emotion recognition or face recognition may be a great solution to a big problem. But if you use the same technology for a different purpose in another area, it can be a disaster. So you really need to look into the context and find the regime for each context. And we hope that this is the value added of the convention: it gives you guidance at a very high, abstract level of principle. What are the important issues? What does it mean to uphold human rights, democracy and the rule of law in particular uses? But it doesn’t give you the concrete answers; those you need to develop yourself in each context.

Dr. Paulius Pakutinskas: Perfect. Maybe one of the other panelists would like to add something.

Vanja Skoric: Maybe I can add something; I will put my legal hat on now, as a human rights lawyer. I think the answer lies in your question: if you premise it as a system that already breaches human rights, then the clear answer is that it should not be used. The question, again, is one of safeguards and of assessing to what extent it harms the rights. If it breaches human rights, it is essentially unacceptable. This is not only my view; it is the view of the UN Special Rapporteurs, the UN OHCHR, the Human Rights Commissioner, the European Human Rights Commissioner. Everybody in the field with some authority on human rights has expressed the clear demand to ban uses of AI that breach human rights and pose threats and risks of mass surveillance and data breaches.

Dr. Paulius Pakutinskas: Good. Okay, so maybe we can just listen to two more questions and then summarize.

Nicola Palladino: Sorry.

Dr. Paulius Pakutinskas: Yes.

Nicola Palladino: Can I add something to this question?

Dr. Paulius Pakutinskas: Sure.

Nicola Palladino: Yeah, well, thank you to the audience for this question, because I think it is one of the most relevant concerns about artificial intelligence. As you know, the European Union approved the Artificial Intelligence Act a couple of months ago, and this is a very important piece of legislation: the first comprehensive legislation on artificial intelligence, setting a series of requirements and limitations on the development, deployment and use of artificial intelligence that are able to offer some human rights guarantees. But, unfortunately, all these safeguards do not apply to military, defense and security purposes, especially to migration and border control. Personally, I think this is one of the most disappointing points of the Artificial Intelligence Act, and also a very dangerous one, because we are allowing experimentation with mass surveillance technologies that, if they one day prove to be very efficient and, from the point of view of governments, useful for security purposes, could also be extended to other sectors and to European citizens. So, coming to your question of whether to just ban this kind of technology: I think it would be sufficient to apply, even in the case of border control, the same kinds of guarantees and limitations that we have established for European citizens.

Dr. Paulius Pakutinskas: Good, so we have two questions and then we will round up. Okay, please.

Audience: Thank you, my name is Catalin Donsu, I’m representing GivDec, and I’m currently a Mathematical and Computing Sciences for AI student at Bocconi. I would like to ask: we already know the uses of AI in environmental policy; early warning systems make use of machine learning algorithms to predict wildfires and earthquakes, and they are used more and more by local and national administrations. During the COVID pandemic, there was much talk of optimizing governance, of making it more mathematical in a sense. Another example: in 2023, Romania introduced Ion, the world’s first AI governmental counselor. So I wanted to ask: with the surge in the popularity of AI, how will administration and the landscape of administration change? Is government by algorithm a growing trend, something to be expected? Hidalgo, an MIT professor, even went as far as saying that in the near future we could have a sort of augmented democracy, with people’s digital twins taking part in decision-making, a sort of decentralized system. Besides the risks that were already mentioned, what other risks would you say are probable, what benefits could be expected, and, in your opinion, is this a viable possibility?

Dr. Paulius Pakutinskas: Thank you. Perfect question. Let’s take one more, and then the panelists can choose which they would like to answer.

Audience: Thank you very much. My name is Wolfgang Kleinwächter, I’m a retired professor for Internet Policy and Regulation. Both the Framework Convention and the AI for Good Summit of the ITU excluded the military use of AI, but we know from the wars in Gaza and Ukraine that AI plays a central role. Time magazine recently had a cover story on the first AI world war; Guterres, the Secretary-General of the United Nations, has called for a ban; and just last week the Pope addressed the G7 summit meeting in Italy and said you cannot leave the killing of people in the hands of machines. So my question is: how do we deal with the military dimension? Very delicate, I know, but we have to deal with it. Thank you.

Dr. Paulius Pakutinskas: Yeah, let’s hope that we will have some other agreements on the military side. And it is really strange: we are producing more and more precise weapons, but we are killing more and more civilians in all wars, so it looks like a bit of a contradiction. Maybe you can choose any of these questions.

Thomas Schneider: Well, actually, all three raised issues that I also wanted to raise. The first one: why does the EU AI Act leave out security? Sometimes the answer is fairly simple: the Commission only has a mandate to regulate markets. National security is a matter for the member states, and they would never give the Commission the competence to regulate their national security. I’m simplifying slightly, but that is one of the main reasons. It also shows how an institutional setting shapes the instruments: if you look at the US or the UK, they have different approaches, also because they have different legal institutions and legal traditions. The only way the EU could regulate AI is the way they did, because of the way their institutions are made. That is one thing which is sometimes good not to forget. Then about the military: of course, we at the Council of Europe also have no mandate to deal with military issues. National security is something different, as long as it is civilian, like law enforcement and so on, but not the military part. I just want to call this the elephant in the room. We know that in the current wars, algorithms are used, in Ukraine, in Gaza, wherever in the world, and that there is this so-called race. Some wars in the past century were decided not only, but also, by who had the better machines in the air, on the ground, in the water; then, of course, you need a strategy for how to use them. It is a logical fact that as long as people think they can win something by starting a war, they will want to have the best technology to do it. Unfortunately, we were wrong in 1990 when we thought that at least the Europeans had passed that stage. And the other point, about the governance model: no matter whether you have, like the EU, a horizontal law, or whether you do it like the UK with sectoral adaptations, we need to empower all administrations, all authorities, indeed the whole society, to know about AI and data; otherwise we will not be able to cope with it. And to come back to the engines: this will fundamentally transform the way we govern. When the first railways conquered Europe, there was no Italy, no France, no Germany as we know them today; 25 years later we had these nation states, we had parliaments with a working class, with entrepreneurs, with farmers and Christian conservatives. That was, to some extent, an effect of the industrial revolution and the milieus it created, and it transformed the governance models of our societies. The same will happen, in another way, with AI. We will use AI to democratize our systems. We will not be able to work for five years on a law, implement it, and then spend five years on the next one; we will have to go for much more agile and dynamic rule-making, with the use of AI. The milieus are also changing, the political parties are breaking apart. So probably in 20 to 50 years’ time we will be in a different world, also when it comes to how our societies are governed and how decisions are taken.

Dr. Paulius Pakutinskas: Thank you, thank you. It looks like we are out of time, so let’s try to conclude. In very few words, what is most important, what are the takeaways from our discussion? And please do not forget innovation, which was a bit skipped in our discussion.