The Council of Europe Framework Convention on AI and Guidance for the Risk and Impact Assessment of AI Systems on Human Rights, Democracy and Rule of Law (HUDERIA) – Pre 02 2025

From EuroDIG Wiki

12 May 2025 | 09:00 - 10:15 CEST | Room 10 | Transcript
Consolidated programme 2025

Proposal: #76

Session teaser

This session will delve into how the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was crafted to complement existing international human rights standards, bridge legal gaps arising from rapid technological advances, and strengthen democracy and the rule of law.

Session description

The Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law is a groundbreaking initiative addressing the challenges and opportunities posed by AI in the context of the Sustainable Development Goals (SDGs). It requires all activities within the AI lifecycle to align with fundamental principles that support human rights, democracy, and the rule of law.

Join our panelists as they share their experiences in shaping this landmark instrument and discuss their dedication to advancing human rights, inclusion, and the protection of democracy in the digital age.

Format

Panel discussion

Further reading

The Framework Convention: https://rm.coe.int/1680afae3c

People

Moderator:

  • Mr Mario Hernandez Ramos, Chair of the Council of Europe’s Committee on Artificial Intelligence (CAI)

Panelists:

  • Mr Jordi Ascensi Sala, Member of the CAI for Andorra
  • Mr Jasper Finke, Member of the CAI for Germany
  • Ms Murielle Popa Fabre, NLP/ML Expert for Responsible AI Policies and Governance | Computational Neuroscientist | ex-INRIA & Cornell Researcher

Transcript

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

The Geneva Internet Platform will provide transcript, session report and additional details shortly after the session.


Mario Hernandez Ramos: Good morning, everyone. Sorry for the small delay. Welcome, all of you, to this session of EuroDIG about the Council of Europe Framework Convention on Artificial Intelligence and the Guidance for the Risk and Impact Assessment of Artificial Intelligence Systems on Human Rights, Democracy and the Rule of Law, what we call HUDERIA. My name is Mario Hernández-Ramos, and I serve as Chair of the Council of Europe’s Committee on Artificial Intelligence. As you all know, artificial intelligence is reshaping societies at an unprecedented pace, offering extraordinary opportunities but also posing significant risks to fundamental rights, democracy and the rule of law. In response to those challenges, the Council of Europe is leading global efforts to establish the first-ever binding international treaty on artificial intelligence, ensuring that these technologies develop in alignment with human rights and democratic values. Today, we are privileged to hear from distinguished experts who have been actively shaping this groundbreaking treaty and who also worked on the guidance on the risk and impact assessment of AI systems from the point of view of human rights, democracy and the rule of law. We have an exceptional group of panellists who will share their insights on how we can ensure this technology upholds human rights and democratic values. Let us then start with Mr. Jasper Finke, Legal Officer at the Federal Ministry of Justice and Head of the German Delegation to the Committee on Artificial Intelligence. Dear Jasper, could you please present the Framework Convention to us and to the public and stress its main elements? Thank you.

Jasper Finke: Sure, thank you very much, Mario. Before I start, the usual safeguards, personal safeguards: I’m here in my personal capacity, so everything I say does not represent the position of the Federal Republic of Germany, but my own. Now we can start. Let me start with a modest comment. I think the negotiations of the AI Framework Convention were a success. Why do I say so? Well, in evaluating international agreements, you have to take into account the context in which they were negotiated and not just the content. Both matter, context and content, for evaluating international negotiations and agreements. And therefore, I will spend a little bit of time on the context in which we negotiated the AI Framework Convention before I focus on the content. If you look at that context, a few things mattered. The first was time. The zero draft of the Framework Convention was published in summer 2022. We finalized the negotiations in March 2024. So there was huge time pressure under which we negotiated the Framework Convention: we basically had more or less one and a half years to do so, and we managed it. And when you look at the content of the Framework Convention, please always take this into account. So, as I said, we finalized the negotiations in March 2024. The Convention was adopted by the Committee of Ministers in May last year, and then it was opened for signature in September 2024. By now, and I have to read it because my memory is not that good, the following have signed: the European Union, and take note that it did not sign just for itself but also on behalf of its 27 member states, so there will be no separate signatures or ratifications from EU member states, it will just be the EU. We have Israel, Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom, Japan, Canada, Switzerland, Liechtenstein, Montenegro, and I’m afraid I have missed the United States: they have also signed the AI Framework Convention. Well, this already indicates the approach we took in the negotiations. It was what will hopefully be a global approach. Becoming a party is not restricted to members of the Council of Europe. Instead, as with other conventions, we thought it would be valuable to have non-Council of Europe member states as potential parties as well, and this really opened up the negotiations. States from Latin America also took part in the negotiations, and we hope to see signatures from that region of the world as well. Australia joined the negotiations in the end, too. Of course, the AI Framework Convention was not the only initiative. We had other initiatives, and for everyone negotiating the AI Framework Convention, but especially for EU member states, it was particularly interesting to negotiate the AI Act and the AI Framework Convention in parallel. The AI Act was not yet finalized while we negotiated the AI Framework Convention, which posed a number of obstacles and challenges. We managed them all, but it didn’t make things easier, let’s put it this way. As Mario has already said, there are other initiatives worldwide on AI regulation, but the Framework Convention is the first internationally binding agreement, and therefore it stands out. Against this very limited time frame, we also had diverse interests, legal backgrounds and cultures in the negotiations that should be taken into account. And of course, we had a moving target.
The technology developed even more rapidly while we negotiated the AI Framework Convention. It’s always a tricky question: how do you frame rules when the object of these rules is still changing? Of course, this is not just the case for the AI Framework Convention; it applies to the AI Act, the Hiroshima process and the UNESCO principles as well. But if it’s a binding international agreement, it becomes particularly difficult, as it’s not easy to change these rules once they are in force. So, against this background, let me give you a brief overview of what the convention covers. First of all, it does not address the technology of artificial intelligence as such; it addresses artificial intelligence systems and their entire life cycle, so basically from product design to decommissioning, to put it in un-technical terms. I should add that Jordi here is the more technical person, and every time I speak about technology he has to work very hard to keep from rolling his eyes, I guess. Of course, the AI Framework Convention does not stand alone; it is part of a larger framework of human rights conventions at the Council of Europe, particularly the European Convention on Human Rights, and of course data protection is important when it comes to AI systems as well, even though this was not the core of our endeavour and is dealt with in other legal acts, conventions and committees. So how did we proceed? We first provided a list of fundamental principles that should be, or that must be, observed across the life cycle of AI systems. Human dignity and autonomy: especially the idea of human autonomy becomes increasingly important as the technology evolves. Equality and non-discrimination: important in itself, but given the possible impact of AI in entrenching discrimination and inequalities and perpetuating stereotypes, it was important to stress these principles as well. Protection of privacy is, of course, included as well. More AI-specific are transparency and oversight, meaning oversight of the AI system, accountability and responsibility, and, of course, safe innovation and reliability. So these are principles; they are not specific rules. And, as everyone who has negotiated international agreements knows, if you want to be more specific, you need more time. If you do not have this kind of time, because you are under a lot of pressure, you have to rely on these more abstract principles. Now, I have to look at the chair: how much more time do I have? Of course, we also included procedural rights and safeguards. So there is a documentation requirement for AI systems, there must be an effective possibility to lodge complaints, and there is the notification that one is interacting with an AI system. Of course, you can deviate from these requirements under specific circumstances, but the basic rule is notification. And a very important, core element is the risk and impact management framework; I won’t go into detail here, colleagues will do that. So, to conclude, let me ask a rhetorical question: is the convention perfect? Well, I’m afraid the answer is no, but which convention, or which outcome of an international negotiation, has ever been perfect? It was not just the four of us sitting in a room for one and a half years drawing up ideal rules on AI.
The AI convention is the result of compromise, as all international agreements are, and you have to take into account more diverse interests if you extend the scope; let’s put it this way: if you take a global approach, more diverse interests have to be accommodated. And this relates to the idea that we started with principles. Finalizing the negotiations last year was not the end; no one understood it as the end, but more or less as a starting point. So we have abstract principles, and all of us know that they have to be specified. So who can specify them? Well, of course, once the convention is ratified and in force, national legislators, so the parties to the convention, can specify and make the principles more specific. One example could be the AI Act, but there are many other ways to approach the topic, and this is what we took into account: there are different ways of approaching AI regulation, and the Framework Convention leaves, gives and allows this kind of space to regulate according to the specific needs and interests of the parties. But if we say the principles have to be specified, or should be specified, this is not just a job for the parties. The work of the CAI, the Committee on Artificial Intelligence, must and will continue, and let me tell you, we have already started our work. And therefore, to point this out again, I think that, given the context in which we negotiated, the convention is a success, because it is a starting point for further work and we are all committed to actually doing this work. And with that, thank you very much.

Mario Hernandez Ramos: Thank you very much, Jasper, for this general review and also for stressing the issues that contextualize the outcome of the Framework Convention; of course it is not perfect, given the many interests involved, but it is the first international treaty on artificial intelligence. But this is not the only exercise; there are more interesting regulatory examples, and for that we have our next panelist, Ms Murielle Popa Fabre, generative AI government advisor with expertise at the intersection of technology and policy, with a PhD in neuroimaging and natural language processing and hands-on experience training large language models. Dear Murielle, could you please introduce us to the current international and national artificial intelligence governance landscape, taking into account human rights standards? Where do we stand currently, Murielle?

Murielle Popa Fabre: Thank you, good morning everyone. I will share some graphical elements, because probably the first thing to note is that, because of the pace of development and because of the public-opinion disruption that ChatGPT produced, there has been, as you see on the graphic, which stops one year ago, an incredible acceleration of formulated AI rules across the world, and a lot of dynamics in trying to work out the applicability of existing rules. What is actually interesting for us, given the context of the presentation of HUDERIA, is that there is a relatively stable number of international frameworks. So we have a lot of rules, but when we think about the question of having a framework, it has to stick to reality, right? Rules can be principle-based, but whenever we have something that has to be a framework, it has to grip onto reality. Like in a car, you have to have the wheels on, otherwise you slide off, right? And so it’s interesting, because everyone understands the need to act, but it’s very difficult to make it land in reality. So I will do a crosswalk of the first attempts at getting a grip on reality. But we should start with what I usually call the AI governance lasagna, which is actually a multi-layered approach to AI governance, in concrete terms. As you see, you have the meat right in the centre, in yellow: the regulated entities, right? The AI producers, the deployers, the designers, all the AI supply chain. And the further you go from the meat of the lasagna, the more you meet things that try to be based on reality, which is right at the bottom, but are usually non-binding. I would call this the cream. And then you have everything that gives structure, like the pasta layers on top. So what is at stake here is to take principles and make them land in reality. What is actually at stake, and I will just take three cross-national situations, is to have something like the AI Act, which is based on risk, which is based on product safety, land in reality. I don’t know to what extent you are familiar with the inner workings of the AI Act, but it is definitely something that is going to be implemented with standards, and the CE mark will make the landing in society. Because at the end of the day, reality is also society, which is the main focus of the convention of the Council of Europe: something that has not only to define what an AI system is, but to define what an AI system is with humans. And so the focus here is human rights, democracy and the rule of law. And as the focus is humans, it is the whole life cycle, because it is not only about development. It is also, for example, about decommissioning. What if you do therapy with an AI and all your data are stuck in one company and you want to change? That, for example, is a question about decommissioning. What if the system stops working? What do you do with your company? Companies I was doing consultancy for were stressed when governance inside a big AI company was shaky, because they said: what is happening? So it is really important to understand this whole life-cycle focus. And here, how it lands on the ground is not standards at the CEN-CENELEC level; it is the HUDERIA methodology. And then we have the UN approach, which is also about reaching agreement on core elements: the first, US-led, resolution was about safety, so we had the safe, secure, trustworthy and sustainable development of AI.
And the second, China-led, one was about free, open, inclusive and non-discriminatory AI. And when you say free, open and inclusive, you also mean interoperable. So you see that even at the level of principles, we already have a specialization, with people targeting core elements of AI systems inside reality. So basically, if we look back at the lasagna, the problem is to make it edible, because, I don’t know if you have tried lasagnas that don’t have the right structure, it is really difficult to eat them, and also to make it arrive in reality, right? Because if it is not edible, nobody will eat it. So the question is to bridge the layers, to get to the principles and operationalize them. And so what does it mean to make principles operable? Here you have a very basic schema where you have the fundamental values, you pick the ones you want, and then, based on these fundamental values, you try to find AI principles, to see what kind of characteristics the technology has to have in order to be in line with these fundamental rights. Then you discover that there are risks; this is something everyone is discovering, right? You discover there are risks, you want to manage them, you try to find ways to manage them. And when you have found ways to manage them, that is the cream, all the soft, cushy part that was at the bottom, and you finally get to hard decisions about regulations, standards, rights, liability, remedies. It is also important that you then have basically three steps. You have a step that is about your approach: so, for example, the focus on human rights and how it lands in society for the Council of Europe, compared to the product approach. Then you have the method: how your approach comes down to reality with a method. And then you have all the governance and regulation that comes and fixes it in stone. So I would like to take these three main conceptual steps, which were actually introduced by Jasper, in order to show you the different approaches, because there are many, many initiatives, but they all target different elements of this landing in reality, of how we make principles operable. And the first one, which was really the first in time and has to be acknowledged, is the US national standards institute, NIST, developing an AI risk management framework. So they put themselves at the risk-framework level, but they focus on the AI system, and on the AI system with humans, like the convention. And in this focus on the AI system, they did important work to identify some characteristics that every trustworthy AI system should have. So this is their approach: focus on the system. And the method is to give guidance to companies through an AI risk management framework. What is interesting here is that they developed this framework, which is graphically represented here, and an important element that they share with the HUDERIA framework, for example, and also with the Commission, is that you have to map in context what the risks are, so that you take the context of application into account in order to understand the risks. And we will see that HUDERIA does something more. But this is really very important, because when you think about an AI system and how versatile it is, it is really fundamental to understand it in its context of application.
This matters when you are developing products in a company, for example. And so you make sure that, in the context, all these seven principles are ticked, then you do all your governance, then you measure, you go to the yellow spot, you measure, and then you manage it, and all of this is built around the governance mechanism. And then we have what happened during the G7 in 2023 in Hiroshima, where there was a government forum that decided to issue some guiding principles. The approach was to say: we will find principles that cut across risk management, stakeholder engagement, and ethical and societal considerations. So we are starting to enter the interface with humans here, and, given these principles, they said, okay, we want to build on these 11 pillars a code of conduct, a voluntary code of conduct, of course. And this is interesting, because when you look at the list here, you have all the risk management and governance considerations, which we can also find in the US national standards approach, but we also have some stakeholder engagement. But how is this stakeholder engagement actually structured? It is structured around transparency and accountability. So they take the principles and focus on some of them, and on responsible information sharing. The dynamic here is to say: okay, we want transparency to be the core, and transparency will be a way to have this stakeholder engagement. And what they developed in order to make it operable, to be transparent, is a monitoring mechanism, also based on voluntary reporting, where companies developing AI systems can report the best practices they have in their risk management. And here I would like to stress that, in this, the relationship with the stakeholders is just: I put information here on this platform, and then you go and check it if you want. Whereas what we see in the HUDERIA approach, and at the core of the treaty, is that the approach is socio-technical, so totally sticking with the reality of being an interface with a human being and with society at large. It takes the life cycle, but it also develops a methodology, called HUDERIA, that wants to check the impact: not only the risk, but to assess and quantify the impact. And what is actually interesting is that it is not only about transparency; it has two crucial steps I would like to highlight: the context-based analysis, where the context is not only the application, and the stakeholder engagement process, which is not only about transparency. And if you want, you can go and check this on the web. This is really important, because it is these two poles that make it land in reality in a totally different way compared to the other initiatives I have been mapping. And here the COBRA risk analysis is based on the application, as we saw in the US national standards, but also on the design and development context and the deployment context. So the context is linked to all the steps that lead the system to then interface with you. And this is fundamental when we think that we are getting to very sophisticated systems that are sold as black boxes and that actually have, internally, a lot of different steps with different impacts on their addiction patterns, on their influence patterns, on the end user, and at large in society for systemic questions. So this stakeholder analysis is really important, because it is about putting around the same table all the people that are interacting with these systems.
And one step of this analysis is about identifying missing viewpoints, which is actually something that people developing products do like to have, somehow. So for me it is something that puts much more granularity into the analysis, and it is actually very lively, because it keeps pace with the technology: you are using the tools, and as you use them you observe the effects they have on your life, both positive and negative. So if you want to keep pace with the use of these systems, you definitely need everyone around one table. And so, if I still have three minutes, I would like to show, taking the example of China and of DeepSeek, what a governance journey looks like for the country that was the first to set up binding rules on algorithms at large and on AI-powered algorithms. China had a regulatory journey that started by adopting laws according to different architectures. So you had one law about the recommendation algorithms that power social media, for example, or that help fix prices. They had another law on deep synthesis techniques for generating synthetic content. And they had another law in August 2023, called for the moment the interim generative AI law, which is about algorithmic discrimination, fake content, intellectual property, privacy, social values in generated content, and also security and identity verification. So one way to see the kind of path I am going to describe in the last three minutes is to understand to what extent there is a layered, iterative approach in the experience that China has been developing in regulating AI, linked to a central tool, the central register for algorithms, where you have to register your algorithm and your training data before going to market. This tool is one that everyone developing, for example, a large language model has to use. This is the interface, or this was the interface a year and a half ago, so maybe the interface is slightly different today. But basically, you have to declare whether you have biometric features or not, you have to include your identity information, whether it is open source, what data sets you use, what sources you use, what the use scenarios are, and a lot of other characteristics. And this has been the case since 2022. And so you have batches of approval of what is put on the market; this, for example, is an example of batches of approval of deep synthesis algorithms. And after this, they still developed standards, as is happening now at the EU level, and what is interesting is that the generative AI standards came out on the 11th of October 2023 and they were actually very detailed. Their scope included training data, for example, and for training data you had to say what the assessment is and what the evaluation methods are. So for all the people that say that innovation has to happen without concrete and fine-grained regulation, here you see an example with DeepSeek where you definitely have innovation, and you already see three layers of compulsory steps you have to take before putting something on the market. So we are at three now, and I ask you to count.
So when you are here, you have different methods of evaluation; for example, 31 risks have been identified, including of course social values, discrimination, commercial legality and the legitimate interests of people, and all these risks have to be evaluated in a certain manner. You have to have 98% acceptability on your training data across all these risks, and you also have to have 90% acceptable answers on a pool of 1,000 questions you ask the chatbot, in this case on the generated content. So you take the two ends, initial and final, input and output, and you need this level of acceptable answers; and this is what DeepSeek went through. You also have to have a maximum of 5% rejection on certain questions, which means that you have to answer a certain amount of questions correctly and cannot just say “I reject them” and be done. And then you have an additional step of standardization on the data security of pre-training and optimization training data in generative AI, and also cybersecurity measures, so here we count five. Then the Cyberspace Administration mandated, on the 11th of July 2024, an additional step of government review of the AI models, and here we are at six; and then, in March this year, we had an additional new regulation on labeling AI-generated content, so I am sure we can count more than seven. So this was just an example of how detailed the AI regulation journey can be and how agile and flexible this job is, and thank you for your attention.
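Editor’s note: purely as an illustration of the quantitative gates Murielle describes above (98% training-data acceptability, at least 90% acceptable answers on a roughly 1,000-question pool, and no more than a 5% refusal rate on questions that must be answered), the short Python sketch below encodes them as a simple pass/fail check. The data structure, thresholds as coded, and function names are hypothetical and are not drawn from any official standard or tooling.

# Editorial illustration (not part of the session): a minimal sketch of the kind of
# quantitative compliance gate described in the talk. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class EvaluationResults:
    """Aggregated outcomes of a hypothetical pre-market self-assessment."""
    training_samples_total: int        # sampled training-data items reviewed
    training_samples_acceptable: int   # items judged acceptable across the risk list
    qa_questions_total: int            # questions put to the chatbot (input/output check)
    qa_answers_acceptable: int         # generated answers judged acceptable
    must_answer_total: int             # questions the model is expected to answer
    must_answer_refused: int           # of those, how many it refused


def passes_compliance_gate(r: EvaluationResults) -> bool:
    """Return True only if all three thresholds mentioned in the talk are met."""
    training_ok = r.training_samples_acceptable / r.training_samples_total >= 0.98
    answers_ok = r.qa_answers_acceptable / r.qa_questions_total >= 0.90
    refusal_ok = r.must_answer_refused / r.must_answer_total <= 0.05
    return training_ok and answers_ok and refusal_ok


if __name__ == "__main__":
    example = EvaluationResults(
        training_samples_total=4000, training_samples_acceptable=3940,
        qa_questions_total=1000, qa_answers_acceptable=930,
        must_answer_total=300, must_answer_refused=9,
    )
    print("passes gate:", passes_compliance_gate(example))  # True for this example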

Mario Hernandez Ramos: Thank you very much, Murielle, for this very interesting overview of the landscape, and especially of China’s regulation, which is sometimes not very well known but is, of course, a very interesting thing to know. And now let’s move to our last panelist, Mr Jordi Ascensi Sala, Head of Technology at Andorra Research and Innovation and Head of the Delegation of Andorra to the Committee on Artificial Intelligence. Let’s move to a very important question, which is how to make an artificial intelligence system safe, putting safety at the centre. This is our main worry, of course: to assess risks before problems with their use even arise. So, Jordi, could you please explain how and why risk assessment and risk management of artificial intelligence are so important, and how HUDERIA contributes to making artificial intelligence safe?

Jordi Ascensi Sala: Thank you, Mario. I don’t know if you are hungry or not, because we are talking about lasagnas, and I am going to take this nice metaphor from Murielle. Because, as she explained, the way you take a legal instrument, as the convention is, and touch base with real practice is not an easy task. In the realm of the convention, you talk with lawyers and politicians and policy makers who are interested in making a very consistent text that can be understood in a specific way, and understood at the same time in the terms of an international treaty. When you translate that into practice, you create a kind of bridge, because in putting this into practice you are going to talk with people like me, who have a technical background, with people in public administrations in charge of public procurement, with people that have rights, and with people that deliver those rights. So we have to bridge these gaps between a legal convention and real practice. Murielle explained it in a very nice way, the lasagna way, but I think it is important to note that in the convention there is an article establishing an obligation to have a methodology to understand the impact on human rights, democracy and the rule of law. And since the Committee on Artificial Intelligence drafted the convention, it felt that it would be important to have a special recipe for this lasagna. And of course, this is a non-binding instrument; you can use whatever you want. You have to have one, but you can use whatever you want, and the Council of Europe, or the Committee on Artificial Intelligence, proposed HUDERIA, and it is a concrete model, as Murielle explained. I liked it when Jasper said that the convention had a specific context, and in terms of HUDERIA, context is super important, because the approach you take to apply this convention depends on it. And at the end, this methodology should help you fulfil the requirements that the convention asks for. So, in terms of the HUDERIA methodology, we have a very important focus on context, but also a very important focus on perspective. Murielle already told us about the context-based risk analysis and the stakeholder engagement process, and we are going to expand a little bit further on this: how do you deal with something that is in continuous evolution, which in this case is technology? I remember when we finished the zero draft of the convention, I think this was in 2022, and all these tools, ChatGPT, generative AI and so on, were just starting to pop up. And now, two or three years later, we are in a situation where things have changed in a very dramatic way, in terms of technology and also in terms of geopolitics. And this is continuous movement.
So, how do you build a methodology, how do you implement something that touches base, taking into account that the place where you touch base is moving, you know, and evolving all the time? Probably I would say that the answer is to focus on human rights, democracy and the rule of law, and this is one of the important things: these are, of course, principles that evolve, but they don’t evolve like ChatGPT or other large language models or other AI systems. So, focusing on that, and taking into account that this is the main part: at the end, the HUDERIA methodology reaches this intersection between human rights and technology, frameworks and practices, and it is a structured approach based on scale, scope, probability and reversibility. And this is quite important, because it touches the whole life cycle. Jasper was looking at me when he was talking about the terminology, and for computer scientists or engineers like me, the life cycle is something that is commonly understood: you design a system, you test it, you implement it, you operate it, and then sometimes you decommission the system, and there are many things around that. So, when bridging the legal instrument into a more practical approach, we have to think about these processes too, and we also have to use similar languages. It is true that sometimes, and this is important to know about the methodology too, when we talk about, let’s say, explainability, the understanding of explainability from a technical standpoint is different from a legal standpoint, and there are things that merge here. But it is important to have this, again, perspective and this context, and to create this dialogue between people in the design phases, in the operational phases, in the training phases, in the implementation phases, in the procurement phases, so that they use the same language. And this is one of the things we are trying to do in the HUDERIA methodology: to have a common understanding of the language, because otherwise we are talking in the same way, using the same words, but we are not talking about the same thing. And this is something important to notice. So, when we speak about the phases of this HUDERIA process: we saw the context-based risk analysis, where you check, based on the context, what risks arise when you use an AI system. And then you go to the stakeholder engagement process, which means: okay, this risk, let’s put it in perspective. And perspective is not just one single perspective; it is going to be the perspective of the people that are engaged in using this system, or the people that will be affected by this system. And this is a way to create a conversation. Because otherwise, talking about processes, we engineers, we like processes. We can turn everything into a process. We can make a process out of waking up in the morning, you know, not myself, but brushing your hair, cleaning your teeth, and we can describe all these things with many different indicators and KPIs. But when we think about using an AI system, we don’t put into this process any perspective other than the technical one. And using this methodology creates a conversation that asks: are you taking into account this specific part of the population that will be impacted by this AI system? And there are questions that help you frame this conversation. The same goes for the people that want to use this system; it creates this engagement.
Do you have a system that will help explain the reasons for a decision that has been made by an AI system? We can have this conversation at this table, where we have an AI expert like Murielle, two magnificent lawyers and myself, and we can just start this conversation. I am sure that we could have this conversation in this room, and it would be a very rich and fruitful conversation to understand the risks and the impact of those risks depending on the perspective. Then, of course, when you have analyzed all this, you understand what the risks are and what the impact of each risk is. And at the end you have to think about how to mitigate the risks, because sometimes it will be impossible, or at least difficult, to simply avoid a risk; you have to think about how you will mitigate it. And this is an ongoing process. It is an ex-ante analysis, but you don’t do it just one time and that’s it, and it works for everyone; you have to do it from time to time. And I think, I don’t want to be super romantic here, but you create conversation among people, and this is a good thing. When you install an AI system and you want to see how it is going, and you do this as an ongoing conversation, you are asking questions, and since the system is evolving and amplifying its capacities, or maybe going into different sectors, or maybe dealing with different inputs to deliver different outputs, you have to have this conversation, and this is a good tool to have it. There are examples of other tools that are similar to this one, because this is not new; HUDERIA is part of, let’s say, a tradition of doing things in this way. Convention 108+ has this in terms of data protection rights. The GDPR in the European Union has this. Also, in terms of cybersecurity, there are similar tools for how you assess a system. So the HUDERIA methodology, at the end, helps to have a holistic and continuous way to approach the use and the implementation of an AI system. And this holistic way, I want to link it with Jasper’s point about human autonomy, which is one of the basic principles. And now, personally, I am doing things about philosophy, and I can explain that in order to be free, to have this human autonomy, there is this universality principle. So the holistic way of understanding the conversation is a way to reach this universal understanding. It is not the engineer or the computer scientist saying what the risk is; it is not only the people that receive the rights or deliver the rights who signal where the risks are and what the impact of those risks is. It is having a choral conversation, in terms of having this universality. So, just to wrap up a little bit, and I am going to finish in two minutes, the holistic way means taking into account the AI system’s application context, where the system is going to be applied; the AI system’s design and development context, which is the process of the life cycle; data protection, explainability, interoperability.
This takes into account how you deal with the whole process of installing, operating and using an AI system, but also how you put it into motion. Because sometimes we have this feeling right now that you turn on a switch and there is light, you turn on your cell phone and there is a connection, you start a system and everything works. But when we have systems as powerful as AI systems, things are not that easy. Of course they work right away, but the French philosopher Paul Virilio used to say that when we invented the train, we invented the train accident. So when we install a system, we have to think about all of this, and this is not a very rapid question; you have to think about it. So, just to finish here: when we were discussing HUDERIA, to me, having the methodology is important, but so is how you are going to implement this methodology. Because at the end it will be a small company or a big company thinking about deploying or preparing AI systems; it will also be a government or a small municipality thinking about using a system to better deliver services; and it will also be policy makers, regulators and so on. So how do you make this operational? Here there are two parts, and the secretariat is working hard to help with this. The first part is capacity building: we have to have a tool that can be used by the majority of people, so let’s make this tool useful, let’s make this tool understandable. The second part will be the library of knowledge: when people use this methodology, they are going to create cases, specific analyses of specific applications, and these are things that, of course, with all the privacy safeguards and disclaimers that should apply, can be used and shared around, so that it becomes a common practice. I don’t think that a small municipality in my country will be much different from a small municipality in Germany, in France, in Spain or in other countries and jurisdictions, so probably we can use this as an example. I am trying to bridge this process: using the convention as a basis, the top layer of the lasagna; HUDERIA as a methodology to fulfil it, to put in some meat, or some vegetables if you are vegan; but also how to help digest this lasagna, which will be maybe the salt and pepper. So thank you very much.
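Editor’s note: as a purely illustrative sketch of the risk grading Jordi refers to above (scale, scope, probability and reversibility, combined with a stakeholder-engagement step that records affected groups and missing viewpoints), the Python snippet below shows one way such a record could be captured and prioritized for review. It is not the official HUDERIA methodology; the scoring scale, field names and example are hypothetical.

# Editorial illustration (not part of the session): a hypothetical risk record graded
# by scale, scope, probability and reversibility, with stakeholder-engagement fields.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Risk:
    description: str
    scale: int          # severity of the harm (1 = minor ... 5 = severe)
    scope: int          # breadth of people affected (1 = few ... 5 = society-wide)
    probability: int    # likelihood of occurrence (1 = rare ... 5 = near-certain)
    reversibility: int  # 1 = easily reversible ... 5 = irreversible
    affected_groups: List[str] = field(default_factory=list)
    missing_viewpoints: List[str] = field(default_factory=list)  # flagged during engagement
    mitigations: List[str] = field(default_factory=list)

    def priority(self) -> int:
        """A simple ordering heuristic: higher values get reviewed first."""
        return self.scale * self.scope * self.probability * self.reversibility


# Example entry for a hypothetical municipal benefits-allocation system.
risk = Risk(
    description="Model systematically under-scores applicants from one district",
    scale=4, scope=3, probability=2, reversibility=3,
    affected_groups=["benefit applicants", "caseworkers"],
    missing_viewpoints=["applicants without internet access"],
    mitigations=["periodic bias audit", "human review of rejections"],
)
print(risk.priority())  # 72; reassess after each mitigation, as an ongoing exercise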

Mario Hernandez Ramos: Thank you very much, Jordi. We are starting to get hungry. So we have 15 minutes for you to put any questions or comments to these extraordinary panelists. The floor is yours. If you don’t have questions, I have many, and I will take advantage of my position, but I would rather you had questions. Are there no questions?

Audience: Martin Boteman, a question. With all this AI being developed, and I hear you very much about capacity building and making it available to everybody: what would be a good balance between investing in innovation on the one hand, and making AI available to the public on the other hand? Because I think you will need to have premium AI in some way to stimulate innovation. How do you see that balance?

Mario Hernandez Ramos: Thank you for the question.

Jordi Ascensi Sala: I am going to give the engineering answer, meaning that, to me, if we know what the rules are and what the clear methodology or framework to deploy this is, it is not different from the protocols we use when we are creating an AI system or a digital technological system. It is true that you should have an approach that is different from the purely technical one. On the technical side, we have a gap here in the understanding of human rights. Let me put it this way, and I don’t want this to sound like a critique of the engineering schools, but when we went to engineering school or computer science school and so on, they taught us how to build a bridge between A and B, not to think about what the implications of building the bridge between A and B are. I think that, for the good of the profession, it is important to have this conversation about the implications of using a specific technology. We have the mindset and the framework to find ways forward if the rules are clear. To me, the answer would be: let’s have a very specific framework, let’s talk about this. Of course, it is going to take a little bit more time than just having a free-for-all process. But in the end, there is never a free-for-all process. We have limitations in terms of capacity, energy, processing. When you install a system in a municipality, for instance, or in a government, the system is not standalone, you know; you put it into a specific context. So what we are doing is enlarging this context: thinking, when you put this in, what are the implications? Can you explain this? Can you put in specific audit mechanisms or logging mechanisms? To me, it will also be important that the way we use this methodology is approachable in an easy way for computer scientists or for public procurement teams, because otherwise it is going to be a big document that will be difficult to understand. And we are putting a lot of effort in here, because this is the juice, the important meat of the conversation.

Murielle Popa Fabre: In addition to this, I would say, taking the perspective of developing generative AI tools today, or of investing in creating an ecosystem of generative AI, which is something I do, for example, for France: when you are investing and you want to accelerate the economy, you want to build products; you don’t just want the best tech in the world. You want adoption and you want to build products. And so I would go even further than what Jordi just said, saying that there is something about design, actually product design, that is highly cultural, highly human and highly cognitive. And these are the questions we are facing today. With sovereignty, it is about culture and cognition, too. So my understanding of innovation also includes the idea of having responsibility for innovation, but also having simply good products that fit the cognitive well-being of people and also fit the cultural demand of a certain area of the world. And so I think HUDERIA is interesting because it puts all these people around the same table, so that in the end, as I already said in my presentation, you can also have good products.

Audience: May I? I think in a way it is like remembering how privacy came into the world, and how the GDPR in a way became a strong instrument where the public interest is defined by law and not by the public. So I like the fact that it is a convention, because it allows us to shape it. And as you said, Madam, it is global, is it not? So it is indeed a dialogue that will be needed. I can see that part of the convention would be to stop premature regulation that stifles public-interest progress.

Mario Hernandez Ramos: Thank you very much for the question.

Audience: Jacques Berglinger, Swiss-based board member of EuroDIG, and with Leiden University in the Netherlands. My question is, and thank you to the committee for the wonderful work: will we see, after this, a Strasbourg effect globally, similar to what the Brussels effect is trying to achieve?

Jasper Finke: Unfortunately, I cannot predict the future; if I could, maybe I would have a different job, and I would definitely not be a lawyer. Well, yes and no. The convention and the AI Act work on different levels. The AI Act, for example, or the Brussels effect, as you said, is more specific; it is basically implementing the Framework Convention. On the other hand, once the convention is ratified and in force, we have binding principles. We are working to make these principles more operable, to give guidance using the HUDERIA methodology, using COBRA. And we are putting all our efforts into achieving this effect. In the end, it also depends on the parties, on companies, on municipalities, on local actors, to actually use the tools that are provided by the Council of Europe. If they do, then we will see this effect. But if you look at the Framework Convention itself, it does not play on the same level as the AI Act. And therefore, well, I think it was clear from the beginning that it is about content, but it is also about geopolitics. And this, as you can see from the potentially global approach, is an essential part of the success, or hopefully the success, of the Framework Convention, and I think this geopolitical aspect and impact will remain. Thank you.

Murielle Popa Fabre: It is really courageous to take principles and transform them into measurable elements, because principles are not quantitative, right? So there is this move between qualitative and quantitative, and having a method to do it right for humans. I am always thinking about the cognitive risk of large-scale automation and the question of autonomy that was already raised by the two panelists. So I think that if there is something that can be a Strasbourg effect, it is this courage to tackle the question of transforming the qualitative into the quantitative in a world that is automatically datafying and algorithmicizing. So I think we are on a good track.

Mario Hernandez Ramos: Thank you very much. I will take advantage of my position to say that I am very happy with that question and with the Strasbourg effect. I had never heard of the Strasbourg effect, but there is a Strasbourg effect regarding human rights and the extension of standards beyond the Council of Europe countries; it is a reality, especially in Latin America and other parts of the world. But I would like to stress the complementary relationship between the Council of Europe Framework Convention on Artificial Intelligence and, for instance, the European Union regulation. It is very clear that both instruments work better together and with other instruments. So it is part of a very necessary network for regulating this horizontal technology that poses so many different and important risks to human rights. So thank you very much for the question. Is there any question in the room? Yes, please.

Audience: Thank you. Given my IT background, I wanted to ask a question. Given that many AI developers, especially smaller startups and public-sector innovators, often lack, as you already said, the capacity to navigate this complex compliance framework, how do you envision supporting them in aligning with the convention and with other EU regulations? Is this methodology enough for them, or do you foresee something else to help them navigate all this? Thank you.

Jordi Ascensi Sala: Thank you. It is a very interesting question, and it touches my heart. Why? Because I come from a small country with only 85,000 people. So to me, it is not only about the small companies, although the majority of developers are small companies. Of course, there are also the big ones, the big players, but they are not the problem: they have lots of lawyers and lots of people dedicated, or saying that they are dedicated, to understanding the risks of AI systems. But in my case, this is something we have been stressing at each meeting: there are going to be small companies, but also small municipalities and small public institutions that don’t have a ton of means to understand this. So with the secretariat, we approach this in a way where, first, we have to have a tool that will help with this capacity building: a training tool that helps make the methodology digestible. We are thinking about how to do it, but we have to think about the user design, UX and user experience and so on, because the perspective of computer scientists or small companies is different from that of small municipalities; it is a different thing. But we can use the same approach for this tool, which in my mind, and we haven’t decided yet, and I am looking at the secretariat and I hope I am not putting myself in a completely difficult situation, but in my mind it will be a visual tool where you follow the HUDERIA process and then you have different definitions that help you understand why a question is asked in that way and in which capacity you are answering it. So these are my thoughts; we are dealing with that. We have several meetings to hold with municipalities, but also with public bodies, with the states, and also with developers, to grasp or to understand what a useful tool will be. This is one part: training and capacity building, and then a tool that helps you go through the process. And the second part will be this common knowledge platform. Let’s say I am working on a tool that will help assess financial credit for housing, and I am a developer and I want to prepare a tool like this. In my mind, it would be interesting to see what other cases are similar to this one, where the questions were asked, and what the context-based risk assessment and the stakeholder engagement looked like, and then from this I can start the design of the application in a very specific way that is in line with the HUDERIA methodology. This is something that we are developing, or thinking about, right now. This is the reason why Jasper said that there is an important job to do for the CAI, or the follow-up committee that will be in charge of this, because the baby is born, but we need to help him or her walk, and we need to do it in a very precise way to make it useful and broad in its way of making the methodology understood, because otherwise it is going to be a big ton of papers in a drawer, and we don’t like that. Believe me, I have been very specific about this, because for my government it is going to be something very difficult to digest, and we are one of the signatories, so it is in our interest to make this, I don’t want to say easier, but manageable at our scale. So yes, we are doing this; we don’t have the details yet, but think about this academy, you know, a capacity-building tool that helps you navigate, and then a library of knowledge of cases.

Mario Hernandez Ramos: Thank you very much to all our panellists, and thank you to you. This discussion underscores the importance of this treaty, and of the regulation of artificial intelligence in general, in shaping a future where artificial intelligence is a force for good, protecting human rights and upholding democratic values. I hope the insights shared today will inspire all like-minded states to consider joining this landmark initiative. Thank you all for joining us, and I look forward to seeing this treaty come into force and to fruition. Thank you very much for today.