How AI impacts society and security: opportunities and vulnerabilities – WS 08 2025


13 May 2025 | 16:30 - 17:30 CEST | Room 10 | Transcript
Consolidated programme 2025

Proposals: #4, #9, #71 / (#50, #56, #72)

You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published on the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.

Kindly note that it may take a while until the Org Team is formed and starts working.

To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.

Session teaser

AI opens up new opportunities and challenges for security: it can be used to orchestrate sophisticated cyberattacks, but also to detect and defend against them. AI itself introduces new vulnerabilities as well. Models can be ‘hacked’ by exploiting blind spots that arise from false generalisations in automated learning, and although AI systems are often protected against misuse, those safeguards can be jailbroken.
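
To make the idea of a ‘blind spot’ concrete, here is a minimal, purely illustrative sketch (not part of the session materials): a toy logistic-regression classifier trained on synthetic data is flipped by a small, targeted perturbation of the kind adversarial-example research describes. All data, dimensions and parameters are invented for illustration, and only NumPy is assumed.

```python
# Illustrative sketch only: a toy adversarial example against a simple
# logistic-regression classifier trained on synthetic data. It shows how a
# small, targeted perturbation can flip a model's decision -- one of the
# "blind spots" mentioned above. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(0)
d, n = 100, 400                                   # features, samples

# Two synthetic classes with weak per-feature signal (means -0.15 and +0.15).
X = np.vstack([rng.normal(-0.15, 1.0, (n // 2, d)),
               rng.normal(+0.15, 1.0, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Train logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # predicted P(class 1)
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * np.mean(p - y)

print("clean accuracy:", np.mean(((X @ w + b) > 0) == y))

# FGSM-style perturbation of the class-0 samples: nudge every feature by a
# small epsilon in the direction that raises the class-1 score. Per feature
# the change is small, but summed over many features it flips most predictions.
eps = 0.3
X0 = X[y == 0]
X0_adv = X0 + eps * np.sign(w)
print("class-0 samples misclassified after perturbation:",
      np.mean((X0_adv @ w + b) > 0))
```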

Session description

Artificial Intelligence (AI) impacts cybersecurity and society in two main dimensions, acting both as a powerful defender and a formidable adversary. This makes AI a critical area of focus, demanding deeper consideration than ever before.

For attackers, AI enhances cyberthreats through sophisticated phishing schemes, deepfakes and automated malware creation, making attacks harder to detect and counteract and lowering the barrier to entry, so that even non-technical individuals can develop malicious software. Hackers also use AI to create new attack vectors and exploit vulnerabilities. Moreover, the potential for AI jailbreaking or poisoning, whether through normal use or intentional abuse, poses significant risks for both cybersecurity and society in general.

For defenders, AI is a valuable ally. It can detect and analyze threats automatically, uncover patterns in vast datasets, monitor network traffic, and identify anomalies in real time. AI may analyze data from various sources to identify emerging threats and provide actionable intelligence, thus helping to prevent attacks, reduce response times, and prioritize vulnerabilities.
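
As a purely illustrative sketch of the defensive use described above (not part of the session materials), the snippet below applies an unsupervised anomaly detector to invented network-flow features. It assumes NumPy and scikit-learn are available; the feature names and values are hypothetical.

```python
# Minimal sketch (illustrative, not from the session): unsupervised anomaly
# detection over simple network-flow features, the kind of "identify anomalies
# in real time" task described above. Assumes scikit-learn is installed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical flow records: [bytes sent, packets, duration in seconds].
normal_flows = np.column_stack([
    rng.normal(5_000, 1_000, 1_000),   # typical transfer size
    rng.normal(40, 10, 1_000),         # typical packet count
    rng.normal(2.0, 0.5, 1_000),       # typical duration
])

# A handful of exfiltration-like outliers: huge transfers, long duration.
suspicious_flows = np.column_stack([
    rng.normal(500_000, 50_000, 5),
    rng.normal(4_000, 500, 5),
    rng.normal(60.0, 10.0, 5),
])

# Fit on traffic assumed to be benign; contamination sets the alert threshold.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# predict() returns +1 for inliers and -1 for anomalies.
flagged = model.predict(suspicious_flows)
print("suspicious flows flagged as anomalous:",
      int((flagged == -1).sum()), "of", len(flagged))
print("normal flows flagged (false positives):",
      int((model.predict(normal_flows) == -1).sum()), "of", len(normal_flows))
```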

Taking all of this into account, AI's role in cybersecurity is a global issue requiring international cooperation. Countries are integrating AI into their cybersecurity strategies, creating opportunities for collaboration and shared defense mechanisms. This also means that AI-driven attacks and the potential for harmful use of AI are global challenges that need coordinated efforts to address.

Format

The session will include a short introductory presentation on the key points to discuss, with practical use cases, followed by an open-floor discussion. Several targeted questions to the Key Participants will serve as starting points and help facilitate the discussion. All participants are also invited to contribute to a shared interactive board or similar online space to share thoughts before the workshop as well as in real time. It will be available before, during and after the session so that the discussion can continue. The link to the board will be posted here.

Further reading




People

Please provide name and institution for all people you list here.

Programme Committee member(s)

  • Jörn Erbguth, University of Geneva

The Programme Committee supports the programme planning process and works closely with the Secretariat. Members of the committee give advice on the topics, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and oversee the complete programme to avoid repetition among sessions.

Focal Point

  • Piotr Słowiński, NASK – National Research Institute

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective member of the Programme Committee and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.

Organising Team (Org Team)

List Org Team members here as they sign up.

  • Piotr Słowiński
  • Aldan Creo
  • Wout de Natris - van der Borght, denatrisconsult
  • Isti Marta Sukma, University of Warsaw

The Org Team is shaping the session. Org Teams are open, and every interested individual can become a member by subscribing to the mailing list.

Key Participants

  • Chris Kubecka - LinkedIn
    Chris Kubecka is an American computer security researcher and cyberwarfare specialist. In 2012, Kubecka was responsible for getting the Saudi Aramco network running again after it was hit by the Shamoon cyberattack, one of the world's most devastating. Kubecka also helped halt a second wave of the July 2009 cyberattacks against South Korea. Kubecka has worked for the US Air Force as a Loadmaster and for the United States Space Command, and is now CEO of HypaSec, a security firm she founded in 2015.
  • Janice Richardson - LinkedIn
    Janice Richardson is Vice Chair of the Education and Skills Working Group of the IS3Coalition. She is an expert of the Council of Europe and founder-director of Insight SA, a network of experts working together to educate & empower citizens, promote and protect their rights. A member of the youth advisory boards of Meta and Snapchat, she also works with governments and NGOs, international institutions and universities in Europe and worldwide to increase understanding of the societal impact of digital technology, and prepare tomorrow’s citizens for the challenges ahead.
  • Thomas Schneider - LinkedIn

Moderator

  • Wout de Natris - van der Borght, denatrisconsult - Confirmed moderator on site - LinkedIn
  • Piotr Słowiński, NASK – National Research Institute - LinkedIn


Remote Moderator

Trained remote moderators will be assigned by the EuroDIG secretariat to each session.

Reporter

The members of the Programme Committee report on the session and formulate messages that are agreed with the audience by consensus.

Through a cooperation with the Geneva Internet Platform, AI-generated session reports and stats will be available after EuroDIG.

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Video record

Will be provided here after the event.

Transcript

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

The Geneva Internet Platform will provide transcript, session report and additional details shortly after the session.


Remote moderator: Good afternoon everyone and welcome to workshop 8, How AI Impacts Society and Security, Opportunities and Vulnerabilities. My name is Alice Marns and I will be remote moderating this session. And behind the scenes, I am joined by Neha Chablani, the online session host. We are both participants in this year’s YOUthDIG, the youth segment of EuroDIG. And I’ll briefly go over the session rules. So the first one, please enter with your full name. To ask a question, raise your hand using the Zoom function. You will be unmuted when the floor is given to you. When speaking, switch on the video, state your name and affiliation. And finally, do not share the links to the Zoom meetings, not even with your colleagues. Thank you and I’ll pass.

Wout de Natris – van der Borght: Thank you. Welcome to workshop 8. As my colleague next to me just said, How AI Impacts Society and Security, Opportunities and Vulnerabilities. My name is Wout de Natris van der Borght and I am your co-moderator together with Piotr Słowiński, who is in Poland at this moment. Piotr is a senior cybersecurity expert at NASK, the National Research Institute, specializing in legal and strategic analysis of cybersecurity and emerging and disruptive technologies and their impact, as part of the cyber policy team at NASK. And we organized this session together with Aldan Creo, who’s also participating online and who is a technology research specialist at Accenture Labs in Dublin. And he studies AI fairness and security, particularly when it comes to the detectability of AI generated text. And myself, I’m an internet governance expert and consultant, but also coordinator of the IGF Dynamic Coalition Internet Standards, Security and Safety, which advocates the deployment of existing security-related internet standards so that we all become a lot more secure and safer on the internet and in the world at large. About the topic itself, AI is all around us and for far, far longer than most people realize. For most people it’s a large language model that was introduced now one and a half years ago and all of a sudden we were all working with this model and that was AI. But AI is not just a large language model. It’s part of all sorts of algorithms that determine what you see on social media, what your other online experiences are, what is being monitored around the world and what is inside of your devices and even in military equipment. In fact, it may partly determine what people do and what people think. The development of AI comes with opportunities and challenges and the focus is often put on the challenges and the dangers to society and individual jobs, right up to fears of a Skynet from the Terminator movies. We will be discussing this topic with you also through Mentimeter, with all of you in the room so be sure to use the QR code Piotr is going to show later after his presentation on the screen and with the three participants that I will introduce to you. On my left is Janice Richardson and Janice is the working group chair of IS3C Working Group 2 on education and skills but also has a company called Insight in Luxembourg. Next to her is Thomas Schneider, Ambassador Schneider, I should say, from the Swiss Confederation and he is also one of the people who assisted in getting the AI, what was it, AI confident, not confident, what is it called? of the Framework Convention of the Council of Europe, and he was the chair of that process. And to my left here is Chris Kubecka, who is an American computer security researcher and cyber warfare specialist. First, I go to Piotr in Warsaw and give the hand to you to introduce the session further, Piotr, and then explain what we’re going to do on Mentimeter. Thank you.

Piotr Słowiński: Thank you, Wout. I will share my screen right now. And I will just need a confirmation that you all see the screen.

Wout de Natris – van der Borght: Piotr, we need you to put up your sound a little bit, I think.

Piotr Słowiński: Can you hear me now better?

Wout de Natris – van der Borght: I can, but yes, I get thumbs up, so go ahead, Piotr.

Piotr Słowiński: Okay, great. And I think that you can see my screen, at least you should by now. So, yeah, welcome. I will just dive deep right into the subject. I will be setting a scene just a little bit, just to give you some thoughts that we would like to discuss with you, with all of you, because we would like the room to make a big contribution to our workshops. And let’s start with setting a scene for when we are organizing this session, we thought about pillars that we would like to discuss. And of course, the first pillar that comes to mind when we talk about AI, it’s AI governance and regulations. It may not be the most interesting topic for many people. Of course, me being a lawyer and having a soul of a lawyer, it’s always, yeah, we need to talk about governance and regulations. And, of course, in this field, in this area, within this pillar, one of the most important things is this clash of interests that we have. It’s always going to be there, regardless of the topic that we will discuss, whether it’s going to be Internet in general or just AI, new and emerging technologies. And always the question that we need to ask is whether we want to regulate something or not to regulate, or when does the regulation become over-regulation, when it becomes overburdened for private sector, for example, and public sector as well. And this is always the general question that we ask, how to implement regulations that they will be a facilitator for innovation and just the opposite of facilitator. And, of course, we need to bear in mind also the role of state in safeguarding rights, liberties and society’s well-being, as well as role of international companies in developing, for example, in this example, AI tools and systems versus the role of the states. And just to give you these quick scenarios, they’re going to be, it’s going to be available for you as the repository on EurodigWiki, so you don’t have to read everything very closely. We have tools that are being implemented within even the public sector. And the problem, the main problem is, for example, that non-public documents may be put or non-public information may be put into such tools, such large language models and can be processed by them. This is a problem that we need to also discuss and it’s going to be, it’s going to have to be regulated, for example. Another problem is, for example, law enforcement use of AI in predictive policing or remote biometric identification. And the second pillar, of course, the pillar that is the most interesting for me, being, working in the cybersecurity right now, is use of AI in cybersecurity. And there are so many dimensions of AI in cybersecurity. It’s not just what I have put here, red team versus blue team. It’s quite obvious, of course, but we have also AI jailbreaking or poisoning. At what point does it become a problem for just users, just companies, or rather the whole society? This is the question that we need to ask ourselves. How to differentiate between the intentional abuse of AI, AI tools, AI systems, and the situation in which normal use ends de facto with the abuse? This is a very important thing that we need to consider. Also, AI may be a valuable partner as both in crime, so to speak, and as an ally in our defences. How do we implement it ethically, responsibly, and effectively? And also we need to ask a question, is AI in cyber security a sledgehammer to crack a nut? How much is AI really needed in cyber security right now? And how much of a game changer can it really be? 
This is just a question that we need to ask ourselves. I’m not looking to impose on you any kind of answer right now. I hope we can reach some kind of agreement today. And you have quite basic red and blue scenarios in which AI is a facilitator for a person, a script kid who doesn’t have much extensive technical knowledge and can utilize the automated malware creation for using it in cyber attacks. And of course, blue team use in SOC teams and when it can automate a lot of various ideas, issues, and challenges that the SOC team may encounter. And last but not least, also a very big area that we need to consider is the international cooperation and education. We could have easily changed them both into separate areas, but I just wanted to put them in connection. because they are also connected with all the other pillars that we discussed already. Strategies for AI development and implementation are a global challenge, not only regional or local. The Council of Europe Convention is just one dimension of it. We have AI Act on the EU level. Here is the battleground, really battleground, or may become a battleground for states versus companies. And at the same time, there’s a lot of questions. It’s not only regarding AI, but also in cybersecurity in general. Is EU level regulations a facilitation or just putting stick in the spokes of various sectors, various entities? And of course, sorry, of course, the role of international organizations and communities is quite extensive, but how is it viewed and how can it be viewed by different stakeholders? This is also a very important factor to consider.

Wout de Natris – van der Borght: Piotr, sorry, I can give you one more minute.

Piotr Słowiński: Yes, I’m just wrapping it up. Thank you. And this is what we also need to… The big issue that we need to consider is the global ethical framework for AI. Is it really needed? How much is it needed? If yes, and so on. And also not to mention, not to forget, the fact that we need to protect minors and vulnerable groups. There is a problem with education and competence gaps, digital illiteracy. It’s an issue that we have talked about for years now and it still is a thing, especially since AI tools have been evolving very rapidly and being used very extensively. And last but not least, ethics and society’s well-being. Is it just another phase of security or a completely different area that we need to consider? With these things, with this final sentence, it’s the end of my presentation, just setting the scene for our conversations. I hope the discussion will be very fruitful. Here are the instructions to join: you can either enter the site on your computer and input the code or scan the QR code if you are able to do so. So I will leave it for now and in about 10 seconds I will share the Menti that we have prepared for you. Thank you, Wout.

Wout de Natris – van der Borght: Thank you, Piotr. Please scan the QR code or put in the code and then we’ll run the first questions so that you can actively participate in the session. And as you will see, there are positive questions and the negative side questions, so please join us. Can you put the first question on, Piotr?

Piotr Słowiński: Yes, I’m just… Yes, now it should be… Oh, sorry. Sorry about this. Now… It should be right now.

Wout de Natris – van der Borght: This is the QR code.

Piotr Słowiński: Yes, then do you see the… It should be within the web browser, so it should be visible. Okay, great. I get thumbs up, so I suppose it’s okay. So we will start with the first question and the questions will supersede the questions and answers from our participants. So the first question is, what comes to mind when you hear AI and cyber security together? And yes, you can start answering right now.

Wout de Natris – van der Borght: Okay, well, people are answering. In the meantime I will start introducing Thomas, our first keynote speaker. Thomas, you are going to address a few topics for us and I think that with Thomas’s background in government and having worked on the Framework Convention, we would be interested to learn what will be the main challenges for AI governance, and will the current international and multinational, for example EU level, and national level regulations or best practices be effective in terms of mitigating detrimental use of AI systems and solutions? And finally, how to properly address, in terms of governance, the possibility of dual use of AI? So Thomas, the floor is yours.

Thomas Schneider: Thank you. Good afternoon, everyone. I try to say something reasonable in this very helpful, hopefully in this very complicated situation because yeah, we know AI governance is a huge issue and we know that the challenges and the risks are context-based depending on which sector you are and even within the sectors and what an application is used for and AI governance has enormously many components that somehow need to be brought together in a coherent way, which is a challenge. You have ethical issues, you have social issues, you have economic issues, human rights, democracy, rule of law, as we had the Council of Europe. So, it has many sides and only, for instance, if you look at security and resilience, you can list like 50 items that you would need to basically take care of, which become much more important the more we rely on AI in our daily lives in all aspects, like we did with the Internet and so on. So, if you just look at the security resilience aspects, You can maybe divide into two areas of motivations. One is malicious actors that try to damage a system or weaken an enemy or whoever they call an enemy through attacking a system for creating damage. But you can also have security or resilience problems just by mismanipulation or mistakes that are made or whatever. And then also this can be on the algorithm side, on the programming side. It can be on the data that you use. It can be on the infrastructure and hardware that you use to process data with algorithms. So then if you take this as a separate thing and those that have been following what has happened in Spain, it has been mentioned, it can just be an electricity issue and then everything is basically down. So there’s lots of facets just in the security or in the resilience part. And one of the challenges that we face is that the world, the digital world, gets more and more complex. Everything is interconnected, interdependent. If you turn a screw here, then you may feel consequences somewhere else where you wouldn’t expect it. And if we look at our governance system, the way it’s been set up, you have politicians that basically most of them have no clue of all of these things, but also the citizens that are supposed to vote or elect politicians have no idea. So the experts’ knowledge is a challenge for our democratic societies. And then the question is, yeah, what do we do with this? How do we solve this? On the other hand, I think there’s no reason to panic because not every, at least on the logic of the question, there’s not that much new with AI. We had other disruptive technologies before that we had to somehow learn to cope with. And I often compare AI, because if people say data is the new oil, actually AI can be compared to engines in many ways, because you can also have dual use issues where you can put an engine in a hospital car, you can… put the same engine in a tank and so on and so forth. Airplanes can be used to transport people or stuff or they can be used to carry bombs. So you have the dual use issue. You have the same logic of context-based risks with engine and with engines, no matter, it depends on where you put it. You don’t have one single engine convention that solves all the problems or one EU engine act that solves all the problems. 
You have thousands of standards, of legal standards for every situation, for the infrastructure that is used, for the people that are manipulating engines and so on, for the way that an engine fits into a car and then the brake system needs to be corresponded. So you have thousands of technical norms. You have thousands of legal norms, but you also have social norms that are not even written down that you behave in a certain way in a certain situation. And we are, we have to and we are developing basically something similar when it comes to AI and data and the digital world. So this is, in terms of the logic, it’s not new. Of course, there are differences in engine. You can copy an engine, but it takes time. If you move the engine around the world, it takes time. And with a dematerialized resource like data and a dematerialized tool like algorithms, of course, there’s other issues and time and so on that you cannot compare. But the logic of trying to find, develop a complex system that is fit for purpose, context-based and agile is basically the same. And what we also see now is, as a Swiss, of course, we are following what the EU is doing in terms of their logic. And this is completely different from the logic in my country. The EU has the resources and the willingness to develop a coherent vision about the digital future. What are all the aspects from labor to security, resilience, blah, blah, blah. In Switzerland, we are the opposite. Nobody gives us resources to do strategic planning. They expect us to wait and see. And when the problem is there, to react very quickly and then develop a solution bottom up very quickly. Both systems are quite different. They both have their advantages and disadvantages, and we can actually both learn from each other. Normally, you end up somewhere in the middle, converging with systems so that they are interoperable. And the same is happening with the AI regulation on, let’s say, on a jurisdiction level. And on a global level, we may try and agree on some basic principles that there’s a somehow shared understanding about. But then we do lack the tools to implement them in a binding way. And the Council of Europe Convention is an interesting tool in the sense that it also tries to combine long-lasting principles that should hold for decades, while giving the actors the flexibility. And this has been criticized, but I think it’s the right way to do, to be agile, to adapt these principles to who you are as an actor, to in which area you’re operating. The AI Act tries to do the same, but the AI Act is much more specific. And then you have the Annex III that you need to update. So also there, you have different levels of instruments that are complementary. You have something that is very general with principles that should hold. And the more you go into concrete regulation, the more you need to be adaptive and natural. And I’ll end with one sentence that our governance system was built by the industrial age. You had industrial milieus, you had the working class, you had the entrepreneurs, and you were trying to reflect the representation of the people through these milieus. This is now all going down the drain because these milieus don’t exist and traditional parties disappear. So we have to think about a new way of multi-stakeholder representation that is more agile, like the milieus of people are more changing. And we may also have to develop more agile regulatory means than laws that take five years to develop. 
We may have to use AI and new technologies to regulate or govern AI and new technologies. And this is also something that may take a generation or two, but I think more and more people are realizing that we somehow need to modernize our governance models. Thank you.

Wout de Natris – van der Borght: Thank you, Thomas. A lot of food for thought, I think, with many challenges, but also a message of hope in there. So, yes, switch off, thanks. So, thank you for that. I’m looking at the results, it’s still changing a little bit, so progress won out over threats, which were for a long time at the same point. I see one hope. Is the person who voted for hope willing to give us one sentence on what that hope is? Because I’m really, really curious. Who voted for hope? Is that in the room or online? It’s Aldan. So it’s Aldan. Okay, Aldan, would you give the one sentence to explain your choice? Because it’s so different from all the others. Sorry for putting you on the spot. Are you there, Aldan?

Piotr Słowiński: Yes, he cannot, he cannot, oh, yeah.

Aldan Creo: Okay, I got the question now. Yeah, well, I mean, to me, like, it just gives hope because, you know, like, you can try to merge, like, the two different facets. It’s true that people were very polarized, you know, like, they were all going for one or the other, but actually, for me, it’s like something like more in between in the sense that you can try to take the advantages of both. Well, that’s a very short sentence, sorry. But yeah, like, I really think, like, there’s hope in that.

Wout de Natris – van der Borght: And thank you for that. I think it’s a very good answer that we can have hope with the new technology. The next speaker is Chris Kubecka. And the main question she will be addressing is what are the main threat or attack vectors that you observe in connection with the development and deployment of AI systems, which may be utilized by threat actors in malicious activities. Chris.

Chris Kubecka: Oh boy. Well, thank you so much for having me. This is my first time here at this wonderful building, at this conference. And I’ve been working with artificial intelligence and cybersecurity since 2007. You can see my work showcased in the definitions of cyber warfare and security information and event management on Wikipedia, as well as in numerous academic articles and journals, and used in numerous universities teaching cybersecurity and cyber warfare. Because I have so much experience in the different umbrella terms underneath artificial intelligence, such as machine learning and natural language processing, which play a lot into this, I have seen a lot of things, and right now I can tell you I am having a lot of fun doing research in these areas. When I’m having a lot of fun, that means things are going terribly wrong. I do not want to fill you with fear. I do actually want to give you a bit of hope. But we are in a very interesting time when it comes to how we are handling such emerging technology. Now, with my experience, I’ve had a lot of experience, lots in the Middle East. I was the former distinguished chair of the Middle East Institute Cyber Security and Emerging Tech Program, where Richard Clarke and I co-authored the world’s first cyber peace treaty, which is now an addendum to the Abraham Peace Accords between the UAE and Israel. Because we saw what was going on already back then and how emerging technology could be utilized not only for good but unfortunately for not so great circumstances. Now, even though I come from cybersecurity and officially my profession with the US government is hacker, not criminal hacker, but hacker, one of the biggest challenges to me with artificial intelligence right now isn’t so much that it is super, super evil with no way to come back from it, but I do see that we need a lot more transparency and regulation when it comes to social media. Handling first-hand events and being involved in things like the recent election annulment in Romania, and Georgescu was a very interesting case where TikTok algorithms and instructions on how to game those algorithms were sent to certain followers by direct message from Mr. Georgescu himself last year. And I worked closely with the Romanian government on that case; you can also see that showcased both on Romanian news and in Bulgarian news as well as international media. Now, I see a lot of manipulation, and when we bring up things like this digital divide when it comes to digital education and technical competency, far too often we are seeing that when someone gets sent a picture or a meme or an article that looks legitimate, threat actors, as I’ll just call them, can leverage and exploit generative AI, as well as other types of technology with natural language processing and machine learning, in such a way that you can build very quickly, within minutes, basically a digital persona of your target group and you can take advantage of that.
I actually have some statistics, if you check out the wiki page for this particular group you will see I recently published both the introduction and table of contents for How to Hack a Modern Dictatorship with AI, the Digital CIA OSS Sabotage Manual of which I used prompt injection to craft the entire book and make AI as evil as possible to show how dictatorships are currently using this technology as a weapon, but how us as the public and policy makers, legislators, and so forth have a way to go, here are the ethics, here are some of the things that we can do, here are some of the tools that we can use to detect and fight against some of this. Now I do see hope, but boy oh boy, I do see the absolute need of building better detection for things like deepfakes, AI generative malware. Recently I was also covered by news on creating the world’s first zero-day GPT, of which I went public and have been posting a lot of academic articles on and working with a variety of different governments and universities on researching this more. Again, I’m not a criminal hacker, but I am a hacker. So I want to leave you with this. Although my wonderful colleague had given you the idea that perhaps AI is an engine, now to me, when they say big data is new oil, I see AI as the refinery. And from that, many, many great things can occur. But right now, we’re getting flooded with, unfortunately, all the negatives. And hopefully this will change soon as we build detection, we build legislation, and hopefully global regulations so that big companies like Google, for example, cannot get away with offering their services to dictatorships, like I discovered in Venezuela last year going public. Thank you very much.

Wout de Natris – van der Borght: Thank you, Chris. I think we heard a lot about threats, but that’s part of the positive and the negative side that we are dealing with. You’ve seen that there’s a second question, which of these AI threats do you find most concerning? And as you can see, deepfakes and data poisoning are about the same. There’s no fear of jailbreaking, although some people think that may be the most serious one, but there’s one other. And I’m very curious what that other is, because otherwise we don’t learn. So who voted for other? Please introduce yourself and then motivate your choice.

Audience: My name is Schnutt Stöhr, I work at the Council of Europe here. I think it is undetected bias.

Wout de Natris – van der Borght: Thank you very much. I think that’s certainly it and there’s a second other all of a sudden. Someone was inspired. Janice, are you number two?

Audience: No, I was too busy watching other things and didn’t look at the question. You’ll hear my answer when I begin talking so I won’t see it now.

Wout de Natris – van der Borght: Thank you. Who’s the second other who joined after? Oh, that was you. Okay, it was you. Okay, sorry. Then I understand now. Then that means that the third question will go on, Piotr, while Janice is starting to talk. So here’s the third question for you. Janice, you’re our final key participant and after that we’ll ask the room to comment or ask questions. Janice, you are going to tell us how educational institutions can collaborate with governments and industry to co-develop curricula that reflect real-world AI governance, development, research and implementation challenges. So Janice, please.

Janice Richardson: Thank you. So good afternoon, everyone. Interesting question but let me look at this word ethics which everyone seems to place at the heart of how we use AI and why is ethics so important? Well, it seems to me that governance depends on ethics, that the creation of the tools themselves depends on ethics and also those of the users. When we’re looking at threats, I don’t really see how cyber security can help us when we’re confronted with a false website. It’s there. We need to do something about it and here is when we have to use our logic and I think opportunities is very important but we’re not using the opportunities today to totally change the education system so that we’re actually tackling today’s problem and learning to be literate in the 21st century. It seems to me that the principles that Thomas spoke about earlier are absolutely crucial because what is ethics? It’s values, it’s attitudes, it’s skills also and it’s knowledge and understanding and the Council of Europe has put this into a whole program called Digital Citizenship Education which, if a young person really masters the 20 central competences, well, not only the young person, also those dictating the governance, making the regulations and creating the tools, but if we all master these 20 competences, I think that we will have a whole different approach to AI. Finally, AI is only a tool. It’s the user who is making it a good tool, a bad tool or as Thomas said, a plane to carry passengers or a plane to carry bombs. How are we going to go about it? I sit on the advisory board of Meta and of Snapchat and our job is to think of all of the things that these new technologies, all these latest gadgets that they’re adding to social media, how are these putting at risk the users and how do we push back to protect the rights of children and of all users? I think there’s only one way to do this. A few years ago I did a study for the IS3C that you mentioned earlier. And what did we find? Business expected one thing, university graduates were coming out with totally different skills, but really there was a big gap between the two. So now to answer your question, I think first of all we need to create a giant hub, a hub where industry really starts talking to governance, to governments, where also young people who are using this in very different ways can actually also have their say. Who’s seen the film, the Netflix series Adolescence? I think you’ll agree that there is a whole underground movement going on between young people and we have no idea of what this is and we’re not actually listening to them to try to understand it. So my idea in response to your question, let’s create a hub, let’s bring key actors or delegates of these key actors together so that we start talking firstly about what we can do in terms of education so that from all sides of the question we are actually creating an education system which will help us know how to use these tools as carriers of people and not carriers of bombs. Once we’ve done that, perhaps we can have a much greater influence on industry and perhaps on those who are creating the regulations. I’ve had a close look at the AI Act and the Framework Convention and both of them centre on ethics and on an understanding of ethics. So how can we move forward if we don’t solve this problem first?

Wout de Natris – van der Borght: Thank you, Janice. That’s a clear challenge for the world to tackle. And the question is how do we get to this hub and where are the people who are willing to join in this discussion? We see that the third question has been answered and that the answer is clearly no, with a little bit of yes and a little more don’t know than yes. Piotr, I’m going to ask you to put on the final question and then I’ll start opening the floor for questions and comments. So who would like to ask the first question or comment on what you’ve heard? Is there anything online? Online participants can of course join by asking a question and, if possible, we will give you the floor, or otherwise we will read the question for you. So who has a question? Or was everything so clear, or are you so desperate that we’re never going to change this? Yes, please introduce yourself first.

Audience: Hi, my name is Frances and I’m from YOUthDIG. I think my main question is about the last question we had. I didn’t really understand what the trade-off is between ethics and cyber security. Because the question was phrased like: would you be okay with enforcing cyber security regulation if it meant you gave up ethical standards? Surely these two are not mutually exclusive and they actually basically work together. And then my second question is about deepfakes. So the first question asked what’s the biggest danger to you? And I’m by no means an expert on how deepfakes are currently regulated, but I cannot see any positive impact of deepfakes. And this is clearly something: a deepfake is the absolute embodiment of misinformation. And so therefore why shouldn’t regulation just be blanket regulation against anything that you know to be a deepfake? Because it’s false, it’s made up, and even if it’s creative, the social benefit of creative art that is a deepfake is by no means better than the incredible harm that deepfakes can do in terms of the proliferation of non-consensual explicit images of… Yeah, I mean, I could go on. Anyway, thank you.

Wout de Natris – van der Borght: Your message is clear, thank you. I’m going to ask Piotr to respond as he made the question, so he can explain a little bit, and perhaps, Thomas, that you would like to take another part of the question. So, first, Piotr.

Piotr Słowiński: Yes, just to clarify, of course: what I had in mind when I prepared the question about security superseding ethics is mostly connected with what we can observe in certain countries where it’s stated that security is the most important thing. We need to protect ourselves, we need to protect our country, we need to protect the cyber borders, so to speak, and we are allowing or we are going to some kind of places that seem very dark, where we supersede a little bit of ethics, a little bit of civil rights, liberties, and so on, for the sake of being secure. What I find about this issue is that it’s very universal. This is a discussion that we have had since the beginnings of the civil liberties and civil rights movements and their development. So, this is the kind of thing that, of course… I am very glad about the answers that we received, that we concluded it’s a no from the participants both online and in the room. So, this is what I meant. Maybe it wasn’t very clear. Sorry about that. So, I just wanted to clarify it and I hope it’s clearer now.

Wout de Natris – van der Borght: Thank you. And Thomas, would you like to take part of the answer?

Thomas Schneider: Yeah, sorry. I’m a man, so I’m very bad at multitasking. I was trying to enter something into the main thing, and I missed the second part.

Wout de Natris – van der Borght: If you could just very quickly repeat the second question, please.

Audience: Yeah, my second question was essentially, deepfakes, they’re very clearly and non-contestably an embodiment of misinformation. So why can’t we just say at any point that we know a deepfake is a deepfake, just regulate against it? Is that already what’s trying to be happening, and the issue is more that we can’t always ascertain what is a deepfake? Because, for me at least, the biggest threat is deepfakes. Because misinformation, you can have misinformation and then you can have opinions. And this is like, either something is true or false, and then you have this grey period and a grey space, where it’s actually opinions and what people think about a certain issue. But a deepfake is completely made up, and so surely this is the biggest threat that we can regulate against, and have blanket regulation against. Yeah, thank you.

Thomas Schneider: Okay, thank you very much. Two or three things. First of all, faking information has been an issue with every tool of communication. When the Gutenberg printing press was invented, and it wasn’t only the church that was allowed to distribute leaflets, you had a democratization of the definition power, but you also had lots of fake news that ended up in local wars, in uprisings and so on, and you have it with radio, with television, with the internet, with everything. One big country in the East was very good at taking people out of and into photos. It took them a little bit more time than what you do now. So, this will not disappear, no matter how you regulate it, for several reasons. One is, where’s the line between a deepfake and a lightfake and a nofake? It’s also, you would have to forbid culture, art, whatever. You have tools that you can use for many things. Where do you draw the line of what is allowed or not allowed, and in what context? It’s about forbidding words. You can’t have humor, you can’t have satire anymore if you draw the lines at the wrong place. That’s one of the elements. But you may require, in certain cases, you may require maybe watermarking and a declaration of what you did do to a source, to an image or to a video. If it’s like public service broadcasting, there are rules that are different from just a commercial TV station that can do whatever they want because they don’t receive public funding and so on, or they are not perceived as having to be true. And so I think it’s more complicated, but I’m also convinced that we will find a way to deal with deepfakes in a way that there will be technical solutions to some extent, and then societies will have to develop ways to know who they can trust. This is serious, but if you have watched CNN and Fox News in the US, you know that you live in two completely different worlds. And you need education, you need a set of measures. And just the last thing, the question is also, what role should the state have? As a Swiss, you would never want your government or your state to tell you what is right or wrong, because you would want to have a political debate in a society and then the society may politically decide what they think. But you may have facts that you trust and may have facts that you don’t trust. But it’s a very exciting issue, but I think we cannot avoid a societal debate about how we trust whom and what we believe in.

Janice Richardson: When you buy clothes, there’s always a label inside. No matter what you buy, there’s always a label. And I really can’t see why anything that is produced through AI doesn’t have some sort of watermark or stamp. I think it’s technically feasible, and even if it’s not a fake, we do have the right to know that it’s AI that created it and not a person. So this is something that I’ve really pushed for and will keep pushing for. Why can’t we simply watermark everything produced through AI or through a technical tool?

Wout de Natris – van der Borght: Another question, yes, please introduce yourself first.

Audience: Yes. Hi, good afternoon. Can you hear me? My name is Mila Vidina. I represent the European Network of National Equality Bodies. And we work… so equality bodies, rather technical: those are public authorities that specialize in non-discrimination and equality law, and what makes them different from other institutions like public ombuds and human rights institutes is that they work with the private sector. So they have a mandate that covers the private sector and they handle complaints, all of them, which is not always the case. So they have frontline work with victims of discrimination. They provide legal advice, litigate, investigate. So a more comprehensive set of powers. Well, that said, our members work on hate speech and hate crime, or rather against it. Some of them have a law enforcement mandate, not all of them. And I also work with immigration authorities. And I’m interested… So I have one question related to that. How much, and please excuse my ignorance, I don’t know what the mechanisms are, to poison or basically to instigate a system, to tamper with its settings so that it generates hateful, debasing content. This is one. I mean, mostly hate speech, borderline hate crime, because in some cases it could, you know, instigate racial hatred and that leads to violence. So this is one question, how cybersecurity interlinks with hate speech and hate crime. It would be interesting to educate our members, public authorities. And the second question I have is: here in the panel, we talk about cybersecurity defenders using it for good and attackers being the malicious, you know, hackers, but how about the third scenario, which is who guards the guardians? So the defenders, so state authorities legitimately using, in many contexts, tools to safeguard cybersecurity in order to surveil, and so where is the… What are the guardrails when there are legitimate processes and legitimate discourse on cybersecurity, how do we ensure that it doesn’t spill over into excessive over-policing and surveillance? So this is my second question, and just a third, just a mention: as Equinet, we participate in technical standardization for the European Union AI Act and the cybersecurity technical standard is under development, and what we saw in our work, which is mostly on risk management, so how do you define a risk to fundamental rights, because this is under the Act, we saw that cybersecurity experts are actually very helpfully vocal and active also with technical standards and risk management and we found them, I just thought it’s a curious fact, we found them as allies, because they think of security as kind of the highest standard of protection, similarly to how we want human rights to have the standard of protection, so we found ourselves in the same camp with cybersecurity engineers, which was an interesting experience for a human rights lawyer, so that’s it.

Wout de Natris – van der Borght: Thank you, so there are three questions, the first one… Yes, and then we’ll be running out of time, so we’d like to take the first one, it’s between cybersecurity and human rights.

Chris Kubecka: I’ll take the first one, when it comes to artificial intelligence, generative AI, hate crimes, hate speech and generating those types of things, and how to manipulate an AI system. So I had the privilege of being in Switzerland for Swiss Cyber Storm last year, great conference, and Eva Wolfangel, if I said her name correctly, a German journalist, had done some very cool experimentation and research looking into certain chatbots which were not disclosing or being transparent and were trying to say that their medication, which was not medication, was backed by scientific studies, which it was not. Now, while she was going through this chatbot and documenting it as a journalist and so forth, she remembered one thing, which is that ethically most of these AI systems are set up in a way that they are programmed not to harm human beings. So one of the ways that she was able to basically do what’s called a prompt injection, to absolutely find 100% the truth, is she threatened to harm herself immediately if she could not get answers because she was so anxious. And you know what happened? It spit out everything, right? So when we imagine how some of these systems can operate, you can absolutely play with the logic. There’s also what we call off-the-rails systems, which are the, we’ll say, less ethical ones, where they don’t have certain safeguards and you can modify them. And through training we saw the Microsoft chatbot that turned into something really, really terrible and filled with hate speech and supporting certain hate crimes, which had to be taken down. Also, if you allow your data to be poisoned openly from places like social media, then if it doesn’t have those safeguards it can suddenly become a prolific hate speech bot, unfortunately. And right now it’s still quite easy to do, because even though we have the EU AI Act and so forth, many of these systems are absolutely not tested. So you can break them very easily, and that’s how some of it comes about.

Wout de Natris – van der Borght: Thank you. Who would like to take the second question about who guards the guardians?

Thomas Schneider: I can try and cover some of the other two. Also, this is nothing new. You have to have a division of power in a society. It may have to be reorganized because there are some shortcomings, and if you include the public discussion, the media, as the fourth power, then we definitely have to somehow fix the system. That’s the answer. What the tools are concretely, again, there will be technical standards and others. And about your third point, about the risk assessment, you may know that in this house, in addition to the convention, we’ve been working and are still working on the HUDERIA, which is the thing that is trying to actually relate existing technical and industry standards from IEC, ISO, IEEE, and so on, with human rights standards in a way that also would help or does help the EU, where CEN-CENELEC has been mandated as a technical body to somehow incorporate the ethical and human rights elements of the AI Act, which is not so trivial. So there’s lots of work going on there, but it will take some time until we get something that actually works and is implementable, but we have to get there. We’ve got no choice, I think.

Wout de Natris – van der Borght: Thank you, Thomas. We’ve got room for one more question, and that is online, so please read it to us.

Remote moderator: We actually have two questions online. The first one is from Antonina Cherevko: But security essentially is an ethical consideration too. If you don’t have a protected state, what would be the framework for ethical rules and considerations? I’d still suggest that this opposition between security and ethics is a bit superficial. Another issue is that security protection should be based on certain ethical considerations too. So that’s the first one, and the second one is from Shinji: In a society where it has become a requirement for artificial intelligence to mark all creations by itself, is it possible for the AI to jailbreak, in inverted commas, preventing the AI from marking?

Wout de Natris – van der Borght: Who would like to answer the first one? I’ll give the second one I think to Piotr because he is worried about jailbreaking I think. Who would like to answer the first one, Janice?

Janice Richardson: I can just reiterate what I said earlier. Security is intricately tied with ethics, in my opinion, and the tool makers can put up certain guardrails and try and protect, but we humans, we can use it however we wish. Go to China and try and consult your Gmail. Is it a protection? No, it’s totally stopping my human rights. So I find that yes, they’re intricately linked, but until everyone believes in human rights the way that we do, then there is no way of getting over this hurdle of what people call security, but which isn’t really.

Wout de Natris – van der Borght: Thank you, Janice. And Piotr, for the second question, jailbreaking, I’ll go over to you.

Piotr Słowiński: Thank you, Wout. Well, this is a very interesting question. I think that we talked a little bit about it; Chris actually talked about it more on the poisoning side and also a kind of jailbreaking side of AI. Whether the exact scenario that was described in the question is possible, I cannot answer, to be honest. It’s possible to jailbreak AI into doing some terrible, horrible things, to let it out of the guardrails that are established for AI systems. It’s very easy, and I’m not talking about such small issues as, for example, coming up with regulations that don’t exist. This is, okay, from my perspective, a very, very serious issue, but from society’s perspective, it’s not that big of an issue. But we can jailbreak AI into doing horrible, horrible things. This is also part of what Chris described in her part and in the answer to the second question, I suppose, from the room. So, I cannot really answer whether this scenario is really possible. But we observe the problem, for example, with deepfakes and this type of generated content, that we don’t really have tools that can 100% discover whether it was a deepfake or not, if it’s a very good deepfake. This is the problem that we have when we, for example, from my perspective as a CSIRT employee, a national level CSIRT, encounter this type of problem: there is a lot of deepfake AI generated content that has become the tool of financial criminals. So the investment ads or the scams, the phishing, this is a very big area. We don’t really have tools to fully research, fully detect all the AI-generated content. This is something that our scientists are working on and I think this should be a global effort to develop such tools.

Wout de Natris – van der Borght: Thank you. Thomas, yes?

Thomas Schneider: If I just may make a short remark regarding ethical and freedom versus security. In the end, both are human rights. You have the right to secure your life and you have a number of freedoms. But if your life is unsecure and you’re about to be killed like 10 times a day because you live it, then you don’t have freedoms anymore. But if you try to have 100% security, then you would not be allowed to use a car, you use a bike, you would not be allowed to swim in a river, you wouldn’t even be allowed to walk down a stair because some people die walking down a stair. So I think it’s about, and this is something that we need to, we will need to manage. How much risk are we willing to take? How much responsibility do we give the government to decide over us? And that depends on the cultural experience that you made. There are countries like mine where you try to take as much responsibility and decisions over yourself. And if you fall down a mountain because you climb on it, it’s probably your problem and not the state’s fault. So this is, it depends on your history, on your culture, on your surroundings. But there will be never 100% of either. You need to find the right balance.

Wout de Natris – van der Borght: That’s what I was going to say. Before I hand over to Jörn Erbguth for the messages of this session, we have the answers to the final question. It was an open question, but as you can see, three or four things really stand out: education, translation and health. Health is mentioned in many different phrasings, so it should appear much larger than it does, simply because of the variety of wording; the same goes for education. Looking at the answers, they are all positive. Education can perhaps be improved, translation services are already out there, and health is probably where the future lies: we will become more aware of our health, and healthcare and medicine will be augmented considerably with the assistance of AI. We will probably see that over the next five to ten years, because a lot will be coming our way. With that, thank you, Pilda, for the Mentimeter, and thank you all for answering, because it gave us some really good insights. And with that I hand over to you, Jörn Erbguth, to give us the messages of this session. You will be asked to reflect on them immediately, so we will take them one at a time, and you can comment on each before we move on to the next. Jörn Erbguth, please read the first.

Jörn Erbguth: Okay, thank you. I was a bit puzzled about how to frame the messages. We talked about a lot of issues, but we only touched on them without going into detail; we even repeated myths about the Microsoft chatbot, which was just doing what any ordinary computer program can do: printf whatever you give it and it will print it, yet we do not censor computer programs or programming. So we stayed at the surface. The first message, then, perhaps repeats what Thomas said: most politicians and most citizens have no clue; I am quoting you. Then there are gaps in skills. We may discuss the wording if this is going to be public, but on substance I would say it is fairly correct: there are gaps between the skills taught by universities and those required by business and technology. That is also a quote, and we have learned that the technology requires certain skills to deal with it. Then we talked a lot about dual use. Dual use usually means defence and civil, but here it means good and bad use, so maybe I should write "good and bad uses"; both are prominent and we still have no means to really separate them. The Mentimeter showed that we see threats and progress as equally strong. We also had the question about security and ethics: there is no real opposition between security and ethics, we need both, and both are human rights. And finally, another quote: we need to revolutionise our governance model. We are here, we see that we have no way to arrive where we want to arrive, and we need to do something about it. Maybe you will tell me we should have something completely different, but this is what I take out of this discussion.

Wout de Natris – van der Borght: Thank you, Jörn Erbguth. I think it’s the most concise set I’ve seen at EuroDIG these two days. So the question is: is Jörn Erbguth right? On the first message, who strongly disagrees? Because that is what it’s all about; the wordsmithing we can do online in the session wiki later. But if somebody really objects to what is said here, then we need to hear it here and now. So hands, please. Yes, this one. Please introduce yourself first.

Audience: Hi, my name is Jasper Finke. I was part of the CAI endeavour negotiating the AI Framework Convention. I’m from Germany, working for the German government. Let’s make it crystal clear: security is not a human right. To that I object. It is a fundamental obligation of a state to provide security, but it is not a human right, not a fundamental right. I think conceptually we should be very clear on this point. Thank you.

Thomas Schneider: If I may add briefly: there is a right to a secure life and so on, but security itself is not a right. And ethics is not a right either, it is a concept. So we may have to rephrase that.

Jörn Erbguth: So we remove "both are human rights", but we agree that we need both. No? Oh, yes.

Wout de Natris – van der Borght: No doubt.

Jörn Erbguth: Okay.

Wout de Natris – van der Borght: But one moment, I think that you’ve taken it out, right? So, okay. May I suggest a modification?

Chris Kubecka: There is a right to physical and bodily integrity. The whole historical pedigree of civil and political rights has to do with the state not imprisoning us and not mutilating us, so the concept of a right to bodily integrity is linked to the security that the state provides. I fully agree with you, but if we are going to use precise language, then let’s talk about a right to physical and bodily integrity, because it is a right linked to security. But I absolutely agree with you.

Wout de Natris – van der Borght: So how do we phrase it? Because that’s important.

Jörn Erbguth: We agree on simply leaving it out, so we do not state whether it is a human right or not. That does not mean it is not a human right; we just do not specify what kind of thing it is. Or would you prefer a particular wording? Otherwise, the wording can be discussed later online, where we can have beautiful wordings proposed by people, by AI, whatever; we should not spend a lot of time on the final wording when we agree on the substance.

Wout de Natris – van der Borght: I think we agree, but perhaps it needs a little wordsmithing online. That was on number two. Is there anything on number one or number three? If so, please raise your hand. Yes?

Audience: Hi, I’m Frances of Mutedig. I may be wrong, but on the third point, what I took away was that the reason we need to revolutionise our governance model is to better reflect new, changing and evolving social structures and dynamics, which are not the same as before and which are not reflected in the current governance model. So maybe just add an explanation of why we need to revolutionise, and how.

Wout de Natris – van der Borght: Thomas, would you suggest a line, or shall we draft an extra line together?

Thomas Schneider: I think it has two aspects. One is to update the ways we represent people and reach collective decisions. The other is to update the tools themselves, the agility of the tools, from written laws to more automated forms of governance. For me, at least, it has these two components. And it does not need to be a revolution; it can also be an evolution, so I would prefer a more neutral term like "update" or something similar.

Jörn Erbguth: OK, we can put it more softly. But I think we did not reach agreement on how governance models should change; that would be a completely new discussion. We just have a feeling that the current governance models do not cope with this well. So we see a need for an evolution of our governance model, and I can put it that way; how to do it, I think, is beyond what has been discussed here.

Thomas Schneider: Maybe we can agree on the goal, not necessarily on the how, and I think that’s what you also said.

Wout de Natris – van der Borght: I’ve got one myself. I think that in number one, we say there are gaps, but I think the conclusion is that they need to be closed, and that’s not mentioned. Am I phrasing that right, Janice, or would you like to phrase it in a different way?

Janice Richardson: No, no, I agree with you, we have to close the gap, but we haven’t spoken about that during this session. That’s why I didn’t add it.

Wout de Natris – van der Borght: But that is the conclusion there. We can put the last one into slightly more diplomatic words; the first one still needs that as well. Thank you. I don’t see any hands here, nor on the left. Then it’s time to wrap up; we’re already a bit over time. I think we had a very rich discussion, but one that only started to scratch the surface. When you have fun, time flies, and we are already past the time allotted to us. I want to thank you all for participating in this session, and especially for doing so so actively with the Mentimeter; as I said, that really gives, especially to Piotr, a lot of insight into what the room is thinking. I want to thank our key participants in particular, not only for being here but also for preparing all the work and the questions we put to them. So, Janice, Thomas and Chris, thank you very much for your insightful contributions. I want to thank my fellow Org Team members, Piotr Słowiński and Aldan Creo, for putting this together. They were the experts on the technical side, and I was the one who pushed them a little in this direction or that to get a more balanced session. Piotr would have moderated it, but it was not possible for him to come to Strasbourg, so I moderated instead. I want to thank our reporter and focal point, Jörn Erbguth, for assisting us in the background and answering the questions we had, and the people of EuroDIG here at the table, but especially also the people of EuroDIG in the background who make this fantastic event possible every year. So let’s give a round of applause to everybody who was involved in this session, and to yourselves for participating. Before I end, Piotr, would you like to say a final sentence as co-moderator? Then I will hand over to the people of EuroDIG. Thank you very much for participating. Piotr?

Piotr Słowiński: Yes, thank you, Wout. I just want to express my utmost thanks to you, Wout, and to Aldan; you are a tremendous team, and it was great working with you. Thank you so much, Chris, Thomas and Janice, for accepting our invitations and contributing so much to our work. And of course, thank you very much to all the people in the room, all the participants on site and online, for your input; it was great to hear so many very interesting topics and contributions. Thank you, Jörn, and thank you, Rainer from EuroDIG, for facilitating all the technical aspects. I am really glad that I was able to be a focal point for this session and a member of the esteemed Org Team that prepared it. Thank you, thank you so much, and see you next year. So that’s it, we see you tomorrow; I think that’s the final message.