Identification of AI generated content – TOPIC 03 Sub 03 2024


19 June 2024 | 12:30 - 13:15 EEST | Auditorium | Video recording | Transcript
Consolidated programme 2024 overview

Proposals: #10 #15 #16 (#27) (see list of proposals)

You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published on the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.

To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.

Session teaser

Artificial intelligence can prove beneficial in generating useful content. However, AI-generated content can also be used to replace content that should be human-generated (such as school exams) or to imitate authentic content (deep fakes). While deep fakes can be used for artistic and satirical purposes, they can also contribute to disinformation and defamation.

This session will discuss ways to identify AI-generated content:

  • AI-based detection tools do not provide reliable identification of AI-generated content.
  • Legal regulations may require AI tools to add a notice or watermark, but this can be circumvented.
  • Cryptographic proof of authenticity for genuine content could help, but it is cumbersome.

Session description

The advent of AI has introduced a new era of content generation, with AI-generated content being employed in a multitude of formats, including text, images, audio, and video. It is utilized in a diverse array of fields, such as journalism, entertainment, and education. While AI-generated content can be advantageous in numerous ways, it also presents challenges, such as its potential use for deep fakes and disinformation. It is therefore critical to be able to identify it.

Nevertheless, the principal alternatives proposed to date seem to be insufficient:

  • The detection of AI-generated texts is unreliable, as it can be easily circumvented by means of paraphrasing.
  • Applying cryptographic signatures to content to verify its authenticity would be cumbersome, and it would not guarantee that the content is not AI-generated. For example, if a person takes a photograph of a deep fake, the cryptographic signature would still verify the content as authentic.
  • Watermarking the content with a notice that it was generated by AI can be circumvented, even if the watermark is not directly visible.

In essence, the identification of AI-generated content is a complex problem that demands a multi-faceted approach. This session will examine the challenges and opportunities in detecting AI-generated content and discuss potential ways to protect the authenticity and reliability of information.
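
To make the signature point above concrete, below is a minimal sketch of content signing in Python, assuming the third-party cryptography package is installed; the file name photo.jpg and the key handling are placeholders, not part of any specific provenance standard. The sketch shows the mechanism and its limitation: a valid signature proves that the bytes come unmodified from the key holder, not that the depicted content is human-made.

  # Minimal provenance-signing sketch (illustrative only).
  # Assumes the 'cryptography' package; "photo.jpg" is a placeholder file.
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # The publisher generates a key pair once and distributes the public key.
  private_key = Ed25519PrivateKey.generate()
  public_key = private_key.public_key()

  with open("photo.jpg", "rb") as f:
      content = f.read()

  # Sign the raw bytes of the content.
  signature = private_key.sign(content)

  # Anyone with the public key can check that the bytes are unmodified.
  try:
      public_key.verify(signature, content)
      print("Signature valid: bytes are unchanged and come from the key holder.")
  except InvalidSignature:
      print("Signature invalid: content was altered or signed by someone else.")

  # Note: a re-photographed deep fake signed with the same key would verify
  # just as well, which is the limitation described above.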

Format

To ensure a dynamic and original session format, we will start with a demonstration of two tools for detecting AI-generated content. First, we will show how AI-generated text can be rewritten so that it evades detectors (AI->Human), using a homoglyph-based attack. Second, we will show an example with images, modifying an authentic, human-made image so that detectors classify it as AI-generated (Human->AI).
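
As a hedged illustration of the first demonstration (AI->Human), the sketch below shows the core idea of a homoglyph substitution in Python; the sample sentence is a placeholder and no real detector is called. Latin letters are swapped for visually identical Cyrillic ones, so the text looks unchanged to a reader while its underlying character sequence, and hence the statistics a detector relies on, changes.

  # Toy homoglyph rewrite: the output renders like the input but uses
  # different Unicode code points, which can confuse statistical detectors.
  HOMOGLYPHS = str.maketrans({
      "a": "\u0430",  # Cyrillic small a
      "e": "\u0435",  # Cyrillic small ie
      "o": "\u043e",  # Cyrillic small o
      "p": "\u0440",  # Cyrillic small er
      "c": "\u0441",  # Cyrillic small es
      "x": "\u0445",  # Cyrillic small ha
  })

  def homoglyph_rewrite(text: str) -> str:
      """Return text that looks the same but has different code points."""
      return text.translate(HOMOGLYPHS)

  original = "This paragraph was produced by a large language model."
  rewritten = homoglyph_rewrite(original)

  print(original == rewritten)   # False: the strings differ at the byte level.
  print(rewritten)               # Yet it looks identical to the original on screen.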

The demonstrations will help to illustrate the difficulties in detecting AI-generated content and the possible negative effects of misattribution, and will open the debate on the opportunities that arise.
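
For the second demonstration (Human->AI), the sketch below only illustrates the general principle that small, visually negligible pixel changes can alter a classifier's output; glazing, as used in the session, computes carefully optimised perturbations rather than the random noise shown here. Pillow and NumPy are assumed, and the file names are placeholders.

  # Toy perturbation sketch: change an authentic photo imperceptibly.
  # Assumes Pillow and NumPy; "authentic.jpg" is a placeholder file name.
  import numpy as np
  from PIL import Image

  img = Image.open("authentic.jpg").convert("RGB")
  pixels = np.asarray(img, dtype=np.float32)

  # Add uniform noise of at most +/-2 intensity levels (out of 255) per channel,
  # which is invisible to the eye but shifts almost every pixel value slightly.
  noise = np.random.uniform(-2.0, 2.0, size=pixels.shape)
  perturbed = np.clip(pixels + noise, 0, 255).astype(np.uint8)

  # Save losslessly so the perturbation is not removed by JPEG re-compression.
  Image.fromarray(perturbed).save("authentic_perturbed.png")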

Then, key participants will share 3-minute insights on the topic from their fields of expertise. This would lead to a further, more informed debate between the audience and key participants, including an exchange of comments and Q&A.

We will close the session by going back to the key participants and audience to see whether views have changed, by asking them to comment on the same points that they touched on at the start of the session.

People

Programme Committee member(s)

  • Desara Dushi
  • Jörn Erbguth
  • Minda Moreira

The Programme Committee supports the programme planning process throughout the year and works closely with the Secretariat. Members of the committee give advice on the topics, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and monitor the complete programme to avoid repetition.

Focal Point

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Programme Committee member(s) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.

Organising Team (Org Team)

  • Gianluca Diana
  • Vittorio Bertola
  • Rokas Danilevicius

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by simply subscribing to the mailing list.

Key Participants

  • Laurens Naudts
  • Mykolas Katkus
  • Paulius Pakutinskas

Moderator

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter

Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page.

Messages

Current AI detection systems are unreliable or even arbitrary. They should not be used other than in an experimental context, with a very high level of caution, and particularly not for assessing students’ work. Without reliable AI detectors, we have to rely on education and on critical assessment of content that takes into account that any content can easily be generated by AI. Watermarking and certification of origin would be a more reliable means to authenticate content and should be supported by regulation.

Video record

https://youtu.be/dNbCq3Khfus

Transcript

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Transcripts and more session details were provided by the Geneva Internet Platform


Aldan Creo: Great. Hello. How are you, everyone? Well, it’s a pleasure to be able to have this session. I hope we’ll make it very interesting. But before we start, I’d just love to walk you through what we are going to do in the next 45 minutes. So if we get my screen here, I’ll explain to you how we’re going to structure the session. So that’s one of our participants. But what I’d love to be able to have here is – great. Here we are. Yes. So this is where we’re going to do the session. First, we’re going to have a live demonstration of AI detection systems. So we’ll see two systems in five minutes at the start. And then we’ll move on to some opening remarks by key participants. Three minutes each, three participants. Then we’ll have an open discussion. So that’s where you get to feed in with your ideas. 20 minutes, interventions of one to two minutes. And finally, we’ll close the session with some closing remarks. So that’s how it’s going to look, but let’s start with the first of the points, which is the live demonstration of an AI detection system. And here, what you can see, I’m going to show you. So, this is just a tool to rewrite text in a way that it is not detected anymore. So I’m going to go to an AI text detector. So here I just pasted some text; this has been generated. So you’ll see now that when I run the detector, I will get an output and it will say this text here has been generated by AI, et cetera. Let’s just give it a second to calculate the probabilities. But when we get the output here, what you’ll see is that we’ll get this output in a second where it says that it is AI generated. Let’s just give it a second to think. And in the meantime, I’ll show you what we can do about this. We have a text here, but the thing, the problem is that it’s actually very easy to rewrite the text. This is another version of the text. You see that the system told us it’s AI generated, but now let’s rewrite the text, let’s change it. And let’s instead just paste a different version of the text and run the detector again. So what can we see here when we get the output? Well, what we will see is that the prediction changes. We are going to see an example of how we can go from something that is generated and classified as such to something that has been generated but is not classified correctly. So the idea here is we can trick detectors of AI-generated text to get a prediction that it has been written by a human. So reliability comes into question here. That’s an issue, but we are going to see another example of how this can become an issue. Here we went from something that was classified correctly to something that wasn’t classified correctly. We are going to do the same now, but now the prediction will go the other way. I just took an image from EuroDIG here. This is a real image, so this is human. And we see that it’s classified as human. That’s good. That’s a good prediction. The problem is that I can do something that’s called glazing. Glazing means, well, it’s a technique that you can use to change images. And if I change the image and glaze the image, I actually get a different prediction. I can take an image, and I can make it look artificial. That’s also a problem. So this is where the reliability of systems comes into question. I hope that this is a bit informative on, well, what’s the landscape right now? What does it look like? And this is where we actually go into our key participants to get their thoughts, ideas, and opening remarks on the issue.
So I’ll invite them one by one. First, we’re going to have Professor Paulius Pakutinskas. Sorry, I’m a bit bad with other languages. But you’ll get three minutes to do a short introduction about yourself and your thoughts about the issue. So go for it.

Paulius Pakutinskas: OK. OK, so I’m Paulius Pakutinskas. I’m a professor in law. So, I work with UNESCO. I’m UNESCO Chair on Artificial Intelligence and Emerging Technologies and Innovations for Society. So, that means I work on social issues related to artificial intelligence. And what we were discussing in the last session is very correlated to this session. And I think it’s very good just to repeat what was said at the very, very end of the last session on the usage of AI and so on. So, here we go a bit more on the technological side. And there was a very good illustration that sometimes we can’t recognize AI in the way we’d like to. Because there was an illusion that everything that is created by AI could be recognized by AI. But you can see that it’s not really true. And I think there will be a race of technologies. Some technologies will create some content. It doesn’t matter whether it is text or image or video. And some of them will try to detect it and some will try to change the text so that it is not detectable. But we need to understand that life is not so simple. We just can’t sit only on technologies. So, when technologies are not so good at this stage, we have society and we have regulation. So, we have regulation, some requirements set in the Artificial Intelligence Act on labeling, on watermarking of content. It will be challenging. But the law doesn’t care how you will do it. You just need to do it. So, that’s a very important point. But the third one is the society. We need to be more adapted. We need to understand that there will be a huge amount of AI generated content. There are some predictions or ideas that it will be 90% next year or in 2026. 90% of content generated by AI. And we do not need to be afraid of it. Some of it is very good. But the issue is when we talk about misinformation, or some falsification of existing content, that is a topic which needs to be regulated. So there are, let’s say, three pillars here. So, technical possibilities; the society as such, as humans, we need to have critical thinking anyway, that’s very, very important for us, not just to rely; and then we have regulation. So thank you.

Aldan Creo: Great, thanks a lot for that, well, opening remark. We’ll now go to Dr. Naudts. So he’s online with us now. Please feel free to introduce yourself, a little bit what you do, et cetera.

Laurens Naudts: Perhaps first a technical question. Does everybody hear me correctly and clearly?

Aldan Creo: Yes. Yes.

Laurens Naudts: Okay, so thank you for the introduction. My name is Laurens Naudts. I’m a postdoctoral researcher at the AI, Media and Democracy Lab and the Institute for Information Law at the University of Amsterdam. And my research looks into how, well, my perspective is legal. And I try to investigate how, within an era characterized by an increased presence of synthetic content, this might exert pressures on democratic values and, kind of complementing the previous talk, looking into EU regulation’s ability to protect citizens. Now, I will leave my closing statement for some positive outlooks and perhaps address some of the risks now. And we do see that, with the emergence of general purpose models, due to their scalability as well as the democratization of these tools amongst a wider public and their ease of use, there is indeed a risk that, with people’s exposure to artificial content and actors, whether they are static images, like the Pope wearing Balenciaga, or sustained conversations with bots on social media, people are increasingly unable to distinguish what exactly is true, what is false, what is authentic and inauthentic. And if we look at the conditions of what we typically associate with an inclusive democratic model, being able to meaningfully inform political and social opinions and being able to participate with your fellow citizens, this inability to make this distinction kind of erodes those values of inclusivity and participation. And the AI Act recognizes these risks. Now, what is the policy solution? It’s labelling and transparency. And one of those, of course, within academia, one of the fallacies that we typically tackle is the fallacy of transparency. Now, transparency is, of course, a necessary component to protect citizens. But is it a sufficient condition? Because malicious actors are unlikely to be transparent if their purpose is to spread misinformation. And as mentioned earlier, artificial content, for example, news articles co-edited, translated, summarized by AI, can still be truthful content. So I think that from a regulatory perspective, we also need to kind of take a step back and see, OK, what values are exactly threatened? And what tools are available to empower citizens against those threats? And once we are able to define this, we can actually kind of double back: OK, what information do we need? So perhaps as an addendum to your title, it’s also identification as well as communication of AI content. But for what purpose?

Aldan Creo: Great, thanks a lot. Well, I have some sort of bad news to give to you, which is that one of our speakers has had some logistical issues. So he’s still trying to arrive. So what we’ll do is we’ll go straight to the interventions open to everyone and hopefully he will arrive during the session. All right? So we’ll open the floor now, and I’d love to start by asking you in general do you have any comments to make or any interventions you’d like to have? Feel free to go for that and we’ll take it from there.

Audience: Jörn Erbguth, Jörn Erbguth, University of Geneva. Thanks a lot for these demonstrations, and I would emphasize how very cautious we need to be with this kind of AI detector. They don’t work. Don’t use them. Just ignore them. When I did a test of AI detectors, I took the one that came up first, with a hundred percent reliability. I just asked ChatGPT to generate a short text. It was detected as human. Then I took the first part of the US Constitution. It was detected as AI-generated. And then I took the first part of the speech of Dr. Martin Luther King, “I have a dream”, and it was also detected as AI-generated. No uncertainty. So just ignore them, and we are in danger of running into the same thing that we ran into with the bots, the Botometer discussion. There will probably be some research saying so much content is AI generated and it’s bad content or whatever, but don’t trust this. There are no reliable means to detect AI-generated content yet. We might have to introduce watermarking, et cetera, but don’t trust those systems, they are a scam.

Aldan Creo: Thanks a lot, thanks a lot for your input. Actually, that brings up a very interesting point and I’d love to go to you, Mr. Naudts, for a question on that, which is that sometimes AI-generated content detectors can be used to punish people. So, for example, think of a high school student who writes an essay and then it’s detected as generated. What should we do when it comes to equality and discrimination, and what should be the approach that we take regarding content detectors and the punishment of people in general?

Laurens Naudts: I could, I mean, that’s a very difficult question to provide a concrete singular answer to. I think, in general, I will go about it in two ways. One, the problem that you described is a problem that we have seen with artificial technologies in general, whether it is automated decision-making in welfare, that they disproportionately disadvantage marginalized communities and here potentially unfairly exclude people from getting a particular opportunity. This is here the case as well. What I think is perhaps important is, first of all, that when we introduce technology, and I second the intervention earlier, we would first need to establish the efficacy. Do they actually succeed in what they claim to do, or are they AI snake oil, as some might claim? And second, to be reflective also of the powers that one has when developing these tools. So if you say, okay, we can detect false content or AI content, or let’s say misinformation, what are the definitions of those concepts? Because in defining what perhaps might be a punishable offense, you determine the conditions that will give somebody an opportunity, yes or no, or deprive somebody of an opportunity. That’s one way of looking at it, to be more cognizant of one’s own responsibilities and powers, whether it is individually, as a designer, or institutionally, or state-wise, as somebody who will use these technologies. And then a point that I couldn’t make earlier is that we also need to be careful about who can occupy the space in determining which tools will be used for AI content detection, and who will determine those conditions. If you look at a risk of misinformation, we know, for example, that state institutions might weaponize the notion of misinformation to silence political dissidents. Private parties have economic interests; standardization bodies are often democratically opaque, or do not include civil society. So on the one hand, be reflective of one’s responsibilities, and what you might deprive people of when a technology is incorporated. And second, who can then occupy that space in making these conditions, and do they have any form of legitimacy?

Aldan Creo: That’s great. You probably want to intervene.

Paulius Pakutinskas: Yeah, I’ve seen pain in your question, because the same pain is there for all universities and schools, because we have some specific ways of teaching and examining. The thesis is one of the forms. I had the same headache at the very beginning of ChatGPT, when it popped up. And I’ve seen works that were really done by AI. And I tested a lot of these tools. And I have the same conclusion. It doesn’t work at all. And it’s very easy to falsify it. And we did another experiment. We asked one professor to write a text. She was typing just here. And we gave her the best of these AI detection solutions. And it said that it was algorithm-generated text. So you can see that mistakes are possible in both directions, especially when your language is a learned, not a native, language. So you use some forms, very clear forms, and so on and so on. So that is not reliable. Maybe it’s more reliable when we talk about video, images, and so on and so on. But without marking of this content, we will not succeed. And we will not have very, very good results. But there was a good question raised, why we need to know it. Because in a quite short time period, most content will be somehow, in some percentage, generated by AI, because a lot of tools will be used. So then we need to find some specific rules on how to detect what is harmful. Because if it’s not harmful, we have a lot of good ways to use it. It’s very efficient, very, very fast. We will have a lot of good content. So that’s not a threat in general. So we need to find what issues we’d like to solve.

Aldan Creo: Do you have any interventions from the online audience, maybe? Yes?

Audience: Thank you so much. Jacques Beglinger, coordinator of the Swiss IGF, and I may just share with you one of the findings of our session on the same topic two weeks ago, and I think the most startling was, well, we concur with what Jörn just explained, don’t trust whatever mechanism. So it’s not a question of whether what you see or hear is directly fake or not, but it’s the source that counts. And we had an input from a reputable local newspaper who explained to us how even they have problems finding out, but then, whenever they are unsure, as a reputable source, they do further digging. So in the end, it’s education, education on the media level, but also in the population. Lies were always there, and there’s nothing new. Just right now, it’s just crying wolf all over. And this was also the last finding in the messages of the Swiss IGF, that too much just crying foul play may, in the end, add to more destabilization than is ever wanted. So even media need to cut back a little bit on telling what went wrong once again, and all these beautiful fake pictures, and just decrying them as false, in the end, adds to destabilization and not to better education.

Paulius Pakutinskas: Maybe, yeah, maybe I can add a bit on working with fake reality and fake things like disinformation and so on. It’s a very simple and easy situation when we have black and white, so it’s just a lie. But AI is capable of doing it in a very soft way. So you just can’t understand that just a very few details were changed, and that’s a semi-truth. And you can go round and round and round, and after some time you will see that you are in an absolutely artificial environment without any possibility to understand that it was faked. Just step by step. There are really good examples we had here in Lithuania, and in other countries, when there are some artificial media sources which are backed up by other sources, and when you’re double-checking, that’s right, because other sources are saying the same. And you can just make an absolutely unreal reality. That’s the danger.

Aldan Creo: I see we have a question from the online audience, yes.

Audience: Thank you once more, and thanks for this interesting discourse. Probably, I might have to go back to the issue that I raised about the treaties and the multilateral instruments, and what we intend doing with them. And I want to link it with this discourse that we’re in, because the issue of deepfakes is a serious one. However, I think we are privileged to have many of us from higher education institutions. Some of us are involved in the supervision of masters and doctorate students, and some are involved in dealing with assignments, and these assignments are put in what is called Turnitin, which is an AI. When that comes out, we make some judgments. The question is, with all the discourse that we have been having now, truth about the truth, can we rely on such instruments? Remember, this instrument has the capability of making someone who has submitted a dissertation not able to proceed because of similarities. Can we then argue that those similarities are real similarities? That’s one. The second one, probably talking from the African context, what we have discovered is, if you look at generative AI and the LLMs, that informs whatever you are looking for. If the population regarding that information, I’ll talk about the cultural aspect of Africa, you will then find out that the response that you get is not helpful. I just want to be that diplomatic. Because the AI dips into the LLMs that are available in its ecosystem. If the ecosystem itself is such that the information that is provided there, it’s not going to say, I can’t find it. Most importantly, it will just give you something. That takes me to my question there. Noting that we have young people and we have the ability, like we said yesterday, to define the world we want. Could it not be that this is the time that we allow our young engineers, mushrooming engineers, mushrooming programmers, to begin to determine capabilities, speaking the military way, that would localize the LLMs? I know AI allows that, but allow for the development, for example, if you are looking at the University of Wales, development of LLMs around their instructional offerings. And such that then we are able to slowly develop the basis for our AI to be helpful, because there is a helpfulness within AI, but we need to manage the risk in terms of everything that is said. What is the take around that? Given that we have young brains that are around us, what’s their take in terms of all this? Because, strategically speaking, it’s creating an opportunity to move into an environment where they develop capabilities, where they develop programs and codes that take us to another strategy in terms of everything.

Aldan Creo: Thanks a lot. Thanks a lot for that very insightful comment. I’d love to ask you to try to stick to the two minutes if possible. And having said that, I’d love to, well, just ask a question on an issue related to equality and inclusion, which is that, you know, LLMs, they are trained mainly on English, etc., on a very specific dataset. So they provide much worse results when we go to languages like Lithuanian, for example, or, I don’t know, Spanish, etc. Detectors also fail much more when we have other languages that are not so common. So when we talk about languages that are in a minority, that brings a lot of interesting questions. And I’d love to hear your thoughts in general, but let’s start with Mr. Naudts and then I’ll go to Professor Pakutinskas.

Laurens Naudts: I mean, first of all, thank you for the previous intervention. And I think it is a question that probably requires a more layered and nuanced answer than I’m able to provide in this particular talk, but feel free to reach out afterwards. But I think that, especially when it comes to the creation of AI tools, it has been dominated primarily by a select few economic actors, which for the training of their LLMs often depend upon extractive and exploitative data practices. We have seen OpenAI training their system using people in Kenya, for a low sum, to filter out toxic content from their models, without these models actually being of any benefit to them, because ultimately they function best for an English-speaking Western audience. So when it comes to the problem of how, I think there should be more initiatives to respectfully engage with local communities and countries to develop AI models and systems for a purpose that benefits the communities to which they communicate or reach out in dialogue respectfully, such that the ultimate product, an LLM that generates content, is not extractive, nor to their detriment. So that would be a typical start. So I fully agree with this point. And I would welcome more initiatives in this space, because currently the economic realities and practices do not correspond with this ambition.

Aldan Creo: Great, thanks a lot. Professor, your turn.

Paulius Pakutinskas: So when we talk about small languages, small cultures, it’s a topic, a big topic. When we talk about official languages, that’s one topic. But we have countries where there are 200 dialects, or the dialects are like languages, like separate languages. So I think that is a matter of time. So most of the issues will be solved in some time period. Here in Lithuania we have some program. It was financed by the government; some grants were granted. In some countries people are working on these topics too. I think in some time period we will solve it. But it’s a wider question. It’s related to cultures, because the culture that is, so to say, programmed into big LLMs is based on, let’s say, Anglo-Saxon and British culture. And it’s much more difficult to change, because it’s in the… But I think we need to change our understanding of AI and the usage of AI in general. As I said, there is the technical level, there is the human level, and the regulatory one. For example, I am forcing my students to use AI tools. Any and all tools. And then we discuss how it works. And it’s not so easy. It’s not so easy. If somebody thinks that they will just ask it to present an essay of 25 pages on a specific topic, no, they will receive trash. It will be a really, really poor essay. So you need to understand the topic, you need to work with the topic, and then you can reach it. And I’m not so naive as to think that the young generation will not use this powerful tool. They will use it. So why lie to ourselves that they will not use it? The issue is to change our understanding and how we teach and how we examine. That’s very difficult for us, I understand, but you know, that is a change. So we need to change ourselves, our educational systems, how we test, how we interview, how we interact with people by using both human brains and artificial intelligence.

Aldan Creo: Great, I’d love to close with the audience. If someone has one last comment, potentially two, depending on how long it is, feel free to go ahead.

Audience: Well, hi, my name is Diego. I’m from the Youth League program, and I’d like to share a concern and a question, if I may. So we talked about detecting texts generated by large language models, and I think we shared the notion that right now it’s extremely difficult, but it was said that perhaps for deepfakes or audiovisual material, it might be easier, but I’m not sure exactly about how easy it would be, because by design, deepfakes, this kind of technology, work through what is called an adversarial network. That is, these AIs are trained to beat all the AIs that are supposed to detect fake content. By design, they’re supposed to evade this kind of detection. So right now, the technology is quite young, so it could be feasible, but in the long term, I don’t see how this could be applied. So I do not have the authority to assert this with absolute certainty. I am no expert, so do point out if there’s something, if there’s a catch that’s beyond my understanding. But my question would be, should we abandon trust in audiovisual information altogether? And if so, what possible contingencies should there be to get trustworthy information, perhaps from the cryptographic field? Thank you.

Aldan Creo: Great, thanks a lot for your intervention. We need to go to the closing remarks, but maybe if you want, you could also try during your two-minute closing remarks to address a little bit of the question that was just made. So we’ll start with Mr. Naudts, who is online, and then we’ll move to the room here.

Laurens Naudts: Yeah, thank you. And I think that as part of my closing remarks, I wanted to actually highlight one of the points just raised. I think that we should not abandon all trust and hope, and maybe then, as one point made earlier, we should also rely on perhaps legacy markers of quality, that is, traditional news media who have been a trustworthy source of information to kind of depend on. And this then brings me to the other point that I made earlier, to watch out who occupies a given space and can claim sudden legitimacy over a traditional field that they might have had no expertise in before. But the problem is, we should not overemphasize the risks and erode trust within people’s information space, because if every person departs from a position of lack of trust, yeah, then the democratic institutions themselves become completely undermined. So we should, in concert, also focus and channel attention on how we can direct citizens to authentic information that is trustworthy, that is reliable, and also make a positive effort in that particular space. That’s the first one, and I hope that addresses the last point made. The second point, and it’s related, but it then also brings back the theme that was earlier discussed about inclusion and equality. I think we should also recognize that if we allow technology to become an important part of our society, we should also recognize how technology holds power. That power can be regulatory, in who can regulate AI, but also economic, in who has access to the resources. And we need to understand what type of responsibilities are associated with each power and how that might shape our future living environment. I think once we do that, we will be able to intervene in a more targeted way to realize equality, also on a global level, rather than have a new form of cultural imperialism through technology.

Aldan Creo: Thanks a lot.

Paulius Pakutinskas: I think I will stick to the same things I said at the very beginning. First is the human, so we need to change ourselves. We need to improve our critical thinking. It’s very difficult. We always thought that if something is on video, it’s the truth. It’s not so anymore. We had such changes. We had changes with photos, you remember, before these revision programs, we trusted that if it’s a photo, it looks like the truth. We need to change our thinking here and we will do it. I think there are other measures, like technical ones, so here there will be a race. Who will win, we will see. But anyway, there will be big ideas to catch these fake images or texts, maybe more successful or less successful. And another big part is regulation. And we have the Artificial Intelligence Act, where most of such topics can go to high penalties, like, for this level, up to 15 million or 3% of income. And there are other possible ways, like regulation of propaganda and misinformation and so on, so on. So there are other tools. They are complicated. There are just no very simple ways to do it. We need to do it in a quite agile way. Another issue is that sometimes we can over-regulate and that could be not so good too. So we have a lot of new laws and regulations at EU level. And we can’t imagine how it will work, how they will interrelate, how the market will take it, how business will take it, how citizens will take it. So it’s a challenging time, but it’s interesting.

Aldan Creo: Thanks a lot. We need to close the session, sadly. It’s been really insightful to hear all your thoughts and comments. We’ve touched on a lot of topics, really, from regulation, discrimination, the reliability of such systems, minorities, education, to misinformation. And there’s a lot more that could be said. Sadly, we don’t have time, right? But I’d love to thank you all, the audience, the key participants. And also I’d love to ask you to please excuse the participant that couldn’t make it today just due to traffic, et cetera. Sadly, Mr. Katkus would have loved to be here with us; he couldn’t, but I’d love to extend the thanks to really everyone that participated today and also the organizing team and technical people. Thanks a lot. We’re closing the session. Thank you.