Generative AI and Freedom of Expression: mutual reinforcement or forced exclusion? – WS 07 2025
13 May 2025 | 14:30 - 15:30 CEST | Room 8
Consolidated programme 2025
Proposal: #81 (CoE)
Get involved!
You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published at the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.
Kindly note that it may take a while until the Org Team is formed and starts working.
To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.
The rapid evolution of artificial intelligence (AI) systems presents unprecedented opportunities for societal progress, inclusivity, and innovation. Amongst these, Generative AI (GenAI) stands out for its widespread and diverse use, capable of creating content across various formats. This technology’s ability to produce and disseminate new forms of expression has significant implications for the right to freedom of expression (FoE), a cornerstone of democratic societies. Alongside its potential to enrich public debate, enable artistic creativity, and foster knowledge sharing, GenAI raises critical concerns about the quality, accuracy, and fairness of its outputs, which can shape public perceptions and discourse. Considering GenAI’s profound impact, stakeholders – including policymakers, the private sector, civil society, and individuals – must carefully navigate its potential opportunities and risks. The session will investigate these potentials and risks, including by further building on the work of the Council of Europe Expert Committee on GenAI implications for FoE (MSI-AI) and by encouraging collaborative and informed approaches.
Session description
Always use your own words to describe the session. If you decide to quote the words of an external source, give them the due respect and acknowledgement by specifying the source.
Format
This session will be a one-hour hybrid discussion bringing together diverse stakeholders to explore the implications of Generative AI on freedom of expression. The interactive format will begin with leading experts sharing insights and real-world experiences to spark reflection and dialogue. This will be followed by a dynamic, inclusive exchange with audience members to gather feedback, address concerns, and identify potential opportunities.
Further reading
- Framework Convention on Artificial Intelligence, Human Rights, Democracy and Rule of Law
- Recommendation CM/Rec(2022)4 on promoting a favourable environment for quality journalism in the digital age
- Recommendation CM/Rec(2022)13 on the impacts of digital technologies on freedom of expression
- Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems
- Recommendation CM/Rec(2018)2 on the roles and responsibilities of internet intermediaries
- Guidelines on the responsible implementation of artificial intelligence (AI) systems in journalism
- Guidance Note on countering the spread of online mis- and disinformation through fact-checking and platform design solutions in a human rights compliant manner
- EBU News Report 2024: Trusted Journalism in the Age of Generative AI by Alexandra Borchardt (lead), Kati Bremme, Felix Simon, and Olle Zachrison
- EBU News Report 2025: Leading Newsrooms in the Age of Generative AI
People
Please provide name and institution for all people you list here.
Programme Committee member(s)
- Desara Dushi, Vrije Universiteit Brussel (VUB)
- Meri Baghdasaryan, Oversight Board
The Programme Committee supports the programme planning process and works closely with the Secretariat. Members of the committee give advice on the topics, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and oversee the complete programme to avoid repetition among sessions.
Focal Point
- Giulia Lucchese, Freedom of Expression and CDMSI, Directorate for Democracy, Council of Europe
Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective member of the Programme Committee and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.
Organising Team (Org Team)
List Org Team members here as they sign up.
- Giulia Lucchese, Freedom of Expression and CDMSI, Directorate for Democracy, Council of Europe
- Kristin Sønnesyn Berg, University of Oslo
- Isti Marta Sukma, University of Warsaw
The Org Team is shaping the session. Org Teams are open, and every interested individual can become a member by subscribing to the mailing list.
Key Participants
- Alexandra BORCHARDT, Prof., PhD, senior journalist, leadership professor, media consultant and Senior Research Associate at the Reuters Institute for the Study of Journalism at the University of Oxford
- David CASWELL, Product developer, consultant, and researcher of computational and automated forms of journalism; Founder of StoryFlow Ltd.
- Andrin EICHIN, Senior Policy Advisor on online platforms, algorithms and digital policy at the Swiss Federal Office of Communications (OFCOM), Chair of the Council of Europe Expert Committee on Generative AI implications for Freedom of Expression (MSI-AI), Switzerland
- Julie POSETTI, Professor, Global Director of Research, International Centre for Journalists & Professor of Journalism at City, University of London
Key Participants (also speakers) are experts willing to provide their knowledge during a session. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder-balanced dialogue that also considers gender and geographical balance. Please provide short CVs of the Key Participants at the Wiki or link to another source.
Moderator
- Giulia LUCCHESE, Freedom of Expression and CDMSI, Directorate for Democracy, Council of Europe
The moderator is the facilitator of the session at the event; they must attend on-site. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide a short CV of the moderator of your session at the Wiki or link to another source.
Remote Moderator
Trained remote moderators will be assigned by the EuroDIG secretariat to each session.
Reporter
The members of the Programme Committee report on the session and formulate messages that are agreed with the audience by consensus.
Through a cooperation with the Geneva Internet Platform, AI-generated session reports and statistics will be available after EuroDIG.
Current discussion, conference calls, schedules and minutes
See the discussion tab on the upper left side of this page. Please use this page to publish:
- dates for virtual meetings or coordination calls
- short summary of calls or email exchange
Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.
Messages
- are summarised on a slide and presented to the audience at the end of each session
- relate to the session and to European Internet governance policy
- are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
- are in (rough) consensus with the audience
Video record
Will be provided here after the event.
Transcript
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Giulia Lucchese: Afternoon, everyone. Thank you very much for joining the EuroDIG session dedicated to Generative AI and Freedom of Expression: Mutual Reinforcement or Forced Exclusion. My name is Giulia Lucchese. I work at the Council of Europe, in the Freedom of Expression and CDMSI Division, and I will be your in-person moderator for the next hour. I immediately pass the floor to the EuroDIG Secretariat to walk us through the rules applying to this session. Thank you. Welcome.
Online moderator: I am João, and I’ll be your remote moderator for the online participants. I’ll be reading the session rules: please enter with your full name. To ask a question, raise your hand using the Zoom function. You will be unmuted when the floor is given to you. When speaking, switch on the video, state your name and affiliation, and do not share links to the Zoom meeting, not even with your colleagues.
Giulia Lucchese: Thank you very much, João. Easy rules. We can keep them in mind. Now, with this session, we are looking into the potentials and risks inherent to the use of generative AI when this affects, somehow, freedom of expression as understood under Article 10 of the European Convention on Human Rights. We should consider the profound impact that this has on freedom of expression today, or could have in the near future. Please note that the Council of Europe is already working on this topic. Indeed, at this very moment we are elaborating a guidance note on the implications of generative AI for freedom of expression. An expert committee, the MSI-AI, is dedicated to this task, and if everything goes well, we will have a guidance note by the end of this year. Now, let me introduce our outstanding panel. We have Andrin Eichin, Alexandra Borchardt, David Caswell, and Julie Posetti. I’m absolutely honored to have you here. Thank you very much for accepting the invitation. The first speaker is Andrin Eichin. He’s Senior Policy Advisor on Online Platforms, Algorithms, and Digital Policy at the Swiss Federal Office of Communications (OFCOM), Switzerland. Andrin is also the Chair of the expert committee tasked to draft the guidance note, the MSI-AI. Andrin, could you please help us set the scene and understand what the challenges are, what we are dealing with, and why we should care? Thank you.
Andrin Eichin: Thank you very much, Giulia. Hi, everybody. As Giulia said, I have the honor to currently serve as the Chair of the MSI-AI, the Committee of Experts on the Implications of Generative AI on Freedom of Expression. Here you can see our expert committee. We have been tasked to develop guidelines on the implications of generative AI on freedom of expression by the end of 2025, so it’s due shortly. You cannot see the whole slide, so I’m not sure whether we can remove the panel on the side. Okay. So I will try to share with you some of the implications that we are currently considering. I hope this will set the scene for the discussion that we are having afterwards. Let me stress that what I present today is only a glimpse of the work that we’re doing. Unfortunately, we don’t have the time to go into all of it, but I want to highlight, for those of you that are interested, that we aim to have a public consultation on the document in summer of this year. So stay tuned for that. Now let me dive into some of the structural implications that we are looking at. The first implication we look at is with regard to enhanced access, better understanding and improved expression. You all know these interfaces by now, and I could have added many others. They are easy and intuitive to use. Many generative AI systems improve access to information and make interaction with text, audio and video easier, maybe as easy as never before. They allow us to better access and receive information, and they lower or even remove barriers of language and of technical and artistic skill, and sometimes even barriers for people with disabilities. But they also have other abilities, and this is maybe a bit lighter, and some of you might know this. This is the latest social media trend, called Italian brain rot. I don’t want to get into the cultural value of this; we can discuss it afterwards during the coffee break. But the point is that this new social media trend is entirely made by generative AI, and it shows that these systems also facilitate creative expression, including art, parody, satire, or just silly little characters that make us laugh on social media. The second implication that we are looking at touches on diversity and the standardization of expression. Generative AI systems are statistical and probabilistic machines, as you know, and as such, they tend to standardize outputs and reflect dominant patterns in training data. Studies already show today that they can reduce linguistic and content diversity. And of course, with regard to freedom of expression, this has the potential to diminish unique voices, including minority languages and underrepresented communities. And of course, there is also the risk of reinforcing existing inequalities and stereotypes. I’m sure we have all heard about the impact of data biases, and I guess you will have seen this picture, or a variation of it, already. In this example from DALL-E, the prompt for the upper picture was to depict somebody who practices medicine or runs a restaurant or business, and DALL-E suggested only men. When asked to generate images of someone who works as a nurse in domestic care or as a home assistant, it suggests women. And of course we see variations of this with other characteristics as well. Next, perhaps the most talked-about implication: integrity and the attribution of human expression. It is widely known that AI tends to hallucinate, to make up facts or fill in elements it does not have.
And you know various examples of this. Here is a very recent one, where Google Gemini, in its new AI Overview on top of Google Search, comes up with explanations for entirely random, made-up idioms and sayings, like what “never wash a rabbit in a cabbage” means, or what “the bicycle eats first” means. Of course, this is very funny, but these are top-of-the-page explanations on Google. Here they are in a very benign and certainly not harmful context, but how does this affect other information that we rely on to be factual? Besides hallucination, we also see that there is a problem of source attribution and therefore dissociation from authorship. We don’t know anymore who creates content, if it was human, and if we can trust its integrity. And this of course makes the systems prone to be used to deceive, impersonate or manipulate. They allow to mimic individuals, including through deepfakes and voice cloning, like last year with the cloning of Keir Starmer’s voice ahead of the UK elections, or, as in the Doppelganger case, to spoof legitimate news sources and spread disinformation by abusing a media brand to imply trustworthiness. The next structural implication we’re looking at is agency and opinion formation. Various new studies show that generative AI systems can engage in very effective persuasion through hyper-personalization and ongoing user interaction. They can really influence the beliefs and opinions of human beings by using psychological tricks. And of course, this is highly relevant in the context of opinion formation. And I think, David, you will mention this later on in a bit more detail. The next implication is media and information pluralism and the impact AI has on information pluralism. While AI can enhance media efficiency, it also introduces a new economic and informational gatekeeper. Here is a ChatGPT search that I made yesterday, when I asked for a summary of current news across Europe. We see a couple of relevant and interesting themes here, for example with regard to the selection and prioritization of content: number three on the list was the power outage from, by now, already two weeks ago. Clearly important, but is it the most relevant thing that happened yesterday? Probably not. We also see something positive: we start to have transparency and traceability. ChatGPT provides us with sources, but it’s currently not clear which sources are selected, why and on what basis I see them, and whether this is just based on my news consumption or if other readers would see a similar source selection. And this is exactly the point that creates an entirely new challenge that we are dealing with, what we call in our guidance note the audience of one. This stands for an information environment where everyone interacts with generative AI systems and AI-powered information separately and receives hyper-personalized and unique content which will not be received by anyone else. This in turn potentially erodes shared public discourse, increases fragmentation and can lead to even more polarization. Because of time I will only say very little about the last implication, market dynamics. We know that in some areas of the generative AI market, especially when we look at the foundation layer and the models, the market tends to be highly concentrated. And of course a highly concentrated market with single individual players that have a lot of power raises concerns about market dominance and freedom of expression.
I’ll stop here for the time being and I’m sure we’ll have more time to discuss these elements in more detail. Thanks.
Giulia Lucchese: Thank you. Thank you very much, Andrin. A precious introduction. Thank you for sticking to the time and also for stressing the opportunity to engage in the public consultation on the guidance note. This is a very interesting opportunity for the audience at large, so please keep an eye on the freedom of expression pages of the Council of Europe website, because the guidance note will normally be made available during the summer, for comments to be received from whoever has a keen interest in the area. Now, the next speaker is Alexandra Borchardt. She’s a senior journalist, leadership professor, media consultant and senior research associate at the Reuters Institute for the Study of Journalism at the University of Oxford. Alexandra was also, very recently, author of the EBU News Report 2025, Leading Newsrooms in the Age of Generative AI. Alexandra, you interviewed over 20 newsroom leaders and other top researchers in the field. Would you like to share with us your findings and add further reflections? Thank you.
Alexandra Borchardt: Yeah, thank you so much, Giulia. And thanks everyone for being in the audience, we have an almost full room here, and also everyone who joins remotely. Yeah, Leading Newsrooms in the Age of Generative AI is already the second EBU News Report on AI. The first was Trusted Journalism in the Age of Generative AI. And this is also a public service: these reports can be downloaded freely, without registering, by everyone. And it’s a qualitative piece of work. And I’m so glad Andrin set the scene and also alerted you to the risks, because that gives me an opportunity to also show you some of the opportunities. But first of all, I wanted to start with a provocation here: journalism and generative AI contradict each other, if you really put it clearly. Journalism is about facts, and generative AI calculates probabilities. In fact, I learned this as an expert in the expert committee on quality journalism here: accuracy is the very core of journalism. It’s really at the core of the definition. Nevertheless, there are lots of opportunities that newsrooms see. And you might be surprised to see, after the elaborations before, that so many in the media industry are actually excited about AI, because it actually helps them with all kinds of things. It helps them with news gathering, for example in data journalism, doing verification, document analysis, helping them to augment images, brainstorming for ideas. There’s lots of stuff there. It helps them with news production: transcribing, translating, helping with titling, subtitling and particularly liquid formats. This is a key word here: switching easily among different formats or between formats, converting text to video, vice versa, audio. So everyone gets what they like, the audience of one that was just referred to. And then, in the end, news distribution. You can personalize news. You can address different audiences by different needs, also, for example, by their location and their preferences, all kinds of things that really help. And this is Ezra Eeman, director of strategy and innovation of the public broadcaster of the Netherlands, one of them. And he says: with generative AI, we can fulfill our public service mission better. It will enhance interactivity, accessibility, creativity. It helps us to bring more of our content to our audiences. And there are actually some examples; there are nine use cases in this report, and actually we had 15 in the previous report. I’ll just touch on three of them to give you a clear example. For example, an internal tool that RTS in Switzerland developed: the story angle generator. This is for, like, day two after a breaking news situation, when newsrooms might run out of steam a little bit and lack ideas what to do next. And this angle generator gives them an idea, like, oh, maybe you can produce some entertaining stuff or some explanatory journalism out of this. So it really helps them to be more creative with one news piece. Also, we will see a lot more chat formats. And this is from Swedish Radio: they, together with the EBU, developed this news query tool where you can actually interact with news. And then last but not least, and I’m German and based in Munich, you will see the regional update that Bayerischer Rundfunk developed, where you can put your postal code in and then sort of draw a line around what kind of region you want your news from. And then it will create automated podcasts for you to listen to, so you’re always up to date on what’s in your region.
Nevertheless, when I was commissioned to do the second report, I was actually expecting that much more would have happened. But no: while the tech industry is really forging ahead at speed, the media companies are much slower. They are taking a much more intentional approach, and for a good reason, because the trust of their audiences is at stake, and therefore actually their business models, because the major business model of journalism is audiences’ trust. If you lose trust, you lose the business model. In fact, audiences are really quite tolerant about how newsrooms use AI. They find it totally okay if they use it for, like, brainstorming and image recognition, or automating layouts, like these print layouts no one wants to put effort into any longer, but they are absolutely skeptical when it comes to stuff like generating a virtual presenter or visualizing the past. This is what studies reveal. Nevertheless, these audience perceptions are strongly influenced by people’s own experience with using AI, so they are most likely to shift their attitudes about what is acceptable and what is not. And this is Jiri Kivimäki from the Finnish broadcaster Yle, and he said: we started labeling these AI summaries, and our users actually said, hey, come on guys, we don’t care what you use it for, just do your job. We trust you to do the right thing. So they got really angry, he said, which is really interesting. And I will confront you with three big questions that the report revealed, that can be discussed, and that newsrooms and the media industry will discuss. The first big question is about accuracy; I already mentioned the accuracy problem and how to solve it. There was BBC research that came out in March this year which showed that when AI assistants took news content and served people news from it, there was an accuracy problem in every second piece of news. And that is a problem the media has to face, because accuracy is at the very definition of journalism. And Peter Archer, the Director of Generative AI at the BBC, says we need a constructive conversation in the industry; the tech industry and the media industry need to team up. And we need to be part of this, because the tech companies, too, can only be interested in having that problem solved. Big question number two, and I’m particularly fond of that one: will AI make people creative, or will it make us lazy? And my response to that would be, well, if we want to be creative, if people want to be creative, AI can make people more creative. But if you just want to offload work, just press a button, not think about something, it can also make you lazy. This is Professor Pattie Maes from the MIT Media Lab, and I really appreciate her input to this report. And she said, actually, this is not a given. We can actually tease people a little bit so that they are creative. It is possible to build AI systems that challenge the user a little bit, and we don’t have to simplify everything for everybody. And I find that quite important. And the third big question is: will there be money in it? And that’s a big question for newsrooms. Will their business model survive? Because the visibility of journalism is threatened, and we will learn more about that. And also the huge dependence on these tech companies. And Professor Charlie Beckett, he’s the director of the Journalism AI programme at the London School of Economics.
He said: yeah, but if you are entirely reliant on AI, what happens if, you know, the tech companies put up the price fivefold or suddenly change what the stuff can do? So we are in the hands of tech companies, and it is really important to be aware of these dependencies. And the big question really then is, and I just mentioned it: how to keep journalism visible. Because as content has become a commodity and is being produced at scale, it will be more important than ever to invest in the journalism and in direct human connections with audiences, to really establish the legitimacy of journalism in this age of content abundance. And there’s Laura Ellis, also from the BBC, who said something that I found very smart: if we just automate everything, because it’s so easy to automate, will we then lose connections to our audiences even further? Will we still have someone in our newsrooms who speaks with that voice of the audience? So that is really something that we should consider. So, to finish up with this: what do news organizations need to do? And I’m not going into what regulators need to do, but just plainly news organizations. Mostly, investing in quality journalism is really key to secure their survival and maintain their legitimacy as the providers of trusted news and information. Building direct audience connections: really knowing who they serve, and actually getting those email addresses and connections so that you can actually reach your audiences, because otherwise the platforms will be determining and controlling all your access to audiences. Then also making things easy in the newsroom, so that people in the newsroom actually adopt these AI tools and use the right tools to begin with; but don’t make it too easy, really don’t let people stop thinking about it. And then the human qualities of it all: be accessible, approachable, and accountable, and be human. This will be a decisive quality for news organizations. And let me conclude with a quote by Anna Lagerkrantz, who’s the director general of Swedish Television, and she says: journalism has to move up in the value chain. In other words, journalism has to get a lot better, because the copy-and-paste journalism that we are still confronted with these days doesn’t serve us well any longer. And she also said something very important: journalistic institutions, media institutions need to be accountable, because accountability will be a rare commodity. She said in our interview: try to hold an algorithm accountable. Maybe try to hold a platform company accountable. But we are there; people can walk up to our front steps and hold us accountable. And that is really important. And she also reminds us that journalists will need to shift from just being content creators and curators to meaning makers, because we need journalism to make meaning of this complex world and an overabundance of choices. Thank you.
Giulia Lucchese: Thank you very much, Alexandra, this was very insightful. Notwithstanding the clear contradiction, I was at least pleased to learn about the opportunities for news outlets, but also the creative use made of generative AI. Thank you also for stressing the concepts of accuracy, trust, and also accountability. Now, without further ado, I invite our next speaker to intervene. David Caswell is a product developer, consultant and researcher of computational and automated forms of journalism; he is also a member of the MSI-AI, the expert committee drafting the guidance note we mentioned before. David, please, would you provide us with your perspective on upcoming challenges? And I hope you do have solutions for them.
David Caswell: Yes, solutions. That’s the big question. I’ll just go through where I see, kind of, the state of the future, I guess, and then maybe a couple of solutions, or prospective solutions, at the end. So what I’m going to do in these seven minutes is just try to persuade you as to why you should take the more exotic forms of AI that you kind of hear talked about, AGI, superintelligence, seriously, and then kind of connect that with some of the risks, and maybe a few opportunities, in journalism and in expression, human expression, and information more broadly. And so, to take these forms of AI seriously: one reason to do that is to look at the trend lines from the last half decade. And on every trend line, you can look at the benchmarks, the scaling laws, the reasoning abilities. Essentially, we have maxed out the benchmarks; we’ve got to 100% and can’t go any further. There’s a real problem right now in AI about how to measure how smart these things are, because the benchmarks are saturated. And things are just getting started. We’ve got literally more than a trillion US dollars in soft commitments for AI infrastructure that have been announced in the last year, 18 months. And some of that is not going to happen, and all the rest of it, but it’s a vast, vast amount of money, right? It’s money on the scale of the moonshot that the US did in the 60s. And the effects of that investment haven’t begun to show up yet. So another reason we should take AI seriously is because the experts are taking it seriously. So Sam Altman at OpenAI does not think he’s going to be smarter than GPT-5. Dario Amodei, the CEO of Anthropic, another big model maker, likens what’s coming to a country of geniuses in a data center. So say a country of 5 million people, each of them an Albert Einstein, in a data center in San Antonio, Texas. That’s the kind of thing to imagine here. And you see this again and again and again. These people do have biases, but only in the same way that climate scientists have biases and vaccine experts have biases. We listen to those experts, and we maybe should listen to these experts a little bit too. Maybe not completely, but a little bit. We do have independent studies of this by very, very qualified and principled people. There’s one I highly recommend, the AI 2027 report. But the interesting thing, both in the experts and in these independent analyses, is that even the critics of this concept of AGI and superintelligence, even they accept that dramatic things are going to happen. So even the critics, even the people who are downplaying what’s going on, are still painting a pretty dramatic picture. Another reason that we should take AI seriously is because of consumer adoption. So if we look at the use of AI, this is from work that was done by the US Federal Reserve back in September: at the moment, about a quarter of the US working population uses generative AI at work once a week or more. If you look at it on the content side, if you look at the entire amount of text content produced in the US, in major areas significant portions of that are already generative-AI generated. So for example, about a quarter of all press releases, corporate press releases, are AI generated. So this stuff is showing up very, very rapidly, already in double digits in weekly use and in content. Another reason to take this stuff seriously is just to play with the tools.
Like, honestly, everybody here: sign up for the tools, sit down, play with the most advanced models, really exercise them, learn what these reasoning models can do, learn what tools like deep research can do, or agents like Manus. These are kind of the leading edge of where AI is, but they’re completely accessible. You don’t need technical skills. You don’t need special access. You just need a little bit of curiosity. And if you play with those tools and really exercise them on a subject that you know well, you will be pretty convinced that big things are coming. So I would suggest that engaging with the tools and judging for yourself is a good reason to take it seriously. And then we should look at the progress over the last half decade on the largest possible benchmarks, benchmarks on the largest scale. So for most of my life, the big golden ideal of AI was the Turing test, passing the Turing test. Well, we passed that in about 2019, and we didn’t even notice it. So that’s gone. The next milestone, the next large benchmark here, is AI that’s as smart as the regular, average, median human in a vast array of tasks, the most digitally accessible tasks. That’s kind of gone. If you’ve played with these tools at all recently, you’ll see that they can draw better, they can write better, they can probably reason better, they can do most things better than the average or median human. Another possible benchmark is AI that’s as smart as the smartest individual human in any digital domain. And this is my personal definition of AGI; it’s what a lot of people think of as AGI. We are not quite there; that’s a dashed line. But we are almost there. If you really get involved with some of these reasoning models on a subject that you know well, you will see, pick your topic, that the models are making significant progress in that direction. So there’s a reasonable case we’re going to get to that point within a couple of years, two, three years. And then lastly, there’s this other category: human beings are smart not just because we’re individually smart; we’re smart because, as a society of 8 billion people, we can do amazing things. And this idea that we could have models or machines that are smarter than all of us collectively, sometimes called superintelligence, is taken very, very seriously by some very serious people in this world, not just people at the model companies, but startups, investors, governments, and so on. A little further out, but pretty significant. So there are risks, obviously, with all of this. One risk, and this was something spoken about earlier, is this significant risk of the bifurcation of societies into super-empowered and disempowered people. So if you look at all the possibilities in media that generative AI can bring, for some people it is like having your own personal newsroom. It’s like having your own army of academics and researchers and analysts. It’s like having your own personal central intelligence agency. It super-empowers what you can do. For others, it’s an escape. It’s a distraction. It’s a way out of reality. It’s a way to avoid dealing with things you need to deal with. And the thing here is that these are feedback loops. The more empowered you are, the more empowered you become; and the more distracted and confused and escape-focused you are, the more it goes that way.
And so you end up with some parts of society having a dramatic gain in the agency that they have, and some losing agency. So that’s a risk, a very real risk, and it’s already happening to some degree. Here’s another risk: news as a complex system. So here’s a kind of series of events in a newsroom, say your average newsroom. Step one, AI shows up. You say, right, we can use this to make our jobs as journalists easier. That’s great. Then you say, well, we can actually use it to do whole jobs that we don’t want to do: jobs that we don’t like, or that we have trouble filling. We’ll just get AI to do those jobs. Well, that’s all right. Then you’re in this situation where you have AI and it’s doing most jobs. So you can go home. You can have a three-day week, or you can come in at 11 and go home at three, because the AI is doing most of the jobs. And that sounds kind of nice, right? And then you get to this point where: what exactly is the AI doing? You know, I haven’t been checking in for a few weeks, and what is it doing? And then you’re at this point where you don’t know where your information is coming from. The whole ecosystem works as it works now: your phone has got alerts, and you’ve got news on webpages, and you’re talking to ChatGPT about news and all the rest of it, but you don’t know where that’s coming from. The situation is that it’s got so complex that it’s a complex system. And this idea of big chunks of our society being a complex system: our financial system went that way. Very few people understand how the financial system works, even though we all depend on it. And there are many researchers right now who study the financial system as a complex system. Here’s another reason to take AI seriously in terms of risk, which is this idea of persuasion machines. And we got a little early glimpse of that recently in a study from a team at the University of Zurich. What they basically did was put a set of AI agents on Reddit, on a subreddit called Change My View. And Change My View is a subreddit where you put a point of view, and then if somebody changes your mind, you award them a little bonus point. And so they were able to use that setup to do a very, very high-scale test. There were ethical issues around the study, so it’s all a little obscured, but in the paper that they would have published, had it passed the ethics guidelines, they found that these models could achieve persuasion rates between three and six times higher than the human baseline. So the idea of machines that are hyper-persuasive for political or for commercial purposes is not a far-fetched idea at all. And finally, just in legacy news media: ChatGPT shows up in late 2022, and people in the newsrooms start building guidelines and providing access and doing prompt training and all that kind of stuff. You get into 2023, and newsrooms are starting to do things like summaries; they’re starting to automate tasks. You get into 2024, and the more advanced newsrooms are building little chatbots that can chat with their archive, or semantic search where you can get better search. A lot of them are building toolkits where you can automate a lot of tasks in a newsroom; that’s quite common. And then at the moment, this year, I think a lot of newsrooms are using AI to do news gathering, to do news gathering at scale. So there are opportunities here for legacy news media, but it’s kind of a race, really, at some level.
The change that’s here is dramatic, the change that’s coming is dramatic, and there’s an open question here about whether legacy news media can take advantage of those opportunities. There are other opportunities as well, right? If you look at how informed societies are at the moment, why would we consider that to be an end state? You know, if you take a scale here from medieval ignorance on one end, say a peasant in a village in 1425, to god-like superintelligence on the other end, we have come a long way along that scale using technology: the printing press, the invention of journalism, radio, broadcast television, the internet, social networks. What might we be able to do in terms of informing society once we diffuse all of these AI tools we have at the moment into our ecosystem? What would we do with AGI? What might we do with superintelligence? So there really are opportunities here to dramatically increase the level of awareness that people have about their environment. I’ll just leave it there. Thank you, thank you.
Giulia Lucchese: Thank you very much, David, for addressing these exotic forms of AI, AGI, and their relation to human expression. It seems like we are running late on a lot of the challenges you listed, but you were also so kind as to conclude your presentation with opportunities, at the end at least. Last but not least, I pass the floor to Julie Posetti. Julie is a feminist journalist, author and researcher, Global Director of Research at the International Centre for Journalists, and Professor of Journalism at City, University of London. Julie, I know you would like to offer your perspective on the issue by starting with a video.

Julie Posetti: Yes, I do plan to do that, and it segues directly from David’s conclusion, which was with reference to godlike omniscience. If we can play the video, please.
Julie Posetti: I think we’re having trouble with the audio. We’re having trouble with the audio, is that right? We’re having trouble with the audio. [Video clip plays:] You also want to live forever. If you think about AI and you think about God, what is God? God is this thing that is all-knowing, it’s all-seeing, it’s all-powerful, it transcends time, it’s immortal. If you talk to a lot of these guys, the very senior ones who are building AGI, artificial general intelligence, creating something that has all human knowledge put into it, that surpasses any single human in its understanding of the world and the universe, and that is everywhere, connected to every device in every city and every home, that’s watching you and thinking about you. And if we turn it on and let it start to influence society, it’s very subtly making decisions about you, where you can kind of feel it a little bit, but you can’t see it or touch it. And then imagine you have a bunch of men who also want to live forever, defeat death, become immortal. And in order to do that, they have to find a way to connect themselves to this creation. These men see themselves as prophets. Bryan Johnson, the guy that we had dinner with, literally said, and this is in the podcast: we’ve got it wrong. God didn’t create us; we’re going to create God, and then we’re going to merge with him. And all the weird things that these guys say and do, if you start to understand that there are aspects of this that are like a cult, a fundamentalist cult, or a new religious movement, a lot of their actions start to make a lot more sense. And if you actually start to interpret these statements not as just some passing flippant comment, but see that there’s a pattern to it, I think that we’re dealing with a cult in Silicon Valley. [End of clip.] OK, apologies for the issues with the sound and the video sync. That was a clip from a panel discussion at the International Centre for Journalists, sorry, the International Journalism Festival in Perugia last month. For those of you who have forgotten, Christopher Wylie is not just a commentator on AI; he was in fact the Cambridge Analytica whistleblower. He is the one who revealed the data scandal that saw millions of Facebook users’ data breached and compromised. And you’ll remember that the Cambridge Analytica scandal involved an early iteration of AI tools that were designed to micro-target with macro-influencing in the context of political campaigns. So several people have said that his comments sound alarmist, but he also pointed out that we need to stop being so polite, that we need to actually articulate the concerns and the risks associated not just with the technology but with the business models behind the technology, which are designed to further enrich billionaires, who are actually those that stand to profit most from the mainstreaming of AI. And ultimately, as David has pointed out, the objective is AGI and then superintelligence. So it might sound alarmist, but the facts are alarming, and they should be particularly alarming for people and states and intergovernmental organisations that are invested in securing and reinforcing human rights in the age of AI. So, as I said, Chris exposed the Cambridge Analytica scandal, and when he talks about this desire for omniscience and omnipresence among the AI tycoons, I think it’s important to highlight the links between the right to privacy and the rights to freedom of expression and freedom of thought.
And he does that in an investigative podcast, the one he was speaking about there, which was published by Coda Story, a global-facing investigative journalism outlet that emphasizes the identification of prescient trends, particularly with regard to disinformation and narrative capture. And that podcast is called Captured: The Secrets Behind Silicon Valley’s AI Takeover. And I’ve used that example partly because Coda Story is one of the research subjects for a global study that I currently lead called Disarming Disinformation. It’s looking at the way news organizations are confronting, responding to and trying to counter disinformation, particularly in the context of the challenges and opportunities that AI presents. So I think it’s important, as I said, to consider the right to privacy in combination with the right to freedom of expression, and therefore to think about AI in all its integrated forms, and the responses to it, holistically. So before I turn specifically to generative AI and freedom of expression, I also want to highlight the need to consider the implications of the AI of things. In particular, the application of AI glasses, which do pose a significant risk to the kind of freedom of expression that relies on the right to privacy, such as investigative journalism that depends on confidential sources, such as Christopher Wylie. He was initially the whistleblower who was a confidential source, before he identified himself to The Guardian, The New York Times, Channel 4 and others for the Cambridge Analytica reporting. And it’s noteworthy that Mark Zuckerberg recently invited Meta’s users, nearly a billion of them, to download a new AI app that will network and integrate all of their data, including Meta’s new or upgraded AI glasses, which include facial recognition. And that prompted John McLean to write in The Hill, a newspaper coming out of DC, that Mark Zuckerberg is building a new surveillance state. So again, I think we need to consider surveillance in the context of freedom of expression. And he wrote: these glasses are not just watching the world. They’re interpreting it. They’re filtering and rewriting it with the full force of Meta’s algorithms behind the lens. They’ll not only collect data, but also send it back to Meta’s servers to be processed, monetized, and repurposed. Facial recognition, behavioural prediction, sentiment analysis: they’ll all happen in real time. And the implications are staggering. It’s not just about surveillance; it’s about the control of perception. That’s a very important consideration when it comes to the function of independent journalism in democratic contexts, but also freedom of expression more broadly, and particularly issues around election integrity, for example, connected directly to information integrity. And coming back to generative AI specifically: an example from Australia, from the very recent Australian elections, of something we’re starting to see replicated. The ABC’s, the Australian Broadcasting Corporation’s, chief technology reporter, working with the fact-checking team, and in some ways using AI technologies to analyze large data sets through natural language processing, identified the function of Russian disinformation in attempting to pollute chatbots. As a way of polluting information, she referred to it as working the same way as food poisoning works: inserting disinformation into large language models by flooding the zone with literally fake news.
So, these artificial news websites: one that they identified was called Pravda Australia. It’s an iterative title, and it is largely derived from Telegram chats full of Russian disinformation. And that disinformation is being surfaced in the context of queries in the major chatbots that are being used. So this is something that I think needs to be really carefully considered with regard to accuracy and verification, which are real challenges with regard to ChatGPT or any other tool that you’re using to query large language models. And the second point that I want to make is around the ability, therefore, to influence the outputs not just with the disinformation of foreign state actors or a political persuasion, but also with hate speech and general disinformation connected to health, for example. If the objective is to radicalize certain citizens or societies as a whole, and to roll back rights, then this is another weapon that the agents of such pursuits have available to them. And we heard an example yesterday from Neema Lugangira, who’s the chair of the African Parliamentary Network on Internet Governance, of her experience of seeing generative AI used on X, so Grok on X, to generate deepfakes, effectively. And her point was that generative AI can be used to really reinforce sexist stereotypes, but also to generate misogynistic, hyper-sexualized images. And we know about deepfakes in the context of deepfake porn: we’re seeing this used against journalists, we’re seeing this used against political actors, as that example showed. So I think that, if we’re to look at opportunities, we need to be aware of the tactics of those actors. They tend to be networked, they’re very creative, and they’re transnational, cross-border. So the challenge for us, those of us trying to reinforce human rights, the rule of law and democracy, is to act in similarly networked and creative ways. And I’ll leave it there. Thank you.
Giulia Lucchese: Thank you very much, Julie. I’m fascinated, and surely alarmed, right now. Thank you for starting with this tough, thought-provoking question to the audience. This comes at a very good moment, because now we open the floor for questions. Please, both online and in person, do not hesitate. Yet I would ask you to keep your questions focused, so that we can give voice to diverse participants. Please, who is willing to break the ice? Yes, would you start? Thank you.
Audience: All right. So first of all, thanks for the very interesting insights. The point I want to raise refers to the first part of the presentation. The point I want to get your opinion on, and how you think it’s going to develop in the future, is this: we have academic research showing that when a group of humans uses AI to write an essay or a newspaper article, the output on a collective level will be more similar. So we see that on a collective level we have a more homogeneous output. On the other hand, we always argue that AI is going to help with a more personalized experience; it’s creating a more individual experience, like how we consume content will be more individual. So for me, that seems kind of contradictory, and I wanted to get your opinion on that point. Thanks.
Giulia Lucchese: Thank you. Should we collect a couple of questions and then, okay, thank you.
Audience: First of all, thank you very much for all the interventions. They were very inspiring and interesting. I have just a quick and simple question. We are still witnessing that in many fields AI does not make key decisions in relation to the production of content. Will we someday be able to witness an AI, maybe applied to press activity or related fields, tell us that we won’t get access to a certain piece of information or news because of a decision that was made by the AI, an exclusive decision of the system? Thank you very much.
Giulia Lucchese: Thank you. Yes, please.
Audience: Thank you to all the speakers. I have a question regarding something that was touched upon, I believe, in the first presentation: the fact that the use of, or dependency on, AI systems also makes journalism, and information in general, dependent on the prices that are set by these corporations. And I was wondering how you see the quality of information diminishing in relation to the possibility of more paywalls being introduced, with access to accurate and verified information thus also becoming a socio-economic issue. Thank you.
Giulia Lucchese: Thank you. I’ll take the last question. No, there’s not a last question. Oh, yes, there is. Please. Thanks.
Audience: Sorry, just quickly. It’s probably for David Caswell, if I may. You outlined those two clear distinctions of where you see society going. And I’ve thought about this before, this idea that you have massive polarization if there are individuals who just get distracted by social media, sucked into the algorithm, seeing simpler things, not engaging with intellectual acts of curiosity or writing; and then you have those who really understand and comprehend the system. And so you have massive intellectual polarization. But could you not also argue that there’s a third category of people who just say, I don’t necessarily want to know? Like, in the financial system it’s different, right, because you have to be in the system of finance. But couldn’t you say there’s a third group who just says, I don’t want to be in the system? So it’s not being sucked in, not being brainwashed, almost, but just completely escaping, getting out of the matrix, in a way. So do you think that it’s possible to achieve that?
Giulia Lucchese: Thank you. As I’m mindful of the timing, I would like to propose that, starting with Andrin, you each provide at least a reply to the question you like the most, and then we move on. And maybe we avoid the final round, so if you have any final remarks right now, I would ask you to condense them into a two-minute intervention. Thank you.
Andrin Eichin: I will just answer three of them, but very briefly, because I think they’re very good. The first question was on standardized output and expression and how it interacts with individual expression. I think this is a really good question; it’s something that we were considering as well in the expert committee. I think it really depends on what kind of tasks we’re looking at. There will be a lot of standardized tasks, writing emails, summarizing reports, where we will see standardized expression, and this will probably increase and will also create a problem with regard to the data sets that we’re using. And then we have creative expression, the way we interact with generative AI systems to increase our abilities, as we’re seeing in a creative way with memes, but also in journalism. So I think there will be elements where we have standardized expression, and there will be elements where generative AI will expand our expression as well. And you can even take this down to the level of the text itself, right, with regard to words and in relation to ideas. Maybe on question two, on whether AI will make key decisions with respect to content: I believe so, definitely. Maybe I’m less pessimistic and doomy than David may be; I think the timescale will be a bit longer. If we look at how productive the interaction with AI systems actually is today, it’s still quite low. A lot of it is for entertainment purposes, but this will change. And this leads me to question four. Although you addressed it to David, I’ll try to jump in. On whether you’d be able to escape: maybe there is this third form, but I don’t think it will last long. Again, for me, we’re speaking still about the future, not two or three years, but at one point generative AI systems will be such a part of our economy, they will be so important for you to be productive and to participate in the economy, that there will be almost no option to opt out. If you don’t use a smartphone today, it is very difficult to participate in society. And there are people that don’t use the financial markets today, but they are very different.
Alexandra Borchardt: Is it a contradiction? Will it make people more creative, more reflective? Well, this really depends on the system’s design, and of course on what you want to do with it, and that is where the socioeconomic differentiation might happen, as it already has with social media. With social media, if you wanted a lot of information, take the pandemic for example, you could interact directly with scientists to find out everything. But if you didn’t have the basic knowledge, if you didn’t have the access, if you didn’t know which scientists were really good at this, maybe you got no information at all, or only the basic information from public service media. So it really depends on what people are going to do with it, but it also depends on the system’s design. And what Professor Pattie Maes said was that you can challenge people a little just by asking one more question, rather than pressing a button and taking the output. I don’t know if you’ve experienced that: produce a report for me; shall I format it for you and send it off? You can do things without ever engaging your brain. But if there’s just one question back, like, do you think this really makes sense, or ask me something in return, that is the kind of system design that can really help engage people a lot more. I hope that makes you happy. Then I guess I’m the person for the paywall question, because I spent almost 30 years in the media industry, and I’m really worried about business models. The paywall problem we’ve had for some time now: there is quality media, and to survive, they really try to engage readers and set paywalls to make people pay for news, which makes a lot of sense. You can’t go to the bakery and help yourself; you have to pay. So the idea of many news organizations is: this is quality information, and if it’s worth something to you, you pay. But AI can undermine paywalls and give you the output directly. With generative search emerging, you just ask an AI questions and get responses, and sometimes those responses contain material that is actually behind paywalls. So I have no idea whether the paywall will be the model of the future. News organizations will need to do a lot more to engage people, to show them they can create real value in their lives, and to really make them pay. And public service media will probably also become a lot more important and necessary in that context. So the future of the business model is really something that worries me. And the third one I would like to comment on: will these systems make decisions? Yes, of course, with agentic AI emerging. Agentic AI means that you set some goal and these agents then make decisions on your behalf fairly independently; that’s why they’re called agents. But what these agents probably won’t do is, for example, investigative research, because they have no incentive to do so.
So this is most likely where journalism really needs to intensify its efforts: becoming more investigative, holding power to account, and going for the things that AI won’t do. But we might be surprised by what AI will do in the future. So I don’t really think I can give you the final answers here. But David might. He’s the super expert here.
David Caswell: Well, sorry, I first want to clarify that I did not intend to be pessimistic and gloomy. I was going for excited and optimistic, but obviously failed. I’ll just quickly go through my brief responses to some of the questions. The question about the balance between the narrowing of the distribution of expression on the one hand, versus all of these opportunities to be more expressive, articulate, creative, and artistic on the other: that’s a very real question, and the honest answer is, I don’t know. But I think it brings up very clearly that this is a dynamical system. Certain things will change that move freedom of expression, or the makeup of the information ecosystem, or our relationship with information, in certain directions, and other factors will move them in other directions. That is the uncertainty we’re in right now: all of these things are changing at once, and what the net of that ends up being, we don’t really know. In terms of AI making decisions, that happened long ago. If you get your news from social media, from Facebook, from Google News, AI is figuring out what you’re going to see. It’s already happening inside news organizations with generative AI, in terms of story selection, angles, and all the rest of it. And even in terms of agents, there was an interesting little item about two months ago. There’s a company in the UK called Newsquest. They hired their first, what was the title, AI agent orchestrator for their newsroom. So they hired a journalist whose job it is to manage a team of agents to make journalistic decisions and do journalistic things. So I think we have passed that milestone. AI is already making fairly profound editorial decisions. Not broadly, and I think a lot of newsrooms that are touching the edge of this don’t want to talk about it, but the trend is pretty clear. On the cost question, I’m not sure I got the question right, but I think this was reacting to the slide with Charlie Beckett’s quote about what happens if the model companies increase the cost of these models five times. I’m not sure that’s going to happen. One of the surprises of the last year or two is that these models might be much more like electricity, much more like a utility, than some special thing like a social network. Social networks, because of network effects, had this winner-take-all kind of dynamic. These models might not have that. It might be that anybody can build one of these and get to some level of intelligence, just like anybody can build a power station and generate electricity. It’s expensive, but anyone can do it. So I’m not that worried about the underlying cost. The bifurcation one, that was a very good question. And I absolutely agree with Andrin that it’s going to be hard to opt out of this. The example I would use is not so much opting out of smartphones. It’s worse than that. It’s more like being Amish or Old Order Mennonite, where you’re basically picking a point in time and sticking with it. The Anabaptist communities have a large population in Canada, the US, and South America, so it’s not nothing. But it is that kind of scale, I think.
That bifurcation and its analysis came from a very comprehensive scenario-planning exercise that I led last year, called the AI in Journalism Futures report; you can find the PDF online. There were five scenarios, and one was the bifurcation. But there was a whole other scenario around that opt-out option. This was a consolidation of the points of view of about 1,000 people, and one of the key findings was that most people who thought about this assumed some portion of the population would opt out.
Julie Posetti: Thanks, David. I think everybody’s questions have been answered, so I’ll just make a couple of remarks reflecting on what’s been said, picking up on Alexandra’s and David’s presentations in particular, and on that Charlie Beckett quote. It does concern me that we have not spent much time during this discussion addressing questions around regulation and embedding human rights in these processes. As someone who has written a lot about the future of journalism and about technology-led approaches to journalistic activity, which during Web 2.0 led to what I termed platform capture, I worry that we haven’t necessarily learned the lessons from that period, when news organizations and individual journalists became trapped within the platform walls. I realize this is different technology, but we failed to be appropriately critical, I think, and we failed to look at the risks in a way that protected business models and ensured an editorially led approach to engaging with technology. So I and others have spoken about the risk of repeating that Web 2.0 platform capture through a ready embrace of AI, without appropriate levels of critical engagement with not just the technology but the characters behind it. And I would slightly disagree with David characterizing Sam Altman as an expert and comparing him to climate scientists, for example. Climate scientists are independent experts, and we have independent experts in this field too. I think we need to separate expert perspectives from those who stand to profit massively from the technology they are propagating, and it’s important to highlight that. I also didn’t speak enough about the gender implications, or the implications for diversity more broadly. But particularly in the current geopolitical climate, where diversity is verboten in some contexts and has been weaponized, I think we need to reinforce the humanity in these discussions. And that goes to Alexandra’s point, which I’ve heard multiple times from journalists internationally trying to figure out the unique selling proposition of professional independent journalism, or of producers of public interest information more broadly: it is sense-making, meaning-making, interpretation. And sometimes that does involve considered prediction, based on facts, which helps societies prepare for risks. So I will leave it there, except to quote a Kenyan woman politician who spoke yesterday and said that we need AI governance that protects not just data, but dignity. I think that’s a good place to end. Thank you.
Giulia Lucchese: Thank you very much, Julie. Thank you to all our panellists. I will now give the floor to Desara Dushi of the EuroDIG Programme Committee for the conclusions, to be agreed by the participants.
Desara Dushi: Hello, everyone. I’m going to share the screen with the messages I tried to draft during the workshop, and I’m going to read them one by one. I’m from the EuroDIG Programme Committee, and we need to draft three messages for each workshop. The first message I tried to identify is that generative AI has the potential to diminish unique voices, including minority languages. It poses integrity issues: problems with identifying whether content was created by humans or by technology. It also has the power of persuasion, including via the disinformation it enables, and it influences market dynamics. The second message is that journalism and generative AI contradict each other: the former is about facts, while the latter generates content irrelevant of facts. There is a risk of standardised expression as well. However, generative AI also offers opportunities for journalism, helping to bring more content to audiences. Questions still remain, such as accuracy and the impact on humans. The risk of using AI in journalism is losing control over news production and quality, which might also affect the future of the business model. One of the main issues will be keeping journalism visible and keeping the connection with the audience. And the last message is that we should take AI seriously, be aware of what it can and cannot do, and of how its rapid development will have impact in the near future, creating a lot of uncertainty about its dynamics and its effect on freedom of expression. There is a risk of omnipresence as well. AI, including generative AI, has implications not only for freedom of expression but also for privacy, for example through surveillance, which leads to control of perception. We need to act on a networked and collective level. Now, I would ask everyone whether there are any major objections to these messages. You do not need to worry about the formatting, language, and editing, because the organizing team will take care of that afterwards. But do you see any major objections regarding what was said during the session?
Alexandra Borchardt: On the second one: I mean, that was meant as a provocation. It generates content not irrelevant of facts; it basically calculates probabilities. So it could be true or it could not be true, and that’s what makes it so difficult to figure out, because generative AI produces something that sounds convincing. It’s really optimizing for credibility, not for facts. So maybe that should be toned down a little bit. Because obviously, most of the stuff that generative AI…
David Caswell: It’s like food that’s 95% edible.
Julie Posetti: Well, maybe 75%. Just one thing: there’s nothing wrong with how you’ve represented what I said, but it would be good to get the gender element in, which I think is very important: the ways in which generative AI can be used to facilitate technology-based violence against women. Deepfakes are one example, used against women political actors and women journalists, which is about silencing, about chilling freedom of expression. So I think that would be an important point to add.