Managing Change in Media Space: Social Media, Information Disorder, and Voting Dynamics – WS 02 2024

18 June 2024 | 15:00 - 16:00 EEST | Workshop 2a | WS room 1 | [[image:Icons_live_20px.png | Video recording | link=https://youtu.be/BtyjA6zVC10]] | [[image:Icon_transcript_20px.png | Transcript | link=Managing Change in Media Space: Social Media, Information Disorder, and Voting Dynamics – WS 02 2024#Workshop_2a:_2]] <br />
18 June 2024 | 16:45 - 17:45 EEST | Workshop 2b | WS room 1 | [[image:Icons_live_20px.png | Video recording | link=https://youtu.be/REbbY6-ehoM]] | [[image:Icon_transcript_20px.png | Transcript | link=Managing Change in Media Space: Social Media, Information Disorder, and Voting Dynamics – WS 02 2024#Workshop_2b:_2]] <br />
[[Consolidated_programme_2024#ws02_24|'''Consolidated programme 2024 overview''']]<br /><br />
{{Sessionadvice-WS-2024}}
Proposals: #7 (#27) (#35) #37 #53 #54 (#60) #61 #80 #83 (see [https://www.eurodig.org/wp-content/uploads/2024/01/EuroDIG-2024_List-of-proposals-20240116_for_wiki.xlsx list of proposals])<br /><br />
Working title: <big>'''Workshop 2a: Managing change in media space (Part 1)'''</big><br />
*Sustaining democratic processes
*Review of EU elections
*Concrete impact of EU regulation
Proposals: #54 #80 (see [https://www.eurodig.org/wp-content/uploads/2024/01/EuroDIG-2024_List-of-proposals-20240116_for_wiki.xlsx list of proposals])<br /><br />
Working title: <big>'''Workshop 2b: Managing change in media space (Part 2)'''</big><br />
*Countering disinformation
Proposals: (#27) #37 #53 #83 (see [https://www.eurodig.org/wp-content/uploads/2024/01/EuroDIG-2024_List-of-proposals-20240116_for_wiki.xlsx list of proposals])<br />
*Education
Proposals: #7 (#35) (#60) #61 (see [https://www.eurodig.org/wp-content/uploads/2024/01/EuroDIG-2024_List-of-proposals-20240116_for_wiki.xlsx list of proposals])<br /><br />
== <span class="dateline">Get involved!</span> ==  
You are invited to become a member of the Session Org Team by simply subscribing to the [https://list.eurodig.org/mailman/listinfo/ws2_2024 '''mailing list''']. By doing so, you agree that your name and affiliation will be published on the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.
Kindly note that it may take a while until the Org Team is formed and starts working.


To follow the current discussion on this topic, see the [[{{TALKPAGENAME}} | discussion]] tab on the upper left side of this page.


== Session description ==  
'''Questions to be Addressed by the Workshop:'''


*What are the primary implications of the fragmented and disintermediated media space for sustaining democratic processes?   
*What roles do governments, civil society, and media actors play in addressing these issues and fostering resilient democracies?
*How have EU digital regulations, such as the Digital Single Market initiatives (DMA, DSA), impacted European elections? The speaker will address how the pieces of legislation work together, what to expect as they come into force, and what has been done so far.
*What has been happening in other elections around the world and how is this relevant for the EU / what similarities are there?
*What have the platforms learnt globally, and how does that impact the EU?
*How does the use of artificial intelligence (AI) contribute to the spread of disinformation online, and what implications does this have for the electoral process and political campaigns?
*What gaps need to be addressed to effectively combat disinformation and other issues arising from the current media landscape, and at what levels should these efforts be focused? The Guidance Note on countering the spread of online mis- and disinformation will be presented.
*How can educational initiatives be designed to build resilience against disinformation and promote critical media literacy?
*What roles do educational institutions and policymakers play in fostering a media environment that supports democratic principles and combats the spread of misleading information? How can initiatives be brought into the curricula?


'''Expected Outcomes:'''


By attending this session, participants will:


*Enhance their understanding of the current state of the media space and its implications for democratic processes at both societal and individual levels.
*Gain insights into EU digital policy regulation and its anticipated impact on the European elections, the lessons learned, and how applicable these are elsewhere.
*Understand how platforms face conflicting pressures, such as balancing freedom of expression and content moderation during elections.
*Identify challenges and opportunities for creating a less polluted media space that fosters resilient democracies.
*Explore potential solutions to current regulatory gaps, such as self- and co-regulation instruments.
*Learn about educational initiatives designed to build resilience against disinformation and the role of policymakers and educational institutions in cultivating critical media literacy.


== Format ==  
Each panel member will present a brief overview of their views and perspectives. This will be followed by an interactive open discussion with all attendees to hear opinions, ideas, concepts and recommendations.  
 
On the topic of EU elections, panellists will structure their views as follows:
*First, what happened and lessons learned
*What can be done better
*How can this be replicated


== Further reading ==  
*Through the Digital Lens: A review of the effects of digital mediatisation in journalism and politics<br/ >https://eplus.uni-salzburg.at/JKM/periodical/pageview/8764670
*Content Moderation In A Historic Election Year: Key Lessons For Industry<br/ >https://www.oversightboard.com/news/content-moderation-in-a-historic-election-year-key-lessons-for-industry/
*Systemic vulnerabilities, MIL, disinformation threats: Preliminary Risk Assessment ahead of the 2024 European elections<br/ >https://edmo.eu/publications/systemic-vulnerabilities-mil-disinformation-threats-preliminary-risk-assessment-ahead-of-the-2024-european-elections/
*EDMO Task Force on 2024 European Elections<br/ >https://edmo.eu/thematic-areas/european-elections/edmo-taskforce-on-2024-european-elections/
*Voluntary Election Integrity Guidelines for Technology Companies (by the International Foundation for Electoral Systems)<br/ >https://electionsandtech.org/election-integrity-guidelines-for-tech-companies/
The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.


'''Key Participants and Moderators'''
 
Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder-balanced dialogue that also considers gender and geographical balance.
Please provide short CVs of the Key Participants involved in your session at the Wiki or link to another source.
 
'''Moderator'''


The moderator is the facilitator of the session at the event. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide a short CV of the moderator of your session at the Wiki or link to another source.
:'''Workshop 2a'''
::''Key Participants''
::*Irena Guidikova, Council of Europe (On site)
::*Afia Asantewaa Asare-Kyei, Director for Justice & Accountability, Open Society (Remote)
::*Paula Gori - EDMO & Florence School of Transnational Governance, European University Institute (Remote)
::*Ms. Aistė Meidutė, Lithuanian Counter-Disinfo Project DIGIRES (On site)
::''Moderator''
::*Giacomo Mazzone, Member of the Advisory Council of EDMO


:'''Workshop 2b'''
::''Key Participants''
::*Dr. Tilak Jha, Associate Professor at Bennett University, India (online)
::*Dr. Viktor Denisenko, Associate Professor, Centre for Journalism and Media Research, Faculty of Communication, Vilnius University
::*Ms. Ieva Ivanauskaitė, Innovation and Partnerships Team Lead, Delfi
::*Mr. Gabriel Karsan, Secretariat Support, African Parliamentary Network on Internet Governance (online)
::*Ms. Aistė Meidutė, Lithuanian Counter-Disinfo Project DIGIRES
::''Moderator''
::*Vytautė Merkytė
'''Remote Moderator'''


Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.


== Messages ==
==== Workshop 2a: ====
''Rapporteur: Francesco Vecchi, Eumans''
 
# '''Impact and Challenges of the EU Elections'''<br />Disinformation campaigns before EU elections targeted issues like Ukraine, COVID-19, and the state of EU democracy, aiming to manipulate public opinion and polarize voters. While immediate election periods showed reduced incidents, AI and traditional methods play crucial roles in maintaining (or degrading) electoral integrity and ensuring (or threatening) access to verified political content. The measures put in place by the EU (through funding an independent organisation like EDMO, the Code of Practice on Disinformation, the EEAS, the European Parliament, and a network of fact-checkers) have succeeded in mitigating the impact of foreign interference. However, concerns remain about the spreading of mistrust in democratic institutions.
# '''Possible Solutions'''<br />To combat disinformation, a multimethod approach includes independent fact-checking, international collaboration on research and demonetisation strategies, and holding digital platforms accountable [1]. The representative of Meta’s oversight board presented recommendations addressed to the platform about how to operate during elections. Educating users in critical thinking and media literacy, along with developing voter-friendly communication, enhances electoral transparency and promotes informed electoral participation. In addition, the long-term financial sustainability of reliable media is key to managing effective strategies.
# '''Multidimensional Approach'''<br />Addressing media manipulation and electoral integrity requires enhanced cooperation between states, platforms, and civil society with a multidimensional approach involving diverse stakeholders, multidisciplinary expertise (e.g. psychosociology, neurology, linguistics, communications, etc.), multi-level governance (from international to local), and the development of inclusive multilingual standards.
 
[1] See the [https://www.oversightboard.com/wp-content/uploads/2024/04/Oversight-Board-Elections-Paper-May-2024FINAL.pdf full document here].
 
==== Workshop 2b: ====
''Rapporteur: Francesco Vecchi, Eumans''
 
# '''General Mistrust in Democratic Institutions'''<br />In 2024, amid widespread distrust in democratic institutions globally, approximately 4 billion people engage in elections. Information (both digital and traditional) is increasingly crafted for entertainment, gamification, and political polarisation, amplified by Artificial Intelligence through propaganda, translation services, and micro-targeting. More specifically, social media platforms serve as crucial feedback and control channels for governments, particularly in the Global South.
# '''Diversified and Tailored Solutions'''<br />To tackle these challenges, promoting media literacy in educational curricula is essential, fostering critical thinking and fact-checking skills. Creating a symbiotic relationship between stakeholders (taking proactive measures to combat disinformation) and users (encouraged to adopt critical thinking practices and rely on verified sources) strengthens resilience against misinformation. Moreover, tailored solutions are crucial: e.g. Central-Eastern Europe frames disinformation geopolitically, African countries grapple with centralised power dynamics, and India faces issues with social media micro-profiling. Finally, empowering community leaders strengthens local resilience by leveraging their influence to promote accurate information.
# '''Focus on Inclusivity and Social Media'''<br />An inclusive global approach to infrastructure development avoids biases and ensures equitable solutions across regions. Prioritising efforts on social media platforms, especially in the Global South where youth and mobile access are influential, enhances interventions against disinformation and supports transparent electoral processes.


== Video record ==
==== Workshop 2a: ====
https://youtu.be/BtyjA6zVC10
==== Workshop 2b: ====
https://youtu.be/REbbY6-ehoM


== Transcript ==
==== Workshop 2a: ====
 
 
'''''Disclaimer:''' This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.''
 
[https://dig.watch/event/eurodig-2024/managing-change-in-media-space-social-media-information-disorder-and-voting-dynamics '''Transcripts and more session details were provided by the Geneva Internet Platform''']
 
 
Giacomo Mazzone:
Okay, thank you for being here. Because we are Swiss in the audience, we are obliged to start in time. I’m Giacomo Mazzone. I’m one of the members of the Advisory Council of EDMO, that is the European Digital Media Observatory of the European Union. And we have with me here in the room Irena Guidikova from the Council of Europe and Aistė Meidutė, more or less, from Delfi. And we have also online with us Paula Gori from EDMO, that is Secretary General of the organization, and Afia Asantewaa Asare-Kyei, that is Director for Justice and Accountability at Open Society. And she is here with us as a member of the Oversight Board. So I think that we can start after the presentation. You know what is the topic of today, I guess, because if not, you were not in the room. But just to introduce a little bit to you, this has been an exceptional year, because it’s the first year in which we have many elections all over the world taking place at the same time, but it’s also the first year in which we have the impact of artificial intelligence used for spreading misinformation and disinformation across the world. So it’s interesting, in the middle of the year, and we have still the worst to come, probably, to make the point and to see what happened in the first months in order to understand where we are in the battle and in the way to tackling disinformation and trying to preserve the electoral process all over the world. I will give the floor to Irena Guidikova, because the Council of Europe is, as we all know, the place where we try to balance the freedom of expression with the integrity of the elections. Irena?
 
Irena Guidikova:
So hello, everyone. I think it’s not the first time that you hear from me. I’m really happy. Thank you, Giacomo, for inviting for the session. I don’t know if my screen is being shared.
 
Giacomo Mazzone:
Not yet.
 
Irena Guidikova:
Oh, wait, sorry. That should be the case now.
 
Giacomo Mazzone:
Nope, not yet.
 
Irena Guidikova:
All right, never mind. So I do represent the Council of Europe, Europe’s oldest and largest organization. It’s an intergovernmental organization of 46 member states. And for some reason, my presentation is not showing. Is it showing? No, it’s not showing. Never mind.
 
Giacomo Mazzone:
I can send it to you. No, it’s not that one. I send you.
 
Irena Guidikova:
The Council of Europe is a standard-setting organization, among other things. And it has recently adopted a guidance note on disinformation, countering the spread of online mis- and disinformation. It was adopted last December. It was prepared by an intergovernmental committee, the Committee on Media and Information Society, where all the 46 member states are represented, along with a lot of civil society organizations, including journalism organizations and others. Now, this guidance note showcases interconnected measures in three areas, and these are fact-checking, platform design, and user empowerment. So, basically, these are the three pillars of fighting disinformation that the Council of Europe is recommending to its member states, and I should underline that this should happen in a coordinated, multi-stakeholder approach, including the users and including non-governmental organizations and industry. Now, if I go one by one through each of these pillars or areas, first of all about fact-checking. Now, fact-checking is essentially a journalistic process and a profession. It’s very difficult to improvise oneself a fact-checker, although we do also have trusted flaggers and citizens that do fact-checking. But primarily, it’s a cornerstone of responsible journalism, and it’s one of the key tools to fight disinformation. There are dedicated, as I’m sure you know, fact-checking organizations, and they need to be supported, both financially but also in regulatory terms, to become trusted actors in the information environment. And states and other stakeholders should ensure their independence, their transparency, and their sustainability. Their independence from any political or commercial interests, that’s clear. Their transparency with regard to the working methods they use, the methodologies, and the sources they use to check whether the facts shared are correct or not. And finally, their sustainability, in particular their financial sustainability, so that we make sure that these organizations do become really professional, that they carry out their role, it’s actually 24-7, and that they do their vital work without any undue interference. Finally, through their own cooperative networks, fact-checking organizations should ensure quality standards, and the states and other stakeholders should have a way to check their effectiveness. Now, digital platforms are also called to ensure that they participate in the effort of fact-checking by either integrating fact-checking into their own internal functioning, but this can be done basically by the bigger platforms, or that they associate with independent fact-checking organizations, and integrate external fact-checking in their content curation systems. The second dimension recommended by the Council of Europe’s guidance note on fighting misinformation and disinformation concerns digital platform design. Now, there are a very wide range of measures that states can undertake to ensure that there is human rights by design, and safety by design, and in fact, this actually probably rings a bell with something that I said this morning. Human rights by design and safety by design are general principles, not just for disinformation, but for many other harmful and illicit activities or content online, including hate speech. So they’re always there. There are requirements with regard to human rights measures for managing disinformation, and for putting in place mitigating strategies.
Design measures for platforms should obviously not focus on content, but actually on the processes through which the platforms judge and decide which content should be suppressed in the first place. But this is rather rare and should be done only in exceptional circumstances, clearly defined by law, or whether a content should be prioritized or deprioritized, promoted, demoted, monetized, demonetized. There are also other ways, apart from these harsh measures by platforms. Okay, right, thank you so much.
 
Giacomo Mazzone:
You have to tell her.
 
Irena Guidikova:
Oh, yes, you move, I’ll tell you where to stop. More, more, more, more, more, more, more, more. Platforms, there we are. The other measures, apart from managing the actual content, that can help alert the users as to the potential risk of disinformation and misinformation. Let me just open a parenthesis, because you see that I’m using misinformation and disinformation. They’re interconnected, but two distinct types of incorrect information provision. Disinformation being more deliberate action, either by any content provider or creator, whether it be a state or another organization, even an individual. And misinformation is more of an unwilling sharing of wrong information. That’s why, to englobe the two, the Council of Europe uses the term information disorder. So apart from managing content, there’s also provision of supplementary information to users, such as debunking, age-related alerts, or trigger warnings, or others that can alert the users to the potential presence of… incorrect content or information disorders. In this context, what’s really important, and this is, I would say, an overarching principle of managing information or countering information disorders, is that the best way to counter disinformation is by the provision of trusted information from trusted sources, the prioritization of independent and professionally produced content and public interest content. It’s not so much by displacing, suppressing, or deprioritizing the wrong content as by the provision of trusted content that disinformation is best managed. User empowerment, and you can move to the next slide, is the last pillar. The guidance note focuses on user empowerment to make sure that users become resilient to disinformation, while warning, and it’s really important, of the risk of fostering narratives that blame the victims, blaming the users for becoming victims of disinformation or burdening them with excessive responsibilities. At the end of the day, it’s for the states, digital platforms, and media to remain primarily in charge of promoting structural conditions for a healthy media ecosystem, and to ensure that reliable and quality information on matters of public interest is abundant and easily accessible online. At the same time, the citizens need to be equipped, citizens of all ages need to be equipped, to discern facts from opinion and reliable information from fabricated myths and untruths. And the more people are able to deal and cope with disinformation, the less we need to worry about them being targets of disinformation. So digital platforms should empower the users, including through systems such as trusted flaggers, at different levels, keeping an eye on linguistic and cultural differences as well. But the empowerment of users goes through education, digital literacy, and other generalistic measures that are beyond the reach of digital platforms and would require cooperation between states, public authorities, media, and educational institutions. So I’ll stop here to let the other speakers take their turn. And of course, I’ll be happy to answer any questions.
 
Giacomo Mazzone:
Thank you. Thank you for this very interesting presentation. Can you leave the last slide, just to situate what we are talking about? Because this is the good transition to the next speaker, that is Paula, because we are shrinking now this horizon that is the larger Europe. We go to the smaller Europe, we can say, that is the European Union. And the European Union has been worried by the spread of disinformation and has made over the years many attempts to try to prevent this kind of problem. Initially, it tried the way of co-regulation, or let’s say self-regulation, through the agreement with the platforms that also the Council of Europe has tried, and the Code of Practice was signed. But then, after the first evaluation of the Code of Practice, it has been seen that this was not enough. And beside the self-regulation, the co-regulation, we are now in the regulation phase. One of the tools that the Commission put in place in order to measure the fulfilment of the obligations of the Code of Practice was exactly EDMO, the European Digital Media Observatory, that copies in its name also the European Audiovisual Observatory that is in Strasbourg. And Paula Gori is the Secretary General of this body that is based in Florence at the European University Institute. Paula, can you explain what the European Union has made specifically for the European election that took place a few days ago?
 
Paula Gori:
First of all, thank you very much for inviting me. Just a technical question, shall I share the screen or will it be shared there, just to know?
 
Giacomo Mazzone:
Yes, let’s try. If it doesn’t work, we have a backup.
 
Paula Gori:
Okay, because it said that I’m not able to do that.
 
Giacomo Mazzone:
You will be empowered soon.
 
Paula Gori:
Great, I’m always happy when I’m getting empowered. But I’m still not.
 
Giacomo Mazzone:
But you have to have a little bit of patience.
 
Paula Gori:
Thank you. All right, so can you see my presentation now?
 
Giacomo Mazzone:
Yes, now yes.
 
Paula Gori:
Great, super. So yeah, as Giacomo was saying, let’s say that the EU started, I would say some years ago, already to deal with disinformation. And as it was previously already said, there is no one single solution to disinformation. It’s rather what is the so-called whole-of-society approach. So there are different pieces of a puzzle that jointly could make the trick. And to do that, and this was already said, what is important is to have not only a multi-stakeholder approach, but also a multidisciplinary one. We come with a very easy example. We definitely need, for example, in research, neuroscientists or psychologists, sociologists, to tell us how disinformation actually impacts our brain, why we share irrationally or rationally, which is the whole, if you want, health status of our society, why we have the need to stay in the echo chambers and why not. So you see, it’s really multidisciplinary. It’s not the usual data analysts and lawyers. It’s way more. And as previously said, we need all the stakeholders involved jointly. And this is why EDMO. So EDMO brings the stakeholders together. So we have the fact checkers, the media literacy experts, the researchers, the policy experts. And what we do is we jointly try to find common solutions or, if not solutions to fight, if you want, the tools that we need. Let’s think about data access in research, which is fundamental. So we’re talking about access to the data of the online platforms and search engines, which is actually, if you want, the last mile that we are missing to completely understand disinformation. Once we get that, it’s gonna be way easier also to tackle it. Now, what we have the privilege of is to have a collaboration with hubs that cover all member states. And as Giacomo was saying, it’s less member states than the Council of Europe, but still quite a good number. And they act locally. And why is this so important? Because as we often say, disinformation has no borders, that’s completely true, but the local element keeps being very important. Not only language, but also culture, history, broadband penetration, media diet, and so on. So it’s really important to have actually also the local dimension always included. We see narratives that enter some countries and that completely do not enter other countries. And these are the factors
 
Paula Gori:
that actually make the difference. Now, ahead of the elections, what did we do as EDMO? Just to first go very quickly to what Giacomo was saying. You know, there is a code of practice on disinformation in the EU, which was strengthened a few years ago, and then the EU adopted the regulation, which is called the Digital Services Act. Just a very important disclaimer here, the Digital Services Act is not on disinformation. The Digital Services Act, the DSA, is actually mainly on illegal content. This is important to highlight, but within the DSA, there is also actually… And the whole main approach of the DSA is that, given that regulating content risks being dangerous because of freedom of expression, let’s go, if you want, by design. So basically, there are risk assessments of the platforms that they have to run regularly. To put it in very easy words, they have to assess if the way they are structured can be actually exploited by malicious actors to share specific content, including, in this case, disinformation. So under the DSA, there is the possibility of translating the code of practice into a code of conduct, and not only signing this code, but also, of course, respecting the principles of this code can be evaluated as a mitigation measure to these risks in a risk assessment. And EDMO is part of the permanent task force in this code. So basically this is a task force that actually helps in implementing the code, very practically speaking, and also in making sure that the code keeps being aligned with the technological developments and, in general, the developments in the field. So having said that, what did we do before the election? So we built on one side a task force that was composed of representatives of all our members of our hubs. And the idea there was on one side to analyze what happened in the national elections, because we could learn something from that. And on the other side to monitor the trend of disinformation in the EU. And I have to say, analyzing the national elections was already very useful, because many things, like, for example, disinformation on how people would vote, so the processes, were something that we were, if you want, seeing coming, because this happened also in the national elections. And this task force also produced for more than a month a daily newsletter with disinformation narratives which were detected in the EU. So very quickly, the recap of what we saw is that indeed, we didn’t see lots of disinformation in the two or three days of the elections, which means everything and nothing, of course. But we saw, of course, disinformation in the weeks, days, months before. And, you know, to actually kind of build an idea, it takes time. So actually dealing with disinformation only ahead of elections makes no sense at all. We need to have a systemic approach to disinformation, because it’s a long-term process, also in how it actually impacts our brain. And there were no big disinformation campaigns, so far, that were basically trying to put into discussion the results or the outcome of the elections. Now, what we saw in the last month before the elections was clearly lots of disinformation growing related to the EU. So basically trying to undermine the EU and the way it works and the things that the EU does.
And as you can see in these slides, we try to monitor also some very broad topics like climate change, which actually is one of the topics that tends to be stable and growing, because it can be very easily actually attached to political agendas. Same for LGBTQ plus. Then you see disinformation on Ukraine, which was used a lot to attack the EU institutions. We saw lots of, like, disinformation around migration and others. Sorry, it’s blocked. Okay, so as I was saying, the war in Ukraine, just examples: for example, it was said that people in the EU are forced to eat insects because of the sanctions the EU is imposing on Russia, and that basically we’re suffering a lot because of that. Climate change, I don’t know how familiar you are with climate change disinformation, but there is a clear trend from old denialism, so basically denying climate change and the causes behind it, to new denialism, so basically disinformation on the solutions. And we saw that a lot, of course, considering the Green Deal that we have in the EU. Migrants here again, especially Ukrainian migrants, but also refugees, but also in general migrants, because there is, again, a discussion, policy discussions and regulations in this field in the EU in this very moment. And here, very important to say this year, this is a topic in which disinformation can clearly trigger hate speech. So this is something to take into consideration, that disinformation can then also, kind of unfortunately, bring to other types of content. And election integrity: as I was saying, we saw quite a lot of disinformation on, like, how to vote. So double votes, or basically saying that you can vote by, I don’t know, by post in countries in which it was not allowed, or stuff like that. And more in general, then we also saw, if you want, specific attacks, like, I don’t know how much you have seen the attacks that were aimed at Ursula von der Leyen, like saying that her family is linked to Nazis and so on. And one very important thing, and now I can end and skip, is that I think Giacomo was mentioning artificial intelligence, which is, of course, something, so I wanted to get to that slide, but very important. But the old type of disinformation still holds quite strongly. I will finish now. Don’t worry. And this is very important to say: AI, generative AI, is doing damage, but the old-fashioned ways of producing disinformation still hold, like wrongly dubbing a video or miscaptioning a picture or stuff like that. So this is something very important to keep into consideration. And last but not least.
 
Giacomo Mazzone:
Thank you very much, Paula. So the European Union, that is smaller than the Council of Europe, spoke more than the Council of Europe. I’m afraid of how long the Facebook Oversight Board of Meta, that speaks for the world, will talk. Afia, the floor is yours. Thank you.
 
Afia Asantewaa Asare-Kyei:
Thank you so much. That was wonderful, the two presentations, but thank you. Thank you to the organizers for associating and giving the Oversight Board the opportunity to share what has been happening in other elections around the world, outside Europe, but what would be relevant and might be similar to the EU and, I guess most importantly, what have the platforms, particularly META, learned globally and how it’s impacting or could impact on Europe. So just briefly a minute for me to just broadly introduce the board. So the board is an independent attempt to hold META accountable and ensure that its policies and processes better respect human rights and freedom of expression. So what does this mean in practice? What it means is that when you feel META has gotten something wrong on Facebook, Instagram and now Threads, you can appeal to the board, and the board is made up of 22 global members. When we started, we had scope over Facebook and Instagram. And recently, due to some advocacy for scope and expansion, Threads has been added. Now we are truly independent, and we have overturned Meta’s decisions on whether content goes up or down around 80% of the time, based on the cases that we have so far handled. The board also makes what we call recommendations, very impactful recommendations on just about everything from how to better protect LGBTIQ plus users to making sure that government requests to remove content are reported clearly. Meta has implemented or is in the process of implementing the majority of our recommendations. To date, if I’m not mistaken, based on the recent data from our implementation committee, we have about 250-plus recommendations, and a good percentage of those Meta has implemented or is in the process of implementing. So thanks to the board, Meta’s policies and practices are much clearer. You know, the company is being much more transparent. You as a user are now told exactly why your content has been taken down and why you have been given a strike. If you have been given a strike, you can also edit your post and flag it as being likely to be removed, to be moved down. So I think this is huge, because the alternative before was often a default to removal of content, which might be harmful or, you know, which might be harmful in a small way. So I think what we can imagine is how many or how much content fell into this category and what that means for public debate, if a lot of content was removed. So Meta has agreed to track and publish data on when governments request policy-violating content to be removed from their platform. And this is big, it’s a big advance for users’ rights and allows people to understand what kinds of content governments and state actors are seeking to remove, especially during election contexts.
So we’ve looked at whether Meta was right to suspend former U.S. President Donald Trump from its platform for stoking violence that led to the attack on Congress, whether, you know, manipulated content of U.S. President Joe Biden, that was made to seem like he was groping his granddaughter while he was sticking an “I voted” sticker on her chest, should be taken down, and how manipulated content should be treated more broadly. We’ve also been looking at whether, you know, content by Brazilian generals calling on people to hit the streets and take over government institutions by force should have been allowed to spread. And as well, we looked at whether a video by then Cambodian President Hun Sen, in which he threatened violence against political opponents, should be allowed on Meta’s platforms. So on the whole, I think we found that the company is trying to grow up and is learning some key lessons, but it still has a lot to do, much further to go, especially when it comes to the global majority. So here I’m talking about the global South and East. So to hone in on the Brazil example that I mentioned, for example, the board investigated Meta’s handling of a very hotly contested 2022 election and found that the company eased its restrictions too soon, because we know that, as I think Paula mentioned, pre and during are important, but sometimes you have to really pay attention to post. This was dangerous and allowed content by a very prominent Brazilian general, who called for people to besiege Brazil’s Congress and to go to the National Congress and the Supreme Court, to go viral. This spread in the weeks before January 8th, when thousands of people tried to violently overthrow the new government, almost in an imitation of what had happened in the US. And so we were wondering why the platform allowed this to spread. So the board has made sure that Meta took these posts down, but we’re really concerned about why this even happened in the first place. Why did they relax their protocols almost two years to the day after the storming of the US Capitol, as I mentioned? So to stop this happening again, the board has pushed META to commit to sharing election monitoring metrics that will clearly show what steps the company has taken and whether its efforts are working or not. This was launched in, well, so they have made the commitment to do this, and it is going to be launched later this year, and hopefully it will be in time for the UK elections. Sadly, it was not ready for other key elections, such as the elections in India and Indonesia, you know, very critical, significant democracies, and even South Africa. So I think looking forward, our experience has taught us how you work with social media companies to make an impact. And we believe that, you know, lessons we have learned go well beyond META and can help set basic standards for other social media companies globally. Earlier this year, we issued a white paper. I don’t know how many of you have had the opportunity to see and read it, but we issued a white paper on protecting election integrity online, which made clear recommendations to companies, not just META, but social media companies. And the founding premise is that, you know, the right of voters to express and to hear a broad range of information and opinion is essential. You can only take down political content, especially around elections, if it’s totally necessary to prevent or mitigate genuine real-world harm, like violence.
If this is not the case, then you have to keep it on, because it is going to provoke and engender, you know, necessary debates and conversations among citizens. And to do this, we believe that the companies must dedicate sufficient resources, I think I saw in the first presentation, sufficient resources to moderating content before, during and after elections, and not limit it to just, you know, immediately during the voting period. And here we are looking specifically at resources to non-English speaking geographies, so that, you know, they can moderate content, that they understand the context, they understand the cultural context better. So how do you allocate more sufficient resources to content moderation? And then we’re also asking the companies to set basic standards for all elections everywhere, and not neglect, you know, dozens of elections taking place in countries that might be less lucrative markets, where the threat of instability is sometimes often greatest. And then, you know, never allow political speech that incites violence to go unchecked. With quicker escalation of content to human reviewers and tough sanctions on repeated abusers, they can prioritize this and make sure that any kind of political speech that incites violence is really checked. And then just guard against the dangers of allowing governments, I’m just ending. Lastly, we are asking them to guard against the dangers of allowing governments to use disinformation or vague or unspecified reasons to suppress critical speech, because sometimes the spread of disinformation is not just by people, but by governments as well. Thank you.
 
Giacomo Mazzone:
Thank you, Afia. I see that you use your prerogative of representing 192 states in terms of timing.
 
Afia Asantewaa Asare-Kyei:
Thank you. I’m sorry, but I thought I should say it broader. Thanks.
 
Giacomo Mazzone:
The paper you mentioned is here in case, very useful and very interesting. I will raise question, and I want to leave some time for question. So, if we now respect the geography, so you have 30 seconds as a Lithuanian.
 
Aistė Meidutė:
No, no.
 
Giacomo Mazzone:
No? Dutch, you have 45 seconds.
 
Aistė Meidutė:
I try to be fast, but I’m not that fast, unfortunately.
 
Giacomo Mazzone:
No, but for the sake of the debate. Thank you very much. Aistė, please, the floor is yours.
 
Aistė Meidutė:
Yes, one moment, I will try to share my slides. Do everybody see that?
 
Giacomo Mazzone:
Yes. Yes. We have just a little screen.
 
Aistė Meidutė:
Great. So, first, let me introduce myself very shortly. I’m an editor and lead fact-checker in Delfi Lithuania’s fact-checking initiative, Melo Detektorius. Delfi Lithuania is the biggest news portal here in Lithuania, and we were established as a fact-checking initiative in 2018, and since then, we achieved quite many things. We became signatories of the International Fact-Checking Network, as well as the European Fact-Checking Standards Network, and we’re a happy member of the EDMO family, too, as their fact-checking partner. Today, I’m going to talk a little bit about how we try to save elections from disinformation and disinformers. Of course, in Lithuania, it was pretty challenging to do so, because just with a couple of weeks’ gap, we had presidential elections as well. So, we had to tackle both these huge events at one time, and I have to say that the presidential election, of course, stole a bit of the spotlight from the European one. So, we witnessed more disinformation and misinformation narratives related to the presidential elections, rather than the European election. Of course, in such a challenging time, you have to work with different approaches. And our approach was to use prebunking and debunking as well. One of the things that we did was to increase the number of fact checks that we normally produce, tackling especially European elections related content. So everything that is related to politicians as well as major decisions and the European agenda as well. One of the most important things, I would say, that we’ve done during this period: we increased fact-checking content in the Russian language. So we also have a separate fact-checking initiative in the Russian language. So we try to produce as much quality content for Russian-speaking audiences. It’s no surprise that, of course, ethnic minorities are one of the main targets of disinformers, in Lithuania especially. So since the full-scale war in Ukraine started in 2022, in Lithuania, a lot of Russian channels, Russian television channels were blocked. And, of course, those people that used to consume this content regularly were left with, well, I wouldn’t say no alternatives, no alternative content, but with less content that they used to read, they used to consume. So one of the tasks that we took was to create more quality content in the Russian language so they would be served accordingly. And especially with the huge flow of war migrants from Ukraine, this need increased even more. The other thing that we did was to partner with the European Fact-Checking Standards Network, where we, together with 40 partners from different European countries, created the Elections24Check database. And to this day in this database, there are more than 2,300 fact checks from 40 countries related to European Union topics. So it’s not only European policies or the European agenda, but it’s also major crisis events like the war between Israel and Hamas, or Ukraine. And why this database is important and why this approach is super important is that researchers have a possibility to use this content, to use the statistics and to analyze the whole disinformation scene, what was happening before the election, what was happening during the election period and what is gonna happen post election. So the project is still ongoing and we already have quite a lot of data collected and also narratives published. The other thing that is worth mentioning is that we try to engage our audiences in kind of a critical thinking assignment, showing them a TV show called Politika in Lithuanian.
In English, it means catch a politician and it has double meaning, it’s kind of a wordplay. It means catch a politician, but also know the politician, understand the politician and by understanding a candidate, a politician, we mean that you have chance to understand how politics work. how the basic thinking behind the political agenda is constructed, and also think very critically whether all those promises that the candidates are promising are real and easily achievable. In this TV show, we invited an expert in the political field to comment on what the candidates are saying. So we had a couple of shows before the presidential election and also a couple of shows before the European election as well. And the last effort that we made countering disinformation and misinformation before the election period was to produce social media videos, mostly talking about media literacy, especially about how to recognize generated AI, generated content, and basic kind of suggestions how to consume information more efficiently and in a safe manner, checking the sources and trying to question each piece of information that you find online. So I promise to be brief. Let’s connect. If you have any questions personally for me or Biba, we are, as a media organization, always very eager to communicate with our audience as public. And especially us as fact checkers always wait for, I don’t know, suggestions. How could we improve what we’re doing? Because it’s not a fight that you can take alone. You need many people to do that and many inclusivity as well. Thank you so much.
 
Giacomo Mazzone:
Thank you, Aistė. So now we have some time for the floor and some questions. Unfortunately, we have to come here because… The mic, there is no mic in the room.
 
Irena Guidikova:
You can repeat the question.
 
Giacomo Mazzone:
It depends if it’s a short one or if it’s a statement.
 
Audience:
It won’t be a statement.
 
Giacomo Mazzone:
No? Then I can repeat if you’re short. We sit or? No, no, no, stand.
 
Audience:
No, no, I won’t stay here, don’t worry. So my name is Dan, I’m from Israel, and we’re facing a very big problem with disinformation in our country, especially in the current conflict, but also before, with many elections like you have this year. And I want to first of all thank you for this very interesting panel. My first question is, I think for Irena and for Paula: you talked about collaborations and policies of EU states, European countries, working together with databases, with policies. What can you offer a country, or what do you think a country that is a non-EU member, a small country that cannot work with other countries, that does not have a common language and news sources with other countries, can do? What can we learn from the EU strategies and policies, of the things you implemented together? And I think your slides were very interesting for us, for thinking, also Paula, for thinking about some systematic approach. And the other question I have is more for the Oversight Board, which was very impressive, but I wanted to ask: what does it help to discuss or take down content, sometimes weeks after it has been promoted and published? If it doesn’t happen in 24 hours, 48 hours, it isn’t really worth a lot. I don’t think the Oversight Board’s job is to look at pieces of content. I think it’s to do oversight and to see that, in the policy and the strategies, there is accountability at the platform. And we see this a lot. And also one last question, which also has to do with small countries. We hear a lot about resources going to elections in big countries. You know, the U.S., India. What about elections in small countries? When we try to ask META or other platforms, what do we do? What do they do for elections in small countries? We don’t really get any responses. So that was my question. Thank you.
 
Giacomo Mazzone:
Other question? You see? You can’t repeat it. So other question from the room? Other question from remote? Yeah, please.
 
Audience:
Yes, I had a question for Ms. Gori and Ms. Meidutė as well. I got the feeling after the last European elections that there was a sense of relief that nothing extremely big like, for example, Hillary Clinton’s emails happened during the election that really seemed to have swayed things one way or the other. Do you think this feeling is justified? Or have you been able maybe to compare misinformation, disinformation between the last election 2019 and 2024? And do you see a rise? Or is it getting better or worse?
 
Giacomo Mazzone:
Thank you.
 
Irena Guidikova:
I also have a question.
 
Giacomo Mazzone:
For yourself?
 
Irena Guidikova:
No, for the other panel.
 
Giacomo Mazzone:
Please.
 
Irena Guidikova:
Yes, I was wondering about the oversight board and to what extent your recommendations are compulsory or, I mean, followed. Yes, because obviously they’re not compulsory, but to what extent they’re followed. And to Aistė, I had a question about your outreach. Because beyond the timing of the debunking and whatever alternative narratives, it’s also the reach. Are you able to reach out sufficiently wide, because disinformation usually travels wider? And what are your outreach strategies?
 
Giacomo Mazzone:
Thank you. Then I also had some questions for some of the speakers. One question is to the Oversight Board again: if they see a contradiction with the regulations that are in Europe, because they are based in the U.S. as a company. In the U.S. the regulation is the First Amendment, so it’s less cogent than it is in Europe. Does it make a problem for META to comply with the European regulation while they have to comply with the U.S. regulation? That’s my question. Then there are others, but I don’t know if there is time; we will raise them later. So, starting to ask you to respond. Paula, do you want to respond to the colleague from ISOC, for instance?
 
Paula Gori:
Yeah, so I have that and the second question. So, I think, of course, the EU set this whole effort as EU effort also to kind of avoid discrimination within the EU market because also legally some countries are already taking some different, if you want, paths. But there’s lots of what we do, which can, of course, be put under discussion also, that I think is something that can be applied everywhere. Like, for example, working more on monetizing content, so making sure that platforms don’t monetize on disinformation. As it was said earlier already, make sure or insist or advocate for the fact that the platforms have content moderation teams in the country, in the local language. Also, whenever a country has a very peculiar language, there is also somehow Sometimes, this information is less foreign, is more domestic, because it’s difficult if you want to enter, because you have to get used to that language. But I guess in a country like Israel, English, this information still enters quite a lot. So again, try to understand what comes from where and like, who are the actors behind. Then fact checking, but independent fact checking. And there, I mean, it was mentioned the European Fact Checking Standard Network, but there is the IFCN, which is international one, and fact checkers can apply if they are there to certain standards. So again, this is something which is actually quite broad. Then research. On research, the forces are actually joined globally, not only at EU level. Of course, there is the DSA and the access to data that is quite strongly imposed by the DSA. But for example, one thing that is very important in research is to join also basically financial resources, but also technical resources. Not all universities in the world are actually technologically equipped to deal with all this data. And if I’m not mistaken, at least in Israel, you have quite advanced tech universities. So maybe actually you can help and like, it’s like a do-out test. So you can help with your technology, other universities, and they may help you in research on other fields in this information. And of course, media literacy initiatives, which I also mentioned. Edmo will be publishing soon guidelines on like how to build a good media literacy campaign, because the point is not only to implement media literacy initiatives, but they also have to have pedagogical standards, otherwise they’re basically useless and without impact. And this is, again, something which is not, I mean, only EU related, it’s something that is more broad. So I would say actually in the whole discourse, it’s not a matter of, I mean, as I was saying earlier, it’s the local element is very important. But as a matter of reflection and policy and activities to be implemented, I think we can have a more global approach. And on the relief, I personally am. not relieved. Because I think as I was saying earlier, it’s not because of those two, three days, there was no major incident, that actually this means that we didn’t have a problem. And as I was saying, it’s something I mean, disinformation. It’s not only political, but it is political, it starts long way before also, for example, with issue based advertising for which at the even at EU level, there’s still no agreed definition. So I wouldn’t be so positive regarding 2019. We’re still trying to understand how things were. But let’s be honest, technology has changed completely in these years, and also policy. 
So you would be comparing things that are, if you want, structurally different in any case. But clearly there will also be analyses to understand whether things went better or not. I think those were the two questions addressed to me, so I will stop here.
 
Giacomo Mazzone:
Thank you. Aistė, I think you were asked as well.
 
Aistė Meidutė:
Yes, to comment on how everything has gone compared with previous election seasons: I would say that this time we have much more tension, and we are definitely a more polarized society, so it's easier to manipulate us. That's why things are definitely more difficult than they used to be. We notice this especially in Lithuania, which is definitely a target because of its proximity to Russia, for instance. A huge part of Lithuanian society has this deep fear of going back to the Soviet Union, of experiencing war, and it's easy to manipulate these emotions and easy to scare us. As for why there hasn't been any major boom before the election: at probably every meeting, European fact-checkers were discussing this and preparing for a major boom, for AI-created false information on a scale that we weren't prepared to tackle in time. It didn't happen, but that doesn't mean we're safe. When we think about disinformation, it's not really about those major explosions. It's about sowing doubt, and that is a slow-working process, but it's still faster than those who search for the truth, the fact-checkers. There are many more of them than there are of us who try to debunk and explain things. She asked about the reach. Well, it's hard to say. I'm pretty sure that the reach of disinformation is sometimes much, much higher than the reach of fact-checks, and that definitely hurts. It's not an easy topic for us. We try our best, and being part of a major media outlet in the country, I would say we manage to reach quite a good number of people. The problem is that whenever we talk about fact-checking, we realize that society imagines fact-checkers and fact-checking in a very different way. I wouldn't say it's a novel practice in media any more, but in Lithuania, for instance, not many people yet know about fact-checking and who the fact-checkers are. So fact-checkers are still those kinds of mythical creatures that need to be understood. But I hope that we're on the right track.
 
Giacomo Mazzone:
Okay. Let me be fast, because Afia has a lot of questions to answer.
 
Irena Guidikova:
Yeah, just to say that I totally agree with you. The crux of the matter about citizens spreading and believing fake information and fake rumors is that they no longer trust public authorities. So, in fact, the best way to fight disinformation is to restore trust in public authorities, and that means really rethinking democracy, revising and revisiting democratic processes and institutions together with citizens. And just to reply about Israel: Giacomo always jokes about the Council of Europe being a relatively small organization, but the Council of Europe is actually becoming a more and more global organization. All of our recent instruments, treaties and conventions are open globally, including the one on AI. There are five observer states in the Council of Europe, and Israel is one of them. So you can participate in all of our intergovernmental committees, including the one that produced the disinformation guidance notes. So don't hesitate. As for civil society organizations, that is in fact a bit of a grey zone: yes, civil society organizations can participate, and they can be from Israel. Maybe it's better to associate with an international civil society organization, and then, this way, yes. And we also have a South Programme, with EU co-funding, which is active in Israel as well. So we have various channels.
 
Giacomo Mazzone:
Okay, Afia, there were many questions for you. Can you be brief, because we are already… Luckily, the Swiss member of the room is not here anymore, so we can run late. I'm Italian, so you can go ahead, but not for too long, please.
 
Afia Asantewaa Asare-Kyei:
Sure, I have three questions.
 
Giacomo Mazzone:
No, no, you have answers to give us, not questions.
 
Afia Asantewaa Asare-Kyei:
I had three questions, so I'm going to take them all at once. For the gentleman who asked what it matters when our cases are decided: we make three types of decisions. There is the standard decision, which is the in-depth review of META's decision to remove or allow content, and which of course includes our recommendations. Then we have what we call a summary decision, which is an analysis of META's original decision on a post when the company later changes its mind: the board selects the case for review, we let them know, and they say, oh, sorry, we made a mistake here, it was an enforcement error, and we are able to say, okay, rectify it quickly. And then there are the expedited cases. This is the rapid review of META's decision on a post in exceptional situations with urgent real-world consequences, such as the two cases related to Israel and Hamas that we decided late last year, in October and November. So those are the three: standard, summary, expedited. We do have a mechanism to really fast-track, and that expedited process is 48 hours: within 48 hours we have to make a decision. Then, to what extent are our recommendations followed? Our recommendations are binding on META, and META must implement them. META has up to 60 days to respond to us and to update us on what they are doing in terms of implementation. We have an internal implementation tracker and an internal implementation committee, because it would make no sense if our recommendations were not implemented; we might as well not exist. So yes, there is a seriousness on our part, and I believe on META's part as well, to implement our recommendations, and we track them. We know how many have been implemented fully, how many have been implemented partially and how many are still to be implemented, and we have regular meetings with them to get updates. And then the contradictions. You are right that META is an American company, but it is a global company as well, a company with global reach. Here we are talking about the most powerful speech regulator in the history of humanity. So yes, they have to respond to and respect US regulations, but also the EU's. Right now I know that internally they are putting mechanisms in place to implement the DSA and its rules for social media platforms and companies. So, just quickly: yes, it is an American company, but it is also global. Under the US First Amendment, a lot of things are less stringent, while the EU is more stringent, and META has to respect both sets of rules and regulations.
 
Giacomo Mazzone:
Thank you, Afia, for being brief. Two final comments on my side, and then I will give the floor for the wrap-up. I have one piece of good news and one piece of bad news. The bad news is for Afia. Afia, I appreciate your effort, and I have seen with great interest the document about content moderation for elections. The problem is that, for instance, for Estonia, Facebook META has three native Estonian-speaking persons working on that language for all of Europe. And you have 11 for Slovakia; in Slovakia we had a lot of trouble in the last national elections. Nine in Slovenia, and in the report given to the European Commission I don't see any for Lithuania. So I think there is a lot to work on on your side. The good news, answering the earlier question of why it didn't happen so much in the European elections, is this: you have to remember that social media can make the difference when elections are tight. With a proportional vote, you can only influence the tone and set the agenda; you cannot swing the vote. But when it comes to UK elections, where some constituencies are decided by a margin of a few dozen votes, or US presidential campaigns, as we have seen, they can make the difference. So I expect that in elections with a different voting system the attacks will be different, and their scale will be higher than what we have seen. Now, sorry for closing so abruptly, but we have to try to summarize what has been said.
 
Reporter:
Yes, thank you very much. I'm just here to provide a wrap-up, because we need a broader consensus on the final messages. So I'll try to sum up what has been said, and if there is any objection or anything to add, please tell me. I also wanted to point out that you will have time on the shared platform to add any other comments or anything you think was missed during today's session. I'll start from the context. Not just the European elections but, generally speaking, the whole electoral year has seen the presence of disinformation, even though there were no huge outbreaks in the last few days before the elections. This does not mean there is no problem, because we actually live in a much more polarized society than just a few years ago, and this opens the way to short- and long-term manipulation techniques that can be even more pervasive than the outburst of specific kinds of disinformation. AI has an impact, but traditional methods are still really important in spreading disinformation. Generally speaking, the right of voters to hear and express political content is essential, but political content must be checked and constantly monitored. What are the possible solutions? A multi-method approach has been proposed that works on independent and transparent fact-checking; international collaboration, especially on demonetization and research; digital platforms sharing databases and data for research; and user empowerment through education, critical thinking and media literacy. It has also been proposed to turn the code of practice into a code of conduct, to make it policy and not just a set of recommendations, and to pursue effective collaboration with platforms to make an impact via consistent recommendations and implementation monitoring. There were also a couple of proposals to produce voter-friendly communication on information and misinformation through, for example, TV shows and media content. Finally, the general approach needs to be multi-stakeholder and multi-disciplinary, with multi-level governance from international to national, regional and local authorities, and multilingual, and it should pass through the creation of standards that can be globalized, especially for the Global South. I hope that everything is clear. I am sorry if I have been too quick, but if anyone has objections, please let me know. Otherwise, you can comment later.
 
Giacomo Mazzone:
Thank you. And with this, we close.
 
Paula Gori:
Giacomo, sorry, if I may, just two things. One is on the code of conduct: it is already foreseen at the policy level that the code of practice becomes a code of conduct, so it is not something we are proposing. And could we add a point on the fact that we need long-term sustainability from a financial point of view? It was mentioned for the fact-checkers, but civil society organizations, researchers and all the other people working on disinformation cannot do it for free either, and so far, honestly, we do not see a solid and sustainable business model for everybody here. We saw it clearly also from Delfi's presentation: the risk is that, in the end, fact-checkers work for free, and civil society organizations as well.
 
Giacomo Mazzone:
Okay, perfect. For the platforms, that is. We have to close here; there are a lot of things left. Afia, you are lucky that we have to close, because the questions were piling up for you. Thank you very much, everybody, and we will continue the discussion in the next part of the session, which will start soon. Thank you.
 
 
 
==== Workshop 2b: ====
 
 
'''''Disclaimer:''' This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.''
 
[https://dig.watch/event/eurodig-2024/managing-change-in-media-space-social-media-information-disorder-and-voting-dynamics-2 '''Transcripts and more session details were provided by the Geneva Internet Platform''']
 
 
Vytautė Merkytė:
So hello, everyone. Welcome to the second half of the workshop. Over the next hour we're going to be speaking about managing change in media space: social media, information disorder and voting dynamics. My name is Vytautė Merkytė, I'm a journalist at Delfi, and I'll be your moderator. And we have a lovely panel of specialists here who are eager to share their thoughts, their information, everything they have. Some of them are online and some of them are, as you can see, present here. So I'll start by introducing the people who are online. And since this is an international event, obviously, there's going to be this charming element of not knowing how to correctly pronounce someone's name. So I'm very, very sorry if I butcher your names. Since my name is Vytautė, I'm very used to having this happen to me. So our first panelist is Dr. Tilak Jha. He is an associate professor at Bennett University, India. And please correct me if I pronounce your name in the wrong way.
 
Dr. Tilak Jha:
No, it’s perfectly fine.
 
Vytautė Merkytė:
That’s a surprise for me. Thank you. So our next participant who is online is Mr. Gabriel Carson. He is Secretariat Support at African Parliamentary Network on Internet Governance. Do we have him online? No. Oh, OK. And did I butcher your name?
 
Gabriel Karsan:
All right. So how much time do I have?
 
Vytautė Merkytė:
Oh, no, no, no. We’re still introducing the people. So if I understand, Mr. Gabriel is not online. OK. Thank you. Thank you. Thank you. So next to me, we have Dr. Viktor Denisenko. He is Associate Professor at Vilnius University. On my right, we have Ms. Ieva Ivanauskaitė. She is Innovation and Partnership Team Lead at Delphi. And over here, we have Aistė Meidutė. She is representing Lithuanian Counter Disinfo Project, DigiREST. So the way it’s going to work here, we’re going to have each panelist present their talking points, and they’re going to have around six minutes to do so. Afterwards, we’re going to have a discussion and we’ll be awaiting questions from the audience and also from the people who are online. So I think we can start with Professor Tulak, who is online. So are you ready to present your talking points? Yes, yes. Please go ahead.
 
Dr. Tilak Jha:
So first of all, thank you so much. I think it's an immense privilege to be part of this fantastic discussion. Let's start the debate today with elections: India just had the world's biggest and possibly most comprehensive election exercise, with almost a billion eligible voters. Before we get into the details, let me say a few things. What we are essentially talking about is two words that have become very interesting in current times: myth and truth. These AI tools and misinformation campaigns do not always have a negative impact; at times they have a very positive impact, including in the election process. But what we do see on a larger scale is that truth has become a casualty. That's the first thing. And I think we are living in an era of TikToks and Instagrams, of the fragmentation of politics, with the far right coming up in a very significant way in many parts of the world, including in Europe. Quite a bit of irreverence, I would say, has become a trend in the younger generation, the social media generation in particular: a tendency to be entertaining and rebellious. And that is a clear sign of a drift towards some sort of illiberal rule, if I may say so. Democracy is technically becoming more direct, in terms of people's ability to comment on, share and respond to things being said. But at the same time we often see biases coming from both sides, and this is the real tragedy of misinformation, including at a time as crucial as elections. Speaking of elections: just a couple of days ago we learnt that at least two people elected in the recent European Parliament elections, a politician from Cyprus and a Spaniard, have hardly any political experience and not much higher education. Their primary qualification lies in railing against the political elite of their country, taking positions that mainstream politicians would find very difficult. And this is not happening in just one part of the world: it has happened in America and in many other countries, including India. What we see is that at times it is not about left or right; it is that politicians and stakeholders, when something with stakes as high as elections and getting to power is involved, are willing to cut corners, and that is where all the election-related misinformation comes in. The Indian election of 2024 went on for roughly six weeks, and it was the first election in which AI was deployed at this scale; there was not much AI before, so this was the first such election in that sense. What we initially saw were fairly innocent videos, in which some politicians would use the technology to personalize their campaigns, but over time it slid towards deepfakes, some of them really controversial and creating issues.
There were some Bollywood stars who were caught up in this, found in deepfakes of videos that were not theirs, and thereafter the Election Commission also started taking note. And it was not just celebrities from Bollywood; it was happening on both sides, from the ruling and the opposition parties. I would just like to point out that there is a fact-checking website, which I have looked at in detail; they have found that… Yeah, I think I have been hearing voices.
 
Vytautė Merkytė:
Yeah, that's usually not a good sign, but I can assure you we all heard them. I'm very sorry for this situation.
 
Dr. Tilak Jha:
I'll cut the story short to fit the six minutes I've been allotted. In general, what we see is both positive and negative things. Political parties, the BJP, the ruling party of Prime Minister Modi, the opposition and of course the regional parties have all been using information, misinformation and AI to a great extent, at times to speak in regional languages. For example, the BJP has been at the forefront of having the Prime Minister speak many regional languages. This was not easily possible earlier, but with AI it has become fairly easy. There was one dance video with the Prime Minister dancing; I think the Prime Minister himself shared that video and commented on a light note that, well, I also found myself dancing for the first time ever. So those things have happened, but there have also been some videos that were very offensive, and legal action has been taken, as the government made clear it could be. Some have also found it very easy to escape the law because they cannot be caught. At least one user who shared a video of the opposition politician Mamata Banerjee, in which she was again shown dancing, told the news agency that got in touch: I cannot be tracked, and I am not going to take down this video. One of the videos was really offensive, showing her using a sort of remote control to burn down a hospital. So positives and negatives have both happened. What we do see is that AI has done a lot of micro-targeting and personalization, and quite a bit of misinformation has been used on both sides. In this election, the opposition and the government were equal victims. One website called Logically Facts did roughly 224 fact-checks. In their report, they said that almost 93 of them concerned claims against the ruling alliance, and roughly 46%, almost 103 of the 224, were against the opposition. So both were targeted very significantly. And of course there were quite a few AI deepfakes and other things. The Election Commission also found itself at the receiving end initially, though there was not much AI content in this misinformation; it was hardly around 4 to 6 per cent, not much. And even when it was ordinary misinformation, people would claim it was AI when it was just normal editing. At times politicians have claimed that a video was fake, but fact-checking organizations have found that the video was real; that sort of misinformation has also happened. So trust has become a casualty, and I think that has been the biggest casualty, not just because of AI but in general. The declining trust in news agencies has created a general distrust, a general lack of authority, at least of an authority in which people have faith. And this failure of the liberal political setup has definitely pushed misinformation, and the use of AI for it, much further.
One senior Election Commission official went so far as to say: we simply do not have the means to keep track of it. All we can do is complain to the social media platforms, and if they say that this is in line with community norms, or if they take time, there is not much we can do. By the way, AI has also helped in voter education and in generating engagement. But if we compare whether the benefits or the harms have been greater, it is the other way around. So these are some of the contours in which we can see the Indian elections: a lack of trust, a lot of fake news and misinformation, and a whole tendency to skew public perception, influence voter behavior, and even manipulate election outcomes. Thankfully, it has not come to the point where we can say with any certainty that it actually manipulated the result. But the jury is still out on whether AI and misinformation campaigns really affected the election. There have been reports since the election that point to this being a factor in at least 5 to 10 per cent of constituencies, especially in populous states and states where literacy is lower. For example, in Uttar Pradesh, the largest state of India, there have been reports from at least some cities which point out that misinformation did play a significant role. So this is the contour in which we need to see the Indian elections, and I look forward to questions to take it further.
 
Vytautė Merkytė:
Thank you so much. And I just want to remind everyone that this is a very, very special year: this year around 4 billion people are going to the election polls. India, one of the biggest democracies in the world, just had its election, and there are also elections here in Lithuania and in the States. So around 4 billion people either have been or are going to be exposed to the disinformation that circulates during elections. Thank you so much. And I believe I can see Gabriel Karsan on the Zoom call. Are you ready to speak?
 
Gabriel Karsan:
Yes, I’m available.
 
Vytautė Merkytė:
Okay, so please present your talking points.
 
Gabriel Karsan:
Thank you very much. My name is Gabriel Karsan. I am Secretariat Support for the African Parliamentary Network on Internet Governance. Briefly, what we do is empower parliamentarians in Africa to understand internet governance as an ecosystem, as a means to influence policy and create further understanding. I would like to start with a simple reflection on what social media and social networking are, because for countries like Tanzania, developing countries like India, and Africa, where we have been privileged to leapfrog in localizing social media, there is a difference in how we relate to the system. Social media itself, social networking, is about bringing communities of people together. The internet is characteristically open and decentralized, and social media is a use case drawn on how communities align. For a country like Tanzania, with a socialist background, a combination of almost 120 tribes and two nations coming together to coalesce, we do have some parameters of integration, which can also be viewed in the higher abstraction of social media. When it comes to voting, frankly, in Tanzania we have gone through democratic processes backed by the classic Greek mechanisms, and voting hasn't changed much in its forms: whether online or in a simple ballot box, the concept of voting has always been the same. And when we have mixed it with social media, I think it has still been highly principled in how our community views deep representation at the central and local level. But beginning with the 2015 election, where we had higher penetration of digital tools and people understanding social media, we saw political parties engaging with social media as a means to share their campaign objectives, and not much suppression of the opposition happened; it was an equal space, in the belief that we were a people still gaining digital skills and an understanding of what social media is. I think we're having technical difficulties again. Hello?
 
Vytautė Merkytė:
I'm sorry, can you wait for a second? Can you hear us?
 
Gabriel Karsan:
Yes, I can. I do hope I'm audible.
 
Vytautė Merkytė:
So you can continue.
 
Gabriel Karsan:
Yes, thank you very much. So, as I was saying, in terms of our understanding of the technical space, social media has simply been used as a tool for representation in our communities. But ever since the 2015 digital transformation, when the government provided a lot of incentives for young people to improve access and what it meant to be on social media as a tool to embrace democratic values, we saw the impacts that came without a balanced understanding among people. This was a source of misinformation, and that misinformation has not been particularly political in theme; rather, it has been an era of people lacking the skill sets to understand the power of social media. It has been a representation of the rhetoric that happens on the ground, a sort of natural coalition around seeing social media for what it could be socially. In terms of the voting procedure, though, it has been not so much a tool of oppression as a tool of deeper engagement, especially in the 2020 election, where, with improved digital transformation, accessibility and affordability, many more people could express themselves in terms of representation and could also use social media as a form of constant feedback. And this is what representation in a democracy has been. So for our community, the angle of voting and representation has aligned well with social media as a tool. But looking at what is happening now, as we go to elections next year, I see a different rise in the political use of social media to influence people, especially in our countries, where most of the infrastructure is highly controlled by the state. This is by design, because we want to push for further inclusion in the social structure; but still, who regulates the regulator becomes a question for all of us. Young people are engaging, young people are speaking, but the problem is that there is a gap in the population dynamics: most of the older generation understands social media as a single, monotonous channel, whereas we young people see it as a cultural shift. And that is the balancing we need to do, because in the end we are still a representation of democracy at the very ground level, which is the decentralized nature of the internet that we see expressed so strongly in social media. With these principles and parameters falling into line, I think it still comes down to one thing: informed policy. Informed policy for a very dynamic and engaged population that actually understands digital skills as tools to help them represent themselves. But as I said, voting in the end has never changed its form or nature. We still have that secrecy, that dignity of holding one's vote in the ballot box. The confidence question arises online, with digital, interoperable and open systems, because of their regulatory nature. But on the ground, in terms of what people understand, I think that dynamic has been balanced, and Tanzania has been quite exemplary in that matter. I would also like to add that a conflict of interest we now have is that most social media has been created carrying the bias of its creators.
It is highly Western- or Eastern-centric, hence the need for localization and homegrown solutions. And we cannot be blind to the dynamics that are changing now. There is a big geopolitical and geotechnological contest happening between China and America, and we as Africans are caught in the middle. Unless we do a lot of work on understanding and owning the infrastructure that we need, then even the use case of voting via social media may still be influenced and may not live up to basic principles. Hence localization, homegrown solutions, and understanding the context, uses and culture of a people are what have created a great dynamic shift in how we have balanced the dynamics of voting online and using the online channel as a representation of participatory democracy. Those are my thoughts for now. Thank you.
 
Vytautė Merkytė:
Thank you so much. And I’ll turn to the panelists who are here live with us. Dr. Denisenko, could you please share your thoughts with us?
 
Dr. Viktor Denisenko:
Okay. Of course, we can see that every region has its own challenges when we're talking about disinformation, propaganda of various kinds, and new technologies, including AI. I will try to talk more from the perspective of our region. In our region, I would say the main challenge is the geopolitical situation. We are living next to Russia, a state which a few years ago began open aggression against another country. We are living next to Belarus, a close ally of Russia. And in Lithuania, and in general in this region, in the Baltic states and in Poland, it is not the first year that we have been talking about information warfare and its challenges. I covered this topic for the first time as a young journalist 18 years ago, and I was not the first to talk or write about it. So for us it is not a new challenge; it is part of our reality, and this reality is also changing. Before Crimea, or even before 2022, we talked more about propaganda warfare, a war of narratives, information warfare in terms of psychological warfare. Today we are also talking about a kind of hybrid influence, where together with information warfare we have elements of physical or kinetic activities, including in our region, in the Baltic states and Poland. So it is a big challenge, a challenge for our security. And in this context there is the question of media literacy, and I understand media literacy in broad terms. We are talking not only about the ability to recognize fakes, disinformation and propaganda, but in general about how to use information: how to figure out which sources are trustworthy and which are not, and how these information warfare activities can affect society, and so on. In this situation, I would say that media literacy is a crucial thing. And in Lithuania we have a kind of paradoxical situation, because I could say that in Lithuania we are lucky: the authorities recognize that the challenge of information warfare exists and that we are, in fact, in this tough situation. We also have the political will to do something about it. A lot of non-governmental organizations work in this field. The media support media literacy, and in our media we have a few fact-checking initiatives, which is a very good thing. And the main point, which we have been discussing in Lithuania for the last 10 years or even more, is that we need to put media literacy into the schools. And here, of course, we have a discussion about what it should look like. Should it be a separate course for pupils, or could it be integrated into existing lessons and courses? The paradox is that, in general, we have political consensus in Lithuania. If you ask any politician whether we need media literacy as a course in schools, he or she will say: yes, of course, it's quite obvious. But I still cannot see the implementation of this political will, because every time we try to talk about practical implementation, more problems appear. First of all, we need teachers; it should be part of education reform. In Lithuania there is a general lack of teachers, and if you start preparing teachers for this course today, the first teachers will arrive only after four years, with a bachelor's-level university degree, and it will still not be enough.
So my point is that even in countries where an understanding of the challenges and of the importance of media literacy exists, there are still problems with implementation. I think I will stop here.
 
Vytautė Merkytė:
Thank you so much. So we have touched on elections, on media literacy, on what can be done and on what is actually not being done. And now it's time to turn to these two lovely ladies, who will share some practical information on what they know and on what fighting disinformation actually looks like. So, Ieva Ivanauskaitė, if you could go first.
 
Ieva Ivanauskaitė:
Yeah, just let me share my screen. Apparently I have some sort of system restrictions, so I know that the organizers have my presentation; if you would be so kind as to show it on the screen, that would be really helpful. While they're doing that, I wanted to say that what I am about to say will be a smooth transition from the first part of the workshop, workshop 2a, and from what Viktor has just said, because I'm going to be talking about solutions, and specifically the solutions of the organization I represent. That is Delfi, the largest online news organization in the Baltic states and the most read online news media source in Lithuania. We have been working quite hard on anti-disinformation measures ever since 2017, when our first project related to countering disinformation was born. Since we're not seeing the slides, I'm going to try to visualize them for you. On my first slide you would see our specific initiatives against disinformation. We have three main areas: fact-checking tools, educational content, and collaborations. When it comes to fact-checking tools, we have a fact-checking department, whose lead is sitting right next to me, called Melo detektorius, or Lie Detector in English. We belong to many collaborative networks that unify fact-checking organizations globally and at the European level, and thus we are able to make an impact not only in Lithuania but also beyond. When it comes to educational content, we have specific content and tools with which we target youth audiences. For example, under Melo detektorius we have a campaign of media literacy videos on TikTok and Instagram, where we explain, very briefly and in a very simple manner, specific disinformation trends on topics of everyday relevance to users. For example, there was one video where we explained what a little sticker of a frog on a banana means. You would be surprised: it was a very trending disinformation narrative for such a simple thing, but we thought it was relevant. We made that video and it got more than 100,000 views on TikTok.
 
Vytautė Merkytė:
Which is a lot for Lithuania. A lot.
 
Ieva Ivanauskaitė:
We only have 2.7 million people living in Lithuania, so I think that's a huge achievement. When it comes to collaborations, we are part of a few networks, one of which, as far as I know, is going to be presented here as well. We are part of collaborative work with academia, NGOs and media organizations, which I represent, where we brainstorm together to find solutions for our specific markets. In our case there are two organizations we belong to: one is a Lithuanian initiative called DIGIRES, and the other is part of the EDMO hub network presented in Workshop 2a, the Baltic hub called BECID. We sit together and find solutions on how to counter disinformation together. That's what I was about to present on that first slide, but I'll fast-forward and conclude after this, because I feel like I'm talking too much. The last thing, when it comes to solutions, is technological innovation. We have rolled out a fact-checking bot on the Messenger platform under the Melo detektorius account, and we will also be launching another one, in collaboration with other fact-checkers in Europe, to enable AI to learn from disinformation in different languages in real time. So basically, what I wanted to say is that when it comes to best practices, it is a two-way street. Stakeholders, meaning government institutions, media organizations and NGOs, have to take measures themselves, which include fact-checking, collaborative networks to find solutions together, because we're doing that and we know it works, and creating engaging content that is relevant to their audience, whoever that audience is. It's relatively clear what we have to do: we have to make the content easy to consume and readable; and government institutions, for example, have to think about who their target audience is and how best to approach them. On the other side of that same street are users and readers. What they can do is improve their critical thinking and use verified sources, which is a huge problem. And, as Viktor mentioned, there has to be political will and strong measures to change the status quo, which is not so good right now, unfortunately. And yes, they have to be willing to participate in those educational programs. So if there is no interest from the parties on both sides of the street, we will probably not see the impact, but at least I can say, from a media perspective, that we are trying and doing everything we can. Thank you.
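''Editorial note: the Messenger bot mentioned above is not described technically in the session. As a purely illustrative sketch, the core of such a bot can be reduced to claim matching: comparing an incoming message against a database of already published fact-checks. The entries, stop-word list, similarity measure and threshold below are invented for the example and are not Delfi's actual implementation.''

<syntaxhighlight lang="python">
# Minimal claim-matching sketch for a hypothetical fact-checking bot.
# All fact-check entries and the 0.25 threshold are made up for illustration.
import math
import re
from collections import Counter

FACT_CHECKS = [
    ("A frog sticker on a banana marks poisoned fruit",
     "False: the sticker is an ordinary brand/ripeness label."),
    ("Schoolboys will be sent abroad to fight after graduation",
     "False: no such conscription rule exists."),
]

STOPWORDS = {"a", "an", "the", "is", "it", "that", "on", "to", "of",
             "in", "will", "be", "after", "true"}

def tokens(text):
    return [t for t in re.findall(r"[a-z]+", text.lower())
            if t not in STOPWORDS]

def tfidf_vectors(docs):
    token_lists = [tokens(d) for d in docs]
    df = Counter()                      # document frequency per token
    for tl in token_lists:
        df.update(set(tl))
    n = len(docs)
    return [{t: tf * math.log((1 + n) / (1 + df[t]))
             for t, tf in Counter(tl).items()} for tl in token_lists]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(message, threshold=0.25):
    vecs = tfidf_vectors([claim for claim, _ in FACT_CHECKS] + [message])
    msg_vec = vecs[-1]
    best_i, best_vec = max(enumerate(vecs[:-1]),
                           key=lambda p: cosine(msg_vec, p[1]))
    if cosine(msg_vec, best_vec) >= threshold:
        return FACT_CHECKS[best_i][1]
    return "No matching fact-check found; a human fact-checker should look."

# With this toy corpus, the banana question matches the first entry.
print(answer("Is it true that a frog sticker on a banana means it is poisoned?"))
</syntaxhighlight>

''A production bot would of course use multilingual embeddings and a real fact-check database rather than TF-IDF over two entries; the sketch only shows the matching idea.''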
 
Vytautė Merkytė:
Thank you so much. And I'll just add, I think what was said is very important. I come from Delfi as well, and sometimes, as a media organization, we invite schoolchildren to see our office and understand how journalists work. Quite recently I had a group of 16-year-olds who visited us, and I asked them: how do you get your news, what do you read? Their answer was: we don't read news media like portals, newspapers and so on. Okay, so how do you get your information? Their answer was: TikTok. And my reaction was: but do you realize that there's a lot of disinformation on there? And they said: yeah. Then I asked them to give me examples of disinformation they had seen on TikTok, and they gave a lot of examples. So it's very important that people want to recognize disinformation, that people want to actually increase their media literacy skills, because those 16-year-olds, for example, were okay with receiving disinformation. I just wanted to add this because it's a very interesting point. And our last panelist here is Ms. Aistė Meidutė.
 
Aistė Meidutė:
Hello, everybody. Once again, it seems that today I'm wearing many different hats. Some of you already heard me during the first part of this workshop, where I was speaking as an editor and fact-checker for Delfi's fact-checking initiative. Now I'm going to talk briefly about another thing we did, together with Vytautas Magnus University and different NGOs: a project called DIGIRES. This project was started with a very ambitious goal: to strengthen the digital resilience of society, to talk about disinformation, to empower people to fact-check certain things themselves. It was, and still is, a common initiative between academia, universities, media organizations and independent journalists. And what was our approach to fighting disinformation? First of all, we were thinking about how to build trust in traditional media, because it is of course a huge problem that people no longer trust media organizations; they tend to look for information on social networks. Just last week I was fact-checking a claim by a woman who declared that young Lithuanian schoolboys, right after school, were going to be sent to Ukraine to fight in the war. When someone asked where this information came from, she said: I saw it on Telegram. So Telegram is now the leader in passing information to people, not the official media. What we tried to do to change this, at least a little, in Lithuania, was to talk a lot with regional media, with different media organizations, with NGOs and different stakeholders, about how the media work, what fact-checking is, how it works and why it's important. Of course, the picture you see in big cities is very different from what you see in smaller regions, media-wise as well. People living in smaller places don't usually think about the global perspective, about why we need to talk so much about the disinformation problem. By connecting with regional journalists and people from the regions, we get their perspective; we get the problems they are facing. The other thing we did was to equip community leaders with the knowledge to fact-check content themselves and to look for sources. We equipped them with the knowledge to consume content in a different manner, and one of the main techniques we talked about was lateral reading: instead of scrolling down the page, you step out of the article you are reading and search for different clues, such as the people and events it mentions, and fact-check the information that way, looking for more contextual information on the content you are reading. And I think one of our most important efforts was meeting with community leaders who pass their knowledge on to others. We tend to think of journalists as the ones passing on knowledge, but it's also doctors, to whom people entrust their most valuable asset, their health, and whom they approach with questions and requests for advice. There are also librarians, who pass on their knowledge to people who are not only looking for information but talking about the reality they have to face. And there are teachers, who pass their knowledge on to children. All these different people need to be equipped with the knowledge of how to fact-check information.
Well, some of us might say: I'm a doctor, I'm a driver, I don't need those fact-checking tools in my life, I have other problems. But we need to understand that this huge problem of disinformation is not going to solve itself. It's not only fact-checkers and journalists who have to explain what reality is; we all need to have a sense of what reality is. And the only way to achieve that is to have better knowledge of the most common tools, for instance how to do basic fact-checking. My general notion is that being a fact-checker, at least a mediocre one if not a good one, is easy, and it's quite attainable for many members of our society. With such a huge information flow, it's going to have to be our reality. The other approach we took was creating a pilot learning and teaching model for university communication students, and I led the workshops on how to recognize the main disinformation narratives and how to use simple digital tools to fact-check information yourself. Of course, those young people had many questions, for instance about how to talk to their parents or grandparents who deeply believe in conspiracy theories or are deeply affected by low-quality content and disinformation. Of course, every big goal comes with challenges we have to deal with as well. One of the biggest obstacles is that fact-checking works, and that is proven by university studies, but not enough people see it. Why do not enough people see it? Because we constantly have to compete with, I don't know, cute kittens playing all over the internet. It's a really hard task to talk about serious things and attract people's attention. That's why fact-checkers need help from the biggest social media platforms; otherwise we won't be able to pass on the message we want to share. And when you work in a huge multi-organization project, you have the feeling that different stakeholders have quite different goals, and they don't always match; and even when they do match, there is another problem: we risk duplicating our efforts. That's what we see with the huge rise of fact-checking organizations, and of networks of fact-checking organizations: most of the time we tackle the same disinformation narratives without looking for a common direction we could take to reach something bigger, to make real progress. The other thing I've noticed, and this is self-critique, between us: a lot of fact-checking organizations are driven by the approach of fact-checking singular claims. This is how most of the partnerships with the bigger platforms work: we fact-check separate claims instead of looking at the wider context, instead of talking about influence operations and the actions that bad actors take. It's very important to see the wider picture; one fact-check is, of course, not going to solve this huge problem. But I don't want to leave you with such a gloomy message. I deeply believe, as I said before, that each of us can be a fact-checker. We just need curiosity, to do things, to explore the media world. It's very, very powerful. Thank you for listening to me.
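''Editorial note: lateral reading, as described above, is a human habit rather than an algorithm, but its first step, listing who and what an article mentions and checking them outside the page, can be sketched in a few lines. This is a minimal illustration with an invented article and a deliberately crude entity heuristic, not a tool from the DIGIRES project.''

<syntaxhighlight lang="python">
# Illustrative sketch of the first step of lateral reading: extract the
# names and places an article mentions and build independent search
# queries for them. The example text (institute and person) is invented.
import re
from urllib.parse import quote_plus

def lateral_reading_queries(article_text,
                            search_url="https://duckduckgo.com/?q="):
    # crude "named entity" heuristic: runs of two or more capitalised words
    entities = sorted(set(
        re.findall(r"[A-Z][\w'-]+(?:\s+[A-Z][\w'-]+)+", article_text)))
    # one independent-search query per entity
    queries = [search_url + quote_plus(f'"{e}" fact check') for e in entities]
    return entities, queries

text = ("The Institute of Global Truth reports that Jonas Petrauskas "
        "ordered every school in Vilnius County to close next week.")

entities, queries = lateral_reading_queries(text)
print(entities)          # who and what to verify elsewhere
for q in queries:        # open these in separate tabs, not the article
    print(q)
</syntaxhighlight>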
 
Vytautė Merkytė:
Thank you so much. And I can say, I don't know about you guys, but I'm constantly working as a fact-checker for my mother, my father, everyone in my family, and I guess everyone can relate to this. So we still have some time for questions, and I hope that someone, either here in this lovely audience or among the lovely people online, would like to ask a question. And I see an eager audience member. Please go ahead.
 
Audience:
I've been hearing a lot about media literacy programs, and everybody knows that in media literacy, education and activities are important. But my question is: how do we do it at scale? Most of us are small, not well-funded civil society organizations, and if we bring in one group and then another, we maybe help a hundred people, a thousand people. Do you have any ideas? How do you do it at scale, reach a large audience, achieve systemic change?
 
Vytautė Merkytė:
I guess maybe Ieva, you could help find the answer here.
 
Ieva Ivanauskaitė:
Yeah. So, coming back to the point I emphasized during my presentation: first of all, you have to have the demand for change in a country. If there is demand, there has to be one initiator, and one organization is enough, I think, as long as that organization is motivated enough to bring all of the stakeholders into one room to discuss the possible next steps; not even the final result, but to outline a strategy that can later be divided into smaller steps that allow for a big change. And this is what we are trying to do. We are still taking baby steps, but when it comes to the networks I mentioned, this is what we did. We had this idea that we wanted to collaborate with academia in the first place, and then, little by little, we realized we also wanted to collaborate with government institutions, with the ministries that should be interested, and with media literacy practitioners, NGOs that have hands-on experience in informal education. We brought them all together, and there is a demand for it, and little by little we are discussing how this can move forward. So it comes back to relations, right? Different people and organizations interested in changing the status quo, with different capacities and skills that can complement one another.
 
Vytautė Merkytė:
We have a lovely observation from Professor Tilak, who is online. He says that we should look at the algorithms of media companies and how they are using, or rather misusing, the nature of the human mind, trying to get more likes, more attention and so on. I will turn it into a question directed at Aistė. Is it possible to fight disinformation on social media when social media companies gain so much by making people angry and keeping them engaged? Is there a way for us to actually work together with them towards this one goal of eradicating, or at least minimizing, the amount of disinformation?
 
Aistė Meidutė:
I believe that if we include all the platforms, sit at the round table with them, and share the same goal, then yes, it's possible. But we haven't managed to achieve that yet. Some platforms, for instance Meta, are at least doing something. I mean, we know that this disinformation problem was largely created by social media, so in a sense they are tidying up after themselves; at least they are trying to do something now. But the involvement is not enough. And then there are the problematic platforms, like YouTube, which was sent a letter by the world's fact-checkers but at first didn't really react to it, and then tried to say: oh, we're doing something in-house. But it's not enough to do something in-house. I don't see it, and probably you don't see it either, how they try to reduce disinformation on the platform. We need real collaboration between fact-checkers and platforms, and to look for mutual solutions to tackle this problem, because as experts in this field we probably know what to do. And the initiative shouldn't come only from the platforms saying: we can govern ourselves. We don't trust them anymore, I guess.
 
Vytautė Merkytė:
What I have noticed with YouTube is that when you go to a certain video, you can see at the bottom that they say: oh, this video has information about COVID. And that's it, you know; I haven't noticed any additional work from them. We still have time for probably one more question. Is there anyone who would like to ask something? Anyone online? Ah, OK. Yeah.
 
Dr. Tilak Jha:
Not a question, rather an observation, I would say. The previous panelists deliberated on the aspect of algorithms and the human mind, but I think we tend to ignore how much logic can achieve. Logic is double-edged: it can make both sides appear equally logical, which may not be the case. This is the challenge we are facing in elections, in misinformation, and in information related to health and everything else. Just look at COVID, when people were dying in queues for oxygen, hundreds of thousands of us across the world, and there were people busy minting money because they were misusing information. So this argument that we can live in a very, very logical world runs up against the fact that the limits of logic end with consumers and markets. That is the fundamental question we need to address. AI is not the problem, but AI is not very natural either. We tend to ignore the fact that what AI does is essentially club together a lot of information and recycle it, and we tend to call that intelligent. But that is not what intelligence is. Intelligence is using less information to be able to tell more; it is deduction. If I knew everything that Facebook and Google and Twitter and all the social media platforms know about you, I would be able to tell much more about any person; yet with all this information, they are just able to provide some basic information. This needs to be understood and taken in context. We have tried to make the world far more logical, including with the application of AI and all these things, and that is somehow backfiring. We need to focus also on how to understand information. Information is empowerment; we are providing the empowerment, but we are not giving people the sense to use it. That is the far more critical question.
 
Vytautė Merkytė:
Thank you so much. And I think it's a very interesting point: during COVID some people were trying to find reliable information while others were spreading disinformation and making money from it. So it comes back to what being human is: do we want to make more money and spread disinformation for our own gain, or are we actually trying to fight it? And I would like to turn to Francesco to wrap the session up.
 
Reporter:
Okay, yes. Thank you very much. I'm Francesco Vecchi from Eumans. I'm here as a rapporteur, so my aim is to wrap up what has been said during the session and to check whether there is broad consensus in the room, in order to draft the messages, which you can later edit, modify, and comment on if you believe I missed something. It's a process that will take place in the next few days, so please do so when it is shared on the platform. Okay, I'll start from the context. We said that this year was really particular, because 4 billion people were going to elections, and this showed, on the one hand, much potential for disinformation to create problems for democratic institutions, but also a general distrust in democratic institutions as they are. We see a trend towards entertainment, gamification, and polarization in democratic societies. AI is used to foster disinformation practices and propaganda, not just generative AI but also algorithms and machine learning, and also tools like translation tools, micro-targeting campaigns, and targeting practices. Social media, of course, played a major role, especially in Global South countries, where they are actually a tool for constant feedback from the government. What are the solutions? Please keep in mind that there has already been a session, so I will skip things we already discussed earlier, for example multistakeholderism and other approaches that have already been presented, especially by Delfi. The solutions proposed are, first, to broaden media literacy: not just education on how to use media, but especially education that feeds a critical spirit and fact-checking, added to the educational curriculum in schools with specific attention to implementation. Then, try to create a virtuous cycle between stakeholders and users: on the one hand, stakeholders must take action to change how they provide services; on the other hand, users need to exercise critical thinking on their own and use verified sources. Also, diversify solutions depending on the region. We saw that disinformation in Central-Eastern Europe must be framed in geopolitical terms; for Africa, it has to do with the decentralization of power; for India, it's more related to micro-profiling and other practices. So we need to diversify solutions; we cannot think of just one solution for every kind of issue. And finally, empower community leaders with the knowledge and tools to detect myths and disinformation and to understand information, because they are important actors at the local level. Overall, the general approach should avoid West-centric or East-centric trends, so avoid both European and American attitudes and Chinese attitudes, and be really inclusive and global. And it must focus specifically on social media, because, especially for the growing youth in the Global South, social media are most of the Internet they consume. Is there any specific objection, anything you want to add, any modification, or do you agree with the main message?
 
Vytautė Merkytė:
Well, I think you did a wonderful job.
 
Reporter:
Great. Okay, thank you very much.
 
Dr. Tilak Jha:
We need to engage more often.
 
Vytautė Merkytė:
True, that’s true. So, that’s what I wanted to say. I wanted to encourage everyone here to reach out to these lovely panelists. If you have any suggestions, any ideas, if you want to collaborate somehow. I believe that we’re all here and we all have the same purpose. We want to make sure that there is less disinformation and more democracy in the world. So, let’s, you know, collaborate. So, thank you so much for being here today.  


[[Category:2024]][[Category:Sessions 2024]][[Category:Sessions]][[Category:Workshop 2024]]

Latest revision as of 15:43, 25 July 2024

18 June 2024 | 15:00 - 16:00 EEST | Workshop 2a | WS room 1 | Video recording | Transcript
18 June 2024 | 16:45 - 17:45 EEST | Workshop 2b | WS room 1 | Video recording | Transcript
Consolidated programme 2024 overview

Proposals: #7 (#27) (#35) #37 #53 #54 (#60) #61 #80 #83 (see list of proposals)

You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published at the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.

To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.

Session teaser

What used to be the public sphere – in Europe and elsewhere – is undergoing fragmentation and disintermediation, as traditional media is ceding its role as the fulcrum for public opinion formation. With the dominance of social media platforms and the proliferation of more advanced technologies, such as generative AI, the media environment of political processes is increasingly prone to disinformation, particularly during elections. These workshops will look at the challenges and opportunities presented by the changing media landscape, including its consequences for democracy, and highlight best practices on how to enhance digital literacy, address disinformation and strengthen independent media.

Session description

Questions to be Addressed by the Workshop:

  • What are the primary implications of the fragmented and disintermediated media space for sustaining democratic processes?
  • What roles do governments, civil society, and media actors play in addressing these issues and fostering resilient democracies?
  • How have the EU digital regulations, such as the Digital Single Market initiatives (DMA, DSA), impacted European elections? The speaker will address how the pieces of legislation work together, what to expect as they come into force, and what has been done so far.
  • What has been happening in other elections around the world and how is this relevant for the EU / what similarities are there?
  • What have the platforms learnt globally, and how does that impact the EU?
  • How does the use of artificial intelligence (AI) contribute to the spread of disinformation online, and what implications does this have for the electoral process and political campaigns?
  • What gaps need to be addressed to effectively combat disinformation and other issues arising from the current media landscape, and at what levels should these efforts be focused? The Guidance Note on countering the spread of online mis- and disinformation will be presented.
  • How can educational initiatives be designed to build resilience against disinformation and promote critical media literacy?
  • What roles do educational institutions and policymakers play in fostering a media environment that supports democratic principles and combats the spread of misleading information? How can initiatives be brought into the curricula?

Expected Outcomes:

By attending this session, participants will:

  • Enhance their understanding of the current state of the media space and its implications for democratic processes at both societal and individual levels.
  • Gain insights into EU digital policy regulation and its anticipated impact on the European elections, the lessons learned, and how applicable these are elsewhere.
  • Understand how platforms face conflicting demands, such as freedom of expression versus content moderation, during elections.
  • Identify challenges and opportunities for creating a less polluted media space that fosters resilient democracies.
  • Explore potential solutions to current regulatory gaps, such as self and co-regulation instruments.
  • Learn about educational initiatives designed to build resilience against disinformation and the role of policymakers and educational institutions in cultivating critical media literacy.

Format

Each panel member will present a brief overview of their views and perspectives. This will be followed by an interactive open discussion with all attendees to hear opinions, ideas, concepts and recommendations.

On the topic of EU elections, panellists will structure their views as follows:

  • First, what happened and lessons learned
  • What can be done better
  • How can this be replicated

Further reading

People

Please provide name and institution for all people you list here.

Programme Committee member(s)

  • Meri Baghdasaryan
  • Yrjö Länsipuro

The Programme Committee supports the programme planning process throughout the year and works closely with the Secretariat. Members of the committee give advice on the topics, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and monitor the complete programme to avoid repetition.

Focal Point

  • Luis Manuel Arellano Cervantes (Part 1)
  • Monojit Das (Part 2)

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Programme Committee member(s) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.

Organising Team (Org Team)

List Org Team members here as they sign up.

  • Riccardo Nanni
  • Giacomo Mazzone
  • Emilia Zalewska-Czajczyńska, NASK PIB
  • Charlotte Gilmartin
  • Octavian Sofransky, CoE
  • Ximena Lainfiesta
  • Narine Khachatryan
  • Mzia Gogilashvili
  • Phaedra de Saint-Rome

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

Key Participants and Moderators

Workshop 2a
Key Participants
  • Irena Guidikova, Council of Europe (On site)
  • Afia Asantewaa Asare-Kyei, Director for Justice & Accountability, Open Society (Remote)
  • Paula Gori - EDMO & Florence School of Transnational Governance, European University Institute (Remote)
  • Ms. Aistė Meidutė, Lithuanian Counter-Disinfo Project DIGIRES (On site)
Moderator
  • Giacomo Mazzone, Member of the Advisory Council of EDMO
Workshop 2b
Key Participants
  • Dr. Tilak Jha, Associate Professor at Bennett University, India (online)
  • Dr. Viktor Denisenko, Associate Professor, Centre for Journalism and Media Research, Faculty of Communication, Vilnius University
  • Ms. Ieva Ivanauskaitė, Innovation and Partnerships Team Lead, Delfi
  • Mr. Gabriel Karsan, Secretariat Support, African Parliamentary Network on Internet Governance (online)
  • Ms. Aistė Meidutė, Lithuanian Counter-Disinfo Project DIGIRES
Moderator
  • Vytautė Merkytė

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter

Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

Workshop 2a:

Rapporteur: Francesco Vecchi, Eumans

  1. Impact and Challenges of the EU Elections
    Disinformation campaigns before EU elections targeted issues like Ukraine, COVID-19, and the state of EU democracy, aiming to manipulate public opinion and polarize voters. While immediate election periods showed reduced incidents, AI and traditional methods play crucial roles in maintaining (or degrading) electoral integrity and ensuring (or threatening) access to verified political content. The measures put in place by the EU (through funding an independent organisation like EDMO, the Code of Practice on Disinformation, the EEAS, the European Parliament, and a network of fact-checkers) have succeeded in mitigating the impact of foreign interference. However, concerns remain about the spreading of mistrust in democratic institutions.
  2. Possible Solutions
    To combat disinformation, a multimethod approach includes independent fact-checking, international collaboration on research and demonetisation strategies, and holding digital platforms accountable [1]. The representative of Meta’s Oversight Board presented recommendations addressed to the platform about how to operate during elections. Educating users in critical thinking and media literacy, along with developing voter-friendly communication, enhances electoral transparency and promotes informed electoral participation. Besides, the long-term financial sustainability of reliable media is key to managing effective strategies.
  3. Multidimensional Approach
    Addressing media manipulation and electoral integrity requires enhanced cooperation between states, platforms, and civil society with a multidimensional approach involving diverse stakeholders, multidisciplinary expertise (e.g. psychosociology, neurology, linguistics, communications, etc.), multi-level governance (from international to local), and the development of inclusive multilingual standards.

[1] See the full document here.

Workshop 2b:

Rapporteur: Francesco Vecchi, Eumans

  1. General Mistrust in Democratic Institutions
    In 2024, amid widespread distrust in democratic institutions globally, approximately 4 billion people engage in elections. Information (both digital and traditional) is increasingly crafted for entertainment, gamification, and political polarisation, amplified by Artificial Intelligence through propaganda, translation services, and micro-targeting. More specifically, social media platforms serve as crucial feedback and control channels for governments, particularly in the Global South.
  2. Diversified and Tailored Solutions
    To tackle these challenges, promoting media literacy in educational curricula is essential, fostering critical thinking and fact-checking skills. Creating a symbiotic relationship between stakeholders (taking proactive measures to combat disinformation) and users (encouraged to adopt critical thinking practices and rely on verified sources) strengthens resilience against misinformation. Besides, tailored solutions are crucial: e.g. Central-Eastern Europe frames disinformation geopolitically, African countries grapple with centralised power dynamics, and India faces issues with social media micro-profiling. Finally, empowering community leaders strengthens local resilience by leveraging their influence to promote accurate information.
  3. Focus on Inclusivity and Social Media
    An inclusive global approach to infrastructure development avoids biases and ensures equitable solutions across regions. Prioritising efforts on social media platforms, especially in the Global South where youth and mobile access are influential, enhances interventions against disinformation and supports transparent electoral processes.

Video record

Workshop 2a:

https://youtu.be/BtyjA6zVC10

Workshop 2b:

https://youtu.be/REbbY6-ehoM

Transcript

Workshop 2a:

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Transcripts and more session details were provided by the Geneva Internet Platform


Giacomo Mazzone: Okay, thank you for being here. Because we have Swiss in the audience, we are obliged to start on time. I'm Giacomo Mazzone, one of the members of the Advisory Council of EDMO, the European Digital Media Observatory of the European Union. We have here in the room Irena Guidikova from the Council of Europe and Aistė Meidutė from Delfi. And we have online with us Paula Gori from EDMO, who is Secretary General of the organization, and Afia Asantewaa Asare-Kyei, who is Director for Justice and Accountability at Open Society and is here with us as a member of the Oversight Board. So I think that we can start after the presentations. You know what the topic of today is, I guess, because if not, you would not be in the room. But just to introduce it a little: this has been an exceptional year, because it's the first year in which we have many elections all over the world taking place at the same time, but it's also the first year in which we see the impact of artificial intelligence used for spreading misinformation and disinformation across the world. So it's interesting, at the middle of the year, with the worst probably still to come, to take stock and see what happened in the first months, in order to understand where we are in the battle to tackle disinformation and to preserve electoral processes all over the world. I will give the floor to Irena Guidikova, because the Council of Europe is, as we all know, the place where we try to balance freedom of expression with the integrity of elections. Irena?

Irena Guidikova: So hello, everyone. I think it's not the first time that you hear from me. I'm really happy; thank you, Giacomo, for inviting me to the session. I don't know if my screen is being shared.

Giacomo Mazzone: Not yet.

Irena Guidikova: Oh, wait, sorry. That should be the case now.

Giacomo Mazzone: Nope, not yet.

Irena Guidikova: All right, never mind. So I do represent the Council of Europe, Europe’s oldest and largest organization. It’s an intergovernmental organization of 46 member states. And for some reason, my presentation is not showing. Is it showing? No, it’s not showing. Never mind.

Giacomo Mazzone: I can send it to you. No, it's not that one. I'll send it to you.

Irena Guidikova: The Council of Europe is a standard-setting organization, among other things, and it has recently adopted a guidance note on countering the spread of online mis- and disinformation. It was adopted last December and was prepared by an intergovernmental committee, the Committee on Media and Information Society, where all 46 member states are represented, along with a lot of civil society organizations, including journalism organizations and others. Now, this guidance note showcases interconnected measures in three areas: fact-checking, platform design, and user empowerment. Basically, these are the three pillars of fighting disinformation that the Council of Europe is recommending to its member states, and I should underline that this should happen in a coordinated, multi-stakeholder approach, including users, non-governmental organizations, and industry. Now, if I go one by one through each of these pillars or areas: first of all, fact-checking. Fact-checking is essentially a journalistic process and a profession. It's very difficult to improvise oneself as a fact-checker, although we do also have trusted flaggers and citizens who do fact-checking. But primarily it's a cornerstone of responsible journalism, and it's one of the key tools to fight disinformation. There are, as I'm sure you know, dedicated fact-checking organizations, and they need to be supported, both financially and in regulatory terms, to become trusted actors in the information environment. States and other stakeholders should ensure their independence, their transparency, and their sustainability. Their independence from any political or commercial interests, that's clear. Their transparency with regard to the working methods, the methodologies, and the sources they use to check whether the facts shared are correct or not. And finally their sustainability, in particular their financial sustainability, so that these organizations can become really professional, carry out their role (it's actually a 24/7 one), and do their vital work without any undue interference. Finally, through their own cooperative networks, fact-checking organizations should ensure quality standards, and states and other stakeholders should have a way to check their effectiveness. Digital platforms are also called on to participate in the effort of fact-checking, either by integrating fact-checking into their own internal functioning, which basically only the bigger platforms can do, or by associating with independent fact-checking organizations and integrating external fact-checking into their content curation systems. The second dimension recommended by the Council of Europe's guidance note on fighting misinformation and disinformation concerns digital platform design. There is a very wide range of measures that states can undertake to ensure that there is human rights by design and safety by design, and this probably rings a bell with something I said this morning. Human rights by design and safety by design are general principles, not just for disinformation but for many other harmful and illicit activities or content online, including hate speech. So there are always requirements with regard to human rights measures for managing disinformation and for putting in place mitigating strategies.
Design measures for platforms should obviously focus not on content but on the processes through which platforms judge and decide whether content should be suppressed in the first place, which should be rather rare and done only in exceptional circumstances clearly defined by law, or whether content should be prioritized or deprioritized, promoted or demoted, monetized or demonetized. There are also other ways, apart from these harsh measures by platforms. Okay, right, thank you so much.

Giacomo Mazzone: You have to tell her.

Irena Guidikova: Oh, yes, you move, I'll tell you where to stop. More, more, more. Platforms, there we are. There are other measures, apart from managing the actual content, that can help alert users to the potential risk of disinformation and misinformation. Let me just open a parenthesis, because you see that I'm using both misinformation and disinformation. They are interconnected but distinct types of incorrect information provision: disinformation is a more deliberate action by a content provider or creator, whether a state, another organization, or even an individual, while misinformation is more the unwitting sharing of wrong information. That's why, to encompass the two, the Council of Europe uses the term information disorder. So apart from managing content, there is also the provision of supplementary information to users, such as debunking, age-related alerts, trigger warnings, or other signals that can alert users to the potential presence of incorrect content or information disorders. In this context, what's really important, and this is, I would say, an overarching principle of countering information disorders, is that the best way to counter disinformation is the provision of trusted information from trusted sources, and the prioritization of independent, professionally produced content and public interest content. It is not so much by displacing, suppressing, or deprioritizing the wrong content as by providing trusted content that disinformation is best managed. User empowerment, and you can move to the next slide, is the last pillar. The guidance note focuses on user empowerment to make sure that users become resilient to disinformation, while warning (and this is really important) of the risk of fostering narratives that blame the victims, blaming users for becoming victims of disinformation or burdening them with excessive responsibilities. At the end of the day, it is for states, digital platforms, and media to remain primarily in charge of promoting structural conditions for a healthy media ecosystem, and to ensure that reliable, quality information on matters of public interest is abundant and easily accessible online. At the same time, citizens of all ages need to be equipped to discern fact from opinion and reliable information from fabrications. The more people are able to cope with disinformation, the less we need to worry about them being targets of it. So digital platforms should empower users, including through systems such as trusted flaggers, at different levels, keeping an eye on linguistic and cultural differences as well. But the empowerment of users goes through education, digital literacy, and other general measures that are beyond the reach of digital platforms and would require cooperation between states, public authorities, media, and educational institutions. So I'll stop here to let the other speakers take their turn. And of course, I'll be happy to answer any questions.

Giacomo Mazzone: Thank you. Thank you for this very interesting presentation. Can you leave the last slide up, just to situate what we are talking about? Because this is a good transition to the next speaker, Paula. We are shrinking the horizon now: from the larger Europe we go to the smaller Europe, which we can say is the European Union. The European Union has been worried by the spread of disinformation and has made many attempts over the years to prevent this kind of problem. Initially it tried the route of co-regulation, or let's say self-regulation, through agreement with the platforms, something the Council of Europe has also tried, and the Code of Practice was signed. But after the first evaluation of the Code of Practice, it was seen that this was not enough, and beside self-regulation and co-regulation, we are now in the regulation phase. One of the tools that the Commission put in place to measure the fulfilment of the obligations of the Code of Practice was precisely EDMO, the European Digital Media Observatory, whose name echoes the European Audiovisual Observatory in Strasbourg. Paula Gori is the Secretary General of this body, which is based in Florence at the European University Institute. Paula, can you explain what the European Union has done specifically for the European election that took place a few days ago?

Paula Gori: First of all, thank you very much for inviting me. Just a technical question, shall I share the screen or will it be shared there, just to know?

Giacomo Mazzone: Yes, let’s try. If it doesn’t work, we have a backup.

Paula Gori: Okay, because it said that I’m not able to do that.

Giacomo Mazzone: You will be empowered soon.

Paula Gori: Great, I’m always happy when I’m getting empowered. But I’m still not.

Giacomo Mazzone: But you have to have a little bit of patience.

Paula Gori: Thank you. All right, so can you see my presentation now?

Giacomo Mazzone: Yes, now yes.

Paula Gori: Great, super. So yeah, as Giacomo was saying, the EU started some years ago to deal with disinformation. And as was previously said, there is no one single solution to disinformation; rather, there is the so-called whole-of-society approach. There are different pieces of a puzzle that jointly could do the trick. To do that, and this was already said, what is important is to have not only a multi-stakeholder approach but also a multidisciplinary one. To give a very easy example: in research we definitely need neuroscientists, psychologists, and sociologists to tell us how disinformation actually impacts our brain, why we share irrationally or rationally, what the overall health status of our society is, why we have the need to stay in echo chambers and why not. So you see, it's really multidisciplinary. It's not the usual data analysts and lawyers; it's way more. And as previously said, we need all the stakeholders involved jointly. This is why EDMO exists. EDMO brings the stakeholders together: the fact-checkers, the media literacy experts, the researchers, the policy experts. What we do is jointly try to find common solutions or, if not solutions, at least the tools that we need. Think about data access in research, which is fundamental: access to the data of the online platforms and search engines, which is actually, if you want, the last mile we are missing to completely understand disinformation. Once we get that, it's going to be way easier to tackle it. Now, what we have the privilege of is a collaboration with hubs that cover all member states. As Giacomo was saying, that is fewer member states than the Council of Europe, but still quite a good number. And they act locally. Why is this so important? Because, as we often say, disinformation has no borders, and that's completely true, but the local element keeps being very important: not only language, but also culture, history, broadband penetration, media diet, and so on. So it's really important to always include the local dimension as well. We see narratives that enter some countries and completely fail to enter other countries. And these are the factors

Paula Gori: that actually make the difference. Now, ahead of the elections, what did we do as EDMO? First, to go very quickly back to what Giacomo was saying: you know, there is a Code of Practice on Disinformation in the EU, which was strengthened a few years ago, and then the EU adopted a regulation called the Digital Services Act. A very important disclaimer here: the Digital Services Act is not on disinformation. The Digital Services Act, the DSA, is actually mainly on illegal content. This is important to highlight. But within the DSA there is also a by-design approach: given that regulating content risks being dangerous for freedom of expression, the DSA goes, if you want, by design. Platforms have to run risk assessments regularly; in very easy words, they have to assess whether the way they are structured can be exploited by malicious actors to share specific content, including, in this case, disinformation. Under the DSA, there is the possibility of translating the Code of Practice into a code of conduct, and not only signing this code but also respecting its principles can be evaluated, in the risk assessment, as a mitigation measure for these risks. And EDMO is part of the permanent task force on this code, a task force that, very practically speaking, helps in implementing the code and in making sure that it keeps being aligned with technological developments and, in general, developments in the field. So, having said that, what did we do before the election? We built a task force composed of representatives of all our hubs. The idea was, on one side, to analyze what happened in the national elections, because we could learn something from that, and on the other side to monitor the trend of disinformation in the EU. And I have to say, analyzing the national elections was already very useful: many things, like disinformation on how people would vote, on the voting processes, were things we saw coming, because they had happened in the national elections too. This task force also produced, for more than a month, a daily newsletter with the disinformation narratives detected in the EU. So, very quickly, the recap of what we saw: indeed, we didn't see lots of disinformation in the two or three days of the elections, which means everything and nothing, of course. But we did see disinformation in the weeks, days, and months before. And you know, it takes time to build an idea, so dealing with disinformation only just ahead of elections makes no sense at all. We need a systemic approach to disinformation, because it's a long-term process, also in how it impacts our brain. And there were, so far, no big disinformation campaigns trying to put into question the outcome of the elections. Now, what we saw in the last months before the elections was clearly a lot of disinformation related to the EU, basically trying to undermine the EU, the way it works, and the things that the EU does.
And as you can see in these slides, we also try to monitor some very broad topics, like climate change, which is one of the topics that tends to be stable and growing, because it can very easily be attached to political agendas. Same for LGBTQ+. Then you see disinformation on Ukraine, which was used a lot to attack the EU institutions. We saw lots of disinformation on migration and other topics. Sorry, it's blocked. Okay, so, as I was saying, the war in Ukraine: just as an example, it was said that people in the EU are forced to eat insects because of the sanctions the EU is imposing on Russia, and that we are basically suffering a lot because of that. On climate change, and I don't know how familiar you are with climate change disinformation, there is a clear trend from old denialism, basically denying climate change and the causes behind it, to new denialism, which is disinformation on the solutions. And we saw that a lot, of course, considering the Green Deal that we have in the EU. Migrants, here again: especially Ukrainian migrants, but also refugees and migrants in general, because there are, again, policy discussions and regulations in this field in the EU at this very moment. And here it is very important to say that this is a topic in which disinformation can clearly trigger hate speech. So this is something to take into consideration: disinformation can unfortunately also lead to other types of harmful content. On election integrity, as I was saying, we saw quite a lot of disinformation on how to vote: double voting, or claims that you can vote by post in countries where it was not allowed, and so on. More generally, we also saw specific attacks; I don't know how much you have seen of the attacks against Ursula von der Leyen, like saying that her family is linked to Nazis and so on. But one very important thing, and I can skip to that slide, is artificial intelligence, which I think Giacomo was mentioning. Generative AI is doing damage, but the old-fashioned ways of producing disinformation still hold quite strongly: wrongly dubbing a video, miscaptioning a picture, and so on. This is something very important to keep in consideration. And last but not least, and unfortunately I don't know why I forgot to put it in this slide, we also ran a BeElectionSmart campaign that ran in all member states, in all local languages. And I have to give credit to the platforms, because the online platforms collaborated with EDMO: we got free ad credits from X, and the campaign was promoted by all the other platforms as well. So this is a way in which collaboration with platforms sometimes actually works, and it's important to highlight that too. Open for any questions, of course, and I'm sorry if I was a little long.

Giacomo Mazzone: Thank you very much, Paula. So the European Union, which is smaller than the Council of Europe, spoke longer than the Council of Europe. I'm afraid of how long the Oversight Board of Meta, which speaks for the world, will talk. Afia, the floor is yours. Thank you.

Afia Asantewaa Asare-Kyei: Thank you so much. Those two presentations were wonderful, thank you. And thank you to the organizers for giving the Oversight Board the opportunity to share what has been happening in other elections around the world, outside Europe, what might be relevant and similar for the EU, and, I guess most importantly, what the platforms, particularly Meta, have learned globally and how that is impacting or could impact Europe. So, just a minute for me to broadly introduce the board. The board is an independent attempt to hold Meta accountable and ensure that its policies and processes better respect human rights and freedom of expression. What does this mean in practice? It means that when you feel Meta has gotten something wrong on Facebook, Instagram, and now Threads, you can appeal to the board, which is made up of 22 global members. When we started, we had scope over Facebook and Instagram; recently, after some advocacy for scope expansion, Threads was added. Now, we are truly independent, and we have overturned Meta's decisions on whether content goes up or down around 80% of the time, based on the cases we have handled so far. The board also makes what we call recommendations, very impactful recommendations on just about everything, from how to better protect LGBTIQ+ users to making sure that government requests to remove content are reported clearly. Meta has implemented, or is in the process of implementing, the majority of these: to date, if I'm not mistaken, based on recent data from our implementation committee, we have made 250-plus recommendations, and a good percentage of those Meta has implemented or is implementing. So, thanks to the board, Meta's policies and practices are much clearer, and the company is being much more transparent. You as a user are now told exactly why your content has been taken down and why you have been given a strike, if you have been given one; you can also edit your post if it is flagged as likely to be removed or demoted. I think this is huge, because the alternative before was often a default to removal of content that might be harmful only in a small way, and we can imagine how much content fell into this category and what that meant for public debate, if a lot of content was removed. Meta has also agreed to track and publish data on when governments request that policy-violating content be removed from its platforms. This is a big advance for users' rights, as it allows people to understand what kinds of content governments and state actors are seeking to remove, especially in election contexts. Now, specifically on elections: protecting elections online has been a key focus of the board for the last couple of years, and it became one of our official priorities in 2022, so we had been preparing for this historic year of elections for the past two years. By taking high-profile cases from all around the world, looking at everything from disinformation to incitement to violence by politicians, the board has worked to ensure that Meta removes extremely harmful political content that is likely to fuel actual violence. We then make sweeping recommendations on how the company should improve to ensure this does not keep happening.
One of our major concerns has been to ensure that global users are not being forgotten. So we have looked at whether Meta was right to suspend former U.S. President Donald Trump from its platforms for stoking the violence that led to the attack on Congress; whether manipulated content of U.S. President Joe Biden, made to seem like he was groping his granddaughter while sticking an "I voted" sticker on her chest, should be taken down, and how manipulated content should be treated more broadly; whether content by Brazilian generals calling on people to hit the streets and take over government institutions by force should have been allowed to spread; and whether a video by then Cambodian Prime Minister Hun Sen, in which he threatened violence against political opponents, should be allowed on Meta's platforms. On the whole, I think we found that the company is trying to grow up and is learning some key lessons, but it still has a lot to do and much further to go, especially when it comes to the global majority; here I'm talking about the Global South and East. To home in on the Brazil example I mentioned: the board investigated Meta's handling of a very hotly contested 2022 election and found that the company eased its restrictions too soon. As I think Paula mentioned, the period before and during the vote matters, but sometimes you have to really pay attention to the post-election period. This was dangerous: it allowed content by a very prominent Brazilian general, who called for people to besiege the National Congress and the Supreme Court, to go viral. This spread in the weeks before January 8th, when thousands of people tried to violently overthrow the new government, almost in imitation of what had happened in the US. So we were wondering why the platform allowed this to spread. The board has made sure that Meta took these posts down, but we are really concerned about why this even happened in the first place. Why did they relax their protocols, almost two years to the day after the storming of the US Capitol? To stop this happening again, the board has pushed Meta to commit to sharing election monitoring metrics that will clearly show what steps the company has taken and whether its efforts are working. They have made the commitment, and it is going to be launched later this year, hopefully in time for the UK elections. Sadly, it was not ready for other key elections, such as those in India and Indonesia, very critical and significant democracies, or even South Africa. Looking forward, our experience has taught us how to work with social media companies to make an impact, and we believe the lessons we have learned go well beyond Meta and can help set basic standards for other social media companies globally. Earlier this year we issued a white paper, and I don't know how many of you have had the opportunity to read it, on protecting election integrity online; it made clear recommendations not just to Meta but to social media companies in general. Its founding premise is that the right of voters to express and to hear a broad range of information and opinion is essential.
You can only take down political content, especially around elections, if it is strictly necessary to prevent or mitigate genuine real-world harm, like violence. If that is not the case, then you have to keep it up, because it is going to provoke and engender necessary debates and conversations among citizens. And to do this, we believe the companies must dedicate sufficient resources (as I think the first presentation also noted) to moderating content before, during, and after elections, not just immediately during the voting period. Here we are looking specifically at resources for non-English-speaking geographies, so that moderators understand the context, and the cultural context, better. So how do you dedicate sufficient resources to content moderation? We are also asking the companies to set basic standards for all elections everywhere, and not to neglect the dozens of elections taking place in countries that might be less lucrative markets, where the threat of instability is often greatest. And never allow political speech that incites violence to go unchecked: with quicker escalation of content to human reviewers and tough sanctions on repeat abusers, companies can prioritize this and make sure that any kind of political speech inciting violence is really checked. And lastly (I'm just ending), we are asking them to guard against the dangers of allowing governments to use disinformation, or vague and unspecified reasons, to suppress critical speech, because sometimes disinformation is spread not just by people but by governments as well. Thank you.

Giacomo Mazzone: Thank you, Afia. I see that you use your prerogative of representing 192 states in terms of timing.

Afia Asantewaa Asare-Kyei: Thank you. I’m sorry, but I thought I should say it broader. Thanks.

Giacomo Mazzone: The paper you mentioned is here, in case anyone wants it; very useful and very interesting. I will raise my questions later, because I want to leave some time for questions from the floor. So, if we now respect the geography, you have 30 seconds as a Lithuanian.

Aistė Meidutė: No, no.

Giacomo Mazzone: No? Dutch, you have 45 seconds.

Aistė Meidutė: I'll try to be fast, but I'm not that fast, unfortunately.

Giacomo Mazzone: No, but for the sake of the debate. Thank you very much. Aiste, please, the floor is yours.

Aistė Meidutė: Yes, one moment, I will try to share my slides. Does everybody see that?

Giacomo Mazzone: Yes. Yes. We have just a little screen.

Aistė Meidutė: Great. So, first, let me introduce myself very briefly. I'm an editor and lead fact-checker at Delfi Lithuania's fact-checking initiative, Melo Detektorius. Delfi Lithuania is the biggest news portal in Lithuania. Our fact-checking initiative was established in 2018, and since then we have achieved quite a lot: we became signatories of the International Fact-Checking Network as well as the European Fact-Checking Standards Network, and we're a happy member of the EDMO family too, as their fact-checking partner. Today I'm going to talk a little about how we try to save elections from disinformation and disinformers. In Lithuania it was pretty challenging to do so, because with just a couple of weeks' gap we had presidential elections as well. So we had to tackle both of these huge events at one time, and I have to say that the presidential election stole a bit of the spotlight from the European one: we witnessed more disinformation and misinformation narratives related to the presidential elections than to the European elections. In such a challenging time, you have to work with different approaches, and our approach was to use both prebunking and debunking. One of the things we did was increase the number of fact checks we normally produce, tackling especially content related to the European elections: everything related to politicians as well as major decisions and the European agenda. One of the most important things we did during this period, I would say, was to increase our fact-checking content in the Russian language. We have a separate fact-checking initiative in Russian, and we try to produce as much quality content for Russian-speaking audiences as possible. It's no surprise that ethnic minorities are one of the main targets of disinformers, in Lithuania especially. Since the full-scale war in Ukraine started in 2022, a lot of Russian television channels have been blocked in Lithuania, and those people who used to consume this content regularly were left, I wouldn't say with no alternative content, but with less of the content they used to consume. So one of the tasks we took on was to create more quality content in Russian so they would be served accordingly, and with the huge flow of war migrants from Ukraine, this need increased even more. Another thing we did was partner with the European Fact-Checking Standards Network, where, together with 40 partners from different European countries, we created the Elections24Check database. To this day there are more than 2,300 fact checks in this database from 40 countries related to European Union topics: not only European policies and the European agenda, but also major crisis events like the war between Israel and Hamas, or Ukraine. Why is this database, and this approach, so important? Researchers have the possibility to use this content and these statistics to analyze the whole disinformation scene: what was happening before the election, what was happening during the election period, and what is going to happen post-election. The project is still ongoing, and we already have quite a lot of data collected and narratives published. Also worth mentioning: we tried to engage our audiences in a kind of critical thinking assignment by showing them a TV show called Politika in Lithuanian.
In English it means "catch a politician", and it has a double meaning; it's a kind of wordplay. It means catch a politician, but also know the politician, understand the politician. And by understanding a candidate, a politician, we mean that you have the chance to understand how politics works, how the basic thinking behind a political agenda is constructed, and also to think very critically about whether all those promises the candidates are making are realistic and easily achievable. In this TV show, we invited an expert in the political field to comment on what the candidates were saying. We had a couple of shows before the presidential election and a couple of shows before the European election as well. The last effort we made to counter disinformation and misinformation before the election period was to produce social media videos, mostly about media literacy, especially how to recognize AI-generated content, along with basic suggestions on how to consume information more efficiently and safely: checking the sources and questioning each piece of information that you find online. I promised to be brief. Let's connect: if you have any questions for me personally, or for us as a media organization, we are always very eager to communicate with our audience and the public. And we fact-checkers especially always welcome suggestions on how we could improve what we're doing, because it's not a fight that you can take on alone. You need many people for that, and much inclusivity as well. Thank you so much.

Giacomo Mazzone: Thank you, Aistė. So now we have some time for the floor and some questions. Unfortunately, you have to come up here, because there is no mic in the room.

Irena Guidikova: You can repeat the question.

Giacomo Mazzone: It depends if it’s a short one or if it’s a statement.

Audience: It won’t be a statement.

Giacomo Mazzone: No? Then I can repeat if you’re short. We sit or? No, no, no, stand.

Audience: No, no, I won't stay here, don't worry. So my name is Dan, I'm from Israel, and we're facing a very big problem with disinformation in our country, especially in the current conflict, but also before, with many elections like you have this year. First of all, thank you for this very interesting panel. My first question is, I think, for Irena and for Paula. You talked about collaborations and policies of EU states, of European countries working together with shared databases and policies. What can you offer a country that is a non-EU member, a small country that cannot work with other countries and does not have a common language or news sources with other countries? What can we learn from the EU strategies and policies you implemented together? Your slides were very interesting for us, also Paula's, for thinking about a systematic approach. My other question is more for the Oversight Board, which was very impressive, but I wanted to ask: what does it help to discuss or take down content, sometimes weeks after it has been promoted and published? If it doesn't happen within 24 or 48 hours, it isn't really worth a lot. I don't think the Oversight Board's job is to look at individual pieces of content; I think it's to provide oversight and to ensure that there is accountability at the platform in its policies and strategies. And we see this a lot. One last question, which also has to do with small countries: we hear a lot about resources going to elections in big countries, you know, India. What about elections in small countries? When we try to ask Meta or other platforms what they do for elections in small countries, we don't really get any responses. So that was my question. Thank you.

Giacomo Mazzone: Other questions? You see, I can't repeat that one. So, other questions from the room? Other questions from remote? Yeah, please.

Audience: Yes, I had a question for Ms. Gori and Ms. Meidutė as well. I got the feeling after the last European elections that there was a sense of relief that nothing extremely big, like, for example, Hillary Clinton's emails, happened during the election that really seemed to sway things one way or the other. Do you think this feeling is justified? Have you been able to compare misinformation and disinformation between the last election in 2019 and 2024? Do you see a rise, or is it getting better or worse?

Giacomo Mazzone: Thank you.

Irena Guidikova: I also have a question.

Giacomo Mazzone: For yourself?

Irena Guidikova: No, for the other panel.

Giacomo Mazzone: Please.

Irena Guidikova: Yes, I was wondering about the Oversight Board and to what extent your recommendations are compulsory or, I mean, followed. Obviously they're not compulsory, but to what extent are they followed? And to Aistė, I had a question about your outreach. Beyond the timing of the debunking and of whatever alternative narratives, there is also the reach. Are you able to reach out sufficiently widely, given that disinformation usually travels wider? And what are your outreach strategies?

Giacomo Mazzone: Thank you. I also have some questions for some of the speakers. One question is to the Oversight Board again: whether they see a contradiction with the regulations we have in Europe, given that Meta is based in the U.S. as a company. In the U.S. there is the First Amendment, so regulation is less cogent there than it is in Europe. Does it create a problem for Meta to comply with European regulation while also having to comply with U.S. regulation? That's my question. There are others, but if there is time we will raise them later. So, to start the responses: Paula, would you like to respond to the colleague from ISOC, for instance?

Paula Gori: Yeah, so I have that one and the second question. I think, of course, the EU set up this whole effort as an EU effort also to avoid discrimination within the EU market, because legally some countries were already taking somewhat different paths. But a lot of what we do, which can of course also be put under discussion, is something that can be applied everywhere. For example, working more on demonetizing content, making sure that platforms don't monetize disinformation. As was said earlier, insist on and advocate for platforms having content moderation teams in the country, working in the local language. Also, whenever a country has a very peculiar language, disinformation is sometimes less foreign and more domestic, because it's difficult for it to enter from outside; you have to master that language. But I guess in a country like Israel, disinformation in English still enters quite a lot. So again, try to understand what comes from where, and who the actors behind it are. Then fact-checking, but independent fact-checking. The European Fact-Checking Standards Network was mentioned, but there is also the IFCN, the international one, and fact-checkers can apply if they adhere to certain standards. So again, this is something that is actually quite broad. Then research. On research, forces are actually joined globally, not only at EU level. Of course, there is the DSA and the access to data that the DSA quite strongly imposes. But one thing that is very important in research is to pool not only financial resources but also technical resources. Not all universities in the world are technologically equipped to deal with all this data. And if I'm not mistaken, in Israel you have quite advanced tech universities, so maybe you can actually help; it's like a do ut des: you can help other universities with your technology, and they may help you in research on other aspects of disinformation. And of course media literacy initiatives, which I also mentioned. EDMO will soon be publishing guidelines on how to build a good media literacy campaign, because the point is not only to implement media literacy initiatives: they also have to meet pedagogical standards, otherwise they're basically useless and without impact. And this, again, is not only EU-related; it's something broader. So I would say that in the whole discourse, as I was saying earlier, the local element is very important, but as a matter of reflection, policy, and activities to be implemented, I think we can have a more global approach. And on the relief: I personally am not relieved. Because, as I was saying, it's not because there was no major incident in those two or three days that we didn't have a problem. Disinformation is not only political, but when it is political, it starts a long way before, also, for example, with issue-based advertising, for which even at EU level there is still no agreed definition. So I wouldn't be so positive regarding a comparison with 2019. We're still trying to understand how things were. But let's be honest: technology has changed completely in these years, and so has policy.
So you would be comparing things that are if you want structurally, in any case different, but clearly, there will be analysis also to understand if things went better or not. And I think that were these were the two questions that were raised addressed to me. So I will stop here.

Giacomo Mazzone: Thank you. Aistė, I think you were asked as well.

Aistė Meidutė: Yes, to comment on how things compare with previous election seasons: I would say that this time we have much more tension, and we are definitely a more polarised society, so it is easier to target us. That is why things are definitely more difficult than they used to be. We noticed this especially in Lithuania, which is definitely a target because of its proximity to Russia. A huge part of Lithuanian society has a deep fear of going back to the Soviet Union, of experiencing war, and it is easy to manipulate these emotions and to scare us. As to why there was no major boom before the election: at probably every meeting, European fact-checkers were discussing this and preparing for a major boom — maybe AI would create a flood of false information that we would not be prepared to tackle in time. It did not happen, but that does not mean we are safe. When we think about disinformation, it is not really about those major explosions. It is about sowing doubt, and that is a slow-working process, but it is still faster than those who search for the truth; there are many more of them than there are of us who try to debunk and explain things. There was a question about reach. Well, it is hard to say. I am pretty sure the reach of disinformation is sometimes much, much higher than the reach of fact-checks, and that definitely hurts. It is not an easy topic for us. We try our best, and being part of a major media outlet in the country, I would say we manage to reach quite a good number of people. The problem is that whenever we talk about fact-checking, we realise that society imagines fact-checkers and fact-checking in a very different way. It is not a novel practice in media anymore, but in Lithuania, for instance, not many people yet know what fact-checking is and who the fact-checkers are. So fact-checkers are still these kinds of mythical creatures that people need to understand. But I hope we are on the right track.

Giacomo Mazzone: Okay. Let me be fast, because Afia has a lot of questions to answer.

Irena Guidikova: Yes, just to say that I totally agree with you. The crux of the matter with citizens spreading and believing fake information and rumours is that they no longer trust public authorities. So in fact, the best way to fight disinformation is to restore trust in public authorities, and that means really rethinking democracy — revising and revisiting democratic processes and institutions together with citizens. And just to reply about Israel: Giacomo always jokes about the Council of Europe being a relatively small organisation, but the Council of Europe is actually becoming a more and more global organisation. All of our recent instruments, treaties, and conventions are open globally, including the one on AI. There are five observer states in the Council of Europe, and Israel is one of them. So you can participate in all of our intergovernmental committees, including the one that produced the disinformation guidance notes. So don’t hesitate. As for civil society organisations, that is in fact a bit of a grey zone: yes, civil society organisations can participate, and they can be from Israel, but it may be better to associate with an international civil society organisation, and then, this way, yes. And we also have a South Programme, with EU co-funding, which is also active in Israel. So we have various channels.

Giacomo Mazzone: Okay, Afia, there were many questions for you. Can you be brief, because we are already over time… Luckily, the Swiss member of the room is not here anymore, so we can run late. I’m Italian, so you can go ahead, but not for too long, please.

Afia Asantewaa Asare-Kyei: Sure, I have three questions.

Giacomo Mazzone: No, no, you have answers to give us, not questions.

Afia Asantewaa Asare-Kyei: I have three questions, so I am going to take them all at once. For the gentleman who asked what it matters when our cases are decided: we make three types of decisions. There is the standard decision, which is the in-depth review of META’s decision to remove or allow content, and which includes our recommendations. Then we have what we call a summary decision, which is an analysis of META’s original decision on a post when the company later changes its mind — when the board selects the case for review and we let them know, and they say, sorry, we made a mistake here, it was an enforcement error — and we are able to say, okay, rectify it quickly. And then there are the expedited cases: the rapid review of META’s decision on a post in exceptional situations with urgent real-world consequences, such as the two cases related to Israel and Hamas that we decided late last year, in October and November. So those are the three: standard, summary, expedited. We do have a mechanism to really fast-track, and that expedited process is 48 hours — within 48 hours we have to make a decision. Then, to what extent are our recommendations followed? Our recommendations are binding on META, and META must implement them. META has up to 60 days to respond to us and update us on what they are doing in terms of implementation. We have an internal implementation tracker and an internal implementation committee, because it would make no sense if our recommendations were not implemented — we might as well not exist. So yes, there is a seriousness on our part, and I believe on META’s part as well, to implement our recommendations, and we track them. We know how many have been implemented fully, how many partially, and how many are still to be implemented, and we have regular meetings with them to get updates. And then the contradictions. You are right that META is an American company, but it is a global company as well, a company with global reach — here we are talking about the most powerful speech regulator in the history of humanity. So yes, they have to respond to and respect US regulations, but also the EU’s. Right now I know that internally they are putting in place mechanisms to implement the DSA and its rules for social media platforms and companies. So just quickly: yes, it is an American company, but under the US First Amendment a lot of things are less restricted, while the EU is more stringent, and META has to respect both sets of rules and regulations.
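(Editor’s note: the three decision tracks and the implementation follow-up described above can be summarised in a small data model. The following is an illustrative sketch only — the class names, fields, and helper function are our own invention, not the Oversight Board’s actual tooling; the 48-hour and 60-day figures are the ones quoted in the session.)

<syntaxhighlight lang="python">
from dataclasses import dataclass
from enum import Enum

class DecisionType(Enum):
    STANDARD = "in-depth review of META's decision, with recommendations"
    SUMMARY = "review of an enforcement error META corrects after case selection"
    EXPEDITED = "rapid review in exceptional situations with urgent consequences"

# Timelines quoted in the session.
EXPEDITED_DECISION_HOURS = 48  # an expedited decision must be made within 48 hours
META_RESPONSE_DAYS = 60        # META has up to 60 days to respond on implementation

class ImplementationStatus(Enum):
    FULL = "implemented fully"
    PARTIAL = "implemented partially"
    PENDING = "still to be implemented"

@dataclass
class Recommendation:
    case_id: str
    decision_type: DecisionType
    status: ImplementationStatus = ImplementationStatus.PENDING

def summarise(tracker):
    """Count recommendations per status, as an implementation tracker might."""
    counts = {status: 0 for status in ImplementationStatus}
    for rec in tracker:
        counts[rec.status] += 1
    return counts

# Example: two tracked recommendations, one already fully implemented.
tracker = [
    Recommendation("case-001", DecisionType.STANDARD, ImplementationStatus.FULL),
    Recommendation("case-002", DecisionType.EXPEDITED),
]
print(summarise(tracker))
</syntaxhighlight>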

Giacomo Mazzone: Thank you, Afia, for being brief. Two final comments on my side, and then I will give the floor for the wrap-up. I have one piece of good news and one piece of bad news. The bad news is for Afia. Afia, I appreciate your effort, and I read with great interest the document about content moderation for elections. The problem is that, for instance, META has three native Estonian speakers working on that language for all of Europe. You have 11 for Slovakia — and in Slovakia we had a lot of trouble in the last national elections — nine in Slovenia, and I do not see any for Lithuania in the report given to the European Commission. So I think there is a lot to work on from your side. The good news, answering the earlier question of why not much happened in the European elections: you have to remember one thing. Social media can make the difference when elections are tight. With a proportional vote, you can only influence the tone and set the agenda; you cannot swing the vote. But when it comes to UK elections, where some constituencies are decided by a difference of a few dozen votes, or to US presidential campaigns, as we have seen, they make the difference. So I expect that in elections with a different voting system, the attacks will be different and the proportion will be higher than what we have seen. Now, sorry for closing so abruptly, but we have to try to summarise what has been said.

Reporter: Yes, thank you very much. I am here to provide a wrap-up, because we need a broad consensus on the final messages. I will try to sum up what has been said, and if there is any objection, anything to add, please tell me. I also wanted to point out that you will have time afterwards, on the shared platform, to add any comments or anything you missed during today’s session. I will start with the context. Not just the European elections but, generally speaking, the whole electoral year has seen the presence of disinformation, even though there were no huge outbreaks in the last few days before the elections. This does not mean there is no problem, because we actually live in a much more polarised society than just a few years ago, and this opens the way to short- and long-term manipulation techniques that can be even more pervasive than an outburst of one specific kind of disinformation. AI has an impact, but traditional methods are still really important in spreading disinformation. Generally speaking, the right of voters to hear and express political content is essential, but political content must be checked and constantly monitored. What are the possible solutions? A multi-method approach has been proposed that works on independent and transparent fact-checking; international collaboration, especially on demonetisation and research; digital platforms sharing databases and data for research; and user empowerment through education, critical thinking, and media literacy. It has also been proposed to translate the Code of Practice into a Code of Conduct, to make it more of a policy instrument and not just a set of recommendations, and to pursue effective collaboration with platforms to make an impact via consistent recommendations and implementation monitoring. There were also a couple of proposals to produce voter-friendly communication on information and misinformation through, for example, TV shows and media content. Finally, the general approach needs to be multi-stakeholder and multi-disciplinary, with multi-level governance from international to national, regional, and local authorities; multilingual; and it should pass through the creation of standards that can be globalised, especially for the Global South. I hope everything is clear. I am sorry if I have been too quick, but if anyone has objections, please let me know. Otherwise, you can comment later.

Giacomo Mazzone: Thank you. And with this, we close.

Paula Gori: Giacomo, sorry, if I may, just two things. One is on the Code of Conduct: it is already foreseen at the policy level that the Code of Practice becomes a Code of Conduct, so it is not something we are proposing. And could we add a point on the fact that we need long-term financial sustainability? It was mentioned for the fact-checkers, but civil society organisations and all the people — researchers and so on — working on disinformation cannot do it for free either. So far, honestly, we do not see a solid and sustainable business model for everybody here. And we saw it clearly also from the presentation by Delfi: the risk is that in the end fact-checkers work for free, and civil society organisations as well.

Giacomo Mazzone: Okay, perfect — for the platforms, that is. We have to close here; there are a lot of things left. Afia, you are lucky that we have to close, because the questions are piling up for you. Thank you very much, everybody, and we will continue the discussion in the next part of the session, which will start soon. Thank you.


Workshop 2b:

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Transcripts and more session details were provided by the Geneva Internet Platform


Vytautė Merkytė: Hello, everyone, and welcome to the second half of the workshop. For the next hour we are going to be speaking about managing change in the media space: social media, information disorder, and voting dynamics. My name is Vytautė Merkytė, I am a journalist at Delfi, and I will be your moderator. We have a lovely panel of specialists here who are eager to share their thoughts and their knowledge. Some of them are online and some of them are, as you can see, present here. I will start by introducing the people who are online. Since this is an international event, there is obviously going to be the charming element of not knowing how to correctly pronounce someone’s name, so I am very sorry if I butcher your names. My name being Vytautė, I am very used to having this happen to me. Our first panelist is Dr. Tilak Jha. He is an associate professor at Bennett University, India. Please correct me if I pronounce your name the wrong way.

Dr. Tilak Jha: No, it’s perfectly fine.

Vytautė Merkytė: That’s a surprise for me. Thank you. Our next participant online is Mr. Gabriel Karsan. He is Secretariat Support at the African Parliamentary Network on Internet Governance. Do we have him online? No. Oh, okay. And did I butcher your name?

Gabriel Karsan: All right. So how much time do I have?

Vytautė Merkytė: Oh, no, no, we’re still introducing the people. So if I understand correctly, Mr. Gabriel is not online yet. Okay, thank you. Next to me we have Dr. Viktor Denisenko, Associate Professor at Vilnius University. On my right we have Ms. Ieva Ivanauskaitė, Innovation and Partnership Team Lead at Delfi. And over here we have Aistė Meidutė, representing the Lithuanian counter-disinformation project DIGIRES. The way it is going to work: each panelist will present their talking points and will have around six minutes to do so. Afterwards we will have a discussion, and we will be awaiting questions from the audience and from the people online. I think we can start with Professor Tilak, who is online. Are you ready to present your talking points? Yes? Please go ahead.

Dr. Tilak Jha: First of all, thank you so much; it is an immense privilege to be part of this fantastic discussion. Let’s start with the debate today about elections. India just had the world’s biggest and possibly most comprehensive election exercise, with almost one billion eligible voters. Before we get into the details, let me say a few things. What we are essentially talking about is two words that have become very interesting in current times: myth and truth. These AI tools and misinformation campaigns do not always have a negative impact; at times, and often, they have a very positive impact, including in the election process. But what we do see on a larger scale is that truth has become a casualty. That is the first thing. We are living in an era of TikToks and Instagrams, fragmentation of politics, and the far right coming up in a very significant way in many parts of the world, including Europe. Quite a bit of irreverence, I would say — this has become a trend in the younger, social-media generation in particular. There is a tendency to be entertaining and rebellious, and that is a clear sign of a drift towards some sort of illiberal rule, if I may say so. Democracy is technically turning more direct, in terms of people’s ability to comment on, share, and respond to things being said. But at the same time, we often see biases coming from both sides, and this is the real tragedy of the misinformation problem, including at a time as crucial as elections. Speaking of elections: just a couple of days ago we learned that at least two people recently elected to the European Parliament — a politician from Cyprus and a Spaniard — have hardly any political experience and not much higher education. Their primary qualification lies in railing against the political elite of their country, taking positions that mainstream politicians would find very difficult. And this is not happening in just one part of the world; it has happened in America and in many other countries, including India. What we see is that, at times, it is not about left or right; it is about politicians and stakeholders. When the stakes are as high as elections and getting to power, they are willing to cut corners, and that is where all the election-related misinformation comes in. The Indian election of 2024 went on for roughly six weeks, and it was the first election in which AI was deployed at this scale — there was not much AI before it. What we initially saw were fairly innocent videos, in which some politicians would use the technology to personalise their campaigns, but over time it moved towards deep fakes, some of them really controversial and creating issues. There were some Bollywood stars who were caught up in this — found in deep-fake videos that were not theirs — and thereafter the Election Commission also started taking note. And it was not just celebrities from Bollywood; it was happening on both sides, from the ruling and the opposition parties. I would like to point to a fact-checking website whose work I have gone into in detail; they found that… Yes, I think I have been hearing voices.

Vytautė Merkytė: Yes, that’s usually not a good sign, but I can assure you we all heard them. I’m very sorry about this situation.

Dr. Tilak Jha: I will cut this story short, given the six minutes I have been allotted. In general, what we see is both positive and negative. The political parties — the BJP, the ruling party of Prime Minister Modi, the opposition, and of course the regional parties — are all using information, misinformation, and AI to a great extent, at times to speak in regional languages. For example, the BJP has been at the forefront of having the Prime Minister speak many regional languages. This was not easily possible earlier, but with AI it has become fairly easy. There was one dance video featuring the Prime Minister; I think the Prime Minister himself shared it and commented light-heartedly that, well, I also found myself dancing for the first time ever. So those things have happened, but there have also been some videos that were very offensive, and legal actions have been taken where the government took the view that there could be legal action. Some have also found it very easy to escape the legal consequences: at least one person who shared videos of an opposition politician, Mamata Banerjee — in one she was again shown dancing, and another was really offensive, showing her using a sort of remote control to burn down a hospital — told a news agency: I cannot be tracked, and I am not going to take down this video. So these things happen too; positives and negatives have both happened. What we do see is that AI has done a lot of micro-targeting and personalisation, and quite a bit of misinformation has been used on both sides. In this Indian election, the opposition and the government were equal victims. One website, Logically Facts, did roughly 224 fact-checks; in its report, almost 93 of them were against the ruling alliance, and roughly 46%, almost 103, were against the opposition. So both were targeted very significantly. And of course there were quite a few AI deep fakes and other things. The Election Commission also found itself at the receiving end initially, though there was not much AI content in this misinformation — hardly around 4 to 6%, not much. Even when it was ordinary misinformation, people would claim it was AI when it was just normal editing. At times politicians claimed a video was fake, but fact-checking organisations found it was real — that sort of misinformation has also happened. So trust has become a casualty, and I think that has been the biggest casualty, not just because of AI but in general. The declining trust in news agencies has created a situation where there is a general distrust, a general lack of an authority in which people have faith. And this failure of the liberal political setup has definitely pushed AI and misinformation — and the use of AI for it — further. One senior Election Commission official went so far as to say: we simply do not have the means to keep track of it; all we can do is complain to the social media platforms, and if they say it is in line with community norms, or if they take time, there is not much we can do. By the way, AI has also helped in voter education and in generating engagement. But if we compare whether the benefits or the limitations have been greater, it is the other way around. So these are some of the contours in which we can see the Indian elections: a lack of trust, a lot of fake news and misinformation, and a whole tendency to skew public perception, influence voter behaviour, and even manipulate election outcomes. Thankfully, it has not led to a situation where we can say with any certainty that it actually led to manipulation. But the jury remains out on saying that AI and misinformation campaigns did not really affect the election. There have been reports after the election that point to this being a factor in at least 5 to 10% of constituencies, especially in populous states and states where literacy is lower. For example, in Uttar Pradesh, the largest state of India, there have been reports from at least some cities pointing out that misinformation did play a significant role. So this is the contour in which we need to see the Indian elections, and I look forward to questions to take it further.

Vytautė Merkytė: Thank you so much. And I just want to remind everyone that this year is a very, very special year: around 4 billion people are going to the election polls. India, one of the biggest democracies in the world, just had its election, and there are also elections here in Lithuania and in the States. So around 4 billion people either were or will be exposed to the disinformation that circulates during elections. Thank you so much. And I believe I can see Gabriel Karsan on the Zoom call. Are you ready to speak?

Gabriel Karsan: Yes, I’m available.

Vytautė Merkytė: Okay, so please present your talking points.

Gabriel Karsan: Thank you very much. My name is Gabriel Karsan. I provide secretariat support for the African Parliamentary Network on Internet Governance. Briefly, what we do is empower parliamentarians in Africa to understand internet governance as an ecosystem, as a means to influence policy and create further understanding. I would like to start with a simple reflection on what social media and social networking are, because for countries like Tanzania and other developing countries in Africa, where we have been privileged to leapfrog in localising social media, there is a difference in how we relate to the system. Social media itself, social networking, is about bringing communities of people together. The internet is characteristically open and decentralised, and social media is a use case, an abstraction, drawn on how communities align. For a country like Tanzania, with a socialist background and a combination of almost 120 tribes and another nation coming together to coalesce, we do have some parameters of integration, which can also be viewed in the higher abstraction of social media. When it comes to voting, frankly, Tanzania has gone through democratic processes backed by mechanisms going back to the Greeks, and voting has not changed much in its forms: whether online or in a simple ballot box, the concept of voting has always been the same. And when we mix in the concept of social media, I think it has still been highly principled in how our community is viewed, as deep representation at a centralised and local level. But beginning with the 2015 election, where we had higher penetration of digital tools and people understanding social media, we saw political parties engaging with social media as a means to share their campaign message. Not much oppression of the opposition happened there; it was an equal space, in the belief that we were a people still gaining digital skills and — I think we are having technical difficulties again. Hello?

Vytautė Merkytė: I’m sorry, could you wait for a second? Can you hear us?

Gabriel Karsan: Yes, I can. I hope I am audible.

Vytautė Merkytė: So you can continue.

Gabriel Karsan: Yes, thank you very much. As I was saying, in terms of our understanding of the technical space, social media to us has just been a tool for representation in our communities. But ever since the 2015 digital transformation, when the government provided a lot of incentives for young people, improved access, and improved what it meant to be on social media as a tool to embrace democratic values, we saw the impacts that came without balanced coalitions of understanding among people. Hence this became a source of misinformation — and the misinformation has not been particularly political in theme. Rather, it has been an era of people with particular skill sets not actually understanding the power of social media; it has just been a representation of the rhetoric that happens on the ground. So there has been a sort of natural coalescing around what social media could be socially. In terms of the voting procedure, it has been less a tool of oppression than a tool of deeper engagement, especially in the 2020 election, when, with improved digital transformation, accessibility, and affordability, many more people could express themselves in terms of representation and could also use social media as a form of constant feedback. And this is what representation in a democracy has been; for our community, the angle of voting and representation has aligned well with social media as a tool. But when we look at what is happening now, as we go to elections next year, I am seeing a different rise in the political use of social media to influence people, especially in our countries where most of the infrastructure is highly controlled by the state. This is by design, because we want to push for further inclusion in the social structure — but still, who regulates the regulator becomes a question for all of us. Young people are engaging, young people are speaking, but the problem is that there is a gap in population dynamics: most of the older generation understands social media as a single, monotonous channel, whereas we young people see it as a cultural shift. And that is the balancing we need to do, because in the end we are still a representation of democracy at the very ground level, which is the decentralised nature of the internet expressed so strongly by social media. With these principles and parameters falling in line, I think it still comes down to one thing: informed policy — informed policy for a very dynamic and engaged population that actually understands digital skills as tools to help them represent themselves. But as I said, voting in the end has never changed its form or nature. We still have secrecy; we still have the dignity of casting one’s vote in the ballot box. Confidence in digital, interoperable, and open systems is where the question arises, because of the regulatory nature. On the ground, in terms of what people understand, I think that dynamic has been balanced, and Tanzania has been quite exemplary in that matter. I would also like to add that a conflict of interest we now have is that most social media has been created elsewhere and carries the bias of its creator. It is highly Western- or Eastern-centric, hence the need for localisation and homegrown solutions. And we cannot be blind to the dynamics that are changing now: there is a big geopolitical and geo-technology contest happening between China and America, and we as Africans are caught squarely in the middle. Unless we do a lot of work on understanding and owning the infrastructure we need, even the use case of voting and social media may remain subject to outside influence and may not come down to basic principles. Hence localisation and a homegrown understanding of context — since social media use reflects the culture of a people — is what can create a great shift in how we balance the dynamics of voting online and of using the online channel as a representation of participatory democracy. Those are my thoughts for now. Thank you.

Vytautė Merkytė: Thank you so much. And I’ll turn to the panelists who are here live with us. Dr. Denisenko, could you please share your thoughts with us?

Dr. Viktor Denisenko: Okay. Of course, every region has its own challenges when we are talking about disinformation, propaganda, and new technologies, including AI, and I will talk more from the perspective of our region. In our region, I would say the main challenge is the geopolitical situation. We live next to Russia, a state that a few years ago began open aggression against another country, and next to Belarus, a close ally of Russia. In Lithuania, and in general in this region — in the Baltic states and in Poland — it is not the first year that we have been talking about information warfare and its challenges. As a young journalist, I covered this topic for the first time 18 years ago, and I was not the first to talk or write about it. So for us it is not a new challenge but part of our reality, and this reality is also changing. Before Crimea, or even before 2022, we talked more about propaganda warfare, a war of narratives, information warfare in terms of psychological warfare. Today we are also talking about hybrid influence, where, together with information warfare, we have elements of physical or kinetic activity, including in our region, in the Baltic states and Poland. So it is a big challenge, a challenge for our security. And in this context there is the question of media literacy — and I understand media literacy in broad terms. It means not only the ability to recognise fakes, disinformation, and propaganda, but in general the ability to use information: to figure out which sources are trustworthy and which are not, and how information warfare activities can affect society, and so on. In this situation, media literacy is a crucial thing. And in Lithuania we have a kind of paradoxical situation, because I would say we are lucky: the authorities recognise that the challenge of information warfare exists and that we are, in fact, in this tough situation, and we have the political will to do something about it. Many non-governmental organisations work in this field, the media support media literacy, and in our media we have a few fact-checking initiatives, which is a very good thing. The main point we have been discussing in Lithuania for the last ten years or more is that we need to put media literacy into the schools. And here, of course, we have a discussion about what it should look like: should it be a separate course for pupils, or could it be integrated into existing lessons and courses? The paradox is that in Lithuania we have political consensus in general. If you ask any politician whether we need a media literacy course in schools, he or she will say: yes, of course, it is quite obvious. But I still do not see the implementation of this political will, because every time we try to talk about practical implementation, more problems appear. First of all, we need teachers; it should be part of an education reform. In Lithuania there is a general lack of teachers, and if you start preparing teachers for this course today, the first teachers will be ready only after four years — a bachelor’s degree — and it will still not be enough. So my point is that even in countries where the understanding of the challenges and the importance of media literacy exists, there are still problems with implementation. I will stop here.

Vytautė Merkytė: Thank you so much. We have touched on elections, on media literacy, and on what can be done and what is actually not being done. Now it is time to turn to these two lovely ladies, who will share some practical information about what they have gathered and what it actually looks like to fight disinformation. Ieva Ivanauskaitė, could you go first?

Ieva Ivanauskaitė: Yes, just let me share my screen. Apparently I have some sort of system restriction, so I know the organizers have my presentation — if you would be so kind as to show it on the screen, that would be really helpful. While they are doing that, I wanted to say that what I am about to present is a smooth transition from the first part of the workshop, Workshop 2A, and from what Viktor has just said, because I am going to be talking about solutions — specifically, the solutions of the organization I represent, Delfi. It is the largest online news organization in the Baltic states and the most-read online news media source in Lithuania. We have been working quite hard on anti-disinformation measures ever since 2017, when our first project related to countering disinformation was born. Since we cannot see the slides, I will try to visualize them for you. On my first slide, you would see our specific initiatives against disinformation. We have three main areas: fact-checking tools, educational content, and collaborations. When it comes to fact-checking tools, we have a fact-checking department — its lead is sitting right next to me — called Melo detektorius, or Lie Detector in English. We belong to many collaborative networks that unify fact-checking organizations globally and on a European level, and thus we are able to make an impact not only in Lithuania but beyond. When it comes to educational content, we have specific content and tools with which we target youth audiences. For example, under Melo detektorius we have a campaign of media literacy videos on TikTok and Instagram, where we explain very briefly, and in a very simple manner, specific disinformation trends on topics of everyday relevance to users. For example, there was one video where we explained what a little sticker of a frog on a banana means. You would be surprised that such a simple thing was a trending disinformation narrative, but we thought it was relevant. We made that video and it got more than 100,000 views on TikTok.

Vytautė Merkytė: Which is a lot for Lithuania. A lot.

Ieva Ivanauskaitė: We only have 2.7 million people living in Lithuania, so I think that is a huge achievement. When it comes to collaborations, we are part of a few networks — one of which, as far as I know, is going to be presented here as well — where we do collaborative work with academia, NGOs, and media organizations, which I represent, and where we brainstorm together to find solutions for our specific markets. In our case, there are two such networks we belong to: one is a Lithuanian initiative called DIGIRES, and the other is part of the EDMO hub structure presented in Workshop 2A — the Baltic hub, called BECID. We sit together and find ways to counter disinformation together. That is what I was about to present on my first slide, but I will fast-forward and conclude, because I feel I am talking too much. The last thing, when it comes to solutions, is technological innovation. We have rolled out a fact-checking bot on the Messenger platform under the Melo detektorius account, and we will also be launching another one in collaboration with other fact-checkers in Europe, to enable AI to learn from disinformation in different languages in real time. So basically, what I wanted to say is that when it comes to best practices, it is a two-way street. Stakeholders — government institutions, media organizations, NGOs — have to take measures themselves, including fact-checking, collaborative networks to find solutions together (we are doing that and we know it works), and creating engaging content relevant to their audience, whatever that audience is. It is relatively clear what we have to do: make the content easy to consume and readable; and government institutions, for example, have to think about who their target audience is and how best to approach them. On the other side of that same street are the users and readers. They can improve their critical thinking; they have to use verified sources, which is a huge problem; and, as Viktor mentioned, there has to be political will and strong measures to change the status quo, which is not so good right now, unfortunately. And they have to be willing to participate in educational programs. If there is no interest from the parties on both sides of the street, we will probably not see the impact, but at least I can say, from a media perspective, that we are trying and doing everything we can. Thank you.
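(Editor’s note: to make the bot idea just described concrete, here is a minimal sketch of the claim-matching step such a Messenger fact-checking bot might perform — an incoming message is compared against already published fact-checks. The example claims, URLs, threshold, and function name are hypothetical; Delfi’s actual bot is certainly more sophisticated and multilingual.)

<syntaxhighlight lang="python">
import difflib

# Hypothetical database: known false claim -> URL of the published fact-check.
FACT_CHECKS = {
    "a frog sticker on a banana means the fruit is genetically modified":
        "https://example.org/fact-check/frog-sticker",
    "schoolboys will be sent to fight in ukraine straight after school":
        "https://example.org/fact-check/schoolboys-conscription",
}

def match_claim(message, cutoff=0.6):
    """Return the closest published fact-check for a user message, if any."""
    candidates = difflib.get_close_matches(
        message.lower(), FACT_CHECKS.keys(), n=1, cutoff=cutoff
    )
    if candidates:
        return "Possibly checked already: " + FACT_CHECKS[candidates[0]]
    return "No match found; forwarding to the fact-checking desk."

print(match_claim("Schoolboys will be sent to Ukraine after school"))
</syntaxhighlight>

(A production system would replace the fuzzy string match with multilingual semantic matching, which is presumably what the planned collaboration on real-time AI learning across languages aims at.)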

Vytautė Merkytė: Thank you so much. I will just add, because I think what was said is very important — I come from Delfi as well. As a media organization, we sometimes invite schoolchildren to see our office and understand how journalists work. Quite recently I had a group of 16-year-olds visiting, and I asked them: how do you get your news, what do you read? Their answer was: we don’t read news media — portals, newspapers, and so on. Okay, so how do you get your information? Their answer was TikTok. And my reaction was: but do you realize there is a lot of disinformation on there? And they said: yeah. Then I asked them to give me examples of disinformation they had seen on TikTok, and they gave a lot of examples. So it is very important that people want to recognize disinformation and want to actually improve their media literacy skills — because those 16-year-olds, for example, were okay with receiving disinformation. I just wanted to add this because it is a very interesting point. And our last panelist here is Ms. Aistė Meidutė.

Aistė Meidutė: Hello, everybody. It seems that today I am wearing many different hats. Some of you already heard me during the first part of this workshop, where I spoke as an editor and fact-checker with Delfi’s fact-checking initiative. Now I am going to talk briefly about another thing we did together with Vytautas Magnus University and different NGOs: a project called DIGIRES. This project was started with the very ambitious goal of strengthening the digital resilience of society — to talk about disinformation and to empower people to fact-check certain things themselves. It was, and still is, a common initiative between academia, media organizations, and independent journalists. What was our approach to fighting disinformation? First of all, we thought about how to build trust in traditional media, because it is of course a huge problem that people no longer trust media organizations; they tend to look for information on social networks. Just last week I was fact-checking a claim by a woman who declared that young Lithuanian schoolboys would be sent to Ukraine to fight in the war right after school. When someone asked where this information came from, she said: I saw it on Telegram. So Telegram is now the leader in passing information to people, not the official media. To change this, at least a little, in Lithuania, we talked a lot with regional media, with different media organizations, NGOs, and other stakeholders about how media works, what fact-checking is, how it works, and why it is important. The picture in big cities is very different from what you see in smaller regions, media-wise as well. People living in smaller places usually do not think about the global perspective — about why we need to talk so much about the disinformation problem. By connecting with regional journalists and people from the regions, we get their perspective and the problems they are facing. The other thing we did was equip community leaders with the knowledge to fact-check content themselves and to look for sources. We taught them to consume content in a different manner, and one of the main techniques we discussed was lateral reading: instead of scrolling down the page, you step out of the article you are reading and search for different clues — the people mentioned, the events mentioned — and fact-check the information that way, looking for more contextual information on what you are reading. And I think one of our most important efforts was meeting with the community leaders who pass their knowledge on to others. We tend to think of journalists as the ones passing on knowledge, but it is also doctors — people entrust them with their most valuable asset, their health, and come to them for advice and answers. It is librarians, who pass their knowledge to people who are not only looking for information but also talking about the reality they face. It is teachers, who pass their knowledge to children. All these different people need to be equipped with the knowledge of how to fact-check information. Some could say: I am a doctor, I am a driver, I do not need those fact-checking tools in my life, I have other problems. But we need to understand that this huge problem of disinformation is not going to solve itself, and it is not only fact-checkers and journalists who have to explain what reality is — we all need to have a sense of what reality is. The only way to achieve that is to have better knowledge of the most common tools, for instance basic fact-checking. My general notion is that being a fact-checker — at least a mediocre one, if not a good one — is easy and quite attainable for many members of our society, and with such a huge information flow it is going to be our reality. The other approach we took was creating a pilot learning and teaching model for university communication students, and I led the workshops on how to recognize the main disinformation narratives and how to use simple digital tools to fact-check information. Of course, those young people had many questions, for instance about how to talk with parents or grandparents who deeply believe in conspiracy theories or are deeply affected by low-quality content and disinformation. Each big goal comes with challenges we have to deal with. One of the biggest obstacles is that fact-checking works — it is proven by university studies — but not enough people see it, because we constantly have to compete with, I don’t know, cute kittens playing all over the internet. It is a really hard task to talk about serious things and to attract people’s attention. That is why fact-checkers need help from the biggest social media platforms; otherwise we are not going to be able to pass on the message we want to share. And when you work in a huge multi-organization project, you get the feeling that different stakeholders have quite different goals, and they do not always match — or even when they match, there is another problem: we risk duplicating our efforts. That is what we see with the huge rise of fact-checking organizations, and of networks of fact-checking organizations: most of the time we tackle the same disinformation narratives without looking for a common direction we could take to reach something bigger, to reach progress. The other thing I have noticed — and this is self-critique, just between us — is that a lot of fact-checking organizations are driven by the approach of fact-checking singular claims. This is how most partnerships with the bigger platforms work: we fact-check separate claims instead of looking at the wider context, instead of talking about influence operations and the actions that bad actors take. It is very important to see the wider picture; one fact-check is not going to solve this huge problem. But I do not want to leave you with such a gloomy message. I deeply believe, as I said before, that each of us can be a fact-checker. We just need curiosity — to do things, to explore the media world. It is very, very powerful. Thank you for listening.

Vytautė Merkytė: Thank you so much. And I can say, I don’t know about you guys, but I’m constantly working as a fact checker for my mother, for my father, for everyone in my family. And I guess everyone can relate to this. So we still have some time for questions. And I hope that either here in this lovely audience or from the lovely people online, maybe someone would like to ask a question and I see an eager audience member. Please go ahead.

Audience: I’ve been hearing a lot about media literacy programs, and everybody knows that in media literacy, education and activities are important. But my question is: how do we do it at scale? Most of us are small, not well-funded civil society organizations, and if we bring in a group and then another group, we maybe help a hundred people, a thousand people. Do you have any ideas on how to do it at scale — reach a large audience, achieve systemic change?

Vytautė Merkytė: I guess maybe Ieva, you could help find the answer here.

Ieva Ivanauskaitė: Yes. Coming back to the point I emphasized during my presentation: first of all, there has to be demand for change in a country. If there is demand, there has to be an initiator — and one organization is enough, I think, as long as that organization is motivated enough to bring all of the stakeholders into one room to discuss the possible next steps. Not the final result, but an outline of a strategy that can later be divided into smaller steps that allow for a big change. This is what we are trying to do; we are still taking baby steps. But when it comes to the networks I mentioned, this is what we did. We had the idea that we wanted to collaborate with academia in the first place, and then, little by little, we realized we also wanted to collaborate with government institutions, with the ministries that should be interested, and with media literacy practitioners and NGOs that have hands-on experience in informal education. We brought them all together — and there is demand for it — and little by little we are discussing how this can move forward. So it comes back to relations, right? Different people and organizations interested in changing the status quo, with different capacities and skills that can complement one another.

Vytautė Merkytė: We have a lovely observation from Professor Tilak, who is online. He says we should look at the algorithms of media companies and at how they are using — or rather misusing — the nature of the human mind, trying to get more likes, more attention, and so on. I will turn it into a question directed at Aistė: is it possible to fight disinformation on social media when social media companies gain so much by making people angry and engaged? Is there a way for us to actually work together with them towards the one goal of eradicating, or at least minimizing, the amount of disinformation?

Aistė Meidutė: I believe that if we include all the platforms, sit at a round table with them, and share the same goal, then yes, it is possible. But we have not managed to achieve that yet. Some platforms — Meta, for instance — are at least doing something. We know that this disinformation problem was largely created by social media, so in a sense they are tidying up after themselves; at least they are trying to do something now. But the involvement is not enough. And then there are the problematic platforms — for instance YouTube, which was sent a letter by the world’s fact-checkers but at first did not really react, and then tried to say: oh, we are doing something in-house. It is not enough to do something in-house. I do not see it, and you probably do not see it either — how they try to reduce disinformation on the platform. We need broad collaboration between fact-checkers and platforms, and we need to look for mutual solutions to tackle this problem, because as experts in this field we probably know what to do. And the initiative should not come only from the platforms saying: we can govern ourselves. We do not trust them anymore, I guess.
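(Editor’s note: a minimal, hypothetical sketch of the engagement-optimisation dynamic raised in the question above. The post titles and probabilities are invented, and no platform’s actual ranking is this simple; the point is only that a feed sorted by predicted engagement will tend to push provocative content above a fact-check.)

<syntaxhighlight lang="python">
# Invented numbers: predicted probability that a user engages with each post.
posts = [
    {"title": "Fact-check: the viral claim is false", "p_engage": 0.02},
    {"title": "Outrageous rumour about politician X", "p_engage": 0.11},
    {"title": "Cute kitten video",                    "p_engage": 0.08},
]

# A feed that maximises expected engagement simply sorts by that prediction,
# so the anger-inducing rumour is shown first and the fact-check last.
ranked = sorted(posts, key=lambda post: post["p_engage"], reverse=True)
for position, post in enumerate(ranked, start=1):
    print(position, post["title"])
</syntaxhighlight>

(Even a small credibility penalty applied to flagged content would reorder such a feed, which is the kind of intervention the fact-checker/platform collaboration discussed here asks for.)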

Vytautė Merkytė: What I have noticed with YouTube is that when you go to a certain video, you can see at the bottom that they say: oh, this video has information about COVID. And that’s it — I have not noticed any additional work from them. We still have time for probably one more question. Is there anyone who would like to ask something? Anyone online? Ah, okay. Yes.

Dr. Tilak Jha: Not a question, rather an observation, I would say. The previous panelists deliberated on the aspect of algorithms and the human mind, but I think we tend to ignore how much logic can achieve — and logic cuts both ways: it can make both sides appear equally logical, which may not be the case. This is the challenge we are facing in elections, in misinformation, and in information related to health and everything else. Just look at COVID: at the time people were dying in queues for oxygen, in the hundreds and thousands all across the world, there were people busy minting money because they were misusing information. So the argument that we can live in a purely logical world runs up against the limits of logic, which end with consumers and markets. That is the fundamental question we need to address. AI is not the problem, but AI is not very natural either. We tend to ignore the fact that what AI does is essentially club together a lot of information and recycle it, and we tend to call that intelligent. That is not what intelligence is. Intelligence is using less information to be able to tell more — deduction. If I knew everything that Facebook and Google and Twitter and all the social media platforms know about you, I would be able to tell much more about a person; with all that information, they are only able to provide some basic inferences. This needs to be understood and taken in context. We have tried to make the world far more logical, including with the application of AI, and that is somehow backfiring. We also need to focus on how to understand information. Information is empowerment — we are providing the empowerment, but we are not providing people with the sense to use it. That is the far more critical question.

Vytautė Merkytė: Thank you so much. I think it is a very interesting point: some people were trying to find information, and some people were spreading disinformation and making money from it. So it comes down to realizing what being human is: do we want to make money and spread disinformation for our own gain, or are we actually trying to fight it? And now I would like to turn to Francesco to wrap the session up.

Reporter: Okay, yes. Thank you very much. I am Francesco Mecchi from YouthDIG, here as a rapporteur. My aim is to wrap up what has been said during the session and to see whether there is broad consensus in the room on the draft messages, which you can later edit, modify, and comment on if you believe I missed something — that process will take place over the next few days, so please do so when the draft is shared on the platform. I will start with the context. We said that this year is really particular, because 4 billion people are going to elections, and this showed, on the one hand, much potential for disinformation to cause problems for democratic institutions, but also a general distrust in democratic institutions as they are. We see a trend towards entertainment, gamification, and polarization in democratic societies. AI is used to foster disinformation practices and propaganda, not just with generative AI but also with algorithms and machine learning, and with tools like translation and micro-targeting campaigns and targeting practices. Social media of course played a major role, especially in Global South countries, where it actually serves as a tool for constant feedback between citizens and government. What are the solutions? Please keep in mind that there has already been a session, so I will skip what was discussed earlier — for example multistakeholderism and the other approaches already presented, especially by Delfi. The proposed solutions: first, widen media literacy broadly, meaning not just education on how to use media but especially feeding a critical spirit, education, and fact-checking, and adding it to the educational curriculum in schools, with specific attention to implementation. Second, try to create a virtuous cycle between stakeholders and users: on the one hand, stakeholders must take action to change how they provide services; on the other, users need to exercise critical thinking of their own and use verified sources. Third, diversify solutions depending on the region: we saw that for Central and Eastern Europe disinformation must be framed in geopolitical terms; for Africa it has to do with decentralization of power; for India it is more related to micro-profiling and similar practices. We cannot think of just one solution for every kind of issue. And finally, empower community leaders with knowledge and tools to detect myths and disinformation and to understand information, because they are important actors at the local level. The general approach should avoid West-centric or East-centric trends — avoid both European-American and Chinese attitudes — and be really inclusive and global. And it must focus specifically on social media, because for the growing youth of the Global South, social media is most of the internet they consume. Is there any specific objection, anything you want to add, any modification, or do you agree with the main messages?

Vytautė Merkytė: Well, I think you did a wonderful job.

Reporter: Great. Okay, thank you very much.

Dr. Tilak Jha: We need to engage more often.

Vytautė Merkytė: True, that’s true. That is what I wanted to say: I want to encourage everyone here to reach out to these lovely panelists if you have any suggestions or ideas, or if you want to collaborate somehow. I believe we are all here with the same purpose: we want less disinformation and more democracy in the world. So let’s collaborate. Thank you so much for being here today.