Best practices of self- and co-regulation of platforms towards a legal framework – WS 12 2021

30 June 2021 | 12:15-13:15 CEST | Studio Belgrade | [[image:Icons_live_20px.png | Video recording | link=https://youtu.be/rnehllszB6w?t=8027s]] | [[image:Icon_transcript_20px.png | Transcript | link=Best practices of self- and co-regulation of platforms towards a legal framework – WS 12 2021#Transcript]]<br />
[[Consolidated_programme_2021#day-2|'''Consolidated programme 2021 overview / Day 2''']]<br /><br />
Working title: <big>'''Self- and co-regulation initiatives by platforms'''</big><br />
Proposals: [[List of proposals for EuroDIG 2021#prop_23|#23]] [[List of proposals for EuroDIG 2021#prop_52|#52]]<br /><br />
== <span class="dateline">Get involved!</span> ==


== Format ==
Structure of the workshop
{| class="wikitable"
|-
|Moderators’ introduction || 3'
|-
| 4 speakers: 7’ each on average
*A case of a multistakeholder model for platform governance that specifically focuses on reducing extremist content on social media platforms (Christchurch Call)
*A case study of a self-regulatory body
*The regulatory approach across Europe
*A case study of a multistakeholder and multidisciplinary approach
|| 28'
|-
| Debate with the audience || 15'
|-
| Moderators’ Final recommendations and Conclusions || 10'
|-
| GIP summary of the meeting || 4'
|-
| Total duration || 60'
|}
 
== Further reading ==
''About the functioning of self-regulation and co-regulation systems:''
*VUB Trisha Meyer's research https://www.disinfo.eu/publications/one-year-onward-platform-responses-to-covid-19-and-us-elections-disinformation-in-review
*GDHRN https://leibniz-hbi.de/uploads/media/default/cms/media/fi1c9mo_GDHRNet_Working%20Paper1.pdf
*Reuters: [https://app.assets.reuters.com/e/er?utm_campaign=&utm_medium=email&utm_source=Eloqua&utm_content=B2B%20210623%20NEWS%20GLOB%20DIGITAL%20NEWS%20REPORT%20AWARE%20DNLD%20-%20TRIGGERED&s=2124157686&lid=13143&elqTrackId=478AA2BDF992434C29E06B60D68494E9&elq=300a087e2e334a3396b3e4f486d11f57&elqaid=1145&elqat=1rt%202021 Digital News Report 2021]
 
''About comment moderation:''
*Reuters Institute for the Study of Journalism – Federica Cherubini: https://coralproject.net/blog/killing-the-comments%E2%80%8A-%E2%80%8Awhats-next/; https://reutersinstitute.politics.ox.ac.uk/people/federica-cherubini
*https://coralproject.net/about-2/; https://www.mobiloud.com/blog/andrew-losowsky-dont-disable-comments (Andrew Losowsky, open-source moderation software)
*https://benwhitelaw.co.uk/index.php/updates/
*https://www.getrevue.co/profile/everythinginmoderation (Ben Whitelaw’s blog on content moderation)
*https://www.renaissancenumerique.org/publications/moderating-our-dis-content-renewing-the-regulatory-approach (Claire Pershan) also available in French: https://www.renaissancenumerique.org/system/attach_files/files/000/000/280/original/RenaissanceNumerique_Note_ModerationContenus.pdf?1613557339


''About self-regulation national organizations (TV, printed media, internet – the German case):''
*The description of the FSM, the self-regulatory body for multimedia service providers, is available in English at https://www.fsm.de/en/about-us
*An article in English about the Japanese and German concepts: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3119247
*Another one, available only in German: https://www.hans-bredow-institut.de/uploads/media/Publikationen/cms/media/a80e5e6dbc2427639ca0f437fe76d3c4c95634ac.pdf


''About platforms self-regulation model:''
*https://uk.news.yahoo.com/tiktok-creates-european-safety-advisory-000100232.html (on Tik Tok Safety advisory council)
*https://onezero.medium.com/twitter-cribs-an-idea-from-wikipedia-9e0d98b90334 (on Wikipedia model)
*https://www.bbc.com/news/technology-57122216 (about YouTube’s changing approach: from a neutral to a proactive policy on COVID vaccines)
*https://www.newsweek.com/facebook-shouldnt-deciding-who-ban-its-users-should-choose-opinion-1599887
*https://www.wired.com/story/facebook-oversight-board-kind-of-working-trump-ban/
*https://www.washingtonpost.com/technology/2021/06/03/trump-facebook-oversight-board/


*Harvard Business Review: '''Social Media Companies Should Self-Regulate Now''' https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
''About regulatory approach across Europe:''
*ERGA https://erga-online.eu/wp-content/uploads/2021/03/ERGA-SG2-Report-2020-Notions-of-disinformation-and-related-concepts-final.pdf (ERGA published its report on the EU Code of Practice on Disinformation in March 2021)
*OFCOM https://www.ofcom.org.uk/__data/assets/pdf_file/0020/212861/tools-for-online-regulation.pdf (Collin Kurre)
*European Audiovisual Observatory (EAO), 2019: Self- and Co-Regulation in the New AVMSD https://rm.coe.int/iris-special-2019-2-self-and-co-regulation-in-the-new-avmsd/1680992dc2


''Useful websites:''
*https://edmo.eu/
*https://gdhrnet.eu/
*https://www.ofcom.org.uk/
*https://gfmd.info/
*https://wan-ifra.org/
*https://oversightboard.com/
*https://www.christchurchcall.com/


== People ==
'''Please provide name and institution for all people you list here.'''


'''Focal Points'''




Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Subject Matter Expert (SME) and the EuroDIG Secretariat and are kindly requested to follow [https://www.eurodig.org/get-involved/planning-process/#tab-organising-a-session EuroDIG’s session principles]
*Giacomo Mazzone, Eurovisioni
*Giovanni De Gregorio


'''Subject Matter Expert (SME)'''
*Yrjö Länsipuro


'''Organising Team (Org Team)''' ''List Org Team members here as they sign up.''


The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.


*Aleksandra Ivanković, Internet Society & YCIG
*Giacomo Mazzone, Eurovisioni
*Juuso Järviniemi, Student at College of Europe, Federal Committee member of the Young European Federalists (JEF-Europe)
*Giovanni De Gregorio
*Vittorio Bertola, Open-Xchange
*Mira Milosevic
*Farzaneh Badii
*Claire Pershan, EU DisinfoLab
*Jutta Croll
*Lewis McQuarrie / Collin Kurre
*Paul Ash
*David Reid


'''Key Participants'''
*Paul Ash, coordinator of the Christchurch Call, New Zealand Government
:''REPLACING Giovanni De Gregorio, Centre for Socio-Legal Studies, University of Oxford, GDHRNet – global network of human rights research on self- and co-regulation''
:Paul Ash is the New Zealand Prime Minister’s Special Representative on Cyber and Digital.  He works closely with the technology sector and cyber and digital agencies in New Zealand and globally.  Paul supported PM Ardern and the French side leading the development of the Christchurch Call to eliminate terrorist and violent extremist content online, and continues to lead on this work within the New Zealand Government.  He has previously served as the Director for National Security Policy and the National Cyber Policy Office, as Deputy Head of Mission in Brussels, and on diplomatic postings and secondments in the Solomon Islands, Beijing, and Taipei.
*Cherine Chalaby, member of the Board of Trustees at the Facebook Oversight Board
*Lewis McQuarrie, International Policy Manager, Ofcom UK
:Lewis McQuarrie is a policy manager in the international team at Ofcom, the UK’s communications regulator. He works on media and online policy issues, with a particular focus on the new rules for video-sharing platforms which came into effect in the UK last year. He acted as lead drafter and project manager for Ofcom’s report monitoring the voluntary Code of Practice on Disinformation, published in February. He represents Ofcom at EPRA, the European Platform of Regulatory Authorities, leading its work on plurality in online media.
*Paula Gori, Secretary-General, EDMO


Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder balanced dialogue also considering gender and geographical balance.
Please provide short CVs of the Key Participants involved in your session at the Wiki or link to another source.


'''Moderators'''
 
*Mira Milosevic, GFMD
::Mira Milosevic is the Executive Director of the Brussels-based Global Forum for Media Development (GFMD). She leads GFMD’s engagement with the United Nations, the Internet Governance Forum, and other multilateral institutions, as well as GFMD’s international efforts advocating for the sustainability of journalism and news media. Mira frequently writes and speaks about the intersection of media, economy, technology, and human rights. Before joining GFMD, she authored the World Press Trends reports, the most authoritative global source of data and analysis on the international newspaper industry, managed Media Development Programmes at WAN-IFRA, served as Chief Platform Officer at Indie Voices, and as Director of the Belgrade-based Media Center. Mira holds a BA in Economics and an MA in Communication; she started her career as a journalist.
*Elena Perotti, WAN-IFRA
::Elena Perotti is Executive Director of Media Policy and Public Affairs at WAN-IFRA, the World Association of News Publishers. She is responsible for identifying and studying major public affairs issues within the news industry, and is leader and/or author of all research output of the department. Elena is also in charge of the News Literacy initiatives, and of interaction and liaison with WAN-IFRA’s governing boards, as well as with national and regional member associations, and international bodies. 


'''Remote Moderator'''


'''Reporter'''
 
*Ilona Stadnik, [https://www.giplatform.org/ Geneva Internet Platform]
Reporters will be assigned by the EuroDIG secretariat in cooperation with the [https://www.giplatform.org/ Geneva Internet Platform]. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:
*are summarised on a slide and presented to the audience at the end of each session


== Current discussion, conference calls, schedules and minutes ==
See the [[{{TALKPAGENAME}} | discussion]] tab on the upper left side of this page.
*[[Talk:WS_12_2021#NOTES_FROM_THE_FIRST_MEETING_OF_THE_TEAM_PREPARING_WORKSHOP_12_At_Eurodig_2021|NOTES FROM THE FIRST MEETING OF THE TEAM PREPARING WORKSHOP 12 At Eurodig 2021]]
*[[Talk:WS_12_2021#NOTES_FROM_THE_2ND_MEETING_TEAM_PREPARING_WORKSHOP_12_At_Eurodig_2021|NOTES FROM THE 2ND MEETING OF THE TEAM PREPARING WORKSHOP 12 At Eurodig 2021]]
*[[Talk:WS_12_2021#NOTES_FROM_THE_3rd_MEETING_TEAM_PREPARING_WORKSHOP_12_At_Eurodig_2021|NOTES FROM THE 3RD MEETING OF THE TEAM PREPARING WORKSHOP 12 At Eurodig 2021]]


== Messages ==
*Liberal approaches of governments towards online platforms at the start of the platform economy led to the rise of platform power to influence the public sphere. Though we have soft law arrangements like voluntary codes of conduct to regulate harmful content, they are not sufficient to address serious problems like extremist content and disinformation while ensuring the right to free speech.
*Self-regulation, co-regulation, and multistakeholder/multidisciplinary governance models are challenged by the need to reconcile the different accountability and power structures that exist within them. In addition, they need both internal and external legitimacy.
*Externally, a governance model must be recognised for the quality and timeliness of its decisions; internally, it has to have robust checks and balances.
*There should be a global collaborative effort in the form of dialogic regulation between governments, tech companies, and civil society to develop a solution grounded in human rights that will address disinformation and harmful content.
 
Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/best-practices-self-and-co-regulation-platforms-towards-legal-framework.


== Video record ==
https://youtu.be/rnehllszB6w?t=8027s


== Transcript ==
Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-482-9835, www.captionfirst.com
 
 
 
This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.
 
 
 
>> STUDIO: Welcome back, we’ll continue with the second workshop of today, which is Best Practices of Co-regulation and Self-regulation of Platforms Towards a Legal Framework. My name is Jelena Cosic, and I’ll be the studio host for the session. Before I turn over to the moderators of the session, I will briefly go over the session rules. First, make sure to display your full name when you join the studio. If you have a question during the session, use the Zoom function to raise your hand; you will be unmuted when you are given the floor. And just remember that at the beginning, you should state your name and affiliation. You can also choose to switch on the camera so that we can see you, but this is optional and we leave that up to you. You can also use the chat to post your questions during the session. I will also be the chat moderator for the session, and I will make sure to pass questions from the chat to the panelists from time to time. Don’t be afraid to use the chat. Keep in mind, it will not be stored or published.
 
Lastly, we ask you not to share the Zoom links with anybody, let them register or follow us on the YouTube Live stream. I will now hand over to two moderators for today’s session. Mira and Elena, the floor is yours. I will stay here and see you at the end of the session.
 
>> Elena Perotti: We agreed with Mira that I would start, so that is what I will be doing. So hello everybody. I am Elena Perotti. I work in the media policy department at the World Association of News Publishers. I will moderate this session with Mira, who is the Executive Director of the Global Forum for Media Development.
 
What we will do today – first, a little bit of housekeeping. My Internet is horrible. I have not one but two construction sites close to me. So I might be popping on and off in that case. I know Mira is ready to take my place if I cannot manage to be back online.
 
So as I was saying, today we’re investigating the European Union’s soft and hard law approaches to the challenges posed by the state-like powers that platforms exercise in the enforcement of users’ fundamental rights, and by the spread of hate speech online. I will post the link to the EuroDIG wiki in the chat right now, in case anybody doesn’t have it. Here you go.
 
So here is how we go about this. You will have seen the agenda: we have first four case studies. The speaker line-up comprises Paul Ash, the coordinator of the Christchurch Call, New Zealand Government, who will speak on the Christchurch Call. We will have Cherine Chalaby, former Chairman of ICANN and member of the Board of Trustees at the Facebook Oversight Board. Lewis McQuarrie, a policy manager in the international team at OFCOM, the UK’s communications regulator, will present a case study on the regulatory approach. And we will close with Paula Gori, the Secretary-General of EDMO. We will have some time for questions from the audience, additional insights from key participants, and conclusions and messages.
 
Public actors in the European Union and the US have generally not regulated the Internet, and have allowed platforms, as I was saying, to acquire state-like powers. In the beginning the Internet was independent, and perceived as such; it was thought it could be dealt with on the basis of the free market. But what has happened, with cases like the right to be forgotten, is that platforms are now in the position to affect the exercise of fundamental rights, particularly privacy, freedom of expression, and so on. The European Union has responded with codes of conduct on free speech and measures to tackle illegal content online. This solution is self-regulation: platforms create mechanisms to give structure to the state-like functions which they are now performing, and possibly to tackle the problems of hate speech and disinformation that we have seen so broadly during the last year of the pandemic.
 
In this workshop, we will analyze different responses to this challenge, regulatory and non-regulatory ones. That is why we have the Facebook Oversight Board, which represents a platform starting to incorporate voluntary mechanisms.
 
At the same time, there are initiatives developed with the intent to respond to the challenges that we have mentioned before, such as the Christchurch Call and the European Digital Media Observatory in the face of disinformation. And there is hard regulation; that is why we have OFCOM here to testify to that.
 
Without further delay let’s get into it. I will leave the floor to my co-moderator to introduce herself, Mira Milosevic.
 
>> Mira Milosevic: This is fantastic for our conversation and will give our speakers enough time to cover everything that we planned. We will start immediately with a story about an international multistakeholder model for platform governance that specifically focuses on online harms. The Christchurch Call is a commitment initiated by a group of Governments and tech companies to eliminate terrorist and violent extremist content online. It is an example of a multistakeholder approach where a genuine effort is made to consult and engage Academia and Civil Society in the process, not only Governments and private companies. We have with us Paul Ash, as you said, Elena, the New Zealand Prime Minister’s Special Representative on Cyber and Digital. He was involved in developing the Christchurch Call, supported the Prime Minister then, and continues to lead on the work within the New Zealand Government.
 
Paul, it is a pleasure to have you with us today. Can you tell us about the call and your unique approach? And what have you learned throughout the process over the last two years? How relevant and binding is the call today for Governments and companies?
 
>> Paul Ash: Greetings from New Zealand. It is the middle of the night here, so I thought I would put up a nice background from our countryside for those that can’t get here because of travel restrictions. It is lovely to see you again, Mira; you were around at the very beginning of the Call in mid-2019. Great to be able to join people today.
 
I guess the fact that we’re talking about regulating the Internet at all is antithetical to most of us in this discussion. If we think about it, we’re a long way from the utopia of the 1990s, when some of us with more gray hair first connected to the Internet and discovered a realm that was unanchored from the rest of the world. As we discovered, the couple of thousand years of political philosophy were right, and the things we grapple with offline manifested themselves online.
 
In a sense, those experiences have led to quite a sense of political drive in this sphere to ensure that life online better reflects the range of values and objectives that we would expect offline.
 
There are good reasons underlying some of it, and pitfalls in democracies like ours as we try to do that. From our perspective, the conversation we’re having tonight is fundamentally about the future of a global and still mostly borderless infrastructure – and hopefully it will stay that way – that is designed to connect people everywhere. Applying a lens of regulatory or economic advantage is not an enduring approach to solving some of those issues. I think we heard from Elena earlier about the economic models that led to the rise of platform power and the shape of the Internet we have today.
 
We probably need a different approach. If the Internet is for all, and is to remain a place where no one stakeholder has complete legitimacy or mandate to make changes for themselves, we need to think about more collaborative models to deliver regulation, if you want to use that phrase, or a structured approach to operating online.
 
That is why from our perspective, multistakeholder approaches are important and why we reached for one of those after the events of 15 March 2019.
 
So I guess it is good to talk about some of the challenges before we kick into it. One of the most complicated aspects of co-regulation and multistakeholder models is how to both surface and reconcile the different accountabilities and power structures that exist within them.
 
For instance, security typically has been and still remains an exclusive competence of states. For those of you in the EU, that is one of those huge debates in the European Union context around subsidiarity: what sits in Brussels and what sits with states. States like yours are democratically responsible to their citizens; that is core to the social contract. That accountability was thrown into stark relief by the events of 15 March 2019 in Christchurch. A terrorist steeped in online conspiracy theories, radicalized and trained in tradecraft online, working with Islamophobic and white supremacist narratives, built an atrocity designed for the Internet and, with horror, broadcast the murder of 51 people across the world, exploiting major consumer and social media platforms in a way we hadn’t seen before. It initiated the need to think about solving for this problem in new ways. The live stream video was repackaged, manipulated, turned into memes and video games. It has had a persistent online presence that has inspired a range of other Islamophobic attacks.
 
At the time, the platforms this was occurring on, the mainstream ones, had a range of responses, but not one of the responses was equal to the task of preventing affected families, consumers, and viewers all over the world from being exposed to the horrifying content. It also spilled over into the cross-over area of traditional and online media, and raised interesting questions of where some of the gaps lie and where some of the overlaps are. That is something we had to think hard about as we watched some parts of the more traditional media using the attack video as clickbait to advertise against, as some mainstream newspapers did, while others alongside them engaged in responsible reporting and worked with the video and what we were grappling with. There are shades of judgment that require us to solve for that in a different way, and a need for a multistakeholder approach to ensure we actually get the right balance of views in that discussion.
 
The temptation for many Governments following such (?) is something to think about: to punish platforms for failing to stem the tide of terrorist content. If you hold a legislative hammer, it is easy to treat every problem as a fresh nail. The problem is manifest on platforms, and they have a big role to play in solving it, but the problem of terrorist and extremist content online is in need of a holistic approach that looks at its origins, its manifestation online, and its effects offline. The Christchurch Call was an acknowledgment that the approach couldn’t just be a symbolic gesture seeking to tame the Internet. That would be folly.
 
But actually one that materially changed the way online platforms, Governments, and Civil Society worked together on the issue. That first meant working with the platforms and actually engaging deeply with them on the challenges we faced. It involved working with other Governments, and certainly, it is great to speak in a European context: European partners and allies were very quick to reach out and provide support, none more so than France as we sought to take this forward. And importantly, it meant working with Civil Society, and finding a way to shape an agreement that was timely and moved at pace, but that provided space for Civil Society to begin to assume a full role. For us it meant sticking to important principles, and I think they are important in this discussion more widely: staying steeped in human rights and fundamental freedoms, including not just the freedom of speech but also the rights of victims in this instance, and protecting a free, open, and secure Internet. Trying to do those two things at once holds a number of different things in tension, but it is important to do that. It also meant committing to quite difficult technical, policy, social, and regulatory solutions to see what might be effective, knowing and acknowledging at the beginning that those discussions might not be easy but needed to be worked through.
 
All of that is more time-consuming and more difficult perhaps than the linear hammer and nail solution. It is fair to say it took a while to figure out how to make that process inclusive and effective.
 
>> Mira Milosevic: Thank you, Paul, I hate to jump in here. We can go into details if we have more time at the end for the audience. It is a really interesting model, and also how it relates to other models; we can discuss that later. Elena, over to you.
 
>> Elena Perotti: I do agree. Thank you, Paul, for this excellent intervention. Our next topic is the Facebook Oversight Board and how it works as a self-regulatory model. Our key participant is Cherine Chalaby, member of the Board of Trustees at the Facebook Oversight Board. Cherine has held roles in Internet governance and has considerable board experience; in 2019 he retired as Chairman of ICANN after serving the maximum nine years.
 
His speech will cover the functioning of the Facebook Oversight Board, how it preserves its independence, and how such a self-regulatory model could integrate with a legal framework in the future. I will link the Oversight Board website in the chat. Cherine, over to you.
 
>> Cherine Chalaby: Thank you, Elena, and thank you, Paul, for the ideas, with which I wholeheartedly agree. It is my pleasure to be part of this distinguished panel.
 
So since its creation in 2020, the Oversight Board has had the final say on some of the most consequential pieces of content posted on Facebook and Instagram. It has held the Facebook company accountable to the promises it made and reminded three billion users that their voices matter.
 
It is, of course, early days, and naturally there exists a high level of public scrutiny. Some critics, for example, have implied that the Oversight Board is not truly independent of its creator. I don’t believe that to be the case. From my vantage point as a trustee responsible for the governance of the Oversight Board, I can see how independence is firmly rooted in everything we do. Elena has outlined what I am going to cover, but before doing so I wanted to begin by framing the context in which Facebook created the Oversight Board.
 
So there are three aspects to consider. Firstly, the rise in cyber sovereignty. Policymakers and regulators are increasingly looking for ways to address their deep concerns about the impact of social media platforms on the safety and health of billions of users around the world, whilst at the same time they want to protect the privacy and freedom of expression of those same users.
 
An immensely complex task. Unfortunately, there is no quick or easy fix, and as Paul mentioned, there is no experience of what it means to regulate a global virtual public sphere.
 
Secondly, the relentless assault on Facebook. The concerns I mentioned above focus on Facebook as it grew into a powerful and integral part of the social fabric of most countries. Almost every crisis and headline to date plays out in some way across Facebook services. The more those services become ubiquitous, the more Facebook finds itself at the center of extensive criticism on everything, from the spread of misinformation to concern about the company’s power and approach to competition. And thirdly, and this is very important context: within this political and social context, Facebook needed to renew and strengthen its legitimacy with its users. To Mark Zuckerberg’s and Facebook’s credit, they recognized that decisions that have enormous consequences for our society, for human rights, and for freedom of expression should not be made by social media companies acting alone.
 
Furthermore, these companies should not be the final arbiter of what can and cannot be said on their platforms. Users should have a voice. And their cases should be heard by an independent appeal body.
 
This is how the Oversight Board began: a bold move to create such an independent appeal body and a decisive step towards self-regulation. So let’s now have a look inside the Oversight Board to understand how it works. To succeed, it was decided from the outset that the Oversight Board cannot just be credible from the outside; it needs to be solid on the inside. In other words, externally it has to be recognized for the quality and timeliness of its decisions, and internally it has to have robust checks and balances. With that principle in mind, the governance model was uniquely designed to consist of three interlocking elements: the board, the trust, and the administration. Each plays a distinct role as follows.
 
The board’s first 20 members were appointed last year. This is a diverse group of well-credentialed thinkers and leaders who make principled, independent decisions regarding content on Facebook and Instagram, and in so doing, they uphold or reverse Facebook’s own decisions. Users can appeal directly to the board, and the board has already received more than 400,000 appeals since January. Facebook can also refer cases, such as the case it sent to the board in January on the question of whether the former President of the United States could be indefinitely suspended.
 
It is important to note that in its deliberations, the board takes public comment into account. For example, there were over 9,000 submissions related to the case of the former President of the United States. Equally important, the board’s decisions are binding, and they’re made transparently so everyone can understand how the board reached its decisions.
 
And the board is working to shift Facebook away from making arbitrary decisions, or decisions that might be informed by the company’s economic or political interest, towards decisions that promote freedom of expression, that treat all users fairly, and that are consistent with the company’s own standards and values.
 
And if you look at the board’s decisions so far, you can see how this is an institution not afraid of calling out Facebook when it fails to meet its responsibilities. The board also makes recommendations on content policies. These are not binding, but they are no less important than case decisions. The board has been given the ability to shine a light on systemic problems that it identifies within the policies and to give precise guidance on how to resolve them. Facebook is required to respond ... publicly within 30 days – excuse me, I suffer from hay fever. So far, Facebook has accepted the vast majority of the recommendations. This is a positive sign.
 
You may ask yourself the question: how can this board be truly independent? After all, it is Facebook that came up with the idea, and it is Facebook that funded it.
 
The answer lies in the trust, the second interlocking element of this model, which is where I sit. The trust is basically a shield between Facebook and the board. It is responsible for governing the board and for protecting its independence in three ways. It protects the board’s independent judgment and the integrity of the decision-making process by keeping Facebook at arm’s length from board members. It protects the board’s operational independence by ensuring that board members adhere to the stated purpose, uphold the code of conduct, and act at all times in a manner that reflects the independence of the board. Finally, it protects the board’s financial independence by safeguarding the financial assets in the trust and approving and monitoring the annual budget.
 
The third interlocking element is the administration: full-time staff independent from Facebook who are totally dedicated to assisting board members with research, case selection, communication, and decisions. In closing, it would be remiss not to stress that the Oversight Board was not designed to solve all the problems of Facebook alone, nor to supplant the role of policymakers and regulators. The Oversight Board is nevertheless an important, innovative model of self-regulation, untried before on such a large scale, where – I’m going to be slow here – one of the largest for-profit corporations in the world has created an independent not-for-profit institution to make binding decisions by which the for-profit corporation must abide. This is a unique model. It was designed to avoid both the commercial interest of the for-profit organization and the potential abuse of state-based regulation.
 
Institutions such as the Oversight Board are, in my view, necessary. We do not want for-profit corporations regulating the global public sphere in their own economic interests.
 
Nor do we want national or regional political interests Balkanizing the same sphere. Instead, we want disinterested regulation of our virtual speech. That means regulation which is impartial and unbiased. In this regard, the Oversight Board is truly a disinterested institution, whose disinterest is guaranteed by the trust.
 
>> Elena Perotti: Sorry to interrupt. One minute warning.
 
>> Cherine Chalaby: I’ll continue. One minute. It aligns with multistakeholderism and state-based regulation, which we are here to talk about. I’m sure you would agree that a one-size-fits-all solution does not exist, and you would also agree that no single Government, institution, or actor has all the answers. I therefore, like Paul, feel the imperative of a global collaborative effort between Governments and tech companies to agree on solutions that are clearly grounded in human rights principles. And we critically need to manage the complex challenges of our borderless digital future. Thank you. Did I get within the one minute?
 
>> Thank you.
 
>> Mira Milosevic: Thank you. This is an interesting insight into how the board works. We have questions for you; I’m sure we will have many more, and hopefully we’ll get time to respond to some of them.
 
Our next speaker will present the approach of a national regulator, OFCOM, and its engagement with different regulatory models. Lewis McQuarrie is a policy manager at OFCOM, the UK regulator. He works on media and online policy issues, with a focus on the new rules for video-sharing platforms which came into effect in the UK last year. Lewis, thank you for being here with us today. Could you tell us how an independent regulatory body such as OFCOM interacts with different regulatory models, including the experience of the EU Code of Practice on Disinformation and its duties under the forthcoming UK online safety bill? Over to you, Lewis.
 
>> Lewis McQuarrie: I apologize for the ghoulish appearance; I feel like I’m in a broom closet. I’m speaking on the report that OFCOM published on the monitoring of the Code of Practice on Disinformation during the UK general election in 2019. The code was signed by Facebook, Twitter, YouTube, and others, and aimed at tackling disinformation on their services. It sought to empower users and the research community, to enable scrutiny of ad placement, and to make political advertising more transparent.
 
We joined the monitoring around that general election and matched the scope of the other regulators involved in this exercise. A bit about the code: it is voluntary, a hybrid between self-regulation and co-regulation. It was set up by the Commission and monitored by several stakeholders throughout its first 12 months, including regulators, the Commission, Civil Society, Academia, and others that participated in the 12-month review and submitted evidence on the implementation of the code and its efficacy.
 
It was multistakeholder and multidisciplinary from its genesis. The objectives set by the Commission were informed by the advice of a high-level Expert Group established in 2018 as well.
 
In parallel to the development of the code, in fact, there was a sounding board created, composed of members of the media sector, Civil Society, and Academia, which advised and provided critical feedback to the drafters throughout the drafting process and published a final opinion around the time the code was signed in 2018. I want to address the elephant in the room, which is: why is OFCOM talking about an EU code? I’m not here to speak on behalf of the code or necessarily on its merits; luckily, Paula can correct me where I go wrong immediately after. We were members before the UK left the EU, and we submitted our findings before we left. We took a decision to develop our findings after the fact, to try to take stock of the lessons we learned from conducting this exercise that were transferable to other codes and self-regulatory tools. The ERGA reports and others are more comprehensive in terms of reviewing its efficacy. So what are the lessons? I’m trying my best to be brief.
 
I would say it underscored the need for online regulation, including self-regulation and co-regulation, to take a test-and-evolve approach. The process has to be built into the model. This is particularly true online, because of the risk of unintended consequences that come at the cost of freedom of expression, which some of the previous speakers spoke about, and because of the dynamic nature of the market and how user behaviors evolve online at such speed.
 
Self- and co-regulation approaches can be more dynamic. Platforms themselves need to lead this approach and have robust systems to monitor the effectiveness and outcomes of their actions and calibrate the response. There needs to be a reflexivity to the system, where people take stock of the intention, the measures taken, the outcomes, and how they measure up to the objectives.
 
We think the multistakeholder model for self-regulation and co-regulation tools works well: involving public bodies and experts from Civil Society and Academia can add to the accountability and legitimacy of the platforms. Multistakeholder involvement is relevant at the point of implementation of the test-and-evolve approach too, to provide critical feedback. It is not always true that platforms are the best placed to know the effect that their actions are having on users around the world.
 
A multistakeholder approach can be more equitable and inclusive. The involvement of the membership was able to help draw attention to shortcomings in the implementation in smaller EU markets. For these advantages to hold true, we think self-regulation and co-regulation tools need to be transparent in how they operate, in terms of the access that the organizations involved in scrutiny have to implementation data, as well as in terms of the transparency of the processes adopted by platforms, for instance, publishing better records of amendments to platform policies that happen rapidly.
 
We think monitoring can be strengthened by supplementing the analysis of platform data with other evidence, such as consumer research into how users engage with the measures. In the case of this code, and in the case of political transparency and political advertising, a good example is that it is useful to know how users interact with the information provided to them: who has paid for the ad, and why it was targeted at them. It is about trying to understand how these measures actually pierce through into the user journey and affect user and citizen decisions.
 
I want to end by commenting briefly on the next stage for this code. It is moving into a new stage: the Commission has just issued new guidance on how the code should be improved. This incorporates feedback from several different organizations involved in its monitoring, and there’s quite a long list of improvements it recommends. To pick out a few: it hits home the point about improved KPIs, which is about transparency and measuring the efficacy of the code; greater access to data for researchers, and thankfully EDMO will speak to that; and a task force to oversee the code’s development. This is to help ensure it is accountable, agile, and up-to-date.
 
I understand the intention is to then give the code some degree of statutory backing as a code of conduct under article 35 of the DSA. I will try and pick up your questions, Mira, in the Q&A, if that is all right. I will hand it back to you.
 
>> Mira Milosevic: Thank you, there are really relevant points here that also connect to what Cherine and Paul were saying. Elena, to you.
 
>> Elena Perotti: Next we’ll hear about an example of a multistakeholder and multidisciplinary approach. Paula Gori is Secretary-General of EDMO, the European Digital Media Observatory. She joined the School of Transnational Governance in 2017, where she is a member of the management team; prior to that she was the coordinator of the Florence School of Regulation’s Communications and Media area. She is a mediator and has a background in international law, in Italy, Florence, and France. With EDMO she focuses on the multidisciplinary and multistakeholder approach when dealing with disinformation. We remind you that the observatory is one of the elements in the Action Plan against Disinformation published in 2018. There was a first phase, which was focused on the development of a core service infrastructure and the governance role of the observatory. Now the project is in its second phase, which funds the creation of national and multinational research hubs across Europe. Over to you, Paula.
 
>> Paula Gori: Thank you. I will be the only one sharing slides, so I have this pleasure with you. I hope you can see the presentation. Thank you very much for inviting us to this important forum.
 
Let me start by giving you a little background, to set the scene and remember what we’re talking about: very often there is confusion between disinformation and misinformation, because in the U.S. they tend to use the words differently.
 
Disinformation includes all forms of false, inaccurate, and misleading information designed to intentionally cause public harm, while misinformation is when you share false information, for example with your friends, without being aware that it is harmful.
 
I’m sorry to start with the definition. It is something that is helpful to set the scene.
 
When dealing with online disinformation, and I think this came through from the previous presentations, the big challenge is that on one side we want to protect the right to make informed decisions, and on the other side we want to protect fundamental rights. Parallel to that, we want to avoid citizens losing trust in the media and in the solutions. All of that happens in an environment that is fast-evolving, where the technology, tools, actors, and strategies develop fast. So the challenge is quite high.
 
I had this slide, but probably we’re all aware of this. This is a roadmap of the policies to tackle online disinformation. As you can see, there is reference to pieces already mentioned, among which, of course, the Code of Practice on Disinformation.
 
The code, going back to the challenge I was mentioning before, was seen as a normal, natural, and right first step to tackle this issue. As mentioned by Lewis, it is in the self- and co-regulatory toolbox, which sees the involvement of the platforms themselves. And this reflects what was said by the other speakers: to make sure this is something that starts from the platforms and then moves on with a multidisciplinary approach. I think we are all aware of the guidance for the new code, which was already mentioned; I won’t go into what was missing in the first version and is coming up in the new one. I want to recall, as was mentioned, that the process for the DSA, the Digital Services Act, was also triggered recently, and it brings elements into the discussion on online disinformation.
 
Elena mentioned it, so I don’t have to say why EDMO was born.
 
Legally, let’s say. But let me mention what EDMO is. It is an independent platform, a digital service infrastructure, and a community builder. The aim is to be a body of facts, evidence, and tools, and to have a multistakeholder and multidisciplinary approach. It brings together the various stakeholders in the field and makes sure the various disciplines are involved. Why am I saying that? The partners are led by EUI, with ATC and a fact-checker in Italy.
 
Why is it so important? There are lots of questions coming up; these are a few. For example: what is the motivation behind a piece of disinformation? What made it viral? How does the human brain react when we see a disinformation piece, not knowing whether it is disinformation or not? Which is the target audience of the message, and which is the target audience of the actor?
 
You might think: what is the difference? For example, the actor is the one that decides that the disinformation campaign should go out, and the target of the actor might actually be another state, like state against state.
 
And the target audience is probably the population, or part of the population, of the second state. Then there is the role played by the business model of the platforms. Which tools are used to build societal resilience, and how do we assess them? There are lots of different questions that come up, and it is impossible to tackle the issue in silos. We have to work together; you need the multistakeholder approach and the multidisciplinary one. Thanks to neuroscience we know that emotions such as fear and anger are a trigger to share online disinformation. It is not by chance that during COVID-19, for example, we were spending more time at home and on social media, but we were also scared. So this played a big role in spreading online disinformation. [Audio skipping].
 
How is EDMO actually serving as an evidence-based platform? The first thing, which you can already find on the EDMO website: we set up a secure collaborative platform for fact-checking organizations, where they can do fact-checking. A similar platform will be open soon for researchers.
 
We have a governance body that [audio skipping] with an executive board and an advisory board, and it is there to ensure public trust towards EDMO and the authorities. There is one element already mentioned, and that is the mission of building a framework to provide secure access to data of online platforms for research purposes.
 
To go back to what I was saying before, we have lots of questions, and often those data are actually very important to answer some of them. Of course, this should be done in full compliance with the GDPR and data protection in general; this means potential codes of conduct based on article 40 of the GDPR. We also mapped fact-checking work in Europe and will have a repository with fact-checked items, and media literacy. There was a session on that right before ours, so I won’t go too much into the details. To let you know, the challenge there probably comes through from the discussion we had before: media literacy, even if only related to online disinformation, is a very wide field, where you have different actors, different targets, different strategies. So what we are doing now is trying to map, to issue a report that maps, the various media literacy initiatives and tries to identify assessment criteria, and then open this up to the regulatory discussion.
 
A similar exercise is under way with academic research in Europe. So far we are considering articles in English, but we’ll soon add other languages; I’ll come back to that soon. We are also providing academic and methodological input for the monitoring of the policies put in place. You all know that the new guidance for the new code foresees an involvement of EDMO in this; the task force was already mentioned. It was also mentioned that there needs to be an assessment of the application of the code, among which structural and service-level indicators. EDMO will provide support on the structural indicators; for those that are less familiar, that is the impact of the code on the overall information system.
 
How are we doing with that? As I was mentioning, we’re providing the tools already mentioned, but we are also providing other types of tools, for example trainings. We had a training on the ABC of fact-checking, and we will have a total of 20 trainings, all for free. The trainings will follow the multidisciplinary and multistakeholder approach I was mentioning. We are organizing events, conferences, and so on. To conclude, as mentioned by Elena, the EDMO hubs were announced. The aim is to have all Member States covered by EDMO hubs; so far, there are eight hubs covering 12 Member States. These are really fundamental, because they are the realities that actually reach the local level. We all know that the spread of disinformation, the actors, and the tools differ from country to country. We’re happy that the first hubs were selected, and we will start collaboration with them in September or October this year. Briefly, what will they do? They will detect and analyze disinformation campaigns, organize media literacy activities at many levels, and provide support to the national authorities for the monitoring of the platforms’ policies.
 
To go back to what was being said before, think of what EDMO is doing with the repositories: the hubs will feed those as well. For example, thinking of the academic one, this is a great opportunity to also have academic papers which are not in English, but rather in the various national languages.
 
So I think this was a short way to introduce EDMO; I hope I stayed within the time. Of course, I’m happy to take any question on EDMO. Thank you very much.
 
>> Elena Perotti: Thank you, Paula. Thank you very much. I would say we now have the opportunity to go to questions. And Mira will lead us in that part.
 
>> Mira Milosevic: Thank you, Elena, and thank you, Paula, for these fantastic points. Looking at the time, I will just ask all our panelists to be as brief as possible, if possible up to two minutes, in their responses. I will try to merge all the questions so as to have one question for each of the panelists.
 
Going back to you, Paul: Giacomo asked about examples of your commitments and recommendations already requested by stakeholders. You mentioned the content incident protocol and how the Call informed this protocol now used by platforms. Can you tell us a bit more about that? And, if you have time, whether you have access to data from the platforms when you look at the algorithms that you mentioned in the chat as well.
 
>> Paul Ash: Thanks, Mira. I will be as quick as I can. The measures put in place to prevent terrorist and violent extremist content work at two levels: there are the measures in place, and the content incident protocol, which is an effort to try to understand what kind of content reaches the threshold for activating emergency processes to prevent a live stream spreading like the one broadcast with the Christchurch attack. The response is to make sure companies, Governments, and Civil Society can communicate at speed in a crisis. We have a bit of work to do in updating it; it is sitting with the New Zealand Government at the moment to take it forward.
 
You are absolutely right to focus on the question of algorithmic outcomes and how to look at the way algorithms might operate across three areas. One is recommendation algorithms, some of which might be dangerous and might incite violence; another is those that might autocomplete and take people to places they don’t expect to be.
 
The last is around the algorithms used for the detection of terrorist and extremist content. On transparency, we’re focused strongly on looking at the outcomes of the algorithms instead of the white box of code, and we are in the process now of working with companies to build black-box approaches for testing that. To answer your question, I think some firms are far more comfortable with that than others. We’ll look to go at the pace of the quicker ones and see how to take that forward. Drawing on the principles from all of the speakers, multistakeholderism from our perspective is the only way to make that a safe process for everyone. I’ll stop here because we could delve into different rabbit holes.
 
>> Mira Milosevic: Thank you, this was an important point, especially having in mind the principles and recommendations from other speakers. Cherine, most of the questions are for you. I will try to sum them up in two categories. Again, if you can be as quick as possible.
 
One is related to user protection: what recourse do ordinary users have when Facebook content moderation removes their content or accounts and they don’t get a satisfactory response, and they have no access to the Oversight Board processes because there is a limited number of cases? So that’s one.
 
And also, what approach can users take in terms of legal appeals, if the self-regulation options don’t work?
 
The other one is related to a comment made by Lewis, that self-regulation and other tools need to be dynamic and updated over time. The question is: will there be periodic independent reviews of the effectiveness of the appointment process of the board – and, I would add, in general of the impact of the Oversight Board on policies and decision-making at Facebook? I know you have only a minute or two to respond, so sorry for the long questions.
 
>> Elena Perotti: I will pile on to that. I am also interested in knowing how the Oversight Board chooses which cases to deal with, because you spoke of very, very many cases, and [audio skipping] through all of them. How do you choose?
 
>> Cherine Chalaby: I will go off mute. Why don’t I – I mean, some of these questions are related. So what do users want in general? I’ve been on the ICANN board for a number of years; I know what stakeholders and users want. They want the right to be heard. They want an opportunity to ask the company that made a decision to review that decision, and, if the decision still goes against their interest, an opportunity to go to an independent body on appeal.
 
This is what Facebook has done: it has basically created a two-tier appeal system. The first tier: when Facebook takes down your content or blocks your account, you can ask it to review its decision. It is a review request that every user has a right to make.
 
And Facebook considers all of these cases – no case is simply forgotten. In addition to the AI algorithms, it has 30,000 people looking at content moderation, and other people looking at policies and at cases at the same time.
 
And if the user does not get satisfaction, they can apply to the Facebook Oversight Board and ask for an independent appeal of their case.
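
The two-tier path Cherine describes can be sketched as a simple flow. All function names and outcomes here are hypothetical; this is a schematic of the escalation path, not any real Facebook or Oversight Board interface:

```python
# Hypothetical schematic of the two-tier appeal path described above.

def handle_takedown_appeal(case, facebook_review, board_selects, board_decide):
    # Tier 1: every user may ask Facebook to review its own decision.
    first = facebook_review(case)
    if first == "restored":
        return "resolved at tier 1"

    # Tier 2: if still unsatisfied, the user may apply to the Oversight
    # Board, which hears only a selected set of emblematic cases.
    if not board_selects(case):
        return "upheld; not selected for independent appeal"
    return f"board decision (binding): {board_decide(case)}"

# Toy usage with stand-in callables.
result = handle_takedown_appeal(
    case={"id": 1, "precedential": True},
    facebook_review=lambda c: "upheld",
    board_selects=lambda c: c["precedential"],
    board_decide=lambda c: "reversed",
)
print(result)  # board decision (binding): reversed
```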
 
Of course, it can be frustrating sometimes when a case is not heard; I can understand that. I will come back to your question, Elena. The Oversight Board reviews a selected number of highly emblematic cases that are difficult, significant, and globally relevant, to inform future policy – basically, cases that have precedential impact.
 
Now, the Oversight Board is increasing in size; hopefully, it will be able to handle more and more cases around the world. So I think that answers that question. In terms of – sorry, what was the second question, Mira? Was it about the periodic review of the effectiveness of the board?
 
>> Mira Milosevic: Yes, keep it to a couple of sentences. Is there a periodic review of the appointment of board members, and there is also a related question about diversity. Then, of course, I added to that the impact and effectiveness of decision-making. Yeah.
 
>> Cherine Chalaby: So the board members are assessed once a year. We assess their performance: how they do, how they adhere to the code of conduct, and how they uphold the independence of the board. And also, importantly, their impact. So there is a yearly review of every single board member, and the trust, where I sit, has the authority to remove a board member if it believes they are not performing according to the code of conduct.
 
>> Mira Milosevic: I have to cut you off there, sorry. We only have three minutes to go. I will use the time to ask the same question to both Lewis and Paula; it comes from Paul’s colleague in the New Zealand Government. He is interested in how self-regulation mechanisms work for the marginalized and minority groups that Paula mentioned could be at greater risk from deceitful campaigns. This is an important question: how do we make sure that those users especially are empowered to have their voices heard and listened to, particularly when they are in danger? First Lewis, and then Paula – again, if you can be as quick as possible.
 
>> Lewis McQuarrie: Yeah, happily. It is a shared responsibility. It is certainly incumbent on policymakers and the platforms developing these tools to reach out to make sure that marginalized communities are represented in the process, and to check that they aren’t limited to the Civil Society groups they are familiar with. Sometimes you have to go beyond the usual suspects to reach groups that may not even have representatives in the Civil Society groups at the forefront of these debates. I will hand over to Paula.
 
>> Paula Gori: Minorities sometimes don’t have their voices heard – we saw the problem with COVID-19, for example. This is where the EDMO hubs play a crucial role, because they bring in the national dimension: they know what happens in their respective countries, and through them we can detect what is happening regarding minorities.
 
>> Mira Milosevic: Thank you so much, Lewis and Paula, for being quick. We have a couple of comments in the chat. Elena, I think we need to go to the studio to hear from Ilona Stadnik and the key messages, and then see whether we will have time for reactions to the messages. Over to you.
 
>> STUDIO: We will now go to Ilona Stadnik, who has some key messages from the session. Are you there?
 
>> Ilona Stadnik: Can you hear me?
 
>> STUDIO: Yes.
 
>> Ilona Stadnik: Great. I am Ilona Stadnik, the Geneva Internet Platform reporter for your session. I will briefly present the key messages that I compiled from the discussions in the Workshop. A quick reminder: there will be an opportunity to comment on them after EuroDIG is over. All the messages will be placed on the commenting platform, and you are invited to suggest additions to my recommendations afterwards. Now I will read out the messages.
 
I just need a rough consensus from you; if something is in contrast with your views, we can simply remove the message. First: liberal approaches to online platforms in the beginning led to the rise of platform power to influence the public sphere. Though we have soft law arrangements, they are not sufficient to address serious problems like extremist content, hate speech, and disinformation while ensuring the right to free speech. Any objections here? I don’t hear anything.
 
Second: governance models like co-regulation, self-regulation, and multistakeholder, multidisciplinary models are challenged by the need to reconcile the different accountability and power structures that exist within them. They should also have internal and external legitimacy: externally, the model has to be recognized for the quality and timeliness of its decisions, and internally it has to have robust checks and balances.
 
The last one: regulation of platforms is not a burden for a single Government, institution, or the platform itself. There should be a global collaborative effort between Governments and tech companies to elaborate solutions that are grounded in human rights and address a particular problem.
 
>> Can I ask – regulation of platforms – where we’re talking about regulation, I think it is – [audio skipping]
 
[Audio skipping].
 
>> Ilona Stadnik: Could you please repeat? I heard some issues with the connection.
 
Somebody was talking about regulation of platforms. So what is the point here?
 
What should we change?
 
>> Mira Milosevic: We’ll type it in the chat. There are some problems with the connection.
 
>> While we’re waiting, can you repeat the first message again? I didn’t hear it properly. I apologize.
 
>> Mira Milosevic: It is on the screen if you can –
 
>> Ilona Stadnik: I see – add Civil Society to the last bullet. Where, exactly?
 
>> Cherine Chalaby: Oh, sorry. I didn’t have the big screen here.
 
>> Ilona Stadnik: Regulation of speech on the platform would be better?
 
>> Mira Milosevic: Paula, maybe you should explain again, and say which point you are commenting on. The last one, or in general?
 
>> Paula Gori: The last one, I mean the last one. Then I heard "regulation of speech". "Regulation of speech" is a little too hard, considering that fundamental rights are to be protected. I would not say "regulation of speech", honestly. I understand it is difficult to come up with general messages, but I’m a lawyer, and that is why, for me, words matter. Given that we’re talking about self-regulation and co-regulation, not regulation as such, I was wondering if we could find – I don’t know – "policy for content moderation", or something similar that would avoid misunderstandings.
 
>> Elena Perotti: I’m not sure we need the last message, to tell you the truth. Yeah.
 
>> Mira Milosevic: Maybe just the first sentence. The second sentence is important.
 
[Inaudible, multiple people speaking]
 
>> Cherine Chalaby: Can I make a suggestion for the last sentence – the suggestion I made in my speech, which is consistent with what you are saying. There isn’t a single Government, institution, or actor that has all the answers. We need collaboration between Governments, tech companies, and also actors like human rights organizations, Civil Society, and Academia, all coming together in a collaborative effort to find a solution that is grounded in human rights. I would frame it that way, to be honest with you.
 
>> Elena Perotti: And we could close not with "address a particular problem" but with "address disinformation and hate speech".
 
>> And also if I may, going back to – I will let you type. [Audio skipping]
 
On the first point again, there is this "regulation of speech online", which I find a little tricky. I mean, there is always this difference between illegal content and legal content. So I was thinking about whether we want to replace it – I mean, hate speech. There is also the problem of [audio skipping] all illegal. Probably hate speech is one of the more illegal [audio skipping].
 
>> Cherine Chalaby: It is more about harmful content.
 
>> Ilona Stadnik: Okay, harmful content.
 
>> Paula Gori: Exactly, harmful content.
 
>> Cherine Chalaby: Right, at the end – "harmful" rather than "hate speech", the last two words of the last point. "Hate speech" should become "harmful content" or "harmful speech". Take "hate" out, because it is just one example of harmful content.
 
>> Elena Perotti: Are there any other objections regarding the messages? Or can we roughly agree on those?
 
>> Cherine Chalaby: Go ahead – I beg your pardon. I don’t understand what we mean by "liberal approach" here. By Governments, by whom?
 
>> Ilona Stadnik: The Governments.
 
>> Elena Perotti: It is in reference to what we said at the beginning: Governments initially chose not to regulate the Internet, on the basis of free speech.
 
[Inaudible, multiple people speaking]
 
>> Ilona Stadnik: Shall we add "by Governments", so it is obvious?
 
[Audio skipping]
 
>> Mira Milosevic: I think Paula is trying to say something, but it is difficult to hear her.
 
>> (?)
 
>> Mira Milosevic: I think these are really sensitive subjects, and the sentences need to be reviewed a couple of times by us – especially, I presume, the relationship Paula wants to discuss between hate speech and harmful content. There is a really important distinction there when we talk about regulation. So Giacomo, what can we suggest in terms of procedure, so as not to take any more time from our fantastic panel and the colleagues who have joined us today?
 
>> Can we have a chance to look at it offline and send it back to you quickly?
 
>> Mira Milosevic: Yes, Giacomo we will go offline and send suggestions to Ilona Stadnik.
 
>> Giacomo: Yes. In any case, we have to report from this Workshop to the main session; that is something I will do. Ilona’s messages will remain as the basis, but the report to the plenary session will be larger. Of course, it is important that we all agree on what is there, because this will remain in the final messages out of EuroDIG that will be spread around. What is already agreed can be sent immediately, but we have time to revise in the coming days before the messages from EuroDIG 2021 are published. Correct? It is like this? Yes. Okay. So don’t worry, we have time to work on it, if you would like to continue working – there is always work to be done.
 
>> Mira Milosevic: You mean now or offline.
 
>> Offline.
 
>> Mira Milosevic: Offline.
 
>> Unless somebody believes something is absolutely important now. If we’re talking about adjusting a word and refining a concept, we can do that afterwards. If somebody believes that there is something totally wrong –
 
>> Elena Perotti: I will ask Ilona Stadnik to send what we have now to the communication channel with the key participants, so we have it in one place. Thank you so much.
 
>> Mira Milosevic: Well, shall we wrap up? Yes, go ahead – I said most of the things I had in the questions. So I will give the floor to you to thank our panelists and wrap up.
 
>> Elena Perotti: I don’t have anything to add to this session. I enjoyed it; I found it informative, and it clarified all the points for me – I hope for the audience as well. I hope we’ll keep in touch, because these are very important issues that need continuous, thoughtful consideration. Thank you to all the participants; thank you to Mira, to Giacomo, and to everyone who joined with their questions. I look forward to the rest of EuroDIG and to working with you again in the future. Thank you, everyone.
 
>> See you at 2:45 for the focus session.
 
>> Where all the Workshops will report what has been discussed. It is important to be there to sustain our viewpoints. Thank you very much.
 
>> Thank you.
 
>> Mira Milosevic: Good-bye.
 
>> Bye.
 
>> STUDIO: Yes, thank you, everyone. I agree it was a very informative discussion, and I would like to thank our moderators, Mira and Elena, for the wonderful moderation. We will now go to a lunch break, and when we come back you can join us for the keynote from UNESCO. Enjoy the break; we’ll see you in about an hour.


[[Category:2021]][[Category:Sessions 2021]][[Category:Sessions]][[Category:Media and content 2021]]


30 June 2021 | 12:15-13:15 CEST | Studio Belgrade | Video recording | Transcript
Consolidated programme 2021 overview / Day 2

Proposals: #23 #52

You are invited to become a member of the session Org Team! By joining an Org Team, you agree to your name and affiliation being published on the respective wiki page of the session for transparency. Please subscribe to the mailing list to join the Org Team and answer the email that will be sent to you requesting your subscription confirmation.

Session teaser

As pressures mount for EU and/or national-level regulation of large social media platforms, Workshop 12 reviews self- and co-regulation initiatives by platforms, evaluates them, and discusses their interaction with other actors, such as regulators, other media, and users.


Format

Structure of the workshop:
  • Moderators’ introduction: 3'
  • 4 speakers (7' each on average): 28'
    • A case of a multistakeholder model for platform governance that specifically focusses on reducing extremist content on social media platforms (Christchurch Call)
    • A case study of a self-regulatory body
    • The regulatory approach across Europe
    • A case study of a multi-stakeholder and multidisciplinary approach
  • Debate with the audience: 15'
  • Moderators’ final recommendations and conclusions: 10'
  • GIP summary of the meeting: 4'
  • Total duration: 60'


People


Focal Points

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Subject Matter Expert (SME) and the EuroDIG Secretariat, and are kindly requested to follow EuroDIG’s session principles.

  • Giacomo Mazzone, Eurovisioni
  • Giovanni De Gregorio

Organising Team (Org Team)

Subject Matter Expert (SME)

  • Yrjö Länsipuro

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

  • Aleksandra Ivanković, Internet Society & YCIG
  • Giacomo Mazzone, Eurovisioni
  • Juuso Järviniemi, Student at College of Europe, Federal Committee member of the Young European Federalists (JEF-Europe)
  • Giovanni De Gregorio
  • Vittorio Bertola, Open-Xchange
  • Mira Milosevic
  • Farzaneh Badii
  • Claire Pershan, EU DisinfoLab
  • Jutta Croll
  • Lewis McQuarrie / Collin Kurre
  • Paul Ash
  • David Reid

Key Participants

  • Paul Ash, coordinator of the Christchurch call, New Zealand Government
REPLACING Giovanni De Gregorio, Centre for Socio-Legal Studies, University of Oxford, GDHRNet (Global Digital Human Rights Network), research on self- and co-regulation
Paul Ash is the New Zealand Prime Minister’s Special Representative on Cyber and Digital. He works closely with the technology sector and cyber and digital agencies in New Zealand and globally. Paul supported PM Ardern and the French side leading the development of the Christchurch Call to eliminate terrorist and violent extremist content online, and continues to lead on this work within the New Zealand Government. He has previously served as the Director for National Security Policy and the National Cyber Policy Office, as Deputy Head of Mission in Brussels, and on diplomatic postings and secondments in the Solomon Islands, Beijing, and Taipei.
  • Cherine Chalaby, member of the Board of Trustees at the Facebook Oversight Board
  • Lewis McQuarrie, International Policy Manager OFCOM UK
Lewis McQuarrie is a policy manager in the international team at Ofcom, the UK’s communications regulator. He works on media and online policy issues, with a particular focus on new rules for video-sharing platforms which came into effect in the UK last year. He acted as lead drafter and project manager for Ofcom’s report into monitoring the voluntary Code of Practice on disinformation published in February. He represents Ofcom at EPRA, the European Platform of Regulatory Authorities, leading their work on media plurality in online media.
  • Paula Gori, Secretary General EDMO

Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They are selected and assigned by the Org Team, ensuring a stakeholder-balanced dialogue, also considering gender and geographical balance.

Moderators

  • Mira Milosevic, GFMD
Mira Milosevic is the Executive Director at the Brussels-based Global Forum for Media Development (GFMD). She leads GFMD’s engagement with the United Nations, the Internet Governance Forum, and other multilateral institutions, as well as GFMD’s international efforts advocating for the sustainability of journalism and news media. Mira frequently writes and speaks about the intersection of media, economy, technology, and human rights. Before joining GFMD, she authored the World Press Trends reports, the most authoritative global source of data and analysis on the international newspaper industry, managed Media Development Programmes at WAN-IFRA, served as Chief Platform Officer at Indie Voices, and as Director of the Belgrade-based Media Center. Mira holds a BA in Economics and an MA in Communication; she started her career as a journalist.
  • Elena Perotti, WAN-IFRA
Elena Perotti is Executive Director of Media Policy and Public Affairs at WAN-IFRA, the World Association of News Publishers. She is responsible for identifying and studying major public affairs issues within the news industry, and is leader and/or author of all research output of the department. Elena is also in charge of the News Literacy initiatives, and of interaction and liaison with WAN-IFRA’s governing boards, as well as with national and regional member associations, and international bodies. 

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter

Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page.

Messages

  • Liberal approaches of governments towards online platforms at the start of the platform economy led to the rise of platform power to influence the public sphere. Though we have soft law arrangements like voluntary codes of conduct to regulate harmful content, they are not sufficient to address serious problems like extremist content and disinformation while ensuring the right to free speech.
  • Self-regulation, co-regulation, and multistakeholder/multidisciplinary governance models are challenged with the need to reconcile different accountability and power structures that exist within them. Of additional importance, they should have internal and external legitimacy.
  • Externally, a governance model must be recognised for the quality and timeliness of its decisions; internally, it has to have robust checks and balances.
  • There should be a global collaborative effort in the form of dialogic regulation between governments, tech companies, and civil society to develop a solution grounded in human rights that will address disinformation and harmful content.

Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/best-practices-self-and-co-regulation-platforms-towards-legal-framework.

Video record

https://youtu.be/rnehllszB6w?t=8027s

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-482-9835, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> STUDIO: Welcome back. We’ll continue with the second Workshop of today, which is Best Practices of Co-regulation and Self-regulation of Platforms towards a Legal Framework. My name is Jelena Cosic, and I’ll be the studio host for the session. Before I turn over to the moderators of the session, I will briefly go over the session rules. First, make sure to display your full name when you join the studio. If you have a question during the session, use the Zoom function to raise your hand; you will be unmuted when you are given the floor. Just remember that, at the beginning, you should state your name and affiliation. You can also choose to switch on your camera so that we can see you, but this is optional and we leave that up to you. You can also use the chat to post your questions during the session. I will also be the chat moderator for the session, and I will make sure to pass questions from the chat to the panelists from time to time. Don’t be afraid to use the chat; keep in mind it will not be stored or published.

Lastly, we ask you not to share the Zoom link with anybody; let them register or follow us on the YouTube live stream. I will now hand over to the two moderators for today’s session. Mira and Elena, the floor is yours. I will stay here and see you at the end of the session.

>> Elena Perotti: Mira and I agreed that I would start, so that is what I will do. Hello, everybody. I am Elena Perotti; I work in the media policy department at the World Association of News Publishers. I will moderate this session with Mira, Executive Director of the Global Forum for Media Development.

What we will do today – first, a little bit of housekeeping. My Internet is horrible. I have not one but two construction sites close to me. So I might be popping on and off in that case. I know Mira is ready to take my place if I cannot manage to be back online.

So, as I was saying, today we’re investigating the European Union’s soft and hard law approaches to the challenges posed by the state-like powers exercised by platforms over users’ fundamental rights and by the spread of hate speech online. I will post the EuroDIG wiki link in the chat right now, in case anybody doesn’t have it. Here you go.

So here is how we will go about this. You will have seen the agenda: we have four case studies first. The speaker line-up comprises Paul Ash, the coordinator of the Christchurch Call at the New Zealand Government, who will speak on the Christchurch Call. We will have Cherine Chalaby, a former Chairman of ICANN and now on the Board of Trustees at the Facebook Oversight Board. Lewis McQuarrie is a policy manager in the international team at Ofcom, the UK’s communications regulator; he will present a case study on the regulatory approach. And we will close with Paula Gori, the Secretary-General of EDMO. We will then have some time for questions from the audience, additional insights from key participants, and conclusions and messages.

Public actors in the European Union and the US have generally not regulated the internet, and this has allowed platforms, as I was saying, to acquire state-like powers. In the beginning, the Internet was independent and perceived as such; it was thought it could be dealt with on the basis of the free market. But what has happened, with cases such as the right to be forgotten, is that platforms are now in a position to affect the exercise of fundamental rights, particularly privacy, freedom of expression, and so on. The European Union has responded with codes of conduct and measures to tackle illegal content online. The solution has been self-regulation: giving the platforms mechanisms and structure for the state-like functions they are now performing, and possibly tackling the problems of hate speech and disinformation that we have seen so broadly during the last year of the pandemic.

In this Workshop, we will analyze different responses to this challenge, regulatory and non-regulatory ones. That is why we have the Facebook Oversight Board, which represents platforms starting to incorporate voluntary mechanisms.

At the same time, there are initiatives developed with the intent to respond to the challenges we mentioned before, such as the Christchurch Call and the European Digital Media Observatory in the face of disinformation. And there is hard regulation, which is why we have Ofcom here to testify to that.

Without further delay, let’s get into it. I will leave the floor to my co-moderator to introduce herself: Mira Milosevic.

>> Mira Milosevic: Thank you, Elena. This is a fantastic setup for our conversation and gives our speakers enough time to cover everything we planned. We will start immediately with a story about an international multistakeholder model for platform governance that specifically focuses on online harms. The Christchurch Call is a commitment initiated by a group of Governments and tech companies to eliminate terrorist and violent extremist content online. It is an example of a multistakeholder approach where a genuine effort is made to consult and engage Academia and Civil Society in the process, not only Governments and private companies. We have with us Paul Ash, as you said, Elena, the New Zealand Prime Minister’s Special Representative on Cyber and Digital. He was involved in developing the Christchurch Call, supported the Prime Minister then, and continues to lead on this work within the New Zealand Government.

Paul, it is a pleasure to have you with us today. Can you tell us about the Call and your unique approach? What have you learned throughout the process over the last two years? And how relevant and binding is the Call today for Governments and companies?

>> Paul Ash: Greetings from New Zealand. It is the middle of the night here, so I thought I would put up a nice background from our countryside for those who can’t get here because of travel restrictions. It is lovely to see you again, Mira – you were around at the very beginning of the Call in mid-2019. Great to be able to join people today.

I guess the fact that we’re talking about regulating the Internet at all is antithetical to most of us in this discussion. We’re a long way from the utopia of the 1990s, when some of us with more gray hair first connected to the Internet and discovered a realm that seemed unanchored from the rest of the world. As we discovered, a couple of thousand years of political philosophy were right: the things we grapple with offline manifested themselves online.

In a sense, that has led to quite a strong political drive in this sphere to ensure that life online better reflects the range of values and objectives that we would expect offline.

There are good reasons underlying some of it, and pitfalls for democracies like ours as we try to do it. From our perspective, the conversation we’re having tonight is fundamentally about the future of a global and still mostly borderless – and hopefully it will stay that way – infrastructure that is designed to connect people everywhere. Applying a lens of regulatory or economic advantage is not an enduring approach to solving some of those issues. I think we heard from Elena earlier about the economic models that led to the rise of platform power and the shape of the Internet we have today.

We probably need a different approach. If the Internet is for all, and is to remain a place where no one stakeholder has complete legitimacy or mandate to make changes for themselves, we need to think about more collaborative models to deliver regulation – if you want to use that phrase – or a structured approach to operating online.

That is why, from our perspective, multistakeholder approaches are important, and why we reached for one of them after the events of 15 March 2019.

So I guess it is good to talk about some of the challenges before we kick into it. One of the most complicated aspects of co-regulation and multistakeholder models is how to both surface and reconcile the different accountabilities and power structures that exist within them.

For instance, security typically has been, and still remains, an exclusive competence of states – for those of you in the EU, that is one of the huge debates in the European Union context around subsidiarity: what sits in Brussels and what stays with states. States like yours are democratically responsible to their citizens; that accountability is core to the social contract. It was thrown into stark relief by the events of 15 March 2019 in Christchurch. A terrorist steeped in online conspiracy theories, radicalized and trained in tradecraft online, working with Islamophobic and white supremacist narratives, built an atrocity designed for the Internet, and with that horror broadcast the murder of 51 people across the world, exploiting major consumer and social media platforms in a way we hadn’t seen before. That harm initiated thinking about solving this problem in new ways. The live-stream video was repackaged, manipulated, turned into memes and video games. It has had a persistent online presence that has inspired a range of other Islamophobic attacks.

At the time, the platforms this was occurring on – the mainstream ones – had a range of responses, but not one of them was equal to the task of preventing affected families, consumers, and viewers all over the world from being exposed to the horrifying content. The harm also spilled over into the crossover area between traditional and online media, raising interesting questions about where some of the gaps and overlaps lie. That is something we had to think hard about as we watched some parts of the more traditional media using the attack video as clickbait to advertise against – some mainstream newspapers did that – while others alongside them engaged in responsible reporting, working through how to handle the video and what we were grappling with. There are shades of judgment that require us to solve for this in a different way, and they show the need for a multistakeholder approach to ensure we actually get the right balance of views in that discussion.

The temptation for many Governments following such events is something to think about: to punish platforms for failing to stem the tide of terrorist content. If you hold a legislative hammer, it is easy to treat everything as a nail. The problem is manifest on platforms, and they have a big role to play in solving it, but if you think about the problem of terrorist and extremist content online, it needs a holistic approach that looks at its origins, its manifestation online, and its effects offline. The Christchurch Call was an acknowledgment that the response couldn’t just be a symbolic gesture seeking to tame the Internet – that would be folly.

It had to be one that materially changed the way online platforms, Governments, and Civil Society worked together on the issue. That first meant working with the platforms and engaging deeply with them on the challenges we faced. It involved working with other Governments – and it is certainly great to speak in a European context: European partners and allies were very quick to reach out and provide support, none more so than France, as we sought to take this forward. And importantly, it meant working with Civil Society, and finding a way to shape an agreement that was timely and moved at pace, but that provided space for Civil Society to begin to assume a full role. For us, it meant sticking to important principles – I think they are important in this discussion more widely – steeped in human rights and fundamental freedoms, including not just freedom of speech but also, in this instance, the rights of victims, and protecting a free, open, and secure Internet. Trying to do those two things at once holds a number of things in tension, but it is important to do so. It also meant committing to quite difficult technical, policy, social, and regulatory discussions to see what might be effective, knowing and acknowledging from the beginning that those discussions might not be easy but needed to be worked through.

All of that is more time-consuming and perhaps more difficult than the linear hammer-and-nail solution. It is fair to say it took a while to figure out how to make that process inclusive and effective.

>> Mira Milosevic: Thank you, Paul. I hate to jump in here – we can go into details if we have more time at the end for the audience. It is a really interesting model, and we can discuss later how it relates to other models. Elena, over to you.

>> Elena Perotti: I do agree. Thank you, Paul, for this excellent intervention. Our next topic is the Facebook Oversight Board and how it operates as a self-regulatory model. Our key participant is Cherine Chalaby, member of the Board of Trustees at the Facebook Oversight Board. Cherine has held roles in Internet Governance and has considerable board experience; in 2019 he retired as Chairman of ICANN after serving the maximum nine years.

His speech will cover how the Facebook Oversight Board functions, how it preserves its independence, and how such a self-regulatory model might integrate into a legal framework in the future. I will link the Oversight Board in the chat. Cherine, over to you.

>> Cherine Chalaby: Thank you, Elena, and thank you, Paul, for ideas with which I wholeheartedly agree. It is my pleasure to be part of this distinguished panel.

So, since its creation in 2020, the Oversight Board has had the final say on some of the most consequential pieces of content posted on Facebook and Instagram. It has held the Facebook company accountable to the promises it made and reminded three billion users that their voices matter.

It is, of course, early days, and naturally there exists a high level of public scrutiny. Some critics, for example, have implied that the Oversight Board is not truly independent of its creator. I don’t believe that to be the case. From my vantage point as a trustee responsible for the governance of the Oversight Board, I can see how independence is firmly rooted in everything we do. Elena has outlined what I am going to cover, but before doing so I wanted to begin by framing the context in which Facebook created the Oversight Board.

So there are three aspects to consider. Firstly, the rise in cyber sovereignty. Policymakers and regulators are increasingly looking for ways to address their deep concerns about the impact of social media platforms on the safety and health of billions of users around the world, whilst at the same time wanting to protect the privacy and freedom of expression of those same users.

An immensely complex task. Unfortunately, there is no quick or easy fix, and, as Paul mentioned, there is no experience of what it means to regulate a global virtual public sphere.

Secondly, the relentless assault on Facebook. The concerns I mentioned above focus on Facebook as it grew into a powerful and integral part of the social fabric of most countries. Almost every crisis and headline today plays out in some way across Facebook’s services. The more those services become ubiquitous, the more Facebook finds itself at the center of extensive criticism on everything, from the spread of misinformation to concerns about the company’s power and approach to competition. And thirdly – this is very important context – within this political and social landscape, Facebook needed to renew and strengthen its legitimacy with its users. To Mark Zuckerberg and Facebook’s credit, they recognized that decisions that have enormous consequences for our society, for human rights, and for freedom of expression should not be made by social media companies acting alone.

Furthermore, these companies should not be the final arbiter of what can and cannot be said on their platforms. Users should have a voice. And their cases should be heard by an independent appeal body.

This is how the Oversight Board began: a bold move to create such an independent appeal body and a decisive step towards self-regulation. So let’s now have a look inside the Oversight Board to understand how it works. To succeed, it was decided from the outset that the Oversight Board cannot just be credible from the outside; it needs to be solid on the inside. In other words, externally it has to be recognized for the quality and timeliness of its decisions, and internally it has to have robust checks and balances. With that principle in mind, the governance model was uniquely designed to consist of three interlocking elements: the board, the trust, and the administration. Each plays a distinct role, as follows.

The board’s first 20 members were appointed last year. They are a diverse group of well-credentialed thinkers and leaders who make principled, independent decisions regarding content on Facebook and Instagram, and in so doing uphold or reverse Facebook’s own decisions. Users can appeal directly to the board, which has already received more than 400,000 appeals since January. Facebook can also refer cases, such as the case it sent to the board in January on the question of whether the former President of the United States could be indefinitely suspended.

It is important to note that in its deliberations the board takes public comment into account; for example, there were over 9,000 submissions related to the case of the former President of the United States. Equally important, the board’s decisions are binding, and they are published transparently so everyone can understand how the board reached them.

And the board is working to shift Facebook away from making arbitrary decisions, or decisions that might be informed by the company’s economic or political interests, towards decisions that promote freedom of expression, treat all users fairly, and are consistent with the company’s own standards and values.

And if you look at the board’s decisions so far, you can see that this is an institution not afraid of calling out Facebook when it fails to meet its responsibilities. The board also makes recommendations on content policies. These are not binding, but they are no less important than case decisions. The board has been given the ability to shine a light on systemic problems it identifies within the policies and to give precise guidance on how to resolve them. Facebook is required to respond publicly within 30 days – excuse me, I suffer from hay fever. So far, Facebook has accepted the vast majority of the recommendations. This is a positive sign.
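
The split between binding case decisions and non-binding recommendations that require a public response within 30 days lends itself to a toy tracker. The field names and dates below are invented for illustration; this is not any real Oversight Board data model:

```python
# Toy tracker for the two output types described above: binding case
# decisions, and policy recommendations Facebook must answer within 30 days.
from datetime import date, timedelta

recommendations = [
    {"id": "rec-1", "issued": date(2021, 5, 1), "responded": date(2021, 5, 20)},
    {"id": "rec-2", "issued": date(2021, 5, 15), "responded": None},
]

def overdue(recs, today, window_days=30):
    """Recommendations with no public response inside the response window."""
    deadline = timedelta(days=window_days)
    return [r["id"] for r in recs
            if r["responded"] is None and today - r["issued"] > deadline]

print(overdue(recommendations, today=date(2021, 7, 1)))  # ['rec-2']
```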

You may ask yourself how this board can be truly independent. After all, it is Facebook that came up with the idea, and it is Facebook that funded it.

The answer lies in the trust – the second interlocking element of this model – which is where I sit. The trust is basically a shield between Facebook and the board. It is responsible for governing the board and for protecting its independence in three ways. It protects the board’s independent judgment and the integrity of the decision-making process by keeping Facebook at arm’s length from board members. It protects the board’s operational independence by ensuring that board members adhere to the stated purpose, uphold the code of conduct, and act at all times in a manner that reflects the independence of the board. Finally, it protects the board’s financial independence by safeguarding the financial assets in the trust and by approving and monitoring the annual budget.

The third interlocking element is the administration: full-time staff, independent from Facebook, totally dedicated to assisting board members with research, case selection, communication, and decisions. In closing, it would be remiss of me not to stress that the Oversight Board was not designed to solve all of Facebook’s problems alone, nor to supplant the role of policymakers and regulators. The Oversight Board is nevertheless an important, innovative model of self-regulation, untried before on such a large scale, where – I’m going to be slow here – one of the largest for-profit corporations in the world has created an independent not-for-profit institution to make binding decisions by which the for-profit corporation must abide. This is a unique model, designed to avoid both the commercial interests of the for-profit organization and the potential abuse of state-based regulation.

Institutions such as the Oversight Board are, in my view, necessary. We do not want for-profit corporations regulating the global public sphere in their own economic interests.

Nor do we want national or regional political interests Balkanizing that same sphere. Instead, we want disinterested regulation of our virtual speech – that means regulation which is impartial and unbiased. In this regard, the Oversight Board is truly a disinterested institution, whose disinterest is guaranteed by the trust.

>> Elena Perotti: Sorry to interrupt – one-minute warning.

>> Cherine Chalaby: I’ll continue for one minute. This model aligns with multistakeholderism and with state-based regulation, which we are here to talk about. I’m sure you would agree that a one-size-fits-all solution does not exist, and also that no single Government, institution, or actor has all the answers. I therefore, like Paul, feel the imperative for a global collaborative effort between Governments and tech companies to agree on solutions that are, A, clearly grounded in human rights principles and, B, able to manage the complex challenges of our borderless digital future. Thank you. Did I get within the one minute?

>> Thank you.

>> Mira Milosevic: Thank you. This was an interesting insight into how the board works. We have questions for you; I’m sure there will be many more, and hopefully we’ll get time to respond to some of them.

Our next speaker will present the approach of a national regulator, Ofcom, and its engagement with different regulatory models. Lewis McQuarrie is International Policy Manager at Ofcom, the UK’s communications regulator. He works on media and online policy issues, with a particular focus on the new rules for video-sharing platforms which came into effect in the UK last year. Lewis, thank you for being here with us today. Could you tell us how an independent regulatory body such as Ofcom interacts with different regulatory models, including the experience of the EU Code of Practice on Disinformation and the duties under the forthcoming UK online safety bill? Over to you, Lewis.

>> Lewis McQuarrie: I apologize for the ghoulish appearance – I feel like I’m in a broom closet. I’m speaking on the report that Ofcom published on the monitoring of the Code of Practice on Disinformation during the UK general election in 2019. The code was signed by Facebook, Twitter, YouTube, and others, aimed at tackling disinformation on their services: empowering users and the research community, enabling scrutiny of ad placement, and making political advertising more transparent.

We joined the monitoring for that general election, matching the scope of other regulators involved in the exercise. A bit about the code: it is voluntary – a hybrid between self-regulation and co-regulation. It was set by the Commission and monitored by several stakeholders throughout its first 12 months, including regulators, Civil Society, Academia, and others that participated in the 12-month review and submitted evidence on the implementation of the code and its efficacy.

It was multistakeholder and multidisciplinary from its genesis: the objectives set by the Commission were informed by the advice of the High-Level Expert Group established in 2018 as well.

In parallel to the development of the code, a sounding board was created, composed of members of the media sector, Civil Society, and Academia, which advised and provided critical feedback to the drafters throughout the drafting process and published its final opinion around the time the code was signed in 2018. I want to address the elephant in the room, which is: why is Ofcom talking about an EU code? I’m not here to speak on behalf of the code or necessarily on its merits – luckily, Paula can correct me where I go wrong immediately after. We were involved while the UK was still an EU member, and we submitted our findings before we left. We took a decision to develop our findings after the fact, to take stock of the lessons we learned from conducting this exercise – lessons that are transferable to other codes and self-regulatory tools. The ERGA reports and others are more comprehensive in terms of reviewing the code’s efficacy. So what are the lessons? I’m trying my best to be brief.

I would say it underscored the need for online regulation, including self-regulation and co-regulation, to take a test-and-evolve approach. The review process has to be built into the model. This is particularly true online, because of the risk of unintended consequences that come at the cost of freedom of expression, which some of the previous speakers spoke about, and because of the dynamic nature of the market and how quickly user behaviors evolve online.

Self- and co-regulation approaches can be more dynamic. Platforms themselves need to lead this approach and have robust systems to monitor the effectiveness and outcomes of their actions and to calibrate their response. There needs to be a reflexivity to the system, where people take stock of the intentions, the measures taken, the outcomes, and how they measure up to the objectives.

We think the multistakeholder model for self-regulation and co-regulation tools works well: involving public bodies and experts from Civil Society and Academia can add to the accountability and legitimacy of the platforms. Multistakeholder involvement is relevant at the point of implementation of the test-and-evolve approach, in order to have critical feedback. It is not always true that platforms are best placed to know the effect their actions are having on users around the world.

A multistakeholder approach can also be more equitable and inclusive. The involvement of the broader membership helped draw attention to shortcomings in the implementation in smaller EU markets. For these advantages to hold, we think self-regulation and co-regulation tools need to be transparent in how they operate – both in terms of the access that the organizations involved in scrutiny have to implementation data, and in terms of the transparency of the processes adopted by the platforms: for instance, publishing better records of amendments to platform policies, which happen rapidly.

We think monitoring can be strengthened by supplementing the analysis of platform data with other evidence, such as consumer research into how users engage with the measures. In the case of this code and political advertising transparency, for example, it is useful to know how users interact with the information provided to them in context – who has paid for the ad, why it was targeted at them – trying to understand how these measures actually pierce through into the user journey and affect user and citizen decisions.
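
Measuring whether transparency measures "pierce through" into the user journey is, in monitoring terms, a funnel measurement. A toy sketch with entirely invented event names and log data, not any real Ofcom or platform dataset:

```python
# Hypothetical funnel: of users shown an ad-transparency label,
# how many opened it, and how many viewed the funding information?

events = [  # invented log records: (user_id, event)
    (1, "label_shown"), (1, "label_opened"), (1, "funder_viewed"),
    (2, "label_shown"),
    (3, "label_shown"), (3, "label_opened"),
]

def funnel_rates(events):
    stages = ["label_shown", "label_opened", "funder_viewed"]
    users = {s: {u for u, e in events if e == s} for s in stages}
    shown = len(users["label_shown"]) or 1   # avoid division by zero
    return {s: round(len(users[s]) / shown, 2) for s in stages}

print(funnel_rates(events))
# {'label_shown': 1.0, 'label_opened': 0.67, 'funder_viewed': 0.33}
```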

I want to end by commenting briefly on the next stage for this code. It is moving into a new phase: the Commission has just issued new guidance on how the code should be improved, incorporating feedback from the different organizations involved in its monitoring. There is quite a long list of recommended improvements. To pick out a few: it hits home the point about improved KPIs, which is about transparency and measuring the efficacy of the code; greater access to data for researchers – and thankfully EDMO will speak to that – and a permanent task force to oversee the code’s development, to help ensure it stays accountable, agile, and up-to-date.

I understand the intention is then to give the code some degree of statutory backing, as a code of conduct under article 35 of the DSA. I will try to pick up your questions, Mira, in the Q&A, if that is all right. I will hand back to you.

>> Mira Milosevic: Thank you. There are really relevant points here that also connect to what Cherine and Paul were saying. Elena, over to you.

>> Elena Perotti: Next we’ll hear about an example of a multistakeholder and multidisciplinary approach. Paula Gori is Secretary-General of EDMO, the European Digital Media Observatory. She joined the School of Transnational Governance in 2017, where she is a member of the management team, and was previously coordinator of the media area of the Florence School of Regulation. She is a mediator and has a background in international law, with experience in Italy and France. With EDMO she focuses on the multidisciplinary and multistakeholder approach to dealing with disinformation. We remind you that the observatory is one of the elements of the Action Plan against Disinformation published in 2018. There was a first phase, focused on the development of a core service infrastructure and the governance role of the observatory. Now, in the second phase of the project, funding goes to the creation of national and multinational research hubs across Europe. Over to you, Paula.

>> Paula Gori: Thank you, and thank you very much for inviting us to this important forum. I will be the only one sharing slides, so I have that pleasure with you – I hope you can see the presentation.

Let me start by giving you a little background. One thing, to set the scene and remember what we’re talking about: very often there is confusion between disinformation and misinformation, because in the U.S. they tend to use the words differently.

Disinformation includes all forms of false, inaccurate, or misleading information designed to intentionally cause public harm, whereas misinformation is when you share false information, for example with your friends, without being aware that it is false or harmful.

I’m sorry to start with a definition, but it is helpful to set the scene.

When dealing with online disinformation – I think this came through in the previous presentations – the big challenge is that on one side we want to protect the right to make informed decisions, and on the other side we want to protect fundamental rights. Parallel to that, we want to avoid citizens losing trust in media and institutions. All of this happens in an environment that is fast-evolving, with the technology, tools, actors, and strategies developing quickly. So the challenge is quite high.

I had this slide, but probably we’re all aware of this. This is a roadmap of the policies to tackle online disinformation. As you can see, there are references to pieces already mentioned, among which, of course, the Code of Practice on Disinformation.

Going back to the challenge I was mentioning before, the code was seen as a normal, natural, right first step to tackle this issue. As mentioned by Lewis, it sits in the co-regulatory toolbox, which sees the involvement of multiple stakeholders. This echoes what was said by the other speakers: to make sure this is something that starts from the platforms and then moves forward with a multidisciplinary approach. I think we are all aware of the guidance for the new code, so I won’t go into what was missing in the first version and what is coming in the new one. I do want to recall, as was mentioned, that the process for the DSA, the Digital Services Act, was also triggered recently, and it brings elements into the discussion on online disinformation.

Elena mentioned it, so I don’t have to say why EDMO was born.

Legally, let’s say. But let me mention what EDMO is: an independent platform, a digital service infrastructure, and a community builder. The aim is to be a body of facts, evidence, and tools, with a multistakeholder and multidisciplinary approach. It brings together the various stakeholders in the field and makes sure the various disciplines are involved. Why am I saying that? The partners are led by the EUI, with ATC and a fact-checker in Italy.

Why is this so important? There are lots of questions coming up; these are a few. For example: what is the motivation behind a disinformation campaign? What made it go viral? How does the human brain react when we see a piece of disinformation, not knowing whether it is disinformation or not? Which is the target audience of the message, and which is the target of the actor?

You might ask what the difference is. The actor, for example, is the one who decides that the disinformation campaign should go out, and the target of the actor might actually be another state – state against state.

And the target audience is probably the population, or part of the population, of that second state. Then there is the role played by the business model of the platforms. Which tools are used to build societal resilience, and how do we assess them? There are lots of different questions that come up, and it is impossible to tackle the issue in silos; we have to work together. Hence the multistakeholder and multidisciplinary approach. Thanks to neuroscience, we know that emotions such as fear and anger are triggers for sharing online disinformation. It is not by chance that during COVID-19, for example, we were spending more time at home and on social media, but we were also scared, and this did a lot to spread online disinformation. [Audio skipping].

How is EDMO actually serving as an evidence-based platform? The first thing, which you can already find on the EDMO website: we set up a secure collaborative platform for fact-checking organizations, where they can do joint fact-checking. A similar platform will soon be opened for researchers.

We have a governance structure [audio skipping] with an executive board and an advisory board, to ensure public trust in EDMO and its activities. One task was already mentioned: the mission of building a framework to provide secure access to online platform data for research purposes.

To go back to what I was saying before: we have lots of questions, and often those data are very important for answering some of them. Of course, this should be done in full compliance with the GDPR and data protection rules in general; one possibility is a code of conduct based on Article 40 of the GDPR. We mapped the fact-checking work in Europe and will have a repository with fact-checked items, and the same for media literacy. There was a session on that right before ours, so I won’t go too much into the details. To let you know, the challenge there probably comes from the discussion we had before: media literacy, even when only related to online disinformation, is a very wide field, where you have different actors, different targets, and different strategies. So what we are doing now is issuing a report that maps the various media literacy initiatives and tries to identify assessment criteria, and then opening this up to the regulatory discussion.

A similar exercise is under way with academic research in Europe. So far we are considering articles in English, but we will soon add other languages; I will come back to that. We are also providing academic and methodological input for the monitoring of the policies put in place. You all know that the new guidance for the new code foresees an involvement of EDMO in this; the task force was already mentioned. It was also mentioned that there needs to be an assessment of the application of the code, including structural and service-level indicators. EDMO will provide support on the structural indicators, which, for those less familiar, measure the impact of the code on the overall information ecosystem.

How are we doing with that? As I was mentioning, we are providing tools, including the ones already mentioned, but also other types of tools, for example trainings. We had a training on the ABCs of fact-checking, and we will have a total of 20 trainings, all for free. The trainings will follow the multidisciplinary and multistakeholder approach I was mentioning. We are also organizing events, conferences, and so on. To conclude, as mentioned by Elena, the EDMO hubs were announced. The aim is to have all Member States covered by EDMO hubs; so far, there are eight hubs covering 12 Member States. These are really fundamental, because they are the ones that actually reach the local level. We all know that the spread of disinformation, the actors, and the tools differ from country to country. We are happy that the first hubs were selected, and we will start collaborating with them in September or October this year. Briefly, what will they do? They will detect and analyze disinformation campaigns, organize media literacy activities at many levels, and provide support to the national authorities for the monitoring of the platforms’ policies.

To go back to what was being said before about the repositories: the hubs will feed those as well. Thinking of the academic one, for example, this is a great opportunity to also include academic papers which are not in English but in the various national languages.

So this was a short way to introduce EDMO; I hope I stayed within the time. Of course, I’m happy to take any questions on EDMO. Thank you very much.

>> Elena Perotti: Thank you, Paula. Thank you very much. We now have the opportunity to go to questions, and Mira will lead us in that part.

>> Mira Milosevic: Just to thank you, Elena, and thank you, Paula, for these fantastic points. Looking at the time, I will ask all our panelists to be as brief as possible in their responses, up to two minutes if possible. I will try to merge all the questions so as to have one question for each panelist.

Going back to you, Paul: Giacomo asked about examples of your commitments and recommendations already requested by stakeholders. You mentioned the content incident protocol and how the Christchurch Call informed this protocol now used by platforms. Can you tell us a bit more about that? And, if you have time, whether you have access to data from the platforms when you look at the algorithms you mentioned in the chat as well.

>> Paul Ash: Thanks, Mira. I will be as quick as I can. The measures put in place to prevent terrorist and violent extremist content operate at two levels. There are the standing measures, and then the content incident protocol, which is an effort to understand what kind of content reaches the threshold for activating emergency processes, to prevent the live broadcast and spread of content like that which led to the Christchurch Call. The aim is to make sure companies, Governments, and Civil Society can communicate at speed in a crisis. We have a bit of work to do in updating it; it is sitting with the New Zealand Government at the moment to take forward.

You are absolutely right to focus on the question of algorithmic outcomes and how algorithms might operate across three areas. One is recommendation: some recommendations might be dangerous and might incite violence, and some might auto-complete and take people to places they don’t expect to be.

The last is around the algorithmic preferences used for the detection of terrorist and extremist content. On transparency, we are strongly focused on looking at the outcomes of the algorithms rather than at the white box of code, and we are now working with companies to build black-box approaches for testing that. To answer your question, I think some firms are far more comfortable with that than others. We will look to go at the pace of the quicker ones and see how to take that forward. Drawing on the principles from all of the speakers, multistakeholderism, from our perspective, is the only way to make that a safe process for everyone. I’ll stop here, because we could delve into any number of rabbit holes.

>> Mira Milosevic: Thank you, this was an important point, especially having in mind the principles and recommendations from the other speakers. Cherine, most of the questions are for you. I will try to sum them up in two categories. Again, please be as quick as possible.

One is related to user protection: what recourse do ordinary users have when Facebook content moderation removes their content or accounts, they don’t get a satisfactory response, and they have no access to the Oversight Board processes because it takes only a limited number of cases? So that’s one.

And also, what approach can users take in terms of legal appeals, if the self-regulation options don’t work?

The other one is related to a comment made by Lewis, that self-regulation and other tools need to be dynamic and updated over time. The question is: will there be periodic independent reviews of the effectiveness of the appointment process of the board, and, more generally, of the impact of the Oversight Board on the policies and decision-making at Facebook? I know you only have a minute or two to respond, so sorry for the long questions.

>> Elena Perotti: I will pile on to that. I am also interested in knowing how the Oversight Board chooses which cases to deal with, because you spoke of very, very many cases, and [audio skipping] through all of them. How do you choose?

>> Cherine Chalaby: I will go off mute. Some of these questions are related. So what do users want in general? I have been on the ICANN board for a number of years, and I know what stakeholders and users want. They want the right to be heard. They want an opportunity to ask the company that made the decision to review it, and, if that decision still goes against their interest, they want an opportunity to appeal to an independent body.

This is what Facebook has done: it has basically created a two-tier appeal system. The first tier: when Facebook takes down your content or blocks your account, you can ask them to review their decision. It is a review request that every user has a right to make.

And Facebook considers all of these cases; there is not a case that they simply forget. In addition to AI algorithms, they have 30,000 people looking at content moderation, and other people looking at policies and at cases at the same time.

And if a user does not get satisfaction, they can then apply to the Facebook Oversight Board and ask for an independent appeal of their case.

Of course, it can sometimes be frustrating that a case is not heard; I can understand that. To go back to your question, Elena: the Oversight Board reviews a selected number of highly emblematic cases that are difficult, significant, and globally relevant, to inform future policy. Basically, cases that have precedential impact.

Now, the Oversight Board is increasing in size, so hopefully it will be able to handle more and more cases around the world. I think that answers that question. Sorry, what was the second question, Mira? Was it about periodic review of the effectiveness of the board?

>> Mira Milosevic: Yes, and keep it to a couple of sentences. Is there a periodic review of the appointment of board members? There is also a related question about diversity. And then, of course, I added the impact and effectiveness of decision-making. Yeah.

>> Cherine Chalaby: So the board members are assessed once a year. We assess their performance: how they adhere to the code of conduct, how they uphold the independence of the board and, importantly, their impact. So there is a yearly review of every single board member, and the trust where I sit has the authority to remove a board member if the trustees believe they are not performing according to the code of conduct.

>> Mira Milosevic: I have to cut you off there. Sorry. We only have three minutes to go. I will use the time to ask the same question of both Lewis and Paula; the question is actually from Paul’s colleague in the New Zealand Government. He is interested in how self-regulation mechanisms work with respect to the marginalized and minority groups that Paula mentioned could be at greater risk from deceitful campaigns. This is an important question: how do we make sure that those users in particular are empowered to have their voices heard and listened to, especially when they are in danger? First Lewis, then Paula, and again, please be as quick as possible.

>> Lewis McQuarrie: Yeah, happily. It is a shared responsibility. It is certainly incumbent on policymakers and on the platforms developing the tools to reach out, to make sure that marginalized communities are represented in the process, and to check that they aren’t limited to the Civil Society groups they are familiar with. Sometimes you have to go beyond the usual suspects to reach groups that may not even have a representative among the Civil Society groups at the forefront of these debates. I will hand over to Paula.

>> Paula Gori: I think minorities sometimes do not have their voices heard, as with the problems we saw during COVID-19, for example. Here EDMO plays a crucial role, because through the hubs you get the national dimension and what happens in the respective countries, and we can detect what is happening regarding minorities.

>> Mira Milosevic: Thank you so much, Lewis and Paula, for being quick. We have a couple of comments in the chat. Elena, I think we need to go to the studio, hear the key messages from Ilona Stadnik, and then see whether we have time for reactions to them. Over to you.

>> STUDIO: I will now give the floor – we will now go to Ilona Stadnik, who has some key messages from the session. Are you there?

>> Ilona Stadnik: Can you hear me?

>> STUDIO: Yes.

>> Ilona Stadnik: Great. I am Ilona Stadnik, the Geneva Internet Platform reporter for your session. I will briefly present the key messages that I have compiled from the discussions in the Workshop. A quick reminder: there will be an opportunity to comment on them after EuroDIG is over. All the messages will be placed on the commenting platform, and you are invited to suggest additions to my recommendations afterwards. Now I will read out the messages.

I just need a rough consensus from you; if something contradicts your views, we can simply remove the message. The first message: liberal approaches to online platforms in the beginning led to the rise of platform power to influence the public sphere. Though we have soft-law arrangements, they are not sufficient to address serious problems like extremist content, hate speech, and disinformation while ensuring the right to free speech. Any objections here? I don’t hear anything.

Governance models like co-regulation, self-regulation, and multistakeholder, multidisciplinary models are challenged by the need to reconcile the different accountability and power structures that exist. They should also have internal and external legitimacy: externally, the model has to be recognized for the quality and timeliness of its decisions, and internally it has to have robust checks and balances.

The last one: regulation of platforms is not a burden for a single Government, a single institution, or the platform itself. There should be a global collaborative effort between Governments and tech companies to elaborate solutions that are grounded in human rights and address a particular problem.

>> Can I ask about “regulation of platforms”? Where we’re talking about regulation, I think it is – [audio skipping]

[Audio skipping].

>> Ilona Stadnik: Could you please repeat? I heard some issues with the connection.

Somebody was talking about regulation of platforms. So what is the point here?

What should we change?

>> Mira Milosevic: We’ll type it in the chat. There are some problems with the connection.

>> While we’re waiting, can you repeat the first message again? I didn’t hear it properly. I apologize.

>> Mira Milosevic: It is on the screen if you can –

>> Ilona Stadnik: I see, add Civil Society to the last bullet. Where, exactly?

>> Cherine Chalaby: Oh, sorry. I didn’t have the big screen here.

>> Ilona Stadnik: Regulation of speech on the platform would be better?

>> Mira Milosevic: Paula, maybe you should explain again which message you are commenting on. The last one, or in general?

>> Paula Gori: The last one, I mean the last one. Then I heard “regulation of speech”. Regulation of speech is a little bit too strong, considering that fundamental rights are to be protected. I would honestly not say regulation of speech. I understand it is difficult to come up with general messages, but I’m a lawyer, and that is probably why, for me, words matter. Given that we’re talking about self-regulation and co-regulation, not regulation as such, I was wondering if we could find, I don’t know, “policy for content moderation” or something similar that would avoid misunderstandings.

>> Elena Perotti: I’m not sure we need the last message, to tell you the truth. Yeah.

>> Mira Milosevic: Maybe just the first sentence. The second sentence is important.

[Inaudible, multiple people speaking]

>> Cherine Chalaby: Can I make a suggestion for the last sentence, the suggestion I made in my speech, which is consistent with what you are saying. There isn’t a single Government or institutional actor that has all the answers. We need collaboration between Governments, between tech companies, and also with actors like human rights groups, Civil Society, and Academia, all coming together in a collaborative effort to find a solution that is grounded in human rights. To be honest with you, that is how I would frame it.

>> Elena Perotti: We will close not with “address a particular problem” but with “address disinformation and hate speech”.

>> And also if I may, going back to – I will let you type. [Audio skipping]

On the first point again, there is this “regulation of speech online”, which I find a little tricky. I mean, there is always this difference between illegal content and legal content. So I was thinking whether we want to replace it with, I mean, “hate speech”. There is also the problem of [audio skipping] all illegal. Probably hate speech is one of the more illegal [audio skipping].

>> Cherine Chalaby: It is more about harmful content.

>> Ilona Stadnik: Okay, harmful content.

>> Paula Gori: Exactly, harmful content.

>> Cherine Chalaby: Right at the end: “harmful” rather than “hate speech”. The last two words of the last point: “hate speech” should be “harmful content” or “harmful speech”. Take “hate” out, because it is just one example of harmful content.

>> Elena Perotti: Are there any other objections regarding the messages? Or can we roughly agree on those?

>> Cherine Chalaby: Go ahead, I beg your pardon. I don’t understand what we mean by “liberal approach” here. By Governments, by whom?

>> Ilona Stadnik: The Governments.

>> Elena Perotti: It is in reference to what we said at the beginning: Governments initially chose not to regulate the Internet, based on free speech.

[Inaudible, multiple people speaking]

>> Ilona Stadnik: Shall we add “by Governments”, so it is obvious?

[Audio skipping]

>> Mira Milosevic: I think Paula is trying to say something, but it is difficult to hear her.

>> (?)

>> Mira Milosevic: I think these are really sensitive subjects, and the sentences need to be reviewed a couple of times by us, especially, I presume, the relationship Paula wants to discuss between hate speech and harmful content. There is a really important distinction there when we talk about regulation. So, Giacomo, what can we suggest in terms of procedure for this, so as not to take any more time from our fantastic panel and the colleagues who have joined us today?

>> Can we have a chance to look at it offline and send it back to you quickly?

>> Mira Milosevic: Yes, Giacomo, we will go offline and send suggestions to Ilona Stadnik.

>> Giacomo: Yes. In any case, we have to report from this Workshop to the main session; that is something I will do. Ilona Stadnik’s messages will remain as a basis, but the report to the plenary session will be larger. Of course, it is important that we all agree on what is there, because this will remain in the final messages of EuroDIG that will be spread around. What is agreed by all can be sent immediately, but we have time to revise in the next days before the messages from EuroDIG 2021 are published. Correct? Is it like this? Yes. Okay. So don’t worry, we have time to work on it, if you would like to continue working. There is always work to be done.

>> Mira Milosevic: You mean now or offline.

>> Offline.

>> Mira Milosevic: Offline.

>> Unless somebody believes there is something absolutely important. If we’re talking about adjusting a word and refining the concept, then we can do that afterwards. If somebody believes that there is something totally wrong –

>> Elena Perotti: I will ask Ilona Stadnik to send what she has now in the communication with the key participants, so we have it in one place. Thank you so much.

>> Mira Milosevic: Well, shall we wrap up? Yes, go ahead. I have said most of the things we had in the questions, so I will give the floor to you to thank our panelists and wrap up.

>> Elena Perotti: I don’t have anything to add to this session. I enjoyed it; I found it informative, and it clarified all the points for me, and I hope for the audience as well. I hope we will keep in touch, because these are very important issues that need continuous, thoughtful consideration. Thank you to all the participants, thank you to Mira and Giacomo, and to everyone who joined with their questions. I look forward to the rest of EuroDIG and to working with you again in the future. Thank you, everyone.

>> See you at 2:45 for the focus session.

>> Where all the Workshops will report on what has been discussed. It is important to be there to support our viewpoints. Thank you very much.

>> Thank you.

>> Mira Milosevic: Good-bye.

>> Bye.

>> STUDIO: Yes. Thank you, everyone. I agree it was a very informative discussion, and I would like to thank our moderators, Mira and Elena, for the wonderful moderation. We will now go to a lunch break, and when we come back you can join us for the keynote from UNESCO. Enjoy the break; we’ll see you in about an hour.