Tackling online harms – a regulation minefield? Present and future. – PL 07 2019

20 June 2019 | 16:30-17:30  | KING WILLEM-ALEXANDER AUDITORIUM | [[image:Icons_live_20px.png | Video recording | link=https://youtu.be/b_rcvT_Vlz8]] | [[image:Icon_transcript_20px.png | Transcription | link=#Transcript]]<br />
[[Consolidated programme 2019|'''Consolidated programme 2019 overview''']]<br /><br />
{{Sessionadvice-PL-2019}}
Working title: <big>'''Tackling online harms – regulation by whom, how much?'''</big><br /><br />
Proposals assigned to this session: ID 37, 52, 55, 73, 85, 91, 124, 125, 136, 138, 146, 151, 173, 178, 180, 183, 184, 203 – [https://www.eurodig.org/fileadmin/user_upload/eurodig_The-Hague/statistik_proposals_all/proposals_for_2019_2018-12-04__01_final_web_IDs_ver1.pdf list of all proposals as pdf]<br /><br />
== <span class="dateline">Get involved!</span> ==  
You are invited to become a member of the session Org Team! By joining an Org Team you agree that your name and affiliation will be published on the respective wiki page of the session for transparency reasons. Please subscribe to the session [https://list.eurodig.org/mailman/listinfo/pl07_2019 '''mailing list'''] and answer the email that will be sent to you requesting your confirmation of subscription.
== Session information ==
'''Date:''' Thursday, 20 June 2019 (Day 2)
'''Time:''' 16:30 - 17:30 (CEST)
'''Room:''' King Willem-Alexander Auditorium
'''Venue:''' World Forum The Hague, The Netherlands
'''Sched link:''' https://sched.co/P0Oi


== Session teaser ==
Over the last two years, many governing bodies have competed over who can bring social media to order, producing numerous proposals for regulation. For governments it is a real minefield: a situation full of hidden problems and dangers, in which they must navigate the different harms while avoiding harm to the Internet itself. With some countries starting to develop regulations, and the private sector publicly engaging on regulatory issues, we have to follow these developments and weigh the pros and cons as we look to the future.
 


== Session description ==  


Governments throughout the world have been taking initiatives to pass legislation and enact regulation to secure the Internet against a large number of online harms from a diverse pool of attackers. Our question is how exactly that goal is being accomplished. The session will include key participants who will offer different perspectives on how we can future-proof the Internet and share their concerns and insights on how this policy area is developing.


== Format ==  
Panel
 


== Further reading ==  
*[https://www.gov.uk/government/consultations/online-harms-white-paper Online Harms White Paper (UK Government)]
*[https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=24509&LangID=E "Smart mix" of measures needed to regulate new technologies – Michelle Bachelet]
*[https://www.numerique.gouv.fr/uploads/rapport-mission-regulation-reseaux-sociaux.pdf French report on the accountability ("responsabilisation") of social networks]


'''Yrjo Lansipuro'''
*[https://www.accc.gov.au/system/files/ACCC%20Digital%20Platforms%20Inquiry%20-%20Preliminary%20Report.pdf Digital Platforms Inquiry Preliminary Report]
*[https://www.movedemocracy.org/making-a-case-for-media-and-news-literacy-in-combating-disinformation?eType=EmailBlastContent&eId=a1b00427-73b8-46f5-aa76-c228a466984e Making a Case for Media and News Literacy in Combating Disinformation]
*[https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624279/EPRS_STU(2019)624279_EN.pdf Regulating disinformation with artificial intelligence: The effects of disinformation initiatives on freedom of expression and media pluralism]


'''Nadia Tjahja'''
'''Anelia Dimova & Bissera Zankova'''
*[http://compact-media.eu/objectives-social-media-and-convergence/ Social Media and Convergence | COMPACT EC Horizon 2020 project website]. Interesting headlines: Research findings and Policy recommendations for organisations and initiatives tackling fake news: False news stories are 70% more likely to be retweeted on Twitter than true ones; Wisdom of the Crowd: Multistakeholder perspective on the fake news debate; Minutes from Adria Information Disorder AI Tools 2018 Workshop; Challenges and dilemmas for national regulatory authorities in the age of convergence with respect to hate speech and the link to the Council of Europe conference in Zagreb
'''Anelia Dimova'''
*[https://news.un.org/en/story/2019/06/1040131 UN makes ‘declaration of digital interdependence’, with release of tech report | UN News]
*[https://www.internetjurisdiction.net/publications/paper/internet-jurisdiction-global-status-report-key-findings Internet & Jurisdiction Global Status Report: Key Findings]
'''Lorna Woods'''
*[https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2019/04/08091652/Online-harm-reduction-a-statutory-duty-of-care-and-regulator.pdf Online harm reduction – a statutory duty of care and regulator]


== People ==  
'''Organising Team (Org Team)''' ''List them here as they sign up.''
''The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.''
*Dr. Bissera Zankova, Media 21 Foundation, Bulgaria
*Anelia Dimova, Information Society Policy Expert, MTITC, Bulgaria
*Liu Yong
*Elena Perotti, Executive Director of Media Policy and Public Affairs, WAN-IFRA
*Matthias Kettemann, Senior postdoc and head of research for rules in digital communicative spaces at @BredowInstitut
*Narine Khachatryan, STEM Society
*Virginija Balciunaite, Communications and Public Relations, Sunium  
*Annie Ferguson, Head of the EU & International Online Policy Team, UK Government
*Ruth Cookman, EU & International Online Policy Team, UK Government
*Giacomo Mazzone, EBU-UER European Broadcasting Union
*Kristina Olausson, ETNO - European Telecommunications Network Operators' Association
*Daniëlle Flonk, Hertie School of Governance
*Fredrik Dieterle, LightNet Foundation, Sweden
*Annika Linck, European DIGITAL SME Alliance
*Paul Franklin
*Richard Wingfield
*Evangelia Psychogiopoulou, Hellenic Foundation for European and Foreign Policy (ELIAMEP)
*Dan Wilson, Online Harms, UK Government
*Ben Wallis, Microsoft


'''Key Participants'''


Key Participants are experts willing to provide their knowledge during a session, not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They are selected and assigned by the Org Team, ensuring a stakeholder-balanced dialogue that also considers gender and geographical balance.
*[https://www.coe.int/en/web/human-rights-rule-of-law/jan-kleijssen Jan Kleijssen], Director, Information Society and Action against Crime, Council of Europe
*[https://www.essex.ac.uk/people/woods91406/lorna-woods Prof. Lorna Woods], Professor of Internet Law, University of Essex
*[https://www.ripe.net/about-us/press-centre/publications/speakers/chris-buckridge Chris Buckridge], External Relations Manager, RIPE NCC
*Meri Baghdasaryan, Human rights lawyer, Ara Ghazaryan law office LLC, YouthDIG Fellow


'''Moderator'''


The moderator is the facilitator of the session at the event. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants, while taking a neutral role and balancing between all speakers.
*Virginija Balciunaite, Sunium


'''Remote Moderator'''


Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.
*Fabio Monnet


'''Reporter'''


Reporters will be assigned by the EuroDIG secretariat in cooperation with the [https://www.giplatform.org/ Geneva Internet Platform].
*Cedric Amon, Geneva Internet Platform

The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:
*are summarised on a slide and presented to the audience at the end of each session
*relate to the particular session and to European Internet governance policy


== Messages ==   
*Identifying the scope of online harms, as well as having a clear understanding of the terminology, are crucial in order to choose the right response. These include regulatory measures (e.g. legal frameworks based on self-/co-regulation) and the fostering of digital literacy.
*It is important not only to look at how to develop new laws, but rather, to consider existing regulations and human rights frameworks through which content and online harms can be evaluated and enforced.
*We must not overlook the less visible and more difficult-to-identify issues such as cyberbullying or the outsourcing of content filtering conducted by humans.


== Video record ==
https://youtu.be/b_rcvT_Vlz8


== Transcript ==
Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-800-825-5234, www.captionfirst.com
 
 
''This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.''
 
 
>> NADIA TJAHJA: So when we started this process, for us it became clear very quickly that the participants wanted to focus on solutions-oriented messages. They wanted something concrete that can be discussed, argued and further developed, and to move away from the abstract. They wanted to really focus on something that can create change, and during our sessions, they emphasized the importance of ensuring that the messages are kept in mind with a focus on the future, and with access for all.
 
One of their messages actually states, emerging concerns online such as hate speech, fake news, cyberbullying are not sufficiently discussed in school. We want to build a curriculum, focusing and raising awareness on producing digital skills and literacy of the youngest students. It is clear that this is one of the issues they sought to address and particularly focus on how regulations can be used to address online harms.
 
Therefore, I would like to invite you to join our next session, plenary 7, which is tackling online harms, a regulation minefield, present and future. So I would like to invite moderator Virginija Balciunaite and the key participants to come on the stage.
 
Thank you very much.
 
(Applause)
 
>> VIRGINIJA BALCIUNAITE: Good afternoon, ladies and gentlemen, YOUthDIGers, EuroDIGers.
 
Welcome to the final panel session, tackling online harms. I'm Virginija Balciunaite, and I'm more than happy to invite you to this discussion; it is my utmost honor to host it and to look at the future of online discussions in different stakeholder fields. We have a very diverse, but very engaged and impressive plenary session today.
 
And to begin with, I would like to outline how we started the session. So we built up the session by trying to map out all of the social harms, all of the harms that are out there. Our organizing team mapped out over 250 online harms, and we only looked at the European documents. And we only looked at our expertise and our experience.
 
And then we decided to look at the future: what does it hold for us? Because the future is messy. It looks messy. It feels messy. And also, let's face it, regulation takes a long time to be voted on and implemented. So it's important to have this debate now, one that is anticipatory, and we need to include visions, viewpoints and knowledge not only from researchers and scientists, but also from stakeholders, who have the possibility to look a bit into the future and share that knowledge with the legislators.
 
So looking into the future, we raised several concerns. And therefore, we invited four key panelists, key participants, who we believed were going to address these concerns and the future as they perceive it.
 
So firstly, we wanted to see the current developments. So we invited Professor Lorna Woods, from the University of Essex, who has extensive experience in the field of media policy and communications regulation, including social media and the Internet.
 
And we wanted to address the recent policy developments in the United Kingdom, because they have recently published an online harms white paper which sets out the government's plan to keep the UK safer online, through legislative and non-legislative measures.
 
Secondly, we wanted to see why we're falling short on the human rights aspect. So we invited Jan Kleijssen, Director of Information Society and Action against Crime at the Council of Europe.
 
Thirdly, during our team meetings we raised concerns for the future of technology in relation to online harms. We said the telecommunications and Internet service providers should be here to talk about the future of regulation. So we brought Chris Buckridge, who is External Relations Manager for the RIPE NCC; his role involves engaging a broad range of stakeholders, including members, governments, law enforcement agencies, and international organizations.
 
And finally, it is my great pleasure to introduce to you a YOUthDIG fellow, a human rights lawyer at the Ara Ghazaryan law office in Armenia.
 
So I have introduced to you four different speakers, but I would also like to introduce to you different sessions that are directly related to this panel. Therefore, I would like to invite you, Michael and Elena. Michael Oghia and Elena Perotti were the focal points for workshops 8 and 12, the latter being play the villain and learn to fight the misinformation.
 
>> MICHAEL OGHIA: If it's okay, I will speak from the floor. So as she said, my name is Michael Oghia, and I'm one of the co-focal points for one of the early workshops that we had. WS8 and WS12 both fed into this session, and WS8 was focused essentially on the threats to our information ecosystem, such as trolls, disinformation, et cetera, and how journalism can strengthen it. And although you will obviously be able to read the key messages when they come out, some of the really major points that we discussed were that disinformation, for instance, is a phenomenon that's evolving quickly in quantity and quality, and that some solutions for institutions such as the European Commission include a rapid alert system to flag disinformation in a timely manner, the creation of a shared platform between the EU and Member States, and the implementation of voluntary codes of practice for online platforms.
 
Our workshop actually included a representative from Google, someone from the Council of Europe, a representative from the European Broadcasting Union, a representative from Internews Ukraine, and was moderated by -- and we also had a representative from the European Commission. So we had many, many different views on this, but one thing that definitely came out was that the sustainability of journalism and news media organizations is one of the key problems behind some of the current challenges online, especially disinformation, because if we want our information ecosystems to be more robust, if we want them to include more factual information, one of the best ways to do that is to support good journalism and support fact-based evidence and fact-based information. So those are some of the major points that we covered, and I will stop there and give the floor to Elena.
 
>> ELENA PEROTTI: Hello, everyone. As was correctly stated, I was the focal point for workshop 12, where we tried to understand disinformation from the inside, by going behind the lines of the misinformation crisis. We played a game called the bad news game, and it allowed us to have a little hint into what goes on in the process of creating a specific piece of misinformation.
 
There were two specific takeaways from the workshop. Number one, we all need to collectively stop talking of fake news. Fake news is a term that stopped having any meaning the moment that it started being weaponized in order to, again, create propaganda and conspiracy theories. That's something really important. We should start using terms like disinformation, misinformation, propaganda, hoax, not fake news. Fake news, as we could see, is being used normally to attack journalists when they do the work that they are supposed to be doing, which is to hold the powerful to account. And by definition, a democracy should reject that as a first step.
 
The second important thing was that media literacy and news literacy need to go hand in hand.
 
News literacy is important because it conveys the importance of teaching everybody how to understand that a certain piece of news is actually news, and media literacy is important to make sure that you have the technical means to navigate the Information Society and, at the same time, to recognize what is not news.
 
So that would be our two points to participate in this conversation.
 
>> VIRGINIJA BALCIUNAITE: Thank you very much, Elena and Michael.
 
Before I give the spotlight to the speakers, I would like to introduce another participatory element that we will have for the audience to engage with us: the Menti tool. Do we have the page, please? So through this tool, we would like to get your answers and your concerns about online harms. So throughout the whole session, give us your feedback, and your answers will also be visible here, so the speakers will be able to address them accordingly.
 
And now, I would like to give the floor to Professor Lorna Woods. Could you share with us your insights about the legislative procedures and the white paper as well?
 
>> LORNA WOODS: Thank you very much. And thank you for inviting me. I would like to start about 18 months ago, and not with the white paper. I would like to start by mentioning a research project that I was involved with, with Carnegie UK Trust, where we were looking at this question of how can we do better? And we thought the problem was that a lot of the debate had been framed in terms of discussing whether social media platforms, in particular, were publishers or intermediaries, and we thought that really didn't fit what social media did, the range of activities that are carried out through, on, and in social media.
 
So we had to look around existing regulatory models and happened upon something called the Health and Safety at Work Act. What does this have to do with the Internet? We thought a better analogy, rather than publisher or intermediary, would be public space. And we noticed that the Health and Safety at Work Act requires employers to make the workspace safe. This is things like fire extinguishers and making sure that the stairs are well lit, that sort of thing.
 
And we thought you can apply this, certainly to social media platforms, maybe the Internet more generally. Because what you are looking at there is not content but the systems that the social media companies or other intermediary companies provide, and more particularly, the way they nudge us towards certain behaviors, reward certain behaviors. Do they actually think this through when they release them? Or are they just focused on how this is going to improve the bottom line?
 
So our thought was that we should borrow the idea from the Health and Safety at Work Act, that social media platforms should have a statutory duty of care, at least to think about the systems that they put in place. Now, this might be an incomplete solution but it may help.
 
Let's move forward a year to the white paper. And that also suggests a statutory duty of care.
 
Now, it doesn't go into too much detail about what the statutory duty of care looks like, but it does refer to safety by design. So I think that there are some similar ideas in there, that the core of the white paper is about how systems are set up. Having said that, if we look towards the end of the white paper, the government is proposing a lot of codes of practice. And some of these are quite focused, and they do focus on notice and takedown and how fast that should be, but those notice and takedown provisions, requirements, are focused towards the harder end of harms, things like child sexual abuse material and terrorism. And you don't see it so much when we are talking about, say, cyberbullying and that sort of thing.
 
So the white paper is under consultation. We don't know what the final form is going to be like, but I think that at least part of it is going to be looking at the systems that the companies put in place, even if there is still some concern about notice and takedown.
 
The government is thinking this is not just self-regulation. It is proposing that this should be enforced through a regulator. So I think that's an overview. I can answer questions later.
 
>> VIRGINIJA BALCIUNAITE: Yes, we will have a brief 15 minutes for audience engagement, during which people can give commentary and questions. Now I give the floor to Jan Kleijssen, please for your intervention.
 
>> JAN KLEIJSSEN: Thank you very much. I am representing the Council of Europe, and our starting point is the European Convention on Human Rights; the constant case law of the court provides that the rights that apply offline should also apply online.
 
Now, on content, it's clear it's the responsibility of states, the parties to the Convention on Human Rights, which is 47 European countries, to ensure that freedom of expression is guaranteed, that the rights and responsibilities that come with it are also respected, and also that no harm is done.
 
In addition to states, the Ruggie framework on business and human rights draws our attention to the fact that companies also have an obligation.
 
In Europe, we have seen that sometimes, or perhaps more than sometimes, companies have not always reacted as fast as they could or should. It is also clear that the terms and conditions of most service providers are not examples of clarity and non-legalese. I don't know how many of you have actually read all the terms and conditions of any platform you are using. Has anyone read the terms and conditions of, say, WhatsApp, or Facebook? We are three or four out of 100. So it's, of course, already clear that most people don't realize what these terms and conditions are.
 
We have also seen in Europe that when things go wrong, certain governments are quick to legislate. Germany has clear provisions on taking down harmful content. The fines have not yet been imposed, as I heard at a conference in Berlin, but substantial fines are provided for. There's a clear responsibility on the part of industry.
 
Now, you have all heard of the Christchurch Call, under which a number of measures are being taken following the horrific events in New Zealand. It's clear that the commitments taken in the Christchurch Call will be easier to implement for bigger companies than for small ones. I think this is something we should keep in mind, and it also goes for the German law. I don't know to what extent the UK white paper will deal with this, but smaller companies may have a tougher task in complying with very short deadlines.
 
What is also clear is that algorithms will play a big role in removing harmful content, given the speed and the scale. Let's take Christchurch as an example: it's clear that human moderation will not be able to avoid harm, and therefore algorithms will be used. And I would like to pick up a point that was made by the youth, the YOUthDIGers, a moment ago: it is, of course, important that these algorithms respond to certain criteria.
 
Now, at least 47 governments must already have been informed of what YOUthDIG came up with, because the proposal that was made to harmonize the ethical rules was taken up two weeks ago and has been taken even a step further. 47 governments have decided not only to align the existing 60 or so ethical frameworks that exist, but to start working on a binding treaty, distilling the essential principles of these 60 ethical charters into law. So that's important, and it will hopefully ensure that when algorithms are used, and algorithms are used for content moderation but more and more for Internet governance in general, these algorithms also correspond to certain criteria as regards transparency, and that a human rights impact assessment has been made.
 
Finally, I would like to add something on the human rights of the human moderators, because that's a subject not often mentioned at conferences. Some of you may have seen "The Cleaners", about human content moderation, the people that have to do the dirty work, and we know sadly that the richer world has the tendency to dump toxic waste on poorer countries. I think we should ensure that we don't dump toxic virtual waste on these countries in addition.
 
Thank you very much.
 
>> VIRGINIJA BALCIUNAITE: Thank you very much. And now we'll have a brief minute, just to see your answers to the question: why do you think we are falling short in addressing online harm?
 
Freedom; not the right politicians; and: not sure we actually are addressing it badly.
 
Slow law making. Actual knowledge. A lot of different answers, a lot of interesting answers, which we could also keep in mind for the future discussion.
 
And now, let's go back to Chris. The floor is yours.
 
>> CHRIS BUCKRIDGE: Thank you. So -- well, I'm not going to fix the politicians problem. I find myself somewhat oddly placed in this discussion, which is necessarily about social media. It's about a lot of content that we find online that can be dangerous, or harmful.
 
The organization that I work for, the RIPE NCC, manages registration and distribution of IP addresses: the addresses that Internet service providers obtain from us and then use to build their networks. On top of those networks run the social networks, runs the content that we are talking about, but we're very clear as an organization, and often as a community, to say: there is the infrastructure, the technology, and there is the content on top.
 
And governance of those two things is fundamentally quite different, or requires some different approaches, or at least different considerations. That said, I think there are obviously connections that can't be ignored. There needs to be a discussion about the intersection there.
 
I think Dr. Woods, your points actually sparked two things that I think are really interesting. The first was your initial point talking about the need to scope the problem. And I think that's something that's really, well, important, but also devilishly difficult all the time. Understanding what the problems are and what the solutions might look like.
 
I think the other point that really resonated with me was that what you turned to was a lot of existing law, and that's something that I think too often is maybe forgotten or overlooked: maybe the solution is already there in law, and maybe regulation or activities that try to interfere with the fundamental structures of the Internet are more damaging when there are solutions that might be found in the law in any case.
 
But I think, going back a little bit to the scoping-the-problem issue, one of the key points that that brings me to is the need to understand these things, and that's where that idea of enhanced cooperation and stakeholder groups working together really becomes fundamental. I think in the RIPE community, what we have seen over the last ten years is the development of closer relationships and more involvement of government regulators and law enforcement, and a big part of that has been helping them to understand both the technical issues, the technical scenarios and situations that we are dealing with, and also the policy processes that are available to everyone, including to regulators and law enforcement, to have some effect in that space.
 
And we have had some success, or what I would call success, in that in the last few years we have had Europol bringing policy proposals into the RIPE community policy process, to make small changes, small changes in how we at the RIPE NCC validate abuse contact records. And that can have a significant effect on how regulators or law enforcement can address the bad actors aspect of online harm, the people who are actually trying to do some of the damage that's being done there.
 
So that's probably my opening perspective on this.
 
>> VIRGINIJA BALCIUNAITE: Thank you so much.
 
And now, an intervention from my dear YOUthDIG fellow. What's your opinion?
 
>> Yes, thank you. I'm here representing the youth perspective, and also being a legal professional from the COCO system. We believe this is a multidimensional issue, and just imposing more regulation would not solve the issue as a whole.
 
For instance, in Armenia and several other countries, there are a lot of areas without regulation, and states are not providing enough, you know, minimal human rights protections with regard to online harms.
 
In Armenia, there are discussions about a draft law on hate speech, and there are a lot of issues with regard to intermediary liability. Even though there are two human rights court cases with regard to intermediary liability, the issue is still not really resolved in institutions and practice. And then you have the big Internet platforms, where there are only the rules that are created not by the community but by the platforms themselves, and people from, for instance, these regions have little or maybe no avenue towards resolving the application of these rules in their particular cases, because communication with these platforms may take a lot of time, may be costly, and sometimes may not end with any fruitful result. And people give up.
 
So in my opinion, this may have a chilling effect on people expressing themselves or trying to challenge different harmful content online. But on the other hand, we have people who will see that there is impunity for what takes place on the Internet, and this creates another issue in these countries where the political situation is, I can say, a bit fragile. For instance, in Armenia, after the revolution that happened last year, the Internet and the platforms were used in very interesting ways, but right now I see a lot of hate speech that is all over the place, and there are no tools to really fight it. And this creates a polarization and radicalization in the society, which is not something that we welcome.
 
So to sum up, I think we don't need overregulation, but we need to provide minimal regulations and then have extensive and effective communication and dialogue with the technical community.
 
And also, what I forgot to mention, which is really crucial, and we also highlighted it in the youth messages, is the low level of digital literacy, because people, especially in my region, cannot really tackle the issue of misinformation, and there are a lot of debates in this regard, and this is used a lot for propaganda. And I believe that this issue of online harm should be tackled from multiple angles.
 
As I said: minimal regulation, not overregulation, dialogue among the stakeholders and, of course, initiatives towards raising digital literacy.
 
>> VIRGINIJA BALCIUNAITE: Excellent. Thank you, Meri. I would like to invite questions or comments on what was said before. I open the floor to you. There are two microphones in front, so come up to us to ask your questions.
 
And just to start the discussion, I would like to go back to the YOUthDIG messages, because one of the messages was that we should create curricula and enforce them by regulation. And I would like to ask our speakers, our participants: what are your thoughts about enforcing curricula through regulation?
 
>> JAN KLEIJSSEN: You mean curricula, you mean content online?
 
>> VIRGINIJA BALCIUNAITE: In schools.
 
>> JAN KLEIJSSEN: In schools. Something on digital literacy and enforcement here: in most countries, as far as I know, children get some form of traffic education when they go to school, because, quite rightly, both authorities and parents assume that it is safer that the children know the code de la route, as the French say, the highway code, before they go into traffic.
 
On the digital highway, no such training is systematically provided. And one of the main concerns, and I couldn't agree more with you, is that digital literacy is generally lacking in schools, but also among other categories: elderly people, for instance, people with disabilities. There are lots of people who are excluded from the digital mainstream, and therefore I think there is an obligation, and we hope that in this instrument on algorithms that we will negotiate there will be an obligation included for states to ensure that digital literacy is promoted, so that people know.
 
I was recently at a conference of prison directors, it was about AI and prisons, and I asked how many of them had been using an AI application, or had been interacting with AI, in the past days. And out of 150 people, only two raised their hands. So I then asked whether anyone had used a mobile phone in the last few hours. It indicates how little people are aware.
 
>> LORNA WOODS: I was just going to say the white paper does have an admittedly very short section suggesting that digital literacy is important. It builds on existing media literacy, and the media regulator in the UK does have media literacy obligations, which is good.
 
I would just like to say that media literacy, when people talk about it, they do focus on users as victims, how to avoid being abused, that sort of thing.
 
And that's important. But I think it's also important to remind people that they shouldn't be bullies either, and to go into technical education, to actually get the computer scientists and those sorts of people to go beyond doing one lecture on the GDPR, and to say: think about what you are building and how you are building it, because a lot of things are built these days by taking from a library of software, and who knows what's buried in that library of software.
 
>> VIRGINIJA BALCIUNAITE: If there are no more comments, I would like to take the question from the floor. Could you please state your name and affiliation.
 
>> AUDIENCE MEMBER: Sure. I'm Colin Curry from Article 19.
 
When we think about ethics and well-being and different conceptual frameworks for evaluating what is right or wrong or good or bad: I know in our organization, when we were engaging, for example, in the IEEE's global initiative on the ethics of autonomous systems, one of the things that we pushed back against was the use of well-being or ethics as rubrics for evaluation, just because ethics can be contextual based on the sector or the environment, and well-being can be incredibly difficult to gauge, even if you are checking on your own well-being. So we were always pointing back to human rights frameworks as an appropriate standard, an appropriate rubric, for evaluation, because it is a corpus of international law and binding treaties, things like that.
 
With that in mind, I pose the question: do we think that the notion of harm is a suitably solid base to begin conversations about legislation? Because in my mind, I think that it draws us into a gray zone that might not have the clarity and the precision that would allow for due process, in the instance in which, for example, content was deemed harmful under the white paper system: you might then have insufficient grounds or documentation to be able to appeal this decision or to be able to follow it through the legal system.
 
So I'm just wondering what the panelists might think about this. Is this actually taking something that is insufficiently clear and trying to build legal frameworks on it?
 
>> CHRIS BUCKRIDGE: I will say something very brief to that. It was something I was saying to the YOUthDIG participants the other day. I think some of the, maybe not most violently harmful, but some of the most broadly harmful online content that we see perhaps doesn't even come from a malicious or even necessarily greedy place. It comes, perhaps, from a misunderstanding, or a development of what people are trying to do that they weren't aware of previously.
 
I think your point speaks a little to that. How do you actually regulate that? It's very much a backwards looking attempt.
 
>> LORNA WOODS: I think the question of what is good and bad is very difficult, and that's partly why the duty of care proposal that I put forward tries to sidestep it somewhat. So that it's not trying to define very, very closely how bad something is. What it's looking at is how you get to that harm broadly speaking and whether the social media platform has thought about it.
 
If I go back to my analogy of the real world: if there is a floorboard up in a building, somebody who sees that wouldn't think, oh, someone will twist their ankle or break their leg there because that floorboard is up.
 
What they would see is a broad idea of risk of harm, and they will probably get a hammer and knock the nails in. I guess I'm trying to stay focused on process, because with the idea of harms, especially when we move away from things that are clearly harmful, like child pornography, you do get difficulties in definition. So it's trying to shift to: are we thinking about this? I mean, Facebook famously was moving fast and breaking things, and I'm not sure that's a responsible attitude.
 
It's sort of thinking: what am I going to break, or what am I likely to break? Rather than just moving fast.
 
>> Yes, as a lawyer, I also agree. If the term is ambiguous, it will be hard to create a whole legal system based on this notion of harm or online harm. Also, I think another underlying aim of this new wave of the fight against online harm is that we want to change the community rules, which, as I said, in my opinion were not created by the community at all, but were imposed on the community and do not necessarily reflect the values of the community.
 
And with this, in the long term, I think we will achieve a shift in the mind-set of, also, the technical community, for instance. And I agree with you a lot when you said that we also need to talk with, I don't know, computer science people, because, for instance, I have friends that work in the IT field, and our perception of things is completely different, but when you communicate and try to understand different perspectives, there is a shift.
 
And they are building, for instance, their applications, and they consider these privacy issues now.
 
So I think it goes a long way. That's just a small effort, but if everyone works towards that goal, then maybe in the long term we will see some shift.
 
>> VIRGINIJA BALCIUNAITE: We have a question here. Yes, please.
 
>> AUDIENCE MEMBER: Yes, good afternoon. My name is Marco; full disclosure, I'm a colleague of Chris, and I work for the RIPE NCC. I'm sharing an observation from earlier in the workshop, and I would love to hear the panel's responses. It's two connected things. As one of the panelists mentioned: yes, we need AI, we can't do this with humans, and I hear that more often, yes, we need technology to rein in the technology. And extrapolating from that, it's something I hear echoing, starting with Gabriel's speech and also in the UK white paper.
 
Self-regulation has failed, and so we must regulate. We are kind of stepping up to a point where we quite possibly need to regulate self-regulation, and my feeling is that we're kind of going in a circle, with technology controlling technology and stacking layers of regulation. How far does that control go? What do we do if the next regulation, the UK white paper, fails; if the regulation of self-regulation fails? Aren't we in a neverending story? I would love to hear the panel's reflection on that.
 
>> JAN KLEIJSSEN: I will start. Perhaps in between there is something called co-regulation. So, have a legal framework, and the legal framework, as Chris pointed out, largely exists. There are very clear legal standards. We don't have to ban child pornography; such bans already exist in our countries. The more problematic areas are the ones in the middle, where it's not immediately clear whether something is harmful, but it may be harmful.
 
And so I think co-regulation is important: the responsibilities that the industry takes, but also the responsibilities that the state takes to intervene where necessary. And in law-abiding democracies there's a judicial system, there's a court system. Ultimately, under the treaty I mentioned first, the Convention on Human Rights, it is up to a court to decide whether there is an unjustified or a justified interference with freedom of expression, and whether or not some content is harmful and needs to be taken down.
 
If such a decision is taken, then it is, of course, vital that the decision is complied with as rapidly as possible. But I think co-regulation is perhaps the best way forward.
 
>> LORNA WOODS: I can say something about the white paper again. I feel like I'm droning on about the white paper but that is my purpose here, I feel.
 
I suppose you could describe the system envisaged by the white paper as a form of co-regulation, because it envisages a framework set in legislation at quite a general level, but then there will be a role for the regulator to develop more detailed codes of practice, and in this, I think it's envisaged that various stakeholders get involved. So that will be the technical community, and I suspect also civil society and various interest groups, to try and counter some of the problems you get with devolved standard setting, which can, not necessarily deliberately, get hijacked by those with expertise.
 
I suppose that is a form of co-regulation.
 
>> VIRGINIJA BALCIUNAITE: Excellent. We had the first question in the back. I'm not sure if the microphone works.
 
>> AUDIENCE MEMBER: My name is Alex Brown. So my question is about public spaces and the idea of duty of care being an appropriate metaphor for public spaces. I'm wondering how far or how much of the Internet that can really cover, because some Internet companies seem to be in the business of providing semipublic or semiprivate spaces. And to use a metaphor, an analogy, take swimming pool provision. Suppose you are a public body offering a new public swimming pool: no heavy petting, and you have a lifeguard on duty to make sure the rules are followed.
 
And now imagine you are a luxury hotel with a swimming pool, and you allow your patrons in, and you have done your safety bit by having signs on the wall, and yet the pole is only 10-foot high at this point. I'm wondering if that tells us how far the duties of care can go, or how they will be interpreted by different Internet companies?
 
>> LORNA WOODS: I guess that one is for me, is it?
 
I'm glad you gave the example of a swimming pool, because there is actually, from the Health and Safety Executive, which enforces the Health and Safety at Work Act, an entire guidance sheet on what you have to do to make swimming pools safe. But there is a good point you make, which is about proportionality and a risk-based analysis. We started talking about the duty of care and public space to make it easier to talk about, to move away from publisher or intermediary, and at some point, I think, we let go of the analogy.
 
We have said, and I think you will also find this in the white paper, that when you are looking at the duty of care, because you are looking at process and risk, it will vary depending on the nature of your virtual space.
 
So if you are, you know, sort of aiming at children, I would expect there to be much more stringent safeguards, tools and what have you, than if you are looking at a site that's aiming at 18 to 24-year-olds. Yes, risk and the different ways a service can be used should be factored in. It's not a one-size-fits-all approach.
 
>> VIRGINIJA BALCIUNAITE: Before we go to the next question, I would like to go to the online moderator to see if we have any questions from the online community.
 
>> ONLINE MODERATOR: Yes, there's a question from Arnana, who is asking: Chris Buckridge mentioned that his work is only on the framework upon which companies are built, upon which content is built. Facebook also said they were a framework, and now they find themselves in a rather precarious situation in relation to content moderation. Is there a risk or possibility where this extends to your work? Was it clearly understood or --
 
>> CHRIS BUCKRIDGE: I think so. I mean, I think there's definitely a risk and that's probably why I'm sitting on this panel.
 
No, I think when I make that point about the different levels, I mean, we often use that sort of hourglass model of the different layers of the Internet, and it oversimplifies things.
 
The situation in reality is certainly much more complex, and I think you do see some discussions fall back on the idea that it's the structures and the architecture that actually make it good, so we can't mess with that.
 
When I say there are laws in place and that that's what we should turn to, there are still obviously cases, and this is the basis of Internet governance, where the Internet's structure, being cross-border and global, presents challenges to laws and the ways we make laws or have historically made law.
 
So I think you are always going to have to find the balance there, and we can't sort of rule that all of the Internet infrastructure or architecture is sacrosanct. It's constantly evolving, good or bad, driven by different processes. So awareness on the part of regulators and users of what that architecture actually is, is important and significant.
 
>> VIRGINIJA BALCIUNAITE: Let's have both questions, because I know some panelists need to catch planes.
 
>> AUDIENCE MEMBER: My name is Adam Kingsley from Sky in the UK. We heard some of the challenges of trying to build a regime by maybe asking some challenging questions about what might be a harm specifically, and how do we ensure that we are not impinging on human rights, et cetera, et cetera.
 
I wanted to offer an observation that through the process in the UK, which was a government green paper and now a white paper, what's happened is there's been a really broad conversation over the last 18 months, I guess, and the work that Lorna has done and others have done in the UK has actually led to quite a broad consensus. People are now saying this idea of, what do you call it, regulating the self-regulation, or looking at the processes, is actually the way that this can be achieved, with an independent regulator that's governed by principles of proportionality and is risk-based, but with sanctions, ultimately. And it's just really interesting how everybody has come together, and I expect that it won't be seen as that controversial, given that it's a very novel, groundbreaking piece of regulation. I wonder if this sort of dialogue can happen more broadly in other countries and other Member States.
 
>> AUDIENCE MEMBER: Well, I would like to continue the previous question. As Chris mentioned, the Internet is cross-border. This is EuroDIG, not European Union DIG. In this region, there are already countries with regulations on harmful online content, Ukraine, Belarus, Russia, where regulatory minefields are already set and detonating under the feet of operators and freedom of speech. So unfortunately, I don't know why such examples are not involved in this discussion. It looks like everyone is discussing the online harms white paper from the United Kingdom, but if you look at that paper and you look, for example, at Russian regulation, all censorship is performed in the name of protecting children from online harms. Why are such cases not studied? Why are such cases not brought up? The Internet is global, and, for example, LGBTQ information in Russia could be considered harmful to children, whereas here in the Netherlands such information would not be considered harmful at all.
 
So why do you not give examples of bad regulation and successful regulation, and of legal obligations in other European countries?
 
>> JAN KLEIJSSEN: Thank you very much for that observation. First of all, we're not speaking just about the European Union. I think I mentioned the Council of Europe is 47 countries, and it includes all the countries you mentioned, excluding Belarus; all other European countries are members. I also mentioned the European Convention and Court of Human Rights; there are cases concerning regulatory frameworks in its machinery. There have been very few decisions so far, but there are a number of cases that have been brought and that are currently under consideration by the court. So watch this space. We will see --
 
>> AUDIENCE MEMBER: I think we are aware of this, but this panel is not discussing, this community is not discussing, these cases. I'm aware and you are aware; why are we not bringing this up, especially you sitting on the stage?
 
>> JAN KLEIJSSEN: Well, I think the first thing I said at this panel was drawing attention to the European Convention on Human Rights, so I don't think I can be accused of not having brought that up. And I also said it's for courts to decide. We have so far relatively few cases, but it also takes a while for cases to come through.
 
But I know there is a whole series of cases pending. That's why I said watch this space, because we will see in the coming months, and certainly before the end of the year, a number of cases coming out, where the court will decide whether or not the regulatory measures are in conformity with human rights law.
 
I must also apologize because I'm one of the people who has a very limited possibility of connection. So I will have to leave you now.
 
>> VIRGINIJA BALCIUNAITE: Thank you very much for being with us. Any more comments about the questions that we had? No?
 
Okay. If nobody else wants to raise any more concerns or questions, or comments, I would like to conclude by inviting our dear rapporteur Cedric. Would you like to read the messages?
 
>> CEDRIC AMON: So I will just move up here. My name is Cedric Amon, I'm working here with the Geneva Internet Platform, and as you have seen throughout the conference, we have been in charge of putting together these little messages, trying to sum up and have a somewhat forward-looking outlook on the topics discussed today.
 
So the first point that I have mentioned was: identifying the scope of online harms, as well as having a clear understanding of the terminology, are crucial in order to allow for the right responses to be taken. These include regulatory measures, for example legal frameworks based on self- or co-regulation, and the fostering of digital literacy.
 
The second point is that it is important not only to look at how to develop new laws, but rather it is necessary to consider existing regulations and human rights frameworks by which content and online harms can be evaluated and enforced.
 
And finally, we must not overlook the less visible and more difficult to identify issues such as cyberbullying or the outsourcing of content filtering conducted by humans.
 
So the final question I would have is whether there is any strong disagreement with these key messages.
 
I see none. Thank you very much.
 
>> VIRGINIJA BALCIUNAITE: Perfect! That's very brief. So nice. The audience agrees with everything.
 
So to address how to the future could or should look like, in terms of tackling I will legal and harmful content and activity online. However, we must remember that our online community is just as real as offline community. And we should be working together to make it a better place.
 
So I want to thank you all very much for the discussion, for the debate, for the comments, and the concerns raised. And just to end the session, I would like to express my deep gratitude for the key participants, to Professor Lorna Woods, for Jan Kleijssen who had to rush off to the airport, Chris Buckridge, thank you very much, and our YOUthDIG fellow, Mary thank you. Thank you so much.
 
And, of course, this would not have happened without the organizing team, without the EuroDIG Secretariat. Thank you for the part and the contributions. Also, Fabio, the moderator, the online moderator and Cedric the reporter and finally, I would like to thank the audience who are so active and it was a true pleasure. So thank you very much.
 
(Applause)
 


''This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.''


[[Category:2019]][[Category:Sessions 2019]][[Category:Sessions]][[Category:Media and content 2019]]
[[Category:2019]][[Category:Sessions 2019]][[Category:Sessions]][[Category:Media and content 2019]]

Revision as of 16:25, 8 July 2019


Session description

Governments throughout the world have been taking initiatives to pass legislation and enact regulation to secure the Internet against a large number of online harms from a diverse pool of attackers. Our question is how exactly that goal is being accomplished. The session will include key participants who will address different perspectives on how we can future-proof the Internet, and who will share their concerns and insights on how this policy area is developing.

Format

Panel

Further reading

Yrjo Lansipuro

Michael Oghia

Nadia Tjahja

Anelia Dimova & Bissera Zankova

  • Social Media and Convergence | COMPACT EC Horizon 2020 project website. Interesting headlines: Research findings and Policy recommendations for organisations and initiatives tackling fake news: False news stories are 70% more likely to be retweeted on Twitter than true ones; Wisdom of the Crowd: Multistakeholder perspective on the fake news debate; Minutes from Adria Information Disorder AI Tools 2018 Workshop; Challenges and dilemmas for national regulatory authorities in the age of convergence with respect to hate speech and the link to the Council of Europe conference in Zagreb

Anelia Dimova

Lorna Woods

People


Please provide name and institution for all people you list here.

Focal Points

  • Nadia Tjahja, Youth Coalition on Internet Governance, Steering Committee Member (WEOG) | CEO & Co-Founder Sunium
  • Michael Oghia, Global Forum for Media Development (GFMD)

Organising Team (Org Team)

List them here as they sign up. The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

  • Dr. Bissera Zankova, Media 21 Foundation, Bulgaria
  • Anelia Dimova, Information Society Policy Expert, MTITC, Bulgaria
  • Liu Yong
  • Elena Perotti, Executive Director of Media Policy and Public Affairs, WAN-IFRA
  • Matthias Kettemann, Senior postdoc and head of research for rules in digital communicative spaces at @BredowInstitut
  • Narine Khachatryan, STEM Society
  • Virginija Balciunaite, Communications and Public Relations, Sunium
  • Annie Ferguson, Head of the EU & International Online Policy Team, UK Government
  • Ruth Cookman, EU & International Online Policy Team, UK Government
  • Giacomo Mazzone, EBU-UER European Broadcasting Union
  • Kristina Olausson, ETNO - European Telecommunications Network Operators' Association
  • Daniëlle Flonk, Hertie School of Governance
  • Fredrik Dieterle, LightNet Foundation, Sweden
  • Annika Linck, European DIGITAL SME Alliance
  • Paul Franklin
  • Richard Wingfield
  • Evangelia Psychogiopoulou, Hellenic Foundation for European and Foreign Policy (ELIAMEP)
  • Dan Wilson, Online Harms, UK Government
  • Ben Wallis, Microsoft

Key Participants

  • Jan Kleijssen, Director Information Society – Action against Crime, Council of Europe
  • Prof. Lorna Woods, Professor of Law, University of Essex
  • Chris Buckridge, External Relations Manager, RIPE NCC
  • Meri Baghdasaryan, Human rights lawyer, Ara Ghazaryan law office LLC, YouthDIG Fellow

Moderator

Virginija Balciunaite - Sunium

Remote Moderator

Fabio Monnet

Reporter

  • Cedric Amon, Geneva Internet Platform

The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page for minutes, updates and action points.

You can find the minutes of the session in this GoogleDoc

Preliminary Timeframe & Agenda outline

  • Meeting 1 (March): Introductions - Meeting the organising team, explaining the preparation methods, and launching a list of speaker proposals
  • Meeting 2 (March): Generating variables - Each org team member can add to the digital wall contributions that are important to them in terms of "tackling online harms"
  • Meeting 3 (April): Connecting the system - The variables from the digital wall are clustered and mapped visually, drawing interconnecting lines of positive and negative feedback
  • Meeting 4 (April): Programme - Based on the results submitted by the org team members, the title of the session is confirmed and speakers are selected to address the levers of change
  • Meeting 5 (May): Speaker Preparations - Key participants are invited and briefed on the focus of the sessions and are asked to give a preliminary overview of their contributions
  • Meeting 6 (June, EuroDIG): Speaker preparations - Meeting with key participants and moderators

Messages

  • Identifying the scope of online harms, as well as having a clear understanding of the terminology, are crucial in order to choose the right response. These include regulatory measures (e.g. legal frameworks based on self-/co-regulation) and the fostering of digital literacy.
  • It is important not only to look at how to develop new laws, but rather, to consider existing regulations and human rights frameworks through which content and online harms can be evaluated and enforced.
  • We must not overlook the less visible and more difficult-to-identify issues such as cyberbullying or the outsourcing of content filtering conducted by humans.

Video record

https://youtu.be/b_rcvT_Vlz8

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-800-825-5234, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> NADIA TJAHJA: So when we started this process, it became clear to us very quickly that the participants wanted to focus on solutions-oriented messages. They wanted something concrete that could be discussed, argued and further developed, and to move away from the abstract. They wanted to really focus on something that can create change, and during our sessions they emphasized the importance of keeping the messages focused on the future, and on access for all.

One of their messages actually states: emerging concerns online, such as hate speech, fake news and cyberbullying, are not sufficiently discussed in school. We want to build a curriculum focused on raising awareness and developing the digital skills and literacy of the youngest students. It is clear that this is one of the issues they sought to address, with a particular focus on how regulations can be used to address online harms.

Therefore, I would like to invite you to join our next session, plenary 7, which is "Tackling online harms – a regulation minefield? Present and future". So I would like to invite the moderator, Virginija Balciunaite, and the key participants to come on the stage.

Thank you very much.

(Applause)

>> VIRGINIJA BALCIUNAITE: Good afternoon, ladies and gentlemen, YOUthDIGers, EuroDIGers.

Welcome to the final panel session, tackling online harms. I'm Virginija Balciunaite, and I'm more than happy to invite you to this discussion; it is my utmost honor to host it and to look at the future of online discussions across different stakeholder fields. We have a very diverse, but very engaged and impressive plenary session today.

And to begin with, I would like to outline how we started the session. We built up the session by trying to map out all of the harms that are out there. Our organizing team mapped out over 250 online harms, looking only at European documents and drawing only on our own expertise and experience.

And then we decided to look at the future: what does it hold for us? Because the future is messy. It looks messy. It feels messy. And, let's face it, regulation takes a long time to be voted on and implemented. That is why it is important to have this debate now, in an anticipatory way, and we need to include visions, viewpoints and knowledge not only from researchers and scientists, but also from stakeholders who have the possibility to look a bit into the future and share that knowledge with legislatures.

So looking into the future, we raised several concerns, and we therefore invited four key participants who we believed would address these concerns, and the future as they see it.

So firstly, we wanted to see the current developments. So we invited Professor Lorna Woods, from the University of Essex, who has extensive experience in the field of media policy and communications regulation, including social media and the Internet.

And we wanted to address the recent policy developments in the United Kingdom, because they have recently published an online harms white paper which sets out the government's plan to keep the UK safer online through legislative and non-legislative measures.

Secondly, we wanted to see why we're falling short on the human rights aspect. So we invited Jan Kleijssen, Director of Information Society – Action against Crime at the Council of Europe.

Thirdly, during our team meetings we raised concerns about the future of technology in relation to online harms. We said the telecommunications and Internet service providers should be here to talk about the future of regulation. So we brought Chris Buckridge, External Relations Manager for the RIPE NCC, whose role involves engaging a broad range of stakeholders, including members, governments, law enforcement agencies, and international organizations.

And finally, it is my great pleasure to introduce a YOUthDIG fellow, Meri Baghdasaryan, a human rights lawyer at the Ara Ghazaryan law office in Armenia.

So I have introduced four different speakers, but I would also like to introduce different sessions that are directly related to this panel. Therefore, I would like to invite Michael and Elena: Michael Oghia and Elena Perotti, from the related workshops, including workshop 12, "Play the villain and learn to fight the misinformation".

>> MICHAEL OGHIA: If it's okay, I will speak from the floor. So as she said, my name is Michael Oghia, and I was one of the co-focal points for one of the early workshops that we had, WS8. Both WS8 and WS12 fed into this plenary. WS8 was focused essentially on the threats to our information ecosystem, such as trolls, disinformation, et cetera, and how journalism can strengthen it. And although you will obviously be able to read the key messages when they come out, some of the major points we discussed were that disinformation, for instance, is a phenomenon that's evolving quickly in quantity and quality, and that some solutions for institutions such as the European Commission include a rapid alert system to flag disinformation in a timely manner, the creation of a shared platform between the EU and Member States, and the implementation of voluntary codes of practice for online platforms.

Our workshop actually included a representative from Google, someone from the Council of Europe, a representative from the European Broadcasting Union and a representative from Internews Ukraine, and was moderated by a representative from the European Commission. So we had many different views on this, but one thing that definitely came out was that the sustainability of journalism and news media organizations is one of the key problems underlying some of the current challenges online, especially disinformation. Because if we want our information ecosystems to be more robust, if we want them to include more factual information, one of the best ways to do that is to support good journalism and support fact-based evidence and fact-based information. So those are some of the major points that we covered, and I will stop there and give the floor to Elena.

>> ELENA PEROTTI: Hello, everyone. As was correctly stated, I was the focal point for workshop 12, where we tried to understand disinformation from the inside, by going behind the lines of the misinformation crisis. We played a game called the Bad News game, which gave us a little hint of what goes into the process of creating misinformation with a specific aim.

There were two specific takeaways from the workshop. Number one, we all need to collectively stop talking of fake news. Fake news is a term that stopped having any meaning the moment it started being weaponized in order to create propaganda and conspiracy theories. That's something really important. We should instead use terms like disinformation, misinformation, propaganda, hoax, not fake news. Fake news, as we could see, is normally used to attack journalists when they do the work that they are supposed to be doing, which is to hold the powerful to account. And by definition, a democracy should reject that as a first step.

The second important thing was media literacy and news literacy need to go hand in hand.

News literacy is important because it conveys to everybody how to understand that a certain piece of news is actually news, while media literacy is important to make sure that you have the technical means to navigate the information society and, at the same time, to recognize what is not news.

So that would be our two points to participate in this conversation.

>> VIRGINIJA BALCIUNAITE: Thank you very much, Elena and Michael.

Before I give the spotlight to the speakers, I would like to introduce another participatory element for the audience to engage with us: the Menti tool. Do we have the page, please? Through this tool, we would like to get your answers and your concerns about online harms. So throughout the whole session, give us your feedback; your answers will be visible here as well, so the speakers will be able to respond accordingly.

And now, I would like to give the floor to Professor Lorna Woods. Could you share with us your insights about the legislative procedures and the white paper as well?

>> LORNA WOODS: Thank you very much. And thank you for inviting me. I would like to start about 18 months ago, and not with the white paper. I would like to start by mentioning a research project that I was involved with, with Carnegie UK Trust, where we were looking at this question of how we can do better. We thought the problem was that a lot of the debate had been framed in terms of discussing whether social media platforms, in particular, were publishers or intermediaries, and we thought that really didn't fit what social media does, the range of activities that are carried out on, in and through social media.

So we had a look around existing regulatory models and happened upon something called the Health and Safety at Work Act. What does this have to do with the Internet? We thought a better analogy, rather than publisher or intermediary, would be public space. And we noticed that the Health and Safety at Work Act requires employers to make the workplace safe. This means things like fire extinguishers and making sure that the stairs are well lit, that sort of thing.

And we thought you can apply this, certainly to social media platforms, maybe the Internet more generally. Because what you are looking at there is not content but the systems that the social media companies or other intermediary companies provide, and more particularly, the way they nudge us towards certain behaviors and reward certain behaviors. Do they actually think this through when they release them? Or are they just focused on how this is going to improve the bottom line?

So our thought was that we should borrow the idea from the health and safety at work act, that social media platforms should have a statutory duty of care, at least to think about the systems that they put in place. Now, this might be an incomplete solution but it may help.

Let's move forward a year to the white paper. And that also suggests a statutory duty of care.

Now, it doesn't go into too much detail about what the statutory duty of care looks like, but it does refer to safety by design. So I think that there are some similar ideas in there, and that the core of the white paper is about how systems are set up. Having said that, if we look towards the end of the white paper, the government is proposing a lot of codes of practice. Some of these are quite focused, and they do focus on notice and takedown and how fast that should be, but those notice and takedown requirements are aimed at the harder end of harms, things like child sexual abuse material and terrorism. You don't see them so much when we are talking about, say, cyberbullying and that sort of thing.

So the white paper is under consultation. We don't know what the final form is going to be like, but I think that at least part of it is going to be looking at the systems that the companies put in place, even if there is still some concern about notice and takedown.

The government is thinking this is not just self-regulation. It is proposing that this should be enforced through a regulator. So I think that's an overview. I can answer questions later.
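To make the notice-and-takedown point concrete, here is a toy sketch of deadline tracking by harm category. All categories and time limits below are invented for illustration; the white paper does not prescribe these numbers.

  # Toy sketch: tracking notice-and-takedown deadlines per harm
  # category. Categories and time limits are invented for
  # illustration; the white paper does not prescribe these numbers.
  from datetime import datetime, timedelta, timezone

  # Stricter (shorter) deadlines for the "harder end" of harms
  TAKEDOWN_DEADLINES = {
      "terrorist-content": timedelta(hours=1),
      "child-sexual-abuse-material": timedelta(hours=1),
      "hate-speech": timedelta(hours=24),
      "cyberbullying": timedelta(days=7),  # softer end: review, not a race
  }

  def takedown_due(category: str, notified_at: datetime) -> datetime:
      # Returns the time by which the platform should have acted
      return notified_at + TAKEDOWN_DEADLINES[category]

  notice = datetime.now(timezone.utc)
  print(takedown_due("terrorist-content", notice))
  print(takedown_due("cyberbullying", notice))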

>> VIRGINIJA BALCIUNAITE: Yes, we will have a brief 15 minutes for audience engagement, during which people can offer comments and questions. Now I give the floor to Jan Kleijssen for your intervention, please.

>> JAN KLEIJSSEN: Thank you very much. I am representing the Council of Europe. Our starting point is the European Convention on Human Rights and the constant case law of the court, which provides that the laws that apply offline should also apply online.

Now, on content, it is clear that it is the responsibility of states, the parties to the Convention on Human Rights, which is 47 European countries, to ensure that freedom of expression is guaranteed, that the rights and responsibilities that come with it are also respected, and that no harm is done.

In addition to states, the Ruggie framework on business and human rights draws our attention to the fact that companies also have obligations.

In Europe, we have seen that sometimes, or perhaps more than sometimes, companies have not always reacted as fast as they could or should. It is also clear that the terms and conditions of most service providers are not examples of clarity and non-legalese. I don't know how many of you have actually read all the terms and conditions of any platform you are using. Has anyone read the terms and conditions of, say, WhatsApp, or Facebook? We are three or four out of 100. So it is already clear that most people don't realize what these terms and conditions say.

We have also seen in Europe that when things go wrong, certain governments are quick to legislate. Germany has clear provisions on taking down harmful content. The fines have not yet been imposed, as I heard at a conference in Berlin, but substantial fines are foreseen. There's a clear responsibility on the part of industry.

Now, you have all heard of the Christchurch Call, under which a number of measures are being taken following the horrific events in New Zealand. It's clear that the commitments taken in the Christchurch Call will be easier to implement for bigger companies than for small ones. I think this is something we should keep in mind, and it also goes for the German law. I don't know to what extent the UK white paper will deal with this, but smaller companies may have a tougher task in complying with very short deadlines.

What is also clear is that algorithms will play a big role in removing harmful content, given the speed and the scale; let's take Christchurch as an example. It's clear that human moderation alone will not be able to avoid harm, and therefore algorithms will be used. And I would like to pick up a point that was made by the YOUthDIGers a moment ago: it is, of course, important that these algorithms respond to certain criteria.

Now, at least 47 governments must already have been informed of what YOUthDIG came up with, because the proposal that was made to harmonize the ethical rules was taken up two weeks ago and has been taken even a step further. 47 governments have decided not only to align the 60 or so existing ethical frameworks, but to start working on a binding treaty, distilling the essential principles of these 60 ethical charters into law. So that's important, and it will hopefully ensure that when algorithms are used, and algorithms are used for content moderation but more and more for Internet governance in general, these algorithms also correspond to certain criteria as regards transparency, and that a human rights impact assessment has been made.

Finally, I would like to add something on the human rights of the human moderators, because that's a subject not often mentioned at conferences. Some of you may have seen "The Cleaners", about human content moderation, the people who have to do the dirty work. We know, sadly, that the richer world has a tendency to dump toxic waste on poorer countries. We should ensure that we don't dump toxic virtual waste on these countries in addition.

Thank you very much.
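To make the transparency criteria Jan Kleijssen mentions concrete, here is a minimal sketch of an auditable record for an automated moderation decision. It is illustrative only: the fields, names and threshold are invented for this example, and no treaty or white paper prescribes this schema.

  # A minimal sketch, assuming a hypothetical moderation pipeline:
  # every automated removal records which rule fired, which model
  # version decided, and which human rights impact assessment (HRIA)
  # covers it, so the decision can later be audited or appealed.
  from dataclasses import dataclass, field
  from datetime import datetime, timezone

  @dataclass
  class ModerationDecision:
      content_id: str
      rule_applied: str     # e.g. a category from a code of practice
      model_version: str    # which algorithm/version made the call
      confidence: float     # model confidence, for proportionality review
      hria_reference: str   # pointer to the human rights impact assessment
      decided_at: datetime = field(
          default_factory=lambda: datetime.now(timezone.utc))
      human_reviewed: bool = False

      def needs_human_review(self, threshold: float = 0.9) -> bool:
          # Low-confidence automated removals are routed to a person,
          # one possible way of keeping a human in the loop.
          return self.confidence < threshold

  decision = ModerationDecision(
      content_id="post-123", rule_applied="hate-speech",
      model_version="clf-2019.06", confidence=0.72,
      hria_reference="HRIA-2019-04")
  print(decision.needs_human_review())  # True: goes to a human moderator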

>> VIRGINIJA BALCIUNAITE: Thank you very much. And now we'll take a brief minute, just to see your answers to the question: why do you think we are falling short in addressing online harms?

Freedom; not the right politicians; and not sure we actually are addressing it badly.

Slow law-making. Actual knowledge. A lot of different answers, a lot of interesting answers, which we could also keep in mind for the future discussion.

And now, let's go back to Chris. The floor is yours.

>> CHRIS BUCKRIDGE: Thank you. Well, I'm not going to fix the politicians problem. I find myself somewhat oddly placed in this discussion, which is necessarily about social media, about a lot of the content that we find online that can be dangerous or harmful.

The organization that I work for, the RIPE NCC, manages the registration and distribution of IP addresses: the addresses that Internet service providers obtain from us and then use to build their networks. On top of those networks run the social networks, the content that we are talking about. But we're very clear, as an organization and often as a community, to say: there is the infrastructure, the technology, and there is the content on top.

And governance of those two things is fundamentally quite different, or at least requires different approaches or different considerations. That said, there are obviously connections that cannot be ignored. There needs to be a discussion about the intersection there.

I think, Dr. Woods, your points actually sparked two things that I think are really interesting. The first was your initial point about the need to scope the problem. And I think that's something that's really important, but also devilishly difficult all the time: understanding what the problems are and what the solutions might look like.

I think the other point that really resonated with me was that what you turned to was a lot of existing law. That's something that is too often forgotten or overlooked: maybe the solution is already there in law, and maybe regulation or activities that try to interfere with the fundamental structures of the Internet are more damaging when solutions might be found in the law in any case.

But going back a little bit to the scoping-the-problem issue, one of the key points that brings me to is the need to understand these things, and that's where the idea of enhanced cooperation and stakeholder groups working together really becomes fundamental. I think in the RIPE community, what we have seen over the last ten years is the development of closer relationships and more involvement of government regulators and law enforcement, and a big part of that has been helping them to understand both the technical issues, the technical scenarios and situations that we are dealing with, and also the policy processes that are available to everyone, including to regulators and law enforcement, to have some effect in that space.

And we have had some success, or what I would call success, in that in the last few years we had Europol bringing policy proposals into the RIPE community policy process, to make small changes in how we at the RIPE NCC validate the abuse contact information in a registry record. Small changes like that can have a significant effect on how regulators or law enforcement can address the bad-actors aspect of online harm, the people who are actually trying to do some of the damage that's being done there.

So that's probably my opening perspective on this.
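As a concrete illustration of the abuse contact mechanism Chris describes, here is a minimal sketch that looks up the abuse contact registered for an IP resource via the public RIPEstat data API. The abuse-contact-finder endpoint is real, but the exact response fields are not guaranteed here, so the sketch prints the returned data block rather than assuming a schema.

  # A minimal sketch, assuming only the Python standard library and
  # the public RIPEstat "abuse-contact-finder" data call.
  import json
  import urllib.request

  def abuse_contact(resource: str) -> dict:
      # resource can be an IP address, a prefix, or an AS number
      url = ("https://stat.ripe.net/data/abuse-contact-finder/"
             f"data.json?resource={resource}")
      with urllib.request.urlopen(url, timeout=10) as resp:
          return json.load(resp)["data"]

  # Example: the abuse contact registered for a RIPE NCC address
  print(json.dumps(abuse_contact("193.0.6.139"), indent=2))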

>> VIRGINIJA BALCIUNAITE: Thank you so much.

And now, an intervention from our dear YOUthDIG fellow. What's your opinion?

>> MERI BAGHDASARYAN: Yes, thank you. I'm here representing the youth perspective, and also speaking as a legal professional. We believe this is a multidimensional issue, and just imposing more regulation would not solve the issue as a whole.

For instance, in Armenia and several other countries, there are a lot of areas without regulation, and states are not providing even minimal human rights protections with regard to online harms.

In Armenia, there are discussions about a draft law on hate speech, and there are a lot of issues with regard to intermediary liability; even though there are two European Court of Human Rights cases on intermediary liability, the institution and the practice are still not really settled. And then you have the big Internet platforms, where the only rules are those created not by the community but by the platforms themselves, and people from, for instance, these regions have little or maybe no avenue for challenging the application of these rules in their particular cases, because communication with these platforms may take a lot of time, may be costly, and sometimes may not end in any fruitful result. And people give up.

So in my opinion, this may have a chilling effect on people expressing themselves or trying to challenge different harmful content online. On the other hand, we have people who see that there is impunity on the Internet and act accordingly, and this creates another issue in countries where the political situation is, I would say, a bit fragile. For instance, in Armenia, after the revolution that happened last year, the Internet and the platforms were used in very interesting ways, but right now I see a lot of hate speech all over the place and there are no tools to really fight it. And this creates polarization and radicalization in society, which is not something that we welcome.

So to sum up, I think we don't need over-regulation, but we do need to provide minimal regulation and then have extensive and effective communication and dialogue with the technical community.

And what I forgot to mention, which is really crucial, and which we also highlighted in the youth messages, is the low level of digital literacy. People, especially in my region, often cannot tackle the issue of misinformation; there is a lot of debate in this regard, and it is used a lot for propaganda. I believe that the issue of online harms should be tackled from multiple angles.

As I said: minimal regulation rather than over-regulation, dialogue among the stakeholders and, of course, initiatives towards raising digital literacy.

>> VIRGINIJA BALCIUNAITE: Excellent. Thank you, Meri. I would now like to invite questions or comments on what was said before. I open the floor to you. There are two microphones in front, so please come up and ask your questions.

And just to start the discussion, I would like to go back to the YOUthDIG messages, because one of the messages was that we should create curricula and enforce them by regulation. And I would like to ask our speakers, our key participants: what are your thoughts about enforcing curricula through regulation?

>> JAN KLEIJSSEN: You mean curricula, you mean content online?

>> VIRGINIJA BALCIUNAITE: In schools.

>> JAN KLEIJSSEN: In schools. Something on digital literacy and enforcement here: in most countries, as far as I know, children get some form of traffic education when they go to school, because, quite rightly, both authorities and parents assume that it is safer that the children know the code de la route, as the French say, the highway code, before they go into traffic.

On the digital highway, no such training is systematically provided. And one of the main concerns, and I couldn't agree more with you, is that digital literacy is generally lacking in schools, but also among other categories: elderly people, for instance, or people with disabilities. There are lots of people who are excluded from the digital mainstream, and therefore I think there is an obligation here. We hope that in this instrument on algorithms that we will negotiate, an obligation will be included for states to ensure that digital literacy is promoted, so that people know.

I was recently at a conference of prison directors, which was about AI and prisons, and I asked: how many of you have been using an AI application, or have been interacting with AI, in the past days? And out of 150 people, only two raised their hands. So I then asked whether anyone had used a mobile phone in the last few hours. It indicates how little people are aware.

>> LORNA WOODS: I was just going to say the white paper does have an admittedly very short section suggesting that digital literacy is important. It builds on existing media literacy work, and the media regulator in the UK does have media literacy obligations, which is good.

I would just like to say that when people talk about media literacy, they do focus on users as victims: how to avoid being abused, that sort of thing.

And that's important. But I think it's also important to remind people that they shouldn't be bullies either, and to go into technical education, to actually get the computer scientists and those sorts of people to go beyond doing one lecture on the GDPR and to say: think about what you are building and how you are building it. Because a lot of things are built these days by taking from a library of software, and who knows what's buried in that library of software.

>> VIRGINIJA BALCIUNAITE: If there are no more comments, I would like to take the question from the floor. Could you please state your name and affiliation.

>> AUDIENCE MEMBER: Sure. I'm Colin Curry from Article 19.

When we think about ethics and well-being and different conceptual frameworks for evaluating what is right or wrong, good or bad: I know in our organization, when we were engaging, for example, in the IEEE's global initiative on the ethics of autonomous systems, one of the things that we pushed back against was the use of well-being or ethics as rubrics for evaluation, just because ethics can be contextual, based on the sector or the environment, and well-being can be incredibly difficult to gauge, even if you are checking on your own well-being. So we were always pointing back to human rights frameworks as an appropriate standard, an appropriate rubric, for evaluation, because of their corpus of national law and binding treaties, things like that.

With that in mind, I pose the question: do we think that the notion of harm is a suitably solid base on which to begin conversations about legislation? Because in my mind, I think that it draws us into a gray zone that might not have the clarity and the precision that would allow for due process. In an instance in which, for example, content was deemed harmful under the white paper system, you might have insufficient grounds or documentation to be able to appeal this decision or to follow it through the legal system.

So I'm just wondering what the panelists might think about this. Is this actually taking something that is insufficiently clear and trying to build legal frameworks on it?

>> CHRIS BUCKRIDGE: I will say something very brief to that. It was something I was saying to the YOUthDIG participants the other day. I think some of the most broadly harmful online content that we see, maybe not the most violently harmful, perhaps doesn't even come from a malicious or even necessarily greedy place. It comes, perhaps, from a misunderstanding, or from a development of what people were trying to do that they weren't aware of previously.

I think your point speaks a little to that. How do you actually regulate that? It's very much a backwards-looking attempt.

>> LORNA WOODS: I think the question of what is good and bad is very difficult, and that's partly why the duty of care proposal that I put forward tries to sidestep it somewhat, so that it's not trying to define very closely how bad something is. What it's looking at is how you get to that harm, broadly speaking, and whether the social media platform has thought about it.

If I go back to my analogy with the real world: if there is a floorboard up in a building, somebody who sees that wouldn't think, oh, someone will twist their ankle or break their leg there because a floorboard is up.

What they would see is a broad idea of a risk of harm, and they would probably get a hammer and knock the nails in. I guess I'm trying to stay focused on process, because with the idea of harms, especially when we move away from the things that are clearly harmful, like child pornography, you do get difficulties in definition. So it's trying to shift to: are we thinking about this? I mean, Facebook famously was moving fast and breaking things, and I'm not sure that's a responsible attitude.

It's sort of thinking, what am I going to break? Or what am I likely to break? Rather than just moving fast?

>> MERI BAGHDASARYAN: Yes, as a lawyer, I also agree. If the term is ambiguous, it will be hard to create a whole legal system based on this notion of harm or online harm. Also, I think another underlying aim of this new wave of the fight against online harms is that we want to change the community rules, which, as I said, in my opinion were not created by the community at all, but were imposed on the community, and do not necessarily reflect the values of the community.

And with this, in the long term, I think we will achieve a shift in the mind-set of the technical community as well. And I agree with you a lot when you said that we also need to talk with computer science people, because, for instance, I have friends who work in the IT field and our perception of things is completely different, but when you communicate and try to understand different perspectives, there is a shift.

And now, when they build their applications, they consider these privacy issues.

So I think it goes a long way. That's just a small effort, but if everyone works towards that goal, then maybe in the long term we will see some shift.

>> VIRGINIJA BALCIUNAITE: We have a question here. Yes, please.

>> AUDIENCE MEMBER: Yes, good afternoon. My name is Marco; full disclosure, I'm a colleague of Chris, and I work for the RIPE NCC. I want to share an observation from earlier in the workshop, and I would love to hear the panel's responses. It's two connected things. As one of the panelists mentioned: yes, we need AI, we can't do this with humans alone. And I hear that more and more often: yes, we need technology to rein in the technology. And extrapolating from that, it's something I hear echoing, starting with Gabriel's speech and also in the UK white paper.

Self-regulation has failed, and so we must regulate. We are stepping up to a point where we quite possibly need to regulate self-regulation, and my feeling is that we're going in a circle, with technology controlling technology and stacking layers of regulation. How far does that control go? What do we do if the next regulation, the UK white paper, fails, if the regulation of self-regulation fails? Aren't we in a never-ending story? I would love to hear the panel's reflection on that.

>> JAN KLEIJSSEN: I will start. Perhaps in between there is something called co-regulation. So have a legal framework, and the legal framework, as Chris pointed out, largely exists. There are very clear legal standards. We don't have to ban child pornography; that ban already exists in our countries. The more problematic areas are the ones in the middle, where it's not immediately clear whether something is harmful, but it may be.

And so I think co-regulation is important: the responsibilities that industry takes, but also the responsibilities that the state takes to intervene where necessary. And in law-abiding democracies there is a judicial system, a court system. Ultimately, under the treaty I mentioned first, the European Convention on Human Rights, it is finally up to a court to decide whether there is an unjustified or a justified interference with freedom of expression, and whether or not some content is harmful and needs to be taken down.

If such a decision is taken, then it is, of course, vital that the decision is complied with as rapidly as possible. But I think co-regulation is perhaps the best way forward.

>> LORNA WOODS: I can say something about the white paper again. I feel like I'm droning on about the white paper but that is my purpose here, I feel.

I suppose you could describe the system envisaged by the white paper as a form of co-regulation, because it envisages a framework set in legislation at quite a general level, but then there will be a role for the regulator to develop more detailed codes of practice, and in this, I think, it's envisaged that various stakeholders get involved. So that will be the technical community, and I suspect also civil society and various interest groups, to try and counter some of the problems you get with devolved standard-setting, which can sometimes get hijacked, not necessarily deliberately, by those with expertise.

I suppose that is a form of co-regulation.

>> VIRGINIJA BALCIUNAITE: Excellent. We had the first question in the back. I'm not sure if the microphone works.

>> AUDIENCE MEMBER: My name is Alex Brown. So my question is about public spaces and the idea of the duty of care being an appropriate metaphor for public spaces. I'm wondering how far, or how much of the Internet, that can really cover, because some Internet companies seem to be in the business of providing semipublic or semiprivate spaces. And to use a metaphor, an analogy: take swimming pool provision. Suppose you are a public body offering a new public swimming pool. You have rules, no heavy petting, and you have a lifeguard on duty to make sure the rules are followed.

And now imagine you are a luxury hotel with a swimming pool and you allow your patrons to use it, and you have done your safety bit by having signs on the wall. I'm wondering if that tells us how far the duties of care can go, or how they will be interpreted by different Internet companies.

>> LORNA WOODS: I guess that one is for me, is it?

I'm glad you gave the example of a swimming pool, because there is actually, from the Health and Safety Executive, which enforces the Health and Safety at Work Act, an entire guidance sheet on what you have to do to make swimming pools safe. But it is a good point you make, which is about proportionality and risk-based analysis. We started talking about the duty of care and public space to make this easier to talk about, to move away from publisher versus intermediary, and at some point, I think, we let go of the analogy.

We have said, and I think you will also find this in the white paper, that when you are looking at the duty of care, because you are looking at process and risk, it will vary depending on the nature of your virtual space.

So if you are, you know, sort of aiming at children, I would expect there to be much more stringent safeguards, tools and what have you, than if you are looking at a site that's aiming at 18- to 24-year-olds. Yes, risk and the different ways a service can be used should be factored in. It's not a one-size-fits-all approach.
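As a toy illustration of that proportionality idea, the sketch below maps a service's audience and feature set to a list of expected safeguards. All categories, feature names and safeguards are invented for this example; neither the white paper nor the Carnegie proposal prescribes such a formula.

  # A toy sketch of a proportionate duty of care: safeguards scale
  # with the risk profile of the "virtual space". All names here are
  # invented for illustration.
  from typing import List, Set

  def required_safeguards(audience: str, features: Set[str]) -> List[str]:
      safeguards = ["clear terms of service", "user reporting mechanism"]
      if audience == "children":
          # Higher-risk audience, so more stringent measures
          safeguards += ["age-appropriate design",
                         "default-private profiles",
                         "human-reviewed takedown queue"]
      if "direct_messaging" in features:
          safeguards.append("block and mute controls")
      if "live_streaming" in features:
          safeguards.append("rapid-response moderation")
      return safeguards

  print(required_safeguards("children", {"direct_messaging"}))
  print(required_safeguards("adults", {"live_streaming"}))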

>> VIRGINIJA BALCIUNAITE: Before we go to the next question, I would like to go to the online moderator to see if we have any questions from the online community.

>> ONLINE MODERATOR: Yes, there's a question from Arnana, who is asking: Chris Buckridge mentioned that his work concerns only the framework upon which companies, and the content, are built. Facebook also said they were a framework, and now they find themselves in a rather precarious situation in relation to content moderation. Is there a risk or possibility that this extends to your work? Was that clearly understood, or --

>> CHRIS BUCKRIDGE: I think so. I mean, I think there's definitely a risk and that's probably why I'm sitting on this panel.

No, I think when I make that point about the different levels, I mean, we often use that sort of hourglass model of the different layers of the Internet, and it oversimplifies things.

The situation in reality is certainly much more complex, and I think you do see some discussions fall back on the idea that this is the structure and the architecture that actually makes the Internet work, so we can't mess with that.

When I say the laws are in place and that that's what we should turn to, there are still obviously cases, and this is the basis of Internet governance, where the Internet's structure, being cross-border and global, presents challenges to laws and to the ways we make laws, or have historically made law.

So I think you are always going to have to find the balance there, and we can't simply rule that all of the Internet's infrastructure or architecture is sacrosanct. It is constantly evolving, for good or bad, driven by different processes. So awareness on the part of regulators and users of what that architecture actually is, is important, and is significant.

>> VIRGINIJA BALCIUNAITE: Let's take both questions together, because I know some panelists need to catch planes.

>> AUDIENCE MEMBER: My name is Adam Kingsley, from Sky in the UK. We have heard some of the challenges of trying to build a regime, by asking some challenging questions about what specifically might be a harm, and how we ensure that we are not impinging on human rights, et cetera, et cetera.

I wanted to offer an observation: through the process in the UK, which was a government green paper and now a white paper, what's happened is that there's been a really broad conversation over the last 18 months, I guess, and the work that Lorna and others have done in the UK has actually led to quite a broad consensus. People are now saying that this idea of, what you might call it, regulating the self-regulation, or looking at the processes, is actually the way this can be achieved: with an independent regulator governed by principles of proportionality, risk-based, but ultimately with sanctions. And it's just really interesting how everybody has come together, and I expect that it won't be seen as that controversial, given that it's a very novel, groundbreaking piece of regulation. I wonder if this sort of dialogue can happen more broadly in other countries and other Member States.

>> AUDIENCE MEMBER: Well, I would like to continue the previous question. Chris mentioned that the Internet is cross-border. This is EuroDIG, not European-Union-DIG. In this region, there are already countries with regulations on harmful online content: Ukraine, Belarus, Russia, where regulatory minefields have already been set and are detonating under the feet of operators and of freedom of speech. So, unfortunately, I don't know why such examples are not included in this discussion. It looks like everyone is discussing the online harms white paper from the United Kingdom, but if you look at that paper and then look, for example, at Russian regulation, all censorship there is performed in the name of protecting children from online harms. Why are such cases not studied? Why are such cases not brought up? The Internet is global, and, for example, LGBTQ information in Russia can be considered harmful to children, whereas here in the Netherlands such information would not be considered harmful at all.

So can you give examples of bad regulation and of successful regulation and legal obligations in other European countries?

>> JAN KLEIJSSEN: Thank you very much for that observation. First of all, we're not speaking just about the European Union. I mentioned that the Council of Europe is 47 countries, and it includes the countries you mentioned, except Belarus; all other European countries are members. And as I also mentioned, with regard to the European Convention and Court of Human Rights, there are cases concerning regulatory frameworks in the machinery. There have been very few decisions so far, but a number of cases have been brought and are currently under consideration by the court. So watch this space. We will see --

>> AUDIENCE MEMBER: I think we are aware of this, but this panel is not discussing these cases, this community is not discussing these cases. I'm aware and you are aware; why are we not bringing this up, especially you, sitting on the stage?

>> JAN KLEIJSSEN: Well, I think the first thing I did at this panel was to draw attention to the European Convention on Human Rights, so I don't think I can be accused of not having brought that up. And I also said it's for courts to decide. We have relatively few cases so far, but it also takes a while for cases to come through.

But I know there is a whole series of cases pending. That's why I said watch this space, because in the coming months, and certainly before the end of the year, we will see a number of cases coming out, where the court will decide whether or not the regulatory measures are in conformity with human rights law.

I must also apologize, because I'm one of the people with a very tight connection to catch, so I will have to leave you now.

>> VIRGINIJA BALCIUNAITE: Thank you very much for being with us. Any more comments on the questions that we had? No?

Okay. If nobody else wants to raise any more concerns, questions or comments, I would like to conclude by inviting our dear rapporteur, Cedric. Would you like to read the messages?

>> CEDRIC AMON: So I will just move up here. My name is Cedric Amon; I'm working here with the Geneva Internet Platform, and as you have seen throughout the conference, we have been in charge of putting together these little messages, trying to sum up the topics discussed today and give a somewhat forward-looking outlook on them.

So the first point I mentioned: identifying the scope of online harms, as well as having a clear understanding of the terminology, is crucial in order to allow the right responses to be taken. These include regulatory measures, for example legal frameworks based on self- or co-regulation, and the fostering of digital literacy.

The second point: it is important not only to look at how to develop new laws, but rather to consider existing regulations and human rights frameworks by which content and online harms can be evaluated and enforced.

And finally: we must not overlook the less visible and more difficult-to-identify issues, such as cyberbullying or the outsourcing of content filtering conducted by humans.

So the final question I would have is whether there is any strong disagreement with these key messages.

I see none. Thank you very much.

>> VIRGINIJA BALCIUNAITE: Perfect! That's very brief. So nice. The audience agrees with everything.

So we have addressed how the future could or should look in terms of tackling illegal and harmful content and activity online. However, we must remember that our online community is just as real as our offline community, and we should be working together to make it a better place.

So I want to thank you all very much for the discussion, for the debate, for the comments and the concerns raised. And to end the session, I would like to express my deep gratitude to the key participants: to Professor Lorna Woods; to Jan Kleijssen, who had to rush off to the airport; to Chris Buckridge, thank you very much; and to our YOUthDIG fellow, Meri. Thank you so much.

And, of course, this would not have happened without the organizing team and the EuroDIG Secretariat; thank you for your part and your contributions. Also Fabio, the online moderator, and Cedric, the reporter. And finally, I would like to thank the audience, who were so active; it was a true pleasure. So thank you very much.

(Applause)


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.