Cross-border hate speech and defamation – living together online – WS 08 2013


21 June 2013 | 11:30-13:00
Programme overview 2013

People

Key Participants

  • Adriana Delgado, The No Hate Speech Movement
  • Konstantinos Komaitis, ISOC
  • Marco Pancini, Google
  • Comments by: Rui Gomes, Council of Europe

Co-moderators

  • Paul Fehlinger, Internet & Jurisdiction Project
  • Francisco Seixas da Costa, North-South Centre, Council of Europe

Reporter

  • Nicolas von zur Mühlen, Max-Planck Institute for Foreign and International Criminal Law

Messages

  • Fragmentation: Current piecemeal solutions in different national jurisdictions to tackle the problem of hate speech and defamation entail the danger of a fragmentation of cyberspace, e.g. through techniques like Geo-IP Filtering or ISP blocks.
  • Transparency: Companies dealing with the definition and restriction of free speech by prohibiting hate speech and defamation must be transparent in their terms of service.
  • Education: The prevention of hate speech and defamation through education plays an important role.
  • Tools: Hotlines and safer internet centres are currently the most common tools for internet users to handle online hate speech and defamation.
  • Multistakeholder: The problem of hate speech and defamation has to be discussed in a multistakeholder process to avoid disproportionate measures.

Session report

The aim of the workshop was to discuss how to handle hate speech and defamation in shared cross-border online spaces, where not only different national laws but also different social values apply.

Issues raised:

  • Are current tools to handle cross-border hate speech and defamation effective?
  • Can national laws or Terms of Service efficiently deal with cross-border online defamation and how do they interface?
  • Do we have the tools and frameworks today to handle diversity in common cross-border online spaces?

Main points of the discussion:

  • Fragmentation: Current piecemeal solutions in different national jurisdictions to tackle the problem of hate speech and defamation entail the danger of a fragmentation of cyberspace, e.g. through techniques like Geo-IP Filtering or ISP blocks.
  • Transparency: Companies are dealing with the definition and restriction of free speech by prohibiting hate speech and defamation in their Terms of Service. Therefore, measures taken by these entities (esp. takedown procedures) have to be transparent for the users to ensure granularity.
  • Education: The prevention of hate speech and defamation through education can play an important role, as, for example, the No Hate Speech Movement youth campaign shows.
  • Tools: Hotlines and safer internet centres are currently the most common tools for internet users to handle online hate speech and defamation.
  • Multistakeholder: The problem of hate speech and defamation has to be discussed in a multistakeholder process to create dialogue, identify best practices, and avoid disproportionate measures.

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com


This text is being provided in a rough draft format. Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.


>> PAUL FEHLINGER: I think we are going to start. Everybody came back from the coffee break. So welcome to workshop 8 on cross-border hate speech and defamation – living together online.

Today we are going to discuss how to handle cross-border hate speech and defamation in shared online spaces, where we have the coexistence of diverse local laws and norms, so basically people interacting on those platforms that come from different jurisdictions have different conceptions of what is offending, what is considered hate speech and defamation.

And we want to have, today, a look at the existing tools to basically handle cross-border hate speech and defamation and to see what frameworks are needed to manage digital coexistence. So in a nutshell, do we, today, have the tools for digital coexistence, the cohabitation of people from diverse countries with diverse backgrounds in shared cross-border online spaces.

So I am going to introduce our panel to you. Left of me is Rui Gomes, in charge of youth work in the Council of Europe, where he is coordinating the No Hate Speech Campaign. Then we have Adriana Delgado, an activist in the No Hate Speech Movement and also in women’s rights in Portugal. Then we have Marco Pancini of Google; Olof Ehrenkrona, Ambassador and Senior Advisor to the Swedish Minister for Foreign Affairs; Konstantinos Komaitis, Policy Analyst for the Internet Society; and my co-moderator, Francisco Seixas da Costa, Executive Director of the North-South Centre of the Council of Europe and former career diplomat and ambassador.

So Francisco, could you start maybe to set the scene for us, and what are we talking about today?

>> FRANCISCO SEIXAS DA COSTA: Okay. Yesterday, during one of the panels, somebody wrote in a tweet which appeared on the wall that the Human Rights people enjoy offline are also relevant online, and I took note of this very simple phrase because this is, in fact, the key question that we need to think about in this exercise. I will base my comments on the experience of the North-South Centre, which has existed here in Lisbon for 25 years. And I have the impression that all the discussion we have had up till now has been based on this balance between the need to preserve freedom of expression and the capacity of the international community to address the question of its most flagrant abuses.

More precisely, the issues we are dealing with in this panel, hate speech and defamation, have been at the core of the international debate in recent years. The questions of threats, motivated insults, denial, gross minimization, and approval of genocide and crimes against humanity are being addressed more and more frequently, and I would say the Council of Europe has a lead in this discussion.

The Internet proved to be a powerful instrument to support racism and xenophobia and to permit certain persons to disseminate these ideas easily. The scope of our session is now much larger. Intolerance goes beyond these lines, and it covers aggressive nationalism and discrimination and hostility against minorities and people of migrant origin. There is also the question of defamation, which goes beyond all political aspects, entering the area of criminality as such.

And conceptual clarity on these issues remains the central point of this discussion. Some of the concepts we are dealing with are not dealt with in the same way by different sectors. This is a key question for the way to solve this and to create minimal operative mechanisms to intervene.

It is clear that peaceful cohabitation and tolerance in cyberspace are at risk, and up to now we have been unable to find ways and means to tackle that vital question. We live here in a clear contradiction. There is cross-border expression that coexists with different national legal systems which will always be difficult to harmonize.

The risk of a fragmentation of national reactions, if a more global and effective approach to the control of negative effects is not obtained, is very, very high. This is a political question, but in fact, and this is where I draw on our own experience, behind it there is a cultural divide that we always need to take into account.

Maybe I am wrong, but I got the impression from the debates, the parts of the debate I witnessed yesterday and today, that most people tend to base their assessments on the experience created in the western world, where the Internet was born and flourished. But when we talk about states and governments, we have to have in the back of our minds that we are not necessarily talking about democracy, about political powers with legitimacy, sincerely preoccupied with the rights and freedoms of their citizens.

On the contrary, we are sometimes dealing with international subjects, formally equal and respectable in the international world, who promote internal policies based on censorship and the restriction of basic freedoms.

This is a question of realpolitik. We are obliged to deal with them in day-to-day life, not only in multilateral organisations, but also in bilateral ways. These are our partners, even if they don’t follow the basic rules that we are convinced should be followed.

So sometimes certain restrictions are linked not only with, I’d say, political motivations, but with certain idiosyncrasies. This morning the question of the King of Thailand was mentioned. You can talk about the protection of certain kinds of expression in Turkey. So these are elements which are not necessarily linked with basic democratic rights but with some symbolic elements in certain countries. And this must also be considered part of the hate speech or defamation question, because it is a question that will appear in this divide.

These are very, very delicate points. And the aim of our action is not to convince them. That would be a very patronizing approach. And I must say we learned not to recommend this line of action. What we need, according to our experience, is to try to mobilize the civil society and to promote the modernizing factor of those societies.

Education for a more responsible citizenship and intercultural mobilization are the keys and the possible answers for this.

The structure I run, the North-South Centre of the Council of Europe, finds it easier to confront, or better, to put into dialogue what may be considered the perspective from the North and the views from the South. We have tried in recent years to open a dialogue with those who hold some very different views and who very frequently start the debate from a point very far from ours.

I would say that dialogue at all costs is our permanent line of work. Thousands of people, mostly young people and women, have been involved in our exercises. In our case, we tend to work with the relations, interactions, and tensions between freedom of expression and freedom of religion, for example. But dialogue between hard-liners on both sides appears to be very difficult. We usually say that the best way to operate is to try to shift the attention from the speaker to the listener, to focus on the listener’s perception of the speech rather than on the freedom of expression of one individual.

Our experience points in the direction of placing global education at the centre of the strategy to promote coexistence. We use online training courses – and I invite you to visit the North-South Centre site – to promote this dialogue, which tends more and more to cover the Arab world, where the Internet sometimes proves to be a unique information and education resource.

We operate on a very simple assumption. If the digital network – Facebook, YouTube, Twitter, blogs – is a new tool for the dissemination of hate speech, digital instruments may also prove decisive in promoting dialogue, debate, and cultural interaction, in educating people, and in giving people positive instruments of freedom of expression, to disseminate guides of good practices. The involvement of young people is fundamental to the success of these measures. In our experience, it helps to fight stereotypes and to promote cultural diversity.

One thing is clear for us, based on our long experience of running online courses involving participants from different cultural environments. It would be unrealistic to try to establish a global and common legal framework. It is a nonstarter. As was said this morning, the current common denominator is very low.

A different thing, in this domain as in many others, is to present some kind of consensual, even if limited, approach obtained at a limited regional level – and the work done in the Council of Europe is a good example – as a case to be considered, not necessarily as a beacon or even a benchmark, but as a motivating factor for the future promotion of coordination in different and culturally distant areas of the world. The models we present, based on our own experience in limited areas, can be used as, let’s say, a model to be retaken and revisited in other areas.

If we want to be effective in this domain – and I finish now – we need to be always clear about our own principles, respectful of the voices of others, and prepared to be patient regarding short-term results. There will be no short-term results in this area. In our experience, the spread of information, education, and permanent dialogue are the key elements.

>> PAUL FEHLINGER: Thank you very much. Okay. So today we want to have a dialogue on the operational level. We don’t necessarily want to talk about very high-level principles; we want to talk about the actual situation on the ground, the everyday situation, and the existing frameworks and tools to deal with it. So in order to get us into the right mood, I am going to present to you, very quickly, four cases, two on hate speech and two on defamation, that happened within the last eight months in the European region.

So I guess many of you heard about the Twitter case in France. You know, France has very strict laws regarding racist speech, and there was this one hashtag last October, “a good Jew”, and there were horrible racist jokes about Jews: a good Jew is a dead Jew, things like that. It became a top-three trending topic in France.

So the French union of Jewish students filed a complaint against Twitter, and in January a court ruled: Twitter, you need to comply with French law. This is a violation. Please provide us with the identity of the authors of those tweets. We want to prosecute them. And please, in our French jurisdiction, implement a notification tool so that when we see this sort of hate speech on the Internet, we can flag it and report it.

Twitter did not yet comply because their terms of service say, basically, that they only reply to valid requests from courts in the U.S.

So they appealed the decision, and just a few days ago a French court declined this appeal and basically said: French law is not optional for you if you operate on our territory. But Twitter has no office in France.

So this is the first case.

The second case on hate speech: Facebook. You all know that the terms of service of those platforms actually form a sort of transnational normative order that regulates a lot of your online activity. So when you are on the space of Facebook, there are rules concerning hate speech and defamation. And here I am going to quote the Statement of Rights and Responsibilities of Facebook: You will not bully, intimidate, or harass any user, and you will not post content that is hate speech, threatening, or pornographic, incites violence, or contains nudity or graphic or egregious violence.

So those rules exist, but the internal policies of Facebook were not always very straightforward. There is a case in Iceland: a woman last December reported a picture, I think it was a picture of a beaten woman, with the comment: Women are like rats, they need to be beaten/cut regularly. Facebook refused to take down this picture. There have been many of these cases where the determination by companies of what is considered hate speech is very difficult. A campaign of over 100 women’s NGOs recently lobbied Facebook to change this determination policy, and they succeeded: Facebook acknowledged just a few weeks ago that they are entering into a dialogue on those issues.

So let’s have a quick look at defamation now. In Germany, we had a very interesting case involving Google and the auto-complete function in Google search. There was a German who sued Google because when you typed in the name of his company, it actually suggested the term fraud.

The court ruled that Google must remove suggestions in its auto-complete tool when it is notified of defamatory ones. And you know, those suggestions are generated by algorithms. So in Germany there is now something like a notice-and-takedown regime for defamatory speech.

There’s another high-profile auto-complete case pending in Germany that involves the former First Lady of Germany and suggestions in the area of prostitution.

And last but not least, there is a very recent case in Ireland. Basically, a taxi driver put a video on YouTube of a man fleeing his taxi without paying. The video went viral, everybody was complaining, and one person identified the fare dodger as an Irish student; however, this was a wrong allegation. So this thing went viral and everybody talked about him. And he went to court in Ireland. Basically, the court said: well, until we have the final trial, we would like Facebook, YouTube, and (Indiscernible) to remove this video from their platforms and also to prevent its republication.

And this is a delicate situation because as you know, most of the headquarters of those companies are located in Ireland.

So just to set us into an operational mind-set, those are the cases we want to talk about today.

So I’m opening the discussion. Rui Gomes, when we talk about hate speech on the Internet, online hate speech across borders, is it different in terms of quality or quantity? Tell us more about that.

Do you have a microphone?

Can we also see the Twitter wall, or is there no Twitter wall?

Okay. I am going to switch on the Twitter wall, but please go ahead.

>> RUI GOMES: Good morning, everybody. Thank you for passing me the floor. Hate speech online is not new. It is relatively different, however, from hate speech offline or altogether, you can say.

The difference and significance of hate speech online lie essentially in five or six factors. Some of them are, in fact, illusions, but they contribute a lot to the frequency of hate speech and especially to its impact. One of them is the illusion of anonymity: that you can do things online and people will not know it’s you. You can take part, you can comment, you can disseminate, and nothing will happen.

Second, related to this, is the feeling of impunity. It’s normal to do it, and nobody can do anything about it because it is free speech.

So these are the two starting points, this feeling of anonymity, which gives a sense of power, and this feeling of impunity, which goes together with it.

The third factor I would mention is, of course, the impact and outreach. What you post online has a potentially much wider impact because it can be retweeted, resent, and potentially stay there forever, used year after year. Famous hoaxes were, in fact, used to legitimize racist speech, to legitimize different forms of ideas of superiority, which remain, are recycled, and are reused to say: this is true. You see? This Muslim person did that. It’s written there. Then you go and check, and it is written there, but when you trace it back, it is years old. The impact is something specific. Hate speech online has a very wide reach, and combined with anonymity, it affects people you don’t know. It can have very serious consequences for people you don’t know.

Of course, people can and do use hate speech, especially in cyberbullying, to target specific people, but the problem also arises if you just place hate speech online and think it’s not a problem, because you don’t see who is reading it, or that for the people reading it or confronted with it, it is a problem.

The fifth factor I would mention is the continuity between online and offline behavior. What people do online has consequences offline, even if they don’t see them. So what happens is that people do things online that they would not do offline, because offline it is just not on: either it’s not allowed, or you would not do it because you would look like an idiot.

>> So it’s more and it’s also different?

>> RUI GOMES: It’s more, it’s also different, and the impact is wider.

And now I will just stop, perhaps, by mentioning that hate speech online does have a dimension which is more individual: the person, the individual behind the computer, who resends, retweets, or just shares some pictures or memes which he or she finds funny or interesting. But it is also used by movements, political movements, that are not interested in a society of Human Rights online or offline, but, on the contrary, in spreading ideas with hateful, racist, discriminatory content. That is something we have to be aware of: even if the individual person does not have that in mind, such content is actually being used and promoted to undermine the foundations of democracy. This is exactly why it is important for organisations like the Council of Europe to be able to take action.

Action, for us – I’ll release the floor in a minute – action for us has to take into account, as the ambassador said in the beginning, education. It’s not just repression. That may be needed. But it’s also education. Thank you.

>> Thank you very much. Let’s have a look at the current tools to handle cross-border hate speech and defamation. Marco, I would like to know how Google reconciles the different conceptions of hate speech and defamation around the world.

You have a very large user base. How do you reconcile this? How do you make the determination when asked to take things down? And, very importantly, how did this develop over time? From a U.S. company that was still rather local to being as global as you are today, how did this evolve?

>> MARCO PANCINI: Thank you. Thank you very much for having us with you today.

Let me state two very important points. The first is that, as a responsible citizen, we comply with local laws and local sensitivities, because it’s not only about laws; it’s also about the local community, local sensitivities.

On top of that, we have terms and conditions, which represent our vision of the platform, but also the way users have been interacting with our platform since the beginning. So people expect, when using a blog or using YouTube, a certain degree of freedom of expression, because again, these tools are very important.

Let’s look at what is happening these days in Turkey. The main driver for getting information, images, and video out of Turkey is the social networks: YouTube, Twitter. So again, users expect from us a degree of respect for their freedom of expression. But being a responsible citizen means that laws are applicable. We comply with laws and have done so since the beginning. Surely, the amount of content uploaded to YouTube by users keeps increasing, and this requires more support from us towards the community. But what you expect from an online service like Google is not to become judge and jury. You don’t want us to become the gatekeeper of this. This is why it’s not an easy job to look at the content.

This is why we need the support of the community. We provide reporting tools and flagging tools on all our online platforms to allow users to report content. This is why we need collaboration with law enforcement, with authorities, in reporting content online. But again, I don’t like to talk about a balance between free speech and respect for the law: freedom of expression, as a fundamental right, cannot be put in a balance with other things.

I think it’s very important, from our point of view, on top of all the things I have said, to be very transparent in the way we deal with content on our platforms. This means that we provide all information in relation to the (Inaudible) we are taking. This is why we provide information on the number of requests that we receive in relation to users of the platform. And we allow users to look into this and see the way we deal with content and with law enforcement requests, and we actually encourage researchers and civil society groups to look into this. So every user can look and see the way we deal with it.

But these are very important societal challenges. I don’t believe that hate speech was created by the Internet. Sure, on the Internet hate speech can be different; the way people deal with it and the anonymity can push them to be maybe more outspoken. But the challenges that we see online in relation to hate speech are the product of society, challenges that we are facing these days, from the economic crisis to the growing importance of populist movements and racism. This is why we need to be involved in multistakeholder projects, involved with institutions, to try to understand this phenomenon and respond to it better.

I would like to mention the anti-radicalization network launched by the European Commission, which is bringing together NGOs, online service providers, and institutions to discuss ways to address these issues, not only through censorship, but also through creating a dialogue and sharing best practices. There is a great example of the (Inaudible) campaign, where NGOs went to a big festival of the far-right movement and gave out free t-shirts with very bad and aggressive racist logos; but when the people who came to the concert went back home and washed their t-shirts, suddenly the logo changed, and there was a message: just as you can wash this t-shirt, you can exit the far-right movement. And we can help you do it. From there they started a viral campaign online, got a lot of positive results, and started a debate.

I think that by sharing these kinds of best practices, and by having a strong focus on addressing these societal challenges together, we can make steps forward.

>> So you have global terms of service that regulate what may be said on your platforms and what not, and you accommodate national requests also based on national sensitivities, which means it’s not only official legal requests; there is also a cultural dimension to having that sensitivity.

>> MARCO PANCINI: Our terms take local sensitivities into consideration. In the example of Thailand, it makes a lot of sense; we take action only in relation to the local law.

>> PAUL FEHLINGER: Konstantinos, let’s have a look at the proliferation of different approaches in different jurisdictions to tackling hate speech and defamation. There are different conceptions, different laws, and they are increasingly being implemented. We see, for instance, a notice-and-takedown regime for defamation in Germany, and the court request from France for Twitter to establish a notification system. What is the wider impact on the ecology of the Internet when we see this proliferation of national solutions to regulate cross-border platforms?

>> KONSTANTINOS KOMAITIS: Thank you. I would like to make a couple of points and start with something Marco actually said. The issue of hate speech, just like the issue of jurisdiction, is not one of type but one of degree. So bullying online is exactly the same as bullying offline. The difference is that within a split second it can spread; bullying can spread to an entire school community, as an example, and this, as we can all understand, creates huge problems.

Now, in the context of hate speech in particular, we have always seen some sort of jurisdictional discretion. We have some minimum standards deriving from international law. Actually, international law instructs Nation States to adopt regulations that regulate hate speech. But we also see that various states come at this from different angles, the case in point being Nazism, where even within Europe we have different laws.

So one of the things that we see happening through this exercise in the online environment is that we have two sovereigns, if you can say that. On the one hand, we have the traditional Nation State that has to deal with those issues. On the other hand, we have online platforms. You’ve read the terms and conditions on Facebook. These terms and conditions apply internationally to one billion users. This is quite unprecedented, if you think about it. And although it is quite charming, if you allow me, at the same time we have private intermediaries actually regulating the way we understand free speech on the Internet.

However, going back to the issue of how to deal with hate speech in the online environment: the initial, spontaneous reaction of the state is to ask for blocking of this type of content. That’s when we start seeing various problems. We see various measures. One is domain name blocking. We see it in the context of generic Top-Level Domains, like dot com, where it targets individual websites, or in the context of country code Top-Level Domains, where the country, as part of its sovereignty, is more able to do that and can do it much more easily.

Then we see it at the level of ISPs, with various governments actually asking the ISPs and various other intermediaries to start filtering what goes in and out of their channels. And again, there we see some sort of attempt to regulate speech based, again, on local laws and cultural sensitivities.

And we also see it at the level of content, and that’s where it becomes really, really tricky for platforms like Google and Facebook: they have a global reach, yet they are being asked to block specific types of content from being visible in various countries that consider this content to be against their national beliefs, laws, and culture.

A very good example was the YouTube video, the Innocence of Muslims, which again demonstrated this tension, if you want, of how to deal with it – and I’m sure that Marco will be able to say more on how Google had to deal with that.

So if I were to sum up all of these things: even though there is a very clear understanding that Nation States should be able to control, if you want, what is happening within their borders, because of the Internet that is becoming more challenging. And what we all need to bear in mind is that the Internet, first and foremost, is a global tool. So when we see, more and more often, domain name blocking, this fragmentation, this national approach to the Internet, then there is a great possibility of moving towards the Balkanization of the Internet. And that is going to become very tricky.

There is a way to address this, but we need to rethink our ideas of how international cooperation is structured. We need the assistance and the experience of current platforms, so Google’s transparency reports and Facebook’s transparency reports are very important in that respect, because they allow us to see what is reasonable and what is not. And at the same time, the multistakeholder governance framework that exists will assist in actually coming up with these minimum principles and standards.

So I’ll stop here, and we can have a discussion.

>> PAUL FEHLINGER: Okay. Thank you very much. So let’s come down from this level of the wider implications for the Internet as a global network, and let’s look at the actual interfaces between states, platforms, and users. Olof, the current procedures for takedowns, notifications, determinations, and the implementation of policies and laws: are they transparent, and do they ensure due process?

>> OLOF EHRENKRONA: They could be more transparent, definitely. But I would start by saying that I am not quite comfortable saying "hate speech and defamation," because defamation is not a legal ground for restriction of freedom of expression, while hate speech is. And I think this distinction is very important, because already the hate speech legislation illustrates the difficulties of having restrictions on freedom of expression.

Let me take an example. In Sweden, we had a Pentecostal preacher who spoke against homosexuality in church. Was that acceptable according to the Swedish regulation with regard to hate speech? If he had done it outside of church, it would not have been acceptable. But the court said that since it was in the church, it was a consequence of the freedom of religion that he could use.

But it illustrates the difficult and delicate decisions that you have to take.

And of course, I am not happy with the situation where we outsource this delicate decision on how to regulate freedom of expression to private firms and private companies. So I basically think that the enormously restrictive view that the big companies have on these issues is basically a very good thing. We need to fight hate speech, defamation, et cetera, with other means – what you said about education, for example. We need to have more debate on these issues, because I am quite sure that there is an overwhelming majority in the global village that do not accept hate speech and do not accept infringements on other people's views on religion, et cetera. I think that basically, people in the global village are quite liberal in that sense.

So I think that we should use the power of the Internet and the power of people’s rational thinking to fight these problems.

We will never be able to succeed through repression. But there are good chances that we can succeed, at least 90%, by voluntary means. And I think it's extremely important that those companies who sometimes, in different places, do have to put in certain restrictions are very, very transparent about this, so that we know what the grounds are, whether this is something that we can accept as a very unusual thing, as an exception, and how they have reasoned in these particular cases. So I think transparency is very, very important.

But transparency does not solve the problem. The basic problem can only be solved by responsible end users taking action and taking the debate to those villains on the net, I would say, who use the net for spreading hate speech, hurting other people's feelings, and not showing respect to other people.

>> PAUL FEHLINGER: And I think one of the campaigns or initiatives to tackle this, also with an education approach, is the No Hate Speech Movement by the Council of Europe, for which Adriana is a volunteer. Adriana, can you tell us about your work and what you brought with you today for us?

>> ADRIANA DELGADO: Can you hear me? Is it working? Okay. So you want me to speak a bit about the campaign? All right.

The No Hate Speech Campaign is a campaign that is promoted by the Council of Europe, and Rui is also participating in it. What we are trying to do is mostly raise awareness of the problem of hate speech on the Internet and its consequences. It's not about censorship. It's not about saying all hate speech should be deleted forever. But through education – as the other panelists said – and through activities that try to raise awareness, we try to make people think about the fact that there is someone else on the other side. In spite of the anonymity, in spite of the apparent impunity that Rui spoke about, there are real people on the other side. So try to think about that.

About the video that Paul mentioned: I collected from other activists in this campaign their opinions on how to deal with this cross-border hate speech, how to harmonize it. Should we look for some universal harmonizing code of conduct? Should we create these bubbles and intensify the national differences? Or do we need something new? And this is what we gathered.

>> Okay. Let’s have a look at that.

>> Law has traditionally been inseparable from the concept of territory. If we think of different jurisdictions as different colors, then in each country, law would have one color alone, thus avoiding conflicts of values. So if we find yellow, we behave yellow; if I happen to be in a green country, I will have to behave green.

Territory defines legitimacy of law. Borders limit the extent of such legitimacy. But what to do when there is no territory, no borders?

The Internet is technically borderless. Its users, millions of dots, each with its own color, have to share this space to interact. This sometimes leads to conflict. If a traditional territorial approach no longer works as a means of regulation, how can cohabitation in cross-border online spaces be mediated in a way that is fair to all and respects every user's rights? These questions gain particular relevance when we tackle the subject of hate speech. Why is it apparently so abundant? What can we do to prevent and discourage it?

>> I started thinking, and I put myself in a very huge dilemma. From one side, if we create the national bubbles with different laws and regulations, we have huge risks toward democracy in many countries.

>> I don’t think that any creation of national laws and regulation would actually help us to regulate the hate speech online.

>> There is no international executive that could protect international binding law. I think it is important to strengthen the already existing means of the national executive.

>> I believe that hate speech, both online and offline, should be universally recognized as a violation of Human Rights. But what is hate speech should be decided by groups of countries. What can be taken as a joke in one country can be really offensive in another.

>> I understand it's really difficult and sensitive and complicated to regulate the online space, and much more so if we speak about privacy or freedom issues. I consider that we first need an international, worldwide agreement with a minimum basis to protect the fundamental rights of human beings in the online field as well.

>> It would be really great if we had a universal code of conduct or online legislation about cross-border hate speech in online environments.

>> It's contextual to a large degree, and since we do not have a universal formula that would apply and capture the whole gamut of hate expressions, it seems a generic approach would undermine and would clash with culture.

>> I don't think there's one clear-cut way to really solve the problem that is hate speech online. But I think a multifaceted approach should be the way to go, one that involves stakeholders from end users to governments to the multinationals that are running these platforms – Facebook and Google and all the others. I think especially the multinationals have a very specific role to play in trying to have internal mechanisms to deal with the problem that is hate speech, because they are the couriers of this message, and they should be able to intervene where necessary to stop hateful content being spread around online.


(End of video)

>> I think it's time for the debate to be opened to the audience. I think we should base it on the previous discussion and the presentation, but also on the basic questions which were put by the video, to get some input from the audience regarding the traditional and emerging roles and responsibilities of governments vis-a-vis the private cross-border platforms – and in particular, whether national laws and terms of service deal efficiently with cross-border online defamation, and how they interface. Please.

>> So we have – oh, we need to share the microphone. Yeah, thank you.

>> Good morning. My name is Diane, and I am a student of journalism. I actually have a question for Seixas da Costa. I understand you have a diplomatic background in your career. I would like to ask about hate speech and how it can play a role in international relations at this point. We live in a global village. Can hate speech actually extend to political relationships between countries?

>> There’s one remote question also.

>> Thank you. My name is Patrick from the Internet Society chapter of Luxembourg. It’s not so much a question I have, but more of an observation. When trying to counter hate speech, who are we after? The writer or the reader? By this I mean do we want to prevent the publishing of hate speech, or do we want to prevent people from viewing it? That’s two different things.

If a country manages to block hate speech at the national level, for example, by asking ISPs to block the viewing of some content, that won’t block the publishing of this content. Those who want to publish this content will always find a technical way to get around it.

I am wondering if the energy being put in this thing is really useful? In the end, maybe the authorities might say, well, look, we’ve done something against hate speech. But those who really want to read it will find a way. Thank you.

>> Hi. A simple question. Who decides exactly what is hate speech, when I’m hate speeching?

>> Hello. I am also active within the campaign, but I usually argue from a different point of view, because I don't think that the campaign is really dealing with hate speech. For example, it was said earlier that hate speech is always a matter of degree. Usually that degree is defined such that speech should incite violence in order to be hate speech. Most of the examples did not do that; they did not incite violence.

Therefore, I would say the campaign is not necessarily dealing with hate speech on that level – speech that incites violence and would, in a sense, violate other people's Human Rights. And therefore, I think it is positive to speak about the different attitudes online, also the hateful attitudes online, but I'm also very much questioning the different possibilities (something crashed) – that are introduced on how hate speech could be combated.

My question would be, rather: are there other forms of dealing with it? Because a lot of people said you need to counter it with education. People say you need to raise awareness. But what can the user really do? The gentleman with the yellow tie basically took the ISPs out of the discussion of moderating that content, but what can the user really do in these situations, especially with the contents that were raised here that are not necessarily identifiable as hate speech?

>> So maybe one more question?

>> We have one question from the remote participants, maybe.

>> My name is Alene, and I am working for the Council of Europe on this campaign, and I am also an online activist for this campaign, and that’s where I started out.

I just want to make a comment that this movement is more about raising awareness of hate speech. It’s not about trying to block or stop it. It’s raising awareness so young people who flippantly use hate speech can learn about why not to use it.

As well, it isn’t just about inciting violence; it’s promoting hatred and intolerance against a specific group of people, and I think this is completely understated today.

>> Yeah, I'd just like to pick up something on the Twitter feed that is related to very specific local languages. And one thing that I want to insert into the debate is: how can hosting platforms, for instance, manage expressions that are in local languages with very specific sensitivities? Because if it's in English, in French, in major languages, it's okay. But how do you deal with the challenge, potentially, in languages that are very little used, for instance?

>> Hello. I have a question. What happens when hate speech online moves offline and takes hold of a whole government, like in Greece, where a political party has now brought hate speech inside the national parliament? Thank you.

>> Maybe a last question from the remote?

>> Sorry. Just one addition to what was said there. Not just in terms of local languages, but expressions in different languages?

>> Can we have the microphone here for the remote? Is there another one?

>> Thank you, Viktor. I try to keep in contact with those not here, and we don’t have video stream, so I am really sorry for that.

The question relates to local languages – addressing you, Marco – like what Google can do or is trying to do if there is hate speech in a very local language, and how you can proceed with this. Thanks.

>> So we may start. I will reply to what was said regarding the impact of hate speech on diplomatic relations. In fact, I must say that it was not a great issue for a long time. Obviously, it becomes more important now, when we have different countries reacting against one another, because some positions are taken in a certain way, and in a certain language, that can be offensive. That can have some diplomatic impact.

But I don't see, for the moment, a major impact of hate speech on the global diplomatic world.

>> Yes, go ahead, please.

>> I think most of the questions here illustrate the problems with actually defining hate speech and fighting hate speech on the net top-down. You need to do it bottom-up. That's the only viable solution at hand.

And it's a relevant question, what can the end user do? I think it's a very relevant question.

I was born in 1951, so I was brought up intellectually in a period where we in Europe had to very seriously discuss the modes of discussion. How do you deal with fundamental issues like the individual's relation to the state? How do we deal with not only hate speech, but actually the extension of racism?

This was a catharsis, I would say, in Europe – in academia and research, in the media, in schools, in education. And we really had to discuss this based on the basic principles. And one result was the founding of the Council of Europe, which was given as its primary focus to defend Human Rights in Europe and to get the nations of Europe to understand the necessity of having protections for Human Rights.

And I think that we need to have similar discussions with regard to online activities, based on our individual views on how we debate, how we talk to each other, how we relate to each other. And I think the Google slogan "do no harm" is quite a good slogan for how we should actually behave towards each other.

And I think, actually, that most people in the global village believe in the principle that they have very much freedom, but there is a border to that freedom, and that border is where I start to infringe other people's Human Rights, other people's freedom. I think that is a reasonable way of looking at it, and something that we, as end users, should always think of. We, as parents and teachers and professors and NGOs, et cetera, should try to foster each other in this tradition. It's a good tradition. It's workable. And it's extremely well adapted to this network society.

>> Thank you very much. There is a question, Marco, to you concerning the languages.

>> MARCO PANCINI: Yes, so every platform is localized in the local language, so every piece of content on the platform can be reported. So there is no limitation in terms of the languages supported.

At the same time, I would really like to pick up the comment on "do not harm," which actually comes from Vint Cerf. When we discussed several times with Vint the challenges of hate speech and the possibility of harmful content online, he said that when they designed the Internet, they were thinking about these kinds of challenges, and the concept of "do not harm" was very strong in the way they developed the network. This is why notice and takedown systems are exactly the kind of approach that online service providers needed to provide in order to answer these needs of interaction between the content and the user.

At the same time, I want to stress that this affects just the visibility of harmful content online but does not solve the societal problem, does not solve the criminal issues behind illegal activities online. So it's very important that we go one level deeper and try to address the societal challenges and, where applicable, the criminal issues.

>> Coming back to the language issue: do you have a different approach if you have, for instance, something that is hate speech that triggers violence on the streets, if it's viral? And if it's in English, do you deal with it in a different way than if it were in, I don't know, Taiwanese, for instance?

>> MARCO PANCINI: As everyone knows from the case of the Innocence of Muslims, these issues are not easy to solve. All of this is taken into consideration when we look at content online. This is why there are different degrees of addressing these issues. For example, some content, instead of being taken down, can be only age restricted – if, for example, it's not illegal but could be harmful for a certain audience.

So again, I would say case by case we need to look into these things.

On top of that, we need to – again, even if we need to look at these cases on a wider basis, it’s important to remember we don’t want to be the gatekeeper of content. It’s very important to have involved in this debate also the other stakeholders.

>> Well, concerning the remark that you made on the definition of hate speech, I have to say that, first of all, the definition is not a closed one, so it could be one that has to do with inciting violence. But as Alene pointed out, you can have speech that raises intolerance and naturalizes it. And when it's very frequent, it naturalizes hate speech, and that has an indirect consequence. People do not necessarily have to be saying to beat homosexuals in the street, but it will naturalize that in the environment.

Hate speech, you have to imagine – sometimes I think we focus a lot on the perpetrator, the one saying the hate speech, but let’s look at it from the point of view of the victim.

It’s one thing to be criticized from time to time. We are not saying that would be hate speech. But actually when you have to face continuous aggression in multiple spaces, not just in real life, but also online, that becomes violence because words can be aggressive too.

And when you asked who decides what is hate speech or not – well, of course, this is extremely complicated. It's not a precisely defined thing where you can say, in one word, "this is hate speech." Unless you take it to some sort of legal authority, who will decide it? But not in a way that is absolutely right – maybe if another person were deciding it, they would have a different opinion. These are very complicated, very sensitive issues.

And speaking of this legal authority, I just want to finish by saying that many of the views here spoke of the importance that private companies have on the Internet. I think this is something that actually should be talked about. If we discuss transparency on the Internet, then, well, public opinion, the opinion of the citizens, usually lies in the public sphere. We vote, we elect, and then they represent us in the law. But what happens here is that these companies are private. I did not vote for you. Sorry. (Laughter). But I did not vote for those who made Facebook policies. So maybe something needs to change in the way these private companies' policies are made, because they are, really, in the end – it's to them that we have to answer.

If you report something on Facebook, in the end, it is Facebook that decides whether to take down the content or not, even if there’s thousands of us reporting it. So just wanted to raise that question.

>> I would like to react to my fellow Greek about the issue of, you know, hate speech moving between the online and offline worlds. I am also going to connect this with Patrick's comments.

It is becoming increasingly apparent, even to non-techies, that filtering and blocking and things like that are not working. You are blocking the content, but the content will get out there one way or the other.

Just like you have online communities being formed for any other issue, you have online communities being formed for hate speech by use of the Internet.

So there is an increased level of responsibility right now, and apparently the state cannot deal with it alone, so the responsibility comes from all of us, from all actors on the Internet – the online platforms, and users in particular. And going back to what Adriana was saying: yes, we need to start holding the platforms and the various actors on the Internet more accountable.

I don't have an answer for how to do that, and I'm not sure anybody has, but international cooperation is needed, and it is needed more than ever before, particularly on issues like hate speech. I mean, we can no longer afford the flexibility of sitting back and continuing to deliberate on how we are going to deal with it. We need to step up, and we need to start cooperating, because it's only through this cooperation, the multiple voices of cooperation, that we will be able to at least start addressing or start finding solutions to this issue.

>> Yes.

>> Just two short remarks. There are definitions of hate speech which are fairly widely accepted. Incitement to violence can be part of hate speech, but incitement to violence as such is already criminalised, I believe, in most countries. So I don't believe it's so much of an issue – I mean, it can be an issue of how to prevent it, but I think the way it is legally addressed is different.

Hate speech is different, to a large extent, because of the question of who is in charge and whether it is understood legally in the same way across borders. And clearly it is not – it is dealt with differently by different countries. That does not prevent us from having a common understanding that hate speech as such, as it is defined – which talks about spreading, inciting, promoting, or justifying racial hatred, xenophobia, anti-Semitism, et cetera – is not accepted.

Now, how to deal with it – I think that's the question that brings us here. I would very much like to address the point posed by previous speakers in terms of also building a global conscience of the problem. We will not manage to address the question of racist parties in parliament if we do not also tackle the questions of citizenship and citizenship education.

And one of the points of this campaign is not to ask for anything. It's to say that if young people receive civic education about their constitutions and about how society is ruled, they must also be part of the multistakeholder discussion of how the Internet is ruled. And maybe we will say, well, many things that I don't like are allowed. That's fine. That's part of society. But young people – and children included – must also be part of this movement. And we have to involve them. They also have an opinion, and they should have an opinion.

>> Just two points. It was mentioned that we may be much more interested in the question of the listener than of those who speak, and in the Council of Europe, and in particular the North-South Centre, we used to say – and I repeat what I said – that the best way is to try to shift the attention from the speaker to the listener, to focus on the listener's perception of the other's speech rather than on the freedom of expression of one single speaker.

This is the key question, and this is where education enters. I think global education is precisely based on that: trying to motivate people to react and to create a sort of conscience about it.

On the definition of hate speech, I was recalling the huge debate that still takes place in the United Nations regarding terrorism. It's impossible in the United Nations nowadays to define terrorism because there are different perceptions; the way some countries look at terrorism is an attempt not to give some people the capacity to assert their rights. When bombs were put in trains during the Second World War by the resistance in France, well, technically, it was an act of terrorism, but in reality, it was an act to protect freedom long-term. So this divide is still very, very strong in the United Nations.

And hate speech, I suppose, suffers a little bit from the fact that we have different perceptions: what is considered in some parts of the Muslim world to be hate speech, or even some kind of defamation – as we saw with the cartoons, things like that – is seen differently on our side, the Western side.

So I think this debate is still going on, and we need to reach as consensual an approach as possible in the European or Western area, but we should never forget to maintain an open dialogue with those who are different and who look at things, even through religious beliefs, in another way. Bridging this divide is what we try to do every day in the North-South Centre.

>> That's the perfect introduction to the part that comes now, because we want to have a look at the current frameworks that exist to tackle cross-border hate speech and defamation. What do you do if you see something on Facebook, Twitter, YouTube, all the different platforms that exist? You are online, and you see something that either you consider hate speech or you consider defamatory against you – what do you do in this situation? What should be the right frameworks? What frameworks can allow interoperability, the coexistence of all the different people from different backgrounds in those online spaces?

And I would not like this to be discussed on the panel; I would like this to be a discussion among all of you. So what I suggest is that we now spend ten minutes and you discuss at your table – because we're seated at those nice tables, you're already in a perfect discussion round. And I want you to discuss for ten minutes how you think interoperability can be guaranteed. What should be the right frameworks? How should this work when you see something on the Internet that you consider to be hate speech or defamation?

And after ten minutes – we’re going to set the watch. After ten minutes, we are going to pick some tables, and I would like you to report on the discussion. The panelists will share and sit down with you so we can have a discussion.

For the remote participants, I think it would be great if they can also have a discussion in the chat room for ten minutes. So let's have a look at the time. Now it's 12:53, so at 1:03 we come together again and will hear the outcomes of your discussion. So, panelists, please grab a chair and sit down with the people, and let's have a good discussion.

(Table discussions)

>> Please ask the panelists to come back to the front.

Excuse me, the time is up. Could I ask the panelists to please come back. I know you are having very interesting discussions, and I am very much looking forward to hearing the results of the table discussions. Do I have all my panel? No, there are still some panelists missing.

Adriana, could we have you? And we are also waiting for Marco and Olof.

Okay, Olof already left. Okay.

So can I have your attention, please. We have 15 minutes left, and it's not possible to hear all of you, so let's just go first come, first served. Who wants to start? Can you please report on what you discussed? Who is first? Yes, go ahead. Do you have a microphone?

>> So basically, we discussed two things, and I just want to highlight one of them: we came to the point that we really believe in peer-to-peer pressure on what can be said, and that governments as well as companies should really motivate and stimulate peer-to-peer commenting and reevaluating of the things people say.

Facebook actually does this, for example. When you try to report something, it actually also gives you the option to send a message to the author directly instead of reporting to Facebook. So you can confront him with what he is saying, whether it’s hate speech or not or whether it’s offensive or not. So you can reevaluate what he is posting.

We really believe in stimulating peer to peer, because then it is not (Inaudible) government or international law, but the community and peers that the content is being posted to.

>> Okay. The next table, please.

>> In the discussion we had here, we agreed on two points, I think, and one pressing question that I think is very interesting: the line between public space and private space online. Do we consider the Internet public space or private space? Because if it is public space, we do have the right to offend as well. That's one of the pressing concerns.

>> Hello again. We spoke about freedom of speech versus religious matters – for example, the Denmark example with the cartoonist a couple of years ago. There were a lot of protests. So the question was: it's freedom of speech in one country, and basically hate speech in another part of the world. So who will decide what it is? Thank you.

>> Thank you very much. Who is next? You?

>> In our group, I don’t think we came to a decision, but we were talking about –

>> It would be very interesting to hear that.

>> We didn't come to one. We were talking about whether we should remove hate speech from the Internet. We weren't talking about how, but about whether it was morally right to leave it there, where young people and children who are getting bullied can constantly view it if it was directed at them.

We didn’t agree – well, I didn’t agree. So yeah.

>> Okay. Thank you very much. Who is next in line? Can you just pass the microphone to the table next to you, please?

>> So here we were discussing what we would do if we came across such content online. And as some of us are part of a Safer Internet Centre, what we would do, of course, would be to report it through the hotline.

At the same time, we know that through this we have protocols with the right institutions, with law enforcement and victim protection associations, and so we would take care of it through this channel. But at the same time, we are also aware that we should maintain proof of what we've seen, so that if it were necessary to proceed to some sort of prosecution, we would have proof. So this is what we would do. And maybe it would be great if people in other countries were also aware of the hotlines of the centres, because they are connected throughout the world. And it's not only for child pornography; it's also for hate speech. So that is what we had to say. Thank you.

>> Thank you very much.

>> We talked about the definition again, because we saw the problem that, at the moment, a court decides whether something is hate speech or not, whereas now the users should decide. And this will create a problem in defining where the border lies between offensive speech and hate speech.

The next point was that maybe the victim itself also has an interest in knowing whether there is hate against it. So if you just delete all the comments against it, maybe even the victim itself doesn't have an interest in that.

Then there was the problem that if we want to appeal to courts to solve this problem, we don't know where these courts actually are – the problem of jurisdiction, and the national-versus-global question we had already.

Then there is the problem that, in the online world, content remains available, since at the moment there is no mechanism that ensures it is deleted after some time, in contrast to the offline world, where you just say something and that’s it. And we talked in general about whether deleting these comments is a kind of censorship or not.

>> Thank you very much. Which tables are missing? The one over there.

>> We also spoke about what can be done to remove hate speech, or at least report it. At first no one at our table could name an institution or anything we could contact, but we already got an answer to that from the other table.

One thing I would like to add concerns the platforms themselves, in particular social platforms: by design, unless you are part of the platform, you are not informed of what is happening on it. So hate speech could be happening, you could be the target, and you wouldn’t know.

And in addition to that, even if you did know, you would have to join the platform in order to report the content. At least that’s how platforms currently work.

In addition, for a website that is not run by a company – a personal website, for instance – how exactly can we report it, when only one person is responsible for putting up the content, apart perhaps from the company that hosts the information?

>> Thank you very much. Our last table.

>> Okay. We talked a lot about the freedom of expression side of the discussion, and in particular about whether the right to freedom of expression can actually exist when you are using a private platform like Facebook. I think we agreed there may be a role for government or regulation in cases of extreme hate speech, but that this implies having to go to the private sector and talk to them about what they can do, such as taking something down. That’s it.

>> Great. Thank you very much. As time is running out and lunch is approaching, I would like to ask our panelists for short takeaways – one sentence each: what are you taking away from the discussion today? Would you like to start? Could you speak into a microphone?

(Laughter)

>> Just very briefly, two things. The first, coming back to the question of contextualisation: it is important to use the mechanisms that already exist, whether international or national. We want the national campaigns, for example the No Hate Speech Movement, to report and to use whatever measures exist. They may not be the same everywhere, but they exist and we ought to use them. That is the first thing.

The second thing, because someone on Twitter asked about the Budapest Convention protocol: yes, we will speak about it. We hope the protocol will be adopted by more member states. One action people can take afterwards is to ask their member states to sign and ratify the protocol. That’s very clear. So that has been answered, for the gentleman up there.

Also, for Twitter, I think it is, indeed, important to stay calm but take action. Thank you.

>> ADRIANA DELGADO: Okay. So it seems that the discussion, as I had expected, turned out to be largely about freedom of speech versus hate speech. I would say this: yes, freedom of speech is a fundamental right, but it is not an absolute right. My right to punch the air ends where Paul’s nose begins. I can say what I want, but I cannot punch you.

So I would leave you with that. We have been looking at things a lot from the perspective of someone who is creating content, but try to think as a victim would. To discuss these things, we should have the victims’ point of view. Maybe they would not be so open to everyone being able to say everything all the time.

>> Yes, I actually agree. The two rights – freedom of expression and the right not to be harmed by content online – have to live together. As responsible citizens, we need to address these concerns by providing platforms and a transparent approach. Our role is not to become the gatekeepers of content on the Web.

On top of that – and thanks for the feedback we received today – we need to keep this kind of debate on societal challenges alive by creating strong continuity in fighting hate speech online and offline.

>> Very briefly: responsibility. I think we need to start taking responsibility, and we need to encourage the formation of communities that are willing to tackle issues of free speech. That is what came out of the discussions I had. And there is an indication that the people sitting over there really want to do this on a community basis. So we have a responsibility to provide them with the tools and to encourage them, so that when they take action they know they will have the support of all of us.

>> Okay. If I may, these are not conclusions, but some points that I think came out of this debate. A great deal can be done through voluntary means, but legal action obviously cannot always be avoided, particularly where individual rights collide. We should try as much as possible to work on education and cultural dialogue in order to promote better understanding between different conceptions.

And this also leads to the question of a better definition of what hate speech is. We know there is a grey zone – an important grey zone – that will probably remain forever, because it will be impossible to draw a clean line between the different concepts.

The cultural dimension is, I think, the most important thing, and I stress what I said regarding the work with listeners: to better inform, to report, and to motivate people to be alert. This is important too. It is not a negative element of denouncing all the time; it is about having people alert to identifying what hate speech is and stating that clearly. This is key to a solution.

>> PAUL FEHLINGER: Thank you very much. On this note, thank you very much for participating in this workshop. Have a good lunch, and continue the discussion.

(Applause)