Transatlantic rift on Freedom of Expression – MT 04 2025
13 May 2025 | 16:00 - 17:30 CEST | Hemicycle
Consolidated programme 2025
Proposals: (#11), #15, #26, #64, #66, #75, #77
Get involved!
You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published at the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.
Kindly note that it may take a while until the Org Team is formed and starts working.
To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.
The recent tensions between tech giants from across the Atlantic and European regulators center on fundamental disagreements over the scope of freedom of expression. Politicized charges of censorship are leveled against attempts to rein in disinformation and interference in elections. The session will discuss appropriate European responses to these developments and look for ways to reconcile the diverging approaches.
Session description
A growing transatlantic divide over freedom of expression calls for an exploration of the baseline of shared principles and the specific areas of division between the US and European regulators. This session will first establish this common ground before detailing the contrasting aspects, followed by an analysis of the evolving political landscape that has contributed to the current divide. Subsequently, the concept of intermediary liability will be examined alongside the shifts in content moderation practices, considering the benefits and drawbacks of these new approaches and their impact on users’ online experience. The session will also explore how civil society and multi-stakeholder collaboration can serve as part of the solution and support more effective oversight of platforms, ensuring accountability and transparency. Finally, the session will outline the current and foreseen European responses to this evolving paradigm, explaining how European legislation will be employed and refined to address these challenges.
- Guiding questions:
- What is the common understanding of freedom of speech between the US and Europe? Where are the limitations/boundaries?
- Does the understanding of what platform liability entails affect the diverging US and European definitions of freedom of expression?
- How does the shift in content moderation affect the safety of users online, especially vulnerable users?
- What legal and rhetorical tools is the Trump administration wielding in its attacks on European “censorship”? How should European policymakers respond?
Format
We introduce a new format for all Main Sessions. They are NOT panel discussions and are conducted as follows:
- 30 min input (2 x 15' or 3 x 10' VIP / expert presentation)
- 45 min moderated discussion with the entire audience along a set of guiding questions
- 15 min agreeing on the messages
Interpretation in English and French.
Further reading
People
Please provide name and institution for all people you list here.
Programme Committee member(s)
- Desara Dushi, Vrije Universiteit Brussel (VUB)
- Meri Baghdasaryan, Oversight Board
- Yrjö Länsipuro, Internet Society Finland Chapter
The Programme Committee supports the programme planning process and works closely with the Secretariat. Members of the committee give advice on the topics, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and oversee the complete programme to avoid repetition among sessions.
Focal Point
- Cristina Herrera, Adapt
Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective member of the Programme Committee and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.
Organising Team (Org Team)
List Org Team members here as they sign up.
- Vittorio Bertola, Open Xchange
- Nitsan Yasur, Internet Society
- David Frautschy, Internet Society
- Alena Muravska, RIPE NCC
- Torsten Krause, Stiftung Digitale Chancen
- Berin Szóka, TechFreedom
- Rui Esteves
- Roberto Gaetano
- Davit Alaverdyan
The Org Team is shaping the session. Org Teams are open, and every interested individual can become a member by subscribing to the mailing list.
Key Participants
- Berin Szóka - TechFreedom (in person)
- Nitsan Yasur - ISOC (in person)
- Judit Bayer - University of Münster (remote)
Key Participants (also speakers) are experts willing to provide their knowledge during a session. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder-balanced dialogue, also considering gender and geographical balance. Please provide short CVs of the Key Participants at the Wiki or link to another source.
Moderator
- Cristina Herrera - Adapt (in person)
The moderator is the facilitator of the session at the event and must attend on-site. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide a short CV of the moderator of your session at the Wiki or link to another source.
Remote Moderator
Trained remote moderators will be assigned by the EuroDIG secretariat to each session.
Reporter
The members of the Programme Committee report on the session and formulate messages that are agreed with the audience by consensus.
Through a cooperation with the Geneva Internet Platform, AI-generated session reports and statistics will be available after EuroDIG.
Current discussion, conference calls, schedules and minutes
See the discussion tab on the upper left side of this page. Please use this page to publish:
- dates for virtual meetings or coordination calls
- short summary of calls or email exchange
Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.
Messages
- are summarised on a slide and presented to the audience at the end of each session
- relate to the session and to European Internet governance policy
- are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
- are in (rough) consensus with the audience
Video record
Will be provided here after the event.
Transcript
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Speaker: But before we start the next session, I would like to invite you to join us this evening for our social evening. We’re going this evening to Le Tigre. We’re meeting at 6.30 there and the first drink is on us. So I hope that you come and join us. You can pick up your voucher at the door. But before that, we are going to head into main session four, the transatlantic rift on freedom of expression. And I would like to invite the remote moderator, João Pedro Martins, to explain the session rules. Thank you very much.
João Pedro Martins: Hi, everyone. By now, you should be tired of hearing me with the Zoom session rules, but I’ll go once again. For those joining this session through Zoom and online, please raise your hand when you want to take the floor. And as soon as the moderator opens for interventions, I will flag your presence and enable your microphone. For those who are joining Zoom also from the Hemicycle, please enter the Zoom meeting muted and with your speakers disabled. And now I give the floor to the moderator of the session.
Cristina Herrera: Welcome to the fourth main session of EuroDIG, the transatlantic rift on freedom of expression. As you have probably noticed from previous sessions, this year EuroDIG has changed the format of the sessions to make them more interactive. As such, the session will be divided into three parts: 30 minutes for high-level statements, followed by 45 minutes of statements from people who have pre-registered, at which point we will also allow other interventions and questions, and 10 to 15 minutes at the end to agree on quick messages. The purpose of this format is to encourage audience participation and to enrich the conversation with inputs from diverse stakeholders. This is your session, and we encourage you to think of questions and remarks you would like to make. I am joined today by three multidisciplinary experts in European law, American law, content moderation, and Internet governance: Berin, Nitsan, and Judit, who is joining us remotely. I will let them introduce themselves when they make their statements. First, to set the scene. For a long time, Europe and the United States have had different approaches to how they interpret freedom of expression and its limitations, as does the rest of the world. However, in the past few years and even months, we have seen these tensions intensify, especially in online environments. Next slide, please. The U.S. President signed a memorandum in February where he promised to defend American companies from what he perceives as overseas extortion. This includes considering tariffs to respond to fines and digital service taxes. Trump also ordered agencies to cease any contracts with companies that facilitate censorship. On this side of the Atlantic, the EU has started enforcing the Digital Services Act, or DSA, requiring companies to have safeguards in place to remove content that is illegal under national and international law. We have started to see examples of investigations against American companies, including the release of preliminary findings against X for breaching the DSA. With geopolitical tensions intensifying, questions arise regarding how the U.S. might pressure the EU as far as digital policies are concerned, as well as how the EU in general, and the European Commission in particular, will respond. In this session, we will delve more deeply into the roots of the different approaches, and we will try to find a way forward. Now we’re going to start the 30 minutes of high-level statements. Berin, I will start with you. Can you tell us more about how the Trump administration is influencing the understanding of platform liability and freedom of expression in the United States, and what legal and rhetorical tools the U.S. administration is using to address what it perceives as European censorship?
Berin Szóka: Thank you. I’m Berin Szóka. I run a think tank called TechFreedom. I have been based in the U.S., but now live in Europe. So in February, U.S. Vice President J.D. Vance accused Europe of retreating from some of its most fundamental values. Last year, he suggested that America’s commitment to defend its NATO allies would depend on whether they, quote, share American values, especially about some very basic things like free speech. But it is Trump and his administration who have betrayed American values. The First Amendment to the U.S. Constitution says Congress shall make no law abridging the freedom of speech or of the press. Yet the Trump administration is now trying to shut down broadcasters and suing newspapers and pollsters. Viktor Orbán must be very proud. Trump claims to be protecting free speech. But what his administration really means is that private media must carry lies about who won the 2020 presidential election, conspiracy theories about vaccines and the most hateful, toxic speech imaginable. Yes, the First Amendment means freedom for the thought we hate. In America, neo-Nazis do have a constitutional right to march in public, but they’ve never had the right to force private media to carry their venom. President Ronald Reagan once summarized what Republicans used to think. The framers of our First Amendment aimed, he said, to promote vigorous public debate and a diversity of viewpoints in the public forum as a whole, but not in any particular medium, let alone in any particular journalistic outlet. In other words, the First Amendment protects the marketplace of ideas against manipulation by the state. But it doesn’t require that marketplace to be a Hobbesian war of all against all. The Constitution is not, as Justice Robert Jackson said in 1949, a suicide pact. There have always been gatekeepers making editorial judgments about truth and decency. These judgments, too, are a vital form of free speech, perhaps the one most under attack by Trump. Nearly 30 years ago, the Supreme Court said the First Amendment fully protects the Internet. Last year, it reiterated that website operators have the same constitutional right to make editorial judgments as newspaper publishers. But in 2021, when tech companies exercised that right and banned Trump for inciting the January 6th insurrection, he started to make stopping big tech censorship central to MAGA politics. And this is now the top priority of Trump’s tech regulators. There’s a word for what the Trump administration is doing: jawboning. Jawboning means using pressure, browbeating, and regulatory extortion to achieve results that regulators don’t have the legal authority to require directly. And it’s working. To appease Trump’s rage, major tech companies have abandoned fact-checking. Meta now allows denigration of immigrants, women, and sexual minorities, for example, the kind of absurd claims that Trump and Vance made last year about Haitian immigrants supposedly eating dogs and cats. Such claims resulted in bomb threats. This is exactly the kind of violence that could explode in the U.S. at any time. But with tech companies retreating on content moderation, MAGA needs a new villain, so it’s fixated on Europe and on the United Kingdom. J.D. Vance, in his speech last year, offered a litany of examples of restrictions on speech. Many of these, if not all of them, actually probably violate Article 10 of the European Convention on Human Rights. However legitimate their aim, they are hard to justify as proportionate.
Should it really be a crime to pray silently within 50 meters of an abortion clinic? Well, it is in Bournemouth, England. Vance could have argued that Europe hasn’t lived up to its own values, that the European Court of Human Rights here in Strasbourg and the European Court of Justice in Luxembourg should do more to protect Europeans’ fundamental rights. The Strasbourg Court, in particular, must decide cases much faster. Both courts should give less deference on speech restrictions and apply more skepticism to laws that are not content-neutral and viewpoint-neutral. The US government, if it were serious about free speech, could file briefs here with the Strasbourg Court to defend free speech. But of course, that isn’t really the point. This isn’t really about legal doctrine. Trump and Vance are just using the term free speech as a rhetorical weapon. Vance accused Europe’s so-called old entrenched interests of, he said, hiding behind ugly Soviet-era words like misinformation and disinformation to censor those with an alternative viewpoint. He made very clear who he was talking about, the kinds of voices that were excluded from the Munich Security Forum for parroting Kremlin propaganda. So how should Europe respond to these threats? Well, consider Romania. Its constitutional court may have been right to annul last year’s elections. Campaign laws should be enforced. But look what happened. The far right nearly doubled its share of the vote in the election redo. Regulating speech or its impacts on elections may actually fuel populist rage. J.D. Vance could have invoked many such examples, but he picked one. He picked Thierry Breton, who was commissioner responsible for the Digital Services Act. And in 2023, Breton threatened to shut down social media during unrest for failing to remove hateful content. Vance didn’t mention the 67 civil society groups, nearly all European, who condemned Breton’s comments, warning that they could, quote, reinforce the weaponization of internet shutdowns and legitimize arbitrary blocking of online platforms by governments around the world. Such principled defense of free speech is what Europe needs more of, its European values at their best. Last year, Breton threatened Elon Musk with action under the Digital Services Act merely for hosting a conversation with candidate Trump, because Trump might incite violence, hate, or racism. But this time, only a handful of civil society groups spoke out, including my own. Failing to defend freedom of speech, even when it’s Donald Trump and Elon Musk speaking, isn’t just hypocritical. It proves J.D. Vance right, and it costs Europe our most precious asset, our moral authority. There’s a legal problem here as well. The Digital Services Act is ambiguous enough that Breton thought he could wield the law against content he didn’t like. Article 35 requires the largest platforms to mitigate systemic risks that are only loosely defined, risks like civic discourse and electoral processes. These are essentially the same concerns that Trump himself has invoked in trying to force social media to carry his lies about election fraud. Professor Martin Husovich argues that Article 35 doesn’t give regulators the power to dictate content-specific rules because the DSA doesn’t say so expressly, and Article 52 of the European Charter of Fundamental Rights requires that limitations on rights must be provided for by law. 
He’s probably right that the European Court of Justice would say so, eventually, but even he concedes that the answer is, quote, far from clear. In the new global culture war, this isn’t good enough. As President Reagan said, if you’re explaining, you’re losing. The DSA, the AI Act, and Europe’s other platform laws may be new, but they are based on assumptions of a slower, better era, when it was good enough for the courts to work out such questions eventually. But in an era when policy is made as much by tweet as by legislation, what matters is threats and political pressure, jawboning. Internet platforms, writes Professor Derek Bambauer, face structural incentives to knuckle under government jawboning over content, which makes them unusually vulnerable to government pressures, both formal and informal. So increasingly, when it comes to online content, to paraphrase what Andy Warhol once said about art, law is what you can get away with. Thierry Breton may not have gotten away with very much. He soon quit in a huff before he could be fired, but he gave MAGA all the ammunition that it needed to characterize the DSA, however unfairly, as a censorship regime and the Commission as the new ministry of truth, and he may have set a dangerous precedent. So Europe should rethink its tech regulations by asking two questions. First, how can we guard against the law being mischaracterized? And second, how can we avoid it being abused? Breton might have proved more fool than villain, but consider how the DSA might be weaponized by a future commissioner sympathetic to Elon Musk. So I’ve been speaking to you as an American lawyer only recently cross-trained in European law, but I’m also a German citizen, and I see my own future here in Europe, and the future of freedom in general, depending on the European Union. I can’t say that I’m optimistic. Europe lacks three things. The first is realism. The United States isn’t just an unreliable ally on tech and many other issues. The United States is increasingly an adversary to Europe and to liberal democracies around the world. Last week, J.D. Vance struck a more conciliatory note when he told the Munich Leaders Forum in Washington that the U.S. and Europe were on the same civilizational team, as he put it, but he also conspicuously avoided talking about speech. Maya Angelou wouldn’t be fooled. The poet said, when someone shows you who they are, believe them the first time.
Berin Szóka: Europeans aren’t cynical enough when it comes to Trump. When you read that his administration is talking about breaking up big tech, don’t kid yourselves. This is just more jawboning, another way of asserting control. When you read that bipartisan legislation would protect kids online, don’t assume that Democrats have done enough to include safeguards against abuse by Trump. Trust me, our Congress is much too broken for such competent drafting. And if you see concepts popping up in U.S. law that resemble the DSA, like requiring non-arbitrary or non-discriminatory content moderation, understand that those concepts are being weaponized by MAGA to break content moderation. Moreover, don’t assume that European regulators will resist American jawboning if American support for NATO and Ukraine is at stake. And it is. Or if tariffs are at stake. And they are. Europe may have fine principles, but it lacks strategic autonomy. As Stalin supposedly quipped, how many divisions does the Pope have? Until Europe can stand up for itself militarily, the Trump administration may effectively have veto power over the enforcement of the DSA, of other European tech laws, and even of member state laws regarding free speech, and it will use that power to protect Elon Musk and his allies. So finally, until Europe can produce tech services that Europeans want to use, the Commission will always play a weak hand. It will have to try to graft European values onto American creations. We cannot simply regulate our way out of Europe’s failure to innovate. Europe, in short, has much to change. Unless it does, it may find, after a decade of the Brussels effect, that the next decade will be that of the Trump effect, and Trump will reshape the internet into a far darker place than even today’s deepest pessimists fear. Thank you.
Cristina Herrera: Thank you, Berin. Over to you, Judit, online. How does the European approach differ from the American, and does the role of the European Commission as the enforcement body of the DSA for very large online platforms increase the risk of American influence? And how should Europe respond?
Judit Bayer: Thank you very much. I’m very much honored to be here. I’m a co-author of Freedom of Expression and Media Law, and have expertise in European digital regulation. I’m an associate professor at the University of Budapest for Economy, but I’ve spent the last five years in Germany researching this field, affiliated with the University of Münster, the Institute for Information, Telecommunication, and Media, and the Center for Advanced Internet Studies. May I ask for a bit of feedback on whether I am audible? Yes, you are. You can continue. Thank you. So, there is a European ideal of governance which is rooted in the principle of people’s sovereignty, where the state receives its mandate from the people, and it is the state’s duty to stand up for the interests of the people against other private representatives of power. The state’s obligation to protect human rights extends to protection from fellow human beings or from companies. Perhaps this is rooted in feudalism, which didn’t exist in the U.S. The time when the king provided freedom to the cities, freedom from the landlords, was the cradle of urban liberalism and of citizenship rights. This historical perspective might explain why the state is obliged not only to refrain from interfering with citizens’ rights but also to protect citizens’ rights against another private actor, especially against another great power. That’s why Europe has stronger protection of labourers’ rights and of working mothers, better health care regulation, food safety and so forth. And this pattern is reflected in the EU regulation of big tech. For the same reason, the European interpretation of freedom of expression takes a more systemic view than the American one. Freedom of expression is seen more as a political right which enables people to participate in the democratic decision-making process. When this goal is threatened by an actor, through speech or suppression of speech, the state might intervene in extreme cases to protect the systemic function of freedom of expression. In legal language, this means that most EU member states and the EU itself have a positive obligation to secure the framework conditions for a plural and free information system. And technically, platforms are not speakers themselves. They aggregate and reorganize information and disinformation. They promote some speech and suppress other speech without explanation. They exploit vulnerabilities through behavioral targeting and they contribute to polarization with their opaque algorithms. Contrary to how they try to position themselves, they are not speakers, and neither are they neutral mediators of speech: they govern who will access what kind of speech. And this opinion power cannot remain unregulated in the European logic. But still, the DSA doesn’t order platforms to remove certain content. Not more than the DMCA, the Digital Millennium Copyright Act of the United States, does, or not more than the E-Commerce Directive has been doing since 2000. What’s more, it protects users’ rights against platform privileges by introducing procedural guarantees that platforms must provide, such as giving an explanation when they remove certain content, being transparent with their terms of service, keeping to their terms of service, making a dispute resolution mechanism available, and so forth. So, in fact, Facebook is often more restrictive in removing content than required by EU law in most cases of hate speech removal.
For example, according to the record, they relied on their own terms of service and not on any law. Their speech standards are significantly stricter than European standards. What they object to, in reality, is that they should provide information to those speakers whose content they removed, respond to counter-notices, and provide information about their removal practices in the monitoring procedure. Additionally, several European nations have passed laws, often criminal ones, to prohibit disinformation, at least in circumstances such as elections, and the DSA also aimed to reduce such fragmentation in the European market. The obligation to reduce systemic risk leaves a huge range of discretion for platform companies to provide a fair and safe environment. It demands that they prevent their platform from being misused and manipulated. Why do I think that the DSA is so important? The DSA is not a tool for political censorship. The DSA and the related laws regulating the digital communication environment use a technique which is similar to risk hedging. It is a co-regulation, largely built on the cooperation of the tech companies, but also of auditors, civil society actors who can act as trusted flaggers, national regulatory authorities, researchers, fact-checkers; I probably couldn’t even give an exhaustive list. This multi-actor participation creates a complex web of collaboration. Yes, each of them is a potential point of failure, but overall, this risk is distributed. The regulation has created a system based on mutual distrust, a certain system of checks and balances. The actors mutually supervise and control each other, and the biggest part of the regulation is just the requirement of transparency. Therefore, I sometimes like to say, ironically, that the DSA is a big research project, because the transparency requirements are so dominant. This huge amount of information is processed by experts, auditors, and monitoring organizations, not by political bodies. In sum, while the sword may be in the hands of the Commission, the Commission is not in a position to take an arbitrary decision, because it is part of a large system. And sanctions can be imposed only for violations of the law, in all cases of risk management obligations, never for individual pieces of speech, nor even for inefficient self-regulation. It is mainly only a reckless disregard of the risk management obligation that would establish the ground for a fine. And does the fact that the DSA is enforced by the European Commission for very large online platforms and search engines increase the risk of American influence? I think quite the contrary. First of all, the EU currently doesn’t have a central independent regulatory authority with an executive power that would be comparable to that of the Commission, which is the executive body of the EU. The national regulatory authorities are unlikely to exert sufficient pressure to enforce the law individually, as shown by the example of the data protection authorities. And there is a Digital Services Board, which combines all national regulatory authorities in the digital sphere and which is an advisory body to the Commission. There is also an AI Board, similarly providing a representation of all member states.
So if you are raising the issue that the EU needs a central, supranational, EU-level independent supervisory body for platform and AI regulation, that sounds to me like a great idea, but the EU currently has a more centralized structure, and that also has its advantages. Meanwhile, the big tech companies are becoming political by allying with the US president and leveraging his political power to resist European regulation. It is a fact that these companies possess and exercise a certain form of functional sovereignty, similar to lords in the feudal age. This is sometimes called digital feudalism, sometimes political capitalism, but the fact is that state power is de facto shared with these companies. And just as media used to be called the fourth branch of power, the power to form opinions today lies much more with these companies. But this power encompasses even more than that, because they possess enormous dynamic databases and technological power, which can be converted into industrial and military power. And to the further point which was raised here: yes, the EU relied too much on the US in the past decades and didn’t develop strategic autonomy. The wake-up call is rather harsh and requires urgent action. But what does the EU have, what are its assets? It still has prime markets, which are desirable for the big tech companies and also for other companies across the globe, for example Chinese ones. It can make new alliances with the states of the global majority. And it still has regulatory competence and high potential and expertise in regulation, which may still be an export product, with minor amendments perhaps. Change is necessary in several respects within the EU. However, there is no reason for the EU to retreat from its regulatory standards. There is also no reason for Big Tech to panic, however, because the DSA is built on the tools of dialogue, cooperation and transparency. In my view, this effort should be maintained. The power that Big Tech holds, the data, the technology and the opinion power, could be lethal to any society if it is combined with a populistic, extremist, authoritarian or merely reckless political power. Let’s assume for a minute that the U.S. had a president who has no moral considerations in achieving his power ambitions. In cooperation with Facebook and Twitter, which government in the world couldn’t it overthrow? Where couldn’t it incite a coup, a civil war or even a genocide, as happened in Myanmar? And some African countries might line up. So regulation is, in my view, absolutely necessary, and the EU must carry on this project in the interest of the public within the EU and beyond. Thank you.
Cristina Herrera: Thank you, Judit. And now for our last high-level statement. Nitsan, can you tell us, from a civil society perspective, how the EU and the U.S. regulatory models influence the well-being and safety of online users?
Nitsan Yasur: Hello everyone, and thank you for the opportunity to be here and share some of our data and field experience. My name is Nitsan Yasur. I’m the disinformation and digital investigation lead at the Israeli Internet Association, an independent non-profit civil society organization. We operate internet infrastructure and domain registry services and focus on digital safety, fighting disinformation, bridging the digital divide, research, and internet policy. As the other speakers mentioned, we are here today to discuss this tension between the two, let’s say, regulatory worlds, the European one and the American approach. And from our perspective, both approaches fundamentally impact how platforms moderate content daily. I’m speaking from an outside perspective, from a non-EU and non-US-based organization, but one that’s directly affected by both EU and US approaches. And I’m not a lawyer, so I’ll try to answer this question by walking you through our civil society experience and real-world data from the ground, from a more accountability-oriented point of view, showing you what this tension looks like when it’s translated into practice. On the liability aspects, I also want to mention and recommend the ISOC policy framework on internet intermediaries and content, for a liability point of view and examples. At ISOC-IL, we run a safe internet hotline that is recognized as a trusted flagger by all major platforms, not just social networks, but also URL shorteners, hosting providers, dating apps, and even adult content sites. We don’t see ourselves as a help desk for the platforms. We see the trusted flagger role as representing the public interest before the platforms, not the other way around. When we report harmful content, we can deal with multiple layers at once: hosting, link shortening, DNS, communication, pieces of content, and more. Each layer brings its own set of challenges. That’s why we believe it’s more effective to focus on specific intermediary functions than on one broad category. The hotline receives reports from the public, helps them navigate the platforms’ reporting systems, and escalates violations that fall clearly under the platforms’ community standards or terms of service. But more than that, we are often the first and only human point of contact for users who face serious harm online and get responses neither from the platform nor from the state. This gives us a unique point of view. We see where users are hurting, recognize emerging online harm trends, and can see how well platforms actually respond, moderate, and handle harmful content. The war that broke out on October 7th shocked the region to its core, and it was designed to create viral media impact in addition to the real-world harm, broadcast in real time through social media platforms. The result was a massive flood of harmful content online, including graphic violence, terror content, incitement, disinformation, and more. Platforms were quick to issue statements like, we removed hundreds of thousands of posts and we opened war rooms, etc. During that time, usage of social media spiked by 35% and the reports to our hotline more than doubled. Trusted flagger experience from other conflicts, such as Russia-Ukraine, and from crises in other parts of the world, including Africa, has shown consistent patterns. Platforms simply lack the capacity to moderate content at the speed and scale demanded during times of crisis.
Now I would like to share a few examples and some of our findings from the research we conducted on the recent Israel-Gaza war, which I believe can help inform and shape the broader discussion around these challenges. We analyzed how the platforms handled the reports we submitted during the first months of the war, after carefully filtering and categorizing them according to each platform’s relevant policies. And here are the key findings. On average, it took the platforms more than five days to respond to our reports, and we could see differences between the platforms’ results. We also noticed a clear trend: we got no response during weekends; Saturday and Sunday were just silence. Next, we looked at the nature and quality of the responses we received. We could see the platforms were generally quick to remove graphic content, terrorism-related materials, or sexual abuse, harmful content that can be efficiently handled by automated tools. Hate speech, incitement, and disinformation, however, required more skilled human moderation and understanding of language, context, and local culture, and were much less consistently and properly handled. When we looked at disinformation alone, excluding cases that also involved graphic content and incitement, we found that Facebook had the worst performance. They failed to respond to more than 70% of the disinformation reports we submitted. From ISOC-IL’s direct experience, the day-to-day reality of content moderation requires approaches that recognize the complex, multifunctional nature of platforms and harms. Our data shows that platforms cannot be treated as a single, uniform entity. Each one has its own vulnerabilities and its own way of responding to harm. This is important to remember when designing solutions and adapting policies. One size can’t fit all. Disinformation has emerged as a major public concern, something people increasingly recognize as harm and report to our hotline. While it’s likely here to stay, it’s far from the only form of harm. Disinformation is just one among many threats users face. And when we talk about platforms’ accountability for user safety and privacy, the conversation must extend far beyond that one issue and protect the public and the user holistically. Looking at the broader power triangle between platforms, government, and users, we see different models in the U.S. and Europe when it comes to defining the relationship between the state and the platforms. Most regulatory approaches still overemphasize platform-controlled moderation, which our data shows is failing during crises. I believe there is real value in involving citizens and users, shaping knowledge, and holding platforms accountable. However, I remain deeply concerned about the vulnerabilities of a public-only approach. We see in our experience that the platforms still haven’t solved the old problems of inauthentic manipulation. So what protects community-driven tools from being exploited in the same way? I want to suggest that the way to include the public in the loop is with civil society, such as ourselves, playing a meaningful role in this dynamic. The public’s trust in civil society, its ability to remain independent from government, to be critical of platforms while also advocating for user protection and safety, makes it uniquely positioned to help bridge the gap. Civil society can already support platforms in understanding local contexts, needs, and sensitivities.
It’s a critical balancing force in the evolving landscape that should be handled and designed with a multi-stakeholder approach. Finally, while today’s conversations center on the US-EU axis, we must not forget that a large part of the world falls outside those spheres: places and regions considered by the platforms as small markets, with less widely spoken languages and different governance systems. The discussions we hold here have a global effect. The US and EU must ensure that the rest of the world is not left neglected in the digital shadows. Thank you very much, and I’m looking forward to this rich discussion now.
Cristina Herrera: Thank you. Thank you very much. It was very interesting to see how public…
Karine Caunes: Thank you for having this open discussion. My name is Karine Caunes and I’m the Executive Director of the Centre for AI and Digital Humanism, which aims to ensure a humanistic governance of AI in Europe and beyond, and we have participated in many negotiations at Council of Europe, EU, Organization of American States or UNESCO level. I would like to address this topic by relying on one of the studies we did regarding information manipulation on social media, more precisely the one regarding the German elections and astroturfing on X. On X, or Twitter, we analyzed all tweets and retweets mentioning one of the main German political parties in the first part of January, in order to avoid any lawsuits, more than 500,000 tweets in total. What we saw is that, through the support of foreign accounts, the AfD party has gained an overwhelming visibility advantage in Germany compared to other German political parties, and this was supported by the use of bots and the creation of fake accounts. Bots do not have freedom of expression, and through them and the increased visibility they get thanks to the recommender system, it is the freedom of expression, thought and information of German citizens which are under threat, and so is democracy. What we have observed is actually a continuous artificial amplification of certain content and reverse censorship of all other content. So yes, we actually agree with the U.S. that freedom of expression should be safeguarded, and that is exactly what the EU DSA allows the EU and its member states to do through its risk management system. As for the AI Act, the recommender system at play may constitute a prohibited practice under the AI Act, and we can also request social media to suspend, not the whole social media, but the recommender system, under the DSA during the electoral period. So we have all the tools which would allow us to fight against information manipulation while preserving freedom of expression, and this is the reason why we agree with the U.S. that we should preserve freedom of expression; if there are risks, they come from the bots that we find on social media. And for those interested, we also did studies regarding information manipulation in Romania and regarding TikTok. I’m sorry, I have a very limited amount of time; if there are questions, I’m very happy to give further information. So if the U.S. respects fundamental values and democratic values, I think there is absolutely no rift.
Cristina Herrera: Thank you. Here, Simona? No, okay. Georgie? Do we have anyone more on the line? Brahim?
Brahim Baalla: Good morning, everyone. Thank you for this opportunity to intervene in this very interesting panel. My statement is the following. According to a note released on 9 April 2025 by the U.S. Department of Homeland Security, quote, Today U.S. Citizenship and Immigration Services, USCIS, will begin considering aliens’ anti-Semitic activity on social media and physical harassment of Jewish individuals as grounds for denying an immigration benefit request. This will immediately affect aliens applying for lawful permanent resident status, foreign students, and aliens affiliated with educational institutions linked to anti-Semitic activity, unquote. Every act of anti-Semitism and racism must be condemned in the clearest and strongest way possible, and I applaud the involvement of civil society in content moderation highlighted by the first interventions. Still, there might be issues related to the consequences of these new policies for the rights and legal conditions of the thousands of European students who choose each year to pursue their studies in the U.S. Many of these students make that choice also because of the academic freedom and freedom of expression which have represented an important part of the history of the country. In fact, whilst the aim of such a policy is totally understandable and agreeable, issues might arise in respect of what might be interpreted as a violation of its terms, based on how the policy is written. The given definition, in fact, is not specific enough to be predictable for the thousands of European citizens potentially affected by this new policy. It would then be appropriate for the diplomatic bodies of national governments and the European institutions to request further clarifications on the matter.
Cristina Herrera: Thank you. Just in time. Is there anyone else in the room who registered earlier and hasn’t spoken yet? Oh, 97. Thanks.
Torsten Krause: Hello, I’m Torsten Krause. I’m working as a political scientist and child rights researcher at the Digital Opportunities Foundation, based in Berlin, Germany. And I would like to draw your attention to one third of global Internet users, who are minors: children, recognized as a vulnerable group with special human rights laid down in the Child Rights Convention and specified, with regard to the digital environment, in General Comment No. 25. When the UN established General Comment No. 25, they conducted a consultation process in which around a thousand children from all over the world were involved. And one finding was that there was a strong need for, and interest and trust in, trustworthy content. And my question concerns the shift in content moderation from fact-checking to community notes: whether we can keep this trustworthy content, trusted flaggers, and other fact-checking resources with the DSA, or whether, as Judit Bayer might assume, the EU would not be strong enough to keep these resources in the services for the European Union. And I would also like to ask Nitsan Yasur, with regard to your comment that community involvement is both an opportunity and a risk: is it maybe a wrong assumption on my part that community notes are a worthless solution with regard to fact-checking, or the other way around? How would community notes have to work to be a good solution in content moderation? Thanks.
Cristina Herrera: Thank you. Do you want to start?
Nitsan Yasur: Is it working? No, 82. Okay, now it’s working. The question was in which way community notes can work. As I mentioned, there are other places on platforms that can be manipulated by, let’s say, fake users or coordinated inauthentic behavior. We still have this problem, and I can’t see how community notes will not have the same problems that we still have, you know, in other sections of the platforms. What I tried to suggest is to bring the community into the loop not as general individuals but through, let’s say, trusted flaggers and other entities which are from the community, for the community, but still have some kind of prestige or ability to be accountable and responsible in that sense. Thank you.
Cristina Herrera: Then perhaps for the first part of the question, we go to Judit online, or if you would like to answer: whether the EU can exert influence to keep trusted flaggers and other mechanisms.
Judit Bayer: I’m happy to answer. Am I on? Yes, you are. Yeah, thank you. So, first of all, I didn’t want to make that assumption; I just said that the national regulatory authorities individually don’t have that power. But I think that the EU’s centralized structure, with the Commission and with the help of the Board, has that potential. And importantly, this risk mitigation system means that it’s up to the platforms to decide how they mitigate the risk. And they have to explain and show evidence that they have reduced the risk, that they have eliminated, I don’t know, harmful material for minors or hate speech or whatever it is. And if it’s community notes, then it can be community notes. So it’s open for discussion. And I think this discussion between the monitoring bodies, the Commission, perhaps auditors, and the platforms is going to continue, to see how effective the community notes are. I think the idea is good. I’ve seen scholarly descriptions of how this might work. I don’t know if this is how it works currently with Twitter or with Facebook. So it has to be elaborated, obviously. And so it comes down to the practical solution and the evidence that shows what works and how well it works. Thank you.
Cristina Herrera: Does anyone else want to make a statement or ask a question to the panelists? One, four, six.
Audience: Yes, thank you. Marie Bonner, I’m from Agence France-Presse and also from the European Fact-Checking Standards Network. And I had a question, maybe more for Judit Bayer, about the DSA implementation. We have been participating in the conversations on the Code of Practice on Disinformation and followed up also on the transformation of the Code of Practice into a Code of Conduct within the DSA. Does that change anything about the way the platforms have to explain what they do in terms of risk mitigation for disinformation, for example? What’s the role of the Code of Conduct within the legislation?
Judit Bayer: The Code of Conduct can serve as a guideline from which the platforms can voluntarily pick which measures they want to take and commit to; then basically they can put together their own self-regulation and make a commitment that they are going to comply with it. And then, in the monitoring and auditing procedure, what will be examined is whether they fulfill their own commitment to that set of measures in the Code of Conduct and how well they comply with those measures. So a problem emerges if they don’t take enough measures, if they don’t commit, like Twitter, which didn’t commit at all to the Code of Conduct or the Code of Practice, or if the commitments are insufficient; then it becomes difficult to argue, on both sides, that they fulfilled the risk mitigation obligations. But if they have sufficient commitments and they can show that they fulfilled them, then basically it’s an effort-based obligation, not a result-based one; well, a little bit of both, because it has to be effective. But compliance with the Code of Conduct is a sign, a probability, that the platform has done all it could, a best effort, to comply with the risk mitigation obligation. I hope I answered the question.
Cristina Herrera : Thank you, Anne. Very well. This one? This one?
Berin Szóka: Yeah, I mean, look, this is the whole ballgame, right? If American companies, especially under pressure from the Trump administration, back out of their commitment to follow those guidelines, which is exactly what’s happened, it puts the Commission in a really difficult position, right? As I said, the DSA is written to be content-neutral. It does not include any specific authority to change what is lawful, right? It simply describes the process by which unlawful content gets removed and the process by which the terms of service of the platforms, especially the very large platforms, are written and enforced. And the critical provision there, when it comes to actually enforcing the risk mitigation provisions, is that when the Commission brings an enforcement action, before it can actually issue any findings or conclusions, it has to suggest what the platform did wrong and what it should be doing, right? So consider the situation right now that the Commission finds itself in with respect to X. The Commission brought an enforcement action that covered multiple failures by X to comply with the Digital Services Act. Some of those were very easy, like selling blue checkmarks, right? That’s a very simple case. The Commission has acted already on some of those counts. It has not acted on the harder ones, specifically fact-checking. So the Commission could say that community notes, the way it’s designed, can’t be an adequate risk mitigation measure for certain classes of systemic risk, because by definition the way that community notes works requires consensus across the community, and you will, by definition, never get consensus about those categories of risk, lies about elections, that are most important. The Commission could say that, but what exactly is it that the Commission expects X to do in that circumstance? It cannot require fact-checking as such, right? What the risk mitigation provision requires is, first of all, that you assess the risk under Article 34, and then, under Article 35, that you define some measure that you, the platform, are proposing to mitigate that risk. And it doesn’t just have to be fact-checking. It could be anything, right? The DSA, in that sense, is intended to be technology-neutral, but it essentially assumes that these platforms are operating in good faith and that they will make some effort to propose some mechanism. Maybe it’s slowing the spread of content or architectural changes. And if the companies won’t do that, the Commission may find itself in a position where it just doesn’t know what to do. So it’s kind of stuck, and meanwhile, the Commission is facing political pressure not to enforce the Act at all. And so the result, and this is why jawboning can work, may be that we just never see any action on that aspect of the enforcement action. And if the Commission won’t take any action, and the companies won’t sign on to codes of conduct, then the DSA is sort of a dead letter on that issue. I mean, that’s the point. That’s why jawboning can work here. When I say that the Commission may lack the strategic autonomy to actually enforce the DSA, this is exactly what I’m talking about. There is no easy remedy for that problem.
Cristina Herrera: Thank you. Any reactions from the audience? Karine, I’m unmuting you, so you’ll have the floor.
Karine Caunes: Yes, thank you. Just very quickly, taking all the points together. So indeed, with regard to community notes and the bias that we have seen, you know, I think it’s very important for the DSA to be able to make a decision on its own. You can look at the reports from Viginum, the French agency; they reported bias and issues precisely regarding information manipulation. Obviously, we have the system of trusted flaggers under the DSA; the problem is that you would have, say, 50 reports. This is not enough. That’s the reason why the risk mitigation system, Articles 34 and 35 of the DSA, is very interesting, because it means that we can make reports based on millions of tweets, on millions of TikTok contents. And this is basically what we at Digihumanism are doing. And there is a difference here. If you go through trusted flaggers, the competent authority is a national authority. If you go through risk mitigation, the European Commission is directly responsible. And it’s not just up to social media. This is really based on the evidence we can bring to the Commission to prove that there were systemic risks to political discourse, to freedom of expression, to freedom of thought and so on. So if we have hard evidence, the Commission might be able to act. However, currently, I believe that they are waiting. Why? Because there is a trade war going on and a 90-day suspension of the tariffs, and they are waiting to see what will happen with the U.S. But I do believe that ultimately the DSA will be applied, and maybe we will first go to enforcement with regard to TikTok, since the U.S. is pressuring us to do so, but then we will come back to U.S. social media, don’t worry. We’re all working on it.
Berin Szóka: …on measures that are being used to, quote, coerce American companies to moderate content. That’s in a report that has been presented to the White House. They haven’t taken action on it yet. In other words, whatever is happening right now on trade is separate from what the administration will do on that particular point. They will continue to use free speech and so-called censorship as a justification for tariffs, as a tool to coerce the European Union. The Union is not powerless. It does have some mechanisms that it could use. We haven’t talked about this yet, but you may all be aware that the anti-coercion instrument was drafted not with the U.S. in mind, but with other, more traditionally authoritarian governments in mind. And it could be used, for example, to suspend the enforcement of intellectual property rights for Elon Musk and his companies in Europe. So that’s where we’re heading: that kind of pressure being brought to bear on Europe, and Europe trying to respond with measures like that.
Cristina Herrera : Thank you. Very interesting remarks on the geopolitics at play. Does anyone from the audience have any remarks?
Audience: I’m Daniel. I’m from YOUthDIG, and I believe that the community-based fact-checking model, like Community Notes on X, should be expanded across all major social media platforms. This approach gives power back to the people, limits the risk of government or corporate bias in labelling political narratives as misinformation, and strengthens democratic accountability. One possible path forward is a public, European-developed API for fact-checking, interoperable across platforms and enabling transparent, community-driven moderation. It would be a tool that protects both sovereignty and freedom of expression. This is not regulating through restriction; it’s regulating through innovation. Thank you.
Cristina Herrera : Do we have any reactions to that?
Audience: I just want to remind everyone that the promise of social media was to promote democracy and to give a voice to all. But then it was used by people in power, like states, governments and political actors, to manipulate those platforms and to pretend that huge numbers of people support a candidate or an issue. And I still don’t trust the platforms to do this by themselves yet. So that is my position on Community Notes.
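[Since the idea of an interoperable fact-checking API recurs later in the session, here is a minimal sketch of what such an interface might expose. Every name and field below is hypothetical, invented for illustration; no such European API currently exists.]

```python
# Hypothetical sketch of a cross-platform fact-checking API, as floated above.
from dataclasses import dataclass

@dataclass
class FactCheckEntry:
    claim: str         # normalised claim text
    rating: str        # e.g. "true", "false", "missing-context"
    source: str        # the fact-checking organisation that published it
    evidence_url: str  # link to the published fact-check

def lookup_claim(claim_text: str, registry: list[FactCheckEntry]) -> list[FactCheckEntry]:
    """Return all published fact-checks matching a claim.

    A real service would need semantic matching and signed responses so that
    any platform could verify provenance; exact matching keeps the sketch short.
    """
    needle = claim_text.strip().lower()
    return [entry for entry in registry if entry.claim.lower() == needle]
```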
Cristina Herrera : Thank you. Great.
Berin Szóka: I’ll just say briefly, the question isn’t whether Community Notes are good or bad. Community Notes is a great idea for a lot of things. The point is that, by design, it doesn’t work for the things that matter most. If the question is who won the last election, you will never get a Community Note on that issue, because certain parts of the community, in the United States in particular, but this will happen in other countries, deny that the last election was legitimate. So Community Notes on its own can’t work for certain classes of systemic risks. And this is where the approach of the DSA, I think, is exactly right: in general, we have to ask, what are the risks, how do we mitigate them, and what are the tools? Systemic risks might be mitigated through Community Notes for certain kinds of things. But for other things where society is deeply divided, and it’s not just elections, it might be vaccines, for example, you have to have some other way of dealing with that problem, or you will have the delegitimization of elections, you will have, as we had in the United States, an attempted coup, right? This is going to happen in other countries, and there has to be some other way of dealing with those problems. It’s not going to come from the bottom up. It has to come from editorial intervention.
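[The structural point about consensus can be shown with a toy model. X’s actual Community Notes algorithm is an open-source matrix-factorisation system; the two-cluster rule below is a deliberate simplification, meant only to show why a bridging requirement yields no note at all on claims where one camp never concedes.]

```python
# Toy model of the "bridging" consensus requirement described above: a note is
# published only if raters from *every* viewpoint cluster find it helpful.

def note_is_shown(ratings: dict[str, list[bool]], threshold: float = 0.5) -> bool:
    """ratings maps a viewpoint cluster (e.g. 'A', 'B') to its helpful votes."""
    for cluster_votes in ratings.values():
        if not cluster_votes:
            return False  # no signal from one side: no consensus possible
        if sum(cluster_votes) / len(cluster_votes) < threshold:
            return False  # one side withholds agreement, so the note is held
    return True

# On a contested claim ("who won the last election?") one cluster rates every
# corrective note unhelpful, so no note is ever displayed:
print(note_is_shown({"A": [True, True, True], "B": [False, False]}))  # False
```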
Cristina Herrera : Can we get a reaction from a speaker first, and then we go to the follow-up? Judit, please unmute yourself.
Judit Bayer: Thank you very much. Maybe to react first to the last words of Berin: you said editorial intervention, absolutely. I would just like to emphasize that the DSA is not the only regulatory tool of the European Union. In fact, the digital regulatory package of the European Union includes several other laws, the DMA, the Political Advertising Regulation, now the European Media Freedom Act, and some others, which together regulate the information environment. And when we talk about a healthy information environment, I think one of the major tasks is to reinforce the position of quality media, whether online or whatever transmission method is chosen, to push social media back to the place it deserves, and to emphasize that facts can be learned from quality media and not from social media. But you’re absolutely right regarding the community notes point. And back to enforcement: I agree that the European Commission will probably balance the political risk against the risk that the platforms may pose for European society, which is also a political risk. And ultimately, the Commission could try to block these platforms, to block access to them within the European Union. Geo-blocking is a possibility offered by the DSA, and I think this is an ultima ratio which we must also keep in sight if all dialogue between the US and Europe breaks down.
Cristina Herrera : Thank you. Yes, go ahead. 62.
Audience: Yeah, thank you. Regarding community notes, I wanted to say it’s not only a matter of not being able to deal with divided societies or contested issues. There is also another risk, of manipulation. As we have already seen in recent years on Wikipedia, and as we can see in other places, community notes can become a new field for manipulation, for political intervention, or even for coordinated inauthentic behavior trying to steer this kind of information. It can add another layer of risk for information integrity, and at the same time allow platforms to bypass regulation and avoid accountability for their role in safeguarding the safety and rights of users. So I think we cannot look at these new community notes mechanisms as a perfect solution, because we already know there are many new risks that have to be balanced somehow against the responsibility and accountability of the platforms themselves.
Cristina Herrera : Yes, thank you. Do you have any remarks? Sorry, 2.15. Thank you.
Audience: I speak in my capacity as a member of the advisory board of EDMO, so we are dealing with these issues every day, as you can imagine. I have two considerations that I want to bring to your attention. The first is that I don’t think there will be a solution in the next months, or the next years, to this huge divide between Europe and the US. So what we have to do immediately is start looking for alliances in the rest of the world. There is a risk for democracy, so all democracies are concerned, and on this basis we can build alliances with other countries. We need especially to look for alliances within the Group of 77, because we need to ask the countries that are committed to democracy to work on this together with Europe. I think we have lost time because we did not consider this a priority in the past, and that was a big mistake. The second thing is that there is still an opportunity in European legislation: the 2027 deadline for the personalised interface for news that is foreseen in the EMFA, the European Media Freedom Act. This is something we need to consider seriously, because it could be a long-term solution. A common API for fact-checking was mentioned before, but this is even better: if you have an interface that allows you to access only news that is guaranteed and comes from sources you know are trustworthy, then part of the problem can be solved. We have two years ahead of us, and that is useful time we can employ to arrive at the right conclusion. Thank you.
Cristina Herrera : Very interesting idea. I’m going to come back to you, but we’re going first online. Karine, I’ll just ask you to unmute.
Karine Caunes: Yes, thank you. I just wanted to react to what Judit Bayer said, and to EDMO as well. With regard to the shutdown of social media, the first country which actually did it was the US, shutting down TikTok for less than a day. We did something similar in Europe: elections were annulled in Romania, and what happened? We got a negative reaction. If we were to shut down social media in Europe, it would be used again for a new wave of information manipulation, and this is a problem that all member states have in mind. This is why, with regard to information manipulation in the context of the German elections, and we are suggesting it also for the pending elections in Romania, Portugal and Poland, which are taking place in May, we are requesting the suspension of the recommender system. The suspension of the recommender system is essential, because the recommender system is how astroturfing works, and it is also through the recommender system that illegal content, discriminatory content, revisionist content, anti-Semitic and anti-immigrant content is displayed in the For You feed of users, even if they didn’t ask for it. So the suspension of the recommender system would be a middle ground, and I’m not sure how the US could say it would be an attack on freedom of expression, so this could be a good tool. And as for alliances in the world with other countries which respect democratic values, yes, but we also have China, and with China we can find a common position with regard to the labelling of AI-generated content, because we know that fake content is used to manipulate opinions, whatever the topic. They have issued rules, and we have issued rules, and we have to implement these rules in Europe, and we can check with China what we can do together to set some kind of common standards while respecting our own values. Thank you.
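[The proposed middle ground, suspending the recommender rather than blocking a platform, corresponds roughly to the non-profiling feed option that the DSA already requires very large platforms to offer. A minimal sketch, with invented field names, of what falling back to such a feed could look like:]

```python
# Sketch: suspending the profiling-based recommender and falling back to a
# followed-accounts, reverse-chronological feed. Illustrative only.
from dataclasses import dataclass

@dataclass
class Post:
    author_followed: bool     # did the user choose to follow this account?
    timestamp: float          # posting time (Unix epoch)
    engagement_score: float   # output of a profiling-based ranker

def build_feed(posts: list[Post], recommender_suspended: bool) -> list[Post]:
    if recommender_suspended:
        # Newest posts from followed accounts only: unsolicited "For You"
        # amplification (the astroturfing channel described above) is cut off.
        followed = [p for p in posts if p.author_followed]
        return sorted(followed, key=lambda p: p.timestamp, reverse=True)
    # Default: engagement-ranked feed driven by the recommender's scores.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
```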
Cristina Herrera : Thank you. We’re going to have one last statement, from 383.
Audience: Well, manipulation and coups existed long before the internet, and they will continue with or without it. Your country knows more about that than anyone else, and whether it’s Trump or any other American president, Republican or Democrat, the reality of the power games stays the same. Community Notes isn’t perfect, but it’s the most transparent tool we have at the moment on our social media platforms, and it is a foundation we can build on through innovation. When I talk about the European fact-checking API, the goal isn’t to give governments the power to decide what is true; it gives users that power, through open, decentralized, democratic tools that let people verify information collectively, without being dictated to by states or corporations.
Berin Szóka: Many people don’t want to know what’s true; they don’t care about fact-checking. Those people are the problem. So if you design a system that is intended to fix the problem, and the people who are the problem don’t want to use it, you haven’t fixed anything.
Cristina Herrera : Right, so with that we have finished the statements section, and now we’re going to go to the messages drafted by the programme committee. You can see them on the screen. And remember, we’re going to ask for consensus: if you don’t agree with anything that is said here, let us know, and there will also be a link to follow up after the session. Go ahead, thank you.
Moderator: My name is Yrjö Länsipuro, from Finland, and I’ve been trying to craft a couple of messages from this discussion. It was a really good discussion, and very detailed on some questions. It would be very nice to write a ten-page report of it; however, when you try to write a couple of messages, you have to work as the editor I have been, and you have to make it short. Basically, since the question in this part of EuroDIG was what Europe should do, I’ve concentrated on two things here: describing the situation, and then what Europe should do. The first message: tensions between tech giants and European regulators are nothing new, but they are now getting increasingly entangled with transatlantic political conflicts, with Internet issues risking becoming pawns in disputes over trade and defence policies. This has exacerbated, at least on a rhetorical level, the divergence between European and American interpretations of freedom of expression. There have been attempts to label European regulation against harmful content or election interference as censorship. So this paints the picture of where we are. The second message, which is what we should do: as to how Europe should reply to the pressures, there was a consensus that retreating was not an option. While continuing the transatlantic dialogue and trying to correct obvious American misunderstandings about the nature of the DSA, DMA and other regulations, Europe should make clear that it will defend its basic principles. At the same time, the European regulatory instruments should continue to be refined, simplified and made smarter.
Cristina Herrera : Thank you. Are there any notes on what we’re seeing? Does anyone disagree? Go ahead.
Berin Szóka: Yeah, I think the problem is not that the European regulatory instruments are not simple enough, but that they have not been designed with this kind of conflict in mind. I think if we had known that Trump would be president again, the Digital Services Act would have been written more carefully. And I think that’s what needs to happen now. The specific recommendation should be to rethink the DSA, the entire package of regulation, with the current context in mind. That should include the provisions that are most ambiguous: for example, what is a risk to civic discourse, or a risk to electoral processes? Those terms aren’t defined. What exactly are the requirements of risk mitigation? And most importantly, there should be something in the text of the law that says it cannot be used to target specific kinds of content. I don’t think it’s good enough to say that that’s implicit in European fundamental rights law. For example, we already have clear, explicit prohibitions on European law being interpreted to require monitoring of user communications; that probably was implicit in fundamental rights law, but we put it into legislation explicitly. The same sort of thing should happen in the DSA as it’s revised.
Cristina Herrera : Any reactions from the audience? Karine, I’m asking, do you want to unmute? Go ahead.
Karine Caunes: Yes. As we said regarding community notes, we hardly agree, and I would disagree with the last position. I think that to reopen the DSA, the AI Act and the GDPR would be a big mistake. It would only serve to water down their content, and that is not the aim. I think we have the tools; the question is, do we have the political will to enforce them? Let’s see what happens in the future.
Cristina Herrera : Yes. Right there. Thank you.
Audience: I hope you hear me okay. Jorge Cancio from the Swiss government. I just wanted to make a comment. I don’t have a specific wording proposal, but I have the impression that, to a certain extent, we are conflating European Union legislation with European legislation, and there is a lot of confusion. I think we have to be careful about the difference. Of course, we share a tradition, and we share approaches, but not necessarily the details of the specific instruments themselves, which not all of us have adopted. For instance, here we are in the Council of Europe, 46 countries, and there are many countries who don’t exactly follow the same line as the European Union instruments, Switzerland included. So I would ask the drafters to use slightly different wording, perhaps with an “e.g.” or a “for instance”, or something like that. And regarding the other side of the Atlantic, coming from a global and international background, I also have some difficulty when we talk about America, because America is Mexico, is Canada, America is many things. I think we should say US-American or something in that direction. Thank you.
Cristina Herrera : Great, thank you. We’ll go here. 58.
Audience: Hi, hello. I’m David Crouch from the Internet Society. I don’t think we in Europe should be regulating or changing regulations depending on who’s sitting in the White House. We should make future-proof regulation. What I would like to see is that any decisions going forward should preserve the open nature of the Internet. There are some recommendations on intermediary liability that should be taken into account, and that’s it. Thank you.
Cristina Herrera : There was someone here. Go ahead. 327.
Audience: Thank you. Tim van der Belt of the Dutch Authority for Digital Infrastructure. I would like to address the political side of the enforcement of regulation, because the regulator, or the supervisory authority, ought to be independent. But if enforcement is a political issue, an authority cannot be independent. So I would like to remove the political will from the enforcement part and rather look at the system, or the design of the regulation, instead of the political will to enforce or whether an authority is allowed to enforce. Because for an authority, only the public interest is at stake; the political part is for the state or the ministries, the appointed institutions. Thank you.
Cristina Herrera : Thank you. 250.
Audience: Yeah. One remark on the text. Of course, I agree with Jorge that we are talking about the US, not the Americas. But in the last phrase of the second point, we say the regulatory instruments continue to be refined; I think that before they are refined, we need to implement them. The political decision of the whole package that has been elaborated over the last five years by the European Commission, regulating the digital world, is now arriving at the implementation phase. If the implementation is not there, this will discredit the whole process. We know that, under European legislation, it takes years before infringement procedures arrive at the final point. This is the case now: the first procedures against violations of the GDPR are arriving at the moment of enforcement, and we have the same for the first DSA provisions, et cetera, and for violations of the code of practice. So we need to implement.
If we don’t implement, the whole architecture that we have built during these five years will have no future and no credibility in the rest of the world. So this is the only thing that we need to do now. Thank you.
Cristina Herrera : Thank you. Sorry, one answer. 25.
Moderator: Yeah, is that on? Yeah. Thank you very much for all those remarks. The drafting continues with the whole programme committee now, until the 25th, I believe. So can I ask: we have taken those remarks into account, and we will try to reflect them in the text. Can we ask for a rough consensus on this, on the basic things? Maybe we’re going to take the last remarks, from 1.52, and then we can make the vote. Thanks.
Audience: My name is Julie Posetti. I’m an academic and a journalist based in the UK. I very much agree with the Swiss colleague’s comments, because unfortunately the UK is no longer part of the EU, so we especially need to reflect those different standards and norms, which should be aligned. And I just wanted to reiterate, I can’t remember who said this, the need to be more creative and networked in our response to these challenges, because although most Americans apparently couldn’t see Trump 2.0 coming, those of us who work with large data sets analysing disinformation and hate speech online could see this surge and did predict it. So it is unfortunate that the various pieces of legislation were not crafted to anticipate a tilt towards authoritarianism in the land that previously marked itself out as a genuine bastion of freedom of expression. But one of the most disturbing things I have heard here in the past couple of days came from a sideline conversation, where a representative of a state apparatus suggested that we’ve already lost the battle to regulate, so we should just give up because there is no political will. The second point that person made was that we shouldn’t be regulating AI because we don’t know yet what damage it can do, and we have to wait for the damage to be done before we regulate to prevent harm. And I don’t know what my ask is here, other than to reinforce that the function of the Council of Europe could in fact be to bring together not just European nations, but to think, as our African colleagues requested in a plenary session yesterday, about acting in a way that takes account of the networked effects of unregulated, largely US-based platforms, which now form part of what we refer to as the broligarchy, with the political power of the Trump administration reinforcing their dominance. And to do so in a way that perhaps brings together like-minded democracies, as someone suggested, in North America, such as Canada, and in Oceania, such as Australia and New Zealand, and many others. We could consider South Africa and a whole range of other countries that are intent on trying to regulate in an effective way to defend democracy, human rights and the rule of law, none of which we can save without a concerted accountability mechanism, which includes regulation. Thank you.
Moderator: Thank you very much for all your comments. To add to the request for rough consensus: with the messages, we ask whether there are any strong objections to the messages displayed here on the screen. If there is rough consensus, or if there are no strong objections, then the organizing team will take the comments and questions you have made during this session and make the changes you have proposed.
Moderator: You had a strong… do you have a strong objection?
Berin Szóka: We’ve talked a lot about regulation, but what you just said reminded me that, at the end of the day, the rule of law is not primarily about legislation or regulation. It’s about courts. In particular, the entire Council of Europe system, the European Convention on Human Rights, assumes that you have an effective court to deal with claims. And the way in which J.D. Vance is most correct in his criticism, in particular of law in the UK, is…
Moderator: I’m so sorry to interrupt, but may I ask, what specifically is the objection?
Berin Szóka: My objection is that we need to not only talk about regulation, but also say something about the importance of effective judicial supervision to ensure fundamental rights, because that is not happening in the Council of Europe system.
Cristina Herrera : Thank you very much. That’s been well noted for the transcript, and it’s something the organizing team will take into consideration when further drafting the messages. Are there any other strong objections? 483 has a strong objection.
Audience: Yes, hello, Olivier Crépin-Leblond, ISOC UK England. I just have an objection to a word in the second part: “while continuing the transatlantic dialogue and trying to correct obvious American misunderstandings”. I would propose striking out “obvious”, because I think it’s a bit condescending to call it an obvious misunderstanding.
Cristina Herrera : Thank you very much for your comment, this has been well noted. Any other strong objections? Nadia, from online: we have an objection to the word “simplified”, and Judit would like to raise a strong objection. The strong objection to “simplified” has been well noted. Any other?
Berin Szóka: It’s less American misunderstandings than it is mischaracterizations by the Trump administration.
Cristina Herrera : Thank you very much, we’ll make a note of that as well. If there are no further objections, then I hand the moderation back to the moderator. Thank you.
Moderator: Thank you all very much for attending this session. I think it was very interactive, as EuroDIG wanted, and thank you all for participating. And of course, thank you very much to the moderator for leading us through this very interesting session. We can see how many people are passionate about this topic, and how many different views and ideas there are about it. Hopefully you will choose not to end this conversation here, but to join us for the social evening tonight, where we can meet each other and continue these conversations. We’ll be meeting hopefully this evening at the Tigre. It’s not going to be a grand party, but at least the first drink is on us. So I hope you will come and join us. Otherwise, we will see you here tomorrow for main topic five, on the age verification dilemma: balancing child protection and digital access rights. Have a wonderful end of your day and a good evening. Thank you very much.