Child safety online – update on legal regulatory trends combatting child sexual abuse online – WS 01a 2024

From EuroDIG Wiki

18 June 2024 | 10:30 - 11:30 EEST | WS room 1 | Video recording | Transcript
Consolidated programme 2024 overview

Proposals: #9 #21 (see list of proposals)

You may also be interested in Workshop 1b: Protecting vulnerable groups online from harmful content – new (technical) approaches, as both sessions are closely interlinked.

You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published at the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.

Kindly note that it may take a while until the Org Team is formed and starts working.

To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.

Session teasers

This session endeavors to delve into the recent advancements in the legal landscape concerning online child safety, with a particular emphasis on the UK's experience in implementing the Online Safety Act and its juxtaposition with the EU's proposed CSAR. Central to the discourse is the pivotal question: 'How can we create secure online environments for children while safeguarding the privacy and fundamental rights of users, including those of children?'

Session description

Currently, child safety and security are at the forefront of the EU's agenda, with a comprehensive legal framework inclusive of the Digital Services Act (DSA), the NIS Directive, and notably, the proposed Regulation laying down rules to prevent and combat child sexual abuse (CSAR). The CSAR introduces obligations for service providers to detect, report, remove and block access to online child sexual abuse material. However, the proposal has faced significant criticism for potentially infringing on privacy rights, circumventing end-to-end encryption, and risking the creation of a new mass surveillance system.

The UK's Online Safety Act 2023 has adopted a similar approach to the CSAR. This session aims to draw insights from the UK's experience with implementing the Online Safety Act, comparing it with the proposed CSAR. We will explore whether the recent amendments to the CSAR address concerns regarding the proportionality of its measures, thus ensuring the protection of children online while safeguarding privacy and other fundamental rights. This workshop builds on the 2023 workshop 5 “Proposal for a regulation laying down rules to prevent and combat child sexual abuse”.


This workshop will feature a discussion among relevant stakeholders. To engage the audience and gather their input, we will use Mentimeter to pose questions to the public at the beginning, during, and at the end of the session.

You can prepare yourself for this question: Are you more concerned that children are not protected enough online, or that stricter online child protection will lead to too much surveillance? At the end of the workshop, you can find out whether you have changed your mind.

During the workshop you will be asked the following questions:

What are the major concerns you have with the CSAR proposal?

What are the main benefits of the CSAR proposal?

Further reading


People

Please provide name and institution for all people you list here.

Programme Committee member(s)

  • Desara Dushi
  • Jörn Erbguth

The Programme Committee supports the programme planning process throughout the year and works closely with the Secretariat. Members of the committee give advice on the topics, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and monitor the complete programme to avoid repetition.

Focal Point(s)

Workshop 1a

  • Wout de Natris, Dynamic Coalition IS3C

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Programme Committee member(s) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.

Organising Team (Org Team)

List Org Team members here as they sign up.

  • David Frautschy
  • Sabrina Vorbau
  • Vittorio Bertola
  • Amali De Silva – Mitchell
  • Natálie Terčová
  • Menno Ettema
  • Inga Rimkevičienė
  • Kim Barker
  • Torsten Krause
  • Arda Gerkens
  • Kristina Mikoliūnienė
  • Anna Rywczyńska
  • Peter Joziasse
  • Andrew Campling
  • Michael Tunks
  • Maciej Groń

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

Key Participants

  • Jaap-Henk Hoepman, University of Nijmegen
  • Kristina Mikoliūnienė, Lithuanian regulator RRT
  • Fabiola Bas Palomares, Eurochild
  • Desara Dushi, Vrije Universiteit Brussel (Free University of Brussels)


Moderator

  • Wout de Natris, De Natris Consult / coordinator IGF DC IS3C

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.


Reporter

Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages


  1. Advancements in legal and regulatory measures on Child Sexual Abuse (CSA)
    Workshop 1a discussed three recent measures on the protection of children from online Child Sexual Abuse (CSA): the proposed EU CSA Regulation (CSAR), the new UK Online Safety Act, and the positive results from the Lithuanian Law on the Protection of Minors against detrimental effects of public information. An agreement was found on the need for better regulation in this field, emphasising the accountability of online service providers for monitoring illegal and harmful material and safeguarding minors.
  2. Major concerns and benefits
    CSA is currently increasing exponentially and has serious consequences for the rights and development of children. For this reason, recognising such depictions and preventing child sexual abuse should go hand in hand. Participants are concerned about the safety of users, including with regard to the potential use of technology. Breaches of confidential communication or anonymity are viewed critically. At the same time, advantages are recognised in the regulations, e.g. with regard to problem awareness or safety-by-design approaches. Age verification procedures are perceived as both a risk and an advantage; however, they should not come at the expense of anonymity and participation.
  3. The interplay of privacy and safety
    The participants of Workshop 1a of EuroDIG believe privacy and safety are intertwined and inseparable, advocating that legal solutions to combat child sexual abuse online must strive to optimise both. These measures should be centred on children’s rights and their best interests, as a way forward to achieve this balance.

Video record


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Transcripts and more session details were provided by the Geneva Internet Platform

Wout de Natris: Thank you very much. I’ll check my email in a moment because we’re missing a speaker. Thank you very much and welcome everybody. I’m going to ask the remote moderator to put up a question in Mentimeter. We are asking you to go to Mentimeter and answer a question for us. You will see later that this question returns. And in between, to keep up the interaction, we will have two more questions that you can answer. So, please go to Mentimeter and answer the question that is asked there: are you more concerned that children are not protected enough online from sexual abuse, or that stricter online child protection will lead to too much surveillance? So welcome to this session. My name is Wout de Natris, I’m a consultant in the Netherlands and here at EuroDIG in my capacity as coordinator of the Dynamic Coalition on Internet Standards, Security and Safety, which is not so relevant for this session, so I won’t go into that. With me on stage or online are Desara Dushi of the Vrije Universiteit Brussel, and Kristina Mikoliūnienė, council member of the Lithuanian regulator RRT. Online is Fabiola Bas Palomares, Lead Policy and Advocacy Officer on Online Safety at Eurochild, and hopefully very soon Nigel Hickson of the UK Department for Science, Innovation and Technology, and Jaap-Henk Hoepman, who is an Associate Professor in Computer Science at the Radboud University in Nijmegen. This session endeavours to delve into the recent advancements in the legal landscape concerning online child safety, with a particular emphasis on the UK’s experience in implementing the Online Safety Act and its juxtaposition with the EU’s proposed CSAR. Central to this discourse is the pivotal question: how can we create secure online environments for children while safeguarding the privacy and fundamental rights of users, including those of children?
And we have a result of four people, so please go on Mentimeter and answer the questions for us. Nigel, you can come and sit on stage with us, thank you. So please go on Mentimeter and answer this question so that we know where your position is at this point in time in this discussion. With that, I will start shutting up and give the floor to the people around me. And Nigel, you’re invited to be the first.

Nigel Hickson: Can I have just one minute so I can open the floor?

Wout de Natris: Okay. I will give the floor to Desara and then to you. Okay. So Desara, the floor is yours.

Desara Dushi: Hello, everyone. So I’m going to provide a brief overview of the situation in the EU regarding the regulation concerning the protection of children from sexual abuse and exploitation online. Currently, we have an interim regulation that allows service providers to scan communication in order to detect the dissemination of child sexual abuse online. For child sexual abuse material, I’m going to use the acronym CSAM from now on. And this is done in non-encrypted environments in their services. Now, this interim regulation was supposed to be applicable until August this year, and then it would stop working. But since, of course, ensuring online safety is paramount, there was a need for a permanent law that was supposed to enter into force once this interim regulation stops, so in August this year. And this permanent legislation is the European Commission’s proposed regulation laying down rules to prevent and combat child sexual abuse, which was made public in May 2022. But because this turned out to be a highly contested proposal, particularly regarding its implications for fundamental rights, mainly the right to privacy, it was obvious that its adoption could not be reached by August this year. And that’s why the interim regulation was extended until April 2026. Meanwhile, the proposal by the European Commission is a regulation, which means that once adopted, it will be directly applicable in all EU member states. The aim of this regulation is to harmonize the rules within the Union for easier collaboration in fighting online child sexual abuse. And it focuses on the role that online service providers should have in order to protect children. As such, it imposes obligations related to detecting, removing, reporting, and blocking three types of content.
Known CSAM, unknown CSAM, and solicitation of children, or what we know as grooming. And this would happen both in encrypted and non-encrypted environments. Known CSAM means material that has been previously detected and identified as constituting child sexual abuse material, and it is included in the database of known CSAM. Unknown CSAM is potential child sexual abuse material that has not been confirmed and does not exist in the database of known CSAM. Grooming means the process of befriending a child and engaging them in sexual acts, either in the form of self-generated material or by arranging an in-person meeting, while of course demanding secrecy in order to prevent disclosure. Now, how does this proposed regulation work? It requires service providers to conduct risk assessments to evaluate the potential of their services to facilitate child sexual abuse online. And when a risk is identified during this risk assessment, then service providers would need to implement reasonable mitigation measures, which should be effective, targeted, and proportionate to the identified risk. Now, this risk assessment report, together with the mitigation measures, should be reported by the service providers to the National Coordinating Authority, which is supposed to be the authority responsible for monitoring the national implementation of the regulation. And they also have to send it to the EU Centre, which is a new EU body that will be created by this proposed regulation, and I will explain its role later. Now, after assessing this report, if the national coordinating authority thinks that there is evidence of a significant risk that the service is used for online child sexual abuse, then it can ask a competent judicial authority or an independent administrative authority to issue a detection order. And this is where the problem starts regarding the contestability of this proposed regulation.
Because once a detection order is issued, service providers would be compelled to implement tools for detecting the dissemination of CSAM or grooming, depending on what the detection order is about. And in the case of grooming, they would have to implement tools regarding age verification and age assessment. Now, in March of this year, many changes were introduced to this draft proposal. These changes include a risk-based approach, classifying service providers into three levels: high, medium, and low. Depending on the risk level, providers, of course, will be subject to different levels of safeguards and obligations. One of the changes is that detection orders would be restricted to high-risk services only, and only as a last resort. These measures seem to redirect the issuance of detection orders to selected communications and also to targeted individuals, meaning those individuals that have been flagged several times as potentially sharing CSAM, which means that it sort of narrows down the scope of the order in practice. The proposed regulation also provides for provisions regarding removal and blocking of CSAM, but because we have limited time, I’m not going to go into detail on this. I’ll move directly to explaining what the EU Centre is about. This EU Centre will be established to support the implementation of this proposed regulation at EU level. And it has two main tasks. The first is to provide a list of available legally compliant technologies that service providers can use to comply with the regulation, and make them available for free. But this doesn’t mean that they have to use only technologies from that list. They can also use other technologies as long as they make sure that these technologies achieve the purpose of the detection order.
And the second task of the EU Centre is to operate databases of indicators for CSAM that providers must rely on when complying with their detection obligations. Now, this appears to be an attempt by the EU to centralise the management of a CSAM database, substituting the database that currently exists and is managed by a US-based child rights organisation, NCMEC, the National Center for Missing and Exploited Children. Now, with the new changes, the Centre will also have investigation powers to search for CSAM in publicly accessible content. And it will also manage data exchanges among service providers, national authorities and Europol. Now, as I said, this is a highly contested proposal, but without going into the details of why it is contested, I would just like to add one thing: the proposed detection measures and their impact on individual rights will depend greatly on the choice of the applied technology and the selected indicators. Thank you.

Wout de Natris: Thank you. And as you can see, there’s a lot of developments going on, but still, many things need to be decided on as well. The UK is further than the European Union, so Nigel, please give us an update on where the situation in the UK is.

Nigel Hickson: All right, can you hear me? Good morning. Good morning. All right. Thank you very much. I’ll be fairly brief. So I’m Nigel Hickson and I work for the Department for Science, Innovation and Technology in the… Anyone else from the UK in the room? Ah, good. So all right, Andrew. Yeah. So in the UK, we have a strange phenomenon called a general election. No, actually, it’s not that strange. It just seems so. When the UK has a general election, and this was called about three weeks ago, it means that we go into a certain sort of period where civil servants and ministers are not allowed to comment on any new policies; they’re not allowed to give advice, or look forward to what might happen when the next government comes into force. So our election is on the 4th of July, and a new government, either the existing party, the Conservatives, or a new party made up of the Labour Party, or perhaps a grouping of other parties, will form the new government. So we as civil servants are under some restrictions. So I can only say so much this morning. Indeed, the real experts on online safety decided they couldn’t say anything, but mainly they’re young civil servants with a career ahead of them. And I’m an old civil servant with no career ahead of me. So it doesn’t really matter what I say. So to put it in some context, who’s heard of the Online Safety Act? Yeah, good. So the Online Safety Act, it’s now an act. So it was a proposed law, and now it’s become an act, after a long period of discussion and reflection and debate. But I think it’s clear to say that it was fairly controversial. And the government liked to think that it was fairly sort of groundbreaking in some of its approaches. So the Act itself came into force in October last year, and essentially it sets out new laws that protect children and adults online. The Act ensures that providers are under a set of obligations in terms of the content on their sites.
And these providers, the definition of a provider of content is fairly widely drawn, as you would imagine: it covers the usual platforms, if you like, but it also covers a whole range of other content providers, which could be local content providers in a community or content providers outside of the UK. One of the features of the Act is that it’s like European Union legislation in many respects: it has extraterritorial effect. What that means is that you can be a provider of content outside of the UK, but if your services are targeted or in any way aimed at UK citizens, then you fall under the provisions of the Act as well. So who does the Act apply to? Well, I’ve just talked about how it applies to service providers in a range of institutions and also websites as well. So it really is fairly wide ranging. The Online Safety Act is being implemented in different stages. The enforcement authority is Ofcom. Ofcom is the independent regulator in the UK for telecommunication services. So Ofcom might be fairly well known to you; Ofcom has been involved in internet issues before, but this is the first time in the UK that the telecoms regulator has been given specific powers to enforce these provisions. So there are two types of content the Act is primarily aimed at. The first is illegal content. Illegal content is obviously defined in the Act, and essentially any illegal content that appears on these platforms or is provided by the various providers has to be taken down in a certain way and under certain conditions. And these takedown provisions will be set out by Ofcom and will be agreed. Indeed, Ofcom, the regulator (and some of you might have read some of their proposed guidance) has already begun producing draft codes of practice and guidance for consultation. So I can give you one more minute. All right, one more minute. The second part of it is duties about harmful content to children.
And that’s where I think some of the groundbreaking measures come in, because we’re talking about harmful content here, not necessarily illegal content. Since the 31st of January 2024, it has been a criminal offense to do the following online: encouraging or assisting serious self-harm, cyber flashing, sending false information intended to cause non-trivial harm, threatening communication, intimate image abuse, and epilepsy trolling. So those aspects of online behavior are already illegal, and individuals and companies have already been fined and penalized for that sort of content. So I think I can finish there. I can take questions on the factual nature of any of the provisions, but this is a significant piece of legislation. I can’t obviously say anything about what a new government would do in terms of this legislation, but I think serious followers of CSAM and other measures, and let me just finish this point if I may, will note that the opposition party, the Labour Party, when this bill was being passed, pushed the government to do even more in this area. So that might give some indication of their view if they form the forthcoming government, but of course it might not. Thank you.

Wout de Natris: Thank you Nigel, and I think that shows that the UK has taken significant steps towards fighting this harmful content, etc. on the internet. We move on to the next question, but first I would like to see if there are any changes in the Mentimeter, because a lot of people came in. You are still able to answer the first question. So if you can go to Mentimeter, the code is up there, and also answer the questions if you haven’t yet, then we can see what happens during this session. So you’re encouraged to fill in this question. Thank you. The next speaker is Kristina from Lithuania, from the national regulator, and they have their own story to tell. So Kristina, how did Lithuania manage to regulate Child Sexual Abuse Material, CSAM, earlier than the UK and the European Commission, and is the law that you have in Lithuania similar to what the UK has adopted and what the EC plans to adopt?

Kristina Mikoliūnienė: Thank you. Good morning to everybody. First of all, I will mention the name of the law. It’s the Law on the Protection of Minors against the Detrimental Effects of Public Information. It was created in 2002. And of course, speaking about the law, the first question is: why? Why do we need to protect minors in Lithuania? And the answer could be, or is, that adults are responsible for minors. And in that way, we are also protecting the values of our country, like family, the defense of the weak, and moral well-being. So going forward about the law, how did we manage to be among the first? First of all, because people understood very precisely how minors are impacted by public information. Because our law is not only related to the online world, to the internet, but has a broader view covering publicly available information. So we had many problems in that area. What kind of problems did we have? We had bullying, we had pornography, we had seesawing, we had violence, we had promotion of gambling and many other issues which make a negative impact on minors. And also people working with minors understood very precisely how deeply this information in the media in general is impacting young people. They also had the possibility to use the national situation that in Lithuania pornography is prohibited in general. So there is no distinction between adult pornography and child pornography: pornography in general is prohibited in the whole country, not only in movies but also online. So it helped us to be very precise, to be on time and to solve problems which we recognized in our country. Is the law similar to the UK law and the European Commission proposal? Yes and no. Yes, it is similar because it shares a common goal of protecting minors from harmful content, but it also differs in many areas. It differs in scope, it differs in enforcement mechanisms, it also differs in specific provisions.
So, Lithuanian law covers a wider range of harmful content. As I said: pornography, sexual abuse, even self-mutilation and suicide. So, it covers many issues which are harmful to minors. But the UK law focuses more deeply on online platforms, and the European Commission proposal is highly specialized: it targets the specific issue of sexual abuse. It also details technological and international cooperation. So, it could be mentioned that maybe not everything needed to be written in the law, because we have very strong international cooperation: we participate in the Arachnid project, we are a member of INHOPE, and we are also a trusted flagger for Google, YouTube, TikTok and Discord. So not everything necessarily has to be written in the law. Thank you.

Wout de Natris: Thank you. We clearly hear the local situation in Lithuania. Let’s look at it from another angle, and we’re going to our first online speaker, Fabiola. What risks and harms are children actually facing online? So what is it that we are talking about in this session? Please enlighten us, Fabiola.

Fabiola Bas Palomares: Yes, I hope you can all hear me well.

Wout de Natris: Yes, again, thank you.

Fabiola Bas Palomares: Thank you very much for inviting Eurochild here today, which, for those of you who don’t know us, is a child rights network organization in Europe, the widest one, with around 200 members in 42 countries. And we fight to put children at the heart of policymaking at EU level and national level. And regarding your question: our latest research, which we have done with 500 children globally, confirms more or less the classic frameworks of understanding online risks for children. Participants of our study highlighted concerns around viewing inappropriate content online; experiencing violence, which covered a very wide spectrum of harms, including cyberbullying, harassment, unwanted contact from individuals with malicious intentions, like, for example, grooming and other types of malicious conduct; and data and information security. And their conceptualization of these risks actually went a little bit beyond the traditional data protection concerns, because it not only included the misuse of their personal data by the online service providers, but it also included the misuse of their pictures, videos, and the information that they post by individuals with bad intentions on the platforms, including potentially leading to CSAM. So those are the three big pockets of risk that children highlighted for us in our study. But I think it is important to understand that children are not a monolithic group. All children are not the same. And there is a degree of complexity as to how different risk factors and types of harm relate to each other. And in fact, child sexual abuse can manifest as illegal content, which is child sexual abuse material, but also contact, including solicitation or grooming, and even conduct, because we know that there is quite a lot of self-generated CSAM, which is then turned against the children themselves.
So it’s a little bit more complex than just one category, as we often understand or refer to child sexual abuse. But despite this complexity, we know one thing for sure: child sexual abuse is becoming more and more widespread right now. I’m going to give you some numbers. In 2023, NCMEC, which was already mentioned, the U.S. National Center for Missing and Exploited Children, to whom companies in the U.S. are obliged to report CSAM that they find, received around 36 million reports of suspected child sexual abuse, of which the category that grew the most, and actually grew 300% from 2021 to 2023, was online grooming. So grooming is becoming one of the key core aspects in this fight. We also know that service providers submitted 55 million images and 50 million videos in 2023 to the NCMEC CyberTipline. We also know that in the same year, the IWF, which is the Internet Watch Foundation, confirmed over 275,000 URLs containing at least one, but in many cases tens, hundreds, or even thousands of child sexual abuse images and videos. But I feel that the more numbers I give, the more difficult it actually gets to grasp the magnitude of this child sexual abuse crisis that we’re seeing. And I just want to go back to basics a little bit and remind you that behind every case, there is a child who is being sexually exploited. And there is a child whose rights to privacy, protection, self-expression, and development are being violated on several fronts. And for some, repeatedly, if the material depicting their abuse is re-shared for years and even sometimes decades. Behind every number, there is a crime against a child, against those who we as a society swore to protect. So beyond the numbers a little bit, child sexual abuse is an experience that robs children of their childhood and seriously impairs children’s rights to development, but also their development as full digital citizens.
And children are aware of these consequences, which is why the child sexual abuse crisis that we’re living right now, it’s not only a matter of numbers, but it’s also a matter of faces, the faces of the children behind those numbers. Thank you.

Wout de Natris: Thank you very much. I think that it’s very confrontational, all the numbers that we’re hearing, and I didn’t know it was that bad, to be honest, despite moderating this session. It’s actually shocking. But that does mean that we have to continue with this session. And Jaap-Henk, I can’t see if he’s online, but he isn’t. Okay, then we have an issue, because he has two questions to answer. I’ll look into my email for a moment. Then we’re going to put up the two questions that we have, because we want to have some interaction, but there is also an option to ask a question. But first we have two Mentimeter questions that we’d like you to answer. So this is the score. Let’s look at the score first. I think that people are more concerned, at six. I have trouble reading it from here, the letters are small. So can you read it?

Desara Dushi: Six people say that they are more concerned about child protection from sexual abuse. And then we have eight more concerned about surveillance. And then nine that say we can have both child protection and…

Wout de Natris: Nobody who has no opinion or doesn’t know.

Desara Dushi: That’s good.

Wout de Natris: We have a second question on the Mentimeter. And you can just throw in words and then we can score words, I understand. So what are the major concerns you have with the CSAR proposal? Yeah, I think that question has gone online. So we have 27 responses. I’ll try to get the speaker that is missing to come online, so give me a sign if he does. In the meantime, not everything shows up. I think what comes up a lot is age verification, client-side scanning, and “insufficient”. Those are the three that are broadest, so to speak.

Desara Dushi: I’m not sure what “insufficient” means. Insufficient in terms of insufficient protection for children, or is the law insufficient, in what sense? Maybe the people who mentioned “insufficient” would like to take the stage. And then we see a lot of other aspects: privacy, loss of anonymity, technical circumvention, which is related to end-to-end encrypted messages, privacy issues again, fundamental rights, unprotected children, legitimacy in all aspects, vulnerable groups. So there’s a lot in it, and bad implementation is mentioned a lot as well. Many of these are the issues that have been raised since the first moment this proposal was made public.

Wout de Natris: It does show that a lot of different issues come up with this legislation, but the main ones, at least to me, not being an expert and just a moderator, have a lot to do with privacy: age verification, client-side scanning, and the fears that come with a loss of privacy. We have a second question to keep up the interaction. The next one, please. What are the main benefits of the CSAR proposal? So what do you think it would solve, or what do you think would become better because of the proposal, if anything, of course, because it could also be that you only see concerns. Let’s see. In the meantime, does somebody have a question for the panel about the presentations so far? There are no hands online, as far as I can see. Yes, please come up here, because the microphone is here. So people can fill in the Mentimeter, and then we take the question first. Introduce yourself, please, and then ask your question.

Audience: I’m Leonie Poputz, I’m with YOUthDIG, and I was wondering, just in general, because there’s the child sexual abuse regulation, but there’s also a lot of advancement on the national level on age verification outside of the CSAR, as a thing of its own. And I was wondering how the discourse or political debate is going in Lithuania specifically, but maybe also in other countries, if you have insights into those. I know, for example, that in Spain, in Italy, in Denmark, and in the Netherlands there have been calls for age verification to be implemented more. So I was wondering about the situation in Lithuania, or about other countries, if you have any insights. Thank you.

Kristina Mikoliūnienė: Okay, so thank you very much for that question. Actually, I have never heard about political discussions about age verification, but it was part of an interview on our national TV, where I was asked whether Lithuania has some technical possibilities to implement age verification methods. At the moment, no, we do not have such, but we are the institution responsible as the DSA coordinator, and we are also participating in the European initiative on an age verification possibility for the whole of Europe. So I think in general we wouldn’t be against it, but I think it’s very controversial because of the openness of the internet. The general idea of the internet is anonymity, so that you can freely ask the internet what you need, what you want to know, etc. In that case, age verification would also be necessary for young people. It means that children would also have to be age-verified on the internet, and that every child at home watching YouTube would have to provide some PIN or whatever, just to be on the internet. I think that crosses a bit with the general idea of the internet. So I think this is the reason why we act very actively as a regulator, being there to make the internet safe. And with all these prohibitions, which are sometimes not seen very positively, we make sure that our children are safe on the internet, because we as adults are responsible for them.

Wout de Natris: Thank you. We have the scores of the second question. I think child protection jumps out most, but also child empowerment, which is of course different from child protection. Age verification comes up again, but safety by design is also coming up more broadly. So here too you see a lot of different opinions coming forward, but the biggest one is child protection, so that does give an input. What the proposal is supposed to do is also perceived as the main thing that comes out of this question. Do you agree? Yeah, it’s interesting to see that age verification comes up both as a benefit and as a concern. Right. That’s what I was thinking as well, but it does both. So, in the meantime,

Fabiola Bas Palomares: I just wanted to comment on the question that was raised, just saying that it is true that there have been some initiatives going around in different countries, such as Italy and Spain, as you mentioned, but also at EU level, the European Commission is doing some work in terms of creating a task force of different member states to ensure that these initiatives on age assurance and age verification are being harmonized at EU level, because it is very important that we don’t create asymmetries between member states. So, yeah, I just wanted to put that out there.

Wout de Natris: Yeah, good comment. Thank you very much, because it would be strange if every country had a different age in place. In the meantime, I’m glad to say that Jaap-Henk Hoepman of Radboud University in Nijmegen has joined. So, Jaap-Henk, I’ll give you both questions at once. The first question is that in March 2024, the EU presidency introduced changes to the CSAM proposal, as a result of which providers will now have to limit their reporting to users who have been repeatedly flagged as sharing potential child sexual abuse material or attempting to solicit children. To what extent does this make the proposal more targeted, and how does this address the privacy concerns that many academics and other stakeholders have raised? And second, the proposed CSAM regulation focuses on detection of three types of CSA: known CSAM, unknown CSAM, and grooming. Can any of these be done in a privacy-preserving manner? And what are the other risks? So, Jaap-Henk, the floor is yours for six minutes.

Jaap-Henk Hoepman: Thank you, and apologies for joining late. Apparently, I misread the time zone, but I’m happy to be able to join and talk about these very important issues. Let me change the order of the questions, because I think that makes more sense. So, let’s start with the question of the difference between the methods of detecting known CSAM, unknown CSAM, and grooming. The idea is that for the detection of known CSAM, a kind of perceptual hashing or fingerprinting mechanism is used, which creates the same fingerprint for very similar images. Based on the database of known CSAM, fingerprints are generated, uploaded to the phone in blinded form, and then matched on the phone against the fingerprints derived from any picture that is being sent. There is some discussion on the number of false positives that these kinds of matching technologies produce, but in general, and like I said, the false positive rate is currently under discussion, the risk of being falsely identified as distributing CSAM is definitely not as high as for the other two things that are on the agenda for detection, namely unknown CSAM and grooming. These two both involve AI-based techniques that, even if they would perform very, very well and have a false positive rate of 0.1%, which would be really, really good for these kinds of technologies, would still mean, given the number of pictures being sent over things like WhatsApp, that a million pictures a day would be flagged as potential CSAM and would have to be reviewed by law enforcement and checked for whether they are actually CSAM or not. I think that will totally flood the system. So even from a practical perspective, it seems that this is really not practical to do. Now, the question of whether this can be done in a privacy-preserving manner: yes, in a way, in the sense that the matching could take place completely on the phone.
And for instance, there have been proposals by Apple, already before the CSAM proposal was launched in Europe, showing how you can do that in a cryptographically secure and privacy-preserving way. But that is, I would say, a bit of a red herring, because there is still detection taking place, and this detection is taking place on a very private device. So even though the checking of the pictures is being done only on your phone, it is being done on your phone, on your very private device. So even for things like detecting known CSAM, there is a very important boundary being crossed in terms of what states and law enforcement can do at the moment and what law enforcement can do in the future once this regulation is approved, because then they can actually monitor what we do on our devices. Now, coming back to the second question, to what extent the update of the proposal in March of this year, where the reporting of flagged content is changed a bit in the sense that users are only reported when they are repeatedly flagged for distributing CSAM, makes it more targeted: if we look at the proposal as it was fielded by the Belgian presidency, this is not really a change, because they were actually talking about setting limits of one or two pictures. This is not “repeatedly”; I mean, if you talk about repeatedly, you might think of, say, 10 or 100, or something like that. But one or two really doesn’t make that much of a difference, in particular because the concern we have here is that you may be sending the same kind of material repeatedly: in the case of detecting unknown CSAM, if you’re sending pictures of the skin of your child to the doctor, or sharing pool pictures with the grandparents, you’re repeatedly sending the same kind of picture.
So if one picture is flagged as being CSAM, probably all the others are also flagged as CSAM, and you will still be flagged as a potential distributor of CSAM. So this doesn’t really make the proposal more targeted, in our opinion. And for time reasons, I guess I’ll leave it at that. If there are questions from the floor, I’m happy to answer them.
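The scale problem Jaap-Henk describes can be illustrated with simple back-of-the-envelope arithmetic. The daily image volume below is an illustrative assumption, not a figure from the session; only the 0.1% false-positive rate and the resulting order of magnitude come from his remarks.

```python
# Back-of-the-envelope estimate of how many innocent images an AI-based
# detector would flag per day, given an optimistic false-positive rate.
# The daily volume is an assumption for illustration (large messengers
# are publicly reported to carry images at this order of magnitude).

def flagged_per_day(images_per_day: int, false_positive_rate: float) -> int:
    """Expected number of non-CSAM images wrongly flagged each day."""
    return round(images_per_day * false_positive_rate)

daily_images = 1_000_000_000   # assumed: one billion images per day
fpr = 0.001                    # 0.1%, "really, really good" for AI detectors

print(flagged_per_day(daily_images, fpr))  # 1000000 flagged images per day
```

Even under these generous assumptions, roughly a million images a day would need human review, which is the flooding effect described above.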

Wout de Natris: As well, perhaps, there’s one question from Andrew, saying that there are various methods to implement age verification in a privacy-preserving manner, and also that the Internet Watch Foundation has yet to experience a false positive from hash matching of known CSAM. So perhaps you would like to go into that, Jaap-Henk, and then I’ll pass the question to the others.

Jaap-Henk Hoepman: Yeah, so like I said, I recently was talking to somebody who was doing research on the false positives of matching known CSAM. I haven’t yet… so I cannot really comment on that. One thing to note in that respect: if we’re talking about detecting known CSAM, it is important to realize that this detection is very easily evaded, because of the fact that it needs to detect very similar pictures. Changes like mirroring or flipping the picture really change the fingerprint and might evade matching. So this is something that is a concern. What is also a concern is the fact that most of these fingerprint technologies are not open source and are very hard to analyze in terms of how they work. We basically have to believe the figures that are given to us by others. So this is a concern. With respect to age verification, there are privacy-preserving ways of doing age verification. In fact, at Radboud University we have a project that used to be called IRMA and is now called Yivi, where we do exactly that. You can actually prove certain attributes about yourself, like age or gender or nationality, or whatever you want, in a privacy-preserving way. This is, however, one, not a Europe-wide standard yet, and two, I think, again, the issue here is not the question of whether it is privacy-preserving or not, but whether we want age verification to be implemented mandatorily in things like social networks. Because that creates a barrier for people to enter these kinds of networks, especially if they don’t have the means to prove their age in a sufficient manner.
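The evasion point about mirroring can be demonstrated with a toy perceptual hash. The "average hash" below is a deliberately simplified stand-in for real fingerprinting schemes such as PhotoDNA, whose internals are not public; it only illustrates why a simple flip can change a similarity-based fingerprint.

```python
# Toy "average hash": threshold each pixel of a tiny grayscale image
# against the image's mean brightness. Real perceptual hashes are far
# more sophisticated and proprietary; this is only an illustration.

def average_hash(pixels):
    """Return a tuple of bits: 1 where a pixel >= the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p >= mean else 0 for p in flat)

def mirror(pixels):
    """Flip the image horizontally."""
    return [list(reversed(row)) for row in pixels]

# A 4x4 "image" with an asymmetric bright/dark pattern.
img = [
    [200,  10,  10,  10],
    [200, 200,  10,  10],
    [200, 200, 200,  10],
    [200, 200, 200, 200],
]

h1 = average_hash(img)
h2 = average_hash(mirror(img))
print(h1 == h2)  # False: a simple mirror already produces a different hash
```

Production systems tolerate small perturbations better than this toy does, but the underlying tension remains: the more robust the fingerprint is to such edits, the more unrelated images it risks matching.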

Wout de Natris: Okay, thank you very much for that clarification. Because of time, I have to move to the next question, but we’ll see if I can come to you in a moment.

Jaap-Henk Hoepman: Sure.

Wout de Natris: First, Fabiola, what is Eurochild’s view of the CSAR proposal? And in your view, while effectively fighting CSAE, can this proposal ensure the respect of all children’s rights, including privacy, freedom of expression, etc.? And because of time, I ask you to be as brief as possible. Thank you.

Fabiola Bas Palomares: I’m going to try. It’s a very big question for three minutes, but I’m going to try. So at Eurochild, we have supported this file from the beginning, especially in terms of its aim and approach. There is no question that there is an acute need for this regulation. Children are telling us that they want better regulation of the digital environment and they want companies to be more liable. The UN Convention on the Rights of the Child, and more concretely General Comment No. 25, actually puts a positive obligation on all EU member states to protect children. And on top of human rights legislation, the scale and extent of the crime, which is what I talked about in my previous answer, demands such action, especially knowing that 60% of the material in the world is hosted in the EU. So there is a responsibility to at least do something there, that’s for sure. In our view, there is room in this regulation for all the needed checks and balances to ensure that the detection and removal of child sexual abuse is done in a safe manner. And we see some advanced steps, as the European Commission proposed the EU Centre to vet the detection technology, so in theory no unsafe technology will ever be used to detect CSA, and it will also filter the reports before they are sent to law enforcement. Other safeguards around the boundaries of detection orders have been introduced by the co-legislators. And the risk-based approach adds a layer of prevention that actually leaves detection as the last-resort method. So there are some steps going forward in that direction. But we want to make very clear that to actually be able to address the scale of this crime, it is key that we have both prevention and detection going hand in hand, and a detection that is enforceable and operational at scale.
Otherwise, we’re not going to be able to tackle this crime in the manner that it requires, especially in terms of new challenges such as the growth of AI-generated CSAM, for example. More importantly, we will not be able to stop the spread of CSAM, nor will we be able to prevent the abuse from happening in the first place. We will actually fail to protect children, which was the main aim of this regulation.

Wout de Natris: Sorry, I have to stop you, because there’s one question left and then we have to sum up, so we have about seven minutes left. Sorry about that, Fabiola. The final question here is: what new technical approaches have been implemented in Lithuania for the removal of CSAM, and what impact do partnerships in the area of CSAM have? And also only three minutes.

Kristina Mikoliūnienė: Okay, so actually, we have had a webpage in Lithuania since 2011, where we collect information from online users on potentially forbidden information, and we check it. And we were a little bit unhappy not being able to act proactively in this manner. So in 2020, we participated in a GovTech project. GovTech is an initiative for public institutions to solve their challenges in more innovative ways. The challenge was how to automate illegal content detection on the internet, and the winner was Oxylabs, which created an AI tool. It was a prototype showing that, in general, it is possible to use AI tools to find CSAM on the internet. The rate of positive hits was relatively low, but it still shows that it’s possible. So, for example, the AI tool checked 288,000 webpages and found over 12,000 potentially CSAM webpages, which then had to be checked manually, and in the end we sent eight reports to the police and started two pre-trial investigations. After checking 288,000 webpages, is that a lot or not? For us, it’s a lot, because every child we save from crime on the internet is our win. Thank you.
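The funnel Kristina describes (pages crawled, pages flagged by the tool, confirmed reports) can be summarized numerically; the rates below are simply derived from the figures given in the session.

```python
# Detection funnel from the Lithuanian GovTech pilot, as reported in the
# session: an AI prototype crawled webpages, flagged candidates, and
# human review confirmed a much smaller number of actual reports.

pages_checked = 288_000
pages_flagged = 12_000
reports_to_police = 8

flag_rate = pages_flagged / pages_checked      # share of crawled pages flagged
precision = reports_to_police / pages_flagged  # confirmed reports per flag

print(f"flag rate: {flag_rate:.2%}")   # 4.17% of crawled pages flagged
print(f"precision: {precision:.3%}")   # 0.067% of flags became reports
```

The gap between the 4.17% flag rate and the 0.067% confirmation rate illustrates, on a small scale, the same review burden Jaap-Henk raised earlier: automated flagging only shifts where the human work happens.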

Wout de Natris: Thank you. That brings us to almost the end of this session. We’re going to ask to put the final question online. We’ll put the question up and people can start answering, and then I’ll go to the question from the floor. So, the final question is, and you will probably recognize it, because we’re going to see if you changed your mind during this session. So, there’s one further. Yeah. So, are you more concerned that children are not protected enough from online CSA, or that stricter online child protection will lead to mass surveillance? So, please answer the question again and then we’ll see what happens. We had a question. So, please come up and introduce yourself.

Audience: Good morning. My name is Diego. I’m from the YOUthDIG programme. And I’d like to raise a concern, actually, if I may. I can’t really see how client-side hash matching is privacy-respecting at all. The good feature these hashes have is that you can’t reverse the signature into the original content, which sounds like a desirable feature, but that also means that you can’t actually verify that it corresponds to CSAM. It could be any sort of document that a government is interested in tracking. Now, I don’t mean that this is the intention, but it makes for a very effective surveillance system, because we’re talking about an informational hazard such as CSAM; it’s not like governments can just publish the database to prove that the signatures are valid. I don’t see how that is privacy.

Wout de Natris: Okay, who would like to take this question?

Jaap-Henk Hoepman: Yes, I’m happy to answer that. I’m really glad you’re bringing it up, because this is in fact a serious concern with the proposals at hand, because the fingerprints that need to be detected are a very closely guarded secret, one of the reasons being that you don’t want others to create pictures that collide with the fingerprints. The actual fingerprints are not stored in plain text on the phone, but in a blinded manner. That means that there is no independent oversight of the kind of material that is actually being matched on the phone. So, as the person asking the question suggests, it is from a technical perspective trivial to extend the database with any kind of material that a government wants to detect, and the phone cannot do anything but oblige and also match for this kind of content. Now, the only thing that prevents this from happening is oversight by the agency responsible for maintaining this database, but it’s entirely unclear to me how this oversight is going to be done in a public enough and independent enough way. In any case, the technology providers, both the application providers and the smartphone operating system providers, can by definition not play an independent role here, because all the information is sealed even from them. This is a serious concern, because this proposal opens the door to widespread surveillance: once the technology is implemented, it can be abused at any point in time without oversight.
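The unauditability Diego and Jaap-Henk describe can be sketched in a few lines. Plain SHA-256 stands in here for the actual blinding constructions (for example, the elliptic-curve protocol in Apple's proposal); all identifiers and fingerprint strings are hypothetical. The sketch only shows why a device that holds one-way digests can match content but cannot inspect what the list contains.

```python
import hashlib

def digest(fingerprint: bytes) -> str:
    """One-way digest of a content fingerprint (stand-in for blinding)."""
    return hashlib.sha256(fingerprint).hexdigest()

# The authority ships only digests to the phone; their origin is opaque.
# All fingerprint strings below are hypothetical placeholders.
watchlist = {
    digest(b"fingerprint-of-known-csam-image-1"),
    digest(b"fingerprint-of-known-csam-image-2"),
    # Nothing technical stops a silent addition like this one:
    digest(b"fingerprint-of-banned-political-leaflet"),
}

def device_matches(local_fingerprint: bytes) -> bool:
    """What the phone can do: hash and compare. It cannot reverse the
    digests, so it cannot audit what the watchlist actually targets."""
    return digest(local_fingerprint) in watchlist

print(device_matches(b"fingerprint-of-banned-political-leaflet"))  # True
print(device_matches(b"fingerprint-of-a-family-photo"))            # False
```

The device dutifully reports a match on the leaflet even though nothing in the list identifies it as CSAM; only out-of-band oversight of the list's curation, exactly the point made above, constrains what gets matched.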

Wout de Natris: Well, thank you, Jaap-Henk. I think you’re expressing a very clear concern with the plans that are currently there. We are running out of time, so we only have three minutes left. I thank you for the question. Oh, we can have five minutes extra? Okay. Okay, so we can take one question. Torsten, you have had your hand up for a long time. And Vittorio, if you have short questions, then we can take them. Thank you. And in the meantime, please answer the Mentimeter question, because a lot fewer people have answered it so far than the first time. So please answer it.

Audience: Thanks, Wout. Hello, my name is Torsten Voss from the Digital Opportunities Foundation in Germany. I would like to address another question to Fabiola. With regard to the research you mentioned, Fabiola, what’s your perception of how children would answer this question in the Mentimeter? Are children more concerned about surveillance, or more about protecting themselves from child sexual abuse online? Thanks.

Fabiola Bas Palomares: Thank you very much, Torsten. Yes, so actually that is one of the questions that we asked children in this research that I mentioned, which is called the VOICE research. And the children were very clear. The VOICE research was based on focus group discussions, right? So we also asked them what they understood by privacy and protection before even asking them to choose one or the other. And they said that they understand privacy and safety as two very interchangeable concepts, which are both very much related to the personal information that they share. So going beyond data protection, again, being protected from abuses by platforms and other users. Once they showed this very complex understanding of the two concepts, we asked them what they would prioritize, and they said: neither, we want both. They were very clear in saying that they believe that privacy and safety can and should go hand in hand, and they are not willing to sacrifice one to the detriment of the other. And I think this resonates quite a lot with the approach that we have taken in terms of how to balance children’s rights when we were talking about putting slightly more intrusive measures in place to protect them from child sexual abuse. That includes their best interests, but also assessing the proportionality of these measures to the risk of child sexual abuse, and to the extent to which child sexual abuse may limit the exercise of the rest of their rights, including the right to privacy. Because, and I will end with this, child sexual abuse is a major violation of the child’s rights to privacy, not only to protection. I hope that answers the question, Torsten.

Wout de Natris: Torsten is nodding yes, I can tell you. So yes, it has. Thank you for that. We have seen the time; unfortunately, we don’t have time for another question. Andrew, please do comment in the next session. I’ll see your comments there, but there is no time, sorry. So I invite Francesco to give the first ideas of what the messages of this session could be. So Francesco, please introduce yourself, and then we go on.

Moderator : Well, thank you very much, first of all, for the really insightful, rich panel and the really civil discussion. I was there last year when the discussion on the same topic was much more intense, and actually this time everyone was sincerely respectful, so it was really a pleasure to hear. I’m from Czechoslovakia. I’m here just to be the reporter, so I will try to be as quick as possible. We gave an overview of the legal provisions on this kind of problem and tried to show also the differences between, for example, the Lithuanian national-based approach and other, more overseas kinds of approaches, like the U.S. Actually, what has been quite interesting is that the major concerns about these kinds of proposals, at least as far as I understood from the room, are the classic ones: client-side scanning, age verification, insufficient protection, privacy, loss of anonymity, encryption circumvention, and bad implementation. But fun fact: age verification is also one of the major benefits that was actually mentioned, and this is probably one of the first times I actually see something like this happen. Among the benefits, of course, there are child protection, stopping the spread of CSA, I hope so, child empowerment, safety by design, liability of big tech, and lawful interception. Generally speaking, I will also take a couple of other sentences that have been quoted during the session. The first one is that, of course, there are some serious concerns about the fingerprint kind of technology, because once this technology is implemented it can always be abused, especially if we lack monitoring of these kinds of technologies. And the second one is that privacy and safety are actually interconnected principles, not conflicting ones, and we should try to find a balance between them. And of course, CSA is a major abuse of children’s right to privacy.
If everyone agrees with this overview of the concerns and the tensions in the room, I will try to draft the report at the end of these couple of days, but all of you can actually comment on it in order to edit and reach the final draft in a week or two afterwards. Are you fine with that? Objections?

Wout de Natris: Once, twice... no objections, then.

Moderator : Great. Thank you very much and see you in the next session.

Wout de Natris: Thank you very much, Francesco.

Moderator : A round of applause for the panel.

Wout de Natris: It isn’t formally closed yet, of course. I want to thank the people on the panel for their insightful information; we have clearly seen that there are two sides to this discussion, as Francesco explained. But also the people who organized it from the programme committee, with Desara as one of them, and the others, one of whom is at the airport and not in the room. Thank you very much for bringing the topic up. I think we had a very good session and the work was very much worthwhile. Thank you for your attention and for your good questions. But can we finally see the differences? Because we have the difference between question one and question four. So what happened during this session? When we started, we had six, eight, and nine. And not everybody answered the final question. But things changed, maybe because of that. “Both” is now higher than it was compared to the other options. So maybe that is because not everybody replied the second time, but on the other hand, maybe people changed their view because of the session. And that makes it very much worthwhile. Thank you for all the work in the background. And applause for the ladies. See you in the next session.