Proposal for a regulation laying down rules to prevent and combat child sexual abuse – WS 05 2023


21 June 2023 | 10:30 - 11:30 EEST | Auditorium A1 | Video recording | Transcript
Consolidated programme 2023 overview / Workshop 5

Proposals: #39

You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published on the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.

Kindly note that it may take a while until the Org Team is formed and starts working.

To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.

Session teaser

The European Commission has proposed a “Regulation laying down rules to prevent and combat child sexual abuse” which provides a uniform approach to detecting and reporting child sexual abuse and imposes obligations on online service providers to screen private communications for this purpose. Can the regulation ensure both privacy and child protection online?

Session description

Every child has the right to be protected from all forms of violence, and this includes the online environment. Child sexual abuse online can take many forms, including grooming for sexual purposes and child sexual abuse material (CSAM). In grooming, the adult’s goal is to lure the child into a situation where they can subject the child to sexual abuse, either online, offline, or in both environments. CSAM depicts sex offences against children of any age and gender. A large portion of CSAM is hosted in the European Union.

Against this backdrop, in May 2022, the European Commission proposed a “Regulation laying down rules to prevent and combat child sexual abuse” which provides a uniform approach to detecting and reporting child sexual abuse. The proposal imposes obligations on online service providers to screen private communications for this purpose.

The proposal has been criticized for including measures which put the vital integrity of secure communications at risk and open the door to exploitation for political control and censorship. Another criticism has been that it omits to allow the voluntary detection measures used to protect children from sexual abuse, which are currently covered by the temporary derogation (EU) 2021/1232. This workshop aims to examine different perspectives on the proposal, on privacy, and on child protection online.

Format

The workshop will be participatory and we encourage all attendees to engage in the conversation. There will be some key participants, who will each have 3 minutes to present their arguments. There will be a moderator on site to help ensure a safe and successful session. There will also be an online moderator present to facilitate the participation of online attendees.

Follow this link to submit your questions for the workshop. The Org Team will then choose which ones to discuss during the session. Please be aware that questions arising during the workshop, in the Zoom chat or from in-person participants, may be prioritised.

Join at menti.com and use the code 7658 1874

Further reading

People

Please provide name and institution for all people you list here.

SME

  • Desara Dushi

The Subject Matter Experts (SME) support the programme planning process throughout the year and work closely with the Secretariat. They give advice on the topics that correspond to their expertise, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and monitor the complete programme to avoid repetition.

Focal Points

  • Eveliina Karhu, Save the Children Finland
  • Tanja Simola, Save the Children Finland
  • Lucia Hakala, Save the Children Finland

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Subject Matter Expert (SME) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.

Organising Team (Org Team)

List Org Team members here as they sign up.

  • Vittorio Bertola
  • Michael Tunks
  • Monika Ermert
  • Jutta Croll
  • Torsten Krause
  • Andrew Campling
  • Callum Voge
  • Amy Crocker

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

Key Participants

  • Michael Tunks, Head of Policy and Public Affairs at Internet Watch Foundation (on-site)
Michael Tunks is the Head of Policy and Public Affairs at the Internet Watch Foundation (IWF). He joined the IWF in September 2017 and since then has been actively involved in shaping policy and legislation in the UK and European Union on how to regulate the internet and respond to the threat of child sexual abuse and exploitation online. He secured core participant status for the IWF in the UK Government-established Independent Inquiry into Child Sexual Abuse and its investigation into the Internet, and has helped to shape and respond to the UK Government’s Online Safety Bill, which is currently before Parliament. In Europe, Michael is an observer to the Council of Europe’s Lanzarote Committee and has also been involved in establishing the European Child Sexual Abuse Legislation Action Group (ECLAG), along with Thorn, Missing Children Europe, Terre des Hommes, Brave Movement and ECPAT International. The aim of the group is to coordinate advocacy among almost 60 child protection organisations around the proposal from the European Commission laying down new rules to prevent and combat child sexual abuse. Michael is also a regular attendee of the United Nations Internet Governance Forum and is a Committee Member for the UK’s Internet Governance Forum.
  • Torsten Krause, Project Consultant "Child Protection and Children’s Rights in the Digital World" at Digital Opportunities Foundation Germany (online)
Torsten Krause is Project Consultant "Child Protection and Children’s Rights in the Digital World" at the Digital Opportunities Foundation Germany. As a political scientist and child rights researcher, he is concerned with the conditions under which young people grow up, currently focusing on contexts involving (digital) media. He would like to contribute to children and young people being able to use digital media independently, safely and with pleasure. From 2020 to 2023, he chaired the expert group "Children's Rights in the Digital World" based at the German Children's Fund, coordinated the joint statement by German civil society organizations on the proposal for General Comment No. 25 of the UN CRC, and also worked in the group that translated General Comment No. 25 into German.
  • Dr. Desara Dushi, Postdoctoral Researcher at Vrije Universiteit Brussel (online)
Dr. Desara Dushi is a postdoctoral researcher at the Law, Science, Technology & Society group (LSTS) of Vrije Universiteit Brussel (VUB). An expert at the intersection of law and technology, she is currently involved in the COHUBICOL and ALTEP-DP projects, focusing on the implications of legal technologies in the rule of law and fundamental rights and automation of GDPR compliance. Her research interests include data protection, cybercrime, protection of children from online sexual abuse, and other AI related topics. Dr. Dushi has a double PhD degree in Law, Science and Technology from University of Bologna and University of Luxembourg.
  • Kimmo Ulkuniemi, Chief Superintendent in the National Police Board of Finland (on-site)
Kimmo Ulkuniemi is a law enforcement professional with a career spanning over 30 years, encompassing both national and international domains. Currently serving as Chief Superintendent in the National Police Board of Finland, he takes charge of strategic planning, direction, development, and supervision of police operations aimed at combatting cybercrime and related offences. Prior to this role, Kimmo served as Assistant Director at the Interpol Global Complex for Innovation, further augmenting his expertise in tackling global security challenges.

Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder balanced dialogue also considering gender and geographical balance.

Moderator

The moderator is the facilitator of the session at the event. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers.

  • Eveliina Karhu, Save the Children Finland. (on-site)
Eveliina Karhu is an Advisor at Save the Children Finland. She is also an analyst at the organisation's hotline, the Finnish Hotline Nettivihje.

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

Rapporteur: Org Team

  1. With regard to the EU Commission Proposal for a Regulation to Prevent and Combat Child Sexual Abuse, the panel, composed of different stakeholders, agreed that something needs to be done to better protect children online, given data showing that 59% of the CSAM removed from the Internet is hosted in the EU, with the severity and proliferation of these images and videos growing year on year.
  2. Risk assessment and mitigation are crucial! Digital service providers have a responsibility to create and provide safe and reliable services for all users. To protect children, regulation should make the detection and removal of such communications and depictions on the Internet compulsory. Diverse views were shared on how Safety by Design can be harnessed, including encryption, as a way to offer children safe online services. Concerns were also raised about how companies could be doing more to detect CSAM within end-to-end encrypted environments. In addition, media literacy education for children and parents is recommended.
  3. Privacy concerns should be taken seriously. More research and development of reliable technologies to avoid large numbers of false positives are required. And care must also be taken to avoid technologies being repurposed for means other than their intended use – to detect child sexual abuse – by less democratic regimes. For these purposes a strong and independent EU Centre is also recommended.

Video record

https://youtu.be/qcCVpT0RHoY

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> EVELIINA KARHU: Hello, everyone. Welcome to Workshop 5, proposal for regulation laying down rules to prevent and combat child sexual abuse.

>> AUDIENCE MEMBER: Do you have a microphone?

>> EVELIINA KARHU: We only have the microphone for the online, not the room.

We will be discussing the EU proposal. My name is Eveliina Karhu. I’m an advisor at Save the Children Finland. I will be the onsite moderator.

Thank you, first of all, to the hosts and to the organizing team for making this happen. We have a great panel of key participants here, some of them online and two of them here in Tampere.

Just to let you know the format of this workshop is going to be conversational. So at any point you can interject, raise your hand, ask a question or comment.

There’s also an option to comment or ask questions anonymously through Mentimeter. So you can use that if you prefer to ask an anonymous question.

Now we will start the session by asking all the key participants one question. So the same question to all of them and then we will move on from there. Now, before we start the conversation itself, the key participants are: Kimmo Ulkuniemi, and Kimmo is here on site in Tampere. And we have Michael Tunks who is with the Internet Watch Foundation. Michael is also here.

We have Dr. Desara Dushi who is the postdoctoral researcher at Vrije Universiteit. And we have Torsten Krause, who is also online.

And now to get started, I will ask our participants the first question, what is your key takeaway from the EU commission’s proposal? Kimmo, would you like to start?

>> KIMMO ULKUNIEMI: Thank you, and good morning from my side as well.

I’m glad to be here to bring the law enforcement perspective to this issue.

Before I came here, I had a discussion with my Finnish colleagues about this new proposal for the regulation. The views on the regulation are not the same in law enforcement, civil society, NGOs and so on.

Personally, I think it’s a good proposal for new legislation. Something has to be done to protect children from sexual abuse online, and also offline. And when I first read the regulation, and also the impact assessment, my first thought was that it was about time to do something.

We have been debating this issue over the years. No one has been able to provide any solution to tackle this problem. So this proposal for a regulation is a completely new approach to solving these problems. And it is more than detection – I’m sure that when we discuss the regulation, we will discuss the detection orders a lot, and how they relate to cybersecurity overall.

We should not focus only on the detection orders. The proposal includes other measures for governments and the private sector, and also for civil society and NGOs.

And as I said earlier, before this panel: I love discussion, I love debate about problems. Earlier, when these regulations were discussed, I often said that this is such a difficult problem that we should not discuss it – maybe sometimes it’s better not to do anything than to try to find solutions for the problems. So instead of saying that it’s not possible or that the regulation is bad, we should also have solutions.

So what I would like to hear from civil society and other people participating in the discussion is: if the detection orders and this proposal are bad, what would you propose instead for protecting children online?

I will stop here.

>> EVELIINA KARHU: Thank you, Kimmo.

And next I will pose the same question to you, Michael. So what is your key takeaway from the EU proposal.

>> MICHAEL TUNKS: Thank you. Like Kimmo, I think this proposal is really important and much needed, and there are some significant issues that we have to address. I want to start off by setting out the scene as we currently see it, in relation to some data that the Internet Watch Foundation released in April of this year about the amount of child sexual abuse material we removed from the Internet last year.

59% of the content we removed in the last year was hosted in EU Member States. So Europe is the global leader, if you like, in the distribution, or certainly the hosting, of child sexual abuse material, from what we see. We have seen a doubling in the most severe forms of child sexual abuse online in the last two years. We’ve seen that self-generated indecent images of children are becoming much more common, now accounting for two-thirds of the content that we removed from the Internet, and that this mainly affects girls in the 11 to 13-year-old age range. And we are seeing that get younger in the last few years as well. So it’s now affecting 7 to 10-year-olds as well.

And I think these are quite shocking statistics in terms of where we are at the moment and when we explore the impacts of the COVID-19 pandemic and all of us spending much more time online and, of course, as more content gets uploaded online, we will have more of these issues as well.

Coming on to the regulation itself, I think there are a few important things that we need to resolve. Key to that is that currently companies are able to detect child sexual abuse material because they clearly have the certainty to do so, given the ePrivacy Directive and the Electronic Communications Code; prior to the recent changes, before email and messaging services came into the scope of the ePrivacy Directive, they always had the certainty to do that.

I think the first point to make is that companies have always been able to do this, so we’re not talking about something new. Companies have been able to detect child sexual abuse material, and it’s our view that that should continue. I think it’s really, really important: we’re coming up against a hard deadline, with the temporary derogation ending next year, and we have to have a resolution before that comes to an end. We saw the impact when we didn’t have clear certainty for the companies to be able to detect: reporting dropped by 50 to 60% in the six months when we didn’t have this certainty. That’s sexual abuse material that we know is happening but was not being reported within that period.

And I don’t think that’s the situation we want to be in. As I set out in the statistics, Europe is the global leader in hosting, and I think it’s incumbent upon all of us to take steps to better protect children online. I look forward to being able to discuss that with others later today.

>> EVELIINA KARHU: Perfect. Thank you, Michael. We definitely see the same developments at Save the Children when it comes to child sexual abuse material.

Moving to the online participants: Desara, would you like to answer the same question? What is your key takeaway from the EU Commission’s proposal?

>> DESARA DUSHI: Hello, everyone, and thank you for this very interesting panel. I’m really happy to be able to give my thoughts about this. When I first heard about this new proposal, my first reaction was the same as Kimmo’s: it was about time.

I had been calling for new regulation in my Ph.D. thesis, which I finished in 2019. So a long time has passed since then – long when we think in terms of the numbers that Michael just mentioned, of how many children get abused every day.

Then, when I read the text, I found many flaws. The main goal of the current proposal – at least the one publicly announced; I don’t want to speculate about a hidden goal of mass surveillance for all sorts of purposes – is the protection of children from sexual abuse with an online element. I think all of us want this goal to be achieved as much as possible – everyone except those who have a sexual interest in children, of course.

Now the question is, how can it be achieved? And how does the EU Commission propose to achieve it? There are three big changes to the current state of affairs proposed in this draft regulation. The first is moving from voluntary to mandatory detection by the Internet providers, and this change seeks to provide legal certainty to providers as to their responsibilities to assess and mitigate risks, and to detect, report and also remove online child sexual abuse material, in a way that is in line with fundamental rights and EU law.

The current regime already has a clear end point, as Michael mentioned: it expires next year, because it has been functioning on the basis of a temporary derogation. So there was obviously the need to think about what we are going to do afterwards, because we do not want all detection to simply stop happening and leave children completely unprotected.

The current proposal is supposed to enter into force at the moment when this temporary derogation stops applying. Providers would then have to wait to receive a detection order before acting, which they would only get upon a failed risk assessment; this is because, under the ePrivacy Directive, they require an explicit legal basis to process communications or metadata. In practice, this could mean that all the detection that is currently working would come to an end. Or, to avoid mandatory detection orders, companies may over-filter at the risk mitigation stage.

If they fail to comply, there are high penalties.

So we are faced with two situations: either detection will stop completely, or Internet service providers will overreach, scan everything and over-produce a lot of reports, most of which will be false positives, for example, and this will also cause a lot of problems for the work of law enforcement and Europol more specifically.

And second, today the Internet service providers are working on the detection of known child sexual abuse material, whereas the Commission would also oblige them to detect new material and grooming. And that becomes problematic.

And third, there is the EU Centre, which, as I see it, will serve as a one-stop shop: a monitoring mechanism, assisting with new technology, providing guidelines and much more. This Centre is empowered to produce a list of indicators, and the challenge is that the list of indicators to be gathered, so that providers can lawfully carry out the obligations imposed on them, has to be aligned with those held by NCMEC, which is based in the U.S. and has a large database of child sexual abuse material, currently the main database used for identification of online material. And in the current text of the proposal, I don’t see anything explaining how this new EU Centre will cope with the two databases – how they will be able to merge with each other, how the Centre will collaborate with the one based in the US, and how they will make sure there is no overlap in the work and processes.

And how will an Internet service provider decide whether to report to the EU Centre or to the one in the US? We know that this is an online crime, so we don’t have that many cases where the US is not involved, for example; it’s not only happening in the EU. So we need some type of beyond-EU collaboration, and it is not really clear from the current text how that will happen.

And the role of the hotlines vis-à-vis the EU Centre needs to be clarified, because I don’t see anything explaining how the EU Centre will work with the hotlines, and the hotlines have an important role in combatting child sexual abuse online. For years they have worked to remove content from the Internet.

So the current proposal really needs to be amended in order to fix this. And at a conference that took place one week ago in Brussels, there was somebody from the hotlines here in the EU who said they were not consulted during the drafting process of this new regulation, and I see this as problematic.

Now, to conclude: while the intentions are good, the implementation of this has many implications, starting from the technology, and I wonder whether the drafters of the proposal consulted tech experts, fundamental rights experts and child rights experts. We see that everyone is concerned, and I have seen many of these experts complaining that they were not consulted at all.

Thank you.

>> EVELIINA KARHU: Thank you very much, Desara.

And then also to Torsten, would you like to answer the same question? What is your key takeaway from this EU proposal?

>> TORSTEN KRAUSE: Warm greetings to Tampere. My key takeaway is the risk mitigation. The Internet and digital services were established and created in a way where nobody had children as users in mind, but children are one-third of the users of the Internet, and they use services which are not designed for them. Providers declare their services to be for users older than 13 or 16 years, but nobody checks or takes mitigation measures to make sure that the users are old enough for the services. And the solution should not be to exclude children from the services: they have the right to access media, as declared in the UN Convention on the Rights of the Child and explained in General Comment No. 25. I think we should find solutions so that children can take part in the services, but providers are responsible for creating services that are also safe for children, and therefore I appreciate that providers should be committed to doing risk assessments and finding solutions to create safe services for children and all the other users.

And, yeah, therefore, I appreciate this proposal and this is my key takeaway. Thanks.

>> EVELIINA KARHU: Thank you very much, Torsten.

Now to the audience, either here or online – yes, please.

Would you like to come to the front to ask your question? For example, over here.

Thank you.

>> AUDIENCE MEMBER: I was about to say, I’m also something of a mathematician by training, and I’m looking at the numbers in most published reports, also from the Commission – I’m always looking for numbers. In particular for this detection of new, unseen material, the published number here is the precision rate, which is all but useless. It tells you, of the material that has been flagged, how much was actually correct. What they do not indicate is the false positive rate.

Let’s say, while they give numbers here that the precision rate is above 90%, maybe 99.9%, that the accuracy against false positives is 99.999%. What does that mean?

Even then – given any message, the likelihood of it being wrongly flagged would be 1 in 100,000; that’s a random number I pulled out of my hat. If that’s the case, let’s do some math. How many messages does an individual send every day? Let’s look at how people chat every day: using 100 messages a day, over a year the probability that you will be wrongly flagged is about 31%. That means we will be getting an enormous number of false positives here. And we have about 500 million people in the EU, so you end up getting on the order of 170 million false positives every year. No one will have time to research all of this.

So while this number is not in any of this research, it’s critical for this to work. In this kind of screening, where you are looking for something with a very low base rate – most people are not doing this – even a very small false positive rate accumulates, and it will be a disaster.

And if we are getting that kind of false positive rate – if millions of people end up wrongly accused – first, it’s not nice to be wrongly accused of or investigated for abuse. Even when you are cleared, the process is bad. And worse, if that happens a lot, people stop caring. They think it must be a false alarm anyway. This is a cry-wolf situation.

So when somebody is again accused of child abuse, people will say it’s probably not actually abuse, and that is worse than the situation before. However small you get the false positive rate, if it’s significant at all, it will accumulate to ridiculous numbers.

So I would very much want this number to come out and be calculated, because the precision rate that they advertise does not tell us anything useful.
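
[The kind of back-of-the-envelope calculation described above can be reproduced as follows. This is a minimal illustrative sketch in Python using the speaker's assumed figures (a 1-in-100,000 per-message false positive rate, 100 messages per person per day, roughly 500 million people in the EU); none of these values come from the proposal or from any measured system.]

    # Accumulated false positives under the speaker's illustrative assumptions.
    p_fp = 1e-5               # assumed per-message false positive probability (1 in 100,000)
    msgs_per_day = 100        # assumed messages sent per person per day
    days = 365
    population = 500_000_000  # rough EU population

    msgs_per_year = msgs_per_day * days

    # Probability that a given person is wrongly flagged at least once in a year.
    p_person_flagged = 1 - (1 - p_fp) ** msgs_per_year
    print(f"chance of at least one wrong flag per person per year: {p_person_flagged:.0%}")  # ~31%

    # Expected number of falsely flagged messages across the EU per year
    # (same order of magnitude as the 170 million mentioned above).
    expected_false_positives = population * msgs_per_year * p_fp
    print(f"expected false positives per year: {expected_false_positives:,.0f}")  # ~183 million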

That was my first takeaway. I have some more.

(Applause).

>> EVELIINA KARHU: Thank you. Thank you.

Would one of the key participants like to start?

>> KIMMO ULKUNIEMI: Well, just a quick, quick comment.

Like Michael said before, companies are doing this already – detecting child abuse material and then sharing information with contact points. Yes, I do agree that there are challenges. And that’s why the regulation also stipulates that the technology and the detection have to be built in a way that also ensures privacy and decreases the possibility of false positives.

And then what you also have to understand – you said that people would be wrongly blamed – is that the Internet service providers should have enough resources to go through what has been detected and weed out what should not be reported to law enforcement. It’s up to the Internet service providers, the private sector, to build their processes and systems in a way that there is human involvement.

There are not enough resources.

>> AUDIENCE MEMBER: Do the math.

>> KIMMO ULKUNIEMI: Yeah, I’m really bad with that. But I trust your math is correct.

>> AUDIENCE MEMBER: So what would be –

>> Sir, could you introduce yourself and your affiliation.

>> AUDIENCE MEMBER: I have a follow-on. Let’s not talk about the math. How many cases do you think the authorities are able to handle – a million, or some ballpark number? Do the math the other way and you can sort of figure out what precision rate you need the technology to have. I’m sure there will be a large gap between what can be handled in terms of resources and the volume of messages. This is a very hard problem and it might well be unsolvable, and I think we need to keep that in mind – if it was easy, we would have had a solution long ago.

Looking at the other direction, what resources do you think are available? I’m sorry I jumped in.

>> EVELIINA KARHU: I’m just going to repeat it so that the online audience also hears. This was a question about law enforcement resources and how they would match to the numbers that we discussed; is that correct?

>> AUDIENCE MEMBER: Yes.

>> EVELIINA KARHU: Yes. Questions from the audience.

>> DESARA DUSHI: If I may add something to the first question.

>> EVELIINA KARHU: Yes. Absolutely.

>> DESARA DUSHI: Thank you very much for raising the first question. It’s a really important one, and it is one of my concerns as well. And just a clarification: the detection of grooming will only be done in communications between a child and an adult, and this is already problematic from this perspective: how can we detect and verify the age of each user? Do we trust the self-declared age of the user? We all have to register an age, and we know it’s simple to write whatever age; we also know from research that people with a sexual interest in children who use the Internet for those purposes usually do not declare their real age, and most of the time they declare themselves as being children. So they won’t fall under this sort of scanning, because – let’s say I’m a 30-year-old and I declare myself as a 15-year-old and I start communicating with a 15-year-old. It looks like a normal communication between children, and nobody will detect that there’s grooming happening there.

This is a problem in itself, with all of these fake profiles. And the second point is that, as the first speaker already mentioned, there is no evidence that the technology actually works. There will be many false positives, and all of these false positives will hinder the work of law enforcement and Europol by overloading them, and then the real cases will be left unsolved.

There are already cases where Google flagged some parents – now, this is not grooming, this is about images – they flagged these parents as being sexual abusers of children because they had images of their children’s genitals, but they had those images because they sent them to the doctor, because the child had some problem. They were cleared of the charges, and Google still blocked that person’s access.

Still, Google decided – made the final decision, I mean. How much power do we give to the companies to decide whom we consider sexual abusers? And what will happen in cases of teenagers communicating with each other and sending self-generated images? A 17-year-old is considered a child under the law, but an 18-year-old is considered an adult. They might be in a relationship, and what will happen in that case? Will the 18-year-old be flagged as a potential child sexual abuser just because he or she is 18? These are problems, and the current text doesn’t describe how we solve them.

>> EVELIINA KARHU: We have a list here growing of all the comments and questions. Just to give the law enforcement perspective, so Kimmo, the opportunity to reflect on what was said earlier, do you have a comment on the questions?

>> KIMMO ULKUNIEMI: Sure, yes. That was an excellent point: does law enforcement have enough resources to investigate all the cases? If you ask any law enforcement agency whether they have enough resources, for sure you know that the reply would be no – definitely we don’t have enough resources – but that’s maybe a different discussion.

But what I think is important is that if we in law enforcement receive information that there might be sexual abuse going on, we hopefully have enough resources to quickly analyze that information and at least stop this offender from finding new victims. There was a case in Finland – two cases in two years – with more than 200 victims in one case, most of the victims reached via Snapchat, for example. Those predators are trying to find new victims all the time. If we can get the information that this is going on now, hopefully we can stop that; the investigation will then take time, that’s for sure.

Thank you.

>> EVELIINA KARHU: Thank you very much. And now I believe you had a question, sir. Please, would you like to come to the front?

>> AUDIENCE MEMBER: (The speaker is too far from the mic to hear.)

>> EVELIINA KARHU: Thank you. I believe Jutta Croll, online, had a question or a comment. Would you like to interject now?

>> JUTTA CROLL: Yes, definitely. Thank you so much for giving me the floor. I just wanted to comment on the first speaker from the audience. Of course, false positives are an issue, but do you really think that we could weigh a child that is abused versus another person being falsely accused? I do think that is a very, very difficult situation, and we, of course, cannot leave that question only to technology.

We would need human resources to solve these issues and therefore we really appreciate that the European center will also be commissioned to work towards those decisions and, of course, reduce the number of false positives.

I would also refer to the question of law enforcement resources, and I would recommend that we do not close our eyes to these issues and problems just because we assume law enforcement cannot handle the huge number of cases. We have to keep in mind that we are not only talking about people with a paedophile orientation; there are many, many cases driven purely by a commercial interest in producing income out of child sexual abuse.

Final comment: I think it was Desara who commented on the Google case, and I really do think that was a problem, but of course the artificial intelligence had detected the images correctly. These were not false positives, because the images were of a child’s genitals. The problem was that there were no rules or regulations for Google on how to handle such cases. We need rules for the companies on how to act when the system, the algorithm, says: here we have an image that shows a child’s genitals. And with regard to age verification, I would appreciate it if Torsten could speak about that. We are working in this area, so please give the floor to Torsten to explain a little how age verification could also help to solve these issues.

Thank you so much for listening.

>> EVELIINA KARHU: Thank you, Jutta. We can jump to Torsten next. Torsten, would you like to comment especially on age verification.

>> TORSTEN KRAUSE: Yes, thank you very much. I would like to add that, from my perspective, detection orders should be a last resort. Before that, the proposal lays down a lot of steps for how to solve the problem and how to mitigate risks, and I see the detection order as a kind of backstop that pushes providers to find solutions and measures for how to create safe services for children. The point is that we want to prevent child sexual abuse, not look afterwards at what should be done once it has happened. We want to prevent it.

On the age verification issue: first, I would say it is laid down in the proposal as one option, that age verification could help to solve the problem, and, yes, it is also hard to find a solution among the numerous ways to verify the age of users.

I think that we should try to find a solution to verify the age of all users, not only of children – not only checking that children are old enough to use the service, but all users. And we have to find a solution that works without giving your name, without a picture, a passport or a driver’s licence, without verification via a bank account or something like that; we have to find ways in which children, too, are able to verify their age. And even if we then know not the precise age of the users but an age range – maybe 10 to 12, or 16 to 18, or 18 plus – we can find solutions to mitigate the risk.

For example, we could say that people older than 18 should not be able to contact users younger than 13 on a service, or something like that. That should be discussed, but the key for all – well, not for all, but for most – of the mitigation measures is to know the age of the users.
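
[A minimal sketch of what such an age-band contact rule could look like in practice. The age bands, the 18/13 thresholds and all names below simply follow the example given above and are illustrative assumptions, not anything specified in the proposal.]

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AgeBand:
        low: int             # lowest possible age in the verified range
        high: Optional[int]  # highest possible age, None for open-ended (e.g. 18+)

        def may_be_adult(self) -> bool:
            # The user could be 18 or older.
            return self.high is None or self.high >= 18

        def may_be_under(self, limit: int) -> bool:
            # The user could be younger than `limit`.
            return self.low < limit

    def contact_allowed(sender: AgeBand, recipient: AgeBand) -> bool:
        # Block contact when the sender may be an adult and the recipient
        # may be under 13, as in the example above; allow everything else.
        return not (sender.may_be_adult() and recipient.may_be_under(13))

    print(contact_allowed(AgeBand(18, None), AgeBand(10, 12)))  # False: 18+ cannot contact 10-12
    print(contact_allowed(AgeBand(16, 17), AgeBand(10, 12)))    # True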

>> EVELIINA KARHU: Thank you very much. And Michael, did you have something you wanted to comment?

>> MICHAEL TUNKS: Yes, I think there are a lot of different themes that have come up here, but I will start with the precision rate. I think that’s important. When we were looking at measures like PhotoDNA previously, these were the same things we came up against. I think it’s about setting challenges to companies to be better at detecting this.

And I think when we look back at the proposal as a whole, it’s about risk assessment, right? It’s what steps can they take to prevent their platform from being exploited and used and what possible mitigations could you put in place.

You could also, you know, on the detection of new imagery – that’s something where this proposal raises the possibility of stacking classifiers on top of each other. Could you detect nudity, for example, and then stack some age verification technology around that as well? So I think it’s about exploring what is possible. We should be giving companies space to innovate on this problem and giving them certainty, and asking how, as a society, we get better – through law enforcement investigations and focusing law enforcement efforts – but also setting challenges to the companies as well, and asking what role we have in that.
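
[A minimal sketch of the "stacking classifiers" idea mentioned above: an image is only escalated for human review when two independent, hypothetical classifiers both exceed a threshold. The classifier names, thresholds and scores here are illustrative assumptions, not a description of any real detection product or of the system envisaged by the proposal.]

    NUDITY_THRESHOLD = 0.9   # assumed operating point for a nudity classifier
    MINOR_THRESHOLD = 0.9    # assumed operating point for an age-estimation classifier

    def needs_human_review(nudity_score: float, minor_score: float) -> bool:
        # Stacking: both classifiers must agree before an image is queued for
        # human review, which lowers the combined false positive rate compared
        # with relying on either classifier alone.
        return nudity_score >= NUDITY_THRESHOLD and minor_score >= MINOR_THRESHOLD

    # Example with hypothetical scores produced upstream by the two models:
    print(needs_human_review(nudity_score=0.95, minor_score=0.97))  # True  -> queue for review
    print(needs_human_review(nudity_score=0.95, minor_score=0.20))  # False -> no action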

On the legal opinions that were mentioned: I think the same accusation of a lack of consultation with civil society and the tech sector that can be levelled at the Commission applies to some of those other opinions as well – they didn’t consult widely enough either. They didn’t take the time to consider that.

I think when you look at the EDPB opinion, it looked very much at the current state of European law. But we do have a temporary derogation from some of these laws, and there was the situation before, when companies were allowed to detect child sexual abuse imagery, and I think there were some bits missing from some of those opinions that we need to address.

On the societal problem, I completely agree that it’s a societal problem, and, you know, I would love to see more investment in prevention, in making sure that these images are not there in the first place. I think that’s a vital part of the solution. But I also think that we should not be turning a blind eye to the fact that this is happening on technology platforms as well.

On encryption, I can see in the chat that a paper has been posted, by Ian Levy and his co-author, two of the most highly respected cryptographers in the UK. There are a number of things that companies are not doing in terms of encrypted services that they could be doing.

We are already doing this for things like malware, phishing and other forms of content. So why shouldn’t we be doing it for child sexual abuse material as well? It has a huge impact on victims, and I think anything that we can do to stop this content from circulating online, we should be exploring. It cannot be right that we allow child sexual abuse material to circulate freely on the Internet.

>> EVELIINA KARHU: Thank you, Michael. Now we had a question or comment on the audience?

>> AUDIENCE MEMBER: (Inaudible).

>> EVELIINA KARHU: Next, we had Andrew, you did lower your hand. Did you want to comment on something online? Andrew? Oh, you are here. Perfect. You confused me by both being online and here.

>> AUDIENCE MEMBER: I will say it from the front. I’m conscious there are people online. So they can’t hear us very well.

Just a couple of comments where people have highlighted the difficulties, I would respectfully suggest that that’s not a reason to not act. You know, direct your intellect to solving the problems rather than saying there are problems because if we simply focus on the problems, we will never do anything and that’s ridiculous.

I just want to emphasize something that Mike said at the start, which is that roughly two-thirds of the known child sexual abuse imagery is currently stored in Europe. To suggest that we should do nothing about that is totally unacceptable. These are all crimes, and to ignore them is just not appropriate. It is possible to act: I would point out that in the UK, where a lot of imagery used to be stored, it’s now down to 5% of the imagery. So action can be taken and is provably effective.

And then on the false positives issue: again, if I look at how it’s done – I should say, how the IWF does it – all suspected illegal images are assessed by a human, and then the assessment is quality checked by another human. So there should not be any false positives by the time a report reaches the law enforcement agencies to bring charges.

If you follow that process, then you should have zero false positives. And then finally – I will put it in the chat, in case people in the room are not looking at it – when people raise questions about age verification: I know there are online services which do effective age verification; one that I’m aware of is OnlyFans, so maybe we should look at the services that have implemented it and see whether their approach can be replicated.

And very lastly, a lot of comments hint at privacy concerns. I would respectfully point out that privacy, as I said in the discussion earlier, is a qualified right. Some of the children’s rights that are impaired here are absolute rights. So if you have to trade one right against the other, then in law privacy loses. So you can’t focus just on the privacy implications. If you look at human rights, there’s more than one, and privacy is qualified, not absolute.

>> EVELIINA KARHU: Thank you very much, Andrew.

Next we had a question from the yellow jacket.

Would you like to come to the front.

>> AUDIENCE MEMBER: (Speaker is too far from the mic for captioner to hear).

And not all of these things will necessarily be that easy to detect. So in the end, if perpetrators are aware of the surveillance that is being placed on them, they will try to find ways around it – while most of the people being placed under surveillance are not likely to be perpetrating these actions at all.

And the second point – a slippery slope, you can always say that, but there are two specific risks that I want to highlight. The first one: I had a look at how this has developed over time in less democratic states, which have introduced surveillance or censorship measures and oftentimes justify them by pointing to what they claim are similar measures in western democratic countries. Specifically, the German law – yes, exactly – the one which is very clearly aimed against hate speech is what they cite as the basis for their own censorship laws, which are far more far-reaching. And if the EU introduces something on the Internet that is this intrusive, they will use it to justify their own activities and their use against citizens and civil rights activists.

And another thing: the EU is a democratic institution, but that does not always need to be the case for all Member States. There are currently attempts to undermine human rights, such as the autonomy of women, in various countries, and when we have such an intrusive law that forces companies to break encryption and allow access to and monitoring of messages, that can also be used for other ends.

Where I come from, we had the COVID tracing app, which was specifically only to be used for that purpose, and the police still went and used it for criminal investigations. And we see other cases where health information from apps is passed on, and Google searches are used, and so on. There are so many risks associated with this that go beyond the stated purpose – we really should, in my opinion, refrain from having such intrusive measures.

>> EVELIINA KARHU: And based on what is written in the chat, people could hear what you said online. I won’t try to repeat it.

And Desara, you want to interject at this point?

>> DESARA DUSHI: Yes, it was a while ago, but I will still say something, and thank you to the speaker just now. I totally agree – as far as I could hear; there was somebody typing all the time near the mic, so it was a bit problematic.

I wanted to go back to the point that Jutta mentioned, that the artificial intelligence correctly detected that these were the genitals of a child, and I think everyone knows that the technology can do that. It’s easy to detect genitals, and easy to detect also that they are a child’s. I mean, there are technologies that detect the skin and find a way to establish that it’s a child even if the face is not shown.

But the problem is that we need to make sure that this technology – which might correctly detect the genitals – also detects the context related to that image, so that we don’t have these cases where parents get flagged as child predators. If the technology just detects images totally out of context, then we will have all of these false positives. And imagine what happens to a parent who is named and shamed, with everyone learning about it, while going through the law enforcement process. Maybe the child will even be taken away from that parent until the case is resolved. We need to make sure that we have the right technology, one that can make the connection with the context.

I think this is what this proposal is about – it’s about the context – and this is also the main concern of the privacy experts, I think. Because detecting without the context is problematic, yes, but if we detect with the context, then we will have to read the text, and we will cause all of those privacy issues that the speaker just raised there in the room.

And I also wanted to comment on Andrew: he said something about automation, and the question is whether automation can solve the problem so that humans do not have to check the images one by one. I think this is an important question, and it is related to what I said before.

And this is what this proposal is about. So far there has always been a human checking the images, but now this regulation relies heavily on the technology and assumes that the technology is able to do this with very little or no human oversight, and so far I don’t think we have such technology. Probably in the future we will have it, but since we don’t have it now, we need to make sure that we have good human oversight, and we need to really assess these technologies before allowing them to be used.

And also, because of all of these problems, I think the regulation could do better in the sections regarding the redress and contestation safeguards, which are currently very vague; they should clarify what measures are taken so that the name of a wrongly identified person, for example, is not included in the child predator lists of Europol and of national law enforcement agencies. There need to be some criteria for the technology that can be used in these cases, to make sure that we don’t infringe the privacy and other human rights of people with technology that actually doesn’t work. We need to be sure that the technology works, so that the intervention can be proportionate and necessary.

Before we infringe on other human rights, we need to ask: are there other ways that are less intrusive?

>> EVELIINA KARHU: Great. Thank you very much, Desara. I’m just going to look at our key participants here to see if you – do you – yes, okay. Before we go into that, just to let everyone know, we are running out of time in our schedule, but we do have an option to extend the session around 15, 20 minutes. So if you have the option to stay longer, we can continue the discussion after 11:30, but, please, Michael, would you like to go next?

>> MICHAEL TUNKS: I would like to make a comment about the authoritarian states, as it’s been put in the chat as well. I don’t think that’s a reason not to act on child sexual abuse. Children have the right not to have child sexual abuse images of them out there. That’s a complete and utter violation of their privacy.

If there are steps that we can take to control that, then that’s what we should be doing. We should judge society based on the actions that we take to protect the most vulnerable within our society, and I don’t see anyone more vulnerable in society than children; we should be preserving their innocence and their right to privacy. Respectfully, I think that this is a really important thing that we should be doing.

The UK government has done an awful lot of work on online safety as well. We have done work within end-to-end encrypted environments, where a little bit of seed money has been provided to explore what is possible in those environments. You can stop images entering those environments and leaving those environments in a privacy-preserving way. So there are things we can look at that would preserve our privacy, where we are looking for exactly the child sexual abuse image, and we have shown that that can be done.
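
[A minimal sketch of what "looking for exactly the child sexual abuse image" can mean in practice: comparing a fingerprint (hash) of an outgoing image against a list of fingerprints of already known illegal images, so that only matches are acted on and other content is never inspected. Real systems use perceptual hashes (such as PhotoDNA) that tolerate small edits; the exact SHA-256 matching, function names and placeholder hash list below are simplifying assumptions for illustration only, not the scheme referred to by the speaker or mandated by the proposal.]

    import hashlib

    # Hypothetical list of fingerprints of known illegal images, as it might be
    # supplied by a hotline or a central database. The value here is a placeholder.
    KNOWN_HASHES = {
        "3f786850e387550fdab836ed7e6dc881de23001b371e79e1cba5dd9e4b1b2c21",
    }

    def fingerprint(image_bytes: bytes) -> str:
        # Simplification: an exact cryptographic hash. A deployed system would
        # use a perceptual hash so that re-encoded or slightly edited copies
        # of the same image still match.
        return hashlib.sha256(image_bytes).hexdigest()

    def should_block(image_bytes: bytes) -> bool:
        # Only the fingerprint is compared; nothing is learned about images
        # that do not match the known list.
        return fingerprint(image_bytes) in KNOWN_HASHES

    print(should_block(b"example image bytes"))  # False: not in the known list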

So I don’t think that’s a reason not to act. This is not about repurposing this technology for any other means. It’s purely about detecting child sexual abuse material and, in some cases, terrorism as well.

>> EVELIINA KARHU: Thank you, Michael. Kimmo?

>> KIMMO ULKUNIEMI: Yes, just briefly. Again, there are plenty of things I would like to comment on, but maybe something related to law enforcement and regulation in general. Someone said that if a detection order were placed on a certain company, then the perpetrators would move to another service, or something like that. But as Michael – or someone, anyway – said earlier, the kids are also producing lots of material themselves; they are being persuaded or blackmailed to do that.

So the offenders are where the kids are. If criminals moved to another service, our children would still be on the same services – whether it’s Snapchat or WhatsApp, they will be there. When it comes to sharing child sexual abuse images, of course, there might be a different service for that, and then law enforcement, together with the private sector, would have to target those companies. And when it comes to grooming, the offenders are where the children are.

There is also the discussion about Google and the discussion about too much power for big companies because of the regulation. I think they already have quite a lot of power. We should have had this discussion 20 years or 30 years ago.

So they have their terms of service, and according to Google’s terms of service it is not allowed to share child sexual abuse images, and they will keep enforcing that in the future. They can independently make their own decisions about what they are going to do with the user and the account and so on. If we want to change that, then we should regulate those Internet service providers – what they can and cannot do with our information.

Thank you.

>> EVELIINA KARHU: Next we had a question here. Would you like to ask your question? You can come to the front here.

>> AUDIENCE MEMBER: I was quite surprised to hear that this material is available on public web services in Europe; if I were to distribute this kind of material, I would never do it on public services. Another aspect I would like to mention is that there is research showing that if people are aware of data surveillance like this, they will start self-censoring their communications. What do you think about that?

>> EVELIINA KARHU: Would you like to comment on self-censoring. There was a question on that.

>> MICHAEL TUNKS: I can comment on the public hosting.

If we have that many people hosting it publicly, what is happening in the environments that we currently can’t see, and what happens when we turn the lights out in terms of encryption? I think the scale of this is unbelievable. You know, in the UK, an assessment has been done of how many people pose a sexual threat to children, and it’s somewhere between 550,000 and 850,000 people in a population of 60 million. So you are talking about roughly 1 in 60.

So think about the conference that we are here today. There’s potentially someone that has viewed child sexual abuse imagery if you look at those figures. So just to come back on the public hosting point and the scale of this, I think it’s scary what’s out there.

I think it’s scary what we don’t know as well.

>> EVELIINA KARHU: Yes, thank you.

>> AUDIENCE MEMBER: I’m mostly interested in your opinion about data surveillance and self-censorship?

>> MICHAEL TUNKS: In terms of like people uploading things and basically saying they wouldn’t –

>> AUDIENCE MEMBER: I meant they are aware of data surveillance, they would change the way they communicate with people?

>> MICHAEL TUNKS: That’s a possibility anyway. People will always change the way they communicate and that’s a technical challenge that they need to keep up with anyway.

>> EVELIINA KARHU: Kimmo.

>> KIMMO ULKUNIEMI: Companies are doing that all the time; they have access to your information. Are you changing the way you communicate because they have access to your information? I’m just talking from the law enforcement perspective: in individual cases we might have the right to lawful interception.

With some companies it could work in a criminal investigation, but those companies that don’t provide end-to-end encryption, your information is there.

>> EVELIINA KARHU: Thank you. We have three more listed questions – or actually, four from the audience. So we’ll take those and then we’ll do your final round from the key participants if that’s okay with everyone.

(Off microphone comment).

>> EVELIINA KARHU: Do you want to go next?

>> AUDIENCE MEMBER: There were some comments made on mitigation measures and design choices and, of course, (speaker is fading out) I would like to share my reflection on how to understand that these can also include the use of encryption as mitigation – so a safety-by-design narrative. (Inaudible).

>> EVELIINA KARHU: Okay. Thank you. I will repeat some of what you said to make sure that the online audience hears. There was a comment about empowering users, and about using encryption as a tool for that empowerment.

But before we go into that, I’m noticing that there’s some maybe comments coming from the key participants, but we have one – actually two more questions from the audience here. So, yes?

You are next. Please. Would you like to come to the front?

>> AUDIENCE MEMBER: No, I think people can hear me. So we have been talking here mostly about the pictures. But, for example, we were given an example of a child abuser who had abused, I don’t know, 200 kids. How many of those cases were found because they were able to detect the pictures that the kids were sending to the guy? Or was he just grooming through text and then meeting the kids and then abusing them, right?

I think it’s a lot harder to detect the grooming if it’s done by text. And if this kind of law comes into effect, I’m sure that they will not be asking kids to send naked pictures of themselves anymore. It will only happen through text.

And then the amount of false positives will become quite enormous and that will take a lot more resources than a cop having a look, oh, is this a naked kid in the photo, versus what is the context in the text. You will be going through so much.

Then I think there’s a false dichotomy being set up between privacy and the child’s right not to be abused. It’s framed as one against the other, but the real question is whether we are willing to sacrifice our privacy because a very small number – like, we’re talking about 1 in 100,000 or 1 in a million – is abused. I would not want to live in a society where all of my communications can be monitored. That will bring chilling effects to society.

And, yeah.

Also, it was said here that, okay, this child sexual abuse material is publicly available. I would dare anyone in the room or online to try to find it! It’s very hard! It is.

Like, I have really tried to look at like –

>> KIMMO ULKUNIEMI: Please, don’t.

>> AUDIENCE MEMBER: There was a censorship law in Finland – that is when I actually lost my trust in government, when that censorship law came into effect. It was said that all the child abuse material is in Asia and in some countries, like Russia, where we can’t get it taken down. But when the secret list of the police was published on a web page in Finland by a famous hacker, we actually went through the material, had a look at it, and less than 1% was actually child sexual abuse material.

For example, if you went through Google in Finland in 2008 and searched for gay porn, the first three hits were on this list, right? And like 95% – over 95% of the actual material was hosted in Western countries.

So basically, the justification that we have to have this law so that we can get this material taken down was completely untrue.

>> EVELIINA KARHU: One final comment or question before we get to the key participants. So if you have taken notes, you can comment in just a minute. Andrew, you had the final comment or question.

>> AUDIENCE MEMBER: I have to come to the front, if people are not hearing.

So just two thoughts, picking up things that were being said. The first one is on end-to-end encryption, and dare I say I think the Internet Society’s position on this is hugely simplistic. People refer to encryption as the magic bullet that will give you privacy and security. It’s just technically wrong to equate encryption with either of those two things. All the major malware now uses encryption. So it’s not a magic thing that gives you privacy.

That’s factually wrong. If you use end-to-end encryption, let us be clear, the endpoint that you are using is absolutely still open to active monitoring by, for example, the social media platforms, so that –

>> AUDIENCE MEMBER: (Inaudible).

>> AUDIENCE MEMBER: So that your data can be – or data about you can be taken from your system. So using encryption to somehow mean privacy is wrong. And also, if you talk to the CSOs of enterprises, they are worried about the use of encryption because it bypasses their cybersecurity measures. And if you understand privacy, you will completely get that if your security is weak, then you are open to software landing on your platform which will bypass your security measures, and therefore you will have no privacy.

So just assuming that by magically implementing encryption you have privacy and you have security – that is wrong. It might be the case in some specific applications, but it’s deeply not the case in others. So we shouldn’t always say it gives you a solution.

And then to say it’s a societal problem – people have said it’s a societal problem and not a technical problem. Yes, but we should not ignore how much worse technology makes the problem. It doesn’t make the problem go away, but we can certainly stop it being amplified. You know, that is the issue that I think we need to address. We can’t just ignore it, because it will happily continue.

>> EVELIINA KARHU: Thank you, Andrew.

We have six minutes left before our final deadline. You can comment on what has been said or simply raise whatever you would like to raise at this point. Would Desara like to go first?

>> DESARA DUSHI: Yes, there are so many things I would like to say, but many ideas are coming at the same time and I don’t know where to start, so I will just keep it very short.

So my key point would be that we should not leave it to technology to fix a societal issue. Yes, we should use technology to help us in fighting this crime, but we should not base everything on technology. We should not think that technology will be the solution and everything will be fine. We have to have stronger safeguards in place, we have to have human oversight, and we have to have stronger collaboration among all actors so that we find solutions that are feasible and long term.

>> EVELIINA KARHU: Thank you very much. And Torsten, would you like to go next?

>> TORSTEN KRAUSE: Yes, first I would like to add to, or rather disagree with, the point that only a small number of children are affected. In Germany, a study finds that a quarter of boys and girls between 12 and 19 are contacted by strangers – every fourth child. And around 80% are affected by sexual harm through images being sent without consent. It’s not a small number that is affected. That’s the problem that we face, and I think we need solutions, and therefore we need a regulation.

Inside this regulation, we find the EU Center, and this EU Center could also be seen, from my perspective, as a kind of guarantee for dealing with false positives: before reports are handed over to law enforcement, they are reviewed by child sexual abuse specialists who check whether they are false positives, and only the real problem images are handed over to law enforcement.

The center could also be a guarantee against misuse of this proposal, for example by Member States which maybe are not as democratic as we want them to be – to ensure these Member States do not violate fundamental rights by using this proposal and finding in it measures for other issues. So if we have a strong EU Center, these Member States cannot act alone and use it in the way that was mentioned.

I also want to say that it’s true, we do not want only technology solutions. And therefore I want to remind you that on the same day as this proposal, the EU presented the Better Internet for Kids strategy. We want to face this serious problem with a comprehensive approach. And if you see both together – the proposal to prevent and fight child sexual abuse and the Better Internet for Kids strategy – then I think the EU has met the aim of creating a digital environment that is safe for children.

>> EVELIINA KARHU: Thank you, Torsten. Thank you.

And final comments. Michael, would you like to go next?

>> MICHAEL TUNKS: Yes, I think Andrew summed up the position on end-to-end encryption very well. If companies are looking at deploying end-to-end encryption solutions, that’s fine, but they have to have adequate child safety mechanisms. It shouldn’t just be one of those things that is done because it preserves privacy. They should still be able to detect child sexual abuse material and grooming, and preferably that should be enforced. I think that’s really, really important on end-to-end encryption.

Child sexual abuse material is easy to find. We are removing it from the public Internet. There are huge numbers of offenders out there as well. I don’t think we should be turning a blind eye to this. And on your comments around privacy as well – you said you wouldn’t want your private communications monitored. I would just ask: how many children do you think it is right to expose to child sexual abuse via the Internet? I would argue none, and one case of child sexual abuse is one case too many.

And some of the imagery that we are removing from the Internet shows victims and has been circulating for years and years. If detection were deployed much more broadly, we could ensure that the images are not there in the first place. I think that’s something that we should all want and try to achieve.

>> EVELIINA KARHU: Thank you very much. Kimmo, please do give us your final remarks.

>> KIMMO ULKUNIEMI: Thank you. I would like to repeat what I said in the beginning about regulation: something has to be done. The private sector needs to be doing more than they are doing at the moment, and I would also like to add that we should strengthen and grow NGOs and civil society in the fight against sexual abuse of children. That’s it.

>> EVELIINA KARHU: Thank you, everyone. Thank you to the participants. It’s been a great discussion and hopefully we can continue it offline as well. I thank all the key participants and EuroDIG.

(Applause)