Dynamic Coalition on Data and Trust: Stakeholders Speak – Perspectives on Age Verification – Pre 04 2025
12 May 2025 | 09:00 - 10:15 CEST | Room 7
This session is meant as an introduction to the topic of age verification and aims to bring in viewpoints from various stakeholders. It will focus on practical issues and case studies from different stakeholders.
Session description
This session launches a series of regional and global conversations on age verification, originally initiated by IGF Czechia. It aims to critically examine the complexities of age assurance systems, particularly the delicate balance between child protection, privacy rights, and digital inclusion. Participants will assess current and emerging age verification technologies and regulatory initiatives—highlighting, for example, Australia’s proposed social media ban for users under 16. The discussion will be informed by diverse perspectives from academia, civil society, youth, the technical community, the privacy sector, and government stakeholders. These insights will help identify enforcement gaps, risks of regulatory overreach, and opportunities for rights-respecting industry self-regulation. Youth perspectives and ethical considerations will be central to the dialogue, ensuring that age verification approaches protect users without enabling surveillance or exclusion. The outcomes of this session will be shared across National and Regional Initiatives (NRIs), contributing to an ongoing, multi-stakeholder exchange of best practices and policy development.
Format
We will bring in different speakers online and onsite and run a multistakeholder panel discussion. It will also serve as a warm-up for the main stage conference session on this topic on Wednesday.
Further reading
- https://www.tandfonline.com/doi/full/10.1080/17482798.2024.2435015
- https://www.sciencedirect.com/science/article/abs/pii/S0091743522000664?via%3Dihub
- https://www.tandfonline.com/doi/full/10.1080/17482798.2025.2480091
- https://www.cogitatiopress.com/mediaandcommunication/article/view/8963
People
Key participants:
- Natálie Terčová, IGF Czechia, speaker (remotely)
- Tatiana Tropina, ISOC, speaker (in person)
- Niels Zagema, YouthDIG fellow and Dutch Youth Representative European Affairs at National Youth Council (in person)
- Paulo Glowacki, EURid Youth Committee, speaker (remotely)
Moderator:
- Regina Filipová Fuchsová, EURid, (in person)
Transcript
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Moderator: My name is Ramon, and I will be your remote moderator for this session today. More information about the session and the speakers is available on the EuroDIG wiki, and I will share the link very soon. We encourage you to raise questions, and to raise your hand if you have any questions or if you want to present something. If you have any questions, please put a Q in the chat and ask your question so that I can address it to the room. Before we start the session, I would like to highlight the session rules: please enter the Zoom session with your full name. If you want to ask a question, raise your hand using the Zoom function. You will be unmuted and the floor will be given to you, and then you can ask your question. When you are speaking, please switch on your video, state your name and your affiliation, and do not share any links to the Zoom meeting, not even with your colleagues, please. Let me just, there we go. We will now send the information again in the Zoom chat, and I hand over to our moderator, Regina Fuchsová.
Regina Filipová Fuchsová: Thank you very much, Ramon, for the information. A warm welcome from me as well to this session. We are basically starting the EuroDIG conference today with these pre-sessions on day zero, one of them being the perspectives on age verification from the viewpoint of different stakeholders. We are organizing it within the Dynamic Coalition on Data and Trust, in cooperation with the Youth Committee of EURid and also the Czech IGF. My name is Regina Filipová Fuchsová, I work at EURid as the Industry Relations Manager, and I will try my best to moderate the session today and give everybody enough space to raise their points and questions. We are concentrating on the various aspects of age verification as a measure to protect minors online. So let me please start with a brief note on what underlying issues we are actually addressing here. We have a bit more than the first decade of experience with social media behind us, so we can already analyze the impact from different perspectives over a longer period of time. Evidence from leading researchers across continents shows a consistent link between social media use and a decline in young people's mental health, and this is independent of the local social context. Unrestricted access to social media exposes children to cyberbullying, adult abuse, image-based abuse, and illegal or age-inappropriate content. We are referring here, for example, to the well-known social psychologist Professor Jonathan Haidt and his book The Anxious Generation, where he writes that since 2010, tech companies have basically been exploiting our kids' attention and mental health for profit. His research shows evidence that between the years 2010 and 2020, the situation in the United States became especially alarming, and he documents it with data such as the sharp increase in the suicide rate among adolescents and in major depressive episodes, both very high for girls and boys in this age group, and also emergency room visits connected with self-harm, which rose for girls by 188 percent over the decade, much higher than for boys of the same age group, which is very alarming. He also claims that the rise of phone-based childhood has contributed to four key harms: social deprivation, sleep deprivation, attention fragmentation, and addiction. Acknowledging this situation, nonprofit organizations worldwide are already working to address the most harmful aspects of children's online exposure, with missions to educate and equip families, mainly in the field of prevention of harm and also healing when harm occurs. The minimum age requirements for social media access are now very central in regulatory debates, also in the European Union. Most platforms currently set the minimum age at 13. This is a threshold which can be easily bypassed, and it is insufficient for protecting younger teens, who are especially vulnerable to algorithm-driven content and harmful comparisons. Australia is an emerging leader in this area; we already mentioned it in the description of this session. Its government plans to implement the Online Safety Amendment Act, which introduces a mandatory minimum age of 16 for certain social media platforms. This law is scheduled to take effect in December later this year.
Since we are at EuroDIG here, we are, of course, interested in the possible response of European countries and the EU itself, taking into account the risks of regulatory overreach that come with attempts like this to regulate, and also the opportunities for rights-respecting industry self-regulation. Age verification and the connected age assurance systems should ideally strike a balance between child protection, privacy rights, and digital inclusion. What is important here are youth perspectives and ethical considerations, ensuring that age verification approaches protect users without enabling surveillance or exclusion. So we will focus today on practical issues and case studies as well, as experienced by the different stakeholders our speakers represent. Let me introduce to you our panelists and speakers. We have two persons here in person and two online. With us is Tatiana Tropina from the Internet Society, where she is a senior advisor. Earlier she was involved in research, policymaking and capacity-building projects, both at academic institutions and in various consultancy projects for international organizations, civil society and think tanks. Thank you for coming. We have on my right-hand side Niels Zagema, a YouthDIG fellow and Dutch youth representative for European affairs at the National Youth Council. So Niels is both a member of YouthDIG and one of the Dutch young representatives for European affairs, elected by Dutch youth through the Dutch National Youth Council. He advocates for the rights of young people in national and EU policy processes. Thank you for coming. And then online we have Natálie Terčová. Natálie is a researcher and lecturer. She serves as the European representative on ICANN's At-Large Advisory Committee, and she is also the chair of IGF Czechia. Her areas of focus include children's online experiences, digital skills, and online safety. Hi, Nati. And our second speaker online is Paulo Glowacki, a member of the EURid Youth Committee. He has been active in internet governance since 2022, when he first participated in the IGF in Addis Ababa. He holds a degree in international relations and is currently pursuing a master's degree in international law at the Geneva Graduate Institute. Paulo is also an active member of the German Youth IGF, and he is joining us today from Geneva, right?
Paulo Glowacki: Hello. Yes, hello, everyone.
Regina Filipová Fuchsová: Okay, so let's dive into the discussion. I would like to start with an introductory question to all the speakers and ask: why is the topic of age verification important for each of you, for your organization or the stakeholder group you represent? And also, do you see age verification as the silver bullet for protecting minors online? Can I ask Tanja to start?
Tatiana Tropina: Thank you very much, Regina, and thank you very much for having me and the Internet Society in this session. Why is it important for us at the Internet Society, or for the technical community as a whole? At the Internet Society, we believe that the internet is for everyone. We work to make the internet open, globally connected, secure, and trustworthy. And these are not just abstract values. The internet exists as a force for good in society. It exists to improve people's lives. This is at the core of our mission. So when we look at age verification tools as they are now, we do not see compatibility with an open, globally connected, secure, and trustworthy Internet. In fact, we believe that with mandatory age verification tools in their current way, shape, and form, Internet users around the world will be less secure. These tools do not affect only minors. They affect everybody, minors and adults alike. And we see several big risks that these tools carry if they are implemented as they are now. First of all, they introduce privacy, security, and data misuse risks, as I said, not only for minors but for adults, basically for everyone. Another risk we see is the barriers to Internet access they introduce. And this affects the groups that are already disadvantaged, that already face barriers to access, those groups who are already somewhat excluded, like older adults, like marginalized and disadvantaged communities. And overall, these risks can have a chilling effect on everyday Internet use. And as I said, some groups could be affected very disproportionately. And in the end, we have to look at what is going on around the globe. We are right now at EuroDIG, but we have to think globally when we think about the Internet as a global network. When various states, or stakeholders for that matter, start rolling out their own age verification solutions or start mandating them via regulation, it can create a patchwork of different approaches and different barriers. And in the end, it can disrupt the interoperable, global nature of the Internet. And Regina and everyone, I know that this is only the introductory statement. I hope I will have a chance to elaborate on these risks a bit more later in this discussion. But to wrap up, I just want to come back to what Regina called a delicate balance between the protection of minors online, which we absolutely appreciate and support as a very good goal, and privacy, security, and other rights. As currently framed, these solutions and this debate, instead of striking a delicate balance, introduce a trade-off between the safety of minors and the privacy, security, and access of all users. And it should not be a trade-off. We should not frame it as a trade-off. And a bit more on this later, I hope. Thank you very much.
Regina Filipová Fuchsová: Thank you very much, Tanja. Indeed, we will dive more into the risks of age verification systems as well. Can I now ask Natálie for her introductory statement? Thank you.
Natálie Terčová: Sure. Good morning, everyone. Hope you can hear me well. Let me know by nodding. Okay, good. Thank you. Perfect. Okay. So, first of all, thank you so much for inviting me to the session. It's a pleasure. As Regina already correctly stated, I work as a researcher in my day job, and my topic of focus is the digital literacy of children and youth in relation to their online experiences and online risks. And in this case, age verification on the surface of it does promise a way to keep, let's say, harmful online content away from children and those who are very vulnerable. And it really does matter to me, because it forces tech companies, and us as a society, to recognize that children and young people do have some unique needs online as end-users of the internet, and sometimes they need to be treated in a different way. And in this case, if we talk about age, it can be preventing an eight-year-old from wandering into a chat room which is 18-plus, full of intimate content that could be unhelpful and potentially harmful for someone that age, or making sure that a 10-year-old isn't targeted with, let's say, inappropriate ads, and that they know how to navigate their way through this. In my academic work, actually, we did see that psychological development plays a big role in how kids interact online. So of course we know that younger children are more vulnerable, while teens, on the other hand, are testing the boundaries and trying to work their way around restrictions. So of course, it is very good to have some, let's say, gates in place, some barriers, like age checks. However, what is really, really important to say here, and I am drawing not only from my experience but from robust data we have from around the world, from very well-known scientists and academics, is that age checks alone cannot solve all the issues. Because what really matters, and has been proven many times, is the role of the family, the SES, the socio-economic status, and also so many other little aspects, the prerequisites that children come with to the online environment, and these shape the experience they have online. And another thing we should very briefly remember is that whenever you restrict someone from something, they will find their way to access it anyway, and these detours, trying to find alternative routes, can be really, really dangerous. So, not to be too long in this opening statement, I just want to say that I do not see age verification as a silver bullet. I really believe that no single tool can be a silver bullet or an ultimate solution, and we should focus very much on education, enhancing digital literacy, but also wise parenting, parenting strategies; we know a lot about parental mediation and how these things really matter; and of course good platform design. But that is something I will talk more about, I guess, as we go through the session. Thank you so much.
Regina Filipová Fuchsová: Thank you very much, Natálie. So far it looks like there is no one-word answer to the problems we depicted at the beginning. Can I ask Niels for his introductory statement now?
Niels Zagema: Yes you can, thank you Regina. So I am the youth representative for the Dutch National Youth Council, which means I speak to a lot of young people, also in classrooms, and of course we hear a lot about the dangers of the Internet, but we also know that online safety is an important priority. We hear from young people that they are worried, but as the Youth Council and as a youth representative, we do not believe that age verification is the silver bullet. To be honest, I have not heard anyone say it is the silver bullet yet, but maybe that is because I move in different circles. In fact, when implemented poorly, I think age verification really undermines the freedom that makes the Internet such a powerful space for young people, and nowadays the first remark about connectivity is negative, which is, of course, understandable, but the positives often get neglected. First of all, I think young people do want safe and appropriate online environments, but I think we can get there through different means; it is not about trading one evil for another. I, for example, was in a classroom and asked, what do you think of age verification? They said, well, if I have to scan my face or upload a passport just to watch YouTube, first of all, that would feel invasive and a bit creepy, and it does not really show trust. And I agree with that personally, but it also shows that young people are concerned about the topic of age verification, though mainly when you bring the topic to them, not really by themselves. In the media you also hear a lot about age verification, and it can mean different things and has a lot of technical details, and therefore it is difficult for young people, but also for broader society, to form a general opinion. So to go back to the question of whether age verification is a silver bullet: I do not think so. I think you can focus more on education beyond the classroom, on empowering youth to have those spaces, creating more safe spaces for young people. But I do think it is a valid concern to be addressed, and I see both sides of the argument. The most important thing for youth, I think, is that they can have trust in those systems; age verification can erode trust or build trust, and that is a way of looking at it: how do we want young people to have trust in those digital environments? So, thank you, and I think we can move on to the next speaker, which is Paulo.
Regina Filipová Fuchsová: Thank you very much indeed.
Paulo Glowacki: Well, thank you everybody for having me. First of all, thank you Natálie and Regina for the great collaboration in organizing this. I have to give a big shout-out to those two and especially to Katrin Worasz, who can't be here today but was very helpful in preparing me on this topic, because I am a member of the EURid Youth Committee, although I do not speak on behalf of EURid or the Youth Committee. I am engaged in these issues, but not necessarily an expert on everything, so she was very helpful in briefing me. Let me, after this preliminary remark, start out with our, or my, general statement on this issue: age verification is relevant because it directly impacts how we, young people, access online spaces, although of course it affects how everybody accesses online spaces. It is especially relevant for young people because we get educational content, entertainment, and even civic participation online. So really a large part of our lives depends on online spaces. And from the perspective of German young people, especially those engaged in digital rights and youth organizations that I have talked to in the past couple of weeks, the debate is not just about protection, as previous speakers have already outlined; it is really about striking this balance between safety and freedom, between privacy and participation, and really asking where we need it, right? Germany is sort of a special case in the world, or in Europe, as far as I am aware and have looked into it. Australia is pushing forward now with the social media age ban, but Germany has actually had a very long history of age checks, including age verification, for so-called high-risk content; gambling, pornography and so on are already restricted, and providers are already implementing age verification measures. They are not always mandated, but many providers have gone on to do so. There is a very vibrant ecosystem of laws and regulatory actors in Germany. And that comes, of course, from the perception that we do need to protect children from harmful content online. But we also need to avoid oversimplifying the solution, because it is not the silver bullet; let me agree with the previous speakers on that. It can become digital gatekeeping, as Tanja has outlined. But that is perhaps why Germany has implemented this vibrant ecosystem where providers come to the authorities, who can then approve the solutions, and only then are they implemented. So trust and agency definitely matter to us. We young people want to be involved in the development of the solutions, but we also want to be asked how this issue is taken forward. And like I said, Germany can be considered a front-runner on this issue, I would say, but it is definitely a very complex regulatory landscape. We have certain laws in place, like I said. We have the German Commission for the Protection of Minors in the Media, known as the KJM. We have the German Association for Voluntary Self-Regulation. So there is really a vibrant ecosystem going on, and we have over 100 solutions already approved in Germany. I think that shows that we sometimes come from a different position in the European and global landscape. But I definitely share the concerns, and also, perhaps, the hopes for this technology. Let me stop here.
Regina Filipová Fuchsová: Okay, thank you very much, Paulo. So I think we can move on to have a closer look at the main challenges and risks associated with age verification.
Tatiana Tropina: In addition to IDs or biometric scans, it can also include financial accounts, for example, right? So imagine what kind of information is amassed there. And it immediately exposes the users, and the platforms that collect and store this information, to the risks of security breaches, privacy breaches, and data misuse. But let's even set aside misuse and abuse. How can we ensure that this data is handled properly, that it is not being sold to third parties, that it is not being used to track users? When people know that they can be traced, and here I come to the word creepy yet again, they will be less willing to use very legitimate services that put in the effort to perform age verification checks. And I know that several speakers mentioned trust in the Internet, which is very important and which is declining. This will just have a further chilling effect on trust in the Internet. So the bottom line for me here, when it comes to the privacy and security risks, is that age verification technologies do not offer any holistic approach to making people, or rather minors in the first place, safe online. Rather, they can create a false sense of security while creating various vulnerabilities in security and privacy. And in this regard, as a first step, I know that we are going to talk about solutions a bit later, but we believe that at a minimum, these age verification tools should be independently audited on whether they actually comply with privacy and security guidance, with reports available publicly so that researchers can access them. So this is on privacy and security risks. The second risk I mentioned was the risk to accessibility. In this regard, age verification tools can have a very chilling effect on everyday Internet use as well. I come back again to what information is collected and how: government-issued IDs, financial accounts. Think about people who don't have a government-issued ID, or who live in a foreign country where their government-issued ID is different from the IDs readable in that country. Think about people who cannot provide a government-issued ID for any legitimate reason; it will affect them significantly. The same with financial accounts. It sounds strange, but not everybody has a bank account. So these people will not use trustworthy services. They will probably go to very dark corners of the internet. And frequently, these are going to be people who are already affected by a lack of inclusion and a lack of access. People who don't have bank accounts or government-issued IDs are often already vulnerable and marginalized population groups, and older people. When we think about biometric tools and cameras: again, it sounds strange, but not everybody has a webcam or a phone with a camera. We also know that these tools do not perform well for people who don't have white skin tones. They can affect access for older people, who may find the use of these tools challenging. You think it is simple, but for many it is not. It is not simple for people with cataracts. It is not simple for people with certain health conditions, or for people who are recovering from a stroke. It is not that simple for them. It is a major access barrier. So in this regard, we believe that access would be significantly hampered. And as the last point on accessibility: it is not only about older people and marginalized population groups. It is also about young people.
Because when you think about how camera-based, biometric verification checks perform on young people, they have an error range of several years. So some teenagers or young adults might also find it challenging when the estimated range is not clear; it cuts both ways, for old people and for young people. And finally, I said at the beginning that at the Internet Society we care about an interoperable, globally connected Internet. In this regard, the impact of all these access barriers, security risks, and lack of trust can be significant. As I said at the beginning, the more countries roll out these programs of mandatory age verification tools, the more we might end up with Internet interoperability being impacted, breaking at various layers. We believe that this does move forward; so, if I may say, the horse is out of the barn, and we cannot roll these tools back. But if we look at the international level, we probably need international standards: international standards where stakeholders participate on an equal footing, developed in a multi-stakeholder manner, so that all these risks can be factored in and addressed, and not simply framed, as I said at the beginning, as a trade-off where we sacrifice this and that for the safety of minors online. So we do have to take this question seriously, and if it comes to the point where we agree that we have to have these verification tools, then we have to standardize them properly. Thank you.
Regina Filipová Fuchsová: Thank you very much, Tanja. That was a really thorough analysis. I would just ask which of the speakers would like to complement this overview of challenges and risks. Maybe it would also be good to tackle some from the technical field, but also the legal one. But just feel free to add your comments, whoever would like to. Yeah, Niels? Okay.
Niels Zagema: I mean, it is not from the technical or legal field, so my apologies for that. But I do resonate with what Tanja said; it was very extensive. What it also reminded me of is the restriction that comes with administrative burden. I know a lot of people, for example, go to government offices and are treated as a number because they need a passport or they stand in line. I think age verification falls in line with this broader movement where people are not treated as people but as numbers in a system. So I think that is a connection to make. And it can also provide a false sense of security. We say it is there to provide security, but does it actually do that? And if it is implemented, is that sense of security warranted, or do people then underestimate how harmful it actually is? Another point is that there is no such thing as a universal child. The needs of humans are different, and therefore it is also difficult to put an age on it. Currently you see that with social media there is an age limit of, for example, 13, but because it is not really enforced, no one really cares about it, and therefore there is no discussion about what the age actually should be. And I know analysis is currently still going on for guidelines on what the age should be, but this is, again, dependent on the needs of children and the needs of persons in general. So it should be more flexible and not try to fit everyone into a box where they cannot really be themselves. That is what I wanted to add. But again, thanks to Tanja for the extensive overview of the risks.
Regina Filipová Fuchsová: Thank you. Would our online speakers like to add anything?
Natálie Terčová: I can follow up if that's fine.
Regina Filipová Fuchsová: Yes, please. Thank you.
Natálie Terčová: I think so. I want to say I admire the long list, and I fully agree with what was said by my colleague before me. All these things are very important to mention. There is a lot of bias going on, also now with new emerging technologies, especially, as was mentioned, around skin color, medical conditions and so on, and all these things can definitely hinder people's access. So this is very good, and thank you so much for mentioning it. I might add a bit on the legal and ethical aspects, because it may seem a bit funny: we have laws like the GDPR that push for stricter age checks to protect children, and that really does sound great. However, how do you verify age without violating privacy law? The GDPR calls for data minimization, that is, collecting as little data as possible, and it also calls for privacy by design, meaning you should not collect more personal data than necessary. Yet many age verification systems, as we heard, demand exactly that: personal information, IDs and all sorts of these things. So there is a very big tension between protecting children, in our case, and protecting their data, everyone's data, and their privacy. Of course, on the other side of the spectrum, on the other side of the barrier, companies are also worried about liability. If they get age verification wrong, what are the consequences? They could face penalties for letting a minor slip through, or breach privacy regulations by storing sensitive data or handling it badly. So legally it is really a tightrope walk, I would say. And ethically, it is very important to mention something we know from psychological research, which is the wrestle we have here with children's autonomy, because teens in particular are developing their sense of self and privacy in this period. And it shows that if we constantly force them to prove their age, it might feel too invasive and send them the message that privacy comes second. That is a message we definitely should not send. So it is definitely a delicate balance. I would not say we should throw the idea out, but rather implement it in a sensible and rights-respecting way. That would be my addition to this. Thank you so much. And Paulo, if you want to complement this, feel free.
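To make the data-minimization tension concrete: privacy by design implies that an age check should derive a single yes/no attribute and then discard the underlying document. The following is a minimal, illustrative sketch of that principle in Python; it is not any specific vendor's implementation, and the identifiers in it are hypothetical.

```python
from datetime import date

def is_over_threshold(date_of_birth: date, threshold_years: int = 16) -> bool:
    """Derive the one attribute the service needs: over the threshold or not."""
    today = date.today()
    birthday_passed = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
    age = today.year - date_of_birth.year - (0 if birthday_passed else 1)
    return age >= threshold_years

# Data minimization: persist only the derived boolean, never the ID scan
# or the birthdate it was computed from.
session_record = {
    "session": "opaque-session-id",  # no real-world identity is stored
    "age_check_passed": is_over_threshold(date(2007, 5, 1)),  # hypothetical input
}
print(session_record)
```

The GDPR tension described above is visible in what gets persisted: a minimization-respecting design stores only the boolean outcome, while the systems criticized in this discussion retain the document or birthdate itself.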
Paulo Glowacki: Thank you very much. Perhaps drawing on my legal background: there are, of course, a lot of relevant documents out there, but I would just highlight one, the UN Convention on the Rights of the Child. Globally, I think that gives us a good standing; it has 140 signatories and has been ratified almost universally. Article 13 in particular contains the right to freedom of expression and access to information, online as well, and Article 16 the right to privacy. So of course, those rights need to be balanced and taken into account globally. There is also General Comment No. 25 of the UN Committee on the Rights of the Child, from 2021, which highlights how children's rights apply in the digital environment. These rights are also reflected at the European level. And those are, I would say, risk-mitigating laws, or rather a treaty, that we have in place. Now, at the European level, which is even more legalized and regulated, I see one challenge emerging on the horizon, which is the proposed regulation to prevent and combat child sexual abuse, the CSA draft, currently under discussion at the European level. Nobody really knows where it is going to go, but it contains the issue of chat control, so really controlling minors' chats and decrypting chats; that, of course, goes beyond age verification, but it is a huge issue. And perhaps what I am getting at is: if you have a hammer, everything looks like a nail, right? The technology of age verification is not in itself a bad technology, I would say; the question is how we apply it. And I fully agree with what Tanja has outlined on the privacy concerns and the chilling effect. I would add, perhaps, the lack of transparency of most of these technologies. Most AI technologies are black boxes; we do not know how they function, we do not know how their algorithms work. And a common issue in content moderation is always over- and under-blocking, right? Even if we do not implement age verification technologies but simply restrict content by age, we always have things being filtered out. Just look at, for example, useful sex education or mental health content that could be beneficial to people who would otherwise, due to their socioeconomic circumstances, not have access to it; that would, or could, be blocked for them. So these are concerns that we need to continuously address as we go. And of course, drawing back to the legal challenges, we have the issue that not every country has ratified the UN Convention on the Rights of the Child, and countries have reservations on it. So we do risk having a fragmented landscape out there globally, and the EU also needs to take this into account. The Digital Services Act is yet another document that is relevant here; Article 28 specifically targets the protection of minors. So I think all of these need to be taken into account and balanced, because they frame the discussion in legal terms. That is what I would add.
Regina Filipová Fuchsová: Thank you very much, Paulo. It also brings us to the, let's say, everlasting discussion about whether the online environment requires more or different measures and protections than the offline world. Maybe before we have a closer look at, let's say, the positive side, at what complementary approaches could be considered, if there is a question from the audience, either in general or to any of the panelists, or online, we can take it now. Please.
Audience: Thank you. I'm Tapani Tarvainen from Electronic Frontier Finland. First, I must note I very much agree with the speakers; Tatiana especially has made the case very clear. But there is one point I want to highlight: the main problem with age verification, at least as it stands, is that it seems to require identification of users. And that is dangerous in itself, especially for the children in question. If a child tries to access, let's say, a site with age-inappropriate material, let's put it that way, I would rather not have that site know the identity of the child, because the site managers may, well, let's say, have motives that are not ideal for the interests of the children. So children especially need to be able to browse the internet anonymously. Now, there are some technical ideas for how you could verify age without revealing the identity to the site in question, but that is something we might discuss at some point. I would be curious to hear if you think those might actually work. There are some theoretical possibilities, but I do not know if any have been implemented so far. Thank you.
Regina Filipová Fuchsová: Would you like to react, Tanja? Thank you.
Tatiana Tropina: Yes, absolutely. Thank you very much, Tapani. Regina, what you said when you passed on the questions, summing it up as: does the online world need the same solutions and protections as the offline world? I think it corresponds very much to what Tapani said. There are two layers to this question. First of all, if we perform verification in the physical world, should we perform verification in the online world? Because let's be clear here, we do perform age checks in the physical world, right? A child cannot buy alcohol. A child cannot buy a pack of cigarettes. Does it mean that we have to transpose these ideas and solutions to the online world? And here the question becomes acute. Because in the offline world, the age verification is temporary. You show your ID or you don't show your ID and you walk away. It's done and gone. Online, it is not done and gone: you may walk away, but the data does not. And this is why I would say no, we cannot equate the two. We have to factor in different risks. And this is what you said, Tapani: it can endanger those very children. It's interesting; I made a note when Natálie was speaking about platforms, providers, and operators being at risk of a fine if a minor slips through the age verification. And I understand the risk. The problem for me is that they are not at risk if they exclude an older person from access to legitimate services, if they exclude disadvantaged groups from access to legitimate services. They are not at risk of any fine. And this feels like a big inequality in terms of what kinds of barriers and risks we are creating for our society. And also, we can improve societal trust and protection in the physical world by performing age verification checks. But in the online world, in fact, from what I have heard, we are not improving trust. We are decreasing trust. We are trying to solve huge societal problems and harms with only technical tools. And here, again, I would agree with Natálie that the problem has many more aspects than just the technology, and this formalistic approach, this single layer of where I perform verification and how accurate it is, is simply not working. Thank you.
Regina Filipová Fuchsová: Thank you, Tanja. We have another question.
Audience: Hey, hello, good morning. My name is Torsten Krause. I come from the Digital Opportunities Foundation in Germany, and this is more a statement than a question. I am aware that we are discussing this issue from different angles and perspectives, but I wonder if we are on the same level in this discussion, because Paulo laid out in his remarks that in Germany we have a long-standing history of age checks, more than 20 years, with more than 100 systems in place over this history. And I think the hurdle to overcome here is that we are discussing the protection of minors as keeping children out. That is also what is discussed in Australia, and I do not like this approach, because as a child rights advocate I am in favor of the participation of children, and I think we have to make a shift: age assurance mechanisms and tools can be a key tool, a kind of precautionary measure, for creating age-appropriate digital environments where we do not keep children out, but where we perhaps keep adults out, to have safe spaces for children. And when we compare it with the offline world, as Tanja did, what I really like is that there we have such checks too: you will see if an adult is going into a kindergarten, a safe space for children. You will see that it is not a person of the same age. Online, if we have a chat room which is meant for children, we do not know whether adults are going in because they want to offend, for example. We will not recognize that. But age checks can be a tool to find a solution for this age-appropriate, safe participation of children. I totally agree with the concerns about the current mechanisms of these tools. We have barriers because not everyone has a financial account or a bank account. Not everyone has an ID. That is true. It is creepy to scan your face, and it is not just creepy: from a child rights perspective, it is totally invasive and not safe to work with all this data of children. But I would like to see us not stop at this stage, but think about how to overcome these challenges and how to create tools that secure the privacy and anonymity of all users while checking their age, and how to realize that. And the German government has developed such a system together with Fraunhofer SIT: a double-blind mechanism, working with data that already exists, so no new data is generated, and in the end the service just gets a yes-or-no answer, without knowing who the user is, on whether that user should be allowed into the space or to get the content or not. And I think that is how we should discuss this: how to use it as a key to make the digital environment safe for us all, by securing our privacy and anonymity and widening the participation of all users in the end. Thanks.
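The double-blind mechanism described here separates the party that already knows the user's age from the party that serves the content, with only an authenticated yes/no claim passing between them. The sketch below illustrates that separation in Python. It is a simplified toy, not the actual Fraunhofer SIT design: a production system would use anonymous credentials or blind signatures rather than a shared-key HMAC, and every identifier in the sketch is hypothetical.

```python
import hashlib
import hmac
import secrets

# Key held by the age-attestation issuer; in this toy it is also known to
# verifying services. A real deployment would use public-key or blind signatures.
ISSUER_KEY = secrets.token_bytes(32)

def issue_token(user_is_over_16: bool) -> str:
    """Issuer side: it knows the user's age from data that already exists,
    but the token carries only a random nonce and the yes/no claim."""
    nonce = secrets.token_hex(16)
    claim = f"{nonce}:over16={user_is_over_16}"
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}:{tag}"

def verify_token(token: str) -> bool:
    """Service side: learns only whether an authentic 'over 16' claim was
    presented. It never sees an ID, a name, or a birthdate."""
    claim, _, tag = token.rpartition(":")
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and claim.endswith("over16=True")

token = issue_token(user_is_over_16=True)  # issuer never learns which service asks
print(verify_token(token))                 # service only ever gets True or False
```

The property to notice is the one emphasized above: no new personal data is generated, the service stores at most a boolean, and neither side sees the other's records.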
Regina Filipová Fuchsová: Thank you very much. It actually brings us nicely to the part where we wanted to discuss the complementary approaches that should be considered, approaches which could achieve the same goals without undermining rights such as privacy, inclusion and autonomy, which we discussed. So can I ask the speakers to share their views on what these complementary approaches could look like? Paulo, can I start with you this time?
Paulo Glowacki: Yes, thank you. Since I have to leave at 10, this will probably be my last intervention, but once again, thank you for the organization and for all the questions we have received from the floor. Perhaps let me start by agreeing with Torsten; thank you for bringing that up again. I think you are much more of an expert on this than I am, so if you stay in the room, please, that would be great, and a shout-out to the work you have been doing. Let me turn to the alternatives or complementary approaches you asked about, Regina. I think we have already brought them up a bit. Natálie has highlighted platform responsibility, which is of course one issue; we have that coming with the DSA, and legally speaking it is now a question of implementation, I think. Then a big thing we have had in Germany, having dealt with these issues for a long time in a rather unique way, is parental and community involvement. We have this concept of Jugendmedienschutz, youth media protection, where we have a variety of projects and models that combine regulation with education and dialogue with young people. I think that is very crucial. Of course, parental control mechanisms could be much more helpful if applied in the apps where the issues arise; I think we should avoid pushing this issue down the technology stack onto the operating-system level, because we really run into big issues there. And then, of course, digital literacy. That is something we have also pushed for as the EURid Youth Committee and as the Youth Internet Governance Forum Germany, and we are keen on collaborating on these issues, so please do reach out. I think it is a very important measure to complement age verification technologies. And last but not least, youth participation: youth participation in the design of these systems and in the debates on regulation is very crucial, because it also shows us where we really need the protection. As Niels said, it appears creepy, but it only appears creepy once you bring the topic to the young people, so it is important to reach out and keep them involved. There are, of course, lots of risks and lots of benefits out there, and thinking about it the way Torsten has pushed us to think is, I believe, very helpful in creating safe online spaces for everyone, without harming our privacy and data protection. I will leave it there. And once again, thank you, everybody, for being here and having me.
Regina Filipová Fuchsová: I actually cannot hear in the room, I don't know if it's only me. One, two, three. It's much better if the button is red, I just realized. So, I was just thanking Paulo for his contribution and for attending this panel, and wishing him good luck with the next lecture he has to join. And we will continue. We don't need to repeat what was already said about the alternatives, but maybe, Tanja, you would like to complement or highlight something in this regard; we heard a lot of problematic aspects, so perhaps you can offer us some light at the end of the tunnel from your perspective as well.
Tatiana Tropina: Thank you, Regina. Yes, I know that I highlighted some of the ways forward already, like, for example, the development of standards in a multi-stakeholder manner, or making sure that privacy and security guidelines are followed by these age verification systems, with reports made available and accessible publicly. Before I come to the solutions, I want to make one remark on what Torsten said, and that actually, I think, brings me nicely to my concluding words about solutions. The idea of safe spaces sounds very, very compelling when we think about children and protecting children. But we have to bear in mind, and you said it yourself, that safe spaces will again require age verification. So here we come again: how do we look at this layer of technology? And to me, the way forward would be to look at this layer of technology not separately from everything else. As for Germany, I have a huge belief in the ability of the Fraunhofer Institute to create something that would be bulletproof, absolutely foolproof; I believe in it. But the point is that if we try to solve a complex societal problem with technology, we need to look beyond technology. And this is it. We need to look at who is included, who is excluded, and how. What aspects of a complex societal problem can we solve with this technology, and what can we not? That is why we will need complementary solutions. To wrap this up, I will come back to technology. When we look at this at the Internet Society, we see quite a few examples that can be less risky and more privacy- and security-friendly. One example is adult websites self-labeling with metadata, right? This helps parents set up children's devices appropriately to block access to this content. So instead of creating privacy and security risks, these tools empower families, empower parents, to protect their children. And in this way, the responsibility lies not only with the platform; it also includes the parent, so there is a certain autonomy, I would say; they are actors in their own right as well. And with this, yes, I think I already highlighted the other solutions in my previous interventions, so I don't want to repeat myself. Thank you.
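The self-labeling mentioned here works by adult sites embedding a machine-readable marker in their pages; the RTA ("Restricted To Adults") meta tag is one widely deployed example. Filtering software that a parent configures on the child's own device can then check for the label before rendering a page. Below is a minimal sketch of such a client-side check in Python; the flow is illustrative, and the block_page helper is hypothetical.

```python
import re
import urllib.request

# RTA is a fixed label adult sites can embed, typically as
# <meta name="RATING" content="RTA-5042-1996-1400-1577-RTA">.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def page_self_labels_as_adult(url: str) -> bool:
    """Client-side check: fetch the start of a page and look for an adult label."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        head = resp.read(65536).decode("utf-8", errors="ignore")
    rating = re.search(
        r'<meta[^>]+name=["\']rating["\'][^>]+content=["\']([^"\']+)["\']',
        head, re.IGNORECASE,
    )
    return RTA_LABEL in head or (rating is not None and "adult" in rating.group(1).lower())

# A parental-control agent on the child's device would block locally, e.g.:
# if page_self_labels_as_adult(requested_url):
#     block_page(requested_url)   # hypothetical helper in the filtering software
```

Because the check runs locally on the family's device, no identity or age data leaves the household, which is the contrast drawn above with centralized verification.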
Regina Filipová Fuchsová: Thank you very much. Niels, would you like to add?
Niels Zagema: Yes, I would like to add, for example, that many people believe the Internet was not really designed for young people. So I think of youth-specific platforms: not only deciding what to ban, but providing alternatives that are really safe for young people and in which they can better navigate the Internet. And I like what Paulo said about digital literacy, because you hear it a lot, but I think we are addressing two different problems under the umbrella of age verification: social media and more addictive content on the one hand, and more harmful content like adult websites on the other. So we also need to look, for each specific problem, at what the solution might be. And again, with parental and societal control, I am a bit hesitant about how parents can influence children. I think children should have more autonomy and have the right to develop into adults without the parental narrative. But there is a role for parents and for society in general to say: okay, this is normalized, this is not. And as an alternative to age verification, I think we can much better address the harm from other sides of the story, and I am mainly talking about, for example, social media, not about the harmful websites: really saying, okay, there are alternatives, but are we actually using them? So it fits into a picture that is much broader than age verification, about how we want to use the Internet and how youth-specific that is. Another point is that youth participation is often mentioned as a good thing, something that can create trust among young people, but bad youth participation hurts the cause. So I think we should also look critically at how youth participation is done, especially with regard to technical solutions such as age verification, and really ask: is this actually something young people can agree with? So those, I think, are some alternatives and some remarks I wanted to make.
Regina Filipová Fuchsová: Excuse me. Yeah. It's not very inclusive here, at least not, obviously, for my age. Natálie, can we ask you for your viewpoints? Thank you.
Natálie Terčová: Yeah, thank you so much. I will definitely make use of my research field being digital literacy, with the aspect of parents that is usually there and plays a big role, and I am very glad my previous colleagues and speakers mentioned this already, because these things really go hand in hand. What we know for a fact from various longitudinal studies is that some of the data we see really does change over time, and this is something we need to think about when we hear all these alarming books and studies. And yes, I am pointing a bit at the opening speech and at Haidt. Honestly, in the scientific community we do have some problems with his framing of things, and I am more than happy to post in the chat later some of the, let's say, critiques and reviews of what he says, because correlation is not causality, and this is something we see a lot in his work. But enough of Haidt; this just highlights what I want to say. One thing is that parents do play an important role in enhancing digital literacy, but this mainly works for younger children. As children grow older, and especially for older adolescents, let's say from 15 years on, they rely more on their peers for support and also for some lifestyle choices when it comes to accessing various content online. So actually, in some of the longitudinal studies we did, where parents were trying to, as we say, mediate, so pretty much telling their children how to behave online, what to access, what not to access, and what types of risks they can encounter, we could see that even though they try and really do as much as they can as parents, it does not have much effect on whether the children can actually mitigate the risks or whether their digital literacy levels are enhanced. There are other aspects in place that play a more important role than parents, especially in this age group. But what really, really matters is the ethical compass and the values we pass on to our family members, our friends, and generally the kids and young people around us. Because we can see, and this is something that was not studied as much before, that when people, and here I mean children, encounter harmful online content, they have different outcomes from such encounters. Some even search for it intentionally, and there can be various reasons for that. In the past, we did not differentiate by intentionality: whether it is voluntary or not, whether it is something that just pops up and the kid is simply surprised. Of course, in that case it has negative consequences for the well-being of the child and so forth. However, there can be various reasons why a child may want to search for specific content. One example I had from in-depth interviews with young people: they told me, oh, I was bullied in school, and I wanted to understand more about why this was happening to me. So I was Googling cyberbullying; I was even Googling topics of self-harm, because this is something that came to my mind, and I just wanted to understand, or maybe I wanted to find a community of people who could understand my situation, so we could discuss these things. So there are so many layers behind how kids interact and what they want to see. And maybe at first glance we would just categorize things as risky or beneficial, but it is not so black and white.
So what really, really matters is what values we share with them: whether they have us, the older people, parents, older siblings; whether they trust us; whether they can talk to us whenever they encounter something that frustrates them or makes them scared; and whether they really know how to cope with these things. And all the aspects I am mentioning can be enhanced or learned with the help of digital literacy. So it really works in a circle. I am trying to explain it here, but please bear with me, it is complicated, and English is not my first language, so please let me know if this does not make much sense. What we really see is that we can do as much as we want with regulation, but we have to keep in mind that children, or any end users, are not just passive people who enter the online environment, have something happen to them, and all react the same way with the same outcomes. That is really not the case. So I would say the alternative way is more like a toolbox of approaches. One part can come from us, from the family and the surroundings of the child. Another would definitely be what Paulo already said: all those digital literacy programs, which do not have to come from schools; they can also come from institutions and organizations that care about this, libraries and so forth. So I will stop here, and thank you; I hope I did not take too much of your time.
Regina Filipová Fuchsová: Thank you very much, very interesting, also with the reference to concrete research and statements of children. It was the aim of this session to involve different stakeholders, including youth participants, to see the different perspectives. We have a few more minutes left. We went through the introductory statements highlighting the shortcomings of age verification systems, we discussed quite extensively what risks are associated with them, and then we switched to at least outlining some of the ways forward. It looks like we, and I mean us in a broad sense, cannot push away the responsibility for the safety of the online environment for children: we cannot shift it fully onto technology and technical companies, and we cannot shift it away from ourselves as parents. It is not easy. What seems to be the way forward is rather a multi-layered approach, which includes education hand in hand with technological safeguards and social interventions, so there is a lot of work and discussion to do. And actually, this session was meant as a warm-up discussion, a working session within one of the dynamic coalitions, and there is a main-stage session on age verification later this week, I think on Wednesday, where everybody is very much invited to continue the discussion. Before we conclude, I wanted to ask our panelists: out of what was said and heard here, what is the message you would like the audience to take with them? It could be the main challenge, let's say, or the main aspect of the way forward. What would you like everyone to keep in mind when leaving the room? Who wants to start? Niels, you look like you are ready.
Niels Zagema: I wasn’t ready, but it was my fault, I was smiling. What I would like as an ending note is that I think age verification, as we’ve discussed, pretty much we’ve agreed that a lot of the downsides, but it’s dependent on transparency and the trust in the system itself. So what I would like to leave is that we should not see it as a silver bullet, but also put more focus on the broader system, and also I leave to everyone here in this room, when I hear about age verification, especially for example in news or in politics, people talk beside each other, people are miscommunicating about it because they have different goals in mind. So really be sharp in putting, that society knows what we’re talking about, and also that it can be a public discussion, and I think that the internet should be a place of high trust.
Regina Filipová Fuchsová: Thank you very much. Tanja?
Tatiana Tropina: I think my main message would be that we cannot make children, minors, or young adults more secure online by creating huge security risks and excluding other groups of people who also benefit from an open, globally connected, and secure Internet. We should never ever make it a trade-off. We should never frame it as a trade-off. Everybody should be safe online. Everybody should be secure online. Nobody should be excluded from access to the Internet. Thank you.
Regina Filipová Fuchsová: Thank you very much. Natálie?
Natálie Terčová: I would say: don't underestimate children. They will find ways to work around regulations and restrictions. They are not just passive consumers; technologies and the Internet are just tools, and it is up to us, and up to the children, how they will use them. And they have the full right to benefit from technologies and the Internet as much as we do. There are definitely risks offline, but there are also risks online. So let's be there for them, be guides for them, and do our best so that they can minimize the risks they encounter in their lives while still making the most of the benefits that technologies offer them.
Regina Filipová Fuchsová: Thank you very much. This is definitely also a very important aspect to take into account. We are almost at the end of our session time, so if there are one or two comments from the audience or from online, we can still take them up. If not, then thanks a lot to our speakers; it was very interesting. Thank you very much. And let's take this as an invitation to discuss age verification and the connected issues, not only in the working groups and dynamic coalitions connected with this topic, but also later this week here at EuroDIG. Thank you very much.