Irena Guidikova – Keynote 03 24
18 June 2024 | 10:00 EEST | Auditorium | – Consolidated programme 2024 / Keynote
18 June 2024 | 12:15 - 13:15 EEST | Workshop 1b | WS room 1 | – Consolidated programme 2024 / Workshop 1b
18 June 2024 | 15:00 - 16:00 EEST and 16:45 - 17:45 EEST | Workshop 2a and 2b | WS room 1 | – Consolidated programme 2024 / Workshop 2
Irena Guidikova
Head of Democratic Institutions and Freedoms Department, Council of Europe
A graduate in Political Science and Political Philosophy from the Universities of Sofia (BG) and York (UK), Irena Guidikova has been working at the Council of Europe since 1994. She has worked in fields such as youth research, cultural policy, democratic innovation, anti-discrimination and migrant inclusion, children’s rights, and sport values. She is currently Head of the Democratic Institutions and Freedoms Department.
Video recordings
Keynote:
https://www.youtube.com/watch?v=O3jOyWrvo9M&t=945s
Workshop 1b:
https://youtu.be/AHRftjlm_Zs
Workshop 2a:
https://youtu.be/BtyjA6zVC10
Workshop 2b:
https://youtu.be/REbbY6-ehoM
Transcript
Keynote:
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Transcripts and more session details were provided by the Geneva Internet Platform
Irena Guidikova:
Thank you. You said it perfectly well. Good morning, everyone. It’s a privilege to be here in Vilnius, which is not only, in my personal opinion, one of the most beautiful capitals of Europe, but also the hub of a vibrant deep tech startup community, as far as I understand. So it’s really a very appropriate place for this forum.

The focus of the debate on internet governance in the digital environment has shifted considerably since the early days of internet governance exchanges. I think most of you are too young to remember the Geneva Forum, the UN forum where the WSIS was born, and the Tunis Commitment 20 years ago. The Tunis Agenda for the Information Society in 2005, which launched the global internet governance processes, focused on the development of capacity, access and equitable infrastructure investment, on sustainability, and on inclusive governance. These were all the promises of the emerging internet at the time. Unfortunately, 20 years later, the focus of the debate has shifted considerably towards other topics, for instance online harms. We already heard Mrs. Ramos-Kaitis speak about the scale of democratic backsliding, which is fueled partly by disinformation. But that’s not the only worrying trend in the digital environment, and I have some more shocking figures to share with you. According to UN data, two in five women experience online sexual harassment. 96% of deepfakes are non-consensual pornography targeting women. 59% of teens report being harassed or bullied online. And nearly 50% of LGBTI+ students experience cyberbullying in a given year. The age group most targeted by cybercriminals is people aged 60 and above. These are all UN data.

As we speak, the digital world is flooded with harmful content and behavior, and bear with me for the doom list: child pornography, hate speech, gender-based violence and stereotypes, identity and intellectual property theft, deepfakes, disinformation, terrorist recruitment, illegal gambling, promotion of self-harm, and much more. Yesterday, in another session, we were discussing the challenges of the metaverse and the opportunities it promises, but also the immense potential for harm, including so-called meta-crime, in the absence of proper governance and law enforcement.

Now, the Council of Europe that I represent here, and its member states, are legitimately worried about all these developments. The Council of Europe is one of the organizations spearheading regulation on online safety through its own instruments, international treaties, in the first place the AI Framework Convention, which was already mentioned, but also through its monitoring mechanisms that follow up the implementation of the treaties, its soft standards, and its research and field work. So it’s really a comprehensive mechanism for supporting member states in addressing the positive, but also the negative, features of the digital environment. All of this is creating a distinctly European, human rights-based approach to digital governance, which is anchored in the European Convention on Human Rights and the case law of the European Court of Human Rights. That approach has, I must say, four distinct building blocks. The first of them is cross-sectoral cooperation and dialogue, and this forum, EuroDIG, but also the IGF and others, are really examples of the kind of partnerships that are key to unlocking the positive potential and building the defenses against online harm.
Multi-stakeholder cooperation is one of the fundamental principles for human rights-based rulemaking on digital technologies, media and communication governance. We in the Council of Europe are therefore really keen to implement it in our own work: although we of course work primarily with governments, we engage very much with civil society, academia and business, including through a digital partnership that we created a few years ago and which includes some of the main digital companies worldwide.

The second building block of the European approach to digital governance promotes robust and human rights-compliant legal frameworks to address the most serious forms of harmful online content. These legal frameworks enable policymakers to balance the defense mechanisms and the criminalization of the most serious offenses with the protection of other rights, so it’s always a very fine balance to strike. Instruments such as our Cybercrime Convention, or Budapest Convention, the Convention on the Prevention of Terrorism, the Istanbul Convention, which I’m sure you’ve heard about, on preventing and combating violence against women and domestic violence, as well as the so-called Lanzarote Convention on the Protection of Children against Sexual Exploitation and Sexual Abuse, are our key instruments for balancing state policies so that we protect societies and human rights while infringing as little as possible on the human rights of individuals. Ultimately, it’s the Court of Human Rights that effectively judges whether that fine balance has been found, so it’s very important also to follow the case law of the court.

I’ll give you an example. These instruments are of course treaty-based and fixed, but they also evolve, through the interpretation work of their follow-up committees as well as through additional lawmaking, for instance additional protocols. The Cybercrime Convention has two additional protocols. One of them, the second one, is really important because it enables states cooperating within that international treaty to ensure that when harmful or illegal content is taken down, the evidence needed for prosecution is not lost. So I really encourage all those present here who come from public authorities to ensure that the protocol is ratified. You will hear an additional presentation on the Budapest Convention later today.

Another example is the Lanzarote Convention on the Protection of Children from Sexual Exploitation and Sexual Abuse. You know that today there is a really worrying trend of child-generated sexual content: children, unfortunately, take images or videos of themselves and share them among peers, but these often end up within criminal networks, pedophile networks. The Lanzarote Committee’s approach is to ensure that the grooming of children and the solicitation of self-generated pornographic content from children are criminalized, but that the children themselves are not subject to criminal prosecution for having created that content. And it’s really important that the evolution of EU legislation also aligns with the Lanzarote Convention’s recommendations, because in Europe we need a single legal space that is harmonious and compliant with the same principles.

The third building block of the European approach to preventing online harm is future-proof legal and policy frameworks.
In the early days of the internet, self-regulation was the go-to approach, I guess partly because the field was very much business-driven: the assumption was that companies would self-regulate and users would self-police. A few years on, we saw that self-regulation is not really working. So the most recent Council of Europe documents and standards increasingly point to the necessity of developing co-regulatory frameworks to address, in a transparent manner, the risks posed by platforms; of imposing on platforms and companies a duty of care that is proportionate, of course, to their size and reach, because we are not going to have a one-size-fits-all approach; and of prioritizing safety by design, especially with regard to children, so that content that is not age-appropriate is locked away behind firewalls and age verification is enforced. This is essential, but there is much more to the safety by design and privacy by default that platforms should respect. And I’m happy that some states have already enacted legislation to that effect, effectively regulating how companies have to ensure human rights.

You already heard about the AI Convention, which was adopted in May. It’s really the Council of Europe’s star instrument at the moment. I really hope that it will very soon be signed by many state parties and eventually ratified, so that it can actually be implemented. It will also be complemented by sectoral initiatives, because it’s a framework convention: it states basic principles about human rights and about risk assessment and mitigation through the lifecycle of AI, but we need to go much deeper into specific areas of life. At the moment we have a new committee on AI, discrimination and equality, which will draft a soft standard on the potential and risks of AI for equality, and similar committees are working on AI and democracy, and on AI and artistic creativity. So there is a proliferation of very specific instruments in the Council of Europe regarding AI. The Lanzarote Committee, for instance, is working on AI-generated child sexual abuse material, because that is also an ongoing concern: there are more and more harmful sexualizing images of children that are entirely generated by AI, and the Lanzarote Committee is calling for the criminalization of that content too.

Finally, the fourth building block concerns what is happening offline, because the online environment is clearly connected to real life and also reflects what is happening there. The denizens of the internet are also, essentially, citizens. So we really need to work across the tech community, but also with educators, young people and NGOs, to make sure that those who are online behave as responsible citizens and that there is a culture of respect and responsibility. This goes through education and through grassroots work with communities, especially the most marginalized people, who are the most easily affected by disinformation but also by the different criminal activities happening online, elderly people as well. It’s everybody’s responsibility, in a way, to protect themselves and protect others, but the existence of robust policy frameworks to ensure the structural conditions is, of course, essential. So I thank you very much for listening and for being here. It’s a really important dialogue that happens year by year at EuroDIG.
The Council of Europe is a traditional, longstanding partner of EuroDIG, and this is a forum that brings ideas, knowledge and issues into our daily policymaking work. So thank you very much for contributing, and I look forward to exchanging with you later on.