IGF Youth Track: AI empowering education through dialogue to implementation – Follow-up to the AI Action Summit declaration from youth – Pre 08 2025
12 May 2025 | 11:00 - 12:15 CEST | Room 7
Consolidated programme 2025
Session description
During the 2025 AI Action Summit in Paris, at the invitation of the French Digital Ambassador, the IGF Youth Track co-organizing group developed its position on how AI can empower education, as outlined in their joint declaration. The declaration focuses on how a multistakeholder community can harness the potential of AI-related technologies to enhance the quality of education and achieve SDG 4. It also builds on the outcomes of the IGF 2024 Global Youth Summit, hosted in Riyadh, Saudi Arabia, and centers the discussion on leveraging the current momentum with the implementation of the Global Digital Compact, progress on the UN Agenda for Sustainable Development, and preparations for the 20-year review of the World Summit on the Information Society. During the AI Action Summit in Paris on 11 February 2025, the declaration was discussed among multistakeholder leaders, including both current and next-generation experts. This workshop, hosted at EuroDIG, will provide an opportunity for youth from around the world to engage in a dialogue with current multistakeholder leadership to discuss how the declaration’s objectives and ideas can be translated into practice. Through discussions with senior stakeholders, a network of Youth IGF coordinators will explore how AI can advance education. This includes action by the 193 Governments, such as modernizing curricula, strengthening international cooperation, and developing AI-driven educational frameworks that promote lifelong learning and adaptability in an evolving technological landscape. The workshop will also examine the role of all stakeholders – including Governments, the private sector, civil society, academia, and the technical community – in ensuring that the design, deployment, and utilization of AI support the creation of localized educational content that aligns with diverse cultural and contextual needs while fostering global knowledge-sharing and collaboration.
Format
Interactive roundtable exchange between youth from around the world and senior experts
Further reading
People
Focal Points:
- Anja Gengo, UN IGF Secretariat
- Nadia Tjahja, YOUthDIG Coordinator
Key participants:
- Mr. Chengetai Masango, IGF Secretariat
- Mr. Pap Ndiaye, Ambassador and Permanent Representative of France to the Council of Europe (on site)
- Mr. Anton Aschwanden, Head of Government Affairs & Public Policy, Google (on site)
- Ms. Laila Lorenzon, youth from Brazil, YOUthDIG 2024, ISOC Ambassador 2025 (on site)
- Mr. Ben Mischeck, youth from Germany, YOUthDIG 2025 (on site)
Moderator:
- Ms. Dorijn Boogaard, Netherlands IGF Coordinator (on site)
Rapporteur:
- Saba Tiku, Ethiopia Youth IGF Coordinator (online)
- Afi Edoh (online)
- IGF Secretariat
Transcript
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Moderator: IGF Youth Track, the AI Empowering Education Through Dialogue to Implementation. It’s a follow-up to the AI Action Summit Declaration of the Youth. I’m Ramon from the YOUthDIG team. I will be your remote moderator in this session. And as I already mentioned in the previous session, you can find all information about this session, about the speakers, and about EuroDIG in general on the EuroDIG wiki. I will share the link again in the chat. We encourage you to raise your hand if you have any questions or if you would like to present something. But if you do ask a question in the Zoom chat, we would like to ask you to write a “Q” in front of your question so that we can then address it to the room. Now, let me shortly share the session rules. If you are entering through a Zoom session, please enter the session with your full name. If you ask a question, again, you can raise your hand using the Zoom function. You will then be unmuted, and the floor will be given to you. When you speak, please switch on your video, state your name, state your affiliation, and do not share any links to the Zoom meeting, not even with your colleagues. Thanks so much. Now I will be handing over to Dorijn, who is our moderator for this session.
Dorijn Boogaard: Thanks. Thank you very much, Ramon. And thank you all for joining this session. Welcome all here today, people here in the room, but also online, of course. I’m very happy to be able to moderate this session today with a wonderful panel sitting next to me. I’ll start on my left. We have Ben Mischeck. He is representing youth from Germany and is joining YOUthDIG this year. On my right, we have Laila, Laila Lorenzon. She is representing youth from Brazil, and she joined YOUthDIG last year, but is now also an ISOC ambassador in 2025. Next, there is Anton Aschwanden, if I pronounce it correctly. He is the Head of Government Affairs and Public Policy at Google. And next to him, we have Mr. Chengetai Masango from the IGF Secretariat. And next to him, we have Mr. Pap Ndiaye. He’s the Ambassador and Permanent Representative of France to the Council of Europe. Welcome all. I’m very happy that you’re all here and I would like to welcome Mr. Chengetai to start off this session. Go ahead.
Chengetai Masango: The IGF Secretariat, along with the Secretary-General-appointed Multistakeholder Advisory Group and the Leadership Panel, all agree that developing youth capacity is essential. It is vital for the sustainability of digital governance processes and is also a core part of our mandate. The youth track is designed to create meaningful opportunities for the current generation of leaders and experts to engage directly with the next generation. I want to sincerely thank young people from across the globe who continue to participate actively in IGF processes. I also thank senior leaders across sectors and regions for recognizing the importance of listening to youth, engaging with them, and responding to their concerns and ideas. This year’s youth track is particularly significant, as it coincides with the 20-year review of the World Summit on the Information Society (WSIS). I urge all of you, especially young participants, to speak up and contribute throughout this process, as we are in a year of decision-making and change, which also brings opportunity. The thematic focus and structure of the youth track were developed through a bottom-up approach. We work in close collaboration with youth IGFs and international youth initiatives, like the Internet Society’s Ambassadors Program, the Youth IGF Movement, and the Youth Coalition on Internet Governance. The host country also plays a key role in shaping the program, and I thank the Government of Norway for its support and contributions as the 2025 IGF host country. Throughout the year, the youth track features intergenerational dialogues among youth and senior leadership. Its implementation is supported by various partners, including the AI Action Summit in Paris, as well as the regional IGFs. Today’s session at EuroDIG is the first of four workshops following the outputs from last year’s IGF in Riyadh, and the youth declaration presented at the AI Action Summit in February.
With others coming up at the African IGF in Tanzania, the Asia-Pacific IGF in Nepal, and the Latin American IGF. The IGF 2025 Global Youth Summit will take place during the 20th annual IGF meeting in Lillestrøm, Norway. Across these workshops and summits, we will focus on pressing issues such as AI for education and social media regulation. Youth are also encouraged to engage in other components of the IGF program, including the intersessional work. Everyone here is welcome to be part of the youth track. My colleague, Anja, from the Secretariat, is online, as I mentioned, and available to answer any questions you may have. Her contact details are also shared in the Zoom chat. Thank you, and I look forward to the discussions ahead.
Dorijn Boogaard: Thank you very much. And as this session is a follow-up to the AI Action Summit Declaration from the youth session in Paris, we actually have Laila here, who also joined the session in Paris. So I’m very curious, how do you look back at that session, and what are the critical points that you bring with you from that session?
Laila Lorenzon: Yes. Thank you for the question. And yes, I was there in February, and it was a very interesting discussion. We had a roundtable with, on one side, youth representatives from Asia, Africa, Europe, and Latin America, and on the other side, senior representatives who work in government and the private sector. And it was very nice to see how the perspective of youth was actually heard. One of the key points that I think is interesting to bring here, and it’s also in the declaration that was shared at the event, is the power of AI to facilitate and improve digital education, especially in constrained and crisis settings where physical education is not that accessible. So a very interesting point is the power of AI to facilitate offline learning, and also remote learning, especially for communities that are hard to reach in terms of internet infrastructure, and also in diverse crisis settings and conflict zones. So it’s really important to understand that AI can be used as a facilitator to bridge the digital divide, and not as something that can even deepen it. That was a point that was raised. And we also had a translator from ISOC France, and he raised a very interesting point as well on the use of generative AI and voice assistants for improved learning for persons with disabilities, which is also something very important to look at and consider when we are drafting or idealizing AI applications for education.
I think that the main takeaway that we all agreed on and discussed is the importance of equally involving students and teachers in the process of deploying and co-designing any AI application, to really try to break this barrier where some teachers may feel that AI is there to replace them or to make the process of learning harder. It’s very important that they feel included in the conversation so they can also understand that AI is a tool that can help facilitate: it can make classes more interesting and engaging, and it can help them connect with their students. So I think that was one of the key aspects and main takeaways: the importance of uniting not only youth and students, which is very important, but also teachers and school representatives in this important process. Thank you.
Dorijn Boogaard: Thank you very much. I have read the declaration and there were three main points that I just wanted to repeat here today. So the first recommendation was to advance AI-driven education with a lot of sub-recommendations, of course. The second was to strengthen multi-stakeholder cooperation on AI and education. And the third one was to foster intergenerational dialogue for sustainable digital development. Coming from that, I would like to start with the first question from Mr. Pap Ndiaye. So I would like to ask you, from your perspective, what infrastructure investments and policy frameworks are necessary to ensure AI-driven education is accessible, inclusive, resilient, and particularly in underserved communities?
Pap Ndiaye: Thank you. Good morning. Thank you for your question. I’m very happy to be in this most interesting workshop. As France’s permanent representative to the Council of Europe, I welcome the organization of such an event, which constitutes an essential platform for European dialogue and international internet governance. Such a gathering is all the more essential given the current international situation. Artificial intelligence, a cross-border resource which is used by the greatest number of people, but whose production is held by a handful of key players, requires an unprecedented international governance effort to ensure that it serves the public interest. In terms of education, the international governance of AI must address several major issues. How can AI be used to benefit education and access to education? How can the risks of AI for education be prevented? How can young people be trained in AI? So the first condition for putting AI at the service of education and preventing its harmful effects among the youth is coherent and inclusive international multi-stakeholder coordination. The Paris AI Action Summit, which focused in particular on identifying the needs and fields of action for international AI governance, highlighted several priorities to be implemented. First, the need for a multi-stakeholder approach. Understanding the educational challenges of AI cannot be achieved without the representation of all education stakeholders. This includes governments, companies and international organizations, of course, but also and above all teachers, parent associations, and youth representatives. International events such as today’s EuroDIG or the Internet Governance Forum are key arenas for an inclusive and diversified dialogue between governments, business and civil society. The representation of young people in the design of educational AI governance is an essential criterion for measuring the effectiveness of initiatives.
I would like to salute the IGF youth initiatives for their growing commitment. Second, the need to include all countries, especially those where access to education remains limited. In this respect, the role of the United Nations is central in taking into account the voices of countries still isolated or partly isolated from AI, and for whom digital transformation is a major driver of development. Already in 2021, UNESCO published its Recommendation on the Ethics of AI, placing equitable access to AI in education and the development of digital literacy at the heart of its agenda, as well as the need to protect students’ data. Adopted in 2024, the Global Digital Compact aims to develop innovative voluntary financing options for artificial intelligence capacity building. Other initiatives such as the Global Partnership on AI at the OECD aim to promote access to educational technologies in developing countries, but are still largely limited to developed economies. The inclusion of developing countries in international governance initiatives is a prerequisite for the spread of AI as an educational tool. The next AI Impact Summit, to be held in India in February of 2026, will devote a significant part of the international discussions to developing countries. It will be a unique occasion to look closer at the educational outcomes of AI, digital access and literacy, and the opportunities for emerging markets to build their own public interest-oriented AI. Third, the affirmation of major AI principles. This is the ambition of the Council of Europe Framework Convention on AI, the first legally binding international instrument in this field. It aims to ensure that activities carried out as part of the life cycle of artificial intelligence systems are fully compatible with human rights, democracy, and the rule of law, while being conducive to technological progress and innovation.
Last, edtech is a highly promising emerging sector serving to improve access to education and digital tools as well as adapting AI to local cultural particularities. Investments in this sector are increasingly possible thanks to the emergence of mechanisms dedicated to financing innovative startups from emerging countries. The announcement of the Current AI foundation at the Paris Summit, with an initial fund of 420 million euros, serves precisely this purpose and aims to finance any project likely to serve the general interest, thanks to the support of 15 countries and a funding target of 2.5 billion euros by 2030. Discussions on the financing of AI for education in developing countries must take into account the real needs of these countries and be conducted in close cooperation with them. Structures such as the G20 are effective arenas for an open dialogue and the understanding of developing countries’ needs. This year, the South African G20 presidency is placing particular emphasis on reducing inequalities in access to the digital economy by supporting a partnership-based approach to development aid that fosters the emergence of sovereign digital ecosystems, particularly in Africa. Mechanisms such as the Innovation and Development Fund, whose first results are expected in the next few months, are interesting relays for the development of educational AI, but are currently threatened by the withdrawal of US funding from the multilateral scene, including from agencies such as USAID. The possibility of substituting American funds needs to be discussed by our European partners as part of a more global reflection on how the various initiatives fit together in the light of the emergence of new funding mechanisms dedicated to AI, such as Current AI.
Dorijn Boogaard: Thank you very much and some very interesting and important points you are making there. I think we will get back to that in the Q&A. Moving on, you already mentioned the importance of the multi-stakeholder model and in your previous reflection on the session in Paris, you also mentioned the importance of including teachers. So this is the question, what strategies can be employed to strengthen multi-stakeholder cooperation in AI-driven education, ensuring active participation from youth, educators, policymakers and technology developers? Laila.
Laila Lorenzon: Thank you for the question and thank you again for the opportunity to be here. So I think this question is very important because we often talk a lot about the multistakeholder model. We hear this term a lot in internet governance related events, but I think it’s important to understand that cooperation doesn’t happen by default; it’s something that needs to be built intentionally, with resources, attention and care. And I think it’s very important to understand that youth are currently the most affected by AI and the internet in general, but we need to ask ourselves who gets to shape these tools and how they are impacting the ones who use them most. And especially with regard to education, how can we ensure that AI enhances rather than replaces the human connection that is so important in the learning process and in the social development of students as well? So to answer more of your question, I think that to ensure a truly multistakeholder process, it’s important to make sure that everyone has a seat at the table, but in addition to that, that they also have the tools and the knowledge to contribute to the topic. So in that sense, I think that capacity building is key to make sure that everyone starts off in the same place and has all the technical knowledge needed to allow them to actively engage and shape the discussions on AI in education. And I’d like to bring here an example from Brazil, my home country, of a 2022 project called AI in the Classroom, developed by Data Privacy Brazil, a national NGO working with digital rights. And the goal of the project was precisely to engage students, teachers, and school leaders to make shared decisions on how to use AI in the classroom.
And something that really stood out for me, in addition to all the technical robustness of the research, was something that the researchers called sensitive listening, which is basically creating a safe space where participants can feel genuinely heard and not judged, and able to express concerns, even in non-verbal ways. So I thought that this idea was very interesting, because what ensures meaningful participation is trust that everyone is being equally heard. And during last year’s YOUthDIG, in which I participated, we talked a lot about how to recognize when youth participation is merely symbolic, when it’s used as decoration or a form of tokenism, and how we can move beyond that and towards meaningful and genuine engagement. And I think that really connects to how to ensure a multistakeholder approach in which youth are not only meaningfully heard, but also equipped with the knowledge on complex topics such as AI, so they are able to contribute to shaping the discussion. And that can be either through digital literacy engagements before opening the discussion on AI in education, or capacity building activities and even critical thinking activities, to ensure that the participation is not symbolic, but rather transformative. And something that also happens too often is that youth are only consulted at the end of a process, so they only have a say when most things are already decided. And I think that in order to change that, it would be very useful to have structured and recurring spaces for co-creation of international AI strategies. That can be through youth assemblies or councils, which luckily we have been seeing take place more and more all over the world, but also by building into national AI education strategies the requirement that the development or design of any tool includes consultations with youth, students, and teachers from the beginning, so they’re all able to participate as equal partners.
And just to conclude, I’d like to highlight as well that I believe that investment in open, adaptable infrastructure that respects local context, language, and traditions is something that cannot be… So, it’s important that all of these frameworks are open for accountability, and also thought through and designed with the local context and cultural diversities in mind, as well as the diverse needs of youth, including youth with disabilities, and all the specific languages and contexts that need to be taken into account. Because otherwise, I think we risk deepening the very inequalities that we want to solve if we don’t ensure that this approach is meaningful and includes the participation of youth and school representatives from the beginning. Thank you so much.
Dorijn Boogaard: Thank you very much, Laila. Coming from that, multi-stakeholder participation is of course very important, but you also mentioned something about the design and the technologies. So, moving on from that, we are going to Mr. Anton Aschwanden from Google. So, how can we ensure that the AI tools used in education and services are safe, secure, and accessible, especially for students in developing countries?
Anton Aschwanden: Thank you so much. It’s working. Good. Perfect. So, yeah, thank you for the invitation. I’m Anton, working with Google, running our public policy in Switzerland and Austria. I have participated in many national IGFs and am now happy to be in the European one, working with international organizations based here in Europe. So, yeah, I mean, if we’re asking the question about responsibility and accessibility, especially in developing countries, before talking about AI, we need to acknowledge that we’re already facing a digital divide. According to the latest statistics of the ITU, we’re at 2.4 billion people still offline, and I think the real challenge now is that this digital divide does not become an AI divide; we cannot afford that. And I think in order to tackle this challenge, it’s really key that everyone comes together: the private sector, public sector, obviously civil society, technical experts, in this case educators, teachers, students. And I’m really happy to be here because, as it has been said at the very beginning, I think it’s a crucial year for multi-stakeholder governance. We’re just six weeks away, if I calculate correctly, from the global IGF in Norway. We’re gonna have WSIS+20 in Geneva at the beginning of July, and really a big thank you to all of you for being engaged and showing up. I can assure you that Google will do so as well, so hopefully we will see some of you in Norway, in Oslo and in Geneva. So what are we doing as Google to help tackle this challenge? I think it’s three things: investing in digital infrastructure; second, investing in people; and third, using AI smartly to tackle global challenges. Let me quickly go through what this means in the field of education.
So when we’re talking about digital infrastructure first, and I mentioned the 2.4 billion still offline, I think it’s key that investments do not only happen in the global north but that we’re really thinking about investments globally. We’re doing so, my employer, by really investing across the globe. As illustrations: fiber optic cables connecting so-far unconnected places. I’m thinking of fiber cables between Latin America and Africa, from Africa directly to Asia Pacific without going over Europe, and then really remote parts of the Pacific as well. And when we’re thinking about infrastructure in the education field, that also means investing in places where people can meet. I’m thinking of the local investments we did in specific AI hubs, in startup campuses, and then also in training hubs in all the different countries where we’re having such activities. And thinking about infrastructure also means thinking about how you open up technology, so that it’s not only closed models; think also, at Google, of Android or the open Gemma AI models that are open to developers and researchers. And then the second one is really investing in people, and I think we’re doubling down on our efforts there. I’m not gonna do a publicity spot; you can use your favorite search engine and type Google AI skilling. But just to tell you, this is really one of the biggest priorities we’re having right now. So there are the AI skilling certificates; we have a whole menu. If you’re more interested, I’m happy to share that. But I’m doing it myself. I somehow still consider myself young, but I’m already a bit older, and I force myself to do those AI essentials classes. You can sign up on Coursera for prompting essentials, and I think this is really key. And what we’re doing as well is pushing new ideas through Google.org, which is our philanthropic arm, to really have this AI opportunity for everyone.
And key is that this is not only happening in a few countries; those are global programs. And then the third pillar, I think, is really how to use AI smartly to tackle global challenges. And if we’re really honest, we’re probably going to dramatically fail with the SDGs. We’re so far behind, depending on the statistics, at 17 or 18%. So the clock is really ticking, and the question is: can AI help to accelerate progress towards those goals? I’m personally optimistic in many fields. We’re talking about SDG 4 now, quality education, but if you look at better health, for instance, and at breakthroughs in science like AlphaFold and drug discovery, this makes me really optimistic that hopefully new drugs will be discovered. And the same applies to SDG 4, quality education. If you look at how AI-powered tools can help transmit education and knowledge, personalize learning, improve the whole software, and give access to new languages, including, as it was said, the less spoken languages, this makes me really optimistic. Again, I’m not gonna do the Google publicity spot. I have documents with me. There are some great illustrations; I’m thinking of Read Along, Quill, and the 1,000 Languages Initiative. So some great illustrations of where we can really use AI for good and expand educational access in native languages, for example. And especially, I think, what is key is that the technology can provide support, especially in regions where the ratio between teachers and pupils is perhaps not as good as in some more developed countries. So yeah, I’ll leave it here as a starter, and then hopefully we will start with the discussion. Thank you so much.
Dorijn Boogaard: Yes, definitely, thank you so much, and we will get back to that, of course. So on to the final question. We’ve heard it quite a few times that it’s important to include young people in this discussion. So Ben, how can intergenerational dialogue and digital commons be leveraged to foster sustainable AI governance and lifelong learning in an evolving technological landscape?
Ben Mischeck: Okay, maybe just to mention before we start: because we talked so much about youth participation, I just think how great of a chance it is for Laila and me to speak here on behalf of young people, to actually try to bring our points across and integrate our perspectives, because we’ve heard it’s very important, right? And the question you just asked, I mean, it’s a quite big question. Maybe focusing on intergenerational dialogue first, and bringing a bit more of a practical perspective: I experienced the introduction of AI into students’ everyday life over the past years, and from a very practical point of view, what I experienced, especially in the beginning, was very focused on what is written by AI and what is written by a student. I do see change now, which is very welcome, but in the beginning it was really about comparing and finding out: is this AI-generated, or did the student himself or herself do the work? And for me, this approach to speaking about AI in education is very critical, from various points of view. First of all, we do know that detecting AI-generated text is quite difficult. It might be biased. There are technical issues here. But what is even more important for me is that it does create mistrust between students and AI. And it also shows that students and teachers were not working or talking in a collaborative way, but kind of against each other. And why I want to raise this point is, as we mentioned the multi-stakeholder approach many times, I feel this is a very good example of what we currently are lacking, or where there’s a gap between the generational views of how AI is impacting education. And I really want to emphasize that we should work together on ways to integrate AI.
So it is not about being substituted by AI, because otherwise students will find ways to use AI for what is considered cheating, and teachers will try to work against them, and that is not what we imagine our education to be. We want teachers and students to work together, and I think it is really important to find ways to integrate AI to enhance the learning experience, for example through AI tutoring systems, as already mentioned. I think they have many big advantages. Of course, they also bring some risks that need to be mitigated, but from several points of view, I think AI tutoring systems, also as a digital common, really can support lifelong learning. First of all, they reduce barriers to AI, assuming we have the infrastructure to actually access those tools: they decrease financial barriers, they decrease geographical barriers, and for adults, who are supposed to be part of the lifelong learning journey, they can also decrease mental barriers. I feel that the older someone gets, the higher the barrier to actually learn and interact with something very new. If you have a safe space online where you can interact with an AI system, it really helps you to get in touch with new topics and to develop new skills, and I think that is what the lifelong learning journey is really about. With the digital commons being developed right now, we see many exciting ways to think of new ways of learning and to develop new skills. Thanks.
Dorijn Boogaard: Thank you very much. Yeah, it’s working. So this was the panel, and now we’re going into the Q&A, so I hope you have a lot of questions in mind. But before we do that, I wanted to welcome you to the Mentimeter, which I’m going to share right now. This room seems pretty young, but I would like to get a view of what kind of ages we have in the room, so please join the Mentimeter with the code 42171593 and submit your age. Should we do it as well? Yeah, the panel can join, of course; it’s an intergenerational dialogue. It’s 4217-1593. It seems like we have a lot of young people in the room. Ah, there they come. Okay, we have quite a lot of people in the Mentimeter, so I’m going to put up the first statement. I’m curious what you think, and we will see whether the different age groups think differently or the same about this statement. So, do you agree or disagree with the statement that AI will improve the quality of education? It’s quite similar. We see young people that disagree. Do we have a young person in the room or online who voted for disagree? Yes, someone all the way in the back. Would you like to share why you voted for disagree?
Audience: Can you hear me? Okay. Hi, I’m Brahim Balla, intern at ACL here in Strasbourg. I don’t quite disagree, like mine is not a complete opinion, but I think that considering the current situation, we have in many countries, I think, and in many situations, an educational system that still is not ready to get along with the improvements which AI might be able to bring. So if we won’t be able to face this challenge, I think that AI will bring more harm than improvement within the educational system. So I think it’s not the AI itself that will bring the improvement, but our strength and our capability to transform it into something useful for the educational system in itself. Thank you.
Dorijn Boogaard: Thank you very much. Is there someone on the panel who would like to respond to this comment?
Anton Aschwanden: Yeah, happy to do so, because my question also had this component about the responsible use of AI, and I think it is absolutely right to be critical to a certain extent. It probably doesn’t surprise you that, working at Google, I am a tech optimist, so I see the benefits, but of course we need to be aware of all the complexities and risks, and then really think about how to develop such a technology across the whole life cycle: design, testing, deployment, having the safeguards in place. This is obviously a big, big topic for us at Google. I am based in one of the largest engineering offices, the Google Zurich office, and our security teams, for example, work on red teaming efforts: how you can trick the systems, and how you make sure that this does not happen. That is obviously a key component, that we are having those debates, and really also the feedback mechanisms, and all the testing, monitoring, and safeguards. I think the industry is well aware of this, but then again, it really requires the broad dialogue, and I am happy to have it here over the next two days, and then also at the occasions mentioned later this year.
Dorijn Boogaard: Thank you very much. Do we have someone who voted for agree in the room? Yes, someone from the other generation, maybe, all the way in the back. Go ahead, if you would like to explain why you voted for agree.
Audience: So, well, of course, I’m 30 plus. I voted for agree, but actually I’m not sure. I see that the potential to improve is there, but I’m not sure whether it will work out, and I’m also not sure what kind of education is still needed, and whether we will manage the transformation in education. We don’t need to create humans who can only do things that will be replaced by AI. We need to know what we have to teach people, what they require to be able to do a meaningful job in the future. And I see that we might be teaching people skills they don’t need anymore, and that we might use AI in a way that doesn’t make them better at learning. Of course, the technology could do a lot of useful things, but we have to learn how to use it in a meaningful way.
Dorijn Boogaard: Very clear. Thank you very much. Maybe Ben can also reflect on that a little bit, because it also touches upon the relationship between teachers and students. So yeah, go ahead.
Ben Mischeck: Happy to do so, and I have to say I very much like the question. I also really like that it comes from the older generation, because I often hear that AI is affecting the way we work and the way we learn, that we’re just losing skills because AI is doing the work for us: we’re losing the skill to actually write an essay, to reflect on literature. And I have to disagree with that statement. I like the question about which skills we need to learn and how skills are going to develop in the future, and I think it’s a very complex question to answer. But what is important when trying to answer it, I think, is to consider the positive ways AI can impact a skill. It’s also worth mentioning that a skill is not just one single thing: if I’m writing an essay, I have to do a lot of little steps, and I believe some of these steps can be automated with AI. I think that’s beneficial, because humans are then able to focus on other aspects. If I’m writing an essay, I might now have more time to reflect on my arguments and formulate stronger arguments with more evidence, for example. That’s an example of how I think skills will develop in the future. But here again the intergenerational dialogue is very important, because I think the older generation has a different sense of the value of skills than we do, and the young generation might be exposed to the risk that we lose skills because we think they’re not valuable anymore, when they actually do have value, maybe in a cultural sense, maybe in a social sense. I really think young people can learn from the older generation in this regard, and the older generations need to be open to ways of integrating AI into how we work and learn.
Dorijn Boogaard: Yes, of course, Laila.
Laila Lorenzon: Yes, I just wanted to add that I agree with all the concerns: in the current state of things, it’s hard to believe that AI can actually improve education when we see so much about the misuse of AI. But I think it’s very important to consider that we are dealing with a whole new generation in terms of socialization. We, and people younger than us, are born into a world that is fully connected; very early in their lives they have screens and access to social media, and that has a huge impact on social development. The usual way classes are taught doesn’t answer the needs of the students of this digital age anymore, because they are exposed to screens, to content, to short videos, and that has changed the way they pay attention to things. I think using AI for gamified sessions or more interactive and engaging learning has huge potential to make students more interested and willing to learn, and to see AI as a tool that enhances their creativity and gives them new ideas, not only something that does the work for them. That relates to what Ben said about not trying to prohibit AI in classrooms: prohibition is never the way, it only makes people use it more. Instead, I think we should show how it can be a tool that doesn’t do things for you, but makes you think smarter, be more creative, or tackle a challenge differently. And I think we should look more at how we can make classes and education more interesting, because it’s a whole new world with AI, and also virtual reality and augmented reality, and it can really be used to make people interested in learning again. I think that’s something important to explore. Thank you.
Dorijn Boogaard: Thank you very much. And we also have a question online, but after that I will come to you in the room. So first I will give the word to our online moderator.
Moderator: Thank you very much. Yeah, we have a comment from Jasmine. I will just read it out loud and leave it to the debate. The key debate is always on how people leverage AI, meaning what positive and negative impacts potentially created from all several ways of usage.
Dorijn Boogaard: That’s a good comment, I think, and we will bring it into the conversation. Or does anyone want to respond to it now? Yes, then we can move to the question in the room.
Audience: I’m looking at it from a slightly different angle, and the earlier comment about social media already had this in mind. I think the way we all learn today, in an unofficial way, is through what we see on social media, what comes by in videos, and so on. And that is presented to us by artificial intelligence, by all sorts of black boxes at Google and other places whose inner workings we simply don’t know. We do know how it influences people. And this is where education comes back in, in my opinion. For me it’s too late, I’ll never be in a school class again, I think, but how do we teach youths to deal with the outcomes they see on social media, so that they learn there is perhaps another view? I’m very pessimistic about what’s happening today, how this undermines our democracy and how it undermines youth. Coincidentally, I read a whole article yesterday, and I won’t read it out loud, but it argues that the more you hear about how bad things are in your country, the better that country actually tends to be on equality standards and freedom of speech; whereas in Russia or similar countries, if you speak out, you go to a very special place where they never hear from you again. In other words, we need to teach how valuable differences of opinion are, and how important freedom of speech is. There is a role for education here, in my opinion. So I’m wondering what you think about this and how we could go about organizing it, because it is also about teaching our teachers. Thanks.
Dorijn Boogaard: Thank you very much. I’m looking at Mr. Pap Ndiaye. Could you please reflect on this? You also mentioned the importance of digital literacy, but are there other policy ways to tackle this problem?
Pap Ndiaye: Yeah, thank you. Thank you for your point. I mean, we need to be realistic: we have very powerful adversaries in many ways, in the conjunction of a number of social media platforms and AI put together. I was French Minister of Education, and I am very aware of the place social media have in the everyday life of millions of young people. They spend more time checking TikTok than doing their homework, to put it briefly. And TikTok and others will become more and more efficient using AI, spending more and more money on AI, which obviously makes them more and more influential and powerful. This is the reality we face nowadays. So if I go back to the statement that AI will improve the quality of education: it all depends on us. It could very well have the most detrimental and negative effects on education; AI could very well destroy education as we understand it. That is a possibility. It could also improve the quality of education. It all depends on how we organize ourselves, on the collective will to use AI in an effective way. But at this point, in 2025, we have to acknowledge that all those who are attached to the common good and to this collaborative work between all stakeholders, including the Global South, this whole community, so to speak, is lagging behind the rapid pace of development in a number of companies that just do not care at all about what we mean when we speak of education. Sorry to be a little gloomy here.
Dorijn Boogaard: Thank you very much.
Anton Aschwanden: Perhaps just a quick reaction: we do care, that is my reply. We are really engaged in keeping our platforms safe and responsible, not only for our users, by the way, but also because we make money with advertisements, and ads customers don’t want their products promoted in an environment where scams or violence are present. So it is also in our self-interest to do so. And then, regarding digital literacy, a topic close to my heart, being a vocational trainer myself working with young people: we do that in our own work, with our own products, but also by supporting many initiatives across the world. I only know the Swiss ones in all their details, but we have worked for eight years now on digital literacy trainings with Swiss schools, with Google.org financing the respective work of the largest Swiss youth foundation, Pro Juventute. And perhaps one quick remark: we talked about the misuse of AI, and I think it’s fair to be aware of it and to be critical about it. But personally, I think one should be worried not only about the potential misuse of AI, but also about the missed use of AI: what will happen if you don’t use it? This is really a thought I would love to leave with you, that it can be a tool for many regions and communities to make a leap. Looking at some countries on the African continent, they are more advanced in mobile payments than we are here in France or in Switzerland, for instance. So it can also be a tool for a leap forward, an opportunity for progress. It is not only the misuse, but the missed use that I am worried about.
Dorijn Boogaard: Thank you. Thank you very much. I would like to open the floor now for questions here in the room, but also feel free to raise a hand online, of course. Are there any questions in the room? Yes.
Audience: Can you hear me? Yeah. I’m currently a master’s student at CU, and under the CIVICA project I’m the project lead of a project on recognizing the impact of AI in higher education, a collaboration with LSE. The core concern shared by educators is that over-reliance on AI is leading to cognitive outsourcing and threatening the critical-thinking abilities of students. There is a very thin line between using AI as a substitute and as a supplement, so I want to know from the panel: how can we help students understand that difference, so that it leads to the ethical use of AI and doesn’t affect the academic integrity of their work? Thank you.
Dorijn Boogaard: Thank you very much. Would anyone like to volunteer to answer this question? Yeah, go ahead.
Ben Mischeck: Yeah, happy to elaborate, though I have to say I’m not involved in education research, so I may miss some points about how best to teach. But what I think is really important here, and I said this before, is that teachers actively encourage the use of AI. Because currently, students will use AI anyway, and I think that’s the problem: it’s very unmanaged; they just use it in whatever way they think is best. And humans in general, not just students, are comfortable and try to find the easiest solution to a problem. I think teachers in the future need to take on the role of showing students how to use AI in a correct and also safe, secure way. For example, as I said with essay writing, teachers should encourage processes through which students learn to write an essay in collaboration with an AI agent, to actively and critically interact with it. We know, for example, that AI hallucinates and provides false information, so it’s very important for a student to be critical when using AI. But you won’t be critical without being motivated to be, and that’s where teachers come in: their role is to educate students on how to use AI actively and securely, so that they don’t lose their mental capabilities, and so that it is not a threat to actual skills but an augmentation of them. Some parts may be missed out, but then we have to ask ourselves: is this part really important to the task itself? Or can we say, okay, AI can take that part, as long as we are able to think critically about the result AI produces?
Dorijn Boogaard: Thank you.
Laila Lorenzon: Yes, I just want to second everything Ben said. I think it’s a matter of really encouraging the use of AI, because, as you said, students are going to use it anyway. It’s a new challenge arising in education, just as when the Internet itself was introduced in the early 2000s: it was very new, and it was a whole process for teachers to learn how to include the Internet in classroom activities. Right now, with AI, it’s a similar challenge: teachers need to research and understand it, and I’m sure there are a lot of frameworks and reports on best usage and the best prompts to make sure AI enhances the educational process instead of replacing it. Some common practices, like checking the sources whenever an AI tool gives you information, can seem very basic, but sometimes they are not basic to students, because they don’t know how an AI works: what is its process, where does it retrieve information from? So it’s also a little bit on the teachers themselves to engage the students: how would the teachers approach a task using AI, so they can pass that on, and students can make a more critical use of AI, not only asking the AI to do things for them. You can also ask it: how can I think better about this? How can I be more creative? And it’s really a matter of going back to resources. There are plenty of resources, as you were mentioning, Google resources as well, that we can tap into to understand how to make AI a medium that connects teachers and students better, because I often feel there is this mistrust of AI, with teachers not wanting students to use it at all and not using it themselves either.
But I think it’s like when the Internet started: now it’s part of our daily lives, and it’s something we need to get used to, because it can actually make our lives easier if we understand well how to use it. Thank you.
Dorijn Boogaard: Thank you very much. Yes, go ahead.
Pap Ndiaye: Yeah, thank you. I may sound a little conservative, but I still believe it is very useful for students to write essays on their own, without AI, without the Internet, just to focus on the writing and their own ideas. I am not suggesting we should get rid of AI; that would be unrealistic and certainly not a good idea, I agree. But we still need to find spaces in schools, possibly in homework, where students can think without the machines, without AI, on their own. It is very important for their cognitive development.
Dorijn Boogaard: Thank you very much. Thinking without machines is a very important skill, I think. And we also have two comments online, so could you please reflect on them?
Moderator: Of course, thanks. June Paris wrote: older people do get AI, but we see the bad things about it because of experience in life; those creating AI really need a basic understanding prior to development. Do you want to elaborate on this, or should I just continue with the next one? All right, we have a second one from Anthony Millennium: AI has the potential to significantly enhance the quality of education, but this can only be achieved if its development and deployment are equitable, accessible, and inclusive, particularly for young people in the Global South. I’ll leave it to you.
Dorijn Boogaard: Yeah, I think some crucial points there in the chat. Do you want to reflect on that, one of you? Or shall we go to the next question in the room? I think I saw a hand. Yes, please go ahead.
Audience: Okay, I hope you can hear me well. I am George, from civil society. From a policy and government standpoint, I would like to know how public-private partnerships can be structured to ensure that AI tools developed for education are not only innovative, but also uphold transparency, inclusivity, and respect for educational sovereignty in diverse regions. And I have another question, related to the media aspect we talked about: how can we prevent AI in education from spreading misinformation, disinformation, or malinformation? And what role can youth play in building trust and accountability in digital learning?
Dorijn Boogaard: Thank you very much. Maybe, Chengetai, could you respond to the first question on the… Anja, are you here online? Maybe you could reflect on the first question on the importance of public-private partnership and the policy perspective. How can you make it actually effective? So, if you are in the room, yes, there you are.
Anja Gengo: Yes, I am. Thank you, I hope you can hear me. First of all, thank you so much for such an interesting and rich discussion. I think it went beyond what the youth group, which worked for months on preparing this session, was envisioning, and that really speaks to how important it is to discuss this topic in a multistakeholder, intergenerational setup. What I can say is that we operate in a multistakeholder environment; the IGF has been doing so for the past 20 years, and you can see its evolution. New stakeholders are being attracted to the model and are engaging meaningfully, simply because awareness is growing in the world that digital technologies, as they become more complex and a more integral part of our lives, really require a multistakeholder approach. And that extends to the nature of cooperation, whether we call it public-private partnerships or multistakeholder cooperation and collaboration. Through our experience over 20 years, we think that this is really the only modus operandi if we want to speak about good governance of digital technologies, and if we want digital technologies, including AI and everything that is waiting for us, such as quantum computing, to work for us. But there is an issue that was mentioned several times by several speakers at this session. One problem is awareness: not everyone is aware of this. Another is, I would say, not necessarily knowledge so much as skills in terms of how to do it. Countries really differ in resources, in terms of having stakeholders who are interested and have the means to invest in this type of cooperation. And that, I think, is what requires strong cooperation, primarily between the developed and developing world.
And I don’t mean just economies; I really mean different sectors, with various kinds of resources, working together. Through these types of IGF processes, such as EuroDIG for Europe and the many national IGFs, there is a real opportunity for inclusive platforms to bring together good practices, stakeholders that have resources, and stakeholders that have a demand but lack resources, so they can work together, exchange practices, and then establish good partnerships. I also think that creating these opportunities for bridging the generational gap in knowledge and skills is critically important. That is why we always build this big track of intergenerational dialogue, where those who already have experience with leadership and with deploying resources for good can work with those who tomorrow will take the positions where they will have to make decisions for technologies to serve the greater good, so that they know each other, and that is how we save time. I know the response is not simple, but in a nutshell I would say that fostering dialogue really leads to cooperation and good partnerships, and these types of platforms have proven to be effective resources for that. Thank you.
Dorijn Boogaard: Thank you very much, Anja. And you also had a second question, which was about tackling misinformation and disinformation. Maybe, Anton, you could quickly share your approach to this problem.
Anton Aschwanden: Yeah, obviously it’s a key priority for us. Again, you wouldn’t use our products if you couldn’t rely on the quality of the information, so it is a top priority for us to deliver answers that are authoritative. And by the way, our tools also lead with functionalities to check where the sources come from, and I invite you to use them. A good illustration is a tool like NotebookLM, where you can upload large PDFs, for instance reports, when preparing speeches or panels, and you always have references to where the information comes from. Those are the solutions we are working on in our products. But we also work in partnership with other players on developing standards: SynthID, for example, for the watermarking of AI-generated content, or the authenticity coalition. So there is a lot of work happening in this space, and overall I think the approach has to be bold, to really reach those targets, but at the same time responsible, and done together, not just in a few places. Coming back to the digital infrastructure and skilling aspects I mentioned before, we need to do this across the globe and invest in all the different parts of the world. Some of our engineering hubs in Accra or Nairobi are an illustration: our engineers, in this case in Africa, are working on solutions for the African continent, and we do not pretend that there is a one-size-fits-all solution. Thank you.
Dorijn Boogaard: Thank you very much. We are coming to the end of this session, but I would like to ask each of the panellists to share one key message, very briefly, starting on the right and then going down the panel.
Pap Ndiaye: AI is a political issue, in the best sense of the word, and we need to organise so as to make AI productive for the education of our youth throughout the world.
Chengetai Masango: I think AI is here to stay, and it’s very important to use it, but to use it well, with critical thinking skills. As the Ambassador was saying, there has to be a balance as well: you can’t just do away with the way we used to do things. It’s a balance, but it’s here to stay.
Anton Aschwanden: I would go for let’s make sure that the current digital divide does not become an AI divide and then what I’ve been saying before, let’s also worry about the missed opportunities and the missed use of AI and not only about the misuse.
Laila Lorenzon: I would say that AI is an ally and not an enemy and that it can be used to unite students and teachers and just improve the overall educational process, that it’s better and easier for both.
Ben Mischeck: Maybe to finish off, I would like to emphasize the importance of empathy when speaking about AI in education, for all generations and all stakeholders: to really try to understand how the other perceives AI, and what benefits and risks each one sees, because I think all of them are very valid, and it’s very worthwhile to make the effort to show empathy.
Dorijn Boogaard: Thank you all very much for joining this panel, and thank you all very much for joining this session. I would like to point out that this is the first of four workshops in the IGF Youth Track, so thank you for joining, and feel free to join the other three workshops coming up this year. And now the final words to our online moderator. Thank you very much.
Moderator: Also from our side, thanks to the panelists for your insights, also thank you Dorijn for the moderation, as well thanks to the audience for the attendance. The next session will be Resilience of IoT Ecosystems, Preparing for the Future. We start at 1 p.m., giving you a 45-minute break for lunch and we look forward to seeing you back then. Thank you.