From Deepfakes to “Nudification”: Evolving threats of Generative AI to Information Integrity and Human Rights – Pre 04 2026
'''by IGF Youth Track'''<br /><br />
26 May 2026 | 09:00 - 10:00 CEST | LORD JENKINS<br />
[[Consolidated_programme_2026#pre04_26|'''Consolidated programme 2026''']]<br /><br />
== Session teaser ==
The 2026 IGF Youth Track on Governance of AI builds on the achievements of the Youth Tracks from 2022–2025, continuing to elevate young voices in shaping inclusive and responsible AI governance. | |||
== Session description ==
In 2024, the IGF Youth Track explored the growing intersection between artificial intelligence and information integrity, with a particular focus on strategies to detect AI-generated political deepfakes. As generative AI technologies continue to evolve at an unprecedented pace, the landscape we examined then has rapidly transformed, bringing both new capabilities and new risks into sharper focus. | |||
Today, generative AI, despite the many benefits it can bring to our societies, can also threaten democratic processes through cognitive and political manipulation, and inflict new forms of personal and societal harm. One of the most alarming developments has been the rise of “nudification” apps, which create non-consensual sexually explicit images by manipulating real individuals’ photos. These apps have been widely criticised for facilitating harassment, abuse and gender-based violence, highlighting significant gaps in both technical safeguards and regulatory frameworks. In response to public outrage and documented harm, the EU moved to ban such systems under amendments to the AI Act, while national-level initiatives are also emerging. For example, courts in the Netherlands have ordered platforms to curb AI-generated sexual abuse content, legislative proposals in Minnesota seek to ban deepfake “nudification” tools, and policymakers in the United Kingdom and Australia are advancing measures to restrict or prohibit these applications.
While such tools can be banned at the app store level, the same harms persist through open-source AI models, which have no developer safeguards in place and can be deployed locally to generate the same harmful content, pushing the problem into less visible spaces. This raises questions about the feasibility of enforcement. Faced with the limitations of prohibition alone, what other measures are needed?
This session examines the challenges around AI-generated content and how they have evolved, what new threats have emerged, and how technical, legal, and societal responses are adapting. It will also explore whether current approaches (ranging from detection tools to bans) are sufficient to address the increasingly complex and fast-moving risks posed by generative AI. | |||
== Format ==
Global interactive roundtable exchange between senior experts and youth. | |||
DRAFT AGENDA (60 min): | |||
Introduction by moderator - 2 min | |||
Speakers setting the stage - 5 min each
2 high-level senior experts and 2 youth representatives. Each speaker will respond to one of the following questions:
Policy questions: | |||
*When nudification apps are involved, which individual and human rights are violated, and how are these rights currently addressed in law?
*In what ways are current policies addressing harmful AI‑generated content, and how might responsibilities for platforms and developers need to evolve to prevent harm at scale? | |||
*Who should be primarily responsible for preventing harm from AI-generated content, and how should this responsibility be enforced in practice? | |||
*What can be learned from emerging policy approaches in different jurisdictions? | |||
Open discussion - 25 min | |||
A Mentimeter will be shared with the Youth Community when the session is announced through multiple channels, including IGF Secretariat social media and mailing lists. The community will submit and vote on questions, and the top 3 most-voted questions will be asked first during the open discussion. Additional questions from in-person and online participants will follow. | |||
Closing remarks by the speakers - 5 min total | |||
== Further reading ==
*[https://cdn.bravemovement.org/files/data-dialogue-youth-report.pdf Report focusing on CSAM involving youth insights and experiences from survivors] | |||
*[https://childhelplineinternational.org/statement-calling-for-ban-of-nudify/ Statement calling for ban of nudifying apps] | |||
'''About the IGF Youth Track'''
*[https://eurodigwiki.org/wiki/IGF_2024_Youth_Track_%E2%80%93_AI_and_Threats:_new_strategies_to_detect_AI-generated_political_deepfakes_%E2%80%93_Pre_10_2024 IGF Youth Track 2024@EuroDIG: AI and Threats: new strategies to detect AI-generated political deepfakes] | |||
*[https://eurodigwiki.org/wiki/IGF_Youth_Track:_AI_empowering_education_through_dialogue_to_implementation_%E2%80%93_Follow-up_to_the_AI_Action_Summit_declaration_from_youth_%E2%80%93_Pre_08_2025 IGF Youth Track 2025@EuroDIG: AI empowering education through dialogue to implementation - follow up to the AI Action Summit declaration from youth] | |||
*[https://www.intgovforum.org/en/content/igf-2026-youth-track IGF Youth Track 2026 official page] | |||
== People ==
The UN IGF Secretariat, in collaboration with the current IGF Host Country and all Youth IGF coordinators, is designing and implementing the IGF Youth Track as a capacity development activity running throughout the year-round IGF process, including at the annual IGF meeting.
'''Focal Points:''' | |||
*Herman Johansen, UN IGF Secretariat | |||
*Nadia Tjahja, YOUthDIG Coordinator | |||
'''Organising Team (Org Team)''' | |||
List Org Team members here as they contribute: | |||
*IGF 2026 Youth Track Network | |||
*Dorijn Boogaard, NLIGF | |||
*Sherry Shek, Asia-Pacific Youth IGF | |||
*Pilar Rodriguez, YIGF Spain | |||
*Natalie Tercova, IGF Czechia | |||
*Francesco Vecchi, Youth IGF Italy
*Frances Douglas-Thomas, youth member of the EuroDIG Programme Committee | |||
*Fabio Monnet, Swiss Youth IGF | |||
*Rekik Girmachew Demisse, Youth IGF Italy
*Hailan Wang, IGF Germany | |||
*Anna Gumenyuk, IGF Switzerland | |||
*Omor Faruque, Bangladesh | |||
'''Key participants:''' | |||
*TBC | |||
*TBC | |||
'''Moderator''' | |||
*Sherry Shek, APrIGF (confirmed, on site) | |||
'''Rapporteurs:''' | |||
*Phyo, SEA yIGF Myanmar and Myanmar yIGF | |||
*Anna Gumenyuk, IGF 2026 Youth Track Coordinator | |||
[[Category:2026]][[Category:Sessions 2026]][[Category:Sessions]][[Category:Side events 2026]]