YOUthDIG 2024
14 – 16 June 2024, YOUthDIG | 17 – 19 June 2024, EuroDIG
YOUthDIG Messages 2024
Policy Propositions on Artificial Intelligence and Human Rights
Discrimination in AI
Artificial Intelligence (AI) has the potential to reinforce existing forms of discrimination and create new ones. This stems from inherent biases in data, which are far from neutral and often reflect existing societal prejudices. To address these issues, we propose transparency as a key action. Using synthetic data and involving focus groups, particularly those representing minority and intersectional backgrounds, can ensure a more balanced and inclusive dataset, making AI systems more sensitive to diverse perspectives.
Moreover, there is a pressing need for legal clarification regarding responsibility for discrimination in AI. Clear guidelines and accountability measures must be established to effectively address and prevent bias.
Bias of Policy Makers Due to Techno-Solutionism
Policy makers sometimes fall into the trap of techno-solutionism, relying heavily on technological fixes without considering the broader social context. To counter this bias, an interdisciplinary approach is essential. By involving experts from various fields – technology, sociology, ethics, and law – and drawing on the professional backgrounds of multiple representatives, more sustainable and holistic solutions can be achieved.
A multistakeholder approach is also fundamental. Collaboration across different sectors and disciplines will ensure that diverse viewpoints are considered, which leads to more applicable and comprehensive policies. To facilitate this, we propose the creation of a dedicated body focused on intersectional and interdisciplinary collaboration. This body would meet regularly to assess ongoing issues and work towards continuous improvement through cooperative efforts.
AI in Border Control
The use of AI in border control raises significant human rights concerns, particularly for refugees and individuals crossing borders who are inherently vulnerable. The collection of biometric data often occurs without proper consent, exacerbating these concerns.
Furthermore, AI is not a solution to the migration crisis. In fact, the inherent biases and risks associated with AI could worsen discrimination and lead to unjust outcomes. Therefore, the use of AI in border control should be prohibited, prioritising human rights and ethical considerations above technological solutions.
These policy propositions aim to address critical issues at the intersection of AI and human rights. By promoting transparency, accountability, and interdisciplinary collaboration, we can ensure the ethical and fair use of AI technologies.
Education
Our goal is to empower individuals through education, to enable them to assert their AI and digital rights and critically analyse technological solutions. We encourage the implementation of constructive and informative campaigns to raise awareness of AI impacts. Additionally, we advocate for the integration of AI literacy into school curriculums.
Data for AI training
AI programs are “trained” by being exposed to large quantities of existing works, photos, information, and data. Public awareness of how our data are used, especially in training new AI programs, is very low. Most people are unaware that their personal photographs are being used for AI training. Even when notifications are provided, they are often buried in general Terms and Conditions or presented in a way that users do not fully understand. Currently, users can opt out, but as this issue grows, we propose that users must explicitly opt in, following the GDPR example. Obtaining explicit consent for using personal content should become paramount.
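The opt-in default proposed above can be illustrated with a minimal sketch. All names here are hypothetical, not a real API: consent to AI-training use of personal content defaults to "no" and must be explicitly granted before any item can be used.

```python
from dataclasses import dataclass

@dataclass
class UserContent:
    """A piece of user content with an opt-in consent flag (illustrative)."""
    owner: str
    ai_training_consent: bool = False  # opt-in: defaults to "no"

def usable_for_training(items):
    """Return only content whose owner explicitly opted in."""
    return [c for c in items if c.ai_training_consent]

photos = [
    UserContent("alice"),                          # never asked -> excluded
    UserContent("bob", ai_training_consent=True),  # explicit opt-in
]
print([c.owner for c in usable_for_training(photos)])  # ['bob']
```

The design choice is the default value: under an opt-out regime the flag would start as `True`, so silence counts as consent; under the proposed opt-in regime, silence excludes the content.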
Given the vast scope of the data in question and the economic interests of businesses, this issue should be standardized at the international or EU level. Precise recommendations and obligations should be imposed on companies, non-governmental organizations, governmental institutions, and all stakeholders. This regulation should ensure that consent is obtained from individuals whose biometric data are being used to train models that generate new content.
Techno-Solutionism
Using AI to solve problems may seem progressive, glamorous, and investment-worthy. However, AI might not be the most efficient way to solve a problem, for example, in public services ranging from waste to migration management. In fact, it may even create new issues, as is expected with online child safety measures such as biometric age verification or client-side scanning.
We urge integrating a comprehensive, multistakeholder impact assessment, including an analysis of both actual and potential checks and balances, before implementing AI as a problem-solving tool, be it in digital policy or for practical issues. We urge policymakers to carefully consider this impact assessment, to justify their decisions, and to be held accountable based on the assessment and the associated risks and costs.
Deepfakes
Deepfake videos are increasingly common in the media, especially during crises and elections. This misinformation prevents rational decision-making, increases suspicion in institutions, and harms democracy. To combat this, Europe needs a legal framework for deepfake usage, funding for detection technologies, and mandatory labeling for all deepfakes to ensure transparency.
We propose a system to confirm the authenticity of information, such as a "badge of authenticity" using a QR code or blue tick circle. Media houses could use this system to verify content. Additionally, educating citizens on recognizing misinformation and working with technology companies would strengthen this effort. These measures will help protect society from the damaging effects of fake news and deepfakes, ensuring a more informed and democratic populace.
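The "badge of authenticity" proposed above could work by cryptographically binding a badge to the exact published content. The sketch below is only a toy under stated assumptions: it uses a shared secret (`NEWSROOM_KEY`, a made-up name) with HMAC for simplicity, whereas a real system would use public-key signatures and embed the badge in a QR code.

```python
import hashlib
import hmac

# Assumption for the demo: the media house holds this secret key.
NEWSROOM_KEY = b"example-newsroom-key"

def issue_badge(content: bytes) -> str:
    """Return a hex 'badge' binding the newsroom key to this exact content."""
    return hmac.new(NEWSROOM_KEY, content, hashlib.sha256).hexdigest()

def verify_badge(content: bytes, badge: str) -> bool:
    """True only if the content is byte-for-byte what was originally badged."""
    return hmac.compare_digest(issue_badge(content), badge)

article = b"Original, unedited broadcast transcript."
badge = issue_badge(article)
print(verify_badge(article, badge))                   # True
print(verify_badge(b"Deepfaked transcript.", badge))  # False
```

Any alteration of the content, such as a deepfaked edit, produces a different digest, so the badge no longer verifies.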
An intersectional approach to youth participation for a better Internet governance future
Representation:
- The creation of spaces with co-management structures for youth to provide policy recommendations while monitoring the implementation of these actions.
- Promoting a requirement for young people to have a ‘seat at the table’ by developing standards and recommendations that foster meaningful participation and inclusion in high-level discussions.
- Increasing stable funding and income streams for youth organizations (including youth councils), such as operational grants, while assessing and monitoring their successful implementation.
Education:
- Strengthen both non-formal and formal education via collaborative efforts among relevant stakeholders, such as young people, decision makers, experts, and academia, by working towards an understanding of how and why education needs to be tailored to specific groups through the following topics: youth participation, digital literacy (including how bias is present in the online sphere), inclusivity and accessibility, and critical thinking.
- Recognise the current work of non-formal education agents on these topics for the creation of an organized European curriculum that can be implemented in formal education.
- This educational program would follow the ‘youth for youth’ principle, where the young people would be, within the participatory model, included in all aspects of design and development processes, adopting an intersectional approach.
Inclusivity:
- Inclusivity starts with language, and we should ensure that all policies and regulations are transparent and comprehensible to everyone, especially youth, by limiting the overuse of technical jargon. Policy briefs and documents need a youth-friendly version that explains transparently how these policies affect everyday life.
- An intercultural aspect is also crucial for an inclusive environment, therefore LLMs should be trained in different (European) languages to reduce inequalities in access to information and knowledge. To ensure accuracy, this needs to have some level of human verification wherever possible. This could be achieved, for example, in partnership with local universities.
- Apply standards that make online spheres more accessible, requiring all websites to offer accessibility features, such as services for people with visual and hearing impairments, to ensure equal access to all content.
For the successful development of the points above, it is crucial to have secured stable funding and adopt an intersectional approach to ensure no one is left behind.
Fair and Privacy Preserving Use of Data
Data privacy should not depend on one’s personal, social, or economic status. This means:
- Information about data collection and data use must be presented to users transparently, in simple and easy-to-understand language.
- A crackdown on dark patterns: outlawing them and empowering consumer agencies to identify them.
- Creating an opt-in standard for data sharing, without penalty for or exclusion of users who decline.
In the long term, we demand that the burden of managing privacy be taken off users and that power be shifted back to them, allowing users to decide what information is shared about them.
New Economics of Data
We acknowledge that data is an asset, the product of users’ labour, and that it is used as a commodity to facilitate price discrimination. Therefore, we advocate for preventing companies from increasing the prices of products and services based on users’ personal data shared without their consent.
Age verification
We are seriously concerned about children's welfare, and we acknowledge that an effective technical solution to protect children online without infringing on privacy has yet to be discovered.
Entities that wish to improve children’s welfare online should not expand privacy-reducing, technocentric solutions, but should prioritize:
- Strengthening law enforcement financially, educationally and structurally.
- Shifting liability to providers of explicit and mature rated content.
About YOUthDIG
The Youth Dialogue on Internet Governance (YOUthDIG) is a yearly pre-event to the European Dialogue on Internet Governance (EuroDIG) aimed at fostering active youth participation. Young people (ages 18–30) from the pan-European region with a strong interest in Internet governance, digital policy, and cooperation work together to draft and advocate for the youth messages.
Every year the YOUthDIG Organising Team is made up of former YOUthDIG participants who design the programme for the upcoming edition. This is a chance to reflect on and improve the programme from previous editions, build further connections with YOUthDIG alumni and the EuroDIG community, and provide an opportunity for young people throughout Europe to participate in the Internet governance ecosystem, focusing on topics that are important to youth in the field.
Find out more at our website.
Supporting youth to attend EuroDIG and other IG spaces
When the YOUthDIG Call for Applications closed, we had received 275 submissions from all over Europe: 159 applicants were women, 113 were men, and 3 were non-binary; 30 of them were offered funding to participate. In previous years, we sent emails explaining how to participate in EuroDIG online, how to reach out for funding, and how to join a national IGF or other IG fellowships and spaces. However, we understand that it can be difficult to access these spaces when there is no active call to participate or contribute.
Therefore, on Tuesday, 16 April 2024, the first webinar on “Participating at EuroDIG and Connecting Locally” was held. You can read about it here. In this session we sought to give newcomers the opportunity to connect and engage with members of the EuroDIG community and their national communities. We aim to make newcomers feel welcome in these spaces and to ensure that they are informed about how they can join these communities to learn more about digital governance.
You can also find information in our How to get Involved in Internet Governance guide.