AI & non-discrimination in digital spaces: from prevention to redress – WS 01 2025
13 May 2025 | 09:30 - 10:30 CEST | Room 10
Consolidated programme 2025
Proposal: #79 (CoE)
Get involved!
You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published on the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.
Kindly note that it may take a while until the Org Team is formed and starts working.
To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.
This session will explore practical measures to establish effective safeguards and remedies against discrimination in AI systems, particularly in online environments. It will delve into discussions on how to better engage with the groups most at risk, and how to empower human rights supervisory bodies to strengthen oversight and accountability.
Session description
The impact of Artificial Intelligence (AI) on human rights, particularly regarding its potential and the risks it may pose to equality and non-discrimination, is a critical concern of our times. In Europe, newly adopted legal frameworks – such as the Council of Europe (CoE) Framework Convention and the European Union’s AI Act – provide a framework for substantive and procedural rights, safeguards, and remedies. However, strict enforcement and prioritised attention are needed to ensure that AI systems not only do not undermine the enjoyment of fundamental rights, but also positively contribute to the promotion of equality and offer robust protection to vulnerable groups against algorithmic discrimination.
The session aims to provide participants with a comprehensive understanding of AI's influence on equality and non-discrimination through an interactive format. Among the speakers and contributors from the floor are representatives of academia, industry and civil society, particularly those most directly affected by discriminatory risks of use of AI systems.
Format
The session sets the context with a panel intervention in which three experts from academia, civil society and industry offer concise insights into prevention, mitigation and redress regarding AI bias and discrimination. This will establish a foundational understanding of the challenges and opportunities associated with AI regulation and enforcement.
Following the panel, the session will transition into an interactive component where participants will engage in real-time polling to share their viewpoints on various AI-related issues. This will be embedded in a detailed Q&A session where responses from the polling are discussed in depth, fostering a collaborative dialogue between the audience and experts.
The workshop will conclude with key takeaways as panellists provide actionable recommendations, and participants reflect on next steps they will take, ensuring the session ends with a clear path forward for promoting equality and safeguarding human rights in the context of AI.
Further reading
The CoE Framework Convention on Artificial Intelligence
AI Act | Shaping Europe’s digital future
Information on the Committee of Experts on Artificial Intelligence, Equality and Discrimination (GEC/ADI-AI) - Gender Equality
Information on the Drafting Group on Human Rights and artificial intelligence (CDDH-IA) - Human Rights Intergovernmental Cooperation
People
Please provide name and institution for all people you list here.
Programme Committee member(s)
- Minda Moreira, Internet Rights and Principles Coalition (IRPC)
The Programme Committee supports the programme planning process and works closely with the Secretariat. Members of the committee give advice on the topics, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and oversee the complete programme to avoid repetition among sessions.
Focal Point
- Menno Ettema, Council of Europe, Hate Crime and Artificial Intelligence Unit
- Ayça Dibekoğlu, Council of Europe, Hate Crime and Artificial Intelligence Unit
Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective member of the Programme Committee and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.
Organising Team (Org Team) List Org Team members here as they sign up.
The Org Team is shaping the session. Org Teams are open, and every interested individual can become a member by subscribing to the mailing list.
Key Participants
- Robin Aïsha Pocornie is a computer scientist, AI ethics advocate, and consultant specializing in algorithmic bias and fairness. She became the first person in the Netherlands to establish case law around algorithmic discrimination after challenging facial recognition software that failed to detect her face due to her darker skin tone. As the founder of robinAIsha Consultancy, she combines technical expertise with a commitment to social justice, advising organizations on bias mitigation and ethical AI development.
- Louise Hooper is a human rights barrister. Her practice over the last 20 years has involved a focus on human rights, equality and dignity. She currently holds a research fellowship with 5Rights where she is examining how to assess systemic risk in children’s technology and is an expert to the Council of Europe Committee of Experts on Artificial Intelligence, Equality and Discrimination (GEC/ADI-AI).
- Mher Hakobyan is Amnesty International's Advocacy Advisor on AI Regulation and leads the advocacy work of the Algorithmic Accountability Lab (AAL).
Moderator
The moderator is the facilitator of the session at the event; they must attend on-site. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide a short CV of the moderator of your session at the Wiki or link to another source.
Remote Moderator
Trained remote moderators will be assigned by the EuroDIG secretariat to each session.
Reporter
The members of the Programme Committee report on the session and formulate messages that are agreed with the audience by consensus.
Through a cooperation with the Geneva Internet Platform, AI-generated session reports and stats will be available after EuroDIG.
Current discussion, conference calls, schedules and minutes
See the discussion tab on the upper left side of this page. Please use this page to publish:
- dates for virtual meetings or coordination calls
- short summary of calls or email exchange
Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.
Messages
- are summarised on a slide and presented to the audience at the end of each session
- relate to the session and to European Internet governance policy
- are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
- are in (rough) consensus with the audience
Video record
Will be provided here after the event.
Transcript
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Minda Moreira: Welcome to the workshop AI and non-discrimination in digital spaces: from prevention to redress. I’m just going to read the session rules and then give the floor to Ayça. So please enter with your full name if you’re online. To ask a question, raise your hand using the Zoom function. You will be unmuted when the floor is given to you. When speaking, switch on the video, state your name and affiliation. Do not share links to the Zoom meetings, not even with your colleagues. Thank you very much. Ayça, the floor is yours.
Ayça Dibekoğlu: Thank you, Minda Moreira. Good morning to everyone in Strasbourg and good morning to those joining online. My name is Ayça Dibekoğlu and I work at the Hate Speech, Hate Crime and Artificial Intelligence Unit of the Council of Europe, which is within the anti-discrimination department of our organisation, where, among many areas of work, we focus on strengthening human rights and equality in the context of emerging technologies. I’m delighted this morning to be moderating this session together with my head of unit, Menno Ettema, in which we bring together expert voices from across academia, civil society, the tech community and public institutions. Today we’re here to explore a critical and urgent question: how can we ensure that AI systems, which are now increasingly used in law enforcement, in welfare and employment and in health, do not infringe on the right to equality or amplify discrimination? AI is often presented as neutral or objective, but in reality it is shaped by data, by assumptions and by power structures that go deep within its design and its use. This session is called, as Minda Moreira mentioned, From Prevention to Redress: AI and Non-Discrimination in Digital Spaces, and the idea is to push beyond the surface. We’ll explore what prevention, mitigation and meaningful redress should actually look like in practice and who gets to shape those outcomes. You’ll hear from a brilliant panel of experts, and we will be inviting you, the audience, among whom are also experts actively working in this area, to contribute through interactive polls and questions that we will be presenting to you via Mentimeter. Please remember that this is a conversation, not just a panel. After short interventions from each panellist, we’ll move on to the Mentimeter questions, and we hope they will be as thought-provoking to you as they were to us. Once the answers come in, we’d like to encourage you to elaborate on them further and to explore topics that we haven’t discussed, to make the session as effective and beneficial as possible. So, let’s dive in. I would like to first give the floor to Robin Aïsha Pocornie, a computer scientist who specialises in algorithmic discrimination and bias. You might also recognise Robin as the first person to establish case law around algorithmic discrimination in the Netherlands, after challenging a discriminatory facial recognition algorithm. Robin, drawing both from your personal experience and from the broader work you are currently doing on fairness and bias, we’d love to hear how you approach the challenges and possibilities of preventing, mitigating and addressing algorithmic discrimination. The floor is yours.
Robin Aïsha Pocornie: Thank you so much, Ayça, for the great introduction. I’m happy to be here. Hi, everyone. My name is Robin Aïsha Pocornie, and I work at the intersection of data-driven technologies and the implications they have on people, especially from a race, gender and class-income perspective. I work from approaches that are anti-techno-solutionist, which means that complex social problems cannot be fixed by technology alone, especially if these problems are created by the technology. An example of this is data-driven facial detection software that cannot recognise people of a certain skin colour; a purely techno-solutionist perspective cannot fix that problem. I also work from the approach of radical reform, also known as non-reformist reform, which holds that community gains are more important than individual gains. We do this by prioritising community work and going to the communities that are being harmed, instead of looking in from the outside and trying to fix problems we are not personally, or at least not as a community, related to. The three approaches for the prevention of AI harms that I work with in my consultancy, and advise clients on, are: the community always comes before the technology; the community that is being harmed is the expert; and the integration of diverse knowledge. Too often, technological knowledge, such as computer science or data-driven knowledge at a higher-education level, is treated as more legitimate than community-based expertise, that is, real lived experience. How we do this, and it is already being done, is that small communities all over the world are starting from a prevention point of view. In Singapore, for example, there is an impact-driven community that works on the mitigation of AI harms by asking first whether the technology is needed at all, so technological non-use, instead of starting by creating and developing these technologies. This doesn’t mean that AI cannot help or support those in need; it just means that we look further back before deploying and implementing technologies that could potentially cause harm, and treat mitigation not as a reactive step but as a responsive one, by looking at harms before deployment.
Ayça Dibekoğlu: Thank you very much, Robin. I’d now like to give the floor to Louise Hooper, a human rights barrister with over 20 years of experience focused on human rights, human dignity and equality. Besides her consultancy work, Louise currently serves as an expert of the Council of Europe’s Committee of Experts on Artificial Intelligence, Equality and Discrimination, abbreviated as GEC/ADI-AI. Louise, drawing on your legal expertise and your work in international standard setting, how do you see the role of law and policy frameworks in shaping responses to algorithmic discrimination? The floor is yours.
Louise Hooper: Thanks. Good morning, everybody. So the first thing that I’d like to talk about is what AI systems are and do, in terms of being models and systems that capture patterns from data in a model. And data comes from real-world processes, which means humans. And humans are inherently flawed beings. We are discriminatory by nature. And data that we get from our historical processes is very often dirty, in the sense that we’re not all consistent about the way that we collect or input data into models or systems. At the moment, we, by which I mean generally older white men with political or economic power, are more and more reluctant to accept that discrimination is bad or to take steps to ensure equality exists in society. And you can see this in things like, for example, aggressive attempts to dismantle equality, diversity and inclusion programmes. So all of this feeds into how we regulate, what we’re regulating and the social approach to law. And against this background, algorithmic discrimination itself is difficult to detect. So who’s being discriminated against? Was it a mistake? Was it by design? It’s difficult to prove. We can’t access data. We don’t understand the systems. We have black boxes, with nobody understanding what the algorithms are doing; we don’t know how decisions are made or what influences them. And overarching all of this, access to justice is costly, it’s time-consuming and it can be very difficult for individual victims. In the context of AI and ADM, the opacity of systems, the information and power asymmetry between the deployer and the subject, the lack of capacity to monitor group effects or to compare yourself to another person, the inability to access this information and the absence of transparent, meaningful information can preclude proper assessment of discrimination or prevent legal action from being started. And it’s there where I think regulation comes in. We have two attempts that I’d like to just touch on. One is the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. And the other is the EU AI Act. The Framework Convention is primarily directed to and governs state actions, not the actions of companies. It places the responsibility on states to regulate, to govern providers and deployers of AI. It solely focuses on human rights, the rule of law and democracy rather than other issues, and it’s not yet in force. Article 10 is the key provision in respect of equality and non-discrimination, which includes not just a negative obligation but also, interestingly, a positive obligation for AI to be used to overcome inequality, which we may talk about in terms of techno-solutionism later on. It also has a whole series of effective procedural guarantees, safeguards and rights designed to enable people to litigate if necessary, and requires effective oversight mechanisms. The EU AI Act, by contrast, is directed to and governs states, but also providers and deployers of AI. So it’s directly relevant to providers and deployers and can be directly relied on by individuals to protect and enforce their rights. It’s built on product safety principles rather than human rights law. There are some criticisms of its approach of determining in advance which AI is risky, before you can see whether it actually is risky, which I find quite complicated. And then it sets out a series of requirements for both providers and deployers to comply with.
Combined with some of the new European legislation on equality bodies, there are some really significant new powers for authorities responsible for protecting fundamental rights, under Article 77 of the AI Act and in the new directives on equality bodies. And for time reasons, I’m going to stop there.
Ayça Dibekoğlu: Thank you, Louise. And lastly, our final panellist is joining us online. Hi, good morning, Mher. I see you online. I would now like to give the floor to you, Mher Hakobyan, Amnesty International’s Advocacy Advisor on AI Regulation. Mher leads the advocacy work at the Algorithmic Accountability Lab of Amnesty Tech and brings experience from the European Disability Forum, Equinet and the Council of Europe. Mher, drawing from your advocacy work in the Algorithmic Accountability Lab and across these institutions, how can we build truly meaningful multi-stakeholder participation in shaping AI policy? And what does it look like in practice when it comes to tackling discrimination? The floor is yours.
Mher Hakobyan: Thank you. Thank you very much. I’m very happy to be here, as I mentioned in the coordination meeting we had last week. I started my professional journey at the Council of Europe in Armenia, so I’m always happy to be back. So thank you for the invitation. I will dive right into the question, maybe also briefly presenting the team that I’m in, which is one of Amnesty International’s Tech and Human Rights Programme teams, called the Algorithmic Accountability Lab. The way we do our work is, I think, really fitting to the question that you asked, because we’re a multidisciplinary team that works under the broad umbrella of the automated state. We basically focus on AI use in the public sector, so by public authorities in areas like social protection, policing, migration, border surveillance, military use of AI, etc. Of course, when public-private partnerships are involved, we look at that as well, but mainly our focus is use by public authorities. The way we approach our work, in terms of research, advocacy, litigation, media and communications, is by collaborating with impacted communities, with other civil society organisations, and with networks based in Brussels that represent different community organisations, so a lot of membership organisations that have members in different EU states, focusing on migration, on disability rights, on LGBTI rights. Through these different networks we try to connect to the membership on the ground. This is when we do advocacy; but in terms of research, when we do research in specific countries, such as in Serbia and in Denmark, and we have also supported investigations in Sweden and in the Netherlands, these processes are always carried out in collaboration with local organisations that represent the different communities impacted by AI systems. For example, in research we did in Denmark that you might know about, which looked at a welfare protection scheme’s use of a fraud detection algorithm that heavily discriminated against and surveilled migrant communities and people with disabilities, we worked with disability rights organisations and with organisations that work on racial justice and represent migrant communities. So I think this is really important in terms of making sure that communities are equipped and are the ones leading the work to challenge a lot of the harms that these systems pose. Similarly, in litigation work, for example in France, we are engaged in an initiative led by local civil society, digital rights and human rights organisations, challenging the social security agency’s, the national family allowance fund’s, use of a fraud detection algorithm which is also allegedly discriminatory. A court case has been filed calling for the system to be stopped. So I think this collaborative approach with organisations that directly represent communities is very important. And when we talk about redress and remedy measures, we need to be mindful that we are not talking about them in an isolated manner, but thinking of the whole ecosystem around AI, from conceptualisation to development to deployment. In that sense, to ensure effective redress and remedy, we need to also ensure that transparency and public accountability measures are put in place.
And even going back, I think what Robin said really resonates with me as well: sometimes we need to ask whether a certain type of AI system is actually necessary, and whether it is the appropriate solution in the context. Then we need to draw clear red lines on things that shouldn’t be deployed. So in some sense, a ban can be the prevention in itself and the appropriate mitigation measure. We don’t need to wait until the harm occurs and only then think about the redress and remedy process to mitigate it. I think I will stop here, but I’m really looking forward to the continued discussion.
Ayça Dibekoğlu: Thank you, Mher. Sorry, at some point your video got a bit smaller on our screens. Thank you to each of you for your interventions and powerful insights. And now it’s time to hear from the audience, from you. I’d like to hand over to Menno, who will guide us through a short series of Mentimeter questions. If you have your phones out, you can scan the QR code or enter the code on Menti.
Menno Ettema: Great. It always takes a moment before the screen comes up. Yes, to open things up, we want to launch a few polls. You can type the URL menti.com into your phone or laptop and then use the entry code; you’ll see that code on every screen that comes up. Or you can take a moment to scan the QR code with your phone to enter. Don’t close the tab after this screen goes away, because we will use Mentimeter for the rest of the session. So I’ll go forward to the first question, just to go back to a point that was made by a few speakers. The question we want to ask is: can you solve inequality with AI? Yes, it’s technically possible; yes, it’s a good way to address inequality; no, it’s technically not possible; or no, we must solve inequality differently. So what are your thoughts on this? I’ll give you a few seconds to complete it. I see that 52 people registered, that’s lovely. And we’re seeing the answers coming in. So I’ll give two more seconds. Okay. What we’re seeing at the moment is that a large part believes that AI cannot solve inequality and it must be solved differently. There are also those who think that it’s a good way to address inequality. Technically not possible is the least of the concerns, and some nine people said that technically it’s possible. But the largest part, nearly half, say that we cannot solve inequality through AI. Connected to that, we have our next question: what are the main barriers to effective AI regulation for tackling discrimination and bias? We have a few answers and you can… how do you call it? Rank. Rank, thank you, Ayça. Rank the various answers. You can also skip. So if you think one is the major issue, you put that first, and then the second or third reasons, because they could all be relevant. But you can also skip options if you don’t think they are a concern for the effectiveness of AI regulation in tackling discrimination and bias. Thank you. It takes a little bit longer because you have to rank and move the items. Feel free to keep answering, but I will summarise what we see. The responses are roughly equal: that no one agrees what discrimination is, so that’s a barrier to effective AI regulation; and that it’s already too little and too late, which is an interesting response and maybe invites some discussion. Also, there’s quite a strong assumption that AI is the answer, so the question is whether this is indeed the case. There’s some finger-pointing at big tech, and then there are some remarks that regulation has to be global to be effective. Okay, I’ll just squeeze in one more question. Referring a bit to the first speaker: do current approaches to AI take group-based discrimination seriously? This is a yes, no, maybe question, so let’s go a bit quicker. You can still vote, but I see a tendency towards no: current approaches to AI do not take group-based discrimination seriously, with some maybes. I think this is an interesting reflection point as well. There are a few more questions, but maybe we stop here and give the floor back to you, Ayça, for some reflections from the audience.
Ayça Dibekoğlu: Thank you, Menno. So we have more questions, as Menno mentioned, but we’ll take a step-by-step approach and have discussions in between before finishing all of the questions. And thank you for all your responses; it’s clear that there’s quite a lot of insight and experience in the room. I’m also quite surprised at some of the answers and would like to discuss them further. So let’s now open the floor for a broader discussion. We’d like to invite you and our panellists to respond, whether that’s building on what was answered in the questions or picking up on something that hasn’t yet been fully explored. Please feel free to raise a hand to ask a question, and also to comment in the chat. Let’s use this time to dig in deeper. I’d like to first ask whether Equinet has any input on this. Here on the panel we have Milla Vidina. Milla leads Equinet’s work on AI and algorithmic discrimination. I think the first three questions relate quite heavily to inequality and raised some thought-provoking points, so I’d be curious to hear your insights on this, Milla. Thank you.
Milla Vidina: Thank you for inviting us. So for those of you who wonder what Equinet stands for, it’s not equestrian and horses. It’s a network of national equality public authorities, and our network seeks to empower and bring together the collective voices of 47 such institutions from 35 European states. Equality bodies are anchored in EU law and in national law as well, but we go broader than that: all Council of Europe member states, to my knowledge, have equality bodies. So why are we here? Equinet has been working on non-discrimination for six years. And your first question, can AI solve inequality, is one we are trying to tackle head-on, as in literally working with software engineers and data scientists for the past year and a half in the context of technical standardisation. Maybe you have heard that under the European Union’s AI Act, compliance rests on conformity with a set of technical requirements. Those technical requirements are embodied in technical standards. Those are the same kinds of standards that you have for a light bulb or for a toy, with that CE safe-use marking that you see across Europe. Well, this is how high-risk AI systems under the AI Act will be certified as safe. Now, the problem is that part of the AI Act, especially Annex III, covers what I would call human-rights-critical AI systems. So how would engineers know how to evaluate risks to human rights? And how do they certify a system as safe? And mind you, they self-certify, which makes it even more problematic. So what we as equality bodies have been trying to do is answer this question, can AI solve inequality, by engaging with industry in this technical standardisation forum, which is extremely resource-intensive, unrepresentative of our society, and has a high entry barrier in terms of how much time we had to invest. We got a small pot of public money from the UK, not even from the European Union, even though we are dealing with EU legislation, and it only lasted for a year and a half; most of us for now are working pro bono. But in that project we worked with industry representatives, most of them data scientists, software engineers and cybersecurity specialists, to see whether in those technical standards, a set of around ten standards, we could embed equality safeguards. Can you address the inequality question, for example? How do you select a fairness measure or a debiasing method that could more effectively tackle potential inequality aspects if you have to assess risks when you design and develop systems? Most of this is about design and development. Some of it is deployment, but according to the AI Act, even the preparation for deployment is done prior to release on the market. So most of the conversation is about what we anticipate, what we foresee as risks to equality. How can you foresee risks to equality, and could an engineer make this assessment? We have also been trying to see how to set up the human oversight mechanism. Each technical organisation that has to certify a system has a designated person to assess those risks to human rights. Do they have the necessary competence? Do they have interdisciplinary teams? How do you include affected communities in that? Is there a way, because we talked about participation and it’s all fine, but when you talk to engineers, where do you actually make it mandatory for them to include people?
Well, there is the concept of design engineering with software specialists, and there is consultation there, so those are entry points purely within the technical community. And then, when you want to validate a product, you can include stakeholder input as a condition for validation; in the product safety world there are such things as testing products and consulting with consumers. Only now we are not talking about consumers, we are talking about affected persons. So, long story short, based on that experience, yes, as expected, AI cannot solve inequality. What it could do, and this is why we are in it and why we hope to find the financial means to stay engaged with technical standardisation, is minimal prevention. There are some technically feasible measures which you could implement at design and development that give you documentation, logging, more transparency. You cannot completely resolve transparency and explainability, which are contested concepts within the engineering community, but you get more transparency. So basically you’re getting leverage and information to contest and to enforce afterwards. This is the minimum we hope for, and maybe some prevention and mitigation, insofar as you want the developers, when they have documentation, to be able to say: okay, why did I use this fairness metric and not that fairness metric? And how was the fairness metric appropriate to the outcome? So you want that reason-giving and justification, and also tracing of decision-making in the way a product is developed. This is important for accountability: at what point, and by whom, was the decision taken to use this over that? Who was overseeing the human oversight mechanism? Who signed off on product validation? Those things. And so in that way it could give us some leverage, but ultimately it’s costly and time-consuming policy changes and legal changes that would get us there. And I would end with an example. We were discussing recruitment algorithms. There is literature, there is some evidence, that human bias in recruitment is worse, whatever that means, than computer bias, so you could argue that there is more neutrality. Further to that, you can also tinker with the fairness metrics so that you actually use the algorithm to implement a kind of positive action, so we preferentially choose, let’s say, more women. But then the question becomes: who comes to that recruitment process? Who shows up for that particular position? Even if we have women, would it be the single mothers from an immigrant background? So this question of broader access and participation, the structural and systemic dimensions, is something that I think can only be addressed through costly and slow, as I said, policy and legal processes. And just to link to another point, I don’t think that we necessarily have a problem with group-based discrimination, but we have a problem with the systemic and societal dimension. From our experience with technical standardisation, and you can see this in the AI Act, in the annex that outlines the technical documentation that all companies that have to prove compliance with the AI Act must produce, groups are there: they specifically mention that you have to provide data on the anticipated impact on persons and groups. So groups are okay, but that broader systemic effect is really what we are fighting for. And I’ll stop, it was a very long intervention.
Sorry, lots of material accumulated over the years.
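To make the point about metric selection, reason-giving and logging concrete, the following is a minimal, hypothetical sketch (an editorial illustration, not something presented in the session) of how a development team might compute and document a simple selection-rate comparison across intersectional subgroups. The data, subgroup labels, metric choice and the 0.8 threshold are illustrative assumptions only.

```python
# Hypothetical sketch (editorial illustration): computing and documenting a
# simple group fairness metric, the selection rate per intersectional
# subgroup, and flagging large gaps. Data, subgroup labels, metric choice
# and the 0.8 threshold are illustrative assumptions only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (subgroup, selected) pairs, selected is bool."""
    totals, selected = defaultdict(int), defaultdict(int)
    for subgroup, is_selected in decisions:
        totals[subgroup] += 1
        selected[subgroup] += int(is_selected)
    return {g: selected[g] / totals[g] for g in totals}

def fairness_report(decisions, threshold=0.8):
    """Record which metric was used and why, the per-subgroup rates, and any
    subgroup whose rate falls below `threshold` times the highest rate
    (a rule of thumb sometimes called the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {
        "metric": "selection-rate ratio vs. most-favoured subgroup",
        "rationale": "chosen for interpretability; documented so that the "
                     "choice can be questioned and contested later",
        "rates": rates,
        "flagged": [g for g, r in rates.items() if best > 0 and r / best < threshold],
    }

# Toy example with intersectional labels (gender x migration background):
decisions = [
    (("woman", "migrant"), False), (("woman", "migrant"), False),
    (("woman", "non-migrant"), True), (("woman", "non-migrant"), False),
    (("man", "migrant"), True), (("man", "non-migrant"), True),
]
print(fairness_report(decisions))
```

The design choice illustrated here is the one described above: the report records not only the numbers but also which metric was chosen and why, so that the decision can be traced and contested afterwards.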
Ayça Dibekoğlu: Thank you so much, Milla. Any questions from the audience? Yes, please go ahead.
Audience: Good morning and thank you for the very interesting presentation. I would like the speakers to elaborate a little bit more on intersectional discrimination.
Ayça Dibekoğlu: I’m very sorry, I’m just going to interrupt you because I forgot to mention: could you briefly say who you are as well before you ask your question? Thanks.
Audience: Of course. Hi, I am Linda Ardenghi, I’m a trainee here at the Council. And yes, thank you for the very interesting presentation. I wanted to ask the speakers to elaborate a little more on the concept of intersectional discrimination. In particular, whether they think this issue can be addressed from a technical and legal perspective, because while intersectional discrimination by AI is named in most of the documents concerning discrimination by AI and accountability for algorithmic discrimination, it is not really addressed there. From what I have noticed in my personal research, it is always named but never really focused on. So if you could elaborate more on this, thank you.
Ayça Dibekoğlu: Other questions? Any questions online perhaps? No? Okay. Which of you would like to take the floor first?
Louise Hooper: I think that finger was pointed at me. So, I think intersectional discrimination is incredibly complicated. I think it’s not dealt with properly in law. There’s a lot of resistance in states to introducing legal prohibitions on intersectional discrimination, before we even get to any problems in terms of detecting and resolving intersectional discrimination in AI. So you start with the fact that most people don’t properly understand what intersectional discrimination is. There’s great reluctance at the Court of Justice of the European Union to deal with it. There is an increasing recognition of intersectional discrimination in the ECHR and in Council of Europe documents; it’s looked at in the context of the Istanbul Convention on violence against women, for example. So that’s my starting point. I think the second point is that it becomes very difficult to quantify. One of the big issues with AI and tech-based solutions and problems alike is the focus on data, on what you can prove and what you can’t prove. And starting from the perspective that statisticians know you can prove anything you like, depending on how you look at the data, you then have even bigger problems looking at something like intersectional discrimination. Within that context, I want to touch on the way that I think about group-based and individual discrimination, and why it’s difficult, I understand very difficult, to produce a system that is fair for both individuals and for groups. And that’s because of the way that data is analysed within a system. You can have something that’s producing the right decision on the evidence that it has, but that ultimately produces a right decision that has very significant consequential impact […] and some of which may not be. So I think that also feeds into issues around intersectional discrimination.
Ayça Dibekoğlu: Thank you, Louise. Would any of the other panellists like to respond, or would you like to…
Robin Aïsha Pocornie: I also think it is important to note that intersectional discrimination, as it’s defined right now, is not seen as a root cause; the root cause of discrimination outside of the technical sphere is not considered. It’s seen as an add-on and not a bug that immediately needs fixing. So when we look at development: I develop AI as well, I develop algorithms, so not only do I have the community-based perspective of people who are impacted by it, I also make these systems. And within our pipeline we do not consider intersectional discrimination, or any type of human-based discrimination, as an integral part of mitigation within the development pipeline. What is considered, for example, is privacy or data set gaps. That’s what you consider from the get-go: how do you clean your data, how do you use it, which type of model are you going to use and how are you going to validate it? That’s all based on technical model efficiency rather than the eventual or potential harm it could have on people, and that’s why intersectional discrimination, in the literature that you’ve read, is oftentimes cited as an add-on or something after the fact. And I think that’s important to recognise and acknowledge, because if we don’t make it an integral part of the development process, and even take a step back and look at the people where this algorithm is going to be deployed, what they have to say about it and what alternatives are possible, then it cannot be fixed. So AI cannot, in my opinion, fix inequality, because it is made by the inequalities that already exist. It cannot fix what it is based on.
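As an editorial illustration of Robin’s point that fairness checks are usually not an integral part of the development pipeline, the sketch below shows what a mandatory intersectional subgroup check, gating model validation alongside the usual accuracy checks, could look like. The metric (false-negative rate gap), the threshold, the pipeline hook and the data are hypothetical assumptions, not a description of any pipeline discussed in the session.

```python
# Hypothetical sketch (editorial illustration): an intersectional subgroup
# check treated as a mandatory gate in model validation, next to the usual
# accuracy checks. The metric (false-negative rate gap), the threshold and
# the toy data are assumptions, not a description of any real pipeline.
from collections import defaultdict

def false_negative_rates(records):
    """records: iterable of (subgroup, y_true, y_pred) with boolean labels."""
    positives, misses = defaultdict(int), defaultdict(int)
    for subgroup, y_true, y_pred in records:
        if y_true:                          # only actual positives matter here
            positives[subgroup] += 1
            misses[subgroup] += int(not y_pred)
    return {g: misses[g] / positives[g] for g in positives}

def validate(records, max_fnr_gap=0.10):
    """Fail validation if the worst subgroup's false-negative rate exceeds
    the best subgroup's by more than `max_fnr_gap` (absolute difference)."""
    fnr = false_negative_rates(records)
    gap = max(fnr.values()) - min(fnr.values())
    return {"fnr_per_subgroup": fnr, "gap": gap, "passed": gap <= max_fnr_gap}

# Toy example; subgroups combine, e.g., skin tone and gender:
records = [
    (("darker", "woman"), True, False), (("darker", "woman"), True, True),
    (("lighter", "woman"), True, True), (("lighter", "woman"), True, True),
    (("lighter", "man"), True, True),   (("darker", "man"), True, True),
]
print(validate(records))  # gap is 0.5 here, so this model would not pass
```

The point of the sketch is the placement of the check, not the particular metric: making it a gate means a model that performs well on aggregate accuracy still cannot pass validation while one subgroup bears a disproportionate share of the errors.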
Ayça Dibekoğlu: Thank you very much, Robin. I would like to give the floor back to Menno to present our next set of questions.
Menno Ettema: Thank you. We go back online. Yes, there you go. For those who closed their browser, you can still see at the top the hyperlink and the code that you need to use. So we go to the next question, which reads: most AI regulation focuses on harms after they happen; what would real prevention of discrimination look like in practice? This is again a ranking question, so you can put the options you think are most important on top and then order the rest. You can also skip options you think are not relevant. I see two responses came in, and there we go. So: mandatory impact assessments, involving impacted communities in design, assessing whether AI is needed at all, stronger powers for equality bodies, or other measures. And I am, we are, particularly curious what other measures you think could be taken. 21, 23. Okay, I see a few people still responding, but it’s quite evenly split. A few answers are popping up roughly equally: assessing whether AI is needed at all, which I think the panellists also spoke about a few minutes ago; stronger powers for equality bodies, which I think is a good response; and others, which I’m very curious about. And then the fewest responses for involving impacted communities in design and mandatory impact assessments. Maybe it’s also worthwhile for the panellists to reflect further on what these actually are and what they look like. Okay, so other is the biggest, and I’m curious what that means, and assessing whether AI is needed at all and stronger equality bodies are the biggest responses. We have one more question, if I remember correctly. No, sorry, two more. What are the main barriers to effectively enforcing regulation? The panel already addressed a few concerns, a few things that are really needed, but what do you think? Lack of transparency, explainability, access to data and training sets, commercial secrecy, or money and funding? What are the main barriers to effectively enforcing regulation? Okay. Here we see that the concerns are quite equally spread: lack of transparency, access to data and training sets, money and funding (funding for whom, I might want to ask), commercial secrecy, and explainability is also mentioned. So they are all raised quite equally as barriers to effectively enforcing regulation. We go to the next question, and this is an open one. When we had other, I was curious what you had in mind. So this is an open question: how would you reduce AI-driven discrimination? What are your thoughts, insights, ideas? I invite you to keep it short, don’t write an essay, but maybe give some bullet points or thoughts on what could be done that we could discuss further in the remainder of the session. So we get the first responses: proper use cases; reduce social discrimination; human rights impact assessments; big tech transparency reporting on detecting biases and improvements; equality bodies and multi-community mandates; it’s a design process; mechanisms of effective accountability; ensuring that companies have skin in the game; non-discrimination by design; extra control;
a more equal society that is informed; empathy in using the tool; literacy in using the tool; integration of all steps in AI elaboration; checking and control; organisational accountability; challenging ideas that solutions lie in AI; feedback; accountability; mandatory impact assessments; consulting during the design; more funding to support consultation processes; proof of data sets; including affected communities and taking their viewpoints into account; impact assessments; collecting diversity and equality data; inclusive data sets; community participation in design. So these are some of the responses that are coming in. Human rights-based algorithmic design, etc. I will leave this open a little bit, but I want to give the floor back to Ayça to continue the process.
Ayça Dibekoğlu: Thank you, Menno. We have received quite a lot of responses to the word cloud; I would like to ask if you’d like to explain your answers further and bring up any of the points that you mentioned.
Ayça Dibekoğlu: Okay. Yes, please go ahead.
Audience: Thank you very much. My name is Somaya, I’m with YouThink. Should I speak louder? So my name is Somaya, I’m with YouThink, if I can participate in the discussion. I think that we have to solve the problem of discrimination in real life first; then we can solve it in AI. As the expert explained, she writes code, and we cannot write good and fair code and algorithms if the discrimination is already there in what is happening in real life. For example, in recruitment, as you said before, there is a lot of discrimination in the process, so how can we write good code that does not discriminate against communities or a group of people or a class? So I think we should work on real life, so that we can then address this in the code and algorithms. Thank you very much.
Ayça Dibekoğlu: Thank you. I would also like to ask Mher if he would like to intervene at this point to further explore and respond to any of the questions.
Mher Hakobyan: Thank you. Actually, I wanted to answer the earlier question on intersectionality, but I failed to find my raise-hand function in time. If it’s okay, I can just maybe go back to that question. Of course, please. Thank you. I just wanted to say that, in terms of our work, it’s been important to try to expand the way we look at things and at the harms that AI can cause, because there is a lot of research and knowledge accumulated when it comes to, for example, racial discrimination and gender-based discrimination, but we often lack knowledge about discrimination based on, for example, disability or socioeconomic grounds. At the lab, we have tried to push those boundaries for ourselves in some of the research we have done in the past years, and we have also tried to speak to communities such as disability rights communities. I think it can be intuitive for researchers to just go off the knowledge that exists, and we can often reinforce and advance the knowledge that is there, but we cannot know what we don’t know, right? So sometimes, even if we don’t see examples out there in the media or already documented, we need to push the boundaries of our thinking to expand how we see discrimination. That can then also help address the gaps that we have around intersectional discrimination. And in terms of addressing it through advocacy, it has also been quite challenging for us to engage with a broader range of rights holders, because oftentimes organisations that represent different communities have so many already urgent issues that their communities are facing. So sometimes, when we speak of AI, they don’t necessarily see the direct connection that technology could have to the different rights that are being violated. For example, again with disability rights, people face accessibility barriers, or there is a huge issue with the institutionalisation of people with disabilities. So disability rights networks are sometimes not fully engaged in AI advocacy because they have issues that are, for them, much more urgent at this point. But there is also the issue of people feeling intimidated by the conversations around AI, and I think we need to do a lot of work to demystify how we speak about AI. I think Robin also mentioned the question of expertise, of who is considered to be an expert. In this kind of fora we often give priority to people who have technical and professionalised expertise, so to say, but we don’t think of a person who has the lived experience as an expert. And I think these are limitations that we need to address to be able to go broader and address intersectional discrimination more effectively.
Ayça Dibekoğlu: Thank you, Mher. Are there any questions from the audience? If not, I’d like to pose two further questions myself to the panel. My first question is actually to Robin, because it relates to some of the points in your intervention. We had a question on whether current approaches to AI take group-based discrimination seriously, and we touched upon the idea of involving impacted communities in the design process, but I wanted to ask: how do we secure the meaningful involvement of these communities? I think it’s easier said than done; how does it truly work in practice?
Robin Aïsha Pocornie: That’s actually a really good question, because it’s already being done. There are communities who have already created working groups to educate and inform larger regulatory bodies about who gets to decide what gets developed, deployed and implemented. Two good examples of this: one is the Distributed AI Research Institute. They combine community-based work with evidence-based technological expertise, but always from a community-based perspective. What that means is that usually a large regulatory body will go out and collaborate with communities, but the end decision stays with the larger regulatory body; in this case, they ensure that the community-based entities are actually the final decision-makers. Another good example is the Canadian initiative called the Indigenous Protocol and AI Working Group. These are Indigenous people who actually create what we would call regulatory information from an Indigenous perspective, as they are currently among the groups of people being harmed most by AI. And I think it bears mentioning that this is still a very unpopular perspective, even within a large regulatory-body building like the one we are in today. It’s a very underrepresented perspective because, as has been mentioned many times already today, expertise is seen as a professional thing: you have to have some sort of technical or higher education. But lived experience needs to be seen as expert experience, and as an expert seat to have at the table.
Ayça Dibekoğlu: Thank you, Robin. My other question is to both Louise and Mher, actually, because you have both touched upon meaningful collaboration and multi-stakeholder cooperation in the process. With regard to how we would reduce AI-driven discrimination, because this was also one of the answers that stood out on the Mentimeter, I would like to ask about the role of equality bodies and other regulators. How do we create meaningful cooperation that makes a real difference, and how do we engage equality bodies and other regulators in the structural design?
Louise Hooper: I will address that, but I just want to add to what Robin said, because one of the problems with having got community participation, apart from the fact that there’s no money to do it, is that even when you prove something is causing a discriminatory effect, quite often you don’t get an adequate solution from a court or regulator. We saw in Canada an issue similar to the sentencing case in the US, where Indigenous people in Canada are discriminated against by an algorithm used in sentencing decisions. The response of the court was: we accept that the discrimination is there, but it’s all right because the judge knows that and will make a different decision. It’s just not enough. It’s not acceptable. It doesn’t work. Don’t give false promises to communities if you’re asking them to become involved in anti-discrimination work by taking on board their expertise and then ignoring it afterwards, because it’s not okay. That’s the first thing I would say about community participation. In terms of equality bodies, adequate funding is needed. Within the context of the EU, I think there’s a real need to give them proper and adequate recognition as regulatory bodies in the context of the AI Act. Sorry, I’m forgetting your question. How much further do you want me to go? I got carried away with being cross about community participation not being taken seriously enough.
Ayça Dibekoğlu: As you wish. It was mostly the role of other regulators, specifically the equality bodies, that I am curious about. In fact, I’d also like to hear Milla’s intervention on this as well, because you have also worked with us on our project, which we are now running with three countries and in which we specifically work with equality bodies in the EU sphere.
Louise Hooper: Thanks. So what we’re doing with equality bodies, or rather what the Council of Europe is doing with equality bodies and has asked me to help with, is work on developing e-learning courses and online-offline courses to facilitate knowledge and awareness, to be rolled out to help equality bodies and other government institutions be aware of AI and equality issues, particularly when commissioning and using AI products. There are also issues to do with the way in which equality bodies can raise awareness of AI and discrimination, the tools available to them to investigate on behalf of both individuals and groups, and their ability to bring proceedings. All of those things collectively, as a package, can really assist: using powers that individuals don’t have to get access to documentation and information and to test systems, to use regulatory sandboxes, and then also to bring proceedings if necessary.
Ayça Dibekoğlu: Thank you very much. Milla, would you also like to intervene briefly? Thank you.
Milla Vidina: I don’t know where to start. Well, maybe let’s start with the obvious points. Equality bodies do not exist for the sake of equality bodies. "Equality bodies" is very technocratic language; the way I see them is as a public access-to-justice mechanism. So what sets an equality body apart from an ombud institution or a human rights centre or institute? In most cases, what sets them apart is that all equality bodies handle cases directly, and most equality bodies work with both the private and the public sector. So they are on the front line handling cases. Some of our members decide on cases with legally binding decisions, the majority of our members have litigation powers, and many, though not all, have investigation powers. Because of the specificity of those powers, we think of them as an access-to-justice mechanism. Now, that said, what do we need to facilitate that access to justice, to make it actually more accessible and more effective? Because those are the issues: accessibility barriers and effectiveness. Funding was already mentioned. Some of our members, for example, provide funding: they leave the decision-making power with whoever brings the complaint, and they fund and give free legal advice so that the party can go and litigate a case. In this way they do not influence, but they empower somebody to bring a case. But beyond funding, there are two points I wanted to make. One is that equality bodies operate in an ecosystem, and equality law is only one part of the puzzle. We also have data protection law, consumer protection law and competition law, if you’re talking about big tech, and all of those have a stake in algorithmic discrimination. We have been encouraging our members’ staff and educating them on data protection law, and I know, for example, that the French Defender of Rights works with their data protection authority, and not only in France, also in the Netherlands. So what states could do is set up, formalise and institutionalise a platform where all those regulators sit together, so there is a cross-pollination of the different types of expertise, because they do not talk to each other. Equality law alone can only do so much because, let’s be honest, we do not have the sanctions that data protection law has; we do not have the same enforcement powers as, for example, competition law. So we need to work together. Then we need reform in the law. The point about giving equality bodies more power was already made, but beyond equality bodies there is also the burden of proof. This is something the scholar Raphaële Xenidis has worked on: basically a presumption of algorithmic discrimination, so that you make it easier to establish prima facie discrimination and shift the burden of proof whenever an algorithmic system is deployed. Then changing the sanctions under non-discrimination law, and perhaps equally importantly, moving away from a grounds-based approach, where you have to prove discrimination only on gender, only on disability, or on combined gender and disability but still prove each ground separately, towards a truly intersectional approach.
In some legislation, like in Belgium, intersectionality is in non-discrimination law. There is a new directive, the Pay Transparency Directive of the European Union, where intersectionality is in the operative part of the directive, so it is already a legal concept with binding effect. In the equality bodies directives, intersectionality is explicitly mentioned in the prevention and promotion powers, so member states have an obligation, when they enhance, not merely give, the prevention and promotion powers of equality bodies, to also consider intersectionality. So we are starting to have the tools, but we need this more, also to empower our members. And one last thing: setting up a facility. We were exposed to one in France that I am inspired by, and to my knowledge it is the only public facility that provides that inter-governmental service of investigating and testing algorithms; it is called PEReN, and I don’t know how to translate it, I see a colleague here. But the point is that if each government could set up a public facility that assists all regulators, equality bodies included, and if you ask me not only regulators but academics and civil society as well, to actually do the technical testing and technical investigation part, because we should not be expected to have the technical expertise, right? Our unique added value comes from lived experience or from human rights, legal and policy knowledge. But if states set up such a facility for us, I think this would allow a more coordinated, consistent and larger-scale approach. So that’s it.
Ayça Dibekoğlu: Thank you so much, Milla. Sorry, somebody’s alarm went off in the room, which I think signals that our session time is over. I would just like to take one moment to see if Mher would like to answer my final question, with a very brief answer, and then we would like to wrap up by sharing the messages from the session.
Mher Hakobyan: Thank you. Sorry, it takes a few seconds to unmute. Thanks for that. I will be very brief. I would just like to say that the support of equality bodies, and not only equality bodies but also the NHRIs, the DPAs and the European Data Protection Supervisor, in the AI Act process has been greatly appreciated by civil society organisations, because they add to the legitimacy of the calls that we often make. I think we live in a world where, sadly, civil society sometimes seems a bit too radical or ambitious, and when we have public bodies supporting the very strong calls that we make, in terms of bans and sufficient human rights safeguards, it really adds to the effectiveness of the work that we do. So, yeah, this is just an opportunity to thank Equinet and all the other organisations that work with us.
Ayça Dibekoğlu: Thank you. Thank you, Mher. I would just like to share the last Mentimeter question, which will be running while we get the messages from this session.
Menno Ettema: Because we are curious what you might want to take forward after this session. So for us this is a kind of feedback, to see if we inspired you to take action. We’ll leave this open, and I give the floor to my colleague for the messages from the session.
Minda Moreira: Okay. Good morning. I’m just going to try to share my screen. Yeah, so I’ll stop sharing, but the Mentimeter stays open so you can fill in your answers while my screen is shared. This is what I could capture from the session. It was really rich, but we can only have three main takeaways, which I will read. And we expect general consensus, so it’s more about the message than about the specific content or the exact wording, which we can still work out afterwards; we still have one or two weeks in which we can give it some tweaks. I decided to divide it into three parts: prevention, mitigation and redress. On prevention: the session agreed that more needs to be done to address group-based discrimination and inequality, and that it may not be solved with AI alone. It may be necessary to assess whether AI is really needed or whether non-technical solutions may be more effective. Where AI is needed, transparency and accountability are crucial. Bias detection with mandatory impact assessments must be used, as well as involving and consulting impacted communities in AI design and development processes, combined with stronger powers for equality bodies and industry best practices. Does it make sense? Okay, so the second one is mitigation: algorithmic discrimination is difficult to detect and to prove, and those affected find it difficult to access justice. When it comes to intersectional discrimination, it is even more difficult, not only because of the resistance of states and international courts to deal with it, but also because of the difficulty of working effectively with affected communities. There are major barriers to effective AI regulation for tackling discrimination and bias, and session participants agreed that those include the lack of transparency and accountability, access to data and training sets, as well as commercial secrecy and funding. And finally, redress: access to adequate funding, particularly for equality bodies, is a main barrier to accessing justice. Some steps are being taken by advocacy groups to collaborate with regulatory bodies, but a multi-stakeholder approach at a global level, involving civil society, the private sector, equality bodies and communities, is vital for meaningful cooperation and to fully address discrimination in all its forms, particularly intersectional discrimination. Is everyone okay with that? Would anyone like to include something important that is not mentioned?
Ayça Dibekoğlu: Please object now, or by the 25th of May, when we have to finalise our messages. Okay, I see no objections. Thank you. Thank you, Minda. Sorry that we’re quite over time. Thank you, both our panellists and the audience, for being here and joining this discussion. I think we have a break now, so I hope to talk to you more soon, and have a great rest of your conference.