Council of Europe – AI as a risk to enhance discrimination – Pre 06 2019

From EuroDIG Wiki

18 June 2019 | 14:30-16:00 | EVEREST 1 & 2

Session teaser

AI can have discriminatory effects, directly or indirectly, for example because of biased data sets or unintended correlations between the markers used and protected personal characteristics. The debate around these effects and how to address them must involve national equality bodies and NGOs working with vulnerable groups affected by discrimination, in addition to relevant law enforcement agencies, policy makers and industry. This session invites these representatives to an interactive exchange built around examples of a few key concerns.

Session description

AI can have discriminatory effects, for instance because it is trained on data that reflects biased human decisions. In both the public and private sector, AI-enabled decisions are made in many key areas of life: recruitment, admission to universities, credit, insurance, eligibility for pension payments, housing assistance or unemployment benefits, predictive policing, judicial decisions and many more. Many small decisions, taken together, can have large effects. Non-discrimination law and data protection law, if effectively enforced, could address AI-driven discrimination. However, there is a deficit of awareness among law enforcement agencies, monitoring bodies and the general public. AI also enables new types of unfair differentiation that escape current laws: most non-discrimination statutes only apply to discrimination on the basis of protected characteristics, such as skin colour, while AI systems can invent new classes for differentiating between people that do not correlate with protected characteristics. We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general may not be the right approach, as the uses of AI systems are too varied for one set of rules; in different sectors, different values are at stake and different problems arise. Therefore, sector-specific rules should be considered. The community of industry, public authorities and civil society should address this issue in the current Internet governance debate.

Key messages from the session (posted Aug 2019):

  • AI ethics is important, as it can come down to situations that are critical to people’s lives, even life and death situations.
  • Transparency regarding AI processes is needed, but keep in mind that simply hitting the “yes, I agree” button is not enough (first step). People must also know how to “read” the information made available, in other words, literacy is needed (second step). What is the third step?
  • The public sector has an obligation of transparency: to inform people what kind of system is being built and used in the delivery of public services.
  • AI-induced discrimination often also involves violations of data protection rules: access rights, inaccurate processing, etc. More coordination between data protection law and non-discrimination law is needed.
  • Decision-making, particularly in relation to delivering services, law enforcement etc, should not be fully automated.
  • We have to design together with the individual, not “discover” after a machine has been built that we need to talk about individual rights. Machines are trained by humans who already have biases. Machines can outperform humans, including at doing bad things!
  • Public consultations need to be an ongoing process, and not only a consequence of an incident such as Cambridge Analytica. Public-private cooperation in this area is blurry – it should be clarified, together with other grey areas.
  • People should know where they can appeal when they feel something is not right.


The debate on the effects of AI and algorithmic decision-making needs to involve a wide range of stakeholders. We would like to involve representatives of national equality bodies, NGOs working with vulnerable groups affected by discrimination, and relevant law enforcement agencies.

The session will therefore consist of a short introduction followed by an interactive exchange between participants, built around examples of a few key concerns, with reflections from the panellists.

Further reading

Study on "Discrimination, artificial intelligence and algorithmic decision-making" by Prof. Frederik Zuiderveen Borgesius

Council of Europe Commissioner for Human Rights, “Unboxing Artificial Intelligence: 10 steps to protect Human Rights”

ECRI revised General Policy recommendation No. 2 on Equality bodies to combat racism and intolerance at national level

ECRI General Policy recommendation No. 15 on combating hate speech

Council of Europe Committee of Ministers Recommendation on the roles and responsibilities of internet intermediaries


Prof. Frederik Zuiderveen Borgesius, Professor of Law, Radboud University, NL (moderator)

Kirsi Pimiä, Anti-Discrimination Ombudsman, Finland

Ariane Adam, Legal Adviser - Freedoms and Justice Team, Law and Policy Programme, Amnesty International

Meeri Haataja, CEO & Co-Founder

Menno Ettema, Programme Manager, Inclusion and Anti-Discrimination, Council of Europe


An independent report of the session is available from the Geneva Internet Platform Digital Watch Observatory.