Council of Europe – AI as a risk to enhance discrimination – Pre 06 2019
Revision as of 16:05, 9 May 2019
Final title of the session: Please send the final title by 15 April 2019 at the latest to wiki@eurodig.org. Do not edit the title of the page in the wiki yourself; otherwise the link to your session may disappear.
Working title: Council of Europe – AI as a risk to enhance discrimination
Session teaser
AI can have discriminatory effects, directly or indirectly, for example because of biased data sets or unintended correlations between the markers used and protected personal characteristics. The debate around these effects, and how to address them, must involve national equality bodies and NGOs working with vulnerable groups affected by discrimination, in addition to relevant law enforcement agencies, policy makers and industry. This three-hour session invites these representatives to an interactive exchange built around examples of a few key concerns.
Session description
AI can have discriminatory effects, for instance because it is trained on data that reflects biased human decisions. In the public and private sector, AI-enabled decisions are made in many key areas of life: recruitment, admission to universities, credit, insurance, eligibility for pension payments, housing assistance or unemployment benefits, predictive policing, judicial decisions and many more. Many small decisions, taken together, can have large effects.

Non-discrimination law and data protection law, if effectively enforced, could address AI-driven discrimination. However, there is a deficit of awareness among law enforcement and monitoring bodies, as well as the general public. AI also enables new types of unfair differentiation or discrimination that escape current laws: most non-discrimination statutes only apply to discrimination on the basis of protected characteristics, such as skin color, while an AI system can invent new classes, which do not correlate with protected characteristics, to differentiate between people.

We probably need additional regulation to protect fairness and human rights in the area of AI. But is regulating AI in general the right approach? The use of AI systems may be too varied for one set of rules: in different sectors, different values are at stake and different problems arise. Sector-specific rules should therefore be considered. The community of industry, public authorities and civil society should address this issue in the current Internet governance debate.
Format
To be finalised by 30 April 2019.
The debate on the effects of AI and algorithmic decision-making needs to involve a wide range of stakeholders. We would like to involve representatives of national equality bodies, NGOs working with vulnerable groups affected by discrimination, and relevant law enforcement agencies.
The session will therefore consist of a short introduction, followed by an interactive exchange between participants around examples of a few key concerns, with reflections from the panellists.
Further reading
ECRI General Policy Recommendation No. 15 on combating hate speech
People
Prof. Frederik Zuiderveen Borgesius, Professor of Law, Radboud University, NL (moderator)
Kirsi Pimiä, Anti-Discrimination Ombudsman, Finland
Tanya O’Carroll, Director, Amnesty Tech, Amnesty International
Meeri Haataja, CEO & Co-Founder, Saidot.ai
Menno Ettemma, Policy Officer, Inclusion and Anti-Discrimination, Council of Europe