Pre 06 2019

From EuroDIG Wiki

Consolidated programme 2019 overview



Final title of the session: Please send the final title to wiki@eurodig.org. Do not edit the title of this page on the wiki yourself; otherwise the link to your session may disappear.

Working title: Council of Europe – AI as a risk to enhance discrimination

Session teaser

AI can have discriminatory effects, direct or indirect, for example because of biased data sets or unintended correlations between the markers used and protected personal characteristics. The debate about these effects and how to address them must involve national equality bodies and NGOs working with vulnerable groups affected by discrimination, in addition to relevant law enforcement agencies, policy makers and industry. This three-hour session invites these representatives to an interactive exchange built around examples of a few key concerns.

Session description

AI can have discriminatory effects, for instance because it is trained on data reflecting biased human decisions. In both the public and private sector, AI-enabled decisions are made in many key areas of life: recruitment, admission to universities, credit, insurance, eligibility for pension payments, housing assistance or unemployment benefits, predictive policing, judicial decisions and many more. Many small decisions, taken together, can have large effects.

Non-discrimination law and data protection law, if effectively enforced, could address AI-driven discrimination. However, there is a deficit of awareness among law enforcement and monitoring bodies, as well as the general public. AI also enables new types of unfair differentiation or discrimination that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour, whereas an AI system can invent new classes, which do not correlate with protected characteristics, to differentiate between people.

We probably need additional regulation to protect fairness and human rights in the area of AI. But is regulating AI in general the right approach, given that the uses of AI systems are too varied for one set of rules? In different sectors, different values are at stake and different problems arise. Therefore, sector-specific rules should be considered. The community of industry, public authorities and civil society should address this issue in the current Internet governance debate.

Format


The debate on the effects of AI and algorithmic decision-making needs to involve a wide range of stakeholders. We would like to involve representatives of national equality bodies, NGOs working with vulnerable groups affected by discrimination, and relevant law enforcement agencies.

The session will therefore consist of a short introduction followed by an interactive exchange between participants, built around examples of a few key concerns, with reflections from the panellists.

Further reading

Study on "Discrimination, artificial intelligence and algorithmic decision-making" by Prof. Frederik Zuiderveen Borgesius

ECRI revised General Policy recommendation No. 2 on Equality bodies to combat racism and intolerance at national level

ECRI General Policy Recommendation No. 15 on combating hate speech

Council of Europe Committee of Ministers Recommendation on the roles and responsibilities of internet intermediaries

People

Prof. Frederik Zuiderveen Borgesius, Professor of Law, Radboud University, NL (moderator)

Kirsi Pimiä, Anti-Discrimination Ombudsman, Finland

Tanya O’Carroll, Director, Amnesty Tech, Amnesty International

Meeri Haataja, CEO & Co-Founder, Saidot.ai

Menno Ettema, Policy Officer, Inclusion and Anti-Discrimination, Council of Europe