Human vs. algorithmic bias – is unbiased decision-making even a thing? – WS 07 2021
You are invited to become a member of the session Org Team! By joining an Org Team, you agree to your name and affiliation being published on the respective wiki page of the session for transparency. To join the Org Team, please subscribe to the mailing list and reply to the email you will receive requesting confirmation of your subscription.
Public policy in many countries favours the development and application of machine learning and other technologies broadly designated as “artificial intelligence” – with a view to boosting the economy, streamlining processes in the public sector and improving people’s quality of life. To that end, human decision-making is replaced or supplemented by automation, and automated decision-making already affects millions of people in Europe and around the world.
The long-term result, however, might be a net harm if automated systems merely reproduce the flaws of human decision-making due to inappropriate bias in the systems’ input data, or generate new bias, since deep learning can create bias even from perfect data.
But if to err is human, is it even feasible to avoid bias altogether – either in human or automated decision-making?
And provided that the bias problem can be managed, are there any other substantial problems with using AI for taking significant decisions?
The goal of this workshop is to inform the discussion on AI policy and regulation in Europe and to further the understanding of these problems by the public at large.
Until 20 May 2021.
Always use your own words to describe the session. If you decide to quote the words of an external source, give them the due respect and acknowledgement by specifying the source.
Please try out new interactive formats. EuroDIG is about dialogue not about statements, presentations and speeches. Workshops should not be organised as a small plenary.
Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example of an external link: Main page of EuroDIG
Please provide name and institution for all people you list here.
Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Subject Matter Expert (SME) and the EuroDIG Secretariat, and are kindly requested to follow EuroDIG’s session principles.
- Elena Dodonova, Council of Europe
- Yannick Meneceur, Council of Europe
Organising Team (Org Team)
List Org Team members here as they sign up.
Subject Matter Expert (SME)
- Jörn Erbguth
The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.
- André Melancia
- Desara Dushi, Vrije University Brussels
- Amali De Silva-Mitchell, Dynamic Coalition on Data Driven Health Technologies / Futurist
- Yannick Meneceur, Council of Europe
Proposed Key Participants
- Karthikeyan Natesan Ramamurthy, Research Staff Member, IBM Research AI
- Ekaterina Muravleva, Senior Research Scientist at the Skolkovo Institute of Science and Technology
- Zoltán Turbék, Vice-chair of the CAHAI Policy Development Group, Council of Europe
- Daniel Leufer, Europe Policy Analyst, Access Now
- Hiromi Arai, Head of AI Safety and Reliability Unit, RIKEN Center for Advanced Intelligence Project
Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder-balanced dialogue that also considers gender and geographical balance. Please provide short CVs of the Key Participants involved in your session at the Wiki or link to another source.
- Aleksandr Tiulkanov, Special advisor to the Digital Development Unit, Council of Europe
The moderator is the facilitator of the session at the event. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide a short CV of the moderator of your session at the Wiki or link to another source.
Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.
- Katharina Höne, Geneva Internet Platform
Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:
- are summarised on a slide and presented to the audience at the end of each session
- relate to the particular session and to European Internet governance policy
- are forward-looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
- are in (rough) consensus with the audience
Current discussion, conference calls, schedules and minutes
See the discussion tab on the upper left side of this page. Please use this page to publish:
- dates for virtual meetings or coordination calls
- short summary of calls or email exchange
Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.
- Algorithmic bias is a particular concern regarding sensitive decisions with human rights implications. Ultimately, the outcomes of machine learning should be seen as only one input into decisions eventually taken by humans.
- A broad understanding of bias is warranted to address discrimination and harm. Bias can materialise at all steps of developing and using a particular AI system, including decisions about the algorithms, the data, and the context in which the system is used. There are also mechanisms that help humans and machines work together to reach better decisions.
- Policies need to mitigate risks of algorithmic decision-making. Constraints, safety mechanisms, audit mechanisms, and algorithmic recourse all need to be in place. In addition, it is crucial, as a first step, to work towards greater transparency and explainability of AI systems involved in decision-making. Databases that list the AI systems and data in use should be considered, as well as bans on the use of certain AI systems with high risk and high harm.
- A number of technology companies have self-regulation mechanisms in place at various levels. Self-regulation of the private sector is important but ultimately not enough. Various regulatory efforts need to complement each other, and greater cooperation between stakeholders is needed to create synergies.
- Equality and fairness are values with strong cultural connotations. They are important principles for addressing bias, yet it is not easy to find intercultural agreement on some aspects of these principles. Addressing algorithmic bias therefore also requires a discussion of what kind of society we want to live in in the future.
Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/human-vs-algorithmic-bias-unbiased-decision-making-even-thing.
Will be provided here after the event.