WS 9: Signed, sealed - deciphered? Holding algorithms accountable to protect fundamental rights
Please use your own words to describe this session. You may use external references, websites or publications as a source of information or inspiration, if you decide to quote them, please clearly specify the source.
An increasing share of our social interactions is mediated by algorithmic decision-making (ADM) processes or by algorithmic decision-supporting processes. ADM and data-driven models automate many of the factors that determine how news and information are produced and distributed, and thereby shape public discourse. In the US, they are used for risk assessments at every stage of the criminal justice system, from assigning bond amounts to even more fundamental decisions about defendants’ freedom. In medical centers they serve as decision-supporting tools in diagnostics. The credit scores of individuals and the performance of teachers and students are assessed partially or fully by algorithms. It is uncontested that ADM holds enormous promise and may contribute to less subjective, fairer processes and reduce the risk of careless mistakes. At the same time, it carries the enormous danger of delegating discrimination to subtle automated processes that are too hermetic to be noticed. We need to discuss several questions relating to these technologies:
- What kind of scrutiny must ADM be subjected to?
- What objectives are meaningful, necessary and sufficient?
- Do we need to look for intelligibility, transparency, accountability?
- Can we expect any kind of control in light of self-learning systems?
- If not, what needs to be the result - a ban on ADM in cases when fundamental rights are affected?
- Would such a ban be enforceable?
- And last but not least: Who is responsible for the outcomes of ADM - the designers of the systems, the coders, the entities implementing them, the users?
Algorithms, Big Data, algorithmic accountability, data protection, human rights, innovation
Please try out new interactive formats. EuroDIG is about dialogue, not about statements.
- Datasets Over Algorithms
- Demystifying Deep Reinforcement Learning
- When an algorithm isn’t…
- Algorithm Watch
Focal Point: Matthias Spielkamp, Algorithm Watch
- Achim Klabunde, Head of Sector IT Policy at European Data Protection Supervisor
- Hans-Peter Dittler, ISOC board of trustees
- Matthias Spielkamp, Algorithm Watch
- Cornelia Kutterer, Microsoft
- Elvana Thaci, Council of Europe
Remote moderator: Ayden Férdeline, New Media Summer School
Org team: Matthias Spielkamp, Algorithm Watch
Reporter: Lorena Jaume-Palasí, EuroDIG
Conference call. Schedules and minutes
- dates for virtual meetings or coordination calls
- short summary of calls or email exchange
- be as open and transparent as possible in order to allow others to get involved and contact you
- use the wiki not only as the place to publish results but also to summarize and publish the discussion process
- Regulators should focus on the social and economic aspects affected by algorithms.
- There is a need for transparency with regard to how algorithms are used, rather than transparency about how data is processed.
- There is value in laws enabling users to request information on how algorithmic decision (supporting) processes work, including the inputs and discriminatory criteria used, the relevance of outputs, as well as their purpose and function.
- Humans use criteria in their daily interactions that machines still cannot emulate.
- By analogy with individuals, who are held accountable and supervised by others professionally and socially, algorithms should be subject to democratic control.
- As societies, we have defined issues of responsibility and liability through a long process. When it comes to algorithmic decision making, we are just beginning that process.
Session Twitter hashtag
Hashtag: #eurodig16 #alacc