Signed, sealed – deciphered? Holding algorithms accountable to protect fundamental rights – WS 09 2016

From EuroDIG Wiki
Revision as of 14:36, 26 May 2016


Please use your own words to describe this session. You may use external references, websites or publications as a source of information or inspiration; if you decide to quote them, please clearly specify the source.

Session teaser

Session description

An increasing share of our social interactions depends on mediation by algorithmic decision-making processes (ADM) or by algorithmic decision-supporting processes. ADM and data-driven models serve to automate many of the factors affecting how news and information is produced and distributed, and therefore shape the public discourse. [1] In the US, they are used for risk assessments at every stage of the criminal justice system before deciding who can be set free, from assigning bond amounts to even more fundamental decisions about defendants’ freedom. [2] In medical centres they are used as decision-supporting tools in diagnostics. Individuals’ credit scores and the performance of teachers and students are assessed partially or fully with algorithms. It is uncontested that ADM holds enormous promise and may contribute to the creation of less subjective, fairer processes and reduce the risk of careless mistakes. At the same time, it carries enormous dangers of delegating discrimination to subtle automated processes that are too hermetic to be noticed. We need to discuss different questions relating to these technologies:

  • What kind of scrutiny should ADM be subjected to?
  • What objectives are meaningful, necessary and sufficient?
  • Do we need to look for intelligibility, transparency, accountability?
  • Can we expect any kind of control in light of self-learning systems?
  • If not, what should the consequence be – a ban on ADM in cases where fundamental rights are affected?
  • Would such a ban be enforceable?
  • And last but not least: Who is responsible for the outcomes of ADM - the designers of the systems, the coders, the entities implementing them, the users?

[1] http://www.datasociety.net/pubs/ap/QuestionsAssumptions_background-primer_2016.pdf

[2] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Keywords

Algorithms, Big Data, algorithmic accountability, data protection, human rights, innovation

Format

Please try out new interactive formats. EuroDIG is about dialogue, not about statements.

Further reading

People

  • Focal Point

Matthias Spielkamp, Algorithm Watch

  • Key participants

Giovanni Buttarelli, European Data Protection Supervisor (tbc)

Daniel Drewer, Head of the Data Protection Office (tbc)

Cornelia Kutterer, Director, Digital Policy, EMEA at Microsoft (tbc)

Irina Vasiliu, Data Protection Unit, DG Justice, European Commission (tbc)

  • Moderator

Matthias Spielkamp, Algorithm Watch

  • Remote moderator

Ayden Férdeline, New Media Summer School

  • Org team

The organising team is the group of people shaping the session. Every interested individual can become a member of an organising team (org team).

  • Reporter

Current discussion

See [the discussion tab] on the upper left side of this page.

Conference call. Schedules and minutes

  • Dates for virtual meetings or coordination calls
  • Short summaries of calls or email exchanges
  • Be as open and transparent as possible in order to allow others to get involved and contact you
  • Use the wiki not only as the place to publish results but also to summarise and publish the discussion process

Mailing list

Remote participation

Final report

Deadline 2016

Session twitter hashtag

Hashtag: