Signed, sealed – deciphered? Holding algorithms accountable to protect fundamental rights – WS 09 2016

----
'''Please use your own words to describe this session. You may use external references, websites or publications as a source of information or inspiration; if you decide to quote them, please clearly specify the source.'''


== Session description ==
An increasing share of our social interactions depends on mediation by algorithmic decision-making processes (ADM) or by algorithmic decision-supporting processes.
ADM and data-driven models serve to automate many of the factors affecting how news and information are produced and distributed, and therefore shape public discourse. [1]
In the US, they are used for risk assessments before deciding who can be set free, at every stage of the criminal justice system – from assigning bond amounts to even more fundamental decisions about defendants’ freedom. [2] In medical centers they are used as decision-supporting tools in diagnostics.
The credit scores of individuals and the performance of teachers and students are assessed partially or fully by algorithms.
It is uncontested that ADM holds enormous promise and may contribute to the creation of less subjective, fairer processes and reduce the risk of careless mistakes. At the same time, it carries the enormous danger of delegating discrimination to subtle automated processes that are too hermetic to be noticed (an illustrative sketch follows the references below). We need to discuss several questions relating to these technologies:
*What kind of scrutiny should ADM be subject to?
*What objectives are meaningful, necessary and sufficient?
*Do we need to look for intelligibility, transparency, accountability?
*Can we expect any kind of control in light of self-learning systems?
*If not, what needs to be the result - a ban on ADM in cases where fundamental rights are affected?
*Would such a ban be enforceable?
*And last but not least: who is responsible for the outcomes of ADM - the designers of the systems, the coders, the entities implementing them, the users?
----
[1] http://www.datasociety.net/pubs/ap/QuestionsAssumptions_background-primer_2016.pdf


[2] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
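
To make the danger of "delegated discrimination" described above concrete, here is a minimal, purely illustrative sketch. All names, weights and the postcode penalty are hypothetical assumptions, not taken from any real scoring system; the point is only that a model which never sees a protected attribute can still discriminate through a correlated proxy such as a postcode.

<syntaxhighlight lang="python">
# Hypothetical illustration: a credit-style scoring rule that excludes the
# protected attribute, yet still discriminates through a correlated proxy.

applicants = [
    # "group" is the protected attribute; the model never reads it.
    {"income": 42000, "years_employed": 6, "postcode": "A1", "group": "majority"},
    {"income": 42000, "years_employed": 6, "postcode": "B2", "group": "minority"},
]

# The inputs look neutral, but if postcode "B2" historically correlates with
# the minority group, penalising it reproduces the discrimination in an
# automated, hermetic form that is hard to notice from the outside.
POSTCODE_PENALTY = {"A1": 0, "B2": 25}  # hypothetical weights

def score(applicant):
    base = applicant["income"] / 1000 + 5 * applicant["years_employed"]
    return base - POSTCODE_PENALTY[applicant["postcode"]]

for a in applicants:
    print(a["group"], score(a))
# Identical income and employment history, yet different scores:
#   majority 72.0
#   minority 47.0
</syntaxhighlight>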


== Keywords ==
Algorithms, Big Data, algorithmic accountability, data protection, human rights, innovation


== Format ==
Please try new interactive formats out. EuroDIG is about dialogue, not about statements.


== Further reading ==
*[https://www.edge.org/response-detail/26587 Datasets Over Algorithms]
*[http://www.nervanasys.com/demystifying-deep-reinforcement-learning/ Demystifying Deep Reinforcement Learning]
*[https://medium.com/@geomblog/when-an-algorithm-isn-t-2b9fe01b9bb5#.g2231qqn5 When an algorithm isn’t…]
*[http://algorithmwatch.org/ Algorithm Watch]


== People ==
'''Focal Point''': Matthias Spielkamp, Algorithm Watch

'''Key participants'''
* [https://be.linkedin.com/in/achimklabunde Achim Klabunde, Head of Sector IT Policy at the European Data Protection Supervisor]
* Hans-Peter Dittler, ISOC Board of Trustees
* Matthias Spielkamp, Algorithm Watch

'''Resource speakers:'''
* Cornelia Kutterer, Microsoft
* Elvana Thaci, Council of Europe

'''Remote moderator''': Ayden Férdeline, New Media Summer School

'''Org team:''' Matthias Spielkamp, Algorithm Watch

'''Reporter''': Lorena Jaume-Palasí, EuroDIG


== Current discussion ==
See the discussion tab on the upper left side of this page.

== Conference call. Schedules and minutes ==

== Mailing list ==


== Video record ==
[https://youtu.be/zxMXlPqzqqA See the video record on our YouTube channel]
 
== Transcript ==
[[Transcript: Signed, sealed - deciphered? Holding algorithms accountable to protect fundamental rights]]


== Messages ==
* Regulators should focus on the social and economic aspects affected by algorithms.
* There is a need for transparency with regard to how algorithms are used, rather than only transparency about how data is being processed.
* There is value in laws enabling users to request information on how algorithmic decision (supporting) processes are made, including the inputs and discriminatory criteria used, the relevance of the outputs, and the system's purpose and function (a toy sketch of such a disclosure interface follows this list).
* In daily interactions, humans use criteria that machines still cannot emulate.
* Just as individuals are held accountable and supervised by others professionally and socially, algorithms should be subject to democratic control.
* As societies, we have defined issues of responsibility and liability in a long process. When it comes to algorithmic decision making, we are just starting this process.
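
The third message above calls for a right to request information about inputs, criteria, relevance, purpose and function. The following sketch shows one way such a disclosure interface could look; it is a hypothetical illustration under assumed names (ExplainableScorer, Explanation), not an existing API or a design endorsed by the session.

<syntaxhighlight lang="python">
# Hypothetical sketch: a decision system that produces its explanation
# alongside the decision, so a user request can be answered directly.
from dataclasses import dataclass

@dataclass
class Explanation:
    purpose: str    # what the system is for (purpose and function)
    inputs: dict    # the concrete input values used for this decision
    criteria: dict  # the weights applied to each input
    output: float   # the resulting score

class ExplainableScorer:
    purpose = "Estimate repayment risk for consumer credit (illustrative only)."
    criteria = {"income_per_1000": 1.0, "years_employed": 5.0}

    def decide(self, income: float, years_employed: float):
        inputs = {"income_per_1000": income / 1000, "years_employed": years_employed}
        output = sum(self.criteria[k] * v for k, v in inputs.items())
        # The explanation is recorded at decision time, not reconstructed later.
        return output, Explanation(self.purpose, inputs, self.criteria, output)

score, why = ExplainableScorer().decide(income=42000, years_employed=6)
print(score)         # 72.0
print(why.criteria)  # which criteria were applied, and with what weight
</syntaxhighlight>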


== Session twitter hashtag ==
Hashtag: #eurodig16 #alacc


[[Category:Sessions]][[Category:Sessions 2016]][[Category:Copyright]][[Category:Netneutrality]]
