Algorithmic Transparency – Flash 08 2018

From EuroDIG Wiki

6 June 2018 | 11:00-11:30
Consolidated programme 2018

Session teaser

The use of automated data-processing and decision-making techniques in all spheres of life raises challenges not only for individuals, but also for communities and societies. Do we know enough about how these systems operate, the criteria they apply and the data they are fed? Do we need regulation? If so, what kind? Standards are being developed in various forums. What is the role of internet intermediaries? What is the role of civil society and the technical community? Find out more and contribute to the debate!


Keywords

Algorithms, Artificial Intelligence, Ethics, Ethical Considerations, Responsibility of State, Responsibility of Internet Intermediaries, Human Rights, Fairness, Accountability, Transparency

Session description

The use of algorithms affects many different spheres of daily life. Users must therefore be adequately informed about the key aspects of automated data-processing techniques, including the criteria according to which algorithms collect and process data, the specific purposes of the processing, and the reasoning behind algorithmic decision-making.

Rather than demanding disclosure of an algorithm's source code – which is not always technically feasible, and may be precluded for cybersecurity and trade-secret reasons – effective transparency means explaining to users, scholars, and independent experts from the technical community how an algorithm's results are produced.

While pleas to increase the transparency of algorithms are becoming more and more common, the task is not as straightforward as it seems. Achieving effective transparency is a challenge in its own right. Moreover, scholars and technical experts observe that transparency alone is no longer enough, nor can it be an aim in itself. More important is to embed human rights safeguards into the architecture of algorithms – and to keep the internet community informed about that process.

Debates on these issues are lively in various forums, including the Council of Europe's Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence. Internet intermediaries, too, are making efforts to render their operations more transparent. But is that enough? What kinds of changes and guidelines does the internet community need and expect?


Presentations by representatives of the Council of Europe and of Google on the evolving standard-setting, followed by an open floor discussion.

Key participants


  • Małgorzata Pęk, Council of Europe
  • Clara Sommier, Google
  • Xianhong Hu, UNESCO