Signed, sealed – deciphered? Holding algorithms accountable to protect fundamental rights – WS 09 2016

From EuroDIG Wiki
Jump to navigation Jump to search

10 June 2016 | 14:30-16:00
Programme overview 2016

Session description

An increasing share of our social interactions depends on the mediation by algorithmic decision making processes (ADM) or by algorithmic decision supporting processes. ADM and data-driven models serve to automate many of the factors affecting how news and information is produced and distributed and therefore shape the public discourse. [1] In the US, they are used for risk assessments before deciding who can be set free at every stage of the criminal justice system, from assigning bond amounts to even more fundamental decisions about defendants’ freedom. [2] In medical centers they are used as decision supporting tools in the diagnostics. The credit scores of individual’s and the performances of teachers and students are assessed partially or fully with algorithms. It is uncontested that AMD holds enormous promise and may contribute to the creation of less subjective, fairer processes and reduce the risk of careless mistakes. At the same time it carries enormous dangers of delegating discrimination to subtle automated processes that are too hermetic to be noticed. We need to discuss different questions relating to these technologies:

  • What kind of scrutiny does ADM have to be submitted to?
  • What objectives are meaningful, necessary and sufficient?
  • Do we need to look for intelligibility, transparency, accountability?
  • Can we expect any kind of control in light of self-learning systems?
  • If not, what needs to be the result - a ban on ADM in cases when fundamental rights are affected?
  • Would such a ban be enforceable?
  • And last but not least: Who is responsible for the outcomes of ADM - the designers of the systems, the coders, the entities implementing them, the users?

[1] http://www.datasociety.net/pubs/ap/QuestionsAssumptions_background-primer_2016.pdf

[2] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Keywords

Algorithms, Big Data, algorithmic accountability, data protection, human rights, innovation

Format

Please try new interactive formats out. EuroDIG is about dialogue not about statements.

Further reading

People

Focal Point: Matthias Spielkamp, Algorithm Watch

Key participants

Ressource speakers:

  • Cornelia Kutterer, Microsoft
  • Elvana Thaci, Council of Europe

Remote moderator: Ayden Férdeline, New Media Summer School

Org team: Matthias Spielkamp, Algorithm Watch

Reporter: Lorena Jaume-Palasí, EuroDIG

Current discussion

See [the discussion tab] on the upper left side of this page.

Conference call. Schedules and minutes

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange
  • be as open and transparent as possible in order to allow others to get involved and contact you
  • use the wiki not only as the place to publish results but also to summarize and publish the discussion process

Mailing list

Messages

  • Regulators should focus on the social and economical aspects affected by algorithms.
  • There is a need for transparency with regards to how algorithms are used instead of transparency on how data is being processed.
  • There is a value in laws enabling users to request information on how algorithmic decision (supporting) processes are made, including the inputs and discriminatory criteria used, the relevance of outputs as well as purpose and function.
  • Humans use criteria that still cannot be emulated by machines when interacting in daily life.
  • In analogy to individuals who are accountable and supervised by others professionally and socially, algorithms should be held accountable to democratic control.
  • As societies we have defined issues of responsibility and liability in a long process. When it comes to algorithmic decision making we are just starting this process.

Video record

See the video record in our youtube channel

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com


This text is being provided in a rough draft format. Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.


>> MODERATOR: I give a short introduction to our topic here. And I would also like to point out that Lorena, who is here as the rapporteur for this session, is also one of the cofounders of Watching the Initiative that we started recently, and we’ll talk about it soon. Just you know that she’s also participating in that.

With me on the panel, Achim Klabunde. He’s the IT director of the European supervisor for data protection. And Hans-Peter, who is an entrepreneur from Germany and also a Board of Trustees member at ISOC.

And we will be discussing accountability. And you may ask what is it? I’m a little unhappy with the large room that we got. I mean, it would be fantastic if we had a roomful of people and we needed all that room. The problem now is that IEP intended this to be a real workshop in the sense that we don’t want to give long presentations. I will give a presentation. I hope it’ll be short enough. We don’t want to give long presentations here, and then talk on the panel for like half an hour before we include you. No, that’s not the way we would like to do it. We really would like to have an interactive session here because I think we have more questions than answers when we’re discussing this topic.

We have another mic, so let’s say we have two mics to spare here in the front. So we’ll give them out pretty soon so everyone can use them because otherwise it would be hard to hear. And also this session is live streamed and recorded, so please use the microphones if you have a question even if that means that you need to wait for 10 seconds before it arrives at your place. Thank you very much.

Now, what is it that we are actually talking about? And now it will be a little difficult. I’m going to stand up, not because I think it’s so important to give you that presentation that I need to stand up, but otherwise I can’t see it because it’s not on this computer, it’s on a different computer. And it’s a little hard to do a presentation if you can’t see your own slides.

So as I just said a couple weeks ago at the Republica conference, I catch it, an initiative that we launched. It’s not an organisation yet. It is just an initiative by four people who put their heads together and decided that it was about time to found something like this. And I’ll tell you in a minute what it is. But why did we decide that it was about time to look into

I’ll show you a little bit the history really quickly. About 1 1/2, probably almost two years ago, I think it’s 1 1/2 years ago, a scholar at – who was at the Tow Center for Journalism at Columbia University in New York put out a report called Algorithmic Reporting on the Investigation of Black Boxes. His name is Nick Decapolis. He is a professor for computer science at the University of Madison, I believe, I’m not quite certain right now. And this made quite some waves in the journalistic community, because it addressed the topic that no one really had thought about until then. Although in computer sciences, this is something that had been thought about for quite a while. The ethics of algorithms, the governance of algorithms, we have automated decision making processes, and someone has to look at them and see whether someone can be held accountable depending upon what decisions they make or predetermined.

This is when I started to look at this and tried to bring this to Germany. I am a journalist by profession. I run an NGO in Germany. We have been doing reporting on the legal aspects of digitisation for more than 10 years now. And I decided that this was something that we needed to look into. So I presented this at different conversations, for instance, the largest investigative research conference in Germany, and the uptake was quite encouraging. Many people said that they were interested in it, weren’t really clear on how this works, but they were at least interested in it. So I tried to push this.

And last year I developed a proposal, along with two colleagues of mine. One was on data, one was another professor, graph theory and analysis of networks at a university. We wanted to – or we put in a proposal for a call on combination of data journalism and science, and we wanted to investigate predictive policing in Germany.

Now, unfortunately, to cut this short, we didn’t get the money, but what happened was that the three of us decided that this is too important an issue to just leave it aside even though we don’t get the funding. And that’s when we decided to found Algorithm Watch., which is interesting because it got a lot of attention in Germany.

But first of all, what are we looking into? I show you a couple of examples where algorithmic accountability, the question of who’s responsible for what automate decision making systems do – can be shown?

The first one is Firecast. That is a New York City fire department programme that has been using risk based inspection systems, which is an Oracle based programme with data mining capabilities to better anticipate where fires may spark. And the centerpiece is that Firecast that enters five different agencies into list factors to create lists of buildings that are most vulnerable to fire. Sound like a smart approach in a city where you have a lot of fires and probably not enough firemen to take care of them, right? So you need something like this risk assessment.

But I think the consequences of that are quite clear. Now, if you, for example, relocate a fire department or if you, for example, you have some firemen on leave that you think there’s not a high risk of fires in that area, then this might have quite big consequences. Now who’s looking into that? Who is controlling what this algorithm does, how it works, where the data comes from, how reliable it is, so on and so forth? That is a question that can be asked. So this seems to be one of the examples that make clear what algorithmic accountability is for.

I show you another one, and that is what I already talked about. This is a story from The Verge, it’s called Policing the Future. In the aftermath of Ferguson, crime predicting software. It’s a long story on how, as it said, the police in this area, after what happened in Ferguson, decided that they needed to have help by software. Algorithms in the end, automated decision making systems. Now, this seems far away, but, I mean, there are people here from many, very many countries inside and outside Europe and in Germany alone, we now have three different places where predictive policing is also tested. We have those in Bavaria and (another area), examples of that in Britain. And I’m certain that there are examples in other European countries.

In the states, they are already experimenting not just with predictive policing in this sense where, for example, there’s crime mapping being done, and then the police decides where to patrol more intensively; but they are also looking for individual offenders already trying to put the risk assessment on them, trying to decide who will commit a crime in the future. There are a lot of parallels that are drawn to precox, the movie, I don’t think we’re that far, but it shows where the development could lead to.

There was also a fantastic story very recently that was brought by Propublica, an American-sponsored – or it’s funding journalism is what it’s called. It’s not a publication in the mainstream sense, but they do investigative research for the public, that’s what they call it. And they broke the story that they call Machine Bias of a system in the United States that is actively being used to produce such risk assessments on criminals. What they do is they have an automated system that looks – that is fed with some data, and it produces a risk assessment that says these people are more prevalent than others to commit crimes in the future, so judges, for example, can decide whether to grant parole or not.

Now, to us, sometimes to Europeans, it’s sometimes quite interesting what kind of technologies the U.S. already uses that we would probably have big problems here. On the other side, it’s also interesting how transparent they are about the data. So what Propublica did is they put in a Freedom of Information request for these risk assessments, and they were handed over these risk assessment. And what they did then – and that was a lot of research they did, a lot of resources for that – they tracked down these people and found out how useful and how good that risk assessment turned out to be in the end.

And the result is: It’s not only not very reliable, it’s also biased against blacks. Meaning that black people got a worse risk assessment than white people. And in the end, it turned out that many more of the white people, I mean statistical sense, committed crimes that were not foreseen by the risk assessment. So that’s a pretty big story that came out like two weeks ago on the consequences of using these automated systems.

And I think most of the people in the room are familiar with the debate about Facebook and their trending topics. That has made headlines worldwide. And this is about the fact that it came out that Facebook is not just using algorithms to determine what topics should trend on their news feed, but they used human editors to, one publication, this one here, it’s by Microsoft Research, they call them Click Workers because it also came out that the working conditions for these so called “journalists” were pretty dire, that these this algorithm was supported by click workers who then decided that it was a good idea if that topic is really trending or not.

And then there was an outcry. And that outcry was very interesting because many people said: Whoa, this is not really neutral because there are people looking at it. And apparently they expected that it was neutral before when no people looked at it and it was only a algorithm that looked at it, right? So all of a sudden it was the people’s fault that there was something that was not objective and not value neutral. And you can imagine that our proposition is that, well, the algorithms are rarely neutral, right?

So what is ADM? Why do we call it algorithmic decision making? Because we would like to take a more holistic approach. There has been a lot of debate about algorithms, and it’s a catch all phrase. And some people are really tired of hearing it, especially the people who know more about technology, because they say algorithms, algorithms, what about that algorithm?

So what we set out to do with Algorithm Watch is to find, first of all, a better definition. And we call the following processes algorithmic decision making. We need to design procedures to gather data. So if we say that there is a design there for these procedures to gather data, you can imagine that they’re already value judgments being made. We gather the data, right? We design algorithms, then, to analyze the data, interpret the results of this analysis based on the human defined interpretation model. And then in the end, act automatically based on the interpretation as determined in a human defined decision making model.

Now, we could have left out here the human defined in the last two sentences, and we debated that because it’s quite clear that if you design such procedures, they are human defined. But we wanted to make it really clear what we are talking about here. So we included that human defined to really drive home the point that we are talking about something where humans have a very, very big impact on the way the systems are designed and the systems work in the end. This is also what we would like to discuss later on.

Now, what are the issues that we see here? We’re talking about this algorithmic decision making, this ADM. What kind of scrutiny does ADM have to be submitted to? That’s the very general question, the outset of our question here. I’m really looking for objectives from you here. What are meaningful, necessary and sufficient? For example, what examples?

They often said we need to break up Google and we need to make the search algorithm completely transparent, right? We can discuss this at Algorithm Watch. We say this is not a meaningful objective. Whether it’s necessary and sufficient is a different question, but it’s not even meaningful in the first place.

What do we really look for? Do we look for intelligibility? Transparency? Accountability? What is it that we are talking about?

Transparency is one of the main topics we’re talking about these decision making processes. And many people say they have to be transparent. They have to be transparent. If you talk to Katerina, who is the computer scientist on our team, she often says the conferences – and I had that, I notice that there was misunderstood also – she says, you know, transparency is not really worth anything, because if someone gives me an algorithm and says look at it, I cannot do anything with that. It depends on how the variables are named. It depends on how I can – whether there’s any data that I can look at.

And then afterwards, some people came to me and said so there’s nothing we can do, right? Well, no, of course there’s something that we can do. It’s just that we need to define the goals that we have, and transparency is not imperative. It’s part of what we’re talking about. But we’ll get to that.

Can we expect any control in light of self learning systems? That is some criticism that is already voiced in this discussion because people say well, yeah, come on. We have algorithms, but we have this machine learning, self learning systems. And they are so complex, you can’t really find out in the end where that outcome, where that result comes from.

Okay. I’m not so sure, but even if that is the case, what is the consequence of that? And it could be a consequence that we said, that we say if we cannot any kind of control, what needs to be the result?

For example, a ban on algorithmic decision making, automated decision making in cases when fundamental rights are affected? You know, we could decide that if someone applies for a Visa and there is a system that determines whether this person will get the Visa or not, and that is a self learning algorithm behind that and no one can really explain what the outcome is or why that outcome is the outcome of the system, then we would probably have to decide that we do not want to use such a system to determine who gets a Visa and who doesn’t, right? It could be one of the outcomes.

Would such a ban be enforceable? Well, that’s a completely different question, right?

And last, but not least, who is responsible for the algorithmic decision making process? The designers, the coders, those using it, the users?

Now, here we often run into question marks, right? Why would the users be responsible for anything that an algorithm does, algorithm decision making system does?

Some of you might have heard the story about Tay. Recently released. It was a chat bot that was developed by Microsoft. And this chat bot was programmed to discuss openly on Twitter with users. And it would integrate the responses by users and it would try to self learn from these users and then change its reactions accordingly.

Now, I can’t remember how long it took, probably just half a day. 16 hours. So many people know about this case. 16 hours it took before Microsoft took it off the Internet because it started cursing. It started to be racist. It started to be sexist and made all kinds of remarks that were probably not in line with Microsoft’s policies, right? So they decided that they had to stop this experiment.

Now, why was that? Because of the interaction with the people who were feeding that stuff to Tay. So is it a problem of the algorithm? You could also argue that the users who are interacting with that also have a responsibility.

Now, what ways do we have to investigate algorithms? We can increase transparency, for example, by using Freedom of Information requests. I give you an example for that. The New York City Department of Education, they use a value added model. That’s what they call it. Since 2007. The purpose is to rank about 15 percent of the teachers in the city. The model’s intent is to control for individual students’ previous performance or special education status and compute a score indicating a teacher’s contribution to students’ learning. Quite a complex system. It doesn’t work in the way that you get a form – a student gets a form and he or she ticks some boxes and then this is processed, but it looks at the students’ performances and tries to find out what contribution the teachers had; right? And this is used to assess teachers’ performance in New York.

Now, there was a Freedom of Information request by media, and they obtained these rankings and scores that were used for that. And the teachers’ union, looking at that, and said the reports are deeply flawed, subjective measurements that were intended to be confidential. There was only on the number correlation 24 percent between any given teachers’ scores across different pupils or classes, which suggests that the output scores are very noisy and don’t precisely isolate the contribution of the teacher.

Now, many people who deal with such systems would have probably said that it was very difficult to design something like that in the first place, but it was done. It was used. And only because of some media research or investigation this came out and people had the chance to assess this. Okay?

I mean, probably we don’t disagree in this room here that this is quite an important case when we are talking about the assessment of teachers’ performance.

There’s a different way to investigative algorithms that’s called reverse engineering. And I’m using a graphic here from Nick Decapolis that is very simple but easy, I think. We have different statuses. For example, here we have that black box, which is the automated decision making systems. But we can observe the inputs and the outputs of the system, right? So there you can try to reverse engineer by giving different inputs, see what the outputs are and then try to understand how the thing is working. In many cases, though, we have – I mean, this is difficult enough, right? This is difficult enough. But in many areas, we have only the outputs observable and not the inputs observable. That’s much harder. Here you can work with Freedom of Information requests to make also the inputs observable; but as I said, it’s very hard, and many people argue that reverse engineering is hopeless with complex algorithms. Although there are a couple of good examples.

Nick Decapolis himself, he used a method like this to look at how Uber distributes cars in a certain area because what they claim is that they do it only for the they optimize for their customers and drivers at the same time they are trying to allocate as many drivers in an area where there are a lot of customers so customers pay a low price but drivers get a lot of routes and a lot of customers. And he found out that on that point, it’s not the case, you know. They have an API, Uber does, because you need to have apps for that, so there needs to be an API. Apparently it’s accessible. So he accessed the API, looked at the data and found out that at least on that claim, they are not saying the truth. How important is that? You can discuss that. It’s just an example of how reverse engineering can work, you know, in some determined or specified cases.

Okay. And then why Algorithm Watch? The last two slides here, we have written up a manifesto. Sounds preposterous, but we thought it was important to make clear what our aim is. It’s a little long, but I still want to go through it quickly. We said that algorithmic decision making is a fact of life today, will be a much bigger fact of life tomorrow. This is to say that we don’t say that algorithmic decision making should not be done, right? This is not what we say. We are not against using these in society. We have different focus. It carries enormous dangers and holds promise. They are black boxes by people affected by them is not a law of nature and must end.

Now, that is a pretty clear statement, especially when or depending how people are affected. We do not accept that these are just black boxes. And then what we also say is ADM is never neutral. The creator of ADM is responsible for its results. ADM is created not only by its designer. I also referred to that when I talked about Tay, right? This is what we’re talking about here. ADM has to be intelligible in order to be held to democratic control.

We use that intelligible phrase because we think that it covers more than just transparency, right? Transparency is, in many cases, not enough to understand what is happening. And there are lots of strategies to use or to counter transparency. Obfuscation is one of them. You want to have data? Here you go. You have data. You have 5 million datasets and now go and try to find out what is behind that. I mean, that is called obfuscation, and people would argue yeah, we make this completely transparent. But on the other hand, more transparency is not really worth a lot here.

So we’re talking about intelligibility.

Democratic societies have the duty to achieve this with ADM with regulation and suitable oversight institutions, and we have to decide how much of our freedom we allow ADM to pre empt. We’re not saying that ADM cannot be allowed to pre empt freedom; we’re just saying we have to decide in a democratic process how this is being done.

And how do we do this? We say that we would like to watch – that is in the name, Algorithm Watch. We would like to look at algorithms, we would like to point to algorithms that are already being used for different purposes, the way I’m doing it in this presentation. We would like to explain how algorithms work because there is a lot of misunderstanding about this. And people do not really understand what an algorithm is. And they probably make assumptions that are not true and that can instill fear in people where none is really necessary.

On the other hand, you know, you could probably think that this is not really important. But in fact it is really important what we’re looking at. We want to network. This is also what this is for here. We are in a very, very early process. We don’t have funding right now. So this is a very ambitious aim. But I think we cannot start early enough to network with people who have also an interest in doing that. There are many around the world already. And I see the room is filling, so I’m really happy about that. There seems to be very many people who are interested in that, as well. And we want to engage, which in the end means that we would like to make some propositions of how to go forward.

Okay. So that was it. Algorithmwatch.org. You can have a look at the website. And now I’d like to get into the discussion right away. And I see, Achim, that you made tons of notes. So I probably don’t even want to ask a different question than what do you think about all that?

>> ACHIM: Thank you very much, Matthias. Thank you, everybody behind, thank you for inviting me and for the initiative. Thank you to all of you for coming and to the millions which are watching us behind the computer screens or in front of their computer screens. Welcome.

I was about to, I thought about it, I was about to start commending the organizers when I read the text on the conference wiki, which in my view is a bit more advanced than the usual. Hype, oh the algorithms are threatening us.

>> Glad to hear that.

>> Having studied computer science, one of the first things I learned was the definition of an algorithm. Then I would fully agree on that basis with your colleague, computer scientists, when you just have an algorithm, you are in no way understanding in what it means to humans and for society and for the economy.

So using the term of algorithmic decision making, and calling the thing at which you are looking algorithmic decision making systems or – I’m hesitating – algorithmic decision support systems, which is a technical term which has been around for ages in computer science. So I think we are getting closer to what we need to or what is the interest of a decision.

And I think what is the element that makes these algorithmic decision making or preparing systems relevant is that, I mean, decisions support systems is an area of research and science in computer science for the last 30, 40 years. I mean, that has always been something that computers have been used for, to organise huge amounts of data in such a way that it is much easier for the human to come to a decision. It’s when you have the balance sheet of your company on your screen when you see how your staff members are being occupied with the task that you have the decision easier and then the new task comes, to whom to allocate. Then you choose your colleague who has the greatest likelihood to complete this task on time. So that’s nothing new.

So what we are now talking and in this case, it’s also by the nature of pretty transparent what’s happening in the computer service. Not for everyone, though. Not necessarily for everyone who isn’t trained what to do, the same decision process, that is one that is implemented in the computer system.

What happened is that these systems used to be completely algorithmic. So it was totally clear input what happens inside and what comes out of totally predictable. But then these systems got more sophisticated in going beyond actual algorithms to apply heuristic probabilistic as a procedure, which are very much data driven and to inference and certain limits of new, but results which are not what the programmer has known at the moment when he was writing a programme. But that could only can only develop when the right dataset is added to that. So you have algorithms which apply a set of rules which may be technically pieces of data themselves. Some of these rules are made by the designers of the system, and other rules can be inferred when the system operates on a set of data. So it would find that there is a correlation between a certain combination of factors on one hand and a specific factor on the other hand. And then it would use that as a shortcut, which becomes more and more prominent so long as the system operates. It is not static. It is also dynamic. It is changing all the time. The data makes the outcome change. So when you put in continue providing new data to that, then the output develops. And the theory is becoming better and better and more precise, so on.

So you have the algorithms. You have the inference rules. And you have data. And this is very dynamic.

Just to give one example where you can see this data – and the tests have already been mentioned, the Google search engine algorithm. The objective, according to public releases, is that the user who types in some kind of query into the search engine always finds – it’s a first link of Google – what he was actually looking for. So the success measure is in how many cases do the users click on the first link? Because that is what I feel – and only on the first link, don’t come back and remodel the query and something. So the optimization strategy is always to provide the best result with the first link. I have no formal confirmation from any Google representative that that’s the case, but I mean that’s what I read about search engine optimization to be objective.

And of course this changes permanently. So when you today put in the search engine “Internet Governance, Brussels, location” then EuroDIG ETSI Square would be the correct answer. A couple of months from now, there may be an IGF meeting at the Hilton in Brussels. So IGF Hilton Brussels would be the correct answer to the same search string. So it’s not fixed. It’s developing. It needs constant optimization. And on the one hand, optimization of the output in the moment but also optimization of the algorithm to get all these connections.

End of the technical part of the section.

Second question, so we have to – if you want to do a societal discussion about that and establish an algorithm, you have to be very sure what you talk about that is not just the algorithms; it’s the selection of the data. It’s a way of data provisioning. So dynamisty of the system, all this needs to be taken account. And you have to start, of course, from the objective.

Can you control algorithms? Well, I could – maybe I try to explain a bit what we have now in data protection. But I just want to point you to one example where self learning algorithms have actually been put under regulation, and that is in the financial trading sector. Because stock traders also are confronted with huge amounts of information. And when they do day trading, which is like changing the portfolio several times a day or permanently optimizing the portfolio, they, of course, need to know very quickly what stocks to sell, which stocks to buy, what are acceptable prices for selling, for buying and so on.

And as you can imagine, human stock traders take several years before they are really good at that. But you can buy programmes, expert systems, programmes which are on computers which take these decisions automatically. And 5, 6, 7, 8, 9, 10 years ago, started to connect these programmes from stock traders automatically to the stock exchange system. So it’s able to actually place the orders. Without the owner of all this even knowing what they were doing. So and now imagine there were thousands of these systems out in the different stock trader offices. And they were all looking at the same data. And they were all coming to the same conclusions. And they were all selling the same stocks on the market. So the market was full of sales orders for this stock. And of course what happens when the market has too many orders, the rate goes down. So they created some mini crashers or isolated crashers just by the algorithms reinforcing each other. And it’s happened quite a number of times in the stock market. And as a result of that, we now got regulations on these algorithms.

>> Can you explain really briefly? Because I’d like to come to Hans Peter. Can you say how this regulation was achieved in that case?

>> I cannot give you details. I can just point you as a result. But the reason was obviously that the financial damage was so big that very many people agreed that something needed to be done about this.

Let me just make one other point on this on data protection. You invited someone from the data protection authority. When data protection was first conceived in the 1970s, the inventors of this concept made one big mistake, and that was to call it data protection, which has been the reason of tons of misunderstandings because it’s not about protecting data; it’s about protecting humans and their rights against the impact of the processing of data related to them.

And the background of that was that they had two visions in mind when they thought about it. One was that was computers’ data could be collected and processed and analyzed much efficiently than any human could do that or could be conceived by any human at the time.

And the second one was that was a similar view but in particular between the individual and the government which exists both in Europe and in the U.S. All of these protection, the 1970s and Federal Privacy Act of ’74. And here we have our whole landscape.

And the other perspective which was more specific to Europe was that they saw this creating new balance of power not only between the state and the individual but also between big financial economic private actors and the individual. That’s the social liberal approach of Europe at the time. And so that’s why the European privacy laws also apply to the private sector.

And there were two factors in it causing this imbalance. One was surveillance that was possible of course at a larger scale, and the other one was actually automatic decision making.

That’s why, for example, European directive on data protection of ’95 has an article which says that individuals have the right to object to automatic – to decisions affecting them which are solely based automatic processing. If there is a decision, then it’s okay. That’s the way – how it is handled. And factors that your credit rating is calculated by the computer, normally the bank clerk makes a decision whether you get a loan or not. We already have something in place here which is baseline for any future regulation.

And my last point, what are we doing now, the organizers asked me to say a couple works on the ethics initiative since was launched earlier this year. The current system was much more previsionary than one would have thought at the time. They were already dealing with capabilities computers have only acquired over the last few years. 40 years ago, it was in the vision but not possible yet. But now we are getting there.

At the focus is on the processing of personal data. So data relating to me or you or to the one that is concerned. And what we see now is that it’s data which is not necessarily really related to the people who are the subject of the decision, but it’s a corpus of data which may not be qualified as personal data by strict application of today’s rules.

So the ethical question – or the question to the ethical group we have set up is: What are the issues that we need to look at now to achieve the same of what maintaining freedoms, fundamental rights in view of possibilities which may not be captured by the system which was based on the notion of personal data? So what are the new issues?

And now here we have some events even in the examples which point in the direction that there may be effects on individuals, but also society as a whole, which may need to be captured with instruments which we don’t have in place yet. Thank you.

>> MATTHIAS SPIELKAMP: Okay. Thank you very much for that very good overview of what data protection has to do with it and also what you’re trying to achieve right now or in the coming months with that initiative.

Hans Peter, when we started to talk about this algorithmic decision making and the Algorithm Watch, I got a reaction from you that I often get from people who are technically very knowledgeable, some eye rolling, suggesting that we set out to do something that is either useless or impossible. But I think when we started talking about it, you understood that we were trying to look at something meaningful and helpful.

Now, from your perspective, of someone who’s been developing networks and also running networks, what is the what kind of use do you see in looking at this algorithmic decision making processes? If there is any.

>> HANS PETER DITTLER: Okay, thank you for this introduction. I’ll try to elaborate a little bit on my eye rolling. Why did I react in this way?

To be honest, I didn’t understand what you are bringing forward to me. I only got the words “accountability” and “algorithm.” And immediately I was shooting forward with being involved with Internet Society and the IGF and making the Internet running and going forward and keeping the Internet up. I was immediately thinking “oh, one more who requests that we develop at the IETF algorithms which make the Internet more accountable or make it more secure or allow better privacy.” And therefore my reaction was, “okay, one more of these initiatives. Where is the news factor of it?”

But it took only two or three more sentences to make clear that we are not talking about Internet technology on one of the lower layers, on one of the transport layers or talking about back bones; we are talking about applications running on top of the Internet, applications working with Internet technology or things which are enabled only by being run on the Internet. And suddenly it came to me, oh, wow, that’s a very, very interesting topic.

And then thinking a little bit by the side, I’m not only sitting on the Board of the Internet Society, I have also a day to day job with this consultancy work in companies about security, about Internet. And what I’m doing there is I’m implementing algorithms which do select things, which do decision preparation, which do helping making decisions somewhere or perhaps doing decisions on their own. And, really, who controls those? Who controls algorithms? Who controls the input to the algorithm?

I like your black box with only one input and one output with reality. Most of them look with mazes with hundreds of inputs and at least dozens of outputs. Though making those things with reverse engineering understandable might be a nightmare or must be a nightmare for everybody.

And also just taking on the token from your colleague professor, which is replied or which has been taken up already from my neighbor, just having an algorithm, even having a description of an algorithm, doesn’t help you to understand it. Without the whole environment, how inputs are selected, how inputs are coming from, how they might be it’s absolutely impossible to decide whether this algorithm works correctly or not.

Good example is in Germany, algorithm which officially was intended to decide to switch offer some functionality of a motor control if some temperature or some environmental parameters are out of normal working conditions and which was a little bit modified by only changing some input parameters and suddenly recognizes I’m running on a test set, so make it clean. And the algorithm itself is absolutely nice. It’s absolutely perfect. Even if you would be able to check it, you wouldn’t find anything wrong with it. But changing the input from a simple temperature sensor into a speed and movement sensor changes the whole mechanism and changes completely outcome.

So I was challenged by you set up: How can we make those things accountable? How can we check if they work like expected? And to be honest, I have no regards. But you’re starting already. We’re assembling questions and not answers today. I think that’s enough from my side for starting, and I would like to start the discussion.

>> MATTHIAS SPIELKAMP: Yes, thank you very much. Dieselgate was also the example you pointed out a lot. You pointed out one very important question there. How can you detect something like that? Well, I think at the same time with Dieselgate, we have the example of there is a way to detect it. Now, we don’t know about all the other examples that are not detected, right? On the other hand, here the question here with Dieselgate, for example, is that a case for Algorithm Watch for algorithmic accountability? Do we really need to hold these kinds of algorithms accountable?

I think the question or here the answer is pretty clear because of the massive outcomes. I mean, they were it has outcomes on public health. It has outcomes on or it has consequences when we look at funding and what kinds of subsidies the companies got and so on and so forth. But if you have examples that are less clear, then of course you always have to start asking the question: Why do we even need that accountability here in that specific case?

Do you want to make a quick remark on that? Because I’d like to

>> HANS PETER DITTLER: Perhaps a quick remark. I would like to open up this discussion a little more. Are we only looking at the algorithms which are broken into systems, or are we also looking at algorithms which are set into the organisation of companies or set into organizations like border control, like control or similar?

>> MATTHIAS SPIELKAMP: Thanks a lot. Okay. So I promise that we wanted to have discussion here. We have 45 minutes. That gives us some time.

One thing, I see that Elvana is here from the Council of Europe. And I’m aware that you have assembled a Working Group that is putting together some thoughts on that. Would you be willing to just give us a short comment on what they’re doing? I have a mic, I come to you.

>> ELVANA THACI: So I’m working for the Council of Europe. For those who don’t know the organisation, this is an intergovernmental organisation bringing together 47 European countries. I’m not computer scientist; I’m a lawyer, so I’m not going to go into the discussion about the design of algorithms, but to inform you what is the reflection currently in the Council of Europe on algorithms.

We have put together a committee of experts whose task is, among others, to produce a report and to look at the Human Rights implications of algorithms and implications for democratic governance, as well.

We are at the very beginning of that reflection, so we are mapping out the issues, trying to define what an algorithm is and identifying the sectors where we would like to examine the usage of algorithms.

But definitely we have identified two fundamental rights on which we would like to look at this from a closer – would like to have a closer look. And first is the privacy, the right to privacy and data protection, which I think was elaborated quite eloquently by the panelists earlier. And the right to Freedom of Expression and access to information. And the two of these freedoms, in combination with the principle of nondiscrimination, which we think is also very relevant here.

So I cannot provide more details. I would just congratulate you, actually, on this initiative that you have taken for us, for the work we are doing in the Council. This is really a very good resource of information, the project that you have, the reflections you are having here. And definitely I will bring that back to Strasbourg in the work that we are doing.

>> MATTHIAS SPIELKAMP: Okay, thank you very much.

Are there already people who have questions, comments, answers? Okay.

>> Thank you. You did something dangerous. You gave away the microphone. But I’ll be modest. Thank you. This is a great discussion.

>> MATTHIAS SPIELKAMP: I want them to be floating around.

>> That’s good. Thank you very much. I’m a member of the European Parliament, and I actually am not tired yet of hearing the term. And I suspect that it’s very small community of people that’s tired of talking about algorithms. I think a large group of other stakeholders has still to grapple with it.

I’m also not a security expert or computer engineer, so I look at it really from the rule of law side. And perhaps this is a way that the panelists could approach the issue from the question of if we want the rule of law to apply also in an increasingly digitized world, what do we need when it comes to algorithms? Is the status quo acceptable? Do we see a situation where more and more prevalence of algorithms, Freedom of Expression issues could pose a risk? Or should we already put in place measures now in order to verify the impact of these algorithms?

And I think, although there might be a lot of factors feeding into the outcome of an algorithm, one of them we haven’t touched so much upon in this discussion yet is the goal of making a profit, a significant profit, which can be at odds or certainly requires assessing of fair competition and other aspects but can be at odds and has been at odds in the past with the public interest or respect for Human Rights.

And so when I look at sort of popular publications about this recently in a country like my own, the Netherlands, search engine like the one Google makes has about 98 percent penetration, a tweaking in the algorithms has been estimated to possibly have major impact on big events like elections, could you say something about that? I think going at – looking at it from the rule of law and then asking, “Does it mean anything for algorithms and the way in which we need to verify them or not?” is my main question. Thank you.

>> MATTHIAS SPIELKAMP: Okay. Thank you very much. I’d like to collect probably two more questions or remarks before I go back to us here in the front, otherwise, you know, we’ll probably take too much time answering the single remarks or questions.

So could you pass on the microphone just quickly? Ah, thank you very much.

>> Who else?

>> MATTHIAS SPIELKAMP: Okay.

>> Hi. Miguel from the Spanish association. My question is philosophical question. If we change remove the algorithm by replace this word by “person” and we replace computer by “brain,” which is the difference? We are running computer rules. And is better algorithms with computers, or is better person with brains? We are not neutral. We are not transparent. What are the differences? I don’t know.

>> MATTHIAS SPIELKAMP: Okay. Thank you very much. So we have two more that I’d like to take, and then I’ll close for this first round, but we’ll open up, of course, again after we commented on that. There’s this lady here in the front.

>> Thank you. My name is Rachael Pollock. I work at UNESCO, and Freedom of Expression and NPR with the Internet Society as a traveling Fellow. I’m also working on a Master’s thesis looking at debates around transparency and accountability and algorithms. So I’m focusing on Google search. So very interested in this topic.

I wondered if you could explain a little bit more the sort of comment you made in passing that transparency in itself or publishing the algorithm wouldn’t be meaningful. Is that basically what others were saying that you need to look at the whole picture of the data that’s being inputted and how it’s being used? If you could expand on that.

And then also I would be interested to hear how you said that your initiative received a lot of attention and interest in Germany, if you could talk about that. And if you encountered any resistance to it, who was most interested? If it also received some political attention. So thank you.

>> MATTHIAS SPIELKAMP: Thank you very much. Please go ahead.

>> Tapani from Finland. My background is also mathematics and computer science, so I look at this from the background. But I was struck by the Spanish gentleman’s comment here. I was going to ask exactly the same thing being at the point that people are black boxes, as well. And it is not theoretical. I have seen discussion. I think earlier today in computer cyber security session that people tend to trust more computers than people making the decisions in some cases, that they think that people in the decision making are biased, they will have a register whatever motives, whereas computers are supposedly objective, reliable, whatever. But of course we know that is not the case. There is no clear distinction here. People are black boxes. Complicated algorithms are black box. But we sort of feel – we understand how people work and trust them in other ways, not necessarily.

>> MATTHIAS SPIELKAMP: Thank you very much.

Hans Peter, I would like to ask you for your reactions to that. You don’t necessarily have to react to all of them, right? But pick out the ones that you are most interested in and react to those, please.

>> HANS PETER DITTLER: Thank you, Matthias.

Well, I first start to talk to the Legislator in the room, the Member of Parliament.

I think one weakness in the discussion that I hear now is not only that it is not quite clear what should be regulated, what should be the subject of regulation, but the other question where I have heard only very vague answers about: What is the objective of regulation? We want to protect some values, but no one has been really precise on what the except for Ervana from the Council of Europe, we did see extra step to say what did we want to achieve? We want to protect privacy and sort of Freedom of Expression, which actually puts your initiative ahead of everybody else in this discussion. So congratulations on that. And I think that’s only the starting point. So we would need first to know exactly what to regulate. And then what to do.

It’s not possible to do all, a regulation of all algorithms. Mine we talked in other sessions here about the upcoming explosion of the Internet and of the Internet of Things. I mean, this little device which is the smallest one that I have but has already probably several thousands of algorithms implemented in its different parts. There’s no way we could have a full inventory of this let alone some regulation. You have to know very, very precisely what you want to achieve and then set very clear priorities where you want to intervene, otherwise Algorithm Watch could become “the” stifling innovation initiative on this planet. There will be no more innovation when every algorithm has to be sent to the algorithm authority before it may be used in practice. There really needs to be a lot of clarity.

And I think the question what would happen if we put a human capacity and in place instead of the machine? Well, that is actually another angle to look at the matter. Because you have to look at the objective of the field of application and how you would exercise control and supervision on the human that do it, and what you can supervise better than the machines is the humans which design and put them into practice and which define the objectives of the machines. And I think there is the point, the starting point, where we have to look at.

>> MATTHIAS SPIELKAMP: Did you want to respond immediately? Your reaction was calling for

>> Yes, thank you. I feel like maybe I was not clear enough or the suggestion that was made was I think not at all do injustice to what I was trying to say. The basis for me was also access to information, Freedom of Expression, the protection of universal Human Rights in contrast with profit models or other perhaps unintended consequences that algorithms can have.

And the starting point of your answer in terms of, you know, why do we need regulation? What regulation do need? Why should there be regulations? Is actually a question that is still wide open. I don’t hear anyone pushing for regulation of algorithms for the sake of regulating. In fact, I think there’s quite an appreciation that the opportunities of digitisation are biggest if they are regulated as little as possible. But it’s not to say that there should not be any measures.

And my question to you is really do we need it? If your clear answer is no, universal rights can be protected even in the time of increasing use of algorithms, of services that are almost 100 percent penetrated in our markets, that’s a very clear answer and important answer. But I felt that the starting point or the interpretation of my question was very far off from how I intended it to come across.

>> MATTHIAS SPIELKAMP: Okay. Thanks for the clarification.

>> HANS PETER DITTLER: To very, very shortly clarify. I wasn’t answering exclusively to your question but to more elements that were made in the discussion.

What I actually wanted to say is the objectives must be clear in the area of regulation. That is where work has to go in. And whether it should be regulation and what kind of measure, that’s at the level of the device, the machine or browser, at the level of the organisation and/or whether it’s covered by something completely different. It’s a very, very interesting and valuable issue to research. Thank you.

>> Yeah, just keeping a little bit on this point. I think a regulation on level of algorithms doesn’t make sense at all. That won’t be possible. Regulation or laws who make clear that the customer, the user or publicity must be complete and everything must be open and documented might be a lot more useful.

You mentioned commercial interests as one of the inputs to the black box, which might shift the decision completely in one direction or only tweak it a very, very small bit. But knowledge about the possibility that this is done and how it’s done would be very valuable for the public user of anybody using those things to decide if it’s okay to use it or if you should use a different you mentioned Google. Think about a price searching engine which is known to be biased on definite rules. People could decide if these rules are okay for them or if they would look to a different price searching engine in this case.

So opening up rules and definitions would help a lot.

To another topic, if we shift a little bit from the algorithm and the machine to the algorithm and the human brain and reality, I have personally very direct connections for Social Security to refugees in Germany. And even if there are some algorithms defined how they should be handled, the larger part is handled by the personal feeling of the agent sitting in front of the person and deciding what he is doing. So a quick guess would be a machine would be more neutral, would not be influenced by what the machine sees or what the agent sees in front of him and would be better. But five minutes later, I come to the opposite conclusion because a machine could never or we are still, at least, far away from solutions which are so perfect what human beings can do when they’re talking to people and looking at people. So both sides won’t be perfect.

And then I found an additional part of it. Those people deciding on the fates of other human beings don’t do this on their own. They have supervisors. And the supervisors have other supervisors who control, at least on a statistical level or an overall level, the outcome of the process. And that must be done in a similar way to machine algorithm. When we rely on decisions by machines without control, they are out of control. So we really have to define those algorithms in a way that says the second level or even the second and third level of control and reconfirmation that they are doing what they’re intended to do.

As we started, we want to collect questions and not yet answers. This would be one of the questions. What level of control do we need for it?

>> MATTHIAS SPIELKAMP: Yes exactly. I also chime in with a couple of remarks but answer on a very, very low level because it’s exactly as they said. When you ask what’s the difference between the brain and the computer here when you’re asking about the algorithm, basically you’re asking the questions: Are we assuming that humans are prejudiced and algorithms are not? It could be – as Hans Peter just said, it could be useful if you have a computer or an algorithmic decision making process that is tightly controlled and that is monitored and many people have put some brain power into that, to enhance the possibility, the possibility to come to more objective conclusions than before.

We were debating this intensively when we were talking about that predictive policing project because, you know, you could have as the outcome that an automated system is a lot less prejudiced than policemen who are racist. I mean, I’m not saying all policemen are racist, but you can have examples or we had examples of police intervention that was basically motivated by racism. We have examples of that. Would that have happened if we had automated systems in place? We don’t know that.

The thing is we have as a society spent a lot of time and a lot of debate in the procedures of holding people accountable for their actions, right? And we have not done that yet for algorithms, for automated decision making. We have to be very clear at the same time, for example, Achim alluded to that, then we’re not talking about automated decision making before.

Now that’s all good and fine, but what, for example, if you have someone sitting at a desk and he or she has to make 200 decisions a day and these decisions are predetermined by automated system and he or she has like two minutes per case to decide whether she wants to grant some transfer money to the person or not?h Is that really a decision that she can make? Is that really an informed decision? Who’s accountable? Who is liable for that in the end, right? So, again, you know, this is exactly what we’d like to foster. This kind of debate to start developing answers about that.

Now, there was the question about the transparency and why we think that transparency is just is not the main goal of this. It’s probably sometimes a question of terminology because when some people refer to transparency, what they mean is we have to make the system transparent, the design transparent.

What I’m saying is there is some good argumentation that transparency is not a good term here because what we need to try to achieve is that intelligibility; meaning, that we can understand or we can – probably understanding is a big term for this – but we can hold someone accountable on the basis of them giving us some information to make a system intelligible without them giving us, for example, access to entire database and access to the algorithm itself.

So you could argue that there is no transparency in the sense that we cannot look at all that stuff, but we can trust them of information what data they are using; secondly, on how they are using that data. But then they are also – I mean, the end result of it would be that based on that information that they provide, they can be held liable also for the outcome of their technology. And then, for example, if it screws up, you know, they cannot resort to say yeah, but you know, we made a mistake in explaining to you how the system works, and actually it works somehow different, and we didn’t really give you access to the damages, so we didn’t really understand why this outcome was the result of it ourselves. Then it’s not good enough, right?

And, last, the question about what reaction we had in Germany, it was quite interesting because I started a couple of projects in my life. And usually if we succeeded at all, it took usually rather years than months before we had some kind, or some level of public attention that, for example, journalists, the media came to us and asked us for our opinion, our assessment of something. With Algorithm Watch, we launched at the beginning of May, and a week later, we had a request, for example, by the largest German private TV channel to give our expert statement on the case of Facebook and the debate about are they trying to influence or are they in a position to influence the outcome of the American election? And we’ve had, I think, like four or five media requests since, or even more than that. So I can feel that there is a real need for this kind of discussion.

And right now we know I mean, we are not saying that we are the first ones to look into that. There has been research into that for quite many years. But there seems to be a lack of focus or platform where people can turn to, right? And this is also what we’d like to offer; and in many cases, as you just heard, we cannot yet offer the answers. I’m not sure whether we we’ll ever be able to offer the answers. But what I’m pretty sure about is we’ll never northbound a position to stifle innovation by Facebook, Google or Volkswagon, as Achim suggested. It’s not our intention, anyway. That’s what I tried to make clear in the beginning: We are not against using these technologies, but if you want to have users’ trust, you have to make them accountable and intelligible.

Okay. Next round we have only like four minutes more. So I’d like the get people into the discussion who didn’t ask a question in the first round. So we have two of them I can see immediately. Please pass it on to your left okay. Yeah, to the person next to you. So, yeah, right. So we’ll take three more questions. I have you, I have you here and then I have the colleague on the left hand side.

>> Thank you. My name is Marilia from Brazil. I would like to quickly react to what the colleague in Spain said. I think the difference between a human brain making decisions and the algorithm making decisions, basically two. First of them is the possibility to scrutinize the possibilities that are being developed. If we have a policy on crime prevention, for instance, that has been developed in a biased way against a particular group, this is written somewhere. And if it’s not publicized, we have some instruments, we have information – law to have access to the policy. And when this develop in a way that is made through algorithms, it is not agreeable to most of us. So I think it becomes really not transparent. It’s hard to get that information.

The second possibility for me is participate in the formulation of these policies and standards. So what we are creating is actually a society in which we have a small group of people that know how to read and write in a particular language, a language of algorithms, that are going to develop policy for us.

So looking back, it seems like the societies of sanctuaries ago in which describes would have the capacity to develop policy and not all of us. So I think that it takes democracy a step further away from the average, the normal citizen. And that is concerning for me.

>> MATTHIAS SPIELKAMP: Okay. Thank you very much. Please pass it to the right hand side, to the person yes.

>> MATTHIAS SPIELKAMP: I take that as a remark and not if we want to react to that.

>> I’m Ayden, the remote moderator. We have one comment, Mr. Chris, if should become

>> Can you repeat that?

>> Should sector regulators become sandboxes for testing algorithms for bias? From Chris.

>> Lucas from the government. Remark that will fall what the previous speaker need to be counter balanced with the advantages of algorithms. At the end of the day, they take discretion out of decisions in a certain way, but that also means it removes a lot of the hidden biases and discrimination, et cetera, that are built into any human decision that aren’t always released by Freedom of Information requests because they can be the underlying of a discretionary session decision.

So what I wonder is as we have these councils on the concerns and how to ensure transparency, what is also the positive advice you would give to governments as well as private parties on how to proactively implement algorithms to take advantage of the positive benefits?

Because, I mean, and I imagine one of the goals of the transparency/intelligibility and accountability that you’re developing would be to be able to quantify the benefits. I mean, if how an algorithm works is intelligible, then you should also be able to make the argument applied to the following public policy decisions or that have such as a credit decision if we follow these guidelines. Here the benefits versus the current discretionary processes we might have if there is a human in the loop here becomes impartial.

My question and also my plea to you is that you also find a – to quantify the ways in which algorithms in a particular instance can be superior or at least more consistent than human in the loop decision making or analysis making so that we have that balanced appraisal. We’re making these decisions as to whether to have one or not.

>> MATTHIAS SPIELKAMP: Thank you very much. And then the gentleman who is almost next to you, thank you.

>> Hi, my name is Hans Akuna, researcher, University of Nottingham, Horizon Digital Economy Research Institute. And we’ve recently had a research project funded which will be starting in September looking at algorithms. We’ll be having basically three components to it, one hasn’t been discussed much yet, is actually talking to users of systems, for instance of social media, et cetera, to talk to explore their awareness and concerns around algorithms influencing the kind of information they’re getting. It’s also going towards the question of providing more information and education around, for instance, this attitude of well if it’s an algorithm that’s making the decision, then I guess it’s unbiased. But explain where the different kinds of sources are of potential bias in this. Other components of the project are helping people get some kind of tools to gain some kind of transparency about what’s going on with the algorithm. So things like querying the systems, multiple questions and seeing what kinds of answers come out, see whether we can use those kind of things to indicate the direction of bias the system can have, and the third part is sort of more stakeholder dialogue section, yeah.

>> MATTHIAS SPIELKAMP: That sounds interesting. But that is a sociological project or – because it’s not computer science, is it?

>> It’s multidisciplinary. So the part working on black boxes, getting transparency to computer science is more social science.

>> MATTHIAS SPIELKAMP: So, Achim, I would like to start with you, probably you were already or anyway going to answer to that question that was posed from remote about that, the sector regulators acting as sandbox for algorithmic testing, I’m rephrasing. Anything that comes to mind to you? The European data protection supervisor is going to do that?

>> ACHIM: Not in a general sense, but of course under the current legal framework, as I pointed out, there is the provision about the automated decisions which creates some legal handle for data protection authorities across also the national ones. And so as I said in general, we first have to better understand what we want to achieve and what is the subject of scrutiny and regulation?

It takes me to answering to Lucas from the Estonian government. I think there is a process to decide on which policies, rules, techniques and resources we put on which task. It’s called the democratic process., which decides – budget in the parliament decides where to put it. And it’s that process which should create a transparency to make it visible for people where the priorities are and which directive decisions are being taken, whether it’s more in fostering business opportunities or more in fostering social equality and rights of people. I think that has to be brought to the transparency of the general political process. I don’t think that that is a technical issue but it is really political issue. Thanks.

>> MATTHIAS SPIELKAMP: Okay. Yes, I’ll have Hans Peter answer, as well, and then you can chime in again with direct remark. Please.

>> HANS PETER DITTLER: With the regulators, I simply would expect they’re overwhelmed if they would really have to check any algorithm around. So no chance. No sensible possibility.

The other question which came up: Do we need also quantification of output or relevance of output? I think, yes, would be a good idea to add this to the amount of data which has to be published around algorithms or algorithm support or decision making. It should be possible to really know as much as possible how a decision has been made. And it might be even useful to have laws enabling the end user or the citizen to request output or possible about how a decision has been concluded, how it’s come to a decision, including where the inputs come from, including which kind of buy asking was attached to the process. Though as fully as possible, it should be possible to know what’s going on.

And I see one gentleman in the back who reported earlier today that his, let’s say, custom relationship or his problems to find out what has been done to his passport and why. And that’s exactly the same thing again, if it’s done by an algorithm and background, we should have the right to know why it has been done and how it has been done, how it was decided, and on what reasons and what causes.

>> MATTHIAS SPIELKAMP: Thank you very much.

Lucas, briefly.

>> Two quick points to clarify. The question is not about the democratic decision making process, it is what are the recommendations. If I have 100,000 Euros to make decision making through algorithmic logic, it would be good to approach the impacts and coming up with the recommendation where to spend the money.

To draw comparison where we have open data portals where you could share not just the data but also the methods and the datasets and the ontologies. One could imagine one does the same thing with algorithms where based on the rubrics that you’re working on, I can declare where I have algorithmic in play. If I can show the impact on that, I’d want others to reuse that, too. So maybe that’s a way to move forward on some of these questions.

>> Just two words, but the questions are not technological. They are economic and social.

>> MATTHIAS SPIELKAMP: And I think, I mean, as frustrating as it might be, but it’s a wonderful question. I think we’re years away from being able to really meaningfully answer that, right? But technology is developing very quickly. And we need to develop the means to answer questions like yours hopefully at the same kind of speed, right?

I don’t want to comment on this. I think there were enough answers to this again. I made a mistake. I thought that the session is over at quarter to 4, but it’s actually going on to 4. So we have one more round of questions, or we have room for one more round of questions.

So is there anyone who is still waiting to get a chance to ask questions or make comments on the feasibility of what we’re doing? Please.

>> From Finland again. I would like to bring up an analogy of some autonomous, incomprehensible decision making things we have experienced with legal and whatever namely, the dog. Think of the guide dog left outside, a dog. Guard dog. Whatever it is guarding. Making its own decisions. And we have lots of legal experience. I want to ask what’s the difference? Do we need to treat robot dogs differently from real dogs? Do we need to understand how they work? Or is it sufficient that we learn from without how they work?

[Laughter]

>> MATTHIAS SPIELKAMP: I don’t know. But I think we’re getting back to the question, my remark we made earlier as you made clear on your question, we have a lot of time to think about that. We a lot of concerns about that. Who is liable for what the dog actually is doing. And who it’s biting or who it’s harming. So we just have not had that time to do that yet if we’re looking at the robot dog.

>> One word. Pointing out that dogs can be racist and they have been.

>> MATTHIAS SPIELKAMP: Thank you. Please. And we have another question. Okay. Then you go first, please.

>> I will try to be quick. I wondered, you talked a little bit about the data protection directive from ’95 having this passage. And I wondered how the new data protection regulation which I believe has a passage about understanding how machine learning processes that make decisions about you work or sort of right to explanation. And if you could talk about what the differences are now with this new regulation.

>> Well, that was the one thing that I had wanted to look up to be able to be very precise in this session and didn’t manage for some reason. But in principle the rules have been clarified on that point because obviously the situation was much more present for the legislators this time and much more real than it was 20 years ago. The level of protection is at least the same as we had in the ’95 directive. But more things about tracking and profiling which plays a role in this context added to the provisions. So the attempt of the legislators was to make it clearer and more effective this time.

>> MATTHIAS SPIELKAMP: Any reactions to the dog example? I’ll just call it the dog example, the dog analogy?

>> I mean, I’m not a lawyer, but our legal system assigns responsibilities and liabilities to humans. At least it works on the assumption that there is always a human to whom the responsibility and liability can be traced back. For biological dog, that’s the dog owner, which has to take the responsibility for having its dog run free, bite people and has to take insurance for that. If it’s an automated lawn mower, which is maybe not too far from this dog, then of course it’s the owner of the automated lawn mower who has to make sure that it doesn’t mow the flower carpet of his neighbor. That’s an analogy which exists already. Thank you.

>> MATTHIAS SPIELKAMP: Yeah, because the neighbor is black and the lawn mower is a racist, right? So we’ll break that story hopefully at Algorithm Watch that the algorithm guiding the lawn mower is a racist one and that will be certainly a big scoop.But that’s a little further down the road.

Thank you very much for participating so actively, for posing these very, very good questions, and giving us more for thought.

Now, because we are very new and very young, I can only suggest that if you are interested, please leave your cards with us and your email addresses if you want to because we’ll start sending out information on how it’s proceeding, how it was going. We will do it not on a weekly basis, not only a monthly basis, but just when we have to tell you something. So don’t expect a high volume – I don’t – letter that’s spamming your inboxes. But we would be very happy if you would be interested to keep in touch with us and probably take this discussion further.

Thank you very much. And have a great last session or last two sessions, I think, at EuroDIG and good weekend. Bye bye.

[Applause.]

Session twitter hashtag

Hashtag: #eurodig16 #alacc