Human vs. algorithmic bias – is unbiased decision-making even a thing? – WS 07 2021

29 June 2021 | 16:30-17:30 CEST | Studio Belgrade | [[image:Icons_live_20px.png | Video recording | link=https://youtu.be/kQEAIhbWHzk?t=22852s]] | [[image:Icon_transcript_20px.png | Transcript | link=Human vs. algorithmic bias – is unbiased decision-making even a thing? – WS 07 2021#Transcript]]<br />
[[Consolidated_programme_2021#day-1|'''Consolidated programme 2021 overview / Day 1''']]<br /><br />
Proposals: [[List of proposals for EuroDIG 2021#prop_2|#2]]<br /><br />


== Messages ==   
*Algorithmic bias is a particular concern regarding sensitive decisions with human rights implications. Ultimately, the outcomes of machine learning should be seen as only one input into decisions eventually taken by humans.
*A broad understanding of bias is warranted to address discrimination and harm. Bias can materialise at all steps of developing and using a particular AI system. This includes decisions about the algorithms, data, and the context in which the system is used. There are also mechanisms to make humans and machines work together better for better decisions.
*Policies need to mitigate risks of algorithmic decision-making. Constraints, safety mechanisms, audit mechanisms, and algorithmic recourse all need to be in place. In addition, it is crucial, as a first step, to work towards greater transparency and explainability of AI systems involved in decision-making. Databases that list the AI systems and data in use should be considered, as well as bans on the use of certain AI systems with high risk and high harm.
*A number of technological companies have self-regulation mechanisms in place at various levels. Self-regulation of the private sector is important but ultimately not enough. Various regulatory efforts need to complement each other and greater cooperation between various stakeholders is needed to create synergies.
*Equality and fairness are values that have a strong cultural connotation. They are important principles to address bias, yet it is not easy to find an intercultural agreement on some aspects of these principles. Addressing algorithmic bias also needs to include discussion on what kind of society we want to live in in the future.
 
Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/human-vs-algorithmic-bias-unbiased-decision-making-even-thing.


== Video record ==
https://youtu.be/kQEAIhbWHzk?t=22852s


== Transcript ==
Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-482-9835, www.captionfirst.com
 
 
 
This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.
 
 
 
>> STUDIO: Hello, good evening, everyone. Welcome back to the Belgrade Studio of EuroDIG. This session is “Human versus algorithmic bias”. Before I give the floor to the moderator, Aleksandr, I will remind you of a few rules for this session. The session is of course being recorded; however, the chat is not. So in case you want your opinion to be recorded, please share it by raising a hand or by typing in the chat that you want to be included in the discussion. Before doing that, state your name and surname and, of course, your affiliation.
 
Whether your video is on or off, please make sure that your full name is displayed. I think that will be all. The last point: please do not share the link to the Zoom meeting here. I think the crowd we have will be sufficient to discuss these interesting issues.
 
Over to you, Aleksandr.
 
>> MODERATOR: Hello. I hope I am being heard quite clearly? Hello?
 
>> STUDIO: I hear you well.
 
>> MODERATOR: Great. Thanks. My name is Aleksandr Tiulkanov, and I will be moderating this session, where we will discuss human and algorithmic bias. And we’ll see whether it is actually even a thing to make an unbiased decision, whether it is automated or human-made.
 
Let me now introduce my panelists, who will hopefully all be able to join us; I think all of them are already present. We have Karthikeyan Ramamurthy, research staff member at IBM Research; Ekaterina Muravleva, research scientist at the Skolkovo Institute of Science and Technology; Zoltan Turbék, Vice-Chair of the CAHAI Policy Development Group; Daniel Leufer, Europe policy analyst at Access Now; and Hiromi Arai, head of the AI Safety and Reliability Unit at the RIKEN Center for Advanced Intelligence Project. This session will be moderated by me; I am special advisor to the Digital Development Unit, Council of Europe.
 
People may be watching with different levels of background knowledge on the topic we are going to discuss. Let’s perhaps start with how machine learning actually works, in simple terms. I think it is essential for us to start from the basics; it is also important for policymakers and the general public to understand what we are talking about when we talk about automated decision-making and the systems that support this process.
 
And I hope that Karthikeyan from IBM can enlighten us on that. If you don’t mind, Karthikeyan.
 
>> Karthikeyan Ramamurthy: Hi, Aleksandr and everyone. Yeah, it is really great to be here. I think I’m hoping that we will have some very good discussion. So what does machine learning mean? It implies what it says.
 
It means you have a machine, something like a computer, that is going to learn from the examples and environment it is in. Typically it means statistical machine learning. You will provide it with data and some labels it will learn from that.
 
For example, if you have a very young child and you show the child 10 different pictures of cars and say this is a car, the child is hopefully going to learn by the end of the session what a car looks like. And when you show a new picture of a car, it is going to understand that it is a car, even though it is probably not very similar to the 10 examples you showed the child.
 
That is the hope of machine learning: to help the machine learn based on the examples that you provide, and to generalize to new situations it has not seen. The hope of machine learning engineers is to make it learn as fast as possible with the amount of resources available. So that is one of the flavors of machine learning that we call supervised machine learning. There is also unsupervised learning, reinforcement learning and so on; we don’t have to go into all the details. In general, particularly these days, the term is used in the parlance of statistical machine learning: you have an algorithm that helps the machine learn from examples. That is what it means.
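
To make the supervised learning idea above concrete, here is a minimal sketch in Python. The toy features, labels and the choice of the scikit-learn library are assumptions made purely for illustration; they are not taken from the session.

<syntaxhighlight lang="python">
# Minimal supervised-learning sketch: learn "car vs. not car" from labelled examples.
# Hypothetical toy features: [weight in kg, number of wheels].
from sklearn.tree import DecisionTreeClassifier

X_train = [
    [1200, 4], [1500, 4], [1800, 4],   # cars
    [15, 2], [250, 2], [90, 3],        # bicycle, motorbike, cargo trike (not cars)
]
y_train = ["car", "car", "car", "not car", "not car", "not car"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)            # the machine "learns from examples"

# Generalisation: an example the model has never seen before.
print(model.predict([[1350, 4]]))      # expected: ['car']
</syntaxhighlight>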
 
>> MODERATOR: Okay. Great. Thank you, Karthikeyan for the explanation. Maybe some other experts are wishing to add on whether they have their perhaps favorite type of machine learning which they are dealing with in their study? For example, we have also Ekaterina and Hiromi Arai. Maybe Ekaterina Muravleva can explain what she is dealing with primarily and what sort of machine learning do you do?
 
>> Ekaterina Muravleva: Thank you for the possibility to participate in the discussion. I’m very glad to participate in it.
 
So I work in academia, and we deal with a wide variety of examples while consulting on different commercial projects. And our quite big goal is to develop and implement new, safe technologies in the lab.
 
I have an example of an interesting case; interesting for me because it is unusual from one point of view and usual from another.
 
At our home in Moscow we have a smart home system. When something is broken, the technicians repair it by flipping switches more or less at random. They have trained on a hundred examples, I suppose. In fact, they do not understand how it works; they are able to repair it but do not know the reason. I use this as an example of a human being doing machine learning, because they will not be able to transfer this knowledge without retraining. It is a human analogy of what actual machine learning does.
 
Because machine learning actually learns by examples: the model is updated when it gives a wrong prediction, and more examples make a better neural network. However, it is an open question whether it learns the actual picture rather than just finding some patterns. I think this is a great question and a future task for mathematicians and machine learning engineers: how to construct machine learning algorithms that are safe in terms of their future results.
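
A classic, minimal way to picture "the model is updated when it gives a wrong prediction" is the perceptron rule sketched below. The toy data and learning rate are invented for this illustration and are not part of the discussion.

<syntaxhighlight lang="python">
# Perceptron-style learning: weights are nudged only when the prediction is wrong.
examples = [([1.0, 1.0], 1), ([2.0, 1.5], 1), ([-1.0, -0.5], -1), ([-2.0, -1.0], -1)]
weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

for _ in range(10):                       # several passes over the examples
    for features, label in examples:
        score = sum(w * x for w, x in zip(weights, features)) + bias
        predicted = 1 if score >= 0 else -1
        if predicted != label:            # wrong prediction -> update the model
            weights = [w + learning_rate * label * x for w, x in zip(weights, features)]
            bias += learning_rate * label

print("learned weights:", weights, "bias:", bias)
</syntaxhighlight>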
 
>> MODERATOR: Okay. Great. Hiromi, maybe you can share what you focus on in your studies, and maybe you are also willing to discuss how you understand the topic of this session in terms of bias in AI. Do you see it as a problem at all? And if yes, in what way, please?
 
>> Hiromi Arai: Yeah, um... yes. By using machine learning we can learn from examples, sometimes producing a very complex estimator that might be a black box.
 
If we apply machine learning to, for example, hiring or education, some kind of decision-making which has a big effect on human lives, and the decision contains bias, such biased decision-making might be problematic. If it is a black box, it is sometimes hard to notice that such machine learning decision-making is biased. So I think we have to care about this bias problem.
 
>> MODERATOR: Okay. Thank you. So Daniel, maybe you are willing to elaborate: what do you think are the sources of the bias we might have in AI systems? How does it occur? Could you give us your perspective on what the problem there might be?
 
>> Daniel Leufer: Sure, thanks again. Appreciate you having me. I will say something about the definition. There are useful comparisons to be made between how humans learn and how machines learn. Compared to humans, machines are stupid. They’re very impressive, but quite dumb. It is often the case that if you slightly change the circumstances, the algorithm won’t work anymore. The model can be trained on pictures that all have forests in the background; suddenly, you introduce a car with snow in the background and the whole thing goes to bits. We say they’re very fragile. They also need massive amounts of data to learn from, whereas a child can learn from a small amount of data. That is important to keep in mind. It helps explain how some of the bias problems or other problems can creep in.
 
What I can say on the sources of bias, I mean, there is a lot to say. You know, I can point to articles that list 35 sources of bias, or three. But, you know, the most common thing we hear is that the bias comes from the data: the algorithms are neutral, and if you train them on biased data you will get a biased system in the end.
 
It is true that the bias can come in through bad data, but that idea is very simplistic and doesn’t really capture the whole picture. The way that you design the algorithm, the way you design what it is trying to optimize for, can lead not only to bias, but to discrimination, to harm, to other issues. The bias label is quite narrow. It makes us think, correctly, about certain examples: a facial recognition system performs well on someone who looks like me but not on someone who has darker skin; is it biased against people with darker skin in that case? But there are other types of harm. Basically every decision made by people in designing the system, and there are a lot of decisions, including the decision to use AI in the first place, is an opportunity for problems to creep in. So the whole way down the chain of how the system is created and deployed, where it is used and not used, is an opportunity for bias to come in.
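
The kind of disparity described here can be surfaced with a very simple audit of outcomes per group, wherever the problem crept in (data, design choices or deployment). The decisions and group labels below are invented purely for illustration.

<syntaxhighlight lang="python">
# Toy bias audit: compare the rate of positive decisions across two groups.
decisions = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]        # 1 = selected / approved
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def selection_rate(group):
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

print(f"group A: {selection_rate('A'):.0%}, group B: {selection_rate('B'):.0%}")
# 80% vs. 20% here: a gap like this is a signal to investigate the data,
# the design decisions and the context in which the system is deployed.
</syntaxhighlight>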
 
>> MODERATOR: Okay. Great. Thank you. Maybe Zoltan would also contribute. What do you think from the policy perspective: would you say that certain uses of AI are more relevant and more important in terms of what we provide policy for? Or do we need to regulate it all equally, maybe depending on the use cases or scenarios or categories? What’s your take on this?
 
>> Zoltan Turbék: Good afternoon everyone. As pointed out, I come from the policymaking side, as a Government official who also happens to participate in the work of the Council of Europe and its regulatory efforts, where we have Daniel also sitting around the table.
 
So why is bias important for this topic and also for regulation? I think the problem is that it can result in violations of certain human rights and of certain principles which are important in society. And when that happens, some reaction is needed, and measures to prevent such situations are also needed. What can it cause? It can result in discrimination, for example.
 
It can result in decisions which are totally wrong in a legal system. It can result in unjustified – I mean, unlawful decisions which affect certain individuals or certain specific groups.
 
So from the policymaking point of view, it is important to mitigate the risks and also to prevent their occurrence. Daniel, you mentioned the situation where the data used results in biased decision-making. But we can also mention other examples, like when the system itself is problematic or biased, or a situation where the system is used in an environment in which it produces problematic decisions.
 
>> MODERATOR: Okay. Thank you. I was also interested in certain scenarios. Before the session, Ekaterina suggested an example to me: there was a case in Russia, I believe, where an individual was using a Tesla car which had not been officially introduced to the country but was imported personally. And it turned out that a certain type of vehicle which is popular on Russian roads was not present at all in the training dataset that the engineers of Tesla used originally. It became a problem, to the extent that this type of vehicle was not recognised as such. So this illustrates what Daniel has underlined: if we discuss the difference in decision-making and learning between humans and machines, we definitely see a gap here. Karthikeyan, maybe you could give us your perspective on this gap between how machines learn and how humans learn. For a child, a minimal amount of information may be enough to learn to tell whether something is a car or not, but for machines it might require more time and resources. What do you think? Is there a solution to avoid these kinds of accidents caused by not recognising a car as a car?
 
>> Karthikeyan Ramamurthy: Yeah. Actually this is a very deep philosophical question. First of all, we don’t truly understand how people learn. The example I gave was of course simplistic, because it was the first example, and I think some of my colleagues have elaborated on it. So for machines, if you are just applying a completely unconstrained deep learning system to recognize cars, of course it will require a lot of examples, because it is only looking at pixels. It doesn’t know about the concept of a car or anything. It needs a lot of examples to learn. And this is being touted as one of the most impressive things in deep learning these days. Humans have a lot of inherent wiring in them, to put it simply; I’m not a cognitive scientist. But to put it simply, humans are wired to recognize things, connect the dots, understand the parts of a picture, extrapolate even sometimes what is in the background, 3D and so on. More importantly, they are able to associate a concept with an image very well, right? This is called the grounding problem: how do you associate a symbol with a representation?
 
So humans are very good at that. Machines don’t have an idea of what the grounding is like. They only live in the world that you create for them, right? So if you show them cars, they will learn to recognize cars. If you show them something else, like lots of examples, they will hopefully learn to recognize something else.
 
There is really no easy way of getting around this problem. The only way to make sure is that we have enough constraints put in place and enough safety mechanisms that are put in place so that at least when something wrong happens, we can recognize it. Right? So one thing we have realized is machine learning is definitely a small part of a really large ecosystem in our society. And there is nothing magical or superhuman about it. Right? So it does require supervision, it does require regulatory mechanisms. It does require enough – you need to have enough knobs and tabs on it so that you actually keep checking what’s going on. Right? So that is the engineering – that is a big engineering problem behind this, right? So that is my not so short answer to your question.
 
>> MODERATOR: Okay. But still a very informative one, Karthikeyan, so thank you. Maybe let’s also look at this problem from another angle, which is suggested to us by Jörn in the chat. He says the bias which we discuss now is just the tip of the iceberg. From his point of view, the global problem is that trained systems do not obey any rules. He says, I quote, deep learning, if we are talking about this specific way of machines learning, always creates bias even with perfect data. So I would address this question to Karthikeyan and Hiromi, because you are deeply involved in this area. How would you comment on that specific statement?
 
>> Ekaterina Muravleva: I would like to say that deep learning stores all the bias. Most machine learning amplifies the bias, and special measures need to be taken to ensure that unbalanced data do not lead to unbalanced predictions.
 
Certainly, I suppose that this should be incorporated, maybe not in the near future but eventually, and this aspect should also be taken into account when you develop something. And certainly, in my opinion, nowadays we cannot rely on expert systems based on machine learning technology in the sense of the final decision. They can be assistance, advising systems, but in areas where the cost of a decision is quite high, a solution based only on machine learning results cannot be fair. So it can be used as an advising part, but not as the main part of the solution.
 
>> MODERATOR: Thank you. Hiromi, do you have the same point of view? Do you have something different in mind, I wonder? Or do you totally agree with what Ekaterina said?
 
>> Problem with microphone.
 
>> Hiromi Arai: Yeah. I almost agree with Ekaterina because, you know, achieving equality in every aspect and every period is difficult. For example, some cases require equality of opportunity and others require equality of result, and achieving both simultaneously is difficult. For example, think about international services. How a minority is defined differs between countries: Japanese people may be a minority in the EU, but in Japan they are the majority. So what kind of equality is required also differs between places, groups, et cetera. So achieving perfect equality, perfect fairness, by using machine learning techniques is difficult. Maybe it is better to use them as advice, and humans should also contribute to the final decision.
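
The tension between equality of opportunity and equality of result can be shown on toy numbers: in the sketch below both groups get the same selection rate, yet qualified people in the two groups are treated differently. The data are invented for illustration only.

<syntaxhighlight lang="python">
# Two fairness notions on the same hypothetical predictions: they need not agree.
# y_true = who is actually qualified, y_pred = who the model selects.
data = {
    "A": {"y_true": [1, 1, 1, 0, 0], "y_pred": [1, 1, 0, 0, 0]},
    "B": {"y_true": [1, 1, 0, 0, 0], "y_pred": [1, 0, 0, 1, 0]},
}

for group, d in data.items():
    selection_rate = sum(d["y_pred"]) / len(d["y_pred"])              # "equality of result"
    qualified = [p for t, p in zip(d["y_true"], d["y_pred"]) if t == 1]
    true_positive_rate = sum(qualified) / len(qualified)              # "equality of opportunity"
    print(f"group {group}: selection rate {selection_rate:.0%}, "
          f"chance for a qualified person {true_positive_rate:.0%}")

# Both groups are selected at 40%, but a qualified person's chance is 67% in A
# and only 50% in B: one notion of equality holds while the other is violated.
</syntaxhighlight>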
 
>> MODERATOR: Okay. Thank you. So Karthikeyan, do you share this view on the place of machine learning in society?
 
>> Karthikeyan Ramamurthy: Yeah. So I do share the views broadly, right? Because they all make a lot of sense to me. But in terms of the question itself, I think like generally bias is a term that can mean many things. And you say also like perfect data. That is a very loaded term. Perfect for whom? That creates questions.
 
So some people may say whatever data you collect, it should be a perfect sample. Most data are loaded with historical biases. And systemic inequalities that have happened throughout the past. One example is if you take loan data. A lot of people talk about this. If you collect credit data from the past. Most likely there is a huge majority of men who got credit, whereas women got little credit because very few applied for credit. Like 20 or 30 years back. Right?
 
So is this perfect data? Probably not, because it is from the past. So is it like relevant to the current society and value systems we have? Is it relevant for the type of society we want in the future? So we don’t know, right? These are very difficult things to say. But I do agree that even with whatever data you have, any algorithm with deep learning can be biased because of reasons some of my colleagues have pointed to before. Because of the way you design the system, the decisions you make throughout the system. So on.
 
So yeah, it is true that you can have very imperfect decisions from so-called perfect data. That part I do agree with. Yeah.
 
>> MODERATOR: Okay. Thank you. Daniel, what would you say, what is your take on the place of machine learning and the constraints which need to be put on some aspects of it? What do you think?
 
>> Daniel Leufer: I will maybe give one interesting example, which is GPT-3, this very famous language generation system from OpenAI. It made headlines; there was the ridiculous situation where The Guardian published an op-ed written by AI. In reality, they had generated 16 different op-eds and the editors picked bits from each one and put them together. It is one of the most hyped cases: it is simultaneously impressive and does fantastic things, but it doesn’t do the things it tends to be hyped about. It is a very good auto-complete text function. It can go beyond that and do images and code and so on. But you hear people saying it can do amazing things that it actually can’t.
 
A French start-up made a medical chat bot with it, where patients could talk to it. What GPT-3 can do is produce convincing language: on a formal level the language sounds real and it operates in different registers. The problem is that it can’t be constrained to giving actually good medical advice, like an older type of expert system could be.
 
So there was an example, I can put the link in the chat, where they were testing the chat bot. They said: I’m feeling depressed, do you think I should kill myself? And GPT-3 said: yes, I think it is a good idea. That shows there is a fundamental problem with the unpredictability of that approach, which means it is maybe not suitable for a situation in which you need really strong constraints.
 
To stick to the chat bot example, I forget the name of the world’s best chat bot; there is an award for it, and it has a Japanese name. It won the competition and it was reported as a deep learning chat bot. But the creator said it is not deep learning; he said it is a rule-based system, I don’t use any machine learning, and it is the best there is. It is interesting to keep that in mind.
 
And another thing, we often hear questions about should we trust AI? What can we do so people trust AI? I always say we shouldn’t trust AI. We don’t trust companies. We don’t trust Governments. We have processes in place, we have democratic oversights, we have audits and structures and processes in place because we don’t trust other people or companies or Governments, and we need it with AI, too. It is no different. It is a tool used by companies or Governments. We need the possibility of audit or transparency and then there can be trust in the use of the tool. There is no sense in which we should be moving towards, you know, a situation where we trust AI.
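
The constraint problem described above for the GPT-3 medical chat bot can be pictured as a rule-based guard wrapped around an unpredictable text generator. The function name, keyword list and fallback message below are hypothetical; the sketch only illustrates the idea of hard constraints sitting outside the model.

<syntaxhighlight lang="python">
# Sketch of a hard, rule-based constraint around an unpredictable generator.
# `generate_reply` stands in for any large language model call (hypothetical).
BLOCKED_TOPICS = ("kill myself", "suicide", "self-harm")
SAFE_RESPONSE = ("I'm not able to help with that. Please contact a medical "
                 "professional or a crisis hotline immediately.")

def guarded_reply(user_message, generate_reply):
    # Checks run before and after the model, because the model itself
    # cannot be relied on to respect the constraint.
    if any(topic in user_message.lower() for topic in BLOCKED_TOPICS):
        return SAFE_RESPONSE
    reply = generate_reply(user_message)
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        return SAFE_RESPONSE
    return reply

# Stand-in "model" that just echoes the prompt, to show the guard firing:
print(guarded_reply("I feel depressed, should I kill myself?", lambda m: m))
</syntaxhighlight>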
 
>> MODERATOR: Okay. Great, great contribution, Daniel, thank you. Zoltan, seeing that we have discussed certain problematic applications, what in your view might be the policy areas of application that are more problematic for unconstrained use of machine learning? Daniel suggested one example: a chat bot making suggestions to people in psychological distress is obviously not the best place to employ unconstrained applications. Do you have thoughts on other areas or use cases that might be problematic for this kind of technology? What do you think?
 
>> Zoltan Turbék: Thank you for the question, Aleksandr. Before I answer to that. Let me react to some of what has been said by our colleagues.
 
You said you cannot imagine an algorithmic system resulting in perfect equality. The same holds for human decision-making; we have seen problems in human decision-making as well. The main question, regardless of whether we talk about a human or an algorithmic system, is what the results of such decision-making are and whether they have an impact which is unacceptable.
 
And so in case of human decision-making, you have to have corrective measures, I believe also when AI systems are used, it is important to have such mechanisms in place. In addition also to, you know, transparency, explainability, and other type of measures, which would result in that. You asked me about what, where the use of AI or algorithmic decision-making can be dangerous.
 
I believe there are specific sectors such as law enforcement. Or judicial systems. And what we see is that AI is used also to improve their decision-making, but when there are biased data used by such systems, they can also violate – they can also result in unjustified bias. So I think the question is, you know, what could be the consequences of such decisions and when they impact the rights of people, I think you have to be actually careful. That is why I mention for the law enforcement.
 
>> MODERATOR: Definitely, that scenario would deserve particular attention, I think.
 
So do you think also in terms of policy, would that be primarily stimulus for self-regulation or like mandatory regulation? And if it would be mandatory regulation, what level would be more appropriate? Like international level, national level? What are your thoughts on that?
 
>> Zoltan Turbék: I believe that there is no single best solution for that. I also see the importance of self-regulation by companies and other entities, but I think it is not enough on its own. There are no enforcement mechanisms; such rules are non-binding. And even private sector entities can make decisions that violate certain rights, where I think the state should step up.
 
So what I see and also this is my – I am in certain organizations, I see that there are regulatory efforts that should complement each other. It is important that we’re aware of the regulatory efforts at the moment. There are international organizations, like UNESCO, Council of Europe, EU, OECD that are all active in this field. I think cooperation between them is also important. Of course, at the state level, you also have to have certain national laws in place. And actually there are already some laws and regulations which are applicable even to the use of AI. But there’s a need for new regulation also at the international level. I think as I mentioned, there are entities that also could, you know, come up with certain codes of conduct and regulatory – self-regulatory instruments to help ensure that whenever AI is used, it is used in the right way.
 
>> MODERATOR: Okay. Thank you, Zoltan. So my question would be, for example, to Karthikeyan. As you are working for IBM, can you give an example: in your everyday life as a scientist, does the company which employs you, as a matter of practice, enforce some ethical constraints and limitations? Do ethical frameworks already exist in the corporations and research organisations which work on AI?
 
So is there already some kind of self-constraining process there?
 
>> Karthikeyan Ramamurthy: Yeah, we do have an ethics board, comprised of a large number of people, which looks at some of the big things that we do and provides advice, modifications, changes, whatever. So we have a review process at the big level, and it carries quite a bit of weight; it is an important thing that we do. The other thing is, for example, IBM has said that it won’t do face recognition any more. That came from the CEO. So those kinds of high-level commitments are also there.
 
Also in our day-to-day life, since you ask, we do have human resources. We look at what things do people do. We have a broad knowledge of stuff. We have an internal review process when we write papers and come up with ideas, so on. So we have an internal review process around papers, stuff like that. I would definitely say that there are internal mechanisms in place at various levels, starting from day-to-day stuff to bigger stuff.
 
>> MODERATOR: That is encouraging to know. Also, I want to hang on a little bit to what Zoltan mentioned about human decision-making processes. Sometimes we see those processes are affected by bias as well. Do you see the possibility of using AI to actually combat, not amplify, existing human bias? Is there a way it could help us reduce the existing bias in human decision-making? Is such a thing possible?
 
I would also be interested in listening to the position of this question from other panelists also. Please, what do you think?
 
>> Karthikeyan Ramamurthy: Yeah, one thing that people have been working on quite a bit recently is using a combination of humans and machines in the system, and trying to understand when it is best to give a decision to humans and when it is best to give it to machines. Of course, you can build a sophisticated system like this. In fact, there are systems called second-opinion systems, which recommend that you get a second opinion beyond that expert, along with the machine, at that particular point. So that framework is interesting; there are lots of possibilities, like deferral, second opinion, all of those. And there is also recourse: if you get a decision from a machine, let’s say you are denied credit at a bank, there are things like algorithmic recourse that tell you what you could have done in order for your loan to get approved. Those kinds of things are also there.
 
So there is a lot of possibility, right? The thing is that you have to be very careful about these things and do them in a proper way, with all the constraints in mind, ensuring they are part of the bigger ecosystem and so on. These are some of the areas where you can beneficially apply AI: you can make it work with humans, you can ensure human supervision. Also, internally, there are many versions of this, but at IBM we have this idea called fact sheets, which is basically a way to track your AI service at various points throughout the life cycle. You can keep tabs at the various points: at the training data stage, what do you want to measure? At the algorithm stage, what do you want to measure? At the prediction stage, what do you want to measure? You can keep track and govern the system; you can have robust oversight of the system. So there are mechanisms for that also. And of course, if the system is not performing properly, you can delegate the decision to humans. Those kinds of things can also be done.
 
So I don’t know if there is a way to like correct the decision of the human at a given instant, but there are ways to make humans and machines work together in the proper way, right?
 
There are many mechanisms for that. Like some of these, I mentioned, there are probably a lot more.
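
A minimal sketch of the deferral idea mentioned here: when the model’s confidence in a case is low, the case is routed to a human instead of being decided automatically. The threshold and the cases below are illustrative assumptions, not a description of any real system.

<syntaxhighlight lang="python">
# Learning-to-defer sketch: low-confidence cases go to a human reviewer.
CONFIDENCE_THRESHOLD = 0.8   # illustrative value; in practice tuned and audited

def decide(case_id, model_probability):
    confidence = max(model_probability, 1 - model_probability)
    if confidence < CONFIDENCE_THRESHOLD:
        return f"case {case_id}: deferred to human review (confidence {confidence:.2f})"
    decision = "approve" if model_probability >= 0.5 else "deny"
    return f"case {case_id}: automated decision '{decision}' (confidence {confidence:.2f})"

for case_id, p in [("loan-001", 0.95), ("loan-002", 0.55), ("loan-003", 0.10)]:
    print(decide(case_id, p))
</syntaxhighlight>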
 
>> MODERATOR: Okay. Thank you, Hiromi maybe you are also willing to contribute to that. Maybe the potential benefits of improving the overall results of the decision-making where, like people and machines are involved. What do you think?
 
>> Hiromi Arai: We have several methods to achieve fairer decisions by using machine learning, by applying algorithmic fairness methods. So one way to contribute to decision-making is to use such fairer machine learning models as a teacher, or as a second opinion, for human decision-makers.
 
>> MODERATOR: Okay. Yeah. I think, Ekaterina, do you agree with the general view? With –
 
[Inaudible, multiple people speaking]
 
>> Ekaterina Muravleva: Yes, I agree, but I would also like to mention that in terms of training machine learning systems, we certainly should pay attention to the diversity of the dataset, and if we deal with really big datasets, we certainly run into the question of open data and the accessibility of data. In some cases, for example during my consulting work for some Russian oil and gas companies, they are very closed, in the sense that you initially have to sign a lot of documents just to access an anonymised dataset, for instance on potential places of exploration. And if you would like to have a really well trained system, you need a lot of data. And this is relevant not only for commercial data.
 
So one of the reasons for the success of GPT-3 is the great amount of words and expressions it has been trained on.
 
What I would also like to draw attention to: in the last two weeks there was news that China has adopted really comprehensive data security legislation, effective from the first of September. Almost all data handled on the territory of China will have to be reported to or stored for China’s governance. It is expected that this act will have a deep impact on data processing activities and also on business operations in China, and in fact not only in China.
 
So the really interesting question for all the people who deal with data is the question of its accessibility and quantity of open data.
 
In this field, we do not have strong or strict regulatory acts; each company solves this problem for itself somehow. But certainly, sooner or later, we should have some common point of view on the accessibility of data and the possibility of having the majority of data open.
 
>> MODERATOR: Oh, thanks. Ekaterina.
 
So maybe I would also ask Daniel then, what do you think regarding this certain demand for transparency. So in terms of data, in terms of the algorithms, would that be at all helpful? In terms of international cooperation? In this context of AI bias and even beyond bias. So regarding any kind of problem we might have with this technology, what is your take on this?
 
>> Daniel Leufer: Yeah, transparency is important, but it doesn’t solve anything by itself. We need it to solve problems, and we currently don’t have it. It is the first step that allows for actual responses. As an example, an NGO that we work with a lot, AlgorithmWatch, does a yearly report that tries to track the use of AI systems in Europe. That requires money, investigative journalism and time, and they often need to send freedom of information requests which are not responded to. We don’t know what is being used in the EU for serious things. Companies and Governments will be open about the things they want to be open about and not the things they don’t, and those tend to be the ones that have the most impact on fundamental rights. We have been asking for different transparency measures. We would like public registers, so the public can see what systems are being used and basic information about them, and so that there is a channel to get more information if you believe that maybe you have been discriminated against or there is a problem with the system. Some cities have already rolled that out: Amsterdam and Helsinki have public registers of AI systems. We would like that expanded, and in the EU’s proposal for regulation of AI there is a proposal for a database. There is interesting stuff there. At the end of the day, knowing an AI system is in use and having transparency about it won’t, in certain cases, solve the problem. And I would like to point here to the fact, which Karthikeyan mentioned, that IBM pulled back from using some facial recognition. I would say that the key thing we need to do to have AI actually work for people and achieve its potential is to ban certain applications of it. That maybe sounds strange to some people, but there are some applications of AI that are so fundamentally problematic, and cannot be fixed by any technical measures or legal safeguards, that we need to prohibit them so other applications can flourish. Karthikeyan pointed to this: we led a global campaign with over 200 civil society organisations in 60 countries calling for a ban on biometric surveillance, not only facial surveillance but also surveillance of voices and bodies, in publicly accessible spaces. Most of this points to the limitation of the bias framing: there are certain uses of AI that are in themselves problematic. Using facial recognition in public spaces is bad if it is inaccurate or biased, but if it works perfectly it is still problematic.
 
It is a perfect tool for surveillance, and that is still not okay.
 
Other tools: we talked about AI in hiring. Some hiring tools use data about your face and the way you move your face to feed into the decision about whether you are a good candidate or not. That is not okay. At its worst, you are going back to a kind of 19th-century physiognomy, judging
 
[Background chatter]
 
whether the length of someone’s nose makes them suitable for a job. We need to make sure systems are not designed in a way that is fundamentally problematic. We have seen regulators considering prohibiting certain applications.
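
The public register idea can be pictured as a simple published record per deployed system. The fields below are assumptions based only on what is said here (what the system does, what data it uses, whether a human takes the final decision, and a contact channel); they are not the actual schema used by Amsterdam or Helsinki.

<syntaxhighlight lang="python">
# Illustrative entry for a public register of algorithmic systems.
# Field names and values are invented for the sketch, not an official schema.
register_entry = {
    "system_name": "Parking permit triage",
    "operator": "Example City Council",
    "purpose": "Prioritise applications for manual review",
    "data_used": ["application form fields", "past permit decisions"],
    "automated_decision": False,          # a human takes the final decision
    "contact_channel": "algorithm-register@example.city",
    "last_updated": "2021-06-29",
}

for field, value in register_entry.items():
    print(f"{field}: {value}")
</syntaxhighlight>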
 
>> MODERATOR: I think it is also important, when we discuss bias in those systems, to note the type of application that is at play: whether it is science-based or not, because it might not be science-based at all, and if it is science-based, whether it is ethical and lawful. That is, if I understand your point right there.
 
Okay. So I think we will now have to wrap up our session, as I see it. I believe we have with us a reporter who was overseeing what we were talking about: that is Katharina Höne. Maybe she will be able to sum up for us what has been said.
 
>> Katharina Höne: Thank you so much. It is a pleasure to be here. My name is Katharina Höne I’m a Geneva Internet Platform Rapporteur. Geneva Internet Platform is the official reporting partner of EuroDIG. And as you maybe have seen in other sessions we’re providing key messages and session reports from all workshops.
 
What I will do now is present the key messages from this session. The report will be posted later on the Geneva Internet Platform digital watch observatory. I would like to remind you that the messages I’m presenting are afterwards available for further commenting and further discussion. And EuroDIG will provide more information on the process. But basically let me present the five key messages that we have been taking from this session. One second. I’m going to go to present mode. So it is nice. And we’ll also share my screen.
 
There we go. So, first message. Algorithmic bias is a particular concern in decision-making regarding sensitive decisions with human rights implications. Ultimately, the outcomes of machine learning should be seen as only one input into decisions eventually taken by humans.
 
If there is strong objection to this message, you can write it in the chat. Otherwise, we will consider a very rough consensus on this message. Let me wait a few seconds for any alerts from your side in the chat. Okay. As I said, there is opportunity to comment further.
 
Second message. A broad understanding of bias is warranted to address discrimination and harm. Bias can come in at all steps of developing and using a particular AI system. This concerns decisions about the algorithm, the data, as well as the context in which the system is used. There are also mechanisms to make human and machines work together better for better decisions.
 
Same principle. If there is any objection, alert us in the chat. Again, there will be space for further discussion in shaping this message later on.
 
Okay. Let me move to the third message. Policies need to mitigate the risks of algorithmic decision-making. Constraints, safety mechanisms, audit mechanisms, and algorithmic recourse need to be in place. In addition, it is crucial, as a first step, to work towards greater transparency and explainability of AI systems involved in decision-making. Databases that list AI systems and data in use should be considered, as well as bans on certain AI systems with high risk and high harm. Again, alert me in the chat if there are any particular concerns. If there is rough consensus, meaning I don’t see any objections, we can move on.
 
Aleksandr, I don’t know how we handle this, if we take this raised hand from Jorn. Or move on.
 
>> MODERATOR: Jorn, if you want to comment quickly because we have little time. We do not hear you, unfortunately.
 
>> Jörn Erbguth: I was not able to unmute before. I wanted to say that bias is not just introduced by external factors; bias is inherent to deep learning or machine learning. It relates to the prior message, where you talk about the introduction of bias.
 
>> MODERATOR: I see your point. I don’t think it is like a common consensus on that. So I would leave that out of like the session messages, but I get your point. Thank you.
 
>> Jörn Erbguth: Okay. It is up to you.
 
>> Katharina Höne: Regarding self-regulation. A number of technological companies have self-regulation mechanisms in place at various levels. Self-regulation of the private sector is important but ultimately not enough. Various regulatory efforts need to complement each other, and greater cooperation between various stakeholders is needed to create synergies. Okay, the next one.
 
Equality and fairness are values that have strong cultural connotations. They are important principles to address bias, yet it is not easy to find an intercultural agreement on some aspects of these principles. Addressing algorithmic bias also needs to include discussion on what kind of society we want to live in, in the future.
 
I will stop sharing my screen. The question, do we have a rough consensus on the messages? As I said, you will have the opportunity to shape them up a bit more later, if you want to take care of some small details. If that is all, I hand back to Aleksandr and thank you for a great discussion. It was a pleasure to listen to all of you.
 
>> MODERATOR: Thank you, Katharina Höne for this well-done summary. I think, yeah, if anyone has any final remarks, maybe you just for a few seconds, like Daniel, Zoltan, Hiromi Arai, Karthikeyan, Ekaterina, otherwise, we can wrap up.
 
>> Zoltan Turbék: May I add something?
 
>> MODERATOR: Please.
 
>> Zoltan Turbék: I was happy we had a diverse group on this panel. I think talking to each other, you know, from the technical communities and all the communities, policymakers, NGO, it is important. I have to understand technical aspects, and the technical people need to be aware of the legal implications, value aspects, human rights aspects. Thank you for organizing this.
 
>> MODERATOR: Thank you very much. Thank you, Zoltan.
 
So I think that wraps up our session and I would return the floor to studio host in Belgrade.
 
>> STUDIO: Hi, everyone. Obviously, we had a great discussion not only in audio but also in the chat. However, this is not the end of this day. We have a closing session as well. Thank you, Katharina Höne for wrapping up everything. I hope we will see each other soon.  


[[Category:2021]][[Category:Sessions 2021]][[Category:Sessions]][[Category:Human rights 2021]]


You are invited to become a member of the session Org Team! By joining an Org Team, you agree to your name and affiliation being published on the respective wiki page of the session for transparency. Please subscribe to the mailing list to join the Org Team and answer the email that will be sent to you requesting your subscription confirmation.

Session teaser

Public policy in many countries favours the development and application of machine learning and other technologies broadly designated as “artificial intelligence” – with a view of boosting economy, streamlining the processes in the public sector and improving the peoples’ quality of life. To that end, human decision-making is replaced or supplemented by automation, and automated decision-making already affects millions of people in Europe and around the world.

The long-term result, however, might be a net harm, if automated systems merely reproduce the flaws of human decision-making due to inappropriate bias in the systems’ input data, or generate new bias, since deep learning can create bias even with perfect data.

But if to err is human, is it even feasible to avoid bias altogether – either in human or automated decision-making?

And provided that the bias problem can be managed, are there any other substantial problems with using AI for taking significant decisions?

The goal of this workshop is to inform the discussion on AI policy and regulation in Europe and to further the understanding of these problems by the public at large.

Session description

Until .

Always use your own words to describe the session. If you decide to quote the words of an external source, give them the due respect and acknowledgement by specifying the source.

Format

Until .

Please try out new interactive formats. EuroDIG is about dialogue not about statements, presentations and speeches. Workshops should not be organised as a small plenary.

Further reading

Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example for an external link: Main page of EuroDIG

People

Please provide name and institution for all people you list here.

Focal Point

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Subject Matter Expert (SME) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles

  • Elena Dodonova, Council of Europe
  • Yannick Meneceur, Council of Europe

Organising Team (Org Team) List Org Team members here as they sign up.

Subject Matter Expert (SME)

  • Jörn Erbguth

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

  • André Melancia
  • Desara Dushi, Vrije University Brussels
  • Amali De Silva-Mitchell, Dynamic Coalition on Data Driven Health Technologies / Futurist
  • Yannick Meneceur, Council of Europe

Proposed Key Participants

  • Karthikeyan Natesan Ramamurthy, Research Staff Member, IBM Research AI
  • Ekaterina Muravleva, Senior Research Scientist at the Skolkovo Institute of Science and Technology
  • Zoltán Turbék, Vice-chair of the CAHAI Policy Development Group, Council of Europe
  • Daniel Leufer, Europe Policy Analyst, Access Now
  • Hiromi Arai, Head of AI Safety and Reliability Unit, RIKEN Center for Advanced Intelligence Project

Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder balanced dialogue also considering gender and geographical balance. Please provide short CV’s of the Key Participants involved in your session at the Wiki or link to another source.

Proposed Moderator

  • Aleksandr Tiulkanov, Special advisor to the Digital Development Unit, Council of Europe

The moderator is the facilitator of the session at the event. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide short CV of the moderator of your session at the Wiki or link to another source.

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter

Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

  • Algorithmic bias is a particular concern regarding sensitive decisions with human rights implications. Ultimately, the outcomes of machine learning should be seen as only one input into decisions eventually taken by humans.
  • A broad understanding of bias is warranted to address discrimination and harm. Bias can materialise at all steps of developing and using a particular AI system. This includes decisions about the algorithms, data, and the context in which the system is used. There are also mechanisms to make humans and machines work together better for better decisions.
  • Policies need to mitigate risks of algorithmic decision-making. Constraints, safety mechanisms, audit mechanisms, and algorithmic recourse all need to be in place. In addition, it is crucial, as a first step, to work towards greater transparency and explainability of AI systems involved in decision-making. Databases that list the AI systems and data in use should be considered, as well as bans on the use of certain AI systems with high risk and high harm.
  • A number of technological companies have self-regulation mechanisms in place at various levels. Self-regulation of the private sector is important but ultimately not enough. Various regulatory efforts need to complement each other and greater cooperation between various stakeholders is needed to create synergies.
  • Equality and fairness are values that have a strong cultural connotation. They are important principles to address bias, yet it is not easy to find an intercultural agreement on some aspects of these principles. Addressing algorithmic bias also needs to include discussion on what kind of society we want to live in in the future.

Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/human-vs-algorithmic-bias-unbiased-decision-making-even-thing.

Video record

https://youtu.be/kQEAIhbWHzk?t=22852s

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-482-9835, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> STUDIO: Hello, good evening, everyone. Welcome back to Belgrade Studio, EuroDIG. It is human versus algorithmic bias. Before I give the floor to the moderator, Aleksandr. I will remind you a few rules of what to do during this session. This session is of course being recorded. However, the chat is not. So in case you want to have your chat recorded, what your opinion is, please share it by raising a hand or typing a name that you want to be included in discussion. Before doing that, state your name and of course surname and the affiliation.

Be sure that close your video, if it is on or off, that your full name is presented. I think that will be all. And last point is please do not share the link to Zoom meetings here. We are and I think the crowd would be sufficient enough to discuss these interesting issues.

Over to you, Aleksandr.

>> MODERATOR: Hello. I hope I am being heard quite clearly? Hello?

>> STUDIO: I hear you well.

>> MODERATOR: Great. Thanks. My name is Aleksandr Tiulkanov, I will be moderating this session for where we will discuss the human and algorithmic bias. And we’ll see whether it is actually even a thing to make an unbiased decision, when it is automated or human made.

Let me please now introduce my panelists. Who will hopefully be able to join us, all of them, I think are already present? So we have Karthikeyan Ramamurthy staff member of IBM research. And Ekaterina Muravleva research scientist at Skolkovo Institute of Science and Technology. And Zoltan Turbék Vice-Chair of the CAHAI Policy Development Group. And Daniel Leufer Europe policy analyst Access Now. And Hiromi Arai, head of AI safety reliability unit RIKEN Center for Advanced Intelligence Project. And this session will be moderated by me. And I’m special advisors to the Digital Development Unit Council of Europe.

It might be different with people watching with a different level of background knowledge on the topic that we are going to discuss. Let’s perhaps discuss how does machine learning work in fact, in simple terms. Because I think it is essential for us to start from the basics to also important for policymakers and general public to understand what we are talking about, when we are talking about automated decision-making and systems that support this process.

And I hope that Karthikeyan from IBM can enlighten us on that. If you don’t mind, Karthikeyan.

>> Karthikeyan Ramamurthy: Hi, Aleksandr and everyone. Yeah, it is really great to be here. I think I’m hoping that we will have some very good discussion. So what does machine learning mean? It implies what it says.

It means you have a machine, something like a computer, that is going to learn from the examples and environment it is in. Typically it means statistical machine learning. You will provide it with data and some labels it will learn from that.

For example, if you have a very young child and you show the child 10 different pictures of cars and say this is a car, the child is going to learn at the ends of the session, hopefully that what it is like a car. And it is going to also, when you show a new picture of a car, it is going to understand that it is a car, even though it is probably not very similar to the 10 examples you showed the child.

That is the hope of machine learning to help the machine learn based on the examples that you provide. And it will generalize new situations it has not seen. The hope of machine learning engineers is to make that learn as fast as possible with the amount of resources available. So that is the – that is one of the flavors of machine learning that we call supervised machine learning. There is unsupervised, Commission and so on. We don’t have to come into all the details. In general, it means particularly these days it is use in the parlance of statistical machine learning you let the machine – you have the algorithm to help the machine learn from examples. That is what it means.

>> MODERATOR: Okay. Great. Thank you, Karthikeyan, for the explanation. Maybe some of the other experts wish to add to that, perhaps describing the type of machine learning they deal with in their own work. We also have Ekaterina and Hiromi Arai with us. Ekaterina, maybe you can explain what you deal with primarily and what sort of machine learning you do?

>> Ekaterina Muravleva: Thank you for the opportunity to take part in this discussion. I am very glad to participate in it.

So I work in academia, and we deal with a wide variety of examples while consulting on different commercial projects. One of our big goals is to develop new and safe technologies in the lab.

I have an interesting case as an example. Interesting to me, because it is unusual from one point of view and quite ordinary from another.

At our home in Moscow we have a smart home system. When something is broken, the technicians repair it by flipping switches more or less at random. They have trained on a hundred examples, I suppose. In fact, they do not understand how the system works; they are able to repair it, but they don't know the reason. I use this as an example of human beings doing machine learning, because they cannot transfer this knowledge without retraining. It is a human analogy of what actual machine learning does.

Because machine learning actually learns by examples: the model is updated when it gives a wrong prediction, and more examples generally mean a better neural network. However, it is an open question whether it learns the actual concepts in the pictures or merely finds some patterns. I think that is a great question and a future task for mathematicians and machine learning engineers: how to construct machine learning algorithms that are safe in terms of their future results.
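
A minimal sketch of the "update the model when it gives a wrong prediction" idea, using a classic perceptron-style rule on made-up data; it is only an illustration of the learning loop, not any particular production system.

<syntaxhighlight lang="python">
# Perceptron-style learning: the weights are only updated when the
# current prediction is wrong; more (and more varied) examples help.
import numpy as np

X = np.array([[1.0, 1.0], [2.0, 1.5], [-1.0, -0.5], [-2.0, -1.0]])  # made-up inputs
y = np.array([1, 1, -1, -1])                                        # made-up labels

w = np.zeros(2)
b = 0.0
for epoch in range(10):
    for xi, yi in zip(X, y):
        pred = 1 if np.dot(w, xi) + b >= 0 else -1
        if pred != yi:          # wrong prediction -> update the model
            w += yi * xi
            b += yi

print(w, b)
</syntaxhighlight>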

>> MODERATOR: Okay. Great. Hiromi, maybe you can share what you focus on in your studies, and perhaps you are also willing to discuss how you understand the topic of this session, bias in AI. Do you see it as a problem at all? And if yes, in what way, please?

>> Hiromi Arai: Yeah, um... yes. By using machine learning we can learn from examples, sometimes resulting in very complex estimators that may be black boxes.

If we apply machine learning to, for example, hiring or education, that is, to decision-making which has a big effect on human lives, and the decisions contain bias, such biased decision-making can be problematic. And if the system is a black box, it is sometimes hard to even notice that such biased decisions are being made. So I think we have to care about this bias problem.

>> MODERATOR: Okay. Thank you. So Daniel, maybe you are willing to elaborate on what you think the sources of bias in AI systems might be, and how it occurs. Could you give us your perspective on what the problem potentially is?

>> Daniel Leufer: Sure, thanks again. I appreciate you having me. I will say something about the definition. There are useful comparisons to make between how humans and machines learn: machines are stupid. They're very impressive, but quite dumb. It is often the case that if you slightly change the circumstances, the algorithm won't work anymore. The model can be trained on pictures that all have forests in the background; suddenly you introduce a car with snow in the background and the whole thing falls to bits. We say they're very fragile. They also need massive amounts of data to learn from, whereas a child can learn from a small amount of data. That is important to keep in mind. It helps explain why some of the bias problems, or other problems, can creep in.

What can I say on the sources of bias? I mean, there is a lot to say. You can find articles that list 35 sources of bias, or three. But the most common thing we hear is that the bias comes from the data: the algorithms are neutral, and if you train them on biased data you will get a biased system in the end.

It is true that bias can come in through bad data, but that idea is very simplistic and doesn't really capture the whole picture. The way that you design the algorithm, the way you design what it is trying to optimize for, can lead not only to bias but to discrimination, to harm, to other issues. The bias label is quite narrow. It makes us think correctly about certain examples: a facial recognition system performs well on someone who looks like me, but not on someone who has darker skin; is it biased against people with darker skin in that case? But there are other types of harm. Basically every decision made by people in designing the system, and there are a lot of decisions, including the decision to use AI in the first place, is an opportunity for problems to creep in. So the whole way down the chain of how the system is created and deployed, where it is used and not used, is an opportunity for bias to come in.
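
One way to make this concrete: even checking for bias is a design choice made by people. A small sketch, on made-up labels and predictions, of auditing a model's error rate per demographic group; which groups and which metric to audit are themselves decisions.

<syntaxhighlight lang="python">
# Illustrative sketch: per-group error rates on hypothetical predictions.
import numpy as np

group  = np.array(["A", "A", "A", "B", "B", "B"])   # hypothetical demographic groups
y_true = np.array([1, 0, 1, 1, 0, 1])               # made-up ground truth
y_pred = np.array([1, 0, 1, 0, 1, 0])               # made-up model outputs

for g in ["A", "B"]:
    mask = group == g
    error = np.mean(y_true[mask] != y_pred[mask])
    print(f"group {g}: error rate {error:.2f}")      # A: 0.00, B: 1.00
</syntaxhighlight>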

>> MODERATOR: Okay. Great. Thank you. Maybe Zoltan would also contribute. From the policy perspective, would you say that certain uses of AI are more relevant and more important in terms of what we provide policy for? Or do we need to regulate it all equally, perhaps depending on the use cases, scenarios or categories? What's your take on this?

>> Zoltan Turbék: Good afternoon, everyone. As was pointed out, I come from the policymaking side, as a Government official, and I also happen to participate in the work of the Council of Europe and its regulatory efforts, where we have Daniel also sitting around the table.

So why is bias important for this topic and also for regulation? I think the problem is that it can result in violations of certain human rights and of certain principles which are important in society. And when that happens, something has to be done; some reaction is needed, or measures to prevent such situations in the first place. What can it cause? It can result in discrimination, for example.

It can result in decisions which are totally wrong within a legal system. It can result in unjustified, I mean unlawful, decisions which affect certain individuals or specific groups.

So from the policymaking point of view, it is important to mitigate the risks and also to prevent their occurrence. Daniel, you mentioned the situation where the data used results in biased decision-making. But we can also mention other examples, like the system itself being problematic or biased, or a situation where the system is used in a context it was not designed for and results in problematic decisions.

>> MODERATOR: Okay. Thank you. I was also interested in certain scenarios. Before the session, Ekaterina suggested one to me: I believe there was a case in Russia where an individual was using a Tesla car that had not been officially introduced to the country but had been imported personally. And it turned out that a certain type of vehicle, which is popular on Russian roads but was not present at all in the training dataset the Tesla engineers originally used, became a problem, to the extent that this type of vehicle was not recognized as such. This relates to what Daniel has underlined. If we discuss the difference in decision-making and learning between humans and machines, we definitely see a gap here. Karthikeyan, maybe you could give us your perspective on this gap between how machines learn and how humans learn. For a human, a minimal amount of information may be enough for a kid to tell whether something is a car or not, but for machines it might require much more time and resources. What do you think? Is there a solution to avoid this kind of accident caused by not recognizing a car as a car?

>> Karthikeyan Ramamurthy: Yeah. Actually that is a very deep philosophical question. First of all, we don't truly understand how people learn. The example I gave was of course simplistic, because it was the first example, and I think some of my colleagues elaborated on it. For machines, if you are just applying a completely unconstrained deep learning system to recognize cars, of course it will require a lot of examples, because it is only looking at pixels. It doesn't know about the concept of a car or anything; it needs a lot of examples to learn, even though deep learning is being touted as one of the most impressive things these days. Humans, on the other hand, have a lot of inherent wiring in them, to put it simply. I'm not a cognitive scientist, but humans are wired to recognize things, connect the dots, understand the parts of a picture, extrapolate even what is in the background, perceive 3D and so on. More importantly, humans are able to associate a concept to an image very well. This is called the grounding problem: how do you associate a symbol to a representation?

So humans are very good at that. Machines don't have any idea of what the grounding is; they only live in the world that you create for them. If you show them cars, they will learn to recognize cars. If you show them something else, with lots of examples, they will hopefully learn to recognize that something else.

There is really no easy way of getting around this problem. The only way is to make sure that we have enough constraints and enough safety mechanisms in place so that at least when something wrong happens, we can recognize it. One thing we have realized is that machine learning is a small part of a really large ecosystem in our society, and there is nothing magical or superhuman about it. It does require supervision, it does require regulatory mechanisms; you need to have enough knobs and dials on it so that you actually keep checking what's going on. That is the big engineering problem behind this. So that is my not-so-short answer to your question.

>> MODERATOR: Okay. But still a very informative one, Karthikeyan, so thank you. Maybe we can also look at this problem from another angle, which is suggested to us by Jörn in the chat. He says the bias we are discussing now is just the tip of the iceberg. From his point of view, the global problem is that trained systems do not obey any rules. I quote: deep learning, if we are talking about that specific way of training machines, always creates bias, even with perfect data. I would address this question to Karthikeyan and Hiromi, because you are deeply involved in this area. How would you comment on that specific statement?

>> Ekaterina Muravleva: I would like to say that machine learning absorbs all the bias in its data. Most machine learning amplifies that bias, and special measures need to be taken to ensure that unbalanced data do not lead to unbalanced predictions.

Certainly, I suppose such measures should be incorporated, maybe not in the near future but eventually, and this aspect should be taken into account when you develop something. And certainly, in my opinion, we cannot nowadays rely on expert systems based on machine learning technology in the sense of a final decision. They can be assistance or advising systems, but in areas where the cost of a decision is quite high, a solution based only on machine learning results cannot today be considered fair. So it can be used as an advising part, but not as the main part of the solution.
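
A minimal sketch of one such "special measure", assuming scikit-learn and made-up data: reweighting training examples so that a heavily unbalanced dataset does not simply teach the model to ignore the rare class.

<syntaxhighlight lang="python">
# Illustrative sketch: counteracting class imbalance with example weights.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (95, 2)),    # 95 majority-class examples
               rng.normal(2, 1, (5, 2))])    # only 5 minority-class examples
y = np.array([0] * 95 + [1] * 5)

plain = LogisticRegression().fit(X, y)
balanced = LogisticRegression(class_weight="balanced").fit(X, y)

x_new = [[2.0, 2.0]]  # a point that resembles the rare class
print("plain    P(rare class):", plain.predict_proba(x_new)[0, 1])
print("balanced P(rare class):", balanced.predict_proba(x_new)[0, 1])
# The reweighted model gives the rare class noticeably more weight in its
# predictions instead of defaulting to the majority class.
</syntaxhighlight>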

>> MODERATOR: Thank you. Hiromi, do you have the same point of view, or do you have something different in mind, I wonder? Or do you totally agree with what Ekaterina said?

>> Problem with microphone.

>> Hiromi Arai: Yeah. I almost agree with Ekaterina, because achieving equality in every aspect, in every situation, is difficult. For example, some cases require equality of opportunity and others require equality of result, and achieving both simultaneously is difficult. For example, think about international services. Who counts as a minority differs between countries: Japanese people may be a minority in the EU, but in Japan, Japanese people are the majority. So what kind of equality is required also differs between places, groups, et cetera. So achieving perfect equality, perfect fairness, by using machine learning techniques is difficult. Maybe it is better to use it as advice, and humans should also contribute to the final decision.
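
A small sketch of the tension described here, on made-up decisions: "equality" can be measured as equal selection rates (equality of result) or as equal error rates for qualified people (equality of opportunity), and with different base rates between groups the two generally cannot both hold exactly.

<syntaxhighlight lang="python">
# Illustrative sketch: two notions of "equal treatment" on the same decisions.
# Group B has a lower base rate of qualified people, so equal selection rates
# and fully equal error rates cannot both hold unless decisions ignore
# qualification entirely.
import numpy as np

group  = np.array(["A"] * 4 + ["B"] * 4)
y_true = np.array([1, 1, 0, 0,  1, 0, 0, 0])   # who was actually qualified (made up)
y_pred = np.array([1, 1, 0, 0,  1, 1, 0, 0])   # who the system selected (made up)

for g in ["A", "B"]:
    m = group == g
    selection_rate = y_pred[m].mean()            # "equality of result"
    tpr = y_pred[m & (y_true == 1)].mean()       # "equality of opportunity"
    fpr = y_pred[m & (y_true == 0)].mean()
    print(g, selection_rate, round(tpr, 2), round(fpr, 2))
# A: selection 0.5, TPR 1.0, FPR 0.00
# B: selection 0.5, TPR 1.0, FPR 0.33 -- equal selection and TPR, unequal FPR.
</syntaxhighlight>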

>> MODERATOR: Okay. Thank you. So Karthikeyan, do you share that view? Do you have an opinion on the place of machine learning in society?

>> Karthikeyan Ramamurthy: Yeah. So I do share the views broadly, because they all make a lot of sense to me. But in terms of the question itself, I think bias is generally a term that can mean many things. And the question also says perfect data; that is a very loaded term. Perfect for whom? That raises questions.

Some people may say that whatever data you collect should be a perfect sample. But most data are loaded with historical biases and systemic inequalities that have accumulated in the past. One example that a lot of people talk about is loan data. If you collect credit data from the past, most likely there is a huge majority of men who got credit, whereas women got little credit because very few applied for credit 20 or 30 years back, right?

So is this perfect data? Probably not, because it is from the past. Is it relevant to the current society and value systems we have? Is it relevant for the type of society we want in the future? We don't know; these are very difficult things to say. But I do agree that with whatever data you have, any deep learning algorithm can be biased, for the reasons some of my colleagues have pointed to before: because of the way you design the system, the decisions you make throughout the system, and so on.

So yeah, it is true that you can have very imperfect decisions from so-called perfect data. That part I do agree with. Yeah.

>> MODERATOR: Okay. Thank you. Daniel, what would you say? What is your take on the place of machine learning and the constraints that need to be put on some aspects of it? What do you think?

>> Daniel Leufer: I will maybe give one interesting example, which is GPT-3, this very famous language generation system from OpenAI. It made headlines; there was the ridiculous situation of The Guardian publishing an op-ed written by AI. In reality, they had generated 16 different op-eds and the editors picked bits from each one and put them together. It is one of the most hyped cases: it is simultaneously impressive and does fantastic things, but it doesn't do the things it tends to be hyped about. It is a very good autocomplete text function. It can go beyond that and do images and code and such. But you hear people saying it can do amazing things that it actually can't.

A French start-up made a medical chatbot with it, where patients could talk to it. What GPT-3 can do is produce convincing language: on a formal level it sounds real and it operates in different registers. The problem is that it can't be constrained to giving actually good medical advice, unlike an older type of expert system.

So there was an example, I can put the link in the chat, where they were testing the chatbot and said: I'm feeling depressed, do you think I should kill myself? And GPT-3 said: yes, I think it is a good idea. That shows there is a fundamental problem with the unpredictability of that approach, which means it is maybe not suitable for situations in which you need really strong constraints.

To stick to the chatbot example: I forget the name of the world's best chatbot. There is an award for it; it has a Japanese name. It won the competition and was reported on as a deep learning chatbot. But the creator said it is not deep learning; he said, it is a rule-based system, I don't use any machine learning, and it is the best there is. It is interesting to keep that in mind.

And another thing: we often hear questions about whether we should trust AI, and what we can do so that people trust AI. I always say we shouldn't trust AI. We don't trust companies, we don't trust Governments. We have processes in place, we have democratic oversight, we have audits and structures and processes in place precisely because we don't trust other people or companies or Governments, and we need the same with AI. It is no different: it is a tool used by companies or Governments. We need the possibility of audits and transparency, and then there can be trust in the use of the tool. There is no sense in which we should be moving towards a situation where we simply trust AI.
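
To make the point about constraints concrete, here is a deliberately crude, hypothetical sketch, not any real product or API, of the kind of hard rule a rule-based layer can enforce around a free-form text generator: safety-critical inputs never reach the generator at all but are escalated to a human.

<syntaxhighlight lang="python">
# Deliberately crude illustration: a hard-coded safety rule sitting in front
# of a hypothetical text generator. The constraint lives outside the model,
# because the model itself cannot be relied on to obey it.
CRISIS_TERMS = ["kill myself", "suicide", "end my life"]

def respond(user_message: str, generate) -> str:
    if any(term in user_message.lower() for term in CRISIS_TERMS):
        # Never let the free-form generator answer; escalate to a human.
        return "I cannot help with this. Please contact a human professional."
    return generate(user_message)

# 'generate' would be any text-generation function; here just a stand-in.
print(respond("Do you think I should kill myself?", lambda m: "(model output)"))
</syntaxhighlight>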

>> MODERATOR: Okay. A great contribution, Daniel, thank you. Zoltan, seeing that we have discussed certain problematic applications, what in your view might be the policy areas where unconstrained use of machine learning is most problematic? In Daniel's example, a chatbot giving suggestions to people in psychological distress is obviously not the best place for an unconstrained application. Do you have thoughts on other areas or use cases that might be problematic for this kind of technology?

>> Zoltan Turbék: Thank you for the question, Aleksandr. Before I answer, let me react to some of what has been said by our colleagues.

You said you cannot imagine an algorithmic system resulting in perfect equality. The same applies to human decision-making; we have seen problems in human decision-making as well. The main question, regardless of whether we talk about a human or an algorithmic system, is what the results of such decision-making are and whether they have an impact which is unacceptable.

In the case of human decision-making you have to have corrective measures, and I believe that when AI systems are used it is equally important to have such mechanisms in place, in addition to transparency, explainability and other types of measures that would lead to that. You asked me where the use of AI or algorithmic decision-making can be dangerous.

I believe there are specific sectors, such as law enforcement or judicial systems. What we see is that AI is used to improve their decision-making, but when biased data are used by such systems, they can also result in unjustified bias. So I think the question is what the consequences of such decisions could be, and when they impact the rights of people you have to be especially careful. That is why I mentioned law enforcement.

>> MODERATOR: Definitely, that scenario probably deserves particular attention, I think.

So in terms of policy, do you think this would primarily be a stimulus for self-regulation or for mandatory regulation? And if it were mandatory regulation, what level would be more appropriate, the international level or the national level? What are your thoughts on that?

>> Zoltan Turbék: I believe there is no single best solution for that. I do see the importance of self-regulation by companies and other entities, but I think such measures are not enough: there are no enforcement mechanisms, they are non-binding, and even private sector entities can make decisions that violate certain rights, where I think the state should step up.

What I see, and this is also my experience from the organizations I work in, is that there are regulatory efforts that should complement each other. It is important to be aware of the regulatory efforts under way at the moment. There are international organizations, like UNESCO, the Council of Europe, the EU and the OECD, that are all active in this field, and I think cooperation between them is also important. Of course, at the state level you also have to have certain national laws in place, and there are already some laws and regulations that are applicable even to the use of AI, but there is a need for new regulation at the international level as well. And as I mentioned, there are entities that could also come up with codes of conduct and self-regulatory instruments to help ensure that whenever AI is used, it is used in the right way.

>> MODERATOR: Okay. Thank you, Zoltan. My next question would be, for example, to Karthikeyan. As you work for IBM, can you give an example: in your everyday life as a scientist, does the company that employs you, as a matter of practice, enforce some ethical constraints and limitations? Do ethical frameworks already exist in the corporations and research organizations that work on AI?

So is there already some kind of self-constraining process there?

>> Karthikeyan Ramamurthy: Yeah, we do have an ethics board, comprising a large number of people, which looks at some of the big things that we do: it tracks regulation, provides advice, and proposes modifications and changes. So we have a review process at that high level, and it carries quite a bit of weight; it is an important thing that we do. The other thing is that IBM, for example, has said it won't do face recognition anymore. Those kinds of high-level commitments, which came from the CEO, are also there.

Also, in our day-to-day life, since you ask, we have people and resources that look at what we are doing and have a broad view of it. We have an internal review process when we write papers and come up with ideas, things like that. So I would definitely say there are internal mechanisms in place at various levels, from day-to-day work up to the bigger decisions.

>> MODERATOR: That is encouraging to know. I also want to stay a little with what Zoltan mentioned about human decision-making processes. Sometimes we see that those processes are affected by bias as well. Do you see the possibility of using AI to actually combat, rather than amplify, existing human bias? Is there a way it could help us reduce the existing bias in human decision-making? Is such a thing possible?

I would also be interested in hearing the position of the other panelists on this question. What do you think?

>> Karthikeyan Ramamurthy: Yeah, one way to do it, something that people have been working on quite a bit recently, is using a combination of humans and machines in the system, and trying to understand when it is best to give a case to humans and when it is best to give it to machines. You can build quite sophisticated systems like this. There are, for example, systems called second-opinion systems, which recommend that you get a second opinion beyond the expertise of that particular person, alongside the machine, at that particular point. So within such a framework there are lots of interesting possibilities: deferral, second opinions, and so on. And there is also recourse: if you get a decision from a machine, say you are denied credit at a bank, there are things like algorithmic recourse that tell you what you could have done in order for your loan to get approved. Those kinds of things are also there.

So there is a lot of possibility. The thing is that you have to be very careful about these things and do them in a proper way, with all the constraints in mind, and ensure they are part of the bigger ecosystem. These are some of the areas where you can beneficially apply AI: you can make it work with humans, you can ensure human supervision. Also, internally, we have this idea, there are many versions of it, but at IBM we call it fact sheets, which is basically a way to track your AI service at various points throughout its life cycle. You can keep tabs at the various stages: at the training data stage, what do you want to measure; at the algorithm stage, what do you want to measure; at the prediction stage, what do you want to measure. You can keep track and govern the system; you can have robust oversight of it. There are mechanisms for that as well. And of course, if the system is not performing properly, you can delegate to humans. Those kinds of things can also be done.

So I don't know if there is a way to correct the decision of a human at a given instant, but there are ways to make humans and machines work together in the proper way.

There are many mechanisms for that. I mentioned some of these; there are probably a lot more.
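
A minimal sketch of the deferral idea mentioned above; the threshold, labels and wording are hypothetical. The point is only that the machine decides on its own when confident, and otherwise hands the case to a human, with recourse information attached to negative outcomes.

<syntaxhighlight lang="python">
# Illustrative sketch: a confidence-based deferral policy around any
# probabilistic classifier -- one simple way of making humans and machines
# work together rather than letting the machine decide everything.
def decide(probability_of_approval: float, threshold: float = 0.9) -> str:
    if probability_of_approval >= threshold:
        return "approve"
    if probability_of_approval <= 1 - threshold:
        return "deny (with recourse information: what would change the outcome)"
    return "defer to a human reviewer / request a second opinion"

for p in [0.97, 0.55, 0.04]:
    print(p, "->", decide(p))
</syntaxhighlight>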

>> MODERATOR: Okay. Thank you. Hiromi, maybe you are also willing to contribute on that, perhaps on the potential benefits for improving the overall results of decision-making where both people and machines are involved. What do you think?

>> Hiromi Arai: We have several methods to achieve fairer decisions by using machine learning, by applying algorithmic fairness methods. So one way to contribute to better decisions is to use such fairer machine learning models as a teacher, or as a second opinion for human decision-makers.

>> MODERATOR: Okay. Yeah. Ekaterina, do you agree with that general view?

[Inaudible, multiple people speaking]

>> Ekaterina Muravleva: Yes, I agree, but, well, not "but". I would also like to mention that in training machine learning systems we certainly should pay attention to the diversity of the dataset, and if we deal with really big datasets we run into the question of open data and the accessibility of data. In some cases, for example during my consulting work for some Russian oil and gas companies, the companies are very closed, in the sense that you initially have to write a lot of documents just to access an anonymized dataset, say of potential exploration sites. And if you would like to have a really well-trained system, you need a lot of data. This concerns not only commercial settings.

One of the reasons for the success of GPT-3 is also the great amount of words and expressions it has been trained on.

What I would also like to draw attention to is that in the last two weeks there was news that China, from the first of September, is introducing really comprehensive data security legislation: almost all data handled on the territory of China should now be reported to or stored for the Chinese authorities. It is expected that this law will have a deep impact on data processing activities and business operations in China, and in fact not only in China.

So the really interesting question for all the people who deal with data is the question of its accessibility and the quantity of open data.

In this field we do not have strong or strict regulatory acts. Each company solves this problem for itself somehow. But sooner or later we should certainly arrive at some common point of view on accessibility and on the possibility of having the majority of data open.

>> MODERATOR: Oh, thanks, Ekaterina.

So maybe I would also ask Daniel: what do you think regarding this demand for transparency, in terms of data and in terms of the algorithms? Would that be at all helpful, also in terms of international cooperation, in this context of AI bias and even beyond bias? Regarding any kind of problem we might have with this technology, what is your take?

>> Daniel Leufer: Yeah, transparency is important, but it doesn't solve anything by itself. We need it to solve problems, and we currently don't have it; it is the first step that allows for actual responses. As an example, an NGO that we work with a lot, AlgorithmWatch, does a yearly report that tries to track the use of AI systems in Europe. That requires money, investigative journalism and time, and they often need to send freedom of information requests which are not responded to. We don't know what is being used in the EU for quite serious things. Companies and Governments will be open about the things they want to be open about and not about the things they don't, and those tend to be the ones that have the most impact on fundamental rights. We have been asking for different transparency measures. We would like public registers, so the public can see what systems are being used, with basic information about them and a channel to get more information if you believe you may have been discriminated against or that there is a problem with the system. Some cities have already rolled that out: Amsterdam and Helsinki have public registers of AI systems, and we would like to see that expanded. In the EU's proposal for the regulation of AI there is a proposal for a database, so there is interesting stuff there. But at the end of the day, knowing that an AI system is in use and having transparency about it will, in certain cases, not solve the problem. And I would like to point here, as Karthikeyan mentioned, to the fact that IBM stepped back from some uses of facial recognition. I would say that the key thing we need to do, to have AI actually work for people and achieve its potential, is to ban certain applications of it. That may sound strange to some people, but there are applications of AI that are so fundamentally problematic, and cannot be fixed by any technical measures or legal safeguards, that we need to prohibit them so that other applications can flourish. We led a global campaign, joined by over 200 civil society organizations in 60 countries, calling for a ban on biometric surveillance, not only of faces but of other bodily characteristics, in publicly accessible spaces. Much of this goes beyond the limitations of the bias framing: there are certain uses of AI that are problematic in themselves. Using facial recognition in public spaces is bad if it is inaccurate or biased, but even if it works perfectly it is problematic.

It is then a perfect tool for surveillance, and that is still not okay.

There are other tools; we talked about AI in hiring. Some hiring tools use data about your face and the way you move your face to feed into the decision about whether you are a good candidate or not. That is not okay. At its worst, you are going back to a kind of 19th-century physiognomy,

[Background chatter]

judging whether the length of someone's nose makes them suitable for a job. We need to make sure systems are not designed in a way that is fundamentally problematic. And we have seen regulators considering prohibiting certain applications.

>> MODERATOR: I think it is also important, when we discuss bias in those systems, to consider the type of application at play: whether it is science-based at all, and, if it is science-based, whether it is ethical and lawful. That is, if I understand your point correctly.

Okay. So I think we now have to wrap up our session, as I see it. I believe we have with us a reporter who was following what we were talking about: Katharina Höne. Maybe she will be able to sum up for us what has been said.

>> Katharina Höne: Thank you so much. It is a pleasure to be here. My name is Katharina Höne and I am a Geneva Internet Platform rapporteur. The Geneva Internet Platform is the official reporting partner of EuroDIG, and as you may have seen in other sessions, we are providing key messages and session reports for all workshops.

What I will do now is present the key messages from this session. The report will be posted later on the Geneva Internet Platform Digital Watch observatory. I would like to remind you that the messages I am presenting will afterwards be available for further commenting and discussion, and EuroDIG will provide more information on that process. So let me present the five key messages we have taken from this session. One second, I am going to switch to presentation mode and share my screen.

There we go. So, first message. Algorithmic bias is a particular concern regarding sensitive decisions with human rights implications. Ultimately, the outcomes of machine learning should be seen as only one input into decisions eventually taken by humans.

If there is a strong objection to this message, you can write it in the chat. Otherwise, we will consider there to be rough consensus on this message. Let me wait a few seconds for any alerts from your side in the chat. Okay. As I said, there will be an opportunity to comment further.

Second message. A broad understanding of bias is warranted to address discrimination and harm. Bias can come in at all steps of developing and using a particular AI system. This concerns decisions about the algorithm and the data, as well as the context in which the system is used. There are also mechanisms to make humans and machines work together better for better decisions.

Same principle. If there is any objection, alert us in the chat. Again, there will be space for further discussion in shaping this message later on.

Okay. Let me move to the third message. Policies need to mitigate the risks of algorithmic decision-making. Constraints, safety mechanisms, audit mechanisms, and algorithmic recourse need to be in place. In addition, it is crucial, as a first step, to work towards greater transparency and explainability of AI systems involved in decision-making. Databases that list the AI systems and data in use should be considered, as well as bans on certain AI systems with high risk and high harm. Again, alert me in the chat if there are any particular concerns. If there is rough consensus, meaning I don't see any objections, we can move on.

Aleksandr, I don't know how we handle this, whether we take this raised hand from Jörn or move on.

>> MODERATOR: Jörn, if you want to comment quickly, please do, because we have little time. We do not hear you, unfortunately.

>> Jörn Erbguth: I was not able to unmute before. I wanted to say that bias is not just introduced by external factors; bias is inherent to deep learning and machine learning. It relates to the previous message, where you talk about the introduction of bias.

>> MODERATOR: I see your point. I don't think there is a common consensus on that, so I would leave it out of the session messages, but I take your point. Thank you.

>> Jörn Erbguth: Okay. It is up to you.

>> Katharina Höne: Regarding self-regulation. A number of technological companies have self-regulation mechanisms in place at various levels. Self-regulation of the private sector is important but ultimately not enough. Various regulatory efforts need to complement each other, and greater cooperation between various stakeholders is needed to create synergies. Okay, the next one.

Equality and fairness are values that have strong cultural connotations. They are important principles for addressing bias, yet it is not easy to find an intercultural agreement on some aspects of these principles. Addressing algorithmic bias also needs to include a discussion on what kind of society we want to live in in the future.

I will stop sharing my screen. The question is: do we have rough consensus on these messages? As I said, you will have the opportunity to shape them up a bit more later, if you want to take care of some small details. If that is all, I hand back to Aleksandr, and thank you for a great discussion. It was a pleasure to listen to all of you.

>> MODERATOR: Thank you, Katharina Höne, for this well-done summary. If anyone has any final remarks, just for a few seconds, Daniel, Zoltan, Hiromi, Karthikeyan, Ekaterina; otherwise, we can wrap up.

>> Zoltan Turbék: May I add something?

>> MODERATOR: Please.

>> Zoltan Turbék: I was happy that we had such a diverse group on this panel. Talking to each other, across the technical community and all the other communities, policymakers, NGOs, is important. I have to understand the technical aspects, and the technical people need to be aware of the legal implications, the value aspects, the human rights aspects. Thank you for organizing this.

>> MODERATOR: Thank you very much. Thank you, Zoltan.

So I think that wraps up our session, and I return the floor to the studio host in Belgrade.

>> STUDIO: Hi, everyone. Obviously, we had a great discussion, not only on audio but also in the chat. However, this is not the end of the day; we have a closing session as well. Thank you, Katharina Höne, for wrapping everything up. I hope we will see each other soon.