Artificial Intelligence, Ethics and the Future of Work – WS 08 2018: Difference between revisions

From EuroDIG Wiki
Jump to navigation Jump to search
No edit summary
(13 intermediate revisions by 2 users not shown)
Line 1: Line 1:
[[Consolidated programme 2018|'''Consolidated programme 2018 overview''']]<br /><br />
6 June 2018 | 14:00-15:30 | GARDEN HALL | [[image:Icons_live_20px.png | YouTube video | link=https://youtu.be/7S4vU956Q7o]]
== <span class="dateline">Get involved!</span> ==
<br />
You are invited to become a member of the session Org Team by subscribing to the [https://list.eurodig.org/mailman/listinfo/ws8 '''mailing list'''].
[[Consolidated programme 2018| '''Consolidated programme 2018 overview''']]
If you would just like to leave a comment feel free to use the [[{{TALKPAGENAME}} | discussion]]-page here at the wiki. Please contact [mailto:wiki@eurodig.org '''wiki@eurodig.org'''] to get access to the wiki.
 
== Session teaser ==
== Session teaser ==
What are the ethical and rights considerations in relation to AI and the future of work? Come, find out and contribute!
What are the ethical and rights considerations in relation to AI and the future of work? Come, find out and contribute!
Line 18: Line 16:
*Should robots and algorithms be given legal personality to address liability and taxation issues?
*Should robots and algorithms be given legal personality to address liability and taxation issues?
*What are the rights considerations in all of this?
*What are the rights considerations in all of this?


Background:  
Background:  
In 2017, EuroDIG explored the topic of how the digital revolution is changing people’s worklife. Artificial Intelligence is expected to change business models and the jobs landscape for every kind of organization, including the government. While new jobs will be created (including AI-augmented work), jobs will also be displaced. The net effect on society is uncertain. Human beings need to adapt and will require (among others) lifelong learning skills, re-education (including a reconceptualization of education), re-training, entrepreneurial training, and more. Social welfare and security systems that underpin care for society requires new solutions with new sources of financial contributions/support as AI displaces humans in the workforce. Basic income as a means of coping with the impact of jobs displacement is being experimented upon in various countries, but the results do not yet provide clear guidance for other countries. What should Europe do? In 2018, EuroDIG explores the ethical and rights considerations of AI in relation to the future of work and humanity.
In 2017, EuroDIG explored the topic of how the digital revolution is changing people’s worklife. Artificial Intelligence is expected to change business models and the jobs landscape for every kind of organization, including the government. While new jobs will be created (including AI-augmented work), jobs will also be displaced. The net effect on society is uncertain. Human beings need to adapt and will require (among others) lifelong learning skills, re-education (including a reconceptualization of education), re-training, entrepreneurial training, and more. Social welfare and security systems that underpin care for society requires new solutions with new sources of financial contributions/support as AI displaces humans in the workforce. Basic income as a means of coping with the impact of jobs displacement is being experimented upon in various countries, but the results do not yet provide clear guidance for other countries. What should Europe do? In 2018, EuroDIG explores the ethical and rights considerations of AI in relation to the future of work and humanity.
Note: Each of the components in the topic is broad and complex. What we want to focus the workshop on is the impact of AI on the Future of Work, the ethical considerations, whether the ethical approach is sufficient, and what it means for various societal stakeholders in terms of responsibility and recourse.


== Format ==  
== Format ==  
Line 50: Line 52:
'''Focal Point'''  
'''Focal Point'''  
*Rinalia Abdul Rahim &ndash; Workshop Focal Point; Managing Director, Compass Rose Sdn Bhd & Advisory Board Member, Mozilla Foundation
*Rinalia Abdul Rahim &ndash; Workshop Focal Point; Managing Director, Compass Rose Sdn Bhd & Advisory Board Member, Mozilla Foundation
'''Organising Team (Org Team)'''
*Joelma Almeida &ndash; Fundação para a Ciência e a Tecnologia, Portugal
*Farzaneh Badii &ndash; EuroDIG Subject Matter Expert; Executive Director, Internet Governance Project / Research Associate at Georgia Institute of Technology, School of Public Policy
*Amali de Silva Mitchell &ndash; Individual, Civil Society
*Frédéric Donck &ndash; EuroDIG Subject Matter Expert; European Regional Bureau Director, Internet Society
*Claudio Lucena &ndash; Fundação para a Ciência e a Tecnologia, Portugal
*Fabio Mortari &ndash; Fundação para a Ciência e a Tecnologia, Portugal
*Malgorzata Pek &ndash; Council of Europe
*Maarit Palovirta &ndash; EuroDIG Subject Matter Expert; Senior Manager, Regional Affairs Europe, Internet Society
*Rachel Pollack Ichou &ndash; UNESCO
*Sandro Karumidze &ndash; Telecom Business Development, UGT, Georgia
*Tapani Tarvainen &ndash; Electronic Frontier Finland




Line 73: Line 61:
CONTEXT-SETTERS
CONTEXT-SETTERS


Vint Cerf, Internet Pioneer (via video) - View on Ethics and Algorithms
Vint Cerf, Internet Pioneer (VIA VIDEO) - View on Ethics and Algorithms


Olivier Bringer, Head of Next Generation Internet Unit, DG CONNECT, European Commission - View on European AI Strategy, Relationship with Next Generation Internet, Impact on Future of Work, Ethics and Human Rights implications/considerations
Olivier Bringer, Head of Next Generation Internet Unit, DG CONNECT, European Commission - View on European AI Strategy, Relationship with Next Generation Internet, Impact on Future of Work, Ethics and Human Rights implications/considerations
Line 82: Line 70:
STAKEHOLDER PERSPECTIVES
STAKEHOLDER PERSPECTIVES


Annette Muehlberg, Head of the ver.di Digitalisation Project Group, Germany - Workers/Trade Union Perspective on the ethical and rights considerations in relation to AI and its impact on the future of work  
Annette Muehlberg, Head of the ver.di Digitalisation Project Group, Germany - Workers/Trade Union Perspective on the ethical and rights considerations in relation to AI and its impact on the future of work [VIA VIDEO]


Mariam Sharangia, Chief Specialist of Strategic Development Department, Georgia's Innovation and Technology Agency - Governmental perspective on the state’s responsibility in relation to AI and its impact on the future of work, especially the ethical and rights considerations  
Mariam Sharangia, Chief Specialist of Strategic Development Department, Georgia's Innovation and Technology Agency - Governmental perspective on the state’s responsibility in relation to AI and its impact on the future of work, especially the ethical and rights considerations  
Line 107: Line 95:


'''Reporter'''
'''Reporter'''
*Su Sonia Herring


Reporters will be assigned by the EuroDIG secretariat in cooperation with the [https://www.giplatform.org/ Geneva Internet Platform]. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:
 
*are summarised on a slide and  presented to the audience at the end of each session
'''Organising Team (Org Team)'''
*relate to the particular session and to European Internet governance policy
*Joelma Almeida &ndash; Fundação para a Ciência e a Tecnologia, Portugal
*are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
*Farzaneh Badii &ndash; EuroDIG Subject Matter Expert; Executive Director, Internet Governance Project / Research Associate at Georgia Institute of Technology, School of Public Policy
*are in (rough) consensus with the audience
*Amali de Silva Mitchell &ndash; Individual, Civil Society
*Frédéric Donck &ndash; EuroDIG Subject Matter Expert; European Regional Bureau Director, Internet Society
*Claudio Lucena &ndash; Fundação para a Ciência e a Tecnologia, Portugal
*Fabio Mortari &ndash; Fundação para a Ciência e a Tecnologia, Portugal
*Malgorzata Pek &ndash; Council of Europe
*Maarit Palovirta &ndash; EuroDIG Subject Matter Expert; Senior Manager, Regional Affairs Europe, Internet Society
*Rachel Pollack Ichou &ndash; UNESCO
*Sandro Karumidze &ndash; Telecom Business Development, UGT, Georgia
*Tapani Tarvainen &ndash; Electronic Frontier Finland


== Current discussion, conference calls, schedules and minutes ==
== Current discussion, conference calls, schedules and minutes ==
Line 121: Line 118:


== Messages ==   
== Messages ==   
A short summary of the session will be provided by the Reporter.
*Artificial intelligence (AI) must be accountable, transparent, modifiable – privacy, and determinability by design is a must.
*Unintended and unexpected consequences in the development of AI and robotics are unavoidable.
*There must be an ethical code for algorithm developers.
*The education system needs to be revamped to prepare future workers with the necessary skills to deal with the new forms of jobs that AI will bring.
*Interdisciplinary teams are needed to relieve the burden of engineers and they need to be educated about ethics.
*AI technology needs a common, international framework; ethical clearance is not sufficient.
*‘I’m just an engineer’ is not an excuse when developing AI.
*AI is or will become a race, expecting adherence to ethical code by developers is not realistic.
*We need a kill switch for automated systems.
*AI can be part of the solution to the problem of future of work.
 
Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/artificial-intelligence-ethics-and-future-work


== Video record ==
== Video record ==
Will be provided here after the event.
https://youtu.be/7S4vU956Q7o


== Transcript ==
== Transcript ==
Will be provided here after the event.
Provided by: Caption First, Inc. P.O Box 3066. Monument, CO 80132, Phone: +001-877-825-5234, +001-719-481-9835, www.captionfirst.com
 
 
''This text is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text is not to be distributed or used in any way that may violate copyright law.''
 
 
>> Shall we get started? Okay. So, good afternoon. I hope you had a nice lunch. This is Workshop 8 of EuroDIG. It's a workshop focused on artificial intelligence, ethics, and the future of work.
 
So, if you look at the components in the workshop title, each one can be very broad and extremely complex. This workshop will focus on the interaction, specifically the impact of artificial intelligence on the future of job, on the future of work and the ethical dimensions.
 
What I'm going to do right now is to explain the flow of the workshop, introduce the role players, and also highlight the key questions that we would like to pose to get your feedback on. Okay?
 
So, we've divided the workshop into two parts. The first part is context setting. You will hear from Vint Cerf. He's the world's leading internet pioneer. He will speak on ethics and algorithms. And you will hear from -- the photographer is distracting me. You will hear from Olivier Bringer who's the European commission, the head of the next generation internet unit. He's going to talk about the European AI strategy which just came about. How many people have read it? One, two, three, okay, a good time to promote it, actually, all of you, yes. And in the strategy itself, he will talk about the strategy, the impact on the future of work, and the ethical consideration. And then Claudio Lucena. He's a professor of law in Brazil. Now a research fellow for the Center of the Future of Law in Portugal.
 
After that, I'm going to come to you for a discussion. In the discussion, I'll tell you the questions right now, because I'd like for you to think about it, it's the ethical approach, the right approach, to deal with the opportunities and challenges posed by AI and its impact on the future of work. If it is the right approach, is it alone sufficient? What else does it need? And are there other alternatives and what are the gaps? So, these are the input the we'd like to get to you from the discussion? We'll have about 15 to 20 minutes of discussion. If you have questions, we'll address that as well.
 
After that, we'll move to the second part of the workshop moderated by my colleague, Maarit Palovirta, Internet Society. There we'll have perspectives from stakeholders. We'll hear from Annette Muehlberg, she's here via remote and she's from Germany. After that, you'll hear about the role of the state from Mariam Sharangia from the Georgian Innovation and Technology Agency and to the business perspective from Clara Sommier from Google and then to Leena Romppainen from Finland and she'll give us the civil society perspective. Then we'll go to Christian Djeffal, where are you, there? You'll provide an internet perspective.
 
A well-rounded view. And then Maarit will pose the question what is the role and the responsibility of state when it comes to A.I. and the impact on the future of work and what is the experience of employer and employees, the public sector, the private sector as well as civil society. Okay, I'm going to introduce a key people. This is Maria, she's our timekeeper and very important to the role players and to the moderators because she's going to tell us how much time we have left for our allocated time. Please pay attention to her. We have Sonya Heading. She's going to come up at the end and do a summary on what she thinks could be the consensus on the room on the questions that will have been posed. And I have a remote moderator, Pedro over there who will channel people and people participating on-line. Do we have people on-line? We have one. Hopefully more will come in. Let's start with Vint Cerf on ethics and algorithms.
 
>> VINT CERF: Hello, I'm Chief Internet Evangelist and sometimes known as one of the fathers of the internet. I've been given a list of pretty interesting questions about artificial intelligence and ethics. Let me read them to you so we can get an idea of what's before us in this discussion.
 
What are the ethical considerations in relation to AI and the future of work? What is the responsibility of the state and what are the state's ethical considerations? What are the ethical considerations for employers as they deploy AI in work and in production processes. Are there ethical considerations that apply to workers themselves as they work with AI? Should robots and algorithms be given legal personality to address liability and taxation issues? And finally, what are the rights and considerations in all of this?
 
Wow, that's quite a list of questions and I suspect in the short video like this, I will not succeed in getting to all of them. Let me take one of them, particularly, first and that's the idea of creating false personalities like corporations being treated as people. I think that's very premature at this point. For all practical purposes, computer programs are not people, robots are not people, algorithms are not people. And I don't think we should treat them that way. However, those algorithms are used by people. And we can speak about the ethics of using those algorithms and using those programs in the conduct of business or perhaps just in the conduct of everyday life.
 
So, let's look at this from the tool point of view. Let's ask ourselves what should we be thinking when we apply the artificial intelligence or ma sheep-learning algorithms to accomplish a task. How much autonomy do we want to give to that software when it produces a result. Should we leave the software to make decisions alone without human intervention? For example, should this prisoner be allowed to obtain parole? Should this person be allowed to get a mortgage or a loan? Should this person be allowed to enter college? There are a whole series of questions like this that sometimes are addressed by gathering a great deal of statistical data and trying to boil that down into some kind of a predictive algorithm, possibly a machine learning algorithm, that will produce an outcome that will guide our decisions.
 
What I think is unethical is to use the mechanism solely and alone. The machines are powerful, the software that drives them, helping us to analyze data to look at outcomes and conditions over perhaps a period of time to try to understand what we can see statistically. But I think when we're making decisions about people and their lives, we have to remember that they aren't statistics, they're people. And we need to keep that in mind.
 
I think anyone developing artificial intelligence algorithms has a moral responsibility to be thoughtful about the kind of information used to teach the program or a machine running a multi-layer neural network and be very conscious of the potential of bias and error. We found some of the algorithms are quite brittle. They will work very well for 95% of the cases and in a few cases, they'll make really, really bad decisions and come to the wrong conclusions. We need to be thoughtful and sensitive to that possibility.
 
We should always be asking OUFRSs, what's the rationale for accepting the output of a particular algorithm, is there anything else we can do in order to determine whether or not the proposed outcome is, in fact, accurate and appropriate?
 
Judgment is an important element here. I don't believe we can expect our computer algorithms to have the kind of human judgment based on experience, for example. And knowledge of -- of human beings and their behavior. So, in the long run, I believe, all of these tools have the potential to be very, very powerful in helping us marshal the information to inform decisions. But I think it has to be done in an ethical context in which we take care to apply human judgment to important decisions where it will affect the lives of our fellow citizens on planet earth.
 
I'm sorry I can't join you for this conference. And this is such a rich and important topic that it deserves a great deal of attention. But I do hope, perhaps, that our paths will cross on the net.
 
>> RINALIA ABDUL RAHIM: So that was Vint. We thank him very much for the message which he did as a favor because I asked him to do it. Now we're going to hear from Olivier Bringer. What I heard from Vint is a call for an ethical code for developers of algorithms. Please?
 
>> OLIVIER BRINGER: Do you hear me? Yes. I'm from the European commission. I would like to say a few words about the strategy we have adopted in April of this year. I would start by saying that okay, today we'll discuss the ethical issues and the impact on jobs. But the starting point, I think, and this is the key idea of our strategy, is that artificial intelligence is going to bring huge benefits. So, it's going to bring benefits in healthcare, safer diagnoses, help to surgeons, for example. It's going to bring benefits in transport. Today, 90% of accidents are due to human error. So autonomous vehicles are going to make our roads safer. It's going to bring benefits to anticipate cyberattacks, etc., etc., etc.
 
We should keep that in mind. This is why in Europe we want to invest in artificial intelligence, we want to invest in the technologies. We want to invest in the capacities, high-performing computing, etc., that will support artificial intelligence. And we want to make sure that the user, the -- the users will use artificial intelligence. So, today, 99% of companies in Europe are SMEs, only one out of five is highly digitized. So, there's a real work to make sure that these companies use artificial intelligence to our benefit in every sector. So, we're going to invest also in that, giving access to artificial intelligence to algorithm data, etc.
 
So that's for the -- that's for the investment part. We are very, very ambitious. The second part is the impact on jobs indeed. I would say -- I looked through a few studies that my colleagues or experts gave me. The jury is still out on the exact impact of artificial intelligence on the job market. I mean, some studies say it will have a huge impact basically more than half of jobs are -- can be replaced, some studies say 10%. Some studies say the net effect is nil. But in any case, what is certain is it will have an effect. So, some jobs will be replaced. And every job is going to be transformed. So, we need to equip our workforce with the right skills to manage this transition. And it goes from getting the basic digital skills. So today 1/3 of Europeans do not have basic digital skill. That's a real issue. How can they deal with an artificial intelligence agent in the future? We need to make sure they get the basic skills. We also need to make sure that we have the engineers, AI, deep learning specialists who will be able to develop these technologies. And we have to help people in the transition. If your job is clanged because of artificial intelligence, your company, but above all, the states, has a duty, I think, to -- to help you manage the transition. While it's happening.
 
So that's the second aspect. We are also, we'll have a number of instruments to help do that. But, of course, a lot of it will be done by the member-states themselves who are responsible for the education systems.
 
Then the third aspect of the strategy, the third pillar, is about the -- the ethical and regulatory aspects. So, we are -- our view is that these developments like any developments, if you speak about the next generation internet blockchain, etc., each should follow our values. We're not going to give up our values. We're not going to go for a race to the bottom. That's very clear. So, we need to see how we can incorporate the values into the development of artificial intelligence. Luckily, we have already a solid framework, we have GDPR, we have a solid sign security framework, we have laws, laws about product liability and safety. They are there. We can build on them, some of them. And we need to explain a bit more how they apply in a world where we are surrounded by artificial systems and the internet of things. We're doing that, for example, for the product safety -- the product liability directive.
 
And then there are novel issues, issues that Vint phrased -- the issue of the level of human control. That -- that's very important. At which point, yes, my time is up, I'm going to wrap up. How to make sure that, you know, decisions are not made by the internal system on their own. How to make sure important decisions are still made by humans. How to make sure the decisions of artificial intelligence agents are explainable, they're transparent. Otherwise, if people if they don't understand, they will never trust these systems. So, we need to find a way to do that. This will involve research work
 
And then, of course, we need to make sure to avoid the etc.
 
We have set up an alliance, a multi- stake holder approach to advise us on these ethical aspect s and they will issue the report at the end of the year.
 
And the very last point, we have signed the declaration of -- 26 member-states, not us, have signed a declaration and engage together with us with the European union in all of these aspects. So, in the investment, in managing the job market transition, in ethical aspects. That's very important. We go together to address the big challenge.
 
>> Thank you, as I understand it, the European AI strategy exercises an ethical framework as well as a legal framework that complement each other?
 
>> VINT CERF: We have a -- do you hear me? We have a legal framework already? We will see if we need to complement or explain it. That might also be self-regulatory approaches. That we will see.
 
>> RINALIA ABDUL RAHIM: Okay.
 
>> VINT CERF: And the report by the AI alliance will be important in forming our decisions.
 
>> RINALIA ABDUL RAHIM: Okay. Claudio, what's the solution?
 
>> CLAUDIO LUCENA: Thank you, Rinalia? Do we have a presentation ready over there? Is anyone there? All right, thank you very much.
 
So, we have a couple of initiatives already in that sense. Back in the '50s when this conference convened and coined the term "artificial intelligence," there was not much of a policy fuss about it, partly because the idea was that it could be -- most of the problems could be solved as you see there with a selective number of researchers over the period of summer. That time frame didn't go exactly as planned. And part of the fact might have been because back then, they lacked the engines, the few, and the network of vehicles.
 
Now, fast forward that to the second half of our decade. Now everything is there. We've got everything. The engines are there. The fuel is here. And also, the network of vehicles is here. And, yes, their sensors, they know things, that makes a difference.
 
So, when we're ready to scale things to our everyday life, how do we define the challenges? Well, from the Harvard Brookman Kline institute has done a good job in defining the challenges in a blog post when he opened the initiative. Some of which I'm bringing to you. One is looking at AI as a set of monolithic techniques as if they could be streamlined through the whole plethora of human activities. But yet segment them.
 
Another one is that it's an important instrument but it's not the only one. We must use other governance tools. We might feel the need to recode the rule of law. This is something interesting that he brings also. One last thing from the blog post, there are others, is that responses might vary across sectors and jurisdictions.
 
Two things are coming from our project in Portugal where I'm part of the group where I research AI and inclusion, we have to take care and make sure that the data sets have quality and integrity so that the benefits they can bring can also favor inclusion.
 
What are the initiatives so far? As you have heard, one of them is national strategies. That was the case with the white house, that was the case with the UK, the European union also has -- more complex environment because there was a strategy, Olivier talked about it, there are logistic efforts in that. Another example, Brazil, the jurisdiction of my origin has also just launched an internet, internet strategy national plan.
 
The initiatives come in other forms too. And ethically aligned or driven initiatives which is the case of the Montreal declaration and the initiative run by the institute of electrical and electronic engineering, ethically aligned design, a good work in progress addressing ethic challenges. Right now, three weeks ago, we had in the Toronto declaration, which focuses not in hard national law, neither in ethically-only alliance and design, but in human rights as the basis out of which we should look at the responsible development of artificial technologies.
 
The regulatory efforts have already been discussed. Now in the United States, a couple of other models are in discussion, federal search, robotics commission, interagency cooperation of many kinds. From France, we take development from 2016, which implements a mechanism that is now also in article 22 of the GDPR. I've been told by my French colleagues that there were not many developments then. I think we might have more once this mechanism is reflected also in the GDPI.
 
So, to end, these technologies, they give us so much potential that at times, we might be taken to imagine that they are looking at the future. Well, the training data sets that they use, most models of neural networks use, are data that already exist. So, in the end of the day, the best option, all they're doing is looking at the past. We have to look at the future. This is a decision that we have to make.
 
The World Bank says we're going as society and as mankind, we're doing well in developing digital technologies, very well. And we're doing very bad in sharing the results of those developments. That shouldn't happen. We have good standards. We do have good standards. But this is the way we're moving.
 
Artificial technologies -- artificial technologies and mechanisms have the potential and I will leave it here -- have the potential to contribute to one of the greatest tools for inclusion in sharing -- in sharing of power and influence. It might be the best opportunity of mankind in that sense. But it also can drive us to the path of the worst concentration of power ever. This applies to any view, but also to the future of work. We should decide. It moves too fast. We can't think too much for too long. We have to decide, are we going for concentration or are we going for sharing, thank you very much?
 
>> RINALIA ABDUL RAHIM: Thank you.
 
[ Applause ]
 
>> RINALIA ABDUL RAHIM: So that was a lot of food of thought. You heard the call for responsible development and responsible use and responsible governance. You heard the impact on the future of work and the importance of the rule of law, the code of the law itself, but also the need for other governance tool. That's where the ethical framework comes in. And that human beings need to make up its mind in terms of what kind of future and development that it wants. So now I come to you because we'd like to hear from you in terms of what you think. Yes, please? There's a microphone right behind you?
 
>> Okay, thank you. Patrick Pennings from the Information Society Department from the Council of Europe and, in fact, our Council of Europe's Commission of Ministers decided artificial intelligence was to become one of the key features of the organization in terms of research.
 
I could start with saying technological development, some say, will never be as slow again as it is today. We have to count on a rapid progression in certain areas. Now artificial intelligence is already bringing the daily results to our lives. We may not know about it, but in quite a number of thing, like urban planning, traffic planning, so on, there's already quite a bit of artificial intelligence involved. Is it sufficient to solely have ethical clearance of how we are going to make use of artificial intelligence? I don't think so. As a human rights-based organization, I think we need to look at human rights input from the start by design of algorithms, self-learning algorithms, self-learning machines, autonomous machines. I think we need to be clear on that. That is that if we want to create a legal space which is predictive -- and we're already working on quite a number of things within the organization, for example, predictive legislation or predictive justice. All of this is already in place. So, we need to make sure that we have a common legal framework in which the developments take place.
 
I give an example -- for example, when we were discussing biomedical developments, it's not sufficient to have just ethical clearance. We also need to make in that case the convention, which is an international treaty which puts everyone on the same line. That doesn't mean that we don't have to rethink the rule of law and how we deal with it in different jurisdictions but that's clear that's what needs to happen, thank you.
 
>> RINALIA ABDUL RAHIM: You started your comment about speed. With the development of a treaty, that's going to take a few years, do you think we can tolerate that? Just a quick answer? Can we afford it?
 
>> AUDIENCE: Can we afford not to spend that time? That's the response to it. For example, within the Counsel of Europe it took six months to develop a convention on lone terrorist fighters, the lone wolves. So, is six months too much?
 
>> RINALIA ABDUL RAHIM: Six months tolerable.
 
>> AUDIENCE: The question really is, what if you don't? And I think it's important that in order to suck down the genetic principles, of course you need to take time. I don't think that rushing in to any kind of legislation without involvement of all of the communities, I know they're only speaking about governments, but the technological community, the internet developers who are actually with their hands in the mold, so to say, they have to be involved in this and that's quite clear. We cannot do it, governments cannot do it on their own.
 
>> RINALIA ABDUL RAHIM: Okay, other views? Yes, please? Please. There is a mic here. And I'll come to you.
 
>> AUDIENCE: Thanks. Actually, following up to this comment, I think it's very important that we make up our mind about the agents but it's also important to find a way again to bring every country on the table on this, because what we're seeing today is like a race for each country to make use of artificial intelligence and a competitive advantage against other countries. The problem looks like a prisoner in the land mine which everybody seems to benefit from the non- collaborative approach but by not collaborating, you end up with the worst possible outcome which could be devastating. Do you think there's a way to bring countries to collaborate?
 
>> RINALIA ABDUL RAHIM: Hold on, don't answer yet, note it's for you. Yes?
 
>> AUDIENCE: Despite being a remote moderator.
 
>> RINALIA ABDUL RAHIM: You can participate. Speak closer to the microphone.
 
>> AUDIENCE: A personal question, a personal comment. I would like to think, or I would like to ask to the panel, so when we talk about these issues, and I'll speak as a researcher in artificial intelligence and also a student, that we sometimes software developers aren't really aware of these legal -- even of the legal issues. And the ethical ones are thought in a personal way and sometimes they are no guidelines to these aspects.
 
So, my question is how to bring together those two points of view. Because even -- well, I've been to the youth day program. We've been preparing the presentation at EuroDIG for two days. The discussion, we were going through artificial intelligence and my question was, so should we put it in the technical or in the legal way? And the immediate answer was, why can't we combine both? So that's my question. Thank you.
 
>> RINALIA ABDUL RAHIM: Thank you very much? Are there other views?
 
I know Claudio can handle multiple questions at one time, yes, please? Is there a microphone on the other side?
 
>> AUDIENCE: Thank you. Actually, I don't have a question. I have just a suggestion. Because honestly, I don't believe that the developer can be kind of a -- you the can't oblige a developer who are different kind of people to oblige to certain rules or maybe even ethical rules. I know what I'm talking about. I know I've been in touch with developers. I know exactly what ethics means. And I have respect of education. I really don't think you can bring even all of the countries to collaborate those who will not and it will become a race.
 
So, I would suggest let's try to be more specific, even let's define what you mean by AI because mostly when I read and try to understand what people assume under AI, it might not even be AI. So, let's just try to be more specific. TUR.
 
>> RINALIA ABDUL RAHIM: Thank you. So that's an important input. But I just want to remind that we want the question answered, is the ethical approach alone sufficient, if it's not sufficient, what more is needed. And but, of course, your input is taken, Claudio?
 
>> CLAUDIO LUCENA: Thank you, Rinalia. A clear-cut answer to that question, the ethical approach is a necessary component.
 
>> RINALIA ABDUL RAHIM: Necessary, why?
 
>> CLAUDIO LUCENA: We can't go without it.
 
>> RINALIA ABDUL RAHIM: Why?
 
>> CLAUDIO LUCENA: Because that's where we're trying to preserve our human nature in the loop of this new wave of technology. I really appreciate both questions from Pedro and from our fellow. It's interesting, the definition is very interesting. I think we're referring -- I'm sorry to come back to a terminology thing, I think it's essential for what we are going to call it. We keep calling it artificial intelligence because of the decision from the '50s. There was no other description for what they were thinking of at the time. We're talk ugh about analytics over big data. That's what we're talking about. When we do that, we eliminate most of the semantic existential issues of talking about something intelligent other than a human being. It's not that it's going to happen. There are those implications. There are huge funds and interesting studies about that part of substituting our human intelligence, semantically speaking, philosophically speaking. That's not what we're trying here. We're dealing with the everyday use of data processing over big quantities of data.
 
I'm a lawyer, but I'm also a computer scientist. I have these two nationalities, so to say. And allow me to respect to the most but disagree to the full extent of the premise you're bringing here. And that addresses Pedro's question, unfortunately he's not here.
 
We not only should but we must place and divide that burden with developers too. I would strongly encourage you to read a recent work published by -- I'm going to laugh, the name is a terrible -- but the title of the work is "The Moral Character of Cryptographic Work." It's an interesting piece. Much more elaborate than what I'll do here. Is a counterpoint. There's no way we don't advance this if we don't shift the burden of this development. The excuse -- and I'm talking also as a computer scientist, the excuse, I'm just an engineer, does not work anymore. This is too serious. Engineers are educated enough to foresee the consequences of this work. We're looking at developments of an area from which the developments are all there. Look at Yelico, they're there, they gave speech. Listen to them. They are clear in saying there are uses in which artificial intelligence is mature enough. There are other ones that it's not. Let's use one -- natural language processing. Fair enough, consequences are measurable. Predictive policing, as a decision maker, not as an assistant? Definitely not.
 
Not to give a definitive answer, because we're not going to get there. But these are the reflection s I would like to make.
 
>> RINALIA ABDUL RAHIM: Christian, you wanted to jump in?
 
>> CHRISTIAN DJEFFAL: I think, first if I may first comment to your question. I think it's a very, very important and relevant point.
 
>> RINALIA ABDUL RAHIM: Can you hear Christian?
 
>> No.
 
>> RINALIA ABDUL RAHIM: Could you speak louder, please?
 
>> CHRISTIAN DJEFFAL: It's an important and relevant point to think about the international setup. In a way we have the weapons race narrative that has been introduced. And I think it's very detrimental because other than, for example, nuclear technology, AI is not limited to energy and bombs. So, it's a much -- much more diverse and important technology. And I think the work of the European union in that regard, especially the declaration that included Norway, which is not a member of the European Union, is a very important step to an inclusive international and transnational setup in that regard. The measures highlighted, it doesn't apply to work, but I think it will -- it will impact to link research institutions and link hubs, innovation hubs and to create a kind of network. And I think this could really be a first step in an even more inclusive way to offer a different -- a different narrative that fits very well with coal and steel. So, this was my first remark.
 
If I may add a very brief second remark in the defense of engineers, I completely agree they have a big responsibility, but if we take the comment of the colleague and really look at the specific problem, we are facing, those engineers, they -- they work within an organization. They have specific tasks. They listen to the authority of others. And it's -- they deal with highly complex questions, many of them are sensitive, but it takes a lot to speak up and to change the development because they don't make ultimate decisions in many -- many settings. So, what we need is extraordinary teams. We cannot expect of them that they solve questions that the highest European courts quarrel about and then argue about. We need interdisciplinary teams and we need the involvement of the people for which those technology is made. So, there is -- the burden sharing is just necessary for the -- for the technology we're building here.
 
>> RINALIA ABDUL RAHIM: So, the you're saying engineers can take guidance. They can take guidance.
 
>> CHRISTIAN DJEFFAL: Of course, they can take guidance. Of course, they would need some level of education in order to -- everybody who's worked in an interdisciplinary project knows this. It's like language translation. But if you have an interdisciplinary team, and I see this in my own work, it can be very, very productive because once you start the conversation.
 
>> RINALIA ABDUL RAHIM: Olivier, you're an engineer?
 
>> OLIVIER BRINGER: I am an engineer. I've been working for the last ten years among lawyers, I manage them. Okay. No, I wanted to intervene on two points, the corporation and the ethical aspects, where we should put the ethical dimension -- in the training or if we should deal with it in post.
 
On the cooperation, yes, there will be a competitive dimension always. We cannot avoid that. Countries have their own interests, they have their own industries to protect. They want to attract investment. There's a competitive dimension. But still, we are faced with the same issues. We're faced with the same issues in terms of the impact on jobs and the same issues in terms of the ethical questions. We allow an algorithm to take medical decisions or not?
 
So, we have to work together. And I think we have -- we are going in that direction in Europe. I mean to have 26 countries signing the declaration is -- is a -- it's important. Once we have that, we also have a certain weight when we discuss with our international partners. We need -- when Germany discusses alone with the United States, it's not -- it's not that great. If Europe discusses as a whole with the United States, I think we can achieve better results.
 
And then on the -- on the ethical aspects, I think -- I mean, you can -- you can go that way. You can go for the legislative. If I'm being a bit self-critical, look at the time it took us to do GDPR and we're still in the implementation phase. Look at the time it takes to do a case. It takes years and years. Of course, we'll continue to do that when it is needed. But we have to intervene upstream. And I think an engineer can understand ethical issues. An engineer can be trained to that. And they would be interested in that too, I'm sure. So, we have to put that in the curriculum of scientists and engineers, these aspects. And I agree fully to have multi- disciplinary teams. And if you look at the bigger tech success from the west coast on the U.S., usually they work with different -- different competencies.
 
>> RINALIA ABDUL RAHIM: Okay, thank you very much. So, we have about three minutes if -- if you have burning contributions, I see two hands. We'll let the young lady go first.
 
>> AUDIENCE: Hello. So, Christian has emphasized the role of interdisciplinary teams. And I will agree with that and I would like to see these interdisciplinary teams, not only lawyers and engineers and philosophers, but I would really like to see also psychologies and social psychologies. I think there's an elephant in the room, namely the question of what kind of reality do we want to live in? And in a sense, what -- in what kind of conditions does a human being flourish? And we are discussing the future of work. So, if I want to go to a shop, I want to talk to a real person, not with a machine. I want my car or bus to be driven by a human being, not only because it's my preference, but there is a whole body of research and it's also common knowledge that we need human contact. And it doesn't matter if we -- if our roads will be safer and our reality will be safer and sterile, and we will live 100 years but what kind of quality of life would it be if 70% of us will be depressed because we lack social contact. So, I -- I wanted to really bring that to perspective so if one day there's an interdisciplinary team, I would really like to see --
 
>> RINALIA ABDUL RAHIM: A wholistic one.
 
>> AUDIENCE: Psychologists there as well.
 
>> RINALIA ABDUL RAHIM: Thank you very much. Very important point. Social contact, like sunshine. Yes?
 
>> AUDIENCE: So, we have two brief comments. The first one, why is an ethical component required? Better to answer how can ethics be ignored in any human activity even if we are using technology? And the other comment, engineers deal with tradeoffs all the time. One of the things they need to include in their tradeoff are ethical considerations.
 
>> RINALIA ABDUL RAHIM: Thank you, AVI deals with human rights issues and these issues as well. You also had a comment, please?
 
>> Hello. I'm from Portugal. And I want to mention one or two things. So, maybe 100 years ago, people used to do a lot of work manually. And then machines came on over and they started to do some of the work people used to do. And there was a change in society that people were being replaced by machines. It turned out it wasn't exactly like that. People were still indispensable. Nowadays we talk about AI and it's the same thing. People assume that they're going to be dispensable and AI is going to take over the world. We have to think about machines as something that actually helps us as a society to be better in the future to help us to make us not do a lot of -- you know, the work that we as a society don't -- shouldn't be doing. Machines could be doing that work.
 
So, let me just give you an example on how things can go wrong. Because I'm a technical person. And things can go wrong. So, a few months ago, there was an Uber car drive -- that drove automatically, and it actually killed someone. I'm not sure exactly the scenario because it might have been the -- you know, the other person's fault that was riding the bicycle. We don't really know. But that situation actually occurred. So, we have to be sure that when we automate these processes, that they are actually safe for humans. That depends on a lot of people. That depends on the engineers, that depends on politicians, that depends on everyone to actually make sure that there are rules in place to guarantee that that happens.
 
However, you cannot be fully sure that that's going to be a problem until it actually happens. So, if no one had died and this is obviously a bad situation, but if no one had died in that situation, you might have 1,000 or a million cars running automatically in the streets until someone actually dies and that would have been a much bigger problem. So, it was actually better in an unfortunate way, it's better that that happened now in a very early stage so that we can take measures into fixing that problem.
 
Another issue that happens maybe a few years ago, there was a twitter box that Microsoft came up with. That bot tried to learn from every TWEET that everyone talked to it. So, the problem is there was -- okay, there was manipulation and a lot of people actually ended up giving them racist comments show the bot ended up being very racist and very negative. But the thing is, that bot was a very good experience. Because that bot actually learned what the human nature of the people that were interacting with it actually did. If that bot had a nuclear switch to kill half of the world, it probably would have killed half of the world. Because it's understood that that's something that they should have done. So, from the point of view of the engineers, there's also a need to have something like a kill switch to make sure that whatever happens with the bots, it does not go beyond what's reasonable for a bot for artificial intelligence.
 
>> RINALIA ABDUL RAHIM: Okay, kill switch. Thank you very much. We need to move to the second part, the stake holder perspective. I'm going to hand over to Maarit. And also, for the role players, you heard some of the comments, some of them posed questions which we didn't address. If you have a reaction to that, please factor that into your interventions. Maarit?
 
>> Comments about the interdisciplinary aspect of artificial intelligence and here we have a very multi- stake holder panel representing workers, government, civil society, and academia. Oh, I believe we have a video intervention by Annette Muehlberg. Hello, can you hear us? We can't hear you. I think you might be on mute. Can you hear? We would like to start with you, Annette, if you can get your sound working? Okay? It's not artificial intelligence, but we're still having issues. Okay, maybe -- maybe what we can do is that we will start with Mariam in the meanwhile. And, Annette, if you could after Mariam's intervention make a sign if your system is working. So, Mariam represents the Georgian innovation agency. So, please?
 
>> Mariam Sharangia: Hello, everyone. I'm delighted to be here and share our agency's perspective regarding AI. A very controversial issue. And to start off, what is artificial intelligence. It's not like scientific fiction, we're using it every day, I'm sure. All of you have smart phone, all of you got advice from your smart phone to go to the restaurant or have the music you like. It's a perfect time for startups, for innovators to experience it hand come up with new ideas so it's still in the development stage. At the same time, there are challenges that AI brings. I would like to emphasize first, the privacy. I'm sure you would guess with this one.
 
Commercial side and politicians as well are using the data and profiting from it a lot. This is one thing that we're concerned of as a society. The second thing, employers, and skills may come obsolete. Some jobs may disappear, however, the same time, some jobs may be created. This is a paradox of technology, I guess.
 
The third one, more long-term, moral perspective, who would be responsible for AI. Who holds a moral perspective and ethical consideration for that? So now let's underline the role of the government and decide what should we do to answer these challenges.
 
So, in terms of privacy, I would like to emphasize the general data regulations that you mentioned and even though we're not part of the EU yet, hopefully, I think it will be pretty open for that because we really respect people's privacy and I think it should be harmonized. So, the role of the government in this stage is to have very specific legal framework. So, we as a society knows that our rights are safe and respected.
 
On the other hand, some jobs and skills become obsolete, what government can do and what our agency at this point is doing is we're providing free I.T. courses in order to prepare our society to have knowledge of programming languages and to be really ready for this technological breakthrough that's happening right now.
 
At the same time, we have a new program called innovation agents, actually, that go to SME, small and medium enterprises, and try to find the gaps, how they can be digitalized, where innovation can help to increase the effectiveness and efficiency of those firms. So, we're really promoting the innovation and technology.
 
At the same time, we're promoting the formation of -- of the communities of artificial intelligence, machine learning communities, we're also -- the main priority is investment in high-tech startups so we're really encouraging artificial intelligence to develop and be invested in. So, this is really one of our main priorities.
 
So, the role of the government as far as I see is to have enough initiatives in order to prepare the society, the employees to be ready for the technological breakthrough. And to ensure -- and to talk more on strategic perspective and moral global perspective, education itself needs to be modernized. In order for new generations to get the right information and the right times through the right medium and that medium is really important. I would like to emphasize.
 
So, in a neutral, AI could act as an extension that could help humans to unleash their potential and will be better than artificial intelligence on its own. In order for this to happen, etiquette groundwork should exist, and appropriate legal and structural framework should be in place along with the appropriate control mechanisms which I think are really very important. Thank you for your attention.
 
>> MAARIT PALOVIRTA: Government control, educational awareness, very nice points there. Can we get Annette back on if the link is working? Annette, can you hear us now. You're unmuted. Oh, we still can't hear you. Maybe we have a problem with the speakers here? Annette, could you try again? No, unfortunately, it's not working. Sorry about that the technical glitch. In the meanwhile, we're going to move on with the panel so that we're not going to run out of time.
 
So next, Clara Sommier from Google. How about the business perspective?
 
>> CLARA SOMMIER: Thank you very much. So, yes, I'm going to share with you briefly how we see AI relating at Google and how we're talking and thinking about the impact of the future of work and how it relates back to ethics. AI and machine learning has incredible potential. Because AI as we see it is a way of making sense through messy work. Through machine learning, we can see patterns of data that's harder to see and find more efficient solutions for societies. With that being SALD, we need to be very thoughtful as was just discuss ed about the impact that it could have on everybody's life and this is something we really tried to consider very thoughtfully. We're trying to see if there can be a broad set of principles that can be integrated when we're looking at AI. And we're already doing so.
 
The way we're doing that, the engineers developing AI are going through training on fairness to make sure it's something to consider. We're also trying to see if there are ways to analyze the data sets that they're using to develop the machine learning to make sure there are no biases in those developments. Because we shouldn't forget that AI is made by humans so there's a way of training them and making sure it's done in an appropriate way. And if you want to find more information about all those ongoing efforts that we're doing, I invite you to go on our website of the initiative. It's people and AI research project where we try to put back the human at the center of AI. And we hope to have more resources and research coming up soon to back that up.
 
Then switching to the future of work, very important topic. It has been said, it's very uncertain what will happen. Probably some jobs have changed as they have already done in the past. We don't expect one category of dog to fully disappear. Maybe some redundant tasks, but we see it as potential. We know there are millions of people who hate their jobs. And rightly so. And maybe there's a way for us as well to improve that. Through AI, maybe there will be also future jobs that will appear. There was a very interesting study by the center for the future of work trying to identify what could be the next 20 jobs that we are not thinking of? One of them will be human and machine team manager. Maybe we will need someone, a human being, to be able to coordinate the work of the machine with the work of the humans. They will still complete the job. And it sounds silly now. But 20 years ago, I wouldn't have thought that -- it could be a job, creating a filter for pictures to have cat ears or angel wings to someone that actually that would be a job. So, we don't know what the future will bring.
 
That being said, and coming back to ethics, a very important point, if we want to ensure this transition is to make sure that it will be inclusive and that everybody, as was mentioned by the commission, rightly so, that nobody will be left behind by this. And this is something where we're trying also to work on. How do we do that? We're trying to invest in digital skills because we know that those jobs will require different skills. We've already trained 5 million persons in the last four years in Europe, the Middle East, and Africa. And we've committed to help 1 million Europeans find employment or grow their business by 2020.
 
That being said, I also want to stress one point, when we're talking about the skills that we'll need for the job of the future, we shouldn't underestimate the human dimension of it. We know that creativity will still be a big part of what we'll need. There is even a study, I can see the job that would acquire those skills whereas now it's around 37%. So, there will still be a big role to play for humans. And the last point I wanted to mention, when we're talking about skims and how to prepare for this future to make sure that nobody will be left behind, is the reform of the education system. And that's been mentioned. It's also a topic that we've been working on. As you're saying, we're in a situation where we're learning and working, and we'll be moving to more continuous learning, probably your life will look more like learn, work, learn, work, and repeat. This is something that we should be ready for if we want everybody to have a chance.
 
I'll finish on a more positive note. AI will be disruptive. We'll know it. But AI can help to solve big problems. That's what AI is good for. And maybe that's where you can help us as well. AI may be part of the solution to address better the future of work.
 
>> MAARIT PALOVIRTA: Thank you, I'm hearing optimistic. We need to get a look at the bigger picture of the important tools and also to find ways to inject the human contact and have the human contact in the artificial intelligence picture. Next, I would suggest that we leave Annette to the end.
 
>> ANNETTE MUEHLBERG: No, please.
 
>> MAARIT PALOVIRTA: If she can make her intervention. In the meanwhile, I'd like to invite Leena from the electronic frontier foundation, please?
 
>> LEENA ROMPPAINEN: Hello. I'm from electronic frontier Finland. We're taking the name from the electronic frontier foundation because now we have to clarify it's Finland, not the foundation. But, yes, predicting things is very hard, especially predicting the future. Preparing for this talk also made me realize that I most certainly am not aware how much Al go rhythmic decision making is already affecting our world and how rapid progress we're making in the field of AI and robotics.
 
We have robots guiding robots in factories, self-guiding cars going through testing, and humans being humans, we have sex-bots, a lot of things in the world. However, I'm an avid fan of science fiction. So, these topics, of course, take me the science fiction route. There we have lots of different directions where it can go. Maybe our technical community can come up with the three laws of robotics. A robot may not injure a human being or through inaction allow a human being to come to harm, robots must obey the orders given by human beings unless it's contradiction with the first law. And the third law cannot conflict with the first and second laws.
 
Sometimes people become godlike allowing the human beings to retain the illusion of self-control while AI is guiding us to a better future behind the scenes. And various dystopias where AIs or robots get out of control threatening human survival. The terminator movies, they will definitely hit that kill switch.
 
AIs will not be perfect. Put in the wrong data, you will get wrong answers. Or as it's put succinctly, garbage in, garbage out for using algorithms in AI to strengthen human rights, I call your attention upon the Toronto declaration mentioned earlier which was announced a few weeks ago.
 
Then I ask all of you to think about your worlds. How many of you have some elements in your world that could be easily replaced by AI or robots? What would that mean for your work? I work in internal I.T. There are quite a few things that AI would be useful, and we are already to some extent looking at ways to improve our work by taking the repetitive tasks, the simple tasks away. What this means if we get rid of some easy work, that leaves the hard stuff for us. And at least for me, I have not realized I cannot work very effectively if I need to do a full day with a brain churning all the way with 100% efficiency. The body does not take it.
 
So, the real question becomes, to what extent will AI and robots replace humans or allow us -- or allow us to focus on the more present aspects in our lives or work less? Or we will have mass employment leading transformers and the police state keeping the citizens under control. Can we have pleasant lives. I'm not really providing any answers here. Squandering the questions just made me ask more questions. But perhaps together, we can come up with some answers. And we must also remember that whatever comes, there will be a lot of unintended consequences and unexpected consequences which we do not manage to think about with the development of robotics.
 
>> MAARIT PALOVIRTA: Thank you, Leena. I'm sorry from the Finland. I'm from Finland. I should have known better. And Christian, you gave some comments earlier. But, please, your views, Christian Djeffal?
 
>> CHRISTIAN DJEFFAL: Thank you so much. I'm from the Humboldt institute for internet and society. I have already heard so many things that have been mentioned already, like the gentleman from Portugal. And this is also a key point for me. It's a Disneyland at the moment for scientists and academics because we're thinking so much about the future and trying to understand. You raised an important point in saying that during industry, we had a lot of assumptions about the future of work that proved to be exactly the opposite. So, my point would be to keep that in mind, to think about what will happen in 20 years. But also, to think about what is happening now. I can tell you from my research work that actually things are changing changes are happening now. The same thing at a conference at MIT. We need to address some of the things that we see now.
 
From an academic perspective, we need to talk about social justice. We have a lot of efficiency gains, increased efficiencies, this poses problems of the distribution of wealth which we need to focus on. The robo-text was one idea but maybe not the only way and of solving some problems, concrete examples from my work, the things I look at, if you look at the geek economy and platforms, that use single workers, you can see a strategy, especially by certain medium and middle sized companies to train their algorithm in very less developed countries with not much regulation and the people are used to training these algorithms and then they don't participate in the economic benefits of the training. And I think this is a huge problem. The international labor organization just starts to -- to see this problem. But these kinds of problems, we have to -- we have to look at.
 
I talk to people who are laid off. I also talk to people -- this is also maybe underestimated if we talk about the workforce, the down-skilling. If AI takes certain components of my job, maybe I get paid less. This is what's happening in the German administration or has happened in the German administration right now. So, I get -- I get paid less and I think these are the side effects that we really need to -- need to focus on.
 
So, in conclusion, I think it's very important to think about how we address those issues and the first step that's already being mentioned from the people on the panel, the first step that is really important is to speak openly about it and to I think also from my side, it's important to stress the -- the opportunities AI offers but as the organizations on the panel, we really need to keep an eye on what is happening, especially to the -- to the workforce in that regard. And I salute the strategy of the commission in the communication because it was for me the first document that really, in the detailed fashion, explained it. Defined it is a problem and didn't try to solve it all at once. But I looked at it as a problem.
 
As the -- as the psychologist downstairs over there said, I think we need many -- many perspectives on this, many insights. We need to keep looking at it. But one thing I would like to stress is not to put a single narrative on that, and not to conceive AI simply as a job killer. You mentioned, and I mentioned that many things for the wellbeing for workers or for workers' rights where we could actively use AI in order to make life better and we could do it now. So, keep an eye on 20 years, but in a way, develop a -- an idea of AI that fits our purposes and that doesn't -- as a narrative, create some automated victims to this development, which is actually what happened with coal and steel and the European union. It was reframed, it was taken as a proxy to work together and if we can manage to take a step in this direction, I think this will be really great. Thank you.
 
>> MAARIT PALOVIRTA: I'm told we have Annette. We have Annette by video.
 
>> ANNETTE MUEHLBERG: Hello, everybody. Thank you.
 
>> ANNETTE MUEHLBERG: You have a video? Would you like to play it, perhaps?
 
>> ANNETTE MUEHLBERG: Yes, of course.
 
>> MAARIT PALOVIRTA: We see your video now.
 
>> ANNETTE MUEHLBERG: I hope I can give you some information from this discussion. I'm the head of the project group at digitalization project group in Berlin. We look at artificial intelligence, there's potential in getting rid of dangerous, physically straining, and boring jobs and making convenient services available for everybody, as well as contributing to the common good. We like to use technology to empower us and avoid mistakes.
 
But there are challenges for democracy and the working world if we want to fulfill the values and guiding principles. I assume all of us citizens and workers want to live freely, independently, and with dignity. No one should check their basic rights at the door of their work place, be it in the corporate, freelance, or on-line work environment. We want to be treated fairly as individuals. Contradicts our morals and our democratic structures. If you're accused of bad work or a crime, we want explanation and proof. We have a right to object, complain, and if necessary, pursue if we feel we're being treated unfairly. Innocent until proven guilty are very important principles of our society.
 
To sum it up, how we live, and work should be our decision and not a matter of scoring. If we were to accept that a black box structure becomes the norm for our behavior, our decision making and control mechanisms, that would turn us into servants of false dignity.
 
We'd rather be free citizens and employees. This includes, in those cases, where AI becomes part of fundamental services of society, the data sets, and algorithms used must be accountable. Data records must not be personalized. They're not just machine data, but also data of the employee to operate them. Especially in times of the on-line working world, the protection of employee data is of great importance. They must therefore be supplemented by employee data protection law. Many countries have already specifically had regulations for the world of work. Generally, does include co-determination rights that take effect as soon as the technology is to be introduced that can be used to monitor performance and behavior.
 
In order to sure that the technology is still co-determinable, we need the transparency, the traceability of the algorithms and work. The digital transformation of the working world partly changes and replaces areas of work with technology. Here, it's necessary as well as new forms of further education and qualification. Of course, skills already required should not be lost through the use of AI. We need the human's quantifications for control and also as a backup if technology fails. And another thing, there are already good examples and collective bargaining agreements, how rationalization gains can benefit the employees, for example, by the reduction of the working time. But society as a whole, it is especially important that large companies also pay their share of taxes so that the state is in a position to provide services of general interest and support people in need, not only privacy and determinability by design but also text by design has to be part of the program specifications. We should also provide an incentive to the producer to ensure that the risks are minimized. This would not be the case if the machine were assigned a legal personality. An employee should certainly not be taken hostage for unresolved liability problems according to the model of the autonomously drive attack against the driver so if something goes wrong, it's the driver who is liable.
 
Work like those often on on-line platforms should not lead to occupational safety and labor law must continue to imply. Employers cannot be allowed to deny responsibility by pointing to only serving them into immediate area role. Anyone like you who has the power to distribute jobs and ban employees from the platform has to have the legal responsibilities of an employer.
 
A system of termination and probation and possible arbitrary treatment quickly leads to burnout and mental illness, continuous automated control is no good. What we want is a creative, dedicated, self-determined, and social partnership-based work. Let me close in the spirit, let us not fall into self-imposed immaturity but use artificial intelligence for the empowerment of workers and citizens that there will be. Thank you.
 
>> MAARIT PALOVIRTA: Thank you, Annette. If we could keep Annette on the wall live in case there are questions in the room for her. Thank you very much for your words. I had very interesting things there that we probably didn't hear before. So, things like safety, liability, also privacy, which was mentioned before that are key aspects that also capture the ethics and the future of work issues.
 
I would now like to turn the discussion to the audience. For questions, please?
 
>> AUDIENCE: Not really a question as much as a comment. More like an issue I would like to raise with respect to privacy and artificial intelligence is like if a machine learns the data about me that I didn't know myself, who does the data belong to? Or how we control that? And this is something that probably is very precise that bothers me. To elaborate more, like if I -- I'm willing to tell Netflix the movies I like, but maybe I'm not really sure what my movie taste is, so if Netflix starts knowing my movie taste better than me, how do we control that?
 
>> MAARIT PALOVIRTA: Very good? Any other questions? We can take a few, please?
 
>> AUDIENCE: I'm Veronica from digital cities in Romania. I'm concerned with these issues. The thing is, I'm a little afraid when we have this discussion. We get in a little bit of paranoia. And this -- I know it's not new. You already said it. But these are the news that make the headlines. And this is the thing that I'm concerned, the way we communicate about new technologies. And going back to the question, if it is one very important views of talking about AI ethics, it's about the process gaining trust and highlighting that part because it's more than just skills. Yeah, we don't have skills and we cannot wait to reform the educational system. That would take decades. It's about acting now and acting with multi- stake holder approach. And with all of the other social partners as well. So, the point is, how can we avoid these? How can we make sure that this will not stop the investments and research? Because related to this couple of weeks ago, it was the news that in Germany, an employee, a personal assistant was banned because it was offered for young people, for children, actually, but they were also recording everything that was happening. It was also connected to the on-line platforms. So, all of these things are new things, but we start to bend them because we don't understand them as well.
 
>> MAARIT PALOVIRTA: Yes. So, we have two aspects here. One related to building on one's personal data and that potentially ending up in the kind of AI ecosystem. And we have building trust, how can we work with that. A third one there? And then I'll turn it back to our speakers.
 
>> AUDIENCE: Yeah, I think it's reasonable to also consider the Fa KT that you might want to create restrictions on this if we cannot control it. So, I think it's also very reasonable not to think only that, of course, we want to go to innovation and to all of this. But I think Europe actually is the only actor today that might bring this responsibility part into the discussion. And it's something that not many countries are looking at
 
And also, my question is about GDPR and right to explanation. Do you think that the fact that there are few articles in GDPR that they're considering to be creating a right to explanation with an automatic decision by a machine will prevent the application of deep learning, especially in Europe or not?
 
>> MAARIT PALOVIRTA: So, let's turn it back. Withe have the kind of GDPR-related question here. If there's a contradiction there. And then also, how do we build trust by perhaps adding transparency and other things? Olivier, would you like to comment?
 
>> OLIVIER BRINGER: Yes, for me. I think the last intervention replied to the first intervention. There is a provision, there are provisions in GDPR which say that people need to be formed when their personal data are being automatically processed. So, first, that's the first thing. You need to know that there's a machine that is processing your personal data.
 
And secondly, you have a possibility to opt out. So, you should have the possibility to tell Netflix or whoever possesses your data to provide you with a service that you don't want to take advantage of that service. So, we will have to see how we implement that. But the point is -- the point is covered there.
 
And this is the whole reflection on transparency of algorithm. And this links to the issue of control. Making sure that at the end, there is a human decision for important -- for important aspects. And that was said by several people. The trade union colleague said it. It's a very important aspect. We will see how we can enforce -- enforce. That's a very important aspect of trust. You should -- if you're not able to tell people that it's not the machine that's going to take the decision but it's going to be a human who's accountable, then you will be in trouble.
 
On the time it takes, yes, it will take time. Yes, you cannot reform the education system quickly. You cannot change the legal framework quickly either. So, this definitely will take time. We will see there's an urgency we can always -- always intervene. But in the past, we were seen -- so it's dealing with digital issues. All of the geeks tell us about artificial intelligence and all of these things. It's not really -- they are a bit crazy. It's not really important. Now we're really at the center of the game. So, you will see in the next project in European union, digital skills will be very high on the agenda. But then the implementation will definitely take some time.
 
>> MAARIT PALOVIRTA: Did anyone want to comment on these two initial questions from the panel? Clara?
 
>> CLARA SOMMIER: Thank you. I would like to go back to the question of trust. I absolutely agree we need to explain more and ensure that the humans are the ones that are controlling. They're the ones designing it and they're the ones that are fixing the objective. They're the ones using the data to build upon the machine learning. They're always in control.
 
I also believe that we have to be very careful in all of the first applications -- very visible applications of AI that we're making. Because we still have to prove it's an efficient technology and that it can really impact society in a positive way. If we get it wrong from the start, people will immediately mistrust it. And I want to come back to a point that was made earlier on why are we calling it AI? It was a great point? Is it really an intelligence? It's only a way to process data. Actually, artificial intelligence is machine learning and changing the vocabulary does not seem like an artifact. I think it would rather show what the reality behind it is.
 
>> MAARIT PALOVIRTA: Thank you. Excellent point. Annette, you wanted to comment as well. Please?
 
>> ANNETTE MUEHLBERG: I would like to point out that we already have the problems that -- really not under control. Humans -- for example, let's take a really easy example. The customs in the Humboldt #245I check the ships if there's illegal stuff there. There's an automated program and those people who have a lot of experience on how to check the ships, now they just sit in front of the computer and they get an order. You have to go there, you have to go there. If they think it's wrong, it takes two hours to oppose. If they oppose and if there's a mistake, the algorithm is the real measurement. They're in real trouble to argue what they think is important. And this is the point where they actually do have the chance to say something. But they have no possibility to integrate their knowledge in shaping the algorithm. They have they can just follow the orders. And I think this problem will increase every day by implementing more and more of these algorithms. So, we have to find ways to not only explain but also integrate those people who work with those algorithms, that they also can help shape that.
 
>> MAARIT PALOVIRTA: Very good, thank you very much. Raul and Patrick?
 
>> AUDIENCE: I'm from the international society but speaking on my own behalf. Thank you for this interesting debate. So, one thing is about transparency and algorithms, I'm afraid that's in the -- if we're thinking ten years from now, we will not be able to regulate the decision making of the -- of many of those things that are based on artificial intelligence. Because the software applications or whatever, we'll be learning about their own experience. So, their -- it will be very difficult to act before the decisions are taken because the decisions will be taken on the fly. This makes much more relevant the transparency of the algorithms, there's a big change, because when somebody produces good wine today, for example, they -- they are making probably the grapes they are using, the person takes it alcohol in the wine or some other things. And they are ensuring that they are working based on some standards in terms of help and --
 
But they are not giving out all of the secrets because there is a part of the program that is a secret for the manufacturer or for the producer or for who provides the services. In the future, it's changed. Because the near future, the very near future, they will have to share the secrets of what they produce because the decisions that they are -- that artificial intelligence programs will take will affect our life. Somebody mentioned maybe seeing some related examples. So, if a machine will take a decision in surgery, it will be very difficult to act before the decisions are taken, because probably that software application or machine will take half of the decisions during the surgery.
 
So, what I want to know in advance, I want to know how those decisions will be taken, not what the decisions are. So, I want to be sure that the decisions will be taken based on best practices, advances in knowledge, this is what we'll -- I know, need to take the risk of -- of doing the surgery. So, algorithms are very important, a big change in the priorities of how things are done.
 
Second comment. Sorry for the long intervention. I was talking to my colleague who reminded me yesterday about this. We talked many times about the skills that are needed. And we repeat that -- we need to develop the new skills and -- but we don't -- we don't talk about what the skills are. To be honest, I don't know what the skills are. What are the skills that people will need in ten years? But what I can say is whatever needs the people need now. And I think we need to do it against. One is to train people on complication and thinking. Not only with the high schooler or high school level, it's for the workers. We need to train the workers from this thinking. In two or three years, the people working with that machine, we have to work with a different device and they have to understand how the devices work, how the devices think in order to interact with them. This is urgent.
 
The second thing is we need to make a call -- we need to make a call to all governments and states around the world to urgently review their integration systems and take into consideration for things. I know it would take many years, as soon as we start, we have results. Sorry for the long intervention. Thank you.
 
>> RINALIA ABDUL RAHIM: Thank you, Raul, for your thoughts.
 
>> MAARIT PALOVIRTA: So, let's take a few questions here. I think --
 
>> AUDIENCE: I think when he speaks about artificial intelligence, I think our imagination starts flowing, yeah? And at the same time, we are not really clear about the facts. Anyone knows how many postings Facebook has taken down in the last quarter. It's in the transparency report, 860 million postings have been taken down in a three-month period. I can ask Google how many videos have been taken down in the last quarter on YouTube. And how much of that has been done through artificial intelligence or through algorithms or self-learning algorithms. Facebook claims that 99% of terrorist propaganda has been taken down. Before any posting on this matter has taken place and all of this is done through artificial intelligence. So, we're talking about what is already currently happening right now. What is human intervention and what is no human intervention. Out of these postings, very little human intervention involved, righteously so, we have machines that can do it for us.
 
Not -- I wanted to react to the psychological aspect of it. Can you image up looking at violent content for workers throughout the days as being strong criticism about exposing workers through many of the content that's being produced and posted. So, we really have to balance workers' rights and know what it brings for the workers and what is maybe less positive for them.
 
So, algorithms are there, self-learning algorithms are there. The trust, we need to work on. But I'm not so sure that our internet service providers, the companies, will let us look into how algorithms are fabricated. It's the same thing as asking Coca-Cola to reveal the secret formula of Coca-Cola. I'm sorry. Algorithms are the core of the business. That's the core of the business model. What we have to ensure, I've said that, we have to involve from the start within the design the human rights dimension of what we're developing. We need not to be able to guide every single step that is being done within artificial intelligence, but we need to set the human boundaries of where we want to go, when we developed human -- when we developed cloning possibilities, we didn't say stop all of the cloning, we said, there is a limit. And that limit is human cloning. So, within the development of artificial intelligence, self-learning machines, I'm not so sure that garbage in means garbage out or intelligent input means intelligent input out. Not necessarily. We know we put in, we don't know what comes out. There we have to set somehow the boundaries which are, of course, ethical and philosophical but also legal.
 
>> MAARIT PALOVIRTA: We're officially out of time. But we have a couple of final points here before we hand it over.
 
>> AUDIENCE: Coming back to Annette's video, I think she raised really important topic. The topic of and how much people need to make small and bigger choices in their life -- how much do they need to feel that they are in control of their lives and this is what actually makes them happy at the end of the day. So, this is the comment about Annette's video. And another point to Clara, she has mentioned that Google engineers have trainings on fairness. I think it's a really disturbing phrase, training on fairness because fairness is not a washing machine that you can operate. What kind of fairness are they taught? Is it a utilitarian version? Do they read John Rose or Robert Nozik? Yeah, this is a very specific question. Yeah.
 
>> Any other comments?
 
>> RINALIA ABDUL RAHIM: Remote.
 
>> MAARIT PALOVIRTA: Remote. We have a question from Amalid De Silva. Will AI become big brother. And are we prepared to limit AI development in 2024. And also, just came through. Agree that it is good that artificial experts are reviewing the content for certain harmful types. But once design with human rights considerations, they can enter into account, we also need to be able to test the systems against human rights situations and we need to provide a few mechanisms for those things removed.
 
Also, I think that Annette wants to say something.
 
>> MAARIT PALOVIRTA: Yes, I note that Annette has a final comment. We're out of time here, Annette. If you can keep it to one minute, I'll give you the floor for the last time as you are remote and at a disadvantage. Please, Annette, go ahead. We can't hear you, Annette?
 
>> ANNETTE MUEHLBERG: You cannot hear?
 
>> MAARIT PALOVIRTA: Now we hear you, thank you, go ahead.
 
>> ANNETTE MUEHLBERG: Okay. So, with respect to skills, I think we should address not only workers but the managers -- politicians -- they understand the issue. This is extremely important.
 
Second, someone said algorithms are the core of the business model and therefore they cannot be transparent. I think this should not be for public service. This should not be for necessary service for human kind. I think we have to clarify this and make the distinction between public and private interests
 
And, yes, I just think that I would like to say hello to the psychology lady and say the whole issue of self-determination and the possibility of thinking and acting with her and his own will is substantial. Thank you.
 
>> MAARIT PALOVIRTA: I'm sure our psychology lady says hi, back. So, thank you very much. We're going to wrap up the session now. We have Sue who's been listening very intently. And who will propose the bullet points to finish the session with. Thank you?
 
>> Sue: Since we're out of time and if you disagree with any of the comments read, please raise your hand. AI must be accountable, transparent, mod final, pricing and determinability by design is a must. Unintended and unexpected consequences in development of AI and robotics are unavoidable. There must be an ethical code for algorithm developers. The education system needs to be revamped to prepare future workers with necessary skills to deal with new forms of jobs that AI will bring. Interdisciplinary teams are needed to relieve the burden on engineers and engineers need to be educated on ethics. AI technology needs a common international framework, ethical clearance is not sufficient. I'm just an engineer, it's not an excuse when developing AI. AI is or will become a race expecting adherence to ethical code by developers is not realistic. Is anyone listening?
 
>> RINALIA ABDUL RAHIM: We're listening.
 
>> Sue: We need a kill switch for automated systems. AI can be part of the solution to the problem of the future of work. And I think -- yes. That's it. Thank you.
 
>> MAARIT PALOVIRTA: Do we have comments or objections? I think it sounds fair. No hands? No objections. I would like to thank the panelists for organizing single handedly more or less a great session and the audience who stayed here for the whole 1 1/2 hours. So very interesting topic. Thank you, everybody.
 
[Applause]
 
>> RINALIA ABDUL RAHIM: Thank you for staying until the end. I just wanted to say that it was a pleasure to organize this workshop because I had an excellent team of collaborators and I wanted to make special mention of Ms. Peck from the council of Europe, and for Maarit for the internet society and for those who have been a great help to me. I asked the role players to stay in the room because you didn't have a lot of time to engage them. If you have specific questions and want to talk to them personally, they would be happy to meet you. Thank you so much.
 
 
''This text is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text is not to be distributed or used in any way that may violate copyright law.''
 
 


[[Category:2018]][[Category:Sessions 2018]][[Category:Sessions]][[Category:Human rights 2018]][[Category:Innovation and economic issues 2018]]
[[Category:2018]][[Category:Sessions 2018]][[Category:Sessions]][[Category:Human rights 2018]][[Category:Innovation and economic issues 2018]]

Revision as of 16:45, 2 July 2018

6 June 2018 | 14:00-15:30 | GARDEN HALL | YouTube video
Consolidated programme 2018 overview

Session teaser

What are the ethical and rights considerations in relation to AI and the future of work? Come, find out and contribute!

Keywords

Artificial Intelligence, AI, Ethics, Ethical Considerations, Future of Work, Responsibility of State, European Strategy on AI, Human Rights

Session description

Like the Internet, Artificial Intelligence (AI) has the characteristics of a pervasive and disruptive technology with the potential for transforming society. The challenges and opportunities posed by AI are so great that calls are being made around the world for an ethical framework that would govern the development and use of AI, so that deployment of AI will be safe and beneficial for society as a whole. This session picks up from EuroDIG 2017’s exploration of AI and the Future of Work. Discussions will focus on the following questions:

  • What are the ethical and rights considerations in relation to AI and the future of work?
  • What is the responsibility of the state and what are the state’s ethical considerations?
  • What are the ethical considerations for employers as they deploy AI in work and production processes?
  • Are there ethical considerations that apply to workers themselves as they work with AI?
  • Should robots and algorithms be given legal personality to address liability and taxation issues?
  • What are the rights considerations in all of this?


Background: In 2017, EuroDIG explored the topic of how the digital revolution is changing people’s worklife. Artificial Intelligence is expected to change business models and the jobs landscape for every kind of organization, including the government. While new jobs will be created (including AI-augmented work), jobs will also be displaced. The net effect on society is uncertain. Human beings need to adapt and will require (among others) lifelong learning skills, re-education (including a reconceptualization of education), re-training, entrepreneurial training, and more. Social welfare and security systems that underpin care for society requires new solutions with new sources of financial contributions/support as AI displaces humans in the workforce. Basic income as a means of coping with the impact of jobs displacement is being experimented upon in various countries, but the results do not yet provide clear guidance for other countries. What should Europe do? In 2018, EuroDIG explores the ethical and rights considerations of AI in relation to the future of work and humanity.


Note: Each of the components in the topic is broad and complex. What we want to focus the workshop on is the impact of AI on the Future of Work, the ethical considerations, whether the ethical approach is sufficient, and what it means for various societal stakeholders in terms of responsibility and recourse.

Format

Moderated discussion involving ALL workshop participants/attendees (at the venue and remotely) with key interventions by designated role players to provide perspectives from government, business, workers/trade union and civil society.

Further reading/viewing

Videos:

Debating Europe's live debate in the European Parliament on the impact of AI on the future of jobs on 24 April 2018 http://www.debatingeurope.eu/2018/04/24/live-debate-artificial-intelligence-jobs-threat-opportunity/#.WuyZMci-nou

On Moral Decisions by Autonomous Systems by Virginia Dignum, Associate Professor of Social Artificial Intelligence, Delft University of Technology & Executive Director, Delft Design for Values Institute https://www.youtube.com/watch?v=FeBQXhjGVOg

People

Focal Point

  • Rinalia Abdul Rahim – Workshop Focal Point; Managing Director, Compass Rose Sdn Bhd & Advisory Board Member, Mozilla Foundation


Key Participants

Key Participants are experts that are invited to provide their knowledge during the workshop.


CONTEXT-SETTERS

Vint Cerf, Internet Pioneer (VIA VIDEO) - View on Ethics and Algorithms

Olivier Bringer, Head of Next Generation Internet Unit, DG CONNECT, European Commission - View on European AI Strategy, Relationship with Next Generation Internet, Impact on Future of Work, Ethics and Human Rights implications/considerations

Claudio Lucena, Professor and former Dean of the Law Faculty at Paraiba State University, Brazil & Research Fellow at the Research Center for Future of Law, Portugal - View on AI and Future of Work: Challenges, Ethics and Enforceable Measures


STAKEHOLDER PERSPECTIVES

Annette Muehlberg, Head of the ver.di Digitalisation Project Group, Germany - Workers/Trade Union Perspective on the ethical and rights considerations in relation to AI and its impact on the future of work [VIA VIDEO]

Mariam Sharangia, Chief Specialist of Strategic Development Department, Georgia's Innovation and Technology Agency - Governmental perspective on the state’s responsibility in relation to AI and its impact on the future of work, especially the ethical and rights considerations

Clara Sommier, Public Policy & Government Relations Analyst, Google - Business perspective on the ethical and rights considerations in relation to AI and its impact on the future of work

Leena Romppainen, Chair, Electronic Frontier Finland - Civil Society perspective on the ethical and rights considerations in relation to AI and its impact on the future of work

Christian Djeffal, Project Leader for IoT and eGovernment, Alexander von Humboldt Institute for Internet and Society, Germany - Academic perspective on ethical and rights consideration in relation to AI and its impact on the future of work


Co-Moderators

  • Rinalia Abdul Rahim – Workshop Focal Point; Managing Director, Compass Rose Sdn Bhd & Advisory Board Member, Mozilla Foundation
  • Maarit Palovirta – EuroDIG Subject Matter Expert; Senior Manager, Regional Affairs Europe, Internet Society


Remote Moderator

The Remote Moderator is in charge of facilitating participation via digital channels such as WebEx and social medial (Twitter, facebook). Remote Moderators monitor and moderate the social media channels and the participants via WebEX and forward questions to the session moderator. Please contact the EuroDIG secretariat if you need help to find a Remote Moderator.


Reporter

  • Su Sonia Herring


Organising Team (Org Team)

  • Joelma Almeida – Fundação para a Ciência e a Tecnologia, Portugal
  • Farzaneh Badii – EuroDIG Subject Matter Expert; Executive Director, Internet Governance Project / Research Associate at Georgia Institute of Technology, School of Public Policy
  • Amali de Silva Mitchell – Individual, Civil Society
  • Frédéric Donck – EuroDIG Subject Matter Expert; European Regional Bureau Director, Internet Society
  • Claudio Lucena – Fundação para a Ciência e a Tecnologia, Portugal
  • Fabio Mortari – Fundação para a Ciência e a Tecnologia, Portugal
  • Malgorzata Pek – Council of Europe
  • Maarit Palovirta – EuroDIG Subject Matter Expert; Senior Manager, Regional Affairs Europe, Internet Society
  • Rachel Pollack Ichou – UNESCO
  • Sandro Karumidze – Telecom Business Development, UGT, Georgia
  • Tapani Tarvainen – Electronic Frontier Finland

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

  • Artificial intelligence (AI) must be accountable, transparent, modifiable – privacy, and determinability by design is a must.
  • Unintended and unexpected consequences in the development of AI and robotics are unavoidable.
  • There must be an ethical code for algorithm developers.
  • The education system needs to be revamped to prepare future workers with the necessary skills to deal with the new forms of jobs that AI will bring.
  • Interdisciplinary teams are needed to relieve the burden of engineers and they need to be educated about ethics.
  • AI technology needs a common, international framework; ethical clearance is not sufficient.
  • ‘I’m just an engineer’ is not an excuse when developing AI.
  • AI is or will become a race, expecting adherence to ethical code by developers is not realistic.
  • We need a kill switch for automated systems.
  • AI can be part of the solution to the problem of future of work.

Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/artificial-intelligence-ethics-and-future-work

Video record

https://youtu.be/7S4vU956Q7o

Transcript

Provided by: Caption First, Inc. P.O Box 3066. Monument, CO 80132, Phone: +001-877-825-5234, +001-719-481-9835, www.captionfirst.com


This text is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text is not to be distributed or used in any way that may violate copyright law.


>> Shall we get started? Okay. So, good afternoon. I hope you had a nice lunch. This is Workshop 8 of EuroDIG. It's a workshop focused on artificial intelligence, ethics, and the future of work.

So, if you look at the components in the workshop title, each one can be very broad and extremely complex. This workshop will focus on the interaction, specifically the impact of artificial intelligence on the future of job, on the future of work and the ethical dimensions.

What I'm going to do right now is to explain the flow of the workshop, introduce the role players, and also highlight the key questions that we would like to pose to get your feedback on. Okay?

So, we've divided the workshop into two parts. The first part is context setting. You will hear from Vint Cerf. He's the world's leading internet pioneer. He will speak on ethics and algorithms. And you will hear from -- the photographer is distracting me. You will hear from Olivier Bringer who's the European commission, the head of the next generation internet unit. He's going to talk about the European AI strategy which just came about. How many people have read it? One, two, three, okay, a good time to promote it, actually, all of you, yes. And in the strategy itself, he will talk about the strategy, the impact on the future of work, and the ethical consideration. And then Claudio Lucena. He's a professor of law in Brazil. Now a research fellow for the Center of the Future of Law in Portugal.

After that, I'm going to come to you for a discussion. In the discussion, I'll tell you the questions right now, because I'd like for you to think about it, it's the ethical approach, the right approach, to deal with the opportunities and challenges posed by AI and its impact on the future of work. If it is the right approach, is it alone sufficient? What else does it need? And are there other alternatives and what are the gaps? So, these are the input the we'd like to get to you from the discussion? We'll have about 15 to 20 minutes of discussion. If you have questions, we'll address that as well.

After that, we'll move to the second part of the workshop moderated by my colleague, Maarit Palovirta, Internet Society. There we'll have perspectives from stakeholders. We'll hear from Annette Muehlberg, she's here via remote and she's from Germany. After that, you'll hear about the role of the state from Mariam Sharangia from the Georgian Innovation and Technology Agency and to the business perspective from Clara Sommier from Google and then to Leena Romppainen from Finland and she'll give us the civil society perspective. Then we'll go to Christian Djeffal, where are you, there? You'll provide an internet perspective.

A well-rounded view. And then Maarit will pose the question what is the role and the responsibility of state when it comes to A.I. and the impact on the future of work and what is the experience of employer and employees, the public sector, the private sector as well as civil society. Okay, I'm going to introduce a key people. This is Maria, she's our timekeeper and very important to the role players and to the moderators because she's going to tell us how much time we have left for our allocated time. Please pay attention to her. We have Sonya Heading. She's going to come up at the end and do a summary on what she thinks could be the consensus on the room on the questions that will have been posed. And I have a remote moderator, Pedro over there who will channel people and people participating on-line. Do we have people on-line? We have one. Hopefully more will come in. Let's start with Vint Cerf on ethics and algorithms.

>> VINT CERF: Hello, I'm Chief Internet Evangelist and sometimes known as one of the fathers of the internet. I've been given a list of pretty interesting questions about artificial intelligence and ethics. Let me read them to you so we can get an idea of what's before us in this discussion.

What are the ethical considerations in relation to AI and the future of work? What is the responsibility of the state and what are the state's ethical considerations? What are the ethical considerations for employers as they deploy AI in work and in production processes. Are there ethical considerations that apply to workers themselves as they work with AI? Should robots and algorithms be given legal personality to address liability and taxation issues? And finally, what are the rights and considerations in all of this?

Wow, that's quite a list of questions and I suspect in the short video like this, I will not succeed in getting to all of them. Let me take one of them, particularly, first and that's the idea of creating false personalities like corporations being treated as people. I think that's very premature at this point. For all practical purposes, computer programs are not people, robots are not people, algorithms are not people. And I don't think we should treat them that way. However, those algorithms are used by people. And we can speak about the ethics of using those algorithms and using those programs in the conduct of business or perhaps just in the conduct of everyday life.

So, let's look at this from the tool point of view. Let's ask ourselves what should we be thinking when we apply the artificial intelligence or ma sheep-learning algorithms to accomplish a task. How much autonomy do we want to give to that software when it produces a result. Should we leave the software to make decisions alone without human intervention? For example, should this prisoner be allowed to obtain parole? Should this person be allowed to get a mortgage or a loan? Should this person be allowed to enter college? There are a whole series of questions like this that sometimes are addressed by gathering a great deal of statistical data and trying to boil that down into some kind of a predictive algorithm, possibly a machine learning algorithm, that will produce an outcome that will guide our decisions.

What I think is unethical is to use the mechanism solely and alone. The machines are powerful, the software that drives them, helping us to analyze data to look at outcomes and conditions over perhaps a period of time to try to understand what we can see statistically. But I think when we're making decisions about people and their lives, we have to remember that they aren't statistics, they're people. And we need to keep that in mind.

I think anyone developing artificial intelligence algorithms has a moral responsibility to be thoughtful about the kind of information used to teach the program or a machine running a multi-layer neural network and be very conscious of the potential of bias and error. We found some of the algorithms are quite brittle. They will work very well for 95% of the cases and in a few cases, they'll make really, really bad decisions and come to the wrong conclusions. We need to be thoughtful and sensitive to that possibility.

We should always be asking OUFRSs, what's the rationale for accepting the output of a particular algorithm, is there anything else we can do in order to determine whether or not the proposed outcome is, in fact, accurate and appropriate?

Judgment is an important element here. I don't believe we can expect our computer algorithms to have the kind of human judgment based on experience, for example. And knowledge of -- of human beings and their behavior. So, in the long run, I believe, all of these tools have the potential to be very, very powerful in helping us marshal the information to inform decisions. But I think it has to be done in an ethical context in which we take care to apply human judgment to important decisions where it will affect the lives of our fellow citizens on planet earth.

I'm sorry I can't join you for this conference. And this is such a rich and important topic that it deserves a great deal of attention. But I do hope, perhaps, that our paths will cross on the net.

>> RINALIA ABDUL RAHIM: So that was Vint. We thank him very much for the message which he did as a favor because I asked him to do it. Now we're going to hear from Olivier Bringer. What I heard from Vint is a call for an ethical code for developers of algorithms. Please?

>> OLIVIER BRINGER: Do you hear me? Yes. I'm from the European commission. I would like to say a few words about the strategy we have adopted in April of this year. I would start by saying that okay, today we'll discuss the ethical issues and the impact on jobs. But the starting point, I think, and this is the key idea of our strategy, is that artificial intelligence is going to bring huge benefits. So, it's going to bring benefits in healthcare, safer diagnoses, help to surgeons, for example. It's going to bring benefits in transport. Today, 90% of accidents are due to human error. So autonomous vehicles are going to make our roads safer. It's going to bring benefits to anticipate cyberattacks, etc., etc., etc.

We should keep that in mind. This is why in Europe we want to invest in artificial intelligence, we want to invest in the technologies. We want to invest in the capacities, high-performing computing, etc., that will support artificial intelligence. And we want to make sure that the user, the -- the users will use artificial intelligence. So, today, 99% of companies in Europe are SMEs, only one out of five is highly digitized. So, there's a real work to make sure that these companies use artificial intelligence to our benefit in every sector. So, we're going to invest also in that, giving access to artificial intelligence to algorithm data, etc.

So that's for the -- that's for the investment part. We are very, very ambitious. The second part is the impact on jobs indeed. I would say -- I looked through a few studies that my colleagues or experts gave me. The jury is still out on the exact impact of artificial intelligence on the job market. I mean, some studies say it will have a huge impact basically more than half of jobs are -- can be replaced, some studies say 10%. Some studies say the net effect is nil. But in any case, what is certain is it will have an effect. So, some jobs will be replaced. And every job is going to be transformed. So, we need to equip our workforce with the right skills to manage this transition. And it goes from getting the basic digital skills. So today 1/3 of Europeans do not have basic digital skill. That's a real issue. How can they deal with an artificial intelligence agent in the future? We need to make sure they get the basic skills. We also need to make sure that we have the engineers, AI, deep learning specialists who will be able to develop these technologies. And we have to help people in the transition. If your job is clanged because of artificial intelligence, your company, but above all, the states, has a duty, I think, to -- to help you manage the transition. While it's happening.

So that's the second aspect. We are also, we'll have a number of instruments to help do that. But, of course, a lot of it will be done by the member-states themselves who are responsible for the education systems.

Then the third aspect of the strategy, the third pillar, is about the -- the ethical and regulatory aspects. So, we are -- our view is that these developments like any developments, if you speak about the next generation internet blockchain, etc., each should follow our values. We're not going to give up our values. We're not going to go for a race to the bottom. That's very clear. So, we need to see how we can incorporate the values into the development of artificial intelligence. Luckily, we have already a solid framework, we have GDPR, we have a solid sign security framework, we have laws, laws about product liability and safety. They are there. We can build on them, some of them. And we need to explain a bit more how they apply in a world where we are surrounded by artificial systems and the internet of things. We're doing that, for example, for the product safety -- the product liability directive.

And then there are novel issues, issues that Vint phrased -- the issue of the level of human control. That -- that's very important. At which point, yes, my time is up, I'm going to wrap up. How to make sure that, you know, decisions are not made by the internal system on their own. How to make sure important decisions are still made by humans. How to make sure the decisions of artificial intelligence agents are explainable, they're transparent. Otherwise, if people if they don't understand, they will never trust these systems. So, we need to find a way to do that. This will involve research work

And then, of course, we need to make sure to avoid the etc.

We have set up an alliance, a multi- stake holder approach to advise us on these ethical aspect s and they will issue the report at the end of the year.

And the very last point, we have signed the declaration of -- 26 member-states, not us, have signed a declaration and engage together with us with the European union in all of these aspects. So, in the investment, in managing the job market transition, in ethical aspects. That's very important. We go together to address the big challenge.

>> Thank you, as I understand it, the European AI strategy exercises an ethical framework as well as a legal framework that complement each other?

>> VINT CERF: We have a -- do you hear me? We have a legal framework already? We will see if we need to complement or explain it. That might also be self-regulatory approaches. That we will see.

>> RINALIA ABDUL RAHIM: Okay.

>> VINT CERF: And the report by the AI alliance will be important in forming our decisions.

>> RINALIA ABDUL RAHIM: Okay. Claudio, what's the solution?

>> CLAUDIO LUCENA: Thank you, Rinalia? Do we have a presentation ready over there? Is anyone there? All right, thank you very much.

So, we have a couple of initiatives already in that sense. Back in the '50s when this conference convened and coined the term "artificial intelligence," there was not much of a policy fuss about it, partly because the idea was that it could be -- most of the problems could be solved as you see there with a selective number of researchers over the period of summer. That time frame didn't go exactly as planned. And part of the fact might have been because back then, they lacked the engines, the few, and the network of vehicles.

Now, fast forward that to the second half of our decade. Now everything is there. We've got everything. The engines are there. The fuel is here. And also, the network of vehicles is here. And, yes, their sensors, they know things, that makes a difference.

So, when we're ready to scale things to our everyday life, how do we define the challenges? Well, from the Harvard Brookman Kline institute has done a good job in defining the challenges in a blog post when he opened the initiative. Some of which I'm bringing to you. One is looking at AI as a set of monolithic techniques as if they could be streamlined through the whole plethora of human activities. But yet segment them.

Another one is that it's an important instrument but it's not the only one. We must use other governance tools. We might feel the need to recode the rule of law. This is something interesting that he brings also. One last thing from the blog post, there are others, is that responses might vary across sectors and jurisdictions.

Two things are coming from our project in Portugal where I'm part of the group where I research AI and inclusion, we have to take care and make sure that the data sets have quality and integrity so that the benefits they can bring can also favor inclusion.

What are the initiatives so far? As you have heard, one of them is national strategies. That was the case with the white house, that was the case with the UK, the European union also has -- more complex environment because there was a strategy, Olivier talked about it, there are logistic efforts in that. Another example, Brazil, the jurisdiction of my origin has also just launched an internet, internet strategy national plan.

The initiatives come in other forms too. And ethically aligned or driven initiatives which is the case of the Montreal declaration and the initiative run by the institute of electrical and electronic engineering, ethically aligned design, a good work in progress addressing ethic challenges. Right now, three weeks ago, we had in the Toronto declaration, which focuses not in hard national law, neither in ethically-only alliance and design, but in human rights as the basis out of which we should look at the responsible development of artificial technologies.

The regulatory efforts have already been discussed. Now in the United States, a couple of other models are in discussion, federal search, robotics commission, interagency cooperation of many kinds. From France, we take development from 2016, which implements a mechanism that is now also in article 22 of the GDPR. I've been told by my French colleagues that there were not many developments then. I think we might have more once this mechanism is reflected also in the GDPI.

So, to end, these technologies, they give us so much potential that at times, we might be taken to imagine that they are looking at the future. Well, the training data sets that they use, most models of neural networks use, are data that already exist. So, in the end of the day, the best option, all they're doing is looking at the past. We have to look at the future. This is a decision that we have to make.

The World Bank says we're going as society and as mankind, we're doing well in developing digital technologies, very well. And we're doing very bad in sharing the results of those developments. That shouldn't happen. We have good standards. We do have good standards. But this is the way we're moving.

Artificial technologies -- artificial technologies and mechanisms have the potential and I will leave it here -- have the potential to contribute to one of the greatest tools for inclusion in sharing -- in sharing of power and influence. It might be the best opportunity of mankind in that sense. But it also can drive us to the path of the worst concentration of power ever. This applies to any view, but also to the future of work. We should decide. It moves too fast. We can't think too much for too long. We have to decide, are we going for concentration or are we going for sharing, thank you very much?

>> RINALIA ABDUL RAHIM: Thank you.

[ Applause ]

>> RINALIA ABDUL RAHIM: So that was a lot of food of thought. You heard the call for responsible development and responsible use and responsible governance. You heard the impact on the future of work and the importance of the rule of law, the code of the law itself, but also the need for other governance tool. That's where the ethical framework comes in. And that human beings need to make up its mind in terms of what kind of future and development that it wants. So now I come to you because we'd like to hear from you in terms of what you think. Yes, please? There's a microphone right behind you?

>> Okay, thank you. Patrick Pennings from the Information Society Department from the Council of Europe and, in fact, our Council of Europe's Commission of Ministers decided artificial intelligence was to become one of the key features of the organization in terms of research.

I could start with saying technological development, some say, will never be as slow again as it is today. We have to count on a rapid progression in certain areas. Now artificial intelligence is already bringing the daily results to our lives. We may not know about it, but in quite a number of thing, like urban planning, traffic planning, so on, there's already quite a bit of artificial intelligence involved. Is it sufficient to solely have ethical clearance of how we are going to make use of artificial intelligence? I don't think so. As a human rights-based organization, I think we need to look at human rights input from the start by design of algorithms, self-learning algorithms, self-learning machines, autonomous machines. I think we need to be clear on that. That is that if we want to create a legal space which is predictive -- and we're already working on quite a number of things within the organization, for example, predictive legislation or predictive justice. All of this is already in place. So, we need to make sure that we have a common legal framework in which the developments take place.

I give an example -- for example, when we were discussing biomedical developments, it's not sufficient to have just ethical clearance. We also need to make in that case the convention, which is an international treaty which puts everyone on the same line. That doesn't mean that we don't have to rethink the rule of law and how we deal with it in different jurisdictions but that's clear that's what needs to happen, thank you.

>> RINALIA ABDUL RAHIM: You started your comment about speed. With the development of a treaty, that's going to take a few years, do you think we can tolerate that? Just a quick answer? Can we afford it?

>> AUDIENCE: Can we afford not to spend that time? That's the response to it. For example, within the Counsel of Europe it took six months to develop a convention on lone terrorist fighters, the lone wolves. So, is six months too much?

>> RINALIA ABDUL RAHIM: Six months tolerable.

>> AUDIENCE: The question really is, what if you don't? And I think it's important that in order to suck down the genetic principles, of course you need to take time. I don't think that rushing in to any kind of legislation without involvement of all of the communities, I know they're only speaking about governments, but the technological community, the internet developers who are actually with their hands in the mold, so to say, they have to be involved in this and that's quite clear. We cannot do it, governments cannot do it on their own.

>> RINALIA ABDUL RAHIM: Okay, other views? Yes, please? Please. There is a mic here. And I'll come to you.

>> AUDIENCE: Thanks. Actually, following up to this comment, I think it's very important that we make up our mind about the agents but it's also important to find a way again to bring every country on the table on this, because what we're seeing today is like a race for each country to make use of artificial intelligence and a competitive advantage against other countries. The problem looks like a prisoner in the land mine which everybody seems to benefit from the non- collaborative approach but by not collaborating, you end up with the worst possible outcome which could be devastating. Do you think there's a way to bring countries to collaborate?

>> RINALIA ABDUL RAHIM: Hold on, don't answer yet, note it's for you. Yes?

>> AUDIENCE: Despite being a remote moderator.

>> RINALIA ABDUL RAHIM: You can participate. Speak closer to the microphone.

>> AUDIENCE: A personal question, a personal comment. I would like to think, or I would like to ask to the panel, so when we talk about these issues, and I'll speak as a researcher in artificial intelligence and also a student, that we sometimes software developers aren't really aware of these legal -- even of the legal issues. And the ethical ones are thought in a personal way and sometimes they are no guidelines to these aspects.

So, my question is how to bring together those two points of view. Because even -- well, I've been to the youth day program. We've been preparing the presentation at EuroDIG for two days. The discussion, we were going through artificial intelligence and my question was, so should we put it in the technical or in the legal way? And the immediate answer was, why can't we combine both? So that's my question. Thank you.

>> RINALIA ABDUL RAHIM: Thank you very much? Are there other views?

I know Claudio can handle multiple questions at one time, yes, please? Is there a microphone on the other side?

>> AUDIENCE: Thank you. Actually, I don't have a question. I have just a suggestion. Because honestly, I don't believe that the developer can be kind of a -- you the can't oblige a developer who are different kind of people to oblige to certain rules or maybe even ethical rules. I know what I'm talking about. I know I've been in touch with developers. I know exactly what ethics means. And I have respect of education. I really don't think you can bring even all of the countries to collaborate those who will not and it will become a race.

So, I would suggest let's try to be more specific, even let's define what you mean by AI because mostly when I read and try to understand what people assume under AI, it might not even be AI. So, let's just try to be more specific. TUR.

>> RINALIA ABDUL RAHIM: Thank you. So that's an important input. But I just want to remind that we want the question answered, is the ethical approach alone sufficient, if it's not sufficient, what more is needed. And but, of course, your input is taken, Claudio?

>> CLAUDIO LUCENA: Thank you, Rinalia. A clear-cut answer to that question, the ethical approach is a necessary component.

>> RINALIA ABDUL RAHIM: Necessary, why?

>> CLAUDIO LUCENA: We can't go without it.

>> RINALIA ABDUL RAHIM: Why?

>> CLAUDIO LUCENA: Because that's where we're trying to preserve our human nature in the loop of this new wave of technology. I really appreciate both questions from Pedro and from our fellow. It's interesting, the definition is very interesting. I think we're referring -- I'm sorry to come back to a terminology thing, I think it's essential for what we are going to call it. We keep calling it artificial intelligence because of the decision from the '50s. There was no other description for what they were thinking of at the time. We're talk ugh about analytics over big data. That's what we're talking about. When we do that, we eliminate most of the semantic existential issues of talking about something intelligent other than a human being. It's not that it's going to happen. There are those implications. There are huge funds and interesting studies about that part of substituting our human intelligence, semantically speaking, philosophically speaking. That's not what we're trying here. We're dealing with the everyday use of data processing over big quantities of data.

I'm a lawyer, but I'm also a computer scientist. I have these two nationalities, so to say. And allow me to respect to the most but disagree to the full extent of the premise you're bringing here. And that addresses Pedro's question, unfortunately he's not here.

We not only should but we must place and divide that burden with developers too. I would strongly encourage you to read a recent work published by -- I'm going to laugh, the name is a terrible -- but the title of the work is "The Moral Character of Cryptographic Work." It's an interesting piece. Much more elaborate than what I'll do here. Is a counterpoint. There's no way we don't advance this if we don't shift the burden of this development. The excuse -- and I'm talking also as a computer scientist, the excuse, I'm just an engineer, does not work anymore. This is too serious. Engineers are educated enough to foresee the consequences of this work. We're looking at developments of an area from which the developments are all there. Look at Yelico, they're there, they gave speech. Listen to them. They are clear in saying there are uses in which artificial intelligence is mature enough. There are other ones that it's not. Let's use one -- natural language processing. Fair enough, consequences are measurable. Predictive policing, as a decision maker, not as an assistant? Definitely not.

Not to give a definitive answer, because we're not going to get there. But these are the reflection s I would like to make.

>> RINALIA ABDUL RAHIM: Christian, you wanted to jump in?

>> CHRISTIAN DJEFFAL: I think, first if I may first comment to your question. I think it's a very, very important and relevant point.

>> RINALIA ABDUL RAHIM: Can you hear Christian?

>> No.

>> RINALIA ABDUL RAHIM: Could you speak louder, please?

>> CHRISTIAN DJEFFAL: It's an important and relevant point to think about the international setup. In a way we have the weapons race narrative that has been introduced. And I think it's very detrimental because other than, for example, nuclear technology, AI is not limited to energy and bombs. So, it's a much -- much more diverse and important technology. And I think the work of the European union in that regard, especially the declaration that included Norway, which is not a member of the European Union, is a very important step to an inclusive international and transnational setup in that regard. The measures highlighted, it doesn't apply to work, but I think it will -- it will impact to link research institutions and link hubs, innovation hubs and to create a kind of network. And I think this could really be a first step in an even more inclusive way to offer a different -- a different narrative that fits very well with coal and steel. So, this was my first remark.

If I may add a very brief second remark in the defense of engineers, I completely agree they have a big responsibility, but if we take the comment of the colleague and really look at the specific problem, we are facing, those engineers, they -- they work within an organization. They have specific tasks. They listen to the authority of others. And it's -- they deal with highly complex questions, many of them are sensitive, but it takes a lot to speak up and to change the development because they don't make ultimate decisions in many -- many settings. So, what we need is extraordinary teams. We cannot expect of them that they solve questions that the highest European courts quarrel about and then argue about. We need interdisciplinary teams and we need the involvement of the people for which those technology is made. So, there is -- the burden sharing is just necessary for the -- for the technology we're building here.

>> RINALIA ABDUL RAHIM: So, the you're saying engineers can take guidance. They can take guidance.

>> CHRISTIAN DJEFFAL: Of course, they can take guidance. Of course, they would need some level of education in order to -- everybody who's worked in an interdisciplinary project knows this. It's like language translation. But if you have an interdisciplinary team, and I see this in my own work, it can be very, very productive because once you start the conversation.

>> RINALIA ABDUL RAHIM: Olivier, you're an engineer?

>> OLIVIER BRINGER: I am an engineer. I've been working for the last ten years among lawyers, I manage them. Okay. No, I wanted to intervene on two points, the corporation and the ethical aspects, where we should put the ethical dimension -- in the training or if we should deal with it in post.

On the cooperation, yes, there will be a competitive dimension always. We cannot avoid that. Countries have their own interests, they have their own industries to protect. They want to attract investment. There's a competitive dimension. But still, we are faced with the same issues. We're faced with the same issues in terms of the impact on jobs and the same issues in terms of the ethical questions. We allow an algorithm to take medical decisions or not?

So, we have to work together. And I think we have -- we are going in that direction in Europe. I mean to have 26 countries signing the declaration is -- is a -- it's important. Once we have that, we also have a certain weight when we discuss with our international partners. We need -- when Germany discusses alone with the United States, it's not -- it's not that great. If Europe discusses as a whole with the United States, I think we can achieve better results.

And then on the -- on the ethical aspects, I think -- I mean, you can -- you can go that way. You can go for the legislative. If I'm being a bit self-critical, look at the time it took us to do GDPR and we're still in the implementation phase. Look at the time it takes to do a case. It takes years and years. Of course, we'll continue to do that when it is needed. But we have to intervene upstream. And I think an engineer can understand ethical issues. An engineer can be trained to that. And they would be interested in that too, I'm sure. So, we have to put that in the curriculum of scientists and engineers, these aspects. And I agree fully to have multi- disciplinary teams. And if you look at the bigger tech success from the west coast on the U.S., usually they work with different -- different competencies.

>> RINALIA ABDUL RAHIM: Okay, thank you very much. So, we have about three minutes if -- if you have burning contributions, I see two hands. We'll let the young lady go first.

>> AUDIENCE: Hello. So, Christian has emphasized the role of interdisciplinary teams. And I will agree with that and I would like to see these interdisciplinary teams, not only lawyers and engineers and philosophers, but I would really like to see also psychologies and social psychologies. I think there's an elephant in the room, namely the question of what kind of reality do we want to live in? And in a sense, what -- in what kind of conditions does a human being flourish? And we are discussing the future of work. So, if I want to go to a shop, I want to talk to a real person, not with a machine. I want my car or bus to be driven by a human being, not only because it's my preference, but there is a whole body of research and it's also common knowledge that we need human contact. And it doesn't matter if we -- if our roads will be safer and our reality will be safer and sterile, and we will live 100 years but what kind of quality of life would it be if 70% of us will be depressed because we lack social contact. So, I -- I wanted to really bring that to perspective so if one day there's an interdisciplinary team, I would really like to see --

>> RINALIA ABDUL RAHIM: A wholistic one.

>> AUDIENCE: Psychologists there as well.

>> RINALIA ABDUL RAHIM: Thank you very much. Very important point. Social contact, like sunshine. Yes?

>> AUDIENCE: So, we have two brief comments. The first one, why is an ethical component required? Better to answer how can ethics be ignored in any human activity even if we are using technology? And the other comment, engineers deal with tradeoffs all the time. One of the things they need to include in their tradeoff are ethical considerations.

>> RINALIA ABDUL RAHIM: Thank you, AVI deals with human rights issues and these issues as well. You also had a comment, please?

>> Hello. I'm from Portugal. And I want to mention one or two things. So, maybe 100 years ago, people used to do a lot of work manually. And then machines came on over and they started to do some of the work people used to do. And there was a change in society that people were being replaced by machines. It turned out it wasn't exactly like that. People were still indispensable. Nowadays we talk about AI and it's the same thing. People assume that they're going to be dispensable and AI is going to take over the world. We have to think about machines as something that actually helps us as a society to be better in the future to help us to make us not do a lot of -- you know, the work that we as a society don't -- shouldn't be doing. Machines could be doing that work.

So, let me just give you an example on how things can go wrong. Because I'm a technical person. And things can go wrong. So, a few months ago, there was an Uber car drive -- that drove automatically, and it actually killed someone. I'm not sure exactly the scenario because it might have been the -- you know, the other person's fault that was riding the bicycle. We don't really know. But that situation actually occurred. So, we have to be sure that when we automate these processes, that they are actually safe for humans. That depends on a lot of people. That depends on the engineers, that depends on politicians, that depends on everyone to actually make sure that there are rules in place to guarantee that that happens.

However, you cannot be fully sure that that's going to be a problem until it actually happens. So, if no one had died and this is obviously a bad situation, but if no one had died in that situation, you might have 1,000 or a million cars running automatically in the streets until someone actually dies and that would have been a much bigger problem. So, it was actually better in an unfortunate way, it's better that that happened now in a very early stage so that we can take measures into fixing that problem.

Another issue that happens maybe a few years ago, there was a twitter box that Microsoft came up with. That bot tried to learn from every TWEET that everyone talked to it. So, the problem is there was -- okay, there was manipulation and a lot of people actually ended up giving them racist comments show the bot ended up being very racist and very negative. But the thing is, that bot was a very good experience. Because that bot actually learned what the human nature of the people that were interacting with it actually did. If that bot had a nuclear switch to kill half of the world, it probably would have killed half of the world. Because it's understood that that's something that they should have done. So, from the point of view of the engineers, there's also a need to have something like a kill switch to make sure that whatever happens with the bots, it does not go beyond what's reasonable for a bot for artificial intelligence.

>> RINALIA ABDUL RAHIM: Okay, kill switch. Thank you very much. We need to move to the second part, the stake holder perspective. I'm going to hand over to Maarit. And also, for the role players, you heard some of the comments, some of them posed questions which we didn't address. If you have a reaction to that, please factor that into your interventions. Maarit?

>> Comments about the interdisciplinary aspect of artificial intelligence and here we have a very multi- stake holder panel representing workers, government, civil society, and academia. Oh, I believe we have a video intervention by Annette Muehlberg. Hello, can you hear us? We can't hear you. I think you might be on mute. Can you hear? We would like to start with you, Annette, if you can get your sound working? Okay? It's not artificial intelligence, but we're still having issues. Okay, maybe -- maybe what we can do is that we will start with Mariam in the meanwhile. And, Annette, if you could after Mariam's intervention make a sign if your system is working. So, Mariam represents the Georgian innovation agency. So, please?

>> Mariam Sharangia: Hello, everyone. I'm delighted to be here and share our agency's perspective regarding AI. A very controversial issue. And to start off, what is artificial intelligence. It's not like scientific fiction, we're using it every day, I'm sure. All of you have smart phone, all of you got advice from your smart phone to go to the restaurant or have the music you like. It's a perfect time for startups, for innovators to experience it hand come up with new ideas so it's still in the development stage. At the same time, there are challenges that AI brings. I would like to emphasize first, the privacy. I'm sure you would guess with this one.

Commercial side and politicians as well are using the data and profiting from it a lot. This is one thing that we're concerned of as a society. The second thing, employers, and skills may come obsolete. Some jobs may disappear, however, the same time, some jobs may be created. This is a paradox of technology, I guess.

The third one, more long-term, moral perspective, who would be responsible for AI. Who holds a moral perspective and ethical consideration for that? So now let's underline the role of the government and decide what should we do to answer these challenges.

So, in terms of privacy, I would like to emphasize the general data regulations that you mentioned and even though we're not part of the EU yet, hopefully, I think it will be pretty open for that because we really respect people's privacy and I think it should be harmonized. So, the role of the government in this stage is to have very specific legal framework. So, we as a society knows that our rights are safe and respected.

On the other hand, some jobs and skills become obsolete, what government can do and what our agency at this point is doing is we're providing free I.T. courses in order to prepare our society to have knowledge of programming languages and to be really ready for this technological breakthrough that's happening right now.

At the same time, we have a new program called innovation agents, actually, that go to SME, small and medium enterprises, and try to find the gaps, how they can be digitalized, where innovation can help to increase the effectiveness and efficiency of those firms. So, we're really promoting the innovation and technology.

At the same time, we're promoting the formation of -- of the communities of artificial intelligence, machine learning communities, we're also -- the main priority is investment in high-tech startups so we're really encouraging artificial intelligence to develop and be invested in. So, this is really one of our main priorities.

So, the role of the government as far as I see is to have enough initiatives in order to prepare the society, the employees to be ready for the technological breakthrough. And to ensure -- and to talk more on strategic perspective and moral global perspective, education itself needs to be modernized. In order for new generations to get the right information and the right times through the right medium and that medium is really important. I would like to emphasize.

So, in a neutral, AI could act as an extension that could help humans to unleash their potential and will be better than artificial intelligence on its own. In order for this to happen, etiquette groundwork should exist, and appropriate legal and structural framework should be in place along with the appropriate control mechanisms which I think are really very important. Thank you for your attention.

>> MAARIT PALOVIRTA: Government control, educational awareness, very nice points there. Can we get Annette back on if the link is working? Annette, can you hear us now. You're unmuted. Oh, we still can't hear you. Maybe we have a problem with the speakers here? Annette, could you try again? No, unfortunately, it's not working. Sorry about that the technical glitch. In the meanwhile, we're going to move on with the panel so that we're not going to run out of time.

So next, Clara Sommier from Google. How about the business perspective?

>> CLARA SOMMIER: Thank you very much. So, yes, I'm going to share with you briefly how we see AI relating at Google and how we're talking and thinking about the impact of the future of work and how it relates back to ethics. AI and machine learning has incredible potential. Because AI as we see it is a way of making sense through messy work. Through machine learning, we can see patterns of data that's harder to see and find more efficient solutions for societies. With that being SALD, we need to be very thoughtful as was just discuss ed about the impact that it could have on everybody's life and this is something we really tried to consider very thoughtfully. We're trying to see if there can be a broad set of principles that can be integrated when we're looking at AI. And we're already doing so.

The way we're doing that, the engineers developing AI are going through training on fairness to make sure it's something to consider. We're also trying to see if there are ways to analyze the data sets that they're using to develop the machine learning to make sure there are no biases in those developments. Because we shouldn't forget that AI is made by humans so there's a way of training them and making sure it's done in an appropriate way. And if you want to find more information about all those ongoing efforts that we're doing, I invite you to go on our website of the initiative. It's people and AI research project where we try to put back the human at the center of AI. And we hope to have more resources and research coming up soon to back that up.

Then switching to the future of work, very important topic. It has been said, it's very uncertain what will happen. Probably some jobs have changed as they have already done in the past. We don't expect one category of dog to fully disappear. Maybe some redundant tasks, but we see it as potential. We know there are millions of people who hate their jobs. And rightly so. And maybe there's a way for us as well to improve that. Through AI, maybe there will be also future jobs that will appear. There was a very interesting study by the center for the future of work trying to identify what could be the next 20 jobs that we are not thinking of? One of them will be human and machine team manager. Maybe we will need someone, a human being, to be able to coordinate the work of the machine with the work of the humans. They will still complete the job. And it sounds silly now. But 20 years ago, I wouldn't have thought that -- it could be a job, creating a filter for pictures to have cat ears or angel wings to someone that actually that would be a job. So, we don't know what the future will bring.

That being said, and coming back to ethics, a very important point, if we want to ensure this transition is to make sure that it will be inclusive and that everybody, as was mentioned by the commission, rightly so, that nobody will be left behind by this. And this is something where we're trying also to work on. How do we do that? We're trying to invest in digital skills because we know that those jobs will require different skills. We've already trained 5 million persons in the last four years in Europe, the Middle East, and Africa. And we've committed to help 1 million Europeans find employment or grow their business by 2020.

That being said, I also want to stress one point, when we're talking about the skills that we'll need for the job of the future, we shouldn't underestimate the human dimension of it. We know that creativity will still be a big part of what we'll need. There is even a study, I can see the job that would acquire those skills whereas now it's around 37%. So, there will still be a big role to play for humans. And the last point I wanted to mention, when we're talking about skims and how to prepare for this future to make sure that nobody will be left behind, is the reform of the education system. And that's been mentioned. It's also a topic that we've been working on. As you're saying, we're in a situation where we're learning and working, and we'll be moving to more continuous learning, probably your life will look more like learn, work, learn, work, and repeat. This is something that we should be ready for if we want everybody to have a chance.

I'll finish on a more positive note. AI will be disruptive. We'll know it. But AI can help to solve big problems. That's what AI is good for. And maybe that's where you can help us as well. AI may be part of the solution to address better the future of work.

>> MAARIT PALOVIRTA: Thank you, I'm hearing optimistic. We need to get a look at the bigger picture of the important tools and also to find ways to inject the human contact and have the human contact in the artificial intelligence picture. Next, I would suggest that we leave Annette to the end.

>> ANNETTE MUEHLBERG: No, please.

>> MAARIT PALOVIRTA: If she can make her intervention. In the meanwhile, I'd like to invite Leena from the electronic frontier foundation, please?

>> LEENA ROMPPAINEN: Hello. I'm from electronic frontier Finland. We're taking the name from the electronic frontier foundation because now we have to clarify it's Finland, not the foundation. But, yes, predicting things is very hard, especially predicting the future. Preparing for this talk also made me realize that I most certainly am not aware how much Al go rhythmic decision making is already affecting our world and how rapid progress we're making in the field of AI and robotics.

We have robots guiding robots in factories, self-guiding cars going through testing, and humans being humans, we have sex-bots, a lot of things in the world. However, I'm an avid fan of science fiction. So, these topics, of course, take me the science fiction route. There we have lots of different directions where it can go. Maybe our technical community can come up with the three laws of robotics. A robot may not injure a human being or through inaction allow a human being to come to harm, robots must obey the orders given by human beings unless it's contradiction with the first law. And the third law cannot conflict with the first and second laws.

Sometimes people become godlike allowing the human beings to retain the illusion of self-control while AI is guiding us to a better future behind the scenes. And various dystopias where AIs or robots get out of control threatening human survival. The terminator movies, they will definitely hit that kill switch.

AIs will not be perfect. Put in the wrong data, you will get wrong answers. Or as it's put succinctly, garbage in, garbage out for using algorithms in AI to strengthen human rights, I call your attention upon the Toronto declaration mentioned earlier which was announced a few weeks ago.

Then I ask all of you to think about your worlds. How many of you have some elements in your world that could be easily replaced by AI or robots? What would that mean for your work? I work in internal I.T. There are quite a few things that AI would be useful, and we are already to some extent looking at ways to improve our work by taking the repetitive tasks, the simple tasks away. What this means if we get rid of some easy work, that leaves the hard stuff for us. And at least for me, I have not realized I cannot work very effectively if I need to do a full day with a brain churning all the way with 100% efficiency. The body does not take it.

So, the real question becomes, to what extent will AI and robots replace humans or allow us -- or allow us to focus on the more present aspects in our lives or work less? Or we will have mass employment leading transformers and the police state keeping the citizens under control. Can we have pleasant lives. I'm not really providing any answers here. Squandering the questions just made me ask more questions. But perhaps together, we can come up with some answers. And we must also remember that whatever comes, there will be a lot of unintended consequences and unexpected consequences which we do not manage to think about with the development of robotics.

>> MAARIT PALOVIRTA: Thank you, Leena. I'm sorry from the Finland. I'm from Finland. I should have known better. And Christian, you gave some comments earlier. But, please, your views, Christian Djeffal?

>> CHRISTIAN DJEFFAL: Thank you so much. I'm from the Humboldt institute for internet and society. I have already heard so many things that have been mentioned already, like the gentleman from Portugal. And this is also a key point for me. It's a Disneyland at the moment for scientists and academics because we're thinking so much about the future and trying to understand. You raised an important point in saying that during industry, we had a lot of assumptions about the future of work that proved to be exactly the opposite. So, my point would be to keep that in mind, to think about what will happen in 20 years. But also, to think about what is happening now. I can tell you from my research work that actually things are changing changes are happening now. The same thing at a conference at MIT. We need to address some of the things that we see now.

From an academic perspective, we need to talk about social justice. We have a lot of efficiency gains, increased efficiencies, this poses problems of the distribution of wealth which we need to focus on. The robo-text was one idea but maybe not the only way and of solving some problems, concrete examples from my work, the things I look at, if you look at the geek economy and platforms, that use single workers, you can see a strategy, especially by certain medium and middle sized companies to train their algorithm in very less developed countries with not much regulation and the people are used to training these algorithms and then they don't participate in the economic benefits of the training. And I think this is a huge problem. The international labor organization just starts to -- to see this problem. But these kinds of problems, we have to -- we have to look at.

I talk to people who are laid off. I also talk to people -- this is also maybe underestimated if we talk about the workforce, the down-skilling. If AI takes certain components of my job, maybe I get paid less. This is what's happening in the German administration or has happened in the German administration right now. So, I get -- I get paid less and I think these are the side effects that we really need to -- need to focus on.

So, in conclusion, I think it's very important to think about how we address those issues and the first step that's already being mentioned from the people on the panel, the first step that is really important is to speak openly about it and to I think also from my side, it's important to stress the -- the opportunities AI offers but as the organizations on the panel, we really need to keep an eye on what is happening, especially to the -- to the workforce in that regard. And I salute the strategy of the commission in the communication because it was for me the first document that really, in the detailed fashion, explained it. Defined it is a problem and didn't try to solve it all at once. But I looked at it as a problem.

As the -- as the psychologist downstairs over there said, I think we need many -- many perspectives on this, many insights. We need to keep looking at it. But one thing I would like to stress is not to put a single narrative on that, and not to conceive AI simply as a job killer. You mentioned, and I mentioned that many things for the wellbeing for workers or for workers' rights where we could actively use AI in order to make life better and we could do it now. So, keep an eye on 20 years, but in a way, develop a -- an idea of AI that fits our purposes and that doesn't -- as a narrative, create some automated victims to this development, which is actually what happened with coal and steel and the European union. It was reframed, it was taken as a proxy to work together and if we can manage to take a step in this direction, I think this will be really great. Thank you.

>> MAARIT PALOVIRTA: I'm told we have Annette. We have Annette by video.

>> ANNETTE MUEHLBERG: Hello, everybody. Thank you.

>> ANNETTE MUEHLBERG: You have a video? Would you like to play it, perhaps?

>> ANNETTE MUEHLBERG: Yes, of course.

>> MAARIT PALOVIRTA: We see your video now.

>> ANNETTE MUEHLBERG: I hope I can give you some information from this discussion. I'm the head of the project group at digitalization project group in Berlin. We look at artificial intelligence, there's potential in getting rid of dangerous, physically straining, and boring jobs and making convenient services available for everybody, as well as contributing to the common good. We like to use technology to empower us and avoid mistakes.

But there are challenges for democracy and the working world if we want to fulfill the values and guiding principles. I assume all of us citizens and workers want to live freely, independently, and with dignity. No one should check their basic rights at the door of their work place, be it in the corporate, freelance, or on-line work environment. We want to be treated fairly as individuals. Contradicts our morals and our democratic structures. If you're accused of bad work or a crime, we want explanation and proof. We have a right to object, complain, and if necessary, pursue if we feel we're being treated unfairly. Innocent until proven guilty are very important principles of our society.

To sum it up, how we live, and work should be our decision and not a matter of scoring. If we were to accept that a black box structure becomes the norm for our behavior, our decision making and control mechanisms, that would turn us into servants of false dignity.

We'd rather be free citizens and employees. This includes, in those cases, where AI becomes part of fundamental services of society, the data sets, and algorithms used must be accountable. Data records must not be personalized. They're not just machine data, but also data of the employee to operate them. Especially in times of the on-line working world, the protection of employee data is of great importance. They must therefore be supplemented by employee data protection law. Many countries have already specifically had regulations for the world of work. Generally, does include co-determination rights that take effect as soon as the technology is to be introduced that can be used to monitor performance and behavior.

In order to sure that the technology is still co-determinable, we need the transparency, the traceability of the algorithms and work. The digital transformation of the working world partly changes and replaces areas of work with technology. Here, it's necessary as well as new forms of further education and qualification. Of course, skills already required should not be lost through the use of AI. We need the human's quantifications for control and also as a backup if technology fails. And another thing, there are already good examples and collective bargaining agreements, how rationalization gains can benefit the employees, for example, by the reduction of the working time. But society as a whole, it is especially important that large companies also pay their share of taxes so that the state is in a position to provide services of general interest and support people in need, not only privacy and determinability by design but also text by design has to be part of the program specifications. We should also provide an incentive to the producer to ensure that the risks are minimized. This would not be the case if the machine were assigned a legal personality. An employee should certainly not be taken hostage for unresolved liability problems according to the model of the autonomously drive attack against the driver so if something goes wrong, it's the driver who is liable.

Work like those often on on-line platforms should not lead to occupational safety and labor law must continue to imply. Employers cannot be allowed to deny responsibility by pointing to only serving them into immediate area role. Anyone like you who has the power to distribute jobs and ban employees from the platform has to have the legal responsibilities of an employer.

A system of termination and probation and possible arbitrary treatment quickly leads to burnout and mental illness, continuous automated control is no good. What we want is a creative, dedicated, self-determined, and social partnership-based work. Let me close in the spirit, let us not fall into self-imposed immaturity but use artificial intelligence for the empowerment of workers and citizens that there will be. Thank you.

>> MAARIT PALOVIRTA: Thank you, Annette. If we could keep Annette on the wall live in case there are questions in the room for her. Thank you very much for your words. I had very interesting things there that we probably didn't hear before. So, things like safety, liability, also privacy, which was mentioned before that are key aspects that also capture the ethics and the future of work issues.

I would now like to turn the discussion to the audience. For questions, please?

>> AUDIENCE: Not really a question as much as a comment. More like an issue I would like to raise with respect to privacy and artificial intelligence is like if a machine learns the data about me that I didn't know myself, who does the data belong to? Or how we control that? And this is something that probably is very precise that bothers me. To elaborate more, like if I -- I'm willing to tell Netflix the movies I like, but maybe I'm not really sure what my movie taste is, so if Netflix starts knowing my movie taste better than me, how do we control that?

>> MAARIT PALOVIRTA: Very good? Any other questions? We can take a few, please?

>> AUDIENCE: I'm Veronica from digital cities in Romania. I'm concerned with these issues. The thing is, I'm a little afraid when we have this discussion. We get in a little bit of paranoia. And this -- I know it's not new. You already said it. But these are the news that make the headlines. And this is the thing that I'm concerned, the way we communicate about new technologies. And going back to the question, if it is one very important views of talking about AI ethics, it's about the process gaining trust and highlighting that part because it's more than just skills. Yeah, we don't have skills and we cannot wait to reform the educational system. That would take decades. It's about acting now and acting with multi- stake holder approach. And with all of the other social partners as well. So, the point is, how can we avoid these? How can we make sure that this will not stop the investments and research? Because related to this couple of weeks ago, it was the news that in Germany, an employee, a personal assistant was banned because it was offered for young people, for children, actually, but they were also recording everything that was happening. It was also connected to the on-line platforms. So, all of these things are new things, but we start to bend them because we don't understand them as well.

>> MAARIT PALOVIRTA: Yes. So, we have two aspects here. One related to building on one's personal data and that potentially ending up in the kind of AI ecosystem. And we have building trust, how can we work with that. A third one there? And then I'll turn it back to our speakers.

>> AUDIENCE: Yeah, I think it's reasonable to also consider the Fa KT that you might want to create restrictions on this if we cannot control it. So, I think it's also very reasonable not to think only that, of course, we want to go to innovation and to all of this. But I think Europe actually is the only actor today that might bring this responsibility part into the discussion. And it's something that not many countries are looking at

And also, my question is about GDPR and right to explanation. Do you think that the fact that there are few articles in GDPR that they're considering to be creating a right to explanation with an automatic decision by a machine will prevent the application of deep learning, especially in Europe or not?

>> MAARIT PALOVIRTA: So, let's turn it back. Withe have the kind of GDPR-related question here. If there's a contradiction there. And then also, how do we build trust by perhaps adding transparency and other things? Olivier, would you like to comment?

>> OLIVIER BRINGER: Yes, for me. I think the last intervention replied to the first intervention. There is a provision, there are provisions in GDPR which say that people need to be formed when their personal data are being automatically processed. So, first, that's the first thing. You need to know that there's a machine that is processing your personal data.

And secondly, you have a possibility to opt out. So, you should have the possibility to tell Netflix or whoever possesses your data to provide you with a service that you don't want to take advantage of that service. So, we will have to see how we implement that. But the point is -- the point is covered there.

And this is the whole reflection on transparency of algorithm. And this links to the issue of control. Making sure that at the end, there is a human decision for important -- for important aspects. And that was said by several people. The trade union colleague said it. It's a very important aspect. We will see how we can enforce -- enforce. That's a very important aspect of trust. You should -- if you're not able to tell people that it's not the machine that's going to take the decision but it's going to be a human who's accountable, then you will be in trouble.

On the time it takes, yes, it will take time. Yes, you cannot reform the education system quickly. You cannot change the legal framework quickly either. So, this definitely will take time. We will see there's an urgency we can always -- always intervene. But in the past, we were seen -- so it's dealing with digital issues. All of the geeks tell us about artificial intelligence and all of these things. It's not really -- they are a bit crazy. It's not really important. Now we're really at the center of the game. So, you will see in the next project in European union, digital skills will be very high on the agenda. But then the implementation will definitely take some time.

>> MAARIT PALOVIRTA: Did anyone want to comment on these two initial questions from the panel? Clara?

>> CLARA SOMMIER: Thank you. I would like to go back to the question of trust. I absolutely agree we need to explain more and ensure that the humans are the ones that are controlling. They're the ones designing it and they're the ones that are fixing the objective. They're the ones using the data to build upon the machine learning. They're always in control.

I also believe that we have to be very careful in all of the first applications -- very visible applications of AI that we're making. Because we still have to prove it's an efficient technology and that it can really impact society in a positive way. If we get it wrong from the start, people will immediately mistrust it. And I want to come back to a point that was made earlier on why are we calling it AI? It was a great point? Is it really an intelligence? It's only a way to process data. Actually, artificial intelligence is machine learning and changing the vocabulary does not seem like an artifact. I think it would rather show what the reality behind it is.

>> MAARIT PALOVIRTA: Thank you. Excellent point. Annette, you wanted to comment as well. Please?

>> ANNETTE MUEHLBERG: I would like to point out that we already have the problems that -- really not under control. Humans -- for example, let's take a really easy example. The customs in the Humboldt #245I check the ships if there's illegal stuff there. There's an automated program and those people who have a lot of experience on how to check the ships, now they just sit in front of the computer and they get an order. You have to go there, you have to go there. If they think it's wrong, it takes two hours to oppose. If they oppose and if there's a mistake, the algorithm is the real measurement. They're in real trouble to argue what they think is important. And this is the point where they actually do have the chance to say something. But they have no possibility to integrate their knowledge in shaping the algorithm. They have they can just follow the orders. And I think this problem will increase every day by implementing more and more of these algorithms. So, we have to find ways to not only explain but also integrate those people who work with those algorithms, that they also can help shape that.

>> MAARIT PALOVIRTA: Very good, thank you very much. Raul and Patrick?

>> AUDIENCE: I'm from the international society but speaking on my own behalf. Thank you for this interesting debate. So, one thing is about transparency and algorithms, I'm afraid that's in the -- if we're thinking ten years from now, we will not be able to regulate the decision making of the -- of many of those things that are based on artificial intelligence. Because the software applications or whatever, we'll be learning about their own experience. So, their -- it will be very difficult to act before the decisions are taken because the decisions will be taken on the fly. This makes much more relevant the transparency of the algorithms, there's a big change, because when somebody produces good wine today, for example, they -- they are making probably the grapes they are using, the person takes it alcohol in the wine or some other things. And they are ensuring that they are working based on some standards in terms of help and --

But they are not giving out all of the secrets because there is a part of the program that is a secret for the manufacturer or for the producer or for who provides the services. In the future, it's changed. Because the near future, the very near future, they will have to share the secrets of what they produce because the decisions that they are -- that artificial intelligence programs will take will affect our life. Somebody mentioned maybe seeing some related examples. So, if a machine will take a decision in surgery, it will be very difficult to act before the decisions are taken, because probably that software application or machine will take half of the decisions during the surgery.

So, what I want to know in advance, I want to know how those decisions will be taken, not what the decisions are. So, I want to be sure that the decisions will be taken based on best practices, advances in knowledge, this is what we'll -- I know, need to take the risk of -- of doing the surgery. So, algorithms are very important, a big change in the priorities of how things are done.

Second comment. Sorry for the long intervention. I was talking to my colleague who reminded me yesterday about this. We talked many times about the skills that are needed. And we repeat that -- we need to develop the new skills and -- but we don't -- we don't talk about what the skills are. To be honest, I don't know what the skills are. What are the skills that people will need in ten years? But what I can say is whatever needs the people need now. And I think we need to do it against. One is to train people on complication and thinking. Not only with the high schooler or high school level, it's for the workers. We need to train the workers from this thinking. In two or three years, the people working with that machine, we have to work with a different device and they have to understand how the devices work, how the devices think in order to interact with them. This is urgent.

The second thing is we need to make a call -- we need to make a call to all governments and states around the world to urgently review their integration systems and take into consideration for things. I know it would take many years, as soon as we start, we have results. Sorry for the long intervention. Thank you.

>> RINALIA ABDUL RAHIM: Thank you, Raul, for your thoughts.

>> MAARIT PALOVIRTA: So, let's take a few questions here. I think --

>> AUDIENCE: I think when he speaks about artificial intelligence, I think our imagination starts flowing, yeah? And at the same time, we are not really clear about the facts. Anyone knows how many postings Facebook has taken down in the last quarter. It's in the transparency report, 860 million postings have been taken down in a three-month period. I can ask Google how many videos have been taken down in the last quarter on YouTube. And how much of that has been done through artificial intelligence or through algorithms or self-learning algorithms. Facebook claims that 99% of terrorist propaganda has been taken down. Before any posting on this matter has taken place and all of this is done through artificial intelligence. So, we're talking about what is already currently happening right now. What is human intervention and what is no human intervention. Out of these postings, very little human intervention involved, righteously so, we have machines that can do it for us.

Not -- I wanted to react to the psychological aspect of it. Can you image up looking at violent content for workers throughout the days as being strong criticism about exposing workers through many of the content that's being produced and posted. So, we really have to balance workers' rights and know what it brings for the workers and what is maybe less positive for them.

So, algorithms are there, self-learning algorithms are there. The trust, we need to work on. But I'm not so sure that our internet service providers, the companies, will let us look into how algorithms are fabricated. It's the same thing as asking Coca-Cola to reveal the secret formula of Coca-Cola. I'm sorry. Algorithms are the core of the business. That's the core of the business model. What we have to ensure, I've said that, we have to involve from the start within the design the human rights dimension of what we're developing. We need not to be able to guide every single step that is being done within artificial intelligence, but we need to set the human boundaries of where we want to go, when we developed human -- when we developed cloning possibilities, we didn't say stop all of the cloning, we said, there is a limit. And that limit is human cloning. So, within the development of artificial intelligence, self-learning machines, I'm not so sure that garbage in means garbage out or intelligent input means intelligent input out. Not necessarily. We know we put in, we don't know what comes out. There we have to set somehow the boundaries which are, of course, ethical and philosophical but also legal.

>> MAARIT PALOVIRTA: We're officially out of time. But we have a couple of final points here before we hand it over.

>> AUDIENCE: Coming back to Annette's video, I think she raised really important topic. The topic of and how much people need to make small and bigger choices in their life -- how much do they need to feel that they are in control of their lives and this is what actually makes them happy at the end of the day. So, this is the comment about Annette's video. And another point to Clara, she has mentioned that Google engineers have trainings on fairness. I think it's a really disturbing phrase, training on fairness because fairness is not a washing machine that you can operate. What kind of fairness are they taught? Is it a utilitarian version? Do they read John Rose or Robert Nozik? Yeah, this is a very specific question. Yeah.

>> Any other comments?

>> RINALIA ABDUL RAHIM: Remote.

>> MAARIT PALOVIRTA: Remote. We have a question from Amalid De Silva. Will AI become big brother. And are we prepared to limit AI development in 2024. And also, just came through. Agree that it is good that artificial experts are reviewing the content for certain harmful types. But once design with human rights considerations, they can enter into account, we also need to be able to test the systems against human rights situations and we need to provide a few mechanisms for those things removed.

Also, I think that Annette wants to say something.

>> MAARIT PALOVIRTA: Yes, I note that Annette has a final comment. We're out of time here, Annette. If you can keep it to one minute, I'll give you the floor for the last time as you are remote and at a disadvantage. Please, Annette, go ahead. We can't hear you, Annette?

>> ANNETTE MUEHLBERG: You cannot hear?

>> MAARIT PALOVIRTA: Now we hear you, thank you, go ahead.

>> ANNETTE MUEHLBERG: Okay. So, with respect to skills, I think we should address not only workers but the managers -- politicians -- they understand the issue. This is extremely important.

Second, someone said algorithms are the core of the business model and therefore they cannot be transparent. I think this should not be for public service. This should not be for necessary service for human kind. I think we have to clarify this and make the distinction between public and private interests

And, yes, I just think that I would like to say hello to the psychology lady and say the whole issue of self-determination and the possibility of thinking and acting with her and his own will is substantial. Thank you.

>> MAARIT PALOVIRTA: I'm sure our psychology lady says hi, back. So, thank you very much. We're going to wrap up the session now. We have Sue who's been listening very intently. And who will propose the bullet points to finish the session with. Thank you?

>> Sue: Since we're out of time and if you disagree with any of the comments read, please raise your hand. AI must be accountable, transparent, mod final, pricing and determinability by design is a must. Unintended and unexpected consequences in development of AI and robotics are unavoidable. There must be an ethical code for algorithm developers. The education system needs to be revamped to prepare future workers with necessary skills to deal with new forms of jobs that AI will bring. Interdisciplinary teams are needed to relieve the burden on engineers and engineers need to be educated on ethics. AI technology needs a common international framework, ethical clearance is not sufficient. I'm just an engineer, it's not an excuse when developing AI. AI is or will become a race expecting adherence to ethical code by developers is not realistic. Is anyone listening?

>> RINALIA ABDUL RAHIM: We're listening.

>> Sue: We need a kill switch for automated systems. AI can be part of the solution to the problem of the future of work. And I think -- yes. That's it. Thank you.

>> MAARIT PALOVIRTA: Do we have comments or objections? I think it sounds fair. No hands? No objections. I would like to thank the panelists for organizing single handedly more or less a great session and the audience who stayed here for the whole 1 1/2 hours. So very interesting topic. Thank you, everybody.

[Applause]

>> RINALIA ABDUL RAHIM: Thank you for staying until the end. I just wanted to say that it was a pleasure to organize this workshop because I had an excellent team of collaborators and I wanted to make special mention of Ms. Peck from the council of Europe, and for Maarit for the internet society and for those who have been a great help to me. I asked the role players to stay in the room because you didn't have a lot of time to engage them. If you have specific questions and want to talk to them personally, they would be happy to meet you. Thank you so much.


This text is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text is not to be distributed or used in any way that may violate copyright law.