Emerging technologies and human rights – PL 03 2019

19 June 2019 | 16:30-18:00  | KING WILLEM-ALEXANDER AUDITORIUM | [[image:Icons_live_20px.png | Video recording | link=https://youtu.be/NbRmqDesksU]] | [[image:Icon_transcript_20px.png | Transcription | link=Emerging technologies and human rights – PL 03 2019#Transcript]]<br />
[[Consolidated programme 2019|'''Consolidated programme 2019 overview''']]<br /><br />
Title: <big>'''Emerging technologies and human rights'''</big><br /><br />
Proposals assigned to this session: ID 1, 23, 62, 85, 89, 90, 108, 117, 154, 155, 169, 177, 205 – [https://www.eurodig.org/fileadmin/user_upload/eurodig_The-Hague/statistik_proposals_all/proposals_for_2019_2018-12-04__01_final_web_IDs_ver1.pdf list of all proposals as pdf]<br /><br />
== <span class="dateline">Get involved!</span> ==
*the [https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai AI ethics guidelines];
*the Commission Communication “[https://ec.europa.eu/digital-single-market/en/news/communication-building-trust-human-centric-artificial-intelligence Building trust in human centric artificial intelligence]”;
*[https://www.coe.int/en/web/freedom-expression/-/foreign-ministers-rights-obligations-and-action-priorities-of-member-states Foreign Ministers: towards a legal framework for artificial intelligence] news release 17 May 2019


== People ==
'''Key Participants'''
*Olivier Bringer, Head of Next Generation Internet Unit at the European Commission.<br />''Olivier Bringer is Head of Unit at the European Commission, in charge of the Next Generation Internet (NGI) initiative and internet governance policy.<br />Before working on NGI, Olivier held various positions in the European Commission, dealing with policy development and implementation in the area of digital and competition law.<br />Prior to joining the Commission, Olivier worked as a consultant in the telecom industry. He is an engineer by training.''
*Lise Fuhr, Director General of ETNO<br />''Lise has been ETNO’s Director General since January 2016. At ETNO, she leads and oversees all the activities and is the main external representative of the Association. On behalf of the Association, she is also a Board and Administrative Committee member of ECSO, the European Cybersecurity Organisation. Lise was also appointed to the Internet Society Public Interest Registry Board of Directors for a three-year term as of July 2016.<br />Prior to joining ETNO, she was Chief Operating Officer of DK Hostmaster and DIFO, the company managing the .dk domain name. Between September 2014 and December 2015 she also chaired the Cross Community Working Group for the IANA Stewardship Transition, building on her strong network within the internet community. Lise has 10+ years of experience in the telecoms industry. She started her career at the Danish Ministry of Science, Technology & Innovation (1996-2000), where she wrote and implemented regulation for the telecommunication markets. After that, she worked for the telecoms operator Telia Networks (2000-2009), where she led various teams dealing with issues as diverse as interconnection agreements, mobile services and industry cooperation.''
*Joanna Goodey, Head of Research & Data Unit, European Union Agency for Fundamental Rights (FRA)<br />''Joanna Goodey (PhD) is Head of the Research & Data Unit at the European Union Agency for Fundamental Rights (FRA). Prior to joining the FRA, Joanna was a research fellow for two years at the UN Office on Drugs and Crime, and was also a consultant to the International Narcotics Control Board. In the 1990s she held lectureships in criminology and criminal justice at the Universities of Sheffield and Leeds in the UK, and was also a regular study fellow at the Max Planck Institute for Foreign and International Criminal Law in Freiburg. She has published numerous academic journal articles and book chapters on subjects ranging from trafficking in human beings to hate crime, and is the author of the book ‘Victims and Victimology: Research, Policy and Practice’.''
*Joe McNamee, independent expert, member of the Council of Europe Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence<br />''Joe McNamee has worked on internet regulation since 1998. Up to the end of 2018, he was Executive Director of European Digital Rights, the association of organisations defending online human rights in Europe. He participated in the Council of Europe expert committee on the roles and responsibilities of internet intermediaries and is currently a member of the Council of Europe Committee of Experts on Human Rights Dimensions of automated data processing and different forms of artificial intelligence. He holds Masters Degrees in European Politics and International Law.''
*Max Senges, Program Manager for Google Research & Education<br />''Max Senges (1978) works as Lead for Research Partnerships and Internet Governance for Google in Berlin. While in California (2014-2018), Max worked as Program Manager for Google Research, where he built and led Google’s IoT R&D Expedition (in partnership with e.g. Carnegie Mellon, Cornell Tech and Stanford). Later he became the Head of Google’s Hardware User Research team. For more than 8 years Max has worked for and collaborated with Vint Cerf on Internet governance, interoperability and open standards. He is passionately thinking and working at the crossroads between academia and the private sector, internet politics, innovation, culture and philosophy of technology. In the last ten years he has worked with academic, governmental and private organizations, centering on knowledge ecosystems, e-learning and Internet governance. Max holds a PhD and a Master’s Degree in the Information and Knowledge Society Program from the Universitat Oberta de Catalunya (UOC) in Barcelona as well as a Master’s in Business Information Systems from the University of Applied Sciences Wildau (Berlin).''


'''Moderator'''


== Messages ==
*All stakeholders, including the private sector, agree that some form of regulation of the use of digital technologies is needed to protect individuals, build public trust, and advance social and economic development. However, divides remain on the scope and binding nature of such regulation, even if predictability and legal certainty appear essential. There is nevertheless broad consensus on the need to initiate open-ended and inclusive debates to provide guidance and introduce new frameworks, for example at the stage of product development, to address the significant impact of new technologies on individuals and the exercise of their human rights.
*States should take appropriate measures to ensure effective legal guarantees, and that sufficient resources are available to enforce the human rights of individuals, in particular those of marginalised groups. Mechanisms are needed to ensure that responsibilities for the risks and harms to individual rights are properly allocated.
*Due to the asymmetry of power between those who develop the technologies and those who are subject to them, there is a need to empower users by promoting digital literacy skills and to raise public awareness about the interference of emerging technologies with human rights.


== Video record ==
https://youtu.be/NbRmqDesksU


== Transcript ==
Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-800-825-5234, www.captionfirst.com


''This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.''
>> HOST: Now we're ready for the last session of today, the last plenary. I will invite one of the two moderators of the session, Jan Kleijssen from the Council of Europe, to kick it off. I give you the mic.
>> JAN KLEIJSSEN: Thank you. Good afternoon, ladies and gentlemen. Thank you for having stayed to this late hour for the final session, on emerging technology and human rights. I'm very happy we have a cozy and intimate room to discuss in, as well as a panel of high-level experts.
Just a few words, perhaps, by way of introduction. I represent the Council of Europe, an organization that I think many of you are familiar with, and which has in fact been dealing with the issue of emerging technology and human rights for a while.
Many of you know the Budapest Convention on Cybercrime, Convention 108 on data protection, and, also relevant, the convention on biomedicine, which banned a technology in development, namely the cloning of human beings.
Three weeks ago, we had a birthday party. We celebrated 70 years of the Council of Europe, and the foreign ministers gave us a birthday present in the form of an instruction to start working on a binding legal instrument on the design, application, and use of artificial intelligence, and to do so in a multistakeholder context. This is obviously a very new development, and the decision is therefore not yet included in the notes that we sent out for this session. But I hope the questions that are put can, of course, also be examined in this light.
It is now a great pleasure, as is the protocol for the session, to call up to the podium, first of all, my co-moderator, Ms. Viveka Bonde, one of the cofounders of the LightNet Foundation from Sweden, who will co-moderate, and the panel of experts.
First of all, Olivier Bringer from the European Commission, who is Head of the Next Generation Internet Unit; if you would be so kind as to join us. Lise Fuhr, the Director General of ETNO. Joanna Goodey, Head of the Research & Data Unit at the Fundamental Rights Agency. Joe McNamee, independent expert and member of the Council of Europe Committee of Experts on automated data processing and AI. And last but not least, Max Senges from Google, Program Manager for Research & Education.
>> VIVEKA BONDE: Right. We will now discuss emerging technology and human rights. First, we would like to pose some questions.
>> JAN KLEIJSSEN: I forgot to mention the reporter. I apologize. Clemon, from Barcelona, with the Internet platform, is the reporter for this session.
>> VIVEKA BONDE: Right. So we start with the first question; I will sit down here. I will ask all of you panelists: we now have regulatory frameworks in place. Are there any shortcomings in these existing regulatory frameworks? What are your views, your initial thoughts?
>> MAX SENGES: Thank you, everybody, for setting up this session. First of all, I'd like to point out that we have to realize that the Internet, and the online space it created, is an achievement of the last 40, 50 years. It hasn't broken down yet; we have never had to reboot it. So it is actually a surprisingly resilient architecture, and many of the principles it was built on were pretty smart and good principles: decentralization, open standards, things like that. Especially over the last 10, maybe 15 years, really thorny issues came up. We will never find a solution that is ideal and just, a clear-cut scientific answer, to freedom of expression or privacy. These are things where you have to set up the right checks and balances and institutions to negotiate them. Overall, I would say we are on our way. Certainly we would like to see institutions like the IGF play a better role in coupling the different efforts and institutions that make up Internet governance and that shape the way we handle these problems. But I'm not in the dark here; I think we should complain less and cooperate more on solutions. I will point to the Internet & Jurisdiction Policy Network that Bertrand de La Chapelle and Paul Fehlinger have set up, which really tries to bridge the debate and deliberation around problems and bring it to something where we find solutions that actually take the perspectives and interests of all stakeholders together. Because what is very difficult to do is to come up with rules for a transnational, global network where you need to police and really try to understand how to get a hold of each small or big player, because there are so many ways to avoid and, frankly, not follow an agreement, if it's not an agreement like the way Internet traffic is exchanged, for example.
I was really fascinated to learn from Andrew Sullivan, the ISOC CEO: he said when it comes to Internet traffic, there is only one rule: your network, your rules. The way traffic exchange works is by reputation, the interaction and relationships that the exchangers develop with each other, and that works really well. I don't think that is always the right solution, but I think it is an interesting way to start to frame the problem and think about it.
>> JAN KLEIJSSEN: Please, Joe.
>> JOE MCNAMEE: Thank you. I think we should use the framework of experience. We have a lot of experience with technologies that are still called new and have been around for a long time. And in the committee that I'm involved in at the Council of Europe, even mapping the issues that need to be addressed on algorithmic decision-making -- states regulating us, states regulating themselves, self-regulation by companies of their own processes, and privatized regulation by companies of their users -- it is a very complex framework. But we're not starting from scratch; we have a tendency to always think that we're starting from scratch. And a wise man, I don't know who it was anymore, said that if you want to really seem to be up to date with the news, read yesterday's newspaper, because everyone has forgotten what happened yesterday. So I'm going to be really, really up to date and tell you that on the 12th of November, 1996, the "New York Times" had a front-page headline: Europe betting on self-regulation to control the Internet. I have seen no document from anybody identifying the successes and failures of that approach from 1996 until 2019. There are things that have worked, and there are things that have not worked.
Law enforcement by private companies under the guise of self-regulation is a major issue for decision-making, for issues like content moderation. In 2015, the Council of Europe produced, or commissioned, a study on Internet blocking, which showed that rule-of-law safeguards were not being applied.
In 2017, the European Parliament adopted a resolution castigating the European Commission for its failure to produce data on the same subject. And we're still talking about self-regulation as if it were one thing, when it is fighting spam, fighting illegal content, network security. So I think that fundamentally the thing that we miss is the ability to say: okay, can we not reinvent the wheel? What have we learned from privacy impact assessments that could be applied to algorithmic decision-making, for example?
Do we need to build from zero every time? If we can learn from success and avoid making the same mistakes again, I think we're on the right track.
>> JAN KLEIJSSEN: Thank you very much. What is the view of FRA on this, Joanna?
>> JOANNA GOODEY: I can only speak on the view of the European Union. I mean, when we're asked this question, what are the shortcomings, we think of the regulatory framework.
We just heard from Jan Kleijssen about the Council of Europe and its own initiative to look towards unifying law. A lot of the discussion at the moment is about whether you have unifying single regulation, for example, or whether you go for a sectoral approach in terms of how you regulate emerging technologies, AI, the Internet, you name it. I think we really have to understand what we currently have in place. I know there are exercises in mapping the extent of existing legislation; in the case of the EU, we have very diverse legislation that predates AI discussions, on things like product liability. We have legislation from the '80s, when a discussion about AI was really off the map. We have legislation on product safety, consumer protection, you name it. If we take the context of the European Union, the most up-to-date piece of legislation relevant to AI is the General Data Protection Regulation, because within that you do have recognition of aspects of AI, the use and misuse of algorithms. So a lot of our legislation really was developed at a time that is pre-emerging technologies, pre-AI, and we're recognizing that. The General Data Protection Regulation, which covers only one area of law, dealing with a specific right, is very comprehensive; the alarm bells were huge from the industry when that piece of legislation was on the table. Now people are calming down a little bit and seeing the added value of this legislation, but it only deals with a very specific, narrow area of law. And it took many, many years to negotiate; I have a colleague from the Commission here who knows about that. But it is a comprehensive piece of legislation. So if we're asked what the current shortcomings are, I think we have to really map what we do have, with its limitations from when it was drafted: is it fit for purpose for emerging technologies and AI? Then, beyond existing legislation in fields like product liability, safety and consumer protection, we have human rights and fundamental rights law. We have the European Convention on Human Rights, UN law, and the Charter of Fundamental Rights of the European Union. The Charter is a modern piece of law in that regard because it separates data protection and privacy, so you now have a modern, comprehensive instrument that we can look at. The much broader discussion about what kind of regulation we need in future requires us to know what we do have, whether it is fit for purpose, and, moving on from that, what is perhaps needed. However, of course, there is still great resistance from the industry to any thought of regulation; hence the enthusiasm for ethics, which are nonbinding, of course, and for soft law options. I can perhaps say more on that later.
>> JAN KLEIJSSEN: Thank you very much. Now the floor to Lise. You represent a body with very diverse stakeholders.
>> LISE FUHR: Thank you, and thank you for inviting ETNO here. ETNO is the trade association for European telcos; my members represent 70% of the investment in infrastructure in Europe. I will speak from a telecom perspective. If I look at technology right now, at what our members are doing: they're doing 5G, the Internet of Things, AI; cloud services are what they're concentrating on. And with that also come all of the things we're talking about today, emerging technologies and human rights. And things are changing, because what we're seeing now is that all the technologies are becoming more platform-based and industries are becoming more interrelated. So I would like to take a step back and ask: why is this technology interesting for European citizens? It is because it is going to create a lot of changes in our lives. Part of it will make our lives easier: we will have online services that can help us shop, do e-governance, even remote surgery, et cetera. It will also make our lives safer: we will have smart cities, we will have automated cars, et cetera, that will be safer for pedestrians, for drivers, and also for cyclists. So why are emerging technology and human rights a problem? Because technology will make us all traceable at any time. You will be connected, and the information you give away all the time will create new issues we never saw 10 years ago. So for us, when it comes to human rights shortcomings, it is about privacy, it is of course about freedom of speech, but security is also essential when you talk about human rights. And we believe, a bit like Max is saying, that we need to work together on this. We need multistakeholder models to actually discuss and find out where the shortcomings are. I also agree with Joanna that we have a lot of regulation already. We have GDPR, which regulates a lot of services, and that's a horizontal regulation. But as telcos we also have e-privacy, and e-privacy is very sector-specific. For us, that is actually a shortcoming, because we think all regulation needs to be horizontal, since services are converging; if we want to ensure human rights to the broadest extent, we need a more horizontal view of how to deal with this. Not only AI but also IoT will mean problems, and of course a lot of good solutions, for all of our citizens. Right now we have a hundred million connected devices, if we take Western and Eastern Europe together, and we think we will have half a billion in 2023. So we will all be connected, we will all be dependent on technology, and that's why anything we do needs to have the human rights aspect built into it. If we don't, we will lose trust in technology, and that would actually mean a step back.
But while in Europe we have one kind of regulation, if we look at the U.S., or at China, they have different regulations. And we need to find a balance where we look at how we innovate in a respectful way, while also allowing data to be used for AI, for example.
So we think GDPR is a good start, and we think we should look into how we can develop some principles for AI. The ETNO members are looking into this; we think the high-level expert group report that came from the Commission gave a good basis, the part about doing good and doing no harm, and also preserving human autonomy in everything you do around AI. So I think we have a good basis, and we need to build more on this. And I know we're going to talk later about how we actually implement any solutions.
>> OLIVIER BRINGER: From the European Commission's perspective, there are two shortcomings in the regulatory environment. One is time: the time of regulation is not the same as the time of technology; that we know very well. To a degree, we can try to accelerate the time of regulation, and I think we have managed to do that in the current mandate: with the Digital Single Market strategy, we have managed to adopt 30 legislative proposals in the field of digital in four years. But we can accelerate only up to a certain point, because when you want to go for legislation, you need time to prepare it well, you need proper discussion with the multistakeholder community, preparation, and good discussion with the co-legislators. There is an incompressible amount of time needed to strike the right balance in regulation. And the technology is not waiting; the technology is going very, very fast. So the response, I guess, is in good part anticipation: we need to anticipate the issues even before they occur. That doesn't mean we should regulate right away, but we should think about how we prepare the ground. We try to do this; as the Commissioner explained this morning, this is what we try to do in the field of artificial intelligence, for example, and blockchain. I think we are not late in those areas; we're looking at the issues: what are the ethical issues raised by artificial intelligence; in the field of blockchain, how can we maximize the opportunities of the technology. And we will see what the next steps are.
Then, another challenge is that the Internet is so broad. The Internet is Google; the Internet is one person who has a blog; it's someone selling eggs or a pair of shoes online. It is very broad. It is much more difficult to intervene, to devise policies, in such a large environment than it was when we started to do digital policies and regulation. Lise knows it very well: 20 years ago we were mainly regulating telecom incumbents. In that sense it is simple: you identify someone with market power and you impose access remedies. When you regulate the Internet of today, or when you make digital public policies, you have a huge variety of stakeholders to take into account. One key issue, for example, is size. We have to issue regulations which apply to very different types of companies; they apply to very big companies and they also apply to very small companies. So we need to tailor them so everything is proportionate; we need to adapt the regulations depending on the size and the turnover of the company. This is something we need to take into account, as we have very different players in front of us. Last point: I would like to agree with what Joe said. We should avoid reinventing the wheel.
We are starting to have a good corpus of legislation and public policy in the field of digital; GDPR is one example. We now have some legislation on cybersecurity, and we have started work on online disinformation. We should indeed reuse the good concepts from this legislation in the next measures we take. I like your example of privacy impact assessments, for example.
>> JAN KLEIJSSEN: Thank you very much. Before asking Viveka to carry on, are there any questions at this point? This is not just a one-way street; we want to keep you awake and entertained. Is anyone --
>> MAX SENGES: While someone might be thinking about a question, I can make the panel a bit more interesting by strongly disagreeing with what Jan said about the current focus on ethics being pro forma and companies not wanting to participate in finding good governance, good regulation.
I think, on the first point, it is absolutely adequate to start with the ethics part, when you want to understand what topics you should actually look into. So it is principles, it is ethics. And importantly, I think there are actually three kinds of ethics. There is virtue ethics: what does an individual player do? There is teleological ethics: where do we want to go? That is a very important conversation we're having: what kind of world do we want to live in, for example in the case of AI, which is a good example of where we can get it right. We're catching it early, and we already have a fertile community with many different kinds of expertise.
So virtue ethics, teleological ethics, and deontological ethics, which usually, slowly but surely, becomes law; frameworks based in deontological perspectives, like human rights, are another important part, but only one piece of the puzzle. Now I see someone with a question.
>> JAN KLEIJSSEN: Please.
>> QUESTION: Hi there. My name is Collin Perry, I'm with Article 19. I find it motivating that several of you have mentioned impact assessments. I have worked with impact assessors to develop models and ideas to be applied to Internet providers specifically. One of the challenges we have had in transposing best practices that exist and have been applied in sectors like mining, textiles, and food and beverage is that supply-chain-focused impact assessment methodology, in order to be compliant with the UN Guiding Principles on Business and Human Rights, requires engagement with affected rights holders. When you talk about Internet infrastructure providers, everything from registries, registrars, cables and ISPs to platforms, it can be very difficult to lasso this constituency of who is an impacted rights holder, and it is even more difficult to actively engage with them on the potential impacts. So I think it is really important to underscore, from a practitioner's side, that this is somewhere we fall short within the Internet community. And I think it is a bit premature to refer to the work that is being done as impact assessment, in the interest of not diluting the practice as it exists and has been formalized in other sectors. I wanted to pose the question, specifically to the people who mentioned this as a potential solution or avenue to be explored: are they engaging with the people on the ground who are actually trying to make these methodologies and best practices applicable to our sector?
>> JAN KLEIJSSEN: Okay. Joanna, please.
>> JOANNA GOODEY: I will answer the point you made; I thought I would respond, as it was good to have a challenge from another panel member, and I will also draw on what you said. I'm a member of the European Commission's high-level expert group on AI, one of 52 members; if you thought the UN expert group was large in terms of working with a diverse constituency of stakeholders, also with civil society, that is a large group. I want to underline, because I was challenged, the work of this specific group. It rests on three pillars: ethics, law, and robustness. And it really says that all three, ethics, law, robustness, need to be looked at as one whole.
So it is not that you say ethics is the way to go, or law is the way to go, or indeed robustness. It is the package, the whole three together. That's the point I very much want to underline: it is not an either-or discussion. Law and ethics are very closely intertwined, but the heading of this session is emerging technology and human rights, and human rights are very much legally based. In reply to the person who just gave the intervention: the point you make about impact assessments is very much about the whole life cycle of, if I speak only about AI, for example, developments: who is consulted, from the very conceptualization through to the rollout, and the ex post evaluations you are talking about. You mention business and human rights, but we're currently at a stage where those principles are not binding; not binding in the sense of the General Data Protection Regulation. There are different levels of legal oversight in government and governance in that regard. So my understanding, in terms of the emerging work of the Commission's high-level group on AI and trustworthy AI, is that there is a need for multistakeholder engagement at all stages of product development. This is something that the high-level expert group is about to pilot with different sectors: they're actually going to be looking at the application of the checklist they developed on trustworthy AI with industry, with different users, which is so important, or else it is theory and doesn't look at the practice in reality; which, I hope, gels with the point you raised there.
>> JAN KLEIJSSEN: Lise, please. You want to comment as well.
>> LISE FUHR: I like the question on impact assessments; to me, they're key, extremely important. What we see at the moment is that impact assessments are, first of all, very academic, because we're not widening the scope, and they are also too technology-specific. If you talk about AI and human rights, you might forget the IoT angle or the 5G angle, and all of these things are converging as we speak. They're all interdependent, and it is extremely difficult at the moment to actually make an assessment that is fulfilling, that will give a good picture of the actual impact. So I'm a fan, but I'm just saying we need to rethink how we do it.
>> VIVEKA BONDE: What about issues of enforcement and supervision? How do you consider that should be done, considering the challenges in the different markets and the different kinds of technologies as well?
>> JOE MCNAMEE: I was thinking Max could give us an example of something that Google did over the last five years that, looking at the high-level expert group's ethics document, you would not do again in similar circumstances. Then we can see a real-life example of how it would impact our rights and freedoms.
>> MAX SENGES: That is a difficult question. I will try to think of something we might not have done, or might not do again, while I address the question posed. To your point, I like the three pillars of robustness, law and ethics. I would say ethics is the virtue part: understanding what we are doing as practitioners. The law is what the stakeholder group of governments is clearly in the lead for, in consultation with the other stakeholders. Similarly, on the impact assessment question: yes, of course companies should self-assess, but it is the independent outside assessments, done by NGOs, civil society and other stakeholders, looking at the practices, that are most interesting. Allow me to underline why I think AI governance is an area where we see a next-generation, more mature multistakeholder governance approach and responsible innovation, certainly by the leaders in AI: they bring academics and researchers together around the principles that are identified. I'm not going to read all of Google's AI principles to you, but the three I find particularly interesting, and which actually do not push away responsibility, are in the AI governance paper that was released at the World Economic Forum in late January, early February. First, explainability: really make it possible for others to understand what this somewhat autonomous machine does. Second, fairness appraisal: make it transparent, discuss what factors, what ratings go into the machine as it develops its model. And third, they're asking and proposing that we need liability frameworks for AI, which I think is incredibly important and really points to the need for good regulation in that space. One last point I wanted to make is accountability to people. It is called human in the loop, and I think it is actually a fairly difficult one to achieve. So by no means are we chickening out of the difficult questions. They're big questions, they're not answered yet, nobody on the planet knows the answers, so we should work on them together.
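''The "human in the loop" and explainability principles mentioned above can be made concrete with a short sketch: a model's decision carries its own explanation, and any low-confidence decision is routed to a human reviewer before it takes effect. The following is a minimal, hypothetical illustration; the function names, the confidence threshold and the toy model are assumptions made for the example, not Google's implementation.''

<syntaxhighlight lang="python">
# Minimal, hypothetical sketch of "human in the loop" with an
# explainability hook. All names and thresholds are assumptions
# for illustration, not any company's real system.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Decision:
    label: str                      # what the model decided
    confidence: float               # the model's own confidence estimate
    explanation: Dict[str, float]   # per-feature contributions (explainability)

def decide(features: Dict[str, float],
           model: Callable[[Dict[str, float]], Decision],
           human_review: Callable[[Decision], Decision],
           confidence_floor: float = 0.9) -> Decision:
    """Route uncertain decisions to a human instead of acting autonomously."""
    decision = model(features)
    if decision.confidence < confidence_floor:
        decision = human_review(decision)  # accountability to people
    return decision

def toy_model(features: Dict[str, float]) -> Decision:
    score = features.get("risk_score", 0.0)
    return Decision(
        label="deny" if score > 0.5 else "approve",
        confidence=abs(score - 0.5) * 2,    # crude confidence proxy
        explanation={"risk_score": score},  # what drove the outcome
    )

def toy_reviewer(decision: Decision) -> Decision:
    # A real reviewer could overturn the decision here.
    print(f"Human review requested: {decision.label} {decision.explanation}")
    return decision

print(decide({"risk_score": 0.55}, toy_model, toy_reviewer))
</syntaxhighlight>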
>> VIVEKA BONDE: Coming back to the question of enforcement and supervision in that perspective: how would you consider that, not only where we are now, but also where we are heading?
>> JAN KLEIJSSEN: Joanna, I think there is a mic over there.
>> JOANNA GOODEY: It is hard to predict where we're heading, but if we look at the European Union at the moment, we have, for example, a number of agencies that are responsible for oversight and enforcement in key areas where harm can be done, and one can also think about emerging technologies and AI in that regard.
For example, we have a chemicals agency that regulates; we have a medicines agency that regulates. So lots of different models exist, but they're sectoral, for different areas. We can also look at ombudspersons. There are different models out there, and I think we can draw from the experience of areas where we have decided that we need regulation because the impact on humanity is so great and the potential for harm, but also the potential for good, is there too.
I think we can really learn from models and our experience in different areas to see what would perhaps fit best if we go forward on AI. But again, it is the point I raised earlier: if you're going to have oversight, if you are talking about hard law, you will of course need not just NGOs overseeing; you would need proper organizations that have the legal weight to do that. And that is a very different scenario from self-regulation, optional ethics and other guidelines; we're talking about different things. I'm not sure whether you are asking about soft law or hard law.
>> VIVEKA BONDE: I was thinking of both: how to strike the balance between soft law and hard law, really. And do you need both?
>> MAX SENGES: The big problem is that there is no such thing as the Internet industry; Amazon is doing something different from a telco, and from Google. It is more that the Internet is eating the world: everything is Internet governance. That is why it is difficult to say there should be one institution acting as an industry regulatory body. We need all the bodies to understand what the Internet is and how it transforms their respective pieces, and we need places like the Internet Governance Forum where all of that comes together, loosely coupled, so we make sure things are not falling through the cracks.
>> JAN KLEIJSSEN: Lise, please.
>> LISE FUHR: We are talking about technology that is developing rapidly, and it is extremely difficult to keep up with what is out there. I agree that the Internet is actually eating the world. But for the industry it is key, and I think this is the same for any user, that we have predictability and certainty. So whatever the end result is for AI, for the Internet of Things, for 5G, whatever, we need to ensure that we know what's there and that we're avoiding fragmentation. Because in Europe we have 28, maybe 27, Member States, and we don't want different rules in every Member State; we want the same rules. That will also be easier for end users to relate to, because they would have one system.
That being said, the world is not going to follow Europe, per se. I know GDPR made a huge effort to harmonize privacy rules all over the world and has set a great example, but I'm not sure we can have the same success with AI. We need to make sure that we balance any framework we create against the ability to actually develop, because European citizens will in any case use services that come from the U.S., from China, from all over the world. We need to make sure that whatever we do, we don't fragment Europe from the rest of the world.
>> JAN KLEIJSSEN: Lise, to disagree with one thing you said: Strasbourg has 47 states, not 27 or 28. Joe?
>> JOE MCNAMEE: I think we should remember what Joanna said in her opening statement about the rule of law and fundamental rights and the guarantees they provide. I think we need to look at examples. In the ethics group document there are numerous references to fundamental rights. Fundamental rights are the responsibility of states; states have an obligation, under the Convention and under the Charter, to ensure these rights. That doesn't mean it always has to be done by hard law, but it is the role of the state. Now, let's take a practical example of our freedom of expression rights in 2019. The European Commission's terrorism regulation, and dare I mention the copyright directive, change the balance of incentives for Internet companies, making it more attractive to remove more content than to leave it online. This is not an obligation being imposed by the European Commission; it is a new framework. So then more is deleted; more legal content will be deleted. The European Commission, in relation to existing content removal, is dreadful at producing data, as the European Parliament resolution of December 2017 described in painful detail. And what happens once Google, a company that has the resources to deal with such a complex new framework, reacts? How does Google react, let alone a small company?
And, well, it is nice to talk about transparency and accountability, but look at this: there's a women's rights organization called Women on Waves. Every ten weeks their YouTube channel is deleted through malicious gaming and then eventually put back online, and then put back online more slowly, and more slowly still.
So where does a citizen, a proud European looking at the Charter of Fundamental Rights, stand? Restrictions of fundamental rights may only be imposed if necessary, proportionate, genuinely achieving objectives of general interest, and provided for by law. Those are wonderful, strong words, and I love the European framework on human rights, but it's falling through the cracks. The Commission is avoiding accountability; Google and everyone else is avoiding accountability. And so we find ourselves without the rights that are hard law in the international legal framework. So perhaps you can tell us about the transparency, accountability and responsiveness of Google when you remove content.
>> MAX SENGES: Thanks for the second question. I did come up with an answer to the first one, about what I would have done differently, what I think we could have done differently. I think we let a really big opportunity pass when we decided to invest in Google+ rather than pursuing OpenSocial, which was an open standard for digital identity, one of the missing pieces of Internet architecture. That goes back to my original statement: the architecture is not bad, but that is a missing piece. We let that go; it was an initiative to provide standards for digital identity.
Now, on your second question: how do we do these things? I think, appropriately, the colleagues from the Commission talk about law and how to make policies, so let me speak a little bit about the tools that companies have and apply, and what we do. Basically, there are commitments that we engage in, most notably, in the context of human rights, the Global Network Initiative, GNI, which is a fairly established tool at this point. It goes slowly, but it does make progress: it asks companies first to do a self-assessment and report on it, and then you get independent reports. That is already helpful to get the internal wheels turning and to have the conversations about what we do and what we should do. The second one is commitments like the principles and the research that I mentioned.
Then, I think, it really comes down to setting up the internal infrastructure to address these points. There is an internal committee for emerging technologies that gets together every quarter; the engineers report about what they want to invent, the proposals get discussed by our ethics council, things like that. Last, but really importantly, there is public scrutiny: participating in the questions that we get posed, which makes us think about how to get this right. Now, when we talk about freedom of expression for user-generated content, that's a really, really difficult question that you cannot get right for everybody. You will always make one party or one group unhappy: one wants more freedom of expression, and the other wants more paternalism and more control over what can generally be seen. I don't think we have found the solution there, but I do think that we are on our way to getting better, to having checks and balances so that, for example, you can contest it when things get flagged. You need different layers of content governance, if you want. The first one is user-generated: user-generated content needs user-generated governance, and a lot of things should be dealt with by the users who use, or are offended by, the content communicating about it. The second layer brings in the company, which tries to resolve the issues. And then, of course, there needs to be a public infrastructure that addresses the cases that can't be resolved, hopefully only a third of them, because we're talking about many, many, many cases.
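''The three layers of content governance sketched above (user-level resolution, company review, and a public infrastructure for contested cases) amount to a simple escalation rule. The following is a minimal, hypothetical sketch of that idea; the names and routing conditions are assumptions, not any platform's actual moderation pipeline.''

<syntaxhighlight lang="python">
# Hypothetical sketch of the layered content-governance flow described
# above: user flagging -> company review -> public appeal.
# Illustrative only; all names are assumptions.
from enum import Enum, auto

class Layer(Enum):
    USER = auto()     # users flag and settle disputes among themselves
    COMPANY = auto()  # the platform reviews what users cannot resolve
    PUBLIC = auto()   # an external body hears contested cases

def route_flagged_item(resolved_by_users: bool, appealed: bool) -> Layer:
    """Escalate a flagged item through the governance layers."""
    if resolved_by_users:
        return Layer.USER
    if not appealed:
        return Layer.COMPANY
    return Layer.PUBLIC  # only unresolved, contested cases escalate

# Example: a flag the users could not settle, later appealed by the uploader.
print(route_flagged_item(resolved_by_users=False, appealed=True))
# -> Layer.PUBLIC
</syntaxhighlight>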
>> JAN KLEIJSSEN: Perhaps before giving the floor to Olivier, may I just raise an issue here, following what Max said. The University of Montreal and McGill are doing a mapping exercise and have so far come up with 60 ethical charters around the world relating to artificial intelligence, including the ones recently issued by the European Commission and the OECD, and 58 others. The question this raises, however, is whether these self-regulatory instruments are sufficient in combination with the existing law. We already heard about GDPR; we have Convention 108, the Cybercrime Convention, and the European Convention on Human Rights. Is that enough? I would be grateful if the panel could move a bit more specifically into the area of AI, because emerging technology is a very wide topic. So if we can, perhaps, for the benefit of the participants, focus a bit.
>> OLIVIER BRINGER: If you will allow me, I will first reply to the previous question. In terms of enforcement and supervision, we have to distinguish between legislation, hard law, and soft approaches like self-regulation. In the case of hard regulation, I think there is a very important role for national regulators, so it is very important that the regulators are well staffed and can do their work properly. Again, here we don't need to reinvent the wheel: we have sectors where regulators have been active for a long time, and effective, as in the case of telecoms and the data protection authorities.
>> VIVEKA BONDE: Are there any specific sectors you want to make a comparison with? Any specific sectors you want to compare AI and advanced data technology to?
>> OLIVIER BRINGER: Well, the obvious one, again, is data protection, because a lot of issues around AI and ethics will be about protecting the personal data of individuals. So I think the data protection authorities have a big say in that.
Something which is also important is the mechanism for cooperation: the fact that all these regulators meet and share best practices on how to implement the rules, often European rules.
Another thing, I think, is skills. We need to make sure, and it is not only a question of all of us acquiring basic skills; the policymakers and the judges also need to acquire skills. They need to acquire skills about artificial intelligence, for example.
And this is happening. We see that in sectors which were removed from digital matters 10 years ago, the policymakers are acquiring the skills in-house: in transport, in public government services. Turning now to self-regulation, I think one important thing to do is really to ensure transparency. It is very good to go for a self-regulatory approach, but everyone needs to be able to monitor it; that is a mechanism that should be enshrined in the self-regulatory approach. This is what happens, for example, if you look at what we're doing now on online disinformation. We have a code of practice, and every month those who have signed the code of practice issue reports on how they have applied it, how many fake accounts they have removed, et cetera. That is an important aspect. In the end, it's also the role of the public authorities to look at the results: from time to time we need to check whether the self-regulatory mechanisms are working, and take responsibility if not. These would be my ideas for enforcement and supervision.
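''The transparency mechanism described above (signatories to a code of practice filing monthly reports that public authorities and others can monitor) can be pictured as a small aggregation over reports. The following is a minimal, hypothetical sketch; the report fields and figures are assumptions, not the actual Code of Practice reporting format.''

<syntaxhighlight lang="python">
# Hypothetical sketch of code-of-practice transparency reporting:
# signatories file monthly reports, and anyone can check who reported
# and what was removed. Field names and numbers are assumptions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MonthlyReport:
    signatory: str
    month: str                  # e.g. "2019-06"
    fake_accounts_removed: int

def monitor(signatories: List[str],
            reports: List[MonthlyReport], month: str) -> Dict:
    """Summarise removals and list signatories that failed to report."""
    filed = {r.signatory: r for r in reports if r.month == month}
    return {
        "total_removed": sum(r.fake_accounts_removed for r in filed.values()),
        "missing_reports": [s for s in signatories if s not in filed],
    }

reports = [MonthlyReport("PlatformA", "2019-06", 120_000)]
print(monitor(["PlatformA", "PlatformB"], reports, "2019-06"))
# -> {'total_removed': 120000, 'missing_reports': ['PlatformB']}
</syntaxhighlight>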
Then I would like to reply to Joe. I think Max partially replied to you already. Yes, in several of our regulations we have required that certain content, for example content that would infringe intellectual property or infringe other rights or laws, be removed. But there is the possibility to contest: someone who has seen content removed in an unjustified manner can contest it in order to have their fundamental rights respected, for example freedom of speech, or because they want their data protected.
So there is a balance there. Probably the details of how this balance is going to be implemented in practice are something that can be worked out via the multistakeholder process, too: by putting together the platforms, those who post the content, those who hold the rights in terms of copyright, the fact-checkers, et cetera, to see how this mechanism of notice and action and counter-notice can work in practice.
And then, on your point about artificial intelligence and the 60 rule books on ethics: we are at the start of an international dialogue. Very clearly, in our strategy at the European Commission, for example, we said this is not something we can do on our own in the European Union. We have to open up: discuss within the G7 and the G20, discuss in the Internet Governance Forum, and engage with the Council of Europe. Part of the reply is that we need to put in place an international dialogue to agree not only on the principles but also on how to apply them.
I did not read many of them, but I read a few, and there is good convergence, I think, in terms of the principles themselves. I think the difficulty will be in their implementation, and the risk of divergence there is real.
>> JAN KLEIJSSEN: Who else would like to comment on this, either from the panel or from the audience? Please.
>> LISE FUHR: On your question of whether the current regulation or framework is enough: I think it is a difficult question. Again, I would like to come back to my earlier statement: I think we are looking at AI that will converge, that is used in many different industries we never thought about before. AI is now used in farming, it is used in the health industry, it is used, of course, in the telco industry, and Google uses it a lot. So we have different uses of AI, and I'm not sure that we can regulate this in a way that will cover all of these kinds of uses. And I am a strong believer that we really need self-regulation here. I think the industries will have big, big problems if we don't do it anyway: no matter what happens with other regulation, everyone needs to be very transparent about what their principles for AI are and how they deal with data, because people are becoming much more aware of the use of data and much more confident that the data is theirs. I think GDPR has had a great influence on this development.
But also, if we are to do any further regulation, we need to really take in all stakeholders and have a multistakeholder approach, because otherwise we lose part of what's important in the regulation of AI.
>> JAN KLEIJSSEN: Thank you.
>> MAX SENGES: To underline that point: take AI in healthcare; there we apply the rules of healthcare, which are strict about how you innovate. Traditionally on the Internet, you have a permissionless model, and that is where you have to bridge. It is not a one-size-fits-all approach; you don't regulate AI as such. I like that fundamental point. What we want to do more is apply real-world metaphors. And to come back to your point on freedom of expression: you have different expectations about freedom of expression depending on whether you are in a place like this, at a party somewhere in a club, or in a legal hearing, right? Those are very different conditions. What we should do is agree on the principles and make sure that human rights are always looked after. That is where your national agencies come into play, to monitor, probably based on information and complaints that civil society and the users bring to them. And then we find different community agreements for how you address it in those different environments.
>> JAN KLEIJSSEN: Joe, please.
>> JOE MCNAMEE: I can't help wondering what any of the 1.1 million patients whose data was handed over by the health trust to Google DeepMind would think about that last statement. I suspect they would be sick.
>> MAX SENGES: Obviously I apologize for that mistreatment. That was a long time ago, and it was before a lot of the work and a lot of the infrastructure that has grown since then was developed. Again, that was a really bad incident.
>> VIVEKA BONDE: So on that note: if we look at AI in respect of human rights, and we look at the creation of AI from the data collection stage down to design, development and deployment, how can we ensure the human rights element with self-regulation? How can we do that?
>> JAN KLEIJSSEN: Joanna, please.
>> JOANNA GOODEY: I think, really, self-regulation only takes you so far. Beyond that, you have to have regulation that is hard law, and it is the duty of states to ensure the law is enforced. We have fundamental human rights that need to be applied. They're very broadly framed, so they can also be applied in the context of emerging technologies and AI, and we can see whether they're fit for purpose given the advancement of technology. One thing that is raised, if you are talking about self-regulation, is the point of redress and access to justice. My fellow speaker, just on my left here, your right, talked about a data breach, and we have many examples of this in all the different industries that are emerging.
Now, for the individuals concerned, it is very hard to get access to justice and redress. So if we're talking about a right, this is a core right, and we have to look at ways in which mass breaches can be addressed. That is only one area, the data breach, in that regard. So we really have to say that the companies can only address this so far: they can try to prevent it and fix it, but when there is a serious breach, for example a data breach with huge, significant implications for the individuals concerned, you really need the weight of the law to make sure that redress can be obtained. That might mean eventually going to court; it might mean penalties, et cetera. Self-regulation can only take you so far. I think some of the examples today point to a continuum of impact of emerging technologies and AI. The example that is often given is, you know, online shopping: what's the problem with that? Versus the impact, say, on your personal health, where a decision is made about whether you get treatment or not. So whilst it might not be one model that fits all, in terms of the weight of oversight, I still think the founding human rights principles apply regardless of the level, and that has to be made very clear. There is not a hierarchy in which you say human rights do or don't apply. That's where you have to go beyond self-regulation; that is really the role of the state and of the international bodies that look at governance.
>> JAN KLEIJSSEN: Thank you, Joanna, and also for having put on the table remedies. I think that is a crucial element. Question from the audience, please.
>> QUESTION: My name is Greta Clause from the German [indiscernible] Foundation. I have more of a statement than a question. You made several references to the GDPR; this is the first time we have a data protection and privacy regulation that is age-differentiating, that specifically addresses children's rights to privacy and data protection. So my question would be, with reference to Lise, who mentioned the multistakeholder approach: what do you think about specifically addressing children's rights in regulation and self-regulation of emerging technologies, and especially AI? Have you any thoughts about that? Is it on your mind? Not only to Lise, but to all of you. Thank you.
>> JAN KLEIJSSEN: Children and perhaps other vulnerable groups, too. Please?
>> LISE FUHR: I think it is a given with emerging technology. With the extended use of technologies among children, we simply need to take their rights into account. And I think it's extremely important, also in the multistakeholder model, that you include the young. Not necessarily the very young children, but you include young people. Because we tend to have a certain age level in a multistakeholder discussion, and we need to ensure the full spectrum of ages is included.
>> JAN KLEIJSSEN: After a hearing yesterday in Strasbourg where companies were represented, the observation was made that those between 10 and 21 years old are okay; it is anyone over 21 that is the problem.
>> MAX SENGES: I really like your question, thanks. What is good about it is that the hardest problems are, of course, those where the challenges become most obvious, but when you solve them, you actually have a blueprint that is good for the broader picture. In the case of children, I think what's really missing is exactly the digital ID, the age verification that needs to come at some point, though I don't know when and how it can be applied. Similarly, health is a wonderful question when it comes to data sharing, because innovations that would mean finding cures for serious diseases and treating us all are not pursued, because the data regulations are so cumbersome. We have a saying: data protection plus data sharing becomes data governance. That is what we should really be thinking about, plus identity.
>> JAN KLEIJSSEN: Max, can I just prod you on this with a supplementary question? Because at Google you also deal with education. Now, digital literacy and AI literacy surely must be a very important element in empowering users to use their rights, to make use of the remedies they have. How do you see this issue? Because this has been pointed out in several of the ethical frameworks. I mentioned the some 60 charters; I haven't read all of them, but I have read a number. There is the issue of the obligation of states and companies to ensure that citizens and users are aware, for instance, when they interact with AI.
>> MAX SENGES: So there are two sides to the question of literacy and the capacity to know. One is really on the developer and the expert, you know, making sure that the technology does what it is supposed to do. For emerging technology, I mentioned the explainability factor here, where you have to understand why an autonomous system is making a decision, in principle and in the particular case.
The other one is really about agency, what we call in German [speaking German],
being literate enough to use a certain technology. To be frank, Google could do more than we already do. We do train, or have trained, more than one hundred million people in digital literacy skills. Those are more job-related skills, to address the challenge of the workforce needing digital skills and to improve there. There is an equally important, I would argue, capacity of really using media, of participating in media. But again, I don't think this is something that any player can do by themselves. It is a combination with the existing educational system, of course, and, you know, it is a funny catch-22: when Google does something, they say oh my God, they are going into the schools. And when we don't do something, it is how can they not do something. Somewhere on a middle path we will find a reasonable solution.
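[Editor's note: to make the explainability point above concrete, here is a minimal sketch of a per-decision explanation for a simple scoring model: it reports each input's contribution to the score (weight times value). All feature names, weights, and values are hypothetical; this is not how any particular production system works.]

  import math

  # Illustrative, hand-set weights for a toy loan-approval model.
  WEIGHTS = {"income": 0.8, "debt": -1.2, "years_employed": 0.5}
  BIAS = -0.3

  def score(features):
      # Linear score passed through a sigmoid: probability of approval.
      z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
      return 1.0 / (1.0 + math.exp(-z))

  def explain(features):
      # Contribution of each feature to the score, largest first.
      contribs = {k: WEIGHTS[k] * v for k, v in features.items()}
      return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

  applicant = {"income": 1.1, "debt": 0.9, "years_employed": 0.2}
  print(f"approval probability: {score(applicant):.2f}")
  for name, contribution in explain(applicant):
      print(f"{name:>15}: {contribution:+.2f}")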
>> JAN KLEIJSSEN: Is anyone from Finland in the audience? Finland is offering a free online course on AI literacy in this respect. Question from the audience.
>> QUESTION: Thank you, Marianne Franklin, Internet Rights and Principles Coalition. I'm speaking in terms of time, of which Joe reminded us all. When do emerging technologies become emerged technologies? Once upon a time, the web was an emerging technology. Artificial intelligence is not a new issue. I would like to hear from each of you, if I may ask, a clearer definition of what you mean by "artificial intelligence." Because we're talking about possible beings that might one day wish to have human rights, but we're also talking about simply automated programs, perhaps. I would like to know specifically what you mean by artificial intelligence. Secondly, what do you mean by emerging technologies here and now in 2019? And thirdly, where would you draw the line in time? When do emerging technologies become already emerged? Because I think we're wandering around history as if everything has stood still, when it tends to go away and come back again. Forgive me for being pedantic, but until we have our concepts clear the discussion will go nowhere. Thank you very much.
>> JAN KLEIJSSEN: Thank you.
>> JOE MCNAMEE: If I can answer the question about self-regulation, I would like to repeat what I said at the start.
Sometimes self-regulation has worked and sometimes it hasn't. The times it worked tend to be similar for various reasons, and the times it hasn't worked tend to be similar for various reasons. Call me an old cynic, but when the company that is self-regulating has a self-interest in doing so, it will self-regulate better, as a general rule. So the first thing that we can do is look at our experience and say, okay, on this end of the spectrum we know from experience that we can rely on self-regulation.
If we take something like network security: we know companies have an interest in having decent security. We know they would prefer not to notify a data breach. So having a basic level of security, which can be exceeded, on a self-regulatory basis makes sense. Allowing self-regulatory data breach notifications? Eh! That's not going to work. So the evidence is there; we just need to use it.
On the education question, I'm personally horrified. I saw my niece's tablet. It scared the hell out of me. Pat Walsh, who was involved in the mobile industry, did an analysis of one of the apps, which is of very little value to the students and of huge long-term value to the companies. It is just not good enough that this is being given to kids without discussion with the kids, without discussion with the adults. Consent, meaningful consent: it doesn't exist. On emerging technologies, I think I avoided using the phrase emerging technologies and the phrase artificial intelligence. I would agree with you. There is a, dare I say, lobbying imperative to restart the clock every year. But there is no practical reason to. Technology isn't going to stop. There are always going to be emerging technologies. If you look at the European Convention on Human Rights, that was written 70 years ago, and it is as valid today as it was then. More valid, probably. We don't need to reinvent it. We need to continue to adapt it and live by our principles, not by headlines.
>> OLIVIER BRINGER: Replying to the question that you asked about emerging technologies, I will let Joanna share how the report defines artificial intelligence. I will not attempt a definition on the spot.
On this issue of emerging, and possibly at some stage emerged, technologies: I mean, we don't know as policymakers. We cannot be sure that the technology will really eat the world, as Lise was saying. So we have to be quite modest. The best we can do is have a discussion with the experts, a discussion with the community, to try to understand where the technology is going and what type of framework is required, to make sure that the benefits accrue to society and the economy. So as policymakers we have to be modest, and we have to be adaptive, precisely to adapt to the emergence of different technologies.
And the second thing I would say is that if we want to understand that well, we have also to use another instrument, which is investment. So policy is not only about regulation and different flavors of regulation. Policy is much more than that.
I'm managing an initiative, Next Generation Internet, where we are investing in the technologies of tomorrow. This is an innovation initiative, an investment initiative. With this type of initiative we can understand the technology: what exactly the state of the art is, where it is going, where we need to invest, where we need more, where we need to intervene. I think this technology angle is very important. If we want, in the end, to have the artificial intelligence that we want, which is in line with European values, if we want a blockchain that is truly decentralized, we have to invest in the technologies here in Europe. We have to have the innovators working on blockchain technology here. Otherwise, the values that come with the technology will be imposed on us from the outside.
I think the competitiveness and investment angle is as important as the regulatory angle.
>> JAN KLEIJSSEN: Lise, and perhaps Joanna after, on the definition of AI.
>> LISE FUHR: I don't know whether we're repeating ourselves, but I don't think it is of no use; it is important to have these discussions now. What we have learned about as AI has mostly been machine learning. It hasn't been real AI; I agree that we call a lot of things AI. What we see now are the first steps toward AI, together with other technologies that are truly converging, like 5G, which is actually a convergence of fixed and mobile networks. We are moving into an era where we don't know where things begin or end, because everything is going to be more and more interdependent. I think these discussions are extremely important, but it is also important that we take them, as we keep saying, on a more horizontal level. What we also see is other industries moving into the technology industry. Everyone is now becoming very dependent on the Internet, on AI, on being connected at any time.
And on Joe's example of some companies misusing data: we will always have bad apples in the basket, and I don't think regulation will solve that. It will of course give a clearer framework, but I'm just not really confident we can solve it all by regulation. What we need to do is raise awareness, raise transparency, and educate the users.
>> JAN KLEIJSSEN: Thank you. Joanna?
>> JOANNA GOODEY: Okay. The interesting discussion is, on the one hand, the onus on users having responsibility. But the companies have responsibility, and the state has responsibility, too. Most of the responsibility can't be put on the user, is another thing I would say.
There are multiple definitions of AI out there. You read the various documents and they all vary slightly. I have one that happens to be in front of me from the European Commission. I will not read it out, as it is very long, and if I did, it would be contested, naturally. But when we talk about the label of AI, it is so diverse, and some of it we are all aware of: it is in our everyday lives; we open up our smartphones and it is being used.
What is very interesting is if you look at areas like bias and discrimination in the application of algorithms, or predictive policing, areas where people have written a lot and are very concerned. You can go back to the '80s: there were issues about bias, predictive policing, data being gathered, but we didn't refer to AI. The same basic principles of discrimination, bias, and unfair treatment were already being raised by people working in this field.
AI is a new tool using more data, but more data doesn't necessarily equate to better quality. I think we can really draw on our lessons, on our established critique, from a human rights perspective, a fundamental rights perspective, and apply it to the new tools we have through AI.
Sometimes when I read what is going on in AI, and especially in the field of robotics, for example, a lot of it is still science fiction. So a lot of AI is here and now, and we're very aware of it; for a lot of it, alarm bells are ringing. We have to be pragmatic and look at real use cases. One point to underline: less theory, more diving down into the use cases and what is actually going on. Because then we can pinpoint real fundamental rights issues, from the conceptualization through to actually getting redress if something goes wrong.
A lot of the discussion is high level at the moment, and I think we would help ourselves by grounding it in a much more transparent understanding of the uses to which AI is being put. That is the stage we will go to next, as well.
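[Editor's note: to ground the bias discussion in a use case, here is a minimal, self-contained sketch of the kind of audit described above: checking whether a decision rule learned from skewed historical records produces different positive-outcome rates across groups, a demographic-parity check. The data and groups are entirely hypothetical.]

  import random

  random.seed(0)

  # Hypothetical historical records: group "b" faced a stricter bar at
  # the same qualification level, a bias baked into the data.
  def historical_label(qualification, group):
      bar = 0.5 if group == "a" else 0.7
      return 1 if qualification > bar else 0

  rows = [(random.random(), random.choice("ab")) for _ in range(10000)]
  labeled = [(q, g, historical_label(q, g)) for q, g in rows]

  # A model that simply reproduces the historical rule inherits the bias.
  # The audit: compare positive-outcome rates per group.
  def positive_rate(group):
      outcomes = [y for q, g, y in labeled if g == group]
      return sum(outcomes) / len(outcomes)

  gap = positive_rate("a") - positive_rate("b")
  print(f"positive rate, group a: {positive_rate('a'):.2f}")  # about 0.50
  print(f"positive rate, group b: {positive_rate('b'):.2f}")  # about 0.30
  print(f"demographic parity gap: {gap:.2f}")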
>> JAN KLEIJSSEN: Thank you very much. Max, please.
>> MAX SENGES: Marianne, thank you very much. I hope to give you a precise answer. I think AI is a blurry term, but I will end up with something that is concrete and at the core of what we talk about with governance and regulation. There is the machine-learning AI that was discussed in a number of contexts, and there is general AI, which doesn't exist yet and is pretty far out, as far as we can tell. There is one element of the AI question that I think is cross-cutting, independent of the sector and of the application: that it takes autonomous decisions. So if we frame the conversation more around what decision is being taken and how we can understand how it is being taken, then I think we're on a pretty good path to thinking about how to govern this space. In the end it is about agency, about whether we're giving up our human agency. You can imagine an AI voting tool, right? There is a German tool on the Internet where you answer a set of questions and it tells you how you should vote. Take that to the next level: you tell the AI everything you know and it votes for you. You see what kind of questions you are getting into here. I think, more generally, there are two kinds of technologies. There are technologies of access: you get access to Wikipedia, to knowledge, to means of augmenting yourself and your understanding of the world. And there are technologies of control. Control is also a really good thing when you want to control, say, a production chain, or to ensure certain aspects of freedom of expression, et cetera. There, control is what you want.
I think it is the balance between the two, and they're actually both there when you think about AI. If it is a technology of access, it is something that augments me, and it should be loyal to me. That is kind of the answer to the worry that there is such an imbalance between the enormous AI systems out there and me myself; who am I to understand all of that? If we get good systems on our side, the individual's side, that augment us with AI tools, that's a good thing.
Let me address the second part, at least, of Marianne's question, about emerging technologies. Emerging technologies are technologies where we don't know the exact outcomes of what happens when we apply them. I would argue there are some applications of AI that are not emerging, where we do know what is coming, and many where we don't. At Google, we have changed how we roll out technology. It used to be a blog post inviting everybody to try out a new beta program called Gmail, which famously stayed in beta for a long time. That is not how we roll out AI-based systems. We take small groups, test them, iterate, add diversity aspects. It is a much more responsible effort, I would say, in this context.
I'm a fan of defining what we call emerging as beta: to have a clear distinction between beta technology, which, for example, can only be out for so long and has certain characteristics, that is more experimental, and mature technology. In beta you can't make money; all technologies where you make money, at least under certain conditions, should not be in beta anymore. You have to be responsible in what you put out as a real product. If people sign up for something and want to be part of the experimental community, that's a different contract between the user and the provider of the service.
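[Editor's note: the voting-advice tool Max refers to is presumably the German Wahl-O-Mat. The matching logic behind such tools amounts to simple agreement scoring, as in this sketch; the party names, positions, and scoring weights here are invented for illustration.]

  # Positions are -1 (disagree), 0 (neutral), +1 (agree) on each thesis.
  PARTIES = {
      "Party A": [+1, -1, 0, +1],
      "Party B": [-1, +1, +1, 0],
      "Party C": [0, +1, -1, -1],
  }

  def match_score(user, party):
      # Two points for exact agreement, one point for near agreement.
      points = 0
      for u, p in zip(user, party):
          if u == p:
              points += 2
          elif abs(u - p) == 1:
              points += 1
      return points / (2 * len(user))

  user_answers = [+1, 0, +1, -1]
  ranking = sorted(PARTIES, key=lambda p: -match_score(user_answers, PARTIES[p]))
  for name in ranking:
      print(f"{name}: {match_score(user_answers, PARTIES[name]):.0%} agreement")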
>> VIVEKA BONDE: Thank you, Max. We are running out of time, unfortunately. A final question for you: is there any democratic oversight when it comes to emerging technologies, and are there aspects of that you would like to share?
>> JOE MCNAMEE: I would really, really like Governments to apply their existing election advertising rules to Internet advertising. In the Irish abortion referendum, the Irish Government didn't bother, didn't consider it relevant, to have the same rules for online advertising as for offline advertising. Google took, I think, a good decision -- forgive me for saying something nice to you after all of this -- to not accept advertising after a particular moment, and Facebook decided to not take non-Irish-sourced money for advertising, which was pathetically easy to circumvent. So the Government didn't regulate at all, Facebook regulated one way, and Google regulated another way. That is not grown-up politics in this era. If we could at least have consistency. I know that is not quite what you were asking; it just annoys me.
>> VIVEKA BONDE: You can add democracy to that.
>> JOE MCNAMEE: Democracy.
>> JAN KLEIJSSEN: Who would like to comment before we move to the final rounds of comments, given the time?
>> MAX SENGES: I think it is important to see that this is a somewhat technocratic conversation, and none of us was voted in, in terms of democratic legitimacy. I'm not sure it would be helpful to have all of the different stakeholder groups vote and, you know, introduce democracy in that way. What does seem like a good proposal is to include the view of the people, an informed view of the people, in our debates, and have that as a base to come back to.
There is a practice, citizen deliberation, that is quite developed. There is a project underway by a French NGO, Missions Publiques, to bring that to Internet governance. I think that is a really good way to bring democratic values and practices into a multistakeholder context.
It also has, if you allow me that last point, the advantage of giving you a delta between what people say off the street, when you just go out and ask everybody, and what they say after they have deliberated, spent time, and weighed the tradeoffs between the different options. Because if that delta is pretty big, you should not go with the general sentiment and just ask people on the street what to do. Privacy is, unfortunately, a really good example. At this point everybody says ah, it is very bad, we need to do something. But that is not really helping; it is not constructive. You will not get an answer that is going to work in a pretty complex environment with legal, technical, business, and different cultural aspects. So I do think that the multistakeholder governance approach is the right one, but it should be informed by democratic practices.
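[Editor's note: the "delta" described here can be made concrete by polling the same panel before and after deliberation and measuring how far the aggregate answer moves. A minimal sketch with invented figures:]

  # Share agreeing with each proposal, before and after deliberation.
  # All figures are invented for illustration.
  questions = {
      "ban the technology outright": (0.72, 0.41),
      "require impact assessments": (0.55, 0.63),
  }

  for question, (before, after) in questions.items():
      delta = after - before
      verdict = ("large shift: raw polling is a poor guide"
                 if abs(delta) > 0.15 else "stable")
      print(f"{question}: before {before:.0%}, after {after:.0%}, "
            f"delta {delta:+.0%} ({verdict})")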
>> JAN KLEIJSSEN: Olivier?
>> OLIVIER BRINGER: On this I would say the democratic oversight is enshrined in our institutional setup. When we devise policies and regulation, we do it together at the European level, with the European Parliament and the member states, who represent the people. There should be democratic oversight, of course, and it already exists in the institutional setup.
And the Parliament, yes: for example, the Parliament issued a report on artificial intelligence a year or two ago. So it looks into these issues, and I'm pretty sure that the next Parliament, which is now being installed, will look at these issues too; it will be on their agenda.
And I fully agree with Max. If you want to discuss these issues, it is a bit like discussing Europe -- the European Union, I'm sorry. You need to explain to people how it works. I don't know if you have seen it in your newspaper: there were a lot of explanations before the European elections of how Europe works. I think we have to do the same exercise with complex technologies. We have to explain all the aspects, from the technological to the legal and ethical. Having an informed debate is really important.
>> JAN KLEIJSSEN: Thank you very much. Please, question from the audience.
>> QUESTION: Hi. Thank you very much for this discussion. It has been a really exciting panel. Probably 40 or 45 minutes ago, Lise said something like she's not confident that regulating artificial intelligence can be as successful as the GDPR, for example.
I think this is the key point. Regulating ethical values or behaviors is complicated. Ethical values are not universal: what is ethical in one part of the world is not in a different part. So one question is how effective the deliberative approach could be.
And even if you can develop a solid approach, there is a challenge, because some values are not only European. If you speak about Europe, it is something bigger than the European Union. It is a challenge.
So this is one comment. The other comment is that yes, there are a lot of decisions being taken based on artificial intelligence tools: for recruiting people, for shortlisting applicants for many positions, and for many other things. And the problem here is not only the ethical behavior; it is also the incompetence and the biases. Regulating biases is complicated. I will not take much of your time, but there are very interesting examples of biases. If you search on a search engine for something like babies, the result you get is probably not representative of all the diversity of the world. Or ask a simple question: who won the last World Cup in soccer? What everybody will think of is the men's championship. But there are many other categories competing: you are not thinking about the women's championship, or the championship for people with disabilities, or many others. I challenge you, try that. It is a very simple question.
And this is something that we are seeing every day: decisions being taken based on artificial intelligence. This is a big challenge. Thank you.
>> JAN KLEIJSSEN: Thank you. Thank you very much. We must now, I think, come to a close. I will have a comment by Joanna and then I will ask the Rapporteur to come here, after which I will give one sentence to everyone. Please, Joanna.
>> JOANNA GOODEY: I think the last speaker made an important point. If one looks at AI very neutrally, though, one can also say, to give an example, that when done well it has the possibility to reduce bias and discrimination, because we humans, in our own decisions about who to recruit, have bias and discrimination, as we all know. But of course, the quality of any algorithm is only as good as the data put into it, or the design of the algorithm. So from how engineers and technicians are trained through all the life cycle stages, we should ask: are we thinking of potential bias and discrimination? And let us not forget that even without AI there is bias and discrimination in everyday life. The police can behave in a discriminatory way without using PredPol or some other tool out there. We know that exists. I think we can be optimistic that AI, for example, can do good in the right framing. A lot of this discussion has been about the negative side. There is also a very positive side, but for that you need the oversight, the checks and balances. We only mentioned children; I'm glad you mentioned persons with disabilities, and there are the elderly and many, many other groups. We can all experience bias and discrimination. So one has to be cautious when we say AI is discriminatory. A lot of examples have emerged showing that yes, it can be, but everyday human practice without the use of AI is also biased and discriminatory.
We need to advance AI so that it recognizes bias and discrimination. That is the one area where a lot is written on rights and AI, rights and discrimination. We need to move forward with what we know to improve the AI tools out there.
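[Editor's note: the point that AI "done well" can reduce bias has a standard technical counterpart. Once a disparity has been measured, a post-processing step can correct it, for example by choosing group-specific decision thresholds that equalize acceptance rates. A sketch under hypothetical data:]

  import random

  random.seed(1)

  # Hypothetical scores: group "b" scores are depressed by biased
  # historical features, not by real differences in qualification.
  scores = {"a": [random.gauss(0.6, 0.15) for _ in range(5000)],
            "b": [random.gauss(0.5, 0.15) for _ in range(5000)]}

  def acceptance_rate(values, threshold):
      return sum(v >= threshold for v in values) / len(values)

  def threshold_for_rate(values, target_rate):
      # The score at the cut-off that accepts the top target_rate share.
      return sorted(values, reverse=True)[int(target_rate * len(values)) - 1]

  single = 0.6  # one threshold applied to everyone
  target = acceptance_rate(scores["a"], single)
  per_group = {g: threshold_for_rate(v, target) for g, v in scores.items()}

  for g in scores:
      print(f"group {g}: rate with single threshold "
            f"{acceptance_rate(scores[g], single):.2f}, "
            f"equalizing threshold {per_group[g]:.2f}")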
>> JAN KLEIJSSEN: Thank you. Before we're kicked out of the building, I will give the floor to our reporter, Clément.
>> REPORTER: I have the difficult task of summarizing the key messages from this session. I will read them to you and see if the panelists and audience agree to adopt them.
I have articulated the messages around three topics: regulation and frameworks, enforcement mechanisms, and literacy. The first key message: all stakeholders, including the private sector, agree that a form of regulation concerning the use of digital technology is needed to protect individuals, build public trust, and advance social and economic development. But divides remain on the scope and bindingness of such regulation, though there is a shared understanding that it should ensure predictability and legal certainty. There is a consensus on the need to initiate open-ended and inclusive debates to provide guidance and introduce frameworks to address the significant impact of new technologies on the exercise of individuals' human rights, for example in product development.
When it comes to enforcement mechanisms: states should take appropriate measures to ensure effective legal guarantees and that sufficient resources are available to enforce the human rights of individuals, in particular of marginalized groups like children.
There is a need for enforcement mechanisms within national law to ensure that responsibilities for the adverse risks and harms to individuals' rights are rightly allocated, for instance using impact assessments, as we have seen previously.
And when it comes to literacy: due to the power asymmetry between those who develop the technologies and those who are subject to them, there is a need to empower users by promoting digital literacy skills and to enhance public awareness about the interference of emerging technologies such as AI with human rights.
That's about it. I would like to know if anyone in the room disagrees. Raise your hands.
>> JAN KLEIJSSEN: I think you are putting up a big challenge here. Thank you very much, Clément.
>> MAX SENGES: Hang on, I have one comment. I think you did well overall, but there is one aspect that I disagree with, and that we disagreed about on the panel as well. When you talk about enforcement, you stress national law. I thought you were going in the direction of national agencies for supervision and enforcement, not national law. The laws, ideally, have to be transnational, international, or at least regional.
>> JAN KLEIJSSEN: Absolutely. I must end it here; I have been given a clear signal that this room will be closed in a moment. So I will give one sentence to each of you to close. 20 seconds. Please.
>> OLIVIER BRINGER: Reacting to the last intervention, I would say that we should build on our European values to create the right framework for trustworthy AI. And we should do that together, indeed at the European level when it comes to the European Union.
>> LISE FUHR: I would like to give three takeaways. Europe has a great advantage: we're very advanced on GDPR and privacy. In relation to both AI and other emerging technologies, we need a multistakeholder model to look into AI and its impacts. And last, we should avoid any fragmentation of the European market.
>> JOANNA GOODEY: Okay, I think the basic message is that we need the same rights online as we have offline, and to remember that we have robust human rights and fundamental rights legislation that already exists and can be examined to see how fit for purpose it is as we go forward with emerging technologies.
>> JOE MCNAMEE: I would like to sign up wholeheartedly to what Joanna just said. We need to remember that we've got cornerstones in place. We have got cornerstones in human rights law, we have got cornerstones in experience, and if we build on those, we're going in the right direction.
If we question everything, every time somebody calls something a new or emerging technology or gives it a shiny new name, we will never get to our destination.
>> MAX SENGES: Thanks to the organizers; I thought it was a good conversation. I want to come back to the first point: the Internet is one of the greatest inventions ever made. We are talking it into a bad place right now. There are big challenges, but it is an enormous opportunity, and it has brought us a great deal of progress.
Thank you very much for the last question; it gives me a great opportunity. Bias in the dataset is crucial to address. There is a big question: do you want to represent reality, or represent an ideal state? Especially in the second case, I think we should be clear about what our tools are doing. I think a search engine should, most of the time, represent reality and not try to be a normative instrument. A place where we can really negotiate and deliberate and find out what the truth is, is a place like Wikipedia, where you have the transparency and an infrastructure for that. So Wikidata would be a place to think about that. Thanks again, great discussion. Very nice.
>> JAN KLEIJSSEN: Thank you, thank you very much. Thank you to the panelists, my co-moderator, and the audience. We're still confused, but I suppose we're confused at a higher level. A round of applause for the panelists. Thank you all very much.
[Applause]
And see you all tomorrow. Thank you.
>> HOST: Thank you very much. Some very quick housekeeping announcements: the buses are already leaving for the beach, and the last bus will go at 7:30, so keep that in mind. Tomorrow morning we start at 9:00, so don't drink too much.
We have a small present for you. Don't leave yet.
[Session concluded]
''This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.''


[[Category:2019]][[Category:Sessions 2019]][[Category:Sessions]][[Category:Human rights 2019]][[Category:Innovation and economic issues 2019]]

Get involved!

You are invited to become a member of the session Org Team! By joining an Org Team you agree that your name and affiliation will be published on the respective wiki page of the session for transparency reasons. Please subscribe to the session mailing list and answer the email that will be sent to you requesting your confirmation of subscription.

Session teaser

Advanced digital technologies can contribute in a meaningful way to improving quality of life and accelerating economic development and scientific progress. But they can also pose a significant threat to the exercise of human rights and to democratic processes. Come to this session and share your views on how to ensure that emerging technologies’ development is in line with human rights!

Session description

Advanced digital technologies can contribute in a meaningful way to improving quality of life, economic development and scientific progress. But they can also pose a significant threat to the exercise of human rights and to democratic processes.

There is a gradual recognition of the fact that these risks must be identified, mitigated, and remedied, and that a broad debate on the design, development and deployment of advanced digital technologies should be launched.

States’ policies should guarantee human rights protection and contribute to building citizens’ informed trust in advanced digital technologies, such as artificial intelligence systems, by ensuring appropriate legal and institutional human rights safeguards, as well as public scrutiny, based on democratic values, over the development, deployment and implementation of these technologies.

To this end, what role should state regulation and state institutions play? How can we avoid the risk of overregulation that could restrict freedom of speech or stifle innovation? How can we ensure that the design, development, and deployment of advanced digital technologies respect human rights?

Is this a matter for binding national regulation? Could the development of self-regulatory procedures, ethical design solutions, transparency and accountability models by the tech community and tech companies constitute an appropriate response to these issues?

This plenary debate will discuss how to enhance transparency and accountability, and how to ensure effective supervisory mechanisms and public oversight structures, meeting human rights standards, over the use of advanced digital technologies. How do we ensure that the most vulnerable groups of society are safe, especially those who are not able to protect themselves, like children and people with low or no digital literacy? The respective roles and responsibilities of relevant stakeholders will also be debated.

The development and deployment of advanced digital technologies are difficult to regulate and monitor in terms of transparency, explainability, accountability and effectiveness, for a variety of reasons: the need to protect trade secrets, the high level of technical knowledge required to understand how systems produce relevant outputs, and the complex and interrelated chains of inputs leading to the creation of an advanced digital technology. Still, recent debates involving the tech community, civil society and academia show that solutions to these obstacles are possible. Other sectors meeting similar challenges – such as the aviation sector – are subject to effective regulation and supervision, supported by industry efforts to develop high standards of safety and accountability.

These issues need to be addressed urgently. Policy development should include all stakeholders and requires open public debate. The session aims to contribute to this goal.

Format

An informal, co-moderated debate, initiated by the interventions of key participants, will involve everyone present.

Further reading

People

Please provide name and institution for all people you list here.

Focal Point

  • Małgorzata Pek, Council of Europe

Organising Team (Org Team)

  • Zoey Barthelemy
  • Viveka Bonde, LightNet Foundation
  • Marit Brademann
  • Lucien Castex, Secretary General, Internet Society France / Research Fellow, Université Sorbonne Nouvelle
  • Jutta Croll, Stiftung Digitale Chancen
  • Amali De Silva-Mitchell
  • Fredrik Dieterle, LightNet Foundation
  • Ana Jorge, Faculty of Human Sciences, Universidade Católica Portuguesa
  • Ansgar Koene, Horizon Digital Economy Research Institute, University of Nottingham/ Working group chair for IEEE Standard on Algorithm Bias Considerations
  • Kristina Olausson, ETNO - European Telecommunications Network Operators' Association
  • Adam Peake, ICANN
  • Michael Raeder, Stiftung Digitale Chancen
  • Roel Raterink, Team EU – international affairs, City of Amsterdam
  • David Reichel
  • Veronica Ștefan, Digital Citizens Romania, Romanian Digital Think-Tank
  • Ben Wallis, Microsoft

Key Participants

  • Olivier Bringer, Head of Next Generation Internet Unit at the European Commission.
    Olivier Bringer is Head of Unit at the European Commission, in charge of the Next Generation Internet (NGI) initiative and internet governance policy.
    Before working on NGI, Olivier held various positions in the European Commission, dealing with policy development and implementation in the area of digital and competition law.
    Prior to joining the Commission, Olivier worked as a consultant in the telecom industry. He is an engineer by training.
  • Lise Fuhr, Director General of ETNO
    Lise is ETNO’s Director General since January 2016. At ETNO, she leads and oversees all the activities and she is the main external representative of the Association. On behalf of the Association, she is also a Board and an Administrative Committee member in ECSO, the European Cybersecurity Organisation. Lise has also been appointed to the Internet Society Public Interest Registry Board of Directors for a three year term as of July 2016.
    Prior to joining ETNO, she was Chief Operating Officer of DK Hostmaster and DIFO, the company managing the .dk domain name. In the period between September 2014 and December 2015 she also chaired the Cross Community Working Group for the IANA Stewardship Transition, building on her strong network within the internet community. Lise has 10+ years of experience in the telecoms industry. She started her career at the Danish Ministry of Science, Technology & Innovation (1996-2000) where she wrote and implemented regulation for the telecommunication markets. After that, she worked for the telecoms operator Telia Networks (2000-2009), where she led various teams dealing with issues as diverse as interconnection agreements, mobile services and industry cooperation.
  • Joanna Goodey, Head of Research & Data Unit, European Union Agency for Fundamental Rights (FRA)
    Joanna Goodey (PhD) is Head of the Research & Data Unit at the European Union Agency for Fundamental Rights (FRA). Prior to joining the FRA, Joanna was a research fellow for two years at the UN Office on Drugs and Crime, and was also a consultant to the International Narcotics Control Board. In the 1990s she held lectureships in criminology and criminal justice at the Universities of Sheffield and Leeds in the UK, and was also a regular study fellow at the Max Planck Institute for Foreign and International Criminal Law in Freiburg. She has published numerous academic journal articles and book chapters on subjects ranging from trafficking in human beings through to hate crime, and is the author of the book ‘Victims and Victimology: Research, Policy and Practice’.
  • Joe McNamee, independent expert, member of the Council of Europe Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence
    Joe McNamee has worked on internet regulation since 1998. Up to the end of 2018, he was Executive Director of European Digital Rights, the association of organisations defending online human rights in Europe. He participated in the Council of Europe expert committee on the roles and responsibilities of internet intermediaries and is currently a member of the Council of Europe Committee of Experts on Human Rights Dimensions of automated data processing and different forms of artificial intelligence. He holds Masters Degrees in European Politics and International Law.
  • Max Senges, Program Manager for Google Research & Education
    Max Senges (1978) works as Lead for Research Partnerships and Internet Governance for Google in Berlin. While in California (2014-2018) Max worked as Program Manager for Google Research, where he built and led Google‘s IoT R&D Expedition (in partnership with e.g. Carnegie Mellon, Cornell Tech and Stanford). Later he became the Head of Google‘s Hardware User Research team. For more than 8 years Max has worked for and collaborated with Vint Cerf on Internet governance, interoperability and open standards. He is passionate about thinking and working at the crossroads between academia and the private sector, internet politics, innovation, culture and philosophy of technology. In the last ten years he has worked with academic, governmental and private organizations, centering on knowledge ecosystems, e-learning and Internet governance. Max holds a PhD and a Master’s Degree in the Information and Knowledge Society Program from the Universitat Oberta de Catalunya (UOC) in Barcelona as well as a Masters in Business Information Systems from the University of Applied Sciences Wildau (Berlin).

Moderator

  • Viveka Bonde, LightNet Foundation
    Swedish lawyer specializing in the legal fields of Life Science, Data Governance and Data Protection Law. Co-founder of the not-for-profit organization LightNet Foundation which promotes ethical and sustainable AI and which has created a due diligence software model for the benefit of all companies and organisations that develop or use AI.
    Viveka also participates as an expert in the ISO Work Group drafting a Technical Report: ‘Information technology – Artificial Intelligence (AI) – Overview of ethical and societal concerns’ (SC 42/WG 3).
  • Jan Kleijssen, Director of the Information Society and Action against Crime Directorate, Council of Europe
    Jan Kleijssen (Dutch) joined the Council of Europe in 1983 as a Lawyer with the European Commission of Human Rights. He was Secretary to the Parliamentary Assembly’s Political Affairs Committee from 1990 to 1999 and then served as Director of the Secretary General's Private Office and afterwards as Director and Special Advisor to the President of the Parliamentary Assembly.
    Jan is currently the Director of Information Society - Action against Crime, Directorate General Human Rights and Rule of Law, of the Council of Europe.
    Jan is the author of several publications in the field of human rights and international relations.

Remote Moderator

The Remote Moderator is in charge of facilitating participation via digital channels such as WebEx and social media (Twitter, Facebook). Remote Moderators monitor and moderate the social media channels and the participants via WebEx and forward questions to the session moderator. Please contact the EuroDIG secretariat if you need help to find a Remote Moderator.

Reporter

  • Clement Perarnaud, Universitat Pompeu Fabra Barcelona, Geneva Internet Platform

The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

  • All stakeholders, including the private sector, agree that a form of regulation concerning the use of digital technologies is needed to protect individuals, build public trust, and advance social and economic development. However, divides remain on the scope and bindingness of such regulation, even if predictability and legal certainty appear essential. There is nevertheless a broad consensus on the need to initiate open-ended and inclusive debates to provide guidance and introduce new frameworks, for example at the stage of product development, to address the significant impact of new technologies on individuals and the exercise of their human rights.
  • States should take appropriate measures to ensure effective legal guarantees and that sufficient resources are available to enforce the human rights of individuals, and in particular, those of marginalised groups. There is a need for enforcement mechanisms to ensure that responsibilities for the risks and harms to individual rights are rightly allocated.
  • Due to the power asymmetry between those who develop the technologies and those who are subject to them, there is a need to empower users by promoting digital literacy skills and to enhance public awareness about the interference of emerging technologies with human rights.

Video record

https://youtu.be/NbRmqDesksU

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-800-825-5234, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> HOST: Now we're ready for the last session of today, the last plenary. I will invite one of the two moderators of the session, Jan Kleijssen from the Council of Europe, to kick it off. I give you the mic.

>> JAN KLEIJSSEN: Thank you. Good afternoon, ladies and gentlemen. Thank you for having stayed at this late hour for the final session on emerging technology and human rights. I'm very happy we have a cozy and intimate room to discuss as well as a panel of high-level experts.

Just a few words, perhaps, by way of introduction. I represent the Council of Europe, an organization that I think many of you are familiar with, and which has in fact been dealing with the issue of emerging technologies and human rights for a while.

Many of you know the Budapest Convention on cybercrime, Convention 108 on data protection, and also, very relevant here, the Oviedo Convention on human rights and biomedicine, which banned a technology in development, namely the cloning of human beings.

Three weeks ago, we had a birthday party: we celebrated 70 years of the Council of Europe, and the foreign ministers gave us a birthday present in the form of an instruction to start working on a binding legal instrument on the design, application, and use of artificial intelligence, and to do so in a multistakeholder context. This is obviously a very new development, and the decision is therefore not yet included in the notes that we sent out for this session. But I hope the questions that were put can, of course, also be examined in this light.

It is now my great pleasure, as I understand is the protocol for the session, to call up to the podium, first of all, my co-moderator, Ms. Viveka Bonde, one of the cofounders of the LightNet Foundation from Sweden, who will co-moderate, and then the panel of experts.

First of all, Olivier Bringer from the European Commission, who is the Head of the Next Generation Internet Unit; if you would be so kind as to join us. Lise Fuhr, the Director General of ETNO. Joanna Goodey, Head of Research at the Fundamental Rights Agency. Joe McNamee, independent expert and member of the Council of Europe committee of experts on automated data processing and artificial intelligence. And last but not least, Max Senges from Google, Program Manager for Research & Education.

>> VIVEKA BONDE: Right. We will now discuss emerging technologies and human rights. First, we would like to pose some questions.

>> JAN KLEIJSSEN: I forgot to mention the reporter, I apologize. Clément, from Barcelona, with the Geneva Internet Platform, is the reporter for this session.

>> VIVEKA BONDE: Right. So we start with the first question; I will sit down here. I will ask all of you panelists: we now have regulatory frameworks. What about any shortcomings in the regulatory frameworks that exist? What are your views and initial thoughts?

>> MAX SENGES: Thank you, everybody, for setting up this session. I'd like to point out, first of all, that we have to realize that the Internet and the online space it created is an achievement of the last 40, 50 years. It hasn't broken down yet; we have never had to reboot it. So it is actually a surprisingly resilient architecture, and many of the principles that it was built on were pretty smart and good principles: decentralization, open standards, things like that. Especially over the last 10, maybe 15 years, really thorny issues came up. We will never find a solution that is ideal and just, a clear-cut scientific answer, to freedom of expression or privacy. These are matters where you have to set up the right checks and balances and institutions to negotiate them. Overall, I would say we are on our way. Certainly we would like to see institutions like the IGF play a better role in coupling the different efforts and institutions that make up Internet governance and that shape the way we handle these problems. But I'm not pessimistic here; I think we should complain less and cooperate more on solutions. I would point to the Internet & Jurisdiction network that de La Chapelle and Fehlinger have set up, which really tries to bridge the debate and deliberation around problems and bring them to a place where we find solutions that actually take the perspectives and interests of all stakeholders together. Because what is very difficult is to come up with a transnational, global network where you need to police and really try to understand how to get hold of each small or big player, because there are so many ways to avoid and, frankly, not follow an agreement, if it isn't an agreement like the way Internet traffic is exchanged, for example.

I was really fascinated to learn from Andrew Sullivan, the ISOC CEO: he said when it comes to Internet traffic, there is only one rule. Your network, your rules. The way the traffic is exchanged is governed by reputation, by the interaction and relationships that the exchanging parties develop with each other, and that works really well. I don't think that is always the right solution, but I think it is an interesting way to start to frame the problem and think about it.

>> JAN KLEIJSSEN: Please, Joe.

>> JOE MCNAMEE: Thank you. I think we should use the framework of experience. We have a lot of experience with technologies that are still called new and have been around for a long time. And in the committee that I'm involved in at the Council of Europe, even mapping the issues that need to be addressed on algorithmic decision-making (states regulating us, states regulating themselves, self-regulation by companies of their own processes, and privatized regulation by companies of their users) shows it is a very complex framework. But we're not starting from scratch; we have a tendency to always think that we're starting from scratch. And a wise man, I don't know who it was anymore, said that if you want to really seem to be up to date with the news, read yesterday's newspaper, because everyone has forgotten what happened yesterday. So I'm going to be really, really up to date and tell you that on the 12th of November, 1996, the "New York Times" had a front-page headline: Europe betting on self-regulation to control the Internet. I have seen no document from anybody identifying the successes and failures of that approach from 1996 until 2019. There are things that have worked, and there are things that have not worked.

Law enforcement by private companies under the guise of self-regulation is a major issue for decision-making, on issues like content moderation. In 2015, the Council of Europe produced, or commissioned, a study on Internet blocking, which showed that rule-of-law safeguards were not being applied.

In 2017, the European Parliament adopted a resolution castigating the European Commission for failing to produce data on the same subject. We're still talking about self-regulation as one thing, when it covers fighting spam, fighting illegal content, and network security. So I think that fundamentally the thing we miss is the ability to say, okay, can we not reinvent the wheel? What have we learned from privacy impact assessments that could inform impact assessments of algorithmic decision-making, for example?

Do we need to build from zero every time? If we can learn from success and avoid making the same mistakes again, I think we're on the right track.

>> JAN KLEIJSSEN: Thank you very much. What is the view of FRA on this, Joanna?

>> JOANNA GOODEY: I can only speak from the viewpoint of the European Union. I mean, when we're asked this question, what are the shortcomings, we think of the regulatory framework.

We just heard from Jan Kleijssen about the Council of Europe and its own initiative looking towards unifying law. A lot of the discussion at the moment is between whether you have a unifying single regulation, for example, or whether you go for a sectoral approach in how you regulate emerging technologies, AI, the Internet, you name it. I think we really have to understand what we currently have in place. I know there are exercises in mapping the extent of existing legislation. In the case of the EU, we have very diverse legislation predating the AI discussions, on things like product liability. We have legislation from the '80s, when a discussion about AI was really off the map. We have legislation on product safety, consumer protection, you name it. If we take the context of the European Union, the most up-to-date piece of legislation relevant to AI is the General Data Protection Regulation, because within it you do have recognition of aspects of AI, the use and misuse of algorithms. So a lot of our legislation was developed at a time that is pre-emerging technologies, pre-AI, and we're recognizing that. The General Data Protection Regulation, which deals with only one area of law, a specific right, is very comprehensive; the alarm bells from the industry were huge when that piece of legislation was on the table. Now people are calming down a little bit, seeing the added value of this legislation, which only deals with a very specific, narrow area of law. And it took many, many years to negotiate. I have a colleague from the Commission here who knows about that, but it is a comprehensive piece of legislation. So if we're asked what the current shortcomings are, I think we have to really map what we do have, with its limitations given when it was drafted: is it fit for purpose for emerging technologies and AI? And then, beyond existing legislation in different fields like product liability, safety and consumer protection, we have human rights and fundamental rights law. We have the European Convention on Human Rights, UN law, and the Charter of Fundamental Rights of the European Union. The Charter is a modern piece of law in that regard because it separates data protection and privacy, so you now have a modern, comprehensive instrument that we can look at. The much broader discussion about what kind of regulation we need in future is one where we need to know what we do have and whether it is fit for purpose, and then move on from that to what is perhaps needed. However, of course, there is still great resistance from the industry to any thought of regulation. Hence the enthusiasm for ethics, which is nonbinding, of course, and for soft-law options. I can perhaps say more on that later.

>> JAN KLEIJSSEN: Thank you very much. Now the floor to Lise. You represent a body with very diverse stakeholders.

>> LISE FUHR: Thank you, and thank you for inviting ETNO here. ETNO is the trade association for the European telecoms operators; my members represent 70% of the investment in infrastructure in Europe. So I will speak from a telecom perspective. If I look at technology right now, what our members are doing is 5G, the Internet of Things, AI, and cloud services. And with that also come all of the things we're talking about today: emerging technologies and human rights. And things are changing, because what we're seeing now is that all the technologies are more platform-based and industries are becoming more interrelated. I would like to take a step back and ask why this technology is interesting for European citizens. It is because it is going to create a lot of changes in our lives. Part of it will make our lives easier: we will have online services that help us shop, do e-government, have remote surgery, et cetera. It will also make our lives safer: we will have smart cities, automated cars, et cetera, that will be safer for pedestrians, for drivers, and also for cyclists. So why are emerging technologies a human rights problem? Because technology will make us all traceable at any time. You will be connected, and the information you give away all the time will create new issues we never saw 10 years ago. So for us, the human rights shortcomings are also about privacy; it is of course about freedom of speech, but security is also essential when you talk about human rights. And we believe, a bit like Max is saying, that we need to work together on this. We need multistakeholder models to discuss and find out where the shortcomings are. I also agree with Joanna that we have a lot of regulation already. We have the GDPR, which regulates a lot of services, and that's horizontal regulation. But as telcos we also have the ePrivacy rules, and ePrivacy is very sector-specific. For us, that is actually a shortcoming, because we think all regulation needs to be horizontal, since services are converging. If we want to ensure human rights to the broadest extent, we need a more horizontal view of how to deal with this. Not only AI but also IoT will pose problems, and of course bring a lot of good solutions, for all of our citizens. Right now we have a hundred million connected devices, if we take Western and Eastern Europe, and we think we will have half a billion in 2023. So we will all be connected, we will all be dependent on technology, and that's why anything we do needs to have the human rights aspect built into it. If we don't, we will lose trust in technology, and that would actually mean a step back.

But while we are talking about this in Europe, we have one kind of regulation; if we look at the US or at China, we have different regulations. And we need a balance where we look at how to innovate in a respectful way, while also allowing data to be used for AI, for example.

So we think the GDPR is a good start. We think we should look into how we can set out some principles for AI, and the ETNO members are looking into this. We think the high-level expert group report that came from the Commission gives a good basis for this, with its principles of doing good and doing no harm, and of preserving human autonomy in everything you do around AI. So I think we have a good basis, and I think we need to build further on it. And I know we're going to talk later about how we actually implement any solutions.

>> OLIVIER BRINGER: From the European Commission's perspective, there are two shortcomings in the regulatory environment. One is time. The time of regulation is not the same as the time of technology; that we know very well. Up to a point, we can try to accelerate the time of regulation, and I think we have managed to do that in the current mandate: with the digital single market strategy, we have managed to adopt 30 legislative proposals in the digital field in four years. But we can only accelerate up to a certain point, I would say, because when you go for legislation, you need time to prepare it well, you need proper discussion with the multistakeholder community, and you need good discussion with the co-legislators. There is an incompressible amount of time needed to strike the right balance in regulation. And the technology is not waiting; the technology is going very, very fast. So the response, I guess, is in good part anticipation. We need to anticipate the issues even before they occur. That does not mean we should regulate right away, but we should think ahead and prepare the ground. We try to do this — the Commissioner explained it this morning — in the field of artificial intelligence, for example, and blockchain. I think we are not late in those areas; we are already looking at the issues: what are the ethical issues raised by artificial intelligence, and in the field of blockchain, how can we maximize the opportunities of the technology. And we will see what the next steps are.

Then, another challenge is that the Internet is so broad. The Internet is Google; the Internet is one person who has a blog; it is someone selling eggs or a pair of shoes online. It is very broad. It is much more difficult to intervene and to devise policies in such a large environment than it was when we started to make digital policies and regulations. Lise knows it very well: 20 years ago, we were mainly regulating telecom incumbents. In a way, that was simple — you identify someone with market power and you impose access remedies. When you regulate the Internet of today, or when you make digital public policies, you have a huge variety of stakeholders to take into account. One key factor, for example, is size. We issue regulations which apply to very different types of companies — very big ones and very small ones. So we need to tailor regulation so that everything is proportionate, depending on the size and the turnover of the company. This is something we need to take into account, because we have very different players in front of us. Last point: I would like to agree with what Joe said. We should avoid reinventing the wheel.

We are starting to have a good corpus of legislation and public policy in the digital field — the GDPR is one example; we now have legislation on cybersecurity and we have started work on online disinformation. We should indeed reuse the good concepts from that legislation in the next measures we take. I like your example of privacy impact assessment, for instance.

>> JAN KLEIJSSEN: Thank you very much. Before asking Viveka to carry on, are there any questions at this point? This is not just a one-way street — we want to keep you awake and entertained. Is anyone --

>> MAX SENGES: While someone might be thinking about a question, I can make the panel a bit more interesting by strongly disagreeing with what Joanna said about the current focus on ethics being pro forma and about companies not wanting to participate in finding good governance and good regulation.

I think on the first point, it is absolutely adequate to start with the ethics part. That is how you understand what topics you should actually look into. So it is principles, it is ethics. And importantly, I think there are actually three kinds of ethics. There is virtue ethics: what does an individual player do? There is teleological ethics: where do we want to go? That is a very important conversation we're having — what kind of world do we want to live in, for example, in the case of AI. AI is a good example of where we can get it right: we're catching it early, and we already have a fertile community bringing together many different kinds of expertise.

So: virtue ethics, teleological ethics, and deontological ethics. Deontological norms usually, slowly but surely, become law, and law based in deontological perspectives like human rights is another important part — but only one piece of the puzzle. Now I see someone with a question.

>> JAN KLEIJSSEN: Please.

>> QUESTION: Hi there. My name is Collin Perry, I'm with Article 19. I find it motivating that several of you have mentioned impact assessments. I have worked with impact assessment practitioners to develop models and ideas to be applied to Internet providers specifically. One of the challenges we have had in transposing the best practices that exist and have been applied in sectors like mining, textiles, and food and beverage — supply-chain-focused impact assessment methodologies — is that, in order to be compliant with the UN Guiding Principles on Business and Human Rights, they require engagement with the affected rights holders. When you talk about Internet infrastructure providers — everything from registries, registrars, cables and ISPs to platforms — it can be very difficult to lasso this constituency of impacted rights holders, and it is even more difficult to actively engage with them on the potential impacts. So I think it is really important to underscore, from a practitioner's side, that this is somewhere the Internet community falls short. And I think it is a bit premature to refer to the work that is being done as impact assessment, in the interest of not diluting the practice as it exists and has been formalized in other sectors. I wanted to pose the question, specifically to the people who mentioned this as a potential solution or avenue to be explored: are you engaging with the people on the ground who are actually trying to make these methodologies and best practices applicable to our sector?

>> JAN KLEIJSSEN: Okay. Joanna, please.

>> JOANNA GOODEY: I will answer the point you made — it was good to have a challenge from another panel member — and I will also draw on what you said. I'm a member of the European Commission's high-level expert group on AI. I'm one of 52 members; if you thought the UN expert group was large in terms of working with a diverse constituency of stakeholders, including civil society, that is a large group. Because I was challenged, I want to underline the work of that specific group. It rests on three pillars: ethics, law, and robustness. And it really says that all three — ethics, law, robustness — need to be looked at as one whole.

So it is not that you say ethics is the way to go, or law is the way to go, or indeed robustness. It is the package, the whole three together. That is the point I very much want to underline: it is not an either-or discussion. Law and ethics are very closely intertwined, and the heading of this session is emerging technologies and human rights — human rights are very much legally based. To reply to the person who just made the intervention: the point you make about impact assessments is very much about the whole life cycle — if I speak only about AI developments, for example, who is consulted from the very conceptualization through to the rollout, and the ex post evaluations you are talking about. You mention business and human rights, but we are currently at a stage where the guiding principles on business and human rights are not binding — not binding in the sense of the General Data Protection Regulation. There are different levels of legal oversight and governance in that regard. So my understanding of the emerging work of the Commission's high-level group on AI and trustworthy AI is that there is a need for multistakeholder engagement at all stages of product development. This is something the high-level expert group is about to pilot with different sectors: they are actually going to look at the application of the checklist they developed on trustworthy AI with industry and with different users, which is so important. Otherwise it is theory and doesn't look at the practice in reality — which I hope gels with the point you raised.

>> JAN KLEIJSSEN: Lise, please. You want to comment as well.

>> LISE FUHR: I like the question on impact assessments. To me, they are key, extremely important. What we see at the moment is that impact assessments are, first of all, very academic, because we're not widening the scope; and also too technology-specific. If you talk about AI and human rights, you might forget the IoT angle or the 5G angle, and all of these things are converging as we speak. They are all interdependent, and it is extremely difficult at the moment to make an assessment that gives a good picture of the actual impact. So I'm a fan, but I'm just saying we need to rethink how we do it.

>> VIVEKA BONDE: What about issues of enforcement and supervision — how do you consider that should be done, considering the challenges in the different markets and with the different kinds of technologies as well?

>> JOE MCNAMEE: I was thinking Max could give us an example of something that Google did over the last five years that, looking at the high-level expert group's ethics document, you would not do again in similar circumstances. Then we can see a real-life example of how it would impact our rights and freedoms.

>> MAX SENGES: That is a difficult question. I will try to think of something we might not have done or might not do again, while I address the other question posed. To your point, I like the three pillars of robustness, law and ethics. I would say ethics is the virtue part — understanding what we are doing as practitioners. The law is where the stakeholder group of governments is clearly in the lead, consulting with the other stakeholders. Similarly with the impact assessment question: yes, of course companies should self-assess, but it is the independent outside assessments, done by NGOs, civil society and other stakeholders looking at the practice, that are most interesting. Allow me to underline why I think AI governance is something where we see a next-generation, more mature multistakeholder governance approach and responsible innovation, certainly by the leaders in AI: they bring the academics and researchers together around the principles that are identified. I'm not going to read all of Google's AI principles to you, but the three I find particularly interesting — and which do not push away responsibility — are in the AI governance paper that was released at the World Economic Forum in late January, early February. The first is explainability: really make it possible for others to understand what this somewhat autonomous machine does. The second is fairness appraisal: make it transparent, discuss what factors and what ratings go into the machine as it develops its model. And really, they are asking for and proposing liability frameworks for AI, which I think is incredibly important and points to the need for good regulation in that space. One last point: accountability to people. It is called human in the loop, and I think it is actually fairly difficult to achieve. So by no means are we chickening out of the difficult questions. They are big questions. They are not answered yet; nobody on the planet knows the answers, so we should work on them together.
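
To make the explainability point concrete, here is a minimal sketch of one common post hoc technique, permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. This is not Google's tooling or any method endorsed by the panel; the dataset, model and scikit-learn calls are illustrative assumptions only.

<syntaxhighlight lang="python">
# Minimal explainability sketch (illustrative assumptions, not any
# company's actual system): permutation importance asks how much test
# accuracy drops when a single feature is randomly shuffled. Features
# whose shuffling hurts most are the ones the model relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and average the accuracy drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model depends on most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: accuracy drop {result.importances_mean[idx]:.3f}")
</syntaxhighlight>

An explanation of this kind answers "which inputs mattered", not "why the decision was right" — part of why the panel treats explainability as necessary but not sufficient.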

>> VIVEKA BONDE: Coming back to the question of enforcement and supervision from that perspective: how would you consider it, not only where we are now but also where we are heading?

>> JAN KLEIJSSEN: Joanna, I think there is a mic over there.

>> JOANNA GOODEY: It is hard to predict where we're heading, but if we look at the European Union at the moment, we have, for example, a number of agencies that are responsible for oversight and enforcement in key areas where harm can be done — and one can also think about emerging technologies and AI in that regard.

For example, we have a chemicals agency that regulates; we have a medicines agency that regulates. So lots of different models exist, but they are sectoral, for different areas. We can look at ombudspersons. There are different models out there. I think we can draw on the experience of areas where we have decided that we need regulation because the impact on humanity is so great — the potential for harm, but also the potential for good, is there too.

I think we can really learn from the models in our experience in different areas to see what would perhaps fit best, if we are going forward with AI. But again, it is the point I raised earlier: if you are going to have oversight, if you are talking about hard law, you will of course need not just NGOs overseeing; you will need proper organizations that have the legal weight to do that. And that is a very different scenario from talking about self-regulation, optional ethics and other guidelines. We're talking about different things. I'm not sure whether you are asking about soft law or hard law.

>> VIVEKA BONDE: I was thinking of both — how to strike the balance between soft law and hard law, really. And do you need both?

>> MAX SENGES: The big problem is that there is no such thing as "the Internet industry": Amazon is doing something different from a telco or from Google. It is more that the Internet is eating the world; everything is Internet governance. That is why it is difficult to say there should be one institution that is the industry regulatory body. We need all the bodies to understand what the Internet is and how it transforms their respective pieces, and we need places like the Internet Governance Forum where that comes together, loosely coupled, so we make sure we're not seeing things fall through the cracks.

>> JAN KLEIJSSEN: Lise, please.

>> LISE FUHR: We are talking about technology that is developing rapidly, and it is extremely difficult to keep up with what is out there. I agree that the Internet is actually eating the world. But for the industry it is key that we ensure — and I think this is the same for any user — that we have predictability and certainty. So whatever the end result is for AI, the Internet of Things, 5G, whatever, we need to ensure that we know what's there and that we avoid fragmentation. Because in Europe we have 28, maybe 27, Member States, and we don't want different rules in every Member State. We want the same rules. That will also be easier for end users to relate to, because they would have one system.

Again, that being said, the world is not going to follow Europe, per se. I know the GDPR made a huge effort to harmonize privacy rules all over the world and has set a great example, but I'm not sure we can have the same success with AI. We need to make sure we balance any framework we create with the ability to actually develop, because European citizens will anyway use services that come from the US, from China, from all over the world. We need to make sure that whatever we do, we don't create fragmentation between Europe and the rest of the world.

>> JAN KLEIJSSEN: Lise, to disagree with one thing you said: Strasbourg has 47 states, not 27 or 28. Joe?

>> JOE MCNAMEE: I think we should remember what Joanna said in her opening statement about the role of law and the guarantees that fundamental rights provide. And I think we need to look at examples. In the ethics group document there are numerous references to fundamental rights. Fundamental rights are the responsibility of states: states have an obligation under the Convention and under the Charter to ensure these rights. That doesn't mean it always has to be done by hard law, but it is the role of the state. Now, let's take a practical example with our freedom of expression rights in 2019. The European Commission's terrorism regulation — and dare I mention the copyright directive — change the balance of incentives for Internet companies, making it more attractive to remove content than to leave it online. This is not an obligation being imposed by the European Commission; it is a new framework. So more is deleted; more legal content will be deleted. The European Commission's record on existing content removal is dreadful on data production, as the European Parliament resolution of December 2017 described in painful detail. And what happens once Google — a company that has the resources to deal with such a complex new framework — reacts, let alone a small company?

And we see — it is nice to talk about transparency and accountability, but look at this: there is a women's rights organization called Women on Waves. Every ten weeks or so their YouTube channel is deleted through malicious gaming of the flagging system, and then eventually put back online — each time more slowly, and more slowly still.

So where does that leave a citizen — a proud European looking at the Charter of Fundamental Rights? Restrictions of fundamental rights may only be imposed if they are necessary, proportionate, genuinely achieve objectives of general interest, and are provided for by law. Those are wonderful, strong words, and I love the European framework on human rights, but it is falling between all of the tracks — the cracks. The Commission is avoiding accountability. Google and everyone else is avoiding accountability. And so we find ourselves without the rights that are hard law in the international legal framework. So perhaps you can tell us about the transparency, accountability and responsiveness of Google when you remove content.

>> MAX SENGES: Thanks for the second question. I did come up with an answer to the first one — what I would have done differently, or what I think we could have done differently. I think we let a really big opportunity pass when we decided to invest in Google+ rather than pursuing OpenSocial, which was an open standard for digital identity — one of the missing pieces of Internet architecture. That goes back to the original statement that the architecture is not bad, but that piece is missing. We let that go, and it was an initiative to provide standards for digital identity.

Now, on your second question — how do we do these things? Appropriately, the colleagues from the Commission talked about law and how to make policies, so let me speak a little bit about the tools that companies have and apply, and what we do. Basically, there are commitments that we engage in — most notably, in the context of human rights, the Global Network Initiative, GNI, which is a fairly established tool at this point. It goes slowly, but it does make progress: it asks companies first to do a self-assessment and report on it, and then you get independent assessments. That is already helpful for getting the internal wheels turning and having the conversations about what we do and what we should do. The second tool is commitments like the principles and the research that I mentioned.

Then, I think it really comes down to setting up the internal infrastructure to address these points. There is an internal committee for emerging technologies that gets together every quarter; the engineers report on what they want to invent, the proposals get discussed by our ethics council, things like that. Last, but really importantly, there is public scrutiny — engaging with the questions that we get posed, which makes us think about how to get this right. Now, when we talk about freedom of expression for user-generated content, that is a really, really difficult question that you cannot get right for everybody. You will always make one group unhappy that wants more freedom of expression, while the other group wants more paternalism and more control over what is generally seen. I don't think we have found the solution there, but I do think we are on our way to getting better — to having checks and balances so that, for example, you can contest it when things get flagged. You need different layers of content governance, if you want. The first one is user-generated: user-generated content needs user-generated governance, and a lot of things should be dealt with by the users who post, use or are offended by the content communicating about it. The second layer brings in the company, which tries to resolve the issues. And then, of course, there needs to be a public infrastructure that addresses the cases that can't be resolved — hopefully that is only a third of them, because we're talking about many, many cases.
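
The three layers Max describes — user-level resolution, contestable company review, then escalation to public institutions — amount to a simple routing architecture. A minimal sketch of that flow follows; every name and field is an assumption invented for illustration, not any real platform's system.

<syntaxhighlight lang="python">
# Hypothetical sketch of the layered content-governance flow described
# above. All names and fields are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    content_id: str
    reason: str
    resolved_by_users: bool = False         # layer 1: users settled it themselves
    violates_policy: Optional[bool] = None  # layer 2: None = review undecided

def route_flag(flag: Flag) -> str:
    # Layer 1: user-generated content gets user-generated governance.
    if flag.resolved_by_users:
        return "closed: resolved between users"
    # Layer 2: company review against published policy; either outcome
    # must remain contestable by the people affected.
    if flag.violates_policy is True:
        return "removed: company decision, open to contest"
    if flag.violates_policy is False:
        return "kept online: company decision, open to contest"
    # Layer 3: cases the first two layers cannot resolve go to public
    # institutions, which carry the legal weight for fundamental rights.
    return "escalated: public/legal process"

print(route_flag(Flag("vid-42", "alleged hate speech")))
# -> "escalated: public/legal process" (review undecided in this example)
</syntaxhighlight>

Joe's Women on Waves example shows the failure mode such a design has to survive: malicious flagging at layer 1 that overwhelms layer 2, with no timely route to layer 3.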

>> JAN KLEIJSSEN: Perhaps before giving the floor to Olivier, may I raise an issue here, following what Max said. The University of Montreal and McGill are doing a mapping exercise and have so far come up with 60 ethical charters around the world relating to artificial intelligence — including the one recently issued by the European Commission, the OECD's, and 58 others. The question that raises, however, is whether these self-regulatory instruments are sufficient in combination with existing law. We already heard about the GDPR; we have Convention 108, the Cybercrime Convention, and the European Convention on Human Rights — is that enough? I would also be grateful if the panel could move a bit more specifically into the area of AI, because emerging technology is a very wide topic. For the benefit of the participants, let's focus a bit.

>> OLIVIER BRINGER: If you will allow me, I will first reply to the previous question. In terms of enforcement and supervision, we have to distinguish between legislation — hard law — and soft approaches like self-regulation. In the case of hard regulation, I think there is a very important role for national regulators, so it is very important that the regulators are well staffed and can do their work properly. Again, here we don't need to reinvent the wheel: we have sectors where regulators have been active for a long time, and effective — in the case of telecoms, the national regulatory authorities.

>> VIVEKA BONDE: Are there any specific sectors you want to compare AI and advanced data technologies with?

>> OLIVIER BRINGER: Well, the obvious one, again, is data protection, because a lot of the issues around AI and ethics will be about protecting the personal data of individuals. So I think the data protection authorities have a big say in that.

Something that is also important is the mechanism for cooperation: the fact that all these regulators meet and share best practices on how to implement the rules — often, European rules.

Another thing, I think, is skills. We need to make sure — it is not only a question of all of us acquiring basic skills; the policymakers and the judges also need to acquire skills. They need to acquire skills about artificial intelligence, for example.

And this is happening. We see that in sectors which were far removed from digital ten years ago — transport, public government services — policymakers are acquiring the skills in-house. Turning now to self-regulation, I think one important thing is really to ensure transparency. It is very good to go for a self-regulatory approach, but everyone needs to be able to monitor it, and that mechanism should be enshrined in the self-regulatory approach. This is what happens, for example, with what we're doing now on online disinformation: we have a code of practice, and every month those who have signed the code of practice issue reports on how they have applied it — how many fake accounts they have removed, et cetera. That is an important aspect. In the end, it is also the role of the public authorities to look at the results. From time to time we need to check whether the self-regulatory mechanisms deliver, and take responsibility if they do not. These would be my ideas for enforcement and supervision.

Then I would like to reply to Joe. I think Max partially replied to you already. Yes, in several of our regulations we have required that certain content — content that would infringe intellectual property or infringe other rights or laws — be removed. But there is the possibility to contest: someone whose content has been removed in an unjustified manner can contest it in order to get their fundamental rights respected — freedom of speech, for example, or because they want their data protected.

So there is a balance there. The details of how this balance is going to be implemented in practice can probably be worked out via the multistakeholder process too: by putting together the platforms, those who produce the content, those who hold the copyright, the fact-checkers, et cetera, to see how this mechanism of notice and action, and counter-notice, can work in practice.

And then on your point on artificial intelligence and the 60 rule books on ethics: we are at the start of an international dialogue. Very clearly, in our strategy at the European Commission we said this is not something we can do on our own in the European Union. We have to open up — discuss inside the G7 and the G20, discuss in the Internet Governance Forum, and engage with the Council of Europe. Part of the reply is that we need to put in place an international dialogue to agree not only on the principles but also on how to apply them.

I have not read many of them, but I have read a few, and I think there is good convergence in terms of the principles themselves. The difficulty will be in their implementation, and there the risk of divergence is real.

>> JAN KLEIJSSEN: Who else would like to comment on this, either from the panel or from the audience? Please.

>> LISE FUHR: On your question of whether the current regulation or framework is enough: I think it is a difficult question. Again, I would like to come back to my earlier statement — we are looking at AI that is used in many different industries we never thought about before. AI is now used in farming; it is used in the health industry; it is used, of course, in the telco industry; and Google uses it a lot. So we have very different uses of AI, and I'm not sure we can regulate this in a way that covers all of these uses. I am a strong believer that we really need self-regulation here. I think the industries will have big, big problems if they don't do it anyway: no matter what happens with other regulation, everyone needs to be very transparent about their principles for AI and how they deal with data, because people are becoming much more aware of the use of data and much more confident that the data is theirs. I think the GDPR has had a great influence on this development.

But also, if we are to do any further regulation, we need to really bring in all stakeholders and take a multistakeholder approach to it. Otherwise we lose part of what is important in regulating AI.

>> JAN KLEIJSSEN: Thank you.

>> MAX SENGES: To underline that point: take AI in healthcare — there we apply the rules of healthcare, which are strict about how you innovate. Traditionally on the Internet, you have a permissionless model. That is what you have to bridge. It is not a one-size-fits-all approach; you don't regulate "AI" as such. I like that fundamental point. What we want to do more is apply real-world metaphors. To come back to your point about freedom of expression: you have different expectations about freedom of expression depending on whether you are in a place like this, at a party in a club somewhere, or in a legal hearing, right? Those are very different conditions. What we should do is agree on the principles and make sure that human rights are always looked at — that is where your national agencies come into play, to monitor, probably based on the information and complaints that civil society and users bring to them. And then we find different community agreements for how you address it in those different environments.

>> JAN KLEIJSSEN: Joe, please.

>> JOE MCNAMEE: I can't help wondering what any of the 1.1 million patients whose data was handed over by the health trust to Google DeepMind would think about that last statement. I suspect they would be sick.

>> MAX SENGES: Obviously I apologize for that mistreatment. It was a long time ago, and it was before a lot of the work and a lot of the infrastructure that has grown since then was developed. Again, that was a really bad incident.

>> VIVEKA BONDE: So on that note: if we look at AI in respect of human rights, and we look at the creation of AI from the data collection stage through design, development and deployment, how can we ensure the human rights element with self-regulation? How can we do that?

>> JAN KLEIJSSEN: Joanna, please.

>> JOANNA GOODEY: I think, really, self-regulation only takes you so far. Beyond that, you have to have hard-law regulation, and it is the duty of the states to ensure the law is enforced. We have fundamental human rights that need to be applied. They are framed broadly enough that they can also be applied in the context of emerging technologies and AI, and we can see whether they are fit for purpose given the advancement of technology. One thing that comes up if you are talking about self-regulation is the point of redress and access to justice. My fellow speaker, just on my left here, your right, talked about a data breach; we have many examples of this in all the different industries that are emerging.

Now, for the individuals concerned, it is very hard to get access to justice and redress. If we're talking about a right, this is a core right, and we have to look at ways in which mass breaches can be addressed — and that is only one area, the data breach. So we really have to say that the companies can only take you so far: they can try to prevent a breach and fix it, but when there is a serious breach — a data breach with huge implications for the individuals concerned — you really need the weight of the law to make sure redress can be obtained. That might eventually mean going to court; it might mean penalties, et cetera. Self-regulation can only take you so far. Some of the examples today describe a continuum of impact of emerging technologies and AI. The example often given is online shopping — what's the problem with that? — versus the impact, say, on your personal health, where a decision is made about whether you get treatment or not. So whilst it might not be one model fits all in terms of the weight of oversight, I still think the founding human rights principles apply regardless of the level, and that has to be made very clear. There is no hierarchy in which you say human rights apply here but not there. That is where you have to go beyond self-regulation: that is really the role of the state and of the international bodies that look at governance.

>> JAN KLEIJSSEN: Thank you, Joanna, and also for putting remedies on the table. I think that is a crucial element. A question from the audience, please.

>> QUESTION: My name is Greta Clause, from the German [indiscernible] Foundation. I have more of a statement than a question. You made several references to the GDPR; this is the first time we have a data protection and privacy regulation that is age-differentiating — that specifically addresses children's rights to privacy and data protection. So my question would be, with reference to Lise, who mentioned the multistakeholder approach: what do you think about specifically addressing children's rights in the regulation and self-regulation of emerging technologies, and especially AI? Have you any thoughts about that? Is it on your mind? Not only Lise — all of you. Thank you.

>> JAN KLEIJSSEN: Children and perhaps other vulnerable groups, too. Please?

>> LISE FUHR: I think it is a given with emerging technology: with the extended use of technologies also among children, we simply need to take their rights into account. And I think it is extremely important, also in the multistakeholder model, that you include the young — not necessarily the very young children, but young people. Because we tend to have a certain age level in a multistakeholder discussion, and we need to ensure the full spectrum of ages is included.

>> JAN KLEIJSSEN: At a hearing yesterday in Strasbourg where companies were represented, the observation was made that those between 10 and 21 years old are okay — it is anyone over 21 that is the problem.

>> MAX SENGES: I really like your question, thanks. What is good about it is that the hardest problems are, of course, those where the challenges become most obvious; but when you solve them, you actually have a blueprint that is good for the broader picture. In the case of children, I think what is really missing is exactly the digital ID — the age verification that needs to come at some point, though I don't know when and how it can be applied. Similarly, health is a wonderful example when it comes to data sharing, because innovation — and that means finding cures for bad diseases and treating us all — is not pursued because the data regulations are so restrictive. We have a saying: data protection plus data sharing becomes data governance. That is what we should really be thinking about, plus identity.

>> JAN KLEIJSSEN: Max, can I prod you with a supplementary question on this one. At Google you also deal with education. Now, digital literacy and AI literacy must surely be very important elements in empowering users to use their rights, to make use of the remedies they have. How do you see this issue? It has been pointed out in several of the ethical frameworks — of the 60 charters I mentioned, I haven't read all, but I have read a number of them — there is the issue of the obligation of states and companies to ensure that citizens and users are aware, for instance, when they interact with AI.

>> MAX SENGES: So there are two sides to the question of literacy and the capacity to know. One is really on the developer and expert side: making sure that the technology is doing what it should. In terms of emerging technology, here I mentioned the explainability factor, where you have to understand why an autonomous system is making a decision, in principle, and then be able to trace it back.

The other side is really about agency — what we call in German [speaking German]: being literate enough to use a certain technology. To be frank, Google could do more than we already do. We do train, or have trained, more than a hundred million people in digital literacy skills. Those are more job-related skills, to address the challenge of the workforce needing digital skills and to improve there. There is an equally important capacity, I would argue, of really using media, of participating in media. But again, I don't think this is something any player can do by themselves. It is a matter of working with the existing educational system, of course, and it is a funny catch-22: when Google does something, they say, oh my God, they are going into the schools; and when we don't do something, it is, how can they not do something. It is on a middle path that we will find a reasonable solution.

>> JAN KLEIJSSEN: Is anyone from Finland in the audience? Finland is offering a free online course on AI literacy in this respect. A question from the audience.

>> QUESTION: Thank you. Mary Anne Franklin, Internet Rights Coalition. I'm speaking in terms of time, which Joe reminded us all about. When do emerging technologies become emerged technologies? Once upon a time, the web was an emerging technology. Artificial intelligence is not a new issue. I would like to hear from each of you, if I may ask, a clearer definition of what you mean by "artificial intelligence" — because we're talking about possible beings that might one day wish to have human rights, but we're also talking, perhaps, about simply automated programs. So I would like to know, first, what specifically you mean by artificial intelligence; second, what you mean by emerging technologies here and now in 2019; and third, where you would draw the line in time — when do emerging technologies become already emerged? Because I think we're wandering around history as if everything has stood still, when it tends to go away and come back again. Forgive me for being pedantic, but until we have our perceptions clear, the discussion will go nowhere. Thank you very much.

>> JAN KLEIJSSEN: Thank you.

>> JOE MCNAMEE: If I can first answer the question about self-regulation, I would like to repeat what I said at the start.

Sometimes self-regulation has worked and sometimes it hasn't. The times it worked tend to be similar for various reasons, and the times it hasn't worked tend to be similar for various reasons. Call me an old cynic, but when there is a self-interest for the company that is self-regulating, it will self-regulate better, as a general rule. So the first thing we can do is look at our experience and say: okay, on this end of the spectrum, we know from experience that we can rely on self-regulation.

If we take something like network security: we know companies have an interest in having decent security, and we know they would prefer not to notify a data breach. So having a basic level of security, which can be exceeded on a self-regulatory basis, makes sense. Allowing self-regulatory data breach notifications? Eh — that's not going to work. So the evidence is there; we just need to use it.

On the education question, I'm personally horrified. I saw my niece's tablet, and it scared the hell out of me. Pat Walshe, who was involved in the mobile industry, did an analysis of one of the apps, which is of very little value to the students and of huge long-term value to the companies. It is just not good enough that this is being given to kids without discussion with the kids, without discussion with the adults. Consent — meaningful consent — doesn't exist. On emerging technologies: I think I avoided using the phrase "emerging technologies" and the phrase "artificial intelligence", and I would agree with you. There is a — dare I say — lobbying imperative to restart the clock every year, but there is no practical reason to. Technology isn't going to stop; there are always going to be emerging technologies. Look at the European Convention on Human Rights: it was written 70 years ago, and it is as valid today as it was then — more so, probably. We don't need to reinvent it. We need to continue to adapt it and live by our principles, not by headlines.

>> OLIVIER BRINGER: Replying to the question you asked about emerging technologies: I will let Joanna share how the report defines artificial intelligence; I will not try a definition on the spot.

On this issue of emerging — and possibly, at some stage, emerged — technologies: as policymakers, we don't know. We cannot be sure that a technology will really eat the world, as Lise was saying. So we have to be quite modest. The best we can do is have a discussion with the experts, with the community, to try to understand where the technology is going and what type of framework is required to make sure the benefits flow to society and the economy. As policymakers we have to be modest, and we have to be adaptive — precisely so we can adapt to the emergence of different technologies.

And the second thing I would say is that if we want to understand this well, we also have to use another instrument, which is investment. Policy is not only about regulation and different flavors of regulation; policy is much more than that.

I'm managing an initiative, Next Generation Internet, where we are investing in the technologies of tomorrow. It is an innovation initiative, an investment initiative, and with this type of initiative we can understand the technology: what exactly the state of the art is, where it is going, where we need to invest more, where we need to intervene. I think this technology angle is very important. If you want, in the end, to have the artificial intelligence that we want — one in line with European values — and if we want a blockchain that is truly decentralized, we have to invest in the technologies here in Europe. We have to have the innovators working on building the technology. Otherwise, the values that come with the technology will be imposed on us from the outside.

I think the competitiveness and investment angle is as important as the regulatory angle.

>> JAN KLEIJSSEN: Lise, and perhaps Joanna after, on the definition of AI.

>> LISE FUHR: We may be repeating ourselves, but I don't think that makes it useless; it is important to have these discussions now. What we have learned to call AI has mostly been machine learning — it hasn't been real AI. I agree we call a lot of things AI; what we see now are the first steps toward it. And together with other technologies that are truly converging — like 5G, which is actually a convergence of fixed and mobile networks — we are moving into an era where we don't know where things begin or end, because everything is becoming more and more interdependent. So I think these discussions are extremely important, but it is also important that we take them, as we keep saying, at a more horizontal level. What we also see is other industries moving into the technology industry; everyone is now becoming very dependent on the Internet, on AI, on being connected at any time.

And on Joe's example of companies misusing data: we will always have bad apples in the basket, and I don't think regulation will solve that. It will of course give a clearer framework, but I'm just not confident we can solve it all by regulation. What we need to do is raise awareness, raise transparency, and educate the users.

>> JAN KLEIJSSEN: Thank you. Joanna?

>> JOANNA GOODEY: Okay. The interesting discussion is, on the one hand, the onus put on users to take responsibility. But the companies have responsibility, and the state has responsibility, too. Most of the responsibility cannot be put on the user — that is one thing I would say.

There are multiple definitions of AI out there. Read any of the documents and they all vary slightly. I have one that happens to be in front of me, from the European Commission. I will not read it out, as it is very long — and if I did, it would be contested, naturally. But when we talk about the label of AI, it covers something very diverse, and some of it we are all aware of; it is in our everyday lives — we open up our smartphones and it is being used.

What is very interesting is to look at areas like bias and discrimination in the application of algorithms, or predictive policing — areas where people have written a lot and are very concerned. You can go back to the '80s: there were already issues about bias in predictive policing, data was being gathered, but we didn't call it AI. The same basic concerns about discrimination, bias and unfair treatment were already being raised by people working in this field.

AI is a new tool using more data, but more data doesn't necessarily equate to better quality. I think we can really draw on our lessons, on our established critique from a human rights and fundamental rights perspective, and apply it to the new tools we have through AI.

Sometimes, when I read what is going on in AI — and especially in the field of robotics, for example — a lot of it is still science fiction. A lot of AI is here and now, and we're very aware of it; about a lot of it, alarm bells are ringing. We have to be pragmatic and look at real use cases. One point to underline: less theory, more diving down into the use cases and what is actually going on. Then we can pinpoint real fundamental rights issues, from the conceptualization through to actually getting redress if something goes wrong.

A lot of the discussion is high-level at the moment, and I think we would help ourselves by grounding it in a much more transparent understanding of the uses to which AI is being put. That is the stage we will move to next, as well.

>> JAN KLEIJSSEN: Thank you very much. Max, please.

>> MAX SENGES: Mary Anne, thank you very much. I hope to give you a precise answer. I think AI is a blurry term, but I will end with something concrete that is at the core of what we talk about with governance and regulation. There is the machine learning that was discussed in a number of contributions, and there is general AI, which doesn't exist yet and is pretty far out, as far as we can tell. But there is one element of the AI question that I think is cross-cutting, independent of the sector and of the application: it takes autonomous decisions. If we frame the conversation more around what decision is being taken and how we can understand how it is taken, then I think we're on a pretty good path to thinking about how to govern this space. In the end it is about agency — about whether we are giving up our human agency. You can imagine an AI voting tool, right? There is a tool in Germany, on the Internet, where you answer all the questions and it tells you how you should vote. Take that to the next level: you tell the AI everything you know and it votes for you. You see what kind of questions you get into here. More generally, I think there are two kinds of technologies. There are technologies of access: you get access to Wikipedia, to knowledge, to means of augmenting yourself and your understanding of the world. And there are technologies of control — also a really good thing when you want to control, say, a production chain, or ensure certain aspects of freedom of expression, et cetera. There, control is what you want.

I think it is the balance between the two, and they're actually both there when you think about AI. If it is a technology of access, it is something that augments me, and it should be loyal to me. That is sort of the answer to the objection: oh, there is such an imbalance between the enormous AI systems out there and me, myself — who am I to understand all of that? If we get good systems on our side, the individual's side, and augment ourselves with AI tools, that's a good thing.

Let me address the second part of Mary Anne's question, at least, about emerging technologies. Emerging technologies are technologies where we don't know the exact outcomes of what happens when we apply them. I would argue there are some applications of AI that are not emerging, where we do know what is coming — and many where we don't. At Google, we have changed how we roll out technology. It used to be a blog post inviting everybody to try out a new beta-phase program called Gmail — which famously stayed in beta for a long time. That is not how we roll out AI-based systems. We take small groups, test, iterate, add diversity aspects. It is a much more responsible effort in this context, I would say.

I'm a fan of defining what we call emerging as beta: having a clear distinction between beta technology — which, for example, can only be out for so long and has certain characteristics, which is more experimental — and mature technology. On beta you can't make money; technologies where you make money, at least under certain conditions, should not be in beta anymore. You have to be responsible about what you put out as a real product. If people sign up for something and want to be part of the experimental community, that is a different contract between the user and the provider of the service.

>> VIVEKA BONDE: Thank you, Max. We are running out of time, unfortunately. A final question for you: is there any democratic oversight when it comes to emerging technologies, and are there aspects of it you would like to share?

>> JOE MCNAMEE: I would really, really like governments to apply their existing election advertising rules to Internet advertising. In the Irish abortion referendum, the Irish Government didn't bother — didn't consider it relevant — to have the same rules for online advertising as for offline advertising. Google took, I think, a good decision — forgive me for saying something nice to you after all of this — not to accept advertising after a particular moment, and Facebook decided not to take non-Irish-sourced money for advertising, which was pathetically easy to circumvent. So the Government didn't regulate at all, Facebook regulated one way, and Google regulated another way. That is not grown-up politics in this era. If we could at least have consistency. I know that is not what you were asking, but it annoys me.

>> VIVEKA BONDE: You can add democracy to that.

>> JOE MCNAMEE: Democracy.

>> JAN KLEIJSSEN: Who would like to comment before we move to the final round of comments, given the time?

>> MAX SENGES: I think it is important to see that this is a somewhat technocratic conversation, and nobody here was voted in, in terms of democratic legitimacy. I'm not sure it would be helpful to have all of the different stakeholder groups vote and introduce democracy in that way. What does seem like a good proposal is to include the view of the people — an informed view of the people — in our debates and have that as a base to come back to.

There is a practice, citizen deliberation, that is quite developed. There is a project underway to bring it to Internet governance, by a French NGO, Missions Publiques. I think that is a really good way to bring democratic values and practices into a multistakeholder context.

It also has — if you allow me that last point — the advantage of giving you a delta between what people say off the street, when you just go out and ask everybody, and what they say after they have deliberated, spent time, and weighed the tradeoffs between the different options. If that delta is pretty big, you should not go with the general sentiment and simply ask people on the street what to do. Privacy is, unfortunately, a really good example: at this point everybody says, ah, it is very bad, we need to do something. But that is not really helping; it is not constructive. You will not get an answer that works in a pretty complex environment with legal, technical, business and different cultural aspects just by asking everybody. So I do think the multistakeholder governance approach is the right one, but it should be informed by democratic practices.

>> JAN KLEIJSSEN: Olivier?

>> OLIVIER BRINGER: On this I would say that democratic oversight is enshrined in our institutional setup. When we devise policies and regulation, we do it together at the European level, with the European Parliament and the Member States, who represent the people. There should be democratic oversight, of course, and it already exists in the institutional setup.

And the Parliament — for example, a year or two ago the Parliament issued a report on artificial intelligence. So it looks into these issues, and I'm pretty sure that the next Parliament, which is now being constituted, will look at them too; it will be on their agenda.

And I fully agree with Max: if you want to discuss these issues, it is a bit like discussing Europe — the European Union, I'm sorry. You need to explain to people how it works. I don't know if you saw it in your newspapers, but there were a lot of explanations of how Europe works before the European elections. I think we have to do the same exercise with complex technologies: we have to explain all the aspects, from the technological to the legal and ethical. Having an informed debate is really important.

>> JAN KLEIJSSEN: Thank you very much. Please, question from the audience.

>> QUESTION: Hi. Thank you very much for this discussion; it has been a really exciting panel. Probably 40, 45 minutes ago, Lise said something like she is not confident that regulating artificial intelligence can be as successful as the GDPR, for example.

I think this is the key point. Regulating ethical values or behaviors is complicated. Ethical values are not universal: what is ethical in one part of the world is not in another. So one question is how effective a deliberational approach could be.

And even if you can develop a solid approach, there is a challenge, because some values are not only European — and if you speak about Europe, it is something bigger than the European Union. It is a challenge.

So that is one comment. The other comment is that, yes, there are a lot of decisions being taken based on artificial intelligence tools — for recruiting people, for shortlisting applicants for many positions, and for many other things. And the problem here is not only ethical behavior; it is also incompetence and bias. Regulating bias is complicated. There are very interesting examples of bias — I will not take much of your time. If you search on a search engine for something like "babies", the results you get are probably not representative of all the diversity of the world. Or take a simple question: when was the last time your country was champion of the World Cup in soccer? Everybody will think about the main championship — not about the women's championship, or teams of people with disabilities, or the many other categories that compete. I challenge you: try it. It is a very simple question.

And this is something we are seeing every day in decisions that are taken based on artificial intelligence. This is a big challenge. Thank you.

>> JAN KLEIJSSEN: Thank you. Thank you very much. We must now, I think, come to a close. I will take a comment from Joanna, then I will ask the Rapporteur to come up, after which I will give one sentence to everyone. Please, Joanna.

>> JOANNA GOODEY: I think the last speaker made an important point. But if one looks at AI very neutrally, one can also say — taking the example of recruitment — that AI, when done well, has the possibility to reduce bias and discrimination, because in our own human decisions about who to recruit we all have biases, as we know. Of course, the quality of any algorithm is only as good as the data put into it and the design of the algorithm. So from how engineers and technicians are trained, through all the life-cycle stages, we have to ask: are we thinking of potential bias and discrimination? And let's not forget that even without AI there is bias and discrimination in everyday life: the police can behave in a discriminatory way without using PredPol or some other tool out there; we know that exists. So I think we can be optimistic that AI, to take that example, can do good in the right framing. A lot of this discussion has been about the negative side, but there is also a very positive side — though for that you need the oversight, the checks and balances. We have only mentioned children; I'm glad you raise persons with disabilities, and there are the elderly and many, many other groups. We can all experience bias and discrimination. So one has to be cautious when saying AI is discriminatory: a lot of examples have emerged saying yes, it is, but everyday human practice without the use of AI is also biased and discriminatory.

We need to advance AI to recognize bias and discrimination. That is the one area where a lot is written on rights and AI, rights and discrimination. We need to move forward with what we know to improve the AI tools out there.
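
One simple form of the life-cycle check described here can be sketched in a few lines. This is a minimal illustration with invented applicant data, group labels, and the common "four-fifths" heuristic assumed as the review threshold; real audits use many metrics and legal review.

<syntaxhighlight lang="python">
# Minimal sketch (hypothetical data and threshold): auditing a hiring
# model's outcomes by comparing selection rates across groups.

def selection_rate(decisions):
    """Fraction of applicants in a group who were shortlisted."""
    return sum(decisions) / len(decisions)

# 1 = shortlisted, 0 = rejected; groups and numbers are invented.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],
}

rates = {g: selection_rate(d) for g, d in outcomes.items()}
# "Four-fifths rule" heuristic: flag the model for review if any group's
# selection rate falls below 80% of the highest group's rate.
max_rate = max(rates.values())
for group, rate in rates.items():
    flagged = rate < 0.8 * max_rate
    note = " <- review for bias" if flagged else ""
    print(f"{group}: selection rate {rate:.2f}{note}")
</syntaxhighlight>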

>> JAN KLEIJSSEN: Thank you. Before we are kicked out of the building, I will give the floor to our Rapporteur, Clément.

>> REPORTER: I have the difficult task of summarizing the key messages from this session. I will read them to you and see if the panelists and audience agree to adopt them.

I have articulated the messages under three topics: regulation and frameworks, enforcement mechanisms, and literacy. The first key message: all stakeholders, including the private sector, agree that some form of regulation concerning the use of digital technology is needed to protect individuals, build public trust, and advance social and economic development. But divides remain on the scope and bindingness of such regulation, though there is a shared understanding that such regulation should ensure predictability and legal certainty. There is a consensus on the need to initiate open-ended and inclusive debates to provide guidance and introduce frameworks to address the impact on the exercise of the human rights of individuals, for example in product development.

When it comes to enforcement mechanisms, states should take appropriate measures to ensure effective legal guarantees, and also sufficient resources, to enforce the human rights of individuals, in particular of marginalized groups like children.

There is a need for enforcement mechanisms within national law to ensure that responsibilities for adverse risks and harms to individuals' rights are rightly allocated, for instance by using impact assessments, as we have seen previously.

And when it comes to literacy: due to the power asymmetry between those who develop technologies and those who are subject to them, there is a need to empower users by improving digital literacy skills and to enhance public awareness of the interference of emerging technologies such as AI with human rights.

That's about it. I would like to know if anyone in the room disagrees. Please raise your hands.

>> JAN KLEIJSSEN: I think you are posing a big challenge here. Thank you very much, Clément.

>> MAX SENGES: Hang on, I have one comment. I think you did well overall, but there is one aspect that I disagree with, and that we disagreed about on the panel as well. When you talk about enforcement, you stress national law. I thought you were going in the direction of national agencies for supervision and enforcement, not national law. The laws ideally have to be transnational, fully international, or at least regional.

>> JAN KLEIJSSEN: Absolutely. I must end it here; I have been given a clear signal that this room will be closed in a moment. So I will give each of you one sentence to close. Twenty seconds, please.

>> OLIVIER BRINGER: Reacting to the last intervention, I would say that we should build on our European values to build the right framework for trustworthy AI. And we should indeed do that together at the European level, when it comes to the European Union.

>> LISE FUHR: I would like to give three takeaways. First, Europe has a great advantage: we are very advanced on the GDPR and privacy. Second, in relation to both AI and other emerging technologies, we need a multistakeholder model to look into AI and its impacts. And last, we should avoid any fragmentation of the European market.

>> JOANNA GOODEY: Okay, I think the basic message is that we need the same rights online as we have offline, and to remember that we have robust human rights and fundamental rights legislation that already exists, which can be examined to see how fit for purpose it is as we go forward with emerging technologies.

>> JOE MCNAMEE: I would like to sign up wholeheartedly to what Joanna just said. We need to remember that we have cornerstones in place: cornerstones in human rights law and cornerstones in experience. If we build on those, we are going in the right direction.

If we question everything, every time somebody calls something a new or emerging technology or gives it a shiny new name, we will never get to our destination.

>> MAX SENGES: Thanks to the organizers; I thought it was a good conversation. I want to come back to the first point: the Internet is one of the greatest inventions ever made. We paint it all in a bad light right now. There are big challenges, but it is an enormous opportunity and it has brought us a great deal of progress.

Thank you very much for the last question; it gives me a great opportunity. Bias in the dataset is crucial to address. There is a big question: do you want to represent reality, or represent an ideal state? Especially when it is the second, I think we should be clear about what our tools are doing. I think a search engine should, most of the time, represent reality and not try to be a normative instrument. A place where we can really negotiate and deliberate and find out what the truth is, is a place like Wikipedia, where you have the transparency and the infrastructure for that. So Wikidata would be a place to think about that. Thanks again, great discussion. Very nice.
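
The reality-versus-ideal distinction can be made explicit in code. A minimal sketch with invented category proportions: the same dataset sits close to one reference distribution and far from the other, and which reference a tool targets is a normative choice that should be documented.

<syntaxhighlight lang="python">
# Minimal sketch (invented proportions): measuring whether a dataset
# mirrors observed reality or a chosen ideal state.

dataset = {"category_a": 0.70, "category_b": 0.25, "category_c": 0.05}

references = {
    "observed reality": {"category_a": 0.65, "category_b": 0.30,
                         "category_c": 0.05},
    "ideal state":      {"category_a": 0.34, "category_b": 0.33,
                         "category_c": 0.33},
}

for name, ref in references.items():
    # Total variation distance: half the sum of absolute differences
    # between the two category distributions (0 = identical, 1 = disjoint).
    tvd = 0.5 * sum(abs(dataset[c] - ref[c]) for c in dataset)
    print(f"distance to {name}: {tvd:.2f}")
# A tool should document which reference it targets, so users know
# whether it is describing the world or prescribing one.
</syntaxhighlight>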

>> JAN KLEIJSSEN: Thank you, thank you very much. Thank you to the panelists, my co-moderator, the audience. We're still confused, but I suppose we're confused at a higher level. A round of applause for the panelists. Thank you all very much.

[Applause]

And see you all tomorrow. Thank you.

>> HOST: Thank you very much. Some very quick housekeeping announcements: the buses are already leaving for the beach, and the last bus will go at 7:30, so keep that in mind. Tomorrow morning we start at 9:00, so don't drink too much.

We have a small present for you. Don't leave yet.

[Session concluded]


This text, document or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.