Ethics by design – Moving from ethical principles to practical solutions – PL 05 2019

20 June 2019 | 11:00-12:30  | KING WILLEM-ALEXANDER AUDITORIUM | [[image:Icons_live_20px.png | Video recording | link=https://youtu.be/WplQXzXRqOU]] | [[image:Icon_transcript_20px.png | Transcription | link=#Transcript]]<br />
[[Consolidated programme 2019|'''Consolidated programme 2019 overview''']]<br /><br />
{{Sessionadvice-PL-2019}}
Working title: <big>'''Data ethics, algorithms and trust by design'''</big><br /><br />
Proposals assigned to this session: ID 3, 21, 22, 24, 29, 38, 88, 119, 145, 153, 155, 164 – [https://www.eurodig.org/fileadmin/user_upload/eurodig_The-Hague/statistik_proposals_all/proposals_for_2019_2018-12-04__01_final_web_IDs_ver1.pdf list of all proposals as pdf]<br /><br />
== <span class="dateline">Get involved!</span> ==  
You are invited to become a member of the session Org Team! By joining an Org Team you agree that your name and affiliation will be published on the respective wiki page of the session for transparency reasons. Please subscribe to the session [https://list.eurodig.org/mailman/listinfo/pl05_2019 '''mailing list'''] and answer the email that will be sent to you requesting your confirmation of subscription. As spam detection systems are rather aggressive these days, you may also need to check your spam folder.


If you would just like to leave a comment, feel free to use the [[{{TALKPAGENAME}} | discussion]] page here at the wiki. Please contact [mailto:wiki@eurodig.org '''wiki@eurodig.org'''] to get access to the wiki.
== Session teaser ==
Data ethics is at the top of the global tech agenda. In recent years a lot of time and energy has been poured into developing guidelines, codes and principles for the responsible use of data, AI, robotics, etc. Now it is time to move on to the next phase: translating data ethics principles into data ethics solutions.


== Session description ==
Several recent scandals (most prominently Facebook-Cambridge Analytica) have increased the average citizen's awareness of the risks of data abuse in the new data economy. A number of intergovernmental institutions, most prominently the EU and the OECD, are working on developing guidelines and principles for the responsible use of data and data ethics. Some countries have also attempted to spearhead work on how to operationalize data ethics and make it into a competitive advantage for businesses, but a range of unresolved questions and dilemmas remains that this session will seek to answer: How do we promote data-driven business models without eroding citizens' trust in businesses and society? How do we empower tech workers to handle ethical questions when they arise? And most importantly: How do we transform data ethics into practical solutions and turn the responsible use of data into a competitive advantage? If we are to reap all the benefits of the digital transformation, we need to find new solutions to ensure that consumers' trust in the data economy stays strong. A strong focus on data ethics and the responsible use of AI could be one means towards this end.


== Format ==
We have 90 minutes for the session, so it is suggested to divide it into smaller but interconnected parts:
*Speaker to set the scene
*Short introduction by the moderator
*Short presentations by panelists working on developing data ethics principles and/or on concrete data ethics solutions
*Panel discussion
*Q&A with questions from the audience
*Conclusions – lessons learned/practical solutions to bring back home


== Further reading ==
Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example for an external link: [http://www.eurodig.org/ Website of EuroDIG]


== People ==
===== Moderator =====
*Aimee van Wynsberghe, Assistant Professor in Ethics and Technology at TU Delft in the Netherlands. Aimee is also co-founder and co-director of the Foundation for Responsible Robotics and on the board of the Institute for Accountability in a Digital Age. She also serves as a member of the European Commission's High-Level Expert Group on AI and is a founding board member of the Netherlands AI Alliance. https://aimeevanwynsberghe.com/
 
===== Speakers/panelists =====
 
*Elettra Ronchi, Head of Unit, Senior Policy analyst, OECD
**Elettra has been leading the work on enhanced access and sharing of data (EASD) and the review of the OECD privacy guidelines as chair of the SPDE working group in the OECD.
**Able to give input on both the OECD AI principles (still under development) and ethical accountability (accountability 2.0), and on how to translate these into concrete policy.
 
*Meeri Haataja, CEO of Saidot.ai and Chair of the Ethics Certification Programme for Autonomous and Intelligent Systems (ECPAIS)
**Saidot is a company in Finland that is developing a service to help organizations (e.g. Finnish government services such as taxes and social services) provide transparency about the data they use. https://www.saidot.ai/
**ECPAIS is an IEEE-SA backed programme, run in collaboration with industry and public service providers, for developing criteria and processes for certifications on transparency, accountability and algorithmic bias. https://standards.ieee.org/industry-connections/ecpais.html
 
*Andreas Hauptmann, Director for EU and International, incl. Data Ethics and AI, Danish Business Authority (DBA)
**Data ethics is high on the agenda in Denmark, and in 2018 a set of recommendations was developed to strengthen Danish businesses in the responsible use of data, e.g. by empowering tech workers to handle ethical questions when they arise. The recommendations focus on how to make the responsible use of data a competitive advantage for businesses. The DBA is taking this work to the next stage and looking into transforming the recommendations into practical solutions, including establishing a data ethics seal and a requirement for the biggest companies to include an outline of their data ethics policies in their management reviews as part of their annual financial statements. You can read the recommendations here: https://eng.em.dk/media/12209/dataethics-v2.pdf
 
*[Not confirmed yet] - Lucilla Sioli, Director, Artificial Intelligence and Digital Industry at DG CNECT, European Commission.


'''Focal Point'''
*Lars Rugholm Nielsen, Danish Business Authority
*Julia Katja Wolman, Danish Business Authority


'''Organising Team (Org Team)'''
*Zoey Barthelemy
*Marit Brademann
*Amali De Silva-Mitchell
*Ansgar Koene, University of Nottingham
*Artemia-Dimitra Korovesi
*Charalampos Kyritsis, YouthDIG Organiser
*João Pedro Martins
*Jana Misic, Wilfried Martens Centre for EU Studies
*Michelle van Min
*Ben Wallis, Microsoft
 


'''Remote Moderator'''

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.


'''Reporter'''
*Jana Misic, Wilfried Martens Centre for EU Studies

The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:
*are summarised on a slide and presented to the audience at the end of each session
*relate to the particular session and to European Internet governance policy


== Messages ==
*The ethical guidelines ecosystem has grown extensively over the past years and includes more than 40 sets of guidelines. However, the challenge of creating a complementary balance between legislation, regulation, innovation, and the guidelines remains. 
*The approach of self-regulation is not enough. There is a need for a new industry model that allows for working with data ethics, but does not pose a barrier for innovation and competitiveness. Data ethics should be a parameter on the market.
*While there are many common values in the guidelines, the base values that should be addressed are transparency and explainability. Mechanisms for providing transparency have to be layered, stakeholder-specific, and able to operate on different levels. Explainability should be defined in a multistakeholder dialogue because it includes explaining algorithms’ decisions, as well as explaining what data ethics means in a specific context.
*Not all machine-learning systems operate with the same algorithms, have the same application, or are used by the same demographics. Developing tools for the practical implementation of data ethics has to be highly context-specific and targeted.
*Data ethics standardisation through certificates and seals for business entities should be explored as an instrument of ensuring trust. Other instruments include an obligation to report data ethics policies in the annual reviews and in the corporate social responsibility policies. Sharing best practice cases is crucial.
 
 
Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/sessions/ethics-design-moving-ethical-principles-practical-solutions.


== Video record ==
https://youtu.be/WplQXzXRqOU


== Transcript ==
Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-800-825-5234, www.captionfirst.com
 
 
''This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.''
 
 
>> Good morning. We're running a little late already, so I'd like to start with the next panel discussion. And may I please ask everybody sitting to come up to the front a little bit, in the center, because it's nicer for the panel because this is such a big, large, whatever room. And if they have to look to the right for just one person, to the left, it's nicer, more polite. They can just look at you. And it looks a little more crowded as well. We're trying to get the people from the corridor as well. Well, let's start. Here is Aimee van Wynsberghe. I'd like to give the floor to her and her panel so we can move on to learn more about ethics. I'm really looking forward to it. You need this one?
 
>> I do. Wonderful. Thank you so much for the introduction and for bringing us here today. So we'll just let everyone get set up. Now, the way that we will organize, we have an hour and a half for this panel. And so we wanted to make sure that we had some time on stage to get into the conversation, to get into some of these topics. But then after about 30 minutes, my plan will be to then direct our attention to the audience for any questions that you have, whether you want to ask a point of clarification of something that was said or, you know, push the conversation further. And then also I believe we will have participants at a distance who are going to feed questions to us. So first let me begin by thanking Julia Wolman who is the reason why we all came together, she's the organizer of this panel, and she's the one who -- yeah, made sure we understood the theme, kept us all in check. So really, thank you, Julia, for doing a wonderful job with that. And then to introduce the panelists today, I have Meeri Haataja, correct?
 
>> MEERI HAATAJA: Yes.
 
>> AIMEE VAN WYNSBERGHE: And I have Elettra Ronchi and Andreas Hauptmann. They're all incredibly impressive individuals. I'm going to open the stage to let them tell you about themselves and why they're here and their interest in this particular topic. I should also say I live and work here in the Netherlands, and I’m Assistant Professor at the Technical University of Delft, also part of the expert group on artificial intelligence.
 
So this idea of ethics by design or, you know, also generally put, this moving from principles to practice. Why is this even on the table at all? And this goes back to a few scandals that we can pinpoint, right? If we're talking about just data and the handling or the treatment or the acquisition and sourcing of data, the first thing that usually comes to mind is the Facebook Cambridge Analytica scandal, that data was acquired and sourced in an unethical manner. If we're looking at newer technologies like artificial intelligence, we've also heard stories about the way that algorithms have been trained or used and that certain cultural biases could be exacerbated or reinforced by this. I think one of the more well-known stories was the Amazon case where Amazon trained an algorithm that would help them with recruiting other individuals and to train this algorithm, they used ten years of data acquired from the company. But the company practices had certain biases in them, right? So these biases, this preference for male employees at the top level then became a part of what the algorithm would be searching for. So we've seen these situations. And this has now raised the attention of policymakers, of industry leaders, of academics to this idea of ethics.
 
We have biases and discrimination that are a result of using the technology. So is there a way for us to incorporate ethics earlier on, to have it kind of guide technology development and implementation? And then we have a variety of corporations, institutions, legal bodies as well who are now creating ethics principles or ethics guidelines. So the European Commission has created their guidelines. We have the Partnership on AI. Companies like DeepMind have also created principles. So we've been doing this for a year, year and a half, and there's probably 44 different sets of principles out there. But now we're in a new stage. We're in a new phase. Now, we have these principles. We have these guidelines. How do we implement them? What do we do in practice? Because it's one thing to say do no harm, but what does that mean for the data scientists or the data analysts who have a conflict or tension between values. If I increase accuracy, I might decrease fairness. Or alternatively, if I increase fairness, make sure everyone is equally represented, that could decrease the accuracy of the algorithm. So what do we do when we actually put these principles in practice? So that is the main theme or topic of today's panel. Moving from principles to practice.
 
And now I'd like to go through the panel and have them each explain what is their experience in this space. Are they contributing to making principles and guidelines? Are they working to implement principles and guidelines, or perhaps a combination of the two? So Andreas, shall we start with you, and then we go down the line.
 
>> ANDREAS HAUPTMANN: Thank you very much, Aimee. My name is Andreas Hauptmann, and I'm director of EU and International Affairs in the Danish Business Authority, and that's part of the Danish government. And before I move on, I unfortunately have to make a small disclaimer since some of you may know that we had elections very shortly in Denmark, and we are only in the process of forming a new government. So I am without an acting government at the moment. So what I will be saying will, of course, be -- maybe it will change once we have found a new government, have new priorities. I don't expect to, but that's, of course, a real possibility.
 
So moving on. I think data ethics is a very important issue. I think in Denmark, we started the work in this field 18 months to two years ago, at least from my perspective on the governmental side. We had -- we formed an expert group on data ethics. I served as part of the Secretariat, working with the expert group. This November they delivered their recommendations to the Danish government, and the Danish government has then made a policy towards data ethics that I am now working on implementing in Denmark. So we are working intensely at the moment with trying to implement data ethics, putting it into practice in Denmark, especially among Danish businesses. I won't say we are that far, but we know the direction we are working in, and hopefully we will be able to get back to that during the session today.
 
But maybe just to say a little more, what was the notion from this expert group and from the Danish government, it was that we need trust in order to ensure the uptake of all these new technologies in our societies. There's a lot of benefits for the new technologies, but we need to build on the trust and at least not remove the trust that is there at the moment. Furthermore, I think the notion was that we need to find a way to work with the market instead of against the market. So we need to find a model working with data ethics that is not a barrier for innovation but enhances innovation, to show that competitiveness and data ethics can work together, and hopefully we can make data ethics a competitive parameter for acting on the market. So hopefully together we can ensure that you can make money on being data ethical. I think we have some sort of progressing work in this field, but we need to push it further. So that's the main focus that we are working with from the Danish perspective. I think that's what I wanted to say to begin with.
 
>> ELETTRA RONCHI: Thank you. Is this working? Okay. Great. So I'm Elettra Ronchi. I coordinate work in the organization on privacy, data governance, and security. And what I'd like to do here is bring some more perspectives to this discussion and particularly referring back to work that we've done recently on artificial intelligence and particularly the development of the first intergovernmental recommendation that was adopted by OECD member countries and non-OECD countries, so 40 countries to date, on the 22nd of May, and I will get to some details on the recommendation. And also drawing on work that we're doing right now in reviewing the privacy guidelines of the OECD. Now, I don't know how many of you are familiar with the OECD privacy guidelines. They were first established in 1980. They set the scene for the development of privacy regimes and legislations in many countries around the world. They set out minimum standards. And we are reviewing them because of exactly some of the main and core challenges that are now emerging in the context of artificial intelligence and the context of the Internet of Things and big data.
 
But getting back to what you said, I think I would like to raise the fact that we're looking at perhaps data also in a different way in terms that certainly there is a change of scale, but there is a change of scale in terms of both opportunities and risks. And I think it is in our -- our goal at the OECD is to make sure that we have the enabling environment to harness those opportunities while addressing the risks and concerns. And what we're talking about opportunities, perhaps it's a good reminder of the fact that with artificial intelligence now, we have a tremendous opportunity in many, many sectors of the economy. Just look at the health care sector. I know you've been working in the health sector. We have now greater opportunities for outperforming humans with diagnostics in -- for example, on cancer, surveillance. We have examples of applications for smart transportation. So let's look at the whole thing around this tension and the enabling environment that we need to put in place, and certainly trust is fundamental. People need to be at the center.
 
So for us that is the issue. And another final point that I'd like to make in this introductory discussion is that we look at risks in two ways. In the way that you've presented, in that there is intrinsically some risk that we did not anticipate with big data and with artificial intelligence, in the sense that certainly our original privacy legislation did not anticipate all of the challenges that we have now in relation to data protection and privacy. And that's why there are very promising regulatory developments right now. And certainly in relation to discrimination and bias. Just as an example, recently in the United States, the very first city in the United States, San Francisco, banned the use by government agencies of facial recognition technology. I think there we already start to see some concerns about civil liberties. But there is a whole other set of concerns at the OECD we are particularly aware of, and that are now reflected in the recommendation, and that is around the development of artificial intelligence. The fact that in terms of power structures, who holds the data? Who is investing and where is the investment? We're seeing concentration in a number of countries, and concentrations at different levels. We are hearing about research in artificial intelligence which is carried out with gender imbalance. And I think that there is an incredible and important role for public policy there. And that is why the OECD has put forward a recommendation that -- and I will get to some details later -- in part looks at human-centered values and the importance of stewardship in artificial intelligence but also what type of public policy needs to be in place to make sure that we have a level playing field and we are able then to develop artificial intelligence in the interests of everyone. So let me stop at that point.
 
>> MEERI HAATAJA: All right. So my perspective and background to this topic. I've been in the AI space or data space for my whole career and this AI ethics popped up, like, three years ago, I would say, while GDPR was really starting to take off and we were preparing for that. So at the moment, I'm CEO and co-founder of a technology company where we're building a very practical platform for organizations who want to develop and use AI in a responsible manner, to actually put together transparency and agree on accountabilities in their supply chains. So that's my daily work, working with that kind of organizations who already know that this is a really important topic for them and want to really take it into the practical processes. I'm also part of IEEE's ethics work in this space, chairing the ethics certification program where we are really looking forward already this year to put together the first certification regimes for the AI ethics space and specifically for transparency, accountability and algorithmic bias. So those are the two. But talking about these AI ethics principles and ethics by design, I was also part of Finland's national AI program and leading the ethics working group there, and during the last year I was involved in many, many -- actually, we had the ethics challenge in Finland and basically challenged, like, all the major organizations who are developing and using AI to commit to ethical use, and we encouraged that the first step they could take there is to actually start thinking about these ethics principles and look from the organization's existing values perspective and take all the best practices that we have. There are lots of those. So building from those their own ethics principles, like what is the prioritization, what is important in our organization, our context, and there are 70 organizations who took that challenge in Finland. And I would say roughly half of those already have or are working on the ethics principles. So quite many principles. I've been reading lots of principles, and what is encouraging is that it's a mostly similar list. For all the work in this area when taking it into practice.
 
>> AIMEE VAN WYNSBERGHE: Awesome. Okay. So before we get into, you know, obstacles or benefits of actually employing or implementing the principles, maybe we could say something about the content of the principles. So could you say if you are working with principles or specific guidelines, what would you consider to be the most important or, you know, top three principles that you are concentrating on, and why are they the most important?
 
>> ANDREAS HAUPTMANN: Should I start?
 
>> AIMEE VAN WYNSBERGHE: Actually, whoever wants to start, too. But if you want to jump in please feel free. We don't have to do the same order each time.
 
>> ANDREAS HAUPTMANN: I think in Denmark, we have six values, so to speak, that's our guidelines. Autonomy, explainability, dignity, equality and fairness, responsibility, and innovation. And if I have to -- if I should name one of them, I think this explainability which is openness and transparency, I think is absolutely crucial in this field because ethics is a difficult concept. It means different things for different companies and in different settings. So I think transparency is the key in order to enhance the use of data ethics.
 
>> AIMEE VAN WYNSBERGHE: So do you mean -- sorry, do you mean explainability in terms of the algorithm, we're able to explain the decision that the algorithm is giving the human if it's a decision-making algorithm, or you mean explainability of what ethics in the company means?
 
>> ANDREAS HAUPTMANN: Actually, both, so to speak. I think it's very difficult to have explainability one to one on an algorithm that is immensely complex. But you need to explain what decisions are coming out of this algorithm so people can understand it. And I think you also need transparency on what are the data ethical policies, so to speak, for different companies, organizations and so on.
 
>> ELETTRA RONCHI: I think that brings us to the fact that there are lots of complementarities in the guidelines. The fact that the OECD builds on a lot of the work that had been done since 2016 in this area and certainly that is encouraging, and that is why we were able to also build such a wide consensus. By the way, on the 8th of June, the G-20 drew principles from the OECD guidelines so we're talking about really an international scale here. But we do have five value-based principles, and they are anchored on, first of all, they aim to promote sustainable development and inclusive growth, and they're anchored on human-centered values, transparency and explainability, and in fact I would choose the same, and I'll talk about it just shortly, robust and security and safety and accountability and we're very interested in the issue of accountability and I'll talk about it later in the context of the review of the privacy guidelines. But when we talk about transparency and explainability, because we really think that there should be and there must be transparency and responsible disclosure around artificial intelligence so that people understand how specific outcomes come about. So it is really because we want trust of people in this. But it is not easy. And we know that some countries are now putting in place some specific programs really to understand particularly -- we're not talking about narrow AI because perhaps we need to define what artificial intelligence we're talking about. We're talking about more the broader AI with machine learning, what we call opaque AI. And in this case we really need to put in place projects, research. We need to think and multistakeholder discussion about what we mean with explainability.
 
In the OECD, we have also started recently a new program of work, which was mandated in some ways by Japan, on new governance models, because with explainability in many ways comes also auditability. The auditability of algorithms. How can we get to that? There we're talking about some legitimate concerns for business -- coming from being surrounded by businesspeople -- in relation to intellectual property rights. So transparency and explainability are very important, are core, but there are legitimate concerns in terms both of privacy and intellectual property rights that we need to address. The United Kingdom, particularly the Information Commissioner's Office, has just launched a project called ExplAIn, exactly to start to look more in depth into this issue, and we need -- we need the technologists, we need the technical experts, the legal experts. We need business. This is a very important set of principles, difficult to implement.
 
>> AIMEE VAN WYNSBERGHE: Just out of curiosity, following up, this is the second time we've heard explainability. We're also in a situation where we have extensive terms and conditions, right, where they give us enough information. And so we give permission to use our data or to scrape our data, and then we get access to a certain site. When we talk about explainability, do you think that there could be some sort of obstacle there that, you know, if it's done in a way that we don't really understand what's going on, we could just follow in the footsteps of, yes, we click the terms and conditions. We agree. It's been explained to us. I agree. And then we're sitting there stuck?
 
>> MEERI HAATAJA: Yeah. My response would have been exactly the same, transparency and explainability, I guess, that covers sort of part of those. So definitely. And the reason for this is that those are basically enablers for us to, like, see if the other principles are in place or not. So it's sort of like foundational principle. But I think when talking about transparency, we really need to think that there are different levels of transparency and different audiences for it. So there is no, like, one answer to this one but, like, there needs to be an understanding of the customer citizens, the users of the service, like, require different kinds of -- different kind of -- different level of transparency, which is probably quite high-level description of the system, like, and what is it used for and probably also in the future some indication if there has been a trusted third-party auditing how the system actually works or something like that. Then if we talk about the transparency, the auditor or regulator who does a review on it, like, requires a totally different level. We need to think about -- we actually, like, when we are thinking about this, we are envisioning this what we know from airplanes, the black box concept so that we need to have the traceability or basically retrospective visibility into how the system worked. And if you think about that, it needs to be pretty detailed so that understanding about how the system from a technical perspective worked in a certain moment. So transparency as a term covers all these different perspectives and all the business processes around it and, yeah. My main concern at the moment is that how do we ensure the interoperability and similarity of our interpretation on the transparency and we sort of standardize how we define that because that's really critical in order for transparency to actually provide that value that we are looking from it.
 
>> ANDREAS HAUPTMANN: I think that's a very -- it's a hard question because we all know the cookies and home pages, we all know these terms and conditions, and you get bored before you even start reading. I think one of the things we're working on in Denmark is we're trying to make a data ethics seal in order to simplify this, to make a seal that's saying this -- these are companies that are working intensely with data ethics and taking a responsibility for the responsible use of data to actually guide consumers and businesses in a very easy manner on the market. So I think some sort of seal that could actually -- that could make the push and make it very easy to kind of have parameters working with you in this field.
 
>> MEERI HAATAJA: Can I ask a question? Is the seal for the organization or for the system? Is the seal for -- is it the sign that the organization, in general, follows ethical processes in development and using AI, or is it for one specific service AI application or system?
 
>> ANDREAS HAUPTMANN: Well, we are debating it. At the moment we don't have the seal fully developed yet so it's a work in progress. And this is one of the big debates going on, do you need to make it on the app, saying this app is data ethical, or do you take it as a company, saying this company is working with data ethics in an ethical manner? I think we are leaning towards the company version or organization, saying this is data ethical. Also because it's very hard to work with data ethics without having some sort of organizational criteria that the organization is working with data ethics and taking it very seriously.
 
>> MEERI HAATAJA: This is a really interesting topic because, like, the certification I'm part of, we're specifically focusing on the system level, like this specific system is getting the certification, but I think there is room at both levels. This is also, like, very important question, like, what is the ethics of this organization who put together these stamps and, like, what the measures and, like, how do you actually maintain those in time because the systems change, and then it's really good processes for actually reviewing that seal is valid in time.
 
>> ELETTRA RONCHI: I think you need an independent party putting in the seals, and you want to make sure that there is trust in how the seal is assigned. And it is an issue that we're looking at also in the context of the review of the privacy guidelines in terms of, you know, certification, for example, because you want -- there is a use in seals and certification and in creating an environment of trust. So I think there's a lot that we can follow there with your pilot. We're going to certainly continue some dialogue with you on that.
 
>> ANDREAS HAUPTMANN: I think that's right. And I think that's why we, as a government agency, are not making the seal. This is a multistakeholder process. So we are only initiating the work and starting to push it off the ground. And that's also why I don't decide on the criteria. I don't decide on the governance. I'm merely putting some resources into making the foundations and then the multistakeholders in Denmark will have to take this further. So we have some support for the start-up process. And then it will have to move on from there. And I think, of course, it's quite evident that hopefully we will have this ethics seal later this year, but we will need to update it regularly, especially in the start-up phase. And when we know the guidelines from the EU, they will, of course, need to be integrated into the seal somehow.
 
>> AIMEE VAN WYNSBERGHE: Interesting, then talking about the role of governance, the role of companies, the role of principles to help you arrive at something like a seal, what would you each say? What is the function of these principles? Should it be something that complements regulation? Should it be something that gets us to regulation? Is it something entirely separate? How do we make sense of having these principles when we also have policymakers? What does this do, you know, for them? And also, whose responsibility? Who is responsible for what in this situation? Should it be governments who are trying to run multistakeholder dialogues or sessions where they create such a seal? Should it be large organizations like the IEEE? Who is responsible for what in that context?
 
>> ELETTRA RONCHI: As I mentioned before, we built on a lot of work that had been done in 2016. I think the facts really speak for themselves. There's been so much -- I think you mentioned that there are about 40 ethical guidelines. There's been so, so much development in terms of work in developing these principles from academia, from business, from standards organizations, national governments. We know that there are about now at least 13 OECD countries that have put out national strategies that have integrated ethical principles. So the facts speak for themselves, that there's been a need. Now, you say, well, wouldn't law and regulation be enough? Well, in many ways the question that has been asked around is even if it is lawful, should we do it? And I think that with ethics, we're trying to ask that question. Even if it is lawful, should we do it? And there's a lot of lessons learned from the history of dual use of technology, and particularly with AI, we've heard calls for thinking thoroughly about the consequences, Stephen Hawking, for example, warning that we're going toward dystopia. I think there's been really a call for that. But there are laws and there are regulations. There are privacy legislations. There is intellectual property law. There is redress. There is competition. So we have a frame for regulation, but clearly there is a need, and the facts show it, for broad public discourse around the uses and the consequences, which is in some ways fantastic. If we look also now at the European Commission, the work that has been done at a regional level, but also we have now UNESCO, who have announced that in November they're going to discuss the development of ethical principles. So there is a call for it, a need for it. That's the way I think about it.
 
>> MEERI HAATAJA: Yeah. I would look at it -- I think this is such -- such an important and wide theme that there is no one single, like, party who would be responsible for this one, but this is, like, everyone's question who is in this space, like works in the area of AI and use of it. That's really something that I think, like, so that means that we need to have regulators involved. We need to have the data scientists working in an individual organization and carry their own responsibility of this one. So -- and that also, the space, why we are seeing, like, so many different kinds of organizations actively working on this one, and I think it's a really positive thing.
 
I would say that my thinking has been, like, transferring a little bit, like, from self-regulation into realizing that now that, like, while thinking actually, like, more and more how can we enforce the implementation that we definitely need regulation to support that. So I wouldn't see this, like -- it's complementary, this laying down, like, excellent ground for thinking how to take it into practice and regulation is definitely one of the very important enforcement ways for taking principles and these guidelines into practice. But how you regulate that is a really good question because probably you cannot just, like, put these principles and, like, you do that. But, like, we need to think about this discussion about transparency is a good discussion, and that could be a good perspective for regulation as well. That is something that you can regulate. We also already have that as part of GDPR. We know at the moment, for example, FDA is considering how to change their, like, regulation or other frameworks for regulating learning medical devices in this space, and the theme there is very much transparency, how do we ensure that in time while these systems are in use, and they develop, they learn, how do we ensure the necessary transparency, like, in order to secure that these systems actually work in the right manner and safe manner. So I think it's complementary definitely, and we need to think more and more how to actually regulate this as well.
 
>> ANDREAS HAUPTMANN: And I think my stance would be that I agree with both of you. At the moment we need everybody to work on this together if we are to achieve data ethics, really making a difference in everyday lives. Of course, I'm coming from a government agency, so I am thinking a lot in terms of how do we -- how can we as a regulator, as a government, push the market and sense it further. That's, of course, the field I'm looking into because I think the dream would be that self-regulation would be sufficient. Everybody would just have a notion this is very important that it would be common sense and norms that's out there that guide everybody. Of course, we are not there yet, and we probably will never get entirely to that place, but we need to push in that direction and find the policies that can enhance that work, so to speak.
 
>> ELETTRA RONCHI: It's interesting that you're saying that because we just held a roundtable on accountability and what has worked and what has not worked in accountability in organizations. Exactly some of the issues that you raised at the beginning show that we've got to do better with accountability. And it requires strong enforcement. And this is not just a call from regulators. It is, in fact, even the companies themselves that are talking about, well, we need accountability because it is, in fact, a core regulatory mechanism. But we need enforcement to really put in place robust accountability. And in the context of AI, we're talking about accountability 2.0 that is a new form that integrates really the questions that you are raising on data ethics and what does it mean for an organization to have robust data stewardship and how can they be responsible to the users. So I think these are very big questions. We are collaborating very closely with the international conference of the privacy commissioners right now and also bringing forward some new concepts in relation to accountability and data stewardship, and we're very proud of that.
 
>> AIMEE VAN WYNSBERGHE: So I think at this moment now we're just past the 11:30 mark. Now I'll open it up to the audience who want to ask questions. I know that we have two questions already. And we can take that and also continue -- continue the discussion. Is it okay to go to that microphone there? Yeah, sorry.
 
>> Absolutely. Good morning. I work for Capgemini, which is a company that actually helps implement AI solutions. And I feel that we're missing the second part of this discussion, which is from ethical principles to practical solutions. Now, I feel that we're a bit cozy in the fluffiness of this discussion. So they're very vague concepts. And I've been hearing you talking about that and hearing you talking more and such, and one of your colleagues on the expert group also, I think it was Thomas Metzinger, who was critical of the solutions, and he called it ethics washing in Europe. There's nothing very useful about it. And we call it trustworthy AI, but AI isn't trustworthy or not. It's the people behind it and such. So one of the key questions is in terms of practical solutions, what works and what red line would you draw in terms of implementing AI solutions?
 
>> AIMEE VAN WYNSBERGHE: Also I should say that Thomas is a huge fan of the guidelines. He did have a problem with some of the procedural items. He acknowledged the fact that industry was at the table, but he also says this is the best thing that we have out there. So sometimes that was misinterpreted in the press, but please, take the floor. It's what are practical solutions.
 
>> ANDREAS HAUPTMANN: Well, I can start off. I think this is very difficult. And we need to make it to a practical solution. I think at the moment there's a lot of political momentum around guidelines and around principles. But we need to make it into actually practice on the ground at the moment. And that's difficult. Especially with a term like ethics that is -- means different things to different people in different situations. So I think I mentioned the seal. I think that's one way of doing this. I think another thing we're working on in Denmark is to make data ethics part of the reporting obligations for the biggest companies when they make their annual accounts. Then we introduce it as part of this corporate social responsibility to say explain do you have a data ethics policy or not? And if you don't have it, you need to explain why you don't think it's important for your company to have a data ethical policy. So I think that's one way of pushing it into practical solutions. But still having autonomy for the business to decide for themselves what does data ethical mean for me. And it also creates some transparency to kind of follow up on the ethics-washing issue. It doesn't maybe solve it, but it will make a push in the right direction.
 
>> ELETTRA RONCHI: Well, there is an advantage being at the OECD. We have 34 countries, and we can exchange experience, best practice. And so what we put together and what we're planning to launch by this fall is the AI observatory, which will bring use cases. And I think that when you start to talk about how do you bring principles to practice, you need to look at use cases. And there is not one, let's say, solution and a single solution out there. And I think that that is going to be a very useful tool. The other point I would like to make and that's a disclaimer. And I should have probably put out the disclaimer for me at the very beginning. I'm not -- what I'm also presenting in part is clearly the OECD perspective, but also some of these comments like the one that I'm going to make now, it doesn't reflect OECD membership's opinions. But I think that there is a need to build bridges between technology and ethical values. And I think there is an enormous opportunity. And the title said ethics by design, in integrating ethics in technology, and I think you're working very much to do that, Aimee, and maybe you could step into there. And I think that we're certainly not abstract there. We are integrating in the whole process. And when we're talking about accountability 2.0, we're talking about a risk management approach that is integrated and integral -- and integrates ethics. So that's not abstract. That's not just ethics washing. So I'd like to leave it to that.
 
>> MEERI HAATAJA: You come from a perspective and basically the question you're developing for your customers these AI systems and the question is what do we do so that we are doing things in a right way. And that's a very practical way -- practical question, how do we document these systems that we develop? Like how do we -- what needs to be done? And that's a really serious question for vendors because, you know, everyone wants to do things in the right manner. So this is something that we are working on and that basically requires, like, a common way of documenting things, like the organization has their own building on all these guidelines and that this is the requirement list that we want all of our vendors to go through and document, and this is the information how -- and these sort of ways, how do we manage those in practice over time as well because it's not only one-time exercise to document it. But, like, it needs to be with all the parties who are involved in doing -- developing and maintaining, using these systems. They need to continuously contribute to that one. So that's the problem that we try to solve with our platform and practice. So maybe once you have a look at our website, because there is this practical tool, but I think this is very good and a difficult question.
 
>> AIMEE VAN WYNSBERGHE: So just to elaborate on that, the European Commission, they're now in the piloting stage of the guidelines. So when we talk about practical tools over the next month, we're now going to learn what practical tools companies are already looking at or could be looking at. So there will be a lot more to come there. There are things like MIT and Harvard have created a project called The Data Nutrition Project, so it's like creating a data nutrition label, the same way that you see the caloric intake, the percentage of fat, the carbohydrates, that kind of thing. This project is looking at how can we do that specifically for data. There's a paper out right now called From What to How by Luciano Floridi and colleagues over at Oxford University that goes through a variety of different practical solutions, privacy by design, but much more specific than that. There's also another paper by Mittelstadt and colleagues looking at mapping the ethical issues related to decision-making algorithms and proposing solutions. There are quite a few tools out there but it's also important to understand that they have to be very specific and targeted. What context are we talking about? Is this machine learning supervised? Unsupervised? Is this AI? So it's really difficult to say these are the solutions to the ethical issues that we see without being incredibly specific about the algorithm, the context, the application, the demographic that's being used to talk about it.
 
>> Do you think the red line -- do you think the red line question that Thomas asked is a relevant one, or is it too tangible?
 
>> AIMEE VAN WYNSBERGHE: I do. No, I think the red line question is also really important because he's accurate in his description of what happened that we began with certain terminology and that terminology changed. We also had to acknowledge that these are ethics guidelines. This was a separate document from the policy recommendations. So April 26th, which is next Wednesday, the European Commission, the same high-level expert group -- or Monday?
 
>> ANDREAS HAUPTMANN: June.
 
>> AIMEE VAN WYNSBERGHE: June. Oh, my gosh. What -- what just happened there? June 26th. So the second part of the document comes out. And these are the policy recommendations. And this is where we get into more of this topic of red lines.
 
What are the things that we must create sort of ethical constraints? But in the ethics guideline, that was meant to be, let's have a discussion. This is where we want to go. This is how we want to educate and inform policymakers about what we think is vital and important. It wasn't -- it wasn't our job to say don't do this. You cannot do this. It was our job to say, hello. Really what do you think? If we're pointing at the possible consequences. Yes. You were next.
 
>> Oh, please.
 
>> Thank you. First, thank you for the debate, even though we are in fluffy terms, I think this is one of the most constructive ones I ever heard. I'm from Czech Internet institute and the Oxford Internet institute, so the great starter for orienting oneself in all of this is the mapping the debate, ethics of algorithms, that's the paper by Mittelstadt. But let me ask two questions. Firstly, I am very interested in what Denmark actually means when you are talking about data ethics. Do you mean, you know, bias in datasets that AI is learning on, or what is your definition? How do you approach that? Because I'm a bit confused about what that actually means. Also, that connects to the fairness. We want it to be fair. But what is fair? Do you define it in some sort of way? Have you defined fairness? Is it, you know, the known bias or representability of datasets, or is it to be just to have the same injustice for everyone, for example, you know? And then the second part is my research actually focuses on different approach transparency in order to protect self-determination and freedom of thought in environments powered by predictive algorithms. That's very connected, but it's also somehow overarching. So it would enable us to kind of do something right now as we were talking about or you were talking about consent and terms and conditions and all of this. Explainability is a part of that. Accountability is a part of that. But coming back to the first question, that's why there are two. That would somehow suggest what the fair is and what the, you know, representative is when we have self-determination as the goal, for example, not to be somehow operating in a skewed environment, then we can actually apply those values to ethics of data or data ethics or whatever you want to call it because then in the end, it's people putting in values to code. That's what in the end happens. It's not, you know, system put into another system. So, yeah. What is ethics of data, data ethics in your view, and then these kind of overarching, long-term principles? Do you even think about them, or how do you approach them or transparency in that matter?
 
>> ANDREAS HAUPTMANN: Well, thank you very much for the question. This is difficult, to be honest. I mentioned early on that what our principles in Denmark, and we refer to them as values more than principles. So -- and I think that's the notion, it's very hard to be very, very specific because data ethics means different things, whether you are working with health data on individuals or you are working with traffic data or whatever is out there. Ethics means different things in different sectors and in different organizations and businesses. So I don't have one clear definition on what we mean. We have these values that are guiding. And then we are more working towards we should have business and organizations reflect on that -- on those values and say what does it mean for me as a business or as an organization? What is important for me with the stuff that I'm working on? And that's the whole notion of the work on this reporting obligation on data ethics from the biggest companies. They should make the data ethics policy and put it out there, and then it's up for the public to judge and the media to judge, are they living up to the policies they are making themselves, and is the policy adequate in comparison with the products they are putting on the market, so to speak? And also, that's why I think I started with saying the explainability and the transparency is probably the most important one in order to get anywhere in this field.
 
And then I also think it's important to say we are, at least in Denmark, still early in the process. So we are very aware that we will probably need to adjust this as we go along. We only just started this journey.
 
>> ELETTRA RONCHI: I think you raise a pretty important issue here. When we talk about data, we think we're talking about one monolithic thing, you know, "data". No, we're talking about a very heterogeneous set. And what we at the OECD are trying to do now is bring out some clarity with a data taxonomy and what we mean. And the other point you're raising -- and it sort of explains why data ethics is, in some ways, in its infancy, I would say, starting now -- is exactly that it needs to be looked at in context. And we also need clarity about the terms that are being used. We're struggling -- you're struggling inside a country, and we are struggling at an international level. The terms that have been around for a long time, like accountability, are still not fully understood. When we raised the issue of data stewardship during the development of the AI recommendation, there were questions from my colleagues: what does stewardship really mean? How do we translate it? There's an issue of translation here -- we're speaking English, but all of these terms need to be translated. And sometimes they don't keep the same meaning.
 
>> AIMEE VAN WYNSBERGHE: Yeah, ethics is one of those terms, and it also speaks to disciplines working together. I'm an ethicist, and I look at ethics in one particular way, so I applaud your question -- what is data ethics? -- because ethics is this personal development, this ongoing question of how do I be a good person, the good life, and so on. And then, you know, what does this mean for data? So I think it's not just what is data, but really what is ethics in this conversation. But I'm the moderator, so. . . May I have the next question.
 
>> MEERI HAATAJA: A very small example on this -- yeah, this question is a really good call for this space overall: we need to have a common understanding about how we interpret these terms. One really nice project that we are starting right now with the larger cities in Finland and a few governmental agencies is to define these together for Finland, and I really look forward to publishing them so they can be used in other countries as well. The question is: what is the citizen transparency that public sector players need to provide for citizens? So how do we interpret transparency in the context of public sector AI? And that will basically be a data model defining that, okay, this is the information that a citizen has a right to know about public sector AI use cases. And these organizations can take that and say, okay, these are the things that we will put in place and provide visibility into. So I think collaboration within the industries and the sectors, and agreeing together about these terms, is also extremely important. And that's something very tangible that individual organizations can work on -- not just wait for these really large global organizations to come up with their interpretations, but actually start agreeing on a national and international level on how we do this in practice.
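
''A minimal, illustrative sketch of what such a citizen-facing transparency record for a public sector AI use case might contain. The field names and example values are assumptions made for this sketch, not the actual Finnish data model described above.''

<syntaxhighlight lang="python">
# Illustrative sketch: a citizen-facing transparency record for a public
# sector AI use case. Field names and values are assumptions, not the
# actual Finnish data model referred to in the discussion.
from dataclasses import dataclass, asdict
import json


@dataclass
class AIUseCaseRecord:
    name: str                   # plain-language name of the service
    purpose: str                # what decision or task the system supports
    responsible_authority: str  # which public body is accountable
    data_sources: list          # categories of data the system uses
    automated_decision: bool    # whether decisions are made without a human
    contact_for_questions: str  # where a citizen can ask for an explanation


record = AIUseCaseRecord(
    name="Benefit application pre-screening (example)",
    purpose="Flag incomplete applications for manual review",
    responsible_authority="Example City Social Services",
    data_sources=["application form fields", "case history"],
    automated_decision=False,
    contact_for_questions="ai-register@city.example",
)

# Publishing the record as JSON is one way a city could expose it in a public AI register.
print(json.dumps(asdict(record), indent=2))
</syntaxhighlight>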
 
>> Fatel, MLI Group. I'm glad the previous questions were thrown at you, so to speak, because if they manifested anything, they expressed how challenging you are finding the debate. And that is because it's a blank sheet of paper. It's a new era. It's an era of unprecedented events. And let me just give you a preamble as well to put it in perspective and explain why we may need to add a bit of tabasco sauce to this. Not a bit -- a lot of tabasco sauce. MLI Group stands for multilingual Internet. So if you've heard of the term, I and other people have pushed the multilingual Internet since the '90s. In the last ten years we've been heavily focused on not just security but geopolitical risk. So here is where the tabasco needs to be brought into this conversation: today we have destruction-motivated cyber terrorists. These people are not interested in credit card details or data. They are coming in to destroy. We hear about nation state hacking. It's worse than that. We now have hacking for political purposes, to change the direction of nations politically and economically. Let me put it to you: last month at the ITU there was the conversation on AI for Good. In 2019, I was on a high-level panel on the $29 trillion per year digital economy and trade. The reason this has to be brought into the conversation here is that the challenges you're finding as the OECD, with so many governments involved, and that the Netherlands -- probably one of the more advanced nation states in legislation, but still challenged -- is finding in moving forward, exist because the threat to society is at an unprecedented level. Today you can bring a nation down to its knees without attacking a single critical national infrastructure or a single military base. So without going into more detail, the challenge that I see with AI and the debates we're having is that we need to accelerate how we move forward effectively. And I use the word "effective" loosely, because it does require moving away from the position of laissez-faire economics and minimum government intervention. That ship has sailed. We've already seen that that's no longer working, but many western governments are still delusional that the philosophy still applies, government-led and, you know. We've seen Cambridge Analytica, and we have seen so many other Cambridge Analytica-esque cases recurring, but nothing's happening. So let me just give the challenge to you, now that I've given you this preamble.
 
Imagine tomorrow the United Nations Secretary-General decides he wants to convene, just like he did on digital cooperation, some experts to help him: how do we address this threat to society from AI so that it becomes something that is channeled for good? With ethics, knowing that your ethics may be different from mine, may be different from hers, and God knows where we can go with this. And let's say 20 people are debating and discussing. In your opinion, what would be the first thing they need to discuss and agree on between themselves, which then becomes an ethos for everybody else to start following as a template, a starting point? So to get the conversation going, let me give you my suggestion. It needs to be the philosophical position of what we want AI to do, and that becomes something that governments and multilateral organizations start saying: this is our position, and that's where we start. Because as it stands, if we leave it to the technical developers, guess what? Probably the whole bottle of tabasco will be needed. The challenge that I'm presenting here -- you're not alone in that challenge, because 21st century living no longer matches 20th century modalities of regulation and problem solving, and it's not just about the data. The data is just a consequence. So this is the challenge to you. How would you start -- where would you start? Would you start with the philosophical position, so that you get as many of these businesses and governments to agree, so that it starts filtering through? And by the way, this will be top down, not bottom up. It cannot be bottom up. Please. My two cents.
 
>> Let me quickly jump in and add to this: why do you trust governments, then? Because a camel is a horse designed by a committee.
 
>> AIMEE VAN WYNSBERGHE: And which governments?
 
>> You know, I'm glad you jumped into this. Let me just throw in another bottle of tabasco. Add populism. Talk about trust of governments. We now have governments that talk about "me first". Well, what does that mean? It doesn't mean that they are now not going to be involved in any agreement on a philosophical basis about what AI does, because they feel they can leverage it to advance their cause. It's a challenge. This is part of the challenge. This is part of the debate. And I'm not professing there's an answer. But I'm saying if we don't start with what you and I can agree on about what ethics are, we're not going to get anywhere. That's my two cents.
 
>> ELETTRA RONCHI: Well, I think this discussion started not only at the OECD; it started at UNESCO and it started at the G-20 level. I like your tabasco -- even though I have a hard time drinking drinks with tabasco -- but in some ways I think a lot of that reflection is going on. So you're not telling me anything totally new. And at the same time, these issues are not totally new historically. Just look at the dual-use questions with genetic engineering, all the discussion around bioethics. What have we learned from there? I must say, in my past life I dealt with the World Health Organization on an issue that perhaps had similar resonances: xenotransplantation. And we learned at that time that moratoria don't help. You've got to engage a global discussion. I think what we've started now is a global discussion, and a global discussion knowing also that the covert usage of AI, the dark side, can be exactly what you said. And I don't think that these are issues that governments are not looking at. There are a lot of hopeful messages in the 8th of June G-20 statement, and I invite you all to look at it; it, again, draws on the OECD principles. Clearly we are living in a digital era where, as you said, cyber-attacks can have major consequences on critical infrastructures.
 
>> Catastrophic.
 
>> ELETTRA RONCHI: Catastrophic. I agree with you.
 
>> If I may interrupt for one second.
 
>> ELETTRA RONCHI: And this is increasing awareness. So at our level, with the instruments we have, we're trying to engage as much discussion as we can on that. And perhaps we need to look back in history at the tools that we used with other technologies. As I said: biotechnology, genetic engineering, the nuclear arms race. I think there is something we can learn from that.
 
>> If I may interject before the other panelists give an answer. I'd like to think that something was learned from this, because the tabasco wasn't really to flavor the drink. It really is the equivalent of a sense of urgency. There is no question that what you said is absolutely right: there's a lot of conversation. UNESCO, everybody's having conversations. The challenge here is that we need to come to some kind of format upon which we can start working. Otherwise it's goodbye for the nation states, the stakeholders, the people whose lives end up getting devastated, because what is pressing on society today -- and AI could be leveraged for this -- is imminent. So when we discuss -- I'll give you a simple example. What's the difference between Al Qaeda and ISIS? I ask that question at a lot of briefings, and people come up with a lot of answers, which are relevant. There's no difference. The only difference is that when Al Qaeda was in its heyday, it did not have the technology and the ability to bring a city down to its knees. Add to that the tabasco of the sense of urgency and AI, and then you're seeing the challenge to society. So what I'm calling for here is an accelerated collaboration between all of these conversations, so that we can at least agree at a high level on what it is we're identifying, so that maybe we can come up with a solution. My two cents. It's only meant to be thought provoking.
 
>> AIMEE VAN WYNSBERGHE: Just seeing all of the questions you have, do you have like a one-sentence answer to that?
 
>> ANDREAS HAUPTMANN: Maybe two sentences, just to say I don't know much about cyber terrorism, so I don't work with cyber terrorism. I think that's a whole other field than what we are talking about here. We are talking about data ethics in the everyday life of companies and organizations that are not trying to take other states down but are trying to work with whatever it is they are doing. I think cyber terrorism is something completely different. But I agree with you that we need digital cooperation on a whole other level, and I think we need organizations like the OECD and the EC to take this further and build some sort of consensus on an international level. I think that's happening. And, of course, I think we all agree that if they could pick up speed, everything would be good.
 
>> AIMEE VAN WYNSBERGHE: Okay. So just because of time, I go to the other questions, and then we can always talk more at the end if we have more time or we find you at the coffee break. So I do the next question over here and then I go to this side. Is that okay? Because I have seen them for quite a while.
 
>> Hi. John Peters speaking. I'm an AI researcher. And my first thought is: why not bring more tech people to the table? Because the topic you're discussing is ethics by design, which means that you'll probably be discussing guidelines, things that you could do from a private perspective with implications for the companies. And my question is: do you feel that we are still looking at this in terms of the application effects of the systems, or should we go a little bit more technical and try to specify and regulate how things are being done? Because those are two different things. You talked about explainability, and if we look, for instance, at the very specific case of neural networks, they work like a black box. So you could try to go there with research and create methods that provide you with a plausible explanation of what is happening, but you could also look the other way, try to see the AI environment as a whole, and try to enforce these guidelines on the outcomes of those systems. I don't know if you have a position, because usually legislation is better when you say what things you can implement or not. But in this case, in this area, I think we would need a more general solution towards the definition, because then people or companies would start saying, okay, this is an AI system, wherever it suits them in terms of legislation and so on. But is there a gap that you see here between the legislation, the developers, and the systems that are really implemented? Thank you.
 
>> MEERI HAATAJA: Yeah, I could comment on that one. Exactly these questions that you are raising are the reason why I'm personally really looking forward to working on sector-specific use cases and looking deeper into different sectors -- like, how do we interpret, for example, this question of transparency? What are the exact uses, for example, in the health sector? What kinds of models do we use there? And then actually start to interpret in that specific sector context. I personally think that we need to be at that technical level in order to -- basically, the question is how we get the confidence so that we actually know how the system worked and works at the moment, and that doesn't come from high-level descriptions. That requires the data, technical visibility into what goes in, how the model works, what comes out of it, and how that develops. But that's something that we are working on. It really requires that the players in different sectors start to actually work on a technical level on this. So I'm really interested to discuss further if you are interested in that -- very important.
 
>> ELETTRA RONCHI: Regulatory agencies are very aware of that. And privacy enforcement authorities are starting to hire technical experts just like you. And I think that that is exactly what needs to be done right now. You know, regulation needs to pair up with expertise in technology.
 
>> AIMEE VAN WYNSBERGHE: Just to also add, it's important to understand that you can't program ethics, so ethics as such isn't something that we're trying to translate into technical requirements. The values that ethics talks about are what we're trying to translate into technical requirements. And then I think that's also an important thing to understand: ethics is the process of deciding how we interpret this value, how we prioritize this value against that value, how we understand that values change. So that is ethics. And you don't put that into the system. But you look at the values that are going into the system. Okay? Okay. So we go to this side and then I come back to you. Yeah?
 
>> Thanks very much. (Away from mic) of the interior. Two quick questions. Firstly -- you touched upon the idea of these values, because you've been talking about transparency and explainability, and of course that is something we need in order to talk about what all these systems do, but it's not the core of what we want. We want it to be fair and humane, et cetera. So how do you think we can come to actual guidelines, principles that developers can work with in these specific situations? Because now all the regulations will be focusing on ensuring explainability, but maybe if we focus on that too long, then the ship will have sailed on all the other things. And secondly, what Mr. Hauptmann already touched upon is the different areas. So now we try to regulate AI on a broad scale, but as was touched upon by the previous speaker as well, in health care, in infrastructure, et cetera, all the AIs do different things. So when we talk about fairness in a health care situation, we might need something completely different than we do in an infrastructure or justice situation. So why are we still trying to regulate it as AI rather than looking at AI in a health care context? Thank you very much.
 
>> ANDREAS HAUPTMANN: Well, you are, of course, right. The main thing is not the transparency in itself. It's an instrument for getting where we would like to go, so to speak. But on the other hand, the values need to be rather generic, and then you need to translate them into different organizations, different sectors. So I think that's the main thing. What we will probably see is that we will move from guidelines and principles on a generic level to more sector-specific ones in the time that is coming. I think we need to do that. And we need to make practical tools that help companies and organizations working within specific fields to move forward. But I don't think that means we can't have values on the broad scale -- just that these values mean different things in different settings. Yeah.
 
>> MEERI HAATAJA: One smaller perspective on this: I fully agree, the themes that you are raising are really important. What we haven't discussed is the risk level, the differences between applications and their risk levels. Because I think one of the main concerns is that, while we start to look at governance and put these practices in place, we do overkill for many of the applications that basically have a very low risk level. So this is also something where I really look to the leaders and regulators from specific industries to build this understanding of what the applications are and what their different risk levels are. And obviously we should put our greatest effort into those industries where we feel that the risk level of using AI is higher than average. That's the reason why I'm interested, for example, in the public sector cases overall. That's definitely something that we are all concerned about: the health sector, transportation, defense, definitely. We need to have our own interpretations of these topics, but also make sure -- wasn't innovation in your list of principles as well? I think that's really, really important: securing innovation through understanding that not all AI applications or systems have the same risk level, and we don't need to push them all through the same quality measures.
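
''A rough, illustrative sketch of the risk-tiering idea raised here: heavier governance reserved for higher-risk AI applications. The domains, criteria and tier descriptions are assumptions made for illustration only, not any regulator's official scheme.''

<syntaxhighlight lang="python">
# Illustrative sketch: sorting AI use cases into governance tiers by risk.
# The domains, criteria and tier descriptions are assumptions, not an
# official classification scheme.

HIGH_RISK_DOMAINS = {"health", "transport", "defence", "justice", "public benefits"}


def risk_tier(domain: str, affects_individual_rights: bool, fully_automated: bool) -> str:
    """Return a rough governance tier for an AI use case."""
    if domain in HIGH_RISK_DOMAINS and (affects_individual_rights or fully_automated):
        return "high: audit, documentation and human oversight required"
    if affects_individual_rights or fully_automated:
        return "medium: documented risk assessment and transparency notice"
    return "low: basic internal documentation"


# Example: an opening-hours chatbot versus a triage model in health care.
print(risk_tier("customer service", affects_individual_rights=False, fully_automated=True))
print(risk_tier("health", affects_individual_rights=True, fully_automated=False))
</syntaxhighlight>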
 
>> Hi. My name is (Inaudible). I represent Humanity of Things, an institution founded by civilians who are worried about a basic concept: literacy. We talked a lot about data, and we believe that governments can regulate and companies can do a lot, but citizens need to be empowered. And for that, critical thinking needs to happen. And critical thinking comes from knowledge and wisdom. So my question for you is: what is the effort at this moment to promote -- I will call it literacy 2.0, maybe. Do we know what we need to know? And the second part of my question is regarding regulation. I'm also a lawyer. And I would like to know: are we on a path to having a treaty soon, the way we had with other situations throughout history? And regarding comparative law, which we discussed, for instance, with human cloning -- are we prepared already to use all the knowledge that we gained in other areas and adapt it to the current challenges?
 
>> ELETTRA RONCHI: Thank you for this question. The set of five recommendations for governments that I did not list before, which is the second half of the OECD recommendation, really lists national policies that go toward what you just said. Just to list them: investing in artificial intelligence research and development; fostering a digital ecosystem for AI; shaping an enabling policy environment; building human capacity and preparing for labor market transformation; and international cooperation for trustworthy AI. And within this, a core role is played by digital literacy. The OECD is also producing a digital skills strategy. I think you point to a very important issue there, and definitely it's core. But we are also looking at this from another angle, and that is the children online recommendation. We need not forget that there are vulnerable populations, and children are one of them, that we are now looking at very carefully.
 
Now, in terms of a treaty: we know that all of this discussion is not just happening at the OECD. It's happening at the G-20 level. It's happening at the UNESCO level and certainly at the United Nations level. So it is not in my capacity to comment on whether we are on the path to a treaty, but certainly there is a lot of momentum and a sense of urgency that this discussion needs to be lifted to a global level.
 
>> ANDREAS HAUPTMANN: And maybe adding to that, one of the initiatives we are working on in Denmark is to build up common knowledge in this field. So how do we build up digital literacy, so to speak? An example is that this March we started having, as part of the curriculum in public schools, in primary schools on all levels, pupils being taught in this field. Not data ethics specifically, but the whole notion of this digital world, in order to try to make them agents and not passive consumers once they grow up. This is a pilot, because it's rather difficult to do this in the best manner. So it's being done at 50 schools at the moment, and then it's being evaluated in order to find out how to do this the right way. And it's also being introduced in higher education at the moment. So those are some of the concrete answers.
 
Then your part 2, I think it's difficult to know if we are ready for a treaty, but I think there's a lot of momentum behind this agenda at the moment. So I think if we could somehow find out what is the area where we definitely need regulation, I think there's a momentum there at the moment, but it's very difficult to say what should we regulate and what should we not regulate. Getting regulation right is harder than it seems.
 
>> AIMEE VAN WYNSBERGHE: In terms of literacy, look up Jim Stolz, who runs the programs online trying to teach the average individual what AI is. Was it in Finland or Denmark that he ran the first program, and now the next one is in the Netherlands? Do you know which one I'm talking about?
 
>> MEERI HAATAJA: I'm not sure what you're referring to exactly, but yeah, I was going to say this Elements of AI is something we are really proud of. The target has been that, actually within one month or a few months, we get one percent of Finnish people educated on the basics of AI, and now lots of other countries have also taken it up. It's a free online course, with six sections. So if you don't know it, that's a good exercise for all of us for the summer. It was the first course on actually familiarizing yourself with AI. But this is extremely important -- both the skills and the information people need in order to actually have agency in this AI-driven society.
 
>> Hi. (Away from mic)
 
>> AIMEE VAN WYNSBERGHE: I think they turned up the mic.
 
>> Seeing that you all agreed on the explainability of these systems, and having read Floridi and studied ethics, I still would like to ask: how do we sustain explainability when the nature of algorithms is complex and when companies do not want to share their things? One more question -- the second is about the data seal. What are the categories or certificates or values that say an application is data ethical? And the third question is about self-regulation, which I'm so interested in, because there is a risk of overregulation that we are facing. So what are the ways that we are going to make self-regulation sufficient? So, two hard questions and one more. Thank you.
 
>> MEERI HAATAJA: I could comment on the first one. It's a really important question, because it is typically used as a counterargument to transparency -- basically saying there is no means to build transparency, or to push that effort forward, because of this trade secret problem. We are approaching it in the way that we need to secure transparency so that we can provide it for those parties who need it. So that means that not everything needs to be open for everyone. It means that there need to be third parties who do these reviews and audits, and there need to be mechanisms and contracts for providing visibility into these precious assets for the moment when the audit is being done. There needs to be transparency for regulators, who can request that access. Then obviously another question is what level of transparency we need to provide for citizens and customers, and that's a totally different level of transparency from what is required for these other parties. So I don't actually see this as a problem in this context. It's just about approaching transparency from the perspective that we need to have mechanisms for providing the reasonable level of transparency for those stakeholders as they need it. So that's very important, I know, for this question. I was probably focused on the first question and I'm not sure if I remember the rest of it.
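
''An illustrative sketch of the layered transparency described here, where different stakeholders get different levels of visibility into an AI system. The roles and artefact names are assumptions made for this example.''

<syntaxhighlight lang="python">
# Illustrative sketch: "layered" transparency, where disclosure depends on the
# stakeholder. Roles and artefact names are assumptions made for illustration.
from enum import Enum


class Stakeholder(Enum):
    CITIZEN = "citizen"
    CUSTOMER = "customer"
    AUDITOR = "independent auditor"
    REGULATOR = "regulator"


# Trade-secret material (model internals, training data) is limited to parties
# bound by audit contracts or legal powers; citizens get plain-language facts.
DISCLOSURE_LEVELS = {
    Stakeholder.CITIZEN: ["plain-language purpose", "data categories used", "contact point"],
    Stakeholder.CUSTOMER: ["plain-language purpose", "data categories used", "opt-out options"],
    Stakeholder.AUDITOR: ["model documentation", "training data summary", "evaluation results"],
    Stakeholder.REGULATOR: ["model documentation", "training data summary",
                            "evaluation results", "source code access on request"],
}


def disclosures_for(role: Stakeholder) -> list:
    """Return the artefacts a given stakeholder is entitled to see."""
    return DISCLOSURE_LEVELS[role]


for role in Stakeholder:
    print(f"{role.value}: {', '.join(disclosures_for(role))}")
</syntaxhighlight>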
 
>> ANDREAS HAUPTMANN: I can maybe get back to your question about the seal. I think it's important to say we are working on it at the moment; nothing is set in stone. Again, it's a multistakeholder process, so we have, for example, both consumer organizations and business organizations working together, trying to agree on what we need: something that consumers will actually act upon and something that businesses can work with and see themselves in, so to speak. But I think it will be a mix of organizational criteria: that you have a data ethics policy as a company that refers to a set of specific values and what your company will do; that you have guidelines within the business or the organization; that you have a notion of how you are working with your subcontractors on this issue. So there is a range of different criteria we are working with at the moment. We hope to have the first list of criteria more formally endorsed by the multistakeholder setting at the end of August or maybe September. So then I will hopefully be able to say more about it.
 
>> AIMEE VAN WYNSBERGHE: Yeah. Okay. So were there any more questions? Yeah. Oh, yeah. Okay. And then we will wrap up and do recommendations.
 
>> Thank you very much. Just a follow-up question. I think you already answered the previous question, but from my own experience when I talk to the developers, be they in the private sector, be they in the public sector, they're just asking when am I doing it right? And if I then tell them just when you're thinking about you're doing it right, of course that's part of it, but they also just want to know when I'm developing an AI use in health care, can I or can't I use data about a client's ethnicity? When I take a classical example in the autonomous vehicle domain, should I prioritize the safety of the people inside the car or other traffic participants and when am I doing it right? And I feel the answers given so far, okay, we think about it and the market should answer the question within the companies. If we look at the traffic example, we'd see that, well, people would usually buy the cars that protect their families who are inside the car rather than the other traffic participants. So there's a role here, a regulatory role, on the specific domain for the government. And I feel that most of the things we are discussing and we see it at UNESCO, OECD, et cetera, all these levels, also within my own country that we're still talking about AI in a general sense. So just to reiterate that question, how can we actually answer these questions for developers who work towards these answers? Because I feel that we're, again, taking a bit too long with our time and we're saying, well, we have to discuss and we have to think about the thing that's right, but that always happens in these kinds of situations where we are missing usually the developers but also the experts in health, in infrastructure, and it's always digitalization AI experts. So how can we actually engage in those discussions better to ensure that we get answers that people can actually work with rather than say it needs to be humane, but then what does that mean?
 
>> ELETTRA RONCHI: Well, I'm going to use two words that maybe would require even further discussion. But you're calling for something that a number of countries call regulatory sandboxes. This is experimental. We need to start putting together sandboxes. Sandboxes mean that you are in a sort of protected environment where you're testing, be it new regulations or the current regulations, and you're testing conditions. I don't think that we have any real hard solutions to this. We need to experiment. And there are tons of laboratories out there. So I think regulatory sandboxes would be a response to that. And certainly it is also about very context-specific use cases. I know the government in Singapore is doing a lot of work on this, and we are now in contact to learn from what they're doing. And I know that the United Kingdom, particularly the ICO, has been putting in place a regulatory sandbox program. Perhaps it's not the answer you might have wanted from me, but it's an attempt.
 
>> ANDREAS HAUPTMANN: Building on that, I think you're absolutely right. We need more regulatory sandboxes where we try to find our way in a very concrete manner. I also think we will have to have hard regulation in different fields. I don't think ethics solves everything. I think ethics needs to work together with law in a lot of fields. I think autonomous vehicles are a good example. Of course, we have traffic laws. We need traffic laws updated to the autonomous world somehow. I'm not an expert in that field. I can't say how we should do it. And the same goes for the health sector. But I think we definitely have fields where we need to have regulation, and then we need ethics on top of that, moving on.
 
>> AIMEE VAN WYNSBERGHE: As a further concrete solution to what you're talking about: I used to work as an ethics adviser, where my job was to be at a technical institute, and I sat with engineers or data scientists and helped them come to terms with these questions. And it means looking at the very specific problem that they have and at the values of the organization or the corporation: how can we map these values onto this particular decision? So, is ethnicity allowed to be a classifier for prediction of a certain disease or syndrome? That has to do with whether or not ethnicity is relevant for the disease or syndrome that you're talking about. Going through these step-by-step questions is what an ethicist is trained, and supposed to be able, to do with you, and it allows you to come to terms with and understand: am I doing the right thing? And sometimes it's not a question of am I doing the right thing. The problem is the famous example of kill one person or kill five people. Neither of those is the right thing. One is better than the other, right? One is good comparatively, but it's never right to kill an individual. So ethicists are meant to help you go through these -- I know, it's so annoying and frustrating, ethics. But it's not meant to be mathematics. It's meant to be something different, right? Anyway. So in our last two minutes, can each of the panelists give perhaps one recommendation that you have, perhaps the thing that you find the most exciting out there in the space or the most worrisome, something for you to take away?
 
>> ANDREAS HAUPTMANN: I can start, maybe, by saying that the most worrisome and the most positive are actually the same thing. We have a lot of political momentum. We have 44 sets of guidelines and principles. I think that's the very promising thing: there's a lot of momentum behind working on this. And that's also the worry we have -- that we don't get convergence on what we are doing, and actually build on it and make practical solutions that move things on the ground right now, because it's now that we have the momentum. My recommendation would be that we should strive towards making data ethics a parameter on the market, a competitive parameter on the market. And I think a very good first step would be an international seal showing which companies and products are data ethical.
 
>> ELETTRA RONCHI: From my perspective, I think what we've learned from the OECD process is the importance of a multistakeholder debate. And so the need is to continue on that, aware that people need to be at the very center, and that we have to continue the multistakeholder debate to create the conditions for the practical implementability of the recommendations that we have. And I think that would be my main recommendation right now.
 
>> MEERI HAATAJA: I would build on the latter part of this discussion, where we had the questions on a very concrete level. My focus and recommendation would be to basically challenge the industry leaders in the different sectors, and especially in these high-risk, highly regulated industries: we need to start defining the sector-specific interpretations of these global principles and guidelines that have been defined. Also, I want to challenge the regulators: there is real urgency in coming up with those viewpoints on how to actually regulate this, because this world is developing so fast, and I think the dilemma is there that industry feels things aren't progressing at a quick enough pace in this area, so. . .
 
>> AIMEE VAN WYNSBERGHE: Wonderful. So please join me in thanking our fantastic panelists for the debate and discussion today.
 
[ Applause ]
 
And enjoy your lunch and the rest of the conference. Oh, yes. We are going to read the --
 
>> So I have noted five different key messages, and the idea is that as I read them, the room will, in a way, agree or disagree with them. We can go with yea or nay, or we can go with humming, or as you see fit. A second point is that these are supposed to be a neutral, objective kind of distillation of what was said. The third point is that they will all be uploaded to the EuroDIG website and platform, where you can read them after EuroDIG.
 
>> (Inaudible)
 
>> You can choose if you want to say yea -- it's acceptable -- or nay -- not acceptable. Or you can hum. Or whatever you want to do. So the first one would be message one: the ethical guidelines ecosystem has grown extensively over the past years and includes more than 40 sets of guidelines. However, the challenge of creating a complementary balance between legislation, regulation, innovation and the guidelines remains.
 
[ Applause ]
 
Humming. Key message two. The approach of self-regulation is not enough. There is a need for a new industry model that allows working with data ethics but does not pose a barrier for innovation and competitiveness. Data ethics should be a parameter in the market.
 
[ Applause ]
 
Message three. This is a longer one. While the ethical guidelines are numerous, the base values that should be addressed are transparency and explainability. Mechanisms for providing transparency have to be layered, stakeholder-specific and able to operate on different levels. Explainability has to be defined in a multistakeholder dialogue because it includes explaining algorithms' decisions as well as explaining what data ethics means in a specific context.
 
[ Applause ]
 
Yes. Okay. So: while the ethical guidelines are numerous, the base values that should be addressed are transparency and explainability. Mechanisms for providing transparency have to be layered, stakeholder-specific and able to operate on different levels. Explainability has to be defined in a multistakeholder dialogue because it includes explaining algorithms' decisions as well as explaining what data ethics means in a specific context.
 
>> AIMEE VAN WYNSBERGHE: Can you maybe say explainability should be defined rather than has to be defined in that way?
 
>> Yes, absolutely.
 
>> AIMEE VAN WYNSBERGHE: And can you say there are many common values in these guidelines? One or two of which are? Because I would say, you know, harm and preservation of autonomy.
 
>> There are many common values in the guidelines.
 
>> AIMEE VAN WYNSBERGHE: Yeah. And these are two of the common ones.
 
>> Also, these types of changes will -- you will have access to it. Okay. Next one: not all machine learning systems operate with the same algorithms, have the same application or the same demographics using them. Developing tools for practical implementation of data ethics has to be highly content specific and targeted. These developments need to be created in a multistakeholder environment.
 
>> AIMEE VAN WYNSBERGHE: Did you say content specific or context specific?
 
>> Context, sorry. Yeah. Okay.
 
>> ANDREAS HAUPTMANN: Maybe just to say I don't think all tools need to be made in a multistakeholder setting. A lot of practical tools should also just be made quickly by everything from consultancies to agencies and. . .
 
>> So we leave it at: developing tools for practical implementation of data ethics has to be highly context specific and targeted.
 
>> ANDREAS HAUPTMANN: Yeah.
 
>> MEERI HAATAJA: We have been discussing this industry and sector aspect a lot. We are referring to context specifics, and in practice it probably means that we look from industry- or sector-specific perspectives. So I don't know -- maybe you want to consider adding that word, industry or sector?
 
>> Again, you will have a chance to comment.
 
>> ANDREAS HAUPTMANN: Okay.
 
>> (Inaudible)
 
>> And the last one: several tools for practical implementation could be further developed and disseminated. Data ethics standardization through certificates and seals for business entities should be explored as an instrument of ensuring trust. Other instruments include an obligation to report the ethics policies in the annual reviews and in the corporate social responsibility policies. Sharing best practice cases is crucial.
 
>> ANDREAS HAUPTMANN: Perfect.
 
>> Okay. Thank you.
 
[Applause]
 
>> AIMEE VAN WYNSBERGHE: Good job.
 
>> ANDREAS HAUPTMANN: Thank you.
 
>> Thank you. No, no, no. Just go. Everybody knows that they can go for lunch. So when you are ready. Ethics is not mathematics. I like that one as well. And ethics is more -- it's easier to debate and dialogue, so this is why we go to lunch now. Thank you very much.
 
>> Thank you.
 
 
''This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.''


[[Category:2019]][[Category:Sessions 2019]][[Category:Sessions]][[Category:Human_rights_2019]][[Category:Innovation_and_economic_issues_2019]]


Format

We have 90 min. for the session – therefore it is suggested to divide the session into smaller but interconnected parts:

Speaker to set the scene

Short introduction by moderator

Short presentations by panelists working with developing data ethics principles and/or working with concrete data ethical solutions

Panel discussion

Q&A with questions from audience

Conclusions – lessons learned/practical solutions to bring back home

Further reading

Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example for an external link: Website of EuroDIG

People

Moderator
  • Aimee van Wynsberghe, Assistant Professor in Ethics and Technology at TU Delft in the Netherlands. Aimee is also co-founder and co-director of the Foundation for Responsible Robotics and on the board of the Institute for Accountability in a Digital Age. She also serves as a member of the European Commission's High-Level Expert Group on AI and is a founding board member of the Netherlands AI Alliance. https://aimeevanwynsberghe.com/
Speakers/panelists
  • Elettra Ronchi, Head of Unit, Senior Policy analyst, OECD
    • Elettra has been leading the work on enhanced access and sharing of data (EASD) and the review of the OECD privacy guidelines as chair of the SPDE working group in the OECD.
    • Able to give input on both the OECD AI principles (still under development) and ethical accountability (accountability 2.0) and how to translate these into concrete policy
  • Meeri Haataja, CEO of Saidot.ai and Chair of the Ethics certification Programme for Autonomous and Intelligent Systems (ECPAIS)
    • Saidot is a company in Finland that is developing a service to help organizations (e.g. Finnish government services like taxes and social services) provide transparency about the data they use. https://www.saidot.ai/
    • ECPAIS is an IEEE-SA backed programme in collaboration with industry and public service providers for developing criteria and process for a Certifications on Transparency, Accountability and Algorithmic Bias. https://standards.ieee.org/industry-connections/ecpais.html
  • Andreas Hauptmann, Director for EU and International, incl. Data Ethics and AI, Danish Business Authority (DBA)
    • Data ethics is high on the agenda in DK and in 2018 a set of recommendations was developed to strengthen Danish businesses in the responsible use of data e.g. by empowering tech-workers to handle ethical questions when they arise. The recommendations focus on how to make the responsible use of data a competitive advantage for businesses. The DBA is taking this work to the next stage and looking into transforming the recommendations into practical solutions, including establishing a data ethics seal and a requirement for the biggest companies to incorporate an outline of their data ethics policies in their management reviews as part of their annual financial statement. You can read the recommendations here: https://eng.em.dk/media/12209/dataethics-v2.pdf
  • [Not confirmed yet] - Lucilla Sioli, Director, Artificial Intelligence and Digital Industry at DG CNECT, European Commission.

Focal Point

  • Lars Rugholm Nielsen, Danish Business Authority
  • Julia Katja Wolman, Danish Business Authority

Organising Team (Org Team)

  • Zoey Barthelemy
  • Marit Brademann
  • Amali De Silva-Mitchell
  • Ansgar Koene, University of Nottingham
  • Artemia-Dimitra Korovesi
  • Charalampos Kyritsis, YouthDIG Organiser
  • João Pedro Martins
  • Jana Misic, Wilfried Martens Centre for EU Studies
  • Michelle van Min
  • Ben Wallis, Microsoft

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter

  • Jana Misic, Wilfried Martens Centre for EU Studies

The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

  • The ethical guidelines ecosystem has grown extensively over the past years and includes more than 40 sets of guidelines. However, the challenge of creating a complementary balance between legislation, regulation, innovation, and the guidelines remains.
  • The approach of self-regulation is not enough. There is a need for a new industry model that allows for working with data ethics, but does not pose a barrier for innovation and competitiveness. Data ethics should be a parameter on the market.
  • While there are many common values in the guidelines, the base values that should be addressed are transparency and explainability. Mechanisms for providing transparency have to be layered, stakeholder-specific, and able to operate on different levels. Explainability should be defined in a multistakeholder dialogue because it includes explaining algorithms’ decisions, as well as explaining what data ethics means in a specific context.
  • Not all machine-learning systems operate with the same algorithms, have the same application, or are used by the same demographics. Developing tools for the practical implementation of data ethics has to be highly context-specific and targeted.
  • Data ethics standardisation through certificates and seals for business entities should be explored as an instrument of ensuring trust. Other instruments include an obligation to report data ethics policies in the annual reviews and in the corporate social responsibility policies. Sharing best practice cases is crucial.


Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/sessions/ethics-design-moving-ethical-principles-practical-solutions.

Video record

https://youtu.be/WplQXzXRqOU

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-800-825-5234, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> Good morning. We're running a little late already, so I'd like to start with the next panel discussion. And I'd like to ask everybody sitting, please come up to the front, a little bit in the center, because it's nicer for the panel -- this is such a big, large, whatever, room. And if they have to look to the right or to the left for just one person, it's nicer, more polite, if they can just look at you. And it looks a little more crowded as well. We're trying to get the people in from the corridor as well. Well, let's start. Aimee van Wynsberghe, I'd like to give you the floor with your panel and move on to learn more about ethics. I'm really looking forward to it. You need this one?

>> I do. Wonderful. Thank you so much for the introduction and for bringing us here today. So we'll just let everyone get set up. Now, the way that we will organize, we have an hour and a half for this panel. And so we wanted to make sure that we had some time on stage to get into the conversation, to get into some of these topics. But then after about 30 minutes, my plan will be to then direct our attention to the audience for any questions that you have, whether you want to ask a point of clarification of something that was said or, you know, push the conversation further. And then also I believe we will have participants at a distance who are going to feed questions to us. So first let me begin by thanking Julia Wolman who is the reason why we all came together, she's the organizer of this panel, and she's the one who -- yeah, made sure we understood the theme, kept us all in check. So really, thank you, Julia, for doing a wonderful job with that. And then to introduce the panelists today, I have Meeri Haataja, correct?

>> MEERI HAATAJA: Yes.

>> AIMEE VAN WYNSBERGHE: And I have Elettra Ronchi and Andreas Hauptmann. They're all incredibly impressive individuals. I'm going to open the stage to let them tell you about themselves and why they're here and their interest in this particular topic. I should also say I live and work here in the Netherlands, and I'm Assistant Professor at the Technical University of Delft, also part of the expert group on artificial intelligence.

So this idea of ethics by design or, you know, also generally put, this moving from principles to practice. Why is this even on the table at all? And this goes back to a few scandals that we can pinpoint, right? If we're talking about just data and the handling or the treatment or the acquisition and sourcing of data, the first thing that usually comes to mind is the Facebook Cambridge Analytica scandal, where data was acquired and sourced in an unethical manner. If we're looking at newer technologies like artificial intelligence, we've also heard stories about the way that algorithms have been trained or used and how certain cultural biases could be exacerbated or reinforced by this. I think one of the more well-known stories was the Amazon case, where Amazon trained an algorithm that would help them with recruiting new individuals, and to train this algorithm, they used ten years of data acquired from the company. But the company practices had certain biases in them, right? So these biases, this preference for male employees at the top level, then became a part of what the algorithm would be searching for. So we've seen these situations. And this has now raised the attention of policymakers, of industry leaders, of academics to this idea of ethics.

We have biases and discrimination that are a result of using the technology. So is there a way for us to incorporate ethics earlier on, to have it kind of guide technology development and implementation? And then we have a variety of corporations, institutions, legal bodies as well who are now creating ethics principles or ethics guidelines. So the European Commission has created their guidelines. We have the Partnership on AI. Companies like DeepMind have also created principles. So we've been doing this for a year, a year and a half, and there are probably 44 different sets of principles out there. But now we're in a new stage. We're in a new phase. Now we have these principles. We have these guidelines. How do we implement them? What do we do in practice? Because it's one thing to say do no harm, but what does that mean for the data scientists or the data analysts who have a conflict or a tension between values? If I increase accuracy, I might decrease fairness. Or alternatively, if I increase fairness, make sure everyone is equally represented, that could decrease the accuracy of the algorithm. So what do we do when we actually put these principles into practice? So that is the main theme or topic of today's panel: moving from principles to practice.
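
A toy, illustrative calculation (with assumed data, not from any real system) of the tension described above: a model can score higher on accuracy while doing worse on a simple group-fairness measure such as the demographic parity gap.

<syntaxhighlight lang="python">
# Toy illustration of the accuracy vs. fairness tension. All data is assumed.

def accuracy(predictions, labels):
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)


def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    def rate(g):
        return sum(p for p, grp in zip(predictions, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))


# Assumed toy data: true labels and group membership for ten applicants.
labels = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Model 1 mirrors the historical pattern (positives only in group A): more accurate here.
model_1 = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
# Model 2 spreads positive predictions across both groups: fairer, but less accurate here.
model_2 = [1, 1, 0, 0, 0, 1, 1, 0, 0, 0]

for name, preds in [("model_1", model_1), ("model_2", model_2)]:
    print(name, "accuracy:", accuracy(preds, labels),
          "parity gap:", round(demographic_parity_gap(preds, groups), 2))
</syntaxhighlight>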

And now I'd like to go through the panel and have them each explain what is their experience in this space. Are they contributing to making principles and guidelines? Are they working to implement principles and guidelines, or perhaps a combination of the two? So Andreas, shall we start with you, and then we go down the line.

>> ANDREAS HAUPTMANN: Thank you very much, Aimee. My name is Andreas Hauptmann, and I'm director of EU and International Affairs in the Danish Business Authority, which is part of the Danish government. And before I move on, I unfortunately have to make a small disclaimer, since some of you may know that we had elections very recently in Denmark, and we are only now in the process of forming a new government. So I am without an acting government at the moment. So what I will be saying may, of course, change once we have found a new government with new priorities. I don't expect it to, but that's, of course, a real possibility.

So moving on. I think data ethics is a very important issue. In Denmark, we started the work in this field 18 months to two years ago, at least from my perspective on the governmental side. We formed an expert group on data ethics. I served as part of the secretariat working with the expert group. Last November they delivered their recommendations to the Danish government, and the Danish government has since made a policy on data ethics that I am now working on implementing in Denmark. So we are working intensely at the moment on trying to implement data ethics, putting it into practice in Denmark, especially among Danish businesses. I won't say we are that far, but we know the direction we are working in, and hopefully we will be able to get back to that during the session today.

But maybe just to say a little more about what the notion was from this expert group and from the Danish government: it was that we need trust in order to ensure the uptake of all these new technologies in our societies. There are a lot of benefits in the new technologies, but we need to build on the trust, and at least not remove the trust that is there at the moment. Furthermore, I think the notion was that we need to find a way to work with the market instead of against the market. So we need to find a model for working with data ethics that is not a barrier to innovation but enhances innovation, to show that competitiveness and data ethics can work together, and hopefully we can make data ethics a competitive parameter for acting on the market. So hopefully together we can ensure that you can make money on being data ethical. I think we have some ongoing work in this field, but we need to push it further. So that's the main focus that we are working with from the Danish perspective. I think that's what I wanted to say to begin with.

>> ELETTRA RONCHI: Thank you. Is this working? Okay. Great. So I'm Elettra Ronchi. I coordinate work at the OECD on privacy, data governance, and security. And what I'd like to do here is bring some more perspectives to this discussion, particularly referring back to work that we've done recently on artificial intelligence, and in particular the development of the first intergovernmental recommendation on AI, which was adopted by OECD member countries and several non-OECD countries, so 40 countries to date, on the 22nd of May, and I will get to some details on the recommendation. And also drawing on work that we're doing right now in reviewing the privacy guidelines of the OECD. Now, I don't know how many of you are familiar with the OECD privacy guidelines. They were first established in 1980. They set the scene for the development of privacy regimes and legislation in many countries around the world. They set out minimum standards. And we are reviewing them precisely because of some of the core challenges that are now emerging in the context of artificial intelligence, the Internet of Things, and big data.

But getting back to what you said, I think I would like to raise the fact that we're looking at data perhaps also in a different way, in the sense that there is certainly a change of scale, but it is a change of scale in terms of both opportunities and risks. And our goal at the OECD is to make sure that we have the enabling environment to harness those opportunities while addressing the risks and concerns. And when we're talking about opportunities, perhaps it's a good reminder of the fact that with artificial intelligence now, we have a tremendous opportunity in many, many sectors of the economy. Just look at the health care sector. I know you've been working in the health sector. We now have greater opportunities for outperforming humans in diagnostics, for example in cancer surveillance. We have examples of applications for smart transportation. So let's look at the whole thing around this tension and the enabling environment that we need to put in place, and certainly trust is fundamental. People need to be at the center.

So for us that is the issue. And another final point that I'd like to make in this introductory discussion is that we look at risks in two ways. In the way that you've presented it, there are intrinsically some risks that we did not anticipate with big data and with artificial intelligence, in the sense that our original privacy legislation certainly did not anticipate all of the challenges that we have now in relation to data protection and privacy. And that's why there are very promising regulatory developments right now. And certainly in relation to discrimination and bias. Just as an example, recently San Francisco became the very first city in the United States to ban the use of facial recognition technology by government agencies. I think there we already start to see some concerns about civil liberties. But there is a whole other set of concerns that at the OECD we are particularly aware of, and that are now reflected in the recommendation, and that is around the development of artificial intelligence. The fact that, in terms of power structures, who holds the data? Who is investing, and where is the investment? We're seeing concentration in a number of countries, and concentrations at different levels. We are hearing about research in artificial intelligence which is carried out with gender imbalance. And I think that there is an incredible and important role for public policy there. And that is why the OECD has put forward a recommendation that, and I will get to some details later, in part looks at human-centered values and the importance of stewardship of artificial intelligence, but also at what type of public policy needs to be in place to make sure that we have a level playing field and we are able to develop artificial intelligence in the interests of everyone. So let me stop at that point.

>> MEERI HAATAJA: All right. So, my perspective and background on this topic. I've been in the AI space, or data space, for my whole career, and this AI ethics discussion popped up, like, three years ago, I would say, while GDPR was really starting to take off and we were preparing for that. So at the moment, I'm CEO and co-founder of a technology company where we're building a very practical platform for organizations who want to develop and use AI in a responsible manner, to actually put together transparency and agree on accountabilities in their supply chains. So that's my daily work, working with the kind of organizations who already know that this is a really important topic for them and want to really take it into their practical processes. I'm also part of the IEEE's ethics work in this space, chairing the ethics certification program, where already this year we are looking to put together the first certification regimes for the AI ethics space, specifically for transparency, accountability, and algorithmic bias. So those are the two. But talking about these AI ethics principles and ethics by design, I was also part of Finland's national AI program, leading the ethics working group there. During the last year we ran the ethics challenge in Finland and basically challenged all the major organizations who are developing and using AI to commit to ethical use, and we encouraged them, as the first step they could take, to actually start thinking about these ethics principles, looking from the perspective of the organization's existing values and taking all the best practices that we have, and there are lots of those, and building from those their own ethics principles: what is the prioritization, what is important in our organization, in our context. There are 70 organizations who took that challenge in Finland, and I would say roughly half of those already have, or are working on, their ethics principles. So quite many principles. I've been reading lots of principles, and what is encouraging is that it's mostly a similar list, which is a good starting point for all the work in this area when taking it into practice.

>> AIMEE VAN WYNSBERGHE: Awesome. Okay. So before we get into, you know, obstacles or benefits of actually employing or implementing the principles, maybe we could say something about the content of the principles. So could you say if you are working with principles or specific guidelines, what would you consider to be the most important or, you know, top three principles that you are concentrating on, and why are they the most important?

>> ANDREAS HAUPTMANN: Should I start?

>> AIMEE VAN WYNSBERGHE: Actually, whoever wants to start, too. But if you want to jump in please feel free. We don't have to do the same order each time.

>> ANDREAS HAUPTMANN: I think in Denmark, we have six values, so to speak, that's our guidelines. Autonomy, explainability, dignity, equality and fairness, responsibility, and innovation. And if I have to -- if I should name one of them, I think this explainability which is openness and transparency, I think is absolutely crucial in this field because ethics is a difficult concept. It means different things for different companies and in different settings. So I think transparency is the key in order to enhance the use of data ethics.

>> AIMEE VAN WYNSBERGHE: So do you mean -- sorry, do you mean explainability in terms of the algorithm, we're able to explain the decision that the algorithm is giving the human if it's a decision-making algorithm, or you mean explainability of what ethics in the company means?

>> ANDREAS HAUPTMANN: Actually, both, so to speak. I think it's very difficult to have explainability one to one on an algorithm that is immensely complex. But you need to explain what decisions are coming out of this algorithm so people can understand it. And I think you also need transparency on what are the data ethical policies, so to speak, for different companies, organizations and so on.
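One way to read the idea of explaining the decisions coming out of an algorithm is to report how much each input contributed to a single decision. The Python sketch below assumes a simple linear scoring model; the feature names and weights are invented and are not from the Danish guidelines:

<pre>
# Minimal sketch of per-decision explanation for a simple linear scoring model.
# Weights and features are hypothetical, purely for illustration.

weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def explain_decision(applicant):
    # Contribution of each feature to the final score.
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    decision = "approved" if score > 0 else "rejected"
    # Sort reasons by absolute impact so the biggest factors come first.
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, reasons

decision, reasons = explain_decision({"income": 3.0, "debt": 2.0, "years_employed": 4.0})
print(decision)                               # approved
for feature, contribution in reasons:
    print(f"{feature}: {contribution:+.2f}")  # debt: -1.60, income: +1.50, ...
</pre>

For a genuinely complex model this kind of attribution has to be approximated, which is part of why explainability is hard.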

>> ELETTRA RONCHI: I think that brings us to the fact that there are lots of complementarities in the guidelines. The OECD built on a lot of the work that had been done since 2016 in this area, and certainly that is encouraging, and that is why we were able to also build such a wide consensus. By the way, on the 8th of June, the G-20 drew principles from the OECD guidelines, so we're talking about a really international scale here. But we do have five value-based principles. First of all, they aim to promote sustainable development and inclusive growth, and they're anchored on human-centered values, transparency and explainability, and in fact I would choose the same and I'll talk about it shortly, robustness, security and safety, and accountability. We're very interested in the issue of accountability, and I'll talk about it later in the context of the review of the privacy guidelines. When we talk about transparency and explainability, it is because we really think that there should be and there must be transparency and responsible disclosure around artificial intelligence so that people understand how specific outcomes come about. It is really because we want people's trust in this. But it is not easy. And we know that some countries are now putting in place some specific programs really to understand it. Particularly, and we're not talking about narrow AI, because perhaps we need to define what artificial intelligence we're talking about, we're talking more about the broader AI with machine learning, what we call opaque AI. And in this case we really need to put in place projects and research. We need to think, in a multistakeholder discussion, about what we mean by explainability.

In the OECD, we have also recently started a new program of work, which was mandated in some ways by Japan, on new governance models, because with explainability in many ways comes also auditability, the auditability of algorithms. How can we get to that? And we hear about some legitimate concerns for business, coming from being surrounded by businesspeople, in relation to intellectual property rights. So transparency and explainability are very important, are core, but there are legitimate concerns in terms both of privacy and intellectual property rights that we need to address. The United Kingdom, particularly the Information Commissioner's Office, has just launched a project called ExplAIn exactly to start looking more in depth into this issue, and we need the technologists, we need the technical experts, the legal experts, we need business. This is a very important set of principles, but difficult to implement.

>> AIMEE VAN WYNSBERGHE: Just out of curiosity, following up, this is the second time we've heard explainability. We're also in a situation where we have extensive terms and conditions, right, where they give us enough information. And so we give permission to use our data or to scrape our data, and then we get access to a certain site. When we talk about explainability, do you think that there could be some sort of obstacle there that, you know, if it's done in a way that we don't really understand what's going on, we could just follow in the footsteps of, yes, we click the terms and conditions. We agree. It's been explained to us. I agree. And then we're sitting there stuck?

>> MEERI HAATAJA: Yeah. My response would have been exactly the same, transparency and explainability; I guess that covers part of those. So definitely. And the reason for this is that those are basically enablers for us to see whether the other principles are in place or not. So it's sort of a foundational principle. But I think when talking about transparency, we really need to recognize that there are different levels of transparency and different audiences for it. So there is no one answer to this, but there needs to be an understanding that the customers, the citizens, the users of the service require a different level of transparency, which is probably a quite high-level description of the system, what it is used for, and probably also, in the future, some indication of whether a trusted third party has audited how the system actually works, or something like that. Then if we talk about transparency for the auditor or regulator who does a review of it, that requires a totally different level. When we are thinking about this, we are envisioning what we know from airplanes, the black box concept: we need to have traceability, basically retrospective visibility into how the system worked. And if you think about that, it needs to be pretty detailed, an understanding of how the system worked from a technical perspective at a certain moment. So transparency as a term covers all these different perspectives and all the business processes around it. My main concern at the moment is how we ensure the interoperability and similarity of our interpretations of transparency, and how we sort of standardize how we define it, because that's really critical in order for transparency to actually provide the value that we are looking for from it.
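The aircraft-style black box mentioned here can be pictured as an append-only decision log that an auditor can replay later. The Python sketch below is illustrative only; the field names and JSON-lines format are assumptions, not a standard:

<pre>
# Minimal sketch of a "black box" audit record for automated decisions.
# Field names and file format are illustrative assumptions.
import hashlib, json, time

def log_decision(model_id, model_version, features, output, log_file="decision_log.jsonl"):
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash the raw input so the log itself does not retain personal data.
        "input_hash": hashlib.sha256(json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: logging one hypothetical credit-scoring decision.
print(log_decision("credit_scoring", "2019.06.1",
                   {"age": 34, "income": 42000},
                   {"score": 0.73, "approved": True}))
</pre>

A regulator-level audit would need more than this (training data lineage, model parameters and so on), but even a simple record like this gives the retrospective visibility described above.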

>> ANDREAS HAUPTMANN: I think that's a very -- it's a hard question because we all know the cookies and home pages, we all know these terms and conditions, and you get bored before you even start reading. I think one of the things we're working on in Denmark is we're trying to make a data ethics seal in order to simplify this, to make a seal that's saying this -- these are companies that are working intensely with data ethics and taking a responsibility for the responsible use of data to actually guide consumers and businesses in a very easy manner on the market. So I think some sort of seal that could actually -- that could make the push and make it very easy to kind of have parameters working with you in this field.

>> MEERI HAATAJA: Can I ask a question? Is the seal for the organization or for the system? Is the seal for -- is it the sign that the organization, in general, follows ethical processes in development and using AI, or is it for one specific service AI application or system?

>> ANDREAS HAUPTMANN: Well, we are debating it. At the moment we don't have the seal fully developed yet so it's a work in progress. And this is one of the big debates going on, do you need to make it on the app, saying this app is data ethical, or do you take it as a company, saying this company is working with data ethics in an ethical manner? I think we are leaning towards the company version or organization, saying this is data ethical. Also because it's very hard to work with data ethics without having some sort of organizational criteria that the organization is working with data ethics and taking it very seriously.

>> MEERI HAATAJA: This is a really interesting topic because, like, the certification I'm part of, we're specifically focusing on the system level, like this specific system is getting the certification, but I think there is room at both levels. This is also a very important question: what is the ethics of the organization who puts together these stamps, what are the measures, and how do you actually maintain those over time? Because the systems change, and then you need really good processes for actually reviewing that the seal is still valid over time.

>> ELETTRA RONCHI: I think you need an independent party putting in the seals, and you want to make sure that there is trust in how the seal is assigned. And it is an issue that we're looking at also in the context of the review of the privacy guidelines in terms of, you know, certification, for example, because you want -- there is a use in seals and certification and in creating an environment of trust. So I think there's a lot that we can follow there with your pilot. We're going to certainly continue some dialogue with you on that.

>> ANDREAS HAUPTMANN: I think that's right. And I think that's why we, as a government agency, are not making the seal. This is a multistakeholder process. So we are only initiating the work and getting it off the ground. And that's also why I don't decide on the criteria. I don't decide on the governance. I'm merely putting some resources into making the foundations, and then the multistakeholders in Denmark will have to take this further. So we have some support for the start-up process, and then it will have to move on from there. And, of course, it's quite evident that hopefully we will have this ethics seal later this year, but we will need to update it regularly, especially in the start-up phase. And when we know the guidelines from the EU, they will, of course, need to be integrated into the seal somehow.

>> AIMEE VAN WYNSBERGHE: Interesting, then talking about the role of governance, the role of companies, the role of principles to help you arrive at something like a seal, what would you each say? What is the function of these principles? Should it be something that complements regulation? Should it be something that gets us to regulation? Is it something entirely separate? How do we make sense of having these principles when we also have policymakers? What does this do, you know, for them? And also, whose responsibility? Who is responsible for what in this situation? Should it be governments who are trying to run multistakeholder dialogues or sessions where they create such a seal? Should it be large organizations like the IEEE? Who is responsible for what in that context?

>> ELETTRA RONCHI: As I mentioned before, we built on a lot of work that had been done since 2016. I think the facts really speak for themselves. There's been so much, and I think you mentioned that there are about 40 ethical guidelines, so much development in terms of work on these principles from academia, from business, from standards organizations, from national governments. We know that there are now at least 13 OECD countries that have put out national strategies that have integrated ethical principles. So the facts speak for themselves: there's been a need. Now, you say, well, wouldn't law and regulation be enough? Well, in many ways the question that has been asked around is: even if it is lawful, should we do it? And I think that with ethics, we're trying to ask that question. Even if it is lawful, should we do it? And there are a lot of lessons learned from the history of dual use of technology, and particularly with AI we've heard calls for thinking thoroughly about the consequences, Stephen Hawking, for example, warning that we're heading toward dystopia. I think there's been really a call for that. But there are laws and there are regulations. There are privacy legislations. There is intellectual property law. There is redress. There is competition. So we have a frame for regulation, but clearly there is a need, and the facts show it, for broad public discourse around the uses and the consequences, which is in some ways fantastic. If we look also now at the European Commission, the work that has been done at a regional level, but also we now have UNESCO, who have announced that in November they're going to discuss the development of ethical principles. So there is a call for it, a need for it. That's the way I think about it.

>> MEERI HAATAJA: Yeah. I would look at it like this: I think this is such an important and wide theme that there is no one single party who would be responsible for it; this is everyone's question who is in this space, who works in the area of AI and its use. So that means that we need to have regulators involved. We need to have the data scientists working in individual organizations carry their own responsibility for this. And that is also why we are seeing so many different kinds of organizations actively working on this, and I think it's a really positive thing.

I would say that my thinking has been shifting a little bit from self-regulation towards realizing, while thinking more and more about how we can enforce the implementation, that we definitely need regulation to support that. So I would see it as complementary: the principles lay down excellent ground for thinking about how to take this into practice, and regulation is definitely one of the very important enforcement ways for taking these principles and guidelines into practice. But how you regulate it is a really good question, because you probably cannot just put these principles into law as they are. This discussion about transparency is a good discussion, and that could be a good perspective for regulation as well; that is something that you can regulate, and we already have that as part of GDPR. We know, for example, that the FDA is at the moment considering how to change its regulation, or what other frameworks to use, for regulating learning medical devices in this space, and the theme there is very much transparency: how do we ensure it over time while these systems are in use and they develop, they learn, how do we ensure the necessary transparency in order to secure that these systems actually work in the right and safe manner. So I think it's definitely complementary, and we need to think more and more about how to actually regulate this as well.

>> ANDREAS HAUPTMANN: And I think my stance would be that I agree with both of you. At the moment we need everybody to work on this together if we are to achieve data ethics really making a difference in everyday lives. Of course, I'm coming from a government agency, so I am thinking a lot in terms of how we, as a regulator, as a government, can push the market and move it further in that direction. That's, of course, the field I'm looking into, because I think the dream would be that self-regulation would be sufficient. Everybody would just have the notion that this is very important, that it would be common sense, and the norms out there would guide everybody. Of course, we are not there yet, and we probably will never get entirely to that place, but we need to push in that direction and find the policies that can enhance that work, so to speak.

>> ELETTRA RONCHI: It's interesting that you're saying that, because we just held a roundtable on accountability and what has worked and what has not worked with accountability in organizations. Exactly some of the issues that you raised at the beginning show that we've got to do better with accountability. And it requires strong enforcement. And this is not just a call from regulators. It is, in fact, even the companies themselves that are saying, well, we need accountability, because it is, in fact, a core regulatory mechanism. But we need enforcement to really put in place robust accountability. And in the context of AI, we're talking about accountability 2.0, a new form that really integrates the questions that you are raising on data ethics: what does it mean for an organization to have robust data stewardship, and how can they be responsible to the users. So I think these are very big questions. We are collaborating very closely with the International Conference of Privacy Commissioners right now, also bringing forward some new concepts in relation to accountability and data stewardship, and we're very proud of that.

>> AIMEE VAN WYNSBERGHE: So I think at this moment now we're just past the 11:30 mark. Now I'll open it up to the audience who want to ask questions. I know that we have two questions already. And we can take that and also continue -- continue the discussion. Is it okay to go to that microphone there? Yeah, sorry.

>> Absolutely. Good morning. I work for Capgemini, which is a company that actually helps implement AI solutions. And I feel that we're missing the second part of this discussion, which is from ethical principles to practical solutions. Now, I feel that we're a bit cozy in the fluffiness of this discussion. These are very vague concepts. And I've been hearing you talking about that, and one of your colleagues on the expert group, I think it was Thomas Metzinger, was critical of the solutions, and he called it ethics washing in Europe, that there's nothing very useful about it. And we call it trustworthy AI, but AI isn't trustworthy or not; it's the people behind it, and such. So one of the key questions is, in terms of practical solutions, what works, and what red line would you draw in terms of implementing AI solutions?

>> AIMEE VAN WYNSBERGHE: Also I should say that Thomas is a huge fan of the guidelines. He did have a problem with some of the procedural items. He acknowledged the fact that industry was at the table, but he also says this is the best thing that we have out there. So sometimes that was misinterpreted in the press, but please, take the floor. It's what are practical solutions.

>> ANDREAS HAUPTMANN: Well, I can start off. I think this is very difficult, and we need to turn it into practical solutions. At the moment there's a lot of political momentum around guidelines and around principles, but we need to turn it into actual practice on the ground, and that's difficult, especially with a term like ethics that means different things to different people in different situations. So I mentioned the seal; I think that's one way of doing this. Another thing we're working on in Denmark is to make data ethics part of the reporting obligations for the biggest companies when they make their annual accounts. We introduce it as part of the corporate social responsibility reporting: explain whether you have a data ethics policy or not, and if you don't have one, you need to explain why you don't think it's important for your company to have a data ethics policy. So I think that's one way of pushing it into practical solutions, while still leaving autonomy for the business to decide for themselves what data ethics means for them. And it also creates some transparency to kind of follow up on the ethics-washing issue. It maybe doesn't solve it, but it will make a push in the right direction.

>> ELETTRA RONCHI: Well, there is an advantage to being at the OECD. We have 34 countries, and we can exchange experience and best practice. And so what we put together, and what we're planning to launch by this fall, is the AI observatory, which will bring use cases. And I think that when you start to talk about how you bring principles to practice, you need to look at use cases. And there is not one single solution out there. I think that is going to be a very useful tool. The other point I would like to make, and that's a disclaimer I should probably have put out at the very beginning: what I'm presenting is in part clearly the OECD perspective, but some of these comments, like the one that I'm going to make now, do not reflect the OECD membership's opinions. I think that there is a need to build bridges between technology and ethical values. And I think there is an enormous opportunity. The title said ethics by design, integrating ethics in technology, and I think you're working very much to do that, Aimee, and maybe you could step in there. And I think that we're certainly not abstract there. We are integrating it into the whole process. And when we're talking about accountability 2.0, we're talking about a risk management approach that is integrated and that integrates ethics. So that's not abstract. That's not just ethics washing. So I'd like to leave it at that.

>> MEERI HAATAJA: You come from the perspective where basically you're developing these AI systems for your customers, and the question is what we do so that we are doing things in the right way. And that's a very practical question: how do we document these systems that we develop, what needs to be done? And that's a really serious question for vendors because, you know, everyone wants to do things in the right manner. So this is something that we are working on, and it basically requires a common way of documenting things: the organization has its own requirement list, building on all these guidelines, that it wants all of its vendors to go through and document, and ways of managing those in practice over time as well, because it's not only a one-time exercise to document it. It needs to involve all the parties who are engaged in developing, maintaining, and using these systems; they need to continuously contribute to it. So that's the problem that we try to solve with our platform in practice. So maybe have a look at our website at some point, because there is this practical tool. But I think this is a very good and difficult question.

>> AIMEE VAN WYNSBERGHE: So just to elaborate on that, the European Commission is now in the piloting stage of the guidelines. So when we talk about practical tools, over the next months we're going to learn what practical tools companies are already looking at or could be looking at. So there will be a lot more to come there. There are things like the project MIT and Harvard have created called The Data Nutrition Project, which is about creating a data nutrition label, the same way that you see the caloric intake, the percentage of fat, the carbohydrates, that kind of thing; this project is looking at how we can do that specifically for data. There's a paper out right now called From What to How by Luciano Floridi and colleagues over at Oxford University that goes through a variety of different practical solutions, privacy by design, but much more specific than that. There's also another paper by Mittelstadt and colleagues mapping the ethical issues related to decision-making algorithms and proposing solutions. There are quite a few tools out there, but it's also important to understand that they have to be very specific and targeted. What context are we talking about? Is this machine learning supervised? Unsupervised? Is this AI? So it's really difficult to say these are the solutions to the ethical issues that we see without being incredibly specific about the algorithm, the context, the application, the demographic that's being used to talk about it.
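As a rough illustration of the data nutrition label idea mentioned above, a label can be thought of as structured metadata shipped alongside a dataset. The Python sketch below uses an invented schema; it is not the Data Nutrition Project's actual format:

<pre>
# Minimal sketch of a dataset "nutrition label" as structured metadata.
# The schema and values are hypothetical illustrations.

dataset_label = {
    "name": "hiring_history_2009_2019",
    "source": "internal HR records",
    "collection_period": "2009-2019",
    "intended_use": "research on recruiting models",
    "size": {"rows": 120000, "columns": 25},
    "sensitive_attributes": ["gender", "age"],
    "known_gaps_and_biases": [
        "male applicants over-represented in senior roles",
        "no data on applicants rejected before the interview stage",
    ],
    "license_and_consent": "employee data, internal use only",
}

# Anyone considering training on the dataset can inspect the label first.
for key, value in dataset_label.items():
    print(f"{key}: {value}")
</pre>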

>> Do you think the red line -- do you think the red line question that Thomas asked is a relevant one, or is it too tangible?

>> AIMEE VAN WYNSBERGHE: I do. No, I think the red line question is also really important because he's accurate in his description of what happened that we began with certain terminology and that terminology changed. We also had to acknowledge that these are ethics guidelines. This was a separate document from the policy recommendations. So April 26th, which is next Wednesday, the European Commission, the same high-level expert group -- or Monday?

>> ANDREAS HAUPTMANN: June.

>> AIMEE VAN WYNSBERGHE: June. Oh, my gosh. What -- what just happened there? June 26th. So the second part of the document comes out. And these are the policy recommendations. And this is where we get into more of this topic of red lines.

What are the things around which we must create some sort of ethical constraints? But the ethics guidelines, that was meant to be: let's have a discussion. This is where we want to go. This is how we want to educate and inform policymakers about what we think is vital and important. It wasn't our job to say don't do this, you cannot do this. It was our job to say: hello, really, what do you think, while pointing at the possible consequences. Yes. You were next.

>> Oh, please.

>> Thank you. First, thank you for the debate; even though we are in fluffy terms, I think this is one of the most constructive ones I have ever heard. I'm from the Czech Internet institute and the Oxford Internet Institute, so a great starter for orienting oneself in all of this is the paper on mapping the debate on the ethics of algorithms, by Mittelstadt. But let me ask two questions. Firstly, I am very interested in what Denmark actually means when you are talking about data ethics. Do you mean, you know, bias in the datasets that AI is learning on, or what is your definition? How do you approach that? Because I'm a bit confused about what that actually means. Also, that connects to fairness. We want it to be fair, but what is fair? Do you define it in some sort of way? Have you defined fairness? Is it, you know, the known bias or representativeness of datasets, or is it just to have the same injustice for everyone, for example? And then the second part: my research actually focuses on a different approach to transparency, in order to protect self-determination and freedom of thought in environments powered by predictive algorithms. That's very connected, but it's also somehow overarching. So it would enable us to kind of do something right now, as we were talking about consent and terms and conditions and all of this. Explainability is a part of that. Accountability is a part of that. But coming back to the first question, that's why there are two: that would somehow suggest what fair is and what representative is. When we have self-determination as the goal, for example, not to be operating in a skewed environment, then we can actually apply those values to the ethics of data, or data ethics, or whatever you want to call it, because in the end it's people putting values into code. That's what happens in the end. It's not, you know, one system put into another system. So, yeah. What is the ethics of data, data ethics, in your view, and then these kinds of overarching, long-term principles? Do you even think about them, or how do you approach them, or transparency in that matter?

>> ANDREAS HAUPTMANN: Well, thank you very much for the question. This is difficult, to be honest. I mentioned early on that what our principles in Denmark, and we refer to them as values more than principles. So -- and I think that's the notion, it's very hard to be very, very specific because data ethics means different things, whether you are working with health data on individuals or you are working with traffic data or whatever is out there. Ethics means different things in different sectors and in different organizations and businesses. So I don't have one clear definition on what we mean. We have these values that are guiding. And then we are more working towards we should have business and organizations reflect on that -- on those values and say what does it mean for me as a business or as an organization? What is important for me with the stuff that I'm working on? And that's the whole notion of the work on this reporting obligation on data ethics from the biggest companies. They should make the data ethics policy and put it out there, and then it's up for the public to judge and the media to judge, are they living up to the policies they are making themselves, and is the policy adequate in comparison with the products they are putting on the market, so to speak? And also, that's why I think I started with saying the explainability and the transparency is probably the most important one in order to get anywhere in this field.

And then I also think it's important to say we are, at least in Denmark, still early in the process. So we are very aware that we will probably need to adjust this as we go along. We only just started this journey.

>> ELETTRA RONCHI: I think you raise a pretty important issue here. When we talk about data, we think we're talking about one monolithic thing, you know, data. No, we're talking about a very heterogeneous set. And what we're trying to do at the OECD now is bring out some clarity with a data taxonomy and what we mean. And the other point you're raising sort of explains why data ethics is, I would say, in some ways in its infancy, starting now: exactly because it needs to be looked at in context. And we need also clarity on the terms that are being used. You're struggling inside a country, and we are struggling at an international level. Terms that have been around for a long time, like accountability, are still not fully understood. When we raised the issue of data stewardship during the development of the AI recommendation, there were questions from my colleagues: what does stewardship really mean? How do we translate it? There's an issue of translation here. We're speaking English, but all of these terms need to be translated, and sometimes they don't keep the same meaning.

>> AIMEE VAN WYNSBERGHE: Yeah, ethics is one of those terms, and it also speaks to disciplines working together. I'm an ethicist, and I look at ethics in one particular way, so I applaud your question, what is data ethics, because ethics is this personal development, this ongoing, how do I be a good person, the good life, whatnot. And then, you know, what does this mean for data? So I think it's also not just what is data, but really what is ethics in this conversation? But I'm the moderator, so. . . May I have the next question.

>> MEERI HAATAJA: A very small example of this. This question is a really good call overall in this space: we need to have a common understanding about how we interpret these terms. One really nice project that we are starting right now with the larger cities in Finland, and a few governmental agencies as well: we are defining this together for Finland, and we really look forward to publishing it so it can be used in other countries as well. The question is: what is the transparency that public sector players need to provide for citizens? So how do we interpret transparency in the context of public sector AI? And that will basically be a data model defining that, okay, this is the information that a citizen has a right to know about public sector AI use cases. And that is something these organizations can take up: okay, these are the things that we will put in place and provide visibility into. So I think collaboration within industries and sectors, and agreeing together about these terms, is also extremely important. And that's something very tangible that individual organizations can work on, not just waiting for these really large global organizations to come up with their interpretations, but actually starting to agree at a national and international level on how we do this in practice.
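The citizen-facing transparency data model described here could, for instance, boil down to one small record per public sector AI use case. The Python sketch below is hypothetical; the field names and example values are assumptions, not the Finnish project's actual schema:

<pre>
# Minimal sketch of a citizen-facing "AI register" record for a public sector
# use case. All field names and example values are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class PublicSectorAIUseCase:
    name: str                   # plain-language name of the service
    purpose: str                # what the system is used for
    responsible_agency: str     # who is accountable
    data_sources: List[str]     # what data the system uses
    human_oversight: str        # how a human stays in the loop
    contact_for_questions: str  # where a citizen can ask or appeal
    last_audited: str = "not yet audited"

example = PublicSectorAIUseCase(
    name="Parking permit application triage",
    purpose="Prioritise applications for manual review",
    responsible_agency="City transport department",
    data_sources=["application form", "vehicle register"],
    human_oversight="All rejections are reviewed by a case handler",
    contact_for_questions="transport-ai@example.invalid",
)
print(example)
</pre>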

>> Fatel, MLI Group. I'm glad the previous questions were thrown at you, so to speak, because if they manifested anything, they expressed how challenging you are finding the debate. And it is because it's a blank sheet of paper. It's a new era. It's an era of unprecedented events. And let me just give you a preamble as well to put it in perspective and why we may need to add a bit of tabasco sauce to this. Not a bit. A lot of tabasco sauce to this. MLI Group stands for multilingual Internet. So if you've heard of the term, I and other people pushed the multilingual Internet from the '90s. In the last ten years we've been heavily focused on not just security but geopolitical risk. So today, where the tabasco needs to be brought into this conversation, we have destruction-motivated cyber terrorists. These people are not interested in credit card details or data. They are coming in to destroy. We hear about nation state hacking. It's worse than that. We now have hacking for political purposes, to change the direction of nations politically and economically. Let me put it to you: last month at the ITU, there was the conversation on AI for Good. In 2019, I was on a high-level panel on the $29 trillion per year digital economy and trade. The reason why this has to be brought into the conversation here is that the challenges that you're finding as the OECD, with so many governments involved there, and the Netherlands, which is probably one of the more advanced nation states in legislation but still finding it challenging to move forward, is because the threat to society is at an unprecedented level. Today you can bring a nation down to its knees without attacking a single critical national infrastructure or a single military base. So without going into more details, the challenge that I see with AI and the debates we're having is that we need to accelerate how we find an effective way of moving forward. And I use the word "effective" loosely, because it does require moving away from the position of, let's say, laissez-faire economics and minimum government intervention. That ship has sailed. We've already seen that that's no longer working, but many western governments are still delusional that the philosophy still applies, government led and, you know. We've seen Cambridge Analytica, and we have seen so many other cases of Cambridge Analytica-esque style recurring, but nothing's happening. So let me just give the challenge to you now that I've given you this preamble.

Imagine tomorrow the United Nations Secretary-General decides he wants to convene, just like he did on digital cooperation, some experts to help him: how do we address this threat to society from AI so it becomes something that is channeled for good? With ethics, knowing that your ethics may be different from mine, may be different from hers, and God knows where we can go with this. And let's say 20 people are debating and discussing. In your opinion, what would be the first thing they need to discuss and agree on between themselves that becomes an ethos for everybody else to start following, as a template, a starting point? So to get the conversation going, let me give you my suggestion. It needs to be the philosophical position of what we want AI to do, and that becomes something that governments and multilateral organizations start saying: this is our position, and that's where we start. Because as it stands, if we leave it to the technical developers, guess what? Probably the whole bottle of tabasco will be needed. So the challenge that I'm presenting to you here, you're not alone in that challenge, because 21st-century living no longer matches 20th-century modalities of regulation and problem solving, and it's not just about the data. The data is just a consequence. So this is the challenge to you. How would you start, where would you start? Would you start with the philosophical position so that you get as many of these businesses and governments to agree, so that it starts filtering through? And by the way, this will be top down, not bottom up. It cannot be bottom up. Please. My two cents.

>> Let me quickly jump in and add to this, why do you trust governments, then? Because governments are like a camel, a horse designed by a committee.

>> AIMEE VAN WYNSBERGHE: And which governments?

>> You know, I'm glad you jumped into this. Let me just throw in another bottle of tabasco. Add populism. Talk about trust in governments. We now have governments that talk about me first. Well, what does that mean? That doesn't mean that they are now not going to be involved in any agreement on a philosophical basis about what AI does, because they feel they can leverage it to advance their cause. It's a challenge. This is part of the challenge. This is part of the debate. And I'm not professing there's an answer. But I'm saying if we don't start with what you and I can agree on about what ethics is, we're not going to get anywhere. That's my two cents.

>> ELETTRA RONCHI: Well, I think this discussion started not only at the OECD; it started at UNESCO and it started at the G-20 level. I like your tabasco, even though I have a hard time drinking drinks with tabasco, but in some ways I think that a lot of that reflection is going on. So you're not telling me anything totally new. And at the same time, I think these issues are not totally new historically. Just look at the dual usage of genetic engineering, all the discussion around bioethics. What have we learned from there? I must say in my past life I dealt with the World Health Organization on an issue that perhaps had similar resonances, xenotransplantation. And we learned at that time that moratoria don't help. You've got to engage a global discussion. I think what we've started now is a global discussion, and a global discussion knowing also that the covert usage of AI, the dark side, can be exactly what you said. And I don't think that these are issues that governments are not looking at. There are a lot of hopeful messages from the 8th of June G-20 statement, and I invite you all to look at it, which, again, draws on the OECD principles. Clearly we are living in a digital era where, as you said, cyber-attacks can have major consequences on critical infrastructures.

>> Catastrophic.

>> ELETTRA RONCHI: Catastrophic. I agree with you.

>> If I may interrupt for one second.

>> ELETTRA RONCHI: And this is increasing awareness. So at our level, with the instruments we have, we're trying to engage as much discussion as we can on that. And perhaps we need to look back in history at the tools that we used with other technologies. As I said, biotechnology, genetic engineering, the nuclear arms race. I think that there is something we can learn from that.

>> If I may interject before the other panelists give an answer. I'd like to think that something was learned from this, because the tabasco wasn't really to flavor the drink. It really is the equivalent of a sense of urgency. There is no question that what you said is absolutely right: there's a lot of conversation. UNESCO, everybody's having conversations. The challenge here is we need to come to some kind of format upon which we can start working. Otherwise it's good-bye to the nation states, or the stakeholders, or the people whose lives end up getting devastated, because what is pressing on society today, and what AI could be leveraged for, is imminent. So when we discuss, I'll give you a simple example. What's the difference between Al Qaeda and ISIS? I ask that question at a lot of briefings, and they came up with a lot of answers which are relevant. There's no difference. The only difference here is that when Al Qaeda was in its heyday, it did not have the technology and the ability to bring a city down to its knees. Add to that the tabasco of the sense of urgency and AI, and then you're seeing the challenge to society. So what I'm calling for here is an accelerated collaboration between all of these conversations, so that we can at least agree at a high level on what it is we're identifying, so that maybe we can come up with a solution. My two cents. It's only meant to be thought provoking.

>> AIMEE VAN WYNSBERGHE: Just seeing all of the questions you have, do you have like a one-sentence answer to that?

>> ANDREAS HAUPTMANN: Maybe two-sentence just to say I don't know much about cyber terrorism, so I don't work with cyber terrorism. I think that's a whole other field than what we are talking about here. We are talking about data ethics in everyday life from companies and organizations that are not trying to take other states down but are trying to work with whatever it is they are doing. I think cyber terrorism is something completely different. But I agree with you that we need digital cooperation on a whole other level, and I think we need to have organizations like OECD and the EC taking this further and building some sort of consensus on an international level. I think that's happening. And, of course, I think we all agree that if they could pick up speed, everything would be good.

>> AIMEE VAN WYNSBERGHE: Okay. So just because of time, I go to the other questions, and then we can always talk more at the end if we have more time or we find you at the coffee break. So I do the next question over here and then I go to this side. Is that okay? Because I have seen them for quite a while.

>> Hi. John Peters speaking. I'm an AI researcher. And my first thought is: why not bring more tech people to the table? Because the topic you're discussing is ethics by design, which means that you'll probably be discussing guidelines, things that you could do from a private perspective with implications for the companies. And my question is: do you feel that we are still looking at this from the perspective of the application effects of the systems, or should we go a little bit more technical and try to specify and regulate how things are being done? Because those are two different things. You talked about explainability, and if we look, for instance, at the very specific case of neural networks, they work like a black box. So you could try to go there with research and create methods to provide you with a plausible explanation of what is happening, but you could also look the other way and try to see the AI environment as a whole and try to enforce these guidelines on the outcomes of those systems. I don't know if you have a position, because usually legislation is better when you say what things you can implement or not. But in this case, in this area, I think that we would need a more general solution towards the definition, because then people or companies would start saying, okay, this is an AI system, for the things where it suits them better in terms of legislation and so on. But is there a gap that you see here between the legislation and the developers and the systems that are really implemented? Thank you.

>> MEERI HAATAJA: Yeah, I could comment on that one. Exactly these questions, the questions that you are raising, are the reason why I'm personally really looking forward to working on sector-specific use cases and looking deeper into different sectors: how do we interpret, for example, this question of transparency there. So what are the exact use cases, for example, in the health sector? What kinds of models do we use there, and then actually start to interpret in that specific sector context. I personally think that we need to be at that technical level. Basically the question is how we get the confidence so that we actually know how the system worked and works at the moment, and that doesn't come with high-level descriptions; it requires technical visibility into the data, what goes in, how the model works, what comes out of it, and how that develops. But that's something that we are working on. That really requires that the players in the different sectors start to actually work at a technical level on this. So I'm really interested to discuss further if you are interested in that; it's very important.

>> ELETTRA RONCHI: Regulatory agencies are very aware of that. And privacy enforcement authorities are starting to hire technical experts just like you. And I think that that is exactly what needs to be done right now. You know, regulation needs to pair up with expertise in technology.

>> AIMEE VAN WYNSBERGHE: Just to add, it's also important to understand that you can't program ethics, so ethics as such isn't something that we're trying to translate into technical requirements. The values that ethics talks about are what we're trying to translate into technical requirements. And I think that's also an important thing to understand: that ethics is the process of deciding how we interpret this value, how we prioritize this value against that value, how we understand that values change. So that is ethics. And you don't put that into the system, but you look at the values that are going into the system. Okay? Okay. So we go to this side and then I come back to you. Yeah?

>> Thanks very much. (Away from mic) of the interior. Two quick questions. Firstly, you touched upon the idea of these values, because you've been talking about transparency and explainability, and of course that is something we need in order to talk about what all these systems do, but it's not the core of what we want. We want it to be fair and humane, et cetera. So how do you think we can actually come to guidelines, principles that developers can work with in these specific situations? Because now all the regulation will be focusing on ensuring explainability, but maybe if we focus on that too long, then the ship will have sailed on all the other things. And secondly, Mr. Hauptmann already touched upon the different areas. So now we try to regulate AI on a broad scale, but as was touched upon by the previous speaker as well, in health care, in infrastructure, et cetera, all the AIs do different things. So when we talk about fairness in a health care situation, we might need something completely different than we do in an infrastructure or in a justice situation. So why are we still trying to regulate it as AI rather than looking at AI in a health care context? Thank you very much.

>> ANDREAS HAUPTMANN: Well, you are, of course, right. The main thing is not transparency in itself; it's an instrument for getting where we would like to go, so to speak. But on the other hand, the values need to be rather generic, and then you need to translate them into different organizations, different sectors. So I think that's the main thing. What we will probably see is that we will move from guidelines and principles on a generic level to more sector-specific ones in the time to come. I think we need to do that, and we need to make practical tools that help companies and organizations working within specific fields move forward. But I don't think that means we can't have values on a broad scale, only that these values mean different things in different settings. Yeah.

>> MEERI HAATAJA: One smaller perspective on this: I fully agree, the themes that you are raising are really important. What we haven't discussed is the risk level, the differences between applications and their risk levels. Because I think one of the main concerns is that now, while we start to look at governance and put these practices in place, we do overkill for many of the applications that basically have a very low risk level. So this is also something where I really look to the leaders and regulators from specific industries to build this understanding of what the applications are and what their different risk levels are. And obviously we should put most of our effort into those industries where we feel that the risk level, in terms of using AI, is higher than average. That's the reason why I'm interested, for example, in the public sector cases overall. That's definitely something we are all concerned about: the health sector, transportation, defense, definitely. We need to have our own interpretations about these topics, but also make sure, wasn't it in your list of principles that there is innovation as well? I think that's really, really important: securing innovation through understanding that not all AI applications or systems have the same risk level, so we don't need to push them all through the same quality measures.
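The risk-based governance idea raised here can be illustrated as a simple mapping from risk tier to governance requirements. The tiers, sectors and requirement lists in this Python sketch are invented for illustration only:

<pre>
# Minimal sketch of risk-tiered governance: lighter requirements for low-risk
# applications, heavier ones for high-risk sectors. All tiers, sectors and
# requirements are illustrative assumptions.

REQUIREMENTS_BY_RISK = {
    "low":    ["ethics principles acknowledged", "basic documentation"],
    "medium": ["bias testing before release", "model documentation", "named owner"],
    "high":   ["independent audit", "detailed traceability logs",
               "human review of every decision", "regulatory notification"],
}

HIGH_RISK_SECTORS = {"health", "justice", "defense", "transport"}

def governance_requirements(sector, decision_impact):
    # Very rough illustration of assigning a risk tier.
    if sector in HIGH_RISK_SECTORS or decision_impact == "life-changing":
        tier = "high"
    elif decision_impact == "significant":
        tier = "medium"
    else:
        tier = "low"
    return tier, REQUIREMENTS_BY_RISK[tier]

print(governance_requirements("retail", "minor"))        # low-risk profile
print(governance_requirements("health", "significant"))  # high-risk profile
</pre>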

>> Hi. My name is (Inaudible). I represent Humanity of Things, an institution founded by citizens who are concerned with the basic concept of literacy. We talked a lot about data, and we believe that governments can regulate and companies can do a lot, but citizens need to be empowered. For that, critical thinking needs to happen, and critical thinking comes from knowledge and wisdom. So my question for you is: what is the effort at this moment to promote what I will call literacy 2.0, maybe? Do we know what we need to know? And the second part of my question is regarding regulation. I'm also a lawyer, and I would like to know: are we on a path toward a treaty soon, the way we have had in other situations throughout history? And regarding comparative law, which we discussed, for instance, with human cloning: are we prepared to use all the knowledge we gained in other areas and adapt it to the current challenges?

>> ELETTRA RONCHI: Thank you for this question. The set of five recommendations for governments that I did not list before, which is the second half of the OECD recommendation, really lists national policies that go toward what you just said. Just to list them: investing in artificial intelligence research and development; fostering a digital ecosystem for AI; shaping an enabling policy environment; building human capacity and preparing for labor market transformation; and international cooperation for trustworthy AI. Within this, a core role belongs to digital literacy, and the OECD is also producing a digital skills strategy. I think you point to a very important issue there, and it is definitely core. But we are also looking at this from another angle, and that is the recommendation on children online. We must not forget that there are vulnerable populations, and children are one of them that we are now looking at very carefully.

Now, in terms of a treaty: we know that all of this discussion is not happening just at the OECD. It's happening at the G20 level, at the UNESCO level, and certainly at the United Nations level. So it is not in my capacity to comment on whether we are on the path to a treaty, but certainly there is a lot of momentum and a sense of urgency that this discussion needs to be lifted to a global level.

>> ANDREAS HAUPTMANN: And maybe adding to that, one of the initiatives we are working on in Denmark is to build up common knowledge in this field. So how do we build up digital literacy, so to speak? An example is that this March we started including this as part of the curriculum in public schools, in primary schools at all levels, so pupils are taught in this field. Not data ethics specifically, but the whole notion of this digital world, in order to try to make them agents and not passive consumers once they grow up. This is a pilot, because it's rather difficult to do this in the best manner. So it's being done at 50 schools at the moment, and it will then be evaluated to find out how we do this the right way. It's also being introduced in higher education at the moment. So those are some of the concrete answers.

Then your part two: I think it's difficult to know if we are ready for a treaty, but there's a lot of momentum behind this agenda at the moment. So if we could somehow find out in which areas we definitely need regulation, I think the momentum is there, but it's very difficult to say what we should regulate and what we should not. Getting regulation right is harder than it seems.

>> AIMEE VAN WYNSBERGHE: In terms of literacy, look up Jim Stolze, who runs these programs online, trying to teach the average individual what AI is. Was it in Finland or Denmark that he ran the first program, and now the next one is in the Netherlands? Do you know which one I'm talking about?

>> MEERI HAATAJA: I'm not sure what you're referring to exactly, but yes, I was going to say that this Elements of AI course is something we are really proud of. The target was, within a few months, to get one percent of Finnish people educated in the basics of AI, and now lots of other countries have also taken it up. It's a free online course, with six sections. So if you don't know it, that's a good exercise for all of us for the summer: a first course for actually familiarizing yourself with AI. But this is extremely important, both the skills and the information, for people to actually have agency in this AI-driven society.

>> Hi. (Away from mic)

>> AIMEE VAN WYNSBERGHE: I think they turned up the mic.

>> Seeing that you all agreed on the explainability of these systems, and having read Floridi and studied ethics, I would still like to ask: how do we sustain explainability when algorithms are complex by nature and when companies do not want to share them? The second question is about the data seal: what are the categories, certificates, or values that say an application is data ethical? And the third question is about self-regulation, which I'm very interested in because we are facing a risk of overregulation. What are the ways we are going to make self-regulation sufficient? So, two hard questions and one more. Thank you.

>> MEERI HAATAJA: I could comment on the first one. It's a really important question, because it is typically used as a counterargument to transparency: saying there is no way to build transparency, or to carry that effort forward, because of this trade secret problem. We approach it by securing transparency in the sense that we provide transparency to those parties who need it. That means not everything needs to be open to everyone. It means there need to be third parties who do these reviews and audits, and there need to be mechanisms and contracts for providing visibility into these precious assets for the moment when the audit is being done. There needs to be transparency for regulators, who can request that access. Then obviously another question is what level of transparency we need to provide for citizens and customers, and that is a totally different level of transparency from what is required for these other parties. So I don't actually see this as a problem in this context; it's about approaching transparency from the perspective that we need mechanisms for providing a reasonable level of transparency to each stakeholder as they need it. So that's my answer to this one. I probably focused on the first question and I'm not sure I remember the rest.
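A minimal sketch of the layered, stakeholder-specific transparency mechanism described here, assuming illustrative stakeholder roles and artefact names; none of these comes from the session.

<syntaxhighlight lang="python">
# Hypothetical sketch of "layered" transparency: the same system exposes
# different artefacts to different stakeholders, so trade secrets can stay
# within audit and regulator channels while citizens still receive a usable
# explanation. Roles and artefact names are illustrative assumptions.

DISCLOSURE_LEVELS = {
    "citizen": {"plain_language_explanation", "contact_for_redress"},
    "regulator": {"plain_language_explanation", "model_documentation",
                  "training_data_summary", "impact_assessment"},
    "auditor": {"plain_language_explanation", "model_documentation",
                "training_data_summary", "impact_assessment",
                "source_code", "full_training_data"},  # under contract / NDA
}

def artefacts_for(role):
    if role not in DISCLOSURE_LEVELS:
        raise ValueError(f"unknown stakeholder role: {role}")
    return DISCLOSURE_LEVELS[role]

if __name__ == "__main__":
    print(sorted(artefacts_for("regulator")))
</syntaxhighlight>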

>> ANDREAS HAUPTMANN: I can maybe get back to your question about the seal. It's important to say we are working on it at the moment; nothing is set in stone. Again, it's a multistakeholder process, so we have, for example, both consumer organizations and business organizations working together, trying to agree on what we need: something that consumers will actually act upon and something that businesses can work with and see value in for themselves, so to speak. But I think it will be a mix of organizational criteria: that you have a data ethics policy as a company, referring to a set of specific values and what your company will do; that you have guidelines within the business or the organization; and that you have a notion of how you are working with your subcontractors on this issue. So there's a range of different criteria we are working with at the moment. We hope to have the first list of criteria more formally endorsed by the multistakeholder setting at the end of August or maybe September, and then I will hopefully be able to say more about it.
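A hypothetical sketch of how such seal criteria might be checked once they are agreed. The criteria names below only paraphrase the kinds of organizational requirements mentioned (a policy, internal guidelines, subcontractor handling); they are placeholders, not the criteria actually being negotiated in the Danish process.

<syntaxhighlight lang="python">
# Hypothetical sketch: evaluating a company self-assessment against a draft
# set of data-ethics seal criteria. Criteria names are placeholders.

SEAL_CRITERIA = [
    "has_data_ethics_policy",        # policy referring to specific values
    "has_internal_guidelines",       # guidelines within the organization
    "manages_subcontractor_ethics",  # how subcontractors are handled
]

def evaluate_seal(company):
    """Return (passes_all, missing_criteria) for a company self-assessment."""
    missing = [c for c in SEAL_CRITERIA if not company.get(c, False)]
    return (not missing, missing)

if __name__ == "__main__":
    company = {"has_data_ethics_policy": True, "has_internal_guidelines": True}
    print(evaluate_seal(company))  # (False, ['manages_subcontractor_ethics'])
</syntaxhighlight>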

>> AIMEE VAN WYNSBERGHE: Yeah. Okay. So were there any more questions? Yeah. Oh, yeah. Okay. And then we will wrap up and do recommendations.

>> Thank you very much. Just a follow-up question. I think you already answered the previous question, but from my own experience, when I talk to developers, be they in the private sector or the public sector, they're just asking: when am I doing it right? If I tell them that thinking about whether you're doing it right is part of it, of course that's part of it, but they also just want to know: when I'm developing an AI application in health care, can I or can't I use data about a client's ethnicity? Or, to take a classical example in the autonomous vehicle domain: should I prioritize the safety of the people inside the car or of other traffic participants, and when am I doing it right? I feel the answers given so far are: okay, we think about it, and the market should answer the question within the companies. If we look at the traffic example, we'd see that people would usually buy the cars that protect their families inside the car rather than the other traffic participants. So there's a role here, a regulatory role, for the government in the specific domain. And I feel that most of the things we are discussing, and we see it at UNESCO, the OECD, et cetera, and also within my own country, still talk about AI in a general sense. So just to reiterate the question: how can we actually answer these questions for developers who are working toward these answers? Because I feel that we're, again, taking a bit too long, saying we have to discuss and think about what is right, but that always happens in these kinds of situations where we are usually missing the developers, and also the experts in health and in infrastructure; it's always the digitalization and AI experts. So how can we engage in those discussions better, to ensure that we get answers people can actually work with, rather than saying it needs to be humane, but then what does that mean?

>> ELETTRA RONCHI: Well, I'm going to use two words that may require even further discussion. You're calling for something that a number of countries call regulatory sandboxes. This is experimental; we need to start putting together sandboxes. A sandbox means that you are in a sort of protected environment where you're testing, be it new regulations or the current regulations, and you're testing conditions. I don't think we have any real hard solutions to this; we need to experiment, and there are tons of laboratories out there. So I think regulatory sandboxes would be a response to that, and it is certainly also very context specific, with use cases. I know the government in Singapore is doing a lot of work on this, and we are now in contact to learn from what they're doing. And I know that the United Kingdom, particularly the ICO, has now been putting in place a regulatory sandbox program. Perhaps it's not the answer you might have wanted from me, but it's an attempt.

>> ANDREAS HAUPTMANN: Building on that, I think you're absolutely right. We need more regulatory sandboxes where we try to find our way in a very concrete manner. I also think we will have to have hard regulation in different fields. I don't think ethics solves everything; ethics needs to work together with law in a lot of fields. Autonomous vehicles are a good example: of course we have traffic laws, and we need traffic laws updated to the autonomous world somehow. I'm not an expert in that field, so I can't say how we should do it, and the same goes for the health sector. But I think we definitely have fields where we need regulation, and then we need ethics on top of that, moving on.

>> AIMEE VAN WYNSBERGHE: As a further concrete solution to what you're talking about: I used to work as an ethics adviser, where my job was to sit at a technical institute with engineers or data scientists and help them come to terms with these questions. That means looking at the very specific problem that they have: what are the values of the organization or the corporation, and how can we map those values onto this particular decision? So, is ethnicity allowed to be a classifier for prediction of a certain disease or syndrome? That has to do with whether or not ethnicity is relevant to the disease or syndrome you're talking about. Going through these step-by-step questions, which an ethicist is trained to do with you, allows you to come to terms with and understand: am I doing the right thing? And sometimes it's not a question of doing the right thing. The problem is the famous example of kill one person or kill five people: neither of those is the right thing. One is better than the other, right? One is good comparatively, but it's never right to kill an individual. So ethicists are meant to help you go through these questions. I know, it's so annoying and frustrating, ethics. But it's not meant to be mathematics; it's meant to be something different, right? Anyway. So in our last two minutes, can each of the panelists give perhaps one recommendation that you have, perhaps the thing that you find the most exciting out there in this space, or the most worrisome, something for us to take away?
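A minimal sketch of that kind of step-by-step feature check, assuming a documented relevance table that would in practice be maintained by clinicians and an ethics review. All names below (features, conditions, the table itself) are illustrative and are not medical guidance.

<syntaxhighlight lang="python">
# Hypothetical sketch: a sensitive attribute such as ethnicity is only
# admitted as a model feature when it is documented as clinically relevant
# for the specific condition being predicted. Illustrative only.

SENSITIVE_FEATURES = {"ethnicity", "religion", "sexual_orientation"}

# In practice this table would come from clinicians and an ethics review.
DOCUMENTED_RELEVANCE = {
    ("ethnicity", "condition_x"): True,
    ("ethnicity", "condition_y"): False,
}

def allowed_features(candidates, condition):
    allowed = []
    for feature in candidates:
        if feature in SENSITIVE_FEATURES:
            if DOCUMENTED_RELEVANCE.get((feature, condition), False):
                allowed.append(feature)  # relevant and justified
            # otherwise excluded; the exclusion should itself be logged
        else:
            allowed.append(feature)
    return allowed

if __name__ == "__main__":
    print(allowed_features(["age", "ethnicity", "blood_pressure"], "condition_y"))
</syntaxhighlight>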

>> ANDREAS HAUPTMANN: I can start, maybe, by saying that the most worrisome and the most positive thing are actually the same thing. We have a lot of political momentum; we have 44 sets of guidelines and principles. That's the very promising thing: there's a lot of momentum behind working on this. But that's also the worry we have: that we don't get convergence on what we are doing and don't actually build on it and make practical solutions that move things on the ground right now, because it's now that we have the momentum. My recommendation would be that we should strive toward making data ethics a parameter on the market, a competitive parameter. And I think a very good first step would be an international seal showing which companies and products are data ethical.

>> ELETTRA RONCHI: From my perspective, I think what we've learned from the OECD process is the importance of a multistakeholder debate, and so the need to continue with that, aware that people need to be at the very center, and to keep using the multistakeholder debate to create the conditions for practical implementability of the recommendations that we have. That would be my main recommendation right now.

>> MEERI HAATAJA: Building on the latter part of this discussion, where the questions were at a very concrete level, my focus and recommendation would be to challenge the industry leaders in the different sectors, especially the high-risk, highly regulated industries, to start defining the sector-specific interpretations of the global principles and guidelines that have been defined. I also want to challenge the regulators: there is real urgency in coming up with viewpoints on how to actually regulate this, because this world is developing so fast, and I think the dilemma is that industry feels things aren't progressing at a quick enough pace in this area, so. . .

>> AIMEE VAN WYNSBERGHE: Wonderful. So please join me in thanking our fantastic panelists for the debate and discussion today.

[ Applause ]

And enjoy your lunch and the rest of the conference. Oh, yes. We are going to read the --

>> So I have noted five different key messages, and the idea is that as I read them, the room will, in a way, agree or disagree with them. We can go with yea or nay, or we can go with humming, or as you see fit. A second point is that these are supposed to be a neutral, objective distillation of what was said. A third point is that they will all be uploaded to the EuroDIG website and platform, where you can read them after EuroDIG.

>> (Inaudible)

>> You can choose if you want to say yea, it's acceptable, or nay, not acceptable, or you can hum, or whatever you want to do. So the first one would be message one: the ethical guidelines ecosystem has grown extensively over the past years and includes more than 40 sets of guidelines. However, the challenge of creating a complementary balance between legislation, regulation, innovation and the guidelines remains.

[ Applause ]

Humming. Key message two: the approach of self-regulation is not enough. There is a need for a new industry model that allows working with data ethics but does not pose a barrier to innovation and competitiveness. Data ethics should be a parameter in the market.

[ Applause ]

Message three. This is a longer one. While the ethical guidelines are numerous, the base values that should be addressed are transparency and explainability. Mechanisms for providing transparency have to be layered, stakeholder specific and able to operate on different levels. Explainability has to be defined in a multistakeholder dialogue because it includes explaining algorithms' decisions as well as explaining what data ethics means in a specific context.

[ Applause ]

Yes. Okay. So while the ethical guidelines are numerous, the base values that should be addressed are transparency and explainability. Mechanisms for providing transparency have to be layered, stakeholder specific and able to operate on different levels. Explainability has to be defined in a multistakeholder dialogue because it includes explaining algorithms' decisions as well as explaining what data ethics means in a specific context.

>> AIMEE VAN WYNSBERGHE: Can you maybe say explainability should be defined rather than has to be defined in that way?

>> Yes, absolutely.

>> AIMEE VAN WYNSBERGHE: And can you say there are many common values in these guidelines? One or two of which are? Because I would say, you know, harm and preservation of autonomy.

>> There are many common values in the guidelines.

>> AIMEE VAN WYNSBERGHE: Yeah. And these are two of the common ones.

>> Also, these types of changes -- you will have access to them. Okay. Next one: not all machine learning systems operate with the same algorithms, have the same applications, or the same demographics using them. Developing tools for practical implementation of data ethics has to be highly content specific and targeted. These developments need to be created in a multistakeholder environment.

>> AIMEE VAN WYNSBERGHE: Did you say content specific or context specific?

>> Context, sorry. Yeah. Okay.

>> ANDREAS HAUPTMANN: Maybe just to say, I don't think all tools need to be made in a multistakeholder setting. A lot of practical tools should also just be made quickly by everyone from consultancies to agencies and. . .

>> So we leave it as: developing tools for practical implementation of data ethics has to be highly context specific and targeted.

>> ANDREAS HAUPTMANN: Yeah.

>> MEERI HAATAJA: We have been discussing this industry and sector aspect a lot. We refer to context specifics, and in practice that probably means we look from industry- or sector-specific perspectives. So, maybe you want to consider adding that word, industry or sector?

>> Again, you will have a chance to comment.

>> ANDREAS HAUPTMANN: Okay.

>> (Inaudible)

>> And the last one: several tools for practical implementation could be further developed and disseminated. Data ethics standardization through certificates and seals for business entities should be explored as an instrument for ensuring trust. Other instruments include an obligation to report on ethics policies in annual reviews and in corporate social responsibility policies. Sharing best practice cases is crucial.

>> ANDREAS HAUPTMANN: Perfect.

>> Okay. Thank you.

[Applause]

>> AIMEE VAN WYNSBERGHE: Good job.

>> ANDREAS HAUPTMANN: Thank you.

>> Thank you. No, no, no. Just go. Everybody knows that they can go for lunch. So when you are ready. Ethics is not mathematics. I like that one as well. And ethics is more -- it's easier to debate and dialogue, so this is why we go to lunch now. Thank you very much.

>> Thank you.


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.