Fighting COVID19 with AI – How to build and deploy solutions we trust? – WS 14 2020

12 June 2020 | 14:30-16:00 | Studio The Hague | [[image:Icons_live_20px.png | Live streaming | link=https://youtu.be/XvCciO9lYX0?t=18937]] | [[image:Icon_transcript_20px.png | Transcript | link=Fighting COVID19 with AI – How to build and deploy solutions we trust? – WS 14 2020#Transcript]] | [[image:Icons_forum_20px.png | Forum | link=https://www.eurodig.org/?id=821]]<br />
[[Consolidated_programme_2020#day-2|'''Consolidated programme 2020 overview / Day 2''']]<br /><br />
Working title: <big>'''Algorithms, deep learning, competition and trust'''</big><br />
Proposals: [[EuroDIG proposals 2020#prop_52|#52]], [[EuroDIG proposals 2020#prop_80|#80]], [[EuroDIG proposals 2020#prop_82|#82]], [[EuroDIG proposals 2020#prop_96|#96]], [[EuroDIG proposals 2020#prop_160|#160]] ([[EuroDIG proposals 2020#prop_76|#76]], [[EuroDIG proposals 2020#prop_86|#86]], [[EuroDIG proposals 2020#prop_106|#106]], [[EuroDIG proposals 2020#prop_181|#181]])<br /><br />
== <span class="dateline">Get involved!</span> ==
You are invited to become a member of the session Org Team! By joining an Org Team you agree that your name and affiliation will be published on the respective wiki page of the session for transparency reasons. Please subscribe to the [https://list.eurodig.org/mailman/listinfo/WS14_2020 '''mailing list'''] to join the Org Team and answer the email that will be sent to you requesting confirmation of your subscription.


== Session teaser ==
The COVID-19 pandemic has resulted in an unprecedented situation which has called for unprecedented solutions, including the rapid development of applications based on Artificial Intelligence (AI) and data. These solutions are essential in tackling the pandemic, as applications can support treatment as well as provide a detailed overview of the spread. However, they may also introduce risks such as bias and privacy issues.


Such examples display the dilemmas surrounding the application of AI and data in general. The question is how to address these serious risks and ensure the trustworthy use of AI and data, while reaping the benefits and opportunities stemming from the new technologies. This session looks at the regulatory as well as non-regulatory toolbox for AI and data by discussing practical models and tools for ensuring secure, ethical and trustworthy usage of AI and data without stifling innovation.


== Session description ==  
Across the globe, new AI solutions have been built and deployed in order to fight the COVID-19 pandemic – and more innovative solutions are being created as we speak. These solutions can benefit disease prevention, diagnosis and treatment, and vary from spotting specific patterns in x-rays to tracking people diagnosed with COVID-19. However, such applications raise questions such as: Are the AI solutions able to take autonomous decisions? Are the AI solutions protecting existing rights and values, such as privacy and data ethics? Are results from AI safe and reproducible? Are the AI solutions trained on representative data in order not to discriminate?


These are extraordinary times which call for extraordinary measures, but the pandemic will surely also leave its mark on jobs, mobility, consumption patterns, markets, businesses etc., thereby giving way to new AI solutions in the long-term. As these solutions increasingly become a central part of our everyday life, the question is how to address the potential risks and challenges related to AI and instead build trust in innovative AI solutions.
 
Already before COVID-19, there was awareness of the challenges brought by AI, such as bias, transparency and privacy, which spurred the debate on whether the race for AI should also include regulatory responses. While some intergovernmental institutions and countries are working on developing guidelines, others are drafting regulation. Experience within the regulatory landscape is still scarce and the technology is rapidly evolving, thereby leaving a range of questions unresolved:
*What are the best tools to ensure trustworthy AI?
*Which requirements should be put in place for the development and deployment of AI?
*How do we strike the right balance between trust and innovation?
 
These are some of the questions this session will touch upon in order to come one step closer to defining how the regulatory as well as non-regulatory toolbox can help ensure secure, ethical and trustworthy usage of AI and data without stifling innovation.


== Format ==  
* Introduction by moderator
 
* Key speaker 1
* Key speaker 2
* Key speaker 3
* Follow up questions from moderator
* Q & A with audience. Moderator facilitates discussion.
* Recommendations from speakers (to be included in the messages from the session)
* Q & A with audience. Moderator facilitates discussion
* Wrap up by moderator
* Reporter presents “messages” from the session. The messages from our session will be published in the “Messages from 2020”, which is the outcome of EuroDIG.


== Further reading ==  
 
*[https://www.itu.int/en/ITU-T/AI/challenge/2020/Pages/default.aspx ITU AI/ML in 5G Challenge]
 


== People ==  


'''Focal Point'''  
 
*Julia Katja Wolman
*Maria Danmark Nielsen


'''Organising Team (Org Team)'''
 
*Nadia Tjahja
*Narine Khachatryan
*Moritz Schleicher
*Amali De Silva-Mitchell
*André Melancia
*Ashwini Sathnur
*Marie-Noemie Marques
*João Pedro Martins


'''Key Participants'''
 
*Martin Ulbrich, DG CONNECT, European Commission. An economist by training, he joined the European Commission in 1995.<br /> Since 1997 he has been dealing with digital affairs, and in 2018 he moved to the unit dealing with artificial intelligence policy. The team is in charge of the AI White Paper process in the EC.
*Mikael Jensen, CEO for the new Danish labelling scheme and seal for IT-security and responsible use of data, which is planned to launch by the end of 2020.
*Dr. Sebastian Hallensleben, Head of Digitalisation and Artificial Intelligence, VDE/DKE Germany; Head of Practice Network Digital Technologies; Convenor of the international IEC SEG 10 “Ethics in Autonomous and Artificial Intelligence Applications”; Convenor of the European CEN-CENELEC AI Focus Group; VDE/DKE Representative in the IEEE OCEANIS initiative; Steering Board DIN/DKE Standardisation Roadmap AI<br />
Has recently published the report ”From principles to practice – An interdisciplinary framework to operationalise AI ethics”.


'''Moderator'''
 
*Charlotte Holm Billund, The Danish ICT Industry Association.<br /> Charlotte is working with digitalisation and new technologies, including AI, data ethics and green solutions.


'''Remote Moderator'''
*Auke Pals


'''Reporter'''
 
*Marco Lotti, [https://www.giplatform.org/ Geneva Internet Platform]


== Current discussion, conference calls, schedules and minutes ==


== Messages ==   
*Trustworthiness should be regarded as a prerequisite for innovation. When addressing it, we should look at two sides: one that regards the characteristics of the product (i.e. its ethically relevant characteristics) and one that relates to how trustworthiness is communicated to people. One solution could be developing a standardised way of describing the ethically relevant characteristics of AI systems. As an example, an independent organisation formed by four Danish organisations was established in 2019 and is developing a company labelling scheme that aims to make it easier for users to identify companies that treat customer data responsibly.
*Striking the right balance between trustworthiness and innovation represents an important regulatory challenge for AI applications. The European Commission’s White Paper addresses this aspect especially in high-risk scenarios when rapid responses are needed. Trustworthiness can also be a driver for innovation.
*AI and data are interlinked: It is difficult to make sense of large data sets without AI, and AI applications are useless if fed with poor quality data or no data at all. Therefore, AI discussions need to be linked to data governance schemes addressing sharing, protection, and standardisation of data. However, AI also presents distinct characteristics (such as ‘black box’ and self-learning elements) that make it necessary to update existing frameworks that regulate other technologies.
 
 
Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/fighting-covid19-ai-how-build-and-deploy-solutions-we-trust.


== Video record ==
https://youtu.be/XvCciO9lYX0?t=18937


== Transcript ==
Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com
 
 
 
''This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.''
 
 
 
>> NADIA TJAHJA: Hello and welcome back in studio The Hague. I hope you had a lovely lunch break. My name is Nadia and I’m your studio host, here to assist you with any technical difficulties you are facing. I’m joined by Remote Moderator, Auke Pals.
 
>> AUKE PALS: Good morning. I’m your remote moderator. Feel free to ask a lot of questions so I can help you.
 
>> NADIA TJAHJA: So before we get started I would like to go over our Code of Conduct. EuroDIG is all about dialogue. It is your contribution of ideas, thoughts and questions that makes the sessions inspiring and engaging. I hope you will choose to actively participate in these virtual sessions. Now that you have joined the studio you will be able to see your name in the participants list. Make sure that you have your full name displayed so we know who we are talking to. You can set it up by clicking “more” next to your name and choosing the option “rename”.
 
When you entered the room you were muted. This is to prevent feedback from disturbing the sessions. Raise your hand if you have a question and we will unmute you. When you are unmuted for your intervention we kindly ask you to switch on your video. It would be great to see who we are having a discussion with, so let us know your name and affiliation. Now I would like to introduce you to the moderator of Fighting COVID19 with AI -- How to build and deploy solutions we trust?
 
Our Moderator for today is Charlotte Holm Billund from the Danish ICT Association. Charlotte, you have the floor.
 
>> CHARLOTTE HOLM BILLUND: Thank you and welcome to our session Fighting COVID19 with AI -- How to build and deploy solutions we trust? It is a relevant topic in the time of COVID-19. We acknowledge that the potential of AI is great and under COVID-19 it has been emphasized in various solutions for fighting the pandemic, for monitoring the spread of the virus, diagnosing patients, and optimizing business solutions.
 
But AI solutions also raise a lot of questions and challenges. And these challenges to AI are not a new phenomenon. In this workshop we will try to look into questions such as: which are the best tools to ensure trustworthy AI, which requirements for the development and deployment of AI should be stated, and how do we strike the right balance between trust and innovation?
 
To help us with answering these questions we have an excellent panel with us today. I hope Martin is here as well from the European Commission. Martin is part of the team responsible for the recently published AI white paper from the Commission. He will lay out the Commission’s vision with respect to ensuring trustworthy AI, give us insights into the ideas of a regulatory regime and might even give us some hints on the dilemmas in the formulation of the white paper.
 
Next in line we have Dr. Sebastian Hallensleben, Head of Digitalisation and Artificial Intelligence at VDE. He is involved in the AI Ethics Impact Group and will give us insight into their work on a model to put ethical principles into practice.
 
And our third speaker today is Mikael Jensen. And he is the CEO from the Danish labeling scheme and seal for IT security and responsible use of data.
 
And he will give us insight into how to implement a labelling scheme for AI in practice. He will bring fresh insights as the Danish seal is planned to launch at the end of this year.
 
But just before we get started I will run through a quick lineup of the setup of this workshop. We start with the three panelists, with an introductory remark from each of the three. This is followed by Q&A, and questions from hopefully all of you attending this session.
 
So if you have a remark, feel free to raise your hand, use the chat and we will invite you to ask your questions.
 
After the Q&A, our key speakers will present their recommendations. And then we will take a second round of questions from the audience.
 
At the end of our workshop we are lucky to have a reporter with us today, Michael, and he will gather the key messages from this workshop and present them. We will then agree upon these key messages, and they will be published in a report, finalizing the whole EuroDIG setup.
 
And with these introductory words I give the floor to Mr. Martin Ulbrich from the European Commission.
 
(There is no response.)
 
>> CHARLOTTE HOLM BILLUND: Do we have Martin with us, Nadia?
 
>> NADIA TJAHJA: It seems that Martin has dropped the connection. He is currently not in the Zoom room.
 
>> CHARLOTTE HOLM BILLUND: Let’s move on then. Would you take over?
 
>> SEBASTIAN HALLENSLEBEN: Yes, of course, thank you very much, Charlotte for the introduction and also for the invitation and I am also happy to see quite a few people in the room. I’m looking forward to a fruitful discussion.
 
I would like to pick up on one of the subtitles of the session really, which talks about the balance between trustworthiness and innovation.
 
I think this might be a little bit too pessimistic because I would put it in a different way. I would say that trustworthiness is a prerequisite and a basis for innovation. So if you have technology that is trustworthy, you get broader deployment. It is easier to scale, easier to achieve network effects.
 
Also if you have technology that is trustworthy it tends to pull in a broader development community. And those of you who are involved in the, in particular in the open source development community, are aware that the brightest minds there can be quite critical about anything that is perceived as being deceptive or untrustworthy. Trustworthy technology for me is actually a prerequisite for innovation and the drive to action.
 
If we look at trustworthiness, it really has two sides or two perspectives that we can look at. The one perspective is the product or the service itself. So if we have an AI system – a COVID tracing app or any other service – we can ask ourselves: is it safe, robust? Those are characteristics of the product that can or cannot be trustworthy. But the second level of trustworthiness is: how do we communicate that to users and citizens? If we haven’t got a way of communicating the properties of a system that make it trustworthy to a broad audience, we also won’t get acceptance and we only really have one half of trustworthiness.
 
So as Charlotte mentioned at the beginning, I have been involved in a fairly broad consortium of mostly academics, actually, ranging from technical, technology, philosophy, physics, business, fairly broad multidisciplinary range. What brought us together was this thought or observation that everybody talks about AI ethics. Everyone sort of agrees on what the important principles are for AI ethics: Transparency is good, fairness is good, but no one has a concrete way of putting those principles into practice.
 
So we set out to work last year and earlier this year to create a framework for bringing principles of AI ethics into practice and therefore how to make products trustworthy. I don’t know if we are going into any detail of the framework later, but there are two key points in there. One point is that yes, it is possible to make something soft such as transparency or fairness measurable and, therefore, enforceable.
 
Also, if we do want to communicate trustworthiness to users and citizens and customers, we do need a sort of little data sheet that shows the relevant ethical characteristics in a standardized and clear way. This has also been one of the products of our work.
 
So just to conclude my little introduction, yes, we can. Yes, we can make AI trustworthy if we address both the characteristics of the products and services and also the way in which we communicate trustworthiness, including ethics to a broad audience. Thank you.
 
>> CHARLOTTE HOLM BILLUND: Thank you so much, Sebastian. And Mikael, the floor is yours.
 
>> MIKAEL JENSEN: Can you hear me?
 
>> CHARLOTTE HOLM BILLUND: Yes.
 
>> MIKAEL JENSEN: Okay. I would also like to thank you, Charlotte, and the organisers for inviting me here. It was very interesting to hear about Sebastian and the work you are doing, because it is highly relevant to what we are doing in Denmark as well.
 
You talked about how trustworthiness can be communicated. I’m just going to give you a small update on the current status of the Danish voluntary labelling scheme for IT security and responsible data use that we are working on in Denmark at the moment.
 
To add context to that, the labelling scheme was proposed as a result of recommendations to the Danish government in November of ’18, and in January of ’19 the government-appointed IT security business council also proposed a voluntary labelling scheme, but for IT security. So there was a data ethics label on the way and an IT security label. Both of those recommendations were combined into the idea of developing a voluntary IT security and data ethics labelling scheme in Denmark.
 
Subsequently the Danish business authority, along with other actors from the industry foundation, developed a draft prototype of how the voluntary labelling scheme could work, and the concept was then tested among consumers and companies in the summer of last year, with basically positive feedback from businesses but also from consumers, who could see the point of having a label that could guide their actions.
 
The labelling scheme for IT security and responsible data use was then founded as an independent organisation late last year by four founding organisations: the Consortium, SME Denmark, the Danish government and the Danish Consumer Council. The Danish industry foundation is funding the initiative and the business authority has an observer role on the board. The objective is to develop a voluntary labelling scheme including responsible data ethics. The aim is not to label specific services but rather the companies themselves. I’m heading up the initiative, starting from February 1 of this year. We are now working on operationalising the labelling scheme’s nine overall criteria and making them practical to enforce, just as Sebastian is actually working on the AI specifics. One of the nine criteria is about algorithmic systems and AI, and we expect to launch the labelling scheme at the end of 2020.
 
We have defined nine overall criteria that are meant to deliver digital trust and which the companies in question need to follow. One of the nine overall criteria is about algorithmic systems and AI. The way we are working on it is that we take the high-level criteria – it could be technical IT security, or algorithmic systems and AI, or data ethics – and we try to operationalise them down to a final level 3 that describes the company implementation. It is along the lines of what you are doing also in Germany. In relation to, for example, algorithms and AI, we have level 2 criteria like human agency and oversight, transparency and explainability, model quality and so on. We take each of those criteria and trim them down to the final requirements for the business to live up to.
 
And for each of these criteria we are using Danish, European and international frameworks. In the case of the AI part we are using the GDPR, but also the High-Level Expert Group guidelines for trustworthy AI, which feed into the white paper; the recommendation from May last year; the Council of Europe’s recommendation on the human rights impacts of algorithmic systems from April this year; a draft ISO standard; and then two Danish standards addressing transparency and AI.
 
And basically we are not applying a one-size-fits-all labelling system, but instead a well-defined risk-based approach where the criteria are differentiated based on an initial risk profiling of the companies in question. Based on initial control questions, we put the companies into four different segments, from low risk to high risk, based on, for example, their data complexity or organisational complexity. The different criteria are then applied per risk type. So in order to obtain the label, each segment is required to live up to a different set of criteria, which also vary in strictness.
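''A minimal sketch of the risk-based segmentation described above, assuming hypothetical control-question scores and criteria names; it does not reproduce the actual rules of the Danish scheme:''

<syntaxhighlight lang="python">
# Illustrative sketch only: scores and criteria names are hypothetical,
# not the real requirements of the Danish labelling scheme.
RISK_SEGMENTS = ["low", "medium", "high", "very high"]

# Hypothetical criteria per segment; stricter segments add further requirements.
CRITERIA = {
    "low": ["basic IT security", "published privacy policy"],
    "medium": ["basic IT security", "published privacy policy",
               "privacy and security by design"],
    "high": ["basic IT security", "published privacy policy",
             "privacy and security by design", "algorithmic transparency"],
    "very high": ["basic IT security", "published privacy policy",
                  "privacy and security by design", "algorithmic transparency",
                  "human agency and oversight", "model quality documentation"],
}

def risk_segment(data_complexity, org_complexity):
    """Map initial control-question scores (0-3 each) to one of four risk segments."""
    score = data_complexity + org_complexity  # 0..6
    return RISK_SEGMENTS[min(score // 2, 3)]

def required_criteria(data_complexity, org_complexity):
    """Criteria a company would need to meet, given its risk profile."""
    return CRITERIA[risk_segment(data_complexity, org_complexity)]

# Example: complex data handling, simple organisation -> "high" segment.
print(required_criteria(data_complexity=3, org_complexity=1))
</syntaxhighlight>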
 
And basically that is what the companies need to do. Then, from a consumer point of view, we believe that the labelling scheme and seal give consumers confidence in sharing and using data and make it easy to use companies and services that provide for reliability, security, privacy and ethics. From the company perspective, it will provide a list of guidelines on how to make trustworthy services and how to handle data safely and responsibly. From a society point of view, the seal is meant to increase IT security and responsible data use in companies, and also to increase digital trust and make that a competitive parameter.
 
So that is basically my intro to the panel.
 
>> CHARLOTTE HOLM BILLUND: Perfect, Mikael. Very interesting. I guess there will be a lot of questions for this scheme later on.
 
But I have been told that Martin Ulbrich from the European Commission is with us and can hear us. Are you there, Martin?
 
>> MARTIN ULBRICH: Yes. Can you see me?
 
>> CHARLOTTE HOLM BILLUND: Yes, we can. In my introduction of you I said that you were part of the white paper on AI and that you would give us insight into the, maybe into some of your ideas on regulatory schemes and might also include some of the dilemmas that you have in addressing the white paper. But I will give the word to you. Thank you.
 
>> MARTIN ULBRICH: Yes, thank you very much. I very much appreciate the opportunity to explain our approach that we took in the white paper today to this large audience.
 
The white paper obviously has a certain history: originally the Commission President had announced that we would actually have a regulation within the first 100 days.
 
However, it turned out that was not really possible, because this is such a complicated and important topic that it was important to take the widest possible input from the stakeholder community. The white paper in that respect really is an attempt to launch a debate and to gather feedback from the audience, from the stakeholders.
 
So, as you may know, until Sunday you can actually submit your position papers or simply fill in the forms online. We are currently looking at some 600 submissions, and as the majority of submissions will come at the very end, we will get to eight or nine hundred submissions. That clearly shows a strong interest in the white paper, which is very helpful to us and helpful for the community, because everybody can see the arguments, but it is extremely important for us in the Commission.
 
Now, I think the keyword in the white paper is the issue of balance. I think the overall approach in there is clear: a very strong “yes, but”. Yes, AI is good and we will do everything we can to develop it, make the European Union strong and take advantage of all the possibilities there, but at the same time there are certain things we have to look at.
 
And the first, so the first part is fairly, I wouldn’t say noncontroversial, but there are very few people who object to the general idea of promoting AI. Obviously then you have lots of discussion about the details of what exactly it is that you want to do or do not want to do.
 
There is a very strong interest, a very strong debate, on the second part, on the rules and framework which may be necessary in order to make AI trustworthy.
 
Now, clearly we want to make it trustworthy and useful. The COVID-19 crisis has clearly shown it may be helpful to have trustworthy AI but you need to be very rapid in it.
 
Clearly, medical applications are high risk. I will come back to that in a second. But medical applications – things that can impact your health and maybe take decisions or make proposals which have consequences for the life or death of a person – are clearly a very high-risk area.
 
Yet at the same time it is an area where sometimes very, very fast innovations have to be deployed, and where sometimes it may be better to have a reasonably good solution right now rather than a perfect solution in one year, which might be much better but may simply come too late.
 
It is very much a double-edged sword we are handling: making AI useful for society and at the same time also trying to keep it within some bounds.
 
The key concept is the high-risk approach. Clearly not every AI application – one that you need to sort out parking spaces, or to find the nails of lower quality in a factory – needs any kind of intervention at all. Clearly some AI applications do need intervention. The key is how do you distinguish them? Which is high risk? Which is low risk?
 
In the white paper we proposed – and I underline that, because it is to be discussed – an approach with a double consideration: it has to be a high-risk sector, such as public services, law enforcement, policing, medical, something like that. Clearly the list is long.
 
And it also has to be a high-risk application – not every application in those sectors necessarily is. I was talking about parking allocation: you can have a hospital using an AI system to do parking allocation for its staff, and that is clearly not in any way high risk.
 
We propose this double filter in order to give the largest possible share of the business community the security that what they are doing is not high risk and, therefore, doesn’t have to be considered as something which could fall under this regulatory approach. That is important for SMEs and also very important for legal certainty.
 
Then there is one issue which will raise a lot of debates. I have been talking already for too long but I will stop here at this time and we can discuss that later.
 
>> CHARLOTTE HOLM BILLUND: Okay. That was really interesting. You could have kept on, but we can come back to that.
 
I haven’t seen any questions. So I will start.
 
You talked about, Martin, the urgency and the timing for AI. But it also comes to guidelines and regulations. How does the EU see that aspect of the urgency of having these AI guidelines before national governments start making their own?
 
>> MARTIN ULBRICH: Well, I think the core business of the European Union is really to keep the internal market border-free – border-free for people, but also border-free for companies to develop their solutions. And especially in an area like AI, which depends on combining large sets of data, which very often means sets of data from different Member States, not having such a framework – with national legislation that contradicts each other or is very burdensome – would be disadvantageous to European industry and therefore to the rest of Europe.
 
Therefore, I think it is one of our key motivations here to take an initiative in order to make sure that the internal marketplace stays intact and gives the opportunities which it will give both to consumers and to companies to develop their business.
 
>> CHARLOTTE HOLM BILLUND: Perfect. Sounds good.
 
And just a follow-up question for you, Mikael. How many companies do you expect to join in on the labeling scheme?
 
Is there a volume that you need to have in order for it to work?
 
>> MIKAEL JENSEN: Actually, the way the initiative is funded is that in the beginning it is going to be free for the companies to actually get on board the programme. And then later on it will be something that they need to subscribe to and actually pay for.
 
>> CHARLOTTE HOLM BILLUND: Okay.
 
>> MIKAEL JENSEN: But we need to help the companies in the beginning to, you could say, get them on board the labelling scheme and see the value of it. Because, I mean, if there’s only one company having it, there is not so much value. If the whole business community gets the label, then it also gets to be something that consumers and users see, and then there is going to be a demand pull for trustworthy services. That’s where we want to be at a later stage.
 
>> CHARLOTTE HOLM BILLUND: That is very interesting.
 
And maybe also a question on how we engage the common European citizen in AI and making it trustworthy? How do you see this? Because for now it is mostly a company and organisational discussion that we have on AI and trustworthiness.
 
Can we see like a societal demand for this? Or should it come from above, like from the EU in order for it to work? Sebastian, you might want to start.
 
>> SEBASTIAN HALLENSLEBEN: Well, I think both. When you try to pin down certain aspects of trustworthiness, for example fairness, you cannot just have statistical parameters that you can measure from a technical perspective; you do actually need to engage in dialogue with the people who are affected or subjected to an AI system in order to agree what is actually fair for a given system in given circumstances.
 
I actually find it quite interesting, the approach that Mikael, you’ve taken in Denmark. You are saying: well, we are not really trying to label the product, we are actually trying to label the organisation. This is actually something that we moved away from, given the difficulty of: how do you deal with really, really big organisations – companies like Facebook, like Amazon, like Google – who have interests spread all over the world and will be acting by different standards all over the world? And how can you sort of draw the line around the Danish activities to give them such a label?
 
Or will it be more like this is intended for SMEs.
 
>> MIKAEL JENSEN: The idea is that it needs to work for all kinds of company sizes and forms, both the smaller ones, the SMEs, and also the big companies.
 
Of course, if you have a company like Microsoft or Google or Facebook, it can be difficult. We have to find out how that will work.
 
You know, they can be split up into different business lines and have different market focuses. The privacy and security might differ depending on the product that they actually serve to different customer types.
 
So that is, of course, something we need to find out how to work with. But I think if you take the ISO 27001 standard, that also takes the company perspective when you are actually looking into standards.
 
So I think we want to look into the whole company, and then we also have in the criteria privacy and security by design and by default, which is also, you could say, about the company’s internal processes and how they are actually developing systems from the start until they launch and also post-launch.
 
So yeah, we are taking that perspective in Denmark, which is different from the white paper which is based on marking or labeling-specific services, right, for AI.
 
>> SEBASTIAN HALLENSLEBEN: We are actually seeing -- maybe that is also a line of inquiry that might be helpful. We are seeing a lot of requests from developers of AI who are saying, well, there are lots of ethical dilemmas to consider and as developers it is not really our competency and our role to make decisions on how to deal with these. So they are also clients or users, if you like, of any labeling or certification scheme. It makes their life and their work a lot easier.
 
>> MARTIN ULBRICH: Yes, I think with both IT security and privacy and ethics, it can be difficult for companies to find out what they actually need to do differently in order to live up to this. We hope that the different criteria that we profile out through the risk-based approach on company types will basically give that guidance to the companies.
 
But I think now we are talking about AI here. And I believe we are also looking at the same frameworks. I guess they are quite new. There is the EU High-Level Expert Group, the ICT frameworks and whatnot. There is a need for European alignment on these criteria. In the European Union you have some idea of what the criteria should be for high-risk AI algorithms. More work needs to be done. I can see that when we are operationalising the criteria also for AI we need to find out what it actually is that the companies need to do. How do we audit it? How do we check whether they are actually living up to the criteria that we claim they need to meet? There needs to be more understanding of that across Europe, I think.
 
>> CHARLOTTE HOLM BILLUND: Thank you for that perspective.
 
We have a few questions coming up. And we have -- I have help from my Remote Moderator, Auke, who will switch to some of the people asking questions.
 
>> AUKE PALS: Thank you, Charlotte. Yes, I’ve received some questions and one question was from Louisewies Van der Lean. She said: I’m enjoying the discussion but it is very abstract. I think the workshop can benefit from some concrete examples. Where are the actual benefits, especially for fighting COVID-19? The real question is: can you give some concrete examples of a data-based application that is actually helping with COVID?
 
>> CHARLOTTE HOLM BILLUND: Martin, please?
 
>> MARTIN ULBRICH: Yes, to give you an example of that, there is a system which can analyze CT scan data and identify whether a patient can be diagnosed as having COVID-19. It has been based on data from China, because that is where they had the first data, and it has been used in various European hospitals.
 
That is actually a perfect example of the dilemma you have between speed and quality. Because the identification at the beginning was extremely urgent – they needed something very quickly – and therefore the hospitals used this without going through the usual things that you would do. Typically, for software which has to identify something like this, you have months and years of testing. You would go through a very complicated procedure and be sure it is absolutely the best thing you can bring to the market.
 
In this case it wasn’t possible. It was probably better than nothing but not as perfect as it could have been or it would be once it is developed on the basis of say one or two years data, it will be much better in the future. But that time wasn’t available.
 
So people had to use what they had. But definitely it did help. That is one example.
 
You can also use data for the modelling of the spread. I don’t know how far they have advanced. Clearly the modelling needs a lot of improvement. We have all seen that the modelling of the COVID-19 spread has been very far off the mark. So using AI to improve that is certainly another avenue.
 
Then, in terms of developing treatments – both the treatment if you are sick and developing the vaccine – AI is used to actually go through the various combinations of medical agents and active ingredients to see which ones are the most promising. So pretty much every element of the fight against COVID-19 has been or is still being developed with the help of AI.
 
>> AUKE PALS: Thank you very much, Martin, for answering this question.
 
We do have loads of other ones. One is from Desiree and she has a question for Mikael Jensen. The question is why would companies be paying for a labeling scheme and she is asking are there any ideas on how the data can be stored in a centralized or decentralized system?
 
>> MIKAEL JENSEN: First of all, in terms of the value of a label, I think it is valuable for the companies to get a set of criteria that they need to adhere to. So instead of finding it out themselves, they can use this as a guide. There can be value in that.
 
But the value for companies is also that, if they actually live up to the criteria, they can – towards their external customers and consumers and society – basically communicate with the data seal that they are trustworthy and are treating users’ and customers’ data in a trustworthy way.
 
That way they will get a competitive advantage. And in the criteria we have also focused on the IT security parts. Many companies are being threatened by cybersecurity attacks, and by having the seal and living up to the requirements they can save money on that part as well.
 
So we believe that it will be valuable for the companies to actually take on being, you can say, certified and getting the seal but also communicating it afterwards. The companies taking on this will be able to differentiate themselves in the markets on trustworthiness going forward.
 
>> SEBASTIAN HALLENSLEBEN: I would just like to add one question that comes up again and again when we talk about a seal: a seal is something you have or you don’t have. It is an all-or-nothing label.
 
From a simplicity perspective it is as simple as it gets. A company is either trustworthy or it is not trustworthy.
 
In our work we considered this approach. In the end we rejected it, because we felt that in particular for AI ethics there are quite a few trade-offs to be made and quite a few shades of gray, if you like. For example, if you talk about a COVID-19 app, you can say, well, an important ethical characteristic of it would be that it is completely transparent: that it shows what algorithm it uses, that it shows what data was used to train it. But at the same time you might want such an app to provide good protection of your privacy. You immediately get a conflict, because total privacy is not possible when movement data and other sensitive data of a large number of citizens goes into the app to train the algorithm.
 
So you are having to make a trade-off between transparency and privacy in this case, and it is not something that we felt can be aggregated into a single seal, but rather something that needs to be shown separately in a standardized way as a more detailed label. Almost like a little data sheet, like the nutrition information you find on items of food, which is also a tiny little data sheet that looks the same on every item of food.
 
I think that is a very important clarification when we talk about labeling. We have to be clear: Do we talk about a simple yes/no seal of approval, if you like? Or do we actually talk more about something that is a bit like a data sheet? Maybe also similar to the energy efficiency label or nutrition label. Thank you.
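''One way the “data sheet” idea discussed above could be made machine-readable is sketched below; all field names and ratings are hypothetical and do not reproduce the AI Ethics Impact Group framework or any official label format:''

<syntaxhighlight lang="python">
# Hypothetical, minimal "ethics data sheet" for an AI system, analogous to a
# nutrition or energy-efficiency label; fields and rating scale are invented here.
from dataclasses import dataclass, asdict
import json

@dataclass
class EthicsDataSheet:
    system_name: str
    transparency: str          # e.g. "A" (fully documented) down to "E"
    fairness: str
    privacy: str
    robustness: str
    training_data_summary: str
    known_trade_offs: str

sheet = EthicsDataSheet(
    system_name="Hypothetical COVID-19 contact-tracing app",
    transparency="B",
    fairness="A",
    privacy="C",               # the transparency-vs-privacy trade-off discussed above
    robustness="B",
    training_data_summary="Aggregated, anonymised mobility data only",
    known_trade_offs="Full algorithmic transparency limits achievable privacy",
)
print(json.dumps(asdict(sheet), indent=2))
</syntaxhighlight>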
 
>> MIKAEL JENSEN: Yes, I mean we are working on making one seal, but then the criteria that the companies need to live up to depend on the risk profile that they have. That is in order to make it simple for the end users, actually, so they don’t have to look into different versions of the mark, just like the energy mark that you are talking about. We believe it is important to communicate in a simple, easy way for consumers, because basically the consumers or end users don’t really know much about data, whether it is privacy or IT security or whatever. It is all just data for them. And we need to give them some guidance. So that’s kind of the idea.
 
In terms of the COVID apps, we had a discussion in Denmark as well about whether data should be stored in a centralized database or on the client instead. If the company behind the application had used the seal that we are developing, they probably would have made a solution where they didn’t store data in a centralized database, but rather on the clients, and they would have used the privacy and security by design and by default criteria that we have in the seal.
 
In terms of privacy, the idea is that you build in privacy and security from the start. But then the users or customers can basically opt in to give more data if they want; if not, they don’t have to. So I think it is more about being transparent and also giving the customer solutions and choices on how much of their data they basically want to share.
 
>> AUKE PALS: Also --
 
>> CHARLOTTE HOLM BILLUND: In looking into --
 
>> AUKE PALS: No, sorry, I’m also seeing a hand from Andre that has been up for a while. So I will give him the floor.
 
>> ANDRE MELANCIA: Hello, everyone. My name is Andre Melancia. I’m from Portugal and part of the technical communities here in Portugal and also around the world.
 
My focus is mainly to deploy AI solutions and other kinds of solutions as well. I have had an extreme level of requests in the last few months specifically because of COVID but also in the last two years this has also been a trend.
 
So let me just give you some of my practical problems and maybe use this as food for thought for the discussion. So as I mentioned in the last two years there has been a big hike in using artificial intelligence. A lot of people want to learn about artificial intelligence. But most of them are not what we can call data scientists. They don’t have the basis to understand all of the concepts of AI.
 
Most of the providers like Microsoft, Google, AWS, and many others are making it easier for people who are, for instance, developers, not so much data scientists, to be able to use these technologies without extreme knowledge of things like statistics and math and advanced information about AI itself.
 
That means that anyone who is a developer can just go to pre-created services that all these providers supply and just use them, even without actually understanding AI or doing the actual training.
 
So just maybe pressing a button or calling the REST API, something like that.
 
That brings a whole set of problems that some of you discussed already and one of them is about ethics.
 
Sebastian first mentioned ethics in this discussion. Of course, if you are a data scientist and you are learning data science, at least at most universities I know, you probably have some course about ethics and you try to understand ethics in the context of AI.
 
However, if you are a developer, that is not really a problem that you are faced with. In most cases if you have a project and they ask you to use AI, you just need to use AI as fast as possible, solve the problem and move on. That becomes an issue when it comes to ethics especially.
 
Also, some of you have already mentioned what I was about to say when I raised my hand, which is about privacy. Most of these providers that provide easy AI, let’s call it that, have their services in the cloud. It is their business, it is the way they make money. In which case privacy might be overlooked when you send everything to the cloud so that you can do AI on the information.
 
Just to finish, there was someone who wanted more examples of how to use this. There are a lot. Most companies now that provide products and services are using AI and have done a lot of work in AI, especially eCommerce shops. They have done a lot of work in retraining all of their models in the last few months, because all of the things that they knew before as normal, let’s say, suggestions to give to users became completely wrong starting around March. From that point on people were asking for different things.
 
Of course, I’m not mentioning the usual things like toilet paper or anything like that but the requirements, like buying masks, disinfectants, things like that online became an actual thing. Most providers of online commerce or even in-person big retail commerce, they have started to use AI to try to predict things which are not very well predictable if you are just a human taking from your own experience.
 
So this is something that I have been using every single day in the last two or three months to actually make some sense out of all the irrationality and all of the madness that comes with the current pandemic. Okay. Thank you.
 
>> CHARLOTTE HOLM BILLUND: Thank you, Andre. Any thoughts on this from some of the Panelists?
 
>> SEBASTIAN HALLENSLEBEN: Yes, I think Andre mentioned one very important point, which is that a lot of AI development is based on existing frameworks, be it Google’s TensorFlow or Facebook’s PyTorch, which are de facto standards that are widely used because they are good. They get the job done. But we do have two limitations we need to be aware of, particularly when we look from the European perspective.
 
One limitation – Andre lightly touched on it – is the quality of the data that goes into it. It is one thing to get an AI framework up and running, train an algorithm and get it to do something, but it is a different level of quality to consider: how good is the data that I’m feeding into it? Is it complete? Is it biased? Am I just automating errors that human beings used to make?
 
And that is something that is not necessarily taught to computer scientists or the computer engineers who programme and train AI.
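''A minimal sketch of the kind of pre-training data check referred to above: comparing outcome rates across groups in the training data. The column names and records are invented for the example, and a real bias audit would go much further:''

<syntaxhighlight lang="python">
# Minimal pre-training data check: compare the share of positive outcomes per
# group in the training data; large gaps can indicate bias worth investigating.
from collections import Counter

training_records = [
    {"group": "A", "positive": True},
    {"group": "A", "positive": False},
    {"group": "B", "positive": True},
    {"group": "B", "positive": True},
    # real data would have many more rows and attributes
]

def positive_rate_by_group(rows):
    """Return the per-group share of positive outcomes in the data."""
    totals, positives = Counter(), Counter()
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += int(row["positive"])
    return {group: positives[group] / totals[group] for group in totals}

print(positive_rate_by_group(training_records))  # e.g. {'A': 0.5, 'B': 1.0}
</syntaxhighlight>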
 
The second issue is the transparency of the frameworks themselves. We use frameworks that are normally also integrated with cloud offerings, so we don’t know what else might be happening with the data. We are actually forced to trust certain entities, some of which haven’t always proven that they are very trustworthy in their past behavior. And we might want to consider concepts such as European AI distributions, which could take a leaf out of the very successful way in which, for example, Linux is being distributed.
 
Very, very few people take the Linux code, compile it and implement it from scratch. Most people take a ready-made package where someone has taken the actual Linux code, added lots and lots of tools that are specific for a particular application or type of user, and offers that, usually packaged with some sort of support contract. And we might want to consider building a similar infrastructure, a similar ecosystem of AI distributions, maybe with some of the big toolkits at the core, but augmented with the tools and issues that are relevant for Europe, to make it easier for developers to create trustworthy AI applications.
 
>> CHARLOTTE HOLM BILLUND: I know, Martin, the EU has a data strategy working along the AI white paper. Could you comment on the interplay between the two?
 
>> MARTIN ULBRICH: Yes, let me first agree with Sebastian. I mean, one of the problems when you do any kind of modeling is the quality of the data. That is certainly important for AI and not specific to AI.
 
As an economist, when I was doing my studies I used to be told: data is basically rubbish in, rubbish out. If you put poor data in, you get poor results out. One current example: some of the predictions that have been made about the COVID crisis, about hundreds of thousands of dead people, were mostly because of the data which they used. That wasn’t actually AI, but with very limited data of not good enough quality you get predictions ten times higher than anything that is reasonable.
 
Therefore, data quality is extremely important. In AI, of course, it is even more important because AI can be scaled up more easily. In many things you have the same concerns as before, but the higher impact matters a great deal.
 
Now, in order to improve the quality you have to have a choice of data. If you have three data points, there is nothing you can do. One hundred is better. A thousand, slightly getting there. By 10 million, you have a good chance of having a good quality set of data.
 
Therefore, making data available which is really related to the data strategy is a key requirement for the entire architecture that we are talking about here in terms of making sure that AI is high quality.
 
If there is no data available, then we can talk about the need for data quality and anti-biasing and all kinds of things, but all these things cannot be done. Therefore, data is really the alpha and omega of artificial intelligence.
 
Of course, it goes the other way around as well: if you have large sets of data without AI, you won’t be able to make sense of them. There is a limited number of data points you can handle, even with computer help, if that computer help still needs human intelligence to actually go through the data. You can use your computer to do various searches, et cetera, but if you have to manage very, very large data sets it becomes impossible. That is why AI has such big promise: it can actually see connections in the data and find relations that humans couldn’t find, because the volume of data is too big.
 
These two things are two sides of the same coin, and that is why they have been published together and why they will continue to evolve together.
 
>> CHARLOTTE HOLM BILLUND: Thank you. I know we have a few more questions in the chat.
 
>> AUKE PALS: Yes, thanks, Charlotte. I have a question from Amali De Silva-Mitchell. And the question is: what has been the role of professional bodies such as lawyers, engineers, doctors, and accountants in setting consistent standards for AI?
 
>> SEBASTIAN HALLENSLEBEN: Well, I can say something about that, at least from a German perspective. Actually, it is a very good question because it fits neatly into the previous categories we have discussed so far: looking at the products, looking at the organisations, looking at the communication.
 
Now we are actually looking at the professionals as individuals who are operating AI, be it doctors in their practice, getting AI to help them with diagnosing COVID or using AI to help them interpret x-rays or be it engineers using AI to help them analyze traffic flows or the construction of lightweight building materials, anything like that.
 
So, at least in Germany, there was some work 20 years ago to draw up a code of ethics for engineers, which was done in one of the largest engineering associations. It was a multidisciplinary process that came up with an eight- or nine-page code of ethics which has sort of stood there for 20 years.
 
Now, last year, more or less the same group – as far as they were still around – came together again to update their code of ethics to reflect certain characteristics of AI. And it is interesting, because usually you would have thought that something like ethics should be technology neutral. If you draw up a code of ethics – what is right and wrong for an engineer – it shouldn’t really matter exactly what technology that engineer uses.
 
But since AI is so new and has characteristics that we’ve never seen before in another technology, it actually necessitates updating such technology-neutral codes of ethics as well. Just like many technical standards have to be touched again.
 
>> CHARLOTTE HOLM BILLUND: Would you elaborate a little bit on the technology neutral? What was the problem?
 
>> SEBASTIAN HALLENSLEBEN: Well, there are two characteristics of AI that make it difficult to treat it just like any other new technology. One is the black box character of AI: I cannot tell how AI reaches a decision. It may say this is a stop sign or a speed limit sign, but I can’t look into the box to see what leads it to believe that this is a stop sign, or that this person is not creditworthy.
 
It is that black box nature – we can’t be completely sure yet, but it is an issue in principle. It is not something that just takes a bit of work to overcome; it is a principal limitation of one of the most common AI methods at the moment, neural networks.
 
And the second fundamentally new characteristic of AI is that it is self-learning. Which means that if I certify AI today and prove that it has certain desirable characteristics – it is safe, ethical and everything else – it might learn, it might change itself. Tomorrow, well, who knows whether it is still safe and ethical and robust. So that poses whole new challenges for things like testing and certification. In practice you have to find a way of automating and repeating those processes, which used to be one-offs or something that you might do yearly or so.
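''A minimal sketch of the repeated-certification idea raised above: re-running a frozen validation suite against a deployed, self-learning model on a schedule instead of certifying it once. The threshold and the predict() interface are hypothetical:''

<syntaxhighlight lang="python">
# Re-validate a deployed model against a frozen validation set; if accuracy
# drifts below a threshold, the earlier certification should be reviewed.
def revalidate(predict, validation_set, min_accuracy=0.9):
    """Return (still_ok, accuracy) for a model exposed as a predict(x) callable."""
    correct = sum(1 for x, y in validation_set if predict(x) == y)
    accuracy = correct / len(validation_set)
    return accuracy >= min_accuracy, accuracy

# Example with a dummy "model" that always predicts 1.
validation_set = [((0,), 1), ((1,), 1), ((2,), 0)]
ok, acc = revalidate(lambda x: 1, validation_set)
print(ok, round(acc, 2))  # False 0.67 -> re-certification would be needed
</syntaxhighlight>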
 
These are the two main new characteristics which mean that things that were meant to be technology neutral do need to be taken up again and checked again, maybe revised.
 
>> CHARLOTTE HOLM BILLUND: Just before we go to the recommendations, maybe for both Sebastian and Martin: do you think there is a demand to use AI models that have more explainability in them? That you can’t use neural network models for some parts because you need to have the transparency? Do you foresee that?
 
>> SEBASTIAN HALLENSLEBEN: It is quite possible. Look at, for example, AI in the justice system. A number of countries use AI today to decide whether someone is released on bail, whether someone is released on parole, or to decide whether this person should go to prison for five years or 11 years.
 
It might be possible to prove statistically that, on average and as a whole, the AI way of doing it is better than human judges and human parole officers, and you could argue, of course, let’s go with AI because as a whole it seems to be better. Statistically it seems to be better. Big assumption there, but let’s assume that’s the case.
 
But if you are a person who is affected by that decision, if you are a person who, for example, is not released on bail even though you think you should be, if it is a judge you can ask the judge, explain to me, why are you not releasing me on bail? Why are you sentencing me to 11 years in prison rather than five years in prison? The judge has to explain that individual case. The judge won’t say, you know, on average people like you tend to reoffend, sort of assuming, just go to prison for 11 years.
 
No, the judge has to give a good explanation for that individual case. And if you use AI you cannot get that explanation for the individual case. And that is, to answer your question, from my perspective would be a use case where we would say we are not using AI because it is not explainable. It is not transparent for individual cases.
 
>> CHARLOTTE HOLM BILLUND: Perfect. And Martin?
 
>> MARTIN ULBRICH: I agree, it clearly depends so much on the particular use case. In some cases it just doesn’t matter whether it is explainable or not. In some cases you might want to put a premium on performance versus explainability. If you have a medical system which is explainable but has a 90 percent success rate, and another one which is not explainable but has a 99 percent success rate, most people will go for the 99 percent even if there is no explainability.
 
In other cases, like the ones Sebastian was describing, explainability is clearly one of, if not the, most important thing. But there we also have to be careful not to stop at the technical concerns and forget the ethical questions. Sometimes we keep on discussing, you know, that the performance of the AI is not good, the data is not good and it is biased, and we tend to forget the question of whether we would actually want to use AI at all. For example, taking the same example, there have been cases of discrimination in the U.S., where the system has been better to whites than to blacks, more negative on blacks than on whites. That is clearly a problem. But the key question is: let’s assume that in two or three years’ time that is solved. If we keep on discussing all the time that discrimination is the problem, then once discrimination is solved we have set ourselves up to accept the idea that we do it at all.
 
Then the AI would be okay, we talk about discrimination, there is no discrimination anymore, now we can use it. The fundamental question, do we want to use the technology for that kind of purpose, rely on large databases to put you in jail for four years as opposed to six years. We need to keep that in mind even if we first of all talk about the technical details and the nondiscrimination and all the other issues.
 
>> CHARLOTTE HOLM BILLUND: Perfect.
 
Well, I think that was a nice stepping stone over to our section on recommendations. Martin, you were first in line before and didn’t make it. This time you are first in line with offering your recommendation.
 
>> MARTIN ULBRICH: Yes, my recommendation is really to keep a balance. I am coming back to what I said in the beginning. A wild-west model of AI without any kind of rules is clearly not something we want. But a perfect state without any innovation is something we certainly want to avoid as well. Therefore, we really have to find a balance.
 
I think in our particular case balance requires differentiation. One size fits all is probably not going to be the right way to find a balance for AI across the board.
 
>> CHARLOTTE HOLM BILLUND: Perfect. And Sebastian, you want to come up next?
 
>> SEBASTIAN HALLENSLEBEN: Yes. So my recommendation, maybe unsurprisingly after the discussion so far, is to create a standardized way to describe the ethically relevant characteristics of an AI system – the degree of privacy, the degree of transparency, the degree of fairness, robustness, and so on. Whether you call it a label is a different story. If it is a label, it is a detailed label. I would like to think about it more like a short data sheet.
 
The reason I’m recommending it is that pretty much every stakeholder gets a benefit out of such a standardized AI ethics data sheet or label. If you are an operator – some company who wants to operate AI systems – you can look at your use case. You can consider: okay, what ethical risks do I have in my application area? Am I maybe in personal mapping or in treatment, which is very sensitive? Am I in another area?
 
I can look at the market and ask what is available. Some AI systems might be labeled as being very ethical, or with maybe a high degree of privacy or transparency or other relevant characteristics, and another may not. If I put out a request for tender, I can specify: these are the minimum requirements in ethics that I want.
 
From the regulatory perspective, the regulator can use such a data sheet to set minimum requirements and say okay, in certain high risk, medium, low risk cases we require certain minimal levels. They can be specified in a standardized way within such a data sheet.
 
If you are a consumer who wants to buy an AI system, for example a piece of software, an app, and you have a choice between different apps, you can compare them, just like you compare the nutrition labels on food items. Finally, for a manufacturer – and I would just like to pick up the very nice choice of words that Mikael brought up earlier – you can turn the ethics of your AI systems into a competitive parameter. You can say: look, these are the levels of transparency, of privacy that my AI system reaches. And, therefore, I’ve got a much broader market available compared to the competitors who have lower levels or who may not be able to provide that data at all. Thank you.
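The "short data sheet" Sebastian recommends can be thought of as a small, standardised record of ethically relevant characteristics that operators, regulators, consumers and manufacturers all read in the same way. The sketch below is only an illustration of that idea in Python; the dimensions, the A–E grading and the example values are hypothetical and are not the actual VDE / AI Ethics Impact Group framework.

<syntaxhighlight lang="python">
from dataclasses import dataclass

GRADES = ["A", "B", "C", "D", "E"]  # A = strongest guarantee (hypothetical scale)

@dataclass
class EthicsDataSheet:
    system: str
    transparency: str
    privacy: str
    fairness: str
    robustness: str

    def meets(self, minimum: dict) -> bool:
        """True if every graded dimension is at least as good as the minimum."""
        for dim, required in minimum.items():
            actual = getattr(self, dim)
            if GRADES.index(actual) > GRADES.index(required):
                return False
        return True

# A purchaser or regulator can express minimum requirements per use case...
tender_requirements = {"transparency": "B", "privacy": "A"}

# ...and compare candidate systems against them in a uniform way.
sheet = EthicsDataSheet("covid-triage-assistant", transparency="B",
                        privacy="A", fairness="C", robustness="B")
print(sheet.meets(tender_requirements))  # True
</syntaxhighlight>

A regulator setting minimum levels for a risk class, or a company issuing a tender, could then check candidate systems against the same dimensions mechanically, which is what makes the data-sheet idea useful across stakeholders.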
 
>> CHARLOTTE HOLM BILLUND: Very good. Interesting thoughts on this standardisation. I guess that’s the difficult part of it.
 
And we will take questions after this. We had a few left from the round before, but we will take Mikael’s recommendations as well.
 
>> MIKAEL JENSEN: Yes, thank you. My recommendation is basically that it is crucial that we as citizens and consumers have trust in the digital services we use every day. Today it can be very difficult to know whether a company has good IT security or is handling our data in an ethical and responsible manner or not. We must see that trustworthiness as a driver of competitive advantage in Europe. So basically, I think we need to consider algorithmic systems and AI as part of a broader agenda regarding digital trust. Instead of developing a trustworthy label within certain areas – privacy, security – we should develop a scheme with a more holistic approach to digital trust, which should include reliability, security, privacy, and ethics, thereby also covering AI, which is what citizens are concerned about. By having a demand-driven European voluntary labeling scheme within digital trust, it is possible to get impact seen from a consumer, company and societal point of view.
 
>> CHARLOTTE HOLM BILLUND: Very good.
 
Thank you. And we will take the few questions left and if you have further questions, please join in in the chat because this is going to be the last round of questions before we have the key messages of this session being read out.
 
Please?
 
>> AUKE PALS: Thank you, Charlotte. Yes, I do have some questions still in line. One is from Olga – sorry if I mispronounce your name. The question is for Martin Ulbrich: Does the omission of a temporary ban on the deployment of facial recognition technology from the final version of the white paper, as opposed to the initial text leaked in December 2019, mean that the EU is willing to prioritize technology over privacy concerns? And how does facial recognition comply with the GDPR requirement to get individuals’ explicit consent?
 
>> MARTIN ULBRICH: Yes, I guess that’s for me. Now, the question on facial recognition and the draft white paper is actually very funny. As you know, if you write a white paper it goes through many, many iterations. I think in our case, something like 27 different drafts.
 
The interesting thing is that out of these 27 drafts there was a single draft, version 13 or 14, in which the moratorium was ever mentioned. It wasn’t mentioned before or ever afterwards. And it is very interesting that this single draft was the one that was leaked to the press. So I don’t think that you can draw the conclusion that you were mentioning.
 
AI is clearly, as you have seen in the white paper, a topic which we want to have a debate about. We clearly are concerned. It is a technology which is not yet ready for rollout from a technology point of view.
 
I don’t think we are ready for a ban from a political point of view either. Therefore, we would agree now is the debate.
 
We expect that among the 600, maybe 700 or 800, people and organisations who comment in the public consultation, a large number will have something to say on facial recognition. It is clearly one of the highest-profile applications of AI.
 
Interestingly enough, even in countries which don’t necessarily want to regulate AI in general, like the United States, there is a flurry of local initiatives on facial recognition.
 
Now, facial recognition has to comply with the GDPR if you actually apply it or with the law enforcement directive. GDPR doesn’t apply to the police force and border patrol.
 
And as for the instances where facial recognition has been used in the European Union: there was a court case in the U.K., while the U.K. was still in the union, about whether the use by the police was appropriate, and the court said yes. There have been a couple of cases in Sweden and France where the national data protection authority stopped facial recognition systems although everybody had agreed to it.
 
There are a number of test cases which are really working for the purpose of research, such as one in Berlin a year ago or two. So for the moment everything is fine. But the question is really whether a widespread use of facial recognition is acceptable to European citizens.
 
As always there is a trade-off, you may get more security but less liberty and I’m over simplifying. What it is that the European public wants, I don’t know. That is why we’ve decided it would be a good idea to ask the public.
 
>> SEBASTIAN HALLENSLEBEN: Actually, if I can add something to Martin’s points. When discussing AI ethics, facial recognition is a wonderful example because it allows us to consider the many shades of gray that are to be considered in AI ethics.
 
So with the very same technology, facial recognition could be used for unlocking your mobile phone, in which case, okay, you are probably not on ethically sensitive territory. But it also can be recognition in the streets for masked people in demonstrations or just for people in the streets which has a whole different raft of ethical issues.
 
Basically it is the same technology. You can take those examples even further and go on to the factory floor. You might have machinery that recognizes workers for safety reasons and maybe also remembers whether certain workers tend to be more alert than others. It might want to identify workers via facial recognition there. All of a sudden you have to talk about safety versus data protection, versus AI ethics.
 
So it is a wonderful example that to me shows that AI ethics is not something that can be discussed separate or independent of the application context and in fact, we actually need to stop and consider the application or the application area and the sensitivity of it, the ethical risks in it. Then we can work towards the AI systems. For that level of sensitivity what is actually the appropriate ethical standard that the AI system needs to reflect. Thank you.
 
>> CHARLOTTE HOLM BILLUND: But are we good enough at handling these gray zones? Because they keep coming up.
 
Is there some way that it can’t be case-by-case that we have something we can fall back on to? Do we have a comment on that?
 
>> SEBASTIAN HALLENSLEBEN: It is a little bit like in law enforcement. You can’t list all the possible cases and specify the fines and the prison term for each imaginable case. You have to group them. So was it intentional? Was it accidental? If it was intentional, what was the motive? And so on. So you have certain criteria that you take into account to determine what the fine or the prison sentence should be.
 
In the same way you cannot list every imaginable use case to come up with the ethical demands that you may want to put on it. And therefore, you also have to classify, there’s a lot of work, quite a large number of different schemes to do that.
 
Martin has explained the white paper approach, which is sort of a mix of two major risk levels, low risk and high risk, and also a sector-based approach.
 
Then there is work from a variety of groups which tend to end up with four to five risk levels, depending on what the impact is if the AI gets it wrong and on whether people are subject to the AI or not.
 
I think those classification schemes also need to be standardized in some way. So consensus needs to be built on what the right classification scheme is to determine the ethical sensitivity of an AI use case.
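A classification scheme of the kind Sebastian mentions, with a handful of risk levels derived from a few criteria, could look roughly like the following sketch. The criteria, scoring and level boundaries are hypothetical; they only illustrate how a use case might be mapped to a risk level in a repeatable way.

<syntaxhighlight lang="python">
# Hypothetical mapping from a few use-case criteria to one of four risk levels.
def risk_level(harm_if_wrong: int, affects_individuals: bool,
               human_oversight: bool) -> str:
    """harm_if_wrong: 0 (negligible) .. 3 (severe, e.g. health or liberty)."""
    score = harm_if_wrong
    if affects_individuals:
        score += 1
    if not human_oversight:
        score += 1
    if score <= 1:
        return "low"
    if score <= 2:
        return "medium"
    if score <= 3:
        return "high"
    return "very high"

# Parking allocation for hospital staff vs. bail decisions in the justice system.
print(risk_level(harm_if_wrong=0, affects_individuals=False, human_oversight=True))
print(risk_level(harm_if_wrong=3, affects_individuals=True, human_oversight=False))
</syntaxhighlight>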
 
>> CHARLOTTE HOLM BILLUND: You want to comment as well, Martin?
 
>> MARTIN ULBRICH: Yes, I was going to say that the world is a complicated place. Gray zones will always be there. It is unimaginable that whatever regulation comes out of the white paper process – if indeed there is regulation coming – will put everything neatly into one category or another. There always needs to be some practical appreciation of the circumstances.
 
Therefore, you will definitely need some kind of authorities who will actually do that. And clearly that would have to be done in the Member States, who know much better than a centralised authority what the particular circumstances are. Yes, there will be gray zones. We won’t necessarily be good at dealing with them at the beginning. But over time, as in many other areas of the modern world where there are plenty of gray zones, we have developed practices for how to deal with them. That will happen with AI as well.
 
We have to keep in mind we are at the beginning of technology and therefore it is going to take some time before everything is as good as it possibly could be. Even then it is not going to be perfect, of course.
 
>> CHARLOTTE HOLM BILLUND: All right. We are about to round up this session. We have a wrap-up coming in a minute.
 
If somebody wants to pose a last question. Do we have anything?
 
>> AUKE PALS: Yes, we do have another question, Charlotte. And the question is from Amali De Silva-Mitchell. The question is: what is a possible approach to getting to consistent data sets across collaborating entities? We hear that data is too inconsistent to enable good AI collaboration.
 
>> CHARLOTTE HOLM BILLUND: Anyone?
 
>> MARTIN ULBRICH: Well, making data compatible is of course kind of the Holy Grail of that. As I said earlier, you need access to lots of data, and if you don’t have lots of data, you don’t have anything. Much of the data is wrong, and you first need to curate it to make it usable. That very often means making it compatible with certain standards, or having a common category so that you can link various data sets. That is very labour-intensive work, and that is one reason why there is going to be huge demand for data scientists and AI professionals in the near to medium-term future.
 
Depending on the area you are in, sometimes it will remain the case that you have to do it manually, the way it is done now. Sometimes standardisation can help, if you have a well-defined area in which that is possible. I’m sure Sebastian knows much more about that. Maybe a combination of the two. It is a very wide field; I don’t think you can give any simple answer to that.
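Martin’s point about curation – mapping inconsistent data onto common standards so that data sets can be linked – is illustrated by the hypothetical sketch below, using pandas. The column names, units and conversion are invented for the example.

<syntaxhighlight lang="python">
import pandas as pd

# Two collaborating entities record the same information differently.
hospital_a = pd.DataFrame({"patient_age": [54, 67], "temp_f": [101.3, 98.6]})
hospital_b = pd.DataFrame({"age": [41], "temperature_c": [39.1]})

def to_common_schema(df: pd.DataFrame) -> pd.DataFrame:
    """Map a source table onto a shared schema before any AI can use it."""
    out = pd.DataFrame()
    out["age_years"] = df.get("patient_age", df.get("age"))
    if "temp_f" in df:                          # convert Fahrenheit to Celsius
        out["temperature_c"] = (df["temp_f"] - 32) * 5 / 9
    else:
        out["temperature_c"] = df["temperature_c"]
    return out

combined = pd.concat([to_common_schema(hospital_a),
                      to_common_schema(hospital_b)], ignore_index=True)
print(combined)
</syntaxhighlight>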
 
>> CHARLOTTE HOLM BILLUND: All right. I think I will round up with a really easy question maybe. Maybe not.
 
If the three of you could think of what has surprised you the most in working with AI? What has been the most exciting challenges, just to mention one thing. Could you round up with that?
 
And I think you can join in who wants to start.
 
>> SEBASTIAN HALLENSLEBEN: Okay, I will start. Well, what has surprised me is the quality of the public discussion and the way in which ethics is being discussed right from the start. It is very different from earlier hyped technologies, be it atomic energy a few decades ago or genetic engineering or nanotechnology. In those cases it was just the engineers creating a new technology, seeing a bright future and getting a lot of development done.
 
Later on the rest of society woke up and said wait, hang on, we are not quite sure we like everything you are doing there and we got into a very confrontational situation around these technologies.
 
For AI, I am observing that it is actually the community of scientists and engineers themselves who are reaching out to the rest of society and saying: well, you know, there are some ethical issues here that we are not comfortable about. We should talk about this and create a societal consensus.
 
I’m optimistic and hopeful we will not see the same kind of confrontation or discourse for AI that we’ve seen in major technologies previously.
 
>> CHARLOTTE HOLM BILLUND: Interesting. Thank you very much. Mikael? Your turn.
 
>> MIKAEL JENSEN: When we are working on trying to make the AI criteria concrete, I can see there is a need for more standardisation and alignment on how to actually operationalize the criteria. I think on a high level there is some common ground on transparency and model and data quality and things like that. But how to make it practical for companies to adhere to those standards – work still needs to be done there.
 
Also how to actually follow up from a labeling scheme point of view to find out whether the companies are actually living up to the specific criteria. That’s one of the key take-outs at the moment that we need to focus on.
 
>> CHARLOTTE HOLM BILLUND: Perfect, thank you. Martin, do you want to end?
 
>> MARTIN ULBRICH: The thing that most impresses me is the difference between talking to the engineers and people in the field and talking to the wider public. When you are talking to people who actually work in these areas, we are talking about things like expanding transparency, et cetera. When you talk to people in the regular world, to consumer groups, they are talking about computers taking over the world. It is a completely different approach. And sometimes I wonder, when we are talking about creating trust – I won’t say we are barking up the wrong tree – but we are assuming the public has certain issues with explainability, bias or whatever, when in reality there is a much wider and more profound concern in the public, which isn’t that much linked to ethical niceties or even legal niceties but to a fundamental point of view.
 
>> CHARLOTTE HOLM BILLUND: Interesting. Thank you very much.
 
And for my part I’m about to round off, because we have a wrap-up coming with key messages we have to agree on.
 
Thank you so much for the lively discussion, for all of the questions and for all of your good answers in the panel. It has been a pleasure being part of this. And now, our rapporteur, Marco Lotti.
 
>> MARCO LOTTI: Thank you, Charlotte. I assume you can hear me and see me well. Thank you for giving me the floor, following the session was very, very interesting and as you mentioned I’m Marco Lotti. My task today was to write a comprehensive report about what was said which will be available next Monday, but my second task was also to sum up the discussion on three main take-away messages on which we should see whether there is a draft consensus.
 
Before reading the messages I would like to remind you that the EuroDIG platform will email you the text of the messages, and it will be possible for you to comment on and edit them further. This is just to see whether the content of these messages more or less reflects what has been discussed.
 
And if we can show the first message from the slides?
 
So that I can read it out. If there are any objections, please type them into the chat. The first message we have: trustworthiness should be regarded as a prerequisite for innovation. When addressing it we should look at two sides, one regarding the characteristics of the product, meaning technical safety requirements, and one that is related to how trustworthiness is communicated to the people.
 
One way of doing this is describing the ethically relevant characteristics in a standardised way. An example was also mentioned: the Danish government launched a new company labeling system which aims to make it easier for users to identify companies that are treating customer data responsibly.
 
I’m quickly looking at the chat to see if there are strong objections from the public and from the speakers on this first message.
 
In case, please let me know before we move to the second message.
 
One correction: It is not the Danish government. Sorry if I misunderstood. I will correct it, of course, accordingly.
 
Do we have --
 
Can I ask maybe to Mikael to clarify this point, please?
 
>> MIKAEL JENSEN: Yes. It is a partnership – a consortium of Danish organisations. I can send you the specifics.
 
>> MARCO LOTTI: Yes, okay.
 
>> SEBASTIAN HALLENSLEBEN: One other small correction, to lines 2 and 3: it is not the technical safety requirements, it is the ethically relevant characteristics.
 
>> MARCO LOTTI: Okay. Noted.
 
>> CHARLOTTE HOLM BILLUND: Perfect.
 
>> MARCO LOTTI: Besides these two comments, the messages will be sent to you later for further commenting and editing. I think we can move to the second message, which reads: Striking the right balance between trustworthiness and innovation represents an important regulatory challenge for AI applications. The EU Commission’s white paper addresses this aspect, especially in high-risk scenarios when rapid responses are needed.
 
Are there any strong objections to this slide or to this message?
 
>> SEBASTIAN HALLENSLEBEN: You might want to add a sentence at the end saying that trustworthiness can also be a driver of innovation.
 
>> MARCO LOTTI: Yeah.
 
>> SEBASTIAN HALLENSLEBEN: To capture that aspect as well.
 
>> MARCO LOTTI: Okay. I would now then move to the last message. Which reads AI and data are interlinked. It is difficult to make sense of large data sets without AI and AI applications are useless if fed poor quality data or no data at all. Therefore, AI discussions need to be linked to data governance schemes addressing sharing, protection and standardisation of data. However, it has been said that AI also presents important particular characteristics such as black box and self-learning elements, that make it necessary to update the existing frameworks that regulate other technologies.
 
Again, if you have any edits, comments, reactions to these messages, let me know.
 
(There is no response.)
 
>> MARCO LOTTI: It seems there are no further reactions for the messages. Therefore, I would give the floor back to Charlotte. I think the first part of my work is done here. And the second part will be wrapping up the report which again you will find on the digital watch website and also will be emailed to you by the EuroDIG Secretariat. Thank you very much and thank you to the speakers for the interesting discussion.
 
>> CHARLOTTE HOLM BILLUND: Thank you so much, Marco. For my part, I think we are out of time.
 
Then I don’t know, Nadia, if you need to say something at the end here. Thanks again for participating. It has been a pleasure.
 
>> NADIA TJAHJA: Thank you so much for your moderation. It was definitely an interesting session. We have the possibility to hear from not only the key participants but also from the participants who are joining us online.
 
And we are very happy that we were able to address this issue because it is so pertinent, happening in our times now.
 
There is a quick note from our Remote Moderator, Auke.
 
>> AUKE PALS: Yes, Nadia. I would like to thank Charlotte for moderating and keeping us on time, and also the Rapporteur. The chat was really active – thank you, everyone, for asking your questions. I hope to see this continue on the EuroDIG Forum.
 
>> NADIA TJAHJA: Now we have come to the end of the session. I would like to see if EuroDIG headquarters is on line to come and join us. EuroDIG headquarters, are you here?
 
>> SANDRA HOFERICHTER: Yes, thank you.
 
>> NADIA TJAHJA: Hi. How are you doing?
 
>> SANDRA HOFERICHTER: It is getting hot meanwhile. There is a thunderstorm coming and it is hot in our studio with all the computers and devices that make even more hot air.
 
But I could see that in your session there was no hot air at all. It was a very good discussion, and what was really enlightening for me is how the messages were quickly discussed at the end and how consensus was built on these key messages. That is basically the essence of EuroDIG. That will go out into the brochure that we will distribute to key policy organisations and governments, et cetera.
 
So congratulations to all, how you managed to do this. This is exactly how it should be.
 
And with this, I think we can go into the coffee break. We will reconvene here at 4:30. And I think it is you, Nadia, that guides us then through the youth messages?
 
>> NADIA TJAHJA: It will be my pleasure to present the youth messages that were presented by the youth participants who have worked for the last three weeks. They do want to share with you all the thoughts that are happening with them in their local communities. We hope you will come and join us after this small break.
 
>> OPERATOR: Recording has stopped.
 
>> SANDRA HOFERICHTER: I will now rush to Studio Berlin and Studio Trieste. They will stop streaming for the two days and I would like to say goodbye to them. We meet each other again in half an hour. Enjoy the music, meanwhile.
 
>> AUKE PALS: Bye, Sandra.
 
(The meeting concluded at 1600 CET.)
 
(CART captioner signing off.)




[[Category:2020]][[Category:Sessions 2020]][[Category:Sessions]][[Category:Human rights 2020]][[Category:Innovation and economic issues 2020]]

Latest revision as of 22:28, 15 December 2020

12 June 2020 | 14:30-16:00 | Studio The Hague | Live streaming | Transcript | Forum
Consolidated programme 2020 overview / Day 2

Proposals: #52, #80, #82, #96, #160 (#76, #86, #106, #181)

Session teaser

The COVID-19 pandemic has resulted in an unprecedented situation which has called for unprecedented solutions, including a rapid development of applications based on Artificial Intelligence (AI) and data. These solutions are essential in tackling the pandemic, as applications can benefit treatment as well as generate a specific overview of the spread. However, these solutions could be introducing potential risks such as biases and privacy issues.

Such examples display the dilemmas surrounding the application of AI and data in general. The question is how to address these serious risks and to ensure the trustworthy use of AI and data, while reaping the benefits and opportunities stemming from the new technologies? This session aims at looking at the regulatory as well as non-regulatory toolbox for AI and data by discussing practical models and tools on how to ensure secure, ethical and trustworthy usage of AI and data without stifling innovation.

Session description

Across the globe, new AI solutions have been built and deployed in order to fight the COVID-19 pandemic – and more innovative solutions are being created, as we speak. These solutions can both benefit disease prevention, diagnosis and treatment and varies from spotting specific patterns in x-rays to tracking people diagnosed with COVID-19. However, such applications raise questions such as: Are the AI solutions able to take autonomous decisions? Are the AI solutions protecting existing rights and values, such as privacy and data ethics? Are results from AI safe and reproducible? Are the AI solutions trained on representative data in order not to discriminate?

These are extraordinary times which call for extraordinary measures, but the pandemic will surely also leave its mark on jobs, mobility, consumption patterns, markets, businesses etc., thereby giving way to new AI solutions in the long-term. As these solutions increasingly become a central part of our everyday life, the question is how to address the potential risks and challenges related to AI and instead build trust in innovative AI solutions.

Already before COVID-19, there was awareness of the challenges brought by AI, such as bias, transparency and privacy, which spurred the debate whether the race for AI should also include regulatory responses. While some intergovernmental institutions and countries are working on developing guidelines, others are drafting regulation. Experience within the regulatory landscape is still scarce and the technology is rapidly evolving - thereby, leaving a range of questions unresolved:

  • Which are the best tools in order to ensure trustworthy AI?
  • Which requirements for the development and deployment of AI should be instated?
  • How do we strike the right balance between trust and innovation?

These are some of the questions this session will touch upon in order to come one step closer to defining how the regulatory as well as non-regulatory toolbox can help ensure secure, ethical and trustworthy usage of AI and data without stifling innovation.

Format

  • Introduction by moderator
  • Key speaker 1
  • Key speaker 2
  • Key speaker 3
  • Follow up questions from moderator
  • Q & A with audience. Moderator facilitates discussion.
  • Recommendations from speakers - (to be included in the messages from the session)
  • Q & A with audience. Moderator facilitates discussion
  • Wrap up by moderator
  • Reporter presents “messages” from the session. The messages from our session will be published in the “Messages from 2020”, which is the outcome of the EuroDIG. Please see attached email from the EuroDIG secretariat.

Further reading

  • ITU AI/ML in 5G Challenge [1]

People


Please provide name and institution for all people you list here.

Focal Point

  • Julia Katja Wolman
  • Maria Danmark Nielsen

Organising Team (Org Team) List them here as they sign up.

  • Nadia Tjahja
  • Narine Khachatryan
  • Moritz Schleicher
  • Amali De Silva-Mitchell
  • André Melancia
  • Ashwini Sathnur
  • Marie-Noemie Marques
  • João Pedro Martins

Key Participants

  • Martin Ulbrich, DG CONNECT, European Commission, an economist by training, has joined the European Commission in 1995.
    Since 1997 he has been dealing with digital affairs, and in 2018 he moved to the unit dealing with artificial intelligence policy. The team is in charge of the AI White Paper process in the EC.
  • Mikael Jensen, CEO for the new Danish labelling scheme and seal for IT-security and responsible use of data, which is planned to launch by the end of 2020.
  • Dr. Sebastian Hallensleben, Head of Digitalisation and Artificial Intelligence, VDE/DKE Germany; Head of Practice Network Digital Technologies; Convenor of the international IEC SEG 10 “Ethics in Autonomous and Artificial Intelligence Applications”; Convenor of the European CEN-CENELEC AI Focus Group; VDE/DKE Representative in the IEEE OCEANIS initiative; Steering Board DIN/DKE Standardisation Roadmap AI

Has recently published the report “From principles to practice – An interdisciplinary framework to operationalise AI ethics”.

Moderator

  • Charlotte Holm Billund, The Danish ICT Industry Association.
    Charlotte is working with digitalization and new technologies, including AI, dataethics and green solutions.

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

  • Trustworthiness should be regarded as a prerequisite for innovation. When addressing it, we shall look at two sides: One that regards the characteristics of the product (i.e. its ethically relevant characteristics) and one that is related to how trustworthiness is communicated to the people. One solution could be developing a standardised way of describing the ethically relevant characteristics of AI systems. As an example, an independent organisation formed by four Danish organisations launched a new company labelling system in 2019 that aims to make it easier for users to identify companies who are treating customer data responsibly.
  • Striking the right balance between trustworthiness and innovation represents an important regulatory challenge for AI applications. The European Commission’s White Paper addresses this aspect especially in high-risk scenarios when rapid responses are needed. Trustworthiness can also be a driver for innovation.
  • AI and data are interlinked: It is difficult to make sense of large data sets without AI, and AI applications are useless if fed with poor quality data or no data at all. Therefore, AI discussions need to be linked to data governance schemes addressing sharing, protection, and standardisation of data. However, AI also presents important peculiar characteristics (such as ‘black box’ and self-learning elements) that make it necessary to update existing frameworks that regulate other technologies.


Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/fighting-covid19-ai-how-build-and-deploy-solutions-we-trust.

Video record

https://youtu.be/XvCciO9lYX0?t=18937

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> NADIA TJAHJA: Hello and welcome back in studio The Hague. I hope you had a lovely lunch break. My name is Nadia and I’m your studio host, here to assist you with any technical difficulties you are facing. I’m joined by Remote Moderator, Auke Pals.

>> AUKE PALS: Good morning. I’m your remote moderator. Feel free to ask a lot of questions so I can help you.

>> NADIA TJAHJA: So before we get started I would like to go over our Code of Conduct. EuroDIG is all about dialogue. It is your contribution of ideas, thoughts and questions that makes the sessions inspiring and engaging. I hope you will choose to actively participate in these virtual sessions. Now that you have joined the studio you will be able to see your name in the participants list. Make sure that you have your full name displayed so we know who we are talking to. You can set it by clicking “more” on your name and choosing the option “rename”.

When you entered the room you were muted. This is to prevent feedback from disturbing the sessions. Raise your hand if you have a question and we will unmute you. When you are unmuted for your intervention we kindly ask you to switch on your video – it would be great to see who we are having a discussion with – and to let us know your name and affiliation. Now I would like to introduce you to the moderator of the session Fighting COVID19 with AI – How to build and deploy solutions we trust?

Our Moderator for today is Charlotte Holm Billund from the Danish ICT Association. Charlotte, you have the floor.

>> CHARLOTTE HOLM BILLUND: Thank you and welcome to our session Fighting COVID19 with AI -- How to build and deploy solutions we trust? It is a relevant topic in the time of COVID-19. We acknowledge that the potential of AI is great and under COVID-19 it has been emphasized in various solutions for fighting the pandemic, for monitoring the spread of the virus, diagnosing patients, and optimizing business solutions.

But AI solutions also raise a lot of questions and challenges. And these challenges to AI are not a new phenomenon. In this workshop we will try to look into questions such as: which are the best tools to ensure trustworthy AI, which requirements for the development and deployment of AI should be stated, and how do we strike the right balance between trust and innovation?

To help us with answering these questions we have an excellent panel with us today. I hope Martin is here as well from the European Commission. Martin is part of the team responsible for the recently published AI white paper from the Commission. He will lay out the Commission’s vision with respect to ensuring trustworthy AI, give us insights into the ideas for a regulatory regime, and might even give us some hints on the dilemmas in the formulation of the white paper.

Next in line we have Dr. Sebastian Hallensleben, Head of Digitalisation and Artificial Intelligence at VDE. He is involved in the AI Ethics Impact Group and will give us insight into their work on a model to put ethical principles into practice.

And our third speaker today is Mikael Jensen. And he is the CEO from the Danish labeling scheme and seal for IT security and responsible use of data.

And he will give us insight into how to implement a labeling scheme for AI in practice. He will bring fresh insights, as the Danish seal is planned to launch at the end of this year.

But just before we get started I will run through a quick lineup of the setup of this workshop. We start with the three panellists, with an introductory remark from each of them. This is followed by a Q&A and questions from, hopefully, all of you attending this session.

So if you have a remark, feel free to raise your hand, use the chat and we will invite you to ask your questions.

After the Q&A, our key speakers will present their recommendations. And then we will take a second round of questions from the audience.

At the end of our workshop we are lucky to have a reporter with us today, Marco, and he will gather the key messages from this workshop and present them. We will have to agree upon these key messages, and they will be published in a report, finalizing the whole EuroDIG setup.

And with these introductory words I give the floor to Mr. Martin Ulbrich from the European Commission.

(There is no response.)

>> CHARLOTTE HOLM BILLUND: Do we have Martin with us, Nadia?

>> NADIA TJAHJA: It seems that Martin has dropped the connection. He is currently not in the Zoom room.

>> CHARLOTTE HOLM BILLUND: Let’s move on then. Would you take over?

>> SEBASTIAN HALLENSLEBEN: Yes, of course, thank you very much, Charlotte for the introduction and also for the invitation and I am also happy to see quite a few people in the room. I’m looking forward to a fruitful discussion.

I would like to pick up on one element of the subtitle of the session, which talks about the balance between trustworthiness and innovation.

I think this might be a little bit too pessimistic because I would put it in a different way. I would say that trustworthiness is a prerequisite and a basis for innovation. So if you have technology that is trustworthy, you get broader deployment. It is easier to scale, easier to achieve network effects.

Also if you have technology that is trustworthy it tends to pull in a broader development community. And those of you who are involved in the, in particular in the open source development community, are aware that the brightest minds there can be quite critical about anything that is perceived as being deceptive or untrustworthy. Trustworthy technology for me is actually a prerequisite for innovation and the drive to action.

If we look at trustworthiness, it really has two sides or two perspectives. The one perspective is the product or the service itself. So if we have an AI system – a COVID tracing app or any other service – we can ask ourselves: is it safe, is it robust? Those are characteristics of the product that can or cannot be trustworthy. But the second level of trustworthiness is: how do we communicate that to users and citizens? If we haven’t got a way of communicating the properties that make a system trustworthy to a broad audience, we also won’t get acceptance and we only really have one half of trustworthiness.

So as Charlotte mentioned at the beginning, I have been involved in a fairly broad consortium of mostly academics, actually, ranging from technical, technology, philosophy, physics, business, fairly broad multidisciplinary range. What brought us together was this thought or observation that everybody talks about AI ethics. Everyone sort of agrees on what the important principles are for AI ethics: Transparency is good, fairness is good, but no one has a concrete way of putting those principles into practice.

So we set out to work last year and earlier this year to create a framework for bringing principles of AI ethics into practice and therefore how to make products trustworthy. I don’t know if we are going into any detail of the framework later, but there are two key points in there. One point is that yes, it is possible to make something soft such as transparency or fairness measurable and, therefore, enforceable.

Also, if we do want to communicate trustworthiness to users and citizens and customers, we do need a sort of little data sheet that shows the relevant ethical characteristics in a standardized and clear way. This has also been one of the products of our work.

So just to conclude my little introduction, yes, we can. Yes, we can make AI trustworthy if we address both the characteristics of the products and services and also the way in which we communicate trustworthiness, including ethics to a broad audience. Thank you.
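One way to read Sebastian’s claim that “soft” principles such as fairness can be made measurable, and therefore enforceable, is to express them as concrete metrics with agreed tolerances. The sketch below computes one such metric; the choice of demographic parity gap, the toy data and the 0.1 tolerance are hypothetical, not the framework’s actual definitions.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical decisions (1 = favourable outcome) for two groups of people.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group     = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = decisions[group == "a"].mean()
rate_b = decisions[group == "b"].mean()
parity_gap = abs(rate_a - rate_b)   # one possible, measurable fairness indicator

print(f"favourable-outcome rates: a={rate_a:.2f}, b={rate_b:.2f}")
print(f"demographic parity gap: {parity_gap:.2f}")
print("within tolerance" if parity_gap <= 0.1 else "outside tolerance")
</syntaxhighlight>

Once a metric and a tolerance are agreed for a given use case, the same number can appear on the data sheet Sebastian mentions and be checked by an operator, a regulator or an auditor.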

>> CHARLOTTE HOLM BILLUND: Thank you so much, Sebastian. And Mikael, the floor is yours.

>> MIKAEL JENSEN: Can you hear me?

>> CHARLOTTE HOLM BILLUND: Yes.

>> MIKAEL JENSEN: Okay. I would also like to thank you, Charlotte, and the organisers for inviting me here. It was very interesting to hear about the work you are doing, Sebastian, because it is highly relevant to what we are doing in Denmark as well.

You talked about how trustworthiness should be communicated. I’m just going to give you a short update on the current status of the Danish voluntary labeling scheme for IT security and responsible data use that we are working on in Denmark at the moment.

To add some context: the labeling scheme was proposed as a result of recommendations to the Danish government in November of ’18, and in January of ’19 the government-appointed IT Security Business Council also proposed a voluntary labeling scheme, but for IT security. So there was a data ethics label on the way and an IT security label. Both of those recommendations were combined into the idea of developing a voluntary IT security and data ethics labeling scheme in Denmark.

Subsequently the Danish Business Authority, along with other actors from the Industry Foundation, developed a draft prototype of how the voluntary labeling scheme could work, and the concept was then tested among consumers and companies in the summer of last year, with basically positive feedback from businesses but also from consumers, who could see the value of having a label that could guide their actions.

The labeling scheme for IT security and responsible data use was then founded as an independent organisation late last year by four organisations – the Consortium, SME Denmark, the Danish government and the Danish Consumer Council – with the Industry Foundation funding the initiative and the Business Authority having an observer role on the board. The objective is to develop a voluntary labeling scheme that includes responsible data ethics. The aim is not to label specific services but rather the companies themselves. I have been heading up the initiative since February 1 of this year. We are now working on operationalising the labeling scheme’s nine overall criteria and making them practical to enforce, just as Sebastian is working on the AI specifics. One of the nine criteria is about algorithmic systems and AI, and we expect to launch the labeling scheme at the end of 2020.

We have defined nine overall criteria that are meant to deliver digital trust and which the companies in question need to follow. One of the nine overall criteria is about algorithmic systems and AI. The way we work is that we take the high-level criteria – it could be technical IT security, or algorithmic systems and AI, or data ethics – and we try to operationalise them down to a final level 3 that describes the company implementation. It is along the lines of what you are doing in Germany. In relation to, for example, algorithms and AI, we have level 2 criteria like human agency and oversight, transparency and explainability, model quality and so on. We take each of those criteria and break them down to the final requirements for the business to live up to.

And for each of these criteria we are using Danish, European and international frameworks. In the case of the AI part we are using the GDPR, but also the High-Level Expert Group guidelines for trustworthy AI which feed into the white paper, the recommendation from May last year, the Council of Europe’s recommendation on the human rights impacts of algorithmic systems from April this year, a draft ISO standard, and two Danish standards addressing transparency and AI.

And basically we are not applying a one-size-fits-all labeling system, but instead a well-defined risk-based approach where the criteria are differentiated based on an initial risk profiling of the companies in question. Based on initial control questions, we put the companies into four different segments from low risk to high risk, based on, for example, their data complexity or organisational complexity. The different criteria are then applied per risk type. So in order to obtain the label, each segment is required to live up to a different set of criteria, which also vary in strictness.

And basically that is what the companies need to do. From a consumer point of view, we believe that the labeling scheme and seal give the consumer confidence in sharing and using data and make it easy to choose companies and services that provide reliability, security, privacy and ethics. From the company perspective, it will provide a list of guidelines on how to make trustworthy services and how to handle data safely and responsibly. From a society point of view, the seal is meant to increase IT security and responsible data use in companies, and also to increase digital trust and make it a competitive parameter.

So that is basically my intro to the panel.
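Mikael’s risk-based approach – control questions that place a company in one of four segments, each with its own set of criteria – can be sketched roughly as follows. The questions, scoring and criteria lists below are hypothetical illustrations, not the actual Danish scheme.

<syntaxhighlight lang="python">
# Hypothetical control questions mapped to one of four company risk segments.
def segment(processes_sensitive_data: bool, uses_automated_decisions: bool,
            employee_count: int) -> int:
    score = 0
    score += 2 if processes_sensitive_data else 0
    score += 2 if uses_automated_decisions else 0
    score += 1 if employee_count > 250 else 0
    return min(score, 3)  # segments 0 (low risk) .. 3 (high risk)

# Each segment must meet a different, progressively stricter set of criteria.
CRITERIA = {
    0: ["basic IT security", "privacy policy"],
    1: ["basic IT security", "privacy policy", "data-handling procedures"],
    2: ["ISO-style security controls", "privacy by design", "data ethics policy"],
    3: ["ISO-style security controls", "privacy by design", "data ethics policy",
        "algorithmic transparency and human oversight"],
}

s = segment(processes_sensitive_data=True, uses_automated_decisions=True,
            employee_count=40)
print(f"segment {s}: must meet {CRITERIA[s]}")
</syntaxhighlight>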

>> CHARLOTTE HOLM BILLUND: Perfect, Mikael. Very interesting. I guess there will be a lot of questions for this scheme later on.

But I have been told that Martin Ulbrich from the European Commission is with us and can hear us. Are you there, Martin?

>> MARTIN ULBRICH: Yes. Can you see me?

>> CHARLOTTE HOLM BILLUND: Yes, we can. In my introduction of you I said that you were part of the white paper on AI and that you would give us insight into the, maybe into some of your ideas on regulatory schemes and might also include some of the dilemmas that you have in addressing the white paper. But I will give the word to you. Thank you.

>> MARTIN ULBRICH: Yes, thank you very much. I very much appreciate the opportunity to explain our approach that we took in the white paper today to this large audience.

The white paper obviously has a certain history: originally the Commission President had announced that we would actually have a regulation within the first 100 days.

However, it turned out that was not really possible, because this is such a complicated and important topic that it was important to take the widest possible input from the stakeholder community. The white paper is in that respect really an attempt to launch a debate and to gather feedback from the audience, from the stakeholders.

As you know, until Sunday you can still submit your position papers or simply fill in the forms online. We are currently looking at some 600 submissions, and since the majority of submissions come at the very end, we will get to 800 or 900 submissions. That clearly shows a strong interest in the white paper, which is very helpful to us – and helpful for the community, because everybody can see each other’s arguments – but it is extremely important for us in the Commission.

Now, I think the keyword in the white paper is the issue of balance. The overall approach in there is clear: a very strong “yes, but”. Yes, AI is good and we will do everything we can to develop it, make the European Union strong and take advantage of all the possibilities there, but at the same time there are certain things we have to look at.

And the first, so the first part is fairly, I wouldn’t say noncontroversial, but there are very few people who object to the general idea of promoting AI. Obviously then you have lots of discussion about the details of what exactly it is that you want to do or do not want to do.

There is a very strong interest and a very strong debate on the second part, on the rules and framework which may be necessary in order to make AI trustworthy.

Now, clearly we want to make it trustworthy and useful. The COVID-19 crisis has clearly shown it may be helpful to have trustworthy AI but you need to be very rapid in it.

Clearly, medical applications are high risk – I will come back to that in a second. But medical applications – things that can impact your health and may take decisions or make proposals which have consequences for the life or death of a person – are clearly a very high-risk area.

Yet at the same time it is an area where sometimes very, very fast innovations have to be put in place, and where it may sometimes be better to have a reasonably good solution right now rather than a perfect solution in one year, which might be much better but simply comes too late.

It is very much a double-edged sword: we are trying to make AI useful for society and at the same time also trying to put some boundaries around it.

The key concept is the high-risk approach. Clearly, not every AI application – whether you need it to sort out parking spaces or to find the lower-quality nails in a factory – needs any kind of intervention at all. But clearly some AI applications do need intervention. The key is: how do you distinguish them? Which is high risk? Which is low risk?

In the white paper we proposed – and I underline that, because it is to be discussed – an approach with a double consideration: it has to be a high-risk sector, such as public intervention, law enforcement, policing, medical, something like that. Clearly the list is long.

And it also has to be a high-risk application, because not every application in those sectors is necessarily high risk. I was talking about parking allocation: you can have a hospital using an AI system to do parking allocation for its staff, and that is clearly not in any way high risk.

We propose this double filter in order to give the largest possible share of the business community the certainty that what they are doing is not high risk and therefore doesn’t have to be considered as something which could fall under this regulatory approach. That is important for SMEs, and also very important for legal certainty.

Then there is one issue which will raise a lot of debates. I have been talking already for too long but I will stop here at this time and we can discuss that later.
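Martin’s “double filter” – an application is treated as high risk only if it is both in a high-risk sector and itself a high-risk use – reduces to a simple two-part test, sketched below. The sector list and the example uses are illustrative, not the Commission’s actual lists.

<syntaxhighlight lang="python">
# Illustrative high-risk sectors; the white paper's own list is what counts.
HIGH_RISK_SECTORS = {"healthcare", "law enforcement", "transport"}

def is_high_risk(sector: str, significant_impact_on_persons: bool) -> bool:
    """Both filters must apply for the regulatory regime to kick in."""
    return sector in HIGH_RISK_SECTORS and significant_impact_on_persons

# A diagnostic tool in a hospital passes both filters...
print(is_high_risk("healthcare", significant_impact_on_persons=True))   # True
# ...while staff parking allocation in the same hospital does not.
print(is_high_risk("healthcare", significant_impact_on_persons=False))  # False
</syntaxhighlight>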

>> CHARLOTTE HOLM BILLUND: Okay. It was really interesting. So you could have kept on but we can do that.

I haven’t seen any questions. So I will start.

You talked about, Martin, the urgency and the timing for AI. But it also comes down to guidelines and regulations. How does the EU see the urgency of having these AI guidelines before national governments start making their own?

>> MARTIN ULBRICH: Well, I think the core business of the European Union is really to keep the internal market internal: border-free for people, but also border-free for companies to develop their solutions. And especially in an area like AI, which depends on combining large sets of data – which very often means sets of data from different Member States – not having a common framework would leave national legislation that contradicts each other or is very burdensome, and that would be disadvantageous to European industry and therefore to the rest of Europe.

Therefore, I think one of our key motivations here is to take an initiative in order to make sure that the internal marketplace stays intact and offers the opportunities it will give both to consumers and to companies to develop their business.

>> CHARLOTTE HOLM BILLUND: Perfect. Sounds good.

And just a follow-up question for you, Mikael. How many companies do you expect to join in on the labeling scheme?

Is there a volume that you need to have in order for it to work?

>> MIKAEL JENSEN: Actually, the way the initiative is funded is that in the beginning it is going to be free for the companies to onboard onto the programme. And then later on it will be something that they need to subscribe to and actually pay for.

>> CHARLOTTE HOLM BILLUND: Okay.

>> MIKAEL JENSEN: But we need to help the companies in the beginning to, you could say, onboard them onto the labeling scheme and see the value of it. Because, I mean, if there’s only one company having it, there is not so much value. If the whole business community gets the label, then it also becomes something that consumers and users see, and then there is going to be a demand pull for trustworthy services. That’s where we want to be at a later stage.

>> CHARLOTTE HOLM BILLUND: That is very interesting.

And maybe also a question on how we engage the common European citizen in AI and in making it trustworthy. How do you see this? Because for now it is mostly a company and organisational discussion that we have on AI and trustworthiness.

Can we see like a societal demand for this? Or should it come from above, like from the EU in order for it to work? Sebastian, you might want to start.

>> SEBASTIAN HALLENSLEBEN: Well, I think both. When you try to pin down certain aspects of trustworthiness, for example fairness, you cannot just have statistical parameters that you can measure from a technical perspectives, but you do actually need to engage in dialogue with the people who are affected or subjected to an AI system in order to agree what is actually fair for a given system and a given circumstances.

I actually find the approach that you have taken in Denmark quite interesting, Mikael. You are saying: well, we are not really trying to label the product, we are actually trying to label the organisation. This is actually something that we moved away from, because of the difficulty of how you deal with really, really big organisations – companies like Facebook, like Amazon, like Google – who have interests spread all over the world and will be acting by different standards all over the world. How can you sort of draw the line around the Danish activities to give them such a label?

Or will it be more like something intended for SMEs?

>> MIKAEL JENSEN: The idea is that it needs to work for companies of all kinds, sizes and forms, both the smaller ones, the SMEs, and also the big companies.

Of course, if you have a company like Microsoft or Google or Facebook, it can be difficult. We have to find out how that will work.

You know, they can be split up into different business lines and have different market focuses. The privacy and security might differ depending on the product that they actually serve to different customer types.

So that is, of course, something we need to find out how to work with. But I think with the ISO 27001 standard, it is also from the company perspective that you look into standards.

So I think we want to look into the whole company, and we also have in the criteria privacy and security by design and by default, which is also something, you could say, about the company's internal processes and how they actually develop systems from the start until launch, and also post launch.

So yeah, we are taking that perspective in Denmark, which is different from the white paper, which is based on marking or labeling specific services, right, for AI.

>> SEBASTIAN HALLENSLEBEN: We are actually seeing -- maybe that is also a line of inquiry that might be helpful. We are seeing a lot of requests from developers of AI who are saying, well, there are lots of ethical dilemmas to consider and as developers it is not really our competency and our role to make decisions on how to deal with these. So they are also clients or users, if you like, of any labeling or certification scheme. It makes their life and their work a lot easier.

>> MARTIN ULBRICH: Yes, I think with both IT security and privacy and ethics, it can be difficult for companies to find out what they actually need to do differently in order to live up to this. We hope that the different criteria we roll out in the risk-based approach for company types will give that guidance, basically, to the companies.

But now we are talking AI here, and I believe we are all looking at the same frameworks. I guess they are quite new. There is the EU High Level Expert Group and what not. There is a need for European alignment on these criteria. In the European Union you have some idea of what the criteria should be for the high-risk AI algorithms, but more work needs to be done. I can see that when we are operationalising the criteria also for AI, we need to find out what it actually is that the companies need to do. How do we audit it? How do we check whether they are actually living up to the criteria that we claim they need to meet? There needs to be more understanding of that across Europe, I think.

>> CHARLOTTE HOLM BILLUND: Thank you for that perspective.

We have a few questions coming up. And we have -- I have help from my Remote Moderator, Auke, who will switch to some of the people asking questions.

>> AUKE PALS: Thank you, Charlotte. Yes, I’ve received some questions and one question was from Louisewies Van der Lean. She said I’m enjoying the discussion but it is very abstract. I think the workshop can benefit from some concrete examples. Where are the actual benefits, especially for fighting COVID-19? The real question is: Can you give some concrete examples of a data-based application that is actually helping with COVID?

>> CHARLOTTE HOLM BILLUND: Martin, please?

>> MARTIN ULBRICH: Yes, to give you an example of that, there is a CT scan application which can analyze CT scan data and identify whether a patient can be diagnosed as having COVID-19. It has been based on data from China, because that is where the first data came from, and it has been used in various European hospitals.

That is a perfect example, actually, of the dilemma you have between speed and quality. Because right at the beginning the need for identification was extremely urgent. They needed something very quickly, and therefore the hospitals used this without going through the usual steps. Typically, for software of this kind, you have months and years of testing. You would go through a very complicated procedure to be sure it is absolutely the best thing you can bring to the market.

In this case that wasn’t possible. It was probably better than nothing, but not as good as it could have been, or as it will be once it has been developed on the basis of, say, one or two years of data. It will be much better in the future. But that time wasn’t available.

So people had to use what they had. But definitely it did help. That is one example.

You can also use data for the modeling of the spread. I don’t know how far they have advanced. Clearly the modeling needs a lot of improvement; we have all seen that the modeling of the COVID-19 spread has been very far off the mark. So using AI to improve that is certainly another avenue.

Then, in terms of developing treatments, both the treatment if you are already sick and the development of a vaccine, AI is used to go through the various combinations of medical agents and active ingredients to see which ones are the most promising. So pretty much every element of the fight against COVID-19 has been or is still being developed with the help of AI.

>> AUKE PALS: Thank you very much, Martin, for answering this question.

We do have loads of other ones. One is from Desiree, and she has a question for Mikael Jensen. The question is why companies would pay for a labeling scheme, and she is also asking whether there are any ideas on how the data can be stored in a centralized or decentralized system.

>> MIKAEL JENSEN: First of all, in terms of the value of a label, I think it is valuable for the companies to get a set of criteria that they need to adhere to. So instead of having to work it out themselves, they can use this as a guide. There is value in that.

But the value for companies is also that, if they actually live up to the criteria, they can communicate to their external customers, consumers and society, via the data seal, that they are trustworthy and are treating user and customer data in a trustworthy way.

That way they will get a competitive advantage. In the criteria we have also focused on the IT security part. Many companies are threatened by cybersecurity attacks, and by having the seal and living up to the requirements they can save money on that side as well.

So we believe that it will be valuable for the companies to take on being, you could say, certified and getting the seal, but also communicating it afterwards. The companies taking this on will be able to differentiate themselves in the market on trustworthiness going forward.

>> SEBASTIAN HALLENSLEBEN: I would like to just add on there. One question that comes up again and again when we talk about a seal is that a seal is something you either have or you don’t have. It is an all or nothing label.

From a simplicity perspective it is as simple as it gets. A company is either trustworthy or it is not trustworthy.

In our work we considered this approach. In the end we rejected it, because we felt that in particular for AI ethics there are quite a few trade-offs to be made and quite a few shades of gray, if you like. For example, if you talk about a COVID-19 app, you can say, well, an important ethical characteristic would be that it is completely transparent, that it shows what algorithm it uses and what data was used to train it. But at the same time you might want such an app to be a good protection of your privacy. You immediately get a conflict, because total privacy is not possible when movement data and other sensitive data of a large number of citizens go into the app to train the algorithm.

So you have to make a trade-off between transparency and privacy in this case, and it is not something that we felt can be aggregated into a single seal, but rather something that needs to be shown separately in a standardized way as a more detailed label. Almost like a little data sheet, like the nutrition information you find on items of food, which are also tiny little data sheets that look the same on every item of food.

I think that is a very important clarification when we talk about labeling. We have to be clear: Do we talk about a simple yes/no seal of approval, if you like? Or do we actually talk more about something that is a bit like a data sheet? Maybe also similar to the energy efficiency label or nutrition label. Thank you.
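To make the data-sheet idea more concrete, here is a minimal sketch in Python of what such a graded label could look like. The four dimensions and the 0 to 4 scale are illustrative assumptions for this example, not part of any existing standard or of the scheme discussed in the session.

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class EthicsDataSheet:
    system_name: str
    transparency: int  # 0 (opaque) to 4 (model and training data fully documented)
    privacy: int       # 0 (raw personal data shared) to 4 (no personal data leaves the device)
    fairness: int      # 0 (unassessed) to 4 (audited against agreed fairness metrics)
    robustness: int    # 0 (unassessed) to 4 (stress-tested and monitored in operation)

    def render(self) -> str:
        """Render the sheet in a compact, nutrition-label-like form."""
        rows = [("Transparency", self.transparency), ("Privacy", self.privacy),
                ("Fairness", self.fairness), ("Robustness", self.robustness)]
        lines = ["AI ethics data sheet: " + self.system_name]
        for name, value in rows:
            lines.append("  " + name.ljust(12) + "#" * value + "." * (4 - value) + f" ({value}/4)")
        return "\n".join(lines)

# Example: a hypothetical contact-tracing app rated on the assumed 0-4 scale.
print(EthicsDataSheet("contact-tracing app", transparency=3, privacy=2,
                      fairness=2, robustness=3).render())
</syntaxhighlight>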

>> MIKAEL JENSEN: Yes, we are working on making one seal, but the criteria that the companies need to live up to depend on the risk profile that they have. That is in order to make it simple for the end users, so they don’t have to look at different versions in the market, just like the energy label that you are talking about. We believe it is important to communicate in a simple, easy way for the consumers, because basically the consumers or end users don’t really know much about data, whether it is privacy or IT security or whatever. It is all just data to them, and we need to give them some guidance. So that’s the idea.

In terms of the COVID apps, we had a discussion in Denmark as well about whether data should be stored in a centralized database or on the client instead. If the company behind the application had used the seal that we are developing, they probably would have made a solution where they didn’t store data in a centralized database, but rather on the clients, and they would have used the privacy and security by design and by default criteria that we have in the seal.

In terms of privacy, the idea is that you build in privacy and security from the start. The users or customers can then opt in to give more data if they want; if not, they don’t have to. So I think it is more about being transparent and giving the customer solutions and choices on how much of their data they want to share.
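As an illustration of the privacy and security by design and by default idea described above, here is a minimal sketch, with hypothetical class and method names, of an app that keeps contact data on the device by default and only exports it after an explicit opt-in.

<syntaxhighlight lang="python">
class ExposureDataStore:
    """Hypothetical client-side store for a contact-tracing style app."""

    def __init__(self):
        self.local_records = []    # stays on the device by default
        self.share_opt_in = False  # by default nothing leaves the client

    def record_contact(self, rotating_id: str, timestamp: float) -> None:
        self.local_records.append((rotating_id, timestamp))

    def opt_in_to_sharing(self) -> None:
        # The user must actively choose to share more data.
        self.share_opt_in = True

    def export_for_health_authority(self) -> list:
        # Without an explicit opt-in, nothing is exported to a central server.
        return list(self.local_records) if self.share_opt_in else []
</syntaxhighlight>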

>> AUKE PALS: Also --

>> CHARLOTTE HOLM BILLUND: In looking into --

>> AUKE PALS: No, sorry, I’m also seeing a hand from Andre that has been up for a while. So I will give him the floor.

>> ANDRE MELANCIA: Hello, everyone. My name is Andre Melancia. I’m from Portugal and part of the technical communities here in Portugal and also around the world.

My focus is mainly on deploying AI solutions and other kinds of solutions as well. I have had an extreme number of requests in the last few months, specifically because of COVID, but this has also been a trend over the last two years.

So let me just give you some of my practical problems and maybe use this as food for thought for the discussion. So as I mentioned in the last two years there has been a big hike in using artificial intelligence. A lot of people want to learn about artificial intelligence. But most of them are not what we can call data scientists. They don’t have the basis to understand all of the concepts of AI.

Most of the providers like Microsoft, Google, AWS, and many others are making it easier for people who are, for instance, developers, not so much data scientists, to be able to use these technologies without extreme knowledge of things like statistics and math and advanced information about AI itself.

That means that anyone who is a developer can just go to the pre-created services that all these providers supply and just use them, even without actually understanding AI or doing the actual training.

So it is maybe just pressing a button or calling a REST API, something like that.

That brings a whole set of problems that some of you discussed already and one of them is about ethics.

Sebastian first mentioned ethics in this discussion. Of course, if you are a data scientist and you are learning data science, at least at most universities I know, you probably have some course about ethics and try to understand ethics in the context of AI.

However, if you are a developer, that is not really a problem that you are faced with. In most cases if you have a project and they ask you to use AI, you just need to use AI as fast as possible, solve the problem and move on. That becomes an issue when it comes to ethics especially.

Also, some of you have already mentioned what I was about to say when I raised my hand, which is about privacy. Most of these providers that provide easy AI, let’s call it that, have their services in the cloud. It is their business, it is the way they make money. In which case privacy might be overlooked when you send everything to the cloud so that you can do AI on the information.

Just to finish, someone wanted more examples of how this is used. There are a lot. Most companies that provide products and services are using AI, especially eCommerce shops, and they have done a lot of work in retraining all of their models in the last few months, because everything they knew before about what a normal suggestion to give to users looks like became completely wrong starting in March or so. From that point on people were asking for different things.

Of course, I’m not mentioning the usual things like toilet paper or anything like that but the requirements, like buying masks, disinfectants, things like that online became an actual thing. Most providers of online commerce or even in-person big retail commerce, they have started to use AI to try to predict things which are not very well predictable if you are just a human taking from your own experience.

So this is something that I have been using every single day over the last two or three months, to actually make some sense out of all the irrationality and all of the madness that comes with the current pandemic. Okay. Thank you.

>> CHARLOTTE HOLM BILLUND: Thank you, Andre. Any thoughts on this from some of the Panelists?

>> SEBASTIAN HALLENSLEBEN: Yes, I think Andre mentioned one very important point which is that a lot of AI development is based on existing frameworks, be it Google’s TensorFlow or Facebook PyTorch, which are de facto standards that are widely used because they are good. They get the job done. But we do have two limitations we need to be aware of, particularly when we look from the European perspective.

One limitation, and Andre lightly touched on it, is the quality of the data that goes into it. It is one thing to get an AI framework up and running, train an algorithm and get it to do something, but it is a different level of quality to consider: how good is the data that I’m feeding into it? Is it complete? Is it biased? Am I just automating errors that human beings used to make?

And that is something that is not necessarily taught to computer scientists or the computer engineers who programme and train AI.
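As an illustration of the data-quality questions raised here, the following sketch checks a training set for missing values and under-represented groups before any model is trained. The thresholds, field names and example data are assumptions made for this illustration.

<syntaxhighlight lang="python">
from collections import Counter

def data_quality_report(records, group_key, max_missing=0.05, min_share=0.2):
    """records: list of dicts; group_key: the field describing a relevant group."""
    n = len(records)
    missing = sum(1 for r in records if r.get(group_key) is None) / n
    shares = {g: c / n for g, c in Counter(r.get(group_key) for r in records).items()}
    return {
        "missing_ratio_ok": missing <= max_missing,
        "group_shares": shares,
        "underrepresented": [g for g, s in shares.items() if g is not None and s < min_share],
    }

# Hypothetical patient records: one region dominates, so the report flags the other.
patients = [{"region": "north"}] * 80 + [{"region": "south"}] * 15 + [{"region": None}] * 5
print(data_quality_report(patients, "region"))
</syntaxhighlight>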

The second issue is the transparency of the frameworks themselves. We use frameworks that are normally also integrated with cloud offerings, so we don’t know what else might be happening with the data. We are actually forced to trust certain entities, some of which haven’t always proven very trustworthy in their past behavior. And we might want to consider concepts such as European AI distributions, which could take a leaf out of the very successful way in which, for example, Linux is distributed.

Very, very few people take the Linux code, compile it and implement it from scratch. Most people take a ready-made package where someone has taken the actual Linux code, added lots and lots of tools that are specific to a particular application or type of user, and offers that, usually packaged with some sort of support contract. And we might want to consider building a similar infrastructure, a similar ecosystem of AI distributions, maybe with some of the big toolkits at the core, but decorated with the tools and issues that are relevant for Europe, to make it easier for developers to create trustworthy AI applications.

>> CHARLOTTE HOLM BILLUND: I know, Martin, the EU has a data strategy running alongside the AI white paper. Could you comment on the interplay between the two?

>> MARTIN ULBRICH: Yes, let me first agree with Sebastian. I mean, one of the problems when you do any kind of modeling is the quality of the data. That is certainly important for AI and not specific to AI.

As an economist, when I was doing my studies I used to be told it is basically rubbish in, rubbish out. If you put poor data in, you get poor results out. One current example: some of the predictions that have been made about the COVID crisis, about hundreds of thousands of dead people, were off mostly because of the data which they used. That wasn’t actually AI, but with very limited data of insufficient quality you get predictions ten times higher than anything that is reasonable.

Therefore, data quality is extremely important. In AI, of course, it is even more important because AI can be scaled up more easily. In many things you have the same concerns as before, but the higher impact matters a great deal.

Now, in order to improve the quality you have to have a choice of data. If you have three data points, there is nothing you can do. One hundred is better. A thousand, slightly getting there. By 10 million, you have a good chance of having a good quality set of data.

Therefore, making data available, which is really what the data strategy is about, is a key requirement for the entire architecture that we are talking about here in terms of making sure that AI is of high quality.

If there is no data available, then we can talk about the need for data quality and anti-biasing and all kinds of things, but none of it can be done. Therefore, data is really the alpha and omega of artificial intelligence.

Of course, it works the other way around as well. If you have large sets of data without AI, you won’t be able to make sense of them. There is a limit to the number of data points you can handle even with computer help, if that computer help still needs human intelligence to actually go through the data. You can use your computer to do various searches, et cetera, but if you have to manage very, very large data sets it becomes impossible. That is why AI holds such big promise: it can actually see connections in the data and find relations that humans couldn’t find, because the volume of data is too big.

These two things are two sides of the same coin, and that is why they have been published together and why they will continue to evolve together.

>> CHARLOTTE HOLM BILLUND: Thank you. I know we have a few more questions in the chat.

>> AUKE PALS: Yes, Charlotte. I have a question from Amali De Silva-Mitchell. And the question is: what has been the role of professional bodies such as lawyers, engineers, doctors, and accountants in setting consistent standards for AI?

>> SEBASTIAN HALLENSLEBEN: Well, I can say something about that, at least from a German perspective. Actually, it is a very good question because it fits neatly into the categories we have discussed so far: looking at the products, looking at the organisations, looking at the communication.

Now we are actually looking at the professionals as individuals who are operating AI, be it doctors in their practice, getting AI to help them with diagnosing COVID or using AI to help them interpret x-rays or be it engineers using AI to help them analyze traffic flows or the construction of lightweight building materials, anything like that.

So at least in Germany there was some work 20 years ago to draw up a code of ethics for engineers, which was done in one of the largest engineering associations. It was a multidisciplinary process that came up with an eight or nine-page code of ethics, which has sort of stood there for 20 years.

Now, last year, more or less the same group, as far as they are still around, came together again to update their code of ethics to reflect certain characteristics of AI. And it is interesting, because usually you would have thought that something like ethics should be technology neutral. If you draw up a code of ethics, what is right and wrong for an engineer, it shouldn’t really matter exactly what technology that engineer uses.

But since AI is so new and has characteristics that we’ve never seen before in another technology, it actually necessitates updating such technology-neutral codes of ethics as well. Just like many technical standards have to be touched again.

>> CHARLOTTE HOLM BILLUND: Would you elaborate a little bit on the technology neutral? What was the problem?

>> SEBASTIAN HALLENSLEBEN: Well, there are two characteristics of AI that make it difficult to treat it just like any other new technology. One is the black box character of AI. I cannot tell how the AI reaches a decision. It may say this is a stop sign or a speed limit sign, but I can’t look into the box to see what leads it to believe that this is a stop sign or that this person is not creditworthy.

It is that black box nature. We can’t be completely sure yet, but it is an issue in principle. It is not something that just takes a bit of work to overcome; it is a principal limitation of one of the most common AI methods at the moment, neural networks.

And the second fundamentally new characteristic of AI is that it is self-learning. Which means that if I certify an AI today and prove that it has certain desirable characteristics, that it is safe, ethical and everything else, it might learn, it might change itself. Tomorrow, well, who knows whether it is still safe and ethical and robust. So that poses whole new challenges for things like testing and certification. In practice you have to find a way of automating and repeating those processes, which used to be one-offs or something that you might do yearly or so.

These are the two main new characteristics which mean that things that were meant to be technology neutral do need to be taken up again and checked again, maybe revised.
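To illustrate the point about self-learning systems, here is a minimal sketch of turning a one-off acceptance test into a repeated check that runs after every model update. The model, metric and threshold are placeholders for this illustration, not a real certification procedure.

<syntaxhighlight lang="python">
def revalidate(model, test_inputs, test_labels, min_accuracy=0.95):
    """Re-run a fixed acceptance test against the current state of the model."""
    correct = sum(1 for x, y in zip(test_inputs, test_labels) if model(x) == y)
    return correct / len(test_labels) >= min_accuracy

def monitoring_loop(model, update, test_inputs, test_labels, rounds=12):
    """Repeat the acceptance test after every self-learning update instead of certifying once."""
    for period in range(rounds):
        model = update(model)  # the system changes itself in operation
        if not revalidate(model, test_inputs, test_labels):
            print(f"Period {period}: model no longer meets the certified threshold")
            break
</syntaxhighlight>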

>> CHARLOTTE HOLM BILLUND: Just before we go to the recommendations, maybe for both Sebastian and Martin: do you think there is a demand to use AI models that have more explainability in them? That you can’t use neural network models for some purposes because you need to have the transparency? Do you foresee that?

>> SEBASTIAN HALLENSLEBEN: It is quite possible. Look, for example, at AI in the justice system. A number of countries use AI today to decide whether someone is released on bail, whether someone is released on parole, or to decide whether this person should go to prison for five years or 11 years.

It might be possible to prove statistically that on average, as a whole, the AI way of doing it is better than human judges and human parole officers, and you could argue, of course, let’s go with AI because as a whole it seems to be better. Statistically it seems to be better. Big assumption there, but let’s assume that’s the case.

But if you are a person who is affected by that decision, if you are a person who, for example, is not released on bail even though you think you should be, then if it is a judge you can ask the judge: explain to me, why are you not releasing me on bail? Why are you sentencing me to 11 years in prison rather than five? The judge has to explain that individual case. The judge won’t say, you know, on average people like you tend to reoffend, so just go to prison for 11 years.

No, the judge has to give a good explanation for that individual case. And if you use AI you cannot get that explanation for the individual case. And that is, to answer your question, from my perspective would be a use case where we would say we are not using AI because it is not explainable. It is not transparent for individual cases.

>> CHARLOTTE HOLM BILLUND: Perfect. And Martin?

>> MARTIN ULBRICH: I agree, it clearly depends so much on the particular use case. In some cases it just doesn’t matter whether it is explainable or not. In some cases you might want to put a premium on performance over explainability. If you have a medical system which is explainable but has a 90 percent success rate, and another one which is not explainable but has a 99 percent success rate, most people will go for the 99 percent even if there is no explainability.

In other cases, like the one Sebastian was describing, explainability is clearly one of, if not the most important thing. But there we also have to be careful not to get lost in technical concerns and lose sight of the ethical questions. Sometimes we keep discussing, you know, that the performance of the AI is not good, the data is not good, it is biased, and we tend to forget the question of whether we would actually want to use AI at all. For example, taking the same example, there have been findings of discrimination in the U.S., that the system has been more favourable to whites than to blacks, more negative on blacks than on whites. That is clearly a problem. But the key question is: let’s assume that in two or three years’ time that is solved. If we keep discussing all the time that discrimination is the problem, then once discrimination is solved we have set ourselves up to accept the idea that we do it at all.

Then the AI would be okay: we talked about discrimination, there is no discrimination anymore, now we can use it. The fundamental question is: do we want to use the technology for that kind of purpose at all, to rely on large databases to put you in jail for four years as opposed to six years? We need to keep that in mind even if, first of all, we talk about the technical details and nondiscrimination and all the other issues.

>> CHARLOTTE HOLM BILLUND: Perfect.

Well, I think that was a nice stepping stone over to our section on recommendations. Martin, you were first in line before and didn’t make it. This time you are first in line with offering your recommendation.

>> MARTIN ULBRICH: Yes, my recommendation is really to keep a balance. I am coming back to what I said in the beginning. A wild west model of AI, without any kind of rules, is clearly not something we want. But a perfectly safe state without any innovation is something we certainly want to avoid as well. Therefore, we really have to find a balance.

I think in our particular case balance requires differentiation. One size fits all is probably not going to be the right way to find a balance for AI across the board.

>> CHARLOTTE HOLM BILLUND: Perfect. And Sebastian, you want to come up next?

>> SEBASTIAN HALLENSLEBEN: Yes. So my recommendation, maybe unsurprisingly after the discussion so far, is to create a standardized way to describe the ethically relevant characteristics of an AI system: the degree of privacy, the degree of transparency, the degree of fairness, robustness, and so on. Whether you call it a label is a different story. If it is a label, it is a detailed label. I would like to think about it more like a short data sheet.

The reason I’m recommending it is that pretty much every stakeholder gets a benefit out of such a standardized AI ethics data sheet or label. If you are an operator, some company that wants to operate AI systems, you can look at your use case. You can consider: okay, what ethical risks do I have in my application area? Am I maybe in personal profiling or in medical treatment, which is very sensitive? Am I in another area?

I can look at the market and ask what is available. Some AI systems might be labeled as being very ethical, or with maybe a high degree of privacy or transparency or other relevant characteristics, and others may not. If I put out a request for tender, I can specify these are the minimum requirements in ethics that I want.

From the regulatory perspective, the regulator can use such a data sheet to set minimum requirements and say, okay, in certain high-risk, medium-risk or low-risk cases we require certain minimum levels. They can be specified in a standardized way within such a data sheet.

If you are a consumer who wants to buy an AI system, for example a piece of software, an app, and you have a choice between different apps, you can compare, just like you compare the nutrition labels on food items. Finally, for a manufacturer, and just to pick up a very nice choice of words that Mikael brought up earlier, you can turn the ethics of your AI systems into a competitive parameter. You can say, look, these are the levels of transparency and privacy that my AI system reaches, and therefore I’ve got a much broader market available compared to competitors who have lower levels or who may not be able to provide that data at all. Thank you.
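As an illustration of how such a standardized data sheet could be used in procurement or regulation, the following sketch filters hypothetical vendor offers against minimum levels per dimension. The vendors, dimensions and the 0 to 4 scale are assumptions carried over from the earlier sketch, not an existing scheme.

<syntaxhighlight lang="python">
def meets_requirements(data_sheet: dict, minimums: dict) -> bool:
    """True if the sheet reaches at least the required level on every listed dimension."""
    return all(data_sheet.get(dim, 0) >= level for dim, level in minimums.items())

offers = {
    "vendor_a": {"transparency": 3, "privacy": 4, "fairness": 2, "robustness": 3},
    "vendor_b": {"transparency": 1, "privacy": 2, "fairness": 1, "robustness": 4},
}
tender_minimums = {"transparency": 2, "privacy": 3, "fairness": 2}  # assumed high-risk use case

eligible = [name for name, sheet in offers.items() if meets_requirements(sheet, tender_minimums)]
print(eligible)  # only vendor_a clears the assumed minimum levels
</syntaxhighlight>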

>> CHARLOTTE HOLM BILLUND: Very good. Interesting thoughts on this standardisation. I guess that’s the difficult part of it.

And we will take questions after this. We had a few left from the round before, but we will take Mikael’s recommendations as well.

>> MIKAEL JENSEN: Yes, thank you. My recommendation is basically that it is crucial that we as citizens and consumers have trust in the digital services we use every day. Today it can be very difficult to know whether a company has good IT security or is handling our data in an ethical and responsible manner or not. We must see trustworthiness as a driver of competitive advantage in Europe. So basically, I think we need to consider algorithmic systems and AI as part of a broader agenda regarding digital trust. Instead of developing a trustworthiness label within certain areas, privacy, security, we should develop a scheme with a more holistic approach to digital trust, which should include reliability, security, privacy and ethics, thereby also covering AI, which is what citizens are concerned about. By aiming at a demand-driven, voluntary European labeling scheme for digital trust, it is possible to get impact from a consumer, company and societal point of view.

>> CHARLOTTE HOLM BILLUND: Very good.

Thank you. And we will take the few questions left and if you have further questions, please join in in the chat because this is going to be the last round of questions before we have the key messages of this session being read out.

Please?

>> AUKE PALS: Thank you, Charlotte. Yes, I do have some questions still in line. One is from Olga – sorry if I mispronounce your name. The question is for Martin Ulbrich: Does the omission of a temporary ban on the deployment of facial recognition technology from the final version of the white paper, as opposed to the initial text leaked in December 2019, mean that the EU is willing to prioritize technology over privacy concerns? And how does the use of facial recognition comply with the GDPR requirement to get individuals’ explicit consent?

>> MARTIN ULBRICH: Yes, I guess that’s for me. Now, the question about facial recognition and the draft white paper is actually a funny one. As you know, when you write a white paper it goes through many, many iterations. I think in our case something like 27 different drafts.

The interesting thing is that out of these 27 drafts there was a single draft, version 13 or 14, in which the moratorium was ever mentioned. It wasn’t mentioned before or afterwards. It was very interesting that this single draft version was the one that was leaked to the press. So I don’t think you can draw the conclusion that you were mentioning.

Facial recognition is clearly, as you have seen in the white paper, a topic which we want to have a debate about. We clearly are concerned. It is a technology which is not yet ready for rollout from a technology point of view.

I don’t think we are ready for a ban from a political point of view either. Therefore, we would argue that now is the time for the debate.

We expect that among the 600, maybe 700 or 800, people and organisations who comment in the public consultation, a large number will have something to say on facial recognition. It is clearly one of the highest-profile applications of AI.

Interestingly enough, even in countries which don’t necessarily want to regulate AI in general, like the United States, there is a flurry of local initiatives on facial recognition.

Now, facial recognition has to comply with the GDPR if you actually apply it, or with the Law Enforcement Directive, since the GDPR doesn’t apply to the police force and border patrol.

And in the instances where facial recognition has been used in the European Union: there was a court case in the U.K., while the U.K. was still in the Union, on whether the use by the police was appropriate, and the court said yes. There have been a couple of cases in Sweden and France where the national data protection authority stopped facial recognition systems although everybody had agreed to it.

There are a number of test cases which are really run for the purpose of research, such as one in Berlin a year or two ago. So for the moment everything is fine. But the question is really whether widespread use of facial recognition is acceptable to European citizens.

As always there is a trade-off: you may get more security but less liberty, and I’m oversimplifying. What exactly the European public wants, I don’t know. That is why we’ve decided it would be a good idea to ask the public.

>> SEBASTIAN HALLENSLEBEN: Actually, if I can add something to Martin’s points. When discussing AI ethics, facial recognition is a wonderful example because it allows us to consider the many shades of gray that are to be considered in AI ethics.

So with the very same technology, facial recognition could be used for unlocking your mobile phone, in which case, okay, you are probably not on ethically sensitive territory. But it can also be used for recognition in the streets, of masked people in demonstrations or just of people in the street, which raises a whole different raft of ethical issues.

Basically it is the same technology. You can take those examples even further and go on to the factory floor. You might have machinery that recognizes workers for safety reasons and maybe also remembers whether certain workers tend to be more alert than others. It might want to identify workers via facial recognition there. All of a sudden you have to talk about safety versus data protection, versus AI ethics.

So it is a wonderful example that to me shows that AI ethics is not something that can be discussed separately or independently of the application context. In fact, we actually need to stop and consider the application or the application area, its sensitivity and the ethical risks in it. Then we can work towards the AI systems: for that level of sensitivity, what is actually the appropriate ethical standard that the AI system needs to reflect? Thank you.

>> CHARLOTTE HOLM BILLUND: But are we good enough at handling these gray zones? Because they keep coming up.

Is there some way that it doesn’t have to be case by case, that we have something we can fall back on? Does anyone have a comment on that?

>> SEBASTIAN HALLENSLEBEN: It is a little bit like in law enforcement. You can’t list all the possible cases and specify the fine and the prison term for each imaginable case. You have to group them. Was it intentional? Was it accidental? If it was intentional, what was the motive? And so on. So you have certain criteria that you weigh up to determine what the fine or the prison sentence should be.

In the same way, you cannot list every imaginable use case to come up with the ethical demands that you may want to put on it. Therefore, you also have to classify. There is a lot of work on this, quite a large number of different schemes to do that.

Martin has explained the white paper approach, which is sort of a mix of two major risk levels, low risk and high risk, and also a sector-based approach.

Then there is work from a variety of groups which tends to end up with four to five risk levels, depending on what the impact is if the AI gets it wrong, among other factors.

I think those classification schemes also need to be standardized in some way. So consensus needs to be built on what the right classification scheme is to determine the ethical sensitivity of an AI use case.
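To illustrate what such a classification scheme might look like in practice, here is a minimal sketch that derives a risk level from the impact of a wrong decision and the degree of human oversight. The inputs, levels and rules are illustrative assumptions, not the white paper's categories or any group's published scheme.

<syntaxhighlight lang="python">
def classify_risk(impact_of_error: str, human_oversight: bool) -> str:
    """impact_of_error: 'negligible', 'material' or 'severe' (assumed categories)."""
    if impact_of_error == "severe":
        return "high" if not human_oversight else "medium-high"
    if impact_of_error == "material":
        return "medium" if not human_oversight else "low-medium"
    return "low"

print(classify_risk("severe", human_oversight=False))     # e.g. fully automated sentencing support
print(classify_risk("negligible", human_oversight=True))  # e.g. unlocking a phone
</syntaxhighlight>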

>> CHARLOTTE HOLM BILLUND: You want to comment as well, Martin?

>> MARTIN ULBRICH: Yes, I was going to say that the world is a complicated place. Gray zones will always be there. It is unimaginable that whatever regulation comes out of the white paper process, if indeed regulation comes, will neatly put everything into one category or another. There always needs to be some practical appreciation of the circumstances.

Therefore, you will definitely need some kind of authorities who will actually do that. And clearly that would have to be done in the Member States, which know much better than a centralized authority what the particular circumstances are. Yes, there will be gray zones. We won’t necessarily be good at dealing with them at the beginning. But in many other areas of the modern world there are plenty of gray zones, and over time we have developed practices for how to deal with them. That will happen with AI as well.

We have to keep in mind we are at the beginning of technology and therefore it is going to take some time before everything is as good as it possibly could be. Even then it is not going to be perfect, of course.

>> CHARLOTTE HOLM BILLUND: All right. We are about to round up this session. We have a wrap-up coming up in a minute.

If somebody wants to pose a last question. Do we have anything?

>> AUKE PALS: Yes, we do have another question, Charlotte. The question is from Amali De Silva-Mitchell, and it is: What is a possible approach to getting consistent data sets across collaborating entities? She says that we hear data is too inconsistent to enable good AI collaboration.

>> CHARLOTTE HOLM BILLUND: Anyone?

>> MARTIN ULBRICH: Well, making data compatible is, of course, kind of the Holy Grail here. As I said earlier, you need access to lots of data, and if you don’t have lots of data, you don’t have anything. Much of the data is wrong, and you first need to curate it to make it usable. That very often means making it compatible with certain standards, or having a common category through which you can link various data sets. That is very labour-intensive work, and that is one reason why there is going to be huge demand for data scientists and AI professionals in the near and medium-term future.

Depending on the area you are in, sometimes it will remain the case that you have to do it manually. Sometimes standardisation can help, if you have a well-defined area in which that is possible; I’m sure Sebastian knows much more about that. Maybe a combination of the two. It is a very wide field. I don’t think there is any simple answer to that.

>> CHARLOTTE HOLM BILLUND: All right. I think I will round up with a really easy question maybe. Maybe not.

If the three of you could think of what has surprised you the most in working with AI, what has been the most exciting challenge, just to mention one thing. Could you round off with that?

And I think you can join in who wants to start.

>> SEBASTIAN HALLENSLEBEN: Okay. I will start. Well, what has surprised me is the quality of the public discussion and the way in which ethics is being discussed right from the start. It is very different from earlier hyped technologies, be it atomic energy a few decades ago or genetic engineering or nanotechnology. In those cases it was just the engineers creating a new technology, seeing a bright future and getting a lot of development done.

Later on the rest of society woke up and said wait, hang on, we are not quite sure we like everything you are doing there and we got into a very confrontational situation around these technologies.

For AI, I am observing that it is actually the community of scientists and engineers themselves who are reaching out to the rest of society and saying, well, you know, there are some ethical issues here that we are not comfortable about. We should talk about this and create a societal consensus.

I’m optimistic and hopeful that we will not see the same kind of confrontational discourse around AI that we’ve seen with major technologies previously.

>> CHARLOTTE HOLM BILLUND: Interesting. Thank you very much. Mikael? Your turn.

>> MIKAEL JENSEN: When we are working on trying to make the AI criteria concrete, I can see there is a need for more standardisation and alignment on how to actually operationalize the criteria. I think on a high level there is common ground emerging on transparency, model and data quality and things like that. But how to make it practical for companies to adhere to those standards, work still needs to be done there.

Also, how to actually follow up, from a labeling scheme point of view, to find out whether the companies are actually living up to the specific criteria. That is one of the key takeaways at the moment that we need to focus on.

>> CHARLOTTE HOLM BILLUND: Perfect, thank you. Martin, do you want to end?

>> MARTIN ULBRICH: The thing that most impresses me is the difference between talking to the engineers and people in the field and talking to the wider public. When you are talking to people who actually work in these areas, we talk about things like improving transparency, et cetera. When you talk to people in the wider world, consumer groups, they are talking about computers taking over the world. It is a completely different approach. And sometimes I wonder, when we are talking about creating trust, I won’t say we are barking up the wrong tree, but we assume the public has certain issues with explainability, bias or whatever, when in reality there is a much wider and more profound concern in the public, which isn’t that much linked to ethical niceties or even legal niceties but to a fundamental point of view.

>> CHARLOTTE HOLM BILLUND: Interesting. Thank you very much.

And for my part I’m about to round off, because we have a wrap-up coming with key messages we have to agree on.

Thank you so much for the lively discussion, for all of the questions and for all of your good answers in the panel. It has been a pleasure being part of this. And now, over to our Rapporteur, Marco Lotti.

>> MARCO LOTTI: Thank you, Charlotte. I assume you can hear and see me well. Thank you for giving me the floor; following the session was very, very interesting. As you mentioned, I’m Marco Lotti. My task today was to write a comprehensive report about what was said, which will be available next Monday, but my second task was also to sum up the discussion in three main take-away messages, on which we should see whether there is a draft consensus.

Before reading the messages, I would like to remind you that the EuroDIG platform will email you the text of the messages, and it will be possible for you to comment and edit them further. For now it is just to see whether, more or less, the content of these messages reflects what has been discussed.

And if we can show the first message from the slides?

So that I can read it out. If there are any objections, please type them into the chat. The first message: trustworthiness should be regarded as a prerequisite for innovation. When addressing it we should look at two sides: one regards the characteristics of the product, meaning technical safety requirements, and one is related to how trustworthiness is communicated to the people.

As an example, it was also mentioned that the Danish government launched a new company labeling system which aims to make it easier for users to identify companies that are treating customer data responsibly.

I’m quickly looking at the chat to see if there are strong objections from the public and from the speakers on this first message.

In case, please let me know before we move to the second message.

One correction: It is not the Danish government. Sorry if I misunderstood. I will correct it, of course, accordingly.

Do we have --

Can I ask maybe to Mikael to clarify this point, please?

>> MIKAEL JENSEN: Yes. It is a partnership, a consortium of Danish businesses. I can send you the specifics.

>> MARCO LOTTI: Yes, okay.

>> SEBASTIAN HALLENSLEBEN: One other small correction, to the first message: it is not the technical safety requirements, it is the ethically relevant characteristics.

>> MARCO LOTTI: Okay. Noted.

>> CHARLOTTE HOLM BILLUND: Perfect.

>> MARCO LOTTI: Besides these two comments, the messages will be sent to you later for further commenting and editing. I think we can move to the second message, which reads: Striking the right balance between trustworthiness and innovation represents an important regulatory challenge for AI applications. The EU Commission’s white paper addresses this aspect, especially in high-risk scenarios where rapid responses are needed.

Are there any strong objections to this slide or to this message?

>> SEBASTIAN HALLENSLEBEN: You might want to add a sentence at the end saying that trustworthiness can also be a driver of innovation.

>> MARCO LOTTI: Yeah.

>> SEBASTIAN HALLENSLEBEN: To capture that aspect as well.

>> MARCO LOTTI: Okay. I would now move to the last message, which reads: AI and data are interlinked. It is difficult to make sense of large data sets without AI, and AI applications are useless if fed poor-quality data or no data at all. Therefore, AI discussions need to be linked to data governance schemes addressing the sharing, protection and standardisation of data. However, it has been said that AI also presents important particular characteristics, such as black-box and self-learning elements, that make it necessary to update the existing frameworks that regulate other technologies.

Again, if you have any edits, comments, reactions to these messages, let me know.

(There is no response.)

>> MARCO LOTTI: It seems there are no further reactions to the messages. Therefore, I would give the floor back to Charlotte. I think the first part of my work is done here, and the second part will be wrapping up the report, which again you will find on the Digital Watch website and which will also be emailed to you by the EuroDIG Secretariat. Thank you very much, and thank you to the speakers for the interesting discussion.

>> CHARLOTTE HOLM BILLUND: Thank you so much, Marco. For my part, I think we are out of time.

Now I don’t know, Nadia, if you need to say something at the end here. Thanks again for participating. It has been a pleasure.

>> NADIA TJAHJA: Thank you so much for your moderation. It was definitely an interesting session. We had the possibility to hear not only from the key participants but also from the participants who joined us online.

And we are very happy that we were able to address this issue because it is so pertinent, happening in our times now.

There is a quick note from our Remote Moderator, Auke.

>> AUKE PALS: Yes, Nadia. I would like to thank Charlotte for moderating and keeping us on time, and also the Rapporteur. The chat was really active. Thank you, everyone, for asking your questions. I hope to see this continue on the EuroDIG Forum.

>> NADIA TJAHJA: Now we have come to the end of the session. I would like to see if EuroDIG headquarters is on line to come and join us. EuroDIG headquarters, are you here?

>> SANDRA HOFERICHTER: Yes, thank you.

>> NADIA TJAHJA: Hi. How are you doing?

>> SANDRA HOFERICHTER: It is getting hot meanwhile. There is a thunderstorm coming and it is hot in our studio with all the computers and devices that make even more hot air.

But I could see in your session there was no hot air at all. It was a very good discussion, and what was really enlightening for me was how the messages were quickly discussed at the end and how consensus was built on these key messages. That is basically the essence of EuroDIG. They will go into the brochure that we will distribute to key policy organisations and governments, et cetera.

So congratulations to all, how you managed to do this. This is exactly how it should be.

And with this, I think we can go into the coffee break. We will reconvene here at 4:30. And I think it is you, Nadia, that guides us then through the youth messages?

>> NADIA TJAHJA: It will be my pleasure to present the youth messages that were prepared by the youth participants, who have worked on them for the last three weeks. They want to share with you the thoughts that are on their minds in their local communities. We hope you will come and join us after this small break.

>> OPERATOR: Recording has stopped.

>> SANDRA HOFERICHTER: I will now rush to Studio Berlin and Studio Trieste. They will stop streaming for the two days and I would like to say goodbye to them. We meet each other again in half an hour. Enjoy the music, meanwhile.

>> AUKE PALS: Bye, Sandra.

(The meeting concluded at 1600 CET.)

(CART captioner signing off.)