New developments and prospects in data protection (with regard to AI). – WS 04 2021

29 June 2021 | 14:45-15:45 CEST | Studio Belgrade | [[image:Icons_live_20px.png | Video recording | link=https://youtu.be/kQEAIhbWHzk?t=17346s]] | [[image:Icon_transcript_20px.png | Transcript | link=New developments and prospects in data protection (with regard to AI). – WS 04 2021#Transcript]]<br />
[[Consolidated_programme_2021#day-1|'''Consolidated programme 2021 overview / Day 1''']]<br /><br />
Proposals: [[List of proposals for EuroDIG 2021#prop_10|#10]] [[List of proposals for EuroDIG 2021#prop_24|#24]] [[List of proposals for EuroDIG 2021#prop_26|#26]] [[List of proposals for EuroDIG 2021#prop_72|#72]] [[List of proposals for EuroDIG 2021#prop_79|#79]]<br /><br />


== Transcript ==
Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-482-9835, www.captionfirst.com
 
 
 
This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.
 
 
 
>> STUDIO: Hello, welcome back to Studio Belgrade. I’m Johanna and I will be the studio host for the next session. Before I hand over to the moderator of the workshop, I would like to briefly go over some session rules. The first one is: when joining the studio, make sure to display your full name so we have a better view of who is in the room with us. Next, if you have a question during the session, use the Zoom function to raise your hand. As you might have noticed, everybody is muted by default, so for the purpose of presenting your question or comment, you will be unmuted. When speaking, remember to state your name and affiliation. You also have the option to enable your video so we see who you are, but this is optional and totally up to you. You can also post your questions to the chat. For this session, I will be the chat moderator, and from time to time I will gather the questions and present them to the panelists.
 
A note that chat will not be stored or published. So don’t be afraid to use it. And lastly, we kindly ask you not to share the Zoom link with anybody. There is still an option to register for the meeting or follow us on the YouTube Live stream.
 
And with that, my introduction part is done. I will now hand over to Urska Umek, who will be moderating the workshop on new developments and prospects in data protection with regard to AI. Hi, Urska.
 
>> MODERATOR: Hi, yes, I can hear you well.
 
>> STUDIO: The floor is yours, I will see you later.
 
>> MODERATOR: Thank you very much. I will also try to follow the chat a little bit. Good afternoon, I am Urska Umek and I will moderate this workshop. I work for the Council of Europe. As we heard in the introduction from our Head of Information Society, Patrick Penninckx, we work around human rights, democracy, and the rule of law. This session is supposed to be 60 minutes, but we’re running late, so we’ll try to condense it a little bit. We will be discussing how to develop and use new technologies, especially AI, in compliance with data protection rules. As we know, today systems powered by artificial intelligence are creating a shift in how we communicate, how we live, and how we make our decisions. New technologies create a lot of opportunities or, as Roberto said just before, they’re a blessing, but they also present risks: risks for values, for the exercise of human rights, for the application of the rule of law, and for the functioning of democratic societies, which is why AI matters for data protection. But that is not the only reason why we work on these issues. AI-powered technologies are often used to process personal data, and we have instruments that are designed to protect personal data. I’m talking here mainly about the Council of Europe’s Convention 108 on the protection of individuals with regard to automatic processing of personal data, and the European Union’s GDPR. But they were not designed with AI in mind. They provide little guidance with regard to AI, and AI techniques can derive very sensitive information from what appears to be a mundane dataset. And GDPR imposes quite a heavy burden on companies, but meaningful control over personal data by data subjects is still missing.
 
And of course, the question here is how to regulate AI in such a way as to empower data subjects and achieve data protection friendly AI. So we will talk about these challenges with our three experts: Emanuela Girardi, Jean-Christophe Le Toquin, and Alessandro Mantelero. We’ll start with presentations by the experts in the beginning of the session. Then we will hopefully have a lively discussion in the second part on what it means to have control over one’s personal data and how to achieve that in the context of AI. You are all warmly invited to share your ideas and ask questions in the chat, or raise your hand using the Zoom feature. We would now like to give the floor to the three experts. We start with the international perspective on regulation. So the EU and the Council of Europe, as we have heard, both have plans to regulate AI.
 
However, how are they going to, or how are we going to, address the risks for the protection of personal data, which are being amplified by AI? We’ll start with Alessandro Mantelero, who I think is best suited to provide reflections and perhaps answers to this question. Alessandro is associate professor of private law and technology at the Politecnico di Torino. He is also the Council of Europe’s scientific expert on AI and human rights, serves as an expert for many other international organizations, including the United Nations and the European Commission, and has worked with the Italian Ministry of Justice. I think he’s quite the right interlocutor. Please, Alessandro, the floor is yours.
 
>> Alessandro Mantelero: Thank you for this kind introduction. In order to explore this topic, I would like to start by reading a few lines from a book. “Little Rock: a migrant data bank is being set up here. The system, designed to help the rapid placement of children in school, uses a toll-free line; a school official can call the Little Rock center free of charge and gather the school records of the child. With this system, the child can be placed immediately,” according to John Miller, the director of the data bank of migrant children. This is a short story told in a book, about what happened in August 1970. The book is The Assault on Privacy, by Arthur Miller, written in 1971.
 
Miller said in the introduction to the book that many people voice concern that the computer, with its insatiable appetite for information, its image of infallibility, and its inability to forget anything stored in it, may become the heart of a surveillance system that will turn society into a transparent world in which our homes, our finances, and our associations will be bared to a wide range of casual observers, including the curious, the malicious, and commercial interests. So after 50 years, we are here discussing again the risks and the impact of technology, and what we read in the book about what happened in 1970 could be the description of something that happens nowadays with migrant children in our countries.
 
But what is the distinction? The distinction is in the process. When this concern arose, when the book on the assault on privacy raised the question, there was a reaction. The reaction was basically in the field of regulation: to set some red lines, some principles and paradigms for data processing. This was the reaction – a legal framework that made it possible to share data and create the digital economy, and also to safeguard individual rights and freedoms. This is clear in one of the milestones of this process, Convention 108, adopted by the Council of Europe in 1981, which is the only legal framework on data protection that right now exists at the international level. So in this sense, we can say that when there is an issue in terms of risk, in terms of potential negative impact on individual rights, we need a legal framework, a regulation, in order to tackle the potential risk. But what is the change introduced by AI?
 
The change is due to the fact that AI adds a sort of new layer, because AI is not only about data. With AI, we can do a lot of different things. We mainly use AI to predict something, to analyze society, to shape society and, to a certain extent, also to design the future path of our society.
 
So in this sense, there are a lot of benefits in using AI: better healthcare, more efficient transportation, better education, et cetera. But of course, there are also some concerns. If we consider credit scoring or, as mentioned by Patrick before, content moderation, recruitment services, health prediction, predictive policing – all of this can also have a dark side, a potential misuse or use that might negatively impact individual rights and freedoms. In this sense, the focus on data is no longer enough to address all of the issues. Of course, data protection is important and is considered an enabling right. What was important in the 70s, when data protection emerged, was the concern described in the book we read before: social surveillance by private and public entities. But nowadays the impact is a multiple impact that affects different kinds of rights and freedoms. You can shape content, and so limit the freedom of speech. You can limit access to services. You can impact the freedom of business. You can impact the future career and future life of individuals. In this sense it is important, like in the past, to set some rules in order to use AI for the benefit of human beings. This is the work carried out by the CAHAI, presented by Patrick, and the same is being done at EU level with the proposed regulation. They come from different angles, because the Council of Europe is focused on human rights, democracy and the rule of law, and we have to recognize that the proposal from the EU is more focused on product safety, although it provides an important reference to the protection of fundamental rights. But in both these initiatives it is clear that they intend to go beyond data protection and to consider the potential consequences and negative impact of the use of AI. Of course, as usual, we as regulators, as legal scholars, do not focus on the benefits. If there are only benefits, there is no need for lawyers and rules; if everything goes well, no problem. We’re concerned about risk. In this sense, the idea of risk represents the common core of both data protection and AI regulation. As mentioned, risk was at the origin of data protection regulation: the concern about the potential negative use of data for discrimination or for social control. And now, with AI, we have a broader variety, if you want, of potential risks related to the use of AI. And so, as in data protection, we have to build an efficient system to monitor, mitigate and control risk, in order to have an AI that is developed for humans and not against them.
 
In that sense, the common core is represented by risk management. It is not by chance that risk assessment figures in both documents, in the proposal from the CAHAI – it is in the mandate of the CAHAI – and in the EU proposal. This is the way we manage this risk. There is debate about risk, about risk assessment and risk management, but there is not yet an effective model, a methodology, to assess the risk. In data protection we have the data protection impact assessment and the longest experience with it, but we cannot simply replicate the data protection model in the broader context of the human rights impacted by AI. At the same time, traditional human rights impact assessments were created for large-scale systems and are not the same for AI. So we have to invest more in order to figure out concrete models, concrete applications of risk assessment, human rights impact assessment and human rights due diligence. Academia is working on that. In the corner on the right you see a recently published article, published yesterday. It is the first attempt to define a concrete model of human rights impact assessment for AI, based on the decisions of the data protection authorities in Europe. The link with the experience of the data protection authorities in managing data-intensive systems and the related risks is used to build a model that is evidence-based, that fixes some measurable goals in order to set the measurement of the risk, and that gives substance to what the CAHAI and EU documents say about the levels of risk and the related safeguards we have to provide when the risk is high or medium, et cetera. So I think, to conclude, that this is the goal of the future regulation: not to repeat what already exists in terms of human rights and fundamental freedoms, but to create concrete solutions to implement them in the development of AI.
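To make this concrete: below is a minimal sketch of the kind of measurable risk scoring such a model needs, assuming a simple likelihood–severity scale of the sort used in impact assessment practice. The function name, scales and thresholds are illustrative assumptions, not the methodology of the article itself.

<syntaxhighlight lang="python">
# Illustrative only: score an AI system's impact on one right by
# likelihood and severity (each 1-4), and map the product to a risk
# level that would trigger the corresponding safeguards.
def risk_level(likelihood: int, severity: int) -> str:
    product = likelihood * severity
    if product <= 2:
        return "low"
    if product <= 6:
        return "medium"
    if product <= 9:
        return "high"
    return "very high"

# e.g. a credit-scoring system's impact on non-discrimination
print(risk_level(likelihood=3, severity=4))  # -> "very high"
</syntaxhighlight>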
 
And I think that both the European Union and the Council of Europe are working on that. I think it is very important that all the actors – academia, the companies, the assessors and civil society – will constitute the cornerstone of the future framework. Thank you.
 
>> MODERATOR: Thank you so much, Alessandro; that was concise and yet comprehensive. Now that we have heard about the international level of regulation and how that is proceeding, we will move to the next level of regulation. We will explore what is being done by states, who are probably not sitting idly by and waiting for international regulations while challenges and risks mount. So we will be talking to Emanuela Girardi, who will present the Italian example of regulation. Emanuela is a member of the high-level expert group [audio skipping] the Italian national AI strategy. And she’s the founder of Pop AI, which creates awareness of AI among companies and the general public. Emanuela, the floor is yours. Thanks.
 
We can’t –
 
>> Emanuela Girardi: Can you hear me?
 
>> MODERATOR: Great. We can hear you. Thank you.
 
>> Emanuela Girardi: Thank you – I couldn’t unmute myself. Thank you, good afternoon everybody. I was thinking about the great presentation Alessandro made; I took notes, and there were some very interesting points.
 
We heard about what we are actually doing in Europe for data protection and for AI strategies, and about promoting AI technologies in our society. I would like to give you some more information about what we are doing and trying to do in Italy. But first I would like to start from the European point of view. Like Alessandro said, and as the previous presentations from Roberto and Patrick also showed, Europe started developing its AI strategy later than China and the U.S.
 
This gave us in Europe the opportunity to develop a vision based on European values, promoting a European ecosystem where AI technologies and AI systems, as we just heard from Alessandro, are developed in a safe and secure way and are used to improve European citizens’ lives.
 
In April 2018, the European Member States signed the Declaration of Cooperation on Artificial Intelligence, identifying three main areas of cooperation. The first was promoting investment in research and innovation in AI, to boost the adoption of AI by European companies. The second was to prepare for the socioeconomic changes that AI technologies bring – we heard from Patrick about the many changes that AI technologies are bringing to our society – focusing in particular on updating the education and training systems.
 
The last one, which we heard about from Alessandro, was to ensure a legal, ethical and regulatory framework aligned with European values and with fundamental rights.
 
Then in December 2018 the European Commission presented the Coordinated Plan on AI and asked each Member State to develop a national AI strategy. The Italian Ministry of Economic Development appointed a group of AI experts from industry, academia and civil society – as Patrick said, this multistakeholder approach is very important when we are dealing with new technologies – and asked them to develop the Italian AI strategy, basically according to the three main priorities set by the European Commission that I just mentioned.
 
We started working at the beginning of 2019, and the final strategy was published in July 2020. The starting point of our strategy is very important: AI technologies alone are not enough. We need to develop an entire technological ecosystem that includes connectivity, blockchain, data and data infrastructure, supercomputers to work with the data, and the relevant competences and skills.
 
From this starting point, we built the development of AI technologies and policies on three pillars. The first pillar is AI for the human being. The second one is AI for a trustworthy and productive digital ecosystem. And the third one is AI for sustainable development.
 
The first pillar, AI for the human being, is perfectly aligned with the European strategy. We need to promote the adoption of AI technologies to increase the welfare of the entire population. That means educating users, consumers and workers, and providing the necessary skills to use AI technologies. This reflects the dual nature of AI technologies: on one hand, we need to enable people to enjoy the great benefits of AI; on the other hand, we need to enable them to manage the risks – the dark side – so that they can actively participate in the digital society.
 
We propose that computational thinking be a subject in all schools, from primary school to university. Then we propose an approach of continuous learning for all employees, because they need to dedicate some of their working time to learning new AI and digital skills. This is important: we always hear that AI technologies will destroy jobs, but in the end most research says they will create new jobs.
 
The problem is that today we don’t have the digital skills necessary to fill those jobs. So it is important to develop educational programs for schools and also for employees, to help them learn the digital skills necessary for the future. This is the first pillar. The second pillar is AI for a trustworthy and productive digital ecosystem. This one is also aligned with the European strategy, which is to build an ecosystem of trust and excellence, as well described in the European white paper on artificial intelligence. The idea is to create a network of excellence involving universities, research centers, companies, public administration and citizens. In particular, the Italian strategy highlights the need to develop a data economy and to support small and medium enterprises in collecting, storing and sharing data in a safe and competitive way.
 
What we did in the Italian strategy was to identify six sectors on which to focus investment in AI and in knowledge transfer from the lab to the market, in line with the updated Coordinated Plan released a couple of months ago. The six sectors are IoT, manufacturing and robotics; services; energy; transportation and aerospace; public administration; and, the last one, the digital humanities, which are very important for the Italian ecosystem.
 
So this is the second pillar. The third pillar, which I think is the most important and also the most innovative one, is AI for sustainable development. The Italian vision starts from the European vision, which is human-centric, but we need a change in the paradigm: we need to move from a human-centered approach to a planet-centered approach and use AI technologies to reach the Sustainable Development Goals of the United Nations 2030 Agenda.
 
We do believe that with the support of AI technologies we can really realize the goal of leaving no one behind, as we also heard from Patrick and Roberto. So, starting from these three pillars, we focused on how to bring AI technologies to Italian companies.
 
Italy presents a sort of dual situation. We have excellence in AI in the academic world – Italian researchers are among the best in the world – and we have successful, innovative AI companies. Yet the average level of adoption of AI technologies is still quite low.
 
If you consider the data published in a recent study by the Politecnico di Milano, maybe 40% of companies declare they are planning to adopt AI. But if you look at the total investment in the AI market, it is very small: in 2020 it was only about 300 million euros, and most investments were only for projects in data processing, chatbots and virtual assistants. The main challenge here is how to bring AI technologies to the small and micro businesses which represent the largest part of the Italian business system and which currently don’t have access to the necessary expertise to start using these innovative technologies.
 
So in the Italian national AI strategy we tried to find a solution for developing this lab-to-market strategy. We proposed the creation of an Italian institute for artificial intelligence, which needed to be near the companies: proximity is the only way to reach small and micro businesses. The Italian institute was conceived with two parts, one focused on research, the other on knowledge transfer.
 
Working together with companies, we think we could really bring AI technologies and skills to the industrial system. Another proposal that we made, and that is included in the Coordinated Plan, is the development of regulatory sandboxes. This has been mentioned before; it is really important because it gives companies the opportunity to test, develop and bring to market new AI systems within a safe regulatory framework.
 
But we also made some quite operational proposals to support companies in embracing the data economy. We proposed a new profession, which we call SID, the data intermediary society: a sort of certified AI consultant to support small and micro businesses in understanding AI and starting to develop small projects. At the same time – remember the dark side of AI – they have to raise awareness about the risks of sharing data, and teach the small companies how to share it in a safe way without giving away the company’s advantages.
 
For instance, think about manufacturing companies and the predictive maintenance services that today they buy together with the machine. We really have to regulate very well which data the supplier of the industrial machine can collect and for which purposes the data can be used. This is to protect the small companies and micro businesses, so that they don’t give away their company secrets and value.
 
We also shared formats for data sharing agreements and some further proposals. The best way for small and micro businesses is to create consortia of companies in the same sector or supply chain and develop common data spaces, for example in food, automotive or information. In this way you can create a considerable amount of data, important for AI analysis, but also work together to develop new services and new products and really bring the benefits of AI technologies to the entire supply chain.
 
We developed several other proposals, but I will leave space open for questions. I really believe that supporting Italian companies, and also European companies, in developing AI projects is the key to improving the competitiveness of small and micro businesses, and it should be a top priority for the Italian Government but also for European governments.
 
>> MODERATOR: Thank you so much, Emanuela. This was really rich. It seems the Italian Government is thinking ahead very much, helping small and micro businesses in many ways. Indeed, we will continue with the response of the industry to these initiatives, regulatory initiatives in particular.
 
So how do businesses see the line between data-driven innovation on one hand and effective privacy protection on the other? We have with us our third expert, who comes from the private sector: Jean-Christophe Le Toquin, who will share with us his rich experience. He leads stakeholder relations at Videntifier Technologies, which identifies visual content at scale for law enforcement and for copyright protection. He also chairs the French hotline against illegal content, and he is the President of INHOPE, the global federation of hotlines against child sexual abuse material.
 
So I would like, without further ado, to give the floor to Jean-Christophe.
 
>> Jean-Christophe Le Toquin: Thank you, Urska, and congratulations for pronouncing the name of Videntifier Technologies correctly. No one succeeds in this.
 
I am an independent consultant, and I have worked for eight years with Videntifier Technologies, an Icelandic company that works on technology to identify visual content at the scale of the Internet. To complement the two past presentations, I want to tell you about this report that you see on the screen, published yesterday by the Council of Europe, to which I contributed as an independent expert, drawing also on the knowledge I have collected from my different clients, including Videntifier. I will send you the link in the chat afterwards.
 
This report is a great example of cross-disciplinary work between lawyers, privacy experts, child safety advocates, and cybercrime experts – all communities that are really active in the field of the Council of Europe. What we did together was to provide this diagram that you see on the screen – the image should not appear; this is a mistake I made. What I will do is give you an example of how artificial intelligence may not be necessary, which may be counterintuitive in such a context. But the situation is that artificial intelligence is still an exploratory technology – as Emanuela said, this is a small market – and I totally agree with Alessandro that we need to look at this field with really new eyes, and to understand that this may not be the easiest type of activity to regulate. What you see on the deck, at the bottom: blue, orange, red, pink. Basically, what I am going to briefly talk to you about is something which is a big issue for regulators and society: how do we detect and remove illegal content? This can be really terrible stuff, like videos of terrorist attacks. It can also be child sexual abuse material – images and videos again. And it can also be more regular copyrighted content, catalogs of movies or TV shows that you don’t want to be available on platforms or the dark web.
 
So how to deal with this? One is tempted to think: AI is the solution, it is so powerful, it can do anything. But what we found is that you can actually use non-AI technologies, which can be much more reliable and raise fewer concerns.
 
So, to go quickly, let me give you an example with three technologies: file hashing, computer vision, and artificial intelligence as the third one. How do they differ?
 
The first is file hashing, which is mature. Say you have this picture of Obama that you want to detect and find on platforms or the dark web. You will produce a hash, a signature, and you will look for that signature on the web: you can scan all the images you can find, turn each into a signature, and see whether there is a match. MD5 and SHA-1 are the most popular algorithms, heavily used by law enforcement across the globe. It is very efficient, but if you change one pixel in the image, you will no longer find that image; it does a poor job at detecting similar images. This is where you would use another family of technologies, called computer vision. You don’t have to train the algorithm like artificial intelligence. Some of these are based on global descriptors: they take the whole picture and look for similarities. So if it is the same image, slightly cropped or reduced, you can find it. It is much more effective. Today, the Googles and Facebooks of the world use this to detect, find and remove known illegal images and videos, so it is really a great improvement. The most famous algorithm is called PhotoDNA and was developed by Microsoft about 10 years ago.
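A minimal sketch of the exact-match idea described above, using Python’s standard hashlib (the byte strings are placeholders, not real image data): any change to the file, even one pixel, produces completely different signatures.

<syntaxhighlight lang="python">
import hashlib

def signatures(data: bytes) -> tuple[str, str]:
    # Exact-match signatures of the file bytes.
    return hashlib.md5(data).hexdigest(), hashlib.sha1(data).hexdigest()

original = b"\x89PNG...image bytes..."   # placeholder for a real file
altered = bytearray(original)
altered[10] ^= 1                         # flip one bit ("one pixel")

print(signatures(original))
print(signatures(bytes(altered)))        # entirely different digests
</syntaxhighlight>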
 
The limitation of this technology is that it does not cope well when the image is cropped to one part, flipped, or rotated.
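PhotoDNA itself is proprietary, but the global-descriptor idea can be illustrated with a much simpler stand-in, the 8×8 average hash: near-duplicate images land only a few bits apart, so matching becomes a small Hamming distance rather than an exact match.

<syntaxhighlight lang="python">
from PIL import Image  # assumes the Pillow library

def average_hash(path: str) -> int:
    # Global descriptor: shrink to 8x8 grayscale, threshold each pixel
    # against the mean, pack the 64 bits into one integer.
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def distance(a: int, b: int) -> int:
    # Re-encoded or slightly resized copies stay within a few bits;
    # unrelated images differ in roughly half of the 64 bits.
    return bin(a ^ b).count("1")
</syntaxhighlight>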
 
So then you use the second family of computer vision, which is local descriptors. What this does is take individual details in the image and find similarities. So with just a fraction of an image, you can look on the web for similar images with a similar background, which is typically helpful for law enforcement. You can find the same room taken from different angles, like you see in this picture of a young guy in front of a door. You can find an object. You can find, as shown here, inserted images. What we want to show here is that these technologies are in the public domain today; as a regulator you can see how they work, and you can basically, at the scale of the Internet, identify, find, and remove all the known illegal images of the world. If you are a regulator, you should typically be interested in this technology: it scales to the Internet, it does not take too many resources, it is transparent, and you know there is no bias in the technology.
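A sketch of the local-descriptor approach, using OpenCV’s freely available ORB features as a stand-in for the proprietary systems in this family; the match threshold is an arbitrary illustrative value.

<syntaxhighlight lang="python">
import cv2  # assumes the opencv-python package

def shared_detail_count(path_a: str, path_b: str) -> int:
    # Detect keypoints (corners, textures, objects) in each image and
    # match them, so a crop, an inserted fragment, or a shared
    # background can still be found.
    orb = cv2.ORB_create()
    _, desc_a = orb.detectAndCompute(cv2.imread(path_a, cv2.IMREAD_GRAYSCALE), None)
    _, desc_b = orb.detectAndCompute(cv2.imread(path_b, cv2.IMREAD_GRAYSCALE), None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    return sum(1 for m in matches if m.distance < 40)  # many = shared content
</syntaxhighlight>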
 
Of course, if you want to deal with unknown content – if you say, I want to predict that this never-seen image is actually a terrorist attack, or the scene of a crime, or the sexual abuse of a child, and before even a human eye has seen it I want to be notified and alerted and able to call the police – well, in this case, yes, for sure, you need to use artificial intelligence, which is a very powerful technology. Here you have two families: one is machine learning, the other one is deep learning. The difference is, to make it simple, a question of size. If you want to train your algorithm to recognize a cat in a picture, you need hundreds of thousands of images. But if you want to go into more complex scenarios, you are not processing hundreds of thousands or millions of data points, but billions – and this is deep learning. It is definitely not available to a small company; you need to be a big player. And what is important to understand is that if you are in deep learning technology, you are in a very exploratory phase, there are tons of bias, and the best or the worst of the story is that no one can tell you where the bias is.
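The supervised-learning workflow he describes – train on labelled examples, then classify never-seen inputs – in a deliberately toy form; random vectors stand in for image features, and any real detector would need the huge labelled datasets discussed above.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression  # assumes scikit-learn

rng = np.random.default_rng(0)
features = rng.normal(size=(100_000, 64))      # stand-in image features
labels = (features[:, 0] > 0).astype(int)      # stand-in "cat"/"not cat"

clf = LogisticRegression(max_iter=1000).fit(features, labels)

new_image = rng.normal(size=(1, 64))           # a never-seen input
print(clf.predict(new_image))                  # 0 or 1, with no account
# of why - the opacity and bias problem discussed just below
</syntaxhighlight>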
 
For example, you say to the computer: look, I want to find any white man wearing a hat. Okay. It will scan the web and bring you back hundreds of thousands of images of white men with hats. But for some reason, maybe you will also get some very strange pictures of a cat, or of a jihadi, or whatever. And there is no way in the technology to understand why the algorithm is pulling such a bizarre result. This is the mystery of AI, and why it is scary from a human rights perspective: if you trust the machine too much, it can make mistakes you cannot anticipate. That is why we have all this discussion in this group. So this diagram is, I have been told, helpful to understand the different technologies and what you can get with maybe less fancy technologies, such as computer vision, but with more awareness. This was a kind of testimony; it is explained in more detail in the report published yesterday by the Council of Europe, for which I will share the link. Great work by the Council of Europe, and hopefully this will help bring the discussion to the next level. Thank you to Urska for giving us the opportunity to present today.
 
>> MODERATOR: Thank you very much, Jean-Christophe. This is the first time I have heard the differences between these technologies explained. Jean-Christophe will have to leave us in a couple of minutes, but we have been told by the Belgrade studio that we can perhaps take a few more minutes, so we have an opportunity for some questions. I would like to start, because I don’t see a question in the chat – although I would ask everyone who has a question to please either post it in the chat or just raise your hand; there is a feature for that. Jean-Christophe, because you are leaving us, one question about the businesses. Do you think that the industry, the private sector, feels this growing responsibility for keeping the space that they’re co-creating for individuals – for our interaction, e-commerce, et cetera – clean and safe, free of crime? Are they thinking about technical solutions that can underpin their endeavors to keep their businesses clean? How does that stand right now?
 
>> Jean-Christophe Le Toquin: I chair the French hotline, and hotlines are supported by the tech industry – Google, Facebook, Snapchat, TikTok, what have you. Honestly, I can tell you: yes, it is clear that over the last, what, five years, I would say, all the work done by the European Union has played a role in making companies more focused on these issues. The controversies on how to control fake news, illegal content, terrorist videos – all of this has gradually made the companies pay more attention. In my opinion, the work of the EU must continue, and the EU can learn from the U.S. what they’re doing with illegal content. We had the disagreement on the e-privacy regulation over the last few months; that is a very healthy situation. So I do see that the bigger companies, the big platforms, are feeling more and more pressure from the regulators and civil society for a better response. And they’re more humble today, I feel; the people I work with are more humble than what people are used to seeing.
 
Now, on the tech provider side – the SMEs, like my clients – I think they have to do a better job. Some of them are doing it, like Videntifier Technologies, if I may be biased. They have to be a bit more transparent about the technology they use and why they use it.
 
To the companies that use a combination of computer vision and artificial intelligence to secure people and companies, my advice is: look, be more transparent about what in your solution is computer vision and what is artificial intelligence. And if you use artificial intelligence, be more transparent about the dataset. Where did you get the data? What is the volume you are working on? If you are not transparent, people will not trust you. So I see that these tech companies are behind the curve; they should be more transparent. They have to understand that being transparent is okay and will actually generate more trust and more business.
 
>> MODERATOR: Uh-huh. Thanks so much, Jean-Christophe. That is great. Thank you very much.
 
I see that we’re approaching our cutoff time. But not for all of us. Only for you.
 
So perhaps – I can’t see any question in the chat, so I’ll just continue with a few questions to our experts.
 
There are a few things that we haven’t really had the time to broach in detail. One question for Emanuela, going back a little bit to what the companies are doing, what they’re supposed to be doing – this time not so much from the point of view of innovation, but of what it creates for them.
 
You have talked about the Italian Government and all actors embracing the benefits of AI. I was wondering: what about the privacy of individuals, of the users of the services? Because GDPR is directly applicable, but as we have all said, we haven’t really drafted data protection instruments with AI in mind. So is there some kind of guidance for users or manufacturers on how to deal with AI technologies in a privacy-friendly, compliant way? Could you perhaps expand on that? Sorry, you have to unmute.
 
>> Emanuela Girardi: Okay, can you hear me now? I couldn’t unmute myself again, sorry. Okay. So when it comes to privacy, Italian – or maybe I should say European – people are sensitive. For instance, think about how during the pandemic the Italian Government, like some other European governments, developed a contact tracing app; the Italian one was called Immuni. It basically traced our contacts to inform us if we had been in contact with a COVID-19-positive person. In Italy there were lots of discussions about the right of the state to control citizens, and about centralized versus decentralized models to store the data. Which right was more important, the right to privacy or the right to health?
 
So I think, as we just also heard from Jean-Christophe, the main problem is that people don’t really know which data they’re sharing every day on the Internet platforms and what can be done with their personal data. The data we give to the big platforms are much more than the ones we are aware of sharing, and we allow these platforms to influence us, to influence our behavior – or, I could say, to manipulate our online and offline behavior.
 
I believe that the way out of this situation is really to invest in educational programs – to make people really understand what AI technologies are, their benefits, their risks, the dark side Alessandro was mentioning before, and how to manage them and use them in a safe way. For this, I think not only the Italian program is important, but also the European one, the Digital Compass for the digital decade. Its first point is that in this digital decade we need to reach a digitally skilled population and highly skilled digital professionals.
 
The other goal is to develop a new sustainable economic model that takes into account social and environmental benefits. I think this is the key: to invest in education and to bring these digital skills to the whole population. It is the only way we can learn how to use these technologies, how to manage them and control the risks, and also be more conscious and aware when it comes to sharing our data – what it means, and whether it is important to share them, as for instance with the Immuni app, because it was helping us through the pandemic, or whether it is just a matter of using a social platform.
 
>> MODERATOR: Thank you.
 
Thank you so much, Emanuela. If I may, just one brief question, because we’re talking about small and medium-sized enterprises and how Italy is trying to help them. How is that help to be regarded from the perspective of the global economy? Might it perhaps be seen as unduly protective? Is that a possibility? And how do these measures apply in relation to international data flows? If you could just expand a little bit on that – and then I have a few questions for Alessandro as well. Thanks.
 
>> Emanuela Girardi: I will try to be brief. To start from the strategy: on the 21st of June, the European Commission signed memoranda of understanding with several partnerships. One was Adra, the AI, Data and Robotics partnership, which brings together AI and robotics in a European ecosystem with European values – as we said, a trustworthy, human-centric AI – and brings AI into European society. This is the way to go to follow a European strategy.
 
If we refer to international data flows, there is the G20 under the Italian presidency, with the Digital Economy Ministers’ meeting taking place. Its innovation track is really focusing on the digital industry, on how to connect suppliers across countries, building a truly global digital supply chain. In particular, it is focused on data governance, to promote the data economy while at the same time protecting the SMEs. I think this is really the key: to try to balance these two needs – on one hand, to promote the data economy; on the other hand, to protect our companies. I think this is the most important thing. Referring to the new regulation – the European one, the proposal presented a couple of months ago by the European Commission – it is the first in the world, and I think it is a very important document, because it will start the dialogue and the discussion about AI regulation and the need to regulate AI technologies globally.
 
At the moment, though – I mean, as Jean-Christophe said, it is important to have transparency. On the other hand, what is included in this new AI regulation proposal is a burden for small and micro businesses: for a small company it is difficult to do all the conformity assessments, especially for high-risk AI systems. It will take a couple of years before the regulation is in place and applied.
I think it is a fantastic starting point. There will be a multistakeholder discussion, between the Commission, the Council of Europe and others, and I think the final result will for sure protect the rights of European citizens and companies.
 
>> MODERATOR: Thank you so much, Emanuela. That’s great. We’ll just check the chat. Right. Okay. Just one question, perhaps, quickly to Alessandro, also.
 
Now, coming back, sort of to wrap up where we began, to data protection: in your presentation you mentioned the right to privacy and that it is duly considered by these regulatory initiatives and processes on AI. But how, if at all, will individuals, users, data subjects be protected against the possible risks and negative impacts of AI on their privacy? Could you expand a little bit on that? Thanks.
 
>> Alessandro Mantelero: Yes. This is something addressed in the CAHAI, of which I was part, and at the EU level. We have Convention 108+, the modernised version, for the Council of Europe, and the GDPR for the European Union. So we can say that in the field of personal data processing there exists a clear framework. Of course there are challenges, as we know from the CAHAI, and other issues that we don’t have time to discuss.
 
But the idea at the CAHAI and EU level is that we cannot reconsider data protection, because it is fixed and works well. We have to address the issues not fully addressed by data protection.
 
I think that, probably also from an academic perspective, the main mistake – if I can use this word – is to imagine that the answer to AI is a sort of expansion of data protection, a sort of extended notion of data protection. It doesn’t fit well with what we need.
 
Because it is clear, for instance when we talk about human rights, democracy, the rule of law and the challenges AI poses to them, that data protection cannot address all these kinds of issues.
 
So I want to say that for data protection there is an existing system that more or less provides a quite adequate level of protection in Europe; outside of Europe it is a bit different, but fortunately the model of Convention 108 is broadly considered as a reference. There are several countries that are adopting data protection rules based on this model. The same goes for the EU model, although the process is longer because it is not easily achieved.
 
For data protection, we have to say to citizens that the situation is not so critical. And we have to turn our gaze to something new that addresses the impacts not only on data protection and privacy but on the many other fundamental rights that cannot be properly addressed by data protection tools alone. I don’t know if I have fully addressed the question, but these are the regulatory issues that we have to deal with.
 
>> MODERATOR: Yes – no, thanks a lot, Alessandro. I think we would never be able to fully address all of the issues, but I also think we have covered a lot of ground today. Thank you very much. We have to wrap up the session – unfortunately, or not unfortunately, because we still have to hear from the reporter of the session, so I am looking forward to that. I would like to thank the experts for the inspiring presentations and discussion; we have heard a lot of valuable input. I hope the contributions have indicated some ways forward to turn users from data objects into data subjects and give us all a little bit more meaningful control over our personal data.
 
So I will go back to the studio and give the floor back to Jelena. Thank you for hosting us on your platform, Jelena.
 
>> Alessandro Mantelero: Thank you, bye.
 
>> STUDIO: We will hear from the reporter of the session.
 
>> My name is Ekaterina; I’m with the Geneva Internet Platform, and we’re providing key messages and session reports from all workshops. I will now present the messages from this session, and the report will be posted on the Digital Watch observatory. A reminder that the messages will be available for additional comments; EuroDIG will provide more detail on that. Without further ado, message 1 is: AI has added a new layer to issues of privacy, data protection and the protection of basic human rights and freedoms. In this sense, the focus on data is no longer enough in order to address all of the issues. The proposed regulations therefore intend to go beyond data protection and consider the potential consequences and negative impact of the use of AI.
 
If there is any strong objection to this message, please write in the chat. Otherwise, we will consider there is a rough consensus on the message and move to the next one.
 
All right, moving to message number 2: the development of AI requires a paradigm shift. There is a need to move from a human-centered approach to a planet-centered approach and use AI technologies to achieve the 2030 Agenda on Sustainable Development.
 
And the final message from the session: the main problem is that users do not usually know which data they share on the Internet on a daily basis and what can be done with their personal data. Therefore, investment in educational programs and raising awareness is key in order to help users understand AI technologies, their benefits as well as their risks.
 
That is all from my end. Thank you very much.
 
>> STUDIO: Thank you, Ekaterina. Thank you, everyone, for the great session. We will now take a 4 to 5-minute break. When we come back, we will continue with the workshop on human versus algorithmic bias. The studio host will be my colleague. You can choose to stay here with us, or you can close the Zoom room and go back to Gather.Town. After the break, you can choose to join us again or join some other studio. It should be easy to navigate around the map, but if you have any trouble, you can go to the help desk and they will be able to assist you.
 
See you in 4 to 5 minutes.


[[Category:2021]][[Category:Sessions 2021]][[Category:Sessions]][[Category:Human rights 2021]]


You are invited to become a member of the session Org Team! By joining an Org Team, you agree to your name and affiliation being published on the respective wiki page of the session for transparency. Please subscribe to the mailing list to join the Org Team and answer the email that will be sent to you requesting your subscription confirmation.

Session teaser

AI can bring enormous advantages but presents some challenges as well, in particular in the field of human rights. As AI is often used to process personal data, its compliance with existing data protection legal frameworks needs to be examined. GDPR was not designed with AI in mind. Whoever wants to train an AI system in the EU faces tough challenges when they want to use training data that is not fully anonymized. This is particularly true when medical AI applications are developed. At the same time, GDPR is still poorly enforced towards big tech companies. GDPR also provides little regulatory guidance regarding the use of AI techniques that can derive very sensitive information, like sexual preferences, from innocent-looking data. Although GDPR imposed a heavy compliance burden on companies, meaningful control over personal data by data subjects is still missing. It might seem that GDPR mostly gifted us with annoying cookie walls. Can one regulation in one region of the world ensure appropriate protection to all? Aren’t there already existing or emerging international laws and solutions? Can’t we regulate AI to empower data subjects? For example, some automated ruleset could free us from clicking on the same cookie consent settings again and again.

The EU and the Council of Europe announced plans to regulate AI. However, will that fix these issues with GDPR that have been amplified by AI technology? What is the way forward to turn users from data objects to data subjects being in meaningful control of their personal data and privacy?
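To make the idea of an automated consent ruleset concrete, here is a purely hypothetical sketch; no standard with these fields exists, and all names are invented for illustration. The user agent would answer each site’s consent request from a stored policy instead of showing a banner.

<syntaxhighlight lang="python">
# Hypothetical machine-readable consent policy, applied automatically.
CONSENT_POLICY = {
    "strictly_necessary": True,
    "analytics": False,
    "personalised_ads": False,
}

def answer_consent_request(requested_purposes):
    # Unknown purposes default to refusal.
    return {p: CONSENT_POLICY.get(p, False) for p in requested_purposes}

print(answer_consent_request(["strictly_necessary", "analytics"]))
# {'strictly_necessary': True, 'analytics': False}
</syntaxhighlight>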

Session description


Always use your own words to describe the session. If you decide to quote the words of an external source, give them the due respect and acknowledgement by specifying the source.

Format


Please try out new interactive formats. EuroDIG is about dialogue not about statements, presentations and speeches. Workshops should not be organised as a small plenary.

Further reading

Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example for an external link: Main page of EuroDIG

People


Please provide name and institution for all people you list here.

Focal Point

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Subject Matter Expert (SME) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles

  • Peter Kimpian

Organising Team (Org Team) List Org Team members here as they sign up.

Subject Matter Expert (SME)

  • Jörn Erbguth

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

  • André Melancia
  • Amali De Silva-Mitchell, Dynamic Coalition on Data Driven Health Technologies / Futurist
  • Daniil Golubev
  • Alessandro Mantelero

Key Participants

  • Ms Emanuela Girardi - Founder and President of Pop AI (Popular Artificial Intelligence), member of CLAIRE’s Industry Taskforce and Board Member of Adra
  • Mr Jean-Christophe Le Toquin, Stakeholder relations, Videntifier Technologies ehf.
  • Prof. Avv. Alessandro Mantelero - Politecnico di Torino - Associate Professor

Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder balanced dialogue also considering gender and geographical balance. Please provide short CV’s of the Key Participants involved in your session at the Wiki or link to another source.

Moderator

  • Umek Urska, Council of Europe

The moderator is the facilitator of the session at the event. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide short CV of the moderator of your session at the Wiki or link to another source.

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter

Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

  • AI has added a new layer to issues of privacy, data protection, and other human rights and freedoms. In this sense, the focus on data is no longer enough in order to address all of the issues. The proposed regulations, therefore, intend to go beyond data protection and to consider the potential consequences and negative impacts of the use of AI.
  • The development of AI requires a paradigm shift. There is a need to move from a human-centred approach to a planet-centred approach and use AI technologies to achieve the 2030 Agenda on Sustainable Development.
  • The main problem is that users do not usually know what data they share on the internet on a daily basis and how this data is used. Therefore, investment in educational programs and raising awareness is key to help users understand AI technologies, their benefits as well as their risks.

Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/new-developments-and-prospects-data-protection-regard-ai.

Video record

https://youtu.be/kQEAIhbWHzk?t=17346s

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-482-9835, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> STUDIO: Hello, welcome back to Studio Belgrade. I’m Johanna I will be the studio host for the next session. Before I hand over to the moderator of the workshop, I would like to briefly go over some session rules. The first one is when joining the studio, make sure to display your full name so we have a better view of who is in the room with us. Next, if you have a question during the session, use the Zoom function to raise your hand. As you might have noticed, everybody is muted by default. So for the purpose of presenting your question or comment, you will be unmuted. When speaking, remember to state your name and affiliation. You also have an option to enable your video so we see who you are. But this is optional and totally up to you. You also can post your questions to the chat. For this session, I will be the chat moderator. And from time to time, I will gather the questions and present them to the panelists.

A note that chat will not be stored or published. So don’t be afraid to use it. And lastly, we kindly ask you not to share the Zoom link with anybody. There is still an option to register for the meeting or follow us on the YouTube Live stream.

And with that, my introduction part is done. I will now hand over to Urska Umek that will be moderating the workshop, new developments, and prospects in data protection with regard to AI. Hi, Urska Umek.

>> MODERATOR: Hi, yes, I can hear you well.

>> STUDIO: The floor is yours, I will see you later.

>> MODERATOR: Thank you very much. I will also try to follow the chat a little bit. Good afternoon, I am Urska Umek I will moderate this workshop. I work for the Council of Europe. As we heard in the introduction our head of information society Patrick penning said we work around human rights, democracy, and the rule of law. In this session, which is supposed to be 60 Minutes, but we’re running late, so we’ll try to condense it a little bit. We will be discussing how to develop and use new technologies, especially AI in compliance with data protection rules. As we know today, systems powered by artificial intelligence are creating a shift in how we communicate, how we live, in how we make our decisions. And new technologies create a lot of opportunities or as Roberto said just before, they’re a blessing, but they also present risks. They present risks for values, for the exercise of human rights, for the application of rule of law. And for the functioning of democratic societies, which is why AI are important for data protection. But not the only reason why we work on these issues. The AI powered technologies are often used for protection. And we have instruments that are designed to protect personal data. But the instruments, I’m talking here mainly about the Council of Europe convention 108, with the regard to automatic processing of data. And the European Union and the protection. But they were not designed with protection in mind. They provide little with regard to AI and these techniques can take information from what appears to be mundane dataset. And GDPR imposes quite a heavy burden on companies, but meaningful control over personal data by data subjects is still missing.

And of course, the question here is how to regulate AI in such a way as to empower data subjects and achieve data-protection-friendly AI. So we will talk about these challenges with our three experts: Emanuela Girardi, Jean-Christophe Le Toquin and Alessandro Mantelero. We will start with presentations by the experts in the first part of the session; then we will hopefully have a lively discussion in the second part on what it means to have control over one’s personal data and how to achieve that in the context of AI. You are all warmly invited to share your ideas and ask questions in the chat, or to raise your hand using the Zoom feature. We would now like to give the floor to the three experts. We start with the international perspective on regulation: the EU and the Council of Europe, as we have heard, both have plans to regulate AI.

However, how are they going to address the risks for the protection of personal data, which are being amplified by AI? We will start with Alessandro Mantelero, who I think is best suited to provide reflections and perhaps answers to this question. Alessandro is Associate Professor of Private Law and Technology at the Politecnico di Torino. He is also the Council of Europe’s scientific expert on AI and human rights, serves as an expert for many other international organizations, including the United Nations and the European Commission, and has worked with the Italian Ministry of Justice. I think he is quite the right interlocutor. Please, Alessandro, the floor is yours.

>> Alessandro Mantelero: Thank you for this kind introduction. In order to explore this topic, I would like to start by reading a few lines from a book. “Little Rock. A migrant data bank is being set up here. The system, designed to help the rapid placement of children in school, uses a toll-free line: school officials can call the Little Rock center free of charge and obtain the school and health records of the child. With this system, the child can be placed immediately,” according to John Miller, the director of the data bank for migrant children. This is a short news story told in a book, about what was happening at the end of the 1960s. The book is The Assault on Privacy by Arthur R. Miller, written in 1971.

Miller said in the introduction to the book that many people voice concern that the computer, with its insatiable appetite for information, its image of infallibility and its inability to forget anything stored in it, may become the heart of a surveillance system that will turn society into a transparent world in which our homes, our finances and our associations will be bared to a wide range of casual observers, including the curious, the malicious and commercial interests. So after 50 years, we are here discussing again the risks of technology and the impact of technology, and what we read in that book could be the description of something that happens nowadays with migrant children in our countries.

But what is the difference? The difference is in the process. When this concern arose, when the book on the assault on privacy raised the question, there was a reaction. The reaction was basically in the field of regulation: to set some red lines, some principles and paradigms for data processing. The reaction was to enable the legal benefits of sharing data and creating the digital economy, and also to safeguard individual rights and freedoms. This is clear in one of the milestones of this process, Convention 108, adopted by the Council of Europe in 1981, which is the only legally binding framework on data protection that exists today at the international level. So in this sense, we can say that when there is an issue in terms of risk, in terms of potential negative impact on individual rights, we need a legal framework, a regulation, in order to tackle the potential risk. But what is the change introduced by AI?

The change is due to the fact that AI adds a sort of new layer, because AI is not only about data. With AI, we can do a lot of different things. We mainly use AI to predict something, to analyze society, to shape society and, to a certain extent, also to design the future path of our society.

So in this sense, there are a lot of benefits in using AI: better healthcare, more efficient transportation, better education, et cetera. But of course, there are also some concerns. If we consider credit scoring, or, as mentioned by Patrick before, content moderation, recruitment services, health prediction, predictive policing – all of these can also have a dark side, a potential misuse, or a use that might negatively impact individual rights and freedoms. In this sense, a focus on data is no longer enough to address all of these issues. Of course, data protection is important and is considered an enabling right. What was important in the ’70s, when the concern, as described in the book we read from, was about social surveillance by private and public entities, was data protection. But nowadays the impact is a multiple impact that affects different kinds of rights and freedoms. You can shape content, and so limit freedom of speech. You can limit access to services. You can impact the freedom to conduct business. You can impact the future career and future life of individuals. In this sense, it is important, as in the past, to set some rules in order to use AI for the benefit of human beings. This is the work carried out by the CAHAI, presented by Patrick, and the same is being done at EU level with the proposed regulation. Both initiatives approach the problem, although from different angles: the Council of Europe is focused on human rights, democracy and the rule of law, while we have to recognize that the proposal from the EU is more focused on product safety, although it provides an important reference to the protection of fundamental rights. But in both it is clear that they intend to go beyond data protection and to consider the potential consequences and negative impacts of the use of AI. Of course, as usual, we as regulators, as legal scholars, do not focus on the benefits: if there are only benefits, there is no need for lawyers and rules; if everything goes well, no problem. We are concerned about risk. In this sense, the idea of risk represents the common core of both data protection and AI regulation. As we mentioned, risk was at the origin of data protection regulation: the concern about the potential negative use of data for discrimination or for social control. And now, with AI, we have a broader variety, if you want, of potential risks related to the use of AI. And so, as in data protection, we have to build an efficient system to monitor, mitigate and control risk, in order to have an AI that is developed for humans and not against them.

In that sense, the common core is represented by risk management. It is not by chance that risk assessment appears in both documents, in the proposal from the CAHAI and in the EU proposal, and it is in the mandate of the CAHAI: this is the way we manage this risk. There is debate about risk, about risk assessment and risk management, but there is not yet an effective model, a methodology, to assess the risk. In data protection we have the data protection impact assessment and a long experience with it, but in a sense we cannot simply replicate the data protection model in the broader context of the human rights impact of AI. At the same time, traditional human rights impact assessments were created for large-scale projects, which are not the same as AI. So we have to invest more in order to figure out concrete models, concrete applications of risk assessment, human rights impact assessment and human rights due diligence. Academia is working on that. In the corner on the right you see a recent article, published yesterday. It is a first attempt to define a concrete model of human rights impact assessment for AI, based on decisions of the data protection authorities in Europe. The link between the experience of the data protection authorities in managing data-intensive systems and the related risks is used to build a model that is evidence-based and fixes measurable goals for the measurement of risk, giving substance to what the CAHAI and EU documents say about levels of risk and the related safeguards we have to provide when the risk is high or medium, et cetera. So, to conclude, I think this is the goal of the future regulation: not to repeat what already exists in terms of human rights and fundamental freedoms, but to create concrete solutions to implement them in the development of AI.

And I think that both the European Union and the Council of Europe are working on that. I think it is very important that all the actors – academia, the companies, the assessors and civil society – contribute to what will constitute the cornerstone of the future framework. Thank you.

>> MODERATOR: Thank you so much, Alessandro; that was concise and yet comprehensive. Now that we have heard about the international level of regulation and how that is proceeding, we will move to the next level. We will explore what is being done by states, who are probably not sitting idly by waiting for international regulation while challenges and risks mount. So we will be talking to Emanuela Girardi, who will present the Italian example. Emanuela is a member of the high-level expert group that [audio skipping] the Italian national AI strategy, and she is the founder of Pop AI, which works to create awareness of AI among companies and the general public. Emanuela, the floor is yours. Thanks.

We can’t –

>> Emanuela Girardi: Can you hear me?

>> MODERATOR: Great. We can hear you. Thank you.

>> Emanuela Girardi: Thank you, I couldn’t unmute myself. Thank you, good afternoon everybody. I was thinking about the great presentation Alessandro made; I took notes, and there were some very interesting points.

We heard about what we are actually doing in Europe for data protection and for AI strategies, and for promoting AI technologies in our society. I would like to give you some more information about what we are doing and trying to do in Italy, but I would like first to start from the European point of view. So ... like Alessandro said, and as in the previous presentations from Roberto and Patrick, Europe started later than China and the U.S. in developing an AI strategy.

This gave us in Europe the opportunity to develop a vision based on European values, promoting a European ecosystem where AI technologies and AI systems, as we just heard from Alessandro, are developed in a safe and secure way and are used to improve European citizens’ lives.

In April 2018, the European Member States signed the Declaration of Cooperation on Artificial Intelligence, identifying three main areas of cooperation. The first was promoting investment in research and innovation in AI, to boost the adoption of AI by European companies. The second was to prepare for the socio-economic changes that AI technologies bring – we heard from Patrick about the many changes AI is bringing to our society – in particular by updating the education and training systems.

The last one, which we heard about from Alessandro, was to ensure a legal, ethical and regulatory framework aligned with European values and with fundamental rights.

So in December 2018, the European Commission presented the Coordinated Plan on AI and asked each Member State to develop a national AI strategy. The Italian Ministry of Economic Development appointed a group of AI experts from industry, academia and civil society – as Patrick said, this multistakeholder approach is very important when dealing with new technologies – and asked them to develop the Italian AI strategy, basically according to the three main priorities set by the European Commission that I just mentioned.

So we started working at the beginning of 2019, and the final strategy was published in July 2020. The starting point of our strategy is very important: AI technologies alone are not enough. We need to develop an entire technology ecosystem that includes connectivity, blockchain, data and infrastructure for data, supercomputers to work with the data, and competences – an entire technological ecosystem.

From this starting point, we built the development of AI technologies and policy on three pillars. The first pillar is AI for the human being. The second is AI for a trustworthy and productive digital ecosystem. And the third is AI for sustainable development.

The first pillar, AI for the human being, is perfectly aligned with the European strategy. We need to promote the adoption of AI technologies to increase the welfare of the entire population, which means educating users, consumers and workers and providing them with the necessary skills to use AI technologies. This reflects the dual nature of AI technologies: on one hand, we need to enable people to enjoy the great benefits of AI; on the other, we need to enable them to manage the risks – the dark side – so that they can actively participate in the digital society.

We proposed making computational thinking a subject in all schools, from primary school to university. Then we proposed an approach of continuous learning for all employees, who need to dedicate some of their working time to learning new AI and digital skills. This is important: we always hear that AI technologies will destroy jobs, but in the end most research says they will create new jobs.

The problem is that today we do not have the digital skills necessary to fill those jobs. It is therefore important to develop educational programs for schools and for employees, to help them learn the digital skills necessary for the future. This is the first pillar. The second pillar is AI for a trustworthy and productive digital ecosystem, and it is also aligned with the European strategy, namely with building an ecosystem of trust and excellence, as well described in the European White Paper on Artificial Intelligence. The idea is to create a network of excellence involving universities, research centers, companies, public administration and citizens. In particular, the Italian strategy highlights the need to develop a data economy and to support small and medium enterprises in collecting, storing and sharing data in a safe and competitive way.

What we did in the Italian strategy was to identify six sectors on which to focus investment in AI and knowledge transfer from the lab to the market, in line with the updated Coordinated Plan released a couple of months ago. The six sectors are IoT, manufacturing and robotics; services; energy; transportation and aerospace; public administration; and, last, the digital humanities, which are very important for the Italian ecosystem.

So this is the second pillar. The third pillar, which I think is the most important and also the most innovative one, is AI for sustainable development. The Italian vision starts from the European vision, which is human-centric, with a humanistic approach to AI. But we need a change of paradigm: we need to move from a human-centered approach to a planet-centered approach, and use AI technologies to reach the Sustainable Development Goals of the United Nations 2030 Agenda.

We do believe that with the support of AI technologies we can really achieve the goal of leaving no one behind, as we heard also from Patrick and Roberto. So, starting from these three pillars, we focused on how to bring AI technologies to Italian companies.

Italy presents a sort of dual situation. We have excellence in AI in the academic world – Italian AI researchers are among the best in the world – and we have successful, innovative AI companies. Yet the average level of adoption of AI technologies is still quite low.

Basically, if you consider the data published in a recent study by the Politecnico di Milano, only a minority of companies have adopted AI, and maybe 40% declare they are planning to do it. But if you look at total investment, the AI market is very small: in 2020 it was only about 300 million euros, and most investments were only for projects in data processing, chatbots and virtual assistants. The main challenge here is how to bring AI technologies to small and micro businesses, which represent the largest part of the Italian business system and which currently do not have access to the necessary expertise to start using these innovative technologies.

So in the Italian national AI strategy we tried to find a solution for developing this lab-to-market strategy. We proposed the development of an Italian Institute for Artificial Intelligence, which needed to be near the companies: proximity is the only way to reach small and micro businesses. The institute was designed with two souls: one focused on research, the other on knowledge transfer.

Working together with companies, we think we could really bring AI technologies and skills to the industrial system. Another proposal that we made, and that is included in the Coordinated Plan, is the development of regulatory sandboxes. As has been mentioned before, this is really important because it gives companies the opportunity to test, develop and bring to market new AI systems within a safe regulatory framework.

But we also made some quite operational proposals to support companies in embracing the data economy. We proposed a new profession, which we call SID, the data intermediary society: a sort of certified AI consultant to support small and micro businesses in understanding AI and starting to develop small projects. And remember the dark side of AI: at the same time, these intermediaries would have to raise awareness about the risks of sharing data, and teach small companies how to share it in a safe way without giving away their competitive advantage.

For instance, think of manufacturing companies and the predictive maintenance services that today they buy together with the machine. We really have to regulate very carefully which data the supplier of the industrial machine can collect and for which purposes the data can be used, in order to protect small companies and micro businesses so that they do not give away company secrets and value.

We shared data-sharing agreement templates and some proposals. The best way for small and micro businesses is to create consortia of companies in the same sector or supply chain and develop common data spaces, for example in food or automotive. In this way you can create a considerable amount of data, important for AI analysis, but also work together to develop new services and new products and really bring the benefits of AI technologies to the entire supply chain.

We developed several other proposals, but I will leave space open for questions. I really believe that supporting Italian companies, and also European companies, in developing AI projects is the key to improving the competitiveness of small and micro businesses, and it should be a top priority for the Italian Government and for European governments.

>> MODERATOR: Thank you so much, Emanuela. This was really rich. It seems the Italian Government is thinking ahead very much, helping small and micro businesses in many ways. We will continue with the response of the industry to these initiatives, regulatory initiatives in particular.

So how do businesses see the line between data-driven innovation on one hand and effective privacy protection on the other? With us is our third expert, who comes from the private sector: Jean-Christophe Le Toquin, who will share his rich experience. He leads stakeholder relations at Videntifier Technologies, which identifies visual content at scale for law enforcement and for copyright protection. He also chairs the French hotline against illegal content, and he is the President of INHOPE, the global federation of hotlines against child sexual abuse material.

So I would like, without further ado, to give the floor to Jean-Christophe.

>> Jean-Christophe Le Toquin: Thank you, Urska Umek, and congratulations for pronouncing the name of Videntifier Technologies correctly. No one succeeds at this.

I am an independent consultant, and I have worked for eight years with Videntifier Technologies, an Icelandic company that works on technology to identify visual content at the scale of the Internet. To complement the two past presentations nicely, I want to tell you about this report that you see on the screen, published yesterday by the Council of Europe, to which I contributed as an independent expert, based also on the knowledge I collected from my different clients, including Videntifier. I will send you the link in the chat afterwards.

This report is a great example of cross-disciplinary work between lawyers, privacy experts, child safety advocates and cybercrime experts – all communities that are really active in the field of the Council of Europe. What we did together was to produce this diagram that you see on the screen (the image should not appear there; that is a mistake I made). What I will do is give you an example of how artificial intelligence may not be necessary, which may be counterintuitive in such a context. The situation is that artificial intelligence is still an exploratory technology – as Emanuela said, this is a small market. I totally agree with Alessandro that we need to look at this field with really new eyes, and to understand clearly that this may not be the easiest type of activity to regulate. What you see on the diagram, at the bottom, in blue, orange, red and pink, is what I am going to briefly talk to you about: a big issue for regulators and society, namely how we detect and remove illegal content. It can be really terrible stuff, like terrorist videos and attacks; it can be child sexual abuse material – images and videos again; it can also be more regular copyrighted content, catalogues of movies or TV shows that you do not want to be available on platforms or the dark web.

So how do we deal with this? One is tempted to think: AI is the solution, it is so powerful, it can do anything. What we found is that actually you can use non-AI technologies, which can be much more reliable in the areas where there are concerns.

So, to go quickly, let me give you an example with three technologies: file hashing on the left, computer vision, and artificial intelligence as the third one. How do they differ?

The first is file hashing, which is mature. Say you have this picture of Obama that you want to detect and find on platforms or the dark web. You produce a hash, a signature, and you look for that signature on the web: you can scan all images, turn each one into a signature and see whether there is a match. MD5 and SHA-1 are the most popular algorithms, heavily used by law enforcement across the globe. It is very efficient, but if you change one pixel in the image, you can easily no longer find it: file hashing does a poor job at detecting similar images. This is where you would use another family of technology, called computer vision. You do not have to train the algorithm as with artificial intelligence. Some tools are based on global descriptors: they take the whole picture and look for similarities. So if it is the same image, slightly cropped or reduced, you can find it. It is much more effective: today, the Googles and Facebooks of the world use this to detect, find and remove known illegal images. It is really a great improvement. The most famous algorithm is called PhotoDNA, developed by Microsoft about 10 years ago.
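To make the contrast concrete, here is a minimal sketch in Python of the two ideas. PhotoDNA itself is proprietary, so as a stand-in for a global descriptor the sketch uses a simple average hash; the file names are hypothetical, and hashlib’s SHA-256 stands in for the MD5/SHA-1 hashes mentioned above (any cryptographic hash behaves the same way for this purpose). It assumes the Pillow imaging library is installed.

<syntaxhighlight lang="python">
import hashlib
from PIL import Image  # Pillow, for the perceptual-fingerprint sketch below


def file_hash(path):
    """Cryptographic file hash: identical bytes give an identical digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def average_hash(path, size=8):
    """Minimal 'global descriptor': shrink to an 8x8 grayscale thumbnail and
    set one bit per pixel depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits  # a 64-bit perceptual fingerprint


def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# File names below are illustrative.
# A one-pixel edit changes the file hash completely:
#   file_hash("known_image.jpg") == file_hash("edited_copy.jpg")   -> False
# ...but barely moves the perceptual fingerprint:
#   hamming(average_hash("known_image.jpg"),
#           average_hash("edited_copy.jpg"))                       -> small, e.g. <= 5
</syntaxhighlight>

A single changed pixel yields a completely different SHA-256 digest, while the 64-bit average hash typically differs by only a few bits, which is why perceptual fingerprints survive recompression and small edits where file hashes fail.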

The limitation of this technology is that it does not cope well when you only have part of the image, or when the image is flipped or rotated.

So then you use a second family of computer vision, local descriptors. This takes individual details of the image and finds similarities, so with just a fraction of an image you can look on the web for similar images with a similar background, which is typically helpful for law enforcement. You can find the same room photographed from different angles, like you see in this picture of a young guy in front of a door. You can find an object. You can find, as shown here, inserted images. What we want to show is that these technologies are in the public domain today: as a regulator you can understand how they work, and at the scale of the Internet you can identify, find and remove all the known illegal images in the world. If you are a regulator, you should typically be interested in a technology that scales to the Internet, does not take too many resources, and is transparent: you know there is no bias in the technology.
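As an illustration of local descriptors – a generic sketch, not the proprietary technology Videntifier uses in production – here is an example with OpenCV’s ORB detector, a freely available local-descriptor algorithm. The file names and the match-distance threshold are illustrative assumptions.

<syntaxhighlight lang="python">
import cv2  # OpenCV; ORB is a freely usable local-descriptor algorithm


def local_match_count(fragment_path, candidate_path, max_distance=40):
    """Count local keypoint matches between an image fragment and a candidate."""
    fragment = cv2.imread(fragment_path, cv2.IMREAD_GRAYSCALE)
    candidate = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()  # detects corners and details, not the whole picture
    _, des1 = orb.detectAndCompute(fragment, None)
    _, des2 = orb.detectAndCompute(candidate, None)
    if des1 is None or des2 is None:
        return 0  # no usable detail in one of the images

    # Hamming distance suits ORB's binary descriptors; cross-checking keeps
    # only matches that agree in both directions.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Many close matches suggest the fragment (an object, a door, part of
    # a room) appears somewhere inside the candidate image.
    return sum(1 for m in matches if m.distance < max_distance)


# Hypothetical usage:
# local_match_count("door_fragment.png", "scene_photo.png")  -> e.g. dozens of matches
</syntaxhighlight>

Because each keypoint is matched independently, a crop, an inserted fragment or a partly obscured scene can still produce many agreeing matches, which is exactly the property described above.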

Of course, if you want to deal with the unknown – if you say, I want to predict that this never-seen image is actually a terrorist attack, a scene of a crime or sexual abuse of a child, so that before a human eye has even seen it I am notified, alerted and able to call the police – well, in this case, yes, for sure you need to use artificial intelligence, which is a very powerful technology. You have two families: one is machine learning, the other is deep learning. The difference, to make it simple, is a question of size. If you want to train your algorithm to recognize a cat in a picture, you need hundreds of thousands of images. But if you want to go into more complex scenarios, you are not processing hundreds of thousands or millions of data points, but billions – and this is deep learning. It is definitely not available to a small company; you need to be a big player. And what is important to understand is that with deep learning you are in a very exploratory phase, there are tons of biases, and the best – or the worst – part of the story is that no one can tell you where the bias is.

For example, you say to the computer: look, I want to find any white man wearing a hat. Okay, it will scan the web and bring you back hundreds of thousands of images of white men with hats. But for some reason, you may also get some very strange pictures of a cat or of a jihadi or whatever, and there is no way in the technology to understand why the algorithm is pulling such a bizarre result. This is the mystery of AI, and why it is scary from a human rights perspective: if you trust the machine too much, it can make mistakes you cannot anticipate. That is why we have all these discussions in this group. I have been told this diagram is helpful for understanding the different technologies and what you can get with maybe less fancy technologies, such as computer vision, but with more awareness. This was a kind of testimony; it is explained in more detail in the report published yesterday by the Council of Europe, for which I will share the link. Great work by the Council of Europe, and hopefully this will help bring the discussion to the next level. Thank you to Urska Umek for giving us the opportunity to present today.
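To illustrate the opacity described above – again a generic sketch, not the tooling used by any of the panelists – the snippet below queries a small pretrained deep network with torchvision (assumed installed; the image path is hypothetical). All the model returns is a label and a score; nothing in the API explains why a bizarre result comes back.

<syntaxhighlight lang="python">
import torch
from torchvision import models
from PIL import Image

# A small pretrained ImageNet classifier (torchvision >= 0.13 weights API).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()
preprocess = weights.transforms()  # the resize/normalize pipeline the model expects


def classify(path):
    """Return the model's top label and confidence for one image."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # shape: [1, 3, H, W]
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = int(probs.argmax())
    # A label index and a score are all we get back; the millions of
    # learned weights offer no human-readable reason for the choice.
    return weights.meta["categories"][top], float(probs[top])


# Hypothetical usage:
# classify("man_with_hat.jpg")  -> e.g. ('cowboy hat', 0.73) – or something bizarre
</syntaxhighlight>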

>> MODERATOR: Thank you very much, Jean-Christophe. This is the first time I have heard the differences between these technologies explained. Jean-Christophe will have to leave us in a couple of minutes, but we have been told by the Belgrade studio that we can perhaps take a few more minutes, so we have an opportunity for some questions. I will start, because I do not see a question in the chat – although I would ask everyone who has a question to please either post it in the chat or raise their hand; there is a feature for that. Jean-Christophe, because you are leaving us, one question about the businesses. Do you think that the industry, the private sector, is feeling a growing responsibility for keeping the space they are co-creating for individuals – for our interaction, e-commerce, et cetera – clean and safe, free of crime? Are they thinking about technical solutions that can underpin their endeavours to keep their businesses clean? How does that stand right now?

>> Jean-Christophe Le Toquin: I chair the child hotline and other hotlines, and they are supported by the tech industry: Google, Facebook, Snapchat, TikTok, what have you. Honestly, I can tell you: yes, it is clear that over the last, what, five years I would say, all the work done by the European Union has played a role in making companies more focused on these issues. And also the controversies on how to control fake news, illegal content, terrorist videos – all of this has gradually made the companies pay more attention. In my opinion, the EU must continue this work, and the EU can learn from the U.S. what they are doing with illegal content. We have had the disagreement on the e-Privacy Regulation over the last few months; that is a very healthy situation. So I do see that the bigger companies, the big platforms, are feeling more and more pressure from regulators and civil society for a better response. And the people I work with today are, I feel, more humble than people are used to seeing from this industry.

Now, on the tech provider side – the SMEs, like my clients – I think they have to do a better job. Some of them are doing it, like Videntifier Technologies, if I may be biased. They have to be a bit more transparent about the technology they use and why they use it.

To companies that use a combination of computer vision and artificial intelligence to secure people and companies, my advice is: look, be more transparent about what in your solution is computer vision and what is artificial intelligence. And if you use artificial intelligence, be more transparent about the dataset. Where did you get the data? What is the volume you are working with? If you are not transparent, people will not trust you. So I see that the tech companies are behind the curve and should be more transparent; they have to understand that being transparent is okay and will actually generate more trust and more business.

>> MODERATOR: Uh-huh. Thanks so much, Jean-Christophe Le Toquin. That is great. Thank you very much.

I see that we’re approaching our cutoff time. But not for all of us. Only for you.

So perhaps – I cannot see any question in the chat, so I will just continue with a few questions to our experts.

There are a few things that we have not really had time to broach in detail. One question for Emanuela, going back a little to what the companies are doing and what they are supposed to be doing – this time not so much from the point of view of innovation, but of what it creates for them.

You have talked about the Italian Government and all actors embracing the benefits of AI. I was wondering: what about the privacy of individuals, the users of these services? The GDPR is directly applicable, but as we have all said, we did not really draft data protection instruments with AI in mind. So is there some kind of guidance for users or manufacturers on how to deal with AI technologies in a privacy-friendly, compliant way? Could you perhaps expand on that? Sorry, you have to unmute.

>> Emanuela Girardi: Okay, can you hear me now? Again I couldn’t unmute myself, sorry. Okay. So when it comes to privacy, Italian – or maybe I should say European – people are sensitive. For instance, think about how during the pandemic the Italian Government and some other European governments developed contact tracing apps; the Italian one was called Immuni. It basically traced our contacts to inform us if we had been in contact with a COVID-19-positive person. In Italy there were lots of discussions about the right of the state to control citizens, and about centralized versus decentralized models for storing the data. Which right was more important: the right to privacy or the right to health?

So I think, as we just heard from Jean-Christophe, the main problem is that people do not really know which data they are sharing every day on the Internet platforms, and what can be done with their personal data. The data we give to the big platforms are much more than the ones we are aware of, and we allow these platforms to influence us, to influence our behavior – or, I could say, to manipulate our online and offline behavior.

I believe the key way out of this situation is really to invest in educational programs: to make people really understand what AI technologies are, their benefits and their risks – the dark side Alessandro was mentioning before – and how to manage and use them in a safe way. For this, I think not only the Italian program is important but also the European one, the Digital Compass for the digital decade. Its first point is that in this digital decade we need to reach a digitally skilled population and highly skilled digital professionals.

The other goal is to develop a new sustainable economic model that takes into account social and environmental benefits. I think this is the key: to invest in education and bring these digital skills to the whole population. It is the only way we can learn how to use these technologies, how to manage them, how to control the risks, and also be more conscious and aware when it comes to sharing our data – and of what that means, whether it is important to share them, as for instance with the Immuni app, because it was helping us through the pandemic, or whether it is just a matter of using a social platform.

>> MODERATOR: Thank you.

Thank you so much, Emanuela. If I may, just one brief question, because we are talking about small and medium-sized enterprises and how Italy is trying to help them. How is such help to be regarded from the perspective of the global economy? Might it be seen as unduly protective? Is that a possibility? And how do these measures apply in relation to international data flows? If you could just expand a little bit on that – and then I have a few questions for Alessandro as well. Thanks.

>> Emanuela Girardi: I will try to be brief. To start with the strategy: on the 21st of June, the European Commission signed memoranda of understanding with the European partnerships. One of them was Adra, the AI, Data and Robotics Association, which brings AI and robotics together in a European ecosystem with European values – as we said, a trustworthy, human-centric AI – in order to bring AI into European society. This is the way to go in following a European strategy.

If we refer to international data flows, there is the G20, which is taking place under the Italian presidency, with the meeting of the digital ministers. It is really focusing on the digital industry: how to connect suppliers across countries, building a truly global digital supply chain. In particular, it will focus on data governance, to promote the data economy while at the same time protecting SMEs. I think this is really the key: to try to balance these two needs – on one hand, promoting the data economy; on the other, protecting our companies. As for the new AI regulation, the European proposal presented a couple of months ago by the European Commission is the first in the world, and I think it is a very important document, because it starts the dialogue and the discussion about AI regulation and the need to regulate AI technologies globally.

At the same time – Jean-Christophe said it is important to have transparency – what is included in this new AI regulation proposal is a burden for small and micro businesses. For a small company it is difficult to do all the conformity assessments, especially for high-risk AI systems. It will take a couple of years before the regulation is in place and applied. I think it is a fantastic starting point; there will be multistakeholder discussion between the Commission, the Council of Europe and others, and I think the final result will for sure protect the rights of European citizens and companies.

>> MODERATOR: Thank you so much, Emanuela. That’s great. We’ll just check the chat. Right. Okay. Just one question, perhaps, quickly to Alessandro, also.

Coming back, to wrap up where we began, to data protection: in your presentation you mentioned the right to privacy and that it is duly considered by these regulatory initiatives and processes on AI. But how, if at all, will individuals, users, data subjects be protected against the possible risks and negative impacts of AI on their privacy? Could you expand a little bit on that? Thanks.

>> Alessandro Mantelero: Yes. This is something addressed in the CAHAI – I was part of that – and at the EU level. We have Convention 108+, the modernised version, for the Council of Europe, and the GDPR for the European Union, so we can say that in the field of personal data processing there exists a clear framework. Of course there are challenges, as we know from the CAHAI and elsewhere, and other issues that we do not have time to discuss.

But the approach at the CAHAI and EU level is that we should not reopen data protection, because it is settled and works well. We have to address the issues not fully addressed by data protection.

I think that, also from an academic perspective, the main mistake, if I can use this word, is to imagine that the answer to AI is a sort of expansion of data protection – extending the categories of data protection to everything. It does not fit well with what we need.

Because it is clear, for instance, when we talk about human rights, democracy, the rule of law and the challenges of AI in these fields, that data protection cannot address all these kinds of issues.

So I want to say that for data protection there is an existing system that more or less provides a quite adequate level of protection in Europe; outside Europe it is a bit different. But fortunately the model of Convention 108 is broadly considered as a reference, and several countries are adopting data protection rules based on this model. The same goes for the EU model, although that process is longer because it is not easily achieved.

For data protection, we also have to say to citizens that the situation is not so critical. And we have to turn our gaze to something new, something that addresses the impacts not only on data protection and privacy but on the many other fundamental rights that cannot be properly addressed by data protection tools alone. I do not know if I fully addressed the question, but these are the regulatory issues we have to deal with.

>> MODERATOR: Yes – thanks a lot, Alessandro. I think we would never be able to fully address all of the issues, but I also think we have covered a lot of ground today. Thank you very much. We have to wrap up the session – unfortunately, or not unfortunately, because we still have to hear from the reporter of the session, and I am looking forward to that. I would like to thank all of the experts for the inspiring presentations and discussion; we have heard a lot of valuable input. I hope the contributions have indicated some ways forward to turn users from data objects into data subjects and give us all a little more meaningful control over our personal data.

So I will go back to the studio and give the floor back to Jelena. Thank you for hosting us on your platform, Jelena.

>> Alessandro Mantelero: Thank you, bye.

>> STUDIO: We will hear from the reporter of the session.

>> My name is Ekaterina. I am with the Geneva Internet Platform, and we are providing key messages and session reports from all workshops. I will now present the messages from this session; the report will be posted on the Digital Watch observatory. A reminder that the messages will be available for additional comments, and EuroDIG will provide more detail on that. Without further ado, message 1 is: AI has added a new layer to the issues of privacy, data protection and the protection of basic human rights and freedoms. In this sense, the focus on data is no longer enough to address all of the issues. The proposed regulations therefore intend to go beyond data protection and consider the potential consequences and negative impacts of the use of AI.

If there is any strong objection to this message, please write in the chat. Otherwise, we will consider there is a rough consensus on the message and move to the next one.

All right, moving to message number 2: the development of AI requires a paradigm shift. There is a need to move from a human-centered approach to a planet-centered approach and to use AI technologies to achieve the 2030 Agenda for Sustainable Development.

And the final message from the session: the main problem is that users do not usually know which data they share on the Internet on a daily basis and what can be done with their personal data. Therefore, investment in educational programs and awareness raising is key in order to help users understand AI technologies, their benefits, as well as the risks.

That is all from my end. Thank you very much.

>> STUDIO: Thank you, Ekaterina. Thank you, everyone, for the great session. We will now take a 4 to 5-minute break. When we come back, we will continue with the workshop on human versus algorithmic bias. The studio host will be my colleague. You can choose to stay here with us, or you can close the Zoom room and go back to Gather.Town; after the break, you can join us again or join another studio. It should be easy to navigate around the map, but if you have any trouble, you can go to the help desk and they will be able to assist you.

See you in 4 to 5 minutes.