Regulating emerging technologies: artificial intelligence and data governance – FA 03 Sub 01 2022

22 June 2022 | 10:30 - 11:15  CEST | SISSA Main Auditorium | [[image:Icons_live_20px.png | Video recording | link=https://youtu.be/qFEpZUpEML8?t=2086]] | [[image:Icon_transcript_20px.png | Transcript | link=Regulating emerging technologies: artificial intelligence and data governance – FA 03 Sub 01 2022#Transcript]]<br />
[[Consolidated_programme_2022#day-2|'''Consolidated programme 2022 overview / Day 2''']]<br /><br />
Proposals: [[List of proposals for EuroDIG 2022#prop_15|#15]] [[List of proposals for EuroDIG 2022#prop_16|#16]] [[List of proposals for EuroDIG 2022#prop_39|#39]] [[List of proposals for EuroDIG 2022#prop_74|#74]]<br /><br />
You are invited to become a member of the session Org Team! By joining an Org Team, you agree that your name and affiliation will be published at the respective wiki page of the session for transparency reasons. Please subscribe to the mailing list to join the Org Team and answer the email that will be sent to you requesting your confirmation of subscription.

== Session teaser ==
1-2 lines to describe the focus of the session.

== Session description ==
With the emergence of new tools that employ artificial intelligence (AI) and Big Data, we are witnessing another technological revolution. Progress and innovation have always been driving factors for societies and the way we live, yet these new technologies stand out as a game changer with the potential of affecting the core of our societies. While their benefits may be manifold, they raise complex and urgent legal, ethical, policy and economic questions with thus far uncertain implications. Clearly, however, their impact on peoples’ enjoyment of human rights and fundamental freedoms and on the functioning of democratic institutions and processes is significant. As a result, they require careful analysis and decisive action.

Whilst measures aimed at creating the right conditions in which these technologies could thrive are multiplying, the transition from principles to practice remains one of the key issues driving the debates.

== Format ==
#The draft EU AI Act: on the road to trustworthy AI (7 minutes)<br />Gianclaudio Malgieri (tbc)
#Trustworthy AI and Data Governance (7 minutes)<br />Golestan Radwan
#Audit rules and certification for AI systems (7 minutes)<br />Ryan Carrier
#Discussion (25 minutes)

== Further reading ==
Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example for an external link: Main page of EuroDIG.

== People ==
Please provide name and institution for all people you list here.

'''Focal Point'''
*Vadim Pak

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Subject Matter Expert (SME) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.

'''Organising Team (Org Team)''' ''List Org Team members here as they sign up.''

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

*Vittorio Bertola
*Desara Dushi
*Elif Kiesow
*Alève Mine
*Auke Pals
*Charles Martinet
*Vadim Pak

'''Key Participants'''
*Gianclaudio Malgieri<br />is an Associate Professor of Law & Technology at the EDHEC Business School in Lille (France), where he conducts research at the Augmented Law Institute. He is Co-Director of the Brussels Privacy Hub; Guest Professor at the Free University of Brussels (VUB); Editorial Board Member of Computer Law and Security Review; and External Ethics Expert of the European Commission (Research Executive Agency). He conducts research on and teaches Data Protection Law, privacy, AI regulation, Digital Law, consumer protection in the digital market, data sustainability, and Intellectual Property Law.
*Golestan (Sally) Radwan<br />is an international AI expert and PhD candidate at the Royal Holloway University of London. For the past three years, she served as AI Advisor to the Minister of ICT of Egypt, where she led the team in charge of developing and implementing Egypt’s national AI strategy. Radwan served as vice-chair of the UNESCO ad-hoc expert group tasked with drafting the first international recommendation on the ethics of AI. She is also part of the OECD expert network ONE.AI, GPAI’s Responsible AI group, and chairs two AI working groups within the African Union and the League of Arab States.<br />Prior to her appointment at MCIT, Radwan held several executive positions in the technology industry over 17 years, working in Germany, Austria, the UK and the US. Radwan earned a BSc in Computer Engineering from Cairo University and an MBA from London Business School, as well as an MSc in Clinical Engineering and Healthcare Technology Management from City University of London. She is currently finalizing her PhD thesis, focusing on AI explainability and its ethical considerations in clinical genomics.
*Ryan Carrier<br />Ryan is DataEthics4All Top 100 DIET Champion 2021 and Executive Director at ForHumanity, a public charity which endeavors to be a beacon examining the impact of AI & Automation on jobs, society, our rights, and our freedoms. ForHumanity focuses on Independent Audit of AI Systems, supplying audit certification criteria to governments around the world, including for GDPR and the Children's Code for the ICO in the UK, and is retained as a technical liaison to CEN/CENELEC JTC 21, the body tasked with creating the conformity assessment in the EU AI Act.


Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder balanced dialogue also considering gender and geographical balance. Please provide short CVs of the Key Participants involved in your session at the Wiki or link to another source.


'''Moderator'''

*Thomas Schneider, Head of International Affairs in the Federal Office of Communications (Switzerland) and Chair of the Committee on Artificial Intelligence of the Council of Europe

The moderator is the facilitator of the session at the event. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide a short CV of the moderator of your session at the Wiki or link to another source.


'''Remote Moderator'''

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

'''Reporter'''

Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:
*are summarised on a slide and presented to the audience at the end of each session
*relate to the particular session and to European Internet governance policy
*are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
*are in (rough) consensus with the audience

== Current discussion, conference calls, schedules and minutes ==
See the discussion tab on the upper left side of this page. Please use this page to publish:
*dates for virtual meetings or coordination calls
*short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.


== Messages ==   
A short summary of the session will be provided by the Reporter.
*[[Messages_2022#Focus_Area_3_Coming_next_.E2.80.93_outlook_on_new_technologies_and_can_existing_governance_bodies_cope_with_them.3F | Go to the messages from Focus Area 3]]
*Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/event/eurodig-2022/regulating-emerging-technologies-artificial-intelligence-and-data-governance.


== Video record ==
https://youtu.be/qFEpZUpEML8?t=2086


== Transcript ==
Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-482-9835, www.captionfirst.com
 
 
This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.
 
 
>> NADIA TJAHJA: Good morning. I’m the host here in the main auditorium. Hello to you all online. We’re very excited to have you here. We’re about to start focus area 3. Just a very brief reminder: EuroDIG has a new format in which we have four focus areas. Today we’re looking at focus area 3, which is about what is coming next – an outlook on new technologies and whether existing governance bodies can cope with them – and focus area 4, internet in troubled times. The main sessions will be here, and right now we’ll go into focus area 3. If you’re interested in focus area 4, you can join the FabLab, in person or online, for those workshops, and in the afternoon you can come back from the FabLab to continue the discussions here; if you’re in focus area 3 and want to continue in more depth, go to the FabLab.
 
Now I ask the moderator, Thomas Schneider, to come forward for “Regulating emerging technologies: artificial intelligence and data governance”. As a brief reminder, we’re here in person and online; when we have a question online, we’ll turn on a bright red light for you so you know there is online engagement. I hope you have a great session.
 
>> THOMAS SCHNEIDER: Hello, good morning again. I hope those that were here physically enjoyed our party at the sea last night.
 
Thanks, Nadia, for the introduction.
 
Indeed, as we have heard in the keynote speech, there are challenging aspects of technology that we’re facing. We agree we want to use these technologies, we have to use them, but there are some issues that we need to cope with, so it is very timely to have this discussion. We have 40 minutes; we have three speakers who are not supposed to go over 7 minutes each; and we hope to keep half of the session for interaction. That is a key element, of course, of EuroDIG – to interact. I am looking forward to the input from the three experts and then an interesting discussion.
 
I will not lose time and will give the floor directly to Gianclaudio Malgieri. He’s an associate professor of law and technology at the EDHEC Business School in Lille, France, and he will say a few words about something we have already alluded to several times: the draft AI Act of the EU and what it means for the road to trustworthy AI.
 
The floor is yours, please.
 
>> GIANCLAUDIO MALGIERI: Thank you very much. I hope you can hear me. Yes.
 
Thank you very much, Thomas.
 
Thank you for the invitation. I hear some feedback. Hopefully – I will turn my audio down so that the feedback is removed.
 
Yeah. Thank you very much. It is a great opportunity to be here and to address the European Union approach to the regulation of AI. I would like to start with a provocation; then I will rapidly describe the draft AI Act, with some comments and provocations too; and then a final provocation. Ten minutes is not much, but we’ll do our best.
 
The title of this intervention is trustworthy AI, and I would like to underline the difference between trusted AI and trustworthy AI. Here we come to the regulation: why do we need a regulation on AI in Europe? In my perspective, it was not enough that AI technologies could be put on the market if users had trust in them; it is necessary that they are trustworthy. Trust is a marketing concept – trust is where consumers trust something. Trustworthiness has a normative value, based on principles and values that are designed or decided by political bodies and regulators.
 
First of all, we needed this because we needed a shift from the autonomy of the market, you could say, to the protection and dignity aspects of AI.
 
Beyond this general umbrella – it is a little bit philosophical – why do we need this beyond the GDPR? Consider the potentiality, the broad approach and the broad impact of the GDPR, the General Data Protection Regulation, which just celebrated six years from its approval and four years from its entry into force. We know it is inclusive and comprehensive; in the European Union some describe it as the law of everything. We have a wide definition of personal data and broad principles that apply to nearly any situation: fairness, lawfulness, transparency. Couldn’t we just say that data protection law is broad and comprehensive enough, and let the GDPR alone regulate AI? Why did we need something more? In my view, there were two reasons, maybe three. The first reason is that not everything is actually based on personal data: there are many artificial intelligence systems that can have an impact on individuals even beyond personal data processing.
 
For example, I mentioned privacy-enhancing technologies: federated learning can reduce the impact on individuals and still have an impact. Artificial intelligence can still manipulate the commercial behavior of individuals and even produce harms without identifying the individual.
 
AI can harm you without even identifying you.
 
The second point on why we need an AI Act is that there was a need for a political threshold of what is acceptable in the AI market. The GDPR has general accountability principles like fairness, but the AI Act, as proposed in April 2021 by the European Commission and now under discussion, sets a clear list of things that are unacceptable, even beyond the principles of fairness and lawfulness in the GDPR.
 
So now we can describe how the AI Act is built.

It is built on different layers of risk.

The definition of risk is not based on an accountability principle as in the GDPR; rather, the risks are already predetermined by the legislator, if the act is approved.
 
We have some unacceptable risks, and the practices that lead to those risks are considered prohibited – we call this the black list of AI systems. On this black list we have, for example, the exploitation of the vulnerability of individuals based on age or disability, and the Parliament is trying to add more categories of vulnerability, for example social and economic condition, gender, and so on.
 
We also have dark patterns leading to physical or psychological harm, the indiscriminate use of AI for policing the masses, and social scoring under a certain definition – social scoring when it is unjustifiably harmful. It is important to have this prohibited list of practices. My provocation on this concerns harms and vulnerable individuals. Why? First of all, because the prohibition covers, as I was saying, artificial intelligence producing physical or psychological harms – why not economic harms too? Artificial intelligence, in particular on social media, can manipulate our behavior, even causing economic harms. When I asked the people who were drafting the regulation why just physical and psychological harms – which are very difficult to prove in practice, because you need a medical certificate – they said that for economic harms we already have the Unfair Commercial Practices Directive in the European Union. I have been working on this, and it is not the most adequate tool: it dates from 2005 and is not very updated. And a provocation on vulnerable individuals: why is vulnerability considered only on the basis of age and disability? The answer from the Commission is that age and disability can easily be proved – you can prove age through an ID card, and disability through a medical certificate. But vulnerabilities go much beyond that. I have been devoting the last years of my research to vulnerable data subjects, and just last month we launched an observatory, called VULNERA, where we try to classify all the sources of human vulnerability in the digital sphere. We counted at least something like 20 sources of vulnerability, and then there is the intersectionality approach: more and more layers of vulnerability, one on top of the other, can apply.
 
Okay. I would like to move to the second part of the regulation, which is high-risk AI. There is another list of AI practices here, for example including credit scoring and, in general, the use of biometric surveillance. There is a whole list of AI practices on the high-risk list. What happens if the developer or the user – and the user here is not the end user, it is the company that uses the AI – produces or uses this kind of AI system? They have a list of accountability duties, for example transparency and human oversight. Considering the risk of black-box AI, the developer has a duty to make the algorithm understandable. So the algorithm should be interpretable by design.
 
Also, there should be human oversight in any case of high-risk AI: there should be a human capable of understanding the AI system. There is a difference with the GDPR here. In the GDPR we have Article 22, which is about the right to have a human in the loop in automated decision-making when those decisions have a significant impact on data subjects. This is different: this is human oversight for the design and development of AI, not for the final decision.
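[Illustrative sketch: the tiered structure described above, modelled as a toy Python classifier. The tier names, example practices and duties follow the speaker’s summary only, not the legal text of the proposal.]

<pre>
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited ('black list')"
    HIGH = "permitted, with accountability duties"
    OTHER = "outside the two lists discussed here"

# Practices named in the talk, mapped to the tier the speaker assigns them.
PROHIBITED = {
    "exploitation of vulnerabilities (age, disability)",
    "dark patterns causing physical or psychological harm",
    "social scoring (when unjustifiably harmful)",
}
HIGH_RISK = {"credit scoring", "biometric surveillance"}

# Duties attached to high-risk systems, per the talk.
HIGH_RISK_DUTIES = ("transparency", "human oversight", "interpretability by design")

def classify(practice: str) -> RiskTier:
    """Return the risk tier a given practice falls into."""
    if practice in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if practice in HIGH_RISK:
        return RiskTier.HIGH
    return RiskTier.OTHER

print(classify("credit scoring"))       # RiskTier.HIGH
print(classify("emotion recognition"))  # RiskTier.OTHER - not listed, as criticized later in the talk
</pre>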
 
>> THOMAS SCHNEIDER: Sorry to interrupt you. Time is basically up. I ask you to please make the final point.
 
Thank you very much.
 
>> GIANCLAUDIO MALGIERI: Thank you.
 
Yes, just to say, there is a list of accountability duties, and it is still incomplete. Two reasons, and then I will conclude:
 
First, we still don’t know how to do a data management plan for AI or a human-rights-based assessment for AI; there is a lot to do with academia.
 
Another point: this high-risk list should be bigger. For example, emotion recognition is not included in the list, and the only authority that can manage this list is the European Commission, according to the text of the proposal – which is something very much based on political control.
 
A very, very last point: to overcome most of the problems of the digital world – the problems of consent, vulnerability, underrepresentation – we would need participation in the design of AI. Participatory design. The AI Act barely addresses this, so participatory design should be given more focus. Thank you very much.
 
>> THOMAS SCHNEIDER: Thank you very much, Gianclaudio Malgieri, for this interesting provocation and points.
 
Let’s move directly to Golestan Radwan, an international expert from Egypt who has held a number of important roles at the national level on AI strategy; internationally, she was a key driver of the UNESCO recommendation. You can read more about her in the Wiki. Sally, good to hopefully see you – I don’t see you yet. We met in Geneva about a month ago. The floor is yours, please.
 
>> GOLESTAN RADWAN: Thank you very much. I hope you can all hear me and all see me.
 
>> THOMAS SCHNEIDER: We can.
 
>> GOLESTAN RADWAN: Great. Perfect.
 
This is actually my first EuroDIG, it is very exciting. I wish I could have been there in person. This is a good start at least.
 
I will be very quick, hopefully I will stay on time. I would like to focus on a very specific aspect of AI governance, the implementation of these various guidelines and recommendations that we have floating around.
 
As you said, I was a government official until very recently, and I can tell you that most guidelines and recommendations that have come out of multilateral efforts so far – and I have participated, as you said, in drafting some of them, so I’m certainly not attacking them here – are not enough. On the one hand, as global and inclusive as these efforts have been, any conversation about AI governance needs to start globally: even for countries that choose not to actively participate in the development efforts around AI, their markets and populations are still going to be exposed to the risks in AI products. It is important that they’re part of that discussion, and I’m glad that several international organizations – maybe a bit too many at this stage – have taken the initiative to author guidelines.
 
However, the problem with multilateral efforts is that they’re always built on consensus and compromise. We saw this clearly in the UNESCO recommendation, for example, which is probably the most inclusive in terms of the Member States that participated in it. There were several stages, starting with an expert group followed by regional and global consultations and an intergovernmental negotiation phase, and it was finally adopted last November. During those negotiations we received almost every possible piece of feedback: Arab and African countries, for example, thought the text was not detailed enough to be acted upon, while countries from other regions said it was too prescriptive, too intrusive on the sovereignty of countries, and so on. You end up with a watered-down text: just broad points and brief outputs.
 
The next step is: what do we do with that? In my opinion, the next step cannot be a global effort. Once you get down to the details, there are enough differences between regions at least, if not between individual countries, to make it impossible to harmonize across the board – not to mention the fact that there are a number of such documents, as I said; which one is the basis?
 
The next step has to be regional at least, and I think the new EU AI Act is a start in that direction. Crucially, it needs to be more detailed and more prescriptive. It needs to produce tools, menus, playbooks for all stakeholders to follow, not just policymakers. I can tell you from experience in developing countries that no policymaker will have the time or inclination to wade through hundreds of pages of recommendations to figure out which ones apply to them and how they can be implemented. Thankfully, we have seen a number of efforts in that direction: the OECD network on AI produced a classification of AI systems, which is useful and can actually serve as a basis for things like assessment along the lines that Gianclaudio Malgieri just mentioned, and UNESCO is working on an assessment tool to help countries determine where they are with compliance with the recommendation and to find gaps where they need help.
 
We have things like the AI Bill of Rights in the U.S. that’s currently in the making and so on.
 
An effort is needed in the rest of the world on what implementation could look like. The reason why I say it should be regional is that AI, unlike other technologies, touches on aspects of daily life that are sensitive and personal to people – the values, customs and traditions of a population – posing different types of risks to each group, even within the same population.
 
More importantly, each country or each region has a different set of priorities and a different starting point.
 
I always use the example of Egypt and Finland, because we formed a strong partnership with Finland and I love what they have done on AI. When Finland developed the excellent Elements of AI course, with the goal of educating 1% of the Finnish population on AI, they developed a university-level course, because more than 75% of the population is college-educated. You couldn’t translate that course directly – which was exactly what we were doing – to a country like Egypt, where 1% of the population is a million people; 1% of the Finnish population is about 50,000, which is a busy street in a Cairo neighborhood. By the way, 30% of that 1 million aren’t even literate, and most are not college-educated or technology-literate. The starting point is very different. You can have a global goal saying we’re going to educate 1% of everyone on AI within 5 years, 10 years, whatever – and Egypt is considered a mid-level country in that respect; there are countries that are worse off.
 
The same goes for global initiatives. I sit on the board of a global initiative on climate change, and a finding there was that you can’t transfer a weather forecasting model from Europe or the U.S. to work in other parts of the world: it will lack accuracy and be unreliable. What you have to do is help those countries build enough AI capacity to develop their own models; then you can exchange results and lessons learned. This is what I think we need to do for AI governance in general, for every aspect of it.
 
Every part of the world needs to come together to decide how the basic principles that were developed at the global level apply in their regional context. In Egypt we’re doing that with something called the Egypt Charter on Responsible AI, and we are also trying to coordinate efforts at the Arab and African levels to create similar documents on a regional scale. That includes adapting the various principles to fit regional needs and priorities and to respect local values, traditions and cultures.
 
Why do we do this? Just to wrap up, let’s keep in mind two things. One, there needs to be cross-regional dialogue following the regional development: we need regional efforts, and even country-specific efforts, and then the different regions need to sit down and talk together. We need to identify interface points because, as I said, AI is a very cross-border technology, and we cannot have contradicting regulations or contradicting guidelines in different countries that would then hinder innovation. As much as we want to protect and minimize risk, we also want to encourage innovation and progress.
 
Finally, let’s respect each other. Let’s not try to impose our own values, our way of doing things, on others. I have been hearing, oh, the AI Act must now be the basis for all AI regulation globally. That kind of talk is quite dangerous and will alienate people in other parts of the world.
 
The AI Act was developed by the EU for the EU, and it will work well for the EU – a number of very good reasons were mentioned why it could work for the EU under its specific circumstances. Then let everyone else decide what works for them, without trying to impose your own values; otherwise you’re undermining these harmonization efforts before they even start.
 
I can stop here and elaborate further.
 
>> THOMAS SCHNEIDER: Thank you.
 
Indeed, it is a challenge, of course, to harmonize between global technologies and cultural diversities. Thank you for that.
 
I will move right away to our last input speaker, Ryan, Executive Director at ForHumanity, named DataEthics4All Champion in 2021, with a few other achievements found in his CV.
 
The floor is yours.
 
>> RYAN CARRIER: I wish I was there since I’m calling from New York today. It would be great to be there with you.
 
I’m Executive Director of ForHumanity, a non-profit charity dedicated to examining the downside risks associated with AI, algorithmic and autonomous systems, and to mitigating those risks. If we mitigate the risks from these systems, we get the best possible result for humanity – and thus the overly ambitious name of our organization.
 
We are 1,000 people from 70 countries, and we have already drafted 60,000 lines of audit criteria – an entire working audit for the EU AI Act as proposed. We have been contracted to work on the conformity assessment built into the act. We’re also providing a service to the U.K. government, drafting certification schemes around AI and other systems for GDPR and the Children’s Code in the U.K. Our work is fully crowdsourced, and so all are welcome inside ForHumanity.
 
It is a grassroots effort that is growing by 40 to 60 people per month. What we have is a set of tools to enable you and your voice to be heard in the governance, oversight and accountability of these systems, and we offer our services directly to governments and authoritative bodies around the world as a bit of a secretariat – as a body that can take the actual laws, guidelines, standards and best practices, things that were not built to be auditable, and make them auditable.
 
Auditability has one key characteristic when we talk about compliance: something is compliant or non-compliant. We don’t do gray as auditors. What you have to do is very craftily take apart the laws, guidelines and regulations. And when ForHumanity does this, we don’t do it on our own authority; we do it as a service to governments and regulators, to try to replicate the oversight and governance found in financial audits and tested over the last 50 years.
 
For those of you who don’t share the experience of financial audits: the transparency, governance and oversight that come from the independent audit of financial accounts and reports build an enormous amount of trust in the system, through that governance, oversight and accountability.
 
Of course, in AI, algorithmic and autonomous systems, we don’t care about debits, credits, balance sheets and so on. We have to adapt that body of work to AI systems, and we do that with a focus on five areas of risk to humans. In the end – Gianclaudio Malgieri mentioned this specifically – the risks from these systems happen to humans, not only through the outcomes but because they are systems in which the human is embedded, frequently through personal data.
 
The risks to humans fall into ethics, bias, privacy, trust – our catch-all category – and cybersecurity: a holistic lens examining all risks from the design phase all the way to the decommissioning phase of these systems. Embedded in our approach is a human-centric approach that calls for, demands and requires diverse inputs and multistakeholder feedback, including not only protected categories but diversity of thought and diversity of lived experience. Humans are required in the loop, as the EU AI Act requires. We call the overseer role that’s built into the act a “human in command”, who has to be trained and provided with the resources to recognize exceptions, anomalies and dysfunctions, and to have procedures in place to address them on behalf of humans.
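[Illustrative sketch: the binary, no-gray audit verdict described here, applied across the five named risk areas. The logic and criteria are hypothetical placeholders; the real audit runs to tens of thousands of lines.]

<pre>
RISK_AREAS = ("ethics", "bias", "privacy", "trust", "cybersecurity")

def audit_verdict(findings: dict) -> str:
    """Binary verdict: every area must be compliant; auditors 'don't do gray'."""
    missing = [area for area in RISK_AREAS if area not in findings]
    if missing:
        raise ValueError(f"no finding recorded for: {missing}")
    return "compliant" if all(findings[a] for a in RISK_AREAS) else "non-compliant"

all_pass = {area: True for area in RISK_AREAS}
print(audit_verdict(all_pass))                     # compliant
print(audit_verdict({**all_pass, "bias": False}))  # non-compliant
</pre>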
 
This process exists in a transparent, crowdsourced way that you are all welcome to participate in. We provide the service across countries all around the world: not only the EU AI Act and the Children’s Code, but also the California, Colorado and Virginia privacy laws, and the new data protection law happening in India.
 
One of the roles ForHumanity plans to play is to create a maximum of harmonization across the different laws, so that we can make compliance by design, built into the systems, easier.
 
So in this ecosystem we’re engineering, we are empowering auditors and pre-audit service providers who have to abide by key, critical rules like independence and anti-collusion, and who have to be trained and qualified. We have already created ForHumanity certified auditors for GDPR and for the Children’s Code through ForHumanity University, a set of free classes you’re all welcome to take and participate in.
 
We are beginning to build the entire industry, the entire ecosystem that we need to enable and empower good works like the EU Artificial Intelligence Act, and to take that much-talked-about conformity assessment and turn it into the practicalities of compliance and non-compliance, so that designers and developers can build compliance by design throughout their entire process.
 
A last thing that I will mention, built into all of this – and Sally mentioned this, right – is that you have to have an approach that allows each organization to establish, in a public, transparent way, their code of ethics, their shared moral framework, and built into that process there has to be a standing, empowered ethics committee. One of the biggest problems we have seen is that when we go to solve problems, we turn to engineers. Engineers are fantastic at solving problems: they build the systems, they build the tools, they create solutions. What engineers are not great at – and it is based on our education system – is understanding when they have made ethical choices along the way in those design and development phases. The audit criteria that ForHumanity develops extract instances of ethical choice out of the design, development and deployment phases and turn them over to trained, empowered ethics officers sitting on the standing, empowered ethics committee, to make those ethical choices and to provide the transparency and disclosure needed to create a virtuous feedback loop with the public and with the marketplace. I’ll conclude my remarks there – I could go on for hours; I hope that doesn’t surprise anyone! I’m more than happy to take questions and obviously to get into a lot more detail on how ForHumanity operates as a secretariat plugging into governments, taking these audit criteria and submitting them under the authority of governments for certification approval.
 
>> THOMAS SCHNEIDER: Thank you, Ryan, for the very interesting exposé as well.

We have a few minutes left, less than we hoped – it is a challenge every time. Let’s go to Nadia, who is following the online discussions.
 
Nadia, what’s going on online in the remote chat?
 
>> NADIA TJAHJA: Thank you very much.
 
As you may have noticed, a red light has gone on; we have a question from the online floor. I would invite Mike Nelson to make his intervention.
 
>> MIKE NELSON: Thank you very much.
 
Greetings from Washington D.C.
 
Just a question about how the AI Act and all of these national AI bills are being perceived: when you read the journalists’ accounts, it sounds like these bills will focus on regulating software developers and the code they write, to make sure that we only have ethical AI systems. This doesn’t seem right to me. First off, I could do a hell of a lot of damage with data and a spreadsheet.
 
Second, if I’m a company that wants to avoid regulation, I could just say, well, I don’t use AI, I use advanced analytics. Nothing in any of these bills really specifies what an AI system is.
 
The last thing is, the whole point of AI systems is to build code that constantly evolves as new data is plugged into it. Doing an audit on Tuesday doesn’t help you if the code has evolved by Thursday.
 
I’m just confused. Is the real purpose of the bills to regulate the code?
 
>> THOMAS SCHNEIDER: Let’s take a few more questions, otherwise we’ll have no space for others. Please, we have a lady in the room. Thank you, go ahead.
 
>> Hello. Thank you for the discussion. I’m.
 
These days I have heard a focus on ethical AI and human-centred AI. However, I have not heard a perspective on defensive and strategic applications of AI, which is also missing from the EU frameworks, and there are issues at the UN level with autonomous weapons. It is well known that the first movers in a technology are typically the ones who set the standards for its applications further on.
 
My question is, how do you see this developing in the future – developing the framework for AI applications in strategic matters? And maybe a more ignorant question, I excuse myself for that, to Mr. Carrier: do you think risk mitigation can go hand in hand with such applications of AI, or could it still be manipulated to work differently? Thank you.
 
>> THOMAS SCHNEIDER: Please.
 
>> Thank you. I’m still confused: people talk about coding AI, but AI is trained, not coded. My point is, you cannot understand a trained system – you don’t give it rules – so asking developers to understand the system is not really talking about AI. But you can analyze AI. You can make it open source – not open-source code, but open source in the sense that you provide the system to be tested by anybody; what you do with open-source code, you can do with open-source AI. You can provide the system to be analyzed, so you know what the system replies to which question. Why is this approach not used, if you really want to provide transparency? There is no possibility to provide transparency about rules, but you can provide transparency about the system itself, so that everybody can test it and analyze it; you see what the system is actually doing and, of course, you will also see where the system misbehaves. Why is this approach not being used?
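[Illustrative sketch: the kind of open, black-box testing the questioner proposes – anyone probes a released system and records what it replies to which question, without access to code or weights. query_model, the dummy model and the screening terms are hypothetical stand-ins.]

<pre>
from typing import Callable, Iterable

def probe(query_model: Callable[[str], str], prompts: Iterable[str]) -> list:
    """Send test prompts to an opaque system; log (input, output) pairs."""
    return [(p, query_model(p)) for p in prompts]

def flag_misbehaviour(pairs: list, banned_terms=("harmful",)) -> list:
    """Naive screen: keep pairs whose output contains terms a tester bans."""
    return [(p, out) for p, out in pairs if any(t in out.lower() for t in banned_terms)]

# Dummy model standing in for the released system under test.
def dummy(prompt: str) -> str:
    return "harmful reply" if "trick" in prompt else "benign reply"

pairs = probe(dummy, ["What is the weather?", "Try this trick question."])
print(flag_misbehaviour(pairs))  # [('Try this trick question.', 'harmful reply')]
</pre>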
 
>> THOMAS SCHNEIDER: Thank you very much. That was I think very, very helpful.
 
Let’s give the floor to Anriette. Thank you.
 
>> ANRIETTE ESTERHUYSEN: I expect that may be part of an answer to my question. I wanted to ask particularly Sally – Sally, it is fantastic to see you; you will remember me from the IGF.
 
My question is, how do you deal with the paradox of, on the one hand, it being, as you pointed out, a fairly lost cause to impose global or even regional approaches and agreements, and, on the other hand, not losing the value of international frameworks, international law and international human rights agreements? How do we navigate that – not having approaches imposed, particularly from the Global North on the Global South, which we know often provokes increasingly authoritarian responses and an increase in isolationism? I support your approach, but I also worry about it.
 
I think once we abandon what we have achieved in terms of global protections and agreements, what next? A difficult question. Transparency definitely is a solution. Sally, if you could respond to that from a Global South perspective.
 
>> THOMAS SCHNEIDER: Thank you.
 
We have 3 minutes left. Each of you has exactly 60 seconds, and not one second more, to reply to all of the questions we have heard.
 
Ladies first.
 
Sally, first.
 
>> GOLESTAN RADWAN: Okay. Great to see you.
 
Yeah, it is not an easy question to answer. I have given my best shot at saying we first need to agree globally on a set of principles: what are the underlying values that no one disagrees with and by which we want to govern AI globally – human-centricity, trustworthiness however we end up defining it, transparency, and so on, so forth. Then how you implement them, region by region, country by country – that can’t be done globally. And as you said, if we try to impose any kind of specific values, that could potentially trigger adverse reactions.
 
So I think what needs to happen is a cross-cultural dialogue that attempts to re-harmonize efforts after they have been developed regionally.
 
I don’t have a specific answer, I’m afraid. I’m not even sure if that approach would work; it is the best I could come up with, in terms of people actually talking to each other. There is a lot of merit in exchanging ideas, exchanging our starting points and our respective priorities, and trying to understand where the other person is coming from. From that perspective, if not officially at the political and regulatory level, then maybe at the second-highest level – the level of academia, the level of industry – we could find common ground.
 
That’s my best attempt.
 
>> THOMAS SCHNEIDER: Thank you.
 
Ryan, 60 seconds.
 
>> RYAN CARRIER: Trying to tackle a couple of questions.
 
On open source versus intellectual property: with too much open source, companies will not even participate and develop all of their work.
 
A proper audit framework actually protects intellectual property: an auditor works on behalf of a company, or works with that company, to audit on behalf of the public. So they act as a proxy for the public, abiding by a whole set of rules that don’t require complete transparency to the rest of the world. This helps protect intellectual property and trade secrets and enhances innovation.
 
Open source is always possible and always welcome. Where it doesn’t exist, the audit framework still allows for governance, oversight and accountability.
 
When it comes to risk mitigation, we have the opportunity, as we draft these audit requirements, to tackle each risk as we identify it, always balancing between what is best for humans and what is practical and implementable. You have that voice inside ForHumanity: if you see a risk or identify concerns, you can raise them so that we build them into the audit system and criteria. And I’ll tackle one or two other questions in the online chat while Gianclaudio Malgieri wraps us up.
 
>> THOMAS SCHNEIDER: Thank you. There are quite some good discussions going on, also regarding the audit approach.
 
Gianclaudio Malgieri, 60 seconds for you. Thank you.
 
>> GIANCLAUDIO MALGIERI: Very rapid.
 
On the first question: I think this is not just about regulating code, but business models, implementations, and the social and technical choices behind the code – for example, the input data and the training data, how you make sure they are geographically contextualized, how bias is detected and gap analyses are done, and so on.
 
Just the same for the second and third questions: you cannot fully understand an AI system, but you can understand its implications, its significance, its impact, its technical choices. More than understanding AI, you should justify AI – as I put in the chat, the author of The Black Box Society has written about justification. And as a technical choice to address this, we need participation in the design.
 
Participatory design may be a tool. I have been reading the book Design Justice, which I suggest to you; it addresses how to have a participatory, bottom-up approach. Thank you very much.
 
>> THOMAS SCHNEIDER: Thank you, Gianclaudio Malgieri. The points raised, by all three of you, are really extremely relevant. I don’t know whether you know, but I’m currently chairing the Council of Europe committee that Jan Kleijssen referred to, and the points raised in this session are exactly the points we are struggling with the most – I guess it is the same for the EU. One: what are we actually talking about, what are we trying to regulate, what falls under any kind of regulation and what doesn’t, and how can you cheat by using different names? The second is the notion of assessment: how do we measure the things we are trying to regulate? And the third – and this is where I hope the Council of Europe, and the convention that will come out of it, can make a difference – is that whereas the EU AI Act will regulate a particular defined market, the EU internal market, the Council of Europe treaty is a value-based document where you can sign up to values you are willing to respect and implement in your national legal system, in a way that allows you to respect some level of cultural and other differences. I can only encourage you, Sally, if you haven’t already, to join the work of the Council of Europe, because we’re also interested in having more countries, in addition to the ones that were named by Jan Kleijssen, in our work.
 
With this, I have to stop. People need time to – the technicians need time to breathe before the next session, which actually starts very soon – and maybe to go have a coffee.
 
Thank you very much, it was exciting. Let’s continue the discussion outside. We need to stop here. We’ll meet again at 11:30.
 
Thank you very much.
 
>> NADIA TJAHJA: Thank you to Thomas for moderating the last session.
 
Coming up next is focus area 3, subtopic 2, The multistakeholder model: from its origins to its future. We hope to see you back here in the room in the next 10 minutes.


[[Category:2022]][[Category:Sessions 2022]][[Category:Sessions]][[Category:Development of IG eco system 2022]][[Category:Human rights 2022]]
[[Category:2022]][[Category:Sessions 2022]][[Category:Sessions]][[Category:Development of IG eco system 2022]][[Category:Human rights 2022]]

Latest revision as of 22:33, 21 July 2022

22 June 2022 | 10:30 - 11:15 CEST | SISSA Main Auditorium | Video recording | Transcript
Consolidated programme 2022 overview / Day 2

Proposals: #15 #16 #39 #74

You are invited to become a member of the session Org Team! By joining a Org Team you agree to that your name and affiliation will be published at the respective wiki page of the session for transparency reasons. Please subscribe to the mailing list to join the Org Team and answer the email that will be send to you requesting your confirmation of subscription.

Session teaser

1-2 lines to describe the focus of the session.

Session description

With the emergence of new tools that employ artificial intelligence (AI) and Big Data, we are witnessing another technological revolution. Progress and innovation have always been driving factors for societies and the way we live, yet, these new technologies stand out as a game changer with the potential of affecting the core of our societies. While their benefits may be manifold, they raise complex and urgent legal, ethical, policy and economic questions with thus far uncertain implications. Clearly, however, their impact on peoples’ enjoyment of human rights and fundamental freedoms and on the functioning of democratic institutions and processes is significant. As a result, they require careful analysis and decisive action.

Whilst measures aimed at creating the right conditions in which these technologies could thrive are multiplying, the transition from principles to practice remains one of the key issues driving the debates.

Format

  1. The draft EU AI Act: on the road to trustworthy AI (7 minutes)
    Gianclaudio Malgieri (tbc)
  2. Trustworthy AI and Data Governance (7 minutes)
    Golestan Radwan
  3. Audit rules and certification for AI systems (7 minutes)
    Ryan Carrier
  4. Discussion (25 minutes)

Further reading

Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example for an external link: Main page of EuroDIG

People

Please provide name and institution for all people you list here.

Focal Point

  • Vadim Pak

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Subject Matter Expert (SME) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles

Organising Team (Org Team) List Org Team members here as they sign up.

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

  • Vittorio Bertola
  • Desara Dushi
  • Elif Kiesow
  • Alève Mine
  • Auke Pals
  • Charles Martinet
  • Vadim Pak

Key Participants

  • Gianclaudio Malgieri
    is an Associate Professor of Law & Technology at the EDHEC Business School in Lille (France), where he conducts research at the Augmented Law Institute. He is Co-Director of the Brussels Privacy Hub; Guest Professor at the Free University of Brussels (VUB); Editorial Board Member of Computer Law and Security Review; and External Ethics Expert of the European Commission (Research Executive Agency). He conducts research on and teaches Data Protection Law, privacy, AI regulation, Digital Law, Consumer protection in the digital market, Data Sustainability, Intellectual Property Law.
  • Golestan (Sally) Radwan
    is an international AI expert and PhD candidate at the Royal Holloway University of London. For the past three years, she served as AI Advisor to the Minister of ICT of Egypt, where she led the team in charge of developing and implementing Egypt’s national AI strategy. Radwan served as vice-chair of the UNESCO ad-hoc expert group tasked with drafting the first international recommendation on the ethics of AI. She is also part of the OECD expert network ONE.AI, GPAI’s Responsible AI group, and chairs two AI working groups within the African Union and the League of Arab States.
    Prior to her appointment at MCIT, Radwan held several executive positions in the technology industry over 17 years, working in Germany, Austria, the UK and the US. Radwan earned a BSc in Computer Engineering from Cairo University and an MBA from London Business School, as well as an MSc in Clinical Engineering and Healthcare Technology Management from City University of London. She is currently finalizing her PhD thesis, focusing on AI explainability and its ethical considerations in clinical genomics.
  • Ryan Carrier
    Ryan is DataEthics4All Top 100 DIET Champion 2021, Executive Director at ForHumanity, public charity, which endeavors to be a beacon, examining the impact of AI & Automation on jobs, society, our rights, and our freedoms. They focus on Independent Audit of AI Systems, supplying audit certification criteria to governments around the world including GDPR and the Children's Code for the ICO in the UK and retained as a technical liaison to CEN/CENELEC JTC 21, the body tasked with creating the conformity assessment in the EU AI Act.

Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder balanced dialogue also considering gender and geographical balance. Please provide short CV’s of the Key Participants involved in your session at the Wiki or link to another source.

Moderator

  • Thomas Schneider, Head of International Affairs in the Federal Office of Communication (Switzerland) and the chair of the Committee on Artificial Intelligence of the Council of Europe

The moderator is the facilitator of the session at the event. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide short CV of the moderator of your session at the Wiki or link to another source.

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter

Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:

  • are summarised on a slide and presented to the audience at the end of each session
  • relate to the particular session and to European Internet governance policy
  • are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
  • are in (rough) consensus with the audience

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

Video record

https://youtu.be/qFEpZUpEML8?t=2086

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-482-9835, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> NADIA TJAHJA: Good morning. I’m the host here in the main auditorium. Hello to you all online. We’re very excited to have you here. We’re about to start focus area 3. Just a very brief reminder, EuroDIG has a new format in which we have four focus areas. Today we’re looking at focus area 3, which is about coming next, Outlook on new technologies and can existing governance bodies cope with them and focus area four, internet in trouble times. There will be main sessions, they’ll be here, and right now we’ll go into focus area 3. If you’re interested in focus area 4, you can join FabLab or online to join those workshop and in the afternoon you can come from FabLab back here to continue the discussions and if you’re here in focus 3 and want to continue in more depth, go into the FabLab.

Now I ask the moderator Thomas Schneider to come forward on Regulating emerging technologies: artificial intelligence and data governance as brief reminder, we’re here in person and online, when we have a question online we’ll turn on a bright red light for you so you know there is online engagement. I hope you have a great session.

>> THOMAS SCHNEIDER: Hello, good morning again. I hope those that were here physically enjoyed our party at the sea last night.

Thanks, Nadia, for the introduction.

Indeed, as we have heard in the keynote speech, I think we have challenging aspects of technology we’re facing. We agree we want to use the technology, we have to use them, but there are some issues that we need to cope with. It is very timely to have this discussion we have 40 minutes, we have three speakers that are not supposed to go more than 7 minutes, we hope to have half of the session for interactive let’s. That’s a key element, of course, of EuroDIG to interact. Looking forward to the input from the three experts and then an interesting discussion.

I will not lose time and give directly the floor to Gianclaudio Malgieri. He’s an associate professor of technology at the EDHEC Business School in Lille in France, and he will say a few words about something that we have already alluded to at least several times, which is the Draft AI Act of the E.U. and what that means for a road to trustworthy AI.

The floor is yours, please.

>> GIANCLAUDIO MALGIERI: Thank you very much. I hope you can hear me. Yes.

Thank you very much, Thomas.

Thank you for the invitation. I hear some feedback. Hopefully – I will put the audio down so that the feedback is remote.

Yeah. Thank you very much. It is a great opportunity to address – to be here, to address the European Union approach to the regulation of AI. I would like to start with a provocation, and then I will rapidly describe what is the AI Draft Act with some comments and provocations too and then final provocation. Ten minutes is not much, but we’ll do our best.

The title of this, intervention is trustworthy AI. I would like to underline the difference between trust and trustworthy AI. You know, here we come with the regulation, why we need the regulation on AI in Europe. In my approach, under my perspective, in my perspective, it was not enough that AI technology could be put in the market if the user had trust in them, it is a necessary that they’re trustworthy. Trust is a marketing concept. Right. Trust is, you know, where consumers trust something. Trustworthiness, it has a normative value based on some principles, based on some values that are designed or decided by the political bodies and regulators.

First of all, we needed this because we needed a shift from autonomy of the market you could say and the protection and dignity aspect of AI.

Beyond this general umbrella, it is a little bit floss call, why do we need this beyond the GDPR? The potentiality, the broad approach, broad impact of the GDPR, right, the general data protection and regulation that just celebrated six years from the approval and four years from the entering into forcing. We know this is inclusive, comprehensive, the European Union, describe this as the law of everything, we have this wide definition of personal data and we have broad principles to apply to nearly any situation, fairness, lawfulness, transparency. Couldn’t we just say that the information of data is big, and it is comprehensive, like fairness, couldn’t we just say that we actually make it just the GDPR to regulate AI. Why did we need something more. In my view, there was two reasons, maybe three. The first reason, it is not everything actually is based on personal data, there are many artificial intelligence systems that thanks to technologies can process, can have impact on individuals even beyond personal data processing.

For example, I mentioned privacy and technologies, federated learning can reduce the impact on individuals and still have an impact. Artificial intelligence can still manipulate the commercial behavior of individuals and even produce items without identifying the individual.

AI can harm you without even identifying you.

The second point, on why we need an AI act, it is that there was a need of a political threshold of what is an acceptable, in terms of the AI market because there is general accountability principles like fairness, but then the AI act, as proposed in April of 2021 by the European Commission, now on this discussion, it sets clear, a clear list of things that are unacceptable, even beyond the principles of fairness and lawfulness in the GDPR.

So now we can go to the – we can describe how the AI act is built.

It is built on different layers of risks.

The definition of risk, it is not based on the accountability principle like in the GDPR and risks are already predetermined by the legislator in a way if it is approved.

We have some unacceptable risks, variable risk, and those practices that bring to those risks, they’re considered prohibited. We call the black list of AI systems. This black list we have for example the exploitation of vulnerability of individuals based on age or disability and the parliament is trying to add more categories or vulnerabilities, for example, social and economic condition, gender, so on.

We have the dark patterns leading to physical or psychological harm and then we have indiscriminate use of AI for politicalizing in the mass, social scoring under certain definition, social scoring when it is unjust identifiable harmful. In this prohibited list of practices, it is important to have them. My provocation on this, it is on harms and vulnerable individuals. Why? First of all, because the prohibition includes, as I was saying artificial intelligence producing harm, physical or psychological harms, why not economic harms too? Artificial intelligence, in particular on social media can manipulate how a behavior, even bringing economic harms, European Commission replied when I asked them, I asked the people who was drafting the regulation why just physical and psychological harms, it is very difficult to prove in practice because you need a medical certificate, right. They said for economic harms, we already have unfair commercial practices directed in the European Union. I have been working on this and it is not the most adequate tool, it is 2005 and not very updated. And a provocation on vulnerable individuals. Why vulnerability is considered unjust identifiable based on the age and disability, the answer from the Commission is that because age and disability can be easily proved, you can prove the age or the ID card, you can prove the disability through a medical certificate. Vulnerabilities are much beyond. I have been devoting the last years of my research to vulnerable data subjects and we launched just last month an observatory, it is called vulnerable, we tried to classify all social vulnerability of human vulnerabilities in the digital sphere. It is something that we can at least – at least something like 20 sources of vulnerability, then there was intersectionality approach. So more and more layers of vulnerability, one on top of the other can apply.

Okay. I would like to move to the second part of the regulation, which is high-risk AI. For high risk there is another list of AI practices, including for example credit scoring and, in general, the use of biometric surveillance. There is a whole list of AI practices in the high-risk list. What happens if the developer or the user – and the user here is not the end user, it is the company that uses the AI – produces or uses this form of AI system? They have a list of accountability duties. For example, transparency and human oversight: considering the risk of black-box AI, the developer has a duty to make the algorithm understandable. So the algorithm should be interpretable by design.

Also, there should be human oversight in any case of high-risk AI: there should be a human capable of understanding the AI system. There is a difference with the GDPR. In the GDPR we have Article 22, which is about the right to have a human in the loop in automated decision-making when those decisions have a significant impact on data subjects. This is different: this is human oversight over the design and development of the AI, not over the final decision.
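
To make the layered structure concrete, here is a minimal illustrative sketch of the tiers as data. The category strings are simplified paraphrases of the practices discussed above, and the lookup helper is hypothetical – not a legal tool and not the text of the proposal:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (the 'blacklist')"
    HIGH = "allowed, subject to accountability duties"
    OTHER = "lighter or no obligations"

# Simplified paraphrases of categories discussed above (not legal text).
PROHIBITED = {"exploitation of vulnerability (age, disability)",
              "dark patterns causing physical or psychological harm",
              "social scoring (when unjustifiably harmful)"}
HIGH_RISK = {"credit scoring", "biometric surveillance"}

HIGH_RISK_DUTIES = ["transparency", "interpretability by design",
                    "human oversight over design and development"]

def tier(practice: str) -> RiskTier:
    """Toy lookup mirroring the predetermined, legislator-defined lists."""
    if practice in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if practice in HIGH_RISK:
        return RiskTier.HIGH
    return RiskTier.OTHER

print(tier("credit scoring"), "->", HIGH_RISK_DUTIES)
```

The design point the speaker makes is visible in the sketch: unlike the GDPR's open-ended accountability, the tiers are fixed lists set by the legislator, so anything not on a list falls through to the lightest tier.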

>> THOMAS SCHNEIDER: Sorry to interrupt you. Time is basically up. I ask you to please make the final point.

Thank you very much.

>> GIANCLAUDIO MALGIERI: Thank you.

Yes, just to say: there is a list of accountability duties, and it is still incomplete. Two reasons, and then I will conclude.

First, we still don’t know how to do a data management plan for AI, or a human-rights-based assessment for AI; there is a lot of work to do with academia.

Second, the high-risk list should be bigger. For example, emotion recognition is not included in the list. And the only authority that can amend the list, according to the text of the proposal, is the European Commission, which makes it very much subject to political control.

A very, very last point: to overcome most of the problems of the digital world – to overcome the problems of consent, vulnerability, underrepresentation – we would need participation in the design of AI: participatory design. In the AI Act it barely appears, so participatory design should be given more focus. Thank you very much.

>> THOMAS SCHNEIDER: Thank you very much, Gianclaudio Malgieri, for these interesting provocations and points.

Let’s move directly to Golestan Radwan, an international expert from Egypt who has held a number of important roles at the national level on AI strategy; internationally, she was a key driver of the UNESCO recommendation. You can read more about her in the Wiki. Sally, good to hopefully see you – I don’t see you yet. We met in Geneva about a month ago. The floor is yours, please.

>> GOLESTAN RADWAN: Thank you very much. I hope you can all hear me and all see me.

>> THOMAS SCHNEIDER: We can.

>> GOLESTAN RADWAN: Great. Perfect.

This is actually my first EuroDIG, it is very exciting. I wish I could have been there in person. This is a good start at least.

I will be very quick, hopefully I will stay on time. I would like to focus on a very specific aspect of AI governance, the implementation of these various guidelines and recommendations that we have floating around.

As you said, I was a government official until very recently, and I can tell you that most of the guidelines and recommendations that have come out of multilateral efforts so far – and I have participated, as you said, in drafting some of them, so I’m certainly not attacking them here – are not enough. On the one hand, these efforts are global and as inclusive as they could have been, and any conversation about AI governance needs to start globally: even for countries that choose not to actively participate in the development efforts around AI, their markets and populations are still going to be exposed to the risks in AI products. It is important that they are part of that discussion, and I’m glad that several international organizations – maybe a bit too many at this stage – have taken the initiative to author guidelines.

However, the problem with multilateral efforts is that they are always built on consensus and compromise. We saw this clearly in the UNESCO recommendation, for example, which is probably the most inclusive in terms of the Member States that participated in it. There were several stages, starting with an expert group, followed by regional and global consultations and an intergovernmental negotiation phase, and it was finally adopted last November. During those negotiations we received almost every possible piece of feedback. Arab and African countries, for example, thought the text was not detailed enough to be acted upon, while countries from other regions said it was too prescriptive, too intrusive on the sovereignty of countries, and so on. You end up with a watered-down text: just broad points and brief outputs.

The next step is: what do we do with that? In my opinion, the next step cannot be a global effort. Once you get down to the details, there are enough differences between regions at least, if not between individual countries, to make it impossible to harmonize across the board. Not to mention the fact that there are a number of such documents, as I said – which one is the basis?

The next step has to be regional at least, and I think the EU AI Act is a start in that direction. Crucially, it needs to be more detailed and more prescriptive. It needs to produce tools, menus, playbooks for all stakeholders to follow, not just policymakers. I can tell you from experience in developing countries that no policymaker will have the time or inclination to wade through hundreds of pages of recommendations to figure out which ones apply to them and how they can be implemented. Thankfully, we have seen a number of efforts in that direction: the OECD network on AI produced a classification of AI systems, which is useful and can actually serve as a basis for things like risk assessment along the lines that Gianclaudio Malgieri just mentioned. UNESCO is working on an assessment tool to help countries determine where they are on compliance with the recommendation and to find the gaps where they need help.

We have things like the AI Bill of Rights in the U.S. that is currently in the making, and so on.

There is a need for a similar effort in the rest of the world on what implementation could look like. The reason why I say it should be regional is that AI, unlike other technologies, touches on aspects of daily life that are sensitive and personal to people – the values, the customs and the traditions of a population – posing different types of risk to each group, even within the same population.

More importantly, each country or each region has a different set of priorities and a different starting point.

I always use the example of Egypt and Finland, because we formed a strong partnership with Finland and I love what they have done on AI. Finland developed the excellent Elements of AI course with the goal of educating 1% of the Finnish population on AI, and they made it a university-level course because more than 75% of the population is college-educated. You couldn’t translate that course directly – which was exactly what we were considering – to a country like Egypt, where 1% of the population is a million people; 1% of the Finnish population is about 50,000, which is a busy street in a Cairo neighborhood. By the way, 30% of that 1 million aren’t even literate, and most are not college-educated or technology-literate. The starting point is very different. You can have a global goal saying we are going to educate 1% of everyone on AI within 5 years, 10 years, whatever – and Egypt is considered a mid-level country in that respect; there are countries that are worse off.

The same goes for global initiatives. I sit on the board of a global initiative on AI for climate change, and a key finding and recommendation is that you cannot transfer a weather forecasting model built for Europe or the U.S. and expect it to work in other parts of the world: it will lose accuracy and become unreliable. What you have to do is help those countries build enough AI capacity to develop their own models; then you can exchange results and lessons learned. This is what I think we need to do for AI governance in general, for every aspect of it.

Every part of the world needs to come together to decide how the basic principles that were developed at the global level apply in its own regional context. In Egypt we are doing that with something called the Egypt Charter on Responsible AI, and we are also trying to coordinate efforts at the Arab and African levels to create similar documents on a regional scale. That includes adapting the various principles to fit local needs and priorities and to respect values, traditions and cultures.

As we do this, just to wrap up, let’s keep in mind two things. One, there needs to be cross-regional dialogue following these developments: we need regional and even country-specific efforts, but then the different regions need to sit down and talk together, and we need to identify interface points, because, as I said, AI is a very cross-border technology and we cannot have contradicting regulations or contradicting guidelines in different countries that would then hinder innovation. As much as we want to protect and minimize risk, we also want to encourage innovation and progress.

Finally, let’s respect each other. Let’s not try to impose our own values, our own way of doing things, on others. I have been hearing "oh, the AI Act must now be the basis for all AI regulation globally". That kind of talk is quite dangerous and will alienate people in other parts of the world.

If something was developed by the EU for the EU, it will work well for the EU – a number of very good examples were mentioned of why it can work for the EU under its specific circumstances. Then let everyone else decide what works for them, without trying to impose your own values; otherwise you are undermining these harmonization efforts before they even start.

I can stop here and elaborate further later.

>> THOMAS SCHNEIDER: Thank you.

Indeed, it is a challenge, of course, to harmonize between global technologies and cultural diversities. Thank you for that.

I will move right away to our last input speaker, Ryan, Executive Director at ForHumanity, named a DataEthics4All Champion in 2021, with a few other achievements to be found in his CV.

The floor is yours.

>> RYAN CARRIER: I wish I were there – I’m calling from New York today. It would be great to be there with you.

I’m Executive Director of ForHumanity, a non-profit charity dedicated to examining the downside risks associated with AI, algorithmic and autonomous systems, and we are involved in mitigating those risks. If we mitigate the risks from these systems, we get the best possible result for humanity – thus the overly ambitious name of our organization.

We are 1,000 people from 70 countries, and we have already drafted 60,000 lines of audit criteria – an entire working audit for the EU AI Act as proposed. We have been contracted to build the conformity assessment built into the Act. We are also providing a service to the U.K. government, drafting certification schemes around AI and other systems for GDPR and the Children’s Code in the U.K. Our work is fully crowdsourced, so all are welcome inside ForHumanity.

It is a grassroots effort growing by 40 to 60 people per month. What we have is a set of tools to enable you and your voice to be heard in the governance, oversight and accountability of these systems, and we offer our services directly to governments and authoritative bodies around the world as a bit of a secretariat – as a body that can take the actual laws, guidelines, standards and best practices, things that were not built to be auditable, and make them auditable.

Auditable has one key component when we talk about compliance: it is binary. A system is compliant or non-compliant; we don’t do grey as auditors. So you have to very carefully take apart the laws, guidelines and regulations. And when ForHumanity does this, we don’t do it on our own authority; we do it as a service to governments and to regulators, to try to replicate the governance and oversight found in financial audit and tested over the last 50 years.

For those of you who don’t share that experience of financial audit: the transparency, governance and oversight that come from the independent audit of financial accounts and reports build an enormous amount of trust in the system.

Of course, in AI, algorithmic and autonomous systems we don’t care about debits, credits, balance sheets and so on. We have to adapt that body of work to AI systems, and we do that with a focus on five areas of risk to humans. In the end – Gianclaudio Malgieri mentioned this specifically – the risks from these systems fall on humans, not only through the outcomes but because humans are frequently embedded in these systems through their personal data.

The risks to humans are in the areas of ethics, bias, privacy, trust (our catch-all category) and cybersecurity – a holistic lens examining all risks from the design phase all the way to the decommissioning phase of these systems. Embedded in our approach is a human-centric view that calls for, demands and requires diverse inputs and multistakeholder feedback, including not only protected categories but diversity of thought and diversity of lived experience. Humans are required in the loop, as the EU AI Act requires. We call the overseer role that is built into the Act a "human in command": someone who has to be trained and provided with the resources to recognize exceptions, anomalies and dysfunctions, and to have procedures in place to address them on behalf of humans.
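
Here is a minimal sketch of the binary, no-grey audit logic described above, assuming invented criterion names and a toy system profile – this is not ForHumanity's actual audit criteria:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    area: str          # one of: ethics, bias, privacy, trust, cybersecurity
    description: str
    check: Callable[[dict], bool]  # must return True/False; no grey allowed

# Hypothetical criteria for illustration only.
CRITERIA = [
    Criterion("privacy", "data minimization documented",
              lambda s: s.get("data_minimization_doc", False)),
    Criterion("bias", "disparate impact tested pre-deployment",
              lambda s: s.get("disparate_impact_tested", False)),
    Criterion("ethics", "standing, empowered ethics committee in place",
              lambda s: s.get("ethics_committee", False)),
]

def audit(system: dict) -> bool:
    """Binary verdict: compliant only if every single criterion passes."""
    failures = [c.description for c in CRITERIA if not c.check(system)]
    for f in failures:
        print("NON-COMPLIANT:", f)
    return not failures

print("compliant" if audit({"data_minimization_doc": True}) else "non-compliant")
```

The design choice mirrored here is the one the speaker insists on: every criterion resolves to a yes/no check, so the overall verdict can only ever be compliant or non-compliant.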

This process exists in a transparent, crowdsourced way that you are all welcome to participate in. We provide the service in countries all around the world: not only the EU AI Act and the Children’s Code, but also the California, Colorado and Virginia privacy laws, and the new data protection law happening in India.

One of the roles ForHumanity plans to play is to create a maximum of harmonization across the different laws, so that we can make compliance by design easier to build into these systems.

So in this ecosystem we are engineering, we are empowering auditors and pre-audit service providers, who have to abide by key, critical rules like independence and anti-collusion, and who have to be trained and qualified. We have already created ForHumanity Certified Auditors for GDPR and for the Children’s Code through ForHumanity University, a set of free classes you are all welcome to take and participate in.

The aim is to begin to build the entire industry, the entire ecosystem, that we need to enable and empower good works like the EU Artificial Intelligence Act, and to take that much-talked-about conformity assessment and turn it into the practicalities of compliance and non-compliance, so that designers and developers can build compliance by design throughout their entire process.

A last thing that I will mention, built into all of this – and Sally mentioned this, right – is that you have to have an approach that allows each organization to establish, in a public and transparent way, its code of ethics, its shared moral framework; and built into that process there has to be a standing, empowered ethics committee. One of the biggest problems we have seen is that when we go to solve problems, we turn to engineers. Engineers are fantastic at solving problems: they build the systems, they build the tools, and that creates solutions. What engineers are not great at – and this comes down to our education system – is understanding when they have made ethical choices along the way in those design and development phases.

The audit criteria that ForHumanity develops extract instances of ethical choice out of the design, development and deployment phases and turn them over to trained, empowered ethics officers, sitting on that standing, empowered ethics committee, to make these instances of ethical choice and to provide the transparency and disclosure needed to create a virtuous feedback loop with the public and with the marketplace.

I’ll conclude my remarks there – I could go on for hours, and I hope that doesn’t surprise anyone! I’m more than happy to take questions and obviously to get into a lot more detail on how ForHumanity operates as a secretariat plugging into governments, taking these audit criteria and submitting them under the authority of governments for certification approval.

>> THOMAS SCHNEIDER: Thank you, Ryan, for this very interesting exposé as well.

We have a few minutes left – fewer than we hoped; it is a challenge every time. Let’s go to Nadia, who is following the online discussions.

Nadia, what’s going on online in the remote chat?

>> NADIA TJAHJA: Thank you very much.

As you may have noticed, a red light has gone on: we have a question from the online floor. I would invite Mike Nelson to make his intervention.

>> MIKE NELSON: Thank you very much.

Greetings from Washington D.C.

Just a question about how the AI Act and all of these national AI bills are being perceived. When you read the journalists’ accounts, it sounds like these bills will focus on regulating the software developers and the code they write, to make sure that we only have ethical AI systems. This doesn’t seem right to me. First off, I could do a hell of a lot of damage with data and a spreadsheet.

Second, if I’m a company that wants to avoid regulation, I could just say, well, I don’t use AI, I use "advanced analytics". Nothing in any of these bills really specifies what an AI system is.

The last thing: the whole point of AI systems is to build code that constantly evolves as new data is fed into it. Doing an audit on Tuesday doesn’t help you if the code has evolved by Thursday.

I’m just confused. Is the real purpose of the bills to regulate the code?

>> THOMAS SCHNEIDER: Let’s take a few more questions, otherwise we’ll have no space for others. Please – we have a lady in the room. Thank you, go ahead.

>> Hello. Thank you for the discussion. I’m –

These days I have heard a lot of focus on ethical AI and human-centred AI. However, I have not heard the perspective of defensive and strategic applications of AI, which is also missing from the EU frameworks; there are issues at the UN level with autonomous weapons, and it is well known that the first movers in a technology are typically the ones who set the standards for its applications further on.

My question is: how do you see this developing in the future – developing a framework for AI applications in strategic matters? And maybe a more ignorant question, for which I excuse myself, to Mr. Carrier: do you think risk mitigation can go hand in hand with that kind of application of AI, or can it still be manipulated to work differently? Thank you.

>> THOMAS SCHNEIDER: Please.

>> Thank you. I’m still confused: people talk about coding AI, but AI is trained, not coded. My point is, you cannot understand a trained system – you don’t give it rules, so you cannot understand it. Asking developers to understand the system is not talking about AI. But you can analyze AI. You can make it open source – not open-source code, but open in the sense that you provide the system to be tested by anybody. As you do with open-source code, you can do with open-source AI: you provide the system to be analyzed, so you know what the system is replying to which question. Why is this approach not used if you really want to provide transparency? There is no possibility to provide transparency about rules, but you can provide transparency about the system itself, so that everybody can test it and analyze it; you see what the system is actually doing and, of course, you will also see where the system misbehaves. Why is this approach not being used?
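A minimal sketch of what such "open system" testing could look like: anyone probes the trained model as a black box and logs its behavior for scrutiny, without needing the code or the weights. The stand-in model and the probe inputs here are hypothetical:

```python
from typing import Callable, Iterable, List, Tuple

def probe(model: Callable[[str], str], test_inputs: Iterable[str]) -> List[Tuple[str, str]]:
    """Query a black-box model and record input/output pairs for public scrutiny."""
    report = []
    for x in test_inputs:
        y = model(x)
        report.append((x, y))
        print(f"input={x!r} -> output={y!r}")
    return report

def fake_model(x: str) -> str:
    """Hypothetical stand-in for a published model endpoint."""
    return "approve" if "age 25" in x else "review"

# Anyone could run counterfactual pairs to surface misbehavior, e.g. age bias:
pairs = ["loan request, applicant age 25", "loan request, applicant age 75"]
report = probe(fake_model, pairs)  # differing outputs flag a question to ask
```

This is the intervener's point in miniature: transparency comes not from reading rules that don't exist, but from letting anyone exercise the system and observe where it misbehaves.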

>> THOMAS SCHNEIDER: Thank you very much. That was I think very, very helpful.

Let’s give the floor to Anriette. Thank you.

>> ANRIETTE ESTERHUYSEN: I expect that may be part of an answer to my question. I wanted to ask Sally in particular – Sally, it is fantastic to see you; you will remember me from the IGF.

My question is: how do you deal with the paradox of, on the one hand, it being, as you pointed out, a fairly lost cause to impose global or even regional approaches and agreements, and on the other hand, at the same time, not losing the value of international frameworks, international law and international human rights agreements? How do we navigate, on the one hand, not having approaches imposed, particularly from the Global North on the Global South, which we know often provokes increasingly authoritarian responses and an increase in isolationism? I support your approach, but I also worry about it.

I think: once we abandon what we have achieved in terms of global protections and agreements, what next? A difficult question. Transparency definitely is a solution. Sally, if you could respond to that from a Global South perspective.

>> THOMAS SCHNEIDER: Thank you.

We have 3 minutes left. Each of you has exactly 60 seconds – and not one second more – to reply to all of the questions we have heard.

Ladies first.

Sally, first.

>> GOLESTAN RADWAN: Okay. Great to see you.

Yeah, there is no easy answer to that question. I have given it my best shot by saying that we first need to agree globally on a set of principles: what are the underlying values that no one disagrees with and that we want to govern AI globally – human-centricity, trustworthiness (however we end up defining it), transparency, and so on and so forth. Then how you implement them, region by region and country by country, cannot be decided globally. And as you said, if we try to impose any kind of specific values, that could potentially trigger adverse reactions.

So I think what needs to happen is a cross-cultural dialogue that attempts to re-harmonize efforts after they have been developed regionally.

I don’t have a specific answer, I’m afraid, and I’m not even sure that approach would work; it is the best I could come up with in terms of people actually talking to each other. There is a lot of merit in exchanging ideas, exchanging our starting points and our respective priorities, and trying to understand where the other person is coming from. From that perspective, if not officially at the political and regulatory level, then maybe at the next level down – the level of academia, the level of industry – we could find common ground.

That’s my best attempt.

>> THOMAS SCHNEIDER: Thank you.

Ryan, 60 seconds.

>> RYAN CARRIER: Trying to tackle a couple of questions.

On open source versus intellectual property: with too much open source, companies will not even participate and develop their work.

A proper audit framework actually protects intellectual property: the auditor works on behalf of a company – or rather works with that company – to audit on behalf of the public. Auditors act as a proxy for the public, abiding by a whole set of rules that don’t require complete transparency to the rest of the world, and this helps protect intellectual property and trade secrets and enhances innovation.

Open source is always possible and always welcome. Where it doesn’t exist, the audit framework still allows for governance, oversight and accountability.

When it comes to risk mitigation, we have the opportunity, as we draft these audit requirements, to tackle each risk as we identify it, always balancing what is best for humans against what is practical and implementable. You have that voice inside ForHumanity: if you see a risk or identify a concern, you can raise it so that we build it into the audit criteria. I’ll tackle one or two other questions in the online chat while Gianclaudio Malgieri wraps up for us.

>> THOMAS SCHNEIDER: Thank you. There are quite some good discussions going on, also regarding the audit approach.

Gianclaudio Malgieri, 60 seconds for you. Thank you.

>> GIANCLAUDIO MALGIERI: Very rapid.

To the first question: I think this is not just about regulating code, but business models, implementation, and the social and technical choices behind the code. For example, the input data and the training data: how you make sure they are geographically contextualized, which biases are detected, the gap analyses, and so on.

To the second and third questions together: you cannot fully understand an AI system, but you can understand its implications, its significance, its impact, its technical choices. More than understanding AI, you should justify AI – and, as I put in the chat, we have the father of the "black box society", the author of the black box book, who has written about justification. So one technical choice to address this is that we need participation in the design.

Participatory design may be a tool – I have been reading the book Design Justice, which I suggest to you – for a participatory, bottom-up approach. Thank you very much.

>> THOMAS SCHNEIDER: Thank you, Gianclaudio Malgieri. The points raised by all three of you are extremely relevant. I don’t know whether you know, but I am currently chairing the Council of Europe Committee that Jan Kleijssen referred to, and the points raised in this session are exactly the points we are struggling with the most – and I guess it is the same for the EU. One: what are we actually talking about, what are we trying to regulate, what falls under any kind of regulation and what doesn’t, and how can you cheat by using different names. Two: how to assess – how do we measure the things that we try to regulate. And three – and this is where I hope the Council of Europe, and the Convention that will come out of its work, can make a difference – whereas the EU AI Act will regulate a particular defined market, the EU internal market, the Council of Europe treaty is a value-based document where you sign up to values that you are willing to respect and implement in your national legal system, in a way that allows you to respect some level of cultural and other differences. I can only encourage you, Sally, if you haven’t already, to join the work of the Council of Europe, because we are also interested in having more countries, in addition to the ones that were named by Jan Kleijssen, in our work.

With this, I have to stop. People – and the technicians – need time to breathe before the next session, which actually starts very soon, and maybe to go have a coffee.

Thank you very much, it was exciting. Let’s continue the discussion outside; we need to stop here. We’ll meet again at 11:30.

Thank you very much.

>> NADIA TJAHJA: Thank you to Thomas for moderating the last session.

Coming up next is Focus Area 3, Subtopic 2: The multistakeholder model: from its origins to its future. We hope to see you back here in the room in the next 10 minutes.