Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies – WS 06 2025
13 May 2025 | 14:30 - 15:30 CEST | Room 10
Consolidated programme 2025
Proposals: (#1, #28), (#41), #49, #55
Get involved!
You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published on the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.
Kindly note that it may take a while until the Org Team is formed and starts working.
To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.
As artificial intelligence becomes a staple in business operations, the question is no longer if companies will adopt AI, but how they do it—and at what cost. Experts will reflect on how companies and institutions can align AI strategies with human rights principles, build trust, and embed responsibility into their digital agendas. Together, we’ll unpack the human, ethical, and operational dimensions of this transformation—from efficiency-driven deployment to concerns over transparency, surveillance, and algorithmic bias.
Join us to discuss the challenges and opportunities at the intersection of innovation, responsibility, and the future of work.
Session description
In an era of rapid advancement in artificial intelligence technologies, an increasing number of businesses are considering the integration of AI tools into their daily operations. However, the perception of these solutions—both by management and employees—plays a crucial role in enabling effective and responsible digital transformation.
During the session, the latest data illustrating the current level of AI implementation in companies will be presented, along with the key motivations behind adopting such tools. Participants will reflect on the factors that shape a positive attitude toward AI-driven solutions, as well as explore why a significant portion of employees (around 50%) still express negative or indifferent views toward these technologies. At the heart of the discussion will be issues such as lack of trust, fear of job displacement, and uncertainty about how AI operates in the workplace. The panelists will explore how to develop trustworthy, ethical, and responsible models of artificial intelligence that uphold ethical values and comply with existing regulations. Special attention will be given to designing and implementing AI tools in ways that respect human rights, ensure data security, and promote algorithmic transparency.
The session will offer a space for collective reflection on the role of AI within the business ecosystem and the challenges that arise at the intersection of technology and human rights. It will serve as an opportunity to consider how to create an environment where AI empowers rather than replaces people—and where innovation goes hand in hand with responsibility.
Format
This will be an interactive and participatory session, designed to engage the audience at every stage. The workshop will begin with a short introduction outlining its objectives, followed by a brief data presentation to set the context.
Participants will be actively involved through live polls (via Mentimeter) and an open Q&A segment, encouraging them to share their views, questions, and experiences. Panelists will respond to audience input in real time, creating space for an open exchange of ideas between speakers and participants.
Rather than a series of formal presentations, the session will follow a dialogue-driven format that promotes dynamic discussion and collective reflection. Audience members are encouraged to contribute throughout the workshop, helping shape the direction of the conversation.
Whether you're a policymaker, researcher, business representative, or simply curious about AI in the workplace—your voice will be an important part of this conversation.
Further reading
Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example for an external link: Main page of EuroDIG
People
Please provide name and institution for all people you list here.
Programme Committee member(s)
- Jörn Erbguth, University of Geneva
The Programme Committee supports the programme planning process and works closely with the Secretariat. Members of the committee give advice on the topics, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and oversee the complete programme to avoid repetition among sessions.
Focal Point
- Monika Stachon, NASK – National Research Institute
Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective member of the Programme Committee and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.
Organising Team (Org Team)
List Org Team members here as they sign up.
- Angela Coriz, Connect Europe
- Biljana Nikolic, Department for the Implementation of Human Rights, Justice and Legal Co-operation Standards, Council of Europe
- Marilia Maciel
- Mariam Chaladze, ISET
- Gianluca Diana, Lenovo
- Loredana Tassone, GRCI Law Limited
The Org Team is shaping the session. Org Teams are open, and every interested individual can become a member by subscribing to the mailing list.
Key Participants
The session will feature contributions from academic and industry experts (panelists subject to change).
- Katarzyna Ellis (Partner, Leader of the People Consulting Team, EY Poland) - An expert with over 20 years of experience in HR and technology transformation on the global market. A passionate advocate for placing humans at the center of attention and organizational processes (EY Humans@Center).
At EY, Katarzyna leads the People Consulting Team. She also heads the Digital HR team in Poland and is a member of the EMEIA Center of Excellence, responsible for HR transformation. Her main areas of interest include HR operating models, SSC/BPO, and digital employee support (including process engineering, automation, and AI implementation).
She lived in the United Kingdom for 13 years, where she managed international Digital HR programs, working with FTSE 100 companies.
She later moved to Poland to focus on the development of the SSC/BPO/ITO sector in the Polish market, bringing her extensive experience and international expertise to drive local development processes. Her leadership and innovative approach have been key to shaping HR strategies and technological solutions across various industries.
In her previous roles, Katarzyna managed the global Digital HR offering for BPO teams in Australia, Malaysia, India, South America, the UK, and Poland — developing transformation strategies to implement new ways of working with clients by leveraging technology and focusing on delivering value to people and businesses.
- Domenico Zipoli, PhD (Geneva Human Rights Platform) - Senior Research Fellow and Project Coordinator at the Geneva Human Rights Platform, affiliated with the Geneva Academy of International Humanitarian Law and Human Rights. His research focuses on the connectivity among international human rights mechanisms and national strategies for monitoring, implementing, and following up on international human rights obligations and recommendations. Currently, he leads a research initiative aimed at advancing digital solutions to improve the efficiency of human rights and Sustainable Development Goals (SDG) monitoring and implementation, leveraging digital human rights tracking tools, databases, and artificial intelligence.
- Lyra Jakulevičienė, PhD (UN Working Group on Business and Human Rights) - an international legal scholar who has specialised in international and European Union law, and human rights law in particular, for more than two decades. She is currently a member of the UN Working Group on Business and Human Rights for Central and Eastern Europe. Since 2005, she has pioneered business engagement in human rights issues in the Central and Eastern European region, fostered the development of business networks and guidance, advised on the development of national action plans in this area, and co-initiated social labelling initiatives. She has lectured on human rights and new technologies in various contexts and is currently in charge of preparing a report on business, human rights and AI for the UN Human Rights Council. Ms. Jakulevičienė has served as a member of the Management and Executive Boards of the EU Fundamental Rights Agency and the European Commission against Racism and Intolerance, and as a co-arbitrator at the OSCE Court of Conciliation and Arbitration. From 1997 to 2013, Ms. Jakulevičienė worked in United Nations organisations in Lithuania, Sweden, Turkey and Ukraine. She is a Professor and the Dean of the Law School of Mykolas Romeris University in Lithuania.
- Angela Coriz (Connect Europe) - Angela Coriz is a Public Policy Officer at Connect Europe, where she focuses on regulatory and public policy matters related to artificial intelligence, taxation, international affairs, and naming, addressing, and numbering. She joined the organization in January 2024. Prior to this role, Angela worked as an assistant in the Regulatory Affairs Department of WSBI-ESBG, an association representing European savings and retail banks.
Key Participants (also speakers) are experts willing to provide their knowledge during a session. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder-balanced dialogue that also considers gender and geographical balance. Please provide short CVs of the Key Participants at the Wiki or link to another source.
Moderator
- Biljana Nikolic, Department for the Implementation of Human Rights, Justice and Legal Co-operation Standards, Council of Europe
The moderator is the facilitator of the session at the event and must attend on-site. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide a short CV of the moderator of your session at the Wiki or link to another source.
Remote Moderator
Trained remote moderators will be assigned by the EuroDIG secretariat to each session.
Reporter
The members of the Programme Committee report on the session and formulate messages that are agreed with the audience by consensus.
Through a cooperation with the Geneva Internet Platform, AI-generated session reports and stats will be available after EuroDIG.
Current discussion, conference calls, schedules and minutes
See the discussion tab on the upper left side of this page.
Org Meeting #1 2025-04-03
During the preparatory meeting for the EuroDIG 2025 workshop on artificial intelligence in business, participants discussed the structure, thematic focus, and panel composition of the session. It was agreed that the panel will consist of up to three speakers and be divided into two parts: a moderated segment (presentations or Q&A) and an open discussion with the audience to encourage active engagement.
The group emphasized the importance of stakeholder diversity in selecting panelists, proposing representatives from the private and public sectors, academia, and the judiciary. Suggested discussion topics include ethical and responsible AI, human rights impacts, business perceptions of AI adoption, and the geopolitical dimensions of AI implementation.
Monika Stachon (NASK, Poland) was confirmed as the moderator, with Biljana Nikolic (Council of Europe) as the on-site substitute moderator.
Org Meeting #2 2025-04-09
During the second preparatory meeting for the EuroDIG 2025 workshop on artificial intelligence in business, participants finalized the session description and teaser, and agreed on the list of panelists. The group refined the interactive format of the session, emphasizing audience engagement and a balance between moderated discussion and open dialogue.
Organizational challenges and potential risks—such as last-minute speaker changes or the moderator's unavailability—were discussed. The team outlined contingency plans to ensure smooth execution, including backup moderation support and flexible agenda adjustments.
The next organisational meeting is scheduled to take place on 25 April 2025.
Messages
- are summarised on a slide and presented to the audience at the end of each session
- relate to the session and to European Internet governance policy
- are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
- are in (rough) consensus with the audience
Video record
Will be provided here after the event.
Transcript
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Tigran Karapetyan: Good afternoon everyone, good afternoon to all the people present here in Palais de l’Europe of the Council of Europe and all those who are joining us online. Very nice to see you all and I hope that you had interesting sessions before this one and they will be followed by others afterwards. And I pass the word to the organiser right now to give us the technical details on how this is going to all go. Alice, please.
Moderator: Hello, this is mostly for the online participants. Hello everyone and welcome to workshop six on the perception of AI tools in business operations, building trustworthy and rights-respecting technologies. My name is Alice Marns, I’m a participant in this year’s YouthDIG, the youth segment of EuroDIG, and will be remote moderating this session. We will briefly go over the session rules now. So the first one, please enter with your full name. To ask a question, raise your hand using the Zoom function. You will be unmuted when the floor is given to you. And when speaking, switch on the video, state your name and affiliation. And please do not share links to the Zoom meetings, even with your colleagues. Thank you.
Tigran Karapetyan: Thank you very much, Alice. With these few simple rules, we can now start this session. So warm welcome to everyone once again.
Tigran Karapetyan: And this workshop on perception of AI tools in business operations, where we will speak about building trustworthy and rights-respecting technology. My name is Tigran Karapetyan, I’m head of the Transversal Challenges and Multilateral Projects Division at the Council of Europe. And before we start, I would like to say my words of thanks to the co-organizers and the speakers that we’re going to be hearing later on. Monika Stachon, Cybersecurity Strategic Analysis Expert from NASK, National Research Institute of Poland, who unfortunately could not be with us in person but will be joining us online. Katarzyna Ellis, partner and leader of the People Consulting Team at EY Poland, who is also joining us virtually. And Angela Coriz, who is joining us in person here from Connect Europe, a public policy officer there. Today’s workshop is the result of strong collaboration and great coordination among all the partners involved, so I’d like to thank you once again for that. Furthermore, I’d like to also extend my gratitude to the distinguished panelists that will be speaking today. Professor Lyra Jakulevičienė, member of the UN Working Group on Business and Human Rights, and Domenico Zipoli, Senior Research Fellow and Project Coordinator at the Geneva Human Rights Platform. This workshop has been organized in the framework of the Council of Europe’s pilot project on human rights and environmentally responsible business practices. It’s a project that is run in my division, and the Council of Europe’s initiative reinforces the protection of human rights and environmental sustainability within business operations in line with the existing international frameworks and standards. Through cooperation, the project supports the member states and businesses in aligning with the human rights standards, addresses gaps, and encourages cooperation among governments, businesses and civil society. 
As a result of the collective efforts under the project and collaboration with Monika, Katarzyna and Angela, we are pleased to be here today. This interactive workshop is designed to explore how AI is perceived within companies, the challenges involved in its implementation, including human rights challenges, and the vital role of ensuring compliance with human rights. Hopefully, we can also have a word on whether AI can help companies in fact comply with human rights, along with increasing productivity. So, without further ado, I would like to now invite Katarzyna Ellis from EY to present a recently published report on how Polish companies implement AI. Katarzyna, please be mindful of the time. We’ve got one hour for the entire session, so you’ve got your chance now. Please, go ahead.
Katarzyna Ellis: Fabulous. Thank you, Jörn, and thank you for such a warm welcome, really. It’s such a pleasure to be here. If you don’t mind, I will share a presentation to give you some insights from our research. I’ll share now. Three, two, one. Do we all see? I’ll go into the presentation mode. Can everybody see the presentation? I can see now that you can. Yes. What I want to share with you today is that at EY we have been doing a global report on what the future of work will look like for the next 5, 10, 15 and 20 years. We call it EY Work Reimagined, and that survey has been done across the globe: 15,000 employees, over 1,500 different entities globally that we have spoken to. The results are quite exciting, and that’s why I will share with you today not only the Polish insights, which we have done this year, but also what we see globally, which might be a better representation of what you see in your respective countries. Firstly, we have been asking the question about work technology and generative AI for the past two years, and we see a massive, significant impact of the last 12 months on the ways that we work using genAI. At the moment, from what we see, the use of genAI for work is around 75%. That’s what the employees are reporting in our global survey. Just to give you an answer from last year, it was 35%. When you see what potential for growth genAI technology has across different businesses, it is quite incredible. 90% of the organizations that we have spoken to use genAI technology already or have plans to use it very shortly. And 60% use it in government and the public sector. We thought that was going to be a bit lower, but it isn’t. So surprisingly, the public sector is not far behind. When we look at the adoption of GenAI, employee and employer sentiments are still net positive. 
We’re seeing that employees see that GenAI tools will help employee productivity, help the ways of working in specific sectors, and also enhance the ability to focus on high-value work, while giving away the low-value administrative, repetitive tasks to be managed through either automation or with the help of GenAI. Furthermore, what we see, pairing it up, is that GenAI technology and the investments that businesses are making in GenAI go hand in hand with the need to enable upskilling and reskilling across organizations. So what we see is that there is a massive gap between what organizations have and what organizations should have in order to be able to utilize GenAI effectively, but also ethically. So what we see is that employees and employers are more aligned on the need to learn the skills, but they are less aligned on whether they have the opportunities at work to learn those skills. And then what we also see is that what we at EY call talent health, so the way that the organization is able to deliver business value through high-potential talent, is directly connected with the amount of skills and the usage of GenAI across organizations. So as I said, at the moment, 75% of employees are reporting that they’re using GenAI for work. So not only at work, but for work. But that is also connected with different parts of business operations. Because I look at the people function, and I believe that people are the greatest asset of every organization, and without people, organizations cannot perform and bring business value. I looked quite deeply into how the HR or people functions are utilizing those Gen-AI tools, and I think that will be very deeply connected with the conversations that we’ll be having here today, so with ethics and human rights. So how do we use that within the people function, specifically within recruitment, talent acquisition, performance, and employee engagement? 
And what we see is that quite a small number of HR departments, less than a quarter, are effectively using Gen-AI tools currently. But more and more HR or people function leaders are thinking of deploying them. And this is the question: how are we going to do it in order to impact the organizations in the most positive manner? And I believe we do have a Menti here. So Monika, I think you might have to help me here. I will share the Menti. So what I wanted to ask you is: how do you think the employees feel about AI at work? And then I’ll show you what we found out, surprisingly, within the Polish market. Do you think employees have a positive or negative attitude towards the AI tools at work? Please scan the QR code, yes, or you can join by using the code in the upper corner. I think we’ll give you a minute or so. Curious, yeah. Cautious, yeah. Most will like it because it makes their work easier. Positive, mixed feelings, depends on the functions. For HR purposes perhaps not positive, for automation of tasks positive. Very good. So, I think that, let me go back to the presentation then. So, you are very, very much spot on with what we’ve discovered. So, in Poland, we wanted to deepen the global research and just focus on the organisations within our geography. And what we have discovered is, let me just move this a little bit here, that just a little bit less than 50% have a positive attitude towards using Gen AI at work. But 40% were really negative. And they were negative because they really worried that, firstly, Gen AI will take away their jobs, or change their jobs beyond what they actually are. Also, what was very interesting is that 4% of all the people that we’ve asked said that they are already tired of AI discussions, because they hear about it everywhere, continuously and constantly, and it doesn’t really bring much value to them. 
But yet, 36% believe that the use of AI is inevitable, and they’re trying to acquire as much knowledge and skills in this area as they can. So why is this a very important message? Because more than 50% of employees of organizations, on average, are scared of using AI or have a negative association when it comes to AI
at work. And as we know, AI is here to stay. It’s not going to go away. It’s there already. So it’s not the question of how or if, it’s the question of when every organization will use it effectively. So what do we do with that over 50% of the workforce that actually does not believe in or does not have positive feelings about using those tools? It’s a big problem. So what I talk about is education, education, education. And there is one additional point: employees and employers have a very different understanding of the actual utilization of the Gen AI tools. So employers do overestimate how effectively and how much the employees are using the Gen AI tools that they invested in. So that’s another part, education, again, education, education. So what do we need to do? How can we actually, within the private and public sector, when we work with the organizations, empower people to bring us the return on investment that we’ve all committed to while purchasing the AI solutions? Firstly, it’s building trust. So companies should really clearly define the allowed areas for experimenting with AI. And we need to ensure that employees see the value of increased productivity, but that it will not lead to layoffs of the staff. So we need to create a sense of psychological safety, that is very crucial. Rewarding innovation. So the organizations should really focus on bringing innovation to the forefront of every work that we do, and allow that innovation and also reward it. So significant rewards should be introduced for revealing and sharing the ways to utilize AI. So for example, I’ve worked with multiple clients where we do AI hackathons, or we do prompt competitions, who writes the best prompt within each department. And then there are some prizes at the end of it. So using gamification to ensure that that reward for innovation is there. Driving by example. 
So it’s very often that we, at the management level, say you should be using it, but we don’t stand up and do it ourselves. So the management should be driving, leading by example. Creating spaces for knowledge sharing, communities of practice, or hackathons as well, just to ensure that people know where to go when they want to get better. Ensuring access to tools and training, and that’s very important. So driving the upskilling and reskilling agenda across the organizations. Mentoring programs and individual support for the individuals. So creating a network of internal mentors, or inviting external experts who can talk to the employees and answer their questions. Again, building that trust in the organization, building that psychological safety. And also, last but not least, the regular evaluation and feedback loops. We need to know what’s going right and what’s going wrong, and we need to address what’s going wrong very quickly and reward what’s going right. So these are the elements that we really should focus on when driving the Gen-AI agenda across the organizations. And just as a last maybe food for thought for you: what we’ve discovered within the Polish study is that recently, within the last year, I think 18 months, 32% of the companies that we’ve spoken to established new teams dedicated to AI. However, only 16% of those companies have invested in hiring new employees with the necessary skills to implement the AI processes. So where do they get people to fill those vacancies that they created by creating the teams? They take them from the organization. So the companies are very likely redirecting the existing employees to the new AI teams. And we do see that there is a significant shortage of AI specialists in the job market. That’s why the organizations might have difficulty in finding and attracting qualified experts. So they’re looking from within. So how to minimize that gap? Again, it’s education, education, education. 
So re-skilling, up-skilling. That’s very important, and changing the cultural aspects of the organization. And in addition, very recently, I read one of the Polish employment studies. And there, for every 1,100 people in Poland that go into retirement, only 435 people enter the workplace. So if you think about it in that manner, what a huge gap of talent we actually have, and how are we going to address that? So that’s a question for you, and some food for thought. Hopefully, I have given you an overview, a very brief one. And now I’ll pass the stage to the next speaker. Thank you.
Tigran Karapetyan: Thank you very much, Katarzyna. This was very, very interesting. Thank you for the report. And I think what we’ll do is we’ll open the floor for one minute, one, two minutes for quick reactions. Anyone in the audience, maybe from the panelists, or online, would like to react? Any of the panelists? You would like to? Yeah, please go ahead.
Lyra Jakulevičienė: We can start with you, please. Thank you very much. Thank you for this opportunity to participate here. And there are quite a number of UN people here. But the reason why I’m here is also to find synergies in our common work on common issues. Now, on the report, a very interesting report, I only had the possibility to look briefly at it, but a few observations. Firstly, what has been mentioned about the lack of expertise within the business sector and the gap in knowledge concerning the technological aspects. Now, I’m coming from business and human rights topics, so I can only echo that, not only on the technological aspects, but also on the human rights aspects. So it’s another burden, let’s say, another challenge for businesses to address these aspects. And as here we are also going to speak about business and human rights while using AI, I think this is very relevant. So just to echo what was said. Secondly, the report emphasizes the growing importance of regulatory compliance. And just also to illustrate that: at the moment, we have identified around 1,000 various standards that exist everywhere in the world that deal with AI technologies and their relationship with human rights. And of course, needless to say, but it’s extremely important, because these are the first two initiatives that have materialized as mandatory standards: the Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law, and of course the EU AI Act that was already figuring in the discussions today. So I think there will be more of this pressure that we see, that it’s going more towards mandatory regulations, so businesses will have to actually embed this, besides the business case and the drive for sustainability. 
What I was a bit surprised by, but it’s probably due to the methodology that was used, is that in a way it is sometimes mentioned that AI in the workplace has to be used responsibly, but then I only found reference to foundations of sustainable growth of companies through system security and some other aspects. So no real mention of compliance with human rights and human rights due diligence, which is the topic for today, and I hope that we can dwell a little bit more on that. And the last point, which is important, and that goes back a little bit to capacities and the gap in knowledge, is the interdisciplinary approach that will have to be applied in this field, because clearly the businesses will not become the tech people, unless these are tech companies, and also will not become the human rights specialists. So clearly there will be a need for interdisciplinary teams, and I would really like to echo on this what was also mentioned in the report. So very briefly on the report. Thank you, thank you very much.
Tigran Karapetyan: Please, Mr. Zipoli.
Domenico Zipoli: Thank you, thank you very much. And as this is the first time that I’m taking the floor, just if… if you can allow me to briefly introduce our work: I represent the Geneva Human Rights Platform of the Geneva Academy, where we lead a global initiative on digital human rights tracking tools and databases. These are essentially digital systems that help governments, UN and regional human rights bodies, civil society, national human rights institutions and equality bodies track how human rights recommendations and decisions, as well as the SDGs, are implemented. Increasingly, AI is being used to manage complexity, clustering data, detecting gaps, generating alerts, and against this backdrop the EY report is highly relevant. I was in fact surprised that only 90% of companies report readiness to scale AI and that most have a formal governance framework in place, and this is of course encouraging. In our field, in the public sector, we’ve learned that readiness is not just technical, it’s institutional in fact, and success, I’d say, depends on governance, transparency and ethical safeguards, so of course the highlight of trustworthiness is key. I think the report also showed 60% of companies experienced efficiency gains, if I’m not mistaken, yes, so we’ve seen similar trends in the public sector, where AI-supported digital tracking tools can now analyze hundreds of recommendations in seconds, a task that beforehand took weeks of course, but again with these gains comes responsibility, and without fairness and inclusivity in design, AI risks amplifying the very inequalities that we’re trying to fix. So in a sense, whether in civic tech or corporate systems, human oversight and bias audits must be built in from the start. I think if we want AI in business to be rights-respecting, we don’t need to start from scratch. 
The public sector indeed has a blueprint, a little bit following up on what was just said, with tested frameworks that could be adapted for business use. But I’ll talk a little bit more about the use of AI in public sector digital tools in a bit. Thank you.
Tigran Karapetyan: Thank you very much, Mr. Zipoli. Please, Dr. Erbguth.
Jörn Erbguth: Thank you for the presentation. I have a little question. Now that the AI Act is in force, does it answer the questions that have been raised? For example, the AI Act requires education of people using AI. This is mandatory and already in force. Do we see that the EU AI Act already has consequences in Poland? And how does this play into this research? Does it go in the right direction? Do we see that it will support this process, or do we see things missing or going in the wrong direction?
Katarzyna Ellis: I think we’ll have the answers to all those deeply valid questions with probably the next iteration of the report, because this is only just the beginning of what we’re seeing. We are already seeing that the education and skills gap is a massive issue, not only in Poland but across the globe, in enabling the AI Act to be enforced properly. So most likely we will repeat this survey within the next six to twelve months, and then we’ll see the impact compared to what we see at the moment.
Tigran Karapetyan: Thank you. Thank you very much. I think, given the time constraints, we now have to move on. Thank you very much, Katarzyna, for this wonderful presentation; it was very interesting. And thank you to Mr. Zipoli, Ms. Jakulevičienė and Mr. Erbguth for their interventions as well. As we continue, we move on to the existing frameworks and international standards, already mentioned a few times by some of the speakers, that play a crucial role in guiding the responsible use of AI in business operations. In this context, I’d like to draw particular attention to the Council of Europe’s 2016 Recommendation on Human Rights and Business, which reinforces the UN Guiding Principles on Business and Human Rights. Together, these instruments offer a solid foundation for ensuring that AI solutions used in business operations are developed and implemented in alignment with human rights standards. That’s on top of the AI-specific regulations that were mentioned. Many of you also heard yesterday from colleagues about the Framework Convention on Artificial Intelligence and HUDERIA, the guidance on the risk and impact assessment of AI systems on human rights, democracy and the rule of law. Today’s workshop will build on those discussions, hopefully, and offer a complementary perspective, with a particular emphasis on how these frameworks intersect with business practices. With that, I’m pleased to pass the word back to our next speaker, Professor Lyra Jakulevičienė, to share her insights on the link between international standards, such as the UN Guiding Principles, and the implementation of AI tools.
Lyra Jakulevičienė: Thank you. Indeed, I should have done this at the beginning, but it’s never too late. With your permission, Chair, I just want to say a few words on what the UN Working Group on Business and Human Rights does, not just as a formality, but because there are ways in which you could use the work of the Working Group and create synergies with the Council of Europe’s work. First of all, we are independent experts in the Working Group, so we work on a voluntary basis, and we are mandated by the Human Rights Council. The mandate covers several functions. The most important is that we are mandated to disseminate and support the implementation of the UN Guiding Principles on Business and Human Rights, which is at the moment the only global standard in this area. Secondly, we also prepare thematic reports for the UN General Assembly and the Human Rights Council. And just to announce, because it is relevant to the topic we are discussing today, in June we will be presenting a report on AI, business and human rights, and the use of artificial intelligence by states and by businesses in procurement and deployment. So we are not really looking into the technological aspects or the development of AI, but rather into something that has been less explored: procurement and deployment. This report will be out soon, and it might also contribute to the discussion we are having today. Then we also have the communication procedure. We are not a judicial or quasi-judicial body, but we have the opportunity, and we are mandated, to examine complaints against companies, sometimes states, and we engage in dialogue.
So we don’t issue, let’s say, mandatory decisions, but this is also quite a quick route if you compare it with judicial bodies. We examine around 100 complaints every year, because our capacities are small, but it is always possible to address us through the communication procedure, which is also confidential, so there is nothing to fear there. We also hold country visits, which allow us to discuss issues of business and human rights both with businesses and with states. So, briefly, that is what the Working Group does, and I really hope that we can engage with some of you. Now, going back to today’s topic, how businesses use AI: we have heard a lot about the report, and the main conclusion is, of course, that companies are increasingly taking up AI in various ways, and it would be difficult to enumerate all the possible uses. But just to mention, when we talk to businesses, an interesting conclusion is sometimes reached: even companies themselves say, well, we don’t even know which AI tools we’re using within the company, what tools our employees are using. So this demonstrates that, firstly, we have to start from there, from knowledge: what exactly do we use? Do we use generative AI? Do we use narrow AI? Do we use other systems? And of course, with openly available AI systems, you may have uses of AI within the company that you, as management, may not be aware of. That’s why it is so important to have certain policies and to establish certain rules within companies for the use of AI. Now, with regard to the various areas where AI is being used, here I have just put the example of the workplace on the slide. But of course, the use of AI in the workplace is not only about human resources, workforce management and the management of people.
It’s also about people in the workplace using, for example, big data: people who need to collect data, to process it and to work with it. Then marketing and customer relations: a lot of AI-driven personalization, targeted advertising, pricing algorithms, a lot of possibilities. Regulatory compliance: AI is being used for human rights due diligence, not always positively, but it is used quite a lot, in particular in value chain assessments. Then in decision making: algorithmic or automated decision making is not really anything new. If you talk with businesses from healthcare, the finance sector, insurance or retail operations, they have used automated business decisions for quite some time. But what is more complicated now with AI is that it introduces new levels and scales of complexity. That’s why we have to not only talk about it, but also educate ourselves, as was also mentioned by our colleague from EY. Now, what is also essential, as we increasingly use these systems in both the private and the public sector, is that they are used in a transparent, explainable and understandable way, and that stakeholders are involved, both before deployment, in discussing and maybe even auditing some of the systems, in order to prevent discriminatory or other harmful uses. So it is extremely important that there are policies, there are practices, and there are also people behind them, in companies but also in state institutions, for the use of those systems. Now, indeed, a lot could be said about the benefits and risks of using AI. I have just tried to exemplify this on the slide in front of you. And I want to emphasize, and this is what I tried to show on the slide, that certain uses of AI have two sides; they cut both ways.
So, for instance, we see that the use of AI has played a really important role in monitoring, for example, air quality, fatigue levels in the workplace and certain workplace risks, in particular in sectors such as mining, where it is extremely important to ensure that people are not tired, because that could create health and safety risks for the workers themselves. So this is, let’s say, the benefit side. But then, on the other side, we see that AI is being used to monitor productivity, to calculate how people work in the workplace, how doctors or lawyers, or even judges, process cases, how many and how quickly, and so on, to have all kinds of indicators. On the one hand, that is meant to boost productivity. But on the other hand, it creates a lot of stress. I think something was said about mental health at the beginning of the panel. So it can also work negatively, because it creates a lot of pressure and a lot of stress, and this sometimes pushes workers to ignore certain safety standards. So it can go both ways. And I think it is important to emphasize that we always have to look at both benefits and risks. In reality, when we speak about the use of AI, it is quite frequently the benefits that are emphasized in order to promote the use of AI in businesses, but also in the public sector. Now, if we go back to the standards: indeed, the UN Guiding Principles work on three pillars. The first pillar is what states have to do to make sure that companies do not engage in violations, to prevent violations, and, if they do happen, to provide opportunities for remedy. So there are obligations for states, and indeed people sometimes say that the UN Guiding Principles are soft law. But if we look at the first pillar, where the obligations for states are embodied, it relies on mandatory international obligations.
So, all the UN treaties that I don’t need to present here; it is not that much soft law, so to say. Then we have the pillar for businesses, and there, of course, the Guiding Principles place a lot of emphasis on human rights due diligence, which should help businesses to identify, prevent, mitigate and address potential negative impacts on human rights. And then we have the pillar on effective remedy, and here is what is particular with AI, and why it is so important to be transparent and to disclose that you use AI, whether in the workplace or elsewhere: if something happens, you cannot have a remedy without disclosure, without transparency. Because as a person, as a worker or even as a partner, you sometimes cannot know that AI has been used. And if you don’t know, how can you apply for a remedy to an oversight institution, a court or any kind of commission, and so on? That’s why the remedy issue is very important. Now, I am aware of the time, but let me just summarize some of the steps that could be useful to bear in mind for companies, but, I think, equally for public institutions, because we have seen a number of challenges for certain governments around Europe as well, as a result of which those governments also started to do human rights due diligence. So, several steps. Firstly, start with knowledge mapping: what exactly are you using in the company or the state institution? Then work on the identification of impacts by doing a human rights impact assessment. Now, the impact assessment does not mean that you will have to address everything, as in the anti-money-laundering field, where if you identify certain risks, you must address them, because you cannot just leave them for the future. Here, with human rights due diligence, we emphasize that it is important to prioritize, because not everything can be done at the same time.
So, of course, the recommendation is to look at, let’s say, the crucial risks for businesses with regard to the severity of the impacts: something has to be addressed immediately, and something can be addressed later. The AI Act, for example, also takes a risk-based approach, distinguishing high risk from lower risk, and depending on that, there are different obligations. Then, of course, once risks are identified and prioritized, it is important to address those impacts, be it with preventive or corrective measures. And here, what is extremely important is to talk to stakeholders, because talking to stakeholders may also help you understand the severity or importance of certain risks that the use of AI involves. And when we talk about stakeholders, it is not only trade unions and workers, as we are speaking about the workplace, but also broader stakeholders, such as civil society organizations. Disclosure and ensuring transparency are extremely important. So if AI is being used, it has to be disclosed, be it to employees or to other stakeholders. Collaboration among businesses is extremely helpful, in particular for SMEs, small and medium-sized businesses, because they have even less capacity to address these issues, and they increasingly use AI because it helps them with productivity and other aspects. So if there is collaboration between businesses, in particular in the value chain, certain issues can be tackled much more easily. Sometimes state support is also needed, in particular for SMEs. But this collaboration can help to address those challenges more effectively and in a more optimal way.
And it is extremely important to ensure effective and timely communication, because in this process a lot is unknown, both in using AI itself and in knowing how AI impacts different stakeholders. Communicating is extremely important because it can build trust, strengthen relationships and also dispel certain myths that we have seen as part of the process. And this is where I stop: I have many more things to say, but I am aware of the time and I don’t want to be rude to my colleagues.
Tigran Karapetyan: Thank you very, very much. This is very interesting, and I think this session is far too short to discuss all the things that need discussing, so let’s take it only as an inspiration for further reading and exploration. On this note, I would also like to mention the Council of Europe materials as sources of standards, and your point that this is a soft standard, but not really, because it is based on hard standards, was very apt, with the positive obligations of the state being a very specific one. This is also where the European Court of Human Rights case law comes in, and where monitoring reports by the various Council of Europe monitoring bodies can become helpful for businesses doing their due diligence. Given the time, I’m now moving on to the next panelist. I’d like to invite back Mr. Domenico Zipoli to discuss how digital human rights tracking tools can support not only public institutions but also businesses in conducting their human rights due diligence. So let’s speak about how to use AI for good.
Domenico Zipoli: Thank you, thank you very much, Chair. As I said, I come from Geneva, a city, like Strasbourg after all, that has long championed the idea of human rights by design. Today, as AI becomes embedded in business operations and government workflows alike, that principle is more urgent than ever. Our contribution to this discussion builds on our work on digital tracking tools, and, as I mentioned earlier, these platforms have transformed how states monitor their human rights obligations. The point I’d like to make today is that, increasingly, their architecture and logic may be relevant to business actors as well, especially those navigating ESG risks, regulatory pressure and impact investment frameworks. Essentially, over the last decade we have seen a rise of human rights software, which we divide into different categories. There are digital human rights tracking tools, such as SIMORE Plus, present in the Latin American region (SIMORE stands for Sistema de Monitoreo de Recomendaciones); IMPACT, open-source software that is more present in the Pacific region; and indeed the Office of the High Commissioner for Human Rights National Recommendations Tracking Database. We then have information management systems, of which you might know the OHCHR Universal Human Rights Index, but also the Council of Europe’s very own European Court of Human Rights Knowledge Sharing platform. All these systems help governments track progress on human rights recommendations, be they from treaty bodies, special procedures or regional courts. And what these platforms create is a holistic and organized way to understand what is happening on the ground. What Biljana, thank you so much, is now sharing on the screen is a directory that we host on our website. We don’t have a fancy QR code as you do, but we’re definitely taking that idea back home. If you want, you can check the directory yourself.
Just type “digital human rights tracking tools directory” into your search engine. Within this directory you can see a selection of what are now more than 20 of these digital tracking tools and databases, with a description of their functions and users, and a link to the tools themselves. I think we can describe the value of these tools according to what we call an ABC model: alerts, benchmarking and coordination. And I will keep businesses in mind as I go through this framework. When it comes to alerts, AI-powered platforms can act as early warning systems. They automatically scan social media, news and reports for red flags, such as spikes in hate speech or disinformation ahead of elections. In the business world, this same logic could be applied to supply chain grievance monitoring, for instance, or reputational risk detection, allowing companies to intervene before a situation escalates. B stands for benchmarking. AI-powered databases allow clear benchmarking of human rights performance. Take the European Court of Human Rights Knowledge Sharing platform; it will be interesting to discuss with your colleagues how you intend to leverage AI for its use. But what does the Knowledge Sharing platform do? It essentially organizes and visualizes case law by thematic area and legal principle, helping national authorities understand how rights are being implemented. And for businesses operating in multiple jurisdictions, such a resource can be invaluable. Oftentimes, these resources are only used by us, you know, in the human rights space, in the public sector space. But they would allow legal teams and compliance officers in businesses to benchmark corporate policy and conduct against emerging human rights standards, for instance in areas like data privacy, freedom of expression and anti-discrimination, based not on assumptions or surveys, but on actual jurisprudence.
And this kind of structured legal insight can meaningfully support ESG alignment, risk mitigation and innovation. And finally, coordination. I won’t go into much detail about this, but it is something I am particularly fond of. Digital tracking tools like SIMORE or OHCHR’s National Recommendations Tracking Database bring different ministries, courts and civil society into one shared workflow, where everyone sees the same data and tracks the same shared progress. So let’s talk about IMPACT OSS. You can go and take a look at the software itself. As it is an open-source system, it allows diverse actors, including the public, to follow implementation efforts. SADATA, for instance, is the tool that the Ministry of Foreign Affairs of Samoa is using; today, all of us can see how Samoa is faring when it comes to UN human rights recommendations. So indeed, for businesses, this is what I believe could be the next frontier, something that one could co-create. Just quickly on how these public sector tools relate to businesses: indirectly, but also increasingly with more direct use, they establish shared benchmarks, clarify state commitments and illuminate risks. For companies that wish to align with international norms, the data and logic embedded in the tools you see in the directory offer a ready-made structure that I think is worth considering for due diligence, ESG monitoring and the like. And the last point I want to make here is that there can be a compelling investment case, because many digital human rights tracking tools are open-source public goods. They benefit states, as I said, and international organizations. But by supporting these platforms through private funding, businesses can help build infrastructure that they themselves would benefit from. And this aligns with the principles of impact investment.
So this is a space that we, at the Geneva Human Rights Platform, would like to create. We hold expert roundtables every year, where we invite representatives from different sectors around the table to discuss the emergence of these tools and how they can be supported. You mentioned AI for Good: in July, the AI for Good Global Summit will take place in Geneva, and there will be a dedicated workshop on AI for human rights monitoring. So I am, of course, inviting you all to attend if you are in Geneva; it is on the 8th of July. And yes, indeed, there has to be more engagement between the private sector and the public sector, specifically when it relates to human rights monitoring, a space that will hopefully receive more attention in the future.
Tigran Karapetyan: Thank you. Thank you very much. This is very interesting. And I just realized that I inadvertently plugged the next summit that is going to be held, but please feel encouraged to take part, absolutely. I think it is also interesting, and worth looking into in another session somewhere, that once data on AI, sorry, on human rights performance is tracked, it can actually be assigned a certain value. So that is another area that needs exploration and might become an investment case or a business case. Tracking of human rights data, I think, is extremely important, and it means that such data can eventually even become, de facto, a commodity. So now we move to our next speaker, Angela Coriz from Connect Europe, who will share positive case examples from the telecom sector. Please, Angela.
Angela Coriz: Thank you. I will try to be quick. I work at Connect Europe, a trade association that represents the leading providers of telecoms in Europe. Today, what I wanted to do was quickly show a snapshot of the business-side, and specifically telecom-side, uses of AI that are already happening. As we saw in the presentations of the reports, it is not a question of whether this will happen; it is already happening. So it is more a question of how it will happen. In the telecom sector, too, our members are still exploring some of the potential benefits and solutions that AI can offer, while also looking at the drawbacks and risks. And in the meantime, we are operating within the AI Act. Since our members are European, we are entering the implementation phase of the AI Act, and there is a lot to be considered there, and a lot of remaining questions from businesses that will need to be answered along the way. So, just to share a few examples of how AI is being used in telecoms now. One thing that is very specific to the sector is the use of AI within the network. It can be very helpful, for example, in optimizing network investment choices, from finding the best location to place a network antenna, to improving network capacity planning and optimizing traffic flow through the networks. There are also many benefits in predictive maintenance: helping technicians fix issues within the network, summarizing trouble tickets, and basically speeding up that process. There are also benefits within the radio access network, or RAN. This is an area where many operators are already using AI solutions: 25% of operators have already deployed some functionality here, and over 80% have AI activity of some kind, be it in commercial trials, testing or the R&D phase. So this has already started on quite a big scale.
Then, if we look at greener connectivity networks, as we have said before, AI offers benefits and risks; it is really a two-sided coin. While AI has raised challenges in terms of energy consumption, specifically with data centers, it can also be beneficial for reducing emissions in telecom networks. One of our members, Orange, uses AI systems to monitor the energy consumption of their routers, and that has resulted in a 12% decrease in consumption. Two of our members, Telenor and Ericsson, have also collaborated on a project that saves energy in the RAN, resulting in a 4% reduction in energy usage for radio units. So it is about balancing these potential rewards with the challenges that AI can bring to sustainability. There are also functions for customer engagement and customer service, and internal solutions that help employees navigate their own companies and find the right people internally. That is also a potential benefit. But, as we said, plenty of challenges come along with these. One of them, in Europe, is regulatory uncertainty. We will need clarification along the way, some of which is coming in the form of guidelines from the Commission. But in order to follow the Commission’s push to embrace AI and become an AI continent, and to develop these solutions, clear definitions of how to classify high-risk scenarios, for example, are important. Also, for telecoms, classifying AI-based safety components is really essential. There are also cybersecurity risks. AI can increase these threats, and there can be modified content; for example, we see impersonation fraud within the telecom sector, and not only there, but also on platforms. On the other hand, there are also solutions being built with AI.
So again, there is a kind of counterbalance: for example, our member Telefonica has a solution called TU-Verify, which can detect content generated or modified by AI. So you have these counterbalances as well. The other thing to be considered is that AI will likely cause a spike in data traffic. If data traffic on telecom networks spikes, this raises a big investment need on the telecom side. And it is also a bit difficult to say exactly how much more data will be used, because it depends on what kind of AI will be used. So this is an area that should also be kept in mind. And, as already mentioned in the report, we see skills development as very important, especially for employees, as several people have already said. Training both citizens and people working in companies is really crucial. We have some members who have taken steps in this direction with training programmes, both internal and external, but this is certainly a crucial area, and, as we have seen from the statistics in the report, there is still a lot of work to be done there. So I hope I have not gone too fast; I was trying to make that really quick. Just to conclude: this is an ongoing, developing area, and there is a lot of potential to be found. But of course, we need to operate within a framework that keeps ethical principles in mind and is rights-respecting. And in order to do that, we need to continue having these discussions with lawmakers, with the public sector and with the private sector. So yes, there is a lot still to be done, of course. Thank you.
Tigran Karapetyan: Thank you very much, Ms. Coriz, that was very interesting. Indeed, as you said, there are still lots of questions as things develop, and some of them are not going to be very easy to answer. Having said that, I think we are over our time already, but there is room for one or two questions, and we still need five minutes to summarize the session. Thank you. My name is Pablo Arrenovo, from Telefonica. My question to the panel concerns the implementation of AI by companies of different sectors or by public administrations.
Audience: My question would be whether you feel that human rights and ethics are taken into account when thinking about implementing AI systems in the private or public sector. Thank you.

Tigran Karapetyan: Would any of the panelists like to respond? Please go ahead.
Domenico Zipoli: Thank you very much. It is always fascinating to be in a room with stakeholders from both companies and the public sector. The short answer is, as I think everyone here and this morning has said, that there is still a lot of trust to be built around AI. We need to get AI governance right. Whenever we talk about AI design, I personally always talk about four main challenges that we have to bear in mind. One is the key fundamental one, which is bias. AI is only as representative as the data that we have; if the data is unrepresentative, it can reinforce discrimination rather than solve it. Then there is transparency, of course. Stakeholders must understand how AI tools reach their conclusions. Our colleague from EY was, you know, referring to education, education, education. The International Telecommunication Union has this beautiful initiative that we are part of, called the AI Skills Coalition. Coalitions such as this, which educate not just the public but also us stakeholders, are, I think, crucial. And explainability is no longer optional; we need to be part of this discussion. Then privacy, of course, and oversight. And this is the last thing I would like to say, but I keep repeating it: human rights-based AI demands a human-in-the-loop scenario, or governance. Whether it is a state actor deploying a rights-tracking system, the ones that we study, or a business automating compliance reviews, accountability cannot be outsourced to an algorithm. So it is a work in progress, but I think that with this discussion between companies, regulatory bodies, equality bodies and academia, this conundrum can be solved. I don’t think that we have the solution yet, but we are getting there.
Tigran Karapetyan: Thank you very much, Mr. Zipoli. If you could turn off your mic, please. Just two short points now.
Lyra Jakulevičienė: Firstly, when we talk about the impact of the use of AI on human rights, both positive and adverse, I think there is another myth: that it is only about privacy rights or something like that. So I just want to underline, to finish, that actually all human rights are involved. There could be an impact on all human rights, including, if we talk about the environment, the new right to a healthy and sustainable environment. In this respect, I think there is not a big difference between when the state and state authorities are using AI and when companies are, because lots of rights could be involved. The second point I would like to mention is just to highlight that here we got good practices from the telecommunications sector. Indeed, the International Telecommunication Union is developing a lot of standards for technological companies, but not only for them, and it increasingly realizes and acknowledges the need for human rights due diligence. But the report I mentioned, the one coming from the UN Working Group on Business and Human Rights in June, actually looks into different sectors. We try to look at the state as regulator, the state as deployer and the state as procurer of AI, but we also try to look at different sectors, or different functions, of businesses to see if there are differences and specifics. And of course, there are some specifics. So I just want to say that there will be many good practices, as we managed to identify emerging practices in this report, from which you can also benefit and be inspired as to how businesses or state authorities could undertake this road of human rights due diligence with regard to the use of AI.
Tigran Karapetyan: Thank you very much. We’ll all be looking forward to that report. And I’m passing the word to Dr. Erkut for a short summary of the session. Please.
Moderator: Thank you. Thank you for those presentations. They were quite diverse, on different topics, and I have tried to summarize them. Please correct me if I am missing anything very important. We will have the platform online, so if you want to improve the wording, this is not the place to do it; otherwise we might take half an hour. I understood from the first presentation that within one year the use of AI in business has risen from 35% to 75%. Employees have mixed feelings about it, with a slight majority being positive and a strong minority being negative. I think this is the basis we have been starting from. We see there is a lack of upskilling, governance, and ethical policies in place. The use of AI needs to be made transparent, monitored, and evaluated. Impact assessments are needed when there is a possible risk to human rights. It is too early to see an impact of the EU AI Act, and even more so of the CoE Framework Convention on AI, and legal certainty, as we have learned, of course has to settle in as well. This needs to be evaluated in the future. The UN Guiding Principles on Business and Human Rights, which, if I am correct, date from 2011, provide important guidance as well, so we do not have to start from scratch regarding regulation. There are human rights tracking tools that can track the adherence to and implementation of human rights by nations and corporations. I think that is it. My personal view, and this is of course not in the text, is that if you start to use AI-based human rights tracking tools to track people and whether they are using hate speech, then you are doing exactly AI not for good, so I would have some concerns about that; but when it comes to nations and corporations, I think this is a good point. So, if you agree with these messages, we will forward them to be finalized. Thank you very much.
Tigran Karapetyan: As you said in the very beginning, the panelists will have a chance to actually introduce changes later on. Given the time constraints, I think we can do that, and then the final version will be possible. If you see that there is strong disagreement with a point, please voice it now; if it is a small wording issue, we can handle it later. Okay. With this, feeling really pressed for time now, I am going to pass the word back to Alice. But before doing that, I would like to thank the co-organizers and the panelists, as well as my own colleague, Biljana Nikolic here, who has actually worked hard to organize the session, as well as Dr. Erkut for giving us a great summary. With this, and to all those who were present and listened to us here and online, thank you all very much for your interest, your questions, and your participation. Alice, the floor is back to you, please.
Alice: Thank you. The next session, Workshop 8, How AI Impacts Society and Security: Opportunities and Vulnerabilities, will start at 4:30 p.m., and we look forward to seeing you back then. Thank you.