The European Union’s Digital Transformation – Regulatory Challenges, Technical Impacts and Emerging Opportunities – TOPIC 03 Sub 02 2023
You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published at the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.
Kindly note that it may take a while until the Org Team is formed and starts working.
To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.
In Europe, particularly the EU, regulatory responses in recent years have attempted to keep abreast of, or even get ahead of, fast-paced technological developments. This session will explore the resilience of these legislative responses, in particular their implementation and enforcement challenges, their paths to regulating digital technologies, and the positive and negative impacts of regulation on technological advancement.
The European Union's (EU) digital services regulatory framework and digital transformation directives (e.g., data protection and cybersecurity) reflect the EU's ongoing efforts to address the challenges, impacts and opportunities presented by emerging technology. This session aims to explore the views of stakeholders on the linkages between EU policy directives and what is needed for the implementation of emerging technologies; balancing regulatory policy and the common good against technical capabilities and requirements for a trustworthy environment.
The format of this session is a panel with three speakers and a moderator.
Please provide name and institution for all people you list here.
- Desara Dushi
- Jörn Erbguth
- Meri Baghdasaryan
The Subject Matter Experts (SME) support the programme planning process throughout the year and work closely with the Secretariat. They give advice on the topics that correspond to their expertise, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and monitor the complete programme to avoid repetition.
- Karen Mulberry
Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Subject Matter Expert (SME) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.
Organising Team (Org Team) List Org Team members here as they sign up.
- Emilia Zalewska
- Karen McCabe
- Constance Weise
- Małgorzata Bojko
- Vittorio Bertola
- Romy Mans
- Giacomo Mazzone
- Stephen Wyber
- Bruna de Castro e Silva
The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.
- Vittorio Bertola, Head, Policy & Innovation, Open-Xchange (confirmed)
- Vittorio Bertola holds a degree cum laude in Electronic Engineering obtained at Politecnico di Torino. He deals with the Internet in all its aspects, including technical, business, social and political matters, as an entrepreneur, writer, activist and engineer. Vittorio works for Open-Xchange, a global leader in free software applications supporting the Internet's infrastructure, where he takes care of research and innovation activities and coordinates the company's policy and community activities. He has been dealing for a decade with Internet policies at the national and international level, and has been busy as a conference speaker, a blogger and a writer for Italian newsletters and magazines.
- Peter Eberl is deputy head of unit, cybersecurity and digital privacy, at the European Commission’s Directorate-General for Communications Networks, Content and Technology. Prior to this he was case manager at the European Commission’s Directorate-General for Competition, dealing with mergers in the sectors of health, transport, post and other services. Before that, he worked in DG Competition’s Antitrust and Merger Case Support Unit, which assists the Deputy Director General and advises on horizontal and coherence issues in antitrust and merger cases. One of his tasks was advising the Chinese State Council and MOFCOM on the implementation of the Chinese merger control regime. Peter Eberl joined the European Commission in 2002 and worked as a case officer in a number of high-profile cases in various sectors, in particular in telecommunications, media, energy and consumer goods. Peter studied law and economics in Bayreuth, Aix-en-Provence (Maîtrise en droit international) and Rennes (D.E.A. en droit communautaire). Before joining the European Commission, he worked in law firms in Belgium, Germany and the United States and for the French telecommunications regulator.
- Stefano Zanero, Department of Electronics, Information and Bioengineering, Polytechnic University of Milan, Italy (confirmed)
- Stefano Zanero received a PhD in Computer Engineering from Politecnico di Milano, where he is currently a full professor with the Dipartimento di Elettronica, Informazione e Bioingegneria. His research focuses on malware analysis, cyberphysical security, and cybersecurity in general. Besides teaching “Computer Security” and “Digital Forensics and Cybercrime” at Politecnico, he has an extensive speaking and training experience in Italy and abroad. He co-authored over 100 scientific papers and books. He is a Senior Member of IEEE and the IEEE Computer Society, which has named him a Distinguished Lecturer and Distinguished Contributor; he is a lifetime senior member of the ACM, which has named him a Distinguished Speaker; and has been named a Fellow of the ISSA (Information System Security Association). Stefano is also a co-founder and chairman of Secure Network, a leading cybersecurity assessment firm, and a co-founder of BankSealer, a startup in the FinTech sector that addresses fraud detection through machine learning techniques.
- Karen Mulberry, Senior Manager, Public Affairs, IEEE Standards Association (IEEE SA) (confirmed)
- Karen Mulberry is Senior Manager of Public Affairs at the IEEE Standards Association (IEEE SA) where she manages strategic public and government affairs programs and engagements within and across the organization. Karen has been involved in shaping policy development, identifying strategic initiatives, and providing technical standards through leadership at the intersection of technology, standards, regulation, and policy throughout her career. She holds a BS in Organizational Behavior from the University of San Francisco and an MBA from the University of Phoenix.
Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.
Reporters will be assigned by the EuroDIG secretariat in cooperation with the Geneva Internet Platform. The Reporter takes notes during the session and formulates 3 (max. 5) bullet points at the end of each session that:
- are summarised on a slide and presented to the audience at the end of each session
- relate to the particular session and to European Internet governance policy
- are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
- are in (rough) consensus with the audience
Current discussion, conference calls, schedules and minutes
See the discussion tab on the upper left side of this page. Please use this page to publish:
- dates for virtual meetings or coordination calls
- short summary of calls or email exchange
Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.
Rapporteur: Bojana Kovač, Geneva Internet Platform
- Defining security is difficult, if not impossible, due to the evolving nature of technology. Current EU regulatory frameworks aim to cover most of the risks posed by existing technologies, including the Cyber Resilience Act, which is in the making to protect the security of digital products.
- Security is not absolute; it is always about risk management and reducing vulnerabilities. While larger companies are already equipped to comply with cybersecurity regulations and certifications, the challenge lies in ensuring security in the open source ecosystem, which relies on numerous projects run by individuals, nonprofits, and universities. Rather than solely relying on legal requirements, providing financial support to smaller open-source projects for making security audits and bug fixes would be more effective. Legal requirements should not disrupt the global and collaborative open source software development model.
- Ensuring comprehensive technological literacy is crucial, as it empowers individuals with a deeper understanding of technology. Due to its continuous evolution, industry professionals and users must remain informed and educated about emerging risks and challenges.
Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com
This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.
>> NADIA TJAHJA: Welcome back. I invite you to take a seat, it is very comfortable. With this, we’ll go back into the main topic, digital platforms, looking at the European Union’s digital transformation, regulatory challenges, technical impacts and emerging opportunities.
Gentlemen in the back, would you like to take a seat? Chris, would you like to take a seat? Thank you very much. I would like to invite the moderator of the session, Karen Mulberry, senior manager of public affairs at the IEEE Standards Association. Please welcome her to the podium.
>> KAREN MULBERRY: I have one panelist here and two remote. Vittorio Bertola, can you join us, please. As we were introduced, this is a panel on the E.U. regulatory directives and what that means for users and businesses and the opportunities that are out there.
We have three panelists today: one will focus on the technical issues, one will focus on the regulatory directives, and one will look at the emerging opportunities and what they mean.
We’ll start with Vittorio Bertola and have the panelists each introduce themselves and make opening remarks and then we’ll go to some questions.
>> VITTORIO BERTOLA: Thank you. I’m Vittorio Bertola, I’m here again, before getting to the topic, I really do want to encourage everybody to join in the preparation of EuroDIG and maybe volunteer to be in panels, we need more people, especially from the people attending in person.
Still, I’m here because I was really involved in the topic of new European regulation. Before getting to some detailed specific regulations and directives, I think we should make the general point that we have seen a rush of new regulation in Europe in the last few years. This was quite a marked change in the policies of the European Union: besides the e-Commerce Directive, and apart from privacy laws, of course, there was not much regulation of the Internet industry and of how we used the net. There is a need to discuss whether we’re going in the right direction, and also whether the new rules that are coming are the right ones, especially those under discussion; today we’ll discuss, for example, the Cyber Resilience Act. At the same time, there is a need to evaluate whether this general change of direction is actually a positive thing or not.
Personally, I think it is a positive thing; it is a necessary response to the changing environment.
People like me, who were part of the Internet industry at the early stages, in the ’90s, believed and still believe that the freedom to innovate, the ability to create new content and services without having to ask for licenses and without coping with much regulation, was key to the success of the Internet.
At the same time, we’re now in a different stage, where the Internet is pervasive, underlying our economies, our lives and our culture, and there has also been a lot of consolidation.
So the traditional decentralization is being put at risk by the centralization of the Internet into the hands of a few companies in very limited parts of the world.
Europe, by the way, is a loser in this game; not completely, but we definitely have less development of Internet industries than the U.S. and China.
At the same time, the Internet did create problems in other fields of the economy, where we have to reclaim a certain degree of control over the ways that our citizens are using the Internet and what they do with it. This is what I think justifies the change in the regulatory approach. Then I think we can discuss later the aspects that are still important: how much regulation do we actually need, and where do we set the right balance between regulation and the free initiative of the industry?
>> KAREN MULBERRY: Thank you. The next panel member, Peter Eberl, and he is joining us online.
Over to you to introduce yourself.
>> PETER EBERL: Good morning. Thank you for inviting me to this nice, interesting panel.
I’m Peter Eberl, deputy head of unit in the digital department of the European Commission. Our unit is dealing with cybersecurity and digital privacy.
Before that, I was working on Telecom, net neutrality and competition aspects.
I would like to give an overview of the existing legislation and the legislation in the making in the digital sphere, and I will focus on the cybersecurity of networks and services and on the protection of user data. You all know, of course, the GDPR, which is about personal data. We also have the ePrivacy Directive, which deals with the confidentiality of communications and the protection of terminal equipment, including no tracking of users on the Internet without consent; it is important for the Internet of Things, and it is currently being looked at in the ePrivacy negotiations. Then, in cybersecurity, we have the new NIS2 Directive, which entered into force in January of this year, revising the NIS Directive, the first law at the European level on the cybersecurity of critical infrastructure. It covers essential and important entities in highly critical sectors like energy, transport, and many digital services as well, like telecoms, and it sets up cybersecurity obligations for providers and notification obligations in case of an incident.
Then we have the Digital Services Act, which sets harmonized rules for a safe, trusted online environment and contains legal obligations for providers. It entered into force in November 2022 and will be applicable from February 2024, and there are steps in the implementation. Then we have the Digital Markets Act, related to what we talked about, the few companies that control the Internet: it is about gatekeepers and fair, competitive markets, and it also contains protections for users, for instance regarding the combination of data across services and online advertising.
Finally, we have the Data Governance Act, which has entered into force and which is about the sharing of data, both personal and non-personal.
What do we have in the pipeline, in the negotiations? We have the ePrivacy Regulation. We have the Cyber Resilience Act, on the cybersecurity of hardware and software, so products with digital elements, and I can elaborate on that later. We have the Data Act, on the sharing of data, access to data, and fairness of access conditions. And finally we have the AI Act, which is now entering into trilogue negotiations and which deals with the use of AI applications.
>> KAREN MULBERRY: Thank you, Peter.
Now to the last panelist, Stefano Zanero, over to you to introduce yourself.
>> STEFANO ZANERO: Thank you very much for having me. I’m a professor of cybersecurity at the Polytechnic University of Milan. Regarding the remark by Vittorio Bertola about having speakers in person, not online, I completely concur, and I’m sorry that I cannot be with you and join the event in person today; I hope to do so next time.
My involvement in the certification of cybersecurity has been long. One of the things I’m involved with currently is the SCCG, the stakeholder group that advises on cybersecurity certification according to the Cybersecurity Act.
The experience has been difficult in different ways, and the reason why trying to certify cybersecurity is difficult is that we don’t know how to do it. Not we as in we here in this room or online today, not we as in the European Union: we as the human race. It is just a subject where we do not have a book.
We do not have a positive book of how to do things.
So we do not have correspondingly a way to certify things.
It’s very hard to try to regulate into existence something when this something cannot be provided.
This has been done with a light touch, and I commend the European approach on this, because cybersecurity certification as it has been envisioned is basically voluntary, set around verticals, provided by the stakeholders of a single sector coming together and writing their own rulebook. On the other hand, it is very hard, and this has, I think, been demonstrated by the fact that there is no significant uptake of cybersecurity certifications in Europe or around the world.
I think what we have already gone through in the past few years is a good case study for what is going to happen with other emerging technologies.
For instance, as has already been mentioned, an emerging technology on the regulatory agenda right now is artificial intelligence. That’s another area where what needs to be regulated is hard to define, and I look forward to the efforts of the European regulators to try to define what needs to be analyzed in artificial intelligence.
>> KAREN MULBERRY: Thank you.
Now to get us started in discussing what’s in front of us: you have heard three different perspectives, from the technical community, from the regulatory perspective, and from the user who must implement the various cybersecurity and E.U. directives that are out there. We have also heard this week that cybersecurity in particular has become very critical and is part of the resilience component that everyone is trying to implement as they face the ongoing threats that are out there today.
I would like to turn to Vittorio Bertola and ask the first question.
In terms of cybersecurity principles, what is the correct line to draw between what needs to be mandated by regulation and what should be left to the industry’s best practices and to each actor’s own responsibility?
>> VITTORIO BERTOLA: I work for a software company, one of the biggest open source software companies in Europe, and we’re part of the open source ecosystem. Open source is peculiar: it is what keeps the Internet working, so even when you access proprietary services, 95% of them is actually open source software with a tiny bit of proprietary stuff on top. Traditionally, both the problem with open source software and its quality come from the fact that it has been developed in a cooperative manner. So sometimes it is hard to enforce cybersecurity practices; it really depends on how you manage your code-making practices and the gathering of contributions from many different developers, who maybe don’t even work for the company, are spread over the world, and just send bits of code that need to be integrated. Of course, it is tempting to just write regulation saying: for the code, you should have a detailed description of what you do. At the same time, this process is quickly changing, and it is hard to codify the detailed practices in regulation.
I think that the right approach is, as you said, to put the principles in the regulation; then, having technical annexes that are very prescriptive about what you should do, that’s a bit much. Maybe we’ll discuss a specific example later with the Cyber Resilience Act.
Just to give an example now: the proposal for the Cyber Resilience Act contains a prescription that you will never ship any bit of code with any known vulnerability of any severity. If you develop code, you know that that’s not really the best thing to do sometimes.
Sometimes you know that there are 500 theoretical vulnerabilities that no one has really explored and that are not severe, and someone has just discovered a major one. So you just want to push the fix for that vulnerability immediately, without having to wait to fix the others.
If the law says no, you cannot do that, then it is actually going against the cybersecurity of the product.
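The triage logic described above, shipping the fix for the one critical issue now and queuing the unexplored low-severity findings for routine cycles, can be sketched roughly as follows. This is a hypothetical illustration only: the `Finding` class, the CVSS-style scores, and the 9.0 "critical" cutoff are assumptions for the sketch, not anything prescribed by the Cyber Resilience Act or used by any particular vendor.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A reported vulnerability with a CVSS-style severity score."""
    cve_id: str
    cvss: float  # 0.0 (informational) .. 10.0 (critical)

def triage(findings, critical_cutoff=9.0):
    """Split findings into (ship_now, backlog).

    ship_now: critical issues whose fix should go out immediately.
    backlog:  everything else, ordered highest severity first, to be
              handled in routine patch cycles rather than blocking release.
    """
    ship_now = [f for f in findings if f.cvss >= critical_cutoff]
    backlog = sorted((f for f in findings if f.cvss < critical_cutoff),
                     key=lambda f: f.cvss, reverse=True)
    return ship_now, backlog

findings = [Finding("CVE-A", 3.1), Finding("CVE-B", 9.8), Finding("CVE-C", 5.4)]
urgent, queued = triage(findings)
print([f.cve_id for f in urgent])   # the critical issue to fix immediately
print([f.cve_id for f in queued])   # the rest, highest severity first
```

The point of the sketch is that severity-based prioritization is itself the security practice: a rule requiring zero known vulnerabilities of any severity before shipping would force the urgent fix to wait behind the backlog.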
There should be best practices for the companies, and punishments for the companies that don’t follow them; yes, there are many companies that don’t care about cybersecurity.
But, for example, in our case we do penetration tests for the products in our environment, so I think there is proof that we care about that. Companies that prove that they care should be left alone in managing what they do.
>> KAREN MULBERRY: Thank you. Peter, Stefano Zanero, would you like to comment on that question?
>> STEFANO ZANERO: If I may add one thing to what Vittorio Bertola just said, and stress it: cybersecurity cannot be defined, even academically, as the absence of vulnerabilities, even of discovered vulnerabilities; that’s not how cybersecurity works at all. Cybersecurity is always about risk management; it is always about addressing vulnerabilities in appropriate ways, as was hinted at, to reduce the amount of risk in the way that the system is used. The exact same software system in different contexts can be perfectly adequate in terms of cybersecurity, or completely inadequate, depending on how it is used. That’s one of the things that makes analyzing cybersecurity in a regulatory context, devoid of the actual application context, so difficult.
>> PETER EBERL: There is no absolute cybersecurity. The Cyber Resilience Act proposal also contains obligations to fix vulnerabilities; it assumes that there will be vulnerabilities, it does not demand perfect cybersecurity. And I would probably not agree with the statement that it is an overly detailed regulation of cybersecurity requirements.
I think that we set the principles in the Cyber Resilience Act; there will be standardization, and there will be details set out in annexes and in implementing and delegated acts.
I think that those are also necessary. It is very good that companies like Stefano Zanero’s and Vittorio Bertola’s do testing and find the vulnerabilities in the products. We also see the difficulties, and we see the benefits and the costs of open source; we’re not against open source. But, of course, open source may also create cybersecurity risks for digital products, and therefore it is important that there are certain obligations, and we’re working on finding the right measures to apply.
I think it cannot be left only to the companies, but, of course, companies bear a strong part of the responsibility, and many products are subject to self-assessment by the companies under the Cyber Resilience Act. I don’t think it is intrusive, or not more intrusive than necessary. I think it is necessary, and we see from the general feedback that the Cyber Resilience Act is something largely desired by the community and by the economy, because I think it has become clear that there is a regulatory gap, which the Cyber Resilience Act wants to close.
>> KAREN MULBERRY: Thank you. We also have an online question.
Let me read that.
How is the European Union ensuring harmonization of regulations across Member States while developing regulatory frameworks that foster a competitive digital market and safeguard public interests? For instance, the emergence of ChatGPT highlights the varying responses of different Member States, such as Italy’s decision to block ChatGPT. Who would like to address that?
Don’t all raise your hands at once –
>> VITTORIO BERTOLA: I’m familiar with Italy’s decision to block ChatGPT, and Stefano Zanero is as well. That was actually, I think, a good decision, because the service was not compliant with the GDPR. I was proud that our data privacy authority had the boldness to actually tell the operator that you have to comply with the GDPR.
Then, of course, it is hard; how to implement compliance in such a model still needs to be understood.
I think it is important that more authorities have the courage to actually get laws respected.
>> KAREN MULBERRY: Anyone else want to comment?
>> PETER EBERL: I would agree with what was said by Vittorio Bertola; it was a good decision in the sense that the authority from Italy acted within its powers under the GDPR. It acted as a data protection authority, and if there are concerns regarding data protection then, of course, the authority has to act.
I think the decision also triggered a certain discussion within Europe, maybe even on a broader level, about the benefits and risks of ChatGPT. So in terms of having a structured debate, it was a good decision.
>> VITTORIO BERTOLA: ChatGPT’s operator decided to shut down the service rather than comply with the GDPR.
“If you do something we don’t like, we just pull out of your country”: I find that an unpleasant way of dealing with European countries, and with sovereign states in general.
>> STEFANO ZANERO: Technically, the two parties can each push their way through: the service provider can, in a way, threaten to pull out a service, in the same way that you threaten to change something during commercial negotiations.
That is exactly the same thing we do as Europeans, right?
We have wielded the power of having a market of 500 million of the wealthiest customers on the planet and told other countries: well, if you want access to this market, you need to adhere to such standards.
That’s the way we have basically imposed the same level of rules that the GDPR imposed in Europe outside of Europe as well. It is a two-way street.
So I do agree, there is a lot to be discussed, not just about ChatGPT but about all of the GPT-based areas and how they’ll work with personal data. My point would be that some of these things can be approached with regulation, with regulatory tools like fines, blocks, and so on.
Some of these things are way too complex for regulation to meaningfully impact them, and we need a lot of discussion.
So there’s space for both types of intervention. In particular, on the AI and machine learning front, I think there is a lot to discuss and a lot that we need to figure out, as I said, in part by cooperating with the companies offering the services and in part by reasonably regulating them.
I think the European approach has proved over time to be a well-balanced mix of these two things.
>> KAREN MULBERRY: Thank you. And to build on that, since the European Union’s digital single market strategy is actually to come up with a uniform approach, I would like to ask Peter: what more do you think needs to be done, in light of AI and ChatGPT, to work these emerging technologies into the E.U. regulatory structure?
>> PETER EBERL: Well, as we have been discussing, the Cyber Resilience Act is in the making; that’s one piece that aims to cover the cybersecurity issues regarding digital products, hardware and software.
We have the AI Act, which has now entered into trilogues, and we have the Parliament text, including even the GPT issue, so those will be interesting discussions.
Then we have the Data Act, on data access and data sharing, which will also empower users to use their data; it aims at fostering the European data economy, to build a counterweight to the American and Chinese, let’s say, data players.
I think once all this legislation has been adopted, including the ePrivacy Regulation, which should of course be adopted within this legislature, it is also a big challenge to implement it. A directive has to be transposed by Member States, and while most of the other acts are regulations, they still require implementation at the Member State level, and then implementation at the corporate level: the companies have to implement them, the European companies and also the non-European companies active in Europe. In that context, and I think it was Stefano Zanero who mentioned it, not everything can be fixed by legislation and regulation.
I think some cooperation is also needed, and in the implementation one has to react to new developments. The legislation tries to anticipate what may happen, but the AI Act may not have been able to anticipate everything that will happen in the next five to ten years, and it is a piece of legislation that could last five to ten years. It needs to be adapted to new developments in its application, and that is for the national authorities to take into account. It is also interpreted by the judge, by the European Court of Justice as a last resort: how to interpret it and how to apply it. If one sees there are gaps, one has to think about revision; but for the time being, once the pieces of legislation have been adopted and entered into force, the big challenge is implementation and application.
>> KAREN MULBERRY: Thank you, Peter.
Now, Stefano Zanero, you talked a little bit about things that might be needed to support the regulation. I know Peter just laid out a whole menu of actions that need to take place, in essence not only now but in the future, but cyber threats, they’re an evolving opportunity.
It is also an evolving challenge. You know, how can you develop the skills needed not only to address what’s going on today, but that evolving challenge for new regulation and new threats tomorrow?
>> STEFANO ZANERO: That’s a topic that’s dear to my heart as a professor and as an educator.
Very often we focus a lot on the current state of technology.
For instance, right now, as the first question coming from the audience has shown, ChatGPT is the all-encompassing technology in the minds of those discussing AI. First of all, ChatGPT is not technically an AI; it is a generator of text which has a very good chance of producing text that’s very interesting, but that’s what it is.
One of the ways to demystify that is to study how it actually works in the background. And in order to educate the class of professionals that can deal with this, and with all of the other challenges that will come in the future, we do not need to train them in what is currently called prompting. There are people promoting this, thinking that the good way to work on these technologies is to figure out ways to construct the questions that you ask in order to generate a meaningful answer. That’s not the way we work.
That’s not the way we need to educate people. That’s one technique that’s probably going to fade away in the next few months and be replaced by something else.
What we need to do is to educate our students and our up-and-coming colleagues on the fundamentals of the technology.
And I have always been very wary of educating people on a very specific, very in-depth aspect of technology.
In my opinion, even at the graduate level, we have to give them a deep background on how the technology works. The more we demystify technology for students, and allow them to see how the black box opens and what ideas are inside, the more we do them a service.
That being said, there is also the challenge of educating today the colleagues who, in 30 years, will be working on technologies that we cannot even begin to conceive.
This means that besides our role in universities for education at the beginning of professional life, we need to create a role for universities and other institutions of learning in continuing education, allowing people to attend university courses even after they have started their professional lives. And since we are Europeans, we take pride in offering education as a free service to the community; that should also become a free opportunity for continuous lifetime learning.
>> KAREN MULBERRY: Thank you. From your perspective as a user and implementer, what do you see as the challenges in preparing for the future? What kind of training and skills do you need in your industry to address not only the threats, but also to work on the regulatory requirements and implement all of this in a uniform fashion?
>> VITTORIO BERTOLA: Well, in technical terms, in terms of having engineers and other people able to produce secure products, we are already there; we have been doing this since the software industry has existed. We do already have education, and there are now universities starting entire degrees in cybersecurity, or several of them already exist, even if you may not entirely agree with the courses. What is new now is the compliance part. We have to be equipped to comply with the certification requirements and with the new regulatory requirements, and again, companies have already been doing that. We as a company, like many others, work with certifications in the industry world and in the standardization world that already address cybersecurity.
This is more of a problem for smaller players. My concern is the open-source ecosystem. It is not about companies like mine, a 300-person company capable of dealing with this; the problem is that the open-source ecosystem relies on thousands of projects, many made by non-profits, universities or individuals, which for one reason or another have become fundamental for everything else.
So from one viewpoint, you cannot say anymore that since something is made by these people it is not secure and not acceptable. We have had problems in the past because projects that were key were run by people who did not have the means to keep them secure.
At the same time, I am not sure that just requiring them by law to be secure is the solution, because if the problem is that you are just two people maintaining a logging library used by the entire world, having the law make you do extra work does not give you the resources to do it. The question is how to fund this cybersecurity work, and that is a task for the industry and for the public institutions.
Bigger companies like ours can sponsor cybersecurity activities for smaller projects, and so on.
Also, for example, Europe could put funds into this; there are interesting European projects around it, such as the next-generation network projects. Especially in the software world, which is very different from hardware, it would be far more effective in terms of cybersecurity to find ways to put funds, maybe even grants of 20,000, into open-source projects to secure them with cybersecurity audits, fixes and so on, rather than just making legal requirements. Let's make the legal requirements we want, but let's make sure that we put money on the table to help the small players, the non-profits and the others.
>> KAREN MULBERRY: Thank you. Coming from an organization made up of engineers who are working very diligently on requirements, products and services to meet not only regulatory needs but also the needs of corporations, people and end users, that is a soft spot in my heart.
Anyway, previously you raised a point about certification being important as part of the regulatory process, and I know that Vittorio Bertola touched upon that as well. Stefano Zanero, I wondered if you wanted to expand upon that? I know part of the Cyber Resilience Act includes a component to certify that you are compliant with the regulation, that you meet all of the requirements, and that the products, the hardware and the software do meet the cybersecurity rules.
>> STEFANO ZANERO: That is actually a significant challenge. On one hand, what the customer expects, rightly so, is that at some point you want to be able to get an – we need to remember that we are not just talking about software that will be used in enterprises or on computers; we are also talking about software that is going to be embedded in devices. When you buy a device nowadays, there will be some software on it that may have cybersecurity vulnerabilities.
Most customers, of course, are not able to understand by themselves the challenges connected to this.
So what we really think of is something similar to the CE mark: when I buy a fridge, I do not have to worry about testing it myself – I am an engineer, I could probably do it – I do not have to test it for electromagnetic compatibility or for whether it is going to do something dangerous; there are standards that assure me a certain level of confidence in this.
So we would, naively, like to have the same thing for cybersecurity. The fact is, unfortunately, cybersecurity does not work like that; it depends on the context of use. So we are not able to provide an exact parallel to what we would naturally expect.
What we can provide, though, is an assurance that the device has been tested against a reasonable attacker, a reasonable level of interest from an attacker, and has been found to be free of obvious, glaring vulnerabilities. This will already raise the bar, which at the moment is lying completely on the ground. We have devices sold on the market that sometimes have egregious vulnerabilities that would be evident to anyone who looked.
So this we can probably achieve with a reasonable standards-based certification, discussed within the industries themselves.
If instead we try to go beyond this with full certification, we face not just the problem that has been described – cybersecurity is a process that does not scale – but also the problem of how to manage all of the SMEs, which throughout Europe are a significant portion of the industrial base; in Italy, for instance, they are 97% of companies, so the problem is even larger than in other areas of the world.
But we would also inadvertently, automatically trigger the problem of how to define "secure" in terms that are usable in a legislative act. I am a big fan of sectoral, self-defined standards that the stakeholders agree to keep up with.
>> KAREN MULBERRY: Thank you. We also have an online question.
What do the panelists think of U.S. efforts to promote use of software bills of materials?
>> VITTORIO BERTOLA: In general, again, if you run a big coding project in the open-source world, you need to do that anyway; you need to know what is in the product. We might decide it is good to certify and formalize this in a document – there are tools to do that nowadays and to track the vulnerabilities – and I think that is in general a good thing.
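A software bill of materials is essentially a machine-readable inventory of the components inside a product, so that known vulnerabilities can be traced to the software that contains them. As an editorial sketch of the idea, here is a minimal document loosely modelled on the CycloneDX JSON format; the component names and versions are invented for illustration:

```python
import json

def make_sbom(components):
    """Build a minimal CycloneDX-style SBOM from (name, version, license) tuples."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                "licenses": [{"license": {"id": license_id}}],
            }
            for name, version, license_id in components
        ],
    }

# Hypothetical dependency list for some product.
sbom = make_sbom([
    ("openssl", "3.0.8", "Apache-2.0"),
    ("log4j-core", "2.20.0", "Apache-2.0"),
])
print(json.dumps(sbom, indent=2))
```

With such an inventory in hand, checking whether a product ships a component with a newly disclosed vulnerability becomes a lookup rather than an archaeology exercise, which is the point of the tooling mentioned above.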
>> KAREN MULBERRY: Other comments from the panelists? We have a question on the floor.
>> Thank you. I am from the Internet Society Finland chapter. It was pointed out in the discussion that ChatGPT is technically not an AI. This may seem like a question of semantics, but if we want to regulate AI, it would be nice to have some kind of useful definition of what we are regulating. My question is: do you think it is even possible to make a meaningful definition of AI without it being either too narrow or too general to be useful? Thank you.
>> KAREN MULBERRY: Okay. Who wants to take that one on?
>> STEFANO ZANERO: I think it was pitched at me.
I will take a first shot at it.
It was exactly the reaction I was hoping for. Thank you very much for the question.
Of course, it is a provocation to say that ChatGPT is not AI; everybody has said the opposite, and they may be right or they may be wrong. We do not have, as we said, a good definition of artificial intelligence because we do not have a definition of intelligence. Our definition of intelligence is intuitive: we think that something is intelligent if it behaves the way an intelligent human would behave.
This is not necessarily a good definition; we are defining intelligence as that which looks like us. It is the way we judge the intelligence of our pets: when we look at our pet and think this dog is intelligent, what we are saying is that this dog looks like it understands me, looks like it is able to do the things I am asking it to do, it is communicating with me. That is our definition of intelligence.
This is a very human-centric definition that is not necessarily useful for defining intelligence in general. Most of the systems that we have right now are based on machine learning: they are able to learn from examples and generalize. Things like ChatGPT are very, very complex machines that learn from patterns and examples. Is that all there is to intelligence? I don't know.
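The "learn patterns from examples, then generate" idea can be illustrated with a toy bigram text model. This is a drastic simplification of what systems like ChatGPT do, with an invented two-sentence training corpus, but the principle is the same: the model learns which words tend to follow which, then produces plausible-looking continuations.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for each word, the words observed to follow it in the training text."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, start, length=5, seed=0):
    """Walk the bigram table from a start word, picking a learned successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break  # no continuation learned for this word
        out.append(rng.choice(successors))
    return " ".join(out)

model = train_bigrams([
    "the dog understands me",
    "the dog performs tricks",
])
print(generate(model, "the"))  # a sentence beginning "the dog ..."
```

The model has no notion of dogs or understanding; it only reproduces statistical patterns from its examples, which is the point being made about mistaking fluent output for intelligence.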
Does it fit our definition of intelligence? Well, of course: a machine that is designed to replicate our language can be called intelligent, because it sounds like us.
Now, is there a way to define what we mean by artificial intelligence? Not really. It is not really necessary.
What we’re going to define are some properties on computer-based systems that we want any computer-based system to respect.
Systems implementing what is commonly, intuitively called artificial intelligence have certain characteristics, and those characteristics we have already started regulating. When people wrote into the GDPR, for instance, the rights related to the processing of information and to understanding the criteria behind automated decisions, they were not thinking of AI. Yet that rule is already a very good rule that applies to AI systems.
So that is probably the answer, even if it is not, I think, the answer that the European regulators would like to hear.
>> KAREN MULBERRY: You want to comment as well?
>> PETER EBERL: With automated processing, I guess people may have had in mind something like artificial intelligence. As was said, it does not matter whether it is the same concept of intelligence that we have for human intelligence, but we do need some concept, some definition that helps us grasp, from the regulator's perspective, what is going on with machine learning and other elements. It does not matter what it is called; in the chat there are different terminological concepts, and I do not know which one is the right one. One needs a common denominator on the basis of which one attempts to regulate. It is a challenge, of course, but we cannot just say that because it is difficult to find a definition of AI, we cannot start thinking about how to regulate it.
>> KAREN MULBERRY: Thank you, Peter. Thank you, everyone, for participating. I especially want to thank the panelists for their insight and comments on regulatory activities and actions, on what we are looking at in terms of future opportunities, and on what we want to look out for and prepare ourselves for as it happens.
Thank you very much.
>> NADIA TJAHJA: Of course, also thank you very much to Karen for moderating this wonderful session.
So now we will take a short break, and then we will come back for the final session of this main topic on digital platforms: subtopic 3, platforms as critical infrastructure for democratic discourse. I look forward to inviting you back here at 12:30. See you then.