Resilience of IoT Ecosystems: Preparing for the Future – Pre 12 2025

by IGF DC-IoT & IS3C

12 May 2025 | 13:00 - 14:15 CEST | Room 7 | Transcript
Consolidated programme 2025

Session teaser

How can we ensure today’s IoT systems remain secure and resilient in a rapidly evolving digital landscape shaped by AI and quantum computing? Join us to explore practical actions and future-ready strategies for trustworthy IoT.

Session description

IoT ecosystems form the backbone of many critical services—from energy grids to healthcare to smart cities. As these systems grow in complexity and interconnectedness, their resilience and trustworthiness are under increasing pressure. This session addresses how to secure and future-proof IoT in light of rapidly evolving technologies and global policy developments.

Building on the 2025 update of the IGF DC-IoT Global Good Practices, recent joint discussions with both Dynamic Coalitions at IGF 2025, and the current research undertaken by IS3C, the session will explore concrete actions and policy needs for strengthening IoT ecosystems. Key focus areas include:

  1. Current priorities:
    • Implementing zero trust architecture, robust encryption (e.g. TLS 1.3, DNSSEC, RPKI), and secure-by-design principles.
    • Leveraging AI for real-time monitoring, anomaly detection, and predictive maintenance—while ensuring ethical use and transparency.
    • Addressing the expanding role of data governance, balancing privacy with functionality, especially as AI aggregates personal data from diverse IoT sources.
    • Encouraging interoperability and secure lifecycle management through adherence to global standards.
  2. Emerging priorities:
    • Preparing for quantum computing’s impact on encryption by supporting adoption of post-quantum cryptography now—even before commercial quantum systems arrive.
    • Scaling AI safety and accountability frameworks to manage risks posed by general-purpose AI models in critical IoT environments.
    • Strengthening resilience by embedding redundancy, OTA updates, and DDoS mitigation, and ensuring disaster response readiness.
  3. Global alignment:
    • Responding to the rapid development of IoT device labeling and certification schemes, and encouraging mutual recognition frameworks between national and regional efforts.
    • Monitoring international standardization activities to support secure and ethical IoT deployment at scale.

This session will convene experts from technical, policy, industry, and civil society domains to chart a path from today’s concrete capabilities to tomorrow’s anticipated threats. Together, we’ll explore what’s needed to ensure IoT remains secure, resilient, and fit for purpose—today and in a quantum- and AI-driven future.

Format

Roundtable discussion. Invited speakers will make short five-minute statements per key focus area, after which all participants (online and in the room) will be invited to contribute to the discussion.

Further reading

People

Moderator

  • Maarten Botterman, Chair IGF DC IoT; GNKS Consult BV Director; ICANN Board Director, GFCE Triple-I coordinator

Online moderator

  • Wout de Natris-van der Borght, coordinator IGF DC Internet Standards, Security and Safety Coalition (IS3C) / De Natris Consult

Agenda

  1. Opening by moderators representing DC IS3C and DC IoT (10 minutes)
  2. Current priorities: (20 minutes)
    • João Moreno Rodrigues Falcão, Cybersecurity Researcher at Falcão Moreno Cybersecurity Solutions
    • Matthias Hudobnik, ICANN SSAC Member
  3. Emerging priorities: (20 minutes)
    • Elif Kiesow, Director of Quantum AI; Stanford Law School Research Fellow; MIT Research Affiliate; IS3C Chair Emerging Tech WG (tbc)
    • Chris Buckridge, GFCE Senior Strategic Advisor; IGF MAG Member
  4. Global alignment: (15 minutes)
    • Jonathan Cave, Alan Turing Fellow, GNKS Consult Associate
  5. Conclusions by moderators (10 minutes)

Transcript

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

The Geneva Internet Platform will provide transcript, session report and additional details shortly after the session.


Maarten Botterman: which is co-organized by the Dynamic Coalition for IoT and the Dynamic Coalition Internet Standards, Security and Safety Coalition, the IS3C, which Wout is coordinating. The focus is really on resilience of IoT ecosystems and preparing for the future. It's important to understand that IoT is not one thing; it is used in many different applications, ranging from industrial applications to domotics, from natural environment monitoring systems to tracking devices to, for instance, dynamic traffic management systems and whole city infrastructures, how traffic flows are going. These have been around for a while, but they're increasingly around, so they come to be part of the fabric, not only in the world that started adopting those technologies early, but also in other places all around the world. In that, it becomes more and more critical. So IoT devices enable systems that help address specific societal issues. If you think of the Sustainable Development Goals, we can see that IoT has an influence on many of them and plays an important role: whether it is on zero hunger, eradicating hunger, where we can see that IoT systems such as drones and irrigation systems and analysis will help to increase the return on crops; to clean water, where the monitoring is clear and systems can be deployed to act when necessary; to the city systems we talked about before. Climate action is clearly something where IoT and the whole network of IoT devices plays an important role in ensuring that you actually know what's going on and can take immediate action or learn from it for the future. So that requires sharing global knowledge about solutions and local action to make things happen, which means that it also cannot all be developed in one part of the world and then deployed everywhere else. The deployment requires the local presence and awareness as well. As for the development, it's important to understand that not every society is structured like the society where the system is built. So hence it is very good to do this on a global level, here with the IGF as a platform. How can these technologies, these systems that are built all over the world, that are made to be used all over the world, be used everywhere in a responsible way? So the key messages that we've seen develop over time, and that's where our two coalitions come together as well, are that new technologies bring us ways to respond to these challenges that never existed before, and also come with new challenges. So who has access to those technologies, the digital divide, who can afford it, how can we make sure it's usable where it's needed? And on the other hand, the carbon footprint: these new technologies can help reduce the carbon footprint, but they also come with their own. So how can we minimize that? The recognition that technologies are not good or bad in themselves, and that's not only for IoT but broader; it's how we use them. And transparency of how these systems work: are you observed by a camera? Where are the data going from that camera? And do you have… user agency, the ability to influence these systems, for instance, your iPhone, the tracking of location devices. You can turn it off, but then you need to think of that. And last but not least, the more we depend on it, the more secure it needs to be, today, but also towards the future. And this is where we're going to go deeper in with the panel.
From the DC IoT perspective, we've been working on developing an insight into what global good practice looks like. And where we are today is that the Internet of Things good practice aims at developing IoT systems, products and services that take ethical considerations into account from the outset, both in the development, deployment and use phases, so also when you get rid of them, and at finding an ethical, sustainable way ahead using IoT that helps to create a free, secure and enabling rights-based environment: the future we want, a future that serves the people. All the discussions had these elements in common. And while technology progresses and the intensity of use of IoT systems and services increases, this still stands. Now, the focus of IS3C; maybe you can allude to that, Wout.

Wout de Natris-van der Borght: Yes, thank you, Maarten. My name is Wout de Natris-van der Borght and I'm the coordinator of the Internet Standards, Security and Safety Coalition. Mark Carvell, sitting over there, is our senior policy advisor. I'd like to start with a very short analogy. You're going to buy a car, and the salesman is at the top of a mountain. You get into the car, you drive away, and all of a sudden, while you're driving comfortably down, faster and faster, you look in your rear-view mirror and see the salesman running down the mountain after you with the brakes in his hand. IS3C was set up to advocate the deployment of existing security-related internet standards, and after five years I can still say that the interest in the topic, just look into this room, is scantily low, even though you would never buy a car or a plane or a train without these sorts of measures in place, measures that we don't have on the internet. So that is where we start. Our presentation today, the input that is going to be given by Joao from Brazil online, will focus on the opportunity we have today to do things right in quantum computing, because how that is going to affect our security is going to be beyond measure, and we have been commissioned to do research into the societal implications of quantum computing if it is not secure when it comes on the market, but also if the existing tools that we have are not made secure before that date. So that is where IS3C comes from. We have done several pieces of research, on education and skills; we have published a tool with the most important internet standards that are out there and the arguments that technicians can use to convince their bosses not only to deploy, but also to procure, secure by design. We have done a report on procurement: do governments procure IoT or ICT secure by design? And the answer mostly is no. So in other words, we don't even buy secure by design. So let me stop there with my three minutes. I think that is what IS3C is trying to achieve, and my presentation will have made that a little bit clearer on the practical side. So I'll hand back to you, Maarten. Thank you very much, and I look forward to the rest of the session.

Maarten Botterman: Thank you for explaining what drives IS3C. And as you can see, it's very complementary. So with that, I'm very happy that together we have gathered a number of very insightful speakers to kick off a discussion. But the invitation is very much to you, online and in the room, to come in with your questions, your suggestions, your remarks. And we'll do so in basically three blocks. We'll first ask Joao and Matthias to introduce the current priorities. So what does good practice look like today? How do we ensure the security and stability of IoT systems whose impact, as we see, is growing? That will be the first block of discussion. In the second block we'll go deeper: we know where we are today, but if you look to the future, we'll see more quantum computing eventually, we'll see more AI, we'll see more deployment of IoT devices. What does that mean? Chris Buckridge has been working on that and will introduce it. And again, as Wout already said, IS3C is also working particularly on quantum computing and its impact. And last but not least, how do we make sure that this all comes together, and that technologies that are developed everywhere can be used everywhere in a good way? Jonathan Cave will go into how we can do this with standards going global, how we use appropriate labels and certifications, and how we make sure that the different frameworks that emerge come together. So with this little task, we have about an hour to go, and we'll try to divide it into about three blocks of 20 minutes. Matthias, if you can kick off on the current priorities, then I'll ask Joao after this to kick off on the emerging priorities, Joao to respond to Matthias, and Chris to respond to Joao, and then we'll take it from there. Yeah. So, Matthias, please, ensuring the security and stability of IoT systems. How do we do that? Where are we today?

Matthias Hudobnik: Thanks, Maarten. Hello, everyone. Yeah, it's a pleasure to be here at the European Internet Governance Forum 2025. My name is Matthias Hudobnik and I am excited to contribute to this panel. I speak today in my personal capacity as a lawyer and engineer focusing on AI and data protection, and also as a member of ICANN's Security and Stability Advisory Committee; I am not necessarily reflecting the opinions and advice of the ICANN Security and Stability Advisory Committee. For information, the SSAC advises the ICANN community and the ICANN Board on matters relating to the security and integrity of the internet's naming and address allocation systems. As IoT devices connect our hospitals, infrastructures and homes, their security depends on the strong foundations of the internet itself. And this includes core principles like decentralization, redundancy, end-to-end design, and especially a secure domain name system. My short intervention will focus on four points: firstly, internet security principles and the domain name system; secondly, IoT security and lifecycle management; thirdly, AI governance in IoT; and fourthly, a short future outlook and potential threats. So firstly, internet security principles and the domain name system. The internet's resilience is built in layers, as we know, and at its core lies the domain name system, DNS, the system that converts domain names to IP addresses. Protecting the DNS is critical, because if it fails, IoT services from, let's say, smart lights to critical medical devices fail as well. And this is reinforced through the Domain Name System Security Extensions, DNSSEC, which provide integrity by digitally signing DNS data, adding cryptographic signatures to ensure data authenticity. The second point is Resource Public Key Infrastructure, RPKI, which verifies which autonomous systems can announce specific IP prefixes, thereby also preventing Border Gateway Protocol (BGP) hijacking. And the third point I want to mention in this first slot is DNS-based Authentication of Named Entities, DANE, which enhances Transport Layer Security (TLS) authentication by binding certificates to domain names. These measures illustrate how resilience is layered into the internet's core. Secondly, IoT security and lifecycle management. At the device level this means secure boot and hardware roots of trust, and also requiring, for example, software bills of materials. At the network level we should have encryption and strict network segmentation, which are crucial, and at the data layer robust encryption, minimal data collection and privacy safeguards are essential. Again, lifecycle management is key. Consider smart energy meters that, for example, once deployed seldom receive firmware updates, which leaves them vulnerable for years; implementing secure lifecycle practices such as, as I mentioned before, software bills of materials and also over-the-air update mechanisms is indispensable. Then thirdly, AI governance in IoT. Here, as you have already heard, AI is both an enabler and a risk in IoT. It optimizes operations in areas like traffic control and energy management, but issues such as, for example, data poisoning, model drift and opacity can undermine trust in systems. Here, regulatory frameworks such as the EU AI Act and data protection laws like the GDPR require that AI systems are ethical, trustworthy, auditable and transparent, but also reliable, secure and accountable.
And effective governance means building in human oversight and robust explainability at every stage. Then I'm coming to my fourth point, future outlook and potential threats. Looking ahead, major challenges loom in relation to quantum computing. We know that current encryption protocols risk becoming obsolete when quantum computers become available. Harvest now, decrypt later is a real risk, so starting the shift to post-quantum cryptography today is critical; it is already underway as per NIST's standardization efforts. Another point is supply chain and certification gaps: many IoT devices lack secure update mechanisms throughout the lifecycle. There is regulatory fragmentation, with initiatives like, for example, the EU Cyber Resilience Act and NIST's IoT standards, but there are no global mutual recognition frameworks similar to those in DNS governance, and these must be promoted. And my last point is capacity and awareness gaps. Beyond technology there is a human element: cybersecurity education and capacity building are essential. To conclude, resilience in IoT isn't built solely by adding features. It is engineered from the ground up, starting with a secure domain name system, enforcing lifecycle-aware security and ensuring that AI-driven decisions are transparent and auditable. By aligning our systems with core internet principles and international regulatory frameworks, we create a robust, adaptive and also trustworthy digital future. I will stop here to stick to my time and I'm looking forward to the discussion. Thank you.
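As a concrete illustration of the DNSSEC validation Matthias describes above, the following sketch (an editorial addition, not part of the session) checks whether a resolver reports a DNS answer as validated. It assumes the third-party dnspython package and a DNSSEC-validating resolver; the resolver address and domain names are placeholders.

```python
# Illustrative sketch: query a validating resolver and check the AD
# (Authenticated Data) flag to see whether the answer was DNSSEC-validated.
# Requires the dnspython package; resolver address and domains are examples.
import dns.flags
import dns.resolver


def dnssec_validated(name: str, rdtype: str = "A") -> bool:
    """Return True if the resolver reports the answer as DNSSEC-validated."""
    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["8.8.8.8"]        # assumed to be a validating resolver
    resolver.use_edns(0, dns.flags.DO, 1232)  # set the DNSSEC OK (DO) bit
    answer = resolver.resolve(name, rdtype)
    # The AD flag means the resolver validated the DNSSEC signature chain.
    return bool(answer.response.flags & dns.flags.AD)


if __name__ == "__main__":
    for domain in ("example.com", "example.org"):
        print(f"{domain} validated: {dnssec_validated(domain)}")
```

A device or gateway that checks this flag (or validates locally) only trusts DNS answers whose signature chain verifies, which is the property Matthias points to when he says IoT security starts with a secure DNS.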

Maarten Botterman: Thank you for that, Matthias. Very insightful, and also a bit beyond what's here today, with a look through to tomorrow. So in a way that makes sense, and changing things on the spot, I would say: Joao, please come in and go from your earlier research on this towards your insights. At any point in time I'll open the floor for questions and remarks, and after Joao we'll just move to Chris; we'll change the flow but keep the same subject. Is that okay for you? Good. So with that, Joao, as you were back to back on this, I think you will be able to deal with this too. So Matthias, thanks. And please, raise your hand for questions; I see one question in the room, two questions in the room. Let's take those first.

Audience: Yeah, the back. Is it working? Yeah. Yes, thank you. Frederic Taas speaking. I have different functions. I speak here on my own but I’m notably a cyber security advisor and my question is about what is specific to IoT compared to OT and IT because lifecycle management, etc, governance, those are all common cyber security things. What is really specific to IoT? Thank you.

Maartin Botterman: Thank you. Let’s keep that question and we’ll make sure it gets answered either through the presentations or later on in further discussion. Very good question. Please, sir.

Audience: Thank you so much. My name is Alexander Savchuk, Institute of International Relations, Ukraine, and I would like to ask you: in 2018 the GDPR, the General Data Protection Regulation, made a revolution in the European Union as a directly applicable act for all European Union countries. It is, for me, the most powerful act not only in Europe but all over the world, considering the amount of population to which it applies and its scope. The GDPR also has extraterritorial application. But time is going on, and we now have the Artificial Intelligence Act, we have the development of the Internet of Things, so how, in your mind, should the GDPR be developed, maybe with some amendments or changes, in line with the development of information technologies?

Maarten Botterman: Thank you very much for that question. Indeed, guidance for that is not explicitly given yet, and it's under development, but I'm sure we'll come back to this discussion as well, between Joao and also, later on, Jonathan, who has a clear view on that. So how does that develop further? So I've got two questions from the room. One is: what is different with IoT compared to IT? And the other one is: how will data privacy develop with the rise of new technologies, basically more IoT devices? And I think the combination with AI also transforms the way IoT devices deal with that. So with that, Joao, the floor is yours.

Joao Moreno Rodrigues Falcao: Okay, thank you. So should I answer these questions now, or can I go with the results of the research? Whatever you feel is most appropriate, but I know that between us we'll go a long way on this. Okay. Good, then. So, well, I'm speaking here on behalf of IS3C and I'm here to present the work we've been doing this year in collaboration with AFNIC, the French registry. So, well, in IoT we have around 75 billion devices, and many of them ship with default credentials and weak firmware. So we have a huge challenge to overcome. And what we see is that attacks already disrupt healthcare, transport and the DNS, as Matthias also noted. And, well, we have quantum computing that will soon break RSA and elliptic curve cryptography, so we need to execute an urgent crypto overhaul. The first part of our research focuses on the current challenges that we have in IoT. We did a literature review, we evaluated policy frameworks from the past few years, and we assessed the readiness for post-quantum cryptography. And, well, what is the threat landscape snapshot that we have? We are talking about resource-constrained devices using fragmented protocols. For several IoT systems we have a cloud dependence: even though the device is in your home, you need to communicate with a cloud service to send a message to your device. And we also have low user awareness and a patch inertia that widens the attack surface for these devices. To understand this picture better, we did a case study focusing on the Jeep Cherokee hack that, in 2015, forced a recall of 1.4 million vehicles. We also had the St. Jude cardiac implants: the pacemakers had a very serious flaw that could discharge the device, and the FDA made a recall of half a million devices because of this. And we also have the notorious Mirai botnet, which carried out a huge attack against Dyn, the DNS service, and involved 600,000 devices. So we see a clear picture of the difficulties that we have. And when we talked about the Mirai botnet, we saw that we have had a botnet evolution over the past ten years focusing on IoT devices. The Mirai source code leaked in a forum, and we saw more than 30 active variants targeting different brands, different sets of devices. And this is frightening, because we know that we now have a couple of devices in our homes, and the city relies a lot on these devices to function; it is really the fabric of our society that depends on these devices working correctly. I also brought up a specific botnet called Raptor Train that was discovered in 2024 and is linked to active espionage. In the past we saw them using the devices to cause disruptions on the internet, because that is the simplest way to use a huge number of devices for malicious activities. And then they started thinking: wait, we have access to thousands of households and companies and countries, governments; why don't we use this as an initial step to do greater damage to these groups? And we are starting to see this now. The Department of Justice in the U.S. cleaned 200,000 devices that were infected with Raptor Train. And when we focus on the devices themselves, we also looked at the supply chain and cloud risks of these devices, because, as I said, we have a single point of failure when we have thousands of devices badly configured, connecting to a badly configured cloud system.
So we had the Verkada breach, which exposed 100,000 cameras across hospitals, schools and households. We also had an SDK, which means a library used by several brands, also targeted by Raptor Train, with a very serious vulnerability that left 100 million cameras exposed. So we see a challenging scale issue: these devices have millions of users, and usually all of those users would need to actively connect to the device, protect it and configure it to fix a vulnerability. And this is very serious, because when we talked about the security of computers, we had a single computer to take care of, and we also connect to it daily. But when we speak about IoT: when did you last connect to your washing machine to make sure its firmware is patched and configured correctly? We don't. And this creates loads of vulnerabilities that can be used for these kinds of hacks. And now, when we go to the bigger picture and think about the policy implications of it, we have a policy landscape that is being enriched. We have ISO, IEC and also ETSI creating baselines for the security of IoT devices, the EU Cyber Resilience Act, the US NIST IoT guidance. We have APAC labeling systems such as the Singaporean one and the Korean one. We have the UK PQC roadmap for 2035. All this work focuses on trying to protect these devices, but these devices have a global reach, so it would be very important to converge this effort into common work. And, well, what are the social implications of the vulnerabilities that I'm speaking of? They erode public trust and disrupt essential systems that we have around us. We have privacy breaches via cameras, via wearables; this creates a huge surveillance risk that is starting to be exploited in the world. And also, when we talk about the need for innovative solutions, we need to think about the digital divide, because we have a huge set of devices, some vulnerable, some not, and how can we guarantee that we protect the whole ecosystem? And, well, I think that's it for my first contribution.
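To make Joao's point about default credentials and patch inertia concrete, here is a minimal, hypothetical audit sketch (an editorial addition, not part of the session): it flags devices in an example inventory that still use factory-default credentials or have gone too long without firmware updates. The credential list, device fields and thresholds are illustrative assumptions, not a complete methodology.

```python
# Illustrative sketch: flag devices in a (hypothetical) inventory that still
# use factory-default credentials, the weakness Mirai-style botnets exploit,
# or whose firmware has not been updated for a long time.
from dataclasses import dataclass

DEFAULT_CREDENTIALS = {            # small example set, not exhaustive
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("user", "user"),
}


@dataclass
class Device:
    name: str
    username: str
    password: str
    firmware_age_days: int


def audit(devices: list[Device], max_firmware_age_days: int = 365) -> list[str]:
    """Return human-readable findings for risky devices."""
    findings = []
    for d in devices:
        if (d.username, d.password) in DEFAULT_CREDENTIALS:
            findings.append(f"{d.name}: factory-default credentials still in use")
        if d.firmware_age_days > max_firmware_age_days:
            findings.append(f"{d.name}: firmware not updated for {d.firmware_age_days} days")
    return findings


if __name__ == "__main__":
    inventory = [
        Device("ip-camera-01", "admin", "admin", firmware_age_days=900),
        Device("smart-meter-07", "ops", "S3cure!passphrase", firmware_age_days=40),
    ]
    for finding in audit(inventory):
        print(finding)
```

Even a simple check like this, run against a real asset inventory, surfaces exactly the two gaps Joao highlights: credentials that were never changed and devices that nobody reconnects to after deployment.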

Maarten Botterman: Thank you very much, Joao; the passion sparks off the screen. Really appreciate it. I think we'll get into the questions that have been asked now. Chris, if you want to come in on this from the governance perspective, you may be able to place it in a slightly bigger context. As soon as I kill my microphone, yours will work.

Chris Buckridge: I'm a technical expert. No, thank you, Maarten, and thank you, Joao and Matthias, for the input so far. I'm speaking today just on my own behalf; I've got a few relevant hats in this, one of which is certainly as a MAG member with the Internet Governance Forum, and I think these kinds of issues, and particularly the governance aspects of them, are coming through very strongly in the global internet governance discussion. And to Wout's point, I actually don't think there is a lack of interest in this topic; I think there is perhaps some problematic fragmentation in the discussion around it. When we look at cybersecurity and the concerns around it, we see very active governance discussions in the UN, in something like the Open-Ended Working Group, and we see very active private sector discussions; I just have to look at my LinkedIn feed to see how much is coming through on that. I don't know that it necessarily makes its way into these internet governance discussions, particularly the multistakeholder discussion, and that's a problem and something that we need to work on. The other hat that is relevant here today is the work I'm doing with the Global Forum on Cyber Expertise. This is a body that was launched ten years ago, I think in the coming days they have their ten-year anniversary, with the goal of strengthening cyber capacity building around the world, but particularly with some focus on the Global South, and part of the work I've been doing with them has been to foster some new discussions on emerging technologies and what they mean for cybersecurity, what they mean for cyber capacity building needs and opportunities. Obviously the definition of emerging technologies there is a very broad one: AI very much at the forefront, quantum similarly something there is a lot of concern about, although it's obviously at a different phase in its adoption and impact than AI, but then also looking more broadly at things like LEO satellite networks and what they mean for security, or blockchain applications and what they mean for security. So it's quite a broad field. When we look at IoT, for one thing I think that's often an area that's a bit overlooked in the AI discussions, and I think that's certainly problematic as well. To the question about how IoT is different from IT generally, I think Joao probably had a couple of good responses to that: the sheer attack surface that's available, and some of the manufacturing processes that lead to those devices. These may be distinctions on a spectrum rather than categorical distinctions, but they are important, because they affect the overall visibility of the IoT. Too often these devices and these applications are not visible to the user, not visible to many people at various stages of the value chain. And the scale, the scale currently but also the scale potentially of these networks and devices, the reach that they have into our everyday actions, is different. It's new and it creates new levels of vulnerability and of criticality, I think, for these networks and these applications. So when we look at what AI means for that, there's been a lot of work that has started to be done on AI and cybersecurity. UNIDIR, which is the UN Institute for Disarmament Research, has done a really useful starting study on the AI-ICT security nexus.
They look at both what AI can offer in terms of defence against attacks, and also what AI offers attackers: the new tools it offers, the new malware, the new skills and understanding, and the ability to process the huge amounts of data that come from something like an IoT network in new ways and to new degrees. They talk about this in terms of outside the perimeter, looking at the development of new malware processes; on the perimeter, so actually breaching the networks; and then once inside the perimeter, and how the attacks can look different at each stage when they're AI-enabled. That study itself actually doesn't mention IoT, I was just going back and checking my copy of it, but I think IoT is really the use case that it is aimed at, because that's where the real vulnerability is going to be going forward. The other point that I would want to make, and this links both to the GFCE work and the focus on the Global South, and also to the IGF and its very global and inclusive mission, is the potential for this in relation to digital divides. Because what comes through in the UNIDIR work is that, yes, AI provides potential for attackers, but it also provides potential for defence. But that potential for defence is something that requires resources, expertise and investment. So if we see users of networks in the Global South, or in under-resourced, under-served areas, who don't have access to the defence capabilities that AI provides, then their vulnerability to AI-enhanced attacks is so much greater. So we really do see the potential, a potential which is right now growing and evolving, for even greater vulnerability to cyber attacks in the Global South and in under-served areas. And so that need for cyber capacity building, particularly cyber capacity building with a focus on the Global South, is more urgent and required than ever. And I think that's a really important issue to consider as we talk about IoT networks, IoT applications, and security in relation to them. I'll stop there.

Maarten Botterman: Thank you very much for that, Chris. I think that helps. IoT and IT are clearly different in the sense that IoT is machine to machine, or machine to people. And if you see how AI will influence that: the whole area where IoT will make a difference will become bigger, and it will become more integrated. So in a way IoT is also IT, but it is specifically those things that are actuators and sensors that make the big difference, the hands and feet. We haven't gone very deep into the privacy issue, but I know that our next speaker knows a bit about that too. And Jonathan, knowing that your focus will be on labeling and certification schemes, in which privacy plays a role too, right? The floor is yours.

Jonathan Cave: Thank you. Thank you, Maarten. Yes, privacy. To me, privacy is an indicator of a set of concerns that got, let's say, fetishized in terms of privacy, where some of the boundaries, like the boundary between data privacy and personal privacy or autonomy of action, got obscured. And it was an inevitable consequence of the way the law had to be written, because laws could only talk about certain things, not necessarily about whether people feel free to act, for example, or free in a way that allows them usefully to be held responsible for the actions that they undertake. And so what I was going to say about this is perhaps slightly oblique, but we'll come back to these things. When I think about the IoT, with and without things like AI and quantum added on top of it, what occurs to me is that in the global context, different countries, different actors, different spheres of influence, business, civil society, and formal government, respond to emerging problems in different ways. And they put in place institutions like laws or like labeling schemes or certification schemes, each of which is an attempt to address these problems, and each of which tells us something about the problems. But they do so in different ways. The law acts, as it were, by sending a signal to people that these are things that you should and shouldn't do. But whether the law is effectively implementable or whether it changes people's behavior is another question. And in the global context, where jurisdictions are not universal, that's by no means obvious. Things like labeling and certification should work by harnessing the autonomous rational choice of individuals. In other words, the reason for certifying something is so that I can trust it and allow it to come into my system. But that certification is only relative to the system at one moment in time, with a particular generation of devices, and particular assumptions about how people use the elements inside the system. So once those things change, the certificate no longer does what it used to do, but it is still something that allows something to come inside the system. With things like labels, it's even stronger, because the label works on, or is predicated on, the idea that people care about those labels, and that therefore they will buy the things that have labels that reflect their own preferences, and that therefore those preferences will become incentives for innovators, for system providers, and so on. So, in that sense, the mechanisms that people use are, let's say, different in the way in which they operate. The second point I wanted to make was that since we have these differences of perspective, I'm considering mostly the global perspective. Things like the GDPR, for example, reflect a certain set of values and understandings which the EU was eager to advance on the world stage in the hopes that they would become more broadly adopted. That was one of the hopes. Another one was so that they could be protected at home in what is a global ecosystem. So, in other words, we need to protect our citizens so that the rights that they've come to rely on are guaranteed, but hopefully also that these will then recommend them to other people, and things like ethical interoperability will become strengthened. But when we have these differences of perspective, they also reflect the different positions in that ecosystem that countries think they have.
Countries which see themselves primarily as creating the technologies or as providing the services will balance the competing interests in different ways than countries that primarily use the services. Which is why I was particularly interested to hear about what happens in the Global South. Because the needs of the Global South, the way the Global South adopts these devices, and even the drivers of that adoption, which might have to do with things like: are they cheap? Are they reliable? Can they operate when the power fails or when the network fails? Things like that may be very different. And we've seen with a lot of other devices, well, like the Raspberry Pi or mobile phones, you know, just feature phones, that they're used in fundamentally different ways. But the technologies are global and the functions are global. So there are conflicts between countries, between layers of the value chain, and in that sense the globalization adds something unique that merely transitioning to the IoT doesn't. So the other two things I wanted to say before addressing my final point are that, from the ethical perspective, values, the thing that defines good and bad, are embedded in the technologies. Technologies are not good and bad, but, as we said before, they're good or bad depending on how they're used, and that applies to their design and provisioning as well. They make it easier or harder to do certain things, and that can allow a system or a society to drift into problems that it might not even have perceived. So what we might want to do is think about a way of proceeding where, through our use of soft law techniques like labeling and certification, market-enabled techniques, multi-stakeholder agreements, international agreements, we can identify and reinforce those values we think are universal, while protecting the particularity of values that different cultures and individuals need to have in order to play a useful role in this. Because what we saw with a lot of previous technologies was that the cultural image, the social image of the country or region that developed the technologies was then imposed on the rest of the world without the kind of social evolution that enabled them to manage those problems. We saw that when the market economy was imposed on former Soviet Russia and it produced not the kind of economy we look forward to, but a kind of capitalist gangsterism. Quite understandable when you think about the fact that it hadn't grown up naturally, but was kind of imposed. So the ability to have appropriate localization without too much fragmentation is, I think, one of the essential elements of this. Now, mutual recognition is important, and that means that we're not thinking just about soft law things, but also about, well, what's very much in the news at the moment, trade agreements. How do we handle these things when we say that my regulations have to be aligned to yours in some sense, have to recognize yours? That has an economic consequence. It has a power consequence. We may need to use other tools as well, such as self-regulatory activities or competition regulation. Then the final thing I wanted to mention was that I've been reflecting during the discussion on what it was that was IoT-specific in all of this. And from what I've heard, scale is one element of this. Our illusions of control and design are sort of tied to a particular scope of our activities, in variation and in numbers of devices.
Once we cross those lines, that quantitative change becomes a qualitative change, and we don’t always keep pace with it. Another thing that Joao mentioned was attention. Sorry, I see that I’m past time and my computer is complaining at me for that. So I’ll just say that among the things… No, go away. Sorry, it’s not dismissing. Okay, yeah, so the attention that we pay to these devices, the extent to which we rely on them, and the complexity of the ecosystem and its emergent behaviors, all of these things are different. So merely patching the things that we’ve done before will not allow us accurately to move into this. And in this sense, the global context is our friend because it allows us to pursue natural experiments in ways that no single nation could. Okay, let’s see if I can shut this thing up. In the meantime, I’ll shut it up for everyone else.

Maarten Botterman: Thank you so much, Jonathan, and thanks for not letting the automatic shut-off cut you off immediately when the time was up, but allowing yourself to finish your contribution. So we got a couple of answers, we got some insights into what this world is, what we need to do to keep it secure, and why. So any specific input from anybody in the room? Please, sir.

Audience: Remarks? Today… Yes, yes, Alexander Shevchuk. Today there were some very important remarks about video surveillance and cameras. At the beginning of the full-scale invasion of Ukraine by the Russian Federation, there were a lot of Chinese-made video cameras in Ukraine, installed for public security in different places, and the Russian Federation used vulnerabilities in these cameras… And that is only one point. The second point is CCTV cameras with biometric recognition, used to find specific persons with face recognition technology. These cameras are also used in defence, for example gathering facts about crimes that are afterwards put before the court, and in attacks, because these cameras can be a weapon in the war. So it is very important: vulnerability, security and confidentiality are among the most important points in the usage of the Internet of Things today.

Maarten Botterman: Yes, thank you very much. If one of the speakers wants to come in, please also just raise your hand. But you're making a very good point, because anything that can serve us can be weaponized against us too. And the more dependent we become on it, the more important it is that we make sure that it's not used or accessed in unauthorized ways, either to use the devices, the actuators, the things that do things with the digital information they get, or in a way that really can do something we need to address. We become dependent on them and they become critical in our infrastructures. I see two hands up. Joao, please. And then Matthias.

Joao Moreno Rodrigues Falcao: Okay, hello. Well, I would like to very briefly answer the question about the difference between operational technology and IoT. One of them is the objective. When I did some assessments in industry, thinking about the cybersecurity triad, which is confidentiality, integrity and availability: when we speak about an industrial site, you are speaking about availability, availability and availability. You need to keep the machines running. And this requirement really changes the way we handle these kinds of systems. About changing the GDPR, I don't think it's needed, because, well, technology evolves really fast, but we cannot change the regulations as fast as that, because developing products takes time. When you are developing an IoT system, it can take three years to develop one. If the regulation changes very fast, erasing parts and not just adding new features, then in reality, in the end, you ensure that products will not comply with the regulation. And about the risk of attacks, I have a story to tell. I worked as a cybersecurity tester, an active tester, and in one of the security tests we did we got into the system through its air-conditioning equipment, because it was very poorly secured but actually quite powerful, and we had the computational power to use it as a bridge to target attacks inside the company. And, well, no one cares about the air conditioning, so this is the risk.

Maarten Botterman: Thank you very much. And just to say that we have progressed as well, from the initial devices then to many devices now. I remember when I had my first camera in my home and I hadn't done a great job of securing it. At a certain moment I'm there in the room, the camera turns to me and I hear voices. I pulled out the electricity cord, and that was fortunately sufficient. But we are also getting beyond the time when admin/admin was used as username and password, and things like that, so we do progress, but we need to do more, in particular as technologies develop further. Matthias, please, and then Wout and then Jonathan.

Matthias Hudobnik: So, actually, I also wanted to contribute and complement the answers to the questions a bit. First of all, thanks, Frederic, it's a very good question, OT versus IoT. In general, what is maybe also interesting is that OT is always, or most of the time, used in industrial equipment and processes, like manufacturing, energy and utilities, while IoT is really more, let's say, smart devices that collect data and send data via the internet. Also, OT, operational technology, is very often isolated in an air-gapped system, now sometimes increasingly connected via industrial networks, but normally it's air-gapped, also due to the critical infrastructure it is facilitating, while IoT, again, is more IP-based and cloud-connected via Wi-Fi or sometimes Bluetooth. And, let's say, the real-time requirements are also different: OT often has hard real-time, strict latency and timing constraints, while IoT is more often soft real-time or none, more tolerant in terms of delays. And maybe a bit on the attacks: there are some attack vectors which are very similar, let's say cyber attacks, legacy systems, interconnectivity risks, supply chain attacks, and then you also have AI and automation. The key difference in future risk, I would say, is that OT risk is physical, the shutdown of a factory, an explosion, while for IoT the risk is more digital and data-focused, privacy loss, data theft. And OT systems are high-value targets; IoT can be too, but devices are more a mass target for exploitation in a botnet or something like that. To the data protection question per se, it's also a very good question. Indeed, the Commission has thought about amending the GDPR; there are plans to amend it. I think it was in February this year that they had a first set of changes, especially in relation to medium-sized enterprises and compliance, and there is the question of how the GDPR addresses AI. In the GDPR we have automated decision-making and profiling in Article 22, which, let's say, restricts decisions based solely on automated processing that significantly affect individuals unless specific conditions are met, and this is important also in terms of AI. Then you also have data protection impact assessments, which are also necessary for high-risk AI applications, and there are principles which are similar to the AI Act; very often, when we talk about data processing, the AI Act refers to the GDPR in certain articles. And then a last point on biometric identification and surveillance: a big thing is also emotional facial recognition, where you really assess emotions based on facial recognition, and that's also quite a problem in terms of bias, which is a big issue where companies are already using this, for example, in hiring people, and we know there are various articles about it. I can also recommend you watch the movie Coded Bias, which is quite good, you can find it on Netflix, and there you can also see the problems in terms of bias and facial recognition. Thanks.

Wout de Natris-van der Borght: Yes, thank you, Maarten, and to come back to your comment there: what it shows is that there are so many levels of security and safety in these devices that for an individual it's almost impossible to deal with. Who thinks, when you buy a fax machine or a coffee machine for your company, that it automatically connects to your system? And I know from a factory where I used to work a long time ago, and we're talking about a long time ago here, that our sysadmin found that the printer and the fax machine were automatically connecting to the company that sold them to us, and that there was no form of security inside. And we dealt with some pretty sensitive information, also industry-sensitive information. Another step in there is when you talk about these cameras and the fact that they come from China or whatever other country: who really controls these cameras? Is it you who bought them? Is security your problem? Or can harm be done to you with these cameras as well, by sending out whatever is on those cameras to other people, or selling it to other people? So I think there are levels of security that really need to be dealt with. And when we look into the future, with all the options that are coming, with everything becoming faster, let alone with quantum computing: if we do not deal with it now, and I'm repeating my message, I think we're really going to be lost. And then we're not going to be running down the mountain with the brakes in our hand; we'll never see the mountain from the car again. Thanks.

Maarten Botterman: I can see it, running down the mountain now with the brakes; I'm a very visual person. Please ask your question and then we continue to Jonathan. And can you introduce yourself?

Audience: Hi, I'm Marijana Puljak, a member of the Croatian Parliament, but after 25 years in IT I ended up somehow in Parliament, and I like to come to these kinds of sessions and pose some questions. For example, we talked here about the safety and security, especially, of IoT devices that collect and store vast amounts of personal, sensitive data, much of it encrypted using current standards. But once quantum computing becomes powerful enough, that encrypted data could be decrypted. Today we still don't know what quantum computing is capable of, but in the near future, as you said, and that could be, I don't know, maybe next week. So how are we prepared, or how do we prepare ourselves, for these risks? I know that legislation and regulation are always slow in comparison with new technologies, but how can we prevent this future from happening?

Maarten Botterman: Yes, that's always a balancing act, right? If legislation comes too early, it stifles progress. I think we have to turn to sandboxes for that nowadays, but it's an important question. I think Jonathan will be able to refer to this as well, although that was not why you raised your hand. Then Chris, then Joao, and then we need to round off.

Jonathan Cave: Indeed it wasn't, but I will respond briefly to it. I think it's certainly true that new technologies raise new challenges, and there's a certain extent to which, through, let's say, active regtech or something like that, we need to get ahead of these challenges. In the case of quantum-enabled encryption, of course, encryption and decryption are in a continual tension, and so it is quite possible that quantum encryption may offer the same level of functionality in relation to the threat of quantum decryption. However, there is also a vast amount of stored data. Much of these data are stored in encrypted form on devices which remain accessible, and it will not be possible to decrypt and re-encrypt all of those data to a new, stronger standard, so that historical trove of data, which is built into the training of our algorithms and everything else in intimate ways, will be exposed. And this legacy is something I think we need to be aware of. The thing I mostly wanted to talk about is the consequences of complexity and attention. Now, I won't beat the complexity drum too hard, except to say that there are strongly emergent things that happen in complex systems, things that cannot be understood by looking at the behavior of individual elements of the system. And if you look at how the laws and regulations and the conventions and individual device designs are predicated, it is on the basis of individual systems. Even leaving aside the generational problem of systems which contain different iterations or levels of maturity of devices, and how they function, it will be very difficult to know where regulatory surveillance and responsibility should lie. The other thing I wanted to talk about was the attention that we pay to these things. With many of these devices, where we do interact with the device, it's on the basis of interfaces which, for reasons of human practicality, we have simplified, simplified to the use of things like biometrics, for example. Once you have begun to use your device by simply speaking to it, and being recognized by it, you cease to notice that you're going through a layer of security. And because your voice is just your voice, that voice then becomes, it's like having one password for all of your systems. And it creates a collective risk in relation to all these devices that you don't think of distinctly, that are part of a system you interact with. But beyond that, if they are using your voice, they could begin to learn who you are through learning how you respond, how your voice responds. And what you can do with voices: you can recognize patterns, you can recognize emotional states, as has already been said. And these emotional states can not only be recognized, but they can be manipulated through the nature of your interaction with the system. And just as, if you do sentiment tagging on texts that people write, you know how to write texts that will influence their sentiments. These things are not impossible. They're not science fiction. They're happening right now, all the time. Once you move beyond a brute-force capturing of people's attention to trying to use that attention in particular ways, then new things become possible, new forms of bias. And the privacy-related solution to this is something like synthetic data: we don't use your data, but we use data that are kind of modeled on you and people like you.
But the reasons why we have privacy are not protected by a transition to synthetic-data types of modeling, because they can control you, they can influence your behavior, subvert your autonomy, just as easily using the synthetic data. And then the final point I want to make came out of something that Wout mentioned, which is that when people buy these devices, they may be informed, they may have labels, they may think about them in a particular way. But when they subsequently come to use those devices, they're thinking about them in a very different way, lower down the brainstem, not as consciously. We know that from the studies that were done in the Netherlands on things like pricing and energy efficiency. When you buy a car, you think about these things in a very overt way, and thanks to labeling regulations you put price and features and environment together with each other. When it comes to deciding, do I drive here or drive there, all of that goes out of the window. And so the incentives operate only at the point of purchase, but during the lifetime of the device or the system, those incentives on which we rely cease to operate. So, okay, that's enough. Thanks.
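The "harvest now, decrypt later" risk that Matthias and Jonathan both raise is often reasoned about with Mosca's inequality: if the required confidentiality lifetime of the data plus the time needed to migrate to post-quantum cryptography exceeds the time until a cryptographically relevant quantum computer arrives, data encrypted today is already exposed. The sketch below (an editorial addition, not part of the session) expresses that rule of thumb; the example numbers are assumptions, not forecasts.

```python
# Illustrative sketch of Mosca's inequality for "harvest now, decrypt later"
# planning: exposure exists when shelf life + migration time > time to a
# cryptographically relevant quantum computer. Example figures are assumptions.

def quantum_exposure_years(shelf_life_years: float,
                           migration_years: float,
                           years_to_quantum_threat: float) -> float:
    """Return the number of years of predicted exposure (0 if none)."""
    return max(0.0, shelf_life_years + migration_years - years_to_quantum_threat)


if __name__ == "__main__":
    # e.g. health records kept 15 years, a 7-year migration, a 15-year horizon
    exposure = quantum_exposure_years(15, 7, 15)
    if exposure > 0:
        print(f"At risk: data harvested today could be exposed about {exposure:.0f} years too early.")
    else:
        print("Within the safe margin under these assumptions.")
```

The point the panellists make follows directly from the arithmetic: because migration takes years and much IoT data must stay confidential for years, the transition to post-quantum cryptography has to start well before a quantum computer actually exists.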

Maarten Botterman: The rebound effect, yeah. So thanks very much for that. Time is up, but we still have three hands up, so I would really like to ask each of you to make a one-minute remark: Chris, Joao and Matthias, please.

Chris Buckridge: Okay, I'll jump in and I will keep it brief, because it's somewhat of an advertisement. In response to Wout's point, and really the broader question in terms of governance: one of the sessions we're currently planning for the Oslo Internet Governance Forum, which is taking place next month, is looking at emerging technologies, and one of the really key areas there is anticipatory governance, a phrase that seems to have emerged a lot in the last couple of years. The OECD has developed a framework for anticipatory governance, which is looking at ways in which policymakers, regulators and parliamentarians can develop agile responses to these technologies, which are coming in very quickly but having very significant social and economic impacts. And one other point, which our colleague from Croatia reminded me of: the Inter-Parliamentary Union has actually also done a lot of work on this anticipatory governance with their World Summit of Committees of the Future work. So it's something that is being looked at, but I think it's something that needs to be done very consciously.

Maarten Botterman: Thanks, Joao.

Joao Moreno Rodrigues Falcao: Okay, good. I will only say here that the results will be in our report, published during the IGF, where we'll talk a lot about IoT and also the quantum challenges. Unfortunately, I didn't have time to speak about that future today, but we have wonderful work on the PQC part. So yeah, please stay tuned.

Maarten Botterman: Thank you. And I think, for people in the room, we'll hear a little bit more tomorrow morning as well. Last but not least, Matthias.

Matthias Hudobnik: Thanks a lot. Just to answer the question very quickly: in relation to post-quantum, first of all you need to ask yourself, encryption for what purpose, confidentiality, authenticity or integrity? And there are standards from NIST and also the IETF which are looking into that, just so you are aware. And then, from my side: resilience in IoT, I would say, is not just a technical challenge, it's really a governance imperative. As said, from the DNS up through AI governance and also lifecycle management, our systems must be, let's say, secure by design and continuously updated for emerging threats. And here, again, we need to strengthen foundational protocols and international cooperation, so we can build, let's say, an internet that remains trustworthy and also robust for the future.

Maarten Botterman: Thank you so much. And we're committed to that, many of us. Thank you all for your attention and the interaction, and thank you, speakers, for the insightful sharing. It's clear that IoT is becoming part of the fabric, and it is increasingly important that we do that consciously and make sure that these extensions of the systems we know will not hurt us but help us, being aware that they can also hurt us, and consciously dealing with that, both in terms of protection and in how we design the systems. Certification, telling people what they can count on, which Wout also mentioned, is crucial, so that citizens don't blindly put something in their home or their hand that they don't know how to use. Building it to be safe, secure and private will be crucial too, because we can't count on users to take all decisions and all measures themselves. So this balance needs to be found too. My next conclusion is the digital divide: how do you make sure that people who need it get it, within reason? And last but not least, this all comes with a carbon footprint. The first devices had batteries, and they may still be hanging out there somewhere, leaking, et cetera. When it becomes so prevalent and so much part of the fabric, let's make sure that we also do that in a conscious way. So with that, thank you all for attending and for an excellent discussion. Wout, thanks for joining, and we'll see you in the next session. Thank you.