The technological edge in 2030 – What role for technology in society? – Lightning talk 01 2019

19 June 2019 | 11:00-11:20 | KING WILLEM-ALEXANDER AUDITORIUM | Video stream will follow shortly | Transcription

Consolidated programme 2019 overview

As a precursor to the Panel 1 discussion on technology supporting policy, Jonathan Cave and Jaya Baloo will sketch a possible future of 2030, highlighting what may be possible, so as to invite the panel of distinguished representatives of the four traditional EuroDIG stakeholder groups to comment on what they like about this future and what they would like to see different, and in a subsequent round to consider what needs to be done to make that future likely to happen.

By taking the perspective of 2030, and "looking back from the future" at what needs to be done today to work towards the best possible future tomorrow, and by having two inspiring speakers, this lightning talk will be an excellent kick-off to the EuroDIG series of Panel sessions for 2019.

Speakers

  • Jonathan Cave, Member of the UK Regulatory Policy Committee, Fellow at the Alan Turing Institute, and Senior Teaching Fellow at the University of Warwick, United Kingdom
    Jonathan Cave has been Senior Teaching Fellow in Economics at the University of Warwick since 1994. For more than 30 years, he also worked for the RAND Corporation, most recently as Senior Research Fellow at RAND Europe. He has previously been a visiting professor, research fellow, and lecturer at several universities in the US, including UC Los Angeles. Before entering academia, he was an economist at the Bank of England and later the US Federal Trade Commission. Jonathan also served on Defra’s Science Advisory Council Exotic Disease Subgroup and the Academic Liaison Panel of the Information Assurance Advisory Council.
  • Jaya Baloo, Chief Information Security Officer, KPN Telecom, The Netherlands
    Jaya Baloo has been working internationally in Information Security for nearly 20 years. In the last few years, she has been named CISO of the Year and listed among the top 100 CISOs globally and the top 100 Global Security Influencers. Her focus has been on secure network architecture, with work ranging from lawful interception, VoIP and mobile security to the design of national MPLS infrastructures, ISP architectures, and quantum communication networks. She has worked for numerous telecom providers, Verizon and France Telecom among others, and currently works for KPN Telecom in the Netherlands, where she is the Chief Information Security Officer (CISO).

Transcripts

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-800-825-5234, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> MARJOLIJN BONTHUIS: Please, we are about to start. Again, I invite you to come a little bit closer. You are all so far away and the distance between the floor over here and the audience is already very wide. So don't sit at the back. It's nicer for the panel to be a little bit closer.

Everybody find a seat. Just come a little bit closer. It's good for everybody. Then I would like to begin with our first panel, this high-level panel on the first day of EuroDIG 2019. And I would really like to introduce the moderator of the panel, Maarten Botterman. He's an independent advisor on policy and society. I'm sorry, this coffee break was too long for me. I have to adjust to the English again. You do far better than I do. And I give the floor to you. Good luck to all of you.

>> MAARTEN BOTTERMAN: Thank you very much, Marjolijn. Thank you all for being here. As Marjolijn said, the closer you are the better. I'm actually moderating this panel together with Emily Taylor from Oxford Information Labs and I'm very happy to do that. Emily is going to focus on the panel discussion and I have the honor of keeping the -- embrace beyond that.

So what we want to do with you is take you to 2030. By going to 2030 and sketching the world out there, we can see what that may look like, what we like about it, and how we can get there. We hope to go beyond the struggles, the ripples, as they say when you are canoeing in a river, of the things that we are faced with today.

And really look ahead to the future and what is really important. And for that, we are very happy to have Jonathan Cave and Jaya Baloo be willing to sketch a little bit of their insight into how that may look. Jaya Baloo is the Chief Information Security Officer of KPN, the biggest service provider in The Netherlands and the former national telecom from when countries still had just one telecom operator.

You will love to hear from her. I have heard her before. It's a big honor to have you here and she can speak of her feats and really blow you away. So let's get you to do that. Beautiful.

And next to her, and our first speaker, is Jonathan Cave. Jonathan is a policy analyst. I got to know him at the RAND Corporation, and he has been working at the University of Warwick on economics and game theory and is currently connected to the Turing Institute, and from that perspective, he would like to give the first sketch of 2030 and what that looks like. Jonathan, the floor is yours.


Lightning talk

>> JONATHAN CAVE: Thank you, Maarten. The microphone works. What Maarten didn't say is that I have a government role. Pay no attention to it.

I want to think about what will happen in 2030. I won't present an optimistic or pessimistic view. We will solve a lot of the problems that we have today, but I think we will discover that those were not the problems that we should have worried about, or that there will be new problems, deeper problems, that sit underneath them, because our tendency, particularly in policy, but also to a certain extent in civil society, is to try to patch up, preserve and restore what we think we have. And some of those assumptions, some of those restorations, will become insupportable, in part because of the changing worlds of power.

So I do expect that at the moment there's a sort of artificial tension between concepts of privacy and concepts of cybersecurity. I don't expect that that will be an animating feature of the debate in the future. There are many ways in which it could be resolved. We could give up on one of them or the other, or, more pragmatically or possibly, both of them, but I think that if we think about them both as matters of access, access to data, access to the power to use data, to change things, there isn't really much difference between privacy and security, except in terms of who wishes to control whose access and for what purpose. So I expect that those tensions will be resolved. Oh, hey, I can advance things and so on.

I think that a lot of things that we try to fix -- well, a million years ago when the Internet was invented. I started a long time ago at RAND in Santa Monica, and it was the ARPANET. We controlled, through social conventions and other things, the things that we worry about now: flame wars, which would have turned into cyberbullying, and things like that. We didn't need to use them because we were a fairly homogenous culture, and then they atrophied. I think the things we try to do through regulation and law, and twisting people's arms, will have been pushed up the stack; they will have been dealt with technologically or by design, and we will regard these things as having been globally negotiated, and they will be accepted by all the people who care about them. That's not all the people affected by them, of course, and the things that will be embedded in the common technology will be at the lowest common denominator.

And we see this already today, because our concept of privacy in Europe seems to be bound up with data protection, and, to my view, it isn't that at all.

We will expect, however, above this floor, above this common floor, that there will be some states, like The Netherlands or the UK, that may have more extensive statements of rights, rights to communication, and a variety of tools, and these will be engaged in the ecological structure, some of which we can see even today. We see it in the better regulation agenda, where we try to get regulations that don't foreclose change but sort of support evolution. The comply-or-explain stuff that we were hearing about earlier is an example of this.

And we would imagine that if there were a set of common principles, not about acts, but where you enforce the rules based on the harm that people suffered, that might be better. Also, we won't have so much commitment to this legislate-and-then-force-people-to-obey-it approach. We will have certain kinds of sandboxes where the consequences of not having regulations, or of having different regulations, can be explored before they are tried in the wild.

One of the things that we see already today in relation to Internet harm -- and this is something I do expect to be widespread by 2030 -- is replacing statutory rules and restrictions with duties of care. At the moment, the large platform parties are not going to go away, because the economies of connection and the economies of scale and provision will continue to be with us, but they may act more like utilities. But because of the subtle nature of the Internet, it's not like an electricity network, and it can have impacts on people. Therefore, these parties are in a position to do something about it, and I expect them to do it.

So one example of a kind of thing that we can see today, and that we might expect to see in the future, is the use of reinforcement learning algorithms to feed content to people based on what they are seen to do. So here's a little thought picture. Somebody goes online and they are interested in self-harm, or they become aware of a community that does self-harming things or wants to control their diet, or something like that.

The algorithm sees this and it feeds them more of that content. Now, I can think of at least four ways somebody might respond to that. They might, as a 13-year-old girl in England unfortunately did, be led down a path which leads ultimately to them killing themselves. They might, on the other hand, see where that path leads and be dissuaded from going any further down that path, or reflect more deeply on what the consequences might be.

Or it may be that the mere provision of that graphic material traumatizes them, independently of what they themselves might do, or the reverse might happen: the provision of that graphic material might erode their sense of affect. They might become desensitized to it, and their social groups become desensitized to it. The point I want to make is not that these different effects are possible. The point I want to make is that the algorithm that feeds the content to those people only sees about them what those people have revealed through their internet behavior. And the privacy protections we have prevent those algorithms from seeing any more.

So what we have is a magnified effect, and an attenuated ability to predict what that effect is. And so that is a failure of responsibility. It's a failure of agency. It's something that I think we need to worry about, but placing a duty of care on somebody who is in a position to see what's going on, and to experiment with different ways of dealing with it, is perhaps better than saying, this is what you should do. This is the material that you should screen out.

We can see this also in the use of AI for automated content moderation. You've got to determine what is harmful speech, what should be prohibited speech online, but the ontologies with which we express this speech are continually changing. You need something more than just a list of prohibited words. I expect that by 2030, we will be well ahead of that curve.

And we will also have international collaboration, because law is a national thing and these things that we are talking about are not national. They are not even bound up with individual cultures, but we will see more mutual recognition and free trade agreements and so on.

Now, there's another thing, which has to do with innovation. We heard about permissionless innovation: I can row back from it if it turns out to be not so good. In a networked environment where people only see little parts of it, many things may be irreversible, and so the normal way the law operates -- by saying, you harmed me, therefore I'm going to come and punish you so that nobody else will harm you in that way -- may no longer be effective, and we may need to have certain social conventions as to what is permissible and what is not. So some of the sweeping statements of rights may have been somewhat rowed back from.

Now, among the things that we will still be learning are that data are not reality. And they are not even models. And the things that we believe in may be more important in terms of what we do than the things that are actually true. Now, I'm not saying -- I mean, the use of the word "trump" is incidental, but perhaps it's not. And I'm sure that during the discussion we can talk about the role of truth, and the role of speech as action, because these things will be important in the journey from now until 2030.

We can hold people responsible for things. We can hold the GAFA firms responsible for things, but if they can't comprehend what is going on and they can't control it, it may not be useful to do it.

It may feel good because we have the risks off our plate, but it doesn't produce movement towards a better outcome. I believe that we will discover, even in Europe, that data protection is not privacy, and most of the things that I worry about are my decisions to act and to decide what I will do.

And some of the things that we think are good, like data protection, privacy, trust, efficiency and growth, are not necessarily good in themselves. They are good because they lead to better lives, and as we become more digital in everything that we do, every sector of the economy will have a digital component that affects its business model, affects its relations with suppliers and consumers and governments. Everything will be digital to an extent, and that exposes to us the limitations of views that we have that are rooted in old technologies.

Finally, we have this belief in accountability and transparency and explainability. I am just here to say that being able to give an account of how an algorithm functions doesn't mean that you understand it. It doesn't mean that you can understand what it does to you, let alone to the society that you are a part of. These things, I think, will not be solved by them, and that's enough from me.

>> MAARTEN BOTTERMAN: Thank you very much, Jonathan.

(Applause)

A world that is unfolding, and many lessons to be learned before we get there. Fortunately, we have ten years. So Jaya, can I ask you to come and give your perspective.


Lightning talk

>> JAYA BALOO: Thank you. Hi, everybody.

I'm going to do the magic clicking again. There we go. I would like to ask you, what if? What if by 2030 we had trust proven with consistency over time in all of our hardware and software?

What if? What if we did that because we focused not on certification, which is a long and lengthy and expensive process and will kill the majority of the start-up culture and innovation that we are starting to achieve. We didn't focus on certification, but we got there because we focused on creating trust by trusting in the basics, but providing liability for the people who were responsible for their own security.

So we focus on liability when they get it wrong, but we don't focus on a lengthy and complex certification for getting it right.

I'm just going to hold with that controversial statement for now.

What if we had trusted, global, secure routing resilience? What if we did actual adoption of secure routing protocols? What if we applied good manners, MANRS, and RPKI? Then we could trust the information we got from each other, from the global telecom space.

What if we gave control of identity and data traffic back to users? And what if we did that by saying it was nonnegotiable to build or build out without having those items in place? So identity back in the hands of the user, so fully user-centric identity.

Again, GDPR is a great starting point, but there are mistakes that were made that have a sort of backlash. Think of what we did to the WHOIS database. GDPR needs amending, and it needs building out on the points that are good within it, rather than just being an administrative exercise.

What if we were ready for quantum computers, which we expect to get within the next decade, by 2030, with more than a couple of thousand qubits? What if we knew what to do by already adopting some of the candidate algorithms? We are up to the second round of the NIST process, and what if, when we had the standard from NIST in 2024, we were already acclimated, because we knew our crypto resources, and we were ready to transition by 2030, so that when the quantum computer arrived, we were good to go, because we still had a secure environment?

What if, when we talk about AI, we actually focused on building these things out with a high degree of trust and transparency? And what if we always said that when it comes to critical applications of AI -- because you can do a lot of good -- we actually encouraged and fostered the opening up of APIs in order to have that transparency in place?

Hey, guys, what if we just banned the selling of spyware? Not just to regimes that we deem as legitimate or illegitimate, but we stop selling spyware altogether. As David Kaye at the UN has said, there's no legitimate reason to be selling spyware.

I will just leave you with that for a second. What if we then also said that we do not trade in zero days, especially when they come from a black market? We do not hack each other back, because it has consequences for stability and resilience, and we condemn any cyber-to-kinetic shift in hybrid warfare, based on the Israeli attack on the Hamas cyber cell in a residential area. So we recognize that the collateral damage of hybrid warfare for civilian lives is too great a toll, and we do not want to see that shift from cyber to kinetic.

What if we de-escalated all the things and tried to promote a culture of peace? Because really, this I see as also nonnegotiable if we actually want to achieve the greater technological goals: to explore other planets, to eventually get to a place where we can explore our universe, cure cancer, and all the other things. Then we actually need to start by fixing the things that threaten that fundamental security and stability. Five things. Just five things in ten years.

Thank you.

(Applause)


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.