Cybersecurity – The technical realities behind the headlines – Edu 02 2017

From EuroDIG Wiki

6 June 2017 | 14:30 - 16:00 | Room Tartu, Swissotel, Tallinn, Estonia | video record
Programme overview 2017

Session teaser

Until 1 April 2017.

Keywords

Until 1 April 2017. They will be used as hash tags for easy searching on the wiki

Session description

Cybersecurity is more in the news than ever, and the need for solutions to cybersecurity challenges more urgent than ever. With all stakeholders needed in these discussions, a basic grounding in some of the technical concepts at work is vital – that’s where this session comes in! With so many aspects to cybersecurity, ranging from regulation and liability to keeping systems patched up and the public aware, this is an opportunity to ask the experts and gain the building blocks to contribute to cybersecurity discussion.

Format

Until 30 April 2017. Please try out new interactive formats. EuroDIG is about dialogue not about statements, presentations and speeches. Workshops should not be organised as a small plenary.

Further reading

Until 30 April 2017. Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example for an external link: Main page of EuroDIG

People

Focal Point

Subject Matter Expert

  • Tatiana Tropina (Max Planck Institute)

Key Participants

Moderator

Remote Moderator

Organising Team (Org Team)

Reporter


Video record

https://www.youtube.com/watch?v=xtV6nZIxvbg

Messages

Please provide a short summary from the outcome of your session. Bullet points are fine.

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: 800-825-5234, www.captionfirst.com


This text is being provided in a realtime format. Communication Access Realtime Translation (CART) or captioning are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.


>> MODERATOR: Good morning. We're nearly ready to start. We've got seats left. Hello. Hello. See, there are more people coming. Welcome.

>> Ladies and gentlemen the conference program --

>> MODERATOR: One minute. 58 seconds.

>> (Speaking off mic).

>> MODERATOR: You can talk amongst yourselves. All right, everyone. We might get started here. Everybody coming in? Okay. Hi, everyone. Thank you all for coming. Welcome to this first session after lunch.

This is the second of these educational-track sessions, and so the basic idea of these is a little different to the traditional workshop in EuroDIG. It's a bit more about providing some -- well, educational content, I guess, some background material for all of this to go forward and take part in other discussions that are happening at this event and elsewhere.

The topic for this one is cybersecurity, and so the way we've done -- let me start with a sort of thematic summary of this. There was an article that Cory Doctorow wrote yesterday talking about Theresa May's comments and plans for regulating the Internet. And he quoted Aaron Swartz, an activist I'm sure a lot of you are familiar with. The quote was "It's no longer okay to not understand how the Internet works." That was aimed at policymakers, lawyers, politicians, everyone who is actually making policy and regulation about the Internet. It's absolutely vital they have an underlying understanding of what's actually happening on the Internet, how we're using that technology, what it's doing, how it's transferring information from one place to another, so that's the point of what we're doing here today.

My name is Chris Buckridge. I'm sort of the moderator. I don't think there will be a whole lot for me to do beyond this introduction. I work for the RIPE NCC, the regional Internet registry for this part of the world, which also acts as secretariat for the RIPE community, which is people interested in the Internet. We have a couple of people from the RIPE NCC today who will be presenting on slightly different aspects of the technical realities behind the cybersecurity headlines. That's how we framed it.

We have Patrik Faltstrom from Netnod, you probably recognize him. We have Marco Hogewoning, who is my colleague from the RIPE NCC, Marilyn, from the RIPE NCC, and Peter Van Roste. Each of them will have a bit of a brief presentation looking at, I guess, how cybersecurity works.

We also hoped to have a bit of a microphone session here, as we framed it: you can ask an expert. Basically, if you have questions about cybersecurity issues, and there have been plenty of those issues in the news recently that may lead you to questions, we have a pretty good team of experts here who should be able to provide some perspective on those issues from a technical background. So, with that I'll dive straight in and let Patrik take over.

>> PATRIK FALTSTROM: Thank you very much. Should I have that thing? This is cool. As an engineer, you still screw up how you use these kinds of things. These cool guys put tape over here so there is actually only one button I can use. And I used the wrong one.

(Laughing).

Okay. When we discussed what I was going to talk about, of course, I could do a talk about more operational issues, but I was actually thinking about taking one step ahead and not even talking about operational issues. I'm going to talk about what is behind some of the operational issues.

So, what I'm going to talk about is time and frequency, as one of the building blocks that we have to use to get a stable and well-functioning Internet, because as I will talk about in a bit, it's quite important to understand and know when something happens.

So, first Netnod, where I work: what are we doing? We run Internet exchange points in Copenhagen and Stockholm, Sweden. We run DNS in 75 locations in the world. We run a root server, run DNS for 20 or 30 TLDs, and then we run the time and frequency service in four locations, in Stockholm and (?) and a fifth place, and I'm going to describe what we're doing and why we're doing it.

So, I was an advisor to the Swedish IT Minister for 11 or 12 years, something like that. This picture is from the Prime Minister's office in Stockholm. You don't have to know much about time to understand that this picture isn't right. I took a picture of the clock in the reception area here at the hotel. It's a little better than this, a little, but not much; it's actually kind of fun.

When seeing different various things like this make you either sort of ask what is going on, or you might be actually very interested in what this time thing actually is.

So, the question is then, what is time? Time consists of two parts. One is something that ticks. You have something that ticks once every second, or 10,000 times a second; however, you need to know how often this is supposed to tick, because the second thing you need is a counter that counts how many ticks there are. So, a clock is, to some degree, really just something that ticks and something that counts.

And the people who understand how these two things work understand that, okay, there are some problems with this, because first of all you need something that ticks at exactly the same speed as other things that tick. Two clocks have to be sort of in sync with each other.

The other important thing is when you start that ticking thing, you need to know what number you start counting at. Okay. So, you have the bootstrap problem in the counter.

But if you figure that out, and you have something that ticks very precisely, and you have a counter, and you bootstrap the counter at the correct number, then you have a very precise clock. It's not much more difficult than that. How hard can it be?
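That "ticker plus counter" model can be sketched in a few lines of code. This is a hypothetical illustration, not anything Netnod actually runs; the class and attribute names are invented for the example.

```python
class Clock:
    """A clock as described above: something that ticks, plus a counter.

    tick_hz is how often the oscillator ticks per second; start_count is
    the bootstrap value the counter begins at.
    """

    def __init__(self, tick_hz, start_count=0):
        self.tick_hz = tick_hz
        self.count = start_count

    def advance(self, real_seconds):
        # The oscillator ticks tick_hz times per real second elapsed.
        self.count += round(real_seconds * self.tick_hz)

    def time(self):
        # The reported time is just the tick count divided by the tick rate.
        return self.count / self.tick_hz


clock = Clock(tick_hz=10_000)  # ticks 10,000 times a second
clock.advance(2.5)
```

After advancing 2.5 real seconds, `clock.time()` reads 2.5, but only because the tick rate is known exactly and the counter was bootstrapped at the right value, which is precisely the pair of problems the speaker goes on to describe.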

There are actually two different things we have to look at when we look at various kinds of measurements, and this goes back to security issues in general and knowing what happened. One is accuracy and the other is precision.

Of course, it might be the case that you have low accuracy but very high precision, which means that all the numbers you have are approximately the same, but they're wrong. Or you might have the case up there to the left: the readings are all over the place, but the mean value is actually correct.

What we want, of course, is something that is both relatively precise and accurate. And when you start to try to do something the correct way, regardless of what it is, sometimes you need to choose between accuracy and precision. If you go into the clock kind of things, this is where there is a difference between hydrogen masers and other clock types, and between short-term and long-term stability, but that's the sort of detail I will not go into.
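The accuracy/precision distinction can be made concrete with two tiny functions: accuracy is how far the average reading sits from the truth, precision is how tightly the readings cluster. A minimal sketch, with invented sample data:

```python
import statistics

def accuracy_error(readings, true_value):
    # Accuracy: distance between the mean of the readings and the truth.
    return abs(statistics.mean(readings) - true_value)

def precision(readings):
    # Precision: spread of the readings (population standard deviation).
    return statistics.pstdev(readings)

# Tight cluster that is systematically wrong: high precision, low accuracy.
biased = [10.04, 10.05, 10.06]

# Scattered readings whose mean happens to land on the truth:
# low precision, but accurate "on average".
noisy = [9.5, 10.5, 9.8, 10.2]
```

Here `biased` has far better precision than `noisy`, but `noisy` has the better accuracy against a true value of 10.0, which is exactly the "up there to the left" case on the slide.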

So the next question, of course, is why is it important that this time thing is correct? Why do we care about the time? All of us in the room are very social people, and after long experiments I understood that when two persons are going to meet, it's important that our clocks, my clock and Marco's clock, cannot be wrong by more than 15 minutes. The question is then, why 15 minutes? This is actually kind of easy. This is my favorite restaurant in Stockholm. If we decide to go and meet, we need the geographical location, three dimensions or maybe two. But anyway, we need to agree on the restaurant and at what time we're going to meet.

Now, if there is an error and our clocks are very different, I have figured out over the years of knowing Marco that the person who arrives first orders a beer, and after about 15 minutes the beer is finished and there is a risk that that person leaves. One beer is approximately 15 minutes, so that is sort of the maximum error.

But the real problem that we have is when you have events that happen in a distributed system. You have various events and each one of them gets a time stamp. And specifically, if these time stamps are made on different servers or different computers or in different locations, you later want to know in what order those events happened in this distributed system.

And if those events happen, for example, once every second, we all understand that it's pretty important that the time difference between the clocks in the various systems that do the time stamping must be smaller than one second, and of course, the clocks must also be quite precise.
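A toy example of this failure mode: two hypothetical servers stamp two events that really happened one second apart, but one server's clock is off by more than that gap, so sorting by timestamp reconstructs the wrong order. All names and numbers here are invented for illustration.

```python
# Server B's clock runs two seconds slow -- larger than the event spacing.
clock_error_b = -2.0

true_times = {"A": 100.0, "B": 101.0}  # in reality, A happened before B
stamped_times = {
    "A": true_times["A"],                  # server A's clock is correct
    "B": true_times["B"] + clock_error_b,  # server B stamps 2 s early
}

# Reconstructing the order of events from their timestamps:
reconstructed_order = sorted(stamped_times, key=stamped_times.get)
```

The reconstruction comes out as `["B", "A"]`, the opposite of what really happened, because the clock error (2 s) exceeds the gap between the events (1 s). That is exactly why the clocks' disagreement must stay smaller than the event spacing.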

So, the overall question that you quite often try to answer when dealing with these kinds of security issues is in what order events took place. The question is, well, is this a cyber or computer issue? Yes, it is to some degree, but we also have a problem in the real world itself because, unfortunately, we couldn't agree on what time it is.

So, this is just one example of a list of various time scales, and if we compare the various time scales, we see that there is a difference between GPS and TAI. We have a difference between GPS and UTC, and we know that there is a difference between UTC and TAI, and it's also the case that the difference varies because we have leap seconds.

So, one of the first things we need to do when dealing with time is not only to make sure that things are accurate and precise, but we also need to know what time scale we're going to use.

And how do you know what time it is on a given time scale? Most people think we're using UTC. Yes, we are using UTC, but unfortunately, that is something that is created as a mean between the clocks of maybe 40 different national institutes, and each one of the institutes only gets to know how wrong their time was compared to UTC a month after the fact. So, they get a letter every month saying your clock last month had this much error, and then you have to extrapolate forward.

On this scale here, between each line is 100 days. These are modified Julian dates, so you cannot read them and not even I can read them: 57,100, 57,200, with 100 days between. The vertical axis is nanoseconds, and this shows Swedish national time against UTC. And what you should see is that it varies between, let's say, plus 13 and minus 20 nanoseconds. People might think, okay, that's not so much, but GPS differs from UTC by something like plus or minus 14 to 34 nanoseconds, so they are different, and you follow either UTC or GPS.
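On top of those nanosecond-level wanderings, the time scales mentioned here (UTC, TAI, GPS) also differ by whole seconds. A hedged sketch of those offsets: the TAI-UTC value of 37 seconds is correct as of 2017 but grows with each leap second, while GPS time, fixed to its 1980 epoch and free of leap seconds, sits a constant 19 seconds behind TAI.

```python
# Whole-second offsets among the time scales discussed above.
TAI_MINUS_UTC = 37  # leap-second count as of 2017; this value changes over time
TAI_MINUS_GPS = 19  # fixed: GPS does not insert leap seconds

def utc_to_tai(t_utc):
    """Convert a UTC second count to TAI."""
    return t_utc + TAI_MINUS_UTC

def utc_to_gps(t_utc):
    """Convert a UTC second count to GPS time, going via TAI."""
    return utc_to_tai(t_utc) - TAI_MINUS_GPS
```

So in 2017, GPS time runs 37 - 19 = 18 whole seconds ahead of UTC, and that is before you even get to the nanosecond differences the slide shows.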

It is also the case that in the 5G mobile standards, the clock needs to be more accurate and more precise than 100 nanoseconds, so with the 5G standard we start to get into the same number range as the differences between the various time standards. We follow the Swedish time standard, Swedish national time, not UTC or GPS, and we're approximately within 10 nanoseconds if things work well.

Why is this important? We have a couple of cases in Sweden. Normally these days you have electronic procurement processes: you log into a computer, go to a web page, and submit your bid before a certain point in time. We had a case where the public procurement involved a lot of money and the clock in the computer was seven minutes fast, so the computer stopped accepting submissions to the procurement seven minutes too early. That was not good.

There was also a response to an RFP that was sent via fax, and as those of you who remember faxes know, you had a timestamp at the top of the paper. That timestamp came from the clock in the fax machine, which was, of course, about as accurate as the blinking clock on a VCR or microwave. They brought the timestamp of the phone call to court, and the court dismissed the case without a verdict because there was no definition in the RFP itself of what timescale should apply to the timestamp by which the RFP had to be delivered.

We also had the Swedish Foreign Minister murdered, stabbed to death in a department store in Stockholm, in 2003 I think. They had a lot of surveillance cameras in the department store, but just because the clocks in the cameras were different, it took three days to understand how the murderer had moved around the department store, and that delayed the ability to actually track down who the murderer was.

In Stockholm we also have what are called congestion charges on the roads around the city, and the fee you pay differs depending on what time it is. Someone actually challenged a fee decision, which was kind of interesting, because they didn't really know what time it was. So if you drive past one of those cameras, or get a parking ticket, close to the time when the fee changes: challenge it. It was actually kind of interesting.

Looking at it more seriously, we have legislation in Sweden from 1736 that we are still using, one of the oldest pieces of legislation we have. It says that if you sell the same goods twice, whoever bought the goods first keeps them. So since 1736 it has, by law, been necessary in Sweden to keep track of when a transaction takes place.

A little more recently, there was an article in the Times on the 9th of May, 2017, pointing out that in the financial industry nowadays it's actually kind of popular to attack GPS receivers close to when certain things are happening on the stock market, because people hope that their competitor, because you blocked the GPS, will have issues with their timestamps and will not really know when to sell and do their deals. So you get an advantage if you block your competitor's GPS receiver. Kind of interesting. Really good article.

So, this says that the financial firms are fighting back against attackers interfering with the GPS signal. We have seen a lot of jamming of the GPS signal for quite a large number of years, but lately we have also seen, and I claim it's because of Pokémon, replay attacks on GPS: because people can't be bothered to walk around, they replay GPS signals to the software on the phone, so the Pokémon software believes you're walking around and you can collect everything you want.

On the financial markets there are also new requirements from the 3rd of January, 2018, which say that if you do trading based on voice, that is, you call a trader, the trading company needs to have a timestamp that is accurate to one second. If you have email or SMS-based communication, the trader needs accuracy of a millisecond.

And for automated trading, the accuracy must be 10 to the minus 4 seconds. Not only does the trader need to be that precise, you also need an evidence package and an evidence log that explains what the actual error is. In 5G, as I said, it's about 100 nanoseconds of precision that is required. Nobody really knows why, but that's something that all of the manufacturers have decided, including us, so we have a precision of about 10 nanoseconds for our system. We can ignore that part.
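The tiers just described (voice: 1 second; electronic messages: 1 millisecond; automated trading: 10^-4 seconds) lend themselves to a simple compliance check. This is a hedged sketch of the rules as stated in the talk; the dictionary keys and function name are invented, and the real regulation should be consulted for the authoritative values.

```python
# Required worst-case clock error per trading channel, per the talk.
REQUIRED_ACCURACY_S = {
    "voice": 1.0,        # phone-based trading: within one second
    "electronic": 1e-3,  # email/SMS-style orders: within a millisecond
    "automated": 1e-4,   # algorithmic trading: within 10^-4 seconds
}

def timestamp_compliant(channel, clock_error_s):
    """Is a clock with the given worst-case error good enough for this channel?"""
    return clock_error_s <= REQUIRED_ACCURACY_S[channel]
```

A one-millisecond clock error is fine for voice trading but already fails the automated-trading tier, which is why the evidence log of the actual error matters.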

So, how do you do this? What does a clock look like nowadays? Not really like what's on the wall downstairs or on my arm. This is what a node looks like with two clocks; in reality, you have the thing that ticks down here, then some adjustments for the ticking, and then counters higher up.

That's basically how it works. In each one of the nodes we have two clocks, an A and a B clock, so that if one of them stops ticking, we start to use the other ticking box. So we have the frequency and phase management, the PTP and satellite equipment, and the servers, measurement systems, computers, routers and switches, and that's basically how it works.

So what you do is you have the clock, and you generate a 10 megahertz pulse. You adjust the pulse to within 10 nanoseconds, doing a phase shift and delay, and then you have a frequency distribution amplifier and a measurement unit that makes sure everything is correct and feeds back into the adjustment mechanism. And when you have a very precise pulse, you feed your PTP and NTP servers that everyone is looking at.

Another interesting thing is that, as all of you know, for a signal in a cable, 1 nanosecond is about 1 foot, so when I talk about having around 10 nanoseconds of error, what you have to do is measure the propagation delay of all the cables. You mark all the cables very precisely and then you configure the system according to what the cabling is, so you can compensate for the length of the cables and get the precision accurate enough.
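The arithmetic behind that cable compensation is simple. Note the "1 ns is about 1 foot" rule of thumb holds for free space; in fibre or copper the signal travels at roughly two-thirds of that, around 0.2 metres per nanosecond, which is the assumed figure in this sketch.

```python
# Approximate signal speed in typical cable: about 0.2 metres per nanosecond
# (roughly two-thirds the speed of light in vacuum).
METRES_PER_NS = 0.2

def cable_delay_ns(length_m):
    """Propagation delay accumulated over a cable of the given length."""
    return length_m / METRES_PER_NS
```

A 10-metre patch cable adds about 50 ns, several times a 10 ns error budget, which is exactly why every cable's delay is measured, marked, and configured out of the system.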

The only thing we really had to do ourselves was to develop a hardware-based NTP server to be able to respond to NTP queries over the Internet, but that is done, and we made it so that you can not only control the NTP server but also measure how it works, and then of course compare with the satellites in the GPS system. One of the things that happened was that there was actually a bug in one of the software releases in the GPS satellites, so there was an error in January of last year, but we and a few others had collected data, so we could actually help debug the satellite software that created the issues. So we also do measurements in comparison with the satellites.
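For a sense of what an NTP server actually serves: standard NTP timestamps are 64 bits, counting seconds since 1 January 1900 in the top 32 bits and a binary fraction of a second in the bottom 32. A small sketch of that packing (the function name is invented; the epoch offset and format follow the standard NTP definition):

```python
# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_OFFSET = 2_208_988_800

def unix_to_ntp(unix_seconds):
    """Pack a Unix timestamp into NTP's 64-bit form:
    32 bits of whole seconds and 32 bits of binary fraction."""
    ntp = unix_seconds + NTP_EPOCH_OFFSET
    seconds = int(ntp)
    fraction = int((ntp - seconds) * (1 << 32))
    return seconds, fraction
```

Half a second past the Unix epoch, for instance, packs as seconds 2,208,988,800 with a fraction of exactly half of 2^32; the 32-bit fraction is what gives NTP its sub-nanosecond representational granularity, even though network jitter keeps real-world accuracy far coarser.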

So, this is basically what it looks like, for people that are interested, and this is one node with two clocks. Not so different at all. If you're interested in time, what's really interesting is to go back and look at, for example, the history of longitude. Once upon a time, before GPS, it was actually pretty difficult to know where you were, what your longitude was, so you had to have precise clocks to calculate the longitude. Kind of an interesting story. That's all, thank you.

(Applause)

>> MODERATOR: So, I'll quickly just say we do want to keep moving with the other presentations, but if anyone has specific questions about the issues here, or the ticking box, now would be the time to speak.

>> AUDIENCE MEMBER: I'll speak loud.

>> There is a microphone.

>> AUDIENCE MEMBER: Thank you very much for the presentation. Concerning Galileo, the positioning system being developed in Europe?

>> PATRIK FALTSTROM: Personally, I think it's very, very sad that the system is very similar to the GPS system. It has exactly the same kind of weaknesses, and what I mean by similar is that you have approximately the same kind of math and do the same kind of triangulation and precision. Galileo is better because it's more modern, but it's not a different system than GPS. If you look at the Chinese system or the Russian system, they are much more different, so people like us could use GPS or Galileo and then compare with one of the other systems, like BeiDou, for example. So I would have liked it to be different.

On the other hand, GPS is so worldwide, sort of in use so, of course, it's also convenient that Galileo is similar to GPS because it's very easy to transition from GPS to Galileo which is a more modern system.

But to take a step back, from our perspective and that of the regulator in Sweden that asked us to do this, the larger point is that any kind of distribution of important information, like a time signal, over radio is not something that you can rely on in the case of an attack. Jamming radio signals is extremely common, so you cannot really rely on radio for any kind of communication in the case of a disaster.

>> MODERATOR: You want to say anything?

>> AUDIENCE MEMBER: A very dumb question from a dummy. I understand this is extremely important. Precision is extremely important. How difficult is it to tamper from the outside? Can I just hack it?

>> PATRIK FALTSTROM: This is one of the reasons why you should not use radio. If you're going to hack something, you need to have access to the cable, so that's one way of securing a system.

Regarding our system, no, you cannot access it from the outside. The internal system itself you cannot hack. For certain things in the system you need to physically walk there, and the boxes themselves are down in bunkers in secret locations in Sweden.

That said, there is a reason why we have two clocks in each node: if you tamper with one, the system itself will detect through measurement that there is too large a difference, and it will shut itself down. There is also a reason why there are four or five different nodes: we compare them with each other, and we also compare with GPS and Galileo and the other satellite systems, and each one of the systems will detect if there is tampering and turn itself off if that happens.

So over and over again, not only here but also otherwise, we'll probably come back to that. Redundancy is very important regarding resilience and security issues.

>> MARCO HOGEWONING: That gives me a nice bridge. I don't have slides. My name is Marco, from the RIPE NCC. I usually describe my job as bridging between the technology and everything else. Talking about how easy it is to tamper with these systems: I've been doing this for a long time and know quite a bit about the topic. Jamming a GPS signal is a matter of tactically placed aluminum foil. Basically, 20 cents worth of aluminum foil at the right window or angle, and your GPS clock is gone, to the point where you won't have accuracy, at least not in the nanosecond range we were talking about.

Which brings me to the point I wanted to drive home here: we see cybersecurity as a technical problem. It's always about technology. Oh, somebody made a mistake, and you can secure systems and build them tamperproof and build a bunker and put your clock in and a second clock next to it. But there is a cost associated with it. That's the point.

It's not only about building such an accurate time system but also about the willingness to connect it up, because despite all your efforts, is the clock in the Swedish Prime Minister's office now fixed? Is it now digital and hooked up to your system and actually showing an accurate time, or is it probably still running on cheap batteries, with somebody thinking to adjust it once every few weeks?

>> PATRIK FALTSTROM: It is still the old clocks, but we're close to actually fixing the problem.

>> MARCO HOGEWONING: You go down there every week and reset them. That's often what we do: we just go down there and push the button, and if something fails, oh, it crashed. Reset, try again.

How many of you, I'm not sure, how many of you are old enough to remember Slammer 2001, 2002? A few? WannaCry? Yeah. We've all seen those headlines. Yeah. (Laughing).

WannaCry used the exact same protocol transport 16 years on. Okay, it exploited a different hole in the software, because meanwhile the software did progress a bit, but it used the same protocol transport. Nothing really changed. We just fell in that hole again. And again, WannaCry, I call it a blessing in disguise. It was so visible. Everybody went, oh, cybersecurity. And then they're looking at technical solutions, and all the technical solutions were right there: you should have applied the patch Microsoft made available three months earlier. And Microsoft only made it available because somebody actually found out that a government was sitting on the information that made it all possible, but let's not go there immediately.

So yeah, back to my point. It's you, or basically us humans. Of course, we can build technical solutions to a lot of problems, but it's also about the willingness to apply and I think that is also here.

Again, when we started putting the session together, it was: oh yeah, explain how the Internet works and explain technology. And it's really important to understand how things work, and it also puts a bit of light on how important it is to get time accuracy, and how difficult.

Of course, we can build it, but you kind of have to use it. We can discuss with people at very high technical levels how to make cyberspace more secure, but what's the chance you're going to pay for that, and what's the chance you're actually going to deploy those patches? Because there is always a risk in doing it, yes, but not doing it is also a risk.

I kind of want to leave it there. I could go much further into details, but what I realize is that what we have is not only a technological problem, and I would almost say it's a shame that we don't have a psychologist or someone from the human behavioral sciences behind it, because I think that's very often missing from the whole cybersecurity space.

>> MODERATOR: Okay. Maybe it's best to just go straight on through.

>> MARILYN: I think so, because I'm not a psychologist. I'm maybe not the technical expert at the desk, but maybe the awareness expert. Because you speak about willingness, Marco, but what about knowing?

How are your skills, your technical skills? Because I think there is a lack there as well sometimes. I really want to make this call: stop and teach yourself, because that is where it all starts.

Secondly, it's not only you who has to know what to do; you have to teach your surroundings as well, the people at work, the employees, but most importantly your spokesman. Because if your spokesman doesn't know whether the incident is ransomware or phishing and then speaks to politicians, that's a problem. We have to teach everyone if we want to move forward.

So teaching each other is very, very important, I think. And then we have -- oh, it doesn't work for me either. Then we have to teach kids as well. It's not about one hour of coding a year. We don't all live in Estonia, where they start with coding and robotics at primary school; at least that's not the case in the Netherlands. So we need skilled people to go to schools and teach our children if we want to increase the knowledge in our countries, and we want to have more skilled people in jobs in a short time, not in 30 years or something.

Then I have another -- well, not this one; I will come to that maybe. It's about campaigning. Is it still useful? Shouldn't we stop, because we don't listen? We are all reactive: if there is an incident, then we go and do something about it. But isn't it about knowing what to do when something happens? So I think we don't need to stop campaigning. We take small steps when we have a campaign. We know about the cybersecurity month in Europe, and I think it runs in this country as well, and in the Netherlands we have two weeks called Alert Online. Last year we had 150 companies joining this campaign. One makes a newsletter for the employees, others do something for their clients, but a lot of companies actually join the community. We put out some press releases, and it works. For example, two years ago I talked with the people at the airport and they didn't feel like doing something about cybersecurity. They didn't care. There was campaigning behind the scenes, of course, but not for their employees. And then last year the marketing people called me and said, well, the efforts you made were very good, because this year we joined the campaign, and actually we are going to run a campaign all year round to teach our employees. So I think it works.

And at least it works in that people know when an incident has happened. We read about WannaCry and people know: well, it's us. We have to teach ourselves and we have to make a change. So, what I'm showing you is maybe not exactly for you at work, but it is an example of a successful campaign. It was made for a German awareness network, for families, about 10 years ago; I didn't know David was here eight or nine years ago. It was translated into a lot of languages and it still works today, so I show it to you just as an example, because I really would like to know from you: if we talk about campaigning inside or outside your networks or your company, what works? What are best practices, and what didn't work? Because we can learn from each other that way as well.

Oh, a few minutes left. I'll just do the last part, because I had the slide about the power of repetition, and I think that says it all. Maybe it works.

(chime).

>> Is your son home?

>> Yes, he's up in his room. Come in.

(loud music).

>> Hi, it's (?). Well then, we'll go up.

(audio distorted).

>> Is this the new Anna? You've got (?), come on I show you a real (?).

>> (Speaking off mic).

>> MARILYN: Well, this is the campaign, 10 years old, with no solutions in it, nothing technical. It doesn't tell us what to do, but at least it shows you that this campaign, 10 years old, is still relevant today. So this is my last call to you: go home, don't stop, teach yourself and teach everybody around you.

>> MODERATOR: Okay. Before we throw it to Peter, maybe it's good to just take a second and see if anyone has any questions, either from Marco's input or Marilyn's presentation there? Perhaps we can pass the mic across there?

>> AUDIENCE MEMBER: Hi, just to come back on the point that Marco made. More of an observation than a question, but I think it's interesting when we say that cybersecurity is not necessarily a technical problem but something broader, and that's why, for instance, you see more and more policymakers looking at how to enhance cybersecurity by actually mandating it by law.

We recently had the NIS Directive at the European level, which basically forces companies that might not currently implement proper cybersecurity to actually do so, and I think with the recent ransomware attacks you see more and more providers trying to enforce cybersecurity standards.

>> Sometimes great initiatives, but the problem with most of these laws is that they're very reactive. I mean, the biggest threat you get from this directive, and the biggest threat you get from the GDPR, is that you will get a fine if you screw up. It might help in that it puts a price on not doing security, which might incentivize people to invest more in proactive security, but it's still a tradeoff, and at some point it's the famous (?) problem: how much do I spend on preventing disaster, and how much will it cost when it happens?

And I would really like to get people beyond that mindset. The really cheap thing is -- well, it's not cheap, but in the long run, like Marilyn said: educate people, build quality products, and be willing to pay the price for such quality products, and then we don't need that law.

>> Thank you. It's good that you started and not me. It's a common criticism of the NIS Directive, I think. One of the problems with the directive is that it focuses not so much on people doing the right thing, but on requirements that you should report whenever you think you have a security incident. Okay.

How many people in a market-based economy that have competitors do you think will tell the truth all the time, and not lie or try to hide the truth? So, first of all, the successful CERTs that we see around the world are the ones that are built bottom up, by organizations seeing the need for coordination of various kinds of incidents, and then having them give back and produce public reports, because the goal of the CERT, and the reason why people should report, is reporting what happened.

The CERT can then produce a report back to you and to others, so that you can learn without actually being the target of an incident yourself. So, you learn from each other. That is the overall goal, and that is what a functioning CERT does, and I think that is completely forgotten in the NIS Directive: the reporting back, the requirement on whoever receives the report to actually produce valuable data for the market, is not there.

The second thing that is unfortunate in the NIS Directive is that it is not harmonized with the electronic communications directive. I think the goal was, originally, that either you are covered by the electronic communications directive and report incidents through that channel, or you report under the NIS Directive, but that is not how the NIS Directive is written, and in Sweden there are certain organizations that actually have to report twice under the proposed implementation. So, I think there are lots of things that could have been done better in the NIS Directive, but on the other hand, it is on its way to being implemented all around Europe, so let's hope for the best. The number one thing: a well-functioning CERT primarily produces good reports.

And then, if they produce good reports, they can use the need for good reporting to them as an argument to actually get data. Carrot and stick: don't always lean on the stick. Thank you. Sorry for being so long, but I have some emotional feelings there, as you noticed. (Laughing).

>> MODERATOR: Pass it down. Any other questions so far?

>> Any other emotional responses?

>> AUDIENCE MEMBER: Not an emotional response, but, as was also mentioned before, in 2001 there was already something looking quite similar to WannaCry. Actually, business had already had an announced dress rehearsal for a security concern, which was the Y2K bug. This was the year 2000 incident where many, many pieces of software became obsolete because years were stored with only two digits, but this was the moment where everybody agreed that something had to be done, and actually major companies, for the first time, really developed useful procedures on how to handle future incidents.

The funny thing is, this is apparently quite forgotten nowadays, although most of today's security policies in major companies most certainly go back to the Y2K bug.

>> Fair point. It caused a lot of people to test their software. That was a unique event. My technical brain asks: everybody touched age-old software, but did they actually look at the security problems, and did they introduce a few more in the urgency to fix the Y2K bug? That's often what I see. One problem triggers the next, and then: oh, we really have to rush now and fix things, and then they don't. But I'll leave it, I think, for later to talk a bit more about that.

>> MODERATOR: Maybe that's a good opportunity to go to Peter who is the last presenter here and then we'll throw open the floor again.

>> PETER VAN ROSTE: Thank you. I have little to present except the obligatory part that we need standards, which is usually a response to cybersecurity challenges. And since all the emotional comments have already been made, I can add another one to the list of directive things. This is an old story of parents preparing the (?) and instructing the kid to watch when the milk boils over, and then they leave the house. They come back and it stinks really badly. And they ask the kid: what happened? Well, it was exactly half past 5:00 when the milk boiled over. So that is the reporting requirement carried out with precision, and that's exactly what is kind of happening and should not be encouraged. The secret, implicit instruction was, of course, to prevent things from happening instead of reporting precisely, and I guess that's another aspect of putting the wrong incentives in place.

Enough bashing of this directive and everything around it. So, in the technical community, we do a lot of things. We develop standards, they're implemented, and then things are operated. I would like to draw some attention to the role of standards in the technical community, that is, technical standards. We know there are standards of behavior; we sometimes even comply with them. But when we talk about standards in security and in the technical environment, what's the role of standards? Why do we have them in the first place? It's important to remember that they are, first and foremost, there to ensure interoperability, so that different systems doing the same thing can talk to each other and understand each other, so that the syntax and semantics, the meaning of the packets and the messages exchanged, are interpreted equally on both sides of the communication channel.
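To make the interoperability point concrete, here is a minimal sketch of a sender and receiver agreeing on one wire format. The format itself (a 2-byte message type and a 4-byte payload length, both big-endian) is invented for illustration, not taken from any real standard:

```python
import struct

# Hypothetical wire format: 2-byte big-endian message type,
# 4-byte big-endian payload length, then the payload bytes.
HEADER = struct.Struct(">HI")

def encode(msg_type: int, payload: bytes) -> bytes:
    """Serialise a message exactly as the (made-up) spec prescribes."""
    return HEADER.pack(msg_type, len(payload)) + payload

def decode(wire: bytes) -> tuple:
    """Parse a message; both peers must apply the same rules."""
    msg_type, length = HEADER.unpack_from(wire, 0)
    payload = wire[HEADER.size:HEADER.size + length]
    if len(payload) != length:
        raise ValueError("truncated message")  # agreed error handling
    return msg_type, payload

# Round trip: sender and receiver understand each other only
# because they implement the same syntax and semantics.
wire = encode(7, b"hello")
assert decode(wire) == (7, b"hello")
```

Both peers interoperate only because they implement the same packing rules; change the byte order on one side and the round trip silently breaks, which is exactly what a standard exists to prevent.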

Of course, at least when we look at the standardization bodies that are most important for the Internet, which are the Internet Engineering Task Force and the W3C, and a couple of others more recently, they also mandate security considerations, so that when standards or protocols are developed, the engineers and other people involved not only look at the cheapest or easiest solution but also take into account security issues that might be buried somewhere deep, deep in the intricacies of the protocol.

Even if those security considerations are used in developing the standards (and the standards are how the bits and bytes go back and forth on the wire), and even if they are adhered to, that doesn't necessarily make for safe and secure products, because there is more to it than that.

These standards usually address the implementer, because most of what we talk about here is dealt with in software. Of course, you can do some things in hardware, but then it's the hardware manufacturer or the chip manufacturer, for example, as we've seen in Patrick's example of doing NTP in hardware for acceleration.

Anyway, these standards usually address the manufacturer, the implementer, and there are lots of opportunities to do things right, but there are also lots of opportunities to miss the point to a certain extent.

No software is actually bug free. We know that. Good standards may still result in software with bugs: software with deeply buried bugs, and software with bugs that are only discovered ages after the software was first deployed. This is one aspect that bites us very badly with the Internet of Things, for example, with some of these devices that are attached to a wall and then left alone for a couple of years, and after that nobody remembers how to fix the software.

There are also dependencies. Usually, software isn't built from scratch. People use so-called libraries, which are like building blocks from other places that they use in their software. Of course, that creates dependencies. These building blocks can have bugs, and these bugs can have consequences in one implementation but not in another. That is, even if a library has been in use for ages and never triggered a significant incident, you use this library now for some other product, for some new implementation of a new protocol, and it turns out that there is a bug in that library, in that piece of software, that isn't easily identified because it's somewhere deep in the code, and bad things happen.
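The latent-library-bug scenario described here can be shown with a toy example. The library, its bug, and both callers are hypothetical:

```python
# A (made-up) library helper that has worked for years: it averages a list.
def average(values):
    return sum(values) / len(values)   # latent bug: empty input divides by zero

# Old consumer: it happens to guard against empty input,
# so the library bug never surfaces in this implementation.
def daily_mean_temperature(readings):
    return average(readings or [0.0])

# New consumer of the same library: no guard, so the
# years-old bug finally bites in this new product.
def packet_loss_ratio(samples):
    return average(samples)            # crashes on an idle link with no samples

assert daily_mean_temperature([]) == 0.0
try:
    packet_loss_ratio([])
except ZeroDivisionError:
    pass  # the latent bug, triggered only by the new caller
```

Same library, same bug, but only the new use of it has consequences, which is why a clean track record says little about a dependency's safety in a new context.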

Now, finding out what the real issue is and where the bug originated is really hard, and it's even harder to fix. Much of the success of the Internet, I should say, is based on Open Source software. We've heard before that this gives everybody the opportunity to review the code. Good luck.

In the earlier days, when we had less code, many people actually did that. There were code reviews, and people would actually be involved in developing the code further anyway. That was the small core of the Internet then, so bugs would turn up sooner or later.

More recently, we've had bugs even in widely used security software that surfaced all of a sudden. They weren't discovered before, and it turned out that even though everybody had the opportunity, in theory at least, to review the software, that hadn't happened. So, open software is a big chance, a big opportunity, but there, of course, needs to be money and other investment to actually check the security of that software, and that is something that can be organized without sacrificing or jeopardizing the basic ideas of Open Source software. It needs to go hand in hand, and again, that goes a little beyond standards.

Some other things, oh yes, talking about fixing software. Ages ago there was a new version of software every now and then. You were usually incentivized to patch software or deploy a new version because of new, interesting features. These cycles have changed: software has quicker release cycles these days, but not everybody actually applies those patches. Sometimes you don't even have a license for the software. We have similar challenges there.

Now, we have new functions in there that are double-edged swords, like automatic updates. That sounds like a great solution to the problem: you can just install software and it will phone home every now and then and fix itself.

I usually turn this function off. I wouldn't necessarily recommend that everybody else do that, but there are reasons for doing it. I'm paranoid, so that's one thing. I'm not the only one, hopefully.

The other reason is that there are situations where you don't want the software to waste your bandwidth downloading several megabytes of updates while you are on a very low bandwidth link. Other people deactivate it for other reasons. The important point is, of course, to then do manual updates and get things deployed and fixed quickly.

The other problem, and that's the paranoid part, is that many people don't trust the vendors, because the software doesn't only phone home for updates; it also reports interesting things that you've been doing, and many people don't like that. So part of the industry has actually disincentivized people from doing the right thing and keeping their software updated.

Another observation is that bugs can spread easily this way. You have a big population out there, a big swarm of installed software; you're using your software, the bug is out there, and usually you would think: oh yes, but the next update cycle will get the software fixed. The strange observation is that the disease spreads really, really quickly, but the vaccine usually doesn't follow in the same way, for some weird reason. We can spread bugs more easily than we can spread the fixes. I'll leave the scientific proof to somebody else; call it an observation.

So, the final thing about the whole suite of standards is, I guess, complexity in that environment. We have more and more standards, more and more details in standards, more and more software to deploy and to fix, and interestingly, we see that old mistakes are repeated. We've just heard from Marco that, basically, the same security holes have been exploited over and over again, but we also see newly deployed software and newly implemented standards repeating mistakes that we thought had been overcome, but haven't.

Part of this is probably due to new players entering the field. Again, not blaming the Internet of Things, but pointing to it. There are a couple of new industries actually discovering the Internet and bringing software onto the Internet that haven't done so before, and they have a very, very steep learning curve. I'm not jealous. That is one of the reasons we see old bugs popping up again.

That's like setting the scene: standards, technical standards, and we need to look at the implementers. And then, finally, standards usually have certain parameters that are left open to the implementers (I wouldn't say ambiguities, but opening clauses, as people in the legal sphere would probably say), so that implementers have some leeway for decisions. Most importantly, they leave options for the operator, which could be a professional operator but could also be the home user, to tweak the software and enable or disable features, or set parameters like temperature, the strength of light, or something like that.

Sometimes the standards bodies can actually give recommendations on how to choose these parameters, like timeouts: how long do you wait until you decide that the clock that Patrick operates has gone wild and you need to switch to the other clock? But the standards body is not always in a position to do that, because standards developers are not necessarily the same folks that operate things, and there is more experience in the operators' group, so that's another part of the technical community that needs to be involved in developing these guidelines. All of these together, the standards, the implementations, and the operators, need to work together to get the whole system secure. And that's it.
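As a sketch of the kind of operator-chosen parameter discussed here, this is one illustrative way (not taken from any standard) to decide that a clock has "gone wild" and fail over to the other source. The tolerance value and function names are assumptions:

```python
# Operator-chosen knob: how far apart two time sources may drift
# before we treat it as a failure. Illustrative value, not from a spec.
MAX_DISAGREEMENT = 0.5  # seconds

def select_time(primary: float, secondary: float, local_estimate: float) -> float:
    """Use the primary source, but fail over when it has gone wild.

    With only two upstream clocks we cannot outvote a liar, so on
    disagreement we keep whichever source stays closest to our own
    running local estimate.
    """
    if abs(primary - secondary) <= MAX_DISAGREEMENT:
        return primary  # sources agree: trust the preferred one
    if abs(primary - local_estimate) <= abs(secondary - local_estimate):
        return primary
    return secondary

# Primary jumps by ~1000 s while the secondary stays sane: we switch over.
assert select_time(primary=2000.0, secondary=1000.2, local_estimate=1000.0) == 1000.2
```

The point is that the standard can define the protocol, but the threshold and the failover policy are exactly the leeway left to operators.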

>> MODERATOR: Thank you.

>> MARCO HOGEWONING: Stepping on your toes and sliding into the moderator's chair, but you mentioned something really important and I want to hook into that, and that is the message that bugs are unavoidable. To brutally drag it out of context: more is less. I know that in the security field, one of the solutions is to go multi-vendor. You add redundancy: if you know the problem exists, you put another system next to it. We've seen it with Patrick. We put two clocks together; in case one breaks down or is tampered with, I've got another one.

But your setup is exactly the same everywhere. There is no multi-vendor: if one of your clocks has a bug, the other one has the same issue. (Laughing). Care to explain, and then bring it to the educational point of this session?

>> PATRIK FALTSTROM: Yeah. Just because of that reason, the way the system is designed, it's actually part of the agreement that you sign with us if you take time from us via PTP, which is what most cell phone operators use. It actually says in the contract, as part of the regulation in Sweden, that you are not allowed to use the PTP from us as your only source. You must, by definition, use another source. So the goal of the regulator and the Swedish government in giving us money to run this is to ensure that everyone has two different sources, not only one. It's not to give people one source; it's to give people the second one. That's what we're doing.

>> And to add to that: yes, multi-vendor is a good idea. You can take it to the extreme, but then, of course, you increase the operational complexity, because you need staff to operate the various flavors of systems anyway. And the other point, pointing back to the libraries: finding products that have a completely disjoint set of software, including all the libraries down there, is really, really difficult. But in general, I would support your advice.

>> MODERATOR: Okay. So, as I said, Peter was the last of the arranged speakers, and we have half an hour left in the session, which is very much what we planned. I want to throw the floor open, and obviously any questions or comments relating to any of the points our speakers have made today would be great. But this is an educational session on cybersecurity issues, and as I've said, there has been a lot of cybersecurity discussion in the news and in the air recently, so this is also an opportunity for any other questions you have about the technical realities behind these cybersecurity problems. You have, essentially, a group of experts here available to you, so the floor is very much open.

>> AUDIENCE MEMBER: I just wanted to point out that this multi-vendor thing is also a double-edged sword. If you have two different solutions, the likelihood that one of them will fail is, of course, bigger than the likelihood that both will fail, and whether that is good or bad depends. For example, think of two firewalls in a (?): the chance that both of them have a hole that someone gets through is much smaller, but the chance that one of them has a little (?) that disables the thing is a bit bigger. So it depends on the context whether you should have multiple solutions.
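The tradeoff described here can be checked with two lines of arithmetic, assuming the two systems fail independently (an assumption that rarely holds exactly in practice, e.g. when products share libraries):

```python
def p_any_fails(p: float, n: int) -> float:
    """Probability that at least one of n independent systems fails."""
    return 1 - (1 - p) ** n

def p_all_fail(p: float, n: int) -> float:
    """Probability that every one of n independent systems fails at once."""
    return p ** n

# With two diverse systems, each failing with probability 0.1:
assert abs(p_any_fails(0.1, 2) - 0.19) < 1e-9  # some failure is MORE likely...
assert abs(p_all_fail(0.1, 2) - 0.01) < 1e-9   # ...but total failure is rarer
```

So multi-vendor redundancy trades more frequent partial failures for a lower chance of total failure, which is exactly the double-edged sword being described.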

>> If you have two firewalls and do an announcement from both of them, then if you have an issue in one of them, we withdraw the route from that firewall and traffic is redirected to the other. The good thing is that in such cases you have two firewalls from two vendors, both in place. So yes, you get attacked, and yes, there is a higher risk, of course, but on the other hand, when you have an incident, it can be an advantage to have the second one, an alternative, already in place.

So, I completely agree with you, but I also wanted to expand on what you said, just to explain to the other people in the room that this is exactly how you have to think about redundancy of time: to make sure that things work, should you go multi-vendor and increase the complexity? In our case, we run exactly the same hardware and software on all of our nodes (two different vendors on each node for certain things, but otherwise the same vendor), and instead the people that buy our service have to go to a different provider than us as well. That's the redundancy. And the important thing for people that buy services from each other, because that's what we do all the time, is that we tell everyone else how we have designed our systems, so that the other party can make informed decisions.

>> I wonder if there is anything else. Otherwise, I have a question for the room. I was jokingly saying to Patrick: are we going to offer to fix their computers? They're here; we're happy to look into your problem. Anybody? Any burning security questions that we can answer?

>> AUDIENCE MEMBER: Not a security question, but just a comment on a previous thought on Open Source software. There are a couple of initiatives out there, like open security bounties, that are actually looking at Open Source in that sense. But in another sense, to be blunt, there are many (?) who ship Open Source software without telling you that they are using Open Source components, and I would consider that to be worse in some sense, because they don't contribute back to the community, and they don't give you the ability to realize that you might be vulnerable to some Open Source component vulnerability that is actually affecting the general public.

>> Yeah. Thanks for pointing that out. I think you're partly referring to initiatives that have been follow-ups to the incident I spoke about regarding the OpenSSL library. One point, and I'm not sure I got it correctly here, is that vendors that ship closed-source software can also make you vulnerable. So, vulnerability isn't a feature of the Open Source nature of the software, but I would agree with what you said about contributing back.

It depends on the license model of the Open Source software and so on and so forth, but my point there was that the illusion that everybody could look into it doesn't automatically lead to anybody really doing that, or to it being done in a sufficient way. And more and more critical infrastructure, critical in quotes or not, is relying on software, whether open or closed. That software needs closer inspection no matter what. So yes, complete agreement.

>> You've got a mic.

>> AUDIENCE MEMBER: Yep. During Peter's presentation, when he was speaking about automatic updates and vendor patches, I was reminded of a case, I think it's ongoing in the Netherlands, where an association was suing a software vendor because the vendor had decided it would no longer provide patches for software that was 10 years old. I was wondering, from the panel's perspective: do you think a provider of software should be mandated by law to give out free patches for as long as the piece of software is viable, and is that even commercially feasible for providers?

>> Before I leave the answer to Patrick: the case you're referring to is a consumer organization suing a large phone manufacturer for not releasing software updates for more than two years. As I recall, the judge set that complaint aside. Basically, the ruling said, apparently based on some general EU directive, that two years was enough, and tough luck. It's their choice and you're not paying for it, I think, was more or less the reasoning. But I'll leave it to Patrick to give you an answer.

>> PATRIK FALTSTROM: I think there are two aspects we have to remember. One is the pure commercial aspect: what is the responsibility of a vendor that delivers something to the consumer? Consumer rights are used there to a very large degree. But I'm going to comment on another part of the story.

The reason I think open standards are important is not so much whether to follow a standard or not; what is important is that various equipment and various services are, what I call, replaceable. Because if it is the case that a vendor misbehaves, which might be the case here, at least from the consumer organization's perspective the vendor was doing the wrong thing, then yes, that's something you have to deal with in court. But what's much more important is that the end user can choose a different phone. The way out is to buy another kind of device. That's why, for example, I think directives that make it possible for the end user with an FTTH connection to connect whatever CPE they want, and not only the one that comes with the Internet connection, are much more important. Because if it is the case that, as a consumer, you screwed up and got some gear that doesn't work, it is really, really important that you can go down to a shop, buy something from a different vendor, and connect it yourself.

So, we have to talk about different kinds of things here, and I think we should not forget the replaceability part, because having the ability to at least get out of the mess you're in is, I claim, really, really important from a security and resilience standpoint. But of course, there is this other thing, which we cannot forget: what is the responsibility of a vendor to ensure that something continues to work over longer periods of time?

So, we need to think about both of these aspects. Replaceability, I think, is something that is very, very important.

>> AUDIENCE MEMBER: So I have one question which, I think, comes back to the issue of standards. You mentioned, almost in passing, the growing complexity of the web of standards and different protocols. We're certainly very aware that more and more mobile operators are coming online, and that IoT is happening. Standardization isn't now happening in a single place like the IETF; there are a lot of other standardization bodies, the IEEE, any number of them. Is the complexity of all those standards having to work together a sustainable model? Or is it going to make it too difficult to actually solve security problems, or even foresee what the security problems are, going forward?

>> That's a very good question, which is usually the diplomatic way of saying really difficult. (Laughing).

So, I'll try to give two responses, but I also invite the other panelists to chime in. I think that the sphere of standardization is partly partitioned, in the sense that the 3GPP and others are closer to the wire and to the radio waves than the IETF is.

I think it's important that the standards bodies have a common understanding of their activities, so that they don't overlap too much, so that nobody steps on anybody's toes, and more importantly, so that the expertise is not spread around. We know the core Internet protocols are dealt with in the IETF, things happening above HTTP, like the web layer, happen in the W3C, and whatever concerns radio waves goes to different organizations, in Europe and elsewhere. If I'm not mistaken, there are open standards bodies that have actually agreed on divisions like that; the IEEE is involved, if I remember correctly. So it is clear, for certain ideas and certain standardization projects and efforts, that these things go to the right place, and where there are overlaps, the standards bodies cooperate by way of liaisons, like the IETF does with the ITU and IEEE and others.

>> I think it comes back to one way of looking at it; it goes back to what I just said about replaceability. Today it's not so much up to us as users what kind of standard is in use. Standards, we have to remember, are products on a market, and the vendors choose what standard they want to use. Just look at the IoT: I know of five different consortia that claim they're dealing with IoT standards, each choosing its own path forward, for example in how things are controlled.

So, I think the problem for us consumers is: I want a light switch, I want a thermostat, and I am so far away from choosing which standard they use. You cannot even think about it; you don't even know whether there is WiFi or a SIM card in the device. I buy a car, and there is a SIM card in there, and I cannot even change the mobile operator.

It carries over to services: I use the Facebook service, which I can only get from the company Facebook. I cannot choose a different provider of the Twitter service than Twitter. That is completely different from web and email, where I can choose who should be my mail provider, and I can even run my own mail server. I cannot do that with Twitter.

The big question for us as consumers is whether we want the Facebook world, where you get the thermostat from the thermostat vendor and it's up to them how they implement the communication, or whether it's more like the email version, where I can choose what client I use.

>> I fully agree with you, but I see the looming problem of liability and responsibility, because I can give you the standard, but usually the security problem is in the way the standards get implemented. It's about writing code: the more (?) I give you, the more faults you seem to introduce. I'm looking at random people; I'm not even sure if you're writing code. But that just seems to be the overall problem with cybersecurity: everybody is an expert. And I greatly agree with Patrick's point that things should also be replaceable. So, yeah, I would rather see you all become experts.

>> AUDIENCE MEMBER: I would like to follow up on Patrick's comments again. We have an interesting tradeoff. If we have a Facebook-type environment, it's like a monoculture in agriculture, with everybody using the same thing; that is more likely to be well protected than the spread-out alternative, where many different people run their own mail servers. There, it's likely that something breaks, but less likely that everything breaks. That's a tradeoff we need to make. In the long term, I tend to think the latter is better: to have lots of small failures now and then, rather than taking the risk of a total disaster at some point. But it's not an easy issue.

>> MODERATOR: Okay. Looking around. Do we have any other? Yep.

>> AUDIENCE MEMBER: Back on the legacy topic: I recently had a discussion on how secure all the keys are. As I understand it, access to all systems is, of course, secured by keys of a given length, and (?); if not, there is no necessity to use too complicated keys.

I remember when 40-bit keys were something that was under (?) control, but isn't it almost certain that almost any key of the present day will, in 10 years' time, be just ludicrous, given the possibility of breaking it by brute force?

>> PATRIK FALTSTROM: Okay. Just like Peter, I'm saying that's a difficult question (Laughing) to answer. Let me explain a little from my perspective. I was working with DNS, like Peter, but Peter thinks much more about the theory behind this, so I hope that you can correct me or fill in some blanks when I try to explain where I stand.

First, some people think that keys get worn out over time, which is wrong. Whether you have to replace your key or not has more to do with the evolution of computation, just like you're saying: how many prime numbers can you actually calculate on a computer within a certain amount of time? So basically, from my perspective, from a pure math perspective, there are two reasons for changing a key. One is that attackers have enough CPU power to actually do the calculations they need. The other is that weaknesses of various kinds are found, like collisions, which means you have to move over to other algorithms, because the weakness is in the algorithm itself.
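The CPU-power reason can be quantified with a back-of-the-envelope calculation. The guess rate below is an assumed figure for illustration, not a measurement of any real attacker:

```python
# Assumption: a well-resourced attacker trying 10^12 keys per second.
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 3600 * 24 * 365

def years_to_exhaust(bits: int) -> float:
    """Worst-case years to try every key of the given length by brute force."""
    return (2 ** bits) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

# A 40-bit key (once export-controlled) falls in about a second...
assert years_to_exhaust(40) < 1e-6
# ...while a 128-bit keyspace stays far beyond brute force
# for any plausible growth in computing power.
assert years_to_exhaust(128) > 1e18
```

Each extra bit doubles the work, which is why brute force alone does not catch up with modern key lengths; in practice, keys fall to the second reason, weaknesses in the algorithms themselves.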

In reality, there are no other real reasons to change a key. For example, in DNS (?), I was one of the people who said it's really important to rotate the keys. Now we say: wait a second, it's probably more important to keep them, so the KSK can be kept for a very long time.

That said, one thing that is really important: the day you have to change the key, you need to know how to do it. It's like backups: you don't know whether the backup actually worked unless you try to restore it, and then you may find that it didn't.

So, for purely operational reasons, I see it as a good thing to actually rotate keys, to be able to know how to do it, et cetera. Who in this room has had problems when a web server certificate ran out and you couldn't remember how to buy a new one? If it happened every month, you would automate it; instead, you have to do it manually. So, three reasons to rotate a key: one, to keep your operations up to date so you know how to do it, so it's normal labor that you do, just like having lunch; two, that the evolution of CPUs makes it necessary for computational reasons; three, that there are weaknesses in algorithms. Otherwise, there is no reason.
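The certificate-expiry example above lends itself to exactly the automation Patrik describes. As a minimal sketch, Python's standard library can turn a certificate's `notAfter` string (the format returned by `ssl.SSLSocket.getpeercert()`) into a days-remaining figure; the date used here is a made-up example, not a real certificate.

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days until a certificate's notAfter time, e.g. 'Jun 16 14:30:00 2027 GMT'."""
    expiry = ssl.cert_time_to_seconds(not_after)  # parses the GMT timestamp
    current = now if now is not None else time.time()
    return (expiry - current) / 86400

# Hypothetical expiry string; in practice it comes from getpeercert()['notAfter'].
remaining = days_until_expiry("Jun 16 14:30:00 2027 GMT")
if remaining < 30:
    print("Renew now!")  # automate this check instead of remembering it manually
```

Running such a check from a scheduled job turns "I forgot how to renew" into routine labor, which is Patrik's first reason for rotation.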

>> Brief interruption. If you are running DNSSEC, have a chat with our colleagues from ICANN, because they can describe this better.

>> (Speaking off mic).

>> I'm not trying to correct Patrik, but just maybe adding one angle or perspective. Keys can be used for different things, like digital signatures or encryption, that is, keeping information confidential. Especially for information that needs to be, and should be, kept confidential over a long period of time, the key needs to be of a form, that is, a length, that is considered safe until the end of that time, like 30 years or something, depending on what your requirements are. Then you might consider rekeying, getting rid of all the copies, and so on and so forth. Everything else I probably agree with. So, a discussion about key length and security always needs to take into account what the key is actually used for, and different applications have subtly different requirements. Yeah. That's it.
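Peter's "safe until the end of the protection period" idea is sometimes expressed as a lookup from protection horizon to minimum key strength. The sketch below is an editorial illustration, loosely modeled on common guidance such as NIST SP 800-57; the specific year boundaries and bit sizes are simplified assumptions, not a normative table.

```python
# Illustrative mapping: data that must stay confidential longer
# needs a stronger key from day one (figures are approximate).
RECOMMENDED_SYMMETRIC_BITS = [
    (2030, 112),  # protection needed only through ~2030
    (9999, 128),  # protection needed beyond 2030
]

def min_key_bits(protect_until_year):
    """Smallest illustrative symmetric-key security level for a horizon."""
    for until, bits in RECOMMENDED_SYMMETRIC_BITS:
        if protect_until_year <= until:
            return bits
    return 256  # absurdly long horizons: be conservative

print(min_key_bits(2025))  # short-lived data
print(min_key_bits(2045))  # 30-year confidentiality requirement
```

The design point is the direction of the lookup: you start from how long the data must stay secret, not from what is convenient to deploy today.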

>> AUDIENCE MEMBER: So, what I gather from the answers is: actually, yes, there is an issue, and something should be done, but most probably there is at least a big chance to mess things up if things don't work. So, wouldn't that be something that should be taken into account for public education to a much higher degree? In particular, encryption and switching passwords.

>> Now, switching passwords is absolutely different from choosing keys. A strong password is a strong password, just like with keys and key length, depending on what you use them for. I would say it's much more important to, first of all, make people actually validate DNS, validate encryption, use hardware keys and good passwords. It's much more important that people use good passwords than that they rotate passwords. Rotating passwords was something people talked about in the previous millennium. We don't anymore. We talk about choosing good passwords.
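"A strong password is a strong password" has a simple quantitative reading: for a randomly chosen password, strength is length times bits per character. The sketch below is an editorial illustration (the naive formula assumes truly random selection, which human-chosen passwords are not), but it shows why length beats forced rotation or complexity rules.

```python
import math

def entropy_bits(charset_size, length):
    """Naive entropy of a randomly chosen password: length * log2(charset)."""
    return length * math.log2(charset_size)

# A long passphrase from a modest charset beats a short 'complex' password.
print(round(entropy_bits(26, 20)))  # 20 random lowercase letters ~ 94 bits
print(round(entropy_bits(94, 8)))   # 8 chars from printable ASCII ~ 52 bits
```

Rotating a 52-bit password every 90 days never makes it a 94-bit password; choosing a longer one does.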

>> I'm actually no crypto expert. I understand the high-level things, and then again, I think part of the educational point is behavioral. This morning we already flagged talking about sustainability in (?). Part of that is: should you really have that information lingering around for 20 years when there is a possibility that your key gets broken, or should we, as Marilyn said, make this a collaborative effort to educate people in making sure that information is not lingering around in the first place?

As Peter said, if you need to keep it for 30 years, then plan ahead. But in what I see around me, a lot of these issues exist just because people don't plan ahead. And actually, what is a credit card transaction that's two years old worth? Well, not much, until I find it and see that the credit card is still valid for another year. My bank now gives out credit cards valid until 2030. Talking about rollover, that's also a different topic. (Laughing).

>> They probably found a way to make you liable for the losses. Anyway, chiming in on the password thing: passwords are so yesterday anyway, but everybody is using them for a variety of reasons. And pointing back to the rants about organized security and security standards: many security standards will say you shall have a password policy, and then people ran around with a policy of rolling over passwords. It was a good idea at the time, but it ignored the human factor, because what happened is people were confronted with "you now have to change your password", and they immediately changed their password and, of course, it wasn't a good password.

Now, some of the policies have been adapted, and they try to dictate or encourage the user to choose a good password by explaining that it needs to have a digit, an upper-case and a lower-case letter, a special character, and so on and so forth. From a tech perspective that is helpful, because they now know a bit more about the password than they would otherwise; but yeah, the average user is a bit challenged. The most important thing is this: if you still use passwords, and there are occasions where you can't get around them, never use a single password in two different venues. Don't reuse them. That's the most important part.

Now, what we all do then is have a password vault on our system with the derived passwords. And what do we do if the smartphone is stolen and somebody gets access to that? So, there is no really good solution, but the real point here is that end-user access to systems is a very important part of the puzzle of network security.
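On the service side, the flip side of "don't reuse passwords" is "don't store them recoverably". As a minimal sketch (an editorial illustration, not anything described in the session), Python's standard library can derive a salted, slow hash with PBKDF2, so a stolen database does not directly reveal passwords; the iteration count here is an assumption to be tuned to current guidance.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a salted PBKDF2-HMAC-SHA256 digest; store salt, iterations, digest."""
    salt = salt if salt is not None else os.urandom(16)  # fresh random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, it, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, it, stored))  # True
print(verify_password("Tr0ub4dor&3", salt, it, stored))                   # False
```

The per-user random salt is what defeats precomputed tables, and the deliberately slow derivation is what keeps offline brute force expensive even for weaker passwords.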

>> PATRIK FALTSTOM: I just want to mention, looking around here, that many of you have the same thing as we do. I got the question earlier: what happens if people compromise the clocks? Our password policy for many of our systems is that you are not allowed to use a password. Instead, we have various kinds of hardware mechanisms that actually lock people in and out, so you simply cannot access the system otherwise. So, if you are using certain systems, passwords are not allowed.

>> MODERATOR: We have a, I think, remote question.

>> We have a question, and I quote: "I would like to put an example; I think it relates to an earlier discussion. A private company wants to provide an XYZ service in a country, for example in the global gaming industry. What aspects do they have to consider, and who runs their network management versus the players? And what standards do they have, and who is behind them?"

>> I hope this is still relevant?

>> Always.

>> That's a difficult one. (Laughing). What you have to consider largely depends on the country. Some countries have much stricter regulation than others when it comes to online gaming. You most certainly also want to get paid, so you probably also have requirements from authorities, the financial sector, and the bank handling your payments. This is really a very complex world, and I guess most of them have a different standard, and some people say that's often the case. Also, consider privacy: on the one hand, whenever we have a security discussion, privacy is a concern; on the other hand, a lot of security discussions are focused around retaining data. At that meta level, there is always a tradeoff. I'm not sure we are in a position here to make a decision on that tradeoff; I think everybody should make that decision for themselves, but it's at a really high level. Yeah, when building a gaming service, I don't know. Patrik, any low-level ideas, apart from making sure that the time on all your game servers is of reasonable accuracy?

>> PATRIK FALTSTOM: I was taught gamers are big on latency, so that's -- yeah. The stock market is a big game, right? So, yes, I think you're absolutely right, and we now go into those areas where you have cross-jurisdiction issues and services provided across all different kinds of borders. It's kind of messy, just like we mentioned: directives are supposed to be easy to communicate, but we see that it's difficult, and the implementations end up being different. And if we go into gaming, oh my lord, that's really complicated. Coming from Sweden, I can tell you that, because Sweden has specific legislation on online gaming, as you have in Finland.

>> I just had a little story on a related issue. Sometimes too much security becomes a problem. I had a case where the system administrator changed the encryption key and then accidentally died before he could tell it to anybody, and the data was never recovered.

>> PATRIK FALTSTOM: Yep. That has happened. I've been in a few of those cases, and it's very tragic. It is just one of those reasons why you really have to plan for all different kinds of things, and planning for key people passing away and other such events is part of the resilience of a company. It's not easy to write those policies. When I started at Netnod, one of the first things I did was look through the policy for the company and make sure that the staff had the ability to continue running the company even though I myself personally did not exist anymore. Actually, writing that policy is kind of scary, but it's something that you have to do.

>> MODERATOR: Okay. As the clock counts down, in Patrik's particular way of measuring it, I note that we're about three minutes from the end of this session. I would like to offer our four speakers here the opportunity to give any final thoughts, in case they want to summarize what they've been saying.

>> PATRIK FALTSTOM: I can start. I think we have been talking about a couple of different things here. It's important to think about replaceability and redundancy: use different vendors, and make sure that you know how each of the vendors' services is implemented, so you can make informed decisions. This is one of the reasons why we at Netnod explain, just as I did here, how our time service works. When you use our NTP or PTP service, you can choose, based on your context, whether to use us or another provider, et cetera, so that you get the quality you need.
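For readers unfamiliar with the NTP service Patrik mentions: NTP timestamps count seconds from 1 January 1900, while Unix time counts from 1 January 1970, so every client converts between the two epochs. This editorial sketch shows the conversion (the constant is the well-known 70-year offset, including 17 leap days):

```python
# Seconds between the NTP epoch (1 Jan 1900) and the Unix epoch (1 Jan 1970):
# (70 * 365 + 17 leap days) * 86400 seconds.
NTP_UNIX_DELTA = 2_208_988_800

def ntp_to_unix(ntp_seconds, ntp_fraction=0):
    """Convert a 64-bit NTP timestamp (seconds + 2^-32 fraction) to Unix time."""
    return ntp_seconds - NTP_UNIX_DELTA + ntp_fraction / 2**32

print(ntp_to_unix(2_208_988_800))  # 0.0 -> the Unix epoch, 1 Jan 1970
```

The 32-bit fractional part gives NTP its sub-nanosecond resolution on the wire, which is the scale the moderator jokes about below.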

>> I often jokingly say that with a lot of these discussions, the cause of the problem is never in the room; it's always somebody outside. I think one of the takeaways should be to look beyond the complexity. Cybersecurity is a tremendously complex problem, but it sometimes has very simple solutions. What Patrik is building is a really complex system, but what you face as a consumer is a relatively simple choice: you get the time from him or you get it from somebody else. It can be as simple as that. We spun around a bit with passwords as well: you can write a very complex password policy, but usually that's overdoing it. Less is more. And indeed, you too are replaceable, just for the sake of redundancy.

>> MARILYN: Don't stop learning. Don't stop. Teach yourself and, I steal this from you, it's no longer okay not to understand how the Internet works.

>> Yeah, so security is like a living organism that demands continued attention, and also likes to be treated like something living. So, yeah: attention, and don't let the human factor be ignored.

>> MODERATOR: Okay. So as the tens of thousands of nanoseconds tick by, I'll bring this to a close and thank the speakers we had here today. Thank you very much. Thank you all for attending and your questions and comments, and enjoy the rest of the event.

(Applause).

>> MODERATOR: We'll be around if you have any other questions.

(Applause).

>> Quick question? What time is it?

(Laughing).
