Enhancing users’ confidence in cyberspace – risks and solutions – WS 02 2020
11 June 2020 | 11:30-13:00 | Studio Berlin | [[image:Icons_live_20px.png | Video recording | link=https://youtu.be/qV5EFUzF6Rs?t=1377]] | [[image:Icon_transcript_20px.png | Transcript | link=Enhancing users’ confidence in cyberspace – risks and solutions – WS 02 2020#Transcript]] | [[image:Icons_forum_20px.png | Forum | link=https://www.eurodig.org/?id=821]]<br />
[[Consolidated_programme_2020#day-1|'''Consolidated programme 2020 overview / Day 1''']]<br /><br />
Title: <big>'''Enhancing users’ confidence in cyberspace – risks and solutions'''</big><br />
Proposals: [[EuroDIG proposals 2020#prop_75|#75]], [[EuroDIG proposals 2020#prop_86|#86]], [[EuroDIG proposals 2020#prop_155|#155]] ([[EuroDIG proposals 2020#prop_121|#121]], [[EuroDIG proposals 2020#prop_160|#160]], [[EuroDIG proposals 2020#prop_166|#166]])<br /><br />
== <span class="dateline">Get involved!</span> ==
You are invited to become a member of the session Org Team! By joining an Org Team you agree that your name and affiliation will be published on the respective wiki page of the session for transparency reasons. Please subscribe to the [https://list.eurodig.org/mailman/listinfo/WS2_2020 '''mailing list'''] to join the Org Team and answer the email that will be sent to you requesting confirmation of your subscription.


== Session teaser ==
Information and communication technologies (ICTs) open up enormous opportunities for both social and economic development. At the same time, however, they pose threats to the safety and security, including the privacy, of users by creating new vectors for cyberattacks. During the session we will discuss, from the perspectives of different stakeholder groups, users’ perceptions of and concerns regarding risks in cyberspace, and identify the challenges in addressing them – including the current pandemic crisis. We will also explore existing and possible future solutions to ensure users’ confidence and trust when using ICTs and going online.


== Desired outcome ==
The primary aims will be (1) to conceptualize the existing situation for users in cyberspace by outlining key existing and potential risks, and (2) to identify best practices and actionable recommendations for users and for actors that are capable of making a contribution to enhancing security and users’ confidence in cyberspace. These actors include governments and intergovernmental institutions, the technical community, civil society, and private sector entities.


== Format ==
The interactive moderated discussion will take place with experts representing different stakeholder groups and addressing three sections on the topic:
* Risks (what should we – users – be aware of and what makes us vulnerable?);
* Challenges (do we have enough resources and capacity to address the existing risks and ensure the security and safety of users in cyberspace?); and
* Solutions (what different stakeholder groups could and should do to ensure not only a secure, but also a human-centric cyberspace, where fundamental values are guaranteed and technology works for people?).


The session would take place with interventions from the audience and other participants after each section to ensure a fruitful dialogue and exchange of views in the EuroDIG community.




== People ==
The session brings together representatives of different stakeholder groups:
* '''Intergovernmental representative''': Jaroslaw K. Ponder, Head of the ITU Office for Europe, International Telecommunication Union
* '''Technical community representative''': Luca Antilli, Head of Media Literacy Research, Ofcom
* '''Civil society representative''': Julia Schuetze, Project Manager, International Cyber Security Policy, Stiftung Neue Verantwortung (SNV)
* '''Private sector representative''': Anastasiya Kazakova, Public Affairs Manager, Kaspersky


'''Moderator''': Vladimir Radunovic, Director of e-diplomacy and cybersecurity programmes, DiploFoundation


'''Organising Team (Org Team)'''
*Claire Local
*Nicole Darabian
*Fotjon Kosta


'''Focal Point'''
*Anastasiya Kazakova, Public Affairs Manager, Kaspersky


'''Remote Moderator'''

'''Reporter'''


*Andrijana Gavrilovic, [https://www.giplatform.org/ Geneva Internet Platform]


== Current discussion, conference calls, schedules and minutes ==
See the [[{{TALKPAGENAME}} | discussion]] tab on the upper left side of this page. Please use this page to publish:
*dates for virtual meetings or coordination calls
*short summary of calls or email exchange
Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.


== Messages ==
Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/enhancing-users-confidence-cyberspace-risks-and-solutions.
*There is a need for stronger digital literacy, particularly for children, their parents and teachers, and those who are forced to become a part of digital society by the pandemic, such as the elderly. Digital literacy should be approached in an interdisciplinary manner. Users should be made more aware of risks and taught to think critically, as well as to differentiate between safe and unsafe practices.
*Security needs to be more user-friendly. To that end, ICT providers need to offer greater transparency around their practices, especially regarding the implementation of security by design and security by default.
*Companies should implement policies that will raise users’ trust in them. They should be more transparent about how their data management works, how they handle user data, how their vulnerability disclosure practices work, and how the mechanisms for reporting inappropriate content on social media platforms function.


== Video record ==
https://youtu.be/qV5EFUzF6Rs?t=1377


== Transcript ==
Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com
 
 
 
''This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.''
 
 
 
>> VLADIMIR RADUNOVIC: Good morning. Good afternoon. Just a quick test that you can hear and see me.
 
>> ELISABETH SCHAUERMANN: Hello, Vladimir. Yes, we can hear you and see you.
 
>> JULIA SCHUETZE: Me as well? Sound and video is okay?
 
>> ELISABETH SCHAUERMANN: Yes, Julia, we can hear you and see you well.
 
>> JULIA SCHUETZE: Awesome. See you in a bit.
 
>> SANDRA HOFEICHTER: Welcome back. Nadia, can you hear me?
 
>> NADIA TJAHJA: Yes, I can hear you.
 
>> SANDRA HOFEICHTER: Yes. The first session was wonderful. So much interactivity both in the discussion forum but also in the chat itself. People were having discussions, bringing it together, and people had different types of opportunity to participate, people who are usually rather quiet could participate by writing and people who are less, you know, keen on speaking have a different way to engage in the dialogue and I think that’s a fantastic opportunity.
 
Okay. That's basically what EuroDIG is about. We have to apologize, we did have some – my team told me I shouldn't call it technical issues, but we had for a short moment some room limitations because although we upgraded our license, we had to assign the bigger room to our license, and we only had one, so my apologies to everyone if you were not able to enter the Zoom room. If it happens at any point that you have problems with your credentials, that you cannot enter the room for any reason, please switch to the streaming that we also provide on EuroDIG.org and on YouTube so you are at least able to follow the session and to use the forum. But usually, you shouldn't have any problem entering the rooms because we have upgraded to 1,000 participants for the room in The Hague and up to 300 and 500 participants in the studios in Trieste and Berlin. Nadia, the next session is about encryption in your room. And I see around 60 participants have joined your studio already, and I guess these are the ones that will be speaking in this workshop. I wish you a fruitful session, and welcome back after your session with the big stage in your studio here.
 
>> NADIA TJAHJA: Wonderful. We look forward to having you back.
 
>> SANDRA HOFEICHTER: Okay. See you later, Nadia. So, then I would like to try to connect with the next studio which is Berlin and in Berlin we have Elisabeth.
 
>> ELISABETH SCHAUERMANN: Good morning.
 
>> SANDRA HOFEICHTER: I see the weather in Berlin is perfect.
 
>> ELISABETH SCHAUERMANN: At least in our background it is.
 
>> SANDRA HOFEICHTER: Wonderful. Did you follow so far the EuroDIG a little bit?
 
>> ELISABETH SCHAUERMANN: Yes. I could tune in a bit today for the opening before we did our setup here, and it was really good to see that everything went well so far.
 
>> SANDRA HOFEICHTER: And I also see the first people are already connecting to your session, including the moderator Vladimir Radunovic, and he is a very experienced person in remote moderating, so with him as the session moderator you will be absolutely on the safe side. Hi, Vladimir. I see you already. Perfect.
 
So then, I wish you a fruitful day, and one thing that we should mention: our studio in Berlin, here we are going to do a great experiment because this is going to be the networking area. We said already networking in a virtual meeting is a difficult thing, nevertheless we try, and Nadia will be the networking host in the lunch break and she will have prepared some questions. But in case it doesn't really work out, we take the liberty to play some music in your studio as well; but of course, I hope you will be able to offer some great networking opportunities for our participants. Good luck, Nadia.
 
>> ELISABETH SCHAUERMANN: Thank you, Sandra. See you later.
 
>> SANDRA HOFEICHTER: Elisabeth by the way. I’m sorry.
 
>> ELISABETH SCHAUERMANN: That’s a lot of names in a short amount of time.
 
>> SANDRA HOFEICHTER: Okay. Thank you. And then let's take the bridge further to our studio in Trieste, and here we have Marco. Marco, you're sitting in front of that castle. How did you manage that?
 
>> MARCO: We rented a boat and we're all sitting on a boat in front of the castle. I'm joking. Welcome. You should have seen Trieste in June, and I hope you will be able to see it next year; this is the castle and our campus is just behind the castle.
 
>> SANDRA HOFEICHTER: I can confirm that's really the case. We've been there for the planning meeting last year in September and this year in January, and the picture that you can see is really what you get when you go to Trieste and when you participate in EuroDIG next year in the ICTP facilities – not that this is ICTP, but it's just around the corner.
 
Marco, the session in your studio will deal with the innovative use of Blockchain and public empowerment. I see the focal point, and she's already connected to your session. Nadia, thank you also again for all the effort that you put into the session, and I wish you and Marco and the entire team in Trieste all the good luck for your session, and with this, I think we are ready to go and I hand over to all three studios and see you after – see you at 1:00 sharp. Over to you.
 
>> ELISABETH SCHAUERMANN: Thank you, Sandra. All right. And with this, we still have two minutes before we officially start, which I would like to use to remind all of us of the code of conduct; but before we do that, hello again, everyone in the room and on the live stream, to the start of EuroDIG 2020. I'm Elisabeth Schauermann, the host for this session, the networking, and the session after that here in Studio Berlin, together with my colleagues from the German Informatics Society, to keep the sessions up and running. So, for the sessions, just a quick reminder that all of those present in the Zoom room, please identify yourselves with your full names. You can change your name yourself. If you want to ask a question or make a comment in the interactive parts of the session, please raise your hand, and then our remote moderator will unmute you and you will be given the floor.
 
Once you are given the right to speak, please switch on your camera if you like. We do not force you, but it’s, you know, it’s nice. And then state your name and your affiliation before you make your comment or ask your questions. Contributions can also be made in the Chat and in the forum and Lilian will try and follow and bring that up to the discussion as well.
 
The Zoom rooms can only hold so many participants, so please do not share the links with anyone. If participants fail to comply with parts of the Code of Conduct, we will gently remind you to follow it, and as a last resort, participants can be removed from the room, but we really hope we do not have to take that measure.
 
One more important note: I'm happy to announce that we're partnering with the Geneva Internet Platform for reporting and for curating the EuroDIG 2020 messages, and Andrijana is the reporter for this session and will be given the last five minutes of the session to present the main points for the messages to all of us. And with this, I would like to close my opening remarks and open the first session of today, Workshop 2, Enhancing Users' Confidence in Cyberspace – Risks and Solutions, which is moderated by Vladimir Radunovic. Over to you, Vladimir.
 
>> VLADIMIR RADUNOVIC: Thank you so much for the introduction, Elisabeth, and for the housekeeping notes. I guess we can start slowly. We have a bit of introduction. You can relax and get your coffee. I do welcome you to Berlin, even if it is only a remote or virtual Berlin space that you have joined today, and with that I hope that your journey was safe, and that no ID was stolen in the meantime, no one followed your data that traveled through the net, no VPN was hacked because I guess you’re connecting from different places, and that you haven’t met any scammers on your way to joining and following EuroDIG in the previous days.
 
I do guess that most of you are actually sitting back home, probably in pajamas – you can switch on the videos at some point so that we see the dress code – but that doesn't make you really safe or more safe, because you're home and you haven't actually traveled to Berlin and you haven't encountered all the possible fraudsters on the way there. Those guys or girls are around, and you can see all of these risks along your way from your home town to Berlin, even virtually.
 
The pandemic has definitely changed our lives in a way. We are moving toward something that I like to call a blended life, life where we’re going to mix the encounters and online life more and more. But it has also changed the cybersecurity environment or the security of our digital environment. We have seen a number of reports, a number of webinars and discussions in the previous months which were looking at what are the main risks that emerged or are emphasized during the COVID crisis.
 
So, basically, what we want to do today is run through the three elements of discussion. The first one is, what are the particular risks that each one of us, as users, is seeing or feeling, with a special focus on the times during and after the crisis and what we are likely to see more and more.
 
The second block of the discussion will be focused on challenges. What are the particular challenges; mainly, related to our social environment, our behavior, also technologies, resources that we have, and so on, when it comes to addressing these risks.
 
And, lastly, we will look into solutions. So, some of the practices or examples of how we can be addressing these particular challenges.
 
With us today I’m pleased to welcome to the Berlin Studio Julia Schuetze a Project Manager at SNV. Julia I’m not going to try to read or pronounce the German full name, so you can probably do that once you have the mic if you help me. With us also Anastasiya Kazakova. And Jaroslaw and Lidia, and big thanks to Anastasiya for putting this up with the team and a huge team behind the operations and particularly for giving me the opportunity to at least have an equal number of ladies at the panel which deals with cybersecurity. It’s not that often, and I’m really pleased that now we have two ladies with us today.
 
One of the desired outcomes of the panel, in a way, well we want to try to somehow list or conceptualize the existing situation and to outline the key existing and potential risks, and secondly to identify some of the best practices, and hopefully, actionable recommendations on how we can deal and address these risks.
 
The format, as Elisabeth already mentioned: we do have four panelists, but we do expect a lot of interventions from you. Looking at the participant list, I see some familiar names, some good friends, some people that are well into the topic, but also some people that might not be much into cybersecurity but definitely have their own experiences as users to share, or different perspectives from human rights, connectivity, economic aspects and so on, and I encourage you all to try to be as active as you can.
 
You have an option to raise your hand, and we’ll try to follow that as much as possible throughout the session; and you also have an option to post a chat comment. If you do that, I kindly ask you to try to add a hashtag at the beginning of your comment. Whatever the hashtag could be. Whether it’s #vulnerabilities or #childcrimes or whatever, so it’s easier for us to locate and put in context your comment and we might call upon you if we have good comments to raise as well.
 
We'll be using Mentimeter to allow some of your collective input. And with that, Elisabeth, I think we can start with the Mentimeter. There are three different questions that we want to ask you about your experiences: what you think makes you the most vulnerable, what you see as the main threats and the danger in this experience, and what are the particular assets that we should be protecting. So, we start with the vulnerability and we'll move to the other questions later on. Let's start with this one.
 
To fill in the poll, simply open your browser or use your mobile phone and open it with the mobile phone, whatever you prefer, go to Menti.com and you get the code and get the option to respond. What makes you as a user most vulnerable? If it’s none of those but other, feel free to put other; but also, we kindly ask you to put more explanation in the chat down there. And with that I’m actually moving on to the first block of our discussion where we’re discussing what are the risks that we are seeing in the environment.
 
And with that, just to help us outline better the aspects of the risks: in risk management theory you have the components that make up the risks, which are the vulnerabilities, the assets we want to protect, and then the threats. There is a visualization that you can see probably somewhere behind me, and I'll try to illustrate it more: we're looking at any sort of ducklings on this side that we need to protect; we're looking at any sort of cracks in the branches, in the applications that we use, in our behavior, or whatever; and we're looking at the crocodiles, that is, what are the threats really today?
 
We’re starting with vulnerabilities as we noted, some reflections there and most of you say it’s actually humans and some say software and technologies and some of you say it’s laws and regulations. No one with the other. I would be keen to see what other vulnerabilities you see, so please move forward with your comments. With that I pass the floor to Anastasiya, who is somebody that represents Kaspersky, a company which has a lot of data and details about what’s actually happening. And so, Anastasiya, can you tell us briefly, what are some of the results of your analysis about what are the main developments, the main threats, and certainly, you can comment on the results of the poll if you wish as well. Anastasiya?
 
>> ANASTASIYA KAZAKOVA: Hi, to everyone. Thank you for passing the floor. Thank you for being here today at the EuroDIG session. I will cover key cyberthreats that Kaspersky team reports and –
 
>> VLADIMIR RADUNOVIC: Anastasiya, should we put on the PowerPoint? Elisabeth, if you can kindly put on the PowerPoint by Kaspersky, and I can probably lead through the slides and Anastasiya can tell me next slide and so forth.
 
>> ANASTASIYA KAZAKOVA: Uh-huh. So, there is some fresh data for Q1 2020, and what makes the situation completely unique is that the COVID-19 pandemic has affected us all in some way and the entire cybercrime landscape has changed in the last few months. It would not be correct to attribute all of those changes to the pandemic, but there is a sort of connection, and particularly from the user side we've all become more cyber vulnerable than before. The first thing you see on the slide is remote work and attacks on remote services and remote access tools. From an information security standpoint, an employee within the office network and an employee connecting to the same office network from home are two completely different users, and cybercriminals share this view: the number of attacks on servers and remote access tools, the so-called remote desktop protocol attacks, has increased by 23% since January. There are a couple of slides, Vladimir, if you could click a little bit: in Italy and other countries, Germany, and also in France, you can see the spikes actually happen somewhere at the beginning of March, and then finally in the U.S., also, just as an illustration.
 
You could ask why this happens. Corporate data moves from a secure corporate environment to a less secure and less protected environment at home, and home services become a super attractive target, for example for ransomware attacks and phishing attacks. And remote entertainment is one thing that is actually unique during this period, during this Q1, and actually illustrates the growth in vulnerability of users in cyberspace. Online activity via streaming services increased, as everybody was logged in at home, and many of the services announced that increase in user traffic. But the cybercriminals, again, responded to this trend too, and the average daily number of attacks blocked increased by 25% since January of this year; users following unofficial and unmoderated offerings could be caught by malicious adware and so on.
 
The next slide is the geography of mobile threats; in Q1, users frequently encountered adware and Telegram clone apps. And on the next slide you can see the distribution of mobile threats across the regions.
 
Speaking of mobile banking trojans, this is the next slide: there is also an increase, which actually gives us an indicator of the growing financial cyberthreats to users; however, mobile ransomware decreased through the period. Next slide, which also shows that ransomware criminals find desktop and mobile applications and users' mobile devices attractive.
 
And, finally, two final points that also characterize the cyberthreat landscape during Q1 and what else makes us vulnerable: it's the vulnerable applications that we use and the security flaws of IT products. You can see on the graph, on the diagram here, that most of those vulnerable applications are still the office applications, the office programs, and cybercriminals, of course, use that.
 
Lastly, web attacks: there are web resources used for so-called Internet attacks that just redirect users to websites containing exploits, malicious programs, and bots, and you can see the distribution of the countries where most of the Internet attacks have been registered by us in Q1. And just as a note, to determine the geographical source of web-based attacks, the domain names are matched against the actual domain name IP addresses, so that the geographical location of specific IP addresses of malicious activity can then be established.
 
And just as a final note, I would like to add that what also makes users vulnerable in cyberspace is definitely the lack of cyber awareness. It's a more and more sophisticated landscape. It's definitely the vulnerabilities of software and the delays in patching them, vulnerabilities in critical societal sectors, for example healthcare, and it's also the fast pace of emerging technologies. So I will stop at this point.
 
>> VLADIMIR RADUNOVIC: Anastasiya, maybe a quick reflection or a quick question back to you. To what extent has the COVID-19 context been mentioned in the mobile trojans and the phishing, and in what sort of context has it been used by the criminals? Do you have any data?
 
>> ANASTASIYA KAZAKOVA: A good point and good question. Colleagues in our security teams at Kaspersky usually say that there has not been a tremendous increase in cyberattacks throughout this period; however, COVID-19, as one more agenda item, has been largely exploited by cybercriminals, in regard to ordinary users and consumers, as well as in regard to the private and public sectors, with attacks on medical facilities, healthcare, and those facilities that are conducting and still conduct medical research.
 
>> VLADIMIR RADUNOVIC: Thanks. We’ll get back to that, definitely. We’ll discuss more of the threats. I’ll ask Elisabeth just to put back the Menti just to see the results of the first poll and then we’ll move to the second one.
 
And in the meantime, I’ve noticed some of the good comments that are mentioned when we were talking about what are the main risks in the vulnerability. And Tatiana mentioned that software and regulations are actually created by the human, so again we have more of the impacts of the humans.
 
So, let’s see what the results of the first Mentimeter are. So, the humans and behavior, and particularly focused on behavior is by and large the biggest vulnerability. And I must say I expected that, and there are a lot of statistics that 90% of attacks are actually based on the human errors, and then we have software and technology, and then laws and regulations, and still no one on the “other”, which is interesting.
 
Let's move a little bit, and, probably, I can even move on to the next one. Or if I can't, you can move just to the next question. Yeah, you can move on to the next question, the next poll question, basically. What are the assets you think are in danger? So, what are the particular assets that we want to protect or need to protect more and more in cyberspace? Security of devices, personal information, finances? We've seen financial threats; society, stability of society and democracy and values; lives – we haven't seen any casualties yet, but it can happen; and then maybe "other", maybe you can think of the other while you're voting. Certainly, if there is anyone that wants to jump in at any point, just raise a hand and we're following on that and then we'll follow up with giving you the floor, or post your comments in the chat.
 
In the meantime, I’m passing on the microphone to Julia. We have touched upon a lot on the technological aspects, and particularly the threats which are related to VPNs and financial fraud and so on, but there is a lot also which touches directly on the personal data and incidents related to personal data. Julia, do you want to comment, or certainly, you can reflect on the results of the Mentimeter as well and positions of the participants, Julia?
 
>> JULIA SCHUETZE: Yes. Thank you, Vladimir. I think I'll pick up on the risks coming through software and regulations right now and how that could affect the stability of society, and so I have four points that I would say are the risks that I'm seeing when I look at responses of states to malicious cyber activities and then also at how states and governments behave.
 
So, we see deployment of vulnerable infrastructure, for example in elections, that could make elections insecure and thereby also erode trust in the end results; for example, online voting or vulnerable tallying software, where confidentiality, availability, and integrity can be compromised. There was a case in Germany where they figured out that the tallying software that adds the votes and brings them up to the next level is actually insecure. And this can be even more critical when I think about the fact that we're moving more towards e-government services, and so these attacks might not only happen around election cycles, but actually just affect cities, and we've seen cases like that also in the U.S. where cities were affected by ransomware attacks and then some of the services cannot be used by citizens.
 
And then, secondly, I would say some government policies, for example the broader use of government hacking and the exploitation of zero-days, can for that reason actually make us all weaker, especially if encryption is weakened in consumer products that are broadly used. Then, to some extent, the militarization of cyberspace, so the use of offensive means in retaliation or in persistent engagement, can have collateral damage that can affect users, and so more and more cyberspace is used for other national security objectives, which can then have an impact on the security in cyberspace.
 
And then I also see a risk in companies not exercising due diligence, which can affect users and can then lead to massive leakages of personal data; recently, EasyJet, for example. I talked two weeks ago with a security researcher, Kris, and she mentioned vulnerabilities in Boeing, bad security practices, not using effective security, and having bad policies on disclosing vulnerabilities. So, if vulnerabilities are found by outside experts or researchers, they cannot be safely and securely reported.
 
So, yeah, those are my kind of four broader points on how regulation, but also human behavior and developing software can affect and risk the security.
 
>> VLADIMIR RADUNOVIC: Thank you. Looking at the results here, they actually sort of support your concerns about elections and generally the stability of the society. You see the majority of people say it's society and the stability of values which is at risk. Tatiana mentioned in the chat a couple of other aspects, or assets if we wish, which are not necessarily just lives when it comes to humans; what she says is, what about risks to mental health and bullying. We'll address that in a second when it comes to child protection in general, but also, Tatiana, I understand that some of those will go to the values of society, like some of the gender issues related to cybersecurity. But it doesn't fully cover the human-centric cybersecurity view, in my opinion. I probably don't sound very coherent here, but it's harder for me to place humans into the asset category.
 
Good point. I mean, we can certainly go into different aspects of what is under the risks and what we should protect more, and you outlined well some of the parameters there.
 
Julia, you mentioned the elections, and it's either about hacking the machines or information warfare in a way, or social media warfare and so on, but there are other aspects: changing or hacking the voter list, or even the way the votes are counted, which is usually through a machine, or through an Excel sheet in developing countries, but still it's computer based, and so there are other aspects. Any other?
 
>> JULIA SCHUETZE: Yeah. Maybe just one add there. What is also important is that, obviously, citizens trust the process, and so to what extent do they trust the technology that is then employed because we studied different cases and then came to the conclusion in the end that even though maybe things are not hacked, if someone just says they’re hacked and people believe it, even though the result is actually the true result, but no one believes in the result, that’s like a really big risk, so I mean there really the legitimacy of the democratic process is at risk if people don’t believe or don’t trust the result anymore because of the use of technology. Yeah. So that I see in some critical aspects in society that the use of technology is very risky.
 
>> VLADIMIR RADUNOVIC: And I think this is a really important point: whether the elections are being hacked or not, or any part of this chain, if there is a speculation that it was hacked, you already have the decrease in trust in the whole system, and so yeah, thanks for that.
 
We'll move on to the next question to all of you, and so Elisabeth, you can just move to the next point in the Mentimeter. The next question actually builds on what Julia has mentioned. Anastasiya previously mentioned the criminals, and Julia mentioned also the states, and there are other different groups that try exploiting cyberspace in various ways, so the question for you is: where does the main threat come from, or if you wish, what are the main crocodiles behind me that are the threats? Whether it's petty criminals, organized criminal groups, terrorists, states, companies – it could be neighbors as well, or friends even, and I hope not. So, what are the others? And, again, feel free to add more on the "others".
 
I think that is actually a good introduction to Luca's part in a way, and Luca already commented in the chat related to trust, and so I'll leave it to you, Luca, to certainly reflect on the previous discussion, but you can also reflect more on the research that you did on what the different risks are and the way we measure the risks and harms that are coming. Luca, the floor is yours, and let me know if you want me to switch to the slides. Luca?
 
>> LUCA ANTILLI: I will do that. Listen, I'm really sorry because I missed the test just before we started the session, so I'm sorry about that; I think someone was trying to hack me maybe. But can I get a nod from you, Vladimir – can you hear me and see me? It might be a little bit dark.
 
>> VLADIMIR RADUNOVIC: Yes, loud and clear.
 
>> LUCA ANTILLI: That's great. Thank you, everyone. Very briefly, because I'm coming from a particular angle in this discussion: I work at Ofcom, which is the UK's statutory independent regulator for communications services, and so we cover the whole broad range of communications services that citizens and consumers use in the UK, and that's not just Internet-enabled services but also TV, radio, phone, mobile phone, postal services, and all the rest of it.
 
We're not actually a regulator for online in any official form yet, but the UK Government has said that it's thinking about appointing us. But really, in a way that's irrelevant because ever since Ofcom was founded in 2003, we have had a duty to monitor and promote this notion of media literacy; and with media literacy, there is a lot of debate about what that actually is and what it covers. We've got a definition: the ability to use, understand, and create media and communications in a variety of contexts.
 
Back in the day when the Internet wasn't quite so dominant, that meant looking at people's more general understanding of, for instance, advertising models and public service broadcasting; but increasingly, as the Internet has dominated our lives in so many different ways, that has come to the fore, and now we're looking much more closely at how people use Internet-based services, and at their attitudes in terms of trust, understanding, and judgment when it comes not just to cyber issues or security issues but also things like information, news, advertising, personal data, and all that sort of stuff. We've got quite a broad view of this.
 
And, remember, my focus is very much on the UK, but I think so far what's been said about vulnerabilities is really, really interesting, because one thing that our data tells us is that you can research a broad, UK-representative population, but the vulnerability of one given user is very different to the next one, and this is where media literacy comes in, because people's ability to understand how to use different types of software, for example, varies immensely depending on background, be it socioeconomic or geographical, and so on and so forth.
 
So, our focus is very much on trying to understand how people interact with these services, what their levels of concern are. The extent to which they’re experiencing actual issues online, including cyberattacks, but also again, fake news and other kinds of online harms such as exposure to violent or abusive content and so on and so forth, and so I’m not going to go through data research because there is just so much of it, but I wanted to give you an example of the kind of things that we look at when we’re assessing how people in the UK, certainly, think about risk online and what they do about it.
 
So if you go to the next slide, please: in the research that we did over the last couple of years, we've done quite a large annual piece just looking specifically at people's concerns about using the Internet, what they actually experience using the Internet, and finally, what they do as a result of experiencing potential harms online, and so here is an example; as I was saying before, society has different views. And in terms of concerns, one thing is clear, by the way: if you put aside offensive and abusive material involving children or risks for children, the highest claimed concern online for UK Internet users is around things like hacking and security and things like data and privacy, and so it's very top of mind in the average UK Internet user's thoughts around concerns.
 
If we move then on to actual incidents – and this is an example that's quite granular, we can look at gender, age group, socioeconomic background – that's where we start to see differences come through. If you go to the next slide, this is an example, more simply, of the kind of things that people are experiencing among UK Internet users in the last month. From this fieldwork, which was done in February of this year, we have 62%, around two-thirds, of adults who have had some kind of experience of some kind of harmful thing online, which rises to 81% among children, and so these things are happening out there. We've got an awful lot more granular data on it and we can look at it, for example, on this slide, and we can see the prominence of those experienced harms, such as on social media sites, on email, search engines, and so on and so forth.
 
So, what I'm building towards really is that it's a very, very nuanced picture, and there are a number of different ways of categorizing these potential harms, and there are a number of things that people are experiencing in different ways and in different environments.
 
I know the focus of this session is more about the cybersecurity, and I think it’s interesting to note that, again, putting aside harms and risks for children online, cybersecurity is the key thing that people think about. The question is, what are people doing about it? And another area of this research is how far people feel they’re protected and that falls into a number of categories.
 
Firstly, how far do people think that they themselves are able to use the Internet in a way that protects them, and are they confident? We see some interesting things in our research. People generally tend to think they're quite able to navigate the Internet safely, and in our surveys, consistently for the last 10 years, three-quarters of adult users will say: I'm able to manage my personal data and use software effectively. However, there is a big gap there between people's confidence and what they're actually doing online, and so what we see when we ask them – oh, man –
 
>> VLADIMIR RADUNOVIC: We lost the connection. We’ll get back. That’s one of the things that happens in the online environment, unfortunately.
 
>> JULIA SCHUETZE: Just when it got really interesting.
 
>> VLADIMIR RADUNOVIC: Exactly, yeah. I think he’s doing that on purpose so that we keep waiting for him to come back, and we’ll continue with that as soon as Luca is back. We can probably –
 
>> LUCA ANTILLI: Hello? Can you hear me now?
 
>> VLADIMIR RADUNOVIC: We just said you’re running out at the moment when it became really interesting.
 
>> LUCA ANTILLI: Oh, okay. (Laughing). Just a minute or so more, because I think – so the point I'm trying to make is, and I think it's come up a little bit in our session so far, that to say the human factor is one simple variable in all of this isn't true. There are lots of different variables, and within the human factor, when it comes to security and behavior online, there are an awful lot of nuances, too. And I think as a researcher, which is kind of my angle on this, we just have to be really careful that we don't take at face value what people are telling us. So even children – children will tell us that they're kind of okay when they see what they know is clearly a pedophile on Snapchat, and they think they're right, and then you talk about what they're doing and they're actually not. Similarly, when it comes to cyberattacks, people say: I know what to do when something is happening. Or people say: I know what's happening when I click to accept those terms and conditions; and in reality, it's become almost a normalized thing and people are accepting things, people are seeing things happen in front of them, but they're not necessarily in control of their own safety or indeed of other users, and so that's kind of the – I'm sorry it was a bit broken up, but that's kind of our angle really. And it's all wrapped under this notion of media literacy, which is about not just having the skills to differentiate between something that looks safe or not safe, but more generally, having an awareness and, if you like, this kind of sense of healthy skepticism, just to be able to look at things in front of you, understand where they might be coming from, and make judgments in that way.
 
I’m afraid at the moment, our research would say that, yes, a lot of adults in the UK are able to do that to a degree that you could argue is satisfactory, but there is still quite a lot who aren’t able to do that and that’s where the likes of us in this group need to make more interventions. So, I’ll stop there.
 
>> VLADIMIR RADUNOVIC: Thank you, Luca. Excellent points. We'll get back to media literacy a little bit later in the solutions part and in the challenges, and Elisabeth, can you probably bring back the Mentimeter to see where we are. And a quick question for you, Luca, in the meantime; and the invitation for everyone: I know you're muted, but please raise a hand to jump in, otherwise I'll start calling upon you, and I know there are a few comments from the mobile participants as well, because we need time to then find you and give you the floor. But Luca, a quick reflection to you.
 
Based on the research and having the COVID crisis, since we have become more connected and depend even more on the tools, firstly, do we have any data or otherwise do you have any opinion whether the current crisis could actually make us more aware of the threats and actually raise or change this perception that we get more clearly what the risks are, or which direction could it actually go?
 
>> LUCA ANTILLI: Really good question, and we have done some, obviously, quite a bit of ad hoc work during lockdown in the UK, during the COVID crisis, and we find that certainly people are using the Internet more, obviously, as you might imagine, but our focus is more on things like information and news. And we found that in a way, during this period, people have become even more polarized or more ingrained in their own behavior, and so those who tend not to really worry about whether a source they see is trustworthy or not are continuing that way, and those who aren't are increasingly more skeptical. There is an interesting clip on video, because we did some video calls with some participants online, and they were saying that they see messages on their Instagram feed from the government, or you know UK Gov or the NHS, our health service, and we say, oh, that's good, that must be reassuring there is a message on your feed, and they say, actually, no, it says NHS up there but I don't know if it's fake or not. And I think, particularly in a social media environment, which is where so many people are more and more, if anything, the doubt and the lack of trust that is already in social media has actually been exacerbated and inflated a bit more, and so I think it's a really interesting place, and there is an idea out there that lockdown has made us all more Internet savvy and much more self-aware, and I don't think that's necessarily true.
 
>> VLADIMIR RADUNOVIC: I encourage all of you to maybe share your thoughts in the chat, whether you think that our environment or surroundings, our friends and colleagues, actually will feel or will be more aware of the risks and do more to make their environment safe or not.
 
Then looking at the poll here: as expected, we have organized groups as one of the big threats, and we have discussed some of the states, and it's interesting to see, actually, that the companies rank quite high, and so I encourage those of you who wish to comment in the chat or raise a hand to reflect more on what the role of the companies is and what particular threat is coming from the companies.
 
There are a couple of comments in the chat and I’ll get back to them in a minute. I want to pass the floor to Jaroslaw to reflect because the ITU is one of the main institutions following on threats and doing a lot with states on capacity building and monitoring, and so probably two aspects, two questions that I might have at this point for you, Jaroslaw; one is, related to monitoring the risks and the other is protecting the vulnerable groups, and child protection is one aspect of that, but there are other examples I’m sure. And certainly, if you wish to reflect on any of these inputs thus far, please feel free to do so. Jaroslaw?
 
>> JAROSLAW PONDER: Thank you very much, Vladimir. So, as you have already mentioned, the ITU is the UN specialized agency focusing on ICTs, and cybersecurity is one of the aspects which is very close to our heart and to the members of the ITU.
 
We are working on the enabling environment and the strategies, but also on the standardization aspect; there are more than 2,000 standards which touch on cyber issues, and many, many more are coming.
 
So, of course, during COVID, we experienced a very new situation in terms of bringing new users into the digital space, and we noticed that a lot of stakeholders expressed more interest in being engaged in helping and addressing those new emerging threats related to the behavior of inexperienced users of ICTs.
 
That's why this brought us to the point that, in fact, digital skills became one of the important issues to be addressed in the coming time, to make sure that those who are forced to be part of the digital society by the pandemic can now engage in the proper way and know how to navigate, but more importantly, that they're avoiding any kind of threats and that they are not misusing the ICTs or having the wrong experience with them.
 
And there are many of those, and many of those whom we are not thinking about on a daily basis. There is a huge number of elderly people who through COVID were forced to become digital citizens and start to use the social media tools, the platforms, and they, unfortunately, have become victims of phishing campaigns, and so that's why we have to pay attention to those who maybe require much more attention and who are forced, also, to use the new services.
 
This is the reason also why the ITU has developed and launched, immediately after the pandemic started, the new Digital Skills Assessment Guidebook, and we'll be rolling this out, at least in our case, in Europe in some of the countries, to assess the gaps and to make sure that the countries are forming development strategies, but also raising to a higher level the importance of bringing those unconnected into the digital space.
 
At the same time, we noticed that children also became a very vulnerable group, and this is the reason why we have the pleasure of launching, on the 24th of June, the new set of guidelines, the global guidelines developed by the international community, addressing the four groups of those who make the change in the digital space: the children, educators, industry, and the policymakers.
 
And let me start first with the children, who were perceived as vulnerable, and a lot of programs have been developed to help them in many countries. This is some sort of normal for all of us, but not for all of the countries. We are also, on the 28th of June, launching the new studies focusing on the Western Balkan countries, eight economies, and we notice that some of the countries are still missing the strategies, missing attachment to this issue in terms of providing some programs and building the capacities of the children. We will be looking at this not only from the institutional perspective, to strengthen the capacities of the countries, but also providing some means for the countries to launch national initiatives.
 
The other group are the teachers. In many, many cases, these are fresh users of the applications, of the means which from day one of COVID have been put in their hands as the way forward for delivering the content to the children, and in many ways we've seen different behavior in this group, and so this requires a lot of attention and also a lot of redesign of the way the new skills for this group will be provided, so that also some sort of charter on protection measures can be pushed through these means.
 
So, of course, from our side as the ITU, since COVID started we have succeeded in developing quite a good understanding within the UN system; we called for the agenda for action together with many UN agencies – UNHCR, UNICEF, UN LDC, ILO, IOM – in follow-up to the Secretary-General's Policy Brief on the impact of COVID on children, and so we're united to make sure that in the coming years and coming time, we can make a difference in this field.
 
So, just not to prolong: of course, for us one of the important things is that we're not only acting ad hoc. COVID has come and hopefully it will go, but hazards like this will be coming in the future as well, and we have to build the systemic preparedness of the countries and raise the commitment of the countries, and this is the reason why we are referring to the Global Cybersecurity Index, where we're measuring the commitment of the countries and taking a look at different components of the institutional setup of the countries to deal with cybersecurity, and I believe that COVID, in fact, helps a lot in this context to raise the importance of cybersecurity. And during this period, thanks to COVID, we accelerated digital transformation worldwide, not only in the developed economies, but also in the transition economies and the developing countries, and much more attention, but also commitment and support, is needed from the international community for those who would like to catch up as fast as possible but maybe don't have the means. And if we do not do this, they will become the source of possible threats, and so that's why let me also use this as an opportunity to express thanks to those already working with the international community to strengthen the cybersecurity capacities worldwide, but also to call for collaboration in supporting subregional and regional actions in this field. So, over to you.
 
>> VLADIMIR RADUNOVIC: Thank you. Thank you for the great overview. We’ll definitely get back to some of the particular, if I can say, solutions that the ITU is providing too within the JCA.
 
I'm looking at the chat, and we're basically moving to the second part of the discussion, which is challenges, and all of you, including in the chat, have already marked some of the challenges that we can probably address. There are three tracks, currently, that I see. One is related to the role of the companies, if we can put it that way.
 
So, there is a comment by Matjia that the companies are a track, or sort of a concern, as they are driven by profits, and maybe even more so by their ownership: the same persons controlling the companies also control the production, evaluation and marketing of products and so on.
 
There is a reflection by Julia, who can probably comment later, who says she sees potential here that could be great for developing services and products.
 
And Marie mentioned that the risk with companies is that they have the means to shape the law to get what they decide or desire regarding user data; moreover, because states do not have the means to build their own IT solutions, they depend on these companies, putting their citizens at risk.
 
There are another two topics and one is related to digital literacy, and we’ll definitely focus more on that. And there was a question on the protection, but I’ll get back to that.
 
Any quick comments? Maybe, Julia, from you on the responsibilities of companies in this regard? Julia?
 
>> JULIA SCHUETZE: Yeah. I definitely share the challenge that was mentioned, because some of the solutions that are created obviously have a profit focus and are not necessarily developed with a nonprofit purpose, or addressing societal challenges, in mind. And then, with the use of services and products by institutions, there can be a lock-in effect: you are basically reliant on the one company that you started with as a ministry, and if you want to add features it creates extra cost, and you don't have the development expertise in house. There is actually some interesting development in Germany right now. There is one funded project called the Prototype Fund that funds nonprofit-oriented digital startups for six months, so it is putting public money behind some software solutions, and some really interesting things have already come out of these projects, for example encrypted messengers.
 
Germany is also experimenting with a digital taskforce, where they are trying to bring in developers and UX designers from companies to join the government for six months to help innovate internally and share their expertise. It's called Tech for Germany. So I find these approaches very interesting, because I also see that challenge.
 
And then, to the point on media literacy, I do agree that training is obviously needed, but I also find that some things are made too hard. For example, updating a browser: it shouldn't take four clicks or a Google search to know how to update a browser. I work in the field and I get the alerts about updating your browser because there is a vulnerability, et cetera, and some things are just not made as easy as they could be. So I do agree, in general, that there should be more security knowledge; but at the same time, user-friendly security is really key, and I think some products and services are not really living up to that standard yet.
 
>> VLADIMIR RADUNOVIC: Thank you. I know that Anastasiya wanted to reflect as well. Anastasiya?
 
>> ANASTASIYA KAZAKOVA: Yes. I really liked the two points from Julia and the data from the polls that we saw regarding the responsibility that companies have in this regard. Users place a lot of trust in technology, and if they are told that it could be hacked, or that it is secure, what is a user to do at that point? I think there are many, many things that service providers or IT developers do not properly explain to users – how to use them, in a user-friendly format and with user-friendly communication – and they are not properly transparent about how the technology works, whether on the consumer side, the corporate side or the government side. So my thought here is that there should be greater transparency in that regard, and security by design and by default – this is a major concept for ICT developers to make the technologies truly secure for users. I really think it is important to recognize that it is not the ICTs themselves that could be dangerous for users; it is the way users use them that could have negative implications for society.
 
>> VLADIMIR RADUNOVIC: Thanks, Anastasiya. I wonder if anyone else wants to comment on that? Or there is another good question which was raised, let me see: there needs to be a standard level of protection afforded to all users, despite the varying levels and types of threats. So what should that standard level of protection look like, one that would be efficient?
 
And I guess that, again, could be a question related to the regulatory environment; but in this case it can also be related to the question of companies and their role in providing some sort of efficient protection to everyone for their services. Any reflections, or does anyone want to take that question? Anastasiya, do you want to – as a company which actually provides solutions for that?
 
>> ANASTASIYA KAZAKOVA: Yes. It's actually really not an easy question, I would say. I know that many, many teams at Kaspersky dedicate a lot of time to educating the community – the broader community of net users, regardless of age or gender – making a lot of good material public, and dedicating a lot of time not only to high-level security investigations but also to basic-level reports. In this regard, as I also mentioned in the chat, for the second year we have conducted a privacy report to track the major security problems that led to data breaches and to users feeling less secure in cyberspace, in the Internet space, in terms of personal data protection.
 
The second aspect is social ratings. Our research actually reveals that not so many people know about social ratings, which already exist in some countries, and that social ratings already contain a lot of personal data that can identify people, which could also pose threats to them, so we are trying to increase awareness of this in the community. I can't say whether there should be a particular standard for that, but I think we should definitely invest a lot as a company in building greater awareness and in cooperating with other parts of the community – civil society, academia, the technical community, and public institutions as well – to help find innovative solutions that actually address real needs in society.
 
>> VLADIMIR RADUNOVIC: There are two other questions. Jaroslaw, did you want to?
 
>> JAROSLAW PONDER: Yes. Yes. Just that I think it would be very challenging to develop a global standard for the industry, industry by industry, on the protection level, unless it would be a very generic standard.
 
But, you know, at least in the context of general protection, we believe in the model of guidelines which have been developed by the industry and for the industry, going through a lot of components and providing a checklist for providers and hardware producers of what should be integrated into their services and products to ensure safety and trust in the use of ICTs.
 
This might be a good way forward in realistically increasing safety in the digital space. That's why we hope that, with the new iteration of the guidelines for the industry, we will also be able to engage all private stakeholders and the private sector to apply the guidelines and make sure that we have some sort of universal understanding of an international standard.
 
So, we would really encourage all private companies to take this up and, basically, check whether they are complying with the international standard. Yeah.
 
>> VLADIMIR RADUNOVIC: Thank you. Jaroslaw, as expected, we're switching between comments and solutions, and I will focus a little bit more on digital literacy, which was raised quite prominently here; but before that, there are two other questions related to the role of companies, with which we can probably wrap up this part of the discussion. One is for Luca, and I see Luca has already responded in the chat, but I'll give you the floor to elaborate. The question, or comment, is that trust in sources on social media is important and interesting, so is there any solution to propose when it comes to addressing the lack of trust in sources on social media? The other one is, again, about trust at a different level – in companies – commented by Henrick: that companies are currently not able to authenticate themselves securely to consumers, so apparently consumers have no real alternative to blind trust. The question is, how do we get from blind trust to zero trust from consumers towards the companies?
 
So, Luca, probably you can reflect on the first question that I read, but you can also reflect on the second one or any of the current discussions on the companies. Luca?
 
>> LUCA ANTILLI: Great. Thanks very much. Yeah. I think, as we all know, working in this area, particularly as a researcher, you just find paradoxes everywhere. Take social media: social media is now by far the number one source of daily news and information for UK users.
 
Yet at the same time, if you ask people – whether in a survey or qualitatively, face-to-face – which platforms or sources they trust to get news and information, social media is always at the bottom. It is the least trusted source but continuously gets the most usage, so there is obviously an issue there.
 
And, you know, in terms of trust, is there a solution? And I wish I had one and I don’t, but as I’ve written here, I think it is a combination of things, and it’s not just by the way, about saying that the platforms are the evil guys and users are innocent. In fact, human behavior, I would say, is one of the big contributors to ongoing lack of trust in social media because people like to share stuff, they always will, and some people don’t bother checking sources. Some people do it for fun, and so it’s almost like innocent misinformation and we have to accept that as well.
 
So that's one side of it. One thing that's interesting to notice, as I put it, is that when we do research amongst younger people using the Internet – new to this sort of thing and to the whole concept of having to be critical online – their etiquette on social media and their readiness to examine sources is actually better than their parents'. And when it comes to the classic fake news stuff we see on social media, it's as much older people, 55 to 60 plus, who are spreading it as it is younger people, so there is an interesting generational thing there.
 
But it's also true that I think platforms have something to do. Again, just going to the evidence we have: when people are asked whether they report things they've seen on social media apps or sites that concern them, about 45% do some kind of reporting of what they come across. But of those people, only about 25% believe they know what happens next. So I think we need a bit more transparency in those reporting mechanisms, for platforms to be a bit more open: okay, you reported this concern – it might be financial, it might be data, it might be offensive material or whatever – but this is what happened to it, this is what we did. I think that way you start to build trust in those particular features as well, so it's a mixture of things. I mean, obviously, when we come to the bigger notion of state-sponsored social media – oh, man, I timed out again.
 
>> VLADIMIR RADUNOVIC: We can still hear you, but you probably dropped there.
 
Okay. Moving on and thanks for that –
 
>> LUCA ANTILLI: I'm sorry. My final point is that social media is the biggest comms platform in the world, so of course bad actors – if there is such a thing as state-sponsored ones – are going to go to social media, because it is the biggest platform. We have to think about all of these things, and there are lots of different things to do, I'm sure. I'm sorry again for dropping off.
 
>> VLADIMIR RADUNOVIC: The problem is you always drop off at the most interesting point. Stop doing that. Stop doing that. (Laughing). Thanks. There are a couple more follow-ups in the chat: what about strengthening verification of official social media accounts, making users more visually aware of which accounts are official? And I don't know why I'm reading all of that, as you can see it or say it yourselves when you take the mic, but very interesting statistics.
 
And for me it's interesting to observe that, while in recent years we haven't been discussing misinformation and content-related issues within cybersecurity, these aspects – misinformation, disinformation – are popping up more and more, which is quite an interesting paradigm shift. I wonder if anyone has a quick reflection on Henrick's question about moving from blind trust to zero trust from consumers towards the companies? I don't know if anyone wants to reflect on that one, on trust towards companies? Anastasiya?
 
>> ANASTASIYA KAZAKOVA: I think it would be good, on the definition side, to double-check that we are speaking in the same terms, so I would ask: what do zero trust and blind trust actually imply in that context? That will definitely help to answer the question.
 
>> VLADIMIR RADUNOVIC: Can you offer any of those from your perspective?
 
>> ANASTASIYA KAZAKOVA: I don't really like the term zero trust. My personal belief here is that, as we live in a society, in a world where zero trust is a really tricky thing, you need at least a minimum level of trust in what surrounds you, whether it's people, technology or products. I am mostly for verified trust: a clear set of measures for consumers, corporate users and private users that lets you, as a company, show that the product is verifiable and trustworthy. In that sense I do believe companies could definitely do more to be transparent – to report how the technology works, what its implications are, how data management is set up, how everything is organized, how the company handles users' data. The more transparent you are about this, the more security and trustworthiness you bring. So that would be my point.
 
>> VLADIMIR RADUNOVIC: Thanks, Anastasiya. It's worth mentioning that you're also one of the partners in the Swiss-driven Geneva Dialogue on Responsible Behaviour in Cyberspace, where the industry is discussing things like transparency, trustworthiness and accountability, trying to link that with the global norms on responsible behavior, so stay tuned to the Geneva Dialogue.
 
Another question related to trust, from Matjia: one more thing connected to trust is the acceptance of general conditions of use. If you do not accept the conditions or terms of use set by the services, you simply cannot use the service. Do we have leverage to push the providers to allow limited access? This is especially important for children, if we manage to teach them critical thinking but being on social media becomes more important.
 
If anyone wants to reflect on that one, feel free to. Otherwise, in the meantime, I'll also ask Henrick to maybe clarify the terms blind trust and zero trust in the context of trust towards companies.
 
Now coming back to digital literacy – you can certainly develop more discussion in the chat on the other tracks – there were a couple of good points on that one, so let me read some of them back to you. Anastasiya said that users often seem to think that it is the responsibility of the platforms to secure the users, but platform providers can only secure the platform; it can be difficult to secure and manage user behavior, and that's where digital literacy comes into place.
 
Then Roberto said digital literacy is needed from the earliest stages in schools. Marcel asked how parents can be introduced to this important topic, and I think Jaroslaw mentioned the teachers in this sense. It was also mentioned that parents are very important, and that children of 12 and above are increasingly important in educating their own parents in digital literacy – and that's interesting, the role of the kids in educating the parents and even the teachers, if we wish.
 
And then – I'm not sure who actually posted this one – about whether the problem is with people, and whether most people know how to authenticate themselves and so on. So, any reflections on digital literacy? I think, Luca, you were the one who, when we discussed this in the preparations, mentioned media literacy a lot – what it means in practice, how we could approach digital literacy and media literacy, and even what the difference is between the two. Luca, if you wish?
 
>> LUCA ANTILLI: Yeah. I think that point on schools and education is really interesting. Part of our role is to collaborate and reach out to different sorts of agencies and bodies in the UK, and obviously the educational ones are very much part of that. Again, we're not officially involved in any sort of regulation of these things right now, but certainly from the work we've done, our general feeling is that currently children in schools are told what you must not do online to stay safe, and that's it.
 
So I think children need a slightly more rounded introduction to digital communications, where it's more about: this is how you can create things, this is how you can share with each other, this is how you can communicate – the positives. Because currently there seems to be a "them and us" thing. There is a culture, particularly around social media use – we do quite a lot of qualitative work amongst children, and they tell us: we just want adults to understand what we're doing, not laugh at it, not think we're being really dangerous or risky or silly with our silly memes and the rest of it; this is our world and we want you to understand what we do. Telling us not to go there – we're kind of aware of that, but give us a more rounded education and also let us share with you what we're doing. So there is a danger of keeping them in the "naughty bad behavior" bubble. They're seeing stuff all the time, and there are things that children often inadvertently come across on fairly everyday apps such as Snapchat, Instagram or TikTok; some of the things they see without looking for them are very worrying for them, and they don't always know if they can tell someone about it, because they think they'll be blamed or told off for seeing the stuff. So there are a lot of very sensitive issues there; and again, I'm not an educator, but we are very close to that part of this whole thing. I'll stop there, but that's just one thing that, certainly amongst children, is really, really key.
 
>> VLADIMIR RADUNOVIC: Thanks, Luca. I see your hand, Julia – in a second. It's quite an important point, when you mention the kids particularly, that they might actually be fed up with someone telling them what to do and what not to do without telling them why. There was a paper that we did some time ago at DiploFoundation trying to outline the competencies of digital literacy, which go way beyond understanding what is secure and not secure, into critical thinking, into our responsibility as users and citizens, if you wish, and into understanding even Internet governance as a condition of the digital age we are living in and its consequences. It's a broad collection of things that falls under digital literacy, beyond even the simple matter of being more safe. Julia?
 
>> JULIA SCHUETZE: I would just like to comment on the security aspect of how to behave in a secure way, and there I would like to point out the role of companies in increasing the awareness of working adults: having routine security established in companies – for example, regular phishing campaigns that make people really understand what a phishing email looks like – but then also having routine responses and trainings. Just like you train for a fire, you should train for what happens if there is a cyberincident, so that at least the stakeholders who are responsible for reacting know what to do. I think there is definitely huge potential in doing that. And this also touches private life, because with the home office we are more and more interconnected; and not all companies have a clear policy that you should not use your private devices for work. There was recently a good article that showed how risky that can be, about the use of Untappd – it's a beer app where you check in a beer, take a picture and say "checked in". Its privacy policies are very good, you can make your profile private and, obviously, you shouldn't share pictures with any confidential information from your company in them; but that has happened: military and CIA personnel actually posted a lot of critical location data there that was public, and posted pictures with military planes in the background. So companies can obviously do a lot with their policies, but at the same time you need actual training and an everyday routine of security, in some way, for adults as well.
 
>> VLADIMIR RADUNOVIC: Thanks, Julia. It's an important point – it's not just the kids, and we usually think about the kids. I'll get back to more of that in a second, maybe with Jaroslaw. There are a lot of comments – I don't know why you guys are not taking the mic; there are certainly a lot of questions, and my voice is certainly not the best one. There are interesting stats on age and the spreading of fake facts, Luca, and I believe it's because older users might not be well versed in what differentiates fake news from authentic news online, which goes back to the literacy of adults as well.
 
Then Rosalia shared the link, Digital EYDU, and Andrea says it's interesting because digital literacy is more a social phenomenon than any deliberate, planned action, which is an interesting point. Jaroslaw can check on that – understanding trust and safety online and understanding the concepts of the online environment. And it was mentioned, I think, that the reason why children can teach adults about digital literacy is because children take a trial-and-error approach to the Internet and digital media and therefore learn quickly; the important thing is to document the knowledge gained and pass it on to future generations to prevent mistakes and uncertainties.
 
Rosalia continues: schools are important in bridging the literacy gap, and COVID is putting disconnected families in a difficult position – that's a really interesting point – and there is also the question of whether the current school system supports, and can support, these parameters. I won't read Luca's comment; he'll connect it afterwards. Wolfgang mentioned that there is a book, Growing Up Digital – the children of yesterday are the teachers of today – and that all the proposals in the UN Secretary-General's Roadmap issued yesterday are true, but unfortunately it takes time and money to get from recognition to implementation; and thank you, Annriette, for that. Then, although children pick up technology really fast, there is a need to teach them, and everyone else, about it. And some French schools have incorporated critical thinking about social media as part of the curriculum. Quite a few different thoughts – Jaroslaw, I don't know where you want to start from.
 
>> JAROSLAW PONDER: A lot of angles, but let me recall our current experience. We are just now planning work for some South Eastern European countries on digital skills, which also encompass digital literacy in three big chunks. Once we start the discussion, even at the country level, we see that there is a very different understanding of how digital skills should be developed in a country so that they address the needs of the gigabit society on the one hand, and on the other hand the needs of simple users as consumers of digital goods and services. We notice that, for example in the Western Balkans, Serbia currently has a digital skills strategy which is very much cross-sectoral and incorporates the interests of the users, but also the interests and potential of the industry, with a good understanding of where the challenges stand.
 
While in other countries, when we start the conversation, there is still a process of working out why there should be one strategy like that and why it is so instrumental – is it not only an issue of education? And it is not, because a cross-sectoral approach is needed, not only from the public sector but also with the engagement of the private sector, to make sure that digital literacy provided to children at a very early age can basically shape them and prepare them for the future, taking into account the issues of privacy, of trust and many other issues, so that they can be confident in the use of ICTs, but more importantly that they can actually produce digital goods in this space.
 
So, we are looking forward to this journey of advancing it. It is not a simple task; as you've seen in the chat, we already have a lot of different proposals, and this is within the community of ICT professionals – once we bring this conversation to the broader public, the number of angles is even bigger. So I would use this opportunity to encourage those who are interested in working on these aspects to join us in our efforts to support the economies in transition, the European ones in particular, and to make sure that the online protection aspects are also taken into account.
 
>> VLADIMIR RADUNOVIC: Thank you, Jaroslaw – again, a lot of interesting points. One of the good questions is whether the current school system is good enough, or whether it is the right place where we can actually embed more digital literacy, and whether any of you has good experiences with embedding digital literacy in the current school or vocational system. I don't know, Julia, if you have any thoughts from the community and user perspective on how to approach digital literacy? What could be good practices?
 
>> JULIA SCHUETZE: Good practices in Germany – just off the top of my head, because media literacy is not my field; it is something two colleagues of mine are working on extensively, also building persona profiles of what things you should know, like the different aspects you already mentioned that are necessary to know.
 
But what has worked well is a project – it's called (?) – they are a nonprofit in Germany with volunteers all over Germany who go into schools and practically mentor young children in trying out hacking, teaching the hacker ethics along with it. So they're learning IT skills but, at the same time, the hacker ethics, and I think this is a good example.
 
What I would have wished for in school is more interdisciplinary projects: if you're teaching media literacy in IT class, also teach a little bit of the engineering part, maybe try to imagine some products that could be built and what they would be for, and include it in political science too. It's a broad topic, and it would be good if it were taught in an interdisciplinary way, because in the end these things are interdisciplinary now – in companies, but also in government, they should be approached in an interdisciplinary way – so maybe some more pilot projects where kids can get engaged.
 
>> VLADIMIR RADUNOVIC: I think this example of using hackathons, or a sort of light hacking approach, for kids to have fun but at the same time learn about ethics and behavior online, and practice the substance, is quite an interesting one.
 
We have a few more minutes. Are there any comments from any of you on digital literacy, to wrap up this part of the discussion and then try to conclude somehow? Anyone who wants to jump in on digital literacy?
 
>> LUCA ANTILLI: Yeah. Hi, Luca here. I would just say that we haven't cracked it yet. We haven't got the answer, and I think that's clear from all the comments coming through and all the different experiences across Europe and the world: we're all really thinking hard about how to make this happen. It's such a big, fluid topic – it covers everything from the early ages all the way through society – so maybe that's a bit of a depressing thought; we're no closer to the perfect package of solutions than any other country or body, I'm sure. But it's really interesting to see some of the things you're saying about where the risks are and what we really need to think about more carefully, and we are seeing exactly the same debate in the UK. For me it's very encouraging that the same discussions are being had, and I just wish we could all get closer to a solution more quickly, because there are, unfortunately, people who are already suffering in different ways as a result of harm online, and we want to get to it and try to fix it as quickly as possible. But it's going to be quite a long journey, I fear.
 
>> VLADIMIR RADUNOVIC: And, unfortunately, the pace of technology development doesn't help us.
 
>> LUCA ANTILLI: Definitely.
 
>> VLADIMIR RADUNOVIC: It outpaces everything that we have tried thus far, so we have to devise some new approaches. It could be both challenging and comforting, going back to Annriette's point that media and digital literacy is more a social problem than a deliberate-action problem, and, reflecting on Julia's points, that it is actually the informal settings in classes that help a lot, like hacker spaces and hackathons. Annriette added, I think, that we need a more political approach, and that it should be as much about politics and society as about technology – for example, teaching critical thinking, which is pre-digital and exists for reasons that are not digital, but digital literacy has to include it.
 
Marie says: especially since hackers have a cool aura in the eyes of many kids, with Anonymous in pop culture and Mr. Robot, et cetera. And Toni mentioned that the Council of Europe does a lot of media literacy work using a rights-based approach. Thanks for that.
 
And Annriette says sex education needs to include Internet pornography, and how children and teens need to understand and analyze it. We're coming close to the end of the discussion, so I'll pass the floor to each of you – but literally like a tweet, 260 characters – and then we'll hear from Andrijana to sort of wrap up the messages. So, who should we start with? Should we go the other way around? I'll start with Luca, then Jaroslaw, then Anastasiya and then Julia. Luca, tweet?
 
>> LUCA ANTILLI: Kind of what I just said really which is we have a long way to go but we are slowly unearthing the very complex and nuanced contributing factors to the issue of digital literacy, and we need to keep working at it.
 
>> VLADIMIR RADUNOVIC: Thanks. Optimistic, at least. Jaroslaw?
 
>> JAROSLAW PONDER: Yes. From our side, I think we see that the pandemic was useful in highlighting and strengthening the importance of the work on cyber, and the fact that we are becoming really dependent on the digital leaves no other choice than to dedicate all efforts to making progress in this area – not only in the policy discussion but also in implementation, and in supporting those countries which require help from those that are already much more advanced in this area.
 
>> VLADIMIR RADUNOVIC: Thanks a lot. Anastasiya?
 
>> ANASTASIYA KAZAKOVA: I would say that we definitely need to move from the zero trust that has been mentioned today to greater transparency in cyberspace and the Internet space around all the actors – what they do, what their intentions and motivations in cyberspace are – and to more verifiable trust, meaning trust where you provide a clear set of measures showing how you actually make your technologies and products trustworthy, and thus explain to users all the implications that might come with the use of this technology.
 
Also, I think that, though it may sound trivial, dialogue plays a really important role – at the municipal level, the state level and the global level – especially during these interesting times with the UN global discussions, again to hear what other parts of the world actually need in terms of protection, cyberprotection, and ensuring that users are actually all protected in that sphere.
 
>> VLADIMIR RADUNOVIC: Thanks, Anastasiya. We did accept the microblog rather than the tweet form but, Julia?
 
>> JULIA SCHUETZE: I wrote down this session gave me energy to focus more on practical solutions that make users more secure.
 
>> VLADIMIR RADUNOVIC: Ah! I was hoping that you would just say "retweet", but that's great. Thanks a lot. Back to our reporter, Andrijana, to come up with the messages. The floor is yours.
 
>> ANDRIJANA GAVRILOVIC: Thank you for the floor. And no thanks for the fact that you left me absolutely no time – we're already running late. I'm from the Geneva Internet Platform, as Elisabeth kindly mentioned, and I'll present the three messages that I prepared during this discussion. Don't worry if there is something that isn't in there, because it will be in the report, which will be available on the Digital Watch observatory (dig.watch) for EuroDIG 2020.
 
The first message is: there is a need for stronger digital literacy, particularly for children, parents and teachers, and for those forced to be part of the digital society because of the pandemic, such as the elderly. It should be interdisciplinary, and users should be aware of risks and taught to think critically and to differentiate between safe and unsafe practices. If you have any strong objections to this message, please write them in the chat. Elisabeth, in the meantime, maybe we can go to the next message – or the next slide. Thank you.
 
Security needs to be more user‑friendly and to that end, ICT providers need to provide greater transparency around their practices, especially regarding the implementation of security by design and security by default.
 
Again, if there are any strong objections, please raise them in the chat. Next slide, please, Elisabeth. Thank you.
 
Companies should implement policies that will raise users’ trust in the companies. They should be more transparent on how data management is installed, how they handle users’ data, and how the mechanisms for reporting inappropriate content on social media platforms work. That would be it from me. A kind reminder that this is not the final form of the messages, and you will be more than welcome to comment on the messages and EuroDIG will provide more details on the commenting platform. Thank you for the floor. Vlad, over to you.
 
>> VLADIMIR RADUNOVIC: Thank you, Andrijana. I saw Julia raised a hand and I don’t know if you want to jump in quickly.
 
>> JULIA SCHUETZE: On the last part could you add, report vulnerabilities as well, if no one else objects?
 
>> VLADIMIR RADUNOVIC: Makes sense.
 
>> ANASTASIYA KAZAKOVA: I strongly agree with you Julia, on that.
 
>> VLADIMIR RADUNOVIC: Makes sense. We didn't touch very much on that – we would need a special session on vulnerabilities and disclosure, but it would be very useful. Thanks a lot for the inputs. A kind reminder, and I think Elisabeth already posted it in the chat, that there is a forum at EuroDIG, so you can switch back to the forum and continue these discussions. There were a lot of interesting inputs, and I certainly hope to see you, wow, hopefully next year. In the meantime, grab your coffee, relax, and move to the next session. Thank you for joining today, and thank you for coming. Bye-bye.
 
>> ELISABETH SCHAUERMANN: Thank you, Vladimir, and everyone here in the room and also on YouTube watching. We are actually here in Studio Berlin making this networking space for everyone who is interested, so you can come join in and there will be a few fun Mentimeters to participate in, and you can chat and we can also see if sound and video works. If that’s a thing that you want to do, I’ll be here for your assistance. And in the meantime, have a nice lunchbreak, and the next session here will start at 2:30 p.m. See you then. Bye.




[[Category:2020]][[Category:Sessions 2020]][[Category:Sessions]][[Category:Security and crime 2020]][[Category:Cross cutting/other issues 2020]]


11 June 2020 | 11:30-13:00 | Studio Berlin | Video recording | Transcript | Forum
Consolidated programme 2020 overview / Day 1

Proposals: #75, #86, #155 (#121, #160, #166)

Session teaser

Information communication technologies (ICT) open up enormous opportunities for both social and economic development. However, at the same time they pose threats risking the safety and security – including privacy – of users by creating new vectors for cyberattacks. During the session, from different stakeholder groups’ perspectives, we will discuss users’ perceptions and concerns regarding risks in cyberspace, and identify the challenges in addressing them – including the current pandemic crisis. We will also explore existing and possible future solutions to ensure users’ confidence and trust when using ICTs and going online.

Desired outcome

The primary aims will be (1) to conceptualize the existing situation for users in cyberspace by outlining key existing and potential risks, and (2) to identify best practices and actionable recommendations for users and actors that are capable of making a contribution to enhancing security and users’ confidence in cyberspace. These actors include governments and intergovernmental institutions, the technical community, civil society and private sector entities.

Format

The interactive moderated discussion will take place with experts representing different stakeholder groups and addressing three sections on the topic:

  • Risks (what should we – users – be aware of and what makes us vulnerable?);
  • Challenges (do we have enough resources and capacity to address the existing risks and ensure the security and safety for users in cyberspace?); and
  • Solutions (what different stakeholder groups could and should do to ensure not only a secure, but also a human-centric cyberspace, where fundamental values are guaranteed and technology works for people?).

The session would take place with interventions from the audience and other participants after each section to ensure a fruitful dialogue and exchange of views in the EuroDIG community.


People

The session brings together representatives of different stakeholder groups:

  • Intergovernmental representative: Jaroslaw K. Ponder, Head of the ITU Office for Europe, International Telecommunication Union
  • Technical community representative: Luca Antilli, Head of Media Literacy Research, Ofcom
  • Civil society representative: Julia Schuetze, Project Manager, International Cyber Security Policy, Stiftung Neue Verantwortung (SNV)
  • Private sector representative: Anastasiya Kazakova, Public Affairs Manager, Kaspersky

Moderator: Vladimir Radunovic, Director of e-diplomacy and cybersecurity programmes, DiploFoundation

Organising Team (Org Team)

  • Claire Local
  • Nicole Darabian
  • Amali De Silva-Mitchell
  • Fotjon Kosta

Focal Point

  • Anastasiya Kazakova, Public Affairs Manager, Kaspersky

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter

  • Andrijana Gavrilović, Geneva Internet Platform
Messages

  • There is a need for stronger digital literacy, particularly for children, their parents and teachers, and those who are forced to become a part of digital society by the pandemic, such as the elderly. Digital literacy should be approached in an interdisciplinary manner. Users should be more aware of risks and taught to think critically, as well as differentiate between safe and unsafe practices.
  • Security needs to be more user-friendly. To that end, ICT providers need to provide greater transparency around their practices, especially regarding the implementation of security by design and security by default.
  • Companies should implement policies that will raise user trust in these companies. They should be more transparent on how data management is installed, how they handle user data, how their vulnerability disclosure practices work, and how the mechanisms for reporting inappropriate content on social media platforms function.


Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/enhancing-users-confidence-cyberspace-risks-and-solutions.

Video record

https://youtu.be/qV5EFUzF6Rs?t=1377

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> VLADIMIR RADUNOVIC: Good morning. Good afternoon. Just a quick test that you can hear and see me.

>> ELISABETH SCHAUERMANN: Hello, Vladimir. Yes, we can hear you and see you.

>> JULIA SCHUETZE: Me as well? Sound and video is okay?

>> ELISABETH SCHAUERMANN: Yes, Julia, we can hear you and see you well.

>> JULIA SCHUETZE: Awesome. See you in a bit.

>> SANDRA HOFEICHTER: Welcome back. Nadia, can you hear me?

>> NADIA TJAHJA: Yes, I can hear you.

>> SANDRA HOFEICHTER: Yes. The first session was wonderful. So much interactivity both in the discussion forum but also in the chat itself. People were having discussions, bringing it together, and people had different types of opportunity to participate, people who are usually rather quiet could participate by writing and people who are less, you know, keen on speaking have a different way to engage in the dialogue and I think that’s a fantastic opportunity.

Okay. That's basically what EuroDIG is about. We have to apologize – my team told me I shouldn't call it technical issues, but for a short moment we had some room limitations, because although we upgraded our license, we had to assign the bigger room to our license, and we only had one. So my apologies to everyone if you were not able to enter the Zoom room. If at any point you have problems with your credentials, or you cannot enter the room for any reason, please refer to the streaming that we also provide on EuroDIG.org and on YouTube, so you are at least able to follow the session and to use the forum. But normally you shouldn't have any problem entering the rooms, because we have upgraded to 1,000 participants for the room in The Hague and up to 300 and 500 participants in the studios in Trieste and Berlin. Nadia, the next session in your room is about encryption, and I see around 60 participants have joined your studio already; I guess these are the ones that will be speaking in this workshop. I wish you a fruitful session, and we welcome you back after your session with the big stage in your studio here.

>> NADIA TJAHJA: Wonderful. We look forward to having you back.

>> SANDRA HOFEICHTER: Okay. See you later, Nadia. So, then I would like to try to connect with the next studio which is Berlin and in Berlin we have Elisabeth.

>> ELISABETH SCHAUERMANN: Good morning.

>> SANDRA HOFEICHTER: I see the weather in the Berlin is perfect.

>> ELISABETH SCHAUERMANN: At least in our background it is.

>> SANDRA HOFEICHTER: Wonderful. Did you follow so far the EuroDIG a little bit?

>> ELISABETH SCHAUERMANN: Yes. I could tune in a bit today for the opening before we did our setup here, and it was really good to see that everything has gone well so far.

>> SANDRA HOFEICHTER: And I also see the first people are already connecting to your session, including the moderator, Vladimir Radunovic, and he is a very experienced person in remote moderating, so with him as the session moderator you will be absolutely on the safe side. Hi, Vladimir, I see you already. Perfect.

So then, I wish you a fruitful day. One thing that we should mention: in our studio in Berlin we are going to do a great experiment, because this is going to be the networking area. We said already that networking in a virtual meeting is a difficult thing; nevertheless we will try, and Nadia will be the networking host in the lunch break and will have prepared some questions. But in case it doesn't really work out, we take the liberty to play some music in your studio as well; of course, I hope you will be able to offer some great networking opportunities for our participants. Good luck, Nadia.

>> ELISABETH SCHAUERMANN: Thank you, Sandra. See you later.

>> SANDRA HOFEICHTER: Elisabeth by the way. I’m sorry.

>> ELISABETH SCHAUERMANN: That’s a lot of names in a short amount of time.

>> SANDRA HOFEICHTER: Okay. Thank you. And then let's take the bridge further to our studio in Trieste, and here we have Marco. Marco, you're sitting in front of that castle. How did you manage that?

>> MARCO: We rented a boat and we're all sitting on a boat in front of the castle. I'm joking. Welcome. You should have seen Trieste in June – I hope you will be able to see it next year. This is the castle, and our campus is just behind it.

>> SANDRA HOFEICHTER: I can confirm that's really the case. We were there for the planning meeting last year in September and this year in January, and the picture that you can see is really what you get when you go to Trieste and participate in EuroDIG next year in the ICTP facilities – not that this is ICTP, but it's just around the corner.

Marco, the session in your studio will deal with the innovative use of blockchain and public empowerment. I see the focal point is already connected to your session. Nadia, thank you also for all the effort that you put into the session, and I wish you, Marco and the entire team in Trieste good luck for your session. With this, I think we are ready to go. I hand over to all three studios – see you at 1:00 sharp. Over to you.

>> ELISABETH SCHAUERMANN: Thank you, Sandra. All right. With this, we still have two minutes before we officially start, which I would like to use to remind all of us of the code of conduct; but before we do that, hello again to everyone in the room and on the live stream, and welcome to the start of EuroDIG 2020. I'm Elisabeth Schauermann, the host for this session, the networking space and the session after that here in Studio Berlin, together with my colleague, here from the German Informatics Society, to keep the sessions up and running. So, for the sessions, just a quick reminder: all of those present in the Zoom room, please identify yourselves with your full names. You can change your name yourself. If you want to ask a question or make a comment in the interactive parts of the session, please raise your hand, and then our remote moderator will unmute you and you will be given the floor.

Once you are given the right to speak, please switch on your camera if you like. We do not force you, but it’s, you know, it’s nice. And then state your name and your affiliation before you make your comment or ask your questions. Contributions can also be made in the Chat and in the forum and Lilian will try and follow and bring that up to the discussion as well.

The Zoom rooms can hold only so many participants, so please do not share the links with anyone. If participants fail to comply with parts of the Code of Conduct, we will gently remind you to follow it; as a last resort, participants can be removed from the room, but we really hope we do not have to take that measure.

One more important note: I'm happy to announce that we're partnering with the Geneva Internet Platform for reporting and curating the EuroDIG 2020 messages. Andrijana is the reporter for this session and will be given the last five minutes of the session to present the main points for the messages to all of us. And with this, I would like to close my opening remarks and open the first session of today, Workshop 2, Enhancing users' confidence in cyberspace – risks and solutions, which is moderated by Vladimir Radunovic. Over to you, Vladimir.

>> VLADIMIR RADUNOVIC: Thank you so much for the introduction, Elisabeth, and for the housekeeping notes. I guess we can start slowly. We have a bit of introduction. You can relax and get your coffee. I do welcome you to Berlin, even if it is only a remote or virtual Berlin space that you have joined today, and with that I hope that your journey was safe, and that no ID was stolen in the meantime, no one followed your data that traveled through the net, no VPN was hacked because I guess you’re connecting from different places, and that you haven’t met any scammers on your way to joining and following EuroDIG in the previous days.

I do guess that most of you are actually sitting back home, probably in pajamas – you can switch on your videos at some point so that we see the dress code. But being home, not having actually traveled to Berlin and not having encountered all the possible fraudsters on the way there, doesn't really make you safe, or more safe: those guys or girls are around, and you could have met all of these risks along your way from your home town to Berlin, even virtually.

The pandemic has definitely changed our lives in a way. We are moving toward something that I like to call a blended life, a life where we are going to mix in-person encounters and online life more and more. But it has also changed the cybersecurity environment, or the security of our digital environment. We have seen a number of reports, webinars and discussions in the previous months looking at the main risks that emerged or were emphasized during the COVID crisis.

So, basically, what we want to do today is run through three elements of discussion. The first one is: what are the particular risks that each one of us, as users, is seeing or feeling, with a special focus on the times during and after the crisis and what we are likely to see more and more.

The second block of the discussion will be focused on challenges: what are the particular challenges – mainly related to our social environment and our behavior, but also to the technologies and resources that we have, and so on – when it comes to addressing these risks.

And, lastly, we will look into solutions. So, some of the practices or examples of how we can be addressing these particular challenges.

With us today I'm pleased to welcome to Studio Berlin Julia Schuetze, a Project Manager at SNV – Julia, I'm not going to try to read or pronounce the full German name, so you can probably do that once you have the mic. With us also are Anastasiya Kazakova, Jaroslaw and Luca. Big thanks to Anastasiya for putting this together with the team, a huge team behind the operations, and particularly for giving me the opportunity to have at least an equal number of ladies on a panel which deals with cybersecurity. It's not that often, and I'm really pleased that we have two ladies with us today.

One of the desired outcomes of the panel is that we want to try to conceptualize the existing situation and outline the key existing and potential risks, and secondly to identify some best practices and, hopefully, actionable recommendations on how we can address these risks.

As for the format, as Elisabeth already mentioned, we have four panelists, but we do expect a lot of interventions from you. Looking at the participant list, I see some familiar names, some good friends, some people who are well into the topic, but also some people who might not be much into cybersecurity but definitely have their own experiences to share as users, or different perspectives from human rights, connectivity, economic aspects and so on, and I encourage you all to be as active as you can.

You have an option to raise your hand, and we’ll try to follow that as much as possible throughout the session; and you also have an option to post a chat comment. If you do that, I kindly ask you to try to add a hashtag at the beginning of your comment. Whatever the hashtag could be. Whether it’s #vulnerabilities or #childcrimes or whatever, so it’s easier for us to locate and put in context your comment and we might call upon you if we have good comments to raise as well.

We'll be using Mentimeter to gather some of your collective input. And with that, Elisabeth, I think we can start with the Mentimeter. There are three different questions that we want to ask you about your experiences: what you think are the places which make you most vulnerable, what you see as the main threats in this environment, and what are the particular assets that we should be protecting. We start with vulnerability and we'll move to the other questions later on. Let's start with this one.

To fill in the poll, simply open your browser, or use your mobile phone, whatever you prefer: go to Menti.com, enter the code and you get the option to respond. What makes you as a user most vulnerable? If it's none of those, feel free to put "other", but then we kindly ask you to add more explanation in the chat. With that, I'm actually moving on to the first block of our discussion, where we're discussing the risks that we are seeing in this environment.

And with that, just to help us outline the aspects of risk better: in risk management theory you have the components that make up risk, which are the vulnerabilities, the assets we want to protect, and then the threats. There is a visualization that you can probably see somewhere behind me, and I'll try to illustrate it: we're looking at the ducklings on one side, which we need to protect; we're looking at any sort of cracks in the branches – in the applications that we use, in our behavior, or whatever; and we're looking at the crocodiles – what are the threats really today?

We're starting with vulnerabilities. As we noted, there are some reflections there: most of you say it's actually humans, some say software and technologies, and some of you say it's laws and regulations. No one chose "other". I would be keen to see what other vulnerabilities you see, so please go ahead with your comments. With that, I pass the floor to Anastasiya, who represents Kaspersky, a company which has a lot of data and details about what is actually happening. So, Anastasiya, can you tell us briefly what some of the results of your analysis are – the main developments, the main threats – and, certainly, you can comment on the results of the poll as well if you wish. Anastasiya?

>> ANASTASIYA KAZAKOVA: Hi to everyone. Thank you for passing me the floor, and thank you for being here today at the EuroDIG session. I will cover the key cyberthreats that the Kaspersky team reports and –

>> VLADIMIR RADUNOVIC: Anastasiya, should we put on the PowerPoint? Elisabeth, if you can kindly put on the PowerPoint by Kaspersky, and I can probably lead through the slides and Anastasiya can tell me next slide and so forth.

>> ANASTASIYA KAZAKOVA: Uh‑huh. So, there is some fresh data for Q1 2020, and the situation is completely unique: the COVID‑19 pandemic has affected us all in some way, and the entire cybercrime landscape has changed in the last few months. It would not be correct to attribute all of those changes to the pandemic, but there is certainly a connection, and particularly from the user side we have all become more cyber vulnerable than before. The first thing you see on the slide is remote work and attacks on remote services and remote access tools. From an information security standpoint, an employee within the office network and an employee connecting to the same office network from home are two completely different users, and cybercriminals share this view: the number of attacks on remote services and remote access tools, the so‑called remote desktop protocol attacks, has increased by 23% since January. There are a couple of slides, Vladimir, if you could click through them: in Italy and other countries, in Germany, and also in France, you can see the spikes actually happen at the beginning of March, and then finally in the U.S. as well, just as an illustration.

You could ask why this happens. Corporate data moves from a secure corporate environment to a less secure and less protected environment at home, and becomes a super attractive target, for example for ransomware attacks and phishing attacks. And remote entertainment is one thing that is actually unique to this period, to this Q1, and it also illustrates the growing vulnerability of users in cyberspace. Online activity via streaming services increased, everybody was logged in at home, and many of the services reported increased user traffic. But the cybercriminals, again, responded to this trend too, and the average daily number of attacks blocked increased by 25% since January of this year, so users could be caught by malicious adware and so on.

The next slide shows the geography of mobile threats. In Q1 the attacks were widespread, and users frequently encountered adware and Telegram clone apps. And on the next slide, you can see the distribution of mobile threats across the regions.

Speaking of mobile banking trojans, this is the next slide: there is also an increase, which gives us an indicator of the growing financial cyberthreats to users; however, mobile ransomware decreased over the period. Next slide, which also shows that ransomware criminals find users’ desktop applications, mobile applications and mobile devices attractive.

And, finally, two last points that also characterize the cyberthreat landscape during Q1 and what else makes us vulnerable: the vulnerable applications that we use and the security flaws of IT products. You can see on the graph, on the diagram, that most of such applications still remain the office applications, the office programs, and cybercriminals, of course, exploit that.

Lastly, web attacks: the so‑called Internet attacks use resources that redirect users to websites containing exploits, malicious programs and bots, and you can see the distribution of the countries where most of the Internet attacks were registered by us in Q1. Just as a note, to determine the geographical source of web‑based attacks, the domain names are matched against the actual domain name IP addresses, so that the geographical location of specific IP addresses of malicious activity can then be established.
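[Editor’s note: the geolocation step described above, resolving a malicious domain to an IP address and then mapping that address to a country, can be illustrated with a minimal sketch. This is not Kaspersky’s actual methodology; it assumes Python’s standard library for DNS resolution, and lookup_country(), the sample data and the example domains are hypothetical stand‑ins for a real IP‑geolocation database.]

<syntaxhighlight lang="python">
import socket
from collections import Counter

def lookup_country(ip: str) -> str:
    """Hypothetical helper: map an IP address to a country code.
    In practice this would query a geolocation database (e.g. GeoLite2)."""
    sample = {"203.0.113.7": "IT", "198.51.100.9": "DE"}  # placeholder data only
    return sample.get(ip, "unknown")

def attack_sources(malicious_domains):
    """Resolve each malicious domain to an IP and tally the hosting countries."""
    countries = Counter()
    for domain in malicious_domains:
        try:
            ip = socket.gethostbyname(domain)  # domain name -> IP address
        except socket.gaierror:
            continue                           # skip domains that no longer resolve
        countries[lookup_country(ip)] += 1
    return countries

if __name__ == "__main__":
    # Example: tally the countries hosting a fictional list of malicious domains
    print(attack_sources(["malicious.example", "phishing.example"]))
</syntaxhighlight>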

Just as a final note, I would like to add that what also makes users vulnerable in cyberspace is definitely the lack of cyber awareness in a more and more sophisticated threat landscape; it’s definitely the vulnerabilities of software and the delays in patching them; vulnerabilities in critical societal sectors, for example healthcare; and it’s also the fast pace of emerging technologies. So I will stop at this point.

>> VLADIMIR RADUNOVIC: Anastasiya, maybe a quick reflection or a quick question back to you. To what extent has the COVID‑19 context been used in the mobile trojans and the phishing, and in what sort of contexts has it been exploited by the criminals? Do you have any data?

>> ANASTASIYA KAZAKOVA: A good point and a good question. Colleagues in the security teams at Kaspersky usually say that there has not been a tremendous overall increase in cyberattacks throughout this period; however, COVID‑19, as one more agenda item, has been largely exploited by cybercriminals, in regard to users, like consumers and ordinary users, as well as in regard to the private and public sectors, with attacks on medical facilities, healthcare, and those facilities that have been conducting and still conduct medical research.

>> VLADIMIR RADUNOVIC: Thanks. We’ll get back to that, definitely. We’ll discuss more of the threats. I’ll ask Elisabeth just to put back the Menti just to see the results of the first poll and then we’ll move to the second one.

And in the meantime, I’ve noticed some of the good comments that were made when we were talking about the main vulnerabilities. Tatiana mentioned that software and regulations are actually created by humans, so again we come back to the impact of humans.

So, let’s see what the results of the first Mentimeter are. Humans and behavior, with a particular focus on behavior, is by and large the biggest vulnerability. I must say I expected that; there are a lot of statistics saying that around 90% of attacks are actually based on human error. Then we have software and technology, and then laws and regulations, and still no one on the “other”, which is interesting.

Let’s move on, and probably I can even move to the next one; or if I can’t, you can just move to the next question, the next poll question, basically. What are the assets you think are in danger? What are the particular things that we want, or need, to protect more and more in cyberspace? Security of devices, personal information, finances (we’ve seen the financial threats), society, the stability of society and democracy and values, lives (we haven’t seen any casualties yet, but it can happen), and then maybe “other”; maybe you can think of others while you’re voting. Certainly, if there is anyone that wants to jump in at any point, just raise a hand, we’re following that and will give you the floor, or post your comments in the chat.

In the meantime, I’m passing the microphone to Julia. We have touched a lot upon the technological aspects, and particularly the threats related to VPNs, financial fraud and so on, but there is also a lot that touches directly on personal data and incidents related to personal data. Julia, do you want to comment on that? And certainly, you can reflect on the results of the Mentimeter and the positions of the participants as well. Julia?

>> JULIA SCHUETZE: Yes. Thank you, Vladimir. I think I’ll pick up on the risks coming through software and regulations right now and how they could affect the stability of society. I have four points that I would say are the risks I’m seeing when I look at responses of states to malicious cyber activities and also at how states and governments behave.

So, first, we see deployment of vulnerable infrastructure, for example in elections, that could make elections insecure and thereby also erode trust in the end results; for example online voting or vulnerable tallying software, where confidentiality, availability and integrity can be compromised. There was a case in Germany where they figured out that the tallying software, which adds up the votes and passes them up to the next level, is actually insecure. This can become even more critical when I think about the fact that we’re moving more towards e‑government services, so these attacks might not only happen around election cycles but actually affect cities; we’ve seen cases like that also in the U.S., where cities were affected by ransomware attacks and some of the services could not be used by citizens.

And then, secondly, I would say some government policies, for example the broader use of government hacking and the exploitation of zero days, can actually make us all weaker, especially if encryption is weakened in consumer products that are broadly used. Then, to some extent, the militarization of cyberspace, the use of offensive means in retaliation or to persistently engage, can have collateral damage that affects users, so more and more cyberspace is used for other national security objectives, which can then have an impact on security in cyberspace.

And then I also see a risk in companies not exercising due diligence, which can affect users and can lead to massive leaks of personal data; recently EasyJet, for example. I talked two weeks ago with a security researcher, Kris, and she mentioned vulnerabilities in Boeing, bad security practices, not using effective security, and having bad policies on disclosing vulnerabilities, so that if vulnerabilities are found by outside experts or researchers, they cannot be safely and securely reported.

So, yeah, those are my four broader points on how regulation, but also human behavior and the way software is developed, can affect and put at risk our security.

>> VLADIMIR RADUNOVIC: Thank you. Looking at the results here, they actually support your concerns about elections and, generally, the stability of society: you can see the majority of people say it’s society and the stability of values that is at risk. Tatiana mentioned in the chat a couple of other aspects, or assets if we wish, which are not necessarily just lives when it comes to humans: what about risks to mental health, and bullying? We’ll address that in a second when it comes to child protection in general, but also, Tatiana, I understand that some of those would fall under the values of society, like some of the gender issues related to cybersecurity. Still, that doesn’t fully capture a human‑centric view of cybersecurity, in my opinion; I probably don’t sound very coherent here, but it’s harder for me to place humans into the asset category.

Good point. I mean, we can certainly go into different aspects of what is at risk and what we should protect more, and you outlined some of the parameters there well.

Julia, you mentioned the elections, and it’s either about hacking the machines or about information warfare, social media warfare and so on in a way; but there are other aspects, such as changing or hacking the voter lists, or even the way the votes are counted, which is usually done through a machine, or through an Excel sheet in developing countries, but is still computer based. Any other aspects?

>> JULIA SCHUETZE: Yeah, maybe just one addition there. What is also important is that, obviously, citizens trust the process, so to what extent do they trust the technology that is then employed? We studied different cases and came to the conclusion in the end that even if things are not hacked, if someone just says they’re hacked and people believe it, even though the result is actually the true result, no one believes in the result, and that’s a really big risk. The legitimacy of the democratic process is at risk if people don’t believe or don’t trust the result anymore because of the use of technology. So in some critical aspects of society, I see the use of technology as very risky.

>> VLADIMIR RADUNOVIC: And I think this is a really important point: whether the elections, or any part of this chain, are actually hacked or not, if there is speculation that they were hacked, you already have a decrease in trust in the whole system. So yeah, thanks for that.

We’ll move on to the next question for all of you, so Elisabeth, you can just move to the next point in the Mentimeter. The next question actually builds on what Julia has mentioned. Anastasiya previously mentioned the criminals, Julia mentioned also the states, and there are other different groups that try to exploit cyberspace in various ways, so the question for you is: where does the main threat come from, or if you wish, who are the main crocodiles behind me? Is it petty criminals, organized criminal groups, terrorists, states, companies? It could be neighbors as well, or even friends, I hope not. So, what are the others? Again, feel free to add more under “other”.

I think this is actually a good introduction to Luca’s part in a way, and Luca already commented in the chat in relation to trust, so I’ll leave it to you, Luca, to reflect on the previous discussion, but you can also reflect more on the research that you did on the different risks and on the way we measure the risks and harms that are coming. Luca, the floor is yours, and let me know if you want me to switch to the slides. Luca?

>> LUCA ANTILLI: I will do that. Listen, I’m really sorry because I missed the test just before we started the session, so I’m sorry about that; I think someone was trying to hack me, maybe. But can I get a nod from you, Vladimir, can you hear me and see me? It might be a little bit dark.

>> VLADIMIR RADUNOVIC: Yes, loud and clear.

>> LUCA ANTILLI: That’s great. Thank you, everyone. Very briefly, because I’m coming from a particular angle in this discussion: I work at Ofcom, which is the UK’s statutory independent communications regulator, so we cover the whole broad range of communications services that citizens and consumers use in the UK, and that’s not just Internet‑enabled services but also TV, radio, phone, mobile phone, postal services, and all the rest of it.

We’re not actually a regulator for online services in any official form yet, but the UK Government has said that it’s thinking about appointing us. But really, in a way that’s irrelevant, because ever since Ofcom was founded in 2003, we have had a duty to monitor and promote this notion of media literacy; and there is a lot of debate about what media literacy actually is and what it covers. Our definition is the ability to use, understand, and create media and communications in a variety of contexts.

Back in the day, when the Internet wasn’t quite so dominant, that meant looking at people’s more general understanding of, for instance, advertising models and public service broadcasting; but increasingly, as the Internet has come to dominate our lives in so many different ways, that has come to the fore, and now we’re looking much more closely at how people use Internet‑based services, and at their attitudes in terms of trust, understanding and judgment when it comes not just to cyber issues or security issues but also things like information, news, advertising, personal data and all that sort of stuff. We’ve got quite a broad view of this.

And, remember, my focus is very much on the UK, but I think what’s been said so far about vulnerabilities is really, really interesting, because one thing that our data tells us is that you can research a block, you know, a representative UK population, but the vulnerability of one given user is very different from the next, and this is where media literacy comes in, because people’s ability to understand how to use different types of software, for example, varies immensely depending on background, be it socioeconomic or geographical, and so on and so forth.

So, our focus is very much on trying to understand how people interact with these services, what their levels of concern are, the extent to which they’re experiencing actual issues online, including cyberattacks, but also, again, fake news and other kinds of online harms such as exposure to violent or abusive content, and so on and so forth. I’m not going to go through the research data in detail because there is just so much of it, but I wanted to give you an example of the kind of things that we look at when we’re assessing how people in the UK think about risk online and what they do about it.

So if you go to the next slide, please: over the last couple of years we’ve done quite a large annual piece of research looking specifically at people’s concerns about using the Internet, what they actually experience using the Internet, and finally, what they do as a result of experiencing potential harms online, and here is an example of how, as I was saying before, different parts of society have different views. In terms of concerns, one thing is clear, by the way: if you put aside offensive and abusive material involving children, or risks for children, the highest claimed concern online for UK Internet users is around things like hacking and security and things like data and privacy, so it’s very much top of mind in the average UK Internet user’s thoughts around concerns.

If we then move on to actual incidents, and this is an example that’s quite granular, we can look at gender, age group and socioeconomic background, and that’s where we start to see differences come through. If you go to the next slide, this is a simpler example of the kind of things that people are experiencing: among UK Internet users in the last month, from this fieldwork which was done in February of this year, 62%, roughly two‑thirds of adults, have had some kind of potentially harmful experience online, which rises to 81% among children, so these things are happening out there. We’ve got an awful lot more granular data on it, and we can look, for example on this slide, at where those harms were experienced, such as on social media sites, email, search engines, and so on and so forth.

So, what I’m building towards, really, is that it’s a very, very nuanced picture: there are a number of different ways of categorizing these potential harms, and there are a number of things that people are experiencing in different ways and in different environments.

I know the focus of this session is more on cybersecurity, and I think it’s interesting to note that, again, putting aside harms and risks for children online, cybersecurity is the key thing that people think about. The question is, what are people doing about it? Another area of this research is how far people feel they’re protected, and that falls into a number of categories.

Firstly, how far do people think that they themselves are able to use the Internet in a way that protects them, and are they confident? We see some interesting things in our research. People generally tend to think they’re quite able to navigate the Internet safely; in our surveys, consistently for the last 10 years, around three‑quarters of adult users will say “I’m able to manage my personal data and use software effectively”. However, there is a big gap there between people’s confidence and what they’re actually doing online, and so what we see when we ask them – oh, man –

>> VLADIMIR RADUNOVIC: We lost the connection. We’ll get back. That’s one of the things that happens in the online environment, unfortunately.

>> JULIA SCHUETZE: Just when it got really interesting.

>> VLADIMIR RADUNOVIC: Exactly, yeah. I think he’s doing that on purpose so that we keep waiting for him to come back, and we’ll continue with that as soon as Luca is back. We can probably –

>> LUCA ANTILLI: Hello? Can you hear me now?

>> VLADIMIR RADUNOVIC: We just said you’re running out at the moment when it became really interesting.

>> LUCA ANTILLI: Oh, okay. (Laughing). Just a minute or so more, because the point I’m trying to make, and I think it has come up a little in our session so far, is that to say the human factor is one simple variable in all of this isn’t true. There are lots of different variables, and within the human factor, when it comes to security and behavior online, there are an awful lot of nuances, too. And I think as a researcher, which is kind of my angle on this, we just have to be really careful that we don’t take at face value what people are telling us. Even children: children will tell us that they’re kind of okay when they see what they note is clearly a pedophile on Snapchat, and they think they’re handling it right, and then you talk about what they’re actually doing and they’re not. Similarly, when it comes to cyberattacks, people say “I know what to do when something is happening”, or “I know what’s happening when I click to accept those terms and conditions”; and in reality, it’s become almost a normalized thing: people are accepting things, people are seeing things happen in front of them, but they’re not necessarily in control of their own safety, or indeed that of other users. So that’s kind of our angle, really, and I’m sorry it was a bit broken up. And it’s all wrapped under this notion of media literacy, which is not just about having the skills to differentiate between something that looks safe or not safe, but more generally about having an awareness and, if you like, a kind of healthy skepticism, just to be able to look at things in front of you, understand where they might be coming from, and make judgments in that way.

I’m afraid, at the moment, our research would say that, yes, a lot of adults in the UK are able to do that to a degree that you could argue is satisfactory, but there are still quite a lot who aren’t able to do that, and that’s where the likes of us in this group need to make more interventions. So, I’ll stop there.

>> VLADIMIR RADUNOVIC: Thank you, Luca, excellent points. We’ll get back to media literacy a little bit later, in the challenges and solutions part. Elisabeth, can you bring back the Mentimeter so we can see where we are. And a quick question for you, Luca, in the meantime. An invitation for everyone: I know you’re muted, but please raise a hand to jump in, otherwise I’ll start calling upon you; and I know there are a few comments from the mobile participants as well, and we need time to find you and give you the floor. But Luca, a quick reflection from you.

Based on the research, and given the COVID crisis, since we have become more connected and depend even more on these tools: firstly, do we have any data, or otherwise do you have any opinion, on whether the current crisis could actually make us more aware of the threats, and raise or change this perception so that we understand more clearly what the risks are, or which direction could it actually go?

>> LUCA ANTILLI: Really good question. We have obviously done quite a bit of ad hoc work during lockdown in the UK, during the COVID crisis, and we find that people are certainly using the Internet more, as you might imagine, but our focus has been more on things like information and news. And we found that, in a way, during this period people have become even more polarized, or more ingrained in their own behavior: those who tend not to really worry about whether a source they see is trustworthy or not are continuing that way, and those who do are increasingly more skeptical. There is an interesting video clip, because we did some video calls with participants online, and they were saying that they see messages on their Instagram feed from the government, you know, UK Gov or the NHS, our health service, and we said, oh, that’s good, that must be reassuring to have a message on your feed, and they said, actually, no, it says NHS up there but I don’t know if it’s fake or not. So particularly in a social media environment, which is where so many people are more and more, if anything the doubt and the lack of trust that is already there in social media has actually been exacerbated and inflated a bit more. So I think it’s a really interesting area, and there is an idea out there that lockdown has made us all more Internet savvy and much more self‑aware, and I don’t think that’s necessarily true.

>> VLADIMIR RADUNOVIC: I encourage all of you to share your thoughts in the chat on whether you think the people around us, our friends and colleagues, will actually feel or be more aware of the risks and do more to make their environment safe, or not.

Then, looking at the poll here: as expected, we have organized groups as one of the big threats, and we have discussed the states; and it’s interesting to see that companies actually rank quite high, so I encourage those of you who wish to comment in the chat, or raise a hand, to reflect more on what the role of companies is and what particular threat is coming from companies.

There are a couple of comments in the chat and I’ll get back to them in a minute. I want to pass the floor to Jaroslaw to reflect, because the ITU is one of the main institutions following the threats and doing a lot with states on capacity building and monitoring. So probably two aspects, two questions that I have at this point for you, Jaroslaw: one is related to monitoring the risks, and the other is protecting the vulnerable groups, and child protection is one aspect of that, but there are other examples, I’m sure. And certainly, if you wish to reflect on any of these inputs thus far, please feel free to do so. Jaroslaw?

>> JAROSLAW PONDER: Thank you very much, Vladimir. So, as you have already mentioned, the ITU is the UN specialized agency focusing on ICTs, and cybersecurity is one of the aspects which is very close to our heart and to the members of the ITU.

We work on the enabling environment and the strategies, but also on the standardization aspect: there are more than 2,000 standards which touch on the cyber issues, and many, many more are coming.

So, of course, during COVID we experienced a very new situation in terms of bringing new users into the digital space, and we noticed that a lot of stakeholders expressed more interest in being engaged in helping to address those new emerging threats related to the behavior of inexperienced users of ICTs.

That’s why this brought us to the point that, in fact, digital skills became one of the important issues to be addressed in the coming time, to make sure that those who were forced by the pandemic to be part of the digital society can now engage in the proper way and know how to navigate; but more importantly, that they are avoiding any kind of threats and are not having bad experiences with the ICTs.

And there are many such users, many of whom we are not thinking about on a daily basis. There is a huge number of elderly people who through COVID were forced to become digital citizens and start to use the social media tools and platforms, and they, unfortunately, have become victims of phishing campaigns, so that’s why we have to pay attention to those who maybe require much more attention and who are also forced to use the new services.

This is also the reason why the ITU developed and launched, immediately after the pandemic began, the new Digital Skills Assessment Guidebook, and we’ll be rolling this out, at least in our case in Europe and in some of the countries, to assess the gaps and to make sure that the countries reflect this in their development strategies, but also to bring to a higher level the importance of bringing those who are unconnected into the digital space.

At the same time, we noticed that children also became a very vulnerable group, and this is the reason why we have the pleasure of launching, on the 24th of June, the new set of guidelines, the global guidelines developed by the international community, addressing the four groups of those who make the change in the digital space: the children, educators, industry, and the policymakers.

And let me start first with the children, who are perceived as vulnerable, and a lot of programs have been developed to help them in many countries. This seems normal to all of us, but it is not so in all of the countries. On the 28th of June we are also launching the new studies focusing on the Western Balkan countries, eight economies, and we notice that some of the countries are still missing the strategies, missing attention to this issue in terms of providing programs and building the capacities of the children. We will be looking at this not only from the institutional perspective, to strengthen the capacities of the countries, but also by providing some means for the countries to launch national initiatives.

The other group is the teachers. In many, many cases, these are fresh users of the applications and of the tools which, from day one of COVID, were put into their hands as the way forward for delivering content to the children, and in many ways we’ve seen different behavior within this group. So this requires a lot of attention, and also a redesign of the way the new skills for this group will be provided, so that some sort of charter of protection measures can also be pushed through these means.

So, of course, from our side as the ITU, since COVID started we have succeeded in developing quite a good understanding within the UN system: we called for the agenda for action together with many UN agencies, UNHCR, UNICEF, UN LDC, ILO, IOM, in follow‑up to the Secretary‑General’s policy brief on the impact of COVID on children, and so we are united to make sure that in the coming years and the coming time, we can make a difference in this field.

So, just not to prolong this: of course, for us one of the important things is that we’re not only acting ad hoc. COVID has come and hopefully it will go, but hazards like this will be coming in the future as well, and we have to build the systemic preparedness of the countries and raise the commitment of the countries. This is the reason why we refer to the Global Cybersecurity Index, where we measure the commitment of the countries and take a look at the different components of the institutional setup of the countries to deal with cybersecurity, and I believe that COVID, in fact, helps a lot in this context to raise the importance of cybersecurity. During this time, because of COVID, digital transformation accelerated worldwide, not only in the developed economies but also in the transition economies and the developing countries, and much more attention, but also commitment and support from the international community, is needed for those who would like to catch up as fast as possible but maybe don’t have the means. If we do not do this, they will become a source of possible threats. So let me also use this as an opportunity to express thanks to those already working with the international community to strengthen cybersecurity capacities worldwide, but also to call for collaboration in supporting subregional and regional actions in this field. So, over to you.

>> VLADIMIR RADUNOVIC: Thank you, thank you for the great overview. We’ll definitely get back to some of the particular solutions, if I can call them that, which the ITU is providing, including within the JCA.

I’m looking at the chat, and we’re basically moving to the second part of the discussion, which is challenges; all of you, including in the chat, have already marked some of the challenges that we can probably address. There are three tracks currently that I see. One is related to the role of the companies, if we can put it that way.

So, there is a comment by Matjia that the companies are a threat, or sort of a concern, as they’re driven by profits, and maybe even more so when the same persons own and control the companies, so that they control the production, evaluation, marketing of products and so on.

There is a reflection by Julia, who can probably comment later, who says she sees potential in civic tech that could be great for developing services and products.

And Marie mentioned that the risk with companies is that they have the means to shape the law to get what they decide or desire regarding user data; moreover, because the states have no way to build their own IT solutions, they depend on these companies, putting their citizens at risk.

There are another two topics: one is related to digital literacy, and we’ll definitely focus more on that; and there was a question on protection, but I’ll get back to that.

Any quick comments? Maybe from you, Julia, on the role and the responsibilities of companies in this regard? Julia?

>> JULIA SCHUETZE: Yeah. I definitely share the challenge that was mentioned, because some of the solutions that are created obviously have a profit focus and are not necessarily developed with the intention of being nonprofit or even of addressing societal challenges. And then, on the use of services and products by institutions, there can be a lock‑in effect: you’re basically reliant on that one company that you started with as a ministry, and if you want to add features, it creates extra cost, because you don’t have the development expertise in house. There is actually some interesting development in Germany right now. There is one project funded that’s called the Prototype Fund, which funds nonprofit‑oriented digital startups for six months, so it’s putting public money behind some software solutions, and there have already been some really interesting things, for example encrypted messengers, coming out of these projects.

And then Germany is also experimenting with a digital taskforce, where they’re trying to bring in developers and UX designers from companies to join the government for six months to help innovate internally and share their expertise. It’s called Tech for Germany, and I find these approaches very interesting because I also see that challenge.

And then to the point on media literacy: I do agree that training is obviously needed, but I also find that some things are made too hard, for example updating a browser. It shouldn’t take four clicks or a Google search to find out how to update a browser; I’m working in the field and I get the alerts on how to update your browser because there is a vulnerability, et cetera, and some things are just not made as easy as they could be. So I do agree, in general, that there should be more security knowledge, but at the same time, user‑friendly security is really key, and I think some products and services are not really living up to that standard yet.

>> VLADIMIR RADUNOVIC: Thank you. I know that Anastasiya wanted to reflect as well. Anastasiya?

>> ANASTASIYA KAZAKOVA: Yes. I really like the points, two points actually, from Julia, and the data from the polls that we saw regarding the responsibility that companies have in this regard. There is a lot of trust placed by users: users trust technology, and if they are told that it could be hacked, or that it’s secure, what is a user to do at that point? I think that there are many, many things that security vendors, service providers or IT developers do not properly explain to users, in a user‑friendly format and with user‑friendly communication, about how to use their products, and they’re not properly transparent about how the technology works, whether on the consumer side, the corporate side or the government side. So my thought here is that there should be greater transparency in that regard, along with security by design and by default, which is a major concept for ICT developers, to make the technologies truly secure for users. Because I really think it is important to confront the fact that it is not the ICTs themselves that are dangerous for users; it’s the way they are used that can have negative implications for society.

>> VLADIMIR RADUNOVIC: Thanks, Anastasiya. I wonder if anyone else wants to comment on that? There is another good question which was raised, let me see. It relates to the need for a standard level of protection afforded to all users, despite the varying levels and types of threats: what should that standard level of protection look like, one that would be efficient?

And I guess that’s, again, a question that could also relate to the regulatory environment; but in this case, it can also relate to the question of companies and the role of companies in providing some sort of efficient protection to everyone for their services. Any reflections, or does anyone want to take that question? Anastasiya, do you want to, as a company which actually provides solutions for that?

>> ANASTASIYA KAZAKOVA: Yes. It’s actually really not an easy question, I would say. I know that many, many teams in Kaspersky really dedicate a lot of time to educating the community, the broader community of Internet users regardless of age or gender, making a lot of good material public, and dedicating a lot of time not only to high‑level security investigations but also to reports at the basic level. In this regard, as I also mentioned in the chat, for the second year we have conducted a privacy report to track what the major security problems are that lead to data breaches and to users feeling less secure in cyberspace, in the Internet space, in terms of the protection of their personal data.

The second aspect is social ratings. The research actually reveals that not so many people know about social ratings, which already exist in some countries, and that social ratings already contain a lot of personal data that can actually identify users, which could also pose a lot of threats to them, so we are trying to increase awareness of this around the community. I can’t say whether there should be a particular standard for that, but I think we definitely should invest a lot as a company in building greater awareness and in cooperating with other parts of the community, civil society, academia, the technical community, and public institutions as well, to help find innovative solutions that will actually address real needs in society.

>> VLADIMIR RADUNOVIC: There are two other questions. Jaroslaw, did you want to?

>> JAROSLAW PONDER: Yes, yes. Just that I think it would be very challenging to develop a standard, a global standard on the level of protection for the industry and by the industry, unless it were a standard which is very generic.

But, you know, at least in the context of general protection, what we believe in is the model of the guidelines which have been developed by the industry and for the industry, going through a lot of components and providing a checklist for the providers and hardware producers of what should be integrated into their services and products to ensure safety and trust in the use of ICTs.

This might be a good way forward in realistically increasing safety in the digital space. That’s why we hope that with the new iteration of the guidelines for the industry, we will also be able to engage with all private stakeholders and the private sector to apply the guidelines and make sure that we have some sort of universal understanding, some international standard.

So, we would really encourage all private companies, also, to take this up and basically go through the checklist to see if they’re complying with the international standard. Yeah.

>> VLADIMIR RADUNOVIC: Thank you, Jaroslaw. As expected, we’re switching between challenges and solutions, and I will focus a little bit more on digital literacy, which was raised quite prominently here; but before that, there are two other questions related to the role of companies, with which we can probably wrap up this part of the discussion. One is for Luca, and I see Luca already responded in the chat, but I’ll give you the floor to elaborate. The question, or the comment, is that trust in sources on social media is important and interesting, so is there any solution to propose when it comes to addressing the lack of trust in sources on social media? The other one, commented by Henrick, is again about trust at a different level, in companies: not even the safest companies are currently able to authenticate themselves securely to consumers, so, apparently, consumers have no real alternative to blind trust, and the question is how do we get from blind trust to zero trust from consumers towards the companies?

So, Luca, probably you can reflect on the first question that I read, but you can also reflect on the second one or any of the current discussions on the companies. Luca?

>> LUCA ANTILLI: Great. Thanks very much. Yeah. I think, as we all know working in this area, particularly as a researcher, you just find paradoxes everywhere. Take social media: social media is now by far the number one source of daily news and information for UK users.

Yet at the same time, if you ask people, whether in a survey or qualitatively face‑to‑face, which platforms or sources they trust to get news and information, social media is always at the bottom. So it’s the least trusted source but consistently gets the most usage, so there is obviously an issue there.

And, you know, in terms of trust, is there a solution? I wish I had one, and I don’t, but as I’ve written here, I think it is a combination of things, and it’s not just about saying, by the way, that the platforms are the evil guys and users are innocent. In fact, human behavior, I would say, is one of the big contributors to the ongoing lack of trust in social media, because people like to share stuff, they always will, and some people don’t bother checking sources. Some people do it for fun, so it’s almost like innocent misinformation, and we have to accept that as well.

So that’s one side of it. One thing that’s interesting to note, as I put in the chat, is that when we do research amongst younger people using the Internet, who are new to this sort of thing and to the whole concept of having to be critical online, their etiquette on social media and their readiness to examine sources are actually better than their parents’. And we know that when it comes to the classic fake news stuff we see on social media, it’s actually as much older people, 55 to 60 plus, who are spreading the stuff as it is younger people, so there is an interesting generational thing there.

But it’s also true that I think platforms have something to do here, and again, just going to the evidence that we have: when people are asked whether they report things they’ve seen on social media apps or sites that they’re concerned about, about 45% do some kind of reporting of anything they come across. But of those people, only about 25% believe they know what happens next, so I think we need a bit of transparency in those reporting mechanisms, for platforms to be a bit more open: okay, you reported this concern, and it might be financial, it might be about data, it might be offensive material or whatever, but this is what happened to it, this is what we did. I think that way you start to build trust in those particular features as well, so it’s a mixture of things. I mean, obviously, when we come to the bigger notion of state‑sponsored social media – oh, man, I timed out again.

>> VLADIMIR RADUNOVIC: We can still hear you, but probably you drop there.

Okay. Moving on and thanks for that –

>> LUCA ANTILLI: I’m sorry. My final point is that social media is the biggest comms platform in the world, so of course bad actors, if there is such a thing as state‑sponsored ones, are going to go to social media because that’s the biggest platform, and we have to think about all of these things, and there are lots of different things to do, I’m sure. I’m sorry again for dropping off.

>> VLADIMIR RADUNOVIC: The problem is you always drop off at the most interesting point. Stop doing that. Stop doing that. (Laughing). Thanks. There are a couple more follow‑ups in the chat: what about strengthening the verification of official social media accounts, making it visually clearer which accounts are official? And I don’t know why I’m reading all of that, as you can see it or say it yourself when you take the mic, but very interesting statistics.

And then, for me, it’s interesting to observe that while in recent years we haven’t been discussing misinformation and content‑related issues as part of cybersecurity, more and more these aspects, misinformation and disinformation, are popping up within cybersecurity, which is quite an interesting paradigm shift. And I wonder if anyone has a quick reflection on Henrick’s question about moving from blind trust to zero trust from consumers towards companies? I don’t know if anyone wants to reflect on that one, on trust towards companies? Anastasiya?

>> ANASTASIYA KAZAKOVA: I think it would be good, on the definition side, to double‑check that we are speaking in the same terms, so I would ask what zero trust and blind trust actually imply in that context. That will definitely help to answer the question.

>> VLADIMIR RADUNOVIC: Can you offer any of those from your perspective?

>> ANASTASIYA KAZAKOVA: I don’t really like the term zero trust, because, and this is my personal belief here, we live in a society, we live in a world, where zero trust is a really tricky thing: you need to trust, or at least to have a minimum level of trust in what surrounds you, whether it’s people or technology or products. I am mostly for verified trust, so trust that is verified, and for a clear set of measures for consumers, for corporate users and private users, that let us as a company show that the product is verifiable and trustworthy. In that sense I do believe the companies definitely could do more to be more transparent, to report how the technology works, how data management is set up, how all the things are organized, how the companies handle users’ data. And the more transparent you are about this, the more security and the more trustworthiness you bring. So, my point would be this.

>> VLADIMIR RADUNOVIC: Thanks, Anastasiya. It’s worth mentioning that you’re also one of the partners in the Swiss‑driven Geneva Dialogue on Responsible Behaviour in Cyberspace, where the industry discusses things like transparency, trustworthiness and accountability and tries to link that with the global norms on responsible behavior, so stay tuned to the Geneva Dialogue.

Another question related to trust, from Matjia: one more thing connected to trust is the acceptance of general conditions of use. If you do not accept the conditions of use, the terms of use of the services, you simply cannot use the service. Do we have leverage to push the providers to allow limited access? This is especially important for children, if we manage to teach them critical thinking but being on social media becomes ever more important to them.

If anyone wants to reflect on that one, feel free to. Otherwise, in the meantime, I’ll also ask Henrick to maybe clarify the terms blind trust and zero trust in the context of trust towards companies.

And now coming back to digital literacy. Certainly, you can develop more discussion in the chat on these other tracks, but moving on to digital literacy, there were a couple of good points on that one, so I’ll read some of them back to you. Anastasiya said that it seems the users often think that it is the platforms’ responsibility to secure the users, but platform providers can only secure the platform; it could be difficult to secure and manage user behavior, and that’s where digital literacy comes into place.

And then Roberto said digital literacy is needed from the earliest stages in the schools. And Marcel asked how parents can be introduced to this important topic, and I think Jaroslaw mentioned the teachers in this sense. It was mentioned that parents are very important, and that we also find children aged 12 and above are increasingly important in educating their own parents in digital literacy; that’s interesting, the role of the kids in educating the parents and even the teachers, if we wish.

And then, I’m not sure who actually posted this one, about the problem not being with people, since most people know how to authenticate themselves and so on. So, any reflections on digital literacy? Luca, I think you were the one who, when we discussed this in the preparations for the session, mentioned a lot about media literacy, what it means in practice, how we could approach digital literacy and media literacy, and even what the difference is between the two. Luca, if you wish?

>> LUCA ANTILLI: Yeah. I think that point on schools and education is really interesting. Part of our role is to collaborate and reach out to different sorts of agencies and bodies in the UK, and obviously educational ones are very much part of that. And, again, we’re not officially involved in any sort of regulation of these things right now, but certainly from the work we’ve done, I think our general feeling is that currently children in schools are told what you must not do online to stay safe, and that’s it.

And I think children need a slightly more rounded introduction to digital communications, where it’s more about: this is how you can, you know, create things, this is how you can share with each other, this is how you can communicate, so the positives. Because I think, currently, there seems to be a them‑and‑us thing. There is a culture around social media use in particular, and we do quite a lot of qualitative work amongst children, and they tell us: I just want adults to understand what we’re doing and not laugh at it, and not think we’re just being either really dangerous or risky or silly with our silly memes and the rest of it; actually, this is our world and we want you to understand what we do. Telling us not to go there, we’re kind of aware of that, but give us a more rounded education and also let us share with you what we’re doing. So there is a danger of keeping them in the naughty‑bad‑behavior bubble. They’re seeing stuff all the time, and there are sorts of things that children often inadvertently come across on fairly everyday apps such as Snapchat, Instagram and TikTok; some of the things that they see without looking for them are very worrying for them, and, you know, they don’t always know if they can tell someone about it, because they think they’ll be blamed or told off for seeing the stuff. So, there are a lot of very sensitive issues there; and again, I’m not an educator, but we are very close to that part of this whole thing. I’ll stop there, but that’s just one thing that, certainly amongst children, is really, really key.

>> VLADIMIR RADUNOVIC: Thanks, Luca. I see your hand, Julia, in a second. It’s quite an important point that the kids, in particular, might actually be fed up with someone telling them what to do and what not to do without actually telling them why, and without understanding. There was a paper that we did some time ago at DiploFoundation trying to outline the competencies of digital literacy, which go way beyond understanding what is secure and not secure, into critical thinking, into our responsibility as users and citizens if you wish, and into understanding even Internet governance as a condition of the digital age we’re living in and the consequences of that. It’s a broad collection of things that falls under digital literacy, alongside even the simple things of being more safe. Julia?

>> JULIA SCHUETZE: I would just like to comment on the security aspect of how to behave in a secure way, and there I would also like to point out the role of companies in increasing the awareness of working adults, by having routine security established in companies: for example, regular phishing simulation campaigns that make people really understand what a phishing email looks like, but then also routine responses and trainings, like you train for a fire, also train for what happens if there is a cyber incident, so that at least the stakeholders who are responsible for reacting know what to do. I think there is definitely huge potential in doing that. And then there is also the private life, because with the home office we are more and more interconnected, but not all companies have a clear policy that you should not use any of your private devices for work. There was recently a good article that showed how risky that can be in general, about the use of Untappd, a beer app where you check in a beer, take a picture and say “checked in”. The privacy policies are very good and you can make the profile private, and obviously you shouldn’t share pictures with any confidential information from your company on them; but it has happened that military and CIA personnel actually posted a lot of critical location data on there that was public, and posted pictures with military planes in the background. So companies can obviously do a lot with their policies, but at the same time, yeah, you need actual training and an everyday routine of security, in some way, for adults as well.

>> VLADIMIR RADUNOVIC: Thanks, Julia. It’s an important point; it’s not just the kids, and we usually think about the kids. Now, I’ll get back to more of this in a second with Jaroslaw. There are a lot of comments, and I don’t know why you guys are not taking the mic, but certainly there are a lot of questions, and certainly my voice is not the best one. There are interesting stats on age and the spreading of fake facts, Luca, and I believe it’s because the older users might not be well versed in what differentiates fake news from authentic news online, which goes back to the literacy of the adults as well.

And then Rosalia shared a link, Digital EYDU, and Andrea says it’s interesting because digital literacy is more a social phenomenon than the result of any deliberate program of action, which is an interesting point. There is also a comment for Jaroslaw to check, on understanding trust and safety online and understanding the concepts of the online environment. And it was mentioned, I think, that the reason why children can teach adults about digital literacy is that children take a trial‑and‑error approach to the Internet and digital media and therefore learn quickly, and the important thing is to document the knowledge gained and pass it on to future generations to prevent mistakes and uncertainties.

Rosalia continues: schools are important in bridging the literacy gap, and COVID is putting disconnected families in a difficult position, which is a really interesting point, as is whether the current school system supports, and can support, these parameters. I won’t read Luca’s comment; he’ll connect afterwards. And Wolfgang mentioned that there is a book, Growing Up Digital, that the children of yesterday are the teachers of today, and that all the proposals, also in the UN Secretary‑General’s Roadmap issued yesterday, are true; but unfortunately, it takes time and money to get from recognition to implementation, and thank you, Annriette, for the choice. And then: though children pick up technology really fast, there is a need to teach them, and everyone else, about it. And some French schools have incorporated critical thinking about social media as part of the curriculum. Quite a few different thoughts; Jaroslaw, I don’t know where you want to start from.

>> JAROSLAW PONDER: A lot of angles, but let me recall our current experience. We are just now starting the work with some South‑Eastern European countries on digital skills, which also encompasses digital literacy and its three big chunks, from the basic to the advanced. And once we start the discussion, even at the country level, we see that there is a very different understanding of how the development of digital skills should be approached in a country, addressing on the one hand the needs of the gigabit society, but on the other hand also the needs of simple users as consumers of digital goods and services and other things. And we notice that, for example in the Western Balkans, Serbia currently has a digital skills strategy which is very much cross‑sectoral and incorporates the interest of the users, but also the interest and potential of the industry, with a good understanding of where the challenges stand.

In other countries, when we start the conversation, there is still a process of establishing why we should have one strategy like that and why it is so instrumental. Is it not only an issue of education? It is not, because a cross-sectoral approach is needed, involving not only the public sector but also the private sector, to make sure that the digital literacy provided to children at a very early age can shape them and prepare them for the future, taking into account issues of privacy, of trust, and many, many other issues, so that they can be confident in the use of ICT, but more importantly so that they can actually produce digital goods in this space.

So we are looking forward to this journey and to advancing it. It is not a simple task; as you have seen in the chat, we already have a lot of different proposals, and that is within the community of ICT professionals. Once we bring this conversation to the broader public, the number of angles becomes even bigger. So I would use this opportunity to encourage those who are interested in working on these aspects to join us in our efforts to support the economies in transition, the European ones in particular, and to make sure that the online protection aspects are also taken into account.

>> VLADIMIR RADUNOVIC: Thank you, Jaroslaw, again a lot of interesting points. I wonder whether one of the good questions is whether the current school system is good enough, whether it is the right place to embed more on digital literacy, and whether any of you has good experiences with embedding digital literacy in the current school or vocational system. Julia, do you have any thoughts from the community and user perspective on how to approach digital literacy? What could be good practices?

>> JULIA SCHUETZE: I can think of good practices in Germany just off the top of my head, because this, not only media literacy, is something two colleagues of mine are working on extensively, also building persona profiles of the things you should know, like the different aspects you already mentioned that are necessary to know.

But what has worked well is a programme, it's called (?), from the computer club; it is a nonprofit in Germany with volunteers all over the country who go into schools and practically mentor young children to try out hacking, while also teaching hacker ethics with it. So they are learning IT skills and, at the same time, the hacker ethics, and I think this is a good example.

What I would have wished for in school would be more interdisciplinary projects: teach media literacy in IT class, but also teach a little bit of the engineering part, maybe try to imagine some products that could be built and what they would be for, and include it in political science too. It is a broad topic that should be taught in an interdisciplinary way, because in the end these things are interdisciplinary now; in companies, but also in government, they should be approached in an interdisciplinary way. So maybe some more pilot projects where kids can get engaged.

>> VLADIMIR RADUNOVIC: I think this example of using hackathons, or a sort of light hacking approach, so that kids have fun but at the same time learn about ethics and behavior online and practice the substance, is quite an interesting one.

We have a few more minutes. Are there any comments from any of you on digital literacy to wrap up this part of the discussion, before we try to conclude somehow? Anyone who wants to jump in on digital literacy?

>> LUCA ANTILLI: Yeah, hi, Luca here. I would just say that we haven't cracked it yet. We don't have the answer, and I think that is clear from all the comments coming through and all the different experiences across Europe and the world: we are all really thinking hard about how to make this happen. It is such a big, fluid issue, covering everything from the early ages all the way through society, so that is maybe a bit of a depressing thought, and we are no closer to the perfect package of solutions than any other country or body, I'm sure. But it is really interesting to hear what you are saying about where the risks are and what we really need to think about more carefully, and we are seeing exactly the same debate in the UK, so for me it is very encouraging that the same discussions are being had. I just wish we could all get closer to a solution more quickly, because there are unfortunately people already suffering in different ways as a result of harm online, and we want to get to it and try to fix it as quickly as possible; but it is going to be quite a long journey, I fear.

>> VLADIMIR RADUNOVIC: And, unfortunately, the pace of technology development doesn't help us.

>> LUCA ANTILLI: Definitely.

>> VLADIMIR RADUNOVIC: It outpaces everything we have tried so far, so we have to devise some new approaches. It could be both challenging and comforting, coming back to Anriette's point that media and digital literacy is more a social problem than a deliberate-action problem. Reflecting on Julia's points, it is actually the informal settings alongside classes, like hacker spaces and hackathons, that help a lot. Anriette added: I think we need a more political approach, and it should be as much about politics and society as about technology; for example, teaching critical thinking, which is pre-digital and exists for reasons that are not digital, but digital literacy has to include it.

Marie says: especially since hackers have a cool aura in the eyes of many kids nowadays, with Anonymous in pop culture and Mr. Robot, et cetera. And Toni mentioned that the Council of Europe does a lot of media literacy work using a rights-based approach. Thanks for that.

And Anriette says sex education needs to include Internet pornography, and how children and teens need to understand and analyze it. We are coming close to the end of the discussion, so I will pass the floor to each of you, but literally like a tweet, 260 characters, and then we will hear from Andrijana to sum up the messages. So, who should we start with? Should we go the other way around? I'll start with Luca, then Jaroslaw, then Anastasiya, and then Julia. Luca, your tweet?

>> LUCA ANTILLI: Kind of what I just said, really: we have a long way to go, but we are slowly unearthing the very complex and nuanced contributing factors to the issue of digital literacy, and we need to keep working at it.

>> VLADIMIR RADUNOVIC: Thanks. Optimistic, at least. Jaroslaw?

>> JAROSLAW PONDER: Yes. From our side, I think we see that the pandemic was useful in highlighting and strengthening the importance of the work on cyber, and the fact that we are becoming really dependent on the digital leaves no other choice than to dedicate all efforts to making progress in this area; not only in the policy discussion but also in implementation, and in supporting those countries which require help from those that are already much more advanced in this area.

>> VLADIMIR RADUNOVIC: Thanks a lot. Anastasiya?

>> ANASTASIYA KAZAKOVA: I would say that we definitely need to move from the zero trust that has been mentioned today to greater transparency in cyberspace and the Internet space around all the actors, what they do and what their intentions and motivations in cyberspace are, and to more verifiable trust, meaning trust where you provide a clear set of measures showing how you actually make your technologies and products trustworthy, and thus explain to users all the implications that may come with the use of this technology.

Also, though it may sound trivial, dialogue plays a really important role, at the municipal level, the state level, and the global level, and especially during these interesting times with the UN global discussions, in order to hear again what other parts of the world actually need in terms of protection, cyberprotection, and ensuring that users are also actually protected in that sphere.

>> VLADIMIR RADUNOVIC: Thanks, Anastasiya. We did accept the microblog rather than the tweet format. Julia?

>> JULIA SCHUETZE: I wrote down: this session gave me energy to focus more on practical solutions that make users more secure.

>> VLADIMIR RADUNOVIC: Ah! I was expecting that you would just say "retweet", but that's great. Thanks a lot. Back to our Reporter, Andrijana, to come up with the messages. The floor is yours.

>> ANDRIJANA GAVRILOVIC: Thank you for the floor. And no thanks for the fact that you left me absolutely no time, as we are already running late. I am with the Geneva Internet Platform, as Elisabeth kindly mentioned, and I will present the three messages that I prepared during this discussion. Don't worry if something is not in there, because it will be in the report, which will be available on watch/EuroDIG 2020.

The first message is: there is a need for stronger digital literacy, particularly for children, parents, and teachers, and for the elderly who were forced to become part of the digital society because of the pandemic. It should be interdisciplinary, and users should be aware of risks and taught to think critically and to distinguish between safe and unsafe practices. If you have any strong objections to this message, please write them in the chat. Elisabeth, in the meantime, maybe we can go to the next message. Or the next slide. Thank you.

Security needs to be more user‑friendly and to that end, ICT providers need to provide greater transparency around their practices, especially regarding the implementation of security by design and security by default.

Again, if there are any strong objections, please raise them in the chat. Next slide, please, Elisabeth. Thank you.

Companies should implement policies that will raise users' trust in them. They should be more transparent about how data management is set up, how they handle users' data, and how the mechanisms for reporting inappropriate content on social media platforms work. That would be it from me. A kind reminder that this is not the final form of the messages; you will be more than welcome to comment on them, and EuroDIG will provide more details on the commenting platform. Thank you for the floor. Vlad, over to you.

>> VLADIMIR RADUNOVIC: Thank you, Andrijana. I saw Julia raised a hand and I don’t know if you want to jump in quickly.

>> JULIA SCHUETZE: On the last part, could you add reporting vulnerabilities as well, if no one else objects?

>> VLADIMIR RADUNOVIC: Makes sense.

>> ANASTASIYA KAZAKOVA: I strongly agree with you Julia, on that.

>> VLADIMIR RADUNOVIC: Makes sense. We didn't touch on that very much; we would need a special session on vulnerabilities and disclosure, but it would be very useful. Thanks a lot for the inputs. A kind reminder, and I think Elisabeth already posted it in the chat, that there is a forum at EuroDIG, so you can switch back to the forum and continue these discussions. There were a lot of interesting inputs, and I certainly hope to see you again, hopefully next year. In the meantime, grab your coffee, relax, and move to the next session. Thank you for joining today, and thank you for coming. Bye-bye.

>> ELISABETH SCHAUERMANN: Thank you, Vladimir, and everyone here in the room and also watching on YouTube. We are actually keeping Studio Berlin open as a networking space for everyone who is interested, so you can come join in; there will be a few fun Mentimeters to participate in, you can chat, and we can also check whether sound and video work. If that is something you want to do, I will be here to assist you. In the meantime, have a nice lunch break; the next session here will start at 2:30 p.m. See you then. Bye.