Criminal justice in cyberspace – what’s next? – WS 07 2020

From EuroDIG Wiki

11 June 2020 | 14:30-16:00 | Studio Berlin | Video recording | Transcript | Forum
Consolidated programme 2020 overview / Day 1

Proposals: #46, #47, #48, #51, #92 (#68)

Session teaser

This session focuses on three main aspects of criminal justice in cyberspace: regulatory fragmentation, cooperation of LEAs with CSIRTs and service providers, and human rights implications of cybercrime investigations. Due to the pandemic crisis, a fourth important element of this session will focus on the impact of COVID-19 on cybercrime.

Session description

The session will address current debates related to criminal justice in cyberspace in four main areas:

Regulation: This part will focus on the regulatory efforts to address the issues of crime online. In terms of criminal justice, this would include some of the recent initiatives of the Council of Europe and the UN to enhance the current framework applicable to cybercrime – namely, the Second Additional Protocol to the Budapest Convention on Cybercrime and the UN treaty proposal, which has led to many debates on the fragmentation of the fight against cybercrime. Furthermore, the focus can be broadened with the inclusion of the EU's work on addressing the challenges of online harms and crime, such as the E-Evidence proposals and EU initiatives to tackle illegal and harmful online content.

Cooperation: Following the scenario of a fragmented regulation, further impact on the cooperation of LEAs with CSIRTs and service providers on access to electronic evidence in the cloud and trans-border access to data is envisaged.

Human rights: As many of the initiatives that address illegal use of cyberspace raise concerns related to fundamental rights and due process, human rights issues will be an integral part of this discussion. The discussion related to human rights aims to include a special emphasis on the new tools, such as the use of AI facial recognition technology and automated criminal profiling. It will also tackle how technology/AI/Machine Learning/Natural Language Processing can support law enforcement now and in the future and possible ethical and legal aspects of the use of AI for these purposes.

COVID-19 and cybercrime: The session will try to approach the topic of criminal justice and the use of cyberspace for malicious purposes in the context of the current crisis, assuming that it is more important than ever to strengthen criminal justice and other regulatory responses, in order to prevent cybercrime while respecting fundamental rights and the rule of law.

Format

The workshop will consist of short presentations from the key participants followed by Q&As with the workshop participants in the live chat. The workshop will also include polls as icebreakers and to keep the audience active. One moderator will facilitate the dialogue among key participants and a second moderator will facilitate the active participation of the audience.

Further reading

People


Focal Point

  • Desara Dushi

Organising Team (Org Team)

  • Nertil Berdufi
  • Irina Drexler
  • Sofia Badari
  • Debora Cerro Fernandez
  • Gratiela Dumitrescu
  • Fotjon Kosta

Key Participants

  • Marina Kaljurand – Member of European Parliament
  • Christian Berg – Founder of Safer Society Group, CEO Paliscope
  • Markko Kunnapu – legal adviser in the Ministry of Justice of Estonia and member of the Bureau of the CoE Cybercrime Convention Committee (T-CY)
  • Pavel Gladyshev – University College Dublin

Moderator

  • Tatiana Tropina
  • Ceren Unal

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Reporter


Messages

  • Finding the right balance between the control of online content and upholding fundamental rights will remain an important challenge. Given that most incidents are occurring across borders and that there is no common definition of crime and terrorism, co-operation between states (and with the private sector) on such matters is crucial.
  • The use of artificial intelligence by law enforcement provides a big opportunity but must be explored diligently because it requires vast amounts of resources as well as an advanced understanding of the technology. It should however not be implemented without human oversight.
  • The flurry of activity to create new norms that deal with cybercrime bears the risk of increasing legal fragmentation as well as only finding agreements on minimum standards. It is thus important to avoid falling below already existing standards such as the Budapest Convention.
  • Due to increasing levels of encryption and anonymisation by cyber criminals, alternatives must be found in terms of upholding privacy protections while allowing law enforcement to protect users online (and offline).


Find an independent report of the session from the Geneva Internet Platform Digital Watch Observatory at https://dig.watch/resources/criminal-justice-cyberspace-whats-next.

Video record

https://youtu.be/qV5EFUzF6Rs?t=12230

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> TATIANA TROPINA: Welcome to everyone who has joined.

For all of you who have joined, please, please, raise your hand and write your questions in the chat. No stone is going to be left unturned: we are going to answer all of your questions, collect all of your interventions and give you the room to speak.

>> ELIZABETH SCHAUERMANN: All right. Hello, everyone in the room and on the livestream. We are resuming our activity here in Studio Berlin. My name is Elizabeth Schauermann, I’m the studio host, and we are here from the German Informatics Society to keep the session up and running.

We will remind you of the code of conduct. All of those in the Zoom room, please identify yourself. You can change your name yourself; if you struggle, reach out to me. When you want to ask a question or make a comment, which should be the case, because this will be a very interactive session, you can raise your hand and then the moderators will unmute you so you can make your statement. Before you do, please say your full name and your affiliation out loud and then continue with your question or statement. Contributions can also be made in writing in the chat and in the forum. We will try to follow those and include them in the session discussion.

One important thing, there are some rooms that you are in, all of the Zoom rooms here at EuroDIG are for registered participants only. Please do not share the session links with anyone on the outside.

If anyone repeatedly fails to comply with the code of conduct, we might need to remove said person, but I’m sure that will not be a problem.

One other important note: I’m happy to announce we are partnering with the Geneva Internet Platform again this year. They are present in all sessions, recording the EuroDIG 2020 messages, and for this session it is Cedric Amon who will be here for us in this capacity. In the last five minutes of the session, he will present the main points that we make during the discussion.

And with this, I would like to close my opening remarks and open the session, workshop 7, “Criminal Justice In Cyberspace, What’s Next?”

Moderated by Tatiana Tropina and Ceren Unal.

>> TATIANA TROPINA: Good morning. I see Marina in the room. Can we make her a co-host? Thank you. It’s nice to see everyone who has joined this workshop, and we hope we will have a great discussion.

A bit of an introduction to this workshop. If you attended the previous sessions related to cybersecurity at EuroDIG a year ago, you have probably heard all of these discussions that staying home during COVID actually doesn’t make us safer. It makes us much more vulnerable to cybersecurity threats, to crime, and to other online harms and misdeeds. And while we mainly want to talk about how criminal justice addresses these harms, we want to make this discussion much broader, and to talk not only about criminal justice but about other instruments related to tackling the problem of illegal acts online, like, let’s say, terrorist content and other illegal and harmful content. So we are taking it from a broader perspective, but still building on what we discussed at the previous EuroDIG in The Hague last year.

Those of you who did not participate in this discussion, you can always go on and revisit the transcripts and the videos online.

So let’s just take a broad approach and discuss not only criminal justice itself, but the instruments which complement it, and challenges related not only to laws and regulation, but to how criminal justice and other instruments can tackle online harms. Before I hand it over to my co-moderator Ceren Unal to introduce the panel, I would like to thank the studio host and the reporter from the Geneva Internet Platform, Cedric Amon. We will be sure to save five to seven minutes at the end of this session to agree on the messages from this session. Thanks also to Tracy, the captioner, who is going to caption this session and provide the transcript.

And with this, I will hand it over to Ceren to introduce the panel and kick this discussion off.

>> CEREN UNAL: Thank you very much, Tatiana. Yes, this will be really exciting. We have an excellent panel. We have Marina Kaljurand, member of European Parliament. You probably already know her. We have Pavel Gladyshev, who is an Associate Professor at University College Dublin and has huge experience in digital forensics. We have Markko Kunnapu, who is working with the Ministry of Justice of Estonia on cybercrime laws, with the Council of Europe and many others. And then we have Mr. Christian Berg, who has many hats and currently serves as the CEO of Safer Society Group and Paliscope. He has a background with NetClean, which detects child abuse material on corporate devices.

After this introduction and without further ado, I would like to start the discussion to set the scene. I would like to ask our distinguished panelists what they see as the major challenges related to the criminal use of cyberspace and criminal justice that could influence our response to crime and other online harms.

So in your view, based on your expertise, from your own perspective, whether technical or policy, what are the most important aspects related to our session’s title, “What’s next?” – of course, some current challenges and how they would inform our responses. And before going to the panelists, I would like to remind the audience to just use the chat if they have questions or comments; Tatiana will be monitoring. I would like to start with Marina in this first round. Thank you. The floor is yours.

>> MARINA KALJURAND: Well, thank you. I feel as a newcomer to this group because I’m not somebody who has been dealing with criminal justice on a daily basis, but as Tatiana said, I will be happy maybe to bring in a couple of thoughts from the wider perspective. Here in the European Parliament, there is terrorist content online and the digital services market. And where do I see the problems?

When we talk about terrorist content, when we talk about illegal content, there’s no question. There’s no place for terrorist content or illegal content online, full stop.

There are no doubts. It has to be taken down. It has to – all the measures have to be taken so that it’s not put up, no questions. Everybody understands that.

But the question for me is: Where is the right balance, so that we have control of the content online but we do not violate fundamental rights, we do not violate human rights? Because in many cases, when I talk about terrorist content, we have different laws, we have different legal systems. And at the moment, at least if I speak from the perspective of the EU, we are not in a position to introduce one unique, united legal system or criminal law system.

There are some things that are crimes in Estonia and not crimes in other countries. There is some terrorist content which might be terrorist – I don’t know, in Hungary, Poland, in Spain, but not in the Baltics. So we have differences.

One question is how do we not violate human rights or fundamental rights when we fight against terrorist content, and the other question is how to do it in a way where we have different legal systems, different jurisdictions, and at the moment we are not in a position to create in the EU one united, unique criminal jurisdiction. We still respect the jurisdictions of different countries. We can’t, for example, give removal orders to the jurisdiction of another country, because they are just null and void in another country if it’s not a crime there. So it’s a bunch of questions, and I’m happy to raise them and, during the discussion, also happy to reflect on them in more detail.

Thank you.

>> CEREN UNAL: Thank you very much, Marina. There are major challenges regarding harmonization and cooperation, and how to achieve more of both will be critical.

I would like to continue with Pavel. Thank you, Pavel. The floor is yours.

>> PAVEL GLADYSHEV: Thank you very much. So what I would like to discuss today are three issues. They are slightly different, but they are related in a way. Again, I will just briefly outline them now.

The first issue, which is on the mind of everyone working in law enforcement these days, is the use of encryption by criminals. To start with, I think everyone, or most European citizens, are nowadays acutely aware of the ease with which children can procure drugs online. All it takes is the conversion of some physical money into bitcoins, downloading and installing a browser from the Internet, navigating to one of the advertised online illegal drugstores and ordering drugs, and getting these drugs delivered online – sorry, delivered via post.

And this situation cannot continue. It’s just one of the examples of how criminals misuse encryption, be it ransomware or other forms of criminality, but the fundamental challenge here is, I think, the disconnect between policymakers and technology developers, which I will elaborate on in my talk. That’s one issue: how do we deal with encryption and anonymisation services.

The second issue is security of software. The fact of the matter is that the modern information technology industry cannot produce perfectly secure systems, or at least cannot produce them cheaply. And companies are not incentivized to produce secure software. I think this is something where regulation can play a role.

And the third question I would like to touch upon is the use of artificial intelligence for law enforcement purposes. Clearly, it’s a new technology, which has potential, which has not been fully explored, but my perception is that it has some inherent problems which will reduce its usefulness to the law enforcement community, and also will probably diminish the fears of the general population that this technology can be used for blanket surveillance purposes.

>> CEREN UNAL: Thank you very much, Pavel. Now, I would like to turn to Christian. What is your list of main challenges?

>> CHRISTIAN BERG: Yes, hi. I come at this from two different angles, I think. One is the online side, what to remove online and what not, working a lot on child abuse investigations. Child abuse material is bad material online, which we can agree on globally. Terrorist content is a little bit selective; it depends on where you are. The problem is we don’t want Facebook and all the other social media companies to be forced to remove content. They will protect themselves and introduce censorship. When you are talking about child abuse, organizations like Facebook are required to report when they find something online. And, of course, to not miss anything, they are reporting everything, and they would report on potential child abuse content. I think the question is valid: how do you avoid pushing this into censorship?

And I would also agree with the previous speaker. I think AI is a big topic when it comes to law enforcement and how they are going to approach the data, because we can see the data has grown exponentially. And the intelligence has also grown, so law enforcement is getting more and more intelligence to think about. Here there’s a big, big topic that needs to be discussed: how can we apply AI to an investigation in a good way, not just putting AI on top of existing solutions, because I think that just creates problems. I think we need to think from the beginning about where we incorporate AI in the workflow and incorporate it in the process. It is going to find more evidence, and some of it will be correct and some not correct.

So I think those are two main topics for me.

>> CEREN UNAL: Thank you very much, Christian. Now I would like to turn to Markko for the list of main challenges that he would like to address. Thank you, Markko.

>> MARKKO KUNNAPU: Thank you very much. Actually, when it comes to problems and challenges related to the criminal justice response, you could have a very long list, but I think I could just focus on the most important ones. First, it’s legislation and minimum standards. It was already mentioned that if a behavior or conduct is a criminal offense in one country, it’s not always the case that it would be a criminal offense in another country. And then, actually, a series of problems would be raised concerning the effectiveness of international cooperation related to criminal investigations, and then many others as well.

There are also problems related to capacity, including technical capacity. First, you need to have dedicated units in place, competent authorities. But one of my fellow panelists already mentioned the challenges related to encryption and the use of anonymisation services; in order to follow leads and then to gather evidence, you know, to continue with the investigations, you need a vast amount of resources, and it’s often the case that law enforcement authorities don’t have all of this.

And then, last but not least, international cooperation. For most of the incidents, most of the cybercrime and cyber-related offenses, you need international cooperation in order to solve the case. The evidence is somewhere abroad, somewhere in cyberspace, somewhere in the cloud. In order to get this information, first you need a legal basis, and then you need competent authorities, the units and the cooperation channels in place.

When it comes to legal basis, it’s often the case that you cannot use it with all the countries in the world. There are some countries who may cooperate with you; there are countries who don’t cooperate, and then, again, additional challenges arise. So I think I would stick to those three points at this point.

>> CEREN UNAL: Thank you very much, Markko. Thank you to all the speakers for setting the scene with a variety of issues, but I think the main messages were how to balance the rule of law and fundamental rights, and also how to achieve effective, efficient cooperation in all of those matters. So let’s dive deeper into the topics.

I would like to ask Tatiana to check whether there are interventions from the audience.

>> TATIANA TROPINA: Except for me sending the wrong messages in Russian, there are no interventions which really saddens me, because I see some familiar names on the chat, but I hope that we are just warming up and setting the scene and we will have more interventions.

My summary from here is basically more or less the same as yours, Ceren. First of all, we have policy and regulatory frameworks that have to respect borders but at the same time have to bring our countries to cooperate. However, there are new technologies and new technological challenges, and these frameworks sometimes simply don’t catch up with those.

And I think this sets the scene for what we actually planned for this session, because we started this session with the structure in mind that we will discuss regulatory fragmentation and regulatory approaches, and then new challenges, including technological challenges such as artificial intelligence, and human rights. I can certainly see that all of our speakers are very much capable of talking about this, and we have already identified some of the core issues in their views.

First, I would like to move us to the part, which is devoted to regulatory challenges, and I would like to build my question up on what Markko and Marina said, and also upon the general knowledge.

So we have these different standards being developed, and they do have to respect the borders and the jurisdictions of countries. They have to respect different legal orders, and at the same time they have to respect human rights. There is a big concern right now that there is a flurry of activity related to the creation of new legal measures to fight cybercrime. We know that right now the UN, for example, is jumping on this wagon, on this train, to actually deal with cybercrime and come up with what we call a universal solution, and there are some concerns that this could, for example, lower the standards set by the Council of Europe.

I wanted to ask Markko and Marina, of course, and then the other speakers. Do you think we can call it regulatory fragmentation? Do you think it can lower the standards and safeguards for human rights protection that we have? How do we set the standards? How do we preserve them? But basically, yes, is there a risk of fragmentation, and how are we trying to avoid it?

Or are we living in the age of fragmentation already? Because Marina pointed to different legal standards. Markko pointed to different legal standards and I will start with Markko. Markko, what are your thoughts about this, about going from global to European level? What’s going on in the UN? What is going on in Europe and how do you remedy this?

>> MARKKO KUNNAPU: Yes, thank you, and the short answer to your question would be that, yes, there is already fragmentation in place. Unfortunately, I think that the processes that are taking place right now could even lead to further fragmentation. Right now, I think it’s quite an interesting time. So many different international organizations are involved in drafting new instruments: the European Union with the E-Evidence proposal, the Council of Europe Cybercrime Convention Committee with the negotiations on the Second Additional Protocol. And the UN has also started to take action, and there will be a new ad hoc committee and probably a new cybercrime convention as well.

The question is, again, what could be the basis? And then what would be the minimum standards? When we talk about the Budapest Convention, this has almost global coverage. Most of the UN member states have legislation in place which is to a large extent in compliance with the Convention. Many countries all over the world who haven’t ratified or acceded are using the Budapest Convention. So we have certain standards in place. And yes, the question is, what would happen if we have new standards? What would be the minimum? As regards the future discussions at the United Nations level, I’m a bit skeptical whether we could achieve similar standards to those we have at the European Union level or at the Council of Europe level; it may be the case that the standards will be lower.

If the threshold for different procedural measures will be lower, if there will be fewer conditions and safeguards and less respect for human rights and privacy, then I think that would be a big problem. We already have fragmentation, and the fight against cybercrime is also a very sensitive issue; we have seen already how divided countries can be. There is already a polarization, and if the UN cannot agree on similar standards – if the standards will be lower, or different from what most of the countries are using or having right now – then it would have a negative impact on the fight against not only cybercrime but any crime that has electronic evidence, and that would have a negative impact on international cooperation as well.

>> TATIANA TROPINA: Thank you very much, Markko. Just a follow-up question from me, because I haven’t seen anything in the chat yet. And I understand that it is a bit of a specialized discussion, which we are going to broaden, no worries. Markko, I want to ask you about the semantics of the words “minimum standards”. Are we talking about minimum, minimum standards? Does it make us think that we have to come to a bare minimum? Because to me, what the Council of Europe is doing is quite a high standard. Just as a thought, do you think that we can actually change this wording and rather talk about high standards established by Europe versus lowering the standards?

Because even the language “minimum standards” makes me worried.

>> MARKKO KUNNAPU: Right now, when I was referring to minimum standards, I just took the Budapest Convention as a basis, which is right now the international instrument being used by many, many countries in the world. In case new standards will be lower – whether in terms of substantive law and criminal offenses, or in terms of procedural law and conditions and safeguards – it would be a problem. It would be a huge challenge for those countries who already have their own standards in place, because I don’t think that the governments, or the people in different countries, all of these civil society organizations, would agree on just lowering the standards in order to cooperate with particular countries.

So I think that would be a problem. And it’s not only about carrying out investigations; you also need to respect human rights and privacy. For example, when we just talk about European Union legislation, all of those countries who are members of the European Union are also bound by EU rules and regulations, and going below EU regulation – I don’t think that this would work.

>> TATIANA TROPINA: Thank you very much, Markko. And just a caveat: when I was speaking about minimum standards, I wasn’t referring to you. I was referring rather to the debate at the UN, when they talk about minimum standards. And before I move to Marina, who can probably talk a bit more about how the standards can get divided at the European Union level, which she spoke about before, I would like to go to our audience, who are equal participants here, I hope. Lousewies van der Laan has her hand up. You can just speak. The floor is yours.

Thank you.

So I have not seen Lousewies yet. I would like to ask the – ah yes, you can’t unmute yourself because the host –

>> CEREN UNAL: Now she’s unmuted. Unmute again.

>> TATIANA TROPINA: One moment.

>> CEREN UNAL: Okay. Okay.

>> LOUSEWIES VAN DER LAAN: Yes? Okay. Sorry. I was trying to press all kinds of buttons, and then my Internet connection was also unstable. Hi everyone, and first of all, congratulations to EuroDIG for pulling this off online. It’s great to see everybody. I’m really appreciative of this subject and all the very qualified speakers here, and I just wanted to mention a couple of things to make life even more complicated.

Not that it hasn’t been said, but I think I can say it less diplomatically. The first is that, of course, governments are not always the good guys, and since criminal prosecutions and law enforcement are done by governments, as long as we can’t – even in Europe – have consensus on whether the governments are the good guys, it will be extraordinarily difficult to resolve. I think Marina has suggested something in that direction.

Something might not make you a terrorist in Holland but does in Spain. And what do we do with that person in Amsterdam? That’s a small example.

In Europe, we can’t get our act together around that; can you imagine trying to do that around the world?

The second is that I think there’s not a lot of consensus about what constitutes a crime and if we can’t even agree on terrorism and in some places you can’t even agree on child porn, how are we ever going to agree on selling things like drugs and other things?

I think this is another one where I would rather err on the side of caution. You know, the last thing we need is to give all kinds of power and technological capability to governments we don’t trust, to prosecute crimes which may not be crimes everywhere. And so as much as we all agree that certain crimes need to be prosecuted, I really think that law enforcement and also politics should get together and do that, and not keep on saying they need more power, more technology, and less encryption and all of these kinds of things, because then we really have a huge risk of throwing out our human rights and our privacy with the bathwater.

Now, there is a bigger issue here. I think it’s up to Europe to set the standard. I hope to not insult anybody when I simplify it, but in the US, data is being monetized by big companies without any regard to privacy. They have a completely different concept of this. And in places like China, you have personal data being used by the government for tracking and giving social credits. I’m making a very black and white simplistic perspective, but I think you guys understand what I’m trying to say here.

In Europe, we are trying to do something different. We have GDPR, which may not be perfect, but it’s an attempt to do something. We are trying to regulate. And Facebook is saying, we are going to be regulated, let’s talk about it together. Europe is doing something quite spectacular, and we can be really proud of that, but it brings a huge responsibility to get it right and to make sure of it, because what we are going to do will also be an example for others. It sets the standard in the way that GDPR has influenced businesses around the world.

And I want it to be not just Europe, the EU, but really beyond that: include Russia and the whole region, include any other countries who feel the same way. I don’t think it’s a geographic thing. I think anyone who cares about human rights, privacy and human dignity, and about not giving governments control over our lives, should be on this.

Now, so that’s one point, and the other point I want to make is that we have to realize that the more bad stuff that happens on the Internet, the higher the pressure on politicians to legislate, regulate, and intervene. And so unless we actually start addressing these issues, be they child abuse, be they terrorist content or all the other things that people are scared by, we are creating political pressure for legislators to intervene on the Internet, on the technical layer, with legislation. Legislators – and I can say this openly, because I used to be one of them and some of them are my very good friends – don’t understand this. Marina is nodding, because there are so few people who actually understand what it’s like, and we don’t have enough capable people to actually avoid legislation going in the wrong direction.

And we also see pressure from people who are, on purpose, making a mess of the distinction between governments on the Internet and governance of the Internet. And this is where it’s really up to us – people at EuroDIG, people who understand the technicalities of it – to reach out beyond our usual groups, to speak to other people and to explain how these things work, what the risks are and what the complexities are. I mean, we really have to keep – (Garbled audio) – educating. I was educated by you guys when I joined five years ago. If I can learn it, anybody can. We have to reach out beyond this group, and that is how we avoid stupid regulation.

And if we do any kind of regulation, please build in a sunset clause, so that any law automatically expires unless it is expressly renewed. Otherwise you get stuck with legislation made in the craziness of the moment, in the fear of terrorism or whatever the problem happens to be – COVID, for example – that is not going to be relevant five or ten years from now. If it is, we will renew it, but please let it expire. Do not adopt any legislation without a sunset clause.

>> TATIANA TROPINA: Thank you so much, Lousewies. You have given us a lot to think about. You talked about companies, fearing legislation, coming together with governments and saying: let’s cooperate on this, let’s draft it together. You asked the speakers and the audience whether we can trust the government. But doesn’t this add another layer to the question: can you trust the government when it cooperates with the very companies it is going to regulate? This is a question to Marina. How much industry cooperation are we expecting here, and isn’t there a risk of regulatory capture in the end?

Marina, I’m really sorry you have a lot of questions on you, on fragmentation, on standards and what Lousewies brought up here. So the floor is yours.

>> MARINA KALJURAND: Yes, thank you, Tatiana. I listened very carefully to what you said, Lousewies, and I agree with everything you said, but I will try to go through the questions raised in a more structured way.

First of all, let’s look at human rights globally. I have a long background in government – I was more than 25 years in government. Now I am a member of parliament, which gives me much more freedom to speak. Unfortunately, on a global level, we all say the right things, but we do not behave that way. We all talk about an open, free, resilient, accessible Internet. We all talk about human rights, about human rights online being equal to human rights offline, but in practice we have different understandings of that. And that’s why I’m very skeptical about any breakthroughs on a global level in the United Nations when we talk about laws, regulations, conventions.

Yes, the UN has its role in capacity building, awareness raising and education, but I do not expect more than that – terrorism has been on the table in the United Nations since the ’70s and it’s still being discussed, because the ideologies are so different. But, again, the UN has its role.

I believe much more in progress within groups of like‑minded states and countries. Markko mentioned the Council of Europe – maybe for cybercrime it’s a good place – but, for example, when I talk about terrorism, I do see that we can achieve more pragmatic solutions, or at least some solutions, within the EU.

The Copenhagen criteria are the minimum within the EU; we can’t fall below that. As we all know, there are some suspicions that Hungary is not a democratic country anymore, after some reports from Freedom House. So even within the EU we have an unequal understanding. Take terrorist content: I think one of the worst things we can do is ask the platforms to define what terrorist content is. We are putting on them the burden of a problem that we are not able to solve ourselves.

So although it’s not a perfect system, I would still trust the legal systems, the jurisdictions, of the Member States of the EU. I’m not ready to accept the terrorist definition of Catalonia, as you mentioned, or maybe of some other countries that are moving away from human rights and the protection of fundamental freedoms. So I trust my jurisdiction, and I agree with removing terrorist content online if my Estonian jurisdiction says it’s terrorist content and it has to be removed.

I can’t say all the countries are doing the wrong thing. They have the right laws in place, but still, at the moment, we are not at a point where we are able to cooperate with other jurisdictions. We are still different jurisdictions, and I would argue we will not be able to create unified laws in the near future.

Some examples. I think it was Pavel who raised the issue of more secure software. I think the GDPR is an excellent example of how the EU can set rules, as is the certification of ICT products and online services, which we are working on. And like the GDPR at the beginning, this attracts a lot of criticism – and now even Facebook is asking for a GDPR in the United States.

So the EU has the power and the EU has the moral strength to introduce rules that might later become, if not global, then at least accepted by other regions as well.

And my final remark is on pressure. You are very right, and I feel the political pressure. Take terrorist content online. As I said, there is no place for terrorists online, but I have the feeling that what we are doing today is writing a political document. When we write that content should be removed within one hour, it’s not doable. It’s doable for the big platforms, but it’s not doable for small SMEs. So why do we write it? Why do we put them into a position that we know from the beginning is not doable?

Why are we trying to introduce automated tools and filters when we know that artificial intelligence is not at the level of making decisions that can be made only by humans? Content that from one perspective is terrorist, illegal content can also be news. It can be police information. It can be educational information. It can be a piece of art. And we need humans to make the final decision.

So, very many open questions. I don’t think that we have to give in to political pressure and do something very quickly just because the political will is there, or because politicians are expected to say something strong. Being politicians means we also have to find workable solutions, and that also goes for terrorism online.

Yes, I will stop here. Thank you.

>> TATIANA TROPINA: Thank you so much, Marina, for such a frank and, again, strong and powerful intervention. I think that before we move to the other speakers, I want to get back to Markko and ask him: would you like to add something? I found it interesting when Marina talked about her native country, Estonia, and her trust in its standards. What about you? Do you think that you can lift the standards of your country to the UN level or to the EU level, or do we always have to make a compromise?

And it’s especially important for you in criminal investigations, because sometimes you will not get data if the standard is not met. How do we square this circle?

>> MARKKO KUNNAPU: Thank you. First, I think I should say that Estonia, like other countries, is not isolated. We are part of the global international community, and that includes several international instruments on human rights – the European Convention on Human Rights and many others – and we have minimum standards on privacy, and so on. This is actually the basis, and when we are going to build new tools and procedural measures, this is the basis we need to respect and comply with. And this is also important when we cooperate with others. And yes, even if we have common minimum standards, a minimum common understanding and mutual recognition in place within the European Union, we still need to cooperate with third countries as well. The data could be located in third countries, and that’s why we need to pay attention to this too.

However, when we cooperate with them, we still need to respect our domestic standards as well. For example, if we have rules on personal data protection and privacy, then we need to take those into account before we send out requests to, say, a foreign company or a foreign law enforcement authority. There are rules in place that need to be respected. But, again, even if we have strong legislation in place at the domestic level, or maybe within the European Union, we need to bear in mind that most of the time these attacks and incidents originate from other countries. So it’s not only about having strong legislation in place; we also need to take action to protect the victims and to ensure that the rule of law also applies in cyberspace, and that if something happens, there is an effective criminal justice response.

Unfortunately, right now this is not working all the time. Yes, to some extent international frameworks and instruments can be used, but, again, there are definitely problems.

>> TATIANA TROPINA: Thank you very much, Markko. And I will hand it over to Ceren, because we have a chat comment, and Ceren has been monitoring the chat. So do you want to take it?

>> CEREN UNAL: Yes, thank you very much. So before we turn to the other speakers, we have a comment from Han Soal Park. She doubts that many companies intend to come up with a definition of terrorism; as many speakers pointed out, it will be very difficult to come up with a unified definition. So the key issue is how difficult it is to define these illegal activities. And Lousewies also noted that the same challenge applies to e‑evidence, which is impossible for small businesses.

Yes, we are seeing more and more private companies cooperating with governments, and most of those companies are the big ones, who can afford the regulation.

I would like to turn to Pavel and Christian. Pavel, you mentioned, in line with Lousewies’ intervention before, how policymakers need to be educated. Of course, please respond to all the previous comments, but I would also like to hear your more detailed approach on this. Thank you.

>> PAVEL GLADYSHEV: Well, just to comment on what Markko said about the standards of international cooperation: right now, not every country has subscribed to the Budapest Convention to the full extent, and indeed the Budapest Convention itself allows for signing up with certain exceptions, but it provides certain common ground which is more or less addressed in the legislation of different countries.

I did study the history of cybercrime‑related legislation, and it was interesting to look back at what was defined in the United States. The first computer crime act there, I think, was adopted back in 1983. Even though it provides a lot of the same offenses that the Budapest Convention puts forward, it is much less systematic and much less refined, in my nonprofessional opinion, than the Convention.

I think what the Convention has done is to harmonize and to provide a certain systematic approach: cybercrime versus cyber‑related crimes, ways of quickly preserving digital evidence. But it’s also not without its problems, because, for example, if we look at cyber‑related crimes, I think the scope of offenses defined there is insufficient for modern technology. What about crimes committed with the use of artificial intelligence?

As for the lowest possible standard, I don’t think it would be significantly different from the current state of reality, when there are still countries that cannot agree on even a minimal set of standards.

In terms of preservation of human rights, I will agree with Marina and Markko, that a balance is much needed. For example, can we talk about technological issues or do you want me to defer it?

>> CEREN UNAL: It’s up to you.

>> PAVEL GLADYSHEV: Okay. I will get into that.

I was involved in research on problems related to the criminal use of encryption and ways of bypassing encryption. Some time ago, I came across an article written by Michael Chertoff, who is a former US – I think he was home secretary of the United States.

>> MARINA KALJURAND: Homeland.

>> PAVEL GLADYSHEV: Homeland Security, I’m sorry. In the article, he looks at different perspectives around the world on encryption technology. He says that in China and in Russia, everyone has to surrender their encryption keys, while in the United States, they believe that Tor serves its purpose, and that law enforcement needs to be given tools to work around the encryption technology and otherwise leave it as is.

What strikes me in this analysis is that there is a disconnect between the way policymakers perceive technology and the way technologists perceive it.

From this article, it transpires that Michael Chertoff, and perhaps others, see technology as something that can either be adopted or prohibited, and there doesn’t seem to be a middle ground. If you look at technologists, a lot of them focus on privacy preservation to no end. Cryptographers in the United States have put forward arguments against mass surveillance initiatives since back in the 1990s, and the main argument is that technology can be broken into.

And if a certain agency collects a lot of data about the general population, it’s a big security risk. It’s very expensive, and it obviously violates everyone’s privacy.

And it was not so much the privacy violation as the lack of a clear purpose for collecting the information that was behind the repeal of the Data Retention Directive, which was struck down by – by the European Court. I can’t remember which one.

>> TATIANA TROPINA: Court of justice.

>> PAVEL GLADYSHEV: Yes, thank you very much. And I think the interesting finding by the European Court of Justice is not that the data collection was inherently anti‑privacy, but that the data was collected with no reason and no specification of the purpose and proportionality of it. That was the issue.

And so last September, I published an article where I bring attention to attempts to build a middle ground: something that is not completely unbreakable, as Tor is designed to be, but at the same time not completely open to law enforcement requests, as some previous initiatives would appear to be.

And just to give you an example. One such architecture was proposed by researchers at a university, and it’s based on the idea of blind signatures, which was suggested by David Chaum back in the 1980s. It’s a cryptographic scheme where a party can put their signature over something without being able to read the content.
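To make the blind-signature idea concrete, here is a textbook RSA blind-signature sketch; this is a toy illustration of Chaum’s concept, not the researchers’ actual construction, and the key, message, and blinding factor are demonstration values only:

```python
# Toy Chaum-style RSA blind signature: the signer signs a value it cannot read.
# Textbook-sized key (p=61, q=53) for illustration only; NOT cryptographically secure.

n, e, d = 3233, 17, 2753  # public modulus, public exponent, private exponent

def blind(msg: int, r: int) -> int:
    """Requester hides msg with a random factor r (coprime to n)."""
    return (msg * pow(r, e, n)) % n

def sign_blinded(blinded: int) -> int:
    """Signer signs the blinded value: (m * r^e)^d = m^d * r (mod n)."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """Requester strips r, leaving m^d, a valid signature on the original msg."""
    return (blind_sig * pow(r, -1, n)) % n  # pow(r, -1, n) needs Python 3.8+

def verify(msg: int, sig: int) -> bool:
    return pow(sig, e, n) == msg % n

msg, r = 1234, 131                      # 131 is coprime to 3233 = 61 * 53
sig = unblind(sign_blinded(blind(msg, r)), r)
assert verify(msg, sig)                 # a valid signature on msg
assert blind(msg, r) != msg             # yet the signer only ever saw the blinded value
```

The key property on display is exactly the one described above: the signer authorizes (signs) a value whose content it never learns.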

You might ask: what is the reason for signing something in an envelope which you cannot see? Well, they proposed an escrowable architecture for an anonymization network. Basically, it works like this. We have a Tor‑like network in the middle, the mix network, but before a connection can be established, whoever is initiating the connection needs to get an authorization from a management agency, run by a government, an agency or some other entity, to which they provide the destination of the communication, and which is blind signed. The blinding makes sure that the agency does not have access to, or the ability to figure out, what is being signed.

This information is then transferred through the mix network, and at the last stage, the unblinded information can be revealed to the management entity if they possess a certain secret.

So, in a nutshell, it’s an architecture which allows discovery of the identities of the responder and the initiator in certain circumstances.

And to achieve this, the unblinding secret can be divided among multiple entities. Using classic cryptographic schemes, the proposed approach of these researchers was that different parts of this secret should be divided between the government, the management of the network, and the end users, and only when the three of them decide to cooperate can this information be recovered.
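The three-way split of the unblinding secret can be sketched with a simple XOR-based all-or-nothing scheme; this is a simplification of the proper threshold cryptography the researchers would use, and the party names are purely illustrative:

```python
import secrets

def split_three_ways(secret: bytes):
    """Split a secret into three XOR shares; all three are required to recover it."""
    a = secrets.token_bytes(len(secret))
    b = secrets.token_bytes(len(secret))
    c = bytes(s ^ x ^ y for s, x, y in zip(secret, a, b))
    return a, b, c

def recover(a: bytes, b: bytes, c: bytes) -> bytes:
    """XOR the three shares back together to reveal the secret."""
    return bytes(x ^ y ^ z for x, y, z in zip(a, b, c))

# Illustrative holders: government, network management, end-user representatives.
unblinding_secret = b"session-unblinding-key"
gov_share, mgmt_share, user_share = split_three_ways(unblinding_secret)

# Only when all three parties cooperate can the identities be unblinded;
# any one or two shares on their own are just uniformly random bytes.
assert recover(gov_share, mgmt_share, user_share) == unblinding_secret
```

The design choice this illustrates is that no single party, not even the government, can unmask a connection unilaterally.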

The scheme is not ideal. It’s not perfect. But I think the most important aspect here is that it shows that technologists can work towards something that provides sufficient guarantees of security, trying to strike a middle ground between total surveillance and no surveillance at all. I think that’s what we are missing.

I think my main point, my main contribution here, is that policymakers should work towards building such middle-ground solutions alongside the technologists, and that ultimately we should ban fully anonymizing solutions like Tor.

I know that many people will disagree with me on that, and they will say: even if we ban Tor, it doesn’t mean that people cannot still use it. And I totally agree with you, but it would make it much more difficult for teenagers to get access to a prohibited technology, and it would remove some of the most blatant abuses of this technology, which relate to the online sale of drugs and other related criminal activities.

>> CEREN UNAL: Thank you very much, Pavel. I’m sure there will be many questions and further discussion on this. We have some questions in the queue; I will get back to them after I go to Christian, who has been waiting. Christian, you also mentioned balancing the different perspectives of policymakers, technologists and experts, and in our correspondence while preparing for this workshop, you asked why we don’t focus more on the preventive side – preventive versus punitive – and which approach should lead in legislation. So, without further ado, I would very much like to hear your views on this.

>> CHRISTIAN BERG: Yes. So I’m probably going to answer a little bit all over the place.

Let me talk a little bit about collaboration. I’m coming from a world where we have been working with law enforcement on helping them solve child sexual abuse cases. If you look at one of these images, it’s objectively bad and it’s illegal. But still, we have different legislation in various countries. In Sweden, for instance, child sexual abuse cartoons are illegal. That’s not the same in other countries. So it’s more or less the same around the world, but you can’t say this is illegal in Sweden and then transfer that information to the UK and say, hey, it’s illegal there too. It’s not that simple.

So we have been working on various attempts here, and one of them is actually a concept where we try to break the evidence – the image, in this case – into pieces: hey, this contains nudity, it contains a child and an adult, and this type of action. So you are building a description of the image, and descriptions can be translated.

Let’s say you have an image that is illegal in Sweden: this is a cartoon with a child and an adult. When it comes to the UK, they see that it contains a cartoon and can decide whether that is interesting for them. So in that case, it’s actually possible to collaborate globally even if there are different legislations. Here we are using AI to do the conversion between the legislations. It is not a perfect system, but, you know, I also believe we are not going to have the same view on this in every part of the world. It’s very different.
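The idea of sharing descriptions rather than verdicts can be sketched like this; the description fields and per-jurisdiction rules below are invented for illustration and do not reflect any country’s actual law:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImageDescription:
    """Structured, jurisdiction-neutral description, produced once (e.g. by AI)."""
    contains_minor: bool
    contains_abuse: bool
    is_cartoon: bool

# Hypothetical rules: each jurisdiction applies its own law to the same description.
RULES = {
    "SE-like": lambda d: d.contains_minor and d.contains_abuse,                       # cartoons covered
    "XX-like": lambda d: d.contains_minor and d.contains_abuse and not d.is_cartoon,  # cartoons exempt
}

def illegal_in(jurisdiction: str, desc: ImageDescription) -> bool:
    """Evaluate a jurisdiction's own rule against the shared description."""
    return RULES[jurisdiction](desc)

cartoon = ImageDescription(contains_minor=True, contains_abuse=True, is_cartoon=True)
assert illegal_in("SE-like", cartoon)        # illegal where cartoons are covered
assert not illegal_in("XX-like", cartoon)    # legal where only real imagery is covered
```

The point of the design is that the classification step is shared once, while the legal judgment stays local to each jurisdiction.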

But, I think we can collaborate on sharing evidence with each other.

Coming back a little bit to our solution and what you spoke about: I think that could actually be a solution to a big problem. Nowadays, as I mentioned, Facebook and other social media giants are reporting suspects to NCMEC, which is receiving 10 or 15 million reports from Facebook alone, and that, of course, is a very good source of information to rescue kids. But on the other hand, Facebook is saying: well, we need to have privacy for our members. So they say they will encrypt Facebook Messenger, which is good from a privacy perspective, of course – I’m not arguing against that – but when Facebook goes ahead and does this, it means that law enforcement will lose a lot of information and will rescue fewer children.

And, you know, this kind of solution that Pavel was presenting, or others like it, could, I think, provide an encryption level where companies and governments collaborate on some middle ground: it is secure communication, but at some stage we can also open it up and share the evidence with the police to do the good things.

But I think the hard part is whom to trust, and where. There is no global entity that we can trust at the moment, and no one is perfect. So I think the main challenge is who manages the whole system.

And, yeah. I think that’s it. I will leave it at this.

>> CEREN UNAL: Thank you very much. I would like to go back to the chat. Matthias had the first raised hand for a question. Can we please unmute him?

I’m also looking.

>> Great! Matthias speaking. Can you hear me?

>> CEREN UNAL: Yes.

>> Nice! Thank you very much for this great panel. Very interesting. I have two more particular questions. The first question would be: what does the panel think about the current developments of companies like Clearview AI? I’m sure you know it; for the people who do not know it, it is collecting pictures on the Internet which are publicly available, and it is selling access to these pictures to law enforcement and other companies. This may not be allowed in the European Union, but I think it’s still a development, and the question is whether we will be able to tackle this issue. That would be the first question: what does the panel think about this development, and how can we tackle it?

And the second question would be more related to facial recognition. We heard yesterday that, for example, companies like IBM said they will stop developing and putting energy into facial recognition, because there are a lot of problems: a lot of uncertainty, discrimination against the people affected, and the technology is only as good as the data behind it. We know that this gap will be filled by other companies putting facial recognition products out there.

We know that in the US, in California, they banned facial recognition because it’s a bad development. The EU has still not decided on a direction. I would be very curious to know, for example, what Marina is thinking about this, because she is in the parliament. And the second question also relates to these companies and the tools I described: how can we bridge the gap between freedom and security? Because it always depends on which side you are on – more on the side of the government, which is really trying to catch the bad guys, or more on the side of the activists, who sometimes have a problem with it, even though the framework in the European Union is quite good and there is not that much of a gap, but still it can happen. Thank you very much, and I’m very much looking forward to your answers.

>> TATIANA TROPINA: Thank you, Matthias. Tatiana here. I want to thank you for saving me as the moderator, between making the bridge between the first part of the session and the second part of the session, because your questions are exactly what we want to discuss towards the end of the session.

And Marina, would you like to take the floor to answer Matthias’ question.

>> MARINA KALJURAND: It’s a good question. The aim is to find the right balance between both of them and not take a side: so that criminals are detected and brought to justice, and at the same time fundamental rights and human rights are not being violated. That’s the ideal picture.

Of course, I understand it’s difficult. And when we talk about the use of AI, I think that today we understand that we are not developing AI at the speed that maybe we thought even some five or six years ago.

And in the end, what for me is very positive are the principles of human-centered AI and human oversight of AI. It has to be centered around people, and the final decision has to be made by people. That’s why I completely oppose any automated tools or filters that control the content which platforms, or whoever, put online. They can do some work, but they make mistakes and they are biased, and the same can be said about facial recognition.

I get the point that facial recognition will come at one point or another, but at the moment, it’s not at that stage. We see so much bias, so many mistakes, discriminating mistakes, that it’s not ready to be used at the moment.

And having said that, I truly believe in the use of ICTs. I come from Estonia; I could hardly say otherwise. We live in a country where we have had the privilege of online services for almost 30 years. So I’m a true believer in the use of ICTs for the improvement of people’s lives and also for cooperation with authorities, including the judiciary, prosecution and law enforcement authorities. But that has to happen at the time when the automated tools are ready – so established and examined that we can be sure they are not biased and not making mistakes. I don’t know if and when that time will come. Until then, we need human oversight.

And another thing I wanted to say, which is very right, that policymakers are disconnected from experts. We have to listen to each other and it’s not in the DNA of governments to listen to industry or private sector, but the best experts are today in private sector and in industry, so help us to change that.

In the European Parliament, but also when I was foreign minister, it was always important for me to listen to all the stakeholders, because all of them have their bits and pieces of the big solution that we are working on – academia, human rights organizations, industry, government – everybody has their pieces.

And my final point here is that many panelists have mentioned cooperation. Crucial. Without cooperation, we will not be able to be efficient and successful, whether we are talking about cyber criminals, terrorists online or cybersecurity. Cooperation is crucial, and we have to cooperate.

>> TATIANA TROPINA: Thank you very much, Marina. I see that Han Soal Park has her hand up. If you can wait for a few minutes – I will hand it to Christian first, and then you will be next. Christian, you are next, and then Han Soal.

>> CHRISTIAN BERG: I agree with you that we need to use AI to filter data – not to say this is the truth, but to bring up relevant information so a human can take a look at it and then make a decision. I don’t think we can have automatic AI filtering; I think that will be too blunt for many years to come.
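That division of labour, AI prioritizing and a human deciding, can be sketched as a simple review queue; the threshold, field names, and labels below are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    item_id: str
    score: float  # classifier's confidence (0..1) that the content is illegal

@dataclass
class TriageQueue:
    """AI only routes and ranks items; removal decisions stay with a human reviewer."""
    review_threshold: float = 0.5
    for_human_review: List[Item] = field(default_factory=list)

    def triage(self, item: Item) -> str:
        if item.score >= self.review_threshold:
            self.for_human_review.append(item)
            # Highest-confidence items surface first for the reviewer.
            self.for_human_review.sort(key=lambda i: i.score, reverse=True)
            return "queued_for_human_review"
        return "no_action"  # never auto-removed by the model alone

q = TriageQueue()
assert q.triage(Item("a", 0.92)) == "queued_for_human_review"
assert q.triage(Item("b", 0.10)) == "no_action"
assert [i.item_id for i in q.for_human_review] == ["a"]
```

Note that the model never emits a "remove" outcome at all: its only two verdicts are "queue for a human" or "do nothing", which is the bluntness constraint described above made explicit in code.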

Regarding Clearview and the first question: I think the question is more about privacy online and what kind of information about you is available online. Now we are saying, well, you can search faces using Clearview. But you have been able to do that with Microsoft Bing for a long time. You have people-search sites, and a lot of things on Twitter are being monitored.

Facial recognition is one thing, but in the sense of searching online, it’s just one of all the other search tools, and I don’t think Clearview is the most efficient one. The most efficient one is Google itself, starting with textual information or with similar data and so on.

I think a big problem with Clearview is the transparency: what has it been indexing? Has it been indexing my Facebook images? If I use Google, I can see what can be found using the service. But it’s more, I think, about how much data we have online and how easy it is to find it and track it. And using all of the big search engines, it’s very, very easy to track most of these things.

And Clearview is a small piece in the puzzle, but it’s not changing what can be found online in the big scheme of things.

Yeah. That’s it.

>> TATIANA TROPINA: Thank you, Christian. We have Han Soal Park with her question waiting. If the host can unmute her, we would be very grateful.

>> Hello, do you hear me?

Is it unmuted?

>> CEREN UNAL: Yes.

>> Thank you. Thank you very much. My name is Han Soal Park, and I’m the director of an Internet policy network. Thank you very much for this interesting session; I have been listening to it with great interest. I just want to comment on the e‑evidence that we discussed earlier in the session. What concerns us right now is that a lot of countries need to get electronic evidence from other countries when the investigation is cross-border in nature. And we believe that mutual legal assistance is a very traditional system: it is not well adapted to securing electronic evidence right now, in 2020. And we can see that a lot of the countries we talk to are getting more and more frustrated that they cannot get the evidence to prosecute in their own home country.

Some actually look into the possibility of data localization, which is very problematic for us, because we believe in an open and free Internet. And we believe that, because of the increasing tension around jurisdiction, we will have new laws to address this issue in one way or another. We have been closely monitoring the developments related to the CLOUD Act, of course, and the bilateral agreements with other countries, the Budapest Convention, and the European Union e‑evidence proposal. We want to emphasize three points here. First of all, all of these initiatives need to be interoperable with one another, because, just like Markko said, if you have fragmented approaches to e‑evidence, it creates a lot of confusion and ambiguity in the end, and we believe that this would create more risk of human rights violations.

Second, if this needs to take place – having a new law to address this issue – then we want to make sure that the procedures and safeguards remain, so that human rights are protected. The current mutual legal assistance procedure holds a lot of good lessons, but if we make it shorter and more efficient, we risk taking out a lot of its components. We hope to find the right balance: a procedure adapted to e‑evidence, but at the same time one that protects human rights.

And lastly, we believe that communication and knowledge sharing among policy workers are really critical on this topic. Technology develops very fast, and these new laws may face different challenges in the future, because the technology will be different by then. Even now we are not sure what kind of law would clearly help smaller companies and different types of enterprises respond easily to law enforcement authorities’ requests. We are working hard to facilitate this discussion, but I believe the most critical thing is the willingness of the stakeholders working on this topic.

So I just want to highlight that very much here. We also talked about the use of AI for law enforcement purposes. This is not related to my program – I’m speaking in an individual capacity, because I used to work in a counterterrorism-related job before. I wanted to say that there is terrorist content that is really, really obvious, and it’s very disturbing for a human to look at it continuously, over and over again, every day.

And personally, I think it’s understandable that if there is very obvious terrorist content that is too disturbing, it would be good to have it taken down automatically, while other content requires human review. I believe we are already doing this, and maybe increasingly we will find best practices as we improve the capacity of AI. But if we were to increase the capacity of AI, I was wondering whether there could be a possibility of remedy if the AI makes mistakes.

And how would the law facilitate this type of procedure? That is something I wanted to ask the panel. Thank you very much.

>> TATIANA TROPINA: Thank you very much for your intervention, and I must admit, time flies! We have 14 minutes left, of which we have to leave five or six for the messages. So let us do it like this. There are some questions and some interventions, and I also wanted to ask: Lousewies, do you want to speak before we hand it over to the panel? Or would you prefer your interventions to stay in the chat? Because I do find your interventions quite interesting. If you want to jump in, just raise your hand.

I will hand it over to the panel. I understand that we have talked about a lot of things. So if you could please cover some answers to the last question, but also just give us three or four words on what is next for criminal justice – three or four words that you would like to put emphasis on for the next year or two.

And, of course, if you could answer the last questions in your intervention, this would be great. So let me start with Markko. Markko, just take the floor and go. Thank you.

>> MARKKO KUNNAPU: Thank you, Tatiana. I think the biggest challenge is to improve international cooperation. First, countries need to adopt the necessary legislation concerning procedural measures, and countries need to be able to use international instruments as a legal basis for cooperation, because without a legal basis, international cooperation would not take place.

And I think when it comes to the response to cybercrime and cyber‑related crime, right now the problem is not so much substantive law, because this is something that countries can do by themselves. They can just adopt a law, say that this is criminalized, and end of story.

But in order to cooperate at the international level, you must do more. And this is where international cooperation and international efforts are needed. You need to agree on some sort of international legal framework. You need to introduce the necessary tools or measures that would enable you to get computer data, to get electronic evidence, from another country.

Also, when speaking about the Budapest Convention: the Budapest Convention is also about electronic evidence, because all the procedural measures in it can also be applied to any other crime where electronic evidence is present. So you can use the convention and all of these procedures and tools to investigate murder cases, terrorism cases, corruption cases, drug trafficking – you can continue endlessly.

But, again, with these new tools and new measures, you need to have a balanced solution.

Yes, it is not only about governments and maybe certain activists; when we talk about criminal offenses, there are also victims who may want to seek justice and restoration. We need to do something for them as well. And, again, prevention – yes, prevention is really important, but you can have really strong legislation and maybe information security in place, and if all of those attacks come from abroad, then you also need to do something about those perpetrators.

Mutual legal assistance: with the existing MLA system, I think everybody agrees that it is old, outdated, and too slow. And when we talk about the challenges we are facing right now – the use of Tor and the Dark Web, and all the illegal content on the Dark Web – it is not possible to use the traditional MLA tools, because you do not know where the data is located. You need something in addition. And this is what several international organizations have actually been discussing for years.

There are also questions concerning cooperation with the private sector. If you have an entity in your country, in your territory, then you have legislation in place, and if you send a preservation order or a production order, that particular entity has to comply with it.

Due to cloud computing and an open, borderless cyberspace, the situation has become more complicated, because you can have a company entity in one country and servers and infrastructure in different places, while that company offers services all over the world. So, again, which legislation should be taken as the basis? And which jurisdiction is the primary one? There can also be conflicts when, on the one hand, the company has to disclose the data, but on the other hand, there is a law somewhere that says it has to keep the data confidential.

So there are problems, and this is why all of these negotiations and discussions take place.

As regards the use of different technologies, like Tor and encryption: the problem is that most of the time, these technologies are being used for lawful purposes. Yes, we know that sometimes they are abused, but we cannot, on that basis, prohibit or ban these technologies, because even if we had some sort of declarative provision in a legislative act saying that their use is prohibited, people would still use them. People can still use proxies and VPNs, and companies can just move their location to another place.

We cannot prohibit it, but law enforcement authorities need to find a way to cope with these abuses and to live with this.

Thank you.

>> TATIANA TROPINA: Thank you very much, Markko and next, I would like to ask Pavel if he has something to wrap up this session with.

>> PAVEL GLADYSHEV: Thank you, Tatiana. Thank you, everyone. Thank you, Markko, for a detailed exploration of the encryption‑related concerns. I support your conclusions, except that I would say we need to work towards a middle‑ground solution; but I agree with you that it is not possible to prevent people from using prohibited technology.

What I think is important is not just reaching agreement at the MLA level. We need to work towards research and technical standards for the acquisition of digital evidence – something that would provide unified access to millions of Internet of Things devices and to cloud services, and something that we can engineer with privacy preservation in mind.

Because if you speak with law enforcement on the ground, a lot of the time they will ask you how to get data out of this mobile phone or that mobile phone, and it is a never‑ending story because the market keeps being populated with new devices.

I think if we had a common standard, and we could convince the vendors that this standard preserves privacy to a reasonable extent, then it would simplify the life of law enforcement quite substantially. So that is one. And second, I believe that we need to elevate software development to the level of an engineering profession. For example, in many countries, before you can build a house, you need to join an engineering society and pass the required examinations, because human lives depend on it if you build a house the wrong way.

And I think software has become of equal importance. We need to educate every software creator in how to build proper software.

>> TATIANA TROPINA: Thank you, Pavel. Christian, I remember that you mentioned that technology helps law enforcement to cooperate, while laws can divide them. How would you wrap up the session after what Markko and Pavel said?

>> CHRISTIAN BERG: Yes, it’s interesting. I really agree with what you said, Pavel, about unified access to data.

That would help a lot.

Another problem is that getting data out of phones – that problem is almost solved. The problem is that the data stays there. There is no unified way to share the data between software tools: if you are using one piece of software, you are stuck with the data in there. Police would benefit so much from having a standard for how we treat evidence, so that it can be shared between different software vendors and between jurisdictions as well. That would be a massive time saver, I think.

Another thing is that, you know, we were talking a lot about collaboration, nationally and internationally. It is hard to collaborate nationally, and some of the police say it is crazy hard to work internationally, and I think that is a big thing that we have not really covered here. Local police officers usually have local knowledge of the local bad guys and the local crimes – and the guy selling drugs on the street might be the guy on a Dark Web site selling a lot more. I think democratizing technology, to make it possible for more officers to use it, is a key to solving more crimes. In most countries we have a big data solution, with some extremely expensive technology, in the center of the country, when in fact most of the crimes are taking place in local communities. So we need to democratize this so that everyone has access to the knowledge. But, of course, in doing this it is critical that we address the privacy aspect: what can and cannot be used in investigations.

I think that is the key thing here. Privacy will be a key component in all of these solutions, and in deciding what is most important, because the data load we have in society is just getting worse and worse. And applying AI on top of this means we get even more intelligence out of it – almost another layer of AI to take care of what is being produced by AI technologies, so we can see what we should look at. It is crazy how much data is out there.

So I think we are at a tipping point. It is a very interesting time, where we have so much technology and so much data, and the technology solutions to date, especially for law enforcement, have been around for like 15 years – they have looked more or less the same for 15 years. I think we are at a point where we should change how we approach crime, so that it is not just experts who handle cybercrime, but all police officers, because all crimes will be digital in some respect.

I think that is my key message: all crimes have a digital component.

>> TATIANA TROPINA: Thank you very much. Before I hand it over to the reporter, I really want to read one of the chat messages and then give the wrap‑up to Marina. As the host, I am sorry that we are running over time, but I think it is important to wrap this session up properly.

So there is a message from Lori Schulman, thank you for this: “Agree on privacy point. Must understand what can and cannot be used. It’s been great to hear this discussion. I must jump to a call. See you at the later session.” Thank you very much, Lori, if you are still here.

So before we hand it over to the reporter: Marina, I would really like you to say a couple of words as a wrap‑up, even if it is only three words. Thank you. The floor is Marina’s.

>> MARINA KALJURAND: Thank you. Although I was ready to skip my remarks, I will be really brief. What I take from this session is that the topic is high on the political agenda, which is good. We are paying attention to it.

Cooperation has been mentioned by so many speakers, which is extremely positive. And third, I will mention inclusiveness, or the multi‑stakeholder approach: we have different backgrounds, and we can come to the same room and discuss things, so decision making should be the same. Only by listening to all parties can we make effective and right decisions. Thank you so much for putting this panel together, and thank you for giving me so disproportionately much time. Thank you, Tatiana and others.

>> TATIANA TROPINA: Thank you very much, Marina. Thank you very much, everyone. But before we finish and give everyone a round of digital applause, I want to hand it over to Cedric, who is our reporter and who is going to read us the messages from the session.

>> CEDRIC AMON: Hello. Thank you very much for the great discussion and panel we have had. My name is Cedric Amon. I am a Geneva Internet Platform rapporteur, and I have written up some of the messages, which will be available for comments on the wiki shortly afterwards. I also invite you to read the session report, which will later be published on the Digital Watch page for EuroDIG; I will provide a link later.

The first message: finding the right balance between the control of online content and upholding fundamental rights will remain an important challenge. Given that most incidents occur across borders, and that there is no common definition of crime and terrorism, cooperation between states, and with the private sector, on all of these matters is crucial.

The second: the use of artificial intelligence for law enforcement provides a big opportunity, but must be explored diligently, because it requires vast amounts of resources and an advanced understanding of the technology. It should, however, not be implemented without human oversight.

Third: the flurry of activity to create new norms to deal with cybercrime bears a risk of increasing legal fragmentation and of finding agreement only on minimum standards. It is also important to avoid falling below already existing standards, such as the Budapest Convention.

And the final message I took away: due to the increasing levels of encryption and anonymization used by cybercriminals, alternatives must be found that uphold levels of privacy while allowing law enforcement to protect users online and offline.

So that’s it for me. If there’s –

>> TATIANA TROPINA: Thank you very much, Cedric. I think these messages will be available to everyone for comments and edits, if there is no strong objection – although I think that for the last one we might be bitten by the privacy activists, but I think that is basically what this session expressed, in a way. I am not sure about the last one, whether we can put it online, but other than that, I do not see any objection in the chat.

I see the comment from Lousewies that it was a really good session. It was a good session because of all of you who participated – because of you, Lousewies, because of Matthias, because of Han Soal Park, and everyone who shared their opinions. Thank you very much. I wish we could continue this discussion for another 30 minutes or an hour, but of course, we have our format.

Thank you to my co‑moderator Ceren.

>> CEREN UNAL: Thank you very much. It was fun.

>> TATIANA TROPINA: Thanks to Elizabeth, and thanks again to the captioner, who had to go through all of this diligently.

Thanks a lot and with this, I will end this session. Thank you.

>> ELIZABETH SCHAUERMANN: Thank you, bye. I actually don’t know if Sandra can join us and tell us more about what’s to come. If that’s not the case, we will resume – ah. There she is.

>> SANDRA HOFERICHTER: Elizabeth, can you hear us?

>> ELIZABETH SCHAUERMANN: Hi, Sandra.

>> SANDRA HOFERICHTER: Hi, Elizabeth. I see you just finished the session and from a distance, it looked quite interesting and interactive. Would you confirm that?

>> ELIZABETH SCHAUERMANN: Yes, definitely. This was a really good discussion going on here in Studio Berlin.

>> SANDRA HOFERICHTER: I know for today you are done. You can close the studio now. You will reconnect with us tomorrow during the day.

I would like to thank you for all of your effort and among the studios, we will discuss and reconvene later on during the night to prepare the next day.

So thank you very much. And for those who would like to stay connected to EuroDIG, please move, after the coffee break, to the studio in The Hague. We convene there with the keynote and the panel on sovereignty, and then the first day of EuroDIG is finished. Thank you very much and bye‑bye, everyone, or see you later.