Keynote 02 2022

From EuroDIG Wiki

22 June 2022 | 10:00 - 10:30 CEST | SISSA Main Auditorium | Video recording | Transcript
Consolidated programme 2022 overview / Day 2


2 x 15 min.

  • Jan Kleijssen, Director of the Information Society and Action against Crime Directorate, Council of Europe (15')

Video record


Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-482-9835,

This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.

>> NADIA TJAHJA: Good morning, everyone. Kindly I would ask you to take your seats so that we can start the day.

Good morning. Welcome to the second day of EuroDIG, the European Dialogue on Internet Governance. I’m so glad to see you all today, here and online. It is a magnificent opportunity for us to have this hybrid interaction: not only gathering to talk to the people in the room but also connecting with those online, bringing people from all around Europe to discuss the topics most important to us right now, and introducing new topics, thoughts and perspectives to ensure that we have a broader understanding of the ideas and concerns that are going around.

I am very pleased, therefore, to introduce the keynote by Jan Kleijssen, who is the Director of the Information Society and Action against Crime Directorate at the Council of Europe.

Please, I direct your attention to the screen for Jan Kleijssen.

>> JAN KLEIJSSEN: Good morning, everyone. I hope the sound is okay and you can all hear me, and that everything works.

Greetings from the Council of Europe; in my case, from Spain, where I’m attending a conference on prisons, believe it or not. I’m very happy to be able to share with you some thoughts on the outlook for new technologies and whether existing governance bodies can cope with them.

First, allow me a few words about the Council of Europe as an organization, because I know there is often confusion between the Council of Europe and the European Union, for instance. The Council of Europe was set up over 70 years ago to promote democracy, human rights and the rule of law, and until recently it had 47 Member States. Given the values I just mentioned, which we were set up to promote and defend, it will not surprise you that following Russia’s aggression against Ukraine the Council of Europe expelled the Russian Federation because of its fundamental violations of the Statute of the Council of Europe. We are therefore at 46 Member States.

The cooperation and the values we promote, we have pursued for the past 70 years mainly through legal cooperation. I will explain in a moment how this fits with the new technologies that are, of course, the subject of the discussions at EuroDIG.

The Council of Europe has had a long relationship with EuroDIG over the years, and this year too a number of my colleagues are present, both online and physically in Trieste, where unfortunately I cannot be, much as I would have liked to, I can tell you. They are taking part in a number of workshops and committees on cybercrime, data protection, Internet governance, freedom of expression and other topics.

What do new technologies and the Council of Europe have to do with each other today? I mentioned legal cooperation, and in the Council of Europe we pursue it through treaties, which in the Council of Europe are called conventions. Many of you will have heard of the European Convention on Human Rights, for instance, and its Court; that is one of some 200 conventions we have in the Council of Europe. It was realized many years ago that if we wanted to ensure the protection of human rights, respect for the rule of law and the promotion of democracy, we would need to look at new technologies. Therefore, over 40 years ago the Council of Europe established its Data Protection Convention, which was the first legal instrument of its kind and came at a time when most countries did not have any data protection legislation at all. It was a truly pioneering text: it influenced legislation in many countries and is considered, I think rightly so, the mother or the grandmother, if you like, of the GDPR, the European Union’s regulation on data protection. Today it brings together more than 55 parties, and we work with more than 70 countries, because a number of observers also follow the work on data protection; that is more than half of the countries in the world with any data protection legislation at all.

More than 20 years ago the Council of Europe established the Cybercrime Convention, which to this day remains the only global instrument against cybercrime: it brings together more than 66 states, several others have requested accession and are in the pipeline, if you like, and the parties come from all five continents. Last year the Council of Europe carried out capacity-building activities in over 130 countries. I mention this to demonstrate that the work carried out at the Council of Europe on new technologies goes well beyond the borders of Europe and that the organization has been able to set global standards. It is therefore not surprising that the Council of Europe also decided a couple of years ago to start looking seriously into artificial intelligence and its implications, once again, for human rights, the rule of law and democracy. It was decided to carry out an assessment, which we called a feasibility study, to check whether existing legislation, both national and international, non-binding instruments, the many ethical charters that exist, self-regulation by companies, if you like, and multistakeholder texts such as the Montreal Declaration, were sufficient to ensure that when artificial intelligence is developed and used, our fundamental rights are sufficiently protected. A committee was set up to do this, then the pandemic intervened; nonetheless, at the end of 2020 this committee, which was called the Ad Hoc Committee on Artificial Intelligence, completed its study.

I mention the name not to bore you with acronyms, but in case you would like to look it up.

That committee came to the unanimous conclusion that the existing legal standards and the non-binding standards and ethical charters, important though they were, dealt with certain aspects of AI but were in themselves insufficient to guarantee that the development and implementation of AI, especially by governments, would not interfere with human rights and the rule of law.

Something I should also mention about this committee is that it was not just Member States sitting at the table (at the time we were still 47), but also our Observer States, the United States, Canada, Mexico, Japan and the Holy See, as well as the Government of Israel, which requested to be admitted to these discussions. In addition to the state representatives, we had representatives of NGOs at the table, including youth organizations, and we had industry, because the Council of Europe had established a partnership agreement, if you like, with a number of internet companies, including the most important big ones that you will know, and a series of smaller associations and industry bodies, such as the IEEE, which works on standardization. Some 26 companies and associations of companies in the tech and internet field were represented at the meetings. We also ran an online consultation, and as I said, the unanimous conclusion was that we needed something more. This was brought to the attention of the Member States, who then decided to continue the work and to ask the committee to identify the elements of a possible treaty, or possibly a recommendation; but it was very clear early on that the overwhelming majority of stakeholders, state and non-state participants alike, wanted a binding instrument, a treaty. During 2021 they identified a series of elements that should go into a treaty. These concerned transparency and explainability, so that people dealing with AI understand that they are dealing with a bot; remedies, of course; and meaningful human oversight, to ensure that the use of artificial intelligence would be useful and good.
How necessary this is became apparent notably in a major scandal in my country of origin, the Netherlands, where a scandal erupted regarding child benefits: thousands of people were wrongly identified as having tried to defraud the system and were forced to pay back child benefits they had received. I am speaking here about some 26,000 families. The system wrongly identified them because it was highly biased: it took family names and spelling mistakes made in filling out forms into account in wrongly identifying this huge number of people as having criminally defrauded the system, and this led to real tragedy. These people were mostly on very low incomes; they had to pay back huge sums of money over a short period of time, several committed suicide, families were broken up, and 1,600 children were taken away from their families and to this day have not been returned. The administrative courts did not sufficiently assess the issues, and the cases brought by the victims were dismissed basically with the argument that the computer had said it was okay: it was an AI-assisted system that had made this identification, and the judiciary wrongly considered this objective and impartial and therefore not in need of assessment. Since then, the President of the highest administrative tribunal in the Netherlands has publicly apologized for this, the government fell over the scandal, and the parliament presented a report called “Unprecedented Injustice”. Many things went wrong, but it all originated in the use of an artificial intelligence system that was not properly tested, where the right boxes were not ticked, where there was no human oversight and no transparency, and where the data that went into the system were highly biased.

I’m telling you this story to illustrate that things can indeed go wrong when governments implement AI systems. The Council of Europe has a consultative body made up of constitutional lawyers which provides advice to governments and parliaments. The Netherlands parliament requested the opinion of that body on this particular case, and it produced a report, which you may find interesting, available on the website of the Council of Europe; it was presented in December 2021. It underlines a number of the shortcomings I mentioned and also points out, very interestingly, that many of the things that went wrong in this case happened to go wrong in the Netherlands but could easily have gone wrong in other countries as well, and may be going wrong as we speak.

It is really urgent that standards are adopted, preferably at the international level, to ensure that this sort of tragedy cannot repeat itself. When the elements I mentioned earlier, which could go into a future treaty that could prevent such tragedies, were brought to the attention of the governments, a new Committee on Artificial Intelligence was set up, and it is now negotiating the legal instrument. I very much hope, and I can tell you, that it will be a binding legal instrument; the overwhelming majority, virtually all Member States, are now in favor of a treaty. There was a concern before, and frankly there still is a bit of a concern, that regulation may hamper innovation. But the industry that sits at the table is very much in favor of good regulation, because good regulation also leads to good innovation: it provides a level playing field for companies, because the same rules apply to everyone, and it enhances trust, namely that what is being rolled out and produced meets certain standards. To give you an example that regulation and innovation can go hand in hand, take the pharmaceutical industry: it is one of the most tightly regulated industries in the world, and yet it came up with a vaccine against COVID in record time. Innovation and regulation can certainly go hand in hand.

The work started in April of this year. The committee has a mandate to produce the text by November 2023, which is of course very, very fast for a convention, for a treaty; in the United Nations context, treaty negotiations take up to ten years, if not longer. Basically we have slightly over a year to do so, but we also have the preliminary work that has been done: the ingredients of the dish, if you like, have been identified, and now the dish needs to be cooked. There is a difference between identifying what should potentially go into a treaty and agreeing the exact wording of something that is binding law on states.

This is a very exciting period, and it will be an intense one. There may be questions on whether we are doing this in isolation, and what about the other international organizations.

>> NADIA TJAHJA: We have about ten minutes for questions. Are there any questions from the room? Please come forward to the mic. For those online, please raise your hand and I’ll be happy to show you on the screen so that you can give your intervention.

>> JAN KLEIJSSEN: May I finish with a final remark; I was just coming to it. The Council of Europe works closely with other international organizations, such as the European Union, which is working on the AI Act, complementary to our work and dealing with the market issues, and we work closely with other organizations on policy aspects of AI, though not on the specific human rights aspect that is the Council of Europe’s focus. As for the name of the advisory committee I mentioned: it is called the Venice Commission, because it was established in Venice, and you will find it on the Council of Europe website.

Thank you.

>> NADIA TJAHJA: I see there is someone who has come to the microphone.

>> It is a pity you’re not with us today. With your last phrase you spoiled my question, so I will reformulate it in a different way. It is interesting to know that you’re coordinating efforts with the other initiatives working in the same field, even with different facets. It is important for me to understand how this will work, for instance, in the context of the digital cooperation framework that the United Nations is bringing ahead. Will this establish a close cooperation with them?

>> JAN KLEIJSSEN: Thank you very much.

Yes, we’re very much reaching out to others; most international organizations are already observers to this new committee, and I can reassure you that we will take other initiatives on board and will be inspired by what others do. We’re not there to regulate a technology, and we certainly do not want to hamper innovation. In fact, our mandate says that whatever we do should be conducive to innovation, and we have strict instructions from the governments of the 46 Member States to take into account what is happening elsewhere. That, of course, also includes the UN.

>> You mean the Digital Compact? You will cooperate with the Digital Compact?

>> JAN KLEIJSSEN: We are already: several of us, myself and others, regularly speak at UN events, and we will certainly take their work into account; we hope, of course, that they will take ours into account as well.

>> Thank you.

>> NADIA TJAHJA: Go ahead.

>> Good morning. I’m here in front of you in my capacity as part of YOUthDIG.

I have two questions, but I’ll allow myself to start with one on liability and accountability. You mentioned the case of the Netherlands, which is quite disappointing in how the technology had such a strong impact on society. My question is: despite all of the efforts, and despite these being quite substantive and exhaustive processes to provide for individuals, how can youth make sure that the technologies being placed on the market, and the efforts you have been describing today, can actually be tangible? How can we trust that the damage done in the Netherlands will not repeat itself? As individuals we require that in order to give that kind of trust, and I’m asking this because liability and accountability are going to be part of our message as well; we want to keep that in mind and really observe this, especially because different Member States are implementing artificial intelligence and there is this cross-border element: a company may be in one state and the damage in another. How is this going to function in practice? My second question is very short: what is the opinion of the committee you mentioned with regard to the European Union act on artificial intelligence? Thank you.

>> JAN KLEIJSSEN: Thank you very much for these questions. Yes.

First of all, on the Netherlands: I would go beyond disappointed, it was an absolute disaster. As I said, just to take the children: 1,600 children taken away from their families and still not brought back, people having killed themselves, thousands of people forced into bankruptcy. It was an absolute disaster. How can we ensure this doesn’t happen again?

The committee I mentioned earlier was the predecessor committee; there have been two committees. The new one has a very simple name, the Committee on Artificial Intelligence, or CAI, if you want to Google it or use another search engine. We’re trying to establish basic rules that governments should comply with, to ensure that there are a number of guarantees regarding the development and implementation of artificial intelligence. I mention development because the idea is very much to ensure that there will be human rights by design, from the outset: that when companies start to develop artificial intelligence, they take human rights considerations into account from the very beginning. A number of big companies are fortunately doing this already, and since they’re at the table, we seize every opportunity to stress this point. I would also like to have it in the treaty, that governments should require this. When governments use artificial intelligence, they should, if you like, tick a number of boxes: the data should be checked as much as possible, and there should be guarantees. Let’s not be naive, we know all data is biased; there will be no perfect AI system. There will be bias, because we’re biased as human beings and the data we produce is biased. It should be recognized, because if you are aware of the bias, you interpret the results differently.

There should also be, of course, awareness raising: people should be aware when they’re dealing with an artificial intelligence system. That already exists, by the way: the State of California in the United States has a state law requiring companies to inform customers when they deal with a bot, and that’s a very interesting model for us. Of course, we would like to see that extended to all European citizens, so that they’re informed. There should be oversight and oversight bodies, and there should be independent, effective remedies when something goes wrong. When you’re subject to a decision made by a machine, you should have the possibility to appeal to a human being who can review it. There is a series of other guarantees; it would take me too long to list everything, but you will find all of the details on the website of the Council of Europe. And with regard to youth: you are very much there. Youth organizations are present at the table, so that your specific concerns are taken into account as well.

For the second question, on the European Union: the European Commission is also represented at the CAI, and we follow the work of the European Parliament. As I mentioned before, the work is complementary. The approach of the European Union is a market approach, valid and important, focused on products; the Council of Europe is more about the process: when the products are used, what guarantees should there be, especially when they are used by governments, and also, when governments allow the private sector to use particular technologies, what guarantees should there be to prevent abuse.

>> Thank you very much.

>> NADIA TJAHJA: Thank you.

We’re coming to the final points. I would kindly ask, if you don’t mind, that you keep your points short; we’ll take both questions.

Please allow each other the time to give your interventions, and time for him to respond.

Thank you.

>> Thank you very much. I’ll be very short. Nigel, good to see you again. It’s a shame you’re not here.

Well, two points. Firstly, the Council of Europe, as you said, has done amazing work in a number of areas, and certainly the UK government considers it a privilege to be involved in some of your work, not least on data protection issues but also on broader internet issues; the work you did in some of the early years on Internet governance was very important indeed.

The question I had, looking forward at the Internet governance agenda for the next few years: obviously nothing is certain at all, but we have the review of the WSIS mandate and the discussions at the United Nations General Assembly, and I wondered whether that committee (is it still the CDMSI? I have probably got the initials wrong after so many years) may in due course have the WSIS+20 review on its agenda, recalling that you have considered these issues in the past.

Thank you very much. Good to see you.

>> Thank you for your great presentation.

I have got one question. Often data is seen as the only source of bias in AI, but this is not the case. When you train a system without giving it rules, you let the system develop stereotypes: training a system is basically generating stereotypes. When you train a system you also need randomness as an input; if you train the system twice with the same data, you get different results. Focusing on the data input suggests that perfect data, which is of course not possible, would generate a perfect system; that is not true. If you do not give rules, you will get stereotypes, so the answer is that we have to either live with the stereotypes or not use those trained systems.

>> JAN KLEIJSSEN: I’ll reply to both questions in the order they were put. First, yes, very good to see you too; I regret not being there in person.

We recently adopted a digital agenda that addresses a number of the concerns you raised on Internet governance in general, and it is still the same committee, you’re quite right, the Steering Committee on the Information Society, which will look at this. I can assure you that we’re following the issues you mentioned, and the UK is in fact actively participating in the activities of the Council of Europe, very much so in those relating to the internet.

I hope to have an opportunity to talk on these issues.

On the second question: yes, bias does not come only from data, that’s clear. The training of systems is a concern, also because of its environmental impact. Training systems can be very, very resource intensive, and I’m thinking here of natural resources: the ecological footprint of training, as you all know better than I do, can be huge. There was an interesting seminar this week in Germany on the environmental impacts of artificial intelligence, on whether in some cases we should perhaps refrain from using AI when other alternatives could be just as effective, and on whether continuous training to achieve even greater accuracy is always necessary.

In the Dutch case I mentioned, a lot of things went wrong; for other uses, basic training may be sufficient. You are probably aware that algorithms, of course, come with their difficulties; an interesting study done by AlgorithmWatch in Austria regarding the central employment agency demonstrated the shortcomings there. Again, it has to be a balance.

In any case, and this is my final remark, it demonstrates the need for human competence and human oversight. Artificial intelligence, like other technology, is a very good servant but a poor master, and it is extremely important that in any use of artificial intelligence there is a possibility of human intervention.

>> Would public access obligation be –

>> NADIA TJAHJA: I’m sorry, we cannot go further due to lack of time. I am very appreciative of the keynote speaker, Jan Kleijssen, and of course of all of you for your kind attention, because having these conversations and continuing to ask these questions is what allows us to move forward.

Jan Kleijssen, thank you for joining us from afar; it was interesting to hear and to see you. We hope to welcome you in person at EuroDIG next time.

Thank you very much.

>> JAN KLEIJSSEN: Thank you very much.

Bye to all.