Tobias Bornakke – Keynote 06 2023

21 June 2023 | 10:00 EEST | Main auditorium | Video recording | Transcript
Consolidated programme 2023 overview / Tobias Bornakke, Keynote

Keynote slideshow: https://ogtal.sharepoint.com/:b:/s/ogtal-project/Een0bV0XvNdJloJqd7zI-fQBe_323_LqxN9bracErgmfTA?e=C5BSDh

About Tobias Bornakke

Tobias Bornakke, chairman of the Nordic Think Tank for Tech and Democracy, Denmark, is a researcher and co-founder of Analyse & Tal. Tobias holds a PhD in digital methods and has led several studies on the democratic debate on social media across the Nordic countries.

Find out more about the Nordic Think Tank for Tech and Democracy on their website.

Recommendations by the Nordic Think Tank for Tech and Democracy at https://pub.norden.org/nord2023-004/.

Video record

https://youtu.be/BrmH4NTvnAY?t=1707

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> NADIA TJAHJA: Thank you very much for your keynote.

I would like to direct your attention to our online participant, who will deliver the second keynote: Tobias Bornakke, Chairman of the Nordic Think Tank for Tech and Democracy. Please give him a warm welcome.

>> TOBIAS BORNAKKE: Hey, everybody. I will just try to share my screen. This is always an exciting moment when you find out if people can hear you and see your screen.

Are you seeing anything now?

>> NADIA TJAHJA: We can see your screen now.

>> TOBIAS BORNAKKE: Thank you.

Thank you for inviting me here today. I’m afraid the invitation came too late for me to be there physically, but I would really have liked to discuss many things with you. I have heard that a lot of people will take some of our ideas on board.

My name is Tobias Bornakke, and I’m the co-founder of Analyse & Tal, a company where, together with 30 colleagues, I study misinformation in the Nordic region. Last year we started the Nordic Think Tank for Tech and Democracy, which looks at how governments should protect democracy against such threats. It involved researchers from all over the Nordic countries. Last month we presented the fruit of our work to the Ministers: 11 recommendations for how to unite the Nordic region, and we are now looking at how the Nordic Council of Ministers wants to proceed. Already, however, they have announced that they will implement five of the recommendations, allocating around 6 million euros to the implementation.

So, with the hope of inspiring other countries to join, and given the limited timeframe today, I will present the three most important recommendations, which, with some minor adjustments, are now being implemented by the governments.

Our first, and perhaps most interesting, recommendation concerns digital law enforcement. As most of you probably know, we now have a constitution of sorts for how to protect our digital public space: the Digital Services Act, the DSA. Going forward, we expect the DSA to become as critical to our democratic conversation as the GDPR has become for privacy. Certainly not perfect, the DSA nevertheless represents our first serious legal attempt to address some very diverse problems in the democratic debate: the spread of illegal content online, the upholding of our digital freedom of expression, discrimination against select groups, the spread of mis- and disinformation, and content potentially harmful to children and youth. So why do we need a Nordic think tank to develop recommendations if almost all tech legislation covering the debate is now handled by the E.U.? At the moment, high-level E.U. parliamentarians and civil servants close to the process are nervous about whether this act will be efficiently enforced. In other words, if we want to make sure that the DSA is enforced in our part of the world, we need to assist the E.U. Commission in discovering our problems.

Our first recommendation is therefore for the Nordic countries to unite and establish a centre for digital democracy that can identify the most vital cases violating E.U. law and make sure that these cases reach the Commission. This is to ensure that the Nordic countries have a strong voice in deciding what degree of risk we are willing to accept online, rather than leaving the decision to others.

A reasonable question here would be: why a Nordic tech region? In Denmark we have made several attempts over the last few years to actually change the path of the big tech giants. Our conclusion is that alone we are too small and too optimistic; we get too deep into one case and forget the big picture.

We have also seen some really good work from the E.U., which has been setting the ground rules, but it is also really slow, and our democratic challenges are too different. Basically, our analysis is that while we share something with the rest of the E.U., there are also real differences between the democratic problems in Denmark and in Bulgaria.

Uniting 28 million citizens who share similar democratic values may, we believe, be just the right scale.

We also see this centre as a way to specialize and build expertise within areas of great importance to us. One such area is children’s wellbeing, to which the DSA also gives attention. The research in this area is not 100% conclusive, but we are seeing an extremely worrying development, especially in young girls’ mental wellbeing, across the entire Western world, and the numbers, to be clear, are chilling.

In the think tank we believe that the increasing evidence of a negative impact will reach a point where, based on the precautionary principle, we need to act. We recommend that the countries work towards establishing obligatory certification of platforms whose content is acceptable for children. Today, all of the big social media platforms have age limits on their usage, but these are not being enforced, and we want to force the platforms to actually enforce the limits that they are setting themselves.

Further, we recommend working towards establishing obligatory parental controls for minors on social media platforms. We suggest that all underage accounts on social media platforms should come with a default time limit, for example an hour, that can only be increased by a parent.

Through such a mechanism we move some responsibility away from both the children and the parents, while not blocking children’s right to information, since the limit would be lifted should the parent think that they need more time on TikTok.

A few months ago we saw the election in Brazil, and like in any other election, disinformation played a crucial role on both sides of the struggle. However, one thing had changed: when researchers like me, working with disinformation, found big networks sharing disinformation about the election, they reported them to Twitter, but this time the reports came back because the email address had been closed down. Basically, Musk had fired the Brazil team before the election. Our thinking is that this is not acceptable; we cannot trust and rely on the tech giants to decide for themselves what we need to protect our democratic conversation. Independent researchers have been blocked from the platforms, but the E.U.’s DSA now requires that they hand over data: if a researcher wants to investigate whether a platform spreads illegal content, limits freedom of expression, discriminates against select groups, or spreads misinformation harmful to minors, the giants must now allow access to the necessary data to study these phenomena. This is truly a milestone.

The members of the think tank fear that the tech giants will try to make it as difficult as possible to get insight into these problems, through bureaucratic processes and technical requirements.

We therefore recommend that a Nordic office be established to support our researchers in the bureaucratic struggle to get access to the data that they now have a rightful claim to. Such an office also constitutes a cost reduction: it will help secure hundreds of current Nordic projects, and it is one of the most efficient tools we have to keep an eye on the platforms.

So those are the three recommendations I have just discussed and presented; they have all been followed up and will now be implemented.

One last thing I want to share here today, because it is of great importance, is our recommendation regarding generative AI.

We are quite sure that AI will revolutionize our lives. However, a dangerous side of the technology is its potential use as a weapon in the ongoing information war; what we are facing is a revolution similar to replacing the front-loaded rifle with the machine gun. We are already witnessing the first cases outside the Nordic countries, where fake accounts spread hate speech and disinformation towards journalists and others while hiding behind AI-generated profile images and posting AI-generated content that a citizen can no longer identify as fake. Just to stress the seriousness of this problem and to underline that this is not just another attempt to join the AI hype: this recommendation was developed long before the general AI hype hit the news. Those of us who have spent our working lives fighting Russian campaigns against our democratic conversation and elections look at this new enemy with disbelief. We therefore recommend setting up a provisional Nordic taskforce that should follow the technological and global developments around AI and disinformation in the decisive upcoming years and, on that background, continuously propose measures to protect our public conversation.

So here I have a picture from a movie. The movie is not very good, I don’t recommend you watch it, but it is about a world that is flooded, where humans live in small areas high enough not to be flooded. I believe this is quite a good image of what the future is going to be like. We will have so much synthetic fake data generated by AI flooding our Internet that we will have to find ways of carving out some areas where the democratic discussion can exist, areas where we decide that only humans may participate. And this is something that will take time to agree on, since it also breaks with many of our values about a free, open Internet.

That is what I decided to bring today. I was told that we may have time for one or two questions; if so, I will stop here. Thank you.

>> NADIA TJAHJA: Thank you very much for your keynote.

Tobias has kindly offered to take one or two questions, so please do come forward.

Please state your name and affiliation.

>> Hello. I’m from the University of Warsaw, and I’m a researcher of online disinformation, so I’m interested in how you monitor cases of disinformation produced by AI. We know that such detection systems are not perfect at showing which items are generated by AI, and with ChatGPT and similar tools, now and in the future, this may be a serious problem. What is your answer to these issues, and how can we prevent people from following and spreading such information in the online world? Thank you.

>> TOBIAS BORNAKKE: Thank you. That was an interesting question.

My answer, at present, is that I don’t believe that we can. I’m really skeptical that we will be able to detect AI-generated content in the future. We have made some studies of the new detection algorithms that the tech giants are putting out to verify whether content is made by AI, and most of them work about as well as simply flipping a coin and guessing. So we are really skeptical. For the last five years we have worked professionally with detecting disinformation, we have a lot of tools to detect it, and we are seeing these become obsolete. This is not a future I want us to go into, just to stress that, but I am afraid we may end up in a future where you simply have to verify that you are a human to be on some parts of the Internet. Otherwise, we won’t have a democratic conversation but one flooded by AIs participating and trying to move us in a certain direction.

>> NADIA TJAHJA: Thank you very much. There is another question from the floor. Please.

>> I’m a co-Secretary of the Swiss IGF, which we held last week. In our messages we concluded, in part, that one of the important things about countering the threat of AI is simply educating the population. Did you also address this particular point: how to teach people, the population, what it is, before taking counter measures?

>> TOBIAS BORNAKKE: Yes. One of our recommendations is to step up digital literacy; that is a recommendation I didn’t have time to talk about today. We also have a recommendation regarding fake news detection and fact checking. I certainly believe that we can continue to move in this direction; it is already happening and has been happening for many years. But I also believe that we are now witnessing a change, where the possibilities of AI, the weapons of our adversaries, are so strong that I am really skeptical about whether teaching my young daughter how to detect AI, how to detect disinformation, will work in the future.

Maybe I am being too depressing, too pessimistic, and I’m sorry about that. But as a person who has worked with this for many, many years, it seems like we are losing the battle.

>> NADIA TJAHJA: Thank you very much.

I fear that we are nearing the end of our time. Perhaps we can have one more question, if anybody wants to ask one.

Is there anything happening online?

>> No. Not at the moment.

>> NADIA TJAHJA: There are no questions online.

So thank you so much, Tobias Bornakke, for joining us here. We have read a little bit about the recommendations, and perhaps you can also share them with us so that we can add them to the EuroDIG Wiki, so that those asking about the recommendations, as well as about the fake news one you mentioned, can have another read about them. I really appreciate you answering these questions.

Thank you very much. Please give him a round of applause.