Trustworthy AI: Large Language Models for Children and Education – WS 03 2023


20 June 2023 | 15:00 - 16:00 EEST | Auditorium A1 | Video recording | Transcript
Consolidated programme 2023 overview / Workshop 3

Proposals: #63 #65

You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published at the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.

Kindly note that it may take a while until the Org Team is formed and starts working.

To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.

Session teaser

With ChatGPT, progress in AI is visible to all. These Large Language Models (LLMs) are trained on large volumes of documents, which requires substantial computing resources and access to proprietary information.

Who will control these systems? How can we ensure that they are reliable and unbiased? How will the current high energy demand for training develop?

Session description

LLMs are a type of statistical learning method that can process and generate human-like language from models pre-trained on large text corpora. The availability of LLMs like ChatGPT has ushered in a new phase of AI pervasion in society and of humans interacting with computers, with many open questions and problems.
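To make the idea of a "statistical learning method" concrete, here is a deliberately tiny sketch: a toy bigram model over an invented corpus. Production LLMs such as ChatGPT are built very differently (transformer neural networks over subword tokens and vastly larger corpora), so this only illustrates the underlying principle of predicting the next token from observed co-occurrence statistics, not any real system.

```python
import random
from collections import defaultdict

# A tiny corpus standing in for the "large text corpora" mentioned above.
corpus = (
    "children learn languages quickly . "
    "children ask many questions . "
    "language models answer many questions . "
    "language models learn patterns from text ."
)
tokens = corpus.split()

# Count how often each token is followed by each other token (bigram statistics).
follows = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1

def generate(start, length=10):
    """Sample a continuation word by word, weighted by observed co-occurrences."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        out.append(nxt)
    return " ".join(out)

print(generate("children"))
# e.g. "children ask many questions . language models learn patterns from text"
# Fluent-looking output produced from token statistics alone, not from stored knowledge.
```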

This session focuses on the topic of large language models (LLMs) and the consequences of their use by young people and children as well as in the sphere of education. It will draw on experiences and points of view of academia, private industry, civil society, and national and international regulators to discuss the capabilities and limitations resulting from this architecture as well as specific issues regarding the regulation, design, development and use of applications based on such models.

Format

Please try out new interactive formats. EuroDIG is about dialogue not about statements, presentations and speeches. Workshops should not be organised as a small plenary.

Further reading

Links to relevant websites, declarations, books, documents. Please note we cannot offer web space, so only links to external resources are possible. Example for an external link: Main page of EuroDIG

People

Please provide name and institution for all people you list here.

SME

  • Desara Dushi
  • Jörn Erbguth
  • Minda Moreira

The Subject Matter Experts (SME) support the programme planning process throughout the year and work closely with the Secretariat. They give advice on the topics that correspond to their expertise, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and monitor the complete programme to avoid repetition.

Focal Point

  • Vadim Pak

Focal Points take over the responsibility and lead of the session organisation. They work in close cooperation with the respective Subject Matter Expert (SME) and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.

Organising Team (Org Team)

List Org Team members here as they sign up.

  • Marcel Krummenauer
  • Amali De Silva-Mitchell
  • Concettina Cassa
  • Thomas Schneider

The Org Team is a group of people shaping the session. Org Teams are open and every interested individual can become a member by subscribing to the mailing list.

Key Participants

Key Participants are experts willing to provide their knowledge during a session – not necessarily on stage. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder-balanced dialogue, also considering gender and geographical balance. Please provide short CVs of the Key Participants involved in your session at the Wiki or link to another source.

  • Ms Morgan Dee, Director of AI and Data Science at EDUCATE Ventures Research.
  • An expert from the 5Rights foundation.
  • Mr Guido Scorza, board member of the Italian Data Protection Authority.
  • Mr Jascha Bareis, a researcher at the Karlsruhe Institute of Technology.

Moderator

  • Mr Thomas Schneider, Head of International Affairs in the Federal Office of Communication (Switzerland) and the chair of the Committee on Artificial Intelligence of the Council of Europe.

The moderator is the facilitator of the session at the event. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendants. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide a short CV of the moderator of your session at the Wiki or link to another source.

Remote Moderator

Trained remote moderators will be assigned on the spot by the EuroDIG secretariat to each session.

Biographies of participants:

  • Ms Morgan Dee is the Director of AI and Data Science at EDUCATE Ventures Research. She leads the Data Science team in using data and AI to extract meaningful insights about human learning. She holds a Master's degree in Data Science with AI from The University of Exeter, where she received recognition for her outstanding academic performance. She also trained as a secondary school Physics teacher at the Institute of Education after completing her Master's degree in Astrophysics from The University of Manchester. With a decade of teaching experience spanning across diverse locations including the UK, Nepal, Malawi, Hong Kong, and Japan, Morgan has developed a deep commitment to the ethical use of AI in Education. Her primary goal is to unlock the full potential of every learner, ensuring their growth and success.
  • Jascha Bareis is a researcher at the Institute for Technology Assessment and Systems Analysis (ITAS) at the Karlsruhe Institute of Technology (KIT) and Associate Researcher at the Alexander von Humboldt Institute for Internet and Society (HIIG), Berlin. He approaches the topics of AI Regulation and Algorithmic Governance by merging Political Science, Media Studies and Science & Technology Studies. He worked at ITAS for the AI policy advisory project GOAL, “Governance by and through algorithms”, commissioned by the German Federal Ministry of Education and Research. Currently, he is involved in the project “Social trust in learning systems” (GVLS) at ITAS.
  • Andrea Tognoni is Head of EU Affairs at the 5Rights Foundation, focusing on advancing children’s rights in the digital environment through European policies, notably the AI Act, the implementation of GDPR and the Digital Services Act, consumers’ law and standardisation processes. Formerly leading the government affairs practice at a global consultancy, he has worked for the EU Delegation to the UN and other international organisations in Geneva, as well as with global NGOs and think tanks on human rights issues. He holds a J.D. from Barcelona University, and an LL.M in Public International Law from Leiden University.

Current discussion, conference calls, schedules and minutes

See the discussion tab on the upper left side of this page. Please use this page to publish:

  • dates for virtual meetings or coordination calls
  • short summary of calls or email exchange

Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.

Messages

Rapporteur: Francesco Vecchi, United Nations University – CRIS

  1. Large Language Models (LLMs):
    Large Language Models like ChatGPT have a revolutionary potential for customer services, translation, and human-machine communication, but they do not produce knowledge. Rather, they simply map statistical relationships between linguistic tokens by identifying patterns and finding correlations. AI-generated texts are always fictional, the result of an easily biased statistical equation. Regulation to protect the most fragile users is certainly needed, but it must be gradual and focused on core principles rather than on quickly outdated technologies.
  2. Italian Data Protection Authority:
    The Italian Data Protection Authority stopped the use of ChatGPT in Italy because it believes that the technology is not mature enough, that the current AI market is dangerously monopolistic, and that it is evolving faster than regulation (e.g. the EU regulation on AI is going in the right direction, but it will not be implemented before 2025). Finally, children need special protection and should be considered legally unable to enter into any contract that exchanges their personal data for a digital service.
  3. LLMs in education:
    LLMs can remarkably improve reading, writing, analytical skills and the production of educational content while providing more personalised learning options. Nevertheless, children are less able to distinguish reality from AI-generated content; LLMs can cause overexposure to biases and disinformation; relational drawbacks such as depression, addiction, and anxiety can take place; and plagiarism, truth, and information quality remain serious issues. Therefore, regulation must be focused on putting children’s rights at the center, by spreading digital literacy among children, parents, and teachers, and entailing legal responsibility for the design, the outcome, and the oversight of the system.

Video record

https://youtu.be/K6TOwpLm4e8

Transcript

Provided by: Caption First, Inc., P.O. Box 3066, Monument, CO 80132, Phone: +001-719-481-9835, www.captionfirst.com


This text, document, or file is based on live transcription. Communication Access Realtime Translation (CART), captioning, and/or live transcription are provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. This text, document, or file is not to be distributed or used in any way that may violate copyright law.


>> THOMAS SCHNEIDER: Hello, everybody in the room, and also in the world outside this room. Welcome to Workshop 3 of this year’s EuroDIG, which is about AI, and in particular about trustworthy AI with regard to large language models for children and education. My name is Thomas Schneider. I’m working for the Swiss government, and I’m currently chairing the Council of Europe’s committee that is negotiating the first binding convention on AI. This session has been prepared with the help of the Secretariat of the Council of Europe and a number of other persons and institutions that work in the field of AI.

So as we all know by now, applications like ChatGPT and other large language models are a serious technological advancement in the field of AI and have played a significant role in the ongoing, so-called AI revolution.

Their importance lies in their ability to understand and generate human language, making them an invaluable tool for various applications. This has the potential to revolutionize not only customer services and chatbots, but also language translation, content generation and probably more things to come in the future.

By providing a user-friendly conversational experience, they enable individuals to interact with machines more naturally, and this, of course, opens up new possibilities for human-computer interaction. In today’s session, we’ll try to look at large language models from the point of view of their advantages and limitations when it comes to their use by children and young people, as well as in the sphere of education, which is not only an important but also a sensitive area for many of us.

We have assembled a cool panel with vast experience in the field. Given the short time that we have, let me already start and not lose more time, and give the floor to Mr. Jascha Bareis, who is a researcher at the Institute for Technology Assessment and Systems Analysis at the Karlsruhe Institute of Technology, and associate researcher at the Alexander von Humboldt Institute for Internet and Society.

Please explain how language models work.

>> JASCHA BAREIS: Thank you very much. First of all, can you hear me well?

>> THOMAS SCHNEIDER: Yes, we can hear you well.

>> JASCHA BAREIS: Perfect! Okay. Good.

Then – yes?

Oh, good.

I will then try to explain in very plain language how large language models function and what they are, as a background. All of us have heard about ChatGPT and GPT-4, which are part of the family of large language models, and the most important thing is actually already in the name: they are large language models and not large knowledge models. In this short presentation about the functionality of these large language models I want to focus on this distinction. Right now, there’s lots of confusion going on about how large language models are used.

This distinction is of great relevance and explains some of the weaknesses and strengths of these models, which can help us to see how we can use them in elementary education, for example.

So contrary to what a layperson might assume, large language models don’t store knowledge. It’s easy to believe that ChatGPT, if you ask it something, has Wikipedia articles, books and website content at hand and uses this knowledge to provide an answer, but this is actually wrong.

You have probably heard or read many times that ChatGPT has a knowledge cut-off at the level of 2021, but ChatGPT is actually not based on the knowledge of the Internet of 2021; it maps the statistical relationships of the tokens – in that case, the words or word components – with which it was trained.

So if you look into detail: what do these large language models do? How do they function? Well, they are based on statistical models. What they do is identify patterns in language, for example grammatical patterns, and out of these patterns they create rules, and out of these rules they can rearrange language. Let me give you two very plain, short examples to make clear what kind of language we are dealing with.

You may have heard recently that large language models hallucinate, so sometimes they do state false facts. They can say, for example, that people live on the moon, and large language models also have great difficulty calculating. For example, ChatGPT cannot sum up the numbers 245 plus 821, and even GPT-4, which is the most powerful large language model at the moment, cannot create a list of words where every third letter is a “d.”

And these are actually tasks elementary school kids can do. But it’s not what the models were trained for. Large language models have no understanding of what a number or the letter D is. This is very important to comprehend. What they do care about, and what they do comprehend, are the relationships these components – the letter D or a number – can have.

To give you a precise example: if large language models analyze the net, they often come across tokens indicating that a price rises, so inflation is high; then they can see the correlation with other news, that people cannot afford goods anymore, that the central bank increases the interest rate, et cetera.

So we see here a model emerges: if the price rises, then ChatGPT can analyze, oh, the central bank is mentioned a lot and interest rates rise. And they perform astonishingly well, they create flawless text. Why is this the case? Because grammar, humor or even a romantic literature piece from the 19th century has a pattern, and large language models can understand this pattern.

But let’s be clear: the text it creates is always fictional. It is the result of a statistical equation. And that’s why, coming to the end, the problems we face with ChatGPT are not so new, actually. They are associated with all the ethical risks that we have been discussing at the EU level, and also in science, for other deep neural networks: among other things, data bias, user surveillance, and user influence through nudging, et cetera.

And in the future, since these are large language models and not knowledge models, I think we are going to see them much more in communication and in user interfaces – so in translation between humans, and between machines and humans – because there they perform surprisingly well.

I hope I could give you a simple layperson’s understanding of how they function, and I hope we can take this as a grounding for the upcoming debate to see how these LLMs could be used in an educational context. Thank you very much.
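A minimal sketch of the token-association idea described above, reusing the price/inflation example: a toy co-occurrence count over a handful of invented sentences. Real LLMs use transformer networks trained on billions of subword tokens, so this illustrates only the principle that associations emerge from text statistics alone, without any notion of what inflation actually is.

```python
from collections import Counter
from itertools import combinations

# Toy "news" sentences standing in for the web text an LLM is trained on.
sentences = [
    "prices rise and inflation is high",
    "inflation is high so the central bank raises interest rates",
    "prices rise and people cannot afford goods",
    "the central bank raises interest rates to fight inflation",
    "the weather is sunny and people go to the beach",
]

# Count how often two words appear in the same sentence (co-occurrence).
pair_counts = Counter()
for s in sentences:
    for a, b in combinations(sorted(set(s.split())), 2):
        pair_counts[(a, b)] += 1

# Words most strongly associated with "inflation" in this toy corpus.
related = [(pair, n) for pair, n in pair_counts.items() if "inflation" in pair]
for pair, n in sorted(related, key=lambda kv: -kv[1])[:5]:
    print(pair, n)
# The counts link "inflation" to "high", "rates", "bank", etc. purely from
# counting -- the program has no concept of what inflation means.
```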

>> THOMAS SCHNEIDER: Thank you very much, Jascha, for this very useful explanation and, yeah. That helps us to understand what we are dealing with and what we are not dealing with.

So next on the list of speakers would be Mr. Guido Scorza. He is a member of the board of the Italian Data Protection Authority and was directly involved in the recent decisions of the Italian DPA regarding the use of ChatGPT. Unfortunately, an urgent matter prevented him from participating, but we have received a video from him in which he makes his points, so that we can still have his input, because this is, of course, a very interesting case.

I would like to ask the techie team to start the video.

>> GUIDO SCORZA: Good afternoon. Let me apologize for not being able to attend this meeting in person; I am in an extraordinary plenary session of our board. I will therefore limit myself to very few remarks that I hope will be useful to the discussion. The first is that I personally believe that the market is not a laboratory and people are not guinea pigs, with the consequence that technologies that can have a considerable impact on people should not be tested on the market before they reach an adequate level of maturity.

This is one of the main reasons that led us a few months ago to take urgent action against OpenAI, which in my opinion is confusing the market with a research laboratory.

The second consideration I would like to share with you is that we cannot leave it to the market, especially in the case of artificial intelligence, to regulate the market. In general, because there are too many rights and freedoms, even fundamental ones, at stake; and in particular, because the market in question is a monopolistic market, so leaving the market to regulate itself would mean leaving this task to five or six corporations.

The main problem that we face, and my concern, is how to regulate a market that rises faster than the regulation. With the European regulation on artificial intelligence we are doing, in my opinion, very well in terms of timing, but probably we will still come too late: those rules, in the best scenario, I think will not be with us before 2025.

The question we have to face is what we do between now and 2025, having made clear, as I mentioned, that we cannot leave the regulation of artificial intelligence to the market. The best answer that I can find is that we have to try to apply the rules we already have to artificial intelligence as we wait for the new rules. Honestly speaking, I think one of the most telling examples of applying existing rules to artificial intelligence is probably the ChatGPT case.

Let me say one thing about children. They are among the most fragile users on whom artificial intelligence will fall, and they need special protection. They should be kept away, through appropriate age and identity verification solutions, from platforms and services that are not designed for them, and they should be considered legally unable to enter into any kind of contract in which they exchange their personal data for a digital service.

Again, many apologies for not being there with you. And thank you.

I really thank you for the invitation. See you next time.

>> THOMAS SCHNEIDER: So thank you to Mr. Scorza for the short, precise and clear points that he makes. I’m already looking forward to your views on the three or four points that he is making. With that, let’s move on to Andrea Tognoni, with the 5Rights Foundation. So Andrea, can you please shortly present some of the strengths and weaknesses of LLM-based applications from your point of view and from your experience?

>> ANDREA TOGNONI: Thank you. Thank you, Thomas. I was about to say thank you, “chair” but that’s out of habit.

(Laughter).

Thank you, everyone, for attending and inviting us, of course. As said, I will say a couple of words on LLMs and children’s rights. Firstly, LLMs are already here; some of the benefits, which I think we will talk more about in the rest of the session, are often still potential ones.

And so, sadly, are the risks, which for children are really specific and higher than for adult users, as their brains and bodies are developing and their cognitive and relational capacities and their autonomy are evolving.

As was said already, there are several risks. Some may already be known, exacerbations of existing risks for children online; some are a bit more specific. Broadly speaking, it is on the one hand about how LLMs use information, and on the other hand about the difficulty that children may have in understanding this and distinguishing reality from AI-generated content. So we have risks related to data protection, along the lines Mr. Scorza described, and to the use and misuse of LLMs in content and behaviors. And we have risks related to information: we know that LLMs can perpetuate certain biases and disinformation and in a way create overexposure to certain kinds of information, and these reduce learning for children.

We have risks related to human relations: children can mistake an LLM for a teacher, and this can have relational consequences such as depression, addiction and anxiety, as we have seen with the impact of social media on children. And then we have risks specifically related to education, to tools and abilities. We know that LLMs can help develop reading, writing and analytical skills, but also by making things easier and simpler, and so these risks are crucial in educational settings. It is about the veracity and quality of information.

So I think there is a lot at stake with LLMs in ed tech as part of everyday learning for most children.

On the other hand, and I’m sure we will talk about this more, it is not about shutting children out from technology or from its use. Dealing with LLMs early can prepare children, if we accompany them in the process and make sure that they are safe. The problem is that these systems are not created with children at the core of their purpose.

Just one example: some LLM-powered features warn that output may include incorrect content and suggest that users verify it. Children will not even read that information page, and they are certainly not going to act on it. And we could name more examples. So we must focus on designing LLMs with children’s rights at the center, with children’s rights in mind, to prevent these risks.

Digital literacy of parents, children and teachers is, of course, important to manage the challenge. But what we need is not to turn teachers and children into engineers and software developers; rather, we should bring children’s rights to the developers of LLMs and LLM-powered features, because, of course, these are designed by someone with a purpose, and this should entail responsibility for the design, the outcome and the oversight of the system.

So this is a bit of what we try to do at 5Rights: we try to bring children’s rights to regulators and inform regulation, which is not only needed but probably urgently needed.

So that’s my initial thought. What we need are child-centered LLMs and rules, but we will talk about this in the rest of the session. Thank you.

>> THOMAS SCHNEIDER: Thank you very much, Andrea.

(Applause).

We do have one more speaker. Obviously, the use of ChatGPT is something that will have an impact on education and the school system, not just for students who can ask ChatGPT to write an essay for them, but also with regard to tutoring and training, on the basis of the unique interactive possibilities that these large language models offer, which will obviously change the way schools function, the way exams are organized, and so on and so forth.

So let me turn to Ms. Morgan Dee, because she has an expensive – extensive –

(Laughter)

– experience both in education and AI. She leads the data science team at EDUCATE Ventures Research, a company using data and AI to extract meaningful insights about human learning and apply them in real life. So please, Morgan, give us a sense of what AI systems using LLMs are already doing.

>> MORGAN DEE: Thank you so much. Obviously, these sorts of tools are relatively new, but currently what we are seeing is educators using tools such as ChatGPT to quickly create new content for their lessons, such as lesson plans or revision materials or quiz questions for their students, and we are seeing students engaging with these tools, seeing how they might get support for their learning or support for their homework.

So already these sorts of tools based on LLMs are used in the classroom. We are starting to see the adoption of tools such as ChatGPT in ed tech products: they are being embedded in ed tech tools to offer additional functionality, such as virtual tutors or tools for teachers to help with content generation. So that’s already in place, happening now.

Soon, we would expect even more tools to be released. We’re seeing the promise of virtual tutors and more personalized learning options to help support students in their education. So far it seems that the focus, in terms of practical uses, has very much been on saving time for educators or creating more personalized learning support for students.

And of course, there’s the topic of plagiarism and how students should be engaging with these types of tools. What we have been looking at, at EDUCATE Ventures Research, is taking this moment to identify the priorities for education in the future. What we like to think is important is human intelligence: celebrating human intelligence, asking what kinds of skills are unlikely to be replaced by an AI tool, and what we can do in the education system to encourage the development of those sorts of skills in our learners.

Thank you.

>> THOMAS SCHNEIDER: Thank you very much.

So this is an insight into what is going on right now, maybe not in all schools in Europe and the world, but at least in some, and others will follow. I think that raises a number of questions that our education systems and societies will have to deal with, regarding what effect it has if the work of teachers is at least to some extent replaced by machines. At the same time, it offers unique opportunities to provide personalized support and learning methods tailored to special needs and weaknesses, and there’s a huge potential that will be used, but of course it raises a lot of questions.

So with this, we do have half an hour for discussion, and I invite all of you present here in the room, but also those online, to actively participate. I will just throw a few questions into the room and then see how the discussion develops. One thing, I think, is to see what the perspectives are and whether there is a difference regarding the use of LLMs between educators, parents, and children – children, of course, the older they are, the more they may see things, but also young kids have reactions.

So are there differences in perspectives? How should we best use LLMs in education? We have already heard: how can we empower not only children but the whole school system? What do parents need to know about LLMs and their use? What about explainability of AI in this context, and how do we make this nontechnical? And do we need special regulation of LLMs, or do we consider them covered by whatever will be the AI Act and similar legislation? These are just questions that I would like to ask you.

So I think, yes, we have – how does this work with the mics? You just need to speak loud. There’s one room mic that should cover the whole room, but you need to speak loud and we hope that the people online can hear it.

>> AUDIENCE MEMBER: So I have not a question, but rather a comment. Personally, I attend – (Speaker is too far away from the mic to hear).

>> THOMAS SCHNEIDER: Can you speak a little louder, because it’s not –

>> AUDIENCE MEMBER: The universities – (Inaudible).

If we remember, it became natural to Google something that we don’t know. Now we have LLMs that explain things to you in natural language, and we really want to be confident in this, yet in some things these LLMs are wrong. And as the first speaker said, they are language models, not knowledge models. And I see a very dangerous trend that the younger generation feels they don’t have to remember anything or think critically about the information, because the LLM will give them this feeling. And that’s a very frightening point.

So are there any thoughts on that, especially from the speakers? Because you spoke about what LLMs provide, but what about this issue of knowledge simplification?

>> THOMAS SCHNEIDER: Yes, thank you.

We have a few requests from those in the room; those online, please use the raise-hand function.

I see there’s a few things in the chat. So the question is, basically, could you explain –

>> AUDIENCE MEMBER: Where are we going?

>> THOMAS SCHNEIDER: You point to one risk and you question whether people are able to deal with it. You could say that in the past, people also learned to deal with newspapers where journalists may give their own views; for instance, there we introduced the distinction between an article that is supposed to present facts and a comment piece.

Who is next? The lady in the middle.

>> AUDIENCE MEMBER: Actually, I was going to add to that comment. This is precisely a point from a friend of mine who is with our think tank. He explains things in a simple way. He says Google was like surfing: you need to teach students how to surf, but they are still doing the sport themselves.

But with LLMs, it’s as if you are paying money to be taught how to make a cake and they just give you the cake to take away. And there are already some studies saying that the brain is a muscle: if you don’t train the brain, then it’s not going to be ready to run marathons or even 300 meters. And this is the risk.

And actually, there are even studies considering how much better it is for the brain to be trained to write by hand than just to type.

I have to say that my friend, who knows I work for children, always says: Isabel, I will ask them questions and make them learn in such a way that they cannot use ChatGPT, because it would not make sense for them.

So I also think that if somebody can point to tangible good uses, I will be happy to hear them.

>> THOMAS SCHNEIDER: Okay. I will take one more and then we go – we ask the speakers to reply.

You need to speak as loud as you can.

>> AUDIENCE MEMBER: I would like to ask: what do children really need to learn? They need to learn how to use AI systems in their future work, so they need to become familiar with this kind of system. They need to become familiar with the kinds of problems that these systems have, and they need to learn what society has not learned yet: how to use those systems for their future work, how to use those systems for their studies, and how to create things with those systems, so that this enables them to shape future societies.

This is just a prototype. We will have different systems with different properties, but I think we have to adapt to those systems. We have to adapt the curriculum, because people will need to learn different things.

>> THOMAS SCHNEIDER: Thank you. So you could compare it to kids: first they need to learn to walk, and then there are bikes and motorcycles and cars. So they need to learn to ride a bike and know what the dangers are, then learn how to ride a motorcycle, and then drive a car.

The question is what is appropriate at what age. Are there certain applications that should not be used at a certain age? How do you control that? It’s a little bit more difficult than preventing a kid from driving a car before he’s 16 years old.

We have Jascha and then Andrea.

>> JASCHA BAREIS: Very good points. We are dealing here with a very invasive technology. It’s not only like the newspaper, which, to be sure, revolutionized its medium; AI is so invasive that people are showing emotions towards LLMs – many of you have probably seen the movie “Her”, for example – and they appear very empathic.

So the question whether a young child is able to differentiate if this being, which is chatting with me or talking with me, is telling the truth or not, is trying to influence me or not, is very tricky. And it will be even trickier in the future, because the appearance of these large language models will become ever more empathic.

Right now, we need to check the facts because they are hallucinating. Right now, I don’t actually think – I mean, if you use them as knowledge models – that they are making things much more efficient, because you have to make very clear to students, and also to the teachers who teach students, what they can do and what they cannot do. Right now, if I as a scientist let ChatGPT write an abstract of a paper of mine, it sounds amazing, but it’s full of false facts. This is something that we have to make clear to everyone, especially to children, who have fewer cognitive resources to understand what is right and wrong.

At the moment, to be honest, I have no experience with this in language schools, for example, or in children’s schools, and I would be very interested to hear other experiences. But so far we have to be very critical about what to expect and what not to expect from this kind of technology, and this transparency takes a lot of effort from teachers, to make these things clear and to teach about the side effects and how it functions.

>> THOMAS SCHNEIDER: Thank you. Actually, that reminds me of something called the Tamagotchi: these were small devices that pretended to be pets and demanded some attention, and if they didn’t get the attention, they would beep. There were some people that were actually quite taken by them. I wasn’t one of them, but, yeah, that is, of course, a slightly different kind of invasiveness.

We have Andrea.

>> ANDREA TOGNONI: Thank you. And I do remember Tamagotchis; I think at some point they were banned from my class or my middle school back in the day. Jokes aside, I think that, indeed, there is a degree to which the types of risks we are talking about and, as Jascha was saying, the types of ethical questions are not necessarily new or specific just to LLMs; they are, of course, very broad and general. From the perspective of children, I think the first and most important thing, given that nobody has a silver bullet to solve everything for everyone right now, is to make sure that children, in all their specificity, with their needs and vulnerabilities, are considered in all the efforts to deal with this challenge.

We will talk more about regulation, but it is important that in regulation – given that a lot of regulation on digital technology, not only on AI, nowadays focuses on risk assessment and risk mitigation – the specific risks, the specific vulnerabilities and the special needs of children are considered, so as to raise the bar a bit where children are concerned. We should not just consider children as any other users. It’s true that all users will have problems and issues and challenges, and I don’t think that people who turn 18 will necessarily, from one day to the next, be more able or skilled to recognize AI-generated content, but it’s true that we need to think about the age appropriateness of the services that we deliver to children.

And so, another point that was raised: it’s not about shutting children out of technology and the online world in general; it’s about making sure that they can be in the online world with their rights respected and that they are safe there. And how do we do this? I think part of it is really looking at what they do and what they use online – so, recognizing children when they are online. This can be done, for example, through certain age-assurance measures, not necessarily to stop them from accessing things, but so that they are recognized when accessing a certain service, and so that we identify those services that are for sure or likely to be accessed by children – and when we talk about education, we are really talking about something that doesn’t leave a lot of room for doubt on that point.

We make sure that those services, and the models that those services use, are designed with the best interest of the child in mind. And this means – it’s a very long answer – bringing the children’s rights perspective into the work of the developers and designers of systems.

For example, what I can mention here is one technical industry standard, the IEEE age-appropriate design standard, which is extremely powerful as a tool because it is a list of processes and actions that a designer of an online service can undertake to ensure that the service recognizes the presence of children, recognizes risks, is adapted to be safe for children, and, if there are risks, that they are mitigated, et cetera.

I think it’s an incremental perspective.

>> THOMAS SCHNEIDER: Thank you for signaling a few possibilities on how to deal with it. I urge you all to be as short as you can. I have a gentleman here on the right.

>> AUDIENCE MEMBER: My name is Andre.

>> Please mute.

>> AUDIENCE MEMBER: So my name is Andre, I work with AI and machine learning, amongst other things. One of the issues that I see here – explainability was mentioned, and I have seen some of the things in the chat – is that with things like ChatGPT, there’s no explainability; there’s randomness and dreaming, and of course that will not work if you are extracting knowledge for learning purposes, for children or even for adults.

So one of the things that you start to see is that search engines like Bing, for instance, now have the possibility of showing you similar answers while at the same time giving you the sources. And while we should never trust what they say directly, you can go to the sources and see what they say. So at the very least you have a second opinion, so you can check or refute what Bing, and Google in the future, will say.

I saw something in the comments which is worth mentioning, about what age we should allow children to use these kinds of AI tools. For one simple reason: there were no computers before, and my generation is the first generation that started to use computers in a rough way and started doing programming. For the following generations, whatever limits you decide, somewhere in the middle you have people who just used the computers; they were just clients of the computer. They didn’t learn how a computer worked; they just wanted to use a tablet, watch some films, et cetera. Nowadays we need to consider what children will use these tools for.

So we have to watch this. We can say, oh, use them for whatever you want and just be a client of them, or we can try to understand how they actually work, which is what my generation had to do when they started with computers: they had to learn how computers work, even if it was as simple as loading games. So in this case, my suggestion is that anyone at any age should be allowed to access these services, but they should learn, and be taught, that these services are not 100% reliable, to trust them only with relative, proportionate importance, and then to draw their own conclusions. That’s always the most important part.

>> THOMAS SCHNEIDER: Yes, thank you.

Thank you, I think Morgan, you had your hand up?

>> MORGAN DEE: Yes, there are so many fantastic things that I really agree with, and I wanted to add that I really feel we have a duty to prepare children; locking them away will not prepare them for a future with AI. So thinking about how we prepare them is really important. Just as you would have lessons on Internet safety, or on looking at newspapers and interpreting bias or propaganda, the same thing should apply to new technology. It won’t just be LLMs – that’s what’s happening now – it will be any new technology that is coming. We need to prepare young children to interact with that in an age-appropriate way. It should not be a specific age at which you are suddenly allowed to engage with these tools.

It should be age appropriate, a gradual process rather than just being released into this new world at a certain age. I think that’s a really great point. And the other point I wanted to make is that there’s a difference in risk when you’ve got teachers or educators using these tools and there’s a human in the loop: if you are using it for content generation where the human is a specialist, very knowledgeable and able to filter through that content before it goes to a child, that’s a different risk proposition. I wanted to raise that point as well.

>> THOMAS SCHNEIDER: Thank you very much.

I think everybody agrees with the idea of using these things in an age-appropriate way. But how do you concretely do it? I’m a historian and economist, and sometimes I try to find out how people dealt with challenges in the past; for instance, for games or films, you have age limits that could be bypassed with the help of older brothers and sisters and friends. But if people want age-appropriate access, then that needs to be organized in a way that actually works. It’s less trivial than what we have been used to.

I have another question from the room here.

>> AUDIENCE MEMBER: Yes, I hope you can all hear me; I will try to be loud. I’m Natalie, I work in the Czech Republic, and I would like to share my experience very briefly. I also teach, and right now we see that our students are very actively using ChatGPT, not only in the process of writing their theses, but also for homework, for example. Some homework assignments that were very good and thoughtful in the past no longer work, because they can be very easily solved by ChatGPT.

Right now we are having discussions about how to change this, how to make it more effective. A good example of how to use it and change the mindset: we ask students to use ChatGPT on purpose, and then we all try to spot the mistakes it makes. That builds critical skills – and we needed those critical skills in the past as well; we need them now and in the future. That’s one approach we are trying now; we will see how it works out in the future. I wanted to share this.

>> THOMAS SCHNEIDER: This reminds me of the discussion about whether calculators should be used or not. At some point nobody expects kids to do what calculators do, because we have the calculators; you do other things. But now calculators can also calculate with variables. Here the teachers are aware and sensitized, and so are the students; the problem is probably in the lower public schools, where neither the parents nor the kids, maybe not even the teachers – in a public school somewhere in the countryside with 10-year-old kids – really have the mindset and the resources to actually experiment with this. So that’s maybe one thing. I –

>> AUDIENCE MEMBER: Just briefly: we heard examples from universities and high schools, et cetera, but I can tell you it starts much earlier than that. It’s not just ChatGPT but certain LLMs which are connected to TikTok and are being used by primary school students, who use them to do their homework. The teachers here in the room may be very conscious of all the consequences of it, but it has spread throughout. And I think at that level kids learn much faster how to use it and are probably far less critical about how they use it.

So I think we need an overall approach to this, not just a focus on one particular group, and one that also recognizes the potential.

>> THOMAS SCHNEIDER: Thank you. Before giving the floor on, maybe one question: in my country, Switzerland, there’s also a big debate about what to do with AI in general. We have the tendency to solve as much as we can in the sectors and go for the least horizontal regulation necessary, given that the risks are context-based. So one of the key questions, and I would like to hear from you a little bit on this, is: is it okay to just treat this as one of the applications in horizontal legislation? Is there a need to do something specifically about large language models and maybe other general-purpose AI? Is there a significant difference – is this a particular type that needs to be dealt with as a group, differently from other groups of AI applications? This is, I think, an interesting question. But Roberto, please.

>> AUDIENCE MEMBER: This debate reminds me of a parallel debate that we had when I was much younger, and it is about having the solution, the answer to a question, when you were in school: knowing how to give a good answer versus learning the process to get to the good answer. Those are two different things, and I think the danger of these tools, which are becoming easier and easier to use, is that you can give a good answer but you have not understood what the question was.

You feel that also in other areas. I hate navigators, because in most cases you have a good indication of where you have to go, but you have no clue where you are, especially when you are driving. And I think that knowledge is something much bigger and much more complex than notions – I don’t know the right word in English. It’s not because you know a couple of things and are able to give a couple of smooth answers that you have the answer. That’s the parallel.

With these tools, I think this is magnified, because it’s easy, and finding the easy solution will probably prevent students from making the effort needed to really learn.

>> THOMAS SCHNEIDER: Yes. The comparison with navigation is an interesting one, because, for instance, when I’m on holiday, I try not to use any navigation; we have physical maps and try to follow the signs and so on, so as not to lose the ability. Sometimes I also realize, when I use the navigation system, that I just follow the left and right instructions without really looking outside; I would never find the way without the tool.

And the best is that sometimes you use the navigation although you perfectly know where to go, because there may be traffic jams or whatever and you try to find the quickest way, and sometimes it is wrong – you know that what the system is telling you is wrong. And sometimes I dare to ignore the machine and say I trust my experience and I take my route. And then you can have discussions with your wife or husband about whether or not to follow the advice of Google. So it blurs your own experience, or it questions your own experience: even if what the machine tells you is wrong, you tend to believe that the machine must have a reason why it tells it to you.

So, Andrea: do we need something special? Is this something that needs to be treated differently from other applications? Andrea, please, briefly. You are still muted.

>> ANDREA TOGNONI: Yes, very briefly. On whether we need specific regulation, maybe Jascha can respond with more insight. We certainly need regulation, as opposed to self-regulation; I think we cannot just try it out and leave this up to self-regulation. We need strong rules.

As was mentioned in the chat, the right to participate is one of the rights of the child, and in all of these processes of designing, implementing and getting feedback on how the systems are operated and designed, children’s voices must always be at the core and center. I cannot stress this enough, and I hope I have tried to bring some of the perspectives of children, even if I’m not one, unfortunately for me.

And the other point that I wanted to mention is that we should focus a lot on literacy, because it will be important, but a lot will be about the design of services, because the asymmetry in knowledge and capabilities between users and the developers and deployers of LLMs and other services is so great that we cannot expect the burden to be only on parents and teachers to learn. That will be part of the picture, but a lot is needed on the design of services, so that they are age appropriate and fit and safe for children.

Sorry.

>> THOMAS SCHNEIDER: Thank you very much. You and then you.

>> AUDIENCE MEMBER: Let me focus on your question, because we could spend our entire time talking about this. Regarding legislation: consider that a few years ago we only had machine learning; no one was talking about AI as we see it today or this year, and no one would have enacted legislation about what we see today. And next year something different will happen. So consider this: we either define things in general terms, and that will not work for anything that changes, or we do it very specifically and say LLMs, and now we have to limit them under certain conditions, whatever they are. But if we limit it too much, someone will come up with a different technology that’s not called LLM, it’s called LML, because it’s fundamentally different in one small thing; it simply escapes that legislation, and it’s treated as a completely different thing. So the solution is not to be very specific, nor very generic, but to keep up with the times. We cannot take legislation from the 1950s and expect it to actually cover machine learning; it’s impossible. So all we have to guarantee is that we have legislative bodies that keep up with the times; maybe once a year they will review things, but this needs to be done on a regular basis.

>> THOMAS SCHNEIDER: This is indeed a challenge for somebody like the EU that is developing concrete legislation with an annex that basically should change constantly.

At the Council of Europe, we’re trying to develop a convention on the level of principles that should hold for 20 to 30 years, which then must be formulated differently. And if you look at the US Blueprint for an AI Bill of Rights, they don’t even define AI; they say that whatever uses data and algorithms and has a negative impact should somehow be dealt with. It sounds nice, but it is harder if this is then supposed to be legally binding.

Yeah. One last word from Jörn and then I think we need to wrap up. I think this was a really interesting discussion and it will probably have to be followed up.

>> AUDIENCE MEMBER: One small view on LLMs. Like with any older technology – I started working on this in ’89 – I think we need to differentiate a lot regarding the use case.

>> THOMAS SCHNEIDER: Mm-hmm.

>> AUDIENCE MEMBER: So you could use such a system for baby-sitting. Of course, this has to be heavily regulated. If it’s teaching primary school children, this has to be regulated.

Students, on the other hand, should be able to deal with the reality, and it should not be regulated; it should be free for research and free to learn from. If you connect those things to the Internet and they can act on the Internet, maybe then there should be some regulation. So we really have to look at the use case and have use-case-specific regulation. It doesn’t make sense to have a regulation of LLMs as such for just anything.

>> THOMAS SCHNEIDER: So again, we are very much in a context-based situation, but the question is how to develop something that is binding and at the same time graduated and differentiated.

>> AUDIENCE MEMBER: When we talk about regulation – (Inaudible) – it’s about data protection – (Inaudible).

What will it be like in 10 or 20 years, when you can substitute everything? So this is what it should be about.

>> THOMAS SCHNEIDER: Thank you. This is at least part of the thinking and of the analysis in the Council of Europe, for instance, because it has something to do with the right to integrity of life, and to health and safety and so on. But of course we are in the early days, because we don’t really know yet what this does. It’s like with mobile phones: we know that if you hold one here, it heats your brain, but whether this is positive and/or negative is not too clear, because we don’t have centuries of experience. It’s an important point.

I think we have to wrap up. Let me give the floor to –

>> All right, thank you very much. I will try to be brief, as we have run out of time. We started by focusing on the potential risks and benefits of AI, including LLM-based systems; in particular, it was said that they could be really useful in translation and content generation and that they can smooth the interaction between users and machines. But then, focusing on what they actually are, it was said that they do not produce knowledge: they are not knowledge-based, but map statistical relationships in language. The texts produced by AI are always fictional and do not constitute real knowledge, and this was also one of the worries raised during the discussion.

Since they can be influenced by biases, it is really important to deal with misinformation and other important issues like this.

The Italian Data Protection Authority expressed its concern about perfecting an imperfect tool on the market, because the market is not a laboratory and people are not guinea pigs. And it’s not just about the tool: this is not a really competitive market, it’s a monopolistic market, which means giving data to five or six big corporations.

Also, one of the main concerns is that even though the EU regulation on AI is going in the right direction, it won’t be implemented before 2025, so what are we going to do in the next two years? Finally, for the Italian DPA it is clear that children need special protection, and that was felt across the entire panel. It is also important to consider them as legally unable to enter into any contract exchanging their personal data for a digital service.

I’m certainly leaving out many things; you said so many interesting things. It is not only about producing content, but about the veracity and the quality of the content that is created. Also, when it comes to children’s rights, it is really important to focus on designing LLMs with children’s rights at the center, to prevent all the risks related to the tool. Digital literacy is always really important, not only for children but also for parents and teachers. It is important to bring children’s rights to the developers of LLMs and LLM-powered features: these are designed by someone with a specific purpose, and this should entail responsibility for the design, the outcome and the oversight of the system.

Finally, of course, we also spoke about LLMs for students, university professors and so forth. They are already employed in ed tech tools and also for virtual tutoring; they can help to provide more personalized education, and there are problems with plagiarism. We need to celebrate human intelligence, which is completely different and, of course, follows different aims.

I couldn’t keep track of the whole discussion because you said so many interesting things, but, of course, when it comes to regulation and legislation, what emerged is that we do not need overly strict regulation, because otherwise it will be outdated in two years; what is important is to agree on principles and to find common ground for the next 20 to 30 years, and also, of course, to try to find a balance between benefits and risks. I hope I have been clear; I’m sorry if I spoke too quickly or forgot anything. But it was a really rich and interesting panel. Thank you very much.

>> You forgot to propose that legislation should be made to change the name of LLMs.

>> Of course. Absolutely.

>> THOMAS SCHNEIDER: That will automatically change sooner or later. Thank you so much. Sorry for taking a little bit longer. I didn’t manage to stop in time. Again, the discussion will continue, and we’ll get a wrap-up along the lines of what we heard. So thanks for providing us with this. So, yeah. Thanks for this very exciting session.

(Applause)