Neurotechnology and privacy: Navigating human rights and regulatory challenges in the age of neural data – MT 02 2025
13 May 2025 | 11:00 - 12:30 CEST | Hemicycle
Consolidated programme 2025
Proposals: (#10), #72, #82 (CoE)
Get involved!
You are invited to become a member of the Session Org Team by simply subscribing to the mailing list. By doing so, you agree that your name and affiliation will be published on the relevant session wiki page. Please reply to the email sent to you to confirm your subscription.
Kindly note that it may take a while until the Org Team is formed and starts working.
To follow the current discussion on this topic, see the discussion tab on the upper left side of this page.
As neurotechnology advances, our thoughts may no longer be entirely private. A provocative discussion on the current and future state of mental privacy, brain-data regulation, and the human rights implications of technologies that can access or potentially even influence the mind.
Session description
Neurotechnology may sound like science fiction, but we are already living it. From brain-computer interfaces to emotion-tracking wearables, we are entering an era where technology can monitor, decode, or alter brain activity. But as neuroscience converges with AI and data-driven systems, urgent questions arise: What counts as “mental data”? How should we regulate access to the human mind? Do existing data protection laws go far enough?
As neurotechnology evolves from labs to daily life, humans’ thoughts, emotions, and brain activity are becoming the next frontier of data. This panel explores the emerging concepts of mental and neural data, and the implications of decoding or influencing the brain through technology. With a keynote from the UN Special Rapporteur on the Right to Privacy, followed by a debate led by two expert speakers, the session dives into the legal, ethical, and human rights challenges of protecting mental privacy in an era of brain-machine interfaces and cognitive surveillance. Audience participation will be central to shaping this critical conversation at the crossroads of tech, law, and humanity.
Guiding questions:
- How well do existing data protection frameworks (like Convention 108+ or GDPR) address the unique sensitivities of neural or cognitive data? Should we be thinking about a new legal category for “mental data”?
- In an increasingly interconnected digital ecosystem, neural data won’t exist in isolation. How should we understand the risks when brain data is combined with other personal data streams—like online behaviour, biometrics, or health records—and what challenges does this pose for governance and accountability on the internet?
Format
We introduce a new format for all Main Sessions. They are NOT panel discussions and are conducted as follows:
- 30 min input (2 x 15' or 3 x 10' VIP / expert presentation)
- 45 min moderated discussion with the entire audience along a set of guiding questions
- 15 min agreeing on the messages
Interpretation in English and French.
Further reading
People
Please provide name and institution for all people you list here.
Programme Committee member(s)
- Minda Moreira, Internet Rights and Principles Coalition (IRPC)
- Moritz Taylor, Data Protection Unit, Council of Europe – Conseil de l'Europe
The Programme Committee supports the programme planning process and works closely with the Secretariat. Members of the committee give advice on the topics, cluster the proposals and assist session organisers in their work. They also ensure that session principles are followed and oversee the complete programme to avoid repetition among sessions.
Focal Point
- Moritz Taylor, Data Protection Unit, Council of Europe – Conseil de l'Europe
Focal Points take responsibility for, and lead, the organisation of the session. They work in close cooperation with the respective member of the Programme Committee and the EuroDIG Secretariat and are kindly requested to follow EuroDIG’s session principles.
Organising Team (Org Team)
List Org Team members here as they sign up.
- Mariam Chaladze, ISET
The Org Team is shaping the session. Org Teams are open, and every interested individual can become a member by subscribing to the mailing list.
Key Participants
- Doreen Bogdan-Martin, Secretary-General, International Telecommunication Union (ITU) (video message)
- Ana Brian Nougrères, UN Special Rapporteur on the right to privacy (in person)
- Damian Eke, Assistant Professor at the University of Nottingham, founder of the African Data Governance Initiative and the African Brain Data Network
- Petra Zandonella, pre-doctoral university assistant at the University of Graz, expert in law and neurotechnologies
Key Participants (also speakers) are experts willing to provide their knowledge during a session. Key Participants should contribute to the session planning process and keep statements short and punchy during the session. They will be selected and assigned by the Org Team, ensuring a stakeholder-balanced dialogue that also considers gender and geographical balance. Please provide short CVs of the Key Participants on the Wiki or link to another source.
Moderator
- Moritz Taylor, Data Protection Unit, Council of Europe – Conseil de l'Europe
The moderator is the facilitator of the session at the event; they must attend on-site. Moderators are responsible for including the audience and encouraging a lively interaction among all session attendees. Please make sure the moderator takes a neutral role and can balance between all speakers. Please provide a short CV of the moderator of your session on the Wiki or link to another source.
Remote Moderator
Trained remote moderators will be assigned by the EuroDIG secretariat to each session.
Reporter
The members of the Programme Committee report on the session and formulate messages that are agreed with the audience by consensus.
Through a cooperation with the Geneva Internet Platform, AI-generated session reports and stats will be available after EuroDIG.
Current discussion, conference calls, schedules and minutes
See the discussion tab on the upper left side of this page. Please use this page to publish:
- dates for virtual meetings or coordination calls
- short summary of calls or email exchange
Please be as open and transparent as possible in order to allow others to get involved and contact you. Use the wiki not only as the place to publish results but also to summarize the discussion process.
Messages
- are summarised on a slide and presented to the audience at the end of each session
- relate to the session and to European Internet governance policy
- are forward looking and propose goals and activities that can be initiated after EuroDIG (recommendations)
- are in (rough) consensus with the audience
Video record
Will be provided here after the event.
Transcript
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Moritz Taylor: I’m going to ask you all to take a seat, come back in from your coffee break, if you can hear us outside. Welcome back, those of you who are already in main session one. Welcome back to those of you who were here yesterday and had your socks rocked off by the Mariam Chaladze band and are finally coming in. And welcome to all of you who’ve been in other sessions and workshops before and are coming into the main session, into the hemicycle for the first time today. Now I’m going to start with main session two, which is neurotechnology and mental privacy, regulating the mind in the digital age. We’ll start off with a video, followed by a keynote, followed by statements, the panel, questions and the messages. I’ll hand over now to our online moderator, Joao, who will explain to you the rules of the session. Thank you very much.
Online moderator: So, for those who join online, please always raise your hand to speak, to ask for a speaking slot. Then we’ll ask you to switch your video and unmute when the time comes. For those who join on-site, please always join with your microphone muted and your speaker from the device also disabled. Thank you. Back to you.
Moritz Taylor: Thank you, Joao. I hope that was clear for everyone. I’m here to help you. For those of you in the room, when you wish to speak, you’ll be able to press a button next to your microphone. In any case, we will begin today’s session with a video message from Doreen Bogdan-Martin, Secretary General of the International Telecommunication Union. Please play it. Thank you.
Doreen Bogdan-Martin: Hello, everyone. Let me start by thanking the Council of Europe and Luxembourg for inviting me to share a few words with you today. I would love to be with you in Strasbourg, especially at a time when digital is top of mind all over Europe and around the world. The Internet is a mirror that reflects humanity at our best and at our worst. Digital technologies evolve rapidly, as do the associated opportunities and risks. Human rights do not. As we achieve breakthroughs in science and technology, our outlook on the future changes, but human rights remain a constant. They are a guidepost for our actions across tech frontiers, especially now, with powerful technologies like AI and quantum poised to drive the next phase of the Internet’s evolution. To safeguard human rights by balancing regulation and innovation, all voices are needed at the governance table, to understand how policy and regulation, whether nascent or enforced, can impact everyone’s aspirations as innovators and users. That’s agile and adaptive governance, keeping everyone involved in designing and fine-tuning our policy actions. And that’s why open, multi-stakeholder forums like the World Summit on the Information Society and EuroDIG are so important. They give everyone a say in how public policy could best reflect our shared values. We will keep this conversation moving at our back-to-back WSIS Plus 20 High-Level Meeting and our AI for Good Global Summit in July. We are reviewing 20 years of the WSIS process and looking to strengthen this multi-stakeholder framework for global action on digital development. 2025 also marks the 160th anniversary of the ITU. As our founding members, European countries are deeply experienced in building the collaboration and consensus that powers our work. Now, more than ever, the world needs to see this spirit of cooperation in action. Thank you.
Moritz Taylor: Thank you so much, Doreen. Now, I’d like to begin with the session proper. The idea that our thoughts are private has long been considered a cornerstone of personal freedom. But with neurotechnology rapidly evolving, from brain-computer interfaces to mood-tracking devices, that assumption is being challenged. This session tackles one of the most provocative frontiers of digital governance, the legal and ethical implications of decoding, collecting, or even influencing brain activity. We will explore whether our current regulatory frameworks are fit for purpose, and whether we need to rethink privacy in a world where even our minds can become data streams. I’d like to therefore invite our keynote speaker today, UN Special Rapporteur on the right to privacy, or on privacy only, I think, Ana Brian Nougrères. Please take the floor.
Ana Brian Nougrères: Thank you so much. Thank you so much. Good morning to you all. Well, this is because I’m trying to be strict with my timing. So, distinguished guests and colleagues, it is a great honor to address you today. I’m also grateful to the organizers for their invitation. Today, I will invite you to reflect on neuroscience, neurotechnology, neurodata. So, why is privacy at the frontiers of neuroscience? As you know, neurotechnologies are devices and systems capable of monitoring, interpreting, or even modifying brain activity. They are no longer a theoretical promise. They are here, and they are advancing rapidly. What only a decade ago seemed inconceivable is now being tested in laboratories, deployed in medical settings, and increasingly explored for commercial and security-related purposes. As these technologies develop, so do the risks they pose to fundamental rights, particularly the right to privacy. We are entering a new frontier in the domain of privacy. This frontier appears like a boundary between the self and the outside world. This boundary is becoming porous. It is a frontier where our inner thoughts, intentions, and emotions may be inferred, stored, transmitted, and even altered by external technologies. The frontier demands not only technical safeguards, but a reaffirmation of the foundational principles that protect human dignity and autonomy. In my recent report to the UN Human Rights Council on Neurotechnologies and Neurodata, I have laid out a framework for action, one that calls for regulatory clarity, for ethical commitment, and for international solidarity. The stakes are high. If neurotechnologies are used wisely, they hold the potential to advance science, improve human health, and empower individuals. But if misused, they could erode mental freedom, facilitate surveillance of thought, and deepen existing inequalities. We must act now, together, to ensure that our values evolve alongside our innovations. The question before us is not whether neurotechnologies will become part of our lives. They are already here. The question is whether we will be governing them wisely, justly, in a way that protects what is most sacred, which is the integrity of the human mind. So, what is so important about neurotechnologies? To understand the significance of protecting personal data in the context of neuroscience, we must first define what we mean by neurotechnologies, and more importantly, why they matter for human rights in general and for the right to privacy in particular. Neurotechnologies refer to the tools, systems, and devices capable of accessing, monitoring, interpreting, or altering brain activity and the nervous system. These include invasive methods such as brain implants used in medical treatments for conditions like epilepsy or Parkinson’s disease, as well as non-invasive techniques such as electroencephalography, functional magnetic resonance imaging, or emerging wearable brain-computer interfaces. Although many of these tools are initially developed for therapeutic or research purposes, their application has expanded far beyond clinical settings. Today, neurotechnologies are entering the commercial domain, integrated into consumer products, educational platforms, workplace monitoring systems, and even digital entertainment. They are also being explored for potential use in law enforcement, military operations, and criminal justice, raising deeply troubling ethical and legal questions. Why do these technologies matter so much?
I mean, I think it’s because the brain is not just another organ. It is the seat of our consciousness, our identity, our thoughts, emotions, memories, and intentions. The activity of our neural circuits encodes not only what we do, but who we are. And for the first time in history, we now possess tools that can reach into this most intimate and private space. Let us be clear, neurodata, the data collected through neurotechnologies, is fundamentally different from other types of personal data. While a fingerprint may identify us, and geolocalization data may track us, neurodata can reveal how we feel, what we fear, what we desire, or what we intend to do, sometimes even before we are consciously aware of it ourselves. This kind of information is not merely sensitive. It is existentially revealing. This is why the Global Privacy Assembly and numerous human rights bodies, including the UN Human Rights Council, have recognized neurodata as a special category of personal data that requires the highest level of protection. Neurotechnologies also raise the prospect of mental manipulation. We must reckon with the real possibilities that once brain activity can be decoded, it can also be influenced through electrical or digital stimulation, algorithmic feedback, or even predictive behavioral nudging. This raises profound concerns regarding free will, mental autonomy, and cognitive liberty. And yet, we must not overlook the enormous benefits that these technologies offer. Neurotechnologies can revolutionize medical diagnostics, facilitate communication for individuals with severe disabilities, and deepen our scientific understanding of the brain. They can bring hope to people suffering from neurological or psychiatric conditions for which current treatments are inadequate. The challenge, then, is not to resist innovation, but to guide it ethically, to ensure that the development and deployment of neurotechnologies respect human rights, protect mental privacy, and empower individuals, rather than expose them to surveillance, discrimination, or exploitation. Neurodata as a special category of personal data. At the heart of the ethical and legal debate surrounding neurotechnologies lies a fundamental question. How should we treat the data generated by the human brain and nervous system? The answer is unequivocal. Neurodata must be recognized and regulated as a special category of personal sensitive data. This recognition is not merely symbolic. It reflects the unique nature, sensitivity, and potential consequences of processing neurodata. Neurodata are not just health data. They are not merely biometric data. Neurodata go far beyond what we traditionally understand as personally identifiable information. Neurodata constitute windows into the cognitive, emotional, and psychological fabric of the human being. And unlike other data, neurodata can provide deep insights into a person’s mood, personality, memory patterns, decision-making, and even unconscious mental states. These are not inferences drawn from online behavior or wearable devices. They are recordings of the brain’s actual electrical and physiological activity. This data can be used not only to identify a person, but to analyze, predict, or even alter their thoughts and behaviors. For these reasons, neurodata meet and exceed the criteria that define sensitive personal data under international privacy standards. As such, they require enhanced legal safeguards. They must be subject to
strict access controls, strong encryption, cybersecurity protocols, explicit, informed, and revocable consent mechanisms, and clear limits on their collection, retention, and sharing. Furthermore, the mere collection of neurodata should trigger a high-risk processing assessment. This is particularly vital in contexts involving vulnerable populations, such as persons with disabilities, children, older persons, or individuals in institutional settings, where the potential for coercion, manipulation, or misuse is even greater. Mental privacy has emerged as a necessary evolution of the right to privacy, emphasizing that thoughts and mental states, absent a compelling legal justification and strict safeguards, must remain off-limits to external surveillance or intrusion. The mind is the final frontier of human privacy, and it must be treated as such. International bodies such as the Ibero-American Data Protection Network, the Global Privacy Assembly, and the Berlin Group have all recognized the need to establish special frameworks for the processing of neurodata. Their recommendations are aligned with the precautionary principle, calling for clear legal definitions, transparency, accountability, and human rights impact assessments as prerequisites for any neurodata-driven activities. In addition, these bodies emphasize that neurodata may imply a power of anticipation, capable of revealing information not only about the current mental state of an individual, but about their future behavior, psychological predispositions, or cognitive performance. This introduces unprecedented risks of profiling, stigmatization, and discriminatory treatment, particularly in employment, insurance, education, and criminal justice settings. This may lead to biased hiring processes, unequal access to services, or unjustified exclusion from opportunities, deepening social inequalities. In the face of this, we demand equitable access and non-discrimination procedures. To regulate neurodata adequately, we must adopt forward-looking regulatory frameworks that incorporate the complexity and the implications of this new form of personal information. Our legal systems must reflect that neurodata are not merely data about the brain, but deeply personal representations of the self. I will refer next to the principles and safeguards for the use of neurotechnologies and the processing of neurodata. In my most recent report to the Human Rights Council, I outlined a set of principles that should serve as the ethical and legal compass for regulating the use of neurotechnologies and the processing of neurodata. These principles are not abstract ideals. They are concrete tools for building legal, institutional, and technological systems that respect privacy, autonomy, and equality in the context of the brain. The first and most essential principle refers to human dignity. The mind is where human dignity resides. It is the source of self-awareness, of decision-making, of creativity, of morality. Any attempt to access or alter brain activity without the individual’s informed and voluntary consent should be considered a violation of this dignity. We need to safeguard the integrity of every individual’s cognitive and emotional self. Neurotechnologies must never be used in ways that reduce the person to a target of data extraction or behavioral engineering. We must acknowledge mental privacy as an emerging but indispensable dimension of the right to privacy. The human mind must be treated as an inviolable space.
Any intrusion, whether through decoding brain signals, stimulating specific regions, or interpreting patterns of cognition, must be subject to the most rigorous legal scrutiny and ethical oversight. The processing of neurodata must be based on the freely given, informed, specific, and revocable consent of the individual. This ensures that individuals remain in full control of their thoughts and decisions. Then, the precautionary principle must apply. Where the risks of harm to mental integrity, cognitive liberty, or psychological well-being are not fully understood, the default must be restraint. A lack of scientific certainty should never be used as a justification for experimentation or commercial exploitation. Especially in the face of technologies that could irreversibly affect the brain, caution must be the rule, not the exception. Privacy by design and by default must be embedded at every stage of development. From the earliest research and design phases to deployment and commercialization, neurotechnologies must be shaped by ethical values, human rights norms, and transparency requirements. This includes conducting human rights impact assessments prior to implementation and involving diverse stakeholders, including civil society, persons with disabilities, and neuroscientists, in the process of oversight. We must also ensure accountability. Developers, manufacturers, healthcare providers, employers, and public institutions that use neurotechnologies must be held responsible for ensuring data protection, transparency, and compliance with legal standards. Governments must establish independent regulatory bodies equipped to monitor the use of neurotechnologies and provide accessible remedies in case of rights violations. Then, we need to prohibit discrimination and manipulation. Neurodata must never be used to categorize, profile, or exclude individuals based on psychological characteristics, emotional responses, or neural patterns. Nor must they be used to manipulate thoughts, alter beliefs, or induce behaviors for purposes of political, commercial, or punitive control. The principle of free will ensures that individuals remain in full control of their thoughts and decisions. Finally, individuals must have enforceable rights. They must be able to access their neurodata, challenge unlawful processing, and seek redress. Privacy rights in this context are not a luxury. They are a shield against the commodification of the self and the erosion of mental autonomy. In fact, the regulation of neurotechnologies is not only a legal imperative, it is a moral one. If we do not act now to build a rights-based framework, we risk creating a future in which the last domain of privacy, the mind itself, is no longer protected. But if we succeed, we can ensure that neurotechnologies are developed and used to enhance human flourishing, not to diminish it. There is another issue at hand. Neurodata do not respect national borders. Brain-computer interfaces, cognitive monitoring tools, and neural wearables are being developed and deployed by transnational actors. Their regulation, therefore, cannot be fragmented or isolated. It must be coordinated, comprehensive, and coherent. In this context, the Council of Europe’s Convention 108 and its modernized version, Convention 108+, offers a critical foundation for global convergence. It is the only legally binding international treaty dedicated specifically to the protection of personal data.
And crucially, it is open to countries beyond the Council of Europe, allowing for true international alignment. Convention 108+ embodies many of the values that are essential to the regulation of neurotechnologies: transparency, accountability, proportionality, data minimization, and the protection of sensitive data. It recognizes that certain categories of personal data, such as those related to health, require special treatment under the law. By building on the principles of Convention 108+, we can create a shared normative baseline for regulating neurodata. This is especially important given the rapidly evolving technological landscape. National laws vary significantly in scope, substance, and enforcement capacity. Yet, if we are to protect individuals from harmful or discriminatory uses of neurotechnologies, we must avoid a patchwork of weak protections and regulatory loopholes. Convention 108+ also provides mechanisms for institutional cooperation. It fosters dialogue, mutual assistance, and the exchange of best practices. These tools are essential as we confront common challenges, such as how to define neurodata in legal terms, how to apply informed consent in cognitively vulnerable populations, or how to regulate cross-border flows of neural information. In my view, Convention 108+ should serve as a platform for international leadership in shaping how we treat the privacy of the human mind. It offers a flexible yet principled framework that can inspire national reforms and influence regional and global initiatives. Already, it has helped shape modern data protection standards beyond Europe, in Latin America, Africa, and in various international fora. But we must go further. We must ensure that the unique features of neurotechnologies are explicitly addressed within data protection regimes. We must expand the reach of Convention 108+ by encouraging more states, especially those at the forefront of technological innovation, to ratify and implement its provisions. And we must integrate the Convention’s principles into the design of future specific instruments on emerging technologies. Let us be clear, we are not starting from zero. Convention 108+ already gives us the legal vocabulary, the ethical principles, and the cooperative tools we need. What we must do now is apply them boldly, to ensure that these protections exist also for neurotechnologies. The brain must be the final frontier of privacy, but it is one we must defend together, and with the same spirit of solidarity and shared responsibility that underpins Convention 108+. Now, a final conclusion. As we stand at the intersection of neuroscience, data protection, and human rights, we are compelled to confront a profound truth. The future of privacy is not only digital, it is mental. In a world where technology can increasingly peer into our thoughts, predict our behaviors, and influence our decisions, the protection of mental autonomy is emerging as a defining frontier of the 21st century. Faced with this reality, our path forward must be both principled and pragmatic. We must be clear that technological progress cannot come at the expense of human dignity. That innovation, to be legitimate, must be bound by law. And that privacy, in its fullest sense, includes not only the protection of our personal data, but the protection of the self, of our identity, our thoughts, our emotions, and our freedom to be who we are without interference.
This is why the recognition of neurodata as a special category of personal data is not just a technical adjustment. It is a moral imperative. This is why informed consent must be more than a checkbox. It must be a process of genuine understanding and free will. And this is why the principles of human dignity, mental privacy, personal identity, free will, equitable access, non-discrimination, accountability, must all be woven into every law, policy, and device that touches the human mind. The challenges ahead are immense. The speed of technological innovation is outpacing legal and institutional responses in nearly every region of the world. But we are not starting from zero. We have frameworks like Convention 108+ that can serve as a foundation for international convergence. We have regional and global bodies committed to rights-based governance. And we have a growing awareness across disciplines and sectors that the mind must be protected as sacred ground. Let us act with urgency, but also with care. Let us regulate, not to obstruct, but to elevate the promise of neurotechnology. Let us educate, not to alarm, but to empower. Let us legislate, not in isolation, but in solidarity with each other, and with those whose rights are most at risk. Above all, let us remember that the right to privacy is not simply the right to be left alone. It is the right to control our personal space, our bodies, and yes, our minds. It is the right to define who we are, free from coercion, manipulation or surveillance. In this new cognitive era, that right must be defended with renewed determination. Thank you very much.
Moritz Taylor: Thank you very much, Ana. You can take a seat also. I’d invite the panellists also to come forward so that we can start introducing you. After I give them each a little chance to talk about, to answer a question, we’ll also have the statements before moving on to a question round. So if you listen to what Ana has said, if you listen to what the panellists say and have some questions, write them down so that they can be asked after the prepared statements. Thank you very much. So, I’d like to begin the session with Damian Eke. He is Assistant Professor at the University of Nottingham, Chair of the International Brain Initiative’s Data Sharing and Standards Group, and founder of the African Brain Data Network and African Data Governance Initiative as well. Currently, in his role as PI on a Wellcome Trust project, he is co-creating Responsible International Data Governance for Neuroscience. So the way I’m going to do this is I’m going to introduce them all and give them all a chance to answer one question. So Dr. Eke, how well do existing data protection frameworks like Convention 108+ or the GDPR address the unique sensitivities of neural or cognitive data? Should we be thinking about a new legal category for mental data? And are some of the current neural rights debates distracting from more immediate and under-acknowledged risks posed by today’s neurotechnologies?
Damian Eke: Thank you very much for that question, and thanks a lot for the presentation, Ana. That was very comprehensive. The answer, I’ll give you the simple answer and maybe the complicated answer. The simple answer is that existing regulations or ethical and legal frameworks are not addressing the issues of neural data and neurotechnology as they should be. Most of the recognition of neural data in the GDPR and also the Convention 108 Plus is implied rather than explicit. There’s no specific mention of brain data or neural data or mental health data in these legal provisions. So the special category data in Article 9 of the GDPR includes data concerning health and biometric and genetic data, while EEG, fMRI, and also most of the neural data sets that are generated from neurotechnology may fall under health data in certain contexts. This is not explicitly clear in the regulation. And also, Article 22 on profiling and automated decision-making may apply if cognitive data is used, and principles of data minimization, purpose limitation and consent mechanisms also apply, but it is not clear to researchers or to people in the industry how to address some of the issues neural data raise. What I will say is that also in the second question you asked, whether the debates about neural rights are distracting from other issues, yes, the intense focus on neural rights as a distinct new category of human rights, while it captures the public imagination, could indeed overshadow some very tangible and immediate risks associated with the development and application of neurotechnology, and Ana was also right to highlight the issues of ethics and ethical obligation involved here, it is important to realize that there are also other issues that we need to focus on, like ethics dumping, like the safety issues and bias issues that are involved in neurotechnology. Ethics dumping, what do I mean by ethics dumping? As neurotechnologies advance, as you mentioned, they are not just developed in one location, they actually cross boundaries, so there’s a risk that research and development that might be ethically questionable or face stricter regulations in one region may be outsourced to regions or areas with less stringent oversight, and that is a critical problem, and ethics dumping could lead to the exploitation of vulnerable populations, and also disregard for ethical principles in pursuit of scientific progress, and also, I will also mention this, exploitative labour practices that characterize the extraction of the resources that shape the fundamental infrastructure of neurotechnology: the extraction of lithium, the extraction of rare earths for the development of neurotechnology. The conditions of work in terms of mining these resources are not as they should be, so the right to human dignity should also be focused on that rather than just data. Should we have a special category for neural data? I would say yes, but not as the discussion is happening in the neural rights debate, but as a special category that can be at the same level as genetic data and biometric data, because neural data can also be classed as hidden biometrics. Datasets like fMRI or MRI have brain prints in them that are unique or maybe more unique than biometrics or genetic data, so it is important for us to discuss how we can change the language of regulations or provisions in the regulations to attribute the same level of sensitivities and sensibilities to neural data as we do to genetic data and biometric data.
Moritz Taylor: Thank you so much, Damian. I think that was a very good start for people to have their brains activated. Speaking of brains and neural activity, next I’d like to welcome Petra Zandonella, a pre-doctoral assistant at the University of Graz at the law faculty and part of an interdisciplinary research group working on the intersection of law, ethics and neurotechnologies for the last two years. In her dissertation, she focuses on the protection of health data in the EU, which has now already popped up a couple of times in the last couple of minutes. From your legal research and your interdisciplinary perspective, I’d like to hear your point of view on whether you believe the existing legal framework already adequately addresses the challenges of neurotechnologies and, of course, maybe to expand on that, do you think there are gaps perhaps in enforcement, in scope or conceptual clarity even that we still need to fill?
Petra Zandonella: Okay, thank you very much. Thank you for the invitation and the opportunity to be here today and also for the introduction. So, I will give you a legal perspective and then we can go quite in the same way as you did already. So, you heard before that there’s a call for mental privacy and in the interdisciplinary group we’ve also focused on the mental privacy issue and we do not recommend adding a new right to mental privacy. So, there is an ongoing debate if the right to mental privacy is needed or if it will overburden our existing and well-established legal system and also framework. So, the question is what exactly will the scope of mental privacy be, and is it about mental, neurocognitive or brain data? So, the existing right to privacy, not to mental privacy, it’s the right to privacy as enshrined in Article 8 of the European Convention on Human Rights. It’s really a broad right to privacy. So there’s a really broad understanding of it, and it’s also interpreted in a really broad sense. So I will mention one case in front of the European Court of Human Rights, and it took place last spring, so in spring 2024, and it’s the case of the KlimaSeniorinnen in Switzerland. So it’s not a case about new technologies, but it’s a case about how broad the understanding of the existing right to privacy within the Convention on Human Rights already is. So the KlimaSeniorinnen claimed that Switzerland had violated the right to private and family life, so Article 8 of the Convention on Human Rights, by failing to take measures against climate change. And the court ruled in favor of the KlimaSeniorinnen. So you see how broad the understanding of the existing right already is. So what benefit will we gain by cutting off the right to mental privacy from the already broad existing right to privacy? And where will the existing right to privacy end, and where will the right to mental privacy start? So, of course, there will be legal uncertainty if we create a new right to mental privacy. There is, of course, no established case law, because it’s a new right, and it’s also a question if the privacy right will be interpreted in the broad sense if we explicitly mention mental privacy in the Convention on Human Rights. So if we mention mental privacy there, there’s a question if there should be also other privacies, because the problem is that if we explicitly mention one privacy, what about the other privacies? Are they still within the broad scope of the right to privacy? We don’t know. So to summarize, the existing right to privacy in the Convention on Human Rights is already a good foundation, I would say. So in our opinion, there shouldn’t be a split-up right to mental privacy. But nevertheless, you already mentioned that there should be a discussion on how we should deal with these neurotechnologies and with neural data. So for example, in the data protection law, as you already mentioned, there is the Convention 108 and 108 plus and also the GDPR, and there is health data inside. So if neurotechnologies are used for a health purpose, of course, they will be within the scope of health data. But nowadays, the neurotechnologies are expanding into the non-medical domain, such as human enhancement or gaming. And in this context, there’s no medical purpose. So there is a gap. And as you mentioned in the keynote, it’s really important that we also protect these data, because neural data is not only specific when it’s about the medical purpose or medical data.
It’s also really specific data when it’s used for another purpose. So for example, we could implement neural data within the scope of Article 9 of the GDPR or in Article 6 of the Convention 108 or 108 plus. And it has already been done with biometric data. So in the Convention 108, there wasn’t biometric data. And now, with the 108 plus and before that in the GDPR, biometric data was added. So it isn’t a big deal to implement a new category of data there. Of course, it is not that easy, because you need the consent of the member states. But it could be an idea how we can deal with these new challenges we have with neurotechnologies. And so I will come to a conclusion, if it’s OK. Sorry for the long statement. In our opinion or my opinion and also in the opinion of our interdisciplinary group, the Convention on Human Rights and also the Charter of Fundamental Rights already is a good and robust legal framework. And mental privacy should explicitly not be added. So we should stick to the already existing right to privacy. But of course, we need action to tackle the challenges that arise with neurotechnologies, for example, as already mentioned, by adapting the data protection law. So, thank you.
Moritz Taylor: Thank you very much, Petra. Right. Give them all a round of applause. Thank you. I’m going to allow the statements to happen. Meanwhile, perhaps, so that you can digest that information, listen to the statements and ask one or two questions after. I’ll try to collect them because we’re starting to be a bit short on time. Classically. May I have the statements? No, there are no statements. No, no online statements at all. And so, on site, do we have UNESCO’s Women for Ethical AI present? Kokse Kobanashoy-Hizal, are you here? I’m going to assume no. Is Lazar Simona, the CEO of Union Romani Voices, here to speak, to make a statement? Berna Tepe. Jan Kleijssen, a recognisable name in the building. Please, Jan. It’s number 94.
Kleijssen Jan: Good afternoon, or good morning, rather, still, with 10 minutes to go. Thank you very much for the very interesting presentations, and also for drawing attention to the already existing validity of Convention 108, 108+, when it comes to protecting neural rights, as they have been labelled, and the very self, as was so pointedly stated a moment ago. I have a question relating to the use, or the interpretation, of neural rights when it comes to AI systems, sentient computing. There’s a big debate about whether this will remain pure fiction, science fiction, or come into reality, but what would your position be on the research guiding this, and on the limits, perhaps, on the regulation that needs to be there in time if we do not want to find ourselves facing something quite abominable? Thank you.
Moritz Taylor: Thank you, Jan. Can we do this as a quickfire round? Do you want to give your quick answers, maybe each one after the other? You don’t want to. So, I’ll start with Damian, and we’ll go down the line.
Damian Eke: Okay. So, you’re right. AI complicates the ecosystem. With the convergence of neurotechnology and AI, the predictive, inferential power of neural data, when combined with AI and big data, is uniquely dangerous, which enables maybe preemptive profiling, maybe neuromarketing at an advanced level, and also cognitive surveillance. This is a problem that needs to be addressed also, because the question is, does it then warrant a special category of data, of convergence of data sets? Now, it’s not just neural data, but then it’s combined with other data sets, biomedical data, combined with AI. It is a problem. Thank you very much. So, we have a challenge that needs to be addressed, but just as my colleague here pointed out earlier, there are provisions in the law to address some of these things, but it’s just that the ecosystem of regulations is a bit diverse. There’s the AI Act, there’s the GDPR, and there are other data regulations in the EU. It’s a case of trying to harmonize these provisions to address the specific problem of the convergence of neural data and AI.
Petra Zandonella: So maybe I can just add a sentence to this. It’s not about the technology, it should always be about the human being. So how can we make sure that the human being is still in the focus of the regulation? So it’s not that easy to regulate each technology. So we should have a broader approach to this. So of course there is an interference with other technologies, I guess that’s normal and that’s already existing, and as you mentioned there are already really good legal frameworks on that, but when we come back to the Convention on Human Rights, there is really a broad understanding and it’s really a good reflection in the human rights and also in the fundamental rights when you go on the European perspective.
Moritz Taylor: Thank you, Petra. It’s okay, only keep it short, as we’re a bit short on time, so if you have something, yeah.
Ana Brian Nougrères: Okay, I believe that there are difficult topics like the one you brought up now, and I believe also that there is one moment in which one can feel there is an important risk, and that the risk is that society might be manipulated. And then I think, well, what shall we do? Because we are looking at the process, we are looking at our people, how they might be manipulated. We have concrete examples of people who received a little bit of money to have a picture of their eye taken, and then all that is a whole mess. But well, this is not the moment to go into examples. But I think that when we are seeing that problem, it is because the problem has advanced in our society. And that is a moment in which we need to do something. We need to raise awareness first of all. But regulation, I won’t discard it. I think that regulation is important. It gives a before and an afterwards. But before regulation comes, we need to have a real, multi-stakeholder conversation on these topics in which we can have the opinions of the different professions that are involved in these topics. So I think there is a point in neurotechnologies, there’s a point that is of our concern and that something has to be done. Perhaps it’s not the moment for a regulation, but we need to have those strong conversations. We need to pay attention to how the social movements are feeling the impact of all this. And maybe the regulation appears, maybe it doesn’t. But well, we have to be open to it, I think. I think that we as lawyers or professionals of the law, we all see that when technical advancements appear, then the law is always back, back, back. And when we decide to act, then the moment has passed. So I think that that is an important thing that we have to take into consideration also in this moment. Thank you.
Moritz Taylor: We have to go through some statements and we’re running out of time. So thank you, Jan, for the insightful question. I think that already caused some more neurons to fire. Next on the list of prepared statements was Redon Pilinci from Albania. Are they in the room or are they online? Next, Torsten Krause from the Stiftung Digitale Chancen, number 61. Give me the floor, it’s on. Number 62, yeah, okay.
Panelist: Thanks, hello. Thanks for your interesting presentations and statements. I’m Torsten Krause, I’m working as a political scientist and child rights researcher at the Digital Opportunities Foundation based in Berlin, Germany. And I would like to briefly introduce you to a legal concept implemented in the Youth Protection Act in Germany with the second amendment in 2021. It was, it is, personal integrity. And it puts kind of a third layer on top of the previously existing concepts of integrity. And you know, the first layer is the physical. So it’s not allowed to beat someone because it harms the physical integrity of a person. And the second layer means that it’s also not allowed to harm someone by words or bully someone because of the mental layer. And the legislator implemented the third layer. And it means also to protect the data in the digital environment, because through the data we’re presenting ourselves. So if someone is violating my data in the digital environment, he is violating me. So that’s the concept of personal integrity. And it was implemented four years ago. So it was meant to regulate, well, time is running really fast, it deals with already existing, collected data, and prohibits using this data to influence you in a particular direction. When we think about neurotechnologies, that levels it up, because it’s not existing data; it’s data that arises when I’m maybe not yet recognizing that I have this thought or this feeling, and in this moment I can be manipulated. And so I think, I’m not sure if we need to have a special category, but I think we need a kind of guarantee of a really broad understanding to protect the data and yeah, the personality of us as human beings. I hope that was helpful. Thanks.
Moritz Taylor: Thank you. Okay. Thank you, Torsten, for this great thing. I think a broad understanding of protection is clearly one of the things that is coming up, whether it is broadly understood specifically in the national legislation, it seems, or on a wider international context is also one of the questions that comes up quite often is how can national legislators interpret international rules. The next speaker prepared statement is from Amira Saber from the. I will swiftly move on then to Sana Bhatia from VIPS-TC, Kuram Shuktai from Enox Centre of Innovation, Transformation and Intelligence, Souheila Soulkia, and last on the list before I can open the floor is Karin Kaunas from DigiHumanism, Centre for AI and Digital Humanism. Well, Joao mentioned that someone would like to participate and ask a question from online, so I’d like to give them the floor please.
Online moderator: Yes indeed, so it was both a question during the keynote presentation and then some comments added to the panel. I will be the one reading the points raised from Siva Supramanian Muthusamy, and I’m sorry if I pronounced it incorrectly. The question was, will these frameworks and safeguards or even regulation work adequately well to sufficiently prevent negative aspects such as cognitive control and behavioral intervention? These positive aspects of neurotechnologies are not summarily opposed in this question. And then to the panelists, it was a remark that in the future, once neural electronics are in place, it merely requires the access or technical expertise to get into the network, and as easy as sending ones and zeros to someone’s brain or into that of a group of people to alter their behavior or even to trigger them, in theory, at least. Yeah, there was some buzzing, so we didn’t really hear a question as such. Let’s give it one go. Yeah, perfect. So I will straight go into the point again, and I’ll repeat, will these frameworks and safeguards or even regulation work adequately well to sufficiently prevent negative aspects such as cognitive control and behavioral intervention? And the follow-up remark was, in the future, once neural electronics are in place, it merely requires the access or technical expertise to get into the network, and as easy as sending ones or zeros to someone’s brain or into that of a group to alter their behavior or even to trigger them, in theory, at least.
Moritz Taylor: Thank you for your contribution from online. Before we answer questions, I was thinking that we can open the floor and collect one or two, so that we’re not constantly going back and forth. Were there any other people who wanted to ask a question? At this point, number 007, James Bond, please.
Panelist: Hello, I am George from HRI, Civil Society. I would like to ask, as brain data becomes increasingly valuable for governments and tech companies, how do we avoid a future where the right to mental privacy is sacrificed for profit or control? And on the other hand, if someone’s thoughts can be decoded and stored, where do we draw the line between consent and surveillance in a neural age? Thank you.
Moritz Taylor: Actually, that sounds like a very exciting question I want to hear answers to immediately. It highlights the risks very well.
Damian Eke: Yeah, maybe I will go first, but I will also try to address the first question from online, which was, are these legal frameworks adequate enough to address some of the risks, some of the issues, that neurotechnology raises. And this is maybe a question against a rights-based approach to governance. Is it always the best approach to governing neurotechnology or any technology at all, including AI? There are so many strategic paradigms of governance of technology. One is a rights-based approach. Another is a value-based approach. Because technologies are value-laden. They’re not neutral. But the question is, whose values are embedded in these technologies? Looking at the value-based approach in some regions might be the best way to govern these technologies rather than the rights-based approach because of the diversity of interpretations or implementations of human rights. So when we look at the values that should inform the technology, if they are adequately embedded in the systems, then they can address the issues. One reason why I’m pointing this out is oftentimes when we have these discussions in Europe or the global North, we forget that some of the values that shape technologies are not understood the same way in all regions. They are not interpreted the same way in all regions. Whether it is privacy, in Europe we’ll think about individual privacy. Maybe in some communities in Africa we’ll think about collective privacy. But the GDPR is informed by individual privacy, the concept of individual privacy. So understanding governance of these technologies from the value-based approach is sometimes something that we need to consider in order to address some of the culturally aligned issues that these technologies raise.
Moritz Taylor: Thank you, Damien. I think sitting in Europe, and it’s a regional and European approach generally in preparation for the IGF, etc., I think there’s always a danger of forgetting that the global majority is not Europe and that the approaches are indeed very, very different, not even that far away from Europe. If we want to have global standards, then we need to take other people’s standards into account as well. So definitely a good first answer. Petra, if you wanted to add something.
Petra Zandonella: Thank you also for pointing out that we live in the North and in Europe. But I will come back to the questions first of all. Do we address the legal challenges? I guess yes and no. And as you mentioned before, there’s actually really a need that we have a discussion and a debate such as we have the opportunity here, for example, or as it is also in UNESCO or in the UN. And we really need an interdisciplinary exchange. So coming back to your question, you said it’s about brain data. In our interdisciplinary group, we discussed a lot about what the data should be named, because our neuropsychologists say the best way to address most of the mental and cognitive and whatever states is to call them neurodata and then to talk about mental states, because cognitive states are part of mental states but are not every state. So I’m not the perfect person to answer that question what the name should be, because that should be an interdisciplinary question. And then you pointed out the consent. And that’s really a big issue. It’s already an issue when it comes to health data, because if you need something, it’s quite obvious that you will say, yeah, go for it, because I need it. And with brain, or with neurodata, it’s a similar point. And our neuropsychologists also say, or told us, that when there is something about the brain, we tend to believe everything. So if there is neuromarketing or neuroenhancement, we tend to believe the promises, and therefore there is no real consent, because if we don’t know about the limitations, it is not informed. Because it’s about the opportunities and about the limitations as well. And if we don’t know the limitations, that’s a big issue. And with the surveillance part, yeah, of course, absolutely. We are just about to finish a project on neurotechnologies in dementia. And of course, it’s in the healthcare sector and not in the commercial sector. And already there is a big issue that surveillance will be, and already is, an issue when it comes to neurodata. And in the commercial part, it’s even worse, because there is no actual need for neurotechnologies. So maybe I can also add something on European law, may I?
Moritz Taylor: Sure, but quickly.
Petra Zandonella: Yeah. And that’s, for example, the medical device regulation. I don’t know which of you are familiar with that regulation. And of course, neurotechnologies, when there is a medical purpose, are in the regulation. But the medical device regulation already addresses that neurotechnologies are a bit special, I would say, because in its Annex XVI, in point 6, they also mention neurotechnologies, but only a specific category of neurotechnologies. So non-invasive stimulation neurotechnologies are also within the scope of the medical device regulation. And that is an example that the regulators are a bit aware of the specificity of neurotechnologies. So thank you.
Moritz Taylor: Thank you very much. I’ll just move on because we have to collect other questions. Number 118, you have the floor. I’ll also collect another question, 195, afterwards. Thank you.
Panelist: Thank you. Lars Lundberger from the World Federalist Movement; I speak in my personal capacity. Three quick thoughts. Thank you for the insights and especially for the definition of neurotechnologies. Your focus on brain activity, I think, might be a bit too narrow. If you recorded my finger muscles, you would see that I’m a bit nervous. If you recorded my skin conductance, you would see that I’m sweating a bit. And if you looked at the pupil of my eye, you would probably also see that I’m a bit nervous. So brain activity in the narrow neural sense of intracellular and extracellular recordings will be insufficient to address the mental aspect. A large part of the discussion was about privacy, so the protection of data being read out. There was a remark on the altering of brain activity, which would be manipulation; I think that should be emphasized a bit more, and there is also the border to more traditional or conventional technologies like visual or acoustic stimuli. Overall, I very much liked the discussion about whether this is a new human right or a new challenge to existing human rights, and I think that discussion has to be continued in a multi-stakeholder approach that includes practitioners and policy makers, so engineers, neuroscientists and lawyers. Thank you.
Moritz Taylor: Thank you very much. 195, please.
Panelist: Thank you for giving me the floor, and thank you for a very interesting, and worrisome, debate. I’m Kristin from the University of Oslo, and my question relates to the human rights lens that you’ve analyzed this through, namely privacy; obviously there are major issues here connected to privacy. My question is whether any of you, in your work on this, have also encountered discussions of other rights, such as freedom of expression, mainly the right to freely form opinions, and freedom of thought, which is an absolute right. So my question is whether you have seen any discussions on this, specifically given what was presented earlier, that these technologies can also influence the mind and not only extract data from the mind.
Moritz Taylor: And just before I let you answer, I’ll also take the floor from number 100, please. You just press the round button next to the microphone.
Panelist: Am I audible? Yeah. Hi. I’m Ankita, I’m from India, and I’m a lawyer. Many of you mentioned that it is crucial to recognize the complex ethical and societal implications of neurotechnology and the processing of personal neurodata. But I would also like to hear the thoughts of the speakers and the panelists on accountability mechanisms across the entire lifecycle of neurodata processing, from collection to storage to analysis. Do you think that any particular stage should bear greater accountability than others? And if yes, why? Thank you.
Moritz Taylor: All right. Are you ready to answer already, Damian?
Damian Eke: I’ll try to answer the first one. Okay. So the discussion on neuro rights, and on whether we need a special category of data called neurodata, also involves discussions about freedom of thought and freedom of expression, because manipulation of neurodata can surely breach those rights.
Petra Zandonella: I totally agree with you. We also wrote a study for STOA, and we also mentioned these rights in part, although the focus was of course on privacy. So thank you over there for pointing out that we have had a rather limited understanding in the discussion on neurotechnologies so far, because a broader understanding of neurotechnologies is really needed. I totally agree with you. On the right understanding of neurotechnologies: UNESCO also has a really broad understanding of these technologies, and there is at the moment an ongoing debate in Paris on the recommendation on the ethics of neurotechnology, and we will see the outcomes soon, I guess. Fingers crossed that they will stay in.
Moritz Taylor: Do you want to add anything?
Damian Eke: I wanted to add something on the definition of neurodata, or brain data. It is actually a difficult concept to pin down, and I don’t think the debate on the definition of neurodata is going to end very quickly, because everything can be neurodata when you combine it with other sets of data. So setting limits on what we refer to as neurodata is important for governing it, and that is a critical discussion we all need to continue to have.
Petra Zandonella: Although if you have a legal definition of neurodata, that definition can also limit it. That is also why we need an interdisciplinary approach, including the ethics part. And you mentioned before that values should also be considered, and I totally agree. So maybe you can go further into that, if I may ask a question of Damian.
Moritz Taylor: Sure. Please add whatever you wanted to add. Okay, great. Answer, please.
Damian Eke: So in terms of values, just as I mentioned earlier, all technologies are value-laden. If it is not the values of the designers or developers, it will be the values of the deployers or the users that are embedded in these systems. But whose values are embedded in the neurotechnologies used worldwide? These technologies are being developed mainly in the Global North, and the values embedded in them are from the Global North. One instance would be EEG devices: I don’t think the value of usability for African populations was considered when they were being developed. So that is an important one. We might develop these technologies, but they are not generalizable to all populations of the world. And the idea that people’s values are neglected, or maybe relegated to the background, in the development of these technologies raises questions of coloniality embedded in these systems. I always point out that, in terms of principles of trustworthiness of AI and of technologies, what is missing is decoloniality as a requirement for trustworthiness. Because if these technologies are seen by the Global South as tools of epistemic dominance, that can lead to rejection or non-acceptability of these technologies. So we need to consider all these values.
Moritz Taylor: Thank you.
Ana Brian Nougrères: Okay, so I want to say thank you to everybody who made comments; they are all very, very welcome. Thank you so much. Just a few additional remarks. I totally agree with 118’s comment, and I feel that this is a good moment to begin that discussion. Your opinion, 061, was very interesting. Yours was quite to the point on the risk, very interesting too. And there was 195, which I would like to say something special about. Oh, and your opinion, 094, was very interesting. So thank you. Well, you asked whether another important issue has ever been brought to the discussion in terms similar to neurotechnology, and I would say no, at least not that I know of. But I would say that when we came to terms with artificial intelligence, and when we saw how it changed the world, we noticed that we could have done something before. At least we could have considered the possibility of bringing in ethics at any moment it was possible. At least that. We didn’t do it. So when a topic as important as neurotechnology is at stake, I personally think that we should not wait. We should try to study the point, consider possibilities, and try to come to terms in a multi-stakeholder way. That’s what I think. And as people who work with the law, I feel that we have to make the effort not to let the law come last. The law has to try to stay updated, and we have to try to be in the correct place at the correct moment. That’s what I think. Artificial intelligence changed the world; it changed it and continues changing it, and we don’t know when it’s going to finish or what is going to happen. So if you consider that plus neurotechnologies, you might get to the absolute-risk opinion of our colleague. So awareness is very important, and a way of getting to awareness is to bring these topics to our agendas. Not only to our own agendas, but to the agendas of all those who are involved in general.
Moritz Taylor: The microphone needs to be close to your mouth, please.
Ana Brian Nougrères: Oh, sorry.
Moritz Taylor: I think for the participants online it’s difficult to understand. OK, it was just the end anyway. I still have a statement or question from online, which will go first, and then also 463. My astigmatism is failing me; I think it’s 463. OK, I invite Samir Gallo, if he still wants to raise his question. I’m asking him to unmute. His mic may not be working.
Online moderator: I had a comment from him that, since it’s at the end, he is more than welcome to connect virtually. So perhaps he’s already at that stage.
Moritz Taylor: OK, well, then let’s give the floor to 463 first, and maybe he’ll come back in the meantime.
Panelist: Thank you. Thank you very much. And I want to thank the keynote speaker; it was a great intervention. Another frontier of knowledge, not so recent but very important, is human genome mapping. I want to know whether DNA data can be, or is, classified as neurodata and could be protected under the same kind of laws, or not. Thank you.
Ana Brian Nougrères: You know what I think? I think that we lawmakers all use lots of definitions; we need definitions to produce a law. But I think we are not yet at the moment to decide which definition is the correct one and which is not. We first need the multi-stakeholder study. In my personal opinion, I would say maybe, maybe; I don’t discard it. But I think that, before the classifications, we need to have discussions with everybody.
Moritz Taylor: Also, just before you answer: did the online participant come back? Then maybe just a very quick one, because we have to move on.
Damian Eke: Technically, no. I can imagine what would happen in the neuroscience research community if we introduced the idea that genetic data is now also neural data; the community would be up in arms. But I think what we are saying is that the special recognition genetic data has in regulation is something neural data needs as well, because it is still very confusing for a lot of people. If you have been involved in big projects, you know this: I was the data governance coordinator for the EU Human Brain Project, a big EU project with over 500 neuroscientists. When you introduce the idea of protecting neural data as special category data, they resist it. For them, this is not like genetic data, this is just research data; it’s difficult for them to understand that. But when we have specific awareness in the regulation…
Ana Brian Nougrères: It’s very difficult to classify it if we haven’t yet decided on the definition. And that is why, many times, you can see that laws referring to technological aspects always need to introduce some sort of thesaurus. So it is difficult, it is difficult.
Damian Eke: I completely agree with you, and there are so many things happening in this space at the moment: UNESCO with its guidance, the WHO, the OECD, and now the United Nations is also assembling an interdisciplinary team to discuss this. There will actually be a three-day workshop in Berlin on this, organized by the United Nations, and I will be part of that. These different initiatives are offering different definitions, which might become a bit of a problem; there needs to be harmonization.
Petra Zandonella: Yeah, absolutely. If there is no harmonization, it will be a struggle. That’s also why we need the discussions beforehand and not afterwards, because then it’s too late.
Moritz Taylor: Okay. Well, thank you so much to the panelists and our keynote speaker for your contributions and participation in the panel, and thank you to the audience for their very active participation. Before we finish the session at half past, we’ll move on to the messages of this session. Minda looks less than delighted with the messages she’s put together, but well, either way, you know, we’ll get some preliminary messages and then we’ll move on, yeah? I think everyone can take their seats again.
Ana Brian Nougrères: There’s a short version of my speech; if you are interested, we have it here, okay?
Moritz Taylor: Thank you. And please don’t leave, we’ll take a photo afterwards. Later though.