The Future of XR Under the AI Act: A Path Towards Ethical Innovation
With extended reality (XR) technology rapidly evolving, there’s a growing need for legislation that ensures the responsible use of AI within XR applications.
One of our team members, Bianca, attended AWE XR Europe 2024 and reported on an inspiring talk by Andrea Bravo, founder of the Metaverse Data & Ethics newsletter. Andrea led a thought-provoking panel on how the newly finalised AI Act will impact innovation, privacy and ethics within the context of XR.
The panel included experts such as Data Scientist Iveta Lohovska, Privacy Ethics Expert Chad Wollen, Union Avatars CEO Cai Felip and The Metaverse Society Founder Marine Boulot. Together, they explored the AI Act’s implications, emphasising how this new legislation provides a framework to guide ethical practices during the conception and development phases of new technologies.
Understanding the AI Act: A Risk-Based Framework to Mitigate Negative Externalities
The AI Act introduces a four-tier risk framework, ranging from “unacceptable risk,” where certain use-cases are banned outright, down to “minimal risk.” This tiered approach is crucial to safeguarding human rights and ensuring that innovative technologies do not overstep ethical boundaries within the European Union’s territory.
Use-cases recognised as posing an unacceptable risk, and consequently banned, include:
Social scoring systems, such as those seen in China;
Emotional profiling, which can lead to biased assumptions about users;
Predictive algorithms that estimate a person’s likelihood of committing a crime based solely on profiling or personality traits.
AI applications and use-cases that were deemed high risk include:
HR processes like automated CV screening;
Educational and vocational use-cases;
Biometrics, which in XR could include the processing of eye-tracking data, voice sampling and recognition, or gestures;
Critical infrastructure, health, and safety-related applications that could affect users’ physical and/or mental well-being;
Any data-driven technology that could impact EU citizens’ human rights, privacy and/or personal autonomy.
The Intersection of XR and AI
The link between AI and XR lies in the data these technologies collect and utilise. The purpose of the AI Act is to ensure developers consider the ethical implications of data usage—such as how data is stored, analysed and applied to user experiences.
By setting up a regulatory “sandbox,” the AI Act allows developers to collaborate with policymakers during the design process, helping to mitigate potential harms before they reach users.
A notable point raised during the discussion was the risk of allowing AI-driven customisation in XR.
When AI makes decisions on behalf of users—often in the name of “personalisation”—it becomes the lens through which users view their virtual world.
The panel cautioned that this could lead to problematic outcomes similar to the echo chambers seen on certain social media platforms, where algorithm-driven content reinforces existing beliefs and isolates users from diverse perspectives. Over time, such a bubble effect could foster anti-social tendencies and discourage critical thinking.
Children, in particular, are a focus within the AI Act, as they represent a vulnerable group that is especially susceptible to external influence. The panel noted that AI used in educational XR settings should involve adult supervision to prevent undue manipulation and to safeguard young users’ development.
Establishing Trust: An Important Step Forward for Businesses
Throughout the conversation, the panel emphasised that regulation is key to building trust between users and emerging XR companies. Complying with these regulations also helps companies uphold their image and brand values.
By upholding a consistent set of ethical standards, the AI Act offers a common code of conduct to ensure that the use of AI within the XR industry aligns with societal values. Chad Wollen highlighted the importance of placing user experience and rights at the forefront, stressing that informed consent and transparency are essential to empower users to make informed choices about their data. Cai Felip agreed, advocating for “real options” that give users full control.
Another notable aspect discussed was the possibility of continuously updating the AI Act over time, setting it apart from other legislation that often struggles to keep up with rapidly evolving technology.
The talk concluded on a crucial note during the Q&A session, when a developer raised concerns that the law’s restrictions might stifle innovation, a view partly shaped by an American perspective on regulation. Chad responded by reminding the audience that “We are more than markets and transactions. Europe starts as social by giving the power to communicate to its people. We safeguard people’s rights and wellbeing first. That’s far more important than transactions”. This exchange highlighted a deeper clash between differing social, cultural and political philosophies, one that also surfaced in similar debates surrounding the GDPR, to which the AI Act is subordinate.
The key takeaway remains that the AI Act is not intended to intimidate or dictate how XR developers build their products. Rather, it represents a crucial step forward in protecting European citizens—both present and future—by providing a safety net against dark UX patterns and exploitative technologies.
We at the Metavethics Institute are thankful to Bianca for her amazing work 👏