Digital, virtual, and immersive environments have come a long way since their inception.
At the same time, with the recent release of Artificial Intelligence (AI) based tools by OpenAI, Google, Chatsonic, and many other companies, trust in AI has become a fundamental topic to unpack.
Considering that an AI tool like ChatGPT reached 100 million active users within two months of its release, a hyped and excited community around the world has started using AI to generate marketing, educational, and professional content, often with little regard for the intellectual property, ethical, and moral implications.
Bringing the conversation around AI back to the domain of digital, virtual, and immersive environments, a series of implications emerges from the use of this new, disruptive conversational AI technology.
How would you feel if, tomorrow, you were chatting with an avatar whose dialogue is generated by an AI tool?
A recent study by the research group directed by Prof. Anthony Steed at UCL (University College London) shows that integrating external (open-source) services with social virtual reality (VR) has the potential to enhance the way we collaborate within virtual environments.
With the Ubiq-Genie project, the researchers extended the open-source Ubiq social VR platform with a modular framework for server-assisted social VR applications.
Two prototype collaborative applications were created to showcase the potential of Ubiq-Genie in the context of generative AI: an embodied conversational agent based on ChatGPT and a voice-controlled texture generation method based on Stable Diffusion 2.0.
The first prototype lets users collaboratively interact with a conversational agent through voice prompts in a shared virtual environment. The agent is represented by a robot-like avatar and can engage in multi-party conversations, handling everything from simple questions to complex requests involving other users. Its responses are spatialized and accompanied by hand gestures and a visual speech indicator, and the agent turns towards whoever is speaking to signal that it is listening.
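To make this architecture concrete, here is a minimal sketch of what such a server-assisted conversational loop could look like. It assumes the OpenAI chat API and leaves the speech-to-text and text-to-speech stages as placeholder stubs; the names and structure are illustrative, not the actual Ubiq-Genie implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Shared conversation state, so the agent can follow multi-party dialogue.
history = [{
    "role": "system",
    "content": "You are a robot-like assistant in a shared VR room. "
               "Several users may speak to you; address them by name.",
}]

def transcribe(audio_chunk: bytes) -> str:
    # Placeholder stub: a real system would call a speech-to-text service.
    raise NotImplementedError

def synthesize_speech(text: str) -> bytes:
    # Placeholder stub: a real system would call a text-to-speech service
    # and spatialize the resulting audio at the avatar's position.
    raise NotImplementedError

def handle_voice_prompt(user_name: str, audio_chunk: bytes) -> bytes:
    """Turn one user's voice prompt into the agent's spoken reply."""
    text = transcribe(audio_chunk)
    history.append({"role": "user", "content": f"{user_name}: {text}"})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return synthesize_speech(answer)
```

Prefixing each message with the speaker's name is one simple way to let a single language model keep track of who said what in a multi-party conversation.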
This study provides a great example of how a robot-like avatar can take part in multi-party conversations alongside avatars embodied by human beings, and it shows that AI-based tools open up near-limitless opportunities to build new content, animate avatars, generate conversations, and much more.
However, with these disruptive technologies we risk triggering a phenomenon called the "uncanny valley", first described by roboticist Masahiro Mori in the domain of robot development.
The uncanny valley phenomenon can be described as an eerie or unsettling feeling that some people experience in response to not-quite-human figures like humanoid robots and lifelike computer-generated characters.
The phenomenon arises when a figure violates our expectations or norms of what humans, robots, or avatars should look like, and it can erode trust in the system a person is interacting with.
The Ubiq-Genie project shows how thin the boundary is between trust, safety, and the predictability of an avatar's behavior, how easily the uncanny valley can be triggered, and how many questions about the impact of AI in the metaverse remain unanswered.
How might we trust AI-generated content in the metaverse?
Trust is an essential factor in building positive relationships, whether between humans or between humans and technology.
To establish trust in AI applied to the metaverse, it is crucial to understand how different groups and organizations perceive trust in technology.
For example, the pharmaceutical industry trusts AI to personalize medications and enhance patient care, but the technicians who operate these systems must also trust them in order to use them effectively.
One central aspect of building trust is explainability: the ability to explain how an AI system makes its decisions.
In the context of digital, virtual, and immersive environments, explainability gives users the information they need to develop trust, and it can help mitigate the risks associated with a lack of trust in AI.
For example, if an AI system makes a decision that negatively impacts a user, explainability can help the user understand why the decision was made, leading to increased trust in the system.
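As a concrete illustration, here is a minimal sketch, with hypothetical names, of a decision object that carries a human-readable rationale alongside the decision itself, so that a user affected by the outcome can see why it was made:

```python
from dataclasses import dataclass

@dataclass
class ExplainedDecision:
    outcome: str       # what the system decided
    rationale: str     # human-readable reasons behind the decision
    confidence: float  # how certain the system is, from 0.0 to 1.0

def moderate_avatar_asset(description: str) -> ExplainedDecision:
    """Hypothetical content check for an AI-generated avatar asset."""
    restricted = ["weapon", "hate symbol"]
    hits = [term for term in restricted if term in description.lower()]
    if hits:
        return ExplainedDecision(
            outcome="rejected",
            rationale=f"The description matched restricted terms: {hits}.",
            confidence=0.9,
        )
    return ExplainedDecision(
        outcome="approved",
        rationale="No restricted terms were found in the description.",
        confidence=0.8,
    )

decision = moderate_avatar_asset("a friendly robot avatar")
print(decision.outcome, "-", decision.rationale)
```

Even this toy example changes the user experience: a rejection arrives with its reasons attached rather than as an unexplained verdict.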
Closely related to explainability is the distinction between automation and augmentation.
Automation refers to the complete replacement of human decision-making with AI. Augmentation, on the other hand, refers to using AI to support human decision-making. Augmentation is generally seen as a safer approach to AI as it allows human oversight of AI decisions. However, it is essential to understand that augmentation does not entirely eliminate the risks associated with AI.
A human-in-the-loop approach keeps a person inside the AI decision-making process: humans can review, correct, or veto the system's decisions before they take effect. This oversight makes the approach safer, although it also introduces the risk of human error.
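As a sketch of what such a human-in-the-loop gate could look like in practice (the names here are hypothetical), the AI only proposes an action together with its rationale, and a human reviewer must approve it before anything is applied:

```python
def execute(action: str) -> None:
    # Placeholder for the real effect, e.g. applying an AI-generated
    # texture or animation inside a shared virtual environment.
    print(f"Applying: {action}")

def apply_with_oversight(action: str, rationale: str) -> bool:
    """Ask a human reviewer to approve an AI-proposed action."""
    print(f"AI proposes: {action}")
    print(f"Rationale:   {rationale}")
    verdict = input("Approve? [y/N] ").strip().lower()
    if verdict == "y":
        execute(action)
        return True
    print("Proposal rejected; nothing was changed.")
    return False

apply_with_oversight(
    action="retexture the meeting room walls with a forest scene",
    rationale="Generated from a user's voice prompt via a diffusion model.",
)
```

Note how the gate combines the two ideas above: the rationale makes the proposal explainable, and the approval step keeps the human in control. The residual risk is that the reviewer approves a bad proposal, which is the human-error risk mentioned earlier.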
Trust in AI for the metaverse is essential to building relationships between humans and digital, virtual, and immersive environments.
The ethical questions around trusting AI in the metaverse still need to be uncovered and answered, and the technologies themselves developed and deployed responsibly, in order to de-risk tools that could have a massive impact on the behavior and decisions of their users.
At the Metavethics Institute, through our global network of passionate scientists, thought leaders, and industry experts, we are working on these and other implications, with the goal of ensuring that metaverses and digital, virtual, and immersive environments are supported by AI tools while delivering integrity, privacy, safety, diversity, equity, accessibility, and inclusion.
Do you want to become a Metavethicist?