[Japanese roboticist Masahiro Mori and other experts comment on the application of the uncanny valley phenomenon to the shared virtual spaces of a metaverse in this story from The Mainichi. An introduction to the uncanny valley in a November 2021 story from Discover includes this passage, which gives examples and refers to a new paper on competing explanations for the phenomenon:
“When we spoke by Zoom, [Karl] MacDorman, a world-renowned expert in human-computer interaction, showed me several robots that tend to drop into the uncanny valley but do not come close to being humanlike. One was less humanlike than R2-D2, but still managed, perhaps by virtue of its appearing to have four eyes, to evoke the uncanny feeling in many observers. ‘You can get into the uncanny at all levels of humanness,’ MacDorman says. Take, for example, the Telenoid, a teleoperated android designed by Japanese roboticist Hiroshi Ishiguro. The Telenoid is designed to be so abstract that you can project onto it whatever you think it is or need it to be — male or female, young or old. ‘This is something designed not to be uncanny,’ says MacDorman. ‘But it is kind of uncanny.’
On the other hand, some robots that do not evoke the uncanny at first pass can still create strange effects. Ishiguro created an android based on his nine-year-old daughter. The child was unfazed when she met her android doppelgänger. It was not uncanny at all. But that night, she had nightmares. The valley is a different place for different people.
It’s not at all clear why people respond in this way to robots that are almost — but not quite — humanlike. MacDorman says he hasn’t counted but suspects there could be as many as two dozen theories. This April, he and Alexander Diel, a psychology researcher at Cardiff University in Wales, published a paper that organized the current explanations into nine categories. These range from novelty avoidance to threat avoidance. One leading theory, and one that MacDorman thinks may be at least partially behind the experience, is perceptual mismatch. ‘I think [the uncanny-valley effect] is probably caused by some features appearing human and other features not appearing human,’ he says.”
And a December 2021 story in Big Think adds this:
“Understanding how the uncanny valley works is often the first step in circumnavigating it. Interactions, a bi-monthly magazine on design and engineering, outlines a number of methods animators can use. They advise animators to ‘steer clear of atypicalities at high levels of realism,’ and point to the large, anime-style eyes plastered on the heroine of Alita: Battle Angel as an example.”
–Matthew]
Is the ‘uncanny valley’ good for a future metaverse?
January 7, 2022
TOKYO (Kyodo) — It has been over five decades since Japanese roboticist Masahiro Mori developed a theory describing the eerie or uneasy feeling people experience in response to humanoid robots that closely, but not perfectly, resemble human beings.
Labeled the “uncanny valley” by Mori in 1970, the phenomenon has stood the test of time with more recent examples of creepiness filtering into the burgeoning fields of artificial intelligence, photorealistic computer animation, virtual reality, augmented reality, and increasingly lifelike androids.
But what happens on the other side of the valley, as resemblance to humans is perfected? Some researchers worry that as “trusted” virtual humans become indistinguishable from real people, we open ourselves to more manipulation by platform providers. In other words, our responses while still in the uncanny valley, as creepy as they can be, could be a good thing — a kind of self-defense mechanism.
Mori, now 94, a professor emeritus of the Tokyo Institute of Technology who retired in 1987, originally plotted his uncanny valley hypothesis in a graph, showing an observer’s emotional response against the human likeness of a robot.
He stated that as a robot’s appearance is made more humanlike, there is a growing affinity for it, but only up to a point, beyond which the person experiences a reaction of extreme disgust, coldness, or even fear, shown by a plunge into the valley.
But as the robot becomes more indistinguishable from a real person, positive emotions of empathy similar to human-to-human interaction emerge once more. The disconcerting void between “not-quite-human” and “perfectly human” is the uncanny valley.
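Mori’s graph is easy to visualize. Below is a minimal sketch in Python of a curve with the qualitative features he described: affinity rising with human likeness, plunging into a valley just short of full likeness, then recovering. The formula and its parameters are illustrative assumptions, since Mori drew the graph by hand rather than from a fitted equation.

```python
# Illustrative model of Mori's uncanny valley graph. The rising term and
# the Gaussian "valley" centered near full likeness are assumptions chosen
# to reproduce the qualitative shape, not values from Mori's 1970 essay.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)   # 0 = purely mechanical, 1 = fully human
affinity = likeness ** 2                 # growing affinity as likeness increases
valley = 1.4 * np.exp(-((likeness - 0.85) ** 2) / (2 * 0.04 ** 2))
affinity -= valley                       # the plunge into negative affinity

plt.plot(likeness, affinity)
plt.axhline(0, color="gray", linewidth=0.5)
plt.xlabel("Human likeness")
plt.ylabel("Affinity (observer's emotional response)")
plt.title("Sketch of the uncanny valley (illustrative, not Mori's data)")
plt.show()
```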
With tech companies led by Mark Zuckerberg’s Meta Platforms Inc. staking a claim on the creation of a metaverse — viewed as the internet’s next iteration “where people can work and socialize in a virtual world” — some experts say the uncanny valley graph is just as pertinent in immersive environments, including in VR and AR.
While we have become accustomed to interacting with “low-fidelity versions of human faces going back to the early days of TV,” we will have the ability to project photorealistic humans in 3D virtual worlds before the end of this decade, Louis Rosenberg, a 30-year veteran of AR development and CEO of Unanimous AI, recently told Kyodo News in an interview. How will we determine what is real?
“Personally, I believe the greatest danger of the metaverse is the prospect that agenda-driven artificial agents controlled by AI algorithms will engage us in ‘conversational manipulation’ without us realizing that the ‘person’ we are interacting with is not real.”
In a corporate-controlled metaverse featuring “virtual product placement,” we could easily think we are simply having a conversation with a person like ourselves, causing us to drop our defenses. “You won’t know what was manipulated to serve the agenda of a paying third party and what is authentic.”
This is dangerous because “the AI agent that is trying to influence us could have access to a vast database about our personal interests and beliefs, purchasing habits, temperament, etc. So how do we protect against this? Regulation,” Rosenberg said.
Mori himself has said designers should stop before the first peak of the uncanny valley and not “risk getting closer to the other side,” where robots — and now, by extension AI or AR — become indistinguishable from humans.
Applying his theory to the virtual world of the metaverse, Mori recently told Kyodo, “If the person (in the real world) understands that the space they are in is imaginary, I do not think this presents a problem, even if it is creepy.”
But if the person is unable to distinguish reality from a virtual world, this itself will be a problem, he said, adding that the “bigger issue” is if bad actors misuse the technology for malicious purposes, comparing it to a sharp implement that can be used either as a “dagger” to kill or a “scalpel” to save someone.
In her research, Rachel McDonnell, an associate professor in Creative Technologies at the School of Computer Science and Statistics at Trinity College Dublin, poses the question, “Should we tread softly across the uncanny valley” with virtual humans?
She says while virtual humans have almost reached photorealism, “their conversational abilities are still far from a stage where they are convincing enough to be mistaken for a real human converser.”
A longtime proponent of making virtual humans more realistic, she says the biggest dangers now are “AI-driven video avatars or deepfake videos, where convincing videos can be created of one human, driven by the motion and speech of another.”
But she adds: “Transparency around how avatars and videos are created will help overcome some of the ethical challenges around privacy and misrepresentation.” She gives an example of attaching a watermark to distinguish deepfakes from authentic video content.
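One hedged way to picture the watermarking idea McDonnell mentions: a creator computes a cryptographic tag over the finished video and publishes it alongside the file, so any later alteration is detectable. The key, the function names, and the sidecar-tag scheme below are assumptions for illustration only; real provenance efforts such as C2PA embed signed manifests in the media itself.

```python
# A minimal sketch of a provenance "watermark" as a sidecar HMAC tag.
# SIGNING_KEY and both helper functions are hypothetical, not from any
# system described in the article.
import hmac
import hashlib
from pathlib import Path

SIGNING_KEY = b"creator-held-secret"  # hypothetical key held by the video's creator

def sign_video(path: str) -> str:
    """Compute a hex HMAC-SHA256 tag over the video file's bytes."""
    return hmac.new(SIGNING_KEY, Path(path).read_bytes(), hashlib.sha256).hexdigest()

def verify_video(path: str, tag: str) -> bool:
    """True if the file still matches its tag; a mismatch flags alteration."""
    return hmac.compare_digest(sign_video(path), tag)
```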
Rosenberg, meanwhile, outlines various forms of regulation to keep the metaverse safe, such as informing users when they are engaging with a virtual persona.
“It could be that they are all required to dress a certain way, indicating they are not real, or have some other visual clue. But, an even more effective method would be to ensure that they actually don’t look quite human as compared to other users.”
That is, regulation could ensure virtual humans trigger the “uncanny valley” response deep within our brains, he said. “This is the most effective path because the response within us is visceral and subconscious, which would protect us most effectively from being fooled.”
Meta, the social media giant formerly known as Facebook that has rebranded to focus on the metaverse, has been under fire in recent years for spreading disinformation, mishandling users’ data, and using algorithms that end up sowing discord and distrust on the internet, where users cling to their own “facts.”
On Dec. 9, Meta launched the cartoonlike Horizon Worlds to people 18 and older in the United States and Canada as Zuckerberg’s first attempt at his vision of an “embodied internet,” where avatars of real people will share a virtual space.
Christoph Bartneck, an associate professor at the University of Canterbury in New Zealand, says that the metaverse, a name taken from the 1992 sci-fi novel “Snow Crash” by Neal Stephenson, is not a new concept, and for now, merely fiction.
“It is a sign of a lack of originality that Facebook resorts to promise another virtual world. It seems like a gigantic distraction maneuver to take our attention away from all the bad influence that Facebook and its products have on society,” he said.
In 2021, Meta announced it would spend at least $10 billion on its metaverse division to create AR and VR hardware, software, and content. Other tech companies, including Microsoft and video game and software developer Epic Games, have jumped on the bandwagon, while Nike Inc. has launched Nikeland, featuring virtual sneakers, on video-game platform Roblox.
Unanimous AI’s Rosenberg says making the metaverse seem “uncanny,” i.e., not quite real, is easier than we might think, since it comes down to how our perception assigns authenticity to experiences. “It turns out very small changes can make a big difference,” he said.
Ameca, a recently unveiled AI robot from British design and manufacturing company Engineered Arts, is described as “the perfect platform to develop interaction between us humans and any metaverse or digital realm.” With remarkably humanlike facial expressions, it appears astonished to be “awake,” perplexed and eerily amused.
“In the metaverse, the simplest thing — like how a virtual persona’s eyes move, or hair moves, or even just the speed of their motion (do they take longer to move than an actual human?) is enough to make them seem deeply unreal,” Rosenberg said, adding that regulation should require that artificial agents be distinguishable from others since this would be easy to achieve.
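As a hedged sketch of the kind of motion cue Rosenberg describes, a platform could throttle AI-driven avatars so they never move at quite human speed. The Avatar class, the update loop, and the 0.8 slowdown factor below are all hypothetical, not part of any real metaverse API.

```python
# Illustrative sketch: artificial agents get a consistent motion handicap
# so our visceral "uncanny" response can flag them as not quite human.
from dataclasses import dataclass

AI_AGENT_SPEED_FACTOR = 0.8  # hypothetical: AI avatars move 20% slower

@dataclass
class Avatar:
    position: float   # 1-D position, for simplicity
    target: float
    is_ai_agent: bool

    def update(self, dt: float, speed: float = 1.0) -> None:
        """Step toward the target; AI agents are throttled to stay visibly unreal."""
        if self.is_ai_agent:
            speed *= AI_AGENT_SPEED_FACTOR
        step = min(abs(self.target - self.position), speed * dt)
        self.position += step if self.target >= self.position else -step
```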
McDonnell, meanwhile, says she remains optimistic that realistic virtual humans will have a positive impact on society in a future metaverse. The benefits she cites include preserving users’ privacy in sensitive situations, such as for whistleblowers or witnesses testifying in court, and helping people overcome phobias, racial bias, and even conditions such as post-traumatic stress disorder.
“There is a huge potential for the use of virtual humans for good,” she said.
In experiments, her research team found that participants in survival-task games “generally trusted” virtual agents that had suggested a ranking of objects vital for survival in hypothetical crash scenarios, “but small manipulations of the agents’ facial expressions or voice could influence the level of trust,” she said.
The notion of the uncanny valley as a defense mechanism dates back to Mori’s 1970 essay, which described it as a “self-preservation instinct” that protects us not from lifeless objects that look different from us, but from things that are “exceedingly similar, such as corpses and related species,” noted Karl F. MacDorman, an associate professor in the School of Informatics and Computing at Indiana University.
As for Mori, who has said he never intended the uncanny valley to be a rigorous scientific theory but more a caveat for robotic designers, his message about the metaverse is simple.
“I hope (those) involved in creating it will make something healthy for the happiness of humanity,” Mori said.