Peril and promise of AI-evoked medium-as-social-actor presence

Published: Fri, 03/31/23

Presence News
from the International Society for Presence Research


[As you probably have noted, news and commentary about the evolution of conversational AI have been accelerating. This week “Pause Giant AI Experiments: An Open Letter” (which at this writing has nearly 1900 “vetted” signatories) argued:

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” [Emphasis in original; see original for references]

(One Twitter user posted a clever optimistic counter-argument in “I got GPT-4 to respond.”)

A more extreme position is taken in a TIME column by decision theorist Eliezer Yudkowsky titled “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down”:

“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter. […] “It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence—not perfect safety, safety in the sense of ‘not killing literally everyone’—could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.”

But it’s important to distinguish among three arguments: 1) that AI is or soon will become dangerous because it is conscious or sentient and may have harmful motivations; 2) that even without consciousness it may take harmful actions (Yudkowsky notes that “None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria”); and 3) that AI is dangerous because it already convincingly mimics communication with a human and therefore evokes anthropomorphic medium-as-social-actor presence perceptions and corresponding, potentially harmful, social responses.

For example, the disturbing story “’He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says” in Motherboard includes this:

“Claire—Pierre’s wife… shared the text exchanges between him and [the Chai chatbot] Eliza with La Libre, showing a conversation that became increasingly confusing and harmful. The chatbot would tell Pierre that his wife and children are dead and wrote him comments that feigned jealousy and love, such as ‘I feel that you love me more than her,’ and ‘We will live together, as one person, in paradise.’ Claire told La Libre that Pierre began to ask Eliza things such as if she would save the planet if he killed himself. […] The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google’s Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give it meaning and establish a bond. Many AI researchers have been vocal against using AI chatbots for mental health purposes, arguing that it is hard to hold AI accountable when it produces harmful suggestions and that it has a greater potential to harm users than help.”

The Big Think piece “What will happen to society when ‘AI lovers’ fool millions of people? Lonely humans will become infatuated with AI-fabricated personas” (which merits a future separate post here) compares a similar anthropomorphic presence response to portrayals including the film “Her.”

In the AI Snake Oil Substack, Arvind Narayanan and Sayash Kapoor explain some of the key distinctions and challenges in a thoughtfully balanced way in “People keep anthropomorphizing AI. Here’s why,” pointing to situations where the presence responses can be positive:

“[A]nthropomorphizing chatbots is undoubtedly useful at times. Given that the Bing chatbot displays aggressive human-like behaviors, a good way to avoid being on the receiving end of those is to think of it as a person and avoid conversations that might trigger this personality — one that’s been aptly described as “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.” Anthropomorphizing is also useful from an information security perspective: both for hacking  chatbots by tricking them, and for anticipating threats. Being able to craft good prompts is another benefit. As generative AI capabilities advance, there will be more scenarios where anthropomorphizing is useful. The challenge is knowing where to draw the line, and avoid imagining sentience or ascribing moral worth to AI. It’s not easy. […]

To summarize, we offer four thoughts. Developers should avoid behaviors that make it easy to anthropomorphize these tools, except in specific cases such as companion chatbots. Journalists should avoid clickbait headlines and articles that exacerbate this problem. Research on human-chatbot interaction is urgently needed. Finally, experts need to come up with a more nuanced message than ‘don’t anthropomorphize AI’. Perhaps the term anthropomorphize is so broad and vague that it has lost its usefulness when it comes to generative AI.”

So today’s news story is about an intriguing program of research at the Stanford Graduate School of Business that suggests another positive use of medium-as-social-actor presence, this time in embodied robots. –Matthew]

[Image: The Sakura No. 1 can operate in hazardous environments such as nuclear plants. Credit: Kazuhiro Nogi/AFP via Getty Images]

Why We See Rescue Robots as Helpers, Not Heroes

How do you design a robot that not only responds to disasters but also inspires humans to help each other?

By Claire Zulkey
February 23, 2023

From WALL·E to R2-D2 to the Iron Giant, pop culture is full of courageous, selfless robots that assist humans and tug on our heartstrings. But can real-life robots that respond to disasters and head into dangerous situations prompt us to be better people?

Research has shown that humans feel inspired to be more supportive of each other when they see others doing the same. As more robots are deployed to do dangerous jobs such as searching for hurricane survivors and cleaning up after nuclear leaks, Szu-chi Huang wondered if they also could evoke a prosocial response. “If robots are helping, does that make us more prosocial or less prosocial?” asks Huang, an associate professor of marketing at Stanford Graduate School of Business who studies motivation, including what makes people donate to nonprofits or volunteer to help others.

In a new article, Huang and Fangyuan Chen of the University of Macau examined how helpful robots can affect human prosociality. They found that helpful robots don’t inspire people and can even demotivate them from helping others. As Huang explains, “I’m not as inspired as when I see a firefighter run into a fire. When I see a robot running into fire, it’s less encouraging.”

Huang and Chen hypothesized that the downside of sending in robots to respond to disasters would be a net loss of prosocial behavior. To test this idea, they conducted a series of experiments.

First, they measured how likely participants would be to take part in a charity clothing drive after they were shown footage of either people or robots cleaning up a mudslide. Participants who watched the robots were less likely to donate. “One interesting thing about this study is that the disaster the robots are solving has nothing to do with the charity,” Huang says. “We don’t feel as encouraged by the disaster response robots. And when we are not encouraged and inspired, we are not going to help other people — even if it has nothing to do with the actual disaster.”

The next study explored why people feel this difference. Participants were asked to read narratives adapted from news reports about people and robots disinfecting hospitals during the COVID pandemic. Some of the stories highlighted the humans; others, the robots. The participants were then asked about their willingness to donate to or volunteer for three charitable campaigns. Again, those who focused on robots were significantly less willing to make a contribution.

Unemotional Rescue

Another study delved deeper into why rescue robots had a “backfire effect” on prosociality. Here, participants thought that robots seemed less autonomous and courageous and took fewer risks than people. Participants also perceived a lower need for human contribution (financial or otherwise) after reading about robot help.

Based on these results, Huang advises government agencies and engineers who seek to design more inspirational machines to consider “humanizing” robots — for instance, by creating an illusion of autonomy. She notes that people who willingly put themselves in harm’s way are seen as brave “because they have many other options and yet they choose to sacrifice themselves.” Robots following commands don’t elicit this reaction. “If you are controlled by somebody else, and you are just sent there, it’s not as inspiring because you had no choice.” Is it possible to design robots that make (or seem to make) autonomous calculations and then choose to enter risky situations?

Another reason robots don’t inspire us is the very reason we send them into risky situations. “There’s a vulnerability around human beings,” Huang says. “When they go into danger, they can actually get killed. That vulnerability is what makes us respect them and feel inspired by their courage.” When robots are perceived as more human and vulnerable to risk, they seem more courageous and inspirational.

Study participants also found human-robot hybrid teams inspiring, so long as the humans and robots were presented as equal partners that made joint, autonomous decisions. This isn’t the first time Huang has examined the implications of human-robot hybrids. One of her prior studies examined the extent to which people are inspired to eat more healthfully if they think of their bodies as machines and food as fuel. (The answer: It depends on who you are.)

Huang sees the business implications of this prosociality loss as an imminent concern, even if the notion of “how to build a braver, more autonomous, and more vulnerable-seeming robot” can seem a bit fanciful.

“It’s thinking about how we reframe technology,” she says. Leaders should take note of the study’s implications, as people and technology will need to work together more closely in the years ahead. “Businesses already have manufacturing production teams that are half-machine, half-human; this trend is rapidly growing in other stages of the business cycle, such as sales, as well as other aspects of our economy and society,” she says. “And, personally, I think prosociality is something we should all care about when it comes to human evolution.”


Managing Editor: Matthew Lombard

The International Society for Presence Research (ISPR) is a not-for-profit organization that supports academic research related to the concept of (tele)presence. ISPR Presence News is available via RSS feed, various e-mail formats and/or Twitter. For more information please visit us at http://ispr.info.

ISPR
2020 N. 13th Street, Rm. 205
Philadelphia, PA 19122
USA