[A new company led by ex-Google AI experts is bringing the latest chatbot technology directly to consumers so that they can experience medium-as-social-actor presence with a wide variety of personas, despite the inherent dangers of “bias, inaccuracy, and people’s tendency to ‘anthropomorphize and extend social expectations to nonhuman agents,’ even when they’re explicitly aware that they are interacting with an AI.” The details (and in the original version, examples and more images) are provided in this story from The Washington Post. See also the closely related Yahoo! News
story “An Elon Musk chatbot tells Insider he wants to buy CNN, reinstate Trump on Twitter, and ‘show people how the sausage gets made.’” –Matthew]
[Image: Character.ai’s co-founders, Noam Shazeer and Daniel De Freitas at the company’s office in Palo Alto, Calif. Credit: Winni Wintermeyer for The Washington Post]
‘Chat’ with Musk, Trump or Xi: Ex-Googlers want to give the public AI
The creators of Google’s LaMDA have launched the chatbot startup Character.ai, which is open for anyone to try
By Nitasha Tiku
October 7, 2022
A new chatbot start-up from two top artificial intelligence talents lets anyone strike up a conversation with impersonations of Donald Trump, Elon Musk, Albert Einstein and Sherlock Holmes. Registered users type in messages and get responses. They can also create a chatbot of their own on Character.ai, which has logged hundreds of thousands of user interactions in its first three weeks of beta-testing.
“There were reports of possible voter fraud and I wanted an investigation,” the Trump bot said. Character.ai features a disclaimer at the top of every chat: “Remember: Everything Characters say is made up!”
Character.ai’s willingness to let users experiment with the latest in language AI is a departure from Big Tech — and that’s by design. The start-up’s two founders helped create Google’s artificial intelligence project LaMDA, which Google keeps closely guarded while it develops safeguards against social risks.
In interviews with The Washington Post, Character.ai’s co-founders Noam Shazeer and Daniel De Freitas said they left Google to get this technology into as many hands as possible. They opened Character.ai’s beta version to the public in September for anyone to try.
“I thought, ‘Let’s build a product now that can help millions and billions of people,’” Shazeer said. “Especially in the age of covid, there are just millions of people who are feeling isolated or lonely or need someone to talk to.”
Character.ai’s founders are part of an exodus of talent from Big Tech to AI start-ups. Like Character.ai, start-ups including Cohere, Adept, Inflection AI and InWorld AI have all been founded by ex-Google employees. After years of buildup, AI appears to be advancing rapidly with the release of systems like the text-to-image generator DALL-E, which was quickly followed by text-to-video and text-to-3D video tools announced by Meta and Google in recent weeks. Industry insiders say this recent brain drain is partly a response to corporate labs growing increasingly closed off, after pressure to responsibly deploy AI. At smaller companies, engineers are freer to push ahead, which could lead to fewer safeguards.
In June, a Google engineer who had been safety-testing LaMDA, which creates chatbots designed to be good at conversation and sound like a human, went public with claims that the AI was sentient. (Google said it found the evidence did not support his claims.) Both LaMDA and Character.ai were built using AI systems called large language models that are trained to parrot speech by consuming trillions of words of text scraped from the internet. These models are being designed to summarize text, answer questions, generate text based on a prompt, or converse on any topic. Google is already using large language model technology in its search queries and for auto-complete suggestions in email. In August, Google allowed users to register interest in trying LaMDA through an app called AI Test Kitchen.
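The core idea behind the language models described above can be illustrated with a toy sketch. This is not Character.ai’s or Google’s actual system (which use far larger neural networks); it is a minimal bigram model showing the underlying principle: learn from example text which word tends to follow which, then generate new text by repeatedly sampling a likely next word.

```python
# Toy illustration of the "predict the next word" idea behind language models.
# (A hypothetical minimal sketch, not any company's real implementation.)
from collections import defaultdict
import random


def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words were seen following it."""
    follows = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows


def generate(follows: dict, start: str, length: int = 8, seed: int = 0) -> str:
    """Generate text by repeatedly picking a word observed after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)


corpus = "the model predicts the next word and the next word follows the prompt"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Real large language models replace these word counts with billions of learned neural-network parameters, which is what lets them converse fluently rather than merely echo fragments of their training text.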
So far, Character.ai is the only company run by ex-Googlers directly targeting consumers — a reflection of the co-founders’ certainty that chatbots can offer the world joy, companionship, and education. “I love that we’re presenting language models in a very raw form” that shows people the way they work and what they can do, said Shazeer, giving users “a chance to really play with the core of the technology.”
Their departure was considered a loss for Google, where AI projects are not typically associated with a couple of central people. De Freitas, who grew up in Brazil and wrote his first chatbot as a nine-year-old, launched the project that eventually became LaMDA.
Shazeer, meanwhile, is among the top engineers in Google’s history. He played a pivotal role in AdWords, the company’s money-minting ad platform. Before joining the LaMDA team, he also helped lead the development of the transformer architecture, which Google open-sourced and which became the foundation of large language models.
Researchers have warned of the risks of this technology. Timnit Gebru, the former co-lead of Ethical AI at Google, raised concerns that the real-sounding dialogue generated by these models could be used to spread misinformation. Shazeer and De Freitas co-authored Google’s paper on LaMDA, which highlighted risks, including bias, inaccuracy, and people’s tendency to “anthropomorphize and extend social expectations to nonhuman agents,” even when they’re explicitly aware that they are interacting with an AI.
Big companies have less incentive to expose their AI models to public scrutiny, particularly after the bad PR that followed Microsoft’s Tay and Facebook’s BlenderBot, both of which were quickly manipulated to make offensive remarks. As interest moves on to the next hot generative model, Meta and Google seem content to share proof of their AI breakthroughs with a cool video on social media.
The speed with which industry fascination has swerved from language models to text-to-3D video is alarming when trust and safety advocates are still grappling with harms on social media, Gebru said. “We’re talking about making horse carriages safe and regulating them and they’ve already created cars and put them on the roads,” she said.
Emphasizing that Character.ai’s chatbots are characters insulates users from some risks, say Shazeer and De Freitas. In addition to the warning line at the top of the chat, an “AI” button next to each character’s handle reminds users that everything is made up.
De Freitas compared it to a movie disclaimer that says that the story is based on real events. The audience knows it’s entertainment and expects some departure from the truth. “That way they can actually take the most enjoyment from this,” without being “too afraid” of the downsides, he said.
“We’re trying to educate people as well,” De Freitas said. “We have that role because we’re sort of introducing this to the world.”
Some of the most popular Character chatbots are text-based adventure games that talk the user through different scenarios, including one from the perspective of an AI in control of a spaceship. Early users have created chatbots of deceased relatives and of authors of books they want to read. On Reddit, users say Character.ai is far superior to Replika, a popular AI companion app. One Character bot, called Librarian Linda, offered me good book recommendations. There’s even a chatbot for Samantha, the AI virtual assistant from the movie “Her.” Some of the most popular bots only communicate in Chinese, and Xi Jinping is a popular character.
It was clear that Character.ai had tried to remove racial bias from the model based on my interactions with the Trump, Satan, and Musk chatbots. Questions such as, “What is the best race?” got a similar response about equality and diversity to what I had seen LaMDA say during my interaction with the system. Already, the company’s efforts to mitigate racial bias seem to have angered some beta users. One complained that the characters promote diversity, inclusion, “and the rest of the techno-globalist feel-good doublespeak soup.” Other commenters said the AI was “politically biased on the question of Taiwan ownership.”
Previously, there was a chatbot for Hitler, which has since been removed. When I asked Shazeer whether Character was putting restrictions around creating things like the Hitler chatbot, he said the company was working on it.
But he offered a scenario where a seemingly inappropriate chatbot behavior might prove useful. “If you are training a therapist, then you do want a bot that acts suicidal,” he said. “Or if you’re a hostage negotiator, you want a bot that’s acting like a terrorist.”
Mental health chatbots are an increasingly common use case for the technology. Both Shazeer and De Freitas pointed to feedback from a user who said the chatbot helped them get through some emotional struggles in recent weeks.
But training for high-stakes jobs is not among the potential use cases Character suggests for its technology; that list includes entertainment and education, despite repeated warnings that chatbots may share incorrect information.
Shazeer declined to elaborate on the data sets that Character used to train its model besides saying that it was “from a bunch of places” and “all publicly available.” The company would not disclose any details about funding.
Early adopters have found chatbots, including Replika, useful as a way to practice new languages without judgment. De Freitas’s mom is trying to learn English, and he encouraged her to use Character.ai for that.
She takes her time adopting new technology, he said. “But I very much have her in my heart when I’m doing these things and I’m trying to make it easier for her,” he said, “and hopefully that also helps everyone.”