[This disturbing column from Fast Company presents an informed, speculative timeline from 2022 to 2033 in which the ability of generative AI to create increasingly effective illusions leads to small and very large disasters. The author concludes with three “actionable steps” that could prevent the dystopia. See the original version of the column for seven more images and a video. –Matthew]
[Image credit: Wei Ding/Unsplash]
AI feels like a magic act. By 2033, it will be a horror movie
Generative AI is blurring the lines between fact and fiction. This speculative timeline shows how, if unchecked, the AI big bang could usher in the end of reality as we know it.
By Jesus Diaz
April 5, 2023
It’s hard to write about generative artificial intelligence. By the time you finish an article, a new development makes it feel obsolete. Here is the perfect example of that: My editor wanted to publish this feature at the end of 2022, the year in which we all realized that generative AI was the next big thing. (It’s been in a dozen drafts since.)
Sometime in the near future, maybe even as early as this year, we will lose our ability to distinguish fact from machine-made fiction, no matter how many forensic tools we come up with. The half dozen top experts in this field I spoke with agreed that this "near future" will likely arrive within the next 10 years.
Emad Mostaque, CEO and founder of Stability AI, the organization behind the AI image generator Stable Diffusion, told me that in just two years it will be possible to generate, in real time, a moving, talking face realistic enough to fool anyone on a video conference. By 2033, Mostaque believes, it will be impossible to distinguish between human-made and artificially created content, including audio, video, images, and text. During an interview from his home in Tel Aviv, Gil Perry, the CEO and cofounder of the Israeli AI company D-ID, put it more bluntly: "In one or two years, you won't be able to tell what is true and what is false."
Underscoring these worries, a group of AI and tech luminaries including Steve Wozniak, Elon Musk, and Mostaque himself published an open letter calling for a six-month moratorium on large-scale generative AI experiments. Fear of the unknown is natural, but in this case the very people working on AI are the ones worried, and for good reason.
I have always been a techno-optimist: the guy who thinks there is no problem that can't be solved through sheer human ingenuity. Whether it's global warming, cancer, or the energy crisis, we will figure it all out eventually. But all these interviews left me with a deep feeling of desolation and anxiety that I haven't been able to shake.
As I started to write the original version of this piece, more news about AI’s advancements kept breaking. I wrote and rewrote what I already had. At one point, I threw it all away. I figured this future was going to be impossible to articulate in a traditional structure of a journalistic feature, so I turned to sci-fi prototyping, a technique used by futurists and organizations like the Pentagon to prepare for what’s coming. With this technique, real information is distilled into a set of tight constraints in order to create a world and produce a logical projection of events into the near future.
Because such projections are usually made on a ten-year horizon (beyond that, it is very hard to project with any accuracy), the technique fit perfectly with the predictions the scientists and engineers were giving me. Prototyping, it turned out, could be the perfect vehicle to tell the story of how generative AI could evolve.
Here, drawing on what I've learned from months of interviews, is my best guess at an AI timeline, starting in 2022.
2022
It’s the Big Bang of generative artificial intelligence.
Programs like DALL-E, Midjourney, and Stable Diffusion generate endless streams of synthetic images from simple text descriptions.
In just a few months, generative artificial intelligence has jumped from laboratories into the hands of anyone with a computer or phone. You don't realize it at the time, but you're playing with the most powerful and potentially destructive force created since the atomic bomb.
2023
Emboldened by the success of ChatGPT, Microsoft invests $10 billion in OpenAI and incorporates its technology into Bing. After more than a decade of cautiously developing its own artificial intelligence, Google hastily responds by launching its own version, called Bard.
You try out the new Bing to look for a restaurant recommendation and feel excited about its potential. Within a few days of launching, the new AI-powered Bing proves to be a manipulative, lying sociopath. Bard also makes serious mistakes, but Google and Microsoft argue that the products are still in testing and ignore these warning signs.
In Hollywood, production designers use AI to create concepts and sets. The new Indiana Jones trailer is released, featuring a perfectly rejuvenated Harrison Ford. Metaphysic, the company that went viral for its deepfake Tom Cruise videos, signs an agreement with the world's largest talent agency to create biometric profiles of its clients, profiles that can be used even after they're dead.
By early spring, news breaks about the potential arrest of former President Donald Trump. Before any arrest happens, you see images circulating on Twitter showing Trump being dragged from Trump Tower by a group of officers. They are labeled as fake, but at first glance your eyes accept them as real.
2024
[A note to readers: At this point, you’re entering the future as I imagine it might unfold.]
At the beginning of the year, Stable Diffusion releases a software update so powerful that a simple prompt can materialize any image you imagine. Only a very detailed analysis can reveal the errors that betray such images' synthetic nature. If you are a photographer, you dread losing your job.
Developers try to limit the criminal use of Stable Diffusion's new engine, but because the code is open source, clones of the tools spread across the web, accessible to anyone with a credit card.
A friend generates some ridiculous photos of you in a compromising situation that make you laugh. Not long after, you read headlines about a growing wave of criminal cases and begin to feel nervous. Bullying runs rampant, and abusers create images to humiliate and blackmail their victims. These digital attacks leave a measurable psychological mark, as clinicians begin linking deepfake technology to a rise in depression and suicide.
2025
New specialized chatbots come online. They are now so sophisticated that their dialog is indistinguishable from that of a real human. You use an AI assistant app to help you keep track of your daily routine. A friend confesses that he has started to flirt with an AI and half-jokes he’s falling in love.
Late in the year, a startup launches an app that lets you transform your voice into someone else's in real time. Your favorite TikTok account is an endless scroll of memes showing people speaking with the voices of famous people. The new technology also drives a spike in identity theft and fraud.
In the newspaper, you read about a political corruption case in a European country, maybe Italy, where a lawyer manages to get genuine audio recorded by the police thrown out as evidence, arguing that it may have been fabricated with AI. The case collapses, and the defendants are acquitted. Experts are overwhelmed by the perfection of the audio, and prosecutors and judges worldwide debate whether audio recordings should still be admitted as evidence.
2027
A new artificial intelligence synthesizes perfect high-definition images from natural language voice prompts. The platform is so advanced that it can reproduce the characteristic errors of digital camera sensors and optics. Legacy tech companies partner to launch a service that detects synthetic photos, but it can't keep up with the evolution of generative AI. Your cousin tells you that she joined a new "AI-free" social network, but soon fake images show up there, too.
Political organizations of all stripes use the tools to design propaganda, sowing chaos and widening social divides across the globe. The situation is so dire that people actually tune in to congressional hearings to listen to lawmakers who have no clue how to solve the crisis. Without clear regulation, AI companies continue to release new versions of their software, arguing that their apps are simply creative tools.
By the end of the year, digital sexual abuse fuels an ever-present online witch hunt, with Redditors performing armchair analyses to track down child pornographers. Shocked, you watch as the first public figure is falsely accused of such acts in court. Only a forensic analysis of the alleged photographic evidence spares the defendant from prison, and a majority of the public remains unconvinced.
2029
In the spring, six years from now, you sign up for a new social network that allows users to record and transform video with simple voice commands. You test the app by saying "change Mike's water glass into a glass of wine," and the video metamorphoses before your eyes. The same technology powers a wildly popular new multiplayer video game that lets people build uncannily real virtual worlds with simple voice and text prompts. You can't put it down.
Later in the year, a friend forwards you what appears to be a hidden camera clip of a megalomaniac billionaire admitting his tech company is about to go bankrupt. The company’s stock loses more than half of its value in just a few hours. The magnate claims the video is fake, but only his most ardent fans believe him. Nobody knows if the video is actually real, but it doesn’t matter: customers and investors flee, and a few months later, the company declares bankruptcy.
2033
You watch a film starring an actor who has been dead for 15 years. It later wins five Oscars. The movie is the directorial debut of an unknown auteur, who completed the film with a budget of just $2 million and a team of six people working from their homes.
Meanwhile, in a contested region overseas, several videos surface showing members of an opposition group killing young teens who were hanging out in a park. Captured from different angles by various phones, the clips ignite fury on social media. You watch the news, but like many others, you think the videos might all be AI-generated.
That doesn't matter in the victims' home country, where community members retaliate with attacks in broad daylight, unleashing a chain of violent riots in several towns and cities. A neighboring country intervenes militarily. Ignoring the pleas of the international community, the two factions begin a war that ends only in a ceasefire, after a limited exchange of nuclear missiles and more than a million deaths.
At least, that's what you think has happened. In the short decade since generative AI was popularized, it has become increasingly impossible to know.
Only months later do British police arrest a young radical who created and distributed the original (if you can call it that) AI-generated videos from his parents' house in a London suburb. He has caused what is deemed the biggest disaster since World War II.
LEARN FROM PAST MISTAKES
The timeline above is a harrowingly dystopian story, one version of our future that, in actuality, could play out in all sorts of different ways.
We can't undo generative AI, but we still have time to imagine more thoughtful and responsible ways to deploy it. History, however, shows us again and again that trusting companies to self-regulate is naive. Blinded by the profit motive, they lack the ethics and the foresight to curb potential technological harms.
So where do we go from here? The experts I've spoken to say there needs to be an urgent public debate about generative AI. They believe there are three actionable steps we can take to prevent a social crisis with dire consequences.
CREATE CRYPTOGRAPHIC CERTIFICATION STANDARDS
To avoid the dissolution of reality, people need to be able to authenticate content captured by digital cameras and microphones. The goal of these standards is to establish a basic level of certainty that a photograph, video, or audio is real. According to Gil Perry, of Israeli AI company D-ID, detection of synthetic content will be next to impossible in the future given the fidelity of AI-generated images. Not even invisible watermarks will work because they can be falsified by bad actors.
Some companies are already at work, building authentication around the recorded material itself. Ziad Asghar is SVP of product management for Snapdragon Technologies and Roadmap at Qualcomm, the company that designs the chips in most Android phones and many other devices. He says his company already has technology to preserve the authenticity of all the pixels and sound waves captured by a device; it's the same technology Qualcomm uses for secure face authentication.
A group of companies including Adobe, Microsoft, Intel, and the BBC is now working toward establishing standards around the authenticity of images, videos, text, and audio through the Coalition for Content Provenance and Authenticity. Still, there is no current industry standard that can certify the authenticity of this media as it travels from device to device across the web. For that, Asghar says, we need "multiple layers of security" and file formats that work similarly to NFTs, using blockchain certificates to authenticate captured images, videos, and audio files as not AI-generated or modified. "It's a big concern for everyone," he says. "As these [AI] technologies become more prevalent, this is going to be a challenge."
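To make the idea concrete, here is a minimal sketch of capture-time signing, the core concept behind the approaches Asghar describes. It is an illustration under stated assumptions, not Qualcomm's or the coalition's actual implementation: it assumes the capture device guards a hardware-protected private key whose public counterpart is published by the manufacturer, and it uses the open-source Python `cryptography` package for the Ed25519 primitives. The function names are hypothetical.

```python
# Hypothetical sketch: sign media at capture, verify downstream.
# Assumes the device holds a private key in secure hardware
# (simulated here) and the matching public key is published.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

device_key = Ed25519PrivateKey.generate()  # stand-in for a hardware-bound key
public_key = device_key.public_key()       # published by the manufacturer


def sign_capture(media: bytes) -> bytes:
    """Sign a SHA-256 digest of the raw pixels/samples at capture time."""
    return device_key.sign(hashlib.sha256(media).digest())


def verify_capture(media: bytes, signature: bytes, key: Ed25519PublicKey) -> bool:
    """Check that the file is bit-for-bit what the sensor recorded."""
    try:
        key.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False


photo = b"...raw sensor data..."
sig = sign_capture(photo)
print(verify_capture(photo, sig, public_key))         # True
print(verify_capture(photo + b"x", sig, public_key))  # False: any edit breaks the seal
```

The hard problems live outside a sketch like this: keeping the private key from being extracted, carrying signatures along as files are edited, recompressed, and re-shared across platforms, and agreeing on who vouches for the public keys. Those chain-of-custody questions are exactly what the coalition's provenance work and Asghar's "multiple layers of security" are meant to address.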
CREATE PUBLIC AWARENESS
The second task is to launch public awareness programs so that people understand the scope of generative AI. "The world is changing, and children are growing up in a very different place. It's a bit scary," Perry says. "The idea is to make AI open to the public and make everyone have access to it and get used to it, not controlled by some governments and tech giants." Tom Graham, cofounder of Metaphysic, the company that created the viral Tom Cruise clones on TikTok, agrees that PSAs could make a difference. His company recently showcased its real-time AI avatars on the popular TV show America's Got Talent. The mission, he says, was more than promotional; he wanted to show the general public the power of this technology. "If that can help a person reduce the psychological impact [of a fake image or video], it's positive," he says.
ENCOURAGE COLLABORATIVE LEGISLATION
Finally, we need to urge governments worldwide to collaborate with the scientific community on legislation that protects individual rights and establishes criminal penalties for the toxic use of generative AI. This will require companies to sit down with institutions and governments, psychologists, philosophers, and human rights organizations to ensure all sides of this technology have been considered. Stability AI's Mostaque agrees that there is a need for an open discussion around the impacts of AI, but he doesn't believe in tightening regulation of the technology. "Open debate is always best, due to the complexity of what this could do to social composition," he says. Graham, however, believes that "lawmakers must think about how to implement those laws as quickly as humanly possible to protect people from potential harm."
While these measures can help prevent a total collapse of reality, they will not stop authoritarian regimes, organizations, and individuals from using generative AI to do harm. That's why it is important that both the makers and consumers of AI prioritize developing new ways to protect society from the potential dangers of this technology. Only by working together and taking proactive steps to address the ethical and social implications of generative AI can we create a future that is both technologically advanced and socially just.
Will that actually happen? It's hard to have faith. Looking at the current uncontrolled AI craze and the general disregard for its potential effects, by both key decision-makers and the public itself, I'm pessimistic. Is the cataclysmic chain of events outlined in this thought exercise already all but inevitable?