Imagine a dystopian future that isn’t far away: each of us lives in an AI-driven digital filter bubble created just for us and designed to serve corporate interests. This future resembles 1998’s The Truman Show, in which the eponymous protagonist, played by Jim Carrey, has unknowingly lived his entire life inside a reality show, every one of his experiences choreographed by the production studio.
Large language models, a subset of AI, won’t turn our lives into reality TV shows. No such luck. The threat is worse: personalized AI agents will trap each of us in an individualized, illusory unreality, drain our digital wallets, and distance us from authentic connections with other people.
We are already well on our way. The October beta launch of Apple Intelligence may mark a turning point in our relationship with artificial intelligence, bringing a highly accessible large-language-model experience to more than a billion people around the world. And Apple is only one of many companies developing personalized large language models, alongside OpenAI, Google, and a number of startups.
The premise behind this so-called personalization is that an AI model learns about each individual user: what they know and don’t know, what they like and dislike, their values and goals, their attention span, and their preferred media formats, and then adapts its output accordingly.
The goal is to place a bespoke AI between each user and the vast amount of information on the internet, one that finds the information the user needs, repackages it according to the user’s preferences and background knowledge, and delivers it to their screen. If this project succeeds, our ability to collectively understand the world will be further eroded. We will no longer live in one of several competing filter bubbles. Each of us will inhabit our own private filter bubble.
That is the best-case scenario, in which the system is designed solely for the benefit of the user. But, of course, it is unlikely to remain so benign. Like everything else online, these systems will be “enshittified,” used by the technology industry to capture our attention while separating us from our money.
Consider an American tradition like college football. Are you a die-hard Ohio State Buckeyes fan? Do you click on Ohio State football articles, buy Ohio State merchandise, and subscribe to Ohio State videos, podcasts, and news feeds? Do you spend an inordinate amount of time consuming this material? Then it can reach you 24 hours a day, across all your devices. Some algorithms even learn a user’s daily schedule and respond accordingly, pushing information at exactly the times the user is most likely to be looking for it.
Football rivalries aside, this may sound harmless (if boring). In many ways, it already describes our online experience: platforms such as Facebook, X, Instagram, and Google use tracking algorithms to learn our interests and habits and choose what we see on our screens. But the next step will be to unleash large language models that generate memes and even entire articles tailored to each of us, content, including ads masquerading as information, whose sole purpose is to hold our attention and maximize the probability that we will buy.
An LLM will create in-depth articles about your favorite college football team, its recruiting, and its outlook for next season. You will listen to podcasts that sound like AI-generated sports talk radio. And you will hear conspiracy theories about rival teams: how they commit recruiting violations, how they cheat, how members of their coaching staff are tied to a cocaine empire, and so on.
This vision is disturbing for at least two reasons. First, there are no technical mechanisms or economic incentives in place to ensure that the information we receive is true. The goal of these companies, of course, is not to portray reality. LLMs generate what philosophers, in technical jargon, call “bullshit”: output designed to sound plausible and authoritative, not to be factually accurate.
But as terrifying as this blatant indifference to truth is, an even more frightening element lurks here. Our hypothetical Ohio State football fan will no longer share an understanding of college football with anyone else, not even with other Ohio State fans. Each fan will act on information generated just for them. LLMs are now so efficient that there is no need to reuse content: why write the same article for two different people when an LLM can easily create two articles, one tailored to each? This vision is unsettling even when we are talking about sports and entertainment. But what about institutions with more direct social impact? Religion? Education? Politics?
Commentators across the political spectrum lament the decline of the news media and the polarization of everything. For many extended families, conversation around the holiday table has already become impossible.
As bad as the current situation may be, stranger times may lie ahead that will make us nostalgic for today’s echo chambers. Soon our bubbles will shrink until each of us is alone in our own digital world. In an AI-mediated future, each of us will live in a private Truman Show. As a society, we will have no shared understanding of the world and will therefore be entirely unable to make fruitful collective decisions.
What are our options? First, remember the advice of our parents, and of theirs before them, going back at least to the dawn of television: “Go outside and play.” Stop staring at that screen. Meet your friends in person. Find entertainment in spaces with real people, exchanging ideas and creations with one another.
Even online, we must continue to make sense of the world through human-created documents and artifacts. Valuing things made by humans is not just a matter of reliability. It also ensures that we attend to arguments carefully crafted by writers and speakers.
Otherwise, the premise of The Truman Show becomes our reality, and we unknowingly live in a false world where our every experience is curated for profit. And what could be more existentially alienating than living in a Truman Show whose director, producer, and sole audience member are all AI?
This is an opinion and analysis article, and the views expressed by the author are not necessarily those of Scientific American.