All day, every day, you make choices. Philosophers have long held that this capacity for intentional action, or agency, distinguishes humans from simpler life forms and machines. But artificial intelligence may soon cross that divide, as technology companies build AI “agents”: systems that can make decisions and accomplish goals with minimal human oversight.
Facing pressure to show returns on billions of dollars of investment, AI developers are pushing agents as the next wave of consumer technology. Agents, like chatbots, are powered by large language models and can be accessed from phones, tablets, and other personal devices. But unlike chatbots, which require constant hand-holding to generate text and images, agents can autonomously interact with external apps to perform tasks on behalf of individuals or organizations. OpenAI lists agents as the third of five steps toward building artificial general intelligence (AGI), an AI that could outperform humans at any cognitive task. The company is reportedly planning to release an agent called “Operator” in January, and this system could be an early drop in a coming downpour. Meta CEO Mark Zuckerberg predicts that AI agents will eventually outnumber humans. Meanwhile some AI experts worry that commercializing agents is a dangerous new step for an industry that has tended to prioritize speed over safety.
The sales pitch from big tech companies is that agents will free human workers from drudgery and open the door to more meaningful work (while significantly boosting productivity for companies). “By freeing us from mundane tasks, they could allow us to focus on the things that really matter, like relationships, personal growth, and informed decision-making,” says Iason Gabriel, a senior researcher at Google DeepMind. Last May the company announced a prototype called Project Astra, described as a “universal AI agent that is helpful in everyday life.” In a video demonstration, Astra speaks to a user through a Google Pixel smartphone and analyzes the environment through the device’s camera. At one point the user holds the phone up to a coworker’s computer screen, where a block of code is displayed. The AI explains the code (it “defines encryption and decryption functions”) in a humanlike female voice.
Project Astra isn’t expected to be publicly available until next year at the earliest, and the agents currently on offer are limited to menial tasks such as writing code and submitting expense reports. This reflects both technical limitations and developers’ wariness about trusting agents in high-stakes areas. Silvio Savarese, chief scientist at cloud-based software company Salesforce, says agents should be deployed for “very well-defined,” “monotonous, repetitive” tasks. The company recently introduced Agentforce, a platform that provides agents that can answer customer service questions and perform other narrow functions. Savarese says he would be “very hesitant” to trust agents in more sensitive situations, such as legal decisions.
While Agentforce and similar platforms are primarily marketed to enterprises, Savarese predicts the eventual rise of personal agents with access to personal data, which they would use to continually update their understanding of a user’s needs, preferences, and quirks. An app-based agent tasked with planning a summer vacation, for example, could book flights, secure restaurant reservations, and arrange lodging while remembering details such as a window-seat preference, a peanut allergy, and a fondness for hotels with pools. Crucially, such an agent would also need to respond to the unexpected: if the best flight is sold out, it would have to adjust course (perhaps by checking another airline). “The ability to adapt and react to the environment is essential for agents,” Savarese says. Early versions of personal agents may already be in the works. Amazon, for example, is reportedly developing agents that can recommend and buy products based on a customer’s online shopping history.
What Is an Agent?
The sudden surge in corporate interest in AI agents belies their long history. All machine-learning algorithms are technically “agentic” in that they constantly “learn,” or refine their ability to achieve a specific goal, based on patterns gleaned from mountains of data. “In AI, we have viewed all systems as agents for decades,” says Stuart Russell, a pioneering AI researcher and computer scientist at the University of California, Berkeley. “It’s just that some of them are very simple.”
Modern AI tools, however, are becoming more agentic thanks to several recent innovations. One is the ability to use digital tools such as search engines. A “computer use” feature released for public beta testing in October lets the model behind AI company Anthropic’s Claude chatbot move a cursor and click buttons after taking screenshots of a user’s desktop. A video released by the company shows Claude filling out and submitting a fictitious vendor request form.
Agency also correlates with the ability to make complex decisions over time; as agents become more sophisticated, they will tackle more advanced tasks. Google DeepMind’s Gabriel envisions future agents that could help discover new scientific knowledge. And that may not be far off: a paper posted to the preprint server arXiv.org in August described an “AI Scientist” agent that can formulate new research ideas and test them through experiments, effectively automating the scientific method.
Despite the close ontological connection between agency and consciousness, there is no reason to believe that advances in the former will produce the latter in machines. Tech companies certainly don’t tout these tools as having anything close to free will. Users may nevertheless treat agentic AI as if it were sentient, but that says more about the millions of years of evolution that wired the human brain to attribute consciousness to anything that appears human.
Growing Challenges
The rise of agents could bring new challenges to the workplace, social media, the Internet, and the economy. Legal frameworks carefully crafted over decades and centuries to constrain human behavior will likely have to reckon with the sudden introduction of artificial agents that differ from humans in fundamental ways. Some experts even argue that a more accurate term for AI is “alien intelligence.”
Take the financial sector. Algorithms have long helped track the prices of various goods, adjusting for inflation and other variables. But agentic models are now beginning to make financial decisions for individuals and organizations, which could raise a host of thorny legal and economic issues. “We haven’t built the infrastructure to integrate [agents] into all the rules and structures we need to make markets work well,” says Gillian Hadfield, an expert on AI governance at Johns Hopkins University. If an agent signs a contract on behalf of an organization and then violates the terms of that contract, should the organization be held liable, or should the algorithm itself be held responsible? By extension, should agents be granted legal “personhood”?
Another challenge is designing agents whose behavior complies with human ethical norms, a problem known in the field as “alignment.” As autonomy increases, it becomes harder for humans to decipher how AI makes decisions. Models break goals into increasingly abstract subgoals and may exhibit new, unpredictable behaviors. “The path from agents that are good at planning to the loss of human control is very clear,” says Yoshua Bengio, a computer scientist who helped invent the neural networks that made the current AI boom possible.
Bengio says the alignment problem is exacerbated by the fact that big tech companies’ priorities tend to conflict with those of humanity as a whole. “There’s a real conflict of interest between making money and protecting public safety,” he says. In the 2010s algorithms used by Facebook (now Meta), tasked with the seemingly innocuous goal of maximizing user engagement, began promoting hateful content against Myanmar’s Rohingya minority to users in that country. The strategy, which the algorithm adopted entirely on its own after learning that inflammatory content drove more engagement, ultimately fueled a campaign of ethnic cleansing that killed thousands of people. As algorithms become more agentic, the risks of model misalignment and human manipulation could increase.
Agent Watchdogs
Bengio and Russell argue that AI needs to be regulated to ensure it doesn’t repeat past mistakes or make new ones. Both scientists are among the more than 33,000 signatories of an open letter published in March 2023 that called for a six-month pause on some AI research so that guardrails could be established. As technology companies race to develop agentic AI, Bengio urges application of the precautionary principle: the idea that powerful scientific advances should be scaled up slowly, with commercial interests taking a backseat to safety.
This principle is already standard in other U.S. industries. Pharmaceutical companies cannot launch a new drug until it has undergone rigorous clinical trials and won approval from the Food and Drug Administration; aircraft manufacturers cannot launch a new airliner without certification from the Federal Aviation Administration. Some early regulatory actions have been taken, most notably President Joe Biden’s executive order on AI (which President-elect Donald Trump has vowed to rescind), but there is currently no comprehensive federal framework for overseeing AI development and deployment.
Bengio warns that the race to commercialize agents could quickly carry us past a point of no return. “Once agents are deployed, they become useful, they create economic value, and their value keeps growing,” he says. “And by the time governments realize they might be dangerous, it may be too late, because the economic value will be so great that there’s no stopping it.” He points to the rise of social media, which in the 2010s rapidly outgrew any possibility of effective government oversight.
As the world braces for a flood of artificial agency, there has never been a more urgent time to exercise our own. “We need to think carefully before jumping in,” Bengio says.