At first glance, a series of research papers published recently by the Artificial Intelligence Institute at the University of British Columbia in Vancouver might not seem all that noteworthy: incremental improvements on existing algorithms and ideas, they read like the contents of a middling AI conference or journal.
But the work is in fact noteworthy because it is the product of an “AI scientist” developed at a UBC research lab in collaboration with researchers from the University of Oxford and a startup called Sakana AI.
The project marks an early step toward what could be a breakthrough: letting AI learn by inventing and exploring new ideas. For now, though, the ideas it produces are not especially novel. Several papers describe tweaks for improving an image-generation technique called diffusion modeling, while another outlines an approach to speeding up learning in deep neural networks.
“These aren’t groundbreaking ideas. They’re not super original,” acknowledges Professor Jeff Clune, who leads the UBC lab, “but they seem like pretty cool ideas that someone might try.”
Today’s AI programs are incredible, but they are limited by the need to use training data created by humans. If AI programs could learn in an open-ended manner, by trying and exploring “interesting” ideas, they might be able to surpass capabilities demonstrated by humans.
Clune’s lab has previously developed AI programs designed to learn in this way. For example, a program called Omni generated behaviors for virtual characters in several video game-like environments, tried to identify the ones that were interesting, and then iterated on them with new designs. Such programs used to require hand-coded instructions to define what counted as interesting, but large language models can now help them judge what is most interesting. Another recent project from Clune’s lab used this approach, devising code that allowed an AI program to make virtual characters do different things in a Roblox-like world.
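At its core, this kind of system is a simple loop: generate candidate behaviors, ask a model which ones are interesting, and build on the keepers. The Python sketch below is purely illustrative; the function names are hypothetical, and a random stand-in replaces the LLM judgment so the loop runs end to end. It is not Omni's actual code.

```python
import random

def propose_variants(behavior: str, n: int = 4) -> list[str]:
    """Stand-in for a generator that mutates a known behavior."""
    return [f"{behavior} / variant {i}" for i in range(n)]

def llm_is_interesting(candidate: str, archive: list[str]) -> bool:
    """Stand-in for an LLM call that judges novelty against past discoveries.

    A real system would prompt a model with the archive and the candidate;
    here randomness fakes that judgment.
    """
    return candidate not in archive and random.random() > 0.5

archive = ["jump over obstacle"]      # seed behavior
for _ in range(10):                   # open-ended: no fixed goal, just explore
    parent = random.choice(archive)   # build on something already discovered
    for candidate in propose_variants(parent):
        if llm_is_interesting(candidate, archive):
            archive.append(candidate)

print(archive)
```

The key design choice, reflected even in this toy version, is that there is no objective to optimize: the archive of "interesting" discoveries is the output, and each discovery becomes raw material for the next round.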
The AI Scientist is one example of what the Clune lab is pursuing: the program comes up with machine learning experiments, decides which ones seem most promising with the help of a large language model (LLM), writes the necessary code, runs it, and repeats the process. While the results were disappointing, Clune says open-ended learning programs, like the language models themselves, could become much more capable with more computing power to feed them.
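That cycle can be summarized in a few lines of code. The outline below is a hedged sketch of the propose-rank-code-run loop described above, with placeholder functions standing in for LLM calls and a sandboxed runner; it is not the actual AI Scientist implementation.

```python
def llm_propose_ideas(history: list[str]) -> list[str]:
    """Placeholder for an LLM call that drafts candidate experiments."""
    return [f"experiment idea {len(history) + i}" for i in range(3)]

def llm_rank(ideas: list[str]) -> str:
    """Placeholder for an LLM call that picks the most promising idea."""
    return ideas[0]

def llm_write_code(idea: str) -> str:
    """Placeholder for an LLM call that turns an idea into runnable code."""
    return f"result = 'ran: {idea}'"

def run_experiment(code: str) -> str:
    """Execute generated code; a real system would sandbox this step."""
    scope: dict = {}
    exec(code, scope)
    return scope["result"]

history: list[str] = []
for _ in range(3):  # propose -> rank -> code -> run, then repeat
    idea = llm_rank(llm_propose_ideas(history))
    history.append(run_experiment(llm_write_code(idea)))

print(history)
```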
“It feels like exploring a new continent or a new planet,” Clune says of the possibilities that LLMs open up. “You never know what you’ll discover, but everywhere you turn there’s something new.”
Tom Hope, an assistant professor at the Hebrew University of Jerusalem and a research scientist at the Allen Institute for AI (AI2), says the AI Scientist, like LLMs themselves, is highly derivative and cannot be considered trustworthy. “At this point, you can’t trust any of the pieces,” he says.
Hope notes that efforts to automate elements of scientific discovery date back to the work of AI pioneers Allen Newell and Herbert Simon in the 1970s and, later, Pat Langley of the Institute for the Study of Learning and Expertise. He also points out that several other research groups, including the team at AI2, have recently been using LLMs to generate hypotheses, write papers, and review research. “They’ve captured the zeitgeist,” Hope says of the UBC team. “And of course, that direction is potentially very valuable.”
It also remains unclear whether LLM-based systems can generate truly novel or groundbreaking ideas. “That’s the trillion-dollar question,” Clune says.
Even without scientific breakthroughs, open-ended learning may be essential to developing more capable and useful AI systems today. A report published this month by investment firm Air Street Capital highlighted the potential for Clune’s work to develop more powerful and reliable AI agents: programs that perform useful tasks autonomously on a computer. Every major AI company seems to see agents as the next big thing.
This week, the Clune lab announced its latest open-ended learning project: an AI program that invents and builds AI agents. The AI-designed agents outperform human-designed agents at some tasks, such as math and reading comprehension. The next step will be devising ways to stop such systems from generating misbehaving agents. “This is potentially dangerous,” Clune says of the research. “It has to be done right, but I think it’s possible.”
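For readers wondering what a program that “invents and builds AI agents” looks like procedurally, the sketch below is a speculative outline: a meta-level model proposes a new agent design, the design is scored on benchmark tasks, and stronger designs seed the next round. Every name here is illustrative, and the simple hill-climbing search shown is an assumption, not the lab's published method.

```python
import random

def llm_propose_agent(best_design: str) -> str:
    """Placeholder for a meta-level LLM that writes a new agent design."""
    return best_design + " + tweak"

def evaluate(design: str) -> float:
    """Placeholder for scoring an agent on tasks like math or reading."""
    return random.random()

best = "baseline agent"
best_score = evaluate(best)
for _ in range(10):                  # search over agent designs
    candidate = llm_propose_agent(best)
    score = evaluate(candidate)
    if score > best_score:           # keep designs that beat the incumbent
        best, best_score = candidate, score

print(best, round(best_score, 2))
```

The safety concern Clune raises maps onto the `evaluate` step: a real system would need checks there (and in how proposals are generated) to screen out misbehaving designs, not just low-scoring ones.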