The Nobel Prize ceremonies, held in December, celebrated both research in artificial intelligence and the work of Nihon Hidankyo, an organization of atomic bomb survivors working for the abolition of nuclear weapons.
This is a striking contrast, and one that stuck with me as a mathematician who studies how deep learning works. In the first half of the 20th century, the Nobel Committee awarded prizes in physics and chemistry for discoveries that revealed the structure of the atom, research that also made possible the development and eventual use of nuclear weapons. Decades later, the committee awarded this year’s Peace Prize for efforts to counter the deadliest application of that science.
There are parallels between the development of nuclear weapons out of basic research in physics and the risks now posed by applications of AI that grew out of basic research in computer science. These parallels include the incoming Trump administration’s push for a “Manhattan Project” for AI, as well as broader societal risks such as misinformation, job displacement and surveillance.
I worry that my colleagues and I aren’t engaging enough with the potential impact of our work. A century from now, will the Nobel Committee be awarding a peace prize to those who clean up the mess left behind by AI scientists? I am determined that we not repeat the story of nuclear weapons.
About 80 years ago, hundreds of the world’s top scientists joined the Manhattan Project in a race to build an atomic weapon before the Nazis did. But even after Germany’s atomic bomb program was shut down in 1944, and even after Germany surrendered the following year, work at Los Alamos continued unabated.
Even when the Nazi threat had passed, only one scientist, Joseph Rotblat, left the Manhattan Project. Looking back on those days, Rotblat explained: “It simply becomes an addiction, and you go on working for the sake of making the gadget itself, without thinking about the consequences. And once you have done that, you find the justification for having made it, not the other way around.”
Soon afterward the U.S. military conducted the first nuclear test, and U.S. leaders subsequently authorized the atomic bombings of Hiroshima and Nagasaki on August 6 and 9, 1945. The bombs killed hundreds of thousands of Japanese civilians, many instantly; others died years or even decades later from the effects of radiation sickness.
Although Rotblat spoke those words decades ago, they describe with eerie accuracy the prevailing ethos in AI research today.
I first began to notice the similarities between nuclear weapons and artificial intelligence while working at the Institute for Advanced Study in Princeton, the setting of the unforgettable final scene of Christopher Nolan’s film Oppenheimer. As I made some progress in understanding the mathematical innards of artificial neural networks, I also began to worry about the ultimate impact of my research on society. At a colleague’s suggestion, I went to talk to the institute’s director at the time, physicist Robbert Dijkgraaf.
He encouraged me to look for guidance in the life story of J. Robert Oppenheimer. I read one biography, then another. I tried to guess what Dijkgraaf had in mind, but Oppenheimer’s path held no appeal. By the time I finished a third biography, the only thing clear to me was that I did not want my life to mirror his. I did not want to end my life carrying the same burden Oppenheimer carried.
Oppenheimer is often quoted as saying that when scientists see something that is technically sweet, they go ahead and do it; indeed, Geoffrey Hinton, one of the 2024 Nobel laureates in physics, has invoked this very idea. But it is not universally true. Lise Meitner, among the most prominent physicists of her time, was asked to join the Manhattan Project. Despite being Jewish and having only narrowly escaped Nazi Germany, she flatly refused, saying, “I will have nothing to do with a bomb!”
Rotblat offers another model of how scientists can develop their talents without losing sight of their values. After the war he returned to physics, focusing on the medical uses of radiation. He also became a leader of the nuclear nonproliferation movement through the Pugwash Conferences on Science and World Affairs, a group he co-founded in 1957. In 1995 he and Pugwash shared the Nobel Peace Prize for this work.
Today, thoughtful, principled people still stand out in the development of AI. In a stance reminiscent of Rotblat’s, Ed Newton-Rex stepped down last year from his role leading Stability AI’s audio generation team, citing the company’s insistence that it could train generative AI models on copyrighted work without paying for its use. Earlier this year, Suchir Balaji resigned from his research position at OpenAI over similar concerns.
And just as Meitner refused to work on military applications of her discoveries, Meredith Whittaker raised employee concerns at a 2018 Google town hall about Project Maven, a Department of Defense contract under which the company would develop AI to enhance targeting and surveillance for military drones. Ultimately the employees successfully pressured Google, where 2024 Nobel laureate in chemistry Demis Hassabis works, to cancel the project.
There are many ways society influences the work of scientists. The most direct is economic: we collectively choose which research to fund, and we individually choose which products of that research to pay for.
Prestige is more indirect but highly effective. Most scientists care about their legacy. When we look back at the nuclear age and choose, for example, to make a film about Oppenheimer rather than about other scientists of that era, we send today’s scientists a signal about what we value. And by selecting Nobel laureates from among those now working on AI, the Nobel committees set powerful incentives for current and future AI researchers.
Although it is too late to change the events of the 20th century, we can hope for better outcomes with AI. We can start by looking past those in machine learning who focus single-mindedly on the rapid development of capabilities and instead, like Newton-Rex and Whittaker, insisting on engaging with the context of our work, evaluating it and responding to changing circumstances. Paying attention to what scientists like them have to say offers the best hope for the positive development of science now and in the future.
As a society, we can choose whom to elevate, emulate and hold up as role models for the next generation. As the nuclear age teaches us, now is the time to look carefully at how scientific discoveries are being applied and to assess which of today’s scientists reflect not simply the values of the world we currently live in but the values of the world we want to live in.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.