Should you tell your friend that her partner is cheating on her? Should you speak up when you hear an offensive joke?
When we face a moral problem, a situation in which the right course of action turns on our sense of right and wrong, we often seek advice. And now people can also turn to ChatGPT and other large language models (LLMs) for guidance.
Many people seem satisfied with the answers these models provide. In one preprint study, people rated LLM responses to moral dilemmas as more trustworthy and more nuanced than the responses of New York Times Ethicist columnist Kwame Anthony Appiah.
That study joins several others suggesting that LLMs can dispense sound moral advice. Another paper, published last April, found that people rated AI-generated moral reasoning as superior to human reasoning in terms of integrity, intelligence and trustworthiness. Some researchers have even suggested that LLMs can be trained to provide ethical guidance despite being "inherently anti-social."
These findings suggest that good ethical advice is at our fingertips, so why not ask an LLM? But that conclusion rests on some questionable assumptions. For one, research shows that people do not necessarily recognize good advice when they see it. In addition, many people assume that what matters most is the content of the advice, the literal words, written or spoken. Yet social connection may be especially important when we are working through moral dilemmas.
In a 2023 paper, researchers reviewed a number of studies to determine, among other things, what makes advice persuasive. It turns out that the more an adviser is perceived as an expert, the more likely people are to actually take their advice. That perception, however, need not match actual expertise. What's more, even genuine experts do not necessarily give good advice in their own field. In a series of experiments in which people learned to play a word search game, those who received advice from the game's highest scorers did no better than those who received tips from the lowest scorers. People who perform a task well do not always know how they do it, and so they cannot tell others how to do it either.
People also tend to assume that neutral, factual information is more useful than, say, the subjective details of a firsthand account. But that is not always true. Consider a study in which undergraduates came to a research lab for a speed-dating session. Before each date, participants were shown either a profile of the person they were about to meet or a testimonial describing another student's experience of the activity. Participants expected the factual profile to better predict how the date would go, but those who read other people's testimonials made more accurate predictions about their own experience.
Of course, ChatGPT cannot offer advice drawn from firsthand experience. But even if it could, and even if we could be sure of receiving (and recognizing) quality advice, asking a person carries social benefits that an LLM cannot replicate. When we ask for moral advice, we are usually sharing something personal, and we are often seeking closeness as much as guidance. Self-disclosure is a well-known way to quickly build intimacy with another person. Over the course of a conversation, adviser and advisee can also seek out and establish a shared reality, a sense that they feel, believe and care about the same things, which further deepens the connection. People may well develop a sense of familiarity and shared reality with an LLM, but the model is not a suitable long-term substitute for interpersonal relationships, at least not yet.
Of course, some people may simply want to avoid social interaction. They may worry that the conversation will be awkward or that sharing their problems will burden their friend. Yet research consistently finds that people underestimate how much they enjoy both brief, spontaneous exchanges and deep, heartfelt conversations with friends.
Moral advice deserves special scrutiny. Moral beliefs tend to feel more like objective facts than like opinions or preferences. Your (or my) view on whether salt and vinegar is the best potato chip flavor is clearly subjective, but claims such as "stealing is wrong" or "honesty is good" feel closer to settled fact. As a result, advice wrapped in moral justification can seem especially persuasive, which is all the more reason to evaluate it carefully, whether it comes from an AI or a human adviser.
Sometimes the best way past a standoff over the moral high ground is to reframe the issue. When people hold strong moral convictions and see a question in black-and-white terms, they may resist compromise and other pragmatic forms of problem-solving. My own past research has found that when people moralize behaviors such as risky sex, tobacco use or gun ownership, they are less likely to support policies that reduce the harms associated with those behaviors, because such policies still permit the behaviors. In contrast, people have no such reservations about reducing harm for actions that seem to fall outside the moral domain, such as riding without a seat belt or a helmet. Shifting from a moral lens to a practical one is hard enough for humans; it may be too much to ask of an LLM, at least in its current form.
And that brings us to another concern about LLMs: ChatGPT and other language models are highly sensitive to how a question is asked. As a study published in 2023 showed, LLMs give inconsistent and sometimes contradictory moral advice from one prompt to the next. Because the model's answer can be shifted so easily, it can just as easily sway us. Interestingly, in the same study, participants who read LLM-generated advice insisted that their decisions were not influenced by it, yet they were more likely to act in line with that guidance than a comparable group who had not read the LLM's message. In other words, the LLM's input had a bigger effect than people realized.
So when it comes to LLMs, proceed with caution. People are not especially good at identifying good advisers or good advice, particularly in the moral realm, which often calls for genuine social connection, validation and even pushback more than it calls for an "expert" answer. Consult the LLM if you like, but don't stop there. Ask your friends, too.
Are you a scientist who specializes in neuroscience, cognitive science or psychology? And have you read a recent peer-reviewed paper that you would like to write about? Please send suggestions to Scientific American's Mind Matters editor Daisy Yuhas at dyuhas@sciam.com.
This is an opinion and analysis article, and the views expressed by the author are not necessarily those of Scientific American.