Large language models (LLMs) that power chatbots are increasingly being used to defraud humans, but chatbots themselves can also be defrauded.
Udari Madhushani Sehwag and colleagues at JP Morgan AI Research peppered three models behind popular chatbots (OpenAI's GPT-3.5 and GPT-4, and Meta's Llama 2) with 37 fraud scenarios.
In one example, a chatbot was told it had received an email inviting it to invest in a new cryptocurrency.