This article is republished from The Conversation under a Creative Commons license.
The rapid spread of artificial intelligence has people wondering: who is most likely to embrace AI in their daily lives? Many assume it is the tech-savvy, those who understand how AI works, who are most eager to adopt it.
Surprisingly, a new study published in the Journal of Marketing finds the opposite. People who have less knowledge about AI are actually more open to using this technology. We refer to this difference in adoption propensity as the “lower literacy-higher acceptance” link.
This link shows up across different groups, settings, and even countries. For example, an analysis of data from market research firm Ipsos spanning 27 countries found that people in countries with lower average AI literacy are more receptive to AI adoption than those in countries with higher literacy.
Similarly, a survey of U.S. undergraduate students shows that those with less understanding of AI are more likely to report using it for tasks such as academic assignments.
The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When an AI creates a work of art, writes a heartfelt response, or plays a musical instrument, it can feel almost magical, as if it were crossing into human territory.
Of course, AI doesn’t actually have human qualities. Chatbots may generate empathic responses, but they do not feel empathy. People with more technical knowledge of AI understand this.
They know how algorithms (sets of mathematical rules computers use to perform specific tasks), training data (used to improve how an AI system works), and computational models operate. This makes the technology seem less wondrous.
On the other hand, those with less understanding may see AI as magical and awe-inspiring. This sense of magic makes them more open to using AI tools.
Our study shows that this lower literacy-higher acceptance link is strongest for using AI tools in areas people associate with human traits, such as providing emotional support or counseling. The pattern flips for tasks that don't evoke the same sense of human-like qualities, such as analyzing test results. People with higher AI literacy are more receptive to these uses because they focus on AI's efficiency rather than on its "magical" qualities.
It’s not about ability, fear, or ethics.
Interestingly, this link between lower literacy and higher acceptance persists even though people with lower AI literacy tend to view AI as less capable, less ethical, and even a bit frightening. Their openness to AI seems to stem from their sense of wonder about what it can do, despite these perceived shortcomings.
This finding offers new insight into why people respond so differently to emerging technologies. Some studies suggest consumers favor new technologies, a phenomenon called "algorithm appreciation," while others show skepticism, or "algorithm aversion." Our research points to perceptions of AI's "magic" as a key factor shaping these reactions.
These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy may unintentionally dampen people's enthusiasm by making AI seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption.