The paper abstract:
We study how humans form expectations about the performance of artificial intelligence (AI) and consequences for AI adoption. Our main hypothesis is that people project human-relevant task features onto AI. People then over-infer from AI failures on human-easy tasks, and from AI successes on human-difficult tasks. Lab experiments provide strong evidence for projection of human difficulty onto AI, predictably distorting subjects’ expectations. Resulting adoption can be sub-optimal, as failing human-easy tasks need not imply poor overall performance in the case of AI. A field experiment with an AI giving parenting advice shows evidence for projection of human textual similarity. Users strongly infer from answers that are equally uninformative but less humanly-similar to expected answers, significantly reducing trust and engagement. Results suggest AI “anthropomorphism” can backfire by increasing projection and de-aligning human expectations and AI performance.
And a simplified explanation by Copilot (which seemed apt to use here, given the topic):
The paper explores how people form expectations about AI performance and how this impacts their willingness to use AI. The researchers’ main idea is that people tend to think of AI as if it should perform tasks in the same way humans do. This leads to two key behaviors:
- Over-inferring from AI failures: When AI makes mistakes on tasks that are easy for humans, people conclude the AI is not very capable overall.
- Over-inferring from AI successes: When AI does well on tasks that are hard for humans, people conclude the AI is more capable than it actually is.
Experiments show that these assumptions distort people’s expectations of AI. For example, if an AI struggles with simple tasks, people might avoid using it, even if it’s actually quite effective at other things. On the flip side, if it excels at complex tasks, people might over-trust it.
The researchers conducted a real-world experiment with an AI that provides parenting advice. They found that users were less trusting of the AI if its answers didn’t resemble what a human would say, even if the information was the same. This shows that making AI seem human-like (anthropomorphism) can sometimes backfire, leading to misaligned expectations between what AI can do and what people expect from it.
In essence, the study highlights that our human biases can lead us to misunderstand AI capabilities, which can affect how we adopt and use AI technologies.
From: https://sites.google.com/view/raphaelraux/research?authuser=0
via https://marginalrevolution.com/marginalrevolution/2024/11/how-badly-do-humans-misjudge-ais.html
Full paper here – https://www.dropbox.com/scl/fi/pvo3ozkqfrmlwo3ndscdz/HLA_latest.pdf?rlkey=mmz8f71xm0a2t6nvixl7aih23&e=1&dl=0
