Conceptual Metaphors Impact Perceptions of Human-AI Collaboration
August 05, 2020 · Declared Dead · Proc. ACM Hum. Comput. Interact.
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Pranav Khadpe, Ranjay Krishna, Li Fei-Fei, Jeffrey Hancock, Michael Bernstein
arXiv ID
2008.02311
Category
cs.HC: Human-Computer Interaction
Cross-listed
cs.AI
Citations
146
Venue
Proc. ACM Hum. Comput. Interact.
Last Checked
2 months ago
Abstract
With the emergence of conversational artificial intelligence (AI) agents, it is important to understand the mechanisms that influence users' experiences of these agents. We study a common tool in the designer's toolkit: conceptual metaphors. Metaphors can present an agent as akin to a wry teenager, a toddler, or an experienced butler. How might a choice of metaphor influence our experience of the AI agent? Sampling metaphors along the dimensions of warmth and competence---defined by psychological theories as the primary axes of variation for human social perception---we perform a study (N=260) where we manipulate the metaphor, but not the behavior, of a Wizard-of-Oz conversational agent. Following the experience, participants are surveyed about their intention to use the agent, their desire to cooperate with the agent, and the agent's usability. Contrary to the current tendency of designers to use high competence metaphors to describe AI products, we find that metaphors that signal low competence lead to better evaluations of the agent than metaphors that signal high competence. This effect persists despite both high and low competence agents featuring human-level performance and the wizards being blind to condition. A second study confirms that intention to adopt decreases rapidly as competence projected by the metaphor increases. In a third study, we assess effects of metaphor choices on potential users' desire to try out the system and find that users are drawn to systems that project higher competence and warmth. These results suggest that projecting competence may help attract new users, but those users may discard the agent unless it can quickly correct with a lower competence metaphor. We close with a retrospective analysis that finds similar patterns between metaphors and user attitudes towards past conversational agents such as Xiaoice, Replika, Woebot, Mitsuku, and Tay.
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Human-Computer Interaction
R.I.P. 👻 Ghosted
Improving fairness in machine learning systems: What do industry practitioners need?
Identifying Stable Patterns over Time for Emotion Recognition from EEG
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities
Educational data mining and learning analytics: An updated survey
Died the same way · 👻 Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System