At a time when new healthcare AI solutions are unveiled every week, a study in Nature Machine Intelligence found that the way people are introduced to these models can have a major effect on their perceived effectiveness.
Researchers from MIT and ASU had 310 participants interact with a conversational AI mental health companion for 30 minutes before reviewing their experience and determining whether they would recommend it to a friend.
Participants were divided into three groups, which were each given a different priming statement about the AI’s motives:
- No motives: A neutral view of the AI as a tool
- Caring motives: A positive view where the AI cares about the user’s well-being
- Manipulative motives: A negative view where the AI has malicious intentions
The results revealed that priming statements clearly influence user perceptions, with most participants reporting experiences in line with the expectations they were given:
- 88% of the “caring” group and 79% of the “no motive” group believed the AI was empathetic or neutral – despite engaging with identical agents.
- Only 44% of the “manipulative” group agreed with the primer. As the authors put it, “If you tell someone to be suspicious of something, then they might just be more suspicious in general.”
- As might be expected, participants who believed the model was caring also gave it higher effectiveness scores and were more likely to recommend it to a friend. That’s obviously relevant for those developing similar mental health chatbots, but it’s also a key insight for presenting any AI agent to new users.
The study also found an interesting feedback loop between the priming and the conversation’s tone. People who believed the AI was caring tended to interact with it in a more positive way, causing the agent’s responses to drift positively over time. The opposite was true for those who believed it was manipulative.
The Takeaway
The placebo effect is a well-documented cornerstone of medical literature, but this might be the first study to bridge the phenomenon from sugar pill to AI chatbot. Although AI is often treated primarily as an engineering problem, this research does a great job of highlighting how human factors and the power of belief shape the perceived effectiveness of the technology.