The majority of AI research centers on model performance, but a new paper in JAMIA poses five questions to guide the discussion around how physicians actually interact with AI during diagnosis.
A little reframing goes a long way. As AI’s clinical scope expands alongside its capabilities, the interface between models and doctors is becoming increasingly important.
- Researchers from UCLA and Tufts University point out that this “human-computer interface” is essential to ensuring AI is properly integrated into care delivery, serving as the first line of defense against common AI pitfalls like distracting doctors or inviting unwarranted confidence in its answers.
Here are the questions they came up with, and why each one matters:
Question 1: What type of information and format should AI present?
- Why it’s important: How information gets presented is just as important as what information gets presented. Format affects doctors’ attention and diagnostic accuracy, and can introduce interpretive biases.
Question 2: Should AI provide that information immediately, after initial review, or be toggled on and off by the physician?
- Why it’s important: Immediate AI input can anchor a physician’s interpretation before they’ve formed their own impression, while delayed cues help physicians maintain their hard-earned diagnostic skills by letting them fully engage with each case.
Question 3: How does AI show its reasoning?
- Why it’s important: Clear explanations of how AI arrives at a decision can highlight features that were ruled in or out, provide “what if” explanations, and more effectively align with doctors’ clinical reasoning.
Question 4: How does AI affect bias and complacency?
- Why it’s important: When physicians lean too heavily on AI, they may engage their own critical thinking less, making it easier for the correct diagnosis to slip past them.
Question 5: What are the risks of long-term reliance on AI?
- Why it’s important: Long-term AI reliance could end up eroding learned diagnostic abilities. We recently covered a great study in The Lancet that investigated the topic.
The Takeaway
AI holds enormous potential to improve clinical decision-making, but poor integration could end up doing more harm than good. This paper provides a solid framework to push the field from “Can AI detect disease?” to “How should AI help doctors detect disease without introducing new risks?”

