Explainable AI (XAI) has emerged as a promising solution for addressing the implementation challenges of AI/ML in healthcare. However, little is known about how developers and clinicians interpret XAI and what conflicting goals and requirements they may have. This talk presents the findings of a longitudinal multi-method study involving 112 developers and clinicians who co-developed XAI solutions for a clinical decision support system.
Our study identifies three key differences between developer and clinician mental models of XAI and proposes five design solutions to address the XAI conundrum in healthcare: causal inference, counterfactual queries, interactive visualizations, personalized explanations, and contextual information. The talk concludes with a discussion of these design possibilities and practical recommendations for improving the effectiveness and usability of XAI in healthcare and other contexts.
----
Nadine Bienefeld
Nadine is a Senior Researcher and Lecturer at ETH Zurich. She is passionate about building intelligent systems that augment rather than undermine human work, increasing trust, worker motivation, and effective collaboration in human-AI teams.
She has an industry background in human factors in healthcare and aviation, with positions held at Swiss International Airlines, the Federal Office of Civil Aviation, and NASA. Alongside her academic work, she continues to advise senior leaders from various high-reliability organizations on their digital transformation journeys.