Beyond explainability: a study of the design of intelligible artificial intelligence in pathological anatomy and cytology
The explainability of artificial intelligence (AI) is often presented as essential to doctors’ appropriation of these technologies. However, the usual approach to explainability has two shortcomings: first, a lack of grounding in real professional situations, which creates a gap between the solutions proposed and users’ expectations; and second, an almost exclusive focus on how AI systems function, to the detriment of other aspects of their design that can also undermine the intelligibility of their results. This article examines an AI project in pathological anatomy and cytology that offers a novel perspective on the intelligibility of AI systems, emphasizing the construction of their ‘ground truth’. The study of the strategy implemented by this project argues for a contextual approach to the appropriability of AI, enabling the development of solutions that are genuinely aligned with the expectations of healthcare professionals.