Beyond explainability: a study of the design of intelligible artificial intelligence in pathological anatomy and cytology
Explainability is often considered an essential prerequisite for the adoption of artificial intelligence (AI) systems by medical professionals. However, the way explainability is usually approached has two major shortcomings: first, a lack of grounding in real professional situations can create a gap between the solutions proposed and users’ expectations; second, there tends to be too much focus on how an AI functions and too little on other aspects of its design, which may equally lead to results lacking in intelligibility. This article examines an AI project in pathology that offers a new perspective on the intelligibility of AI systems, with particular emphasis on how their ground truth is established. A close examination of the strategy implemented in this project highlights the advantages of a contextual approach to AI adoption, one that favours the development of solutions genuinely aligned with the expectations of healthcare professionals.
- explainability
- adoption
- artificial intelligence
- pathology
- dataset
- ground truth