The black box and the physician. Investigating the legal implications of explainability in medical AI
The deployment of decision-support tools based on artificial intelligence techniques has highlighted issues of explainability and transparency when automated processing renders the path from input data to output results opaque. By studying tools developed for the treatment of cancer and multiple sclerosis, this paper compares the theoretical thinking of legal experts with the practical concerns of medical professionals on this issue. Drawing on the legal literature and on interviews and evaluation forms, the analysis reveals a partial disjunction: professionals are far more concerned with training data and validation methods than with the internal ‘logic’ of the algorithm, and they regard the more or less explainable technical options as complementary rather than mutually exclusive. However, the adoption of the new European Regulation on AI offers promising prospects for convergence that could strengthen the effectiveness of explainability principles and their various legal expressions.