The black box and the physician

By Sonia Desmoulin, John Crisp

The deployment of decision-support systems based on artificial intelligence technologies has highlighted issues of explainability and transparency that arise when automated processing renders the path from input data to output results opaque. By studying systems developed for the treatment of cancer and multiple sclerosis, this paper compares the theoretical thinking of legal scholars with the practical concerns of medical professionals on this issue. Drawing on the legal literature as well as interviews and evaluation forms, the analysis reveals a partial disconnect: professionals are far more concerned with training data and validation methods than with the internal “logic” of the algorithm. They appear to treat explainability as one factor among others in the development of complementary (more or less transparent) tools, rather than as a binary choice that would rule out one approach. Nevertheless, the adoption of the new European Regulation on AI offers promising prospects for convergence, likely to give substance to the principles of explainability and transparency and to their legal expressions.

  • AI
  • explainability
  • transparency
  • law
  • oncology
  • neurology