Algorithmic agency: futuristic fiction or an imperative of procedural justice?

Special report
Reflections on the future of the Product Liability legislative framework in the European Union
By Ljupcho Grozdanovski

This study examines whether and how a high standard of judicial protection can apply to litigants in cases of harm caused by autonomous and opaque decisions made by Machine Learning (ML) systems. Based on a critical analysis of the procedural means offered by the EU’s Product Liability Directive (85/374/EEC), the study makes two normative claims. First, a fault-based liability model (rather than strict liability) should govern the adducing and assessment of evidence regarding proof of accountability in cases of AI-related harm. This model appears more procedurally fair, since programmers, users, and developers of ML systems would be able to rebut the currently irrebuttable presumption of human agency by proving that, in causing harm, an algorithm acted alone. Second, the study argues that two criteria should apply to the compensation of AI-related harm. On the one hand, for ML systems presenting so-called notorious risks of harm (i.e. types of harm that have already occurred in practice), the duty of compensation should lie with the programmer, user, or deployer who accepted the risk of such harm materializing. On the other hand, harm that has not yet occurred in practice could qualify as force majeure and could therefore be compensated through insurance schemes set up to ensure that victims are not left without compensation, where it is plausibly proven that the harm was authored by an ML system without any human intervention.

  • Machine Learning
  • opacity
  • justice
  • agency
  • accountability
  • evidence