Deep learning to predict sexual orientation in the public space

Special report: Predictive machines
Deconstructing an ambiguous alert
By Nicolas Baya-Laffite, Boris Beaude, Jérémie Garrigues

In September 2017, an alarm was raised about an algorithm said to predict people's sexual orientation from facial images, calling into question the status of "predictive machines" and the role of social science in such circumstances. Between claims of a return to physiognomy in the era of deep learning, an explanation of the model's performance that draws on a "biologizing" theory of the origins of sexual orientation, and the announcement of the end of privacy, this research, led by Stanford psychology professor Michal Kosinski, demands that the debate not be confined to ethics alone. This article examines the relevance of the alert raised by Kosinski in light of the pivotal controversy it sparked over predictive algorithms entering public debate. It highlights the ambiguity of the "whistleblower" status that the authors explicitly assumed, showing that a critical examination of their predictive model ultimately reveals its inability both to demonstrate the prenatal hormonal origins of sexual orientation and to distinguish the sexual orientations of individuals in the public space.
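The "false positives" keyword points to the base-rate argument underlying this critique: a classifier evaluated on a balanced sample can look accurate while remaining unable to single out individuals in a population where the target group is a small minority. The sketch below uses purely hypothetical sensitivity, specificity, and base-rate figures (it does not reproduce Kosinski's or the authors' numbers) to illustrate that arithmetic.

```python
# Illustrative sketch only: why a classifier that performs well on a balanced
# laboratory sample can still fail "in the public space".
# All numbers are hypothetical assumptions chosen for the example.

def positive_predictive_value(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Share of people flagged by the classifier who actually belong to the
    target group (precision), computed via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Balanced test set (50/50), as in many benchmark evaluations: precision looks high.
print(positive_predictive_value(0.85, 0.85, 0.50))  # ~0.85

# Assumed population base rate of 7%: with the same per-image performance,
# most flagged individuals are now false positives.
print(positive_predictive_value(0.85, 0.85, 0.07))  # ~0.30
```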

Keywords

  • alert
  • deep learning
  • false positives
  • machine learning
  • prediction
  • facial recognition
  • sexual orientation
  • computational social science