Checklist for toxic algorithms


Based on comments in O'Neil's Weapons of Math Destruction. Full review here.

Opacity
  • Is the subject aware they are being modelled?
  • Is the subject aware of the model's outputs?
  • Is the subject aware of the model's predictors and weights?
  • Is the data the model uses open?
  • Is it dynamic - does it update when its predictions fail?

Scale
  • Does the model make decisions about many thousands of people?
  • Is the model famous enough to change incentives in its domain?
  • Does the model cause vicious feedback loops?
  • Does the model assign high-variance population estimates to individuals?

Damage
  • Does the model work against the subject's interests?
    • If yes, does the model do so in the social interest?
  • Is the model fully automated, i.e. does it make decisions as well as predictions?
  • Does the model take into account things it shouldn't?
  • Do its false positives do harm? Do its true positives?
  • Is the harm of false positives symmetric with the good of true positives?
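As a purely hypothetical illustration (not something from the book), the checklist above can be encoded as an audit rubric. A minimal sketch in Python, with every field name invented here:

```python
from dataclasses import dataclass, fields

@dataclass
class ModelAudit:
    """One boolean per checklist question above."""
    # Opacity (a No answer flags toxicity)
    subject_knows_modelled: bool
    subject_knows_outputs: bool
    subject_knows_weights: bool
    data_is_open: bool
    updates_on_failure: bool
    # Scale (a Yes answer flags toxicity)
    decides_for_thousands: bool
    changes_domain_incentives: bool
    causes_feedback_loops: bool
    applies_population_estimates: bool
    # Damage (a Yes answer flags toxicity)
    works_against_subject: bool
    fully_automated: bool
    uses_illegitimate_inputs: bool
    false_positives_harm: bool
    harm_benefit_asymmetric: bool

OPACITY = {"subject_knows_modelled", "subject_knows_outputs",
           "subject_knows_weights", "data_is_open", "updates_on_failure"}

def toxicity_score(audit: ModelAudit) -> int:
    """Count the flagged items: opacity questions count when answered
    No, scale and damage questions when answered Yes."""
    return sum(
        (not getattr(audit, f.name)) if f.name in OPACITY
        else getattr(audit, f.name)
        for f in fields(audit)
    )
```

A higher score marks a more WMD-like model; how to weight the items, or where to set a threshold, is left to the auditor.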



Note that "Inaccuracy" is not a criterion for O'Neil. This is maybe the core shortcoming of the book: it doesn't wrestle much with the hard tradeoff involved in when modelling unfair situations, e.g. living in a bad neighbourhood which increases your risks and insurance costs through no fault of your own. She comes down straightforwardly on the direct "make the model pretend it isn't there" diktat. My preferred measure would be to not prevent models from being rational, but instead make transfers to the victims of empirically unfair situation. (This looks pointlessly indirect, but price theory, and the harms of messing with them, is one of the few replicated economic.) My measure has the advantage of not requiring a massive interpretative creep of regulation: you just see what the models do as black boxes and then levy justice taxes after.

