Naming Algorithmic Concerns

What’s in a name? Everything, of course. Naming creates identity, but it also shapes affect. Take algorithms. The literature on algorithmic concerns about fairness and discrimination is vast and growing.

The term “algorithmic bias” is most commonly used to describe these concerns. Bias is a generally well-understood concept and is applied in many settings (e.g., journalism). It works. Except, of course, that in algorithmic work, “bias” can have a different (and benign) technical meaning: a systematic offset in the range of observations or measurements. Algorithmic bias is a good thing, except when it isn’t.
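To illustrate the benign, statistical sense of the word, here is a minimal sketch using the classic textbook example of a biased estimator: the population-variance formula applied to a sample. (The example and data are illustrative assumptions, not from the post.)

```python
import statistics

# In statistics, a "biased" estimator is one that systematically
# under- or over-shoots the quantity it estimates -- no discrimination
# implied. The population-variance formula (dividing by n) is a biased
# estimator of variance when applied to a sample; dividing by n - 1
# (Bessel's correction) removes that bias.

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # illustrative sample

biased_var = statistics.pvariance(data)   # divides by n
unbiased_var = statistics.variance(data)  # divides by n - 1

print(biased_var, unbiased_var)  # the biased estimate is the smaller one
```

The biased estimate is always slightly smaller than the unbiased one for the same sample, which is exactly the kind of neutral, technical “bias” that the term’s everyday reading obscures.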

Other terms have emerged.

Examples such as “algorithmic inequity” (Sara Wachter-Boettcher: Technically Wrong) or “algorithmic inequality” (Virginia Eubanks: Automating Inequality) highlight the social and economic effects, and echo “pay inequity” and other forms of systemic discrimination. These terms resonate with familiar concepts but probably don’t provoke us as much as the authors would wish.

Perhaps the most dramatic example is “algorithmic violence” (Mimi Onuoha: Notes on Algorithmic Violence). No subtlety here. The harm is active, direct, and physical.

A related approach is Noble’s “algorithmic oppression” (Safiya Noble: Algorithms of Oppression), which highlights the political dimensions and the social justice implications. It links to Noble’s observation that “artificial intelligence will become a major human rights issue in the twenty-first century.”

The last example, from the journalist Julia Angwin, is insightful: “algorithmic privilege” (Julia Angwin: Quantifying Forgiveness). The perspective here is not the harm but the advantage. The bias in algorithms favours mainstream populations (i.e., white, middle class, etc.); the term recognizes the differential rewards rather than the discriminatory penalties.
