Weapons of Math Destruction

Andrew Gould
2 min read · Feb 24, 2021

WMDs are often thought of as murderous intercontinental ballistic missiles and nuclear warheads, but the ones at issue here are less physically dazzling and equally insidious. In her book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O’Neil explains how AI and algorithmic models can be both gloriously revolutionary and as dangerous as a growing nuclear arsenal. In the same way that major governments have expanded their power through 30-foot-tall missiles, they have also expanded it through mathematical models.

Thus far in the text, O’Neil explains that not all mathematical models are created equal. Although most begin as well-intentioned attempts to create fair, unbiased ways of evaluating the world, some become deeply unfair systems that perpetuate the very problems they were meant to prevent.

In theory, mathematical models are based wholly on logic and observation. They should have no feelings, emotions, or biases, but as it turns out, this is not always the case. Models have inherent “blind spots,” as O’Neil describes them, which reveal their designers’ intent. She gives the example that a phone’s GPS doesn’t consider the buildings, trees, and farm animals along our path because they are not relevant to the route. Similarly, airplane avionics do not model them either, because they have no direct bearing on the answers the system gives.

These blind spots are what produce bias in models. Models strongly reflect the ideology and opinions of their designers, much as a human cannot speak entirely without bias; it is an intrinsic attribute of their existence. O’Neil goes on to address recidivism models, which attempt to mathematically evaluate the danger convicts pose to society. These models rely on intricate calculations that, on the surface, try to remove all bias from the sentencing of inmates; yet within those calculations sit implicit biases thrust upon the models by their creators. The question remains open-ended: have we successfully eliminated the prejudicial biases of humans by relying on mathematical models, or simply established a technological cop-out for our very human failings?
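To make the idea concrete, here is a minimal, purely hypothetical sketch (my own illustration, not an example from the book or any real sentencing tool): a scoring formula that never mentions a protected attribute can still encode its designer’s choices through a proxy feature.

```python
# Hypothetical toy illustration (not O'Neil's example, and not any real
# recidivism instrument): a "neutral" risk score whose inputs were chosen
# by its designer, and whose blind spots carry that designer's bias.

def risk_score(prior_arrests: int, neighborhood_arrest_rate: float) -> float:
    """Score a person from 0 (low risk) to 1 (high risk).

    The formula never mentions race or income, yet 'neighborhood_arrest_rate'
    is a proxy: it reflects where police historically patrolled, not how
    dangerous this particular individual actually is.
    """
    score = 0.1 * prior_arrests + 0.8 * neighborhood_arrest_rate
    return min(score, 1.0)

# Two people with identical personal histories...
person_a = risk_score(prior_arrests=1, neighborhood_arrest_rate=0.05)  # lightly policed area
person_b = risk_score(prior_arrests=1, neighborhood_arrest_rate=0.60)  # heavily policed area

print(f"Person A: {person_a:.2f}")  # ~0.14 -> likely labeled "low risk"
print(f"Person B: {person_b:.2f}")  # ~0.58 -> likely labeled "high risk"
```

Both people have the same record; their scores diverge only because of where they live, a factor the model’s designer chose to include.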
