- August 31, 2018
- Crime Prevention
As data-driven algorithms have come to play an increasingly important role in how governments make decisions, concern over what goes into these algorithms and what comes out of them has grown more urgent. Using data to inform government decisions promises greater efficiency and impartiality, but many fear that in practice these tools fail to deliver on that potential. Advocates argue that by training on data tainted by historically prejudiced practices, or by reflecting the (often unconscious) biases of mostly white, mostly male developers, algorithms will produce results that discriminate against people of color, religious minorities, women, and other groups.

These concerns gained significant traction following a 2016 article by ProPublica that analyzed racial disparities in the predictions made by COMPAS, a tool that generates risk scores used to help set bond amounts in Broward County, FL. By comparing risk scores to defendants' actual subsequent criminal activity, the analysis concluded that the software was twice as likely to falsely label black defendants as future criminals as it was white defendants, and more likely to falsely label white defendants as low risk. In other words, a larger share of supposedly high-risk black defendants did not go on to commit crimes, while a larger share of supposedly low-risk white defendants did.

A number of other investigations and analyses have surfaced similar problems: social media monitoring efforts that target racially and religiously loaded language, facial recognition software that is less accurate when evaluating black female faces, and child neglect prediction software that disproportionately targets poor black families, among many other examples of bias.
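To make the kind of disparity ProPublica measured concrete, the sketch below compares two error rates across groups: the false positive rate (labeled high risk but did not reoffend) and the false negative rate (labeled low risk but did reoffend). This is a minimal illustration, not ProPublica's actual analysis or data; the column names (`race`, `high_risk`, `reoffended`) and the tiny sample table are assumptions made purely for demonstration.

```python
# Minimal sketch of a group-wise error-rate comparison, in the spirit of
# the ProPublica analysis (not their actual code). Each hypothetical row
# is a defendant: `high_risk` is the tool's prediction, `reoffended` is
# the observed outcome during the follow-up period.
import pandas as pd

df = pd.DataFrame({
    "race":       ["black", "black", "black", "white", "white", "white"],
    "high_risk":  [True,    True,    False,   False,   False,   True],
    "reoffended": [False,   True,    False,   True,    False,   True],
})

for race, group in df.groupby("race"):
    # False positive rate: share of non-reoffenders labeled high risk.
    fpr = (group["high_risk"] & ~group["reoffended"]).sum() / (~group["reoffended"]).sum()
    # False negative rate: share of reoffenders labeled low risk.
    fnr = (~group["high_risk"] & group["reoffended"]).sum() / group["reoffended"].sum()
    print(f"{race}: false positive rate = {fpr:.2f}, false negative rate = {fnr:.2f}")
```

Measured this way, the two error rates can diverge sharply across groups even when the tool's overall accuracy looks similar for each, which is the heart of the disparity the ProPublica analysis reported.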
The foundation for ensuring fairness in an organization's algorithms is a set of structural conditions that promote a culture committed to reducing bias. Without these conditions, the policy and technical strategies that follow are irrelevant, because they will never become a priority for analysts.