The Policy Machine
The dangers of letting algorithms make decisions in law enforcement, welfare, and child protection.
By Virginia Eubanks
Public services are becoming increasingly algorithmic, a reality that has spawned hyperbolic comparisons to RoboCop and Minority Report, enforcement droids and pre-cogs. But the future of high-tech policymaking looks less like science fiction and more like Google’s PageRank algorithm.
For example, according to the Chicago Tribune, Robert McDaniel, a 22-year-old Chicago resident, was surprised when police commander Barbara West showed up at his West Side home in 2013 to warn him, one of "the most dangerous gangbangers" in the department's estimation, to stop his violent ways. McDaniel, who had a misdemeanor conviction and several arrests for a variety of offenses—drug possession, gambling, domestic violence—had made Chicago's now-notorious "heat list" of the 420 people most likely to be involved in violent crime sometime in the future. The list is the result of a proprietary predictive policing algorithm that likely crunches numbers on parole status, arrests, social networks, and proximity to violent crime.
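Chicago has not released the model, so any reconstruction is guesswork, but a toy sketch can make the mechanics concrete. Everything below is invented for illustration: the record format, the weights, and the function names. Only the reported inputs (parole status, arrests, social ties, proximity to violence) and the list size of 420 come from the reporting above.

```python
# Speculative sketch of a "heat list"-style risk score. The real model is
# proprietary; these features and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class PersonRecord:
    on_parole: bool
    arrest_count: int
    ties_to_shooting_victims: int    # e.g., co-arrests with victims of violence
    lives_near_recent_violence: bool

def heat_score(p: PersonRecord) -> float:
    """Weighted sum of risk factors; higher means 'riskier' to the model."""
    score = 2.0 * p.on_parole
    score += 0.5 * p.arrest_count
    score += 1.5 * p.ties_to_shooting_victims
    score += 1.0 * p.lives_near_recent_violence
    return score

def heat_list(records: list[PersonRecord], n: int = 420) -> list[PersonRecord]:
    """Rank a population and keep the top n, as the heat list reportedly did."""
    return sorted(records, key=heat_score, reverse=True)[:n]
```

Even a toy version makes the article's later worry visible: the score runs on arrests and geography rather than convictions, so neighborhoods that are already heavily policed generate the very data that puts their residents on the list.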
In December 2007, Indiana resident Sheila Perdue received a notice in the mail that she must participate in a telephone interview in order to be recertified to receive public assistance. In the past, Perdue, who is deaf and suffers from emphysema, chronic obstructive pulmonary disease, and bipolar disorder, would have visited her local caseworker to explain why this was impossible. But the state’s welfare eligibility system had recently been “modernized,” leaving a website and an 800 number as the primary ways to communicate with the Family and Social Services Administration.
Perdue requested and was denied an in-person interview. She gathered her paperwork, traveled to a nearby help center, and requested assistance. Employees at the center referred her to the online system. Uncomfortable with the technology, she asked for help with the online forms and was refused. She filled out the application to the best of her ability. Several weeks later, she learned she was denied recertification. The reason? “Failure to cooperate” in establishing eligibility.
An algorithm is a set of instructions designed to produce an output: a recipe for decision-making, for finding solutions. In computerized form, algorithms are increasingly important to our political lives. According to legal scholar Danielle Keats Citron, automated decision-making systems like predictive policing or remote welfare eligibility no longer simply help humans in government agencies apply procedural rules; instead, they have become primary decision-makers in public policy. These abstract formulas have real, material impacts: One branded Robert McDaniel a likely criminal, while the other left Sheila Perdue without access to life-sustaining nutritional and health benefits.
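Stripped to its bones, that recipe is literal: much policy code is a chain of conditionals applied to a case record. The deliberately trivial sketch below is neither Chicago's nor Indiana's actual logic, and its dollar figures are invented; it shows only the shape of rule-following code.

```python
# Toy decision recipe. Not either system discussed above; the cutoff
# formula and dollar figures are invented for illustration.
def benefits_decision(monthly_income: float, household_size: int) -> bool:
    """Approve if income falls below a per-household-size cutoff."""
    cutoff = 1000 + 350 * household_size   # hypothetical thresholds
    return monthly_income < cutoff

print(benefits_decision(monthly_income=1800, household_size=3))  # True
```

The recipe is mechanical and perfectly consistent, which is exactly its appeal and, as the two cases above show, exactly its danger when the rules encode bad assumptions.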
Algorithms help human beings make decisions even as they—often less transparently—make decisions for them. Google's much-debated PageRank algorithm, for example, calculates the relative importance of web sources by counting the number and quality of links to individual pages. Google's broader ranking system layers other signals on top: information collected about your previous searches, the "mobile compatibility" of resulting websites, and whether the results point to Google's own products and services. So Google's algorithms sift information in ways that influence what you see and what you don't.
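The link-counting core of PageRank is public and small enough to sketch. The minimal power-iteration version below captures only that core; Google's production ranking layers the personalization and compatibility signals mentioned above, and many others, on top of it.

```python
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """Minimal PageRank by power iteration.

    links maps each page to the pages it links to; a page's rank is
    redistributed evenly among its outgoing links each round.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages   # dangling pages spread rank evenly
            share = damping * rank[page] / len(targets)
            for t in targets:
                new_rank[t] += share
        rank = new_rank
    return rank

# "a" is linked to by both other pages, so it ends up ranked highest.
print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```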
But algorithmic decision-making takes on a new level of significance when it moves beyond sifting your search results and into the realm of public policy. The algorithms that dominate policymaking—particularly in public services such as law enforcement, welfare, and child protection—act less like data sifters and more like gatekeepers, mediating access to public resources, assessing risks, and sorting groups of people into "deserving" and "undeserving," "suspicious" and "unsuspicious" categories.
Policy algorithms promise increased efficiency, consistent application of rules, timelier decisions, and improved communication. But they also raise issues of equity and fairness, challenge existing due process rules, and can threaten Americans’ well-being. Predictive policing relies on data built upon a foundation of historical racial inequities in law enforcement. Remote eligibility systems run on the questionable assumption that lacking a single document—in a process that often requires dozens of pages of supporting material—is an affirmative refusal to cooperate with the welfare determination process.
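Indiana has not published its eligibility code, so the sketch below is hypothetical; the document names and count are made up. It expresses only the failure mode just described: a checklist rule that cannot tell a lost, inaccessible, or never-requested document apart from a refusal to cooperate.

```python
# Hypothetical sketch of the logic the article criticizes. Document names
# and counts are invented; only the failure mode is drawn from the text.
REQUIRED_DOCUMENTS = {f"form_{i}" for i in range(1, 31)}   # dozens of items

def recertification_decision(submitted: set[str]) -> str:
    missing = REQUIRED_DOCUMENTS - submitted
    if missing:
        # The rule cannot distinguish "couldn't use the website" or
        # "lost in the mail" from an affirmative refusal to comply.
        return "DENIED: failure to cooperate in establishing eligibility"
    return "recertified"

# One missing page out of thirty yields the same outcome as ignoring
# the process entirely.
print(recertification_decision(REQUIRED_DOCUMENTS - {"form_17"}))
```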
Policy algorithms can cause real damage that is difficult to remedy under existing legal protections, especially when algorithms terminate basic services. If community members are unfairly stigmatized by police surveillance or incorrectly denied care for acute medical conditions, it is nearly impossible to make them whole after the fact.
So how do we preserve fairness, due process, and equity in automated decision-making?
1) We need to learn more about how policy algorithms work. Even after multiple Freedom of Information Act requests, the Chicago Police Department refuses to share the names of the people on its heat list or to disclose the algorithm that generates it. Likewise, the code that determines welfare eligibility in Indiana is kept hidden. This is the rule, not the exception; policy algorithms are generally considered corporate intellectual property or are kept under wraps to keep users from developing ways to game the system. Christian Sandvig, a scholar of information technology and public policy, and his colleagues suggest that one way to expose discrimination in automated decision-making is to perform algorithmic audits—much like the paired audit studies that test for racial discrimination in housing and employment (a sketch follows below).
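In a paired audit, testers submit inputs that are identical except for one attribute (or a proxy for it, such as a ZIP code) and compare the outputs. A minimal sketch, with a made-up black-box scorer standing in for whatever policy algorithm an auditor can query:

```python
import copy

def paired_audit(score_fn, base_profile: dict, attribute: str, values) -> dict:
    """Probe a black-box scorer with profiles that differ in one attribute,
    in the spirit of paired housing and employment audits."""
    results = {}
    for v in values:
        probe = copy.deepcopy(base_profile)
        probe[attribute] = v
        results[v] = score_fn(probe)
    return results

def risk_score(profile: dict) -> float:
    # Hypothetical opaque scorer; in a real audit this is the system under test.
    return 1.0 if profile["zip_code"] == "60624" else 0.2

print(paired_audit(risk_score, {"zip_code": "60601", "arrests": 1},
                   "zip_code", ["60601", "60624"]))
# Diverging scores for otherwise-identical profiles flag the attribute,
# or whatever it proxies for, as decision-relevant and worth scrutiny.
```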
2) We need to address the political context of algorithms. Even if we achieve perfect transparency in policy algorithms, it might not change their innate biases. As both the Chicago and the Indiana cases show, automated systems are built on unexamined assumptions about the targets of public policy—their predisposition to criminal behavior or fraud, for example. These presumptions become inequities baked into the code that need to be uncovered and excised.
3) We need to address how cumulative disadvantage sediments in algorithms. Not all technological glitches are equal, and patterns of digital error and response recapitulate historical forms of disadvantage. As the Leadership Conference on Civil and Human Rights recently stated, "Computerized decisionmaking … must be judged by its impact on real people, must operate fairly for all communities, and in particular must protect the interests of those that are disadvantaged or that have historically been the subject of discrimination."
4) We need to respect constitutional principles, enforce legal rights, and strengthen due process procedures. Policy algorithms are neither individuals nor legal rules, per se, so it is difficult to prevent or address the damage caused by their mistakes and design flaws. We need to ask big, new questions: Who is at fault if a computer system correctly follows policy but the results disproportionately harm the poor? Can a computerized decision system be accused of racism? Even if we have to develop entirely new safeguards for due process and constitutional principles, we must act to protect participatory decision-making—a core tenet of our democracy.
Perhaps the most troubling aspect of policy by algorithm is what makes it most similar to ED-209, the two-legged enforcement droid in RoboCop: its lack of empathy and the potential for separating digital decision-makers from the embodied impact of their choices. Like drones, decision-making algorithms are a form of politics played out at a distance, generating a troubling amount of emotional remove.
But we also have to recognize that not all governance is data-based. Policy by algorithm seems clean, clear, and efficient, but its foundations are sunk in the same human complexities as any other form of decision-making. And that’s as it should be. Through politics, we enter into and adjust a social contract, create and reconsider shared values, and navigate our conflicting needs and desires. It’s a messy business. Policymaking can’t be paint-by-numbers—it’s a human enterprise that requires us to deploy and adapt all the quantitative and qualitative capacities we can muster.