If you believe that the fundamental fact of the world is its mistreatment of women, you usually discover that each new thing you investigate—quelle surprise!—mistreats women. If you begin with an overwhelming presupposition of racism, you often find racism down at the root of everything. And if you start with the long history of oppressing the poor, you almost always find, at the end of the day, that the poor are being oppressed.
The trouble is not that the results of such investigations are necessarily wrong. Tautologies tend to be true, after all. But the tautological also tends to tell us nothing new. The failure of circularity comes in part from the skepticism it engenders in us when we encounter it—tempting us to dismiss a book with a circularity of our own: Of course that author finds that result. The more important failing, however, comes from the fact that circularity rarely marks an advance. If the reason for some social problem is a structural flaw in the entire history of the world, then it has no discrete solution. Short of the end times, short of the divine resolution of everything human, there exists no human way to fix it.
We've had any number of books in recent months that look at current technology through such predetermined lenses. Andrew Guthrie Ferguson's The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, for example. Safiya Umoja Noble's Algorithms of Oppression: How Search Engines Reinforce Racism. The paperback edition of Cathy O'Neil's Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
Each of these does some interesting reporting on the uses and abuses of the latest advances in the computer revolution. It's true that none of them, not even Cathy O'Neil's Weapons of Math Destruction, really has strong objections to the whole idea of computerizing culture (at least, not when compared with the almost Pyrrhonian skepticism about social statistics displayed in Jerry Z. Muller's latest volume, The Tyranny of Metrics). Still, they are all rightly suspicious of the current effects of that computerizing. And they each close their argumentative circle like a gate slamming shut, leaving us not one step further on. We end mostly with just an unhelpful truism: If an economic technique or governmental instrument is oppressive, that oppression is usually going to fall more heavily on the poor and the vulnerable. The answer we need is how to halt oppression, in itself, not a conclusion that we need to spread the oppression around more fairly.
And then we have Virginia Eubanks's Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. In many ways, this book is a model of how to think about the current effects of the strange expectation that computers will solve the problems of our social programs. Not in every way. Eubanks feels compelled to signal her political virtuousness at times, reminding readers that she is a standard-issue lefty, opposed to those evil Republicans. And she stumbles at times into the dead circle, falling to the temptation to blame everything on the culture's endemic prejudices about race and economic class, which leaves readers just to shrug about the computer revolution: More of the same, innit? Ah, well. Move along.
But if we read past the occasional politics and occasional dead ends of tautology, conservatives and liberals alike will find that Automating Inequality is the best book we have thus far about the ways in which governments at nearly every level of authority are using computer algorithms as essentially magic: easy technological substitutes for the difficult balance of sympathy and intelligence needed to govern the messy thing that is human society.
Automating Inequality looks, for example, at the 2006 welfare reform launched under Governor Mitch Daniels in Indiana. We could argue about whether Indiana's welfare system needed reform. Republicans tended to think it did, while Democrats tended to think it did not. But that's just politics, the kind of dispute elections are designed to resolve. In Automating Inequality, Eubanks clearly sides with the virtuous Democrats against the vicious Republicans, the same old dull masquerading of political partisanship as logical argument. But the better and far more interesting analysis she performs is of the means that Indiana used to implement its welfare reform.
Daniels had insisted that new computerization would "clean up welfare waste" by reducing the number and, especially, the authority of welfare caseworkers (some of whom, he reasonably noted, were caught in corrupt collusion with the people in their casework). IBM had a new set of hardware and software—a new algorithm—that would provide an objective and incorruptible substitute for all those faulty human beings.
Indiana quickly discovered what many state governments would discover a few years later during the rollout of Obamacare: The installers of large-data computer programs often promise more than they can deliver. From its first moments, the system had troubles, and the welfare algorithm proved spotty, arbitrary, and just flat-out stupid. The examples are infuriating: insurance cancelled for minor flaws in paperwork, denials and awards of benefits issued without warning, and an automated delivery system that couldn't keep food banks stocked with food. In the end, Daniels admitted that the computerization was flawed, and Indiana switched to a mix of automation and caseworkers.
Governments controlled by Democrats fare no better—proof, if proof were needed, that our computerization problem cuts across ideological lines. Los Angeles, for example, announced with great pride that it had a solution to the urban disgrace of thousands of people living in the tent city known as "Skid Row." And the solution, of course, was an algorithm. A computer program would identify who most needed housing, and thus the city could prioritize: finding the at-risk, like flowers growing amidst the weeds of those who didn't need help.
Why are we surprised that the attempt to substitute algorithmic decisions for human judgments often worsened the situation? It's old news that human beings are flawed, but the computerized "objective system" was ugly in new ways. It collected reams of intrusive personal data—none of which required a warrant for prosecutors and police to examine—and it committed such interpretive absurdities as downgrading ex-convicts, not on the grounds that they had been criminals but on the grounds that they had recently had housing. In prison.
Automating Inequality then turns to the child protective service of Allegheny County, Pennsylvania—another place where government officials, dazzled by high-tech promises, imagined that they could solve all their woes. Child protective services have always been dangerously conflated agencies, charged both with helping families remain intact and with taking away their children. And when that dual charge was put into the digital hands of an algorithm, the results turned bizarre. So, for example, the Allegheny programs decided that children of abusive families had a higher risk of becoming abusers when they grew up—and would recommend taking away children on the grounds that the grandparents once had an anonymous abuse complaint lodged against them.
In policing and prosecution, in everything from zoning to jury selection, governmental function is increasingly being outsourced to computer algorithms. And the outsourcing originates, as Automating Inequality repeatedly notes, with the desire of officials to diffuse and dodge responsibility.
As the subtitle of her book suggests, Virginia Eubanks has a special concern for the poor. It's that concern that occasionally tempts her into the tautological trap of blaming everything new on the old oppressions of the underclasses—as though these algorithms would be fine, if they just had what John Paul II memorably called "a preferential option for the poor." But she is right that any new oppression will fall hardest on the most vulnerable. And she is just as right to demand that politicians, bureaucrats, and programmers alike need to ask themselves two questions about any new tool in the march toward computerization: Does it increase the self-determination and self-agency of the poor? and Would it be tolerated if aimed at the rich?
Given our moral duties to charity and the corporal works of mercy, those are important and vital questions. But in their formal nature, concerns about algorithms reach beyond even that. At every stage of computerization, at every moment in the rush to cede authority and responsibility, we need to ask the broadest of questions: Does a new technology increase human self-determination and self-agency? Does it free us to be more human, or reduce us to something less human?