Understanding algorithms and their impact on public discourse, then, requires thinking not simply about how they work, where they are deployed, or what animates them financially. This is not simply a call to unveil their inner workings and spotlight their implicit criteria. It is a sociological inquiry that does not interest the providers of these algorithms, who are not always in the best position to even ask. It requires examining why algorithms are being looked to as a credible knowledge logic, how they fall apart and are repaired when they come in contact with the ebb and flow of public discourse, and where political assumptions might not only be etched into their design, but also constitutive of their widespread use and legitimacy.
I see the emergence of the algorithm as a trusted information tool as the latest response to a fundamental tension of public discourse. The means by which we produce, circulate, and consume information in a complex society must necessarily be handled through the division of labor: some produce and select information, and the rest of us, at least in that moment, can only take it for what it’s worth. Every public medium previous to this has faced this challenge, from town criers to newspapers to broadcasting. In each, when we turn over the provision of knowledge to others, we are left vulnerable to their choices, methods, and subjectivities. Sometimes this is a positive, providing expertise, editorial acumen, refined taste. But we are also wary of the intervention, of human failings and vested interests, and find ourselves with only secondary mechanisms of social trust by which to vouch for what is true and relevant (Shapin 1995). Their procedures are largely unavailable to us. Their procedures are unavoidably selective, emphasizing some information and discarding others, and the choices may be consequential. There is the distinct possibility of error, bias, manipulation, laziness, commercial or political influence, or systemic failures. The selection process can always be an opportunity to curate for reasons other than relevance: for propriety, for commercial or institutional self-interest, or for political gain. Together this represents a fundamental vulnerability,
PROPERTY OF MIT PRESS: FOR PROOFREADING AND INDEXING PURPOSES ONLY
192 Tarleton Gillespie
one that we can never fully resolve; we can merely build assurances as best we can.
From this perspective, we might see algorithms not just as codes with consequences, but as the latest, socially constructed and institutionally managed mechanism for assuring public acumen: a new knowledge logic. We might consider the algorithmic as posed against, and perhaps supplanting, the editorial as a competing logic. The editorial logic depends on the subjective choices of experts, who are themselves made and authorized through institutional processes of training and certification, or validated by the public through the mechanisms of the market. The algorithmic logic, by contrast, depends on the proceduralized choices of a machine, designed by human operators to automate some proxy of human judgment or unearth patterns across collected social traces. Both struggle with, and claim to resolve, the fundamental problem of human knowledge: how to identify relevant information crucial to the public, through unavoidably human means, in such a way as to be free from human error, bias, or manipulation. Both the algorithmic and editorial approaches to knowledge are deeply important and deeply problematic; much of the scholarship on communication, media, technology, and publics grapples with one or both techniques and their pitfalls.
A sociological inquiry into algorithms should aspire to reveal the complex workings of this knowledge machine, both the process by which it chooses information for users and the social process by which it is made into a legitimate system. But there may be something, in the end, impenetrable about algorithms. They are designed to work without human intervention, they are deliberately obfuscated, and they work with information on a scale that is hard to comprehend (at least without other algorithmic tools). And perhaps more than that, we want relief from the duty of being skeptical about information we cannot ever verify for certain. These mechanisms by which we settle (if not resolve) this problem, then, are solutions we cannot merely rely on, but must believe in. But this kind of faith (Vaidhyanathan 2011) renders it difficult to soberly recognize their flaws and fragilities.
So in many ways, algorithms remain outside our grasp, and they are designed to be. This is not to say that we should not aspire to illuminate their workings and impact. We should. But we may also need to prepare ourselves for more and more encounters with the unexpected and ineffable associations they will sometimes draw for us, the fundamental uncertainty about who we are speaking to or hearing, and the palpable but opaque undercurrents that move quietly beneath knowledge when it is managed by algorithms.