(Another one in the series promised
here; I’m writing a paper. This is still somewhat muddled, so read at your own risk, though comments are greatly appreciated if you find this of interest.)
Many problems of social life have “solutions” that can be correct or incorrect. Determining whether someone is guilty or innocent of a violation of legal rules, allocating goods and services to their best uses, policing a neighbourhood so that potential violators of community norms are deterred: all of these can be characterized as problems whose solutions can be said to be at least partially correct or incorrect, assuming there is broad agreement on the values a solution should promote. Different institutional solutions to these problems can thus be evaluated with respect to their epistemic power: the degree to which, on average, a given institution is able to reach the “correct answer” to the problem in question. A “computational” argument for a particular institutional solution is simply an argument that the institution in question has greater epistemic power than the alternatives (relative, of course, to a particular problem domain), given the incentives the institution provides for gathering or revealing information or knowledge important to the problem, the average capacities of the relevant decision-makers to process this information or use this knowledge, and the rules or mechanisms it uses for aggregating their judgments.
Consider, for example, the institution of trial by jury. Though jury trials have more than one justification, their epistemic power to determine violations of legal norms is often invoked as an argument for their use
vis-à-vis, say, trials by ordeal or combat. A trial by jury is expected to be a
better “truth tracker” than trials by ordeal or trials by combat in the sense that it is expected to better discriminate between violators and non-violators of legal norms (more precisely, it is expected to better identify
non-violators, even at the expense of failing to identify violators) because, among other things, it may provide good incentives for the revelation of both inculpatory and exculpatory evidence via adversarial proceedings and may allow the judgments of individual jurors to be combined in ways that amplify their epistemic power (see, e.g.,
Condorcet’s jury theorem). By contrast, trials by ordeal or combat are supposed to be lousy discriminators between violators and non-violators of legal norms (or lousy ways of identifying non-violators), because they provide bad incentives for the revelation of the relevant information (though see
Peter Leeson’s work for some interesting ideas on how trials by ordeal might have exploited widespread beliefs in “god’s judgment” to discriminate accurately between the guilty and the innocent). To be sure, even if the computational argument for jury trials is correct, we may still not want to use them to determine every violation of legal norms: considerations of cost, fairness, speed, or a lack of suitable jurors may reasonably limit their use. But the epistemic power of jury trials would surely be one important consideration for using them rather than other mechanisms in trying to figure out whether a particular person has violated a legal norm.
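To make the aggregation point concrete, here is a minimal sketch in Python of the arithmetic behind Condorcet’s jury theorem; the competence value p = 0.6 and the jury sizes are illustrative assumptions, not estimates about real juries. If each juror is independently right more often than not, a majority of many jurors is right far more often than any single juror.

```python
# Toy illustration of Condorcet's jury theorem: if each juror is independently
# correct with probability p > 0.5, the probability that a majority of n jurors
# is correct rises towards 1 as n grows.
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent jurors is correct."""
    k_needed = n // 2 + 1  # smallest strict majority (n taken to be odd here)
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_needed, n + 1))

if __name__ == "__main__":
    p = 0.6  # assumed individual competence: better than a coin flip, far from infallible
    for n in (1, 3, 11, 51):
        print(f"{n:3d} jurors, p={p}: P(majority correct) = {majority_correct(n, p):.3f}")
    # Rises from 0.600 for a single juror to roughly 0.93 for 51 jurors,
    # provided competence exceeds 0.5 and judgments are independent.
```

The same arithmetic also shows the theorem’s fragility: if individual competence falls below one half, or jurors’ errors are correlated, aggregation stops helping and can even hurt, which is why the “may allow” hedge above matters.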
Now, the idea that some institutions are better than others at “social cognition” or “social information processing” is not inherently conservative, as the example of jury trials indicates. “Computational” or “social cognition” arguments have been deployed in defence of a wide variety of institutions, from
democracy to
Wikipedia, and from
markets to the
common law, without necessarily bolstering a “conservative” position in politics, however conceived. (For a good discussion of the concepts of social cognition and social information processing, as well as a review of some of the research that attempts to untangle when and how social cognition is possible, see this
short paper by Cosma Shalizi.) But there is a set of arguments for “conservatism,” broadly understood, that argues, on computational grounds, for the epistemic power of some “default” solution to a social problem and against the epistemic power of an “explicit” intervention. The same contrast is sometimes expressed differently – e.g., in terms of decentralized vs. centralized institutions, “unplanned” vs. “planned” social interaction, or customary vs. explicit rules – but it always indicates something like the idea that some institutional solutions to a problem need not be explicitly
produced by a single, identifiable agent like a government. A computational argument for conservatism thus makes the (implicit or explicit) claim that we can “conserve” on reason by relying on the computational services of such patterns of interaction or institutions to determine the solution to a problem of social life rather than attempting to explicitly compute the solution ourselves.
This can get confusing, for it is not always clear what would count as a “default” solution to a social problem, and “restoring” (or even implementing) the default solution may entail far-reaching changes to a social system (amounting even to revolution in some cases). So bear with me while I engage in a classificatory exercise. Three options for defining the default seem possible: the pattern of interaction that has historically been the case (“custom” or “tradition”); the pattern of interaction that would have prevailed in the absence of explicit planning or design by one or more powerful actors (e.g., “the free market” as opposed to a system of economic allocation involving various degrees of centralized planning); and the pattern of interaction that “computes” the solution to the social problem implicitly rather than explicitly (compare, for example, a “cap and trade” market for carbon emissions with an administrative regulation setting a price for carbon emissions: both are
designed, but the former computes the solution to the problem of the appropriate price for carbon implicitly rather than explicitly). We might call these options the “Burkean,” “Hayekian,” and (for lack of a better word) “Neoliberal” understandings of the relevant “default” pattern of interaction, which in turn define epistemic arguments for, respectively, the
superiority of tradition over innovation, the superiority of
spontaneous order over planned social orders, and the superiority of implicit systems of “parallel” social computation over explicit centralized systems of social computation. But what reasons do we have to think that any of these “default” patterns of interaction have greater epistemic power than the relevant alternatives? Or rather, under what conditions are they computationally better than those alternatives?
Let us start with the last (“Neoliberal”) position, since it seems to me the easiest to analyze and at any rate is the farthest from conservatism in the usual sense of the term (the “Burkean” position is the closest to conservatism, while the “Hayekian” sits more or less in the middle; I’m leaving the analysis of “Burkean” arguments to another post). Here the relevant comparison is between two designed institutional solutions to a problem, one that aims to determine the solution to a social problem by setting the rules of interaction and letting the solution emerge from the interaction itself, and another that aims to induce a set of actors to consciously and intentionally produce the solution to the problem. Thus, for example, a “cap and trade” market in carbon emissions aims ultimately to efficiently allocate resources in an economy on the assumption that the economy should emit less than X amount of carbon into the atmosphere, but it does so by setting a cap on the amount of carbon that may be produced by all actors in the market and letting actors trade with one another, not by asking a set of people to directly calculate what the best allocation of resources would be (or even by directly setting the price of carbon). We might compare this solution to a sort of parallel computation: given a target amount of carbon emissions, the relevant computation concerning the proper allocation of resources is to be carried out in a decentralized fashion by economic actors in possession of private and sometimes difficult-to-articulate knowledge about their needs, production processes, and the like, who communicate with one another the essential information necessary to coordinate their allocation plans via the messaging system of “prices.”
This sort of pattern of interaction will be computationally superior to a
centralized computation of the solution to the same problem whenever the relevant knowledge and information needed to determine the solution are dispersed, poorly articulated, time-sensitive, and expensive or otherwise difficult for centralized bodies to gather (perhaps because actors have incentives not to truthfully disclose such information to centralized bodies), yet essential features of such knowledge are nevertheless communicable to other actors via decentralized and asynchronous message passing (like prices). (Hayek’s famous argument for markets and against central planning basically boils down to a similar claim.) The problem can thus be decomposed into separate tasks that individual actors can easily solve on their own while providing enough information to other actors within appropriate time frames so that an overall solution can emerge.
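As a toy illustration of this kind of decomposition, here is a sketch in Python of how a cap-and-trade scheme might “compute” an allocation. The firms and their cost figures are invented, and the simple price-adjustment loop merely stands in for whatever trading process actually moves a real market toward the cap; the point is that each firm draws only on its own baseline emissions and abatement costs, and reports nothing but how much it would emit at a posted permit price.

```python
# A stylized sketch of "implicit" computation in a cap-and-trade market, using
# invented numbers. Each firm knows only its own baseline emissions and abatement
# cost curve; the centre observes only how much each firm would emit at a posted
# permit price, and adjusts that price until aggregate demand meets the cap.

FIRMS = {
    # name: (baseline emissions, marginal abatement cost slope); private to each firm
    "steel":   (100.0, 0.8),
    "cement":  ( 60.0, 1.5),
    "airline": ( 40.0, 3.0),
}

def emissions_at_price(baseline: float, cost_slope: float, price: float) -> float:
    """A firm abates until its marginal abatement cost equals the permit price."""
    abatement = min(baseline, price / cost_slope)
    return baseline - abatement

def clear_market(cap: float, lo: float = 0.0, hi: float = 1000.0, tol: float = 1e-6) -> float:
    """Adjust the permit price (here by bisection) until demand for emissions meets the cap."""
    price = (lo + hi) / 2
    for _ in range(200):
        price = (lo + hi) / 2
        demand = sum(emissions_at_price(b, g, price) for b, g in FIRMS.values())
        if abs(demand - cap) < tol:
            break
        if demand > cap:
            lo = price  # too much demand for emissions: the price must rise
        else:
            hi = price  # demand is below the cap: the price can fall
    return price

if __name__ == "__main__":
    cap = 120.0
    p = clear_market(cap)
    print(f"Market-clearing permit price: {p:.2f}")
    for name, (b, g) in FIRMS.items():
        print(f"  {name:8s} emits {emissions_at_price(b, g, p):6.2f} (baseline {b:.0f})")
```

What the sketch is meant to bring out is that the centre never needs the firms’ private cost curves; it needs only their responses to a price signal, which is the sense in which the computation is carried out “in parallel” by the market’s participants.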
But these conditions do not always hold. Consider, for example, the problem of
designing an appropriate “cap and trade” market. Here the relevant knowledge is not dispersed and poorly articulated but is instead highly specialized and articulated (e.g., knowledge of “
mechanism design” or “
auction theory” in economics), is not as obviously time-sensitive, and cannot easily be divided. (Though the problem of discovering the truth about mechanism design or auctions might itself be best tackled in a decentralized manner.) We might perhaps learn here from computer science proper: some problems can be tackled by easily “parallelized” algorithms (algorithms that can be broken down into little tasks that run in a decentralized fashion on thousands of different computers), but some problems cannot (the best available algorithm needs to run on a single processor, or the problem can only be broken down into steps that need to run sequentially, like some algorithms for calculating pi); in fact there seems to be an entire research programme trying to figure out which classes of problems can be parallelized and which cannot. (And this seems to be a deep and difficult question.) Or we might speak here of “epistemic bottlenecks” that limit the degree to which a problem can be broken down into tasks that can be solved via a division of epistemic labor; the problem of
designing an appropriate division of epistemic labor for a specific purpose might be one of these.
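To make the parallelizability contrast concrete, here is a toy comparison in Python. Both halves compute pi, purely for illustration; this is not a claim about the best algorithms for pi, and the connection to the complexity-theoretic research programme mentioned above is only suggestive. The Monte Carlo estimate splits into fully independent chunks that could run on as many machines as one likes, while the Gauss-Legendre iteration is a chain in which each step needs the previous step’s output, so additional processors are of little help.

```python
# Two ways of computing pi, as a toy illustration of the contrast drawn above.
# The Monte Carlo estimate decomposes into fully independent chunks that could
# run on thousands of machines; the Gauss-Legendre iteration is a chain in which
# every step needs the previous step's result, so extra processors buy little.
import math
import random
from concurrent.futures import ProcessPoolExecutor

def count_hits(n_samples: int) -> int:
    """Independent subtask: count random points landing inside the unit quarter-circle."""
    rng = random.Random()
    return sum(1 for _ in range(n_samples) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def pi_parallel(n_samples: int = 400_000, workers: int = 4) -> float:
    """Embarrassingly parallel: each worker's chunk is computed with no communication."""
    chunk = n_samples // workers
    with ProcessPoolExecutor(max_workers=workers) as pool:
        hits = sum(pool.map(count_hits, [chunk] * workers))
    return 4 * hits / (chunk * workers)

def pi_sequential(iterations: int = 3) -> float:
    """Gauss-Legendre iteration: each pass depends on the previous one, so the chain cannot be split up."""
    a, b, t, p = 1.0, 1.0 / math.sqrt(2.0), 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2
        b = math.sqrt(a * b)
        t -= p * (a - a_next) ** 2
        p *= 2
        a = a_next
    return (a + b) ** 2 / (4 * t)

if __name__ == "__main__":
    print(f"Monte Carlo (parallelizable, noisy): {pi_parallel():.4f}")
    print(f"Gauss-Legendre (sequential, exact to double precision): {pi_sequential():.15f}")
```

The analogy with “epistemic bottlenecks” is that problems of the second kind resist a division of labor no matter how many willing contributors there are.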
The “computational” argument for implicit over explicit computation depends on the identification of epistemic bottlenecks in explicit mechanisms of computation that are not present in the implicit mechanism. But it does
not depend on a contrast between designed and undesigned solutions to a problem: both a carbon market and an administrative regulation are equally designed solutions to the same problem. In order to make the computational case for
spontaneous order (as against “planned” order), one has to argue not only that there are epistemic bottlenecks in the explicit mechanism of computation, but that the problem of designing an order for
computing the solution to the problem is itself subject to the epistemic bottlenecks that render explicit solutions unfeasible; and here, I am not sure that Hayek or anyone else has given a convincing argument yet. (One could, of course,
give “selection” arguments for preferring spontaneous to designed orders, but that is a subject for another post.)