Wednesday, December 15, 2010

The uncanny accuracy of European public opinion on the amount of foreign aid that governments give

Ok, this is probably the last post on this topic for a while. But a student (thanks Andrew!) put some of the data on European perceptions of how much foreign aid their governments give (from Eurobarometer 50.1, 1999) into nice electronic form, and I was able to calculate exactly the median response. And really, the results surprised me: European public opinion turns out to be uncannily accurate at determining the answer to that question, far more accurate than Americans are, so much so that I wonder whether the results discussed in this post are not simply driven by the way the question is asked in the US. The accuracy of European public opinion on this topic actually seems like a striking confirmation of the models of "information aggregation" I invoked earlier: when signals are unbiased, public opinion should converge on the true answer.

The question Eurobarometer 50.1 asked is: "We are not talking about humanitarian aid, that is assistance provided in emergency situations, like wars, famine, etc, but about development aid. Do you think the (NATIONALITY) government helps the people in poor countries in Africa, South America, Asia, etc to develop? (IF YES) Roughly how much of its budget do you think the (NATIONALITY) government spends on this aid?"

The potential answers are:


1 No
2 Yes, less than 1%
3 Yes, between 1 and 4%
4 Yes, between 5 and 9%
5 Yes, between 10 and 14%
6 Yes, between 15 and 19%
7 Yes, between 20 and 24%
8 Yes, between 25 and 29%
9 Yes, 30% or more
10 Yes, but I do not know the percentage (SPONTANEOUS)
NSP No response/Don't know

The correct response is coded 3, between 1 and 4%.

So how did Europeans do in 1996-1998?

Their answers are collected in this table. As you can see, on average about 40-45% of Europeans say they don't know how much aid their governments give (though only about 20% don't know if their governments give any aid, or refuse to answer; another 20% say they think their governments give ODA (official development assistance), but don't know how much), and only about 16% give the correct response. So most Europeans seem to lack knowledge of how much ODA their governments give. (Though note the variance: the vast majority of Danes claim to know that their government gives aid, and something like 40% of them give the correct response).

But this is the wrong metric to focus on. In order to determine how accurate the aggregate public opinion is, we have to do something like what Francis Galton did when he asked people at a country fair to estimate the weight of an ox, and calculate the median response among those who claim to know the answer (roughly, this is the answer that would emerge from a "democratic" vote). And here the results are quite different. In this table, I've included only the answers of people who claim to know the actual percentage of the budget given by European governments as ODA (the number shown is the percentage of respondents who claim to know how much money their governments give as ODA), as well as their average and median responses. And Europeans get it exactly right: the median answer in both 1996 and 1998 was precisely 3 (the correct answer). The median in most countries was also very close to the truth: Germans and Belgians overestimate the amount of aid they give (their median answer is 4, meaning between 5% and 9% of the budget, perhaps because Germans suffer from a status effect and Belgians have Brussels?), whereas Greeks, Spaniards, Finns, and Swedes (and Italians in 1998) slightly underestimate the amount of aid they give.
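For concreteness, here is a minimal sketch of the aggregation procedure I have in mind, with made-up responses standing in for the actual Eurobarometer microdata:

```python
# Toy version of the "Galton" aggregation described above. Response codes
# follow Eurobarometer 50.1: 2 = "less than 1%", 3 = "between 1 and 4%"
# (the correct answer), 4 = "between 5 and 9%", and so on; 1 = "No",
# 10 = "Yes, but don't know the percentage", None = no response/don't know.
from statistics import median

answers = [1, 2, 3, 3, 3, 4, 10, None, 3, 5, 2, 3, 10, None, 4]  # made up

# Keep only respondents who venture a percentage guess (codes 2-9):
numeric = [a for a in answers if a is not None and 2 <= a <= 9]
print(median(numeric))  # -> 3: the collective "democratic" answer
```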

So, collective opinion in the EU, in 1996-1998, "knew" the right answer to the question that seems to stump Americans. I wonder whether the problem of bias in American estimates of ODA today is caused by the way the question is asked in PIPA's survey. Would Americans display such a large bias if the question of Eurobarometer 50.1 were asked of them?

[update: fixed some typos and other minor problems for the sake of clarity, 12/15/2010]

On the idea of Tolerable Outcomes (Epistemic Arguments for Conservatism V)

What does it mean for an institution to be associated with “tolerable” outcomes over a period of time? The question is more subtle than I thought at first; under prompting from a friend who commented on the paper I am writing, here’s a stab. (For an introduction to this series, see here; for all the other posts, click here; the idea of “tolerable” or "ok" outcomes is used here, here and here).

The first problem is to determine the sense in which we might say that some outcome (or some set of circumstances) is “tolerable.” One promising idea identifies tolerable outcomes with those outcomes that do not involve “clear evils.” By a “clear evil” I mean the sort of thing that all (reasonable?) individuals could identify as an evil: slavery, genocide, etc. (Though then, of course, we have the problem of sorting out the reasonable from the unreasonable; see here Estlund’s Democratic Authority). Some evils are not clear in this sense: reasonable individuals (in the Rawlsian sense of the term) might disagree about their importance, or their identification as an evil, given their (differing) beliefs about justice and the good.

A more problematic, but more substantive sense of “tolerable,” identifies tolerable outcomes with those outcomes that are above some threshold of badness on some substantive scale. Here the idea is not that some evils are necessarily clear in the sense discussed above, but that the determination of which evils are tolerable and which are not is an “easier” problem than the determination of which goods make a society optimal or fair or just, for example. Even if reasonable people disagree about whether, for example, persistent poverty is a tolerable evil, the conservative can still argue that determining whether persistent poverty is a tolerable evil is an “easy” problem relative to, for example, determining whether an egalitarian society is justified. (Perhaps the majority of people believe that poverty is a tolerable evil, while slavery is not; if we assume that the majority of people have some warrant for these beliefs, then the belief that persistent poverty is a tolerable evil might be epistemically justified, even if some reasonable individuals disagree). 

Taking some criterion of “tolerability” as given, a second problem emerges: institutions are associated with outcomes over time. Should a conservative discard any institution that is associated with even a single intolerable outcome? Or should the conservative somehow “average” these outcomes over time, or “discount” past outcomes at a specific rate?

For an example, consider the basic institutions of liberal democracy. If we look, say, at the institutions of the Netherlands or Sweden since 1960, we could easily agree that these institutions have been associated with tolerable outcomes since then, in the sense that they do not seem to have been associated with clear evils (or to have produced them, though by assumption we cannot tell whether the outcomes associated with these institutions were actually produced by them).

But now consider the entire history of relatively liberal institutions in the USA since the late 18th century. These institutions were not always associated with tolerable outcomes; they were in fact associated with slavery and ethnic cleansing, which count as clear evils if anything does, and with many other evils besides (aggressive war and colonialism among them). But they were also not the same institutions then as they are today; there has been a great deal of institutional change in the USA. Though the basic structure of the institutions, as specified in the US constitution, has not changed that much – e.g., we still have competitive elections, two legislative chambers with specific responsibilities, an executive, a relatively independent judiciary, a bill of rights, etc. – the actual workings of these institutions, the circumstances under which they operate, and the expectations that shape their use have changed quite a bit. Suffrage was extended to all adult males; then it was extended to women in the early 20th century. Slavery was abolished. The regulatory powers of the Federal government expanded. The country industrialized. And so on. Since (by assumption) we do not know which aspects of American institutions and circumstances produced clear evils and which did not, we cannot in general answer the question of whether liberal institutions in the USA have produced tolerable outcomes in all past circumstances; at best, we can say that American institutions that are in some ways similar to existing institutions were associated with intolerable (not ok) outcomes in the past.

What might a conservative say to this? One possibility would be for the conservative to have a particular “discount rate” for the past: the further back in the past an outcome associated with an institution lies, the less it should “count” towards an evaluation of whether the institution is to be preserved, on the assumption that the further back in time we go, the less we are talking about the same institutions. Early nineteenth century American institutions were only superficially similar to modern American institutions, on this view; and so the outcomes associated with them should be discounted when we consider whether or not American institutions should have “epistemic authority.”

The problem with this is that the smaller the discount rate, the more intolerable outcomes it will “catch,” so that the conservative is forced to discard almost all institutions. With a small discount rate, the conservative is forced to argue that American institutions should not, in general, be given the benefit of the doubt, since they (or similar enough institutions) have produced intolerable outcomes. But with a large discount rate, the conservative can be far less confident that the institutions in question will be associated with tolerable outcomes in the future, since he has less evidence to go on. So the conservative faces a sort of evidence/discount-rate tradeoff: the conservative position is most powerful the more evidence we have of the association of institutions with tolerable outcomes; but the more evidence we have of outcomes, the more likely it is that some of these will be intolerable, forcing the conservative to argue for changes.

(In more formal terms: consider the series of states of the world {X1, …, Xn}, associated with the series of institutions {I1, …, In}. For each Xi, we know whether it represents a tolerable or an intolerable outcome, and we know that it was associated with Ii, though we do not know whether Ii produced it. Suppose all intolerable outcomes are found in the past, i.e., in the subseries {X1, …, Xk}, where k is less than n. Suppose also that our confidence that institution In (today’s incarnation of the institution) is similar enough to institution Ii decreases with temporal distance according to some discount rate d. The larger d is, the shorter the series of states that can serve as evidence that In will be associated with tolerable outcomes in the future; but the smaller d is, the more likely it is that the evidential series of states will include some of the intolerable states in {X1, …, Xk}.)
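A toy simulation may make the tradeoff more concrete; the outcome series and the exponential discounting scheme below are my own illustrative assumptions, not anything from the paper:

```python
# Illustrative only: a hypothetical outcome series with the intolerable
# outcomes (0) early and the tolerable ones (1) later, discounted exponentially.
import math

history = [0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1]  # X1..Xn, oldest first

def evidence(d, history):
    """Discounted evidence about the institution's record.

    Xi gets weight exp(-d * (n - i)): the further back an outcome lies,
    the less confident we are that Ii was 'the same institution' as In.
    Returns (effective sample size, discounted share of tolerable outcomes).
    """
    n = len(history)
    weights = [math.exp(-d * (n - i)) for i in range(1, n + 1)]
    effective_n = sum(weights)
    tolerable = sum(w * x for w, x in zip(weights, history))
    return effective_n, tolerable / effective_n

for d in (0.0, 0.2, 1.0):
    size, share = evidence(d, history)
    print(f"d={d:.1f}: effective evidence={size:.1f}, share tolerable={share:.2f}")
# Small d: plenty of evidence, but it includes the intolerable past.
# Large d: a nearly spotless record, but hardly any evidence to go on.
```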

What do people think?

Tuesday, December 07, 2010

One hypothesis weakened

In an earlier post I wondered about the sensitivity of estimates of US foreign aid to the definition of foreign aid; if people included "military involvement" as foreign aid, then their estimates would be biased upwards. But apparently the good people at PIPA already thought of this in an earlier poll (Thanks Andrew, for doing what I was too lazy to do!):
Some have wondered whether the high estimate of foreign aid spending is due to Americans incorrectly including in their estimates the high costs of defending other countries militarily. To determine if this was the case, in June 1996 PIPA presented the following question: US foreign aid includes things like humanitarian assistance, aid to Israel and Egypt, and economic development aid. It does not include the cost of defending other countries militarily, which is paid for through the defense budget. Just based on what you know, please tell me your hunch about what percentage of the federal budget goes to foreign aid. Despite this clarification, the median estimate was 20% and the mean 23%.
Europeans, however, do appear to produce less biased estimates of foreign aid than Americans:
When Europeans are asked how much the government spends on overseas aid from the national budget, approximately one third of respondents do not know. Another third will choose between 1-5 per cent and 5-10 per cent. The smallest proportion will mention less than one per cent. The consistent trend across OECD countries is to overestimate the aid effort.
The figures cited appear to be from this report, I think, though the question is not exactly comparable. Most citizens admit they don't know (57% or so). Here's a table:
The correct response is "around 100 Euros per European citizen." (Based on the figures in the table, however, it looks like most Europeans actually underestimate the amount of foreign aid the EU gives - which does not support the conclusion of the other report. I wonder what the results would be if the question were asked in these terms in the USA). Anyway, it seems like the evidence is inconsistent with the hypothesis that high foreign aid estimates are driven by the inclusion of military spending in the results, though the fact that European populations do produce lower estimates of aid spending (even though the questions are not exactly comparable) does suggest that perhaps military spending plays a small role.

Another option: perhaps this is driven in part by national status? "High status" (powerful) countries will tend to have a self-image that includes lots of aid to others. But disaggregated figures for all the EU countries do not appear to be easily available to test this sort of thing (e.g., maybe France, Britain, and Germany produce more incorrect estimates than small, peripheral countries like Latvia and the Czech Republic).

[Update 12/8/2010 - thanks again Andrew: A 1999 Eurobarometer report (p. 11) notes that "Approximately a quarter of Europeans thinks that their government actually contributes to development aid, but does not feel well enough informed to say how much. The largest proportions of votes go to the categories « Between 1 and 4% » (14%, -2 since 1996) and « Less than 1% » (10%, -2). Europeans are not far from reality when they make this choice." The question asked then was "We are not talking about humanitarian aid, that is assistance provided in emergency situations like wars, famine, etc, but about development aid. Do you think the (NATIONALITY) government helps the people in poor countries in Africa, South America, Asia, etc to develop? (IF YES) Roughly how much of its budget do you think the (NATIONALITY) government spends on this aid?" The correct answer is "between 1 and 4%". If I'm reading the accompanying table right, Denmark, Finland and Sweden give especially accurate answers - around 40% of people in Denmark give the correct answer.]

The Robustness or Resilience Argument in Practice: Noah Millman vs. Jim Manzi (Epistemic Arguments for Conservatism IV.55)

Noah Millman and Jim Manzi over at The American Scene (and Karl Smith at Modeled Behavior) have been debating the degree of deference we should give to economic science when considering what governments should do about a recession. Manzi emphasizes the large degree of uncertainty and difficulty attendant on any attempt to determine whether a particular policy actually works, and he is right about this: we do not know very well whether any policy intervention actually works (or worked), given the enormous number of potentially confounding variables. Lots of econometric ink is spilled trying to figure out this problem, but the problem is intrinsically hard, given the information available. By contrast, knowledge in physics or chemistry is far more certain, since it can be established by means of randomized experiments that are easily replicated. So, Manzi argues, we should give less deference to economists than we do to physicists when making decisions. Millman sensibly points out that the relevant analogy is not to physics or chemistry but to something like medicine. The knowledge produced by medical science is hard to apply in practice, and doctors base their treatment decisions on a combination of customary practice, experience, and some limited experimental and observational evidence. In particular cases, then, medical practice offers at best an informed guess about the causes of a disease and the best course of action. But Millman argues that this does not undermine the epistemic authority of medicine: in case of sickness, we should attend to the advice of doctors, and not to the advice of nonexperts.

I think Manzi’s argument would be more compelling if it were put as a robustness or resilience argument (discussed previously here and here). Consider first the case of medicine. If we get sick, we have three basic options for what to do: heed the advice of doctors, heed the advice of non-experts, and do nothing. It seems clear that heeding the advice of non-experts should (normally) be inferior to heeding the advice of doctors. But is heeding the advice of doctors always epistemically preferable to doing nothing? (Or, more realistically, to discounting the advice of doctors based on one’s own experience and information about one’s body). The answer to this question depends on our estimation of the potential costs of medical error versus doing nothing. Because medical knowledge is hard, doctors may sometimes come up (by mistake) with treatments that are actively harmful; in the 18th century, for example, people used “bleeding” as a treatment for various diseases, which may have been appropriate for some things (apparently bleeding with leeches is used effectively for some problems), but probably served to weaken most sick people further. At any rate, we may not know whether a treatment works or not any better than the doctor; all we know is that people treated by doctors sometimes die. If our estimate of medical knowledge is sufficiently low (e.g., if we think that in some area of medical practice medical knowledge is severely limited), our estimate of the potential costs of medical error sufficiently high (we could die), and our experience of what happens when we do nothing sufficiently typical (most illness goes away on its own, after all: the human immune system is a fabulously powerful thing, perfected to a high degree by millions of years of evolution!), it may well be the case that we are better off discounting medical advice for the sake of doing nothing. Of course, atypical circumstances may result in us dying from lack of treatment; that is one of the perversities to which this sort of argument may give rise. But given our epistemic limitations (and the epistemic limitations of medicine), there may be circumstances where “doing something” is equivalent to doing something randomly (because the limitations on our medical knowledge are so severe), and so we may be (prospectively) better off doing nothing (i.e., tolerating some bad outcomes that we hope are temporary, since our bodies have proven to be resilient in the past).

Consider now the case of a government that is trying to decide on what to do with respect to a moderately severe recession. Here the government can do nothing (or rather, rely on common sense, tradition, custom and the like: i.e., do what non-experts would do), heed the advice of professional economists (who disagree about the optimal policy), or heed the advice of some selected non-economists (or the advice of some mixture of economists and non-economists). When is “heeding the advice of economists” better than “doing nothing,” given our epistemic limitations? And when is “heeding the advice of non-economists” better than “heeding the advice of economists”?

We know that the current architecture of the economic system produces recessions with some frequency, some of which seem amenable to treatment via monetary policy (whenever certain interest rates are not too close to zero), some of which appear to be less so (these are atypical), but in general produces long-run outcomes that seem tolerable (not fair, or right, or just: merely tolerable) for the majority of people (there are possible distributional concerns that I am ignoring: maybe the outcomes are not tolerable for some people). The system is robust for some threshold of outcomes and some unknown range of circumstances: it tends to be associated with increasing wealth over the long run, though it is also associated with certain bad outcomes, and we do not know if it is indefinitely sustainable into the future (due to environmental and other concerns). We also know that there is some disagreement among economists about what is the optimal policy in an atypical recession (which suggests that there are limits to their knowledge, if nothing else). If we think that the limits on economic knowledge are especially severe for some area of policy (e.g., what to do in atypical recessions), that historical evidence suggests economists may sometimes prescribe measures associated with intolerable outcomes (e.g., massive unemployment, hyperinflation, etc.), and that most recessions eventually go away on their own, we may be justified in doing nothing on epistemic grounds. In other words, if we think that for some area of policy economists’ guesses about optimal policy are not likely to be better than random, and carry a significant risk of producing intolerable outcomes, then conservatism about economic policy is justified (doing what custom, tradition, etc. recommend, and heavily discounting the advice of economists).

But these are big ifs. Suppose that the epistemic limitations of economic science are such that most policy interventions recommended by professional economists have a net effect of zero in the long run; that is, economists recommend things more or less randomly, some good, and some bad, but in general tend not to recommend things that are very bad for an economy (or very good for it). (Historical evidence may support this; “Gononomics” – the sort of hyperinflationary catastrophe presided over by Zimbabwe’s Gideon Gono – is something of an achievement, not necessarily something common). In that case, we are probably better off heeding the advice of economists (and gaining the experience of the results) than doing nothing (and not gaining this experience); there may not be exceedingly large costs from heeding economic advice, but there may not be very large benefits either, and the result will still be “tolerable.” (At the limit, this sort of argument suggests that we ought to be indifferent about almost any policy intervention, so long as we have reasonable expectations that the outcomes will still be tolerable). Moreover, distributional concerns may dominate in these circumstances; doing nothing has a distributional cost that is passed to some particular group of people (e.g., the unemployed), so we may have reason to be concerned more about distribution than about long-run economic performance. And much depends on our estimates of the epistemic limitations of economic science: sure, economics is not like physics, but is it more like 20th century medicine, or more like 17th century medicine? (And the answer to this question may be different for different areas – different for macroeconomics than for microeconomics, for example).

Monday, December 06, 2010

Why are estimates of US foreign aid so biased?

A number of people have pointed to the latest reiteration of the fact that Americans do not appear to know what percentage of the budget goes to foreign aid. The median guess is 25% of the total budget, which is far higher than the actual 0.6%. Moreover, as far as I know, for as long as this question has been asked (since 1995), Americans have always hugely overestimated the percentage of the budget that goes to foreign aid; according to PIPA, the median guess has been about 20%. More educated people guess a bit lower, and less educated people a bit higher, but they mostly err on the high side. But why? As I mentioned in an earlier post, if people estimate such quantities on the basis of unbiased signals, they should converge on the true answer. So what is the source of this bias?

Eric Crampton suggests that voters count a lot of military spending as "foreign aid." This strikes me as plausible. Voters do not have in mind the same technical definition of "foreign aid" that the budget wonks use; they mostly see a large degree of involvement by the US in various countries, some of it justified on "nation building" grounds, which they can easily classify as "foreign aid/involvement." (These are the "signals" that they use to estimate the total amount of aid). And indeed the military accounted for about 23% of federal spending in FY2009 (a bit less this year), depending on how you count, which is close enough to the public guess for "foreign aid."

How would we know if this is what is going on? I wonder if answers to the question fluctuate in ways that are more or less correlated with the foreign wars of the US. Are answers to the question lower in times of peace? (I am too lazy to download the data and crunch it myself. But perhaps some enterprising soul could do it.) Also, has this question been asked in other countries, and does the magnitude of the bias remain constant? Or are the publics of countries with fewer foreign entanglements in war more likely to offer lower guesses of the amount of foreign aid spent? (If anybody kindly points me to easily downloadable data on this, I will make some graphs). I would also like to see a poll that asks this question but primes respondents by explicitly indicating that they are not to count military spending as foreign aid. (E.g., "Just based on what you know, please tell me your hunch about what percentage of the federal budget goes to foreign aid, not counting money spent by the military.") This may well produce a biased estimate, but would it be as biased as the current one? Has some enterprising public opinion researcher asked this question or something similar before?

And I would like to see the question asked in terms of the absolute number of dollars spent. (E.g., "Just based on what you know, please tell me your hunch about how many billions of dollars the Federal government spends on foreign aid, [not counting money spent by the military]."). Would the estimates be similarly biased upwards? I have a hunch that they might even be biased downwards, and also suspect that asking the question in terms of percentages limits guesses to a degree of coarseness that produces biased estimates. (Foreign aid is 0.6-2.6% of the budget, depending on how you calculate it. Assume people guess the true number based on relatively unbiased signals from the news, including perhaps signals about foreign military involvement, but their guesses are made in 1% increments. Since 0% is an implausible guess, the smallest guess would be 1%, which would inevitably bias the collective estimate upwards, though not necessarily nearly as much as the current estimate. Is this idea too harebrained?)
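The conjecture is easy to check with a toy simulation (a sketch under invented assumptions: unbiased Gaussian signals around a true value of 0.6%, with answers forced into whole percents and a floor of 1%):

```python
import random
from statistics import median

random.seed(42)
TRUE_SHARE = 0.6  # foreign aid as a percent of the budget (low-end figure)

# Unbiased noisy signals around the true value:
signals = [random.gauss(TRUE_SHARE, 0.5) for _ in range(10_000)]
# But answers must come in 1% increments, and 0% feels implausible:
guesses = [max(1, round(s)) for s in signals]

print(round(median(signals), 2))  # ~0.6: raw signals aggregate to the truth
print(median(guesses))            # 1: coarse reporting alone biases it upward
```

Of course, flooring at 1% only gets the median up to 1%, nowhere near 20-25%, so coarseness could at most be a small part of the story.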

Another possibility is that answers to this question do not reflect factual beliefs, but rather what Julian Sanchez once called "symbolic beliefs." Here the idea would be that respondents interpret the question as a question about the evaluation of US commitments abroad. The high guesses merely mean "the US spends too much on foreign entanglements," and the 10% median answer to the question of how much the US should spend merely says something like "whatever it is, halve it." On this view, voters do not really believe that the US should spend 10% on foreign aid, only that it should spend less; educating them about the true amount that the US spends would have only a limited impact on their apparent misperceptions (though education might increase the amount that voters are willing to spend on foreign aid, maybe not to 10%, but perhaps to 3%?). There would be reason to suspect that this is the case if, as Robin Hanson notes, we never see politicians run on increasing foreign aid, even though they could conceivably explain to voters that the US actually spends very little on non-military foreign aid.

Could this sort of "symbolic" belief ever be consistently corrected? It would not do to simply tell the voters that the actual value of "foreign aid" is less than 1% of the budget; they might simply adjust their views to say that it should be less, or redefine "foreign aid" to include all sorts of things that the budget analyst would not include (like military spending). Even if the belief were truly a factual and not a symbolic belief, mere provision of information would not necessarily change it: these sorts of quantities are estimated on the basis of signals from the social world of the voters, not merely on the basis of remembered (or misremembered) facts. Since signals are constantly received but mere factual information is not, unless you change the bias in the signals, the public will continue to overestimate "foreign aid" (whatever they actually mean by this).

Other ideas?

Thursday, November 25, 2010

Epistemic Arguments for Conservatism IV.5: An Addendum on Resilience

Rereading the long post below, it occurred to me that I didn’t mention why the argument I describe there should be called a “resilience” argument. Here’s what I had in mind. Institutions that have lasted for a long time have presumably endured in diverse circumstances while still producing tolerable outcomes, so we may think that there is a reasonable probability that they will still do ok in many unknown future circumstances: their endurance can be taken as evidence of resilience. If the potential costs of error in trying to find the optimal set of institutions are very high (e.g., getting a really bad political system, like the mixture of feudalism and Stalinism they have in the DPRK), and the “optimal” set of institutions for a given set of circumstances is very hard to find (if, for example, nobody knows with any certainty what the optimal political system would be for that set of circumstances, and the system would have to be changed anyway as they change), then it would make sense to stick with institutions that are correlated with ok outcomes over long periods of time and tolerate their occasional inefficiencies and annoyances. Resilient institutions are better than optimal institutions, given our epistemic limitations.

The argument also seems to imply that we ought to be indifferent about different sets of “ok” institutions. For example, there are a variety of democratic institutions in use today: some countries have parliamentary forms of government, some presidential; some have bicameral legislatures, others unicameral; some have FPP electoral systems, others use MMP; some countries use rules mandating “constructive” no confidence votes, others use other rules. But though we have some (statistically not especially good) evidence that some of these combinations work better than others (in some sets of circumstances: e.g., unitary parliamentary systems with list PR seem to produce better long-run outcomes than federal presidential government with nonproportional systems, at least on average, though I would not put too much stress on this finding), for the most part they all work ok, and we cannot tell with reasonable certainty whether some particular combination would be much better for us given foreseeable (and unforeseeable) circumstances. Perhaps switching back to FPP in New Zealand, for example, would produce better economic performance or induce better protection of civil liberties, but the best estimate of the effect of switching to FPP (or retaining MMP) on long run economic performance or the average level of protection of civil liberties is basically zero. (We might have reason to retain MMP or switch to FPP, but these will probably have more to do with normative concerns about representation and ideas about how easy it is for citizens to punish a government they dislike for whatever reason than with any special ability of MMP to deliver better economic performance). So we should not be bothered overmuch about these details of institutional design; given our epistemic limitations, on this view, it is unlikely that we would achieve even marginal improvements in our institutions that are sustained over the long run.

This does assume that gradual tinkering cannot at least serve to mitigate the effects of a changing “fitness landscape” (to use the terminology of the previous post), a controversial assumption. (It might be better to constantly tinker with our institutions than to let them just be, even if the tinkering is unlikely to lead to sustained improvements: we are just trying to stay on a local peak of the fitness landscape). And it also assumes that this landscape is very rugged for all the heuristics available: either minor changes just take you to another set of "ok" institutions (another variety of democracy, with some other combination of electoral system, relationships between executive and legislative powers, veto points, etc., and producing basically the same average long-run benefits), or they mostly throw you down a deep chasm if you try something new and radical (you get communist feudalism, or some variety of kleptocracy, and so on). I'm not sure this assumption makes sense for most problem domains, however: perhaps gradual tinkering in some cases does lead to better long-run outcomes, pace my previous argument against gradualism. But I have to think some more about this problem.

Wednesday, November 24, 2010

Epistemic Arguments for Conservatism IV: The Resilience Argument and the “Not Dead Yet” Criterion

(Fourth in the series promised here. Usual disclaimers apply: this is work in progress and so it is still a bit muddled, though comments are welcome).

One of the more promising epistemic arguments for conservatism is the argument from resilience. The general idea is that we owe deference to certain institutions (and so should not change them) not because they are “optimal” for the circumstances in which we find ourselves, but because they have survived the test of time in a variety of circumstances without killing us or otherwise making us worse off than most relevant alternatives. This argument might be used, for example, to justify constitutional immobility in the USA: even if the US constitution is not optimal for every imaginable circumstance, it is tolerable in most (“we’re not dead yet”); after all, it has lasted more than 200 years with relatively minor changes to its basic structure (save for the treatment of slavery, of course; but let us focus only on the basic structure of constitutional government); and if we have no good reason to think that changes to the constitution would improve it (because the effects of any change are exceedingly difficult to predict, and would interact in very complicated ways with all sorts of other factors, a caveat that would not necessarily apply to the treatment of slavery in the original constitutional text, which we may take as an obvious wrong), and some reason to think that the costs of ill-advised changes would be large (“we could die,” or at the very least unchain a dynamic leading to tyranny, oppression, and economic collapse), we are better off not changing it at all and putting up with its occasional inefficiencies.

The oldest and in some ways the most powerful version of this argument can be found in Plato’s Statesman (from around 292b to 302b). There the Eleatic Stranger (the main character in this dialogue) argues for a very strict form of legal conservatism, suggesting that we owe nearly absolute deference to current legal rules in the absence of genuine political experts who have the necessary knowledge to change them for good. This might seem extreme (indeed, it has seemed extreme to many interpreters), but given the assumptions the Stranger makes, the argument seems rather compelling. 

The basic logic is as follows. (For those interested in a “chapter and verse” interpretation of the relevant passages, see my paper here [ungated], especially the second half; it’s my attempt to make sense of Platonic conservatism.) In a changing environment, policy has to constantly adjust to circumstances; the optimal policy is extremely “nonconservative.” But perfect adjustment would require knowledge (both empirical and normative) that we don’t have. In Platonic terminology, you would need a genuine statesman with very good (if not perfect) knowledge of the forms of order (the just, the noble, and the good) and very good (if not perfect) knowledge of how specific interventions cause desired outcomes; in modern terminology, you would need much better social science than we actually have and a much higher degree of confidence in the rightness of our normative judgments than the “burdens of judgment” warrant. Worse, in general we cannot distinguish the people who have the necessary knowledge from those who do not; if unscrupulous, power-hungry, and ignorant sophists can always mimic the appearance of genuine statesmen, then the problem of selecting the right leaders (those who actually know how to adjust the policy and are properly motivated to do so) is as hard as the problem of determining the appropriate policy for changing circumstances.

If the first best option of policy perfectly tailored to circumstances is impossible, then (the Eleatic Stranger argues) the second best option is to find those policies that were correlated in the past with relative success according to some clear and widely shared criterion (the “not dead yet” or “could be worse” criterion), and stick to them. Note that the idea is not that these policies are right or optimal because they have survived the test of time (in contrast to some modern Hayek-type “selection” arguments for conservatism), or even that we know if or why they “work” (in the sense that they haven’t killed us yet). On the contrary, the Stranger actually assumes that these policies are wrong (inefficient, non-optimal, unjust); they are just not wrong enough to kill us yet (or, more precisely, not wrong enough for us to bear the risk of trying something different), even if we happen to live in the highly competitive environment of fourth century Greece. (The true right policy can only be known to the possessor of genuine knowledge; but ex hypothesi there is no such person, or s/he cannot be identified). And he also assumes that we do not know if the reason we are not dead yet has to do with these past policies; correlation is not causation, and the Stranger is very clear that by sticking to past policies we run large risks if circumstances change enough to render them dangerous. But the alternative, in his view, is not a world in which we can simply figure out which policies would work as circumstances change with some degree of confidence, but rather a world in which proposals are randomly made without any knowledge at all of whether they would work or not, and where the costs of getting the wrong policy are potentially very high (including potentially state death). If these conditions hold, sticking to policies that were correlated with relative “success” (by the “not dead yet” or “could be worse” criterion) is then rational. (There are some complications; the Stranger’s position is not as absolute as I’m making it seem here, as I describe in my paper, and Plato’s final position seems to be that you can update policy on the basis of observing sufficient correlation between policies and reasonable levels of flourishing and survival even in the absence of perfect knowledge).

Does this argument work? In order to understand the circumstances under which it might work, let us recast the argument in the terminology of a “fitness landscape.” Let us assume that, in some problem domain, we have some good reason to believe that the “fitness landscape” of potential solutions (potential policies) has many deep valleys (bad policies with large costs), some local but not very high “peaks” (ok policies) and only one very high peak (optimal policies). Assume further that this “fitness landscape” is changing, sometimes slowly, sometimes quickly; a reasonably ok policy in some set of circumstances may not remain reasonably ok in others. Under these circumstances, an agent stuck on one of the local peaks has very little reason to optimize and lots of reason to stick to its current policy if it has reason to think that its heuristics for traversing the fitness landscape are not powerful enough to consistently avoid the “deeps.” Conservatism is then rational, unless the agent's local “peak” starts to sink below some acceptable threshold of fitness (in which case it may be dead whether or not it sticks to the policy).
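To see why staying put can beat searching, consider a deliberately crude sketch (my own toy numbers, not anything in Plato): the current institution sits at a tolerable fitness level, one untried alternative is the utopian peak, and most of the rest are valleys and chasms.

```python
import random
random.seed(0)

current = 0.6  # fitness of the tolerable local peak we occupy

# 99 untried alternatives are valleys and chasms; one is the utopian peak:
alternatives = [random.uniform(-1.0, 0.2) for _ in range(99)] + [1.0]

expected_if_search = sum(alternatives) / len(alternatives)
print(f"stay put: {current:.2f}  search blindly: {expected_if_search:.2f}")
# The best alternative (1.0) beats the status quo, but without heuristics
# powerful enough to find it, the *expected* payoff of searching is far worse.
```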

For a concrete example, consider the space of possible political systems. The vast majority of imaginable political systems may be correlated with some very bad outcomes – oppression, economic collapse, slavery, loss of political independence, even physical death. A smaller set – including liberal democratic systems, but potentially other systems as well – is reasonably ok; these systems are correlated with a measure of stability and other good things, though (let us assume) we have no good way to know whether they actually cause those good outcomes or whether the correlation occurs by chance, and we have no reason to assume that these are the best possible outcomes that can be achieved, or that these good outcomes will be forever associated with these political systems. Finally, let us assume that there exists some utopian political system which would induce the best possible outcomes (however defined) for current circumstances (imagine, for the sake of argument, that this is some form of communism that had solved the calculation problem and the democracy problem plaguing “real existing communism”), but that we do not have enough knowledge (neither our social science nor our theory of justice is advanced enough) to describe it with any certainty. Does it make sense to try to optimize, i.e., to attempt to find and implement the best political system, in these circumstances? I would think not; at best, we may be justified in tinkering a bit around the edges. Both the uncertainty about which political system is best and the potential costs of error are enormous, and circumstances change too quickly for the “best” system to be easily identified via exhaustive search. Hence conservatism about the basic liberal democratic institutions might be justified. (Note that this does not necessarily apply to specific laws or policies: here the costs are not nearly as large, and the uncertainty about the optimal policy might be smaller, or our heuristics more powerful. So constitutional conservatism is compatible with non-conservatism about non-constitutional policies.)

On the other hand, it is important to stress that the “not dead yet” criterion is compatible with slow death or sudden destruction, and somehow seems highly unsatisfactory as a justification for conservatism in many cases. Consider a couple of real-life examples. First, take the case of a population on the island of St Kilda, off the coast of Scotland, described in Russell Hardin’s book How Do You Know? The Economics of Ordinary Knowledge. According to Hardin, this population collapsed over the course of the 19th century in great part due to a strange norm of infant care:

It is believed that a mixture of Fulmar oil and dung was spread on the wound where the umbilical cord was cut loose. The infants commonly died of tetanus soon afterwards. The first known tetanus death was in 1798, the last in 1891. Around the middle of the nineteenth century, eight of every ten children died of tetanus. By the time this perverse pragmatic norm was understood and antiseptic practices were introduced, the population could not recover from the loss of its children (p. 115, citing McLean 1980, pp. 121-124).

Though this norm was bound to decimate the population eventually, it worked its malign power over the course of a whole century, slowly enough that it may have been hard to connect the norm with the results. And, perversely, it seems that the conservatism of the St Kildans was perfectly rational by the argument above: a population with that kind of infant mortality rate is probably well advised not to try anything that might push it over the edge even quicker, especially as the islanders had no rational basis for thinking that the Fulmar oil mixed with dung was the root cause of their troubles (rather than, for example, the judgment of God or something of the sort).

Or consider the example of the Norse settlers of Greenland described in Jared Diamond’s Collapse. Living in a tough place to begin with, they were reluctant to change their diet or pastoral practices as the climate turned colder and their livelihood turned ever more precarious, despite having some awareness of alternative practices that could have helped them (the fishing practices of the Inuit native peoples, for example). So they eventually starved and died out. Yet their conservatism was not irrational: given their tough ecological circumstances, changes in subsistence routines were as likely to have proved fatal to them as not, and they could have little certainty that alternative practices would work for them. (Though it is worth noting that part of the problem here was less epistemic than cultural: the Greenland Norse probably defined themselves against the Inuit, and hence could not easily learn from them).

In sum, the resilience argument for conservatism seems most likely to “work” when we are very uncertain about which policies would constitute an improvement on our current circumstances; the potential costs of error are large (we have reason to think that the distribution of risk is “fat tailed” on the “bad” side, to use the economic jargon); and current policies have survived previous changes in circumstances well enough (for appropriate values of “enough”). This does not ensure that such policies are “optimal”; only that they are correlated with not being dead yet (even if we cannot be sure that they caused our survival). And in some circumstances, that seems like a remarkable achievement. 

Thursday, November 18, 2010

Why do people underestimate income and wealth inequality?

There was a recent paper in the news by Michael Norton and Dan Ariely arguing that Americans substantially underestimate the degree of income and wealth inequality in the USA. Other papers have found similar results. But why? Crowds do quite well at estimating all sorts of other quantities, but they fail dramatically on this problem, as Timothy Noah notes in a Slate piece on the Norton and Ariely paper here. More technically, we might expect from models of information aggregation that if the signals people get about the true distribution of income are unbiased, the errors should cancel out; yet here they clearly do not. So what is the source of the bias?

Two ideas. First, maybe people estimate the distribution of income and wealth based on signals from their friends and neighbors, and they mostly associate with people like themselves in terms of income. Since most people also tend to place themselves somewhere in the middle of the distribution (but why? national ideology?), the estimated distribution will be more egalitarian than the true distribution. If someone were to go around publicizing information about the true distribution of income, then these estimates might shift a bit, but probably not reliably; people receive signals from their friends and neighbors about the distribution of income all the time, whereas few read or care about econometric estimates of income distribution.
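A quick simulation suggests that homophily alone could generate a large bias (a sketch under made-up assumptions: lognormal incomes, and "friends" drawn only from people of nearby income rank):

```python
import random
random.seed(1)

# A right-skewed "true" income distribution, sorted by income rank:
population = sorted(random.lognormvariate(10, 1) for _ in range(10_000))

def gini(xs):
    """Standard Gini coefficient of a list of incomes."""
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

def perceived_gini(rank, k=30, width=500):
    """Inequality as inferred from k 'friends' within `width` ranks of oneself."""
    lo, hi = max(0, rank - width), min(len(population), rank + width)
    return gini(random.sample(population[lo:hi], k))

samples = [perceived_gini(random.randrange(len(population))) for _ in range(200)]
print(f"true Gini: {gini(population):.2f}")
print(f"median perceived Gini: {sorted(samples)[len(samples) // 2]:.2f}")
# Because friends resemble oneself, each person sees a compressed slice of
# the distribution, and the inferred inequality is far below the true level.
```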

But perhaps a second possibility (not necessarily incompatible with the first) is that people estimate the distribution of income and wealth from signals about consumption (whether or not these are their friends); if consumption inequality is lower than income or wealth inequality (as some people suggest it is), then estimates of income and wealth inequality will also be biased downwards. Again, providing information about the true distribution of income to people is also unlikely to change these perceptions reliably, but changes in consumption patterns might (e.g., if the rich engage in more conspicuous consumption).

Do either of these accounts sound like plausible explanations? Other ideas?

Bonus query: if Wilkinson and Pickett are right that income inequality causes social and health problems via status competition over consumption, then the fact that people are systematically deluded about the true extent of inequality might be a sort of silver lining; greater awareness of inequality might induce even more social and health problems, though it might also induce more redistributive policies than currently prevail (but I wonder: beliefs about the proper degree of inequality might also adjust downwards with more accurate information, depending on how strong our natural inequality aversion actually is).

Wednesday, November 17, 2010

An anarchist sensibility

Justin Smith has recently written a very interesting series of posts on anarchism as a certain kind of political and moral sensibility (rather than as a political programme). From the latest:
The anarchist prefers to think about the human species as having got by for the vastly greater part of its existence without states and armies (and airports, etc.), and insists on asking, based on the perspective of the longue durée, whether so many things that are taken as inevitable in our age are in fact so. I grew up assuming cars were inevitable; now they strike me as relics from a swiftly waning era. I don't see why at least some of us should not be trying to imagine how we might go about securing a similar fate for armies, police, and prisons. It bears pointing out that whether you believe these institutions are inevitable or not, it is undeniable that they are capable of radical transformation. So if you tell me that it is impossible to imagine a world without prisons, it seems to me a reasonable challenge to your claim to note that the very denotation of the term you are using has shifted drastically, not just over the centuries, but even over the past few decades. The fact that this has been a shift for the worse, from the perspective of any lover of peace and freedom, does not diminish the strength of the challenge.

[...]

Anarchism, then, as I see it, is a certain perspective on the affairs of men. It is realistic and naturalistic, in that it takes human beings as first and foremost a kind of primate, which only in certain circumstances comes to saddle itself with police and armies and so on. It asks whether and how human beings might thrive in the absence of these, and perhaps also hopes that they might someday (again) thrive without them, even if much of what we now value would have to be relinquished, and even as we soberly acknowledge that human pre-history was no idyll either.
I find this a very congenial perspective, not least perhaps because I am not naturally a highly political person and tend to the abstract and theoretical rather than the practical and concrete, despite having ended up teaching political science; my interests when I started university lay in pure mathematics, but turned to political theory by way of Heidegger. (Talk about corrupting the young. Heidegger books should come with a philosophical health warning, like cigarettes). The programmatic aspects of politics (the "what is to be done?" of everyday political life), while obviously important and worth thinking about seriously, just do not hold my interest that much. And some of my recent reading - James C. Scott and Christopher Boehm first and foremost, but also things like Adam Przeworski's wonderful book on the limits of self-government, about which I keep meaning to blog - has tended to reinforce my sense that our thinking about politics is too tied to a particular vision of a world of (well-ordered) states that seems, in its way, as utopian as the anarchist vision of a world without states. And on alternate days I think that if I am to be an idle utopian (which I am, with the emphasis on idle), I kind of prefer the vision without states.

Sunday, November 14, 2010

Trends in income inequality

Preparing for the "policy forum" about Wilkinson and Pickett's "The Spirit Level" I mentioned here, I found Deininger and Squire's 1996 dataset on income inequality, which presents historical estimates of income inequality (in some cases going as far back as 1890) for a large number of countries. They simply looked up every study that tried to measure income inequality in particular countries and put it into their dataset, with some notes regarding the quality of the underlying data and the sources; not every study is of very high quality, but there is sometimes more than one study for a given year and country, and the resulting data probably gives you a good picture of overall trends. So I thought of looking at the trends in inequality in the countries Wilkinson and Pickett look at, and comparing them to trends in inequality in the communist countries for the period 1960-1993 (where Deininger and Squire have data).

Here's the result:


"Rich countries" means rich today, and includes I think all of the countries that Wilkinson and Pickett look at, plus Hong Kong and Taiwan: Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Greece, Hong Kong, Ireland, Israel, Italy, Japan, Netherlands, New Zealand, Norway, Portugal, Singapore, Spain, Sweden, Switzerland, Taiwan, UK, USA. "Communist" includes Bulgaria, China, Cuba, Czechoslovakia, Hungary, Poland, Romania, Soviet Union, and Yugoslavia. For some of these countries there are only 3-4 estimates, for others there are long series for many years. I simply average all estimates for a country for a given year for rich and communist countries. (This is probably a terrible idea, but I'm just an amateur. I expect correction from irate statisticians, and shall be grateful for it.)

A scatterplot with the point estimates (as well as information about the original sources) is here:


Income inequality has probably gone up since then in many countries (not just rich ones); I haven't tried to merge Deininger and Squire's estimates with later data, but I suspect we would see an upward trend after 1990 or so. (There are some fuller estimates available here; I might try to make a graph with them later).

Interpretations? Perhaps the threat of communism made rich countries engage in more redistribution than otherwise? (Following something like Acemoglu and Robinson's argument: the rich allowed more redistribution in the period 1960-1990 because of the potential threat of communist revolution). Or perhaps there was some feature of the world economy that tended to reduce inequality in advanced capitalist economies, but is now tending to increase it? (Something about finance capital, perhaps?) Pointers?

Thursday, November 11, 2010

The Potato, Food of Anarchists

A fascinating bit from The Art of Not Being Governed that I never got around to blogging when I first read it:
In general, roots and tubers such as yams, sweet potatoes, potatoes, and cassava/manioc/yucca are nearly appropriation-proof. After they ripen, they can be left in the ground for up to two years and dug up piecemeal as needed. There is thus no granary to plunder. If the army or the taxmen wants your potatoes, for example, they will have to dig them up one by one. Plagued by crop failures and confiscatory procurement prices for the cultivars recommended by the Burmese military government in the 1980s, many peasants secretly planted sweet potatoes, a crop specifically prohibited. They shifted to sweet potatoes because the crop was easier to conceal and nearly impossible to appropriate. The Irish in the early nineteenth century grew potatoes not only because they provided many calories from the small plots to which farmers were confined but also because they could not be confiscated or burned and, because they were grown in small mounds, an [English!] horseman risked breaking his mount’s leg galloping through the field. Alas for the Irish, they had only a minuscule selection of the genetic diversity of new world potatoes and had come to rely almost exclusively on potatoes and milk for subsistence.

A reliance on root crops, and in particular the potato, can insulate states as well as stateless peoples against the predations of war and appropriation. William McNeill credits the early-eighteenth-century rise of Prussia to the potato. Enemy armies might seize or destroy grain fields, livestock, and aboveground fodder crops, but they were powerless against the lowly potato, a cultivar which Frederick William and Frederick II after him had vigorously promoted. It was the potato that gave Prussia its unique invulnerability to foreign invasion. While a grain-growing population whose granaries and crops were confiscated or destroyed had no choice but to scatter or starve, a tuber-growing peasantry could move back immediately after the military danger had passed and dig up their staple, one meal at a time (pp. 195-196).

Planting potatoes is, for Scott, part of an arsenal of agricultural techniques certain peoples have used for “repelling” the state: planting a large variety of cultivars (which makes the output of agriculturists less “legible” to the state), and cultivating crops that will grow on marginal land and at high altitudes (like maize), that require little attention and/or mature quickly, that blend into the surrounding vegetation, and that are easily dispersed. "Real-existing" anarchists (at least the kind that decide to retain some form of agriculture) have been potato eaters, apparently.

Clearly planting potatoes does not work on its own to repel the state, however. Prussian peasants were dependent on potatoes, but they certainly did not escape the state (though did they escape it more than similarly situated peasants? Or did social structures in Prussia produce peasant subordination through other mechanisms, not necessarily state violence? Perhaps the land was too flat?). And Scott does not mention this, but the staple crop of the Inca empire was also the potato (the Incas also grew other crops, like maize, that are state-repelling in Scott’s view, and they were situated in the highlands rather than the lowlands; the Inca empire seems to be a big counterexample to Scott’s general argument). So this sort of claim calls out for testing and further investigation: are peoples with the sort of agriculture Scott describes less likely to have had states (at least in the past) than peoples without it, beyond Southeast Asia? Why did the Incas manage to create a state in ecological conditions that seem, at least on Scott's view, very unfavourable to it? I suppose it could be that there was less “stateness” in Inca lands than we think, but still, a bit puzzling.

Epistemic Arguments for Conservatism III: Computational Arguments

(Another one in the series promised here; I’m writing a paper. This is still somewhat muddled, so read at your own risk, though comments are greatly appreciated if you find this of interest.)

Many problems of social life have “solutions” that can be correct or incorrect. Determining whether someone is guilty or innocent of a violation of legal rules; allocating goods and services to their best uses; policing a neighbourhood so that potential violators of community norms are deterred; all of these things can be characterized as problems with solutions that can be said to be at least partially correct or incorrect, assuming there is broad agreement on the values a solution should promote. Different institutional solutions to these problems can thus be evaluated with respect to their epistemic power: the degree to which, on average, a given institution is able to reach the “correct answer” to the problem in question. A “computational” argument for a particular institutional solution to a problem is simply an argument that, given the incentives the institution provides for gathering or revealing information or knowledge important to the problem, the average capacities of the relevant decisionmakers to process this information or use this knowledge, and the rules or mechanisms it uses for aggregating their judgments, the institution in question has greater epistemic power than the alternatives (relative, of course, to a particular problem domain).

Consider, for example, the institution of trial by jury. Though jury trials have more than one justification, their epistemic power to determine violations of legal norms is often invoked as an argument for their use vis-à-vis, say, trials by ordeal or combat. A trial by jury is expected to be a better “truth tracker” than trials by ordeal or trials by combat in the sense that it is expected to better discriminate between violators and non-violators of legal norms (more precisely, it is expected to better identify non-violators, even at the expense of failing to identify violators) because, among other things, it may provide good incentives for the revelation of both inculpatory and exculpatory evidence via adversarial proceedings and may allow the judgments of individual jurors to be combined in ways that amplify their epistemic power (see, e.g., Condorcet’s jury theorem). By contrast, trials by ordeal or combat are supposed to be lousy discriminators between violators and non-violators of legal norms (or lousy ways of identifying non-violators), because they provide bad incentives for the revelation of the relevant information (though see Peter Leeson’s work for some interesting ideas on how trials by ordeal might have exploited widespread beliefs in “god’s judgment” to discriminate accurately between the guilty and the innocent). To be sure, even if the computational argument for jury trials is correct, we may still not want to use them to determine every violation of legal norms: considerations of cost, fairness, lack of suitable jurors, or speed, may reasonably limit their use. But the epistemic power of jury trials would surely be one important consideration for using them rather than other mechanisms in trying to figure out whether a particular person has violated a legal norm.
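The jury-theorem point can be made concrete with a quick calculation (a toy illustration of my own, under the theorem's strong assumptions of independence and equal competence): if each juror is right with probability p > 1/2, the probability that a strict majority of the jury is right rises toward 1 as the jury grows.

    # Condorcet's jury theorem, numerically: probability that a strict
    # majority of n independent jurors, each correct with probability p,
    # reaches the correct verdict.
    from math import comb

    def majority_correct(n, p):
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 3, 11, 51, 201):
        print(n, round(majority_correct(n, 0.6), 4))
    # with p = 0.6: right 60% of the time for a single juror,
    # but over 99% of the time for a jury of 201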

Now, the idea that some institutions are better than others at “social cognition” or “social information processing” is not inherently conservative, as the example of jury trials indicates. “Computational” or “social cognition” arguments have been deployed in defence of a wide variety of institutions, from democracy to Wikipedia, and from markets to the common law, without necessarily bolstering a “conservative” position in politics, however conceived. (For a good discussion of the concepts of social cognition and social information processing, as well as a review of some of the research that attempts to untangle when and how social cognition is possible, see this short paper by Cosma Shalizi). But there is a set of arguments for “conservatism,” broadly understood, that argues for the epistemic power of some “default” solution to a social problem and against the epistemic power of an “explicit” intervention on computational grounds. The same contrast is sometimes expressed differently – e.g., in terms of decentralized vs. centralized institutions, “unplanned” vs. “planned” social interaction, or customary vs. explicit rules – but it always indicates something like the idea that some institutional solutions to a problem need not be explicitly produced by a single, identifiable agent like a government. A computational argument for conservatism thus makes the (implicit or explicit) claim that we can “conserve” on reason by relying on the computational services of such patterns of interaction or institutions to determine the solution to a problem of social life rather than attempting to explicitly compute the solution ourselves.

This can get confusing, for it is not always clear what would count as a “default” solution to a social problem, and “restoring” (or even implementing) the default solution may entail far-reaching changes to a social system (amounting even to revolution in some cases). So bear with me while I engage in a classificatory exercise. Three options for defining the default seem possible: the pattern of interaction that has historically been the case (“custom” or “tradition”); the pattern of interaction that would have prevailed in the absence of explicit planning or design by one or more powerful actors (e.g., “the free market” as opposed to a system of economic allocation involving various degrees of centralized planning); and the pattern of interaction that “computes” the solution to the social problem implicitly rather than explicitly (compare, for example, a “cap and trade” market for carbon emissions with an administrative regulation setting a price for carbon emissions: both are designed, but the former computes the solution to the problem of the appropriate price for carbon implicitly rather than explicitly). We might call these options the “Burkean,” “Hayekian,” and (for lack of a better word) “Neoliberal” understandings of the relevant “default” pattern of interaction, which in turn define epistemic arguments for, respectively, the superiority of tradition over innovation, the superiority of spontaneous order over planned social orders, and the superiority of implicit systems of “parallel” social computation over explicit centralized systems of social computation. But what reasons do we have to think that any of these “default” patterns of interaction have greater epistemic power than the relevant alternatives, or rather, under what conditions are they computationally better than the relevant alternatives?

Let us start with the last (“Neoliberal”) position, since it seems to me the easiest to analyze and at any rate is the farthest from conservatism in the usual sense of the term (the “Burkean” position is the closest to conservatism, while the “Hayekian” sits more or less in the middle; I'm leaving the analysis of "Burkean" arguments to another post). Here the relevant comparison is between two designed institutional solutions to a problem, one that aims to determine the solution to a social problem by setting the rules of interaction and letting the solution emerge from the interaction itself, and another that aims to induce a set of actors to consciously and intentionally produce the solution to the problem. Thus, for example, a “cap and trade” market in carbon emissions aims ultimately to efficiently allocate resources in an economy on the assumption that the economy should emit less than X amount of carbon into the atmosphere, but it does so by setting a cap on the amount of carbon that may be produced by all actors in the market and letting actors trade with one another, not by asking a set of people to directly calculate what the best allocation of resources would be (or even by directly setting the price of carbon). We might compare this solution to a sort of parallel computation: given a target amount of carbon emissions, the relevant computation concerning the proper allocation of resources is to be carried out in a decentralized fashion by economic actors in possession of private and sometimes difficult to articulate knowledge about their needs, production processes, and the like, who communicate with one another the essential information necessary to coordinate their allocation plans via the messaging system of “prices.” 

This sort of pattern of interaction will be computationally superior to a centralized computation of the solution to the same problem whenever the relevant knowledge and information is dispersed, poorly articulated, time-sensitive, and expensive or otherwise difficult to gather centrally (perhaps because actors have incentives not to disclose it truthfully), yet its essential features are nevertheless communicable to other actors via decentralized and asynchronous message passing (like prices). (Hayek’s famous argument for markets and against central planning basically boils down to a similar claim.) The problem can thus be decomposed into separate tasks that individual actors can easily solve on their own while providing enough information to other actors within appropriate time frames so that an overall solution can emerge.
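To make the "parallel computation" metaphor concrete, here is a toy sketch (my own construction, not anything in Hayek): each firm knows only its private abatement-cost schedule and the posted permit price, and a simple price-adjustment loop, standing in for the market, finds the price at which permit demand just meets the cap. No agent ever reports its cost schedule to an "auctioneer"; it only reports its demand at the current price.

    # Toy cap-and-trade clearing by decentralized price adjustment
    # (tatonnement). All names and numbers are invented for illustration.

    def permit_demand(baseline, cost_slope, price):
        """A firm abates while its marginal abatement cost (cost_slope
        per ton abated) is below the permit price, and buys permits to
        cover the rest of its baseline emissions."""
        abatement = min(baseline, price / cost_slope)
        return baseline - abatement

    def clear_market(firms, cap, price=1.0, eta=0.01, tol=1e-6):
        """Raise the price when permits are over-demanded, lower it
        when under-demanded, until aggregate demand meets the cap."""
        for _ in range(1_000_000):
            demand = sum(permit_demand(b, c, price) for b, c in firms)
            excess = demand - cap
            if abs(excess) < tol:
                break
            price += eta * excess
        return price

    # (baseline emissions, marginal-cost slope) for three hypothetical firms
    firms = [(100.0, 0.5), (80.0, 1.0), (60.0, 2.0)]
    print(clear_market(firms, cap=150.0))  # converges to about 25.7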

But these conditions do not always hold. Consider, for example, the problem of designing an appropriate “cap and trade” market in the first place. Here the relevant knowledge is not dispersed, poorly articulated, and time-sensitive but is instead highly specialized and articulated (e.g., knowledge of “mechanism design” or “auction theory” in economics), is not as obviously time-sensitive, and cannot easily be divided. (Though the problem of discovering the truth about mechanism design or auctions might itself be best tackled in a decentralized manner.) We might perhaps learn here from computer science proper: some problems can be tackled by easily “parallelized” algorithms (algorithms that can be broken down into little tasks that run in a decentralized fashion on thousands of different computers), but some cannot (the best available algorithm needs to run on a single processor, or the problem can only be broken down into steps that must run sequentially, like some iterative algorithms for calculating pi); in fact there is an entire research programme in complexity theory (the study of the class NC and of P-completeness) devoted to figuring out which classes of problems can be efficiently parallelized and which apparently cannot. (And this seems to be a deep and difficult question.) Or we might speak here of “epistemic bottlenecks” that limit the degree to which a problem can be broken down into tasks that can be solved via a division of epistemic labor; the problem of designing an appropriate division of epistemic labor for a specific purpose might be one of these.
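A toy example of the contrast (mine, not from any of the literature mentioned): a big sum decomposes into chunks whose partial results are independent of one another, so each chunk could in principle run on its own processor; an iterated map does not decompose, because every step needs the previous step's output.

    # Decomposable: the chunk sums are independent, so they could be
    # farmed out to separate processors and combined at the end.
    def chunk_sum(lo, hi):
        return sum(range(lo, hi))

    def decomposed_sum(n, workers=4):
        step = n // workers
        bounds = [(i * step, (i + 1) * step) for i in range(workers - 1)]
        bounds.append(((workers - 1) * step, n))
        return sum(chunk_sum(lo, hi) for lo, hi in bounds)

    # Not decomposable: step t+1 needs step t's output, so no worker
    # can start on the middle of the orbit without computing the start.
    def logistic_orbit(x, steps, r=3.9):
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    assert decomposed_sum(10**6) == sum(range(10**6))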

The “computational” argument for implicit over explicit computation depends on identifying an epistemic bottleneck in the explicit mechanism of computation that is not present in the implicit one. But it does not depend on a contrast between designed and undesigned solutions to a problem: both a carbon market and an administrative regulation are equally designed solutions to the same problem. In order to make the computational case for spontaneous order (as against “planned” order), one has to argue not only that there are epistemic bottlenecks in the explicit mechanism of computation, but that the problem of designing an order for computing the solution is itself subject to the epistemic bottlenecks that render explicit solutions infeasible; and here I am not sure that Hayek or anyone else has given a convincing argument yet. (One could, of course, give “selection” arguments for preferring spontaneous to designed orders; but that is a subject for another post.)

Saturday, November 06, 2010

The Ancient War between States and Non-state Peoples, Modern Botswana Edition

The NY Times has a really nice piece on the conflict between the San Bushmen and the Botswanan state that illustrates pretty well some of the things Scott writes about in The Art of Not Being Governed. The Bushmen are a group of foraging peoples living in dry areas in and around the Kalahari desert. Much of their territory seems to have been pretty marginal for agriculture, and hence effectively stateless before the 20th century. For decades they had moved back and forth between "civilization" and traditional hunting and gathering, depending on trade opportunities and the like, but in the 1960s the state decided they were "poor" and needed to be helped; and they could not be helped unless they were settled and legible. At first the state tried carrots, drilling boreholes that freed them from the constant search for water in the desert; and many took the deal, taking up a more settled existence:
Botswana became independent in 1966, and the government’s eventual view was that the Bushmen were an impoverished minority living in rugged terrain that made them hard to help. Already, many were moving to Xade, a settlement within the reserve where a borehole had been drilled years before.
The Bushmen were pragmatists. Liberated from the strenuous pursuit of water, people began keeping goats and chickens while also scratching away at the sandy soil to grow gardens. The government provided a mobile health clinic, occasional food rations, a school.
Since the 1980s, however, the Botswanan state has tried harsher tactics in its quest to evict them from the areas they live in, which were designated a "game reserve" in 1961. Indeed, it used the fact that some Bushmen had voluntarily taken up agriculture against them:
Later on, these activities were commonly mentioned as reasons for removing the Bushmen. They “were abandoning their traditional hunter-gatherer lifestyle,” and even hunting with guns and horses, the government argued in a written explanation of its rationale.
So the state began to push harder to sedentarize the Bushmen, with predictable consequences:
Since the 1980s, Botswana, a landlocked nation of two million people, has both coaxed and hounded the Bushmen to leave the game reserve, intending to restrict the area to what its name implies, a wildlife refuge empty of human residents. Withholding water is one tactic, and in July a High Court ruled that the government had every right to deny use of that modern oasis, the borehole. An appeal was filed in September.
These days, only a few hundred Bushmen live within the reserve, and a few, like Mr. Taoxaga, still survive largely through their inherited knowledge, the hunters pursuing antelope and spring hares, the gatherers collecting tubers and wild melons, tapping into the water concealed in buried plants.
But most of the Bushmen have moved to dreary resettlement areas on the outskirts, where they wait in line for water, wait on benches at the clinic, wait around for something to do, wait for the taverns to open so they can douse their troubles with sorghum beer. Once among the most self-sufficient people on earth, many of them now live on the dole, waiting for handouts.
“If there was only some magic to free me into the past, that’s where I would go,” said Pihelo Phetlhadipuo, an elderly Bushman living in a resettlement area called Kaudwane. “I once was a free man, and now I am not.”
“I once was a free man, and now I am not.” Yet the story has another side. Just as Scott says, the culture of the "valleys" (here, the core areas of the Botswanan state, as opposed to the "bush") has some allure for at least some of the Bushmen:
Families have come apart, most often with grandparents or a father staying in the reserve and a mother and children living in a resettlement area, near a school and a reliable supply of water. Gana Taoxaga, the old man who was among the last holdouts, the one completing his two-day walk, has six children and seven grandchildren in Kaudwane. “I miss them and they miss me,” he said.

Mr. Taoxaga did not know his own age. His brown coat was missing half its fabric. His leather shoes had no laces. Beside him on the journey, a younger man, Matsipane Mosethlanyane, led some donkeys with empty water jugs strapped across their backs. He said he was proud to be a Bushman and, boasting of his resourcefulness, he described how he had sometimes squeezed the moisture from animal dung to slake his thirst. Animals eat the flowers off the small trees, he said. The moisture from the dung was nutritious.

“But I don’t want to drink the dirty water any more,” he said. “That’s why we are walking today. I am used now to the new water, the modern water.”
As they say, read the whole thing.

Friday, November 05, 2010

Idle Queries: Exit and Voice in Economic and Political Life

In his classic Exit, Voice, and Loyalty, Albert Hirschman suggested that “voice” and “exit” are the two basic responses to organizational problems. When people are dissatisfied with an organization, they can either express their dissatisfaction (voice) or try to leave (exit). Which course of action they take depends both on the relative costs of voice and exit (sometimes voice is punished, or exit is difficult) and on the strength of “loyalty.” (More loyal people may forgive faults in the organization more easily, though they may also prefer complaining to leaving.) But these responses are not independent of one another: if lots of people “exit” an organization, the efficacy of voice is typically reduced, partly because the possibilities for coordinating are also reduced (though under some conditions exit can serve as a “signal” that temporarily enhances “voice”: for a modern example taken from the dissolution of the GDR, see this earlier post). By contrast, a lack of exit options seems to boost voice; in more economic terminology, when the cost of exit relative to voice is low, exit will be the predominant response to dissatisfaction with an organization, and vice versa.
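Put very crudely (a toy formalization of my own, not Hirschman's, with made-up numbers): a member chooses whichever response is cheaper, with loyalty raising the effective cost of leaving.

    # Bare-bones reading of the cost comparison above: voice wins when
    # it is cheaper than exit once loyalty is added to the exit side.
    def response(cost_voice, cost_exit, loyalty=0.0):
        return "voice" if cost_voice < cost_exit + loyalty else "exit"

    print(response(cost_voice=2.0, cost_exit=5.0))               # voice: exit is hard
    print(response(cost_voice=2.0, cost_exit=1.0))               # exit: leaving is cheap
    print(response(cost_voice=2.0, cost_exit=1.0, loyalty=3.0))  # loyalty tips it back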

Now, democracy can be roughly conceptualized as a form of voice in organizations. Democracy is, to be sure, more than voice; for one thing, democratic voice is always at the very least formally equal (one person one vote, for example), and those with voice in a democratic organization are supposed to include the vast majority of its members. But for most of the history of the state, political voice of any kind did not really exist (at least not much – there are always exceptions); the usual response to oppression appears to have been “exit,” as James C. Scott documents in his The Art of Not Being Governed. Yet this was only possible because the pre-modern state had a limited reach: one could always take to the hills if one did not like the current ruler.

First query. Could one then argue that modern political democracy was made possible by the greater difficulty of exit in the modern state system? There does seem to be a correlation between the development of the modern state system and the emergence of institutions of voice, though this correlation is typically explained in terms of the “taxation bargains” that monarchs had to strike with their subjects; but what if the key parameter here is the increasing cost of exit from the state system? (The increasing wealth of state spaces relative to nonstate spaces may also play a role here.) And could democracy become less common if exit from the state system became more easily available? (This could take many forms: the emergence of more “ungoverned spaces” like the hills of Yemen, or the success of projects like “seasteading”). Does anybody know of work in this vein?

Second query. I did some reading on the Yugoslav workers’ councils for the post below, and it struck me as odd that similar organizational forms are not more popular in market economies. (The councils appear to have been quite popular while they lasted, despite their limited autonomy). Sure, “voice” exists in firms as labor unions, “codetermination” arrangements, “company unions,” and other such things; and I’m sure there’s a ton of literature on this problem, but I was idly wondering if the structure of a competitive capitalist economy hinders the development of voice within organizations because it lowers the cost of exit for the worker. In a well-functioning market economy, the dissatisfied worker can often go to another job, so voice might seem less important (though perhaps where workers have scarce skills, the costs of both voice and exit are lowered; the total effect might be indeterminate). Conversely, should we expect that in economies where unemployment is high or in firms where workers do not have scarce skills, exit costs would be higher, thus boosting the prospects for voice? (But perhaps the weaker bargaining position of workers there would increase the costs of both exit and voice, so that the overall effect would depend). Any pointers here? 

Third query. Is there a "moral reason" for preferring voice to exit? That is, should one work for voice even where exit is easily available? Or are voice and exit perfect "moral substitutes"?