I am mulling over a post on the events in Tunisia, but I've been distracted. I may get around to it eventually, but right now I've been busy trying to make some animated maps for use in my "dictatorships and revolutions" class. Here's the first result:
I tried something like this a few months ago, but this is an improvement over the previous map. First, it uses the update to the Alvarez, Cheibub, Limongi, and Przeworski dataset of political regimes by Cheibub, Gandhi, and Vreeland (the "DD" dataset) that covers the entire period 1946-2008 (not just to 2002). And second, it uses a dataset of historical maps of state borders by Nils Weidmann, Doreen Kuse, and Kristian Skrede Gleditsch that allows you to visualize such things as the breakup of the Soviet Union or the reunification of Germany.
The DD definition of democracy is very minimalist: a country counts as a democracy if both the government and the legislature are elected in competitive elections. Thus, some countries that seem to have all sorts of political problems are classified as democracies. Moreover, by "competitive elections" CGV mean that a) the opposition can contest the election, and b) the government actually relinquishes power if the opposition wins. Since in some countries the current regime has never lost an election (e.g., Botswana), it is not always possible to unambiguously code the country as a democracy or a dictatorship given their coding rules. In such cases, they err on the side of classifying the country as a dictatorship (this is their "type II error" rule), which leads to some curious outcomes: for example, South Africa never turns into a democracy (look at the video at around the year 1994), and Botswana is always classified as a dictatorship. But they identify these cases with their "type II" variable, so it is possible to see which countries might be democracies but are classified as dictatorships: these are the "ambiguous" cases in the video.
One thing that comes out very clearly in the animation is that regime types tend to cluster temporally and spatially. There are waves of civilian dictatorships and of military dictatorships (see, for example, Africa in the late 1970s and 1980s), as well as of democracies. Most communist regimes clustered around the Soviet Union, and most absolute monarchies are in the Middle East/North Africa. There seem to be strong "regional" influences on regime change, which suggests that the events in Tunisia are unlikely to remain isolated.
Some people criticize the DD data for conceptualizing the democracy/dictatorship distinction as a categorical rather than a gradual one (most of my students, for example, really dislike this categorical distinction when I assign a reading from Gandhi in my class), and most political regime datasets (like Freedom House or Polity IV) instead have some kind of scale from most autocratic to most democratic. The choice is, to some extent, pragmatic, but I think there is something to the idea that regimes come in types, not just gradations of a single underlying dimension. So I like CGV's effort to identify different regime types, and I am largely in agreement with many of their criticisms of "graduated" indexes of democracy like Polity IV here. Nevertheless, I've also made a similar animation using Polity IV data:
There is less to note here, except the march of democracy. You miss some of the geographic and temporal patterns visible in the DD data.
I'm thinking of making an animated map that shows coups as they happen (using the Coups d'Etat dataset by Marshall and Marshall) now that I've mastered the process of making these maps (it took a while: R and ArcGIS are not the easiest pieces of software to use). Other ideas?
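For the curious, the core of the process in R looks roughly like the sketch below. It assumes the cshapes package (which distributes the Weidmann, Kuse, and Gleditsch historical borders; I assume an older, sp-based version here, where cshp() takes a date and the polygons carry a COWCODE attribute) and a hypothetical data frame dd of country-year regime codes that you would have to build from the DD dataset yourself:

```r
# Sketch: one PNG frame per year, countries colored by DD regime type.
# Assumes an sp-based version of cshapes and a data frame `dd` with
# columns cowcode, year, and regime (a factor of regime types).
library(cshapes)

regime.colors <- c(democracy = "blue", civilian = "yellow",
                   military = "olivedrab", monarchy = "red")  # hypothetical coding

for (yr in 1946:2008) {
  cmap <- cshp(date = as.Date(paste(yr, "12", "31", sep = "-")))  # borders as of Dec 31
  dd.yr <- dd[dd$year == yr, ]
  # match() keeps colors aligned with polygons (merge() can silently reorder them)
  cols <- regime.colors[as.character(dd.yr$regime[match(cmap$COWCODE, dd.yr$cowcode)])]
  cols[is.na(cols)] <- "gray"                                     # no data that year
  png(sprintf("frame%04d.png", yr), width = 800, height = 400)
  plot(cmap, col = cols)
  title(main = yr)
  dev.off()
}
# The frames can then be stitched into a video with an external tool such as ffmpeg.
```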
Update, 1/21/2011: I went ahead and did the animated map showing coups d'etat.
Monday, January 17, 2011
Is oil bad for democracy? (A footnote on Thad Dunning’s “Crude Democracy”)
The view that oil is bad for democracy and freedom has become conventional wisdom. (Any view espoused by Thomas Friedman is by definition conventional wisdom). On this view, the rents produced by oil (and, to a lesser extent, other minerals) tend both to provide authoritarian rulers with incentives to entrench themselves and to give them the capacity to do so; as a result, as Friedman puts it, “the price of oil and the pace of freedom tend to move in opposite directions.”
There is indeed some apparent association between high levels of resource wealth and authoritarianism, as documented in many studies (see, e.g., Michael Ross’ “Does Oil Hinder Democracy?” which has been cited more than 900 times). Qualitative studies of Middle Eastern politics (e.g., Kiren Aziz Chaudhry’s fantastic book “The Price of Wealth”) examine in detail the ways in which oil has served to buttress authoritarianism in some countries. And there are theoretical reasons to believe that democracy is unlikely to emerge or be stable when elites control highly immobile assets (like oil wells) from which they derive large rents (see, e.g., Carles Boix’s “Democracy and Redistribution”).
Yet there have always been apparent outliers: Venezuela, for example, seems to have sustained a relatively democratic regime even during the oil boom of the seventies, and Norwegian democracy does not seem to have been adversely affected by a huge influx of oil. Oil and natural resources seem to be bad for democracy in some regions (mostly the Middle East and Africa) but not in others (Latin America). And some recent studies claim that the statistical evidence actually does not favour the view that oil and natural resources are bad for democracy (see, for example, the work of Stephen Haber and Victor Menaldo, who go so far as to argue that there is a “resource blessing” rather than a curse).
Thad Dunning’s book, “Crude Democracy,” develops a neat argument that makes sense of the conflicting evidence. Resource rents, he concedes, tend to produce incentives for autocrats to hang on to power or for elites to struggle to control these rents. (Sudden oil booms, on this view, might increase the attractiveness of capturing the state). But these incentives are sometimes overridden or at least mitigated by the fact that resource rents also decrease incentives to redistribute wealth in the non-resource sectors of the economy. If the non-resource sectors of the economy are highly unequal (and relatively large), then resource rents are likely to decrease the redistributive costs of democracy for elites and the attractiveness of coups; this has been the case, according to Dunning, in much of Latin America. By contrast, if the non-resource sectors of the economy are very equal (and small relative to the resource sectors), then resource rents are likely to have an “authoritarian effect;” this has been the case in African countries like Equatorial Guinea or Middle Eastern countries like the UAE or Saudi Arabia. In other words, when oil (or other natural resources, like tin in Bolivia) is the only game in town for elites, then oil is bad for democracy; but if elites mostly derive their income from other sources, and the country is overall poor and unequal, then oil can actually mitigate redistributive pressures and make democracy more palatable to elites.
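The comparative static at the heart of the argument can be put in back-of-the-envelope terms. The toy R calculation below is my own illustration, not Dunning’s actual model (his is a proper game-theoretic setup); the numbers are made up and only meant to show how the two effects of rents pull in opposite directions:

```r
# Toy illustration (my numbers, not Dunning's model): democracy costs the
# elite whatever redistribution the median voter can extract from private
# (non-resource) income; resource rents fund redistribution without taxing
# elites, but also raise the prize from seizing the state by force.
elite.calculus <- function(inequality, rents, private.income = 100) {
  c(cost.of.democracy = max(inequality * private.income - rents, 0),
    prize.of.coup     = rents)
}
elite.calculus(inequality = 0.6, rents = 0)   # unequal, no oil: democracy is costly
elite.calculus(inequality = 0.6, rents = 50)  # unequal + oil: democracy becomes cheap
elite.calculus(inequality = 0.1, rents = 50)  # equal non-resource economy: the rents
                                              # themselves are the only game in town
```

Which effect dominates then depends, as Dunning argues, on the relative size and inequality of the non-resource economy.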
Dunning uses game-theoretical models, statistical analysis, and case studies to support this thesis. He has a detailed case study of Venezuela that is of particular interest to me. In Dunning’s view, the non-oil sectors of the Venezuelan economy have always been very unequal, which would suggest high levels of class conflict. And indeed we do observe lots of class conflict, but especially during those periods when oil revenues (or more precisely, the oil “take” of the Venezuelan state; not exactly the same thing) declined. Venezuelan democracy appeared most consolidated, and class conflict was lowest, when the price of oil was highest (i.e., during the oil boom of the 1970s). By contrast, the rise of Chavez (and more class conflict) coincided with a period when the oil take of the Venezuelan state had declined.
Dunning was writing in 2007, so he cannot address every development in Venezuelan politics since then, but he does argue that Chavez’ redistributive rhetoric has tended to “bite” more – there have been more expropriations, for example – when the price of oil declined, and to remain mere rhetoric when the price of oil was high. (There are some complications here, since the price of oil is not an exact measure of the oil-related resources available to the Venezuelan state, but you should read the book if you are interested in the complications). The upshot is that if the Venezuelan state’s take from oil is declining, perhaps because petroleum production is declining (as most analysts seem to agree; see the chart below) and oil prices decline or fail to rise sufficiently to compensate for the decline in production, we should expect to see more class conflict, and potentially more authoritarianism from the Chavez regime. So high oil prices (so long as there is enough oil production) moderate actual redistribution and authoritarian temptations (though not necessarily redistributive rhetoric), whereas low oil prices increase actual redistribution and authoritarian temptations (on both government and opposition sides: Dunning explains the coup attempt of 2002 in part as the result of higher redistributive pressures on the elite following a fall in oil revenues).
Qualitatively speaking, this seems more or less right to me, though I am no more than an amateur Venezuela-watcher (I play one in my class, though). But I would complicate the analysis a bit. For one thing, the price of oil has been on an upward trend recently (this is the price of West Texas Intermediate, which tends to be a bit more expensive than heavy Venezuelan crude, but it will do as a proxy), yet actual redistribution and the authoritarianism of the Venezuelan government both seem to have increased recently, at least in some respects. This could be because expectations of redistribution have increased (perhaps because of Chavez’ rhetoric), so that actual oil revenues no longer suffice to mitigate redistributive pressures even if they are on an upward trend; or because the amount of resources the government actually receives from oil has decreased, due to declines in production and unfavourable deals with other countries; I don’t know. Or it could be that something else is going on, not accounted for in Dunning’s model. (A more industrious blogger would actually look up the time series of oil revenue that accrues to the government, to see whether it is in accord, but this seems to be a non-trivial task; Dunning’s own sources for reconstructing the oil take of the Venezuelan state seem quite inaccessible from New Zealand).
It is also worth emphasizing, as Dunning himself does briefly at the end of his book, that though some form of democracy may be supported by high natural resource prices when the rest of the economy is highly unequal, the quality of that democracy is not necessarily great. A rentier democracy may be democratic in the Schumpeterian minimalist sense, but it is a form of politics that often appears inimical to responsibility, and may be accompanied by a great deal of corruption. (I could speak from personal experience, but it’s been a long time since I’ve lived in Venezuela). In this sense, it could be that Friedman is at least partly right: rentierism may be bad for freedom (to some extent), regardless of whether or not it is always bad for democracy.
Finally, I would have liked Dunning to say more about how dependence on oil may have long-term “authoritarian” (or “democratic”) effects. Institutions may be hollowed out by state dependence on rents (this is Aziz Chaudhry’s argument in The Price of Wealth, if I remember correctly – it’s been a while since I read it, but basically the idea is that extreme oil dependence means you do not need to collect taxes and can basically give lots of people unproductive jobs in the government bureaucracy, which has all kinds of deleterious effects on other institutions); and the “Dutch disease” may decrease the size of the non-oil sector over time, increasing the “authoritarian effects” of natural resource rents. Dunning does speak a bit about both of these things, discussing some potential countervailing mechanisms, but some additional qualitative evidence would have been nice.
The quality of the analysis in this book – the game-theoretical models, quantitative tests, and qualitative case studies – is consistently high, though like many books that come out of dissertations there is too much cross-referencing and repetition. (Also, I wonder why the game-theoretical formalization of Dunning’s model leads to such ugly math. There’s nothing wrong with it, but isn’t there a way of handling the math of these optimization problems in a more elegant manner?). I wonder what recent detractors of the "resource curse" (like Victor Menaldo) think of it?
Saturday, January 08, 2011
The Weather under Ceausescu
In Romania the temperature never officially dropped below 10°C, even when there was ice and snow on the ground, because the law said that heating in public buildings had to be turned on when it did.

From Revolution 1989: The Fall of the Soviet Empire, by Victor Sebestyen, p. 165. Many more interesting stories in this book.
Wednesday, January 05, 2011
The Decline of Tyranny and the Rise of Dictatorship
Happy new year! I've wanted to blog this for a while, but sickness intervened. May all your new years be free of bacterial warfare. At any rate, here's my small contribution to the burgeoning Ngram literature:
(Link for a bigger picture.)
The Google corpus seems to be pretty sparse before 1800, so I would not take the big spike of "tyranny" around 1760 as evidence of much. But I'm curious about the slow decline of "tyranny" and the slow increase of "dictatorship" as a catch-all term for the pathologies of political regimes.
"Tyranny" is the older, Greek term. Originally a more or less neutral designation for a "usurper" (as opposed to a legitimate heir in a dynasty) it was later transformed into the term for the worst form of government in Plato and Aristotle, and the sense stuck. Tyranny, however, was never precisely characterized by any institutional features; though there was a loose association between tyranny and "lawless" or "arbitrary" monarchy, ultimately the tyrant was simply the unjust ruler. Thus all regimes can become tyrannical; the distinction between tyrannical and non-tyrannical government is moral rather than institutional.
By contrast, "dictatorship" is a Roman term that is far more directly tied to a particular set of institutions. (For a quick and useful potted history of the term, see Jennifer Gandhi's "Political Institutions under Dictatorship"). The dictator was originally a magistrate chosen by the Senate for a limited time (six months) and formally empowered to act extralegally in situations of crisis. The term acquired a bad connotation after Sulla and later Caesar abused the office in various ways, but it still retained an association with a particular institutional context: the dictator is a sole ruler, typically acting extralegally and commanding substantial force, and so on. It is not used to refer to a distinct political regime by any of the early modern political thinkers I'm aware of, even those like Montesquieu who are interested in classification matters (Montesquieu prefers despotism to tyranny as well), but it is revived by Marx in a sort of paradoxical turn of phrase: the "Dictatorship of the Proletariat." (I imagine the paradox was intentional: in the Roman context, the dictator had often been the instrument of the ruling class to put down revolts from below). What distinguishes the dictator from other rulers is not the justice or injustice of his actions, but the fact that he can "dictate" - impose a command on others.
With Marx we also see a return to the more "neutral" original sense of dictatorship; and though the term still carries a sting - to call someone a dictator is typically to imply something bad about them - it typically needs to be qualified or intensified with some adjective ("brutal" or "totalitarian" dictator, for example). In political science, in fact, some people use "dictatorship" to mean simply non-democracy. (This is not to say that they ignore variation within non-democracy; they use dictatorship as an umbrella term that encompasses everything from Mexico under the PRI to the Soviet Union under Stalin, but they do pay careful attention to some forms of institutional variation within this vast array of regimes).
But why does the more "institutional" term - in fact, a range of such terms, from autocracy and totalitarianism to authoritarianism - seem to displace the more "moral" terms (like tyranny, and to a lesser extent despotism, which also seems to have been popular in the 19th century) as labels for political pathology? Part of this must be the rise of democracy as the recognized "good regime" - even if, in actual practice, democracies often disappoint (but they disappoint less!). If the good regime is institutionally identifiable, then the bad regime might also be institutionally identifiable. Here's another Ngram:
Around 1900, the term "democracy" rises enormously in popularity, while the usage of older terms for different kinds of political regime decreases significantly in English. (Here's the Ngram for the Spanish corpus: some interesting differences, similar broad pattern). It's as if the distinctions between non-democratic political regimes "flatten," as Norberto Bobbio argued in his "Democracy and Dictatorship" (though I think he got this from Hans Kelsen): the key dimension of difference we see today among political regimes is whether authority is imposed from above ("dictatorially") or emerges from below ("democratically"). (In social science practice, the key dimension tends to be whether executive recruitment happens through genuinely competitive elections, but it is not clear that this always corresponds well to the contrast between "imposed" authority and "consent" just mentioned).
I also wonder whether this sort of phenomenon is a problem: do the Ngrams tell us anything about the potential loss of conceptual distinction, or merely about words? The conceptual distinctions need not go away: if political scientists start using "dictatorship" as an umbrella term for non-democracy, this does not mean they ignore all variation among non-democracies. (In fact, regime classifications have proliferated in recent years). But if the terms themselves incorporate important conceptual distinctions, their decline suggests a loss of conceptual variety. And then we might ignore, for example, the moral dimensions of variation among political regimes to focus on institutional variations that do not have sufficient moral relevance. But my thoughts on this point are still too muddled, so best to stop here.
Wednesday, December 15, 2010
The uncanny accuracy of European public opinion on the amount of foreign aid that governments give
Ok, this is probably the last post on this topic for a while. But a student (thanks Andrew!) put some of the data on European perceptions of how much foreign aid their governments give (from Eurobarometer 50.1, 1999) into nice electronic form, and I was able to calculate the modal response exactly. And really, the results surprised me: European public opinion turns out to be uncannily accurate on this question, far more so than American opinion, to the point that I wonder whether the results discussed in this post are not simply driven by the way the question is asked in the US. The accuracy of European public opinion on this topic actually looks like a striking confirmation of the models of "information aggregation" I invoked earlier: when signals are unbiased, public opinion should converge on the true answer.
The question Eurobarometer 50.1 asked is: "We are not talking about humanitarian aid, that is assistance provided in emergency situations, like wars, famine, etc, but about development aid. Do you think the (NATIONALITY) government helps the people in poor countries in Africa, South America, Asia, etc to develop? (IF YES) Roughly how much of its budget do you think the (NATIONALITY) government spends on this aid?"
The potential answers are:
1 No
2 Yes, less than 1%
3 Yes, between 1 and 4%
4 Yes, between 5 and 9%
5 Yes, between 10 and 14%
6 Yes, between 15 and 19%
7 Yes, between 20 and 24%
8 Yes, between 25 and 29%
9 Yes, 30% or more
10 Yes, but I do not know the percentage (SPONTANEOUS)
NSP No response/Don't know
The correct response is coded 3, between 1 and 4%.
So how did Europeans do in 1996-1998?
Their answers are collected in this table. As you can see, on average about 40-45% of Europeans say they don't know how much aid their governments give (though only about 20% don't know if their governments give any aid, or refuse to answer; another 20% say they think their governments give ODA (official development assistance), but don't know how much), and only about 16% give the correct response. So most Europeans seem to lack knowledge of how much ODA their governments give. (Though note the variance: the vast majority of Danes claim to know that their government gives aid, and something like 40% of them give the correct response).
But this is the wrong metric to focus on. In order to determine how accurate aggregate public opinion is, we have to do something like what Francis Galton did when he asked people at a country fair to estimate the weight of an ox, and calculate the median response among those who claim to know the answer (roughly, this is the answer that would emerge from a "democratic" vote). And here the results are quite different. In this table, I've included only the answers of people who claim to know the actual percentage of the budget given by European governments as ODA (the number represents the percentage of respondents who claim they know how much money their governments give as ODA), as well as their average and median responses. And Europeans get it exactly right: the median answer in both 1996 and 1998 was precisely 3 (the correct answer). The median in most countries was also very close to the truth: Germans and Belgians overestimate the amount of aid they give (their median answer is 4, meaning between 5% and 9% of the budget; perhaps Germans suffer from a status effect and Belgians have Brussels?), whereas Greece, Spain, Finland, and Sweden (and Italy in 1998) slightly underestimate the amount of aid they give.
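The aggregation itself is nothing fancy. Here is a minimal sketch in R, with a hypothetical vector answers standing in for the coded responses (2 through 9) of those who ventured a percentage, after dropping the "No" and "don't know" responses:

```r
# Galton-style aggregation of the coded Eurobarometer answers.
# `answers` holds codes 2-9 (made-up toy data); codes 1 and 10
# and the "don't know" responses have already been dropped.
answers <- c(2, 3, 3, 4, 3, 2, 5, 3, 3, 4)
median(answers)  # the "democratic vote": here 3, i.e. "between 1 and 4%"
mean(answers)    # the average, more easily dragged around by high guesses
```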
So, collective opinion in the EU, in 1996-1998, "knew" the right answer to the question that seems to stump Americans. I wonder if the problem of bias in American estimates of ODA today is caused by the way the question is asked in PIPA's survey? Would Americans display such a large bias if the question of Eurobarometer 50.1 was asked of them?
[update: fixed some typos and other minor problems for the sake of clarity, 12/15/2010]
On the idea of Tolerable Outcomes (Epistemic Arguments for Conservatism V)
What does it mean for an institution to be associated with “tolerable” outcomes over a period of time? The question is more subtle than I thought at first; under prompting from a friend who commented on the paper I am writing, here’s a stab. (For an introduction to this series, see here; for all the other posts, click here; the idea of “tolerable” or “ok” outcomes is used here, here and here).
The first problem is to determine the sense in which we might say that some outcome (or some set of circumstances) is “tolerable.” One promising idea identifies tolerable outcomes with those outcomes that do not involve “clear evils.” By a “clear evil” I mean the sort of thing that all (reasonable?) individuals could identify as an evil: slavery, genocide, etc. (Though then, of course, we have the problem of sorting out the reasonable from the unreasonable; see here Estlund’s Democratic Authority). Some evils are not clear in this sense: reasonable individuals (in the Rawlsian sense of the term) might disagree about their importance, or their identification as an evil, given their (differing) beliefs about justice and the good.
A more problematic, but more substantive sense of “tolerable,” identifies tolerable outcomes with those outcomes that are above some threshold of badness on some substantive scale. Here the idea is not that some evils are necessarily clear in the sense discussed above, but that the determination of which evils are tolerable and which are not is an “easier” problem than the determination of which goods make a society optimal or fair or just, for example. Even if reasonable people disagree about whether, for example, persistent poverty is a tolerable evil, the conservative can still argue that determining whether persistent poverty is a tolerable evil is an “easy” problem relative to, for example, determining whether an egalitarian society is justified. (Perhaps the majority of people believe that poverty is a tolerable evil, while slavery is not; if we assume that the majority of people have some warrant for these beliefs, then the belief that persistent poverty is a tolerable evil might be epistemically justified, even if some reasonable individuals disagree).
Taking some criterion of “tolerability” as given, a second problem emerges: institutions are associated with outcomes over time. Should a conservative discard any institution that is associated with even a single intolerable outcome? Or should the conservative somehow “average” these outcomes over time, or “discount” past outcomes at a specific rate?
For an example, consider the basic institutions of liberal democracy. If we look, say, at the institutions of the Netherlands or Sweden since 1960, we could easily agree that these institutions have been associated with tolerable outcomes since then, in the sense that they do not seem to have been associated with clear evils (let alone to have produced them, though by assumption we cannot tell whether the outcomes associated with these institutions were in fact produced by them).
But now consider the entire history of relatively liberal institutions in the USA since the late 18th century. These institutions were not always associated with tolerable outcomes; they were in fact associated with slavery and ethnic cleansing, which count as clear evils if anything does, and with many other evils besides (aggressive war and colonialism among them). But at the time they were also not the same institutions as today; there has been a great deal of institutional change in the USA. Though the basic structure of the institutions, as specified in the US constitution, has not changed that much – e.g., we still have competitive elections, two legislative chambers with specific responsibilities, an executive, a relatively independent judiciary, a bill of rights, etc. – the actual workings of these institutions, the associated circumstances under which they operate, and the expectations that shape their use have changed quite a bit. Suffrage was extended to all adult males; then it was extended to women in the early 20th century. Slavery was abolished. The regulatory powers of the Federal government expanded. The country industrialized. And so on. Since (by assumption) we do not know which aspects of American institutions and circumstances produced clear evils and which aspects and circumstances did not, we cannot in general answer the question of whether liberal institutions in the USA have produced tolerable outcomes in all past circumstances; at best, we can say that American institutions that are in some ways similar to existing institutions were associated with intolerable (not ok) outcomes in the past.
What might a conservative say to this? One possibility would be for the conservative to have a particular “discount rate” for the past: the further back in the past an outcome is associated with an institution, the less it is to “count” towards an evaluation of whether the institution is to be preserved, on the assumption that the further back in time we go, the less we are talking about the same institutions. Early nineteenth century American institutions were only superficially similar to modern American institutions, on this view; and so the outcomes associated with them should be discounted when we consider whether or not American institutions should have “epistemic authority.”
The problem with this is that the smaller the discount rate is, the more intolerable outcomes it will “catch,” so that the conservative is forced to discard almost all institutions. With a small discount rate, the conservative is forced to argue that American institutions should not, in general, be given the benefit of the doubt, since they (or similar enough institutions) have produced intolerable outcomes. But with a large discount rate, the conservative can be far less confident that the institutions in question will be associated with tolerable outcomes in the future, since he has less evidence to go on. So the conservative faces a sort of evidence/discount rate tradeoff: the conservative position is most powerful the more evidence we have of the association of institutions with tolerable outcomes; but the more evidence we have of outcomes, the more likely it is that some of these will be intolerable, forcing the conservative to argue for changes.
(In more formal terms: consider a series of states of the world {X1, …, Xi, …, Xn}, associated with incarnations of an institution {I1, …, In}. For each Xi, we know whether it represents a tolerable or an intolerable outcome, and we know that it was associated with Ii, though we do not know whether Ii produced it. Suppose all intolerable outcomes are found in the past, i.e., in the series {X1, …, Xi}, where i is less than n. Suppose also that our confidence that institution In (today’s incarnation of the institution) is similar enough to institution Ii decreases according to some discount rate d. The larger d is, the smaller the series of states that can serve as evidence that In will be associated with tolerable outcomes in the future; but the smaller d is, the more likely it is that the evidential series of states will include some states from the intolerable series {X1, …, Xi}.)
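To get a feel for the tradeoff, here is a toy computation in R (my numbers are purely illustrative; d is the per-period discount rate applied to the past):

```r
# Toy numbers for the evidence/discount-rate tradeoff sketched above.
# outcomes: 1 = tolerable, 0 = intolerable; the intolerable outcomes
# (slavery, ethnic cleansing) sit early in the series.
outcomes <- c(rep(0, 80), rep(1, 130))    # say, 1800-2009, intolerable before 1880
n <- length(outcomes)
tradeoff <- function(d) {
  w <- (1 - d)^((n - 1):0)                # weight decays with distance into the past
  c(total.evidence    = sum(w),           # how much history still "counts"
    intolerable.share = sum(w * (outcomes == 0)) / sum(w))
}
round(tradeoff(d = 0.005), 2)  # small d: lots of evidence, much of it intolerable
round(tradeoff(d = 0.05), 2)   # large d: the intolerable past fades away, but so
                               # does most of the evidence
```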
What do people think?
Tuesday, December 07, 2010
One hypothesis weakened
In an earlier post I wondered about the sensitivity of estimates of US foreign aid to the definition of foreign aid; if people included "military involvement" as foreign aid, then their estimates would be biased upwards. But apparently the good people at PIPA already thought of this in an earlier poll (Thanks Andrew, for doing what I was too lazy to do!):
Some have wondered whether the high estimate of foreign aid spending is due to Americans incorrectly including in their estimates the high costs of defending other countries militarily. To determine if this was the case, in June 1996 PIPA presented the following question: "US foreign aid includes things like humanitarian assistance, aid to Israel and Egypt, and economic development aid. It does not include the cost of defending other countries militarily, which is paid for through the defense budget. Just based on what you know, please tell me your hunch about what percentage of the federal budget goes to foreign aid." Despite this clarification, the median estimate was 20% and the mean 23%.

Europeans, however, do appear to produce less biased estimates of foreign aid than Americans:
When Europeans are asked how much the government spends on overseas aid from the national budget, approximately one third of respondents do not know. Another third will choose between 1-5 per cent and 5-10 per cent. The smallest proportion will mention less than one per cent. The consistent trend across OECD countries is to overestimate the aid effort.

The figures cited appear to be from this report, I think, though the question is not exactly comparable. Most citizens admit they don't know (57% or so). Here's a table (click for larger size):
The correct response is "around 100 Euros per European citizen." (Based on the figures in the table, however, it looks like most Europeans actually underestimate the amount of foreign aid the EU gives - which does not support the conclusion of the other report. I wonder what the results would be if the question were asked in these terms in the USA). Anyway, it seems like the evidence is inconsistent with the hypothesis that high foreign aid estimates are driven by the inclusion of military spending in the results, though the fact that European populations do produce lower estimates of aid spending (even though the questions are not exactly comparable) does suggest that perhaps military spending plays small role.
Another option: perhaps this is driven in part by national status? "High status" (powerful) countries will tend to have a self-image that includes lots of aid to others. But disaggregated figures for all the EU countries do not appear to be easily available to test this sort of thing (e.g., maybe France, Britain, and Germany produce more incorrect estimates than small, peripheral countries like Latvia and the Czech Republic).
[Update 12/8/2010 - thanks again Andrew: A 1999 Eurobarometer report (p. 11) notes that "Approximately a quarter of Europeans thinks that their government actually contributes to development aid, but does not feel well enough informed to say how much. The largest proportions of votes go to the categories « Between 1 and 4% » (14%, -2 since 1996) and « Less than 1% » (10%, -2). Europeans are not far from reality when they make this choice." The question asked then was "We are not talking about humanitarian aid, that is assistance provided in emergency situations like wars, famine, etc, but about development aid. Do you think the (NATIONALITY) government helps the people in poor countries in Africa, South America, Asia, etc to develop? (IF YES) Roughly how much of its budget do you think the (NATIONALITY) government spends on this aid?" The correct answer is "between 1 and 4%". If I'm reading the accompanying table right, Denmark, Finland and Sweden give especially accurate answers - around 40% of people in Denmark give the correct answer.]
The Robustness or Resilience Argument in Practice: Noah Millman vs. Jim Manzi (Epistemic Arguments for Conservatism IV.55)
Noah Millman and Jim Manzi over at The American Scene (and Karl Smith at Modeled Behavior) have been debating the degree of deference we should give to economic science when considering what governments should do about a recession. Manzi emphasizes the large degree of uncertainty and difficulty attendant on any attempt to determine whether a particular policy actually works, and he is right about this: we do not know very well whether any policy intervention actually works (or worked), given the enormous number of potentially confounding variables. Lots of econometric ink is spilled trying to figure out this problem, but the problem is intrinsically hard, given the information available. By contrast, knowledge in physics or chemistry is far more certain, since it can be established by means of randomized experiments that are easily replicated. So, Manzi argues, we should give less deference to economists than we do to physicists when making decisions. Millman sensibly points out that the relevant analogy is not to physics or chemistry but to something like medicine. The knowledge produced by medical science is hard to apply in practice, and doctors base their treatment decisions on a combination of customary practice, experience, and some limited experimental and observational evidence. In particular cases, then, medical practice offers at best an informed guess about the causes of a disease and the best course of action. But Millman argues that this does not undermine the epistemic authority of medicine: in case of sickness, we should attend to the advice of doctors, and not to the advice of nonexperts.
I think Manzi’s argument would be more compelling if it were put as a robustness or resilience argument (discussed previously here and here). Consider first the case of medicine. If we get sick, we have three basic options: heed the advice of doctors, heed the advice of non-experts, or do nothing. It seems clear that heeding the advice of non-experts should (normally) be inferior to heeding the advice of doctors. But is heeding the advice of doctors always epistemically preferable to doing nothing? (Or, more realistically, to discounting the advice of doctors based on one’s own experience and information about one’s body). The answer to this question depends on our estimation of the potential costs of medical error vs. doing nothing. Because medical knowledge is hard, doctors may sometimes come up (by mistake) with treatments that are actively harmful; in the 18th century, for example, people used “bleeding” as a treatment for various diseases, which may have been appropriate for some things (apparently bleeding with leeches is used effectively for some problems), but probably served to weaken most sick people further. At any rate, we may not know whether a treatment works any better than the doctor does; all we know is that people treated by doctors sometimes die. If our estimate of medical knowledge is sufficiently low (e.g., if we think that in some area of medical practice medical knowledge is severely limited), our estimate of the potential costs of medical error sufficiently high (we could die), and our experience of what happens when we do nothing sufficiently typical (most illness goes away on its own, after all: the human immune system is a fabulously powerful thing, perfected to a high degree by millions of years of evolution!), it may well be the case that we are better off discounting medical advice for the sake of doing nothing. Of course, atypical circumstances may result in our dying from lack of treatment; that is one of the perversities to which this sort of argument may give rise. But given our epistemic limitations (and the epistemic limitations of medicine), there may be circumstances where “doing something” is equivalent to doing something randomly (because the limitations on our medical knowledge are so severe), and so we may be (prospectively) better off doing nothing (i.e., tolerating some bad outcomes that we hope are temporary, since our bodies have proven to be resilient in the past).
Consider now the case of a government trying to decide what to do about a moderately severe recession. Here the government can do nothing (or rather, rely on common sense, tradition, custom, and the like: i.e., do what non-experts would do), heed the advice of professional economists (who disagree about the optimal policy), or heed the advice of selected non-economists (or of some mixture of economists and non-economists). When is “heeding the advice of economists” better than “doing nothing,” given our epistemic limitations? And when is “heeding the advice of non-economists” better than “heeding the advice of economists”?
We know that the current architecture of the economic system produces recessions with some frequency, some of which seem amenable to treatment via monetary policy (whenever certain interest rates are not too close to zero) and some of which appear to be less so (these are atypical), but that it in general produces long-run outcomes that seem tolerable (not fair, or right, or just: merely tolerable) for the majority of people (there are possible distributional concerns that I am ignoring: maybe the outcomes are not tolerable for some people). The system is robust for some threshold of outcomes and some unknown range of circumstances: it tends to be associated with increasing wealth over the long run, though it is also associated with certain bad outcomes, and we do not know if it is indefinitely sustainable into the future (due to environmental and other concerns). We also know that there is some disagreement among economists about the optimal policy in an atypical recession (which suggests that there are limits to their knowledge, if nothing else). If we think that the limits on economic knowledge are especially severe for some area of policy (e.g., what to do in atypical recessions), if historical evidence suggests that economists sometimes prescribe measures that are associated with intolerable outcomes (e.g., massive unemployment, hyperinflation, etc.), and if we think that most recessions eventually go away on their own, then we may be justified in doing nothing on epistemic grounds. In other words, if we think that for some area of policy economists’ guesses about optimal policy are not likely to be better than random, and carry a significant risk of producing intolerable outcomes, then conservatism about economic policy is justified (doing what custom, tradition, etc. recommend, and heavily discounting the advice of economists).
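The structure of this argument can be made explicit with a toy expected-loss comparison (an illustrative sketch; all the numbers are mine, not estimates of anything):

```r
# Toy comparison (illustrative numbers only): doing nothing costs `baseline`;
# heeding advice usually shaves off `gain`, but with probability p.disaster
# the advice is badly wrong and produces an intolerable outcome.
expected.loss <- function(p.good, gain = 4, baseline = 10,
                          p.disaster = 0.02, disaster = 100) {
  p.good * (baseline - gain) +
    (1 - p.good - p.disaster) * baseline +
    p.disaster * disaster
}
baseline <- 10                 # the loss from simply riding the recession out
expected.loss(p.good = 0.4)    # advice barely better than random: 10.2 > 10, do nothing
expected.loss(p.good = 0.9)    # advice usually right: 8.2 < 10, heed the economists
```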
But these are big ifs. Suppose that the epistemic limitations of economic science are such that most policy interventions recommended by professional economists have a net effect of zero in the long run; that is, economists recommend things more or less randomly, some good and some bad, but in general tend not to recommend things that are very bad for an economy (or very good for it). (Historical evidence may support this; “Gononomics” – the ruinous policies associated with Zimbabwe’s central banker Gideon Gono – is something of an achievement, not the norm). In that case, we are probably better off heeding the advice of economists (and gaining the experience of the results) than doing nothing (and not gaining this experience); there may not be exceedingly large costs from heeding economic advice, but there may not be very large benefits either, and the result will still be “tolerable.” (At the limit, this sort of argument suggests that we ought to be indifferent about almost any policy intervention, so long as we have reasonable expectations that the outcomes will still be tolerable). Moreover, distributional concerns may dominate in these circumstances; doing nothing has a distributional cost that is passed to some particular group of people (e.g., the unemployed), so we may have reason to be concerned more about distribution than about long-run economic performance. And much depends on our estimates of the epistemic limitations of economic science: sure, economics is not like physics, but is it more like 20th century medicine, or more like 17th century medicine? (And the answer to this question may be different for different areas – different for macroeconomics than for microeconomics, for example).
Monday, December 06, 2010
Why are estimates of US foreign aid so biased?
A number of people have pointed to the latest reiteration of the fact that Americans do not appear to know what percentage of the budget goes to foreign aid. The median guess is 25% of the total budget, which is far higher than the actual 0.6%. Moreover, as far as I know, for as long as this question has been asked (since 1995), Americans have always hugely overestimated the percentage of the budget that goes to foreign aid; according to PIPA, the median guess has been about 20%. More educated people guess a bit lower, and less educated people a bit higher, but they mostly err on the high side. But why? As I mentioned in an earlier post, if people estimate such quantities on the basis of unbiased signals, they should converge on the true answer. So what is the source of this bias?
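The aggregation claim itself is easy to illustrate by simulation (a minimal sketch in R, with made-up noise):

```r
# With unbiased signals, the median guess recovers the truth even when
# individual guesses are wildly noisy.
set.seed(1)
truth   <- 0.6                                  # foreign aid as % of the budget
signals <- truth + rnorm(5000, mean = 0, sd = 5)
median(signals)  # ~0.6: no aggregate bias, however noisy the individuals
```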
Eric Crampton suggests that voters count a lot of military spending as "foreign aid." This strikes me as plausible. Voters do not have in mind the same technical definition of "foreign aid" that the budget wonks use; they mostly see a large degree of involvement by the US in various countries, some of it justified on "nation building" grounds, which they can easily classify as "foreign aid/involvement." (These are the "signals" that they use to estimate the total amount of aid). And indeed the military accounted for about 23% of federal spending in FY2009 (a bit less this year), depending on how you count, which is close enough to the public guess for "foreign aid."
How would we know if this is what is going on? I wonder if answers to the question fluctuate in ways that are more or less correlated with the foreign wars of the US. Are answers to the question lower in times of peace? (I am too lazy to download the data and crunch it myself. But perhaps some enterprising soul could do it.) Also, has this question been asked in other countries, and does the magnitude of the bias remain constant? Or are the publics of countries with fewer foreign entanglements more likely to offer lower guesses of the amount of foreign aid spent? (If anybody kindly points me to easily downloadable data on this, I will make some graphs). I would also like to see a poll that asks this question but primes respondents by explicitly indicating that they are not to count military spending as foreign aid. (E.g., "Just based on what you know, please tell me your hunch about what percentage of the federal budget goes to foreign aid, not counting money spent by the military.") This may well produce a biased estimate, but would it be as biased as the current one? Has some enterprising public opinion researcher asked this question or something similar before?
And I would like to see the question asked in terms of the absolute number of dollars spent. (E.g., "Just based on what you know, please tell me your hunch about how many billions of dollars the Federal government spends on foreign aid, [not counting money spent by the military].") Would the estimates be similarly biased upwards? I have a hunch that they might even be biased downwards; I also suspect that asking the question in terms of percentages limits guesses to a degree of coarseness that produces biased estimates. (Foreign aid is 0.6-2.6% of the budget, depending on how you calculate it. Assume people guess the true number based on relatively unbiased signals from the news, including perhaps signals about foreign military involvement, but their guesses are made in 1% increments. Since 0% is an implausible guess, the smallest guess would be 1%, which would inevitably bias the collective estimate upwards, though not necessarily as much as the current estimate. Is this idea too harebrained?)
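Harebrained or not, the mechanism is easy to simulate (again a sketch in R with made-up noise; the 1% floor does all the work):

```r
# Unbiased signals around the true 0.6%, but answers come in whole-percent
# steps and 0% is treated as implausible, so 1% is the smallest guess.
set.seed(1)
truth   <- 0.6
signals <- truth + rnorm(10000, mean = 0, sd = 1)  # unbiased individual signals
guesses <- pmax(round(signals), 1)                 # coarse answers with a 1% floor
median(guesses)  # 1: biased upward relative to 0.6...
mean(guesses)    # ~1.2: ...but nowhere near the observed 20-25%
```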
Another possibility is that answers to this question do not reflect factual beliefs, but rather what Julian Sanchez once called "symbolic beliefs." Here the idea would be that respondents interpret the question as a question about the evaluation of US commitments abroad. The high guesses merely mean "the US spends too much on foreign entanglements," and the 10% median answer to the question of how much the US should spend merely says something like "whatever it is, halve it." On this view, voters do not really believe that the US should spend 10% on foreign aid, only that it should spend less; educating them about the true amount that the US spends would have only a limited impact on their apparent misperceptions (though could education increase the amount that voters are willing to spend on foreign aid, maybe not to 10%, but perhaps to 3%?). There would be reason to suspect that this is the case if, as Robin Hanson notes, we never see politicians run on increasing foreign aid, even though they could conceivably explain to voters that the US actually spends very little on non-military foreign aid.
Could this sort of "symbolic" belief ever be consistently corrected? It would not do to simply tell the voters that the actual value of "foreign aid" is less than 1% of the budget; they might simply adjust their views to say that it should be less, or redefine "foreign aid" to include all sorts of things that the budget analyst would not include (like military spending). Even if the belief were truly a factual and not a symbolic belief, mere provision of information would not necessarily change it: these sorts of quantities are estimated on the basis of signals from the social world of the voters, not merely on the basis of remembered (or misremembered) facts. Since signals are constantly received but mere factual information is not, unless you change the bias in the signals, the public will continue to overestimate "foreign aid" (whatever they actually mean by this).
Other ideas?
Eric Crampton suggests that voters count a lot of military spending as "foreign aid." This strikes me as plausible. Voters do not have in mind the same technical definition of "foreign aid" that the budget wonks use; they mostly see a large degree of involvement by the US in various countries, some of it justified on "nation building" grounds, which they can easily classify as "foreign aid/involvement." (These are the "signals" that they use to estimate the total amount of aid). And indeed the military accounted for about 23% of federal spending in FY2009 (a bit less this year), depending on how you count, which is close enough to the public guess for "foreign aid."
How would we know if this is what is going on? I wonder if answers to the question fluctuate in ways that are more or less correlated with the foreign wars of the US. Are answers to the question lower in times of peace? (I am too lazy to download the data and crunch it myself. But perhaps some enterprising soul could do it.) Also, has this question been asked in other countries, and does the magnitude of the bias remain constant? Or are the publics of countries with fewer foreign entanglements in war more likely to offer lower guesses of the amount of foreign aid spent? (If anybody kindly points me to easily downloadable data on this, I will make some graphs.) I would also like to see a poll that asks this question but primes respondents by explicitly indicating that they are not to count military spending as foreign aid. (E.g., "Just based on what you know, please tell me your hunch about what percentage of the federal budget goes to foreign aid, not counting money spent by the military.") This may well produce a biased estimate, but would it be as biased as the current one? Has some enterprising public opinion researcher asked this question or something similar before?
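In case that enterprising soul wants a starting point, here is a minimal sketch of the first pass in R. The file and the column names (year, median_guess, war) are invented; I don't know of a ready-made series of these polls:

```r
# Hypothetical data: one row per poll year, with the median "foreign aid" guess
# and an indicator for years of major US combat deployments.
polls <- read.csv("foreign_aid_polls.csv")  # columns: year, median_guess, war

# Do median guesses run higher in war years than in peacetime?
t.test(median_guess ~ war, data = polls)

# A crude measure of the association over time
cor(polls$median_guess, polls$war)
```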
And I would like to see the question asked in terms of the absolute number of dollars spent. (E.g., "Just based on what you know, please tell me your hunch about how many billions of dollars the Federal government spends on foreign aid, [not counting money spent by the military].") Would the estimates be similarly biased upwards? I have a hunch that they might even be biased downwards, and I also suspect that asking the question in terms of percentages limits guesses to a degree of coarseness that produces biased estimates. (Foreign aid is 0.6-2.6% of the budget, depending on how you calculate it. Assume people guess the true number based on relatively unbiased signals from the news, including perhaps signals about foreign military involvement, but their guesses are made in 1% increments. Since 0% is an implausible guess, the smallest guess would be 1%, which would inevitably bias the collective estimate upwards, though perhaps not by nearly as much as the current overestimate. Is this idea too harebrained?)
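Harebrained or not, the coarseness story is easy to simulate. A toy version in R, with all numbers invented (a true share of 0.8%, unbiased normal signals, answers forced into whole percentage points with a floor at 1%):

```r
set.seed(7)
true_share <- 0.8                                 # invented true share of budget
signals <- true_share + rnorm(100000, sd = 0.5)   # unbiased noisy signals
guesses <- pmax(1, round(signals))                # whole percents; 0% implausible
mean(guesses)                                     # comes out above the 0.8% truth
# The floor at 1%, plus rounding, pushes the average guess above the true
# value, though nowhere near the estimates in the actual polls.
```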
Another possibility is that answers to this question do not reflect factual beliefs, but rather what Julian Sanchez once called "symbolic beliefs." Here the idea would be that respondents interpret the question as a question about the evaluation of US commitments abroad. The high guesses merely mean "the US spends too much on foreign entanglements," and the 10% median answer to the question of how much the US should spend merely says something like "whatever it is, halve it." On this view, voters do not really believe that the US should spend 10% on foreign aid, only that it should spend less; educating them about the true amount that the US spends would have only a limited impact on their apparent misperceptions (though could education increase the amount that voters are willing to spend on foreign aid, maybe not to 10%, but perhaps to 3%?). There would be reason to suspect that this is the case if, as Robin Hanson notes, we never see politicians run on increasing foreign aid, even though they could conceivably explain to voters that the US actually spends very little on non-military foreign aid.
Could this sort of "symbolic" belief ever be consistently corrected? It would not do to simply tell the voters that the actual value of "foreign aid" is less than 1% of the budget; they might simply adjust their views to say that it should be less, or redefine "foreign aid" to include all sorts of things that the budget analyst would not include (like military spending). Even if the belief were truly a factual and not a symbolic belief, mere provision of information would not necessarily change it: these sorts of quantities are estimated on the basis of signals from the social world of the voters, not merely on the basis of remembered (or misremembered) facts. Since signals are constantly received but mere factual information is not, unless you change the bias in the signals, the public will continue to overestimate "foreign aid" (whatever they actually mean by this).
Other ideas?
Thursday, November 25, 2010
Epistemic Arguments for Conservatism IV.5: An Addendum on Resilience
Rereading the long post below, it occurred to me that I didn’t mention why the argument I describe there should be called a “resilience” argument. Here’s what I had in mind. Institutions that have lasted for a long time have presumably endured in diverse circumstances while still producing tolerable outcomes, so we may think that there is a reasonable probability that they will still do ok in many unknown future circumstances: their endurance can be taken as evidence of resilience. If the potential costs of error in trying to find the optimal set of institutions are very high (e.g., getting a really bad political system, like the mixture of feudalism and Stalinism they have in the DPRK), and the “optimal” set of institutions for a given set of circumstances is very hard to find (if, for example, nobody knows with any certainty what the optimal political system would be for that set of circumstances, and the system would have to be changed anyway as they change), then it would make sense to stick with institutions that are correlated with ok outcomes over long periods of time and tolerate their occasional inefficiencies and annoyances. Resilient institutions are better than optimal institutions, given our epistemic limitations.
The argument also seems to imply that we ought to be indifferent about different sets of “ok” institutions. For example, there are a variety of democratic institutions in use today: some countries have parliamentary forms of government, some presidential; some have bicameral legislatures, others unicameral; some have FPP electoral systems, others use MMP; some countries use rules mandating “constructive” no-confidence votes, others use other rules. But though we have some (statistically not especially good) evidence that some of these combinations work better than others in some sets of circumstances (e.g., unitary parliamentary systems with list PR seem to produce better long-run outcomes than federal presidential governments with nonproportional electoral systems, at least on average, though I would not put too much stress on this finding), for the most part they all work ok, and we cannot tell with reasonable certainty whether some particular combination would be much better for us given foreseeable (and unforeseeable) circumstances. Perhaps switching back to FPP in New Zealand, for example, would produce better economic performance or induce better protection of civil liberties, but the best estimate of the effect of switching to FPP (or retaining MMP) on long-run economic performance or the average level of protection of civil liberties is basically zero. (We might have reasons to retain MMP or switch to FPP, but those reasons will probably have more to do with normative concerns about representation, and with ideas about how easy it is for citizens to punish a government they dislike, than with any special ability of MMP to deliver better economic performance.) So we should not be bothered overmuch about these details of institutional design; given our epistemic limitations, on this view, it is unlikely that we would achieve even marginal improvements in our institutions that are sustained over the long run.
This does assume that gradual tinkering cannot at least serve to mitigate the effects of a changing “fitness landscape” (to use the terminology of the previous post), a controversial assumption. (It might be better to constantly tinker with our institutions than to let them just be, even if the tinkering is unlikely to lead to sustained improvements: we are just trying to stay on a local peak of the fitness landscape). And it also assumes that this landscape is very rugged for all the heuristics available: either minor changes just take you to another set of "ok" institutions (another variety of democracy, with some other combination of electoral system, relationships between executive and legislative powers, veto points, etc., and producing basically the same average long-run benefits), or they mostly throw you down a deep chasm if you try something new and radical (you get communist feudalism, or some variety of kleptocracy, and so on). I'm not sure this assumption makes sense for most problem domains, however: perhaps gradual tinkering in some cases does lead to better long-run outcomes, pace my previous argument against gradualism. But I have to think some more about this problem.
Wednesday, November 24, 2010
Epistemic Arguments for Conservatism IV: The Resilience Argument and the “Not Dead Yet” Criterion
(Fourth in the series promised here. Usual disclaimers apply: this is work in progress and so it is still a bit muddled, though comments are welcome).
One of the more promising epistemic arguments for conservatism is the argument from resilience. The general idea is that we owe deference to certain institutions (and so should not change them) not because they are “optimal” for the circumstances in which we find ourselves, but because they have survived the test of time in a variety of circumstances without killing us or otherwise making us worse off than most relevant alternatives. This argument might be used, for example, to justify constitutional immobility in the USA: even if the US constitution is not optimal for every imaginable circumstance, it is tolerable in most (“we’re not dead yet”); after all, it has lasted more than 200 years with relatively minor changes to its basic structure (save for the treatment of slavery, of course; but let us focus only on the basic structure of constitutional government); and if we have no good reason to think that changes to the constitution would improve it (because the effects of any change are exceedingly difficult to predict, and would interact in very complicated ways with all sorts of other factors, a caveat that would not necessarily apply to the treatment of slavery in the original constitutional text, which we may take as an obvious wrong), and some reason to think that the costs of ill-advised changes would be large (“we could die,” or at the very least unchain a dynamic leading to tyranny, oppression, and economic collapse), we are better off not changing it at all and putting up with its occasional inefficiencies.
The oldest and in some ways the most powerful version of this argument can be found in Plato’s Statesman (from around 292b to 302b). There the Eleatic Stranger (the main character in this dialogue) argues for a very strict form of legal conservatism, suggesting that we owe nearly absolute deference to current legal rules in the absence of genuine political experts who have the necessary knowledge to change them for good. This might seem extreme (indeed, it has seemed extreme to many interpreters), but given the assumptions the Stranger makes, the argument seems rather compelling.
The basic logic is as follows. (For those interested in a “chapter and verse” interpretation of the relevant passages, see my paper here [ungated], especially the second half; it’s my attempt to make sense of Platonic conservatism.) In a changing environment, policy has to constantly adjust to circumstances; the optimal policy is extremely “nonconservative.” But perfect adjustment would require knowledge (both empirical and normative) that we don’t have. In Platonic terminology, you would need a genuine statesman with very good (if not perfect) knowledge of the forms of order (the just, the noble, and the good) and very good (if not perfect) knowledge of how specific interventions cause desired outcomes; in modern terminology, you would need much better social science than we actually have and a much higher degree of confidence in the rightness of our normative judgments than the “burdens of judgment” warrant. Worse, in general we cannot distinguish the people who have the necessary knowledge from those who do not; if unscrupulous power-hungry and ignorant sophists can always mimic the appearance of genuine statesmen, then the problem of selecting the right leaders (those who actually know how to adjust the policy and are properly motivated to do so) is as hard as the problem of determining the appropriate policy for changing circumstances.
If the first best option of policy perfectly tailored to circumstances is impossible, then (the Eleatic Stranger argues) the second best option is to find those policies that were correlated in the past with relative success according to some clear and widely shared criterion (the “not dead yet” or “could be worse” criterion), and stick to them. Note that the idea is not that these policies are right or optimal because they have survived the test of time (in contrast to some modern Hayek-type “selection” arguments for conservatism), or even that we know if or why they “work” (in the sense that they haven’t killed us yet). On the contrary, the Stranger actually assumes that these policies are wrong (inefficient, non-optimal, unjust); they are just not wrong enough to kill us yet (or, more precisely, not wrong enough for us to bear the risk of trying something different), even if we happen to live in the highly competitive environment of fourth century Greece. (The true right policy can only be known to the possessor of genuine knowledge; but ex hypothesi there is no such person, or s/he cannot be identified). And he also assumes that we do not know if the reason we are not dead yet has to do with these past policies; correlation is not causation, and the Stranger is very clear that by sticking to past policies we run large risks if circumstances change enough to render them dangerous. But the alternative, in his view, is not a world in which we can simply figure out which policies would work as circumstances change with some degree of confidence, but rather a world in which proposals are randomly made without any knowledge at all of whether they would work or not, and where the costs of getting the wrong policy are potentially very high (including potentially state death). If these conditions hold, sticking to policies that were correlated with relative “success” (by the “not dead yet” or “could be worse” criterion) is then rational. (There are some complications; the Stranger’s position is not as absolute as I’m making it seem here, as I describe in my paper, and Plato’s final position seems to be that you can update policy on the basis of observing sufficient correlation between policies and reasonable levels of flourishing and survival even in the absence of perfect knowledge).
Does this argument work? In order to understand the circumstances under which it might, let us recast it in the terminology of a “fitness landscape.” Let us assume that, in some problem domain, we have good reason to believe that the “fitness landscape” of potential solutions (potential policies) has many deep valleys (bad policies with large costs), some local but not very high “peaks” (ok policies), and only one very high peak (optimal policies). Assume further that this fitness landscape is changing, sometimes slowly, sometimes quickly; a reasonably ok policy in some set of circumstances may not remain reasonably ok in others. Under these circumstances, an agent stuck on one of the local peaks has very little reason to optimize and lots of reason to stick to its current policy if it has reason to think that its heuristics for traversing the fitness landscape are not powerful enough to consistently avoid the deep valleys. Conservatism is then rational, unless your local “peak” starts to sink below some acceptable threshold of fitness (in which case you may be dead whether or not you stick to the policy).
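For concreteness, here is a toy version of this logic in R. Everything about the landscape is invented: three “ok” local peaks, one narrow optimal peak, and deep valleys everywhere else. The point is only that, on such a landscape, the expected payoff of a radical jump can be far below that of staying put:

```r
set.seed(42)

# Invented one-dimensional fitness landscape on [0, 1]
fitness <- function(x) {
  peaks <- c(0.2, 0.5, 0.8)                           # "ok" local peaks
  ok    <- sapply(x, function(xi) max(exp(-((xi - peaks) / 0.02)^2)))
  best  <- exp(-((x - 0.35) / 0.005)^2)               # narrow optimal peak
  3 * pmax(ok, 2 * best) - 2                          # valleys sit at fitness -2
}

x0 <- 0.5                               # current institutions: an ok local peak
n  <- 10000
radical <- runif(n)                     # radical reform: jump anywhere at random
tinker  <- x0 + rnorm(n, sd = 0.01)     # tinkering: small local perturbations

c(stay    = fitness(x0),
  radical = mean(fitness(radical)),     # mostly lands in the valleys
  tinker  = mean(fitness(tinker)))      # drifts slightly off the peak
```

On these (invented) numbers, staying put beats tinkering slightly and beats radical reform by a wide margin, even though a higher peak exists; the optimum is simply too narrow for blind search to find it safely.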
For a concrete example, consider the space of possible political systems. The vast majority of imaginable political systems may be correlated with very bad outcomes: oppression, economic collapse, slavery, loss of political independence, even physical death. A smaller set (including liberal democratic systems, but potentially other systems as well) is reasonably ok; such systems are correlated with a measure of stability and other good things, though (let us assume) we have no good way to know whether they actually cause those good outcomes or whether the correlation occurs by chance, and we have no reason to assume that these are the best possible outcomes that can be achieved, or that these good outcomes will remain forever associated with these political systems. Finally, let us assume that there exists some utopian political system which would induce the best possible outcomes (however defined) for current circumstances (imagine, for the sake of argument, that this is some form of communism that had solved the calculation problem and the democracy problem plaguing “real existing communism”), but that we do not have enough knowledge (neither our social science nor our theory of justice is advanced enough) to describe it with any certainty. Does it make sense to try to optimize, i.e., to attempt to find and implement the best political system, in these circumstances? I would think not; at best, we may be justified in tinkering a bit around the edges. Both the uncertainty about which political system is best and the potential costs of error are enormous, and circumstances change too quickly for the “best” system to be easily identified via exhaustive search. Hence conservatism about the basic liberal democratic institutions might be justified. (Note that this does not necessarily apply to specific laws or policies: here the costs are not nearly as large, and the uncertainty about the optimal policy might be smaller, or our heuristics more powerful. So constitutional conservatism is compatible with non-conservatism about non-constitutional policies.)
On the other hand, it is important to stress that the “not dead yet” criterion is compatible with slow death or sudden destruction, and somehow seems highly unsatisfactory as a justification for conservatism in many cases. Consider a couple of real-life examples. First, take the case of the population of the island of St Kilda, off the coast of Scotland, described in Russell Hardin’s book How Do You Know? The Economics of Ordinary Knowledge. According to Hardin, this population collapsed over the course of the 19th century in great part due to a strange norm of infant care:
It is believed that a mixture of Fulmar oil and dung was spread on the wound where the umbilical cord was cut loose. The infants commonly died of tetanus soon afterwards. The first known tetanus death was in 1798, the last in 1891. Around the middle of the nineteenth century, eight of every ten children died of tetanus. By the time this perverse pragmatic norm was understood and antiseptic practices were introduced, the population could not recover from the loss of its children (p. 115, citing McLean 1980, pp. 121-124).
Though this norm was bound to decimate the population eventually, it worked its malign power over the course of a whole century, slowly enough that it may have been hard to connect the norm with the results. And, perversely, it seems that the conservatism of the St Kildans was perfectly rational by the argument above: a population that has that kind of infant mortality rate is probably well advised not to try anything that might push them over the edge even quicker, especially as they had no rational basis to think that the Fulmar oil mixed with dung was the root cause of their troubles (rather than, for example, the judgment of God or something of the sort).
Or consider the example of the Norse settlers of Greenland described in Jared Diamond’s Collapse. Living in a tough place to begin with, they were reluctant to change their diet or pastoral practices as the climate turned colder and their livelihood turned ever more precarious, despite having some awareness of alternative practices that could have helped them (the fishing practices of the Inuit native peoples, for example). So they eventually starved and died out. Yet their conservatism was not irrational: given their tough ecological circumstances, changes in subsistence routines were as likely to have proved fatal to them as not, and they could have little certainty that alternative practices would work for them. (Though it is worth noting that part of the problem here was less epistemic than cultural: the Greenland Norse probably defined themselves against the Inuit, and hence could not easily learn from them).
In sum, the resilience argument for conservatism seems most likely to “work” when we are very uncertain about which policies would constitute an improvement on our current circumstances; the potential costs of error are large (we have reason to think that the distribution of risk is “fat tailed” on the “bad” side, to use the economic jargon); and current policies have survived previous changes in circumstances well enough (for appropriate values of “enough”). This does not ensure that such policies are “optimal”; only that they are correlated with not being dead yet (even if we cannot be sure that they caused our survival). And in some circumstances, that seems like a remarkable achievement.
Thursday, November 18, 2010
Why do people underestimate income and wealth inequality?
There was a recent paper in the news by Michael Norton and Dan Ariely arguing that Americans substantially underestimate the degree of income and wealth inequality in the USA. Other papers have found similar results. But why? Crowds do quite well at estimating all sorts of other quantities, yet they fail dramatically on this problem, as Timothy Noah notes in a Slate piece on the Norton and Ariely paper here. More technically, models of information aggregation suggest that if the signals people get about the true distribution of income are unbiased, the errors should cancel out; on this question, they do not. So what is the source of the bias?
Two ideas. First, maybe people estimate the distribution of income and wealth based on signals from their friends and neighbors, and they mostly associate with people like them in terms of income. Since most people also tend to place themselves somewhere in the middle of the distribution (but why? national ideology?), the estimated distribution will be more egalitarian than the true distribution. If someone were to go around publicizing information about the true distribution of income then these estimates might shift a bit, but probably not reliably; people receive signals from their friends and neighbors about the distribution of income all the time, whereas few read or care about econometric estimates of income distribution.
But perhaps a second possibility (not necessarily incompatible with the first) is that people estimate the distribution of income and wealth from signals about consumption (whether or not those they observe are their friends); if consumption inequality is lower than income or wealth inequality (as some people suggest it is), then estimates of income and wealth inequality will also be biased downwards. Again, providing people with information about the true distribution of income is unlikely to change these perceptions reliably, but changes in consumption patterns might (e.g., if the rich engage in more conspicuous consumption).
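The first mechanism, at least, is easy to illustrate. A toy simulation in R (all parameters invented): draw incomes from a lognormal distribution, let each person estimate inequality from a sample drawn mostly from their own income decile, and compare the implied Gini coefficient to the true one:

```r
set.seed(1)

gini <- function(x) {                     # simple Gini coefficient
  x <- sort(x); n <- length(x)
  2 * sum(x * seq_len(n)) / (n * sum(x)) - (n + 1) / n
}

incomes <- rlnorm(10000, meanlog = 10, sdlog = 1)   # invented income distribution
decile  <- cut(incomes, quantile(incomes, 0:10 / 10),
               labels = FALSE, include.lowest = TRUE)

# Each person samples 45 incomes from their own decile and 5 from everyone
perceived <- sapply(sample(10000, 500), function(i) {
  same  <- sample(which(decile == decile[i]), 45)
  other <- sample(seq_along(incomes), 5)
  gini(incomes[c(same, other)])
})

round(c(true = gini(incomes), perceived = mean(perceived)), 2)
# Sampling mostly "people like me" compresses the perceived distribution,
# so the perceived Gini comes out well below the true Gini.
```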
Do either of these accounts sound like plausible explanations? Other ideas?
Bonus query: if Wilkinson and Pickett are right that income inequality causes social and health problems via status competition over consumption, then the fact that people are systematically deluded about the true extent of inequality might be a sort of silver lining; greater awareness of inequality might induce even more social and health problems, though it might also induce more redistributive policies than currently prevail (but I wonder: beliefs about the proper degree of inequality might also adjust downwards with more accurate information, depending on how strong our natural inequality aversion actually is).
Wednesday, November 17, 2010
An anarchist sensibility
Justin Smith has recently written a very interesting series of posts on anarchism as a certain kind of political and moral sensibility (rather than as a political programme). From the latest:
The anarchist prefers to think about the human species as having got by for the vastly greater part of its existence without states and armies (and airports, etc.), and insists on asking, based on the perspective of the longue durée, whether so many things that are taken as inevitable in our age are in fact so. I grew up assuming cars were inevitable; now they strike me as relics from a swiftly waning era. I don't see why at least some of us should not be trying to imagine how we might go about securing a similar fate for armies, police, and prisons. It bears pointing out that whether you believe these institutions are inevitable or not, it is undeniable that they are capable of radical transformation. So if you tell me that it is impossible to imagine a world without prisons, it seems to me a reasonable challenge to your claim to note that the very denotation of the term you are using has shifted drastically, not just over the centuries, but even over the past few decades. The fact that this has been a shift for the worse, from the perspective of any lover of peace and freedom, does not diminish the strength of the challenge.
[...]
Anarchism, then, as I see it, is a certain perspective on the affairs of men. It is realistic and naturalistic, in that it takes human beings as first and foremost a kind of primate, which only in certain circumstances comes to saddle itself with police and armies and so on. It asks whether and how human beings might thrive in the absence of these, and perhaps also hopes that they might someday (again) thrive without them, even if much of what we now value would have to be relinquished, and even as we soberly acknowledge that human pre-history was no idyll either.
I find this a very congenial perspective, not least perhaps because I am not naturally a highly political person and tend to the abstract and theoretical rather than the practical and concrete, despite having ended up teaching political science; my interests when I started university lay in pure mathematics, but turned to political theory by way of Heidegger. (Talk about corrupting the young. Heidegger books should come with a philosophical health warning, like cigarettes). The programmatic aspects of politics (the "what is to be done?" of everyday political life), while obviously important and worth thinking about seriously, just do not hold my interest that much. And some of my recent reading - James C. Scott and Christopher Boehm first and foremost, but also things like Adam Przeworski's wonderful book on the limits of self-government, about which I keep meaning to blog - has tended to reinforce my sense that our thinking about politics is too tied to a particular vision of a world of (well-ordered) states that seems, in its way, as utopian as the anarchist vision of a world without states. And on alternate days I think that if I am to be an idle utopian (which I am, with the emphasis on idle), I kind of prefer the vision without states.
Sunday, November 14, 2010
Trends in income inequality
Preparing for the "policy forum" about Wilkinson and Pickett's "The Spirit Level" I mentioned here, I found Deininger and Squire's 1996 dataset on income inequality, which presents historical estimates of income inequality (in some cases going as far back as 1890) for a large number of countries. They simply looked up every study that tried to measure income inequality in particular countries and put it into their dataset, with some notes regarding the quality of the underlying data and the sources; not every study is of very high quality, but there is sometimes more than one study for a given year and country, and the resulting data probably give a good picture of overall trends. So I thought of looking at the trends in inequality in the countries Wilkinson and Pickett look at, and comparing them to trends in inequality in the communist countries for the period 1960-1993 (where Deininger and Squire have data).
Here's the result:
"Rich countries" means rich today, and includes I think all of the countries that Wilkinson and Pickett look at, plus Hong Kong and Taiwan: Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Greece, Hong Kong, Ireland, Israel, Italy, Japan, Netherlands, New Zealand, Norway, Portugal, Singapore, Spain, Sweden, Switzerland, Taiwan, UK, USA. "Communist" includes Bulgaria, China, Cuba, Czechoslovakia, Hungary, Poland, Romania, Soviet Union, and Yugoslavia. For some of these countries there are only 3-4 estimates, for others there are long series for many years. I simply average all estimates for a country for a given year for rich and communist countries. (This is probably a terrible idea, but I'm just an amateur. I expect correction from irate statisticians, and shall be grateful for it.)
A scatterplot with the point estimates (as well as information about the original sources) is here:
Income inequality has probably gone up since then in many countries (not just rich ones); I haven't tried to merge Deininger and Squire's estimates with later data, but I suspect we would see an upward trend after 1990 or so. (There are some fuller estimates available here; I might try to make a graph with them later).
Interpretations? Perhaps the threat of communism made rich countries engage in more redistribution than otherwise? (Following something like Acemoglu and Robinson's argument: the rich allowed more redistribution in the period 1960-1990 because of the potential threat of communist revolution). Or perhaps there was some feature of the world economy that tended to reduce inequality in advanced capitalist economies, but is now tending to increase it? (Something about finance capital, perhaps?) Pointers?
Thursday, November 11, 2010
The Potato, Food of Anarchists
A fascinating bit from The Art of Not Being Governed that I never got around to blogging when I first read it:
In general, roots and tubers such as yams, sweet potatoes, potatoes, and cassava/manioc/yucca are nearly appropriation-proof. After they ripen, they can be left in the ground for up to two years and dug up piecemeal as needed. There is thus no granary to plunder. If the army or the taxman wants your potatoes, for example, they will have to dig them up one by one. Plagued by crop failures and confiscatory procurement prices for the cultivars recommended by the Burmese military government in the 1980s, many peasants secretly planted sweet potatoes, a crop specifically prohibited. They shifted to sweet potatoes because the crop was easier to conceal and nearly impossible to appropriate. The Irish in the early nineteenth century grew potatoes not only because they provided many calories from the small plots to which farmers were confined but also because they could not be confiscated or burned and, because they were grown in small mounds, an [English!] horseman risked breaking his mount’s leg galloping through the field. Alas for the Irish, they had only a minuscule selection of the genetic diversity of new world potatoes and had come to rely almost exclusively on potatoes and milk for subsistence.
A reliance on root crops, and in particular the potato, can insulate states as well as stateless peoples against the predations of war and appropriation. William McNeill credits the early-eighteenth-century rise of Prussia to the potato. Enemy armies might seize or destroy grain fields, livestock, and aboveground fodder crops, but they were powerless against the lowly potato, a cultivar which Frederick William and Frederick II after him had vigorously promoted. It was the potato that gave Prussia its unique invulnerability to foreign invasion. While a grain-growing population whose granaries and crops were confiscated or destroyed had no choice but to scatter or starve, a tuber-growing peasantry could move back immediately after the military danger had passed and dig up their staple, one meal at a time (pp. 195-196).
Planting potatoes is, for Scott, part of an arsenal of agricultural techniques used by certain peoples for “repelling” the state, including planting a large variety of cultivars (which makes the output of agriculturists less “legible” to the state) and cultivating “crops that will grow on marginal land and at high altitudes” (like maize), crops that require little attention and/or mature quickly, that blend into surrounding vegetation, and that are easily dispersed. "Real-existing" anarchists (at least the kind that decides to retain some form of agriculture) have been potato eaters, apparently.
Clearly planting potatoes does not work on its own to repel the state, however. Prussian peasants were dependent on potatoes, but they certainly did not escape the state (but did they escape it more than similarly situated peasants? Or did social structures in Prussia produce peasant subordination by other mechanisms, not necessarily via state violence? Perhaps the land was too flat?). And Scott does not mention this, but the staple crop in the Inca empire was also the potato (and they also grew other crops, like maize, that are state-repelling in Scott’s view, and happened to be situated in the highlands rather than the lowlands; the Inca empire seems to be a big counterexample to Scott’s general argument). So this sort of claim calls out for testing and further investigation: are peoples with the sort of agriculture that Scott describes less likely to have had states (at least in the past) than peoples that did not, beyond Southeast Asia? Why did the Incas manage to create a state in ecological conditions that seem very unfavourable to it, at least in Scott's view? I suppose that it could be the case that there was less “stateness” in Inca lands than we think, but still, a bit puzzling.
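If one wanted to test this beyond Southeast Asia, the crudest version might look as follows: suppose (hypothetically) an ethnographic dataset coded each society for reliance on root crops and for the presence of a state-level polity. Scott's argument predicts a negative association; the file and variable names below are invented:

```r
soc <- read.csv("ethnographic_atlas.csv")  # hypothetical: root_crops, state, mountainous (all 0/1)

table(soc$root_crops, soc$state)           # raw cross-tabulation

# Logistic regression with terrain as an obvious control
fit <- glm(state ~ root_crops + mountainous, data = soc, family = binomial)
summary(fit)
# A negative coefficient on root_crops would be consistent with Scott's
# argument; the Inca case would presumably show up as a glaring outlier.
```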