Monday, October 02, 2017

The quantification of power: some thoughts on, and tools for, measuring democracy

(More substantive content soon! This is mostly of interest to political scientists, R users, and people concerned with the measurement of democracy).

Democracy is the government of numbers. No other form of government has historically been as concerned with the quantification of power. Indeed, the idea that power depends on the exact numerical strength of one’s supporters, rather than their qualities, would have seemed absurd for most of human history. And I would guess no other form of government has evoked so much mathematical effort. (Even the recent election here in NZ produced extraordinarily sophisticated Bayesian models to predict the outcome).

And yet because the concept of democracy uneasily mingles what is, what can be, and what ought to be, people often object to the attempt to quantify its degree (or even its existence) in particular places and times. (My students often do!). Democracy does not seem like the kind of thing that would be easily and uncontroversially measurable. On the contrary, because any attempt to measure democracy reflects certain normative standards, it cannot but be controversial, especially since most of its conceptualizations for such purposes tend to reduce it to competitive elections with a wide suffrage, which for a variety of reasons seems like an unacceptably narrow view of the ideal to many people.

This is most obvious when we’re talking about cases like Venezuela, where to take a position on the question – to say “Venezuela is a democracy” or “Venezuela is not a democracy” – is to take sides in a rancorous political dispute. But even to say something relatively uncontroversial, like “the United States is a consolidated democracy”, is fraught with normative implications, since clearly “actually existing democracies” (representative governments with non-Potemkin opposition parties and nearly universal suffrage) are highly imperfect, and to give them top scores in some scale seems to imply that they are better than they truly are. In any case, although most people around the world accept democracy as the only legitimate form of government, they disagree enormously about whether or not a given place is or is not actually democratic, and the degree to which particular practices and institutions “matter” for democracy.

Democracy measurement, then, is a somewhat dubious enterprise. The essential contestability of the concept (is democracy about equality, or about self-government, or about freedom? In what proportions?), as well as good-faith differences of opinion about the sorts of preconditions that are essential for its functioning and the kinds of institutions that actualize its values, makes it difficult to take seriously any single measurement of “democraticness.” And these disagreements are not really resolvable by appeal to the dictionary; they go back to the earliest discussions of democracy as a distinct phenomenon in history.[1]

Yet I still think the attempt to summarize in some disciplined way particular judgments about “democraticness” over time and in space is useful. A democracy measure seems to me to be a numerical crystallization of a political history: a history at a (literal) glance that can be put to use to say more interesting things about the world. One need not agree with any particular conceptualization of democracy, or take any given measure as a normative standard of what democracy should be, to appreciate the possibility of historical comparison across time and space. And because the concept of democracy is inescapably contested, I think the more the merrier: let a hundred measures of democracy bloom, let a thousand schools of thought contend!

I am thus pleased to announce three different R packages (or rather, two new packages and an update to a third) for accessing and manipulating all the democracy datasets I know about:
  1. A package to access the Varieties of Democracy (V-Dem) dataset, version 7.1 (the latest update). The V-Dem dataset is the gold standard of democracy measurement today. It provides indexes targeting multiple conceptualizations of democracy, and an extremely wide variety of indicators that you can use to satisfy basically any measurement need you might have; if you don’t like their particular conceptualizations of democracy, you can build your own. Each country is coded by at least five people, all of whom live there, and the codings are subject to rigorous aggregation and validation procedures. Plus, it is updated annually and covers the entire period 1900-2016, so it’s pretty comprehensive. If you do any serious empirical research that requires you to use measures of democracy, you should seriously consider using V-Dem as your first choice of measure. This package allows you to access the entire V-Dem dataset (more than 3,000 variables, including external ones) directly from R, and to extract combinations of columns easily according to particular criteria (e.g., the section of the codebook where they appear, or their labels). Check it out at https://xmarquez.github.io/vdem, and install it using devtools::install_github("xmarquez/vdem").
  2. A package to download or access most other democracy datasets used in scholarly work from R, including Polity IV, Freedom House, Geddes, Wright, and Frantz’s Autocratic Regimes dataset, the Worldwide Governance Indicators’ “Voice and Accountability” index, the PACL/ACLP/DD dataset, and many others, including some which are now of merely historical interest. (There are 32 of them in the package). The package automates the process of putting these datasets into a standard country-year format, assigning appropriate country codes, and the like, and makes it easy to access some less well-known democracy datasets. (Mostly I created it because I’ve spent hundreds of hours tediously repeating these operations!). Check it out at https://xmarquez.github.io/democracyData, and install it using devtools::install_github("xmarquez/democracyData").
  3. Finally, I’ve also updated my package to replicate and extend the Unified Democracy scores. (I first described this package on this blog). This produces a latent variable index from multiple democracy measures, based on methods discussed by Pemstein, Meserve, and Melton in 2010; the most recent update of the package extends these scores up to 2016 and incorporates revisions and updates of a variety of datasets, including Polity IV, Freedom House, and V-Dem. It also includes improvements to the functions used to calculate UDS-style models. Check it out at https://xmarquez.github.io/QuickUDS, and install it using devtools::install_github("xmarquez/QuickUDS"). (A brief usage sketch for all three packages follows this list.)
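For concreteness, here is a minimal sketch of how one might install and start working with these three packages from R. The install_github() calls are the ones given above; the other function names used below (extract_vdem(), download_fh(), democracy_scores()) are assumptions about each package’s interface rather than documented facts, so check each package’s site for the exact calls and arguments.

  # A minimal sketch, not a definitive workflow. Function names other than
  # install_github() are assumptions about each package's interface.
  # install.packages("devtools")   # if devtools is not already installed
  devtools::install_github("xmarquez/vdem")
  devtools::install_github("xmarquez/democracyData")
  devtools::install_github("xmarquez/QuickUDS")

  library(vdem)
  library(democracyData)
  library(QuickUDS)

  # vdem: extract a subset of V-Dem variables by codebook section
  # (assumed helper and argument name)
  vdem_subset <- extract_vdem(section_number = 2)

  # democracyData: download Freedom House in tidy country-year format
  # (assumed helper; the package also bundles many prepared datasets)
  fh <- download_fh()

  # QuickUDS: fit a UDS-style latent variable model to several measures
  # (assumed helper; the real call and its arguments may well differ)
  # uds <- democracy_scores(...)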
Feedback, contributors, and pull requests for any of these packages are welcome; I hope to be able to submit at least two of these packages to CRAN in the near future, so if you use them and encounter any problems, let me know. (The V-Dem package is too large for CRAN and will probably never be there).

In what follows, a short discussion of the characteristics of these measures, probably of most interest to people who already use them.

Some general characteristics of democracy measures

The numerical measurement of democracy is about fifty years old. The earliest comprehensive measures of democracy – the Polity project, Freedom House’s Freedom in the World index (first known as the Gastil index), Kenneth Bollen’s and Tatu Vanhanen’s measures of democracy – go back to the late 1960s and early 1970s. (Vanhanen, who’s been at this business longer than most, identifies some earlier attempts to measure democracy numerically, some going back to the early 1950s, but these were pretty small and unsystematic). There are now 32 different accessible datasets containing some measure of democracy, most developed in the first decade of this century (at least as far as I know):


Most of these measures tend to be highly but not perfectly correlated, reflecting differences in conceptualization as well as varying judgments about the political situation of specific countries and periods:

Yet the high overall level of correlation among these measures masks substantial variation over time:

There is a lot more agreement among measures of democracy after the 1920s than before, simply because it is harder to make judgments about democracy for the more distant past (how much should class-stratified male suffrage count? etc.), though if you go back far enough it becomes reasonably easy again (since there are no democracies past a certain point). In any case, only 13 of the 32 datasets measuring democracy code countries during the 19th century, and only 8 of these make any effort to be comprehensive (mostly because they follow the Polity IV panel or modify the Polity IV scores in some way).

These correlations among measures also mask substantial variation in space:

In other words, while on average the pairwise correlation between different measures of democracy within individual country histories is quite high (0.7), for a substantial minority of countries correlations can be much lower, or even negative. These numbers are better if we only look at the degree of agreement among measures from large, well-resourced projects, to be sure, but they are still by no means reassuring if we are looking for consensus:


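As a rough illustration of how this kind of within-country agreement can be computed, here is a minimal sketch. It assumes a hypothetical long-format data frame called scores, with columns country, year, measure, and value; none of the packages above produces an object with exactly this name or shape.

  # A minimal sketch of within-country agreement between democracy measures.
  # `scores` is a hypothetical long-format data frame with columns
  # country, year, measure, and value.
  library(tidyr)

  wide <- pivot_wider(scores, names_from = measure, values_from = value)

  # Mean pairwise correlation among measures within each country's history
  mean_cor_by_country <- sapply(split(wide, wide$country), function(d) {
    m <- cor(d[, setdiff(names(d), c("country", "year"))],
             use = "pairwise.complete.obs")
    mean(m[lower.tri(m)], na.rm = TRUE)
  })

  # Average within-country agreement (the ~0.7 figure mentioned above)
  mean(mean_cor_by_country, na.rm = TRUE)

  # Countries where the measures agree least (or even disagree)
  head(sort(mean_cor_by_country))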
Most democracy measurement projects are actually variants of these large-scale efforts; a large number of them take Polity, PACL/ACLP, or Freedom House as starting points to develop their own measures. If we take their correlations as measures of similarity, we can cluster the indexes hierarchically to show these quasi-genealogical family resemblances:


At the top, we have the “Polity cluster” – measures of democracy that mostly just modify Polity, including the Participation-Enhanced Polity Scores (PEPS), the PITF indicators (based on subcomponents of Polity), and the Polity scores themselves. These are closely related to some calculated indexes that attempt to weigh multiple factors in the construction of a measure of democracy but mostly end up giving weight to the contestability of power and civil liberties, including the Unified Democracy Scores and my extension of them, Freedom House, and Coppedge, Alvarez, and Maldonado’s “contestation dimension” (derived from a principal components analysis of a number of democracy measures).

In the middle we have a cluster of measures that attempt to weigh participation and contestation more equally (LIED, the V-Dem Additive Polyarchy Index, Vanhanen’s Index of Democratization, etc.), and then a cluster of measures that derive from PACL’s attempt to develop a dichotomous measure of democracy (including Boix, Miller, and Rosato’s extension, Geddes, Wright, and Frantz’s dataset of autocratic regimes, and several other academic datasets). Then there is another cluster of measures that give more weight to formal inclusion (e.g., Doorenspleet, and Bernhard, Nordstrom, and Reenock, both of which make democracy depend on the existence of universal suffrage), a cluster of V-Dem indexes (which weigh multiple factors, including formal inclusiveness, to come up with a number), and finally, at the bottom, measures that simply gauge the degree of participation (Vanhanen’s index of participation and the “inclusion dimension” calculated by Coppedge, Alvarez, and Maldonado).
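The dendrogram itself is straightforward to approximate in base R once you have a correlation matrix. Here is a minimal sketch, reusing the hypothetical wide country-year data frame from the sketch above; the clustering method and distance used for the actual figure may differ.

  # A minimal sketch of correlation-based hierarchical clustering, reusing the
  # hypothetical `wide` data frame from the previous sketch (one column per measure).
  measure_cols <- setdiff(names(wide), c("country", "year"))

  # Pairwise correlations across all overlapping country-years
  cor_matrix <- cor(wide[, measure_cols], use = "pairwise.complete.obs")

  # Convert similarity into distance so that highly correlated measures cluster together
  measure_dist <- as.dist(1 - cor_matrix)

  # Agglomerative hierarchical clustering and a dendrogram of the "families"
  fit <- hclust(measure_dist, method = "average")
  plot(fit, main = "Family resemblances among democracy measures",
       xlab = "", sub = "")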

There is a lot more that one could show here, but this is probably enough for now; I hope these tools are useful to others! All the code for this post is available in this repository.

[1] On the other hand, unlike other controversial numerical measures of social phenomena, like university rankings or GDP per capita, governments and other organizations do not spend much time trying to “game” measures of democracy, because few people other than a small number of political scientists care, and little money is at stake. This is probably a good thing, on balance.

Monday, February 13, 2017

Propaganda as Literature: A Distant Reading of the Korean Central News Agency's Headlines

A rather long post on reading the Korean Central News Agency’s headlines, which I am not putting directly on this blog because it contains interactive graphs that I cannot figure out how to embed here, but which look nice on GitHub. North Korean politics plus lots of data art, including baroque Sankey flow diagrams!

See it here.

Saturday, January 28, 2017

Big Lies at the Monkey Cage

No, not that post. Just me talking about the uses of lies in politics, which may interest some readers here.

Posts at the Monkey Cage are highly constrained in terms of length and style, so I may as well use this blog for some additional notes and clarifications.

Mythical Lies. One point that perhaps could be stressed with respect to the political uses of myth is that a myth’s acceptance always depends on the persuasiveness of alternative narratives. Moreover, it seems to me that the acceptance of myths usually hinges on taking particular narratives “seriously but not literally,” as was sometimes said of Trump supporters (and could, of course, be said of many other people).

For example, the appeal of the Soviet socialist myth in the 1930s did not hinge on its general accuracy or the degree to which practice lived up to its internal standards, but on its articulation of values that seemed plainly superior to the ones on offer by the major alternative narratives (liberal capitalist or fascist). Not everyone may have felt “dizzy with success” in the 1930s, but little that was credible could be said for capitalism at the time (a lack of credibility reinforced by the impossibility of travel and centralized control of information, of course, but not only by that). Here’s Stephen Kotkin in his magisterial Magnetic Mountain: Stalinism as a Civilization:
The antagonism between socialism and capitalism, made that much more pronounced by the Great Depression, was central not only to the definition of what socialism turned out to be, but also to the mind-set of the 1930s that accompanied socialism’s construction and appreciation. This antagonism helps explain why no matter how substantial the differences between rhetoric and practice or intentions and outcome sometimes became, people could still maintain a fundamental faith in the fact of socialism’s existence in the USSR and in that system’s inherent superiority. This remained true, moreover, despite the Soviet regime’s manifest despotism and frequent resort to coercion and intimidation. Simply put, a rejection of Soviet socialism appeared to imply a return to capitalism, with its many deficiencies and all-encompassing crisis— a turn of events that was then unthinkable. (Magnetic Mountain, pp. 153-54).
On one reading of Soviet history, the valence of the capitalist and socialist myths eventually reversed (perhaps by the late 1970s? Or later?): capitalism came to seem fundamentally superior to many Soviet citizens, despite its problems (which, incidentally, were constantly pointed out by Soviet propaganda), while Soviet socialism came to appear unworkable and stagnant (despite the material advantages that many Soviet citizens enjoyed, including great employment stability). But this reversal in valence had less to do with specific facts (popular Soviet views of capitalism in the early 90s could be remarkably misinformed) than with an overall loss of trust in the values Soviet myths articulated, reinforced by decades of failed prophecy about the coming abundance. (Perhaps best conceptualized as a cumulative reputational cost of lying?).

Strategic Lies. One thing I did not emphasize in the piece is that people may of course be predisposed to believe lies that accord with their deep-seated identities. Everyone has their own favorite examples of this, though I am reluctant to speak of “belief” in some of the more extreme cases. (See, e.g., this post about the differential predispositions of voters to identify the bigger crowd in two pictures of the inauguration; perhaps it’s better to speak here of people giving the finger to the interviewers, reasserting their partisan identities). But by the same token, these lies do not work for groups whose identities predispose them to reject the message or the messenger (e.g., Democrats, in the question about inauguration pictures).

So “identity-compatible lies” (anyone have a better term?) should be understood as ways to mobilize people, not necessarily (or only) to deceive them, which puts them in the same functional category as “loyalty lies” below. From a tactical standpoint, the question then is about the marginal persuasive effect of such lies: does telling a big lie that will be embraced by supporters and rejected by non-supporters increase or reduce the chances that an uncommitted person will believe you?

I’m not sure there’s an obvious answer to this question that is valid for most situations. In any case, it seems to me that, over time, the marginal persuasive effect should decrease, and even become negative (as seems to be happening in Venezuela, where in any case most people who are not Chavistas can and do simply “exit” government propaganda by changing the channel or turning off the TV, and the remaining Chavistas become increasingly subject to cognitive dissonance: how come, after all the “successes” proclaimed by the government in the economic war, the other side is still winning?).

Loyalty Lies. The idea that baldfaced lies can help cement the loyalty of the members of a ruling group when trust is scarce seems to be becoming commonplace; both Tyler Cowen and Matthew Yglesias provide good analyses of how this may work within the context of the Trump administration. (Cowen is also interesting on what I would call “lies as vagueness” and their function in maintaining flexibility within coalitions, which I didn’t mention, but which are obviously related to this and this).

But I specifically wanted to plug a really nice paper by Schedler and Hoffmann (linked, but not mentioned, in my Monkey Cage piece) that stresses the need to “dramatize” unity in authoritarian environments in order to deter challengers during times of crisis. Their key example is the Cuban transition of power from Fidel to Raul Castro (2006-2011) – a situation which saw the need for supposedly “liberal” members of the Cuban regime to show convincingly that they were in fact “on the same page” as everyone else in the elite. And the same need to dramatize unity in a crisis seems to me to be driving the apparent lunacy of some of the statements by Venezuelan officials (check out Hugo Perez Hernaiz’s Venezuelan Conspiracy Theories Monitor for a sampling).

I suspect that the need to dramatize loyalty within a coalition (by “staying on the same page” and thus saying only the latest lie du jour) may conflict with the imperatives of strategic lying (saying things that are credible to the larger groups). Here the tradeoff is about the relative value of support outside vs. support within the ruling group; the less you depend on the former, the less it matters whether elite statements are believed "outside."