Wednesday, March 09, 2011

Crowdsourcing a Democracy Index

(Sorry for the recent neglect of the blog. I just started teaching again, and that tends to absorb all my energy. So here’s a teaching-related post on something I’ve been doing in one of my classes).

One of the things my students are doing in my “Dictatorships and Revolutions” class this term is constructing a democracy index/regime classification like those produced by Freedom House, the Polity project, or the DD dataset of political regimes I’ve used in this blog in the past (see, e.g., here and here).[1] We are looking at examples of how different regime classifications can be constructed, discussing some of their problems, and then collectively constructing a set of criteria for classification, which we will ultimately use to code all 192 or so countries in the world at intervals of about five years over a couple of decades. (If you are interested in the details of how the exercise is organized, e-mail me; the whole thing is still quite experimental, so I would not mind some feedback. It’s turning out to be a bit complex.) Since there are over 100 students in the class (around 120, in fact), we can achieve full coverage (and even some overlap) if each student codes just 2 countries at various points in time; I am planning to assign 4-5 countries to each student, so that each country gets at least 2 coders (a quick sketch of this arithmetic follows below). We will then examine how our crowdsourced index or regime classification compares to some of the other indexes and regime classifications.
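For what it’s worth, the coverage arithmetic is easy to check with a few lines of code. Here is a minimal sketch in Python, with made-up student and country names (this is not the actual assignment procedure, just an illustration of why 4-5 countries per student is more than enough):

```python
import itertools

# Hypothetical setup: 120 students, 192 countries, 5 countries per student.
students = [f"student_{i}" for i in range(120)]
countries = [f"country_{j}" for j in range(192)]
COUNTRIES_PER_STUDENT = 5

# Round-robin assignment: cycling through the country list spreads the
# 600 coding slots (120 students x 5 countries) evenly, so every country
# ends up with 3 or 4 coders (600 / 192 is about 3.1).
country_cycle = itertools.cycle(countries)
assignment = {
    s: [next(country_cycle) for _ in range(COUNTRIES_PER_STUDENT)]
    for s in students
}

# Sanity check: every country should have at least two coders.
coders = {c: 0 for c in countries}
for assigned in assignment.values():
    for c in assigned:
        coders[c] += 1
assert min(coders.values()) >= 2
print(min(coders.values()), max(coders.values()))  # prints: 3 4
```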

As a warm-up exercise, I set up a democracy ranking website using allourideas.org, which I learned about some time ago via the good orgtheory people. Basically, this is a webpage where you are presented with a comparison between two countries and asked which one is more democratic (you can answer “I don’t know,” and give a reason). The results of the pairwise comparisons can be used to generate a ranking, where each country’s score represents something like the probability that it would be judged more democratic than a randomly selected country. (But rather than read this explanation, why not go play with it? It can be addictive, and it’s basically self-explanatory once you see it.) I asked the students to go to this website and vote in the first class of the term; many of them did, casting an average of about 14 votes (i.e., 14 comparisons) each. I didn’t know exactly what to expect, but I was sort of hoping for a “wisdom of crowds” effect. And there is, indeed, something like that, but the effect is small.
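For those curious about the mechanics: the Allourideas scores are Bayesian estimates, but the basic quantity – the chance of prevailing in a comparison against a randomly selected country – can be roughly approximated from raw vote counts. A minimal sketch in Python, with a made-up vote log (this is not Allourideas’ actual algorithm, just the flavor of it):

```python
from collections import defaultdict

# Hypothetical vote log: (winner, loser) pairs from "which is more
# democratic?" comparisons; "I don't know" answers are excluded here.
votes = [
    ("New Zealand", "North Korea"),
    ("Australia", "Armenia"),
    ("New Zealand", "Australia"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Naive score: share of comparisons won, with add-one smoothing so that
# rarely seen countries don't get extreme 0% or 100% scores. Allourideas
# instead estimates the probability of beating a *randomly selected*
# opponent, which corrects for who a country happened to be paired with.
score = {c: 100 * (wins[c] + 1) / (appearances[c] + 2) for c in appearances}

for country, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {s:.0f}")
```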
Here’s a graph (link for full screen):

[Figure: combined Freedom House scores plotted against the crowdsourced ranking; bubble size reflects the class’s uncertainty]
The y axis represents the sum of Freedom House’s political rights and civil liberties scores: 2 is most free, 14 least free. The x axis represents the “ranking” of the countries as calculated by the Allourideas software, ranging from 4 (North Korea has only a 4% chance of prevailing in a “more democratic” comparison against a randomly selected country) to 93 (Australia; New Zealand scored 92, and was for a time in first position, which is to be expected from a group from New Zealand; see the complete ranking here). Note that these numbers do not reflect the judgments of “individual” students, but the calculated probability of prevailing in a comparison against a randomly selected country, given the information available from previous pairwise comparisons. (No student or set of students actually “ranked” North Korea last or Australia first.) The size of the bubbles is proportional to the class’s subjective “uncertainty”: basically, the number of times a country was involved in an “I don’t know” answer divided by the total number of times the country appeared in any comparison. There were 1,250 votes submitted, but since there are 192 countries, there are 18,336 possible distinct pairings (192 choose 2; 36,672 if you count the order of presentation separately), which means that a relatively large number of potential comparisons never appeared. (Which is part of the reason I am posting this here – I want to see what happens if lots of people engage in this informal ranking exercise.)
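Both the uncertainty measure and the count of possible pairings are straightforward to compute. A sketch, again in Python with a hypothetical vote log (field names are made up):

```python
import math
from collections import defaultdict

# Hypothetical vote log: (country_a, country_b, response), where the
# response is "a", "b", or "idk" for an "I don't know" answer.
votes = [
    ("Ghana", "Armenia", "idk"),
    ("Australia", "North Korea", "a"),
    ("Ghana", "Gabon", "idk"),
]

appearances = defaultdict(int)
idk_count = defaultdict(int)
for a, b, response in votes:
    for country in (a, b):
        appearances[country] += 1
        if response == "idk":
            idk_count[country] += 1

# Subjective "uncertainty": the share of a country's appearances that
# ended in an "I don't know" answer (this drives the bubble sizes).
uncertainty = {c: idk_count[c] / appearances[c] for c in appearances}

# 192 countries yield 18,336 distinct unordered pairs.
print(math.comb(192, 2))  # prints: 18336
```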

There’s clearly a correlation between the rating by Freedom House and the informal ranking generated by the students’ pairwise comparisons – about -0.62 (negative because lower Freedom House scores indicate more freedom), which is pretty respectable. (Some of the correlations between Freedom House and other measures of democracy are not much higher than this.) A simple regression of the Freedom House ratings on the student-generated rankings gives a coefficient of -0.11 (highly significant, not that that matters much in this context), which means that an increase of 10 points in the student-generated ranking is associated with a decrease of about 1 point in the combined Freedom House PR+CL score. (A more thorough analysis could be undertaken, but I don’t feel qualified to do it; I’ve put up the data here for anyone who is interested in doing some more exploration, and will update it later if enough other people participate in the ranking exercise.)
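For anyone who wants to replicate these two numbers, the computation is simple. A sketch assuming a hypothetical CSV file named democracy_rankings.csv with fh_score and ranking columns (the actual data file is the one linked above, whatever its real column names are):

```python
import numpy as np
import pandas as pd

# Hypothetical layout: one row per country, with the combined Freedom
# House PR+CL score (2-14) and the Allourideas ranking (roughly 0-100).
df = pd.read_csv("democracy_rankings.csv")
fh = df["fh_score"].to_numpy(dtype=float)
rank = df["ranking"].to_numpy(dtype=float)

# Pearson correlation (about -0.62 for the class data).
r = np.corrcoef(fh, rank)[0, 1]

# OLS slope of the FH score on the ranking: about -0.11, i.e., 10 more
# ranking points go with roughly a 1-point drop in the combined score.
slope, intercept = np.polyfit(rank, fh, deg=1)

print(f"correlation: {r:.2f}, slope: {slope:.2f}, intercept: {intercept:.1f}")
```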


Most of the “obvious” cases appear at the extremes – developed, well-known democracies get a high ranking, while obvious dictatorships mostly get a low ranking. Many of the countries that seem to be misplaced, however, are either small and little talked about in the news or not especially well known to students; see, for example, Ghana (ranked lower than it should be, if Freedom House is right) and Armenia (ranked higher than it should be, if Freedom House is right). Would this change if more people contributed to the ranking, especially people from a variety of countries around the world? (I know this blog gets a small readership from a number of unlikely countries – could my kind readers send this link around to people who might be interested, e.g., students?) Here's a heatmap of the student-generated rankings (darker is more democratic):
[Figure: world heatmap of the student-generated rankings]
The map seems reasonable enough to the naked eye. It seems that even a simple informal ranking exercise can be a reasonable approximation to a professional ranking (like Freedom House’s) if the people doing the ranking have some knowledge of the countries being compared, so I would expect that more participants would move the informal ranking closer to Freedom House’s measure. (Maybe this is a more cost-effective method of generating a democracy index – “the people’s democracy index,” as it were.) But it could also be that the ranking would diverge further from Freedom House’s as people from diverse countries, with different understandings of democracy, participated. Perhaps global opinion about which countries count as most democratic would diverge sharply from the opinions of Freedom House’s expert coders. Or perhaps the ranking would be affected by national biases – people from particular countries might tend to rank their own country higher or lower than a more “objective” ranking would. It would be interesting to know – so it would be great if you could spread the word by sending this link around!

(I have also wondered whether this method would work for generating “historic” data on democracy. But the obvious way of doing this would introduce many very unlikely or difficult comparisons – e.g., could we meaningfully compare democracy levels in 1964 Gambia vs. 1980 Angola? – and the less obvious way would require setting up a separate website for each distinct year.)


[1] Technically, an index of democracy and a regime classification are two different things. The Economist and Freedom House produce indexes of democracy/freedom – aggregated measures of the degree of democracy or freedom in a given country at a particular point in time, expressed on a numerical scale. A regime classification instead treats regimes as discrete types, and attempts to determine whether a given country should be categorized as one kind or another.
