Another year, another CPI. The world’s most widely watched measure of public sector corruption was published on 3 December, and the usual suspects were in pretty much their usual places. Denmark (92 out of 100) pipped perennial rivals New Zealand (91) and Finland (89) for the top spot, whilst North Korea and Somalia once more share the dubious honour of coming joint 174th and therefore last. A number of countries can be celebrated as apparent success stories; the Ivory Coast, Egypt, and Saint Vincent and the Grenadines, for example, all improved on their 2013 performance by five points, whilst Afghanistan, Jordan, Mali and Swaziland added four points to their previous totals. At the other end of the spectrum, Recep Tayyip Erdoğan’s Turkey dropped five points (from 50 to 45, and with that from 53rd to 64th), whilst Angola, Malawi and most notably China – despite an ongoing and high-profile anti-corruption campaign from leader Xi Jinping (see here and here for more on that) – dropped four points.
So much for the nuts and bolts of the results, which can be dissected in all their glory here. What can and should we read out of all this? In order to answer that question it is worth stepping back and remembering how the index works and what TI is trying (successfully and unsuccessfully) to do with it. The CPI is a composite index and a variety – 12 in 2014 – of data sources are used to create what is effectively a poll of polls on perceptions of public sector corruption in a given country. TI provides a detailed account of where its data comes from (see here) and also how it uses it. The CPI was first published in 1995 when it included 41 countries, with New Zealand achieving the best score (i.e. nearest to 10, as it was then) and Indonesia the worst (nearest to 0). By 2014 the CPI had expanded to 175 countries. The data produced is used, in varying ways and for varying purposes, by journalists, other anti-corruption organisations and not least politicians, and the CPI has undoubtedly developed into the key brand name in the study of corruption worldwide.
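To make the "poll of polls" idea concrete, here is a simplified sketch of how a composite score can be aggregated. All the numbers and source scales below are invented for illustration, and TI's actual procedure (since 2012) involves a standardisation step rather than a simple min-max rescaling; this only shows the general principle of rescaling disparate sources to a common 0–100 scale and averaging them, with a minimum-sources rule before a country is scored at all.

```python
from statistics import mean

def rescale(score, source_min, source_max):
    """Map a raw source score onto a common 0-100 scale (0 = most corrupt)."""
    return 100 * (score - source_min) / (source_max - source_min)

def composite_score(source_scores, min_sources=3):
    """Average the rescaled source scores for one country.

    Like the CPI, refuse to publish a score unless a minimum number
    of independent sources cover the country."""
    if len(source_scores) < min_sources:
        return None  # too few sources to score this country
    return round(mean(source_scores), 1)

# Four invented source readings for one hypothetical country,
# each on its own native scale:
raw = [
    rescale(6.2, 0, 10),    # an expert assessment scored 0-10
    rescale(58, 0, 100),    # a business survey scored 0-100
    rescale(3.1, 1, 5),     # a risk rating scored 1-5
    rescale(61, 0, 100),    # another 0-100 survey
]
print(composite_score(raw))        # the published country score
print(composite_score(raw[:2]))    # None: only two sources
```

Even this toy version makes Andersson and Heywood's "false accuracy" point visible: a one-decimal composite of four noisy surveys looks far more precise than it really is.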
The CPI’s prominence has certainly not shielded it from criticism. Indeed, criticising the methodology that underpins the CPI has become commonplace. Staffan Andersson and Paul Heywood, for example, see a number of basic problems (see here). Firstly, the CPI measures perceptions of corruption rather than corruption itself. Secondly, there are fundamental definitional problems that should leave us very unsure of what respondents actually understand the term corruption to mean. Indeed, the terms bribery and corruption frequently appear to be used interchangeably, as if they were one and the same thing. Thirdly, the CPI suffers from ‘false accuracy’, and there is no way of knowing what the real difference between closely grouped scores is in practice. A difference, in other words, of just a few points can leave countries a fair distance apart in the league table, yet we cannot be at all sure that these differences reflect what is happening in the real world. Finally, responses to the various surveys are very likely to be shaped – whether directly or indirectly – by the assumptions and attitudes of the western business community, for the simple reason that the majority of people asked have roots in this particular milieu.
Other analysts have also not been slow in coming forward with their criticisms. Steve Sampson, speaking for many in the development studies community, is sceptical of what he regards as “corruption becoming a scientific concept”, as measurement tools like the CPI can, and have, easily become objects of political manipulation (see here for Sampson’s at times biting critique). Even fellow quantifiers such as Theresa Thompson and Anwar Shah from the World Bank have criticised some of the statistical techniques that TI has employed (see here). They leave no one in any doubt as to how grave they think the CPI’s methodological shortcomings are when they state that “closer scrutiny of the methodology … raises serious doubts about the usefulness of aggregated measures of corruption” and “potential bias introduced by measurement errors lead to the conclusion that these measures are unlikely to be reliable, especially when employed in econometric analyses” (Thompson and Shah, 2005, pp. 8-9). Stephen Knack’s careful dissection of the CPI (here) also makes uncomfortable reading for TI defenders; he argues, for example, that scores are frequently not based on the same set of sources that were used for that country in the previous year. This, he claims, is evidence of the unreliability of scores even within one country, let alone on a cross-national basis. He also raises further significant issues about the independence – in a statistical sense – of the data used, claiming that many of the ‘statistically significant’ changes that TI claims to have uncovered would not in reality be so if “appropriate corrections for interdependence” had been made.
For its part, TI has certainly tried its level best both to be open about the methodological shortcomings of the CPI (as well as its other corruption indices) and to adjust for them wherever possible. The founder of the CPI, Johann Graf Lambsdorff, for example, is careful to acknowledge some of the methodological issues inherent in all composite indicators, and he is always careful to describe changes in country scores from year to year as changes in perceived corruption rather than in actual corruption levels. TI has also tacitly admitted that the CPI has its limitations by the very fact that it has developed a whole host of other indices – such as the Bribe Payers Index and the Global Corruption Barometer – looking at both the perceptions and experiences of specific groups of stakeholders (ranging from businesspeople to households). One of TI’s founders, Jeremy Pope, has been rather more explicit, claiming that “the CPI’s major usefulness is in the past” and that TI has to be “a lot more sophisticated these days” (quoted in Andersson and Heywood, 2009, p.755).
And yet, all these criticisms notwithstanding, the CPI has done one indisputable thing: it has put the issues of corruption and anti-corruption well and truly on the policy map. Indeed, as Andersson and Heywood astutely observe:
“We should not underplay its significance in the fight against corruption: its value goes beyond the stimulation of research activity, since the publication of the CPI each autumn has generated widespread media interest across the world and contributed to galvanising international anti-corruption initiatives, such as those sponsored by the World Bank and the OECD” (Andersson and Heywood, 2009, p.747).
Even staunch critics of the quantification of corruption have begrudgingly admitted that “whatever its limitations” the development of the CPI has “undoubtedly done much to promote the anti-corruption agenda” (see here). It is also doubtful that any of the more nuanced second and third generation indices that both TI itself and other organisations have developed would have seen the light of day if the CPI hadn’t existed before them.
So, with all that in mind, we should be careful not to read too much into the data that has been produced. But we should also remember that cynicism gets us nowhere, and that TI, for all its sins, continues to push the analysis of corruption to the forefront of our thinking. That alone should be reason enough to cut the CPI just a little slack.
Just as an FYI, Transparency International changed its CPI calculation methodology in 2012, addressing some of the points above. More on this here: http://www.transparency.org/cpi2012/in_detail (see FAQ7, and the independent review downloadable on the right hand side). TI also looks at some concerns in this blogpost: http://blog.transparency.org/2014/12/03/putting-public-sector-corruption-on-the-map/