Robert Barrington, Professor of Anti-Corruption Practice at the Centre for the Study of Corruption (CSC), looks at the implications of the UK’s application of an algorithm to final-year exam results at schools – and detects some of the primary ingredients for corruption.
Many in the anti-corruption community have been puzzling in recent years over the relationship between tech advances and corruption. There are some clear positives, such as the greater transparency enabled by easier and more instant flows of information, and the arming of anti-money laundering officials with new investigative tools. But our instincts also tell us that something profoundly problematic is going on, even if it is hard to pin down what that is. The tech revolution can have negative consequences for human rights, democracy, government accountability and basic freedoms; these have always been closely related to corruption, but is it an intellectual leap too far to state that the emerging ‘algocracy’, as it is increasingly known, has a tendency to be corrupt?
We have witnessed the unregulated power of large companies whose complexity, innovation and corporate governance prevent traditional accountability; abuses of surveillance tech by governments and law enforcement authorities; very high spending by tech companies on political and regulatory lobbying; and ordinary people feeling helpless in the face of the power that tech gives to both companies and governments. A heady cocktail, but not one that fits traditional definitions of corruption such as ‘the abuse of entrusted power for private gain.’
My colleague at the CSC, Roxana Bratu, has been researching this, and other deep thinkers, such as Luminate’s Martin Tisne, have put in the intellectual hard yards to bring us closer to understanding what we are really dealing with. The UK’s A-level scandal seems to bring a few of these strands together.
For those unfamiliar with the scandal, here is a recap. With pupils unable to sit their final school-year exams, known as A-levels, because of Covid-19, the UK’s constituent governments (education is devolved) decided to use an algorithm to generate the exam results.
The Financial Times described the outcome as an ‘algoshambles.’ The algorithm was designed by the exam regulators, under governmental instruction, to replicate the national patterns of previous years. Everyone involved was well-intentioned. But the outcome was indeed a shambles: elite private schools were favoured; brilliant pupils from historically poorly performing schools were automatically marked down; and many pupils missed out on grades that their track records suggested they would achieve. The national pattern was accurately replicated and the government’s aim apparently achieved, but at the cost of many cases of individual injustice.
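To make the mechanism concrete, here is a minimal sketch of a generic ‘distribution-matching’ grader of the sort described above. It is an illustration under simplified assumptions, not Ofqual’s actual model (which was considerably more elaborate), and the function and data names are invented. The point it demonstrates is structural: if a school’s historical pattern must be replicated, the best pupil at a school that has never produced top grades is capped below them, whatever their individual record.

```python
# Illustrative sketch only: a generic "distribution matching" grader,
# not Ofqual's actual model. Grades run from A* (best) to U (fail).
GRADES = ["A*", "A", "B", "C", "D", "E", "U"]

def standardise(school_history, ranked_pupils):
    """Assign this year's grades so the school's grade distribution
    matches its historical one, ignoring individual attainment.

    school_history: dict mapping grade -> historical share, e.g. {"B": 0.25}
    ranked_pupils:  pupils ordered best-first by teacher-assessed rank
    """
    n = len(ranked_pupils)
    results = {}
    i = 0
    for grade in GRADES:  # walk down from the top grade
        # number of this year's pupils allotted to this grade band
        quota = round(school_history.get(grade, 0.0) * n)
        for pupil in ranked_pupils[i:i + quota]:
            results[pupil] = grade
        i += quota
    for pupil in ranked_pupils[i:]:  # rounding leftovers get the bottom grade
        results[pupil] = GRADES[-1]
    return results

# A school that historically produced no A*s or As: its best pupil this
# year is capped at a B, however brilliant, because the pattern must be
# replicated.
history = {"A*": 0.0, "A": 0.0, "B": 0.25, "C": 0.5, "D": 0.25}
print(standardise(history, ["brilliant_pupil", "p2", "p3", "p4"]))
# -> {'brilliant_pupil': 'B', 'p2': 'C', 'p3': 'C', 'p4': 'D'}
```

Note that the cap falls out of the quota arithmetic alone: nothing in the procedure ever looks at the individual pupil’s own attainment.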
Why talk of corruption in this context? The reason is simple: this gives us an insight into what government by algorithm could look like, and it looks very similar to a corrupt autocracy. Here are five corruption-flavoured takeaways from the UK’s A-level scandal that lead to that conclusion:
- Algorithms make government easier. They work brilliantly for central planning and nationwide implementation of centralised policy. Governments will therefore like them and want more of them. They give governments more control.
- The blanket application of an algorithm, however well-intentioned, inevitably smooths out the nuance of individual cases. Society is made up of individual citizens with individual circumstances, yet an algorithm applied in this way is a blunt, if effective, instrument.
- Inequality can easily be worsened: it may be by accident rather than design, but those who are already privileged or advantaged can see that advantage significantly increased if the algorithm works even marginally in their favour.
- Transparency allows citizens whose lives are affected by algorithms to assess whether they are designed and applied fairly; the absence of transparency leaves the victim helpless, with little recourse except a complex, soul-destroying process of appeals that is expensive in time and, if legal challenges are involved, in money.
- Governments and agencies that use algorithms are, however, very reluctant to be transparent about them; in this case, the secrecy appears to have been deliberate. And even transparency may leave citizens confused as to how to interpret or challenge an algorithm.
That all sounds as though we need good algorithms sensitively applied, not bad ones that have negative consequences. But what – if anything – does this tell us about corruption?
One obvious thing to do when looking for abuses of power is to look at where power is concentrated. In this case, very high levels of power (over final school exam results, potentially determining an entire career, especially if they decide whether a pupil can study medicine or go to a top university) were concentrated in the exam regulator. Of course, that is the case in normal years; but in the Covid-19 situation, the regulator was not simply marking exam papers, it was deciding results on the basis of its own discretionary judgement, under governmental instruction. This was above the norm: a super-concentration of power accompanied by an absence of transparency and limited accountability (only to the Minister whose policy was complicit in the shambles).
This concentration of power created a severe problem, in part because it also represented a transfer of power away from the student sitting an exam. In other words, a large power gap opened up: power was simultaneously taken from citizens and handed to government. Perhaps the most positive thing for the A-level students was that the shambles was so obvious that the algorithm’s findings clearly had to be overturned. Around forty percent of pupils received grades from the algorithm that were lower than the teacher predictions that were eventually used instead. It is easy to envisage a scenario in which the impact fell on perhaps five percent of the cohort, which could have been swept under the carpet. And who knows what other algorithms are already in place, and already operating like this?
What is missing here, with regard to standard definitions of corruption, is the ‘abuse’ of power and any ‘private gain’. On the contrary, the exam regulator (and/or the relevant government minister) seems simply to have made a massive cock-up.
So this case may not be an example of corruption, but it should send a chilling warning. Even governments in mature democracies and advanced economies can play fast and loose with algorithms in areas that have a major impact on the lives of citizens. We have now witnessed the enormous power of these algorithms. Imagine what an abuse of such power would look like; and imagine the possibilities for private gain from an algorithm governing access to, or exclusion from, healthcare, finance, the best jobs, etc. And note also that the culpable minister only performed his U-turn to bypass the algorithm after sustained public and media pressure.
The conclusion has to be that such algorithms are not inevitably corrupt; but the potential for abuse of power and private gain gives a glimpse of what corruption may come to look like in the twenty-first century.
This new long read from Cory Doctorow adds a few more relevant angles to the corruption-algorithm mix, including the links between monopoly power, lobbying and the merging of social media and state surveillance.
https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59
For a better understanding of how corruption really works in practice, see:
https://www.academia.edu/37741482/Corruption_as_a_net_of_influences_links_and_connections