Of bits, bugs and responsibility in the public square


Usually, I wake to the sound of the blue jays in our spacious backyard. This morning, I woke to the ‘priority alert’ from my phone, indicating an urgent message. It was 0515 – early even by the standards of epidemiologists in these times. My friend’s message was terse but foreboding – “look at this”, followed by a link to Github.

At the end of the link was the codebase promised for weeks by Neil Ferguson, the computational epidemiologist who advised the UK government on COVID-19 related measures until his recent resignation. I have previously been a staunch defender of Ferguson’s approach – his model was (and is) theoretically sound, and probably as good as such models will ever get. Prediction, of course, is difficult. Especially when it comes to the future, as Niels Bohr is credited with saying. Using a method that simulates populations in cells and microcells, it combines the granularity and stochastics of agent-based models without requiring the resources typical for agent-based simulation. His model, versions of which were used in previous outbreaks, has been the de facto gold standard for the UK government.
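For readers outside the field, a toy sketch may help make that description concrete. This is my own illustrative simplification, not Ferguson’s actual code – every name, parameter and number in it is hypothetical. The point is only this: each cell tracks aggregate susceptible/infected/recovered counts and updates them by random draws, which is what buys agent-like stochasticity without the bookkeeping of one object per person:

```python
import random

def step(cells, beta=0.3, gamma=0.1, rng=random.Random(42)):
    """Advance each (S, I, R) cell by one day using per-person Bernoulli draws."""
    out = []
    for s, i, r in cells:
        n = s + i + r
        # Chance that a given susceptible is infected this step, given i infecteds.
        p_inf = 1 - (1 - beta / n) ** i if n else 0.0
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        recov = sum(rng.random() < gamma for _ in range(i))
        out.append((s - new_inf, i + new_inf - recov, r + recov))
    return out

# Two cells of 1,000 people each; only the first is seeded with infection.
cells = [(990, 10, 0), (1000, 0, 0)]
for _ in range(100):
    cells = step(cells)
```

A real cell-based model would, of course, also couple the cells through movement and contact between them – this sketch deliberately omits that, which is why the second cell never sees a single case.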

And the code, now that it can be examined, raises some extremely serious questions. I would like to explore some of these issues, but will not go into a detailed analysis of the code, for one reason – the code eventually (and reluctantly) shared by Imperial College is almost certainly not the code used to generate forecasts for HM Government. We know that at some point, Github and even John Carmack (yes, that John Carmack!) have been involved in cleaning up some of the quality issues. Imperial, meanwhile, obstinately resists releasing the original code – both via Github and under a valid FOIA request that Imperial’s lawyers are entirely misinterpreting.[1] We can, however, safely assume from the calibre of the people who have worked on the improved version that whatever was there before was worse.

The quality issue

First of all, the elephant in the room: code quality. It is very difficult to look at the Ferguson code with any understanding of software engineering and conclude that this is good, or even tolerable. Neil Ferguson himself attempts a very thin apologia for this:

That, sir, is not a feature. It’s not even a bug. It’s somewhere between negligence and unintentional but grave scientific misconduct.

For those who are not in the computational fields: “my code is too complicated for you to get it” is not an acceptable excuse. It is the duty of everyone who releases code to document it – within the codebase, outside it, or a combination of the two. Greater minds than Neil Ferguson (with all due respect) have a tough enough time navigating a large codebase, and especially where you have collaborators, it is not unusual to need a second or two to remember what a particular function is doing or what its arguments should look like. Or, to put it more bluntly: for thirteen years, taxpayer funding from the MRC went to Ferguson and his team, and all it produced was code that violated one of the most fundamental precepts of good software development – intelligibility.
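To make the point concrete, here is the kind of minimal intelligibility being asked for – a hypothetical few-line example of my own, not drawn from the Imperial codebase: a descriptive name, a documented contract, and a loud failure mode rather than a silent one.

```python
def attack_rate(ever_infected: int, population: int) -> float:
    """Return the cumulative attack rate: the share of a population
    ever infected over the course of an outbreak.

    Raises ValueError for a non-positive population, so the caller
    finds out immediately rather than chasing a silent division error.
    """
    if population <= 0:
        raise ValueError("population must be positive")
    return ever_infected / population
```

Nothing here is clever, and that is the point: a stranger – or the author, thirteen years later – can tell at a glance what the function does, what it expects, and how it fails.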

The policy issue

When you write code, you should always do so as if your life depended on it. For those of us modelling infectious diseases, lives are so routinely at stake that it is sometimes possible to lose track of that fact. I don’t, of course, know whether that is what happened here, but I doubt anybody would want to trust their lives to thousands of lines of cobbled-together code.

Yet for some reason, the UK government treated Ferguson’s model as almost dogmatic truth. This highlights an important issue: politicians have not been taught enough about data-driven decision-making, especially not where predictive data is involved. There is wide support for a science-driven response to COVID-19, but very little scrutiny of the science behind many of the predictions that informed early public health measures. Hopefully, a Royal Commission with subpoena powers will have the opportunity to review in detail whether Ferguson intentionally hid the model from HM Government the way he hid it from the rest of the world, or whether the government’s experts simply did not know how to scrutinise or assess a model. Or – the worst-case scenario – they saw the model and still let it inform what might have been the greatest single decision HM Government has made since 1939, without looking for alternatives (there are many other modelling approaches, and many developers who have written better code).

The community issue

Perhaps the biggest issue is, however, the response to people who dare question the refusal by Imperial to release the original source code. This is best summarised by the responses of their point man on Github, who is largely spending his time locking issues and calling people dumb & toxic:

It may merit attention that the MRC is taxpayer-funded – the self-same taxpayer who is deemed unfit to even behold what he paid for. This is the worst of ‘closed science’, something many scientists (myself included) have worked hard to dismantle over the years. Publicly funded science imposes a moral obligation to present its results to the funder (that is, the taxpayer), and it should perhaps not be up to the judgment of a junior tech support developer to determine what the public is, or is not, fit to see. Perhaps as an epidemiologist, I take special umbrage at the presumption that everyone who wishes to see the original code base would be “confused” – maybe I should write to reassure Dr Hinsley that I do understand a little about epidemiology. It is, after all, what I do.

The science issue

None of these issues is, of course, anywhere near as severe as what this episode portends – a massive leap backwards, an erosion of trust and a complete disclaimer of accountability by publicly funded scientists.

There is a moral obligation for epidemiologists to work for the common good – and that implies an obligation of openness and honesty. I am reminded of the medical paternalism that characterised Eastern Bloc medicine, where patients were rarely told what ailed them and never received honest answers. To see this writ large amidst a pandemic by what by all accounts (mine included) has been deemed one of the world’s best computational epidemiology units is not so much infuriating as it is deeply saddening.

One of my friends, former Navy SEAL Jocko Willink, counselled in his recent book to “take the high ground, or the high ground will take you”. Epidemiology had the chance to seize and hold the narrative, through openness, transparency and honesty about the forecasts made. It had the chance, during this, our day in the sun, to show the public just how powerful our analytical abilities have become. Instead, petty academic jealousy, obsession with institutional prestige and an understandable but still disproportionate fear of being ‘misinterpreted’ by people who ‘do not understand epidemiology’ have given the critics of forecasting and computational epidemiology fertile ground. They are now entirely justified in criticising any forecasts that come out of the Imperial model – even if the forecasts are correct. There will no doubt be public health consequences to the loss of credibility the entire profession has suffered, and in the end, it is all due to the outdated ‘proprietary’ attitudes and the airs of superiority of a few insulated scientists who, somehow, somewhere, left the track of serving public health and humanity for the glittering prizes offered elsewhere. With their abandonment of the high road, our entire profession’s claim to the public trust might well be forfeited – in a sad twist of irony, at a time that could well have been the Finest Hour of computational epidemiology.

And while we may someday regain the respect of the public we swore to serve (perhaps after a detailed inquiry into what went wrong), for now there will never be glad confident morning again.


1 Section 22A allows for non-disclosure of ongoing research if it is in the public interest. It’s not that I disagree with Imperial on whether it’s in the public interest or not to release the code: it’s that I cannot for the life of me see how anyone could reasonably consider that it is not. Which just so happens to be the administrative law standard when adjudicating issues like this.
Chris von Csefalvay
Chris von Csefalvay is a data scientist/computational epidemiologist specialising in the dynamics of infectious disease and AI-based, data-driven solutions to public health problems.
