Have at it lads

The computer coding behind the Imperial College research that led to the coronavirus lockdown has been analysed for the first time by outside experts who have been “largely positive” about its quality, says its creator.

Professor Neil Ferguson, lead author of the report which recommended that the government impose stringent measures in order to prevent Covid-19 claiming the lives of 250,000 in the UK, has been under pressure to release the coding so that others can scrutinise his mathematical modelling and check his working is accurate.

Now, he told The Telegraph, he has given a limited number of experts access to his coding before making it public on the software repository website GitHub.

That’s assuming he has actually put it on GitHub, of course…

42 thoughts on “Have at it lads”

  1. Surreptitious Evil

    If that’s a fake, it’s one of state-level disinformation grade. Academic papers included, loaded up by one of the key researchers, with upload history, not just by an “Nferguson” account.

  2. Bloke in North Dorset

    The model might be quite good and well written, but that doesn’t protect us from GIGO.

  3. BlokeInTejasInNormandy


    That, and the danger of modelling more detail than there is data to support.

  4. The Meissen Bison

    Didn’t Professor Neil Ferguson glean his modelling expertise from watching London Fashion Week on the telly?

  5. I see Ferguson’s model is Bayesian. It’s 40 years since I briefly studied probability theory, and I came away with a dislike of Bayesian probability because it involves too many a priori assumptions – or more than a frequentist would make. But I can’t claim to have any real expertise in the area – only a basic acquaintance long ago.

  6. It relies on knowing numbers which are unknown, and not necessarily well-conceived, like R0 and CFR. These are (probably) not numbers you can plug into a model and get a good prediction out of. I’d even doubt whether such a model could be successfully run after the numbers are known and the epidemic is over. Oh, and it’s got human behaviour in it too. That’s not always predictable.

  7. R0 varies with social and hygiene norms. And as soon as a few people have it anyway it isn’t R0 anymore.
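
To make the sensitivity to R0 concrete, here is a minimal SIR sketch in Python (all numbers are invented for illustration; this is nothing like the Imperial code):

```python
# Minimal SIR model: small changes in R0 produce very different epidemic sizes.

def sir_final_size(r0, recovery_days=10.0, dt=0.1, days=1000):
    """Integrate a basic SIR model and return the fraction ever infected."""
    gamma = 1.0 / recovery_days          # recovery rate
    beta = r0 * gamma                    # transmission rate implied by R0
    s, i, r = 1.0 - 1e-6, 1e-6, 0.0      # susceptible, infected, recovered
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return r

for r0 in (1.1, 1.5, 2.5, 3.0):
    print(f"R0={r0}: {sir_final_size(r0):.0%} of population eventually infected")
```

Nudging R0 from 1.1 to 2.5 moves the attack rate from roughly a fifth of the population to nearly ninety per cent, which is why small errors in these “unknown numbers” swamp everything else.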

  8. Expert & “expert”. Two different nouns. “Expert” is a transitory qualitative assessment: the guy at a garage who diagnosed & fixed a problem with your car when other garages failed. “Expert” comes from credentialisation, politicking & self-promotion, and belongs in the same category as confidence trickster.

  9. There isn’t one single model. The different academic papers use different models, even when they’re written by the same academics. Sometimes they just tweak an old model, sometimes they start from scratch.

    When people talk about “the” Imperial model, they’re usually talking about the one used for Report 9 (16 March) about how to avoid overwhelming the UK health capacity. That’s the one that is apparently undergoing cleanup before posting. The code people have posted above is for Report 13 (30 March) and is about how to estimate the effectiveness of the lockdowns across Europe. Very different kettle of fish (this isn’t about prediction but rather about inference) and its code was written from scratch I believe.


  10. @Theo

    To be fair, frequentists make assumptions too, particularly if they’re building a predictive model rather than just performing inference from data. Bayesians are often more upfront about the assumptions (priors) they’ve made, and in principle their methodology is well suited to exploring the impact of a range of possible prior beliefs, eg depending on whether you include studies that used Chinese data. In practice most publications don’t actually include such sensitivity analyses, which is a shame.

    But it’s not helpful to think of rival camps of Bayesians vs frequentists these days – probably not since the 1990s or so. Most statistical types seem to use a grab bag of whatever methods seem suitable for the problem at hand without falling into fixed ideological lines.
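
The sensitivity-to-priors point is easy to demonstrate. A toy sketch in Python (conjugate Beta priors for a fatality-type proportion; every number here is invented and has nothing to do with the Imperial model):

```python
# With a Beta(a, b) prior and `deaths` out of `cases`, the posterior mean of
# the proportion is (a + deaths) / (a + b + cases), so you can see exactly
# how much the choice of prior matters at different data sizes.

def posterior_mean(prior_a, prior_b, deaths, cases):
    """Posterior mean of a proportion under a Beta(prior_a, prior_b) prior."""
    return (prior_a + deaths) / (prior_a + prior_b + cases)

for cases in (10, 1000, 100000):
    deaths = cases // 100                            # observe a 1% rate
    flat = posterior_mean(1, 1, deaths, cases)       # "uninformative" prior
    strong = posterior_mean(50, 950, deaths, cases)  # prior centred on 5%
    print(f"n={cases}: flat prior -> {flat:.3f}, strong 5% prior -> {strong:.3f}")
```

With ten cases the two priors disagree badly; with a hundred thousand they agree to three decimal places. That is exactly the kind of sensitivity analysis the publications are said to omit.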

  11. ‘Gosh! I wish I knew what you fellows were on about!’ I always thought Publius Claudius Pulcher had the right idea when he chucked the sacred chickens overboard, saying ‘Since they won’t eat, let them drink.’

    Of course he then went on to lose the battle of Drepana, but Roman chooks are surely far better at predictions than modern modellers.

  12. It’s a simplistic model, at first glance. It looks at the known data from other countries, plus the measures they’ve taken, and tries to project into the future based on different scenarios. E.g. if Austria closed its non-essential shops and then saw a sudden fall in cases, the model would calculate two scenarios: shops stay open = more deaths, shops close = fewer deaths.

    The big problem is hidden variables. Let’s say Austria closes its shops, but also tries to keep Covid-19 patients away from hospitals; whereas e.g. Sweden keeps the shops open, but treats everyone with a cough in A&E. The model assumes that closing the shops did the trick; whereas in fact segregating the patients did most of the work.

    Think of all the possible variables: How packed are the trains? Does the country have a mask-wearing culture? Do people talk loudly while gesticulating wildly, and does that contribute to the spread? What about hospitals – what kind of PPE do they use, do they change between patients, what’s the food like, how much air is recycled in the hospital’s air-con, etc.

    It could be that 90% of the gain comes from simple, low-impact measures like mask-wearing and hand-washing; not from buggering the economy and making everybody miserable. Ferguson’s model isn’t looking at those questions, so it can’t tell us.
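
The hidden-variables problem is an identifiability problem, and it can be sketched in a few lines of Python (invented numbers throughout): if two interventions start on the same day, any split of the observed drop in transmission between them reproduces the data equally well.

```python
# Two rival attributions of one observed drop in growth fit the data identically.

def simulate_growth(r_before, r_after, switch_day=10, days=20):
    """Daily case counts with growth factor r_before, then r_after."""
    cases, series = 100.0, []
    for day in range(days):
        cases *= r_before if day < switch_day else r_after
        series.append(round(cases, 1))
    return series

observed = simulate_growth(1.3, 1.3 - 0.4)   # "real" data: growth drops by 0.4

shops_did_it  = {"shops": 0.4, "hospital_policy": 0.0}
hospitals_did = {"shops": 0.0, "hospital_policy": 0.4}
for theory in (shops_did_it, hospitals_did):
    r_after = 1.3 - theory["shops"] - theory["hospital_policy"]
    print(theory, "fits the data:", simulate_growth(1.3, r_after) == observed)
```

Both theories print True: the case curve alone cannot distinguish them, which is the point about Austria’s shops versus patient segregation.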

  13. “I see Ferguson’s model is Bayesian. It’s 40 years since I briefly studied probability theory …”

    My PhD supervisor passed me a book on Bayesian stats and said “Tell me if we can use this stuff”.

    I did my reading and reported back. “I’ve learnt that it sounds much more scientific if you refer to a prejudice as a prior.”

    We decided that it didn’t promise to be much use to our work. The logic was quite appealing but it all depended on somehow having a feel for the value of a parameter before you analysed the data. It’s true that you normally design experiments based on a guess at the rough value of what you are hoping to measure, so to that extent the idea was broadly familiar.

    As for Lord Fergie of Models, the “prior” suggested by his previous work on modelling epidemics is that he should have been ignored.

  14. Does the prof have access to data not available to the rest of us? I’ve been trying to find daily hospital admissions but all I’ve come up with is a low res accumulated graph. That seems to me to be a useful dataset.

    There’s a nice graph showing the progress of the disease in fatal cases and it’s roughly two weeks from infection to hospital then a further two weeks to showing up in the death stats. Tomorrow is four weeks in to the lockdown so if it is working we should see something soon, certainly by the end of the month.

  15. @dearieme

    I think the strength of the Bayesian approach is clearer if you are trying to do predictive modelling based on uncertain data/parameter estimates, where instead of just “plugging in a number” for the key parameters you can plug in a probability distribution based on the central estimate and associated uncertainty of previous studies. Or in experimental work, you might be trying to estimate some property X, and you’ve done an experiment and produced some data that would allow you to calculate X, but the calculation also requires you to use Y and previous studies have not produced an exact value for Y but rather an estimate with considerable uncertainty around it. Like you say, it’s about “having a feel” for something. You don’t necessarily need strong feelings about what you think X will be, you can make your prior about X as “uninformative” as possible.
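
A minimal Python sketch of “plugging in a distribution instead of a number” (the quantity X, the measurement, and the 2.0 ± 0.5 estimate for Y are all invented):

```python
import random
import statistics

# X is calculated from our data using a parameter Y that previous studies
# only pin down to roughly 2.0 +/- 0.5; propagate that uncertainty through.

random.seed(1)

def estimate_x(measurement, y_samples):
    """Propagate uncertainty in Y through the calculation X = measurement / Y."""
    return [measurement / y for y in y_samples]

y_samples = [random.gauss(2.0, 0.5) for _ in range(100000)]
y_samples = [y for y in y_samples if y > 0.5]     # crude truncation away from zero
xs = estimate_x(10.0, y_samples)

print(f"X central estimate: {statistics.median(xs):.2f}")
print(f"X 90% interval: {sorted(xs)[len(xs) // 20]:.2f} "
      f"to {sorted(xs)[-len(xs) // 20]:.2f}")
```

Instead of a single number for X you get a central estimate with an honest interval around it, inherited from the uncertainty in Y.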

  16. Making a model is a demonstration of just what a smart boy you are. The usefulness depends on the quality of data you plug in. This model was rushed out to fan the panic on the basis of absolutely no good data and reeks of hubris.

  17. Can someone explain what is achieved by a code-driven model which cannot also be achieved by a spreadsheet using, inter alia, logical IF-type formulae and, perhaps, multiple tabs?

  18. @ Ljh

    One of the advantages of a Bayesian model is that if you have got big uncertainties in the data, you can feed those uncertainties in as inputs too. The result might be that the output has almost unusably wide uncertainties too, but at least that tells you that you don’t really know what’s going to happen and that’s arguably better than the pure GIGO you get with a deterministic, non-Bayesian model into which you feed in highly uncertain data but you don’t have any way to tell the model what the accompanying uncertainties are. In the latter case, one ought at least to do a sensitivity analysis and see what difference varying the parameters within known bounds might have, but in a model that depends on several input parameters it gets fiddly to explore the multi-way sensitivities (eg what if X is much bigger than previously estimated, while Y and Z are much smaller). It’s neat that the Bayesian approach keeps tabs on all these uncertainties for you.


    In principle MS Excel spreadsheets are Turing-complete, so theoretically there’s nothing to stop you using them. In practice the difficulty is speed. If what I’ve written above isn’t complete gobbledegook then hopefully you can see what the advantage of a Bayesian approach might be. Due to the uncertainty in the existing data, your model parameters actually belong in a big “parameter space” of all plausible combinations of parameters. What you want to do is keep drawing possible combinations of parameters from that space, picking each combination with probability proportional to how plausible you believe it to be (a “Monte Carlo” process), and re-running your model simulation with each such combination as a set of inputs. Then you tally up your results and you can see a range of possible outcomes; from there you can work out e.g. how likely it is that the epidemic dies out quickly, or how likely it is to overwhelm your health service capacity.

    Now all of that can be done in Excel. I’ve not done it with an epidemiological model but I have done it with business/finance models. There are even Excel add-ins you can buy that do the Monte Carlo and uncertainty stuff, for example Palisade @Risk, but compared to writing your own code in C++ or similar it can be very slow. I’d also note that Excel spreadsheets can be a pain to debug if there’s a coding error in them, particularly because there are so many cells to read through and their logic is spread out across two dimensions (or three, with tabs!). With code it isn’t always quite as simple as “start at the top of the page and work down” to follow the logic, but it’s certainly easier, and a “for loop” or similar is usually more human-readable than the Excel equivalent (which might be thousands of rows of copied cells, where you have to be careful that the cell references have been made relative/absolute as required).
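
For what it’s worth, the whole loop described above fits in a short Python sketch (a crude SIR stand-in with invented parameter ranges and an invented capacity threshold, not the Imperial model):

```python
import random

# Monte Carlo over uncertain inputs: draw parameters from distributions,
# re-run a crude epidemic calculation each time, and tally how often the
# outcome breaches some capacity threshold.

random.seed(42)

def peak_infected(r0, recovery_days, dt=0.5, days=365):
    """Crude SIR integration; returns the peak fraction infected at once."""
    gamma = 1.0 / recovery_days
    beta = r0 * gamma
    s, i, peak = 1.0 - 1e-5, 1e-5, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        i += new_inf - gamma * i * dt
        s -= new_inf
        peak = max(peak, i)
    return peak

CAPACITY = 0.1                    # pretend capacity: 10% infected at once
runs, breaches = 500, 0
for _ in range(runs):
    r0 = random.uniform(1.1, 3.0)        # uncertainty in R0
    rec = random.uniform(5.0, 14.0)      # uncertainty in infectious period
    if peak_infected(r0, rec) > CAPACITY:
        breaches += 1
print(f"Capacity breached in {breaches / runs:.0%} of simulated scenarios")
```

A few hundred runs like this take well under a second as plain code; the equivalent spreadsheet, recalculating thousands of cells per draw, is where the speed problem bites.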

  19. It’s reasonably clear, Mr Ears, thank you.

    But are you not also saying that, absent pinning your colours to the mast with a guess at known unknowns, you might as well not bother?

    In other words, for financial projections with some underlying exuberance which may or may not turn out to be irrational, it’s fine: if it doesn’t pan out, some people lose their shirts and that’s that.

    But for public policy involving tens of millions, especially where, as here, there’s no denominator of incidence, we don’t even have exuberance. It’s just guesswork. And worse than guess work, it has the patina of expert credibility…

    Which brings me back to Excel vs code: a great many more people can grok Excel, even with logic gates, than can grok code.

  20. @Lud

    Yes, that sounds pretty fair. Every model makes assumptions, and you ought to be clear where they’re coming from. In business models it’s often based on “judgement” that eg sales growth will most likely be Y%, but you’re confident it will be in a band between X% and Z%. Because if you don’t have some kind of assumption about sales growth then you’ll never be able to forecast your future profits, will you? Now you might base it on past data, but that doesn’t help greatly if you’re planning a completely new product line, and I did get the feeling with the commercial clients I did number-crunching for that they were largely pulling numbers out of thin air. Then again, they knew whose head it would be on.

    For epi models you might frame your uncertainty about a parameter on previous studies. Here’s another modelling group’s attempt at something similar: https://cmmid.github.io/topics/covid19/control-measures/uk-scenario-modelling.html and if you look at the full report and supplementary figure 1 in particular they’re very clear how they produced their prior for R0 based on the uncertainty in previous estimates. So at least the colour-pinning has been done in a transparent way (though I feel a bit dodgy about it – if so many previous studies came up with incompatible conclusions it seems a bit off to just bung them all together like that with a wide uncertainty, rather than pick through the differences with a toothcomb and try to figure why they’re there and what might be more relevant to a UK context, but I digress…).

    Now I’ll admit I’m not generally a fan of Bayesian approaches because personally I get a similar gut feeling about them to Theo and Dearieme. But I can see the logic and the appeal of them, and I believe in some fields (eg week-ahead weather forecasting) it has proven a successful approach in terms of improving predictive accuracy. I still think there’s some virtue to a simple model, the kind that’s easily bashed out in Excel, and seeing how closely it matches the fancier ones. The more moving parts a model has, the more ways it can go wrong. Of course the simple model might turn out not to be up to the job either but if not it’s worth trying to figure out why not. In fact some unis have prepared relatively simple models – I think NZ’s response was based on a model that was so straightforward it could easily have been implemented in Excel – and they got some pretty scary results too, though again that may be due more to the input parameters they used than the model structure itself.

  21. I have created models of traffic flows using Excel (long ago for a masters dissertation). It worked well enough for the purpose (pass). But it was slow, prone to unexpectedly halt during demonstrations and, coming back to it after a few months, far too much trouble to revise and debug. Now the same could be said of any programming language (I mainly use Gnumeric and Python now), but spreadsheets do get unwieldy very quickly; they look good and glossy on the surface, but do not have qualities that wear well.

  22. View from the Solent

    “When people talk about “the” Imperial model, they’re usually talking about the one used for Report 9 (16 March) about how to avoid overwhelming the UK health capacity. That’s the one that is apparently undergoing cleanup before posting. The code people have posted above is for Report 13 (30 March) and is about how to estimate the effectiveness of the lockdowns across Europe. Very different kettle of fish ”

    Having looked at the code, I agree wholeheartedly. It’s not even fish, let alone the same kettle.
    This is taking on the appearance of another climategate.

  23. If you allow me four free parameters I can build a mathematical model that describes exactly everything that an elephant can do. If you allow me a fifth free parameter, the model I build will forecast that the elephant will fly.
    John von Neumann (1903 – 1957)

    But it’s tough for the politicians. They were going along the right lines (no lockdown – the correct answer according to some experts) and then Prof Ferguson comes running in to announce that his model shows 510,000 people may die. Politicians turn to their in house experts (Prof Whitty et al) and say: is this possible? To which (I imagine) the scientific answer is “Yes, but it’s very unlikely.” Whatchagonado?

  24. At least in the good old days if a haruspex got his modelling wrong he was likely to be reading the future in his own entrails as they spilled out onto the floor. These days, the useless modellers will get a gong and a very good pension.

    I want to see accountability, not rewards, for these guys when they get it so badly wrong.

  25. Appoint a PPE tsar, Mr Miller. It’s an eye-catching initiative with which Boris personally can be associated.

    If they can get him into khaki scrubs, so much the better.

  26. Having played around with more basic models in the past, I’m careful to not to judge the modellers and decision makers too harshly for their initial decisions. The problem with natural processes like epidemics is that there can be huge variations in outcome for relatively minor changes in key variables. And early in the process, the uncertainty over those variables is very high.

    So you could build, in absolutely good faith, a model that predicts a very scary scenario as a base case, as long as you’re honest around the distribution of probabilities around that. And then find out that whilst it’s bad, it’s not remotely THAT bad.

    For me, the question is much more about how they adjust as the evidence is developed. Will they follow reality even if it makes their earlier work look like a mistake?

    Unfortunately my guess is not, because they know the media and much of the public will judge them in hindsight and so they will feel a need to defend their positions more than they probably should.

  27. “The computer coding behind the Imperial College research that led to the coronavirus lockdown has been analysed for the first time by outside experts who have been “largely positive” about its quality, says its creator.”

    What does “quality” mean here? There’s clear, easy-to-follow, well-structured code without any obvious nasties, and there’s code that meets the specification. You can write really clear code that’s wrong, and you can write horribly structured code that’s right. Think about how Polly Toynbee has a good grasp of the English language and it’s all in the Guardian house style, spelt correctly with lovely fonts, but utter bollocks.

    The real test of quality code is that you run some tests and the results are as expected compared to the specification. And that means, at a minimum, that every line of code is visited at least once by a test. It also means testing various high and low values, permutations of codes and so forth. And if you’re not stupid, you write automated tests to do this, so you can rerun them whenever you like.

    In the timescales, I’m going to guess that they’ve done a code review, not run tests against the code. But the release will tell us that.
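
For readers wondering what “automated tests against a specification” look like in the small, here is a Python sketch (the function and its spec are invented for illustration):

```python
import math

def doubling_time(growth_rate_per_day):
    """Days for cases to double at a constant daily exponential growth rate.

    Spec: growth_rate_per_day must be > 0; raises ValueError otherwise.
    """
    if growth_rate_per_day <= 0:
        raise ValueError("growth rate must be positive")
    return math.log(2) / growth_rate_per_day

# Automated checks, rerunnable after every change: an expected value,
# an extreme input, and the error path, so every line gets exercised.
assert abs(doubling_time(math.log(2)) - 1.0) < 1e-9    # doubling daily
assert doubling_time(1e-6) > 100000                    # near-flat epidemic
try:
    doubling_time(-0.1)
    raise AssertionError("spec violation: negative rate accepted")
except ValueError:
    pass
print("all checks passed")
```

The point is that the checks are written down once and rerun on every change, rather than someone eyeballing the code and pronouncing it fine.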

  28. “At least in the good old days if a haruspex got his modelling wrong he was likely to be reading the future in his own entrails as they spilled out onto the floor. ”

    With the advantage that during a crisis, rather than would-be experts besieging rulers with pet schemes, they would all go very quietly to their studies and reflect deeply on what advice they would offer should they be summoned to give it.

  29. It’s the old issue of effort vs accuracy: at some point you’re playing around with a level of detail that makes no difference, or with variables that have little impact.
    Having done variance analysis on financial models to understand why the last budget and forecast don’t agree (e.g. the impact of a different sales mix means the same revenue but a different gross margin), I can say that figuring out why different inputs give such varied outputs can be challenging.

  30. We have an entire branch of science dedicated to this. It’s called Data Science. Central to it is machine learning, where you, to cut a long story short, throw a bunch of data at an algorithm and ask it to learn stuff about it. Broadly, we ask it to perform one of a few tasks – classification, regression, clustering, dimensionality reduction, model selection and preprocessing. Once we’ve run the algorithms over our data, we can use it to make predictions. These predictions tend to be far more accurate than models written with preconceived ideas, although they’re not totally immune to bias. The fact that our lockdown is based on a Bayesian model and hasn’t used the branch of science dedicated to the study of data is mildly terrifying.

    There is a publicly available data lake for COVID-19 data available here: https://aws.amazon.com/blogs/big-data/a-public-data-lake-for-analysis-of-covid-19-data/
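
For the curious, the “learn from data” idea reduces, in its very simplest form, to something like a nearest-neighbour classifier. A Python sketch with entirely made-up points (real work would use a library such as scikit-learn, far more data, and proper validation):

```python
# 1-nearest-neighbour classification: no preconceived model, just data.

def nearest_neighbour_predict(train, point):
    """Classify `point` with the label of the closest training example."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda ex: dist2(ex[0], point))
    return label

# (features, label) pairs -- entirely made up
train = [((1.0, 1.0), "low risk"), ((1.2, 0.8), "low risk"),
         ((5.0, 6.0), "high risk"), ((6.0, 5.5), "high risk")]

print(nearest_neighbour_predict(train, (1.1, 0.9)))   # -> low risk
print(nearest_neighbour_predict(train, (5.5, 5.8)))   # -> high risk
```

The prediction comes entirely from the data points, not from assumed epidemic dynamics, which is the trade the comment is describing.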

  31. Bloke in North Dorset

    That link from Rob Moss is very interesting. Leaving aside the arch-capitalists at Amazon providing an excellent free service, as I was reading it I was thinking: if only the purveyors of that other existential threat, climate change, were so open about their models and data.

    Lo and behold, when I finished reading it I found this in my RSS feed:

    Scientists now acknowledge cloud cover changes “control the Earth’s hydrological cycle”, “regulate the Earth’s climate”, and “dominate the melt signal” for the Greenland ice sheet via modulation of absorbed shortwave radiation. CO2 goes unmentioned as a contributing factor.

    Climate modeling of factors influencing Greenland warming, surface melt have been 100% wrong

    A few years ago scientists acknowledged “a major disparity in trends between models from the Coupled Model Intercomparison Project 5 (CMIP5) and observations for the last 20-30 years” (Hanna et al., 2018).

    All 36 climate models simulating blocking over Greenland were wrong. None of the models were correct.

    The abysmal performance of the modeling relative to observations has been ongoing for the last 20 to 30 years – effectively for the entire time the CMIP5 models have been in existence.

    That same blog also has a piece about how the lockdowns have shown that diesel cars aren’t the evil polluters environmentalists have been claiming.

  32. @BoM4
    The real test of quality code is that you run some tests and the results are as expected compared to the specification.
    Spot on, but how many in the commercial world (apart from me and thee, obvs) actually do that rigorously? And in the academic world, the amount of testing that goes on is pretty close to zero.

  33. @MBE

    Last week someone at Hectors posted that the Report 9 model is 13 years old, undocumented code for Winter Flu

    It’s over three weeks now since Ferguson told the Telegraph he’d release “next week”.
