Our friend Kevin Roche is the former general counsel of UnitedHealth, former chief executive officer of UnitedHealth's Ingenix (now Optum Insight) data analysis division, and the proprietor of Healthy Skeptic. When authorities announced that Minnesota would be using a $17 million federal grant to learn from its mistakes in Covid-19 forecasting and improve its forecasts the next time around, we thought that Kevin had an important contribution to make. The Star Tribune apparently thinks otherwise, having found Kevin's comments unfit to print. Kevin writes:
We learned this week that the State of Minnesota intends to use a $17 million grant to study why the model developed by the state for use during the Covid-19 epidemic was a mere million miles from reality. You may recall that under the model as many as 50,000 Minnesotans would die and many more would be hospitalized. Unfortunately, those predictions were used as the purported basis for closing schools and businesses, forcing people to stay indoors, and visiting numerous other futile and costly suppressive measures upon the citizenry. Indeed, those measures have done immense damage, far more than the epidemic itself, particularly to children.
Governor Walz repeatedly bragged about the model and how special we were in developing it to guide policy. He has scrubbed his press conferences from YouTube and other sources so they couldn't be used against him, but once something is on the internet, it is always there. And he clearly used the model to justify the shutdowns and his accompanying terror campaign of unrelenting fear-mongering, which led to anxiety, depression, greater drug and alcohol use and, most importantly, missed health care that will cause deaths and more serious disease for years.
The model was built by a particularly inept team, which seemed not to have a grasp of basic modeling or statistical principles. At one point the Star Tribune ran a story about how wonderful it was that graduate students at the University of Minnesota were participating in the project. That should have been a clear warning sign of impending disaster. In fairness, many other models were equally poorly constructed, but there was a small group of independent statisticians and epidemiologists who early on created models that proved to be far more accurate.
I and others repeatedly attempted to point out to the modeling team the major flaws in their model, but we were either ignored or told that the team knew what they were doing. I ran a whole series of posts on the flaws and how the model could be improved. Good models in health care are all built on large databases of past information. A common example is identifying the best treatment patterns for a disease. You can take large claims databases and electronic medical records covering the care delivered in different ways by many physicians to a huge number of patients, examine outcomes, and extract a model that says the best method for treating this disease in this type of patient is X.
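[Ed.: For readers who want a concrete picture of the kind of retrospective comparison Kevin describes, here is a minimal, purely illustrative sketch. The column names and data are made up for the example; they are not drawn from any real claims feed or from the state's model.]

```python
import pandas as pd

# Toy stand-in for a claims / EMR extract: one row per treated patient.
# Column names here are hypothetical, invented for this illustration.
claims = pd.DataFrame({
    "patient_type": ["diabetic", "diabetic", "diabetic", "diabetic",
                     "non-diabetic", "non-diabetic", "non-diabetic", "non-diabetic"],
    "treatment":    ["A", "B", "A", "B", "A", "B", "A", "B"],
    "good_outcome": [1, 0, 1, 0, 1, 1, 0, 1],
})

# Observed rate of good outcomes for each treatment within each patient type.
summary = (claims
           .groupby(["patient_type", "treatment"])["good_outcome"]
           .agg(rate="mean", n="size")
           .reset_index())

# The "model" in its crudest form: the best-performing treatment per patient type.
best = (summary
        .sort_values("rate", ascending=False)
        .groupby("patient_type", as_index=False)
        .head(1))

print(summary)
print(best)
```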
An early epidemic model by definition does not have a large database of past events. A truism about models is that they only tell the modeler what the modeler tells them to tell him or her. Lacking sufficient data, the Minnesota team essentially made up parameters and inputs. And they made incredibly basic errors.
It is common knowledge that in an infectious disease epidemic the people most likely to become infected and to experience serious disease are those in poor health condition, whether due to age or pre-existing disease. The early part of an epidemic will accordingly see disproportionately high rates of hospitalization and death. Not accounting for this is a failure to do the fundamental check that the population sample for the data you are using is representative of the population as a whole. The virus didn’t “sample” or infect the population randomly. It infected certain parts of the population in a highly preferential manner.
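[Ed.: A small, hypothetical simulation makes the point concrete (this is our illustration, not the state's model). If both the chance of early infection and the risk of death rise with underlying frailty, the death rate observed in the early, preferentially infected cohort overstates the rate the population as a whole would experience.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical "frailty" score: higher means worse underlying health (age, comorbidity).
frailty = rng.gamma(shape=2.0, scale=1.0, size=n)

# Assumption for illustration only: both early infection risk and fatality risk
# scale with frailty, so the virus does not "sample" the population randomly.
p_early_infection = np.clip(0.01 * frailty, 0.0, 1.0)
p_death_if_infected = np.clip(0.002 * frailty, 0.0, 1.0)

early_infected = rng.random(n) < p_early_infection

# Fatality rate in the early, non-randomly infected cohort...
early_cohort_rate = p_death_if_infected[early_infected].mean()
# ...versus the rate if infection eventually reached the population uniformly.
population_rate = p_death_if_infected.mean()

print(f"early-cohort fatality rate:    {early_cohort_rate:.4f}")
print(f"population-wide fatality rate: {population_rate:.4f}")
```

[Ed.: In this toy setup the early cohort's rate runs roughly half again higher than the population-wide rate; extrapolating it to the whole state bakes that bias into the forecast.]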
The failure to account for extensive variation in viral load, in the susceptibility of the population to infection, in circumstances of transmission, and in other crucial parameters led to a model that was far too simplistic and discordant with the reality of the epidemic as it unfolded. And again, in fairness to the team, much of the data critical to building a good model simply wasn’t well understood, including basic facts about the infectivity and survivability of the virus itself, and viral transmission dynamics, something of which we still don’t have a clear picture.
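[Ed.: One standard way to see the effect of susceptibility heterogeneity alone is to compare the final epidemic size implied by a simple homogeneous model with the size implied when the population is split into groups of differing susceptibility but the same overall R0. The numbers below are a toy sketch of that well-known effect, not a reconstruction of the Minnesota model.]

```python
import numpy as np

def final_size_homogeneous(r0, tol=1e-12):
    """Solve the classic final-size relation z = 1 - exp(-r0 * z) by fixed-point iteration."""
    z = 0.5
    for _ in range(100_000):
        z_new = 1.0 - np.exp(-r0 * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

def final_size_heterogeneous(r0, rel_susceptibility, weights, tol=1e-12):
    """Same relation when groups differ in relative susceptibility (uniform
    infectiousness, proportionate mixing); susceptibilities are rescaled so
    the population mean is 1, keeping R0 comparable."""
    a = np.asarray(rel_susceptibility, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    a = a / (w * a).sum()
    z_total = 0.5
    for _ in range(100_000):
        z_groups = 1.0 - np.exp(-a * r0 * z_total)
        z_new = (w * z_groups).sum()
        if abs(z_new - z_total) < tol:
            break
        z_total = z_new
    return z_total

R0 = 2.5
print(f"homogeneous final size:   {final_size_homogeneous(R0):.3f}")
# Toy split: low-, average-, and high-susceptibility thirds of the population.
print(f"heterogeneous final size: {final_size_heterogeneous(R0, [0.2, 1.0, 3.0], [1, 1, 1]):.3f}")
```

[Ed.: With the same R0, the heterogeneous version in this toy example infects roughly 60 percent of the population rather than roughly 90 percent, which is one mechanism by which an overly uniform model overstates infections, hospitalizations, and deaths.]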
I would not have a high level of confidence in getting any better model by using essentially the same group and approach that produced the first disaster. The state would spend its money far better by engaging those groups that produced better models and by involving people who demonstrated innovative approaches. But we also should recognize how foolish it is to determine public policy based on models, particularly in the early phase of the epidemic. I would hope that would be a key lesson from the now widely acknowledged failure of shutdowns, school closures, social distancing, plastic barriers, masking, and other measures.