Sniffing Model Glue

Anyone remember those massive econometric models of the 1970s, like the one Chase Econometrics touted with something like 10,000 variables, that couldn’t predict interest rates six months out? (There’s a reason that Sir Alan Walters, Margaret Thatcher’s economic adviser, always had to correct his secretary’s transcription every time she rendered “econometrics” as “economic tricks.”) Most of those fancy computer models were abandoned a long time ago, and no one seems interested in bringing them back. Dartboards and WAGs (wild-assed guesses) seem to do better than fancy models, though some forecasting WAGs (especially Brian Wesbury) seem consistently to predict better than others.

I’ve wondered for a long time whether the same kind of modeling “imponderables” (as Jeeves might put it) affect the whole climate modeling enterprise. Loyal Power Line reader Dale Wyckoff points me to this very interesting Scientific American article on why economic models are always wrong that would seem to apply equally to climate models. Here are two excerpts that get at the heart of the problem:

The next step was “calibrating” the model. Almost all models have parameters that have to be adjusted to make a model applicable to the specific conditions to which it’s being applied–the spring constant in Hooke’s law, for example, or the resistance in an electrical circuit. Calibrating a complex model for which parameters can’t be directly measured usually involves taking historical data, and, enlisting various computational techniques, adjusting the parameters so that the model would have “predicted” that historical data. At that point the model is considered calibrated, and should predict in theory what will happen going forward. . .

The problem, of course, is that while these different versions of the model might all match the historical data, they would in general generate different predictions going forward–and sure enough, his calibrated model produced terrible predictions compared to the “reality” originally generated by the perfect model. Calibration–a standard procedure used by all modelers in all fields, including finance–had rendered a perfect model seriously flawed. Though taken aback, he continued his study, and found that having even tiny flaws in the model or the historical data made the situation far worse. “As far as I can tell, you’d have exactly the same situation with any model that has to be calibrated,” says Carter.
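The calibration trap the excerpt describes is easy to demonstrate in miniature. The sketch below is purely illustrative and assumes nothing about the study’s actual models: two candidate models of different complexity are “calibrated” to the same short run of noisy historical data, both match that history acceptably, yet their forecasts pull apart once you step outside the calibration window.

```python
import numpy as np

rng = np.random.default_rng(0)

# "History": 10 noisy observations of a simple underlying process.
x_hist = np.arange(10.0)
y_hist = 2.0 * x_hist + rng.normal(0.0, 0.5, size=x_hist.size)

# Calibrate two different model forms to the same history.
linear = np.polyfit(x_hist, y_hist, deg=1)  # 2 free parameters
cubic = np.polyfit(x_hist, y_hist, deg=3)   # 4 free parameters

# Both "predict" the historical data acceptably (small in-sample error)...
err_lin = np.mean((np.polyval(linear, x_hist) - y_hist) ** 2)
err_cub = np.mean((np.polyval(cubic, x_hist) - y_hist) ** 2)

# ...but their forecasts diverge once we move past the calibration window,
# because the extra parameters of the cubic were fit partly to noise.
x_future = 25.0
forecast_lin = np.polyval(linear, x_future)
forecast_cub = np.polyval(cubic, x_future)
print(err_lin, err_cub, forecast_lin, forecast_cub)
```

The cubic will always fit the history at least as well as the line (more knobs to turn), which is exactly why matching the past is no guarantee about the future.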

Sounds about like what I had in mind in my Weekly Standard article a few days ago.
