Can we trust the predictive output of computer modelling?
I would be the first to admit that this is not an area where I have anything more than general knowledge. However, what prompted me to think about this topic was a chance conversation with someone here in Payson. We were chatting over the phone and this person admitted to being less than fully convinced of the ‘cause and effect’ of man’s influence on the global biosphere.
When I queried that, what was raised was the idea that all modelling algorithms used in climate change predictions must incorporate mathematical constants. I continued to listen as it was explained that, in practice, all such constants are, to some degree, approximations. Take, for example, the obvious one, the constant π, which Wikipedia describes as: a mathematical constant that is the ratio of a circle’s circumference to its diameter. Pi, of course, has to be rounded before it can be used in any calculation. Even writing it to thirty decimal places, 3.14159 26535 89793 23846 26433 83279, involves rounding: the expansion actually continues 50288 (the 31st to 35th decimal places), so a correctly rounded thirty-place value ends in 83280 rather than 83279.
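Out of curiosity, I worked through the numbers. Here is a minimal sketch, in Python, using nothing but the digits quoted above; the variable names are mine and the snippet is purely illustrative, but it shows that rounding π at the thirtieth decimal place introduces a relative error of roughly one part in 10^31.

```python
from decimal import Decimal, ROUND_HALF_UP, getcontext

# Pi to 35 decimal places, exactly as quoted in the paragraph above.
PI_35 = Decimal("3.14159265358979323846264338327950288")

getcontext().prec = 50  # plenty of working precision for what follows

# Round to 30 decimal places, the example used above.
pi_30 = PI_35.quantize(Decimal("1." + "0" * 30), rounding=ROUND_HALF_UP)

absolute_error = abs(pi_30 - PI_35)
relative_error = absolute_error / PI_35

print(pi_30)           # 3.141592653589793238462643383280
print(absolute_error)  # about 5E-31
print(relative_error)  # about 1.6E-31
```

So the rounding my friend was worried about is real, but it is vanishingly small beside the measurement uncertainties any model has to cope with.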
OK, so I must admit that I was leaning towards the view that this person had a valid point. I then asked Martin Lack, he of Lack of Environment and scientifically trained, for his thoughts. The rest of this post is based on the information that Martin promptly sent me.
One of the links that Martin sent was to this post on the Skeptical Science blogsite. That post sets out the common skeptics’ view, namely:
Models are unreliable
“[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere.” (Freeman Dyson)
The author of the Skeptical Science post responds:
Climate models are mathematical representations of the interactions between the atmosphere, oceans, land surface, ice – and the sun. This is clearly a very complex task, so models are built to estimate trends rather than events. For example, a climate model can tell you it will be cold in winter, but it can’t tell you what the temperature will be on a specific day – that’s weather forecasting. Climate trends are weather, averaged out over time – usually 30 years. Trends are important because they eliminate – or “smooth out” – single events that may be extreme, but quite rare.
Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.
So all models are first tested in a process called Hindcasting. The models used to predict future global warming can accurately map past climate changes. If they get the past right, there is no reason to think their predictions would be wrong. Testing models against the existing instrumental record suggested CO2 must cause global warming, because the models could not simulate what had already happened unless the extra CO2 was added to the model. All other known forcings are adequate in explaining temperature variations prior to the rise in temperature over the last thirty years, while none of them are capable of explaining the rise in the past thirty years. CO2 does explain that rise, and explains it completely without any need for additional, as yet unknown forcings.
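To make the idea of hindcasting a little more concrete, here is a toy sketch of my own, not something from Skeptical Science. It is written in Python with invented numbers and a deliberately crude one-parameter fit; real general circulation models are physics simulations rather than curve fits, but the testing logic is the same: estimate the model on one part of the record, then see how well it reproduces years it never saw.

```python
import math
import random

# A toy illustration of the hindcasting idea -- NOT a real climate model.
# We invent a synthetic "observed" record in which the temperature anomaly
# responds to the logarithm of CO2 concentration (plus weather-like noise),
# fit a one-parameter model on the early part of the record, and then check
# how well that fitted model reproduces the later, held-out years.

random.seed(42)

YEARS = list(range(1900, 2021))
CO2_0 = 296.0             # assumed baseline concentration, ppm (illustrative)
TRUE_SENSITIVITY = 3.0    # degrees C per doubling of CO2 (illustrative)

def co2(year):
    """Hypothetical smoothly rising CO2 concentration (ppm)."""
    return CO2_0 * math.exp(0.0045 * (year - 1900))

def forcing(year):
    """Forcing-like term: warming scales with log2(CO2 / CO2_0)."""
    return math.log2(co2(year) / CO2_0)

# Synthetic "observations": true response plus noise.
observed = [TRUE_SENSITIVITY * forcing(y) + random.gauss(0.0, 0.1) for y in YEARS]

# "Hindcast" fit: estimate the sensitivity using only years up to 1980.
train = [(forcing(y), t) for y, t in zip(YEARS, observed) if y <= 1980]
est_sensitivity = sum(f * t for f, t in train) / sum(f * f for f, _ in train)

# Evaluate the fitted model on the held-out years after 1980.
errors = [abs(est_sensitivity * forcing(y) - t)
          for y, t in zip(YEARS, observed) if y > 1980]

print(f"estimated sensitivity: {est_sensitivity:.2f} C per doubling "
      f"(true value used to generate the data: {TRUE_SENSITIVITY})")
print(f"mean absolute error on held-out years: {sum(errors) / len(errors):.3f} C")
```

If the held-out errors stay small and the estimated sensitivity lands near the value used to generate the data, the toy ‘hindcast’ has passed; the article above describes the same kind of check, applied to real physics-based models and the real instrumental record.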
I strongly recommend you read the full article here. But I will republish this graph that, for me at least, is a ‘slam dunk’ in favour of modelling accuracy.

Not only does this show that the data fall within the range of the modelled projections; more worryingly, the observations sit right at the top end of the models’ predictions. The article closes with this statement:
Climate models have already predicted many of the phenomena for which we now have empirical evidence. Climate models form a reliable guide to potential climate change.
There is a more detailed version of the above article available here. Do read that if you want to dig further into this important topic. All I will do is republish this:
There are two major questions in climate modeling – can they accurately reproduce the past (hindcasting) and can they successfully predict the future? To answer the first question, here is a summary of the IPCC model results of surface temperature from the 1800s – both with and without man-made forcings. All the models are unable to predict recent warming without taking rising CO2 levels into account. No one has created a general circulation model that can explain climate’s behaviour over the past century without CO2 warming. [my emphasis, Ed.]
Finally, back to Lack of Environment. On the 6th February, 2012, Martin wrote an essay, Climate science in a nut fragment. Here’s how that essay closed:
Footnote:
If I were to attempt to go even further and summarise, in one single paragraph, why everyone on Earth should be concerned about ongoing anthropogenic climate disruption, it would read something like this:

Concern over anthropogenic climate disruption (ACD) is not based on computer modelling; it is based on the study of palaeoclimatology. Computer modelling is based on physics we have understood for over 100 years and is used to predict what will happen to the atmosphere for a range of projections for CO2 reductions. As such, the range of predictions is due to uncertainty in those projections; and not uncertainties in climate science. Furthermore, when one goes back 20 years and chooses to look at the projection scenario that most-closely reflects what has since happened to emissions, one finds that the modelled prediction matches reality very closely indeed.
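To make that last point concrete, here is one more small illustration from me rather than from Martin. Every scenario name and number below is invented purely to show the logic: first find the old projection whose emissions assumption turned out closest to what actually happened, and only then judge the model against that projection’s prediction.

```python
# A minimal sketch of the "pick the scenario that matched reality" check.
# The scenario names and every figure here are made up to show the logic;
# they are not real IPCC numbers.

scenarios = {
    # name: (assumed cumulative emissions over the period, predicted warming)
    "low emissions":    (250.0, 0.25),   # hypothetical units: GtC, degrees C
    "medium emissions": (400.0, 0.40),
    "high emissions":   (550.0, 0.55),
}

actual_emissions = 410.0   # what "actually happened" in this toy example
observed_warming = 0.42

# Choose the scenario whose emissions assumption is closest to what occurred...
name, (assumed, predicted) = min(
    scenarios.items(), key=lambda item: abs(item[1][0] - actual_emissions)
)

# ...and only then compare its prediction with the observed change.
print(f"closest scenario: {name} (assumed {assumed} vs actual {actual_emissions})")
print(f"predicted warming {predicted} C, observed {observed_warming} C, "
      f"difference {abs(predicted - observed_warming):.2f} C")
```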
In his email, Martin included these bullet points.
- Concern over anthropogenic climate disruption (ACD) is not based on computer modelling.
- It is based on our understanding of atmospheric physics (and how the Earth regulates its temperature).
- Computer modelling is based on this physics (which we have understood for over 100 years).
- Models have been used to predict temperature and sea level rise for a range of projections for CO2 emissions.
- The wide range of predictions was due to uncertainty in those emissions projections, not uncertainties in climate science.
- This can be demonstrated by looking at predictions made over 20 years ago in light of what actually happened to emissions.
- The model predictions for both temperature and sea level rise are very accurate (if anything, slightly under-estimating what has happened).
Sort of makes the point in spades! The sooner all human beings understand the truth of what’s happening to our planet, the sooner we can amend our behaviours. I’m going to pick up the theme of behaviours in tomorrow’s post on Learning from Dogs.
Finally, take a look at this graph and reflect! This will be the topic that I write about on Thursday.
