Jack W. Dini
Livermore, CA
From: Plating & Surface Finishing, May 2005
What grade would you give someone who was correct 20 percent of the time? Not a passing grade, for sure. Yet being right 20 percent of the time got some authors published in the prestigious journal Science. (1) They were trying to account for the decline in global temperatures from the end of World War II until the late 1970s. As an aside, in case you don’t remember, the ’70s were the years when we were supposedly headed for an ‘ice age.’ Newsweek highlighted this with an article titled “The Cooling World.” (2) Anyhow, getting back to the present: it turns out that computer models have a difficult time producing cooling with the multitude of variables in the mix. The authors of the Science article, Delworth and Knutson, found that all they had to do was run their model many times, compare the output with the observed temperature history, tweak some of the input, and go back for another run. After five such runs they concluded, “in one of the five GHG [greenhouse gases]-plus-sulfate integrations, the time series of global mean surface air temperature provides a remarkable match to the observed record, including the global warmings of both the early (1925-1944) and latter (1978 to the present) parts of the century. Further, the simulated spatial pattern of warming in the early 20th century is broadly similar to the observed pattern of warming.” (1)
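For readers who want to see this selection effect in miniature, here is a toy sketch in Python of the procedure just described: run a simple stand-in “model” several times with tweaked inputs, then report only the run that best matches observations. The “model,” its parameters, and every number below are invented for illustration; none of this is drawn from Delworth and Knutson’s actual code or data.

```python
import random

random.seed(42)

# Hypothetical "observed" global temperature anomalies, one value per decade.
observed = [-0.2, -0.1, 0.0, 0.1, 0.0, -0.1, 0.1, 0.3, 0.5]

def run_model(trend, sulfate_cooling):
    """Stand-in for a 'GHG-plus-sulfate integration': a warming trend
    minus an aerosol offset, plus noise. Purely illustrative."""
    return [trend * i - sulfate_cooling + random.gauss(0, 0.05)
            for i in range(len(observed))]

def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# Five integrations with tweaked inputs; keep whichever scores best.
runs = [run_model(random.uniform(0.02, 0.09), random.uniform(0.0, 0.2))
        for _ in range(5)]
best_error = min(rmse(r, observed) for r in runs)
print(f"Best of 5 runs, RMSE = {best_error:.3f}")
# Reporting only the best of five runs says nothing about the other four.
```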
In discussing this work, Robert Davis says the following: “Yes, it’s possible to get a model to reproduce anything you choose merely by tweaking a few parameters and running it enough times. But the model that reproduces the temperature history screws up precipitation, and the model that gets rainfall correct can’t generate the proper wind or pressure fields. The reason is actually quite plain: We don’t understand the physics of the atmosphere well enough to model climate change. That is the grim reality that at least four out of five climate models chose to ignore.” (3) John Christy adds: “Keep firmly in mind that models can’t prove anything. Even when a model generates values that appear to match the past 150 years, one must remember that modelers have had 20 years of practice to make the match look good. Is such model agreement due to fundamentally correct science or to lots of practice with altering (or tuning) the sets of rules in a situation where one knows what the answer should be ahead of time?” (4)
Science writer James Trefil echoes this thought. “After you’ve finished a model, you would like to check it out. The best validation is to apply the simulation to a situation where you already know the answer. You could, for example, feed in climate data from one hundred years ago and see if the GCM predicts the present climate. The fact that GCMs can’t do this is one reason I take their predictions with a grain of salt.” (5) A comparison of nearly all of the most sophisticated climate models with actual measurements of current climate conditions found the models in error by about 100 percent in cloud cover, 50 percent in precipitation, and 30 percent in temperature change. Even the best models give temperature-change results that differ from each other by a factor of two or more. (6)
Reliability Is in Question
While on the topic of global warming, which in large part has been made a major scientific and political issue because of complex models, here are other examples of the poor predictive performance of some of those models:
• The models that served as the scientific background for the 1992 Rio Treaty implied that the world should have warmed 1.5°C since the late 19th century. In actuality, the world has warmed only 0.5°C, so the models were off by a factor of 3. (7)
• As computer simulations have become more sophisticated, projections of rising sea levels have become much smaller. A 25-foot increase predicted in 1980 fell to three feet by 1985, and then to one foot by 1995. (8)
• Computers forecast a warming of the troposphere of 0.224°C per decade, when actual measurements showed a warming of only 0.034°C per decade. Predictions were off by almost a factor of 7. (9)
• Computer models of ocean circulation did not predict temperature changes that occurred in the deep sea south of the Aleutian Islands. Keay Davidson observes: “At the very least, the findings indicate that computer models of ocean circulation—which are vital for monitoring climate change—are badly in need of a tune-up. The discovery was not explicitly predicted by any known computer models of ocean circulation.” (10)
• The buildup of atmospheric carbon dioxide has slowed during the past 10 years. Original predictions had the concentration reaching up to 600 ppm by the year 2100, but that projection has been reduced to only 500 ppm. (11)
• Atmospheric temperatures in the stratopause and mesopause regions (the atmospheric layers at about 30 and 50 miles altitude, respectively) at the Earth’s poles were found to be about 40 to 50 degrees F cooler than model predictions. (12)
• Jane Shaw reports that because “computers have to treat large areas of the earth as if they are on one elevation, their findings don’t give good descriptions of regions that may be hundreds of miles wide. Mountain ranges have an enormous impact on climate; their cooler air causes snow and rain to fall, drying out the air as it moves over the mountains. Yet most computer models do not distinguish mountain ranges from prairies. The building blocks for the models are not fine-grained enough; the mountains have to be flattened in the models and the valleys filled in. The predictions for the wet, mountainous forests of the Pacific Northwest are not much different than the predictions for the dry desert in Nevada. Because they are unable to make such distinctions, the climate descriptions may be distorted.” (13) Here’s an example. Martin Wild and his colleagues recently proposed that melting over Greenland should remain negligible, even with doubled carbon dioxide. (14) Why the big difference from past assessments? The short answer is resolution, as discussed above. Even the best models end up representing Greenland as a gently rounded mound rather than as a steep-walled mesa, and because melting takes place only at lower elevations, the area prone to melting gets exaggerated in the models (a toy sketch following this list illustrates the effect). (15) So is Greenland really melting? Here’s some data that I bet you haven’t heard: the West Greenland Ice Sheet, the largest mass of polar ice in the Northern Hemisphere, has thickened by up to seven feet since 1980. (16)
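To see how flattening the terrain changes the answer, consider this toy sketch in Python. The elevation profile, the single-cell averaging, and the melt line are all invented assumptions for illustration; no real Greenland data or climate model is involved.

```python
# Hypothetical elevation samples (meters) across an ice sheet:
# steep walls with a high, flat interior -- a "mesa" shape.
fine_grid = [100, 2900, 3000, 3100, 3200, 3100, 3000, 2900, 100]

# A coarse model cell replaces the whole span with its average elevation,
# turning the mesa into a low, flat mound.
coarse_grid = [sum(fine_grid) / len(fine_grid)] * len(fine_grid)

MELT_LINE_M = 2500  # hypothetical elevation below which surface melt occurs

def melt_fraction(profile, melt_line=MELT_LINE_M):
    """Fraction of the profile lying below the melt line."""
    return sum(1 for h in profile if h < melt_line) / len(profile)

print(f"Fine grid:   {melt_fraction(fine_grid):.0%} of area is melt-prone")
print(f"Coarse grid: {melt_fraction(coarse_grid):.0%} of area is melt-prone")
# Output: 22% vs. 100% -- the flattened grid wildly exaggerates melt area.
```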
Other Examples
Global warming isn’t the only situation where computer models exhibit shortcomings. The best model available at the time of the Chernobyl accident did not describe a major feature of the radioactivity deposition 80 miles northeast of the plant, and it was mostly in this region that children ingested or inhaled radioactive iodine and developed thyroid cancers. (17)
The plume from the Kuwait oil fires (February 1991 to October 1991) was reasonably well described by predictions, but some individual deviations, where air masses turned westward over Riyadh in Saudi Arabia, were not well predicted even after the event. (17)
A program researchers were using to study the effects of airborne soot on human health produced erroneous results that went unchecked for years. A team in Canada estimates it will have to revise its figures for the impact of airborne soot on mortality downward by 20 to 50 percent. Other groups throughout the world using the same tool are now redoing their calculations. (18)
Stuart Beaton and his colleagues note that an EPA model, which treats all cars of a given model year as having the same odometer reading, the same annual mileage accumulation, and an equal likelihood of emission control problems, has little success in predicting urban on-road vehicle emissions. This leads them to conclude, “lack of linkage between EPA’s model and real-world measurements leads to inappropriate policy decisions and wastes scarce resources. If we want to maintain public support for programs that claim to reduce air pollution, those programs must do what they claim in the real world, not just in the virtual world of the computer modeler.” (19)
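A toy calculation makes the point concrete. In the Python sketch below, every number is invented: it simply contrasts a fleet in which every car is assumed identical with a skewed fleet in which a small fraction of “gross polluters” dominates total emissions. It illustrates the general criticism, not EPA’s actual model.

```python
# Invented numbers throughout; the general criticism in miniature.
FLEET_SIZE = 1000

# Uniform assumption: every car of the model year emits the same 1.0 unit.
uniform_total = FLEET_SIZE * 1.0

# Skewed "real-world" fleet: 10% are gross polluters emitting 8.0 units;
# the remaining 90% are clean cars emitting 0.3 units.
gross, clean = int(FLEET_SIZE * 0.1), int(FLEET_SIZE * 0.9)
skewed_total = gross * 8.0 + clean * 0.3

print(f"Uniform-model total: {uniform_total:.0f} units")
print(f"Skewed-fleet total:  {skewed_total:.0f} units")
print(f"Share from the dirtiest 10%: {gross * 8.0 / skewed_total:.0%}")
# The totals are similar, but the uniform model cannot even represent a
# policy that targets the small fraction of cars doing most of the polluting.
```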
Jerry Dennis reported this about the Great Lakes: “One recent computer model projected a period of drought and heat continuing through the twenty-first century, resulting in even lower water levels. Another predicted more heat and precipitation, resulting in the Great Lakes staying at the same level or even rising a foot or so above average.” (20) Take your pick.
A Sacrilegious Thought
Naomi Oreskes and her co-authors argue that large computer models with multiple inputs should probably never be considered ‘validated.’ They contend that verification and validation of models of natural systems are impossible because natural systems are never closed, and because models are always non-unique. “Models can only be evaluated in relative terms, and their predictive value is always open to question.” (21) They quote Nancy Cartwright, who has said: “A model is a work of fiction.” (22)
While not necessarily accepting Cartwright’s viewpoint, Oreskes et al. compare a model to a novel. Some of it may ring true and some may not. “How much is based on observation and measurement of accessible phenomena, how much is based on informed judgment, and how much is convenience? Fundamentally, the reason for modeling is a lack of full access, either in time or space, to the phenomena of interest.” (21) It’s obvious that in some cases we still have a long way to go with modeling.
References
1. Thomas L. Delworth and Thomas R. Knutson, “Simulation of Early 20th Century Global Warming,” Science, 287, 2246, March 24, 2000
2. Peter Gwynne, “The Cooling World,” Newsweek, 85, 64, April 28, 1975
3. Robert E. Davis, “Playing the numbers with climate model accuracy,” Environment & Climate News, 3, 5, July 2000
4. John R. Christy, “The Global Warming Fiasco,” in Global Warming and Other Eco-Myths, Ronald Bailey, Editor (Roseville, CA: Prima Publishing, 2002), 15
5. James Trefil, The Edge of the Unknown (New York: Houghton Mifflin Company, 1996), 46
6. Jay Lehr and Richard S. Bennett, “Computer Models & The Need For More Research,” Environment & Climate News, 6, 12, July 2003
7. Robert W. Davis and David Legates, “How Reliable Are Climate Models?,” Competitive Enterprise Institute, June 5, 1998
8. “The Global Warming Crisis: Predictions of Warming Continue to Drop,” in Facts on Global Warming (Washington, DC: George C. Marshall Institute, October 15, 1997)
9. TSAugust, www.tsaugust.org/Global%20Warming.htm, accessed January 19, 2004
10. Keay Davidson, “Going to depths for evidence of global warming,” San Francisco Chronicle, A4, March 1, 2004
11. Jane S. Shaw, Global Warming (New York: Greenhaven Press, 2002), 23
12. C. S. Gardner, et al., “The temperature structure of the winter atmosphere at the South Pole,” Geophysical Research Letters, 29(16), 1802, August 28, 2002
13. Jane S. Shaw, Global Warming, 60
14. Martin Wild, et al., “Effects of polar ice sheets on global sea level in high-resolution greenhouse scenarios,” Journal of Geophysical Research, 108, No. D5, 4165, 2003
15. David Schneider, “Greenland or Whiteland?,” American Scientist, 91, 406, September-October 2003
16. David Gorack, “Glacier melting: Just a drop in the bucket,” Environment & Climate News, 2, 6, May 1999
17. Richard Wilson and Edmund A. C. Crouch, Risk-Benefit Analysis, Second Edition (Cambridge: Harvard University Press, 2001), 74
18. Jonathan Knight, “Statistical error leaves pollution data up in the air,” Nature, 417, 677, June 13, 2002
19. Stuart P. Beaton, et al., “On-Road Vehicle Emissions: Regulations, Costs and Benefits,” Science, 268, 991, May 19, 1995
20. Jerry Dennis, The Living Great Lakes (New York: St. Martin’s Press, 2003), 137
21. Naomi Oreskes, et al., “Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences,” Science, 263, 641, February 4, 1994
22. Nancy Cartwright, How the Laws of Physics Lie (Oxford: Oxford University Press, 1983), 153