PART1: The perplexing temperature data published 1974-84 and recent temperature data.

Posted by Frank Lansner (frank) on 13th July, 2010


(Part 1)
1) Introduction
 1.1) The “divergence problem”
 1.2) The 95% confidence limit from CRU / Phil Jones
 1.3) The “perplexing” cooling 1930s – 1970s … not so “perplexing” anymore?
2) Reasons to reject original temperature/unadjusted data
 2.1) Reasons that - according to CRU, NOAA, GISS etc. - lead to a significantly warmer temperature trend.
(Part 2)
3) Presentation of some of the temperature data sets used.
 3.1) Angel and Korshover 1975
 3.2) Chen 1982
 3.3) Folland 1984 / Ocean temperatures
 3.4) Hansen 81
 3.5) Jones 82
 3.6) Vinnikov1980
 3.7) Yamamoto 1975
(Part 3)
4) NH temperature ensemble anomalies
 4.1) Land-Air Temperature minimum and Ocean-Water Temperature minimum
 4.2) Natural influences on NH temperatures
5) Other temperature series
 5.1) NCEP and ERA-40 temperatures
 5.2) ERA-40
 5.3) Temperature Proxies
 5.4) RATPAC
6) Estimate of NH Land temperatures
(Part 4)
7) Estimate of NH Land+Sea temperatures 1900 - today
8) The Land-Ocean temperature equilibrium, and UHI.
9) How much land area is there on the Northern Hemisphere?
10) Final words
1) Introduction
In this writing I aim to explore the best (latest) temperature data published before the global warming movement grew strong in the mid 1980s. Thus, my focus is mostly the rather modern temperature data published 1974-84, which I will refer to as “the original temperature data”. I will then compare these original temperature data with the modern versions (from CRU, Hadley, GISS, NOAA etc.) to examine the size of the adjustments to temperature data and, in general, learn whatever can be learned.
Due to the availability and quality of data, temperatures of the Northern Hemisphere 1930-80 are the main focus of this writing.
In 2007 the IPCC actually made a similar comparison between old temperature data and more recent temperature data from the hands of CRU, Hansen and Jones. The IPCC published this graphic showing a number of quite old (pre-1960) temperature series covering different large areas of the globe:
fig 1.
The IPCC then was able to conclude:
IPCC: “While the data and the analysis techniques have changed over time, all the time series show a high degree of consistency since 1900.”
The IPCC finds a consistency worth mentioning despite different areas, data coverage and methods. Older data seems to support newer data and vice versa.
So why examine the correlation between old and new temperature data again in this writing?
- Because the best and latest pre-global-warming temperature data published 1974-84 are not included in the IPCC graphic, and thus the latest data the IPCC graphic displays (except for the Hansen and Jones publications 1986-87) end as early as 1960.
fig 2.
To illustrate what appears to be missing in the IPCC graphic on the left, a row of original temperature data 1974-84 is shown on the right. Just like the IPCC graphic on the left, the graphic on the right covers large areas of the globe, mostly the Northern Hemisphere, but the original temperature series proceed to around 1980 instead of ending in 1960. (The fat blue line in both graphics is the Budyko data, with the extension represented by the Angel and Korshover data. I have highlighted this dataset because the Budyko data is used in both graphics, and thus this dataset allows us to verify that the comparison of the two graphics is fair. The Angel-Korshover dataset is quite representative of other NH graphs 1958-80. In the right graphic, the Budyko-Angel-Korshover series is shown as a 5-year running mean.)
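For reference, a 5-year centered running mean of the kind used for the blue line can be computed as below. This is only a sketch; the anomaly values are made up for illustration and are not the actual Budyko/Angel-Korshover data.

```python
def running_mean(values, window=5):
    """Centered running mean; None where a full window is not available."""
    half = window // 2
    out = []
    for i in range(len(values)):
        if i < half or i + half >= len(values):
            out.append(None)  # incomplete window at the series edges
        else:
            out.append(sum(values[i - half:i + half + 1]) / window)
    return out

# Made-up annual anomalies (deg C), for illustration only
anoms = [0.1, 0.2, 0.0, -0.1, -0.2, -0.1, 0.0]
smooth = running_mean(anoms)
print([round(v, 3) if v is not None else None for v in smooth])
# → [None, None, 0.0, -0.04, -0.08, None, None]
```

A centered mean loses `window // 2` points at each end, which is why smoothed curves in such graphics stop short of the last data year.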
For the above illustration I threw a bunch of temperature graphs into one rough and mixed graph, just to indicate why I felt something was missing in the IPCC graphic:
fig 3.
I will obviously go into much more detail on several of the datasets used.
1.1) The “divergence problem”
A core issue of Climategate is the hiding of “the decline”. It appears that tree ring temperature proxy data were cut off at 1960 in IPCC graphics, and thus these temperature data only show the 1940-60 part of the temperature decline after the 1930s.
A “divergence problem” has been invented, where tree ring data is supposed to erroneously show a decline in temperatures after 1960. But the decline in the tree ring temperature indicator after 1960 appears to bear some resemblance to the declining original temperature trends after 1960 – which the IPCC did not show either:
fig 4.
The tree ring trends (from CRU) – the blue lines show the tree density and ring width. See also: http://www.klimadebat.dk/forum/vedhaeftninger/osborn99.jpg
(The divergence problem illustrated in context with other parameters:
Taken from : http://wattsupwiththat.com/2009/04/11/making-holocene-spaghetti-sauce-by-proxy/)
Is there a significant “divergence” between temperatures and tree ring data?
1.2) The 95% confidence limit from CRU / Phil Jones
When addressing the temperature decline of the 1930s-1970s in debates, I have been told that there is hardly any difference between the Jones 1982 data and CRUTEM3 (land temperatures) today. You see, “the Jones 1982 temperature data lies within the range of the CRUTEM3 NH 95% confidence limit”:
fig 5.
And therefore there is no problem at all. But this “95% confidence limit” happens to allow very different slopes for the temperature decline after the 1930s, and thus the “95% confidence limit” argument appears meaningless in this context.
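A bit of back-of-the-envelope arithmetic shows why: if the confidence band has a half-width of, say, 0.1 °C (an assumed figure, for illustration only), two straight trend lines can both stay inside the band over a 40-year interval while their slopes differ by as much as 4 × 0.1 / 40 = 0.01 K/year – the same order of magnitude as the disputed decline itself. A sketch:

```python
def max_slope_difference(half_width, years):
    """Largest slope difference (K/year) between two straight lines that both
    stay within a band of +/- half_width (deg C) over `years` years: one line
    can run from +h down to -h while the other runs from -h up to +h."""
    return 4.0 * half_width / years

# Assumed half-width of 0.1 deg C over a 40-year interval (illustration only)
print(max_slope_difference(0.1, 40))  # 0.01 K/year
```

So "lies within the 95% limit" is compatible with substantially different cooling slopes, which is the point being made above.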
1.3) The “perplexing” cooling 1930s – 1970s … not so “perplexing” anymore?
The above downplaying of the temperature decline after 1930 is in contrast to the viewpoints of the 1970-85 period; here are a few quotes:
U.S. National Science Board, 1974:
“During the last 20 to 30 years, world temperature has fallen, irregularly at first but more sharply over the last decade”
Phil Jones, 1985, about the temperature decline after the 1930s:
“No satisfactory explanation for this cooling exists, and the cooling itself is perplexing because it is contrary to the trend expected from increasing atmospheric CO2 concentration. Changing Solar Activity and/or changes in explosive volcanic activity has been suggested as causes… but we suspect it may be an internal fluctuation possibly resulting from a change in North Atlantic deep water production rate.”
So, Jones said in 1985 that “the cooling itself is perplexing” – but why not say so today? And why don’t we see a “perplexing” cooling after 1940 in the IPCC graphic today? Furthermore, back in the early 1980s Jones appears to accept the data as is, at least to such an extent that he is considering how nature could have produced these “perplexing” cooling data – like a real scientist should.
By 1986, it seems that Jones increasingly distances himself from the older data:
“The method of Vinnikov et al. (1980), involving nearly 1200 maps, is both time-consuming and subjective. The results could not practically be repeated even if the precise data sources were known.
Yamamoto's (1981) results are not strictly comparable to the other analyses discussed here because a zero anomaly value was assumed for all grid points where interpolation could not be made.” Etc.etc.
But in 1982 Jones wrote:
 “The high correlations in table 2, particularly those with Budyko (1969) and Vinnikov et al. (1980) supports the reliability of our results”. (Jones in 1982 describes both strengths and weaknesses of his methods and results.)
Jones describes how the 1985-86 data show much less cooling than the 1980-82 data:
“A cooling of about 0.2 °C is evident between 1940 and 1965. The magnitude of this cooling in the present analysis is considerably smaller than in the earlier analyses of Vinnikov et al. (1980), Hansen et al. (1981) and Jones et al. (1982), amounting to about 0.3 °C in those studies.”
2) Reasons to reject original temperature/unadjusted data
When the original temperature data show too little warming to fully support the global warming hypothesis, there are numerous explanations why all these datasets “don’t count”. So before looking at the original temperature data, let’s first take a brief look at the typical reasons given to reject them.
One might find some of these reasons reasonable, but there is an overall problem: how come so many different technical issues just happen to produce “errors” in the same way, yielding a common error trend with less warming? A common error trend for radiosonde balloon error types, ground station error types, sea surface and marine air error types, and then tree ring error types (and also satellite error types)?
(And at the same time, for the MWP, temperature trends change in favour of a new common trend supporting IPCC viewpoints, despite the many different techniques and thus error types.)
2.1) Reasons that - according to CRU, NOAA, GISS etc. - lead to a significantly warmer temperature trend.
1) “Larger number of temperature stations leads to a warmer trend”.
This is in fact close to nonsense. Any extra temperature station added could lead to a colder trend just as well as a warmer trend.
See my comments on “Angel and Korshover 75”
2) “The ‘Northern Hemisphere’ in some studies is only 17.5N-90N!”
It is obviously correct that a full 0-90N would be best for a “NH” dataset. But one should not forget that when using a GISS 1200 km zone around temperature stations, the 90N-17.5N coverage becomes more like 90N-7N. In this writing numerous full 0N-90N series are used for the NH – and they don’t differ significantly from the 17.5N-87.5N series (Vinnikov 1980).
Here Chen 1982 is comparing such areas:
Fig 6.
And it appears that the slope for 17.5N-87.5N is -0.013 K/year while 0N-90N yields a -0.014 K/year slope.
So this geographical difference yields only a rather small difference in trend.
A better argument applies if a large continuous area is added to a dataset. For example, the Southern Hemisphere has a smaller temperature decline 1930-1970 than the Northern Hemisphere, and thus NH, SH and global areas are expected to show somewhat different results. In the present writing the NH has the biggest focus.
3) “This ‘Northern hemisphere’ does not include ocean areas”.
This is claimed for Jones 1982, but it’s only a half-truth. Jones 1982 does include a significant amount of sea area, however definitely not all of the NH ocean area. See more under “Jones 82”. (After finishing the present writing it appears clearly that the temperatures measured from land are somewhat similar to marine air temperatures but significantly different from SST – surface water temperatures.)
4) “Moving stations out of town to avoid UHI explains warming corrections”
Again, this is really nonsense. Despite the relocation of some temperature stations, UHI still induces far too much warming in temperature data in general worldwide.
Thus: any correction in connection with UHI should overall be towards colder temperature trends. If you make a warm correction due to stations moved out of town, you should make a much larger cold correction for the much larger UHI effect. Globally, the UHI effect is generally very much larger than the effect of relocations.
5) “Temperature stations moved to higher altitude explains warming corrections”
Nonsense. If you place a station at a higher altitude, the temperature is likely to decrease and should be corrected. So we have a worldwide trend during the 20th century where all countries, starting in the year 1900, independently of each other started to move their temperature stations up the hills?? I think any such altitude correction applied globally needs to be confirmed by strong statistical data showing that, in general, temperature stations have been moved up in altitude. Has anyone published on this subject? I would be surprised...
6) “New Method, like different new grid method etc, happens to yield more warming in data.”
When shifting grid method, each new grid will just as often lead to a colder result as to a warmer one. “New grid types” etc. standing alone is not an explanation of warmer results – such an argument should be accompanied by a real reason for the added warming trend.
Jones 1982 compares different grid types and finds no statistically significant difference in the results:
fig 7.
And Jones 1985: “The three independent analyses of Vinnikov et al (1980), Hansen et al (1980) and Jones et al (1982) are compared in Wigley et al (1985a,b). The different series have at least 95% variance in common. As the data sources used by the various workers are so similar (we estimate that there is around 95% data overlap among the previously published analysis), the implication is that the method of gridding have little effect on annual hemispheric mean temperature estimates.”
(My comment would be that a 5x5 degree grid cell in the tropics is many times bigger than a 5x5 degree grid cell in the Arctic, so even the good grid methods are not a “perfect” approach in my view. Obviously each fraction of the globe should have the same area to carry the same weight...!)
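The size of that effect is easy to quantify: the area of a 5°x5° cell scales, to a good approximation, with the cosine of its central latitude, which is why area weighting is the standard remedy. A sketch (using the common cosine-of-center approximation; the exact cell area is proportional to the difference of the sines of the bounding latitudes):

```python
import math

def cell_weight(lat_center_deg):
    """Approximate relative area of a 5x5 degree cell centered at this latitude."""
    return math.cos(math.radians(lat_center_deg))

# A tropical cell centered at 2.5N versus an Arctic cell centered at 82.5N
ratio = cell_weight(2.5) / cell_weight(82.5)
print(round(ratio, 1))  # the tropical cell covers roughly 7.7x the area
```

An unweighted average over equal-angle cells therefore overweights the Arctic relative to the tropics by roughly this factor.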
7) “Some values and months were suspect, so we either took them out or adjusted them.”
Any single adjusted value could just as well turn out colder as warmer. This does not explain a warming trend.
If removing values really leads to a shift in the temperature trend, this should be investigated (!)
8) “Sparse data”: Again, this will lead to a colder trend as often as a warmer trend. This is not an argument for a significant warming trend on its own.
9) “When the SST peaked from 1941, this is because all over the world, in all oceans at the same time, people in wartime dared not take these samples with a bucket of water (which required showing a light) – so they used an engine intake instead, and this gave the 1940s warming peak in SST.”
- See Folland 1984:
10) “The measuring time, TOBS, has changed, and it so happens that this gave too warm temperatures earlier, so we must add warming to the later data.”
It’s true that if you measure temperatures earlier in the morning, then you will have to correct for this cooling. I discussed this with a nice, intelligent believer in GW, a scientist, and he says that this is actually the case.
”There has been some change to machine measuring of temperatures and now they are taken at night…”
But taking temperatures at night instead of during the day results in a difference of perhaps 10-20 full degrees Celsius, so I am afraid this still sounds like nonsense. And normally there are both night and day measurements.
What we need here is solid, independent documentation that all over the world, in countries rich and poor, the Time of OBServation was actually shifted synchronously to slightly earlier, to explain the worldwide TOBS warming corrections. Remember, these temperature data were taken in 1930, 40, 50, 60, 70 – at times when temperature data were just for trivial weather use. So why would poor countries prioritize new equipment etc.? And if new machinery was introduced, how come they did not set the machines to measure at the same time as before? How come there is a worldwide trend that they just happen to set the machines to measure a little earlier?
Before accepting such a coincidence – which just happens to yield yet another reason to add warming to the data – I would like to see an independently made worldwide graph of a still earlier TOBS, in order to evaluate this apparently rather odd reason to reduce the 1930-70 decline in temperatures.
11) A-lot-of-postulates-and-claims-hard-to-prove-right-or-wrong:
For example, Jones’s claim about Vinnikov’s maps being “subjective” etc. Perhaps a valid point, but impossible for a reader to deal with, and it should not be accepted until a straightforward explanation is given.
12) “this dataset has ‘well known’ problems”
The fact that claims concerning a dataset have been repeated often is not an argument, obviously.
13) “Other datasets that this study uses are wrong, and they just happen to show too much cooling trend 1930-70, so there should be more heat trend added”
(- It could be NCAR, SST data from Metoffice etc used in NCEP, Raobcore, ERA-40, IGRA and many ocean temperature datasets.)
This “argument” pushes the reader to investigate whatever the reason for rejecting data was in those other studies. Probably the reason given for problems in other datasets is one of the 12 reasons mentioned above. The reader is left with a time-consuming job if he wants to evaluate this “argument”.
Finally, if you then explain why a certain temperature dataset should be rejected using most of reasons 1) to 13), any reader will be knocked out and simply has to reject the study as told, even though it actually looked like good science to begin with.




Last changed: 14th July, 2010 at 22:06:36



All wet By Unknown on 4th January, 2011 at 19:26:45
Excellent comment. Something like the "Humidex" is needed.

Also about the 95% confidence level Jones touts: this is a garbage standard, open to wide and easy abuse and error. From data snooping to publication bias to ... it's used only by the softest of pseudo-sciences.
I fear the whole argument is MEANINGLESS! By Unknown on 24th July, 2010 at 15:24:42
"Average Temperature".

Hum, my "Average Blood Pressure" is...such and such.

Meaningless. I cannot "average" an intensive variable.

Let's take a concrete example here: 85 F, 70% RH. BTU per cubic foot of air, 38. 105 F, 10% RH (PHX), BTU per cubic foot, 38.

Atmospheric energy = HIGHER at LOWER TEMPERATURE.

Therefore TEMPERATURES ARE MEANINGLESS when it comes to the trends in atmospheric energy, without HUMIDITY data for each and every TEMPERATURE POINT.

An exercise in futility.

Sorry, I'm a stick in the mud.


Max Hugoson, Minnesota (That's the 84 F, 70% realm.)
