Latest News (hidethedecline)
Original Temperatures: Iceland
|Posted by Frank Lansner (frank) on 7th April, 2014|
This writing builds on the methods and conclusions from the summary article:
See also original temperature data from Sweden and Norway presented recently:
Iceland temperature data receives special focus here due to the island's Arctic location and the much-debated GISS versions of the Reykjavik and Akureyri station data.
For Iceland it has been possible to collect meteorological yearbook data from:
1912-1919 : Meteorological yearbooks from Denmark
1921-1997 : Original Iceland meteorological year books
Approx. 1960 – 2013 : Online meteorological data from the Icelandic Met Office:
The online data from the Icelandic met office matches data from the printed year books.
Thus, using original temperature data from Iceland, we can evaluate other data sources such as GISS and KNMI's ECA&D.
Illustration of the Iceland temperature stations used in this writing. Most stations are located near the coast, but fortunately the interior of Iceland is also represented.
An interesting result from Iceland is that the difference between coastal and non-coastal ("OAS") stations is small. In all other European areas analysed so far, the differences have been much larger.
The largest difference in trend was observed when comparing the 4 coastal stations of red area 6 in South Iceland with the 2 valley stations further from the coast, Akureyri and Grimsstadir.
We see that the coastal stations have smaller variance in temperature than the non-coastal stations.
The delay in the coastal temperatures is relatively small, just around one decade.
However, both the coastal and non-coastal stations used show that temperatures today resemble those of the mid-20th century.
Temperature average trends from all 7 areas along Iceland's coastline, plus the "OAA" series (stations marked black in fig. 1), show remarkable similarity.
In the following, the average of the 7 coastal areas (35 stations) will be labelled as “Coastal Avg.”.
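The area-then-grand averaging described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the station values are invented, and the two-step averaging (stations within an area first, then across areas) is an assumption about the method, chosen so that areas with many stations do not dominate the combined series.

```python
# Hypothetical sketch of forming a multi-area "Coastal Avg." series:
# average the stations within each area first, then average the area
# means. All station values below are invented for illustration.
def area_mean(series_list):
    """Element-wise mean of equally long annual temperature series."""
    n = len(series_list)
    return [sum(vals) / n for vals in zip(*series_list)]

# Two made-up areas with made-up annual mean temperatures (degrees C)
area_1 = [[4.1, 4.3, 4.0], [3.9, 4.1, 3.8]]                   # 2 stations
area_2 = [[2.8, 3.0, 2.7], [3.0, 3.2, 2.9], [2.6, 2.8, 2.5]]  # 3 stations

# Grand average across the per-area means
coastal_avg = area_mean([area_mean(area_1), area_mean(area_2)])
print([round(t, 2) for t in coastal_avg])  # [3.4, 3.6, 3.3]
```

Averaging areas rather than raw stations keeps a densely sampled area from outweighing a sparsely sampled one; in practice one would work with anomalies rather than absolute temperatures.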
According to KNMI's data, Reykjavik's temperatures show strong warming even after the 2003 warm peak (see the red graph).
The ECA jump in Reykjavik temperatures after 2004 is supported neither by the original Reykjavik data nor by the average of 35 coastal Iceland stations.
ECA has also warm-adjusted post-2004 temperatures for the coastal Stykkisholmur station. Likewise for Dalatangi.
For Vestmannaeyjar I have not found the original data after 2004, but we can see that ECA's data appear to depart from the average coastal trend around 2004 as well.
Since the Icelandic Met Office itself presents the 2004-2013 data online, this suggests that KNMI / ECA&D themselves made the adjustments ("homogenizations"?).
This Climate4you graphic shows quite well why the GISS adjustments in the Arctic have alarmed many sceptics. Blue represents original temperature data; red represents the GISS (NASA) version...
First, we can conclude that GISS has not produced a warm trend in the Reykjavik data in the same way KNMI's ECA&D did. GISS cold-adjusts older data, while ECA warm-adjusts recent temperatures. Thus, the GISS adjustments are not backed up by ECA, and vice versa.
Notice also that the GISS cold adjustment of the older 1930-1965 data makes the Reykjavik series strongly different from the average of the 35 Iceland coastal datasets.
Likewise, the GISS Akureyri adjustment makes the past colder and thus creates a warm trend after 1930 that was never there.
Last changed: 7th April, 2014 at 18:15:56
|Simple average||By Unknown on 3rd July, 2014 at 20:50:36|
|I have made the following consideration about average temperature.
You seem to be in possession of original untouched data which might be suitable for such analysis.
I cannot see that you have combined a larger bulk of your data in the way described below.
What do you think? (PS: I might be offline for some periods, as I start my holiday tomorrow.)
I wonder: are all these adjustments really necessary? How many well-designed and reliable points of measurement will you need to get a sufficiently accurate yearly average?
Or to be more precise: how many points of measurement will you need to get a measured yearly average with sufficiently low standard uncertainty to be able to detect a positive trend of 0.015 K/year (1.5 K/century)?
I consider the calculated average of a number of temperature readings, performed at a defined number of identified locations, to be a well-defined measurand. Hence, the standard uncertainty of the average value can be calculated as the standard deviation of all your measurements divided by the square root of the number of measurements (see the openly available ISO standard: Guide to the Expression of Uncertainty in Measurement).
Let us say that you have 1000 temperature measurement stations, which are read 2 times each day, 365 days each year. You will then have 730 000 samples each year.
(Let us disregard potential correlation for a moment.)
Assume that 2 standard deviations for the 730 000 samples is 20 K (this means that 95 % of the samples lie within a temperature range of 40 K).
An estimate of the standard uncertainty for the average value of all samples is then:
2 standard uncertainties for the average value = 2 standard deviations of all measurements / √(number of measurements)
= 20 K / √730 000 ≈ 20 K / 854 ≈ 0.02 K
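The arithmetic above is easy to check in a few lines of Python; this is a direct transcription of the commenter's assumed figures (1000 stations, 2 readings/day, 2σ = 20 K), not of any official dataset:

```python
import math

# Commenter's assumptions: 1000 stations x 2 readings/day x 365 days
n_samples = 1000 * 2 * 365          # 730 000 samples per year
two_sd_samples = 20.0               # 2 standard deviations of the samples, in K

# 2 standard uncertainties of the mean = 2*sigma / sqrt(n)
two_su_mean = two_sd_samples / math.sqrt(n_samples)
print(n_samples)                    # 730000
print(round(math.sqrt(n_samples)))  # 854
print(round(two_su_mean, 3))        # 0.023, i.e. about 0.02 K
```

Note that the 1/√n scaling assumes the samples are independent, which the commenter acknowledges by disregarding correlation for the moment.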
This means that a year-to-year variation in the average temperature larger than 0.02 K cannot reasonably be attributed to uncertainty in the determination of the average; such a variation can instead reasonably be attributed to the intrinsic variation of the measurand.
Further assume that 2 standard deviations of the yearly average temperature, measured at a high number of locations, is on the order of 0.1 K (the remaining variation of the feature when trends are removed). This means that 95 % of the calculated yearly average temperatures lie within +0.1 K to -0.1 K of the average of all yearly averages (if trends are removed).
Since the standard uncertainty of the measured average (0.02 K) is much less than the standard uncertainty of the feature we are studying (0.1 K), I regard the uncertainty as sufficiently low. Hence 1000 locations with 2 daily readings seem sufficient for the defined purpose.
However, the variation of the measurand (the yearly average of your temperature measurements) now seems too high to allow a trend of 0.01 K/year to be seen. One approach is then to calculate the average over several years. The standard uncertainty of the average temperature over a number of years will be equal to the standard deviation of the yearly average (0.1 K) divided by the square root of the number of years. Let us try an averaging period of 16 years: 2 standard uncertainties for the 16-year average temperature can then be calculated as 0.1 K / √16 = 0.1 K / 4 = 0.025 K.
If you choose an averaging period of 16 years, the standard uncertainty of the measured average value can be recalculated, as the number of measurements has increased 16-fold to 16 × 730 000 = 11 680 000. Two standard uncertainties for the average value are now 0.006 K. Hence the number of measurement locations can be reduced: even with as few as 250 measurement points, 2 standard uncertainties will be as low as 0.01 K.
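The 16-year averaging step can likewise be verified numerically, again using only the commenter's own assumed figures (2σ of the yearly averages = 0.1 K; 2σ of the raw samples = 20 K):

```python
import math

# Step 1: uncertainty of a 16-year average of yearly means (assumed 2 SD = 0.1 K)
two_sd_yearly = 0.1
years = 16
print(round(two_sd_yearly / math.sqrt(years), 3))    # 0.025

# Step 2: measurement uncertainty with 16 years of raw samples pooled
two_sd_samples = 20.0
n_1000 = 1000 * 2 * 365 * years                      # 11 680 000 samples
print(round(two_sd_samples / math.sqrt(n_1000), 3))  # 0.006

# Step 3: same calculation with only 250 stations
n_250 = 250 * 2 * 365 * years                        # 2 920 000 samples
print(round(two_sd_samples / math.sqrt(n_250), 3))   # 0.012, i.e. about 0.01 K
```

All three results match the figures quoted in the comment, confirming the arithmetic under the stated independence assumption.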
Consequently, it seems that we should need only on the order of 250 good temperature measurement locations to identify a trend in the average temperature. Adding more measurement locations does not seem to add significant value, as the year-to-year variation in temperature appears intrinsic to the average temperature rather than due to a lack of measurement locations. Hence the variation cannot be reduced by adding more measurements.
So, if the intended use of the data set is to monitor the development of the average temperature, all the operations performed on the data sets seem to be a waste of effort. The efforts to calculate temperature fields, compensate for the urban heat effect and estimate measurements for discontinued locations all seem meaningless. What should be done is to throw overboard all the questionable and discontinued measurement locations and keep on the order of 250 good temperature measurement stations randomly spread around the world.
|By Unknown on 2nd July, 2014 at 09:40:02|
|By Unknown on 2nd July, 2014 at 09:39:14|
|Dear unknown!||By Unknown on 14th June, 2014 at 20:33:58|
|Thank you for your input, very relevant!
I have been offline until just recently and I first see the comment now.
Where did you take the IMO data from, do you have a link?
And I can't see your name etc., so for now you are just "Unknown".
|An acknowledgement would be nice.||By Unknown on 23rd May, 2014 at 19:41:10|
|Did you see my earlier comment? I don't know. If you replied with "noted", I'd know. Is that too much to ask?|
|Comparisons of individual stations used by GISS to IMO originals||By Unknown on 17th May, 2014 at 09:38:17|
|Sources given in the comments section.
Holar Hornafirdi, Iceland
Grimsey is more complicated because some of the data has been shifted by 5 years in some datasets (probably through the WMO).