Deception Detection In Non Verbals, Linguistics And Data.

Australian Climate Data -- Big, Dirty, Biased and Manipulated.

The Australian climate data used by the BOM for creating trends is analysed and dissected. The results show the data to be biased and dirty, even up to 2010 at some stations, making it unfit for prediction or trend analysis.

In many cases the temperature sequences are strings of duplicates and duplicated runs which bear no resemblance to observational temperatures.

This data would have been thrown out in many industries, such as pharmaceuticals and industrial control, and many of the BOM's data handling methodologies would be unfit for most industries.

Dirty-data stations appear to have been kept in the network to counter the argument that the Australian climate network has too few stations (see Modeling and Pricing Weather-Related Risk, Antonis K. Alexandridis et al.).

We use forensic exploratory software (SAS JMP) to identify fake sequences, and also develop a technique, shown at the end of this post, that spotlights clusters of these sequences in time-series data. This technique, together with Bayesian and Decision Tree data mining, shows that BOM adjustments are the cause of fake, unnatural temperature sequences that no longer function as observational data, making the data unfit for trend or prediction analysis.


"These (Climate) research findings contain circular reasoning because in the end the hypothesis is proven with data from which the hypothesis was derived."

Circular Reasoning in Climate Change Research - Jamal Munshi


Before We Start -- The Anomaly Of An Anomaly:
One of the persistent myths in climatology is:

"Note that temperature timeseries are presented as anomalies or departures from the 1961–1990 average because temperature anomalies tend to be more consistent throughout wide areas than actual temperatures." --BOM


This is complete nonsense. Notice the weasel word "tend", which does not appear on the NASA web site. Where the BOM uses weasel words such as "perhaps", "may", "could", "might" or "tend", these are red flags and point to useful areas for investigation.

Using an arbitrarily chosen offset, the average of a 30-year block of temperatures, does not make the values "normal", nor does it give you any more data than you already have.

Plotting deviations from an arbitrarily chosen offset for a limited network of stations gives you no extra insight, and it most definitely does not mean you can extend the analysis to areas without stations, or make extrapolation any more legitimate, if you haven't taken measurements there.

Averaging temperature anomalies "throughout wide areas", if you only have a few station readings, doesn't give you any more accurate a picture than averaging straight temperatures.
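To see why, here is a minimal sketch (all values made up) showing that an anomaly series is just the raw series minus a constant baseline, so the trend is identical and no information is gained:

```r
# Converting a series to anomalies just subtracts a constant baseline mean,
# so it adds no information and leaves any trend unchanged.
# All values below are made up for illustration.
set.seed(1)
years    <- 1961:2020
temps    <- 20 + 0.01 * (years - 1960) + rnorm(length(years), sd = 0.5)
baseline <- mean(temps[years >= 1961 & years <= 1990])   # the 1961-1990 "normal"
anomaly  <- temps - baseline

# Identical trend (slope) either way; only the intercept shifts
coef(lm(temps   ~ years))["years"]
coef(lm(anomaly ~ years))["years"]
```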


Think Big, Think Global:

Let's look at Annual Global Temperature Anomalies. This is the weapon of choice when creating scare campaigns. It consists of averaging nearly a million temperature anomalies into a single number. (link)

Here it is from the BOM site for 2022. 

 


Data retrieved using the Wayback Machine consists of the 2010, 2014 and 2022 versions of the BOM page (the actual data only runs to 2020). Nothing earlier is available.

Below is 2010.




Looking at the two graphs you can see differences. There has been warming, but by how much?

Overlaying the temperature anomalies for 2010 and 2020 helps. 



BOM always state that their adjustments and changes are small, for example:

"The differences between ‘raw’ and ‘homogenised’ datasets are small, and capture the uncertainty in temperature estimates for Australia." -BOM


Let's create a hypothesis: every few years the temperature data is warmed up significantly, testable at the 95% confidence level (using the BOM's own critical percentages).

Therefore, 2010 < 2014 < 2020.
The null hypothesis is that the data come from the same distribution and are therefore not significantly different.

To test this we use:


Nonparametric Combination Test

For this we use the Nonparametric Combination Test, or NPC. This is a permutation-test framework that allows accurate combining of different hypotheses.

Pesarin popularised NPC, but Devin Caughey of MIT has the most up-to-date and flexible version of the algorithm, written in R (link).

Devin's paper on this is here.

"Being based on permutation inference, NPC does not require modeling assumptions or asymptotic justifications, only that observations be exchangeable (e.g., randomly assigned) under the global null hypothesis that treatment has no effect. It is possible to combine p-values parametrically, typically under the assumption that the component tests are independent, but nonparametric combination provides a much more general approach that is valid under arbitrary dependence structures." --Devin Caughey, MIT


As mentioned above, the only assumption we make for NPC is that the observations are exchangeable. It allows us to combine two or more hypotheses, while accounting for multiplicity, and to get an accurate overall p-value.

NPC is also used where a large number of contrasts are being investigated, such as in brain-imaging labs (link).
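For readers who want to see the mechanics, below is a minimal sketch in base R of the idea behind NPC: per-comparison permutation p-values combined with Fisher's combining function, with the combined statistic also referred to its permutation distribution. This is not the NPC package code used for the results below, and the anomaly vectors are placeholders:

```r
# Sketch of a permutation-based combination test (not the NPC package itself).
# 'anom_2010', 'anom_2014', 'anom_2020' are made-up stand-ins for the three
# retrieved versions of the Global Temperature Anomaly series.
set.seed(42)
anom_2010 <- rnorm(100, 0.40, 0.3)
anom_2014 <- rnorm(100, 0.45, 0.3)
anom_2020 <- rnorm(100, 0.47, 0.3)

one_sided_stat <- function(x, y) mean(y) - mean(x)   # H1: y is warmer than x

perm_null <- function(x, y, n_perm = 2000) {
  pooled <- c(x, y)
  n_x <- length(x)
  replicate(n_perm, {
    idx <- sample(length(pooled), n_x)               # random relabelling
    one_sided_stat(pooled[idx], pooled[-idx])
  })
}

p_from_null <- function(obs, null) (1 + sum(null >= obs)) / (1 + length(null))

obs1  <- one_sided_stat(anom_2010, anom_2014)
obs2  <- one_sided_stat(anom_2014, anom_2020)
null1 <- perm_null(anom_2010, anom_2014)
null2 <- perm_null(anom_2014, anom_2020)
p1 <- p_from_null(obs1, null1)
p2 <- p_from_null(obs2, null2)

# Combine the two partial tests with Fisher's combining function; the null
# distribution of the combined statistic also comes from the permutations.
fisher    <- function(p) -2 * sum(log(p))
null_comb <- mapply(function(a, b) fisher(c(p_from_null(a, null1),
                                            p_from_null(b, null2))),
                    null1, null2)
p_combined <- mean(null_comb >= fisher(c(p1, p2)))
round(c(p1 = p1, p2 = p2, p_combined = p_combined), 4)
```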


The results after running NPC in R, with our main result first:

2010 < 2014 results in a p-value of 0.0444.

This is less than our cutoff of 0.05, so we reject the null and can say that the Global Temperature Anomalies were warmed up significantly in the data between the 2010 and 2014 versions, and that the distributions are different.


The result for 2014 < 2020 has a p-value of 0.1975.

We do not reject the null here, so 2014 is not significantly different from 2020.

If we combine the p-values for the hypothesis (2010 < 2014 < 2020, i.e. increased warming in every version) with NPC, we get a p-value of 0.0686. This falls just short of our 5% level of significance, so we don't reject the null, although there is considerable evidence supporting the hypothesis.

The takeaway here is that the Global Temperature Anomalies were significantly warmed up between the 2010 and 2014 versions, after which they stayed essentially similar.


I See It But I Don't Believe It....

" If you are using averages, on average you will be wrong." (link)
    -- Dr. Sam Savage on The Flaw Of Averages


In earlier posts I showed the propensity of the BOM to copy/paste or alter temperature sequences, creating blocks of duplicate temperatures and sequences lasting a few days, weeks or even a full month. Surely they wouldn't have done this with the Global Temperature Anomalies, a really tiny data set, would they?




Incredible as it seems, we have a duplicate sequence even in this small sample. SAS JMP calculates that the probability of seeing this at random, given this sample size and number of unique values, is about the same as seeing 10 heads in a row in a coin-flip sequence. In other words, unlikely. More likely is the dodgy-data hypothesis.
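For reference, the coin-flip comparison is easy to make concrete:

```r
# Probability of 10 heads in a row with a fair coin
0.5^10   # = 1/1024, roughly 0.001
```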


The Case Of The Dog That Did Not Bark

Just as the dog that did not bark on a particular night was highly relevant to Sherlock Holmes in solving a case, it is important for us to know what is not there.

We need to know which values disappear and which ones suddenly reappear.


"A study that leaves out data is waving a big red flag. A
decision to include or exclude data sometimes makes all the difference in
the world."    -- Standard Deviations, Flawed Assumptions, Tortured Data, and Other Ways to Lie with Statistics, Gary Smith.




This is a summary of missing data, using Palmerville as an example. Looking at minimum temps first: the initial data the BOM works with is raw, and minraw has 4301 missing temps; minv1 followed as the first set of adjustments, and now we have 4479 temps missing. Around 178 temps went missing.

A few years later more tweaks are on the way, thousands of them; in version minv2 we now have 3908 temps missing, so 571 temps have been imputed or infilled.

A few more years later, technology has advanced sufficiently for the BOM to bring out a newfangled version, minv21, and now we have 3546 temps missing -- a net gain of 362 imputed temps. By version minv22 there are 3571 missing values, so a few more go missing again.
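A per-version missing-value summary like this is trivial to reproduce; a sketch with a hypothetical data frame (the column names are assumptions) is below:

```r
# Count missing values in each dataset version. 'palmerville' is a hypothetical
# data frame with one column per ACORN version; the tiny example below stands
# in for the real station file.
palmerville <- data.frame(
  minraw = c(12.1, NA,   13.4, NA, 11.8),
  minv1  = c(12.0, NA,   NA,   NA, 11.7),
  minv22 = c(12.2, 13.0, 13.5, NA, 11.9)
)
missing_by_version <- sapply(palmerville, function(x) sum(is.na(x)))
missing_by_version        # minraw 2, minv1 3, minv22 1 in this toy example
diff(missing_by_version)  # positive = values deleted, negative = values infilled
```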


Max values tell similar stories, as do the other temperature time series. Sometimes temps get added in through imputation, sometimes they are taken out. You would think that if you are going to use advanced imputation techniques you would fill in all the missing values -- why only do some? Likewise, why delete specific values from version to version?

It's almost as if the missing/added values help the hypothesis.



Below -- Let's stay with Palmerville for August: all the Augusts from 1910 to 2020. For this we will use the most basic of all data analysis graphs, the good old scatterplot, a display that shows the relationship between two numerical variables.




Above -- This is a complete data view of the entire time series, minraw and minv22. Raw came first (bottom, in red), so this is our reference. There is clustering at the ends of the raw graph as well as missing values around 1936 or so, and even at 2000 you can see horizontal gaps where decimal values have disappeared, so you only get whole-integer temps such as 15 C, 16 C and so on.

But minv22 is incredibly bad -- look at the long horizontal "gutters" or corridors that run from the 1940s to 2000 or so. Complete temperature ranges are missing, so 14.1, 14.2, 14.3, for example, might be missing for 60 years or so. It turns out that these "gutters" of missing temperature ranges were added in by the adjustments! Raw has been adjusted 4 times with 4 versions of state-of-the-art BOM software and this is the result -- a worse outcome.



January has no clean data, massive "corridors" of missing temperature ranges until 2005 or so. No predictive value here. Raw has a couple of clusters at the ends, but this is useless for the stated BOM goal of observing trends. Again, the data is worse after the adjustments.



March data is worse after adjustments too. They had a real problem with temperature from around 1998-2005.


Below -- Look at the data before and after adjustments. This is very bad data handling, and it's not random, so don't expect this kind of manipulation to cancel out.




More Decimal Drama:
You can clearly see decimal problems in this histogram. The highest dots represent the most frequently occurring temperatures, and they all end in decimal zero. This is from 2000-2020.







Should BOM Adjustments Cause Missing Temperature Ranges?

Data that disappears when it is purportedly being "corrected" or "aligned" to the network is biased data. When complete temperature ranges disappear for 60 years or more, it is biased. Data that is missing NOT at random is biased. Data that has different mean values depending on whether it is a Tuesday or a Friday or a Sunday is biased. Data that is infilled or imputed in a way that creates outliers is biased.


Here is the University Of Stockholm's adjustment statement:

"18700111-20121231 Correction for urban heat island trend and other inhomogeneities. This gives an average adjustment by -0.3 C both  May and August and -0.7 C for June and July. This adjustment is in agreement with conclusions drawn by Moberg et al. (2003), but have been determined on an ad hoc basis rather than from a strict statistical analysis."


And here is what the Swedish Scatterplot with Raw and Homogenised data looks like:



The data is all still there after adjustment; the adjusted months were just "slightly lowered", indicating a cooling adjustment.

Decimals have not gone missing in action and complete temperature ranges have not been altered or deleted.

Below -- How about decimals after adjustment? The BOM has such a problem with decimals, surely there must be an effect in Sweden too? Five decimal frequencies went up a bit, five went down. This is exactly what you would expect.







Sunday at Nhill = Missing Data NOT At Random
A bias is created when data is missing not at random (link).

Below - Nhill on a Saturday has a big chunk of data missing in both raw and adjusted.


Below: Now watch this trick -- my hands don't leave my arms -- it becomes Sunday, and voila -- thousands of raw temperatures now exist, but the adjusted data is still missing.






Below -- You want more, I hear. Now it's Monday, and voila -- thousands of adjusted temperatures appear!



The temperatures all reappear!


I know, you want to see more:

Below -- Mildura data Missing NOT At Random


Above -- Mildura on Friday with raw has a slice of missing data at around 1947, which is imputed in the adjusted data.


Below -- Mildura on a Sunday:


Above - The case of the disappearing temperatures, raw and adjusted, around twenty years of data.



Below -- Mildura on a Monday:


Above -- On Monday, a big chunk disappears in the adjusted data, but strangely the thin stripe of missing raw data at 1947 is filled in at the same location in minv22.


Even major centres like Sydney are affected, with missing temperature ranges over virtually the entire time series up to around 2000:



Below -- The missing data forming these gashes is easily seen in a histogram too. This is November in Sydney, with a histogram and scatterplot showing that some temps can go virtually unrecorded for 60-100 years!




More problems with Sydney data. My last posts showed two and a half months of data that was copy/pasted into different years.

This kind of data handling is indicative of many other problems of bias. 


Sydney Day-Of-Week effect

Taking all the September months in the Sydney time series from 1910-2020 shows Friday to have a significantly different temperature from Sunday and Monday.

The chance of seeing this at random is over 1000 to 1:



Saturday is warmer than Thursday in December too; this is highly significant.
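A day-of-week check like this can be sketched as follows; the data frame and column names are hypothetical, and the post's actual figures come from SAS JMP, not from this test:

```r
# Day-of-week check on September temperatures. 'sydney' is a hypothetical data
# frame standing in for the real station file. A Kruskal-Wallis test asks
# whether the weekday groups share the same distribution -- for genuine
# observational data they always should.
dates  <- seq(as.Date("1910-01-01"), as.Date("2020-12-31"), by = "day")
sydney <- data.frame(date   = dates,
                     minv22 = rnorm(length(dates), mean = 12, sd = 3))  # placeholder temps
sept         <- subset(sydney, format(date, "%m") == "09")
sept$weekday <- factor(format(sept$date, "%u"))    # 1 = Monday ... 7 = Sunday
kruskal.test(minv22 ~ weekday, data = sept)
```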





Never On A Sunday.

Moree is one of the best worst stations. It doesn't disappoint, with a third of the time series disappearing on a Sunday! But first, Monday to Saturday:

Below -- Moree on a Monday to Saturday looks like this. Forty-odd years of data are deleted going from raw to minv1, then reappear in versions minv2, minv21 and minv22.





Below -- But then Sunday in Moree happens, and a third of the data disappears! (except for a few odd values). 




A third of the time series goes missing on Sunday! It seems the Greek comedy film Never On A Sunday, with the Greek prostitute Ilya attempting to relax Homer (but never on a Sunday), has rubbed off onto Moree.



Adjustments create duplicate sequences of data

Below -- Sydney shows how duplicates are created with adjustments:

The duplicated data is created by the BOM with its state-of-the-art adjustment software; they seem to forget that this is supposed to be observational data. Different raw values turn into a sequence of duplicated values in maxv22!







Real Time Data Fiddling In Action:

Duplicate sequences go up and down in value... then a single value disappears!


Maxraw (above) has a run of 6 temperatures at 14.4 (there are others above it, but for now we look at this one). At version minv1 the sequence is faithfully copied; at version minv2 the duplicate sequence changes by 0.2 (still duplicates, though) and a value is dropped on Sunday the 18th. By version minv21 the "lost value" is still lost and the duplicate sequence goes down in value by 0.1, then goes up by 0.3 in version minv22. So that single solitary value on Sunday the 18th becomes a missing value.

Duplicate sequences abound: looking up the series (above) you see more duplicate runs, and this carries on beyond the snapshot. Many of the sequences have what appear to be made-up or fabricated numbers, with short runs and an odd value appearing or disappearing in between.
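Runs of identical consecutive values like these are easy to flag programmatically; here is a minimal sketch using base R's run-length encoding, with a made-up temperature vector standing in for a station column:

```r
# Flag runs of identical consecutive temperatures with run-length encoding.
# 'temps' is a made-up stand-in for an ACORN station column.
temps <- c(14.4, 14.4, 14.4, 14.4, 14.4, 14.4, 15.1, 14.6, 14.6, 14.6, 16.2)
runs  <- rle(temps)
long  <- which(runs$lengths >= 3)   # any run of 3 or more identical days
data.frame(value  = runs$values[long],
           length = runs$lengths[long],
           start  = cumsum(c(1, head(runs$lengths, -1)))[long])
```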



A Sly Way Of Warming:

Two last examples from Palmerville, one showing a devious way of warming: copying from March and pasting into May!





"Watch out for unnatural groupings of data.
In a fervent quest for publishable theories—no
matter how implausible—it is tempting to tweak the data to provide
more support for the theory and it is natural to not look too closely if
a statistical test gives the hoped-for answer.

    -- 
Standard Deviations,Flawed Assumptions, Tortured Data, and Other Ways to Lie with Statistics, Gary Smith.



"In biased research of this kind, researchers do not objectively seek the
truth, whatever it may turn out to be, but rather seek to prove the truth of what they already know to be true or what needs to be true to support activism for a noble cause (Nickerson, 1998)."    -- Circular Reasoning In Climate Change Reasearch, Jamal Munishi




The Quality Of BOM Raw Data


We shouldn't be talking about "raw" data at all, because it's a misleading concept...


"Reference to Raw is in itself a misleading concept as it often implies
some pre-adjustment dataset which might be taken as a pure
recording at a single station location. For two thirds of the ACORN SAT
there is no raw temperature series but rather a composited series
taken from two or more stations."    -- the BOM


"Homegenization does not increase the accuracy of the data - it can be no higher than the accuracy of the observations. " (M.Syrakova, V.Mateev, 2009)


So what do these composites look like? A simple thing most data analysts do at the exploratory stage is to look at the distribution with a histogram.

Here's Inverell:


You don't have to be a data scientist to see that this is a problem: there appear to be two histograms superimposed. We are looking at maxraw on the X axis and frequency (occurrences) on the Y axis.

This tells us how often each temperature appeared. The gaps show problems in the decimal usage: some temps appear a lot, then 5 don't appear often, then one appears a lot, then four don't appear often. We have a high, 5-low, high, 4-low sequence.



These histograms cover the entire time series. Now, we know decimalisation came in around the 1970s, so this shouldn't happen in the more recent decades, correct?


Here we have what appear to be three histograms merged into one, and that is in the decade 2010-2020. Even at that late stage in the game, the BOM is struggling to get clean data.

In fact, the problem here is more than decimalisation -- it is double-rounding imprecision, where Fahrenheit is rounded to the nearest 1 degree, then converted to Celsius and rounded to 0.1 precision, creating an excess of 0.0 decimals and a scarcity of 0.5s (in this example); different rounding scenarios exist that created different decimal scarcities and excesses in the same time series!

The paper is: "Decoding the Precision of Historical Temperature Observations" -- Andrew Rhines, Karen A. McKinnon, Peter Huybers.

These different double-rounding scenarios put some records in doubt (see the paper above).
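A quick simulation of one such scenario (whole-degree Fahrenheit readings later converted to 0.1 C precision) shows how a decimal imbalance arises. This particular chain reproduces the scarcity of 0.5s; other rounding chains discussed in the paper produce other imbalances, such as an excess of 0.0s:

```r
# Double-rounding demonstration: readings taken to the nearest whole degree
# Fahrenheit, then converted to Celsius and rounded to 0.1 C precision.
set.seed(1)
f_obs    <- round(runif(100000, min = 40, max = 100))   # whole-degree F readings
c_conv   <- round((f_obs - 32) * 5 / 9, 1)              # converted to 0.1 C
decimals <- round((c_conv * 10) %% 10)                  # first decimal digit of each value
table(decimals)   # the 0.5 decimal is essentially absent in this scenario
```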

This is what it looks like, plotting decimal use per year by frequency:




Certain decimals are used more (or less) in certain years and decades.
It's obvious most stations have neither raw nor clean data, and what they have doesn't even look like observational data.



Adjustments, Or Tweaking Temperatures To Increase Trends.


 "For example, imagine if a weather station in your suburb or town had to be moved because of a building development. There's a good chance the new location may be slightly warmer or colder than the previous. If we are to provide the community with the best estimate of the true long-term temperature trend at that location, it's important that we account for such changes. To do this, the Bureau and other major meteorological organisations such as NASA, the National Oceanic and Atmospheric Administration and the UK Met Office use a scientific process called homogenisation." -- BOM


First of all, how are climate adjustments done in other countries? The University Of Stockholm has records going back nearly 300 years.

Here are their adjustments:
"18700111-20121231    -- Correction for urban heat island trend and other inhomogeneities."

"This gives an average adjustment by -0.3 C both  May and August and -0.7 C for
June and July. This adjustment is in agreement with conclusions drawn by Moberg
et al. (2003), but have been determined on an ad hoc basis rather than from a
strict statistical analysis."

The scatterplot of adjustments over raw is shown below. The adjustments were all done to combat the urban heat island effect.



What this shows is a "step change" in the early years -- at, say, 1915 you can see a single continuous adjustment; then at 1980 we can see that a few temperature ranges are adjusted differently, as mentioned in their readme above where they talk about May, June, July and August.



How does the BOM deal with adjustments for bias? The condescending quote under the header above prepares you for what is coming:



A "step change" would look like the arrow pointing at 1940. But look at around 1960--there are a mass over adjustments covering a massive temperature range, its a large hodge-podge of specific adjustments for specific ranges. If we only look at 1960 in Moree:





Look at the intricate overlapping adjustments -- the different colours signify different sizes of adjustment in degrees Celsius (see the table on the right side of the graph).

The BOM would have us believe that these chaotic adjustments, for just 1960 in this example, are the exact and precise adjustments needed to correct biases.

A more likely explanation based on the Modus Operandi of the BOM is that specific months and years get specific warming and cooling to increase the desired trend. Months like August and April are "boundary months" and are consistently warmed or cooled more depending on the year.

These are average monthly adjustments; getting down to a weekly or daily view really brings out the chaotic nature of the adjustments, as the scatterplots show.


Nhill maxv22 adjustments over raw below:




Nhill minv22 adjustments over raw is below:



Palmerville is below; the arrow and label point to a tiny dot with a specific adjustment for a specific week/month.




The problems here are that:
1 -- most of the warming trend is created by adjustments, and this is easy to see.
2 -- the BOM tells us the adjustments are crucial because there are so many cases of vegetation growing, stations moving to airports, observer bias, unknown causes, cases where they think the data should be adjusted because it doesn't look right, and so on.

Looking at the scatterplots you can clearly see that adjustments are not about correcting "step changes" and biases. Recall, as we saw above, that in most cases the adjustments make the data worse by adding duplicate sequences and other biases. Benford's Law will also be used later to show that compliance falls after adjustments, indicating data problems.


Biases that are consistent are easily dealt with:

"Systematic bias as long as it does not change will not affect the changes in temperature. Thus improper placement of the measuring stations result in a bias but as long as it does not change it is unimportant. But any changes in the number and location of measuring stations could create the appearance of a spurious trend."    --Prof Thayer Watkins, San Jose University.


The Trend Of The Trend


"Analysis has shown the newly applied adjustments in ACORN-SAT version 2.2 have not altered the estimated long-term warming trend in Australia." -- BOM


Long term trends are pooled data and:
".....pooled data may cancel out different individual signatures of manipulation."
-- (Diekmann, 2007)


But version 2.2 does change the trend at some individual stations -- see below, where version 2.2 changes the trend of version 2.1.





And version 2 can change the trend of version 1 as well:



And versions 1, 2, 2.1 and 2.2 can also change the trend of raw:





Adjustments: Month-Specific, Adding Outliers And Trends


August and April are "boundary" months on the edge of summer and winter, and often get special attention: warming or cooling depending on whether it is before or after around 1967. Columns are months and frequencies (occurrences).


The above shows how the largest cooling adjustments at Bourke get hammered into a couple of months. It shows months and frequencies -- how often adjustments of this size were made. It makes the bias adjustments look like what they are: warming or cooling enhancements.

More of the same:



Strange how so many stations all have problems in April and August.
Below is a violin plot, which shows graphically where most adjustments go. Ignore the typo saying "May"; as you can see, it is August.
The horizontal axis is months; the vertical Y axis is the amount of adjustment in degrees Celsius.

If the distribution is thick, as at the top of April, it means a lot of the distribution resides there -- many occurrences of warming adjustments. The long tails (spikes) indicate outliers, so August has many larger adjustments (up to -4 C).
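The underlying summary is easy to reproduce; below is a hedged sketch (the data frame and column names are assumptions) using a boxplot per month as a simple stand-in for the violin plot:

```r
# Distribution of adjustments (homogenised minus raw) by calendar month.
# 'station', 'maxraw' and 'maxv22' are hypothetical names.
dates   <- seq(as.Date("1940-01-01"), as.Date("1970-12-31"), by = "day")
station <- data.frame(date = dates, maxraw = rnorm(length(dates), 25, 5))
station$maxv22 <- station$maxraw + rnorm(length(dates), 0, 0.5)  # placeholder adjustments
station$adjust <- station$maxv22 - station$maxraw
station$month  <- factor(format(station$date, "%m"))             # 01 = Jan ... 12 = Dec
boxplot(adjust ~ month, data = station,
        xlab = "Month", ylab = "Adjustment (deg C)",
        main = "Adjustment size by month")
```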



We can prove the causality of adjustments with Bayesian and Decision Tree data mining (later).



Below we look at values that are missing in raw but appear in version 2.1 or 2.2.
This tells us they are created or imputed values. In this case the black dots were missing in raw but now appear, along with some outliers.




These outliers and values, by themselves, have an upward trend. In other words, the imputed/created data has a warming trend (below).

Adding outliers is a no-no in any data analysis. The fact is that only some values are created, and they seem to suit the purpose of warming, while other values are still missing from the time series. As we progress to further versions of the tweaking software, it is possible that new missing values will be imputed, or that other values will disappear.






First Digit Of Temperature Anomalies Tracked For 120 years


This paper from Scotland, along with its customised R software, tracks the distances tropical cyclones travel over time:

"Technological improvements or climate change? Bayesian modeling of time-varying conformance to Benford's Law", Junho Lee and Miguel de Carvalho (link)

It uses Benford's Law to check for compliance or deviation in the first digit of a hurricane's travel distance, and combines this with a Bayesian model. I wrote to the authors and they slightly modified the R code to run on ACORN anomalies.

This allowed me to check for scarcity or excess in the use of first digits in a time series over 120 years. Later on I will show how Benford's Law is justified here (researchers in Canberra have already shown temperature anomalies to be Benford-compliant; see Sambridge), but to keep the picture clearer we will minimise the data.
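As a simple static check (the paper's full analysis is a Bayesian time-varying model, which this sketch does not reproduce), the first significant digits of the anomalies can be compared with the Benford proportions log10(1 + 1/d); the anomaly vector here is a placeholder:

```r
# First-digit check of temperature anomalies against Benford's Law.
# 'anoms' is placeholder data; substitute the ACORN anomaly series here.
anoms <- rnorm(5000, mean = 0, sd = 1.2)
first_digit <- function(x) {
  x <- abs(x[!is.na(x)])
  x <- x[x > 0]
  # scientific notation puts the first significant digit in position 1
  as.integer(substr(formatC(x, format = "e", digits = 6), 1, 1))
}
d    <- first_digit(anoms)
obs  <- as.numeric(table(factor(d, levels = 1:9))) / length(d)
benf <- log10(1 + 1 / (1:9))          # Benford expected proportions
round(rbind(observed = obs, benford = benf), 3)
```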




The X axis is years; since the series begins in 1911, that year is indexed as 1. The arrows show where the years 1911-2019 are.

This shows the use of digits in the first (leading) position of the temperature anomalies over time. The digit 1 is underused until 2010, where it briefly becomes overused, then drops once again. This shows that the adjustments leave minv21 with too many leading digits of 2-4 and not enough 1s. This is consistent with other studies using Benford's Law showing that small values have been depleted in the first-digit position, thereby warming or cooling that part of the time series (depending on whether the temperature anomaly has a + or - in front of it).




The German Tank Problem


In World War II, each manufactured German tank or piece of weaponry was stamped with a serial number. Using serial numbers from damaged or captured German tanks, the Allies were able to estimate the total number of tanks and other machinery in the German arsenal.

The serial numbers revealed extra information, in this case an estimate of the entire population based on a limited sample.

This is an example of what David Hand calls Dark Data: data that many industries have and never use, but which leaks interesting information that can be exploited (link).

Now, Dark Data in the context of Australian climate data gives us extra insight into what the BOM is doing behind the scenes with the data... insight they are not aware of. So if dodgy work were being done, they would not be aware of any information "leakage".

A simple Dark Data scenario here is to take the first difference of a time series:

Get the difference between temperature 1 and temperature 2, then the difference between temperature 2 and temperature 3, and so on (below).



An example is above. If the difference between two consecutive days is zero, the paired days have the same temperature, so this is a quick and easy way to spot them.
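In base R this check is a one-liner with diff(); a sketch with made-up values:

```r
# First differences: flag consecutive days with identical temperatures ("diff0").
# 'dates' and 'temps' are made-up stand-ins for a real station series.
dates <- seq(as.Date("2000-01-01"), by = "day", length.out = 10)
temps <- c(18.3, 18.3, 19.1, 20.4, 20.4, 20.4, 19.8, 18.3, 18.3, 17.9)
d1    <- diff(temps)          # temp[i+1] - temp[i]
diff0 <- which(d1 == 0)       # index of the first day of each identical pair
data.frame(date = dates[diff0], temp = temps[diff0])
```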

Intuition says these zero-difference pairs should be scattered randomly, with no obvious clumps.


Above is De Kooy in the Netherlands, with a fairly even distribution. Sweden is very similar. "Diff0" in the graph means there is zero difference between a pair of temps using the first-difference technique above, i.e. the two days have identical temperatures.

Let's look at Melbourne, below:




The paired days with the same temperature are clustered in the cooler part of the graph, and taper off after 2010 or so.


Below is Bourke, and again you can see clustered data.






Below is Port Macquarie, and there is extremely tight clustering from around 1940-1970.



This data varies with the adjustments; in many cases there are very large differences before and after adjustment.

In the capital cities around 3-4% of the data is paired. Country stations can go up to 20% for some niche groups.

The hypothesis is this: The most heavily clustered data points are the most heavily manipulated data areas.

The paired clusters do not depend heavily on temperature but rather on adjustments -- this was identified as the causal link in the data mining analysis; more later.

Let's grab a random spot in the most heavily clustered area at Port Macquarie, 1940-1970.




Above is a temperature segment, and it is immediately apparent that many days have duplicated sequences.

From this we know:
1--the data is not observational.
2--it is heavily manipulated.
3--it is probably not "real".
4--it is certainly not climate readings taken from a thermometer.


More from Port Macquarie:


Here we have gaps of 1 and 3 between the sequences.


Below we have gaps of 8 between sequences!




Below -- now we have gaps of 2, then 3, then 4, then 5. You couldn't make this stuff up. Bear in mind that every time series has hundreds of these dodgy sequences!



It's pretty clear you couldn't use this data to model or predict anything. 

The climate data presented by the BOM has no integrity; it is heavily modified with each iterative version of the software, under the umbrella of "homogenisation".

The data cleaning procedures are non-existent -- sometimes data is imputed along with outliers, sometimes data is deleted when strategically advantageous, and along the way the data is continuously modified, creating duplicated runs and sequences, such that the data no longer has any of the characteristics of "naturally occurring numbers". The next post uses Benford's Law and other digit tests to show that the climate data is modelled output and not observational data.








 

