Deception Detection In Non Verbals, Linguistics And Data.

Australian Climate Data Has No Evidential Value, Part 2


On 13th November 2021 the temperature dropped to a maximum of 13 C and snow fell in Tasmania. The BOM played down the temperature drop, saying it wasn't unusual, it wasn't unprecedented, and that November had a "habit of keeping Tasmania guessing", and then used statistics for ALL the months, saying there had been snowfall 25 times since records began.

The minimum temperature dropped to just 2.9 C. Now we are told it was certainly not average for spring, and then comparisons were made in which one day was 36 C and the next was 17 C and so on, implying it wasn't that unusual. But hey, in October the rain was 70% over average, and it's in the top 10 rainfalls.





So How Unusual Was a Minimum Temp of 2.9 C in November in Tasmania?

The BOM say it was the first time since 1963. But that is not an accurate way to convey statistical information.

All the days of November since 1910 give a total of 3055 minimum temps.
Of those, 4 were 2.9 C or less. That is roughly a 1 in 764 event.
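
As a sanity check, the arithmetic is trivial to reproduce. A minimal sketch in Python, using only the two counts quoted above:

```python
# Empirical frequency of a November minimum of 2.9 C or less in Tasmania.
# The two counts come from the post itself; the rest is arithmetic.
n_days = 3055   # November daily minimums recorded since 1910
n_cold = 4      # of those, days at 2.9 C or below

rate = n_cold / n_days
print(f"Empirical probability: {rate:.5f} (about 1 in {round(1 / rate)})")
# prints: Empirical probability: 0.00131 (about 1 in 764)
```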

Clearly, the BOM were not promoting the unusual cold; in fact they said it wasn't unusual in November, when it actually was a rare event. Yet the 10th-highest rainfall in October, which was 70% above average (50%), was unusual.

The language they have been using since the early 2000s has been changing, becoming more biased and deceptive. This applies especially to new definitions created by the BOM, such as:

"Reproduction in climate science tends to have a
broader meaning, and relates to the robustness
of results." -BOM


"Robustness checks involve reporting alternative specifications that test the same hypothesis.  Because the problem is with the hypothesis, the problem is not addressed with robustness checks."


This is very evident when homogeneity is being discussed by the BOM: the language is framed to make it sound completely reasonable, and it's what they are not telling you that is the problem. Overly careful language, weasel words and especially contradictions are red flags that need to be investigated.

When a data set is described as "carefully curated", you know it's not. The sloppiest data handling in any organisation would not allow copy/pasting of sequential data, man-made patterns in data that has "never been altered", the inability to impute data consistently and correctly without creating outliers, a complete lack of multiplicity correction, or the use of software to pick up breaks at the 95% level across hundreds of time series, each longer than 35,000 days. No wonder every single time series has tens of thousands of adjustments, many with patterns and distortions made obvious by the high number of cumulative false positives; they would not be allowed in any other industry. A rough illustration of the multiplicity problem is sketched below.
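
To see why the lack of multiplicity correction matters, here is a minimal back-of-the-envelope sketch in Python. The window and series counts are illustrative assumptions, not BOM figures; the point is only that a 5% false-positive rate compounds quickly:

```python
# Illustrative multiplicity arithmetic: repeated breakpoint tests at the
# 95% level accumulate false positives. All counts below are assumptions.
alpha = 0.05        # false-positive rate of a single test at the 95% level
n_windows = 110     # e.g. one test per year on a century-long daily series
n_series = 100      # number of station series screened

per_series = alpha * n_windows          # expected spurious breaks per series
network_total = per_series * n_series   # expected spurious breaks overall
print(f"Spurious breakpoints per series: {per_series:.1f}")
print(f"Spurious breakpoints across the network: {network_total:.0f}")
```

Even under these mild assumptions, a perfectly homogeneous network would still yield hundreds of detected "breaks".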

Below: BOM homogenisation software.




"A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted a sudden drop in day time temperature an increase in minimum temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change."

Here we are told that a consistent bias that doesn't change is the reason temperatures from the early 1900s are not used. Yet where temperatures were blank and have been imputed, say at Port Macquarie, all 13,000 of them were created by software, outliers and all. Most stations use data from 1910 onward, and in some cases it is more accurate than current data!



Here Sydney data from 1920-1940 just passes the Benford's Law test with a p-value of 0.0511 (Pearson test), while the more recent Sydney data from 1980-2000 is completely non-compliant, well below the 0.05 threshold with a p-value of 0.0045.
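
For readers who want to reproduce this kind of check, here is a minimal sketch of a Pearson chi-square test of first digits against Benford's law. This is a generic implementation, not the exact procedure behind the figures above:

```python
import numpy as np
from scipy.stats import chisquare

def first_digits(values):
    """First significant digit (1-9) of each non-zero value."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    exponent = np.floor(np.log10(v))
    return (v / 10 ** exponent).astype(int)

def benford_test(values):
    """Pearson chi-square goodness of fit of first digits to Benford's law."""
    d = first_digits(values)
    observed = np.bincount(d, minlength=10)[1:]              # counts for 1..9
    expected = np.log10(1 + 1 / np.arange(1, 10)) * len(d)   # Benford shares
    return chisquare(observed, expected)

# usage (temps is an array of daily temperatures for one station and era):
# statistic, p_value = benford_test(temps)
```

The test simply compares observed first-digit counts with Benford's logarithmic shares; the same function can be run on any era's slice of a station's record, which is what makes the era-to-era comparison possible.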






Melbourne data for March 1910-1920 completely conforms to the Benford's Law single-digit test.





The March 1990-2000 data in Melbourne is hopelessly manipulated; testing the single digit produces a non-compliant output with a p-value of less than 0.0001.




Scatterplots Reveal Clumping of Temperatures.

Overused, clumped temperatures mean that other temperatures don't get used.

This is what the Simonsohn Number Bunching Test looks for: excessive bunching in histograms. I went over some examples for Sydney in my last post, but let's look at actual data from a temperature time series.
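
For those who want to experiment, here is a simplified, simulation-style sketch in the spirit of the Simonsohn test (not his exact procedure). It computes the average frequency of repeated values and compares it with a null in which the tenths digit is uniform given each reading's integer part, which is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_frequency(values):
    """Mean number of times each observation's value occurs in the sample."""
    _, counts = np.unique(np.round(values, 1), return_counts=True)
    return np.mean(np.repeat(counts, counts))

def bunching_test(temps, n_sim=5000):
    """Right-tail p-value for excessive bunching of tenth-degree readings."""
    temps = np.asarray(temps, dtype=float)
    observed = avg_frequency(temps)
    base = np.floor(temps)
    sims = np.empty(n_sim)
    for i in range(n_sim):
        # redraw the tenths digit uniformly, keeping each integer part
        sims[i] = avg_frequency(base + rng.integers(0, 10, temps.size) / 10)
    return observed, np.mean(sims >= observed)
```

A small p-value means the observed readings repeat far more often than the uniform-tenths null would allow.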

Generally, what you do right at the beginning of an analysis is to use a scatterplot and a histogram to see what the data looks like.

This is pretty much stats 101, and looking at Melbourne for all the days of March:




These are the actual temperature data points by year for the raw and adjusted data.
What we are looking at in the top scatterplot are "channels" or gaps running horizontally across the plot, left to right in time.

This is a beautiful thing, because these gaps were created by the adjustments; the raw data doesn't have them. Sometimes it does, but not in this case.
That means the gaps were added during adjustment. This is as plain as day, for all to see, with a stats 101 scatterplot.

What this shows is what my old histograms from a few blog posts ago showed: repeated temperatures and under-used temperatures. In this case we are looking at missing ranges of temperatures, for example no temperatures in the range 15.1-15.3 C until they suddenly pop up in 2000. A quick way to scan for such missing ranges is sketched below.
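
A minimal sketch of how one might hunt for these gaps programmatically, assuming `raw` and `adjusted` are arrays of the same station's daily values (the function name and temperature range are hypothetical, for illustration):

```python
import numpy as np

def missing_tenths(raw, adjusted, lo=5.0, hi=25.0):
    """Tenth-of-a-degree values present in the raw series but absent
    from the adjusted series, over a plausible temperature range."""
    grid = np.round(np.arange(lo, hi + 0.1, 0.1), 1)
    raw_vals = set(np.round(np.asarray(raw, dtype=float), 1))
    adj_vals = set(np.round(np.asarray(adjusted, dtype=float), 1))
    return [t for t in grid if t in raw_vals and t not in adj_vals]

# e.g. missing_tenths(raw_march, adj_march) might flag values like 15.1-15.3
```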



November in Melbourne is the same, with appearing and disappearing ranges of temperatures. Since some temperatures are never used while others are overused, this will be more evident with the histograms.


The same goes for April: man-made gaps in the adjusted data show temperature ranges that disappear. These patterns show that the data bears no resemblance to observational data; it is in fact computer modelled, and pretty sloppily at that, when you can see missing temperature ranges in a dead straight line and then tell the masses that this is the way nature works.

No surprise that all these time series are hopelessly non-compliant with Benford's Law and the Simonsohn Number Bunching Test. There is no doubt at all: Melbourne, like Sydney, has had large-scale fabrication.


Histograms Show A Different Perspective


Another basic technique every data analyst uses is the histogram. It's the first thing we do, and yet so much is revealed.



The above histogram shows frequency on the vertical Y axis (how often each temperature occurred), with the temperature values across the horizontal X axis. This is essentially a different view of the same thing, but with Canberra. You can see large vertical gaps or spaces where certain temperatures have never occurred (or only a few times) over thousands of days. You can also see how some temperatures are clumped together with a few others; these are overused.
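
A quick way to draw this kind of view yourself is a histogram with one bin per tenth of a degree, so that empty bins and overused spikes stand out. A minimal sketch, where `temps` is assumed to be an array of a station's daily values:

```python
import numpy as np
import matplotlib.pyplot as plt

def tenth_degree_histogram(temps, title=""):
    """One bin per 0.1 C: empty bins and isolated spikes become obvious."""
    temps = np.asarray(temps, dtype=float)
    bins = np.arange(temps.min(), temps.max() + 0.1, 0.1)
    plt.hist(temps, bins=bins)
    plt.xlabel("Temperature (C)")
    plt.ylabel("Frequency")
    plt.title(title)
    plt.show()
```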



The same applies to the histograms above: these all fail digit-analysis tests, indicating large amounts of fabrication.


Here the data looks better; although there are still gaps, the temperatures are not entirely missing. Data before 2000 for Canberra has little evidential value.

Part of the problem is that the BOM has no problem duplicating temperatures wherever it sees fit:


Here runs of duplicate temperatures have been created. The probability of seeing this in a random time series has been calculated by SAS JMP with a Rarity Value of 39.7, meaning it is as unlikely as seeing 39.7 heads in a row in a coin-flipping run. In other words, you can be essentially certain this is fabricated.
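
JMP's exact rarity calculation isn't documented here, but the scale is easy to reconstruct: a rarity of R corresponds to a probability of 2 to the power -R, the same as R heads in a row. A back-of-the-envelope sketch, with the day-to-day repeat probability as an explicit assumption:

```python
import math

def run_rarity(run_length, p_repeat=0.02):
    """Rarity (in 'coin flips') of run_length identical consecutive readings,
    assuming each day matches the previous day's value with probability
    p_repeat (an illustrative assumption, not a measured figure)."""
    prob = p_repeat ** (run_length - 1)
    return -math.log2(prob)

# e.g. run_rarity(8) ~ 39.5: a run of 8 identical readings at a 2% repeat
# chance is about as unlikely as 39-40 heads in a row
```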

There are thousands of duplicated short runs, and I have demonstrated many in another blog post, so I won't dwell too long on these.



Of interest are the specific months that receive the highest numbers of adjustments in the Canberra time series. September receives the most work, followed by August and October, which are red-flag months for most of the time series. These are highly tampered months.


The earliest data is from the 1940s, and that receives the largest number of adjustments, followed by the 1970s and 1980s. Tallying adjustments by month and by decade is straightforward, as sketched below.
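
A minimal pandas sketch, assuming a hypothetical DataFrame `df` with a DatetimeIndex and `raw` and `adjusted` columns (the column names and the change threshold are assumptions):

```python
import pandas as pd

def adjustment_counts(df, tol=0.05):
    """Count days whose adjusted value differs from raw, by month and decade."""
    changed = (df["adjusted"] - df["raw"]).abs() > tol
    by_month = changed.groupby(df.index.month).sum()
    by_decade = changed.groupby(df.index.year // 10 * 10).sum()
    return by_month, by_decade
```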


Nhill, That Most Fabricated Of Stations....


This station gets the award for the most manipulated and fabricated station out there, and that is saying something.

© ElasticTruth
