Deception Detection In Non Verbals, Linguistics And Data.

Australian Climate Data Has No Evidential Value, Part 2


On 13 November 2021 the temperature in Tasmania dropped to a maximum of 13 C and snow fell. The BOM played down the temperature drop, saying it wasn't unusual, it wasn't unprecedented, and that November had a "habit of keeping Tasmania guessing", and then used statistics for ALL the months, saying there had been snowfall 25 times since records began.

The minimum temperature dropped to just 2.9 C. Now we are told it was certainly not average for spring, and then comparisons were made to days when one day was 36 C and the next was 17 C and so on, implying it wasn't that unusual. But hey, in October the rain was 70% over average, and it's in the top 10 rainfalls.





So How Unusual Was a Minimum Temp of 2.9 C in November In Tasmania?

The BOM say it was the first time since 1963. But that is not an accurate way to convey statistical information.

All the days of November since 1910 give a total of 3,055 minimum temperatures.
Of those, 4 were 2.9 C or less. That is roughly a 1 in 764 event.
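As a quick sanity check on that figure, the arithmetic is just the empirical frequency (a minimal sketch; the two counts are the ones quoted above, not a fresh download of the BOM record):

```python
# Empirical rarity of a November minimum of 2.9 C or less in this record.
november_minima = 3055      # all November days since 1910
at_or_below_2p9 = 4         # days at or below 2.9 C

frequency = at_or_below_2p9 / november_minima
print(f"about 1 in {1 / frequency:.0f} November days")   # roughly 1 in 764
print(f"{frequency:.4%} of November days")               # about 0.13%
```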

Clearly, the BOM were not promoting the unusual cold; in fact they said it wasn't unusual for November, when it actually was a rare event -- yet the 10th-highest October rainfall, 70% above average (50%), was treated as unusual.

The language they have been using since the early 2000s has been changing, becoming more biased and deceptive. This applies especially to new definitions created by the BOM, such as:

"Reproduction in climate science tends to have a
broader meaning, and relates to the robustness
of results." -BOM


"Robustness checks involve reporting alternative specifications that test the same hypothesis.  Because the problem is with the hypothesis, the problem is not addressed with robustness checks."


This is very evident when homogeneity is being discussed by the BOM: the language is framed to make it sound completely reasonable, and it's what they are not telling you that is the problem. Overly careful language, weasel words and especially contradictions are red flags that need to be investigated.

When a data set is described as "carefully curated", you know it's not. The sloppiest data handling in any organisation would not allow copy/pasting of sequential data, man-made patterns in data that has "never been altered", the inability to impute data consistently and correctly without creating outliers,
a complete lack of multiplicity correction, and software picking up breaks at the 95% level across hundreds of time series each longer than 35,000 days. No wonder every single time series has tens of thousands of adjustments, many with patterns and distortions made obvious by the high number of cumulative false positives -- they would not be allowed in any other industry.

Below--BOM Homogenisation software.




"A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted a sudden drop in day time temperature an increase in minimum temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change."

Here we are told that a consistent, unchanging bias is the reason temperatures from the early 1900s are not used, yet where temperatures are blank and have been imputed, say at Port Macquarie, all 13,000 of them have been created by software, outliers and all. Most stations use data from 1910 onward, and in some cases that data is treated as more accurate than current data!



Here Sydney data from 1920-1940 just passes the Benford's Law test with a p value of 0.0511 (Pearson test), while the more recent Sydney data from 1980-2000 is completely non-compliant, well below the 0.05 threshold at 0.0045.






Melbourne March 1910-1920 completely conforms to the Benford's Law single-digit test.





The March 1990-2000 data in Melbourne is hopelessly manipulated; testing the single digit produces a non-compliant output with a p value less than 0.0001.




Scatterplots Reveal Clumping of Temperatures.

Overused, clumped temperatures mean other temperatures don't get used.

This is what the Simonsohn Number Bunching Test checks for -- excessive bunching in histograms. I went over some examples for Sydney in my last post, but let's look at actual data from a temperature time series.

Generally, what you do right at the beginning of an analysis is to use a scatterplot and a histogram to see what the data looks like.
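Here is a minimal sketch of that first look, assuming a hypothetical CSV called melbourne_march.csv with date, raw and adjusted columns of daily temperatures (the file and column names are illustrative, not the BOM's):

```python
# First-look plots for a daily temperature series: scatterplots over time and a histogram.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("melbourne_march.csv", parse_dates=["date"])  # hypothetical file and columns

fig, axes = plt.subplots(1, 3, figsize=(15, 4))

# Scatter: each point is one day's temperature. Horizontal "channels" (empty bands)
# running left to right in the adjusted panel are the gaps discussed below.
axes[0].scatter(df["date"], df["raw"], s=3, alpha=0.5)
axes[0].set_title("raw")
axes[1].scatter(df["date"], df["adjusted"], s=3, alpha=0.5)
axes[1].set_title("adjusted")
for ax in axes[:2]:
    ax.set_xlabel("year")
    ax.set_ylabel("temperature (C)")

# Histogram with a narrow bin width so over-used and never-used temperatures stand out.
axes[2].hist(df["adjusted"], bins=200)
axes[2].set_xlabel("temperature (C)")
axes[2].set_ylabel("frequency")

plt.tight_layout()
plt.show()
```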

This is pretty much stats 101, and looking at Melbourne for all the days of March:




These are the actual temperature data points by year for raw and adjusted data.
What we are looking at in the top scatterplot are "channels" or gaps running across horizontally, left to right in time.

This is a beautiful thing, because these gaps have been created by the adjustments; the raw data doesn't have them (sometimes it does, but not in this case).
This means the gaps were added during adjustments. It is as plain as day, for all to see, with a stats-101 scatterplot.

What this shows is what my old histograms from a few blog posts ago showed -- repeated temperatures and under-used temperatures. In this case we are looking at missing ranges of temperatures, for example no temperatures in the range 15.1-15.3 C, until they suddenly pop up in 2000.



November in Melbourne is the same, with appearing and disappearing ranges of temperatures. Because some temperatures are not used while others are overused, this will be more evident in the histograms.


The same goes for April: man-made gaps in the adjusted data show temperature ranges that disappear. These patterns show that the data bears no resemblance to observational data; it is in fact computer modelled, and pretty sloppily at that, when you can see missing temperature ranges in a dead straight line and the public is then told this is the way nature works.

No surprise that all these time series are hopelessly non-compliant with Benford's Law and the Simonsohn Number Bunching Test. There is no doubt at all that Melbourne, like Sydney, has had large-scale fabrication.


Histograms Show A Different Perspective


Another basic technique every data analyst uses is the histogram. It's one of the first things we do, and yet so much is revealed.



The above histogram shows frequency on the vertical Y axis -- how often a value occurred -- with the years across the horizontal X axis. This is essentially a different view of the same thing, but for Canberra. You can see large vertical gaps, where certain temperatures have never occurred (or occurred only a few times) across thousands of days. You can also see how some temperatures are clumped with a few others; these are overused.



Same above, these all fail digit analysis tests, indicating large amounts of fabrication.


Here the data looks better; although there are still gaps, temperatures are not entirely missing. Data before 2000 for Canberra has little evidential value.

Part of the problem is that the BOM has no problem duplicating temperatures wherever it sees fit:


Here runs of duplicate temperatures have been created. The probability of seeing this in a random time series has been calculated by SAS JMP as a Rarity Value of 39.7, equivalent to the chance of flipping 39.7 heads in a row. In other words, you can be all but certain this is fabricated.
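Taking the coin-flip analogy at face value (this is just the arithmetic it implies, not JMP's internal formula):

```python
# Probability equivalent of a rarity of 39.7, read as 39.7 fair coin flips all landing heads.
rarity = 39.7
p = 0.5 ** rarity
print(f"p = {p:.2e}")              # about 1.1e-12
print(f"about 1 in {1 / p:.2e}")   # roughly 1 in 900 billion
```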

There are thousands of duplicated short runs, and I have demonstrated many in another blog post, so I won't dwell too long on these.



Of interest are the specific months that receive the highest numbers of adjustments in the Canberra time series. September receives the most work, followed by August and October, which are red-flag months for most of the time series. These are highly tampered months.


The earliest data is in the 1940s, and that receives the largest number of adjustments, followed by the 1970s and 1980s.


Nhill, That Most Fabricated Of Stations....


This station gets the award for the most manipulated and fabricated station out there, and that is saying something.

Australian Climate Data Before 2000 Has No Evidential Value - Digit Analysis Proof.

Data analysis of climate stations from the Australian Bureau Of Meteorology (BOM) shows that:


1-- Most country stations before 2000 are so riddled with errors, fabrication and manipulation as to be worthless for any evidential purpose relating to long-term climate trends.


This is backed up by performing digit tests used in fraud analytics, such as Benford's Law and Simonsohn's Number Bunching Test, as well as frequency histogram analysis and pattern exploration within SAS's JMP software, and by using bootstrap resampling, a computational technique for estimating statistics of a population by sampling a data set with replacement.


Also, data mining with Decision Trees and Gradient Boosted Trees shows patterns in the data.


2-- The BOM claim that raw data has only had basic data cleaning and spatial adjustments, and then they say, "The often quoted 'raw' series is in fact a composite series from different locations." Raw is not raw; as I showed in my last blog post, it is a modelled output which fails digit analysis tests. On top of that, a series of further adjustments is done, which biases the data.


"The Bureau does not alter the original temperature data measured at individual stations." - BOM, 2014

"For two-thirds of the ACORN-SAT station series there is no raw temperature series, but rather a composited series taken from two or more stations." -BOM


3-- Missing values in the data can either be left missing or carefully imputed (infilled). We show that the BOM are not averse to making up values, copy/pasting blocks of temperatures into different dates; sometimes they delete data and sometimes they impute data, creating outliers. This is bad practice and biases the data.

BUT worst of all is data Missing NOT At Random... in other words, much of the missing data has structure, meaning it is missing for a reason, biasing the data further. State-of-the-art data mining applications such as TreeNet find structure in the missing data.


4-- The BOM claim homogeneity adjustments are needed because of changed conditions such as station moves, a new screen being installed, and so on.

"The differences between ‘raw’ and ‘homogenised’ datasets are small, and capture the uncertainty in temperature estimates for Australia." -BOM

And:

 "... a systematic shift of observing sites from post offices to airports, leads to apparent and spurious trends in the data."


But as Prof. Thayer Watkins of San Jose State University notes:

"A perplexing aspect of the global temperature data is that there is no measure of accuracy associated with each datum. Surely the earlier years with their fewer weather stations and less accurate instruments have less accurate values than the later years. However systematic but constant bias in the measurements is not really an issue. The concern is not with the level of the temperature but with the change in the level of the temperature. Systematic bias as long as it does not change will not affect the changes in temperature. Thus the improper placement of a measuring station results in a bias but as long as it does not change it is unimportant."


And further:

"Homegenization does not increase the accuracy of the data - it can be no higher than the accuracy of the observations. The aim of adjustments is to put different parts of a series in accordance with each other as if the measurements had not been taken under different conditions." (M.Syrakova, V.Mateev, 2009)


In fact, the medicine is worse than the disease here, with adjustments larger than any biases they are attempting to correct. Some stations have adjustments of over 10 C in specific years. Mildura has +12.80 C and -13 C adjustments in one time series, a range of over 25 C!

The BOM works with averaged averages, which hides a multitude of sins and makes the adjustments look smaller than they are. For example, Bourke has tens of thousands of adjustments, nearly all bigger than they claim:

"Bourke: the major adjustments (none of them more than 0.5 degrees Celsius) relate to site moves in 1994 (the instrument was moved  from the town to the airport), 1999 (moved within the airport grounds) and 1938 (moved within the town), as well as 1950s inhomogeneities that were detected by neighbour comparisons which, based on station photos before and after, may be related to changes in vegetation (and therefore exposure of the instrument) around the site."




Homogenisation has created an industry -- a large-scale warming and cooling adjustment industry that is tweaked and changed every few years. The IPCC has 22 models running at once; as Prof. Thayer Watkins notes, if this were a science, why would 22 different models be needed?

On top of that, instead of changing the model to fit the data, the data is being changed to fit the model!



5 -- We show that as well as adjustments, there are tweaks involving 10-15 day blocks that have linear relationships with the raw data. It appears linear regression is being used to "tweak" blocks of 10-15 days. Most of the linear relationships appear after 2000.
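One way to look for that pattern is to step through the series in 15-day blocks and regress adjusted on raw within each block; this is a sketch of my own, with hypothetical file names and illustrative thresholds, not the exact screening I ran:

```python
# Flag blocks where adjusted temperatures are an almost perfect linear function of raw ones.
import numpy as np
from scipy import stats

raw = np.loadtxt("raw_temps.csv")            # placeholder: one value per day
adjusted = np.loadtxt("adjusted_temps.csv")  # placeholder: same length as raw

block = 15
for start in range(0, len(raw) - block + 1, block):
    r = raw[start:start + block]
    a = adjusted[start:start + block]
    slope, intercept, r_value, _, _ = stats.linregress(r, a)
    if r_value ** 2 > 0.999 and abs(slope - 1.0) > 0.01:
        # A near-perfect fit with a slope away from 1 suggests the block was rescaled.
        print(f"days {start}-{start + block - 1}: slope {slope:.3f}, "
              f"intercept {intercept:.2f}, R^2 {r_value ** 2:.4f}")
```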


7-- We show most country stations are worthless for climate analysis before 2000, but some stations in particular are rotten even after that time. Stations such as Bourke, Mildura, Moree, Nhill and even Port Macquarie are so riddled with fabricated and error-ridden data that they are worthless for trend prediction. Digit analytics, bootstrap resampling, first differencing and data mining show that these stations should not be used for analysis.


8-- The dates 1953-1957 and 1963-1967 are significant in many time series and are identified with data mining software. These are the tipping points at which the earlier data is cooled and the later data is warmed. The same goes for certain months -- August, October and March in particular have been highly tampered with, and in some cases the day of the week shows biases too.


The myth created by the industry is that homogenisation is required for all stations due to changes of conditions, or because homogenisation software has picked up a "break" in the temperature time series -- note that 5% false positives are picked up automatically when running software at the 95% level of significance, and the effect is cumulative over subsequent runs.

This homogenisation process is the principal driver of warming. All stations receive tens of thousands of adjustments. In fact, adjusted data is in most cases less compliant with Benford's Law (used for fraud detection) and other digit tests than the raw data. The more the data is manipulated, the more biases are created.

a-- Biases are created with imputation/infilling, in some cases creating outliers (record temperatures), as well as data deletion and large-scale adjustments.
b-- Biases are created by sequentially running software without multiplicity corrections (see the sketch after this list).
c-- Biases are created by p-hacking, HARKing and publication bias. (link)
d-- Biases are created by fabricating data, copy/pasting sequences of temperatures into other months and years.
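To make point (b) concrete, here is a minimal simulation (my own illustration, not the BOM's detection code) of how many spurious "breaks" a 95% significance threshold flags when a simple breakpoint test is run many times on data that contains no breaks at all:

```python
# Repeatedly test break-free random data at the 95% level and count false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests = 500           # e.g. many stations times many candidate breakpoints
false_positives = 0

for _ in range(n_tests):
    series = rng.normal(loc=15.0, scale=3.0, size=365)   # synthetic temps with no break
    first, second = series[:182], series[182:]
    _, p_value = stats.ttest_ind(first, second)          # simple two-sample test at a candidate break
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} spurious breaks out of {n_tests} tests "
      f"({false_positives / n_tests:.1%}) with no correction")
# Roughly 5% of tests flag a break even though none exists; without a multiplicity
# correction these false positives accumulate across stations, runs and re-runs.
```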



Global warming Or Climate Change?

Google Ngrams (below) shows that the term "Climate Change" surfaced around 1992, after "Global Warming"; it was Republican pollster Frank Luntz who pushed the term "Climate Change" because it was thought to sound less severe than "Global Warming" at the time. (link)


Karl Popper, one of the most influential philosophers of science of the 20th century, is known for his premise that in empirical science a theory can never be proven, only falsified. A theory in which CO2 causes warming AND is also responsible for cooling cannot be falsified, and so is not science. (wiki)




Language has had a large part to play in the climate scenario, and the language from the BOM has become progressively more deceptive. As in police statement analysis, red-flag areas are always of special interest because they highlight where closer attention is needed.



Preliminary:

A Quick Look At An Australia Wide Warming Trend:

Or, What A Difference 10 Years Can Make.

Looking at the Australia-averaged warming trends: at the moment (if you got the memo) the consensus is 1.5 C of warming per 100 years; 2 years ago it was 0.9 C, and 10 years ago it was even lower, around 0.57 C. The Wayback Machine can capture the BOM's previous versions:


Above: Australia Averaged Temperature Anomaly of 1.5 C for 2021

Below: Australia Averaged Temperature Anomaly of 0.57 C for 2011


This means the reported Australia-wide warming trend is now roughly 2.6 times what it was 10 years ago (0.57 C to 1.5 C), as the BOM changes its mind and updates its modelling.


A study below compares 100 rural agricultural climate stations with 'minimal adjustments' to the BOM's "high quality" temperature series with 'many adjustments', and shows large biases in the "high quality" network -- on average 31% higher than the minimally adjusted stations:


Biases in the Australian High Quality Temperature Network

"This suggests that biases from these sources have exaggerated apparent Australian warming. Additional problems with the HQN include failure of homogenization procedures to properly identify errors, individual sites adjusted more than the magnitude of putative warming last century, and some sites of such poor quality they should not be used, especially under a 'High Quality' banner."



I downloaded temperatures for Berlin, Marseilles, Frankfurt, Amsterdam and various US areas to compare with Australia, and the level of adjustments and the creation of duplicated pairs of days with identical temperatures is dramatically higher in the Australian data.


Language can highlight problem areas:
1- "The Bureau does not alter the original temperature data measured at individual stations."  - BOM, 2014

 2 - "For two-thirds of the ACORN-SAT station series there is no raw temperature series, but rather a composited series taken from two or more stations."


3 - "Reference to ‘raw’ data is in itself a misleading concept...." 


What is being said is that the original observed data, aka RAW data, has never been altered by the BOM. Then we are told there isn't raw data for most of the stations; it's composited, i.e. modelled. (wiki: "In computer science, a composite data type or compound data type is any data type which can be constructed in a program...")

Then we are told, don't mention Raw, it's a misleading concept--it may get you thinking there is something less fabricated and more accurate than homogenised data.


Quick Look At Australia Averaged Anomaly Trends, Part 2


Since the main weapon in the BOM's arsenal is the global or Australia-averaged trend temperature, it all comes down to a single number without Confidence Intervals.
Looking at the difference between the 2011 and 2021 warming figures, and using the actual BOM data, another graph is presented below to show the extent of this "change of mind" the BOM had.



This histogram shows the 2011 warming data, while the blue is the 'updated' 2021 warming data. You can see the standard methodology -- the past is cooled even more, and the dates after the late 50s/60s are warmed even more. The original 2011 data has been deleted by the BOM from its website; voila, history re-written.

Another histogram, with binned values to smooth it out:




So this looks bad. But how bad is it?

Above are two Benford's Law curves based on the first digit of the temperature. I will go into more detail about Benford's Law later on; suffice to say, for this quick comparison, that Benford's Law provides a test for the value of the first digit. Most observational data complies with this log family of curves.

Taking the red dotted curve as our baseline, you can clearly see that the 2021 data has fewer ones (lower blue bar on the graph) than 2011, and more 4s, 5s, 6s, 7s and 9s.


The 2011 curve is roughly compliant; 2021 is non-compliant with a p value < 0.0001.


So in the 2021 curve, the first digit has fewer 1s and 2s and far more higher numbers than there should be. This shows more extreme numbers being used in the second graph, as the 1s and 2s are turned into 4s, 5s, 6s and 7s! This has warmed up the temperatures significantly, but it is also man-made, because it is far less compliant.

This is typical throughout the whole time series -- the more versions of temperature we get, the more extreme they are and the less compliant with Benford's Law they are.

Would its own mother recognise what 2011 has become?
Nope --



The distributions, and therefore the temperature trends, are different enough at the 95% level to reject the null of similarity. They are different, and any claim of only small changes between distributions is obviously not true.


"Bootstrapping is a statistical procedure that resamples a single dataset to create many simulated samples. This process allows you to calculate standard errors, construct confidence intervals, and perform hypothesis testing for numerous types of sample statistics. Bootstrap methods are alternative approaches to traditional hypothesis testing and are notable for being easier to understand and valid for more conditions." (link)

Bootstrapping is a modern, powerful statistical computation technique that overcomes many of the problems and assumptions of older methods of estimating statistics of a population. I use it on all comparisons as well as for creating Confidence Intervals, especially since the assumption of normality (used by older methods) does not hold for much climate data; see for example Port Macquarie in February, where the histogram shows "fat" or leptokurtic tails.
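A minimal sketch of that kind of bootstrap comparison, with two synthetic anomaly arrays standing in for the 2011 and 2021 series (the names and numbers are placeholders, not BOM data):

```python
# Percentile bootstrap confidence interval for the difference in means between two samples,
# with no normality assumption.
import numpy as np

rng = np.random.default_rng(42)
anomalies_2011 = rng.normal(0.0, 0.5, 1200)    # placeholder data; substitute the real series
anomalies_2021 = rng.normal(0.15, 0.5, 1200)

n_boot = 10_000
diffs = np.empty(n_boot)
for i in range(n_boot):
    resample_a = rng.choice(anomalies_2011, size=anomalies_2011.size, replace=True)
    resample_b = rng.choice(anomalies_2021, size=anomalies_2021.size, replace=True)
    diffs[i] = resample_b.mean() - resample_a.mean()

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% bootstrap CI for the difference of means: [{lo:.3f}, {hi:.3f}]")
# If the interval excludes zero, the two versions differ at the 95% level.
```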

 


These are extreme-weather-event tails, where the Normal Distribution and Standard Deviation are misleading or wrong -- Prof. Thayer Watkins:

"...because of this skewness it does not have a finite standard deviation and thus any sample estimates of the standard deviation of annual changes is meaningless." 


OK, so, maybe the old global data is bogged down with dodgy numbers in the first 90 years--maybe the most recent temperatures are similar between 2011 and 2021?




The difference of means between the two groups is not contained in either group's Confidence Interval at the 95% level, meaning the groups are different at the 95% level.

I turned the 2011 and 2021 distributions into time series models with a 10-year forecast at the end to compare them. The results are vastly different, but the important bit is to look at the green Confidence Intervals at the 95% level -- they are huge, and this is where the uncertainty is! The warming variation is massive, and you would be hard pressed to form a conclusion based on this series.

Below: 2011 and 2021 distributions turned into time series forecast model.




Notes On Benford's Law

I did go into depth on Benford's Law in my recent posts, but there I was using the climate industry's offsets, called temperature anomalies, to convert temperatures into a log-type curve suitable for Benford's Law.

Now, however, I have been using an easier and more effective method to allow temperatures to be analysed with Benford's Law -- First Difference the data, and it follows a log-type curve because the range has been increased.


Briefly, Benford's Law is the most widely used technique in the world for red-flagging suspect data; there are tens of thousands of pages describing it on the internet, it is used by tax offices to red-flag potential tax fraud, by law enforcement to detect money laundering, and so on.

Most observational data has enough range to be testable straight away -- the first digit of house street numbers conforms to Benford's Law because the numbers are observational and the range is large enough.

Now, the problem with temperature is that observed temps in climate are very narrow, let's say between -10 C and +50 C, give or take a few degrees. This range does not span 200-to-1 or more, so it won't fit a log-type curve.

A simple technique I developed and baseline-checked for accuracy involves First Differences, or First Differencing:

It is used in climate, economics and statistics, and involves subtracting the value that comes before. If yesterday was 25 C and today is 20 C, the difference is 5; this is done for all the sequential values in a time series.
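A minimal sketch of the whole pipeline: first-difference the series, take the first significant digit of each difference, and compare the digit counts to Benford's expected proportions with a Pearson chi-square test. The input file is a placeholder for a real daily series:

```python
# First-difference a temperature series and test the first significant digits against Benford's Law.
import numpy as np
from scipy import stats

def first_digit(x: np.ndarray) -> np.ndarray:
    """First significant digit of each value, ignoring sign and zeros."""
    x = np.abs(x[x != 0])
    exponents = np.floor(np.log10(x))
    return (x / 10.0 ** exponents).astype(int)

temps = np.loadtxt("daily_temps.csv")        # placeholder: one column of daily temperatures
diffs = np.diff(temps)                       # first differences (today minus yesterday)
diffs = diffs[np.isfinite(diffs)]            # drop any gaps

digits = first_digit(diffs)
observed = np.array([(digits == d).sum() for d in range(1, 10)])
expected_prop = np.log10(1 + 1 / np.arange(1, 10))   # Benford's Law proportions
expected = expected_prop * observed.sum()

chi2, p_value = stats.chisquare(observed, expected)
print(f"Pearson chi-square p-value: {p_value:.4f}")
# A very small p-value (say below 0.05) means the digit distribution departs from Benford's curve.
```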



So Sydney's temperatures (left highlighted column) run down in sequential order as they happened, and the differences are in the right column.

This results in the time series becoming de-trended, that is, any trend in the time series is removed; it is also used to identify patterns and detect linearity. (link)

But it also has the happy result of conforming exactly to Benford's Law. Of course, we need to test this on placebo data, so let's do that.



Placebo Data Checks (data we know is good as a baseline)
Baselining Benford's Law and First Differences.

Case 1:

Kinsa in the US has several million thermometers connected to the internet via smartphones. People buy the $30 body-temperature thermometers and volunteer to add their readings to a real-time database. This has been the most effective predictor of Covid spread, because fever is one of the Covid symptoms. The accuracy of the 'fever' maps produced by Kinsa has helped develop strategies in the US to combat spread. We know the temperatures are on average accurate because of the effectiveness of the maps.

I downloaded 300,000 readings and selected a subset from February and March from the counties with the highest number of readings.


First differencing was applied to the Kinsa temperatures for March and February.

Benford's Law first-digit analysis was performed on the February data, which resulted in a tight fit -- the temperatures are Benford-compliant.


Case 2:
This was duplicated for March (below) with the same results.


Body temperatures have an even narrower range than climate temperatures, so they are less likely to conform to Benford's Law, but First Differencing has extended the range and shows the data to be compliant, so this is a good test for temperatures overall.

Case 3:
But most observational data complies with Benford's Law, so records of 30,000 greyhounds that raced at Ballarat were downloaded. Their weights were differenced and tested:



Case 4 + 5:
This was also done for Covid hospital admissions. I randomly picked Belgium from the European data and got a compliant fit. I also checked PM2.5, one of the most widely analysed air pollution data sets in the US, and this complied with Benford's Law too; however, I have decided not to show every single study and analysis I did because of space constraints.

There is little doubt that First Differencing of temperatures creates a curve that is Benford's Law compliant.

Case 6:
As a comparison, let's look at Nhill in Victoria for the month of October. Instead of a graph, I will use the actual numerical output so that you can see the size of the variation:



The first digit position has 9 possible values, of which 1 should be the most prevalent, occurring about 30% of the time; here it occurs a bit over 20% of the time. The values 1 and 2 come in far less often than they should, with 3, 4 and 5 coming in more than they should. The p value is <0.0001; the data is non-compliant. The data is red-flagged and very suspicious -- this is an extreme deviation from expected observational data.
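For reference, the expected Benford proportions the Nhill digits are being compared against come straight from log10(1 + 1/d):

```python
# Expected Benford first-digit proportions: digit 1 should appear about 30% of the time.
import math

for d in range(1, 10):
    print(d, f"{math.log10(1 + 1 / d):.1%}")
# 1: 30.1%, 2: 17.6%, 3: 12.5%, 4: 9.7%, 5: 7.9%, 6: 6.7%, 7: 5.8%, 8: 5.1%, 9: 4.6%
```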



Do Different Years Get Different Warming Treatment?


I binned the temperature time series into 20-year blocks to test how different years are treated. I used minv2.1 for Sydney.






The red values at the Pearson Test number show rejection of the null hypothesis at the 95% level; the p-value threshold is 0.05, and anything less than this is non-compliant and the null is rejected. The 20-year block 1920-1940 scrapes through. Overall, 3 out of 5 binned blocks are non-compliant, and the other 2 are only just compliant.


Other Anomalies:

In my last blog I highlighted the large amount of copy/pasting being done in the Sydney data -- large blocks of temperatures, full months in fact, were copy/pasted into other years. I won't repeat that in this analysis; instead I want to look at a few new directions.

Decimal values are a serious problem in the BOM data. I have already looked at the missing decimal values, where months and months of integer-only data exist. But there is another problem with decimals: they have a correlation with the integers.
There shouldn't be any correlation between the integer and the decimal value; if the integer goes up, the decimal value shouldn't be decreasing, but that is what is happening.
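A minimal check for that kind of dependence: split each temperature into its whole-degree and tenths parts and correlate them (the input file is again a placeholder for a real series):

```python
# Correlation between the integer part and the decimal (tenths) part of each temperature.
# For genuine observations read to 0.1 C there should be essentially no relationship.
import numpy as np
from scipy import stats

temps = np.loadtxt("daily_temps.csv")                # placeholder: one column of temperatures

integer_part = np.trunc(temps)                                           # whole degrees, sign kept
decimal_part = np.round(np.abs(temps - integer_part) * 10).astype(int)   # tenths digit, 0..9

r, p_value = stats.pearsonr(integer_part, decimal_part)
print(f"correlation r = {r:.3f}, p = {p_value:.4g}")
# A large negative r would mean the tenths digit tends to fall as the whole degrees rise.
```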

These are not minor correlations; they are large, among the largest in the time series:








As a matter of interest, it's evident that the minimum temperatures are the most manipulated -- they are the most copy/pasted, have the largest integer-to-decimal anomalies and the largest warming or cooling adjustments, and they fail Benford's Law and the Number Bunching Test most often.


While looking at tests, let's get into Uri Simonsohn's Number Bunching Test. (link)
This has a very novel premise: test all the numbers at once, instead of just the first one as in Benford's Law. I managed to script Uri's test in JMP and was able to replicate all the tests and results on his blog.

This allows me to get consistent output with the SAS JMP software. The test checks for bunching or clumping in histograms. We know the BOM's histograms are dodgy; I had dozens of examples in my last blog.


A quick look at Nhill in Victoria shows a fabricated frequency distribution in the histogram:


This is very evident in the raw data and exists, but is weaker, in the adjusted data. The histogram has highly repeated temperatures interspersed with low-frequency temperatures (see the raw picture above), in a consistent high-low pattern. I have been told it may be due to conversion from Fahrenheit to Celsius during metrication in the 70s, but it appears in some stations after 1980 and even after 2000.

Now this is extreme and easy to see, but the Melbourne August temperatures below are more subtle and a bit different:



And Sydney August follows:



In both cases a few temperatures occur a lot (the high bars indicate high frequency), while some temperatures, say between 20-25 C, never occur or occur only a few times in a time series that has some 3,300 days of August. So certain temperatures dominate and some leave large gaps, revealing that they never occur or occur only a few times. These temperatures are "bunched", in Simonsohn's terminology, or "clumped" together, with too few occurrences in some places and too many in others, beyond what is expected.

So we need a test to quantify this, and that is what the number-bunching bootstrap resampling test does. Simonsohn has much experience exposing fabrication in scientific studies and is an expert in digit analysis.

To baseline this on placebo data, I tested it on various data sets, notably the PM2.5 air pollution data set from the US (the subject of over 40 studies), as well as the Kinsa body-temperature data and greyhound starting prices... all checked out as expected, with no excessive frequencies of certain numbers outside the 95% level of expectation. I am not showing the results here because I am having trouble keeping posts to a manageable size.



Simonsohn Number Bunching Test For Sydney Histograms, Month By Month.


Testing each month -- a distribution of expected average frequencies is created with a bootstrap resample, and the monthly BOM frequency histogram is compared to the bootstrapped expected average frequencies to see how common or uncommon it is. See Data Colada for more info.
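The sketch below is my simplified reading of that procedure, not Simonsohn's published code or my JMP script. The statistic is the average frequency with which values repeat; the expected distribution is built here by bootstrap-resampling the whole-degree and tenths parts independently (a null I chose for illustration), and the file name is hypothetical:

```python
# Excess "bunching": is the average frequency of repeated values higher than expected?
import numpy as np

def average_frequency(values: np.ndarray) -> float:
    """Average, over observations, of how many observations share the same value."""
    _, inverse, counts = np.unique(values, return_inverse=True, return_counts=True)
    return counts[inverse].mean()

rng = np.random.default_rng(1)
temps = np.round(np.loadtxt("sydney_november.csv"), 1)   # placeholder series, one value per day
n = temps.size

observed = average_frequency(temps)

# Null model: recombine independently resampled whole-degree and tenths parts. This keeps both
# marginal distributions but breaks any coordination that concentrates specific values.
integer_part = np.trunc(temps)
tenths = np.round(temps - integer_part, 1)

n_boot = 5000
null_stats = np.empty(n_boot)
for i in range(n_boot):
    combo = rng.choice(integer_part, size=n, replace=True) + rng.choice(tenths, size=n, replace=True)
    null_stats[i] = average_frequency(np.round(combo, 1))

p_value = (null_stats >= observed).mean()
print(f"observed average frequency {observed:.2f}, bootstrap p = {p_value:.4f}")
# A tiny p-value means the real series repeats particular temperatures far more than expected.
```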















It can be seen that the temperature frequency histograms are extremely heavily manipulated and/or fabricated. We know some data has been fabricated in the Sydney time series, with my last post showing some of the instances of weeks and months being copy/pasted into subsequent years.

Now we can see conclusively that the temperatures have been heavily tampered with by changing the frequency of occurrence, so that some temperatures are overused and appear far too often, while others appear far too rarely.

Each version of the adjusted data changes the frequency of occurrence of temperatures, which can be easily seen in histograms that have a bin width of 1.
 
September, June and May are the only months where the null is not rejected at the 95% level of significance; in other words, they have temperature frequencies as we would expect.

The other 9 months have heavily manipulated temperature frequencies, with many temperatures appearing far more often than they should.

Each updated adjusted data set further increases the frequency of a range of temperatures. Increasing the frequency of occurrence is a tool to increase warming; reducing it cools the temperatures down.

This further backs up the Benford's Law finding that the temperature time series is not observational data.


One more thing to check: keeping the Sydney data for consistency, let's look at the month of November in binned blocks of years -- 1920-1940, 1940-1960, 1960-1980, 1980-2000 and finally 2000-2020.








Every block of years for Sydney November temperatures fails horribly at the 95% level of significance. 2000-2020 is the only block that comes in at 1.2%, still below the 5% (0.05) threshold, so we reject the null, but it is not that far away.

The rest of the blocks are hopeless; the chance of seeing such extreme temperature frequencies at random is over a hundred million to one -- in other words, virtually zero. Thus large-scale tampering is a virtual certainty.

As mentioned at the beginning, there is no evidential value in nearly any of the 1910-2000 time series, and even after 2000 it is a lucky dip as to which series still have value.

I was going to post some more number bunching tests for Bourke and Mildura, but they are even worse than Sydney, if that is possible. I may still do a few more in another post.

Summary: if a major capital city has dirty data, the country stations have no hope, and this is borne out by subsequent tests. The data is dirty, manipulated and fabricated, and this can be proven with universal digit tests.

Averaging can hide a multitude of sins; that's why I am using daily time series with around 38,000 days. The BOM technique of coming up with a single averaged number, without Confidence Intervals and instead of a distribution, which is supposed to encompass countrywide warming, is meaningless even if the data were clean!

The Flaw of Averages states: "If you are using averages, on average you will be wrong." (link)

More to follow in part 2.









© ElasticTruth
