Over the past 18 months, global temperatures have been regularly breaking records, on both the surface-based datasets (NASA, NOAA, Met Office) and the remote-sensing records (UAH, RSS). Yes, this has occurred during the 2nd strongest El Nino on record (by 3-monthly Nino 3.4 anomalies). But I would argue that in many news broadcasts and articles the role of El Nino is regularly overstated – as though the reason for record-setting global temperatures were entirely ‘natural’ and no cause for alarm. This is not the case: merely having an El Nino does not guarantee a new record without the background GHG forcing. That point would be harder to argue if the current El Nino were far and away the strongest, but it isn’t – 1982/3 (+2.1) and 1997/8 (+2.3) were similar in magnitude to 2015/16 (+2.3).
The El Nino-Southern Oscillation (ENSO) is a large, noisy oscillation superimposed on global temperatures. There are other, slower factors (like the PDO and AMO), but ENSO is the biggest contributor year-on-year. Warming due to increased GHG forcing continues in the background, whilst ENSO causes short-term fluctuations that ‘make or break’ a year when it comes to setting records. It’s therefore natural that El Nino years set the records, as they spike temperatures upwards – a La Nina year would not be expected to produce one. Notable is 2005, which was, at the time, a record-warm year despite being broadly ENSO-neutral (a weak El Nino occurred in boreal winter 2004/5). Fig. 1 shows this most clearly.
2015 was the warmest calendar year on record according to NASA’s GISTEMP record and NOAA’s NCDC record. However, it was not very strongly affected by El Nino – perhaps only around 10% of the warmth was due to ENSO – as the event was only developing during the year. Given the lag with which the warming Pacific raises atmospheric temperatures, without increased GHGs 2015 would not have beaten the 1998 value by as much as it did (a 1951-1980 GISS anomaly of 0.87°C in 2015, compared with 0.63°C in 1998). It’s a tad ‘brute force’ (and not perfect) to do, but taking all other things to be equal, the difference of 0.24°C between those values, spread over the intervening 17 years, corresponds to a trend of roughly 0.14°C/decade.
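That back-of-envelope calculation can be written out explicitly. The sketch below just uses the two GISS anomalies quoted above; it is arithmetic, not a proper regression:

```python
# Back-of-envelope trend implied by comparing the two big El Nino years,
# using the GISS anomalies quoted above (1951-1980 baseline).
anomaly_1998 = 0.63  # degC
anomaly_2015 = 0.87  # degC

years = 2015 - 1998                         # 17 years between the two records
diff = anomaly_2015 - anomaly_1998          # 0.24 degC
trend_per_decade = diff / years * 10.0      # convert degC/yr to degC/decade

print(f"{trend_per_decade:.2f} degC/decade")  # prints 0.14 degC/decade
```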
This trend is of note due to the overhyped ‘global warming pause’ that seemed evident in the early part of the 21st century. Often exaggerated by skeptics who picked 1998 as the start year for a linear trend, it is something which seems evident in all datasets to differing extents – whether it is worth considering at all is still argued by different scientists. Many potential reasons exist – a change to a cool PDO and persistent La Nina conditions in 2007-2014 is one I always return to, particularly because since the PDO turned positive in 2014, temperatures have spiked significantly. Fig. 1 strikingly shows the magnitude of recent warming, suggesting any pause/hiatus/slowdown is over – we will see the picture clearly in the next La Nina year.
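The effect of picking 1998 as a start year is easy to demonstrate. The series below is entirely synthetic – a steady 0.15°C/decade background trend plus an exaggerated 0.4°C spike in 1998 standing in for the El Nino; it is not real GISTEMP or UAH data – but it shows how starting a least-squares fit at a hot year drags the fitted trend down:

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """Slope of an ordinary least-squares fit, scaled to degC per decade."""
    return np.polyfit(years, anomalies, 1)[0] * 10.0

# Synthetic series: steady 0.15 degC/decade warming plus a one-year
# 0.4 degC spike in 1998 (a stand-in for the El Nino, not real data).
years = np.arange(1979, 2017)
series = 0.015 * (years - 1979) + np.where(years == 1998, 0.4, 0.0)

full = trend_per_decade(years, series)
from_1998 = trend_per_decade(years[years >= 1998], series[years >= 1998])

print(f"full record:  {full:.2f} degC/decade")       # prints 0.15
print(f"1998 onwards: {from_1998:.2f} degC/decade")  # prints 0.09
```

Because the spike sits right at the start of the shortened record, it tilts the fitted line towards zero – the underlying warming has not changed at all.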
Figure 2: NASA GISTEMP monthly data from Jan 1997 to June 2016. Credit woodfortrees.org.
2016 is now looking very likely to beat 2015 (the first 6 months were the warmest half-year on record), which would make the top 3 record-warm years on the surface datasets 2014, 2015 and 2016 – rather remarkable. Seven consecutive months, October 2015 to April 2016, saw GISTEMP anomalies above 1°C relative to 1951-1980, which had never been observed before. The magnitude of the final 2016 anomaly depends on the strength of the developing La Nina, which most models currently expect to be rather weak. Fig. 3 shows a plume of various model forecasts. It’s worth noting that 2010 set a new record at the time despite a strong La Nina developing in the 2nd half of the year.
Figure 3: ENSO model plume. Credit IRI.
Now for a brief comparison of the GISS and UAH records. NASA’s record is often accused of running warm (it uses 1200km smoothing to cover regions of sparse data, something which doesn’t actually alter the end result as much as some argue), whilst UAH is considered a skeptic-friendly dataset, mainly because its 1998 El Nino spike was rather more severe. Fig. 4 shows a comparison of their records for June (the warmest in the GISTEMP record, and the 2nd warmest in the UAH record behind 1998).
Figure 4: Comparison of UAH and GISTEMP records for June 1979-2016 w.r.t. 1981-2010.
There are slight differences, and indeed the trends differ slightly. But the broad picture is the same. They disagree on exact monthly rankings, as one would expect, but it’s not as though the UAH record shows global cooling, is it? (I’d argue satellite temperatures exaggerate the ENSO influence – note how the 1998 El Nino spike and the 2008 La Nina dip are both larger.)
A final note is inspired by a Twitter discussion I had and considers climatic averages. NOAA use the entire 20th century, NASA use 1951-1980, whilst UAH use 1981-2010. In the case of the latter, this is enforced by the data beginning in 1979. But is 1951-1980 or the full 20th century more appropriate?
Local climate statistics are updated to reflect new averaging periods and the ‘changing climate’, which is meaningful for the public (I don’t particularly care whether England was warmer than the 1961-1990 average, as I never experienced that period, whereas 1981-2010 has meaning to me). When it comes to the globe as a whole, however, updating the averaging period would only confuse the message – suddenly these large anomalies would become smaller, whilst you can’t actually ‘feel’ a global anomaly the way you can a local one. Moreover, as 1981-2010 exhibited such strong warming (whereas 1951-1980 was almost neutral), that ‘average’ climate never truly existed as a stable state. To best communicate the data, the anomalies should reflect the amount of warming that has taken place. For that reason, the full 20th-century average (which is very similar to 1951-1980) would be my preference for the surface-based temperature datasets.
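Mechanically, switching baseline only shifts every anomaly by a constant: the difference between the means of the two reference periods. The sketch below illustrates this; the 0.43°C offset is a hypothetical value for illustration, not the actual GISTEMP 1951-1980 vs 1981-2010 difference:

```python
# Hypothetical offset between the two baseline means (illustrative only,
# NOT the real GISTEMP value): how much warmer 1981-2010 was than 1951-1980.
OFFSET = 0.43  # degC

def rebase(anomaly_wrt_1951_1980):
    """Re-express a 1951-1980 anomaly relative to a 1981-2010 baseline.

    The whole series just shifts down by the constant offset; the shape
    of the record (and any trend through it) is completely unchanged.
    """
    return anomaly_wrt_1951_1980 - OFFSET

# A headline-grabbing 1.00 degC anomaly becomes a smaller-looking number
# on the newer baseline, even though nothing physical has changed.
print(f"{rebase(1.00):.2f} degC")
```

This is exactly the communication problem above: the warming is identical either way, but the newer baseline makes the numbers look less alarming.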