Global historic temperature data 1850-2018
RCP scenarios temperature data 2005-2100
CMIP5 multi-model mean scenarios
extended RCP scenarios GHG concentration data 1750-2500
M. Meinshausen, S. Smith et al. (2011) “The RCP greenhouse gas concentrations and their extensions from 1765 to 2300”
The following section is intended to give an overview of the methods used and choices made to generate the warning stripes. This post will be updated when something changes and was last updated on 2019/04/11.
Aligning historical and predicted data
There is one main problem with stripe visualizations compared to line graphs in general: they cannot show uncertainties or ranges of temperatures. This problem becomes more pronounced when linking measured historical data with predictive scenario data. If the two datasets don’t align perfectly, the result is a very “hard step” in the color gradient that looks (and likely is) unnatural, decreasing the acceptance and appeal of the visualisation.
Choice of the historical dataset
Joining the two datasets becomes slightly problematic, as the measured global temperature data has so far slightly lagged the mean temperature projections based on the RCP scenarios. You can find more detailed considerations and possible reasons for this on Climate Lab Book. The gap between projected and measured temperature for 2018 does, however, also depend on the temperature dataset used: HadCRUT 4.6, for example, shows a slightly larger gap than NASA’s GISTEMP v3 data. As far as I know, this difference is mainly due to measurement vs. interpolation in the rapidly warming but hard-to-measure Arctic region. Within the GISTEMP time series, the recent temperature record of 2016 even surpasses the mean predicted scenario data for 2019. Using the GISTEMP data instead of the HadCRUT data therefore allows us to leave the scenario data untouched and still show a smooth transition between historically measured data and the RCP predictions, without a “hard” step. As I am not aware of any problems with using GISTEMP data instead of HadCRUT, I regard this method as clearly preferable to somehow adjusting the scenario data just because the visual break between historic and predicted data looks off. If there is a compelling argument to do otherwise, please let me know and I will change it.
As a side note: the global temperature given in HadCRUT for 2019 is so far on track to be warmer than the previous two years (written in 04/2019), which would close the gap naturally. There are also newer models being developed for the next IPCC report, which may help as well.
Aligning the different scenarios to 2018
That being said, the RCP scenario data still had to be adjusted: the scenarios start to diverge from historic data in 2005 and take different paths, depending on the global developments assumed for each scenario. By 2019 there is therefore already a considerable spread in the projected mean temperatures between the various projections. Again, plotting this spread from one year to the next (read: from 2018 to 2019) would not only look strange but is also highly unlikely to happen. Therefore, a method described by Millar et al. (2017) was used to shift the scenarios to 2018. It takes into account that the real development of worldwide carbon emissions between 2005 and 2018 has been very similar to that assumed for the RCP 8.5 scenario. For a detailed look I will point to Glen Peters, who showed this in Nature and has been updating the figures on Twitter recently. As Millar et al. wrote, in the RCP 2.6 scenario climate policy starts to slow the growth of emissions by 2010. As this did not happen, Millar et al. used a shifted RCP 2.6_2017 scenario which simply delays the changes that were assumed to happen by 2010. Put simply, they took the scenario emission curve from 2010 onward, cropped it, and re-anchored its starting point to the RCP 8.5 emissions in 2017, so that both scenarios start to diverge by 2018. The same approach was taken here, only shifted by one more year, as we have had one more year of measured data in the meantime. So, while the RCP 8.5 scenario remains unchanged, the RCP 2.6, RCP 4.5 and RCP 6 temperature (and emission and CO2-concentration) projections were shifted from 2010 to 2018, based on the real emissions that have happened so far. This takes into account that the possible future pathways proposed by those scenarios in the past are no longer possible, and that we need new scenarios with stronger actions to reach the same temperature equilibria.
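The crop-and-re-anchor step above can be sketched in code. This is a minimal illustration of the shifting idea with made-up toy numbers, not the actual data or the original implementation; the function name and the `{year: value}` representation are my own assumptions.

```python
# Sketch of the Millar et al. (2017) scenario-shifting approach.
# Series are assumed to be simple {year: value} mappings; all numbers
# below are illustrative toy data, not real emissions.

def shift_scenario(scenario, followed_scenario, policy_year, new_policy_year):
    """Crop `scenario` from `policy_year` onward and re-anchor it so that it
    branches off `followed_scenario` (here: RCP 8.5) at `new_policy_year`."""
    offset_years = new_policy_year - policy_year
    shifted = {}
    # Up to the new branch point, the world has followed RCP 8.5.
    for year, value in followed_scenario.items():
        if year < new_policy_year:
            shifted[year] = value
    # From the branch point on, replay the scenario's post-policy trajectory,
    # re-anchored to the RCP 8.5 value at the new starting year.
    anchor = followed_scenario[new_policy_year] - scenario[policy_year]
    for year, value in scenario.items():
        if year >= policy_year:
            shifted[year + offset_years] = value + anchor
    return shifted

# Toy emission-like curves: RCP 8.5 grows faster than RCP 2.6.
rcp85 = {y: 8.0 + 0.3 * (y - 2005) for y in range(2005, 2031)}
rcp26 = {y: 8.0 + 0.1 * (y - 2005) for y in range(2005, 2031)}

# The shifted "RCP 2.6_2018" follows RCP 8.5 until 2018, then diverges.
rcp26_2018 = shift_scenario(rcp26, rcp85, policy_year=2010, new_policy_year=2018)
```

The shifted series equals RCP 8.5 up to the new branch point and only then departs from it, which is exactly the behaviour described above.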
This is also the reason why the expected temperatures for the year 2100 in these shifted scenarios (call them RCP 2.6_2018 and so on if you like) are higher than for the unshifted ones described in the IPCC report, driven by the additional emissions accumulated under the RCP 8.5 scenario between 2010 and 2018 compared to the respective scenarios.
Adding “weather noise”
As the CMIP5 mean scenarios are used for temperature predictions, those predictions are very “smooth” compared to the past temperatures: year-to-year temperature jumps are far less pronounced than they probably will be. This is no surprise given that the RCP scenarios are climate models and not weather models, intended to model mean temperatures over longer time periods. In other words, the projected temperature for, e.g., the year 2035 is not a forecast for this specific year, but a projection of what the mean global temperature in a period of, say, 2021-2050 will be. This is certainly the right way for climate science, but it is problematic within a warning stripes visualisation, as smooth color gradients without any disruptions don’t look “natural”. As written before, it would show weather data for the past, but climate data for the future. Thus, to align the look of the two datasets visually, the mean climate data had to be “weatherized”. This was done by calculating the variance of the proxy pre-industrial base period 1881-1910 and using it to generate a 99% prediction interval around the mean RCP temperatures (±0.241 °C). In a last step, a random number within this prediction interval was added to the mean projected temperature to simulate random “weather noise”. Again, note that this is in no way a temperature projection, but a way to harmonize the past with the future temperature data.
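The weatherizing step can be sketched as follows. This is a minimal interpretation, assuming the noise is drawn uniformly within the stated 99% interval; the function name, the z-value of 2.576 and the sample data are my own assumptions, not taken from the post.

```python
import random
import statistics

def weatherize(projected_means, base_period_anomalies, z=2.576, seed=42):
    """Add random year-to-year noise to smooth CMIP5 mean projections.

    The half-width of the 99% interval is z times the standard deviation
    of the base-period anomalies (the post states this is ±0.241 °C for
    the 1881-1910 GISTEMP data)."""
    rng = random.Random(seed)  # seeded for reproducibility
    half_width = z * statistics.stdev(base_period_anomalies)
    return [t + rng.uniform(-half_width, half_width) for t in projected_means]

# Illustrative use: a smooth warming ramp gets visible year-to-year jitter.
smooth = [1.0 + 0.02 * i for i in range(30)]
base = [-0.1, 0.05, -0.02, 0.08, -0.06, 0.03]  # made-up base-period anomalies
noisy = weatherize(smooth, base)
```

Each noisy value stays within the prediction interval around its smooth counterpart, so the projections gain a weather-like texture without drifting away from the scenario means.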
Designing the colorscale
Visualizing this data via colors is not an easy task. The main problem is that science agrees that any temperature rise above a maximum of 2 °C would be catastrophic for both human society and the natural systems it is embedded in. Still, the extended RCP pathways show warmings of >7.5 °C by 2200 (and still warming). To visually capture both the critical thresholds around 1.5-2.0 °C and the very high possible warmings in very-high-emission scenarios, a non-linear colorscale was used. Below are short explanations of the choices made:
Midpoint temperature baseline:
The midpoint temperature of the graph is the 1961-1990 average according to the GISTEMP data series. This is already equivalent to a warming of +0.368 °C relative to pre-industrial levels (here defined as the average GISTEMP temperature between 1881 and 1910). Choosing this temperature baseline is common in the climate science literature and enables us to visualize the historic temperature change on a classic blue-to-red (or orange) colorscale.
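Because both baselines are fixed averages, converting an anomaly from one baseline to the other is just a constant shift. A tiny sketch, using the +0.368 °C offset stated above (the function name is mine):

```python
# The post states the 1961-1990 GISTEMP mean is +0.368 °C above the
# 1881-1910 ("pre-industrial") mean, so rebaselining is a constant shift.
OFFSET_1961_1990_VS_PREINDUSTRIAL = 0.368  # °C, from the post

def to_preindustrial(anomaly_vs_1961_1990):
    """Express an anomaly relative to 1961-1990 as warming since 1881-1910."""
    return anomaly_vs_1961_1990 + OFFSET_1961_1990_VS_PREINDUSTRIAL

# E.g. a year at +1.0 °C vs 1961-1990 is +1.368 °C vs pre-industrial.
warming = to_preindustrial(1.0)
```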
Maximum temperature bounds:
The maximum (and minimum) temperature bounds depicted on the colorscale are ±12 °C. While a scale maximum of +8 °C would have been enough to show the model output until 2200, the higher bound was chosen for two reasons. First, AR5 cites a temperature increase of 12 °C as “lethal […] for most areas occupied by today’s human population”, setting a dire limit for global warming driven by the human carbon-based economy.
Second, this natural limit for human-induced warming can also be approached in a different way: by using up all recoverable fossil fuels and burning them. Lenton et al. (2006) put the amount of conventional and exotic resources at about 15,000 GtC, which they equate to a warming of +12.5 °C.
For symmetry, this maximum temperature of +12 °C was mirrored as the minimum temperature of −12 °C, which also enables the same colorscale to be used for historic eras such as the ice ages.
Color gradients and transformation
The basic model for building the colorscale was the cubehelix color scheme developed by Dave Green, which allows for a smooth transition of hue, saturation and, most importantly, perceived lightness. While a single-hue gradient was used on the negative side of the temperature scale, positive temperatures are depicted in a “multi-hue”, or rotated, gradient that follows a path from yellow over orange, red and purple while increasing in saturation and decreasing in lightness.
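For reference, Green’s cubehelix scheme is a simple closed-form construction: RGB values follow a helix around the diagonal of the RGB cube, so perceived lightness increases monotonically along the scale. A minimal sketch using the scheme’s published constants; the default values for `start`, `rotations`, `hue` and `gamma` here are generic, not the ones used for the stripes.

```python
import math

def cubehelix(x, start=0.5, rotations=-1.5, hue=1.0, gamma=1.0):
    """Return (r, g, b) in [0, 1] for a position x in [0, 1] along the scale."""
    angle = 2 * math.pi * (start / 3.0 + rotations * x)
    fract = x ** gamma                      # gamma-adjusted lightness
    amp = hue * fract * (1 - fract) / 2.0   # helix amplitude around the gray axis
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    r = fract + amp * (-0.14861 * cos_a + 1.78277 * sin_a)
    g = fract + amp * (-0.29227 * cos_a - 0.90649 * sin_a)
    b = fract + amp * ( 1.97294 * cos_a)
    # Clip to the displayable range.
    return tuple(min(1.0, max(0.0, c)) for c in (r, g, b))
```

Setting `rotations=0` yields a single-hue gradient as used on the negative side, while a nonzero rotation produces the “multi-hue” rotated gradient described above.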
Still, given the very high maximum and minimum bounds, the resulting colorscale would visually understate the drastic changes to our environmental and social systems for temperatures up to +4 °C. Therefore, the temperature scale was further transformed logarithmically, heavily increasing the visual difference for “small” temperature changes while decreasing it for very large ones, up to the point where they become practically indistinguishable at very high levels. Note that the transformation curve is not based on any scientific reasoning, but purely on aesthetic and political considerations. The goal was to show the whole range up to +12 °C while still conveying the drastic and unprecedented change of the global mean temperature in the last century, and to make the likely range of future temperatures the world could be locked into within the next few decades, depending on our emissions (about +1.1 to +4.0 °C compared to 1961-1990), easy to distinguish. The figure below shows the process and result.
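One possible form of such a transformation is a signed logarithmic compression of the ±12 °C range. This is purely my own sketch of the idea, since the exact curve used for the stripes is an aesthetic choice and not published; the `scale` parameter below is illustrative.

```python
import math

T_MAX = 12.0  # °C, colorscale bound from the post

def compress(t, scale=0.5):
    """Map a temperature anomaly in [-T_MAX, T_MAX] to [-1, 1], expanding
    small anomalies visually and compressing large ones."""
    sign = math.copysign(1.0, t)
    return sign * math.log1p(abs(t) / scale) / math.log1p(T_MAX / scale)

# With scale=0.5, an anomaly of +2 °C already maps to the halfway point of
# the positive range, while +8 °C and +12 °C land close together.
```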