The Atlantic is generally considered to be Europe’s “weather kitchen” – by carrying heat from the tropics to our coasts, it plays a critical role in shaping our weather. Nevertheless, it usually takes a backseat when it comes to forecasting: while atmospheric processes are closely monitored and thousands of corresponding data points are calculated, the ocean and its heat input are often represented by just a single parameter.
Granted, sometimes the Atlantic is warmer, and sometimes cooler. But in comparison to the atmosphere, it responds only sluggishly: for instance, it can absorb a great deal of energy without any noticeable change in surface temperature. And it can store heat for extended periods of time, releasing it only gradually. Consequently, for short-term forecasts, only the initial value is used – since the water temperature isn’t likely to change appreciably in such a short timespan. And this approach works: if we compare the forecasts with the actual weather after the fact, we can see that, despite this simplification, the two largely match.
But things look quite different when it comes to forecasts for one or more years, e.g. questions like whether next winter will be a harsh one, or whether next summer will be an especially hot one. My team and I prepare these forecasts at Universität Hamburg’s Center for Earth System Research and Sustainability (CEN) – and they’re particularly challenging. It’s a bit like working on a group project at school: just because the average grade was a C, it doesn’t necessarily mean that both Tom and Julia will get a C and move on to the next grade at the end of the school year. Instead, the final outcome depends on how well the concrete requirements are fulfilled – whether Tom studied the right material, or whether Julia went to a party the night before the project was due. In the same way, for our medium-term predictions, we need to know how much heat the Atlantic actually contributes at the beginning of the forecasting period.
To date, these forecasts have been extremely unreliable; as a rule, they’re only right just over half of the time. But if we take into account how the current water temperature deviates from the long-term average, that is, whether the initial temperatures in May were high or low in comparison, the accuracy rate jumps to 80%. In fact, our predictions are most accurate when the initial temperature substantially deviates from the long-term average – so when the Atlantic is unusually warm or cold in May, we can predict what will happen in the next several months and years even more accurately.
This aspect is also interesting because it entails adopting a new philosophy. So far, the assumption has been: the more initial data points we average over, the more accurate the forecast will be – because short-term “outliers” will have virtually no effect on the outcome. This is an approach we researchers have been using for years. But now we’re starting to learn: sometimes less might be more. In other words: determining whether a given initial value was unusual can sometimes yield better results than using the statistical mean – depending on the timeframe being considered.
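The idea of asking whether an initial value is “unusual” rather than averaging it away can be sketched in a few lines of code. The sketch below is purely illustrative – it is not the CEN forecasting system, and the function name, threshold, and temperature values are hypothetical. It compares a current May sea-surface temperature against a long-term climatology and flags it as unusually warm or cold when its deviation exceeds one standard deviation:

```python
# Illustrative sketch (hypothetical values, not the CEN system):
# classify a May sea-surface temperature relative to the long-term average.
from statistics import mean, stdev

def classify_initial_state(may_temps_history, may_temp_now, threshold=1.0):
    """Return ('warm' | 'cold' | 'normal', anomaly) for the current May value.

    may_temps_history: past May temperatures (deg C) defining the climatology;
    threshold: how many standard deviations count as "unusual".
    """
    climatology = mean(may_temps_history)      # long-term average
    spread = stdev(may_temps_history)          # typical year-to-year variation
    anomaly = may_temp_now - climatology       # deviation from the average
    if anomaly > threshold * spread:
        return "warm", anomaly
    if anomaly < -threshold * spread:
        return "cold", anomaly
    return "normal", anomaly

# Hypothetical record of past May temperatures and one new reading:
history = [11.8, 12.1, 11.9, 12.0, 12.2, 11.7, 12.0, 12.1, 11.9, 12.0]
state, anomaly = classify_initial_state(history, 12.9)
print(state, round(anomaly, 2))
```

In this toy example the new reading lies well above the climatological spread, so the initial state is flagged as unusually warm – exactly the situation in which, according to the text, the forecast becomes most reliable.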
Therefore, it can be advisable to select certain individual factors and assign them more weight, essentially treating them as individual cases. According to this approach, a general rise in temperatures doesn’t mean that there can’t be an especially cold year now and then; similarly, a general increase in precipitation doesn’t mean there can’t be short-term droughts. Consequently, we’re now taking a closer look at which parameters are best suited for different types of prediction, so as to ultimately arrive at more reliable forecasts.
Prof. Johanna Baehr works at the Cluster of Excellence for climate research and at Universität Hamburg’s Center for Earth System Research and Sustainability.