When the BBC's weather map turned yellow this week, seeming to suggest a dose of Mediterranean sunshine just in time for the long weekend, viewers were stunned to discover that their sunbathing would have to take place in a less than tropical 15 degrees Celsius. "I think it's very misleading," says 43-year-old florist Vicky Laffey. "If you look at a map full of orange and yellow, it looks like that country is in a heat wave and people get the wrong impression."
The BBC gets its data from the Met Office, which says the temperature scale is static - meaning "the same sliding scale is used regardless of the time of year or region on screen". (Nor, in other words, will the colors on the map be adjusted to suit viewers' upcoming days off.)
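A static scale is easy to picture in code. The sketch below is purely illustrative - the bands and colors are invented, not the Met Office's real palette - but it shows why a 15C afternoon gets the same yellow whether it falls in January or on a bank holiday weekend:

```python
# Purely illustrative: invented temperature bands and colors, not the Met
# Office's actual broadcast palette. The point is that one fixed mapping is
# applied year-round.

def color_for_temperature(temp_c: float) -> str:
    """Map a temperature in Celsius to a map color using one static scale."""
    bands = [
        (0, "blue"),     # below 0C
        (10, "green"),   # 0 to 10C
        (18, "yellow"),  # 10 to 18C: this week's 15C lands here
        (25, "orange"),  # 18 to 25C
    ]
    for upper_bound, color in bands:
        if temp_c < upper_bound:
            return color
    return "red"         # 25C and above

# The same input gives the same output regardless of season or bank holiday:
print(color_for_temperature(15.0))  # -> "yellow"
```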
Yet hell hath no fury like a Brit who has already started uncovering the patio furniture. And this week's outcry may come as little surprise given weather forecasting's inauspicious beginnings.
The Met Office was founded in 1854 by Captain Robert FitzRoy, later a Vice-Admiral in the Navy, who was well aware of the impact of weather on life at sea. In an attempt to forewarn sailors, daily weather reports were sent to him in London from outposts on the English and Irish coasts via the new technology of the telegraph. His role was to try to make sense of the data, and in doing so he pioneered the science we now know as meteorology. But as with all emerging systems, mistakes occurred, and the deluge of public criticism of his weather reports, together with his depression, drove him to take his own life in 1865.
Later that same year, the Royal Society - which would manage the Met Office for forty years after FitzRoy's death - decreed that "we find no evidence that any competent meteorologist believes that science is at present in such a state as to enable an observer to indicate day by day the weather to be experienced over the next forty-eight hours." Ninety years later, in 1955, the head of Dunstable's Central Forecasting Station opined that "very little accuracy can be guaranteed for forecasts made more than 24 hours in advance."
Knowledge of the weather - and, more importantly, of what awaits us in the coming days and weeks - has occupied space in the human mind since the beginning of time. The success of crops is of course highly dependent on water and sun, in exactly the right amounts and at the optimal times. Droughts, floods and the occasional plague of locusts crop up in many a Bible story, and the Great Famine of 1315-1317, largely caused by a climate anomaly that brought colder temperatures and heavy rain, affected up to 12 percent of the northern European population.
And let's not forget how many military endeavors have come to grief in ill winds. The Spanish Armada, for example, saw dozens of its 130-strong fleet wrecked on the rocks off the west coasts of Scotland and Ireland. The D-Day landings were delayed by 24 hours due to high winds and unfavorable sea conditions. You can't help but feel for Group Capt James Stagg, Eisenhower's chief meteorologist, who, along with his team, was tasked with coming up with accurate forecasts. Despite their efforts, the weather remained difficult to predict, and conditions on the actual D-Day - June 6 - were far from ideal for landing more than 130,000 men on enemy beaches.
Such incidents, at the mercy of wind and tide, feel a world away from where we are now, when rain or shine can be plotted down to the minute. Britain no longer relies on a permanently staffed observatory on the summit of Ben Nevis, a central monitoring tool until 1904; nor are cones hoisted up coastal masts to warn of approaching storms. Weather mapping underwent its first major change when Numerical Weather Prediction, which uses mathematical models of current atmospheric and ocean conditions, was introduced in 1922.
However, working through those huge data sets by hand took so long that, by the time a forecast had been calculated, the weather it described had already been and gone. It was not until computer simulation arrived in the 1950s that all this information could be processed in a timely manner, with the first computerized reports emerging in the second half of the 1960s. It was an important step up from forecasts of the "red sky in the morning, shepherd's warning" variety.
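To give a flavor of what Numerical Weather Prediction involves, here is a deliberately toy version in Python: the atmosphere reduced to numbers on a one-dimensional grid, pushed forward in time by a single physical equation (a quantity carried along by a constant wind). Every name and value here is illustrative; real models solve far richer equations in three dimensions.

```python
import math

# A toy Numerical Weather Prediction step: the "atmosphere" is a row of
# numbers on a grid, advanced in time by the 1-D advection equation using an
# upwind finite-difference scheme. All values are illustrative.

NX = 100    # number of grid points
DX = 1.0    # grid spacing (arbitrary units)
WIND = 1.0  # constant wind speed
DT = 0.5    # time step; WIND * DT / DX <= 1 keeps the scheme stable

# Initial state: a smooth blob of, say, moisture in the middle of the domain.
state = [math.exp(-((i - NX / 2) ** 2) / 50.0) for i in range(NX)]

def step(u):
    """Advance the field one time step (periodic boundary via u[-1])."""
    c = WIND * DT / DX
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]

for _ in range(40):  # "forecast" 40 steps ahead
    state = step(state)

# The blob has been blown downwind, from grid point 50 to about 70.
print("peak is now at grid point", state.index(max(state)))
```

Richardson's insight was exactly this: once weather is numbers on a grid, prediction becomes arithmetic. His problem, as the next paragraph of history shows, was doing that arithmetic fast enough.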
"Have we gotten better at learning to read the signs in the sky? In one word: no," says Tristan Gooley, author of books, among others The secret world of weather. Over the past century, "the way weather has been studied and our understanding of it has been driven by our better monitoring and modeling of the atmosphere. But it usually happens in a very macroeconomic way," he explains. "Weather has come to mean something that happens over hundreds of miles, whereas just 100 years ago it happened and it was something that happened over hundreds of meters."
While we have moved on from divining meaning in cloud formations and the color of sunsets, several relatively modern incidents have clearly illustrated just how inaccurate scientific weather forecasting can be. Consider Michael Fish's now infamous remark on October 15, 1987 - "Earlier on today, apparently, a woman rang the BBC and said she heard there was a hurricane on the way. Well, if you're watching, don't worry, there isn't!" - which was swiftly followed by a £1.5 billion repair bill for the damage the storm caused.
Since 2016, the Met Office has been using the Cray XC40 supercomputer, which it calls "one of the most powerful in the world, dedicated to weather and climate forecasting". About 15 times more powerful than its predecessor, the Cray XC40 takes in 215 billion weather observations from around the world every day. These are then fed into an atmospheric model containing a million lines of code, which generates the forecasts.
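That step - folding fresh observations into the model's current picture of the atmosphere, which meteorologists call data assimilation - can be caricatured in a couple of lines. The sketch below is a toy, not the Met Office's code; in real systems the weighting comes from careful error statistics rather than a hand-picked number.

```python
# Toy data assimilation: nudge the model's estimate toward an observation.
# Not the Met Office's code; the weight here is hand-picked for illustration,
# whereas real systems derive it from the error statistics of model and sensor.

def assimilate(model_value: float, observation: float, obs_weight: float) -> float:
    """Blend an observation into the model state.

    obs_weight ranges from 0 (trust the model entirely) to 1 (trust the
    observation entirely).
    """
    return model_value + obs_weight * (observation - model_value)

# The model thinks a grid point is at 14.0C; a nearby station reports 15.5C.
analysis = assimilate(model_value=14.0, observation=15.5, obs_weight=0.4)
print(f"analysis temperature: {analysis:.1f}C")  # -> 14.6C
```

Multiply that little blend by 215 billion observations a day and a million lines of model code, and the scale of the supercomputer's job becomes clearer.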
Douglas Parker, professor of meteorology at the University of Leeds, says that while British forecasts were woefully unreliable in the 1970s and 1980s, things have changed. Now "you'll meet regular people painting a house, or builders, who will use the forecast [and] rely on it every day," he says. From agriculture to shipping, aviation and energy, the country's GDP - and how we get our food and goods and keep the lights on - depends on the accuracy of forecast data.
Does its critical importance make it vulnerable to a cyber attack? "I think that's always possible, and I'm sure that's a risk," Parker replies. "[But] security at the UK Met Office is extremely high because of the responsibility they bear to crucial industries."
He describes the one-day forecast as "super accurate" (the Met Office says its four-day forecast is now as accurate as its one-day forecast was 30 years ago). But he is aware that blind spots remain. Within a single kilometer - a geographic area that would fall under the same forecast - there can be significant variation depending on whether you are in the park or on the street, on a hill or in a valley. Fog, too, "is a big challenge," Parker says. Clouds and ice "all have incredibly complicated physics," he adds, and trying to capture that physics in a model's equations is a far cry from the accuracy currently possible for other types of weather.
I suspect most of us like to consult the 14-day forecast before we go away, but is it of much use? Parker says not: "If you go up to five days, well, sometimes we'll certainly have less statistical certainty... If you go up to 10 days, that's just an indication."
For Gooley, knowing the country is the best way to get an accurate forecast. "If the 100 best meteorologists in the world borrowed 100 of the world's most powerful computers, they would still have trouble figuring out where exactly a forecast shower would fall tomorrow," he says. "A person who is sensitive to his landscape gains a capacity for understanding that is denied to machines."
That won't stop scientists from refining their mapping tools - nor will it stop AI from being put to the same purposes. In the 1970s, before supercomputers, forecasters made predictions by matching current conditions against previous weather patterns. "If you do it manually, like people did, it's actually very inaccurate," says Gooley, because there simply wasn't enough data back then.
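That older approach, known as analog forecasting, is simple enough to sketch in a few lines. The example below is a toy - the dataset, variables and values are all invented for illustration - but it captures the core idea: find the historical day most like today, and predict that tomorrow will resemble what followed it.

```python
import math

# Toy analog forecast: everything here - the dataset, the variables, the
# values - is invented for illustration. Trained AI models learn far subtler
# versions of this matching from decades of historical data.

# Each record: (pressure in hPa, temperature in C) for a past day, paired
# with the temperature the following day turned out to have.
history = [
    ((1002.0, 12.0), 11.0),
    ((1021.0, 18.0), 19.5),
    ((1011.0, 15.0), 14.0),
    ((998.0, 9.0), 8.5),
]

def analog_forecast(today):
    """Predict tomorrow's temperature from the closest historical match."""
    def distance(past_day):
        return math.hypot(past_day[0] - today[0], past_day[1] - today[1])
    _, next_day_temp = min(history, key=lambda record: distance(record[0]))
    return next_day_temp

# Today: 1012 hPa and 15.5C. The closest past day is (1011, 15), which was
# followed by 14.0C, so that becomes the forecast.
print(f"analog forecast for tomorrow: {analog_forecast((1012.0, 15.5)):.1f}C")
```

With only four historical days to draw on, the match is crude - which is precisely Gooley's point about doing it manually.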
But now the ability of algorithms to "match patterns with past events suddenly becomes super effective, and that's been kind of a revolution in the last few years," he adds. "Once you train them, they can be very fast." For now, though, those AI models are no more skilled than the conventional ones ("and probably a little less skilled," says Parker). Could we get a better - or at least less fraught - forecast in a few years' time? "It cannot be ruled out," he says. "No one knows what will happen."