Thanks to last week's heatwave, TechNerd spent a lot of time examining various weather forecasting widgets and sites, trying to use body English to bump the thermometer down. He tried using the power of his mind to control the weather, but unlike the X-Men's Storm, he merely developed a headache and had to lie down.
Predicting anything, with weather notably near the top of the list, is notoriously difficult. A forecast relies on a simulation, a necessarily simplified mathematical model of reality put into a Petri dish and examined. The simulation uses current measurements or statistics and applies them using rules that are supposed to mimic what actually happens—to clouds, hurricanes, the stock market, or people.
Jorge Luis Borges seems to have anticipated, long before powerful computers actually arrived, every problem that would afflict digital plenty and precision. In his story "On Rigor in Science," he describes a country in which a map at the scale of 1:1 (a mile to the mile) is the only one that would suffice.
You might laugh, but in weather simulations, that's the ultimate but impossible intent. With faster and faster supercomputers—a term that lost some of its luster when desktop machines started qualifying for the moniker—weather forecasting can ostensibly become more accurate.
Rules become less approximate, and the regions the computer defines, against which those rules are applied, become smaller and smaller. The smaller each volume is, the more accurate the forecast should become. The best supercomputer work right now still measures these cells in kilometers.
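To see why those kilometers matter, here's a back-of-the-envelope sketch of how the number of grid cells explodes as resolution improves. The figures (Earth's surface area, atmosphere depth, vertical levels) are rough illustrative assumptions, not any real model's configuration:

```python
# Rough sketch (illustrative numbers only): how a global forecast model's
# cell count grows as the horizontal grid resolution shrinks.

EARTH_SURFACE_KM2 = 510_000_000  # approximate surface area of Earth in km^2
VERTICAL_LEVELS = 50             # assumed number of atmospheric layers

def cell_count(resolution_km, levels=VERTICAL_LEVELS):
    """Grid cells needed for a global model at a given horizontal resolution."""
    columns = EARTH_SURFACE_KM2 / (resolution_km ** 2)
    return int(columns * levels)

for res_km in (25, 10, 1):
    print(f"{res_km:>3} km grid: {cell_count(res_km):,} cells")
```

Halving the cell size quadruples the number of horizontal columns, which is one reason "drop down to the molecular level" stays a thought experiment.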
With a nearly infinitely powerful and large computer, something like Deep Thought (or even the Earth) from The Hitchhiker's Guide to the Galaxy, a simulation could drop down to the molecular or atomic level. Although then the simulation becomes Borgesian, or at least Heisenbergian.
Remember the notion that a butterfly batting its wings on one side of the world causes a hurricane on the other? It's not true, but small events do interact in ways that can cause extreme events over small volumes that are beneath the resolution of detail in current forecasts; or that combine with other events to bubble up into something major.
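That sensitivity to tiny differences is easy to demonstrate with a toy model. This is a sketch using the Lorenz system—a famously simplified convection model, not a real forecast—with a crude Euler integration; the step size and step count are arbitrary choices for illustration:

```python
# Toy demonstration of sensitive dependence on initial conditions using the
# Lorenz system. A "butterfly-sized" nudge to the starting state produces a
# completely different trajectory after enough simulated time.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations.
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(x0, steps=3000):
    # Integrate from a given starting x, with fixed y and z.
    x, y, z = x0, 1.0, 1.05
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x

a = run(1.0)
b = run(1.0000001)  # nudge the seventh decimal place
print(abs(a - b))   # the two runs diverge widely
```

The two starting points differ by one part in ten million, yet the end states bear no resemblance to each other—which is the whole problem with long-range forecasts.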
There's a sweet spot forecasters want to find where they can still abstract the individual components that make up a simulation into an aggregate that's finely scaled enough to be practical and useful. It doesn't help to get a perfect forecast that arrives three months late, for instance. And computational power can only increase so much; the limits are starting to show. Simulations do lend themselves to parallelism, though, in which many pieces of a problem are solved simultaneously by different processor elements.
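The "many pieces at once" idea can be sketched in a few lines: split a strip of cells among workers, process each piece independently, then stitch the results back together. This is a hypothetical toy, not how any real model is coded—production models also exchange boundary ("halo") cells between neighboring pieces, which this sketch ignores:

```python
# Toy domain decomposition: smooth a 1-D strip of temperature cells by
# handing independent pieces to a pool of worker processes.

from multiprocessing import Pool

def smooth(chunk):
    # Replace each interior cell with the average of itself and its neighbors;
    # the edge cells are left untouched (no halo exchange in this toy).
    return [chunk[0]] + [
        (chunk[i - 1] + chunk[i] + chunk[i + 1]) / 3
        for i in range(1, len(chunk) - 1)
    ] + [chunk[-1]]

if __name__ == "__main__":
    cells = [float(i % 7) for i in range(40)]
    pieces = [cells[i:i + 10] for i in range(0, 40, 10)]
    with Pool(4) as pool:                       # four "processor elements"
        smoothed_pieces = pool.map(smooth, pieces)
    result = [c for piece in smoothed_pieces for c in piece]
    print(len(result))
```

Each worker touches only its own slice of the grid, which is what makes the decomposition embarrassingly parallel in this toy version.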
There's also the issue that beyond some small number of days, even with computational abilities hundreds of times what we possess today, the systems are too chaotic to lend themselves to accurate prediction.
The practical upshot of the improvement in forecasting is that a 3-to-4-day forecast today is as accurate as a 2-day forecast was in the 1980s. This means that hurricanes (among other weather events) are more accurately anticipated, even when the intensity varies up or down from the initial prediction, or the path changes direction.
Of course, none of this explains why during our super-heatwave last week in Seattle, 1-to-7-day forecasts from Accuweather, The Weather Channel, NOAA, and other sources varied as much as 10 degrees from each other. Each source has its own inputs, and its own secret sauce—but I haven't yet seen a truthiness graph for weather sites to compare performance against reality.
I suppose with the weather finally cooling, I can stop checking the widgets on my iPhone and desktop obsessively to see what today and tomorrow's weather will bring. Today's the first time I can remember looking forward to it being "only" 83 degrees—or will it be 90?