Go into a restaurant in Miami these days, and everybody’s talking about how lucky they were that Irma turned away at the last minute and saved the city. The week-long power outage was a small price to pay, they say, for sideswipe salvation. Thankfully the forecast was wrong.
How can a forecast be exceptionally accurate and wrong at the same time? By choosing where in the forecast and communication process you do the evaluation.
Grading the forecasts as they leave the NHC yields a top score. But look at the end of the line where the messaging rubber hits the taking-action road, and survey says: Irma was supposed to hit Miami and suddenly turned and hit Naples. The forecast missed the mark.
If that’s what restaurants full of people believe after hours upon tedious hours of television, websites and social media coverage of every zig and zag of Irma’s track, then that’s the truth, the whole truth, and the only useful truth.
Forecasts are made for people, not record books. If the NHC forecaster is thinking “A” when he or she issues an advisory, but people comprehend and act on “B,” the forecast — the combination of the numbers, words, graphics and explanations — doesn’t make the grade.
On TV, public officials in Southwest Florida, the Florida governor, and innumerable meteorologists and reporters said it over and over again: Irma was supposed to hit Miami, and then it turned. How could restaurant-goers think anything else?
All of this is very unfair to my brethren, the outstanding forecasters at the NHC, of course. Technically, they did make excellent forecasts, and I’m sure they cringed every time they heard an official or a broadcaster misinterpret their advisories and express surprise and chagrin at the storm’s sudden turn to the left. But Irma was exactly the kind of storm that highlights the weaknesses in the NHC’s communications tools. The problems start with the cone of uncertainty, the graphic that shows where the NHC thinks the center of the storm could reasonably go.
Here is the forecast cone from Thursday morning, Sept. 7, at 11 a.m.
[Forecast cone graphic: National Hurricane Center]
There are no words that could have been spoken or written to convince any reasonable person looking at that graphic that the NHC was not forecasting Hurricane Irma to hit Miami. There’s a big giant M right over the city. If you read the fine print (which nobody does), you’d see that an M means extra bad. (The M actually stands for “major” hurricane, meaning Category 3 or above, which is a terrible label since it wouldn’t have applied to devastating storms like hurricanes Ike or Sandy.)
The intended NHC message is a much broader statement about the threat to the entire southern part of the peninsula. Any whiff of that notion is lost, however, in the noise created by the extra-bad M over the most glitzy and glamorous city on the hurricane coast.
To compound the problem, almost every TV station and network accentuated the M by replacing it with a big shiny spinning hurricane symbol drilling down on South Beach. (I’m pleased to say that the Weather Channel doesn’t use that style of graphic, but its use is widespread on TV, websites and social media.)
The National Hurricane Center goes out of its way to remind users not to focus on the big M, because there are errors and uncertainty intrinsic to the forecast. That message is buried, however, at the bottom of the technical discussion the NHC issues with each advisory, which is not designed for public consumption. Not that it would make any difference where the disclaimer was printed. As long as the M is over Miami, the words don’t count.
Communication problems don’t end with the cone
A conflict between the spirit of the message and the way it is conveyed showed up in the local National Weather Service forecast for Miami and Naples that Thursday as well. The forecast for both cities for that following Sunday, when the M was positioned over Miami, was: Hurricane conditions possible. Showers and thunderstorms likely. Highs in the mid 80s. Chance of rain 70 percent.
There is only a 70 percent chance of rain, yet we’re supposed to be worried about a mega-hurricane! The forecast is deaf in both tone and content, and doesn’t in any way convey the actual threat to South Florida that Miami-based forecasters were concerned about. A tell-it-like-it-is forecast would likely have gotten more attention. For example: Extreme hurricane conditions possible. If there are no significant changes to the forecast track, full preparations should be completed by Friday evening.
There are two traps in the hurricane communications system on display here. First, the forecast cone with its line of dots down the middle is especially misleading when the potential target area includes multiple metropolitan areas — in this case, Miami and Naples-Fort Myers, which are roughly 100 miles apart. The obvious question is, “Which area should prepare?” If the answer is “both,” then the graphic can’t favor one over the other without being misleading.
A partial solution is buried in the mountain of data that comes with every National Hurricane Center advisory: the odds that hurricane-force winds will affect major cities in the hurricane’s potential path. Every city should have a threshold percentage at three or four days out that would trigger when preparations should begin. Most people would take action if there were a 1 in 10 chance that something bad was going to happen to their family, so maybe that’s a good number. Whatever that threshold is, the NHC advisory showed a 1 in 5 chance of hurricane-force winds in Naples that Thursday morning, which increased to a 1 in 3 chance that afternoon.
The bottom line is, any area with a threat level that exceeds the established risk threshold should be treated equally in the forecast cone. There has been a lot of discussion over the years about how to revise the forecast cone. Hopefully the Irma experience will hasten those efforts.
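To make the threshold idea concrete, here is a minimal sketch of how such a preparation trigger could work. The numbers are only illustrative: the 1-in-10 threshold is the example suggested above, the Naples probability matches the figure quoted from that Thursday morning’s advisory, and the Miami probability is an assumed value for the sake of the example; none of this reflects an actual NHC product.

```python
# Minimal sketch of a preparation-trigger threshold (illustrative only, not an NHC product).

PREP_THRESHOLD = 0.10  # the "1 in 10" chance suggested above as a reasonable trigger

# Hypothetical probabilities of hurricane-force winds at one advisory time.
# The Naples figure matches the "1 in 5" chance cited for Thursday morning;
# the Miami figure is an assumed value for illustration.
wind_probabilities = {
    "Naples": 0.20,
    "Miami": 0.15,
}

for city, probability in wind_probabilities.items():
    if probability >= PREP_THRESHOLD:
        print(f"{city}: begin preparations ({probability:.0%} chance of hurricane-force winds)")
    else:
        print(f"{city}: keep monitoring the forecast ({probability:.0%} chance)")
```

Under a rule like this, both cities in the example would be told to begin preparations, which is exactly the point: every area above the threshold gets treated equally, rather than letting the line of dots pick a single favorite.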
Concerning the wording of the city forecasts, the issue highlighted above is only the tip of an industry-wide inflexibility iceberg that muddles communications every time a hurricane makes landfall. Just about every agency, company, outlet, TV station, website and app — including the National Weather Service, the Weather Channel, and most posts on social media — makes explicit weather forecasts for cities in a hurricane’s impact zone, well before that impact can possibly be known with specificity given the modern state of meteorological science.
These misleading and confusing forecasts are produced by well-intentioned people and organizations because the formats of their television, web and app graphics demand it. There are seven days’ worth of forecast boxes to fill in, so that’s what they do, even though everybody recognizes that the future weather in a potential hurricane impact zone is unknowable, and the forecast is at risk of being drastically wrong.
A way forward: Extreme events demand a different kind of coverage
Deceptive forecasts that show benign weather when extreme conditions are possible at best damage the credibility of the weather enterprise and at worst endanger the public.
The solution here is to create a “storm mode” of forecast operations analogous to the military’s Defcon 1. Under storm mode, a different operations paradigm would ensue. The daily forecast would be replaced by statements of the nature of the threat plus preparedness timetables: “What might be coming, when do I have to be ready, and what do I need to be ready for?” That’s what people need to know. The day-to-day products and verbiage about temperatures and chances of rain are misleading, unnecessary, counterproductive and a waste of valuable forecaster time.
I acknowledge that my restaurant test is unscientific, but it’s the right way to evaluate whether the system we use to forecast hurricanes and to communicate with the public is working. Forecasters know that Irma didn’t turn at the last minute; it arced to the north at a slightly different angle than forecast, well within the well-understood errors intrinsic to the system. But until we develop a system that keeps the public in sync with the forecasters before, during and after a storm, excellent forecasts will continue to be wrong.