Proposed tweaks to severe thunderstorm warnings

The National Weather Service (NWS) is collecting public comment on some proposed changes to severe thunderstorm warnings. These changes would add damage threat labels for wind and hail threats. The three tiers are (no label), considerable, and destructive.

Category       Wind       Hail
(no label)     > 60 mph   > 1.0″
Considerable   > 70 mph   > 1.75″
Destructive    > 80 mph   > 2.75″
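In code, the proposed tiers amount to a simple threshold check on the greater of the two threats. A minimal sketch (the function name and return strings are my own; the thresholds come from the table above):

```python
def damage_threat(wind_mph: float, hail_in: float) -> str:
    """Return the proposed damage threat tier for a severe thunderstorm.

    A storm qualifies for a tier if either its wind or hail threat
    exceeds that tier's threshold, per the proposed NWS table.
    """
    if wind_mph > 80 or hail_in > 2.75:
        return "destructive"
    if wind_mph > 70 or hail_in > 1.75:
        return "considerable"
    return "(no label)"

# 75 mph wind with small hail rates "considerable";
# 3" hail alone rates "destructive".
print(damage_threat(75, 1.0))  # considerable
print(damage_threat(60, 3.0))  # destructive
print(damage_threat(62, 1.2))  # (no label)
```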

As part of the proposal, the NWS says it will recommend that destructive severe thunderstorms trigger a wireless emergency alert (WEA) message. This means most modern cell phones will receive an alert for the highest-end storms. According to an analysis by Joseph Patton, this would apply to just over 1% of severe thunderstorm warnings. (The percentage will vary by time and location.)

I am 100% on board with this proposal. Let’s be honest with ourselves: most people ignore severe thunderstorm warnings. I’ll be the first to admit that I do. Once I’m inside, I’m safe enough without taking extra precautions. But those top-end storms can do damage similar to tornadoes. Being able to distinguish between “get inside” and “get to the basement” severe storms is helpful.

Now I’ve suggested before that tornado and severe thunderstorm warnings should be combined into a single product. I still hold that opinion. Intensity of the threat matters more than the specific mechanics of the threat. But I very much doubt the NWS will implement that idea any time soon. This proposal at least allows for cleaner communication of the most life-threatening thunderstorms.

You can give the NWS your own opinion via an online survey before July 30, 2020.

Beware weather forecast snake oil

Snake oil salesmen are found in every industry, and weather forecasting is no different. So how do you identify weather forecast snake oil? One major sign is that the forecaster doesn’t talk about a forecast until after the fact. Another is that you only hear about the successful forecasts. And of course, if it seems too good to be true, there’s a good chance it is.

I recently saw someone talking about severe weather forecasts months out. The man behind these forecasts isn’t just some rando with a website. He has a PhD in meteorology from the University of Oklahoma and is a forecaster at the Storm Prediction Center. So it’s entirely possible that he’s on to something here. But I’m suspicious.

He recently posted about his forecast for March tornadoes, ostensibly made three months before the outbreak. I looked through the archives and found no sign that it existed before the event. His website contains no forward-looking forecasts. There’s no methodology. There’s no discussion of busted forecasts.

I don’t know Dr. Cook. I don’t want to say anything about him as a person or a forecaster. But until he shows more transparency on his forecasts, I’m inclined to call it weather forecast snake oil.

Please don’t argue with the warning system

“Please don’t argue with the warning system”, Indiana University told a lecturer from its meteorology department as he rightly criticized its communications on Sunday.

Despite being wrong, the university continued to insist that they were making the right choice. Now as a Boilermaker, I’m normally in favor of Indiana University embarrassing itself. But this time, it’s just bad. Warning fatigue can kill people. The false alarm rate is already too high; telling people about warnings that don’t exist only makes it worse.

The “warnings affect the entire county until notified otherwise” statement is only a decade out of date. But I get it, our warning dissemination technology hasn’t caught up with how warnings are issued. You may recall I’ve written a few words on the subject.

The fact that dissemination technology is still (mostly) stuck in a county-based paradigm 10 years after the nationwide implementation of polygon-based warnings is an embarrassment. Emergency management is more than just weather, so I don’t expect emergency managers to know as much as meteorologists. I do expect them to not act silly when they’re corrected by experts. But most of all, I expect things to get better.

I don’t know why I expect things to get better. It’s hard to imagine the large public- and private-sector investments that are necessary to fix the issue. Storm deaths are relatively low, so there’s not even mass tragedy to spur action. It’s much easier to just work around the edges and pretend the glaring issues don’t exist. But if we’re serious about being a Weather-Ready Nation, we need to fix it at some point. Otherwise public institutions will continue making themselves look bad and misinforming the public.

How to reduce artificial boundaries in severe weather warnings

If you’ve been around here a while, you’ve seen me have opinions about the shapes of so-called “storm-based warnings”. Years ago, the National Weather Service changed the shape of tornado and severe thunderstorm warnings. Instead of issuing warnings based on the county, warnings are arbitrary polygons fitted to the threatened area. The idea is that by shaping warnings to the actual threat, the public gets a more accurate warning.

The reality is a little messier. Warnings are still frequently communicated to the public on a county basis. Worse, the warnings themselves are sometimes shaped to a county line. Sometimes this is done to prevent a tiny sliver of a county from being included in a warning. Other times, it’s the result of a boundary between the responsibility areas of different NWS Forecast Offices.

Last week gave a great example close to home. The NWS office in Northern Indiana issued a tornado warning on the edge of their forecast area. Because the adjacent office didn’t issue a warning for that storm, the resulting shape was comically bad.

A tornado warning (red) shaped by the boundary (blue) between the IWX and IND forecast areas.

To be clear: I don’t blame the forecasters here. It was a judgment call to issue or not issue a warning. The real problem is that the artificial boundary does the public a disservice. Most of the general public probably does not know which NWS office serves them. Bureaucratic boundaries here only add confusion.

One solution is for the offices to coordinate when issuing warnings near the edge of their area. That doesn’t hold up well in the short time frame of severe weather, especially if an office is understaffed or over-weathered. Coordination takes time and minutes matter in these situations.

My solution is simpler: allow (and encourage) offices to extend warnings beyond their area. Pick a time frame (30 minutes seems reasonable) and allow the warning to extend as far into another office’s area as it needs to in order to contain the threat at that time. Once the threat is entirely into the new area, allow that office to update the warning as they see fit.

This allows offices to draw warnings based on the actual threat. It buys some time for additional coordination if needed, or at least gives a cleaner end to the warning. It does mean that some local officials will need to have a relationship with two NWS offices, but if they’re on the edge they should be doing that anyway.

The downside is that it increases the effort of verifying warnings, because you can no longer assume which office issued the warning for a given area. And it could lead to some territorial issues between offices. But the status quo makes life easier for the bureaucracy by putting the burden on the public. That’s not right.
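For a sense of scale, the distance a warning would need to extend past the boundary is just storm motion times the time window. A quick sketch of that arithmetic (the 30-minute window is from my proposal above; the speeds and function name are my own illustration):

```python
def extension_miles(storm_speed_mph: float, window_minutes: float = 30) -> float:
    """How far a threat moving at storm_speed_mph travels in the window."""
    return storm_speed_mph * (window_minutes / 60)

# A storm moving at 40 mph could require the warning to reach
# 20 miles into the neighboring office's area; a 60 mph mover, 30 miles.
print(extension_miles(40))  # 20.0
print(extension_miles(60))  # 30.0
```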

Sidebar: what about issuing warnings at the national level?

Another solution would be for a national center to issue warnings. This is already the case for severe weather watches, after all. While it would solve the responsibility area problem, it would also reduce the overall quality of warnings. Local offices develop relationships with local officials, spotters, and others. These relationships help them evaluate incoming storm reports and tailor warnings to local conditions and events. A national-level warning operation would clearly provide some benefits, but warning response is ultimately a very personal action, and it benefits from keeping warning issuance as close to the public as possible.

Reporting severe weather via social media

It feels weird writing a post about severe weather in mid-December, but here we are. Over the weekend, storm chaser Dick McGowan tried to report a tornado to the NWS office in Amarillo, Texas. His report was dismissed with “There is no storm where you are located. This is NOT a valid report.” The only problem was that there was a tornado.

Weather Twitter was awash in discussion of the exchange on Saturday night. A lot of it was critical, but some was cautionary. The latter is where I want to focus. If you follow me on Twitter, it will not surprise you to hear that I’m a big fan of social media. And I think it’s been beneficial to severe weather operations. Not only does it make public reporting easier, but it allows forecasters to directly reach the public with visually-rich information in a way not previously possible.

But social media has limitations, too. Facebook’s algorithms make it nearly useless for disseminating time-sensitive information (e.g. warnings), and the selective filtering means that a large portion of the audience won’t get the message anyway. Twitter is much better for real-time posting, but is severely constrained by the 140-character limit. In both cases, NWS meteorologists are experts on weather, not social media (though there are efforts to improve social media training for forecasters), and there’s not necessarily someone keeping a close eye on incoming social media reports.

I don’t know all of the details of Saturday night’s event. From one picture I saw, the storm in question looked pretty weak on radar. There were also several possible places Dick could have been looking, and he didn’t make it clear which direction he meant. At the root, this is a failure to communicate.

As I said above, I’m a big fan of social media. If I need to get in touch with someone, social media is my first choice. I frequently make low-priority weather reports to the NWS via Twitter. For high-priority reports (basically anything that meets severe criteria or that presents an immediate threat to life), I still prefer to make a phone call. Phone calls are less parallelizable, but they’re lower-latency and higher-bandwidth than Tweets. The ability for a forecaster to ask for a clarification and get an answer quickly is critical.

If you do make a severe weather report via Twitter, I strongly encourage enabling location on the Tweet. An accurate location can make a big difference. As with all miscommunications, we must strive to be clear in how we talk to others, particularly in textual form.

Tornado warning false alarm rates

FiveThirtyEight recently ran a post about the false alarm rate of tornado warnings. Tornado warnings fail to verify (i.e. have no tornado) approximately 75% of the time, a number that has held steady for years. This comes as no surprise to meteorologists, and probably not to the general public. What’s disappointing about the article is that it doesn’t address the reason the false alarm rate hasn’t improved: it’s not a priority.

The ideal case, of course, is a false alarm rate of zero. While the article quotes the reasoning (“you would rather have a warning out there and have it miss than have an event and not have one out there”), it doesn’t explain why that reasoning leads to a high false alarm rate.

The first reason is that an emphasis on maximizing detection means that in questionable scenarios, forecasters will lean toward issuing a warning instead of not. I’ve been in an office when an unwarned tornado has been reported. The forecasters are not happy about that. They take the National Weather Service mission of protecting life and property seriously. The potential loss of life from a missed event outweighs the impact of a false alarm (inconvenience and lost productivity).

After inadvertently posting this when I meant to save the draft, a friend commented that the “ideal FAR is actually non-zero if you want lead time.” This leads to the second reason an emphasis on detection increases the false alarm rate. Issuing a tornado warning seconds before the tornado hits is of limited utility. People in the warned area need time to move to safety. The article does point out that lead time has increased steadily for the past few decades. But the more lead time you have, the more likely it is that a warned storm won’t produce a tornado. Tornadoes are exceptional events.

There’s a balance between detection rate and lead time on one side and false alarm rate on the other. Like a seesaw, lowering one side raises the other (if you play with the signs on the numbers, that is). Prudent policy focuses first on detection and then on lead time, so the false alarm rate has to suffer. Improvements in technology and science will hopefully move the fulcrum such that we can lower the false alarm rate without reducing the lead time or probability of detection too much.
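The balance above is usually quantified with a 2×2 verification table. A minimal sketch of the standard formulas (the counts below are invented for illustration, chosen to reproduce the roughly 75% figure from the article):

```python
def pod(hits: int, misses: int) -> float:
    """Probability of detection: fraction of tornadoes that were warned."""
    return hits / (hits + misses)

def far(hits: int, false_alarms: int) -> float:
    """False alarm ratio: fraction of warnings with no tornado."""
    return false_alarms / (hits + false_alarms)

# Invented counts: 25 warned tornadoes, 5 missed, 75 warnings with no tornado.
print(f"POD: {pod(25, 5):.2f}")   # POD: 0.83
print(f"FAR: {far(25, 75):.2f}")  # FAR: 0.75
```

Issuing more marginal warnings moves storms from the misses column into either the hits or the false alarms column, which is exactly the seesaw described above.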

Severe weather outlooks on TV

At the end of October, the Storm Prediction Center changed the categories used in severe weather outlooks in order to more clearly communicate risk. These outlooks, like many NWS products, started as a way of communicating information to other meteorologists, emergency managers, et cetera. Though they weren’t designed with public consumption in mind, social science has helped to shape some of the changes. The Internet means that weather products are available to anyone who is looking for them.

What I’ve noticed since then is that not all of the local TV stations have gone along for the ride. A while ago, I asked one of the local meteorologists about this. His station discussed it internally and decided fewer categories made for less viewer confusion. I don’t have any reason to dispute that.

Severe weather outlook from the Storm Prediction Center.

Severe weather outlook from NBC affiliate WTHR.

Severe weather outlook from CW affiliate WISH.

Severe weather outlook from Fox affiliate WXIN.

My main concern isn’t that the station doesn’t use the same categories as the SPC, but that different stations in the market use different categories. Of course, they should do what they think is in the best interests of their viewers. I’m certainly not suggesting there be mandatory unification. At the same time, I think stations having different risk categories is more confusing to the public than adding “marginal” and “enhanced” categories. Then again, TV weather seems to be one place that people have a specific and unwavering loyalty. Outside of weather weenies, I’m not sure there are too many people who would even notice differences between stations.

2013 severe weather watches

Greg Carbin, Warning Coordination Meteorologist at the Storm Prediction Center, recently updated his website to include maps of 2013 severe thunderstorm and tornado watches. I always like looking at these, because they highlight areas of increased and diminished severe weather threat. It’s important to not read too much into them though. As with hurricanes, it’s not always the frequency of events that makes a year memorable. 2013 was a below- or near-normal year for watches in the areas of Illinois and Indiana that were hit by a major tornado outbreak on November 17.

Tornado (left) and severe thunderstorm (right) watch count (top) and difference from 20 year average (bottom) by county. Maps are by the NOAA Storm Prediction Center and in the public domain.

Speaking of hurricanes, the quietness of the 2013 Atlantic hurricane season is evident in the below-average tornado watch count along the entire Gulf coast. Landfalling hurricanes are a major source of tornado watches for coastal states, so an anomaly in watches is often reflective of an anomaly in tropical activity. Preliminary tornado counts for 2013 are the lowest (detrended) on record. It’s not surprising, then, that the combined severe thunderstorm and tornado watch counts are generally below normal.

Severe weather watches (left) and departure from normal (right) by county. Maps are by the NOAA Storm Prediction Center and are in the public domain.

As you’d expect, Oklahoma and Kansas had the largest number of watches. What’s really interesting about the above map is the anomalously large number of watches in western South Dakota, western Montana, and Maine. Indeed, western South Dakota counties are comparable to Kansas in terms of raw watch count. Of course, that doesn’t mean the watches verified, but it’s an interesting note. Looking back through past years, the last 4 years have been anomalously high in western South Dakota. Is this an indication of a population increase, forecaster bias, or a change in severe weather climatology?

Tornado safety in schools

Yesterday afternoon, the second EF-5 tornado in 15 years struck the town of Moore, Oklahoma. As a nationwide audience watched the live coverage from local TV stations, the tornado leveled roughly 30 square miles, destroying two schools and damaging three more, along with a hospital. I don’t know what it is about Moore, but it seems to be a tornado magnet.

Historical tornado tracks (colored by intensity) from tornadohistoryproject.org. This does not include the 2013 EF-5.

From what I’ve read, the school day had not yet ended when the tornado struck, which meant the schools were full. As the immediate shock wears off, some of the discussion will inevitably turn to the question of whether the schools should have dismissed early. In my opinion, the answer is “absolutely not”.

While it’s true that (as of this writing) nine children died, it’s quite possible the death toll would have been even worse. If the students don’t get home before the tornado hits, they’re sitting ducks in a school bus or walking home. During last year’s Henryville, IN tornado, a bus driver returned to the school after an early dismissal, saving the lives of the students aboard.

Even if the students make it home, that’s not necessarily much safer. Numerous homes in the damage path were leveled. In other cases, students live in mobile homes or otherwise weak structures. Sending them home in such conditions is tantamount to a death sentence. That was the situation in Enterprise, Alabama in 2007: while school officials received criticism for keeping students in place, they made the right decision.

Having students on the road during a tornado is obviously not the answer. Having students at home isn’t particularly compelling in many cases either. Because we cannot yet predict the specific path of a tornado until it has formed, it’s hard to make the argument in favor of cancelling classes. While some students have been killed by staying at school, it remains the best option available.

More thoughts on warning polygons

On Tuesday, Patrick Marsh wanted a distraction from his dissertation and embarked on an idle investigation of tornado warnings and impacted areas (my thoughts on what “impact” means are below). Using some very rough approximations, he calculated the percentage of warned persons who are impacted by a tornado. Even under the most generous set of assumptions, the verification by population is generally below 20%. It’s worth noting that 2011 (the most recent year that official tornado data is available) was the best year in the analysis, but there is no indication of a general improvement trend.
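Patrick’s metric can be approximated as total impacted population divided by total warned population, summed over warnings. A rough sketch of that arithmetic (the populations and structure are invented; this is my reading of the approach, not his actual code):

```python
def population_verification(warned_areas):
    """Fraction of all warned persons who were actually impacted.

    Each warning is a (warned_pop, impacted_pop) tuple; impacted_pop
    is zero for a false alarm.
    """
    warned = sum(w for w, _ in warned_areas)
    impacted = sum(i for _, i in warned_areas)
    return impacted / warned if warned else 0.0

# Invented season: two verified warnings and two complete false alarms.
season = [(50_000, 8_000), (20_000, 2_000), (40_000, 0), (30_000, 0)]
print(f"{population_verification(season):.0%}")  # 7%
```

Note how heavily the complete false alarms drag the number down even when the verified warnings are tightly drawn, which matches Patrick’s finding that most of the population-based false alarm comes from unverified warnings.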

Despite some of the problems I’ve previously noted in the polygon warning system, it’s still better than warning entire counties. Still, there’s a lot of room to improve the false alarm rate. Much of the population-based false alarm comes from warnings that have no tornado at all. The rest comes from too-large warnings or not-small-enough warnings (“not-small-enough” warnings are small enough to be justifiable, but still larger than absolutely necessary).

It’s not always easy to shrink warnings. Only the supercell storms relatively close to a radar site seem suitable. In those cases, it’s possible to make the warning only a few miles wide, or the width of the mesocyclone with uncertainty added as you go downstream. This would minimize the area under the warning, but it got me wondering: would that be too small?

At the scale of a mile or two, how do you explain the warned area to the public? Storm-based warnings are already difficult to communicate quickly, and microwarnings would only compound the problem. Even in Lafayette, the 10th largest city in Indiana, the covered area might look something like:

...TEAL ROAD BETWEEN 4TH STREET AND 26TH ST...
...KOSSUTH ST BETWEEN 9TH STREET AND SAGAMORE PARKWAY...
...SOUTH ST BETWEEN FIVE POINTS AND FARABEE DR...
...18TH ST BETWEEN BECK LN AND FERRY ST...

And so on. Or maybe it would use neighborhoods and landmarks instead:

...LAFAYETTE COUNTRY CLUB...
...HIGHLAND PARK...
...JEFFERSON HIGH SCHOOL...
...FIVE POINTS...
...WALLACE TRIANGLE...
...COLUMBIAN PARK...

Either way, it’s much more complicated than a simple “LAFAYETTE”. Yes, it’s more detailed, but does that help? First, it takes much longer to read the text. Second, can you count on people, especially those who are new to the area, to know the streets, neighborhoods, and landmarks well enough to quickly figure out whether they’re affected? I suspect the answer is “no”. Perhaps some day someone with the time, energy, and funding can look at this.

Sidebar: What does it mean to be “affected” by a tornado?

When Patrick commented on Twitter about his post from Tuesday, I remarked that the results depend on how “affected” is defined. His analysis was based on population, but that doesn’t necessarily convey all impacts. If my office is wiped out by a tornado but my house is untouched, I am still affected. You can expand this even further and incorporate businesses that saw decreased revenue as a result of a tornado, even if they were not directly hit. Businesses that see increased revenue (e.g. home improvement stores) might also be included, even though the effect is a positive one. The more broadly (and, I would argue, more accurately) we define being affected, the more difficult it becomes to get accurate data.