Quickly Noted: The Worst Case
Last spring, one of my favorite security bloggers, Bruce Schneier, wrote about his experience at a security conference where the moderator asked panel participants what their worst-case scenarios were.
You know the kind: you’re explaining your POD (point of dispensing) plan, and someone pipes up, “What if there’s a snowstorm, and all the power goes out, and the internet is down, and there’s an asteroid coming, and they’re allergic to all three medications — how will people know where to go for their medication?” The response I tend to give in my head is, “Well, we’re all screwed at that point, so who cares.” Out loud, though, I assure them that it’s a valid concern and that we’d work with our partners to get the message out, door to door.
Schneier gives four reasons why worst-case thinking adversely affects our planning and response, but I wanted to focus on the first of them:
Worst-case thinking means generally bad decision making for several reasons. First, it’s only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards. By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes.
Our recent H1N1 influenza response demonstrates, to some degree, why this is. By and large, most health departments planned for H5N1, so when a milder H1N1 flu came through, our draconian closure, quarantine, and isolation measures seemed out of whack. Many places ended up scaling back their planned actions haphazardly, improvising based on the situation at the time. We hadn’t considered the costs of our planned actions — or, more precisely, worst-case thinking blinded us to those costs, because we assumed we would only take those actions during the “worst-case scenario.” Truly a lesson to be learned.