Evaluating the Cities Readiness Initiative
Last month, the RAND Corporation published a report intended to evaluate the Cities Readiness Initiative (CRI) program. This evaluation was requested by HHS in the spring of 2007. For those of you listening in and not prone to clicking links, CRI seeks to help the nation’s largest metropolitan regions develop the ability to provide life-saving medications in the event of a large-scale bioterrorist attack or naturally occurring disease outbreak. Traditionally, this program has focused on preparing for and responding to an aerosolized anthrax attack. The goal of the program is to provide antibiotics and life-saving medications to 100% of a planning jurisdiction’s population within 48 hours.
Now, before I read the summary, or the key findings, or even got into the document, it was my feeling that pouring millions of dollars into developing plans for public health response would naturally elicit better public health response plans. Being able to hire people would spread the burden of planning to more people, enabling folks to better do their jobs. Because of this, public health planning in general should see improvements due to the existence of the CRI project. These improvements wouldn’t be equal or standard across all sites because, well, some people are better planners than others and not all health departments have made planning a priority. I imagined this would be one of those reports where the outcome was obvious, but a bit of research would quantify that and settle the issue.
And really? That’s exactly what it was. If you feel like I do, this report isn’t groundbreaking (if you think CRI is a huge failure, then it might be groundbreaking to you). There are some interesting points, though, that I wanted to bring up.
Chapter 5 (CRI's Impact on Planning) was the most interesting chapter, I thought. Obviously, a planning initiative will have its greatest impact on planning, but I think that RAND, in this chapter, preemptively identified some of the weaknesses of the program today.
The first of these weaknesses is something that I talk about too much around here. Public information and communication goals in the Technical Assistance Review (TAR) tool are increasingly being met by the CRI jurisdictions, but the report notes that risk communication best practice includes, and indeed depends on, pre-testing and evaluation to ensure that the information being disseminated is necessary, correct, easily understood, and not confusing.
The second issue raised has to do with the anthrax scenario. This is something I've wondered about before, so it's nice to know that it's a concern of folks more involved in CRI planning. Some jurisdictions felt that having planners focus solely on responding to an aerosolized anthrax attack, a relatively unlikely event, can detract from generating either all-hazards plans or plans geared toward situations deemed much more likely by a comprehensive hazard vulnerability analysis. The report notes the difference between the CRI scenario and others:
For example, response to a nuclear detonation would require medical surge and mass evacuation capabilities, and response to an infectious disease incident would likely rely heavily on medical surveillance and nonpharmaceutical countermeasures.
I've wondered about this specifically, especially as planning goals seem to be driven at the funding level by type of event. CRI worries about anthrax, HHS worries about pandemic flu, DOE worries about nukes, etc. And never the twain shall meet. In a worst-case scenario, isn't it possible that two planners in a health department could never cross-pollinate their ideas as one plans for a pandemic and the other for an anthrax attack?
I imagine that some day, as planning jurisdictions routinely start scoring 100s on the TAR, the tool will have to be adapted to reward planners for taking the next steps and thinking creatively.
The third weakness I'll bring up here builds on this idea of the TAR tool being less than ideal for grading response planning. According to the report, some jurisdictions felt that the TAR was overly restrictive and not complementary to other federal efforts, such as NIMS:
For example, some expressed concern about being required by CRI to develop call-down lists for command structures when listing only position titles is the practice recommended by the National Incident Management System (NIMS), which leaves plans more durable over time and more flexible in the event of unexpected contingencies.
The report notes that these concerns were most pronounced at sites that were much further along in their planning, and at jurisdictions whose program consultants were most likely to work directly with the planners. Again, I think this calls for the development of a new tool, one that recognizes that some cities have been building these plans for years. For them, checking the box that says "We have a mass dispensation plan" seems like a waste of a planner's time. Maybe TAR, Level 2.
Like I said above, this report by no means criticizes the CRI program, but I think it points out some weaknesses that should be addressed. And in all truthfulness, I would argue that these weaknesses are a testament to the success of the program. Some participants have outstripped the utility of the existing measurement tool and are beginning to feel constrained by the inflexibility that was required in the earlier stages of planning. Kudos, planners and DSNS consultants; now get back to it.
Photo credit: net_efekt