
Category Archives: Professional Standards


Whole Community Emergency Management: Was I Supposed to Have Noticed a Difference?

 

"There's nothing in the streets/ Looks any different to me/ And the slogans are replaced, by-the-bye" The Who- "Won't Get Fooled Again"


 

Imagine my surprise when I learned recently that we have been living in the promised land of Whole Community Emergency Management since 2011, and somehow I had neither known about nor noticed any difference!

I do not know why, but I do not recall much being made of this document in 2011, nor do I recall hearing much about it since.  I still hear about the National Response Framework, the National Incident Management System, and even occasionally the National Disaster Recovery Framework… but what has happened to Whole Community Emergency Management?  Was it really meant as a paradigm shift, or simply meant to give the appearance of a major change in approach to federal emergency management?

I don’t know the answers to these questions, but something seems just a little suspect when FEMA starts coining terms to promote its “NEW! … IMPROVED!” emergency management.  We already have Comprehensive Emergency Management, Integrated Emergency Management, and even (if you pay attention to what is going on in other parts of the world) Integrated Disaster Risk Management: do we really need another?  And if I wasn’t already suspicious enough, there seems to be something rather strange about FEMA claiming the idea of community-based emergency management as if it were something new and original, when scholars and researchers have been advocating community-based approaches since before there was a FEMA.

Would I be alone in thinking there is something amusingly odd, even ironic, about FEMA teaching “bottom-up” emergency management to state and local emergency managers?  But there will be nothing amusing about it for those at the community level who have embraced and put their energies into collaborative programs of emergency and disaster management, thinking this is the change they have been waiting for, when they discover that this new and improved version of emergency management did not come with systems to make state- and federal-level assistance faster and more responsive to their needs.  Go ask some of those still rebuilding after Sandy how well this Whole Community Emergency Management has worked out for them.  Or could it be that it no longer applies after the disaster strikes?

The simple fact is that calling EM “Comprehensive” or “Whole Community” or whatever catchy term someone with marketing flair thinks will catch on does not mean very much.  In any form of professional practice, what you call the approach is not what matters.  I don’t care what name my lawyer or my surgeon gives to his particular brand of lawyering or surgery: I want him to know what he is doing, and to do it well.

I don’t care what it is called; the important question is whether it is “good” emergency management, both in idea AND in practice.


NFPA1600, EMAP, and the Concept of “Benchmarks”

Yesterday an interesting thread came to my inbox through the IAEM email discussion.  Someone asked if there were any benchmarks available in EM.  The unanimous opinion was that NFPA1600 and EMAP serve as benchmarks for our “profession”.  Me, having taken a look at the NFPA1600 and EMAP standards on a couple of occasions, and being the pain-in-the-ass contrarian that I am, said “Not so fast, my friends…”

Here is my reply:

I would argue that the EMAP and NFPA1600 standards are not really benchmarks, if one is speaking of a benchmark as a way of comparing measurements of performance and/or process in some area against measurements of best-practice performance and/or process in that area, within the same industry or across different industries.

If you accept the dictionary.com definition of benchmark used by IAEM (http://www.iaem.com/certification/cem_corner/Benchmarks.htm), then a standard would be a benchmark. But that is not what people are speaking of when talking of benchmarks. When I benchmark my computer, I am comparing the measurements of my system’s performance on a set of tasks against other computers with both the same and different components doing the same tasks. I am not simply comparing whether my system does or does not have certain components.

Unless a standard has quantitative/measurable performance levels, the standard is not really a benchmark. EMAP and NFPA1600 are really structural/qualitative standards for emergency management programs: they establish the required minimal elements of a program. Although EMAP describes theirs as a performance standard, it is measuring performance qualitatively, and in a broad way. How an element is implemented (correct/incorrect; accurate/inaccurate; supported/unsupported by evidence; current/out-dated method) and whether the program’s performance of the element is successful/unsuccessful, cost-effective/cost-ineffective, exceptional, above average, average, adequate, piss-poor, catastrophe-waiting-to-happen-and-some-people-are-gonna-lose-their-jobs, etc., is not really addressed by the standards. For example, NFPA requires:
 
“4.6 Performance Objectives.
4.6.1* The entity shall establish performance objectives for program requirements in accordance with Chapter 4 and program elements in accordance with Chapters 4 through 8.
4.6.2 The performance objectives shall depend on the results of the hazard identification, risk assessment, and business impact analysis.
4.6.3* Performance objectives shall be developed by the entity to address both short term and long term needs.
4.6.4* The entity shall define the terms short-term and long-term.” (2010, p. 1600-6)
 
“5.4.2.2 The vulnerability of people, property, the environment, and the entity shall be identified, evaluated, and monitored.” (2010, p. 1600-7)

but it does not establish levels of performance for that assessment.
 
Regarding hazard/risk/vulnerability analysis, EMAP requires (interestingly, the word vulnerability and/or social vulnerability does not appear in this section, in word or concept):
 
4.3: Hazard Identification, Risk Assessment and Consequence Analysis
Overview
An accredited Emergency Management Program should have a Hazard Identification, Risk Assessment (HIRA) and Consequence Analysis. The chapter includes responsibilities and activities associated with the identification of hazards and assessment of risks to persons, public and private property and structures.
4.3.1 The Emergency Management Program shall identify the natural and human-caused hazards that potentially impact the jurisdiction using a broad range of sources. The Emergency Management Program shall assess the risk and vulnerability of people, property, the environment, and its own operations from these hazards.
4.3.2 The Emergency Management Program shall conduct a consequence analysis for the hazards identified in 4.3.1 to consider the impact on the public; responders; continuity of operations including continued delivery of services; property, facilities, and infrastructure; the environment; the economic condition of the jurisdiction and public confidence in the jurisdiction’s governance. (2010, September, pp. 5-6)
 
Closer to the idea of benchmarking would be works like Godschalk’s Natural Hazard Mitigation (1999), which compares state mitigation plans, or comparative studies of pandemic planning (http://www.degruyter.com/view/j/jhsem.2009.6.1/jhsem.2009.6.1.1599/jhsem.2009.6.1.1599.xml; http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0002269).
 
Outside of EM, examples of benchmarking abound in other industries. Here is a benchmarking report on national energy efficiency programs: http://www.eebestpractices.com/pdf/methodology.pdf
 
One of the larger issues, of course, is in what ways we can acceptably quantify Emergency Management processes and performance.
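The distinction I am drawing can be sketched in a few lines of code. This is my own toy illustration, not drawn from either standard: the element names and scores are hypothetical. A standard-style check asks only whether required elements are present; a benchmark-style check compares a measured score against measurements from peers.

```python
# Toy illustration (hypothetical elements and scores, not from NFPA1600 or EMAP):
# a structural standard is a checklist; a benchmark is a quantitative comparison.

REQUIRED_ELEMENTS = {"hazard_identification", "risk_assessment", "consequence_analysis"}

def meets_standard(program_elements):
    """Standard-style check: are all required elements present? Yes or no."""
    return REQUIRED_ELEMENTS <= set(program_elements)

def benchmark(my_score, peer_scores):
    """Benchmark-style check: fraction of peer programs whose measured
    score my program meets or exceeds on the same task."""
    return sum(s <= my_score for s in peer_scores) / len(peer_scores)

program = ["hazard_identification", "risk_assessment", "consequence_analysis"]
print(meets_standard(program))          # True: every required element exists...
print(benchmark(72, [60, 70, 80, 90]))  # 0.5: ...yet measured performance is middling
```

The point of the sketch: a program can pass the first check completely while the second check shows nothing better than average performance, which is why element-presence standards and performance benchmarks answer different questions.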
 

Now I am not trying to rain on anyone’s parade.

I believe the NFPA and EMAP standards are necessary and good so long as they are understood for what they are: a set of not-too-bad standards that establish the minimum elements of an emergency management program.  Adherence to the standard, however, does not guarantee that a program is effective, efficacious, keeping up with the latest developments in the field of disaster studies and sciences, making use of evidence-based practices, and so forth.  It does not guarantee this any more than having ground beef and a grill assures you of a good hamburger.  Because EM is rather closely aligned with government and politics, I become suspicious (cynical?) that EM has inherited the genetic predisposition of government to congratulate itself on some new development, legislation, or policy, and tell everyone how great it is working… all the way to the point it fails miserably. And sometimes it tells everyone how great it is working past the point where it has failed miserably.

EM standards, in my view, are very much inter-related with my study and research interests in the relationship between the academic and professional bodies of knowledge that make up Disaster Studies and Sciences, including the specialty of EM.  If there are indeed such bodies (I believe there are, though much more for the academic side than for the occupational side), standards should reflect the current state of knowledge specifically, not generically.  And all of this ties in with evidence-based practice, as well as the difficult but necessary question of appropriate process and outcome measurements for the occupation/profession.

There is still a long way to go and much more to be done.  Let us not become complacent.
