
NFPA 1600, EMAP, and the Concept of “Benchmarks”

Yesterday an interesting thread came to my inbox through the IAEM email discussion. Someone asked if there were any benchmarks available in EM. The unanimous opinion was that NFPA 1600 and EMAP serve as benchmarks for our “profession”. Having taken a look at the NFPA 1600 and EMAP standards on a couple of occasions, and being the pain-in-the-ass contrarian that I am, I said, “Not so fast, my friends….”

Here is my reply:

I would argue that the EMAP and NFPA 1600 standards are not really benchmarks, if one is speaking of a benchmark as a way of comparing measurements of performance and/or process in some area against measurements of best-practice performance and/or process in that area, within the same industry or across different industries.

If you accept the dictionary.com definition of benchmark used by IAEM (http://www.iaem.com/certification/cem_corner/Benchmarks.htm), then a standard would be a benchmark. But that is not what people are speaking of when talking of benchmarks. When I benchmark my computer, I am comparing the measurements of my system’s performance on a set of tasks against other computers, with both the same and different components, doing the same tasks. I am not simply checking whether my system does or does not have certain components.

Unless a standard has quantitative, measurable performance levels, the standard is not really a benchmark. EMAP and NFPA 1600 are really structural/qualitative standards for emergency management programs: they establish the required minimal elements of a program. Although EMAP describes its standard as a performance standard, it measures performance qualitatively, and in a broad way. How an element is implemented (correct/incorrect; accurate/inaccurate; supported/unsupported by evidence; current/outdated method) and whether the program’s performance of the element is successful or unsuccessful (cost-effective/cost-ineffective; exceptional, above average, average, adequate, piss-poor, catastrophe-waiting-to-happen-and-some-people-are-gonna-lose-their-jobs, etc.) is not really addressed by the standards. (A toy sketch of this standard-versus-benchmark contrast follows the NFPA excerpts below.) For example, NFPA requires:
 
“4.6 Performance Objectives.
4.6.1* The entity shall establish performance objectives for program requirements in accordance with Chapter 4 and program elements in accordance with Chapters 4 through 8.
4.6.2 The performance objectives shall depend on the results of the hazard identification, risk assessment, and business impact analysis.
4.6.3* Performance objectives shall be developed by the entity to address both short term and long term needs.
4.6.4* The entity shall define the terms short-term and long-term.” (2010, p. 1600-6)
 
and:

“5.4.2.2 The vulnerability of people, property, the environment, and the entity shall be identified, evaluated, and monitored.” (2010, p. 1600-7)

But it does not establish levels of performance for that assessment.
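To make that contrast concrete, here is a minimal sketch in Python (mine, not anything from the standards; the element names and scores are invented purely for illustration). A standard asks whether required elements are present; a benchmark asks where a measured score sits among peers.

# A toy sketch of the standard-versus-benchmark distinction discussed above.
# All element names and scores are invented for illustration; nothing here is
# drawn from the actual NFPA 1600 or EMAP documents.

REQUIRED_ELEMENTS = {"hazard_identification", "risk_assessment",
                     "performance_objectives"}

def meets_standard(program_elements):
    """Standard-style question: are the required elements PRESENT? (yes/no)"""
    return REQUIRED_ELEMENTS <= set(program_elements)

def benchmark_percentile(my_score, peer_scores):
    """Benchmark-style question: where does a MEASURED score sit among peers?
    Returns the fraction of peer scores that my_score meets or beats."""
    return sum(s <= my_score for s in peer_scores) / len(peer_scores)

# A program can fully conform to the standard...
print(meets_standard(["hazard_identification", "risk_assessment",
                      "performance_objectives", "training"]))    # True

# ...while a benchmark tells a different, quantitative story.
print(benchmark_percentile(61.0, [55.0, 70.0, 88.0, 92.0]))     # 0.25

The first check can pass while the second tells a very different story; that gap is exactly what I am pointing at.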
 
Regarding hazard/risk/vulnerability analysis, EMAP requires the following (interestingly, “social vulnerability” does not appear in this section, in word or concept):
 
“4.3: Hazard Identification, Risk Assessment and Consequence Analysis

Overview
An accredited Emergency Management Program should have a Hazard Identification, Risk Assessment (HIRA) and Consequence Analysis. The chapter includes responsibilities and activities associated with the identification of hazards and assessment of risks to persons, public and private property and structures.

4.3.1 The Emergency Management Program shall identify the natural and human-caused hazards that potentially impact the jurisdiction using a broad range of sources. The Emergency Management Program shall assess the risk and vulnerability of people, property, the environment, and its own operations from these hazards.

4.3.2 The Emergency Management Program shall conduct a consequence analysis for the hazards identified in 4.3.1 to consider the impact on the public; responders; continuity of operations, including continued delivery of services; property, facilities, and infrastructure; the environment; the economic condition of the jurisdiction; and public confidence in the jurisdiction’s governance.” (2010, September, pp. 5-6)
 
Closer to the idea of benchmarking are works like Godschalk’s Natural Hazard Mitigation (1999), which compares state mitigation plans, or comparative studies of pandemic planning (http://www.degruyter.com/view/j/jhsem.2009.6.1/jhsem.2009.6.1.1599/jhsem.2009.6.1.1599.xml; http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0002269).
 
Outside of EM, examples of benchmarking abound in other industries. Here is a benchmarking report on national energy efficiency programs: http://www.eebestpractices.com/pdf/methodology.pdf
 
One of the larger issues, of course, is this: in what ways can we acceptably quantify emergency management processes and performance?
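Purely as an illustration of what such quantification might look like, and assuming we could agree on indicators at all, a benchmark comparison might reduce to something like the sketch below. Every metric name and figure is invented; none comes from NFPA 1600, EMAP, or any real jurisdiction.

# Purely illustrative: IF the field settled on measurable indicators, a
# benchmark report might reduce to comparisons like these. Every metric name
# and figure below is invented; none comes from any real standard, program,
# or jurisdiction.

from statistics import mean

# (our value, peer values, whether lower is better)
metrics = {
    "eoc_activation_minutes":      (45, [30, 38, 52, 60], True),
    "plan_revision_age_years":     (3,  [1, 2, 4, 6],     True),
    "exercise_objectives_met_pct": (72, [65, 80, 85, 90], False),
}

for name, (ours, peers, lower_is_better) in metrics.items():
    beats = sum((ours <= p) if lower_is_better else (ours >= p) for p in peers)
    print(f"{name}: ours={ours}, peer mean={mean(peers):.1f}, "
          f"meets or beats {beats} of {len(peers)} peers")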
 

Now I am not trying to rain on anyone’s parade.

I believe the NFPA and EMAP standards are necessary and good, so long as they are understood for what they are: a set of not-too-bad standards that establish the minimum elements of an emergency management program. Adherence to a standard, however, does not guarantee that a program is effective, efficacious, keeping up with the latest developments in the field of disaster studies and sciences, making use of evidence-based practices, and so forth. It guarantees this no more than having ground beef and a grill assures you of a good hamburger. Because EM is rather closely aligned with government and politics, I become suspicious (cynical?) that EM has inherited government’s genetic predisposition to congratulate itself on some new development, legislation, or policy, and to tell everyone how great it is working…all the way to the point it fails miserably. And sometimes it keeps telling everyone how great it is working past the point where it has failed miserably.

EM standards, in my view, are very much interrelated with my study and research interests in the relationship between the academic and professional bodies of knowledge that make up Disaster Studies and Sciences, including the specialty of EM. If there are indeed such bodies (I believe there are, though much more so for the academic side than for the occupational side), standards should reflect the current state of knowledge specifically, not generically. And all of this ties in with evidence-based practice, as well as the difficult but necessary question of appropriate process and outcome measurements for the occupation/profession.

There is still a long way to go and much more to be done.  Let us not become complacent.
