Presentation

Optimizing the Response Evaluation Process
Description: Incident responses of varying sizes and complexities are carried out by governments, businesses, and NGOs across the world, and evaluations or assessments of these responses are conducted without always producing measurable and actionable results. We will address common challenges facing the evaluation/assessment process and what can be done to optimize it.

In reviewing ten years of evaluation feedback from Tier 3 global exercises, common issues that impact the quality of evaluator feedback have emerged. We have developed methods and guidelines to overcome these issues and to collect consistent, high-quality evaluation data.

The development of the evaluation materials and the selection of trained, qualified evaluators are key, as evaluators are expected to answer the provided questions. If the answers are not producing quality data, the problem may lie in the question set, not necessarily with the evaluator.

However, you can have the best-worded questions and still receive poor-quality feedback if the evaluators are not subject matter experts in the process you are asking them to evaluate, or are unfamiliar with the plan or procedures being evaluated. Have they been properly trained in the evaluation process? Have you defined what good looks like, or is it up to the individual to decide? Are you requiring them to support their position with a narrative?

Are you utilizing digital tools? A truly digital tool includes some level of automated analysis or data consolidation, which is challenging where the pathways to a successful response are not binary (yes or no) given human factors and team dynamics. If you ask the wrong question or the evaluators are not competent and prepared, the results will suffer regardless of which evaluation approach you employ.

Beyond the questions, the evaluators, and the training they receive in the process, additional factors influence the effectiveness of evaluation feedback, among them cultural, personal, and geopolitical influences. In some cultures, it is considered rude to mention any performance gaps, and in some political environments, constructive feedback may be considered criticism of the existing political administration and simply cannot be offered. As a result, evaluation feedback may not be accurate and could pose a significant risk in the future.

We will discuss the influencing factors identified and the methods we are implementing to eliminate or minimize their impact. By utilizing a standardized approach that provides benchmarks, we can compare apples to apples, allowing us to develop actionable strategies that support continuous improvement across the globe.
Authors
Director, Exercises & Preparedness
Principal, Emergency Management Systems
Principal, All Hazard Emergency Management Specialist
Event Type
Paper
Time: Thursday, May 16th, 10:00am - 10:20am CDT
Location: 275-277
Tags
Preparedness