For the third year running, a poll of the American public on potential crisis events shows it is most worried about natural disasters—by a wide margin. Americans are twice as worried about natural events, such as a hurricane, tornado, flood, or wildfire, as they are about terrorist attacks, cyber attacks, environmental disasters, or disease outbreaks.
The findings underscore the need to prioritize preparedness in the face of threats from natural disasters, “because we know that it’s not if, but when, a disaster will strike,” according to Nicolette Louissaint, PhD, executive director of Healthcare Ready, which sponsored the independent survey conducted by polling firm YouGov. The survey also found that while 42 percent of people are concerned about an emergency happening, more than half of Americans (53 percent) do not have any emergency preparation plans in place.
The potential impact from natural disasters is certainly a business risk that concerns retailers. The unprecedented destruction from recent hurricanes and wildfires, which are expected to grow more commonplace, has brought into sharp relief the danger of a disaster plan built on outdated assumptions and weather models.
Importantly, experts say this underscores the need for retail organizations to employ metrics and self-assessments to uncover flaws in their current plans and to align disaster preparedness with growing risks from natural disasters. In short, if a retailer believes its disaster plan is perfect, it has probably not dug deep enough to uncover its shortcomings.
Every organization, including those in retail, needs a process that gives it a broad perspective on its emergency preparedness and response capabilities, said Cheyene Marling, president of BC Management and executive vice president for people solutions and program analytics at crisis management firm Firestorm Solutions, in a presentation at a national disaster conference.
Company leaders have demonstrated a willingness to invest in business continuity, but Marling said that they also “want some proof that it’s working; they demand some evidence that progress is being made.” She added that internal self-assessments give disaster planners the ammunition they need to demonstrate ongoing progress in addressing preparedness—so long as they are comprehensive.
She advised companies to rate themselves on their progress on a full range of preparedness activities, suggesting eleven categories to gain an honest overall perspective of disaster readiness, among them:
- program policy
- risk analysis
- business impact analysis
- testing and exercises
- training and awareness
- crisis communications
- incident response
- communicable illness
In each area, leaders in disaster preparedness should examine how performance comports with program goals. By weighting the categories and assessing against particular risks, retailers can develop both an overall preparedness rating that they can share with management and a measure of how ready the company is to handle a broad range of natural disasters.
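The category-weighted rating described above can be sketched as a simple weighted average. This is an illustrative example only: the article does not specify category scores or weights, so every number below, and the decision to weight some categories more heavily than others, is an assumption.

```python
# Hypothetical self-assessment data. Ratings are 0-100 self-scores per
# category; weights reflect each category's importance to this
# retailer's particular risk profile. All values are illustrative.
CATEGORY_RATINGS = {
    "program policy": 80,
    "risk analysis": 65,
    "business impact analysis": 70,
    "testing and exercises": 55,
    "training and awareness": 60,
    "crisis communications": 75,
    "incident response": 70,
    "communicable illness": 50,
}

CATEGORY_WEIGHTS = {
    "program policy": 1.0,
    "risk analysis": 1.5,
    "business impact analysis": 1.5,
    "testing and exercises": 2.0,
    "training and awareness": 1.5,
    "crisis communications": 1.0,
    "incident response": 2.0,
    "communicable illness": 0.5,
}

def overall_readiness(ratings, weights):
    """Weighted average of category ratings -> single 0-100 score."""
    total_weight = sum(weights[c] for c in ratings)
    weighted_sum = sum(ratings[c] * weights[c] for c in ratings)
    return weighted_sum / total_weight

score = overall_readiness(CATEGORY_RATINGS, CATEGORY_WEIGHTS)
print(f"Overall preparedness: {score:.1f}/100")
```

The single weighted score is what gets shared with management; the per-category ratings underneath it point to where the gaps are.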
Marling cited a few critical keys to success in this endeavor. First, include all dimensions of preparedness in self-assessments; often, companies focus on just a few aspects of emergency preparedness, she warned. Second, use the ratings an assessment generates to highlight room for improvement—not to validate a disaster response program. For example, a retailer that rates itself 90 percent or higher for overall preparedness is probably both exaggerating its readiness and discouraging senior management from sustaining its commitment to disaster response and business continuity, suggested Marling.
Within each of the 11 self-assessment categories, performance measures provide valuable information on progress and readiness and can help generate ratings in each area. Many IT business continuity measures are straightforward—the amount of time required to reboot file servers after an outage, for example—but how can you judge disaster response more broadly? How can you score readiness without having a disaster and a post-incident review?
Particularly valuable are metrics collected during the performance of disaster exercises and those assessing the readiness of personnel to handle their crisis responsibilities, suggested Marling. Unlike ‘plan’ metrics that examine if certain tasks have been conducted—whether risk analysis has been updated, plans completed, or policies approved, for example—exercises and training offer insight into actual performance levels.
For example, training and awareness performance measures might include the following general measures: whether an awareness and training program is in place; whether the schedule for disseminating program information is being followed; and whether training is conducted as scheduled.
However, performance measures can also include the scores of emergency team members on tests after crisis training. Anonymous awareness surveys of general staff can also offer insight into specific program weaknesses that general plan measures often can’t.
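The test-score and survey measures just described can feed a rating directly. The sketch below is hypothetical—the article gives no actual scores, survey topics, or thresholds, so all of the data and the 70 percent awareness cutoff are assumptions made for illustration.

```python
# Illustrative data: post-training test scores for emergency team
# members, and the share of general staff answering awareness-survey
# questions correctly, by topic. All values are made up.
test_scores = [88, 72, 95, 61, 79]  # 0-100 per team member

survey_awareness = {
    "evacuation routes": 0.92,
    "tornado shelter locations": 0.54,
    "emergency contact procedure": 0.71,
}

AWARENESS_THRESHOLD = 0.70  # assumed minimum acceptable share

avg_test = sum(test_scores) / len(test_scores)
weak_topics = [topic for topic, share in survey_awareness.items()
               if share < AWARENESS_THRESHOLD]

print(f"Average team test score: {avg_test:.0f}")
print("Survey topics needing attention:", weak_topics)
```

Unlike a yes/no check that training occurred, the average score and the list of weak topics show where the program is actually falling short.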
Similarly, emergency plan exercises should be judged both broadly and at a component level.
- Broadly (samples): Is a schedule for conducting exercises in place? Is the schedule followed? Were post-exercise assessments conducted? Were recommendations from the exercises put in place?
- Detailed (samples): How long did it take to seal off the building’s ventilation system? What percentage of personal protective equipment devices failed during the exercise? How many issues of confusion were identified during after-exercise participant surveys?
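The two levels of exercise measurement above can be combined into a pair of scores—one for the broad yes/no checks, one for the detailed measurements against targets. This is a sketch under assumed data: the checks mirror the sample questions in the lists, but every measured value and target is invented for illustration.

```python
# Broad checks: did the program step happen at all? (yes/no)
broad_checks = {
    "exercise schedule in place": True,
    "schedule followed": True,
    "post-exercise assessment conducted": True,
    "recommendations implemented": False,
}

# Detailed metrics: (measured value, target maximum), lower is better.
# Values and targets are hypothetical.
detailed_metrics = {
    "minutes to seal ventilation system": (12, 10),
    "percent of PPE devices that failed": (4.0, 5.0),
    "confusion issues in participant surveys": (7, 5),
}

broad_score = 100 * sum(broad_checks.values()) / len(broad_checks)

targets_met = sum(1 for measured, target in detailed_metrics.values()
                  if measured <= target)
detail_score = 100 * targets_met / len(detailed_metrics)

print(f"Broad (yes/no) score: {broad_score:.0f}%")
print(f"Detailed targets met: {detail_score:.0f}%")
```

Here the broad score looks healthy while the detailed score does not—exactly the gap between "a plan exists" and "the plan performs" that the detailed metrics are meant to expose.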
By using detailed metrics in addition to “yes or no” measures about the existence of plans and protocols, retail leaders are forced to confront areas in which there is always room for improvement.
If the only performance measures a retailer uses are general—do we have a plan if a tornado alarm sounds? Have we disseminated it to store associates?—then it can be easy for senior leaders to think of disaster readiness as a project that has been completed and to lose sight of the importance of ongoing activities. But when more data about the program is collected, including compliance rates and performance within the myriad readiness activities that should be maintained, it is easier for senior leaders to see value in continuous improvement initiatives.