As a result of the pandemic and store looting that followed the killing of George Floyd in Minneapolis, retailers are undoubtedly beefing up operations to be better prepared next time. But if retailers allow emotion and bias to creep into assessments of risk, they could both misspend precious resources and be no more prepared for the next disaster.
For example, it is easier to pay attention to the risk from an event that happened recently, while it’s comparatively difficult to prepare for events that are not fresh in an organization’s mind. US businesses will certainly be better prepared should a second COVID-19 wave hit, but what if the next global virus attacks network systems rather than people? It is easy to imagine businesses and nations being caught just as flat-footed as they were when COVID-19 emerged.
People and organizations give more weight—sometimes too much weight—to events that have recently occurred. A recent event is easier to recall, and research shows that the easier events are to recall, the more numerous they seem. Loss prevention and security leaders need to make sure their departments and emergency teams don’t place undue attention on a recent crime or event to the detriment of preparing for more probable events that are less fresh in the company’s consciousness.
In an interview with LP Magazine, Timothy Williams, vice chairman at Pinkerton and a security and risk mitigation expert, warned retailers against overreacting to recent looting. The extraordinary events surrounding the protests this summer could lead some retailers to consider investing in more substantial barriers to deter rioters, but he suggested that retailers be judicious in applying preventive measures, use state-of-the-art crime data and projections to “determine the proper calculus for each location based on risk,” and balance security costs against buying more insurance.
In addition to “recency bias,” Georgetown and George Mason University researchers say near misses can also skew an organization’s future perception of similar risk events. For example, a retail store that enacts its emergency plan because of a looming disaster, such as a hurricane, can easily draw the wrong lesson if it turns out to be a near miss instead of a real event. “People appear to mistake such good fortune as an indicator of resiliency,” they concluded in “Why Near-Miss Events Can Decrease an Individual’s Protective Response to Hurricanes,” published in Risk Analysis.
It’s not that people use near-miss information to deliberately change their view of becoming a victim, but “it might be said that near-miss information changes people’s frames of reference,” the researchers said. A probability level that may have felt risky before may suddenly feel less risky. “Our research shows how people who have experienced a similar situation but escape damage because of chance will make decisions consistent with a perception that the situation is less risky than those without the past experience.” Although the research focused on individual decision-making, companies are composed of individuals and can likewise fall victim to a false sense of resiliency when they escape unharmed from a potential disaster.
If LP executives and other crisis management leaders believe the perception of staff or senior management has been skewed by being lucky in previous near-miss incidents, they may need to do more to make their case for mitigation. “Just outlining facts, such as estimated costs and probabilities,” may not be enough to account for over-confidence, said researchers. When near-miss bias is at work, people ignore objective facts in favor of their own prior experience. Thus, the narrative that accompanies facts grows more important, say researchers. People must be made explicitly aware of their potential bias and should be warned that “gut feelings” must not be allowed to shape decisions related to disaster preparedness and mitigation.
A report by the Institute of Risk Management (IRM) advocates the idea that a company should come to terms with its own relationship to risk. It should establish and measure its “risk appetite”—the degree to which it is willing to take on risk to achieve business goals. In this approach, a retail organization would develop risk appetite statements and identify measures of risk appetite—the culmination of which would be greater transparency of its risk taking.
The IRM takes a top-level view of risk taking, but it includes a warning for operational leaders, such as LP executives, who advocate for and implement the controls that act as a counterweight to management’s risk appetite. Specifically, the group suggests that operational leaders make a critical mistake if they perceive management to hold a singular view of the need for security vis-à-vis business goals.
For example, senior leaders may choose to proceed with a business project before accompanying asset protection measures get a fair review. But LP leaders should not take that management rebuke as grounds for a general conclusion about how important physical security is to them, suggests the IRM report. And the converse is also true: just because management agrees with LP’s recommendations for one project is no guarantee that it will for another. “There appears to be a broad consensus that there is no single risk appetite, but rather a range of appetites for different types of risk,” according to the IRM report, Risk Appetite and Risk Tolerance. “There might be one risk appetite for selling a particular product, and a different appetite for taking risk while selling another product. There might be one appetite for regulatory risk in one country and another appetite in a different regulatory regime.”
Lesson: Be a consistently strong advocate for security and asset protection in every instance and project. Don’t allow an internal bias, whether a defeatist attitude or overconfidence, to sway you from making well-documented recommendations in every case.
Another important aspect of dealing with senior management is to understand that—just as their appetite for risk will vary depending on the project—their risk appetite also has a temporal dimension. “In other words, the appetite and tolerance will change over time as circumstances change,” according to the IRM analysts. LP executives need to constantly take senior management’s temperature on the subject of security risk and should not assume that it is a fixed attitude.
Risk Assessment Bias
As noted at the outset, the mind can twist and turn facts to comply with existing mental biases. That is why—after a security breach or a disaster for which a company was unprepared—what seemed like rational decisions at the time suddenly seem careless. In addition to recency bias and near-miss bias, here are additional fact-distorting mindsets that LP leaders and crisis managers need to account for in security risk and threat assessments.
Event Certainty. In making risk decisions, people tend to underweight probable outcomes compared to outcomes that carry an element of certainty. This leads to management reluctance to fund mitigation against unlikely events, such as earthquakes. Crisis management planners must prevent legitimate but uncertain risks from being pushed aside in favor of more certain risks whose consequences are far less serious.
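One counterweight to certainty bias is to compare risks by expected loss rather than by how certain they feel. A minimal sketch of that arithmetic, using purely hypothetical figures (a real assessment needs location-specific probabilities and loss estimates):

```python
# Hypothetical figures for illustration only.
def expected_annual_loss(probability: float, impact: float) -> float:
    """Probability of the event occurring in a given year times the
    estimated cost if it does occur."""
    return probability * impact

# A near-certain nuisance risk: routine shrinkage at a single store.
certain_risk = expected_annual_loss(1.0, 50_000)

# An uncertain catastrophic risk: a major earthquake, ~1% chance per year.
uncertain_risk = expected_annual_loss(0.01, 10_000_000)

# The unlikely event carries twice the expected annual loss.
print(certain_risk, uncertain_risk)  # 50000.0 100000.0
```

Framed this way, the "unlikely" event can dominate the comparison even though intuition pushes it aside in favor of the certain one.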
Emotionally Charged Events. When strong emotions are involved, people tend to focus on the potentially harmful outcome, rather than on the probability of occurrence. Fear of terrorism is a good example. LP executives may find it necessary to counterbalance emotional security fears by highlighting statistically larger risks.
Presentation of Choices. In a risk assessment matrix, an increase in the number of choices for rating the severity of an event—from a scale of “low-medium-high” to a scale with five choices, for example—will tend to increase its severity rating. A “medium” severity rating will often become a “4” on a 5-point scale, for example. So corporate security departments should use a consistent risk assessment methodology across the organization and understand the potential for influencing management’s risk perception depending on how security presents choices.
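A rough way to see the presentation effect is to express each rating as a fraction of its scale's maximum. This sketch is illustrative only; the "medium becomes a 4" mapping is the pattern described above, and the function name is ours:

```python
# Illustrative sketch of how scale granularity shifts relative severity.
def severity_fraction(rating: int, scale_max: int) -> float:
    """Express a severity rating as a fraction of its scale's maximum."""
    return rating / scale_max

medium_on_3pt = severity_fraction(2, 3)  # "medium" on low/medium/high
four_on_5pt = severity_fraction(4, 5)    # the "4" it often becomes

# The same underlying judgment now reads as a higher relative severity.
print(round(medium_on_3pt, 2), round(four_on_5pt, 2))  # 0.67 0.8
```

Keeping one scale across the organization avoids comparing a "medium" from one matrix against a "4" from another as if they measured the same thing.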
Relation to the Status Quo. People will generally take more chances to avoid a certain loss than to secure a similar gain. This is related to the tendency of individuals to feel the pain of losing a $100 bill more acutely than the pleasure of finding one. The result is that companies are less likely to recognize the substantial gains from moving away from the status quo. In practical terms, this means that highlighting the value of security and risk mitigation is important to avoiding irrational risk decisions.
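This asymmetry is captured by the value function from Kahneman and Tversky's prospect theory, in which losses loom larger than equivalent gains. A minimal sketch, using the parameter estimates from their 1992 cumulative prospect theory paper (the function name is ours):

```python
# Sketch of the Kahneman-Tversky value function. alpha and lam are the
# 1992 parameter estimates; lam is the loss-aversion coefficient.
def prospect_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

gain = prospect_value(100)    # felt value of finding a $100 bill
loss = prospect_value(-100)   # felt value of losing one

# The loss's felt magnitude is about 2.25x the gain's.
print(round(abs(loss) / gain, 2))  # 2.25
```

The practical takeaway matches the paragraph above: a proposal framed as avoiding a loss will weigh more heavily with decision-makers than the same proposal framed as capturing a gain.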
Finally, there are additional ways in which probability biases can lead to poor management decisions, according to research.
- Availability cascade—a self-reinforcing process where collective beliefs gain credibility through repetition.
- Confirmation bias—interpreting and manipulating data in ways that confirm what you already believe.
- System justification—bias that leads you to defend and bolster the status quo.
- Disregard of “regression toward the mean”—a bias where one expects exceptional results to continue.
- Professional bias—analyzing events through the lens of your own profession, and not making a broader, more objective analysis.
- Zero risk bias—a focus on reducing all risk, including relatively insignificant risks, as opposed to reducing manageable risk.
- Groupthink—bias leading you to view a risk the way everyone else does.