The loss prevention/asset protection industry is full of hard-chargers—individuals striving to get better, to excel at what they do, to make a difference. But increasing loss, crime, and intense retail competition mean we need to continue to get even better.
And the LP industry has really responded in recent years to support our career field: a top-flight LP magazine; the LP Foundation and its LPQ and LPC certifications; CFI training and credentials; ASIS CPP and other certifications; daily and weekly e-newsletters; great industry associations and their excellent conferences; the Loss Prevention Research Council (LPRC) with over 300 completed LP/AP research projects and counting; and even more good, specific LP/AP training and credentials up and running or on the way, to name just a few.
Education can be a disruptive innovation, especially online learning. Job performance training injects critical and evolving knowledge and skills into practitioners. We all need to know what to do and how to do it to accomplish our missions. Observation or ride-along training helps, but much of what happens in our business doesn't occur in our presence, so we can't always learn firsthand how to deal with it. In-person training is good too, but outcomes depend on the trainer's skills. Online and computer-based training programs often provide more content and delivery consistency.
New Online Course and Certificate
To this end, the University of Florida has now fielded the first of what may become three online LP/AP problem-solving courses. The idea is to support the retail associations, the LP Foundation and its certifications, as well as individual practitioners and retail chains, with a certificate program.
The course title is “Introduction to Evidence-Based Loss Prevention” (EBLP). As the first of three courses, this online course is designed to help participants better understand how to use a theory-driven, evidence-based, systematic, crime-prevention process to make people, places, and assets safer and more secure. The course contains six modules:
- Crime/Loss and Loss Prevention: Defining Impact and Process
- Basics of Evidence-Based Practice
- Environment and Behavior: Using Theory to Understand and Solve Problems
- The Problem-Solving Process: SARA and Beyond
- EBLP Case Study Examples
- Use the EBLP Process and Worksheets to Solve a Problem
I would encourage individuals and organizations looking to develop or enhance evidence-based, problem-solving skills for new or even experienced LP/AP practitioners to check this program out at ufl.edu. Enter the course name in the search bar.
The University of Florida’s Eric Ryan would also be happy to discuss providing course demos, group rates, and program objectives and process at any time with you. Contact him at eric@dce.ufl.edu.
In this column, we’ve often discussed precision problem-solving. Precision means better outcomes with fewer negative side effects. Greater precision comes from better problem diagnosis: a more complete description of the specific problem, its likely causes, and where and when it clusters. Recognizing that one size does not fit all, retailers increasingly assign risk and vulnerability scores to their locations.
Risk estimates how much relative exposure a given store has to nearby clusters of likely offenders (the more likely offenders nearby, the higher the risk) and how accessible the location is to those offenders. Store risk obviously varies widely, even within markets. Retailers subscribe to services that estimate area risk using, for example, reported crime and estimated social disorganization.
Relative vulnerability is how well a store, distribution center, or office can handle crime attempts. Every location's ability to prevent and address problems varies, as do a place manager's loss control knowledge, commitment, and AP toolkit. Historic loss, shrinkage, manager performance, reported incidents, and other metrics help retailers prioritize support.
The research described below was an earlier attempt to gauge whether and how retail chains segment stores into risk and vulnerability bands for more precise protective support. The LPRC team is currently preparing to collect even more store risk and vulnerability process data in a new project.
Study Method: Loss prevention executives from twenty-one companies in six categories (mass merchants, department stores, drug stores, apparel stores, specialty stores, and grocery and dollar stores) completed surveys.
Results: Almost all the retailers we talked to use some process and data to evaluate each store’s relative risk and vulnerability. Following are some study highlights.
By far, the type of risk data most likely to be collected by the participating companies is actual loss/shrink, collected by almost all (95.2%) of the participants. More than two-thirds (71.4%) of the loss prevention executives indicate they collect data on the number of incidents by crime, and a similar percentage (66.7%) collects data on the number of accidents at the store level.
Approximately four-fifths (81.0%) of the LP executives surveyed indicate they use the risk data they collect at the store level to build a risk profile for each store or to classify stores into categories. Nearly three-fifths (58.8%) of the companies that report using store-level risk data to classify their stores have three classification levels based on this data. Nearly one-quarter (23.5%) of these respondents have five classification levels based on store-level risk data. The most typical classification schemes are ordered number or letter categories such as “1, 2, 3, 4, 5” or “A, B, C” or categorical ranking schemes such as “low, medium, high.”
More than two-fifths (41.2%) of the participants that classify their stores based on store-level risk data determine which stores belong in each category based on a combination score of actual loss/shrink, number of incidents, and other data collected. About 30 percent of the companies that classify their stores based on risk data do so based on a combination of actual loss/shrink and LP measures present in a store, such as EAS, CCTV, and so forth.
Nearly one-half (47.1%) of the participants who assign stores to categories based on risk data have classified between 2 percent and 5 percent of their stores in their highest risk category. About one-quarter (23.6%) of these respondents have classified between 7 percent and 10 percent of their stores in their highest risk category. About 30 percent of these executives report 15 percent or more of their stores have been classified in their highest risk category.
About 30 percent of the participants who assign stores to categories based on risk data report that the average loss/shrink rate (as a percentage of sales) for stores in their highest risk category is between 1.0 percent and 2.1 percent. Almost one-quarter (23.5%) of these respondents indicate that the average loss/shrink rate for stores in their highest risk category is 3.0 percent. Nearly one-half (47.1%) of the participants who assign stores to categories based on risk data indicate that they conduct risk assessments and reclassify stores once a year.
As mentioned above, the LPRC is working to generate more risk and vulnerability rating data and process ideas. Please let us know if you are willing to participate as we work to identify even better ways to measure and predict store and department loss and crime levels.
Evaluating Crime Prevention Strategies, edited by Johannes Knutsson and Nick Tilley and distributed by Lynne Rienner Publishers, Inc., Boulder, Colorado (2010). This resource describes the differing reasons for assessing current and proposed crime and loss control programs and methods, as well as different evaluation methods. It’s never enough to employ technologies or tactics without evaluating how well an effort was actually executed, its impact on the issue, estimated ROI, and any good, neutral, or negative side effects.
As a committed experimental criminologist, I recognize that randomized controlled trials usually provide the strongest evidence and the most usable ROI metrics, but they are often not feasible due to budgets, expertise, small sample sizes, and sparse event or outcome data. This book lays out alternative evaluation options.