Do you typically conduct a pilot study or otherwise test new technologies or security systems for retail stores before rolling them out? According to a survey by SDR/LPM, loss prevention executives are more likely than security counterparts in other industries to believe that a pilot study or field test is an effective way to show senior management that a project is a good idea.
In addition to being an effective tool of persuasion, a pilot study can help remove bugs from a project, indicate ways to improve it before an enterprise-wide rollout, and yield surprising results that stop you from investing in the wrong security equipment.
In one case study, LP experts examined three different theft-prevention technologies and predicted the payback period for each. But when the results came in, the team discovered that the actual payback periods differed sharply from what it had anticipated, and even upended its view of which anti-theft option was most cost-effective.
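The payback comparison described above can be sketched in a few lines. All figures, technology names, and costs below are invented for illustration; the point is only the arithmetic: payback is the upfront cost divided by net monthly savings.

```python
# Hypothetical payback-period comparison for three theft-prevention
# technologies. Every number here is made up for the example.

def payback_months(upfront_cost, monthly_savings, monthly_running_cost=0.0):
    """Months until cumulative net savings cover the upfront cost."""
    net = monthly_savings - monthly_running_cost
    if net <= 0:
        return float("inf")  # never pays back
    return upfront_cost / net

options = {
    "EAS tags":       payback_months(12_000, 1_500, 200),
    "Camera upgrade": payback_months(25_000, 2_000, 300),
    "Locking cases":  payback_months(4_000, 600, 50),
}

# Rank from fastest to slowest payback
for name, months in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: {months:.1f} months")
```

A field test replaces the assumed `monthly_savings` with measured shrink reduction, which is exactly where predicted and actual payback periods can diverge.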
It’s natural that LP executives, who deal with a high volume of security incidents, would see the greatest value in pilot programs. Measuring the efficacy of new devices or security systems is easier in retail than in other industries. A store whose shoplifting incidents fall from 180 to 45 per month after implementing a new LP tool probably has good evidence of its effectiveness. In industries such as business services, which typically face far fewer security incidents, measuring success in a pilot study is harder: it’s tough to draw meaningful lessons from incident data when incidents decrease from two to one per month, or increase by the same amount.
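The small-numbers problem above can be made concrete. If monthly incidents are treated as roughly Poisson-distributed (a common simplifying assumption for count data, not something the article states), a store that truly averages two incidents per month will record one or fewer about 40 percent of the time purely by chance, while a drop from 180 to 45 is essentially impossible to produce by luck alone:

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k incidents when the true monthly rate is lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Store truly averaging 2 incidents/month: chance of recording <= 1
# in a given month with no real change at all.
p_small = sum(poisson_pmf(k, 2.0) for k in range(2))

# Store truly averaging 180 incidents/month: chance of recording <= 45.
p_large = sum(poisson_pmf(k, 180.0) for k in range(46))

print(f"2 -> <=1 by chance alone: {p_small:.0%}")    # roughly 40%
print(f"180 -> <=45 by chance alone: {p_large:.1e}") # vanishingly small
```

This is why the 180-to-45 store has real evidence while the two-to-one store has noise.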
But even when pilot studies are somewhat easier to learn from, they must still be designed with scientific principles in mind to yield reliable conclusions. From selecting an unrepresentative store location to failing to train staff, it’s easy to inject confounding variables into a pilot study, at which point the results no longer accurately measure what the study set out to test.
Drawing on a large grocery chain that regularly conducts field tests and on Carnegie Mellon University research into effective pilot studies, the following are best practices for conducting tests and gaining confidence in the results:
- Conduct a pre-implementation measurement phase. Too often, according to Carnegie Mellon researchers, improvements are implemented and their effect measured without valid measurements taken beforehand; the result is a pilot study interpreted based “on opinion and impressions.” The improvement team needs to collect data and use valid information to define the current situation, rather than assuming it already has that information, the researchers said.
- Use good sampling, measurement, and analysis techniques. The grocer’s VP of retail operations warns that the test sample must be representative of the population of interest, such as high-loss stores or products. If the goal is to roll out a program or device companywide, then LP needs to examine what it can do to increase the probability that the pilot study results will generalize to the larger population. For example, ask yourself, “Is the skill and experience of the personnel involved in the pilot study typical for the company? How similar, in size and other factors, is the pilot study’s environment to other organizational projects?” The more rigorous your statistical approach in designing a pilot study, the more broadly you can apply the results. A relatively simple design may be sufficient if your goal is limited to learning whether a wireless camera system operates as promised, but a more rigorous experimental design is needed to identify more definitively the environments and conditions in which a device or project will work best.
- Identify variables that could impact results. Developing the assumptions that give form to a field test is the first step, but Carnegie Mellon researchers suggest initial planning should also include an examination of factors beyond your control that could confound or influence the cause-and-effect relationship you’re trying to evaluate. Some extraneous variables may be accounted for or minimized during the design of the field test.
- Measure related outcomes. Field tests of LP solutions or security systems for retail stores should measure more than effectiveness; they must also measure consequences. For example, protective boxes around high-theft items may effectively reduce shoplifting, but it’s also important to measure whether they have a concurrent impact on sales, customers’ checkout time and line length, and customers’ shopping experience. Carnegie Mellon researchers suggest using an instrument to obtain anonymous feedback.
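The sampling advice above can be sketched as a simple stratified draw: pick pilot stores from each loss tier so the test sample mirrors the population you want to generalize to. The store list, tier labels, and counts below are hypothetical:

```python
import random
from collections import defaultdict

# Hypothetical chain of 100 stores; every fifth store is a high-loss location.
stores = [(i, "high-loss" if i % 5 == 0 else "typical") for i in range(1, 101)]

def stratified_sample(stores, per_tier, seed=1):
    """Draw the same number of pilot stores from each loss tier, so no tier
    is over- or under-represented in the test sample."""
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    by_tier = defaultdict(list)
    for store_id, tier in stores:
        by_tier[tier].append(store_id)
    return {tier: rng.sample(ids, per_tier) for tier, ids in by_tier.items()}

pilot = stratified_sample(stores, per_tier=3)
print(pilot)
```

A purely random draw from all 100 stores could easily miss the high-loss tier entirely; stratifying guarantees each tier of interest is represented.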
The grocer’s LP team achieves this broad perspective by developing diagrams and flowcharts of how they think a new deterrent measure will work and fit with workflow. These are helpful for indicating the possible unintended consequences that need to be measured during the pilot study.
The team also conducts surveys of customers and employees on their attitude toward new LP technology and asks them to compare new methods to older processes or technology. And, to learn not just whether a test technology works but also if it is a wise investment, they track all initial and ongoing costs, such as employee time, so they can calculate return on investment during the field test phase.
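The ROI tracking described above amounts to tallying every initial and ongoing cost against the measured benefit. A minimal sketch, with all cost categories and dollar figures invented for the example:

```python
def roi(gross_benefit, total_cost):
    """Return on investment as a fraction: (benefit - cost) / cost."""
    return (gross_benefit - total_cost) / total_cost

# Hypothetical tallies from a six-month field test
costs = {
    "hardware": 8_000,
    "installation": 1_200,
    "employee_time": 2_300,  # hours spent on the pilot x loaded hourly rate
    "maintenance": 500,
}
shrink_reduction = 18_500    # estimated loss avoided during the test period

total_cost = sum(costs.values())
print(f"ROI over the test period: {roi(shrink_reduction, total_cost):.0%}")
```

Tracking employee time as a line item, as the grocer's team does, is what keeps a technology that "works" from looking like a wise investment when its hidden labor costs say otherwise.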
Carnegie Mellon researchers advise developing the pilot implementation plan collaboratively with the people who will participate in the pilot study. Otherwise, the study may affect their attitude or overwhelm them, injecting variables that can skew field test results.
Researchers also warn against overlooking training. The pilot implementation plan must describe the training people need, including how to use the new technology or how the new process works and where to get help if they encounter problems.
This post was originally published in 2017 and was updated November 26, 2018.