Imagine starting your day by receiving budget approval for new loss prevention equipment to protect the high-shrink departments in ten different stores. You are thinking of rolling this technology out to other departments based on its performance in reducing shortage, but how do you measure that? Do you measure the stores equally even though installation began on different days? Does the technology perform differently across markets? Across departments? What metrics will you rely on to assess the technology's effectiveness? Does its effectiveness diminish over time?
As budgets shrink and the justification required to secure them grows, so does the need to accurately quantify the impact of technology and add credibility to future requests. This brings us to today's topic: fact-based research. As an industry, we are moving away from gut reactions and anecdotal evidence toward more scientific approaches, or at least I believe we should be. Understanding the scientific method and designing your own proofs of concept in house will leave you better equipped to evaluate products and to spot where vendor white papers may be fudging the numbers.
With information as richly available as it is today, you'd be remiss not to take it into account when calculating a return on investment. With better data, LP practitioners can get a more robust view of where a given technology will be most effective and the financial impact it can have, not just on shortage but on operations and the customer experience as well.
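To make the return-on-investment idea concrete, here is a minimal sketch of a first-year ROI calculation for an LP technology pilot. All dollar figures are hypothetical, chosen only to illustrate the arithmetic; your own shrink estimates and cost structure would replace them.

```python
# Minimal first-year ROI sketch for an LP technology pilot.
# All dollar figures below are hypothetical.

annual_shrink_reduction = 45_000  # estimated shrink dollars saved per year
equipment_cost = 25_000           # one-time installation cost
annual_maintenance = 5_000        # recurring cost per year

# Net benefit in year one, after all costs.
first_year_net = annual_shrink_reduction - equipment_cost - annual_maintenance

# ROI expressed as a fraction of total first-year spend.
roi = first_year_net / (equipment_cost + annual_maintenance)

print(f"First-year net benefit: ${first_year_net:,}")
print(f"First-year ROI: {roi:.0%}")
```

Even a back-of-the-envelope calculation like this forces you to state your assumptions explicitly, which is the first step toward the fact-based research discussed above.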
Consider electronic article surveillance (EAS) tags; their effectiveness may vary with the type of theft risk at particular locations. Sure, EAS tags will likely scare off opportunistic shoplifters, but will they have the same impact on organized retail crime (ORC) offenders? By looking at the data for your locations, you can invest capital more intelligently than by arbitrarily assigning new technology to your top-shortage stores. There are many ways to test a new product and reach valid conclusions, but they essentially fall into two main categories: pre/post (crossover) testing and comparable-store (parallel) testing.
Crossover testing occurs in two time periods: before and after the technology is introduced. This is the most powerful study design because we can see how our metrics change when a new technology arrives in a store. Before the tech is installed, we observe baseline activity for a particular metric; after it is installed, we can see its direct impact on our metrics of interest. This design also allows for significantly smaller sample sizes than a parallel design requires.
The main drawback of this design is that collecting data in both periods can be a lengthy process. Additionally, over that longer timeline, temporal effects may confound your conclusions; the most obvious example is the holiday season. Did your sales increase because the new clothing tag allowed more stock to be available on the shelves, or because it's a hot-ticket item? Comparing the results to the same period last year gives a fairer baseline, since business trends are more similar year to year than month to month.
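The year-over-year adjustment described above can be sketched in a few lines. This is a simplified illustration, not a full statistical analysis: the weekly shrink figures are hypothetical, and a real study would also test whether the adjusted effect is statistically significant.

```python
# Sketch of a pre/post (crossover) comparison with a year-over-year
# adjustment for seasonality. All shrink figures are hypothetical
# weekly shrink dollars for a single store.

pre_period = [1200, 1150, 1300, 1250]    # weeks before installation
post_period = [900, 950, 875, 1000]      # weeks after installation

# The same calendar weeks last year, when no technology was installed,
# used to estimate the normal seasonal swing.
last_year_pre = [1180, 1160, 1290, 1240]
last_year_post = [1210, 1190, 1150, 1230]

def mean(xs):
    return sum(xs) / len(xs)

# Raw change observed around the installation this year.
raw_change = mean(post_period) - mean(pre_period)

# Change over the same weeks last year (seasonality only).
seasonal_change = mean(last_year_post) - mean(last_year_pre)

# Estimated effect of the technology, net of seasonality.
adjusted_effect = raw_change - seasonal_change

print(f"Raw change:      {raw_change:+.2f} per week")
print(f"Seasonal change: {seasonal_change:+.2f} per week")
print(f"Adjusted effect: {adjusted_effect:+.2f} per week")
```

Subtracting last year's change strips out the portion of the movement that would have happened anyway, which is exactly why year-over-year comparisons beat month-to-month ones for a crossover design.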
Parallel testing occurs in a single time period and compares test stores to control stores. This allows for a shorter testing period and helps mitigate some of the seasonal effects mentioned above. Again, if we introduce a new clothing tag in one store, we can compare the sales lift there to our control store.
However, parallel designs are more susceptible to external factors that can influence your data. If one high-risk store gets a new package wrap and you compare it to a similar store without the wrap, differences in how the stores already protect their packages could skew the outcome. On top of this, there is always the Hawthorne effect: once your store managers realize you are paying extra attention to a particular item, they may go above and beyond to mitigate loss on that item. This effect can bias any research study.
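The parallel comparison above can also be sketched briefly. The store names and weekly shrink figures here are hypothetical, and the comparison is only as good as the assumption that the control stores are truly comparable to the test stores.

```python
# Sketch of a parallel (comparable-store) comparison over a single
# time period. Store names and weekly shrink dollars are hypothetical.

test_stores = {"Store 101": 850, "Store 102": 920, "Store 103": 780}
control_stores = {"Store 201": 1100, "Store 202": 1050, "Store 203": 1180}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

test_mean = mean(test_stores.values())
control_mean = mean(control_stores.values())

# Estimated weekly shrink difference attributable to the technology,
# assuming the control group is a fair stand-in for the test group.
estimated_effect = test_mean - control_mean

print(f"Test stores average:    {test_mean:.0f}")
print(f"Control stores average: {control_mean:.0f}")
print(f"Estimated effect:       {estimated_effect:+.0f} per week")
```

Note that this design attributes the entire gap between groups to the technology; the external factors and Hawthorne effect described above are exactly the things that can make that attribution wrong.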
Did you have to reread those last few paragraphs once or twice to get the gist? Designing pilots and testing new technology can be difficult, so you may need to bring people onto your team who understand the nuances of assessing various treatments in a real-world setting. At my company, we brought on Kyle Grottini, who made sure we could measure a technology's impact in a scientifically valid manner. That allowed us to predict our shortage reduction and what the return on investment would mean to our bottom line. As I've mentioned previously, you can teach a data scientist about LP with relative ease compared to teaching an LP person data science.
The Right Personnel
As data becomes ubiquitous, having someone on your team who can discuss different methods of turning available information into business insights is invaluable. Beyond tapping new data sources, they can also build data visualizations, automated reporting, and dashboards for everyone from store associates to the CEO. Having them focus on the data not only yields new insights but also brings an objective eye to information that might otherwise have been treated subjectively. Fact-based research is important, but rooting that research within the practical constraints of your business is where people with LP experience can offer guidance to the data people.
Being able to forecast loss is far more powerful than reacting to a year-end inventory. Working with the Loss Prevention Research Council, we examine the role of data in the LP industry through both the LP Innovations Working Group and the Data Analytics Working Group. The intertwining of fact-based research, data, and LP is coming at you faster than ever before. Do you have the right people in place to deal with the changing tide?