What is Sensor Fusion—and How Should Retail Use It?

“If it looks like a duck and quacks like a duck, it’s probably a duck.”

We’ve all heard some form of the “Duck Test” before. We use the folksy expression when we want to get to the heart of the matter and clearly describe something for what it is.

What most people don’t realize is that the Duck Test, for all its simplicity, is a perfect expression of one of the most complex modern fields of technology: sensor fusion.

So what is sensor fusion?

It’s a process that combines input from multiple types of sensors and uses the information to correctly identify something. As humans, we see a duck with the sensors in our eyes, and compare the image we see with a catalogue of animals on file in our brains. We also hear the duck with the sensors in our ears, and we compare the sound of a duck’s quack with a library of animal sounds from memory.

Could we recognize a duck by sight without hearing it? Sure. Could we hear a duck quack without seeing it and guess what it is? Most likely. But when we combine the two sensors, we’re confident enough to discard other possibilities.
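To see why combining cues raises confidence, here is a minimal sketch of the idea. The numbers and the assumption that the two cues are independent are purely illustrative, not a real fusion algorithm from any particular system.

```python
# Minimal sketch of naive probabilistic fusion: two detection confidences
# ("looks like a duck", "sounds like a duck") combined into a single score.
# The values and the independence assumption are illustrative only.

def fuse_two_sensors(p_sight: float, p_sound: float, prior: float = 0.5) -> float:
    """Combine two detection confidences with a simple odds-based Bayes update."""
    # Treat each confidence as odds evidence and assume the two cues are independent.
    odds = (prior / (1 - prior)) * (p_sight / (1 - p_sight)) * (p_sound / (1 - p_sound))
    return odds / (1 + odds)

print(fuse_two_sensors(0.80, 0.75))  # ~0.92: higher than either cue alone
```

Either sensor alone leaves room for doubt; fused together, the same two readings push the combined confidence well above either one on its own.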

Now apply the Duck Test to security technology. Imagine a toy duck sitting on a display shelf at the front of a store’s toy section. The store security camera can see the duck sitting on the shelf, but a camera doesn’t know what a duck is; it only records a 2D image.

Now layer in simple infrared (IR) sensors, like those found in a common gaming accessory such as the Kinect, and we can build a point cloud map of the duck. The point cloud consists of thousands of small data points, each with its own x, y, and z coordinates, which we can use to calculate the exact shape and volume of the toy duck. Now we have a 3D model overlaid with a 2D image. But a store computer system never went to preschool and learned about animals, so how does it know what a duck is?
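As a rough sketch of what that calculation looks like, assume the depth sensor has already segmented the object into an N x 3 array of (x, y, z) coordinates; the points below are made up for illustration.

```python
import numpy as np

# Sketch: estimate an object's rough dimensions and volume from a point cloud.
# Assumes the depth sensor has segmented the object into an N x 3 array of
# (x, y, z) coordinates in meters; the sample values are made up.
points = np.array([
    [0.02, 0.01, 0.00],
    [0.10, 0.08, 0.06],
    [0.05, 0.04, 0.09],
    # ... thousands more points in a real capture
])

dims = points.max(axis=0) - points.min(axis=0)   # width, depth, height of the bounding box
bbox_volume = float(np.prod(dims))               # crude upper bound on object volume

print(f"approx dimensions (m): {dims}, bounding-box volume (m^3): {bbox_volume:.6f}")
```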

The computer must learn. If the SKU is fully digitized, we already know its exact dimensions and volume, and we simply compare that record against the information we’ve gathered from the store sensors. If it is not digitized, then the system can be trained. We feed it reference images of the toy duck, and over time, through repeated interactions in the store, machine learning can be used to accurately identify the toy, just like a child learning from flashcards.
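The digitized-SKU comparison can be as simple as checking measured dimensions against a catalog within some tolerance. The catalog entries, field names, and tolerance below are hypothetical, just a sketch of the matching step.

```python
# Sketch: match dimensions measured by the store sensors against a catalog of
# digitized SKUs. The catalog entries and tolerance are hypothetical.
CATALOG = {
    "TOY-DUCK-001": {"dims_m": (0.08, 0.07, 0.09), "volume_m3": 0.000504},
    "TOY-TRUCK-002": {"dims_m": (0.20, 0.09, 0.11), "volume_m3": 0.001980},
}

def match_sku(measured_dims, measured_volume, tolerance=0.15):
    """Return the best-matching SKU whose stored dimensions fall within tolerance."""
    best_sku, best_error = None, tolerance
    for sku, record in CATALOG.items():
        errors = [abs(m - r) / r for m, r in zip(measured_dims, record["dims_m"])]
        errors.append(abs(measured_volume - record["volume_m3"]) / record["volume_m3"])
        error = max(errors)
        if error < best_error:
            best_sku, best_error = sku, error
    return best_sku

print(match_sku((0.082, 0.068, 0.091), 0.00051))  # -> "TOY-DUCK-001"
```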

Toys don’t all make sounds, but modern technology gives them far more precise ways to announce themselves. More than six billion passive UHF RFID tags made their way into retail stores in 2017. Overhead RFID readers, RFID handhelds, robotic sensor platforms, and even trial drones are being deployed in stores to scan these items, and each tagged item identifies itself precisely, down to an exact serial number, over radio waves.

A real duck may quack, but a toy duck with an RFID tag will not only identify itself; it will also tell us exactly which duck it is out of millions of others just like it. With access to shared serialized inventory histories, now rapidly moving to the blockchain through programs like Auburn University’s Project Zipper with its five retailer and eight brand-owner partners, we can trace the toy duck’s journey precisely, from the moment it was made through everywhere it has been since.
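Conceptually, tracing that journey is just a lookup against the item’s serialized event history. The serial number and event records below are invented for illustration, not from any real inventory system.

```python
# Sketch: trace a serialized item's journey from a shared event history.
# The EPC value and event records are made up for illustration.
HISTORY = [
    {"epc": "DUCK-SN-000128", "event": "manufactured", "location": "factory", "date": "2017-03-02"},
    {"epc": "DUCK-SN-000128", "event": "shipped",      "location": "DC-Atlanta", "date": "2017-04-11"},
    {"epc": "DUCK-SN-000128", "event": "received",     "location": "store-1142", "date": "2017-04-18"},
]

def trace(epc: str):
    """Return this exact item's life history, in order."""
    return [e for e in HISTORY if e["epc"] == epc]

for step in trace("DUCK-SN-000128"):
    print(step["date"], step["event"], "at", step["location"])
```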

What Is Sensor Fusion, Really? It’s a Way of Creating Stronger Tools

Which brings us back to sensor fusion. Simple statements such as “looks like, sounds like, probably is…” can mean much more when automated. In the example above, we have 2D cameras, 3D stereoscopic cameras, IR sensors, RFID scanners, RF tags, machine-learning algorithms, digitized item files, serialized item history, and blockchain technology all working in parallel to tell us not just that it is “probably” a duck, but precisely what kind of duck, its exact name, and where it originated.

Through the lens of loss prevention technology, all of this information is invaluable, especially for high-value assets and items that can be substituted or counterfeited. But how can we fuse in even more sensors for an even stronger tool?

People-tracking with store cameras is commonplace; it’s fairly simple to track a shopper (or shoplifter) throughout a store and, with a strong sensor fusion system, tie items to the shopper as he or she picks them up along the way. Bluetooth Low Energy (BLE) can be used on an opt-in basis to track customers’ paths through the store and sort them as low risk while LP systems focus on shoplifters. RFID and computer vision sensors can capture information about items traveling to the wrong areas. Does that customer really need six Blu-ray copies of “Cobra” in the dressing room?

All of this sensor data, fused together, is used to identify events. Person + Item + Action = Event, and with a good sensor fusion engine, we can correctly identify all types of events in the store, both good and bad. Luckily, we can often identify the bad events before they even occur. Loss prevention has typically been reactive (busting shoplifters at the exit) or restrictive (locked cases).
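The Person + Item + Action idea maps naturally onto a simple data structure plus a rule set. The field names and rules below are illustrative assumptions, not any real LP product’s schema.

```python
from dataclasses import dataclass

# Sketch of "Person + Item + Action = Event" as a small data structure and a
# tiny rule set. Field names and rules are illustrative assumptions only.

@dataclass
class Event:
    person_id: str   # shopper track ID from the camera/BLE fusion layer
    item_epc: str    # serialized RFID identity of the item
    action: str      # e.g. "picked_up", "entered_fitting_room", "exited_store"
    item_paid: bool  # whether the POS has recorded a sale for this serial number

def classify(event: Event) -> str:
    """Label an event as routine or worth a closer look."""
    if event.action == "exited_store" and not event.item_paid:
        return "alert: item leaving the store without a sale"
    if event.action == "entered_fitting_room" and event.item_epc.startswith("MEDIA"):
        return "review: media item carried into a fitting room"
    return "routine"

print(classify(Event("shopper-17", "MEDIA-COBRA-0042", "entered_fitting_room", False)))
```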

But with a strong events engine, we can make loss prevention proactive. Take the case of the “Cobra” Blu-rays in the dressing room. Instead of sending over asset protection to apprehend the thief at the store exit, what if we sent someone offering the would-be shoplifter assistance as they exited the dressing room? “Can I help you find anything else? Like, maybe six Blu-ray copies of ‘Tango & Cash’ to help fill out your collection?” Nine times out of ten, the would-be thief will abandon their attempt and leave with no problem to resolve.

Using sensor fusion, we can identify shelf sweeps in real time. We can identify high-value assets circumventing point-of-sale systems. We can even identify would-be shoplifters preparing for a crime. In one pilot store, we noticed several high-value electronics items hovering out of place near an exit. A visual check confirmed that two potential shoplifters were waiting by the exit, watching for an opportunity to slip out unnoticed. By the time they worked up the nerve, the police were already outside waiting to meet them, thanks to the sensor fusion technology that alerted store staff to the threat.
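Real-time shelf-sweep detection, for example, can be as straightforward as counting how many tagged items leave one shelf within a short window. The thresholds and read format below are assumptions for the sake of the sketch.

```python
from collections import deque

# Sketch of real-time shelf-sweep detection: flag a shelf when many tagged
# items leave it within a short window. Thresholds are assumptions.
WINDOW_SECONDS = 10
SWEEP_THRESHOLD = 5  # items removed from one shelf inside the window

recent_removals = {}  # shelf_id -> deque of removal timestamps

def record_removal(shelf_id: str, timestamp: float) -> bool:
    """Record an item leaving a shelf; return True if it looks like a sweep."""
    window = recent_removals.setdefault(shelf_id, deque())
    window.append(timestamp)
    # Drop removals that have fallen out of the time window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) >= SWEEP_THRESHOLD

# Six items pulled from the same shelf in four seconds triggers an alert.
for t in [0.0, 0.5, 1.2, 2.0, 3.1, 4.0]:
    alert = record_removal("electronics-endcap-3", t)
print("sweep alert:", alert)
```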

From vision and hearing, we can confidently identify a duck, or hundreds of thousands of other animals, objects, and occurrences. Using all five senses, we can identify even more. Imagine what we could do with five types of automated sensors in a store, or with a system of 15 types of sensors.

With a fusion system like that, identifying every “duck” in the store, tracking it, and preventing its theft becomes easier, more efficient, and more accurate.

This post was originally published in 2017 and was updated September 24, 2018. 
