
Leveraging Artificial Intelligence in Loss Prevention

James Stark

James Stark is the segment development manager for retail at Axis Communications. In this capacity, he is responsible for developing strategies and building channel relationships to expand Axis’ presence in the Americas retail market.

Stark is a subject matter expert who has spearheaded cross-functional initiatives leveraging business data analytics, strategic planning, and specialized systems and tools to optimize security, risk management, and the customer experience. He has more than thirty years of experience in the retail industry and specializes in loss prevention, safety, e-commerce fraud, and supply chain security.


LPM: How can AI be effectively integrated into security systems to enhance the protection of people, property, and assets?

James Stark: AI can enhance security by extending surveillance coverage to the farthest reaches of a property, including parking lots and other remote locations, ultimately providing coverage to all people and assets across the five zones of influence. By integrating various data sets—such as metadata from video feeds, sensor data, and access logs, to name a few—AI systems can create a comprehensive view of the environment, making it easier to identify unusual patterns or potential threats.

These data sets act like different “colors” on a canvas; when combined, they provide a clearer picture of the overall security landscape. AI technology can also reduce the gap between incident and response. While AI might not always predict incidents, it can react quickly and efficiently, enabling a hyper-reactive approach to security.
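As a loose illustration of the data-fusion idea Stark describes, the sketch below pools hypothetical anomaly events from video analytics metadata, door sensors, and access logs, and flags time windows where several independent sources coincide. The field names, sources, and thresholds are invented for illustration and do not reflect any specific product's schema.

```python
from datetime import datetime, timedelta

# Hypothetical anomaly events pooled from different systems; the "source"
# and "anomaly" fields are illustrative, not any vendor's real schema.
events = [
    {"source": "video_metadata", "time": datetime(2024, 5, 1, 22, 14), "anomaly": True},
    {"source": "door_sensor",    "time": datetime(2024, 5, 1, 22, 15), "anomaly": True},
    {"source": "access_log",     "time": datetime(2024, 5, 1, 22, 16), "anomaly": True},
    {"source": "video_metadata", "time": datetime(2024, 5, 1, 9, 30),  "anomaly": True},
]

def correlate(events, window=timedelta(minutes=5), min_sources=2):
    """Raise an alert when anomalies from several independent data sets
    fall inside the same short time window."""
    flagged = sorted((e for e in events if e["anomaly"]), key=lambda e: e["time"])
    alerts = []
    for anchor in flagged:
        nearby = {e["source"] for e in flagged
                  if timedelta(0) <= e["time"] - anchor["time"] <= window}
        if len(nearby) >= min_sources:
            alerts.append((anchor["time"], sorted(nearby)))
    return alerts

for when, sources in correlate(events):
    print(f"{when}: corroborated by {', '.join(sources)}")
```

The point of the sketch is simply that an alert backed by multiple data sets is more actionable than any single feed on its own.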

Integrating AI into security systems also empowers people by providing them with more actionable information, allowing for stronger decision-making. For example, a camera providing visual data to shelf monitoring software can track inventory levels for business purposes and alert security if it notices suspicious activity, like a shelf being emptied suddenly.
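A minimal sketch of the sudden-emptying alert described above might look like the following, assuming the item counts come from video-based shelf monitoring. The drop threshold and time window are illustrative values, not settings from any particular system.

```python
from datetime import datetime, timedelta

def sudden_depletion(readings, drop_threshold=0.6, window=timedelta(minutes=2)):
    """Return True if shelf stock falls by more than drop_threshold
    (as a fraction of the starting count) within the given time window."""
    readings = sorted(readings, key=lambda r: r[0])  # (timestamp, item_count)
    for i, (t_start, count_start) in enumerate(readings):
        if count_start == 0:
            continue
        for t_end, count_end in readings[i + 1:]:
            if t_end - t_start > window:
                break
            if (count_start - count_end) / count_start >= drop_threshold:
                return True
    return False

# Illustrative readings: the shelf goes from 40 items to 5 in about a minute.
readings = [
    (datetime(2024, 5, 1, 14, 0, 0), 40),
    (datetime(2024, 5, 1, 14, 0, 30), 38),
    (datetime(2024, 5, 1, 14, 1, 0), 5),
]
if sudden_depletion(readings):
    print("Alert: shelf emptied unusually fast, notify security")
```

The same inventory counts serve the merchandising team day to day, while the alert logic gives loss prevention a second use for the data.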

Organizations can improve operational efficiency and security by finding multiple uses for AI technologies.

LPM: To what extent should people trust decisions made by AI systems, and what factors should be considered when evaluating the reliability of AI-generated outputs? How does this extend to protecting assets?

Stark: People should still adopt a “trust but verify” approach to AI decisions. While it’s essential to trust the AI systems you’ve chosen to implement, continual verification is key to ensuring ongoing reliability. This is like overseeing a child’s learning; you trust their ability to learn but still want to guide them.

AI systems should not be micromanaged, but they do need regular testing to ensure they adapt correctly to any changes in the environment, like alterations in lighting or layout. This ensures the AI continues to function as intended. Think of AI like your vision: even without glasses, you can still see, but your perception might be distorted. Similarly, AI decisions should always be interpreted within the context of human oversight, especially in high-stakes situations like loss prevention, where false accusations or missed incidents could have significant consequences.
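One way to make that “trust but verify” routine concrete is a scheduled check that compares current analytics performance against a recorded baseline and flags drift, for example after a lighting or layout change. The sketch below assumes hypothetical metric names and a tolerance chosen purely for illustration.

```python
def verify_against_baseline(current_metrics, baseline_metrics, tolerance=0.10):
    """Compare current analytics metrics against a recorded baseline and
    report any that have drifted by more than the allowed tolerance."""
    drifted = {}
    for name, baseline in baseline_metrics.items():
        current = current_metrics.get(name)
        if current is None:
            drifted[name] = "missing from current run"
        elif baseline and abs(current - baseline) / baseline > tolerance:
            drifted[name] = f"baseline {baseline:.2f} -> current {current:.2f}"
    return drifted

# Illustrative metrics from a periodic walk-through of the same test scene.
baseline = {"person_detection_rate": 0.97, "false_alert_rate": 0.02}
current = {"person_detection_rate": 0.84, "false_alert_rate": 0.05}

for metric, note in verify_against_baseline(current, baseline).items():
    print(f"Review needed: {metric} has drifted ({note})")
```

A check like this does not replace human judgment; it simply tells the team when the system's "vision" may have changed and needs a closer look.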

LPM: How can we leverage AI in a way that is transparent and explainable, allowing people to understand the reasoning behind its decisions?

Stark: You need to have a strong communication plan and consistently discuss the technology and its impacts in order to foster AI transparency. Awareness and education are key here; people need to understand how AI works and the reasons behind its decisions. This helps demystify AI and makes its actions more predictable and comprehensible.

I’m reminded of a quote from Chris Nelson: “We change, we grow, we innovate.” This mindset acknowledges that change can be difficult, growth can be painful, and innovation is often misunderstood. To overcome these challenges, it’s crucial to continuously tell the story of the AI system—explaining who is using it, what it does, when and where it’s applied, why it’s being used, and how it functions. This narrative should be communicated clearly and frequently, reinforcing widespread understanding and trust.

LPM: What are the key challenges in integrating AI into existing operational systems and workflows, and how can these challenges be overcome?

Stark: Integrating AI into existing systems involves several key challenges, primarily related to infrastructure, system integrations, and partnerships. The first question is whether your current infrastructure can support future AI integrations. Continuously adding components to an outdated system is not effective. In some cases the necessary changes are as simple as updating system software or network hardware; in others, a complete overhaul is needed to create a solid foundation for AI.

Open-source technology makes it possible to pull together different solutions, but in some cases it's best to start with a new infrastructure built specifically to support AI. This process involves bringing the right partners into the discussion, including technology providers, software developers, and systems integrators. These partners must collaborate effectively with each other and with your organization to ensure seamless integration and functionality.

AI integration is not a one-time affair, though. It will require continuous reviews, device health monitoring, scheduled maintenance, and fine-tuning to adapt to evolving needs and environments. Having partners who are actively engaged and present on the ground with your team is essential for successful implementation and ongoing support.
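As a rough sketch of the device health monitoring Stark mentions, the snippet below polls a list of hypothetical camera health endpoints and reports any that stop responding. The device names, addresses, and endpoint path are placeholders, not a real API.

```python
import urllib.request
import urllib.error

# Hypothetical device endpoints; a real deployment would pull these from a
# device management system rather than a hard-coded list.
devices = {
    "lot-camera-01": "http://192.0.2.10/health",
    "dock-camera-02": "http://192.0.2.11/health",
}

def check_devices(devices, timeout=5):
    """Poll each device once and report which ones are unreachable."""
    offline = []
    for name, url in devices.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                if response.status != 200:
                    offline.append((name, f"HTTP {response.status}"))
        except (urllib.error.URLError, OSError) as exc:
            offline.append((name, str(exc)))
    return offline

for name, reason in check_devices(devices):
    print(f"Maintenance needed: {name} is unreachable ({reason})")
```

Run on a schedule, a simple check like this feeds the continuous review and maintenance cycle rather than waiting for a camera failure to be discovered after an incident.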
