The Role of AI in Criminal Investigations and Law Enforcement

What may prove to be one of the most important technological developments in modern criminal justice is happening today: the emergence of artificial intelligence.

As AI systems become ever more capable and powerful, their greatest impact may be felt in the most basic criminal justice application—the investigation process itself. Used correctly, AI can help security and law enforcement solve more crimes and do so faster than ever before. Used incorrectly, it can lead to egregious privacy invasions and a multitude of ethical problems.

Too Much Data

Today, investigators often have an overwhelming sea of digital evidence to sort through. They face a nearly ubiquitous social media presence, image and video recordings of private moments lived in public, and the public email accounts of individuals. Those emails often contain a mind-numbing volume of private business, from conversations with an ex-spouse about child custody arrangements to a person’s effort to repair his or her credit, along with myriad other irrelevant matters. On top of all this “evidence” are the smart devices that fill our homes and businesses, constantly tracking where we are and recording what we do.

Digital Partners

Against this backdrop, AI offers one of the most attractive benefits for investigators—an unprecedented ability to analyze colossal amounts of data rapidly and precisely. This capability is realized in a number of different areas:

Benefits of AI in Criminal Investigations

  1. Enhanced Data Analysis and Pattern Recognition

AI is vastly superior to the human mind at analyzing this flood of data: sorting out not only what is private (and probably should stay that way) and who deserves to retain some privacy, but also the when, where, and why of the events under investigation.

AI is rapidly becoming a key part of the investigator’s toolkit for making sense of the huge amounts of digital evidence that the human mind alone cannot comprehend. For example, advanced AI with machine learning capabilities can be used to analyze:

  • Video evidence from surveillance cameras
  • Evidence found in public email accounts
  • Human communication—in all its private and not-so-private forms
  • Human behavior
  • Human presence in public and private spaces

2.  Improved Evidence Processing

AI technologies have transformed the processing and analysis of evidence, dramatically changing the investigative landscape. Systems using computer vision, for instance, can take poor-quality surveillance footage and bring out details necessary to an investigation that would otherwise be very difficult to see. License plates, suspicious activity, and other distinguishing features are now much easier to find in video evidence.
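
As a rough illustration, the sketch below shows the kind of preprocessing such a system might apply to a single low-quality frame before a trained detection model looks for plates or faces. It uses OpenCV; the file names are placeholders, and real systems rely on trained detection and super-resolution models rather than these basic filters.

```python
# A minimal enhancement sketch, assuming OpenCV is installed (pip install opencv-python).
import cv2
import numpy as np

frame = cv2.imread("surveillance_frame.jpg")  # hypothetical still frame from footage
if frame is None:
    raise FileNotFoundError("Place a sample frame at surveillance_frame.jpg")

# Upscale so small details (e.g., plate characters) occupy more pixels.
upscaled = cv2.resize(frame, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

# Reduce the sensor noise typical of low-light surveillance footage.
denoised = cv2.fastNlMeansDenoisingColored(upscaled, None, 10, 10, 7, 21)

# Apply a simple sharpening kernel so edges (plate borders, text) stand out.
sharpen_kernel = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]])
sharpened = cv2.filter2D(denoised, -1, sharpen_kernel)

cv2.imwrite("enhanced_frame.jpg", sharpened)
```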

Natural language processing (NLP), a branch of AI that leans heavily on machine learning, has emerged as a powerful tool for examining the texts and spoken words that make up our communications. Computers can pick out keywords, discern sentiment and context, and map the who, what, where, and when of a statement.
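
A minimal sketch of that kind of extraction appears below, using the spaCy library and its small English model (both assumptions on my part; the example sentence and names are invented). Real investigative platforms run far more elaborate pipelines, but the basic idea is the same.

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

statement = (
    "On Friday night around 11 pm, Dana met Alex outside the Riverside "
    "warehouse and they argued about the missing shipment."
)

doc = nlp(statement)

# Named entities give a rough who/where/when of the statement.
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g., "Dana" PERSON, "Friday night" TIME

# Noun chunks serve as crude keywords for indexing and search.
keywords = [chunk.text for chunk in doc.noun_chunks]
print(keywords)
```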

The common link among these technologies, whether computer vision, audio analysis, or even a simple microphone feed, is that they all require high-quality inputs to be useful. Humans need clear images to accurately interpret video footage and good audio to understand what they are hearing. That requirement has not changed with the new technology. In fact, with AI and machine learning, the need is heightened, because while AI operates at a much faster pace than humans, it is not exercising human judgment or intelligence.

A few decades ago, law enforcement officials seeking to identify a suspect would typically have to comb through a book of Polaroid photos, inspecting each image for a potential match. Today, AI and computer vision can perform the same task almost instantly. But even with the added speed and efficiency of these powerful new tools, the basic process of picking a person’s face out of a lineup remains unchanged.

3.  Real-Time Crime Prevention

Predictive policing algorithms have shown some potential for helping police forces allocate resources more effectively (this capability has also ignited controversy, as we’ll discuss). These systems take in several types of data: historical crime records along with demographic and environmental information. They then analyze that trove to discern not just the kinds of crimes being committed and how they are carried out, but also the times and places they are most likely to occur.
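
At its simplest, the underlying idea is a frequency model over place and time. The sketch below, with an invented incident list and grid cells, shows that core idea; real deployments use far richer features, and they inherit every bias baked into the historical data, as discussed later in this article.

```python
# A minimal hot-spot sketch under strong simplifying assumptions: each incident
# is reduced to a (grid_cell, hour_of_day) pair and ranked by past frequency.
from collections import Counter

# Hypothetical historical incidents: (grid cell id, hour of day)
incidents = [
    ("cell_12", 22), ("cell_12", 23), ("cell_12", 22),
    ("cell_07", 2),  ("cell_07", 3),  ("cell_31", 14),
]

counts = Counter(incidents)

# Rank cell/hour combinations by how often incidents occurred there before.
for (cell, hour), n in counts.most_common(3):
    print(f"{cell} around {hour:02d}:00 -> {n} past incidents")
```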

4.  Advanced Interview Analysis

In the domain of interviews and interrogations, AI provides sophisticated tools for:

  • Analyzing voice stress to pinpoint possible signs of deception
  • Detecting facial micro-expressions during questioning for more profound emotional insights
  • Employing NLP to unpack inconsistencies or oddities in statements (a simple sketch of this idea follows the list)
  • Watching for behavioral patterns during long interviews that might be clues to truth or deception
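
The sketch below gives a deliberately crude flavor of the consistency-checking item above: it pulls simple time expressions out of two invented statements with a regular expression and flags a mismatch. Real systems use full NLP pipelines and, crucially, human review.

```python
# A purely illustrative consistency check; the statements are invented.
import re

TIME_PATTERN = re.compile(r"\b\d{1,2}(?::\d{2})?\s?(?:am|pm)\b", re.IGNORECASE)

statement_day1 = "I left the store at 9 pm and drove straight home."
statement_day2 = "Like I said, I left the store around 11 pm that night."

times_1 = {t.lower() for t in TIME_PATTERN.findall(statement_day1)}
times_2 = {t.lower() for t in TIME_PATTERN.findall(statement_day2)}

if times_1 and times_2 and times_1 != times_2:
    print("Possible inconsistency in stated times:", times_1, "vs", times_2)
```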

Limitations of AI in Analyzing Behavior

While these use cases certainly demonstrate AI’s prowess, they also underscore an uncomfortable fact: behavior is not a reliable predictor of truth or deception. Several studies indicate that we are not nearly as good at reading human behavior as we might think, and some researchers have gone so far as to say we are not good at it at all. The models we construct are only as useful as the data they are trained on. If that data is shaped by guesswork, subjectivity, or bias, the model built on it will have the same characteristics. The operative principle here could be phrased as “good data in, good model out.”

Many studies support the use of behavioral analysis; many others argue against incorporating it into models. This creates a substantial challenge for investigators on the ground, who must navigate between two scientific perspectives while accounting for bias. As I frequently underscore in my writings, people are naturally biased, and the models they create learn from data generated by people, so those models are biased too. Models can be trained in ways that avoid biased predictions, but doing so requires a training set that is, as far as possible, free of bias.
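
One concrete, if basic, place to start is auditing the labels in the historical data before any model ever sees them. The sketch below, using invented records and group names, compares how often each group was historically labeled “high risk”; real audits rely on proper fairness tooling and domain review.

```python
# A minimal training-data audit sketch with hypothetical records.
from collections import defaultdict

# (group, was_labeled_high_risk) pairs drawn from historical case labels
records = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = defaultdict(int)
positives = defaultdict(int)

for group, high_risk in records:
    totals[group] += 1
    if high_risk:
        positives[group] += 1

# Large gaps between groups are a warning sign before any model is trained.
for group in sorted(totals):
    rate = positives[group] / totals[group]
    print(f"{group}: labeled high risk {rate:.0%} of the time")
```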

Risks and Challenges

  1. Privacy and Civil Liberties Concerns

When a technology as powerful as AI is harnessed in public-sector criminal investigations, it inevitably affects citizens’ privacy. Its ability to comb through truly vast amounts of data, both personal and non-personal, makes it potentially invaluable for solving crimes. But could it also be unconstitutional in the ways it gathers, stores, and uses people’s data? This remains an open question.

In the area of surveillance, there are concerns aplenty. Here are a few of the most frequently asked questions:

  • How far can law enforcement go in amassing and keeping our data?
  • Can we trust authorities not to use our data in bad ways we can’t even fathom yet?
  • What is the correct balance between the many benefits of AI and the security and privacy of our personal data?
  • What about the chilling effect on free speech that innocent individuals may suffer?

2.  Bias and Discrimination

One of the most serious dangers tied to AI in law enforcement is algorithmic bias. AI learns from historical data, which often mirrors our existing societal biases and discriminatory practices. Human decisions about which groups of people are “at risk” and which areas of the country are “high crime” are already prone to bias; now imagine those decisions being made by an algorithm working from that flawed data, and doing so in real time, flagging certain individuals as at risk or certain neighborhoods as high crime. Is that fair? Is it any better than using a magic eight ball to decide who deserves to be treated with dignity by our justice system? Tools imbued with bias can entrench inequity, and that is unjust.

3.  Reliability and Accuracy Concerns

Despite the great speed at which AI systems can process the massive amounts of data they are now fed, doubts persist about their reliability and accuracy when it comes to rendering life-and-death decisions. Here are some of the most pressing concerns:

  • AI can generate false positives. For example, facial recognition systems might incorrectly identify individuals, which could lead to wrongful arrests and accusations. (The sketch after this list shows how a single match threshold drives this trade-off.)
  • AI can misread context. It lacks what humans have in abundance: an understanding of the nuanced cultural and situational elements essential to sound judgment. When the context of a situation is misread, important elements are missed and conclusions go wrong, and that risk is much greater with AI.
  • AI can be over-relied on. Human investigators may trust its predictions too readily and scrutinize them too little, compounding mistakes.
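
The sketch below illustrates the false-positive point in miniature: a facial recognition “match” typically reduces to comparing a similarity score against a threshold. The names, scores, and threshold are invented; real systems use learned embeddings, and a match should only ever be treated as an investigative lead, never as proof of identity.

```python
# Hypothetical similarity scores between a probe image and database entries.
candidates = {
    "suspect_db_001": 0.82,
    "suspect_db_002": 0.64,
    "suspect_db_003": 0.61,
}

# Lowering the threshold returns more matches but also more false positives.
MATCH_THRESHOLD = 0.75

matches = [name for name, score in candidates.items() if score >= MATCH_THRESHOLD]
print("Investigative leads (not identifications):", matches)
```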

4.  Legal and Ethical Implications

Many have expressed concern about the intricate legal and ethical issues associated with the use of AI in criminal investigations. Their worries include:

  • The acceptability of AI-produced evidence in court
  • Whether inquiries that rely on AI can still maintain due process rights
  • The difficulty courts, juries, and the public may have in understanding how AI reaches its conclusions
  • The potential of AI to make interrogation more effective and thus more “unfair” to defendants

Specific Applications and Considerations

I’ve looked at some of the ways AI is changing criminal justice processes. Here are some more specifics:

  1.  AI in Interrogation Rooms

Using AI in interrogations offers exciting possibilities but also poses some tough challenges. AI might help law enforcement study human behavior and figure out when people are lying. But there are also concerns regarding:

  • How reliable and accurate AI-based deception detection will prove to be
  • The potential for AI to be used to coerce or unduly influence a suspect
  • Whether AI might run afoul of suspect rights and protections
  • The mental health impact on those being interrogated when AI is used in the process

2.  Digital Evidence Analysis

The practice of using AI to analyze digital evidence has grown in importance. Its uses now include:

  • Automated processing of electronic devices
  • The recovery of deleted or hidden data
  • The analysis of online activities and digital footprints
  • Recognizing patterns across multiple digital sources

However, as with other applications, these uses of AI also raise concerns about:

  • Chain of custody for digital evidence (see the sketch after this list)
  • Verification of evidence processed by AI
  • The significance and usefulness of evidence produced by AI
  • Preservation of individual privacy rights
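
Cryptographic hashing is a standard way the chain-of-custody and verification concerns above are addressed in practice: evidence is hashed when collected and re-hashed after any processing step to confirm nothing changed. The sketch below shows the idea with a placeholder file name.

```python
# Verify that a piece of digital evidence is bit-for-bit unchanged.
import hashlib

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

original_hash = sha256_of_file("evidence_image.dd")  # recorded at collection
later_hash = sha256_of_file("evidence_image.dd")     # recomputed after AI processing

print("Integrity verified" if original_hash == later_hash else "Evidence has been altered")
```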

Best Practices and Recommendations

  1. Policy Framework

To minimize these risks while maximizing the many benefits of AI, police departments should:

  • Create detailed AI governance policies
  • Clarify how existing legal guidelines apply to AI use
  • Develop stringent oversight structures
  • Be as transparent as possible without compromising public safety
  • Conduct regular checks on the “health” and “safety” of AI deployments
  • Update policies and guidelines as necessary

2.  Training and Education

The best policies and practices are useless if people aren’t trained to implement them. So, it is essential to train law enforcement personnel to:

  • Grasp the abilities and shortcomings of AI
  • Identify potential errors and biases
  • Correctly analyze and interpret the insights produced by AI
  • Uphold the human judgment necessary for decision-making
  • Reconcile the use of this new technology with the traditional investigative toolbox

3.  Technical Considerations

Organizations putting AI systems into action should:

  • Regularly test and validate their AI systems (a simple sketch of such a check follows this list)
  • Maintain proper documentation of AI processes
  • Ensure security of the systems and protection of the data
  • Have backup systems and contingency plans in place
  • Perform regular updates and maintenance of AI systems
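
As a small illustration of that first item, a validation check can be as simple as scoring the system against a fixed, documented test set and refusing to proceed if accuracy slips below an agreed baseline. The labels, predictions, and baseline below are hypothetical.

```python
# A minimal validation sketch with invented labels and an assumed baseline.
EXPECTED_LABELS = ["match", "no_match", "no_match", "match", "no_match"]
MODEL_PREDICTIONS = ["match", "no_match", "match", "match", "no_match"]

BASELINE_ACCURACY = 0.75  # value agreed on and documented when the system was approved

correct = sum(1 for e, p in zip(EXPECTED_LABELS, MODEL_PREDICTIONS) if e == p)
accuracy = correct / len(EXPECTED_LABELS)

print(f"Validation accuracy: {accuracy:.0%}")
if accuracy < BASELINE_ACCURACY:
    raise RuntimeError("Model fell below its documented baseline; investigate before further use")
```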

4.  Ethical Guidelines

Here are some areas ethical guidelines should focus on and address clearly:

  • Safeguarding individual rights
  • Ensuring fairness and non-discrimination
  • Achieving transparency and accountability
  • Protecting privacy
  • Ensuring human oversight and control

Future Considerations

The full capabilities of AI are just beginning to be explored. As this technology continues to advance, we can expect to see more capable behavioral analysis, better predictive algorithms, enhanced real-time monitoring, and more advanced data analysis tools.

Looking ahead to implementation, agencies must determine how to mesh AI with their existing systems. They will face demands for training and adaptation, as well as financial and human resource issues. Finally, and importantly, they will need to gain the public’s acceptance and trust.

Integrating AI into criminal investigations, interviews, and interrogations is both an opportunity and a challenge for law enforcement. On one side of the scale sit the potential payoffs: greater efficiency, in some cases greater accuracy, help sorting through the masses of information that law enforcement agencies and society at large generate every day, and the old-fashioned crime prevention we associate with our police forces. On the other side sit the risks, and the risks to privacy, civil liberties, and fairness are substantial. Success will require a careful, ongoing balance between the two.

Harnessing and using this technology effectively but judiciously will be the challenge facing law enforcement and security professionals for the foreseeable future.
