Can You Trust What You’re Hearing in Interviews?

Interviewing Candidates in the Era of Generative AI

The LPRC continues to grow, and we're currently looking to fill a geographic information systems (GIS) analyst role. The position requires fluency with multiple systems and analytic approaches. Today, candidates can predict likely questions, generate polished answers in advance, and even rely on generative AI in real time to produce on-screen responses they can simply read aloud.

Over the past two weeks, I've interviewed two candidates who raised serious concerns, not about their resumes, but about whether they were using generative artificial intelligence during the interview itself. This article reviews those two experiences and discusses the strategies I used, along with others recommended by experts.

One candidate began the interview with their camera off and turned it on only after prompting. Throughout the discussion, their answers were too perfect—so perfect, in fact, that I felt like I was listening to an audiobook description of analytic approaches, statistical concepts, and software. Almost every response started with “that is a great question” and, in some cases, a restatement of the question itself, followed by a long, textbook-perfect explanation. The responses felt unnatural; there was no visible effort to recall or think through the question, and no personal insights—it seemed like the candidate was just reciting words.

The second candidate displayed similar behavior. When asked why they wanted to work at the LPRC, they gave a nearly verbatim recitation of our mission and facts about our organization, almost as if it had been pulled directly from our website. Similarly, their descriptions of GIS software were filled with language you might expect from the software's marketing team. They accurately described what the software could be used for, for example, but also described it as having a “user-friendly interface,” and characterized another package as “enabling the creation of highly engaging web applications.” This candidate's video was also unusually blurry, so much so that I could not clearly see whether they were reading off the screen.

Neither interview had a single glaring issue, but each raised several red flags in combination. In both cases, the candidates' video feeds were either off initially or so blurry they obscured facial expressions and eye movement. Someone reading AI-generated responses would want to make their eye movements hard to see. Poor video quality isn't necessarily a problem on its own, but it becomes suspicious when combined with other issues: the impersonal nature of the responses, the marketing-style language, and the other characteristics described above. Taken together, these signs made me incredibly suspicious, and, in one case, very confident that the candidate was using generative AI in his responses.

The most concerning issue for hiring managers moving forward is that you cannot be absolutely sure candidates are not using generative AI during interviews. Candidates who give almost perfect answers could genuinely be that good, or they could be regurgitating responses fed to them by a model. Hiring managers who err on the side of caution with someone who is simply that good risk missing out on a talented candidate. But hiring managers who are not vigilant may end up with a team member who is not only unqualified for the position, but also willing to engage in unethical behavior to cover up shortcomings or misrepresent their abilities.

So, how can hiring managers adapt to this new challenge? Some things I did during the interviews might help, such as asking about specific applications of analytic techniques and the candidates' hands-on experience with the software they described. One candidate's responses made it rather clear that he did not truly understand the concepts for which he had previously provided textbook explanations.

In his article “When Candidates Use Generative AI for the Interview” (MIT Sloan Management Review), Navio Kwok offers several techniques for identifying real expertise:

  1. Walk through a project: Have the candidate describe a specific project and the process they followed to complete it.
  2. Explain their choices: Ask why they chose certain methods and what factors influenced those decisions.
  3. Change the context: Ask how different circumstances would have affected their problem-solving strategy, or when they might have chosen a different approach.
  4. Discuss alternatives: Ask what other options they considered but chose not to use, and why.
  5. Critique their own approach: Ask them to evaluate their process, discussing both its strengths and weaknesses.

I used some of these approaches with the first candidate, and he gave much better responses when I probed his problem-solving process and how he might apply various analytic approaches in his work. Unfortunately, I cannot be sure whether he had prepped those answers using generative AI. In this era, candidates can have models build their resumes, generate lengthy and detailed explanations of fictional projects, and predict many of the questions an interviewer might ask.

This is where unpredictable questions might help. I once had an interviewer ask me, “If you were an animal, what kind of animal would you be and why?” I was stunned and initially hesitated, but the question made me think, forced me to generate an answer on the spot, and was certainly not predictable. We may be entering an era where interviewers must favor the most unpredictable questions that still help assess candidates' expertise.

There are other techniques interviewers can use, especially when hiring someone whose technical expertise exceeds their own (for what it is worth, I know relatively little about the technical specifics of GIS analysis). I involve LPRC colleagues or external subject matter experts in the interview process, and I ask candidates to explain complex topics in language a non-expert can understand. Used in combination, these approaches let candidates demonstrate their understanding while my colleagues confirm the accuracy of what they say.

Nevertheless, as technology continues to advance rapidly, hiring managers must pay close attention to whether candidates are using generative AI in their interviews. We may not always be able to tell for sure whether a candidate is using AI in real time or used it to prepare. But the strategies above can help us more accurately assess candidates' expertise, skills, and knowledge. Of course, managers must remain vigilant about generative AI even after hiring, given the risks around data privacy, accuracy, ethics, and issues we may not even be aware of yet.
