Security clearance investigations are onerous for both applicants and investigators, and they can be expensive. In this report released by the RAND Corporation, the authors present results from an exploratory analysis that tests automated tools for detecting when applicants attempt to deceive interviewers.
The report shows that how interviewees answer questions can serve as a useful signal for detecting attempted deception.
Key findings from the report include:
- Models that used word counts were the most accurate at distinguishing participants who were trying to be deceptive from those who were truthful.
- The authors found similar accuracy rates for detecting deception whether interviews were conducted over video teleconferencing or text-based chat.
- Machine learning (ML) transcription is generally accurate, but errors occur, and ML methods often miss subtle features of informal speech.
- Although models that used word counts produced the highest accuracy rates for all participants, there was evidence that these models were more accurate for men than for women.
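To make the word-count approach concrete, the sketch below shows one simple way such a model could work: each interview transcript is reduced to a bag-of-words count vector, and a new transcript is labeled by whichever class centroid it falls closest to. The report does not publish its modeling pipeline, so the toy transcripts, the nearest-centroid classifier, and every function name here are invented purely for illustration.

```python
from collections import Counter

def word_counts(text):
    """Bag-of-words feature vector: lowercase token -> count."""
    return Counter(text.lower().split())

def centroid(vectors):
    """Average the count vectors belonging to one class."""
    total = Counter()
    for v in vectors:
        total.update(v)
    n = len(vectors)
    return {w: c / n for w, c in total.items()}

def distance(v, c):
    """Squared Euclidean distance over the union of observed words."""
    words = set(v) | set(c)
    return sum((v.get(w, 0) - c.get(w, 0)) ** 2 for w in words)

def classify(text, centroids):
    """Assign the label whose class centroid is closest to the text."""
    v = word_counts(text)
    return min(centroids, key=lambda label: distance(v, centroids[label]))

# Toy transcripts, invented for illustration only (hedges like "um" and
# "maybe" stand in for the speech features a real model might pick up on).
truthful = ["i worked at the office all day", "i drove home and made dinner"]
deceptive = ["well i i think maybe i was somewhere",
             "um perhaps i was not there really"]

centroids = {
    "truthful": centroid([word_counts(t) for t in truthful]),
    "deceptive": centroid([word_counts(t) for t in deceptive]),
}

print(classify("um maybe i was somewhere else really", centroids))
```

A production pipeline would differ in almost every detail (larger vocabularies, a trained classifier such as logistic regression, and cross-validated accuracy estimates), but the core idea is the same: word frequencies alone carry enough signal to separate the two groups.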