Cognitive Load and Credibility: Exploring the Link Between Mental Strain and Dishonesty, with Insights into Technology-Assisted Solutions
If you’ve ever been through any investigative interview training, sat through a psychology class, or read an article on memory, you have likely crossed paths with the term “cognitive load”. It’s been tossed into seminars, used in marketing materials, and referenced by academics and practitioners alike, but what is it—really?
The most basic definition of cognitive load is this:
The measure of how hard someone needs to think while they are engaged in a particular activity.
A growing body of research points to a correlation between the credibility of a statement (whether it is judged a “high credibility statement” or a “low credibility statement”) and the level of mental strain (cognitive load) the subject experiences while providing the response. The running theory is that it takes more mental strain to lie than it does to tell the truth.
When presented with a question, the answer to that question immediately populates in our brains (assuming we understand the question and know the answer). There is no stopping this—this is a reflexive response that happens automatically. After the answer to the question surfaces in our minds, that’s when the “freedom of choice” enters the equation. We can either respond to the question with our honest answer OR we can choose to modify our response.
Here’s where things get tricky: there are many reasons why an individual may choose to modify their response (a practice also referred to as impression management). Regardless of the rationale, if the decision is made to replace the honest answer with a dishonest one, that substitution takes time. The delay can range from fractions of a second to a noticeable pause, but in general, it takes more time to lie.
If only decision making was as simple as what I’ve laid out below, our jobs would be so much easier. Humor me—we know there is a lot more to it, but this is a basic visual noting the difference in our options. One will almost always take more time than the other.
Calculus of an honest response:
- Process question
- Answer comes to mind
- Consider ramifications of being honest
- Deliver honest response
Calculus of a dishonest response:
- Process question
- Answer comes to mind
- Consider ramifications of being honest
- Deliberate on what answer would position you in a better light
- Decide on best dishonest answer
- Deliver dishonest response
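The two decision paths above can be sketched in code. This is purely illustrative: the per-step timings are invented placeholders, not measured data, and the point is only that the dishonest path carries extra steps and therefore extra latency.

```python
# Illustrative only: hypothetical per-step "thinking time" (in seconds)
# for the two decision paths described above. The numbers are invented
# to show why the dishonest path tends to take longer; they are not
# measured values.

HONEST_STEPS = {
    "process_question": 0.3,
    "answer_comes_to_mind": 0.1,
    "consider_ramifications": 0.4,
    "deliver_response": 0.2,
}

DISHONEST_STEPS = {
    "process_question": 0.3,
    "answer_comes_to_mind": 0.1,
    "consider_ramifications": 0.4,
    "deliberate_better_light": 0.6,        # extra step: craft a flattering answer
    "decide_best_dishonest_answer": 0.4,   # extra step: commit to one
    "deliver_response": 0.2,
}

def total_latency(steps: dict) -> float:
    """Sum the per-step times for one decision path."""
    return round(sum(steps.values()), 2)

honest = total_latency(HONEST_STEPS)        # 1.0
dishonest = total_latency(DISHONEST_STEPS)  # 2.0
print(f"Honest path: {honest}s, dishonest path: {dishonest}s, "
      f"latency gap: {round(dishonest - honest, 2)}s")
```

Whatever the real numbers turn out to be for a given person, the structural point holds: the dishonest path has two steps the honest path does not, so its total can only be longer.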
The practice of managing an individual’s cognitive load comes into play when the investigator intentionally works to “impose greater cognitive load” to exaggerate the time differential (or response latency) between “true” statements and “false” ones. This is rooted in the theory that providing honest answers creates less cognitive strain. The primary source of the added mental stress is the respondent’s need to quickly modify their answer before they respond.
What does it mean to “impose cognitive load”?
There are a variety of techniques that can be used to increase the cognitive load an individual experiences. I will focus on two leading methods:
- Time Constraints
- Unexpected Questions
Let’s talk time constraints.
Research suggests that when individuals are placed under “reasonable” time constraints, they are more likely to provide honest responses to the presented questions. The reason: respondents don’t have the bandwidth to easily fabricate dishonest answers in the time allotted. One emerging area of study is a questioning process called “rapid response measurement” (RRM). RRM leans into the theory that applying time constraints to the questioning process creates an environment where “faking” behaviors are significantly harder to pull off. RRM uses a “dichotomous response” structure: the respondent is presented with one question at a time, each question having only two answers to choose from (multiple choice, not narrative). The questions are non-accusatory in nature and prompt for answers of “yes”/“no” or “agree”/“disagree”. RRM studies show significant promise in increasing the credibility of responses versus standard surveys and questionnaires.
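A rapid-response-style item can be sketched as follows. This is not the RRM instrument itself or any vendor’s implementation; the class names, prompt, and the two-second budget are invented for illustration. It just captures the structure described above: one question at a time, exactly two options, and a response-time budget that makes slow (potentially deliberated) answers stand out.

```python
# Hypothetical sketch of a rapid-response-style item: one question at a
# time, exactly two answer options, and a time budget. Names, prompt
# text, and thresholds are all invented for illustration.

from dataclasses import dataclass

@dataclass
class DichotomousItem:
    prompt: str
    options: tuple        # exactly two choices, e.g. ("agree", "disagree")
    time_limit_s: float   # budget applied to discourage fabrication

def score_response(item: DichotomousItem, answer: str, response_time_s: float) -> dict:
    """Record the answer and flag whether it beat the time budget."""
    if answer not in item.options:
        raise ValueError(f"answer must be one of {item.options}")
    return {
        "prompt": item.prompt,
        "answer": answer,
        "within_budget": response_time_s <= item.time_limit_s,
        "latency_s": response_time_s,
    }

item = DichotomousItem(
    prompt="I have always followed my employer's policies.",
    options=("agree", "disagree"),
    time_limit_s=2.0,
)
print(score_response(item, "agree", 1.4))  # within budget
print(score_response(item, "agree", 3.1))  # over budget: worth noting
```

Note that an over-budget response is only flagged, not judged: as discussed later in the article, a slow answer is a data point, not proof of anything.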
What about unexpected questions?
When individuals know in advance that they will be taking a questionnaire, sitting for an interview, or otherwise answering questions, they have the opportunity to prepare their responses. This isn’t always a bad thing, but when dealing with individuals who may be incentivized to be dishonest in their answers, the advance notice gives them the opportunity to anticipate questions and formulate responses that may allow them to save face.
If the respondent is anticipating a particular question or set of questions and has a prepared response, then when the question is presented, they will likely be able to respond without the additional time we discussed earlier. An effective strategy to counter these rehearsed responses is to present “unexpected questions”: questions the respondent was less likely to anticipate. Faced with one, the respondent is kicked back into the standard “calculus of a dishonest response” thought pattern and, as a result, takes more time to respond.
Why does this matter?
Whether you have an applicant taking a pre-hire psychological questionnaire, a current employee sitting for a round of questioning, or a group of employees taking an engagement survey, you ultimately want the responses to be as legitimate as possible. Utilizing technology that leverages findings from credible research positions you and your organization to receive more accurate responses, leading to more efficient and effective follow-up actions.
The analytics fueling the Verensics AI-powered questionnaires utilize the concepts discussed in this article (and many more). The AI element uses the time-boxed rapid response measurement (RRM) approach to create “living,” dynamic question sets for respondents. As a result, no two questionnaires are the same; each one is unique. The analytics direct which questions should be presented next based on how the previous questions were answered. The system is designed to introduce “unexpected questions” throughout the process, focusing on behavioral categories selected in advance by the hiring manager or investigator.
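The general idea of adaptive branching with occasional unexpected items can be sketched generically. To be clear, this is not the Verensics analytics; the question IDs, categories, and the 25% surprise rate are invented to illustrate the pattern of choosing the next item from the previous answer while mixing in items from a pre-selected behavioral category.

```python
# Generic sketch of adaptive question branching with occasional
# "unexpected" items. All IDs, categories, and rates are invented;
# this illustrates the pattern, not any real product's analytics.

import random

FOLLOW_UPS = {
    # (question_id, answer) -> scripted next question_id
    ("q1", "yes"): "q2a",
    ("q1", "no"): "q2b",
}

UNEXPECTED_POOL = {
    # hypothetical behavioral category chosen in advance
    "integrity": ["u1", "u2"],
}

def next_question(prev_id, prev_answer, category, surprise_rate=0.25, rng=random):
    """Usually return the scripted follow-up; sometimes inject an
    unexpected item from the chosen behavioral category."""
    if rng.random() < surprise_rate:
        return rng.choice(UNEXPECTED_POOL[category])
    return FOLLOW_UPS.get((prev_id, prev_answer), "end")

random.seed(0)
for _ in range(3):
    print(next_question("q1", "yes", "integrity"))
```

Because the next item depends on the previous answer plus a random injection of unexpected questions, two respondents (or the same respondent twice) will rarely see an identical sequence, which is the property the article attributes to dynamic question sets.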
There is no tool or technique that can “guarantee” honest responses; however, there are tools that will increase the likelihood of receiving credible answers. The goal is twofold: increase the chances of receiving honest responses, while also providing the hiring manager or investigator with objective data to review and interpret. Discrepancies in response times, along with inconsistencies in answers, don’t necessarily mean the individual is lying (there are countless other explanations). For that reason, our software will never provide conclusions or suggest follow-up action; it simply uses the noted inputs to inform the direction of subsequent questions, then presents the findings in an objective, easy-to-interpret format.
It’s up to your team to appropriately assess the findings, pursue any investigative leads, and strategize appropriate follow-up conversations that provide actionable context for the noted responses. If you would like to learn more about how your team can benefit from Verensics AI-powered questionnaires, please reach out to schedule a demo today!
Sources:
Christ, S. E., Van Essen, D. C., Watson, J. M., Brubaker, L. E., & McDermott, K. B. (2009). The contributions of prefrontal cortex and executive control to deception: Evidence from activation likelihood estimate meta-analyses. Cerebral Cortex, 19, 1557-1566. https://doi.org/10.1093/cercor/bhn189
DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129, 74-118. https://doi.org/10.1037/0033-2909.129.1.74
Hanway, P., Akehurst, L., Vernham, Z., & Hope, L. (2021). The effects of cognitive load during an investigative interviewing task on mock interviewers’ recall of information. Legal and Criminological Psychology, 26(1), 25-41.
Hartwig, M., Granhag, P. A., & Strömwall, L. (2007). Guilty and innocent suspects’ strategies during interrogations. Psychology, Crime, & Law, 13, 213-227. https://doi.org/10.1080/10683160600750264
Lancaster, G. L. J., Vrij, A., Hope, L., & Waller, B. (2013). Sorting the liars from the truth tellers: The benefits of asking unanticipated questions. Applied Cognitive Psychology, 27, 107-114.
Meade, A. W., Pappalardo, G., Braddy, P. W., & Fleenor, J. W. (2020). Rapid Response Measurement: Development of a Faking-Resistant Assessment Method for Personality. Organizational Research Methods, 23(1), 181-207. https://doi.org/10.1177/1094428118795295