Silent Talker Lie Detector

The Silent Talker Psychological Profiler is a camera-based system that observes and analyses non-verbal behaviour, in the form of micro-gestures, while a subject is being interviewed, for the claimed purpose of credibility assessment. It is grounded in the psychological theory that non-verbal behaviour is modified by a number of influences when a person is being deceptive. These include arousal (in particular stress), cognitive load, duping delight,[1] and behaviour control.[2]

History

Silent Talker was invented between 2000 and 2002 by a team at Manchester Metropolitan University comprising Zuhair Bandar, James O'Shea, David McLean and Janet Rothwell. Following its invention, the Silent Talker Adaptive Psychological Profiling architecture, and its specific instantiation as a lie detector, were patented internationally.[3] Since then, the inventors have been involved in raising investment funding, and the code has been ported to various programming languages and sped up from near real-time to real-time response. Current research includes adapting the technology to measure comprehension among participants giving informed consent to take part in clinical trials.[4] Silent Talker Limited was incorporated on 9 April 2015 to commercialize this technology worldwide.

Testing procedure

The subject of the interview is observed by one or more cameras (e.g. head-and-shoulders view, full-body view, thermal imaging camera), which feed the video stream to a conventional computer. As the interview takes place, Silent Talker's model of truthful versus deceptive behaviour is used to classify the answers to the questions in real time. This can take the form of a classification at the end of each answer, or of a continuous monitoring stream during the interview. No calibration is required to tune the system to individual subjects, and no training is required for the interviewer to interpret the Silent Talker classifications.
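The pipeline described above can be sketched as follows. This is purely an illustration of the general architecture (per-channel micro-gesture signals aggregated over an answer and combined into a truthful/deceptive classification); the channel names, weights, and threshold here are invented assumptions, and the actual Silent Talker model is proprietary.

```python
# Illustrative sketch only: channel names, weights, and the 0.5 threshold
# are assumptions, not the patented Silent Talker model.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Frame:
    # One video frame reduced to per-channel activations in [0, 1],
    # e.g. {"gaze_aversion": 0.9, "blink_rate": 0.8}.
    channels: Dict[str, float]

def score_answer(frames: List[Frame], weights: Dict[str, float]) -> float:
    """Average each channel over the answer, then take a weighted sum."""
    if not frames:
        return 0.0
    totals: Dict[str, float] = {}
    for frame in frames:
        for name, value in frame.channels.items():
            totals[name] = totals.get(name, 0.0) + value
    means = {name: total / len(frames) for name, total in totals.items()}
    return sum(weights.get(name, 0.0) * value for name, value in means.items())

def classify(frames: List[Frame], weights: Dict[str, float],
             threshold: float = 0.5) -> str:
    """Label an answer once its combined score crosses the threshold."""
    return "deceptive" if score_answer(frames, weights) >= threshold else "truthful"
```

Continuous monitoring, as opposed to per-answer classification, would amount to calling `classify` on a sliding window of recent frames rather than on the whole answer.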

Countermeasures

Because other lie detectors detect changes in stress, the most common countermeasure is to disrupt this signal, either by practising calming techniques while lying or by artificially raising stress while telling the truth. Because Silent Talker is based on a multi-factor model that includes cognitive load, duping delight and behaviour control, its inventors claim that it is robust to such countermeasures. They further believe that, because a large number of channels are used, attempts at behaviour control will generate more incongruities between channels, which can themselves be detected. Further experimental trials are required to investigate this hypothesis.
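The claimed countermeasure resistance can be sketched, under stated assumptions, as a disagreement measure across channels: a subject who suppresses behaviour on some channels but not others produces per-channel scores that diverge. The variance-based measure and the flag threshold below are illustrative inventions, not the published Silent Talker method.

```python
# Hypothetical incongruity measure: high variance across per-channel
# deception scores suggests some channels are being suppressed while
# others leak. The 0.05 threshold is an arbitrary illustrative choice.
from statistics import pvariance
from typing import Dict

def incongruity(channel_scores: Dict[str, float]) -> float:
    """Population variance of per-channel scores; higher = more disagreement."""
    return pvariance(channel_scores.values())

def flags_countermeasure(channel_scores: Dict[str, float],
                         threshold: float = 0.05) -> bool:
    """Flag an answer when the channels disagree more than the threshold."""
    return incongruity(channel_scores) > threshold
```

In this sketch, a subject who controls gaze but not blinking would score low on one channel and high on another, raising the variance; congruent behaviour across all channels yields a variance near zero.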

Challenges

Challenges with this technological approach include:

  • it relies on only one channel, the face, excluding body language, voice, verbal content, verbal style and psychophysiology
  • it draws conclusions from data without hypothesising; operators play no role in the decision-making
  • it relies on an intrusive camera placed within a few metres of the face
  • it does not analyse what the person is saying, so expressions cannot be correlated with the account.

References

  1. Ekman, P. (1988). "Lying and Nonverbal Behavior: Theoretical Issues and New Findings". Journal of Nonverbal Behavior. 12: 163–175. Archived from the original (PDF) on 2011-11-12. Retrieved 2011-10-13.
  2. Greene, John O. (1985). "Planning and Control of Behavior During Deception". Human Communication Research. 11 (3): 335–364. doi:10.1111/j.1468-2958.1985.tb00051.x.
  3. Bandar, Z.; McLean, D.; O'Shea, J.; Rothwell, J. "Analysis of the Behaviour of a Subject". Patent WO02087443. https://www.google.com/patents/WO2002087443A1?cl=en&dq=analysis+of+the+behaviour+of+a+subject&hl=en&sa=X&ei=PJ0tU8TpAa-A7QbqtYGoBg&ved=0CDQQ6AEwAA
  4. Crockett, Keeley A.; O'Shea, James D.; Buckingham, Fiona J.; Bandar, Zuhair A.; MacQueen, Kathleen M.; Chen, Mario; Simpson, Kelly (2013). "FATHOMing out interdisciplinary research transfer". IEEE Symposium on Computational Intelligence for Engineering Solutions (CIES).