A Model for Identifying Deceptive Acts from Non-Verbal Cues in Visual Video Using Bidirectional Long Short-Term Memory (BiLSTM) with Convolutional Neural Network (CNN) Features

Authors

  • FAITHA AJIBADE
  • OLALEKAN AKINOLA

Keywords:

Nonverbal cues, Visual videos, CNN, BiLSTM

Abstract

Automatic identification of deception is crucial in many areas, including security, police investigations, court trials, political debates, relationships, and the workplace. Techniques for deception detection draw on verbal, nonverbal, and vocal cues. Existing works have depended heavily on combining different modalities of video, such as audio and text, to identify deceptive behaviours. These approaches have improved the overall accuracy of deception detection systems, but there are cases where videos have no accompanying audio or text. The aim of this research is to develop a model that can identify deceptive behaviours through non-verbal cues obtained from the visual modality of videos alone. The Real-Life Deception Dataset created by Pérez-Rosas et al. (2015), which contains labelled videos of deceptive and truthful court testimonies, was used for this research. Image frames were extracted from each video and pre-processed. A Convolutional Neural Network (CNN) was used to learn the behavioural gestures and cues exhibited in these image frames, and the learned features were passed to a Bidirectional Long Short-Term Memory (BiLSTM) network, which classifies each video as either deceptive or truthful. Training and testing were carried out with the BiLSTM, and the results were compared against existing works. The model performed well in identifying deception from visual video using CNN features learned from image frames, achieving an accuracy of 61% with a loss of 0.2 after three training epochs. Sourcing local data from surveillance cameras and security feeds can be further explored to validate this work.
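
The abstract describes a pipeline of frame extraction, per-frame CNN feature learning, and BiLSTM classification. Below is a minimal sketch of that pipeline in Python using OpenCV and TensorFlow/Keras. The paper's exact architecture, frame count, resolution, and hyperparameters are not stated in the abstract, so the CNN layout, NUM_FRAMES, FRAME_SIZE, and the helper extract_frames are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 30        # frames sampled per video (assumption)
FRAME_SIZE = (64, 64)  # resized frame resolution (assumption)

def extract_frames(video_path, num_frames=NUM_FRAMES, size=FRAME_SIZE):
    """Sample evenly spaced frames from a video, resize, and normalise them."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, size)
        frames.append(frame.astype("float32") / 255.0)  # scale pixels to [0, 1]
    cap.release()
    return np.stack(frames) if len(frames) == num_frames else None

def build_model():
    # Small CNN applied to every frame to learn per-frame visual cues
    cnn = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(*FRAME_SIZE, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
    ])
    model = models.Sequential([
        # Run the CNN over each frame in the sequence independently
        layers.TimeDistributed(cnn, input_shape=(NUM_FRAMES, *FRAME_SIZE, 3)),
        # BiLSTM reads the per-frame features in both temporal directions
        layers.Bidirectional(layers.LSTM(64)),
        # Binary output: deceptive vs. truthful
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Given arrays X of shape (videos, NUM_FRAMES, 64, 64, 3) and binary labels y, training would be a call such as model.fit(X, y, epochs=3), matching the three epochs reported in the abstract.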

Published

2022-05-07