Vocalic Markers of Deception and Cognitive Dissonance for Automated Emotion Detection Systems

Persistent Link:
http://hdl.handle.net/10150/202930
Title:
Vocalic Markers of Deception and Cognitive Dissonance for Automated Emotion Detection Systems
Author:
Elkins, Aaron Chaim
Issue Date:
2011
Publisher:
The University of Arizona.
Rights:
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract:
This dissertation investigates vocal behavior, measured using standard acoustic and commercial vocal analysis software, as it occurs naturally while lying, experiencing cognitive dissonance, or receiving a security interview conducted by an Embodied Conversational Agent (ECA).

In study one, vocal analysis software used for credibility assessment was investigated experimentally. Using a repeated measures design, 96 participants lied and told the truth during a multiple-question interview. The vocal analysis software's built-in deception classifier performed at the chance level. When the vocal measurements were analyzed independent of the software's interface, the variables FMain (Stress), AVJ (Cognitive Effort), and SOS (Fear) significantly differentiated between truth and deception. Using these measurements, logistic regression and machine learning algorithms predicted deception with accuracy up to 62.8%. Using standard acoustic measures, vocal pitch and voice quality were predicted by deception and stress.

In study two, deceptive vocal and linguistic behaviors were investigated using a direct manipulation of arousal, affect, and cognitive difficulty by inducing cognitive dissonance. Participants (N=52) made counter-attitudinal arguments aloud that were subjected to vocal and linguistic analysis. Participants experiencing cognitive dissonance spoke with higher vocal pitch, longer response latencies, and greater linguistic Quantity and Certainty, but lower Specificity. Linguistic Specificity mediated the relationship between dissonance and attitude change. Commercial vocal analysis software revealed that participants induced into cognitive dissonance exhibited higher initial levels of Say or Stop (SOS), a measurement of fear.

Study three investigated the use of the voice to predict trust. Participants (N=88) received a screening interview from an Embodied Conversational Agent (ECA) and reported their perceptions of the ECA. A growth model was developed that predicted trust during the interaction using the voice, time, and demographics.

In study four, border guard participants were randomly assigned to either the Bomb Maker (N=16) or Control (N=13) condition. Participants either did or did not assemble a realistic, but non-operational, improvised explosive device (IED) to smuggle past an ECA security interviewer. Participants in the Bomb Maker condition had 25.34% more variation in their vocal pitch than control condition participants.

This research provides support that the voice is a potentially reliable and valid measurement of emotion and deception, suitable for integration into future technologies such as automated security screenings and advanced human-computer interactions.
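The classification approach described in study one — logistic regression over vocal measurements such as FMain, AVJ, and SOS — can be sketched as follows. This is an illustrative reconstruction only, not the dissertation's actual analysis code: the feature values and labels below are synthetic stand-ins, and the stochastic-gradient-descent fit is one generic way to train such a classifier.

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a logistic regression by stochastic gradient descent on log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(deceptive)
            err = p - yi                      # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Classify a sample as deceptive (1) or truthful (0) at threshold 0.5."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Synthetic stand-ins: each row mimics three vocal scores (e.g. FMain, AVJ, SOS);
# label 1 = deceptive response, label 0 = truthful response.
random.seed(0)
truthful = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(50)]
deceptive = [[random.gauss(1.0, 1.0) for _ in range(3)] for _ in range(50)]
X = truthful + deceptive
y = [0] * 50 + [1] * 50

w, b = train_logistic(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.3f}")
```

In practice such a model would be fit on held-out interview data and evaluated with cross-validation; the 62.8% figure reported above refers to the dissertation's own evaluation, not to this sketch.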
Type:
text; Electronic Dissertation
Keywords:
Deception; Emotion; Security; Vocalics; Management Information Systems; Affective Computing; Cognitive Dissonance
Degree Name:
Ph.D.
Degree Level:
doctoral
Degree Program:
Graduate College; Management Information Systems
Degree Grantor:
University of Arizona
Advisor:
Nunamaker, Jay F.; Burgoon, Judee K.

Full metadata record

DC Field | Value | Language
dc.language.iso | en | en_US
dc.title | Vocalic Markers of Deception and Cognitive Dissonance for Automated Emotion Detection Systems | en_US
dc.creator | Elkins, Aaron Chaim | en_US
dc.contributor.author | Elkins, Aaron Chaim | en_US
dc.date.issued | 2011 | -
dc.publisher | The University of Arizona. | en_US
dc.rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author. | en_US
dc.description.abstract | This dissertation investigates vocal behavior, measured using standard acoustic and commercial vocal analysis software, as it occurs naturally while lying, experiencing cognitive dissonance, or receiving a security interview conducted by an Embodied Conversational Agent (ECA). In study one, vocal analysis software used for credibility assessment was investigated experimentally. Using a repeated measures design, 96 participants lied and told the truth during a multiple-question interview. The vocal analysis software's built-in deception classifier performed at the chance level. When the vocal measurements were analyzed independent of the software's interface, the variables FMain (Stress), AVJ (Cognitive Effort), and SOS (Fear) significantly differentiated between truth and deception. Using these measurements, logistic regression and machine learning algorithms predicted deception with accuracy up to 62.8%. Using standard acoustic measures, vocal pitch and voice quality were predicted by deception and stress. In study two, deceptive vocal and linguistic behaviors were investigated using a direct manipulation of arousal, affect, and cognitive difficulty by inducing cognitive dissonance. Participants (N=52) made counter-attitudinal arguments aloud that were subjected to vocal and linguistic analysis. Participants experiencing cognitive dissonance spoke with higher vocal pitch, longer response latencies, and greater linguistic Quantity and Certainty, but lower Specificity. Linguistic Specificity mediated the relationship between dissonance and attitude change. Commercial vocal analysis software revealed that participants induced into cognitive dissonance exhibited higher initial levels of Say or Stop (SOS), a measurement of fear. Study three investigated the use of the voice to predict trust. Participants (N=88) received a screening interview from an Embodied Conversational Agent (ECA) and reported their perceptions of the ECA. A growth model was developed that predicted trust during the interaction using the voice, time, and demographics. In study four, border guard participants were randomly assigned to either the Bomb Maker (N=16) or Control (N=13) condition. Participants either did or did not assemble a realistic, but non-operational, improvised explosive device (IED) to smuggle past an ECA security interviewer. Participants in the Bomb Maker condition had 25.34% more variation in their vocal pitch than control condition participants. This research provides support that the voice is a potentially reliable and valid measurement of emotion and deception, suitable for integration into future technologies such as automated security screenings and advanced human-computer interactions. | en_US
dc.type | text | en_US
dc.type | Electronic Dissertation | en_US
dc.subject | Deception | en_US
dc.subject | Emotion | en_US
dc.subject | Security | en_US
dc.subject | Vocalics | en_US
dc.subject | Management Information Systems | en_US
dc.subject | Affective Computing | en_US
dc.subject | Cognitive Dissonance | en_US
thesis.degree.name | Ph.D. | en_US
thesis.degree.level | doctoral | en_US
thesis.degree.discipline | Graduate College | en_US
thesis.degree.discipline | Management Information Systems | en_US
thesis.degree.grantor | University of Arizona | en_US
dc.contributor.advisor | Nunamaker, Jay F. | en_US
dc.contributor.advisor | Burgoon, Judee K. | en_US
dc.contributor.committeemember | Nunamaker, Jay F. | en_US
dc.contributor.committeemember | Burgoon, Judee K. | en_US
dc.contributor.committeemember | Golob, Elyse | en_US
dc.contributor.committeemember | Goes, Paulo B. | en_US
All Items in UA Campus Repository are protected by copyright, with all rights reserved, unless otherwise indicated.