Vocal Emotion Communication With Cochlear Implants

Sponsor
Father Flanagan's Boys' Home (Other)
Overall Status
Recruiting
CT.gov ID
NCT05486637
Collaborator
Arizona State University (Other), House Institute Foundation (Other), University of Nebraska (Other)

Study Details

Study Description

Brief Summary

Patients with hearing loss who use cochlear implants (CIs) show significant deficits and strong, unexplained intersubject variability in their perception and production of spoken emotions. This project will investigate the hypothesis that "cue-weighting", or how patients utilize the different acoustic cues to emotion, accounts for significant variance in emotional communication with CIs. The studies will focus on children with CIs, but parallel measures will be made in postlingually deaf adults with CIs, ensuring that the results benefit social communication by CI patients across the lifespan by informing the development of technological innovations and improved clinical protocols.

Condition or Disease Intervention/Treatment Phase
  • Behavioral: Perception of acoustic cues to emotion
  • Behavioral: Production of acoustic cues to emotion

Detailed Description

Emotion communication is a fundamental part of spoken language. For patients with hearing loss who use cochlear implants (CIs), detecting emotions in speech poses a significant challenge. Deficits in vocal emotion perception observed in both children and adults with CIs have been linked with poor self-reported quality of life. For young children, learning to identify others' emotions and express one's own emotions is a fundamental aspect of social development. Yet, little is known about the mechanisms and factors that shape vocal emotion communication by children with CIs. Primary cues to vocal emotions (voice characteristics such as pitch) are degraded in CI hearing, but secondary cues such as duration and intensity remain accessible to patients. It is proposed that individual CI users' auditory experience with their device plays an important role in how they utilize these different cues and map them to corresponding emotions.
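
A minimal sketch, assuming a Python toolchain, of how the three prosodic cues named above (voice pitch/fundamental frequency, intensity, and duration) might be extracted from a recorded utterance. The registry does not specify analysis software; librosa is used here purely for illustration, and "utterance.wav" is a hypothetical file name.

```python
# Illustrative only: extract coarse F0, intensity, and duration summaries
# from one recording. The library choice (librosa) and file name are
# assumptions, not part of the study protocol.
import numpy as np
import librosa

def extract_prosodic_cues(wav_path: str) -> dict:
    """Return summary statistics for F0, intensity, and duration."""
    y, sr = librosa.load(wav_path, sr=None)

    # Fundamental frequency (primary cue, degraded in CI hearing):
    # probabilistic YIN returns NaN for unvoiced frames.
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0_voiced = f0[~np.isnan(f0)]

    # Intensity (secondary cue): frame-wise RMS energy converted to dB.
    rms_db = librosa.amplitude_to_db(librosa.feature.rms(y=y)[0])

    # Duration (secondary cue): total utterance length in seconds.
    duration_s = librosa.get_duration(y=y, sr=sr)

    return {
        "mean_f0_hz": float(np.mean(f0_voiced)) if f0_voiced.size else float("nan"),
        "f0_range_hz": float(np.ptp(f0_voiced)) if f0_voiced.size else float("nan"),
        "mean_intensity_db": float(np.mean(rms_db)),
        "duration_s": float(duration_s),
    }

# Example (hypothetical file):
# print(extract_prosodic_cues("utterance.wav"))
```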

In previous studies, the Principal Investigator (PI) and the PI's team conducted foundational research that provided valuable information about key predictors of vocal emotion perception and production by pediatric CI recipients. The work proposed here will use novel methodologies to investigate how the specific acoustic cues used in emotion recognition by CI patients change with increasing device experience (Aim 1) and how the specific cues emphasized in vocal emotion productions by CI patients change with increasing device experience (Aim 2). Studies will include both a cross-sectional and a longitudinal approach.

The team's long-term goal is to improve emotional communication by CI users. The overall objectives of this application are to address critical gaps in knowledge by elucidating how cue-utilization (the reliance on different acoustic cues) for vocal emotion perception (Aim 1) and production (Aim 2) is shaped by CI experience. The knowledge gained from these studies will provide the evidence base for clinical protocols that support emotional communication by pediatric CI recipients, and will thus benefit quality of life for CI users.

The hypotheses to be tested are: [H1] that cue-weighting accounts significantly for inter-subject variations in vocal emotion identification by CI users; [H2] that optimization of cue-weighting patterns is the mechanism by which predictors such as the duration of device experience and age at implantation benefit vocal emotion identification; and [H3] that in children with CIs, the ability to utilize voice pitch cues to emotion, together with early auditory experience (e.g., age at implantation and/or presence of usable hearing at birth) account significantly for inter-subject variation in emotional productions. The two Specific Aims will test these hypotheses while taking into account other factors such as cognitive and socioeconomic status, theory of mind, and psychophysical sensitivity to individual prosodic cues.

This is a prospective design involving human subjects who are children and adults. The participants will perform two kinds of tasks: 1) listening tasks, in which participants listen to speech or nonspeech sounds and make a judgment about what they hear, interacting with a software program on a computer screen; and 2) speaking tasks, in which participants either read aloud a list of simple sentences in a happy way and in a sad way or converse with a member of the research team, retelling a picture-book story or describing an activity of their choosing. Participants' speech will be recorded, analyzed acoustically, and also used as stimuli for listening tasks. In addition to these tasks, participants will also be invited to complete tests of cognition, vocabulary, and theory of mind.

Participants will not be assigned to groups, and no control group will be designated, in any of the Aims. In parallel with cochlear implant patients, the team will test normally hearing listeners spanning a similar age range, to characterize how the intact auditory system processes emotional cues in speech in both perception and production. Effects of patient factors such as hearing history, experience with the cochlear implant, and cognition will be investigated using regression-based models. All patients will be invited to participate in all studies, with no assignment, until the sample-size target is met for each study. The order of tests will be randomized as appropriate to avoid order effects.
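
The registry names regression-based models but not their exact form; the following is a minimal sketch under assumed variable names (e.g., pitch_cue_weight, device_experience_yrs), illustrating how emotion-recognition sensitivity could be related to cue-weighting and hearing-history factors in the spirit of H1 and H2.

```python
# Illustrative sketch only: a linear regression relating emotion sensitivity
# to a cue-weighting estimate and hearing-history factors. All column names
# and values are hypothetical placeholders, not study data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "emotion_dprime":        [1.2, 0.8, 2.1, 1.6, 0.9, 1.8, 1.4, 2.3],  # outcome
    "pitch_cue_weight":      [0.2, 0.1, 0.6, 0.4, 0.2, 0.5, 0.3, 0.7],  # cue weighting
    "device_experience_yrs": [3.0, 1.5, 8.0, 5.0, 2.0, 6.5, 4.0, 9.0],
    "age_at_implantation":   [1.0, 3.5, 1.2, 2.0, 4.0, 1.5, 2.5, 1.0],
    "cognition_score":       [95, 88, 110, 102, 90, 105, 98, 112],
})

model = smf.ols(
    "emotion_dprime ~ pitch_cue_weight + device_experience_yrs"
    " + age_at_implantation + cognition_score",
    data=df,
).fit()
print(model.summary())
```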

Study Design

Study Type:
Observational
Anticipated Enrollment:
255 participants
Observational Model:
Case-Control
Time Perspective:
Prospective
Official Title:
Perception and Production of Emotional Prosody With Cochlear Implants
Actual Study Start Date:
Jul 1, 2022
Anticipated Primary Completion Date:
Jun 30, 2027
Anticipated Study Completion Date:
Jun 30, 2027

Arms and Interventions

Arm Intervention/Treatment
Children with Cochlear Implants

Participants will be native speakers of American English and include pediatric cochlear implant recipients with unilateral or bilateral devices aged 6-19 years. In Aim 1 participants will listen to emotional speech sounds and identify the talker's intended emotion. In Aim 2 participants will be invited to produce emotional speech by reading out scripted materials or in a more naturalistic conversational setting.

Behavioral: Perception of acoustic cues to emotion
Using novel methodologies and stimuli comprising both controlled laboratory recordings and materials culled from databases of ecologically valid speech emotions (e.g., from publicly available podcasts), the team aims to collect perceptual data to build a statistical model testing the hypothesis that experience-based changes in emotion identification by pediatric and adult CI recipients are mediated by improvements in cue-optimization.

Behavioral: Production of acoustic cues to emotion
The team will acoustically analyze vocal emotion productions by participants, quantify acoustic features of spoken emotions, and obtain behavioral measures of how well normally hearing listeners can identify those emotions.

Children with Normal Hearing

Participants will be native speakers of American English who have normal hearing and are aged 6-19 years. In Aim 1 participants will listen to emotional speech sounds and identify the talker's intended emotion. In Aim 2 participants will be invited to produce emotional speech by reading out scripted materials or in a more naturalistic conversational setting.

Behavioral: Perception of acoustic cues to emotion
Using novel methodologies and stimuli comprising both controlled laboratory recordings and materials culled from databases of ecologically valid speech emotions (e.g., from publicly available podcasts), the team aims to collect perceptual data to build a statistical model testing the hypothesis that experience-based changes in emotion identification by pediatric and adult CI recipients are mediated by improvements in cue-optimization.

Behavioral: Production of acoustic cues to emotion
The team will acoustically analyze vocal emotion productions by participants, quantify acoustic features of spoken emotions, and obtain behavioral measures of how well normally hearing listeners can identify those emotions.

Adults with cochlear implants

Participants will be native speakers of American English and include adult cochlear implant recipients with unilateral or bilateral devices. In Aim 1 participants will listen to emotional speech sounds and identify the talker's intended emotion. In Aim 2 participants will be invited to produce emotional speech by reading out scripted materials or in a more naturalistic conversational setting.

Behavioral: Perception of acoustic cues to emotion
Using novel methodologies and stimuli comprising both controlled laboratory recordings and materials culled from databases of ecologically valid speech emotions (e.g., from publicly available podcasts), the team aims to collect perceptual data to build a statistical model testing the hypothesis that experience-based changes in emotion identification by pediatric and adult CI recipients are mediated by improvements in cue-optimization.

Behavioral: Production of acoustic cues to emotion
The team will acoustically analyze vocal emotion productions by participants, quantify acoustic features of spoken emotions, and obtain behavioral measures of how well normally hearing listeners can identify those emotions.

Adults with normal hearing

Participants will be native speakers of American English who have normal hearing. In Aim 1 participants will listen to emotional speech sounds and identify the talker's intended emotion. In Aim 2 participants will be invited to produce emotional speech by reading out scripted materials or in a more naturalistic conversational setting.

Behavioral: Perception of acoustic cues to emotion
Using novel methodologies and stimuli comprising both controlled laboratory recordings and materials culled from databases of ecologically valid speech emotions (e.g., from publicly available podcasts), the team aims to collect perceptual data to build a statistical model testing the hypothesis that experience-based changes in emotion identification by pediatric and adult CI recipients are mediated by improvements in cue-optimization.

Behavioral: Production of acoustic cues to emotion
The team will acoustically analyze vocal emotion productions by participants, quantify acoustic features of spoken emotions, and obtain behavioral measures of how well normally hearing listeners can identify those emotions.

Outcome Measures

Primary Outcome Measures

  1. Vocal emotion recognition accuracy [Years 1-5]

    Percent correct scores in vocal emotion recognition

  2. Vocal emotion recognition sensitivity [Years 1-5]

    Sensitivity (d′) in vocal emotion recognition (a d′ computation sketch follows the primary outcome measures list)

  3. Voice pitch (fundamental frequency) of vocal productions [Years 1-5]

    Voice pitch measured from acoustic analyses of recorded speech

  4. Intensity of vocal productions [Years 1-5]

    Intensity measured from acoustic analyses of recorded speech

  5. Duration of vocal productions [Years 1-5]

    Duration (1/speaking rate) measured from acoustic analyses of recorded speech

  6. Identifiability of recorded speech emotions [Years 1-5]

    Accuracy and associated d′ values for listeners' ability to identify the emotions in participants' recorded speech
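
Outcome measures 2 and 6 report sensitivity (d′). A minimal sketch of one standard way to compute d′ from hit and false-alarm counts follows; the study's exact formulation (e.g., corrections or multi-alternative variants) is not stated in the registry, so this one-vs-rest version is an assumption.

```python
# Illustrative d-prime calculation: d' = z(hit rate) - z(false-alarm rate),
# with a log-linear correction so rates of 0 or 1 do not yield infinite z-scores.
from scipy.stats import norm

def dprime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: "happy" treated as the target emotion in a one-vs-rest scoring.
print(dprime(hits=18, misses=2, false_alarms=5, correct_rejections=15))  # ~1.8
```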

Secondary Outcome Measures

  1. Reaction times for vocal emotion identification [Years 1-5]

    Reaction times in emotion recognition task

Eligibility Criteria

Criteria

Ages Eligible for Study:
6 Years to 80 Years
Sexes Eligible for Study:
All
Inclusion Criteria:
  • Prelingually deaf children with cochlear implants

  • Postlingually deaf adults with cochlear implants

  • Normally hearing children

  • Normally hearing adults

Exclusion Criteria:
  • Non-native speakers of American English

  • Prelingually deaf individuals who receive cochlear implants after age 12

  • Adults unable to pass a basic cognitive screen

Contacts and Locations

Locations

Site                                     City   State     Country        Postal Code
1  Arizona State University              Tempe  Arizona   United States  85287
2  Boys Town National Research Hospital  Omaha  Nebraska  United States  68131

Sponsors and Collaborators

  • Father Flanagan's Boys' Home
  • Arizona State University
  • House Institute Foundation
  • University of Nebraska

Investigators

  • Principal Investigator: Monita Chatterjee, Ph.D., Father Flanagan's Boys' Home

Study Documents (Full-Text)

None provided.

More Information

Responsible Party:
Monita Chatterjee, Senior Scientist, Father Flanagan's Boys' Home
ClinicalTrials.gov Identifier:
NCT05486637
Other Study ID Numbers:
  • Prosody
First Posted:
Aug 3, 2022
Last Update Posted:
Aug 3, 2022
Last Verified:
Aug 1, 2022
Individual Participant Data (IPD) Sharing Statement:
Yes
Plan to Share IPD:
Yes
Studies a U.S. FDA-regulated Drug Product:
No
Studies a U.S. FDA-regulated Device Product:
No
Product Manufactured in and Exported from the U.S.:
Yes

Study Results

No Results Posted as of Aug 3, 2022