SAME-Match-Treat

Wroclaw, Poland


Ideas and Projects
Updated on 18 October 2024

Seeing speech: Probing the cerebral mechanisms of Cued Speech perception

Annahita Sarré

PhD candidate at Paris Brain Institute (ICM)

Paris, France

About

Most alphabets visually encode phonemes and syllables, and similar visual codes have been developed to convey the sounds of speech to deaf people. Notably, Cued Speech (CS) specifies syllables through a combination of lip configuration, hand location relative to the face, and hand shape. Use of this communication system improves general language skills, and reading in particular, in a deaf community characterized by low literacy. Yet despite its proven effectiveness, and although learning this system likely involves brain transformations comparable to those associated with reading acquisition, the mechanisms of CS perception remain largely unknown.
The goal of this project is therefore to study its brain bases and the links between CS perception and reading, two coexisting visual codes for language that may both compete and support each other.

In two studies, each involving three groups of participants (deaf people proficient in CS, hearing people proficient in CS, and hearing people naïve to CS), we explore the perception of CS with MRI on the one hand, and with EEG and eye-tracking on the other. The main aim of the MRI study is to identify the brain areas that process and, more specifically, encode the various components of CS (lips, hand position, and hand shape). In parallel, the EEG study focuses on multivariate analysis of data recorded while syllables are presented either as CS videos (isolated, or within a word or sentence) or in written form. This will enable us to identify the temporal course of phonological neural encoding during CS perception and reading, as well as its possible amodality.
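To illustrate the kind of analysis this describes, the sketch below shows time-resolved multivariate decoding on simulated EEG-like epochs: a classifier is trained separately at each time point, so the moment decoding accuracy rises above chance traces the temporal course of the neural code. This is a minimal illustration using scikit-learn on synthetic data, not the project's actual pipeline; all dimensions and the signal onset are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 32, 50   # hypothetical epoch dimensions
y = rng.integers(0, 2, n_trials)             # e.g. two syllable classes

# Simulated epochs: a class-dependent signal appears on a few channels
# from time sample 25 onwards (the hypothetical encoding onset)
X = rng.standard_normal((n_trials, n_channels, n_times))
X[:, :5, 25:] += y[:, None, None]

# Decode the class separately at each time point: accuracy stays near
# chance before the signal onset and rises once the code is present
scores = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("pre-onset accuracy: %.2f, post-onset accuracy: %.2f"
      % (scores[:25].mean(), scores[25:].mean()))
```

The same logic extends across modalities: training at one time point (or on one modality, e.g. CS video) and testing on another probes whether the phonological code generalizes, which is how amodality questions are typically addressed.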

Type

  • SAME-NeuroID Retreat
  • Ideas sharing
