
Lynne E. Bernstein, Faculty Member


Some of my Contributions to Science

1. Measuring individual differences in deaf and normal-hearing adults: I have enjoyed investigating individual differences with relatively large sample sizes and statistical modeling. I used analyses of a large sample to oppose a prevalent view that lipreading is more accurate in normal-hearing than in deaf individuals owing to the former group's normal auditory experience of language. That view perplexed me in light of my experience communicating with prelingually deaf adults at Gallaudet University. I carried out normative studies (a) with ~200 students (Gallaudet University and the University of Maryland Baltimore County), with follow-up research on 332 deaf students in California (b) (National Center on Deafness at California State University, Northridge). These studies showed large positive effect sizes on lipreading associated with deafness and reliance on spoken language. More recently, I used generalized linear mixed models applied to a database of 461 normal-hearing adult lipreaders to show that males and females differ significantly in the perceptual and linguistic factors that predict their lipreading accuracy. The study used the computational approach I developed for phoneme-to-phoneme sequence alignment of open-set sentence lipreading responses (c); a minimal sketch of this kind of alignment appears after the references below. A recent paper of mine reports on 250 normal-hearing adults who responded to a clinical speech-recognition-in-noise test (d). Using the sequence-alignment technology and mixed models, I showed that their open-set recognition errors are more informative about their speech recognition in noise than are keywords-correct scores. These studies of speech recognition also rested on my PhD training as a psycholinguist and experimental psychologist. My background naturally combines expertise in perception, language, and computational modeling.

a. Bernstein LE, Demorest ME, Tucker PE. Speech perception without hearing. Percept Psychophys. 2000 Feb;62(2):233-52. PubMed PMID: 10723205.
b. Auer ET Jr, Bernstein LE. Enhanced visual speech perception in individuals with early-onset hearing impairment. J Speech Lang Hear Res. 2007 Oct;50(5):1157-65. PubMed PMID: 17905902.
c. Bernstein LE, Demorest ME, Eberhardt SP. A computational approach to analyzing sentential speech perception: phoneme-to-phoneme stimulus-response alignment. J Acoust Soc Am. 1994 Jun;95(6):3617-22. PubMed PMID: 8046151.
d. Bernstein LE, Eberhardt SP, Auer ET Jr. Errors on an open set speech-in-babble sentence recognition test reveal individual differences in acoustic phonetic perception and babble misallocations. Ear Hear. 2021;42:673-90. doi: 10.1097/AUD.0000000000001020.
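The phoneme-to-phoneme stimulus-response alignment in (c) can be illustrated with a standard dynamic-programming, minimum-edit-distance alignment. The sketch below is only a minimal illustration under assumed unit substitution and insertion/deletion costs and with invented phoneme labels; it is not the published implementation, which may use a different cost structure.

```python
# Minimal sketch of phoneme-to-phoneme stimulus-response alignment via
# dynamic programming. Phoneme labels, costs, and the example words are
# invented for illustration only.

def align_phonemes(stimulus, response, sub_cost=1, indel_cost=1):
    """Align two phoneme sequences; return aligned pairs with '-' for gaps."""
    n, m = len(stimulus), len(response)
    # dp[i][j] = minimum cost to align stimulus[:i] with response[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * indel_cost
    for j in range(1, m + 1):
        dp[0][j] = j * indel_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if stimulus[i - 1] == response[j - 1] else sub_cost
            dp[i][j] = min(dp[i - 1][j - 1] + sub,
                           dp[i - 1][j] + indel_cost,
                           dp[i][j - 1] + indel_cost)
    # Trace back to recover the aligned phoneme pairs.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (
                0 if stimulus[i - 1] == response[j - 1] else sub_cost):
            pairs.append((stimulus[i - 1], response[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + indel_cost:
            pairs.append((stimulus[i - 1], "-"))   # phoneme deleted in response
            i -= 1
        else:
            pairs.append(("-", response[j - 1]))   # phoneme inserted in response
            j -= 1
    return list(reversed(pairs))

# Example: stimulus "watch" /w aa ch/ lipread as "wash" /w aa sh/
aligned = align_phonemes(["w", "aa", "ch"], ["w", "aa", "sh"])
print(aligned)                                     # [('w','w'), ('aa','aa'), ('ch','sh')]
correct = sum(s == r for s, r in aligned)
print(f"phonemes correct: {correct}/{len(aligned)}")
```

Once responses are aligned this way, phoneme-level substitution, insertion, and deletion patterns can be tabulated across sentences and entered into mixed-model analyses, rather than relying on keywords-correct scores alone.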
2. Speech perception training: I have been studying speech perception training since the 1980s. Initially, I focused on vibrotactile signals in combination with lipreading, which at the time was seen as a potential alternative to cochlear implants. We implemented one of the first digital video speech databases for speech training. Our lab published unique studies, including studies of visual perception of prosody (a) and of sentence lipreading. Our work eventually led to projects on multisensory training for unisensory perceptual learning, including SeeHear's current Phase II project (R44 DC015418) on web-based visual-only (VO) and audiovisual (AV) training. Among our many findings, we showed that prelingually deafened adults with late-acquired cochlear implants are impeded in learning auditory speech when visual speech is available (c), and that the complementary effect arises in normal-hearing adults (d): their visual speech learning is impeded by auditory stimuli. We also showed that learning of vocoded acoustic speech is enhanced in normal-hearing adults trained with audiovisual speech (b). Recently, we published on the neural pathways responsible for vibrotactile category learning. A paper in revision reports significant effects of lipreading training that extend to audiovisual speech stimuli.

a. Bernstein LE, Eberhardt SP, Demorest ME. Single-channel vibrotactile supplements to visual perception of intonation and stress. J Acoust Soc Am. 1989 Jan;85(1):397-405. PubMed PMID: 2522107.
b. Bernstein LE, Auer ET Jr, Eberhardt SP, Jiang J. Auditory perceptual learning for speech perception can be enhanced by audiovisual training. Front Neurosci. 2013;7:34. PubMed PMID: 23515520; PubMed Central PMCID: PMC3600826.
c. Bernstein LE, Eberhardt SP, Auer ET Jr. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults. Front Psychol. 2014;5:934. PubMed PMID: 25206344; PubMed Central PMCID: PMC4144091.
d. Eberhardt SP, Auer ET Jr, Bernstein LE. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training. Front Hum Neurosci. 2014;8:829. PubMed PMID: 25400566; PubMed Central PMCID: PMC4215828.

3. Perceptual learning in the context of life-long perceptual experience: I have a longstanding interest in how life-long experience affects perception and perceptual learning. While studying vibrotactile speech learning in deaf adults (a), I noted a strong association between vibrotactile learning and hearing-aid experience. I hypothesized that vibrotactile stimulation from the hearing aid was responsible for the learning effect. Later, when fMRI became available, we showed that auditory cortex was activated to a significantly greater extent in prelingually deafened hearing-aid users than in normal-hearing adults (c), an effect that has since been shown in animal research. In a study of audiovisual speech perception in prelingually deafened adults with late-acquired cochlear implants (b), we reported super-additive effects of combining auditory and visual speech. More recently, we have been studying how perceptual stimulus selection biases learning to lipread in normal-hearing adults. We are currently studying attention to the face versus to speech during lipreading using eye tracking.

a. Bernstein LE, Tucker PE, Auer ET Jr. Potential perceptual bases for successful use of a vibrotactile speech perception aid. Scand J Psychol. 1998 Sep;39(3):181-6. PubMed PMID: 9800534.
b. Moody-Antonio S, Takayanagi S, Masuda A, Auer ET Jr, Fisher L, Bernstein LE. Improved speech perception in adult congenitally deafened cochlear implant recipients. Otol Neurotol. 2005 Jul;26(4):649-54. PubMed PMID: 16015162.
c. Auer ET Jr, Bernstein LE, Sungkarat W, Singh M. Vibrotactile activation of the auditory cortices in deaf versus hearing adults. Neuroreport. 2007 May 7;18(7):645-8. PubMed PMID: 17426591; PubMed Central PMCID: PMC1934619.
4. Neural mechanisms of multisensory speech perception: I have led fundamental research into multisensory speech processing using functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). My lab showed that lipreading relies on modality-specific visual cortical representations. Using fMRI, we showed selectivity in a high-level visual area we called the temporal visual speech area (TVSA), in cortex posterior to the multisensory superior temporal sulcus (a). The TVSA responds preferentially to speech versus non-speech face gestures. With EEG, we demonstrated the visual mismatch negativity (vMMN) with spoken phonemes selected to be perceptually near versus far (b). We showed that the vMMN arose for perceptually far changes in an area consistent with the TVSA, but that both far and near changes produced the vMMN in a homologous right-hemisphere region. This result is consistent with a left-lateralized speech pathway and a right-lateralized face-processing pathway. Recent results from other researchers are confirming this organization. With E. Liebenthal, I proposed a model predicting parallel modality-specific ventral and dorsal pathways for auditory and visual speech (c). This work, along with our vibrotactile speech research, has led to studies of vibrotactile category (d) and speech learning. I am currently collaborating with statistics colleagues on related EEG data from this line of work to develop a functional linear modeling approach for the EEG time series of cortical oscillatory power; a minimal illustrative sketch appears below, after the reference list.

a. Bernstein LE, Jiang J, Pantazis D, Lu ZL, Joshi A. Visual phonetic processing localized using speech and nonspeech face gestures in video and point-light displays. Hum Brain Mapp. 2011 Oct;32(10):1660-76. PubMed PMID: 20853377; PubMed Central PMCID: PMC3120928.
b. Files BT, Auer ET Jr, Bernstein LE. The visual mismatch negativity elicited with visual speech stimuli. Front Hum Neurosci. 2013;7:371. PubMed PMID: 23882205; PubMed Central PMCID: PMC3712324.
c. Bernstein LE, Liebenthal E. Neural pathways for visual speech perception. Front Neurosci. 2014;8:386. PubMed PMID: 25520611; PubMed Central PMCID: PMC4248808.
d. Malone PS, Eberhardt SP, Wimmer K, Sprouse C, Klein R, Glomb K, … Bernstein LE, Riesenhuber M. Neural mechanisms of vibrotactile categorization. Hum Brain Mapp. 2019;40(10):3078-90. PubMed PMID: 30920706.

Complete List of Published Work in MyBibliography: https://www.ncbi.nlm.nih.gov/sites/myncbi/lynne.bernstein.1/bibliography/41150873/public/?sort=date&direction=descending
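To make "functional linear modeling of an EEG time series of oscillatory power" concrete, the sketch below shows a minimal function-on-scalar regression: each trial's power time course is projected onto a smooth basis, and the basis coefficients are regressed on a scalar predictor. The simulated data, the Fourier basis, and the single binary condition predictor are assumptions for illustration only; the modeling under development for the EEG work may use different bases, smoothing penalties, and inference procedures.

```python
# Minimal sketch of function-on-scalar regression for trial-level time series
# of oscillatory power. All data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_time = 60, 200
t = np.linspace(0, 1, n_time)

# Simulated power time series: a condition effect concentrated mid-epoch.
condition = rng.integers(0, 2, n_trials)            # binary scalar predictor
effect = np.exp(-((t - 0.5) ** 2) / 0.01)           # true functional effect
power = (condition[:, None] * effect
         + rng.normal(scale=0.5, size=(n_trials, n_time)))

# Fourier basis for smooth functional coefficients beta0(t) and beta1(t).
K = 7
basis = np.column_stack(
    [np.ones_like(t)]
    + [f(2 * np.pi * (k + 1) * t) for k in range(K) for f in (np.sin, np.cos)]
)                                                    # shape (n_time, 1 + 2K)

# Project each trial's curve onto the basis, then regress the coefficients on
# the scalar predictor: y_i(t) ~ beta0(t) + condition_i * beta1(t).
curve_coefs, *_ = np.linalg.lstsq(basis, power.T, rcond=None)   # (n_basis, n_trials)
X = np.column_stack([np.ones(n_trials), condition])             # design matrix
B, *_ = np.linalg.lstsq(X, curve_coefs.T, rcond=None)           # (2, n_basis)
beta1_t = basis @ B[1]                                          # estimated effect curve

print("peak of estimated condition effect near t =", t[np.argmax(beta1_t)])
```

The estimated coefficient curve beta1(t) describes how the condition effect on oscillatory power unfolds over the epoch, which is the kind of time-resolved inference a functional linear model provides over a single summary score.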

