Speech Perception

Speech perception research in the lab centers on questions of language sound change and synchronic phonological structure. This work is supported by a two-room suite in which participants are tested in small sound-isolated booths. In addition, we have an eye-tracking lab for the psycholinguistic study of on-line word and sentence processing.

Visual phonetic aspects of sound change

Completed NSF grant, Keith Johnson

Using movies of speech, in which we manipulate the phonetic information in both the audio and video tracks, we are studying the role of visual information in perceptual confusions (and non-confusions) that may be involved in sound change. In this project we have developed the capacity for high-quality audio-video recording of speech, for taking phonetic measurements from movies, and for conducting audio-visual speech perception experiments.

Cross-linguistic studies of spoken language processing

Completed NIH grant, Keith Johnson

The long-term objective of this research project is to understand human spoken language processing (particularly speech perception and auditory word recognition) in linguistic context. Speech signals are unique in human experience because they are highly familiar, and have great practical significance in daily life. Therefore, it is not too surprising to find that people develop optimized processing strategies tuned specifically for speech. In this work we study how this tuning process may be sensitive to linguistic structure. Cross-linguistic spoken language research is important because without it we are in danger of concluding that the phenomena found in one language (or even dialect) are somehow normative for speakers of other languages. Such a narrow understanding of "normal" spoken language processing is likely to have a negative impact on clinical speech and hearing practice in a pluralistic society.

Articulatory Phonetics

Speech production studies in the lab focus on the human system of articulatory control in adults and children, in English and other languages, and in ordinary speech as well as in response to experimental manipulations. Facilities for speech production research are housed in a 500-square-foot research lab with a double-walled sound booth, an ultrasound machine, an aerodynamic rig, and equipment for electro-palatography, electro-glottography, and static palatography.

Production of gestural timing in complex segments

Speech often requires engagement of multiple articulations by multiple speech organs. An important task for language users (and acquirers) is coordinating these articulations, both within and between segments. This task is made more challenging by factors such as context (e.g. word stress patterns) and speech speed. This research studies variation in articulatory timing using ultrasound video and acoustic analysis in order to better understand what aspects of variation can be attributed to linguistic vs. extra-linguistic factors. Studying the interaction between linguistic and extra-linguistic variation may lead us to a better understanding of sound change.

Articulatory Training

Asking people to change their articulations, in various experimental paradigms, may be a powerful way to study the limits of speech motor control in adults, and the cross-linguistic tuning of speech motor control regimes. We use auditory target training, articulatory feedback training, and altered auditory feedback to explore these questions both from the perspective of theoretical motor control, and with an eye toward improving acquisition of phonetics in second language teaching.

Sociophonetics

The sociolinguistics of phonetic identity (particularly in regional dialect and in gender and sexual identity) and of phonetic accommodation is a focus of interest and energy in the PhonLab. Research in sociophonetics is made possible by an audio-video recording studio, by the conversational language recording space SPARCL, and by excellent web application programming support from Ronald Sprouse.

Voices of ...

The "Voices of Berkeley" project collected phonetic data from over 700 students at Berkeley using students' own computer and a web-interface. We have recently begun to publish results of this project and make plans for a larger, more representative sample.

SPARCL Facilities

SPARCL (the Sociophonetic Area for Recording Conversational Language) is a resource for conducting research in sociolinguistics, sociophonetics, discourse analysis, and gesture analysis. It is designed for eliciting naturalistic speech data in a comfortable environment. The lab is equipped with video cameras and high-quality audio equipment in a living-room-style décor. Financial support for SPARCL was provided by the Department of Linguistics, the Department of Spanish and Portuguese (thanks Justin Davidson), and by a UC Berkeley "Student Technology Fund" award won by lab graduate students.

Phonetic Neuroscience

Collaborators for Phonetic Neuroscience research are Edward Chang (UCSF Department of Neurosurgery), John Houde and Sri Nagarajan (UCSF Speech Neuroscience Lab), Bob Knight (Cognitive Neuroscience Research Lab, UCB Psychology), and Marc Ettlinger (Martinez Veterans Affairs, Speech and Hearing Research Lab).

Publications from these collaborations have appeared in Brain and Language, Nature Neuroscience, Science, Nature, PLoS One, and eLife.

Funding

The lab has received funding from the National Science Foundation, the National Institutes of Health, the Peder-Sather Center, the France-Berkeley Fund, and the Holbrook Fund.

Graduate student research has been supported by the National Science Foundation, the UC Berkeley Graduate Division, the UC Berkeley Council of Graduate Students, the Abigail Hodgen Fund for Women in the Social Sciences, and of course, the Department of Linguistics.