Cross-linguistic studies on spoken language processing. Completed NIH Grant (Keith Johnson)
The long-term objective of this research project
is to understand human spoken language processing (particularly speech
perception and auditory word recognition) in linguistic
context. Speech signals are unique in human experience because they
are highly familiar and have great practical significance in daily
life. It is therefore not surprising to find that people develop
optimized processing strategies tuned specifically for speech. In this
work we study how this tuning process may be sensitive to linguistic
structure. Cross-linguistic spoken language research is important
because without it we are in danger of concluding that the phenomena
found in one language (or even dialect) are somehow normative for
speakers of other languages. Such a narrow understanding of 'normal'
spoken language processing is likely to have a negative impact on
clinical speech and hearing practice in a pluralistic society.
(John Ohala, Maria Josep Sole, & Ronald Sprouse)
We are studying patterns of nasal coarticulation and phonation types
using recordings of nasal and oral airflow and pressure. We have
recently (2007) upgraded our equipment thanks to a grant from the
Holbrook Experimental Phonetics Fund. Tilsen and McGuire plan to use
the equipment to study the phonetic basis of syntactic patterns.
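As a rough illustration of how paired airflow channels can be
quantified, the following Python sketch computes a frame-wise
nasalance-style ratio of nasal to total airflow. The function name,
the synthetic example data, and the particular ratio are assumptions
made for illustration, not the specific measures used in this
project.

    import numpy as np

    def nasalance_ratio(nasal_flow, oral_flow, eps=1e-12):
        """Frame-wise ratio of nasal airflow to total (nasal + oral)
        airflow.

        nasal_flow, oral_flow: synchronized 1-D arrays of airflow
        samples in matching units. Returns values in [0, 1], where 1
        means the airflow is entirely nasal.
        """
        nasal = np.abs(nasal_flow)
        oral = np.abs(oral_flow)
        return nasal / (nasal + oral + eps)  # eps avoids division by zero

    # Synthetic example: a vowel that becomes progressively
    # nasalized, as before a nasal coda consonant.
    t = np.linspace(0, 1, 1000)
    oral = np.ones_like(t)             # steady oral airflow
    nasal = np.clip(t - 0.5, 0, None)  # nasal airflow ramps up late
    print(nasalance_ratio(nasal, oral)[::250])

Applied to real recordings, the time course of such a ratio shows
where in the vowel nasal coarticulation begins and how quickly it
ramps up.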
Exemplar models of phonology
(Keith Johnson, Marc Ettlinger)
Ettlinger is extending Johnson's earlier work on exemplar-based speech perception to models of phonological change using exemplar-based language-learning agents. The aim of this project is to test, through model simulations, the degree to which patterns of phonological change can be attributed to simple "first principles" of spoken language perception and production.
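As a rough sketch of the modeling approach, the following Python
program implements a toy exemplar-based simulation in which two
agents exchange tokens along a single phonetic dimension. The
dimension (an F1-like value in Hz), the bias and noise parameters,
and the memory size are hypothetical choices for this illustration,
not the parameters of Ettlinger's actual model.

    import random

    class ExemplarAgent:
        """Toy exemplar agent: a category is just a cloud of
        remembered tokens along one phonetic dimension."""

        def __init__(self, seed_tokens, memory=200):
            self.exemplars = list(seed_tokens)
            self.memory = memory

        def produce(self, bias=2.0, noise=10.0):
            # Production: pick a stored exemplar, then add a small
            # articulatory bias plus random phonetic noise.
            target = random.choice(self.exemplars)
            return target + bias + random.gauss(0, noise)

        def perceive(self, token):
            # Perception: store the incoming token as a new
            # exemplar, forgetting the oldest when memory is full.
            self.exemplars.append(token)
            if len(self.exemplars) > self.memory:
                self.exemplars.pop(0)

    def mean(xs):
        return sum(xs) / len(xs)

    # Two agents exchange tokens; the small production bias
    # accumulates in both exemplar clouds, so the category drifts,
    # a minimal analogue of sound change.
    random.seed(1)
    a = ExemplarAgent([500.0] * 50)
    b = ExemplarAgent([500.0] * 50)
    for step in range(2000):
        b.perceive(a.produce())
        a.perceive(b.produce())
    print(round(mean(a.exemplars)), round(mean(b.exemplars)))

Even this stripped-down setup shows the point of such simulations:
gradual, community-wide change can emerge from nothing more than
biased production and exemplar storage.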
(Keith Johnson, Sam Tilsen)
We use Fourier analysis of the slow-moving speech amplitude envelope
to analyze speech rhythm. Unlike other methods, this approach does
not require vowel and consonant intervals to be hand-labelled as a
first analysis step, so we can quickly characterize rhythm at
multiple time scales in large speech corpora.
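A minimal Python sketch of this kind of analysis appears below. The
envelope extraction method (rectification followed by a 10 Hz
low-pass filter) and the synthetic test signal are illustrative
assumptions, not the exact procedure used in the lab.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def rhythm_spectrum(signal, fs, env_cutoff=10.0):
        """Spectrum of the slow amplitude envelope of a speech signal.

        signal: 1-D array of audio samples; fs: sampling rate in Hz.
        Peaks in roughly the 1-10 Hz range reflect syllable- and
        stress-level rhythm.
        """
        # 1. Amplitude envelope: rectify, then low-pass filter.
        b, a = butter(4, env_cutoff / (fs / 2), btype="low")
        envelope = filtfilt(b, a, np.abs(signal))
        # 2. Fourier analysis of the envelope; the mean is removed
        #    so the DC component does not dominate the spectrum.
        spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
        return freqs, spectrum

    # Synthetic check: noise amplitude-modulated at 4 Hz, roughly a
    # syllable-like rate.
    fs = 16000
    t = np.arange(0, 2.0, 1 / fs)
    signal = (1 + np.sin(2 * np.pi * 4 * t)) * np.random.randn(len(t))
    freqs, spectrum = rhythm_spectrum(signal, fs)
    print(f"peak at {freqs[spectrum.argmax()]:.1f} Hz")  # about 4 Hz

Because the whole pipeline is automatic, it can be run unchanged
over hours of corpus audio.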
(Keith Johnson, Christian DiCanio)
The aim of this project is to test the hypothesis that the phonetic
basis of sound change is visual as well as acoustic. We are looking at
how place of articulation in nasal coda consonants may be sensitive to
visual phonetic and acoustic phonetic properties of nasal consonants.
Student research support
Students have successfully sought funding for their
research from the National Science Foundation, the National Institutes
of Health, the UC Berkeley Graduate Division, the UC Berkeley Council
of Graduate Students, the Abigail Hodgen Fund for Women in the Social
Sciences, and, of course, the Department of Linguistics.
Phonetics research at UC Berkeley is generously supported by an
endowment, the Holbrook fund, which makes it possible to periodically
buy workstations and equipment for general phonetic research.
Research collaborations exist with several colleagues at Berkeley
and in the Bay Area.
Professor John Houde (UCSF) works with students in the lab and sits in
on seminars when time permits. He studies sensory-motor adaptation in
speech and the neural correlates of speech perception and speech
motor control.
Dr. Edward Chang (UCSF) collaborates with Prof. Johnson and works with
students in the lab. He studies the neurophysiology of speech
production and perception.
Professor Nelson Morgan (Department of Electrical Engineering, ICSI)
is a valuable colleague and teacher. His research specialization is in
automatic speech recognition. He hosts the weekly ICSI "speech
lunch" seminar.
Professor Dan Klein (Department of Computer Science) specializes in
natural language processing. We've collaborated with him to make
speech corpora available via the Berkeley Language Center.