About

The Berkeley Phonetics & Phonology Forum ("Phorum") is a weekly talk and discussion series featuring presentations on all aspects of phonology and phonetics.

Meetings

Mondays 10 - 11
Phonology Lab
50 Dwinelle

Coordinators

Sarah Bakst

Matt Faytak

Berkeley Phonetics and Phonology Forum

Schedule of Talks for Fall 2013

Previous Meetings

September 9 -

Larry Hyman
UC Berkeley

What is phonological typology?

Abstract
Handout

September 16 -

Andréa Davis
University of Arizona

Learning robust word representations

A number of studies with infants and with young children suggest that hearing words produced by multiple talkers helps learners to develop more robust word representations. However, pilot work suggests two important caveats to these earlier findings: 1) only less phonologically proficient learners benefit from phonetic variation when perceiving new words, but 2) all learners may benefit from variation when producing new words. The proposed studies seek further evidence to bolster these two points, as well as the beginnings of an explanation of why production differs from perception.

September 23 -

Erin Donnelly
UC Berkeley

When /n/ isn't just a nasal: codas in Choapan Zapotec

Cross-linguistically, codas are often banned entirely, or are tolerated only when filled by sonorant phones (Zamuner 2003). Choapan Zapotec is not unusual in this regard: when a word-final coda is present at all, only /j/, /w/, or /n/ can fill it, and word-medial codas are almost unattested. However, liquids, which most phonological theories treat as more sonorant than nasals, can never be codas in Choapan. Furthermore, /n/, /j/, and /w/ exhibit special features word-finally that are never associated with consonants in other positions, including these same phonemes in onsets. Based on these positional and featural patterns, I analyze the word-final allomorph of /n/ in Choapan as a glide (cf. Ferre 1988). These data can contribute to theories regarding lenition in codas, as well as the putative position of nasal consonants in sonority hierarchies.
Handout

September 30 -

Sam Bowman and Benjamin Lokshin
Stanford

Idiosyncratic transparent vowels in Kazakh

Idiosyncratically transparent vowels - those that fail to either undergo or trigger harmony in a particular morpheme but do both elsewhere - have not been documented in any language, and have been claimed to be impossible as recently as Törkenczy (2013). We present two affixes in Kazakh that contain idiosyncratically transparent vowels, drawing on evidence from wordlist elicitations and phonetic studies with two native speakers from different regions. We discuss the implications of these findings for constraint-based theories of vowel harmony and present analyses in two recent systems: both Rhodes's (2010) Agreement by Correspondence system for vowel harmony and Kimper's (2011) Trigger Competition can be made to accommodate lexical specifications that force vowels in particular morphemes to behave transparently, and we show that implementing these lexical specifications in Trigger Competition forces us to make somewhat stronger predictions about the rarity of idiosyncratic transparency.
Handout

October 7 -

Shinae Kang
UC Berkeley

Audio-visual compensation for coarticulation depends on listeners' familiarity with the sound

This study investigates how visual-phonetic information affects compensation for coarticulation in speech perception. Some studies have suggested that listeners' phonological knowledge (obtained through gestural perception of the sound) plays a key role in shifting auditory percepts during compensation (Fowler 2006, Mitterer 2006). However, these studies confound knowledge of articulation with phonological knowledge.

In this paper, we try to distinguish these two confounded factors (phonological knowledge and gestural perception) by testing rounded vowels that are either native or non-native to American English. Listeners are known to hear more [s] before rounded vowels, compensating for anticipatory lip-rounding (Mann & Repp 1980). Therefore, if the compensation effect were indeed caused by gestural perception, listeners would compensate for both native and non-native sounds when they see the lip-rounding. We tested the compensation effect with mid rounded vowels, [o] and [oe] (with [e] as a baseline). A series of CV syllables was created by splicing a fricative continuum from [s] to [sh] before [e], [o], and [oe]. To balance the stimuli, vowels were extracted from both (s)V and (sh)V syllables. The syllables were aligned with videos of a face saying [s]V or [sh]V, and separate movies were made for each vowel environment. Three groups of American English listeners participated in audio-only, audio-visual, and video-only experiments (A: 15, AV: 17, V: 7). While a compensation effect was found for both [o] and [oe] in the audio-only experiment, it was driven by a greater shift for [e] due to the vowel transition cue. In the audio-visual experiment, compensation decreased overall. Although the videos had a similar effect for [o] and [oe] in the video-only condition, they had a greater effect on [o] in the AV condition. In summary, we found no evidence that knowing the gestural information enhances listeners' compensation for coarticulation. The results are discussed in light of findings from other research.

October 14 -

John Ohala and Marc Ettlinger
UC Berkeley and VA Northern California Healthcare System

The next big thing in phonology: the scientific method

Everyone who has had a decent high school science class knows what the scientific method is: observe (in a targeted domain), hypothesize (an explanation for the observations), test (the hypothesis), and publish (details of the previous three). But the scientific method is not applied in various human endeavors (religion, fad diets, get-rich-quick schemes) and is poorly manifested in certain scholarly disciplines, e.g., anthropology and law. In our presentation we will explore the extent to which modern mainstream phonology uses, or is even able to use, the scientific method in its pursuit of an understanding of the grammar attributed to native speakers.

October 21 -

Florian Lionnet
UC Berkeley

Phonological teamwork: an Agreement by Correspondence account of multiple-trigger assimilation

Abstract

October 28 -

Bodo Winter
UC Merced

Categorical speech perception is not that categorical

Humans perceive speech in a categorical fashion - but the cognitive processes that give rise to this are characterized by continuity. I will discuss results from work with Leonardo Lancia (MPI EVA, Leipzig) that provide converging evidence for the time-varying and dynamically changing nature of categorical speech perception. We built a connectionist computational model that synthesizes multiple cognitive processes (perceptual competition, adaptation, perceptual learning) within a single architecture, and we tested the model's predictions in two experiments with French participants. Overall, this work highlights how categorical perception may arise from an underlying dynamic process, and how processes at multiple time scales affect categorical perception. I will also discuss some new preliminary results from a mouse-tracking experiment on the same topic that did not pan out quite as planned.

November 4 -

Lise Menn
University of Colorado at Boulder

The Linked-Attractor Model of child phonology: update

Abstract

November 18 -

Clara Cohen
UC Berkeley

Effects of abstract and usage information on pronunciation variation

November 25 -

TBA

December 2 -

Melinda Fricke
UC Berkeley, Penn State

A retrieval-based account of word and segment durations in speech production

Articulatory gestures vary in duration for many reasons. At the segmental level, duration is used to signal linguistic contrast between phonemes, e.g., the primary difference between voiced and voiceless stops in English is the duration of the vocal fold abduction gesture. At the suprasegmental level, longer duration is associated with prosodic prominence (Aylett and Turk, 2004) and contrastive focus (Katz and Selkirk, 2011), among other discourse-related factors. An additional factor hypothesized to affect the duration of articulatory gestures is the speed or ease with which words can be retrieved from the lexicon. Retrieval-based accounts of phonetic variation have tied the accessibility of words during speech planning to their duration in connected speech (Bell et al., 2009; Gahl et al., 2012). In this talk, I present data on word and segment durations produced in a diverse set of speaking contexts, bringing new evidence to bear on the relationship between word retrieval, phonological encoding, and articulatory duration. Data from a word learning experiment with preschoolers, single word and conversational speech produced by adult speakers, and pilot data on bilingual speech planning will be discussed. I argue for an interactive, cascading model of speech production, in which the availability of phonological segments during planning affects the duration of articulatory gestures.

December 9 -

Benjamin Munson
University of Minnesota

Explicit versus implicit social priming in speech perception

In this talk, I will present the results of two lines of research on gender and phonetic variation. The first of these examines relationships among gender expression and gendered speech in boys aged 5 to 13. In this line of research, my colleagues (Janet Pierrehumbert, Ken Zucker, Laura Crocker, Allison Owen-Anderson) and I have shown that boys with a clinical diagnosis of Gender Identity Disorder have speech that is less prototypically boy-like than boys without this label. In this part of the talk, we can brainstorm the mechanisms that might underlie these differences, and ways to best measure the perception of gender through speech.

The second part of the talk examines how the perception of gender affects the identification of anterior sibilant fricatives. Strand and Johnson (1996) showed that listeners' phoneme identification can be biased by purely social expectations about how talkers should sound. This finding has been replicated many times with a variety of social categories, including gender, age, social class, and regional dialect. In this part of the Phorum, we will talk about how the type of priming affects this process. In particular, we will talk about the differences between studies with very explicit priming methods (such as presenting a picture of a talker with an obvious attribute, or telling the listeners to imagine that speech was produced by a particular type of talker) versus ones that are very implicit. I will present some data on s-S categorization that tried to contrast these two types of priming, and we will brainstorm better ways to implicitly prime social categories than the method that I used.