The Berkeley Phonetics & Phonology Forum ("Phorum") is a weekly talk and discussion series featuring presentations on all aspects of phonology and phonetics.
A number of studies with infants and young children suggest that hearing words produced by multiple talkers helps learners develop more robust word representations. However, pilot work suggests two important caveats to previous data: 1) only less phonologically proficient learners benefit from phonetic variation in new word perception, but 2) all learners may benefit from variation in new word production. The proposed studies seek further evidence to bolster these two points, as well as the beginnings of an explanation of why production differs from perception.
Cross-linguistically, codas are often banned entirely, or are tolerated only when filled by sonorant phones (Zamuner 2003). Choapan Zapotec is not unusual in this regard: when a word-final coda is present at all, only /j/, /w/, or /n/ can fill it, and word-medial codas are almost unattested. However, liquids, which most phonological theories treat as more sonorant than nasals, can never be codas in Choapan. Furthermore, /n/, /j/, and /w/ exhibit special features word-finally that are never associated with consonants in other positions, including these same phonemes in onsets. Based on these positional and featural patterns, I analyze the word-final allomorph of /n/ in Choapan as a glide (cf. Ferre 1988). These data bear on theories of lenition in codas, as well as on the putative position of nasal consonants in sonority hierarchies.
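As a toy illustration (not part of the analysis above), the distributional generalization for Choapan codas can be stated as a small checker. The vowel inventory and one-character segments are simplifying assumptions made here purely for the sketch:

```python
# Sketch of the coda generalization: codas are licensed only word-finally,
# and only /j/, /w/, or /n/ may fill them; word-medial codas are out.
VOWELS = set("aeiou")          # placeholder vowel inventory (an assumption)
FINAL_CODAS = {"j", "w", "n"}  # the only licit word-final codas

def licit_codas(word):
    """Return True if every consonant in `word` respects the generalization.
    Segments are single characters here, a deliberate simplification."""
    segs = list(word)
    for i, seg in enumerate(segs):
        if seg in VOWELS:
            continue
        is_final = i == len(segs) - 1
        next_is_vowel = i + 1 < len(segs) and segs[i + 1] in VOWELS
        if is_final:
            if seg not in FINAL_CODAS:
                return False   # e.g. a final liquid is excluded
        elif not next_is_vowel:
            return False       # word-medial codas are (almost) unattested
    return True
```

For example, `licit_codas("kan")` holds while `licit_codas("kal")` does not, mirroring the ban on liquid codas.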
Idiosyncratically transparent vowels - those that fail to either undergo or trigger harmony in a particular morpheme but do both elsewhere - have not been documented in any language, and have been claimed to be impossible as recently as Törkenczy (2013). We present two Kazakh affixes that contain idiosyncratically transparent vowels, drawing on wordlist elicitations and phonetic studies with two native speakers from different regions. We discuss the implications of these findings for constraint-based theories of vowel harmony and present analyses in two recent systems: both Rhodes's (2010) Agreement by Correspondence system for vowel harmony and Kimper's (2011) Trigger Competition can be made to accommodate lexical specifications that force vowels in particular morphemes to act as transparent, and we show that implementing these specifications in Trigger Competition forces somewhat stronger predictions about the rarity of idiosyncratic transparency.
This study investigates how visual-phonetic information affects compensation for coarticulation in speech perception. Some studies have suggested that listeners' phonological knowledge (obtained through gestural perception of the sound) plays a key role in shifting auditory percepts during compensation (Fowler 2006, Mitterer 2006). However, these studies confound knowledge of articulation with phonological knowledge.
In this paper, we try to tease apart the two confounded factors (phonological knowledge and gestural perception) by testing round vowels that are either native or non-native in American English. Listeners are known to hear more [sh] before round vowels, compensating for anticipatory lip-rounding (Mann & Repp 1980). Therefore, if the compensation effect were indeed caused by gestural perception, listeners would compensate for both native and non-native sounds when they see the lip-rounding. We tested the compensation effect with the mid round vowels [o] and [oe] (and [e] as a baseline). A series of CV syllables was created with a fricative continuum from [s] to [sh] spliced before [e], [o], and [oe]. To balance the stimuli, vowels were extracted from both (s)V and (sh)V. The syllables were aligned with videos of a face saying [s]V and [sh]V, and separate movies were made for each vowel environment. Three groups of American English listeners participated in audio-only, audio-visual, and video-only experiments (A: 15 / AV: 17 / V: 7). While a compensation effect was found for both [o] and [oe] in the audio-only experiment, it was driven by a greater shift in [e] due to the vowel-transition cue. In the audio-visual experiment, compensation decreased overall. Although the videos had similar effects on [o] and [oe] in the video-only condition, they had a greater effect on [o] in the AV condition. In summary, we found no evidence that access to gestural information enhances listeners' compensation for coarticulation. The result is discussed in light of findings from other research.
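The splicing design above can be sketched in a few lines. This is a hedged toy version, not the study's actual stimulus pipeline: the sample rate, durations, and noise/harmonic stand-ins for the recorded fricatives and vowel are all assumptions, and a real continuum would interpolate between recorded [s] and [sh] tokens rather than random noise:

```python
import numpy as np

SR = 16000                      # sample rate in Hz (an assumption)
rng = np.random.default_rng(1)

# Placeholder endpoint fricatives; in the study these would be recorded
# [s] and [sh] tokens matched for duration and amplitude.
fric_s  = rng.standard_normal(int(0.15 * SR))
fric_sh = rng.standard_normal(int(0.15 * SR))

# Placeholder vowel: a 250 ms harmonic complex at 120 Hz.
t = np.arange(int(0.25 * SR)) / SR
vowel = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 6))

def continuum_step(w):
    """Mix the endpoint fricatives with weight w (0 = [s], 1 = [sh])
    and splice the result onto the same vowel token."""
    fric = (1 - w) * fric_s + w * fric_sh
    return np.concatenate([fric, vowel])

# A 7-step [s]-[sh] continuum spliced before one vowel environment.
steps = [continuum_step(w) for w in np.linspace(0, 1, 7)]
```

Holding the vowel token constant while only the fricative portion varies is what lets any shift in the [s]/[sh] category boundary be attributed to the vocalic (or visual) context.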
Everyone who has had a decent high school science class knows what the scientific method is: observe (in a targeted domain), hypothesize (an explanation for the observations), test (the hypothesis), publish (details of the previous three). But the scientific method is not applied in various human endeavors (religion, fad diets, get-rich-quick schemes), and it is poorly manifested in certain scholarly disciplines, e.g., anthropology and law. In our presentation we will explore the extent to which modern mainstream phonology uses, and is able to use, the scientific method in its pursuit of an understanding of the grammar attributed to native speakers.
Humans perceive speech in a categorical fashion, but the cognitive processes that give rise to this behavior are characterized by continuity. I will discuss results from work with Leonardo Lancia (MPI EVA, Leipzig) that provide converging evidence for the time-varying and dynamically changing nature of categorical speech perception: a connectionist computational model synthesizes multiple cognitive processes (perceptual competition, adaptation, perceptual learning) in a single architecture, and its predictions are tested in two experiments with French participants. Overall, this work highlights how categorical perception may arise from an underlying dynamic process, and how processes at multiple time scales affect it. I will also discuss some new preliminary results from a mouse-tracking experiment on the same topic that did not pan out quite as planned.
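The idea that categorical responses emerge from continuous competitive dynamics can be illustrated with a minimal winner-take-all sketch. This is a generic toy model, not the connectionist architecture from the talk; the parameter values and update rule are assumptions chosen only to show the qualitative effect:

```python
import numpy as np

def categorize(input_strengths, inhibition=2.0, rate=0.1, steps=500):
    """Two category nodes receive graded input and inhibit each other;
    the continuous dynamics settle into a categorical (winner-take-all)
    state. A toy sketch of perceptual competition."""
    a = np.array([0.5, 0.5])                   # initial activations
    s = np.asarray(input_strengths, float)
    for _ in range(steps):
        # each node is supported by its input and itself,
        # and suppressed by its rival's activation
        drive = s + a - inhibition * a[::-1]
        a = np.clip(a + rate * (drive - a), 0.0, 1.0)
    return a

# An ambiguous stimulus only slightly favoring category 0
# nevertheless resolves to a near-categorical response.
act = categorize([0.55, 0.45])
```

The point of the sketch is that a small, graded difference in the input is amplified over time into an all-or-nothing outcome, which is one way continuous processes can yield categorical perception.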
Articulatory gestures vary in duration for many reasons. At the segmental level, duration is used to signal linguistic contrast between phonemes; e.g., the primary difference between voiced and voiceless stops in English is the duration of the vocal fold abduction gesture. At the suprasegmental level, longer duration is associated with prosodic prominence (Aylett and Turk, 2004) and contrastive focus (Katz and Selkirk, 2011), among other discourse-related factors. An additional factor hypothesized to affect the duration of articulatory gestures is the speed or ease with which words can be retrieved from the lexicon. Retrieval-based accounts of phonetic variation have tied the accessibility of words during speech planning to their duration in connected speech (Bell et al., 2009; Gahl et al., 2012). In this talk, I present data on word and segment durations produced in a diverse set of speaking contexts, bringing new evidence to bear on the relationship between word retrieval, phonological encoding, and articulatory duration. Data from a word learning experiment with preschoolers, single-word and conversational speech produced by adult speakers, and pilot data on bilingual speech planning will be discussed. I argue for an interactive, cascading model of speech production, in which the availability of phonological segments during planning affects the duration of articulatory gestures.