The Berkeley Phonetics & Phonology Forum ("Phorum") is a weekly talk and discussion series featuring presentations on all aspects of phonology and phonetics.
In recent work, we've been investigating how people interpret utterances containing repair disfluencies (e.g., 'The chef reached for some salt uh I mean some ketchup'). Our experiments involve presenting listeners with sentences at the same time that they are shown a small set of pictures. Eye movements are monitored and time-locked to different portions of the utterance to provide evidence concerning the incremental build-up of interpretations. One set of experiments has shown that listeners were more likely to fixate a critical distractor item (pepper) during the processing of repair disfluencies compared to the processing of coordination structures ('...some salt, and also some ketchup...'). In other experiments we have demonstrated that the pattern of fixations to the critical distractor for disfluency constructions is similar to fixation patterns for sentences employing contrastive focus ('...not some salt, but rather some ketchup...'). The results suggest that similar mechanisms underlie the processing of repair disfluencies and contrastive focus, with listeners generating sets of entities that stand in semantic contrast to the reparandum in the case of disfluencies or the negated entity in the case of contrastive focus.
It has been observed that palatalized trills are prone to undergo sound change, which is usually attributed to their complex articulation (Kavitskaya et al. 2009, Ladefoged and Maddieson 1996, Solé 2002, etc.). In this talk, I will present results on the influence of palatalization on the tongue-tip gesture in Russian trills and laterals (analyzing tongue-tip peak velocity and stiffness by means of electromagnetic articulography, EMA) and discuss their possible consequences for sound change.
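In articulatory kinematics, stiffness is commonly indexed as peak velocity divided by movement amplitude. A minimal sketch of how these two measures can be computed from equally spaced EMA-style position samples (the sample values here are invented for illustration, not actual Russian trill data):

```python
def gesture_kinematics(positions, dt):
    """Compute peak velocity and a stiffness index (peak velocity /
    movement amplitude) for one gesture, given position samples
    taken at a fixed interval dt (in seconds)."""
    # Finite-difference velocities between successive samples
    velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    peak_v = max(abs(v) for v in velocities)
    amplitude = max(positions) - min(positions)
    return peak_v, peak_v / amplitude

# Invented tongue-tip positions (mm) sampled every 10 ms:
peak_v, stiffness = gesture_kinematics([0.0, 1.0, 3.0, 4.0], dt=0.01)
```

The stiffness index is dimensionally 1/s, so it abstracts away from gesture size and reflects how fast the articulator moves relative to the distance it travels.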
This talk takes up two interrelated issues for lexically-conditioned phonological patterns: (1) how the grammar captures the range of phonological variation that stems from lexical conditioning, and (2) whether the relevant lexical categories needed by the grammar can be learned from surface patterns. Previous approaches to category-sensitive phonology have focused largely on constraining it; however, only a limited understanding currently exists of the quantitative space of variation possible (i.e., entropy) within a coherent grammar. In this talk, I present an approach that models lexically-conditioned phonology as cophonology subgrammars of indexed constraint weight adjustments (i.e., 'varying slopes') in multilevel Maximum Entropy Harmonic Grammar. This approach leverages the structure of multilevel statistical models to quantify the space of lexically-conditioned variation in natural language data, and allows for the deployment of information-theoretic model comparison to assess competing hypotheses of lexical categories. Two case studies are examined: part-of-speech-conditioned tone patterns in Mende (joint work with Sharon Inkelas, UCB), and lexical versus grammatical word prosodification in English. Both case studies bring new quantitative evidence to bear on classic category-sensitive phenomena. The results illustrate how the multilevel approach developed here can capture the probabilistic heterogeneity and learnability of lexical conditioning in a phonological system.
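The core mechanics of a MaxEnt Harmonic Grammar with indexed weight adjustments can be sketched compactly: each candidate's probability is proportional to the exponentiated (negative) sum of its weighted constraint violations, and a lexically indexed class shifts a base weight by a class-specific adjustment. The constraint names, weights, and candidates below are invented purely for illustration:

```python
import math

def maxent_probs(weights, violations):
    """MaxEnt HG: P(c) = exp(-H(c)) / Z, where H(c) is the sum of
    weighted constraint violations for candidate c."""
    harmonies = {c: sum(weights[k] * v.get(k, 0) for k in weights)
                 for c, v in violations.items()}
    z = sum(math.exp(-h) for h in harmonies.values())
    return {c: math.exp(-harmonies[c]) / z for c in harmonies}

# Invented constraints and weights:
base = {"*Complex": 2.0, "Max": 1.0}

# A lexically indexed adjustment ('varying slope') for a hypothetical
# word class: the class-specific weight is base weight + adjustment.
adjusted = dict(base)
adjusted["*Complex"] = base["*Complex"] + 1.5

# Invented violation profiles for two candidates of one input:
viols = {"faithful": {"*Complex": 1}, "deleted": {"Max": 1}}

p_default = maxent_probs(base, viols)      # default cophonology
p_indexed = maxent_probs(adjusted, viols)  # indexed subgrammar
```

Because the indexed class penalizes *Complex more heavily, the deletion candidate receives higher probability under the indexed subgrammar than under the default grammar, which is the "varying slopes" intuition in miniature.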
Note: Special day (Wednesday), time (10:00-11:00), and location (1229 Dwinelle)
Categorical phonological processes (e.g. assimilation) that seem to be driven by gradient, subphonemic effects traditionally considered to fall within the domain of phonetics (e.g. coarticulation) constitute a challenge for phonological theory. Such data raise the question of the nature of phonology and its relation to phonetic substance, which has given rise to a long debate in linguistic theory, schematically opposing two types of approaches to phonology: substance-free approaches, which hold that phonetic substance is not relevant to phonological theory, and phonetically grounded approaches, for which (at least some) phonological phenomena are rooted in natural phonetic processes, such as coarticulation.
In this talk, I argue in favor of phonetic grounding, on the basis of novel data relevant to this debate: 'phonological teamwork', a cumulative effect in which two segments exerting the same subphonemic coarticulatory effect can trigger a categorical phonological process (e.g. assimilation) only if they 'team up', adding their coarticulatory strengths to pass the threshold necessary for that process to occur. Drawing on original fieldwork, I analyze a particularly rich case of teamwork: the doubly triggered rounding harmony of Laal (endangered isolate, Chad). I provide instrumental evidence that the harmony is driven by subphonemic coarticulatory effects, and propose to enrich phonology with phonetically grounded representations of such effects, called subfeatures. Subfeatures do not contradict the separation between phonology and phonetics, but rather constitute a mediating interface between them. Throughout the talk, I highlight the importance of linguistic fieldwork, meticulous data collection and analysis, and detailed description of seemingly minor phenomena for contributing to important theoretical debates.
This talk explores the role of language contact in the development of consonants with secondary palatalization in the historical region of Posen (Polish Poznań). Linguistic documentation indicates that by the early 20th century, Low German spoken in Poznań and surrounding regions was in the process of losing secondary palatalization (Koerth 1913, 1914; Teuchert 1913), a loss due to the influence of Low German from other regions. Teuchert and Koerth are of the view that Polish played some role in the development of these consonants, but they are at a loss to explain how Polish influenced Low German. In this talk, I present evidence that secondary palatalization developed as the result of a Lechitic VC co-articulation rule being mapped onto the phoneme system of Low German. Differences between the output of the Lechitic rule in Polish and Low German are due to prioritization of Low German input features.
Synchronic variation in speech articulation is thought to be at the heart of most sound change, whether through the generation of phonemically ambiguous speech or the creation of phonological innovations available to language learners. The precise mechanism for the incorporation of phonetic variation into phonological systems, however, has remained elusive. Are some completed sound changes the grammaticalization of two extremes of phonetic variation? And when are additional mechanisms required to make the leap from gradient phonetic variation to discrete phonological categories? In this talk, I present a two-pronged approach to understanding the role of this variation in sound change at work in our lab.
The first line of work aims to quantify the extent of articulatory variation in the direction of a completed sound change. As a case study, I present the landscape of variation in the articulation of velarized coda laterals in English, and discuss the links between this variation and vocalization through intra- and inter-speaker variation. Our second goal is to examine what processes exist in the space between gradient articulatory variation and categorical phonological change. Here, I describe ongoing research, the goal of which is to capture speakers' behavior in response to external forces designed to push their speech into unstable articulatory territory.
One of the most fundamental observations about speech communication is that there is no one-to-one mapping between segments or words and their acoustic realizations. On one hand, it is clear that signal-to-word mapping is many-to-one: perfectly understandable productions of the same word can take on countless acoustic realizations that may differ from one another along many dimensions. On the other hand, signal-to-word mapping is also sometimes one-to-many: the same acoustic signal may, on different occasions, be perceived as different sounds, words or sequences of words. In this talk, I will discuss research (conducted with Megan Reilly and Sheila Blumstein) addressing two questions raised by this state of affairs.
Firstly, does the speech production system avoid perceptual confusability by modulating how a segment is produced in specific ways and/or under specific conditions? I will present two experiments investigating the influence of lexical and sentential information on stop consonant articulation; I will argue that 'listener-oriented models' fail to account for the patterns of systematic phonetic variation in speech production we observe.
Secondly, I will discuss how listeners accommodate the one-to-many mapping from signals to words. I will briefly present experimental evidence suggesting that the speech perception system integrates higher-level (e.g., lexical and sentential) cues and phonetic cues, weighting them with respect to their relative reliabilities. If there’s time, I will wrap up by suggesting a novel prediction made by combining the results of the speech production and speech perception work.
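Reliability-weighted integration of the kind described above is often modeled as inverse-variance weighting: each cue's estimate is weighted by the inverse of its variance, so more reliable cues count for more. A minimal sketch (the cue values and variances are invented, and the abstract does not commit to this specific model):

```python
def combine_cues(estimates, variances):
    """Combine cue estimates by inverse-variance weighting:
    weight_i = 1 / variance_i, normalized to sum to 1."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Hypothetical example: a phonetic cue (low variance, reliable) and a
# higher-level lexical cue (high variance, less reliable), both
# expressed as VOT estimates in ms:
combined = combine_cues(estimates=[30.0, 50.0], variances=[4.0, 16.0])
```

The combined estimate falls between the two cues but closer to the low-variance (more reliable) one, which is the qualitative pattern the perception results are described as showing.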
This study explores the way that language interference and transfer are manifested in the phonetic correlates of contrastive information marking and prosodic constituency in English, and how these manifestations vary based on native language experience. Native speakers of French (L2 speakers of English) and native monolingual speakers of American English were recorded while speaking English, producing utterances that presented contrastive information. Contrastive information was elicited by the sequential visual presentation of stimuli that contrasted in color or shape, with target segments also in different prosodic positions in the utterance. Results show that L1 French speakers were generally successful in producing the correlates of English prosodic prominence, and in this domain experience played a role, with more experienced speakers producing correlates of prosodic structure that were more in line with L1 English norms. However, the contrastive status of words was generally not encoded successfully by L1 French speakers, with even the most experienced failing to produce correlates of contrastive marking that were in line with L1 English norms. English syllable reduction and durational cues for accentuation also posed a problem for L1 French speakers. L2 English deviations from L1 English norms are explained in reference to the L1 system of French. The differential successes of L2 learners in marking prosodic structure as opposed to contrastive information are in line with theories that posit a general difficulty in the production of semantically driven language features for L2 learners, and also with perceptual accounts of French 'stress-deafness'.
IsiXhosa has a pattern of labial palatalization in which the passive suffix /-w/ causes stem-final labials to become palatals. Some previous work has treated this as a phonological process, though others claim the pattern is fundamentally in the lexicon (with the change being historical rather than synchronic). We probe this question experimentally, using a wug test. If palatalization is phonological, speakers should extend it to nonce items. If, however, the palatalized forms are lexically stored, speakers will not palatalize in nonce items. The findings presented in this talk suggest that labial palatalization is genuinely phonological for some speakers, but not for others, and that members of the same speech community have different grammatical representations for the same pattern.
Morphologically derived environment effects (MDEEs) are well-known patterns in which static phonotactic patterns in the lexicon do not accord with what is allowed at morphological boundaries (phonological alternations). Analyses of MDEEs (e.g. Lubowicz 2002, Wolf 2008, a.o.) often assume that both the alternation patterns and the static phonotactic patterns are productive. Yet upon closer inspection, well-known MDEE patterns prove to be less clear-cut than most analyses suggest. In particular, as Inkelas (2009) and Anttila (2006) have shown for Turkish velar deletion and Finnish assibilation respectively, a morphologically derived environment is hardly sufficient to ensure that a supposedly derived-environment-only phonological process will apply. This talk takes up a related assumption regarding the static stem-internal phonotactic patterns in the lexicon of languages with MDEEs. It is implicitly assumed in analyses of MDEEs that stem-internal sequences that violate the generalization across morpheme boundaries are completely well-formed. To examine this, I present the results of corpus studies and phonotactic modeling simulations of two well-known MDEE cases: Korean palatalization and Turkish velar deletion. I show that in one case (Korean), a weaker version of the across-morpheme constraint (*ti) that motivates palatalization is active in the lexicon. This follows from the fact that forms violating *ti such as mati 'joint' are under-represented in the Korean lexicon. This contrasts with Turkish, where the relevant alternation-motivating constraint is unavailable from pure phonotactic learning. These results further confirm the observation that MDEEs are not a unitary phenomenon. I also discuss the implications of this study for the relation between static and dynamic generalizations as well as for phonological learning.
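Under-representation of the kind described for *ti is commonly quantified as an observed/expected (O/E) ratio: the attested count of a sequence divided by the count expected if its parts combined at chance. A minimal sketch with invented counts (the abstract does not report the actual Korean figures, and the modeling there goes beyond this simple measure):

```python
def observed_expected(count_xy, count_x, count_y, total):
    """O/E ratio for a sequence xy in a corpus of `total` bigram slots:
    expected count assumes x and y combine independently."""
    expected = (count_x * count_y) / total
    return count_xy / expected

# Invented counts, purely illustrative: /t/ occurs 200 times, /i/ 100
# times, /ti/ only once, over 10,000 bigram positions.
oe = observed_expected(count_xy=1, count_x=200, count_y=100, total=10000)
```

An O/E well below 1 (here 0.5) indicates the sequence is rarer than chance predicts, i.e. a weak version of the constraint is detectable in the static lexicon even though the sequence is not categorically absent.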
The goal of the perception test is to characterize listeners' ability to identify sounds that are at various stages of change in PNWE. The two research questions we seek to address are: (1) Can social information (beliefs about speaker dialect) override listeners' use of phonetic information in vowel identification? (2) How do listeners use phonetic information when such information conflicts with the phonetic cues of their native dialect (here, changes in progress involving raising or near-merger)? In this Phorum talk, I will describe the methods we are using to address these research questions. We are conducting an online test of vowel identification (forced choice) adapted from methods used by Labov & Ash (1997) in the Cross Dialect Comprehension studies and by Niedzielski (1999). Our respondents include a 2-region sample of 100 listeners.
Lindblom (1995) proposed two modes of listening to speech: a “what” mode, where listeners focus on meaning, and a “how” mode, where listeners attend to details of pronunciation. This theory fits with Hickok and Poeppel’s (2004, 2007) more recent dual stream model of speech perception. What conditions then are necessary for modulating the use of one listening mode or the other? Following observations from speech recognition studies (Cole & Jakimik 1980, etc.), this presentation will discuss the results of two experiments which considered how structural and semantic context (including word predictability) interact with the listener’s attention to phonetic details. Phonetic accommodation and intentional imitation were used as experimental tools for determining what phonetic details subjects noticed after hearing target words in a variety of structural and semantic contexts. The results suggest listeners attend more closely to details of pronunciation when less structural and semantic context is present. I will then briefly consider the relevance of these findings to patterns in sound change and phonology.