About

The Berkeley Phonetics & Phonology Forum ("Phorum") is a weekly talk and discussion series featuring presentations on all aspects of phonology and phonetics.

Meetings

Mondays 11 - 12
Phonology Lab
50 Dwinelle

Coordinators

Emily Cibelli

Gregory Finley

Berkeley Phonetics and Phonology Forum

Schedule of Talks for Fall 2012

PREVIOUS MEETINGS:

September 10 -

Will Chang
UC Berkeley

Linguistic mirages and lexical borrowing between Tongan and Samoan

Despite more than a thousand years of cultural exchange between Tonga and Samoa before European contact, very few loanwords have been identified as having diffused between them, even though there is a large vocabulary peculiar to these two Polynesian languages. The difficulty lies in the fact that when Tongan and Samoan show related forms, it is almost always possible to reconstruct a Proto-Polynesian form. One is tempted to explain this shared vocabulary as retentions in Tongan and Samoan that have been lost elsewhere in Polynesia. I show that this is impossible by statistically comparing forms peculiar to Tongan and Samoan with forms widespread in Polynesia. The reconstructed protoforms associated with the second set of forms have far more *h and *r sounds than the (false) reconstructions associated with the first set of forms. This is a straightforward consequence of the sound laws in the history of the two languages, and of the way that Tongan words are nativized in Samoan. While it remains impossible to positively identify specific forms as loans, I have used a statistical model to identify a handful as having been borrowed with high probability; and of the approximately 850 etyma shared by Tongan and Samoan, an estimated 45% are loans.
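For a concrete sense of the kind of comparison described above, here is a minimal sketch (an illustration only, not Chang's actual statistical model): count *h and *r segments in the reconstructed protoforms for the two sets of etyma and test whether the rates differ. The protoforms and helper names below are invented for the example.

```python
# Toy illustration of the comparison described in the abstract; the
# protoforms below are invented placeholders, not real reconstructions.
from scipy.stats import fisher_exact

def count_h_r(protoforms):
    """Return (count of *h and *r segments, count of all other segments)."""
    hits = sum(seg in {"h", "r"} for form in protoforms for seg in form)
    total = sum(len(form) for form in protoforms)
    return hits, total - hits

# Set 1: forms peculiar to Tongan and Samoan (reconstructions suspected false).
# Set 2: forms widespread in Polynesia (reconstructions well supported).
peculiar = [list("patu"), list("malo"), list("fono")]
widespread = [list("hala"), list("rua"), list("fohe")]

table = [count_h_r(peculiar), count_h_r(widespread)]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```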

September 17 -

Clare Sandy
UC Berkeley

Tone and syllable structure in Karuk

Word-level prosody in Karuk incorporates both tone and stress, and the placement of prominence is sensitive to complex interactions of lexical, phonological, and morphological factors. In this talk I discuss results of a quantitative analysis of prominence in Karuk roots, which shows that syllable structure is an important factor in determining placement of tone. Nouns and verbs display different patterns of syllable structure and accentuation, and I argue that the patterns seen in nouns represent the default. A constraint against high tone on short closed syllables, *CVC, explains static distributions of tones in verb roots and is also shown to be active in derived contexts, contributing to the degree of stability of tone under both affixation and a vowel lengthening process triggered by certain affixes. *CVC thus unifies and motivates seemingly arbitrary phonological rules, and results in the Karuk system being far more predictable than has previously been thought.

September 24 -

Sharon Inkelas
UC Berkeley

Modeling Transient Child Phonology with a Recycle Constraint

This talk, based on ongoing joint work with Tara McAllister Byun (NYU) and Yvan Rose (Memorial U.), presents a new model of phonological acquisition which incorporates performance-based generalizations into grammar. The model is designed to shed light on several conundrums of child phonology. One is that children can exhibit systematic patterns which are unattested in adult language (e.g. consonant harmony involving major place of articulation, or neutralization of consonantal place or stricture contrasts in perceptually strong positions only). Another is that children's productions do not tend to improve steadily and monotonically, but can show "U"-shaped curves, exhibit systematic variation, and/or stall at plateaus.

The model we develop features a grammatical mechanism, the constraint Recycle, which favors reproduction of stored error forms. Under the influence of this constraint, errors become systematic and sensitive to phonological conditioning factors. We further assert that Recycle interfaces with a body of tacit knowledge about the stability of mappings between motor plans and acoustic outputs, here termed the A-map. If a target form is motorically complex and therefore error-prone, the associated entry in the A-map will indicate an unstable mapping. We define Recycle in such a way that the magnitude of the penalty incurred by a target is inversely proportional to the stability of its mapping: the less consistent the mapping, the greater the pressure to reuse an old form that has a more reliable production routine. Crucially, as the child matures, the changes in motor control ability that he/she experiences will be reflected as shifts in the topography of the A-map. Once the mapping from motor plan to acoustic target becomes stable for a particular form, the pressure to use an old form is lifted. An adult-like production, or the closest approximation permitted by the child's current grammar, will then emerge.
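As a toy illustration of the inverse relation just described (an assumption about how one might operationalize it, not the authors' formal definition), the Recycle penalty could be sketched as follows:

```python
# Toy sketch: Recycle's penalty grows as the A-map stability of the target's
# motor-plan-to-acoustic mapping shrinks. The functional form (k / stability)
# is an illustrative assumption, not the authors' formalization.
def recycle_penalty(stability: float, k: float = 1.0) -> float:
    """Penalty for attempting a target form with the given mapping stability
    (0 < stability <= 1); less stable mappings incur larger penalties."""
    return k / stability

for stability in (0.2, 0.5, 1.0):
    print(f"stability {stability:.1f} -> penalty {recycle_penalty(stability):.1f}")
```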

The present proposal is related to the UseListedError model put forward by Tessier (2008, 2012). However, the two accounts are distinct in that our Recycle model draws an explicit connection between children's performance limitations and their tendency to reuse stored forms, identifying a functional motivation for an otherwise perplexing pattern.

October 1 -

Keith Johnson & Ronald Sprouse
UC Berkeley

Preliminary analysis of electrocorticography signals produced while reading words aloud

Electrocorticography data is very rich, and in previous studies appears to capture a great deal of information about both auditory sensation and motor control in speech. We focus on two aspects of this type of data in a study of word list reading by a single patient. First, we will report on the use of principal components analysis (PCA) to reduce the dimensionality of the data (from about 17000 variables per word to "only" a couple of hundred). A couple of sanity checks on this method will be discussed. Second, we will report, in a preliminary way, on the use of PCA rank reduction to study neural firing patterns that are associated with phonetic information (bilabial versus velar initial consonants) and to study lexical information (high versus low frequency).
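As a rough illustration of the rank-reduction step (a sketch under assumed array shapes, using scikit-learn, not the authors' actual pipeline), the PCA step might look like this:

```python
# Minimal sketch of PCA rank reduction on ECoG word tokens. The data here are
# random placeholders with an assumed shape: one row per word, ~17,000
# electrode-by-time features per row.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
ecog = rng.standard_normal((400, 17000))    # placeholder; the real dataset has >4000 words

pca = PCA(n_components=200)                 # keep "only" a couple hundred dimensions
scores = pca.fit_transform(ecog)            # (400, 200) low-dimensional word representations

# One simple sanity check: how much variance do the retained components explain?
print(scores.shape, round(pca.explained_variance_ratio_.sum(), 3))
```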

Acknowledgement: Dr. Edward Chang (UCSF) kindly provided us with the ECog data that will be discussed in this presentation. The patient we are studying (#GP31) read over 4000 words with a grid of electrodes implanted on the surface of her brain.

October 8 -

Roslyn Burns
UC Berkeley

The Vowel System of Mesoamerican Plautdietsch: A preliminary analysis

This talk focuses on the vowel system of Mesoamerican Plautdietsch in both its synchronic state and diachronic development. Plautdietsch (PDT), a West Germanic language with origins in Poland, has been carried across the globe by Mennonites searching for religious freedom and economic opportunity. A consequence of this migration is that some PDT speech communities can develop innovations independently of contact with other PDT speech communities. On the basis of acoustic analysis and comparative investigation of other PDT vowel systems, I will argue that similarities among Canadian-Mexican PDT vowel systems are a development parallel to that of the Russian-German group. This parallel development is in part due to a phoneme whose presence has been recognized in the broader PDT literature, but whose exact phonetic quality has remained elusive for lack of acoustic analysis.

October 15 -

Sarah Bakst
UC Berkeley

Rhotics and Retroflexes in Indic and Dravidian

Although the Indic languages of North India and the Dravidian languages of South India both have retroflexes in their phonetic inventories, Ladefoged and Bhaskararao (1983) demonstrated that the retroflexes of the two language families do not share the same articulation. The researchers used static palatography and X-ray evidence to show that the Northern retroflex stops are apical and pronounced at the alveolar ridge, whereas the Southern ones are sublaminal and pronounced at the palate. In my thesis I aimed to see whether these results could be replicated using static palatography with speakers of Hindi, Punjabi, and Tamil, and also to determine the acoustic correlates of the differences in articulation using acoustic-only elicitation.

I also extended this question to the long-standing problem of how to classify rhotics. The boundaries of the rhotic category have never been well defined by a single acoustic or articulatory feature. The lowered third formant has been proposed in the past as a possible universal indicator of rhoticity, but Lindau (1985) found that many rhotics have a high third formant. Retroflexes, however, have been associated with rhotics because both classes share a lowered third formant. I hoped to define the relationship between different retroflexes, rhotics, and their overlap. Specifically, I hoped that the difference between the two regions' retroflexes would give some insight into whether there can be a "degree" of rhoticity.

I found the same clear differences in retroflex articulation that Ladefoged and Bhaskararao found and was able to extend these differences to retroflex rhotics, but concrete acoustic differences have remained elusive. There is a trend toward greater lowering of the third formant in Tamil than in Hindi or Punjabi, but it is not entirely consistent. In my talk, I will explain the problem of retroflex cues and propose some other possible avenues to explore.

October 22 -

Maria Josep Solé
Universitat Autonoma de Barcelona, Spain

Creating phonological categories in an L2

Perceiving and producing the sounds of a second language, particularly sounds that are not contrastive in the L1, is an especially difficult task. The current study examines whether L2 learners of English have formed separate phonological categories for sound contrasts differing in a feature that is nondistinctive in their L1, e.g. /kæt/ vs /kʌt/ vs /kɑt/ (‘same category’ assimilation, Best 1995). A medium-term auditory repetition priming task was used to investigate whether a prime-target pair differing only by sounds that are subsumed perceptually by similar L1 sounds (/kæt/ vs /kʌt/) yields the same amount of priming as (i) a repeated prime-target pair (/kæt/ vs /kæt/) or (ii) a prime-target pair differing by features that are distinctive in the L1 (/kæt/ vs /kɪt/). It was hypothesized that if L2 learners have not formed distinct categories for English-specific contrasts (contrasts differing in features non-contrastive in the L1), e.g., /æ/ vs /ʌ/, they will process cat and cut as homophones that will show priming effects.

46 Spanish/Catalan advanced learners of English and 18 native speakers of American English were tested in a lexical decision task involving real English words and nonwords produced by an American English speaker. Preliminary results indicate that there are no facilitation effects for words differing in an English-specific contrast for L2 speakers (suggesting that they keep the two words separate), but there is facilitation for nonwords (e.g. /ʃæb/ primes both /ʃʌb/ and /ʃæb/). The fact that an English-specific vowel contrast is in part confusable in nonwords, whereas the different lexical items are kept separate, suggests that the sound categories may only be abstracted from lexical contrasts at a later stage. The implications for models of speech processing and L2 learning will be considered.

October 29 -

Kristofer Bouchard
UC San Francisco

Single-trial control of vowel formants and coarticulation

Human speech depends on the capacity to produce a large variety of precise movements in rapid sequence, making it among the most complicated sequential behaviors found in nature. Here, we used high-resolution, multi-electrode cortical recordings during the production of consonant-vowel syllables to understand how the human brain controls vowel formants, a surrogate for tongue/lip posture. Population decoding of trial-by-trial sensory-motor cortical activity allowed for accurate prediction of formants. Indeed, a significant fraction of the within-vowel variability could be accurately predicted. Interestingly, decoding performance for vowel formants extended well into the consonant phase. Additionally, we show that a portion of carry-over coarticulatory effects on vowel formants can be attributed to immediately preceding cortical activity, demonstrating that the representation of a vowel depends on the preceding consonant. Importantly, significant decoding of vowel formants remained during the consonant phase after removing the effect of carry-over coarticulation, demonstrating that the representation of consonants depends on the upcoming vowel. Together, these results demonstrate that the cortical control signals for phonemes are anticipatory and that the representation of phonemes is non-unitary.
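A minimal sketch of what single-trial formant decoding can look like (cross-validated ridge regression on placeholder data; all names, array shapes, and the choice of regression are illustrative assumptions, not the authors' method):

```python
# Sketch: predict F1/F2 for each trial from multi-electrode cortical features.
# Data are random placeholders; with real recordings the predicted-observed
# correlations would index decoding accuracy.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_trials, n_features = 500, 1000                     # trials x (electrodes * time bins)
cortical = rng.standard_normal((n_trials, n_features))
formants = rng.standard_normal((n_trials, 2))        # F1 and F2 per trial

decoder = Ridge(alpha=1.0)
predicted = cross_val_predict(decoder, cortical, formants, cv=5)

for i, name in enumerate(["F1", "F2"]):
    r = np.corrcoef(predicted[:, i], formants[:, i])[0, 1]
    print(f"{name}: r = {r:.2f}")
```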

November 5 -

Sharon Inkelas & Keith Johnson
UC Berkeley

Testing the Learnability of Sound-Based Writing Systems

This paper reports the results of an artificial learning experiment testing the hypothesis that the learnability of symbols used in sound-based writing systems is correlated with the acoustic stability of the type of speech chunks to which the symbols correspond. Three conditions are compared: a system in which symbols correspond to C and V segments (the 'Segment' condition); a system in which symbols correspond either to onset consonants or to VC syllable rimes (the 'Onset-Rime' condition); and a system in which symbols correspond to CV or VC demisyllables (the 'Demisyllable' condition).

The Segment condition matches alphabetic writing systems like those of Spanish, English, etc. The Onset-Rime condition relies on a syllable-internal distinction long exploited by phonologists. The Demisyllable condition draws on findings from speech recognition and synthesis (see e.g. Jurafsky & Martin 2008) that diphones and triphones are acoustically and perceptually more stable than single segments because of the key segment-to-segment transitions they contain.

Data were obtained from 57 subjects (19 per condition). Subjects, all English-speaking university students, participated in a computer-based task in which they learned sound correspondences for 20 symbols. The sound-to-symbol mapping was randomized across subjects. After mastering individual symbols, subjects were trained on the combination of symbols into CVC words (C-V-C in the Segment condition, C-VC in the Onset-Rime condition, CV-VC in the Demisyllable condition). Subjects were then tested on their ability to read aloud novel (CVC) combinations of the symbols on which they had been trained. Results are based on readings of the same 18 test items in each condition.

The experiment tested the relative strengths of two identifiable biases: the Experience Bias and the Acoustic Bias. (1) As English speakers, the subjects were conversant with segment-based writing, creating a bias of experience which should favor the Segment condition. (2) However, the Experience Bias is offset by what may be termed the Acoustic Bias, according to which subjects will prefer symbols which correspond to acoustically stable speech chunks like CV and VC over chunks like C, whose realization depends on context-dependent segment-to-segment transitions. The Acoustic Bias hypothesis favors the Demisyllable condition.

Results support the Acoustic Bias hypothesis over the Experience Bias hypothesis. Along all dimensions of analysis, subjects in the Demisyllable condition outperformed subjects in the Segment condition (with subjects in the Onset-Rime condition falling, predictably, somewhere in between). Overall, subjects were more accurate, and responded with shorter reaction times, in the Demisyllable condition. Comparisons reached statistical significance for reaction times and for accurate vowel learning, and approached significance in several other areas as well. The results of this study resonate with the well-known finding that among independently evolved sound-based writing systems, alphabetic systems are extremely rare (perhaps a singularity), while writing systems making use of CV or VC symbols are common (e.g. Gelb 1952/1963, Coulmas 1989, Daniels & Bright 1996). The study has obvious implications for the teaching of literacy, and suggests that phonologists should take seriously the notion of 'demisyllable' as a basic unit of representation (e.g. Fujimura 1989, Itô & Mester 1993).

References
Coulmas, Florian. 1989. The Writing Systems of the World. Oxford: Blackwell.
Daniels, Peter and William Bright. 1996. The World's Writing Systems. Oxford: Oxford University Press.
Fujimura, Osamu. 1989. Demisyllables as sets of features: Comments on Clements' paper. In John Kingston and Mary E. Beckman (eds.), Papers in Laboratory Phonology I: Between the Grammar and the Physics of Speech, 334-340. Cambridge: Cambridge University Press.
Gelb, Ignace. 1952/1963. A Study of Writing. Chicago: University of Chicago Press.
Itô, Junko and Armin Mester. 1993. Japanese Phonology: Constraint Domains and Structure Preservation. In John Goldsmith (ed.), A Handbook of Phonological Theory. Oxford: Blackwell.
Jurafsky, Daniel and James H. Martin. 2008. Speech and Language Processing. Prentice Hall.

November 19 -

Olga Dmitrieva
UC Berkeley

Phonetics vs. phonology: Fundamental frequency as a correlate of stop voicing in English and Spanish.

It is known that fundamental frequency at the onset of the vowel following a stop consonant (onset F0) tends to covary with the voicing feature of the stop itself: onset F0 is lower after voiced stops than after voiceless ones (Hombert 1975). However, there is some controversy about the origins of this covariation and its use in cuing voicing distinctions, especially given the phonetically non-uniform expression of phonological voicing across languages. There are languages which mainly contrast voiceless aspirated with voiceless unaspirated stops in prevocalic position (English), as well as languages which contrast voiceless unaspirated and prevoiced stops (Spanish). The behavior of onset F0 in such languages is expected to differ depending on the presumed cause of the covariation: the physiology of vocal fold vibration, the aerodynamics of aspiration, or VOT in general. The present study examines the distribution of onset F0 in English and Spanish voiced and voiceless prevocalic stops, taking into account both their phonetic and phonological properties. The results cannot be easily reconciled with the proposed aerodynamic and physiological explanations of the covariation of onset F0 with voicing (Ladefoged 1967; Löfqvist et al. 1989) and suggest instead that phonological factors play a major role in determining the distribution of onset F0. Members of the opposing phonological categories within each language are differentiated through onset F0 independently of their phonetic implementation, while equivalent phonetic categories across languages do not follow the same trend in the distribution of onset F0 due to differences in their phonological status. These results provide support for adaptive dispersion theory (Liljencrants and Lindblom 1972, Lindblom 1990) in the domain of secondary cues to phonological contrasts.

November 26 -

Judith Kroll
The Pennsylvania State University

Cross-language competition begins during speech planning but extends into bilingual speech

Recent bilingual studies have shown that both languages are engaged when only a single language is required. Critically, cross-language activation occurs in tasks that are highly skilled, such as listening, reading, and speaking. However, bilinguals do not ordinarily suffer the consequences of cross-language interference, suggesting that they possess a mechanism of cognitive control that allows them to effectively select the language they intend to use. I will present data from three studies that use acoustic measures to demonstrate that not only are both languages active during the earliest stages of planning, but that cross-language activity extends into the execution of the speech plan.

December 3 -

LSA Practice Talks - note that we will meet 11 am to 1 pm this week


11:00 - Susanne Gahl
UC Berkeley

11:30 - Stephanie Shih
Stanford University and UC Berkeley

The similarity basis for consonant-tone interaction as Agreement by Correspondence

This paper addresses the ongoing debate over the distinction between Agreement by Correspondence and the previously dominant theory of autosegmental feature-spreading, focusing on a key conceptual difference between the two theories: the role of similarity in harmony patterns. Using data from consonant-tone interaction in Dioula d'Odienné, I propose that sonority underlies the relationship between segments and tone. Agreement by Correspondence's unique ability to make direct reference to similarity in determining segmental agreement makes it better suited for handling phenomena like consonant-tone interaction.

12:00 - John Sylak-Glassman
UC Berkeley

The Phonetic Properties of Voiced Stops Descended from Nasals in Ditidaht

Five genetically diverse languages of the Pacific Northwest Coast of North America underwent an areally diffused and cross-linguistically rare sound change in which nasal stops (e.g. /m, n/) denasalized to voiced oral stops (e.g. /b, d/). This study examines the phonetic results of that change for the first time based on new data from Ditidaht (Wakashan). The voiced stops exhibit significant prevoicing and have the same duration as the contemporary nasal consonants (which can all be traced to contact, baby talk, or sound symbolism). These characteristics may be phonetic relics of the historical nasals from which the contemporary voiced stops descended.

12:30 - Melinda Fricke and Keith Johnson
UC Berkeley

Development of coarticulatory patterns in spontaneous speech

While previous studies have focused on carefully controlled laboratory speech, this study compares fricative-vowel rounding coarticulation in adults' and toddlers' spontaneous speech. We analyzed the spectra of /s/ when it occurred either before or after front vs. rounded vowels. For adults, we found clear evidence of anticipatory rounding coarticulation, as well as some transitory perseverative coarticulation. For children, there was no obvious rounding coarticulation, but rather palatalization of /s/ in front vowel contexts, especially in the perseverative direction. Compared to child speech, adult spontaneous speech thus exhibits less mechanical linkage of articulators, and more anticipatory inter-articulator coordination.