The Berkeley Phonetics & Phonology Forum ("Phorum") is a weekly talk and discussion series featuring presentations on all aspects of phonology and phonetics.
This talk reviews several areas of phonetics that were important around 1924, when the Linguistic Society of America was established. The 1920s saw widespread use and expansion of the IPA for phonetic description of languages, including major works on American English phonetics; new speech production laboratories in the U.S. with many specialized instruments; the beginnings of the acoustic theory of speech production; benefits from telecommunications research for basic speech science; and the participation of linguists and non-linguists in phonetics.
Ese Ejja is a Takanan language spoken in the Bolivian Amazon. This talk presents a working analysis of the distribution of accent in Ese Ejja phonological words headed by nouns and verbs. Phonological words are (partially) defined as bearing a primary accent, which falls on one of the first three syllables of the word and whose consistent phonetic correlate is high pitch. We argue that the distribution of primary accent depends on a complex interaction of factors, including (1) inherent accent of a word, (2) accent assignment from affixes/clitics, (3) accent assignment based on major part of speech (noun vs. verb) and word class (e.g. transitive vs. intransitive verb), (4) rules of accent clash resolution, (5) rules of (trochaic) footing, and (6) restrictions on the 3-syllable primary accent window. Of particular theoretical interest are the proposed rules of accent assignment and accent clash resolution, the lack of alignment between trochaic feet and the 3-syllable accent window, and the correlation between grammatical categories and distinct phonological patterns.
We show that a class of cases previously studied in terms of learning abstract phonological underlying representations (URs) can be handled by a learner that chooses URs from a contextually conditioned distribution over observed surface representations. We implement such a learner in a Maximum Entropy version of Optimality Theory, in which UR learning is an instance of semi-supervised learning. Our objective function incorporates a term aimed at ensuring generalization, which is independently required for phonotactic learning in Optimality Theory, and it has no bias toward a single UR per morpheme. This learner succeeds on a test language provided by Tesar (2006) as a challenge for UR learning. We also provide successful results on a toy case modeled on French vowel alternations, which have likewise been analyzed previously in terms of abstract URs. This case includes lexically conditioned variation, an aspect of the data that cannot be handled by abstract URs, showing that in this respect our approach is more general.
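For readers unfamiliar with the framework, the core of a Maximum Entropy grammar is that each candidate surface form receives a probability proportional to the exponential of its negative weighted constraint-violation sum. The sketch below illustrates only this standard probability computation, not the talk's UR-learning objective; the constraint names, weights, and candidate forms are invented for illustration.

```python
import math

def candidate_probs(candidates, weights):
    """MaxEnt OT: softmax over negative weighted violation sums.

    candidates: dict mapping candidate name -> list of violation counts,
                one count per constraint, aligned with `weights`.
    """
    # Harmony = -(sum of weight * violation count) for each candidate.
    harmonies = {c: -sum(w * v for w, v in zip(weights, viols))
                 for c, viols in candidates.items()}
    z = sum(math.exp(h) for h in harmonies.values())  # normalizer
    return {c: math.exp(h) / z for c, h in harmonies.items()}

# Hypothetical example: two candidates for one input, evaluated against
# two constraints (*Coda, Dep) with illustrative weights 2.0 and 0.5.
cands = {"[pat]": [0, 1],    # violates Dep once
         "[pa.ta]": [1, 0]}  # violates *Coda once
probs = candidate_probs(cands, weights=[2.0, 0.5])
```

With these toy weights, the candidate violating the lower-weighted constraint receives the larger share of probability mass; learning then amounts to adjusting the weights (and, in the talk's setting, the UR distribution) to fit the observed data.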