Abney, Drew H.
Complexity matching across conversational settings
Recent studies of dyadic interaction have examined phenomena of synchronization, entrainment, alignment, and convergence. All of these forms of behavioral matching have been hypothesized to play a supportive role in establishing coordination and common ground between interlocutors. Behavioral matching involves one-to-one correspondences between partners, but interlocutor behaviors may also converge in terms of broader statistical parameters, such as the means, variances, and shapes of frequency distributions. Recent work in our labs has tested this possibility in the context of distributions that appear as power laws over certain temporal scales of measurement in conversational speech signals. I analyzed speech signals from various social settings as time series of acoustic onset events and found that events cluster across timescales according to a power law, and that inter-event intervals are distributed as an inverse power law. Notably, I observed convergence of power laws across dyads, and the degree of convergence depended on the social context. This type of convergence is referred to as complexity matching, and is predicted by corresponding theory in statistical mechanics.
In this presentation, I will provide a brief overview of complexity matching and how it relates to and extends contemporary work studying linguistic activity in social settings. I will then present two separate studies demonstrating how a theory of complexity matching in human conversation and interaction can span diverse settings. In the first study, I show how the degree of complexity matching varies across affiliative and argumentative conversational contexts. In the second study, I show how this framework and acoustic analysis can be applied to infant-caregiver vocal interactions.
Tense systems across languages support efficient communication
All languages have ways of expressing location in time, but they differ widely in their grammatical tense systems. At the same time, there are tense systems that recur across unrelated languages. What explains this wide but constrained variation? Taking a functionalist perspective, we propose that tense systems are shaped by the need to support efficient communication - a need that has recently been shown to explain cross-language semantic variation in other domains. We test this proposal computationally against the tense systems of 64 languages. We find that most languages in the sample support near-optimally efficient communication, but with some interesting and potentially illuminating exceptions. We conclude that efficient communication may play an important role in explaining why tense systems vary across languages in the ways they do.
Observing Fictive Motion in the Wild
Fictive motion is a case of figurative language that describes static spatial scenes in terms of motion, as in The road runs along the river. Previous experimental research has concentrated on understanding how fictive motion is processed. While the results strongly suggest that we mentally simulate fictive motion, these studies tell us little about how fictive motion is actually used in everyday discourse. In this talk, we will present discourse data from the TV News Archive, examining both the linguistic structure and the gestures used with fictive motion. We show that, despite earlier claims, the progressive (or imperfective) aspect is commonly used with fictive motion, and that co-occurring gestures help to elicit mental simulation. As the first study to examine real discourse data, we provide comprehensive results that add to our knowledge of fictive motion and figurative language.
Toward a Cognitive Science of Swearing
Profanity is special. Unlike most of language, profanity can be produced by people who are missing Broca's and Wernicke's areas. Unlike most of language, it defies ostensibly universal rules of grammar. And unlike most of language, it changes over time in patterns that are largely predictable. This talk presents unique features of profanity: features that stretch and ultimately demand modifications to existing theories of how language works.
Computational Construction Grammar for Speech
Construction Grammar theories build an encompassing view of how language works by abandoning the traditional distinctions between pragmatics, semantics, syntax, and morphology and incorporating all levels within one main data structure: the construction. Yet if it is really "constructions all the way down", mappings between continuous speech signals and discrete meanings should be handled by the framework as well. In this talk, I will pose a number of challenges for undertaking this task in a computational Construction Grammar framework, and demonstrate a solution implemented in Fluid Construction Grammar. The second part of the talk shows how such a "phonotactic grammar" can be learned in a discriminative word learning task consisting of minimal pairs (e.g. "bim" vs. "kim"). Starting from word-based phonotactic templates, segment categories are shown to emerge that depend on their occurrence in specific templates. Finally, higher-order phonotactic templates are distilled that encode more general constraints on the formation of words in the source language acquired by the learner.
Using Bayesian Models to Discriminate Cognitive Theories of Time and Space
How do we construct a sense of time in the absence of a perceptual system devoted to timing? Two major theories contend that people think about time via obligatory associations with the domain of space. Conceptual Metaphor Theory suggests that people make decisions about time by directionally integrating information from the domain of space (Lakoff & Johnson, 1999), whereas A Theory of Magnitude posits conceptual overlap and symmetrical flow of information between the two domains (Walsh, 2003).
I present evidence that is inconsistent with both of these accounts as they are traditionally represented, and show that neither account makes predictions specific enough to discriminate between them empirically without additional assumptions. To address this problem of underspecification, I provide a formalization of each theory in terms of simple Bayesian models, which can be used to predict individual behavior in an established time and space judgment paradigm. I demonstrate that this computational formalism can be applied to specify the two theories in comparable terms and provide a basis for better informed, empirically driven comparison of their predictions.
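To make the contrast concrete, here is a minimal, hypothetical sketch of how the two accounts might be formalized as simple Bayesian cue-fusion models (this is an illustrative assumption, not the models from the talk; the Gaussian cues and the variance values are invented for the example). The asymmetric, metaphor-style model lets spatial information inform temporal estimates but not vice versa; the symmetric, ATOM-style model lets information flow both ways:

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Precision-weighted fusion of two Gaussian cues (posterior mean)."""
    w = (1 / var_a) / (1 / var_a + 1 / var_b)
    return w * mu_a + (1 - w) * mu_b

def asymmetric(time_cue, space_cue, var_t=4.0, var_s=1.0):
    """CMT-style: time estimates borrow from space, space ignores time."""
    est_time = fuse(time_cue, var_t, space_cue, var_s)
    est_space = space_cue
    return est_time, est_space

def symmetric(time_cue, space_cue, var_t=4.0, var_s=1.0):
    """ATOM-style: each domain's estimate fuses both cues."""
    est_time = fuse(time_cue, var_t, space_cue, var_s)
    est_space = fuse(space_cue, var_s, time_cue, var_t)
    return est_time, est_space
```

On these assumptions the two models make identical temporal predictions but diverge spatially: only the symmetric model predicts that temporal information biases spatial judgments, which is the kind of behavioral signature that can discriminate the theories empirically.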
Language evolution in the lab tends toward informative communication
Why do languages parcel human experience into categories in the ways they do? Languages vary widely in their category systems but not arbitrarily, and one possibility is that this constrained variation reflects universal communicative needs. Consistent with this idea, it has been shown that attested category systems tend to support highly informative communication. However, it is not yet known what process produces these informative systems. We show that human simulation of cultural transmission in the lab produces systems of semantic categories that converge toward greater informativeness, in the domains of color and spatial relations. These findings suggest that larger-scale cultural transmission over historical time could have produced the diverse yet informative category systems found in the world’s languages.
Word Order Constrains the Diachronic Development of Mandarin 'One'-phrases as NPIs
Mandarin Chinese 'one'-phrases, consisting of the numeral 'one', a unit word (UW: classifier/measure word), and a noun, have multiple functions, including counting, denoting an indefinite amount, and behaving as negative polarity items (NPIs). These meanings can be teased apart by their association with two types of word orders, SOV/SVO and Num(eral)-UW-Noun/Noun-Num-UW. This study aims to capture the association between the NPI reading and word orders across three periods, Old Chinese, Middle Chinese, and Early Mandarin Chinese, with data obtained from the Academia Sinica Ancient Chinese Corpus. The cross-period comparison shows that 'one'-phrases as NPIs gradually formed an association with a focal position, which serves to distinguish the NPI reading from the other readings. The development of 'one'-phrases as NPIs is correlated with the changes in word orders. This diachronic analysis explains the mechanism by which one meaning of a polysemous phrase becomes distinguished through association with its semantically related constructions.
Cross-modal speech training in the acquisition of novel phoneme categories
Experience with one's native language from birth gives rise to phonemic representations that are robust in the face of considerable acoustic variation in natural speech. However, the well-entrenched biases of the perceptual system to these native phoneme categories can make it difficult to acquire the speech sounds of a new language in adulthood. In this talk, I discuss two behavioral studies designed to examine the types of information that novice learners find useful when acquiring fledgling phonemic categories. A particular focus of the work concerns what I'll call "cross-modal" learning - the interaction of acoustic and articulatory information - as a means to build more stable representations and reduce interference from native language categories.
Spatial Metaphor and Geography Reasoning
A growing body of evidence suggests that cardinal directions are understood through bodily axes, with the metaphor NORTH IS UP generating a particularly rich set of entailments. My work investigates the role that this metaphorical mapping plays in reasoning about geography. Using a mousetracking paradigm, I present participants with images of two locations and ask them about their relative location. In addition to an overall effect of metaphorical influence on reasoning, I also explore individual differences in terms of both how frequently individuals evoke these metaphors in their own language use, and in terms of their familiarity with different locations used in the study.
Causal and concessive adverbial subordinators in Brazilian Portuguese
In Brazilian Portuguese, besides having at their disposal more grammaticalized and therefore more semantically opaque adverbial subordinators to signal causal and concessive (discourse) relations, such as porque and embora respectively, speakers also have a wide range of (complex) adverbial subordinators formed from a lexical item and a subordinating particle, e.g. já que (literally, 'already that') and ainda que (literally, 'still that'), causal and concessive respectively. Given this grammatical scenario, I address two main questions in this talk: (i) does the semantic material of the lexical item at the base of each of these complex adverbial subordinators contribute to the overall meaning of causal and concessive adverbial constructions?; (ii) do these (complex) subordinators actually provide evidence for the conceptual organization of causal and concessive semantic structure? I explore these questions both from a theoretical point of view, considering matters related to categorization and the (re)framing of events, and from an empirical point of view, by means of corpus analysis.
Conceptual Integration and Multi-Modal Discourse Comprehension
Iconic co-speech gestures are spontaneous body movements produced in coordination with speaking. For example, a speaker might trace an oval in the air as he says the word "platter". We suggest that speakers utilize conceptual integration, or blending, processes to combine linguistic information with visual-spatial and motoric information made available through gestures. Our model of multi-modal discourse makes a number of testable predictions. First, it suggests that gestural information enables speakers to formulate visually specific cognitive models of concrete discourse referents. Further, it suggests both visuo-spatial and sensori-motor working memory (WM) systems play an important role in speech-gesture integration. In this talk, I will describe a series of experiments that support this model of multi-modal discourse comprehension, and discuss its implications for conceptual structure.
Force-dynamic image schemas: between verb and argument structure construction
There has been a long debate over the relative semantic contribution of verbs and argument structure constructions to the structure of the event expressed by a clause, in particular semantic structures described as "transfer", "emission", "application" and so on. I will argue that these semantic structures are force-dynamic image schemas that are only partly constrained by verb meaning and argument structure construction. Verbs have a force-dynamic potential that allows them to be construed in more than one force-dynamic image schema. Argument structure constructions constrain but do not determine the force-dynamic structure of the event expressed by the verb occurring in the construction. The mapping between verb roots and force-dynamic image schemas varies across languages, but additional research is required to determine what constraints there are on possible mappings. This talk will focus more on data from English and other languages than my BLS presentation does, but I may also discuss the semantic analysis from my BLS presentation if there is interest.
Languages as Adaptive Systems
I present evidence for a perhaps provocative position that languages adapt to a variety of constraints operating on them. I will discuss how we might identify the selective pressures on languages, and offer some evidence from a diverse range of studies--from cross-linguistic analysis to computational modeling--that language has echoes of these evolutionary processes that mark it as an adaptive system. Rather than an abstract computational system ensconced in genetic pre-determinism, properties of language, even morphosyntax, may show a kind of adaptive flexibility under diverse conditions.
This is a collaborative project with Gary Lupyan, U. Wisconsin-Madison.
Viewpoint: constructions and discourse
This talk will explore several types of linguistic constructions that dedicate some of their formal features to the representation of viewpoint. The examples include, among others, various types of XYZ constructions, negation, and the use of pronouns in narrative discourse. The data suggest that these constructions profile types of viewpoint configurations, either through their overall form (as is the case in One person's X is another person's Y) or through specific grammatical forms, such as the genitive, or both. The analyses also make it clear that viewpoint allocation through constructional means relies on discourse networks, rather than specific local forms. This leads to two broad observations. One, that the concept of constructional compositionality is useful in explaining the contribution of specific forms to the meanings of constructions and discourse types. Two, that even the most basic forms, such as the first-person pronoun, may prompt different interpretations on the basis of their place in the network of discourse spaces.
A frame semantic approach to the interpretation of null arguments in English and Spanish
Null or implicit arguments (Bhatt & Pancheva 2006) have received attention regarding how they are interpreted once omitted. In a sentence like I understand Ø[Content], the ‘content understood’ is definite null instantiated (DNI), i.e., it is obligatorily retrievable from context. In contrast, a verb like eat gets an indefinite reading (INI), e.g., Who was eating Ø[Ingestibles] in here? The referent can remain unidentified. Grammatical and information-structural explanations have been explored for null instantiation (NI) (Fillmore 1986, Goldberg 2005, Lambrecht & Lemoine 2005), while questions remain about the lexical semantics of NI (Resnik 1996, Rappaport Hovav & Levin 1998). For the purposes of this paper, we seek lexical generalizations that can be made about DNI-licensing verbs (e.g., understand, arrive) and INI-licensing verbs (e.g., eat, bake, read). Following insights regarding the relationship of frames to null instantiation (per Ruppenhofer & Michaelis 2009), we further suggest that a finer-grained distinction by frame role may reveal commonalities within DNI. With this work, we hope to shed light on cross-linguistically consistent generalizations in null argument interpretation based on verbs’ association with broader semantic frames, more precisely on the shared high-level semantics of frame roles.
Constructional Annotation Net (CAN): A database of metaphors, frames and grammatical constructions
In this talk, I introduce the Constructional Annotation Net (CAN), a database I have created to unify multiple resources for semantic annotation in one user-friendly interface. CAN is built in Drupal. The goal is to bring together the metaphor resources in MetaNet, the frame resources in FrameNet, and other useful resources such as various constructicons (collections of constructions) in one place. These resources can then be deployed to annotate texts from corpora, from the internet, and those uploaded by individual users. Annotations and semantic libraries are searchable and downloadable by non-contributing users. I illustrate with data from the Cancer Metaphor Project (a project of which I am a member at UC Merced), showing how we have been using CAN to annotate cancer-related blog texts. CAN is a multilingual resource, and currently has annotation capabilities in English, Spanish, French, and Romanian, with initial development in Japanese. This talk will be partially in the form of a demo.
Learning Robust Word Representations
A number of studies with infants and with young children suggest that hearing words produced by multiple talkers helps learners to develop more robust word representations. However, pilot work suggests two important caveats to previous data: 1) only less phonologically proficient learners benefit from phonetic variation in new word perception but 2) all learners may benefit from variation in new word production. The proposed studies seek further evidence to bolster these two points as well as the beginning of an explanation about why production is different from perception.
Despot, Kristina Štrkalj
Conceptual structure of Mind in Croatian: a diachronic and a synchronic approach
Diachronic analysis of linguistic data can play an important part in revealing that metaphor is not just about words, but about deeper conceptual structures. Synchronic polysemy, for instance, often reflects diachronic development and linguistic change, in the sense that non-metaphorical meaning evolves and motivates metaphorical meaning (Sweetser 1990 and others). However, metaphor in a diachronic perspective has attracted much less research interest than metaphor in a synchronic perspective. In an endeavor to bring a diachronic dimension to metaphor studies, in this talk I provide an analysis of metaphor in a diachronic perspective in the Croatian language with respect to the target domain of the Mind. The evolution of the concept of Mind in Croatian is traced by comparing examples of metaphorical linguistic expressions for mind and cognizing in Old Croatian with those from modern Croatian. Specifically, I focus on the Mind Is a Body system. The research corpus consists of the Croatian Linguistic Repository (Institute of the Croatian Language and Linguistics), Google examples (cautiously used), the Dictionary of the Croatian or Serbian Language (known as the Academy's Dictionary), and the corpus of texts for the Old-Croatian Dictionary (Institute of the Croatian Language and Linguistics). The analysis has shown that the metaphorical system of the earlier stages of the Croatian language was less diverse and that its metaphorical hierarchical structure was fairly simple. Primary metaphors and general-level metaphors are regularly present in Old Croatian, but the system is poor in source and target subcases, entailments, and special cases in comparison with modern Croatian. The analysis also reveals that metaphoric expressions were used far less frequently in medieval and early modern times, and that the language of those times was much simpler, relying on literal expressions and basic-level, prototypical words.
This may show that diversity in linguistic metaphor emerges from primary metaphors and general-level metaphors, developing toward complex metaphors and special cases over time.
Construction-based metaphor analysis: Integrating MetaNet and Embodied Construction Grammar
The MetaNet metaphor repository includes representations of complex networks of schemas and metaphors which support detailed analysis of the metaphor(s) expressed in a given linguistic expression. But MetaNet does not currently include constructional representations that would enable us to analyze how these metaphors are being expressed via the constructions instantiated in an expression. Embodied Construction Grammar (ECG) is a complementary resource that provides a means to accomplish the necessary constructional analysis. ECG supports computationally implemented constructional analyses of utterances, returning complex semantic specifications in the form of schemas, role bindings, and role value specifications. As I discuss in this talk, ECG can also be used to identify the key metaphor role mappings associated with the metaphor(s) expressed in a given expression. MetaNet and ECG thus each contribute key parts to a constructional analysis of linguistic expressions of metaphor. Moreover, the compatibility of the formalized meaning representations utilized by each system provides the necessary conceptual and representational basis for their integration.
The Role of Deictic Reference in a Grammatical Divergence between Visual and Tactile American Sign Language
This paper examines the role of deictic reference in a grammatical divergence between Visual American Sign Language (VASL) and Tactile American Sign Language (TASL) currently taking place among DeafBlind people in Seattle, Washington. Most members of the Seattle DeafBlind community are born deaf and, due to a genetic condition, slowly lose vision. They come to Seattle using VASL as their primary language. However, as vision is lost, it becomes increasingly difficult to link deictic signs to the present, remembered, or imagined environment. As a result, the deictic system becomes inoperable. This paper examines three interactional mechanisms employed by DeafBlind people to address this problem: signal transposition, sign calibration, and sign creation. Signal transposition involves a transposition of handshapes onto locations on the body of the addressee, yielding a tactually accessible ground. Sign calibration is a process through which participants intuitively adjust signs that have lost their capacity to refer to objects in the immediate environment. As this process is honed, new rules for the formation of signs are generated and novel forms are created that would not be predicted given the grammar of VASL. I call this process sign creation. These interactional processes affect the internal organization of the deictic system, but they also echo further into the grammar, affecting multiple sub-systems, ultimately leading to the emergence of a new, tactile signed language.
Frame Semantics-Based Corpus Linguistics: Shedding Light on Emotion
In this talk I report on the results and questions I have generated so far in my frame-based analysis and corpus annotation of emotion categories, focusing on words whose primary denotation is some perspective on emotional experience, e.g. "scared", "scary", "angry", "annoying", "enjoy", etc., rather than behavioral reactions ("weep", "tremble") that metonymically evoke emotion or metaphorical expressions of emotion. My primary concern is to construct a model of emotions that respects the categories found in my languages of investigation; in this talk I discuss primarily English data.
There are a variety of linguistic, psychological, philosophical, and anthropological models of emotion, roughly characterizable in terms of how emotions are best described (categories, positions in a "space", or bundles of appraisals); how they come about (hardwired/inherent/universal vs. learned/constructed); how they relate to potentially separate affective categories (moods, attitudes, personality); and, most fundamentally, what they are (neural activity, cognitive states, feelings in the body, or a mixture of these).
A Frame Semantics-based corpus linguistics approach provides evidence in favor of some particular choices out of these, in particular showing that prototypical emotion words in English ("anger", "happy", "enjoy") do not, as one might expect from the majority of psychological theories, divide episodic emotion from moods or attitudes (Ekman & Cordaro 2011; Scherer 2005; but see also Plutchik 1980), while personality traits are partially picked out by separate linguistic categories ("irascible", "cry-baby", "happy-go-lucky"). Notably, based on my corpus annotation of a number of emotion subcategories, emotion-related words refer to long-term attitudes more often than they refer to momentary states, thus suggesting a solution to the emotion definition problem:
How can emotions be distinguished from bodily states like hunger and fatigue that pertain to desires and behavior?
Can or should emotions include categories without a specific "good"/"bad" judgement? Perhaps the solution lies in acknowledging that the prototype around which emotions are centered is the following kind of scene:
A Cognizer holds some judgement or complex of judgements of a Stimulus, not necessarily positive or negative, but having potential consequences for the Cognizer by virtue of the nature of the Stimulus itself, rather than because of the state of the Cognizer independent of the Stimulus.
A Computational Framework for Conceptual Blending, with Applications in Mathematics and Music
Conceptual blending is a concept invention method that is advocated in cognitive science as a fundamental and uniquely human engine for creative thinking. This talk presents a computational framework, based on Goguen's category-theoretical formalization of blending, which treats the blending process as an interleaved search and evaluation problem. The system is demonstrated with examples from different domains where creativity is important. In particular, we show how the blending framework is capable of inventing 'Eureka' lemmas to facilitate mathematical proofs, and we show how the framework can harmonize chord progressions for automated music generation.
Affordances, Actionability and Simulation
(with Srini Narayanan, ICSI Berkeley)
The notion of affordances depends crucially on the actions available to an agent in context. When we add the expected utility of these actions in context, the result has been called actionability. There is increasing evidence that AI and Cognitive Science would benefit from shifting from a focus on abstract "truth" to treating actionability as the core issue for agents. Actionability also somewhat changes the traditional concerns of affordances to suggest a greater emphasis on active perception. An agent should also simulate (compute) the likely consequences of actions by itself or other agents. In a social situation, communication and language are important affordances.
The Frame Semantics of 'Communication_Manner’ in English and German
This paper presents an analysis of the cross-linguistic frame validity of a small group of English and German verbs expressing manner of communication, in order to address the question of whether identical semantic frames exist across languages (Boas, 2005). The data for the English and German verbs were drawn from two corpora and manually annotated for frame-semantic elements in accordance with the methodology utilized by the Berkeley FrameNet project (Johnson et al., 2001). The study’s focus is the cross-linguistic analysis of the frequency of the semantic elements pertaining to the Communication_Manner frame. The findings suggest that while the same semantic frame applies to the two languages, significant differences in the frequency of certain frame elements exist. As a result, I argue that when adapting semantic frames based on English to another language, it is important to consider a more magnified approach to determine more subtle differences in how events are conceptualized across languages. This approach includes tracking frame elements and distributional trends for fillers of the semantic elements.
Faultless disagreement and the development of subjective semantics
We use judgments of faultless disagreement over different predicates (e.g., "spotted" versus "tall" versus "pretty") to explore children’s ability to integrate subjectivity in their compositional semantics, coordinating not only their knowledge about word meanings and the world, but also others’ personal preferences and past experiences. While we find the development of faultless disagreement to exhibit an especially prolonged trajectory ending some time in the school years, children's early qualitative explanations give insight into their developing semantics of different predicates.
Perspective taking as a window into cognition
In this talk, I will describe how my research captures some of the trends that have transformed the field of Cognitive Science in recent decades, in terms of viewing cognition (a) as grounded in social interaction, (b) as grounded in the body and the environment, and (c) as a probabilistic system. I will illustrate this point across three lines of research whose shared site of inquiry is perspective taking. The first line of research examines perspective taking in a storytelling task that dissociates the status of information for speakers and addressees to assess whether behavioral adjustments are egocentric or for the addressee. This work demonstrates that when the conversational partner’s perspective can be represented as a simple constraint (e.g., Has my partner heard this story before, vs. not?), speakers can readily adapt various grains of their behavior (utterance planning, articulation, and gestures) to their partner, with implications for models of speech and gesture production. The second line of research examines the influence of multiple cues on perspective taking in a collaborative spatial reconstruction task. By parameterizing various sources of information (one’s egocentric viewpoint, one’s conversational partner’s viewpoint, and the configuration’s intrinsic axis), this work shows that the perspective from which people represent spatial information in memory and describe it to their partners depends on the convergence of social and environmental cues. The final line of research adopts a dynamical systems approach to examine cue integration during perspective taking, with social and environmental cues conceptualized as weighted information evolving over time. Here, perspective selection is assessed through micro-behavioral measures derived from participants’ mouse trajectories as they respond to spatial instructions.
Altogether, this work contributes to understanding the interplay of environmental, social, and cognitive constraints in a range of behaviors (language processing, gesturing, action planning) and sheds light on our capacity to apprehend our environment and coordinate with others.
Gen-Meta: Generating metaphors using a combination of AI reasoning and corpus-based modeling
Metaphor is important in the production of all sorts of mundane discourse: ordinary conversation, news articles, popular novels, advertisements, etc. This presents a challenge for Artificial Intelligence (AI) systems, both in understanding inter-human discourse and in producing more natural-seeming language, as most AI research on metaphor has addressed understanding rather than generation. To redress the balance toward generation of metaphor, we are developing an approach that directly tackles the role of AI systems in communication, uniquely combining this with corpus-based results to guide output toward more natural forms of expression.
Gibbs, Raymond W., Jr.
Metaphor Wars: Conceptual Metaphor in Human Life
The idea that there are enduring metaphors in thought, or conceptual metaphors, has brought a dramatic revolution in the multidisciplinary world of metaphor studies. Cognitive linguistic research, in particular, has been enormously successful in uncovering the vast ways in which conceptual metaphors shape thought, reasoning, metaphorical language use, and many kinds of expressive actions. But “conceptual metaphor theory” has been widely criticized over the past 35 years, by scholars working both within and outside metaphor research. These criticisms have created a long “war over metaphor” that continues to be waged within metaphor and broader scholarly communities. My talk will describe the “metaphor wars” and offer my own assessments of where the conflicts now stand. I will also offer my present views on what a conceptual metaphor is, how conceptual metaphors provide major constraints on human experience, and why conceptual metaphor theory may be one of the most important advances in the history of cognitive science.
A Cognitive Analysis of the English Get-Passive
The English get-passive construction (1) has previously been studied for its syntactic and semantic relations to the more familiar be-passive (2), and to the plethora of related 'get' constructions such as the get-inchoative (3) and the get-causative (4).
1. The man got killed by a whale
2. The man was killed by the whale
3. He got sick
4. I got him fired
Most commonly, the get-passive has been studied for its notable adversative semantics. However, few previous researchers have attempted to provide an in-depth analysis of the semantics of the get-passive in context and an explanation for the origin of its common adversative reading. In this talk I present data to characterize the array of typical uses and senses of the get-passive, and use blending theory to provide an explanation of its semantics. This analysis is then brought to bear on a range of malefactive and adversative phenomena from other languages.
Grammar as an exaptation of spatial cognition: Affirmation in Slavic and English
Affirmation is a very abstract, largely unmarked, and notoriously elusive linguistic category. However, this exceptional quality of being formally amorphous, semantically unstructured, and highly problematic makes the study of affirmation an ideal testing ground for the property of fuzziness typical of all natural language phenomena. In this talk, I will argue that grammatical categories can only be identified within a context-dependent co-ordination system which I call the DISTANCE model. Evolutionarily developed for the purposes of spatial orientation, the DISTANCE model allows us to shape, navigate and expand mental systems. It can be hypothesized that this cognitive mechanism emerged as a result of our perception of space and attempts at its systematization. At the same time, the experience of DISTANCE, based on the visual perception of depth and perspective, may be responsible for the emergence of our ubiquitous power of creativity. Consequently, language and other domains of abstract human mental activity, such as numerical systems, seem to be intrinsically related to space cognition. I will try to illustrate these points referring to examples of the use of affirmation in Slavic and English.
Rupture and Repair: The Effect of Framing on Legal Discourse after 9/11
In this presentation, I will discuss the change in the judiciary's approach to (counter-) terrorism by attending to the effects framing has on this shift. I show that the events of 9/11 were framed in a particular way, with what I term 'crisis discourse'. This particular framing, found in both executive and judicial speech and writing, allowed political and legal actors to call for and justify a new legal approach to terrorism. Crisis discourse ruptured the pre-9/11 legal discourse on 'terrorism as a crime'. Furthermore, the crisis discourse succeeded in replacing the criminal law frame with one of war and prevention, thus suturing the rupture it had caused. I will explore the role framing had in this process and conclude with some thoughts on what implications this might have for judicial independence.
Holmes, Kevin G.
Categorical Perception Beyond Words: Rethinking the Role of Language in Perceptual Discrimination
Categories can affect our perception of the world, rendering between-category differences more salient than within-category ones. Recently, such categorical perception (CP) has been found to be lateralized, with stronger CP observed in the right visual field than the left for categories marked by words in participants' native language (e.g., green vs. blue). Consistent with the linguistic dominance of the left hemisphere of the brain, lateralized CP has been widely regarded as support for the Whorfian hypothesis that language influences perception. I challenge this conclusion, proposing instead that lateralized CP may reflect the left hemisphere's more general propensity for categorical processing. In experiments with novel object stimuli, I show that both named and unnamed categories yield lateralized CP, and to comparable degrees. Contrary to the prevailing interpretation of the phenomenon, these findings imply that lateralized CP does not depend on online language access. They also point to a previously underexplored possibility: that even semantic distinctions that are not linguistically marked may be sufficiently salient to modulate perceptual judgments. I find support for this possibility in experiments testing for lateralized CP in the domains of color and space. Together, the results suggest that our representation of the visual world may be filtered not just through the lexical categories of our native language, but also through semantic distinctions not traditionally viewed as categories.
Exploring the complexity of interjections: On the importance of verbal, vocal and visual resources
Over the past 250 years, the close relation between interjections and body movements or gestures has been thoroughly discussed. And yet, though the advent of video technology has helped to capture our ephemeral body movements, so far very few scholars have studied interjections by taking into account both voice and body so as to reveal their richness and complexity in interaction. When a speaker produces an interjection, he or she relies on the word itself, prosody, and a vast set of visible embodied displays so as to construct meaning. An interjection is a complex element of language that must be analysed as an integrative multimodal marker, hence the necessity to analyse speakers’ verbal, vocal and visual resources in interaction. In this presentation I will start by laying the theoretical and methodological bases for a multimodal study of interjections. I will then explore the range of multimodal strategies with which speakers accompany interjections in interaction. Numerous examples taken from both original and more synthetic data, transcribed and annotated in ELAN and Praat, will illustrate my point. The findings show that speakers integrate a wide range of visual and vocal resources, involved in the co-construction of meaning in interaction, which they synchronise when uttering interjections. This research shows that the usage of synchronised modalities in interjectional utterances can be considered a linguistic and cognitive skill.
A report on some recent neurophonetic studies
It has been my pleasure to be involved in a recent series of studies on neurophonetics with Edward Chang's research group at UCSF. The team has been looking at electrocorticography (ECoG) data recorded while subjects are engaged in speaking or listening tasks. Utilizing the unique richness of ECoG data, we have focused particularly on the neural encoding of phonetic information. This talk will be a review of some of that work, with a focus on the fall-out when linguistic/phonetic assumptions meet neuroscience data.
Polysemy of Georgian preverb gada-: The story's not quite over
The semantics of preverbs in Georgian, particles indicating path on motion verbs, have long vexed Kartvelian linguists. While they clearly function as perfective aspect markers on non-motion verbs, they have been assumed to be arbitrarily distributed on those verbs and to lack any non-aspectual meaning (Harris 1981, Aronson 1990, Melikishvili 2008). My examination of the semantics of 215 verbs using the preverb gada-, however, indicates that these assumptions about the semantics of Georgian preverbs may actually be untrue. I propose for gada- a principled polysemy network based on image schema transformations, demonstrating that gada- is likely not only semantically contentful, but also polysemous and non-arbitrary.
Lyric Viewpoint in the Reader’s Brain
Literary criticism has historically theorized lyric poetry in terms of the relations between the poet and “the lyric ‘I’,” between the poem as a representation and as an event, and between the fictionality of the content and the real world. The reader is typically understood as an audience observing/hearing the lyric discourse. But while the text’s modes of address (including to the reader) have been analyzed at length, little attention has been paid to reader viewpoint, nor to how the lyric event is in fact constructed in and by the reader’s brain. How can we analyze the viewpoint of the reader of a lyric poem? What textual features motivate a reader to align with the speaker, addressee, and/or audience, and what cognitive mechanisms account for such construals / simulational experiences? This talk presents work in progress, applying mental spaces theory to reanalyze canonical models of the lyric as variations on a blend, and using simulation semantics to analyze how reader viewpoint dynamically shifts as motivated by deixis, access organization, canonical perspectives, and other cognitive mechanisms.
The Connecting Functions of Phrasemes in Texts
Web 2.0 offers a unique opportunity to understand phraseme construction and function in written language authored by "citizen authors". Such written texts, posted on public forums of newspaper reader comments, provide a valuable resource for better understanding how writers develop phrasal and sentential structure and function to communicate complex ideas about everyday events and concerns. This presentation will describe some preliminary research on this topic using a variety of examples to illustrate basic concepts and the theoretical framework.
Bayesian pragmatics in a spatial language game
Frank and Goodman (2012) proposed that listeners understand ambiguous utterances by rationally combining evidence about word meaning and the salience of particular objects in context. They found that a Bayesian statistical model using this information provided a near-perfect account of their empirical data. However, their test of the model was based on communication about simple geometrical objects that varied along only three dimensions. In this talk, I will present a study testing whether their proposal extends to the richer and more complex domain of spatial relations. We find that it does. While the results are not as strong as in their original study, they nonetheless demonstrate that simple formal accounts of communication may capture important aspects of pragmatic inference.
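To make the model concrete, the listener computation Frank and Goodman proposed can be sketched as follows: the listener's posterior over referents is proportional to a speaker-informativeness term (which favors words with smaller extensions) multiplied by a salience prior. The toy scenario, lexicon, and salience numbers below are invented for illustration; they are not taken from either study.

```python
# Toy sketch of a rational-listener model in the spirit of Frank & Goodman (2012).
# Lexicon: which words are literally true of which objects (hypothetical).
lexicon = {
    "blue":   {"blue_square", "blue_circle"},
    "square": {"blue_square", "green_square"},
}
objects = ["blue_square", "blue_circle", "green_square"]

# Salience prior over objects, e.g. elicited from norming data (made-up numbers).
salience = {"blue_square": 0.4, "blue_circle": 0.3, "green_square": 0.3}

def speaker_likelihood(word, obj):
    """P(word | object): a speaker prefers more informative words (1/|extension|),
    normalized over the words that are true of the object."""
    if obj not in lexicon[word]:
        return 0.0
    true_words = [w for w in lexicon if obj in lexicon[w]]
    return (1 / len(lexicon[word])) / sum(1 / len(lexicon[w]) for w in true_words)

def listener_posterior(word):
    """P(object | word) proportional to P(word | object) * salience(object)."""
    scores = {o: speaker_likelihood(word, o) * salience[o] for o in objects}
    z = sum(scores.values())
    return {o: s / z for o, s in scores.items()}

print(listener_posterior("blue"))
```

In this toy context, hearing "blue" favors the blue circle over the blue square, even though the word is literally true of both, because a speaker who meant the square had the more informative word "square" available.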
Neural Integration and Cascades in Cognitive Linguistics
This is a short informal overview and discussion of sections of How Brains Think, a book in preparation with Srini Narayanan. The main topic is neural integration and cascades, with a concentration on issues brought up in Chapter 3. Possible topics, depending on time: unconscious neural integration prior to consciousness; X-Nets; sequencing and embedding; linear scales; integrative mappings vs. the integration of metaphoric mappings; constructions and grammatical categories; sound symbolism. I will presuppose an understanding of basic cognitive linguistics.
Understanding noun-noun compounds: Schemas re-re-re-visited
The semantically underspecified nature of noun-noun compounds has posed a special problem for theories of language comprehension. Are the possible semantic relations between nouns in NN compounds listable? If so, how is a particular relation chosen by a listener/comprehender? If not, how are the meanings of conventionalized compounds and the interpretations of novel compounds constrained? In this talk I will argue that experientially motivated schemas play an active role in noun-noun compound comprehension and that this role is consistent with the basic claims of conceptual blending.
A typology of intrinsic and extrinsic reflexivity
The typological description of reflexive markers in the world's languages is almost always approached from a formal and functional perspective (e.g. Geniušienė 1987; König & Gast 2008); that is, most researchers are interested in identifying which reflexive marker(s) exist in a particular language and subsequently exploring their various grammatical functions beyond semantic reflexivity (middle, passive, reciprocal, etc.). However, there is an alternative typological approach to the study of reflexivity, which focuses more on the conceptual structure of reflexive events. As evidenced through the use of pronouns in English (Lederer 2013), I assume that some human actions are canonically self-directed (events performed on the body or in the direction of the body) while other events and states are canonically other-directed. For example, in English, the former type, which I will call 'intrinsically reflexive' events, are often unmarked while the latter type, 'extrinsically reflexive' events, are obligatorily marked (John shaved this morning vs. John was stabbing himself this morning). Given this conceptual distinction, new typological questions arise about how languages do and don't overtly mark intrinsic vs. extrinsic reflexivity. In this talk, I present findings from a 70-language sample and discuss the implications of the data with regard to the grammatical representation of reflexivity.
Using Corpus Methodology to Investigate Conceptual Metaphor
Much recent research on figurative language and conceptual metaphor theory derives from corpus examination, and analysts are increasingly focused on the development of quantificational tools to reveal co-occurrence patterns indicative of source and target domain associations. Some mappings between source and target are transparent and appear in collocation patterns in natural language data. However, other metaphors, especially those that structure abstract processes, are more complex because the target domain is lexically divorced from the source. In this talk, I use economic discourse as a case study to introduce new techniques directed at the quantitative evaluation of metaphorical occurrence when target and source relationships are nonobvious. Constellations of source-domain triggers are identified in the data and shown to disproportionately emerge in topic-specific discourse. The methodology and findings will be discussed in relationship to the Berkeley "brand" of metaphor research, in which metaphorical lexis is connected to metaphorical mappings in the conceptual system.
The influence of metaphor-specific attribute weighting on lexical choice
To understand objects and events in the world, people associate particular properties with particular concepts. For each such concept, a given property can be more centrally important to that concept than other properties. For instance, the property "emits light" is centrally important to the concept lamp, whereas "emits heat" is not. Different concepts can share properties while remaining distinct due to such differences in property importance. Although the property "emits heat" is not crucial to the concept lamp, it is centrally important to the concept cooktop, which also shares with the concept lamp the property "emits light". In this manner, domain-specific attribute weighting forms a basis of conceptual structure. Here I extend an existing connectionist model of semantic cognition to consider the role of attribute weighting in metaphoric thought and to probe the potential for reverberatory activity between language and thought. In particular, I use a simulation of crosslinguistic differences in metaphor-specific attribute weighting to ask whether such variability influences lexical choice during production of non-metaphorical language.
Code-switching across contexts
Code-switching (CS), switching languages at the word, phrase, clause, or sentence level (Valdés, 1988), is part of the bilingual reality of numerous communities and occurs across formal and informal situations. CS can be an intimate way to communicate solidarity with someone. Because CS conveys information about a speaker’s social identity, ideologies, and culture, CS for bilinguals can be a socio-politically charged use of languages. However, there are negative attitudes from those who misunderstand the phenomenon and critique its speakers for not speaking either language well. Research on CS over the past three decades has been primarily based on oral and informal situations. Few studies have observed written CS in casual online language or across formal institutions.
Categorical Perception of Containers
Categorical perception is the notion that categories affect people’s perception of objects in the real world. We predict that participants will show categorical bias in a reconstructive memory task that involves discriminating between different bottles. In line with previous research, our results show that participants misremember bottles in a categorical way, such that size estimates are biased towards the center of the category. The present study extends recent findings from our lab showing that categorical perception effects are magnified by time. Specifically, we look at the degree to which different delay periods increase the effects of categorical perception. Our results demonstrate that (i) categorical perception occurs in container categories and (ii) the effect is magnified by time. We chose containers because there are interesting cross-linguistic differences in container categorization, which lead us to discuss the implications for linguistic relativity.
Looking at framing in everyday language
Framing is important in everyday communication and reasoning. People constantly frame events, states, and situations with the intention of encouraging others to adopt a particular point of view or take particular actions in the world. Though social scientists and cognitive linguists know a good deal about the semantics of framing, relatively little is known about the effects it has on reasoning. This presentation will discuss recent experimental findings on aspectual and metaphorical framing across various domains, including political messages, reckless driving reports, and risk alerts.
The role of language in structure-dependent cognition
One of the most striking features of human cognition is the ability to generate an infinite number of ideas by combining a finite set of elements according to structure-dependent principles. This ability is most clearly displayed in language, but also characterizes other aspects of our cognition such as inference making, mental arithmetic, music cognition, and motor action sequences. In this talk I will address the question of whether the neural mechanisms of natural language play a role in structure-dependent cognition, focusing on mental arithmetic, deductive reasoning, and music cognition in healthy volunteers.
Moore, Kevin Ezra
The primary-metaphor components and generic structure of Moving Ego and Moving Time
This talk refines the conceptual-metaphor analysis of two well-studied metaphors -- Moving Ego and Moving Time (Boroditsky & Ramscar 2002; Gentner, Imai & Boroditsky 2002; Lakoff & Johnson 1980, 1999; Moore 2014). In Moving Ego, temporal relations are depicted in terms of a present that moves relative to other times, as in We are approaching Christmas. In Moving Time, times move relative to the present, as in Christmas is approaching (Clark 1973, Fillmore 1997/1971). The primary-metaphor theory of Grady (1997a,b) seeks to state metaphors in terms of the most direct connections between experiential motivations, metaphor mappings, and linguistic expressions. This talk decomposes Moving Ego and Moving Time into primary metaphors. Among these primary metaphors are three which Moving Ego and Moving Time share: NOW IS HERE, IMMEDIACY IS PROXIMITY, and OCCURRENCE IS ARRIVAL. This analysis reveals the temporal aspects of the experiential motivations of Moving Ego and Moving Time, and identifies what the two metaphors have in common. Accordingly, a generic-level structure is identified and the metaphors are stated as conceptual integration networks (Fauconnier & Turner 2002).
One characteristic that Moving Ego and Moving Time have in common is that canonically they both depict temporal relations relative to the present. In this respect they contrast with the primary metaphor SEQUENCE IS RELATIVE POSITION ON A PATH, as instantiated by expressions like Spring follows winter, which depict relations among times independently of the present. While such expressions have usually been considered to instantiate Moving Time, they have important properties that cannot be captured unless the contrast between SEQUENCE IS RELATIVE POSITION ON A PATH and Moving-Ego/Moving-Time is recognized.
The approach to temporal metaphors advocated here improves the usefulness of conceptual metaphor theory for analyzing how metaphorical expressions depict temporal relations in English and crosslinguistically.
Artificial Neural Networks for Speech Recognition: There and Back Again
Artificial neural networks have been used for speech recognition experiments since the 1960s, though full-fledged efforts were not widely pursued until the late 1980s, and did not come into broad commercial use until fairly recently. In this talk, I’ll try to give some historical perspective, and conclude with some opinions about ways that this development is important, but also how it is limited.
The Polyfunctionality of -ara in Karuk
Karuk is a language isolate spoken along the Klamath River in Northern California. It is a polysynthetic, agglutinative language with a rich assortment of morphological features. The purpose of this talk is to examine the uses of the morpheme -ara, a suffix that can be added to both verb- and noun-stems to form applicative verb-themes and adjectival noun-themes (Bright 1957). Using the formal theoretical framework provided in Cognitive Semantics (Langacker 1991; Lakoff 1999; Ostman and Fried 2004), I will discuss the three prototypical uses of -ara: the Attributive -ara, the Applicative Instrumental -ara, and the Result-State -ara, as well as relevant cases of constructional polysemy and metaphor extension associated with each of these three general constructions.
Persian Compound Verb Constructions
In Persian, a limited number of 'simple' verbs are used, but the vast majority of verbs are complex. Complex verb refers to multi-word verbal expressions consisting of a simple verb plus a complement. Complex verbs in Persian can be grouped into two categories: phrasal verbs (cf. English take down, get back) and light verb constructions (cf. take a bath, make sense). Persian phrasal verbs and light verb constructions differ crucially in the way that the verb and complement compose to create the meaning of the whole, and in particular in the relationship of metaphor to the composed constructional meaning.
In phrasal verbs, the verb and its collocated elements jointly determine the content of the event frame. For example, in æz bɛɪn boɾdæn ('from among take'), which means 'to destroy,' the verb and complement contribute to an event frame of removal which is the source domain for the metaphor EXISTENCE IS PHYSICAL PROXIMITY/ PRESENCE. Another phrasal verb be vodʒud ɑmædæn ('into existence come'), which means 'to come into existence,' involves the same metaphor, but 'come into' is source domain language and 'existence' is target domain language. The phrasal verb væɣt gozaʃtæn ('time put'), which means 'to set aside time,' involves the metaphor TIME IS A RESOURCE.
What distinguishes light verb constructions from phrasal verbs is that the complement of the verb (often a verbal noun) entirely determines the event-type, while the light verb contributes causal structure, aspect, and viewpoint to the event-frame (Family 2006, Megerdoomian 2001). For example, ʃɛkændʒe kæɾdæn ('torture do') and ʃɛkændʒe dɑdæn ('torture give') both mean 'to torture.' What distinguishes the two is the causal construal of the scene. 'Torture do' highlights the agent because kæɾdæn ('do') simply denotes an agent viewpoint on an activity. On the other hand, 'torture give' emphasizes the causal effect of one agent on another, because the light verb dɑdæn ('give') regularly evokes the metaphor CAUSAL TRANSFER IS EXCHANGE. An event of giving necessarily involves a giving agent and a potentially agentive recipient. The patient-viewpointed profile of a torture event, ʃɛkændʒe didæn ('torture see'), means 'experience/suffer torture,' and makes use of the metaphor EXPERIENCE IS PASSIVE PERCEPTION. Light verbs thus allow us to select various viewpoints in a particular event-frame, and also allow us to make fine distinctions between different types of experiencers and agents. Unlike phrasal verbs, light verbs involve a relatively small set of conventional mappings of this kind.
Each Persian light verb construction (e.g. dɑdæn ('give'), gɛɾɛftæn ('get'), zædæn ('hit')) is thus conventionally associated with a particular set of metaphoric mappings onto action and event types. The meaning of the light verb links up to the event frame via metaphor, preserving the causal, aspectual and viewpoint structure of the source domain, and imposing it on the construal of the target-domain event frame.
Here for an argument: Conflict in laboratory and real-world settings
Interpersonal interaction is incredibly complex, weaving together multiple communication streams and communicative signals. While most of our everyday interactions are largely neutral or even positive, conflict is still a key part of the human experience. For a cognitive scientist, conflict provides an interesting venue for studying how context modulates interaction and its consequences. In this talk, I will explore conflict's implications for interaction using laboratory experiments on naturalistic conflict and new data from real-world debates.
The potential for iconicity in vocalization
Theories on the origins of language have often discounted the potential for iconicity – resemblance between form and meaning – in vocalization (e.g. Armstrong & Wilcox, 2007; Tomasello, 2008). Taking an empirical approach to this issue, I will present a series of “vocal charades” experiments that examine the potential for iconic vocalizations to ground the creation of vocal symbol systems. The findings show that people can create iconic vocalizations to effectively communicate a variety of meanings, and further, that they can use these vocalizations to establish conventional signals.
Petruck, Miriam R. L.
Integrating FrameNet and MetaNet - slides
Both FrameNet and MetaNet were developed according to the principles of Frame Semantics (Fillmore 1975, 1985, 2012, inter alia), at the heart of which is the semantic frame, a theoretical construct for the representation of meaning used to characterize events, states of affairs, situations, or objects. While significant differences exist between FrameNet and MetaNet (e.g. types of relations, levels of semantic granularity for frames), the two knowledge bases are essentially complementary. This talk presents joint work with Ellen Dodge to integrate the two knowledge bases. The results of such an integration will improve both resources and provide a means of identifying and analyzing metaphorical language, as well as enhance NLP applications, such as question answering, information extraction, and event tracking. These applications require information about who did what to whom, as well as about relations between and across events and participants.
Experimental Syntax of Croatian: Linearity Beats Hierarchy in Gender Agreement
In Croatian, as well as in some other South Slavic languages, SV and VS conditions in gender agreement of coordinate structures differ substantially. Results of our previous psycholinguistic experiments within the Coordinated Research in the Experimental Morphosyntax of South Slavic Languages (EMSS) project confirm statistically significant differences in agreement types between these two positions. Although prominent claims in the literature favor structural factors (pertaining to the syntactic hierarchy) over linear ones in grammatical agreement, our findings from the controlled psycholinguistic production task experiment, as well as the follow-up grammaticality rating study and non-words study, contradict these claims. The baselines for our research are coordinated structures of same and mixed gender, with the NF and FN conditions being the most interesting. In the VS condition, the first conjunct after the verb is at the same time the hierarchically highest (HCA), the first in linear structure (out of two or more conjuncts; FCA), and the closest to the verb (CCA). This triggers agreement with the highest/first/closest conjunct so strongly that agreement with the lowest/second/furthest conjunct is almost non-existent. In the SV condition, however, the variability of results is much higher, but at the same time there is a visible tendency towards CCA in mixed-gender conditions not containing M as one of the conjuncts. In our current research, we test only the SV condition, as it is more informative. We also focus on two linear features that we labeled “first comes first” and “proximity for all”. Our hypothesis is that the rate of FCA in the SV condition, with the coordinated structure placed outside the focus position, would be even lower than previously attested, despite the fact that the first conjunct is hierarchically the highest of the two in both conditions.
In our previous experiments, the coordinated structure in the SV condition was placed sentence-initially in the linear sentence structure. Since our goal was to disambiguate between the hierarchically first and linearly closest agreement, we did not control for factors pertaining to information structure. In our current experiment, the coordinated structure is placed in the second position, while the very first position is reserved for adverbs. (Stimulus: Dogovor je pomaknut na petak. Target: sjednice i vijeća vs. stimulus: Iznenada je dogovor pomaknut na petak) As a result, we avoid the effect of the focus position. We will present the combination of results from some of the offline comprehension studies, grammaticality judgment ratings, and online production tasks supporting the view that linearity plays not only a more prominent role than has been claimed in the literature, but a crucial one. Our findings are in line with the Parallel Architecture and the Simpler Syntax Hypothesis. The implications of our findings go far beyond gender agreement itself by opening the doors to a possible theory of psycholinguistically informed agreement processing.
Language as a system of constraints on interaction dynamics
In the embodied and situated view of language, its dynamical aspects come to the fore. Yet relating symbolic forms to dynamical processes such as language production, understanding, and coordinated interaction has always been a difficult problem. It has manifested itself as a ‘symbol grounding’ problem within information-processing approaches and as a problem of functionality and normativity in dynamics-oriented ecological approaches.
In my talk I would like to point to a framework that was proposed to account for the relation between symbolic structures and dynamics as early as the 1960s but – probably due to the prominence of computational approaches – did not gain much recognition. This framework, advocated, for example, by Michael Polanyi and Howard Pattee, treats informational structures as constraints on dynamics, inseparable from dynamics but irreducible to it.
I will apply this framework to thinking about natural language. I will show that, on the one hand, it alleviates some recalcitrant theoretical problems, such as accounting for the context dependency and efficiency of language, but, on the other hand, it brings new obligations, such as specifying the various types of relevant dynamics. Most importantly, I will indicate the methodological consequences of adopting such a framework and illustrate them with contemporary research on linguistic interaction, in which studying dynamics is as important as studying linguistic forms.
Word meanings across languages support efficient communication
A central question in cognitive science is why languages have the semantic categories they do. Word meanings vary widely across languages, but this variation is constrained. I will argue that this wide but constrained variation reflects the functional need for efficient communication. I will present a general computational framework that instantiates this idea, and will use that framework to show how this idea accounts for cross-language variation in three semantic domains: color, spatial relations, and kinship.
Existential constructions in typological perspective: Drawing a map from parallel corpora
Distinguishing Existential Constructions from Locational Constructions remains an open question (cf. Borschev and Partee 2002; Partee and Borschev 2004, 2006, 2007; Koch 2012; Creissels 2014). More generally, it is still difficult to find a workable definition of Existential Constructions that goes beyond the restrictive formal account. Classical definitions take for granted that Existential Constructions are specific (or marked) constructions, involving (i) an inversion, (ii) an indefinite NP, (iii) a Location PP and (iv) a thetic reading (e.g. Jespersen 1924, Freeze 1992, Sasse 1987, McNally 2011).
I will present ongoing research carried out within the project ETE (Space, Time and Existence), which aims at drawing a map of the functional equivalents, found in translations, of well-identified existential constructions. Parallel corpora are used as a heuristic tool to reveal the various strategies used across languages, or within the same language, when aligned with grammaticalized existential constructions such as there is (En), hay (Sp), il y a (Fr), c’è (It). The languages involved in the comparison are French, Italian, Romanian and Spanish; Dutch, English and German; Serbian and Russian; Greek; and also two non-Indo-European languages, Hungarian and Arabic. I will present the project’s methodology, research questions and preliminary results.
Borschev, V., and Partee, B. H. 2002. The Russian genitive of negation: Theme-rheme structure or perspective structure? Journal of Slavic Linguistics 10, 105-144.
Creissels, Denis. 2014. Existential predication in typological perspective. Paper presented at the workshop Space, Time and Existence: Typological, cognitive and philosophical viewpoints. 46th Annual Meeting of the Societas Linguistica Europaea, Split, Croatia, 18-21 September 2013.
Freeze, R. 1992. Existentials and other locatives. Language 68, 553-595.
Hoekstra, T., & Mulder, R. 1990. Unergatives as copular verbs: locational and existential predications. The Linguistic Review 7, 1-79.
Jespersen, Otto. 1924. The philosophy of grammar. Reprint Norton Library, 1965.
Koch, P. 2012. Location, existence and possession: a constructional-typological approach. Linguistics 50-3: 533-603.
McNally, L. 2011. Existential sentences. In C. Maienborn, K. von Heusinger & P. Portner (eds.), Semantics: An International Handbook of Natural Language Meaning, Vol. 2, 1829-1848. Berlin: Mouton De Gruyter.
Partee, B. H., and Borschev, V. 2004. The semantics of Russian Genitive of Negation: The nature and role of Perspectival Structure. In Proceedings of Semantics and Linguistic Theory (SALT) 14, eds. Kazuha Watanabe and Robert B. Young, 212-234. Ithaca, NY: CLC Publications.
Partee, B. H., and Borschev, V. 2006. Information structure, Perspectival Structure, diathesis alternation, and the Russian Genitive of Negation. In Proceedings of Ninth Symposium on Logic and Language (LoLa 9), Besenyőtelek, Hungary, August 24–26, 2006, eds. Beáta Gyuris, László Kálmán, Chris Piñón and Károly Varasdi, 120-129. Budapest: Research Institute for Linguistics, Hungarian Academy of Sciences, Theoretical Linguistics Programme, Eötvös Loránd University.
Partee, B. H., and Borschev, V. 2007. Existential Sentences, Be, and the Genitive of Negation in Russian. In: Existence: Semantics and Syntax, Ileana Comorovski and Klaus von Heusinger, eds., Dordrecht: Kluwer/Springer, p. 147-190.
Sasse, Hans-Jürgen. 1987. “The thetic/categorical distinction revisited”. Linguistics 25: 511–580.
Semantic Fieldwork on Cognitive State and Process Terms
In this talk I discuss the use of semantic fieldwork methodology to investigate the lexical semantics of cognitive state and process terms with meanings similar to English 'know', 'think', 'remember', 'forget', and 'remind', among others. The data I discuss comes from ongoing fieldwork in two languages, Kwak'wala (Wakashan) and Turkmen (Turkic). Investigating the lexical semantics of cognitive states in a fieldwork setting presents certain methodological challenges, as cognitive states and processes are invisible and subjective. I therefore begin the talk by illustrating four elicitation methods I have used to try to overcome these challenges: (1) tell-back storytelling, (2) in-context translation tasks, (3) in-context semantic judgment tasks, and (4) visual storytelling. Next, I present a set of fieldwork findings related to the use of terms we might loosely translate as 'mind' in Kwak'wala and Turkmen, pointing out cross-linguistic similarities and differences in the use and meaning of these terms as they relate to the naming of cognitive states and processes. For example, we'll see that the concept of noqe? 'heart, mind' in Kwak'wala is used to name experiences with both emotional and cognitive components. We'll also explore how Turkmen makes obligatory use of the MIND IS A CONTAINER metaphor (Lakoff & Johnson 1980) in relation to one's jaat 'mind' in naming events related to forgetting, remembering, and memorizing. Related uses of kelle 'head' in Turkmen will also be compared and contrasted. The talk will finish with a discussion of some proposed semantic (near-)universals in this semantic domain and how they hold up in light of the Kwak'wala and Turkmen data. Finally, we'll consider the hypothesis that semantic universals in this domain could be constrained by properties of our phenomenal experience of mental states.
Smirnova, Anastasia V.
Evidentiality and inferential reasoning: experimental investigations
In this paper I use methods from cognitive psychology to address a long-standing debate in semantics about the nature of evidential meaning. According to the traditional view, evidentiality grammaticalizes the source of knowledge. Competing modal analyses argue that evidentiality encodes the speaker's subjective evaluation of information in terms of its believability. In an experimental study I hold knowledge source constant and vary the type of logical argument, deductive versus abductive. I find that evidential choices are influenced by both the type of argument and the perceived subjective strength of the argument. These results are predicted by modal analyses but not by the traditional view. Implications for research on evidentiality in cognitive psychology are discussed.
Flexibility in Thought and Language
A central property of human cognition is our ability to connect and draw parallels between different ideas. This ability is powerful, as it allows us to draw inferences that go beyond our experience in the world. My research explores this property of cognition by studying flexible uses of language - cases in which we use a single word to express multiple concepts and the relations between them. In this talk, I will present three sets of studies that explore how flexibility arises in childhood to support learning. These studies suggest that from early in life, children are flexible, and expect words to label different ideas. However, they also indicate that children initially struggle to restrict their flexibility, leading to striking errors. I explore how children ultimately constrain their flexibility, by drawing on their knowledge of the world and the social context in which language is used.
The scope of conventionality: Do children expect newly-learned words to be mutually known?
When children learn a new word, do they expect others to know that word? Previous work by Diesendruck & Markson (2001) suggests that by age 3, children expect common nouns to be mutually known. In their study, children learned a label for one of two novel objects (“This is a dax”) either in a puppet’s presence or absence. When the puppet later asked for an object using a second label (“Give me the bem”), children in both conditions gave the puppet the previously unlabeled object, suggesting that they believed the puppet already knew the first label. Across three experiments, I will critically evaluate the claim that children’s behavior in previous studies is contingent on their presumption of shared knowledge of conventional meanings. Our findings raise new questions about how children make inferences about conventional meanings, and have implications for lexical and pragmatic approaches to word learning.
Spatial language promotes cross-magnitude associations in early childhood
Across languages and cultures, spatial language is frequently co-opted to describe other domains. In English, for example, we describe temporal durations as long or short, numbers as big or small, and auditory pitch as high or low. In the present work, we explore how children’s experience with spatial metaphors influences their cross-magnitude matching ability. To test this, we first probed whether English-learning children are better able to match magnitudes in ways that reflect familiar spatial metaphors compared to unfamiliar ones. Second, we explored whether children are better at matching magnitudes when spatial metaphors are provided – even when the metaphors are unfamiliar – compared to when the task is purely perceptual. Our preliminary findings suggest that spatial language can promote the construction of cross-magnitude associations in early childhood. Critically, this process appears to be equally accessible for familiar and novel spatial metaphors, suggesting that experience with specific spatial metaphors is not necessary for forming these associations. Instead, spatial language may promote the perceptual organization of domains such as time and pitch by providing ordinal cues that indicate how the endpoints of these magnitudes should be aligned.
Sweetser, Eve, and Kashmiri Stec
Managing multiple viewpoints: Coordinating embedded perspective in multi-modal narratives
Linguistic and narratological descriptions of the coordination of multiple viewpoints often focus on embedded representational structures, such as the difference between reported speech or thought and free indirect discourse (e.g. Sanders and Redeker 1996; Dancygier 2012). Gestural descriptions often look to spatial frames of reference, and how the expression of gestural viewpoint is affected by their combination (e.g. Perniss and Özyürek, 2011). But more options for embedded viewpoint are possible, especially if we widen the scope of inquiry to include spontaneous multi-modal narratives and the representation of embedded mental space structures (Fauconnier 1997) in speech and co-speech gesture.
Looking at our dataset, we find that an embedded "mixed viewpoint" occurs both within and across modalities: speech may show linguistic markers of viewpoint embedding, but it may also indicate maintenance of one viewpoint while co-speech gesture indicates maintenance of a different one. We investigate this phenomenon of mixed viewpoint, defined as instances where different mental spaces are simultaneously activated by different modalities, using a dataset of video-recorded English autobiographical narratives told between pairs of friends. This use of naturalistic data enables an investigation of embedded viewpoint "in the wild" (cf. Hutchins 1996), and overcomes limitations of previous work that was either modality-specific or constrained by experimental design.
We find that even though a narrative may itself be complex and rich with embedded viewpoint structure, the accompanying co-speech gestures exhibit an equally complex, and often very different, viewpoint structure. For example, in one segment we see a narrator holding a gesture associated with one mental space (narrative) even as she changes her body orientation and elaborates another mental space in speech (speaker-addressee interaction) - all while continuing to hold that first gesture. In this example, the maintenance of the narrative gesture indicates the speaker's intention to return to the narrative space and continue elaborating it. Here, speech alone suggests the activation of one mental space (interaction), but looking to the complete multi-modal utterance, we see that not only are multiple mental spaces active (interaction in speech; narrative in gesture), they are active in different ways: one foregrounded (interaction), the other backgrounded (narrative), even though both are perceptually present.
We will focus on cases like this, where viewpoint multiplicity is distributed across modalities. Topics addressed include the combination of mental spaces seen in our corpus, as well as the means by which those mental spaces are activated or maintained (e.g. gesture, body orientation, different linguistic markers, etc.). We draw on both mental spaces (Fauconnier 1997) and conceptual integration (Fauconnier and Turner, 2002) to explain how mixed multi-modal viewpoint is possible, and provide a typology of the combinations seen in our corpus.
Sweetser, Eve, and Isaac Smith
Conditional constructions, gestural space, and mental spaces
In recent years, a major body of research has shown the close connection between speech and co-speech gesture. Gesture has proven an added source of information about co-speech cognitive patterns, for example in examining metaphors of time (Núñez & Sweetser 2006, Núñez et al. 2012). It is also generally accepted that in gesture, as in signed languages, locations or areas in physical gesture space are associated with topics or other areas of meaning being expressed. Less work has been done on the associations between grammatical constructions and gesture, although gesture researchers generally acknowledge that "erasure" swipes often accompany negation. This paper examines the spatial structure of gestures accompanying English if-conditionals, and argues that these physical spatial structures reflect mental space construction. It has been claimed (Fauconnier 1997, Dancygier & Sweetser 2005) that the function of conditionals is to set up particular mental space structures, involving a Conditional Space (set up by the if-clause) and an Extension Space where the consequent holds as well as the antecedent. In a predictive conditional such as If you open the window, it will get cold in here, the predictive function involves setting up two alternate space structures: the expressed conditional space and its extension, and an alternate one where the window stays closed and the room stays warm. In a speech-act conditional, on the other hand, two such alternate spaces are not typically built; e.g. If you need to reach me, my phone number is 238-5861 presumably does not mean that the phone number is different if the addressee does not need to reach the speaker.
In some gestural structures, mental space structure seems to correspond to physical space structure: e.g. a speaker might alternate between sides of her body when gesturing about a past situation and a present one. A possible set of predictions about conditionals, therefore, might be: (1) gestures accompanying predictive conditionals are expected to manifest presentational hand gestures or head gestures towards one side of the speaker, (2) gestures accompanying predictive conditionals may manifest movement outwards from the original locus associated with the if-clause, to an extension space locus farther from the body and (3) where there is active comparison of alternatives, hand and head gestures may literally alternate sides. Also, (4) nonpredictive conditionals will not involve locus placement to one side, or alternation between loci, since they don't involve alternative spaces.
We are using data from the UCLA corpus of captioned television programs, concentrating on talk-show and interview data rather than on fully scripted programs. We search for if, eliminating both indirect questions and the numerous cases where speakers' bodies and hands are not sufficiently visible. We then annotate utterances for associated use of space. In our first 100 examples, speech-act conditionals do not pick a locus on one side of the body, and not a single speech-act conditional has involved alternation between sides of the body. Rather, gestures appropriate to the 'conditional' speech act are performed: e.g., a speaker says, If you like action, there's plenty of action in this movie, and extends both hands palm-up simultaneously (not alternately) in a presentation gesture ("here's my point"). Predictive if-clauses (if we finish on time...), however, frequently involve setting up a (pointed, or palm-up) one-handed space, often with gaze or head movement towards that side, sometimes followed by moving the hand outwards during the consequent clause; they sometimes also involve alternation between sides, when the contrast becomes topical (if we don't...). Initial results are therefore promising: there is potential evidence for a correlation between gestural space and the specific mental space semantics of if-conditional constructions. We are gradually building up a corpus and should have at least 500 examples categorized by May 2015.
Dancygier, Barbara, & Sweetser, Eve. 2005. Mental Spaces in Grammar: Conditional Constructions. Cambridge University Press.
Fauconnier, Gilles. 1997. Mappings in Thought and Language. Cambridge University Press.
Núñez, Rafael, & Sweetser, Eve. 2006. With the future behind them: convergent evidence from Aymara language and gesture in the crosslinguistic comparison of spatial construals of time. Cognitive Science 30(3), 401-50.
Núñez, Rafael et al. 2012. Contours of time: topographic construals of past, present, and future in the Yupno valley of Papua New Guinea. Cognition 124(1), 25-35.
Children’s acquisition of time words: Language, space, and the development of conceptual structure
In adulthood, the concept of time is rich, structured, and multifaceted, and it has profound implications for how we live our lives, plan for the future, interpret our experiences, and construct our life stories. How is Western society’s linear model of time formed in the mind of a child? Although learning formal systems of time-telling is a prolonged and often difficult process, children begin using time words like “minute” and “yesterday” as early as age 2 or 3. While their use of time words remains inaccurate for several years, their errors suggest that they may have partial meanings for these terms long before learning their formal definitions. My research investigates children’s initial comprehension of time words, as it relates to broader questions about the acquisition of abstract language and concepts, and the development and use of conventional spatial representations of invisible domains. In this talk, I will demonstrate that 4-year-olds, who have no idea how long an hour is and often cannot report what happened in their lives yesterday, nonetheless have structured lexical categories for duration words like “minute” and “hour” and for deictic time words like “yesterday” and “next week.” Furthermore, while the initial formation of time concepts may have surprisingly little to do with duration perception, preschoolers are nonetheless able to flexibly use spatial representations to demonstrate their knowledge of sequential order and of past/future relationships. Over the next 3 years, children gradually converge on adult-like meanings for time words, and begin to use space to think about time in conventionalized, culture-specific ways.
Connecting Speech Perception and Sound Change: A Computational Model of Push-Chains
In this talk, I present an exemplar-based computational model of sound change. The model captures an interaction between two phonological categories (e.g. vowels), where one category is subjected to systematic external bias that causes it to encroach on the phonetic/acoustic space of the other category. This creates overlap between the categories and a consequent perceptual disadvantage for productions falling in the overlap due to their low discriminability, yielding a push-chain by self-organizing principles. I show that many parameter settings yield the same qualitative properties as seen in the Short Front Vowel Shift of New Zealand English: the categories maintain their width and overlap whilst moving together. Furthermore, I show that the introduction of the assumption that high-frequency words are more robust to low-discriminability disadvantages in perception leads to word-frequency effects in production. Low-frequency words come to lead the change in the retreating category, as observed in New Zealand English (Hay, Pierrehumbert, Walker, and LaShell, 2015). The model also makes predictions for future studies: it predicts high-frequency words to lead in the advancing category, and it predicts high-frequency words to yield a greater lexical bias in synchronic spoken word perception. I discuss studies planned to investigate these predictions.
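The core mechanism described above can be sketched in a few lines of Python. This is my own illustrative toy, not the model's actual implementation: it assumes a single phonetic dimension, fixed-size exemplar clouds, a hard discriminability filter, and invented parameter values.

```python
import random

def simulate_push_chain(n_exemplars=50, steps=20000, bias=0.05,
                        noise=0.05, seed=1):
    """Toy exemplar dynamics: category A is externally biased toward
    category B; tokens landing in the overlap are poorly discriminable
    and fail to be stored, which pushes B away (a push-chain)."""
    rng = random.Random(seed)
    # Two exemplar clouds on one phonetic dimension (e.g. an F2-like axis).
    A = [rng.gauss(0.0, 0.3) for _ in range(n_exemplars)]
    B = [rng.gauss(1.5, 0.3) for _ in range(n_exemplars)]
    mean = lambda xs: sum(xs) / len(xs)
    a0, b0 = mean(A), mean(B)
    for _ in range(steps):
        # Pick a category to produce from, at random.
        cat, other = (A, B) if rng.random() < 0.5 else (B, A)
        token = rng.choice(cat) + rng.gauss(0.0, noise)
        if cat is A:
            token += bias  # systematic external bias acts on A only
        # Perceptual filter: a token closer to the other category's mean
        # than to its own is misperceived and is not stored.
        if abs(token - mean(cat)) < abs(token - mean(other)):
            cat.pop(0)         # forget the oldest exemplar
            cat.append(token)  # store the new one
    return a0, b0, mean(A), mean(B)
```

With these settings, both category means should drift in the direction of the bias while their order is preserved: the self-organizing signature of a push-chain, arising purely from the asymmetric filtering of low-discriminability tokens.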
Metonymy as quasi-idiomatization
In this talk I will argue that metonymy involves neither a referential shift whereby something stands for something else, nor a mapping within the same conceptual domain, domain matrix, frame, or idealized cognitive model. Instead, metonymy is a purely phraseological process that involves quasi-idiomatization, i.e., the creation of new meanings via adding covert idiomatic meanings to the overt components' literal meanings. Quasi-idiomatization is a pervasive semantic strategy that, in addition to giving rise to classic instances of metonymy (i.e., mainly nominal metonymies such as Washington referring to the U.S. government, which works in the U.S. capital Washington, D.C.), is also typical of many other formal structures, which have so far received fairly little attention in metonymy research. These include kinegrams (i.e., verb phrases such as shrug one's shoulders), instances of all word-formation processes (e.g., derivation, compounding, conversion, blending), the personal pronoun we, and even entire speech acts, whose meanings often represent combinations of their overt components' meanings and covert idiomatic meanings. The analysis of metonymic meanings as quasi-idiomatic meanings provides a clear basis for distinguishing between metonymy and metaphor. Whereas metonymy produces quasi-idiomaticity, metaphor always gives rise to full-idiomaticity, i.e., semantically reinterpreted meanings that never contain their overt components' literal meanings.
Domain-Specific Applications of Embodied Construction Grammar
Embodied Construction Grammar (ECG) offers a neurologically plausible model of language use. Like other construction grammars, ECG operates on the assumption that language is composed of form-meaning pairings; in ECG, constructions bind grammatical constituents to roles in complex schema hierarchies, and the ECG Analyzer parses input utterances and outputs a Semantic Specification (SemSpec) illustrating these pairings. ECG focuses particularly on simulation-based semantics; thus, in this talk, I will focus on two distinct domains to which we have applied ECG. First, I will describe a robot task, in which natural language is used to give commands to a simulated robot model; this is a modular system that demonstrates how ECG can be used to drive action from language. Second, I will describe an implementation for metaphor analysis, in which the system uses ECG for metaphor identification and analysis, and is able to construct a database of utterances and associated metaphor bindings. Both systems are implemented and fully functional.
In this workshop, I will discuss a Python script I wrote that reads in a FrameNet XML data-dump and builds a customized database, which users can query. By default, this includes the following data for each frame: name, frame elements, associated lexical units, frame relations, frame element relations, definition, and ID. I also included optional scripts to gather additional information about valence patterns for specific frames. Participants interested in trying out the system should bring: a laptop computer with a working installation of Python (either 2 or 3 should work, but I’ve tested more extensively with 3), and ideally, to save time, a FrameNet XML data-dump stored on your machine. You can request the data-dump here: website
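As a rough illustration of the kind of parsing such a script performs, here is a minimal Python sketch. The XML snippet is a simplified, made-up stand-in for a FrameNet frame file; the real data-dump uses a richer, namespaced schema, so the tag and attribute names below are illustrative only.

```python
import xml.etree.ElementTree as ET

# Toy stand-in for one FrameNet frame file (schema simplified for illustration).
SAMPLE_FRAME_XML = """
<frame ID="139" name="Motion">
  <definition>An entity changes location.</definition>
  <FE name="Theme"/>
  <FE name="Goal"/>
  <lexUnit name="move.v"/>
  <lexUnit name="go.v"/>
</frame>
"""

def frame_record(xml_text):
    """Parse one frame file into a flat dict suitable for a lookup table."""
    root = ET.fromstring(xml_text)
    return {
        "id": root.get("ID"),
        "name": root.get("name"),
        "definition": root.findtext("definition"),
        "frame_elements": [fe.get("name") for fe in root.findall("FE")],
        "lexical_units": [lu.get("name") for lu in root.findall("lexUnit")],
    }

# Build a name-keyed database from many such records and query it.
db = {rec["name"]: rec for rec in [frame_record(SAMPLE_FRAME_XML)]}
```

A real version would loop frame_record over every frame file in the data-dump, handle XML namespaces, and add the frame-relation and valence-pattern fields mentioned above.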
Language use under social and cognitive constraints
My talk will focus on a series of studies that aims to explore how a language user's social and cognitive context contributes to language use in online customer reviews. Some recent theories see language as a complex and highly adaptive system, adjusting to factors at various time scales. For example, at a longer time scale, language may adapt to certain social or demographic variables of a linguistic community. At a shorter time scale, patterns of language use may adjust to cognitive-affective states of the language user. If so, language may serve as an indication of certain cognitive and social influences on behavior. Until recently, datasets large enough to test how subtle effects of cognitive states and socio-cultural properties - spanning vast amounts of time and space - influence language change have been difficult to obtain. The emergence of digital computing and storage has brought about an unprecedented ability to collect and classify massive amounts of data. We analyzed over one million online business reviews, using information theory and network analyses to quantify language structure and social connectivity, while review ratings provide an indication of the reviewer's cognitive-affective state. Results indicate that some proportion of the variance in the structure of individual language use can be accounted for by differences in cognitive states and social-network structures, even after fairly aggressive covariates have been added to the regression models.
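As an illustration of the kind of information-theoretic index such analyses can draw on (my own sketch, not the study's actual code), the Shannon entropy of a review's word-frequency distribution quantifies how uniformly spread its word use is:

```python
from collections import Counter
from math import log2

def word_entropy(text):
    """Shannon entropy (in bits) of the word-frequency distribution,
    one simple information-theoretic index of language structure."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    # H = -sum(p * log2(p)) over the relative frequency p of each word type.
    return -sum((c / total) * log2(c / total) for c in counts.values())
```

For example, word_entropy("a a b b") is 1.0 bit, while a maximally repetitive review scores 0.0; such scores can then be regressed against ratings and social-network measures.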
A Nation under Joint Custody: How Conflicting Family Models divide US Politics
Although on the surface the most hotly contested political issues seem unrelated, individuals' stances across these diverse issues tend to align themselves into two groupings: liberal or conservative. What might explain this pattern of political division? A widely influential account of the liberal-conservative divide is Moral Politics Theory (Lakoff, 1996). This theory contends that (1) political attitudes stem from moral worldviews that are conceptually anchored in idealized family models, such that endorsement of a strict-father model predicts conservatism and endorsement of a nurturant-parent model predicts liberalism; (2) the mapping of idealized family models onto politics occurs because individuals metaphorically conceptualize the nation as a family; and (3) a portion of the population subscribes to both family models (biconceptuals), and therefore possesses more flexible political attitudes which can shift depending on whether a policy stance is framed in terms of strict-father or nurturant-parent values. This investigation constitutes the first comprehensive empirical test of Moral Politics Theory across six studies, lending support to its three principal components and, more generally, providing novel insights into the fundamental processes underlying people's moral-political psychology.
Morally Queasy: Insula and Basal Ganglia Responses to Literal and Metaphoric Disgust Language
Good and bad are frequently understood via a conceptual metaphor (Lakoff & Johnson, 1980) construing immoral deeds (e.g., “a bad crime”) as impure or disgusting deeds (e.g., “a rotten crime”). This study investigates the neural processing of literal and metaphoric uses of the same disgust language in political statements.
We find evidence that the processing of literal disgust language, when compared to non-affective language, recruits areas of the brain closely connected to visceromotor and motivational systems, including areas of gustatory and olfactory cortices and parts of the limbic basal ganglia, i.e., brain regions directly associated with physical disgust in the face of inducers such as spoiled food or vomit. Moreover, the same disgust language, when used in metaphor, recruits a similar subset of brain regions (i.e., ventral anterior insula and pallidum) when compared to metaphoric language drawing on non-affective domains (e.g., the metaphoric construal of time as motion). Additionally, a within- and cross-modal classification analysis (MVPA) shows evidence of unique neural patterns of activity across voxels in the left anterior insula and adjacent frontal operculum that allow us to distinguish between disgust language and fear-related language, which activates the anterior insula in similar but not identical ways.
Our findings support the general embodiment hypothesis (e.g., Barsalou, 2009; Gallese & Lakoff, 2005; Lakoff, 1987; Niedenthal et al., 2009), showing that disgust language is processed through recruitment of brain areas that also process non-linguistic disgust inducers. Moreover, in line with other recent investigations into the neural validity of conceptual metaphors (Boulenger et al., 2009; Citron & Goldberg, 2014; Lacey et al., 2012), it shows that metaphoric uses of disgust language are grounded in affective and sensorimotor representations relevant to the basic emotion of disgust. On a larger scale, this study suggests that affective metaphor may gain its rhetorical power by drawing upon emotion systems of the brain.
When interlocutors get pushy: Space management gestures and communicative force
In face-to-face discourse, pragmatic gestures commonly rely on the same embodied concepts that structure thought and language (McNeill 1992; Müller 2004; Sweetser 1998). While gestures that serve to control discourse events have been noted (Kendon 1995, 2004; Calbris 1990), the majority of pragmatic gesture research emphasizes communicative cooperation and inclusion (Bavelas et al. 1992, 1995; Kendon 1995, 2004; Müller 2004).
Based on an analysis of 28 minutes of political debate, this study proposes a distinction between two types of pragmatic gestures that manage discourse interaction (Discourse Management Gestures, henceforth DMGs): those that serve inclusive-cooperative functions and those that serve control functions. The data analysis reveals that both DMG types frequently rely on metaphoric construals of communicative force as physical force and of communicative space as physical space. To acquire a better understanding of the form and function of DMGs that rely on such metaphoric construals, they are (i) distinguished from non-metaphoric and conventionalized gestures, and (ii) analyzed for their underlying force-dynamic construals (Talmy 1981, 1988). The analysis further reveals that metaphoric, force-dynamic DMGs are used with much higher frequency (96%) than non-metaphoric (1%) and conventionalized (3%) DMGs. Probable reasons for this phenomenon are discussed in terms of the embodied concepts that guide our understanding of face-to-face communication and of different types thereof, such as amicable conversation and argument.
Metaphors and math: Talking, gesturing and thinking about numbers in terms of space
Evidence from behavioral experiments and neuroscience suggests that numerical cognition and spatial cognition share overlapping brain areas and are deeply intertwined. In this talk, I will discuss new strands of evidence that reveal how numbers can be conceptualized in terms of physical extent ("small" vs. "large" numbers), or in terms of horizontal and vertical axes. In particular, I will focus on joint work with Teenie Matlock and Marcus Perlman that looks at co-speech gesturing when people talk about numerical quantities. In the TV News Archive, a vast online repository of recorded newscasts, we found many instances where gesture space is mapped onto implied numerical quantities. For example, speakers may make a pinch or precision-grip gesture when talking about a "tiny number", or they may point upwards when talking about a "high number". Thus, gestures reveal the spatial nature of numerical thinking. The data furthermore show that people readily switch between different spatial mappings, suggesting that the underlying conceptualizations of quantities are mutually compatible and used in a flexible fashion. I will outline implications for numerical cognition research (e.g., SNARC effects, A Theory of Magnitude), gesture research, and metaphor theory.
Revisiting Synesthetic Metaphors
“Synesthetic metaphors” are metaphors that combine words from distinct sensory domains, as in the expressions “rough smell”, “smooth taste” or “bright sound”. Since the 1950s, researchers have observed that these metaphors are characterized by striking asymmetries; for example, “sweet fragrance” is more acceptable to speakers than “fragrant sweetness”. This ultimately led to the proposal of a directional hierarchy: “touch > taste > smell > hearing / vision”. I will assess this hierarchy with new corpus data and a novel methodological approach. My analyses show that asymmetries between synesthetic metaphors are simultaneously affected by multiple interacting factors, including word frequency, emotional valence, and even the sound structure of words. These results call into question the notion of a universal and monolithic hierarchy characterizing the relationship between the senses.
Numeral systems across languages support efficient communication: From approximate numerosity to recursion
Languages differ qualitatively in their numeral systems. At one extreme, some languages have a small set of number terms, which denote approximate or inexact numerosities; at the other extreme, many languages have forms for exact numerosities over a very large range, through a recursively defined counting system. What explains this variation? Here, we use computational analyses to explore the numeral systems of 25 languages that span this spectrum. We argue that these numeral systems all reflect a need for efficient communication, mirroring existing arguments in the domains of color, kinship, and space. Our analyses suggest that numeral systems "crystallize" into exact recursive form for functional reasons: recursion is a cognitive tool that supports highly informative communication at the cost of only modest cognitive complexity. (Joint work with Terry Regier.)
Historical semantic chaining and efficient communication: The case of container names
Semantic categories in the world's languages often reflect a historical process of chaining: A name for one idea is extended to a conceptually related idea, and from there on to other ideas, producing a chain of concepts that all bear the same name. The beginning and end points of such a chain might in principle be conceptually rather dissimilar. There is also evidence supporting a contrasting picture: Languages tend to support efficient, informative communication, often through semantic categories in which all exemplars are similar. Here, we explore this tension through computational analyses of existing cross-language naming and sorting data from the domain of household containers. We find: (1) formal evidence for historical semantic chaining, and (2) evidence that systems of categories in this domain nonetheless support near-optimally efficient communication. Our results suggest that semantic chaining may be constrained by the functional need for efficient, informative communication.
This is joint work with Terry Regier (Berkeley) and Barbara C. Malt (Lehigh).
FrameNet for Language Learners?
In language pedagogy, it has become a truism that lexical meanings cannot be adequately learned by means of "raw" definitions. So, what does it take to understand the meaning of a word? And which kinds of information should be included in a dictionary for L2 learners? By discussing some lexical meanings (particularly in the specific knowledge domain of soccer), I address these issues from a FrameNet-based vantage point. As empirical sources I use the Berkeley FrameNet database, the so-called 'kicktionary' (a frame-based online dictionary of soccer language), and the 'German Frame-based Online Lexicon' (G-FOL, a didactic resource for language learners). First, I describe the necessary (semantic) requirements a learner's dictionary should meet. The second part introduces G-FOL, which I am currently developing in cooperation with Hans C. Boas (http://coerll.utexas.edu/mfn/home). Specifically, I describe how G-FOL employs (and expands on) FrameNet principles, and consider to what extent the presented methodological framework can be applied to more complex frames and constructions. Finally, following Fillmore et al. (2012), I discuss how a learner's dictionary may integrate constructional information by pointing out the relation between frame element labels and their instantiations in actual sentences. The resource shows users how individual frame elements are realized syntactically in a given construction, both through a prose definition explicating relevant cross-linguistic differences and through frame-element annotation of sentences.