Workshop on Language, Cognition and Computational Models

The Workshop on Language, Cognition and Computational Models was held in Paris at the Ecole Normale Supérieure (ENS) and at the Institut des Systèmes Complexes de Paris Île-de-France (ISC-PIF) on May 28th and 29th, 2013.

The goal of this event was to provide a venue for multidisciplinary discussion of theoretical and practical research on computational models of language and cognition. The event centered on recent advances in computational models of language acquisition, processing and evolution.

May 28th

Dan Dediu – The interplay between linguistic and biological evolution

Video source: http://www.savoirs.ens.fr/expose.php?id=1288

Culture and language are full-blown evolutionary systems, and conceptual and methodological parallels with evolutionary biology have helped advance our understanding of language change and evolution. Moreover, it is becoming clearer that the cultural and biological evolutionary systems are not independent: there are complex interactions between the two, ranging from culture adapting to biological constraints, to culture shaping biological evolution, to co-evolutionary processes between culture and biology.

In this talk I will discuss some examples of such interactions between the cultural and biological evolutionary systems, with a special emphasis on the genetic biasing of language change and evolution. Such biasing can act at various levels, from the anatomy and physiology of the vocal tract and hearing organs, to the way our brain and cognitive system process language. I will argue that computational and mathematical models of these processes are an essential tool in understanding their necessary conditions, their dynamics and their detectable traces in real data. To this end I will give an overview of some models that try to address the effects of biases on language change and evolution, and their implications for experimental and observational studies.
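
To give a concrete sense of the kind of model at issue, here is a minimal, illustrative sketch (not one of the speaker's actual models) of how a weak learning bias can be amplified by iterated cultural transmission; all numerical settings are assumptions chosen for the example.

```python
import random

# Toy iterated-transmission model: each learner in a new generation samples
# utterances from the previous generation and adopts one of two variants of a
# linguistic feature. A small learning bias favours variant 1; over cultural
# generations this weak bias can be amplified into a strong population-level
# preference. All parameter values below are illustrative assumptions.
POP_SIZE = 200      # learners per generation
GENERATIONS = 50    # cultural generations to simulate
BIAS = 0.02         # weak genetic/learning bias toward variant 1
SAMPLE = 10         # utterances each learner hears

def learn(teachers, bias):
    """Sample utterances from random teachers and pick a variant,
    nudged toward variant 1 by the bias."""
    heard = [random.choice(teachers) for _ in range(SAMPLE)]
    p_one = min(1.0, heard.count(1) / len(heard) + bias)
    return 1 if random.random() < p_one else 0

def simulate():
    population = [random.randint(0, 1) for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS + 1):
        if gen % 10 == 0:
            share = population.count(1) / POP_SIZE
            print(f"generation {gen}: share of variant 1 = {share:.2f}")
        population = [learn(population, BIAS) for _ in range(POP_SIZE)]
    return population

if __name__ == "__main__":
    simulate()
```

Models of this kind make it possible to ask how strong a bias must be, and how it interacts with population size and transmission noise, before it leaves a detectable trace in cross-linguistic data.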

Ted Briscoe – A model of L1/L2 Language Acquisition and its implications for language change

Video source: http://www.savoirs.ens.fr/expose.php?id=1287

Building on my earlier work on the Bayesian Incremental Parameter Setting (BIPS) model of L1 acquisition for Generalized Categorial Grammar (GCG), I will describe how to extend this model to provide a unified account of L1/L2 morphosyntactic acquisition which embeds the L1 transfer hypothesis in a formal theory of learning. Previously, I’ve argued that the BIPS GCG model is able to account for the ‘growth’ of grammar during creolisation and for some typological universals, when combined with a model of language processing complexity and embedded in an (evolutionary) iterated learning model (ILM) of language change. I will argue that just as the starting point for L1 acquisition is inductively biased, so is the starting point for L2 acquisition, but with the overlaid effects of the matured L1 parameters. I’ll then argue that the BIPS GCG L2 model, combined with an account of processing complexity extended to morphology and embedded in the ILM, predicts that the proportion of L2 speakers of a given language will influence the morphology/syntax trade-off in that language.
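
As a rough illustration of the general idea of Bayesian incremental parameter setting with an L1-transfer prior, here is a minimal sketch (not the BIPS GCG model itself; the single parameter, the priors and the input are invented for the example):

```python
# Toy Bayesian incremental setting of one binary grammar parameter, tracked
# as a Beta distribution updated utterance by utterance. The L1 learner
# starts from a weak inductive prior; the L2 learner starts from a prior
# overlaid with the matured L1 setting, so the same input shifts its estimate
# more slowly (the L1 transfer hypothesis). All values are invented.

def incremental_setting(evidence, prior=(1.0, 1.0)):
    """Return the posterior mean of P(parameter = 1) after seeing the evidence.

    evidence: iterable of 0/1 observations, where 1 means the utterance is
    consistent with setting the parameter to 1.
    """
    alpha, beta = prior
    for obs in evidence:
        alpha += obs
        beta += 1 - obs
    return alpha / (alpha + beta)

# Target-language input that mostly supports the value 1.
utterances = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]

# L1 acquisition: weak, near-uniform inductive prior.
p_l1 = incremental_setting(utterances, prior=(1.0, 1.0))

# L2 acquisition: prior skewed toward the opposite, matured L1 value.
p_l2 = incremental_setting(utterances, prior=(2.0, 10.0))

print(f"L1 learner's estimate of P(parameter = 1): {p_l1:.2f}")
print(f"L2 learner's estimate of P(parameter = 1): {p_l2:.2f}")
```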

Anne Reboul – Social evolution of public languages: between Rousseau’s Eden and Hobbes’ Leviathan

Video source: http://www.savoirs.ens.fr/expose.php?id=1289

In the past few years, the question of the evolution of language has clearly favoured social accounts, which have basically defended a strong link between public languages and prosocial tendencies toward group cohesion and cooperation. The two main such accounts point in opposite causal directions: according to Dunbar (1996, 2004), language evolved to replace grooming as a tool for group cohesion, while according to Tomasello (2008, 2009), sharing, as a major tendency of human nature, gave rise to the sharing of information through public languages, as a means for sharing states of mind, including emotions. These accounts seem to favour a fairly Rousseauist view of human nature, according to which linguistic communication is seen as altruistic cooperation, and seem, at least partly, to be based on a fairly misguided interpretation of the Gricean notion of cooperation.

Such views of the evolution of public languages, and more generally, of the evolution of communication are profoundly problematic anyway, given the basic tenets of the theory of natural selection: altruistic cooperation, including altruistic communication, could never have evolved. Rather, one should expect communication either to be mutualistic (which seems roughly to be the case for animal communication systems) or to be basically strategic or tactical, aimed by the agent at the realisation of his or her ultimate goal through the effects of the communication on the mental states and on the behavior of the addressee (Krebs & Dawkins 1984). And this is perfectly compatible with Gricean cooperation, as long as a necessary distinction is made between distal (non-Gricean, manipulative) and proximal (Gricean, cooperative) intentions.

Basically, strategic or tactical communication is tantamount to manipulation, which is itself often associated with lying. This is obviously a possibility, but hardly a necessity: manipulation can quite well be associated with telling the truth, though it does require lesser forms of deception, notably that the agent hides his/her manipulative (distal) intentions and his/her ultimate goals. Is there a way to support this alternative view of the evolution of public languages? Well, the universal existence in human languages (and in no other animal communication system) of implicit communication seems to provide such supporting evidence. Implicit communication, though it would seem a highly risky form of communication, as it is subject to failure in a way that explicit communication is not, fulfills two important functions relative to such a strategic or manipulative view of the evolution of public languages. It allows the speaker to bypass his/her hearer’s epistemic vigilance, and it allows the speaker to deny his/her manipulative intentions by denying any commitment to the information thus communicated (Reboul 2011, in press). Additionally, its frequent use in politeness, if politeness is seen as a way to safely navigate dominance relations, is also manipulative.

Finally, what we know of the political organization of hunter-gatherer groups, which seem to be the best models for the groups of humans in which public languages have evolved over roughly the past 200,000 years, gives a strong reason for the evolution of implicit communication: such groups work on an egalitarian and consensual basis (Boehm 1999, 2012), which is strongly (and occasionally murderously) enforced. Decisions are discussed at group level, and so-called leaders are supposed to keep a low profile and to refrain from pushing forward their own positions. Implicit communication is a good way to circumvent such socially enforced prescriptions.

May 29th

Robert Berwick – Darwinian Linguistics

Video source: http://www.savoirs.ens.fr/expose.php?id=1290

Famously, in The Descent of Man, Charles Darwin extended his theory of evolution to human language. First, Darwin speculated that language emerged through sexual selection: “some early progenitor of man, probably used his voice largely … in singing”; and “this power would have been especially exerted during the courtship of the sexes.” Second, Darwin pictured organism and language “family trees” – phylogenetics – as essentially one and the same, taking up a theme he first outlined in Origin of Species.

How well do Darwin’s proposals hold up in light of modern comparative biology and linguistics?

In this talk, we demonstrate that one should take care not to over-inflate Darwin’s metaphor. Language’s origin and then its change over time cannot be exactly equated to biological evolution, because linguistic principles and parameters are not precisely equivalent to bio-molecular data such as DNA sequences, and language inheritance is not strictly equivalent to biological inheritance. As a result, any facile ‘lifting’ of techniques originally applied to biological evolution may be plagued by false equivalences. Biological methods make particular assumptions about how evolution works that are not met in the case of language, e.g., with respect to genes, inheritance, and genetic variation, the basic ‘fuel’ that evolution burns. Unlike biological evolution, where mutations in DNA boost variation and lead to new genes, duplicated whole genes or genomes, novel traits, and new species, so far as we know for certain, the human-specific shared genetic endowment for language has been frozen since its emergence. The implications of these differences are illustrated by several recent analyses of language geographic flow and language phylogenetics that have conflated Darwinian biological evolution with language evolution, arriving at conclusions that deserve caution.

Massimo Poesio – Using Data about Conceptual Representations in the Brain for Computational Linguistics

Video source: http://www.savoirs.ens.fr/expose.php?id=1292

Existing electronic repositories of lexical and commonsense knowledge such as ConceptNet, Cyc, FrameNet, and especially WordNet (Fellbaum, 1998), have had a dramatic and positive impact on Artificial Intelligence (AI) and Human Language Technology (HLT) research, making it possible to carry out the first large-scale semantic analyses of text and some simple forms of inference. Nowadays there are few semantic interpretation systems that do not use WordNet. However, the widespread application of these resources has also highlighted their limitations. There are reasons to doubt that the organization of conceptual knowledge in the brain reflects its organization in WordNet. The hypothesis underlying the BrainNet project is that the dramatic advances in our knowledge of concepts arising from interdisciplinary research over the last thirty years pave the way for the development of a lexical resource of a novel type that may overcome the limits just discussed: an electronic dictionary that directly mirrors the mental lexicon, modelled on the basis of recordings of brain activity using contemporary neuroimaging techniques (EEG, MEG and fMRI) and containing information extracted from corpora using automatic methods. In the talk I will discuss some findings about the organization of abstract knowledge, as well as work on using these methods in computational linguistics applications such as sentiment analysis and in medical applications such as the early prevention of semantic dementia.

Shuly Wintner – The Features of Translationese

Video source: http://www.savoirs.ens.fr/expose.php?id=1293

Translation is a text production mode that imposes cognitive (and cultural) constraints on the text producer. The product of this process, known as *translationese*, reflects these constraints; translated texts are therefore ontologically different from texts written originally in the same language. Many of the special properties of translationese are believed to be universal, in that they are manifest in any translated text regardless of the source and target languages. In fact, a cognitive framework has been suggested as the explanation of translation universals.

In this work we test several Translation Studies hypotheses using a computational methodology that is based on supervised machine learning. Casting the problem in the paradigm of authorship attribution, we define dozens of classifiers that implement various linguistically-informed features that reflect translation universals. While the practical task of distinguishing original from translated texts is easy, we focus not on improving the accuracy of classification, but rather on designing linguistically meaningful features and assessing their contribution to the task. We demonstrate that some feature sets are indeed good indicators of translationese, thereby corroborating some hypotheses, whereas others perform much worse (sometimes at chance level), indicating that some ‘universal’ assumptions have to be reconsidered.

While our results are limited to the case of translationese, this methodology can be adopted for studying other kinds of texts produced under different cognitive constraints, such as texts produced by non-native speakers, by people with learning disabilities or medical problems, or by children acquiring a language.
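
As a minimal sketch of the kind of classification setup described above (assuming scikit-learn is available; the toy data and the choice of function-word frequencies as a feature family are illustrative, not the features or corpora used in the work):

```python
# Illustrative setup (not the authors' actual system): cast translationese
# detection as supervised text classification and measure how well one
# linguistically informed feature family, here function-word counts, separates
# originals from translations. The quantity of interest is the cross-validated
# accuracy of this single feature family, not overall state-of-the-art accuracy.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Illustrative function-word list (an assumed, truncated example).
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "as",
                  "for", "with", "by", "on", "not", "this", "but"]

# Toy placeholder corpus: labels are 1 for translated, 0 for original text.
texts = ["this is a text originally written in English ...",
         "this is a text that was translated into English ..."] * 20
labels = [0, 1] * 20

classifier = make_pipeline(
    CountVectorizer(vocabulary=FUNCTION_WORDS),  # counts restricted to function words
    LogisticRegression(max_iter=1000),
)

# Chance-level scores for a feature family would suggest that the
# corresponding 'universal' does not show up in these features.
scores = cross_val_score(classifier, texts, labels, cv=5)
print(f"mean accuracy with function-word features: {scores.mean():.2f}")
```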

Martin Kay – Putting Linguistics back into Computational Linguistics

Video source: http://www.savoirs.ens.fr/expose.php?id=1291

The belief has recently become widespread that the properties of language needed to process it for useful purposes will emerge if sufficiently large quantities of raw text and speech are analyzed automatically using sufficiently sophisticated techniques. On this view, the kind of understanding that a linguist attempts to achieve by examining individual specimens at close range has little value, at least for practical purposes. But information can be caused to emerge from the raw data only if it is in there in the first place, and it has long been known that this is not the case. A language is a code, that is, a system of arbitrary relations between symbols and things in worlds, real and imaginary. No time or effort invested in examining the symbols will reveal these relations to one who does not know the code. If this is true, then we must ask why statistically based machine translation, for example, has come as far as it has, and how much further it can expect to go.