Kyle Rawlins

Associate Professor, Cognitive Science Department

For non-specialists

This page attempts to give an overview of what I do, at various levels of detail, assuming no background in linguistics. It is aimed at roughly three groups of people: (i) family members and people who knew me before I started doing this, (ii) undergraduates who are considering taking a course in linguistics or semantics and are wondering what it might involve, and (iii) prospective graduate students wondering how semantics might fit into cognitive science.

What is linguistics?

Natural languages are full of patterns. Many of these patterns are specific to human languages, in the sense that if you were designing an ideal communication system from scratch you wouldn’t necessarily include them. These patterns are also highly systematic: many recur again and again across a range of unrelated languages, and many recur again and again in novel utterances by speakers of a particular language. (By “novel” I mean the utterance of a sentence that the speaker has never heard or produced before in their life.)

An example of the first kind of systematicity is that all languages (as far as I know) have a part of speech that is standardly called the verb. An example of the second kind of systematicity is that (if you are a native speaker of English) you can utter sentences that you have never heard before (e.g. The auditorium is full of seventeen large elephants), and you will always find yourself putting any determiners (the, seventeen) before their corresponding nouns. You will never utter a sentence like auditorium the is full of large elephants seventeen. There are of course other languages where this isn’t true (e.g. Thai, where various determiner-like elements appear following the noun).

One major goal of linguistics is to find, model, and explain such patterns. In doing so, we aim at models of a speaker’s natural language competence and performance: their ability to understand and produce utterances, especially novel ones. Another way of putting this is that we aim to model the grammars of natural languages.
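To make “modeling a grammar” concrete, here is a toy sketch of the determiner-noun pattern above, written in Haskell. The word lists and the fragment itself are invented for the example and are not a serious grammar of English; the point is just that a grammar can state the order restriction explicitly.

```haskell
-- A toy grammar fragment for English noun phrases: a determiner
-- followed by a noun, never the reverse. Illustration only.

determiners :: [String]
determiners = ["the", "a", "seventeen"]

nouns :: [String]
nouns = ["auditorium", "elephants"]

-- A two-word noun phrase is well-formed only if the determiner
-- precedes the noun.
isNP :: [String] -> Bool
isNP [det, n] = det `elem` determiners && n `elem` nouns
isNP _        = False

main :: IO ()
main = do
  print (isNP ["the", "auditorium"])  -- True
  print (isNP ["auditorium", "the"])  -- False: determiner must come first
```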

Why do it?

Language is central to much human behavior. It is also highly human-specific: while there are instances of animal communication, most researchers agree that these do not have the properties of a human language (e.g. they lack recursion and compositionality). Consequently, it is commonly held that humans have a specialized cognitive “module” dedicated to acquiring the ability to speak a natural language. By studying language we are studying a core aspect of human cognition, and can also gain insight into the structure of cognition in general (e.g. issues of how cognitive modules interact, and what it means for cognition to be modular).

On the more practical side, models of the human ability to produce and understand language can inform computational models used in natural language understanding, question answering, machine translation, and related tasks. Purely statistical approaches in these domains can go far, but I do not believe they can achieve human-level competence without empirically adequate models of human competence itself.

Cognitive science?

If you know just that I was trained as a linguist, you may be wondering why I am working in a cognitive science department. Historically, linguistics has typically been classified as a discipline in the humanities or the social sciences, and certain sub-fields are probably correctly still classified this way. But mainstream theoretical linguistics is not: I think it falls squarely within the empirical sciences, where the domain of data is the linguistic behavior of native speakers. Moreover, I think linguistics is best conceived of as closely related to, or even a branch of, cognitive science, where the goal is understanding the mind and cognition through a range of methodologies (theorizing, computational/mathematical modeling, and experimentation). While many linguists prefer to think of grammars as more abstract mathematical objects, I prefer to keep in mind that there must be a close correspondence between any abstract grammar and an implementation of that grammar in the mind/brain, at multiple levels of abstraction. The cognitive science department at JHU has people working on topics such as language, writing systems, and vision, with a diverse array of perspectives and approaches; theoretical linguistics is an important part of this array.

What is semantics?

Semantics is the subfield of linguistics that focuses on meaning in natural languages. Utterances manage to encode and convey a tremendous amount of information in a highly language-specific way. To make this concrete: you (assuming you are a native speaker of English) have the automatic ability to judge what it would take for a sentence of English to be true, even if you have never heard the sentence before. To take the example from above: given some auditorium, you can automatically determine what the world would have to be like for the auditorium is full of seventeen large elephants to be true. If someone were to utter this sentence to you, and you had reason to think them credible, you might even update your mental model of the world with this information about the auditorium. Of course, given what this particular sentence means, accepting it would require great credibility on the part of the speaker, and the relationship between meaning and this kind of judgment is something else semantics is important for.

Researchers working in semantics and related areas use mathematical and computational tools (such as set theory, formal logics, typed lambda calculus) to model this kind of information, as well as to model the way it is encoded into and decoded from an utterance. One major goal is to explain “compositionality” — to give a theory of how speakers decode information from novel sentences, given meanings of the words and how they are put together. Another major goal is to understand the role in communication that is played by the context in which an utterance is made. A third major goal is to understand the patterns of meaning found in words, and understand what the representation of meanings for particular words looks like.
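To give a flavor of these tools, here is a minimal sketch of a compositional fragment, using Haskell types to stand in for a typed lambda calculus. The two-individual “model” and the little lexicon are invented for illustration; real fragments in the semantics literature are far richer.

```haskell
-- A tiny model-theoretic fragment. Individuals (type Entity) and
-- truth values (Bool) are basic; all other meanings are functions
-- built from them. The model below is invented for illustration.

data Entity = Alfonso | Joanna deriving (Eq, Show)

type Pred = Entity -> Bool  -- one-place predicates: type <e,t>

-- Lexicon: each word denotes a typed object.
smokes :: Pred
smokes e = e == Alfonso

tall :: Pred
tall e = e == Joanna

-- Quantifiers denote functions from predicates to truth values: <<e,t>,t>.
everyone :: Pred -> Bool
everyone p = all p [Alfonso, Joanna]

someone :: Pred -> Bool
someone p = any p [Alfonso, Joanna]

-- Composition is function application, so the truth conditions of a
-- novel sentence fall out of the meanings of its parts.
main :: IO ()
main = do
  print (smokes Alfonso)   -- "Alfonso smokes":   True in this model
  print (everyone smokes)  -- "everyone smokes":  False
  print (someone tall)     -- "someone is tall":  True
```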

For more information, see Bill Ladusaw’s overview of the study of meaning on the Linguistic Society of America site.

Ok, so what do you do?

I work in the area of semantics, as well as the closely related fields of syntax and pragmatics. Much of my work focuses on two core empirical topics. One is how questions work in natural languages: how are questions constructed, what can they mean, and how do (and can) people respond to them? The other is how people talk about hypothetical situations: situations that will happen in the future, that might happen, that didn’t happen but could have, or that couldn’t have happened at all. People engage in this kind of conversation pervasively, using specific linguistic resources: modal words like English “might”, “would”, and “should”, and conditional adjuncts like “if Alfonso comes to the party” and “to get to the party”, as in “if Alfonso comes to the party, it will be fun” and “to get to the party, you should get off 83 at Falls Rd.” My goal is to understand the linguistic structure and properties of these resources, and the way the grammar models the meanings of such sentences.

More generally, I am interested in the semantics of modification: the meanings of adverbs and adjectives, and how they interact with the meanings of the sentences they appear in. I am also interested in what role the idea of compositionality mentioned above actually plays in the grammar, and how strong a principle it is. Finally, I am interested in the formal “power” of grammars: grammars form a hierarchy in terms of what kinds of languages they can describe, and it is an unsettled question where grammars of natural languages (and their parts) fall on this hierarchy. The main idea I am pursuing is that the semantic component of the grammar is more powerful in this sense than certain other components.
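As a concrete (and standard, textbook-level) illustration of what “power” means here, nothing specific to the proposal just mentioned: the language consisting of n a’s followed by exactly n b’s can be described by a context-free grammar (S → a S b | ε) but by no finite-state grammar, because recognizing it requires unbounded counting. The Haskell recognizer below makes the counting explicit.

```haskell
-- Recognizer for the classic hierarchy example: n a's followed by
-- exactly n b's. Finite-state grammars cannot describe this language,
-- since matching the counts requires unbounded memory; context-free
-- grammars can. The counting below is exactly the resource that
-- finite-state devices lack.

anbn :: String -> Bool
anbn s =
  let (as, bs) = span (== 'a') s
  in all (== 'b') bs && length as == length bs

main :: IO ()
main = do
  print (anbn "aaabbb")  -- True
  print (anbn "aabbb")   -- False: counts differ
  print (anbn "abab")    -- False: a's must all precede b's
```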