Language Acquisition

The way that humans come to learn a language, seemingly without even trying, is a remarkable phenomenon, and one we often take for granted. From the earliest ages, infants are sensitive to all kinds of statistics in the language input they hear, and we are only beginning to understand how infants use these learning abilities to end up as language experts. In my research, I have been particularly interested in two issues.
First, what is the relationship between learning the structure of a language (the dependencies among its sounds, words, and phrases) and learning the meaning of those elements? How does learning about one influence learning about the other?
Second, recent research under the label "statistical learning" has demonstrated that infants, children, and adults are remarkably adept at tracking and integrating many kinds of combinatorial and probabilistic information. One outstanding question in statistical learning concerns the units over which those statistics are calculated: how do infants figure out which units to use?
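To make the idea concrete, here is a minimal sketch (a toy illustration, not a model from any particular study) of one statistic infants are thought to track: the transitional probability between adjacent syllables, which tends to be high within words and lower at word boundaries. The syllable stream below is an invented example built from made-up "words" like pabiku.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next | current) from a stream of syllables.

    TP(x -> y) = count(x followed by y) / count(x): high within words,
    lower across word boundaries, which can cue segmentation.
    """
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# A continuous stream built from the made-up words "pabiku", "tibudo",
# and "golatu", concatenated with no pauses between them.
stream = ["pa", "bi", "ku", "ti", "bu", "do", "ti", "bu", "do",
          "pa", "bi", "ku", "go", "la", "tu", "pa", "bi", "ku",
          "go", "la", "tu", "ti", "bu", "do"]
tps = transitional_probabilities(stream)
print(tps[("pa", "bi")])  # 1.0 -- a within-word transition
print(tps[("ku", "ti")])  # ~0.33 -- a word boundary
```

A learner (human or machine) that posits word boundaries wherever transitional probability dips can recover the "words" from the unsegmented stream.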
I find these, and many other, questions about language acquisition fascinating, and much of my past and ongoing research addresses them.
[Image: baby in a headturn experiment; stimulus stream "doggy comb kitty"]

Meaning: Linguistic vs. World Knowledge

Humans acquire meaning by interacting with their world, discovering its structure and affordances. But we also acquire a good portion of our meaning through language: by talking about things, and by using language operationally to achieve goals.
A good portion of my research has focused on the semantic knowledge that is accessible to people purely through simple language statistics, like word co-occurrences and words' distributional similarities. It turns out that people can learn a surprising amount of structured semantic knowledge through patterns in language alone.
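The core intuition behind such distributional approaches can be sketched in a few lines (a toy illustration with an invented three-sentence corpus, not any specific published model): words that occur in similar contexts end up with similar co-occurrence vectors.

```python
import math
from collections import Counter

def cooccurrence_vectors(corpus, window=2):
    """Build a co-occurrence count vector for each word, using a
    symmetric context window of the given size."""
    vectors = {}
    for sent in corpus:
        for i, word in enumerate(sent):
            context = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            vectors.setdefault(word, Counter()).update(context)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus: "dog" and "cat" appear in similar contexts; "car" does not.
corpus = [
    ["the", "dog", "chased", "the", "ball"],
    ["the", "cat", "chased", "the", "ball"],
    ["the", "car", "drove", "down", "the", "road"],
]
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["dog"], vecs["cat"]) > cosine(vecs["dog"], vecs["car"]))  # True
```

Scaled up to millions of words, this same kind of statistic yields surprisingly rich, structured semantic similarity spaces.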
Even more interesting, there is usually a big difference between the semantic information that one acquires by interacting with objects and events in the world and the semantic information that is emphasized in language. My work has shown that people are sensitive to these differences, and are remarkably good at using the type of semantic knowledge (from language, or from the world) that is appropriate in a particular situation.


[Image: yellow carrots]

Theories and Models of Learning and Knowledge Representation

Computational models force us to be clear about our theories. Models' successes and failures tell us a lot about the necessary givens for learning and representation - both in terms of the structure of the learning system, and in terms of the structure of environmental input.
I am especially interested in statistical and connectionist models of learning. Some have argued that these models are doomed to fail, that they cannot explain critical facts about learning, such as the ability to learn and transfer abstract knowledge, or to learn complex phenomena like long-distance dependencies. A number of my research projects have been directed toward showing that these models are actually quite good at these phenomena, and that when they fail, they fail when and how humans do. But all models have their limitations, and finding them is a critical part of the scientific enterprise.
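As a minimal illustration of the connectionist approach (a textbook-style toy sketch, not a model from any of my papers), the network below learns XOR, a problem that a network with no hidden layer provably cannot solve, by backpropagating error through a small hidden layer.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A tiny 2-3-1 feed-forward network trained by backpropagation on XOR.
n_hidden = 3
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
b2 = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(n_hidden)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(n_hidden)) + b2)
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)               # output delta (squared error)
        for j in range(n_hidden):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # hidden delta
            W2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy
err_after = total_error()
print(err_before > err_after)  # training reduces the network's error
```

The hidden layer is what lets the network acquire the nonlinear structure; debates about what such networks can and cannot learn are debates about exactly these kinds of representational resources.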
I am also a strong advocate of an interactionist research program, where computational models are built based on experimental findings, and then are used to make predictions in future experiments. This back and forth between models and experiments allows our models to better reflect cognitive processes, and helps structure and drive the experimental research program. In many of my papers, you will see this reciprocity at work.

[Images: feed-forward network; network graph]

Big Data Analyses of Cognitive Development, Social Cognition, and Clinical Psychology

Technological change has vastly expanded the amount of data available to us and the types of questions we can ask. I was doing "big data" research for almost a decade before it had a name. What can children learn from six million words? Are there statistical differences in political speech on cable news? Are people sensitive to the ways proper names are used in language, and to the connotations those names carry? These are just some of the questions I have been involved in studying.
But "big data" is not just about mining large data sets for correlations. Big data, when guided by theoretically driven questions and computational models, allows us to ask and test previously unthinkable hypotheses. Access to immense amounts of data can also suggest explanations that would never have seemed plausible given smaller amounts of data. Increased statistical power (both as a scientist and as a statistical learner) is very, very important.
I am trained as a cognitive developmentalist. But recently I have become very interested in applying these models and statistical analyses to phenomena in social and clinical psychology. How do periods of profound deafness, and growing up with a cochlear implant, affect vocabulary and the development of semantic memory? How do people with Alzheimer's disease and schizophrenia differ in the ways they use language, and can this help us diagnose patients and better understand the progression of these diseases? These are some of the questions I am currently investigating.