Research | Child Language Lab
Lab grad student Daoxin Li will be presenting a talk at CUNY 2021 titled Acquiring recursive structures through distributional learning. This work asks how distributional learning could help learners figure out which structures allow recursion (and which don’t!) in their language.
Lab undergrad Ania Alberski is presenting a short talk / poster at CUNY 2021 on The role of language context in the acquisition of novel words.
Lab undergraduate Annalise will be presenting a talk at LSA 2021 titled Adults do not regularize unpredictable language variation, even when learning from an unreliable speaker.
Katie will be presenting a talk on work with Yiran Chen at LSA 2021 titled Morphosyntactic variation is preserved, not regularized, when an optional form is rare.
Lab undergraduate Stefan Pophristic is presenting a poster at the LSA 2021 Virtual Annual meeting on The Role of Gender in the Acquisition of Serbian Case System.
Former lab undergraduate Sarah Nam is presenting a poster at the LSA 2021 Virtual Annual meeting on The Influence of Verb Information on Learning Novel Words. Sarah is now a graduate student at Vanderbilt.
Penn awarded our lab a Research Recovery grant to help us conduct remote research with children and their families during COVID-19 and beyond.
Katie submitted a paper and shared a new preprint on Learning a language from inconsistent input: Regularization in child and adult learners. This is work with Alison Austin, Sarah Furlong, and Elissa Newport.
When linguistic input contains inconsistent use of grammatical forms, children produce these forms more consistently, a process called ‘regularization.’ Deaf children learning American Sign Language from parents who are non-native users of the language regularize their parents’ inconsistent usages (Singleton & Newport, 2004). In studies of artificial languages containing inconsistently used morphemes (Hudson Kam & Newport, 2005, 2009), children, but not adults, regularized these forms. However, little is known about the precise circumstances in which such regularization occurs. In three experiments we investigate how the type of input variation and the age of learners affect regularization. Overall our results suggest that while adults tend to reproduce the inconsistencies found in their input, young children introduce regularity: they learn varying forms whose occurrence is conditioned and systematic, but they alter inconsistent variation to be more regular. Older children perform more like adults, suggesting that regularization changes with maturation and cognitive capacities.
Katie presented a poster at BUCLD on the acquisition of variable rules conditioned on social context.
Today I gave a talk for George Washington University’s Cognitive Neuroscience Seminar on Forming categories and making generalizations: lessons from child language acquisition. You can see the slides here.
Yiran submitted a proceedings paper for PLC with Katie investigating learning biases and the animacy hierarchy. The paper is to appear in the University of Pennsylvania Working Papers in Linguistics.
A lab graduate student was to give a talk on her work investigating the animacy hierarchy at the Penn Linguistics Conference this year, but it was cancelled due to COVID-19.
Katie wrote a commentary with Jordan Kodner and Spencer Caplan on Ambridge’s 2020 Against stored abstractions paper. We argue that abstractions are really good for both brains and machines!
Katie submitted a paper with Mackenzie Fama, Elissa Newport, and Peter Turkeltaub on statistical learning in healthy aging and left hemisphere stroke.
A new review paper on Artificial language learning with children is out today by Katie and collaborator Jennifer Culbertson.
Artificial language learning methods—in which learners are taught miniature constructed languages in a controlled laboratory setting—have become a valuable experimental tool for research on language development. These methods offer a complement to natural language acquisition data, allowing researchers to control both the input to learning and the learning environment. A large proportion of artificial language learning studies has aimed to understand the mechanisms of learning in infants. This review focuses instead on investigations into the nature of early linguistic representations and how they are influenced by both the structure of the input and the cognitive features of the learner. Looking not only at young infants but also at children beyond infancy, we discuss evidence for early abstraction, conditions on generalization, the acquisition of grammatical categories and dependencies, and recent work connecting the cognitive biases of learners to language typology. We end by outlining important areas for future research.
Katie and her husband Brandon welcomed their daughter, Joan, on Nov 19!
Katie’s new paper with Patty Reeder, Elissa Newport, and Dick Aslin is out now in Language Learning and Development.
Successful language acquisition hinges on organizing individual words into grammatical categories and learning the relationships between them, but the method by which children accomplish this task has been debated in the literature. One proposal is that learners use the shared distributional contexts in which words appear as a cue to their underlying category structure. Indeed, recent research using artificial languages has demonstrated that learners can acquire grammatical categories from this type of distributional information. However, artificial languages are typically composed of a small number of equally frequent words, while words in natural languages vary widely in frequency, complicating the distributional information needed to determine categorization. In a series of three experiments we demonstrate that distributional learning is preserved in an artificial language composed of words that vary in frequency as they do in natural language, along a Zipfian distribution. Rather than depending on the absolute frequency of words and their contexts, the conditional probabilities that words will occur in certain contexts (given their base frequency) are a better basis for assigning words to categories; and this appears to be the type of statistic that human learners utilize.
We’re so excited to be joining the Linguistics department at the University of Pennsylvania!