Computational work:

Natural language processing for cognitive models of learning and memory
A substantial literature addresses episodic memory and semantic memory for words. However, no existing model unifies word meaning and episodic memory under a single learning mechanism. In collaboration with David Huber, I have shown that a model that defines word representations by their contexts of use can explain episodic memory phenomena, including list length effects, primacy and recency effects, strength effects, and pure and mixed list composition effects.
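As a minimal sketch of the underlying idea, a word can be represented by counts of the words it co-occurs with, and similarity between those context vectors can then serve as input to a memory model. Everything below (the toy sentences, window size, and function names) is illustrative, not the actual model:

```python
from collections import Counter, defaultdict
import math

def context_vectors(sentences, window=2):
    """For each word, count the words that co-occur within a small window."""
    vecs = defaultdict(Counter)
    for sent in sentences:
        for i, word in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vecs[word][sent[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sentences = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat chased a dog".split(),
]
vecs = context_vectors(sentences)

# Words used in similar contexts ("cat"/"dog") come out more similar
# than words that merely co-occur ("cat"/"on").
print(cosine(vecs["cat"], vecs["dog"]) > cosine(vecs["cat"], vecs["on"]))  # True
```

In an episodic model, similarities of this kind could provide a familiarity signal for studied items.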

Representing lexical phonology and the structure of the lexicon
Relevant publications: [1]
Words have regular structure constrained by phonotactics, but they can be arbitrarily long and vary in their phonological realization. Representing the whole phonological sequence is therefore critical for modeling language production and comprehension. In Jacobs and Mailhot (2019), I showed that neural networks can faithfully encode phonological form in word vectors, and that these phonological representations reveal the phonological structure of the lexicon and correlate with psycholinguistically important measures such as phonological neighborhood density.
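Phonological neighborhood density, one of the measures mentioned above, is standardly computed as the number of lexicon entries one phoneme edit away from a target word. A small illustrative sketch (the toy phoneme lexicon and function names are hypothetical, not from the paper):

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance over phoneme sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[m][n]

def neighborhood_density(word, lexicon):
    """Count lexicon entries exactly one phoneme edit away."""
    return sum(1 for w in lexicon if w != word and edit_distance(word, w) == 1)

# Toy lexicon as ARPABET-like phoneme tuples (illustrative only).
lexicon = [
    ("K", "AE", "T"),   # cat
    ("B", "AE", "T"),   # bat
    ("K", "AA", "T"),   # cot
    ("K", "AE", "P"),   # cap
    ("D", "AO", "G"),   # dog
]
print(neighborhood_density(("K", "AE", "T"), lexicon))  # 3 (bat, cot, cap)
```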

Connectionist models of phrase composition
The phrase “red herring” does not refer to a fish of a particular color. In this set of projects, I have built models that learn to interpret words when they appear together, and have shown that even literal phrases must be learned, which can help to explain phrase frequency effects in language processing (Jacobs, Dell, Benjamin, & Bannard, 2016; Jacobs, Dell, & Bannard, 2017).
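One simple way to frame learning to interpret words that appear together is as learning a composition function mapping two word vectors to a phrase vector. The sketch below fits a linear composition matrix by least squares on toy, randomly generated data; it illustrates the general approach only, not the models in the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4-dimensional word vectors for a toy vocabulary.
dim = 4
words = {w: rng.normal(size=dim)
         for w in ["red", "herring", "hot", "dog", "green", "tea"]}

# Synthetic "observed" phrase vectors: each is an unknown linear blend of
# its two words' vectors (purely illustrative training data).
C_true = rng.normal(size=(2 * dim, dim))
pairs = [("red", "herring"), ("hot", "dog"), ("green", "tea")]
X = np.stack([np.concatenate([words[a], words[b]]) for a, b in pairs])
Y = X @ C_true

# Learn a linear composition function: phrase ≈ [u; v] @ C, via least squares.
C, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The learned function reproduces the training phrase vectors.
print(np.allclose(X @ C, Y, atol=1e-6))  # True
```

A connectionist model replaces the linear map with a trained network, but the core point is the same: the mapping from word pairs to phrase meanings is learned from exposure.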

Behavioral work:

Language production of sequences
Relevant publications: [1] [2]
Compound words and multiword expressions are complicated: we have to identify them, remember them, and then produce them fluently. When speakers produce compounds, do they plan and produce them like monomorphemic words? And when we retrieve phrases for production, is the process similarly holistic, or more piecemeal? My work has shown that compounds are produced similarly to monomorphemic words (Jacobs & Dell, 2014), but that phrases are decomposable: retrieving one word from a phrase triggers retrieval of the other to the extent that the two co-occur (Jacobs, Dell, & Bannard, 2017).

Episodic memory for multiword sequences
Relevant publications: [1] [2]
A number of theories have proposed that multiword sequences are stored as unanalyzed wholes in long-term memory. As part of my dissertation work, I used novel applications of recognition memory and free recall paradigms to understand phrase processing (Jacobs, Dell, Benjamin, & Bannard, 2016; Jacobs, Dell, & Bannard, 2017). I found that phrase frequency effects can arise even under incremental production, without needing to posit whole-phrase representations per se. More recently, I have been investigating whether phrase frequency effects arise spontaneously during the recall of visual information (Jacobs & Ferreira, in prep).

The role of auditory memories in speakers’ prosodic decisions
Relevant publications: [1] [2] [3]
Research on spoken word production tends to focus on the mechanics and goals that shape the phonetic forms of speakers’ utterances. Context and experience, however, also influence speakers’ decisions. But which experiences matter? Can simply saying a word in your head change your fluency? In a series of projects, I have found that hearing a word is critical for production decisions (Jacobs, Yiu, Watson, & Dell, 2015; Buxó-Lugo, Jacobs, & Watson, 2020; Tippenhauer, Jacobs, & Watson, accepted), though it is not the only factor: speakers reduce even when auditory input is degraded (Jacobs, Loucks, Watson, & Dell, accepted).


Building distributional semantic models of words, phrases, and entities
Building word, phrase, and entity embeddings can be difficult and often requires significant hacking of existing packages. When working with traditional matrix factorization methods, the input space can be transformed to your liking; the nontology package makes that easy. For tutorials on learning different kinds of word embeddings from TED talks, see the companion repo: distribu_ted!
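As an illustration of the matrix-factorization workflow (transform the co-occurrence space, then factorize), here is a minimal PPMI plus truncated-SVD sketch. The toy counts are made up, and this is not the nontology API:

```python
import numpy as np

# Toy co-occurrence counts (rows = target words, columns = context words).
vocab = ["cat", "dog", "mat", "rug"]
counts = np.array([
    [0., 4., 3., 1.],
    [4., 0., 1., 3.],
    [3., 1., 0., 0.],
    [1., 3., 0., 0.],
])

# Transform the input space: positive pointwise mutual information (PPMI).
total = counts.sum()
row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore"):
    pmi = np.log((counts * total) / (row * col))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

# Factorize with truncated SVD to obtain dense k-dimensional embeddings.
U, S, Vt = np.linalg.svd(ppmi)
k = 2
embeddings = U[:, :k] * S[:k]
print(embeddings.shape)  # (4, 2)
```

Swapping in a different transformation (raw counts, TF-IDF, shifted PPMI) only changes the step before the SVD, which is what makes the factorization approach easy to customize.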