Using Machine Learning to Study Neural Representations of Language Meaning with Tom Mitchell
How does the human brain use neural activity to create and represent the meanings of words, phrases, sentences, and stories? One way to study this question is to give people text to read while scanning their brains. We have been doing such experiments with fMRI (1 mm spatial resolution) and MEG (1 msec time resolution) brain imaging, and developing novel machine learning analyses for these data. As a result, we have learned answers to questions such as:
- Are the neural encodings of word meaning the same in your brain and mine?
- Are neural encodings of word meaning built out of recognizable subcomponents, or are they randomly different for each word?
- What sequence of neurally encoded information flows through the brain during the half second in which it comprehends a word?
- How are meanings of multiple words combined when reading phrases, sentences, and stories?
This talk will summarize our machine learning approach, some of what we have learned, and newer questions we are currently studying.
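To give a flavor of the kind of analysis involved, here is a minimal, self-contained sketch of decoding which word a subject is reading from a (synthetic) voxel activation pattern. All data and dimensions here are invented for illustration; this is a generic nearest-centroid decoder with additive Gaussian noise standing in for fMRI measurement noise, not the group's actual pipeline.

```python
# Illustrative sketch with synthetic data: decode which word is being read
# from a noisy "voxel" activation pattern, via nearest-centroid matching.
import random
random.seed(0)

WORDS = ["house", "hammer", "dog"]   # hypothetical stimulus words
N_VOXELS = 50                         # invented dimensionality
N_TRIALS = 6                          # training repetitions per word

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

# Each word gets an underlying "true" activation pattern; each observed
# trial is that pattern plus noise (a stand-in for measurement noise).
true_patterns = {w: [random.gauss(0, 1) for _ in range(N_VOXELS)]
                 for w in WORDS}

def trial(word):
    return [x + random.gauss(0, 0.5) for x in true_patterns[word]]

# "Train": estimate each word's pattern by averaging training trials.
centroids = {}
for w in WORDS:
    trials = [trial(w) for _ in range(N_TRIALS)]
    centroids[w] = [sum(vals) / N_TRIALS for vals in zip(*trials)]

# "Test": classify fresh trials by cosine similarity to the centroids.
correct, n_test = 0, 0
for w in WORDS:
    for _ in range(10):
        obs = trial(w)
        guess = max(WORDS, key=lambda c: cosine(obs, centroids[c]))
        correct += (guess == w)
        n_test += 1
print(f"decoding accuracy: {correct / n_test:.2f}")  # chance level is 1/3
```

Above-chance accuracy on held-out trials is the basic evidence that word meaning is systematically encoded in the activation patterns; training on one subject and testing on another extends the same logic to the question of whether encodings are shared across brains.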
Tom M. Mitchell is the E. Fredkin University Professor at Carnegie Mellon University, where he founded the world's first Machine Learning Department. His research uses machine learning to develop computers that are learning to read the web (http://rtw.ml.cmu.edu), and uses brain imaging to study how the human brain understands what it reads. Mitchell is a member of the U.S. National Academy of Engineering and the American Academy of Arts and Sciences, a Fellow of the American Association for the Advancement of Science (AAAS), and a Fellow and Past President of the Association for the Advancement of Artificial Intelligence (AAAI). In 2015 he received an honorary Doctor of Laws degree from Dalhousie University for his contributions to machine learning and cognitive neuroscience.