In the midst of a conversation with an acquaintance, your brain might skip ahead, anticipating the words that the other person will say. Perhaps then you will blurt out whatever comes to mind. Or maybe you will nurse your guess quietly, waiting to see if—out of all the hundreds of thousands of possibilities—your conversational partner will arrive at the same word you have been thinking of. Amazingly, your companion will often do so.
How does the brain do this? Figuring out how we process language has long been a focus for neuroscientists. Massachusetts Institute of Technology researchers brought a new take to the question using a technique called integrative modeling. They compared dozens of machine-learning algorithms called neural networks to brain scans and other data showing how neural circuits function when a person reads or listens to language. The researchers had a two-part goal: they wanted to figure out how the brain processes language and in doing so push the boundaries of what machine-learning algorithms can teach us about the brain.
The modeling technique reveals that a key role may be played by next-word prediction, which is central to algorithms such as those that suggest words as you compose your texts and e-mails. The researchers discovered that models that excel at next-word prediction are also best at anticipating brain activity patterns and reading times. So it seems that these models are not just useful for proposing the word “want” after you have typed “do you”—or for allowing computers to complete any number of tasks from behind the scenes. They may also offer a window into how your brain makes sense of the flood of words coming out of your friend’s mouth.
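To make the idea of next-word prediction concrete, here is a minimal sketch using a toy bigram model: it simply counts, in a tiny hypothetical corpus, which word most often follows each word. This is far simpler than the neural networks the researchers studied, but it captures the core task those models are trained on.

```python
from collections import Counter, defaultdict

# A toy corpus (illustrative only; real models train on billions of words).
corpus = (
    "do you want to go . do you want coffee . "
    "do you like coffee . you want to read ."
).split()

# Count how often each word follows a given word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("you"))  # in this toy corpus, "want" follows "you" most often
```

Modern language models replace these raw counts with learned probabilities over long contexts, but the objective is the same: given the words so far, guess what comes next.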