Monday, November 29, 2021

The Science of Mind Reading

One night in October, 2009, a young man lay in an fMRI scanner in Liège, Belgium. Five years earlier, he’d suffered a head trauma in a motorcycle accident, and since then he hadn’t spoken. He was said to be in a “vegetative state.” A neuroscientist named Martin Monti sat in the next room, along with a few other researchers. For years, Monti and his postdoctoral adviser, Adrian Owen, had been studying vegetative patients, and they had developed two controversial hypotheses. First, they believed that someone could lose the ability to move or even blink while still being conscious; second, they thought that they had devised a method for communicating with such “locked-in” people by detecting their unspoken thoughts.

In a sense, their strategy was simple. Neurons use oxygen, which is carried through the bloodstream inside molecules of hemoglobin. Hemoglobin contains iron, and, by tracking the iron, the magnets in fMRI machines can build maps of brain activity. Picking out signs of consciousness amid the swirl seemed nearly impossible. But, through trial and error, Owen’s group had devised a clever protocol. They’d discovered that if a person imagined walking around her house there was a spike of activity in her parahippocampal gyrus—a finger-shaped area buried deep in the temporal lobe. Imagining playing tennis, by contrast, activated the premotor cortex, which sits on a ridge near the skull. The activity was clear enough to be seen in real time with an fMRI machine. In a 2006 study published in the journal Science, the researchers reported that they had asked a locked-in person to think about tennis, and seen, on her brain scan, that she had done so.

With the young man, known as Patient 23, Monti and Owen were taking a further step: attempting to have a conversation. They would pose a question and tell him that he could signal “yes” by imagining playing tennis, or “no” by thinking about walking around his house. In the scanner control room, a monitor displayed a cross-section of Patient 23’s brain. As different areas consumed blood oxygen, they shimmered red, then bright orange. Monti knew where to look to spot the yes and the no signals.
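The decoding rule itself can be caricatured in a few lines of code. The sketch below is only an illustration, not the Liège group's actual analysis: it assumes we already have an averaged activity time course for each of the two regions, and the baseline window and threshold are invented for the example.

```python
import numpy as np

def decode_answer(premotor_ts, parahippocampal_ts, baseline_end, threshold=1.0):
    """Decode a yes/no answer from two region-of-interest time courses.

    "yes" means the premotor cortex (imagined tennis) rose more above its
    baseline; "no" means the parahippocampal gyrus (imagined navigation) did.
    Returns "unclear" if neither rose past the threshold (in rough z-units).
    """
    def rise(ts):
        ts = np.asarray(ts, dtype=float)
        baseline, task = ts[:baseline_end], ts[baseline_end:]
        return (task.mean() - baseline.mean()) / (baseline.std() + 1e-9)

    tennis, house = rise(premotor_ts), rise(parahippocampal_ts)
    if max(tennis, house) < threshold:
        return "unclear"
    return "yes" if tennis > house else "no"

# Hypothetical example: premotor activity climbs after the question is asked.
rng = np.random.default_rng(0)
premotor = np.concatenate([rng.normal(0, 1, 20), rng.normal(2, 1, 20)])
parahippocampal = rng.normal(0, 1, 40)
print(decode_answer(premotor, parahippocampal, baseline_end=20))  # "yes"
```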

He switched on the intercom and explained the system to Patient 23. Then he asked the first question: “Is your father’s name Alexander?”

The man’s premotor cortex lit up. He was thinking about tennis—yes.

“Is your father’s name Thomas?”

Activity in the parahippocampal gyrus. He was imagining walking around his house—no.

“Do you have any brothers?”

Tennis—yes.

“Do you have any sisters?”

House—no.

“Before your injury, was your last vacation in the United States?”

Tennis—yes.

The answers were correct. Astonished, Monti called Owen, who was away at a conference. Owen thought that they should ask more questions. The group ran through some possibilities. “Do you like pizza?” was dismissed as being too imprecise. They decided to probe more deeply. Monti turned the intercom back on.

“Do you want to die?” he asked.

For the first time that night, there was no clear answer.

That winter, the results of the study were published in The New England Journal of Medicine. The paper caused a sensation. The Los Angeles Times wrote a story about it, with the headline “Brains of Vegetative Patients Show Life.” Owen eventually estimated that twenty per cent of patients who were presumed to be vegetative were actually awake. This was a discovery of enormous practical consequence: in subsequent years, through painstaking fMRI sessions, Owen’s group found many patients who could interact with loved ones and answer questions about their own care. The conversations improved their odds of recovery. Still, from a purely scientific perspective, there was something unsatisfying about the method that Monti and Owen had developed with Patient 23. Although they had used the words “tennis” and “house” in communicating with him, they’d had no way of knowing for sure that he was thinking about those specific things. They had been able to say only that, in response to those prompts, thinking was happening in the associated brain areas. “Whether the person was imagining playing tennis, football, hockey, swimming—we don’t know,” Monti told me recently.

During the past few decades, the state of neuroscientific mind reading has advanced substantially. Cognitive psychologists armed with an fMRI machine can tell whether a person is having depressive thoughts; they can see which concepts a student has mastered by comparing his brain patterns with those of his teacher. By analyzing brain scans, a computer system can edit together crude reconstructions of movie clips you’ve watched. One research group has used similar technology to accurately describe the dreams of sleeping subjects. In another lab, scientists have scanned the brains of people who are reading the J. D. Salinger short story “Pretty Mouth and Green My Eyes,” in which it is unclear until the end whether or not a character is having an affair. From brain scans alone, the researchers can tell which interpretation readers are leaning toward, and watch as they change their minds.

I first heard about these studies from Ken Norman, the fifty-year-old chair of the psychology department at Princeton University and an expert on thought decoding. Norman works at the Princeton Neuroscience Institute, which is housed in a glass structure, constructed in 2013, that spills over a low hill on the south side of campus. P.N.I. was conceived as a center where psychologists, neuroscientists, and computer scientists could blend their approaches to studying the mind; M.I.T. and Stanford have invested in similar cross-disciplinary institutes. At P.N.I., undergraduates still participate in old-school psych experiments involving surveys and flash cards. But upstairs, in a lab that studies child development, toddlers wear tiny hats outfitted with infrared brain scanners, and in the basement the skulls of genetically engineered mice are sliced open, allowing individual neurons to be controlled with lasers. A server room houses a high-performance computing cluster that analyzes the data generated by these experiments.

Norman, whose jovial intelligence and unruly beard give him the air of a high-school science teacher, occupies an office on the ground floor, with a view of a grassy field. The bookshelves behind his desk contain the intellectual DNA of the institute, with William James next to texts on machine learning. Norman explained that fMRI machines hadn’t advanced that much; instead, artificial intelligence had transformed how scientists read neural data. This had helped shed light on an ancient philosophical mystery. For centuries, scientists had dreamed of locating thought inside the head but had run up against the vexing question of what it means for thoughts to exist in physical space. When Erasistratus, an ancient Greek anatomist, dissected the brain, he suspected that its many folds were the key to intelligence, but he could not say how thoughts were packed into the convoluted mass. In the seventeenth century, Descartes suggested that mental life arose in the pineal gland, but he didn’t have a good theory of what might be found there. Our mental worlds contain everything from the taste of bad wine to the idea of bad taste. How can so many thoughts nestle within a few pounds of tissue?

Now, Norman explained, researchers had developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense “meaning space.” They could see how these points were interrelated and encoded by neurons. By cracking the code, they were beginning to produce an inventory of the mind. “The space of possible thoughts that people can think is big—but it’s not infinitely big,” Norman said. A detailed map of the concepts in our minds might soon be within reach.
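To make the "meaning space" idea concrete, here is a toy sketch, not any lab's actual model: each concept is a point (a vector), and a decoded brain state is assigned to whichever concept it lies closest to. The three-dimensional vectors and the three concept labels are invented for illustration; real studies use far higher-dimensional embeddings learned from text or behavior.

```python
import numpy as np

# Invented concept vectors: each concept is a point in a shared space.
concepts = {
    "tennis": np.array([0.9, 0.1, 0.3]),
    "house":  np.array([0.1, 0.8, 0.2]),
    "beach":  np.array([0.2, 0.3, 0.9]),
}

def nearest_concept(decoded_vector):
    """Return the concept whose vector is most similar to the decoded one."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(concepts, key=lambda name: cosine(concepts[name], decoded_vector))

# Suppose a decoder trained on brain data produced this estimate of a thought:
print(nearest_concept(np.array([0.85, 0.2, 0.25])))  # "tennis"
```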

Norman invited me to watch an experiment in thought decoding. A postdoctoral student named Manoj Kumar led us into a locked basement lab at P.N.I., where a young woman was lying in the tube of an fMRI scanner. A screen mounted a few inches above her face played a slide show of stock images: an empty beach, a cave, a forest.

“We want to get the brain patterns that are associated with different subclasses of scenes,” Norman said.
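What happens to those patterns afterward can be sketched, in outline, with simulated data. The example below illustrates the general decoding approach rather than the P.N.I. pipeline: a classifier is trained on voxel patterns labeled by scene category, then asked to name the category of a new scan. The voxel count, noise level, and choice of scikit-learn's logistic regression are assumptions made for the illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels, trials_per_class = 200, 30
classes = ["beach", "cave", "forest"]

# Simulate a distinct mean voxel pattern for each scene class, plus noise.
means = {c: rng.normal(0, 1, n_voxels) for c in classes}
X = np.vstack([means[c] + rng.normal(0, 2, (trials_per_class, n_voxels))
               for c in classes])
y = np.repeat(classes, trials_per_class)

# Learn which voxel patterns go with which scene category.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Decode a new "scan" that resembles the beach pattern.
new_scan = means["beach"] + rng.normal(0, 2, n_voxels)
print(clf.predict(new_scan.reshape(1, -1))[0])  # likely "beach"
```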



