There's an interesting piece in
Nature about how
neuroscientists are learning to "decode" brain activity, that is, to identify what a person is looking at, for example, by analyzing their brain activity. First they have to "train" the decoder, a computer program, by having a subject look at, say, pictures of various objects; it then "learns" associations between classes of objects and patterns of brain activity. The decoder is then tested by presenting an image to the subject and having the computer guess what it shows:
Anne Hathaway's face appears in a clip from the film Bride Wars, engaged in heated conversation with Kate Hudson. The algorithm confidently labels them with the words 'woman' and 'talk', in large type. Another clip appears — an underwater scene from a wildlife documentary. The program struggles, and eventually offers 'whale' and 'swim' in a small, tentative font.
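To make that train-and-test loop concrete, here's a minimal sketch in Python with scikit-learn on simulated voxel data. Everything in it (the array sizes, the two categories, the logistic-regression classifier) is an illustrative assumption, not a detail from the article or the studies it describes.

# A toy version of the train/test idea described above, on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated data: 200 "trials", each a pattern of activity across 500 voxels,
# labeled with the category of the picture shown (0 = face, 1 = scene).
n_trials, n_voxels = 200, 500
labels = rng.integers(0, 2, size=n_trials)
signal = np.outer(labels - 0.5, rng.normal(size=n_voxels))   # category-dependent pattern
activity = signal + rng.normal(scale=2.0, size=(n_trials, n_voxels))  # plus noise

# "Train" the decoder on one set of trials...
X_train, X_test, y_train, y_test = train_test_split(activity, labels, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then test it by having it guess the category on held-out trials.
print("decoding accuracy:", decoder.score(X_test, y_test))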
And so it goes:
Applying their techniques beyond the encoding of pictures and movies will require a vast leap in complexity. “I don't do vision because it's the most interesting part of the brain,” says Gallant. “I do it because it's the easiest part of the brain. It's the part of the brain I have a hope of solving before I'm dead.” But in theory, he says, “you can do basically anything with this”.
Movies, anyone?
Edges became complex pictures in 2008, when Gallant's team developed a decoder that could identify which of 120 pictures a subject was viewing — a much bigger challenge than inferring what general category an image belongs to, or deciphering edges. They then went a step further, developing a decoder that could produce primitive-looking movies of what the participant was viewing based on brain activity.
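The "which of 120 pictures" result rests on an identification idea worth sketching: a model predicts the brain response each candidate image should evoke, and the decoder picks the candidate whose prediction best matches the measured response. The snippet below is a toy version of that matching step with simulated numbers; the correlation-based comparison and the data shapes are assumptions for illustration, not the study's actual model.

# Toy identification among N candidate pictures by pattern matching.
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_voxels = 120, 500

# Hypothetical predicted responses for each candidate image (in a real
# experiment these would come from a fitted encoding model).
predicted = rng.normal(size=(n_candidates, n_voxels))

# Measured response to the picture actually shown (candidate 42), plus noise.
true_index = 42
measured = predicted[true_index] + rng.normal(scale=0.5, size=n_voxels)

# Identify the picture as the candidate with the highest correlation.
corr = [np.corrcoef(measured, p)[0, 1] for p in predicted]
print("guessed picture:", int(np.argmax(corr)), "actual:", true_index)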
But I wouldn't worry about anyone peeking in on my thoughts anytime soon:
Devising a decoding model that can generalize across brains, and even for the same brain across time, is a complex problem. Decoders are generally built on individual brains, unless they're computing something relatively simple such as a binary choice — whether someone was looking at picture A or B. But several groups are now working on building one-size-fits-all models. “Everyone's brain is a little bit different,” says Haxby, who is leading one such effort. At the moment, he says, “you just can't line up these patterns of activity well enough”.
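One way to attack the "line up these patterns" problem is functional alignment: learn a transformation that maps one brain's response space onto another's using responses to a shared set of stimuli. The sketch below uses a plain orthogonal Procrustes rotation on simulated data as a stand-in for that idea; it's an illustrative assumption, not Haxby's published method.

# Toy cross-brain alignment with an orthogonal Procrustes rotation.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(2)
n_stimuli, n_voxels = 100, 300

# Brain A's responses to a shared stimulus set, and brain B's responses,
# modeled here as a rotated, noisy version of A's.
brain_a = rng.normal(size=(n_stimuli, n_voxels))
rotation, _ = np.linalg.qr(rng.normal(size=(n_voxels, n_voxels)))
brain_b = brain_a @ rotation + rng.normal(scale=0.1, size=(n_stimuli, n_voxels))

# Learn a mapping from A's response space into B's on the shared stimuli.
R, _ = orthogonal_procrustes(brain_a, brain_b)

# A decoder trained on B's data could now be applied to A's data via R.
alignment_error = np.linalg.norm(brain_a @ R - brain_b) / np.linalg.norm(brain_b)
print("relative alignment error:", round(alignment_error, 3))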