Culture Magazine

Using Machine Learning to "imitate" Speech Processing in the Brain

By Bbenzon @bbenzon

Result 1: self-supervised learning suffices to make this algorithm learn brain-like representations (i.e. most brain areas significantly correlate with its activations in response to the same speech input). pic.twitter.com/MMWKoJgW8W

— Jean-Rémi King (@JeanRemiKing) June 6, 2022
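The "correlation with brain areas" mentioned in the tweet refers to a standard encoding-model analysis: fit a linear map from the network's activations to recorded brain responses, then score each brain location by the correlation between predicted and held-out responses. The sketch below illustrates the idea on synthetic data only; the array shapes, the ridge penalty, and the simulated signals are all illustrative assumptions, not the paper's actual pipeline (which uses real wav2vec 2.0 activations and fMRI/MEG recordings).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: in the actual study, X would be wav2vec 2.0
# activations for a speech stimulus and Y a subject's brain responses
# to the same stimulus; here both are simulated with a known linear
# relationship plus noise.
n_samples, n_features, n_voxels = 200, 32, 10
X = rng.standard_normal((n_samples, n_features))
true_map = rng.standard_normal((n_features, n_voxels))
Y = X @ true_map + 0.5 * rng.standard_normal((n_samples, n_voxels))

# Fit a ridge-regularized linear map on the first half of the data,
# then evaluate on the held-out second half.
half = n_samples // 2
X_tr, X_te, Y_tr, Y_te = X[:half], X[half:], Y[:half], Y[half:]
lam = 1.0  # illustrative ridge penalty
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_features), X_tr.T @ Y_tr)
Y_hat = X_te @ W

def pearson(a, b):
    """Column-wise Pearson correlation between two (samples, voxels) arrays."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))

# One "brain score" per simulated voxel: high scores mean the model's
# activations linearly predict that voxel's response.
scores = pearson(Y_hat, Y_te)
print(scores.round(2))
```

In the paper's setting, voxels whose scores significantly exceed a permutation-based null are the "brain areas that correlate with the model's activations."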

Result 3: With an additional 386 subjects, we show that wav2vec 2.0 learns both the speech-specific and the language-specific representations of the prefrontal and temporal cortices, respectively. pic.twitter.com/329u5xyqkP

— Jean-Rémi King (@JeanRemiKing) June 6, 2022

By J Millet*, @c_caucheteux*, @PierreOrhan, Y Boubenec, @agramfort, E Dunbar, @chrplr and myself at @MetaAI, @ENS_ULM, @Inria & @Neurospin
🙏Thanks @samnastase, @HassonUri, John Hale, @nilearn, @pyvista and the open-source and open-science communities for making this possible!

— Jean-Rémi King (@JeanRemiKing) June 6, 2022

Interested in this line of research?
Check out our latest paper on the convergence between deepnets and the brain: https://t.co/oDmMoKlWzx

— Jean-Rémi King (@JeanRemiKing) June 6, 2022
