
The History of Deep Learning: Timeline

By Expectlabs @ExpectLabs

Deep learning has proven invaluable in powering recent AI innovation. But how has it evolved over the years? Check out the timeline below for a peek into the history of deep learning.

  • 1943  |  McCulloch and Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity,” laying the foundations for artificial neural networks.
  • 1950  |  Alan Turing suggests the Turing Test as a way to measure machine intelligence.
  • 1950s  |  The first generation of AI researchers attempt to manually identify and code all the features that computers need to identify objects. This is immensely time-intensive and does not result in the immediate AI leaps scientists of the time were predicting.
  • 1959  |  David H. Hubel and Torsten Wiesel discover two types of cells in the primary visual cortex: simple cells and complex cells, which will later inspire neural networks’ multi-level approach for dealing with increasing levels of abstraction.
  • 1960s  |  Ray Solomonoff publishes his ideas on algorithmic probability, establishing the groundwork for modern AI theory.
  • 1974  |  In his Harvard PhD thesis, Paul Werbos describes training neural networks through backpropagation.
  • 1980  |  Dr. Kunihiko Fukushima proposes the neocognitron, a hierarchical multilayered neural network capable of robust visual pattern recognition through learning.
  • 1989  |  Yann LeCun et al. successfully apply backpropagation to a deep neural network with the purpose of recognizing handwritten ZIP codes. Unfortunately, the process is time-intensive, so it doesn’t gain much traction until years later.
  • 2000s  |  Increases in computing power and the availability of structured data convince Yann LeCun and Geoffrey Hinton to push neural network technology forward again.
  • 2005  |  GPUs are produced in great quantities and become much cheaper.
  • 2009  |  Hinton et al.’s deep-learning neural network breaks the record for accuracy in turning the spoken word into typed text. Later that year, Hinton is invited by Li Deng to work with Microsoft on applying deep learning to speech recognition.
  • 2011  |  IBM’s Watson AI wins “Jeopardy!”
  • 2011  |  Apple introduces Siri, the iPhone’s personal assistant.
  • 2011  |  Expect Labs is founded.
  • 2012  |  Google’s neural network of 16,000 computer processors browses YouTube and teaches itself to recognize cats.
  • 2013  |  Japanese researchers build baseball-playing robots that continuously learn via neural networks.
  • 2014  |  Google acquires DeepMind.
  • 2014  |  The MindMeld API is released, powering voice-driven content discovery.
  • 2014  |  Skype Translator is released.
  • 2014  |  Baidu’s Deep Speech achieves 81% accuracy in noisy environments.
