
The Nobel Prize in Physics Highlights Major Breakthroughs in the AI Revolution: Creating Machines That Learn


If your jaw dropped when you watched the latest AI-generated video, your bank balance was saved from criminals by a fraud detection system, or your day was made a little easier because you could dictate a text message on the go, then you have many scientists, mathematicians and engineers to thank.

But two names stand out for fundamental contributions to the deep learning technology that makes these experiences possible: physicist John Hopfield of Princeton University and computer scientist Geoffrey Hinton of the University of Toronto.

The two researchers received the Nobel Prize in Physics on October 8, 2024 for their groundbreaking work in the field of artificial neural networks. Although artificial neural networks are modeled after biological neural networks, both researchers' work was based on statistical physics, hence the prize in physics.


How a neuron calculates

Artificial neural networks owe their origins to research into biological neurons in living brains. In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts proposed a simple model of how a neuron works. In the McCulloch-Pitts model, a neuron is connected to its neighboring neurons and can receive signals from them. It can then combine these signals to send signals to other neurons.

But there's a twist: it can weight signals coming from different neighbors differently. Imagine you're trying to decide whether to buy a new best-selling phone. You talk to your friends and ask for their recommendations. A simple strategy is to collect all your friends' recommendations and go with what the majority says. For example, you ask three friends, Alice, Bob and Charlie, and they say yes, yes and no, respectively. That leads you to the decision to buy the phone, because you have two yeses and one no.

You may trust some friends more because they have in-depth knowledge of technical gadgets, so you might decide to give their recommendations more weight. For example, if Charlie is very well informed, you might count his no three times, and now the tally is two yeses and three nos: your decision is not to buy the phone. If you are unlucky enough to have a friend you completely distrust when it comes to tech gadgets, you might even give their recommendation a negative weight. Then their yes counts as a no and their no counts as a yes.
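Here is a minimal Python sketch of that weighted vote. The votes and weights mirror the Alice, Bob and Charlie example above; the code is purely illustrative and uses no neural-network library.

```python
# Combine yes (+1) / no (-1) recommendations, each scaled by how much
# you trust that friend; buy if the weighted sum is positive.

def weighted_vote(votes, weights):
    total = sum(w * v for w, v in zip(weights, votes))
    return "buy" if total > 0 else "don't buy"

votes = [+1, +1, -1]                    # Alice: yes, Bob: yes, Charlie: no

print(weighted_vote(votes, [1, 1, 1]))  # equal trust: two yeses win -> buy
print(weighted_vote(votes, [1, 1, 3]))  # Charlie counts three times -> don't buy
```

An artificial neuron does essentially this: a weighted sum of incoming signals, followed by a decision rule.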

Once you have decided for yourself whether the new phone is a good choice, other friends can ask you for your recommendation. Similarly, neurons in artificial and biological neural networks can merge signals from their neighbors and send a signal on to other neurons. This ability leads to an important distinction: is there a cycle in the network? For example, if I ask Alice, Bob and Charlie today, and tomorrow Alice asks me for my recommendation, there is a cycle: from Alice to me, and from me back to Alice.


If the connections between neurons have no cycle, computer scientists call this a feedforward neural network. The neurons in a feedforward network can be arranged in layers. The first layer consists of the inputs. The second layer receives its signals from the first layer and so on. The last layer represents the outputs of the network.

However, if there is a cycle in the network, computer scientists call it a recurrent neural network, and the arrangements of neurons can be more complicated than in feedforward neural networks.
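To make the feedforward case concrete, here is a toy Python sketch of signals flowing through layers. The weights, sizes and threshold activation are arbitrary illustrative choices, not taken from any real network:

```python
import numpy as np

def step(x):
    return (x > 0).astype(float)       # simple threshold activation

x = np.array([1.0, 0.0, 1.0])          # first layer: the inputs

W1 = np.array([[ 0.5, -1.0,  0.8],     # weights from 3 inputs to 2 hidden neurons
               [-0.3,  0.9,  0.4]])
hidden = step(W1 @ x)                  # second layer: weighted sums, thresholded

W2 = np.array([[1.0, -0.5]])           # weights from hidden neurons to 1 output
output = step(W2 @ hidden)             # last layer: the network's output
print(output)
```

A recurrent network, by contrast, feeds signals back into the network; the Hopfield network described next is an example.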

Hopfield networks

The initial inspiration for artificial neural networks came from biology, but soon other fields began to shape their development. These include logic, mathematics and physics. Physicist John Hopfield used ideas from physics to study a particular type of recurrent neural network, now called the Hopfield network. In particular, he studied their dynamics: what happens to the network over time?

Such dynamics are also important when information spreads through social networks. We have all seen memes go viral and echo chambers form in online social networks. Both are collective phenomena that ultimately arise from simple exchanges of information between people in the network.

Hopfield pioneered the use of models from physics, especially those developed to study magnetism, to understand the dynamics of recurrent neural networks. He also showed that their dynamics can provide such neural networks with a form of memory.
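That memory can be demonstrated in a few lines. Below is a minimal Python sketch of the standard Hopfield recipe (Hebbian storage followed by threshold updates); the pattern and network size are arbitrary illustrative choices:

```python
import numpy as np

# Store one pattern, then recover it from a corrupted copy by letting
# the network's dynamics settle back into the stored memory.

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # the stored memory (+/-1)
n = len(pattern)

# Hebbian storage: strengthen connections between co-active neurons.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                          # no self-connections

state = pattern.copy()
state[[0, 3]] *= -1                               # corrupt two entries

# Dynamics: each neuron aligns with the weighted signal from its neighbors.
for _ in range(5):                                # a few sweeps suffice here
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, pattern))             # True: the memory returns
```

Correcting a corrupted pattern back to a stored one is exactly the error-correcting memory described above.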

Boltzmann machines and backpropagation

In the 1980s, Geoffrey Hinton, computational neurobiologist Terrence Sejnowski, and others expanded Hopfield's ideas to create a new class of models called Boltzmann machines, named after 19th-century physicist Ludwig Boltzmann. As the name implies, the design of these models is rooted in the statistical physics developed by Boltzmann. Unlike Hopfield networks, which can store patterns and correct errors in them (much as a spell checker does), Boltzmann machines can generate new patterns, sowing the seeds of the modern generative AI revolution.
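As a flavor of what "generating new patterns" means, here is a minimal sketch of Gibbs sampling in a fully visible Boltzmann machine with random, untrained weights. Hinton and Sejnowski's models also include hidden units and a learning rule, which this sketch deliberately omits:

```python
import numpy as np

# Units are +/-1; each unit is repeatedly resampled with a probability
# that depends on the weighted input from the other units.

rng = np.random.default_rng(1)
n = 6
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2                     # weights must be symmetric
np.fill_diagonal(W, 0.0)              # no self-connections

sigmoid = lambda z: 1 / (1 + np.exp(-z))
s = rng.choice([-1, 1], size=n)       # start from a random pattern

for _ in range(1000):                 # Gibbs sampling, one unit at a time
    i = rng.integers(n)
    p_plus = sigmoid(2 * W[i] @ s)    # P(unit i = +1 | all other units)
    s[i] = 1 if rng.random() < p_plus else -1

print(s)                              # one sampled ("generated") pattern
```

Rather than settling into a fixed stored memory, the network keeps producing samples from a probability distribution over patterns, which is what makes it generative.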

Hinton was also part of another breakthrough that occurred in the 1980s: backpropagation. If you want artificial neural networks to perform interesting tasks, you have to somehow choose the right weights for the connections between artificial neurons. Backpropagation is a key algorithm that allows weights to be selected based on the performance of the network on a training dataset. However, training artificial neural networks with many layers remained a challenge.
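To give a flavor of the algorithm, here is a minimal Python sketch of backpropagation on a tiny two-layer network. The XOR task, layer sizes and learning rate are illustrative choices, not taken from the article:

```python
import numpy as np

# Nudge the weights so the network's outputs match the training targets,
# by passing the error gradient backwards through the layers.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0                                          # learning rate

for _ in range(5000):
    # Forward pass: compute the network's outputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)           # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)            # hidden-layer error signal

    # Gradient-descent step on every weight.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should be close to [0, 1, 1, 0]
```

With one hidden layer this works well; the difficulty mentioned above arises when many layers are stacked and the backward-flowing gradients become too weak or unstable to train the early layers.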

In the 2000s, Hinton and his colleagues cleverly used Boltzmann machines to train multilayer networks: first pretraining the network layer by layer, then using a fine-tuning algorithm on top of the pretrained network to further adjust the weights. Multilayer networks were rebranded as deep networks, and the deep learning revolution had begun.

AI pays it back to physics

The Nobel Prize in Physics shows how ideas from physics have contributed to the rise of deep learning. Now deep learning is beginning to repay its debt to physics by enabling accurate and fast simulations of systems ranging from molecules and materials to the Earth's entire climate.

In awarding the Nobel Prize in Physics to Hopfield and Hinton, the prize committee expressed its hope that humanity can use these advances to promote human well-being and build a sustainable world.

This article is republished from The Conversation, an independent nonprofit organization providing facts and trusted analysis to help you understand our complex world. It was written by Ambuj Tewari, University of Michigan. Ambuj Tewari receives funding from the NSF.
