How UofT’s Geoffrey Hinton Won the 2024 Nobel Prize in Physics

Written by Thomas Crunelle
Illustrated by Maggie Huang

In 2024, the Nobel Prize in Physics was awarded for something that might surprise you. Not for discoveries about black holes or the conundrums of quantum mechanics, but for discoveries in machine learning (ML) with artificial neural networks (ANN).¹ These terms might sound intimidating, but bear with me: this is a story about how machines learn, inspired by the human brain, and how two brilliant scientists made it all possible.

What is Machine Learning?

Let’s start with a simple analogy: imagine teaching a child to recognize cats. You show them a series of animal pictures, pointing out, “This one’s a cat” and “This one’s not.” Over time, the child picks up on patterns: the whiskers, pointy ears, perhaps that feline attitude, and eventually, they can spot a cat without your help. Machine learning works in much the same way. The “child” is actually a computer, and instead of pointing out features, you feed it data, heaps and heaps of data, and let it uncover the patterns on its own.

In essence, machine learning is about teaching computers to learn from examples rather than through explicit programming.² For instance, instead of writing out step-by-step instructions for identifying a cat, we provide a machine with labeled images of cats and non-cats. The machine then figures out what makes a cat a cat.
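To make "learning from labeled examples" concrete, here is a minimal sketch of one of the simplest possible approaches, a nearest-neighbour classifier. The two numeric "features" per animal (say, ear pointiness and whisker length) and their values are invented purely for illustration:

```python
# Toy sketch of learning from labeled examples: a 1-nearest-neighbour
# classifier. The feature values below are made up for illustration.
examples = [((0.9, 0.8), "cat"), ((0.8, 0.9), "cat"),
            ((0.1, 0.2), "not cat"), ((0.2, 0.1), "not cat")]

def classify(features):
    """Label a new example with the label of its closest labeled example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], features))[1]

print(classify((0.85, 0.75)))  # a pointy-eared, whiskery animal -> "cat"
```

No step-by-step rules for "cat" are ever written down; the label comes entirely from the labeled data, which is the essence of the idea above.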

ML is part of the broader field of artificial intelligence (AI), which aims to create systems capable of perception (e.g., recognizing faces or voices), reasoning, decision-making, and even exhibiting creativity.² In recent years, ML has become the engine driving AI. Keep in mind, however, that ML is just one piece of the AI puzzle, albeit a fascinating and transformative one.²

What are Artificial Neural Networks?

Our brains are marvels of biology that include myriads of interconnected cells called neurons that communicate, process information, and help us learn. This sparked the following idea: what if we could create a system that mimics the brain’s way of working? Thus, artificial neural networks were born.³

An artificial neural network is essentially a digital brain, consisting of virtual “neurons” organized in layers. The first layer takes in input (like an image), the middle layers process it, and the final layer produces an output (e.g., “This is a cat!”). These networks learn by fine-tuning the connections between neurons, much like our brains strengthen pathways as we gain new knowledge.³

This brain-inspired technology forms the backbone of modern machine learning, enabling computers to recognize patterns and solve problems in ways that were once the exclusive domain of human intelligence.
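The layered structure described above can be sketched in a few lines of NumPy. The weights here are random placeholders rather than a trained network, and the layer sizes are arbitrary; the point is only to show input layer, middle (hidden) layer, and output layer:

```python
import numpy as np

# Minimal sketch of a layered neural network: input -> hidden -> output.
# Random weights stand in for the learned connections described above.
rng = np.random.default_rng(0)

def layer(x, W, b):
    """One layer of 'neurons': weighted sum followed by a ReLU nonlinearity."""
    return np.maximum(0, W @ x + b)

x = rng.normal(size=4)                     # input layer (e.g. image features)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)

h = layer(x, W1, b1)                       # middle layer processes the input
scores = W2 @ h + b2                       # output layer: "cat" vs "not cat"
probs = np.exp(scores) / np.exp(scores).sum()  # softmax: scores -> probabilities
```

"Learning" then consists of adjusting `W1`, `b1`, `W2`, `b2` so the output probabilities match the labeled examples, mirroring how the brain strengthens connections.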

The Hopfield Network

In 1982, John Hopfield, one of this year’s Nobel laureates, unveiled the Hopfield network.¹ This network of binary neurons functions as an associative memory, capable of storing and retrieving patterns.⁴ Imagine a brain-inspired system that can “recall” a complete image from a fragmented or noisy input – show it a blurry cat, and it fills in the missing details.
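A tiny version of this "recall from a noisy input" behaviour can be sketched as follows. The pattern, network size, and update count are illustrative; the core ideas (Hebbian-style storage and iterative updates toward a stored pattern) are the ones behind the Hopfield network:

```python
import numpy as np

def train(patterns):
    """Store ±1 patterns in a weight matrix (Hebbian-style learning)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no neuron connects to itself
    return W / n

def recall(W, state, steps=10):
    """Repeatedly update neurons; the state settles into a stored pattern."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties toward +1
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])  # the "memory" to store
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1                  # corrupt one bit ("blurry cat")
restored = recall(W, noisy)     # the network fills in the missing detail
```

Starting from the corrupted input, the updates drive the state back to the stored pattern, which is exactly the associative-memory behaviour described above.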

Hinton’s Boltzmann Machine

A few years later, in the mid-1980s, Geoffrey Hinton, the other laureate, built on Hopfield’s ideas to create the Boltzmann machine.⁵ Hinton applied ideas from statistical physics to improve the Hopfield network: by incorporating probabilities into a multi-layered version of the network, he developed a system capable of recognizing and classifying images, as well as generating new examples similar to those it was trained on.¹
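The key shift from the Hopfield network is that updates become probabilistic: instead of deterministically flipping to the lowest-energy state, each neuron switches on with a probability given by the Boltzmann distribution from statistical physics. The sketch below illustrates just that stochastic update rule; the network size, weights, and temperature `T` are placeholders, not Hinton's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(W, s):
    """Hopfield-style energy of a ±1 state; low energy = stored pattern."""
    return -0.5 * s @ W @ s

def gibbs_step(W, s, T=1.0):
    """Update each neuron stochastically: on with Boltzmann probability."""
    s = s.copy()
    for i in rng.permutation(len(s)):
        net = W[i] @ s  # input from the other neurons (W[i, i] is zero)
        p_on = 1.0 / (1.0 + np.exp(-2.0 * net / T))  # sigmoid of energy gap
        s[i] = 1 if rng.random() < p_on else -1
    return s

n = 6
W = rng.normal(size=(n, n))
W = (W + W.T) / 2           # symmetric weights, as in the Hopfield network
np.fill_diagonal(W, 0)

s = rng.choice([-1, 1], size=n)  # random starting state
for _ in range(50):
    s = gibbs_step(W, s)         # sampling visits low-energy states often
```

Because the dynamics sample states rather than settle into a single one, such a network can also generate new configurations resembling its training data, which is the generative ability mentioned above.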

Why It Matters

Why did this work earn a Nobel Prize? Because ANNs have become integral to everyday life. They power technologies across numerous fields: earth sciences, hydrology, and ocean modeling, as well as climate and weather prediction.⁶ In medicine, they assist in diagnostics by analyzing medical images and signals. The finance industry benefits too, with applications in stock market forecasting and fraud detection.⁶ In short, the breakthroughs by Hopfield and Hinton have reshaped science, technology, and everyday living.

Sources:

  1. Cunningham C. Survival of the fittest. Encyclopædia Britannica. 2024 Mar 14 [accessed 2024 Apr 17]. https://www.britannica.com/science/survival-of-the-fittest
  2. Hunter P. The evolution of human endurance. EMBO reports. 2019;20(11). doi:10.15252/embr.201949396
  3. Bompas A, Kendall G, Sumner P. Spotting fruit versus picking fruit as the selective advantage of human colour vision. i-Perception. 2013;4(2):84–94. doi:10.1068/i0564
  4. Morriss‐Kay GM. The evolution of human artistic creativity. Journal of Anatomy. 2010;216(2):158–176. doi:10.1111/j.1469-7580.2009.01160.x
  5. Heinrich B. Racing the antelope: What animals can teach us about running and life. New York: Harper Collins; 2001.