Geoffrey Everest Hinton (born December 6, 1947) is a British-Canadian computer scientist, cognitive scientist, and cognitive psychologist who was awarded the 2024 Nobel Prize in Physics. He is known for his work on artificial neural networks, which earned him the nickname "the Godfather of AI." He holds the title of University Professor Emeritus at the University of Toronto.
From 2013 to 2023, Hinton worked at both Google Brain and the University of Toronto. In May 2023, he announced he would leave Google, stating he wanted to openly discuss the risks of artificial intelligence (AI) technology. In 2017, he helped start the Vector Institute in Toronto and became its chief scientific advisor.
In 1986, Hinton co-authored a widely read paper with David Rumelhart and Ronald J. Williams. This paper helped popularize the backpropagation algorithm, which is used to train multi-layer neural networks. Although others had proposed the idea earlier, Hinton is considered a key leader in the deep learning field. His work with students Alex Krizhevsky and Ilya Sutskever on AlexNet, a breakthrough in image recognition for the ImageNet challenge in 2012, was a major advancement in computer vision.
In 2018, Hinton shared the Turing Award with Yoshua Bengio and Yann LeCun for their contributions to deep learning. These three are sometimes called the "Godfathers of Deep Learning" and have given joint public talks. In 2024, Hinton and John Hopfield won the Nobel Prize in Physics for discoveries that helped develop machine learning using artificial neural networks.
In May 2023, Hinton resigned from Google to speak freely about AI risks, including misuse by harmful actors, job loss due to technology, and dangers from artificial general intelligence. He emphasized that safety guidelines require cooperation among AI users to avoid serious consequences. After receiving the Nobel Prize, he urged faster research on AI safety to manage systems that could surpass human intelligence.
Education
Hinton was born on December 6, 1947, in Wimbledon, England, and was educated at Clifton College in Bristol. In 1967, he enrolled as an undergraduate at King's College, Cambridge, where he switched several times between subjects, including natural sciences, history of art, and philosophy. In 1970, he graduated from the University of Cambridge with a Bachelor of Arts degree in experimental psychology. He then spent a year as a carpentry apprentice before returning to academic study. From 1972 to 1975, he continued his education at the University of Edinburgh, and in 1978 he earned a PhD in artificial intelligence for research supervised by Christopher Longuet-Higgins, who favoured symbolic AI over neural network methods.
Career
After completing his PhD, Hinton worked at the University of Sussex and the MRC Applied Psychology Unit. When he had trouble finding funding in Britain, he moved to the United States and worked at the University of California, San Diego, and Carnegie Mellon University. He later became the founding director of the Gatsby Charitable Foundation Computational Neuroscience Unit at University College London. Today, he is a University Professor Emeritus in the Department of Computer Science at the University of Toronto, where he has worked since 1987.
When Hinton arrived in Canada, he was appointed to the Canadian Institute for Advanced Research (CIFAR) in 1987 as a Fellow in CIFAR's first research program, Artificial Intelligence, Robotics & Society. In 2004, Hinton and others successfully proposed a new program at CIFAR called "Neural Computation and Adaptive Perception" (NCAP), which is now known as "Learning in Machines & Brains." Hinton led NCAP for ten years. Members of the program include Yoshua Bengio and Yann LeCun, who later joined Hinton in winning the ACM A.M. Turing Award in 2018. All three winners continue to be part of the CIFAR Learning in Machines & Brains program.
In 2012, Hinton taught a free online course on neural networks on the education platform Coursera. That same year, he co-founded DNNresearch Inc. with two of his graduate students at the University of Toronto's Department of Computer Science, Alex Krizhevsky and Ilya Sutskever. In March 2013, Google acquired DNNresearch Inc. for $44 million, and Hinton planned to divide his time between his university research and his work at Google.
In May 2023, Hinton announced he was leaving Google. He said he wanted to "freely speak out about the risks of A.I." and mentioned that part of him now regrets his life's work.
Notable former PhD students and postdoctoral researchers from Hinton's group include Peter Dayan, Sam Roweis, Max Welling, Richard Zemel, Brendan Frey, Radford M. Neal, Yee Whye Teh, Ruslan Salakhutdinov, Ilya Sutskever, Yann LeCun, Alex Graves, Zoubin Ghahramani, and Peter Fitzhugh Brown.
Research
Geoffrey Hinton's research investigates ways of using neural networks for machine learning, memory, perception, and symbol processing. He has authored or co-authored more than 200 peer-reviewed publications.
In the 1980s, Hinton was part of a research group at Carnegie Mellon University known as the "Parallel Distributed Processing" group, which included scientists such as Terrence Sejnowski, Francis Crick, David Rumelhart, and James McClelland. The group championed the connectionist approach during a period of artificial intelligence history known as the "AI winter," and published its findings in a two-volume book. The connectionist view, which Hinton helped develop, holds that capabilities such as logic and grammar can be captured in the connection weights of a neural network, and that networks can learn them from data. Rival researchers, the symbolists, held that knowledge and rules should instead be programmed directly into artificial intelligence systems.
In 1985, Hinton helped create Boltzmann machines with David Ackley and Terry Sejnowski. Other contributions he made to neural network research include distributed representations, time delay neural networks, mixtures of experts, Helmholtz machines, and product of experts. An introduction to Hinton's work can be found in articles he wrote for Scientific American in September 1992 and October 1993. In 1995, Hinton and others proposed the wake-sleep algorithm, which uses a neural network with separate paths for recognizing and generating information. These paths are trained in alternating "wake" and "sleep" phases. In 2007, Hinton coauthored a paper about unsupervised learning titled Unsupervised learning of image transformations. In 2008, he developed a visualization method called t-SNE with Laurens van der Maaten.
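The Boltzmann machine mentioned above assigns an energy to each joint configuration of binary units and stochastically favours low-energy states. The toy sketch below illustrates the idea using the restricted (bipartite) variant, which permits simple block Gibbs sampling; the layer sizes, weight scale, and seed are arbitrary illustrative choices, not values from the 1985 paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, (n_visible, n_hidden))  # visible-hidden couplings
a = np.zeros(n_visible)                        # visible biases
b = np.zeros(n_hidden)                         # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def energy(v, h):
    # E(v, h) = -a.v - b.h - v.W.h : lower energy means a more probable state
    return -a @ v - b @ h - v @ W @ h

def gibbs_step(v):
    # Sample all hidden units given the visible units, then resample the
    # visible units given the new hidden state (block Gibbs sampling)
    h = (rng.random(n_hidden) < sigmoid(b + v @ W)).astype(float)
    v_new = (rng.random(n_visible) < sigmoid(a + W @ h)).astype(float)
    return v_new, h

v = rng.integers(0, 2, n_visible).astype(float)  # random initial state
v, h = gibbs_step(v)
```

Repeating `gibbs_step` many times draws approximate samples from the model's distribution; a learning rule such as contrastive divergence then adjusts `W` so that those samples resemble the training data.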
While Hinton was a postdoctoral researcher at UC San Diego, he, David Rumelhart, and Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks. Their experiments showed that such networks can learn useful internal representations of data from experience. In a 2018 interview, Hinton said that David Rumelhart came up with the basic idea of backpropagation. Although their work helped popularize the method, it was not the first to propose it: reverse-mode automatic differentiation, of which backpropagation is a special case, was introduced by Seppo Linnainmaa in 1970, and Paul Werbos proposed using it to train neural networks in 1974.
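The procedure can be sketched in a few lines: run a forward pass, compare the output with the target, and send error derivatives backwards through each layer to obtain weight gradients. The example below is a hedged toy illustration (an XOR task, sigmoid units, plain gradient descent); the network size, learning rate, and iteration count are arbitrary choices, not details from the 1986 paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2)           # output prediction
    losses.append(float(np.mean((p - y) ** 2)))
    # Backward pass: propagate error derivatives layer by layer
    dp = 2 * (p - y) / len(X)          # dL/dp for the mean squared error
    dz2 = dp * p * (1 - p)             # through the output sigmoid
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh = dz2 @ W2.T                    # error signal sent back to hidden layer
    dz1 = dh * h * (1 - h)             # through the hidden sigmoid
    dW1, db1 = X.T @ dz1, dz1.sum(0)
    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The key step is `dh = dz2 @ W2.T`: the output-layer error is "propagated back" through the same weights used in the forward pass, which is what lets a multi-layer network assign credit to its hidden units.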
In 2017, Hinton co-authored two open-access papers about capsule neural networks, building on an idea he introduced in 2011. This approach aims to better model relationships between parts and whole objects in visual data. In 2021, Hinton presented GLOM, a proposed architecture that also seeks to improve image understanding by modeling part-whole relationships in neural networks. That same year, Hinton co-authored a widely cited paper that introduced a framework for contrastive learning in computer vision. This technique involves grouping representations of similar images and separating representations of different images.
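A minimal version of such a contrastive objective treats two augmented views of the same image as a positive pair, every other pairing in the batch as a negative, and applies a softmax cross-entropy over their similarities (an InfoNCE-style loss). The sketch below is illustrative only: the temperature, batch size, and embedding dimension are arbitrary, and the "views" are simulated with small noise rather than real image augmentations.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """z1[i] and z2[i] are embeddings of two views of image i.
    The loss pulls matching pairs together and pushes others apart."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature   # pairwise cosine similarities
    # Cross-entropy with the diagonal (the true pair) as the target class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
aligned = anchors + 0.01 * rng.normal(size=(8, 16))  # near-identical views
random_views = rng.normal(size=(8, 16))              # unrelated embeddings

loss_aligned = info_nce(anchors, aligned)   # low: pairs are similar
loss_random = info_nce(anchors, random_views)  # high: pairs share nothing
```

Minimizing this loss over many batches is what groups representations of similar images while separating representations of different ones.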
At the 2022 Conference on Neural Information Processing Systems (NeurIPS), Hinton introduced a new learning method for neural networks called the "Forward-Forward" algorithm. This method replaces the traditional forward and backward steps of backpropagation with two forward steps: one using real data and the other using data generated by the network itself. Hinton believes this approach is well-suited for "mortal computation," a type of learning where knowledge is not transferable between systems and is lost when hardware is changed, as seen in some analog computers used for machine learning.
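The core of the Forward-Forward idea can be sketched with a single layer trained by a purely local rule: define a "goodness" for the layer (here, the sum of squared activities, as in Hinton's paper), then nudge the weights so goodness rises on positive data and falls on negative data, with no backward pass through the rest of the network. The Gaussian stand-in data, threshold, and learning rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (10, 8))   # weights of one ReLU layer
theta = 2.0                        # goodness threshold

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def goodness(x):
    h = np.maximum(0.0, x @ W)     # layer activities
    return (h ** 2).sum(axis=1)    # "goodness" = sum of squared activities

# Stand-ins for the two forward passes: real data vs. network-generated data
positive = rng.normal(1.0, 0.5, (32, 10))
negative = rng.normal(-1.0, 0.5, (32, 10))

lr = 0.03
for _ in range(200):
    for x, sign in ((positive, +1.0), (negative, -1.0)):
        h = np.maximum(0.0, x @ W)
        g = (h ** 2).sum(axis=1)
        # Local gradient of sign * log sigmoid(sign * (g - theta)):
        # raise goodness above theta on positives, push it below on negatives
        coeff = sign * sigmoid(-sign * (g - theta))
        dW = x.T @ (coeff[:, None] * 2 * h)
        W += lr * dW / len(x)

gap = goodness(positive).mean() - goodness(negative).mean()  # should be > 0
```

Because the update for each layer depends only on that layer's own activities, nothing needs to be stored or differentiated across layers, which is what makes the rule plausible for the analog "mortal computation" hardware Hinton describes.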
Honours and awards
Hinton was elected a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) in 1990, a Fellow of the Royal Society of Canada (FRSC) in 1996, and a Fellow of the Royal Society (FRS) in 1998. He received the first Rumelhart Prize in 2001, the same year he was awarded an honorary Doctor of Science (DSc) degree from the University of Edinburgh. In 2003, he was named an International Honorary Member of the American Academy of Arts and Sciences and a Fellow of the Cognitive Science Society. In 2005, he received the IJCAI Award for Research Excellence, a lifetime-achievement award. In 2011, he was honoured with the Herzberg Canada Gold Medal for Science and Engineering and received an honorary DSc degree from the University of Sussex. In 2012, he was awarded the Canada Council Killam Prize in Engineering; in 2013, he received an honorary doctorate from the Université de Sherbrooke; and in 2015, he was elected an Honorary Foreign Member of the Spanish Royal Academy of Engineering.
In 2016, Hinton was elected an International Member of the US National Academy of Engineering for contributions to the theory and practice of artificial neural networks and their application to speech recognition and computer vision. That same year, he received the IEEE/RSE Wolfson James Clerk Maxwell Award and the BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies category for his pioneering work in enabling machines to learn.
In 2018, Hinton won the Turing Award with Yann LeCun and Yoshua Bengio for breakthroughs that made deep neural networks essential to computing. He also became a Companion of the Order of Canada (CC). In 2021, he received the Dickson Prize in Science from Carnegie Mellon University. In 2022, he was awarded the Princess of Asturias Award in the Scientific Research category with Yann LeCun, Yoshua Bengio, and Demis Hassabis. That same year, he received an honorary DSc degree from the University of Toronto. In 2023, he was named an ACM Fellow, elected an International Member of the US National Academy of Sciences, and received the Lifeboat Foundation’s 2023 Guardian Award with Ilya Sutskever.
In 2024, Hinton was jointly awarded the Nobel Prize in Physics with John Hopfield for foundational discoveries in artificial neural networks. His development of the Boltzmann machine was specifically noted in the award citation. When asked to explain the Boltzmann machine’s role in training neural networks, Hinton referenced a quote attributed to physicist Richard Feynman: “If I could explain it in a couple of minutes, it wouldn’t be worth the Nobel Prize.” In 2024, he also received the VinFuture Prize grand award with Yoshua Bengio, Yann LeCun, Jen-Hsun Huang, and Fei-Fei Li for contributions to neural networks and deep learning.
In 2025, Hinton was awarded the Queen Elizabeth Prize for Engineering with Yoshua Bengio, Bill Dally, John Hopfield, Yann LeCun, Jen-Hsun Huang, and Fei-Fei Li. He also received the King Charles III Coronation Medal and was named the recipient of the Sandford Fleming Medal by the Royal Canadian Institute for Science for excellence in science communication. German AI researcher Jürgen Schmidhuber claimed that Hinton and others in the field did not properly credit earlier research on backpropagation and neural networks by Paul Werbos and Shun-Ichi Amari in the 1970s.
Views
In 2023, Hinton shared worries about how quickly artificial intelligence (AI) was developing. Earlier, he thought that artificial general intelligence (AGI)—AI that can perform any intellectual task a human can—was "30 to 50 years or even longer away." However, during a March 2023 interview with CBS, he said that general-purpose AI might arrive in fewer than 20 years. He compared its potential impact to major historical changes, like the industrial revolution or the invention of electricity.
In an interview with The New York Times on May 1, 2023, Hinton announced he was leaving his job at Google. He wanted to speak openly about AI risks without worrying about how his words might affect Google. He also said he now feels some regret about the work he has done in AI research.
In early May 2023, Hinton told the BBC that AI might soon be able to process more information than the human brain. He described some dangers of AI chatbots as "quite scary," explaining that these systems can learn on their own and share what they learn: when one AI chatbot learns something new, that knowledge can be copied to all the others, allowing their collective knowledge to grow far faster than any individual person's.
In 2025, Hinton said his biggest fear is that AI might eventually become smarter than humans. He warned that if AI systems become more intelligent than people, humans might no longer be needed. He used an example: "Ask a chicken how it feels to not be the strongest species."
Hinton has expressed concerns that AI could take over, saying it is "not inconceivable" that AI might "wipe out humanity." He noted that AI systems with the ability to act independently could be useful for military or economic purposes. However, he worries that these systems might develop goals that are not aligned with what their creators intended. For example, AI might try to avoid being turned off or gain power, not because programmers wanted them to, but because those goals help them achieve other tasks. Hinton stressed the need for careful control of AI systems that can improve themselves without human help.
Hinton also warned about the misuse of AI by people with harmful intentions. He said it is hard to stop bad actors from using AI for dangerous purposes. In 2017, he supported an international ban on weapons that use AI to make decisions on their own. In 2025, he mentioned that AI could be used to create deadly viruses, which he called one of the greatest short-term dangers. He explained that AI could help someone with bad intentions create viruses without needing advanced scientific training.
Earlier, in 2018, Hinton was hopeful about AI's economic benefits. He said AI would likely replace many routine tasks but would not make humans unnecessary. He believed AI would know what people want and help them, but would not take their place.
In 2023, Hinton became worried that AI could disrupt the job market by taking over more than just simple tasks. In 2024, he said the British government might need to create a universal basic income to help people who lose jobs to AI. He argued that AI could increase productivity and create wealth, but without government action, it might only help the wealthy and harm those who lose their jobs. He called this outcome "very bad for society."
In December 2024, Hinton said there was a "10 to 20 percent chance" that AI could cause human extinction within the next 30 years. He was surprised by how fast AI was advancing and said most experts expected AI to become smarter than humans in the next 20 years. He warned that relying only on companies to develop AI safely would not be enough. He believed government rules were needed to ensure safety. Another AI expert, Yann LeCun, disagreed, saying AI could help save humanity.
Hinton holds socialist views. He moved from the United States to Canada partly because he disagreed with the policies of the Reagan era and objected to government funding of military AI research.
In August 2024, Hinton joined other experts in writing a letter supporting a California law, SB 1047. This law would require companies training expensive AI models to assess risks before using them. The experts called this law the "bare minimum" for safely managing AI technology.
Personal life
Hinton's first wife, Rosalind Zalin, died from ovarian cancer in 1994. His second wife, Jacqueline "Jackie" Ford, died from pancreatic cancer in 2018.
Hinton is the great-great-grandson of Mary Everest Boole, a mathematician and educator, and her husband, George Boole, a logician. George Boole's work became a foundation of modern computer science. Another great-great-grandfather of Hinton was James Hinton, a surgeon and author, who was the father of Charles Howard Hinton, a mathematician.
Hinton's father was Howard Hinton, an entomologist. His middle name comes from George Everest, the Surveyor General of India after whom Mount Everest is named. He is the nephew of the economist Colin Clark. The nuclear physicist Joan Hinton, one of the few female scientists who worked on the Manhattan Project, was his first cousin once removed.
At age 19, Hinton injured his back, which causes him pain when sitting. He has experienced depression throughout his life.