Eric Horvitz

Eric Joel Horvitz is an American computer scientist and a Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. Before this role, he was director of Microsoft Research Labs, which includes research centers in Redmond, Washington; Cambridge, Massachusetts; New York, New York; Montreal, Canada; Cambridge, United Kingdom; and Bangalore, India.

In 2013, Horvitz was chosen as a member of the National Academy of Engineering for his work on ways to help computers make decisions when they have limited information or resources.

Biography

Horvitz earned his Ph.D. and M.D. from Stanford University. His Ph.D. thesis, titled Computation and Action Under Bounded Resources, and later research introduced models of bounded rationality based on probability and decision theory. He completed his doctoral studies with the guidance of advisors Ronald A. Howard, George B. Dantzig, Edward H. Shortliffe, and Patrick Suppes.

He is currently the Chief Scientific Officer of Microsoft. He has been elected as a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the National Academy of Engineering (NAE), the American Academy of Arts and Sciences, and the American Association for the Advancement of Science (AAAS).

In 2014, he was elected an ACM Fellow for "contributions to artificial intelligence and human-computer interaction." In 2013, he was inducted into the ACM CHI Academy for "research at the intersection of human-computer interaction and artificial intelligence." He was elected to the American Philosophical Society in 2018.

In 2015, he received the AAAI Feigenbaum Prize, an award given every two years for long-term, impactful contributions to artificial intelligence. The prize recognized his computational models of perception, reflection, and action, and their application to systems that make time-critical decisions in areas such as healthcare, traffic, and information management. In the same year, he was also awarded the ACM-AAAI Allen Newell Award for "contributions to artificial intelligence and human-computer interaction through work in computing and decision sciences, including developing principles and models of sensing, reflection, and rational action."

He serves on the President's Council of Advisors on Science and Technology (PCAST), the Scientific Advisory Committee of the Allen Institute for Artificial Intelligence (AI2), and the Computer Science and Telecommunications Board (CSTB) of the US National Academies. He has previously held leadership roles, including president of the Association for the Advancement of AI (AAAI), member of the NSF Computer & Information Science & Engineering (CISE) Advisory Board, member of the Computing Community Consortium (CCC) council, chair of the Section on Information, Computing, and Communications of the American Association for the Advancement of Science (AAAS), member of the Board of Regents of the US National Library of Medicine (NLM), and member of the National Security Commission on Artificial Intelligence (NSCAI), which released its final report in March 2021.

Work

Horvitz's research focuses on challenges in creating systems that can sense, learn, and think. His work includes improvements in how machines learn, find information, interact with people, study biology, and support online shopping.

Horvitz helped introduce ideas from probability and decision theory into artificial intelligence. His work raised the standing of artificial intelligence in other areas of computer science and engineering, influencing fields such as human-computer interaction and operating systems. He connected artificial intelligence with decision science. For example, he introduced bounded optimality, a framework for making the best decisions possible when time, information, and computing power are limited. This idea has also influenced studies of how people think and behave.

He studied how probability and measures of value can help machines make decisions. His methods address solving streams of problems over time in changing environments. He also used probability and machine learning to guide the solution of hard computational problems, including automated theorem proving. He introduced the anytime algorithm, a method in which a computer improves its answer gradually, so that a usable result is available whenever it runs out of time or resources.

He created long-term challenges for artificial intelligence and promoted the idea of open-world AI, where machines can understand and perform well in new situations they have never seen before.

He studied how humans and machines can work together. He developed rules for using learning and decision-making to decide whether machines or humans should take the lead in solving problems. He also created ways for machines to learn when to ask humans for help and how to combine human and machine skills. In human-centered AI, he created tools to help people make decisions quickly and made statistical results easier to understand. He also studied how people pay attention to computers and used learning to predict how interruptions affect users. His work on modeling human surprise was highlighted as a breakthrough by MIT Technology Review.
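The decision about whether a machine should act on its own or ask a human for help can be framed in expected-utility terms. The sketch below is a hypothetical illustration of that style of reasoning, not code from Horvitz's systems; all names and utility values are invented for the example.

```python
def should_defer_to_human(p_correct, utility_right, utility_wrong, human_cost):
    """Route a task to a human when asking has higher expected utility.

    p_correct:     machine's estimated probability its answer is right
    utility_right: utility of a correct outcome
    utility_wrong: utility (typically negative) of a wrong outcome
    human_cost:    cost of interrupting and consulting the human,
                   who is assumed here to answer correctly

    Illustrative model only; real systems also weigh attention,
    timing, and the human's own error rate.
    """
    eu_machine = p_correct * utility_right + (1 - p_correct) * utility_wrong
    eu_human = utility_right - human_cost
    return eu_human > eu_machine

# A low-confidence prediction is routed to a person,
# while a high-confidence one is handled autonomously.
low = should_defer_to_human(0.55, utility_right=1.0, utility_wrong=-1.0, human_cost=0.3)
high = should_defer_to_human(0.95, utility_right=1.0, utility_wrong=-1.0, human_cost=0.3)
```

With these numbers, the machine defers at 55% confidence but acts alone at 95%, since the expected loss from a likely-correct autonomous answer no longer justifies the interruption cost.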

He explored ways to use AI to help people with software and daily tasks.

He contributed to multimodal interaction, which uses different ways like speech and movement to interact with computers. In 2015, he won the ACM ICMI Sustained Accomplishment Award for his work in this area. His research included how systems can use physical details from real-world settings and talk with multiple people at once.

He co-created methods to protect privacy using probability, including a system where people share information to help others, called community sensing, and methods that protect data based on risk.

He is Microsoft's most prolific inventor.

He led projects to use AI in computers, such as using learning to manage memory in Windows, predict web traffic, improve graphics, and search the internet. He also worked on using AI to find errors in software.

Horvitz has discussed artificial intelligence in media outlets, including NPR and the show Charlie Rose. His online talks include both technical explanations and talks for general audiences, such as a TEDx talk titled "Making Friends with Artificial Intelligence." His research has been reported in The New York Times and MIT Technology Review. He has also spoken to the US Senate about the progress, opportunities, and challenges of AI.

AI and society

He has discussed technical and societal challenges and opportunities in bringing AI technologies into the real world, including how AI can be deployed safely, how AI systems can cause unintended problems, and how they can be used in harmful ways. He has spoken about the risks of using AI in military settings. He and Thomas G. Dietterich encouraged research on AI alignment, stating that AI systems "must understand what people intend rather than following commands literally."

He has called for action to address risks to civil liberties caused by government use of data in AI systems. He and privacy expert Deirdre Mulligan emphasized the need to balance privacy concerns with the benefits of using data for societal good.

He has presented on the dangers of AI-generated deepfakes and helped develop media provenance technologies that use cryptographic signatures to confirm the source of digital content and the history of changes made to it.

He served as President of the AAAI from 2007 to 2009. During his presidency, he organized and co-led the Asilomar AI study, which led to a meeting of AI scientists in February 2009. The study examined the progress of AI, reviewed concerns about the direction of AI development, including the risk of losing control over AI systems, and explored ways to reduce risks and improve long-term outcomes. This was the first time AI scientists gathered to discuss concerns about superintelligence and the loss of control over AI, and it attracted public attention.

In reports about the Asilomar study, he stated that scientists must study and address ideas about superintelligent machines and concerns about AI systems becoming uncontrollable. In a later NPR interview, he said that investing in research about superintelligence would be valuable, even if the chance of losing control over AI seemed low, because the consequences could be severe.

In 2014, Horvitz created and funded with his wife the One Hundred Year Study of Artificial Intelligence (AI100) at Stanford University. In 2016, the AI Index was launched as part of the AI100 project.

According to Horvitz, the funding for AI100, which may grow in the future, is expected to support the study for 100 years. A Stanford press release stated that over the next century, groups of experts will "study and predict how artificial intelligence will affect how people work, live, and play." A planning document for the study lists 18 topics for discussion, including law, ethics, the economy, war, and crime. Topics also include risks such as AI misuse that could threaten democracy and freedom, as well as ways to address the possibility of superintelligence and loss of control over AI.

The AI100 study is managed by a Standing Committee. This committee creates questions and themes and organizes a Study Panel every five years. The Study Panel publishes a report that evaluates the progress of AI technologies, challenges, and opportunities related to AI's impact on people and society.

The 2015 Study Panel of AI100, led by Peter Stone, released a report in September 2016 titled "Artificial Intelligence and Life in 2030." The panel recommended increasing public and private funding for AI, suggested improving AI knowledge in government, and advised against broad government rules. Panel chair Peter Stone noted that AI will likely support, rather than replace, human workers, and may create new jobs in technology. While focusing on the next 15 years, the report also addressed concerns about superintelligent robots, stating, "Unlike in movies, there is no race of superhuman robots on the horizon or likely in the future." Stone explained that the report intentionally avoided discussing this idea.

The report from the second phase of the AI100 study, led by Michael Littman, was published in 2021.

He co-founded and has led the Partnership on AI, a non-profit group that includes companies like Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft, along with experts from civil society, universities, and non-profit research organizations. The group's website highlights projects such as studies on risk scores in criminal justice, facial recognition systems, AI and the economy, AI safety, AI and media accuracy, and documentation of AI systems.

He created and leads the Aether Committee at Microsoft, the company's internal group focused on responsibly developing and using AI technologies. He reported that the Aether Committee has provided guidance that has influenced Microsoft's AI efforts. In April 2020, Microsoft shared content about principles, guidelines, and tools developed by the Aether Committee and its teams, including those focused on AI reliability and safety, bias and fairness, clarity and explanation, and collaboration between humans and AI.

Publications

  • Horvitz, E. (December 1990). Computation and Action Under Bounded Resources (PDF). Dissertation. Stanford, CA: Stanford University.
  • Horvitz, E. (July 7, 2017). "AI, people, and society." Science, 357(6346): 7. Bibcode: 2017Sci...357....7H. doi: 10.1126/science.aao2466. PMID 28684472.
  • Gershman, S.; Horvitz, E.; Tenenbaum, J. (July 17, 2015). "Computational rationality: A converging paradigm for intelligence in brains, minds, and machines." Science, 349(6245): 273–278. Bibcode: 2015Sci...349..273G. doi: 10.1126/science.aac6076. PMID 26185246. S2CID 14818619.
  • Kamar, E.; Hacker, S.; Horvitz, E. (June 2012). "Combining human and machine intelligence in large-scale crowdsourcing." Proceedings of the Eleventh International Conference on Autonomous Agents and Multiagent Systems – Volume 1 (PDF). Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems. pp. 467–474. ISBN 978-0-9817381-1-6.
  • Horvitz, E. (July 2008). "Artificial Intelligence in the Open World." Opening Session of the Annual Meeting, Association for the Advancement of Artificial Intelligence (Lecture). Chicago, IL.
  • Horvitz, E.; Kadie, C.; Paek, T.; Hovel, D. (March 2003). "Models of Attention in Computing and Communication: From Principles to Applications" (PDF). Communications of the ACM, 46(3): 52–59. doi: 10.1145/636772.636798. S2CID 2584780.
  • Horvitz, E. (February 2001). "Principles and Applications of Continual Computation" (PDF). Artificial Intelligence, 126(1–2): 159–196. CiteSeerX 10.1.1.476.5653. doi: 10.1016/S0004-3702(00)00082-5.
  • Horvitz, E. (May 1999). "Principles of mixed-initiative user interfaces" (PDF). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '99). New York, NY: ACM. pp. 159–166. doi: 10.1145/302979.303030. ISBN 0-201-48559-1. S2CID 8943607.
  • Horvitz, E.; Barry, M. (August 1995). "Display of information for time-critical decision making" (PDF). UAI '95: Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence. San Francisco, CA: Morgan Kaufmann Publishers Inc. pp. 296–305. ISBN 1-55860-385-9.
  • Heckerman, D.; Horvitz, E.; Nathwani, B. (June 1992). "Toward Normative Expert Systems: Part I, the Pathfinder Project" (PDF). Methods of Information in Medicine, 31(2): 90–105. doi: 10.1055/s-0038-1634867. PMID 1635470. S2CID 14672300.
  • Henrion, M.; Breese, J.; Horvitz, E. (1991). "Decision analysis and expert systems." AI Magazine, 12(4): 64–91. Menlo Park, CA: American Association for Artificial Intelligence.
