The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik Larson. Erik is an entrepreneur and former research scientist at the University of Texas at Austin, where he focused on machine learning and natural language processing.
He founded two DARPA-funded AI startups, works on core issues in natural language processing and machine learning, and has written for The Atlantic.
Artificial Intelligence seems to be the buzzword of the last twenty years, for better or for worse. For some it is the savior of humanity. For others, the spawn of the devil.
So, does AI actually deliver knowledge systems that surpass human capabilities? Erik makes a compelling case for quite the opposite.
The real challenge Erik raises is that so many noted authors on AI, with all their books promising AI’s coming revolution, have missed their target dates. Those experts made bold predictions about when systems would surpass all human knowledge, and about the downstream effects AI would have on both markets and society. So why, as of 2022, have they all missed the mark?
The book is broken into three parts: The Simplified World, The Problem of Inference, and The Future of the Myth. The opening chapters, from Chapter Two, Turing at Bletchley, to Chapter Five, Natural Language Understanding, are certainly well researched and could serve as mandatory reading for every student. In fewer than seventy pages, Erik provides a grounded account of the early foundations that brought AI forward.
Alan Turing at Bletchley
The Dartmouth workshop of 1956 marks the birth of Artificial Intelligence. Add luminaries such as Claude Shannon of Bell Labs, Marvin Minsky of Harvard, Herbert Simon of Carnegie Mellon, John McCarthy, Harvard psychologist George Miller, and John Nash.
Erik’s documentation of Alan Turing at Bletchley Park is direct and honest:
The problem-solving view of intelligence helps explain the production of invariably narrow applications of AI throughout its history. Game playing, for instance, has been a source of constant inspiration for the development of advanced AI techniques, but games are simplifications of life that reward simplified views of intelligence. A chess program plays chess, but does rather poorly driving a car. IBM’s Watson system plays Jeopardy!, but not chess or Go, and massive programming or “porting” efforts are required to use the Watson platform to perform other data mining and natural language processing functions, as with recent (and largely unsuccessful) forays into health care.
p. 28
Erik documents how this narrowness persists even today. At the same time, the processing power needed to achieve AI’s lofty goals has only recently become available.
Erik’s message across Parts II and III is that building AI systems to mimic and surpass human knowledge is actually much further away than what noted AI leaders have predicted.
Within Part II, The Problem of Inference, Erik challenges the assumption that building AI systems will result in true human knowledge. There is so much to unpack in this book; it is a real pleasure to read.
AI ‘winters’ defined
Machine translation was stuck, in other words, with results that were a far cry from fully automatic, high-quality translations (and that remain so today, although the quality has improved). Thus the pattern continued. AI had oversold itself, and in the wake of the failure of the translation research to live up to promises, the NRC pulled its funding after investing over twenty million dollars into research and development, an enormous sum at the time. In the wake of the debacle, AI researchers lost their jobs, careers were destroyed, and AI as a discipline found itself back at the drawing board.
Attempts to tame or solve the “commonsense knowledge problem” dominated efforts in AI research in the 1970s and 1980s. By the early 1990s, however, AI still had no fresh approaches or answers to its core scientific—and philosophical—problem. Japan had invested millions
in its high-profile Fifth Generation project aimed at achieving success in robotics, and Japan too had failed, rather spectacularly. By the mid-1990s, AI found itself again in a “winter”: no confidence in the promises of AI researchers, no results to prove naysayers wrong, and no funding. Then came the web.
pp. 54–55
Of all the books I have read about AI, none really presents in-depth reasons for those AI winters of the 1970s and 1980s. It was welcome to see Erik address this front and center. For many AI supporters, this may offer fresh insight into those winter periods.
The rebirth of AI
So, the birth of the web was actually the rebirth of AI. The AI winters were, in part, the result of a lack of data at scale. Enter the web and very large datasets: millions of new web users generating the data that statistical and pattern-recognition approaches feed on. This is what AI was starving for: larger and larger datasets.
Limits to AI
It is impressive to see a section dedicated to Daniel Kahneman’s Thinking, Fast and Slow and the ideas surrounding how AI thinks. Erik also compliments Melanie Mitchell’s excellent book Artificial Intelligence. Readers new to AI will find much to digest across these books as well.
In conclusion, Erik’s book takes readers on a much deeper and more honest journey through AI’s history, showing how the field must change if it is ever to truly surpass human knowledge. This book deserves to be more widely read. The impact of AI is not only shaping society; AI may fundamentally alter key elements across the globe in the near future.