
Latest Read: The AI Delusion

The AI Delusion by Gary Smith. Gary Smith holds a Ph.D. in Economics and is a professor of Economics at Pomona College. He was a 1967 Woodrow Wilson Fellow and a 1968 Yale University Fellow. He was awarded a Stanford Research Institute grant in 1978 and an NSF grant for an economics computer lab beginning in 1995.


Gary builds a solid case that artificial intelligence is not perfect. On the contrary, it is quite far from perfect. As a result, we should be aware of how much blind faith is placed in so many artificial intelligence services. We do this at our own peril.

IBM’s Watson is an example. Gary explains why Watson, a question-answering computer system capable of answering questions posed in natural language, is a bad match for healthcare but can be a wonderful solution in other markets.

The AI Delusion also reveals how many times artificial intelligence systems have simply failed, and these failures carry important lessons. At the same time, Gary acknowledges that today’s machine learning has solved problems thought impossible just twenty years ago.

For example, the Obama campaigns deployed data analytics that were critical to his 2008 win and his 2012 re-election. Yet the Hillary Clinton campaign followed data insights from a machine learning system named Ada. This big data system advised against campaigning in Michigan and other states. This so upset former President Bill Clinton that he tried to persuade the campaign to change strategy, but Ada’s recommendations prevailed. A powerful example of big data going off the rails.

Gary acknowledges that machines may someday have the ability to think; today, however, many are misled by deep neural networks. On the surface, many associate the brain’s neurons with artificial intelligence’s neural networks, but neural networks do not mimic the brain. They are indeed powerful programs that execute complex mathematical computations. However, today’s neural networks do not understand words or images.

Gary provides a very valuable lesson regarding the myth-like acceptance of AI results. Enter the Texas sharpshooter fallacy. He addresses two endemic problems with the popular belief of “data first, theory later” that make up the fallacy. I found similar lessons in A Brief History of Artificial Intelligence.

Texas sharpshooter part 1

First, a self-proclaimed marksman covers a wall with targets and then fires. Inevitably, he hits a target, which he displays proudly without mentioning all the missed targets. Because he is certain to hit a target, the fact that he did so proves nothing at all.

If you torture the data long enough, it will confess.

Economist Ronald Coase
Texas sharpshooter part 2

Second, a hapless cowboy shoots a bullet at a blank wall. He then draws a bullseye around the bullet hole, which again proves nothing because there will always be a hole to draw a circle around. This second variant is known as the Feynman Trap.
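The “data first, theory later” trap behind both versions of the fallacy is easy to demonstrate with a small simulation (a minimal sketch; the scenario, variable names, and sample sizes are my own, not from the book). An outcome that is pure noise is compared against hundreds of predictors that are also pure noise; reporting only the best match is firing first and drawing the bullseye afterward.

```python
import random

random.seed(1)

# Pure-noise setup: an outcome with no real relationship to anything,
# and many candidate predictors that are equally meaningless.
n_observations = 30
n_predictors = 200

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

outcome = [random.gauss(0, 1) for _ in range(n_observations)]
predictors = [[random.gauss(0, 1) for _ in range(n_observations)]
              for _ in range(n_predictors)]

# Fire at the wall, then draw the bullseye: keep only the predictor
# with the strongest correlation and never mention the misses.
best = max(abs(correlation(p, outcome)) for p in predictors)
print(f"best |correlation| among {n_predictors} noise predictors: {best:.2f}")
```

With enough candidate predictors, the winning correlation looks impressive even though every variable is random noise, which is exactly why “the data confessed” proves nothing on its own.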

Can artificial intelligence write?

How effective is Google Translate? On the surface, many rely upon Google’s artificial intelligence service. However, in testing, Gary clearly documents how a machine cannot understand language, providing multiple examples of translation errors when moving literature from English to French and back, and then from English to German and back. Gary also illustrates how AI cannot read, nor can it (yet) understand sentences with double nouns.

Will artificial intelligence (eventually) be able to read and write?

There is absolutely no fundamental philosophical reason that machines could not, in principle, someday think, be creative, be funny, be nostalgic, be excited, be frightened, be ecstatic, be resigned, be filled with hope, and of course, as a corollary, be able to translate magnificently between languages. No, there is absolutely no fundamental philosophical reason that machines might not someday succeed smashingly in translating jokes, puns, comic books, screenplays, novels, poems, and of course essays just like this one. But all that will come about only when machines are just as alive and just as filled with ideas, emotions, and experiences as human beings are. And that is not around the corner.
p. 44

I found five key lessons that everyone will benefit from learning from Gary, regardless of their job or market:

  1. Symbols without context (chapter three)
  2. Correlation is not causation (chapter four)
  3. Patterns in randomness (chapter five)
  4. Trust in the black box (chapter five)
  5. The Texas sharpshooter fallacy (chapter six)

Gary has also written a number of other books, including:
Standard Deviations: Flawed Assumptions, Tortured Data, and Other Ways to Lie with Statistics (London Times Book of the Week November 2014), The 9 Pitfalls of Data Science (Winner of the 2020 Prose Award for Popular Science & Popular Mathematics), and The Phantom Pattern Problem: The Mirage of Big Data.

In conclusion, many readers will benefit from understanding the myths surrounding artificial intelligence. This is certainly a worthwhile read to cut through the hype surrounding AI.


Claremont McKenna College | The AI Delusion