The AI Delusion, by Gary Smith (Oxford, 249 pp., $27.95)
Artificial intelligence may prove more dangerous as it advances, but it will never generate actual intelligence so long as the basic assumptions of the field remain unchanged. In The AI Delusion, Gary Smith reveals why, and assesses the technology’s problems from an economist’s perspective.
AI’s basic problem concerns how computers process symbols, from the series of English letters one types on a keyboard to, more fundamentally, the strings of 0’s and 1’s into which those letters are encoded. The meanings of these symbols, and indeed the very fact that they are symbols at all, are not something the computer knows. A computer no more understands what it processes than a slide rule comprehends the numbers and lines written on its surface. It’s the user of a slide rule who does the calculations, not the instrument itself. Similarly, it’s the designers and users of a computer who understand the symbols it processes. The intelligence is in them, not in the machine.
As Smith observes, a computer can be programmed to detect instances of the word “betrayal” in scanned texts, but it lacks the concept of betrayal. Therefore, if a computer scans a story about betrayal that happens not to use the actual word “betrayal,” it will fail to detect the story’s theme. And if it scans a text that contains the word but is not in fact about betrayal, the computer will erroneously classify it as a betrayal story. Because the contexts in which the word “betrayal” appears roughly correlate with the contexts in which the concept applies, the computer will loosely simulate the behavior of someone who understands the word. But, says Smith, to suppose such a simulation amounts to real intelligence is like supposing that climbing a tree amounts to flying.
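A few lines of Python, offered here as an illustration of the point rather than anything from Smith’s book, make the failure mode concrete: a bare keyword matcher misses a story that is plainly about betrayal while flagging a sentence that merely mentions the word.

```python
# A minimal sketch (my own, not Smith's) of keyword matching without the
# concept: the "classifier" knows the string "betrayal", not the idea.

def mentions_betrayal(text: str) -> bool:
    """Flag a text as 'about betrayal' by bare string matching."""
    return "betrayal" in text.lower()

# A story plainly about betrayal, without the word: missed.
print(mentions_betrayal("He sold his closest friend to the enemy for gold."))  # False

# A sentence that merely mentions the word: flagged.
print(mentions_betrayal('"Betrayal" has eight letters.'))  # True
```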
Similarly, image-recognition software is sensitive to fine-grained details of colors, shapes, and other features recurring in large samples of photos of various objects: faces, animals, vehicles, and so on. Yet it never sees something as a face, for example, because it lacks the concept of a face. It merely registers the presence or absence of certain statistically common elements. Such processing produces bizarre outcomes, from misidentifying a man merely because he’s wearing oddly colored glasses to identifying a simple series of black and yellow lines as a school bus.
It would miss the point to suggest that further software refinements can eliminate such glitches, because the glitches demonstrate that software is not doing the same kind of thing we do when we perceive objects. The software doesn’t grasp an image as a whole or conceptualize its object but merely responds to certain pixel arrangements. A human being, by contrast, perceives an image as a face—even when he can’t make out individual pixels. Sensitivity to pixel arrangements no more amounts to visual perception than detecting the word “betrayal” amounts to possessing the concept of betrayal.
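In the same spirit, here is a toy sketch, invented for this review rather than taken from the book, of concept-free “recognition” along the lines of Smith’s school-bus example. The feature rule and threshold are made up; the point is that a classifier keying on statistically common features will confidently call a bare pattern of yellow and black stripes a school bus.

```python
# A toy illustration (the rule and threshold are hypothetical): "detect"
# a school bus purely from a stripe-like pixel statistic, with no
# concept of a bus at all.
import numpy as np

def looks_like_school_bus(image: np.ndarray) -> bool:
    """Score yellow rows sitting directly above black rows."""
    yellow = (image[..., 0] > 200) & (image[..., 1] > 200) & (image[..., 2] < 80)
    black = image.sum(axis=-1) < 60
    stripe_score = np.mean(yellow[:-1] & black[1:])  # yellow-over-black pairs
    return stripe_score > 0.1

# A plain array of alternating yellow and black stripes, no bus anywhere:
stripes = np.zeros((10, 10, 3), dtype=np.uint8)
stripes[0::2] = [255, 255, 0]  # yellow rows on a black field
print(looks_like_school_bus(stripes))  # True: the stripes alone suffice
```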
The implications of AI’s shortcomings, Smith shows, are not merely philosophical. Failure to see how computers merely manipulate symbols without understanding them can have serious economic, medical, and other practical consequences. Most examples involve data mining—poring over vast bodies of information to detect trends, patterns, and correlations. The speed of modern computers greatly facilitates this practice. But as Smith maintains, the conclusions that result are often fallacious, and the prestige that computers have lent data mining only makes it easier to commit the fallacies.
In any vast body of data, many statistical correlations are bound to exist due to coincidence—they don’t merit special attention. For example, a correlation may exist between changes in temperature in an obscure Australian town and price changes in the U.S. stock market. A person would know that the two events have no connection. Human beings, unlike computers, can conceptualize the phenomena in question, and thereby judge that—given the nature of Australian weather patterns and of U.S. stock prices—no plausible causal link connects them. However, because a computer merely processes symbols without having the concepts we associate with them, it can’t do this. Accordingly, it cannot distinguish bogus correlations from significant ones.
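How easily such coincidences arise can be shown with made-up random data; the following sketch is my illustration, not an example from the book. Pairs of independent random walks, standing in for, say, small-town temperatures and stock prices, routinely exhibit large correlations despite having no causal connection whatsoever.

```python
# A minimal sketch on pure noise: independent random walks often look
# strongly correlated, the classic "spurious correlation" trap.
import numpy as np

rng = np.random.default_rng(0)
for trial in range(5):
    temperature = np.cumsum(rng.normal(size=500))  # independent random walk
    stock_price = np.cumsum(rng.normal(size=500))  # independent random walk
    r = np.corrcoef(temperature, stock_price)[0, 1]
    print(f"trial {trial}: correlation = {r:+.2f}")
```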
If the result of applying a data-mining algorithm is as silly as this example, any human researcher would know to dismiss it, but many less-silly results still reflect coincidence rather than genuine causal connection. Ransacking a medical database, for example, might turn up a statistical correlation between a certain treatment and recovery. Mining economic data is bound to reveal correlations between various economic variables. Most of these correlations will also be merely coincidental, but researchers, too easily impressed by the size of the data set and the computing power that produced the result, are fooled as they rush to discover and publish interesting findings. This reliance on computerized data mining is precisely why the results of so many scientific studies turn out to be unrepeatable and so much investing advice proves bad.
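The replication problem, too, can be demonstrated on pure noise; again this is my own sketch of the standard multiple-comparisons point, not Smith’s code. Test enough random variables against a random outcome and roughly five percent will clear the conventional p < 0.05 bar by chance alone.

```python
# A minimal sketch: mine 1,000 random "treatments" against a random
# outcome and count how many look statistically "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
outcome = rng.normal(size=200)             # e.g. recovery rates: pure noise
predictors = rng.normal(size=(1000, 200))  # 1,000 unrelated "treatments"

hits = sum(stats.pearsonr(x, outcome)[1] < 0.05 for x in predictors)
print(f"{hits} of 1000 random predictors look 'significant'")  # expect ~50
```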
Despite the technical nature of its subject, The AI Delusion piles up examples in a lucid and accessible way. Smith shows that the real danger posed by so-called artificial intelligence is not that machines might become smarter than we are but that they will always remain infinitely dumber.
Edward Feser’s most recent book is Aristotle’s Revenge: The Metaphysical Foundations of Physical and Biological Science.