Tuesday, March 30, 2021

Why Computers Won't Make Themselves Smarter

In the eleventh century, St. Anselm of Canterbury proposed an argument for the existence of God that went roughly like this: God is, by definition, the greatest being that we can imagine; a God that doesn’t exist is clearly not as great as a God that does exist; ergo, God must exist. This is known as the ontological argument, and there are enough people who find it convincing that it’s still being discussed, nearly a thousand years later. Some critics of the ontological argument contend that it essentially defines a being into existence, and that that is not how definitions work.

God isn’t the only being that people have tried to argue into existence. “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever,” the mathematician Irving John Good wrote, in 1965:

Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

The idea of an intelligence explosion was revived in 1993, by the author and computer scientist Vernor Vinge, who called it “the singularity,” and the idea has since achieved some popularity among technologists and philosophers. Books such as Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies,” Max Tegmark’s “Life 3.0: Being Human in the Age of Artificial Intelligence,” and Stuart Russell’s “Human Compatible: Artificial Intelligence and the Problem of Control” all describe scenarios of “recursive self-improvement,” in which an artificial-intelligence program repeatedly designs an improved version of itself.

I believe that Good’s and Anselm’s arguments have something in common, which is that, in both cases, a lot of the work is being done by the initial definitions. These definitions seem superficially reasonable, which is why they are generally accepted at face value, but they deserve closer examination. I think that the more we scrutinize the implicit assumptions of Good’s argument, the less plausible the idea of an intelligence explosion becomes.

What might recursive self-improvement look like for human beings? For the sake of convenience, we’ll describe human intelligence in terms of I.Q., not as an endorsement of I.Q. testing but because I.Q. represents the idea that intelligence can be usefully captured by a single number, an idea that is itself one of the assumptions made by proponents of an intelligence explosion. In that case, recursive self-improvement would look like this: Once there’s a person with an I.Q. of, say, 300, one of the problems this person can solve is how to convert a person with an I.Q. of 300 into a person with an I.Q. of 350. And then a person with an I.Q. of 350 will be able to solve the more difficult problem of converting a person with an I.Q. of 350 into a person with an I.Q. of 400. And so forth.

Do we have any reason to think that this is the way intelligence works? I don’t believe that we do. For example, there are plenty of people who have I.Q.s of 130, and there’s a smaller number of people who have I.Q.s of 160. None of them have been able to raise the I.Q. of someone from 70 to 100, which is presumably an easier task. None of them can even increase the intelligence of animals, whose intelligence is considered to be too low to be measured by I.Q. tests. If increasing someone’s I.Q. were an activity like solving a set of math puzzles, we ought to see successful examples of it at the low end, where the problems are easier to solve. But we don’t see strong evidence of that happening.

Maybe it’s because we’re currently too far from the necessary threshold; maybe an I.Q. of 300 is the minimum needed to increase anyone’s intelligence at all. But, even if that were true, we still don’t have good reason to believe that endless recursive self-improvement is likely. For example, it’s entirely possible that the best that a person with an I.Q. of 300 can do is increase another person’s I.Q. to 200. That would allow one person with an I.Q. of 300 to grant everyone around them an I.Q. of 200, which frankly would be an amazing accomplishment. But that would still leave us at a plateau; there would be no recursive self-improvement and no intelligence explosion.
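
To make the shape of that argument concrete, here is a toy numerical sketch, with both "improvement functions" invented purely for illustration, not as claims about how intelligence actually scales. It iterates a hypothetical function that maps the I.Q. of the improver to the I.Q. of the best mind it can produce; whether the sequence explodes or plateaus depends entirely on the assumed shape of that function, which is exactly the point.

```python
# Toy model: iterate a hypothetical "improvement function" f that maps the
# intelligence of the improver to the intelligence of the best mind it can
# build. The shape of f is an assumption, not a fact.

def explosive(iq):
    # Assumes each generation can always add another 50 points.
    return iq + 50

def plateauing(iq):
    # Assumes diminishing returns: gains shrink as you approach a ceiling of 400.
    return iq + 0.5 * (400 - iq)

def iterate(f, iq=300, generations=10):
    trajectory = [iq]
    for _ in range(generations):
        iq = f(iq)
        trajectory.append(round(iq, 1))
    return trajectory

print("explosive: ", iterate(explosive))    # 300, 350, 400, 450, ... without bound
print("plateauing:", iterate(plateauing))   # 300, 350.0, 375.0, 387.5, ... -> 400
```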

The I.B.M. research engineer Emerson Pugh is credited with saying “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.” This statement makes intuitive sense, but, more importantly, we can point to a concrete example in support of it: the microscopic roundworm C. elegans. It is probably one of the best-understood organisms in history; scientists have sequenced its genome and know the lineage of cell divisions that give rise to each of the nine hundred and fifty-nine somatic cells in its body, and have mapped every connection between its three hundred and two neurons. But they still don’t completely understand its behavior. The human brain is estimated to have eighty-six billion neurons on average, and we will probably need most of them to comprehend what’s going on in C. elegans’s three hundred and two; this ratio doesn’t bode well for our prospects of understanding what’s going on within ourselves.

Some proponents of an intelligence explosion argue that it’s possible to increase a system’s intelligence without fully understanding how the system works. They imply that intelligent systems, such as the human brain or an A.I. program, have one or more hidden “intelligence knobs,” and that we only need to be smart enough to find the knobs. I’m not sure that we currently have many good candidates for these knobs, so it’s hard to evaluate the reasonableness of this idea. Perhaps the most commonly suggested way to “turn up” artificial intelligence is to increase the speed of the hardware on which a program runs. Some have said that, once we create software that is as intelligent as a human being, running the software on a faster computer will effectively create superhuman intelligence. Would this lead to an intelligence explosion?

Let’s imagine that we have an A.I. program that is just as intelligent and capable as the average human computer programmer. Now suppose that we increase its computer’s speed a hundred times and let the program run for a year. That’d be the equivalent of locking an average human being in a room for a hundred years, with nothing to do except work on an assigned programming task. Many human beings would consider this a hellish prison sentence, but, for the purposes of this scenario, let’s imagine that the A.I. doesn’t feel the same way. We’ll assume that the A.I. has all the desirable properties of a human being but doesn’t possess any of the other properties that would act as obstacles in this scenario, such as a need for novelty or a desire to make one’s own choices. (It’s not clear to me that this is a reasonable assumption, but we can leave that question for another time.)

So now we’ve got a human-equivalent A.I. that is spending a hundred person-years on a single task. What kind of results can we expect it to achieve? Suppose this A.I. could write and debug a thousand lines of code per day, which is a prodigious level of productivity. At that rate, a century would be almost enough time for it to single-handedly write Windows XP, which supposedly consisted of forty-five million lines of code. That’s an impressive accomplishment, but a far cry from its being able to write an A.I. more intelligent than itself. Creating a smarter A.I. requires more than the ability to write good code; it would require a major breakthrough in A.I. research, and that’s not something an average computer programmer is guaranteed to achieve, no matter how much time you give them.
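
The arithmetic behind that comparison is easy to check; the snippet below simply redoes it with the same figures used above (a thousand lines per day, a hundred subjective years, and the commonly cited forty-five million lines for Windows XP).

```python
# Back-of-the-envelope check of the figures used above.
lines_per_day = 1_000          # assumed prodigious productivity
subjective_years = 100         # one real year at a 100x hardware speedup
days_per_year = 365

total_lines = lines_per_day * days_per_year * subjective_years
windows_xp_lines = 45_000_000  # commonly cited figure for Windows XP

print(total_lines)                       # 36,500,000
print(total_lines / windows_xp_lines)    # ~0.81 -- "almost enough time"
```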

When you’re developing software, you typically use a program known as a compiler. The compiler takes the source code you’ve written, in a language such as C, and translates it into an executable program: a file consisting of machine code that the computer understands. Suppose you’re not happy with the C compiler you’re using—call it CompilerZero. CompilerZero takes a long time to process your source code, and the programs it generates take a long time to run. You’re confident that you can do better, so you write a new C compiler, one that generates more efficient machine code; this new one is known as an optimizing compiler.

You’ve written your optimizing compiler in C, so you can use CompilerZero to translate your source code into an executable program. Call this program CompilerOne. Thanks to your ingenuity, CompilerOne now generates programs that run more quickly. But CompilerOne itself still takes a long time to run, because it’s a product of CompilerZero. What can you do?

You can use CompilerOne to compile itself. You feed CompilerOne its own source code, and it generates a new executable file consisting of more efficient machine code. Call this CompilerTwo. CompilerTwo also generates programs that run very quickly, but it has the added advantage of running very quickly itself. Congratulations—you have written a self-improving computer program.
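
As a sketch of the process just described, the steps might look something like this, assuming a hypothetical existing compiler binary called compiler_zero and a source file new_compiler.c containing your optimizing compiler (both names are invented for illustration):

```python
# A sketch of bootstrapping, using made-up file names. "compiler_zero" is the
# slow existing compiler; "new_compiler.c" is the source of your optimizing
# compiler, written in C.
import subprocess

# Stage 1: use CompilerZero to build your optimizing compiler.
# CompilerOne emits fast code but, having been built by CompilerZero, runs slowly.
subprocess.run(["./compiler_zero", "new_compiler.c", "-o", "compiler_one"], check=True)

# Stage 2: use CompilerOne to compile its own source code.
# CompilerTwo emits the same fast code and also runs fast itself.
subprocess.run(["./compiler_one", "new_compiler.c", "-o", "compiler_two"], check=True)
```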

But this is as far as it goes. If you feed the same source code into CompilerTwo, all it does is generate another copy of CompilerTwo. It cannot create a CompilerThree and initiate an escalating series of ever-better compilers. If you want a compiler that generates programs that run insanely fast, you will have to look elsewhere to get it.
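
To see that the process has reached a fixed point, you can continue the sketch above: run the self-compilation one more time and compare the results, assuming our hypothetical compiler is deterministic. (Real bootstrapped compilers such as GCC do something similar when they compare their later build stages.)

```python
# Continuing the sketch: compile the same source with CompilerTwo and check
# that the result is byte-for-byte identical -- a fixed point, not an explosion.
import filecmp
import subprocess

subprocess.run(["./compiler_two", "new_compiler.c", "-o", "compiler_three"], check=True)
print(filecmp.cmp("compiler_two", "compiler_three", shallow=False))  # True: no further gain
```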

The technique of having a compiler compile itself is known as bootstrapping, and it’s been employed since the nineteen-sixties. Optimizing compilers have come a long way since then, so the differences between a CompilerZero and a CompilerTwo can be much bigger than they used to be, but all of that progress was achieved by human programmers rather than by compilers improving themselves. And, although compilers are very different from artificial-intelligence programs, they offer a useful precedent for thinking about the idea of an intelligence explosion, because they are computer programs that generate other computer programs, and because when they do so optimization is often a priority.

The more you know about the intended use of a program, the better you can optimize its code. Human programmers sometimes hand-optimize sections of a program, meaning that they specify the machine instructions directly; they can write machine code that’s more efficient than what a compiler generates, because they know more about what the program is supposed to do than the compiler does. The compilers that do the best job of optimization are compilers for what are known as domain-specific languages, which are designed for writing narrow categories of programs. For example, there’s a programming language called Halide designed exclusively for writing image-processing programs. Because the intended use of these programs is so specific, a Halide compiler can generate code as good as or better than what a human programmer can write. But a Halide compiler cannot compile itself, because a language optimized for image processing doesn’t have all the features needed to write a compiler. You need a general-purpose language to do that, and general-purpose compilers have trouble matching human programmers when it comes to generating machine code.
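
Halide itself is a language embedded in C++, but the underlying point about domain knowledge can be shown with a much smaller toy example of specialization; the functions below are invented for illustration and are not Halide code. A routine that must handle any exponent at run time is compared with one written with the advance knowledge that the exponent is always four.

```python
# Toy illustration of why knowing the intended use helps optimization.
# A general routine must handle any exponent at run time...
def power_general(x, n):
    result = 1.0
    for _ in range(n):
        result *= x
    return result

# ...but if you know in advance that the program only ever needs n == 4,
# you can "hand-optimize" it into straight-line code with no loop at all.
def power_of_four_specialized(x):
    squared = x * x
    return squared * squared

print(power_general(3.0, 4))           # 81.0, via four multiplications and a loop
print(power_of_four_specialized(3.0))  # 81.0, via two multiplications
```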

A general-purpose compiler has to be able to compile anything. If you feed it the source code for a word processor, it will generate a word processor; if you feed it the source code for an MP3 player, it will generate an MP3 player; and so forth. If, tomorrow, a programmer invents a new kind of program, something as unfamiliar to us today as the very first Web browser was in 1990, she will feed the source code into a general-purpose compiler, which will dutifully generate that new program. So, although compilers are not in any sense intelligent, they have one thing in common with intelligent human beings: they are capable of handling inputs that they have never seen before.

Compare this with the way A.I. programs are currently designed. Take an A.I. program that is presented with chess moves and that, in response, needs only to spit out chess moves. Its job is very specific, and knowing that is enormously helpful in optimizing its performance. The same is true of an A.I. program that will be given only “Jeopardy!” clues and needs only to spit out answers in the form of a question. A few A.I. programs have been designed to play a handful of similar games, but the expected range of inputs and outputs is still extremely narrow. Now suppose, by contrast, that you’re writing an A.I. program with no advance knowledge of what type of inputs to expect or what form a correct response will take. In that situation, it’s hard to optimize performance, because you have no idea what you’re optimizing for.
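
One way to picture the contrast is in terms of interfaces: a game-playing program commits to a narrow, fixed contract for its inputs and outputs, while a fully general program cannot. The sketch below, with invented function names, only illustrates that difference.

```python
# Illustration of narrow versus open-ended interfaces (names are invented).
from typing import Any

def chess_engine(position_fen: str) -> str:
    """Narrow contract: a board position in, a single move out.
    Knowing this is what makes aggressive optimization possible."""
    # A placeholder "engine": always answers with the same opening move.
    return "e2e4"

def general_agent(observation: Any) -> Any:
    """Open-ended contract: any input, any form of response.
    There is no fixed objective here to optimize against."""
    raise NotImplementedError("What would a 'correct output' even mean in general?")

print(chess_engine("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"))
```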



from Hacker News https://ift.tt/3dq8IDE
