Written: June 14, 2009
When God, or Evolution, created humankind, they also created prejudice. Sexist, nationalistic, racist, intellectualist, religious, and sexual-orientationist supremacy complexes must have served some purpose; or else, why would God (or Evolution) have created them?
As we became more and more civilized and enlightened, we decided that it is not nice to think that just because we happen to be members of the set of Males, or Whites, or Jews, or Gentiles, or Professors, or Straight, we are better than members of the respective complement sets. But there is one more superiority complex that it is still "politically correct" to have, namely Human Supremacy.
One aspect of Human Supremacy is the belief that the fact that we seem to be slightly more cognitively advanced than (non-human) animals justifies murdering them and eating their flesh. I am sure that in two hundred years we will look back at this barbaric habit in horror and disgust, just as we today look back at slavery, segregation, and the persecution of homosexuals. But, unfortunately, some of my best friends, and even my own beloved wife, and two of my three children, are carnivores (well, omnivores), so I had better shut up on this issue, or else I may get in trouble with the Boss.
Another aspect of Human Supremacy, almost as despicable, is the belief, even among very smart people like Sir Roger Penrose and Guru Avi Wigderson (and most other people currently alive, both smart and less smart), that, in spite of the enormous, exponentially growing power of computers, humankind is superior to machine-kind. At the time of writing this may still be essentially true (in some narrow sense), but these people claim that humankind is intrinsically superior, and always will be!
They may be right, and they may be wrong. All I can say for sure is that all their alleged proofs are totally flawed. Besides, regardless of its veracity, I believe that it is counter-productive to feel superior to machines, since a healthy working relationship with computers requires that we treat them with due respect. For our sake rather than theirs (they don't mind!).
Sir Roger and Guru Avi are entitled to their opinions, and since computers are more mature, and less sensitive, than human beings, we don't have to worry that their feelings will be hurt by human-supremacist drivel. But what I don't like are Penrose's and Wigderson's (and countless others') "proofs". Here they use their academic reputations to foster their philosophical/theological views, without sufficient caveats, without distinguishing between scientific (or mathematical) statements and metaphysical statements, and without warning that their alleged "proofs" are really pseudo-proofs and should be taken as such.
Here I have to insert a disclaimer. I have no proof that human supremacy is wrong. This is still an open question. All I am claiming is that all the alleged proofs given so far are flawed. Also, I am only talking about mathematics: while I believe that in two hundred years computers will completely surpass humans in the domain of mathematics, I have no opinion on, and for that matter no interest in, other domains of human activity. For example, it is possible that humans will always be better gossipers than computers, so they would still have a niche to be proud of (if consolation it is).
Before going on to knock down Penrose's and Wigderson's meta-mathematical arguments, let me mention that I have the greatest admiration for their major scientific achievements. I also admire their literary skills. In the case of Penrose, if you ignore his views, you can learn a lot of substance from his books; ditto for Wigderson, who has yet to write a book aimed at the general public, but whose lucid articles and lectures, aimed at the general mathematical public, convey lots of beautiful insights. To be honest, his human-supremacist views are just a minor theme, and not the main point.
But even great scientists, once they start to express metaphysical opinions, sometimes say lots of nonsense. Witness Newton with his alchemy, astrology, and theology, and all those great mathematicians, from David Hilbert down, when they quarreled about the "foundations" of mathematics; most of what they said was not even wrong but total gibberish (for one thing, because there is no such thing as an actual [or even potential] infinity). Then we have Shafarevich, Yuval Ne'eman, Israel (Robert) Aumann, and many other people, from the extreme left to the extreme right, whose views regarding anything outside their own scientific specialty are not worth more (and are sometimes worth less) than anyone else's.
There are many arguments for human supremacy, so in the interest of time, I will only briefly debunk two of Penrose's and one of Wigderson's arguments.
Roger Penrose's trump card is Gödel's incompleteness theorem. He claims that it shows that there is more to mathematical knowledge than the axiomatic (mechanical) proof-machine, and that there is something transcendental that only humans can know, and that can never be discovered, let alone proven, by machines. This is mystical nonsense. Gödel didn't prove anything. What he did was disprove Hilbert's very naive (and, in hindsight, stupid) belief in a universal proof machine based on the axiomatic method. Under my interpretation (not Gödel's!, who besides being the greatest logician of our time was also a great crackpot platonist), he proved, by a reductio argument, that the so-called infinite does not exist, and that only finite statements about finite objects are a priori meaningful. Many seemingly "infinite" statements can be made, a posteriori, to make sense, by interpreting them "formally" (i.e. syntactically or combinatorially, like formal power series in combinatorics), but the word "undecidable" should be taken out of mathematics, and replaced by the phrase
"not even a posteriori meaningful".
(For more on this, see pages 4-5 in my article.)
Another argument, dear to Penrose, is a certain chess problem, due to William Hartston (mentioned in "Shadows of the Mind" and "The Large, the Small and the Human Mind" (p. 104)), that allegedly Deep Blue (or one of its cousins) couldn't figure out, but that any beginner human chess player can solve right away, by "seeing" it instantaneously. It is true that current computer-chess programs use number-crunching (or rather position-crunching), but it shouldn't be too hard to teach computers some symbol-crunching, valid for an m by n board rather than only an 8 by 8 board, and my brilliant (human) former student Thotsaporn Aek Thanatipanonda is doing just that. This is analogous to saying that computers can't do the multiplication
(2^(2^100000000000000000) - 1)(2^(2^100000000000000000) + 1) ,
because of insufficient memory. If you do it in any symbolic computation system, you would get the answer right away, treating it as a special case of

(a - 1)(a + 1) = a^2 - 1 ,

for symbolic a. The conventional wisdom is that this is a "general theorem", valid for infinitely many cases, and as such requires a (human) deductive proof, using the axioms of algebra. But, in fact, this is really one statement that the computer handles mechanically and algorithmically.
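To make the symbol-crunching point concrete, here is a toy sketch (mine, in Python, and of course nothing like a real computer algebra system): represent a polynomial in one symbolic variable a as a dictionary mapping exponents to coefficients, and multiply formally. The identity (a - 1)(a + 1) = a^2 - 1 then comes out of pure formula manipulation, without ever evaluating a at the gigantic number above.

```python
# Toy symbolic polynomial arithmetic: a polynomial in one variable `a`
# is a dict {exponent: coefficient}. Multiplying two such dicts is pure
# symbol-crunching; no gigantic numbers are ever computed.

def poly_mul(p, q):
    """Formally multiply two polynomials given as {exponent: coefficient} dicts."""
    result = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            result[e1 + e2] = result.get(e1 + e2, 0) + c1 * c2
    # drop terms whose coefficients cancelled to zero
    return {e: c for e, c in result.items() if c != 0}

a_minus_1 = {1: 1, 0: -1}   # the polynomial a - 1
a_plus_1  = {1: 1, 0: 1}    # the polynomial a + 1

product = poly_mul(a_minus_1, a_plus_1)
print(product)  # {2: 1, 0: -1}, i.e. a^2 - 1
```

The same single formal computation covers all infinitely many numerical instances at once, which is exactly the point: one mechanical statement, no "general theorem" needed.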
Moving on to Guru Avi Wigderson, he is fond of claiming that the meaning of the famous (or rather infamous) P vs. NP problem is:
"Can (human) creativity be automated?"
(see, e.g., this wonderful article). Sure enough (claims Avi), computers can verify that Smith murdered Brown, once Hercule Poirot or Miss Marple has found a proof, but they would never be able, in polynomial time, to come up with it themselves. Computers can even, at least in principle, verify that Wiles' proof of FLT is error-free, but it takes a human of the caliber of Sir Andrew to come up with it in the first place.
Frankly, I don't see the connection. Both Wiles and Miss Marple are using heuristics, which are quasi-algorithms that could soon be taught to computers, and once they learn this bunch of tricks, they would be able to do much better than humans, notwithstanding the (most probable) fact that P ≠ NP. It is true that most things are beyond the scope of both people and computers (e.g. the sixth Ramsey number, the number of self-avoiding walks on the square lattice with 1000 steps, and the number of 100 by 100 Latin squares, to cite just a few examples). So computers are not much better than humans; if this is a consolation, they are only a few orders of magnitude better. All that I am claiming is that, if anyone is going to be superior (in mathematics) in two hundred years, it won't be us. In other words:
Anything a human can do (in mathematics), a computer would (eventually) be able to do much better (and faster!)
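By the way, to get a feeling for why the counting problems mentioned above are hopeless for people and computers alike, here is a little sketch (mine, purely for illustration) that brute-force enumerates self-avoiding walks on the square lattice. The counts grow exponentially (roughly like 2.64^n), so direct enumeration gives up long before the 1000-step case.

```python
# Brute-force enumeration of n-step self-avoiding walks on the square
# lattice, starting at the origin. Purely illustrative: the count grows
# roughly like 2.64^n, so this approach is hopeless for n = 1000.

def count_saw(n, pos=(0, 0), visited=None):
    """Count self-avoiding walks of length n starting at pos."""
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:       # self-avoidance: never revisit a site
            visited.add(nxt)
            total += count_saw(n - 1, nxt, visited)
            visited.remove(nxt)      # backtrack
    return total

# The first few counts are 4, 12, 36, 100, 284, 780, ...
for n in range(1, 7):
    print(n, count_saw(n))
```

Already at modest n the running time explodes; a computer pushes the frontier a few dozen steps further than a human with pencil and paper, but both hit the exponential wall, which is the "only a few orders of magnitude better" point above.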
So let's get rid of this last bit of still-politically-correct human chauvinism, and learn to respect our silicon brethren. Instead of trying to prove RH, or Goldbach, or 3x+1 all by ourselves, let's invest all our energy and "creativity" in training our computers to surpass us. But we must be good teachers, and not ask them to do too much too soon; rather, start modestly. And before you know it, they would come up with proofs of all of the above, plus much better, more elegant, nicely streamlined proofs of FLT and of many other theorems previously proved, clumsily and ad hoc, by humans.