Author: Greg Kuperberg

I don't see any reason to have a hostile reaction to Doron Zeilberger's opinion 36. The basic thesis --- that computers are replacing mathematicians --- is true in some respects and not in others. I suspect that when computers make mathematicians obsolete, all human activity will be superfluous and human history will end. I don't know how far in the future this eventuality might lie, but even if it were soon, nothing we do now would matter much afterwards.

One view that I do not share (and which was not mentioned by Doron) is that quantum computers will be the big turning point. I doubt it for two reasons: First, people are probably classical computers. Second, most complexity theorists believe that bounded-error quantum polynomial time (BQP) falls short of hard complexity classes such as NP and PSPACE. There was an interesting article about this in the xxx computer science archive by Fortnow and Rogers [cs.CC/9811023]. (You have to go to xxx.lanl.gov or a mirror site to see it; I apologize that there is no Front for the CS archive.)

Doron's argument for his thesis is a little crazy, but it's not completely crazy. You shouldn't interpret it literally, even though Doron himself might. Although it is not very original or strongly argued, it obviously is provocative. It's like shock art. When I'm in the mood for it, I like it.

If you want some context, look at his web page:

http://www.math.temple.edu/~zeilberg/OPINIONS.html

Here is a very funny quote from opinion 34, "We should confess to our dumb mistakes, in order that our students should not feel bad about their mistakes":

-----------------------------------

I had a method that I was very fond of, to compute the so-called
finite-memory generating functions for [self-avoiding walks]. It had
many variables, but when I did lots of specializations, I obtained an
incredibly elegant corollary: the number of n-step non-retracing walks,
on the square-lattice, equals 4*3^(n-1). I was very excited about this
beautiful new result, and made a quick literature search, and convinced
myself that it is apparently new. Then I wrote Gordon Slade, asking him
about the novelty of this amazing result. In his reply, he pointed out,
very politely, that my `new result' is not terribly deep. I am leaving
this as an exercise to the reader.

-----------------------------------
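For the record, the result Slade alluded to is elementary: the first step can go in any of 4 directions, and every later step in any of 3 (anything except an immediate reversal), which gives 4*3^(n-1). A brute-force check for small n, sketched in Python:

```python
# Brute-force check of the quoted formula: the number of n-step
# non-retracing walks on the square lattice is 4 * 3**(n - 1).
# First step: 4 directions; each later step: any direction except
# the exact reversal of the previous one.

DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def count_nonretracing(n):
    """Count n-step walks that never immediately reverse a step."""
    def extend(prev, steps_left):
        if steps_left == 0:
            return 1
        total = 0
        for d in DIRS:
            if prev is not None and d == (-prev[0], -prev[1]):
                continue  # skip the step that retraces the last one
            total += extend(d, steps_left - 1)
        return total
    return extend(None, n)

for n in range(1, 7):
    assert count_nonretracing(n) == 4 * 3 ** (n - 1)
```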

The only part of "opinion 36" that I object to is the accusations
against the editors of the Notices. The Notices is not the be-all of
mathematical soapboxes, to be protected at all costs from the fallible
judgement of its editors. In fact, sci.math.research is about as
widely read as the Notices, although more by amateur mathematicians
and less by professionals. I take the two forums equally seriously.
One difference is that the moderators of sci.math.research give people
more rope to hang themselves than the Notices does.

--
/\ Greg Kuperberg (UC Davis)
/ \
\ / Visit the Math Archive Front at http://front.math.ucdavis.edu/
\/ * Thought is free. - The Tempest *
_________________________________________________________________

-----fascinating feedback from Andrej Bauer

Author: Andrej Bauer

Date: 1999/04/15

Forum: sci.math.research

_________________________________________________________________

David desJardins writes:

>I don't think that a computer that can prove the Goldbach Conjecture
>will get any kind of "head start" from reading a list of definitions
>and facts used in proving theorems in plane geometry.

What is the point of the above statement? Of course nobody expects that plane geometry will help solve the Goldbach Conjecture! I fail to see what you were getting at there. Did Zeilberger suggest that?

I think there is a more constructive way to understand the above remark. Maybe you mean to say that providing a computer with a large knowledge base of theorems in a given branch of mathematics will NOT give the computer a head start in that branch. (I do not intend to put words in your mouth, so please accept my apology if you did not mean to say that.)

I would like to argue that it is extremely useful to have a large base of mathematical knowledge organized in a way that can be manipulated by computers, even if computers can do only the most trivial sort of theorem proving. In this sense Zeilberger is correct when he says that we should be "entering mathematical facts into computers".

At some level we already have a kind of Math-Internet knowledge database. Mathematicians correspond with each other via e-mail and put their papers on web pages. What is missing is the kind of knowledge that a computer could manipulate *semantically*. For example, there ought to be a Math-AltaVista where you could ask "Has anyone proved this theorem yet?", and the computer would go off searching the planet.

I believe that a large body of knowledge would help a theorem prover enormously, provided the searching mechanisms were efficient enough. I base this opinion on an analogy with how mathematicians operate (they know a whole lot of theorems, tricks and techniques), and on my experience with the 'Analytica' theorem prover, developed by Ed Clarke at CMU. Analytica uses Mathematica to do algebraic manipulations, and it has a large knowledge base of the basic properties of real numbers. In other words, it does not try to prove everything from the axioms all the time. It knows about useful definitions, and it does not automatically eliminate them (because that causes an exponential blow-up). To see what sort of things it can do, see the paper in Journal of Automated Reasoning, vol. 23, no. 3, December 1998, pp. 295--325. The point is that you could never prove certain kinds of theorems without *extensive* knowledge of properties of reals, and a lot of algebraic manipulation that Mathematica does.
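To make the knowledge-base point concrete, here is a toy sketch (hypothetical, and nothing like Analytica's actual machinery; all names and facts below are illustrative): even the most trivial inference rule, transitive chaining of stored inequalities, answers queries that a from-the-axioms search would never reach quickly.

```python
from collections import defaultdict, deque

class FactBase:
    """A store of known facts "a <= b" with one trivial inference rule."""

    def __init__(self):
        self.le = defaultdict(set)   # a -> {b : "a <= b" is recorded}

    def add(self, a, b):
        self.le[a].add(b)

    def entails_le(self, a, b):
        """Decide "a <= b" by chaining stored facts (BFS over transitivity)."""
        seen, queue = {a}, deque([a])
        while queue:
            x = queue.popleft()
            if x == b:
                return True
            for y in self.le[x] - seen:
                seen.add(y)
                queue.append(y)
        return False

kb = FactBase()
kb.add("0", "x**2")              # squares are nonnegative
kb.add("x**2", "x**2 + 1")
kb.add("x**2 + 1", "exp(x**2)")  # 1 + t <= exp(t) for t >= 0

print(kb.entails_le("0", "exp(x**2)"))   # True: chains three stored facts
```

The inference here is as dumb as can be; all the power is in what the base already knows, which is exactly the trade-off Analytica makes by not expanding everything back to the axioms.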

>I don't think time spent on entering such facts into Maple or
>Mathematica will advance mathematical knowledge, now or ever, by one
>iota.

I agree that entering mathematical knowledge into existing computer algebra systems and theorem provers is a waste of time. We have not yet developed the infrastructure that is needed for a global mathematical database that could be manipulated on the semantic level. Mathematica and Maple are nowhere near being satisfactory systems for such an endeavor. I think Zeilberger is wrong when he thinks that we can do it today. We do not even have a good language to do it in.

However, I have no doubt that the required tools, mathematical and computer-theoretic, are going to be developed in two or three decades, if not sooner. They *are* being developed by various groups, mostly theoretical computer scientists (one that comes to mind is the Cornell Nuprl group). And once we have them, you will be proven wrong---there will come a day, not too far from now, when computers will significantly advance mathematical knowledge. I do not want to make a prediction as to whether this will be just because of a global mathematical database, or also because of very smart theorem provers. But I do predict it will happen in my lifetime.

>In the end no human mathematicians will be useful or needed, right? In
>fact, "in the end" the universe will die of heat exhaustion and all of
>the elementary particles will decay. But we don't have to make
>decisions about what to do right now based on what will happen "in the
>end"; we can decide based on the circumstances that prevail at the
>moment.

You know very well that I was not thinking of the end of the universe, and I was not talking about mathematicians' angst about being useless.

So, silly remarks and intentional misunderstandings aside, the point I was trying to make was that either our generation or the one coming after us is going to make the big leap from doing math "by hand" to doing math "*with* computers". In that sense, we do have to make decisions now. I think the correct decision is to wait until those pesky theoretical computer scientists come up with decent languages and knowledge manipulation techniques that will bless the happy matrimony of math and computers. I suspect Zeilberger is afraid that the arrogant mathematicians will refuse to listen to the pesky theoretical computer scientists.

>I don't read the article as a modest exhortation to mathematicians to
>become more familiar with what computers can do for them, and to use
>them more in their research, which I would wholly support. I read it as
>either lunacy or satire. Perhaps the editors read it the same way, and
>that's why it was not accepted.

Yes, the article is not written in the most diplomatic and convincing way. It is too enthusiastic, and that is why many people will probably dismiss it easily (which is just as well).

If you attempt to understand Zeilberger in a less dismissive way, though, then you can still find something positive in what he is saying. He is drawing our attention to a new technology which will, in his opinion, revolutionize mathematics. Zeilberger thinks this is happening now, I think it will happen in my lifetime, and if I understand what you are saying, according to you it will never happen (even though you use computers heavily?). Opinions, opinions.

********

Let me also respond to Phil Diamond, pmd@maths.uq.edu.au, who writes:

>OK, I accept that the machines will be able to do these nontrivial
>problems. But after that? Who will provide more nontrivial (or even
>trivial) problems? Who will invent the concepts that are used?

Humans will invent new concepts, of course. May I ask why you are asking these questions? I do not understand what you are getting at. If I understand you correctly, you are making the point that even if machines could prove theorems much more efficiently than humans, they still would not know *what* to prove. So what? We are going to tell them what to prove. Machines are *tools*! They will replace those mathematicians who spend their days devising formulas for compound interest rates, and solving differential equations for the design of new cars. That's good!

I cannot help but view your opinion as a form of technophobia. Is it?

Maybe something needs to be said about where mathematical problems come from. I think the best mathematical problems are the ones that originate from real-world problems, and I mean this in a very general sense. For example, I would claim that classical analysis was invented because of the needs of physics to understand the macroscopic world (Newtonian mechanics). A more recent example would be the way computer science is driving certain branches of mathematics (discrete math, type theory, constructive logic). I can't imagine we'd ever run out of problems to solve.

>This is a question that goes far beyond computer algebra and enters
>the AI area. And after 40 years and zillions of $, the Holy Grail
>of machine intelligence (whatever that is?) seems as unattainable
>as ever.

Knowing the kind of stuff Zeilberger does, I do not think he has AI-ish inclinations. My understanding is that he is suggesting that mathematicians should be finding *algorithms* for solving problems (he talks about *programming*, not about automated theorem proving). The kinds of algorithms that he might have in mind are the Risch algorithm for finding the closed form of an indefinite integral, the Gosper-Zeilberger-Wilf algorithms for finding the closed form of summations, or Buchberger's algorithm for finding a Groebner basis. Of course, having blown his vision out of proportion, he paints a future which resembles a sci-fi movie.
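One thing these algorithms have in common is that their output can be checked by routine algebra. A hypothetical mini-example in the same spirit (my own illustration, not one of the cited algorithms): the closed form of sum_{k=0}^{n-1} k*2^k is T(n) = (n-2)*2^n + 2, and verifying it only requires the telescoping identity T(n+1) - T(n) = n*2^n together with T(0) = 0.

```python
def a(k):
    return k * 2 ** k               # the summand

def T(n):
    return (n - 2) * 2 ** n + 2     # candidate closed form

# Telescoping check: if T(0) == 0 and T(n+1) - T(n) == a(n) for all n,
# then T(n) == sum(a(k) for k in range(n)) by induction on n.
assert T(0) == 0
for n in range(50):
    assert T(n + 1) - T(n) == a(n)

# And indeed the closed form matches the sum directly:
for n in range(20):
    assert T(n) == sum(a(k) for k in range(n))
```

This certificate style is precisely what makes such algorithms attractive: the hard search happens once, and the result is then mechanically verifiable.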

Is Prof. Zeilberger reading this discussion? It would be great to hear his opinion. Maybe his student can provoke him into replying by showing him a printout of this thread.

>It is the difference between developing chess playing >systems that can beat any human being at the game, and **inventing** >the game.

Yes, yes. But don't you think that even if computers can be "just" very good at proving theorems, that would still have a huge impact on math? And that we should pay attention to such a possibility, even if computers will always lack the "human spirit and creativity"?

I'll borrow your analogy. We all know that computers have become very good at chess. It is perhaps less known that they are very bad at go, the Japanese game. Today's computers are as bad at math as they are at go, but thirty years from now they will be as good at math as they are at chess today.

Andrej Bauer