Written: Jan. 4, 2009

The December 2008 issue of the Notices of the American Mathematical Society was dedicated to the activity of (Computerized) Formal Proof, with fascinating articles: one by Thomas Hales (of Kepler fame), who gave a great overview of the state of the art, followed by an article by Georges Gonthier, who recently programmed a fully formal proof of the Four Color Theorem, and by articles by computer scientists John Harrison and Freek Wiedijk describing the "nuts and bolts" behind this endeavor.

Teaching computers how to discover and prove mathematical results is certainly the
**way** to go, and I believe that mathematicians who continue to do pure human,
pencil-and-paper, computer-less research are wasting their time. If they learned
how to program in, say, Maple, and used the same mental energy and ingenuity while
trying to exploit the full potential of the computer, they would go much further.
For example, it is very possible that had Andrew Wiles' programming skills
been as good as his proving skills,
he would have already proved the Riemann Hypothesis. Of course, there is a huge **psychological**
barrier, and it is very hard to teach old mathematicians new tricks,
but we should train the next generation to take full advantage of the computer, and
courses like my Experimental Math
class should be mandatory for **all** students.

But the obsession with "Formal Proof", so nicely described in the Notices issue, is not
the **optimal** use of computers' (and humans'!) time. It is an unfortunate leftover
of the Euclidean heritage, which was already a bad model for doing human mathematics,
and is even worse for the forthcoming age of computer math, which
should revert to the Babylonian-Indo-Chinese model of **algorithmic** mathematics.
Ironically, Tom Hales, in the motto of his feature article, cites Francis Bacon's suggestion, from the latter's
1620 *Novum Organum*, that the "*entire work of understanding be commenced afresh*".
The activity of "computerized formal proof" is not at all *fresh*;
it is just harnessing the great power of the computer to redo, much more reliably, of course,
the same-old-Euclidean-drivel that human mathematicians have been doing for the last 2300 years.

The "axiomatic method" is not even the most efficient way to prove theorems in Euclidean Geometry. Thanks to René Descartes, every theorem in Euclidean Geometry is equivalent to a routine identity in high-school algebra; see Shalosh B. Ekhad's Plane Geometry textbook. Analogous remarks apply almost everywhere in mathematics. Of course we don't always have a complete decision algorithm for every problem, but the language of generalized-high-school-algebra is much better than the Euclidean-Hilbertian-Bourbakian one.
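To make the Descartes-style reduction concrete, here is a minimal sketch (in Python rather than the Maple advocated above; the helper names are my own illustrative choices) that checks a classical Euclidean theorem, the concurrency of a triangle's three medians, by turning it into exact rational arithmetic, and then "semi-rigorously" verifies it on many random triangles:

```python
from fractions import Fraction
from random import randint

def collinear(p, q, r):
    # p, q, r are (x, y) pairs; a zero cross product means they lie on one line
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def medians_concurrent(a, b, c):
    # The centroid (a + b + c)/3 should lie on all three medians.
    g = ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)
    return (collinear(a, midpoint(b, c), g)
            and collinear(b, midpoint(a, c), g)
            and collinear(c, midpoint(a, b), g))

def random_point():
    # Exact rational coordinates, so there is no floating-point doubt
    return (Fraction(randint(-10**6, 10**6)), Fraction(randint(-10**6, 10**6)))

# Empirical check, with EXACT arithmetic, on 1000 random triangles
assert all(medians_concurrent(random_point(), random_point(), random_point())
           for _ in range(1000))
print("medians concurrent in all 1000 random triangles")
```

Of course, a random-sampling check is an *empirical* test, not the full decision procedure; but since the collinearity conditions are polynomial identities in the coordinates, the same translation feeds directly into a complete algebraic verification in any computer algebra system.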

Tom Hales is one of my greatest heroes, and his
*first* proof of the Kepler conjecture was a major tour-de-force in demonstrating what
computers can do when programmed by ingenious humans. But he shouldn't have
listened to those humans who raised doubts
about the validity of computer-assisted proofs,
who claim that "you can't trust a computer" and that "computer programs always have bugs".
True, of course, but computers and computer programs, properly empirically tested, are still
orders-of-magnitude more reliable than those one-notch-above-apes animals commonly called humans.

There are so many open problems left to do, Tom, so don't waste your time trying to find a "formal proof" version of Kepler in order to appease these prejudiced humans. Besides, I have news for you: they will never be convinced. The best way to respond to their criticism is to ignore it.

I concede that "formal proofs" are not completely without interest. It is a nice game that computational logicians
like to play, and it does have its uses in applied software development, in testing program correctness,
in addition to (not instead of!) empirical tests. Also, it is nice that a few theorems, like 4CT, have
a completely formal proof, since Euclideanism is still the dominant religion in the early 21st century.
But *enough is enough*. It was funny the first time, and just like the 3000th proof that a
certain problem is NP-hard, the law of diminishing returns will hit very soon.

Real mathematics is (or at least should be) *algorithmic*. The axiomatic method is like machine language
or a Turing machine or a Tuxedo. It is very stifling. The logicist approach that takes
several hundred pages to prove that 1+1=2 is ridiculous. Let's be happy with the current standards
of rigor in *informal* human mathematical discourse, and use computers at that level.
If anything, we have to be flexible, and relax the rigor, allowing semi-rigorous proofs, and
even non-rigorous proofs (as long as we state from the outset the level of rigor).

So let's leave this *formal proof* game to professional computational logicians, but
regular mathematicians, like Tom Hales used to be before he let his human critics push him around,
should be *pragmatic* and try to *optimize* computers' great potential.
Absolute certainty is impossible, so let's settle for
the same, or even a diminished, level of "rigor" that we are used to in normal mathematical discourse.
Only this way would we have a reasonable chance to see a proof of RH, Goldbach, 3x+1, P vs. NP, etc.
in *our* lifetime.

Added April 8, 2009: See Leon Smith's comments.

Added May 9, 2009: Read the interesting feedback by Rob Arthan.

Opinions of Doron Zeilberger