Written: Nov. 9, 2003

One of the mathematicians I love to hate is G.H. Hardy,
whose famous `apology' is a manifesto of narrow-minded
elitist "purism" and application-phobia. He is probably
turning in his grave to see the big commercial success of the Fermat-Euler
`pure' theorem that a^{(p-1)(q-1)} leaves a remainder
1 when divided by pq (for any a relatively prime to pq), and would be even more
embarrassed if he knew that the Hardy space is useful in Electrical Engineering.
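That theorem is easy to check mechanically; here is a minimal Python sketch (the primes 61, 53 and the bases 17 and 2 are arbitrary illustrative choices):

```python
# Sanity check of the Fermat-Euler theorem behind RSA:
# for primes p, q and any a relatively prime to pq,
# a^((p-1)(q-1)) leaves remainder 1 when divided by pq.

def check_fermat_euler(p, q, a):
    """True iff a^((p-1)(q-1)) == 1 (mod pq), via fast modular exponentiation."""
    return pow(a, (p - 1) * (q - 1), p * q) == 1

print(check_fermat_euler(61, 53, 17))   # True
print(check_fermat_euler(101, 103, 2))  # True
```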

But the thing that most outraged me in Hardy's snooty drivel is the assertion that "Chess is trivial". With all due respect, Prof. Hardy, I consider the problem of whether "White can always win" (with perfect play) much deeper and more interesting than your pet problem, the Riemann Hypothesis.

But I shouldn't be too hard on G.H. Hardy. After all, he is just a lowly human, and these cute humans have a curious psychological need: they need a `meaning' for their insignificant existence. Religious people have God, but atheists like Hardy also need some kind of God, so they invent lots of `pseudo-Gods', and that's why we have racism, sexism, and, in Hardy's case, "Pure Math supremacy".

But a great mathematician like G.H. Hardy deserves a little more than a flat dismissal of his credo, which unfortunately still survives today amongst quite a few "pure" mathematicians. Let's try to understand why he thought that Chess is trivial, while Number Theory is not.

I call it `infinite-fixation'. Most mathematicians think that
whatever is `finite' is trivial, and reducing a problem to
`checking finitely many cases' renders it trivial, and makes
the actual checking unnecessary. On the other hand,
a real theorem incorporates *infinitely* many facts, hence
is `transcendental' and `deep'.

But this *infinite* is the biggest fiction ever invented by humans.
We are all finite creatures who live in a finite world, and even our
mathematical world is finite.
[for more details, see my paper
"Real" Analysis is a Degenerate Case of Discrete Analysis
].

Let me give you an example. Consider walks on the planar square grid with unit steps: "Left", "Right", "Up", and "Down". Consider the following two problems:

- Find a formula for the number of walks with n steps, valid for all n.
- Find the number of such walks with 1000 steps that are
*self-avoiding* (i.e., that never return to a previously-visited site).

According to Hardy, the first problem is `non-trivial' (in the philosophical, conceptual sense), while the second one is just asking for a finite fact, hence is `trivial'. The output for the first question is a general formula, featuring a general symbol, n, hence would be valid for `infinitely many n', and encompasses infinitely many facts.

I am sure that all of you have already figured out the answer to
the first problem: 4^{n}, and I am willing to bet
that none of you has any clue about the second problem, nor would you
be able to get it with all the computers in the world.
I am willing to bet that the second `trivial' problem will remain
open long after all seven Millennium Prize Problems have been solved,
and most likely will never be answered.
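For small n, both counts are easy to verify by exhaustive search; here is a minimal Python sketch (the exponential blow-up in the self-avoiding case is precisely why n = 1000 is hopeless this way):

```python
# Brute-force enumeration of n-step walks on the square grid.
STEPS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # Up, Down, Right, Left

def count_all_walks(n):
    """All n-step walks: 4 choices per step, so 4^n."""
    return 4 ** n

def count_saws(n, pos=(0, 0), visited=None):
    """Count n-step self-avoiding walks from the origin, by exhaustive search."""
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    total = 0
    for dx, dy in STEPS:
        nxt = (pos[0] + dx, pos[1] + dy)
        if nxt not in visited:  # never return to a previously-visited site
            total += count_saws(n - 1, nxt, visited | {nxt})
    return total

print([count_saws(n) for n in range(1, 6)])  # [4, 12, 36, 100, 284]
```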

But, I can hear you retort, this is still *just* a number, while
4^{n} is an admittedly easy, but nevertheless genuine,
`theorem', valid for all (hence infinitely many) positive integers n.
Nonsense! There is no such thing as infinitely many integers
(or infinitely many of anything). The formula 4^{n} is
ONE fact: it says that for a *symbolic* n, the number of walks
of n steps equals the *expression* 4^{n}.
But *expressions* (in this case, in Maple's lingo,
of type `^`) are every bit as finite as that other kind
of expression called *integer*, which is the type
of the output for the second problem. It is true that the
former expression has a slightly more complex syntax: it has
two *operands* (once again in Maple's jargon):
op(1,%), which equals the simpler expression 4 (of type integer),
and op(2,%), which is the *expression* n, whose type
is `symbol'. But the memory allocation needed to express the
expression that solves problem 1 is much less than the memory
allocation needed to express the answer to the second problem,
which would require more than 416 decimal digits (once known).

But just the *size* of the output does not make the second
problem less trivial than the first one. After all, the
question "what is the 2000th Fibonacci number" has about the
same output size, yet is "trivial" (in the
true sense of the word), because of its (very low, linear)
*computational complexity*.
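(The Fibonacci claim is easy to check; a minimal Python sketch of that linear-time computation:)

```python
# The 2000th Fibonacci number: an output of comparable size to the
# SAW count, yet computable with only a linear number of additions.

def fib(n):
    """Iterative Fibonacci: O(n) big-integer additions."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

f = fib(2000)
print(len(str(f)))  # number of decimal digits: over 400
```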
On the other hand, no one knows of any
*efficient* algorithm for computing the number of self-avoiding
walks, and even computing the first 51 members
of the enumerating sequence, undertaken by
Tony Guttmann and collaborators, is a highly non-trivial
`conceptual' (replacing naive brute force by clever brute force)
and computational feat. It is very unlikely that a
tractable `expression' (or any polynomial-time algorithm) exists.
Alas, just as with proving that P is not NP, it is equally
unlikely that there will ever be a proof that no such algorithm
exists.

It is true that humans have their own ways of thinking, and
human mathematics, comparatively speaking, was a big success story.
Part of the reason that humans were so successful is that in their
research they `cheated' and used *pseudo-algorithms*,
that usually did not work (and then, of course, they never
reported their failure), but sometimes did work (and then
they had a publication). I believe that the reason computer
proofs have not yet reached their full potential in superseding
humans is that computers are much more honest than humans, and
use genuine algorithms. Hence it is important for
human coaches of computers, like
myself, to understand the implicit pseudo-algorithms
(more commonly called `research heuristics' and `intuition')
employed by humans.
Once this is accomplished, human mathematicians will only be
able to compete against each other, and will have no chance
against machines.

So a human mathematician, just like Maple or Mathematica, is just
a `symbol-cruncher', and philosophically
`symbol-crunching' is equivalent to `number-crunching', since
`numbers' are really (a special case of) symbols, while
symbols can be (and, in the internal memory of a
computer-algebra system, actually are!) represented by strings of 0's and 1's,
the only two *numbers* that are really necessary!
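This can be made concrete with a toy model of a Maple-style expression tree (a sketch only; it mimics, but is not, Maple's actual internal representation):

```python
# The "infinite" answer 4^n as ONE finite, explicit object:
# an expression tree of type `^` with two operands.
expr = ('^', 4, 'n')

def op(i, e):
    """A toy analogue of Maple's op(i, e): the i-th operand of expression e."""
    return e[i]

print(op(1, expr))  # 4   (type: integer)
print(op(2, expr))  # 'n' (type: symbol)

# And a symbol is itself just a string of 0's and 1's in memory:
print(format(ord('n'), 'b'))  # '1101110': the symbol n, as bits
```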

Going back to Chess, I am willing to bet that the
question `Can White guarantee to Win?'
(and certainly the even deeper problem, `what is the winning move?')
will remain wide open long
after RH and Goldbach are solved. The predecessors of
RH and Goldbach (PNT and Chen's p_1 + p_2 p_3 theorem) have
comparatively shallow proofs (using a simple quadratic
recursive inequality (Selberg's) and a simple-minded sieve
(the linear sieve)).
On the other hand, the game of Chess is so deep and intricate,
that it is extremely unlikely that we'll ever be able to know
whether or not White can always win.

Hence "Blessed is the 'Trivial' for it shall inherit the Non-Trivial Land", and, analogously, "Damned is the (so-called) `Non-Trivial' since it is really Trivially TRIVIAL".

Read the interesting feedback by Olivier Gerard (Nov. 2003), Dan Romik (Nov. 2003) and Apostolos Doxiadis (April 12, 2004).

Doron Zeilberger's Opinion's Table of Content