%chomp.tex: ``Three-Rowed CHOMP''
%a Plain TeX file by Doron Zeilberger (10 pages)
%begin macros
\def\L{{\cal L}}
\baselineskip=14pt
\parskip=10pt
\def\Tilde{\char126\relax}
\def\halmos{\hbox{\vrule height0.15cm width0.01cm\vbox{\hrule height
0.01cm width0.2cm \vskip0.15cm \hrule height 0.01cm width0.2cm}\vrule
height0.15cm width 0.01cm}}
\font\eightrm=cmr8 \font\sixrm=cmr6
\font\eighttt=cmtt8
\magnification=\magstephalf
\def\lheading#1{\bigbreak \noindent {\bf#1}
}
\parindent=0pt
\overfullrule=0in
\def\P{{\cal P}}
\def\N{{\cal N}}
%end macros
\bf
\centerline
{
Three-Rowed CHOMP
}
\rm
\bigskip
\centerline{ {\it
Doron ZEILBERGER
}\footnote{$^1$}
{\eightrm \raggedright
Department of Mathematics, Temple University,
Philadelphia, PA 19122, USA.
%\break
{\eighttt zeilberg@math.temple.edu} \hfill \break
{\eighttt http://www.math.temple.edu/\Tilde zeilberg/ .}
September 5, 2000. Supported in part by the NSF.
This article is accompanied by the Maple package
Chomp3Rows, available from Zeilberger's website.
}
}
{\bf Abstract:} A ``meta'' (pseudo-) algorithm is described
that, for any fixed $k$, finds a fast ($O(\log(a))$) algorithm
for playing 3-rowed Chomp, starting with the first, second, and
third rows of lengths $a$, $b$, and $c$ respectively,
where $c \leq k$, but $a$ and $b$ are {\it arbitrary}.
{\bf How To Play CHOMP}
David Gale's famous game of Chomp ([Ch]) starts out with an
$M$ by $N$ chocolate bar, in which the leftmost-topmost
square is poisonous. Players take turns picking squares.
In his or her (or its) turn, a player must pick one of the remaining
squares, and eat it along with all the squares that
are ``to its right and below it''. In matrix notation,
with the poisonous square being entry $(1,1)$ and the
initial position consisting of the whole bar
$\{ (i,j) \vert 1 \leq i \leq M, 1 \leq j \leq N \}$,
picking the square $(i_0,j_0)$ means that one has to eat
all the remaining squares $(i,j)$ for which {\it both} $i \geq i_0$ and
$j \geq j_0$ hold. The player who
eats the poisonous (leftmost-topmost) square loses.
Of course, picking $(1,1)$ kills you, so
a non-suicidal player will never play that move unless
forced to.
For example, if $M=4$ and $N=3$, then the initial game-position is
$$
\matrix{
X&X&X\cr
X&X&X\cr
X&X&X\cr
X&X&X\cr} \quad .
$$
The first player may choose to play $(4,3)$, in which case the
game-position becomes
$$
\matrix{
X&X&X\cr
X&X&X\cr
X&X&X\cr
X&X& \cr} \quad ,
$$
or he may choose to play $(2,2)$, which shrinks the chocolate bar to
$$
\matrix{
X&X&X\cr
X& & \cr
X& & \cr
X& & \cr} \quad ,
$$
and so on.
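For concreteness, the move rule can be sketched in a few lines of Python (the accompanying package is in Maple; this snippet and its names are only an illustration):

```python
def chomp(position, i0, j0):
    """Pick the existing square (i0, j0) (1-indexed, row 1 on top) and eat
    every remaining square (i, j) with i >= i0 and j >= j0.  A position is
    a non-increasing list of row lengths; emptied rows are dropped."""
    assert 1 <= i0 <= len(position) and 1 <= j0 <= position[i0 - 1]
    eaten = [row if i < i0 - 1 else min(row, j0 - 1)
             for i, row in enumerate(position)]
    return [row for row in eaten if row > 0]
```

For instance, {\tt chomp([3, 3, 3, 3], 4, 3)} yields the first position displayed above, and {\tt chomp([3, 3, 3, 3], 2, 2)} the second.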
\vfill\eject
{\bf The Two Cultures of Mathematics: Existential and Constructive}
Nineteenth-century mathematicians were honest. When they tried to prove
that something existed, they tried to find it, {\it explicitly}. Then
Hilbert came along and turned math into {\it theology}, by ``proving''
existence with the {\it ``Law'' of the Excluded Middle}.
Of course, 20th-century mathematicians were forced to cheat, because
they ran out of ``computing time'' and ``memory allocation'' in the
tiny PC between their shoulders.
Now that we have much more powerful silicon brains to aid us,
we can afford again the luxury of being honest, and actually
{\it finding} the desired objects, at least whenever it is
theoretically possible.
Of course, in (finite) combinatorics, there is no philosophical
objection to using indirect arguments, but it is still
much more satisfying to {\it exhibit} the object
rather than just proving its existence.
One of the most gorgeous existence proofs of all times is
the following gem of David Gale ([Ch]).
{\bf The Chomp Existence Theorem:}
In a Chomp game that starts with an $M$ by $N$
chocolate bar (with $MN>1$), there {\it exists} a winning move
for the first player.
{\bf David Gale's One-Line Proof}: Consider the minimal move of
only removing the square at the bottom-rightmost corner,
$(M,N)$.
If it is a winning move, then it is a winning move.
Otherwise, it is a losing move, which means
that your opponent has a winning counter-move. But any position reachable
from the new position (with $MN-1$ squares) is also reachable from the
initial position (with $MN$ squares), so the first player could have made
that winning counter-move his very first move. Either way, a winning
first move exists. \halmos
This famous (but not famous enough!) proof
can be used in practice as follows. I first play
with Kasparov, pretending that I am Deep Blue, and make the
above trial move of removing the bottom-right corner. If he
resigns, then it was a good move; otherwise, he will make
a good counter-move, which I can then use when I play with you.
This idea is called `strategy stealing' in [WW].
{\bf How to Find the Winning Move?}
Of course, there {\it exists} an algorithm for finding the
winning move and, more generally, for deciding whether any given
Chomp position is winning or losing, and for finding a winning move
in the former case. This is true for all {\it combinatorial games}
where the number of possible positions is finite, and the game
graph is acyclic. This is part of the lovely theory of
{\it combinatorial games}. Readers not familiar with this beautiful
theory are strongly advised to read Aviezri Fraenkel's lively
and very lucid introduction [ST].
First one ``draws'' the {\it game graph} of the game. This
is a directed graph whose vertices are all the possible
``game positions'', with a directed edge from the vertex of
position $P_1$ to the vertex of position $P_2$
whenever $P_2$ is reachable from $P_1$ in
{\it one} legal move.
Having `drawn' that graph,
you label all the sinks (vertices with outdegree $0$) by $\P$
(for `Previous player wins'), because if it is your turn to move and
you are there, then you lost, because you can't go anywhere.
Next, label by $\N$ (for ``Next player wins'') every vertex that has
at least one edge leading to a $\P$-vertex.
Then, if you see a vertex all of whose edges lead to $\N$-labelled
vertices, label it $\P$. Continue labelling the vertices
$\N$ and $\P$ alternately until you are done.
If the directed graph of the game is finite and has no cycles,
then it is easy to see that the above procedure always terminates.
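For a game as small as Chomp, the labelling procedure is easy to implement directly. Here is a memoized sketch in Python (the accompanying package is in Maple; this snippet and its names are only an illustration), in which the lone poisonous square is the unique sink:

```python
from functools import lru_cache

def moves(pos):
    """All positions reachable in one legal move from pos, a non-increasing
    tuple of row lengths.  Eating the poisonous square (1,1) is excluded,
    so the single poisonous square (1,) is the unique sink."""
    out = set()
    for i in range(len(pos)):               # 0-based row of the picked square
        for j in range(1, pos[i] + 1):      # 1-based column
            if (i, j) != (0, 1):
                new = tuple(min(r, j - 1) if k >= i else r
                            for k, r in enumerate(pos))
                out.add(tuple(r for r in new if r > 0))
    return out

@lru_cache(maxsize=None)
def is_P(pos):
    """Label pos: True for P ('Previous player wins'), False for N.
    The sink (1,) is P vacuously: every (i.e. no) move leads to N."""
    return all(not is_P(q) for q in moves(pos))
```

For instance, {\tt is\_P((1,))} is true (you are stuck with the poison), and the procedure terminates because every move strictly decreases the number of cells.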
Now let's apply this to Chomp. How many possible positions are there?
The intermediate positions in an $M \times N$
Chomp game are integer partitions (non-increasing sequences of
positive integers)
$\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_r$,
with
$\lambda_1 \leq N$ and $r \leq M$. It is well known, and easy to see,
that their number is ${{M+N} \choose {N}}$. So, for example,
when $M=O(N)$ the game graph has exponential size, and hence
the labelling algorithm takes exponential time and memory
(and doubly-exponential in the bit size of the input!).
When $M=N$ (and $MN>1$), there is a trivial winning move:
chomp at the square in location $(2,2)$,
leaving a symmetric $L$-shaped position, and then use
the {\it copy-cat} strategy of aping, on the other arm of the
$L$, every move your opponent makes.
Another trivial special case is when we only have $2$ rows.
We have
{\bf Proposition 0}: A 2-rowed Chomp position is $\P$ iff
it is of the form $(a,a-1)$ ($a\geq 1$), i.e. where
the top row has one more square than the bottom row.
If $(a,b)$ ($a \geq b \geq 0$) is a Chomp position
that is winning (i.e. $b \neq a-1$), then if $a-b \geq 2$,
a winning move is to go to position $(b+1,b)$, and if
$a-b=0$, to go to position $(b,b-1)$.
{\bf Proof:} We really have two statements here:
$P_1(a)$: $(a,a-1)$ is a $\P$ position (for all $a \geq 1$).
$P_2(b)$: $(a,b)$ is an $\N$ position for every $a$ such that
$a \geq b \geq 0$ and $a-b \neq 1$. Furthermore, the winning
move is to $(a,a-1)$ if $a=b$, and to $(b+1,b)$ if $a-b \geq 2$.
The proof is by joint induction. $P_1(1)$ is trivially true.
The positions reachable from $(a,a-1)$ are
$(a-1,a-1), (a-2,a-2), \dots, (1,1)$, and
$(a,a-2),(a,a-3), \dots, (a,0)$, each of which is an $\N$ position
because of $P_2(b)$, for $b \leq a-1$. Conversely, $P_2(b)$ follows
at once from $P_1$: the prescribed winning moves lead to $(b,b-1)$
(when $a=b$) and to $(b+1,b)$ (when $a-b \geq 2$), which are $\P$
positions by $P_1(b)$ and $P_1(b+1)$ respectively. Since establishing
$P_1(a)$ only invokes instances $P_1(m)$ with $m \leq a-1$, the
induction closes. \halmos

{\bf Warming Up: $3$-Rowed Chomp with a Short Third Row}

From now on, a $3$-rowed position is written $(a,b,c)$, where
$a \geq b \geq c \geq 0$ are the lengths of the first, second, and
third rows. The position $(1,0,0)$ is $\P$, and every position
$(a,0,0)$, with $a>1$, is $\N$, since we can get to $(1,0,0)$ from it.
From the $2$-rowed case we know that $(a,a,0)$, and $(a,b,0)$
with $a-b \geq 2$, are $\N$.
What are the `smallest' positions
with $c \leq 1$ that are $\P$? We see that all the four-cell positions
$(4,0,0),(3,1,0),(2,2,0),(2,1,1)$ are $\N$. Amongst the five-cell
positions: $(5,0,0),(4,1,0)$ are $\N$, since the former can be answered
by $(1,0,0)$ and the latter by $(2,1,0)$.
We already know, from the $2$-rowed case, that (3,2,0) is $\P$,
and now we see that (2,2,1),(3,1,1) are both $\P$, since
all the positions reachable from them are $\N$ (do it!).
So now we have two new members of the $\P$ club:
$(2,2,1)$ and $(3,1,1)$. But these two factoids entail
an infinite number of new memberships in the $\N$ club, namely
$(2+\alpha,2+\beta,1)$ for all $\alpha \geq \beta \geq 1$,
as well as $(2+\alpha,2,1)$ for $\alpha \geq 1$. Also, from
the fact that $(3,1,1)$ is $\P$ we get that $(3,2,1)$ and $(3,3,1)$
are $\N$, as well as $(3+\alpha,1,1)$ for $\alpha \geq 1$.
Now it is easy to see (do it!) that these $\N$ positions exhaust all the
possible positions with more than $5$ cells and $c=1$.
Hence:
{\bf Proposition 1}: The only $\P$ positions $(a,b,c)$
with $c=1$ are
$(2,2,1)$ and $(3,1,1)$. Every $\N$ position with $c=1$,
and at least $6$ cells,
is of (at least) one of the forms
$(3,2,1)$; $(3,3,1)$; $(4+\alpha,1,1)$ ($\alpha \geq 0$);
$(2+\alpha,2+\beta,1)$ ($\alpha \geq \beta \geq 0$, $\alpha + \beta >0$),
and the winning moves are, respectively:
$$
(3,2,1) \rightarrow (3,1,1) \quad ,
$$
$$
(3,3,1) \rightarrow (3,1,1) \quad ,
$$
$$
(4+\alpha,1,1) \rightarrow (3,1,1) \quad ,
$$
$$
(2+\alpha,2+\beta,1) \rightarrow (2,2,1) \quad .
$$
Note that there turned out to be only 2 members of $\P$ when
$c=1$, so we did not need induction. On the other hand
when $c=0$ (the $2$-rowed case), there were infinitely many
members, so we had to resort to induction.
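Since Proposition 1 involves only finitely many $\P$ positions, it can also be confirmed by brute force. Here is a self-contained Python sketch (the naive memoized solver below is purely illustrative; the paper's actual tool is the Maple package {\tt Chomp3Rows}):

```python
from functools import lru_cache

def moves(pos):
    """One-move successors of a Chomp position (non-increasing tuple of
    row lengths).  Eating the poisonous square (1,1) is excluded, so the
    lone poisonous square (1,) is the unique dead end."""
    out = set()
    for i in range(len(pos)):               # 0-based row of the picked square
        for j in range(1, pos[i] + 1):      # 1-based column
            if (i, j) != (0, 1):
                new = tuple(min(r, j - 1) if k >= i else r
                            for k, r in enumerate(pos))
                out.add(tuple(r for r in new if r > 0))
    return out

@lru_cache(maxsize=None)
def is_P(pos):
    """True iff pos is a P position: every move leads to an N position."""
    return all(not is_P(q) for q in moves(pos))

# All positions (a, b, 1) with a >= b >= 1 and a <= 12: the P positions
# should be exactly (2, 2, 1) and (3, 1, 1).
P_with_c1 = {(a, b) for a in range(1, 13) for b in range(1, a + 1)
             if is_P((a, b, 1))}
assert P_with_c1 == {(2, 2), (3, 1)}
```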
Now we are ready to graduate to the case $c=2$.
From the $\P$ positions for $c=0$ we know that
the positions $(b+1,b,2)$ all belong to $\N$ (just chomp off the
third row), from $(3,1,1) \in \P$ we know that $(3,2,2) \in \N$,
and from $(2,2,1) \in \P$ we know that $(2,2,2) \in \N$.
Hence the smallest (in number of cells) $\P$ position with $c=2$ is
$(4,2,2)$, since all the possible moves from it lead to $\N$ positions.
Now this fact implies that $(4,3,2)$ and $(4,4,2)$ are
$\N$ positions. The next smallest $\P$ position is $(5,3,2)$, which
immediately implies that $(5,4,2) \in \N$ and $(5+\alpha,3,2) \in \N$,
for $\alpha \geq 1$. The smallest position that is not covered
by the above is $(6,4,2)$, and we can directly verify that
all the possible moves from $(6,4,2)$ lead to $\N$ positions.
So $(6,4,2) \in \P$.
Continuing, we get that also $(7,5,2) \in \P$, and we humans would
soon be led to conjecture the following
{\bf Proposition 2}:
A Chomp position $(a,b,2)$ is $\P$ iff $a-b=2$.
Now this is a {\it genuine} proposition, since it contains
infinitely many statements: $(4,2,2) \in \P$, $(5,3,2) \in \P$,
$(6,4,2) \in \P, \dots$. Nevertheless, we humans can easily
prove it by induction, just as we proved Proposition 0 for
$c=0$ (the two-rowed case). Why won't YOU do it RIGHT NOW?!
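Proposition 2 makes infinitely many claims, so brute force can only sample it; still, a numeric spot-check is a reassuring warm-up for the induction. A self-contained Python sketch (the naive solver is illustrative, not the Maple package):

```python
from functools import lru_cache

def moves(pos):
    """One-move successors; eating the poisonous square (1,1) is excluded,
    so the lone poisonous square (1,) is the unique dead end."""
    out = set()
    for i in range(len(pos)):               # 0-based row of the picked square
        for j in range(1, pos[i] + 1):      # 1-based column
            if (i, j) != (0, 1):
                new = tuple(min(r, j - 1) if k >= i else r
                            for k, r in enumerate(pos))
                out.add(tuple(r for r in new if r > 0))
    return out

@lru_cache(maxsize=None)
def is_P(pos):
    """True iff pos is a P position: every move leads to an N position."""
    return all(not is_P(q) for q in moves(pos))

# All positions (a, b, 2) with a >= b >= 2 and a <= 14: the P positions
# should be exactly those with a - b = 2.
P_with_c2 = {(a, b) for a in range(2, 15) for b in range(2, a + 1)
             if is_P((a, b, 2))}
assert P_with_c2 == {(b + 2, b) for b in range(2, 13)}
```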
We can continue in this vein, but I doubt that any human can go
beyond $c=10$, since the number of cases grows larger and larger,
as we will see below, once we let Shalosh B. Ekhad take over.
One hope would be that a {\it pattern}, valid for arbitrary $c$,
would emerge, which a human would be able to prove by induction
or otherwise. However, this miracle is probably as unlikely as
a closed-form solution to the Three-Body Problem.
Neither I nor Shalosh was able to detect such a pattern.
While Shalosh, using the Maple package Chomp3Rows (to
be described soon), succeeded in first conjecturing, and then
(all by itself!) proving, Proposition $c_0$, for $c_0 \leq 115$,
characterizing the $\P$ positions of the form $(a,b,c_0)$,
no discernible {\it global} pattern (as a function of $c_0$)
emerged, at least not to us.
{\bf So Why is it Interesting?}
Gregory Chaitin, standing on the shoulders of Kurt G\"odel and
Alan Turing, taught us that most results are either undecidable
or uncompressible. Whenever there is a pattern, it means that
the result is compressible, hence was trivial all along; we just
did not know it. So we humans are destined to prove only trivial results,
since any result, once proved, is trivial, for the very reason that
{\it we}, low-tech humans, were able to
prove it. The only way to transcend this triviality predicament, and
to be able to prove semi-trivial results, is to ask the computer to
help us (see [Op36][Op37]).
In other words, we have this powerful hammer, called the computer,
that can hammer so many different nails. Of course, we should not
waste our time hammering screws: when we see a screw, rather
than looking for a screwdriver, or trying to hammer it awkwardly
with our mighty hammer, let's just leave it alone! There are so many
challenging nails around, on which our hammer can be used to advantage,
that we should keep a lookout for them, and try to nail them, thereby
hopefully improving our hammering skills. It would be
premature to try and find a computer-proof of the Riemann Hypothesis,
so let's try first to teach the computer how to do research in, say,
$3$-rowed Chomp. Whenever we have a class of human theorems in which the
arguments repeat themselves, it is a good candidate for teaching to a computer.
So the main justification for developing the package {\it Chomp3Rows}
was as an {\it \'etude} in computer-generated research. As we get
better and better at it, very soon we will be able to tackle the
Riemann Hypothesis and other biggies.
I call people like me `mathematical engineers', and I am sure
that this culture of mathematical (software) engineering will
soon be the mainstream one, making the following dichotomy
obsolete.
{\bf The Two Cultures of Mathematics: Problem Solvers and Theory Builders.}
W.T. Gowers [TC], in a fascinating, though somewhat defensive, article,
described these two cultures, lamenting that theory-builders are still
the top dogs, and problem-solvers the underdogs.
Luckily, this is no longer true!
In fact, he himself, a problem-solver by his own avowal,
is a living proof of the respectability of problem-solvers,
having won the Fields medal.
Since computers are more suited for problem-solving than theory-building
(at least for some time to come), I believe that problem-solving will
become more and more important. But, with a twist! It would be a total
waste of time for a human to solve a problem directly. The mainstream
activity of late 21st- and 22nd-century mathematics will be
meta-problem-solving: programming the
computer to solve problems that we humans would never have
a chance to solve by ourselves.
So Gowers's two breeds of mathematicians
will both die out and give place to the
{\bf Mainstream
Culture of Future Mathematics: Programming Computers to do Math.}
\vfill\eject
{\bf Back to 3-Rowed Chomp}
Since we are restricting attention to $3$-rowed Chomp,
it is more convenient, at least for the computer, to
look at the conjugate partition, i.e. the column-lengths,
and write down how many 3's, how many 2's, and how many
1's. So from now on position $(a,b,c)$ will be denoted
by $[c,b-c,a-b]$ if $c>0$. If $c=0$ and $b>0$,
then it will be denoted by $[b,a-b]$, and if $b=c=0$ then
we simply write $[a]$. In order not to confuse these two
notations, we use round parentheses for the former (row lengths)
and reserve square brackets for the latter (column counts). For example,
$(5,4,2)$ is $[2,2,1]$ in the new notation, while
$(4,1)$ is $[1,3]$, and $(6)$ is $[6]$.
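The conversion between the two notations is purely mechanical, as the following Python sketch shows (function names are ad hoc, not those of the Maple package):

```python
def to_brackets(pos):
    """Row notation -> bracket notation: (a, b, c) with a >= b >= c > 0
    becomes [c, b - c, a - b]; omit trailing zero rows, e.g. pass (4, 1)
    rather than (4, 1, 0).  The entries count the columns of each height,
    tallest first."""
    p = list(pos) + [0]
    return [p[i] - p[i + 1] for i in range(len(pos))][::-1]

def from_brackets(br):
    """Inverse conversion: row i's length is the number of columns of
    height at least i, i.e. a partial sum of the bracket entries."""
    k = len(br)
    return tuple(sum(br[:k - i]) for i in range(k))
```

For instance, {\tt to\_brackets((5, 4, 2))} gives {\tt [2, 2, 1]} and {\tt from\_brackets([1, 3])} gives {\tt (4, 1)}, matching the examples above.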
In the sequel
$\alpha$ and $\beta$ will denote general non-negative integers.
Let's look at two-rowed Chomp once again.
In the new notation
the $\P$ positions are $[\alpha,1]$. These entail
that $[\alpha,0]$ and $[\alpha,2+\beta]$ are
always $\N$ positions, and we call them symbolic
winners. We also have the winning-move function
$f([\alpha,0])=[\alpha-1,1]$ and $f([\alpha,2+\beta])=[\alpha,1]$.
In order to formally prove the above statement,
and its analogues for $c=1, 2, \dots$, we could
use induction, as we did in the human ramblings above,
but we can also verify directly that the defining property is satisfied.
The symbolic winners are disjoint from the symbolic losers
(suppose $[\alpha',1]=[\alpha, \beta+2]$; then
$\beta=-1$, a contradiction), and together they cover
the set of all possible positions.
Using generating functions, this is equivalent
to the formal power series
$$
\sum_{[w_1,w_2] \in \P} {x_1}^{w_1} {x_2}^{w_2} +
\sum_{[w_1,w_2] \in \N} x_1^{w_1} {x_2}^{w_2}-
({{1} \over {(1-x_1)(1-x_2)}} -1)
$$
having non-negative coefficients. While I don't have a general
algorithm for deciding this for arbitrary formal power series
(indeed, even for arbitrary rational functions the general
problem is undecidable), the simple rational functions
that show up in this problem all have denominator
$(1-x_1)(1-x_2)$, and then one can use Maple's conversion to
partial fractions, together with an obvious necessary condition
(and I don't care whether or not it is sufficient; it worked fine
for me), to prove that this is indeed the case.
The reader is urged to study the
source-code of the Maple package {\tt Chomp3Rows} to see
how it is done in practice. The novelty is that the computer
does `general reasoning' for {\it infinitely many cases}
by using {\it symbol-crunching} rather than mere {\it number-crunching}.
Another nice thing is that,
for any fixed $c_0$, the computer finds the $\P$ positions
iteratively. Every time it finds a new $\P$ position,
it checks, using generating functions, whether the list is now
complete, i.e. whether all positions with $c \leq c_0$ are taken care of.
If this is not the case, and it has found two consecutive $\P$ positions
$[c_0,\alpha_0,\beta_0]$ and $[c_0,\alpha_0+1,\beta_0]$,
it `conjectures' that $[c_0,\alpha_0+\alpha,\beta_0]$ is a symbolic
loser (representing an infinite sequence of $\P$ positions),
and then tries to prove its conjecture. If it is unable
to prove it, it just keeps on going. A priori, we don't have
a (meta-) proof that it would always succeed, so unlike WZ theory,
this is only a pseudo-algorithm. But once we succeed, we don't have
to worry about its pseudoness:
once the pudding has come out well, let's just enjoy eating it,
and not worry about whether we were justified, on a priori grounds,
in trying out the recipe.
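A toy, purely {\it numeric} version of this conjecture loop can be sketched in Python; it merely detects candidate arithmetic progressions of $\P$ positions for a fixed $c_0$, and does not attempt the symbolic generating-function proof that the Maple package supplies (all names are ad hoc):

```python
from functools import lru_cache

def moves(pos):
    """One-move successors; eating the poisonous square (1,1) is excluded."""
    out = set()
    for i in range(len(pos)):
        for j in range(1, pos[i] + 1):
            if (i, j) != (0, 1):
                new = tuple(min(r, j - 1) if k >= i else r
                            for k, r in enumerate(pos))
                out.add(tuple(r for r in new if r > 0))
    return out

@lru_cache(maxsize=None)
def is_P(pos):
    return all(not is_P(q) for q in moves(pos))

def conjectured_losers(c0, bound):
    """For positions (a, b, c0) with a <= bound, record each P position as
    the bracket pair (b - c0, a - b).  Two P positions whose middle
    coordinates differ by 1 trigger the conjecture of an arithmetic
    progression; this sketch only separates such conjectured bases from
    isolated P positions, and proves nothing."""
    P = {(b - c0, a - b) for a in range(c0, bound + 1)
         for b in range(c0, a + 1) if is_P((a, b, c0))}
    bases = {(m, l) for (m, l) in P
             if (m + 1, l) in P and (m - 1, l) not in P}
    isolated = {(m, l) for (m, l) in P
                if (m + 1, l) not in P and (m - 1, l) not in P}
    return bases, isolated
```

For $c_0=2$ (bound $14$) it reports the single base $(0,2)$, i.e. the conjectured symbolic loser $[2,\alpha,2]$, while for $c_0=1$ it reports only the two isolated losers.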
This is the beauty of human research! It is always a gamble.
If you know beforehand that your ideas are guaranteed to work out, then
it is a routine exercise, not research!
%\vfill\eject
{\bf A User's Manual for the Maple Package Chomp3Rows }
As with all my packages, it is available from
{\tt http://www.math.temple.edu/\Tilde zeilberg/}.
Once you have downloaded it (as {\tt Chomp3Rows}), go into Maple and
type {\tt read Chomp3Rows;}, then follow the instructions
given there. In particular, to get a list of the
{\it main} procedures, type {\tt ezra();}, while
to get a list of {\it all} procedures, type
{\tt ezra1();}. To get help on any particular procedure,
type {\tt ezra(ProcedureName);};
for example, for help with {\tt Losers}, type {\tt ezra(Losers);}.
The main procedure is {\tt Losers(a,k);}. It inputs a symbol,
$a$, and an integer, $k$, and outputs the set of
all symbolic losers $[\alpha,\beta,\gamma]$ (in the
conjugate notation, i.e. the conjugate partition is
$3^\alpha 2^\beta 1^\gamma$), such that $\alpha \leq k$.
For example, if you type {\tt Losers(a,5);} you would get:
{\tt
[[[1, 0, 2], [1, 1, 0]], [[2, a, 2]], [[3, 0, 3], [3, 2, 0], [3, 1, 3]],
[[4, 0, 4], [4, 1, 4], [4, 3, 0], [4, 2, 4]],
[[5, 0, 5], [5, 1, 3], [5, 2 + a, 4]]]}.
This means that the
only losers when $\alpha=1$, are $[1,0,2]$ and $[1,1,0]$,
i.e. $(3,1,1)$ and $(2,2,1)$; the only losers when
$\alpha=2$ are those of the form $[2,a,2]$, ($a \geq 0$),
or in the usual notation, $(4+a,2+a,2)$, etc.
My computer, Shalosh B. Ekhad, was able to go as far
as $k=115$, and, in order to save time, this pre-computed
table is built into the procedure {\tt T115}.
Using the pre-computed data, encoded in
{\tt T115(a,k)}, procedure {\tt PTable(k)} extends
the table given in [WW], p. 599, from $k \leq 26$ to $k \leq 115$.
However, note that [WW]'s table is really implicitly {\it symbolic},
even though it looks {\it numeric}, and even though its authors
may not have been aware of it.
It turns out that whenever the $i^{th}$ column ends with
a $0$, it means that there are only finitely many losers
with $\gamma=i$, but whenever it does not end with a $0$, it always happens
to end with a repeated integer. In reality, as
{\it rigorously} shown by {\tt Chomp3Rows}, these two repeated last entries
really repeat ad infinitum, implying a symbolic loser.
In addition, {\tt PTable(115);} extends this all the way to
$\gamma \leq 115$.
But in order to play Chomp effectively, it is not enough just to know what the
losing positions are. One should also know a {\it winning move}
when one is lucky enough to be at an $\N$ position. This information
is furnished by typing {\tt WINNERS(a,b,k);}.
Here $a$ and $b$ are symbols (letters), and $k$ is a positive integer.
This returns a table $U$ of symbolic winners,
arranged as pairs
{\tt [winner,loser]}, where {\tt winner} is a symbolic winner
and {\tt loser} is the position that the winning move leads to.
For example, typing
{\tt op(WINNERS(a,b,1));} yields
{\tt
table([
1 = $\{$[[1, 1, 1], [1, 0, 2]], [[1, 0, 0], [1]], [[1, a, 1],[a + 1, 1]],
[[1, 2, 0], [1, 0, 2]], [[1, 0, 3 + b], [1, 0, 2]],
[[1, 2 + a, b], [1, 1, 0]], [[1, 1, b + 1], [1, 1, 0]]$\}$
])},
which means, e.g. (the last term), that $[1,1,b+1]$ is a
symbolic winner, and the winning move is to turn it
into $[1,1,0]$.
You can also play ($3$-rowed) Chomp {\it fast} with the
computer, by typing {\tt PlayChomp(POS);},
where {\tt POS} is the initial position.
For example, try {\tt PlayChomp([100000,20000,115]);}.
This would be hopeless for a brute-force, numeric program that
implements the naive algorithm.
But even if you are a {\it numerical} person, and are
only interested in, say, a table of all losers in
a $3 \times 115$ game of Chomp, then, finding and storing the losers
and winners
symbolically is much more efficient than listing explicitly all
${{118} \choose {3}}=266916$ cases. If you are also willing
to start with non-rectangular positions,
with arbitrarily large top two rows, for example
$(10000000,1000000,115)$, then the numeric program would be hopeless.
Very often, clever symbolic computations can save lots of numerical
efforts. Conversely, symbolic computation can learn a lot
from numerics in efficiency and speed. Hence, another challenge for the
future would be to bring together the
\break
{\bf Two Cultures of Computation: Numeric and Symbolic.}
{\bf Future Work}
It would be worthwhile to extend {\tt Chomp3Rows} to $k$-row
Chomp, where the lengths of the last $k-2$ rows are fixed, but
the top two rows are {\it arbitrary}. Another interesting
problem is to find, automatically, the
more refined {\it Sprague-Grundy function},
rather than just $\P$-$\N$ status. Finally, it is hoped
that the present methodology might extend to `perturbation
expansions' of Wythoff's game
(a.k.a. Fibonacci-Nim) and other combinatorial games.
{\bf References}
[Ch] D. Gale, {\it A Curious Nim-type game}, Amer. Math. Monthly
{\bf 81} (1974), 876-879.
[Ma] P. Cartier, {\it Mathemagics (A tribute to L. Euler
and R. Feynman) }, preprint.
[Op36] D. Zeilberger, {\it Opinion 36 of Doron Zeilberger:
Don't ask what can the computer do for Me?, but rather:
What Can I do for the Computer?},
{\tt http://www.math.temple.edu/\Tilde zeilberg/Opinion36.html}.
[Op37] D. Zeilberger, {\it Opinion 37 of Doron Zeilberger:
Programming Computers to Do Math is at least as much fun
as Proving Theorems},
{\tt http://www.math.temple.edu/\Tilde zeilberg/Opinion37.html}.
[SR] D. Zeilberger,
{\it Theorems for a price: Tomorrow's semi-rigorous mathematical culture},
Notices of the Amer. Math. Soc. {\bf 40 \# 8}, 978-981 (Oct. 1993).
Reprinted: Math. Intell. {\bf 16 no. 4}, 11-14 (Fall 1994).
[ST] A. S. Fraenkel, {\it Scenic Trails Ascending from Sea-Level
Nim to Alpine Chess}, in: ``{\it Games of No Chance}'', R. J. Nowakowski,
ed., Cambridge University Press, 1996.
[TC] W. T. Gowers, {\it The Two Cultures of Mathematics},
in: {\it ``Mathematics: Frontiers and Perspectives''}, edited
by V. Arnold et al., Amer. Math. Soc., Providence, 2000.
[WW] E. R. Berlekamp, J. H. Conway, and R. K. Guy,
{\it ``Winning Ways for Your Mathematical Plays''},
Academic Press, London, 1982.
\bye