%byrnes.tex: Chomp, Recurrences, and Chaos(?)
%
%
%a Plain TeX file by Doron Zeilberger (13 pages)
%begin macros
\def\L{{\cal L}}
\baselineskip=14pt
\parskip=10pt
\def\Tilde{\char126\relax}
\def\halmos{\hbox{\vrule height0.15cm width0.01cm\vbox{\hrule height
0.01cm width0.2cm \vskip0.15cm \hrule height 0.01cm width0.2cm}\vrule
height0.15cm width 0.01cm}}
\font\eightrm=cmr8 \font\sixrm=cmr6
\font\eighttt=cmtt8
\magnification=\magstephalf
\def\P{{\cal P}}
\def\Q{{\cal Q}}
\def\W{{\cal W}}
\def\V{{\cal V}}
\parindent=0pt
\overfullrule=0in
%end macros
\bf
\centerline
{
CHOMP, RECURRENCES, and CHAOS(?)
}
\rm
\bigskip
\centerline{ {\it
Doron ZEILBERGER
}\footnote{$^1$}
{\eightrm \raggedright
Department of Mathematics, Rutgers University (New Brunswick),
Hill Center-Busch Campus, 110 Frelinghuysen Rd., Piscataway,
NJ 08854-8019, USA.
%\break
{\eighttt zeilberg@math.rutgers.edu} \hfill \break
{\eighttt http://www.math.rutgers.edu/\Tilde zeilberg/ .}
First version: June 3, 2003. This version: June 9, 2003.
Accompanied by the Maple package {\eighttt BYRNES} available from
{\eighttt http://www.math.rutgers.edu/\Tilde zeilberg/programs.html} .
Supported in part by the NSF.
}
}
{\bf \qquad\qquad\qquad\qquad
Dedicated to Saber Elaydi on his 60th Birthday}
{\bf Abstract.}
In this article, dedicated with admiration and friendship to
Chaos and Difference (and hence Recurrence) Equations
Guru Saber Elaydi,
I give a new approach and a new algorithm for
Chomp, David Gale's celebrated combinatorial game.
This work is inspired by Xinyu Sun's
``ultimate-periodicity" conjecture and by its brilliant proof by
high-school-student Steven Byrnes.
The algorithm is implemented in a Maple package
{\tt BYRNES} accompanying this article. By looking at the output,
and inspired by previous work of Andries Brouwer, I speculate that
Chomp is Chaotic, in a yet-to-be-made-precise sense, because
the losing positions are given by ``weird'' recurrences.
{\bf Saber Elaydi}
When Gerry Ladas asked me to write an article dedicated
to Saber Elaydi, I hesitated, since
while we both work on {\it difference equations}, our research
interests (but not our mathematical philosophies!)
are almost diametrically opposite. In particular,
the subject of my current research,
{\it Combinatorial Games}, seemed superficially to be
far removed from Saber's research interests.
On further thought, however, I realized
that this current research of mine is not as far from Saber,
after all, since
{\it recurrences} feature in it prominently.
But {\it recurrence equations} are almost synonymous with
{\it difference equations} (see the next section), and
Saber wrote {\it the} {\bf Book}[E1] on this subject, a modern
classic that I, and my students, thoroughly enjoyed.
As I delved even deeper into my research, I also realized that
{\bf chaos}, or something like it, also arises naturally,
and Saber is also a Chaos guru (cf. [E2]).
So even though the
{\it primary} ``subject classification" of this article
is ``Combinatorial Games", the {\it secondary} ones are
``Recurrence equations'' and ``Chaos'', so
I hope that Saber will find this interesting.
Happy 60th Birthday, Saber!
{\bf Difference vs. Recurrence Equations}
In most contexts, the notions of {\it difference equation} and
of {\it recurrence equation} are identical, and the choice of
which of them to use is merely cultural, the former preferred by
numerical analysts and the latter by combinatorialists and number
theorists.
Indeed a generic form for an $r^{th}$-order difference equation is
$$
P(n, f(n), \Delta f(n), \Delta^2 f(n), \dots, \Delta^r f(n)) \equiv 0 \quad,
$$
while a generic form for an $r^{th}$-order recurrence equation is
$$
Q(n, f(n), Ef(n), E^2f(n), \dots, E^rf(n) ) \equiv 0 \quad ,
$$
where, as usual, $\Delta f(n):=f(n+1)-f(n)$ and
$Ef(n):=f(n+1)$. Since $\Delta=E-1$ and $E=\Delta+1$, we can
write, using the binomial theorem,
$\Delta^kf(n)=(E-1)^kf(n)$ as a linear combination of $E^if(n)$'s and
$E^kf(n)=(\Delta+1)^kf(n)$ as a linear combination of $\Delta^if(n)$'s,
thereby going from one form to the other.
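The conversion between the two forms is easy to check numerically. The following is an illustrative Python sketch (the sample function $f(n)=n^3$ and all helper names are ours, not part of the paper):

```python
from math import comb

# Delta f(n) = f(n+1) - f(n);  E f(n) = f(n+1)
def delta(f):
    return lambda n: f(n + 1) - f(n)

def delta_k_iterated(f, k, n):
    """Delta^k f(n), by applying Delta k times."""
    g = f
    for _ in range(k):
        g = delta(g)
    return g(n)

def delta_k_binomial(f, k, n):
    """Delta^k f(n), via (E-1)^k = sum_i binom(k,i)(-1)^(k-i) E^i."""
    return sum((-1) ** (k - i) * comb(k, i) * f(n + i)
               for i in range(k + 1))

f = lambda n: n ** 3
# both routes give Delta^2 f(5) = 36, and Delta^3 f(n) = 6 for all n
```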
This equivalence is still true if the {\it order} of the difference
(alias recurrence) equation is infinite (i.e. to compute
$f(n)$ we need {\it all} the previous values
$f(n-1), f(n-2), \dots, f(1), f(0)$), but
only if the function $P$ is ``algebraic'' or ``analytic'' in some sense.
In this article we will encounter {\it weird recurrences} that
in addition to using {\it all} the previous values, also feature
the function {\it mex}, which inputs a set of integers and outputs a
non-negative integer.
{\bf Definition of mex}: Given a set of non-negative integers
$S$, $mex(S)$ is the {\it smallest} non-negative integer
that does not belong to $S$.
For example, $mex(\{0,1,2,5,7\})=3$, and $mex(\{2,3,5,7\})=0$.
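In executable form (an illustrative Python sketch, not part of the Maple package {\tt BYRNES}):

```python
def mex(s):
    """The smallest non-negative integer that does not belong to s."""
    m = 0
    while m in s:
        m += 1
    return m

# the two examples above: mex({0,1,2,5,7}) = 3 and mex({2,3,5,7}) = 0
```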
{\bf Beware of Sequences Defined by mex}
{\bf Exercise:} Consider the sequence defined by the recurrence
$$
a_i:=mex(\{0,1\} \cup \{ j a_r \,\, ; \,\, j \geq 1 \,\, , \,\,
1 \leq r < i \}) \quad .
$$
Compute the first few terms, guess a pattern, and then try to decide
whether your guess is really correct.
{\bf Chomp, and Three-Rowed Chomp}
Chomp, David Gale's celebrated combinatorial game, is played on an
$m \times n$ chocolate bar whose bottom-left square is poisoned.
A move consists of picking a square and eating it together with
all the squares that lie above it and to its right, and the player
who is forced to eat the poisoned square loses.
At every stage the surviving squares form a staircase-shaped region,
so a position in $3$-rowed Chomp may be coded as $[c,a,b]$,
where the three rows consist of (from bottom to top)
$c+a+b$, $c+a$, and $c$ squares.
Call a position a {\it loser} if the player who is about to move
from it is bound to lose (against best play), and a {\it winner}
otherwise.
{\bf Little Problem}: Given a (possibly large) integer $C$,
determine, {\bf fast}, all the losers $[c,a,b]$ with $c \leq C$.
For a position $[c,a,b]$, let $g([c,a,b])$ be the set of positions
from which $[c,a,b]$ can be reached in one move.
If $a>0$ and $b>0$ then
$$
g([c,a,b])=
\{ [c,a,b+x] \,\, ; \,\, x>0 \}
\,\, \bigcup \,\,
\{ [c,a+x,b-x] \,\, ; \,\, 0<x \leq b \}
\,\, \bigcup \,\,
\{ [c+x,a-x,b] \,\, ; \,\, 0<x \leq a \} \quad .
$$
If $a>0$ and $b=0$ then
$$
g([c,a,0])=
\{ [c,a+x,y] \,\, ; \,\, x \geq 0, y \geq 0, x+y>0 \}
\,\, \bigcup \,\,
\{ [c+x,a-x,0] \,\, ; \,\, 0<x \leq a \} \quad .
$$
If $a=0$ and $b>0$ then
$$
g([c,0,b])=
\{ [c+x,y,b-x-y] \,\, ; \,\, x \geq 0, y \geq 0, 0<x+y \leq b \}
\,\, \bigcup \,\,
\{ [c,0,b+x] \,\, ; \,\, x>0 \} \quad .
$$
Since every member of $g([c,a,b])$ has a move to $[c,a,b]$, we get
the following three rules.
{\bf Rule 1}: If $a>0$, $b>0$, and $[c,a,b]$ is a loser then the
following are {\it guaranteed} winners
$$
\{ [c,a,b+x] \,\, ; \,\, x>0 \} \quad ,
\eqno(1.1)
$$
$$
\{[c,a+x,b-x] \,\, ; \,\, 0<x \leq b \} \quad ,
\eqno(1.2)
$$
$$
\{[c+x,a-x,b] \,\, ; \,\, 0<x \leq a \} \quad .
\eqno(1.3)
$$
{\bf Rule 2}: If $a>0$ and $[c,a,0]$ is a loser then the
following are {\it guaranteed} winners
$$
\{ [c,a+x,y] \,\, ; \,\, x \geq 0 \, \, , \, \, y \geq 0
\,\, , \, \, x+y>0 \} \quad ,
\eqno(2.1)
$$
$$
\{ [c+x,a-x,0] \,\, ; \,\, 0<x \leq a \} \quad .
\eqno(2.2)
$$
{\bf Rule 3}: If $b>0$ and $[c,0,b]$ is a loser then the
following are {\it guaranteed} winners
$$
\{ [c+x,y,b-x-y] \,\, ; \,\, x \geq 0 \,\, , \,\, y \geq 0
\,\, , \,\, 0<x+y \leq b \} \quad .
\eqno(3.1)
$$
{\bf 2-Rowed Chomp}
A position in $2$-rowed Chomp may likewise be coded as $[a,b]$,
where the bottom row consists of $a+b$ squares and the top row of
$a$ squares. Specializing the above (or arguing directly):
if $[a,b]$ is a loser and $b>0$, then
$$ [a,b+x] \quad,\quad 1 \leq x < \infty \quad ,
\eqno(ImW1)
$$
$$
[a+x,b-x] \quad ,\quad 1 \leq x \leq b \quad ,
\eqno(ImW2)
$$
are all winners.
If $[a,0]$ is a loser then
$$
[a+x,y] \quad, \quad 0 \leq x,y < \infty \quad , \quad x+y>0 \quad ,
\eqno(ImW3)
$$
are all winners.
Let's try to find the set of losers in $2$-rowed Chomp.
We already know that $[0,1]$ is a loser, hence
by $(ImW1)$, $[0,x]$, $x>1$ are winners,
and so is $[1,0]$, by $(ImW2)$.
Crossing out $[0,1]$ and all its implied winners, the minimal
point is $[1,1]$ that must be a loser since it only leads to
previously established winners. Now $[1,1]$'s implied winners
are $[1,x]$, $x>1$, and $[2,0]$, and hence $[2,1]$ is
a loser. By induction, if $[a,1]$ is a loser, then $[a,x]$, $x>1$
and $[a+1,0]$ are winners, and hence the minimal uncovered
point, $[a+1,1]$, is a loser, since it only leads to $[a+1,0]$
$[a+1-x,x+1]$, ($x \geq 1)$, and $[x,0]$ ($x \leq a$), which are
all previously established winners. It follows that
we have the ``theorem" that the set of losing positions in
$2$-rowed Chomp is $ \{ [a,1] \,\, ; \,\, a \geq 0 \}$.
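This ``theorem'' is easily double-checked by brute-force game search. Here is an illustrative Python sketch (not the paper's Maple code); a state is the pair (bottom, top) of row lengths, with the poisoned square at the bottom left:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(bottom, top):
    """True iff the player to move from (bottom, top) can force a win."""
    moves = []
    # chomp the bottom row at column j >= 2 (column 1 holds the poison);
    # everything to the right, in both rows, disappears
    for j in range(2, bottom + 1):
        moves.append((j - 1, min(top, j - 1)))
    # chomp the top row at column j >= 1
    for j in range(1, top + 1):
        moves.append((bottom, j - 1))
    # a winner has a move to a loser; with no moves at all we are
    # forced to eat the poisoned square
    return any(not wins(*m) for m in moves)
```

For bottom row up to $8$ the search finds exactly the losers with bottom $=$ top $+1$, i.e. the positions $[a,1]$, in agreement with the ``theorem''.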
{\bf 2-Rowed Chomp with Instant Winners}
Now consider a slight generalization. The
positions and legal moves
are the same, {\bf but}, {\it by fiat},
the members of a certain
(finite or infinite) set of points are designated
{\it instant winners}, and if it is your turn to move, and
the counter is on that point, you are declared the winner.
Let's describe this set (of points $[a,b]$)
as a sequence of sets (of integers),
$I_a$ ($0 \leq a < \infty$), where $I_a$ is the set of
$b$ such that $[a,b]$ is an instant winner.
If the set of instant winners is finite then there is
an $a_0$ such that $I_a=\emptyset$ for $a \geq a_0$; in other words,
$I_a$ is eventually the empty set.
Let's try to figure out how to determine the set of losers,
in this, more general, game.
Because of $(ImW1)$ it follows that for any
given $a$ there is at most one $b$ such that $[a,b]$ is a loser.
Let's denote it by $L_a$, if it exists.
Because of $(ImW3)$, it follows that if $[a_0,0]$ is a loser then
there are no losers with $a>a_0$. In this case there are only
finitely many losers.
The problem of finding the set of losers is equivalent to
determining the sequence
of integers $\{L_a\}_{a=0}^{\infty}$.
Fix an integer $a$, and
suppose that we already know
$L_i$ for $i<a$ (whenever they exist). The candidates $b$ that are
ruled out are the instant winners ($b \in I_a$) and the implied
winners of the previous losers ($b=L_{a-i}-i$ for some
$1 \leq i \leq a$, by $(ImW2)$); the smallest surviving candidate
is a loser, since all of its moves lead to previously established
winners. Hence
$$
L_a= mex(I_a \cup \{ L_{a-1}-1, L_{a-2}-2, \dots, L_0 -a \}) \quad .
\eqno(FundamentalRecurrence)
$$
Note that negative elements do not affect the $mex$, and recall that
$mex(\emptyset)=0$.
If and when $L_{a_0}=0$, then the sequence
$\{ L_a\}_{a=0}^{\infty}$ terminates at $a=a_0$.
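In code the recurrence reads as follows (an illustrative Python sketch, not the Maple package {\tt BYRNES}; the argument {\tt I} is a function returning the finite set $I_a$):

```python
def losers_from_instant_winners(I, n):
    """Compute L_0, L_1, ... (at most n terms) from the instant winners.

    Implements L_a = mex(I(a) | {L_{a-1}-1, ..., L_0-a}); negative
    elements never disturb the mex.  Stops as soon as some L_a = 0,
    since then there are no further losers."""
    L = []
    for a in range(n):
        ruled_out = set(I(a)) | {L[a - i] - i for i in range(1, a + 1)}
        m = 0
        while m in ruled_out:
            m += 1
        L.append(m)
        if m == 0:
            break
    return L
```

Feeding it the instant winners of the example below reproduces $L_a=4,2,3,3,3,\dots$; feeding it $I_a=\{0\}$ for all $a$ (which, one may check, encodes plain $2$-rowed Chomp) reproduces the all-ones sequence of losers $[a,1]$.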
{\bf Example}: Suppose that
the set of instant winners consists of $[0,0],[0,1],[0,2],[0,3]$,
$[1,0],[1,1],[1,5]$, and $[a,0],[a,1]$
($a>1$). In other words
$I_0=\{0,1,2,3\}, I_1=\{0,1,5\} \,\, , \,\, I_a=\{0,1\} (a \geq 2)$.
We have:
$$
L_0=mex(I_0)=mex(\{ 0,1,2,3\})=4 \quad,
$$
$$
L_1=mex(I_1 \cup \{L_0-1\} )=mex(\{ 0,1,3,5\})=2 \quad,
$$
$$
L_2=mex(I_2 \cup \{L_1-1, L_0-2 \} )=mex(\{ 0,1,2\})=3 \quad,
$$
$$
L_3=mex(I_3 \cup \{L_2-1, L_1-2, L_0-3 \} )=mex(\{ 0,1,2\})=3 \quad,
$$
$$
L_4=mex(I_4 \cup \{L_3-1, L_2-2, L_1-3, L_0-4 \} )=mex(\{ 0,1,2\})=3 \quad.
$$
Now we can already {\it guess} a pattern, $L_a=3$ for $a \geq 2$.
Let's try and prove it by induction using $(FundamentalRecurrence)$.
We have just established it for $a=2,3,4$.
For $a \geq 5$ we have, by the inductive hypothesis, that
the set of non-negative integers in
$$
I_a \cup \{ L_{a-1}-1, L_{a-2}-2, \dots, L_0 -a \}
$$
is really finite, namely
$$
I_a \cup \{ 3-1, 3-2, 3-3 \}= \{0,1,2\}
$$
and its $mex$ equals $3$. \halmos
{\bf The Ultimate-Periodicity Phenomenon}
Now assume that the sequence of sets $I_a$ describing the
instant winners is not arbitrary, but is
{\it ultimately-periodic}. In other words,
starting at a certain place $a_0$, there is a (minimal) period
$p$ such that $I_a=I_{a+p}$ for $a>a_0$.
Now we are on more secure grounds. Ultimately-Periodic Sequences
are {\it finite} objects, and hence meaningful. To describe such
a sequence all we have to do is specify the non-periodic head
$I_0, \dots, I_{a_0}$ followed by the period
$I_{a_0+1},I_{a_0+2}, \dots , I_{a_0+p}$ that keeps repeating.
{\bf Two Immediate Consequences of} $(FundamentalRecurrence)$
{\bf Lemma Bounded}: If the sets $I_a$ are (uniformly) bounded,
with $M-1$ an upper bound (i.e. $max(I_a) \leq M-1$ for all $a \geq 0$),
then $L_a \leq M$ for all $a \geq 0$.
{\bf Proof}: This follows immediately from the fact that
$mex(S) \leq max(S)+1$, and induction on $a$.
{\bf Ultimate-Periodicity Theorem}: If $I_a$ is ultimately-periodic
then the sequence $L_a$ either terminates (with the last value
being $0$), or else is ultimately-periodic.
{\bf Proof:} Since $I_a$ is ultimately-periodic, only finitely many
distinct sets occur among the $I_a$, so their elements are uniformly
bounded; let $M-1$ be the least upper bound. By Lemma {\bf Bounded},
$L_a \leq M$. Since negative integers in a set do not
affect its $mex$ (recall that $mex(S)$ is the smallest
{\it non-negative} integer in the complement of $S$),
the hitherto ``infinite memory'' recurrence
$(FundamentalRecurrence)$, where to know the value of
$L_a$ you have to remember {\it all} your past, now
becomes a ``finite memory'' recurrence, where you only
have to remember what happened in the last $M$ days.
$$
L_a= mex(I_a \cup \{ L_{a-1}-1, L_{a-2}-2, \dots, L_{a-M}-M \}) \quad.
\eqno(BoundedFundamentalRecurrence)
$$
Introducing the `states'
$$
S_a:=(I_a \,\,;\,\,
L_{a-1},L_{a-2}, \dots, L_{a-M} )
$$
$(BoundedFundamentalRecurrence)$ induces a well-defined function
$F$ with $S_{a+1}=F(S_a)$.
Since $L_a$ is bounded, and $I_a$ is ultimately-periodic,
it follows that there are only finitely many states.
By the venerable {\bf Pigeon-Hole} Principle, sooner or later
we must visit a previously-visited `state', i.e. there exists
an $a_0$ and a $q$ such that $S_{a_0}=S_{a_0+q}$.
But once that happens, everything repeats itself with
period $q$, and
$S_{a_0}=S_{a_0+iq}$ for all $i>0$. In particular,
$L_{a+q}=L_a$ for all $a \geq a_0 +1$. \halmos
{\bf A Posteriori Justification}
How to turn this into an algorithm? The `theoretical' upper bound
for the period is enormous, but is hardly ever (and perhaps never)
achieved. Once an ultimately-periodic sequence of sets $I_a$ is given
in the form
$$
[I_0,I_1, \dots, I_{b_0}][I_{b_0+1},I_{b_0+2}, \dots,
I_{b_0+p}]^{\infty} \quad,
$$
just keep computing $L_a$ using
$(BoundedFundamentalRecurrence)$.
Suppose that your computer detects that, after a certain
place $a_0$, the same segment $L_{a_0+1},L_{a_0+2},\dots, L_{a_0+q}$
keeps repeating (say $10$ times).
Then the computer is justified in {\it guessing} that
$\{L_a\}_{a=0}^{\infty}$ equals
$$
[L_0,L_1, \dots, L_{a_0}][L_{a_0+1},L_{a_0+2}, \dots, L_{a_0+q}]^{\infty}
\quad .
$$
In order to {\it prove} this conjecture you only have
to check it for the {\bf finite} number of cases
$0 \leq a \leq max(a_0,b_0)+M+lcm(p,q)$, since later on things
start to `repeat themselves'.
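The detection step (``the same segment keeps repeating, say ten times'') can be sketched as follows (illustrative Python; the policy parameters are ours):

```python
def guess_ultimate_period(seq, repeats=10):
    """Guess (head, period) with seq = head + period repeated forever.

    The candidate period must repeat `repeats` times at the end of the
    given finite segment; returns None when no candidate is found."""
    n = len(seq)
    for q in range(1, n // repeats + 1):
        tail = seq[n - repeats * q:]
        if all(tail[i] == tail[i + q] for i in range(len(tail) - q)):
            # push the start of the periodic part as far back as it goes
            a0 = n - repeats * q
            while a0 > 0 and seq[a0 - 1] == seq[a0 - 1 + q]:
                a0 -= 1
            return seq[:a0], seq[a0:a0 + q]
    return None
```

On $4,2,3,3,3,\dots$ it guesses head $[4,2]$ and period $[3]$.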
{\bf Analogy Example 1}: Guess the decimal representation of
$1/3$, and then prove it rigorously.
{\bf Solution:}
Compute $1/3$ to ten-decimal-digits accuracy, guess
that $1/3=0.3333333333\dots$, and then prove it
rigorously by summing an infinite geometric series:
$$
3 \sum_{i=1}^{\infty} (1/10)^i = 3 {{1} \over {10-1}}={{1} \over {3}}
\quad . \halmos
$$
{\bf Analogy Example 2}: Guess the
continued-fraction representation of $\sqrt{7}$,
and then prove it rigorously.
{\bf Solution:} Using Maple, or, if you wish, pencil-and-paper,
compute {\tt convert(evalf(sqrt(7)),confrac);}, getting
{\tt [2,1,1,1,4,1,1,1,4,1,1,1,4,1,1,1,4]}. Now {\it conjecture}
that $\sqrt{7}=[2,(1,1,1,4)^{\infty}]$. Let's call the
right side $x$. Then $x=2+1/y$, where
$y=[(1,1,1,4)^{\infty}]$. This means
$y=1+1/(1+1/(1+1/(4+1/y)))$, from which you can get a quadratic equation
satisfied by $y$, that implies the quadratic equation satisfied by $x$,
that turns out to be $x^2-7=0$. \halmos
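The conjectured expansion is also easy to confirm numerically, by evaluating a deep truncation of the periodic continued fraction (illustrative Python):

```python
import math

def cf_value(head, period, copies):
    """Evaluate the continued fraction [head, period^copies]
    from the bottom up."""
    terms = head + period * copies
    x = float(terms[-1])
    for t in reversed(terms[:-1]):
        x = t + 1.0 / x
    return x

# twelve copies of (1,1,1,4) already agree with sqrt(7) to more
# than nine decimal places
approx = cf_value([2], [1, 1, 1, 4], 12)
```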
Note that the above algorithm for proving the conjectured
ultimately-periodic sequence $L_a$ from the input
ultimately-periodic sequence of Instant Winners $I_a$ is
perfectly valid even if we did not have the {\it a priori}
assurance that it is
{\it always} guaranteed to work. The Ultimate-Periodicity
Theorem {\it guarantees} that we are bound to succeed in the end,
even though some of our initial guesses may prove to be wrong.
This is also true in the two elementary `analogy examples' above.
A well-known, elementary, and very easy theorem
(that uses the pigeon-hole principle!)
asserts
that any rational number has either a terminating
or an ultimately-periodic decimal expansion. This guarantees
that the `empirical algorithm' is going to work for any
{\it rational number}.
Analogously for Example 2. A theorem of
Lagrange
states that any
quadratic irrationality has an ultimately-periodic continued fraction,
and the proof also uses recurrences and the pigeon-hole principle.
Hence we have an {\it a priori} guarantee
that this will work for the square-root of {\it any}
(non-perfect-square) integer.
{\bf Back to Three-Rowed Chomp}
Recall our {\bf Little Problem} of determining, {\bf fast},
all the losers $[c,a,b]$ for $c \leq C$, where $C$ is a fixed,
given integer.
Because of the first part of $(Rule1)$, it follows
that for any given $c$ and $a$, there is
{\it at most} one $b$ such that $[c,a,b]$ is a losing position.
Let's define $B_c(a)$ to be that $b$ (if it exists, otherwise it is
undefined).
It follows that knowledge of the set of losers in 3-rowed Chomp
is the same as {\it knowing} the sequence of sequences
$B_c$, and knowing it for $c \leq C$ is the same as knowing its
first $C+1$ terms.
We already know $B_0$! This is just the
case of 2-rowed Chomp, and we found that $B_0=1^{\infty}$.
Suppose that we already know $B_{c'}$ for all $c' < c$.
The losers with first component $c' < c$ give rise, via the
implied winners whose first component is bigger (the sets of the
form $[c'+x, \dots]$ above), to a sequence of sets of
{\it instant winners} for the positions $[c,a,b]$, and this
sequence of sets is ultimately-periodic whenever the $B_{c'}$ are.
But finding the losers $[c,a,b]$, for this fixed $c$, is now
precisely a game of $2$-rowed Chomp with instant winners!
Hence $B_c$ can be computed, and guessed-then-proved to be
ultimately-periodic, exactly as above, and we can proceed,
row by row, all the way up to $c=C$. This is the algorithm
implemented in the Maple package {\tt BYRNES}.
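Finally, everything above can be sanity-checked by sheer brute force. The following illustrative Python sketch (hopelessly slow compared to the recurrence approach, and of course not the package {\tt BYRNES}) computes losers of $3$-rowed Chomp directly from the rules of the game, assuming the coding in which $[c,a,b]$ has rows of $c+a+b$, $c+a$, and $c$ squares:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins3(r1, r2, r3):
    """True iff the mover wins from row lengths r1 >= r2 >= r3."""
    moves = []
    for j in range(2, r1 + 1):      # bottom row; column 1 is poisoned
        moves.append((j - 1, min(r2, j - 1), min(r3, j - 1)))
    for j in range(1, r2 + 1):      # middle row
        moves.append((r1, j - 1, min(r3, j - 1)))
    for j in range(1, r3 + 1):      # top row
        moves.append((r1, r2, j - 1))
    return any(not wins3(*m) for m in moves)

def B(c, a, bmax=30):
    """The smallest b <= bmax making [c,a,b] a loser, else None."""
    for b in range(bmax + 1):
        if c + a + b == 0:
            continue                # the empty board is not a position
        if not wins3(c + a + b, c + a, c):
            return b
    return None
```

For $c=0$ this reproduces $B_0=1^{\infty}$ on any range one cares to search.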