.S D +4
.EQ
delim $$
.EN
.nr Pi 10
.nr Pt 1
\fBA Constant Term Identity Featuring The Ubiquitous (And Mysterious)
Andrews-Mills-Robbins-Rumsey Numbers 1,2,7,42,429, ... \fR
.SP1
.LP
\fIDoron Zeilberger*\fR
.FS*
Department of Mathematics, Temple University,
Philadelphia, PA 19122. Supported in part by NSF grant DMS-8800663.
.FE
.SP1
.DS
\fIIf $A=B$ then $B=A$\fR
(Axiom satisfied by the equality relation)
.DE
.SP1
\fBAbstract:\fR George Andrews's recent proof of the
Mills-Robbins-Rumsey conjectured formula for the number of
totally symmetric self-complementary plane partitions
is used to derive a new multi-variate constant
term identity, reminiscent of, but not implied by, Macdonald's
$BC sub n$-Dyson
identity. The method of proof consists in translating to the
language of constant terms an expression of Doran for the
desired number in terms of sums of minors of a certain matrix.
The question of a direct proof of the identity, which would furnish
an alternative proof of the Mills-Robbins-Rumsey conjecture,
is raised, and a prize is offered for its solution.
.SP1
\fB0. Prologue\fR
.P
Sometimes,
it may occur to mathematician \fBX\fR, in his attempt at proving
a conjectured equality $A=B$, to introduce
another quantity $C$, and to attempt to prove the two lemmas
$A=C$ and $C=B$. The original conjecture $A=B$ would then follow by
the transitivity of the $=$ relation.
Alas, it might happen that, after the successful completion
by \fBX\fR of the first part of his program, but before finishing
the second part, the conjecture $A=B$ is proved by his rival \fBY\fR
by a completely different method. Should \fBX\fR let thousands of hours,
fifty yellow pads, and ten ball-point pens go unrecorded in the archival
literature? Certainly not! All that \fBX\fR has to do is
promote the equality $C=B$ from the status of lemma to that
of theorem and observe that its proof follows immediately
from his own lemma $A=C$, \fBY\fR's theorem $A=B$, and the
symmetry and transitivity of the equality relation. To be on the safe
side, \fBX\fR should argue also that in addition to the intrinsic interest
of $C=B$, the \fImethod of proof\fR of the lemma $A=C$ is interesting,
and might lead to the proof of other conjectures.
.P
The above scenario happened with the following specialization.
.SP1
\fBX\fR= Your faithful servant.
.SP1
\fBY\fR= George Andrews.
.SP1
A:= The number of totally symmetric self-complementary plane partitions
(TSSCPP) whose 3D Ferrers diagrams fit inside $[0,2n] sup 3$.
.SP1
.EQ
B:=~prod from i=0 to n-1 (3i+1)! over (n+i)! ~~~,
.EN
.SP1
the Andrews-Mills-Robbins-Rumsey sequence {1,2,7,42,429, ... }
[A3],[MRR1],[MRR2],[Ro1].
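.SP1
The product formula for $B$ lends itself to an exact machine check in
rational arithmetic; the following Python sketch (the function name is
mine) reproduces the first few terms of the sequence:

```python
from fractions import Fraction
from math import factorial

def amrr(n):
    """B(n) = prod_{i=0}^{n-1} (3i+1)!/(n+i)!, computed exactly."""
    p = Fraction(1)
    for i in range(n):
        p *= Fraction(factorial(3 * i + 1), factorial(n + i))
    assert p.denominator == 1  # the product is always an integer
    return p.numerator

print([amrr(n) for n in range(1, 6)])  # [1, 2, 7, 42, 429]
```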
.SP1
.EQ
C:= CT~ prod from {1<= i
.EN
.SP1
\fB5.\fR Here we substituted $x sub i -> x sub i sup -1$,
$i= 1, ... , n $, which obviously does not change the constant
term. We also changed the summation range from
{$ 0 < j sub 1 < ... < j sub n < inf $} to
{$ 0 <= j sub 1 <= ... <= j sub n < inf $}, which doesn't change
anything, since the extra monomials all have at least two of
their exponents equal, and hence their contribution to the constant term
is zero, thanks to the anti-symmetry of the kernel.
.SP1
\fB6.\fR Here we used
.SP1
.EQ
sum from {0 <= j sub 1 <= j sub 2 <= ... <= j sub n < inf }
x sub 1 sup {j sub 1 } ... x sub n sup {j sub n} ~=~
(1- x sub n ) sup -1 (1- x sub n x sub n-1 ) sup -1 ...
(1- x sub n x sub n-1 ... x sub 1 ) sup -1 ~~.
.EN
.SP1
For $n=1$ this is just the sum of an infinite geometric series.
Assuming this is true for $n-1$ variables,
$x sub 2 , ... , x sub n$, we have
.SP1
.EQ
sum from {0 <= j sub 1 <= j sub 2 <= ... <= j sub n < inf }
{x sub 1} sup { j sub 1} ... {x sub n} sup { j sub n}~=~
.EN
.SP1
.EQ
[ sum from { j sub 1 =0} to inf
{( x sub 1 ... x sub n )} sup { j sub 1 } ] cdot
[ sum from { 0<= j sub 2 - j sub 1 <= ... <= j sub n - j sub 1 < inf }
x sub 2 sup {j sub 2 - j sub 1 } ... x sub n sup {j sub n - j sub 1 } ]
.EN
.SP1
.EQ
~=~
(1- x sub 1 ... x sub n ) sup -1
(1- x sub 2 ... x sub n ) sup -1
... (1- x sub n ) sup -1 ~~~.
.EN
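.SP1
The $n$-fold geometric sum can be spot-checked numerically by truncating
the summation range; a Python sketch (helper names mine):

```python
from itertools import product

def lhs(xs, jmax):
    # truncated sum over 0 <= j_1 <= ... <= j_n <= jmax of x_1^{j_1} ... x_n^{j_n}
    total = 0.0
    for js in product(range(jmax + 1), repeat=len(xs)):
        if all(a <= b for a, b in zip(js, js[1:])):
            term = 1.0
            for x, j in zip(xs, js):
                term *= x ** j
            total += term
    return total

def rhs(xs):
    # (1 - x_n)^{-1} (1 - x_n x_{n-1})^{-1} ... (1 - x_n ... x_1)^{-1}
    val, tail = 1.0, 1.0
    for x in reversed(xs):
        tail *= x
        val /= 1.0 - tail
    return val
```

For $|x sub i|$ well below 1 the truncation error is negligible and the
two sides agree to machine precision.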
.SP1
\fB7.\fR Here we "averaged" over all the images under the symmetric group,
noting that the constant term is not affected. More explicitly,
for any Laurent series $f( x sub 1 , ... , x sub n )$ let
the symmetric group act naturally by:
.SP1
.EQ
pi (f) ( x sub 1 , ... , x sub n ) =
f ( x sub { pi (1)} , ... , x sub { pi (n)} )~~,
.EN
.SP1
and let
.SP1
.EQ
f sup # = 1 over n! sum from { pi epsilon S sub n }
pi (f)~~,
.EN
.SP1
then $CT(f) = CT ( f sup # )$. We also used the obvious fact that
if $f sup &$ denotes the "anti-symmetrizer":
.SP1
.EQ
f sup & = 1 over n! sum from { pi epsilon S sub n} sgn ( pi ) pi (f)~~,
.EN
.SP1
then, for anti-symmetric $g$,
.SP1
.EQ
(gf) sup # = g ( f sup & )~~~.
.EN
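.SP1
This symmetrizer fact can be spot-checked at a random point; in the
Python sketch below (helper names mine) $g$ is the anti-symmetric
Vandermonde product and $f$ is an arbitrary test function:

```python
from itertools import permutations
from math import factorial

def parity(p):
    # sign of a permutation (tuple of 0-based indices), by counting inversions
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

def sym(f, xs):
    # f^#: average of f over all permutations of the variables
    n = len(xs)
    return sum(f(tuple(xs[i] for i in p))
               for p in permutations(range(n))) / factorial(n)

def antisym(f, xs):
    # f^&: signed average of f over all permutations of the variables
    n = len(xs)
    return sum(parity(p) * f(tuple(xs[i] for i in p))
               for p in permutations(range(n))) / factorial(n)

def g(v):
    # Vandermonde product prod_{i<j} (v_j - v_i): anti-symmetric
    out = 1.0
    for i in range(len(v)):
        for j in range(i + 1, len(v)):
            out *= v[j] - v[i]
    return out

def f(v):
    # an arbitrary non-symmetric test function
    return v[0] ** 2 + 3.0 * v[1] + v[0] * v[2]

xs = (0.3, 1.7, -0.5)
# (g f)^# = g * (f^&) since g is anti-symmetric
print(abs(sym(lambda v: g(v) * f(v), xs) - g(xs) * antisym(f, xs)))
```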
.SP1
\fB8.\fR Here we used the identity:
.SP1
.EQ (L)
sum from { pi epsilon S sub n} ~sgn ( pi )~
pi [ (1- x sub n ) sup -1 (1- x sub n x sub n-1 ) sup -1 ...
(1- x sub n x sub n-1 ... x sub 1 ) sup -1 ]~=~
.EN
.SP1
.EQ
{ prod from { 1 <= i < j <= n } ( x sub j - x sub i ) }
over
{ prod from i=1 to n (1- x sub i ) prod from { 1 <= i < j <= n }
(1- x sub i x sub j ) }~~~~.
.EN
.SP1
This is easily seen to be equivalent to Littlewood's identity that
sums all the Schur functions (e.g. [Macd1], ex I.5.4, p. 45). Here
I give another proof which, incidentally, also proves Littlewood's
identity.
Let's call the left side of (L) $ f( x sub 1 , ... , x sub n )$,
and its right side $g( x sub 1 , ... , x sub n )$.
Separating the sum over $S sub n$ in the definition of $f$ into
the $n$ subsets of $S sub n$, according to the values of $pi (1)$,
we have (chopping $pi (1) = i$ amounts to losing $i-1$ inversions)
.SP1
.EQ
f( x sub 1 , ... , x sub n ) =
(1- x sub 1 x sub 2 ... x sub n ) sup -1 sum from i=1 to n
(-1) sup (i-1) f( x sub 1 , x sub 2 , ... , { x sub i } hat
, ... , x sub n )~~~~.
.EN
.SP1
Since $f( x sub 1 ) = g ( x sub 1 )$ is obviously true, $f=g$
is true for $n=1$. (L) would follow by induction if we can prove that
.SP1
.EQ
g( x sub 1 , ... , x sub n ) =
(1- x sub 1 x sub 2 ... x sub n ) sup -1 sum from i=1 to n
(-1) sup (i-1) g( x sub 1 , x sub 2 , ... , { x sub i } hat
, ... , x sub n )~~,
.EN
.SP1
which is equivalent to
.SP1
.EQ
(1- x sub 1 ... x sub n ) ~=~
sum from i=1 to n ~(-1) sup (i-1) ~
g ( x sub 1 , ... , {x sub i} hat , ... , x
sub n ) / g ( x sub 1 , ... , x sub n )~~~.
.EN
.SP1
But it is readily seen that
.EQ
(-1) sup (i-1) {g ( x sub 1 , ... , {x sub i} hat , ... , x
sub n )} over { g ( x sub 1 , ... , x sub n )}~=~
(1- x sub i ) prod from {1 <= j <= n , j != i}
(1- x sub i x sub j )/( x sub j - x sub i )~~~.
.EN
.SP1
It thus remains to prove that
.SP1
.EQ
(1- x sub 1 ... x sub n ) =
sum from i=1 to n
(1- x sub i ) prod from {1 <= j <= n , j != i}
(1- x sub i x sub j )/( x sub j - x sub i )~~~.
.EN
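.SP1
The rational function identity
$1- x sub 1 ... x sub n ~=~ sum from i ~
(1- x sub i ) prod from {j != i} (1- x sub i x sub j )/( x sub j - x sub i )$
is cheap to spot-check at random points; a Python sketch (names mine):

```python
def lhs(xs):
    # 1 - x_1 x_2 ... x_n
    p = 1.0
    for x in xs:
        p *= x
    return 1.0 - p

def rhs(xs):
    # sum over i of (1 - x_i) * prod_{j != i} (1 - x_i x_j)/(x_j - x_i)
    total = 0.0
    for i, xi in enumerate(xs):
        term = 1.0 - xi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (1.0 - xi * xj) / (xj - xi)
        total += term
    return total
```

For instance, at $(2,3,5)$ both sides equal $-29$.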
.SP1
This is a rational function identity resembling those of Good[Go],
Gustafson and Milne[Gu-Mi], Gross and Richards [Gr-Ri],
and Milne [Mi]. It is easily proved by Lagrange-interpolating
the degree-$n$ polynomial (in $z$)
$f(z):= (1- z x sub 1 ) ... ( 1- z x sub n )$ at the $n+1$ points
{$1, x sub 1 , ... , x sub n $}, then substituting
$z=0$ and finally making trivial adjustments.
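.SP1
Identity (L) itself can also be verified numerically by brute force over
$S sub n$ for small $n$; a Python sketch (helper names mine):

```python
from itertools import permutations

def parity(p):
    # sign of a permutation (tuple of 0-based indices)
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

def kernel(xs):
    # (1 - x_n)^{-1} (1 - x_n x_{n-1})^{-1} ... (1 - x_n ... x_1)^{-1}
    val, tail = 1.0, 1.0
    for x in reversed(xs):
        tail *= x
        val /= 1.0 - tail
    return val

def lhs(xs):
    # signed sum of the kernel over all permutations of the variables
    return sum(parity(p) * kernel([xs[i] for i in p])
               for p in permutations(range(len(xs))))

def rhs(xs):
    # prod_{i<j} (x_j - x_i) / [prod_i (1 - x_i) * prod_{i<j} (1 - x_i x_j)]
    num = den = 1.0
    for i in range(len(xs)):
        den *= 1.0 - xs[i]
        for j in range(i + 1, len(xs)):
            num *= xs[j] - xs[i]
            den *= 1.0 - xs[i] * xs[j]
    return num / den
```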
.SP1
\fB9.\fR This is (7) in reverse.
.SP1
\fB5. Robbins's Conjectured Expression For D', And a More General
(and hence easier) Constant Term Conjecture\fR
.SP1
Dave Robbins[Ro2], in a private communication, made the following
conjecture regarding the sum of minors $D'$. Let
$B'(m,n)$ be defined by $B'(0,n)=B(n)$, and
.SP1
.EQ
B'(m+1,n) over B'(m,n)~=~
2 prod from j=1 to n-1 (2m+j+2)(3m+2n+2+j) over (m+1+j)(3m+2+2j)~~~,
.EN
.SP1
then
.SP1
\fBConjecture D'=B'\fR (Robbins[Ro2]): $D'=B'$.
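.SP1
Robbins's ratio recurrence determines $B'(m,n)$ exactly once the column
$B'(0,n)=B(n)$ is given; an exact-arithmetic Python sketch (function
names mine):

```python
from fractions import Fraction
from math import factorial

def B(n):
    # B(n) = prod_{i=0}^{n-1} (3i+1)!/(n+i)!, the AMRR numbers 1, 2, 7, 42, ...
    p = Fraction(1)
    for i in range(n):
        p *= Fraction(factorial(3 * i + 1), factorial(n + i))
    return p

def Bprime(m, n):
    # B'(0,n) = B(n), and
    # B'(m+1,n)/B'(m,n) = 2 prod_{j=1}^{n-1} (2m+j+2)(3m+2n+2+j) / ((m+1+j)(3m+2+2j))
    v = B(n)
    for k in range(m):
        ratio = Fraction(2)
        for j in range(1, n):
            ratio *= Fraction((2 * k + j + 2) * (3 * k + 2 * n + 2 + j),
                              (k + 1 + j) * (3 * k + 2 + 2 * j))
        v *= ratio
    return v
```

For $n=1$ the product is empty, so each step just doubles:
$B'(m,1)= 2 sup m$.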
.SP1
Conjecture \fBD'=B'\fR would, of course, follow from the following
constant term conjecture
.SP1
\fBConjecture C'=B'\fR: $C'=B'$.
.SP1
\fBC'=B'\fR, being more general than \fBC=B\fR, should be easier to
prove. Its proof will, in particular, solve the $25$-dollar problem
stated above.
.SP1
\fB6. Sums of Minors of Generalized Binomial Coefficients Matrices\fR
.SP1
.P
Our approach for expressing the sums of minors of the matrix
$X$ took advantage of the fact that its entries were representable
as constant terms as follows:
.SP1
.EQ
X sub i,j = CT {(1+x) sup m+i-1} over {x sup j-i} =
CT left [ {
(1+x) sup m {((1+x)x) sup i-1} over {x sup j-1 }
}
right ]~~~.
.EN
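.SP1
Both constant-term expressions are plain coefficient extractions from
polynomials, so their equality is easy to machine-check for small
$m,i,j$; a Python sketch (names mine), using that the first form equals
the binomial coefficient $C(m+i-1,j-i)$:

```python
from math import comb

def form1(m, i, j):
    # CT[(1+x)^{m+i-1} / x^{j-i}] = [x^{j-i}] (1+x)^{m+i-1} = C(m+i-1, j-i)
    k = j - i
    return comb(m + i - 1, k) if k >= 0 else 0

def poly_mul(a, b):
    # coefficient lists, lowest degree first
    out = [0] * (len(a) + len(b) - 1)
    for p, ca in enumerate(a):
        for q, cb in enumerate(b):
            out[p + q] += ca * cb
    return out

def form2(m, i, j):
    # CT[(1+x)^m ((1+x)x)^{i-1} / x^{j-1}] = [x^{j-1}] (1+x)^m ((1+x)x)^{i-1}
    poly = [1]
    for _ in range(m):
        poly = poly_mul(poly, [1, 1])     # multiply by 1 + x
    for _ in range(i - 1):
        poly = poly_mul(poly, [0, 1, 1])  # multiply by (1+x)x = x + x^2
    return poly[j - 1] if j - 1 < len(poly) else 0
```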
.SP1
Scanning the proof of \fBD'=C'\fR given in section 3, we see that
it is still valid when $(1+x) sup m$ is replaced by a general
polynomial $f(x)$ and $(1+x)x$ is replaced by a general polynomial
$g(x)$. We thus have
.SP1
\fBGeneral Theorem\fR: Let $f(x)$ and $g(x)$ be polynomials and consider
the $n times (~deg(f)+(n-1) deg(g)^+^1)$ matrix whose entries are given
by
.SP1
.EQ
X sub i,j~:=
CT[ {f(x) g(x) sup i-1} over { x sup j-1}]~~~.
.EN
.SP1
The sum of all its $n times n$ minors equals
.SP1
.EQ
CT~prod from {1<= i
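.SP1
The left side of the General Theorem is directly computable for small
cases. With $f(x)=(1+x) sup m$ and $g(x)=(1+x)x$ as above, and $m=0$,
the sums of minors reproduce the sequence $1,2,7,42$ of the Prologue; a
Python sketch in exact arithmetic (helper names mine):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def det(mat):
    # determinant by Gaussian elimination over exact rationals
    m = [[Fraction(v) for v in row] for row in mat]
    n, sign = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    result = Fraction(sign)
    for c in range(n):
        result *= m[c][c]
    return result

def minor_sum(n, m=0):
    # X_{i,j} = CT[(1+x)^m ((1+x)x)^{i-1} / x^{j-1}] = C(m+i-1, j-i),
    # an n x (m+2n-1) matrix; sum all n x n minors
    cols = m + 2 * n - 1
    X = [[comb(m + i - 1, j - i) if j >= i else 0
          for j in range(1, cols + 1)] for i in range(1, n + 1)]
    return sum(det([[row[c] for c in cs] for row in X])
               for cs in combinations(range(cols), n))

print([minor_sum(n) for n in range(1, 5)])  # [1, 2, 7, 42]
```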