%gepner.tex: Doron Gepner's Statistics on Words in $\{1,2,3\}^{*}$ is (Most Probably) Asymptotically Logistic
%by Doron Zeilberger
%Plain TeX
%begin macros
\def\L{{\cal L}}
\baselineskip=14pt
\parskip=10pt
\def\halmos{\hbox{\vrule height0.15cm width0.01cm\vbox{\hrule height
0.01cm width0.2cm \vskip0.15cm \hrule height 0.01cm width0.2cm}\vrule
height0.15cm width 0.01cm}}
\font\eightrm=cmr8 \font\sixrm=cmr6
\font\eighttt=cmtt8
\magnification=\magstephalf
\def\W{{\cal W}}
\def\G{{\cal G}}
\def\P{{\cal P}}
\def\Q{{\cal Q}}
\def\1{{\overline{1}}}
\def\2{{\overline{2}}}
\parindent=0pt
\overfullrule=0in
\def\Tilde{\char126\relax}
\def\frac#1#2{{#1 \over #2}}
%\headline={\rm \ifodd\pageno \RightHead \else \LeftHead \fi}
%\def\RightHead{\centerline{
%Title
%}}
%\def\LeftHead{ \centerline{Doron Zeilberger}}
%end macros
\centerline
{
\bf
Doron Gepner's Statistics on Words in $\{1,2,3\}^{*}$ is (Most Probably) Asymptotically Logistic
}
\bigskip
\centerline
{\it Doron ZEILBERGER}
\bigskip
\qquad
{\it
Dedicated to my friend and hero, Doron Gepner (b. March 31, 1956), on his $60^{th}$ birthday.
``Doron le Doron me-Doron'' (``a gift to Doron from Doron'').
}
\bigskip
{\bf Preface}
I first met Doron Gepner in 1980, when he was a Physics graduate student at the Weizmann Institute of Science,
and I was a young {\it khoker bakhir} (senior researcher). Already then Doron was a legend, since he was the first person in Israel,
as far as I know, to have solved {\it Rubik's cube} completely from scratch, using
group-theoretical methods. I was so impressed that I asked him to present a guest-lecture
in my graduate combinatorics class, and the students loved it.
Doron then went on to do seminal work in theoretical physics,
that, unfortunately, is over my head.
But the part that is really interesting to me is his current work, greatly generalizing the celebrated {\it Rogers-Ramanujan} identities,
and giving lots of new insight. I am sure that this work will lead to many future gems.
The purpose of the present note is to present a {\it present}
({\it Doron} in Hebrew)\footnote{${}^1$}{\sixrm Exactly thirty years ago, on March 31, 1986, my wife Jane and I were at Doron Gepner's
30th birthday party (in Princeton) that Ida (Doron's wife) organized at their place. Another guest was
a colleague of Doron's, an Egyptian postdoc, and we pointed out to him
that we have the same name, and that
it means a ``gift'' (presumably ``God's gift''), to which he
retorted ``in your cases it seems to be the devil's gifts''.},
from one Doron to another, by paying an old {\it debt}. In 1987, when he was
a postdoc at Princeton University, Doron introduced a new {\it permutation statistic} (see below),
that came up in his work in string theory and conformal field theory.
It so happened that at the time,
my friend and collaborator, the eminent French combinatorialist, Dominique Foata, visited me.
Foata is the world's greatest expert on permutation (and word-) statistics
(and coined the term!), so it was only natural that
we both got intrigued and tried to investigate Gepner's new statistic,
which we christened {\it gep}, in analogy with the classical statistics {\it inv}, {\it maj}, and {\it des} (see below).
We had some preliminary results, but not enough for a paper.
This was due to the fact that my beloved servant, Shalosh B. Ekhad, was not yet born. Now, almost thirty years later, it
is a good opportunity to revisit Doron Gepner's difficult statistic and harness the full power of
my silicon servant, and of Maple, to study it seriously.
{\bf Important note}:
All the results in this paper were gotten by using the Maple package {\tt GEPNER.txt}, available, free of charge, from
the url
{\tt http://www.math.rutgers.edu/\~{}zeilberg/tokhniot/GEPNER.txt} \quad .
Sample input and output files may be gotten from the front of this article:
{\tt http://www.math.rutgers.edu/\~{}zeilberg/mamarim/mamarimhtml/gepner.html} \quad .
{\bf A crash course on Permutation and Word Statistics}
A permutation statistic is an integer-valued function on the set of permutations. The most famous one is the
{\bf number of inversions}, $inv(\pi)$, (that shows up in the definition of the determinant of a square matrix)
$$
inv(\, \pi_1 \dots \pi_n \, ):=\sum_{1 \leq i < j \leq n} \chi (\pi_i>\pi_j) \quad ,
$$
where $\chi(S)$ is $1$ or $0$, according to whether $S$ is true or false, respectively.
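(For readers who prefer code to formulas, here is the definition of $inv$ transcribed directly into a few lines of Python; this is merely an illustrative sketch, not part of the Maple package {\tt GEPNER.txt}.)

```python
from itertools import combinations

def inv(pi):
    """Number of inversions of pi: pairs i < j with pi[i] > pi[j]."""
    return sum(1 for i, j in combinations(range(len(pi)), 2) if pi[i] > pi[j])

print(inv([3, 1, 2]))     # the pairs (3,1) and (3,2) are inverted: 2
print(inv([4, 3, 2, 1]))  # the reverse permutation attains the maximum: 6
```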
Almost as famous is Major Percy Alexander MacMahon's statistic, $maj(\pi)$, called the ``major index'':
$$
maj(\, \pi_1 \dots \pi_n \, ):=\sum_{i=1}^{n-1} \, i \, \chi (\pi_i>\pi_{i+1}) \quad .
$$
The generating functions according to $inv$ and $maj$ are both given by
$[n]!:=1 (1+q)(1+q+q^2) \cdots (1+q+\dots +q^{n-1})=(1-q)(1-q^2)\cdots (1-q^n)/(1-q)^n$
as proved by Netto and MacMahon respectively.
In particular the permutation statistics $inv$ and $maj$ are {\it equally distributed}.
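This equidistribution is easy to confirm by brute force for small $n$; the following sketch (in Python, rather than the Maple of {\tt GEPNER.txt}) tabulates both statistics over all of $S_4$ and compares them with the coefficients of $[4]!$.

```python
from itertools import combinations, permutations
from collections import Counter

def inv(w):
    # number of pairs i < j with w[i] > w[j]
    return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

def maj(w):
    # sum of the (1-based) positions of the descents
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def polymul(a, b):
    # multiply two polynomials given as coefficient lists
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def qfactorial(n):
    # coefficient list of [n]! = (1)(1+q)(1+q+q^2)...(1+q+...+q^{n-1})
    poly = [1]
    for k in range(2, n + 1):
        poly = polymul(poly, [1] * k)
    return poly

n = 4
inv_dist, maj_dist = Counter(), Counter()
for p in permutations(range(1, n + 1)):
    inv_dist[inv(p)] += 1
    maj_dist[maj(p)] += 1
target = qfactorial(n)
print([inv_dist[k] for k in range(len(target))])  # both match target
print(target)
```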
Permutations of length $n$ may be viewed as {\it words} in the ``alphabet'' $\{1,2, \dots, n\}$ with
exactly one occurrence of each letter. The above definitions of $inv$ and $maj$ make perfect sense
when defined on words, of any length, in the same alphabet, where repetitions (and omissions) are welcome.
Let $\W(a_1, \dots, a_n)$ be the set of words in the alphabet $\{1, \dots , n\}$
with $a_1$ occurrences of $1$, $a_2$ occurrences of $2$, $\dots$, $a_n$ occurrences of $n$.
MacMahon proved (Theorems 3.6 and 3.7 in [A])
$$
\sum_{w \in \W(a_1, \dots, a_n)} q^{inv(w)} = \frac{[a_1 + \dots + a_n]!}{[a_1]! \dots [a_n]!} \quad,
$$
$$
\sum_{w \in \W(a_1, \dots, a_n)} q^{maj(w)} = \frac{[a_1 + \dots + a_n]!}{[a_1]! \dots [a_n]!} \quad,
$$
(where, as mentioned above, $[m]!:=(1-q)(1-q^2) \cdots (1-q^m)/(1-q)^m$).
In particular they are still {\it equally distributed}, and Dominique Foata ([F]) gave a gorgeous
bijective proof.
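MacMahon's theorems, too, can be spot-checked in a few lines; the sketch below (Python, not the paper's Maple) does it for the alphabet $\{1,2,3\}$ with content $(2,1,1)$, verifying the $q$-multinomial identity by cross-multiplying rather than dividing polynomials.

```python
from itertools import combinations, permutations
from collections import Counter

def inv(w):
    return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

def maj(w):
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def polymul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def qfactorial(n):
    poly = [1]
    for k in range(2, n + 1):
        poly = polymul(poly, [1] * k)
    return poly

# W(2,1,1): the 12 words with two 1s, one 2, and one 3
words = set(permutations((1, 1, 2, 3)))

def dist(stat):
    c = Counter(stat(w) for w in words)
    return [c[d] for d in range(max(c) + 1)]

inv_poly, maj_poly = dist(inv), dist(maj)
# MacMahon: both equal [4]!/([2]![1]![1]!), i.e.
# [2]! times either polynomial should give back [4]!
print(inv_poly, maj_poly)
```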
\vfill\eject
{\bf Asymptotic Normality}
Most (but not all) combinatorial statistics (naturally parametrized by one or more integer parameters), for example
the number of Heads observed when tossing a (fair or loaded) coin $n$ times, are {\it asymptotically normal}. This
means that (for the sake of simplicity let's only consider the one-parameter case), if you call the sequence $X_n$,
figure out its {\it mean} (aka average, aka expectation), $\mu_n$ (usually extremely easy), and its
{\it variance}, $m_2(n)$ (also, usually, fairly easy), and define the {\it standardized} sequence of random variables
$$
Z_n :=\frac{X_n -\mu_n}{\sqrt{m_2(n)}} \quad,
$$
then the sequence $\{Z_n\}$ converges, in distribution, to the good-old normal distribution
(aka Gaussian distribution) whose probability density function is, famously,
$e^{-x^2/2}/\sqrt{2\pi}$. A good way to prove this is to discover explicit expressions for the moments (about the mean),
$m_r(n)$ (or at least the leading terms), and prove that the {\it standardized moments}, $m_r(n)/m_2(n)^{r/2}$ tend, as $n$ goes to infinity,
to the moments of the standard normal distribution, that equal $0$ when $r$ is odd and $1 \cdot 3 \cdots (r-1)$, when $r$ is even.
This approach can often be taught to a computer; see [Z1][Z2].
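For $inv$ this moment calculation can be carried out numerically from the coefficients of $[n]!$ (the number of permutations with a given number of inversions); here is a small Python sketch (again, not part of the Maple package) in which the odd standardized moments vanish exactly, by the symmetry of the distribution, and the fourth approaches $3$ as $n$ grows.

```python
def inv_counts(n):
    """c[k] = number of permutations of {1,...,n} with k inversions,
    i.e. the coefficients of [n]!."""
    poly = [1]
    for k in range(2, n + 1):
        new = [0] * (len(poly) + k - 1)
        for i, x in enumerate(poly):
            for j in range(k):
                new[i + j] += x
        poly = new
    return poly

def standardized_moment(n, r):
    c = inv_counts(n)
    total = sum(c)
    mu = sum(k * v for k, v in enumerate(c)) / total
    def central(s):
        return sum((k - mu) ** s * v for k, v in enumerate(c)) / total
    return central(r) / central(2) ** (r / 2)

# the 4th standardized moment approaches the normal value 3:
for n in (5, 10, 20, 40):
    print(n, standardized_moment(n, 4))
```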
The asymptotic normality of $inv$, for the two-lettered case,
was first proved by Mann and Whitney [MW], and for the general case by Persi Diaconis [D] (and reproved in [CJZ]).
{\bf Enter Doron Gepner's Statistics}
One way to look at the number of inversions in a word $w_1 \dots w_m$ is as the number of {\bf pairs}
of letters $w_i w_j$ with $1 \leq i < j \leq m$ and $w_i > w_j$.