Written: Sept. 12, 2015

Four years ago, mathematicians were panicking. The then director of the Division of Mathematical Sciences at NSF, Sastry Pantula, had the "nerve" to propose that the name of that division be changed to

"Division of Mathematical and Statistical Sciences".

They wrote many letters and made quite a fuss, and I remember one mathematician saying that, by the same logic,
the "Division of Fishery" should be renamed the "Division of Salmon and Fishery", implying that
Sastry's proposal was a *category error*, since
statistics is just one of several mathematical sciences, and placing
statistics and mathematics at the same "categorical level" would challenge the dominance of mathematics.
They were also worried, of course, about the *bottom line*: that it would hurt the funding of pure math.

But don't worry, Professor Pantula, time is on your side! Mathematics will very soon (like everything else!) become
a statistical science, and mathematical knowledge will be obtained by statistical methods. Mathematical
truth will become statistical, and the fraction of mathematical knowledge with traditional "full (rigorous) proof"
would have measure 0.
In fact such statements with a full human-made proof (e.g. FLT, Poincaré, or Yitang Zhang's recent "almost twin prime" theorem)
would be considered *trivial*, exactly **because** they have full proofs. Even complete computer-assisted proofs, like
that of the Four Color Theorem, would be dismissed as semi-trivial, since their probability of correctness
is (at least allegedly) 1.

So there is going to be a *trade-off*, not unlike Heisenberg's, between *depth* and *certainty*, and one
of the major problems of future mathematics (that would become part of statistics) would be to quantify this.

"Probabilistic algorithms" that output the answer with very high probability have been used in computer science
for a very long time, and more recently in learning theory, for example in Leslie Valiant's beautiful
"Probably Approximately Correct" (PAC) theory, where one can estimate and bound the error of the output.
But here the "problems" are conceptually trivial: given sufficient time and space, one can (usually) find
the answer eventually, at least in principle.
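A minimal sketch (in Python, with illustrative names) of a classic probabilistic algorithm of exactly this kind, the Miller-Rabin primality test: for a composite n, each random round wrongly answers "probably prime" with probability at most 1/4, so after k rounds the error is at most 4^(-k), a fully quantified statistical certainty.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: 'probably prime' is wrong with prob. <= 4**(-rounds)."""
    if n < 2:
        return False
    # quick trial division by a few small primes
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is certainly composite
    return True  # composite with probability at most 4**(-rounds)
```

Note the asymmetry: a "composite" verdict is a rigorous proof (a witness was found), while a "prime" verdict is merely statistical knowledge with an explicit error bound.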

On the other hand, in pure mathematics one has "general theorems", valid for "infinitely" many cases, or so it seems,
so it may look odd to apply the same methods. This is only an illusion, of course, since there is no such thing as "infinity",
and every mathematical statement is **one** fact, but usually involving *symbols*, so
instead of *number crunching* we have to do *symbol crunching*.
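As a toy illustration of statistical "symbol crunching" (a sketch with illustrative names, resting on the standard Schwartz-Zippel lemma): two polynomials of degree at most d over a field with p elements that agree at a uniformly random point are in fact identical with probability at least 1 - d/p, so a handful of random evaluations "proves" a polynomial identity with quantifiable certainty.

```python
import random

P = (1 << 61) - 1  # a large Mersenne prime, used as the field size

def lhs(x, n):
    # (x - 1) * (x^(n-1) + x^(n-2) + ... + 1), evaluated mod P
    return (x - 1) * sum(pow(x, k, P) for k in range(n)) % P

def rhs(x, n):
    # x^n - 1, evaluated mod P
    return (pow(x, n, P) - 1) % P

def identity_holds(n, trials=20):
    """Statistically verify lhs == rhs as polynomials in x.

    If the two sides were distinct polynomials of degree <= n, each
    trial would expose the difference except with probability <= n/P,
    so passing all trials bounds the error by (n/P)**trials.
    """
    for _ in range(trials):
        x = random.randrange(P)
        if lhs(x, n) != rhs(x, n):
            return False
        # a mismatch would be a rigorous disproof of the identity
    return True
```

Here the certainty is not 1, but it is *quantified*, which is precisely the trade-off between depth and certainty described above.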

So, still today, it is "all or nothing": you either have a full ironclad proof,
or you don't. Until you have a full proof, you have *nothing*, or at the very best, a *conjecture*.
Even for very plausible conjectures, like the twin prime conjecture, or the irrationality of
Euler's constant, γ, that are virtually certain on informal, heuristic grounds, there is, at present, no way to "quantify" the
probability of their being wrong.
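For the twin prime case, one could imagine such quantification starting from heuristics like the Hardy-Littlewood conjecture, which predicts roughly 2*C2*N/(ln N)^2 twin prime pairs below N, where C2 ≈ 0.66016 is the twin prime constant. A minimal sketch (illustrative names; the crude N/(ln N)^2 form, not the sharper integral version) comparing that prediction with an exact count:

```python
import math

TWIN_PRIME_CONSTANT = 0.6601618158  # C2, an accepted numerical value

def twin_prime_count(limit):
    """Count primes p <= limit with p + 2 also prime, by a simple sieve."""
    top = limit + 2  # sieve a bit past limit so p + 2 can be tested
    sieve = bytearray([1]) * (top + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(top ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return sum(1 for p in range(2, limit + 1) if sieve[p] and sieve[p + 2])

def hardy_littlewood_estimate(limit):
    """Crude Hardy-Littlewood prediction: 2 * C2 * N / (ln N)^2."""
    return 2 * TWIN_PRIME_CONSTANT * limit / math.log(limit) ** 2
```

The heuristic gets the order of magnitude right, and such agreement is exactly the kind of "statistical evidence", as opposed to proof, that the text is contemplating.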

But this will change soon enough. Mathematics will increasingly use the emerging
(inherently *statistical*)
methods of "data mining" and
"machine learning" (still in their infancy) to open new frontiers in mathematical knowledge. For the great
majority of these results, a full traditional proof would be out of the question, and **all** knowledge,
*including pure mathematical knowledge*, will be obtained using statistical methods.

So in ≤ 100 years, after all, there would be, once again, no need to change the name of the "Division of Mathematics"
to the "Division of Mathematics and Statistics", because mathematics would,
**tautologically**, become (a small) subset of statistics.

Opinions of Doron Zeilberger