Written: June 10, 2021.
More than 14 years ago I made some comments on Avi Wigderson's wonderful plenary talk at the 2006 ICM (International Congress of Mathematicians). So already in 2006, Computational Complexity was considered a respectable part of mainstream mathematics (not just CS). But the ultimate proof of the Theory of Computing's (ToC's) bourgeois respectability was when, recently, Avi co-won (with Lovász) a (well-deserved!) Abel Prize, the mathematical analog of the Nobel.
Yesterday, Avi gave an equally fascinating talk in Joel Lebowitz's Rutgers Mathematical Physics Seminar, a talk that, thanks to the pandemic, can be enjoyed by many more people, since it was given on Zoom and is preserved for posterity on the seminar's website.
Now that ToC is a full-fledged subset of math, it also shares math's conceptual flaws, namely the embrace of so-called infinity, and in particular the wrong interpretation of Gödel's and Turing's so-called undecidability. Gödel and Turing did not prove that R ≠ RE, but rather that RE (taken literally) is nonsense. Also, there were lots of "equal signs" in Avi's talk, and the punch-line was
MIP* = RE,
but this "equal sign"-and all others in ToC-are "up-to polynomial-time `equivalence'" (what if the polynomial has degree 10000?). So this is a nice theory with beautiful theorems, but with little relevance to the real (finite!) world. Another beautiful, but most probably empty, theory, is Quantum Computing, mentioned at the end of Avi's talk, that produced (as Avi stated) a multi-billion-dollar industry, that I believe is a huge empty bubble.
Already in the 2006 talk, Avi mentioned a certain "paradox". There are many cases where the only known efficient algorithms are probabilistic, i.e. use randomness, and no efficient deterministic versions are known. Yet under a very plausible assumption (only a little stronger than P ≠ NP) randomness does not really help, i.e. there is a canonical way of "derandomizing" any such algorithm. Once again, he fell into the pitfall of equating "just as easy" with "up to a polynomial-time reduction". Even reducing the running time of a practical algorithm by a constant factor of 100 is a big deal!
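A concrete instance of this tension (my own illustration, not one from the talk) is primality testing: the randomized Miller-Rabin test sketched below is what is actually used in practice, even though the deterministic AKS algorithm also runs in polynomial time, because the "derandomized" version is far too slow for the sizes of numbers that actually arise (e.g. in cryptography).

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test.

    Each round wrongly declares a composite number "prime" with probability
    at most 1/4, so 40 independent rounds give an error probability below 4**-40.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)        # random base
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                      # a witnesses that n is composite
    return True                               # "prime", up to a tiny error probability

print(is_probable_prime(2**127 - 1))          # a Mersenne prime: prints True
```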
Avi also advertised his wonderful book Mathematics and Computation, which is highly recommended, but I beg to differ with the exaggerated subtitle "a theory revolutionizing Technology and Science". The Computer revolution would have done just fine without the esoterica of P vs. NP, and the more arcane MIP* and their ilk. It is mainly due to physicists (e.g. Bardeen et al., who invented the transistor) and engineers, who did the hardware, and to the people who designed practical algorithms and developed the software, as described, for example, in Don Knuth's authoritative "Art of Computer Programming" bible. Of course, knowing that an algorithm is O(n*log(n)) as opposed to O(n^2) is important, and the basic notion of computational complexity is a fruitful notion, but the "meta-theory", while definitely a nice theory, is just that, a theory, and claiming that it "revolutionized" Technology and Science is quite a bit of hype.
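Here is a minimal sketch, in Python, of the kind of concrete complexity statement that I do find valuable: an O(n^2) insertion sort against an O(n*log(n)) merge sort, where the difference is already visible on inputs of modest size.

```python
import random
import time

def insertion_sort(a):
    """O(n^2) comparisons in the worst case."""
    a = a[:]
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def merge_sort(a):
    """O(n*log(n)) comparisons."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

data = [random.random() for _ in range(10000)]
for sort in (insertion_sort, merge_sort):
    start = time.perf_counter()
    sort(data)
    print(sort.__name__, round(time.perf_counter() - start, 3), "seconds")
```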
Finally I'd like to disagree with two of the assertions in Avi's above-mentioned book. In section 20.8 (p. 340 of the print version), titled "K-12 education", he made the following two claims.
Response: No offense to Alan, but he can't hold a candle to Albert. In a counterfactual world with Alan deleted, things would have gone on more or less the same, and even the awkward and artificial notion of a Turing machine, as a "model of computation", would not be necessary. I would rate von Neumann, who actually built a real computer, above Turing. On the other hand, the 20th century without Albert would have been very different.
Response: If you refer to "undecidability", then this is the usual, conventional misunderstanding of what Turing "proved". What Turing actually proved is that the so-called Halting Problem is a nonsense question. It is a priori meaningless, since it quantifies over an infinite set, and what he proved is that it is also a posteriori meaningless, and cannot be resurrected, unlike the statement "n+3n=4n for all n", which can be made sensible.
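For the record, here is the diagonal argument itself, as a Python sketch; the halts() "oracle" is of course hypothetical, written down only to be refuted.

```python
# The standard diagonalization behind Turing's result, as a Python sketch.
# halts() is a HYPOTHETICAL oracle; the whole point is that no such
# procedure can exist.

def halts(program, argument):
    """Pretend oracle: return True iff program(argument) would halt."""
    raise NotImplementedError("no such oracle can exist")

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:      # the oracle says it halts, so loop forever
            pass
    return               # the oracle says it loops, so halt immediately

# Running diagonal(diagonal) yields a contradiction either way:
# if it halts, the oracle said it would not, and vice versa.
```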
If you refer to the statement that P ≠ NP, then it is not (yet) a theorem.
If you mean that machines will never pass the Turing test, then it is definitely not a theorem, and machines are already surpassing humans in many human endeavors.
So computers can do (in principle) everything that is meaningful!