Your delightful post on Knuth today led me to your essay on Avi Wigderson's ICM lecture and the P vs. NP problem. I'm not Avi (just a former postdoc of his), but I wanted to take the liberty anyway of responding to some of your points.

1. I think you misunderstood the point about NP problems being easy to check. No, checkability doesn't help explain why the NP-complete problems are hard, nor does anyone think it does. What it does help explain is why the P versus NP problem *itself* is hard! (If the solutions took exponential time to check, then it would be trivial to prove there's no polynomial-time searching algorithm.) (A toy illustration of polynomial-time checking appears after this letter.)

2. I don't think Avi's basic argument for P!=NP requires any Penrose-style leap of faith about the specialness of human creativity. One can say: our experience has been that solving combinatorial search problems is an extremely hit-or-miss business, and that when we do strike a hit, it's because we have some special knowledge of the problem structure (no doubt helped by a billion years of evolution). One can therefore speculate that brute-force search will in general be unavoidable for *any* intelligent entity (humans, AI bots, etc.). This is not that far from some of the arguments for P!=NP you yourself give.

3. I think theoretical computer scientists understand perfectly well that polynomial vs. exponential is an idealization imposed on a finite universe. Like any other idealization, this one can only be justified by its usefulness: leading to powerful new theorems, open problems, and so on. By that criterion, I would argue that it's already justified itself a thousand times over. Of course, someone who disparages the achievements of TCS as "yet another reduction theorem" (which is sort of like disparaging the achievements of physics as "yet another particle") would naturally disagree with that assessment.

4. While it's true that RH, Goldbach, and P!=NP have proofs of size O(1) each (assuming they're provable at all), it's not true (as you say) that "introducing a parameter n [is] completely misleading." One can ask, for example: is there a proof of RH with at most n symbols? And then the issue becomes whether the time needed to answer that question grows polynomially or exponentially with n. (Indeed, this is exactly how Gödel phrased the P vs. NP question in his now-famous 1956 letter to von Neumann.) (A brute-force sketch of this bounded proof search also follows the letter.)

5. Personally, I'd be *thrilled* if automated techniques could give us new insight into complexity questions like P vs. NP. I even tried myself to apply automated techniques at various points: for example, I wrote a software package for computing decision-tree measures of Boolean functions, and I tried to do a search for the smallest arithmetic circuit computing the 4-by-4 Permanent. But in both cases, I was stymied by the astronomical size of the search space, which grows not merely exponentially but doubly-exponentially with the input size n (see the sketch after this letter). So, do you have a concrete proposal for how we complexity theorists should be enlisting computers to help us in the near future (i.e., without waiting around for general-purpose AI bots)? I think such a proposal would be met with considerable enthusiasm (in contrast to, say, kvetching from the sidelines :-) ).

6. I completely agree that Avi needs to use a spell-checker.

All the best,
Scott Aaronson
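
A minimal Python sketch of the "easy to check" property from point 1 (this is my illustration, not part of the original exchange): verifying a proposed satisfying assignment for a CNF formula takes time linear in the size of the formula, even though finding such an assignment may, for all we know, require searching essentially all 2^n assignments.

    def verify_cnf(clauses, assignment):
        """Check a proposed solution to a CNF-SAT instance.
        clauses: list of clauses, each a list of nonzero ints (DIMACS-style literals,
        e.g. 3 means x3 and -3 means NOT x3).
        assignment: dict mapping variable number to True/False."""
        for clause in clauses:
            # a clause is satisfied if at least one of its literals is true
            if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
                return False        # this clause is unsatisfied
        return True                 # every clause has a true literal

    # Example: (x1 OR NOT x2) AND (x2 OR x3)
    clauses = [[1, -2], [2, 3]]
    print(verify_cnf(clauses, {1: True, 2: False, 3: True}))   # prints True

The verifier makes one pass over the formula; nothing in it hints at how to find a satisfying assignment.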
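Next, a hedged sketch of the parameterization in point 4. The function is_valid_proof below is a hypothetical stand-in for a mechanical proof checker (not any real system): asking whether a statement has a proof of at most n symbols can be settled by brute force in roughly |alphabet|^n steps, since each candidate string can be checked in time polynomial in n. Whether that exhaustive search can always be replaced by a poly(n)-time algorithm is, in essence, the question Gödel raised in the letter to von Neumann mentioned above.

    from itertools import product

    def has_short_proof(statement, alphabet, n, is_valid_proof):
        """Brute-force search for a proof of 'statement' with at most n symbols.
        is_valid_proof(candidate, statement) is assumed to run in time poly(n)."""
        for length in range(1, n + 1):
            # roughly |alphabet|**n candidate strings in total
            for candidate in product(alphabet, repeat=length):
                if is_valid_proof("".join(candidate), statement):
                    return True
        return False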
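Finally, a toy sketch related to point 5 (this is not the software package mentioned there, just an illustration of one decision-tree measure): computing the deterministic query complexity D(f) of a Boolean function given by its truth table. The recursion already takes time exponential in n, and a search over all Boolean functions on n bits ranges over 2^(2^n) truth tables (2^16 for n = 4, but about 4.3 billion for n = 5), which is the doubly-exponential growth described in the letter.

    from functools import lru_cache
    from itertools import product

    def decision_tree_depth(f, n):
        """f: dict mapping n-bit tuples to 0/1. Returns the minimum depth D(f)
        over all deterministic decision trees computing f."""

        @lru_cache(maxsize=None)
        def depth(live):
            # 'live' is the set of inputs still consistent with the queries made so far
            if len({f[x] for x in live}) <= 1:
                return 0                        # f is constant on the remaining inputs
            best = n                            # querying every variable always suffices
            for i in range(n):                  # try querying variable i next
                zero = frozenset(x for x in live if x[i] == 0)
                one = frozenset(x for x in live if x[i] == 1)
                if not zero or not one:
                    continue                    # variable i is constant here, so querying it gives nothing
                best = min(best, 1 + max(depth(zero), depth(one)))
            return best

        return depth(frozenset(f))

    # Example: the 2-bit AND function requires depth 2.
    f = {x: int(all(x)) for x in product((0, 1), repeat=2)}
    print(decision_tree_depth(f, 2))   # prints 2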
