Interesting comments by Wesley Pegden (Dec. 6, 2009)
Prof. Zeilberger,
I was reading some of your opinions just now... 98 in particular. I
basically share your attitude towards the P=NP question. The example I
like to give is the following: consider the "knotless" problem, which
asks whether an input graph can be drawn in R^3 such that none of its
cycles form nontrivial knots.
Suppose someone discovered a reduction from, say, 3-SAT to this
problem. Then P=NP, because as a consequence of the graph minor
theorem of Neil Robertson and Paul Seymour, there is a
polynomial-time algorithm for deciding any graph property closed
under taking minors, and it's easy to see that the knotless property
is minor-closed. Nevertheless, *no explicit algorithm is known* to
solve the knotless problem. Not just "the best known algorithm is
super-polynomial", but no algorithm is known at all (before the minor
theorem it was not even known whether the problem was decidable), and
I see no reason why we should ever expect humans to know such an
explicit algorithm.
Best of all, the algorithm guaranteed to exist by the graph minor
theorem is not only polynomial, but essentially linear (n log n is,
I think, the best known bound at this point). I believe, however,
that the constant *in front* of the n log n can be expected to be
larger than the number of particles in the known universe. (And we
have no upper bound on it for the knotless property, since the graph
minor theorem doesn't tell us how many forbidden minors it takes to
characterize the knotless property, only that there are finitely
many.)
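To make the shape of that argument concrete, here is a minimal sketch
(my own illustration, not Robertson and Seymour's algorithm): once the
minor theorem guarantees a finite forbidden-minor set, recognition
reduces to finitely many minor tests. The brute-force `has_minor` below
is exponential-time and only usable on tiny graphs; the nonconstructive
magic is that a polynomial-time minor test exists and the forbidden
list is finite, yet for the knotless property the list itself is
unknown.

```python
from itertools import product

def is_connected(G, verts):
    """Check that `verts` induces a connected subgraph of G
    (G is a dict: vertex -> set of neighbours)."""
    verts = set(verts)
    start = next(iter(verts))
    seen, stack = {start}, [start]
    while stack:
        for w in G[stack.pop()]:
            if w in verts and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == verts

def has_minor(G, H):
    """Brute-force test: is H a minor of G?
    H is a minor of G iff V(G) contains disjoint, nonempty, connected
    "branch sets", one per vertex of H, with a G-edge between every
    pair of branch sets joined by an H-edge."""
    gv, hv = list(G), list(H)
    k = len(hv)
    # Try every assignment of G-vertices to the k branch sets
    # (label -1 means the vertex is unused): (k+1)^|V(G)| cases.
    for assign in product(range(-1, k), repeat=len(gv)):
        branch = [[gv[i] for i, a in enumerate(assign) if a == b]
                  for b in range(k)]
        if any(not b for b in branch):
            continue                   # every branch set nonempty...
        if not all(is_connected(G, b) for b in branch):
            continue                   # ...and connected
        # every H-edge must be realised between the two branch sets
        if all(any(y in G[x] for x in branch[i] for y in branch[j])
               for i in range(k) for j in range(i + 1, k)
               if hv[j] in H[hv[i]]):
            return True
    return False
```

For example, contracting one edge of a 4-cycle yields a triangle, so
the 4-cycle has a K3 minor, while a path (being a tree) does not.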
But sometimes I feel like I'm being too hard on the P=NP? problem.
Certainly, I think it's clear that the answer P=NP would not by
itself have practical consequences. It could not actually tell us
there are "efficient" algorithms to solve NP-complete problems, since
it is easy to produce problems in NP which require n^100000 steps (or
however large an exponent you want).
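The standard way to see this (an aside of mine, not the letter's) is
the deterministic time hierarchy theorem of Hartmanis and Stearns:

```latex
% Time hierarchy theorem (deterministic, multitape form):
% if f(n)\log f(n) = o(g(n)) and g is time-constructible, then
\mathrm{DTIME}\bigl(f(n)\bigr) \subsetneq \mathrm{DTIME}\bigl(g(n)\bigr).
% In particular, some language lies in
% \mathrm{DTIME}(n^{100000}) \setminus \mathrm{DTIME}(n^{99999}),
% and it belongs to P \subseteq NP.
```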
But wouldn't it be interesting to show that P!=NP? It is easy to rag
on the possibility of showing P=NP, but showing P!=NP would be harder
to dismiss. This would undoubtedly involve the development of
powerful new ideas (perhaps the creative use of computers would be
important here). And it *could* conceivably have practical
implications... if you wanted to have some mathematics-based
confidence, for example, that the NSA had not secretly discovered a
way to defeat your cryptography (perhaps some future system based on
lattice problems, like the shortest vector problem, that you knew
were NP-hard).
Of course, there's a problem here too... the P, NP hierarchy is
based on *worst-case* analysis, whereas in the real world (including
cryptography) you usually care about the average case (especially if
the average includes almost every case). This leads to complementary
issues. If P=NP, it means that NP problems really are polynomial time
(even in the worst case), but polynomial doesn't mean easy. If P!=NP,
then things are hard (if we accept that super-polynomial really does
usually mean hard) but only in the worst case, and the average case
may be easy.
But, at the risk of carrying on this rambling just a bit further, I
think it's worth considering whether it isn't a beautiful question
anyway, albeit one without the immediate practical ramifications some
computer scientists may suggest (and if that's the standard now, I
know I'm in trouble.) And I think, in spite of all the talk about
"efficiency", this beauty is actually a really big reason for the
popularity of the problem among researchers. P and NP are classes with
simple, natural definitions, "are they the same?" is a natural question,
and we have been unable to answer it. I think the real reason that
"P=NP?" is the object of study is that P and NP are the simplest
complexity classes for which the equality question is difficult, not
because P and NP are the classes for which the question becomes important.