Written: June 11, 2006.
I often tell my students that the reason math (especially rigorous math with proofs) is so hard is that "mathematician" is really another species, higher than homo sapiens, and a mathematician is to a non-mathematical human as a human is to a dog. Both humans and dogs have very high cognitive skills, and the abstraction level of a dog far surpasses that of an ant, which in turn beats that of an amoeba; but to a mathematician, the informal and often flawed logic of the hoi polloi is like the logic of a dog compared to that of a human.
I have the highest respect for my dog Zoe, who in many respects is much superior to me. For example, she can run faster, and her sense of smell is much better than mine. But she is not quite a human, so in some (rather narrow) sense, any human is "better" than her, since the human's abstraction level is higher.
I also have the highest respect for my parents, but they were not mathematicians, so at least in some (admittedly very narrow) sense I am "better" than them, since my abstraction level is higher than theirs.
I also have the highest respect for, say, Andrew Wiles, but he is not a computer programmer, so in a (not quite as narrow) sense, Ken Appel, Wolfgang Haken, Tom Hales, Vince Vatter, and yours truly are much better than him!
All that Andrew Wiles did, in his FLT proof, was solve one problem, using ad hoc arguments that depend on the historical contingencies of what was proved before. It is a huge Rube Goldberg contraption that is unlikely to lead to anything further of any significance, just more of the same human drivel.
I admit that some human mathematics is interesting, and even important, perhaps even the proof of FLT, but not for the ostensible reason that it solved a famous open problem, and not even for the reason that it might lead to more human mathematics about modular forms. In my eyes, a piece of human mathematics is interesting and important if and only if it can give us a clue about how to generalize it and teach the method to the computer. This happens when it has some implicit structure that can be taught to a computer, which will later enable the computer to do bigger and better things.
According to this criterion, most of human mathematics is completely useless. It was developed by humans for human consumption. In order for humans to understand it, it had to proceed in tiny steps, each comprehensible to a human. But if we take the "mesh size" of each step, dA, to be larger, we can potentially do much bigger and better things. The computer's dA is much larger, so we can (potentially) reach a mountain-top much faster, and conquer new mountain-tops where no humans will ever tread with their naked brains.
So this activity of computer-generated mathematics is the future. Unfortunately, many human mathematicians still don't realize the importance of this activity, and dismiss it as "just a computer program" and "no new mathematics". These were the reasons given for the rejection (by the Journal of Combinatorial Theory, Series A) of my recent paper Automatic CounTilings. It is true that, in some sense, it has "no new math", but it has something far more important: it has New META-MATH. It is an example of a methodology that will make all computer-free math obsolete very soon. I am not so paranoid as to claim that the editors (who include enumeration guru Mireille Bousquet-Melou) deliberately rejected the paper because, being practitioners of traditional math, they are afraid of being put out of business. They are not so devious. They are simply so wrapped up in their own way of doing things that, from their perspective, they "don't see the point".
Let me finish with a comparison. Which of the following papers is more significant?