Dear Prof. Zeilberger,

I am not qualified to comment on your Opinion 94 as regards the careers advice for Thomas Hales. I will have to leave that to Tom or posterity. I would like to comment on the relevance of computational logic to mathematics.

You are certainly right that computer programs can be orders of magnitude more reliable at mathematical calculation than humans. The problem is that the number of orders of magnitude really matters. In human mathematics, we really only get adequate reliability from the multiple independent checks that are made on proofs as mathematics is used and taught. The big problem with computer algebra systems like Maple or Mathematica is that they were just not designed with assurance in mind. They give no indication of the reliability of the results. On the other hand, if you write your own programs, then, while you may be confident that they are free of errors, how do you convince anybody else? Peer review of code is incredibly time-consuming and tedious and of very limited value. Trying to produce totally correct programs is also potentially a waste of time if all you need is the correctness of one or two specific calculations.

I believe that the mathematical community and anyone else who cares about computer algebra should not be complacent about the dependability of the systems they use. We should demand that computer algebra systems be designed with dependability in mind. The user must be able to make convincing claims about the validity of results. These need not be claims that a complete formal proof can be produced, but you need to be able to claim some degree of confidence. Otherwise what is the point?

What we have learned in computational logic is that good software engineering makes it possible to mitigate the argument that "programs always have bugs". For example, the various HOL systems as used in the Flyspeck project are based on an LCF-style architecture. This means that you only have to trust a small and fixed part of the system, namely the logical kernel that carries out primitive inferences. The rest of the code can be riddled with bugs, but that does not matter: if code outside the kernel goes wrong it may annoy the user, but only bugs in the kernel can produce an invalid proof (a toy sketch of the idea is given below). Practical experience over more than 20 years with systems of this type has shown that they really do give a very high degree of dependability and even succeed in not annoying the user quite a lot of the time.

These ideas need to be exported into the world of computer algebra. This does not imply total rigour: a result can always be given with a qualified certificate of reliability (e.g., you might get "proved using WZ as an oracle" from a system that hasn't formalized the metatheory of WZ). So we can mitigate the arguments that "programs always have bugs" and "you can't trust a computer", and we have to. We do this in engineering when computers are applied to safety-critical problems, and there is no reason why we should not do it for the kinds of programs that mathematicians want to use. Then, maybe after the retirement of a few diehards, the arguments against using computers in mathematics can be consigned to history.

Finally, concerning the axiomatic method, there is one vital difference that makes machine language less stifling than a tuxedo: you cannot compile high-level languages into a tuxedo.
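To make the LCF idea concrete, here is a toy sketch in OCaml (the language HOL Light happens to be implemented in). The Kernel module, the formula type and the rule names (assume_axiom, modus_ponens, oracle_thm) are invented for illustration and are far simpler than any real HOL kernel, but they show the two mechanisms just described: the abstract thm type means only the kernel's rules can create theorems, and oracle tags give the kind of qualified certificate mentioned in connection with WZ.

    (* Toy LCF-style kernel: [thm] is abstract, so the ONLY way to make a
       theorem is via the inference rules exported by Kernel.  Code outside
       this module can be riddled with bugs without compromising soundness.
       This is an illustrative sketch, not the real HOL Light kernel. *)
    module Kernel : sig
      type form = Atom of string | Imp of form * form
      type thm                                (* abstract: no constructor escapes  *)
      val concl : thm -> form                 (* inspect a theorem's conclusion    *)
      val oracles : thm -> string list        (* which oracles (if any) it uses    *)
      val assume_axiom : form -> thm          (* primitive rule: accept an axiom   *)
      val modus_ponens : thm -> thm -> thm    (* primitive rule: from A->B and A   *)
      val oracle_thm : string -> form -> thm  (* tagged result of an external tool *)
    end = struct
      type form = Atom of string | Imp of form * form
      type thm = { concl : form; oracles : string list }
      let concl th = th.concl
      let oracles th = th.oracles
      let assume_axiom f = { concl = f; oracles = [] }
      let modus_ponens ab a =
        match ab.concl with
        | Imp (x, y) when x = a.concl ->
            { concl = y; oracles = ab.oracles @ a.oracles }
        | _ -> failwith "modus_ponens: theorems do not match"
      let oracle_thm name f = { concl = f; oracles = [name] }
    end

    (* Code outside the kernel: however buggy, it can only ever obtain
       values of type [thm] that were produced by the rules above. *)
    let () =
      let open Kernel in
      let a = Atom "A" and b = Atom "B" in
      let th1 = assume_axiom (Imp (a, b)) in
      let th2 = oracle_thm "WZ" a in          (* pretend a CAS proved A via WZ *)
      let th3 = modus_ponens th1 th2 in
      Printf.printf "proved %s using oracles: %s\n"
        (match concl th3 with Atom s -> s | Imp _ -> "<implication>")
        (String.concat ", " (oracles th3))

The point of the module abstraction is that nothing outside Kernel can construct a thm except by calling the exported rules, so a bug elsewhere can at worst fail or annoy, never forge a theorem; and any theorem that leaned on an external tool carries that dependence around in its certificate.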
OK, so it took Russell and Whitehead hundreds of pages to prove that 1+1 = 2, but that was when it was done for the very first time, many decades before there were machines to help. These days, with computer assistance, we have a range of "headline" formal proofs like the four colour theorem (4CT) and the Jordan curve theorem. These are based on formal developments of much of elementary algebra and analysis together with a lot of interesting combinatorics, abstract algebra, geometry and topology. These developments include microcosms of computer algebra tools for things like term normalisation and symbolic differentiation. I disagree that the returns are diminishing and I predict continued and accelerating progress, if for no other motivation than "because it is there".

Best Regards,
Rob Arthan.