#OK to post #Robert Coyne, 1/28/24, hw3

(* No Maple coding; all just for humans.

4. I don't know how the approach goes in terms of N, i.e. how to get a more accurate approximation for the region around a large but finite N, but in the extreme limit this is just a constant: 1/zeta(2) = 6/pi^2 = 0.6079...

5. I'm still not exactly convinced it is less secure. The claim seems to me just a plausible idea: the fewer and larger the prime factors, the harder they should be to find. That's certainly so for brute force, and from what was said in class it's so for the known general methods of factoring. But those are not the methods we need to fear; we know they're not good enough, because we chose the size of our numbers precisely so that they wouldn't be. And we know we can do that and still have legitimate encryption and decryption be practicable, because that part is in P and indeed rather small/easy. What we need to worry about is the method we don't know.

To feel confident of the claim that two factors are best, we'd need some general theorem about all possible ways of factoring, say a bound on the difficulty. But if that bound were too large, more than polynomial, it would be a proof that NP isn't P, and we'd have heard about it. And if it were too small, it would be just a lovely hope with no sign that it can be attained; we'd figure it has no bite, isn't really relevant. We'd need something in between; moreover, it would have to be in terms of the primes, not n itself. That seems a tall order, and in any case I haven't heard of any such thing.

So I think the best argument for two primes is that we can't allow just any number of factors at random, which would indeed make some of them too small for comfort; the best we could do would be to limit the count to, say, 2..7. Any particular key would use only a single value, and there's no reason to think it would be any better than two: if there's some special factoring method that works especially well on p*q, why not one for p*q*r?
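(Back to question 4 for a moment: the constant 1/zeta(2) arises, among other places, as the asymptotic density of squarefree integers. Assuming that's roughly the quantity involved, here's a quick sanity check -- in Python rather than Maple, since I'm not coding the homework itself:)

```python
import math

def squarefree_density(N):
    """Fraction of integers in 1..N divisible by no square greater than 1."""
    free = [True] * (N + 1)
    for k in range(2, math.isqrt(N) + 1):
        # Marking multiples of every k^2 is redundant for composite k,
        # but harmless and keeps the sieve simple.
        for m in range(k * k, N + 1, k * k):
            free[m] = False
    return sum(free[1:]) / N

for N in (10**3, 10**4, 10**5, 10**6):
    print(N, squarefree_density(N))
print("1/zeta(2) =", 6 / math.pi**2)  # 0.6079...
```

The densities drift toward 0.6079... as N grows, matching the limiting constant, though this says nothing about the error term around a finite N.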
And obliging the hacker to try a few possibilities (if the number of factors matters at all for his method) seems unlikely to outweigh the substantial increase in difficulty that bigger primes or keys would impose on us.

So with everything else equal, why not fall back on the intuitive argument for two? Because in real life the cracker isn't always a mathematician, the crack isn't always about factoring in general, and our own methods for generating keys aren't always public. Think of a cryptocurrency wallet. If you lose access to your private key, you lose a fortune, so you have to be able to reconstitute it using a mnemonic passphrase fed into the key-making routine. That opens up all the usual human-weakness ways of guessing the password and, with it, the resulting key. But the key has to be split up among the several (presumably two) primes. The hacker knows that "everyone" does it that way, and there may be an especially popular piece of software for the task, or the spy in Accounting may have spilled what we bought.

If I were in charge of enabling all employees of my organization to have RSA keys easily, my impulse would be to take an existing program (I would probably insist on open source), so as not to reinvent the wheel, but tweak it to use three primes, splitting the easy passwords differently, so as at least to throw some unanticipated obstacle in the bad guy's way. And the third prime, rather than being just a weird password massage, will also beat the hacker who is a mathematician and has a special method for factoring p*q but hasn't worked out p*q*r because he didn't think he'd ever need it.

We are dealing with an adversarial situation, and the stakes are huge. This isn't just for fun; the question is: should we stake the world's financial system, the nation's military security, and everyone's identity (as recognized and enforced by law) on RSA? Just because there's no _published_ crack of it doesn't mean no one has found one.
If NSA cracked Putin's key, would they send it off to the Journal of Cryptology? *)
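To make the three-prime tweak concrete, here's a toy sketch of what I have in mind -- again in Python, with tiny parameters, no padding, and function names of my own invention; real key generation would need far more care:

```python
import math
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    """Random odd number of the given size, retried until probably prime."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n

def make_three_prime_key(bits=96):
    """Toy multi-prime RSA: n = p*q*r, with e*d = 1 mod lcm(p-1, q-1, r-1)."""
    e = 65537
    while True:
        p, q, r = (random_prime(bits) for _ in range(3))
        if len({p, q, r}) < 3:
            continue  # need three distinct primes
        lam = math.lcm(p - 1, q - 1, r - 1)
        if math.gcd(e, lam) == 1:
            return p * q * r, e, pow(e, -1, lam)

n, e, d = make_three_prime_key()
m = 123456789
c = pow(m, e, n)          # encrypt
assert pow(c, d, n) == m  # decrypt recovers the message
```

Encryption and decryption work exactly as with two primes; the only change is that n now has three factors, so a shortcut tuned to p*q doesn't obviously apply. (Multi-prime RSA does exist in the wild, so the idea isn't crazy -- the open question is only whether it's an improvement.)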