#OK to post homework
#GLORIA LIU, Jan 28 2024, Assignment 3
#=====================================#
#1. Read and do all the examples, plus make up similar ones, on pages 61-90 of Frank Garvan's awesome Maple booklet
#Chapter 6
with(plots):
plot(sec(x),x=-2*Pi..2*Pi,y=-5..5):
plot([cos(t),1/2*sin(t),t=0..2*Pi]):
polarplot(sin(6*t),t=0..2*Pi):
L:=[[1,1],[1,2],[1,3],[2,2],[3,1],[3,2],[3,3],[5,1],[5,3],[6,1],[6,2],[6,3],[7,1],[7,3]]:
plot(L,x=0..7,y=0..5,style=point):
plot3d(exp(-(x^2+y^2-1)^2),x=-2..2,y=-2..2):
plot1:=plot3d([sqrt(1+u^2)*cos(t),sqrt(1+u^2)*sin(t),u],u=-1..1,t=0..2*Pi):
plot2:=plot3d(x+y+1,x=-2..2,y=-1..1):
display(plot1, plot2):
contourplot3d(exp(-(x^2+y^2-1)^2),x=-(1.3)..(1.3),y=-(1.3)..(1.3),filled=true,coloring=[blue,red]):
animate(1/(1+x*t),x=0..10,t=0..1,frames=10):
animate3d([r*cos(t+a),r*sin(t+a),r^2*cos(2*t)+sin(a)],r=0..1,t=0..2*Pi,a=0..2*Pi,frames=10,style=patch):
#Chapter 7
with(linalg):
v:=vector([1,2,3]):
A:=matrix(2,3,[a,b,c,d,e,f]):
op(A):
f:=(i,j)->x^(i*j):
A:=matrix(4,4,f):
A:='A': v:='v': f:='f':
#=====================================#
#2. Look at the Maple procedure
#EAnew(m,n)
#that I added after class, and compare it with Maple's gcd(m,n) and EA(m,n). Why is EAnew better?
#EAnew(m,n): inputs m>n and outputs gcd(m,n) using iteration rather than recursion
EAnew:=proc(m,n) local m1,n1,m1new:
 m1:=m: n1:=n:
 #standard Euclidean algorithm: replace (m1,n1) by (n1, m1 mod n1) until n1=0
 while n1>0 do
  m1new:=n1:
  n1:=m1 mod n1:
  m1:=m1new:
 od:
 #return the gcd explicitly
 m1:
end:
#Run 3 times:
for i from 1 to 3 do
 a:=rand(10^50000..10^50001)():
 b:=rand(10^50000..10^50001)():
 print(time(gcd(a,b)));
 print(time(EAnew(a,b)));
od:
a:='a': b:='b':
#Results: on 3 random samples, Maple's built-in gcd took 0.011s, 0.010s, and 0.010s,
#while EAnew took 3.644s, 3.541s, and 3.683s.
#EAnew is better than the original recursive EA because it iterates instead of recursing: the gcd of
#very large numbers can be computed without storing a stack frame (and its local variables) for every
#recursive call, and without the risk of exhausting the stack when the recursion gets very deep.
#(A sketch of such a recursive EA, for comparison, is at the end of this file.) Maple's built-in gcd
#is still far faster than either, as the timings above show.
#=====================================#
#3. Write a procedure
#EstimateProbGCD(N,K)
#that estimates the probability (given in floating point) that two randomly chosen integers between 1 and N are relatively prime, by doing the experiment K times.
EstimateProbGCD:=proc(N,K) local a,b,primeCount,i:
 primeCount:=0:
 for i from 1 to K do
  a:=rand(1..N)():
  b:=rand(1..N)():
  if gcd(a,b)=1 then
   primeCount:=primeCount+1:
  fi:
 od:
 #return the observed fraction as a floating-point number
 evalf(primeCount/K):
end:
EstimateProbGCD(10^10,10^4);
#Result: 0.6061 (6061 of the 10^4 sampled pairs were coprime)
#=====================================#
#4. Find the exact value of the limit of this probability as N goes to infinity.
#Skipped
#=====================================#
#5. Write a procedure
#MakeRSAkeyG(D1,g);
#that takes n (the modulus) to be a product of g distinct primes (MakeRSAkeyG(D1,2) should do the same as MakeRSAkey(D1))
#Why is, for example, MakeRSAkeyG(D1,3) less secure?
MakeRSAkeyG:=proc(D1,g) local i,n,d,S,m,L,e,p:
 L:=[]: m:=1: n:=1:
 for i from 1 to g do
  p:=nextprime(rand(10^(D1-1)..10^D1-1)()):
  #redraw if this prime was already used, so the g primes are distinct
  while member(p,L) do p:=nextprime(rand(10^(D1-1)..10^D1-1)()): od:
  L:=[op(L),p]:
  print(p);
  #n is the modulus, m=phi(n)=(p1-1)*...*(pg-1)
  n:=n*p:
  m:=m*(p-1):
 od:
 #take the first e at or above a random starting point S with gcd(e,m)=1, then d=e^(-1) mod m
 S:=rand(m/2..m-1)():
 for e from S to m-1 while gcd(e,m)>1 do od:
 d:=e&^(-1) mod m:
 [n,e,d]:
end:
#Example: MakeRSAkeyG(2,3) -> [123469, 107521, 104161] with primes 37, 47, 71
#MakeRSAkeyG(D1,3) is less secure. An attacker only needs to find one prime factor of n, and with
#three factors there are more chances to find one; moreover, for a modulus of a given size, more
#factors means smaller factors, and factoring methods whose cost depends on the smallest prime factor
#(trial division, Pollard rho, the elliptic-curve method) find them faster. Once one factor is found,
#n is easy to factor completely and the secret exponent d can be recovered.
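#Sanity check I added for the example key above (Encrypt and Decrypt are ad-hoc helper names, not from
#class): the toy modulus factors instantly, and raising to e and then to d gives the message back.
ifactor(123469);
#-> (37)*(47)*(71), the primes printed above
Encrypt:=(msg,n1,e1)->msg &^ e1 mod n1:
Decrypt:=(cmsg,n1,d1)->cmsg &^ d1 mod n1:
Decrypt(Encrypt(12345,123469,107521),123469,104161);
#-> should give back 12345
Encrypt:='Encrypt': Decrypt:='Decrypt':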
#=====================================#
#6. Carefully read section IV of the seminal Rivest-Shamir-Adleman paper, about digital signatures, and implement it, using the RSA encryption.
#Skipped
#=====================================#
#7. Get ready for the next class (starting Coding Theory) by reading Chapter 1 of A First Course in Coding Theory by Raymond Hill
#Done.
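#=====================================#
#Addendum to problem 2 (referenced there): a minimal sketch of what a recursive Euclidean-algorithm
#procedure like the EA from class presumably looks like; the name EArec and this exact form are my
#guess, not the class code.
EArec:=proc(m,n)
 if n=0 then
  m
 else
  EArec(n,m mod n)
 fi:
end:
#Each call of EArec adds a stack frame, so on inputs as large as those in problem 2 it does the same
#arithmetic as EAnew but with extra call overhead, and a deep enough recursion can exhaust the stack;
#the iterative EAnew avoids both.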