Numerical Computing, Spring 2022

Homework Assignment 7

1. This question concerns the QR algorithm for eigenvalues.

(a) Implement the QR algorithm for eigenvalues without shifts (AG, p. 302). Although, as stated in AG, it will eventually converge to an upper triangular matrix under the assumptions that A is real, has real eigenvalues, and has nonsingular principal submatrices, this can take a long time. Let r_k denote the last row of the strict lower triangle of A_k, i.e.,

r_k = [(A_k)_{n,1}, . . . , (A_k)_{n,n−1}].

It turns out that r_k converges to zero much faster than the other rows in the lower triangle of A_k.¹ So, inside the loop, compute ‖r_k‖ and terminate the loop when this drops below a tolerance, say 10^-12. How many iterations are required when A_0, the input to the QR algorithm, is set to this matrix?
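Here is a minimal sketch of the kind of loop part (a) asks for. The function name qrNoShift, the iteration cap maxit, and the choice of norm(rk) as the tested quantity are my own illustrative choices, not part of the assignment.

function [A, iters] = qrNoShift(A0, tol, maxit)
% Sketch only: unshifted QR iteration with the r_k stopping test.
n = size(A0, 1);
A = A0;
for iters = 1:maxit
    rk = A(n, 1:n-1);            % last row of the strict lower triangle of A_k
    if norm(rk) < tol            % e.g. tol = 1e-12
        return
    end
    [Q, R] = qr(A);              % A_{k-1} = Q_k R_k
    A = R * Q;                   % A_k = R_k Q_k
end
end

A call such as [Ak, iters] = qrNoShift(A0, 1e-12, 100000) would then report the iteration count.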

(b) The idea here is that if r_k is very small, then (A_k)_{n,n} is very close to an eigenvalue of A_k, and therefore, since the QR algorithm preserves eigenvalues, very close to an eigenvalue of A_0. Check this by calling eig with input A_0. How close is (A_k)_{n,n} to the closest eigenvalue of A_0?

(c) Now modify your program to implement the QR algorithm with shifts (AG, p. 303), setting α_k to (A_{k−1})_{n,n}, which is called the Rayleigh quotient shift since α_k = e_n^T A_{k−1} e_n, where e_n is the last coordinate vector [0, . . . , 0, 1]^T. Does convergence to the tolerance take place much faster now? If not, you have a bug.
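For comparison, here is a sketch of the shifted loop, using the same invented names as above; only the shift lines differ from the unshifted version, which is the point of part (c).

function [A, iters] = qrShifted(A0, tol, maxit)
% Sketch only: QR iteration with the Rayleigh quotient shift.
n = size(A0, 1);
A = A0;
for iters = 1:maxit
    if norm(A(n, 1:n-1)) < tol
        return
    end
    alpha = A(n, n);                  % alpha_k = (A_{k-1})_{n,n}
    [Q, R] = qr(A - alpha * eye(n));  % factor the shifted matrix
    A = R * Q + alpha * eye(n);       % add the shift back
end
end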


1 This is complicated to explain so we won’t go into it, but it has to do with the relationship to the block power method.

(d) Explain why the eigenvalues of A_k are the same as the eigenvalues of A_{k−1} even when shifts are used. Hint: it's very similar to the explanation I gave for the unshifted algorithm, also taking into account how subtracting the shift α_k I from a matrix changes its eigenvalues.

If we wanted to compute all the eigenvalues of A_0 we would now "deflate" the matrix by deleting the last row and column and continuing with the remaining matrix of order n − 1, as is done by AG in their code qreig.m, but this is not required here.

2. This concerns bidiagonalization of a rectangular matrix.

(a) Following the outline on AG p. 244, write a Matlab function housesvd to reduce an m × n rectangular matrix A, where m > n, to bidiagonal form using Householder transformations, applying different transformations on the left and right of A. You need n transformations on the left but only n − 2 on the right, applied in the sequence left, right, . . . , left, right, left, left. If only one output argument is requested (check whether nargout is 1) simply return the bidiagonal matrix as a full matrix with explicit zeros below the main diagonal and above the first superdiagonal.

If three output arguments are requested, the first should be the matrix with the same bidiagonal entries but with Householder transformation information in the lower and upper triangles instead of explicit zeros; you will also need to output two extra vectors, say u and v, to store the first component of each Householder transformation. It will be helpful to look at houseeig on p. 239, which reduces a square matrix A to tridiagonal form by applying the same Householder transformations to the left and right of A. Test your code, with both output variations, on this matrix, printing the output in both cases (format short e is fine).
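As a structural guide only, here is a sketch of the one-output branch of housesvd; the helper housevec is my own convenience routine, and AG build their Householder vectors somewhat differently, so check the details against AG p. 244 rather than against this sketch.

function B = housesvd(A)
% Sketch: reduce A (m x n, m > n) to bidiagonal form.
[m, n] = size(A);
B = A;
for k = 1:n
    % left transformation: zeros B(k+1:m, k)
    v = housevec(B(k:m, k));
    B(k:m, k:n) = B(k:m, k:n) - 2 * v * (v' * B(k:m, k:n));
    if k <= n - 2
        % right transformation: zeros B(k, k+2:n)
        w = housevec(B(k, k+1:n)');
        B(k:m, k+1:n) = B(k:m, k+1:n) - 2 * (B(k:m, k+1:n) * w) * w';
    end
end
B = triu(tril(B, 1));   % explicit zeros outside the two diagonals
end

function v = housevec(x)
% Unit vector v such that (I - 2*v*v')*x is a multiple of e_1.
v = x;
s = 1; if x(1) < 0, s = -1; end   % sign chosen to avoid cancellation
v(1) = v(1) + s * norm(x);
if norm(v) > 0, v = v / norm(v); end
end

The three-output branch, which stores the Householder information in the triangles and in the extra vectors u and v, follows the same loop structure.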

(b) Approximately how many flops are required by housesvd, in terms of m and n? Explain your reasoning.

If we wanted to compute all the singular values of A, we could now apply a variant of the QR algorithm to the bidiagonal matrix, but this is complicated to explain so we’ll move to a simpler method.

3. This concerns the Block Power method for the SVD.

(a) The signs of the diagonal entries of R in a QR factorization are arbitrary: although Gram-Schmidt (which we are not using) chooses them to be positive, Householder (which Matlab's qr uses) does not.² Since we'd like them to be positive in the next part, write a short Matlab function [Q,R]=qrPosDiagR(A) which computes the reduced Q and R factors of A by calling Matlab's qr(A,0) and then modifies the Q and R factors so that the diagonal entries of R are all positive (or zero, but that's unlikely), by changing the sign of the corresponding row of R and the corresponding column of Q, so that the equation A = QR still holds (check this!). One way to see that this "works" is to observe that QR = QS²R = (QS)(SR), where S is a diagonal matrix of ±1's.
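A sketch of one way qrPosDiagR could be written (the logical-index sign flip is just one option):

function [Q, R] = qrPosDiagR(A)
% Reduced QR with the diagonal of R forced to be nonnegative.
[Q, R] = qr(A, 0);
flip = diag(R) < 0;        % which diagonal entries are negative
R(flip, :) = -R(flip, :);  % flip those rows of R ...
Q(:, flip) = -Q(:, flip);  % ... and the corresponding columns of Q
end

Since the same diagonal ±1 factor S multiplies the columns of Q and the rows of R, the product QR = (QS)(SR) is unchanged.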

(b) Implement the Block Power method for the SVD given on p. 11A-5 of my lecture notes for April 12 (this is not covered in AG), using qrPosDiagR for both QR factorizations in the loop. It turns out that for random matrices, both the upper triangular matrices R_U and R_V converge to diagonal matrices, so compute the norm of the strictly upper triangular parts of both matrices (either by using triu(.,1) or by subtracting off the diagonal entries) and terminate when the sum of these two norms is less than a tolerance, tol. The inputs to this function should be an m × n matrix A with m > n, an integer r that is usually much less than n, and the tolerance tol.

Before the loop, set the initial matrix V_0 to the n × r matrix whose columns are the first r columns of the n × n identity matrix, obtained by eye(n,r); you can get m and n from size(A). This block power method can be slow, like the QR method for eigenvalues without shifts, but it works pretty well when r is small. Run it on this matrix A, setting r = 3 and setting tol to, say, 10^-6 (you can also try other values if you like).
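Here is a minimal sketch of how the iteration could be organized, based only on the description above; the function name blockpowersvd, the update order inside the loop, and the maxit safeguard are my assumptions and should be checked against the April 12 lecture notes.

function [U, V, RU, RV] = blockpowersvd(A, r, tol)
% Sketch: block power iteration for the top r singular triplets of A,
% using qrPosDiagR from part (a) for both factorizations.
[m, n] = size(A);        % m is not needed below, but this is how to get n
V = eye(n, r);           % V_0: first r columns of the n x n identity
maxit = 100000;          % safeguard, not part of the assignment
for k = 1:maxit
    [U, RU] = qrPosDiagR(A * V);    % update the left block
    [V, RV] = qrPosDiagR(A' * U);   % update the right block
    off = norm(triu(RU, 1)) + norm(triu(RV, 1));
    if off < tol                    % both R factors nearly diagonal
        return
    end
end
end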

Now compare the diagonal entries of the final R_U and R_V to the largest r singular values of A, which you can compute from Matlab's svd. Are they good approximations to the singular values? Are the final U and V good approximations to left and right singular vectors? Remember that the scale factors of singular vectors are not unique, so you can't expect them to be close to the U and V computed by svd. Instead you can check whether the singular value residual norm ‖AV − UD‖ is small, where U and V are computed by your block power code and D is either the diagonal of R_U or of R_V (check both).
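Assuming output names as in the sketch above, the checks might look like this:

s = svd(A);                        % reference singular values
[diag(RU), diag(RV), s(1:r)]       % compare against the largest r of them
D = diag(diag(RU));                % or diag(diag(RV)); check both
norm(A * V - U * D)                % singular value residual; should be small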

2 As we discussed, the signs are actually chosen to avoid cancellation in the Householder operations, but the modification described in part (a) above, which is done after the computation of Q and R is finished, does not introduce any new cancellation.

4. Download WallOfWindows.jpg and read it into Matlab using A=imread('WallOfWindows.jpg','jpg'). Display it with image(A).

If you type whos A you will see that A is a 3456 × 4608 × 3 three-dimensional "matrix" (or tensor) of unsigned 8-bit integers. Split these into red, green and blue components, each a 3456 × 4608 matrix of unsigned integers, via A_red=A(:,:,1), etc., and convert these to double precision matrices using double.

(a) Compute the reduced (economy-sized) SVD of each color component matrix separately using svd(M,0). (If you like you can transpose the component matrix first so it has more rows than columns, as in my notes, but this is actually not necessary; we made this assumption only for convenience.) Then, for each component matrix, compute its nearest rank r approximation (see pp. 6–7 of Notes on Eigenvalues and Singular Values), for r = 10, 20, 30 and 100.

Recombine them into the 3456 × 4608 × 3 uint8 format, and then display the resulting four pictures using image, labelling them appropriately.

How well do they approximate the original picture, in your opinion? This shows that the SVD is a useful approach to image compression: instead of storing the whole matrix, you need only store the part of the SVD needed to construct the rank r approximation. How much storage does this require, in terms of m, n and r?
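A sketch of one way to carry out part (a) for a single value of r; the variable names Ar, M and Mr are mine, and the loop would be rerun (or wrapped in another loop) for r = 10, 20, 30 and 100.

A = imread('WallOfWindows.jpg', 'jpg');
r = 30;                                      % also try 10, 20 and 100
Ar = zeros(size(A), 'uint8');
for c = 1:3                                  % red, green, blue components
    M = double(A(:, :, c));
    [U, S, V] = svd(M, 0);                   % reduced SVD of one component
    Mr = U(:, 1:r) * S(1:r, 1:r) * V(:, 1:r)';   % nearest rank r approximation
    Ar(:, :, c) = uint8(Mr);                 % round and clip back to 8 bits
end
figure, image(Ar), title(sprintf('rank %d approximation', r))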

(b) The problem with the method just described is that it is expensive to call svd on such a large matrix, and it would take much too long for a much larger matrix. Can you get pictures that are just as good by using your block power method from the previous question, with less computational expense, again using r = 10, 20, 30 and 100? You may find you don't have to make the tolerance in the block power method very small to get good pictures. How big can you make tol and still get good pictures, and how does the time required compare to the time required using svd?
