## Homework Assignment 7

### 1. This question concerns the QR algorithm for eigenvalues.

(a) Implement the QR algorithm for eigenvalues without shifts (AG, p. 302). Although, as stated in AG, it will eventually converge to an upper triangular matrix under the assumptions that A is real, has real eigenvalues, and has nonsingular principal submatrices, this can take a long time. Let r_k denote the last row of the strict lower triangle of A_k, i.e.,

r_k = [(A_k)_{n,1}, . . . , (A_k)_{n,n−1}] .

It turns out that r_k converges to zero much faster than the other rows in the lower triangle of A_k.¹ So, inside the loop, compute ‖r_k‖ and terminate the loop when this drops below a tolerance, say 10⁻¹². How many iterations are required when A_0, the input to the QR algorithm, is set to this matrix?
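As a sanity check of the stopping criterion, here is a minimal Python/NumPy sketch of the unshifted iteration (the assignment itself asks for Matlab, and the symmetric test matrix below is a stand-in, not the assigned A_0):

```python
import numpy as np

def qr_eig_noshift(A0, tol=1e-12, maxit=100000):
    """Unshifted QR algorithm (a Python/NumPy sketch of AG p. 302).
    Stops when r_k, the last row of the strict lower triangle of A_k,
    has norm below tol. Returns A_k and the iteration count."""
    A = np.array(A0, dtype=float)
    n = A.shape[0]
    for k in range(1, maxit + 1):
        Q, R = np.linalg.qr(A)       # A_{k-1} = Q_k R_k
        A = R @ Q                    # A_k = R_k Q_k
        rk = A[n - 1, :n - 1]        # last row of the strict lower triangle
        if np.linalg.norm(rk) < tol:
            return A, k
    return A, maxit

# Stand-in matrix: symmetric, so it is real with real eigenvalues.
A0 = np.array([[4.0, 1.0, 0.0],
               [1.0, 3.0, 1.0],
               [0.0, 1.0, 2.0]])
Ak, iters = qr_eig_noshift(A0)
```

The loop count grows as the eigenvalue ratio |λ_n/λ_{n−1}| approaches 1, which is why the shifted variant in part (c) matters.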

(b) The idea here is that if r_k is very small, then (A_k)_{n,n} is very close to an eigenvalue of A_k, and therefore, since the QR algorithm preserves eigenvalues, very close to an eigenvalue of A_0. Check this by calling eig with input A_0. How close is (A_k)_{n,n} to the closest eigenvalue of A_0?
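In Matlab the check is essentially min(abs(eig(A0) - Ann)). A Python/NumPy analogue, using a hypothetical 3×3 matrix and a stand-in value for (A_k)_{n,n} rather than the assigned data:

```python
import numpy as np

# Hypothetical stand-in data: A0 is not the assigned matrix, and `approx`
# plays the role of (A_k)_{n,n} returned by the QR iteration.
A0 = np.array([[4.0, 1.0, 0.0],
               [1.0, 3.0, 1.0],
               [0.0, 1.0, 2.0]])      # eigenvalues: 3 - sqrt(3), 3, 3 + sqrt(3)
approx = 3 - np.sqrt(3)              # pretend this came out of the QR loop
evals = np.linalg.eigvals(A0)        # Matlab: eig(A0)
dist = np.abs(evals - approx).min()  # distance to the closest eigenvalue
```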

(c) Now modify your program to implement the QR algorithm with shifts (AG, p. 303), setting α_k to (A_{k−1})_{n,n}, which is called the Rayleigh quotient shift since α_k = e_n^T A_{k−1} e_n, where e_n is the last coordinate vector [0, . . . , 0, 1]^T. Does convergence to the tolerance take place much faster now? If not, you have a bug.
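A sketch of the shifted loop in Python/NumPy (again a stand-in matrix, not the assigned one; note that the shift is subtracted before the QR factorization and added back after):

```python
import numpy as np

def qr_eig_shift(A0, tol=1e-12, maxit=1000):
    """QR algorithm with the Rayleigh quotient shift alpha_k = (A_{k-1})_{n,n}.
    A Python/NumPy sketch; the assignment asks for Matlab."""
    A = np.array(A0, dtype=float)
    n = A.shape[0]
    I = np.eye(n)
    for k in range(1, maxit + 1):
        alpha = A[n - 1, n - 1]             # Rayleigh quotient shift
        Q, R = np.linalg.qr(A - alpha * I)  # A_{k-1} - alpha_k I = Q_k R_k
        A = R @ Q + alpha * I               # A_k = R_k Q_k + alpha_k I
        if np.linalg.norm(A[n - 1, :n - 1]) < tol:
            return A, k
    return A, maxit

A0 = np.array([[4.0, 1.0, 0.0],
               [1.0, 3.0, 1.0],
               [0.0, 1.0, 2.0]])
Ak, iters = qr_eig_shift(A0)
```

On this small symmetric example the iteration count drops to a handful, versus dozens for the unshifted loop.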

¹ This is complicated to explain so we won’t go into it, but it has to do with the relationship to the block power method.

(d) Explain why the eigenvalues of A_k are the same as the eigenvalues of A_{k−1} even when shifts are used. Hint: it’s very similar to the explanation I gave for the unshifted algorithm, also taking into account how subtracting the shift α_k I from a matrix changes its eigenvalues.
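For orientation, a sketch of the algebra (not a substitute for your own explanation): writing the shifted step as A_{k−1} − α_k I = Q_k R_k and A_k = R_k Q_k + α_k I, substituting R_k = Q_k^T (A_{k−1} − α_k I) gives

A_k = Q_k^T (A_{k−1} − α_k I) Q_k + α_k I = Q_k^T A_{k−1} Q_k ,

so A_k is orthogonally similar to A_{k−1}; your explanation should connect this to how the shift moves, and then restores, the eigenvalues.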

If we wanted to compute all the eigenvalues of A_0 we would now “deflate” the matrix by deleting the last row and column and continuing with the remaining matrix of order n − 1, as is done by AG in their code qreig.m, but this is not required here.

### 2. This concerns bidiagonalization of a rectangular matrix.

(a) Following the outline on AG p. 244, write a Matlab function housesvd to reduce an m×n rectangular matrix A, where m > n, to bidiagonal form using Householder transformations, applying different transformations on the left and right of A. You need n transformations on the left but only n − 2 on the right, applied in the sequence left, right, . . . , left, right, left, left. If only one output argument is requested (check whether nargout is 1), simply return the bidiagonal matrix as a full matrix with explicit zeros below the main diagonal and above the first superdiagonal.

If three output arguments are requested, the first should be the matrix with the same bidiagonal entries but with Householder transformation information in the lower and upper triangles instead of explicit zeros; you will also need to output two extra vectors, say u and v, to store the first component of each Householder transformation. It will be helpful to look at houseeig on p. 239, which reduces a square matrix A to tridiagonal form by applying the same Householder transformations to the left and right of A. Test your code, with both output variations, on this matrix, printing the output in both cases (format short e is fine).
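The reflector sequence can be sketched in Python/NumPy as follows (structure only: this sketch returns just the bidiagonal matrix, without the packed-reflector output or the u, v vectors that housesvd must also provide):

```python
import numpy as np

def house(x):
    # Householder vector v with (I - 2 vv^T / v^Tv) x = -sign(x_1) ||x|| e_1
    v = np.array(x, dtype=float)
    sigma = np.linalg.norm(v)
    if sigma == 0.0:
        v[0] = 1.0          # x is already zero; use a trivial reflector
        return v
    v[0] += np.copysign(sigma, v[0])
    return v

def apply_left(B, v):
    # B <- (I - 2 vv^T / v^Tv) B, without forming the reflector explicitly
    return B - np.outer(v, (2.0 / (v @ v)) * (v @ B))

def bidiag(A):
    """Reduce m x n A (m > n) to upper bidiagonal form: n left reflectors,
    n - 2 right reflectors, in the order left, right, ..., left, left."""
    B = np.array(A, dtype=float)
    m, n = B.shape
    for j in range(n):
        B[j:, j:] = apply_left(B[j:, j:], house(B[j:, j]))   # zero below B[j, j]
        if j < n - 2:                                        # right reflector:
            v = house(B[j, j + 1:])                          # zero right of B[j, j+1]
            B[j:, j + 1:] = apply_left(B[j:, j + 1:].T, v).T
    return B
```

Since the transformations are orthogonal on both sides, the result has the same singular values as A, which gives a quick correctness check.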

(b) Approximately how many flops are required by housesvd in terms of m and n? Explain your reasoning.

If we wanted to compute all the singular values of A, we could now apply a variant of the QR algorithm to the bidiagonal matrix, but this is complicated to explain so we’ll move to a simpler method.

### 3. This concerns the Block Power method for the SVD.

(a) The signs of the diagonal entries of R in a QR factorization are arbitrary: although Gram-Schmidt (which we are not using) chooses them to be positive, Householder (which Matlab’s qr uses) does not.² Since we’d like them to be positive in the next part, write a short Matlab function [Q,R]=qrPosDiagR(A) which computes the reduced Q and R factors of A by calling Matlab’s qr(A,0) and then modifies the Q and R factors so that the diagonal entries of R are all positive (or zero, but that’s unlikely), by changing the sign of the corresponding row of R and the corresponding column of Q, so that the equation A = QR still holds (check this!). One way to see that this “works” is to observe that QR = QS²R = (QS)(SR), where S is a diagonal matrix of ±1’s.
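A Python/NumPy analogue of the requested Matlab function (np.linalg.qr in reduced mode plays the role of qr(A,0)):

```python
import numpy as np

def qr_pos_diag_r(A):
    """Reduced QR with the diagonal of R made nonnegative, by flipping the
    sign of each offending row of R and the matching column of Q."""
    Q, R = np.linalg.qr(A, mode='reduced')  # Matlab: qr(A, 0)
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0                         # leave (unlikely) zero diagonals alone
    Q = Q * s                               # scale column j of Q by s_j
    R = s[:, None] * R                      # scale row j of R by s_j
    return Q, R
```

Since S² = I, the product QR is unchanged, which is exactly the QS²R identity above.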

(b) Implement the Block Power method for the SVD given on p. 11A-5 of my lecture notes for April 12 (this is not covered in AG), using qrPosDiagR for both QR factorizations in the loop. It turns out that for random matrices, both the upper triangular matrices RU and RV converge to diagonal matrices, so compute the norm of the strictly upper triangular parts of both matrices (either by using triu(.,1) or by subtracting off the diagonal entries) and terminate when the sum of these two norms is less than a tolerance, tol. The inputs to this function should be an m × n matrix A with m > n, an integer r that is usually much less than n, and the tolerance tol.

Before the loop, set the initial matrix V_0 to the n × r matrix whose columns are the first r columns of the n × n identity matrix, obtained by eye(n,r); you can get m and n from size(A). This block power method can be slow, like the QR method for eigenvalues without shifts, but it works pretty well when r is small. Run it on this matrix A, setting r = 3 and setting tol to say 10⁻⁶ (you can also try other values if you like).
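The lecture notes are not reproduced here, so the Python/NumPy sketch below uses the standard block power (orthogonal) iteration — alternate QR factorizations of AV and AᵀU — which matches the inputs and stopping test described above, though details may differ from p. 11A-5; the matrix is a stand-in, not the assigned A:

```python
import numpy as np

def qr_pos_diag_r(A):
    # part (a)'s helper: reduced QR with nonnegative diagonal of R
    Q, R = np.linalg.qr(A, mode='reduced')
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0
    return Q * s, s[:, None] * R

def block_power_svd(A, r, tol, maxit=100000):
    """Block power sketch for the top r singular triplets: alternate
    U RU = qr(A V) and V RV = qr(A' U); stop when both R factors are
    numerically diagonal."""
    m, n = A.shape
    V = np.eye(n, r)                               # Matlab: eye(n, r)
    for _ in range(maxit):
        U, RU = qr_pos_diag_r(A @ V)
        V, RV = qr_pos_diag_r(A.T @ U)
        off = (np.linalg.norm(np.triu(RU, 1)) +    # Matlab: norm(triu(RU,1))
               np.linalg.norm(np.triu(RV, 1)))
        if off < tol:
            break
    return U, V, RU, RV

rng = np.random.default_rng(0)                     # stand-in A, not the assigned one
A = rng.standard_normal((8, 5))
U, V, RU, RV = block_power_svd(A, r=3, tol=1e-6)
```

At convergence the diagonals of RU and RV both approximate the r largest singular values, which is what the comparison below checks.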

² As we discussed, the signs are actually chosen to avoid cancellation in the Householder operations, but the modification in part (a), which is done after the computation of Q and R is finished, does not introduce any new cancellation.

Now compare the diagonal entries of the final RU and RV to the largest r singular values of A, which you can compute from Matlab’s svd. Are they good approximations to the singular values? Are the final U and V good approximations to left and right singular vectors? Remember that the scale factors of singular vectors are not unique, so you can’t expect them to be close to the U and V computed by svd. Instead you can check whether the singular value residual norm ‖AV − UD‖ is small, where U and V are computed by your block power code and D is either the diagonal of RU or of RV (check both).
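As a sanity check of the residual test itself: with exact singular triplets from svd, ‖AV − UD‖ is at roundoff level, so your block power output should achieve something comparable to tol even though its U and V columns may differ in sign from svd’s (stand-in matrix below):

```python
import numpy as np

# With exact singular vectors, A V = U D holds column by column (A v_i = sigma_i u_i),
# so the residual norm ||A V - U D|| measures the quality of computed triplets.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
U_full, s, Vt = np.linalg.svd(A, full_matrices=False)  # Matlab: svd(A, 0)
r = 3
U, V, D = U_full[:, :r], Vt[:r, :].T, np.diag(s[:r])
resid = np.linalg.norm(A @ V - U @ D)
```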