The QR Algorithm for Eigenvalue Estimation: Theory and Experiments
Except where reference is made to the work of others, the work described in this thesis is
my own or was done in collaboration with my advisory committee. This thesis does not
include proprietary or classified information.
Wei Feng
Certificate of Approval:
Tin-Yau Tam
Professor
Mathematics and Statistics
Thomas H. Pate, Chair
Professor
Mathematics and Statistics
John P. Holmes, III
Professor
Mathematics and Statistics
George T. Flowers
Dean
Graduate School
The QR Algorithm for Eigenvalue Estimation: Theory and Experiments
Wei Feng
A Thesis
Submitted to
the Graduate Faculty of
Auburn University
in Partial Fulfillment of the
Requirements for the
Degree of
Master of Science
Auburn, Alabama
December 19, 2008
The QR Algorithm for Eigenvalue Estimation: Theory and Experiments
Wei Feng
Permission is granted to Auburn University to make copies of this thesis at its
discretion, upon the request of individuals or institutions and at
their expense. The author reserves all publication rights.
Signature of Author
Date of Graduation
Vita
Wei Feng, only son and the third child of Zhiguo Feng and Shulan He, was born in
Tangshan city, Hebei Province, China on Nov. 28, 1976. He graduated from the First High
School of Luanxian in 1996 and was enrolled in Shanghai Jiao Tong University the same
year, where he earned both his Bachelor of Engineering and Master of Science degrees in
electrical engineering in 2000 and 2003, respectively. He then worked for Magima Inc. in
2003 for several months before he went to the United States with his beloved wife, Shuang
Feng. He then entered the PhD program in electrical engineering of Auburn University in
Aug. 2004, where he is expected to receive the degree in Spring 2009. He was accepted
into the Master's program in mathematics at Auburn University in Feb. 2008. He and his
wife shared the same high school, the same university for their undergraduate and graduate
years in China, and the same university in the United States in Auburn. They are blessed
with a lovely daughter, Grace T. Feng, who was born on May 19, 2005.
Thesis Abstract
The QR Algorithm for Eigenvalue Estimation: Theory and Experiments
Wei Feng
Master of Science, December 19, 2008
(M.E., Shanghai Jiao Tong University, 2003)
(B.S., Shanghai Jiao Tong University, 2000)
57 Typed Pages
Directed by Thomas H. Pate
In this thesis, we explore one of the most important numerical algorithms ever invented,
the QR algorithm for computing matrix eigenvalues. First, we describe the notation and
mathematical symbols used throughout the thesis in Chapter 1. Then we lay the groundwork
by stating and proving some basic lemmas in Chapter 2. Then in Chapter 3, we prove
the convergence of the QR algorithm under the assumption of distinct magnitudes for all
eigenvalues. This constraint is relaxed in Chapter 4, where we prove the convergence of
the QR algorithm under the assumption of possibly equal magnitude eigenvalues. Finally,
in Chapter 5, we present some numerical experiments to validate the conclusions drawn in
this thesis.
Acknowledgments
First of all, I would like to thank Dr. Thomas H. Pate, my principal adviser, for his
teaching and guidance throughout the whole process. What is not shown in this work is his
meticulous yet picture clear methodology toward mathematics that I have witnessed and
hopefully learned from. I am also thankful to Dr. Gary M. Sampson, Dr. Jo W. Heath,
Dr. Narendra K. Govil, Dr. Amnon J. Meir and Dr. Stephen E. Stuckwisch, from whose
teachings I have enjoyed mathematics. I also want to thank Dr. Tin-Yau Tam and Dr.
John P. Holmes for serving on my committee and proofreading my thesis.
I also want to thank Dr. Thomas S. Denney Jr. and Dr. Stanley J. Reeves from the
Electrical and Computer Engineering department, not only for the financial support they
granted me during the years I spent in Auburn, but also for the teaching and guidance that
have honed my problem solving skills. I also appreciate the interactions that I had with the
students in the image processing group.
Finally, I want to thank my family for their constant love and support. I want to
thank my wife not only for her love and care, but also for sharing the burden over the
years. I want to thank my baby daughter for being the greatest and happiest
gift to me. In the end, I want to thank my parents and my sisters, who have raised me and
supported me unconditionally all my life.
Style manual or journal used: Journal of Approximation Theory (together with the style
known as "aums"). Bibliography follows van Leunen's A Handbook for Scholars.
Computer software used: The document preparation package TeX (specifically LaTeX)
together with the departmental style file aums.sty.
Table of Contents
List of Figures
1 Introduction
2 Useful Lemmas and Theorems
3 Convergence of the QR Algorithm
4 Generalization: Equal-Magnitude Eigenvalues
5 Experiments
Bibliography
List of Figures
5.1 Lower triangular (off-diagonal) part of $A_k$ converges to zero
5.2 Diagonal elements of $A_k$ converge to eigenvalues of $A$
5.3 Upper triangular (off-diagonal) part of $A_k$ converges in magnitude
5.4 Lower triangular (off-diagonal) part of $A_k$ converges to zero
5.5 Diagonal elements of $A_k$ converge to eigenvalues of $A$
5.6 Upper triangular (off-diagonal) part of $A_k$ converges in magnitude
5.7 Lower triangular (off-diagonal) part of $A_k$ converges to zero
5.8 Schematic illustration of the lower triangular off-diagonal blocks of $A_k$
5.9 Diagonal elements of $A_k$ do NOT converge to eigenvalues of $A$
5.10 Convergence of eigenvalue multisets of diagonal blocks of $A_k$ to those of $A$
5.11 Convergence of QR: $A$ is shifted by $(2+i)$ in the complex plane
Chapter 1
Introduction
The eigenvalues of an $n \times n$ matrix $A$ are the roots of its characteristic polynomial
$\det(A - \lambda I)$. The famous Abel-Ruffini theorem states that there is no algebraic formula for the
roots of a general polynomial of degree five or higher. This means that the best method
for computing the eigenvalues of a general $n \times n$ ($n \ge 5$) matrix is likely to be iterative in
nature. The most famous eigenvalue algorithm is the QR algorithm, discovered independently
by Francis [3, 4] and Kublanovskaya [5]. A convergence analysis of the QR algorithm
was given by Wilkinson [6]. A brief sketch of the early history of the development of
the QR algorithm was given by Parlett [13].
In this thesis, we present a complete, detailed proof of the convergence of the QR
algorithm under mild assumptions. First, convergence is proved assuming that the
magnitudes of all eigenvalues are distinct. Then this assumption is loosened so that the
magnitudes of some eigenvalues may be equal, under which convergence is redefined and
proved. Huang and Tam [14] proved the convergence of the QR algorithm for real matrices
with non-real eigenvalues. Since the eigenvalue magnitudes of such matrices are distinct
except for the conjugate pairs, they fit the loosened assumption.
We employ certain standard notations throughout this thesis. We let $\mathbb{C}^{m \times n}$ denote
the set of $m \times n$ complex matrices. Similarly, $\mathbb{R}^{m \times n}$ denotes the set of all $m \times n$ real
matrices. Both $A(i,j)$ and $A_{ij}$ refer to the $(i,j)$th element of the matrix $A$. On the other hand,
$A = [a_{ij}]$ signifies that the $(i,j)$th element of $A$ is $a_{ij}$. Given $A \in \mathbb{C}^{m \times n}$, we let $A^T$ denote
the transpose of $A$ and $A^*$ denote the conjugate transpose of $A$. If $A$ is square and
invertible, then $A^{-1}$ denotes the inverse of $A$. The $k \times k$ identity matrix is denoted $I_k$. In
circumstances where the dimension is unambiguous, the subscript is dropped; for example,
if confusion seems unlikely, we may write $I$ instead of $I_k$. The unit coordinate vectors of
dimension $n$ are denoted $e_i$, for $i = 1, 2, \dots, n$; that is, $e_i = [0, \dots, 0, 1, 0, \dots, 0]^T$, where
the $1$ is the $i$th element.
The notation $\mathrm{diag}([d_1, d_2, \dots, d_n])$ represents the diagonal matrix with $d_1, d_2, \dots, d_n$
on the diagonal, in that order. Similarly, if $x = [x_1, x_2, \dots, x_n]^T$ (or $x = [x_1, x_2, \dots, x_n]$)
is a vector, then the diagonal matrix with $x$ on the diagonal is defined as $\mathrm{diag}(x) =
\mathrm{diag}([x_1, x_2, \dots, x_n])$. The notation $\mathrm{Dg}$ applied to an $n \times n$ matrix $A$ is defined as
$\mathrm{Dg}(A) = \mathrm{diag}([A(1,1), A(2,2), \dots, A(n,n)])$. The zero scalar, zero vectors, and zero
matrices are all represented by $0$ for simplicity; the distinction should be clear from context.
The Kronecker delta is denoted $\delta_{ij}$, where
\[
\delta_{ij} = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{otherwise.} \end{cases}
\]
The character $\lambda$ is used exclusively to represent an eigenvalue.
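For concreteness, the two notational operators $\mathrm{diag}$ and $\mathrm{Dg}$ can be mirrored in a few lines of Python. This is only a sketch: the function names are ours, and matrices are plain nested lists indexed from 0 rather than from 1 as in the text.

```python
def diag(d):
    """Build the diagonal matrix with the entries of d on the diagonal."""
    n = len(d)
    return [[d[i] if i == j else 0 for j in range(n)] for i in range(n)]

def Dg(A):
    """Dg(A) = diag([A(1,1), A(2,2), ..., A(n,n)]) for a square matrix A."""
    return diag([A[i][i] for i in range(len(A))])

print(diag([5, 6]))          # [[5, 0], [0, 6]]
print(Dg([[1, 2], [3, 4]]))  # [[1, 0], [0, 4]]
```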
For a scalar $a$, $|a|$ represents the absolute value of $a$ and $\bar{a}$ denotes the complex conjugate
of $a$. For a matrix $A$, $|A|$ is the matrix whose elements are the absolute values of the
corresponding elements of $A$; in other words, $(|A|)_{ij} = |A_{ij}|$. The Euclidean norm of a
vector $v$ in $\mathbb{C}^n$ or $\mathbb{R}^n$ is denoted $\|v\|_2$. We reserve the notation $|||\cdot|||$ for matrix norms.
For example, $|||\cdot|||_p$ is the induced $p$-norm of a square matrix for $p \ge 1$, defined as
\[
|||A|||_p = \max_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}.
\]
As is well known, induced matrix $p$-norms are sub-multiplicative; that is,
if $A, B \in \mathbb{C}^{n \times n}$, then $|||AB|||_p \le |||A|||_p\,|||B|||_p$. Specifically, the matrix 1-norm of a matrix
$A \in \mathbb{C}^{n \times n}$ is
\[
|||A|||_1 = \max_{j=1,\dots,n} \sum_{i=1}^{n} |A(i,j)|. \tag{1.1}
\]
The Frobenius norm of a matrix $A \in \mathbb{C}^{n \times n}$ is defined as
\[
|||A|||_F = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} |A(i,j)|^2}. \tag{1.2}
\]
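Both norms translate directly into code; a minimal Python sketch follows (the function names are ours), computing (1.1) as the maximum absolute column sum and (1.2) as the root of the sum of squared moduli.

```python
import math

def one_norm(A):
    """Induced matrix 1-norm: the maximum absolute column sum, as in (1.1)."""
    n = len(A)
    return max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))

def frobenius_norm(A):
    """Frobenius norm: sqrt of the sum of squared moduli of the entries, (1.2)."""
    return math.sqrt(sum(abs(x) ** 2 for row in A for x in row))

A = [[3, -4], [0, 2]]
print(one_norm(A))        # 6   (column sums are 3 and 6)
print(frobenius_norm(A))  # sqrt(9 + 16 + 4) = sqrt(29)
```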
The limit of a sequence of matrices is the matrix consisting of the component-wise limits of
the matrix elements. For example, if $A_k$, $k = 1, 2, \dots$, are $n \times n$ complex matrices, then
$\lim_{k\to\infty} A_k = 0$ means that $\lim_{k\to\infty} A_k(i,j) = 0$ for all $i, j = 1, 2, \dots, n$.
The direct sum of matrices $A \in \mathbb{C}^{m \times m}$ and $B \in \mathbb{C}^{n \times n}$ is defined as
\[
A \oplus B = \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix},
\]
which is an $(m+n) \times (m+n)$ matrix.
Let $a, b \in \mathbb{C}^{n \times 1}$; then the inner product of $a$ and $b$, denoted $\langle a, b \rangle$, is $b^* a$.
The projection of a vector $a$ onto the vector subspace $W$ with respect to $\langle \cdot, \cdot \rangle$ is denoted
$P_W(a)$.
Chapter 2
Useful Lemmas and Theorems
Lemma 2.1. Suppose $A \in \mathbb{C}^{n \times n}$. If $A$ is unitary and upper triangular, then $A$ is diagonal.
Proof. Let $A = [a_{ij}]$. Since $A$ is unitary,
\[
\sum_{j=1}^{n} |a_{ij}|^2 = 1 \ \ \forall i = 1, \dots, n, \qquad \sum_{i=1}^{n} |a_{ij}|^2 = 1 \ \ \forall j = 1, \dots, n. \tag{2.1}
\]
We prove by induction. The lemma is obviously true when $n = 1$. Assume that the lemma
is true for $n = k$, $k \ge 1$. For $n = k+1$, write
\[
A = \begin{bmatrix} a_{11} & c \\ 0 & A_k \end{bmatrix}, \tag{2.2}
\]
where $A_k$ is $k \times k$ and $c$ is $1 \times k$. Since $A$ is unitary,
\[
A^* A = I_{k+1}. \tag{2.3}
\]
Plugging (2.2) into (2.3), we obtain
\[
\begin{bmatrix} \bar{a}_{11} & 0 \\ c^* & A_k^* \end{bmatrix}
\begin{bmatrix} a_{11} & c \\ 0 & A_k \end{bmatrix}
=
\begin{bmatrix} |a_{11}|^2 & \bar{a}_{11} c \\ a_{11} c^* & c^* c + A_k^* A_k \end{bmatrix}
= I_{k+1} =
\begin{bmatrix} 1 & 0 \\ 0 & I_k \end{bmatrix}.
\]
From the above, we infer that $|a_{11}|^2 = 1$ and $\bar{a}_{11} c = 0$. Thus $a_{11} \neq 0$, and consequently $c$ must
be $0$. This means that $A_k$ must be unitary and upper triangular. By the induction hypothesis,
$A_k$ is diagonal. We conclude that $A$ is diagonal.
Corollary 2.1. Suppose $A \in \mathbb{C}^{n \times n}$. If $A$ is unitary and upper triangular with positive
diagonal elements, then $A = I_n$.
Proof. By Lemma 2.1, $A$ is diagonal. Since $A$ is also unitary, we must have $|a_{ii}|^2 = 1$ for
each $i$. But $a_{ii} > 0$ for each $i$. Therefore $a_{ii} = 1$ for each $i$, so $A$ must be $I_n$.
Lemma 2.2. If $A \in \mathbb{C}^{n \times n}$ is upper triangular and invertible, then $A^{-1}$ is upper triangular.
Proof. Let $A = [a_1, a_2, \dots, a_n]$ and $B = A^{-1} = [b_1^T, b_2^T, \dots, b_n^T]^T$, where the $a_i$, $i = 1, \dots, n$, are
the columns of $A$, and the $b_i$, $i = 1, \dots, n$, are the rows of $B$. Since $A$ is upper triangular and
invertible, $a_{ij} = 0$ for all $i > j$ and $a_{ii} \neq 0$ for all $i = 1, \dots, n$. From $BA = I$, we get
$b_i a_j = \delta_{ij}$. We use induction to show that $b_{ij} = 0$ when $i > j$.
For $j = 1$ and $i > 1$, $(BA)_{i1} = b_{i1} a_{11}$. Since $a_{11} \neq 0$, we must have $b_{i1} = 0$ for all
$i > 1$. Now suppose that there exists a positive integer $1 \le k < n$ such that for all $j \le k$,
$b_{ij} = 0$ for all $n \ge i > j \ge 1$. For $j = k+1$ and $i > k+1$, $(BA)_{ij} = (BA)_{i,k+1} =
b_{i1} a_{1,k+1} + b_{i2} a_{2,k+1} + \cdots + b_{i,k+1} a_{k+1,k+1} = b_{i,k+1} a_{k+1,k+1} = 0$. But $a_{k+1,k+1} \neq 0$, so
$b_{i,k+1} = 0$ for all $i > k+1$.
Corollary 2.2. If $A \in \mathbb{C}^{n \times n}$ is upper triangular with positive diagonal elements, then $A^{-1}$
is also upper triangular with positive diagonal elements.
Proof. Let $B = [b_{ij}] = A^{-1}$, where $A = [a_{ij}]$. By Lemma 2.2, $B$ is upper triangular;
moreover, for each $i$, $1 \le i \le n$, we have $(BA)_{ii} = b_{ii} a_{ii} = 1$. Therefore $b_{ii} = 1/a_{ii} > 0$.
Corollary 2.3. If $A \in \mathbb{C}^{n \times n}$ is lower triangular and invertible, then $A^{-1}$ is lower triangular.
Proof. Since $A$ is lower triangular, $A^T$ is upper triangular. Thus $(A^T)^{-1}$ is upper triangular.
But $(A^T)^{-1} = (A^{-1})^T$. Hence $A^{-1}$ is lower triangular.
Lemma 2.3. If $R_1$ and $R_2$ are both upper triangular matrices, then $R_1 R_2$ is upper triangular.
Proof. Let $R_1 = [a_{ij}]$ and $R_2 = [b_{ij}]$. Then for $i > j$,
\[
(R_1 R_2)_{ij} = \sum_{t=1}^{n} a_{it} b_{tj} = 0,
\]
because it is always true that either $i > t$ or $t > j$.
Definition 1. A matrix $B \in \mathbb{C}^{n \times n}$ is said to be similar to $A \in \mathbb{C}^{n \times n}$ if there exists an
invertible matrix $S \in \mathbb{C}^{n \times n}$ such that $B = S^{-1} A S$. $S$ is called the similarity matrix of the
similarity transformation $A \to S^{-1} A S$.
Definition 2. If the matrix $A \in \mathbb{C}^{n \times n}$ is similar to a diagonal matrix, then $A$ is said to
be diagonalizable. Furthermore, if the similarity matrix is unitary, then $A$ is said to be
unitarily diagonalizable.
Lemma 2.4. If $A \in \mathbb{C}^{n \times n}$, then $A$ is diagonalizable if and only if there is a set of $n$ linearly
independent vectors, each of which is an eigenvector of $A$.
Proof. Please refer to page 46 of [1] for the proof of this lemma.
Definition 3. The matrix $A \in \mathbb{C}^{n \times n}$ is said to be normal if $A^* A = A A^*$.
Lemma 2.5. If $A \in \mathbb{C}^{n \times n}$ is normal, then $A$ is unitarily diagonalizable.
Proof. This is a consequence of Theorem 2.5.4 of [1].
Corollary 2.4. Suppose $A \in \mathbb{C}^{n \times n}$ is normal and has $n$ distinct eigenvalues. Let $A =
XDX^{-1}$ be any diagonalization of $A$. Then the columns of $X$ are mutually orthogonal.
Proof. Since $A$ is normal, by Lemma 2.5, $A$ is unitarily diagonalizable. Thus there exist
a unitary matrix $U$ and a diagonal matrix $D_1$ such that $A = U D_1 U^{-1}$. This is equivalent
to $AU = U D_1$, or
\[
A[u_1, u_2, \dots, u_n] = [u_1, u_2, \dots, u_n]\,\mathrm{diag}([d_{11}, d_{22}, \dots, d_{nn}]), \tag{2.4}
\]
where $u_i$ is the $i$th column vector of $U$ and $d_{ii}$ is the $i$th diagonal element of $D_1$, for each
$i = 1, 2, \dots, n$. Since $A$ has $n$ distinct eigenvalues, $d_{ii} \neq d_{jj}$ for $i \neq j$. Equation (2.4)
means that $u_i$ is the eigenvector corresponding to the eigenvalue $d_{ii}$ of $A$. Furthermore, the $u_i$,
$i = 1, 2, \dots, n$, are mutually orthogonal since $U$ is unitary.
For any matrix $X$ such that $A = XDX^{-1}$, all the columns of $X$ have to be eigenvectors
of $A$ corresponding to the different eigenvalues of $A$ listed on the diagonal of $D$. Without loss
of generality, we assume that $D = D_1$. Let the $i$th column of $X$ be $x_i$. Then $x_i$ is an
eigenvector of $A$ corresponding to the eigenvalue $d_{ii}$. Thus we have $x_i = c_i u_i$, where $c_i$ is a
non-zero constant for each $i$. Since the $u_i$, $i = 1, 2, \dots, n$, are mutually orthogonal, the $x_i$,
$i = 1, 2, \dots, n$, are also mutually orthogonal, because
\[
\langle x_i, x_j \rangle = \langle c_i u_i, c_j u_j \rangle = c_i \bar{c}_j \langle u_i, u_j \rangle = c_i \bar{c}_j \delta_{ij}.
\]
Definition 4. A function $|||\cdot||| : \mathbb{C}^{n \times n} \to \mathbb{R}$ is called a matrix norm if for all $A, B \in \mathbb{C}^{n \times n}$
it satisfies the following axioms:
(1) $|||A||| \ge 0$;
(2) $|||A||| = 0$ if and only if $A = 0$;
(3) $|||cA||| = |c|\,|||A|||$ for all complex scalars $c$;
(4) $|||A + B||| \le |||A||| + |||B|||$;
(5) $|||AB||| \le |||A|||\,|||B|||$.
Lemma 2.6. The Frobenius norm $|||\cdot|||_F$ is a matrix norm.
Proof. Please refer to page 291 of [1].
Lemma 2.7. If $U \in \mathbb{C}^{n \times n}$ is unitary and $A \in \mathbb{C}^{n \times n}$, then $|||UA|||_F = |||A|||_F$. In other
words, the Frobenius norm is invariant under unitary multiplication.
Proof. By definition (1.2), it is easy to see that $|||A|||_F^2 = \mathrm{trace}(A^* A)$. Thus,
\[
|||UA|||_F^2 = \mathrm{trace}\big((UA)^*(UA)\big) = \mathrm{trace}(A^* U^* U A) = \mathrm{trace}(A^* A) = |||A|||_F^2.
\]
Lemma 2.8. Suppose $A_k \in \mathbb{C}^{n \times n}$, $k = 1, 2, \dots$, is a sequence of matrices. Then the following
two conditions are equivalent:
1. $\lim_{k\to\infty} |||A_k|||_F = 0$;
2. $\lim_{k\to\infty} A_k = 0$.
Proof. Assume that $\lim_{k\to\infty} |||A_k|||_F = 0$. Then for all $i, j \in \{1, 2, \dots, n\}$,
\[
\lim_{k\to\infty} |A_k(i,j)| \le \lim_{k\to\infty} \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} |A_k(i,j)|^2} = \lim_{k\to\infty} |||A_k|||_F = 0.
\]
Therefore, $\lim_{k\to\infty} A_k = 0$.
Now assuming that $\lim_{k\to\infty} A_k = 0$, we have
\[
\lim_{k\to\infty} |||A_k|||_F^2 = \lim_{k\to\infty} \left( \sum_{i=1}^{n} \sum_{j=1}^{n} |A_k(i,j)|^2 \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} \left( \lim_{k\to\infty} |A_k(i,j)|^2 \right) = 0.
\]
Lemma 2.9. Let $t$ and $n$ be positive integers with $t \le n$. Suppose $Q_1, Q_2, Q_3, \dots$ is a
sequence of $n \times n$ unitary matrices. Let
\[
Q_k = \begin{bmatrix} Q_{11}^{(k)} & Q_{12}^{(k)} \\ Q_{21}^{(k)} & Q_{22}^{(k)} \end{bmatrix}, \tag{2.5}
\]
where each $Q_{11}^{(k)}$ is a $t \times t$ matrix and each $Q_{22}^{(k)}$ is an $(n-t) \times (n-t)$ matrix. If
$\lim_{k\to\infty} |||Q_{12}^{(k)}|||_F = 0$, then $\lim_{k\to\infty} |||Q_{21}^{(k)}|||_F = 0$.
Proof. Note that for each $k$ we have
\[
(n - t) = |||Q_{21}^{(k)}|||_F^2 + |||Q_{22}^{(k)}|||_F^2 = |||Q_{12}^{(k)}|||_F^2 + |||Q_{22}^{(k)}|||_F^2,
\]
since both the last $n-t$ rows and the last $n-t$ columns of the unitary matrix $Q_k$ have unit
2-norm. Thus $|||Q_{21}^{(k)}|||_F^2 = |||Q_{12}^{(k)}|||_F^2$ for each positive integer $k$. That
$\lim_{k\to\infty} |||Q_{12}^{(k)}|||_F^2 = 0$ then implies that $\lim_{k\to\infty} |||Q_{21}^{(k)}|||_F^2 = 0$.
Corollary 2.5. Suppose $Q_1, Q_2, Q_3, \dots$ is a sequence of $n \times n$ unitary matrices such that all
the elements above the diagonal have limit zero, i.e., $\lim_{k\to\infty} Q_k(i,j) = 0$ for all $1 \le i < j \le n$.
Then all the elements below the diagonal also have limit zero, i.e., $\lim_{k\to\infty} Q_k(i,j) = 0$
for all $1 \le j < i \le n$.
Proof. For any $i$ and $j$ such that $1 \le j < i \le n$, let
\[
Q_k = \begin{bmatrix} Q_{11}^{(k)} & Q_{12}^{(k)} \\ Q_{21}^{(k)} & Q_{22}^{(k)} \end{bmatrix}, \quad k = 1, 2, \dots,
\]
where each $Q_{11}^{(k)}$ is a $j \times j$ matrix and each $Q_{22}^{(k)}$ is an $(n-j) \times (n-j)$ matrix. Then $Q_k(i,j)$
belongs to $Q_{21}^{(k)}$. By Lemma 2.9, since $Q_{12}^{(k)}$ has limit zero, $Q_{21}^{(k)}$ also has limit zero, which
means that $Q_k(i,j)$ has limit zero for all $1 \le j < i \le n$.
Lemma 2.10. If $A_k \in \mathbb{C}^{n \times n}$ for $k = 1, 2, \dots$ and $\lim_{k\to\infty} A_k = 0$, then for any $B, C \in
\mathbb{C}^{n \times n}$, $\lim_{k\to\infty} B A_k C = 0$.
Proof. Since $\lim_{k\to\infty} A_k = 0$, $\lim_{k\to\infty} A_k(i,j) = 0$ for all $i, j = 1, 2, \dots, n$. Thus by the
definition (1.1) of the matrix 1-norm, $\lim_{k\to\infty} |||A_k|||_1 = 0$. Using the sub-multiplicative
property of the matrix 1-norm, we have
\[
|||B A_k C|||_1 \le |||B|||_1\,|||A_k|||_1\,|||C|||_1.
\]
Taking the limit as $k$ goes to infinity on both sides, we get $\lim_{k\to\infty} |||B A_k C|||_1 = 0$.
Lemma 2.11. Suppose $A_1, A_2, A_3, \dots$ and $B_1, B_2, B_3, \dots$ are sequences of $n \times n$ matrices
such that $\lim_{k\to\infty} A_k = \tilde{A}$ and $\lim_{k\to\infty} B_k = \tilde{B}$. Then $\lim_{k\to\infty} A_k B_k = \tilde{A}\tilde{B}$.
Proof. We use the Frobenius norm to prove this lemma. First, $|||B_k|||_F = |||(B_k - \tilde{B}) + \tilde{B}|||_F \le
|||B_k - \tilde{B}|||_F + |||\tilde{B}|||_F$. Thus $\lim_{k\to\infty} |||B_k|||_F = |||\tilde{B}|||_F$. Since the Frobenius norms of the $B_k$
converge, they are bounded; in other words, there exists a positive number $M$ such that
$|||B_k|||_F < M$ for all $k = 1, 2, \dots$.
Now we have
\[
\begin{aligned}
|||A_k B_k - \tilde{A}\tilde{B}|||_F &= |||A_k B_k - \tilde{A} B_k + \tilde{A} B_k - \tilde{A}\tilde{B}|||_F \\
&\le |||(A_k - \tilde{A}) B_k|||_F + |||\tilde{A}(B_k - \tilde{B})|||_F \\
&\le |||A_k - \tilde{A}|||_F\,|||B_k|||_F + |||\tilde{A}|||_F\,|||B_k - \tilde{B}|||_F \\
&\le M\,|||A_k - \tilde{A}|||_F + |||\tilde{A}|||_F\,|||B_k - \tilde{B}|||_F.
\end{aligned}
\]
Thus $\lim_{k\to\infty} |||A_k B_k - \tilde{A}\tilde{B}|||_F = 0$. According to Lemma 2.8, we have $\lim_{k\to\infty}(A_k B_k -
\tilde{A}\tilde{B}) = 0$. Therefore, $\lim_{k\to\infty} A_k B_k = \tilde{A}\tilde{B}$.
Lemma 2.12. Suppose $Q_1, Q_2, \dots$ is a sequence of $n \times n$ unitary matrices, and $R_1, R_2, \dots$
is a sequence of $n \times n$ upper triangular matrices, each of which has positive diagonal. If
$\lim_{k\to\infty} Q_k R_k = I$, then $\lim_{k\to\infty} Q_k = I$ and $\lim_{k\to\infty} R_k = I$.
Proof. Let $Q_k = [q_{ij}^{(k)}]$ and $R_k = [r_{ij}^{(k)}]$ for each $k$. Note that $\lim_{k\to\infty} |||Q_k R_k - I|||_F = 0$.
Moreover, because each $R_k$ is upper triangular, we have
\[
|||Q_k R_k - I|||_F^2 = |||Q_k (R_k - Q_k^*)|||_F^2 = |||R_k - Q_k^*|||_F^2 \ge \sum_{i=2}^{n} \sum_{j=1}^{i-1} |q_{ji}^{(k)}|^2,
\]
where the second equality follows from Lemma 2.7. Since $\lim_{k\to\infty} |||Q_k R_k - I|||_F^2 = 0$, we
must have $\lim_{k\to\infty} q_{ji}^{(k)} = 0$ whenever $i > j$. This means that the upper triangular part
of $Q_k$, excluding the diagonal, has limit zero. By Corollary 2.5, the lower triangular part
of $Q_k$, excluding the diagonal, also has limit zero. Then for each $i$, $1 \le i \le n$, we must have
$\lim_{k\to\infty} |q_{ii}^{(k)}| = 1$, simply because each row and column of each $Q_k$ has 2-norm equal to 1.
We now consider $R_k$. Since $\lim_{k\to\infty} |||R_k - Q_k^*|||_F^2 = 0$, and
\[
|||R_k - Q_k^*|||_F^2 \ge \sum_{i \neq j} |r_{ij}^{(k)} - \bar{q}_{ji}^{(k)}|^2,
\]
we have $\lim_{k\to\infty} \sum_{i \neq j} |r_{ij}^{(k)} - \bar{q}_{ji}^{(k)}|^2 = 0$. Thus $\lim_{k\to\infty} |r_{ij}^{(k)} - \bar{q}_{ji}^{(k)}| = 0$ when $i \neq j$. But we
have $0 \le |r_{ij}^{(k)}| \le |r_{ij}^{(k)} - \bar{q}_{ji}^{(k)}| + |\bar{q}_{ji}^{(k)}|$ by the triangle inequality. Moreover, $\lim_{k\to\infty} q_{ji}^{(k)} = 0$
when $i \neq j$. Therefore we must have $\lim_{k\to\infty} r_{ij}^{(k)} = 0$ for each $i \neq j$.
Since $\lim_{k\to\infty} Q_k R_k = I$, we have $\lim_{k\to\infty} (Q_k R_k)_{ii} = 1$ for each $i$, $1 \le i \le n$. But
\[
(Q_k R_k)_{ii} = \sum_{t=1}^{n} q_{it}^{(k)} r_{ti}^{(k)} = q_{ii}^{(k)} r_{ii}^{(k)} + \sum_{t \neq i} q_{it}^{(k)} r_{ti}^{(k)}.
\]
Since the off-diagonal parts of the $Q_k$ and $R_k$ both tend to $0$ as $k$ goes to infinity, we have
\[
\lim_{k\to\infty} (Q_k R_k)_{ii} = \lim_{k\to\infty} q_{ii}^{(k)} r_{ii}^{(k)} = 1.
\]
But we have already shown that $\lim_{k\to\infty} |q_{ii}^{(k)}| = 1$ for each $i$. This and the fact that
$\lim_{k\to\infty} q_{ii}^{(k)} r_{ii}^{(k)} = 1$ together imply that $\lim_{k\to\infty} |r_{ii}^{(k)}| = 1$ for each $i$. But each of the $R_k$
has positive diagonal. Therefore $\lim_{k\to\infty} |r_{ii}^{(k)}| = \lim_{k\to\infty} r_{ii}^{(k)} = 1$. We have now shown
that $\lim_{k\to\infty} R_k = I$. The final step is to show that $\lim_{k\to\infty} q_{ii}^{(k)} = 1$ for each $i$. This follows
immediately from the facts that $\lim_{k\to\infty} q_{ii}^{(k)} r_{ii}^{(k)} = 1$ and $\lim_{k\to\infty} r_{ii}^{(k)} = 1$.
Lemma 2.13. Let $B_1, B_2, \dots$ be a sequence of $n \times n$ matrices whose elements are bounded.
Let $Q_1, Q_2, \dots$ be a sequence of $n \times n$ unitary matrices such that $\lim_{k\to\infty} Q_k = I_n$. Let
$A_k = Q_k^* B_k Q_k$ for all $k = 1, 2, \dots$. Then $\lim_{k\to\infty} (A_k - B_k) = 0$.
Proof. Since the elements of the $B_k$, $k = 1, 2, \dots$, are bounded, there exists an $M > 0$ such
that $|B_k(i,j)| \le M$ for all $k = 1, 2, \dots$ and $i, j = 1, 2, \dots, n$. Let $Q_k =
[q_1^{(k)}, q_2^{(k)}, \dots, q_n^{(k)}]$, where $q_i^{(k)}$ is the $i$th column of $Q_k$, for all $i = 1, 2, \dots, n$. Since
$\lim_{k\to\infty} Q_k = I_n$, we have $\lim_{k\to\infty} q_i^{(k)} = e_i$. If we let $\epsilon_i^{(k)} = q_i^{(k)} - e_i$ for all
$k = 1, 2, \dots$ and $i = 1, 2, \dots, n$, then $\lim_{k\to\infty} \epsilon_i^{(k)} = 0$. Thus,
\[
\begin{aligned}
|A_k(i,j) - B_k(i,j)| &= \left| \big(q_i^{(k)}\big)^* B_k\, q_j^{(k)} - B_k(i,j) \right| \\
&= \left| \big(e_i + \epsilon_i^{(k)}\big)^* B_k \big(e_j + \epsilon_j^{(k)}\big) - B_k(i,j) \right| \\
&\le \left| \big(\epsilon_i^{(k)}\big)^* B_k\, e_j \right| + \left| \big(\epsilon_i^{(k)}\big)^* B_k\, \epsilon_j^{(k)} \right| + \left| e_i^T B_k\, \epsilon_j^{(k)} \right| \\
&\le \big|\epsilon_i^{(k)}\big|^T |B_k|\, e_j + \big|\epsilon_i^{(k)}\big|^T |B_k|\, \big|\epsilon_j^{(k)}\big| + e_i^T |B_k|\, \big|\epsilon_j^{(k)}\big| \\
&\le M \left( \big|\epsilon_i^{(k)}\big|^T J\, e_j + \big|\epsilon_i^{(k)}\big|^T J\, \big|\epsilon_j^{(k)}\big| + e_i^T J\, \big|\epsilon_j^{(k)}\big| \right),
\end{aligned}
\]
where $J$ is the $n \times n$ matrix all of whose elements are $1$. Since
\[
\lim_{k\to\infty} M \left( \big|\epsilon_i^{(k)}\big|^T J\, e_j + \big|\epsilon_i^{(k)}\big|^T J\, \big|\epsilon_j^{(k)}\big| + e_i^T J\, \big|\epsilon_j^{(k)}\big| \right) = 0,
\]
we get
\[
\lim_{k\to\infty} \big( A_k(i,j) - B_k(i,j) \big) = 0.
\]
Lemma 2.14. If $A \in \mathbb{C}^{n \times n}$ is invertible, then $A$ can be uniquely factorized as $A = QR$,
where $Q \in \mathbb{C}^{n \times n}$ is unitary and $R \in \mathbb{C}^{n \times n}$ is upper triangular with positive diagonal
elements.
Proof. The proof is essentially a description of the Gram-Schmidt orthogonalization process;
we give a sketch of that procedure. Denote $A = [a_1, a_2, \dots, a_n]$, where the $a_i$, $i = 1, \dots, n$,
are the columns of $A$. Since $A$ is invertible, $\{a_1, a_2, \dots, a_n\}$ is linearly independent. Let
$\beta_1 = a_1 / \|a_1\|_2$, so that $a_1 = \|a_1\|_2\, \beta_1$. Then let $\beta_2 = (a_2 - P_{W_1}(a_2)) / \|a_2 - P_{W_1}(a_2)\|_2$, where
$W_1 = \mathrm{span}\{\beta_1\}$. Note that since $\{a_1, a_2, \dots, a_n\}$ is linearly independent, $a_2 - P_{W_1}(a_2) \neq 0$.
One gets $a_2 = P_{W_1}(a_2) + \|a_2 - P_{W_1}(a_2)\|_2\, \beta_2$. Notice that $P_{W_1}(a_2) = \langle a_2, \beta_1 \rangle \beta_1$, so $a_2 =
\langle a_2, \beta_1 \rangle \beta_1 + \|a_2 - P_{W_1}(a_2)\|_2\, \beta_2$. In general, one gets $\beta_{i+1} = (a_{i+1} - P_{W_i}(a_{i+1})) / \|a_{i+1} -
P_{W_i}(a_{i+1})\|_2$, where $W_i = \mathrm{span}\{\beta_1, \beta_2, \dots, \beta_i\}$. Let $r_{11} = \|a_1\|_2$, $r_{ii} = \|a_i - P_{W_{i-1}}(a_i)\|_2$,
$r_{ij} = 0$ when $j < i$, and
\[
r_{ij} = \langle a_j, \beta_i \rangle \quad \text{for all } j > i. \tag{2.6}
\]
Then
\[
A = [a_1, a_2, \dots, a_n] = [\beta_1, \beta_2, \dots, \beta_n]
\begin{bmatrix}
r_{11} & r_{12} & \dots & r_{1n} \\
 & r_{22} & \dots & r_{2n} \\
 & & \ddots & \vdots \\
 & & & r_{nn}
\end{bmatrix}.
\]
Let $Q = [\beta_1, \beta_2, \dots, \beta_n]$ and $R = [r_{ij}]$. We get $A = QR$, where $Q$ is unitary by construction
and $R$ is invertible because $r_{ii} > 0$, $i = 1, 2, \dots, n$.
Now suppose that there exist two different factorizations with $Q_1$, $Q_2$, $R_1$, and $R_2$,
where $Q_1$ and $Q_2$ are unitary and $R_1$ and $R_2$ are upper triangular with positive diagonal
elements, such that $A = Q_1 R_1 = Q_2 R_2$. Then $Q_2^{-1} Q_1 = R_2 R_1^{-1}$. So $R_2 R_1^{-1}$ is upper
triangular, unitary, and has positive diagonal elements (by Lemma 2.3 and Corollary 2.2). By
Corollary 2.1, $R_2 R_1^{-1} = I$. Thus $R_1 = R_2$. Also, $Q_2^{-1} Q_1 = I$ leads to the conclusion that
$Q_1 = Q_2$. This proves the uniqueness of the factorization.
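The constructive proof above can be turned into a short program. The following is a minimal Gram-Schmidt sketch in plain Python (the function name is ours; no pivoting or re-orthogonalization, so it illustrates the construction of Lemma 2.14 rather than serving as a numerically robust implementation):

```python
import math

def qr_gram_schmidt(A):
    """Factor an invertible square A (list of rows) as A = QR, with Q unitary
    and R upper triangular with positive diagonal, following Lemma 2.14."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]  # columns a_1..a_n
    betas = []                                              # beta_1..beta_n
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i, beta in enumerate(betas):
            # r_ij = <a_j, beta_i> = beta_i^* a_j, then remove the projection
            R[i][j] = sum(b.conjugate() * a for b, a in zip(beta, cols[j]))
            v = [x - R[i][j] * b for x, b in zip(v, beta)]
        R[j][j] = math.sqrt(sum(abs(x) ** 2 for x in v))    # positive: A invertible
        betas.append([x / R[j][j] for x in v])
    Q = [[betas[j][i] for j in range(n)] for i in range(n)]  # beta_i as columns
    return Q, R

Q, R = qr_gram_schmidt([[2.0, 1.0], [0.0, 1.0]])
# here Q is the identity and R = [[2.0, 1.0], [0.0, 1.0]]
```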
Lemma 2.15. If $L$ is an $n \times n$ lower triangular matrix with unit diagonal, $U$ is an upper
triangular matrix, and $P_1$ and $P_2$ are permutation matrices such that $L = P_1 U P_2$, then
$P_2 = P_1^T$.
Proof. Let $l_{i,j} = L(i,j)$, $u_{i,j} = U(i,j)$, $\alpha_{i,j} = P_1(i,j)$, and $\beta_{i,j} = P_2(i,j)$ for all $i, j \in
\{1, 2, \dots, n\}$. Elementwise,
\[
l_{i,j} = [\alpha_{i,1}, \alpha_{i,2}, \dots, \alpha_{i,n}]\, U \begin{bmatrix} \beta_{1,j} \\ \beta_{2,j} \\ \vdots \\ \beta_{n,j} \end{bmatrix} = u_{t_i, s_j},
\]
where $t_i, s_j \in \{1, 2, \dots, n\}$ are the indices of the element $1$ in the $i$th row of $P_1$ and the $j$th
column of $P_2$, respectively, for all $i, j \in \{1, 2, \dots, n\}$. Note that $s$ and $t$ are permutations of
$\{1, 2, \dots, n\}$. We focus on the case $i = j$, i.e., $1 = l_{i,i} = u_{t_i, s_i}$. Immediately we get
that $t_i \le s_i$, since $U$ is upper triangular. Since this has to be true for all $i \in \{1, 2, \dots, n\}$,
we claim that $s_i = t_i$ for each $i \in \{1, 2, \dots, n\}$. Indeed, let $i_1$ be chosen such that $t_{i_1} = n$.
Then $s_{i_1} = n$; otherwise the element of $U$ corresponding to the position $(i_1, i_1)$ would be
$0$. Now choose $i_2$ such that $t_{i_2} = n-1$. Then $s_{i_2}$ cannot be $n$, because $s$ is a permutation
and $s_{i_1} = n$. Therefore $s_{i_2} = n-1$. Proceeding with $t_{i_k} = n-k$ for $k = 3, 4, \dots, n-1$, we get
that $t_i = s_i$ for all $i \in \{1, 2, \dots, n\}$. Recalling the definitions of $t_i$ and $s_i$, we see that for each
$i$ the $i$th row of $P_1$ and the $i$th column of $P_2$ are identical vectors, up to transposition.
In other words, $P_1(i,j) = P_2(j,i)$ for all $i, j = 1, 2, \dots, n$. Thus
$P_2 = P_1^T$.
Lemma 2.16. If $A \in \mathbb{C}^{n \times n}$ and $A$ is invertible, then $A$ can be factored as $A = LPU$, where
$L$ is a unit lower triangular matrix, $P$ is a permutation matrix, and $U$ is an invertible upper
triangular matrix. Furthermore, $P$ is unique.
Proof. Let $A = [a_{ij}] \in \mathbb{C}^{n \times n}$. Start from the first column of $A$ and find the first non-zero
element (such an element must exist since $A$ is invertible). Suppose this element is $a_{i,1}$,
where $1 \le i \le n$. Let
\[
L_1 = \begin{bmatrix} I_i & 0 \\ C & I_{n-i} \end{bmatrix},
\quad \text{where} \quad
C = \begin{bmatrix}
0 & \dots & 0 & c_1 \\
0 & \dots & 0 & c_2 \\
 & & & \vdots \\
0 & \dots & 0 & c_{n-i}
\end{bmatrix}
\]
and $c_t = -a_{i+t,1}/a_{i,1}$. Then the first column of $L_1 A$ is all zeros except for the $i$th element, which
remains $a_{i,1}$. This is essentially Gaussian elimination applied to the first column of $A$.
Let
\[
U_1 = \begin{bmatrix}
a_{i,1}^{-1} & b_1 & b_2 & \dots & b_{n-1} \\
0 & 1 & 0 & \dots & 0 \\
0 & 0 & 1 & \dots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & \dots & 0 & 1
\end{bmatrix},
\]
where $b_t = -a_{i,t+1}/a_{i,1}$ for all $t = 1, 2, \dots, n-1$. Then the $i$th row of $L_1 A U_1$ is all zeros
except for the first element, which is normalized to $1$. Note that the only change to the first
column of $L_1 A$ by right-multiplying $U_1$ is that its $i$th element is normalized to $1$. This, in
essence, is Gaussian elimination applied to the $i$th row of $L_1 A$.
Assume that $a'_{j,2}$ is the first non-zero element in the second column of $L_1 A U_1$. Let
\[
L_2 = \begin{bmatrix} I_j & 0 \\ G & I_{n-j} \end{bmatrix},
\quad \text{where} \quad
G = \begin{bmatrix}
0 & \dots & 0 & g_1 \\
0 & \dots & 0 & g_2 \\
 & & & \vdots \\
0 & \dots & 0 & g_{n-j}
\end{bmatrix}
\]
and $g_t = -a'_{j+t,2}/a'_{j,2}$. Then the second column of $L_2 L_1 A U_1$ is all zeros except for the $j$th
element, which remains $a'_{j,2}$. This is essentially Gaussian elimination applied to the
second column of $L_1 A U_1$.
Let
\[
U_2 = \begin{bmatrix}
1 & 0 & 0 & \dots & 0 \\
0 & (a'_{j,2})^{-1} & h_1 & \dots & h_{n-2} \\
0 & 0 & 1 & \dots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & \dots & 0 & 1
\end{bmatrix},
\]
where $h_t = -a'_{j,t+2}/a'_{j,2}$ for all $t = 1, 2, \dots, n-2$. Then the $j$th row of $L_2 L_1 A U_1 U_2$ is all
zeros except for the second element, which is normalized to $1$. This, in essence, is Gaussian
elimination applied to the $j$th row of $L_2 L_1 A U_1$.
Repeat the above process another $n-2$ times. Since $A$ is invertible, we will arrive at
a permutation matrix. Mathematically, we express the process as
\[
L_n L_{n-1} \cdots L_2 L_1 A U_1 U_2 \cdots U_n = P,
\]
where $L_k$ and $U_k$, for $k = 1, 2, \dots, n$, are the unit lower triangular matrix and the upper
triangular matrix used in the $k$th step, and $P$ is the final permutation matrix. Let
$\hat{L} = L_n L_{n-1} \cdots L_1$ and $\hat{U} = U_1 U_2 \cdots U_n$. Then $\hat{L}$ is unit lower triangular and $\hat{U}$ is upper
triangular. Thus we have $\hat{L} A \hat{U} = P$. Multiplying by $\hat{L}^{-1}$ on the left and $\hat{U}^{-1}$ on the right
gives $A = \hat{L}^{-1} P \hat{U}^{-1}$. By Lemma 2.2 and Corollary 2.3, $\hat{L}^{-1}$ is unit lower triangular and
$\hat{U}^{-1}$ is upper triangular. The existence of the LPU decomposition is proved.
Now, since $A$ is invertible, $U$ is invertible, which means all the diagonal elements of
$U$ are non-zero. Suppose there exist permutation matrices $P_1, P_2$, unit lower triangular
matrices $L_1, L_2$, and invertible upper triangular matrices $U_1, U_2$ such that $A = L_1 P_1 U_1 =
L_2 P_2 U_2$. Then $P_1 = L_1^{-1} L_2 P_2 U_2 U_1^{-1} = L P_2 U$, where $L = L_1^{-1} L_2$ is unit lower triangular
and $U = U_2 U_1^{-1}$ is upper triangular.
The condition $P_1 = L P_2 U$ is equivalent to $L = P_1 U^{-1} P_2^T = P_1 V P_2^T$, where $V = U^{-1}$.
Since $U$ is upper triangular, $V$ is upper triangular. By Lemma 2.15, we get that $P_2^T = P_1^T$,
i.e., $P_2 = P_1$.
This decomposition is called the modified Bruhat decomposition [8].
Definition 5. A multiset of cardinality $n$ is a collection of $n$ members in which multiple
occurrences of the same element are allowed and are counted as distinct members.
A multiset is like a set in that its members are not ordered, but some or all of its members
may be the same element. For example, the collection $\{1, 2, 1, 3\}$ is a multiset of cardinality
4. Furthermore, it is the same multiset as $\{2, 1, 1, 3\}$.
Definition 6. The eigenvalues of $A \in \mathbb{C}^{n \times n}$, counting multiplicity, compose a multiset. We
call it the eigenvalue multiset of $A$.
Definition 7. Let $\Phi = \{\varphi_1, \varphi_2, \dots, \varphi_n\}$ and $\Psi = \{\psi_1, \psi_2, \dots, \psi_n\}$ be two multisets with
complex elements. Then we define the distance $d(\Phi, \Psi)$ between the multisets $\Phi$ and $\Psi$ as
\[
d(\Phi, \Psi) = \min_{\sigma} \max_{i=1,\dots,n} \left| \varphi_i - \psi_{\sigma(i)} \right|, \tag{2.7}
\]
where the minimum is taken over all permutations $\sigma$ of $\{1, 2, \dots, n\}$.
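For the small multisets that arise in our experiments, the distance (2.7) can be evaluated by brute force over all $n!$ permutations. A minimal Python sketch (the function name is ours):

```python
from itertools import permutations

def multiset_distance(phi, psi):
    """d(phi, psi) as in (2.7): minimize, over all orderings of psi, the
    largest pointwise distance |phi_i - psi_sigma(i)|."""
    n = len(phi)
    assert len(psi) == n
    return min(
        max(abs(phi[i] - perm[i]) for i in range(n))
        for perm in permutations(psi)
    )

# the distance is zero exactly when the two multisets coincide
print(multiset_distance([1, 2, 1, 3], [2, 1, 1, 3]))  # 0
print(multiset_distance([1 + 1j, 2], [1, 2]))         # 1.0
```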
Theorem 2.1. Let $n \ge 1$ and let
\[
p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0, \quad a_n \neq 0,
\]
be a polynomial with complex coefficients. Then, for every $\epsilon > 0$, there is a $\delta > 0$ such that
for any polynomial
\[
q(x) = b_n x^n + b_{n-1} x^{n-1} + \cdots + b_1 x + b_0
\]
satisfying $b_n \neq 0$ and
\[
\max_{0 \le i \le n} |a_i - b_i| < \delta,
\]
we have
\[
d(\Lambda, M) \le \epsilon,
\]
where the multiset $\Lambda = \{\lambda_1, \dots, \lambda_n\}$ contains all of the zeros of $p(x)$ and the multiset $M = \{\mu_1, \dots, \mu_n\}$
contains all of the zeros of $q(x)$, both counting multiplicities.
Proof. Please see [1] and [2] for the proof of this theorem.
Theorem 2.2. Suppose $n \ge 1$ and $A, B \in \mathbb{C}^{n \times n}$. Let $\Lambda = \{\lambda_1, \lambda_2, \dots, \lambda_n\}$ and $\Gamma =
\{\gamma_1, \gamma_2, \dots, \gamma_n\}$ be the eigenvalue multisets of $A$ and $B$, respectively. Then for every $\epsilon > 0$
there exists a $\delta > 0$ such that if $\max_{i,j=1,2,\dots,n} |A(i,j) - B(i,j)| < \delta$, then
\[
d(\Lambda, \Gamma) \le \epsilon. \tag{2.8}
\]
Proof. The eigenvalues of $A$ and $B$ are the zeros of the corresponding characteristic polynomials
$p_A(x) = \det(xI - A)$ and $p_B(x) = \det(xI - B)$. Let $a = (a_n, a_{n-1}, \dots, a_1, a_0)$ and
$b = (b_n, b_{n-1}, \dots, b_1, b_0)$ be the coefficient vectors of $p_A$ and $p_B$, i.e.,
\[
p_A(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0
\]
and
\[
p_B(x) = b_n x^n + b_{n-1} x^{n-1} + \cdots + b_1 x + b_0.
\]
According to Theorem 2.1, there is a $\delta_0 > 0$ such that if $\|a - b\|_\infty < \delta_0$, then $d(\Lambda, \Gamma) \le \epsilon$.
Since the $a_i$ and $b_i$ are polynomial functions of the elements of $A$ and $B$, they are
continuous functions of those elements. Hence there exists a $\delta > 0$ such that if
$\max_{i,j=1,2,\dots,n} |A(i,j) - B(i,j)| < \delta$, then $\|a - b\|_\infty < \delta_0$. Combining the above arguments, we
finish the proof.
Remark 1. Theorem 2.2 illustrates the continuous dependence of the eigenvalues of a
matrix on its elements.
Definition 8. Let $\{\Phi_k : \Phi_k$ is a multiset of $n$ complex elements, $k = 1, 2, \dots\}$ be a sequence
of multisets, and let $\Psi$ be a multiset of $n$ complex elements. If $\lim_{k\to\infty} d(\Phi_k, \Psi) = 0$, we say
that the sequence of multisets $\{\Phi_k\}$ converges to the multiset $\Psi$.
Lemma 2.17. Suppose $\{A_k \in \mathbb{C}^{n \times n} : k = 1, 2, \dots\}$ and $B \in \mathbb{C}^{n \times n}$. Let the eigenvalue
multiset of $A_k$ be $\Lambda^{(k)} = \{\lambda_1^{(k)}, \lambda_2^{(k)}, \dots, \lambda_n^{(k)}\}$. Similarly, let the eigenvalue multiset of $B$ be
$\Gamma = \{\gamma_1, \gamma_2, \dots, \gamma_n\}$. If $\lim_{k\to\infty} A_k = B$, then $\{\Lambda^{(k)}\}$ converges to $\Gamma$.
Proof. The proof follows directly from Theorem 2.2.
Chapter 3
Convergence of the QR Algorithm
If $A \in \mathbb{C}^{n \times n}$ has $n$ distinct eigenvalues $\{\lambda_i : i = 1, \dots, n\}$ with $|\lambda_i| > |\lambda_{i+1}|$ for each $i$
such that $1 \le i \le n-1$, and $|\lambda_n| > 0$, then $A$ has $n$ linearly independent eigenvectors. By
Lemma 2.4, there exists an invertible $X$ such that $A = XDX^{-1}$, where $D = \mathrm{diag}([\lambda_1, \lambda_2, \dots, \lambda_n])$.
Furthermore, by Lemma 2.16, $X^{-1}$ can be factorized as $X^{-1} = LPU$, where $L$ is a unit
lower triangular matrix, $P$ is a unique permutation matrix, and $U$ is an invertible upper
triangular matrix.
Theorem 3.1. Suppose $A \in \mathbb{C}^{n \times n}$ has $n$ distinct eigenvalues $\{\lambda_i : i = 1, \dots, n\}$ with
$|\lambda_i| > |\lambda_{i+1}|$ for each $i$ such that $1 \le i \le n-1$, and $|\lambda_n| > 0$. There exists an invertible matrix
$X$ such that $A = XDX^{-1}$, where $D = \mathrm{diag}([\lambda_1, \lambda_2, \dots, \lambda_n])$. Let $X^{-1} = LPU$, where $L$
is unit lower triangular, $P$ is a unique permutation matrix, and $U$ is upper triangular. Let
$A = Q_1 R_1$ be the unique QR factorization of $A$ with $R_1$ having positive diagonal elements.
Let $A_2 = R_1 Q_1 = Q_2 R_2$, where $Q_2 R_2$ is the unique QR factorization of $A_2$. Repeat the
above process so that for $k \ge 2$, $A_k = R_{k-1} Q_{k-1} = Q_k R_k$, where $Q_k R_k$ is the unique QR
factorization of $A_k$ with $R_k$ having positive diagonal elements. Then $\mathrm{Dg}(A_k)$ converges to
$P^T \mathrm{diag}([\lambda_1, \lambda_2, \dots, \lambda_n]) P$. Furthermore, as $k$ goes to infinity, the elements in the lower
triangular part of $A_k$ go to zero and the elements in the upper triangular part of $A_k$ converge
in magnitude.
Proof. By construction, $A_k = R_{k-1}Q_{k-1} = Q_{k-1}^*Q_{k-1}R_{k-1}Q_{k-1} = Q_{k-1}^*A_{k-1}Q_{k-1}$, which means that $A_k$ is similar to $A_{k-1}$ for each $k > 1$. As a result of this similarity, the matrices $A_k$ all have the same characteristic polynomial and, hence, the same eigenvalues.
For each positive integer $k$, let
\[
P_k = Q_1Q_2\cdots Q_k \tag{3.1}
\]
and
\[
U_k = R_kR_{k-1}\cdots R_1. \tag{3.2}
\]
Note that by Lemma 2.3, $U_k$ is upper triangular with positive diagonal elements. Also notice that
\[
Q_{k-1}A_k = Q_{k-1}R_{k-1}Q_{k-1} = A_{k-1}Q_{k-1}. \tag{3.3}
\]
Using (3.3), we compute the product $P_kU_k$ as
\begin{align*}
P_kU_k &= Q_1Q_2\cdots Q_kR_k\cdots R_1 \\
&= Q_1\cdots Q_{k-1}A_kU_{k-1} \\
&= Q_1\cdots Q_{k-2}A_{k-1}Q_{k-1}U_{k-1} \\
&= Q_1\cdots Q_{k-3}A_{k-2}Q_{k-2}Q_{k-1}U_{k-1} \\
&= \cdots \\
&= AQ_1Q_2\cdots Q_{k-2}Q_{k-1}U_{k-1} \\
&= AP_{k-1}U_{k-1}. \tag{3.4}
\end{align*}
Since (3.4) is true for each $k \ge 2$, we have
\begin{align*}
P_kU_k &= AP_{k-1}U_{k-1} \\
&= A(AP_{k-2}U_{k-2}) \\
&= \cdots \\
&= A^{k-1}P_1U_1 \\
&= A^k.
\end{align*}
Since $Q_kR_k$ is the unique QR factorization of $A_k$ as guaranteed by Lemma 2.14, $P_kU_k$ is the unique QR factorization of $A^k$ with $U_k$ having positive diagonal elements.
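The telescoping identity $P_kU_k = A^k$ is easy to check numerically. The following sketch is ours, in NumPy rather than the thesis's MATLAB; note that `np.linalg.qr` does not enforce a positive diagonal on $R$, so we normalize the factors as the proof requires.

```python
import numpy as np

def qr_pos(A):
    """QR factorization of a real matrix with diag(R) > 0."""
    Q, R = np.linalg.qr(A)
    S = np.diag(np.sign(np.diag(R)))   # flip signs so diag(R) > 0
    return Q @ S, S @ R

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

Pk, Uk, Ak = np.eye(5), np.eye(5), A.copy()
for _ in range(6):
    Q, R = qr_pos(Ak)
    Pk = Pk @ Q                        # P_k = Q_1 Q_2 ... Q_k
    Uk = R @ Uk                        # U_k = R_k R_{k-1} ... R_1
    Ak = R @ Q                         # next iterate A_{k+1} = R_k Q_k

# P_k U_k equals the k-th power of A, here with k = 6
assert np.allclose(Pk @ Uk, np.linalg.matrix_power(A, 6))
```

Because the diagonal of a product of upper triangular matrices is the product of their diagonals, $U_k$ inherits a positive diagonal, which is what makes the factorization of $A^k$ unique.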
Let $XP = QR$ be the unique QR factorization of $XP$ such that $R$ has positive diagonal elements. Thus, $X = QRP^T$. Now we have
\begin{align}
A^k = XD^kX^{-1} &= QRP^TD^kLPU \nonumber\\
&= QR\left(P^T(D^kLD^{-k})P\right)P^TD^kPU \nonumber\\
&= QR\left(P^T(D^kLD^{-k})P\right)D_p^kU, \tag{3.5}
\end{align}
where $D_p = P^TDP$. Since $D$ is an invertible diagonal matrix, $D^kLD^{-k}$ is still a unit lower triangular matrix. Thus, when $1 \le i < j \le n$, $(D^kLD^{-k})_{ij} = 0$. When $1 \le j < i \le n$, $(D^kLD^{-k})_{ij} = l_{ij}\lambda_i^k/\lambda_j^k$ goes to zero as $k \to \infty$ since $|\lambda_i| < |\lambda_j|$. Since the diagonal elements of $P^T(D^kLD^{-k})P$ are the diagonal elements of $D^kLD^{-k}$, only rearranged by $P$, we have $\lim_{k\to\infty} D^kLD^{-k} = I$ and
\[
\lim_{k\to\infty} P^TD^kLD^{-k}P = I.
\]
Write $P^TD^kLD^{-k}P = I + E_k$, where $\lim_{k\to\infty}E_k = 0$. Plugging into Equation (3.5), we get $A^k = QR(I+E_k)D_p^kU = Q(I+RE_kR^{-1})RD_p^kU$. By Lemma 2.10, $\lim_{k\to\infty}RE_kR^{-1} = 0$ and so $\lim_{k\to\infty}(I+RE_kR^{-1}) = I$. For each $k$, let $\tilde Q_k\tilde R_k$ be the unique QR factorization of $I+RE_kR^{-1}$ such that $\tilde R_k$ has positive diagonal elements. Notice that by Lemma 2.12, $\lim_{k\to\infty}\tilde Q_k = \lim_{k\to\infty}\tilde R_k = I$. We get
\[
A^k = (Q\tilde Q_k)(\tilde R_kRD_p^kU).
\]
Now we focus on the diagonal elements of $\tilde R_kRD_p^kU$. Note that the diagonal elements of $D_p$ are those of $D$ rearranged by the permutation $P$. Let $\lambda_{p,i} = D_p(i,i)$. Also let $U = [u_{ij}]$. The $i$-th diagonal element of $\tilde R_kRD_p^kU$ can be written as $(\tilde R_k)_{ii}(R)_{ii}\lambda_{p,i}^ku_{ii}$. By construction, $(\tilde R_k)_{ii} > 0$ and $(R)_{ii} > 0$ for each $i = 1,\dots,n$. Let
\[
\Phi_k = \operatorname{diag}\!\left(\left[\frac{\overline{\lambda_{p,1}^k u_{11}}}{\bigl|\lambda_{p,1}^k\bigr|\,|u_{11}|},\ \frac{\overline{\lambda_{p,2}^k u_{22}}}{\bigl|\lambda_{p,2}^k\bigr|\,|u_{22}|},\ \dots,\ \frac{\overline{\lambda_{p,n}^k u_{nn}}}{\bigl|\lambda_{p,n}^k\bigr|\,|u_{nn}|}\right]\right). \tag{3.6}
\]
Note that $\Phi_k$ is a unitary diagonal matrix. Moreover, $\Phi_k\tilde R_kRD_p^kU$ has positive diagonal elements, for its $i$-th diagonal element is $(\tilde R_k)_{ii}(R)_{ii}|\lambda_{p,i}|^k|u_{ii}|$. Since $Q\tilde Q_k\Phi_k^*$ is unitary,
\[
A^k = (Q\tilde Q_k\Phi_k^*)(\Phi_k\tilde R_kRD_p^kU) \tag{3.7}
\]
is the unique QR factorization of $A^k$. But we have already shown that $P_kU_k$ is the unique QR factorization of $A^k$ for each $k$; therefore,
\[
P_k = Q\tilde Q_k\Phi_k^*, \tag{3.8}
\]
and
\[
U_k = \Phi_k\tilde R_kRD_p^kU. \tag{3.9}
\]
From (3.1) and (3.8),
\begin{align*}
Q_k = P_{k-1}^*P_k &= (Q\tilde Q_{k-1}\Phi_{k-1}^*)^*(Q\tilde Q_k\Phi_k^*) \\
&= \Phi_{k-1}\tilde Q_{k-1}^*\tilde Q_k\Phi_k^* \\
&= \Phi_{k-1}\Phi_k^*(\Phi_k\tilde Q_{k-1}^*\tilde Q_k\Phi_k^*).
\end{align*}
By Lemma 2.11, $\tilde Q_{k-1}^*\tilde Q_k \to I$ as $k \to \infty$ since each of the $\tilde Q_k$ converges to $I$. Now,
\begin{align*}
\bigl\|(\Phi_k\tilde Q_{k-1}^*\tilde Q_k\Phi_k^*) - I\bigr\|_F &= \bigl\|\Phi_k\bigl(\tilde Q_{k-1}^*\tilde Q_k - I\bigr)\Phi_k^*\bigr\|_F \\
&= \bigl\|\tilde Q_{k-1}^*\tilde Q_k - I\bigr\|_F.
\end{align*}
Thus, $\lim_{k\to\infty}\bigl\|(\Phi_k\tilde Q_{k-1}^*\tilde Q_k\Phi_k^*) - I\bigr\|_F = 0$. By Lemma 2.8, we get
\[
\lim_{k\to\infty}(\Phi_k\tilde Q_{k-1}^*\tilde Q_k\Phi_k^*) = I.
\]
Furthermore,
\[
(\Phi_{k-1}\Phi_k^*)_{ii} = \frac{\overline{\lambda_{p,i}^{k-1}u_{ii}}}{\bigl|\lambda_{p,i}^{k-1}\bigr|\,|u_{ii}|}\cdot\frac{\lambda_{p,i}^k u_{ii}}{\bigl|\lambda_{p,i}^k\bigr|\,|u_{ii}|} = \frac{\lambda_{p,i}}{|\lambda_{p,i}|}.
\]
Thus,
\[
\lim_{k\to\infty}Q_k = \operatorname{diag}\!\left(\left[\frac{\lambda_{p,1}}{|\lambda_{p,1}|},\ \frac{\lambda_{p,2}}{|\lambda_{p,2}|},\ \dots,\ \frac{\lambda_{p,n}}{|\lambda_{p,n}|}\right]\right). \tag{3.10}
\]
In other words, the $Q_k$ converge to a unitary diagonal matrix.
Similarly, plugging (3.9) into $R_k = U_kU_{k-1}^{-1}$, we get
\[
R_k = (\Phi_k\tilde R_kRD_p^kU)(\Phi_{k-1}\tilde R_{k-1}RD_p^{k-1}U)^{-1} = \Phi_k\tilde R_kRD_pR^{-1}\tilde R_{k-1}^{-1}\Phi_{k-1}^{-1}.
\]
Again, focusing on the $i$-th diagonal elements, we have
\[
(R_k)_{ii} = \frac{\overline{\lambda_{p,i}^k u_{ii}}}{|\lambda_{p,i}|^k|u_{ii}|}\,(\tilde R_k)_{ii}\,R_{ii}\,\lambda_{p,i}\,R_{ii}^{-1}\,(\tilde R_{k-1})_{ii}^{-1}\left(\frac{\overline{\lambda_{p,i}^{k-1}u_{ii}}}{|\lambda_{p,i}|^{k-1}|u_{ii}|}\right)^{-1} = (\tilde R_k)_{ii}(\tilde R_{k-1})_{ii}^{-1}\,|\lambda_{p,i}|.
\]
Hence,
\[
\lim_{k\to\infty}(R_k)_{ii} = \lim_{k\to\infty}(\tilde R_k)_{ii}\,|\lambda_{p,i}|/(\tilde R_{k-1})_{ii} = |\lambda_{p,i}|. \tag{3.11}
\]
This shows that the diagonal elements of the matrices $R_k$, $k = 1,2,\dots$, converge to the magnitudes of the eigenvalues of $A$.
Note that
\begin{align}
A_{k+1} = P_k^*AP_k &= \bigl(Q\tilde Q_k\Phi_k^*\bigr)^*A\bigl(Q\tilde Q_k\Phi_k^*\bigr) \nonumber\\
&= \Phi_k\tilde Q_k^*Q^*XDX^{-1}Q\tilde Q_k\Phi_k^* \nonumber\\
&= \Phi_k\tilde Q_k^*Q^*QRP^TDPR^{-1}Q^*Q\tilde Q_k\Phi_k^* \nonumber\\
&= \Phi_k\tilde Q_k^*RD_pR^{-1}\tilde Q_k\Phi_k^* \nonumber\\
&= \Phi_k\tilde Q_k^*\Phi_k^*(\Phi_kRD_pR^{-1}\Phi_k^*)\Phi_k\tilde Q_k\Phi_k^* \nonumber\\
&= \hat Q_k(\Phi_kRD_pR^{-1}\Phi_k^*)\hat Q_k^*, \tag{3.12}
\end{align}
where $\hat Q_k = \Phi_k\tilde Q_k^*\Phi_k^*$. Since $\lim_{k\to\infty}\tilde Q_k = I$ and $|(\Phi_k)_{ii}| = 1$, $\lim_{k\to\infty}\hat Q_k = I$. Obviously the elements of the matrices $\Phi_kRD_pR^{-1}\Phi_k^*$ are uniformly bounded, so by Lemma 2.13 we have
\[
\lim_{k\to\infty}\bigl(A_{k+1} - \Phi_kRD_pR^{-1}\Phi_k^*\bigr) = 0. \tag{3.13}
\]
Hence, as $k$ goes to infinity, the elements in the lower triangular part of $A_{k+1}$ have limit zero, that is, $\lim_{k\to\infty}A_{k+1}(i,j) = 0$ for $1 \le j < i \le n$; and on the diagonal,
\[
\lim_{k\to\infty}A_{k+1}(i,i) = (\Phi_kRD_pR^{-1}\Phi_k^*)_{ii} = \lambda_{p,i}, \tag{3.14}
\]
which proves the convergence of the QR algorithm on the lower triangular part and the diagonal.
Let $RD_pR^{-1} = W$. Then $W = [w_{ij}]$ is an upper triangular matrix; that is, $w_{ij} = 0$ when $1 \le j < i \le n$. Also let $\lambda_{p,i} = |\lambda_{p,i}|e^{i\theta_i}$ and $u_{ii} = |u_{ii}|e^{i\beta_i}$, for each $i = 1,\dots,n$. Thus
\[
(\Phi_kRD_pR^{-1}\Phi_k^*)_{st} = \frac{\overline{\lambda_{p,s}^k u_{ss}}}{\bigl|\lambda_{p,s}^k\bigr||u_{ss}|}\,w_{st}\,\frac{\lambda_{p,t}^k u_{tt}}{\bigl|\lambda_{p,t}^k\bigr||u_{tt}|} = w_{st}\,e^{i[k(\theta_t-\theta_s)+\beta_t-\beta_s]}.
\]
From Equation (3.13), for $s < t$,
\[
\lim_{k\to\infty}\Bigl((A_{k+1})_{st} - w_{st}\,e^{i[k(\theta_t-\theta_s)+\beta_t-\beta_s]}\Bigr) = 0. \tag{3.15}
\]
So $\lim_{k\to\infty}(A_{k+1})_{st}$ may not exist for $1 \le s < t \le n$. However, from Equation (3.15), we conclude that $\lim_{k\to\infty}|(A_{k+1})_{st}|$ exists and is equal to $|w_{st}|$.

Notice that if $X^{-1}$ has an LU decomposition, then $P = I$. In this case $\operatorname{Dg}(A_k)$ converges to $\operatorname{diag}([\lambda_1,\lambda_2,\dots,\lambda_n])$.
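A short numerical sketch (ours, in NumPy; the thesis's own experiments in Chapter 5 use MATLAB) of the convergence just proved: for a diagonalizable matrix with eigenvalues of distinct magnitudes, the strictly lower triangular part of $A_k$ dies out and the diagonal approaches the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
D = np.diag([8.0, 4.0, 2.0, 1.0])      # distinct magnitudes, as assumed
X = rng.standard_normal((4, 4))
A = X @ D @ np.linalg.inv(X)

Ak = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q                          # A_{k+1} = R_k Q_k

# the strictly lower triangular part goes to zero ...
assert np.abs(np.tril(Ak, -1)).max() < 1e-8
# ... and the diagonal carries the eigenvalues (their order depends on
# the permutation P from X^{-1} = LPU, so compare after sorting)
assert np.allclose(np.sort(np.diag(Ak)), [1.0, 2.0, 4.0, 8.0])
```

NumPy's `qr` does not normalize $R$ to a positive diagonal, but the iterates it produces differ from the canonical ones only by a diagonal $\pm 1$ similarity, which leaves the diagonal entries and the vanishing lower triangle unchanged.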
Corollary 3.1. Given that $A$ satisfies all the assumptions described in Theorem 3.1, suppose that $A$ is also normal. Then not only does the convergence of the QR algorithm described in Theorem 3.1 hold, but also the elements in the upper triangular part of $A_k$ off the diagonal converge to zero.
Proof. Since $A$ is normal and has $n$ distinct eigenvalues, by Lemma 2.4, for any diagonalization of $A$ in the form $A = XDX^{-1}$, the columns of $X$ are mutually orthogonal. Hence $(XP)^*(XP) = P^T(X^*X)P$ is diagonal. This means that the columns of $XP$ are mutually orthogonal as well. Recall that $XP = QR$ is the unique QR factorization of $XP$ where $R$ has positive diagonal elements. According to the Gram–Schmidt orthogonalization process described in the proof of Lemma 2.14, any element in the upper triangular part of $R$ off the diagonal is described by Equation (2.6). The mutual orthogonality of the column vectors of $XP$ then means $r_{ij} = 0$ for all $j > i$. So $R$ is diagonal.
From Equation (3.13), we have
\begin{align}
\lim_{k\to\infty}|A_{k+1}| &= \lim_{k\to\infty}\bigl|\Phi_kRD_pR^{-1}\Phi_k^*\bigr| \nonumber\\
&= \lim_{k\to\infty}\bigl|RD_pR^{-1}\bigr| \nonumber\\
&= \bigl|RD_pR^{-1}\bigr|, \tag{3.16}
\end{align}
where $|\cdot|$ is taken elementwise. Because $R$ is diagonal, $RD_pR^{-1}$ is diagonal, which is equivalent to saying that the upper triangular part of $RD_pR^{-1}$ is zero. Hence
\[
\lim_{k\to\infty}|A_{k+1}(i,j)| = 0, \quad\text{for all } 1 \le i < j \le n,
\]
which means that
\[
\lim_{k\to\infty}A_{k+1}(i,j) = 0, \quad\text{for all } 1 \le i < j \le n. \tag{3.17}
\]
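Corollary 3.1 can be observed numerically as well. The sketch below (ours) uses a real symmetric — hence normal — matrix; both triangles of $A_k$ vanish, so $A_k$ tends to a diagonal matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
Q0, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal
A = Q0 @ np.diag([9.0, 5.0, 2.0, 1.0]) @ Q0.T       # symmetric => normal

Ak = A.copy()
for _ in range(150):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q

# for a normal matrix the upper triangle vanishes too
assert np.abs(np.triu(Ak, 1)).max() < 1e-8
assert np.abs(np.tril(Ak, -1)).max() < 1e-8
assert np.allclose(np.sort(np.diag(Ak)), [1.0, 2.0, 5.0, 9.0])
```

The symmetry is preserved by every step, since $A_{k+1} = Q_k^T A_k Q_k$, which is why the decay of the lower triangle forces the decay of the upper triangle here.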
Chapter 4
Generalization: Equal-magnitude eigenvalues
In the proof given in Chapter 3, we assumed that the eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_n$ of our $n\times n$ matrix $A$ satisfy $|\lambda_i| > |\lambda_{i+1}|$ for each $i \in \{1,2,\dots,n-1\}$. In this chapter, we relax that assumption a bit and assume only that
\[
|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_r| > |\lambda_{r+1}| \ge \cdots \ge |\lambda_n|.
\]
In Chapter 3, in the expression $A = XDX^{-1}$, where $D$ is a diagonal matrix with the eigenvalues of $A$ on the diagonal, no constraints on $X^{-1}$ were imposed to guarantee convergence. However, in this chapter, extra constraints have to be assumed to guarantee convergence in general, as stated in the following theorem.
Theorem 4.1. Suppose $A \in \mathbb{C}^{n\times n}$. Let $\{\lambda_i : i = 1,\dots,n\}$ be the eigenvalues of $A$, counting multiplicity. Suppose $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_r| > |\lambda_{r+1}| \ge \cdots \ge |\lambda_n| > 0$. Let $D = \operatorname{diag}([\lambda_1,\lambda_2,\dots,\lambda_n])$. Suppose there exists an invertible $X \in \mathbb{C}^{n\times n}$ such that $A = XDX^{-1}$. Let $X^{-1} = LPU$ be the modified Bruhat decomposition of $X^{-1}$, where $L$ is unit lower triangular, $P$ is a unique permutation matrix and $U$ is upper triangular. Assume that
\[
P = P_r \oplus P_{n-r}, \tag{4.1}
\]
where $P_r$ and $P_{n-r}$ are permutation matrices of sizes $r\times r$ and $(n-r)\times(n-r)$. Let $\lambda_{p,i} = (P^TDP)_{i,i}$. Then in the QR algorithm iteration, the sequence of eigenvalue multisets of the top-left $r\times r$ blocks converges to $\{\lambda_{p,1},\lambda_{p,2},\dots,\lambda_{p,r}\}$ and the sequence of eigenvalue multisets of the bottom-right $(n-r)\times(n-r)$ blocks converges to $\{\lambda_{p,r+1},\dots,\lambda_{p,n}\}$.
Proof. We define $P_k$ and $U_k$ as in Chapter 3; see (3.1) and (3.2). Thus $A^k = P_kU_k$ is the unique QR factorization of $A^k$ where $U_k$ has positive diagonal elements.

Let $XP = QR$ be the unique QR factorization of $XP$ such that $R$ has positive diagonal elements. Thus $X = QRP^T$. Similar to equation (3.5), we have
\[
A^k = Q(RP^TD^kLD^{-k}PR^{-1})RD_p^kU,
\]
where $D_p = P^TDP$. Let
\[
H_k = RP^TD^kLD^{-k}PR^{-1} \tag{4.2}
\]
and let $H_k = \tilde Q_k\tilde R_k$ be the unique QR factorization of $H_k$ such that $\tilde R_k$ has positive diagonal elements. Let
\[
\tilde Q_k = \begin{bmatrix}\tilde Q_k^r & \tilde Q_k^{12} \\ \tilde Q_k^{21} & \tilde Q_k^{n-r}\end{bmatrix},\qquad
\tilde R_k = \begin{bmatrix}\tilde R_k^r & \tilde R_k^{12} \\ 0 & \tilde R_k^{n-r}\end{bmatrix},
\]
and
\[
H_k = \begin{bmatrix}H_k^r & H_k^{12} \\ H_k^{21} & H_k^{n-r}\end{bmatrix},
\]
where $H_k^r$, $\tilde Q_k^r$ and $\tilde R_k^r$ are of size $r\times r$, and $H_k^{n-r}$, $\tilde Q_k^{n-r}$ and $\tilde R_k^{n-r}$ are of size $(n-r)\times(n-r)$.
Since $\tilde R_k$ is upper triangular, its inverse $\tilde R_k^{-1}$ is also upper triangular according to Lemma 2.2. Given $\tilde R_k$ in the above block form, we can easily show that
\[
\tilde R_k^{-1} = \begin{bmatrix}(\tilde R_k^r)^{-1} & W_k \\ 0 & (\tilde R_k^{n-r})^{-1}\end{bmatrix}, \tag{4.3}
\]
where $W_k = -(\tilde R_k^r)^{-1}\tilde R_k^{12}(\tilde R_k^{n-r})^{-1}$.
Let $F_k = D^kLD^{-k}$ and write $F_k$ in block form as
\[
F_k = \begin{bmatrix}F_{11}^{(k)} & 0 \\ F_{21}^{(k)} & F_{22}^{(k)}\end{bmatrix}, \tag{4.4}
\]
where $F_{11}^{(k)}$ is of size $r\times r$ and $F_{22}^{(k)}$ is of size $(n-r)\times(n-r)$. Since $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_r| > |\lambda_{r+1}| \ge \cdots \ge |\lambda_n|$, we have $\lim_{k\to\infty}F_k(i,j) = \lim_{k\to\infty}l_{ij}\lambda_i^k/\lambda_j^k = 0$ for all $i \ge r+1 > r \ge j$. That is to say, $\lim_{k\to\infty}F_{21}^{(k)} = 0$. Now let
\[
P^TD^kLD^{-k}P = \begin{bmatrix}Z_{11}^{(k)} & Z_{12}^{(k)} \\ Z_{21}^{(k)} & Z_{22}^{(k)}\end{bmatrix}, \tag{4.5}
\]
where $Z_{11}^{(k)}$ is of size $r\times r$ and $Z_{22}^{(k)}$ is of size $(n-r)\times(n-r)$. Plugging (4.1) and (4.4) into (4.5) and expanding the products, we get $Z_{21}^{(k)} = P_{n-r}^TF_{21}^{(k)}P_r$. Taking the limit as $k$ goes to infinity, we get $\lim_{k\to\infty}Z_{21}^{(k)} = \lim_{k\to\infty}P_{n-r}^TF_{21}^{(k)}P_r = 0$.
Furthermore, partition $R$ as
\[
R = \begin{bmatrix}R_{11} & R_{12} \\ 0 & R_{22}\end{bmatrix},
\]
where $R_{11}$ is of size $r\times r$ and $R_{22}$ is of size $(n-r)\times(n-r)$. Then we can write
\[
R^{-1} = \begin{bmatrix}R_{11}^{-1} & R_{12}' \\ 0 & R_{22}^{-1}\end{bmatrix},
\]
where $R_{12}' = -R_{11}^{-1}R_{12}R_{22}^{-1}$. From (4.2), we get
\[
\lim_{k\to\infty}H_k^{21} = \lim_{k\to\infty}R_{22}Z_{21}^{(k)}R_{11}^{-1} = 0. \tag{4.6}
\]
However, here $\lim_{k\to\infty}H_k = I$ may not be true.
Let
\[
\Phi_k = \operatorname{diag}\!\left(\left[\frac{\overline{\lambda_{p,1}^k u_{11}}}{\bigl|\lambda_{p,1}^k\bigr|\,|u_{11}|},\ \frac{\overline{\lambda_{p,2}^k u_{22}}}{\bigl|\lambda_{p,2}^k\bigr|\,|u_{22}|},\ \dots,\ \frac{\overline{\lambda_{p,n}^k u_{nn}}}{\bigl|\lambda_{p,n}^k\bigr|\,|u_{nn}|}\right]\right),
\]
and we get the following equation (in the same form as equation (3.7)),
\[
A^k = (Q\tilde Q_k\Phi_k^*)(\Phi_k\tilde R_kRD_p^kU). \tag{4.7}
\]
Again, since $Q\tilde Q_k\Phi_k^*$ is unitary and $\Phi_k\tilde R_kRD_p^kU$ is upper triangular with positive diagonal elements, we recognize that (4.7) is the unique QR factorization of $A^k$. Thus,
\[
P_k = Q\tilde Q_k\Phi_k^*, \tag{4.8}
\]
and
\[
U_k = \Phi_k\tilde R_kRD_p^kU. \tag{4.9}
\]
From $H_k = \tilde Q_k\tilde R_k$, we get $\tilde R_k^{-1} = H_k^{-1}\tilde Q_k$. Applying the Frobenius norm,
\begin{align}
\|\tilde R_k^{-1}\|_F &= \|H_k^{-1}\tilde Q_k\|_F \nonumber\\
&= \|H_k^{-1}\|_F, \quad\text{by Lemma 2.7} \nonumber\\
&= \bigl\|(RP^TD^kLD^{-k}PR^{-1})^{-1}\bigr\|_F \nonumber\\
&= \|RP^TD^kL^{-1}D^{-k}PR^{-1}\|_F \nonumber\\
&\le \|RP^T\|_F\,\|D^kL^{-1}D^{-k}\|_F\,\|PR^{-1}\|_F \nonumber\\
&= C_{R,P}\,\|D^kL^{-1}D^{-k}\|_F \nonumber\\
&\le C_{R,P}\,\|L^{-1}\|_F, \tag{4.10}
\end{align}
where $C_{R,P} = \|RP^T\|_F\|PR^{-1}\|_F$ is a constant. The last inequality holds because
\[
\bigl|(D^kL^{-1}D^{-k})_{i,j}\bigr| \le \bigl|L^{-1}(i,j)\bigr|, \quad\text{for all } i,j = 1,2,\dots,n.
\]
On account of (4.10), we have
\[
\|(\tilde R_k^r)^{-1}\|_F \le \|\tilde R_k^{-1}\|_F \le C_{R,P}\,\|L^{-1}\|_F. \tag{4.11}
\]
Thus,
\[
\|\tilde Q_k^{21}\|_F \le \|H_k^{21}\|_F\,\|(\tilde R_k^r)^{-1}\|_F \le C_{R,P}\,\|L^{-1}\|_F\,\|H_k^{21}\|_F \to 0 \text{ as } k\to\infty, \tag{4.12}
\]
since $\|H_k^{21}\|_F \to 0$ by (4.6). Therefore, $\lim_{k\to\infty}\tilde Q_k^{21} = 0$.
Furthermore, by Lemma 2.9, we get $\lim_{k\to\infty}\tilde Q_k^{12} = 0$. Consequently, $\lim_{k\to\infty}\tilde Q_k^r(\tilde Q_k^r)^* = I_r$ and $\lim_{k\to\infty}\tilde Q_k^{n-r}(\tilde Q_k^{n-r})^* = I_{n-r}$. In conclusion, we have shown that
\[
\lim_{k\to\infty}\left(\tilde Q_k - \begin{bmatrix}\tilde Q_k^r & 0 \\ 0 & \tilde Q_k^{n-r}\end{bmatrix}\right) = 0.
\]
Notice that the same computation as in (3.12) applies to the factorization shown in (4.7). We get
\[
A_{k+1} = \hat Q_k(\Phi_kRD_pR^{-1}\Phi_k^*)\hat Q_k^*,
\]
where $\hat Q_k = \Phi_k\tilde Q_k^*\Phi_k^*$. If we denote $\Phi_k = \begin{bmatrix}\Phi_k^r & 0 \\ 0 & \Phi_k^{n-r}\end{bmatrix}$ and $\hat Q_k = \begin{bmatrix}\hat Q_k^r & \hat Q_k^{12} \\ \hat Q_k^{21} & \hat Q_k^{n-r}\end{bmatrix}$, then
\begin{align*}
\hat Q_k &= \begin{bmatrix}\Phi_k^r & 0 \\ 0 & \Phi_k^{n-r}\end{bmatrix}
\begin{bmatrix}(\tilde Q_k^r)^* & (\tilde Q_k^{21})^* \\ (\tilde Q_k^{12})^* & (\tilde Q_k^{n-r})^*\end{bmatrix}
\begin{bmatrix}(\Phi_k^r)^* & 0 \\ 0 & (\Phi_k^{n-r})^*\end{bmatrix} \\
&= \begin{bmatrix}\Phi_k^r(\tilde Q_k^r)^*(\Phi_k^r)^* & \Phi_k^r(\tilde Q_k^{21})^*(\Phi_k^{n-r})^* \\ \Phi_k^{n-r}(\tilde Q_k^{12})^*(\Phi_k^r)^* & \Phi_k^{n-r}(\tilde Q_k^{n-r})^*(\Phi_k^{n-r})^*\end{bmatrix}.
\end{align*}
From the above, we get $\hat Q_k^{12} = \Phi_k^r(\tilde Q_k^{21})^*(\Phi_k^{n-r})^* \to 0$ and $\hat Q_k^{21} = \Phi_k^{n-r}(\tilde Q_k^{12})^*(\Phi_k^r)^* \to 0$ as $k\to\infty$. Moreover,
\[
\lim_{k\to\infty}\hat Q_k^r = I_r, \tag{4.13}
\]
and
\[
\lim_{k\to\infty}\hat Q_k^{n-r} = I_{n-r}. \tag{4.14}
\]
Let
\[
J_k = \Phi_kRD_pR^{-1}\Phi_k^*. \tag{4.15}
\]
Then $J_k$ is upper triangular with $J_k(i,i) = \lambda_{p,i}$, $i = 1,\dots,n$. If we write $J_k = \begin{bmatrix}J_k^r & J_k^{12} \\ 0 & J_k^{n-r}\end{bmatrix}$, then
\begin{align*}
A_{k+1} = \hat Q_kJ_k\hat Q_k^*
&= \begin{bmatrix}\hat Q_k^r & \hat Q_k^{12} \\ \hat Q_k^{21} & \hat Q_k^{n-r}\end{bmatrix}
\begin{bmatrix}J_k^r & J_k^{12} \\ 0 & J_k^{n-r}\end{bmatrix}
\begin{bmatrix}(\hat Q_k^r)^* & (\hat Q_k^{21})^* \\ (\hat Q_k^{12})^* & (\hat Q_k^{n-r})^*\end{bmatrix} \\
&= \begin{bmatrix}
\hat Q_k^rJ_k^r(\hat Q_k^r)^* + (\hat Q_k^rJ_k^{12}+\hat Q_k^{12}J_k^{n-r})(\hat Q_k^{12})^* &
\hat Q_k^rJ_k^r(\hat Q_k^{21})^* + (\hat Q_k^rJ_k^{12}+\hat Q_k^{12}J_k^{n-r})(\hat Q_k^{n-r})^* \\
\hat Q_k^{21}J_k^r(\hat Q_k^r)^* + (\hat Q_k^{21}J_k^{12}+\hat Q_k^{n-r}J_k^{n-r})(\hat Q_k^{12})^* &
\hat Q_k^{21}\bigl(J_k^r(\hat Q_k^{21})^*+J_k^{12}(\hat Q_k^{n-r})^*\bigr) + \hat Q_k^{n-r}J_k^{n-r}(\hat Q_k^{n-r})^*
\end{bmatrix}.
\end{align*}
If we write $A_{k+1} = \begin{bmatrix}A_{k+1}^r & A_{k+1}^{12} \\ A_{k+1}^{21} & A_{k+1}^{n-r}\end{bmatrix}$, then as $k\to\infty$, for the lower left block $A_{k+1}^{21}$ we have
\[
\lim_{k\to\infty}A_{k+1}^{21} = \lim_{k\to\infty}\Bigl(\hat Q_k^{21}J_k^r(\hat Q_k^r)^* + (\hat Q_k^{21}J_k^{12}+\hat Q_k^{n-r}J_k^{n-r})(\hat Q_k^{12})^*\Bigr) = 0
\]
for the following reasons. By (4.15), $J_k$ is uniformly bounded; hence $J_k^r$, $J_k^{12}$ and $J_k^{n-r}$ are all uniformly bounded. Furthermore, both $\hat Q_k^r$ and $\hat Q_k^{n-r}$ are uniformly bounded because they are part of a unitary matrix. Finally, both $\hat Q_k^{21}$ and $\hat Q_k^{12}$ go to zero as $k$ goes to infinity.
For the top right block $A_{k+1}^{12}$, we have
\[
\lim_{k\to\infty}\Bigl(A_{k+1}^{12} - \hat Q_k^rJ_k^{12}(\hat Q_k^{n-r})^*\Bigr) = \lim_{k\to\infty}\Bigl(\bigl(\hat Q_k^rJ_k^r(\hat Q_k^{21})^* + (\hat Q_k^rJ_k^{12}+\hat Q_k^{12}J_k^{n-r})(\hat Q_k^{n-r})^*\bigr) - \hat Q_k^rJ_k^{12}(\hat Q_k^{n-r})^*\Bigr) = 0, \tag{4.16}
\]
for the same reasons stated above for $A_{k+1}^{21}$. Furthermore, by (4.13), (4.14) and (4.15), we get
\begin{align}
\lim_{k\to\infty}\bigl|A_{k+1}^{12}\bigr| &= \lim_{k\to\infty}\bigl|\hat Q_k^rJ_k^{12}(\hat Q_k^{n-r})^*\bigr| \nonumber\\
&= \lim_{k\to\infty}\bigl|J_k^{12}\bigr| \nonumber\\
&= \lim_{k\to\infty}\bigl|(\Phi_kRD_pR^{-1}\Phi_k^*)^{12}\bigr| \nonumber\\
&= \bigl|(RD_pR^{-1})^{12}\bigr|, \tag{4.17}
\end{align}
where $(\Phi_kRD_pR^{-1}\Phi_k^*)^{12}$ and $(RD_pR^{-1})^{12}$ represent the top right $r\times(n-r)$ blocks of $\Phi_kRD_pR^{-1}\Phi_k^*$ and $RD_pR^{-1}$, respectively. Equation (4.17) shows that the elements of $A_{k+1}^{12}$ converge in magnitude to those of the corresponding top right block of $RD_pR^{-1}$, which is a fixed matrix. Note that this is a result similar to (3.15), except that (3.15) applies to all the upper triangular elements of $A_{k+1}$ off the diagonal while (4.17) applies only to the upper right block off the diagonal blocks of $A_{k+1}$.
We also want to point out that a result similar to Corollary 3.1 exists for the equal-magnitude eigenvalue case. If all the eigenvalues of $A$ are distinct and $A$ is normal, then by the same reasoning presented in Corollary 3.1, $R$ is diagonal. Hence $(RD_pR^{-1})^{12} = 0$. Thus $\lim_{k\to\infty}|A_{k+1}^{12}| = 0$, which means that
\[
\lim_{k\to\infty}A_{k+1}^{12} = 0. \tag{4.18}
\]
For the two diagonal blocks, we have
\[
\lim_{k\to\infty}\Bigl(A_{k+1}^r - \hat Q_k^rJ_k^r(\hat Q_k^r)^*\Bigr) = \lim_{k\to\infty}\Bigl((\hat Q_k^rJ_k^{12}+\hat Q_k^{12}J_k^{n-r})(\hat Q_k^{12})^*\Bigr) = 0 \tag{4.19}
\]
and
\[
\lim_{k\to\infty}\Bigl(A_{k+1}^{n-r} - \hat Q_k^{n-r}J_k^{n-r}(\hat Q_k^{n-r})^*\Bigr) = \lim_{k\to\infty}\Bigl(\hat Q_k^{21}J_k^r(\hat Q_k^{21})^* + \hat Q_k^{21}J_k^{12}(\hat Q_k^{n-r})^*\Bigr) = 0. \tag{4.20}
\]
Remember that in Chapter 3 the whole lower triangular block excluding the diagonal goes to zero as $k$ goes to infinity. Here, for the equal eigenvalue magnitude case, under the assumption of (4.1), only the lower left block of size $(n-r)\times r$ goes to zero as $k$ goes to infinity. The two diagonal blocks $A_{k+1}^r$ and $A_{k+1}^{n-r}$ do not converge in a conventional sense. However, using Lemma 2.17, we conclude that the sequence of eigenvalue multisets of $\{A_k^r : k = 1,2,\dots\}$ converges to the multiset $\{\lambda_{p,1},\lambda_{p,2},\dots,\lambda_{p,r}\}$ and the sequence of eigenvalue multisets of $\{A_k^{n-r} : k = 1,2,\dots\}$ converges to the multiset $\{\lambda_{p,r+1},\dots,\lambda_{p,n}\}$.

Note that in (4.1), if $P_r = I_r$ and $P_{n-r} = I_{n-r}$, then $P = I$. This means that $X^{-1}$ has an LU decomposition $X^{-1} = LU$.
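The block convergence stated in Theorem 4.1 can be sketched numerically (our example, in NumPy): with eigenvalue groups $\{2,-2\}$ of magnitude 2 and $\{1,-1\}$ of magnitude 1, and $X^{-1} = LU$ so that $P = I = I_2 \oplus I_2$ satisfies (4.1) with $r = 2$, the bottom-left block of $A_k$ vanishes while the eigenvalue multisets of the diagonal blocks converge to the two groups.

```python
import numpy as np

rng = np.random.default_rng(4)
D = np.diag([2.0, -2.0, 1.0, -1.0])     # |2| = |-2| > |1| = |-1|, r = 2
L = np.tril(rng.standard_normal((4, 4)), -1) + np.eye(4)     # unit lower
U = np.triu(rng.standard_normal((4, 4)), 1) + 3 * np.eye(4)  # upper, diag 3
Xinv = L @ U                             # X^{-1} = LU, i.e. P = I
A = np.linalg.inv(Xinv) @ D @ Xinv

Ak = A.copy()
for _ in range(300):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q

# the bottom-left (n-r) x r block vanishes ...
assert np.abs(Ak[2:, :2]).max() < 1e-8
# ... and each diagonal block carries the right eigenvalue multiset
top = np.linalg.eigvals(Ak[:2, :2])
bot = np.linalg.eigvals(Ak[2:, 2:])
assert np.allclose(np.sort(top.real), [-2.0, 2.0])
assert np.allclose(np.sort(bot.real), [-1.0, 1.0])
```

The diagonal blocks themselves keep moving from iteration to iteration; only their eigenvalue multisets settle down, which is exactly the statement of the theorem.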
Chapter 5
Experiments
To illustrate the convergence of the QR algorithm that we proved in Chapters 3 and 4, we designed several experiments, which we performed using MATLAB (The MathWorks, Natick, MA). The symbols used in this chapter are the same as those used in Chapters 3 and 4.

We choose the dimension of the matrices to be $n = 5$. First, we construct a random unit lower triangular matrix $L$ and a random upper triangular matrix $U$. To guarantee numerical stability, we constrained the 2-norm condition numbers of both $L$ and $U$ to be no more than 100.
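The construction just described can be sketched in NumPy (the thesis used MATLAB; the rejection loop and the helper name `random_triangular_factors` below are our own way of enforcing the condition-number bound):

```python
import numpy as np

def random_triangular_factors(n=5, max_cond=100.0, seed=0):
    """Draw a random unit lower triangular L and upper triangular U,
    redrawing until both 2-norm condition numbers are at most max_cond."""
    rng = np.random.default_rng(seed)
    while True:
        L = np.tril(rng.standard_normal((n, n)), -1) + np.eye(n)
        U = np.triu(rng.standard_normal((n, n)))
        if np.abs(np.diag(U)).min() < 0.1:
            continue                     # redraw a nearly singular U
        if np.linalg.cond(L) <= max_cond and np.linalg.cond(U) <= max_cond:
            return L, U

L, U = random_triangular_factors()
assert np.linalg.cond(L) <= 100 and np.linalg.cond(U) <= 100
```

Rejection sampling is crude but adequate here, since well-conditioned draws are common at this size.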
To illustrate Theorem 3.1, we choose the magnitudes of the eigenvalues of the matrix $A$ to be $|\lambda_i| = 2^{n+1-i}$, for all $i = 1,2,\dots,n$. Their phases are generated randomly in the range $0$ to $2\pi$. Using these eigenvalues, we form the diagonal matrix $D = \operatorname{diag}([\lambda_1,\lambda_2,\dots,\lambda_n])$.

In the first experiment, we form the matrix $X^{-1}$ by letting $X^{-1} = LU$. The matrix $A$ is formed by $A = XDX^{-1}$. We then performed the QR algorithm on $A$ for 800 iterations. The results are shown in Figures 5.1, 5.2 and 5.3.
Figure 5.1 shows the convergence of the lower triangular part of $A_k$ off the diagonal. The curve represents the evolution of the maximum absolute value of all the lower triangular off-diagonal elements of $A_k$, $\max\{|A_k(i,j)| : i > j\}$, over the iterations, plotted on a logarithmic scale.

Figure 5.1: Lower triangular (off-diagonal) part of $A_k$ converges to zero

Figure 5.2 shows the convergence of the diagonal elements of $A_k$ in the complex plane. The trajectories of the diagonal elements are plotted over the iterations. The triangles represent the starting point of each diagonal element (note that the diagonal elements in the first several iterations tend to be far from the final converged value; in order to show more detail, we chose to start the plot from the 7th iteration). The circles represent the ending points of the trajectories. The "+" signs mark the true eigenvalues of $A$. From this figure, we can see that the diagonal elements of $A_k$ converge to the eigenvalues of $A$.
The evolution of four randomly selected upper triangular elements of $A_k$ is shown in Fig. 5.3. On the top row, the magnitudes of these elements are plotted against the iterations. On the bottom row, their trajectories in the complex plane are plotted. Again, triangles and circles represent the beginning and ending points of the trajectories. One can see that these upper triangular elements of $A_k$ converge in magnitude (top row) but do not converge in value (bottom row).

Figure 5.2: Diagonal elements of $A_k$ converge to the eigenvalues of $A$

Figure 5.3: Upper triangular (off-diagonal) part of $A_k$ converges in magnitude

In the second experiment, we generate a random permutation matrix $P$ and relax the constraint so that $X^{-1} = LPU$. The results are shown in Figures 5.4, 5.5 and 5.6. These figures show similar convergence results for the lower triangular, diagonal and upper triangular parts of $A_k$. Compared to Figs. 5.1, 5.2 and 5.3, one can see that there are more oscillations present in the $LPU$ case than in the $LU$ case. Also, notice that in Fig. 5.5 the trajectories of $A_k(1,1)$ and $A_k(2,2)$ trade places with each other during the iterations. This actually reflects the permutation matrix $P$. In this case,
\[
P = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}.
\]
Figure 5.4: Lower triangular (off-diagonal) part of $A_k$ converges to zero
The above experiments validate the convergence of the QR algorithm that we presented in Chapter 3. We then changed the eigenvalues to validate Theorem 4.1, presented in Chapter 4. The new eigenvalues are divided into two groups: the first group of 2 eigenvalues, $\lambda_1$ and $\lambda_2$, have the same magnitude of 2, but with random phases; the second group of 3 eigenvalues, $\lambda_3$, $\lambda_4$ and $\lambda_5$, have the same magnitude of 1, again with random phases. The diagonal matrix $D$ is then formed by $D = \operatorname{diag}([\lambda_1,\lambda_2,\dots,\lambda_5])$.

We constructed two random permutation matrices $P_r$ and $P_{n-r}$ of sizes 2 and 3, respectively. We let $P = P_r \oplus P_{n-r}$, and $X^{-1}$ is constructed as $X^{-1} = LPU$. Then $A$ is formed by $A = XDX^{-1}$. The QR algorithm iteration results are shown in Figures 5.7, 5.9 and 5.10.
Figure 5.5: Diagonal elements of $A_k$ converge to the eigenvalues of $A$

Figure 5.6: Upper triangular (off-diagonal) part of $A_k$ converges in magnitude
In Fig. 5.7, the convergence of several blocks of the lower triangular part of $A_k$ off the diagonal is shown. Here, T-L is the set whose only member is $A_k(2,1)$; B-L is the set containing $A_k(i,j)$, $i = 3,4,5$ and $j = 1,2$; B-R is the set that contains $A_k(4,3)$, $A_k(5,3)$ and $A_k(5,4)$. A schematic illustration of the different blocks is shown in Fig. 5.8. From Fig. 5.7, we can see that the B-L block converges to zero while the other two blocks do not.
Figure 5.7: Lower triangular (off-diagonal) blocks of $A_k$: only the B-L block converges to zero

Figure 5.8: Schematic illustration of the lower triangular off-diagonal blocks of $A_k$
Figure 5.9 shows the non-convergence of the diagonal elements of $A_k$. However, the eigenvalue multisets of the two diagonal blocks (the top-left $2\times 2$ block and the bottom-right $3\times 3$ block) of $A_k$ converge to the eigenvalue multisets of the corresponding blocks of $A$, as shown in Fig. 5.10.
Figure 5.9: Diagonal elements of $A_k$ do NOT converge to the eigenvalues of $A$

Figure 5.10: Convergence of the eigenvalue multisets of the diagonal blocks of $A_k$ to those of $A$ (the distance $d(\varphi_k,\lambda)$ is plotted against the QR iterations for the top-left $2\times 2$ block and the bottom-right $3\times 3$ block)
Although we do not explore QR algorithms with shifts [7, 9, 10, 11], we are aware of these advanced algorithms. The simplest shifted QR algorithm is the single-shift QR algorithm. As shown above, when some eigenvalues of $A$ share the same magnitude, the diagonal elements of $A_k$ do not converge to these eigenvalues. In the extreme case, if all the eigenvalues of $A$ have the same magnitude, the QR algorithm fails totally. Shift algorithms have been proposed to solve this problem. Here we provide one example illustrating the single-shift QR algorithm.
Let
\[
A = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0
\end{bmatrix}.
\]
Then all 5 eigenvalues of $A$ have the same magnitude 1. In fact, the 5 eigenvalues of $A$ are the fifth complex roots of 1, listed as follows:
\[
1,\quad \cos\!\left(\tfrac{2\pi}{5}\right)+i\sin\!\left(\tfrac{2\pi}{5}\right),\quad \cos\!\left(\tfrac{4\pi}{5}\right)+i\sin\!\left(\tfrac{4\pi}{5}\right),\quad \cos\!\left(\tfrac{6\pi}{5}\right)+i\sin\!\left(\tfrac{6\pi}{5}\right),\quad \cos\!\left(\tfrac{8\pi}{5}\right)+i\sin\!\left(\tfrac{8\pi}{5}\right).
\]
The original QR algorithm fails to converge at all for this matrix. However, if we let
\[
A_s = A + I,
\]
then the eigenvalues of $A_s$ are those of $A$ plus 1. Indeed, if we let $\lambda$ be an eigenvalue of $A$ and $x$ be a corresponding eigenvector of $A$, then
\[
A_sx = (A+I)x = \lambda x + x = (\lambda+1)x.
\]
The eigenvalues of $A_s$ are now
\[
[\lambda_{s,1},\dots,\lambda_{s,5}] = \left[2,\ 1+\cos\!\left(\tfrac{2\pi}{5}\right)+i\sin\!\left(\tfrac{2\pi}{5}\right),\ 1+\cos\!\left(\tfrac{4\pi}{5}\right)+i\sin\!\left(\tfrac{4\pi}{5}\right),\ 1+\cos\!\left(\tfrac{6\pi}{5}\right)+i\sin\!\left(\tfrac{6\pi}{5}\right),\ 1+\cos\!\left(\tfrac{8\pi}{5}\right)+i\sin\!\left(\tfrac{8\pi}{5}\right)\right].
\]
Obviously they do not all have the same magnitude, but still $|\lambda_{s,2}| = |\lambda_{s,5}|$ and $|\lambda_{s,3}| = |\lambda_{s,4}|$. So after the shift, we have 3 groups of eigenvalues composed respectively of $\lambda_{s,1}$; $\lambda_{s,2}$ and $\lambda_{s,5}$; and $\lambda_{s,3}$ and $\lambda_{s,4}$. According to our proof in Chapter 4, the QR iterations should produce convergence to $\lambda_{s,1}$ on the diagonal. They should also produce two $2\times 2$ block matrices along the diagonal that converge in terms of eigenvalue multisets.
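This prediction is straightforward to check. The sketch below is ours, in NumPy rather than the thesis's MATLAB; it applies 200 unshifted QR steps to $A_s = A + I$ and verifies the structure described above.

```python
import numpy as np

A = np.zeros((5, 5))
A[np.arange(4), np.arange(1, 5)] = 1.0   # superdiagonal ones
A[4, 0] = 1.0                            # cyclic: eigenvalues are 5th roots of 1

Ak = A + np.eye(5)                       # the shifted matrix A_s = A + I
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q

# the (1,1) entry converges to the simple shifted eigenvalue 2
assert abs(Ak[0, 0] - 2.0) < 1e-6

# the two 2x2 diagonal blocks carry the conjugate pairs
lam_s2 = 1 + np.exp(2j * np.pi / 5)      # about 1.3090 + 0.9511i
lam_s3 = 1 + np.exp(4j * np.pi / 5)      # about 0.1910 + 0.5878i
ev1 = np.linalg.eigvals(Ak[1:3, 1:3])
ev2 = np.linalg.eigvals(Ak[3:, 3:])
assert np.abs(ev1 - lam_s2).min() < 1e-3
assert np.abs(ev1 - lam_s2.conjugate()).min() < 1e-3
assert np.abs(ev2 - lam_s3).min() < 1e-3
assert np.abs(ev2 - lam_s3.conjugate()).min() < 1e-3
```

NumPy's sign conventions for QR differ from the positive-diagonal normalization used in the proofs, but the iterates differ only by a diagonal $\pm 1$ similarity, which leaves the diagonal entries and the block eigenvalues unchanged.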
We implemented the QR algorithm on $A_s$ for 200 iterations. The iteration result $A_{200}$ is shown below:
\[
A_{200} = \begin{bmatrix}
2.0000 & -1.7000\times 10^{-16} & -3.2641\times 10^{-16} & -3.1697\times 10^{-16} & -1.5618\times 10^{-17} \\
-2.4792\times 10^{-19} & 1.3090 & 9.5106\times 10^{-1} & -1.0435\times 10^{-16} & 4.6808\times 10^{-17} \\
7.6303\times 10^{-19} & -9.5106\times 10^{-1} & 1.3090 & 2.3168\times 10^{-16} & 2.0064\times 10^{-16} \\
1.5620\times 10^{-101} & -1.9469\times 10^{-83} & 6.3259\times 10^{-84} & 1.9098\times 10^{-1} & -5.8779\times 10^{-1} \\
-1.1349\times 10^{-101} & 1.4145\times 10^{-83} & -4.5961\times 10^{-84} & 5.8779\times 10^{-1} & 1.9098\times 10^{-1}
\end{bmatrix}. \tag{5.1}
\]
As seen in (5.1), $A_k(1,1)$ converges to $\lambda_{s,1} = 2$. We also see that there are two diagonal blocks that do not converge to zero; all the elements below these diagonal blocks converge to zero.

Also notice that all the elements above the diagonal blocks converge to zero. This is not by accident: in this case $A$ is normal because $A^*A = AA^* = I$. By Equation (4.18), all the elements above the diagonal blocks converge to zero.
The diagonal block $\begin{bmatrix}1.3090 & 9.5106\times 10^{-1} \\ -9.5106\times 10^{-1} & 1.3090\end{bmatrix}$ has two eigenvalues, $1.3090+0.9511i$ and $1.3090-0.9511i$, which are approximately equal to $\lambda_{s,2}$ and $\lambda_{s,5}$. The eigenvalues of the diagonal block $\begin{bmatrix}1.9098\times 10^{-1} & -5.8779\times 10^{-1} \\ 5.8779\times 10^{-1} & 1.9098\times 10^{-1}\end{bmatrix}$ are $0.1910+0.5878i$ and $0.1910-0.5878i$, which are approximately equal to $\lambda_{s,3}$ and $\lambda_{s,4}$. This result not only validates our proof in Chapter 4, but also illustrates that the single shift is effective in enabling convergence to one eigenvalue of $A$.
Figure 5.11 shows the QR iterations of $A + (2+i)I$. In each iteration, the constant $(2+i)$ is subtracted from the diagonal elements of $A_k$ before they are plotted. One can see that all 5 eigenvalues converge with this shift. This is because the complex shift results in all 5 eigenvalues having 5 different magnitudes.
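The complex-shift experiment can also be reproduced with a short sketch (ours, in NumPy): shift the cyclic matrix by $(2+i)I$, iterate, and subtract the shift from the diagonal; every eigenvalue is then recovered, since all five shifted magnitudes are distinct.

```python
import numpy as np

A = np.zeros((5, 5))
A[np.arange(4), np.arange(1, 5)] = 1.0
A[4, 0] = 1.0                          # eigenvalues: the 5th roots of 1

shift = 2 + 1j
Ak = A.astype(complex) + shift * np.eye(5)
for _ in range(600):
    Q, R = np.linalg.qr(Ak)            # complex QR factorization
    Ak = R @ Q

# all shifted eigenvalues have distinct magnitudes, so the whole
# diagonal converges; undoing the shift recovers the roots of unity
diag = np.diag(Ak) - shift
for root in np.exp(2j * np.pi * np.arange(5) / 5):
    assert np.abs(diag - root).min() < 1e-6
assert np.abs(np.tril(Ak, -1)).max() < 1e-6
```

The slowest magnitude ratio here is about 0.956, so more iterations are needed than in the unit-shift experiment; 600 steps leave comfortable margin.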
Figure 5.11: Convergence of QR: $A$ is shifted by $(2+i)$ in the complex plane
Bibliography

[1] Roger A. Horn and Charles R. Johnson, "Matrix Analysis," Cambridge University Press, 1990.

[2] Ludwig Elsner, "On the Variation of the Spectra of Matrices," Linear Algebra and Its Applications 47 (1982), 127-138.

[3] J.G.F. Francis, "The QR Transformation, Part I," Computer Journal 4 (1961), 265-271.

[4] J.G.F. Francis, "The QR Transformation, Part II," Computer Journal 4 (1962), 332-345.

[5] V. N. Kublanovskaya, "On Some Algorithms for the Solution of the Complete Eigenvalue Problem," USSR Computational Mathematics and Mathematical Physics 1 (1961), 637-657.

[6] J.H. Wilkinson, "Convergence of the LR, QR and Related Algorithms," Computer Journal 8 (1965), 77-84.

[7] J.H. Wilkinson, "The Algebraic Eigenvalue Problem," Oxford University Press, 1988.

[8] E. E. Tyrtyshnikov, "Matrix Bruhat Decompositions with a Remark on the QR (GR) Algorithm," Linear Algebra and its Applications 250 (1997), 61-68.

[9] K. Braman, R. Byers, R. Mathias, "The Multishift QR Algorithm. Part I: Maintaining Well-Focused Shifts and Level 3 Performance," SIAM J. Matrix Anal. Appl. 23 (2001), 929-947.

[10] K. Braman, R. Byers, R. Mathias, "The Multishift QR Algorithm. Part II: Aggressive Early Deflation," SIAM J. Matrix Anal. Appl. 23 (2001), 948-973.

[11] D.S. Watkins, "Forward Stability and Transmission of Shifts in the QR Algorithm," SIAM J. Matrix Anal. Appl. 16 (1995), 469-487.

[12] D.S. Watkins, "Understanding the QR Algorithm," SIAM Rev. 24 (1982), 427-440.

[13] B.N. Parlett, "The QR Algorithm," Computing in Science and Engineering 2 (2000), 38-42.

[14] H. Huang, T. Tam, "On the QR iterations of real matrices," Linear Algebra and its Applications 408 (2005), 161-176.