Inverse Eigenvalue Problem for Euclidean Distance Matrices
by
Emily M. Dempsey
A thesis submitted to the Graduate Faculty of
Auburn University
in partial fulfillment of the
requirements for the Degree of
Master of Science
Auburn, Alabama
May 4, 2014
Approved by
Peter M. Nylen, Professor of Mathematics
Douglas A. Leonard, Professor of Mathematics
Thomas H. Pate, Professor of Mathematics
Abstract
Emily M. Dempsey, Mathematics, Auburn University
The Inverse Eigenvalue Problem for Euclidean Distance Matrices
This paper examines the inverse eigenvalue problem (IEP) for the particular class of
Euclidean distance matrices (EDM). Studying the necessary and sufficient conditions for
a matrix to be an element of EDM gives a better understanding of the conditions a list of
numbers must satisfy in order to be the eigenvalues of some Euclidean distance matrix.
Using these necessary conditions, Hayden was able to solve the IEP of order $n$ using a
Hadamard matrix of the same order. Ten years later, an error in Hayden's construction for
order $n+1$ was noted and corrected accordingly; however, the corrected result is not a
solution to the IEP of order $n+1$.
Table of Contents
Abstract
List of Symbols
1 What is an inverse eigenvalue problem?
1.1 Examples of solved inverse eigenvalue problems
1.1.1 Circulant matrices
1.1.2 Real symmetric tridiagonal matrices
2 Definition and existence of a Euclidean distance matrix
2.1 EDM definition
2.2 Characterization of EDM
2.3 Eigenvalue properties of EDM
3 Connections between Hadamard and Euclidean distance matrices
3.1 Hadamard matrices
3.2 Solution to the IEP based on a given Hadamard matrix
3.2.1 An example constructing a distance matrix using a Hadamard matrix of the same order
3.3 Solving the inverse eigenvalue problem for other orders
3.3.1 An error attempting to solve the inverse eigenvalue problem for EDM for the $(n+1) \times (n+1)$ case
3.4 Current state of the $n+1$ case
Bibliography
List of Symbols
EDM  set of Euclidean distance matrices
$\mathbf{x}$  bold letter indicates a vector
$\mathbb{R}^n$  real vector space of dimension $n$
$\mathbb{R}^{n \times n}$  set of all real matrices of order $n$
$\sigma(D)$  set of eigenvalues of the matrix $D$
$\|\cdot\|$  norm of a vector
$\|\cdot\|_2$  Euclidean norm of a vector
$M^T$  transpose of the matrix $M$
$\mathrm{SYM}_0^+$  set of all symmetric non-negative matrices with zero diagonal
$\{x\}^\perp$  orthogonal complement of the vector $x$
IEP  inverse eigenvalue problem
Chapter 1
What is an inverse eigenvalue problem?
An inverse eigenvalue problem deals with the reconstruction of a matrix from given spectral
data (complete or partial information about eigenvalues or eigenvectors). Often the aim is
to reconstruct a matrix with some particular structure (such as a circulant matrix or a
tridiagonal matrix) from given spectral data. One may also ask: given spectral data, does a
matrix with the required structure even exist? Considerable time has been spent on finding
necessary and sufficient conditions for a given inverse eigenvalue problem to have a solution,
and an efficient algorithm for finding such a solution is normally quite difficult to obtain.
1.1 Examples of solved inverse eigenvalue problems
For some classes of matrices, the inverse eigenvalue problem has been solved. For in-
stance, the inverse eigenvalue problem has been solved for circulant matrices $C \in \mathbb{R}^{n \times n}$,
along with symmetric tridiagonal matrices $T \in \mathbb{R}^{n \times n}$.
1.1.1 Circulant matrices
A circulant matrix $C \in \mathbb{R}^{n \times n}$ is of the form
$$C = \begin{bmatrix}
c_0 & c_1 & c_2 & \cdots & c_{n-1} \\
c_{n-1} & c_0 & c_1 & \cdots & c_{n-2} \\
c_{n-2} & c_{n-1} & c_0 & \cdots & c_{n-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_1 & c_2 & \cdots & c_{n-1} & c_0
\end{bmatrix}.$$
Each row of a circulant matrix is the previous row cycled horizontally one place to the right.
Define the basic circulant
$$P = \begin{bmatrix}
0 & 0 & \cdots & 0 & 1 \\
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \ddots & & \vdots \\
\vdots & & \ddots & 0 & 0 \\
0 & \cdots & 0 & 1 & 0
\end{bmatrix},$$
that is, the circulant whose first row has $c_{n-1} = 1$ and $c_i = 0$ for $i \neq n-1$.
Notice that the minimal polynomial of $P$ is $q(t) = t^n - 1$, so $\sigma(P)$ is the set of distinct $n$th
roots of unity $\{\omega^0, \omega^1, \ldots, \omega^{n-1}\}$, where $\omega = e^{2\pi i/n}$. Let $\{\lambda_0, \lambda_1, \ldots, \lambda_{n-1}\}$ be the desired eigen-
values of the circulant matrix $C$, and therefore the roots of the characteristic polynomial of $C$.
By interpolation, there is a polynomial $Q(x)$ of degree at most $n-1$ that maps each $\omega^i$ to $\lambda_i$. So
$C = Q(P)$ is circulant, and hence $\sigma(C) = \sigma(Q(P)) = \{\lambda_0, \ldots, \lambda_{n-1}\}$.
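The interpolation argument above can be carried out numerically with the inverse discrete Fourier transform, since evaluating $Q$ at the powers of $P$ amounts to an inverse DFT of the eigenvalue list. The following is a sketch; the prescribed eigenvalues are an assumption chosen here for illustration, conjugate-symmetric so that $C$ comes out real:

```python
import numpy as np

# Sketch of the circulant IEP: the circulant with first row c has spectrum
# given by the DFT of c, so c is recovered from the prescribed eigenvalues
# by an inverse DFT. lam is an illustrative choice, conjugate-symmetric
# (lam[k] paired with lam[n-k]) so that the resulting matrix is real.
n = 4
lam = np.array([10.0, 1 + 2j, 0.0, 1 - 2j])
c = np.fft.ifft(lam)           # first row of the circulant
assert np.allclose(c.imag, 0)
c = c.real                     # here c = [3.0, 1.5, 2.0, 3.5]

# Each row is the previous row cycled one place to the right.
C = np.array([np.roll(c, i) for i in range(n)])

# The spectrum of C is exactly the prescribed multiset.
eigs = np.linalg.eigvals(C)
key = lambda z: (round(z.real, 8), round(z.imag, 8))
assert sorted(map(key, eigs)) == sorted(map(key, lam))
```

The inverse DFT plays the role of the interpolating polynomial $Q$: both recover the coefficients $c_i$ of $C = \sum_i c_i P^i$ from the values $\lambda_i$.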
1.1.2 Real symmetric tridiagonal matrices
Real symmetric tridiagonal matrices are of the form
$$\begin{bmatrix}
\alpha_1 & \beta_1 & 0 & \cdots & 0 \\
\beta_1 & \alpha_2 & \beta_2 & \ddots & \vdots \\
0 & \beta_2 & \ddots & \ddots & 0 \\
\vdots & \ddots & \ddots & \alpha_{n-1} & \beta_{n-1} \\
0 & \cdots & 0 & \beta_{n-1} & \alpha_n
\end{bmatrix}.$$
Given real numbers $\lambda_1 < \mu_1 < \lambda_2 < \cdots < \mu_{n-1} < \lambda_n$, there exists a unique
tridiagonal (or Jacobi) matrix $T$ of the above form with $\beta_i > 0$ such that
$\sigma(T) = \{\lambda_1, \ldots, \lambda_n\}$ and $\sigma(T_{1,1}) = \{\mu_1, \ldots, \mu_{n-1}\}$, where $T_{1,1} \in \mathbb{R}^{(n-1) \times (n-1)}$ denotes the
matrix $T \in \mathbb{R}^{n \times n}$ with column 1 and row 1 removed [1].
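The interlacing condition $\lambda_1 < \mu_1 < \lambda_2 < \cdots < \mu_{n-1} < \lambda_n$ in the statement above is easy to observe numerically. The following sketch (with arbitrary illustrative entries, not taken from the statement) checks it for one Jacobi matrix:

```python
import numpy as np

# Numerical illustration of the interlacing in the Jacobi-matrix IEP: for a
# symmetric tridiagonal T with positive off-diagonal entries, the eigenvalues
# of T_{1,1} (T with row 1 and column 1 removed) strictly interlace those of
# T. The entries below are arbitrary illustrative values.
alpha = np.array([1.0, 2.0, 3.0, 4.0])    # diagonal alpha_i
beta = np.array([0.5, 0.7, 0.3])           # off-diagonal beta_i, all > 0
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

lam = np.linalg.eigvalsh(T)         # lambda_1 < ... < lambda_n (ascending)
mu = np.linalg.eigvalsh(T[1:, 1:])  # mu_1 < ... < mu_{n-1}

# lambda_1 < mu_1 < lambda_2 < ... < mu_{n-1} < lambda_n
for i in range(len(mu)):
    assert lam[i] < mu[i] < lam[i + 1]
```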
Chapter 2
Definition and existence of a Euclidean distance matrix
Distance matrices are the class of matrices whose entries represent the pairwise distances
between points in some finite dimension $k$. Storing distances among points in matrix form
is relatively efficient. There are obviously necessary conditions for an $n \times n$ matrix $M$ to
possibly be a distance matrix; however, these initial obvious conditions are not sufficient.
Euclidean distance matrices are the sub-class of distance matrices in which the distance is
measured using the Euclidean metric.
2.1 EDM definition

A Euclidean space is a finite-dimensional real vector space $\mathbb{R}^k$ with inner product
defined by $\langle x, y \rangle = \langle (x_1, x_2, \ldots, x_k), (y_1, y_2, \ldots, y_k) \rangle = x_1y_1 + x_2y_2 + \cdots + x_ky_k$. The inner
product induces a distance, or metric, $d(x, y) = \|x - y\|_2 = \sqrt{\sum_{i=1}^k (x_i - y_i)^2}$. An element
of the set of Euclidean distance matrices, denoted EDM, is derived from the complete list of
squared distances between pairs of points from a list $\{x_i,\ i = 1, \ldots, N\}$. The entries of the
distance matrix are defined by $d_{ij} = \|x_i - x_j\|_2^2 = \langle x_i - x_j, x_i - x_j \rangle$, the squared distance
between the $i$th and $j$th points. Note that the number of points $N$ in the list does not have
to equal the dimension $k$ of the space.
The squared distances between points $x_i$ and $x_j$, for $i, j = 1, \ldots, N$, must satisfy the
metric space properties:

1. $d_{ij} \geq 0$, $i \neq j$
2. $d_{ij} = 0$ if and only if $x_i = x_j$
3. $d_{ij} = d_{ji}$
4. $\sqrt{d_{ij}} \leq \sqrt{d_{ik}} + \sqrt{d_{kj}}$, $i \neq j \neq k$ (the triangle inequality on the underlying distances)

A distance matrix $D$ has $N^2$ entries, but since it is symmetric with zero diagonal, $D$
contains only $N(N-1)/2$ pieces of information. Note that EDM, the set of Euclidean
squared-distance matrices, is a subset of $\mathbb{R}^{N \times N}_+$.
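As a small sketch of the definition (the point list is an arbitrary illustrative choice), the squared-distance matrix of a list of points can be formed directly from $d_{ij} = \|x_i - x_j\|_2^2$:

```python
import numpy as np

# Sketch: form the Euclidean squared-distance matrix of N points in R^k
# directly from the definition d_ij = ||x_i - x_j||^2. The points are an
# illustrative choice: N = 3 points in dimension k = 2.
X = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 2.0]])
N = X.shape[0]
D = np.array([[np.sum((X[i] - X[j]) ** 2) for j in range(N)]
              for i in range(N)])

assert np.allclose(D, D.T)          # symmetric: only N(N-1)/2 free entries
assert np.allclose(np.diag(D), 0)   # zero diagonal
print(D)                            # [[0. 1. 4.] [1. 0. 5.] [4. 5. 0.]]
```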
2.2 Characterization of EDM
Although the entries of any distance matrix $D$ must satisfy the metric properties given
above, that is not a sufficient condition for the matrix $D$ to be in EDM. It was not until 1982
that J. C. Gower gave necessary and sufficient conditions for a matrix $D$ to be in EDM [6].
His result, according to many mathematicians, is rather late given its significance. Gower's
results are explained in Theorem 2.1. Following the proof is an example of why a matrix
$A \in \mathbb{R}^{n \times n}$ whose entries satisfy only the metric properties may fail to be an element of EDM.
Theorem 2.1. Let $s \in \mathbb{R}^n$ with $s^Te = 1$, where $e$ is the all-ones vector. $D$ is a distance
matrix (of points in $\mathbb{R}^k$) if and only if

1. $D$ is symmetric,
2. $\mathrm{diag}(D) = 0$, and
3. $(I - es^T)D(I - se^T)$ is negative semidefinite with rank bounded above by $k$.
Proof. ($\Rightarrow$) Let $D = [d_{ij}]$ be a distance matrix defined by the points in the list
$\{x_\ell \in \mathbb{R}^k,\ \ell = 1, \ldots, n\}$. Each entry of $D$ satisfies $d_{ij} = \|x_i - x_j\|^2 = \|x_i\|^2 + \|x_j\|^2 - 2x_i^Tx_j$.
Therefore $D = Xe^T + eX^T - 2P^TP$, where $P = [x_1, \ldots, x_n] \in \mathbb{R}^{k \times n}$ has the points as
columns and $X = [\|x_1\|^2, \ldots, \|x_n\|^2]^T$. Let $u \in \mathbb{R}^n$; then
$u^T(I - es^T)D(I - se^T)u = ((I - se^T)u)^TD(I - se^T)u$. Let $v = (I - se^T)u$; then $v \in \{e\}^\perp$,
since $e^T(I - se^T)u = (e^T - (e^Ts)e^T)u = (e^T - e^T)u = 0$. Hence
$$v^TDv = v^T(Xe^T + eX^T - 2P^TP)v = v^TXe^Tv + v^TeX^Tv - 2v^TP^TPv = -2(Pv)^T(Pv) = -2\|Pv\|^2 \leq 0,$$
where the first two terms vanish because $e^Tv = 0$. The rank bound follows since $\mathrm{rank}(P) \leq k$.

($\Leftarrow$) Let $D = D^T$, $\mathrm{diag}(D) = 0$, and $(I - es^T)D(I - se^T)$ be negative semidefinite. There-
fore $-\frac{1}{2}(I - es^T)D(I - se^T)$ is positive semidefinite. Since $\mathrm{rank}\{(I - es^T)D(I - se^T)\} \leq k$,
there exists $Q = [y_1, \ldots, y_n] \in \mathbb{R}^{k \times n}$, $y_i \in \mathbb{R}^k$, such that $Q^TQ = -\frac{1}{2}(I - es^T)D(I - se^T)$. Then
$-2Q^TQ = (I - es^T)D(I - se^T) = D - es^TD - Dse^T + es^TDse^T$. Let $g = Ds - \frac{1}{2}(s^TDs)e$. Then
$-2Q^TQ = D - ge^T - eg^T$. After rearranging, $D = ge^T + eg^T - 2Q^TQ$. So $0 = d_{ii} = g_i + g_i - 2y_i^Ty_i$,
and therefore $g_i = y_i^Ty_i = \|y_i\|^2$. Finally, $d_{ij} = \|y_i\|^2 + \|y_j\|^2 - 2y_i^Ty_j = \|y_i - y_j\|^2$. $\square$
Observation 2.2. If $s^Te = 1$ then $(I - se^T)x \in \{e\}^\perp$ for all $x$, since $e^T(I - se^T)x =
(e^T - e^T)x = 0$. Therefore, condition 3 of Theorem 2.1 can be restated as: $D$ is negative
semidefinite on the subspace $\{e\}^\perp$ for $D \in \mathrm{EDM}$.
Observation 2.3. By the definition and the metric properties alone, a matrix $D$ must be
symmetric and non-negative with zero diagonal to have a chance of being in EDM. From now
on, denote the set of $n \times n$ symmetric non-negative matrices with zero diagonal by $\mathrm{SYM}_0^+$.
Theorem 2.1 can therefore be restated more simply: let $D \in \mathrm{SYM}_0^+$; then $D \in \mathrm{EDM}$ if and
only if $D$ is negative semidefinite on $\{e\}^\perp$.

Together, conditions 1, 2, and 3 of Theorem 2.1 are the necessary and sufficient condi-
tions for a matrix $D \in \mathrm{EDM}$. A matrix $A \in \mathrm{SYM}_0^+$ satisfying only the metric properties
is not guaranteed to be in EDM.
Example 1: Define
$$A = \begin{bmatrix}
0 & 1 & 1 & 4 \\
1 & 0 & 1 & 1 \\
1 & 1 & 0 & 1 \\
4 & 1 & 1 & 0
\end{bmatrix}.$$
Upon inspection, the entries of $A$ satisfy each metric property 1-4 and conditions 1 and 2 of
Theorem 2.1, so $A$ is in $\mathrm{SYM}_0^+$. To check the last condition, let $s = \begin{bmatrix} 1 & -1 & 0 & 1 \end{bmatrix}^T$,
with $s^Te = 1$, and compute $\hat{A} = (I - es^T)A(I - se^T)$:
$$\hat{A} = \begin{bmatrix}
0 & 1 & 0 & -1 \\
-1 & 2 & 0 & -1 \\
-1 & 1 & 1 & -1 \\
-1 & 1 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
0 & 1 & 1 & 4 \\
1 & 0 & 1 & 1 \\
1 & 1 & 0 & 1 \\
4 & 1 & 1 & 0
\end{bmatrix}
\begin{bmatrix}
0 & -1 & -1 & -1 \\
1 & 2 & 1 & 1 \\
0 & 0 & 1 & 0 \\
-1 & -1 & -1 & 0
\end{bmatrix}
= \begin{bmatrix}
-2 & 0 & 1 & 2 \\
0 & 0 & 2 & 0 \\
1 & 2 & 2 & 1 \\
2 & 0 & 1 & -2
\end{bmatrix}.$$
Condition 3 of Theorem 2.1 states that $\hat{A} = (I - es^T)A(I - se^T)$ must be negative semidefinite
in order for $A \in \mathrm{EDM}$: the inequality $x^T(I - es^T)A(I - se^T)x = x^T\hat{A}x \leq 0$ must hold
for every $x \in \mathbb{R}^4$. But with $x = \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix}^T$,
$$x^T\hat{A}x = 2 > 0.$$
Therefore the matrix $A$ cannot be a Euclidean distance matrix.
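The computation in Example 1 can be reproduced numerically; the following sketch applies Gower's test of Theorem 2.1 to the matrix $A$:

```python
import numpy as np

# Numerical check of Example 1 (Gower's test, Theorem 2.1): A satisfies the
# entrywise metric properties but fails condition 3.
A = np.array([[0., 1, 1, 4],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [4, 1, 1, 0]])
e = np.ones(4)
s = np.array([1., -1, 0, 1])
assert np.isclose(s @ e, 1)          # s^T e = 1

P = np.eye(4) - np.outer(e, s)       # I - e s^T
A_hat = P @ A @ P.T                  # (I - e s^T) A (I - s e^T)

x = np.array([0., 0, 1, 0])
print(x @ A_hat @ x)                 # approximately 2 > 0, so A_hat is not
                                     # negative semidefinite: A is not an EDM
```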
2.3 Eigenvalue properties of EDM
Before attempting to solve an inverse eigenvalue problem for $D \in \mathrm{EDM}$, as with any
class of matrices, it is important to understand how the eigenvalues of $D$ behave, their
relationships with the eigenvectors, and other spectral properties of EDM as a set. Each
potential distance matrix $M \in \mathrm{SYM}_0^+$ is symmetric and real-valued. Therefore
$\sigma(M) \subset \mathbb{R}$, since for any $\lambda \in \sigma(M)$ with eigenvector $x$,
$$\lambda\langle x, x \rangle = \langle \lambda x, x \rangle = \langle Mx, x \rangle = \langle x, M^Tx \rangle = \langle x, Mx \rangle = \langle x, \lambda x \rangle = \bar{\lambda}\langle x, x \rangle.$$
Moreover, if $M \in \mathrm{SYM}_0^+$ with $\sigma(M) = \{\lambda_1, \lambda_2, \ldots, \lambda_n\}$, then $\mathrm{trace}(M) = 0$, meaning
that $\sum_{i=1}^n \lambda_i = 0$. These properties of the eigenvalues of any matrix in $\mathrm{SYM}_0^+$ give an
obvious starting point for the purpose of reconstructing a Euclidean distance matrix from
spectral data. Next it will be useful to state and prove the Courant-Fischer theorem in the
real symmetric case.
Theorem 2.4 (The Courant-Fischer Theorem). Let $A$ be an $n \times n$ real symmetric matrix
with ordered eigenvalues $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_k \leq \cdots \leq \lambda_n$ and let $L$ denote a subspace of $\mathbb{R}^n$. Define
$F(L) = F_A(L) = \max\{x^TAx \mid x \in L, \|x\|_2 = 1\}$, and define $f(L) = f_A(L) = \min\{x^TAx \mid x \in L, \|x\|_2 = 1\}$. Then
$$\lambda_k = \min\{F(L) \mid \dim(L) = k\} = \max\{f(L) \mid \dim(L) = n - k + 1\}.$$

Proof. Let $v_1, \ldots, v_n$ be an orthonormal set of eigenvectors with $Av_i = \lambda_iv_i$. If
$\dim(L) = k$, then $L \cap \mathrm{span}\{v_k, \ldots, v_n\} \neq \{0\}$ by a dimension count, and a unit vector $x$
in this intersection satisfies $x^TAx \geq \lambda_k$, so $F(L) \geq \lambda_k$. Choosing $L = \mathrm{span}\{v_1, \ldots, v_k\}$
gives $F(L) = \lambda_k$; hence $\lambda_k = \min\{F(L) \mid \dim(L) = k\}$. Similarly, if $\dim(L) = n - k + 1$,
then $L \cap \mathrm{span}\{v_1, \ldots, v_k\} \neq \{0\}$, so $f(L) \leq \lambda_k$, and $L = \mathrm{span}\{v_k, \ldots, v_n\}$ attains
$f(L) = \lambda_k$; hence $\lambda_k = \max\{f(L) \mid \dim(L) = n - k + 1\}$. $\square$
Definition 2.5. A symmetric matrix $B$ is said to be almost negative semidefinite if $x^TBx \leq 0$
for all $x \in \mathbb{R}^n$ such that $x^Te = 0$ (that is, for all $x \in \{e\}^\perp$).
The following theorem is adapted from J. Ferland and J. P. Crouzeix's theorem on
properties of almost positive semidefinite matrices and gives some useful conditions for the
analogous almost negative semidefinite matrices. This, in turn, gives alternative necessary
and sufficient conditions, involving spectral data, for a matrix to be in EDM, which is useful
for the study of inverse eigenvalue problems [3].
Theorem 2.6. Let $D \in \mathrm{SYM}_0^+$ and $D \neq 0$. Then $D \in \mathrm{EDM}$ if and only if

1. $D$ has exactly one positive eigenvalue, and
2. there exists $w \in \mathbb{R}^n$ such that $Dw = e$ and $w^Te \geq 0$.
Proof. ($\Rightarrow$)(1) Let $D \in \mathrm{EDM}$. For a subspace $L \subseteq \mathbb{R}^n$ define
$F(L) = \max\{x^TDx \mid x \in L, \|x\|_2 = 1\}$, and let $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$ be the ordered
eigenvalues of $D$. By the Courant-Fischer theorem, $\lambda_k = \min\{F(L) \mid \dim(L) = k\}$, so
$\lambda_{n-1} = \min\{F(L) \mid \dim(L) = n-1\} \leq F(\{e\}^\perp)$. By Theorem 2.1, $F(\{e\}^\perp) \leq 0$,
because $(I - se^T)v = v$ for all $v \in \{e\}^\perp$. So $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_{n-1} \leq 0$. If
$\lambda_1 = \lambda_2 = \cdots = \lambda_{n-1} = 0$, then $\lambda_n = 0$ as well, since $\mathrm{trace}(D) = \sum_{i=1}^n \lambda_i = 0$;
but if $D$ had all zero eigenvalues, $D$ would be orthogonally similar to the zero matrix, a
contradiction. Therefore $D$ has some non-zero eigenvalues and $\lambda_n > 0$: exactly one positive
eigenvalue.
($\Rightarrow$)(2) By (1), $D$ is not negative semidefinite. Suppose $e \notin \mathrm{range}(D)$, so that there
does not exist $w \in \mathbb{R}^n$ with $Dw = e$. Recall that $e$ can be written as $e = x + y$ with
$x \in \ker(D)$ and $y \in \mathrm{range}(D)$; note $D = D^T$, so $\mathrm{range}(D) = \mathrm{range}(D^T)$ and
$\ker(D) \perp \mathrm{range}(D)$. Since $e \notin \mathrm{range}(D)$, $x \neq 0$, and
$x^Te = x^T(x + y) = x^Tx + x^Ty = x^Tx > 0$, since $x^Ty = 0$. Let $v \in \mathbb{R}^n$ and define
$h = \left(I - \frac{1}{x^Te}xe^T\right)v$. Since $Dx = 0$ and $x^TD = 0^T$,
$$h^TDh = v^TDv - \frac{1}{x^Te}v^TDxe^Tv - \frac{1}{x^Te}v^Tex^TDv + \frac{1}{(x^Te)^2}v^Tex^TDxe^Tv = v^TDv.$$
Note also that
$$h^Te = v^T\left(I - \frac{1}{x^Te}ex^T\right)e = v^Te - \frac{1}{x^Te}v^Te\,x^Te = v^Te - v^Te = 0.$$
So by Theorem 2.1 with $s = \frac{1}{x^Te}x$, we get $v^TDv = h^TDh \leq 0$ for every $v \in \mathbb{R}^n$; that is,
$D$ is negative semidefinite, a contradiction. Hence $e \in \mathrm{range}(D)$, and there exists
$w \in \mathbb{R}^n$ such that $Dw = e$.
Now suppose $w^Te \neq 0$; we are going to show $w^Te > 0$. Let $u$ be an eigenvector corre-
sponding to the positive eigenvalue of $D$. Let
$$g = \left(I - \frac{1}{w^Te}we^T\right)u = u - \frac{1}{w^Te}w(e^Tu),$$
and note $g^Te = u^Te - \frac{1}{w^Te}(e^Tu)(w^Te) = 0$. Rearrange to obtain $u = g + \frac{e^Tu}{w^Te}w$, and
compute, using $Dw = e$, $g^TDw = g^Te = 0$, and $w^TDw = w^Te$:
$$u^TDu = \left(g + \frac{e^Tu}{w^Te}w\right)^TD\left(g + \frac{e^Tu}{w^Te}w\right)
= g^TDg + 2\frac{e^Tu}{w^Te}g^TDw + \left(\frac{e^Tu}{w^Te}\right)^2w^TDw
= g^TDg + \left(\frac{u^Te}{w^Te}\right)^2w^Te.$$
Since $u$ is an eigenvector corresponding to the positive eigenvalue of $D$, $u^TDu > 0$, while
$g \in \{e\}^\perp$ gives $g^TDg \leq 0$ by Theorem 2.1. (In particular $u^Te \neq 0$, for otherwise
$u^TDu = g^TDg \leq 0$.) Therefore $\left(\frac{u^Te}{w^Te}\right)^2w^Te > 0$, and so $w^Te > 0$.
($\Leftarrow$) Let $\lambda$ be the positive eigenvalue of $D$ with unit eigenvector $u \neq 0$, i.e.
$Du = \lambda u$. Since the remaining eigenvalues of $D$ are non-positive, $D$ can be written as
$D = \lambda uu^T - C^TC$. By assumption, there exists $w \in \mathbb{R}^n$ satisfying $Dw = e$ and $w^Te \geq 0$:
$$Dw = (\lambda uu^T - C^TC)w = \lambda(u^Tw)u - C^TCw = e, \qquad (*)$$
and
$$w^TDw = \lambda(u^Tw)^2 - w^TC^TCw = w^Te \geq 0. \qquad (**)$$
If $u^Tw = 0$, then $\|Cw\|^2 = w^TC^TCw = -w^Te \leq 0$ by $(**)$. This implies $Cw = 0$,
and then $(*)$ leads to the contradiction $e = 0$. We conclude $u^Tw \neq 0$. Since $\lambda > 0$
and $u^Tw \neq 0$, $(*)$ may be rearranged to give $u = \frac{1}{\lambda u^Tw}(e + C^TCw)$. Then
$$D = \lambda uu^T - C^TC = \frac{1}{\lambda(u^Tw)^2}\left(ee^T + ew^TC^TC + C^TCwe^T + C^TCww^TC^TC\right) - C^TC.$$
Let $h \in \mathbb{R}^n$ satisfy $h^Te = 0$. Then
$$h^TDh = \frac{1}{\lambda(u^Tw)^2}(h^TC^TCw)(w^TC^TCh) - h^TC^TCh = \frac{(h^TC^TCw)^2}{\lambda(u^Tw)^2} - \|Ch\|^2.$$
If $Cw = 0$ this equals $-\|Ch\|^2 \leq 0$. Otherwise, by $(**)$, $\lambda(u^Tw)^2 \geq \|Cw\|^2 > 0$, and therefore
$$h^TDh \leq \frac{[(Ch)^T(Cw)]^2}{\|Cw\|^2} - \|Ch\|^2 \leq 0,$$
since by Cauchy-Schwarz $[(Ch)^T(Cw)]^2 \leq \|Cw\|^2\|Ch\|^2$. Hence $D$ is negative
semidefinite on the subspace $\{e\}^\perp$, and so by Theorem 2.1, $D \in \mathrm{EDM}$. $\square$
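Both conditions of Theorem 2.6 are easy to check numerically. The following sketch verifies them for a known EDM, the squared distances of the collinear points $0, 1, 3$ on the real line (an illustrative choice, not from the text):

```python
import numpy as np

# Checking the two conditions of Theorem 2.6 on a known EDM: the squared
# distances of the collinear points 0, 1, 3 (an illustrative example).
D = np.array([[0., 1, 9],
              [1, 0, 4],
              [9, 4, 0]])
e = np.ones(3)

eigs = np.linalg.eigvalsh(D)
assert np.sum(eigs > 1e-10) == 1     # condition 1: one positive eigenvalue

w = np.linalg.solve(D, e)            # condition 2: Dw = e with w^T e >= 0
assert np.allclose(D @ w, e)
assert w @ e >= -1e-10               # here w^T e = 0 exactly (collinear
                                     # points, a boundary case)
```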
Observation 2.7. An important fact to note: if there exist two vectors $w_1, w_2 \in \mathbb{R}^n$ such
that $Dw_1 = Dw_2 = e$, then $w_1 - w_2 = z \in \ker(D)$, so $e^Tw_1 - e^Tw_2 = e^Tz$. However
$e^Tz = (Dw_1)^Tz = w_1^TDz = 0$, and therefore $w_1^Te = w_2^Te$.
A special simple case of Theorem 2.6, which will be used later, is when $D$ has eigenvector
$e$ associated with its only positive eigenvalue.

Corollary 2.8. Let $D \in \mathrm{SYM}_0^+$ have only one positive eigenvalue $\lambda_1$, with eigenvector
$e$. Then $D$ is a distance matrix.

Proof. Suppose $D \in \mathrm{SYM}_0^+$ has only one positive eigenvalue $\lambda_1$, with unit eigenvector
$\frac{1}{\sqrt{n}}e$. Since all other eigenvalues of $D$ are non-positive, $D = \lambda_1\frac{1}{n}ee^T - C^TC$. If $x^Te = 0$,
$$x^TDx = \lambda_1\tfrac{1}{n}x^Tee^Tx - x^TC^TCx = -x^TC^TCx = -\|Cx\|^2 \leq 0.$$
Therefore, by Theorem 2.1, since the matrix $D$ is symmetric with $\mathrm{diag}(D) = 0$ and is negative
semidefinite on $\{e\}^\perp$, $D \in \mathrm{EDM}$. $\square$

By Theorem 2.6, $D$ must have exactly one positive eigenvalue; call it $\lambda_1$. So necessarily
$\lambda_1 = -\lambda_2 - \cdots - \lambda_n$, since $\sum_{i=1}^n \lambda_i = 0$.
Chapter 3
Connections between Hadamard and Euclidean distance matrices
So far, we have characterized EDM as a set of matrices, all of which are symmetric and
have zero diagonal, from the geometry of distance and the metric properties. By Theorem 2.1,
any $D$ in EDM must be negative semidefinite on the subspace $\{e\}^\perp$. We also found, from
Theorem 2.6, that $D \in \mathrm{EDM}$ must have exactly one positive eigenvalue, that the all-ones
vector $e$ must be of the form $Dw$ for some $w$ with $w^Te \geq 0$, and that the sum of the
eigenvalues of $D$ is zero. In this chapter we discuss how to construct a distance matrix with
specified eigenvalues from a known Hadamard matrix.
3.1 Hadamard matrices
First, some preliminary definitions and background.

Definition 3.1. A matrix $H \in \mathbb{R}^{n \times n}$ is a Hadamard matrix if each entry is $\pm 1$ and
$H^TH = nI$, meaning the rows, and consequently the columns, are pairwise orthogonal and
$H$ is non-singular.

Historically, Hadamard matrices were known as far back as 1867, and it is believed there
is a connection between Hadamard matrices and tessellations. Hadamard matrices are only
known to exist if $n = 1, 2$ or $n \equiv 0 \bmod 4$, and it is unknown whether this condition is
sufficient [2]. In order to show the connection between Hadamard matrices and EDM, it is
useful to point out the properties a set of real numbers shares with the eigenvalues of some
$D \in \mathrm{EDM}$.

Lemma 3.2. Let $\lambda_1 \geq 0 \geq \lambda_2 \geq \cdots \geq \lambda_n$ with $\sum_{i=1}^n \lambda_i = 0$. Then $(n-1)|\lambda_n| \geq \lambda_1$ and
$(n-1)|\lambda_2| \leq \lambda_1$.

Proof. By the given assumptions, $\lambda_1 = |\lambda_2| + |\lambda_3| + \cdots + |\lambda_n|$ and $|\lambda_2| \leq |\lambda_3| \leq \cdots \leq |\lambda_n|$.
Therefore $(n-1)|\lambda_2| \leq |\lambda_2| + |\lambda_3| + \cdots + |\lambda_n| \leq (n-1)|\lambda_n|$, and replacing the middle
expression with $\lambda_1$ gives $(n-1)|\lambda_2| \leq \lambda_1 \leq (n-1)|\lambda_n|$. $\square$
3.2 Solution to the IEP based on a given Hadamard matrix
The next theorem, published by Thomas Hayden, Robert Reams, and James Wells in
1999 [7], shows the closest and perhaps strongest connection distance matrices have with
Hadamard matrices, and solves the inverse eigenvalue problem for EDM for those orders $n$
for which a Hadamard matrix of order $n$ exists.
Theorem 3.3. Let $n$ be such that there exists a Hadamard matrix of order $n$. Let
$\lambda_1 \geq 0 \geq \lambda_2 \geq \cdots \geq \lambda_n$ with $\sum_{i=1}^n \lambda_i = 0$. Then there exists a distance matrix $D$ with
eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$.

Proof. Let $e \in \mathbb{R}^n$ and let $H \in \mathbb{R}^{n \times n}$ be a Hadamard matrix of order $n$. Let
$U = \frac{1}{\sqrt{n}}H$, so that $U$ is an orthogonal matrix, and let $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$. Therefore
$D = U^T\Lambda U$ is symmetric and has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. $H$ has one row of all ones;
assume it is row 1 of $H$. Then
$$De = U^T\Lambda Ue = \frac{1}{n}H^T\Lambda He = \frac{1}{n}H^T\Lambda\begin{bmatrix} n \\ 0 \\ \vdots \\ 0 \end{bmatrix}
= \frac{1}{n}H^T\begin{bmatrix} n\lambda_1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
= \frac{1}{n}\begin{bmatrix} n\lambda_1 \\ n\lambda_1 \\ \vdots \\ n\lambda_1 \end{bmatrix} = \lambda_1e.$$
Therefore $e$ is an eigenvector of $D$ corresponding to the eigenvalue $\lambda_1$. It can be computed
that $d_{ii} = \sum_{k=1}^n \frac{1}{n}\lambda_k = 0$, since each entry of $U$ has absolute value $\frac{1}{\sqrt{n}}$ and
$\sum_{k=1}^n \lambda_k = 0$. By Corollary 2.8, $D \in \mathrm{EDM}$. $\square$
For the purpose of constructing $D \in \mathrm{EDM}$ of arbitrary order with prescribed spectral
properties, the existence of a Hadamard matrix is a sufficient but not a necessary condition.
For example, there exist distance matrices of order 3, yet there is no Hadamard matrix of
order 3. Furthermore, it is not even known whether there exists a Hadamard matrix for
every $n \geq 4$ that is a multiple of 4 [2].
3.2.1 An example constructing a distance matrix using a Hadamard matrix of
the same order
Let $n = 4$ and assign $\lambda_1 = 6$, $\lambda_2 = -1$, $\lambda_3 = -2$, $\lambda_4 = -3$, so that
$$\Lambda = \begin{bmatrix}
6 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -2 & 0 \\
0 & 0 & 0 & -3
\end{bmatrix}.$$
Let the Hadamard matrix be
$$H = \begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1
\end{bmatrix}, \qquad\text{so that}\qquad
U = \frac{1}{2}\begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1
\end{bmatrix}.$$
Then
$$U^T\Lambda U = \begin{bmatrix}
0 & 2 & 2.5 & 1.5 \\
2 & 0 & 1.5 & 2.5 \\
2.5 & 1.5 & 0 & 2 \\
1.5 & 2.5 & 2 & 0
\end{bmatrix}$$
is a distance matrix by Theorem 3.3; call it $D$. Also, by Corollary 2.8, since $De = 6e$, $D$ is
in EDM.
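The construction of Theorem 3.3 for this example can be verified in a few lines:

```python
import numpy as np

# Sketch of Theorem 3.3's construction for n = 4, as in the example above.
H = np.array([[1.,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])   # Hadamard: H^T H = 4I, row 0 all ones
assert np.allclose(H.T @ H, 4 * np.eye(4))

lam = np.array([6., -1, -2, -3])  # one positive eigenvalue, sum zero
U = H / 2                          # orthogonal matrix
D = U.T @ np.diag(lam) @ U

assert np.allclose(np.diag(D), 0)                    # zero diagonal
assert np.allclose(D @ np.ones(4), 6 * np.ones(4))   # De = lambda_1 e
assert np.allclose(np.sort(np.linalg.eigvalsh(D)), [-3, -2, -1, 6])
print(D)
```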
3.3 Solving the inverse eigenvalue problem for other orders
The next step in solving the inverse eigenvalue problem for Euclidean distance matrices
of all orders $n$ has focused especially on the $n+1$ case: starting with a matrix
$D \in \mathrm{EDM}$ of order $n$ constructed from given spectral data, is there any way to border the
matrix $D$ to get a matrix $\hat{D}$ of size $n+1$ with $\hat{D} \in \mathrm{EDM}$?

In order to begin looking at the inverse eigenvalue problem for the $n+1$ case we need
a few preliminary theorems due to Fiedler [5].
Theorem 3.4. Let $A$ be a symmetric $m \times m$ matrix with eigenvalues $\alpha_1, \ldots, \alpha_m$ and let $u$ be
a unit eigenvector corresponding to $\alpha_1$. Let $B$ be a symmetric $n \times n$ matrix with eigenvalues
$\beta_1, \ldots, \beta_n$ and let $v$ be a unit eigenvector corresponding to $\beta_1$. Then for any $\rho$, the matrix
$$C = \begin{bmatrix} A & \rho uv^T \\ \rho vu^T & B \end{bmatrix}$$
has eigenvalues $\alpha_2, \ldots, \alpha_m, \beta_2, \ldots, \beta_n, \gamma_1, \gamma_2$, where $\gamma_1$ and $\gamma_2$ are the eigenvalues of the
$2 \times 2$ matrix
$$\hat{C} = \begin{bmatrix} \alpha_1 & \rho \\ \rho & \beta_1 \end{bmatrix}.$$
Proof. Let $u_1, u_2, \ldots, u_m$ be an orthonormal set of eigenvectors of $A$, with $Au_i = \alpha_iu_i$
and $u_1 = u$. By direct verification, the vectors $\begin{bmatrix} u_i \\ 0 \end{bmatrix}$, $i = 2, \ldots, m$, are eigenvectors of $C$
with the corresponding eigenvalues $\alpha_i$, since $u^Tu_i = 0$. Similarly, for $B$, let $v_1, v_2, \ldots, v_n$
be an orthonormal set of eigenvectors of $B$, with $Bv_i = \beta_iv_i$ and $v_1 = v$. By direct
verification, the vectors $\begin{bmatrix} 0 \\ v_i \end{bmatrix}$, $i = 2, \ldots, n$, are eigenvectors of $C$ with the corresponding
eigenvalues $\beta_i$.
Let $\gamma_1$ and $\gamma_2$ be the eigenvalues of $\hat{C}$ with corresponding eigenvectors $(g, h)^T$ and $(j, k)^T$.
Then
$$\begin{bmatrix} \alpha_1 & \rho \\ \rho & \beta_1 \end{bmatrix}\begin{bmatrix} g \\ h \end{bmatrix} = \gamma_1\begin{bmatrix} g \\ h \end{bmatrix}
\quad\text{gives the equations}\quad
\begin{cases} \alpha_1g + \rho h = \gamma_1g \\ \rho g + \beta_1h = \gamma_1h \end{cases}$$
and
$$\begin{bmatrix} \alpha_1 & \rho \\ \rho & \beta_1 \end{bmatrix}\begin{bmatrix} j \\ k \end{bmatrix} = \gamma_2\begin{bmatrix} j \\ k \end{bmatrix}
\quad\text{gives the equations}\quad
\begin{cases} \alpha_1j + \rho k = \gamma_2j \\ \rho j + \beta_1k = \gamma_2k \end{cases}.$$
Then, using $\|u\| = \|v\| = 1$,
$$\begin{bmatrix} A & \rho uv^T \\ \rho vu^T & B \end{bmatrix}\begin{bmatrix} gu \\ hv \end{bmatrix}
= \begin{bmatrix} gAu + h\rho u \\ g\rho v + hBv \end{bmatrix}
= \begin{bmatrix} (g\alpha_1 + h\rho)u \\ (g\rho + h\beta_1)v \end{bmatrix}
= \gamma_1\begin{bmatrix} gu \\ hv \end{bmatrix},$$
and
$$\begin{bmatrix} A & \rho uv^T \\ \rho vu^T & B \end{bmatrix}\begin{bmatrix} ju \\ kv \end{bmatrix}
= \begin{bmatrix} jAu + k\rho u \\ j\rho v + kBv \end{bmatrix}
= \begin{bmatrix} (j\alpha_1 + k\rho)u \\ (j\rho + k\beta_1)v \end{bmatrix}
= \gamma_2\begin{bmatrix} ju \\ kv \end{bmatrix}.$$
Therefore $\gamma_1$ and $\gamma_2$ are eigenvalues of $C$ with corresponding eigenvectors $(gu, hv)^T$ and
$(ju, kv)^T$. $\square$
Theorem 3.5. Let $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_k$ be the eigenvalues of the symmetric non-negative
matrix $A \in \mathbb{R}^{k \times k}$ and let $\beta_1 \geq \beta_2 \geq \cdots \geq \beta_l$ be the eigenvalues of the symmetric non-negative
matrix $B \in \mathbb{R}^{l \times l}$, where $\alpha_1 \geq \beta_1$. Also let $Au = \alpha_1u$ and $Bv = \beta_1v$, where $u$ and $v$ are the
corresponding unit Perron vectors. Then, with $\rho = \sqrt{\delta(\alpha_1 - \beta_1 + \delta)}$, the matrix
$$\begin{bmatrix} A & \rho uv^T \\ \rho vu^T & B \end{bmatrix}$$
has eigenvalues $\alpha_1 + \delta, \beta_1 - \delta, \alpha_2, \ldots, \alpha_k, \beta_2, \ldots, \beta_l$, for any $\delta \geq 0$.

Proof. By Theorem 3.4, the matrix $\begin{bmatrix} A & \rho uv^T \\ \rho vu^T & B \end{bmatrix}$ has eigenvalues
$\alpha_2, \ldots, \alpha_k, \beta_2, \ldots, \beta_l$, together with the eigenvalues $\gamma$ of $\begin{bmatrix} \alpha_1 & \rho \\ \rho & \beta_1 \end{bmatrix}$.
For $\delta \geq 0$ choose $\rho = \sqrt{\delta(\alpha_1 - \beta_1 + \delta)}$ and set
$$\det\left(\begin{bmatrix} \alpha_1 & \rho \\ \rho & \beta_1 \end{bmatrix} - \gamma I\right)
= (\alpha_1 - \gamma)(\beta_1 - \gamma) - \rho^2
= \gamma^2 - (\alpha_1 + \beta_1)\gamma + (\alpha_1\beta_1 - \rho^2) = 0.$$
By the quadratic formula,
$$\gamma = \frac{\alpha_1 + \beta_1 \pm \sqrt{(\alpha_1 + \beta_1)^2 - 4(\alpha_1\beta_1 - \rho^2)}}{2}
= \frac{1}{2}\left[\alpha_1 + \beta_1 \pm \sqrt{\alpha_1^2 - 2\alpha_1\beta_1 + \beta_1^2 + 4\rho^2}\right].$$
After plugging in $\rho^2 = \delta(\alpha_1 - \beta_1) + \delta^2$ and grouping in terms of $\alpha_1 - \beta_1$,
$$\gamma = \frac{1}{2}\left[\alpha_1 + \beta_1 \pm \sqrt{(\alpha_1 - \beta_1)^2 + 4\delta(\alpha_1 - \beta_1) + 4\delta^2}\right]
= \frac{1}{2}\left[\alpha_1 + \beta_1 \pm \sqrt{(2\delta + (\alpha_1 - \beta_1))^2}\right]
= \frac{1}{2}\left[\alpha_1 + \beta_1 \pm (2\delta + \alpha_1 - \beta_1)\right].$$
Therefore $\gamma_1 = \frac{1}{2}[\alpha_1 + \beta_1 + 2\delta + \alpha_1 - \beta_1] = \alpha_1 + \delta$ and
$\gamma_2 = \frac{1}{2}[\alpha_1 + \beta_1 - 2\delta - \alpha_1 + \beta_1] = \beta_1 - \delta$. By Theorem 3.4, $\alpha_1 + \delta$ and $\beta_1 - \delta$ are
eigenvalues of the matrix $\begin{bmatrix} A & \rho uv^T \\ \rho vu^T & B \end{bmatrix}$. $\square$
3.3.1 An error attempting to solve the inverse eigenvalue problem for EDM for
the (n+ 1) (n+ 1) case
In 1999, T. L. Hayden et al. claimed to solve the inverse eigenvalue problem for
$(n+1) \times (n+1)$ distance matrices, using a distance matrix of size $n$ constructed from a
Hadamard matrix and specified eigenvalues as in Theorem 3.3 [7]. The theorem states:

Let $n$ be such that there exists a Hadamard matrix of order $n$. Let
$\lambda_1 \geq 0 \geq \lambda_2 \geq \cdots \geq \lambda_{n+1}$ with $\sum_{i=1}^{n+1} \lambda_i = 0$. Then there is an $(n+1) \times (n+1)$
distance matrix $\hat{D}$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{n+1}$.
Let $n = 4$ and $\lambda_1 = 6.1623$, $\lambda_2 = -0.1623$, $\lambda_3 = -1$, $\lambda_4 = -2$, $\lambda_5 = -3$. Following the
construction in the proof in the original paper, we begin with the distance matrix constructed
from a Hadamard matrix (the same matrix as in our previous example),
$$D = \begin{bmatrix}
0 & 2 & 2.5 & 1.5 \\
2 & 0 & 1.5 & 2.5 \\
2.5 & 1.5 & 0 & 2 \\
1.5 & 2.5 & 2 & 0
\end{bmatrix},$$
with eigenvalues $\{6.1623 + (-0.1623), -1, -2, -3\}$ and Perron vector $e$. Following Theorem 3.5,
define $\hat{D} = \begin{bmatrix} D & \rho u \\ \rho u^T & 0 \end{bmatrix}$, letting $u = \frac{1}{2}e$ and
$\rho = \sqrt{6.1623 \cdot 0.1623} \approx 1.0001$. Computing the block matrix,
$$\hat{D} \approx \begin{bmatrix}
0 & 2 & 2.5 & 1.5 & .5 \\
2 & 0 & 1.5 & 2.5 & .5 \\
2.5 & 1.5 & 0 & 2 & .5 \\
1.5 & 2.5 & 2 & 0 & .5 \\
.5 & .5 & .5 & .5 & 0
\end{bmatrix},$$
with eigenvalues $\{6.1623, -0.1623, -1, -2, -3\}$. Define $\hat{w} = [.5, .5, .5, .5, -4]^T$, so that
$\hat{D}\hat{w} = \hat{e}$; but by inspection $\hat{w}^T\hat{e} < 0$, and by Theorem 2.6, $\hat{D}$ cannot be a distance
matrix.
Over 10 years after the original paper was published, Jaklic and Modic noted the error
and salvaged some of the proof, but with a different result [11].
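The failure can be reproduced numerically. The following sketch borders the $4 \times 4$ distance matrix as above and checks the second condition of Theorem 2.6:

```python
import numpy as np

# Reproducing the failure: the bordered matrix has the prescribed spectrum,
# yet w-hat^T e < 0, so by Theorem 2.6 it is not an EDM.
D = np.array([[0., 2, 2.5, 1.5],
              [2, 0, 1.5, 2.5],
              [2.5, 1.5, 0, 2],
              [1.5, 2.5, 2, 0]])
u = np.ones(4) / 2
rho = np.sqrt(6.1623 * 0.1623)               # ~ 1.0001
Dhat = np.block([[D,                rho * u[:, None]],
                 [rho * u[None, :], np.zeros((1, 1))]])

eigs = np.sort(np.linalg.eigvalsh(Dhat))
print(eigs)        # approximately [-3, -2, -1, -0.1623, 6.1623]

e = np.ones(5)
what = np.linalg.solve(Dhat, e)              # Dhat w-hat = e
print(what @ e)    # negative (about -2), so Dhat is not a distance matrix
```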
3.4 Current state of the $n+1$ case
Building on the solution of the IEP for EDMs of orders for which a Hadamard matrix
exists, Hayden et al. in 1999 considered the $n+1$ case. Despite a major error in the solution,
the paper was published and the inverse eigenvalue problem was supposedly solved for the
$n+1$ case. Jaklic and Modic in 2013 noted the error in Hayden's $n+1$ theorem, followed the
direction of the proof, and added a small but significant condition [11]. As a result, however,
this did not solve an IEP for EDM; it suffices to give a general construction algorithm for a
distance matrix, but not from prescribed spectral data.
Theorem 3.6. Let $D \in \mathrm{EDM}$, $D \neq 0$, with Perron eigenvalue $r$ and corresponding
normalized Perron eigenvector $u$. Let $Dw = e$, $w^Te \geq 0$, and $\rho > 0$. Then the matrix
$$\hat{D} = \begin{bmatrix} D & \rho u \\ \rho u^T & 0 \end{bmatrix} \in \mathrm{EDM}
\quad\text{if and only if}\quad \rho \in [\rho_-, \rho_+],
\quad\text{where}\quad \rho_\pm := \frac{r}{u^Te \mp \sqrt{r\,e^Tw}},$$
noting that the denominator of $\rho_+$ can be zero, as when $u = \frac{1}{\sqrt{n}}e$; in this case take
$\rho_+ = \infty$.
Proof. By Theorem 2.6, $\hat{D} \in \mathrm{EDM}$ if and only if (1) it has exactly one positive eigenvalue
and (2) there exists a vector $\hat{w} \in \mathbb{R}^{n+1}$ such that $\hat{D}\hat{w} = \hat{e}$ and $\hat{w}^T\hat{e} \geq 0$.

1. The matrix $\hat{D}$ has exactly one positive eigenvalue: by Theorem 3.4, $\hat{D}$ has the $n-1$
non-positive eigenvalues of the original distance matrix $D$, together with the eigenvalues of
the matrix
$$\begin{bmatrix} r & \rho \\ \rho & 0 \end{bmatrix}.$$
These can be seen by looking at the roots of the characteristic polynomial $p(x) = x^2 - rx - \rho^2$,
which are $\frac{1}{2}(r \pm \sqrt{r^2 + 4\rho^2})$. Because $\sqrt{r^2 + 4\rho^2} > r$, one root is positive and one is
negative, and since $\hat{D}$ has $n-1$ non-positive eigenvalues otherwise, $\hat{D}$ has exactly one
positive eigenvalue.
2. We now must find the values of $\rho$ for which $\hat{e}^T\hat{w} \geq 0$. Define
$$\hat{w} = \begin{bmatrix} w - \left(\dfrac{u^Te}{r} - \dfrac{1}{\rho}\right)u \\[4pt] \dfrac{1}{\rho}\left(u^Te - \dfrac{r}{\rho}\right) \end{bmatrix},$$
so that $\hat{D}\hat{w} = \hat{e}$. Then
$$\hat{e}^T\hat{w} = e^Tw - \frac{1}{r}(u^Te)^2 + \frac{1}{\rho}u^Te + \frac{1}{\rho}u^Te - \frac{r}{\rho^2} \tag{3.1}$$
$$= e^Tw - \frac{1}{r}(u^Te)^2 + \frac{2}{\rho}u^Te - \frac{r}{\rho^2}. \tag{3.2}$$

Case 1: Suppose $u = \frac{1}{\sqrt{n}}e$. Then $Du = ru$ means $D\frac{1}{\sqrt{n}}e = r\frac{1}{\sqrt{n}}e$, so
$D\frac{1}{r}e = e$; therefore $w = \frac{1}{r}e$, because $Dw = e$. After substituting these values into (3.2)
and reducing, $\hat{e}^T\hat{w} = \frac{1}{\rho^2}(2\rho\sqrt{n} - r)$. So $\hat{e}^T\hat{w} \geq 0$ if and only if $2\rho\sqrt{n} - r \geq 0$,
meaning $\rho \geq \frac{r}{2\sqrt{n}}$. Using what we defined the vectors $u$ and $w$ to be, plugging in and
reducing,
$$\rho_\pm = \frac{r}{u^Te \mp \sqrt{r\,e^Tw}} = \frac{r}{\sqrt{n} \mp \sqrt{n}}.$$
Therefore $\hat{e}^T\hat{w} \geq 0$ if and only if $\rho \in [\rho_-, \rho_+] = \left[\frac{r}{2\sqrt{n}}, \infty\right)$.
Case 2: Suppose $u \neq \frac{1}{\sqrt{n}}e$. Refer to equation (3.2), rearrange, and set it equal to zero:
$$e^Tw - \frac{1}{r}(u^Te)^2 + \frac{2}{\rho}u^Te - \frac{r}{\rho^2} = 0,
\qquad\text{i.e.}\qquad
-\frac{1}{r}(u^Te)^2 + \frac{2}{\rho}(u^Te) + \left(e^Tw - \frac{r}{\rho^2}\right) = 0.$$
Viewed as a polynomial in $x = u^Te$,
$$f(x) = -\frac{1}{r}x^2 + \frac{2}{\rho}x + \left(e^Tw - \frac{r}{\rho^2}\right),$$
the quadratic formula gives the roots $x = \frac{r}{\rho} \pm \sqrt{r\,e^Tw}$. Therefore
$$\hat{e}^T\hat{w} = -\frac{1}{r}\left(u^Te - \frac{r}{\rho} - \sqrt{r\,e^Tw}\right)\left(u^Te - \frac{r}{\rho} + \sqrt{r\,e^Tw}\right),$$
which is non-negative exactly when $u^Te - \frac{r}{\rho}$ lies between $-\sqrt{r\,e^Tw}$ and $\sqrt{r\,e^Tw}$.
Setting each factor equal to 0 and solving for $\rho$, we achieve
$$\rho = \frac{r}{u^Te + \sqrt{r\,e^Tw}} \qquad\text{or}\qquad \rho = \frac{r}{u^Te - \sqrt{r\,e^Tw}}.$$
Observe that $[\rho_-, \rho_+]$ is well defined, since for any $w \in \mathbb{R}^n$ satisfying $Dw = e$ the value
$e^Tw$ is unique; refer to Observation 2.7. So if $\rho \in [\rho_-, \rho_+]$, where
$\rho_\pm = \frac{r}{u^Te \mp \sqrt{r\,e^Tw}}$, then $\hat{e}^T\hat{w} \geq 0$.
In conclusion, $\hat{D} = \begin{bmatrix} D & \rho u \\ \rho u^T & 0 \end{bmatrix} \in \mathrm{EDM}$, where $D \in \mathrm{EDM}$ of any size, if and only if
$\rho \in [\rho_-, \rho_+]$, where $\rho_\pm = \dfrac{r}{u^Te \mp \sqrt{r\,e^Tw}}$. $\square$
Looking back at Example 3.3.1, we have $u = \frac{1}{2}e$, $w = \frac{1}{6}e$, and $r = 6$. By inspection,
the interval $[\rho_-, \rho_+] = \left[\frac{3}{2}, \infty\right)$. Note that $\rho \approx 1.0001 \notin [\rho_-, \rho_+]$; therefore, by
Theorem 3.6, $\hat{D} = \begin{bmatrix} D & \rho u \\ \rho u^T & 0 \end{bmatrix}$ cannot be a distance matrix.

By including the condition $\rho \in [\rho_-, \rho_+]$, Theorem 3.6 guarantees that $\hat{D}$ is a distance
matrix, but it does not necessarily solve the inverse eigenvalue problem. The added condition
$\rho \in [\rho_-, \rho_+]$ and the definition of $\rho_\pm$ mean that $\rho$ is defined in terms of $r$, which, recall,
is an eigenvalue of the original distance matrix $D$ rather than a freely prescribed datum.
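The admissible interval of Theorem 3.6 can be checked numerically for this example; any $\rho$ inside $[\rho_-, \rho_+]$ then passes both conditions of Theorem 2.6 (the choice $\rho = 2$ below is an arbitrary illustrative value):

```python
import numpy as np

# Sketch of Theorem 3.6 for the example: u = e/2, w = e/6, r = 6, so the
# admissible interval is [r/(2 sqrt(n)), inf) = [1.5, inf).
D = np.array([[0., 2, 2.5, 1.5],
              [2, 0, 1.5, 2.5],
              [2.5, 1.5, 0, 2],
              [1.5, 2.5, 2, 0]])
n, r = 4, 6.0
e, u = np.ones(n), np.ones(n) / 2
w = e / r
rho_minus = r / (u @ e + np.sqrt(r * (e @ w)))   # = 6 / (2 + 2) = 1.5
assert np.isclose(rho_minus, 1.5)

rho = 2.0                                        # inside [1.5, inf)
Dhat = np.block([[D,                rho * u[:, None]],
                 [rho * u[None, :], np.zeros((1, 1))]])

eigs = np.linalg.eigvalsh(Dhat)
assert np.sum(eigs > 1e-10) == 1                 # one positive eigenvalue
what = np.linalg.solve(Dhat, np.ones(n + 1))
assert what @ np.ones(n + 1) >= 0                # w-hat^T e-hat >= 0
```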
Bibliography
[1] Moody T. Chu, Inverse eigenvalue problems. SIAM Review 40 (1): 1-39 (1998).

[2] R. Craigen, G. Faucher, R. Low, T. Wares, Circulant partial Hadamard matrices. Linear
Algebra and its Applications 439: 3307-3317 (2013).

[3] J. P. Crouzeix, J. Ferland, Criteria for quasiconvexity and pseudoconvexity: relationships
and comparisons. Math. Programming 23 (2): 51-63 (1982).

[4] Jon Dattorro, Convex Optimization and Euclidean Distance Geometry. MeBoo Pub-
lishing, USA, 2005, v2013.03.30.

[5] Miroslav Fiedler, Eigenvalues of nonnegative symmetric matrices. Linear Algebra and
its Applications 9: 119-142 (1974).

[6] J. C. Gower, Euclidean distance geometry. Math. Scientist 7: 1-14 (1982).

[7] T. L. Hayden, R. Reams, J. Wells, Methods for constructing distance matrices and the
inverse eigenvalue problem. Linear Algebra and its Applications 295: 97-112 (1999).

[8] R. A. Horn, C. R. Johnson, Matrix Analysis. Cambridge University Press (1999).

[9] Monique Laurent, A connection between positive semidefinite and Euclidean distance
matrix completion problems. Linear Algebra and its Applications 273: 9-22 (1998).

[10] I. J. Schoenberg, Remarks to Maurice Frechet's article "Sur la definition axiomatique
d'une classe d'espaces distancies vectoriellement applicables sur l'espace de Hilbert".
Annals of Mathematics 36 (3) (1935).

[11] G. Jaklic, J. Modic, A note on "Methods for constructing distance matrices and the
inverse eigenvalue problem". Linear Algebra and its Applications 437: 2781-2792 (2012).