Reconstruction of Parametric Image Maps in Single- and Multiple-Coil Functional Magnetic Resonance Imaging

Except where reference is made to the work of others, the work described in this dissertation is my own or was done in collaboration with my advisory committee. This dissertation does not include proprietary or classified information.

Weidong Tang

Certificate of Approval:

Thomas Denney, Professor, Electrical and Computer Engineering
Stanley Reeves (Chair), Professor, Electrical and Computer Engineering
Fa Dai, Professor, Electrical and Computer Engineering
George T. Flowers, Dean, Graduate School

Reconstruction of Parametric Image Maps in Single- and Multiple-Coil Functional Magnetic Resonance Imaging

Weidong Tang

A Dissertation Submitted to the Graduate Faculty of Auburn University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Auburn, Alabama
August 10, 2009

Permission is granted to Auburn University to make copies of this dissertation at its discretion, upon the request of individuals or institutions and at their expense. The author reserves all publication rights.

Signature of Author
Date of Graduation

Vita

Weidong Tang was born to Tang Gongxian (唐功先) and Yu Zaihui (于在惠) in Bengbu, Anhui Province, China (中国安徽省蚌埠市) on May 6, 1967. He received his Bachelor's degree from Xidian University (西安电子科技大学) in July 1988 and his Master's degree from Tsinghua University (清华大学) in July 1993. He has worked with the Chinese Academy of Sciences, Hughes Network Systems, and other companies. His research interests include medical imaging and signal processing, as well as software engineering.

Dissertation Abstract

Reconstruction of Parametric Image Maps in Single- and Multiple-Coil Functional Magnetic Resonance Imaging

Weidong Tang

Doctor of Philosophy, August 10, 2009
(M.S., Tsinghua University, Beijing, China, 1993)
(B.S., Xidian University, Xi'an, China, 1988)

103 Typed Pages

Directed by Stanley Reeves

Functional magnetic resonance imaging (fMRI) is a standard tool for measuring the hemodynamic response, which is related to activation patterns in the human and animal brain. In conventional anatomical MRI, the decay and precession rates are regarded as sources of artifacts, but in applications such as fMRI they are physiological quantities of interest. Single-shot parameter assessment by retrieval from signal encoding (SS-PARSE) acknowledges local decay and phase evolution in MRI, so it models each datum as a sample from (k,t)-space rather than k-space. Because local decay and frequency vary continuously in space, discrete models in space can cause artifacts in the reconstructed parameters. Increasing the resolution of the reconstructed parameters can more accurately capture the spatial variations, but the resolution is limited not only by computational complexity but also by the size of the acquired data. For a limited data set used for reconstruction, simply increasing the model resolution may cause the reconstruction to become an underdetermined problem. This dissertation presents a solution to this problem based on cubic convolution interpolation.
Because the local decay and frequency enter the model through exponential time functions, FFTs cannot be applied directly in the reconstruction algorithm. A polynomial expansion is proposed so that FFTs can be used to accelerate the reconstruction.

The second contribution of this dissertation is a new method to optimize the nonuniform FFT (NUFFT). This work was motivated by the nonuniform k-space trajectory in SS-PARSE. With the polynomial expansion, the cost function of the SS-PARSE reconstruction is represented by a linear combination of 2-D Fourier transforms whose inputs are uniformly distributed data and whose outputs are nonuniformly distributed frequency responses. The gradient of the cost function in the reconstruction is also a linear combination of 2-D Fourier transforms, whose inputs are nonuniformly distributed data in the frequency domain and whose outputs are functions on a 2-D nonuniform grid. FFTs can be applied to neither the cost function nor the gradient function because of the nonequally spaced inputs or outputs. In this dissertation, we focus on 1-D Fourier transforms with uniform inputs and nonuniform outputs. The basic form of the NUFFT optimization is a nonlinear problem. In this dissertation, this nonlinear problem is reformulated as the least-squares solution of a linear problem. The computational accuracy of the NUFFT is also improved by the new method. The results can easily be extended to 2-D or to the case with nonuniform inputs and uniform outputs.

After validating and testing these ideas with a single-coil MRI system, we extended the framework to parallel MRI systems, which have multiple receiving coils. Existing reconstruction methods estimate the maps of coil sensitivities by imaging "standard" objects. These convenient methods do not account for the change of coil sensitivities caused by the imaged objects. We propose a new algorithm that concurrently reconstructs the coil sensitivities along with the magnitude, decay, and field maps. The core of this algorithm is the fast approach and the interpolation method we developed for SS-PARSE. From the simulation results, we observed significant improvement in the reconstruction accuracy of the decay function, which is of interest in fMRI.

Acknowledgments

I would like to thank Dr. Stanley J. Reeves, my research advisor, for his guidance, help, and leadership during my years in Auburn. He led me into the area of medical imaging. I also enjoyed and will miss our talks on sports, religion, and other technical and non-technical topics. I would like to thank my committee members, Dr. Thomas Denney and Dr. Fa Dai, for their careful review and helpful suggestions on my dissertation. I would like to thank Dr. Amnon J. Meir for serving as the outside reader. I would like to thank Dr. Donald B. Twieg at the University of Alabama at Birmingham. This work was supported in part by the National Institute of Biomedical Imaging and Bioengineering (NIH), Grant No. EB 003292. My thanks also go to my friends in Auburn. Finally, I would like to thank my beloved parents with an ancient Chinese poem: such kindness as young grass receives from the warm sun cannot be repaid (谁言寸草心，报得三春晖。).

Two things are infinite: the universe and human stupidity; and I'm not sure about the universe. (Albert Einstein)

War Eagle!

Style manual or journal used: Journal of Approximation Theory (together with the style known as "aums").
Bibliography follows van Leunen's A Handbook for Scholars.

Computer software used: the document preparation package TeX (specifically LaTeX) together with the departmental style file aums.sty.

Table of Contents

List of Figures

1 Introduction
  1.1 Organization of the Thesis
  1.2 A Brief History of MRI
  1.3 MRI Model
    1.3.1 Basic Concepts
    1.3.2 The 1D Imaging and the Fourier Transform
    1.3.3 Spatial Resolution in MRI
  1.4 Relaxation and Field Inhomogeneity
  1.5 Functional MRI (fMRI)
2 Reconstruction of SS-PARSE
  2.1 Introduction
  2.2 Iterative Reconstruction Method
  2.3 Interpolation
  2.4 Fast Algorithm
    2.4.1 Line Search
    2.4.2 Approximations for Exponential Time Function
    2.4.3 Implementation
  2.5 Analysis of Accuracy of the Fast Algorithm
    2.5.1 Reconstruction Error Introduced by the Approximation of Time Function
    2.5.2 Reconstruction Error Introduced by k-Space Gridding
  2.6 Experiments
    2.6.1 Validation of Fast Approach
    2.6.2 Noiseless Experiments
    2.6.3 Noisy Experiment
    2.6.4 Phantom and Animal Experiment
  2.7 Conclusion
3 Nonuniform Fast Fourier Transform (NUFFT)
  3.1 Theory of NUFFT
    3.1.1 Problem Statement
    3.1.2 Basic Concepts
  3.2 Least-Squares Optimization
  3.3 Interpolations
  3.4 Error Analysis of the Interpolation
  3.5 Inverse Fourier Transform
  3.6 Discussion
4 Reconstruction of Parallel MRI
  4.1 Introduction
  4.2 Reconstruction
    4.2.1 Extension of SS-PARSE
    4.2.2 Initialization
    4.2.3 Regularization
  4.3 Simulation
    4.3.1 Interpolation of Coil Sensitivity
    4.3.2 Regularized Reconstruction
  4.4 Human Experiment
  4.5 Conclusion
5 Conclusion
  5.1 Summary of the Contributions of this Thesis
  5.2 Future Works
Bibliography
Appendix A Derivation of NRMSE of NUFFT with Linear Interpolation

List of Figures

1.1 Two MRI trajectories.
1.2 Magnitude - free induction decay (FID)
1.3 Spin Echo RF Sequence
1.4 Single-Shot MRI RF Sequence
2.1 Reconstructed R2* (sec^-1) from different methods. All images have 128 x 128 resolution (80-iteration reconstruction). For (a) and (b), the interpolation factors are both 2, so the interpolation coefficients are 64 x 64; (c) is from a reconstruction with 128 x 128 resolution.
2.2 FWHM of cubic convolution and cubic spline. C(iCconv - Cconv) and S(iCspline - Cspline) were normalized to 1. The FWHM of cubic convolution is about 0.25 cm and that of cubic spline is about 0.40 cm. The results were from reconstructions with 80 iterations.
2.3 Cubic convolution interpolation function
2.4 Frequency responses of cubic convolution and spline interpolation
2.5 2-D cubic convolution interpolation function
2.6 Quadratic approximation of line search
2.7 Polynomial approximations of exponential time function for a pair of R and ω
2.8 Approximations of exponential time function.
2.9 Errors in Eq. (2.4) by approximation of time function
2.10 Errors in reconstructed images by approximation of time function
2.11 Errors in Eq. (2.4) by FFT rounding
2.12 Errors in reconstructed images by FFT rounding
2.13 A rosette trajectory
2.14 True and reconstructed images from noiseless experiments. All images are displayed with 128 x 128 resolution. We only reconstructed the pixels within the inscribed circle of the square. (a)(c)(e) are true images; (b)(d)(f) are from 64 x 64 reconstruction without interpolation, then interpolated to 128 x 128 images for display.
2.15 Reconstructed images from noiseless experiments. All images are displayed with 128 x 128 resolution. (a)(c)(e) are from 128 x 128 reconstruction without interpolation; (b)(d)(f) are images reconstructed from 2x cubic convolution interpolation, so the interpolation coefficients are 64 x 64.
2.16 Images reconstructed from noisy signals. SNR is 40 dB. All images have 128 x 128 resolution.
(a)-(c) are from 64 x 64 reconstruction without interpolation, then interpolated to 128 x 128 images for display; (d)-(f) are from 128 x 128 reconstruction without interpolation; (g)-(i) are images reconstructed from 2x cubic convolution interpolation, so the interpolation coefficients are 64 x 64.
2.17 Images reconstructed from phantom experiment. All images have 128 x 128 resolution and are masked to remove exterior artifactual features. (a)(c)(e) are from 64 x 64 reconstruction without interpolation, then interpolated to 128 x 128 images for display; (b)(d)(f) are images reconstructed from 2x cubic convolution interpolation, so the interpolation coefficients are 64 x 64.
2.18 Images reconstructed from phantom experiment with larger frequency offset. All images have 128 x 128 resolution. (a)(c)(e) are from 64 x 64 reconstruction without interpolation, then interpolated to 128 x 128 images for display; (b)(d)(f) are images reconstructed from 2x cubic convolution interpolation, so the interpolation coefficients are 64 x 64. Only the frequency map of the imaged object is shown in (e) and (f); the deliberately added frequency map is removed from these two figures. The artifactual features between the reconstruction mask (the inscribed circle of the square) and the imaged object remain, but they are not of interest.
2.19 Images reconstructed from monkey experiment. All images have 128 x 128 resolution. (a)(c)(e) are from 64 x 64 reconstruction without interpolation, then interpolated to 128 x 128 images for display; (b)(d)(f) are images reconstructed from 2x cubic convolution interpolation, so the interpolation coefficients are 64 x 64. The R2* map in (c) was plotted with saturation of the values to maintain the same scale as (d) and so that the details of (d) would be visible.
3.1 Maximum Error
3.2 NRMS Error
3.3 Maximum Error
3.4 Linear Interpolation Kernel
3.5 Cubic Convolution Interpolation Kernel
3.6 NRMSE for different J and the number of precomputed frequencies for linear interpolation. In this figure, NRMSEs are plotted as functions of J for different numbers of precomputed frequencies. "Exact" is the NRMSE for the precomputed frequencies and is the accuracy limit of the method stated in this chapter.
3.7 NRMSE for different J and the number of precomputed frequencies for linear interpolation. In this figure, NRMSEs are plotted as functions of the number of precomputed frequencies for different values of J.
3.8 NRMSE for different J and the number of precomputed frequencies for cubic convolution interpolation. In this figure, NRMSEs are plotted as functions of J for different numbers of precomputed frequencies. "Exact" is the NRMSE for the precomputed frequencies and is the accuracy limit of the method stated in this chapter.
3.9 NRMSE for different J and the number of precomputed frequencies for cubic convolution interpolation.
In this figure, NRMSEs are plotted as functions of the number of precomputed frequencies for different values of J.
3.10 Comparison of the performance of linear and cubic convolution interpolations. "Exact" is the accuracy limit of the method stated in this chapter.
4.1 The magnitude, decay and field map used to synthesize simulation data. All images are displayed with 256 x 256 resolution. M0 is normalized to 1.
4.2 The coil sensitivity maps used to synthesize simulation data. All images are displayed with 256 x 256 resolution. The unconstrained area in which M0 is zero is displayed as zero. All maps are normalized to 1.
4.3 The coil sensitivity C1 reconstructed from different interpolation factors. All images are displayed with 128 x 128 resolution. The artifacts in the unconstrained area in which M0 is zero are removed. All maps are normalized to 1. No regularization is used.
4.4 The magnitude, decay and field map reconstructed from signals with 30 dB SNR. All images are displayed with 128 x 128 resolution. The artifacts in the unconstrained area in which M0 is zero are removed. M0 is normalized to 1. No regularization is used.
4.5 The coil sensitivity maps reconstructed from signals with 30 dB SNR. All images are displayed with 128 x 128 resolution. The artifacts in the unconstrained area in which M0 is zero are removed. All maps are normalized to 1. No regularization is used.
4.6 The magnitude, decay and field map reconstructed from signals with 30 dB SNR. The regularization parameters are λ = 5.5 x 10^7, λ_R = 1.75 x 10^8 and λ_I = 0. All images are displayed with 128 x 128 resolution. The artifacts in the unconstrained area in which M0 is zero are removed. M0 is normalized to 1.
4.7 The coil sensitivity maps reconstructed from signals with 30 dB SNR. The regularization parameter is λ = 1.09 x 10^6. All images are displayed with 128 x 128 resolution. The artifacts in the unconstrained area in which M0 is zero are removed. All maps are normalized to 1.
4.8 Rosette trajectory used in human experiment. Only the first half of the trajectory is plotted.
4.9 The magnitude, decay and field map reconstructed from the human brain experiment. Cubic convolution interpolation was used to model the coil sensitivity. All images are displayed with 128 x 128 resolution. (a)(c)(e) are from 4x interpolation for coil sensitivity; (b)(d)(f) are from 8x interpolation for coil sensitivity. M0 is normalized to 1. Most of the artifacts outside of the head were removed. No regularization was used.
4.10 The coil sensitivity maps reconstructed from the human brain experiment. Cubic convolution interpolation was used to model the coil sensitivity. All images are displayed with 128 x 128 resolution. (a)(c) are from 4x interpolation; (b)(d) are from 8x interpolation. Most of the artifacts outside of the head are removed. All maps are normalized to 1. No regularization was used.
4.11 The magnitude, decay and field map reconstructed from the human brain experiment.
Cubic spline interpolation was used to model the coil sensitivity. All images are displayed with 128 x 128 resolution. (a)(c)(e) are from 4x interpolation for coil sensitivity; (b)(d)(f) are from 8x interpolation for coil sensitivity. Most of the artifacts outside of the head were removed. M0 is normalized to 1. No regularization was used.
4.12 The coil sensitivity maps reconstructed from the human brain experiment. Cubic spline interpolation was used to model the coil sensitivity. All images are displayed with 128 x 128 resolution. (a)(c) are from 4x interpolation; (b)(d) are from 8x interpolation. Most of the artifacts outside of the head were removed. All maps are normalized to 1. No regularization was used.

Chapter 1
Introduction

Magnetic resonance imaging (MRI), also called nuclear magnetic resonance imaging (NMRI) or spin imaging, is an important application of the theory of nuclear magnetic resonance (NMR). MRI may also be the most important development in medical diagnostic imaging since Wilhelm Röntgen's X-ray. Its medical applications have revolutionized clinical diagnosis. In the last several decades, six scientists were awarded Nobel Prizes in three different disciplines (physics, chemistry, and physiology or medicine) for their work related to MRI.

In an MRI experiment, the nuclear magnetization of the hydrogen atoms in the water of the human or animal body is aligned by a powerful external magnetic field. After the aligned magnetization of the hydrogen nuclei is tipped by a radiofrequency (RF) wave, it precesses and generates a time-varying magnetic field, which is captured by the signal-receiving coils. The received signal is interpreted as the spatial-frequency response of the imaged object, so Fourier analysis is a fundamental tool in image reconstruction.

MRI is non-invasive and non-ionizing. MRI is superior to computerized tomography (CT) in soft-tissue imaging, such as neurological (brain), musculoskeletal, cardiovascular, and oncological imaging. Abundant diagnostic information is provided by changing the parameters of an MRI system.

With the development of hardware and computing technologies, functional magnetic resonance imaging (fMRI) is becoming a standard procedure with useful applications in patient management [1]. fMRI is concerned with hemodynamic responses, which reflect neural activity in the brain or spinal cord of humans or animals.

Blood oxygenation level dependent (BOLD) fMRI was introduced by Ogawa [2] and Kwong [3]. A change of neural activity in a region of the brain causes changes in blood oxygenation. These changes, called the BOLD effect, can be detected by magnetic resonance imaging (MRI). The BOLD effect is the basis for almost all fMRI experiments that map patterns of activation in the working human brain [1].

Because BOLD produces physical effects that are omitted in conventional MRI models, the classical reconstruction methods based on Fourier transforms must be changed to accommodate aspects of fMRI such as long readout times and nonuniform k-space trajectories.

1.1 Organization of the Thesis

In this chapter, we briefly review the history and basic principles of MR imaging.

In Chapter 2, we detail the major problem: the reconstruction of single-shot parameter assessment by retrieval from signal encoding (SS-PARSE). A new iterative reconstruction method based on a more accurate physical model is developed.
After reviewing related work, we focus on two issues: quality improvement and a fast algorithm.

In Chapter 3, the implementation of an FFT with nonuniform sampling in the reconstruction is investigated.

In Chapter 4, we extend the concepts stated in Chapters 2 and 3 to parallel imaging.

Chapter 5 concludes the thesis. The innovative ideas are summarized, and possible future work is discussed.

1.2 A Brief History of MRI

In the 1930s, Isidor Rabi investigated the relationship between nuclei, magnetic fields, and external RF fields. In 1944, his work was awarded the Nobel Prize in physics "for his resonance method for recording the magnetic properties of atomic nuclei".

In 1946, Felix Bloch [4] and Edward Purcell [5] laid the physical foundation of MRI. They independently observed the phenomenon of NMR and also explained the experiments theoretically. They were awarded the Nobel Prize "for their development of new methods for nuclear magnetic precision measurements and discoveries in connection therewith" in 1952.

From 1950 to 1970, the major developments of NMR were in chemical and physical molecular analysis. In 1971, Raymond Damadian [6] successfully used NMR to discriminate malignant tissue from normal tissue in a rat by measuring the relaxation times of the different tissues. In 1973, in a paper that almost was not published [7], Paul Lauterbur illustrated the internal structure of a clam acquired by MRI. The door to medical applications of MRI was opened. In the same decade, Peter Mansfield mathematically analyzed the RF signals of MRI and developed a method for fast imaging. Lauterbur and Mansfield shared the 2003 Nobel Prize in physiology or medicine "for their discoveries concerning magnetic resonance imaging".

1.3 MRI Model

1.3.1 Basic Concepts

Spin, a quantum mechanical property, can interact with an external magnetic field B0. Nuclei with an odd number of protons or neutrons have a non-zero spin, or magnetic moment. Without an external magnetic field, the randomly oriented spin angular momenta cannot be detected. When an external magnetic field B0 is applied, a magnetic moment M0 along the external field direction is generated [8]:

    M_0 = \frac{\rho_0 \gamma^2 \hbar^2}{4kT} B_0    (1.1)

where ρ0 is the spin density, γ is a constant called the gyromagnetic ratio, ℏ = h/(2π) in terms of Planck's quantum constant h, k is the Boltzmann constant, and T is the absolute temperature. This magnetic moment vector precesses around the external field direction with an angular frequency called the Larmor frequency, given by

    \omega_0 = \gamma B_0    (1.2)

1.3.2 The 1D Imaging and the Fourier Transform

In order to have a detectable signal, a radio-frequency (RF) magnetic field is applied for a short time so that the magnetic field produced by the aggregate proton spins precesses along with the magnetization. This precession generates a changing flux in a nearby coil. The changing flux is a signal modulated at the Larmor frequency ω0. After demodulation, the received signal is given by [8]

    s(t) = \int \rho(x)\, e^{-\jmath \phi_G(x,t)}\, dx    (1.3)

where φ_G(x,t) = γ x ∫_0^t G(τ) dτ and G(t) is the applied gradient. This signal is called the free induction decay (FID). Let

    k(t) = \frac{\gamma}{2\pi} \int_0^t G(\tau)\, d\tau    (1.4)

where k(t) is called the k-trajectory [9], which samples the spatial-frequency domain. Then (1.3) can be written as

    s(t) = \int \rho(x)\, e^{-\jmath 2\pi k(t) x}\, dx    (1.5)

Equation (1.5) shows that the signal s(t) is related to the spin density of the sample ρ(x) by a Fourier transform.
This is the foundation of image reconstruction in conventional MRI. Two k-space trajectories are illustrated in Figure 1.1.

Figure 1.1: Two MRI trajectories. (a) EPI trajectory; (b) rosette trajectory. Axes kx, ky in cm^-1.

The concept of 1D imaging can be extended to multi-dimensional Fourier imaging:

    s(t) = \int \rho(\mathbf{r})\, e^{-\jmath 2\pi \mathbf{k}(t)\cdot\mathbf{r}}\, d\mathbf{r}    (1.6)

where r is a multi-dimensional vector. In this case, t controls the values of the vector k(t), which correspond to the frequency-domain coordinates of the Fourier transform. Thus, the signal s(t) can be interpreted as a sample of the Fourier transform of ρ(r) at the frequency coordinate k(t). In this way, a one-dimensional signal can sweep through an n-dimensional space.

1.3.3 Spatial Resolution in MRI

Limited readout time in MRI applications restricts the number of collected samples, the number of phase encodings, and the coverage of k-space. The inversion problem based on partially covered k-space is called a limited-Fourier inversion problem [8]. In the discrete case, the 1D image ρ(x) is reconstructed by

    \rho(x) = \Delta k \sum_{n=0}^{N-1} s(n)\, e^{\jmath 2\pi n \Delta k\, x}    (1.7)

where Δk is the k-space sampling interval. The total width of the k-space coverage is W = NΔk. Let L be the size of the field of view (FOV). To avoid aliasing, the Nyquist criterion must be met:

    \Delta k = \frac{1}{L} \le \frac{1}{A}    (1.8)

where A is the physical size of the original image. From (1.7), we have

    s(n) = \sum_{k=0}^{N-1} \rho(k)\, e^{-\jmath 2\pi n k \Delta x}    (1.9)

where Δx is the spatial resolution, the smallest size that can be measured for a given object:

    \Delta x = \frac{L}{N} = \frac{1}{N\Delta k} = \frac{1}{W}    (1.10)

1.4 Relaxation and Field Inhomogeneity

In an MRI experiment, after the RF pulse is turned off, the longitudinal magnetization begins to recover exponentially with a time constant T1, the longitudinal relaxation time. T1 is also called the thermal or spin-lattice relaxation time.

T2 is the exponential decay rate of the FID for an ideal MR experiment. T2 is also known as the transverse relaxation time. Because of static magnetic field inhomogeneities, an observed FID decays with an exponential constant T2* that is smaller than T2.

In soft tissues, the typical T1 is about 1 second. T2 and T2* are at the level of milliseconds. For most biological tissues, T1 values are typically 5 to 10 times longer than T2 values [10].

Figure 1.2: Magnitude of the free induction decay (FID), showing T2* decay.

The differences in spin density, T1, and T2* among tissues are the basis of the MRI contrast mechanism. Because the T2* relaxation rate of blood depends on whether or not the hemoglobin is bound with oxygen, the T2* map as a function of space is of interest in fMRI. The BOLD contrast is also described by R2*, the reciprocal of T2*.

The magnetic field inhomogeneity also causes phase changes in the observed signal. In Chapter 2, we will model the relaxation and field inhomogeneity by revising (1.6) [11]:

    s(t) = \int \rho(\mathbf{r})\, e^{-(R_2^*(\mathbf{r}) + \jmath\omega(\mathbf{r}))t}\, e^{-\jmath 2\pi \mathbf{k}(t)\cdot\mathbf{r}}\, d\mathbf{r}    (1.11)

Conventional methods to estimate R2* and ω are all based on multi-shot MRI. In traditional MRI, R2* and ω can also be compensated by manipulating the RF sequence (see Figure 1.3). Since the readout time is short, it is not necessary to model R2* and ω. Single-shot MRI needs a long readout time (see Figure 1.4), so the decay and phase precession can no longer be omitted. An iterative method for single-shot MRI was suggested in [11]. We will review this method in Chapter 2.
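To make the difference between (1.6) and (1.11) concrete, the short sketch below synthesizes a long-readout signal directly from a discretized version of (1.11), with local decay and frequency offset included. It is an illustrative NumPy translation, not the dissertation's MATLAB implementation; the function name and arguments are hypothetical.

    import numpy as np

    def long_readout_signal(m0, r2star, omega, kx, ky, t, fov):
        """Direct (slow) evaluation of the discretized signal model of Eq. (1.11):
        s(t) = sum_i M0_i * exp(-(R2*_i + j*omega_i) * t) * exp(-j*2*pi*k(t).x_i).
        m0, r2star, omega are N x N maps; kx, ky, t are 1-D arrays of equal length.
        """
        n = m0.shape[0]
        coords = (np.arange(n) - n / 2) * (fov / n)      # pixel centers, in cm
        x, y = np.meshgrid(coords, coords, indexing="xy")
        x, y = x.ravel(), y.ravel()
        rate = -(r2star.ravel() + 1j * omega.ravel())    # per-pixel complex decay rate
        s = np.empty(t.size, dtype=complex)
        for idx, (kxn, kyn, tn) in enumerate(zip(kx, ky, t)):
            enc = np.exp(-2j * np.pi * (kxn * x + kyn * y))   # spatial encoding at sample n
            s[idx] = np.sum(m0.ravel() * np.exp(rate * tn) * enc)
        return s

Setting r2star and omega to zero reduces the loop body to plain Fourier samples of m0, which is exactly the static-image assumption behind (1.6) and conventional FFT reconstruction.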
Figure 1.3: Spin echo RF sequence (90° and 180° RF pulses; slice, phase, and frequency gradients; signal readout).

Figure 1.4: Single-shot MRI RF sequence (RF pulse, Gx, Gy, Gz, long readout).

1.5 Functional MRI (fMRI)

For many years, it has been known that the functions of the human brain are controlled by different areas of the cerebral cortex. Several modalities are successful in mapping brain functions onto the related laminae. Positron emission tomography (PET) detects brain activity by measuring regional cerebral blood flow (rCBF). Magnetoencephalography (MEG) and electroencephalography (EEG) detect the magnetic or electrical signals generated by the activated brain, but they can hardly locate which areas the signals come from. fMRI is a noninvasive modality that efficiently maps brain function.

Before the late 1980s, the imperfections caused by T2* were regarded as a negative factor in MRI. The technique of spin echo can be used to refocus the RF pulse and compensate for T2* relaxation. Another method is to shorten the interval between the excitation RF pulse and the signal sampling.

Later, it was recognized that the paramagnetic material in blood can be used to mark blood vessels and generate effective contrast. Because deoxyhemoglobin is more paramagnetic than oxyhemoglobin (oxyhemoglobin is more diamagnetic than deoxyhemoglobin), deoxyhemoglobin is magnetically susceptible. This means that a change in oxygen level can cause a change in the MRI signal, making deoxyhemoglobin a natural contrast agent. Because the consumption of oxygen reflects brain activity, one can map brain function by analyzing an MRI signal. The goal of fMRI is to detect brain activation associated with sensory, motor, and cognitive processes.

Chapter 2
Reconstruction of SS-PARSE

Single-shot parameter assessment by retrieval from signal encoding (SS-PARSE) acknowledges local decay and phase evolution in MRI, so it models each datum as a sample from (k,t)-space rather than k-space. Because local decay and frequency vary continuously in space, discrete models in space can cause artifacts in the reconstructed parameters. Increasing the resolution of the reconstructed parameters can more accurately capture the spatial variations, but the resolution is limited not only by computational complexity but also by the size of the acquired data. For a limited data set used for reconstruction, simply increasing the model resolution may cause the reconstruction to become an underdetermined problem. In this chapter, we present a solution to this problem based on cubic convolution interpolation. Because the local decay and frequency enter the model through exponential time functions, FFTs cannot be applied directly in the reconstruction algorithm. A polynomial expansion is proposed so that FFTs can be used to accelerate the reconstruction. Results on simulated data and phantoms demonstrate reduced computation time and improved quality.

2.1 Introduction

Twieg [9] and Ljunggren [12] introduced the k-space formalism to unify different MRI techniques. Based on this formalism, the observed MRI signal s(t) is the Fourier transform of the local transverse magnetization M(x):

    s(t) = \int M(\mathbf{x})\, e^{-\jmath \mathbf{k}(t)\cdot\mathbf{x}}\, d\mathbf{x} + \eta(t)    (2.1)

where s(t) is the observed MRI signal, ȷ denotes √-1, M(x) is the local transverse magnetization at location x, and η(t) is white complex Gaussian noise. The last factor in the integrand represents the spatial phase modulation imposed by the imaging gradients at time t:

    \mathbf{k}(t) = \gamma \int_0^t \mathbf{G}(t')\, dt'    (2.2)
The advantage of this model is that M(x) can be reconstructed from s(t) by a Fourier transform when s(t) is appropriately rearranged, but it is physically inaccurate, especially for long-readout MRI. The premise of this model is that local decay and phase evolution are insignificant; that is, the local transverse magnetization does not change during the signal acquisition. For long readout times, magnitude decay and phase-angle precession of the local transverse magnetization are inevitable.

In conventional anatomical MRI, the decay and precession rates are regarded as sources of contrast, but in applications such as functional MRI (fMRI), they are physiological quantities of interest [13,14]. fMRI measures signal changes caused by neural activity in the brain or spinal cord of humans or other animals. So the object of interest in fMRI is a series of images, or, more precisely, a single evolving image, not a single static image as in conventional MRI methodology. SS-PARSE recognizes this reality and uses a more accurate model [11]:

    s(t) = \int M_0(\mathbf{x})\, e^{-[R_2^*(\mathbf{x}) + \jmath\omega(\mathbf{x})]t}\, e^{-\jmath \mathbf{k}(t)\cdot\mathbf{x}}\, d\mathbf{x} + \eta(t)    (2.3)

where M0(x) is the local transverse magnetization immediately following excitation, R2*(x) is the local net relaxation rate, and ω(x) is the local frequency offset.

Many methods to acquire proper signals to reconstruct some combination of M0, R2* and ω have been proposed [15-20]. Time segmentation and frequency segmentation were suggested in [21,22] to accelerate the reconstruction of the local magnitude in the presence of frequency offset; [23] estimated ω from multi-scan MRI; [24,25] jointly reconstructed R2* and ω; [26] recovered M0 and ω0 in the presence of R2*. All of these methods require multiple scans or echoes, and none of them jointly estimates all three of M0, R2* and ω.

In this chapter, we address the following three issues in the reconstruction algorithm of SS-PARSE after reviewing the iterative reconstruction method:

- how to implement a higher-resolution reconstruction;
- how to perform an efficient line search in the iterative reconstruction algorithm;
- how to interpret this nonlinear SS-PARSE problem in terms of Fourier transforms.

2.2 Iterative Reconstruction Method

The discrete version of (2.3) on the spatial (x,y) grid indexed by i is given by

    s_n = \underbrace{\sum_i M_{0i}\, e^{nW_i}\, e^{-\jmath \mathbf{k}_n\cdot\mathbf{x}_i}}_{\hat{s}_n(M_0, W)} + \eta_n    (2.4)

where W_i = -[R*_{2i} + ȷω_i]Δt, k_n = k(nΔt), and Δt is the sampling interval. Our goal is to reconstruct M0 and W from the observed signal s by solving (2.4).

For traditional MRI, the local transverse magnetization in (2.1) can be reconstructed by taking the inverse Fourier transform of an appropriately formatted version of s(t), because the trajectory k(t) samples the frequency domain of a single image M(x). SS-PARSE is different: its trajectory samples the frequency domains of different images that are related by the exponential time function e^{nW}. Because the problem is now nonlinear and does not have the structure of an FFT, one must use an iterative method to solve it [27]. We propose to use an iterative conjugate-gradients (CG) algorithm to minimize the cost function

    J(z) = \sum_n |s_n - \hat{s}_n(z)|^2 + R(z)    (2.5)

with respect to z, where z = {M0, W} and R(z) is a regularization term. In the following discussion, we omit the regularization term for simplicity, though the method easily incorporates such a term if desired.

The CG algorithm used to reconstruct z is initialized as follows [28]:

- Set z = z_0
- Steepest descent: Δz_0 = -∇_z J(z_0)
- Line search: α_0 = argmin_{α ≥ 0} J(z_0 + α Δz_0)
- z_1 = z_0 + α_0 Δz_0

After the first iteration, the following steps constitute one iteration of moving along the subsequent conjugate direction d_n, where d_0 = Δz_0:

1. Δz_n = -∇_z J(z_n)
2. β_n = (Δz_n^T Δz_n) / (Δz_{n-1}^T Δz_{n-1})
3. d_n = Δz_n + β_n d_{n-1}
4. α_n = argmin_{α ≥ 0} J(z_n + α d_n)
5. z_{n+1} = z_n + α_n d_n

The gradient ∇_z J computation includes two parts:

    \frac{\partial J}{\partial M_0} = \sum_{n=1}^{N} f_n\, e^{nW}\, e^{\jmath \mathbf{k}_n\cdot\mathbf{x}}    (2.6)

    \frac{\partial J}{\partial W} = M_0 \sum_{n=1}^{N} n\, f_n\, e^{nW}\, e^{\jmath \mathbf{k}_n\cdot\mathbf{x}}    (2.7)

where f_n = [\hat{s}_n - s_n]^* and N denotes the length of s_n.

There are three problems with this method:

- Because local decay and frequency vary continuously in space, discrete models in space can cause artifacts in the reconstructed parameters. Increasing the resolution of the reconstructed parameters can more accurately capture the spatial variations, but the resolution is limited not only by computational complexity but also by the size of the acquired data. For a limited data set used for reconstruction, simply increasing the resolution may cause the reconstruction to become an underdetermined problem.
- Directly evaluating J(z) and ∇_z J(z) is computationally intensive.
- An accurate line search requires several evaluations of J(z).

2.3 Interpolation

The iterative reconstruction method cannot be used to reconstruct images with arbitrary resolution. The degrees of freedom of this reconstruction depend on the resolution of z, so limited acquired data limits the resolution of the reconstruction. We propose a solution based on cubic convolution interpolation. Instead of estimating the images directly, this algorithm computes interpolation coefficients that are used to generate images on a higher-resolution grid, so the reconstruction algorithm remains a fully determined or overdetermined problem.

Interpolation is an efficient bridge between different resolutions in a discrete representation. In order to reconstruct images on a high-resolution grid while keeping the reconstruction algorithm determined or overdetermined, we estimate interpolation coefficients rather than the parameter set z, the parameter images to be reconstructed.

There are several candidates for the interpolation. Cubic splines are very popular in applications because of their minimum-curvature property [29]. When we applied cubic splines to this problem, however, we observed residual blurring in the local decay and frequency. To explain this behavior, we resort to a simplified linearized model. By using splines, we are actually solving a problem corresponding to something like y = Ax, where A is a lowpass filter. The character of A makes the reconstruction problem harder, because A is not well conditioned. Conjugate gradients has no problem with this in the linear case: the conjugate directions can still cover the space quickly and reconstruct the unknown x. In the nonlinear case, the presence of the A matrix exaggerates the effect of the nonlinearity. In a nonlinear problem, A changes from iteration to iteration, which changes the relationship of the conjugate directions from iteration to iteration. Thus, the nonlinear local decay/frequency terms converge more slowly than the linear magnitude term. Without an inordinate number of iterations, the estimated decay/frequency parameters would be blurred in comparison to the magnitude term.

The cubic convolution interpolator is different from the cubic spline interpolator [30].
It passes through zero at neighboring sample locations, so it does not have to invert a system of equations to match the sample values to the coefficients. We found that this interpolator yields significantly less blurred results with the same number of iterations. This difference is illustrated in Figure 2.1, where we compare the R2* maps reconstructed with cubic spline interpolation, cubic convolution interpolation, and no interpolation.

Figure 2.1: Reconstructed R2* (sec^-1) from different methods: (a) cubic spline, (b) cubic convolution, (c) non-interpolation. All images have 128 x 128 resolution (80-iteration reconstruction). For (a) and (b), the interpolation factors are both 2, so the interpolation coefficients are 64 x 64; (c) is from a reconstruction with 128 x 128 resolution.

In order to quantify the different blurring of the cubic convolution and cubic spline interpolations and to further justify our choice of cubic convolution, we designed a special experiment to approximate the point-spread function (PSF) for this nonlinear problem. First, we noiselessly synthesized s(t) of (2.3) with constant M0, R2* and ω. Then we reconstructed the three parameter sets with cubic convolution and cubic splines using 80 iterations. Because of the interpolations, we actually estimated the interpolation coefficients rather than M0, R2* and ω. The reconstructed coefficients for R2* by cubic convolution and cubic spline are denoted by Cconv and Cspline. The value of R2* within a small circle (diameter 1/128 of the FOV, which is 12.8 x 12.8 cm^2) at the origin was changed to a different constant value, and then a new s(t) was noiselessly synthesized based on the changed parameter sets. The reconstructed coefficients for R2* from the new s(t) by cubic convolution and cubic splines are denoted by iCconv and iCspline. Let C be the cubic convolution interpolation operator and S be the cubic spline interpolation operator. The central lines of C(iCconv - Cconv) and S(iCspline - Cspline) are shown in Figure 2.2. The FWHM is noticeably smaller for cubic convolution interpolation.

Figure 2.2: FWHM of cubic convolution and cubic spline. C(iCconv - Cconv) and S(iCspline - Cspline) were normalized to 1. The FWHM of cubic convolution is about 0.25 cm and that of cubic spline is about 0.40 cm. The results were from reconstructions with 80 iterations.

If the sampled data are equally spaced, interpolation functions can be written as

    g(x) = \sum_k c_k\, u\!\left(\frac{x - x_k}{h}\right)    (2.8)

where c_k is an interpolation coefficient that is a function of the sampled data, x_k is the kth interpolation node, h is the sampling interval, and u(x) is an interpolation kernel. The sampled function f(x) agrees with g(x) at every x_k. Cubic convolution interpolation assumes that g(x) agrees with f(x) in the first four terms of the Taylor series expansion. The 1-D continuous interpolation kernel is

    u(x) = \begin{cases}
      \frac{4}{3}|x|^3 - \frac{7}{3}|x|^2 + 1, & 0 \le |x| < 1 \\
      -\frac{7}{12}|x|^3 + 3|x|^2 - \frac{59}{12}|x| + \frac{5}{2}, & 1 \le |x| < 2 \\
      \frac{1}{12}|x|^3 - \frac{2}{3}|x|^2 + \frac{7}{4}|x| - \frac{3}{2}, & 2 \le |x| < 3 \\
      0, & 3 \le |x|
    \end{cases}    (2.9)

This function is shown in Figure 2.3. If the sampling interval is h = 1/2, the vector of kernel samples is u = {1/96, 0, -3/32, 0, 7/12, 1, 7/12, 0, -3/32, 0, 1/96}.
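The kernel in (2.9) is easy to check numerically. The sketch below (NumPy; an illustration rather than the author's MATLAB code) evaluates the piecewise polynomial, lists its samples at spacing h = 1/2, and verifies the partition-of-unity property that follows from matching the first four Taylor terms.

    import numpy as np

    def cubic_conv_kernel(x):
        """Cubic convolution interpolation kernel of Eq. (2.9), support |x| < 3."""
        ax = np.abs(np.asarray(x, dtype=float))
        out = np.zeros_like(ax)
        m1 = ax < 1
        m2 = (ax >= 1) & (ax < 2)
        m3 = (ax >= 2) & (ax < 3)
        out[m1] = (4/3) * ax[m1]**3 - (7/3) * ax[m1]**2 + 1
        out[m2] = -(7/12) * ax[m2]**3 + 3 * ax[m2]**2 - (59/12) * ax[m2] + 5/2
        out[m3] = (1/12) * ax[m3]**3 - (2/3) * ax[m3]**2 + (7/4) * ax[m3] - 3/2
        return out

    # Kernel samples at spacing h = 1/2 (cf. the vector u quoted above):
    # [1/96, 0, -3/32, 0, 7/12, 1, 7/12, 0, -3/32, 0, 1/96]
    samples = cubic_conv_kernel(np.arange(-2.5, 2.51, 0.5))

    # Partition of unity: at any fractional offset the kernel weights sum to 1,
    # so the interpolator reproduces constants (and low-order polynomials) exactly.
    for off in np.linspace(0.0, 1.0, 11):
        assert np.isclose(cubic_conv_kernel(off + np.arange(-3, 4)).sum(), 1.0)

The half-integer sample at the center of the kernel evaluates to 7/12, which is what the partition-of-unity property requires together with the listed outer samples 1/96 and -3/32.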
The comparison of the frequency responses of the cubic convolution interpolator and the cubic spline interpolator is shown in Figure 2.4. The high-frequency response of the cubic convolution interpolation is higher than that of the cubic spline interpolation, which helps explain why the former gives less blurred results.

Figure 2.3: Cubic convolution interpolation function u(x).

Figure 2.4: Frequency responses of cubic convolution and spline interpolation.

For the reconstruction of SS-PARSE, we use 2-D interpolation. The continuous 2-D interpolation function is shown in Figure 2.5. For 2-D interpolation, we first apply interpolation to all rows of a 2-D matrix, and then we apply the same procedure to the columns of the interpolated matrix. This process can be written with matrix multiplications:

    z = A^T C A    (2.10)

where C is the coefficient matrix formatted as an image array, z is an NM x NM matrix, M is an integer not less than 2, and A is an N x NM sparse matrix constructed from u(x). The element a_mn of A is computed by

    a_{mn} = u\!\left(1 + \frac{n-1}{M} - m\right)    (2.11)

for the interpolation kernel in (2.9).

Figure 2.5: 2-D cubic convolution interpolation function u(x,y).

The key step in computing the cost function (2.5) is the evaluation of (2.4). For the interpolation method, this step is straightforward. First, we compute z using (2.10). Then we compute the cost function. The computation of the gradient matrix ∂J/∂C includes two parts. One is the computation of ∂J/∂z, and the other is the computation of ∂z/∂C:

    \nabla_C J = \nabla_z J \ast \frac{\partial z}{\partial C} = \nabla_z J \ast \left(A^T A\right) = \nabla_z J \ast \left(u^T u\right)    (2.12)

where u = [u(-3+h)  u(-3+2h)  ...  u(3-h)]. Since u^T u is a small matrix, we compute this convolution directly.

2.4 Fast Algorithm

We implemented the previously discussed algorithm in MATLAB on a Linux workstation. The non-interpolation version took about 160 seconds to reach a given accuracy with 64 x 64 resolution. The interpolation method was also tested. The interpolation factor is 2, so the computational complexity of the interpolation is more than four times that of the non-interpolation algorithm, taking about 900 seconds. The approximation of the line search described below has already been incorporated into the algorithm, so further speedups are highly desired. We propose an efficient line-search approximation and then an acceleration based on an FFT-based approximation.

2.4.1 Line Search

Line search is critical to the speed and accuracy of the CG algorithm; an efficient method is suggested to address this problem.

To minimize a non-quadratic function using the CG algorithm, a line search is used to compute α, the adjustable step length. The line search is computationally intensive for this reconstruction algorithm because each line search evaluates (2.4) several times. We numerically analyzed the function J(α) and found that a quadratic approximation can be used to reduce the computational complexity of the line search [31]. The quadratic approximation is as follows:

    J(z + \alpha\,\Delta z) \approx \tfrac{1}{2} J''(0)\,\alpha^2 + J'(0)\,\alpha + J(z)    (2.13)

    \alpha_{\min} = -\frac{J'(0)}{J''(0)}    (2.14)

where

    J'(0) \approx \frac{J(z + \delta\,\Delta z) - J(z - \delta\,\Delta z)}{2\delta}    (2.15)

    J'(-\delta/2) \approx \frac{J(z - \delta\,\Delta z) - J(z)}{-\delta}    (2.16)

    J''(0) \approx \frac{J'(\delta/2) - J'(-\delta/2)}{\delta}    (2.17)

and δ is a small positive scalar.
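A minimal numerical sketch of the three-point quadratic fit in (2.13)-(2.17) follows, including the fail-safe check described in the next paragraph. It is illustrative NumPy code with made-up names and a toy cost function, not the SS-PARSE implementation.

    import numpy as np

    def quadratic_step(cost, z, dz, delta=1e-4):
        """One quadratic line-search step following Eqs. (2.13)-(2.17).

        Fits a parabola to J at z - delta*dz, z, z + delta*dz and returns the
        step length minimizing it, so only two extra cost evaluations are needed.
        delta must be small, but not so small that rounding error dominates
        the second difference.
        """
        j0 = cost(z)
        jp = cost(z + delta * dz)
        jm = cost(z - delta * dz)
        d1 = (jp - jm) / (2 * delta)           # J'(0), Eq. (2.15)
        d2 = (jp - 2 * j0 + jm) / delta**2     # J''(0), composing (2.16)-(2.17)
        alpha = -d1 / d2                       # minimizer of the fitted parabola
        if cost(z + alpha * dz) > j0:          # fail-safe: quadratic model was poor
            return None                        # caller falls back to a full line search
        return alpha

    # Example on an exactly quadratic cost, where a single step reaches the minimum:
    cost = lambda z: np.sum((z - 3.0) ** 2)
    z = np.zeros(4)
    dz = -2 * (z - 3.0)                        # negative gradient at z
    alpha = quadratic_step(cost, z, dz)        # alpha == 0.5, so z + alpha*dz == 3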
Since J(z) has been computed in the previous iteration, one only needs to evaluate J(z + δΔz) and J(z - δΔz) to minimize J(z + αΔz) with respect to α. From Figure 2.6, we can see that J(α) is consistently very close to a quadratic function over the range of interest. As a fail-safe, the value of J(z + α_min Δz) can be compared to J(z). If J(z + α_min Δz) > J(z), this indicates a nonquadratic cost in the current step. In those unusual cases, a full line search can then be performed to find the minimizer. If that happens, the conjugate direction needs to be reinitialized to the negative gradient, since this indicates a nonquadratic region.

Figure 2.6: Quadratic approximation of the line search (J(α) versus α, with and without the approximation).

2.4.2 Approximations for Exponential Time Function

Conventional MRI samples the frequency domain of an image assumed to be static, so FFTs can be used directly to reconstruct the image. Because of the exponential time function e^{nW}, SS-PARSE samples the frequency domains of different images that are related by this exponential discrete time function. In order to use FFTs, we must separate the time variable n from the local decay and frequency W.

Frequency and time segmentation were proposed in [22] to address the problematic exponential time function so that FFTs can be used to accelerate the reconstruction process. However, this method is inconvenient for simultaneous estimation of magnitude, decay, and frequency. We use a polynomial approximation of the exponential time function to reduce computation time [31].

According to [22], all of the known approximations are special cases of the following general form:

    e^{nW} \approx \sum_{l=0}^{L-1} a_l(n)\, C_l(W), \quad n = 1, \ldots, N    (2.18)

where C_l denotes basis functions and a_l denotes coefficients. The authors of [22] suggested the approximations of time segmentation and frequency segmentation. Both segmentations are efficient in the absence of decay (Re{W} = 0), and the coefficients are determined only by the histogram of Im{W}. Unfortunately, neither requirement is satisfied in SS-PARSE reconstruction. The coefficients of an efficient approximation should not change, since updating the coefficients requires intensive computation. Assuming Re{W} = 0 and the values of W histogrammed into K bins, the computation of a_l with the LS time-segmentation approach is O(LK(N + L) + L^3 N) [22] in addition to the histogramming computation. Since Re{W} ≠ 0 in SS-PARSE, the computation is more complicated.

A reasonable assumption is that R and ω are bounded within a known range. Based on this assumption, one can approximate the exponential time function with polynomials whose coefficients do not change during the estimation process. Using the polynomial approximation, (2.18) is written as

    e^{nW} \approx \sum_{l=0}^{L-1} a_l\, n^l W^l, \quad n = 1, \ldots, N    (2.19)

The advantage of polynomial functions is that the functions of n used to separate n from W are analytically defined and easily evaluated. Experiments show that the minimum value of L necessary for a good approximation is determined by N|ω|_max Δt, since local decays are relatively small compared to frequencies. Because of this feature, the maximum frequency range dominates the error and should be made as small as possible. Thus, the polynomial approximations can be defined to cover half of the range of frequency. Let

    \omega_0 = \tfrac{1}{2}\left(\omega_{\max} + \omega_{\min}\right)\Delta t    (2.20)

    Z(l) = \begin{cases} a_l\,(W - \jmath\omega_0)^l, & \mathrm{Im}\{W - \jmath\omega_0\} \ge 0 \\ a_l^*\,(W + \jmath\omega_0)^l, & \mathrm{Im}\{W - \jmath\omega_0\} < 0 \end{cases}    (2.21)
Equation (2.19) can then be rewritten as

    e^{nW} \approx e^{\jmath n\omega_0} \sum_{l=0}^{L-1} n^l\, Z(l)    (2.22)

The direct choice of the coefficients a_l of this approximation is a Taylor series, but the convergence of the Taylor expansion is too slow. We used an LS method to choose a_l. This optimization process can be written in matrix form: x is an M x 1 vector with Im{x} in [0, N|ω|_max Δt] and Re{x} less than some real number; a is an L x 1 vector composed of the a_l; X is an M x L matrix whose ith column is x^{i-1}. We consider the real and imaginary parts of x to be uniformly likely over their assumed range and choose X to represent this uniform weighting. As long as the number of samples (rows of X) is relatively large compared to the polynomial order, the exact sample spacing and density have a negligible impact on the solution. The vector a is found by solving

    \arg\min_{a}\; \tfrac{1}{2} \left\| e^{x} - X a \right\|_2^2    (2.23)

where e^x denotes the elementwise exponential. Figure 2.7 shows the precision of the real part of the polynomial approximation for a specific pair of R and ω.

Figure 2.7: Polynomial approximations of the exponential time function for a pair of R and ω (w = 18 + j·120π; real part of exp(-wt) versus time, with polynomial approximations of order L = 23 and L = 24).

The normalized root mean square error (NRMSE) ε of this approximation is defined as

    \epsilon = \sqrt{\frac{A + B}{C}}    (2.24)

where

    A = \int_0^{y_{\max}} \int_0^{x_{\max}} \Big| e^{-x - \jmath y} - \sum_{l=0}^{L-1} a_l (x + \jmath y)^l \Big|^2 \, dx\, dy

    B = \int_{y_{\min}}^{0} \int_0^{x_{\max}} \Big| e^{-x - \jmath y} - \sum_{l=0}^{L-1} a_l^* (x + \jmath y)^l \Big|^2 \, dx\, dy

    C = \int_{y_{\min}}^{y_{\max}} \int_0^{x_{\max}} \big| e^{-x - \jmath y} \big|^2 \, dx\, dy

The NRMSE is shown in Figure 2.8.

Figure 2.8: Accuracy of the polynomial approximation: NRMSE ε versus L, for x_max = 1.5, y_max = 4π, y_min = -4π.

2.4.3 Implementation

The polynomial approximation allows us to separate the time variable from the variables of local decay and frequency in the exponential time function. Inserting (2.22) into (2.4) leads to

    \hat{s}_n \approx e^{\jmath n\omega_0} \sum_{l=0}^{L-1} n^l \sum_{i=1}^{K} M_{0i}\, Z_i(l)\, e^{-\jmath \mathbf{k}_n\cdot\mathbf{x}_i}    (2.25)

By discretizing k_n onto an integer grid, we can compute the inner summation of (2.25) with an FFT, and s is approximated by a linear combination of a relatively small number of FFTs.

There are two ways to compute ∇_z J. One is to compute the derivatives and then apply the approximation. The other is to apply the approximation and then compute the derivatives. Both methods lead to identical formulas in this case:

    \frac{\partial J}{\partial M_0} \approx \sum_{l=0}^{L-1} Z(l) \sum_{n=1}^{N} f_n\, e^{-\jmath n\omega_0}\, n^l\, e^{\jmath \mathbf{k}_n\cdot\mathbf{x}}    (2.26)

    \frac{\partial J}{\partial W} \approx M_0 \sum_{l=1}^{L} Z(l) \sum_{n=1}^{N} f_n\, e^{-\jmath n\omega_0}\, n^l\, e^{\jmath \mathbf{k}_n\cdot\mathbf{x}}    (2.27)

where f_n = (\hat{s}_n - s_n)^* and N denotes the length of s(n). The inner summations of (2.26) and (2.27) can also be evaluated by FFTs.

If the sizes of the reconstructed M0 and W are both K x K, direct evaluation of (2.4), (2.6) and (2.7) requires O(NK^2) arithmetic operations. If k_n is discretized onto a grid of mK x mK, the fast approach requires O(L m^2 K^2 log_2(m^2 K^2)) arithmetic operations.
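As an illustration of (2.19) and (2.23), the sketch below fits polynomial coefficients a_l to exp(w) by least squares over a bounded decay/frequency range and reports the resulting relative error. It is a hedged NumPy example: the random sampling of the range, the sign conventions, and the raw Vandermonde basis are choices made here for clarity, not the dissertation's MATLAB implementation, and for large L a better-conditioned basis would be preferable in practice.

    import numpy as np

    def fit_exp_poly(L, r_max, y_max, m=4000, seed=0):
        """Least-squares coefficients a_l with exp(w) ~ sum_l a_l * w**l
        over w = -r - 1j*y, r in [0, r_max], y in [0, y_max] (cf. Eqs. (2.19), (2.23))."""
        rng = np.random.default_rng(seed)
        w = -(rng.uniform(0, r_max, m) + 1j * rng.uniform(0, y_max, m))
        X = np.vander(w, L, increasing=True)           # columns 1, w, w^2, ..., w^(L-1)
        a, *_ = np.linalg.lstsq(X, np.exp(w), rcond=None)
        rel_err = np.linalg.norm(X @ a - np.exp(w)) / np.linalg.norm(np.exp(w))
        return a, rel_err

    # Ranges of the same order as those quoted in the text (x_max = 1.5, y_max = 4*pi):
    a, err = fit_exp_poly(L=22, r_max=1.5, y_max=4 * np.pi)

Once the coefficients a_l are fixed, they never change during the iterations; only the per-pixel terms Z_i(l) are updated, and the inner sums over i in (2.25)-(2.27) reduce to standard DFTs on the gridded k-space, which is what makes the FFT acceleration possible.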
Even a small deviation in the approximated gradient relative to the actual gradient could theoretically have a large impact on the final results. It is hard to characterize this effect analytically. We numerically analyzed this deviation in Section 2.6.1, and the results show that the errors caused by the approximations have a negligible impact on the estimation results.

2.5 Analysis of Accuracy of the Fast Algorithm

There are two sources of error introduced by our algorithm. One is the approximation of the time function; the other is the k-space gridding, that is, rounding the non-equispaced trajectory onto an equispaced grid. We use the normalized root mean square (NRMS) error to describe the accuracy. If x̂ is the estimate of a row vector x, the NRMS is defined as

    \mathrm{NRMS} = \left[ \frac{(\hat{x} - x)(\hat{x} - x)^*}{x\, x^*} \right]^{1/2}    (2.28)

2.5.1 Reconstruction Error Introduced by the Approximation of Time Function

One source of error in the fast algorithm is the approximation of the exponential time function. In order to isolate the error caused only by the polynomial approximation, we do not round the non-equally spaced k-space trajectory onto an integer grid. We compare the results of the polynomial approximation with the results of the non-approximated method using the standard conjugate gradient algorithm. Figure 2.9 shows the NRMS error of the FID. In these NRMS computations, x is the estimated FID from the non-approximated algorithm, and x̂ is the estimated FID from the polynomial approximation method. We also computed the NRMS errors of the reconstructed images; in those computations, x is the image reconstructed by the non-approximated method, and x̂ is the image reconstructed from the polynomial approximation (Figure 2.10).

Figure 2.9: Errors in Eq. (2.4) due to the approximation of the time function (NRMS versus polynomial approximation length).

Figure 2.10: Errors in the reconstructed images (M0, R2*, ω) due to the approximation of the time function (NRMS versus polynomial approximation length).

2.5.2 Reconstruction Error Introduced by k-Space Gridding

In order to use FFTs, one approach is to round the non-equally spaced k-space trajectory onto an equally spaced grid. This causes errors in the reconstruction. We first study the error in (2.4). We compare the results of mapping the k-space trajectory onto equally spaced grids with those of computing (2.4) directly. The result is shown in Figure 2.11. The reconstruction used 64 x 64 resolution. If the FFT factor is m, the k-space trajectory is mapped onto a 64m x 64m equally spaced grid. The errors in the reconstructed images are shown in Figure 2.12.

Figure 2.11: Errors in Eq. (2.4) due to FFT rounding (NRMS versus FFT factor).

Figure 2.12: Errors in the reconstructed images (M0, R2*, ω) due to FFT rounding (NRMS versus FFT factor).

Because the error caused by k-space gridding is the dominant part of the approximation error, we propose to use the nonuniform FFT (NUFFT) instead of k-space gridding. The NUFFT is detailed in the next chapter.

2.6 Experiments

In this section, we validate the fast approach and the cubic convolution reconstruction algorithm using synthetic data. A rosette trajectory was used to synthesize the data. The readout time was 66.7 ms, and the FOV was 12.8 cm. The sampling interval was 5.56 µs, so the length of s(n) in (2.4) was 12,000 samples.
To simulate a continuous spatial domain, the data were synthesized from images with 1024 x 1024 resolution: we used analytical functions of M_0(x), R_2^*(x) and \omega(x) to generate 1024 x 1024 images that approximate a continuous spatial domain. All reconstructed images in this chapter have 64 x 64 or 128 x 128 resolution. To compare the reconstructed images with the "true" images, we used the same analytical functions to generate "true" images at 64 x 64 or 128 x 128 resolution. According to [32], rosette trajectories can be modeled by

    k(t) = k_{max} \cos(\omega_1 t) \, e^{j\omega_2 t}    (2.29)

where k(t) = k_x(t) + j k_y(t). A rosette trajectory with k_{max} = 2.819, \omega_1 = 5171.4 s^{-1} and \omega_2 = -3334.8 s^{-1} is shown in Figure 2.13.

Figure 2.13: A rosette trajectory.

2.6.1 Validation of the Fast Approach

The SS-PARSE reconstruction was implemented in MATLAB on a Dell Optiplex 745 with a 2.66 GHz Core 2 CPU, 3 GB of RAM, and Windows XP. We compared two cases: 1) 64 x 64 resolution, and 2) 64 x 64 interpolation coefficients with 2x cubic convolution interpolation (computational complexity roughly equivalent to 128 x 128 resolution). For each case, the length of s(t) was 12,000. The computation times per iteration are given in Table 2.1.

Table 2.1: Computation times (seconds per iteration)
    Resolution   Unaccelerated   Fast
    64           5.37            0.64
    2x           20.56           1.18

We have shown that polynomial expansion of the time function together with the NUFFT can greatly speed up the reconstruction. Since this approach involves approximations, we need to demonstrate that the reconstructions from the fast approach show no significant differences from those of the unaccelerated method. We used noiseless free induction decay (FID) signals to compare the results. Three cases were studied: 1) a 64 x 64 resolution reconstruction, 2) a 128 x 128 resolution reconstruction, and 3) 2x interpolation; these are referred to as 64, 128 and 2x in Table 2.2. We ran 200 iterations of the CG reconstruction algorithm for each case.

Table 2.2: NRMSE (%) difference between the unaccelerated and fast approaches
    Reso.   M_0   R_2^*   \omega
    64      0.8   0.4     0.5
    128     1.4   0.3     0.3
    2x      0.5   0.5     0.3

These results indicate that the error introduced by the approximations in the fast approach is quite small, considerably smaller than the typical error one would expect due to noise in the signal.

2.6.2 Noiseless Experiments

We evaluated the performance of the reconstruction algorithm for a noiseless signal. The true images used to synthesize the FID and the reconstructed images are shown in Figures 2.14 and 2.15, and the NRMS errors of the three methods with respect to the true images are given in Table 2.3.

Table 2.3: NRMSE (%) of the noiseless reconstructions
             Fast                       Unaccelerated
    Reso.    M_0    R_2^*   \omega      M_0    R_2^*   \omega
    64       21.3   20.5    20.4        21.1   20.4    20.3
    128      44.3   23.6    23.9        44.4   23.6    24.0
    2x       15.3   15.4    16.3        15.3   15.4    16.3

After 76 iterations, the reduction rate of the cost function falls below 1% for the cubic convolution reconstruction; for the direct 64 x 64 reconstruction, this takes 118 iterations. Because the simulated signal was generated from 1024 x 1024 images while the reconstruction algorithms work at 64 x 64 or 128 x 128 resolution, this mismatch leads the reconstruction algorithm to converge to a solution that is not perfectly accurate. The results of this experiment indicate that the approximation error due to the polynomial model underlying the algorithm acceleration is far less than the discrete modeling error caused by the limited spatial resolution of the parameter maps.

Figure 2.14: True and reconstructed images from the noiseless experiments. All images are displayed at 128 x 128 resolution; only the pixels within the inscribed circle of the square were reconstructed. (a)(c)(e) are the true images; (b)(d)(f) are from the 64 x 64 reconstruction without interpolation, interpolated to 128 x 128 for display.

Figure 2.15: Reconstructed images from the noiseless experiments. All images are displayed at 128 x 128 resolution. (a)(c)(e) are from the 128 x 128 reconstruction without interpolation; (b)(d)(f) are reconstructed with 2x cubic convolution interpolation, so the interpolation coefficients are 64 x 64.
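A minimal MATLAB sketch of the rosette model (2.29) with the parameter values quoted above; the sample count and the plotting commands are illustrative.

    % Rosette trajectory of (2.29): k(t) = kmax*cos(w1*t)*exp(j*w2*t).
    dt   = 5.56e-6;                 % sampling interval (s)
    t    = (0:11999)*dt;            % 12,000 samples, about 66.7 ms of readout
    kmax = 2.819;  w1 = 5171.4;  w2 = -3334.8;
    k    = kmax*cos(w1*t).*exp(1i*w2*t);      % k(t) = kx(t) + j*ky(t)
    plot(real(k), imag(k)); axis equal; xlabel('k_x'); ylabel('k_y');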
2.6.3 Noisy Experiments

We added white Gaussian noise to the synthesized signal and reconstructed the images from the noisy signal. The SNR is defined by

    \mathrm{SNR} = \frac{\| s \|^2}{N \sigma^2}    (2.30)

where \sigma^2 is the noise variance. The images reconstructed from a noisy signal with 40 dB SNR are shown in Figure 2.16, and the NRMSE of the reconstructed images at different SNRs is listed in Table 2.4.

Table 2.4: NRMSE (%) of the noisy reconstructions
    SNR      Reso.   M_0    R_2^*   \omega
    30 dB    64      30.4   31.1    31.1
             128     82.3   36.9    35.5
             2x      28.0   29.7    27.4
    40 dB    64      23.1   22.6    23.0
             128     48.5   25.8    26.2
             2x      17.2   18.9    18.0
    50 dB    64      23.1   21.0    21.7
             128     45.1   24.3    24.7
             2x      15.4   16.7    16.6

We observe that even at 30 dB the noise-induced error is on the same order as the spatial-domain modeling error, and that the estimates degrade gracefully with decreasing SNR.

Figure 2.16: Images reconstructed from noisy signals (SNR 40 dB). All images have 128 x 128 resolution. (a)-(c) are from the 64 x 64 reconstruction without interpolation, interpolated to 128 x 128 for display; (d)-(f) are from the 128 x 128 reconstruction without interpolation; (g)-(i) are reconstructed with 2x cubic convolution interpolation, so the interpolation coefficients are 64 x 64.
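The following MATLAB fragment adds noise at a prescribed SNR under the definition (2.30); the 40 dB level matches the experiment above, while the signal s itself is assumed to have been synthesized already.

    % Add complex white Gaussian noise to a synthesized signal s at a given SNR,
    % with SNR = ||s||^2/(N*sigma^2) as in (2.30). s is assumed given.
    snr_db  = 40;
    N       = numel(s);
    sigma2  = norm(s(:))^2 / (N * 10^(snr_db/10));     % noise variance implied by the SNR
    noise   = sqrt(sigma2/2) * (randn(size(s)) + 1i*randn(size(s)));
    s_noisy = s + noise;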
2.6.4 Phantom and Animal Experiments

We applied the method to data collected in two phantom experiments on a 4.7 T, 60 cm vertical-bore Varian primate MRI system with a stripline-resonator quadrature head coil, using transverse cross-sectional rosette trajectories with a slice thickness of 2 mm. The phantom was constructed from an 8 cm diameter beaker containing four 1.6 cm diameter tubes. The tubes were filled with agarose gel and various concentrations of either CuSO4 or Sephadex beads to obtain different R_2^* values. The results of the first experiment are compared in Figure 2.17; the interpolation method yields both sharper edges and smoother regions inside the circles.

For the R_2^* parameter map, which is the quantity of interest in fMRI, the interpolation method gives better results: in (e), the small circles are clearer than in (b), especially the two circles in the lower part, and the artifacts are also less visible in (e).

Figure 2.17: Images reconstructed from the phantom experiment. All images have 128 x 128 resolution and are masked to remove exterior artifactual features. (a)(c)(e) are from the 64 x 64 reconstruction without interpolation, interpolated to 128 x 128 for display; (b)(d)(f) are reconstructed with 2x cubic convolution interpolation, so the interpolation coefficients are 64 x 64.

We also performed an experiment with a much larger frequency offset using the same type of phantom; the object was deliberately deshimmed to show the robustness of the interpolated method. The results are shown in Figure 2.18, where the improvement in the reconstruction of R_2^* is even more apparent.

Figure 2.18: Images reconstructed from the phantom experiment with a larger frequency offset. All images have 128 x 128 resolution. (a)(c)(e) are from the 64 x 64 reconstruction without interpolation, interpolated to 128 x 128 for display; (b)(d)(f) are reconstructed with 2x cubic convolution interpolation, so the interpolation coefficients are 64 x 64. Only the frequency map of the imaged object is shown in (e) and (f); the deliberately added frequency offset has been removed from these two panels. The artifactual features between the reconstruction mask (the inscribed circle of the square) and the imaged object remain, but they are not of interest.

The results of a monkey experiment are shown in Figure 2.19. Image data of a macaque monkey brain were obtained on the Varian system with a rosette trajectory; the sampling interval was 4.59 \mu s and the acquisition duration was 55.0 ms. Images were reconstructed using the proposed algorithm. In the interpolated image, the anatomic features are clearly visible. The frequency tends to be more homogeneous inside the brain, with rises or dips near the edges, as is often the case. The cerebral ventricles, which have a butterfly-like shape in this slice, can be seen in the frequency map. The interpolated image shows more high-frequency detail than the non-interpolated 64 x 64 image even though it does not require the estimation of more free parameters. In all experiments, the images of M_0 are normalized to 1.

Figure 2.19: Images reconstructed from the monkey experiment. All images have 128 x 128 resolution. (a)(c)(e) are from the 64 x 64 reconstruction without interpolation, interpolated to 128 x 128 for display; (b)(d)(f) are reconstructed with 2x cubic convolution interpolation, so the interpolation coefficients are 64 x 64. The R_2^* map in (c) is plotted with saturated values to maintain the same scale as (d) so that the details of (d) remain visible.

2.7 Conclusion

This chapter presents a new method to reconstruct local magnitude, decay and frequency at higher resolution from the limited data of a single-shot MRI signal. With interpolation, the reconstruction remains well constrained because the interpolation model avoids overparameterizing the solution.
Simulated data show that the interpolation method also gives numerically better results in reconstructing magnitude, decay and frequency, and the experimental data show a corresponding visual improvement. We applied a polynomial approximation of the exponential time function to reduce the computational complexity; computer simulations show no significant difference between the accelerated and unaccelerated methods.

Chapter 3

Nonuniform Fast Fourier Transform (NUFFT)

Unlike classical MRI sampling methods that sample the spatial frequency domain uniformly, SS-PARSE samples k-space nonuniformly. One approach that allows FFTs to be used in the reconstruction algorithm is to round a non-equally spaced frequency-domain trajectory onto an equally spaced grid, and oversampling can be used to reduce the error introduced by this rounding. The analysis of the reconstruction errors of the fast algorithm shows that the primary source of error is this trajectory gridding. A finer grid reduces the errors, but it makes the reconstruction much slower and requires far more memory. An algorithm that can evaluate Fourier samples accurately and quickly is therefore desirable. In this chapter, we discuss the implementation of fast algorithms for evaluating Fourier transforms (FTs). We focus on 1-D FTs; the results extend easily to multidimensional Fourier samples because of the separability of the Fourier kernel.

3.1 Theory of the NUFFT

3.1.1 Problem Statement

Let x = {x_{-N/2}, ..., x_{N/2-1}} be a finite sequence of complex numbers. The discrete Fourier transform (DFT) is defined by

    X_k = \sum_{n=-N/2}^{N/2-1} x_n \, e^{-j\frac{2\pi}{N} k n}    (3.1)

where N is a positive even integer and k = -N/2, ..., N/2-1. The frequency components 2\pi k/N are equally spaced, so (3.1) can be evaluated by an FFT, which requires O(N \log N) operations. We now extend this definition to nonuniformly spaced frequency components. Let \omega = {\omega_0, ..., \omega_{K-1}} be a finite sequence of real numbers with \omega_k \in [-\pi, \pi] for k = 0, ..., K-1. The Fourier transform of the finite sequence x evaluated at the frequencies \omega is

    X_k = \sum_{n=-N/2}^{N/2-1} x_n \, e^{-j n \omega_k}    (3.2)

Direct evaluation of (3.2) requires O(NK) operations. Our goal is to design an algorithm whose computational complexity is proportional to that of the FFT while meeting a required accuracy.
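As a baseline for later comparisons, the direct O(NK) evaluation of (3.2) can be written in a few lines of MATLAB; the sizes and random inputs below are illustrative.

    % Direct O(NK) evaluation of the nonuniform-frequency DFT (3.2).
    N = 32;  K = 48;                           % illustrative sizes
    x = randn(N,1) + 1i*randn(N,1);            % samples x_n, n = -N/2,...,N/2-1
    omega = (2*rand(K,1) - 1)*pi;              % nonuniform frequencies in [-pi, pi]
    n  = (-N/2 : N/2-1).';
    Xk = exp(-1i*omega*n.') * x;               % Xk(k) = sum_n x_n exp(-j*n*omega_k)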
3.1.2 Basic Concepts

[33] proved that any function of the form e^{jcx} can be accurately represented on any finite interval of the real line by a small number of terms of the form e^{bx^2} e^{jkx}, and that this number of terms is independent of the value of c.

Theorem 1 [33]. Let b > 1/2 and c, d > 0 be real numbers, and let m \geq 2 and q \geq 4 b \pi be integers. Then, for any x \in [-d, d],

    \Big| e^{jcx} - e^{b(x\pi/md)^2} \sum_{k=[cmd/\pi]-q/2}^{[cmd/\pi]+q/2} \rho_k \, e^{jkx\pi/md} \Big| < e^{-b\pi^2(1-1/m^2)} (4b + 9)    (3.3)

where

    \rho_k = \frac{1}{2\sqrt{b\pi}} \, e^{-(c-k)^2/(4b)}    (3.4)

This theorem can be written in a general form consistent with (3.2):

    E_{n,k} = \Big| e^{-j\omega_k n} - s_n^{-1} \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} g_l(\omega_k) \, e^{-j2\pi(v_k+l)n/(mN)} \Big| < \epsilon    (3.5)

where s_n^{-1} is a function of n, g_l(\omega_k) is a function of \omega_k, v_k = [\omega_k m N / (2\pi)], J \leq K is an integer, and \epsilon is a nonnegative real number. With this approximation, (3.2) can be evaluated by an FFT followed by interpolation in the transform domain, in two steps:

1. Compute an mN-point FFT of the weighted x_n:

    Y_k = \sum_{n=-N/2}^{N/2-1} s_n^{-1} x_n \, e^{-j\frac{2\pi}{mN} k n}    (3.6)

2. Approximate each X_k by a linear combination of the Y_k's:

    X_k \approx \hat{X}_k = \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} g_l(\omega_k) \, Y_{v_k+l}    (3.7)

The computational complexity of this algorithm is O(mN \log mN + JK). If we choose a constant scaling factor s = {s_{-N/2}, ..., s_{N/2-1}} and J = K, the method computes X_k exactly, but there is no computational gain. The performance of the approximation is determined by s and the weighting coefficients g(\omega_k). [34-36] proposed different methods to compute s and g based on different criteria; all of them first optimize g and then compute s from the optimized g. We present an algorithm that optimizes s and g simultaneously by least-squares approximation.

If the Fourier transform at a given set of frequencies is evaluated only once, the NUFFT scheme is not worth using because of the computational cost of optimizing the scaling factors and weighting coefficients. In some applications, however, such as iterative MRI reconstruction, the same set of frequencies is used at every iteration, and the same k-space trajectory (set of spatial frequencies) may also be reused across different MRI experiments. In both scenarios, the additional cost of the NUFFT precomputations can be afforded. Because of the periodicity of g_l(\omega_k), g_l only needs to be precomputed for several \omega_k in one period; the precomputed g_l are then used to interpolate the g_l for a new set of frequencies. Linear interpolation for K frequencies requires 2JK operations.
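Continuing the sketch above, the two-step evaluation (3.6)-(3.7) has the following structure in MATLAB. The scaling factors s and interpolation weights g would normally come from the least-squares design of the next section; the unit scaling and Gaussian weights used here are crude placeholders only, so the accuracy is far below what optimized coefficients achieve.

    % Two-step NUFFT of (3.6)-(3.7), reusing x, omega, n, N, K from the direct
    % evaluation sketch above. s and g are placeholders; in practice they are
    % the precomputed (e.g., LS-optimized) scaling and interpolation factors.
    m = 2;  J = 5;
    s = ones(N,1);                                  % placeholder scaling factors
    v = round(omega*m*N/(2*pi));                    % nearest oversampled bin for each omega_k
    delta = omega*m*N/(2*pi) - v;                   % fractional offsets
    g = exp(-(delta - (-(J-1)/2:(J-1)/2)).^2).';    % J-by-K placeholder (Gaussian) weights
    % Step 1: mN-point FFT of the scaled data, with x_n placed at index n (mod mN)
    z = zeros(m*N,1);
    z(mod(n, m*N) + 1) = x ./ s;
    Y = fft(z);                                     % Y_k of (3.6), periodic with period mN
    % Step 2: J-tap interpolation in the transform domain
    Xhat = zeros(K,1);
    for l = -(J-1)/2 : (J-1)/2
        Xhat = Xhat + g(l+(J+1)/2, :).' .* Y(mod(v+l, m*N) + 1);
    end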
3.2 Least-Squares Optimization

For a given sequence x_n, the NUFFT approximation error at \omega_k is

    | X_k - \hat{X}_k | = \Big| \sum_{n=-N/2}^{N/2-1} x_n e^{-jn\omega_k} - \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} g_l(\omega_k) \sum_{n=-N/2}^{N/2-1} s_n^{-1} x_n e^{-j2\pi(v_k+l)n/(mN)} \Big|
                        = \Big| \sum_{n=-N/2}^{N/2-1} x_n \Big[ e^{-jn\omega_k} - s_n^{-1} \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} g_l(\omega_k) e^{-j2\pi(v_k+l)n/(mN)} \Big] \Big|    (3.8)

By the Cauchy-Schwarz inequality [34],

    | X_k - \hat{X}_k |^2 \leq \sum_{n=-N/2}^{N/2-1} |x_n|^2 \; \sum_{n=-N/2}^{N/2-1} \Big| e^{-jn\omega_k} - s_n^{-1} \sum_{l} g_l(\omega_k) e^{-j2\pi(v_k+l)n/(mN)} \Big|^2    (3.9)

For all sequences x_n with unit norm, the maximum error at \omega_k is therefore

    \sum_{n=-N/2}^{N/2-1} E_{n,k}^2    (3.10)

where E_{n,k} is defined in (3.5). The total error over all \omega_k is

    \Phi = \sum_{k=0}^{K-1} \sum_{n=-N/2}^{N/2-1} E_{n,k}^2    (3.11)

Our goal is to minimize this total error with respect to s_n and g_l(\omega_k). The minimization can be written in matrix form as

    \arg\min_{S,G} \; \| B - S^{-1} A G \|_2^2    (3.12)

where A is an N x J matrix, G is a J x K matrix, S is an N x N diagonal matrix, and B is an N x K matrix, defined by

    A_{nl} = e^{-j2\pi n l/(mN)}, \qquad S_{nn} = s_n, \qquad B_{nk} = e^{-j\omega_k n} \, e^{-j2\pi(v_k - q/2)n/(mN)}

with n = -N/2, ..., N/2-1, l = 0, ..., J-1, and K the length of \omega.

Problem (3.12) is nonlinear. It can be approximated by the linear minimization problem

    \arg\min_{S,G} \; \| S B - A G \|_2^2    (3.13)

Obviously, S = 0, G = 0 solves (3.13); we impose s_0 = 1 to exclude this trivial solution. With this constraint, (3.13) is equivalent to a standard linear least-squares problem. Write G as a column vector g with G_{m,n} = g_{(n-1)J+m}, let b_i denote the ith column of B, and let d_{b_i} be the diagonal matrix with the elements of b_i on its diagonal. Define the sparse matrix

    F = \begin{bmatrix} d_{b_1} & -A & & & \\ d_{b_2} & & -A & & \\ \vdots & & & \ddots & \\ d_{b_K} & & & & -A \end{bmatrix}    (3.14)

and the column vector s composed of the s_n. Then the stacked columns of SB - AG satisfy

    SB - AG = F \begin{bmatrix} s \\ g \end{bmatrix}    (3.15)

Set s_0 = 1 and let -y denote the (N/2+1)th column of F, i.e., the column multiplying s_0. Removing this column from F gives a new matrix A, and stacking s (without s_0) and g gives the column vector x. Then (3.13) becomes the standard linear least-squares problem

    \arg\min_{x} \; \| A x - y \|_2^2    (3.16)

In Figures 3.1 and 3.2, we compare this "direct" solution with the min-max method, the best previously available method. The maximum error is defined as

    \epsilon_{max} = \max_k \left[ \frac{\sum_{n=-N/2}^{N/2-1} E_{n,k}^2}{N} \right]^{1/2}    (3.17)

The NUFFT accuracy can be improved further. We initialize s with the direct solution, a Kaiser-Bessel window, or a Gaussian function, and then solve for G:

    G = (A^H A)^{-1} A^H S B    (3.18)

We then use G to optimize s:

    s = (C^H C)^{-1} C^H Z    (3.19)

where Z is the NK x 1 column vector formed from AG and C = [d_{b_1}, ..., d_{b_K}]^T. If necessary, the updated s can be used to find a new optimal G. This method efficiently reduces the maximum error; its maximum error is compared with that of the min-max method in Figure 3.3.

Figure 3.1: Maximum error versus J for the direct (quadratic) solution and the min-max method (m = 2).

Figure 3.2: NRMS error versus J for the direct (quadratic) solution and the min-max method (m = 2).

Figure 3.3: Maximum error versus J for the iterative method and the min-max method.
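A minimal MATLAB sketch of the alternating refinement (3.18)-(3.19) follows. The construction of B uses one reading of the definitions above (offsets l = 0, ..., J-1 around v_k - (J-1)/2), the sizes and the number of sweeps are illustrative, and the s_0 = 1 normalization of (3.16) is omitted; this is a sketch of the idea rather than the dissertation's exact implementation.

    % Alternating least-squares refinement of the scaling factors s and the
    % interpolator G, cf. (3.18)-(3.19).
    N = 64;  m = 2;  J = 6;  K = 50;
    n = (-N/2 : N/2-1).';
    omega = (2*rand(K,1) - 1)*pi;                 % frequencies to be approximated
    v = round(omega*m*N/(2*pi));
    A = exp(-2i*pi*n*(0:J-1)/(m*N));              % N-by-J Fourier-kernel matrix
    B = exp(-1i*n*omega.') .* exp(2i*pi*n*(v.' - (J-1)/2)/(m*N));   % N-by-K target
    s = ones(N,1);                                % nontrivial initialization
    for it = 1:5
        G = (A'*A) \ (A'*(s.*B));                 % (3.18): LS update of G for fixed s
        Z = A*G;                                  % current approximation of diag(s)*B
        s = sum(Z.*conj(B), 2) ./ sum(abs(B).^2, 2);   % (3.19): row-wise LS update of s
    end
    err = norm(s.*B - A*G, 'fro') / norm(s.*B, 'fro'); % residual of the linearized problem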
3.3 Interpolations

In Section 3.2 we developed the approximation for a set of specific frequencies, and this approximation gives higher accuracy than the other methods. In some applications, one can instead use precomputed g_l(\omega_k) to obtain the g_l for new frequencies by interpolation. In this section, we analyze the errors for two kinds of interpolation: linear and cubic convolution.

Let f be a real number with |f| \leq 1/m, and let \omega = 2\pi f/N. Then

    e^{-j2\pi n f/N} \approx s_n^{-1} \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} g_l(f) \, e^{-j2\pi n l/(mN)}    (3.20)

It can be shown that g_l(f) is periodic with period 1/m:

    e^{-j2\pi n (f + k/m)/N} \approx s_n^{-1} e^{-j2\pi n k/(mN)} \sum_{l} g_l(f) e^{-j2\pi n l/(mN)} = s_n^{-1} \sum_{l} g_l(f) e^{-j2\pi n (l+k)/(mN)}    (3.21)

where k is an integer. With this property, we only need to study the NUFFT within one period. We use the g_l(f_k) precomputed at a set of frequencies f_k to compute g_l(x) at any other frequency x by interpolation:

    g_l(x) = \sum_k g_l(f_k) \, u\!\left(\frac{x}{h} - k\right)    (3.22)

where u(x) is the interpolation kernel. There are several candidates for the interpolation kernel; we use the linear spline and cubic convolution as examples. The linear kernel (3.23) and the cubic kernel (3.24) are plotted in Figures 3.4 and 3.5, respectively.

    u(x) = \begin{cases} 1 - |x|, & |x| < 1 \\ 0, & \text{otherwise} \end{cases}    (3.23)

    u(x) = \begin{cases} \tfrac{3}{2}|x|^3 - \tfrac{5}{2}|x|^2 + 1, & 0 \leq |x| < 1 \\ -\tfrac{1}{2}|x|^3 + \tfrac{5}{2}|x|^2 - 4|x| + 2, & 1 \leq |x| < 2 \\ 0, & 2 \leq |x| \end{cases}    (3.24)

Figure 3.4: Linear interpolation kernel.

Figure 3.5: Cubic convolution interpolation kernel.

Let K be a positive integer, J an odd number, and h = 1/(mK). We choose the frequency set

    f = \{ f_k = -\tfrac{1}{2m} + k h, \; k = -1, 0, ..., K+1 \}

so that f has K + 3 elements. We compute the interpolator coefficients G(f) and the corresponding cubic convolution coefficients c_l(k); for any frequency x \in [-1/(2m), 1/(2m)], we then use cubic convolution interpolation to compute g_l(x). For even J, we choose a different f to make g_l(x) continuous:

    f = \{ f_k = k h, \; k = -1, 0, ..., K+1 \}

In both cases, we set v_k = 0 for all k. For linear interpolation, we choose a slightly different f:

    f = \{ f_k = -\tfrac{1}{2m} + k h, \; k = 0, ..., K \}

for odd J, and

    f = \{ f_k = k h, \; k = 0, ..., K \}

for even J.

3.4 Error Analysis of the Interpolation

For odd J, the normalized root mean square error (NRMSE) is computed by

    \epsilon^2 = mN \sum_{n=-N/2}^{N/2-1} \int_x \Big| e^{-j2\pi n x/N} - s_n^{-1} \sum_{l} g_l(x) e^{-j2\pi n l/(mN)} \Big|^2 dx
               = mN \sum_{n=-N/2}^{N/2-1} \sum_k \int_{f_k}^{f_{k+1}} \Big| e^{-j2\pi n x/N} - s_n^{-1} \sum_{l} g_l(x) e^{-j2\pi n l/(mN)} \Big|^2 dx    (3.25)

For odd J, the integration interval is x \in [-1/(2m), 1/(2m)]; for even J, it is x \in [0, 1/m]. The error performance of the linear and cubic convolution interpolations is illustrated in Figures 3.6-3.9 for m = 2, and Figure 3.10 compares the two directly. For J \geq 8, linear interpolation cannot approach the accuracy of the LS optimization because the performance is dominated by the accuracy of the linear interpolator. For cubic convolution interpolation, the error of the LS optimization plays the major role, so the performance can approach the attainable limit when more precomputed frequencies are used.

Figure 3.6: NRMSE versus J for different numbers of precomputed frequencies, linear interpolation. "Exact" is the NRMSE at the precomputed frequencies themselves, i.e., the accuracy limit of the method described in this chapter.

Figure 3.7: NRMSE versus the number of precomputed frequencies K for different J, linear interpolation.

Figure 3.8: NRMSE versus J for different numbers of precomputed frequencies, cubic convolution interpolation. "Exact" is the accuracy limit of the method described in this chapter.

Figure 3.9: NRMSE versus the number of precomputed frequencies K for different J, cubic convolution interpolation.

Figure 3.10: Comparison of linear (K = 960) and cubic convolution (K = 120) interpolation. "Exact" is the accuracy limit of the method described in this chapter.
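The two kernels (3.23) and (3.24) can be written as vectorized MATLAB functions; the usage comment for (3.22) is schematic, with the spacing h, the node indices and the precomputed values assumed given.

    % Interpolation kernels (3.23) (linear) and (3.24) (cubic convolution,
    % Keys' kernel with a = -1/2), as vectorized anonymous functions.
    u_lin = @(x) max(1 - abs(x), 0);
    u_cub = @(x) (1.5*abs(x).^3 - 2.5*abs(x).^2 + 1)           .* (abs(x) < 1) ...
               + (-0.5*abs(x).^3 + 2.5*abs(x).^2 - 4*abs(x) + 2) .* (abs(x) >= 1 & abs(x) < 2);
    % Schematic use in (3.22): with precomputed nodes f_k = k*h, node indices
    % kidx and values glk, the interpolated coefficient at frequency xq is
    %   gl_xq = sum( glk .* u_cub(xq/h - kidx) );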
3.5 Inverse Fourier Transform

We use "inverse" to denote the Fourier transform with nonuniform inputs and uniform outputs. The inverse FT is defined as

    x_n = \sum_{k=0}^{K-1} X_k \, e^{jn\omega_k}    (3.26)

An inverse FT can be approximated by

    x_n \approx \sum_{k=0}^{K-1} X_k \, s_n^{-1} \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} g_l(\omega_k) \, e^{j2\pi n (v_k+l)/(mN)}
        = s_n^{-1} \sum_{k=0}^{K-1} \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} g_l(\omega_k) X_k \, e^{j2\pi n (v_k+l)/(mN)}    (3.27)

The inverse NUFFT procedure is summarized as follows:

1. Weight each X_k by its interpolation coefficients and accumulate the products onto the oversampled grid, B_{v_k+l} \leftarrow B_{v_k+l} + g_l(\omega_k) X_k for l = -\lfloor (J-1)/2 \rfloor, ..., \lfloor (J-1)/2 \rfloor and k = 0, ..., K-1. This requires JK operations.

2. Compute an mN-point FFT of the gridded data: y_n = \sum_{q} B_q \, e^{j2\pi n q/(mN)}.

3. Weight y_n by s_n^{-1}: x_n = s_n^{-1} y_n.
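A MATLAB sketch of the spread-transform-scale structure just described, reusing the conventions of the earlier sketches in this chapter (omega, s, g, J, m, N and the nonuniform samples Xk are assumed to be available):

    % Sketch of the inverse NUFFT: spread the nonuniform samples onto the
    % oversampled grid, take one mN-point transform, then undo the scaling.
    v = round(omega*m*N/(2*pi));
    B = zeros(m*N, 1);
    for l = -(J-1)/2 : (J-1)/2                      % step 1: weighted spreading, JK operations
        idx = mod(v + l, m*N) + 1;
        B   = B + accumarray(idx, g(l+(J+1)/2, :).' .* Xk, [m*N, 1]);
    end
    y  = m*N * ifft(B);                             % step 2: y_n = sum_q B_q exp(+j*2*pi*n*q/(mN))
    n  = (-N/2 : N/2-1).';
    xn = y(mod(n, m*N) + 1) ./ s;                   % step 3: keep n = -N/2..N/2-1, divide by s_n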
3.6 Discussion

For a given set of K frequencies, computing g_l and s_n for an N-point transform requires 2JNK operations, so it is only economical to do this for a set of frequencies that is used repeatedly. Once g_l and s_n are available, each Fourier transform requires O(mN \log mN + JK) operations.

With precomputed g_l and s_n, one can use linear or cubic interpolation to evaluate g_l for the given K frequencies. Linear interpolation requires 2JK operations, for a total computational complexity of O(mN \log mN) + 2JK, so it needs fewer computations than cubic interpolation. Its disadvantage is that its NRMSE hits a floor of about 10^{-7} set by the interpolation error, and for hardware implementations the storage of the precomputed data for a larger set of frequencies can also be a problem. Its advantage appears for J \leq 5, where about 100 precomputed frequencies reach the best possible performance.

Cubic convolution interpolation requires more computation: the interpolation needs 4JK operations, for a total of O(mN \log mN) + 4JK. The cubic algorithm provides much higher accuracy than the linear algorithm and also requires much less storage, because fewer precomputed points are needed for the same accuracy. This feature is useful for some hardware implementations, and possibly also for software, even when memory is plentiful, because memory access can be the bottleneck.

Chapter 4

Reconstruction of Parallel MRI

4.1 Introduction

The concept of parallel MRI was first suggested by Carlson [37], but his work was largely unknown until 2004 [38]. The introduction of simultaneous acquisition of spatial harmonics (SMASH) [39] started the age of parallel MRI. Conventional Fourier MRI is modeled by

    y(t) = \int M(x) \, e^{-j k(t) \cdot x} \, dx

A parallel MRI system has more than one RF coil, and each coil has a different receiving property called its localized sensitivity. Let C_k(x) be the localized sensitivity of the kth coil, k = 1, ..., K. The received signal at the kth coil can be modeled by

    y_k(t) = \int C_k(x) M(x) \, e^{-j k(t) \cdot x} \, dx + \eta_k(t)    (4.1)

where \eta_k(t) is additive white Gaussian noise with zero mean and variance \sigma_\eta^2.

[40] summarized the advantages of parallel imaging:

- faster imaging;
- higher spatial resolution;
- improved image quality of single-shot, turbo-spin-echo or echo-planar acquisitions, by shortening the echo train whenever it is severely affected by signal decay and field inhomogeneities;
- a complement to the increased SNR and a compensation for the growing specific absorption rate and increasing geometric distortions at high and ultra-high field strengths.

The SNR (signal-to-noise ratio) can be increased with array coils within the same imaging time, and array coils can also be used for partially parallel acquisition (PPA) [41]. In this chapter, we extend the parallel MRI model (4.1) with the concept of SS-PARSE.

In current reconstruction algorithms, the coil sensitivities C_k(x) are found by placing a known object in the parallel MRI system and measuring the spatial sensitivities directly. This is convenient but inaccurate, because each object has its own properties that interact with the coil responses, so different objects produce different effective coil sensitivities. We therefore bring the concept of SS-PARSE into parallel MRI; we call this extension single-shot parallel PARSE (SS-pPARSE). The received signal at the kth coil is modeled by

    y_k(t) = \int C_k(x) M_0(x) \, e^{-[R_2^*(x) + j\omega(x)] t} \, e^{-j k(t) \cdot x} \, dx + \eta_k(t)    (4.2)

The goal of SS-pPARSE is to reconstruct C_k(x), M_0(x), R_2^*(x) and \omega(x) from the observed signals y_k(t), k = 1, ..., K.

4.2 Reconstruction

4.2.1 Extension of SS-PARSE

Discretizing (4.2) on the spatial (x, y) grid indexed by i gives

    y_k(n) = \underbrace{\sum_i C_{k,i} M_{0i} \, e^{nW_i} e^{-j k_n \cdot x_i}}_{\hat{y}_k} + \eta_k(n)    (4.3)

where n = 1, ..., N, W_i = -[R_{2i}^* + j\omega_i]\Delta t, k_n = k(n\Delta t), and \Delta t is the sampling interval. We use the method discussed in Chapter 2 to solve simultaneously for C_k, M_0 and W, using an iterative conjugate-gradient algorithm to minimize the cost function
    J(z) = \sum_{k=1}^{K} \| y_k - \hat{y}_k(z) \|^2    (4.4)

with respect to z, where z = {C_k, M_0, W, \; k = 1, ..., K}. The gradient \nabla_z J has three parts:

    \frac{\partial J}{\partial M_0} = \sum_{k=1}^{K} C_k \sum_{n=1}^{N} f_k(n) \, e^{nW} e^{-j k_n \cdot x}    (4.5)

    \frac{\partial J}{\partial W} = M_0 \sum_{k=1}^{K} C_k \sum_{n=1}^{N} n f_k(n) \, e^{nW} e^{-j k_n \cdot x}    (4.6)

    \frac{\partial J}{\partial C_k} = M_0 \sum_{n=1}^{N} f_k(n) \, e^{nW} e^{-j k_n \cdot x}    (4.7)

where f_k(n) = [\hat{y}_k(n) - y_k(n)]^*. We apply the polynomial approximation of the exponential time function from Chapter 2 to compute the cost function and gradient for parallel MRI:

    y_k(n) \approx e^{jn\omega_0} \sum_{l=0}^{L-1} n^l \sum_{i=1}^{K} C_{k,i} M_{0i} Z_i(l) \, e^{-j k_n \cdot x_i}    (4.8)

    \frac{\partial J}{\partial M_0} \approx \sum_{k=1}^{K} C_k \sum_{l=0}^{L-1} Z(l) \sum_{n=1}^{N} f_k(n) \, e^{jn\omega_0} n^l e^{-j k_n \cdot x}    (4.9)

    \frac{\partial J}{\partial W} \approx M_0 \sum_{k=1}^{K} C_k \sum_{l=1}^{L} Z(l) \sum_{n=1}^{N} f_k(n) \, e^{jn\omega_0} n^l e^{-j k_n \cdot x}    (4.10)

    \frac{\partial J}{\partial C_k} \approx M_0 \sum_{l=0}^{L-1} Z(l) \sum_{n=1}^{N} f_k(n) \, e^{jn\omega_0} n^l e^{-j k_n \cdot x}    (4.11)

Z and \omega_0 are defined in Chapter 2. The reconstruction algorithm based on cubic interpolation can also be applied to the reconstruction in parallel MRI.

4.2.2 Initialization

The first step in the SS-pPARSE reconstruction is to find initial values of C_k(x), M_0(x), R_2^*(x) and \omega(x) for the conjugate-gradient algorithm described above. We use a different mathematical model to find the initial values: we assume that there are K different magnitude maps M_1, ..., M_K. Under this assumption, y_k is modeled by

    y_k(n) = \underbrace{\sum_i M_{k,i} \, e^{nW_i} e^{-j k_n \cdot x_i}}_{\hat{y}_k} + \eta_k(n)    (4.12)

In the cost function (4.4), the unknown is now z = {M_k, W, \; k = 1, ..., K}, and the gradients are

    \frac{\partial J}{\partial M_k} = \sum_{n=1}^{N} f_k(n) \, e^{nW} e^{-j k_n \cdot x}    (4.13)

    \frac{\partial J}{\partial W} = \sum_{k=1}^{K} M_k \sum_{n=1}^{N} n f_k(n) \, e^{nW} e^{-j k_n \cdot x}    (4.14)

With the polynomial approximation, the estimated signals and gradients are evaluated by

    y_k(n) \approx e^{jn\omega_0} \sum_{l=0}^{L-1} n^l \sum_{i=1}^{K} M_{k,i} Z_i(l) \, e^{-j k_n \cdot x_i}    (4.15)

    \frac{\partial J}{\partial M_k} \approx \sum_{l=0}^{L-1} Z(l) \sum_{n=1}^{N} f_k(n) \, e^{jn\omega_0} n^l e^{-j k_n \cdot x}    (4.16)

    \frac{\partial J}{\partial W} \approx \sum_{k=1}^{K} M_k \sum_{l=1}^{L} Z(l) \sum_{n=1}^{N} f_k(n) \, e^{jn\omega_0} n^l e^{-j k_n \cdot x}    (4.17)

After finding the M_k, we use their root mean square as the initial value of M_0,

    M_0 = \sqrt{ \frac{ \sum_{k=1}^{K} |M_k|^2 }{K} }    (4.18)

and the coil sensitivities are computed by

    C_k = \frac{M_k}{M_0}, \qquad k = 1, ..., K    (4.19)

We also apply the interpolation method when computing the initial conditions. Because interpolation is used, the algorithm of Section 4.2.1 is initialized with interpolation coefficients rather than with C_k, M_k and W directly: we use the interpolation coefficients associated with the reconstructed M_k, C_k and W to initialize the reconstruction algorithm.

4.2.3 Regularization

We use regularization to improve the reconstruction performance. The regularization operation is defined as

    R(x) = H * x * H^T    (4.20)

where * denotes convolution. The regularization kernel H is

    H = \begin{bmatrix} 0 & 0 & 0 \\ -1 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix}    (4.21)

With the regularization terms, the cost function (4.4) becomes

    J(z) = \sum_{k=1}^{K} \| y_k - \hat{y}_k(z) \|^2 + \lambda \|R(M_0)\|^2 + \beta \sum_{k=1}^{K} \|R(C_k)\|^2 + \lambda_R \|R(R_2^*)\|^2 + \lambda_I \|R(\omega)\|^2    (4.22)

where \lambda, \beta, \lambda_R and \lambda_I are nonnegative real numbers. The corresponding gradient \nabla J includes terms of the form

    \frac{1}{2} \nabla \|R(x)\|^2 = A * x    (4.23)

where

    A = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -4 & 0 & 0 \\ 1 & -4 & 12 & -4 & 1 \\ 0 & 0 & -4 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{bmatrix}    (4.24)
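The following MATLAB sketch evaluates a roughness penalty and its gradient in a way consistent with the kernels H in (4.21) and A in (4.24), reading R as second differences applied along each image axis. The map x, the weight lambda, and the zero-padded 'same' boundary handling are illustrative assumptions rather than the dissertation's implementation.

    % Quadratic roughness penalty and gradient consistent with (4.21)-(4.24).
    % x stands for any parameter map (illustrative data).
    x      = rand(128, 128);
    lambda = 1;
    h      = [-1 2 -1];                              % 1-D second-difference kernel from H
    Rx     = conv2(x, h,   'same');                  % second differences along rows
    Ry     = conv2(x, h.', 'same');                  % second differences along columns
    penalty = 0.5*lambda*(norm(Rx,'fro')^2 + norm(Ry,'fro')^2);
    A5 = [0 0 1 0 0; 0 0 -4 0 0; 1 -4 12 -4 1; 0 0 -4 0 0; 0 0 1 0 0];  % kernel A of (4.24)
    grad = lambda*conv2(x, A5, 'same');              % gradient of the penalty, cf. (4.23)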
4.3 Simulation

We synthesized the simulation data from the images in Figures 4.1 and 4.2, using Eq. (4.3) to generate the synthetic signals. The image resolution for the synthesis was 1024 x 1024, and the FOV was assumed to be 12.8 cm, with both the x-axis and the y-axis defined on [-6.4, 6.4]. Four receiving coils were placed at the four corners of the FOV, with coil sensitivities described by

    C(x, y) = \left[ (x - x_0)^2 + (y - y_0)^2 \right]^{-\gamma}    (4.25)

In the simulation, x_0 = y_0 = 6.8 and \gamma = 1/4. The SNR is defined as

    \mathrm{SNR} = \frac{\sum_{k=1}^{K} \|\hat{y}_k\|^2}{M N \sigma_\eta^2}    (4.26)

Figure 4.1: The magnitude, decay and field map used to synthesize the simulation data. All images are displayed at 256 x 256 resolution; M_0 is normalized to 1.

Figure 4.2: The coil sensitivity maps used to synthesize the simulation data. All images are displayed at 256 x 256 resolution. The unconstrained area, in which M_0 is zero, is displayed as zero. All maps are normalized to 1.

4.3.1 Interpolation of Coil Sensitivity

In all of the simulations, 128 x 128 was used as the image resolution. We used 2x interpolation for the computation of the initial conditions, so the coefficients of M_k and W were 64 x 64. In the joint reconstruction of M_0, W and C_k, 64 x 64 resolution was also used for the coefficients of M_0 and W. Since the C_k are very spatially smooth, we use a lower resolution, i.e., a larger interpolation factor, for their coefficients. We used cubic convolution interpolation for all of the estimated parameters. For I x interpolation, the sampling interval is h = 1/I, and the 1-D interpolation vector is

    u(nh), \qquad n = 0, \pm 1, ..., \pm(3/h - 1)    (4.27)

where u(x) is defined by (2.9). We experimentally compared the reconstruction accuracy of different interpolation factors for the coil sensitivity in Table 4.1, using signals with 30 dB SNR.

Table 4.1: NRMSE (%) for different coil sensitivity interpolation factors
    Interpolation   C_1    C_2    C_3    C_4    M_0    R_2^*   \omega
    2x              29.2   36.8   36.6   30.3   29.8   14.4    17.9
    4x              8.7    8.7    7.5    7.2    28.3   14.4    17.9
    8x              6.9    8.3    7.5    6.7    29.5   14.4    17.9
    16x             10.7   10.4   6.9    7.2    30.2   14.4    17.9

The C_1 maps reconstructed with the different interpolation factors are illustrated in Figure 4.3.

Figure 4.3: The coil sensitivity C_1 reconstructed with different interpolation factors (2x, 4x, 8x, 16x). All images are displayed at 128 x 128 resolution. The artifacts in the unconstrained area, in which M_0 is zero, are removed. All maps are normalized to 1. No regularization is used.
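A sketch of the coil-sensitivity model (4.25) used to synthesize the data; the grid size and the assumption that the remaining three coils sit at mirrored corner positions are illustrative.

    % Coil sensitivities per (4.25): C(x,y) = ((x-x0)^2 + (y-y0)^2)^(-gamma),
    % with x0 = y0 = 6.8 and gamma = 1/4 for one corner coil; the other three
    % corner positions are assumed mirrored. Grid size is illustrative.
    npix = 128;  gam = 1/4;
    ax   = linspace(-6.4, 6.4, npix);                % 12.8 cm FOV
    [X, Y] = meshgrid(ax, ax);
    corners = [6.8 6.8; -6.8 6.8; 6.8 -6.8; -6.8 -6.8];
    C = zeros(npix, npix, 4);
    for k = 1:4
        Ck = ((X - corners(k,1)).^2 + (Y - corners(k,2)).^2).^(-gam);
        C(:,:,k) = Ck / max(Ck(:));                  % normalize each map to 1, as in Fig. 4.2
    end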
4.3.2 Regularized Reconstruction

We applied regularization to this reconstruction. Because the field map \omega is not spatially smooth enough, the experiments show that regularization does not improve its reconstruction accuracy. We empirically selected the regularization parameters \lambda, \beta and \lambda_R. Table 4.2 compares the reconstruction performance for different combinations of regularization coefficients; the interpolation factor for the coil sensitivity was 8, and the interpolation factors of M_0, R_2^* and \omega were all 2. A set of coil sensitivities and images from the regularized reconstruction is displayed in Figures 4.6 and 4.7. Table 4.2 shows that the selection of the regularization parameter for one set of variables has little impact on the reconstruction accuracy of the other sets of variables; for example, the selection of \lambda, the regularization parameter of M_0, is almost unrelated to the reconstruction accuracy of R_2^* and \omega.

Table 4.2: NRMSE (%) for different regularization coefficients
    \lambda       \lambda_R     \beta         C_1    C_2   C_3   C_4   M_0    R_2^*   \omega
    0             0             0             7.0    8.3   7.5   6.7   29.5   14.4    17.9
    0             0             1.09 x 10^6   7.7    7.9   6.9   7.6   29.0   14.4    17.9
    0             1.75 x 10^8   0             8.6    6.8   6.7   9.3   28.7   4.6     17.9
    0             1.75 x 10^8   1.09 x 10^6   10.6   8.9   8.2   8.5   28.8   5.5     17.9
    5.5 x 10^7    0             0             10.8   8.5   6.4   9.6   13.7   14.4    17.9
    5.5 x 10^7    0             1.09 x 10^6   9.4    8.4   6.4   9.8   13.6   14.4    17.9
    5.5 x 10^7    1.75 x 10^8   0             11.5   8.9   6.6   9.5   13.7   4.0     17.9
    5.5 x 10^7    1.75 x 10^8   1.09 x 10^6   9.5    8.4   6.5   9.5   13.6   3.9     17.9

Figure 4.4: The magnitude, decay and field map reconstructed from signals with 30 dB SNR. All images are displayed at 128 x 128 resolution. The artifacts in the unconstrained area, in which M_0 is zero, are removed. M_0 is normalized to 1. No regularization is used.

Figure 4.5: The coil sensitivity maps reconstructed from signals with 30 dB SNR. All images are displayed at 128 x 128 resolution. The artifacts in the unconstrained area, in which M_0 is zero, are removed. All maps are normalized to 1. No regularization is used.

Figure 4.6: The magnitude, decay and field map reconstructed from signals with 30 dB SNR, with regularization parameters \lambda = 5.5 x 10^7, \lambda_R = 1.75 x 10^8 and \lambda_I = 0. All images are displayed at 128 x 128 resolution. The artifacts in the unconstrained area, in which M_0 is zero, are removed. M_0 is normalized to 1.

Figure 4.7: The coil sensitivity maps reconstructed from signals with 30 dB SNR, with regularization parameter \beta = 1.09 x 10^6. All images are displayed at 128 x 128 resolution. The artifacts in the unconstrained area, in which M_0 is zero, are removed. All maps are normalized to 1.

4.4 Human Experiment

We applied the reconstruction method to human brain data acquired on a Siemens Tim Trio 3 T MRI system located at the Department of Neuroscience of Brown University. The major parameters of this experiment are listed in Table 4.3, and the rosette trajectory used in the experiment is plotted in Figure 4.8.

Table 4.3: Parameters of the human experiment
    Parameter                      Value      Unit
    sampling interval (\Delta t)   5.0        \mu s
    FOV                            22.0       cm
    readout duration               56.8       ms
    magnetic field                 3.0        T
    trajectory                     rosette    -

In the reconstruction, we used the signals from two coils. Since the simulation experiments show that 4x and 8x interpolation of the coil sensitivities give similar results, we applied both of these interpolation levels.
Figure 4.8: The rosette trajectory used in the human experiment; only the first half of the trajectory is plotted.

For both cases, M_0, R_2^* and \omega were reconstructed with 2x interpolation. The recovered images are shown in Figures 4.9 and 4.10. We also interpolated the coil sensitivity with a cubic spline function; the magnitude, decay and field map were still interpolated with the cubic convolution function because of its advantage for the exponential time function demonstrated in Chapter 2. Those results are shown in Figures 4.11 and 4.12.

The reconstructed images show that the proposed algorithm can produce realistic results. The artifacts on the edges of the coil sensitivity maps, R_2^* and \omega are caused by zero or near-zero M_0 values at those locations. The 4x and 8x coil sensitivity interpolations reconstruct similar M_0, R_2^* and \omega, but the coil sensitivity maps from the 8x interpolation are better. We see some artifacts at the upper-left and lower-left corners. These artifacts may be caused by the locations of the receiving coils, because the two coils were located at the upper-right and lower-right corners; two additional coils at the upper-left and lower-left corners might correct this problem. Another possible cause is system bias, which could be removed by tuning the reconstruction parameters.

4.5 Conclusion

In this chapter, we extended the cubic convolution interpolation, the quadratic line search, the polynomial approximation of the exponential time function, and the nonuniform FFT from single-coil SS-PARSE to multiple-coil SS-PARSE. Because the reconstruction complexity is proportional to the number of receiving coils, the computational speedup of the fast approach of Chapter 2 is even more significant for the multiple-coil system.

Our simulations show that regularization can improve the reconstruction performance of some of the parameters, while the frequency map is insensitive to regularization; the best results came from regularizing M_0, R_2^* and C_k. Because we lack a gold standard for the human brain experiment, we did not use regularization there. We applied both cubic convolution and cubic spline interpolation for the coil sensitivity model and obtained similar results in the human experiment. In our simulations, however, the spline model could not reach the minimum error with the quadratic line search, forcing us to resort to a golden-section search that takes much longer; even though we did not observe this behavior in the human experiment, we still believe it is safer to use cubic convolution.

Figure 4.9: The magnitude, decay and field map reconstructed from the human brain experiment, with cubic convolution interpolation used to model the coil sensitivity. All images are displayed at 128 x 128 resolution. (a)(c)(e) are from 4x interpolation of the coil sensitivity; (b)(d)(f) are from 8x interpolation. M_0 is normalized to 1. Most of the artifacts outside of the head were removed. No regularization was used.
Figure 4.10: The coil sensitivity maps reconstructed from the human brain experiment, with cubic convolution interpolation used to model the coil sensitivity. All images are displayed at 128 x 128 resolution. (a)(c) are from 4x interpolation; (b)(d) are from 8x interpolation. Most of the artifacts outside of the head were removed. All maps are normalized to 1. No regularization was used.

Figure 4.11: The magnitude, decay and field map reconstructed from the human brain experiment, with cubic spline interpolation used to model the coil sensitivity. All images are displayed at 128 x 128 resolution. (a)(c)(e) are from 4x interpolation of the coil sensitivity; (b)(d)(f) are from 8x interpolation. Most of the artifacts outside of the head were removed. M_0 is normalized to 1. No regularization was used.

Figure 4.12: The coil sensitivity maps reconstructed from the human brain experiment, with cubic spline interpolation used to model the coil sensitivity. All images are displayed at 128 x 128 resolution. (a)(c) are from 4x interpolation; (b)(d) are from 8x interpolation. Most of the artifacts outside of the head were removed. All maps are normalized to 1. No regularization was used.

Chapter 5

Conclusion

5.1 Summary of the Contributions of this Thesis

In this thesis, a new approach is proposed for SS-PARSE reconstruction. Its major parts are:

1. An interpolated reconstruction method. This method can reconstruct higher-resolution images that show higher-frequency features while the reconstruction remains well constrained. A phantom experiment showed that the interpolated reconstruction is much better than the non-interpolated reconstruction in an environment with larger field inhomogeneity; this could also be useful for high-field MRI, where field inhomogeneity is more severe.

2. The observation that the cubic convolution interpolator yields significantly less blurred R_2^* maps than the cubic spline interpolator, because of its zero-crossings at the neighboring sample locations.

3. A polynomial approximation of the exponential time function that linearizes the reconstruction. This approximation is convenient for dynamic estimation of R_2^* and the field map, because the optimized polynomial coefficients need not be changed when R_2^* and the field map are updated. With this approximation, the reconstruction is converted into a form built from Fourier transforms, so the FFT can be used.

4. A quadratic approximation of the line search, which greatly reduces the computation of the line search.

Motivated by the non-Cartesian trajectories used by SS-PARSE, we also improved the nonuniform FFT (NUFFT). The pre-weighting scaling factors and the interpolation coefficients are the two critical issues in the NUFFT.
Existing methods first optimize the interpolation and then compute the scaling factors from the optimized interpolation. We linearized the problem so that optimal scaling factors and interpolation coefficients are found simultaneously, which improved the precision of the NUFFT.

The framework was extended to parallel imaging. We validated the method with simulated data and also applied it to a human brain experiment.

5.2 Future Work

In Chapter 4, we observed that regularization can further improve the reconstruction performance, but we chose the regularization parameters and kernel empirically. A mathematical method to optimize the parameters and the kernel is needed; generalized cross-validation [42] is a possible candidate. We tried this method tentatively without satisfactory results, so this work needs to be refined or a different method chosen.

The proposed methods were mathematically validated on simulated data; validation by human and animal experiments is needed. The methods suggested in this dissertation reconstructed realistic images from these experiments, but we do not have a quantitative error analysis because of the lack of a "gold standard." With a gold standard from a human experiment, we would be able to refine some reconstruction parameters.

We tested the cubic convolution and cubic spline interpolations; other interpolations may be better suited to this problem. With interpolation, we have a continuous model instead of a discrete model, but because of the insurmountable computational obstacle we can only solve the problem at a higher discrete resolution. A possible future work is to find a continuous reconstruction algorithm that is within the capability of today's computing technology; the core of such an algorithm is still how to deal with the problematic exponential time function.

In the NUFFT, we compute an mN-point FFT, and m = 2 is the best compromise. Using two scaling factors instead of one may give a better approximation. Optimization of the NUFFT approximation requires the solution of an overdetermined problem: for a given set of K frequencies, we have NK equations but only JK + N variables. With two scaling factors, we would have an additional N variables, and more degrees of freedom could produce more accurate results.

Bibliography

[1] J. Hennig, O. Speck, M. A. Koch, and C. Weiller, "Functional Magnetic Resonance Imaging: A Review of Methodological Aspects and Clinical Applications," J. Mag. Reson. Imaging, vol. 18, pp. 1-15, Jul 2003.

[2] S. Ogawa, T. Lee, A. Nayak, and P. Glynn, "Oxygenation-Sensitive Contrast in Magnetic Resonance Image of Rodent Brain at High Magnetic Fields," Mag. Reson. Med., vol. 14, no. 1, pp. 68-78, 1990.

[3] K. Kwong, J. Belliveau, D. Chesler, I. Goldberg, R. Weisskoff, B. Poncelet, D. Kennedy, B. Hoppel, M. Cohen, R. Turner, et al., "Dynamic Magnetic Resonance Imaging of Human Brain Activity During Primary Sensory Stimulation," Proceedings of the National Academy of Sciences, vol. 89, no. 12, pp. 5675-5679, 1992.

[4] F. Bloch, W. Hansen, and M. Packard, "Nuclear Induction," Physical Review, vol. 70, no. 7-8, pp. 460-474, 1946.

[5] E. Purcell, H. Torrey, and R. Pound, "Resonance Absorption by Nuclear Magnetic Moments in a Solid," Physical Review, vol. 69, no. 1-2, pp. 37-38, 1946.

[6] R. Damadian, "Tumor Detection by Nuclear Magnetic Resonance," Science, vol. 171, no. 3976, pp. 1151-1153, 1971.
[7] P. Lauterbur, "Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance," Nature, vol. 242, no. 5394, pp. 190-191, 1973.

[8] E. Haacke, R. Brown, M. Thompson, and R. Venkatesan, "Magnetic Resonance Imaging: Physical Principles and Sequence Design," 1999.

[9] D. Twieg, "The k-Trajectory Formulation of the NMR Imaging Process with Applications in Analysis and Synthesis of Imaging Methods," Medical Physics, vol. 10, p. 610, 1983.

[10] A. Elster and J. Burdette, Questions and Answers in Magnetic Resonance Imaging. Mosby, St. Louis, MO, 2001.

[11] D. Twieg, "Parsing Local Signal Evolution Directly from a Single-Shot MRI Signal: A New Approach for fMRI," Mag. Reson. Med., vol. 50, pp. 1043-1052, Nov 2003.

[12] S. Ljunggren, "A Simple Graphical Representation of Fourier-Based Imaging Methods," J. Mag. Reson., vol. 54, no. 2, pp. 338-343, 1983.

[13] S. Ogawa, D. Tank, R. Menon, J. Ellermann, S. Kim, H. Merkle, and K. Ugurbil, "Intrinsic Signal Changes Accompanying Sensory Stimulation: Functional Brain Mapping with Magnetic Resonance Imaging," Proceedings of the National Academy of Sciences, vol. 89, no. 13, pp. 5951-5955, 1992.

[14] R. Buxton, Introduction to Functional Magnetic Resonance Imaging. Cambridge University Press, 2002.

[15] R. Menon, S. Ogawa, D. Tank, and K. Ugurbil, "4 Tesla Gradient Recalled-Echo Characteristics of Photic Stimulation-Induced Signal Changes in the Human Primary Visual Cortex," Mag. Reson. Med., vol. 30, no. 3, pp. 380-386, 1993.

[16] G. Glover, S. Lemieux, M. Dragova, and J. Pauly, "Decomposition of Inflow and Blood Oxygen Level-Dependent (BOLD) Effects with Dual-Echo Spiral Gradient-Recalled Echo (GRE) fMRI," Mag. Reson. Med., vol. 35, no. 3, pp. 299-308, 1996.

[17] O. Speck and J. Hennig, "Functional Imaging by I0- and T2*-Parameter Mapping Using Multi-Image EPI," Mag. Reson. Med., vol. 40, no. 2, pp. 243-248, 1998.

[18] S. Posse, S. Wiese, D. Gembris, K. Mathiak, C. Kessler, M. Grosse-Ruyken, B. Elghahwagi, T. Richards, S. Dager, and V. Kiselev, "Enhancement of BOLD-Contrast Sensitivity by Single-Shot Multi-Echo Functional MR Imaging," Mag. Reson. Med., vol. 42, pp. 87-97, 1999.

[19] K. S. Nayak and D. G. Nishimura, "Automatic Field Map Generation and Off-Resonance Correction for Projection Reconstruction Imaging," Mag. Reson. Med., vol. 43, pp. 151-154, Jan 2000.

[20] A. Schulte, O. Speck, C. Oesterle, and J. Hennig, "Separation and Quantification of Perfusion and BOLD Effects by Simultaneous Acquisition of Functional I0- and T2*-Parameter Maps," Mag. Reson. Med., vol. 45, no. 5, pp. 811-816, 2001.

[21] B. Sutton, D. Noll, and J. Fessler, "Fast, Iterative Image Reconstruction for MRI in the Presence of Field Inhomogeneities," Medical Imaging, IEEE Trans. on, vol. 22, no. 2, pp. 178-188, 2003.

[22] J. Fessler, S. Lee, V. Olafsson, H. Shi, and D. Noll, "Toeplitz-Based Iterative Image Reconstruction for MRI With Correction for Magnetic Field Inhomogeneity," Signal Processing, IEEE Trans. on, vol. 53, pp. 3393-3402, Sept. 2005.

[23] A. Funai, J. Fessler, D. Yeo, V. Olafsson, and D. Noll, "Regularized Field Map Estimation in MRI," Medical Imaging, IEEE Trans. on, vol. 27, pp. 1484-1494, Oct. 2008.

[24] V. Olafsson, D. Noll, and J. Fessler, "Fast Joint Reconstruction of Dynamic R2* and Field Maps in Functional MRI," Medical Imaging, IEEE Trans. on, vol. 27, pp. 1177-1188, Sept. 2008.
[25] T. Knopp, H. Eggers, H. Dahnke, J. Prestin, and J. Senegas, "Iterative Off-Resonance and Signal Decay Estimation and Correction for Multi-Echo MRI," Medical Imaging, IEEE Trans. on, vol. 28, pp. 394-404, March 2009.

[26] H. Nguyen, B. Sutton, R. Morrison Jr, and M. Do, "Joint Estimation and Correction of Geometric Distortions for EPI Functional MRI using Harmonic Retrieval," Medical Imaging, IEEE Trans. on, vol. 28, pp. 423-434, March 2009.

[27] T. Harshbarger and D. Twieg, "Iterative Reconstruction of Single-Shot Spiral MRI with Off Resonance," Medical Imaging, IEEE Trans. on, vol. 18, pp. 196-205, March 1999.

[28] E. Chong and S. Zak, An Introduction to Optimization Theory. John Wiley & Sons, Inc., 2001.

[29] M. Unser, "Splines: A Perfect Fit for Signal and Image Processing," Signal Processing Magazine, IEEE, vol. 16, pp. 22-38, Nov 1999.

[30] R. Keys, "Cubic Convolution Interpolation for Digital Image Processing," Acoustics, Speech, and Signal Processing, IEEE Trans. on, vol. 29, no. 6, pp. 1153-1160, 1981.

[31] W. Tang, S. Reeves, and D. Twieg, "Fast Joint Estimation of Local Magnitude, Decay, and Frequency from Single-Shot MRI," in Proceedings of SPIE, vol. 6498, p. 649818, SPIE, 2007.

[32] D. C. Noll, "Multishot Rosette Trajectories for Spectrally Selective MR Imaging," Medical Imaging, IEEE Trans. on, vol. 16, pp. 372-377, Aug 1997.

[33] A. Dutt and V. Rokhlin, "Fast Fourier Transforms for Nonequispaced Data," SIAM Journal on Scientific Computing, vol. 14, p. 1368, 1993.

[34] J. Fessler and B. Sutton, "Nonuniform Fast Fourier Transforms Using Min-Max Interpolation," Signal Processing, IEEE Trans. on, vol. 51, pp. 560-574, Feb 2003.

[35] Q. Liu and N. Nguyen, "An Accurate Algorithm for Nonuniform Fast Fourier Transforms (NUFFT's)," Microwave and Guided Wave Letters, IEEE, vol. 8, pp. 18-20, Jan 1998.

[36] M. Jacob, "Optimized Least-Square Nonuniform Fast Fourier Transform," Signal Processing, IEEE Transactions on, vol. 57, pp. 2165-2177, June 2009.

[37] J. Carlson, "An Algorithm for NMR Imaging Reconstruction Based on Multiple RF Receiver Coils," J. Mag. Reson., vol. 74, pp. 376-380, 1987.

[38] M. Griswold, "Basic Reconstruction Algorithms for Parallel Imaging," Parallel Imaging in Clinical MR Applications, p. 19, 2007.

[39] D. Sodickson and C. McKenzie, "A Generalized Approach to Parallel Magnetic Resonance Imaging," Medical Physics, vol. 28, p. 1629, 2001.

[40] O. Dietrich, "General Advantages of Parallel Imaging," Parallel Imaging in Clinical MR Applications, p. 173, 2007.

[41] M. Griswold, P. Jakob, M. Nittka, J. Goldfarb, and A. Haase, "Partially Parallel Imaging with Localized Sensitivities (PILS)," Mag. Reson. Med., vol. 44, no. 4, pp. 602-609, 2000.

[42] S. Reeves and R. Mersereau, "Blur Identification by the Method of Generalized Cross-Validation," Image Processing, IEEE Trans. on, vol. 1, no. 3, pp. 301-311, 1992.

Appendix A

Derivation of the NRMSE of the NUFFT with Linear Interpolation

With linear interpolation, g_l(x) on the interval [x_k, x_{k+1}] is represented by

    g_l(x) = \frac{g_l(x_{k+1}) - g_l(x_k)}{h} (x - x_k) + g_l(x_k)    (A.1)

where h = x_{k+1} - x_k. The total error on the interval [x_k, x_{k+1}] is
    R(n,k) = \int_{x_k}^{x_{k+1}} \Big| e^{-j\frac{2\pi n}{N} x} - s_n^{-1} \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} \Big[ \frac{g_l(x_{k+1}) - g_l(x_k)}{h}(x - x_k) + g_l(x_k) \Big] e^{-j\frac{2\pi n}{mN} l} \Big|^2 dx
           = \int_{0}^{h} \Big| e^{-j\frac{2\pi n}{N} x_k} e^{-j\frac{2\pi n}{N} x} - s_n^{-1} \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} \Big[ \frac{g_l(x_{k+1}) - g_l(x_k)}{h} x + g_l(x_k) \Big] e^{-j\frac{2\pi n}{mN} l} \Big|^2 dx    (A.2)

Let

    A_{nk} = e^{-j\frac{2\pi n}{N} x_k}

    B_{nk} = -s_n^{-1} \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} \frac{g_l(x_{k+1}) - g_l(x_k)}{h} \, e^{-j\frac{2\pi n}{mN} l}

    C_{nk} = -s_n^{-1} \sum_{l=-\lfloor (J-1)/2 \rfloor}^{\lfloor (J-1)/2 \rfloor} g_l(x_k) \, e^{-j\frac{2\pi n}{mN} l}

Then

    R(n,k) = \int_{0}^{h} \big| A_{nk} e^{-j\frac{2\pi n}{N} x} + B_{nk} x + C_{nk} \big|^2 dx = h + I_1(n,k) + I_2(n,k) + I_3(n,k)    (A.3)

where

    I_1(n,k) = 2 \, \mathrm{Re}\Big\{ A_{nk} B_{nk}^* \int_{0}^{h} x \, e^{-j\frac{2\pi n}{N} x} \, dx \Big\}

    I_2(n,k) = 2 \, \mathrm{Re}\Big\{ A_{nk} C_{nk}^* \int_{0}^{h} e^{-j\frac{2\pi n}{N} x} \, dx \Big\}

    I_3(n,k) = \tfrac{1}{3} |B_{nk}|^2 h^3 + \mathrm{Re}\{ B_{nk} C_{nk}^* \} h^2 + |C_{nk}|^2 h

For \alpha \neq 0,

    \int_{0}^{h} x \, e^{\alpha x} \, dx = \frac{h}{\alpha} e^{\alpha h} - \frac{1}{\alpha^2} \left( e^{\alpha h} - 1 \right)

    \int_{0}^{h} e^{\alpha x} \, dx = \frac{1}{\alpha} \left( e^{\alpha h} - 1 \right)
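As a quick sanity check, the closed-form integrals above can be compared against numerical quadrature in MATLAB; the values of alpha and h below are arbitrary.

    % Numerical check of the closed-form integrals (alpha and h are arbitrary).
    alpha = -2i*pi*7/64;  h = 0.05;                  % e.g., alpha = -j*2*pi*n/N
    I_xe  = (h/alpha)*exp(alpha*h) - (1/alpha^2)*(exp(alpha*h) - 1);
    I_e   = (1/alpha)*(exp(alpha*h) - 1);
    err_xe = abs(I_xe - integral(@(x) x.*exp(alpha*x), 0, h));
    err_e  = abs(I_e  - integral(@(x) exp(alpha*x),    0, h));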