3D Volume Reconstruction From 2D Plenoptic Data Using FFT-Based Methods
Date: 2015-06-15
Type of Degree: Dissertation
Department: Electrical Engineering
Abstract
Traditional imaging modalities produce images by capturing a 2D slice of the 4D light field. This is an inherently lossy conversion, as the angular information contained in the light field is discarded. Light-field imaging captures not only the spatial information but also the angular information by sampling the light field from multiple perspectives. By recording both the spatial and angular information contained in the light field, the path each ray travels to the sensor can be reconstructed. By retracing these paths, the image can be refocused to any arbitrary focal plane after acquisition. The resulting images are no longer limited to a 2D space but can now describe the entire 3D imaged volume. Plenoptic imaging systems are commonly used to generate 2D images at varying focal depths from a single acquired image. This technique can also be extended to create estimates of the 3D imaged volume by stacking these 2D refocused images. However, each 2D refocused image will contain energy from out-of-plane objects, which is commonly recognized as image blur. This blur is undesirable in many applications utilizing volume reconstructions, and an efficient means of removing the out-of-plane energy is desired. Existing state-of-the-art techniques for producing blur-free reconstructions, such as the multiplicative algebraic reconstruction technique (MART), are tomographic approaches. While such techniques can produce exceedingly accurate estimates of the volume, their computational burden is extremely high. This research describes alternate methods of reconstructing the volume via frequency-domain algorithms. The focal stack generated by digitally refocusing the acquired data can be modeled as a linear process whereby the system point spread function (PSF) is convolved with the imaged volume. Deconvolution is based on recognizing that convolution is equivalent to point-by-point multiplication in the frequency domain.
It follows that the imaged volume can be estimated by point-by-point division of the focal stack's spectrum by the spectrum of the PSF. This is beneficial because the spectrum of a signal can be computed efficiently via the fast Fourier transform (FFT). Where volume reconstruction may have taken hours using tomographic methods, solutions utilizing deconvolution can be obtained in minutes or even seconds. To appreciate the impact of such a drastic reduction in processing time, one must consider that dynamic events rely on more than a single reconstructed volume: to fully describe such an event, the volume must be imaged and subsequently reconstructed many times. Fourier-based processing techniques have also been shown to offer computationally efficient alternatives to the more intuitive integration-based refocusing algorithms. Existing research has focused on generating 2D images from the 4D plenoptic data set through the use of the projection-slice theorem [1,2]. These results hint at the flexibility of the projection-slice theorem and its application to higher-dimensional spaces. The 2D/4D projection-slice theorem used to compute 2D images is extended to the 3D/4D case in order to generate the 3D focal stack directly from the 4D plenoptic data. This offers the potential for further improvements in particle image velocimetry (PIV) processing speed over conventional tomographic methods. Furthermore, it is shown that the 3D object can be estimated directly from the 3D projections contained within the 4D plenoptic data, again through the use of the projection-slice theorem, without deconvolution.
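The spectral-division idea can be sketched in a few lines. This is a minimal illustration, not the dissertation's code: the toy volume, the PSF, and the Wiener-style regularization constant `eps` (which stabilizes the division where the PSF spectrum is small) are all assumptions made here for demonstration.

```python
import numpy as np

def fft_deconvolve(focal_stack, psf, eps=1e-6):
    """Estimate the imaged volume from a focal stack modeled as volume convolved with the PSF.

    Point-by-point spectral division, stabilized Wiener-style:
    H* / (|H|^2 + eps) approximates 1/H while avoiding blow-up
    at frequencies where the PSF spectrum is near zero.
    """
    F = np.fft.fftn(focal_stack)
    H = np.fft.fftn(psf, s=focal_stack.shape)
    estimate = np.fft.ifftn(F * np.conj(H) / (np.abs(H) ** 2 + eps))
    return np.real(estimate)

# Toy demonstration: a point object blurred by a known (circular) PSF
# is recovered in a single pass of three FFT calls.
volume = np.zeros((8, 8, 8))
volume[4, 4, 4] = 1.0
psf = np.zeros((8, 8, 8))
psf[0, 0, 0], psf[1, 0, 0], psf[-1, 0, 0] = 0.6, 0.2, 0.2  # blur along one axis
blurred = np.real(np.fft.ifftn(np.fft.fftn(volume) * np.fft.fftn(psf)))
recovered = fft_deconvolve(blurred, psf)
```

However large the focal stack, the estimate still costs only a handful of FFTs, which is where the hours-to-minutes speedup over iterative tomographic reconstruction comes from.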
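The projection-slice relationship being extended can be checked numerically in its simplest 1D/2D form (a toy sketch, not the dissertation's 3D/4D machinery): summing an image along one axis and taking the 1D FFT of the result reproduces the corresponding central slice of the image's 2D FFT.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((16, 16))

# Project along the y-axis (axis 0), then take the 1D spectrum...
projection_spectrum = np.fft.fft(img.sum(axis=0))
# ...which equals the k_y = 0 slice of the full 2D spectrum.
central_slice = np.fft.fft2(img)[0, :]

print(np.allclose(projection_spectrum, central_slice))  # True
```

The 3D/4D extension described above applies the same identity one dimension up: appropriately chosen slices of the 4D plenoptic spectrum yield the 3D focal stack directly, without first forming individual 2D refocused images.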