Development of a 3-D Fluid Velocimetry Technique based on Light Field
Imaging
by
Kyle Lynch
A thesis submitted to the Graduate Faculty of
Auburn University
in partial fulfillment of the
requirements for the Degree of
Master of Science
Auburn, Alabama
May 9, 2011
Keywords: 3-D imaging, light field rendering, velocimetry
Copyright 2011 by Kyle Lynch
Approved by
Brian Thurow, Chair, Associate Professor of Aerospace Engineering
Stanley Reeves, Professor of Electrical and Computer Engineering
Roy Hartfield, Professor of Aerospace Engineering
Abstract
A novel method for performing 3-D particle image velocimetry is developed and demonstrated. The technique is based on light field photography, which uses a dense lenslet array mounted near a camera sensor to simultaneously sample the spatial and angular distribution of light entering the camera. Computational algorithms are then used to refocus the image after it is taken and render a 3-D intensity distribution. This thesis provides an introduction to the concepts of light field photography and outlines the processing steps and algorithms required to obtain a 3-D velocity field. To support this, a ray-tracing simulator is used to simulate light field images, and rendering codes are generated to form 3-D particle volumes which can be used for particle image velocimetry (PIV) interrogation. The simulation and rendering code is tested with uniform displacement fields and a spherical vortex, and measurement errors are quantified. It is shown that light field imaging is a feasible method for performing 3-D velocimetry with a single camera, and steps are outlined for further development and testing.
Acknowledgments
I would like to thank Dr. Brian Thurow for his invaluable assistance, advice, and time given in assisting with the development of this thesis. Our conversations have led to numerous ideas and thoughts regarding not only light field imaging, but also the broader field of optical diagnostics, which I greatly enjoy and have established as the foundation for my future career.
I also wish to thank my parents for their continuous support throughout my education. I could not have completed this work without their encouragement and help, and their support of my future academic goals is invaluable.
Lastly, I'd like to thank my colleague Zach Reid for the hours of fruitful discussions and help regarding light field imaging and also numerous topics involving laser diagnostics.
Table of Contents
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Three-Dimensional Flow Measurement Background . . . . . . . . . . . . . . . . 6
2.1 Scanning PIV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Defocusing PIV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Tomographic PIV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Holographic PIV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.5 Synthetic Aperture PIV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6 Summary and Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3 Introduction to the Concepts of Light Field Rendering and Photography . . . . 24
3.1 Light Fields and the Plenoptic Concept . . . . . . . . . . . . . . . . . . . . . 25
3.2 Recent Developments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.3 Plenoptic 2.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4 Camera Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.1 Ray Tracing Description and Optical Configuration . . . . . . . . . . . . 32
4.2 1-Dimensional Ray-Tracing Simulator . . . . . . . . . . . . . . . . . . . . . . 35
4.2.1 Tests using the 1-D Simulator . . . . . . . . . . . . . . . . . . . . . . 39
4.2.2 Depth of Field and Achievable Resolution . . . . . . . . . . . . . . . 45
4.2.3 Including Diffraction Effects . . . . . . . . . . . . . . . . . . . . . 47
4.3 2-Dimensional Ray-Tracing Simulator . . . . . . . . . . . . . . . . . . . . . . 52
4.3.1 Optical Configuration . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.3.2 Particle Field Generation . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.3.3 Tests using the 2-D Simulator . . . . . . . . . . . . . . . . . . . . . . 56
4.3.4 Angular Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.4 Simulating Particle Displacements . . . . . . . . . . . . . . . . . . . . . . . . 63
5 Light Field Rendering Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.1 Light Field Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.2 Computational Refocusing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.3 Focal Plane Spacing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.4 Single-particle refocusing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.5 Intensity Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.6 3-D Deconvolution and Limited-Angle Tomography . . . . . . . . . . . . . . 86
6 Volumetric Correlation Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.1 WIDIM Algorithm Development . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.1.1 Grid Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.1.2 Cross-Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.1.3 Vector Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.1.4 Predictor Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.1.5 Grid Refinement . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.2 Extension to 3-D PIV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
7 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.1 Uniform X-Direction Displacements . . . . . . . . . . . . . . . . . . . . . . . 112
7.2 Uniform Z-Direction Displacements . . . . . . . . . . . . . . . . . . . . . . . 115
7.3 Variation of Particle Number Density . . . . . . . . . . . . . . . . . . . . . . 119
7.4 Velocity Field of a Simulated Vortex . . . . . . . . . . . . . . . . . . . . . . 122
8 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Appendix A: Fortran 2-D Simulation Code . . . . . . . . . . . . . . . . . . . . . . . . 135
Appendix B: Matlab Vortex Displacement Scripts . . . . . . . . . . . . . . . . . . . . 140
Appendix C: Light Field Rendering Codes . . . . . . . . . . . . . . . . . . . . . . . . 146
Appendix D: Matlab 3-D PIV Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
List of Figures
2.1 Schematic of a traditional particle imaging scenario with defocusing. Adapted
from [1]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Schematic of a defocus PIV particle imaging scenario using a double aperture
arrangement. Adapted from [1]. . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Schematic of a tomographic PIV experimental setup. . . . . . . . . . . . . . . . 10
2.4 Demonstration of ghost particle formation and displacement. . . . . . . . . . . . 12
2.5 Schematic of an off-axis holographic capture and reconstruction. Adapted from [2]. 15
2.6 Schematic of a digital in-line holographic capture and reconstruction. . . . . . . 16
3.1 Schematic of defocusing effect with a full aperture. . . . . . . . . . . . . . 26
3.2 Schematic of defocusing effect with the use of an eccentric aperture. . . . . . 27
3.3 Schematic of defocusing effect with the use of an eccentric aperture. . . . . . 28
3.4 Fundamental difference in operation of original plenoptic and "focused plenoptic" cameras. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.1 Schematic of Optical Configuration and Dimensional Definitions. . . . . . . . 33
4.2 Depth testing, dy = 0 µm, dz = +1000 µm. . . . . . . . . . . . . . . . . . . 41
4.3 Depth testing, dy = 0 µm, dz = −1000 µm. . . . . . . . . . . . . . . . . . . 41
4.4 Spatial resolution testing, dy = −62 µm, dz = 0 µm. . . . . . . . . . . . . . 42
4.5 Spatial resolution testing, dy = 0 µm, dz = 0 µm. . . . . . . . . . . . . . . 43
4.6 Spatial resolution testing, dy = +62 µm, dz = 0 µm. . . . . . . . . . . . . . 43
4.7 Spatial resolution testing, dy = +63 µm, dz = 0 µm. . . . . . . . . . . . . . 44
4.8 Spatial resolution testing, dy = +10 µm, dz = +500 µm. . . . . . . . . . . . 44
4.9 Spatial resolution testing, dy = +30 µm, dz = +500 µm. . . . . . . . . . . . 45
4.10 Schematic of dimensions for depth of field calculations. . . . . . . . . . . . 46
4.11 Effect of magnification on ambiguity, f = 50 mm. . . . . . . . . . . . . . . 48
4.12 Comparison of Airy and Gaussian functions. Adapted from Adrian and Westerweel [3, p. 101]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.13 Schematic of diffraction simulation, applied at both the lenslet array and the sensor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.14 Diffraction testing using the same conditions as tested in Figure 4.6, dy = +62 µm, dz = 0 µm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.15 Schematic of the lenslet array (red) overlaying the edge of the CCD array (blue). 55
4.16 Particle field definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.17 In-focus particle image (dx = 0, dy = 0, and dz = 0). Lenslet array outline given in light gray. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.18 Effect of a depth change on a single particle image. Lenslet array outline given in light gray. In all cases, dx = 0 and dy = 0. . . . . . . . . . . . . . . . . 58
4.19 Effect of an in-plane and depth change on a single particle image. Lenslet array outline given in light gray. In the left column, both dx = 0 and dz = 0. In the right column, dx = 0 and dz = +2 mm. . . . . . . . . . . . . . . . . . . . . 59
4.20 Example of two in-line particles, where the depths of the two particles are individually varied. Lenslet array outline given in light gray. In all cases, dx = 0 and dy = 0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.21 In-focus particle image (zspan = 0). 1,000 particles simulated with 25,000 rays each. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.22 Particle image (zspan = 3.0 mm). 1,000 particles simulated with 25,000 rays each. The lenslet grid outline has been removed to improve clarity. . . . . . . 61
4.23 Particle image (zspan = 5.0 mm). 1,000 particles simulated with 25,000 rays each. The lenslet grid outline has been removed to improve clarity. . . . . . . 62
4.24 Particle image (zspan = 7.0 mm). 1,000 particles simulated with 25,000 rays each. The lenslet grid outline has been removed to improve clarity. . . . . . . 62
4.25 Distribution of angles θ in degrees for a 1,000 particle simulation. . . . . . . 64
4.26 Distribution of angles φ in degrees for a 1,000 particle simulation. . . . . . . 64
4.27 X-Y slice from particle displacement simulation of the Hill spherical vortex. Parameter a = 1.5 mm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.1 Cropped portion of a simulated pinhole image with (f/#)m = 16. . . . . . . . 69
5.2 Cropped portion of processed image showing grid cells and center locations. . 71
5.3 Registration for a single lenslet. Top number (red) indicates the angle θ. Bottom number (green) indicates the angle φ. Actual pixel locations given by blue points. Lenslet centroid shown as red asterisk. . . . . . . . . . . . . . . . . 73
5.4 Differences in synthetic sensor size vs. actual sensor size. . . . . . . . . . . 75
5.5 Change in synthetic sensor size with refocus distance. . . . . . . . . . . . . 75
5.6 Signal interpolation technique, a) Definition of normalized coefficients, b) Resulting rendered pixels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.7 Example of refocused images at various refocusing planes. . . . . . . . . . . 78
5.8 Unique depths of field for calculation of the focal plane spacing. . . . . . . . 79
5.9 Unique depths of field for calculation of the focal plane spacing. f/# = 2, f = 50 mm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.10 Unique depths of field for calculation of the focal plane spacing. f/# = 11, f = 50 mm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.11 Unique depths of field for calculation of the focal plane spacing. f/# = 11, f = 200 mm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.12 Refocus plane 3, t = −5.8236 mm. . . . . . . . . . . . . . . . . . . . . . . 83
5.13 Refocus plane 6, t = −2.9555 mm. . . . . . . . . . . . . . . . . . . . . . . 83
5.14 Refocus plane 9, t = 0.0000 mm. . . . . . . . . . . . . . . . . . . . . . . . 84
5.15 Refocus plane 12, t = +3.0455 mm. . . . . . . . . . . . . . . . . . . . . . 84
5.16 Refocus plane 15, t = +6.1837 mm. . . . . . . . . . . . . . . . . . . . . . 84
5.17 Intensity histogram for a refocused image using 100 bins. A Gaussian distribution is fitted to the distribution, with markers given for the mean intensity and three standard deviations above the mean intensity. . . . . . . . . . . . . . 86
5.18 Original light field image without thresholding. . . . . . . . . . . . . . . . 87
5.19 Light field image with non-dilated thresholding. . . . . . . . . . . . . . . . 87
5.20 Light field image with dilated thresholding. . . . . . . . . . . . . . . . . . 87
6.1 WIDIM algorithm flow chart. . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.2 Schematic of PIV gridding. . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.3 Intermediate steps for the calculation of the spatial correlation with the FFT. Window size 64 x 64 px. Data from case A2 of the 2005 PIV Challenge. . . . 98
6.4 Example of a cross-correlation performed using 48 x 48 px interrogation window sizes with 24 x 24 px spacing and no window offset. Vectors scaled by a factor of 5. Data from case A, image 2 of the 2003 PIV Challenge. . . . . . . . . . 100
6.5 Vector field from Figure 6.4 using the iterative validation and replacement procedure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.6 X-Coordinate Predictor using 64 x 64 px interrogation window sizes with 32 x 32
px spacing. Data from case A, image 2 of the 2003 PIV Challenge. . . . . . . . 102
6.7 Y-Coordinate Predictor using 64 x 64 px interrogation window sizes with 32 x 32
px spacing. Data from case A, image 2 of the 2003 PIV Challenge. . . . . . . . 103
6.8 X-Coordinate Corrector using 64 x 64 px interrogation window sizes with 32 x
32 px spacing. Data from case A, image 2 of the 2003 PIV Challenge. . . . . . . 103
6.9 Y-Coordinate Corrector using 64 x 64 px interrogation window sizes with 32 x
32 px spacing. Data from case A, image 2 of the 2003 PIV Challenge. . . . . . . 104
6.10 Rf = 1. Initial percentage of valid vectors: 97%. . . . . . . . . . . . . . . . . . . 105
6.11 Rf = 2. Initial percentage of valid vectors: 97%. . . . . . . . . . . . . . . . . . . 106
6.12 Rf = 3. Initial percentage of valid vectors: 95%. . . . . . . . . . . . . . . . . . . 107
6.13 Rf = 4. Initial percentage of valid vectors: 85%. . . . . . . . . . . . . . . . . . . 108
6.14 Corrector RMS for the multi-pass algorithm. The refinement factor Rf is also plotted for clarity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.15 Percentage of valid vectors for the multi-pass algorithm. The refinement factor Rf is also plotted for clarity. . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.16 Schematic of resize operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.17 Example of resizing operation applied to a set of refocused planes. Note that in
these images the thresholding operation has not been applied to give a clearer
view of the resizing procedure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.1 Velocity field using 0.001 particles per pixel and a simulated 4.5 pixel shift in the x-direction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
7.2 Histogram of x-displacements. 50 bins used over 4800 data points. . . . . . . . . 114
7.3 Histogram of y-displacements. 50 bins used over 4800 data points. . . . . . . . . 115
7.4 Histogram of z-displacements. 50 bins used over 4800 data points. . . . . . . . . 115
7.5 Histogram of x-displacements without using image resizing. 50 bins used over
4800 data points. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.6 Vector field showing one of every 5 vectors without image resizing. Mean value of x-component subtracted, and both y and z components set to zero for effective visualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.7 Vector field for uniform z-displacement showing one of every 4 vectors. . . . . 117
7.8 Histogram of z-displacements. 100 bins used to highlight peak-locking effect more clearly. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
7.9 Vector field for uniform z-displacements (same as Figure 7.7). Mean value of the displacement subtracted to show location and direction of bias. . . . . . . 119
7.10 Histogram of x-displacements. 100 bins used over 3600 data points. . . . . . . . 119
7.11 Histogram of y-displacements. 100 bins used over 3600 data points. . . . . . . . 120
7.12 Example of resizing operation applied to a set of refocused planes. Note that in
these images the thresholding operation has not been applied to give a clearer
view of the resizing procedure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.13 Histogram of the rendered intensity volume corresponding to Figure 7.12a. The 3σ threshold level for this case is 0.34. . . . . . . . . . . . . . . . . . . . . 122
7.14 Histogram of the rendered intensity volume corresponding to Figure 7.12e. The 3σ threshold level for this case is 0.72. . . . . . . . . . . . . . . . . . . . . 122
7.15 3-D velocity field of an inviscid spherical vortex. All vectors plotted. . . . . . 123
7.16 X-Y projection of the 3-D velocity field in Figure 7.15. All vectors plotted. . . 124
7.17 Y-Z projection of the 3-D velocity field in Figure 7.15. All vectors plotted. . . 124
7.18 Histogram of x-displacements for the velocity field given in Figure 7.15. . . . 125
7.19 Histogram of y-displacements for the velocity field given in Figure 7.15. . . . 125
7.20 Histogram of z-displacements for the velocity field given in Figure 7.15. . . . 126
List of Tables
2.1 Comparison of Experimental Studies using 3-D PIV Techniques . . . . . . . . . 23
4.1 Definition of Simulator Parameters . . . . . . . . . . . . . . . . . . . . . . 40
4.2 Definition of Diffraction Parameters . . . . . . . . . . . . . . . . . . . . . 49
4.3 Definition of 2-D Simulator Parameters, based on Kodak KAI-16000 sensor. . 55
5.1 Synthetic Depth of Field and Focal Plane Positioning . . . . . . . . . . . . . . . 80
Chapter 1
Introduction
The visualization of fluid motion has served both scientific and artistic purposes for centuries, dating back to the early sketches of Leonardo da Vinci. His work in the field included numerous illustrations of complex flow patterns, such as the turbulent flow downstream of a flat plate immersed in fluid. These observations represented a new approach to the study of fluids which emphasized the importance of visualization for understanding the complex interactions, patterns, and topology of flows. A resurgence of interest in flow visualization occurred in the late nineteenth century due to the increased use of hydraulic systems in industry and the culmination of theoretical studies occurring in the field of hydrodynamics. A number of landmark experiments that formed the foundation of modern fluid dynamics were performed by pioneers such as Osborne Reynolds, Geoffrey Taylor, and Ludwig Prandtl. In particular, the pipe flow experiments of Reynolds formed the foundation for the study of turbulence and established the Reynolds number as an important dimensionless parameter to characterize fluid behavior.
Since the time of these early experiments, the study of fluid mechanics has enabled remarkable technological progress. The advances span a diverse range of fields, from aeronautics and hydraulics to biology and medical research. These applications have drastically increased the need to not only visualize fluid motion, but also to provide meaningful measurements which can be used to improve the design of these devices. Fortunately, a number of advances have also occurred in measurement technology which have allowed scientists to investigate a large gamut of flow regimes, characterized not only by velocity (subsonic, supersonic, and hypersonic), but also by a variety of other factors (chemically reacting flows, magnetohydrodynamic flows, etc.).
Many of the advancements in fluid mechanics have occurred not only due to the increasing accuracy and capability of measurement tools, but also from a more fundamental approach to research involving the understanding of various aspects of fluid mechanics and, in particular, turbulence. Various advancements have been made in the field over the years, notably the seminal works by Kolmogorov in 1941 [4], which outlined the range of spatial and temporal scales that universally pervade all turbulent flows, and Townsend in 1956 [5], which first brought to light the importance of coherent structures in turbulent flows. Both of these ideas have been expanded and studied in incredible detail; however, these fundamental ideas pose unique challenges to the development of diagnostics and in some ways provide a goal for all techniques to achieve. In this regard, the ultimate goal for diagnostic techniques is to resolve the motions contained at the smallest scales of the flow while also being able to measure the topology and interactions of the largest flow scales. While the focus of this thesis does not concern fundamental turbulence research, the requirements turbulent flows place on diagnostic methods provide the basic impetus for the current work.
In addition to conducting experiments, many fluid mechanical studies are performed using theoretical and computational tools. Fluid motion can be fully described by the set of nonlinear partial differential equations known as the Navier-Stokes equations. The equations were formalized in the middle of the nineteenth century; however, their complexity yielded only a small number of exact solutions that were in most cases not useful for practicing engineers. Rapid developments in computer technology and numerical solution methods have allowed these equations to be revisited and applied, establishing the field of computational fluid dynamics (CFD). Within this field the most rigorous solution method is known as direct numerical simulation (DNS), where the full range of spatial and temporal scales of the flow is computed. For specific cases (typically low Reynolds numbers), DNS is a viable option that provides highly accurate results. However, as the Reynolds number increases, the number of required calculations grows so rapidly that it quickly exceeds the capability of computers to solve in a timely manner. Unfortunately, many practical engineering scenarios operate in a high Reynolds number regime, and alternative solution methods are required. Typically, the approach is to use lower-order models, which rely on broad physical assumptions or tabulated experimental data, to reduce the complexity of the computation.
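The growth of DNS cost with Reynolds number can be made concrete with a standard textbook estimate (not a result from this thesis): for homogeneous turbulence, the number of grid points scales roughly as Re^(9/4) and the total work as roughly Re^3, so each decade of Reynolds number multiplies the computational cost by about a thousand. A minimal sketch:

```python
# Rough DNS cost scaling for homogeneous turbulence (a common textbook
# estimate, used here purely for illustration): grid points ~ Re^(9/4),
# total work (grid points x time steps) ~ Re^3.

def dns_cost_estimate(reynolds: float) -> dict:
    """Return order-of-magnitude DNS resource estimates for a given Re."""
    grid_points = reynolds ** (9.0 / 4.0)   # spatial resolution requirement
    total_work = reynolds ** 3.0            # work over the whole simulation
    return {"grid_points": grid_points, "total_work": total_work}

for re_num in (1e3, 1e4, 1e5):
    est = dns_cost_estimate(re_num)
    print(f"Re = {re_num:.0e}: ~{est['grid_points']:.1e} points, "
          f"~{est['total_work']:.1e} operations (arbitrary units)")
```

Running the loop shows the thousand-fold jump per decade of Re, which is why DNS remains confined to low Reynolds numbers.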
For many scenarios, low-fidelity CFD is a powerful tool for engineers and scientists. The heavy use of and reliance on computational solutions does not, however, eliminate the need for physical experiments: CFD depends on experimental data for validation purposes. Likewise, CFD is heavily used to optimize designs before testing is conducted, eliminating expensive additional testing and physical models. Thus, a primary challenge for modern measurement technology is to acquire accurate quantitative data for validation of CFD models, and to investigate flows that are still difficult to solve numerically, such as turbulent flows, separated flows, unsteady flows, and more.
The development of lasers and digital charge-coupled device (CCD) cameras within the last fifty years has transformed fluid measurement experiments. These technologies revolutionized the field by enabling non-intrusive measurements, in contrast to highly intrusive techniques utilizing hot-wires, pitot probes, tufts, and other simple devices. One of the first optical techniques to be developed was Laser Doppler Anemometry (LDA), which is capable of a point (1-D) measurement of all three velocity components using the interference of multiple laser beams to track the motion of particle tracers. As CCD technology developed further, planar 2-D techniques were developed, including particle image velocimetry (PIV) [6, 7, 3], planar Doppler velocimetry (PDV) [6, 8, 9], planar laser-induced fluorescence (PLIF) [10, 11, 12], and molecular tagging velocimetry (MTV) [13, 14]. In these techniques, lasers are used to create a high-intensity planar light source that is either scattered by particles present in the flow field or absorbed by molecules within the flow which subsequently fluoresce. A CCD camera captures the resulting distribution of signal within the plane. A major advantage of these planar techniques is their ability to acquire data at multiple spatial locations simultaneously, which enables measurements of derived quantities that were previously inaccessible. For example, spatial derivatives of a 2-D velocity field yield a component of the vorticity and strain fields, which are often more useful quantities for fundamental fluids studies than the velocity field itself [15].
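To make the derived-quantity idea concrete, the out-of-plane vorticity component ω_z = ∂v/∂x − ∂u/∂y can be computed from a planar velocity field with central differences. A minimal sketch using NumPy and a synthetic solid-body-rotation field (an illustration, not code from this thesis):

```python
import numpy as np

# Synthetic 2-D velocity field: solid-body rotation u = -y, v = x,
# whose analytic out-of-plane vorticity is dv/dx - du/dy = 2 everywhere.
x = np.linspace(-1.0, 1.0, 51)
y = np.linspace(-1.0, 1.0, 51)
X, Y = np.meshgrid(x, y)        # rows vary in y, columns vary in x
u, v = -Y, X

# np.gradient returns derivatives along (rows, cols), i.e. (d/dy, d/dx)
# for this meshgrid layout; the coordinate arrays set the spacing.
dudy, dudx = np.gradient(u, y, x)
dvdy, dvdx = np.gradient(v, y, x)
omega_z = dvdx - dudy           # out-of-plane vorticity component

print(omega_z.mean())           # ~2.0 for solid-body rotation
```

Because the synthetic field is linear, central differences recover the analytic value exactly; for measured PIV data the same stencil gives a second-order-accurate estimate.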
There are, however, substantial limitations to the utility of planar measurements. The majority of practical flows encountered are characterized by strongly three-dimensional features, making the selection of a planar measurement location difficult or the basic premise of a planar measurement inherently ill-posed. To extract meaningful 3-D information from the flow, multiple experiments are conducted at various planes of interest. This can considerably increase the duration and expense of a testing campaign. Another notable limitation is the inability to calculate the full set of 3-D spatial and temporal derivatives. These drawbacks have spurred the development of three-dimensional, three-component measurement techniques such as tomographic PIV, holographic PIV, and others. Each of these techniques, while powerful, has unique experimental restrictions and difficulties which prevent more widespread use and will be considered herein.
The previous discussion illustrates the major challenge of modern laser diagnostics; namely, no diagnostic technique currently exists which is capable of accurately resolving the 3-D motion and properties of the entire range of scales in flows of practical interest. The techniques that have already been developed are capable of acquiring 3-D, 3-C velocity data, but with multiple drawbacks and often complex setups. The subject of this thesis is the development of a novel three-dimensional measurement technique using concepts from light field rendering and photography to approach or exceed some of the capabilities of earlier 3-D, three-component (3-C) velocimetry techniques.
Chapter 2 provides additional background on existing three-dimensional measurement techniques and describes their limitations. Particular attention is given to tomographic and synthetic aperture PIV, which have numerous parallels to the technique being developed. Chapter 3 introduces the concepts and ideas behind light field photography, including a review of earlier efforts in both photography and microscopy. Chapter 4 describes the development of both a simplified 1-D light field simulator and a complete 2-D simulator for light fields. In this chapter, some of the basic shortcomings of the camera are addressed, and a detailed parametric study is performed to provide insight into the optimal experimental configuration, covering variables such as particle seed density, lenslet sizing, and focal plane spacing. In Chapter 5, the light field rendering codes are discussed, including computational refocusing and volume reconstruction, which allow raw light field images to be converted into volumetric intensity data. Chapter 6 discusses the development of volumetric PIV algorithms which are used to perform the correlation on volumetric data. This first begins with a validation of the approach on 2-D images, which is then extended to 3-D volumes. Chapter 7 presents results for uniform displacement fields, varying particle densities, and a simulated spherical vortex. Lastly, Chapter 8 provides concluding remarks.
Chapter 2
Three-Dimensional Flow Measurement Background
In developing a new diagnostic technique, the capabilities of existing measurement techniques must be known and used for comparison throughout development. This chapter presents an overview of the state of the art in three-dimensional (3-D), three-component (3-C) velocimetry techniques.
2.1 Scanning PIV
Scanning PIV is an intuitive extension of traditional PIV, which makes use of high-repetition-rate lasers and cameras to record a number of particle image pairs throughout a short-duration scan, such that the motion of the flow is effectively frozen during the scan. Galvanometric scanning mirrors or rotating mirrors perform the scanning by displacing the individual laser sheets so they are located at various depths with respect to the camera. Traditional PIV algorithms are then used on each particle image pair to yield a velocity field for each slice of the volume.
The technique takes advantage of the mature processing techniques developed for traditional PIV and, of all 3-D PIV techniques, has the highest in-plane accuracy in determining velocity. By using a stereoscopic imaging arrangement, the third component of velocity can be obtained for each planar slice, making the technique a true 3-D, 3-C measurement with accuracy equal to traditional stereoscopic PIV.
The primary disadvantage of the technique is that the timescale of the scanning and imaging is often orders of magnitude longer than the characteristic timescales of most flows encountered in practice. Even with kHz-rate laser and camera systems, scanning PIV is typically only applied in water tunnels where flow velocities are on the order of centimeters
per second. Additionally, the cameras must be configured such that the focal depth extends across the entire depth of the scan, which can reduce the quality of the individual particle images and lower the signal-to-noise ratio. In 1995, Brucker [16] implemented a system based on this technique to measure the 3-D transient wake of a spherical cap in a water tunnel. In this work, a Reynolds number of 300 was studied, based on a cap radius of 15 mm and a freestream velocity of 10 cm/s. Each scan required 0.72 seconds for the acquisition of 9 individual planes (a total of 18 images). Another, more recent, study by Brucker [17] on an air flow with an average velocity of 5 m/s used a 2 kHz camera to acquire 9 individual planes for a total of 18 images. As a final example, Zhang et al. [18] used scanning PIV to study a laminar separation bubble on an airfoil at moderate angles of attack in a water tunnel. The Reynolds number in these experiments ranged from 20,000 to 60,000, based on a chord length of 200 mm and a maximum freestream velocity of 0.3 m/s. For all of the examples above, scanning PIV has provided useful 3-D data; however, the restrictions on flow velocity are severe enough to prevent the technique from being used for the vast majority of practical flows.
2.2 Defocusing PIV
In defocusing PIV, particle depth information is recovered by utilizing the natural blurring of the particle image with increasing distance from the focal plane of the imaging system. An example of this effect is shown in Figure 2.1, where light from point A is in focus and results in a sharp point being formed on the image plane. In contrast, light from point B does not originate on the focal plane, and thus forms a diffuse spot on the image plane. In theory, this information by itself could recover the particle depth and position. However, typical particle imaging scenarios are often also low-light scenarios, limiting the signal-to-noise ratio of the resulting image. Additionally, as the particle density increases, identifying individual particles becomes increasingly difficult due to substantial image overlap. Finally, even if these two issues could be resolved, the particle blur is symmetric about the focal plane, such
Figure 2.1: Schematic of a traditional particle imaging scenario with defocusing. Adapted from [1].
Figure 2.2: Schematic of a defocus PIV particle imaging scenario using a double aperture arrangement. Adapted from [1].
that there exists an ambiguity as to whether the particle is a given distance toward or away from the camera.
For these reasons, a specialized aperture is used to overcome each of these issues. The concept was first demonstrated by Willert and Gharib [19] in 1992 and has been continually refined since its inception. Pereira et al. [1] provide a revised description of the technique. Essentially, the system can be described through the schematic in Figure 2.2, where a dual aperture is used to encode depth information.
The defocusing PIV technique uses the image shift caused by the two apertures to measure the depth location of the particle. Note that for a two-aperture configuration there still exists an ambiguity in depth, as a particle at opposite depth locations will form the same image on the sensor. Most defocusing PIV implementations therefore use a three-aperture mask arranged in the form of a triangle; the ambiguity is removed by measuring the orientation of the triangle in the resulting image. Besides the triangular aperture, the concept has been extended by Lin et al. [20] using an annular aperture, whose advantage is increased optical efficiency due to the larger aperture area.
Defocusing PIV has a number of attractive advantages, including a simple experimental setup and straightforward image processing. These features have led to its use in a growing number of experiments, with commercial implementations available (the TSI V3V system being an example). However, defocusing PIV has a number of drawbacks, the most severe being the restriction on total particle number density, since the processing is more akin to individual particle tracking velocimetry than to the image correlation techniques used in particle image velocimetry. Pereira and Gharib [21] present an error analysis for determining the maximum particle density. This restriction limits the number of recoverable velocity vectors in each image. For example, the images acquired in [20] yield approximately 450 vectors per image, in contrast to the thousands of vectors attainable from a traditional PIV recording. Also studied in [21] is the absolute error of defocusing PIV measurements: on the order of 1-2% for in-plane motion, and as much as 10% for out-of-plane motion. With these restrictions and errors, defocusing PIV is unsuitable for many fundamental turbulence studies or CFD validation efforts where highly accurate data is required; however, it is finding use in general studies in a variety of fields (see Gharib et al. [22] for examples).
2.3 Tomographic PIV
Tomographic PIV is based on the illumination, recording, and reconstruction of tracer particles within a 3-D measurement volume. The technique uses several cameras to record simultaneous views of the illuminated volume, which is then reconstructed using optical tomography algorithms to yield a discretized 3-D intensity field. A pair of 3-D intensity fields is analyzed using 3-D cross-correlation algorithms to calculate the 3-D, 3-C velocity field within the volume. The technique was originally developed by Elsinga et al. in 2006 [23] and has since grown into the focus of intense research into improvements and applications of the technique.
Figure 2.3: Schematic of a tomographic PIV experimental setup.
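The cross-correlation step mentioned above can be sketched with a short Python example. This is a simplified, single-window estimate using NumPy's FFT; the function name and the synthetic volumes are hypothetical, and practical tomo-PIV codes interrogate many small windows with sub-voxel peak fitting rather than one bulk correlation:

```python
import numpy as np

def displacement_3d(vol1, vol2):
    """Estimate the bulk displacement between two 3-D intensity volumes
    from the peak location of their FFT-based cross-correlation."""
    spec = np.fft.fftn(vol1) * np.conj(np.fft.fftn(vol2))
    corr = np.real(np.fft.ifftn(spec))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # fold wrap-around peak locations into signed shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))

# Synthetic check: three "particles" displaced by (2, 1, 3) voxels.
vol2 = np.zeros((16, 16, 16))
vol2[3, 4, 5] = vol2[10, 2, 8] = vol2[7, 12, 1] = 1.0
vol1 = np.roll(vol2, (2, 1, 3), axis=(0, 1, 2))
```

Here `displacement_3d(vol1, vol2)` recovers the imposed shift because all particles share the same displacement; in a real flow the interrogation is performed locally.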
The use of multiple cameras requires relatively complex mounting and orientation hardware to properly align each field of view. To ensure the focus is uniform across the volume, each camera lens is aligned in a Scheimpflug configuration, where the lens and image planes are tilted with respect to the object plane residing in the illuminated volume. The particles are kept within the focal depth of each camera by reducing the aperture (increasing the f/#) to increase the depth of field. A calibration of the viewing angles and field of view is established using a calibration target placed at several depths throughout the measurement volume. In order for the reconstruction algorithms to correctly triangulate the particle positions, the calibration must be highly accurate (typically on the order of the particle image size, under 1 mm).
The reconstruction procedure is a complex, under-determined inverse problem. The primary complication is that a single set of views can result from a large number of 3-D volumes. Procedures to properly determine the unique volume from a set of views are the foundation of the field of tomography. A number of approaches exist, including algebraic, Fourier, and back-projection reconstruction; however, for the small number of views and high spatial frequencies present in particle images, algebraic techniques are typically used.
The first step in these algebraic algorithms is the proper conditioning of each of the images. In particular, for efficient processing, the image data must be segmented to ensure a zero-intensity background and regions of finite intensity which define the individual particle images. This process may be quite complex due to the nature of the particle images, which themselves span a range of intensities. These images are used as inputs to the multiplicative algebraic reconstruction technique (MART) algorithm, which uses iterative techniques to determine the intensity of the reconstructed volumetric pixels, or voxels. The advantage of this voxel-by-voxel reconstruction technique is that it avoids the need to identify individual particles and naturally takes into account the intensity distribution of the images.
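The multiplicative update at the heart of MART can be illustrated with a small Python sketch. The dense weight matrix `W`, the relaxation factor `mu`, and the tiny toy system are illustrative assumptions; practical tomo-PIV codes use sparse line-of-sight weights over millions of voxels:

```python
import numpy as np

def mart(W, I, n_iter=10, mu=1.0, eps=1e-12):
    """MART sketch. W[p, j] weights voxel j's contribution to pixel p;
    I[p] is the recorded pixel intensity. Each pass re-projects the
    current volume and applies a multiplicative correction per pixel."""
    v = np.ones(W.shape[1])          # positive uniform initial guess
    for _ in range(n_iter):
        for p in range(len(I)):
            proj = W[p] @ v          # current projection along this ray
            if proj > eps:
                v *= (I[p] / proj) ** (mu * W[p])
    return v

# Toy check: two voxels, each seen by exactly one pixel.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
I = np.array([2.0, 3.0])
v = mart(W, I)
```

Because the update is multiplicative, voxels that start at zero (or are driven to zero by a zero-intensity pixel) stay at zero, which is why the image segmentation described above matters.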
As alluded to earlier, the reconstruction is not always unambiguous, which leads to the generation of "ghost particles" that can reduce the quality of the resulting 3-D intensity field. The conceptual reasoning behind the generation of ghost particles is shown in Figure 2.4. In this case, both cameras capture an image consisting of two separate particle images. However, due to the ambiguity introduced along the line of sight of each camera, the reconstruction process is unable to precisely determine the actual depth of each particle, and thus accumulates intensity at the voxels corresponding to both the actual particle positions and the ghost particle positions. To understand why this is a problem, consider a pair of particle images in which the particle images have shifted between exposures. In this case, if the actual particle displacement is outward, the ghost particles will also be displaced outward, regardless of whether the velocity field indicates another value. Additionally, the magnitude of the ghost particle displacement is not set by the local value of the velocity at the ghost particle location, but is a function of the velocity at the actual particle position and the viewing angles of each camera. The issue with ghost particles can be addressed by introducing additional views to the tomographic PIV setup, which substantially removes
Figure 2.4: Demonstration of ghost particle formation and displacement.
the ambiguity that exists in a two or three camera setup, but increases optomechanical
complexity and system cost.
In Elsinga et al. [23], a parametric study was performed using a reduced-dimensionality model (1-D slices reconstructing a 2-D volume), in which parameters such as the number of MART iterations, number of cameras, viewing directions, particle image density, calibration accuracy, and image noise were varied. For their assessment, the cameras were placed at optical infinity so that magnification, viewing direction, and focal depth were constant across the field of view. They concluded that accurate volumes could be reconstructed by as few as three cameras; however, both the quality of the reconstruction and the achievable particle seeding density increase with additional cameras. The viewing angles were also varied to optimize the reconstruction; an angle of 30 degrees was found optimal for reducing the number of ghost particles, as described above. At larger viewing angles, the line-of-sight distance through the volume from each camera increases to the point where more artifacts appear in the reconstructions. The last trade study investigated the effect of calibration inaccuracies and found that a calibration error of 1 pixel could cause as much as a 50% decrease in reconstruction accuracy. This is a significant finding that places stringent requirements on the optomechanical setup of the cameras. In practice, typical calibration errors can be reduced to around 0.2 pixels through self-calibration procedures, resulting in a negligible effect on reconstruction performance. These errors have been explored more recently in Elsinga et al. [24].
Tomographic PIV has been applied to perhaps a broader range of flows than any other 3-D velocimetry technique. In Humble et al. [25], the technique was used to elucidate the structure of a turbulent boundary layer/shock wave interaction. This study used the unique aspects of 3-D, 3-C data to draw connections between supersonic boundary layers and the more familiar structures, such as hairpin vortices and streamwise-elongated regions of low-speed flow, found in incompressible boundary layers. The low-frequency motion of the shock wave impinging on the boundary layer was found to have a direct connection to the motion of the elongated regions of velocity.
Another example, by Scarano and Poelma [26], investigated the vorticity field in the wake of a cylinder at Reynolds numbers of 180, 360, 540, 1080, and 5540. This range of Reynolds numbers allowed the transitional behavior of the flow to be studied in great detail. In particular, at Re = 180 regular shedding occurred, which slowly began developing counter-rotating streamwise vortex pairs at Re > 500. Between these two Reynolds numbers, the regular shedding and counter-rotating vortex pairs coexist. At the turbulent Reynolds number of 5540, a large increase in the range of flow scales was observed, as well as the development of a distinct separated shear layer.
One of the more recent applications of the technique is measuring the aeroacoustic field of basic shapes using high-repetition-rate lasers and cameras. Violato et al. [27] performed an experiment using a 5 kHz tomographic-PIV system to measure the 3-D velocity field near the leading edge of a standard NACA 0012 airfoil positioned in the wake of a cylindrical rod. In this work, various pressure field integration methods were analyzed to reduce errors in the calculation, which are critical to ensure accurate sound field prediction with various acoustic analogies. It should be mentioned that this work represents a relatively new field that is receiving considerable interest for the predicted capability of measuring the aeroacoustic properties of complex shapes and of helping validate computational aeroacoustic models.
2.4 Holographic PIV
Holographic PIV (HPIV) encompasses a variety of experimental techniques which use the interference between coherent light scattered by a particle and a reference beam to encode information about the amplitude and phase of the scattered light incident on a sensor plane. This encoded information, known as a hologram, can then be used to reconstruct the original intensity field by illuminating the hologram with the original reference beam, either via optical methods or digital approximations. This intensity field is interrogated using cross-correlation techniques to yield a velocity field. By using holography instead of traditional imaging, this class of methods overcomes one of the primary limitations of most 3-D PIV schemes, namely the limited focal depth of the imaging lenses used by individual cameras. The two primary methods used in modern holographic PIV are off-axis holography and in-line holography, both of which are discussed here.
Off-axis holography uses separate beams to provide the object and reference waves. This setup prevents speckle noise from being generated by interference of the two waves within the scattering medium, which would occur if both were propagated through the medium. A simplified schematic of this arrangement is provided in Figure 2.5. An actual off-axis experiment is a highly complex optical system comprising numerous elements, and the reader is referred to an example schematic in Sheng et al. [28] for a more complete presentation. For the recording, an object beam is directed through the particle field, which results in particle scattering normal to the beam direction. Simultaneously, a reference beam enters at an oblique angle to the setup, and the combination of scattered object light and the reference beam (i.e., the interference pattern, or hologram) is recorded onto a holographic film plate. For the reconstruction, a conjugate reference beam is directed onto the developed holographic film (a transparency consisting of fine fringe patterns). The resulting
Figure 2.5: Schematic of an off-axis holographic capture and reconstruction. Adapted from [2].
wave transmitted through the film forms a stationary intensity field in real space which represents the original 3-D volume. Using a set of cameras with shallow depths of field, individual planes of this stationary volume can be captured and interrogated with typical PIV algorithms.
In-line holography is another approach that provides some unique advantages for particle imaging. Perhaps the largest of these is the use of forward-scattered light, which is orders of magnitude brighter than scattering oriented normal to the beam direction. Additionally, the optical setup of such systems is much simpler because the residual light does not need to be separated and recombined at a different location. A schematic of such a setup is given in Figure 2.6. The in-line configuration also extends relatively easily to CCD and CMOS sensors, creating a separate class of experiments known as digital in-line holography (DiH). The complexity of such setups shifts from the optics to image post-processing, which involves the use of simulated reference beams. Further discussion of these topics is beyond the scope of this thesis but is given an excellent treatment by Arroyo and Hinsch [29].
Critical to all holographic setups are specialized requirements on lasers and recording media. In both off-axis and in-line holographic setups, injection-seeded Nd:YAG lasers or ruby lasers must be used due to the requirement of a long (> 1 m) coherence length. This
Figure 2.6: Schematic of a digital in-line holographic capture and reconstruction.
coherence requirement arises from the use of interference to generate the hologram. To perform the recording, off-axis holography requires photographic plates due to the narrow holographic fringe spacing, given by Equation 2.1:

d_f = λ / (2 sin θ)    (2.1)

where d_f is the fringe spacing, λ is the wavelength of the laser light (typically 532 nm or other visible wavelengths), and θ is the half-angle between the scattered object wave and the reference wave. In off-axis holography, the object wave arrives at the holographic plate at a normal angle and the reference beam arrives at an oblique angle to the plate. For example, consider a half-angle between the beams of 22.5 degrees. The fringe spacing for this case would be approximately 0.695 microns, or equivalently 1440 lines/mm, a resolution which can only be achieved using holographic film. This precludes the use of digital cameras for direct sampling of the hologram in an off-axis setting.
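The numbers in the example above follow directly from Equation 2.1, as a quick Python check confirms (the variable names are chosen for illustration):

```python
from math import sin, radians

wavelength = 532e-9                  # laser wavelength [m]
theta = radians(22.5)                # half-angle between object and reference waves
d_f = wavelength / (2 * sin(theta))  # fringe spacing, Equation 2.1 [m]
lines_per_mm = 1e-3 / d_f            # reciprocal spacing in lines/mm
# d_f is about 0.695e-6 m, i.e. roughly 1440 lines/mm
```

A typical digital sensor with a pixel pitch of several microns cannot resolve fringes this fine, which is why holographic film is required in the off-axis arrangement.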
A variety of issues degrade the quality of HPIV results. The first class of issues involves the reconstruction itself. In holography, the object wave of a particle is typically assumed to be spherical; however, by Mie scattering theory, this wave has a complex shape which can distort the reconstructed particle. Another issue is the presence of substantial speckle noise, which lowers the overall signal-to-noise ratio of the particle images. This effect is of greater concern for in-line holographic systems because the reference beam is propagated through the volume along with the scattered object beam. Noise can also be introduced through impurities in the scattering medium, such as temperature variations and window blemishes. Because holography requires coherent imaging, these effects are much more severe than in normal imaging conditions. The combination of these factors increases the complexity of the correlation process. In particular, the speckle noise in an HPIV recording often prevents traditional image-based correlation methods from being used. Instead, single-particle identification and correlation are implemented, which set limits on the particle number density. A more comprehensive outline of these error sources is given in Meng et al. [2].
In light of these issues, it may seem that holographic PIV is too complicated and error-prone to be used for flow measurements. However, many impressive results have been obtained with all holographic approaches. Svizher and Cohen [30] used a hybrid in-line and off-axis HPIV system to study the physics of hairpin vortices in a 40 x 30 x 30 mm³ volume in great detail. HPIV has also been useful for fundamental turbulence studies, such as Tao et al. [31], which investigated the alignment of vorticity and strain rate tensors in high-Reynolds-number turbulence using a volume of 46.5 x 45.1 x 44.5 mm³. As a final example, Sheng et al. [28] used holographic microscopy to perform near-wall measurements of turbulent shear stress and velocity in boundary layers within an exceedingly small measurement volume of 1.5 mm³. One advantage of using holography for extreme near-wall measurements is the robustness of the technique with regard to interfaces (traditional PIV, by comparison, fares poorly in these cases due to reflections at the interface). These experiments are only a few of the numerous publications on HPIV in recent years; for a more comprehensive listing of modern experiments, Meng et al. [2] provide a good starting point.
2.5 Synthetic Aperture PIV
Synthetic aperture PIV (SAPIV) is the most recent development in 3-D PIV techniques, and because it makes use of light field rendering concepts similar to those described in this work, it is described in detail here. The technique uses a large camera array (eight or more cameras) to capture multiple views of the measurement volume simultaneously. Methods from light field rendering are used to generate digitally refocused planes at various depths within the measurement volume. Using reconstruction procedures, it is possible to determine a pair of 3-D intensity fields which are used as inputs for volumetric PIV through cross-correlation. The technique was recently developed by Belden et al. [32].
The design of an SAPIV experiment is in many ways similar to a tomographic PIV experiment. In each, cameras are individually positioned and adjusted to focus on a common field of view through the use of mounting hardware and lenses operated in a Scheimpflug (tilted-lens) condition. Also, the aperture of each camera must be adjusted so that the measurement volume is contained within the focal depth of each camera. The primary differences from a tomo-PIV setup are the larger number of cameras used and the radically different reconstruction algorithms. For SAPIV, the map-shift-average algorithm is used to construct a synthetically refocused image from the individual views by projecting each view onto a desired focal surface. In the resulting image, points that lie on the focal surface are sharp, whereas points off the surface are blurred out. This algorithm is described for general cases by Vaish et al. [33] and is briefly explained here.
The process of projecting individual views onto a desired focal surface is known as computing the homography of the image, which depends on the camera parameters and the focal plane (i.e., for each desired focal plane, a new homography must be computed). The homography is given as the mapping shown in Equation 2.2.
\[
\begin{pmatrix} b\,x' \\ b\,y' \\ b \end{pmatrix}_i
=
\begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}_i
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}_i
\tag{2.2}
\]
For the implementation in Belden et al. [32], two assumptions were made to simplify the calculation of the homography at various focal planes: first, that the focal planes are planar, and second, that each focal plane is aligned with the centers of projection of the cameras. The assumption that the centers of projection of the cameras are collinear and parallel to the focal planes may be invalid for real experiments and represents an improvement that could be made in the algorithm. Regardless, these two assumptions allow the analysis to continue, and they simplify the homography transformation for various focal planes to a coordinate shift, as shown in Equation 2.3.
\[
\begin{pmatrix} x'' \\ y'' \\ 1 \end{pmatrix}_i
=
\begin{bmatrix} 1 & 0 & k\,X_{C_i} \\ 0 & 1 & k\,Y_{C_i} \\ 0 & 0 & 1 \end{bmatrix}
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix}_i
\tag{2.3}
\]
In this equation, x' and y' are the coordinates in the reference focal plane (determined through a calibration procedure), and x'' and y'' are the coordinates in the new focal plane. X_Ci and Y_Ci define the relative location of camera i, and finally k is a constant that determines the amount by which the focal plane is shifted. By using this equation to transform the image from each camera and averaging the resulting transformed images, a refocused image is rendered at the specified focal plane depth. In this image, in-focus particles appear sharp and bright, while out-of-focus particles appear blurred and relatively dark. Through a simple thresholding procedure (considered in detail later in this thesis), the particle images can be extracted and used to build slices of a three-dimensional intensity volume. A pair of these volumes is then used in a 3-D cross-correlation to determine the 3-D, 3-C displacement.
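Under the same assumptions (pre-aligned views and purely translational shifts proportional to camera position), the shift-and-average step can be sketched in Python. The function name, camera coordinates, and the use of integer pixel shifts are illustrative simplifications; a full implementation would interpolate sub-pixel shifts from the calibrated homographies:

```python
import numpy as np

def refocus(images, cam_positions, k):
    """Map-shift-average sketch: shift view i by k*(X_Ci, Y_Ci) pixels
    (the translation of Equation 2.3) and average the shifted views."""
    out = np.zeros_like(images[0], dtype=float)
    for img, (XC, YC) in zip(images, cam_positions):
        shift = (int(round(k * YC)), int(round(k * XC)))  # (row, col)
        out += np.roll(img, shift, axis=(0, 1))
    return out / len(images)

# Two views of one particle with +/-1 px disparity; at the matching k
# the shifted copies align into a single bright peak.
a = np.zeros((11, 11)); a[5, 6] = 1.0   # camera at (XC, YC) = (+1, 0)
b = np.zeros((11, 11)); b[5, 4] = 1.0   # camera at (XC, YC) = (-1, 0)
r = refocus([a, b], [(1, 0), (-1, 0)], k=-1)
```

At the matching k the averaged image has a full-intensity peak where the particle lies on the chosen focal plane; at other k values the two copies land on different pixels and each retains only half the intensity, which is the blur that the subsequent thresholding removes.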
In Belden et al. [32], a parametric study was performed to analyze the effects of measurement volume size, particle seeding density, mapping function error, and camera arrangement. For all cases, a 5 x 5 camera array was used with 85 mm focal-length lenses, 1000 x 1000 px sensors, and a pixel pitch of 10 µm. The reconstruction quality parameter Q, defined originally by Elsinga et al. [23] for tomographic PIV evaluation, is used to judge the quality of rendered volumes. In all cases, the reconstruction quality is reduced at greater particle seeding densities and camera baselines. For example, in a 100 mm³ volume, adequate reconstructions can be obtained with particle densities up to approximately 0.15 particles/mm³. For smaller volumes, the acceptable seeding increases dramatically; in a 50 mm³ volume, particle densities can be as high as 0.6 particles/mm³ while yielding similar reconstruction quality. The number of cameras was also varied to determine optimal performance. In all cases, using more than 10 cameras allowed adequate reconstructions to be obtained, with diminishing returns for arrays of more than 13 cameras. Finally, mapping errors were analyzed and determined to be negligible, as self-calibration procedures can reduce the mapping error to less than 0.1 pixels.
A sample experiment was undertaken using a set of 9 computer vision cameras imaging a volume of 50 x 50 x 10 mm³ containing a small-scale repeatable vortex ring generated by a piston apparatus. The seeding density for the experiment was approximately 0.026 particles/pixel, or equivalently 0.23 particles/mm³. It should be noted that this density is significantly lower than the density that simulations indicated would yield acceptable reconstruction quality. Using this setup, adequate measurements of the vortex ring were made that approximately match results obtained from 2-D slices using traditional PIV.
2.6 Summary and Comparison
The previous sections presented a variety of current experimental techniques for performing 3-D, 3-C PIV. Each technique has unique strengths and weaknesses, and no single technique is capable of performing measurements under all experimental conditions. This is in some ways a testament to the difficulty of acquiring 3-D data. In this section, comparisons are made between the aforementioned techniques to highlight their utility and shortcomings. Scanning PIV is not considered in this comparison due to its applicability only to low-speed flows. Additionally, the processing for scanning PIV is nearly identical to planar stereoscopic PIV, for which there are numerous articles in the literature describing processing accuracy and capabilities.
The first criterion to be compared is the number of cameras used by each technique. The cost and complexity of a technique are directly tied to the number of cameras required. The fewest cameras are needed by holographic PIV, which acquires the entire hologram at once using digital in-line holography or holographic film plates; in either case, only a single camera is required. In sharp contrast is synthetic aperture PIV, which requires upwards of 10 cameras to achieve acceptable reconstruction accuracy. However, the cameras used for SAPIV need not be high-quality cooled scientific CCD cameras; they can instead be lower-cost computer vision cameras, because the averaging process used in the reconstruction procedure effectively reduces the impact of camera noise. Techniques using a moderate number of cameras are tomo-PIV (typically 4) and defocusing PIV (typically 3).
The second major criterion for comparison is the size of the measurement volume each technique can handle. The largest volumes are attainable using defocusing PIV (volumes as large as 100 x 100 x 100 mm have been used). In contrast, most holographic PIV systems work with very small volumes, due in large part to the limitations imposed by the size of available optics and the resolution requirements for identifying individual particles; the smallest volumes of all are those of microscopic holographic PIV. In the moderate regime are tomographic and synthetic aperture PIV. Due to the sensitivity of the MART algorithm, tomo-PIV is unsuitable for depths in excess of 20 mm. Synthetic aperture PIV is capable of viewing larger depths, but with reduced seeding density.
The next comparison is the total number of particles which can effectively be used by each technique. This metric serves two purposes. First, it describes the information content of each data set: the greater the total number of particle tracers, the more information is contained about the flow field. Second, it evaluates the robustness of the reconstruction algorithms. Currently, tomographic PIV is capable of operating with the largest number of particles, which could improve further as numerous researchers explore improvements to MART and other reconstruction algorithms. Holographic PIV is next, followed closely by defocusing PIV. Surprisingly, synthetic aperture PIV has so far been demonstrated with the fewest particles. However, SAPIV is a new technique, and with further development it may equal or exceed the particle counts of the other 3-D PIV methods.
Table 2.1: Comparison of Experimental Studies using 3-D PIV Techniques

Technique       Volume Size (mm³)    No. of cameras   No. of particles   Particles/pixel
DDPIV [20]      35 x 27 x 35         1                40,000             0.038
DDPIV [21]      100 x 100 x 100      3                35,000             0.034
Tomo-PIV [23]   35 x 35 x 7          4                24,500             0.05
Tomo-PIV [25]   70 x 10 x 40         4                --                 --
Tomo-PIV [26]   100 x 100 x 20       4                100,000            0.024
HPIV [30]       40 x 30 x 30         1                --                 --
HPIV [31]       46.5 x 45.1 x 44.5   1                --                 --
HPIV [28]       1.5 x 2.5 x 1.5      1                56,250             0.014
SAPIV [32]      65 x 40 x 32         8                9,860              0.015
Chapter 3
Introduction to the Concepts of Light Field Rendering and Photography
The concept of light field imaging has evolved over the past 17 years, beginning with the work of Adelson and Bergen [34] and Adelson and Wang [35], and was extended by Ng et al. [36] for hand-held photography and Levoy et al. [37] for microscopy. These works describe the light field as the complete distribution of light in space, which can be described by a 5-D function, sometimes termed the plenoptic function, where each ray is parameterized by its position (x, y, z) and angle of propagation (θ, φ). In a transparent medium such as air, the radiance along a light ray remains constant at all points on its path. This makes one of the spatial coordinates redundant and reduces the plenoptic function to a 4-D function, which is commonly termed the light field [38]. Conventional photography captures only the 2-D spatial distribution of the 4-D light field, because the angular distribution is lost through integration at the sensor surface.
Assuming the light field can be captured, there are a variety of computational methods for generating 2-D images or even 3-D models from this 4-D data. These methods fall under the general name of light field rendering, which was introduced to the computer graphics community by Levoy and Hanrahan in 1996 [38]. In general, the term refers to methods for computing new views or perspectives of a scene by extracting a proper 2-D slice from the 4-D light field. Choosing a particular slice allows for various unique imaging situations, such as orthographic projections and crossed-slit projections. Another rendering technique that is now widely used is synthetic aperture photography, where large arrays of cameras capture the light field and generate refocused images at multiple depths by extracting the proper slice from the light field.
These novel processing techniques are only available if the light field is known; therefore,
any device that can record portions of or the complete 4-D light field is of tremendous value.
As listed in Levoy [39], there are several ways to capture a light field, including the mounting
of a camera on a movable gantry and taking a large number of photos at different positions
(used in the original work on light field rendering by Levoy and Hanrahan [38]), the use of a
large static array of cameras (e.g. Wilburn et al. [40]), or the use of a lenslet array mounted
near a camera sensor. This last device, which is termed the plenoptic camera, records the
light field in a single image and is the focus of the development in this work.
3.1 Light Fields and the Plenoptic Concept
Our description of the plenoptic camera begins with the work of Adelson and Wang [35],
who were the first to propose the concept of the plenoptic camera. They used the analogy
of a single-lens stereo view to describe the function of the camera. Figures 3.1 through 3.3
illustrate the fundamental principles of this concept. In Figure 3.1a, a point object is focused
onto an image sensor, leading to a bright distinct point on the sensor surface. As the object
is moved closer (Figure 3.1b) and farther away (Figure 3.1c), the image of the spot grows
larger and appears blurred as it is out of focus. At this point, the position of the object is
not apparent from the image. However, if an eccentric aperture is used, the distance of the
object from the lens can be determined. In Figure 3.2a, when the object is in focus, the
light strikes the same point as before and produces a sharp peak at the center of the sensor.
When the object is moved closer to the lens (Figure 3.2b), the eccentric aperture limits the
angular range of rays traveling to the sensor, and the intersection of incident rays and the
image sensor leads to a blurred image of the object that is displaced to the right of center.
Conversely, when the object is moved further away, the rays follow a different path through
the aperture, with the image formed in front of the sensor and the resulting rays traveling to
the left of center. Thus, a close-up object leads to a lateral displacement on the sensor to the
right, whereas a distant object leads to a displacement to the left. As such, the precise location
and depth of the object can be determined by measuring its horizontal displacement and size
on the sensor. This concept is similar to the working principles of defocusing PIV; however,
by making an additional modification, the plenoptic camera is able to provide much more
information regarding the light field.
Figure 3.1: Schematic of defocusing effect with a full aperture.
The plenoptic camera does not use an eccentric aperture. Instead, a lenslet array is used
to encode the angular information of incident rays onto pixels found behind each lenslet. This
is illustrated in Figure 3.3 with an array of pinholes used in place of lenslets. In this case,
a main lens is used to form an image on the array of pinholes with 3 x 3 pixels located
behind each pinhole. As such, each pinhole represents a macropixel. When an object is
perfectly focused on the center pinhole (Figure 3.3a), all of the rays converge at the pinhole,
illuminating all of the pixels found underneath that particular pinhole. When the object is
moved closer (Figure 3.3b), however, a blurred spot is produced that spans several pinholes.
As the angle of rays reaching the pinholes varies depending on the pinhole location, only
certain pixels under each pinhole receive light whereas the others remain dark. This is
illustrated by the pattern of pixels found beneath each figure. Conversely, when the
object is moved further away, the angle of light rays incident on each pinhole is different,
resulting in a different pattern on the pixel array (Figure 3.3c). As such, by analyzing
the distribution of light under each pinhole, the depth of the object can be determined.
Replacing the array of pinholes with a lenslet array yields an identical result, but greatly
increases the amount of light collected by the sensor.

This concept was demonstrated by Adelson and Wang [35] using a 500 x 500 pixel CCD
camera with microlenses forming 5 x 5 pixel macropixels. As such, the spatial resolution of
the resulting image was 100 x 100 pixels, with the 25 pixels under each macropixel used to
record angular information about the image. This illustrates an inherent trade-off associated
with the plenoptic camera between spatial and angular resolution. Nonetheless, Adelson and
Wang were able to use their prototype sensor to demonstrate the concept for range finding,
where they produced qualitatively accurate depth maps for various images.
Figure 3.2: Schematic of defocusing effect with the use of an eccentric aperture.

Figure 3.3: Schematic of defocusing effect with an array of pinholes in place of lenslets.
3.2 Recent Developments
Interest in plenoptic cameras has picked up recently due to the rapid increases in CCD
resolution, which allow both the spatial and angular information to be adequately sampled. In
particular, we note the work of Ng et al. [36], who developed a hand-held version of the camera
using a commercially available 16 megapixel image sensor and a lenslet array consisting of 296
x 296 lenslets. Their focus was on digital photography, where the additional information made
available with light field imaging enables computational rendering of synthetic photographs,
allowing for focusing of the camera or adjustment of the aperture after the image has been
taken. Also demonstrated in their work is the ability to move the observer across the aperture
of the camera, which produces changes in parallax. This is particularly useful in macro (close-
up) imaging as is often used in the laboratory and wind tunnel environment. The number
of views available is equal to the number of pixels behind each lenslet. In their case, this
corresponded to a total of 196 (14 x 14) different views of the same scene recorded on a single
sensor. This will prove to be an important point when we consider the idea of tomographic
reconstruction, where 196 viewpoints using separate cameras is not practical.
More recently, efforts have been underway by Levoy et al. [37, 41] to develop a light field
microscope based on the plenoptic camera. The fundamental principle remains the same;
however, their work focused on three additional challenges associated with microscopic imag-
ing. For one, wave optics and diffraction must be considered in a microscopic environment,
whereas geometrical optics was sufficient before. Secondly, a typical microscope objective
functions differently than a normal camera lens. Finally, most objects in microscope images
are partially transparent, whereas the previous effort in plenoptic photography by Ng et al.
[36] had focused on scenes with opaque objects. This last point is perhaps the most relevant
to this work, where illuminated particle fields are also partially transparent. This may at
first appear to be a major disadvantage in reconstructing a 3-D volume, because of the con-
tributions from scatterers located at multiple depths. Nevertheless, this imaging scenario can
take advantage of developments in the field of microscopy, where there has been substan-
tial effort at increasing the depth resolution of images acquired of transparent scenes. The
technique of deconvolution microscopy uses knowledge of the point-spread-function (PSF) of
the microscope objective to increase the depth resolution of the resulting image by removing
the contributions from out-of-plane captured light. The processed images exhibit clear and
defined features that are typically hidden by out-of-focus scatterers.

These prior efforts present a logical approach for determining a 3-D intensity field of a
transparent scene, which will be used in this work. First, a focal stack of images is generated
using the computational refocusing technique. The focal plane spacing and number of planes
generated will be discussed in later chapters. Second, a thresholding, 3-D deconvolution, or
limited-angle tomography algorithm can be used to modify the planes of the focal stack by
removing out-of-plane contributions, yielding a volume containing sharp particle images.
An analysis of these algorithms will also be covered later in this thesis.
3.3 Plenoptic 2.0
Since the original work by Ng et al. [36] on plenoptic photography, a number of
additional investigators have researched various extensions and modifications to
the technique. Most notably, Lumsdaine and Georgiev [42, 43] have explored the tradeoff
between spatial and angular resolution and developed a similar camera where the lenslet
array acts as an imaging system focused on the focal plane of the main camera lens. This
is in contrast to the original plenoptic cameras, which placed the lenslet array at the focal
plane of the main lens. This change in operation can be seen in Figure 3.4. In their work,
a 16 megapixel sensor was coupled with a lenslet array of 250 micron pitch and 750 micron
focal length. The number of lenslets was 144 in both directions. Thus, traditional plenoptic
rendering would only generate an image of 144 px x 144 px resolution. However, high
resolution rendering is able to extract an image of resolution 1040 px x 976 px.

The primary drawback to this technique is that much less angular resolution is captured.
At present, the amount of angular sampling required to generate clearly refocused images
of particle fields is unknown (the high spatial frequencies in particle images may invalidate
the high resolution plenoptic approach). Additionally, the rendering of high-resolution tech-
niques is more complicated and less intuitive. For these reasons, traditional plenoptic
rendering is used throughout this work. However, as shown in [43], due to the similarities
in camera operation the plenoptic 2.0 rendering algorithms can be used for traditional light
field data. This presents the possibility of exploring these rendering algorithms in future
studies.
Figure 3.4: Fundamental difference in operation of the original plenoptic ("Plenoptic 1.0")
and focused plenoptic ("Plenoptic 2.0") cameras.
Chapter 4
Camera Simulation
To evaluate camera performance under a variety of scenarios, both a simplified 1-D
and a complete 2-D light field image simulator are developed. The 1-D simulator is used
primarily for the purpose of understanding the basic concept and operation of the camera
and identifying shortcomings or unexpected behavior. The 2-D simulator is used to render
full particle fields for use in reconstruction and cross-correlation algorithms. The simulators
are designed to generate a representative signal or image acquired by the camera sensor for
a particle field which is approximated by a distribution of one or more point light sources.
The value of this approach stems from the ability to vary parameters such as main lens focal
length, lenslet pitch, lenslet focal length, and others to test and optimize camera performance.
This chapter first begins with a description of the ray tracing procedure and defines the
variables to be used throughout the simulations. This provides the necessary framework for
constructing the 1-D simulator, which is then used to test a variety of imaging scenarios.
The simulator is then extended to include diffraction effects and produce complete 2-D light
field images of particle fields.
4.1 Ray Tracing Description and Optical Configuration
The use of linear (Gaussian) optics is well established for geometrically tracing the
path of light through space and various optical elements through the use of matrix methods
from linear algebra. An important application of Gaussian optics is ray tracing in computer
graphics. Briefly, ray tracing is a rendering technique in which a large number of light rays
from a scene are used to form an image at arbitrary locations or viewpoints. Rays of light are
initialized at the light source by specifying an initial position and direction. Any number of
ray transfer matrices are then used to simulate optical elements and the propagation of light
through free space [44]. The intersection each ray makes with a sensor plane or designated
viewpoint defines the generated image. The primary limitation of traditional Gaussian optics
is that ray transfer matrices assume the optical elements and coordinates are defined with
respect to the optical axis. For a plenoptic camera, the lenslets are not located on the optical
axis, and thus require a modified approach. Georgiev and Intwala [45] and Ahrenberg and
Magnor [46] have shown that Gaussian optics can be extended to light field imaging as
well, through an extension of basic linear optics known as affine optics. The simulations
constructed herein apply their work with affine optics to ray tracing, allowing lenslets to be
properly simulated and light field images of entire particle fields to be generated.

Figure 4.1: Schematic of Optical Configuration and Dimensional Definitions.
In Figure 4.1, the optical elements comprising the simulation are shown (not to scale),
and the corresponding dimensional variables are labeled. The origin of the optical axis is
located at the center of the sensor plane, with the z-axis oriented out of the camera through
the center of the lens aperture, and the x- and y-axes aligned with the sensor plane (the
x-axis is projected into the page in this figure). This imaging configuration can be described
using the classical thin lens equation, shown in equation 4.1. The assumption made with the
thin lens equation is that the thickness of the lens is negligible with respect to the overall
length of the optical system. Modern lenses with multiple optical elements are not thin by
this definition, but the concept of principal planes can reduce any combination of lenses to
a single thin lens. The design of modern lenses takes this into account and allows them to
be simulated using this equation.
1/f_m = 1/s_o + 1/s_i                                    (4.1)
In this equation, s_o is the distance from the object focal plane to the main lens, and s_i is the
distance between the main lens and the image focal plane. The main lens of focal length f_m
acts to form an image at the image focal plane, exactly like a conventional camera. However,
for the plenoptic camera, rather than placing the sensor at this plane, a lenslet array of focal
length f_l and pitch p_l is inserted. The sensor plane, with individual pixels of pitch p_p, is
then placed at the focal length of the lenslet array. For all current simulations, the fill factors
of both the lenslets and the pixels are assumed to be 100%. Note again that the lenslet array is
located at the focal point of the main lens, and that the distance between the lenslet array
and the sensor is much smaller than the distance between the main lens and the lenslet array.
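Equation 4.1 can be exercised numerically; a minimal sketch (the function name and unit choice are mine):

```python
def image_distance(f_m, s_o):
    """Solve the thin-lens equation 1/f_m = 1/s_o + 1/s_i for s_i."""
    return 1.0 / (1.0 / f_m - 1.0 / s_o)

# Unit-magnification case from Table 4.1 (values in mm): an object two
# focal lengths in front of the 1.5 mm lens images two focal lengths behind it
print(image_distance(1.5, 3.0))  # approximately 3.0 mm
```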
With the basic dimensions specified, the ray transfer matrices can now be defined.
The first and most basic ray transfer matrix is for the propagation of light through free
space, or translation. This is modeled as a linear propagation of light from the original point
(x, y) at an angle (θ, φ) over a distance t. This is expressed in matrix notation in equation
4.2. Note that only the position of the ray changes, and the direction remains the same.
\[
\begin{pmatrix} x' \\ y' \\ \theta' \\ \phi' \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & t & 0 \\
0 & 1 & 0 & t \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ \theta \\ \phi \end{pmatrix}
\tag{4.2}
\]
The next basic transfer matrix used is for refraction of light rays through a thin lens,
which is expressed in equation 4.3, where f is the focal length of the lens. In contrast to
translation, the position of the light ray is constant, while the direction of the light ray is
modified. As mentioned previously, the lens must either be thin, or capable of being reduced
to a single effective thin lens for this equation to be used. This assumption is used for all
lenses used in both the 1-D and 2-D simulators.
\[
\begin{pmatrix} x' \\ y' \\ \theta' \\ \phi' \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
-1/f & 0 & 1 & 0 \\
0 & -1/f & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ \theta \\ \phi \end{pmatrix}
\tag{4.3}
\]
The previous equations for light propagation and lens transfer make the assumption that
the optical element is centered with respect to the optical axis. For light field cameras using
a lenslet array, a modification must be made to account for the lenslets that are shifted from
the optical axis. The derivation of this approach, known as affine optics, is given in Georgiev
and Intwala [45]. The result of their derivation is to treat the propagation through a lenslet
array of focal length f_l and separation from the optical axis (s_x, s_y) as the combination of a
lens transfer matrix and a prism, as given below in equation 4.4. In contrast to the previous
two transfer matrices, both the position and the direction of the light ray are modified.
\[
\begin{pmatrix} x' \\ y' \\ \theta' \\ \phi' \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
-1/f_l & 0 & 1 & 0 \\
0 & -1/f_l & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ \theta \\ \phi \end{pmatrix}
+
\begin{pmatrix} 0 \\ 0 \\ s_x/f_l \\ s_y/f_l \end{pmatrix}
\tag{4.4}
\]
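The three transfer operations above can be collected into a small ray-tracing kernel; the following sketch assumes the 4-component ray vector (x, y, θ, φ) of equations 4.2 through 4.4, with NumPy and function names of my choosing:

```python
import numpy as np

def translate(ray, t):
    """Free-space propagation over distance t (equation 4.2)."""
    M = np.array([[1, 0, t, 0],
                  [0, 1, 0, t],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return M @ ray

def thin_lens(ray, f):
    """Refraction through a centered thin lens (equation 4.3)."""
    M = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [-1/f, 0, 1, 0],
                  [0, -1/f, 0, 1]], dtype=float)
    return M @ ray

def lenslet(ray, f_l, s_x, s_y):
    """Off-axis lenslet: lens transfer plus a prism term (equation 4.4)."""
    prism = np.array([0.0, 0.0, s_x / f_l, s_y / f_l])
    return thin_lens(ray, f_l) + prism

# Sanity check: a ray parallel to the axis at height y = 100 crosses
# the axis exactly one focal length behind a lens of f = 50
ray = np.array([0.0, 100.0, 0.0, 0.0])   # (x, y, theta, phi)
print(translate(thin_lens(ray, 50.0), 50.0)[1])  # -> 0.0
```

The parallel-ray check is the standard consequence of the −1/f entries: the lens bends the ray by −y/f, and translating by f cancels the height exactly.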
4.2 1-Dimensional Ray-Tracing Simulator
The 1-D simulator is used as a relatively simple means to evaluate basic camera concepts
without requiring a full particle simulation, and allows the propagation of rays through the
system to be easily visualized. All particles are approximated in the simulator as point
sources of rays. This simplification allows the particle field to be defined as simply the
coordinates of each point, which is given in terms of a shift from the optical axis d_y and a
shift from the object focal plane d_z. A positive value of d_y is oriented upward, and a positive
value of d_z is oriented away and out from the camera along the optical axis, as indicated in
Figure 4.1.
The remaining variables are camera parameters, which can be divided into two cate-
gories: fixed parameters and variable parameters. The fixed parameters are set through the
hardware design of the camera and lenslet array and include the lenslet focal length f_l, the
lenslet pitch p_l, the pixel pitch p_p, and the number of pixels n_p,y. These can be modified in
the simulator as needed but cannot be modified in the physical hardware once manufactured.
The other class of parameters are variable parameters, which can be changed to accom-
modate different imaging configurations. These include the main lens focal length f_m and
the object focal plane distance s_o. These parameters could be directly varied to achieve the
desired imaging conditions. However, a more pragmatic approach can be used by defining
a field of view FOV_y and the main lens focal length. This allows the magnification to be
calculated and enforces a unique combination of s_o and s_i in accordance with the thin-lens
equation. The camera specifications can be further defined by placing a condition on the
image-side or effective f-number of the main lens. As recognized by Ng et al. [36], to prevent
overlap or unused pixels of the lenslet images, the image-side f-number (also termed the effec-
tive f-number) of the main lens must be equal to or larger than the f-number of the lenslets.
For the main lens, the image-side f-number is calculated in terms of the magnification, as
described by Smith [47] and defined in equation 4.8. This f-number is used to calculate the
pitch or aperture diameter p_m of the main lens. For the lenslets, the distance s_i is much
larger than the focal length of the lenslets f_l, so it can be assumed the lenslets are focused
to infinity, allowing the simplified definition of f-number ((f/#)_l = f_l/p_l) to be used. These
quantities and relationships are defined, in order of their calculation, through the following
equations.
M = -(n_{p,y} p_p)/FOV_y                                 (4.5)
s_i = f_m (1 - M)                                        (4.6)
s_o = -s_i/M                                             (4.7)
(f/#)_m = (f/#)_l/(1 - M)                                (4.8)
p_m = f_m/(f/#)_m                                        (4.9)
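Equations 4.5 through 4.9 chain together naturally in code. The sketch below assumes the sign convention M < 0 for an inverted real image (consistent with M = −1 in Table 4.1) and infers a 555 µm field of view from the Table 4.1 sensor size; the function name is mine:

```python
def camera_parameters(n_py, p_p, FOV_y, f_m, fnum_l):
    """Derived imaging parameters, equations 4.5-4.9.
    Sign convention (assumed): M < 0 for an inverted real image."""
    M = -(n_py * p_p) / FOV_y          # magnification (4.5)
    s_i = f_m * (1 - M)                # image distance (4.6)
    s_o = -s_i / M                     # object distance (4.7)
    fnum_m = fnum_l / (1 - M)          # image-side f-number of main lens (4.8)
    p_m = f_m / fnum_m                 # main-lens aperture diameter (4.9)
    return M, s_i, s_o, fnum_m, p_m

# Table 4.1 configuration (microns): 75 pixels of 7.4 um pitch, 555 um
# field of view, 1.5 mm main lens, f/4 lenslets (f_l/p_l = 500/125)
print(camera_parameters(75, 7.4, 555.0, 1500.0, 4.0))
```

For the Table 4.1 values this yields M = −1, s_i = s_o = 3000 µm, a main-lens f-number of 2, and a 750 µm aperture.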
The positioning of the lenslet array must now be defined. It would be computationally
efficient and simple to develop an algorithm where the lenslet array begins at the exact edge
of the CCD. However, in an actual camera, the lenslet array will not be perfectly aligned
with the edge of the CCD and will exhibit a slight overlap over the entire sensor. To simulate
this, the number of lenslets needed to completely cover the sensor is determined. The total
size of the array will be slightly larger than the sensor, so an offset is calculated and applied
to center the lenslet array with respect to the optical axis. The simulated lenslet array can
be defined by three variables: the lenslet pitch p_l, the number of lenslets in the vertical
direction n_l,y, and a vertical offset o_y. Determining the lenslet n_s,y that a ray strikes is then
a computationally efficient process of division and rounding, as given by equation 4.10. The
corresponding shifts require multiplying the lenslet number by the lenslet pitch, then adding
a shift to move from the corner to the center of the lenslet, as shown in equation 4.11. Note
that the lenslet numbering starts at the top left corner of the sensor, which requires the
rays to be shifted by an amount equal to half the sensor size. This is accounted for in the
following equations.
n_{s,y} = round[(y + n_{p,y} p_p/2 - o_y - p_l/2)/p_l]   (4.10)
s_y = n_{s,y} p_l + o_y + p_l/2 - n_{p,y} p_p/2          (4.11)
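Equations 4.10 and 4.11 amount to one division, one rounding, and one rescaling per ray. A sketch with Table 4.1 values (the numeric centering offset and its sign convention are my assumption):

```python
def hit_lenslet(y, n_py, p_p, p_l, o_y):
    """Lenslet index and lenslet-center offset for a ray arriving at
    height y (equations 4.10 and 4.11); numbering starts at the top edge."""
    n_sy = round((y + n_py * p_p / 2 - o_y - p_l / 2) / p_l)   # (4.10)
    s_y = n_sy * p_l + o_y + p_l / 2 - n_py * p_p / 2          # (4.11)
    return n_sy, s_y

# Table 4.1: 75 pixels of 7.4 um pitch (555 um sensor), 125 um lenslets.
# Five lenslets (625 um) overhang the sensor, so the centering offset is
# taken as -(625 - 555)/2 = -35 um (offset convention assumed).
print(hit_lenslet(0.0, 75, 7.4, 125.0, -35.0))
```

A ray arriving on the optical axis (y = 0) lands on the middle lenslet (index 2 of 0 through 4) with zero offset from its center, as expected for a centered array.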
With the physical parameters of the simulator defined, the actual simulation process
can be presented. As mentioned previously, the simulator acts as a ray-tracing program
to propagate rays of light through the main lens and lenslet array to the sensor plane.
Initializing the rays for this procedure consists of two steps: the angular sweep of rays
must be defined and the rays themselves must be created. The first step is important for
computational efficiency; the simulator should not allow rays to be generated which will fall
outside of the camera aperture. These bounds are easily calculated through equations 4.12
and 4.13 for each particle. The computational implementation uses the atan2 function to
ensure the correct sign is returned.
θ_max = tan⁻¹[(p_m/2 - d_y)/(s_o + d_z)]                 (4.12)
θ_min = tan⁻¹[(-p_m/2 - d_y)/(s_o + d_z)]                (4.13)
At this point, a correction is added to the simulator to take into account the differences
in the solid collection angle of light depending on the position of the particle. The angle
subtended by rays intersecting the outer circumference of the aperture is denoted as the
reference angle θ_ref. For each particle, the subtended angles are calculated and divided
by the reference angle to determine the corrected number of rays to generate for each point,
as given in equation 4.14.
n_{rays,corr} = round[n_{rays}(θ_max - θ_min)/θ_ref]     (4.14)
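Equations 4.12 through 4.14 can be sketched as follows; the reference angle is taken here as the sweep subtended by an on-axis particle at the focal plane, which is my reading of the text, and the function name is mine:

```python
import math

def ray_budget(d_y, d_z, p_m, s_o, n_rays, theta_ref):
    """Angular bounds of rays that enter the aperture (equations 4.12
    and 4.13) and the solid-angle-corrected ray count (equation 4.14)."""
    theta_max = math.atan2( p_m / 2 - d_y, s_o + d_z)             # (4.12)
    theta_min = math.atan2(-p_m / 2 - d_y, s_o + d_z)             # (4.13)
    n_corr = round(n_rays * (theta_max - theta_min) / theta_ref)  # (4.14)
    return theta_max, theta_min, n_corr

# Reference: the sweep seen by an on-axis particle at the focal plane
# (p_m = 750 um and s_o = 3000 um follow from Table 4.1)
theta_ref = 2 * math.atan2(750.0 / 2, 3000.0)
# An on-axis, in-focus particle keeps the nominal 50,000 rays; a particle
# moved away from the camera subtends a smaller angle and receives fewer
print(ray_budget(0.0, 0.0, 750.0, 3000.0, 50000, theta_ref))
```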
A uniformly distributed random number generator is used to produce randomized angles
for the rays which fall within the angular bounds defined above. Using a random distribution
of angles better approximates the stochastic nature of image formation and prevents an
accumulation of simulation artifacts which could occur for uniform angular distributions.
With the initial position x, y, z and angles θ, φ of the ray defined, the tracing begins through
the use of equations 4.2 and 4.3. For the case of the 1-D simulation, these are simplified
to terms of y, z, and θ. The equations simulate propagation of the ray to the main lens,
through the main lens, and to the lenslet array. At this point, the lenslet that the ray strikes
must be determined using equations 4.10 and 4.11. This shift is then used in conjunction
with equation 4.4 to propagate the ray through the correct lenslet and to the sensor array.
When the ray reaches the sensor, the pixel that it strikes must be determined. A similar
division and rounding procedure can be used to determine the correct pixel y_px, as given in
equation 4.15. Note that as before, the pixel coordinates begin at the top left corner of the
image and are accounted for with this equation.
y_{px} = round[((p_p n_{p,y} + p_p)/2 - y)/p_p]          (4.15)
This series of steps is repeated until the correct number of rays is generated for each
particle. Each ray accumulates signal on the sensor to form the final image. For the 1-D
simulator, this image is a 1-D array which is visualized using simple plotting.
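Putting the steps above together, a single-ray pass of the 1-D trace might look like the following sketch. The 1-D reductions of equations 4.2 through 4.4 appear as in-line updates, and the −35 µm lenslet-array offset is my assumption for the Table 4.1 geometry:

```python
import math, random

def trace_ray_1d(d_y, d_z, cam):
    """Trace one randomly angled ray from a point source to a pixel index."""
    s_o, s_i, p_m = cam['s_o'], cam['s_i'], cam['p_m']
    f_m, f_l, p_l = cam['f_m'], cam['f_l'], cam['p_l']
    n_py, p_p, o_y = cam['n_py'], cam['p_p'], cam['o_y']
    # random angle inside the aperture bounds (equations 4.12, 4.13)
    th_max = math.atan2( p_m / 2 - d_y, s_o + d_z)
    th_min = math.atan2(-p_m / 2 - d_y, s_o + d_z)
    y, th = d_y, random.uniform(th_min, th_max)
    y += (s_o + d_z) * th                                  # to main lens (4.2)
    th -= y / f_m                                          # main lens (4.3)
    y += s_i * th                                          # to lenslet array
    n_sy = round((y + n_py * p_p / 2 - o_y - p_l / 2) / p_l)   # (4.10)
    s_y = n_sy * p_l + o_y + p_l / 2 - n_py * p_p / 2          # (4.11)
    th += (s_y - y) / f_l                                  # off-axis lenslet (4.4)
    y += f_l * th                                          # to the sensor
    return round(((p_p * n_py + p_p) / 2 - y) / p_p)       # pixel index (4.15)

# Table 4.1 geometry in microns; o_y assumed as described above
cam = dict(s_o=3000.0, s_i=3000.0, p_m=750.0, f_m=1500.0,
           f_l=500.0, p_l=125.0, n_py=75, p_p=7.4, o_y=-35.0)
random.seed(0)
pixels = [trace_ray_1d(0.0, 0.0, cam) for _ in range(1000)]
print(min(pixels), max(pixels))  # in-focus rays stay under the central lenslet
```

Note that Python's built-in round() uses round-half-to-even; for the values arising here this matches ordinary rounding, but an implementation matching MATLAB-style round-half-away-from-zero would need a small helper.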
4.2.1 Tests using the 1-D Simulator
Four distinct test cases are used to illustrate important physical concepts behind the
plenoptic camera. These include a positive and negative depth change varying only d_z, an
in-plane displacement test varying only d_y, and a case involving a blur across two lenslets
due to a change in both d_y and d_z. These tests lead to the formation of an expression for the
in-plane and out-of-plane resolution of the camera and allow the expressions to be tested.

For these test cases a simplified set of parameters is used, listed below in Table 4.1. Note
that in order to fit the main lens and lenslet array into a single readable plot, the focal
length of the main lens is reduced to a very small value of 1.5 mm, which reduces the values
of s_o, s_i, and p_m. This does not affect the trends which will be visualized herein, and is only
for display purposes. In all cases, a total of 50,000 rays were used to accumulate statistically
significant signal values. For plotting purposes, only 100 of the 50,000 rays are shown in the
following ray diagrams.
The first test evaluates a particle at two different depths. Figure 4.2 presents a simu-
lation where the particle is located on the optical axis but shifted 1000 µm away from the
Table 4.1: Definition of Simulator Parameters

Lenslet Focal Length    f_l     500 µm (0.500 mm)
Lenslet Pitch           p_l     125 µm (0.125 mm)
Pixel Pitch             p_p     7.4 µm (0.0074 mm)
Number of Lenslets      n_l,y   5
Number of Pixels        n_p,y   75
Main Lens Focal Length  f_m     1500 µm (1.5 mm)
Magnification           M       -1
camera. The first vertical line from the left is the object focal plane, the second line repre-
sents the main lens, the third line is the position of the lenslet array, and the fourth line is
the pixel array. To the right of the pixel array is a curve which provides a relative measure
of the integrated signal at each pixel. Readily apparent in the image is the large blur spot
incident on the lenslet array. Since this blur spot spans multiple lenslets, the signal is spread
to multiple locations on the sensor. The incident angle of the light rays on the lenslet array
determines the resulting pattern at the sensor. Figure 4.3 is an alternate case demonstrating
a particle located 1000 µm closer to the camera lens. Note the different signal pattern, which
is caused by the large difference in angles incident on the lenslet array. These signal patterns
are a key element in plenoptic camera performance, which allows the depth of particles to
be determined. Also, these two examples indicate that the depth of two in-line particles can
be determined without any ambiguity.
The next test exposes a potential limitation of the plenoptic camera. Figure 4.4 presents
an example where the particle is located on the focal plane with a −62 µm in-plane displace-
ment, which forms an image at the edge of the central lenslet. Figure 4.5 locates the particle
at the center of the focal plane, resulting in an image being formed at the middle of the
central lenslet. Finally, Figure 4.6 has a +62 µm in-plane displacement, the exact opposite
of Figure 4.4. The most noticeable characteristic of these images is that the resulting
signal is very nearly identical, with hardly any bias in signal level, and no signal contributions
from other lenslets. This illustrates a worst-case scenario for the plenoptic camera; namely,
Figure 4.2: Depth testing, dy = 0 µm, dz = +1000 µm.
Figure 4.3: Depth testing, dy = 0 µm, dz = −1000 µm.
Figure 4.4: Spatial resolution testing, dy = −62 µm, dz = 0 µm.
particles that lie within the focal plane cannot in general be resolved with a resolution
greater than half the lenslet pitch divided by the magnification (p_l/2M). Figure 4.7 expands
on this idea: when the particle is moved only a very small amount further, the signal
changes dramatically. The slight bias in edge pixels in these simulations does present
a means to measure the displacement within a lenslet; however, the variation in intensities is
likely too small to be captured with a detector or will be under the noise floor of the camera.
The next test illustrates a scenario for determining the spatial location of a particle
with sub-lenslet accuracy. Figure 4.8 illustrates a case where a blur spot is formed on the
lenslet array. When the particle is moved 20 µm in the y-direction (Figure 4.9), a portion of
the blur spot crosses over into the next lenslet and is deflected accordingly. This results in
a substantial change in the resulting signal levels.
As seen in these cases, much more information is available on the location of the par-
ticle when the signal is spread out over multiple lenslets. Not only is a new set of angular
information available, but the difference in signal levels is also an indicator of the particle po-
sition. However, the worst-case scenario presented previously is important in quantifying
Figure 4.5: Spatial resolution testing, dy = 0 µm, dz = 0 µm.
Figure 4.6: Spatial resolution testing, dy = +62 µm, dz = 0 µm.
Figure 4.7: Spatial resolution testing, dy = +63 µm, dz = 0 µm.
Figure 4.8: Spatial resolution testing, dy = +10 µm, dz = +500 µm.
Figure 4.9: Spatial resolution testing, dy = +30 µm, dz = +500 µm.
the measurement uncertainty of the camera. Rather than using a large number of simula-
tions to determine the bounds of the uncertainty, the depth-of-field equations can be used
to generalize the procedure and will be discussed in the next section.
4.2.2 Depth of Field and Achievable Resolution
Consider the ray schematic shown in Figure 4.10, which is an illustration of the classic
depth-of-field concept often used in photography. A point located a distance D_o in front of
a convex lens will focus to a sharp point behind the lens at a distance D_i. A fundamental
premise of geometrical optics is that the image of an infinitely small point is itself infinitely
sharp (i.e., has an infinitely small spot size). However, real optics applications utilize detec-
tors with sensor elements of a finite size. To characterize the effect this has on the optical
resolution of a system, a circle of confusion c is defined as an arbitrary measure of spatial res-
olution at the image plane. As shown in Figure 4.10, this circle of confusion can be seen as a
bounding diameter for the intersection of two light cones that originate at different locations
D_{F,o} and D_{N,o}, positioned on both sides of the original point D_o. These two spots are known
Figure 4.10: Schematic of dimensions for depth of field calculations.
as the far depth-of-field and the near depth-of-field, respectively, and the difference between
them, δz, is called the focal depth, or simply the depth-of-field. Another way to understand
this concept is that if the circle of confusion is defined to be the size of a pixel on a sensor,
the focal depth δz indicates the smallest physical depth over which the pixel integrates light
for the imaging condition defined by D_o and D_i.

This concept provides a simple way to estimate the achievable depth resolution of the
plenoptic camera if the circle of confusion is defined to be the lenslet pitch (in the next
chapter, this idea will be extended to give closed-form expressions for proper focal plane
spacing in computationally refocused images). The depth-of-field equations can be derived
using geometry and similar triangles; the resulting expressions are given in equations 4.16
through 4.18. In these equations, f is the main lens focal length, f/# is the main lens f-
number, c is the circle of confusion, and s is the location of the particle in object space. The
circle of confusion is defined as the lenslet pitch p_l. For a particle lying on the focus plane (i.e.,
s = s_o) and with the conditions specified in Table 4.1, these equations yield D_{N,o} = 2570 µm
(2.57 mm) and D_{F,o} = 3600 µm (3.60 mm), leading to a total ambiguity in depth δz = 1030 µm
(1.03 mm). The maximum in-plane ambiguity is given by equation 4.19, δy = 62.5 µm
(0.0625 mm).
DN,o = s f² / (f² + c (s − f)(f/#))    (4.16)

DF,o = s f² / (f² − c (s − f)(f/#))    (4.17)
Δz = DF,o − DN,o    (4.18)

Δy = pl / (2M)    (4.19)
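As an illustration, equations 4.16 through 4.18 can be evaluated with the short Python sketch below. The simulation codes in this work are written in MATLAB and Fortran; Python is used here purely for illustration, and the parameter values shown (a 50 mm lens at f/2 focused at s = 100 mm with a 0.125 mm circle of confusion) are example inputs, not necessarily the Table 4.1 configuration.

```python
def depth_of_field(s, f, c, f_num):
    """Near/far depth-of-field planes and total focal depth (eqs. 4.16-4.18).

    s: object distance, f: main lens focal length, c: circle of confusion
    (taken to be the lenslet pitch p_l), f_num: main lens f-number.
    All lengths in millimeters.
    """
    d_no = s * f**2 / (f**2 + c * (s - f) * f_num)  # near plane (eq. 4.16)
    d_fo = s * f**2 / (f**2 - c * (s - f) * f_num)  # far plane (eq. 4.17)
    return d_no, d_fo, d_fo - d_no                  # focal depth (eq. 4.18)

# Example values: f = 50 mm lens at f/2, c = 0.125 mm lenslet pitch,
# object on the focal plane at s = 100 mm (i.e., a magnification of -1).
d_no, d_fo, dz = depth_of_field(100.0, 50.0, 0.125, 2.0)
```

Note that the near plane always falls closer to the lens than the focus plane and the far plane beyond it, so the focal depth Δz straddles the object distance s.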
Now that analytical expressions have been developed for the maximum uncertainty in both the in-plane and out-of-plane displacement, the effect of other variables can be considered. In particular, magnification and focal length are two important parameters that can be varied in an experiment. Note that changes of magnification for a fixed focal length lens result in a change in the values of so and si. Secondly, the effect of focal length itself is analyzed for two common focal lengths: 50 mm and 200 mm. In Figure 4.11, the plot is made for a focal length of 50 mm. Perhaps intuitively, both ambiguities increase as a less negative magnification is used (corresponding to a larger field of view and a larger so). This can be visualized in the context of Figure 4.10: as the distance so from the main lens increases, the diamond shape which defines the ambiguity becomes stretched along the optical axis. The in-plane ambiguity also increases with a less negative magnification. This plot indicates that ambiguities can be minimized by using large negative magnifications (small values of so and small fields of view). Also, as indicated in equations 4.16 and 4.17, the depth resolution is a function of f-number; for a lower f-number, the size of the ambiguity will decrease. Surprisingly, the plot in Figure 4.11 is the same for a focal length of 200 mm. This is due to the calculation of optical quantities: the diameter of the main lens aperture is increased for a 200 mm focal length, making the size of the ambiguity similar.
4.2.3 Including Diffraction Effects
The geometric optics model is an approximation to the actual behavior of light, and is essentially correct when the wavelength of light is vanishingly small. In this case, the use of discrete rays forming precisely defined spots in the image is appropriate. However, this simplified model does not take into account the realistic effects of diffraction that arise when the wavelength of light is a finite value. For a point-like image, diffraction causes the
Figure 4.11: Effect of magnification on ambiguity, f = 50 mm. (Depth ambiguity Δz and in-plane ambiguity Δy, in μm, plotted against magnification.)
precisely defined spot to spread and form a spot with a finite diameter. This has important implications for PIV imaging because the diffraction spot size is typically on the order of, or larger than, the pixel size. The effect of diffraction on general optical systems has been described by Smith [47], and is further defined for the specific case of particle imaging by Adrian and Westerweel [3]. In this section, the need for including diffraction effects is demonstrated and a method for incorporating diffraction into the simulation is discussed.
To fix thoughts, consider the current imaging configuration where the magnification M = −1, the f-number of the main lens (f/#)m = 2, the wavelength of light λ = 532 nm, and the diameter of a particle dp = 1 μm. An approximation to the diameter of a particle image at the image plane, including diffraction effects, is given in equation 4.20, from Adrian and Westerweel [3, p. 101]. In this equation, ds is the diffraction spot diameter given by equation 4.21. Note that this reference defines the magnification as a positive rather than a negative quantity, and thus positive magnification is used in these equations.
d ≈ (M² dp² + ds²)^(1/2)    (4.20)
ds = 2.44 (1 + M) λ (f/#)m    (4.21)
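These two expressions are simple to check numerically. The following Python sketch (a direct transcription of equations 4.20 and 4.21, not the thesis code) reproduces the values discussed below for M = 1, (f/#)m = 2, λ = 0.532 μm, and dp = 1 μm.

```python
def particle_image_diameter(M, f_num, wavelength, d_p):
    """Particle image diameter including diffraction (eqs. 4.20 and 4.21).

    M: magnification (positive, following Adrian & Westerweel's convention),
    f_num: main lens f-number, wavelength and d_p in micrometers.
    Returns (d_s, d) in micrometers.
    """
    d_s = 2.44 * (1.0 + M) * wavelength * f_num  # diffraction spot (eq. 4.21)
    d = (M**2 * d_p**2 + d_s**2) ** 0.5          # image diameter (eq. 4.20)
    return d_s, d

d_s, d = particle_image_diameter(M=1.0, f_num=2.0, wavelength=0.532, d_p=1.0)
# d_s is approximately 5.19 um, and d is approximately 5.3 um.
```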
With the experimental parameters defined above, the diffraction spot size is ds = 5.19 μm and the resulting particle image diameter is d = 5.28 μm. For a lenslet of 125 μm pitch, d is approximately 4.2% of the lenslet diameter. At the sensor plane, the magnification must be recalculated for the lenslets themselves, M = fl/si. This value is close to zero due to the design of the camera. When this magnification is used in equations 4.20 and 4.21, the diffraction spot size at the sensor plane is ds = 5.2 μm, and the particle image diameter is d = 5.3 μm. This is approximately 70.5% of the pixel size based on a 7.4 μm pixel. Note that the reason the blur spots come to be the same size for both the lenslet and pixel is that the value of the magnification term is reduced by a factor of two for the pixel, which corresponds to the f-number being greater by a factor of 2 for the main lens. For this reason, diffraction effects must be included in the simulator at both the lenslet and sensor planes to produce realistic images. Also, diffraction causes an order of magnitude larger blur than the geometric image of the particle itself. Thus, only diffraction is taken into account and the negligible effect of finite particle diameter is ignored. These calculations are summarized in Table 4.2.
Table 4.2: Definition of Diffraction Parameters
Diffraction-Limited Spot Size at Lenslet Array    ds,m    5.2 μm
Std. Dev. of Normal Distribution at Lenslet Array    σm    0.96 μm
Diffraction-Limited Spot Size as a Percentage of Lenslet Pitch    4.2%
Diffraction-Limited Spot Size at Sensor Array    ds,l    5.2 μm
Std. Dev. of Normal Distribution at Sensor Array    σl    0.96 μm
Diffraction-Limited Spot Size as a Percentage of Pixel Pitch    70.5%
The approach to implementing diffraction in this work appeals to the stochastic nature of image formation. In the ray tracing procedure, diffraction is simulated as a random shift of the spatial coordinate of the ray at both the lenslet plane and the sensor plane. The behavior of this shift is governed by the diffraction pattern of a planar wavefront passing through a circular aperture, which is then focused onto a planar surface or sensor. The
intensity pattern of the focused spot is given by the Airy function shown in equation 4.22. In this equation, r is a radial coordinate whose origin is located at the central location of the point, Da is the diameter of the aperture, Z0 is the distance from the aperture to the point of focus, and J1 is the first-order Bessel function of the first kind.
|h(r)|² = [ π Da² / (4 λ Z0) ]² [ 2 J1(π Da r / (λ Z0)) / (π Da r / (λ Z0)) ]²    (4.22)
Equation 4.22 is the theoretical solution for the intensity distribution; however, calculation of the Bessel function is an iterative procedure, which makes computation difficult and not well suited for massive computations. In contrast, the Gaussian function is calculated through a simple closed-form expression and has advantages in terms of both computational efficiency and mathematical properties. Equation 4.23 is the Gaussian function presented in Adrian and Westerweel [3, p. 101]. The Gaussian function can be fit to the Airy function by setting the parameter β² = 3.67. A comparison of these two functions is given in Figure 4.12, showing a qualitatively good fit.
g(s) = c exp( −4 β² s² / ds² )    (4.23)
Since the Gaussian function approximates the actual diffraction pattern, we can assume that a focused particle image will have an intensity profile that is roughly Gaussian. The image is actually created from the statistical distribution of photons incident on a sensor plane. Thus, it can be understood as having been compiled from large numbers of individual photons arranged in a normal (Gaussian) statistical distribution, or probability density function (PDF). The definition of a normal distribution is generally given in terms of a standard deviation σ and a mean μ, as shown in equation 4.24. By taking the mean value to be zero and equating the exponential terms of equations 4.23 and 4.24, the standard deviation can be explicitly determined as given in equation 4.25. The values of the standard deviation at
Figure 4.12: Comparison of Airy and Gaussian functions, with the e^(−β²) threshold indicated. Adapted from Adrian and Westerweel [3, p. 101].
both the lenslet and sensor plane are given in table 4.2.
f(s) = c exp( −(s − μ)² / (2σ²) )    (4.24)
σ = √( ds² / (8 β²) )    (4.25)
These definitions now allow diffraction to be introduced into the ray tracing. When a ray arrives at the lenslet plane, a normally distributed random number generator is used to generate a random shift of the spatial coordinates. Note that the mean of the random number generator is set to the initial spatial coordinate of the ray, and the standard deviation is set as defined in equation 4.25. This process is used independently in both coordinate directions. The ray is then propagated through the lenslet array using the appropriate shifts sx and sy as previously defined. When the ray reaches the sensor, a similar procedure is performed, but using the optical parameters of the lenslet to define ds rather than the optical parameters of the main lens. This process is shown schematically in Figure 4.13.
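A minimal sketch of this random shift is given below in Python (the thesis implementation uses MATLAB's randn; the function names here are illustrative). Note that for ds = 5.19 μm and β² = 3.67, the standard deviation works out to roughly 0.96 μm, the value listed in Table 4.2.

```python
import random

BETA_SQ = 3.67  # Gaussian fit parameter to the Airy function (Section 4.2.3)

def diffraction_sigma(d_s):
    """Standard deviation of the diffraction shift: sigma = sqrt(ds^2 / (8 beta^2))."""
    return (d_s**2 / (8.0 * BETA_SQ)) ** 0.5

def apply_diffraction_shift(x, y, d_s, rng=random):
    """Shift a ray's spatial coordinates by a normally distributed random amount,
    applied independently in each coordinate direction. The mean is the ray's
    current coordinate and the standard deviation follows eq. 4.25."""
    sigma = diffraction_sigma(d_s)
    return rng.gauss(x, sigma), rng.gauss(y, sigma)

# Main lens: d_s = 5.19 um gives sigma of roughly 0.96 um (cf. Table 4.2).
sigma_m = diffraction_sigma(5.19)
```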
Figure 4.13: Schematic of diffraction simulation, applied at both the lenslet array (spot size ds,m) and the sensor (spot size ds,l).
The algorithm was first tested in the 1-D simulator, using the MATLAB normally distributed random number generator randn. Figure 4.14 shows a run of the simulator using the same particle placement as Figure 4.6. The effect of diffraction is to spread the signal near the edge of the central lenslet to the adjoining lenslet. Potentially, this effect could be exploited to reduce the ambiguity in spatial resolution by an amount equal to twice the diffraction-limited spot size at the lenslet plane. It also suggests that another method for reducing the ambiguity is to intentionally blur the image using a ground glass diffuser. These are commercially available for photographers, but will not be considered in this work, primarily because they typically reduce the light level by 10-20%.
4.3 2-Dimensional Ray-Tracing Simulator
The simulator is now extended to produce 2-D images of multiple particles distributed within a volume. The ray tracing procedure is similar to the procedure developed earlier; however, the scale of the computation is much larger and a separate approach is needed. Consider the previous 1-D simulator, which traced 50,000 rays for a single particle. For the 2-D simulator, the same number of rays or more are used, but the number of particles is much larger, in many cases greater than 1,000. Thus, the computational effort can be orders of magnitude greater. For this reason, the 2-D simulator has been concurrently programmed both in MATLAB, to validate the computational algorithm, and in Fortran, to provide increases
Figure 4.14: Diffraction testing using the same conditions as tested in Figure 4.6, dy = +62 μm, dz = 0 μm.
in speed. The Fortran version is supplied in the appendix. The use of Fortran requires the optical configuration and particle field to be generated and saved in separate text files to be processed by the simulator. The MATLAB code outputs directly to a TIFF-format image file, while the Fortran code outputs an ASCII file which can be converted to an image file after processing.
4.3.1 Optical Configuration
The optical configuration for the 2-D simulator proceeds along the same lines as the 1-D simulator. An assumption made to simplify the extension to 2-D is the use of both square lenslets and square pixels; therefore, the pixel pitch in the x-direction is equal to that in the y-direction (pp,x = pp,y) and the lenslet pitch is equal in both directions (pl,x = pl,y). Thus, the pixel and lenslet pitches can be defined as before with the variables pp and pl. This is a valid simplification, as the vast majority of CCD sensors utilize square pixels, and the lenslet arrays considered for the current work are comprised of square lenslets.
Another modification is made to take into account the aperture stops on most common lenses. Equation 4.8 will return an exact value for the f-number; however, most lenses are only offered with a discrete number of f-number settings, known as f-stops. Thus, a list of common f-stops is used so that the f-number is rounded up to the nearest commonly available f-stop (i.e., an exact f-number of 3.2 would be rounded up to 4 rather than to the nearest value of 2.8). This ensures there is no overlap in the lenslet images.
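This rounding rule can be sketched in a few lines of Python; the list of standard full stops below is an assumption about which f-stops are "commonly available," not taken from the thesis code.

```python
# Standard full-stop f-numbers assumed available on common lenses.
COMMON_F_STOPS = [1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0]

def round_up_f_number(f_exact):
    """Round an exact f-number UP to the nearest commonly available f-stop,
    so that the lenslet images can never overlap (e.g., 3.2 -> 4, not 2.8)."""
    for stop in COMMON_F_STOPS:
        if stop >= f_exact:
            return stop
    raise ValueError("f-number exceeds the largest available f-stop")
```

Rounding up (rather than to the nearest stop) always errs toward a smaller aperture, which is the conservative choice for preventing lenslet image overlap.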
For fast computation, the layout of the lenslet array is defined as a square grid (in addition to the individual lenslets themselves being square). Note that a hexagonal layout achieves an approximately 20% increase in spatial resolution by reducing unusable space between each lenslet; however, no manufacturer was able to provide a hexagonal lenslet array, which forced the use of an array arranged in a square grid. The simulated lenslet array can be defined by five variables: the lenslet pitch pl, the number of lenslets in the horizontal direction nl,x, the number of lenslets in the vertical direction nl,y, a horizontal offset ox, and a vertical offset oy. The offsets allow the array to be made larger than the sensor and then centered, similar to the 1-D simulator. Determining the lenslet that a ray strikes is then a computationally efficient process of division and rounding. The corresponding shifts require multiplying the lenslet number by the lenslet pitch, then adding a shift to move from the corner to the center of the lenslet, as defined previously in equations 4.10 and 4.11.
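The division-and-rounding lookup described above might look like the following Python sketch (the variable names are illustrative; the thesis implementation is in MATLAB and Fortran):

```python
import math

def lenslet_hit(x, y, p_l, o_x, o_y):
    """Return the integer lenslet indices struck by a ray at (x, y), plus the
    lenslet center coordinates. The array is a square grid of pitch p_l with
    offsets (o_x, o_y); the indices come from division and rounding down, and
    the center is the lenslet corner plus half a pitch."""
    i = math.floor((x - o_x) / p_l)
    j = math.floor((y - o_y) / p_l)
    cx = o_x + i * p_l + p_l / 2.0  # corner-to-center shift
    cy = o_y + j * p_l + p_l / 2.0
    return i, j, cx, cy
```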
For visualization purposes, the center locations of the lenslets lx and ly can be calculated using equations 4.26 and 4.27, respectively. Figure 4.15 shows the upper-left and lower-right corners of the lenslet array and sensor. In this figure, the array covers all pixels of the sensor, and along the edge only portions of the individual lenslets overlay the sensor. This approximates how a real lenslet array would be aligned over a sensor.
lx = −nx pl/2 + ox + pl/2    (4.26)

ly = −ny pl/2 + oy + pl/2    (4.27)
Figure 4.15: Schematic of the lenslet array (red) overlaying the edge of the CCD array (blue).
A summary of these and other optical parameters is given in Table 4.3. Unless otherwise noted, these settings will be used for all 2-D simulations herein. The script outputs this information to an ASCII file which is used as an input to the Fortran ray-tracing program.
Table 4.3: Definition of 2-D Simulator Parameters, based on Kodak KAI-16000 sensor.
Lenslet Focal Length    fl    0.500 mm (500 μm)
Lenslet Pitch    pl    0.125 mm (125 μm)
Pixel Pitch    pp    0.0074 mm (7.4 μm)
Number of Pixels, X-Direction (Horizontal)    np,x    4872
Number of Pixels, Y-Direction (Vertical)    np,y    3248
Sensor Size, X-Direction (Horizontal)    36.05 mm
Sensor Size, Y-Direction (Vertical)    24.04 mm
Number of Lenslets, X-Direction (Horizontal)    nl,x    289
Number of Lenslets, Y-Direction (Vertical)    nl,y    193
Main Lens Focal Length    fm    50 mm
Magnification    M    −1
4.3.2 Particle Field Generation
Similar to the 1-D simulator, all particles are approximated as point sources of rays. The generation of particles is kept separate from the optical configuration script to allow different optical configurations to be tested with identical particle fields. A MATLAB script is used to generate the particle field, starting with two variables, npts and nrays, which define the number of particles to generate and the number of rays to be simulated for each particle. The particle field is defined over a volume given by the bounds xspan, yspan, and zspan. A schematic of this configuration is shown in Figure 4.16. Note that the z-axis is the optical axis of the system, and at z = 0, the x-y plane is defined as the object focal plane of the camera. Thus the origin of this coordinate system is the intersection of the focal plane and the camera optical axis. In addition to the coordinates of the particle, an intensity coefficient is defined for each particle to simulate variations in particle scattering. The intensity variation is defined by the bounds imin to imax.
Using this information, the particle field coordinates and intensity coefficients are created using a uniformly distributed random number generator. The output is written to an ASCII text file, with the first two lines defining npts and nrays, followed by the particle information written out in four-column format specifying the x, y, and z coordinates and the intensity coefficient, i.
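A Python sketch of this generation step is given below (the original script is MATLAB; the file layout follows the description above, with npts and nrays on the first two lines followed by four columns per particle, and the function name is illustrative):

```python
import random

def generate_particle_field(npts, nrays, xspan, yspan, zspan, imin, imax, seed=None):
    """Create npts particles uniformly distributed over the volume and return
    the ASCII text described above: npts and nrays on the first two lines,
    then one line per particle with x, y, z, and intensity coefficient i."""
    rng = random.Random(seed)
    lines = [str(npts), str(nrays)]
    for _ in range(npts):
        x = rng.uniform(-xspan / 2.0, xspan / 2.0)
        y = rng.uniform(-yspan / 2.0, yspan / 2.0)
        z = rng.uniform(-zspan / 2.0, zspan / 2.0)  # z = 0 is the focal plane
        i = rng.uniform(imin, imax)
        lines.append(f"{x:.6f} {y:.6f} {z:.6f} {i:.6f}")
    return "\n".join(lines)
```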
Figure 4.16: Particle field definitions (bounds xspan, yspan, and zspan about the focal plane, with the optical axis along +z).
4.3.3 Tests using the 2-D Simulator
Multiple test cases are used to demonstrate the 2-D performance of the simulator on cases examined earlier with the 1-D simulator. These include positive and negative depth changes varying only dz and an in-plane displacement test varying only dy. Following these tests, fields of multiple particles are simulated, first involving particle fields with no depth and then extending to finite depths.
Figure 4.17 represents the 2-D image of an in-focus particle. In this case, a single lenslet is illuminated, as demonstrated earlier for the 1-D case. The light lines in the image represent the individual lenslets. The next extension comes by adding a change in depth. Figure 4.18 shows cases of movements farther from and closer to the camera. The image of the particle is spread over multiple lenslets. Notably, the pattern is unique at each position, and differs between identical magnitudes of positive and negative depth.
Figure 4.17: In-focus particle image (dx = 0, dy = 0, and dz = 0). Lenslet array outline given in light gray.
Changes in in-plane motion are shown in Figure 4.19. The left column of this figure shows an in-focus particle. In particular, Figure 4.19b shows diffraction spreading some signal to a neighboring lenslet. The right column shows the motion of an out-of-focus particle. The motion of the particle can be determined more accurately because the signal is no longer constrained by the ambiguity over a single lenslet. This indicates the potential for sub-lenslet resolution in certain cases.
Another example of the robustness of the plenoptic approach is with two in-line particles. Figure 4.20 shows images of two in-line particles displaced from each other by varying amounts. The unique patterns in the signal indicate ways to determine the depth of each particle. This will be explored again when the refocusing algorithm is demonstrated.
(a) dz = +1 mm (d) dz = −1 mm
(b) dz = +2 mm (e) dz = −2 mm
(c) dz = +3 mm (f) dz = −3 mm
Figure 4.18: Effect of a depth change on a single particle image. Lenslet array outline given in light gray. In all cases, dx = 0 and dy = 0.
The next images demonstrate particle field images. Figures 4.21 through 4.24 show particle fields with various values of zspan. Note the unique patterns formed in each image, which are significantly different from a corresponding 2-D image. The light field algorithms
(a) dy = +0 mm (d) dy = +0 mm
(b) dy = +0.062 mm (e) dy = +0.062 mm
(c) dy = +0.125 mm (f) dy = +0.125 mm
Figure 4.19: Effect of an in-plane and depth change on a single particle image. Lenslet array outline given in light gray. In the left column, both dx = 0 and dz = 0. In the right column, dx = 0 and dz = +2 mm.
to be discussed in the next chapter will be capable of extracting individual planes from this
data.
(a) dz = −3, −1 mm (d) dz = −3, +1 mm
(b) dz = −3, −2 mm (e) dz = −3, +2 mm
(c) dz = −3, −3 mm (f) dz = −3, +3 mm
Figure 4.20: Example of two in-line particles, where the depths of the two particles are individually varied. Lenslet array outline given in light gray. In all cases, dx = 0 and dy = 0.
Figure 4.21: In-focus particle image (zspan = 0). 1,000 particles simulated with 25,000 rays each.

Figure 4.22: Particle image (zspan = 3.0 mm). 1,000 particles simulated with 25,000 rays each. The lenslet grid outline has been removed to improve clarity.

Figure 4.23: Particle image (zspan = 5.0 mm). 1,000 particles simulated with 25,000 rays each. The lenslet grid outline has been removed to improve clarity.

Figure 4.24: Particle image (zspan = 7.0 mm). 1,000 particles simulated with 25,000 rays each. The lenslet grid outline has been removed to improve clarity.
4.3.4 Angular Statistics
As an aside, the quantum efficiency of CCD cameras has an angular dependence because most modern sensors incorporate a microlens array positioned at the sensor surface to increase fill factor. For example, the Kodak KAI-16000 sensor equipped with a microlens array exhibits a substantial reduction in relative quantum efficiency when light rays are incident on the detector at angles beyond +/- 10 degrees [48]. Figures 4.25 and 4.26 are histograms showing the distribution of angles at the sensor plane. Note that in both directions, a reasonable fraction of light rays arrive at the sensor at angles greater than 10 degrees. This indicates a microlens array could distort or attenuate the resulting signal values by a small amount. This analysis was done only for the specific conditions that have been simulated, and there may be scenarios where the angular distribution is more extreme. As a worst-case bound on the angle incident on the sensor, equation 4.28 can be used. For the conditions specified in Table 4.3, the maximum angle is approximately 19 degrees, which is consistent with the histogram values.
θmax = pl / (2 fl) + tan⁻¹( √( (np,x pp/2)² + (np,y pp/2)² ) / si )    (4.28)
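This worst-case bound is easy to evaluate for the Table 4.3 configuration, as in the Python sketch below. Note that the image distance si = 100 mm is an assumption inferred from the 50 mm lens at M = −1; it is not stated explicitly in this section.

```python
import math

def max_sensor_angle(p_l, f_l, n_px, n_py, p_p, s_i):
    """Worst-case ray angle at the sensor: half the lenslet cone angle
    (small-angle term p_l / (2 f_l)) plus the field angle subtended by the
    corner of the sensor. All lengths in millimeters; returns degrees."""
    lenslet_term = p_l / (2.0 * f_l)  # radians
    half_diag = math.hypot(n_px * p_p / 2.0, n_py * p_p / 2.0)
    field_term = math.atan(half_diag / s_i)
    return math.degrees(lenslet_term + field_term)

# Table 4.3 values: p_l = 0.125 mm, f_l = 0.5 mm, 4872 x 3248 pixels of
# 0.0074 mm pitch; s_i = 100 mm assumed for the f = 50 mm, M = -1 condition.
theta_max = max_sensor_angle(0.125, 0.5, 4872, 3248, 0.0074, 100.0)
# theta_max is approximately 19 degrees, consistent with the histograms.
```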
It should be noted that this effect could be alleviated with the use of a field lens, as shown by Adelson and Wang [35]. The purpose of this lens is to eliminate the "bulk angle" described in the next chapter, which would reduce the magnitude of the angles at the edge of the sensor.
4.4 Simulating Particle Displacements
Evaluating the velocity field requires two snapshots of the particle field to be acquired. The displacement of the particles and the time interval Δt between the two snapshots allow the velocity to be calculated. To simulate this process, two light field images are rendered
Figure 4.25: Distribution of angles θ (in degrees) for a 1,000-particle simulation.
Figure 4.26: Distribution of angles φ (in degrees) for a 1,000-particle simulation.
which correspond to an initial particle field and a displaced particle field. A separate MATLAB script was used to read the original file created by the particle field generation script (Section 4.3.2) and apply displacements to each particle in accordance with a velocity field specified by one or more equations.
In this work, two simulated flow fields are evaluated. The first is a uniform flow field, which can be defined as a constant displacement for each particle in any of the three coordinate directions. Using uniform displacement provides a simple way to evaluate the resolution of the technique, including the minimum detectable displacement for both in-plane and depth motions. Another advantage is that a uniform displacement field does not contain shear, which allows the accuracy of the technique to be evaluated using very basic PIV correlation algorithms.
The second flow field used for this work is a simulated vortex ring, which was used in the initial tests of both Tomographic and Synthetic Aperture PIV. Vortex rings can be easily defined using closed-form equations and exhibit large-scale 3-D structure, which is important for evaluating camera performance along all coordinates. For these reasons, Hill's spherical vortex is used as the reference flow field for this work. This is a particular solution of the vorticity equation for the generation of a steady inviscid vortex ring in which non-zero values of vorticity are confined within a sphere. The theoretical approach and derivation of the stream functions for this flow are given by Green [49]. By taking the derivatives of the stream function, the velocity field can be obtained. To simplify the equations, the derivation is based on a cylindrical coordinate system in which the two primary coordinates are the distance R perpendicular to the induced motion and the distance Z parallel to the induced motion. The origin of this coordinate system is the center of the vortex. The size of the vortex, based on the diameter of the vorticity-containing sphere, is denoted a, and the induced velocity is given as u0. The stream function is a set of piecewise equations defined independently for flow internal and external to the vortex. The internal velocity field is given by equations 4.29 and 4.30, and the external velocity field is given by equations 4.31 and 4.32.
uint = (3/2) u0 [ 1 − (2R² + Z²)/a² ]    (4.29)

vint = (3/2) u0 (ZR/a²)    (4.30)

uext = u0 [ (a²/(Z² + R²))^(5/2) (2Z² − R²)/(2a²) − 1 ]    (4.31)

vext = (3/2) u0 (ZR/a²) (a²/(Z² + R²))^(5/2)    (4.32)
To combine the solutions from the internal and external velocity fields, a simple test is performed via equations 4.33 and 4.34 to check whether the coordinate location being evaluated lies within the bounds of the vortex, given by the vortex radius a. These statements are taken to be boolean, evaluating to either 0 or 1. These values are used in equations 4.35 and 4.36 to give the final value of velocity.
tint = (R² + Z²) ≤ a²    (4.33)

text = (R² + Z²) > a²    (4.34)

u = uint tint + uext text    (4.35)

v = vint tint + vext text    (4.36)
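The piecewise evaluation of equations 4.29 through 4.36 translates directly into code. The Python sketch below (an illustrative transcription, not the thesis MATLAB script) uses the boolean-weight approach just described; the continuity of the internal and external branches at the sphere boundary provides a convenient check.

```python
def hill_vortex_velocity(R, Z, a, u0):
    """Velocity (u, v) of Hill's spherical vortex, eqs. 4.29-4.36.
    u is the component parallel to the induced motion, v perpendicular to it;
    the internal and external solutions are blended with 0/1 boolean weights."""
    r2 = R**2 + Z**2
    u_int = 1.5 * u0 * (1.0 - (2.0 * R**2 + Z**2) / a**2)   # eq. 4.29
    v_int = 1.5 * u0 * Z * R / a**2                          # eq. 4.30
    ratio = (a**2 / r2) ** 2.5 if r2 > 0.0 else 0.0
    u_ext = u0 * (ratio * (2.0 * Z**2 - R**2) / (2.0 * a**2) - 1.0)  # eq. 4.31
    v_ext = 1.5 * u0 * (Z * R / a**2) * ratio                # eq. 4.32
    t_int = float(r2 <= a**2)   # eq. 4.33
    t_ext = float(r2 > a**2)    # eq. 4.34
    return (u_int * t_int + u_ext * t_ext,   # eq. 4.35
            v_int * t_int + v_ext * t_ext)   # eq. 4.36
```

At the vortex center the internal solution gives u = (3/2) u0, and on the sphere R² + Z² = a² the internal and external branches agree, so the blended field is continuous.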
With the velocity defined, the displacement of the particles can be calculated once a time interval Δt is specified. The general equation for particle displacement is actually a Taylor series expansion, as indicated in equation 4.37. Unfortunately, this is an infinite series that requires multiple derivatives of the velocity field to be calculated (typically only a few terms are needed, but the derivatives can still be complicated expressions). The approach used instead is to divide the total time step into a large number of smaller time steps (typically 1,000) to approximate the actual displacement.
x′ = v Δt + Δx (∂v/∂x) + (Δx²/2)(∂²v/∂x²) + (Δx³/6)(∂³v/∂x³) + ...    (4.37)
Figure 4.27: X-Y slice from the particle displacement simulation of the Hill spherical vortex, with a = 1.5 mm.
This approach to evaluating displacement allows curved streamlines to be generated. A consequence of this fact is that the flow field contains sharp velocity gradients, which are known to degrade PIV correlation accuracy [3, p. 345]. This can be partially compensated for by the use of window deformation techniques, which will be discussed in Chapter 6. Nevertheless, this is an excellent test for evaluating the utility of the technique for a physically realistic flow field. It should be noted that the previous discussion exposes a limitation of PIV in that it only evaluates linear displacements. This is not a limitation caused by the plenoptic camera.
Chapter 5
Light Field Rendering Algorithms
Generating volumetric intensity data from a simulated or experimentally acquired light field image requires the use of light field rendering algorithms. This chapter details the algorithms used, including registration of the light field data, computational refocusing, and intensity thresholding. More advanced reconstruction schemes using 3-D deconvolution are also discussed and will be the subject of future investigations.
The primary concept behind light field rendering is to select a subset of rays from the complete 4-D light field to generate a conventional 2-D image. The unique aspects of the light field, such as the complete parameterization of spatial and angular information, allow modifications to be made prior to the generation of the final 2-D image. Herein, the computational refocusing approach uses the angular information contained in the light field to propagate light rays to an image focal plane that is different from the original focal plane. This procedure is similar to the ray-tracing developed previously in the context of the image simulator. By integrating a subset of rays at this new location, a 2-D image at a new location is created. To enable this process, the light field image must be parameterized correctly. This is addressed in the first topic of this chapter, which is then followed by a discussion of refocusing and rendering.
5.1 Light Field Registration
The parameterization of the 4-D light field uses two spatial coordinates (x, y) and two angular coordinates (θ, φ). The process of registration refers to assigning specific values of these four coordinates to each pixel of a light field image acquired with the plenoptic camera.
With the light field parameterized in this manner, it is possible to perform ray-tracing to synthetically refocus the camera to an arbitrary image focal plane.
The registration first begins by either simulating or acquiring a "pinhole" image to determine the central location of each lenslet. To create a simulated pinhole image, a large number of points within the focus plane are defined (zspan = 0), and a large f-number is used ((f/#)m = 16) to generate distinct points on the sensor plane. The diffraction parameters are recalculated, allowing the diffraction spot size to be representative of an (f/#)m = 16 lens. Figure 5.1 is a representative pinhole image rendered using the simulator. The variation in spot intensities is due to the random distribution of particles used to form the image. In other words, the simulator models light as discrete rays emanating from point sources, and an approximation to a uniform field is created by using a large number of particles placed on a single plane. Due to this, there can be inhomogeneities in the resulting image. This has been compensated to an extent by using a large number (500,000) of particles with a small number of rays (100) emanating from each. Actual pinhole images will not be subject to this effect.
Figure 5.1: Cropped portion of a simulated pinhole image with (f/#)m = 16.
A Cartesian grid is placed over the image which defines the search regions for each lenslet image. This is performed in MATLAB by specifying mesh node points using the function meshgrid, which is manually offset as needed to position the grid appropriately. The parameters of the grid are set through visual inspection of the resulting overlaid mesh to ensure each grid cell contains only a single lenslet image. The finite spot size created by diffraction provides an excellent way to resolve the lenslet locations to subpixel accuracy by using a 2-D Gaussian peak fit on the spot formed under each lenslet. This is performed in MATLAB by first determining the intensity centroid of each grid cell and a first-order estimate of the standard deviations in both coordinate directions. These are used as a seed for the MATLAB function fminsearch, which operates on the 2-D Gaussian function given in equation 5.1.
f(x, y) = c exp( −[ (x − cx)²/(2σx²) + (y − cy)²/(2σy²) ] )    (5.1)
After a number of iterations, the function returns updated estimates of the peak locations.
This procedure is relatively quick; for an iteration tolerance of 1 × 10^-7, each grid cell
takes approximately 0.03 seconds to evaluate, and the centers of all lenslet images could be
found in about 10 minutes. Note that this is a one-time procedure for a particular camera
configuration. The lenslet center locations are saved to a file and used on all subsequent
images that use identical camera settings. An example of a small portion of a processed
image is shown in Figure 5.2, where the grid is shown along with the centroid values determined
from the Gaussian fit. Upon visual inspection, excellent agreement is seen with the center
of the lenslet images.
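The centroid-seeded fit can be sketched in Python, with scipy's Nelder-Mead minimizer standing in for MATLAB's fminsearch. The 16 × 16 cell size and spot parameters below are illustrative choices, not values from the thesis:

```python
import numpy as np
from scipy.optimize import minimize

def fit_lenslet_center(cell):
    """Fit the 2-D Gaussian of equation 5.1 to one grid cell and return
    the sub-pixel peak location (cx, cy)."""
    y, x = np.indices(cell.shape)
    total = cell.sum()
    # First-order seed: intensity centroid and standard deviations
    cx0 = (x * cell).sum() / total
    cy0 = (y * cell).sum() / total
    sx0 = np.sqrt(((x - cx0) ** 2 * cell).sum() / total)
    sy0 = np.sqrt(((y - cy0) ** 2 * cell).sum() / total)

    def residual(p):
        c, cx, cy, sx, sy = p
        model = c * np.exp(-((x - cx) ** 2 / (2 * sx ** 2)
                             + (y - cy) ** 2 / (2 * sy ** 2)))
        return ((model - cell) ** 2).sum()

    # Nelder-Mead plays the role of MATLAB's fminsearch
    res = minimize(residual, [cell.max(), cx0, cy0, sx0, sy0],
                   method="Nelder-Mead", options={"xatol": 1e-7})
    return res.x[1], res.x[2]

# Synthetic diffraction spot centered at (7.3, 8.6) in a 16x16 cell
yy, xx = np.indices((16, 16))
spot = np.exp(-((xx - 7.3) ** 2 + (yy - 8.6) ** 2) / (2 * 2.0 ** 2))
cx, cy = fit_lenslet_center(spot)
```

In the thesis this fit is applied once per grid cell and the results written to a file for reuse on later images.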
The next step in the registration procedure is to convert the lenslet centroid positions
c_x and c_y from image coordinates back to the optical axis coordinates (l_x and l_y) using
equations 5.2 and 5.3. Also, the angles between the optical axis at the aperture and each
lenslet are determined, denoted θ_l and φ_l. These are termed the "bulk angles" in this work,
and are calculated with equations 5.4 and 5.5 using the atan2 function.
l_x = c_x p_p - p_p n_{p,x}/2    (5.2)
Figure 5.2: Cropped portion of processed image showing grid cells and center locations.
l_y = -c_y p_p + p_p n_{p,y}/2    (5.3)
θ_l = tan^{-1}(l_x / s_i)    (5.4)
φ_l = tan^{-1}(l_y / s_i)    (5.5)
A basic working principle of the plenoptic camera is that the spatial resolution is sampled
at the lenslet location, and thus all pixels corresponding to a particular lenslet are set to the
same spatial coordinate. The individual pixels paired with a lenslet are found by using a
distance formula in both coordinate directions. Since the location of each lenslet is known,
the average distance and radius can be calculated. This is used to set the bounds on the
lenslet size used in the distance calculation. Once the proper pixels are identified, the x and
y coordinates for each of the pixels are set to the l_x and l_y coordinates of the lenslet.
Next, the angles for each pixel, θ_p and φ_p, are calculated through the use of equations 5.6
through 5.9, where f_l is the lenslet focal length and x_l and y_l are local spatial
coordinates for the pixel.
x_l = x_p p_p - p_p n_{p,x}/2 - l_x    (5.6)
y_l = -y_p p_p + p_p n_{p,y}/2 - l_y    (5.7)
θ_p = tan^{-1}(x_l / f_l)    (5.8)
φ_p = tan^{-1}(y_l / f_l)    (5.9)
This process is repeated for each lenslet in the image, such that each pixel has a spatial
position defined by the nearest lenslet and an angular position defined by the relative location
with respect to the lenslet centroid. An example registration is shown in Figure 5.3, with both
angular coordinates listed. This registration procedure is more general than the approach
taken by Ng et al. [36], which used image rotation, resizing, and cropping to ensure that
an integer number of pixels defined each lenslet. Their approach is simpler to implement
and allows computationally efficient algorithms to be used, such as Fourier slice refocusing
[50]. However, both rotation and resizing involve bilinear interpolation procedures which
have been shown to degrade traditional particle imaging [3, p. 394]. The technique given
here requires no interpolation in the registration process.
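The mapping of equations 5.2 through 5.9 can be collected into a short routine. This is a Python sketch; the pixel pitch, sensor pixel counts, lenslet focal length, and s_i below are stand-in values chosen for illustration, and the sign conventions follow equations 5.2-5.9:

```python
import numpy as np

# Stand-in reference values (illustrative only): 7.4 um pixel pitch,
# 4000x2672 pixel sensor, 0.5 mm lenslet focal length, s_i = 100 mm.
p_p, f_l, s_i = 0.0074, 0.5, 100.0   # mm
n_px, n_py = 4000, 2672              # sensor pixel counts

def register(cx, cy, pixel_x, pixel_y):
    """Map a lenslet centroid (cx, cy) and one pixel under it, all in
    image coordinates, to the spatial coordinates (l_x, l_y), bulk
    angles (theta_l, phi_l), and per-pixel angles (theta_p, phi_p)."""
    # Lenslet position in optical-axis coordinates (5.2, 5.3)
    l_x = cx * p_p - p_p * n_px / 2
    l_y = -cy * p_p + p_p * n_py / 2
    # Bulk angles between the aperture and the lenslet (5.4, 5.5)
    theta_l = np.arctan2(l_x, s_i)
    phi_l = np.arctan2(l_y, s_i)
    # Local pixel coordinates relative to the lenslet (5.6, 5.7)
    x_l = pixel_x * p_p - p_p * n_px / 2 - l_x
    y_l = -pixel_y * p_p + p_p * n_py / 2 - l_y
    # Per-pixel angles (5.8, 5.9)
    theta_p = np.arctan2(x_l, f_l)
    phi_p = np.arctan2(y_l, f_l)
    return l_x, l_y, theta_l, phi_l, theta_p, phi_p

# A pixel exactly at the centroid of the central lenslet: all six
# quantities should be zero.
out = register(2000.0, 1336.0, 2000.0, 1336.0)
```

Because the routine is pure arithmetic per pixel, it can be vectorized over the whole sensor in one pass.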
Figure 5.3: Registration for a single lenslet. Top number (red) indicates the angle θ. Bottom
number (green) indicates the angle φ. Actual pixel locations given by blue points. Lenslet
centroid shown as red asterisk.
5.2 Computational Refocusing
Rendering a single pixel of a computationally rendered image is a straightforward process
of integrating the signal from all pixels underneath a particular lenslet. By performing this
operation for each lenslet, a large number of pixels are generated, corresponding to the final
computationally rendered image. The computed image will have a depth of field given by
the effective f-number of the system and the size of the lenslets. Note that other rendering
techniques exist which use a subset of the pixels under each lenslet. This is known as synthetic
aperture refocusing, and allows the size and position of the aperture to be adjusted after
the image is captured to simulate various depths of field and perspective changes. This
capability is not used for the algorithms discussed here.
Computational refocusing is a simple extension of light field rendering, which uses the
angular information defined at each pixel to effectively propagate the intensity to a new
plane. This is similar to the ray-tracing defined in the previous chapter, and actually uses
equation 4.2 for the propagation. The lateral coordinate of a ray at this new plane can
fall within the domain of a different lenslet, which changes the intensity distribution of the
resulting rendered image. The algorithm begins by defining the size of the synthetic
sensor. This differs from the actual sensor size, as indicated in Figure 5.4. The synthetic
size, synth_x and synth_y, is simply defined by the size of the usable lenslets on the sensor.
In the figure, x_max, x_min, y_max, and y_min are the maximum and minimum lenslet centroid
locations, and avg_x and avg_y are the average size of the individual lenslets determined from
the registration procedure.
This definition of the synthetic sensor size is valid when the refocus plane is located
at the original focus plane (i.e., t = 0). However, when the refocus plane is translated to
a different distance, the synthetic sensor size must be modified to take into account the
trapezoidal shape of the imaging volume. This is illustrated in Figure 5.5, where the sizes
of the pixels and overall sensor can be seen to increase as the distance from the lens is
increased. This scaling can be accounted for through the use of similar triangles, giving a scale
Figure 5.4: Differences in synthetic sensor size vs. actual sensor size.
factor SF = (t + s_i)/s_i. Using this scale factor, the synthetic sensor size and synthetic pixel
size can be calculated using equations 5.10 through 5.13.
Figure 5.5: Change in synthetic sensor size with refocus distance.
avg_x = SF (x_max - x_min) / (n_x - 1)    (5.10)

avg_y = SF (y_max - y_min) / (n_y - 1)    (5.11)

synth_x = SF (x_max - x_min) + avg_x    (5.12)

synth_y = SF (y_max - y_min) + avg_y    (5.13)
With the size of the synthetic sensor defined at a refocus distance, each ray of the light
field can be rendered. First, it must be determined where the ray intersects the synthetic
focal plane in terms of the lenslet number. This is done through equations 5.14 and 5.15.
Note that these equations will return a fractional value.
n_{s,x} = (synth_x/2 + avg_x/2 + x) / avg_x    (5.14)

n_{s,y} = (synth_y/2 + avg_y/2 - y) / avg_y    (5.15)
The fractional value returned by these equations allows for a sub-pixel interpolation of
the intensity in the rendered images. Consider the diagram shown in Figure 5.6. If each ray
of the light field is considered to be discretized into pixel-sized elements, and a particular
ray falls on a non-integer lenslet number, the signal from that ray will be spread
over multiple elements. Calculating the contribution of the signal to the overlapped pixels is
done using the normalized coefficients TL, TR, BL, and BR. This is schematically outlined
in Figure 5.6a, and the resulting distribution of signal is given in Figure 5.6b. Note that the
intensity levels are inverted in this figure, and a darker gray value indicates greater intensity.
The code for performing this operation is not listed here, but is provided in the appendix.
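While the appendix code is not reproduced here, the interpolation step can be sketched as follows: equations 5.14 and 5.15 give a fractional lenslet index, and the ray's signal is split among the four overlapped rendered pixels with bilinear weights corresponding to TL, TR, BL, and BR. The array sizes and ray values below are illustrative:

```python
import numpy as np

def splat(image, x, y, signal, synth_x, synth_y, avg_x, avg_y):
    """Spread one propagated ray's signal over the (up to) four
    rendered pixels it overlaps, using normalized bilinear weights."""
    # Fractional lenslet index on the synthetic sensor (5.14, 5.15)
    ns_x = (synth_x / 2 + avg_x / 2 + x) / avg_x
    ns_y = (synth_y / 2 + avg_y / 2 - y) / avg_y
    j0, i0 = int(np.floor(ns_x)), int(np.floor(ns_y))
    fx, fy = ns_x - j0, ns_y - i0
    # The four weights sum to one, so the ray's signal is conserved
    weights = {(i0, j0): (1 - fx) * (1 - fy),
               (i0, j0 + 1): fx * (1 - fy),
               (i0 + 1, j0): (1 - fx) * fy,
               (i0 + 1, j0 + 1): fx * fy}
    for (i, j), w in weights.items():
        if 0 <= i < image.shape[0] and 0 <= j < image.shape[1]:
            image[i, j] += w * signal
    return image

# One ray splatted onto a 16x16 rendered plane; sizes in mm are
# illustrative (2 mm synthetic sensor, 0.125 mm lenslet pitch)
img = np.zeros((16, 16))
splat(img, x=0.01, y=-0.02, signal=1.0,
      synth_x=2.0, synth_y=2.0, avg_x=0.125, avg_y=0.125)
```

Accumulating this operation over every registered ray yields the refocused image for one value of t.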
Figure 5.6: Signal interpolation technique. a) Definition of normalized coefficients, b) Resulting rendered pixels.
A set of four low-particle-density, half-size refocused images is generated and shown
in Figure 5.7. In this case, arbitrary focal plane spacing was chosen as a demonstration.
In Figure 5.7a, the image is rendered with no refocusing performed. Clear points can be
identified corresponding to imaged particles. Also notable are collections of low-intensity
blur spots throughout the image. As shown in figures 5.7b-e, bright points from one image
appear as blur spots in the remaining images. From a casual inspection, each plane appears
unique, in that the pattern of high-intensity, in-focus particles differs in each image.
These images provide insight into two important issues. First, it is clear that the choice
of focal plane locations is key for ensuring the uniqueness of each refocused image. This will
be the topic of the next section. The second finding is the clarity with which in-focus particles
can be identified amidst a background of blurred particles. This contrast in signal values
may allow simple thresholding to be used for volume reconstruction, a topic which will also
be discussed in the following sections.
5.3 Focal Plane Spacing
The depth-of-field concepts developed earlier in Section 4.2.2 provide a starting point
for determining the focal plane locations. The basic premise is that each refocused image
will contain unique information if the focal depths of the refocused images do not overlap.
This is schematically shown in Figure 5.8. Here, multiple light cones are emanating from
the object space, forming multiple diamonds in the image space. As previously stated, the
height of these diamonds is equal to the circle of confusion, c. In this analysis, c is taken to
be the calculated lenslet pitch.
From an overview of this figure, it is apparent that unique information will only be
generated when refocus planes are chosen that lie within separate diamonds (i.e., two planes
should not fall within the same diamond, as these would contain redundant information). For
this work, the focal plane is placed at the center of each diamond.
To determine the actual locations, a marching approach is required that first solves for
the original focus plane, then steps to the remaining planes. Since this discussion is focused
on the image plane rather than the object plane, the earlier expressions for depth of field are
77
(a) t = 0 mm
(b) t = 2 mm (c) t = +2 mm
(d) t = 4 mm (e) t = +4 mm
Figure 5.7: Example of refocused images at various refocusing planes.
not used. Instead, the term depth of focus is used to describe the equations dealing with the
image plane. For determining the initial bounds D_{N,i} and D_{F,i} of the original focus plane,
equations 5.16 and 5.17 are used.
D_{N,i} = f D_i / (f + (f/#) p_l)    (5.16)

D_{F,i} = f D_i / (f - (f/#) p_l)    (5.17)
Figure 5.8: Unique depths of field for calculation of the focal plane spacing.
By rearranging the equations, they can be rewritten to remove the term corresponding
to the distance D_i. The resulting equations 5.18 and 5.19 are much better suited for a
marching approach in both directions from the original image focal plane.
D_{N,i} = D_{F,i} (f - (f/#)c) / (f + (f/#)c)    (5.18)

D_{F,i} = D_{N,i} (f + (f/#)c) / (f - (f/#)c)    (5.19)

D_i = D_{N,i} (1 + (f/#)c / f)    (5.20)
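A minimal sketch of the marching calculation follows, assuming the reference values f = 50 mm, f/# = 2, and c = p_l = 0.125 mm; with these assumptions the sketch reproduces the numbers tabulated in table 5.1:

```python
f, fnum, c = 50.0, 2.0, 0.125   # mm; c taken as the lenslet pitch
Nc = fnum * c
D_orig = 100.0                  # original image focal plane distance, mm

# Depth-of-focus bounds of the original plane (equations 5.16, 5.17)
DN = f * D_orig / (f + Nc)
DF = f * D_orig / (f - Nc)
planes = [(DN, DF, D_orig)]

# March toward larger image distances: each new diamond's near bound
# is the previous diamond's far bound (equations 5.19 and 5.20)
for _ in range(8):
    DN = DF
    DF = DN * (f + Nc) / (f - Nc)
    planes.append((DN, DF, DN * (1 + Nc / f)))
```

Marching toward the lens proceeds the same way using equation 5.18; the plane locations D_i then give the propagation distances t as offsets from the original image plane.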
These equations have been applied with the reference imaging conditions given in table
4.3. As suggested by Levoy [37], the number of unique focal planes capable of being generated
by a plenoptic camera is equal to the number of pixels underneath each lenslet. For the
camera considered here, this is taken to be 16. Thus, a total of 8 planes are found on either
side of the original image focal plane. The relevant values of the calculation are shown in
table 5.1. The second and third columns are the bounds on the depth of focus for each
plane. Column 4 shows the size of this depth of focus; note that this size changes
depending on which direction the image is refocused. The fifth column takes these
values and divides them by the circle of confusion c (in this case, the lenslet pitch p_l). This shows,
as before, that the ambiguity in depth is almost an order of magnitude larger than the in-plane
ambiguity. The location of each of the refocus planes is given in the sixth column, and finally,
the distance t the light field must be propagated is given in the last column.
Table 5.1: Synthetic Depth of Field and Focal Plane Positioning
Plane No.   D_{N,i} (mm)   D_{F,i} (mm)   D_{F,i}-D_{N,i} (mm)   (D_{F,i}-D_{N,i})/c   D_i (mm)   t (mm)
1 91.8523 92.7755 0.9231 7.3851 92.3116 -7.6884
2 92.7755 93.7079 0.9324 7.4593 93.2393 -6.7607
*3 93.7079 94.6497 0.9418 7.5343 94.1764 -5.8236
4 94.6497 95.6009 0.9513 7.6100 95.1229 -4.8771
5 95.6009 96.5617 0.9608 7.6865 96.0789 -3.9211
*6 96.5617 97.5322 0.9705 7.7638 97.0445 -2.9555
7 97.5322 98.5124 0.9802 7.8418 98.0199 -1.9801
8 98.5124 99.5025 0.9901 7.9206 99.0050 -0.9950
*9 99.5025 100.5025 1.0000 8.0002 100.0000 0.0000
10 100.5025 101.5126 1.0101 8.0806 101.0050 1.0050
11 101.5126 102.5328 1.0202 8.1618 102.0202 2.0202
*12 102.5328 103.5633 1.0305 8.2438 103.0455 3.0455
13 103.5633 104.6041 1.0408 8.3267 104.0811 4.0811
14 104.6041 105.6554 1.0513 8.4104 105.1272 5.1272
*15 105.6554 106.7173 1.0619 8.4949 106.1837 6.1837
16 106.7173 107.7898 1.0725 8.5803 107.2509 7.2509
17 107.7898 108.8731 1.0833 8.6663 108.3288 8.3288
Table 5.1 provides the actual values which will be used herein for all refocused images
unless otherwise stated. Another, possibly easier, way to visualize this data is with a simple
plot. Figure 5.9 shows each of these points and the corresponding depth-of-field "diamonds"
created by each. In this figure, the cross markers represent the refocus plane positions, and
the vertical line at 100 mm indicates the original image focal plane position. Note the
unequal axes in this figure, which are used for ease of visualization. Also, the optical axis in
this figure does not correspond to the optical axis in the actual simulators; here, the optical
axis is taken to be the distance from the lens s_i.
As an aside, it is worth exploring the roles that focal length and f-number play in
determining unique focal planes. For a short focal-length lens, a large f-number can
cause drastic nonlinearity in the sizing and spacing of the focal planes, as shown in Figure
5.10 for the case of a 50 mm lens operated at an f-number of 11. However, as focal length
Figure 5.9: Unique depths of field for calculation of the focal plane spacing. f/# = 2, f = 50 mm.
is increased, this effect becomes much less significant. Figure 5.11 shows a 200 mm focal-length
lens operated at the same f-number. The primary reason for this is the distance from
the lens; for the 50 mm, f-number 11 case, the limited angles of light allowed to enter the
camera require a large physical distance in order to generate a unique focal plane spacing.
For the 200 mm case, the distance required to generate the unique focal planes is the same;
however, it is a smaller percentage of the total imaging distance and thus does not result in
a substantial nonlinearity.
5.4 Single-particle refocusing
With the focal planes defined, a more detailed look can be taken at the rendered
images. First, the case of refocusing a single particle is considered. In the following plots,
a cropped 2-D image is shown as well as a 1-D slice of pixel intensity values through the
center of the image. Figures 5.12 through 5.16 present these various views. In each case,
clear changes in image intensity and diameter can be seen. One of the best ways to evaluate
changes in shape is by defining a particle image diameter, which can be found by performing
Figure 5.10: Unique depths of field for calculation of the focal plane spacing. f/# = 11, f = 50 mm.
Figure 5.11: Unique depths of field for calculation of the focal plane spacing. f/# = 11, f = 200 mm.
a Gaussian fit to the data. By using the previously derived equation 4.25 and rearranging
to solve for d_s (the spot diameter defined in Adrian and Westerweel [3, p. 101]), the particle
image diameter in pixels can be determined. This is a critical parameter for PIV evaluation;
the majority of PIV algorithms perform best when the particle image diameter is greater
than 1.5 pixels. In the figures shown here, this does not appear to be an issue. Note that
in Figure 5.14 the Gaussian function is slightly skewed; there actually is a small value of
image intensity which is not easily visible because the lenslet is not exactly positioned on
the optical axis. Regardless, this shows that even for an approximately in-focus image, the
particle diameter is suitable for PIV. Another interesting feature of this data is the rapid
change in intensity values. The in-focus particle in Figure 5.14 exhibits a peak intensity of
around 30,000, whereas an out-of-focus particle such as that in Figure 5.15 has a much lower
peak intensity, roughly 800. This suggests that out-of-focus particles could potentially be removed
from images using a simple thresholding process. This will be the topic of the next section.
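The diameter extraction can be sketched as a 1-D Gaussian fit. The e^-2 spot model I(x) = I0 exp(-8(x - x0)^2/d_s^2) follows the Adrian and Westerweel definition; the synthetic slice below, with d_s = 10.85 px as in Figure 5.13, is an illustrative test case rather than thesis data:

```python
import numpy as np
from scipy.optimize import curve_fit

def particle_diameter(profile):
    """Estimate the particle image diameter d_s (in pixels) from a 1-D
    intensity slice, using the e^-2 spot model
    I(x) = I0 * exp(-8 (x - x0)^2 / d_s^2)."""
    x = np.arange(profile.size, dtype=float)

    def gauss(x, i0, x0, ds):
        return i0 * np.exp(-8.0 * (x - x0) ** 2 / ds ** 2)

    # Seed from the intensity centroid and second moment (d_s = 4*sigma
    # for an equivalent Gaussian)
    x0 = (x * profile).sum() / profile.sum()
    sigma = np.sqrt(((x - x0) ** 2 * profile).sum() / profile.sum())
    popt, _ = curve_fit(gauss, x, profile,
                        p0=[profile.max(), x0, 4.0 * sigma])
    return abs(popt[2])

# Synthetic 1-D slice with a known diameter of 10.85 px
x = np.arange(24, dtype=float)
slice_ = 1000.0 * np.exp(-8.0 * (x - 12.0) ** 2 / 10.85 ** 2)
ds = particle_diameter(slice_)
```

Applying this fit to each refocused plane yields the d_s values annotated in figures 5.12 through 5.16.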
Figure 5.12: Refocus plane 3, t = -5.8236 mm (d_s = 21.28 px).
Figure 5.13: Refocus plane 6, t = -2.9555 mm (d_s = 10.85 px).
The focal plane spacing derived earlier is the minimum spacing required in order for a
particle located at any position within the volume to be focused on a plane in the refocused
images. However, to ensure that the resulting volume of the focal stack is scaled correctly
Figure 5.14: Refocus plane 9, t = 0.0000 mm (d_s = 1.61 px).
Figure 5.15: Refocus plane 12, t = +3.0455 mm (d_s = 11.19 px).
Figure 5.16: Refocus plane 15, t = +6.1837 mm (d_s = 22.67 px).
in all three dimensions, the refocus planes are typically supersampled. For example, in the
current configuration, each pixel of a 2-D rendered slice represents an area of 125 μm × 125 μm,
defined by the size of the lenslets. For the reconstructions to be properly scaled, the refocus
plane spacing (the depth dimension) must be equal to this in-plane dimension.
5.5 Intensity Thresholding
The refocused planes contain multiple contributions from out-of-focus particles. These
contributions would distort the correlation in PIV algorithms if not removed from the 3-D
data. For this reason, an approach used by Belden et al. [32] for synthetic aperture
PIV data is adapted for this work. In their paper, they considered refocused images to be
composed of high-intensity in-focus particle images and low-intensity out-of-focus noise, both
of which are randomly distributed in space. They found that the intensity distribution of a
refocused image was approximately Gaussian, allowing the convenient parameters of mean
and standard deviation to be used for quantifying the intensity range. The refocused
images generated in this work do not appear to have the same Gaussian intensity distribution,
but can be described in general terms using one; for example, Figure 5.17 shows an intensity
histogram of a sample refocused image with a Gaussian fit overlaying the histogram. While
the Gaussian curve does not closely follow the data, it does describe the general trends well.
Also included on this graph are the locations of the mean intensity value and the value
located three standard deviations above the mean. In this work, the latter is adopted as the
threshold level.
A modification made to the technique of Belden et al. is to dilate the thresholded images
by a radius of 1 pixel. This serves the primary purpose of ensuring that each particle image
contains neighboring information and has an effective diameter of greater than 1 pixel. The
reason for this change is that the primary sub-pixel evaluation routines in PIV processing
algorithms rely on either a three-point Gaussian or polynomial fit to the correlation plane.
The shape and information of the correlation peak in PIV algorithms is dictated by the
form of the particle images; thus, including neighboring information in the particle images
ensures that the correlation plane will also contain multiple pixels of information available to
sub-pixel estimators. Also, the artifact known as "peak locking," or integer biasing in PIV
vector fields, is primarily driven by particle image sizes being less than roughly 2 pixels.
Figure 5.17: Intensity histogram for a refocused image using 100 bins. A Gaussian distribution
is fitted to the histogram, with markers at the mean intensity and at three standard deviations
above the mean.
To perform the dilation, the thresholded image is converted into a binary image. MATLAB
has a number of convenient routines for modifying binary images. In particular, the
strel function is used to produce the correct dilation filter, which is then passed to imdilate.
The resulting dilated binary image is then multiplied by the original image to yield a final
thresholded image. The results of this operation are shown in figures 5.18 through 5.20. The
first is the original refocused image, and the latter two show the image after thresholding
without and with dilation. Clearly, the thresholding removes the low-intensity background
due to out-of-focus particles and retains the in-focus particle images. Such an image is suitable
as a preliminary input for correlation algorithms.
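The threshold-and-dilate step can be sketched in Python, with scipy's binary_dilation and a 3x3 structuring element standing in for MATLAB's strel/imdilate (the exact structuring element used in the thesis is not specified here, and the test image is illustrative):

```python
import numpy as np
from scipy import ndimage

def threshold_and_dilate(plane):
    """Threshold a refocused plane at mean + 3*std of its intensity,
    dilate the binary mask by one pixel so each retained particle
    image keeps neighboring pixels, then mask the original plane."""
    level = plane.mean() + 3.0 * plane.std()
    mask = plane > level
    # 3x3 structuring element: dilation by a radius of one pixel
    mask = ndimage.binary_dilation(mask, structure=np.ones((3, 3)))
    return plane * mask

# A flat background with one bright, in-focus particle
plane = np.full((9, 9), 10.0)
plane[4, 4] = 1000.0
out = threshold_and_dilate(plane)
```

The dilation keeps the eight neighbors of each above-threshold pixel, so even a single-pixel particle image survives with an effective diameter greater than 1 pixel.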
5.6 3-D Deconvolution and Limited-Angle Tomography
The thresholding procedure described above is used to generate inputs to the correlation
algorithms. This procedure is simple but is not well-suited for dense particle fields where
particles may not be easily identified. Another disadvantage of the thresholding approach
is that only information from a single refocused plane is used to make the thresholding
Figure 5.18: Original light field image without thresholding.
Figure 5.19: Light field image with non-dilated thresholding.
Figure 5.20: Light field image with dilated thresholding.
calculation (as opposed to using information contained in all planes to make an accurate
determination). For this reason, the techniques of 3-D deconvolution and limited-angle
tomography can be used, which compute the 3-D volume by using information from more
than one plane simultaneously, and could potentially give more accurate results.
3-D deconvolution was pioneered in the early 1980s as the mathematical foundation of
deconvolution microscopy, and has advanced over the past twenty years to include multiple
deconvolution algorithms and implementations. It was developed as a method to increase
the spatial resolution of microscopes. Resolving small structures within biological cells is
not typically possible using traditional microscopy due to the fundamental limiting effects of
diffraction for both imaging and illumination. Some of these issues have been addressed using
clever illumination techniques and fluorescent markers; however, even with these methods,
out-of-focus scatter from other regions of the specimen reduces the effective resolution of
the in-focus images. 3-D deconvolution uses information on the blurring behavior of the
microscope objective to reduce out-of-focus scatter in in-focus images.
An introduction and review of deconvolution techniques is given by Sibarita [51] and
McNally et al. [52], with a short summary given here. The first step in a deconvolution process
is either the calculation or measurement of the 3-D point spread function (PSF) of the optical
system. In microscopes, this is a complex function due to the effects of diffraction. Typically
it is measured by imaging tiny fluorescent beads at different focal depths. In traditional
photography, the PSF can be approximated much more simply since the effects of diffraction
are negligible, but it can be determined in a similar fashion by measuring the manner in
which an approximately point light source becomes blurred. For light field photography, the
PSF can be calculated in exactly the same way, except rather than using manual refocusing,
the image of a point source can be taken once and computationally refocused. However,
the PSF for traditional and light field photography is complicated by its radial asymmetry.
Because a traditional camera acquires perspective views of a scene, the angles of light
incident on the sensor vary. This is in contrast to microscopy, where all views are effectively
orthographic.
Once the PSF is known, a variety of computational algorithms can be used. Each of
these algorithms uses the knowledge of the PSF and the data contained in the focal stack
to remove out-of-focus contributions. In a basic sense, it can be regarded as an inversion
method. The simplest algorithm is the nearest neighbors algorithm, which restricts
itself to using neighboring planes of data rather than the entire focal stack. The justification
for this is that the nearest two planes make the most substantial contribution to the in-focus
plane. This algorithm is very fast to apply and does not require iteration. Constrained
iterative algorithms such as the Jansson-Van Cittert algorithm and the Gold algorithm can
generate good results; however, they are typically slow and very sensitive to measurement
noise. This may preclude them from being used in images with low signal-to-noise ratios.
Another class of methods is known as statistical methods. One example is the maximum
likelihood estimation (MLE) algorithm, which uses knowledge of the image noise sources
(Poisson noise) to make an estimate of the resulting images. Another approach is to use a
regularization function in addition to the MLE algorithm. This makes the assumption that
the noise is Gaussian to simplify the mathematical formulation, which is quite complex.
The review by Sibarita [51] gives examples of each of these applied to microscope data. In
that paper, the author notes that no one algorithm is well suited for all cases; however,
the maximum likelihood estimation algorithm and one-step algorithms not discussed here
(Wiener filtering and unconstrained Tikhonov-Miller) perform poorly in terms of final image
quality.
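As a concrete, deliberately simplified illustration of the nearest neighbors idea, each plane can be corrected by subtracting blurred copies of its two neighboring planes, which approximate the out-of-focus signal they leak into it. The Gaussian blur standing in for the inter-plane defocus PSF and the weight c are illustrative choices, not values from the literature:

```python
import numpy as np
from scipy import ndimage

def nearest_neighbor_deconv(stack, blur_sigma=2.0, c=0.45):
    """One-step nearest-neighbors deconvolution sketch over a focal
    stack of shape (planes, ny, nx).  Edge planes reuse their single
    available neighbor."""
    out = np.empty_like(stack)
    for k in range(stack.shape[0]):
        lo = stack[max(k - 1, 0)]
        hi = stack[min(k + 1, stack.shape[0] - 1)]
        # Blurred neighbors approximate their out-of-focus contribution
        est = (ndimage.gaussian_filter(lo, blur_sigma)
               + ndimage.gaussian_filter(hi, blur_sigma))
        # Subtract and clip: intensities cannot go negative
        out[k] = np.clip(stack[k] - c * est, 0.0, None)
    return out

rng = np.random.default_rng(1)
stack = rng.random((5, 16, 16))
cleaned = nearest_neighbor_deconv(stack)
```

Because only adjacent planes are touched, the whole stack is processed in a single non-iterative pass, which is what makes the method fast relative to the constrained iterative schemes.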
It should be mentioned at this point that deconvolution microscopy has been used
as a method for performing 3-D micro-PTV measurements. Park and Kihm [53] used a
microscope with a 40X objective lens to visualize particle motion in a 100 μm square micro-channel.
The technique was able to resolve particles within a 5.16 μm cubic volume. The
basic processing methodology was to take the particle images and use the knowledge of
the microscope PSF to determine the depth. The entire volume under consideration was
intentionally defocused to ensure no ambiguity in depth location occurred. Their work was
able to obtain fairly accurate measurements, to within approximately ±7% (limited partially
by Brownian motion). However, it is doubtful the methodology used in their work can be
applied to this work, primarily since it deals with microscopic imaging and does not concern
the reconstruction of a 3-D intensity field, but rather the identification and matching of the
PSF.
Chapter 6
Volumetric Correlation Algorithms
In order to extract the displacement information from a pair of 3-D particle fields,
a statistical interrogation scheme is required. These schemes are typically known as PIV
algorithms, which rely on taking two images acquired at different times and finding the
proper displacement to yield the maximum match between the two images. Typically the
images are discretized into hundreds or thousands of interrogation windows which allow
spatial variations in the displacement field throughout the images. By multiplying the pixel
displacement by the magnification of the imaging system and the pixel size, the physical
displacement is determined. The development of these algorithms for 2-D image data began
as early as the 1980s but was severely restricted due to the limited computing capabilities
available. A history of the early efforts and developments is not given here; the reader is
referred to the review by Adrian [54] for additional historical context. However, as computing
technology evolved, increasingly sophisticated interrogation procedures were developed which
allowed for improved evaluation of particle displacement. A variety of approaches have been
used, most of which can be classified into one or more groups:

- single-pass correlation
- multiple-pass correlation with discrete window offset
- grid refinement schemes
- multiple-pass correlation with image deformation
- spatially adaptive correlation
- optical flow
The single-pass correlation technique, described in [55], was one of the first algorithms
to be developed and tested on digital PIV data. Since then, researchers have found that an
iterative solution approach can be used to increase the dynamic velocity range (DVR) and
dynamic spatial range (DSR). The DVR is defined as the ratio of the maximum to minimum
resolvable velocity, and the DSR is defined as the ratio of the maximum to minimum
resolvable length scale. Substantial improvements in DVR can be achieved using a multi-pass
iterative interrogation with a window offset, which reduces the in-plane loss-of-pairs and
achieves greater accuracy and range in velocity measurement [56, 57]. Furthermore, by using
grid refinement in addition to the multi-pass window offset method, each interrogation
region can be reduced in size, which results in an increase in DSR [57].
The accuracy of PIV interrogation was further advanced by the application of image deformation techniques. These techniques perform a traditional single-pass correlation to determine a velocity field, then deform the two images such that the deformation follows the velocity field [58, 59]. In this manner, the correlation becomes far more robust with respect to velocity gradients that have a spatial scale on the order of the interrogation window size. These techniques also allow the use of symmetric offsets on both images to achieve second-order accuracy, as first suggested by Wereley and Meinhart [60].
A variety of other schemes have been developed and are still under development. Spatially adaptive PIV eliminates the Cartesian interrogation grid and instead sizes and spaces the interrogation windows according to the scales of the flow field under evaluation [61]. This technique yields increases in both DVR and DSR; however, the complexity of the algorithms makes the technique difficult to apply. Additionally, since the interrogation grid is uniquely adapted to each image pair, it is difficult to compile turbulent statistics because the measurements are located at different positions in each correlation. Typically, measurements must be interpolated onto a uniform grid, which can have the detrimental effect of smoothing the data.
Optical flow presents another approach to measuring displacement, using concepts originally developed for computer vision research. Optical flow algorithms are fundamentally different from correlation-based PIV algorithms in that the interrogation is performed by determining the gradient of intensity between an image pair; the displacement vector is simply a vector aligned along the gradient [62]. The advantage of this technique is the substantial increase in data yield (each pixel of the image results in a velocity vector). However, a fundamental limitation exists due to the calculation of the gradient: particle displacements cannot in general be greater than the particle image diameter, which greatly limits the DVR of the technique. Research is still being conducted in this field on better understanding the connection between optical flow and fluid motion in order to extend the range of velocities that can be measured. Note that the references given for each of the algorithms discussed above are by no means exhaustive, but give the most well-known work related to each algorithm.
Based on the previous discussion, the algorithms that appear capable of achieving the highest DVR, DSR, and accuracy while remaining relatively straightforward to implement are the multi-pass discrete offset methods with grid refinement. The results of the third international PIV challenge (collected and published by Stanislas et al. [63]) showed that image deformation techniques are recommended; however, particular attention must be paid to the validation procedures and low-pass filters used, which greatly increases the difficulty of accurately implementing the algorithm. Another finding was general agreement that interrogation window sizes smaller than 16 x 16 pixels should be almost universally avoided. Also, the optical flow algorithms applied in the same tests did not achieve performance similar to the advanced correlation-based algorithms.
These findings motivated the choice in this work to use a multi-grid, multi-pass discrete offset method. Herein, the technique will be referred to as WIDIM (window displacement iterative multigrid), an acronym first used by Scarano [57]. This chapter begins with a basic description of the technique and outlines the processing steps required. The algorithm is developed first for the evaluation of 2-D PIV images and is then extended to work for 3-D data based on modifications to the 2-D algorithm.
6.1 WIDIM Algorithm Development
A schematic outline of the WIDIM algorithm is given in Figure 6.1 and can be described as consisting of two separate methods. First, a grid refinement algorithm is used (also referred to as multi-grid), in which the interrogation window size is decreased as a means to increase the DSR of the result. The second is the multi-pass algorithm, in which multiple passes are performed using the displacement data from a previous pass to offset the interrogation windows prior to the correlation. The primary advantage of this iterative approach is the ability to eliminate the in-plane loss of pairs which occurs with no prediction step.
[Flow chart: Load Images → Grid Generation → Cross Correlation → Update Displacement (d = p + c) → Validate Displacement → Update Predictor → Test for Convergence → Output, with multi-grid and multi-pass loops]

Figure 6.1: WIDIM algorithm flow chart.
6.1.1 Grid Generation
A Cartesian grid is generated that allows each interrogation window to fit within the image domain while also allowing for interrogation region overlap. The grid methodology is shown for a single interrogation row in Figure 6.2. The sizes of the interrogation windows are given as x_size and y_size, and the spacing between each window (which allows for overlap) is denoted as x_spacing and y_spacing. Note that in Figure 6.2, the actual window sizes are distorted to show the window overlap, in this case 25%, more clearly. The number of windows that can fit within a given image size is determined by using a loop to fill the image until another interrogation window cannot fit in the image area. This will leave a corner of the image not covered by interrogation windows; for this reason, the grid is then offset by an amount x_offset and y_offset to center it within the image. The grid is defined by integer node points x_node and y_node and by the locations of the centroids x_cent and y_cent, which can be fractional values.
Figure 6.2: Schematic of PIV gridding, showing the image width, window sizes (X Size, Y Size), window spacing (X Spacing), grid offsets (X Offset, Y Offset), and the node and center points.
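The counting-and-centering logic just described can be sketched as follows. This is an illustrative NumPy version (the thesis code is written in MATLAB), with function and variable names of my own choosing, and the loop-based fill replaced by equivalent arithmetic:

```python
import numpy as np

def generate_grid(im_w, im_h, x_size, y_size, x_spacing, y_spacing):
    """Tile an image with (possibly overlapping) interrogation windows.

    Returns integer node points (upper-left window corners) and window
    centroids, with the grid offset so that it is centered in the image.
    """
    # Number of windows that fit: the first window occupies x_size pixels,
    # and each additional window advances the grid by x_spacing.
    nx = 1 + (im_w - x_size) // x_spacing
    ny = 1 + (im_h - y_size) // y_spacing

    # Center the grid: split the uncovered margin evenly on both sides.
    x_offset = (im_w - (x_size + (nx - 1) * x_spacing)) // 2
    y_offset = (im_h - (y_size + (ny - 1) * y_spacing)) // 2

    x_node = x_offset + x_spacing * np.arange(nx)   # integer node points
    y_node = y_offset + y_spacing * np.arange(ny)
    x_cent = x_node + (x_size - 1) / 2.0            # centroids, may be fractional
    y_cent = y_node + (y_size - 1) / 2.0
    return x_node, y_node, x_cent, y_cent
```

For example, a 1000 x 1000 px image with 64 px windows and 32 px spacing (50% overlap) yields a 30 x 30 grid of windows centered in the image.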
6.1.2 Cross-Correlation
The interrogation of displacement is achieved using the discrete spatial cross-correlation R of two image windows, I_A and I_B, as defined in equation 6.1. The values of p and q are the shift of the interrogation regions with respect to each other.

R(p, q) = (1 / (x_size * y_size)) * Σ_{m=1}^{x_size} Σ_{n=1}^{y_size} I_A(m, n) I_B(m + p, n + q)    (6.1)
The cross-correlation can be computed directly from this equation; however, this is a computationally intensive operation, requiring on the order of (x_size * y_size)^2 operations. An alternative is the use of an efficient Fast Fourier Transform (FFT) algorithm, which reduces the complexity to on the order of (x_size * y_size) log2(x_size * y_size).
To perform the FFT-based spatial correlation, the image windows are obtained by using the previously defined grid node points and window sizes to extract a subset of data from the individual images. The mean value of each window is subtracted from it to eliminate the correlation of the mean and fluctuating intensities, as shown in [3, p. 320]. This subtraction procedure is valid for relatively homogeneous illumination within each interrogation window, which is almost always a valid assumption. An example of two windows is shown in Figure 6.3a,b. Next, the two-dimensional FFT of both windows, F_A and F_B, is computed using the fft2 function in MATLAB. The shifted absolute magnitudes of these Fourier transforms are shown in Figure 6.3c,d. The resulting pair of Fourier transforms is then multiplied together; specifically, the complex conjugate of F_A is multiplied by F_B. This quantity is known as the cross-spectral density and is shown in Figure 6.3e. Taking the inverse FFT using the MATLAB function ifft2 yields the spatial correlation R. Some implementations of the FFT, such as the FFTW library used by MATLAB, arrange the output such that high frequencies are placed at the center and the DC component is located at the corners. For this reason, the function fftshift is used to reshuffle the values of the correlation plane so that the center of the plane corresponds to zero displacement. The shifted correlation plane is shown in Figure 6.3f.
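The sequence of FFT steps above, together with the zero-padding and correlation-coefficient scaling discussed in the following paragraphs, can be sketched in NumPy (the thesis uses MATLAB's fft2, ifft2, and fftshift); this is an illustrative reimplementation, not the thesis code:

```python
import numpy as np

def cross_correlate(win_a, win_b):
    """FFT-based spatial cross-correlation of two interrogation windows.

    Steps mirror the text: subtract each window's mean, zero-pad the
    transforms to twice the window size, form the cross-spectral density
    conj(F_A) * F_B, inverse-transform, and fftshift so the center of the
    plane corresponds to zero displacement.  Dividing by the pre-padding
    standard deviations scales the plane to a correlation coefficient.
    """
    ys, xs = win_a.shape
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    norm = a.std() * b.std() * a.size        # computed before zero-padding
    fa = np.fft.fft2(a, s=(2 * ys, 2 * xs))  # zero-padded 2-D transforms
    fb = np.fft.fft2(b, s=(2 * ys, 2 * xs))
    s = np.conj(fa) * fb                     # cross-spectral density
    r = np.real(np.fft.ifft2(s))
    return np.fft.fftshift(r) / norm
```

With 32 x 32 px windows, the returned plane is 64 x 64 px, and a displacement of (dy, dx) appears as a peak at index (32 + dy, 32 + dx).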
The drawback of using FFTs instead of direct computation is that the FFT is defined only for periodic signals. As detailed in Adrian and Westerweel [3, p. 372], the exact value of the spatial correlation can only be recovered by zero-padding the inputs I_A and I_B to sizes of 2*x_size and 2*y_size. In practice, selecting window sizes in accordance with the one-quarter rule (displacements should be less than one-quarter of the interrogation window size) reduces this effect, because the correlation peak will not be "folded" back into the correlation plane through the periodicity effect. However, to ensure correlation artifacts do not skew the results of this work, all spatial correlations are zero-padded to twice the input size in both coordinate directions. Note that even with zero-padding, the computational efficiency of the FFT method is still an order of magnitude better than the direct method. For example, using 64 x 64 px interrogation windows, the direct approach requires almost twenty million operations, while the zero-padded FFT requires just 200,000.
A further modification to the FFT method described here is to properly scale the correlation plane. As noted in Raffel et al. [7, p. 139], the correlation coefficient is the proper normalization of the cross-correlation, and it allows comparisons between different correlation planes. Additionally, the values of the correlation coefficient give a first-order approximation of the relative strength of the correlation. To convert the correlation plane to the correlation coefficient, the data is divided by the standard deviations of the original windows. Note that the standard deviations are calculated prior to the application of zero-padding. After this is performed, the values of the correlation plane will nominally fall in the range −1 ≤ R ≤ 1.
With the correlation plane calculated, the displacement is found by estimating the precise location of the peak value. The estimate can be broken into two components: the integer displacement m_0 and n_0, and the fractional estimates ε_X and ε_Y. The total displacement is the sum of the integer and fractional displacements for each coordinate. The integer estimate is found using a simple maximum search within the correlation plane, excluding boundary pixels. The fractional displacement is determined using a Gaussian three-point estimator, which has become the standard method for subpixel peak detection in PIV. The Gaussian estimate is defined in equation 6.2, where R_0 is the maximum value of the correlation plane at (m_0, n_0), and R_-1 and R_+1 are the values of the neighboring points. This equation can be applied separately and independently along each coordinate direction.

ε = (ln R_-1 − ln R_+1) / (2 ln R_-1 − 4 ln R_0 + 2 ln R_+1)    (6.2)
Note that the form of equation 6.2 requires a neighboring value of the correlation in each direction; for this reason, the integer estimator defined previously excludes boundary points. Also, the Gaussian estimator requires all values of the correlation plane to be positive (the natural logarithm of zero or a negative number is undefined). The correlation plane is therefore shifted in the estimator by subtracting its minimum value, making all values greater than or equal to zero, and a value of 1 x 10^-10 is then added to eliminate any possibility of encountering zero. These modifications to the correlation plane do not shift the estimated peak location, since the estimate is based on the ratio of the surrounding pixel intensities and not on their absolute values.

Figure 6.3: Intermediate steps for the calculation of the spatial correlation with the FFT: (a) I_A, (b) I_B, (c) |F_A| (shifted), (d) |F_B| (shifted), (e) |S| (shifted), (f) R (shifted). Window size 64 x 64 px. Data from case A2 of the 2005 PIV Challenge.
As a final step to determine the actual displacement, the estimate is reduced by an amount equal to half the correlation plane size. This is because, for zero displacement, the peak is located at the center of the correlation plane, while the estimators provide values in image coordinates, which begin at the upper-left corner of the plane. Equations 6.3 and 6.4 perform this shift. The additional subtraction of one is due to MATLAB image indexing, which begins at one.

dx = (m_0 + ε_X) − R_x,size/2 − 1    (6.3)
dy = (n_0 + ε_Y) − R_y,size/2 − 1    (6.4)
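The peak search, the three-point Gaussian estimator of equation 6.2, and the center shift of equations 6.3 and 6.4 can be sketched together in NumPy (function names are my own; note that the "−1" of the MATLAB equations disappears with zero-based indexing):

```python
import numpy as np

def peak_displacement(r):
    """Estimate the displacement from an fftshift-ed correlation plane.

    Integer peak via a maximum search excluding boundary pixels, then the
    three-point Gaussian estimator (equation 6.2) applied independently in
    each direction.  The plane is shifted positive first so that the
    logarithms are defined.
    """
    r = r - r.min() + 1e-10            # ensure strictly positive values
    interior = r[1:-1, 1:-1]           # exclude boundary pixels
    n0, m0 = np.unravel_index(np.argmax(interior), interior.shape)
    n0 += 1
    m0 += 1                            # back to full-plane indices

    def gauss3(rm, r0, rp):
        # Three-point Gaussian estimator, equation 6.2
        return (np.log(rm) - np.log(rp)) / (
            2 * np.log(rm) - 4 * np.log(r0) + 2 * np.log(rp))

    eps_x = gauss3(r[n0, m0 - 1], r[n0, m0], r[n0, m0 + 1])
    eps_y = gauss3(r[n0 - 1, m0], r[n0, m0], r[n0 + 1, m0])

    # Zero displacement sits at the center of the plane.
    ny, nx = r.shape
    return (m0 + eps_x) - nx // 2, (n0 + eps_y) - ny // 2
```

The three-point Gaussian estimator is exact for a Gaussian-shaped peak, which is why it recovers fractional displacements from synthetic correlation planes to high precision.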
An example of this cross-correlation procedure, applied to an entire image without any window offset, is given in Figure 6.4. From a qualitative standpoint, the flow field of a turbulent jet is represented quite well. However, there are obvious outliers in the data. Therefore, validation procedures must be developed to detect outliers and replace them with suitable displacement values.
6.1.3 Vector Validation
To identify invalid vectors and replace them, a local validation process is used for detection. The multi-step procedure outlined here is similar to the one outlined by Adrian and Westerweel [3, p. 512].
1. Local outlier detection using the normalized median test. Vectors exceeding a threshold
value of two are tagged as invalid.
2. Bilinear interpolation to replace vectors tagged as invalid.
3. Repeat process until convergence.
The normalized median test is described by Westerweel and Scarano [64] and is a local validation test based on the median value of the velocity components of the neighboring eight vectors. The technique is robust and applies to a wide variety of flows, including those with large velocity gradients, and is now the predominant validation technique used in PIV codes.

Figure 6.4: Example of a cross-correlation performed using 48 x 48 px interrogation window sizes with 24 x 24 px spacing and no window offset. Vectors scaled by a factor of 5. Data from case A, image 2 of the 2003 PIV Challenge.
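A minimal single-component sketch of the normalized median test follows. The threshold of two is from the procedure above; the small normalization constant eps (here 0.1 px) follows the common formulation of the test and is an assumption, as the thesis does not state the exact constant used:

```python
import numpy as np

def normalized_median_test(u, eps=0.1, threshold=2.0):
    """Flag outlier vectors with the normalized median test [64].

    For each interior vector, the residual with respect to the median of
    its eight neighbors is normalized by the median of the neighbors' own
    residuals plus eps (in px); vectors whose normalized residual exceeds
    the threshold are tagged invalid.  Image borders are ignored here.
    """
    ny, nx = u.shape
    invalid = np.zeros((ny, nx), dtype=bool)
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            # eight neighbors: 3x3 block with the center removed
            neigh = np.delete(u[j - 1:j + 2, i - 1:i + 2].ravel(), 4)
            um = np.median(neigh)
            rm = np.median(np.abs(neigh - um))   # neighbor residuals
            invalid[j, i] = abs(u[j, i] - um) / (rm + eps) > threshold
    return invalid
```

In practice the test is applied to each velocity component, and flagged vectors are then replaced by bilinear interpolation as listed in the procedure above.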
Figure 6.5: Vector field from Figure 6.4 after the iterative validation and replacement procedure.
6.1.4 Predictor Generation
After an initial interrogation pass performed without a window offset, the predictor for the next iteration can be generated by simply rounding the displacement to the nearest integer value. The resulting values p_x and p_y are used to offset the individual windows of image B relative to image A. An example of the predictor field is shown in figures 6.6 and 6.7.
A more complex approach is needed to apply the predictor to a refined grid. As the grid resolution increases, the values of the predictor must be interpolated from the displacement field of the previous grid. For this reason, a bilinear interpolation is performed with the MATLAB function interp2. The results of the interpolation are then rounded to yield integer values.
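The predictor interpolation can be sketched as follows; this is a NumPy stand-in for MATLAB's interp2 followed by rounding, assuming regular grids and clamping queries to the old grid's extent (function and variable names are illustrative):

```python
import numpy as np

def interp_predictor(x_old, y_old, d_old, x_new, y_new):
    """Bilinearly interpolate a displacement field from a coarse grid onto
    a refined grid and round to integer window offsets."""
    def locate(coords, q):
        # index of the cell containing q, clamped to the valid range
        i = int(np.clip(np.searchsorted(coords, q) - 1, 0, len(coords) - 2))
        t = (q - coords[i]) / (coords[i + 1] - coords[i])
        return i, float(np.clip(t, 0.0, 1.0))

    pred = np.empty((len(y_new), len(x_new)))
    for j, yq in enumerate(y_new):
        jy, ty = locate(y_old, yq)
        for i, xq in enumerate(x_new):
            ix, tx = locate(x_old, xq)
            # bilinear blend of the four surrounding coarse-grid values
            pred[j, i] = ((1 - ty) * ((1 - tx) * d_old[jy, ix]
                                      + tx * d_old[jy, ix + 1])
                          + ty * ((1 - tx) * d_old[jy + 1, ix]
                                  + tx * d_old[jy + 1, ix + 1]))
    return np.rint(pred).astype(int)
```

The rounding keeps the window offsets integer, so the correlation windows remain pixel-aligned, which is the defining simplification of the discrete-offset method relative to image-deformation schemes.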
Figure 6.6: X-Coordinate Predictor using 64 x 64 px interrogation window sizes with 32 x 32 px spacing. Data from case A, image 2 of the 2003 PIV Challenge.
6.1.5 Grid Refinement

After multiple passes using identical grid settings, the velocity field will converge. To increase the spatial resolution of the algorithm, a new grid can then be generated using a smaller window size and window spacing. To accomplish this, a refinement factor R_f is defined, taken to be an integer value ranging from 1 to as much as 4. The grid parameters at each refinement level are then defined in equations 6.5 to 6.8, where x_size,i, y_size,i, x_spacing,i, and y_spacing,i are the initial grid settings.

x_size = ceil(x_size,i / R_f)    (6.5)
y_size = ceil(y_size,i / R_f)    (6.6)
x_spacing = ceil(x_spacing,i / R_f)    (6.7)
y_spacing = ceil(y_spacing,i / R_f)    (6.8)

Figure 6.7: Y-Coordinate Predictor using 64 x 64 px interrogation window sizes with 32 x 32 px spacing. Data from case A, image 2 of the 2003 PIV Challenge.

Figure 6.8: X-Coordinate Corrector using 64 x 64 px interrogation window sizes with 32 x 32 px spacing. Data from case A, image 2 of the 2003 PIV Challenge.

Figure 6.9: Y-Coordinate Corrector using 64 x 64 px interrogation window sizes with 32 x 32 px spacing. Data from case A, image 2 of the 2003 PIV Challenge.
In the above equations, the ceil function is used to ensure integer values for all parameters. Additionally, a check is performed on both x_size and y_size to ensure that they are even values; this restriction is due to the way the fftshift function in MATLAB reshuffles the correlation plane. After the grid parameters are set, the grid generation function generates new node and centroid points for use in the calculations. An example of the grid refinement is given in figures 6.10 through 6.13.
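Equations 6.5 to 6.8, together with the integer and even-size checks described above, can be written compactly. A Python sketch follows (the thesis code is MATLAB); rounding an odd result up to the next even size is one plausible reading of the even-value check, which the text does not spell out:

```python
import math

def refine_grid(x_size_i, y_size_i, x_spacing_i, y_spacing_i, rf):
    """Grid parameters at refinement level rf (equations 6.5-6.8).

    ceil keeps all parameters integer; window sizes are then bumped to
    the next even value, as required by the fftshift-based correlation.
    """
    def even(n):
        return n if n % 2 == 0 else n + 1

    x_size = even(math.ceil(x_size_i / rf))
    y_size = even(math.ceil(y_size_i / rf))
    x_spacing = math.ceil(x_spacing_i / rf)
    y_spacing = math.ceil(y_spacing_i / rf)
    return x_size, y_size, x_spacing, y_spacing
```

For initial settings of 64 px windows with 32 px spacing, refinement levels R_f = 1 through 4 give window sizes of 64, 32, 22, and 16 px.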
Figure 6.10: Rf = 1. Initial percentage of valid vectors: 97%.
From 6.10 to 6.13, the spatial resolution increases dramatically; however, at high refinement levels a high percentage of invalid vectors exists. This is due to the reduced number of particles forming a distinct correlation peak, as detailed in Adrian and Westerweel [3, p. 346]. In this work, the acceptable maximum refinement level is defined as the highest level that can be achieved with greater than 95% valid vectors on the first validation pass. Another way to visualize this behavior is in figures 6.14 and 6.15. In the former, the RMS of the corrector values is plotted alongside the refinement value; due to the multi-pass procedure, the RMS error generally decreases significantly within each grid refinement step. The second figure shows the performance of the vector validation and replacement scheme along with the refinement number. Note the substantial drop in valid vectors when the refinement factor reaches R_f = 4.
Figure 6.11: Rf = 2. Initial percentage of valid vectors: 97%.
6.2 Extension to 3-D PIV
Extending the WIDIM algorithm to 3-D requires each component of the algorithm to be adapted for 3-D correlation. This is a relatively simple procedure which does not involve substantial modification of the PIV code; a notable change is the use of the fftn and ifftn commands for performing the 3-D FFT and inverse FFT, respectively. However, the images do need to be conditioned prior to being loaded into the code. This is because the output from the rendering code has equal resolution at each plane but actually represents a trapezoidal volume in which the magnification changes at each plane. Thus, if the individual refocus planes are not modified, the differences in magnification through each interrogation volume will result in a slight shear. To correct for this, the data in each of the refocused images is resized such that the magnification is set to a constant value while no information is lost. This is done by calculating the image size for each plane using the relation (t + s_i)/s_i, with the base resolution chosen as the plane with the largest scale factor. This procedure is shown schematically in Figure 6.16: it takes a data set consisting of multiple planes at identical pixel resolution but different magnifications due to their distance from the lens, and resizes each plane so that the image data represents constant magnification. For the resizing, data outside of the image range is taken to be zero; the shaded regions in Figure 6.16 correspond to areas filled with zeros. An example of this applied to a set of 2-D data is given in Figure 6.17.

Figure 6.12: Rf = 3. Initial percentage of valid vectors: 95%.
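The resize-to-constant-magnification step might look as follows in NumPy. This is a rough sketch under stated assumptions (nearest-neighbor resampling, and the direction of the per-plane rescaling relative to the base plane), since the thesis does not give the implementation details:

```python
import numpy as np

def to_constant_magnification(planes, t_values, s_i):
    """Rescale refocused planes so all represent the same magnification.

    Each plane at depth offset t has relative scale (t + s_i) / s_i; the
    plane with the largest scale factor sets the base resolution, and the
    other planes are resampled (nearest-neighbor here, for simplicity) and
    embedded centrally in a zero-filled array of the base size.
    """
    scales = [(t + s_i) / s_i for t in t_values]
    base = max(scales)
    ny, nx = planes[0].shape
    out = np.zeros((len(planes), ny, nx))
    for k, (plane, sc) in enumerate(zip(planes, scales)):
        # size of this plane's field of view at the base magnification
        my = int(round(ny * sc / base))
        mx = int(round(nx * sc / base))
        # nearest-neighbor resample onto an (my, mx) grid
        yi = np.minimum(np.arange(my) * ny // my, ny - 1)
        xi = np.minimum(np.arange(mx) * nx // mx, nx - 1)
        resized = plane[np.ix_(yi, xi)]
        y0 = (ny - my) // 2
        x0 = (nx - mx) // 2
        out[k, y0:y0 + my, x0:x0 + mx] = resized  # zeros fill the border
    return out
```

Here the zero-filled borders play the role of the shaded regions in Figure 6.16; a production version would use a proper interpolating resize (e.g., MATLAB's imresize) rather than nearest-neighbor sampling.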
One of the most important parameters in PIV interrogation is the particle number density within the interrogation windows, designated N_I. It is defined as the average number of particle images within each interrogation window or volume. It is well documented that increasing the number density directly increases the peak detection probability and accuracy [3, p. 343], and suitable values typically occur for N_I > 10 [3, p. 350]. This parameter can be controlled in an experiment by introducing or removing seeding particles, or at processing time by modifying the size of the interrogation windows or volumes. For measurements desiring high spatial resolution, the interrogation window size must be made as small as possible; thus, to ensure N_I is large enough in these cases, the seeding concentration is often increased in a traditional PIV measurement. The limit on seeding concentration is typically reached when laser speckle noise becomes the dominant factor in the imaging, or when the line of sight loses transparency due to the dense particle field.

Figure 6.13: Rf = 4. Initial percentage of valid vectors: 85%.

Figure 6.14: Corrector RMS for the multi-pass algorithm. The refinement factor Rf is also plotted for clarity.

Figure 6.15: Percentage of valid vectors for the multi-pass algorithm. The refinement factor Rf is also plotted for clarity.

Figure 6.16: Schematic of the resize operation, taking planes from non-constant to constant magnification.
For 3-D PIV, most techniques do not directly measure the intensity field and therefore require reconstruction procedures to generate a volumetric intensity field. Monte Carlo simulations and actual experiments have shown that all reconstruction procedures (MART, SAPIV thresholding, etc.) place limits on the particle density for which acceptable reconstructions can be obtained. In essence, the primary limitation on spatial resolution in 3-D PIV is the limited particle density that can be reconstructed. It is estimated that the technique described here will have limitations similar to those of the SAPIV thresholding technique, but this must be verified, as must the 3-D PIV code itself. For this reason, the technique is first tested on simulated uniform velocity fields independently in both the x and z directions, which allows the accuracy of the technique to be evaluated for both in-plane and out-of-plane motion. The variable changed in these simulations is the particle number density, measured in particles per pixel (ppp).

Figure 6.17: Example of the resizing operation applied to a set of refocused planes at (a) t = -7.7 mm, (b) t = -4.5 mm, (c) t = -1.3 mm, (d) t = 1.9 mm, (e) t = 5.1 mm, (f) t = 8.3 mm. Note that in these images the thresholding operation has not been applied, to give a clearer view of the resizing procedure.
Chapter 7
Results
Testing of the simulation, rendering, and PIV algorithms is performed in this chapter by analyzing cases of uniform in-plane and out-of-plane displacements and a spherical vortex simulation. The resulting velocity fields are screened for bias errors, and the accuracy is estimated for motions in the in-plane and out-of-plane directions.
7.1 Uniform X-Direction Displacements
A uniform displacement in the x-direction was tested to identify any bias errors that could be present in the rendering codes or PIV algorithms. A uniform shift of 4.5 pixels was used; the pixel size was determined by the 125 µm pitch of the lenslets, so the physical shift input into the simulator was 0.562 mm. For these tests a relatively low particle density of 0.001 particles per pixel (ppp) was used, where particles per pixel is defined in terms of the pixels of the CCD, not of the refocused images. By comparison, tomographic PIV and other 3-D techniques typically use seeding densities an order of magnitude greater. Sixteen refocused planes are generated using the plane spacing derived from the depth of field equations. An example of the velocity field generated in this configuration is given in Figure 7.1. In this run, 4800 vectors are generated, of which only 1 out of every 3 is shown for clarity. From a qualitative perspective, it is clear from this overview that the PIV algorithms do not produce substantial outliers. A more rigorous assessment of the accuracy is performed by generating displacement histograms for each coordinate direction.
Figure 7.1: Velocity field using 0.001 particles per pixel and a simulated 4.5 pixel shift in the x-direction.

Figure 7.2 is a histogram of the x-displacements using 50 bins. A Gaussian function is fit to the data to show the excellent agreement between the histogram and the normal distribution. This indicates that the velocity vectors comply with the central limit theorem, which states that the combination of many independent, identical random variables (such as the random error in PIV [3, p. 380]) will tend toward a normal distribution. The annotations in Figure 7.2 indicate the mean and standard deviation of the samples, not of the Gaussian curve fit.
Note that the mean value shows a slight underestimation of the actual displacement (4.5 px). This is likely a bias error inherent in the PIV processing itself; the value is consistent with the magnitude of the displacement bias error described in Adrian and Westerweel [3, p. 356]. This underestimation occurs when the values of the correlation peak are slightly skewed due to the finite size of the interrogation regions, leading to a systematic underestimation of the actual displacement. Modern PIV interrogation methods eliminate this issue by using fractional window offset techniques to position the correlation peak at the center of the correlation plane. These techniques are more complicated due to the use of image interpolation schemes, which introduce additional complexity in the interrogation steps. However, the magnitude of this error is relatively small, on the order of the random error magnitude for PIV.
Figure 7.2: Histogram of x-displacements. 50 bins used over 4800 data points. (σ = 0.07 px, μ = -4.45 px)
Histograms were also generated in the other coordinate directions for the case of uniform x-coordinate displacement. Figure 7.3 presents the displacements in the y-direction, and Figure 7.4 presents the data in the z-direction. In both cases, the mean value of the displacement is zero, and the standard deviation of the displacement is in agreement with the random error magnitude typical of PIV interrogations [3, p. 380].
As a further test of the accuracy of the rendering codes, the image resizing step was removed to gauge its effect on the PIV interrogation. With the resize step removed, the magnification is no longer constant throughout the image, and a depth-dependent bias should be expected. Figure 7.5 shows a histogram of x-component vectors for this scenario; the bimodal distribution in the histogram is due to the change in magnification. This is clarified in Figure 7.6, which shows the error increasing with distance from the central plane. In this figure, the average displacement was subtracted to better illustrate the spatial distribution of the bias.
Figure 7.3: Histogram of y-displacements. 50 bins used over 4800 data points. (σ = 0.03 px, μ = -0.00 px)
Figure 7.4: Histogram of z-displacements. 50 bins used over 4800 data points. (σ = 0.05 px, μ = -0.00 px)
7.2 Uniform Z-Direction Displacements
Particle displacements in the z-direction are particularly challenging, as the number of pixels in the depth direction (16) is significantly lower than in the in-plane directions. For this reason, the interrogation volume size in the z-direction must be reduced to allow multiple planes to be measured. For the cases studied here, the initial interrogation window z_size was set to 16 px, and the initial z_spacing to 8 px. Additionally, the first tests of the uniform z-displacement showed severe bias errors related to particles leaving the rendered measurement volume. For this reason, the rendering was expanded by 50% for both positive and negative depths, and the number of refocus planes generated was doubled (to 32). This allows particles located near the edge of the measurement volume to gradually fade out of the measurement volume rather than be cut off, and was found effective for eliminating biases near the boundaries of the volume.

Figure 7.5: Histogram of x-displacements without using image resizing. 50 bins used over 4800 data points. (σ = 0.25 px, μ = -4.83 px)

Figure 7.6: Vector field showing one of every 5 vectors without image resizing. Mean value of x-component subtracted, and both y and z components set to zero for effective visualization.
Figure 7.7 shows the velocity field obtained for the uniform z-displacement case with a nominal displacement of 2.5 pixels; one out of every four vectors is shown for clarity. From a qualitative standpoint, the displacements are captured with few major outliers. As in the previous case, a more substantial analysis can be made from displacement histograms. Figure 7.8 shows a histogram of the z-displacements. Notable in this figure is the presence of a bimodal structure centered on integer displacements, likely due to the "peak-locking" effect in PIV. There also appears to be a small outlier peak at 0 px, likely due to a region of interrogation windows without particles near the edges of the volume.
Figure 7.7: Vector field for uniform z-displacement showing one of every 4 vectors.
Figure 7.8: Histogram of z-displacements. 100 bins used to highlight the peak-locking effect more clearly. (σ = 0.57 px, μ = -2.47 px)
To confirm these ideas, a vector field is shown in Figure 7.9 in which the mean displacement has been subtracted from the actual displacement, giving a view of the spatial distribution of the bias errors. Inspection of this figure shows an inward bias toward the edges of the interrogation volume, and the outliers corresponding to zero displacement occur at the edges of the volume, where there may be few particle images. Note that in this figure, the vectors have been scaled by a factor of 4 to highlight the effects.
These findings indicate that z-displacements can be measured; however, they are highly sensitive to peak-locking effects. By modifying the thresholding and rendering procedures in future efforts, it may be possible to reduce the peak-locking effect and the biases at the edge of the measurement volume. Histograms of the displacement in the x-direction and y-direction are shown in figures 7.10 and 7.11, respectively. As in the previous case of uniform x-displacement, the errors in the other coordinate directions are minimal and correspond to the magnitude of the random error in PIV.
[Figure: 3-D vector plot with axes X (px), Y (px), Z (px).]
Figure 7.9: Vector field for uniform z-displacements (same data as Figure 7.7). The mean
displacement has been subtracted to show the location and direction of the bias.
[Figure: histogram of Displacement (px) vs. Counts, annotated σ = 0.18 px, μ = −0.00 px.]
Figure 7.10: Histogram of x-displacements. 100 bins are used over 3600 data points.
7.3 Variation of Particle Number Density
As previously noted, the particle number density is a critical parameter in PIV, as it
determines the maximum spatial resolution that can typically be achieved. The number density
[Figure: histogram of Displacement (px) vs. Counts, annotated σ = 0.12 px, μ = −0.00 px.]
Figure 7.11: Histogram of y-displacements. 100 bins are used over 3600 data points.
was increased in the simulations from the base value of 0.001 ppp to 0.005 ppp and 0.01 ppp.
To demonstrate the effect of this increase on the refocused planes, Figure 7.12 shows an
example refocused plane from each case, with and without the thresholding applied. An
interesting result evident in this figure is that the thresholding technique performs
poorly at high particle densities. The reduced number of particles in the thresholded images
precludes performing any PIV interrogation with these data sets.
To understand why the thresholding procedure deteriorates at high particle image densities,
the intensity histograms are compared for both 0.001 and 0.01 ppp in Figures 7.13
and 7.14. Clearly, at high particle densities the intensity distribution becomes noticeably
more Gaussian and the threshold level increases. It should be noted that the Gaussian
intensity distribution and high threshold levels demonstrated in Figure 7.14 also exist for
the individual planes of the rendered volume, not just the entire volume. This indicates that
calculating a unique threshold for each focal plane would do little to improve the
discrimination of particles.
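The global threshold behavior described above can be illustrated with a small sketch. This is not the thesis rendering code; it simply assumes the rendered volume is a floating-point array normalized to [0, 1] and applies a mean-plus-3σ cutoff, showing how a heavier out-of-focus background drives the threshold up toward the particle intensities:

```python
import numpy as np

def threshold_volume(volume, nsigma=3.0):
    """Zero voxels below mean + nsigma*std of the whole volume and return
    the thresholded volume together with the threshold level used."""
    level = volume.mean() + nsigma * volume.std()
    return np.where(volume >= level, volume, 0.0), level

rng = np.random.default_rng(1)
# Sparse case: faint background, so the threshold stays well below the particles.
low = np.clip(rng.normal(0.05, 0.03, (32, 32, 8)), 0.0, 1.0)
low.flat[:20] = 0.9  # a few bright in-focus "particles"
_, lvl_low = threshold_volume(low)
# Dense case: strong out-of-focus background raises both the mean and the
# standard deviation, so the threshold climbs toward the particle intensity.
high = np.clip(rng.normal(0.4, 0.15, (32, 32, 8)), 0.0, 1.0)
high.flat[:20] = 0.9
_, lvl_high = threshold_volume(high)
print(lvl_low, lvl_high)
```

Because the threshold is tied to the global statistics, dim but genuine scatterers are the first casualties as the background grows, which matches the behavior seen in Figure 7.12.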
Based on these results, PIV was not attempted at the higher particle densities of 0.005
and 0.01 ppp, since the low number of particles in the thresholded images would result in
(a) Original, 0.001 ppp (b) Thresholded, 0.001 ppp
(c) Original, 0.005 ppp (d) Thresholded, 0.005 ppp
(e) Original, 0.01 ppp (f) Thresholded, 0.01 ppp
Figure 7.12: Example refocused planes at each particle number density, shown with and
without the thresholding operation applied.
interrogation volumes with few or no particles and thus invalid results. As a rule of thumb,
accurate PIV evaluation requires more than 10 particle images within each interrogation
volume [3, p. 350].
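The rule of thumb can be checked with a rough estimate. The helper below is illustrative only and assumes each raw-image particle yields one reconstructed particle spread uniformly over the interrogation windows; the inputs match the simulation values quoted in this chapter (0.001 ppp, a 4872 x 3248 px sensor, and 3600 vectors):

```python
def particles_per_window(ppp, sensor_pixels, n_windows):
    """Rough expected particle count per interrogation volume: total
    particles (ppp * sensor pixels) divided evenly among the windows."""
    return ppp * sensor_pixels / n_windows

print(particles_per_window(0.001, 4872 * 3248, 3600))  # roughly 4.4
print(particles_per_window(0.01, 4872 * 3248, 3600))   # roughly 44
```

At the base density this estimate already sits below the 10-particle guideline, consistent with the sparse-window outliers observed at the volume edges; overlapping windows share particles, so the effective count is somewhat higher in practice.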
[Figure: histogram of rendered intensity (0 to 1) vs. Counts.]
Figure 7.13: Histogram of the rendered intensity volume corresponding to Figure 7.12a. The
3σ threshold level for this case is 0.34.
[Figure: histogram of rendered intensity (0 to 1) vs. Counts.]
Figure 7.14: Histogram of the rendered intensity volume corresponding to Figure 7.12e. The
3σ threshold level for this case is 0.72.
7.4 Velocity Field of a Simulated Vortex
The previous two sections evaluated the performance of the PIV and rendering codes
on uniform displacement fields to check for bias errors and peak-locking effects. With these
errors now quantified, a true 3-D velocity field can be tested to evaluate the algorithms for
the case of displacements in all three directions. Hill's spherical vortex is used for this
purpose. Figure 7.15 shows the 3-D velocity field generated by the rendering and PIV code.
Qualitatively, the large-scale vortical structure can be easily discerned with good resolution.
Figures 7.16 and 7.17 present projected views of the 3-D velocity field to provide additional
insight.
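For reference, the interior velocity field of Hill's spherical vortex has a simple closed form (see, e.g., Green [49]). The sketch below evaluates that textbook interior solution for a vortex of radius a translating at speed U, centered at the origin; it is not the thesis simulation code, and the irrotational exterior flow is omitted for brevity:

```python
import numpy as np

def hills_vortex_velocity(x, y, z, a=1.0, U=1.0):
    """Interior velocity of Hill's spherical vortex in the frame of the
    vortex: u_z = (3U/2)(1 - (2r^2 + z^2)/a^2) and u_r = (3U/2) r z / a^2,
    with r the cylindrical radius about the z-axis. Points outside the
    sphere are returned as zero here."""
    r = np.hypot(x, y)
    inside = (r**2 + z**2) <= a**2
    uz = np.where(inside, 1.5 * U * (1.0 - (2.0 * r**2 + z**2) / a**2), 0.0)
    ur = np.where(inside, 1.5 * U * r * z / a**2, 0.0)
    with np.errstate(invalid="ignore", divide="ignore"):
        ux = np.where(r > 0.0, ur * x / r, 0.0)  # resolve u_r into Cartesian parts
        uy = np.where(r > 0.0, ur * y / r, 0.0)
    return ux, uy, uz

x = y = z = np.array([0.0])
print(hills_vortex_velocity(x, y, z)[2])  # [1.5]: axial velocity 3U/2 at the center
```

Sampling this field on a grid and converting to pixel displacements gives a reference against which the measured vectors of Figure 7.15 can be compared.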
[Figure: 3-D vector plot with axes X (px), Y (px), Z (px).]
Figure 7.15: 3-D velocity field of an inviscid spherical vortex. All vectors plotted.
As in the previous cases, histograms of the displacements are shown to evaluate the
effects of peak locking or any bias. Figures 7.18 through 7.20 show these histograms for the
x-, y-, and z-displacements, respectively. Most notable is the reduced effect of peak locking
for the z-displacements, which may simply be caused by the lower percentage of vectors with
appreciable displacement in the z-direction.
[Figure: 2-D vector plot with axes X (px) and Y (px).]
Figure 7.16: X-Y projection of the 3-D velocity field in Figure 7.15. All vectors plotted.
[Figure: 2-D vector plot with axes Y (px) and Z (px).]
Figure 7.17: Y-Z projection of the 3-D velocity field in Figure 7.15. All vectors plotted.
[Figure: histogram of Displacement (px) vs. Counts, annotated σ = 0.14 px, μ = 0.00 px.]
Figure 7.18: Histogram of x-displacements for the velocity field given in Figure 7.15.
[Figure: histogram of Displacement (px) vs. Counts, annotated σ = 0.39 px, μ = −0.05 px.]
Figure 7.19: Histogram of y-displacements for the velocity field given in Figure 7.15.
[Figure: histogram of Displacement (px) vs. Counts, annotated σ = 0.04 px, μ = 0.00 px.]
Figure 7.20: Histogram of z-displacements for the velocity field given in Figure 7.15.
Chapter 8
Concluding Remarks
This thesis has detailed the development of a novel 3-D particle image velocimetry
technique based on light field imaging by describing the concept of light fields and plenoptic
photography, detailing the development of rendering codes, and finally integrating the
rendering codes with a complete simulation of the camera and a 3-D PIV algorithm.
Conclusions can be drawn from each of these efforts.
This thesis also explored the application of light field imaging to particle imaging. For
example, depth and angular resolution can be traded through the design of the lenslet array.
Increasing the pitch of the lenslets increases the number of pixels covered by each lenslet
and the number of unique focal planes available to be generated, but also reduces the
spatial resolution of each refocused image. The lenslet size used in this work was chosen
primarily to evaluate the performance of a commercially available lenslet array; however,
custom lenslet arrays can be manufactured. As an aside, the capability of trading depth
resolution for spatial resolution could allow for innovative new ways to perform dual-plane
PIV without multiple cameras and polarization filters. Another approach could be to increase
the depth resolution of the technique and use the camera with particle tracking velocimetry
(PTV) algorithms. Advances in sensor technology may allow smaller pixels to be reliably
manufactured in the future, allowing both the angular and spatial distributions to be
sampled in much greater detail.
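The pitch trade-off can be made concrete with a small calculation. The numbers below are plausible examples (7.4 μm pixels on a 4872 x 3248 sensor, with a 125 μm lenslet pitch), not necessarily the exact values of the camera design studied here; the focal-plane count follows Levoy's uniqueness argument of one plane per pixel under a lenslet in one direction:

```python
def lenslet_tradeoff(p_l_mm, p_p_mm, n_px_x, n_px_y):
    """The number of pixels spanned by one lenslet sets the number of unique
    focal planes; the lenslet count in each direction sets the spatial
    resolution of each refocused image."""
    px_per_lenslet = p_l_mm / p_p_mm
    n_focal_planes = int(round(px_per_lenslet))
    n_lenslets = (int(n_px_x / px_per_lenslet), int(n_px_y / px_per_lenslet))
    return n_focal_planes, n_lenslets

# Around 17 focal planes, but each refocused image has only ~288 x 192 samples.
print(lenslet_tradeoff(0.125, 0.0074, 4872, 3248))
```

Doubling the lenslet pitch roughly doubles the focal-plane count while halving the lenslet count in each direction, which is the trade-off described above.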
Other design parameters include optical quantities such as the magnification, the working
distances si and so, and the f-number. In this thesis, it was demonstrated that increasing
the depth resolution requires the lenslet size (and thus the circle of confusion) to be made
smaller and the camera to be operated at a magnification closer to 1 so that the cones of
rays generated at different refocus planes do not intersect in an elongated volume. This
finding has practical implications in that performing high-resolution measurements will be
easier within small measurement volumes. It also constrains the design space for lenslet
arrays. In most cases, a lenslet with a low f-number is desired. Currently, the lowest
f-number available for commercial order is f/4, but custom designs may allow this to be
reduced. A lower lenslet f-number has the combined effect of potentially increasing the
depth resolution of the camera and increasing its light-gathering capacity. However, it must
be kept in mind that the f-number of the camera lens is fixed by the effective f-number of
the lenslets and is generally lower than the lenslet f-number. The lenslets should thus not
be designed with an f-number that the main lens cannot properly match.
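The matching constraint can be written as a one-line relation. The sketch below assumes the working (image-side) f-number relation f#_eff = (1 - M) f# with the magnification M negative for a real, inverted image, which is the same matching rule used by the optics_gen script in Appendix B:

```python
def matched_main_lens_fnumber(fnum_lenslet, M):
    """Main-lens f-number whose image-side effective f-number equals the
    lenslet f-number, so each lenslet image just fills its pixel patch
    without overlapping its neighbors. M is negative for a real image."""
    return fnum_lenslet / (1.0 - M)

# f/4 lenslets at magnification -1 require an f/2 main-lens setting, i.e.
# the camera lens runs at a lower f-number than the lenslets.
print(matched_main_lens_fnumber(4.0, -1.0))  # 2.0
```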
The rendering codes developed in this thesis have successfully allowed multiple planes
to be refocused. However, the rendering process still requires additional work to fully
understand its performance and limitations. Of particular interest would be comparing
rendering techniques used by other light field research groups, such as Fourier slice
refocusing, to the current technique. Additionally, more basic questions persist and must be
answered in future efforts. Namely, there is not a clear understanding of the number of focal
planes that can be generated from a light field image, or of whether generating a greater
number of planes has any positive effect on the PIV interrogation. The viewpoint taken in
this work was originally given by Levoy, who uses the concept of uniqueness to restrict the
number of generated focal planes to the number of pixels under a lenslet in any one
direction. This argument could perhaps be strengthened for the case of particle imaging by
developing a firmer theoretical foundation.
A notable deficiency of the rendering codes, specifically the intensity thresholding, is the
inability to handle high particle number densities. This represents an area where tremendous
progress can still be made in the technique by introducing more sophisticated 3-D
deconvolution or limited-angle tomographic schemes to eliminate out-of-focus particles.
Improvements in this step of the rendering process directly increase the accuracy of PIV
by allowing a larger number of particles to be placed within each interrogation volume and
by allowing low-intensity scatterers (which would otherwise be eliminated by the
thresholding procedure) to contribute to the correlation.
Finally, the PIV code developed in this thesis provided a reasonably accurate
interrogation with adequate dynamic spatial and velocity range to evaluate the refocused
images generated by the simulator and rendering codes. However, enormous developments in
PIV techniques have occurred since the technique used in this thesis (discrete window
offset) was proposed in 1999. The challenges in correctly implementing advanced correlation
schemes ideally require dedicated efforts, which suggests that a collaborative effort should
be formed with others in the 3-D PIV community so that the predominant focus of plenoptic
PIV development can remain on the rendering codes and design studies.
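For concreteness, the core of one discrete-window-offset interrogation pass, a 3-D cross-correlation whose integer peak location supplies the window offset for the next pass, can be sketched as follows. This is a minimal FFT-based illustration, not the thesis PIV code:

```python
import numpy as np

def correlate_windows(w1, w2):
    """FFT-based 3-D cross-correlation of two interrogation volumes.
    Returns the signed integer displacement of the correlation peak."""
    f1 = np.fft.fftn(w1 - w1.mean())
    f2 = np.fft.fftn(w2 - w2.mean())
    corr = np.real(np.fft.ifftn(np.conj(f1) * f2))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map FFT bin indices to signed displacements.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(2)
vol = rng.random((32, 32, 16))
shifted = np.roll(vol, shift=(3, -2, 1), axis=(0, 1, 2))
print(correlate_windows(vol, shifted))  # (3, -2, 1)
```

In a full discrete-window-offset scheme this integer displacement shifts the second window before a second correlation pass, followed by sub-pixel (e.g. three-point Gaussian) peak fitting.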
In light of the difficulties expressed above, the plenoptic PIV concept has been
demonstrated in this thesis to be feasible using currently available hardware and software,
and it should continue to be developed. The attractiveness of a single-camera solution for
3-D PIV cannot be discounted; however, rather than viewing plenoptic PIV as a replacement
for tomographic or holographic PIV, perhaps it should be considered a unique complement to
these techniques, one whose simplicity could allow it to be used in laboratories worldwide.
Bibliography
[1] F. Pereira, M. Gharib, D. Dabiri, and D. Modarress, "Defocusing digital particle image velocimetry: a 3-component 3-dimensional DPIV measurement technique. Application to bubbly flows," Experiments in Fluids, vol. 29, pp. S78–S84, 2000.
[2] H. Meng, G. Pan, Y. Pu, and S. H. Woodward, "Holographic particle image velocimetry: from film to digital recording," Measurement Science and Technology, vol. 15, pp. 673–685, 2004.
[3] R. J. Adrian and J. Westerweel, Particle Image Velocimetry. Cambridge University Press, 2011.
[4] A. N. Kolmogorov, "Dissipation of energy in the locally isotropic turbulence," C. R. Acad. Sci., vol. 32, pp. 16–18, 1941.
[5] A. A. Townsend, The Structure of Turbulent Shear Flow. University Press, 1st ed., 1956.
[6] M. Samimy and M. P. Wernet, "Review of planar multiple-component velocimetry in high-speed flows," AIAA Journal, vol. 38, pp. 553–574, 2000.
[7] M. Raffel, C. Willert, S. Wereley, and J. Kompenhans, Particle Image Velocimetry: A Practical Guide. Springer, 2nd ed., 2007.
[8] G. S. Elliot and T. J. Beutner, "Molecular filter based planar Doppler velocimetry," Progress in Aerospace Sciences, vol. 35, pp. 799–845, 1999.
[9] B. Thurow, N. Jiang, W. Lempert, and M. Samimy, "Development of megahertz-rate planar Doppler velocimetry for high-speed flows," AIAA Journal, vol. 43, pp. 500–511, 2005.
[10] J. P. Crimaldi, "Planar laser induced fluorescence in aqueous flows," Experiments in Fluids, vol. 44, pp. 851–863, 2008.
[11] I. van Cruyningen, A. Lozano, and R. K. Hanson, "Quantitative imaging of concentration by planar laser-induced fluorescence," Experiments in Fluids, vol. 10, pp. 41–49, 2000.
[12] A. Cessou, U. Meier, and D. Stepowski, "Applications of planar laser induced fluorescence in turbulent reacting flows," Measurement Science and Technology, vol. 11, pp. 887–901, 2000.
[13] B. Stier and M. M. Koochesfahani, "Molecular tagging velocimetry (MTV) measurements in gas phase flows," Experiments in Fluids, vol. 26, pp. 297–304, 1999.
[14] H. Hu and M. M. Koochesfahani, "Molecular tagging velocimetry and thermometry and its application to the wake of a heated circular cylinder," Measurement Science and Technology, vol. 17, pp. 1269–1281, 2006.
[15] P. A. Davidson, Turbulence: An Introduction for Scientists and Engineers. Oxford University Press, 2004.
[16] C. Brucker, "3-D scanning-particle-image-velocimetry: Technique and application to a spherical cap wake flow," Appl. Sci. Res., vol. 56, pp. 157–179, 1996.
[17] C. Brucker, "3-D scanning PIV applied to an air flow in a motored engine using digital high-speed video," Measurement Science and Technology, vol. 8, pp. 1480–1492, 1997.
[18] W. Zhang, R. Hain, and C. J. Kahler, "Scanning PIV investigation of the laminar separation bubble on a SD7003 airfoil," Experiments in Fluids, vol. 45, pp. 725–743, 2008.
[19] C. E. Willert and M. Gharib, "Three-dimensional particle imaging with a single camera," Experiments in Fluids, vol. 12, pp. 353–358, 1992.
[20] D. Lin, N. C. Angarita-Jaimes, S. Chen, A. H. Greenaway, C. E. Towers, and D. P. Towers, "Three-dimensional particle imaging by defocusing method with an annular aperture," Optics Letters, vol. 33, pp. 905–907, 2008.
[21] F. Pereira and M. Gharib, "Defocusing digital particle imaging velocimetry and the three-dimensional characterization of two-phase flows," Measurement Science and Technology, vol. 13, pp. 683–694, 2002.
[22] M. Gharib, F. Pereira, D. Dabiri, J. R. Hove, and D. Modarress, "Quantitative flow visualization: Toward a comprehensive flow diagnostic tool," Integrative and Comparative Biology, vol. 42, pp. 964–970, 2002.
[23] G. E. Elsinga, F. Scarano, B. Wieneke, and B. W. van Oudheusden, "Tomographic particle image velocimetry," Experiments in Fluids, vol. 41, pp. 933–947, 2006.
[24] G. E. Elsinga, J. Westerweel, F. Scarano, and M. Novara, "On the velocity of ghost particles and the bias errors in tomographic-PIV," Experiments in Fluids, 2010. Published online.
[25] R. A. Humble, G. E. Elsinga, F. Scarano, and B. W. van Oudheusden, "Three-dimensional instantaneous structure of a shock wave/turbulent boundary layer interaction," Journal of Fluid Mechanics, vol. 622, pp. 33–62, 2009.
[26] F. Scarano and C. Poelma, "Three-dimensional vorticity patterns of cylinder wakes," Experiments in Fluids, vol. 47, pp. 69–83, 2009.
[27] D. Violato, P. Moore, and F. Scarano, "Lagrangian and Eulerian pressure field evaluation of rod-airfoil flow from time-resolved tomographic PIV," Experiments in Fluids, 2010. Published online.
[28] J. Sheng, E. Malkiel, and J. Katz, "Using digital holographic microscopy for simultaneous measurements of 3D near wall velocity and wall shear stress in a turbulent boundary layer," Experiments in Fluids, vol. 45, pp. 1023–1035, 2008.
[29] M. P. Arroyo and K. D. Hinsch, Recent Developments of PIV towards 3D Measurements, pp. 127–154. Springer, 2008.
[30] A. Svizher and J. Cohen, "Holographic particle image velocimetry system for measurement of hairpin vortices in air channel flow," Experiments in Fluids, vol. 40, pp. 708–722, 2006.
[31] B. Tao, J. Katz, and C. Meneveau, "Geometry and scale relationships in high Reynolds number turbulence determined from three-dimensional holographic velocimetry," Physics of Fluids, vol. 12, pp. 941–944, 2000.
[32] J. Belden, T. T. Truscott, M. C. Axiak, and A. M. Techet, "Three-dimensional synthetic aperture particle image velocimetry," Measurement Science and Technology, vol. 21, pp. 1–21, 2010.
[33] V. Vaish, G. Garg, E. Talvala, E. Antunez, B. Wilburn, M. Horowitz, and M. Levoy, "Synthetic aperture focusing using a shear-warp factorization of the viewing transform," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 3, 2005.
[34] E. H. Adelson and J. R. Bergen, "The plenoptic function and the elements of early vision," in Computational Models of Visual Processing, pp. 3–20, MIT Press, 1991.
[35] E. H. Adelson and J. Y. A. Wang, "Single lens stereo with a plenoptic camera," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, pp. 99–106, 1992.
[36] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," tech. rep., Stanford University, 2005.
[37] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, "Light field microscopy," ACM Transactions on Graphics, vol. 25, pp. 924–934, 2006.
[38] M. Levoy and P. Hanrahan, "Light field rendering," in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '96, pp. 31–42, 1996.
[39] M. Levoy, "Light fields and computational imaging," Computer, vol. 39, pp. 46–55, 2006.
[40] B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, "High performance imaging using large camera arrays," ACM Transactions on Graphics, vol. 24, pp. 765–776, 2005.
[41] M. Levoy, Z. Zhang, and I. McDowall, "Recording and controlling the 4D light field in a microscope," Journal of Microscopy, vol. 235, pp. 144–162, 2009.
[42] A. Lumsdaine and T. Georgiev, "Full resolution light field rendering," tech. rep., Adobe Systems, 2009.
[43] A. Lumsdaine and T. Georgiev, "The focused plenoptic camera," in IEEE International Conference on Computational Photography (ICCP), 2009.
[44] A. Gerrard and J. M. Burch, Introduction to Matrix Methods in Optics. Dover Publications, 1994.
[45] T. Georgeiv and C. Intwala, "Light field camera design for integral view photography," tech. rep., Adobe Systems, 2003.
[46] L. Ahrenberg and M. Magnor, "Light field rendering using matrix optics," in Proceedings of WSCG 2006, 2006.
[47] W. J. Smith, Modern Optical Engineering. McGraw-Hill, 4th ed., 2007.
[48] Kodak Company, "Kodak KAI-16000 Image Sensor Device Performance Specification," 2010.
[49] S. I. Green, Fluid Vortices. Springer, 1995.
[50] R. Ng, "Fourier slice photography," ACM Transactions on Graphics, vol. 24, pp. 735–744, 2005.
[51] J. Sibarita, "Deconvolution microscopy," Advanced Biochemical Engineering and Biotechnology, vol. 95, pp. 201–243, 2005.
[52] J. G. McNally, T. Karpova, J. Cooper, and J. A. Conchello, "Three-dimensional imaging by deconvolution microscopy," Methods, vol. 19, pp. 373–385, 1999.
[53] J. S. Park and K. D. Kihm, "Three-dimensional micro-PTV using deconvolution microscopy," Experiments in Fluids, vol. 40, pp. 491–499, 2006.
[54] R. J. Adrian, "Twenty years of particle image velocimetry," Experiments in Fluids, vol. 39, pp. 159–169, 2005.
[55] C. E. Willert and M. Gharib, "Digital particle image velocimetry," Experiments in Fluids, vol. 10, pp. 181–193, 1991.
[56] J. Westerweel, D. Dabiri, and M. Gharib, "The effect of a discrete window offset on the accuracy of cross-correlation analysis of digital PIV recordings," Experiments in Fluids, vol. 23, pp. 20–28, 1997.
[57] F. Scarano and M. L. Riethmuller, "Iterative multigrid approach in PIV image processing with discrete window offset," Experiments in Fluids, vol. 26, pp. 513–523, 1999.
[58] F. Scarano and M. L. Riethmuller, "Advances in iterative multigrid PIV image processing," Experiments in Fluids, vol. 29, pp. S51–S60, 2000.
[59] F. Scarano and M. L. Riethmuller, "Iterative image deformation methods in PIV," Measurement Science and Technology, vol. 13, pp. R1–R19, 2002.
[60] S. T. Wereley and C. D. Meinhart, "Second-order accurate particle image velocimetry," Experiments in Fluids, vol. 31, pp. 258–268, 2001.
[61] R. Theunissen, F. Scarano, and M. L. Riethmuller, "Spatially adaptive PIV interrogation based on data ensemble," Experiments in Fluids, vol. 48, pp. 875–887, 2010.
[62] T. Liu and L. Shen, "Fluid flow and optical flow," Journal of Fluid Mechanics, vol. 614, pp. 253–291, 2008.
[63] M. Stanislas, K. Okamoto, C. J. Kahler, J. Westerweel, and F. Scarano, "Main results of the third international PIV challenge," Experiments in Fluids, vol. 45, pp. 27–71, 2008.
[64] J. Westerweel and F. Scarano, "Universal outlier detection for PIV data," Experiments in Fluids, vol. 39, pp. 1096–1100, 2005.
Appendix A: Fortran 2-D Simulation Code
program plenoptic_simulator

  ! Bring in random number module.
  ! http://www.netlib.org/random/
  use random

  implicit none

  real :: f_l, p_l, p_p, f_m, FOV_x, FOV_y, M, s_i, s_o, fnum_l, fnum_m, p_m
  real :: sigma_m, sigma_l, o_x, o_y, p_ccd_x, p_ccd_y, coll_angle
  real :: phi_max, phi_min, phi_total, theta_max, theta_min, theta_total, phi, theta
  real :: randa, randb, sx, sy

  integer :: n_p_x, n_p_y, n_rays, n_l_x, n_l_y, npts, nrays_phi, nrays_theta
  integer :: x_ccd, y_ccd, nsx, nsy
  integer :: i, j

  real, allocatable, dimension(:,:) :: image, image_2d
  real, allocatable, dimension(:) :: dx, dy, dz, intensity
  real, dimension(4,4) :: T1, T2, T3, L1, L2, A1
  real, dimension(4) :: ray, ray2, ray3, ray4, ray5

  character(40) :: optics_filename, field_filename, output_filename, t_output_filename

  ! Temporary variables
  real :: tmp_n_p_y, tmp_n_p_x, tmp_n_l_x, tmp_n_l_y, tmp_n_rays
  character(17) :: dump

  write(*,*) 'Plenoptic Image Simulator'
  write(*,*) 'Kyle Lynch (lynchkp@auburn.edu)'
  write(*,*)

  call get_command_argument(1, field_filename)
  call get_command_argument(2, output_filename)

  ! Load in optics settings from file.
  optics_filename = 'optics2.txt'
  open(unit=8, file=optics_filename, status='old')
  read(8,100) dump, f_l
  read(8,100) dump, p_l
  read(8,100) dump, p_p
  read(8,100) dump, tmp_n_p_x
  read(8,100) dump, tmp_n_p_y
  read(8,100) dump, f_m
  read(8,100) dump, FOV_x
  read(8,100) dump, FOV_y
  read(8,100) dump, M
  read(8,100) dump, s_i
  read(8,100) dump, s_o
  read(8,100) dump, fnum_l
  read(8,100) dump, fnum_m
  read(8,100) dump, p_m
  read(8,100) dump, sigma_m
  read(8,100) dump, sigma_l
  read(8,100) dump, tmp_n_l_x
  read(8,100) dump, tmp_n_l_y
  read(8,100) dump, o_x
  read(8,100) dump, o_y
  read(8,100) dump, tmp_n_rays
  n_p_x = nint(tmp_n_p_x)
  n_p_y = nint(tmp_n_p_y)
  n_l_x = nint(tmp_n_l_x)
  n_l_y = nint(tmp_n_l_y)
  n_rays = nint(tmp_n_rays)
  allocate(image(n_p_y, n_p_x))
  allocate(image_2d(n_p_y, n_p_x))
100 format(A,F14.6)
  close(8)

  ! Load in particle field.
  ! field_filename = 'field.txt'
  open(unit=9, file=field_filename)
  read(9,'(I14)') npts
  allocate(dx(npts))
  allocate(dy(npts))
  allocate(dz(npts))
  allocate(intensity(npts))
  do i = 1, npts
    read(9,'(F14.6,1X,F14.6,1X,F14.6,1X,F14.6)') dx(i), dy(i), dz(i), intensity(i)
  enddo
  close(9)

  ! output_filename = 'image.txt'
  open(unit=10, file=output_filename)

  t_output_filename = 'image2D.txt'
  open(unit=11, file=t_output_filename)

  !---------------------------------------------------------------
  ! Begin Image Ray Tracing
  !---------------------------------------------------------------

  ! Initialize image arrays with zeros.
  do j = 1, n_p_y
    do i = 1, n_p_x
      image(j,i) = 0.0
      image_2d(j,i) = 0.0
    enddo
  enddo

  ! Lenslet -> CCD ray transfer matrix
  T3(1,:) = (/1., 0., f_l, 0./)
  T3(2,:) = (/0., 1., 0., f_l/)
  T3(3,:) = (/0., 0., 1., 0./)
  T3(4,:) = (/0., 0., 0., 1./)

  ! Main lens ray transfer matrix
  L1(1,:) = (/1., 0., 0., 0./)
  L1(2,:) = (/0., 1., 0., 0./)
  L1(3,:) = (/-1./f_m, 0., 1., 0./)
  L1(4,:) = (/0., -1./f_m, 0., 1./)

  ! Lenslet ray transfer matrix
  L2(1,:) = (/1., 0., 0., 0./)
  L2(2,:) = (/0., 1., 0., 0./)
  L2(3,:) = (/-1./f_l, 0., 1., 0./)
  L2(4,:) = (/0., -1./f_l, 0., 1./)

  ! Main lens -> lenslets ray transfer matrix
  T2(1,:) = (/1., 0., s_i, 0./)
  T2(2,:) = (/0., 1., 0., s_i/)
  T2(3,:) = (/0., 0., 1., 0./)
  T2(4,:) = (/0., 0., 0., 1./)

  A1 = matmul(T2, L1)

  p_ccd_x = n_p_x * p_p
  p_ccd_y = n_p_y * p_p

  coll_angle = 2.*atan2(p_m/2., s_o)

  do i = 1, npts

    write(*,*) i

    T1(1,:) = (/1., 0., s_o+dz(i), 0./)
    T1(2,:) = (/0., 1., 0., s_o+dz(i)/)
    T1(3,:) = (/0., 0., 1., 0./)
    T1(4,:) = (/0., 0., 0., 1./)

    phi_max = atan2(p_m/2.-dy(i), s_o+dz(i))
    phi_min = atan2(-p_m/2.-dy(i), s_o+dz(i))
    phi_total = phi_max - phi_min

    theta_max = atan2(p_m/2.-dx(i), s_o+dz(i))
    theta_min = atan2(-p_m/2.-dx(i), s_o+dz(i))
    theta_total = theta_max - theta_min

    nrays_phi = nint((phi_total/coll_angle)*sqrt(real(n_rays)))
    nrays_theta = nint((theta_total/coll_angle)*sqrt(real(n_rays)))

    do j = 1, (nrays_phi*nrays_theta)

      call random_number(randa)
      call random_number(randb)

      theta = theta_min + (randa*(theta_max-theta_min))
      phi = phi_min + (randb*(phi_max-phi_min))
      ray = (/dx(i), dy(i), theta, phi/)

      ! Propagate ray from point to the main lens
      ray2(1) = T1(1,1)*ray(1) + T1(1,2)*ray(2) + T1(1,3)*ray(3) + T1(1,4)*ray(4)
      ray2(2) = T1(2,1)*ray(1) + T1(2,2)*ray(2) + T1(2,3)*ray(3) + T1(2,4)*ray(4)
      ray2(3) = T1(3,1)*ray(1) + T1(3,2)*ray(2) + T1(3,3)*ray(3) + T1(3,4)*ray(4)
      ray2(4) = T1(4,1)*ray(1) + T1(4,2)*ray(2) + T1(4,3)*ray(3) + T1(4,4)*ray(4)

      if (sqrt(ray2(1)**2 + ray2(2)**2) < (p_m/2.)) then

        ! Propagate ray through the main lens and to the lenslet array.
        ray3(1) = A1(1,1)*ray2(1) + A1(1,2)*ray2(2) + &
                  A1(1,3)*ray2(3) + A1(1,4)*ray2(4)
        ray3(2) = A1(2,1)*ray2(1) + A1(2,2)*ray2(2) + &
                  A1(2,3)*ray2(3) + A1(2,4)*ray2(4)
        ray3(3) = A1(3,1)*ray2(1) + A1(3,2)*ray2(2) + &
                  A1(3,3)*ray2(3) + A1(3,4)*ray2(4)
        ray3(4) = A1(4,1)*ray2(1) + A1(4,2)*ray2(2) + &
                  A1(4,3)*ray2(3) + A1(4,4)*ray2(4)

        ! DIFFRACTION AT LENSLET
        ray3(1) = ray3(1) + sigma_l*random_normal()
        ray3(2) = ray3(2) + sigma_l*random_normal()

        ! 2D image capture.
        x_ccd = nint((ray3(1) + (p_ccd_x+p_p)/2.) / p_p)
        y_ccd = nint((ray3(2) + (p_ccd_y+p_p)/2.) / p_p)
        if (x_ccd>0 .and. y_ccd>0 .and. x_ccd<(n_p_x+1) .and. y_ccd<(n_p_y+1)) then
          image_2d(y_ccd,x_ccd) = image_2d(y_ccd,x_ccd) + intensity(i)
        endif

        ! Find lenslet
        nsx = nint((ray3(1) + p_ccd_x/2. - o_x - p_l/2.) / p_l)
        nsy = nint((ray3(2) + p_ccd_y/2. - o_y - p_l/2.) / p_l)
        sx = nsx*p_l + o_x + p_l/2. - p_ccd_x/2.
        sy = nsy*p_l + o_y + p_l/2. - p_ccd_y/2.

        ! Propagate ray through lenslet
        ray4(1) = (L2(1,1)*ray3(1) + L2(1,2)*ray3(2) + &
                   L2(1,3)*ray3(3) + L2(1,4)*ray3(4)) + 0.
        ray4(2) = (L2(2,1)*ray3(1) + L2(2,2)*ray3(2) + &
                   L2(2,3)*ray3(3) + L2(2,4)*ray3(4)) + 0.
        ray4(3) = (L2(3,1)*ray3(1) + L2(3,2)*ray3(2) + &
                   L2(3,3)*ray3(3) + L2(3,4)*ray3(4)) + sx/f_l
        ray4(4) = (L2(4,1)*ray3(1) + L2(4,2)*ray3(2) + &
                   L2(4,3)*ray3(3) + L2(4,4)*ray3(4)) + sy/f_l

        ! Propagate ray to the CCD
        ray5(1) = T3(1,1)*ray4(1) + T3(1,2)*ray4(2) + &
                  T3(1,3)*ray4(3) + T3(1,4)*ray4(4)
        ray5(2) = T3(2,1)*ray4(1) + T3(2,2)*ray4(2) + &
                  T3(2,3)*ray4(3) + T3(2,4)*ray4(4)
        ray5(3) = T3(3,1)*ray4(1) + T3(3,2)*ray4(2) + &
                  T3(3,3)*ray4(3) + T3(3,4)*ray4(4)
        ray5(4) = T3(4,1)*ray4(1) + T3(4,2)*ray4(2) + &
                  T3(4,3)*ray4(3) + T3(4,4)*ray4(4)

        ! DIFFRACTION AT PIXEL
        ray5(1) = ray5(1) + sigma_m*random_normal()
        ray5(2) = ray5(2) + sigma_m*random_normal()

        ! Determine which pixel the ray will strike.
        x_ccd = nint((ray5(1) + (p_ccd_x+p_p)/2.) / p_p)
        y_ccd = nint((ray5(2) + (p_ccd_y+p_p)/2.) / p_p)

        ! Ensure it falls within the sensor and record the signal.
        if (x_ccd>0 .and. y_ccd>0 .and. x_ccd<(n_p_x+1) .and. y_ccd<(n_p_y+1)) then
          image(y_ccd,x_ccd) = image(y_ccd,x_ccd) + intensity(i)
        endif

      endif

    enddo

  enddo

  ! Write out image data
  write(10,'(I5)') n_p_x
  write(10,'(I5)') n_p_y
  do j = 1, n_p_y
    do i = 1, n_p_x
      write(10,'(I6)') nint(image(j,i))
    enddo
  enddo
  close(10)

  write(*,*) 'Successfully exported image data to ', output_filename

  ! Write out 2-D image data
  write(11,'(I5)') n_p_x
  write(11,'(I5)') n_p_y
  do j = 1, n_p_y
    do i = 1, n_p_x
      write(11,'(F8.1)') image_2d(j,i)
    enddo
  enddo
  close(11)

  write(*,*) 'Successfully exported image data to ', t_output_filename

end program plenoptic_simulator
Appendix B: Matlab Simulator Scripts
1 function particle gen (filename , particle density , x span , y span , z span , . . .
i min , i max ,type)
3 %PARTICLE GEN 2 Generate Particle Field
% [] = PARTICLE GEN 2(FILENAME,PARTICLE DENSITY,X SPAN,Y SPAN,Z SPAN, ...
5 % I MIN,I MAX)
% Outputs a text file containing the number of points ( particles ) and the
7 % x , y, and z coordinates for the particle . Intensity coefficents are
% also output .
9 %
% Inputs :
11 % FILENAME is the file to output particle positions .
% PARTICLE DENSITY is the density of particles , in number per mm^3.
13 % X SPAN is the total size of the x coordinate of the volume , which will
% be centered with respect to the optical axis .
15 % Y SPAN is the total size of the y coordinate of the volume , which will
% be centered with respect to the optical axis .
17 % Z SPAN is the total size of the z coordinate of the volume , which will
% be centered with respect to the optical axis .
19 % I MIN is the minimum particle intensity coefficent .
% I MIN is the maximum particle intensity coefficent .
21 %
% Outputs :
23 % None.
%
25 % Kyle Lynch (lynchkp@auburn .edu) January 30, 2011
27 % Define the ranges .
x min = x span + x span /2;
29 y min = y span + y span /2;
z min = z span + z span /2;
31 x max = x span x span /2;
y max = y span y span /2;
33 z max = z span z span /2;
35 if strcmp(type, ? volumetric ?)
volume = (x max x min) (y max y min) (z max z min );
37 n pts = round( particle density volume );
elseif strcmp(type, ? pinhole ? );
39 n pts = 750000;
z max = 0;
41 z min = 0;
elseif strcmp(type, ? perpixel ?)
43 npx = 4872 3248;
n pts = round( particle density npx);
45 else
fprintf( ?Improper type . Either "normal" or "pinhole ."nn ? );
47 return;
end
49
% Generate points
51 xval = x min + (x max x min ). rand(n pts ,1);
yval = y min + (y max y min ). rand(n pts ,1);
53 zval = z min + (z max z min ). rand(n pts ,1);
ival = i min + (i max i min ). rand(n pts ,1);
55
% Write the file
57 fprintf( ?Number of Particles : %dnn ? , n pts )
fprintf( ?Exporting to %snn ? , filename );
140
59 fid = fopen(filename , ?w+? );
fprintf( fid , ?%14dnn ? , n pts );
61 for i = 1: n pts
fprintf( fid , ?%14.6f %14.6f %14.6f %14.6fnn ? ,xval( i ) , yval( i ) , zval ( i ) , ival ( i ));
63 end
fclose( fid );
65
% % Plotting Functions
67 % if z max z min > 0
% figure
69 % plot3 (zval , xval , yval , ?. ?)
% xlabel ( ?Z?)
71 % ylabel ( ?X?)
% zlabel ( ?Y?)
73 % view(40 ,10)
% axis equal
75 % axis ij
% axis ([ z min z max x min x max y min y max])
77 % title ( ?3D Particle Locations ?)
% end
79 %
% figure
81 % plot (xval , yval , ?. ?)
% xlabel ( ?X?)
83 % ylabel ( ?Y?)
% axis equal
85 % axis xy
% axis ([ x min x max y min y max])
87 % title ( ?2D Particle Projections ?)
function optics_gen(filename, f_l, p_l, p_p, n_p_x, n_p_y, f_m, M, n_rays, type)
%OPTICS_GEN Generate Optical Parameters
%   [] = OPTICS_GEN(FILENAME, F_L, P_L, P_P, N_P_X, N_P_Y, F_M, M, N_RAYS)
%   Outputs a text file containing the optical parameters of the system.
%
%   Inputs:
%     FILENAME is the file to output the optical settings.
%     F_L is the focal length of the lenslets, in mm.
%     P_L is the pitch of the lenslets, in mm.
%     P_P is the pitch of the pixels, in mm.
%     N_P_X is the number of pixels horizontally.
%     N_P_Y is the number of pixels vertically.
%     F_M is the focal length of the main lens.
%     M is the magnification of the optical system.
%     N_RAYS is the number of rays to be generated for each point.
%
%   Outputs:
%     None.
%
%   Kyle Lynch (lynchkp@auburn.edu) January 30, 2011

% Derived parameters.
FOV_y = n_p_y * p_p / M;
FOV_x = n_p_x * p_p / M;
% M = (p_p * n_p_y / FOV_y);
s_i = f_m * (1 - M);
s_o = s_i / M;
fnum_l = f_l / p_l;

if strcmp(type, 'normal')
    fnum_m = fnum_l / (1 - M);
elseif strcmp(type, 'pinhole')
    fnum_m = 16;
else
    fprintf('Improper type. Either "normal" or "pinhole".\n');
    return;
end

p_m = f_m / fnum_m;
p_ccd_x = n_p_x * p_p;
p_ccd_y = n_p_y * p_p;

fnum_scale = [0.5 0.7 1.0 1.4 2 2.8 4 5.6 8 11 16 22 32 45 64 90 128];
fnum_m = min(fnum_scale((fnum_scale - fnum_m) >= 0));

% Diffraction Parameters. Adrian and Westerweel use positive
% magnifications.
lam = 532E-9 * 1E3; % convert to mm
betasq = 3.67;
M1 = s_i / s_o;
ds_m = 2.44 * (1 + M1) * fnum_m * lam;
sigma_m = sqrt((ds_m^2) / (8 * betasq));
diffract_m_percent = (ds_m / p_l) * 100;
M2 = f_l / s_i;
ds_l = 2.44 * (1 + M2) * fnum_l * lam;
sigma_l = sqrt((ds_l^2) / (8 * betasq));
diffract_l_percent = (ds_l / p_p) * 100;
% keyboard

fprintf('CCD Resolution: %d px X, %d px Y\n', n_p_x, n_p_y)
fprintf('Pixel Pitch: %f mm X, %f mm Y\n', p_p, p_p);
fprintf('Physical Size of CCD: %f mm X, %f mm Y\n\n', p_ccd_x, p_ccd_y);

fprintf('Main Lens Diffraction Spot Size: %f\n', ds_m);
fprintf('Main Lens Diffraction Std. Dev: %f\n', sigma_m);
fprintf('Main Lens Diffraction Percentage of Lenslet Pitch: %f\n', ...
    diffract_m_percent);
fprintf('Lenslet Diffraction Spot Size: %f\n', ds_l);
fprintf('Lenslet Diffraction Std. Dev: %f\n', sigma_l);
fprintf('Lenslet Diffraction Percentage of Pixel Pitch: %f\n\n', ...
    diffract_l_percent);

% keyboard;

% Determine the grid offset to center the grid within the image.
n_l_y = 0;
for y = 0:p_l:p_ccd_y
    n_l_x = 0;
    for x = 0:p_l:p_ccd_x
        n_l_x = n_l_x + 1;
    end
    n_l_y = n_l_y + 1;
end
n_l = n_l_y * n_l_x;
o_x = ((p_ccd_x - (x + p_l)) / 2);
o_y = ((p_ccd_y - (y + p_l)) / 2);
fprintf('Lenslet Pitch: %f X, %f Y\n', p_l, p_l);
fprintf('Lenslet Offset: %f X, %f Y\n', o_x, o_y);
fprintf('Number of Lenslets: %d X, %d Y\n', n_l_x, n_l_y)
fprintf('Total Number of Lenslets: %d\n', n_l)
fprintf('Focal Length of Lenslets: %f mm\n', f_l);

% Step 8: Export all data.
fprintf('Exporting to %s\n', filename);
fid = fopen(filename, 'w');
fprintf(fid, '%15s = %14.6f\n', 'f_l', f_l);         % 1
fprintf(fid, '%15s = %14.6f\n', 'p_l', p_l);         % 2
fprintf(fid, '%15s = %14.6f\n', 'p_p', p_p);         % 3
fprintf(fid, '%15s = %14.6f\n', 'n_p_x', n_p_x);     % 4
fprintf(fid, '%15s = %14.6f\n', 'n_p_y', n_p_y);     % 5
fprintf(fid, '%15s = %14.6f\n', 'f_m', f_m);         % 6
fprintf(fid, '%15s = %14.6f\n', 'FOV_x', FOV_x);     % 7
fprintf(fid, '%15s = %14.6f\n', 'FOV_y', FOV_y);     % 8
fprintf(fid, '%15s = %14.6f\n', 'M', M);             % 9
fprintf(fid, '%15s = %14.6f\n', 's_i', s_i);         % 10
fprintf(fid, '%15s = %14.6f\n', 's_o', s_o);         % 11
fprintf(fid, '%15s = %14.6f\n', 'fnum_l', fnum_l);   % 12
fprintf(fid, '%15s = %14.6f\n', 'fnum_m', fnum_m);   % 13
fprintf(fid, '%15s = %14.6f\n', 'p_m', p_m);         % 14
fprintf(fid, '%15s = %14.6f\n', 'sigma_m', sigma_m); % 15
fprintf(fid, '%15s = %14.6f\n', 'sigma_l', sigma_l); % 16
fprintf(fid, '%15s = %14.6f\n', 'n_l_x', n_l_x);     % 17
fprintf(fid, '%15s = %14.6f\n', 'n_l_y', n_l_y);     % 18
fprintf(fid, '%15s = %14.6f\n', 'o_x', o_x);         % 19
fprintf(fid, '%15s = %14.6f\n', 'o_y', o_y);         % 20
fprintf(fid, '%15s = %14.6f\n', 'n_rays', n_rays);   % 21

return;

% Plotting Functions (unreachable; retained for reference).
lenslet_loc = zeros(n_l_y, n_l_x, 2);
for y = 1:1:n_l_y
    for x = 1:1:n_l_x
        lenslet_loc(y,x,1) = (x-1)*p_l + o_x + p_l/2;
        lenslet_loc(y,x,2) = (y-1)*p_l + o_y + p_l/2;
    end
end
lenslet_loc(:,:,1) = lenslet_loc(:,:,1) - (p_ccd_x/2);
lenslet_loc(:,:,2) = lenslet_loc(:,:,2) - (p_ccd_y/2);

h = figure('Position', [100 100 650 315]);
subplot(121)
hold on
line([-p_ccd_x/2 p_ccd_x/2], [p_ccd_y/2 p_ccd_y/2])
line([-p_ccd_x/2 p_ccd_x/2], [-p_ccd_y/2 -p_ccd_y/2])
line([-p_ccd_x/2 -p_ccd_x/2], [-p_ccd_y/2 p_ccd_y/2])
line([p_ccd_x/2 p_ccd_x/2], [-p_ccd_y/2 p_ccd_y/2])
line([0 0], [-p_ccd_y/2 p_ccd_y/2], 'Color', 'k')
line([-p_ccd_x/2 p_ccd_x/2], [0 0], 'Color', 'k')
plot(lenslet_loc(:,:,1), lenslet_loc(:,:,2), 'ro', 'MarkerSize', 5);
xlabel('X (mm)'), ylabel('Y (mm)')
axis image

subplot(122)
hold on
line([-p_ccd_x/2 p_ccd_x/2], [p_ccd_y/2 p_ccd_y/2])
line([-p_ccd_x/2 p_ccd_x/2], [-p_ccd_y/2 -p_ccd_y/2])
line([-p_ccd_x/2 -p_ccd_x/2], [-p_ccd_y/2 p_ccd_y/2])
line([p_ccd_x/2 p_ccd_x/2], [-p_ccd_y/2 p_ccd_y/2])
line([0 0], [-p_ccd_y/2 p_ccd_y/2], 'Color', 'k')
line([-p_ccd_x/2 p_ccd_x/2], [0 0], 'Color', 'k')
plot(lenslet_loc(:,:,1), lenslet_loc(:,:,2), 'ro', 'MarkerSize', 5);
xlabel('X (mm)'), ylabel('Y (mm)')
axis image

% Depth of field parameters.
DN = (s_o * f_m^2) / (f_m^2 + fnum_m * p_l * (s_o - f_m));
DF = (s_o * f_m^2) / (f_m^2 - fnum_m * p_l * (s_o - f_m));
DOF = abs(DF - DN)
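The thin-lens bookkeeping at the top of optics_gen can be checked in isolation. The Python sketch below evaluates the same relations with hypothetical values (focal lengths and pitches in mm, magnification negative for an inverted real image); it is not a replacement for the exporter above.

```python
# Thin-lens relations from the top of optics_gen; all numbers are
# hypothetical example values, not the thesis configuration.
f_l, p_l = 0.500, 0.125    # lenslet focal length and pitch (mm)
f_m, M = 50.0, -0.5        # main-lens focal length (mm), magnification

s_i = f_m * (1 - M)        # image distance
s_o = s_i / M              # object distance (signed)
fnum_l = f_l / p_l         # lenslet f-number
fnum_m = fnum_l / (1 - M)  # matched image-side main-lens f-number

# Diffraction-limited spot diameter at the lenslet plane, using the
# positive-magnification convention noted in the MATLAB comments.
lam = 532e-9 * 1e3         # wavelength, converted to mm
ds_m = 2.44 * (1 + abs(s_i / s_o)) * fnum_m * lam
```

With these numbers the image forms 75 mm behind the main lens and the matched main-lens f-number is fnum_l/(1 - M), smaller than the lenslet f-number so that lenslet images just fill their pitch.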
function sim_displacement_gen(filename, output_filename, func, dt, nsteps)
%SIM_DISPLACEMENT_GEN Generate Particle Displacements
%   [] = SIM_DISPLACEMENT_GEN(FILENAME, OUTPUT_FILENAME, FUNC, DT, NSTEPS) takes an
%   input particle field file, applies a velocity field to the particle
%   locations over a number of steps, and saves the new particle locations
%   to a file.
%
%   Inputs:
%     FILENAME is the file containing the initial particle positions.
%     OUTPUT_FILENAME is the file to output the final particle positions.
%     FUNC is the velocity field function to be used.
%     DT is the duration of each timestep, in seconds.
%     NSTEPS is the number of timesteps taken.
%
%   Outputs:
%     None.
%
%   Kyle Lynch (lynchkp@auburn.edu) October 12, 2010

fid = fopen(filename, 'r+');
% type = fgetl(fid);
n_pts = str2num(fgetl(fid));
% n_rays = str2num(fgetl(fid));

for i = 1:n_pts
    tmp = str2num(fgetl(fid));
    xval(i) = tmp(1);
    yval(i) = tmp(2);
    zval(i) = tmp(3);
    ival(i) = tmp(4);
end
fclose(fid);

a = 5;
u0 = 1;

step_dt = dt / nsteps;
for t = 0:step_dt:dt
    X = xval;
    Y = zval;
    Z = yval;
    [TH,R,Z] = cart2pol(X,Y,Z);

    [u,v] = feval(func, Z, R, a, u0);
    [vx,vy,vz] = pol2cart(TH, v, u);

    vx2 = vx;
    vy2 = vz + u0;
    vz2 = vy;

    xval = xval + vx2 * step_dt;
    yval = yval + vy2 * step_dt;
    zval = zval + vz2 * step_dt;
end

% Write the file.
fid2 = fopen(output_filename, 'w+');
% fprintf(fid2, '%14s\n', 'particles');
fprintf(fid2, '%14d\n', n_pts);
% fprintf(fid2, '%14d\n', n_rays);

for i = 1:n_pts
    fprintf(fid2, '%14.6f %14.6f %14.6f %14.6f\n', xval(i), yval(i), zval(i), ival(i));
end

fclose(fid2);

end
function [u,v] = sim_hill_vortex(Z, R, a, u0)
%SIM_HILL_VORTEX Hill's Spherical Vortex Velocity Field
%   [U,V] = SIM_HILL_VORTEX(Z,R,A,U0)
%   Simulates the velocity field of Hill's Spherical Vortex, from "Fluid
%   Vortices" by Sheldon Green (1995).
%
%   Inputs:
%     Z is the coordinate lying in the direction of induced vortex velocity.
%     R is the coordinate perpendicular to the direction of induced velocity.
%     A is the size of the vortex.
%     U0 is the freestream velocity.
%
%   Outputs:
%     U is the axial (Z) component of velocity.
%     V is the radial (R) component of velocity.
%
%   Kyle Lynch (lynchkp@auburn.edu) October 12, 2010

u_int = (3/2) .* u0 .* (1 - ((2.*R.^2 + Z.^2) ./ (a.^2)));
v_int = (3/2) .* u0 .* ((Z.*R) ./ (a.^2));

u_ext = u0 .* (((a.^2 ./ (Z.^2 + R.^2)).^(5./2)) .* ((2.*Z.^2 - R.^2) ./ (2.*a.^2)) - 1);
v_ext = (3/2) .* u0 .* ((Z.*R) ./ (a.^2)) .* ((a.^2) ./ (Z.^2 + R.^2)).^(5./2);

test_int = (R.^2 + Z.^2) <= a.^2;
test_ext = (R.^2 + Z.^2) > a.^2;

u = u_int .* test_int + u_ext .* test_ext;
v = v_int .* test_int + v_ext .* test_ext;

end
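The interior/exterior branch selection above translates directly to NumPy. The sketch below reproduces the same expressions (same forms as the MATLAB function; the function name and test values are illustrative):

```python
import numpy as np

def hill_vortex(Z, R, a, u0):
    """Axial (u) and radial (v) velocity of Hill's spherical vortex,
    mirroring the interior/exterior expressions in sim_hill_vortex."""
    Z, R = np.asarray(Z, float), np.asarray(R, float)
    r2 = Z**2 + R**2
    # Interior (rotational) solution, valid for r <= a.
    u_int = 1.5 * u0 * (1 - (2 * R**2 + Z**2) / a**2)
    v_int = 1.5 * u0 * (Z * R) / a**2
    # Exterior (potential, doublet-like) solution, valid for r > a.
    with np.errstate(divide="ignore", invalid="ignore"):
        u_ext = u0 * ((a**2 / r2) ** 2.5 * (2 * Z**2 - R**2) / (2 * a**2) - 1)
        v_ext = 1.5 * u0 * (Z * R) / a**2 * (a**2 / r2) ** 2.5
    inside = r2 <= a**2
    return np.where(inside, u_int, u_ext), np.where(inside, v_int, v_ext)
```

At the vortex center the axial velocity is 1.5*u0, and the two branches agree on the sphere r = a, so the field is continuous across the boundary.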
Appendix C: Light Field Rendering Codes
function [cx, cy] = lf_calibration(filename)
%LF_CALIBRATION Light Field Image Coordinate Calibration
%   [CX, CY] = LF_CALIBRATION(FILENAME) registers
%   lenslet images and assigns an (i,j) coordinate system to each lenslet.
%   Additionally, edge lenslets are identified and discarded.
%
%   Outputs:
%     CX is a 2-D array specifying the j coordinate of each lenslet.
%     CY is a 2-D array specifying the i coordinate of each lenslet.
%
%   Kyle Lynch (lynchkp@auburn.edu) September 12, 2010

image = imread(filename);
image = double(image);

% Define grid. Must be done manually.
nlx = 288;
nly = 192;
sz = 16.973;
offset_x = 0;
offset_y = 2;

[xgrid, ygrid] = meshgrid(1:sz:sz*nlx, 1:sz:sz*nly);
xgrid = xgrid + offset_x;
ygrid = ygrid + offset_y;

% Calculate centroid locations.
tic
% figure
cx = zeros(nly-1, nlx-1);
cy = zeros(nly-1, nlx-1);
for yidx = 1:nly-1
    for xidx = 1:nlx-1
        xpt = round(xgrid(yidx, xidx));
        ypt = round(ygrid(yidx, xidx));
        subset = image(ypt:ypt+round(sz), xpt:xpt+round(sz));
        [p_fit] = gauss_2d_fit(subset);
        cx(yidx, xidx) = (xpt-1) + p_fit(2);
        cy(yidx, xidx) = (ypt-1) + p_fit(3);
    end
    fprintf('Peak fit: Line %d of %d\n', yidx, nly-1);
end
toc

end
function [x_array, y_array, theta_array, phi_array, radiance] = ...
    lf_radiance_gen(filename, cx, cy)
%LF_RADIANCE_GEN Light Field Radiance Generation
%   [X_ARRAY, Y_ARRAY, THETA_ARRAY, PHI_ARRAY, RADIANCE] =
%   LF_RADIANCE_GEN(FILENAME, CX, CY) generates all information
%   regarding radiance elements, including their intensity, pixel coordinates,
%   spatial coordinates, and angular coordinates.
%
%   Inputs:
%     FILENAME is a string defining the lenslet image file.
%     CX is a 2-D array specifying the x coordinate of each lenslet.
%     CY is a 2-D array specifying the y coordinate of each lenslet.
%
%   Outputs:
%     X_ARRAY is a 4-D array specifying the x coordinates of each radiance element.
%     Y_ARRAY is a 4-D array specifying the y coordinates of each radiance element.
%     THETA_ARRAY is a 4-D array specifying the theta angle of each radiance element.
%     PHI_ARRAY is a 4-D array specifying the phi angle of each radiance element.
%     RADIANCE is a 4-D array specifying the intensity of each radiance element.
%
%   Kyle Lynch (lynchkp@auburn.edu) October 4, 2010

if nargin < 3
    fprintf('Not enough input arguments.\n');
    return;
end

% Determine number of lenslets.
[n_l_y, n_l_x] = size(cx);

% Read in image file.
image = imread(filename);
image = double(image);

% Miscellaneous initializations.
[n_p_y, n_p_x] = size(image);
p_p = 0.0074;
f_l = 0.500;
s_i = 100;
p_ccd_x = p_p * n_p_x;
p_ccd_y = p_p * n_p_y;

% Determine the average spacing between the centers of each lenslet. This
% allows us to determine the effective number of pixels underneath each
% lenslet. The floor function rounds this number down to the nearest
% integer value to ensure no overlap occurs. Additionally, a check is
% performed to ensure that the number of pixels used in both the x and y
% directions is equal.
avg_x = mean((cx(:,n_l_x) - cx(:,1)) / (n_l_x - 1)); % SHOULD THIS BE N_L_X - 1?
avg_y = mean((cy(n_l_y,:) - cy(1,:)) / (n_l_y - 1));
% keyboard;
if floor(avg_x) ~= floor(avg_y)
    fprintf('Error: Unequal Lenslet Size'); return;
else
    n_p_l = floor(avg_x);
    r = n_p_l / 2;
end
n_p_l = 14;
r = 7;

% Initialize the data arrays. These are 4-D arrays; the size of the first
% two dimensions is equal to the number of lenslets in each direction, and
% the second two are equal to the number of pixels under each lenslet as
% defined above.
% n_array = zeros(n_l_y, n_l_x, n_p_l, n_p_l);
% m_array = zeros(n_l_y, n_l_x, n_p_l, n_p_l);
x_array = zeros(n_l_y, n_l_x, n_p_l, n_p_l);
y_array = zeros(n_l_y, n_l_x, n_p_l, n_p_l);
theta_array = zeros(n_l_y, n_l_x, n_p_l, n_p_l);
phi_array = zeros(n_l_y, n_l_x, n_p_l, n_p_l);
radiance = zeros(n_l_y, n_l_x, n_p_l, n_p_l);

% imshow(image,[]), hold on

% Loop through each lenslet, first in the horizontal (x direction), and
% then in the vertical (y direction).
for y = 1:n_l_y
    fprintf('Radiance Gen: %d of %d\n', y, n_l_y);
    for x = 1:n_l_x

        % Grab the centroid location in (x,y) coordinates from the original
        % pixel coordinates. Note that because the y coordinate is flipped
        % in image coordinates, a corresponding flip has to occur here.
        l_x = cx(y,x)*p_p - p_ccd_x/2;
        l_y = p_ccd_y/2 - cy(y,x)*p_p;

        % Find the theta and phi angle for the lenslet centroid location,
        % known as the "bulk angle." It is calculated as the arctangent of
        % the lenslet position divided by the reference image distance.
        l_theta = atan2(l_x, s_i);
        l_phi = atan2(l_y, s_i);

        % With the lenslet information defined, the pixels underneath the
        % lenslet must be identified. The pixels are found within a radius
        % defined previously. The pixel coordinates in the x direction are
        % stored as m_ind, and in the y direction as n_ind.
        dist_x = abs((1:n_p_x) - cx(y,x));
        dist_y = abs((1:n_p_y) - cy(y,x));
        m_ind = find(dist_x < r); % X
        n_ind = find(dist_y < r); % Y

        % Create an array of spatial coordinates equal to the size of the
        % block of pixels covered by the lenslet. These coordinates are
        % homogeneous within a single lenslet.
        x_c = ones(length(n_ind), length(m_ind)) * l_x;
        y_c = ones(length(n_ind), length(m_ind)) * l_y;

        % Determine the angles for each of the lenslet pixels, known as the
        % "local angle." First the internal spatial coordinates are defined
        % for each pixel (effectively just an offset from the lenslet
        % centroid location), then the arctangent is used to calculate the
        % angle.
        x2 = m_ind*p_p - p_ccd_x/2 - l_x;
        y2 = p_ccd_y/2 - n_ind*p_p - l_y;
        [X2,Y2] = meshgrid(x2, y2);
        p_theta = atan2(X2, f_l) + l_theta;
        p_phi = atan2(Y2, f_l) + l_phi;

        % Save all calibration information to the specified arrays.
        x_array(y,x,:,:) = x_c;
        y_array(y,x,:,:) = y_c;
        theta_array(y,x,:,:) = p_theta;
        phi_array(y,x,:,:) = p_phi;
        radiance(y,x,:,:) = image(n_ind, m_ind);

    end
end

end
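The bulk-plus-local angle computation above amounts to two arctangents per pixel. A minimal Python sketch for a single lenslet and one pixel column, using the same p_p, f_l, and s_i defaults as the MATLAB routine (the centroid position and pixel index below are hypothetical):

```python
import math

# Geometry matching the defaults in lf_radiance_gen (mm).
p_p, f_l, s_i = 0.0074, 0.500, 100.0
n_p_x = 4872
p_ccd_x = p_p * n_p_x

cx = n_p_x / 2.0              # lenslet centroid (px); here on the optical axis
l_x = cx * p_p - p_ccd_x / 2  # lenslet position relative to the axis (mm)
l_theta = math.atan2(l_x, s_i)        # bulk angle of the whole lenslet

m = int(cx) + 4               # one pixel column under the lenslet (hypothetical)
x2 = m * p_p - p_ccd_x / 2 - l_x      # pixel offset from the lenslet centroid
p_theta = math.atan2(x2, f_l) + l_theta  # local angle plus bulk angle
```

A centroid on the axis has zero bulk angle, so the ray angle is set entirely by the pixel's offset under the short-focal-length lenslet.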
function [image, scalefac] = lf_render(radiance, x_array, y_array, theta_array, phi_array, t)
%LF_RENDER Light Field Rendering/Refocusing
%   [IMAGE] = LF_RENDER(RADIANCE, X_ARRAY, Y_ARRAY, THETA_ARRAY, PHI_ARRAY, T)
%   integrates signal from each lenslet, rendering an image.
%
%   Inputs:
%     RADIANCE is a 4-D array specifying the intensity of each radiance element.
%     X_ARRAY is a 4-D array specifying the x coordinates of each radiance element.
%     Y_ARRAY is a 4-D array specifying the y coordinates of each radiance element.
%     THETA_ARRAY is a 4-D array specifying the theta coordinates of each radiance element.
%     PHI_ARRAY is a 4-D array specifying the phi coordinates of each radiance element.
%     T is an optional argument specifying the distance to translate the
%     focal plane. By default, t = 0.
%
%   Outputs:
%     IMAGE is a 2-D double precision image.
%
%   Kyle Lynch (lynchkp@auburn.edu) October 4, 2010

if nargin < 5
    fprintf('Not enough input arguments.\n');
    return;
elseif nargin < 6
    t = 0;
end

% Set rendered image dimensions, which are equal to the number of lenslets
% in each of the coordinate directions.
[ny, nx, np, nt] = size(radiance);
image = zeros(ny, nx);

si = 100;

% Determine the effective sensor size, which is equal to the size of the
% lenslet array. Subtracting x_min from x_max will determine the distance
% between the outermost lenslet centroid locations. To find the total
% distance to the outsides of the lenslets, an additional lenslet diameter
% must be added.
scalefac = ((t + si) / si);
x_min = min(x_array(:));
y_min = min(y_array(:));
x_max = max(x_array(:));
y_max = max(y_array(:));
avg_x = mean((x_max - x_min) / (nx - 1)) * scalefac;
avg_y = mean((y_max - y_min) / (ny - 1)) * scalefac;
p_ccd_x = (x_max - x_min) * scalefac + avg_x;
p_ccd_y = (y_max - y_min) * scalefac + avg_y;

% Use simple 2-D ray tracing to propagate the radiance to a different
% position. This distance is denoted by the variable t.
if (t ~= 0)
    x_array = x_array + t*theta_array;
    y_array = y_array + t*phi_array;
end

% Accelerate the rendering by only calculating rays with nonzero intensity.
radiancetest = find(radiance > 0);

% Render each ray.
for r = 1:length(radiancetest)

    % Grab all information about the particular ray from the information
    % arrays.
    ray = radiancetest(r);        % Ray index.
    ray_radiance = radiance(ray); % Ray intensity.
    ray_x = x_array(ray);         % Ray x coordinate.
    ray_y = y_array(ray);         % Ray y coordinate.

    % Determine the location the ray intersects the focal plane in terms of
    % the lenslet number (floating point number).
    initial_x = (p_ccd_x/2 + ray_x + avg_x/2) / (avg_x);
    initial_y = (p_ccd_y/2 - ray_y + avg_y/2) / (avg_y);

    % Determine the four surrounding pixels.
    L_x = floor(initial_x);
    R_x = ceil(initial_x);
    L_x_coeff = R_x - initial_x;
    R_x_coeff = initial_x - L_x;

    B_y = floor(initial_y);
    T_y = ceil(initial_y);
    B_y_coeff = T_y - initial_y;
    T_y_coeff = initial_y - B_y;

    % Determine the contribution of the ray to each pixel.
    TL = T_y_coeff * L_x_coeff;
    TR = T_y_coeff * R_x_coeff;
    BL = B_y_coeff * L_x_coeff;
    BR = B_y_coeff * R_x_coeff;

    % Render the final image through a summation of the image array. The
    % multiple if statements are used to block rays that are propagated
    % outside of the image coordinates.
    if T_y > 0 && T_y <= ny
        if L_x > 0 && L_x <= nx
            % Top Left Pixel
            image(T_y,L_x) = image(T_y,L_x) + ray_radiance*TL;
        end
        if R_x > 0 && R_x <= nx
            % Top Right Pixel
            image(T_y,R_x) = image(T_y,R_x) + ray_radiance*TR;
        end
    end

    if B_y > 0 && B_y <= ny
        if L_x > 0 && L_x <= nx
            % Bottom Left Pixel
            image(B_y,L_x) = image(B_y,L_x) + ray_radiance*BL;
        end
        if R_x > 0 && R_x <= nx
            % Bottom Right Pixel
            image(B_y,R_x) = image(B_y,R_x) + ray_radiance*BR;
        end
    end

end
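The four-corner accumulation in lf_render is a bilinear "splat" of each ray onto the rendered grid. A simplified Python sketch of that kernel (floor/floor+1 indexing rather than the floor/ceil form above; names are illustrative):

```python
import numpy as np

def splat(image, x, y, w):
    """Deposit ray weight w at fractional pixel (x, y) by bilinear
    splatting, a simplified version of lf_render's four-corner update."""
    ny, nx = image.shape
    lx, ty = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - lx, y - ty
    corners = [(ty,     lx,     (1 - fx) * (1 - fy)),
               (ty,     lx + 1, fx * (1 - fy)),
               (ty + 1, lx,     (1 - fx) * fy),
               (ty + 1, lx + 1, fx * fy)]
    for py, px, c in corners:
        if 0 <= px < nx and 0 <= py < ny:  # drop rays leaving the image
            image[py, px] += w * c

img = np.zeros((4, 4))
splat(img, 1.25, 2.5, 1.0)  # the four corner weights sum to 1
```

Because the four weights sum to one, total ray intensity is conserved whenever the ray lands inside the image.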
Appendix D: Matlab 3-D PIV Codes
clear all
clc

% Specify correlation settings and input files.
scalefactor = 4;          % Vector display scale factor.
zero_padding = 'false';   % Zero-padded correlation.
spof = 'false';           % Symmetric phase-only filtering.
nmt_eps = 0.1;            % Error value for normalized median test (usually 0.1-0.2).
nmt_threshlvl = 2;        % Threshold value for normalized median test (usually 2).
start_x_spacing = 32;     % Spacing between interrogation windows in x direction.
start_y_spacing = 32;     % Spacing between interrogation windows in y direction.
start_z_spacing = 4;      % Spacing between interrogation windows in z direction.
start_x_size = 64;        % Size of interrogation windows in x direction.
start_y_size = 64;        % Size of interrogation windows in y direction.
start_z_size = 8;         % Size of interrogation windows in z direction.
refinement_factor = 3;    % Refinement factor (size_initial / size_final).
file_prefix = 'C:\Users\lynchkp\Desktop\Light Field Rendering Codes\001x\'; % Base file directory.
file_A_prefix = 'imgA';
file_B_prefix = 'imgB';

% Load files. All images converted to double precision format.
nfiles = 16;
idx = 1;
for i = 1:1:nfiles
    imgA(:,:,idx) = double(imread([file_prefix file_A_prefix num2str(i,'%03d') '.tif']));
    imgB(:,:,idx) = double(imread([file_prefix file_B_prefix num2str(i,'%03d') '.tif']));
    idx = idx + 1;
end
[image_height, image_width, image_depth] = size(imgA);

h = figure;
imshow(imgA(:,:,4), [])
set(h, 'Color', 'none');

% export_fig('C:\Users\lynchkp\Desktop\Masters Thesis\figures\001_plane4.pdf', h);
break;

for R = 1:refinement_factor

    % Define the window parameters for the particular grid refinement
    % level. Note the grid refinement is exponential because the refinement
    % factor is in the denominator.
    x_spacing = round(start_x_spacing / R);
    y_spacing = round(start_y_spacing / R);
    z_spacing = round(start_z_spacing / R);
    x_size = round(start_x_size / R);
    y_size = round(start_y_size / R);
    z_size = round(start_z_size / R);

    % Ensure both dimensions of each window are even, due to the
    % performance of the fftshift function.
    if mod(x_size,2) == 1
        x_size = x_size + 1;
    end
    if mod(y_size,2) == 1
        y_size = y_size + 1;
    end
    if mod(z_size,2) == 1
        z_size = z_size + 1;
    end

    % Generate the grid. For the initial grid, no information needs to be
    % saved. For the remaining grids, old grid data must be stored to build
    % the predictor.
    if R > 1
        old_cent_x = cent_x; old_cent_y = cent_y; old_cent_z = cent_z;
        old_nx = nx; old_ny = ny; old_nz = nz;
        old_dx = dx; old_dy = dy; old_dz = dz;
    end
    [node_x, node_y, node_z, cent_x, cent_y, cent_z, nx, ny, nz] = ...
        generate_grid_3d(image_height, image_width, image_depth, ...
        x_size, y_size, z_size, x_spacing, y_spacing, z_spacing);

    % Initialize the predictor. For the initial grid, no correlation exists
    % and thus it can be set to zero for all windows. However, for the
    % remaining grids, there exists correlation data to use for
    % initializing a predictor.
    if R > 1
        [px, py, pz] = ...
            generate_predictor_discrete_3d(old_cent_x, old_cent_y, old_cent_z, old_dx, old_dy, old_dz, ...
            old_nx, old_ny, old_nz, cent_x, cent_y, cent_z, nx, ny, nz);
    else
        px = zeros(size(node_x));
        py = zeros(size(node_y));
        pz = zeros(size(node_z));
    end

    % Multi-pass analysis. The number of passes is dictated by a
    % convergence criterion, such that the RMS values of the corrector
    % differ by less than 0.1 pixels, and fewer than 10 passes are
    % performed.
    old_rms = 0; new_rms = 1E6; pass = 1;
    while (abs(new_rms - old_rms) > 0.1) && (pass < 10)

        [cx, cy, cz, R_vec, IA_vec, IB_vec] = piv_pass_discrete_3d(imgA, imgB, node_x, node_y, node_z, ...
            x_size, y_size, z_size, px, py, pz, zero_padding, spof);

        %%% Update Displacement %%%
        dx = px + cx;
        dy = py + cy;
        dz = pz + cz;

        %%% Validate Displacement %%%
        old_total = 0; new_total = 1; pass2 = 1;
        while (abs(new_total - old_total) > 0.1) && (pass2 < 10)
            [test] = normalized_median_test_3d(dx, dy, dz, nx, ny, nz, nmt_threshlvl, nmt_eps);
            [dx, dy, dz] = bilinear_interpolation_3d(dx, dy, dz, nx, ny, nz, test);
            old_total = new_total;
            new_total = ((numel(test) - numel(find(test==0))) / numel(test)) * 100;
            fprintf('Validation Percentage: %.2f\n', new_total);
            pass2 = pass2 + 1;
        end

        %%% Update Predictor and Convergence Check %%%
        pass = pass + 1;
        px = round(dx);
        py = round(dy);
        pz = round(dz);
        old_rms = new_rms;
        new_rms = sqrt(mean(cx.^2 + cy.^2));
        fprintf('Corrector RMS: %.4f\n', new_rms);

    end

end

load scalevec2
dx = dx .* median(scalevec2);
dy = dy .* median(scalevec2);

h = figure('Position', [100 100 600 400]);
scalefac = 40;
skip = 5;
quiver3(cent_x(1:skip:end), cent_y(1:skip:end), cent_z(1:skip:end), ...
    dx(1:skip:end)*scalefac - mean(dx(1:skip:end)*scalefac), dy(1:skip:end)*scalefac - 0, dz(1:skip:end)*scalefac - 0, 0)
view([138 70])
xlabel('X (px)');
ylabel('Y (px)');
zlabel('Z (px)');
box off
set(h, 'Color', 'none');
export_fig('C:\Users\lynchkp\Desktop\Masters Thesis\figures\001x_vectors_non.pdf', h);

h = figure('Position', [100 100 600 300]);
[n,x] = hist(dx, 50);
hist(dx, 50)

h2 = findobj(gca, 'Type', 'patch');
set(h2, 'FaceColor', [.6 .6 .6], 'EdgeColor', [.3 .3 .3])

hold on
[p] = gauss_1d_fit(n, x)
ztmp = p(1)*exp(-(0.5*(x - p(2)).^2) / p(3)^2);
r = plot(x, ztmp, 'r', 'LineWidth', 2)
set(r, 'LineWidth', 2)
xlabel('Displacement (px)');
ylabel('Counts');

annotation('textbox', [.18 .68 .18 .13], ...
    'FontSize', 10, 'BackgroundColor', 'none', 'EdgeColor', 'none', ...
    'String', ['\sigma = ' num2str(std(dx), '%.2f') ' px']);
annotation('textbox', [.18 .74 .18 .13], ...
    'FontSize', 10, 'BackgroundColor', 'none', 'EdgeColor', 'none', ...
    'String', ['\mu = ' num2str(mean(dx), '%.2f') ' px']);

set(h, 'Color', 'none');
export_fig('C:\Users\lynchkp\Desktop\Masters Thesis\figures\001x_vectors_histx_non.pdf', h);
function [cx, cy, cz, R_vec, IA_vec, IB_vec] = piv_pass_discrete_3d(imgA, imgB, node_x, node_y, node_z, ...
    x_size, y_size, z_size, px, py, pz, zero_padding, spof)
%PIV_PASS_DISCRETE_3D PIV interrogation with discrete window offset.
%   [] = PIV_PASS_DISCRETE_3D(IMGA, IMGB, NODE_X, NODE_Y, NODE_Z,
%   X_SIZE, Y_SIZE, Z_SIZE, PX, PY, PZ, ZERO_PADDING, SPOF)
%   FFT-based discrete spatial correlation. Based on the implementation
%   suggested in Adrian and Westerweel (2011), pp. 369-376.
%
%   Inputs:
%     IMGA is the full data of the first image.
%     IMGB is the full data of the second image.
%     NODE_X are the upper-left node points of each window (x coordinate).
%     NODE_Y are the upper-left node points of each window (y coordinate).
%     NODE_Z are the upper-left node points of each window (z coordinate).
%     X_SIZE is the interrogation window size in pixels (x coordinate).
%     Y_SIZE is the interrogation window size in pixels (y coordinate).
%     Z_SIZE is the interrogation window size in pixels (z coordinate).
%     PX is the discrete integer predictor of x coordinate displacement.
%     PY is the discrete integer predictor of y coordinate displacement.
%     PZ is the discrete integer predictor of z coordinate displacement.
%     ZERO_PADDING is a boolean for using zero-padded interrogation windows.
%     SPOF is a boolean for using symmetric phase-only filtering.
%
%   Outputs:
%     CX is the fractional corrector of x coordinate displacement.
%     CY is the fractional corrector of y coordinate displacement.
%     CZ is the fractional corrector of z coordinate displacement.
%     R_VEC is a vector containing each correlation volume for analysis.
%
%   Kyle Lynch (lynchkp@auburn.edu) February 27, 2011

tic

[image_height, image_width, image_depth] = size(imgA);

% Set parameters that depend on correlation plane sizing.
if strcmp(zero_padding, 'true')
    U = y_size * 2;
    V = x_size * 2;
    W = z_size * 2;
else
    U = y_size;
    V = x_size;
    W = z_size;
end

% Initialize storage arrays.
R_vec = zeros(U, V, W, numel(node_x));
IA_vec = zeros(y_size, x_size, z_size, numel(node_x));
IB_vec = zeros(y_size, x_size, z_size, numel(node_x));

IA = zeros(y_size, x_size, z_size);
IB = zeros(y_size, x_size, z_size);
cx = zeros(1, numel(node_x));
cy = zeros(1, numel(node_y));
cz = zeros(1, numel(node_y));

for idx = 1:numel(node_x)

    % Generate windows.
    for z = 1:z_size
        for y = 1:y_size
            for x = 1:x_size

                yi = y + node_y(idx);
                xi = x + node_x(idx);
                zi = z + node_z(idx);
                yip = y + node_y(idx) + py(idx);
                xip = x + node_x(idx) + px(idx);
                zip = z + node_z(idx) + pz(idx);

                if ((xi <= 0) || (xi > image_width) || (yi <= 0) || (yi > image_height) || (zi <= 0) || (zi > image_depth))
                    IA(y,x,z) = 0;
                else
                    IA(y,x,z) = imgA(yi,xi,zi);
                end

                if ((xip <= 0) || (xip > image_width) || (yip <= 0) || (yip > image_height) || (zip <= 0) || (zip > image_depth))
                    IB(y,x,z) = 0;
                else
                    IB(y,x,z) = imgB(yip,xip,zip);
                end

            end
        end
    end

    % Calculate standard deviation of windows.
    IA_std = std(IA(:));
    IB_std = std(IB(:));

    % Calculate mean of windows.
    IA_mean = mean(IA(:));
    IB_mean = mean(IB(:));

    % Subtract mean from both windows.
    IA_s = IA - IA_mean;
    IB_s = IB - IB_mean;

    % Perform FFT.
    FA = fftn(IA_s, [U, V, W]);
    FB = fftn(IB_s, [U, V, W]);

    % Multiply by complex conjugate.
    S = conj(FA) .* FB;

    % Take inverse FFT and rescale the data.
    R = ifftn(S, 'symmetric') ./ (U * V * W * IA_std * IB_std);

    % Reshuffle FFT plane.
    R = fftshift(R);

    [x0, y0, z0] = gaussian_estimator_3d(R);

    % Corrector calculation.
    cx(idx) = x0 - V/2 - 1;
    cy(idx) = y0 - U/2 - 1;
    cz(idx) = z0 - W/2 - 1;

    if (max(IA(:)) == 0) || (max(IB(:)) == 0)
        cx(idx) = 0;
        cy(idx) = 0;
        cz(idx) = 0;
    end

    R_vec(:,:,:,idx) = R;
    IA_vec(:,:,:,idx) = IA;
    IB_vec(:,:,:,idx) = IB;

end

end
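The heart of this routine, mean subtraction, forward FFTs, conjugate multiplication, inverse FFT, and an fftshift before peak finding, can be exercised on synthetic data. The Python sketch below applies the same sequence to a volume pair with a known integer shift (the function name and test data are illustrative, and the sub-voxel Gaussian estimator is omitted in favor of an integer argmax):

```python
import numpy as np

def correlate3d(IA, IB):
    """Normalized 3-D circular cross-correlation via FFTs, following the
    mean-subtract / conj-multiply / ifftn / fftshift sequence used in
    piv_pass_discrete_3d."""
    IA = IA - IA.mean()
    IB = IB - IB.mean()
    S = np.conj(np.fft.fftn(IA)) * np.fft.fftn(IB)
    R = np.fft.ifftn(S).real / (IA.size * IA.std() * IB.std())
    return np.fft.fftshift(R)  # move the zero-displacement bin to the center

rng = np.random.default_rng(1)
IA = rng.random((16, 16, 8))
IB = np.roll(IA, (2, -1, 1), axis=(0, 1, 2))  # known integer displacement

peak = np.unravel_index(np.argmax(correlate3d(IA, IB)), IA.shape)
shift = tuple(int(p) - s // 2 for p, s in zip(peak, IA.shape))
```

After the fftshift, subtracting half the window size from the peak index recovers the displacement, which is exactly the role of the corrector calculation above.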
function [test] = ...
    normalized_median_test_3d(dx, dy, dz, nx, ny, nz, nmt_threshlvl, nmt_eps)

tic

radius = 1;

% Convert 1-D field to 3-D field to make indexing and neighboring data
% easier to process.
idx = 1;
dx2 = zeros(ny, nx, nz);
dy2 = zeros(ny, nx, nz);
dz2 = zeros(ny, nx, nz);
for z = 1:nz
    for y = 1:ny
        for x = 1:nx
            dx2(y,x,z) = dx(idx);
            dy2(y,x,z) = dy(idx);
            dz2(y,x,z) = dz(idx);
            idx = idx + 1;
        end
    end
end

test2 = ones(ny, nx, nz);
for z = 1:1:nz
    for y = 1:1:ny
        for x = 1:1:nx

            % Extract data neighborhood excluding the center point.
            neigh_x = []; neigh_y = []; neigh_z = []; idx = 1;
            for k = -radius:1:radius
                for j = -radius:1:radius
                    for i = -radius:1:radius
                        xi = x + i;
                        yj = y + j;
                        zk = z + k;
                        if ((xi <= nx) && (xi > 0) && ...
                                (yj <= ny) && (yj > 0) && ...
                                (zk <= nz) && (zk > 0) && ...
                                ((j ~= 0) || (i ~= 0) || (k ~= 0)))
                            neigh_x(idx) = dx2(yj,xi,zk);
                            neigh_y(idx) = dy2(yj,xi,zk);
                            neigh_z(idx) = dz2(yj,xi,zk);
                            idx = idx + 1;
                        end
                    end
                end
            end

            % Calculate median of neighboring values.
            neighmed_x = median(neigh_x);
            neighmed_y = median(neigh_y);
            neighmed_z = median(neigh_z);

            % Fluctuations with respect to the median.
            medfluct_x = dx2(y,x,z) - neighmed_x;
            medfluct_y = dy2(y,x,z) - neighmed_y;
            medfluct_z = dz2(y,x,z) - neighmed_z;

            % Residuals within the neighborhood.
            neighmedfluct_x = neigh_x - neighmed_x;
            neighmedfluct_y = neigh_y - neighmed_y;
            neighmedfluct_z = neigh_z - neighmed_z;

            % Median of absolute residuals.
            medneighmedfluct_x = median(abs(neighmedfluct_x));
            medneighmedfluct_y = median(abs(neighmedfluct_y));
            medneighmedfluct_z = median(abs(neighmedfluct_z));

            % Normalized fluctuations.
            normfluct_x = abs(medfluct_x / (medneighmedfluct_x + nmt_eps));
            normfluct_y = abs(medfluct_y / (medneighmedfluct_y + nmt_eps));
            normfluct_z = abs(medfluct_z / (medneighmedfluct_z + nmt_eps));

            if (sqrt(normfluct_x^2 + normfluct_y^2 + normfluct_z^2) > nmt_threshlvl)
                test2(y,x,z) = 0;
            end

        end
    end
end

% Convert 3-D test to 1-D test.
idx = 1;
test = zeros(1, ny*nx*nz);
for z = 1:nz
    for y = 1:ny
        for x = 1:nx
            test(idx) = test2(y,x,z);
            idx = idx + 1;
        end
    end
end

% Calculate percentage of erroneous vectors.
% percent = (count / (nx*ny*nz)) * 100;
% percent = ((numel(dx) - numel(find(test==0))) / numel(dx)) * 100;

% fprintf('Normalized Median Test complete, elapsed time %.2f seconds.\n', toc);

end
function [dx_out, dy_out, dz_out] = bilinear_interpolation_3d(dx, dy, dz, nx, ny, nz, test)

tic

% Convert 1-D fields to 3-D fields to make indexing and neighboring data
% easier to process.
idx = 1;
dx2 = zeros(ny,nx,nz);
dy2 = zeros(ny,nx,nz);
dz2 = zeros(ny,nx,nz);
test2 = zeros(ny,nx,nz);
for z = 1:nz
    for y = 1:ny
        for x = 1:nx
            dx2(y,x,z) = dx(idx);
            dy2(y,x,z) = dy(idx);
            dz2(y,x,z) = dz(idx);
            test2(y,x,z) = test(idx);
            idx = idx + 1;
        end
    end
end

dx_out2 = dx2;
dy_out2 = dy2;
dz_out2 = dz2;
for z = 1:1:nz
    for y = 1:1:ny
        for x = 1:1:nx
            if test2(y,x,z) == 0

                % X-component of bilinear interpolation.
                idx = 1;
                neigh_x = [];
                for i = [-1, 1]
                    xi = x + i;
                    yj = y;
                    zk = z;
                    if ((xi <= nx) && (xi > 0) && (test2(yj,xi,zk) == 1))
                        neigh_x(idx) = dx2(yj,xi,zk);
                        idx = idx + 1;
                    end
                end
                if numel(neigh_x) >= 1
                    dx_out2(y,x,z) = sum(neigh_x) / numel(neigh_x);
                end

                % Y-component of bilinear interpolation.
                idx = 1;
                neigh_y = [];
                for i = [-1, 1]
                    xi = x;
                    yj = y + i;
                    zk = z;
                    if ((yj <= ny) && (yj > 0) && (test2(yj,xi,zk) == 1))
                        neigh_y(idx) = dy2(yj,xi,zk);
                        idx = idx + 1;
                    end
                end
                if numel(neigh_y) >= 1
                    dy_out2(y,x,z) = sum(neigh_y) / numel(neigh_y);
                end

                % Z-component of bilinear interpolation.
                idx = 1;
                neigh_z = [];
                for i = [-1, 1]
                    xi = x;
                    yj = y;
                    zk = z + i;
                    if ((zk <= nz) && (zk > 0) && (test2(yj,xi,zk) == 1))
                        neigh_z(idx) = dz2(yj,xi,zk);
                        idx = idx + 1;
                    end
                end
                if numel(neigh_z) >= 1
                    dz_out2(y,x,z) = sum(neigh_z) / numel(neigh_z);
                end

            end
        end
    end
end

% Convert 3-D fields back to 1-D arrays.
idx = 1;
dx_out = zeros(1,ny*nx*nz);
dy_out = zeros(1,ny*nx*nz);
dz_out = zeros(1,ny*nx*nz);
for z = 1:nz
    for y = 1:ny
        for x = 1:nx
            dx_out(idx) = dx_out2(y,x,z);
            dy_out(idx) = dy_out2(y,x,z);
            dz_out(idx) = dz_out2(y,x,z);
            idx = idx + 1;
        end
    end
end

% fprintf('Bilinear vector replacement complete, elapsed time %f seconds.\n', toc);

end
function [node_x, node_y, node_z, cent_x, cent_y, cent_z, nx, ny, nz] = ...
    generate_grid_3d(image_height, image_width, image_depth, ...
    x_size, y_size, z_size, x_spacing, y_spacing, z_spacing)
%GENERATE_GRID_3D Generate 3-D PIV grid.
%   [NODE_X, NODE_Y, NODE_Z, CENT_X, CENT_Y, CENT_Z, NX, NY, NZ] =
%   GENERATE_GRID_3D(IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_DEPTH,
%   X_SIZE, Y_SIZE, Z_SIZE, X_SPACING, Y_SPACING, Z_SPACING)
%   Creates a grid for PIV evaluation, based on the dimensions and
%   separations of interrogation windows.
%
%   Inputs:
%   IMAGE_HEIGHT is the height of the image (y-coordinate).
%   IMAGE_WIDTH is the width of the image (x-coordinate).
%   IMAGE_DEPTH is the depth of the image (z-coordinate).
%   X_SIZE is the interrogation window size in pixels (x-coordinate).
%   Y_SIZE is the interrogation window size in pixels (y-coordinate).
%   Z_SIZE is the interrogation window size in pixels (z-coordinate).
%   X_SPACING is the interrogation window spacing in pixels (x-coordinate).
%   Y_SPACING is the interrogation window spacing in pixels (y-coordinate).
%   Z_SPACING is the interrogation window spacing in pixels (z-coordinate).
%
%   Outputs:
%   NODE_X are the upper-left node points of each window (x-coordinate).
%   NODE_Y are the upper-left node points of each window (y-coordinate).
%   NODE_Z are the upper-left node points of each window (z-coordinate).
%   CENT_X are the centroid points of each window (x-coordinate).
%   CENT_Y are the centroid points of each window (y-coordinate).
%   CENT_Z are the centroid points of each window (z-coordinate).
%   NX is the number of windows in the x-direction.
%   NY is the number of windows in the y-direction.
%   NZ is the number of windows in the z-direction.
%
%   Kyle Lynch (lynchkp@auburn.edu) March 3, 2011

% Determine the grid offset to center the grid within the image.
nz = 0;
for z = 1:z_spacing:image_depth-z_size
    ny = 0;
    for y = 1:y_spacing:image_height-y_size
        nx = 0;
        for x = 1:x_spacing:image_width-x_size
            nx = nx + 1;
        end
        ny = ny + 1;
    end
    nz = nz + 1;
end
x_offset = round((image_width - x_size - x) / 2);
y_offset = round((image_height - y_size - y) / 2);
z_offset = round((image_depth - z_size - z) / 2);
fprintf('Grid Generation...\n');
fprintf('X Size: %3d, Y Size: %3d, Z Size: %3d\n', x_size, y_size, z_size);
fprintf('X Spacing: %3d, Y Spacing: %3d, Z Spacing: %3d\n', x_spacing, y_spacing, z_spacing);
fprintf('X Regions: %3d, Y Regions: %3d, Z Regions: %3d\n', nx, ny, nz);
fprintf('X Offset: %3d, Y Offset: %3d, Z Offset: %3d\n', x_offset, y_offset, z_offset);
fprintf('Number of Regions: %6d\n', nx*ny*nz);

% Build grid nodal and centroid points and store them in row-major format.
idx = 1;
node_x = zeros(1,nx*ny*nz); cent_x = zeros(1,nx*ny*nz);
node_y = zeros(1,nx*ny*nz); cent_y = zeros(1,nx*ny*nz);
node_z = zeros(1,nx*ny*nz); cent_z = zeros(1,nx*ny*nz);
for z = 1:1:nz
    for y = 1:1:ny
        for x = 1:1:nx
            node_x(idx) = x_offset + (x-1)*x_spacing;
            cent_x(idx) = x_offset + (x-1)*x_spacing + x_size/2;

            node_y(idx) = y_offset + (y-1)*y_spacing;
            cent_y(idx) = y_offset + (y-1)*y_spacing + y_size/2;

            node_z(idx) = z_offset + (z-1)*z_spacing;
            cent_z(idx) = z_offset + (z-1)*z_spacing + z_size/2;

            idx = idx + 1;
        end
    end
end

end
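
As a usage sketch, the grid generator above can be called as follows. The volume dimensions and window parameters here are illustrative values, not values used in the thesis:

```matlab
% Hypothetical example: a 256 x 256 x 64 voxel reconstructed volume,
% interrogated with 32-voxel cubic windows at 50% overlap
% (16-voxel spacing in each direction).
[node_x, node_y, node_z, cent_x, cent_y, cent_z, nx, ny, nz] = ...
    generate_grid_3d(256, 256, 64, 32, 32, 32, 16, 16, 16);
```

The returned node and centroid arrays are 1-D, ordered with x varying fastest, then y, then z, matching the row-major convention used by the interrogation and validation routines in this appendix.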
function [px, py, pz] = ...
    generate_predictor_discrete_3d(old_cent_x, old_cent_y, old_cent_z, old_dx, old_dy, old_dz, ...
    old_nx, old_ny, old_nz, new_cent_x, new_cent_y, new_cent_z, new_nx, new_ny, new_nz)
%GENERATE_PREDICTOR_DISCRETE_3D Generate discrete integer predictor.
%   [PX, PY, PZ] = GENERATE_PREDICTOR_DISCRETE_3D(OLD_CENT_X, OLD_CENT_Y,
%   OLD_CENT_Z, OLD_DX, OLD_DY, OLD_DZ, OLD_NX, OLD_NY, OLD_NZ,
%   NEW_CENT_X, NEW_CENT_Y, NEW_CENT_Z, NEW_NX, NEW_NY, NEW_NZ)
%   Creates the predictor array for PIV interrogation, based on a linear
%   interpolation to new grid coordinates.
%
%   Inputs:
%   OLD_CENT_X is an array of old window centroid locations (x-coordinate).
%   OLD_CENT_Y is an array of old window centroid locations (y-coordinate).
%   OLD_CENT_Z is an array of old window centroid locations (z-coordinate).
%   OLD_DX is the array of old displacements (x-coordinate).
%   OLD_DY is the array of old displacements (y-coordinate).
%   OLD_DZ is the array of old displacements (z-coordinate).
%   OLD_NX is the number of old interrogation windows in the x-direction.
%   OLD_NY is the number of old interrogation windows in the y-direction.
%   OLD_NZ is the number of old interrogation windows in the z-direction.
%   NEW_CENT_X is an array of new window centroid locations (x-coordinate).
%   NEW_CENT_Y is an array of new window centroid locations (y-coordinate).
%   NEW_CENT_Z is an array of new window centroid locations (z-coordinate).
%   NEW_NX is the number of new interrogation windows in the x-direction.
%   NEW_NY is the number of new interrogation windows in the y-direction.
%   NEW_NZ is the number of new interrogation windows in the z-direction.
%
%   Outputs:
%   PX is the discrete integer predictor for the x-coordinate.
%   PY is the discrete integer predictor for the y-coordinate.
%   PZ is the discrete integer predictor for the z-coordinate.
%
%   Kyle Lynch (lynchkp@auburn.edu) February 27, 2011

% Convert 1-D fields to 3-D fields to make indexing and neighboring data
% easier to process.
idx = 1;
old_cent_x2 = zeros(old_ny, old_nx, old_nz);
old_cent_y2 = zeros(old_ny, old_nx, old_nz);
old_cent_z2 = zeros(old_ny, old_nx, old_nz);
old_dx2 = zeros(old_ny, old_nx, old_nz);
old_dy2 = zeros(old_ny, old_nx, old_nz);
old_dz2 = zeros(old_ny, old_nx, old_nz);
for z = 1:old_nz
    for y = 1:old_ny
        for x = 1:old_nx
            old_cent_x2(y,x,z) = old_cent_x(idx);
            old_cent_y2(y,x,z) = old_cent_y(idx);
            old_cent_z2(y,x,z) = old_cent_z(idx);
            old_dx2(y,x,z) = old_dx(idx);
            old_dy2(y,x,z) = old_dy(idx);
            old_dz2(y,x,z) = old_dz(idx);
            idx = idx + 1;
        end
    end
end

idx = 1;
cent_x2 = zeros(new_ny, new_nx, new_nz);
cent_y2 = zeros(new_ny, new_nx, new_nz);
cent_z2 = zeros(new_ny, new_nx, new_nz);
for z = 1:new_nz
    for y = 1:new_ny
        for x = 1:new_nx
            cent_x2(y,x,z) = new_cent_x(idx);
            cent_y2(y,x,z) = new_cent_y(idx);
            cent_z2(y,x,z) = new_cent_z(idx);
            idx = idx + 1;
        end
    end
end

% Perform interpolation.
px2 = round(interp3(old_cent_x2, old_cent_y2, old_cent_z2, old_dx2, ...
    cent_x2, cent_y2, cent_z2, 'linear', 0));
py2 = round(interp3(old_cent_x2, old_cent_y2, old_cent_z2, old_dy2, ...
    cent_x2, cent_y2, cent_z2, 'linear', 0));
pz2 = round(interp3(old_cent_x2, old_cent_y2, old_cent_z2, old_dz2, ...
    cent_x2, cent_y2, cent_z2, 'linear', 0));

% Convert 3-D arrays back to 1-D arrays.
idx = 1;
px = zeros(1, new_ny*new_nx*new_nz);
py = zeros(1, new_ny*new_nx*new_nz);
pz = zeros(1, new_ny*new_nx*new_nz);
for z = 1:new_nz
    for y = 1:new_ny
        for x = 1:new_nx
            px(idx) = px2(y,x,z);
            py(idx) = py2(y,x,z);
            pz(idx) = pz2(y,x,z);
            idx = idx + 1;
        end
    end
end

end