Sample selection and reconstruction for array based multispectral
imaging
Except where reference is made to the work of others, the work described in this
dissertation is my own or was done in collaboration with my advisory committee. This
dissertation does not include proprietary or classified information.
Manu Parmar
Certificate of Approval:
Thomas S. Denney, Jr
Professor
Electrical and Computer Engineering
Stanley J. Reeves, Chair
Professor
Electrical and Computer Engineering
Jitendra K. Tugnait
Professor
Electrical and Computer Engineering
John Y. Hung
Professor
Electrical and Computer Engineering
George T. Flowers
Interim Dean
Graduate School
Sample selection and reconstruction for array based multispectral
imaging
Manu Parmar
A Dissertation
Submitted to
the Graduate Faculty of
Auburn University
in Partial Fulfillment of the
Requirements for the
Degree of
Doctor of Philosophy
Auburn, Alabama
May 10, 2007
Sample selection and reconstruction for array based multispectral
imaging
Manu Parmar
Permission is granted to Auburn University to make copies of this dissertation at its
discretion, upon the request of individuals or institutions and at
their expense. The author reserves all publication rights.
Signature of Author
Date of Graduation
Vita
Manu Parmar was born in Mumbai (formerly Bombay), India, in 1978. He received
the B.E. degree in electrical engineering from the Government College of Engineering, Pune,
in 2000. He traveled to the United States in 2000 to join the M.S. program in electrical
engineering at Auburn University and received the M.S. degree in 2002. He joined the Ph.D.
program in electrical engineering at Auburn University in 2002. His research interests
are in the areas of digital imaging, color restoration, optimal image acquisition, and
multispectral imaging.
Dissertation Abstract
Sample selection and reconstruction for array based multispectral
imaging
Manu Parmar
Doctor of Philosophy, May 10, 2007
(M.S., Auburn University, Auburn, 2002)
(B.E., Government College of Engineering, Pune University, 2000)
121 Typed Pages
Directed by Stanley J. Reeves
In this work we address the problem of acquisition of multispectral images in a sampled
form and the subsequent processing of the acquired signal. The problem is relevant in the
context of color imaging in digital cameras, and increasingly, in the field of hyperspectral
imaging as applied to remote sensing and target recognition. The scope of this work
encompasses a broad swath of image processing problems: image acquisition, in
the problem of optimally selecting sampling rates and patterns of multiple channels; image
reconstruction, in the reconstruction of the sparsely sampled data; image restoration, in
obtaining an estimate of the true scene from noisy data; and finally, image enhancement
and representation, in the problem of presenting the reconstructed image in a color-space
that allows for transformations that achieve best perceived quality.
Acquisition of multispectral images in the simplest form entails either the use of multi-
ple sensor arrays to sample separate spectral bands in a scene, or the use of a single sensor
array with a mechanism that switches overlaying band-pass filters. Due to the nature of the
acquisition process, both these methods suffer from shortcomings in terms of weight, cost,
time of acquisition, etc. A widely used alternative scheme uses only one sensor array to
sample multiple bands. An array of filters, referred to as a mosaic, is overlaid on the sensor
array such that only one color is sampled at a given pixel location. The full color image
is obtained during a subsequent reconstruction step commonly referred to as demosaick-
ing. This scheme offers advantages in terms of cost, weight, mechanical robustness and the
elimination of the related post-processing step since registration in this case is exact.
Three main issues need to be addressed in such a scheme, viz., the shape and arrange-
ment of the sampling pattern, selection of the sensitivities of the spectral filters, and the
design of the reconstruction algorithm. Each of the above problems is contingent on multi-
ple factors. Sensor sampling patterns are constrained by the limitations of electronic devices
and manufacturing processes, spectral sensitivities are affected by the material properties of
the colors painted on the array to form filters, and the reconstruction methods are limited
by computational resources.
In this research, we address the above problems from a signal processing perspective
and attempt to develop parametric algorithms that can accommodate external limitations
and constraints. We have developed methodologies for the selection of optimal sampling
patterns that will allow for ordered, repeated array blocks. In addition, we have developed
an algorithm for demosaicking of color filter array (CFA) data based on Bayesian techniques. We have also
proposed a formulation for the selection of optimal spectral sensitivities for individual color
filters.
Acknowledgments
The culmination of this work, and the entire enterprise of a completed Ph.D., is due
to the continued support of many individuals. First, I would like to thank my advisor Dr.
Stanley J. Reeves for the steady support, constant encouragement, and the superb advice
throughout the course of this work. I am deeply indebted to him for his great patience with
me and my work, his glee at discovering new problems and their solutions, and the amazing
example he sets with his approach to work, research, signal processing, and life in general.
I am indebted to Dr. John Y. Hung, my M.S. thesis advisor for his support and his
exceptional qualities as a guide and mentor. I am thankful to Dr. Thomas S. Denney
for serving on my graduate committee, reviewing my work, and his wonderful ideas and
enthusiasm in myriad areas. I thank him for supporting me and my work on many occasions
along the line. I also thank Dr. Jitendra K. Tugnait for serving on my graduate committee,
Dr. Victor Nelson, the ECE graduate program coordinator, Ms. Jo Ann Loden, and the
wonderful people administering the department for all their help and their efforts in making
the Ph.D. experience at Auburn a pleasant one. I am grateful for the continued financial
support accorded me by the ECE department.
Last but not least, I would like to thank my parents Cdr. Ram Singh and Mrs.
Varinder Parmar and my sister Ekta Parmar for their love and inspiration. I am grateful
for the love, support, and the wonderful company of my fiancée, Dr. Jhilmil Jain and the
great bunch of friends I had the good fortune to meet in my time at Auburn.
Style manual or journal used Journal of Approximation Theory (together with the style
known as "aums"). Bibliography follows van Leunen's A Handbook for Scholars.
Computer software used The document preparation package TeX (specifically LaTeX)
together with the departmental style-file aums.sty.
Table of Contents
List of Figures xi
1 Introduction 1
1.1 Statement of the problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Scope of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Background 7
2.1 Color fundamentals and human color vision . . . . . . . . . . . . . . . . . . 7
2.1.1 Trichromacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Colorimetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Perceptually uniform color spaces . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.1 The CIELAB space . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 The sCIELAB space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.5 Image formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3 Demosaicking of Color Filter Array Data 20
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2 Bayesian restoration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3 Color image model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.1 Degradation Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3.2 Prior model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4 Algorithm Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.4.1 The ICM iterations for pixel update . . . . . . . . . . . . . . . . . . 34
3.4.2 Edge Variable Update . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.4.3 Demosaicking Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4 Selection of sensor spectral sensitivities 44
4.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2 Image formation Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3 Error Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.4 Correlation matrix model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.5 Experiments and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.5.1 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5 Sample selection in color filter arrays 71
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.3 Sample selection based on regularization . . . . . . . . . . . . . . . . . . . . 73
5.3.1 Human color vision model . . . . . . . . . . . . . . . . . . . . . . . . 74
5.3.2 Mathematical model . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.3.3 Sampling Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.3.4 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.4 Sample selection based on Wiener filtering . . . . . . . . . . . . . . . . . . . 83
5.4.1 The YyCxCz color space . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.4.2 The HVS MTFs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.4.3 Sampling Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.4.4 Mathematical Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.4.5 Sampling Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.4.6 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.5 Conclusions and discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.6 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6 Summary 99
6.1 Summary of results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.2 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Bibliography 102
List of Figures
1.1 Image acquisition with multiple sensor-arrays . . . . . . . . . . . . . . . . . 2
1.2 Image acquisition with a single sensor-array . . . . . . . . . . . . . . . . . . 3
2.1 Sensitivities of human rods and cones. . . . . . . . . . . . . . . . . . . . . . 9
2.2 CIE XYZ and CIE RGB color matching functions . . . . . . . . . . . . . . 13
2.3 Relative spectral power distributions of common light sources . . . . . . . . 17
3.1 Image processing pipeline in a digital camera . . . . . . . . . . . . . . . . . 21
3.2 CFA sampling and demosaicking . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3 (a) An image with information about three colors (red, green, and blue) at
each spatial location. (b) Representation of the image as it would be acquired
with a CFA-based imager. (c) CFA data shown with sampled colors at each
location. (d) Result of bilinear reconstruction of CFA data. . . . . . . . . . 23
3.4 Sample images from Eastman Kodak's PhotoCD PCD0992. . . . . . . . . . 27
3.5 A representation of horizontal and vertical gradients obtained as the first
differences in the respective directions. . . . . . . . . . . . . . . . . . . . . . 28
3.6 Representation of a point in the 3-D lattice with associated line processes.
Red, green and blue pixels are shown surrounded by the respective line
processes that denote intra-channel edges (l^k). Line processes for the
cross-channel terms (c^{kk'}) are appropriately labeled. . . . . . . . . . . . . . . 31
3.7 The set of cliques associated with a red pixel at location i. Locations of
i + δ, δ = H, V, DL, DR are labeled. . . . . . . . . . . . . . . . . . . . . . . 32
3.8 Reconstruction results for image 19 in Kodak PhotoCD PCD0992 . . . . . . 36
3.9 Reconstruction results for image 13 in Kodak PhotoCD PCD0992 . . . . . . 37
3.10 Reconstruction results for image 11 in Kodak PhotoCD PCD0992 . . . . . . 38
3.11 Reconstruction results for image 22 in Kodak PhotoCD PCD0992 . . . . . . 39
3.12 Reconstruction results for image 21 in Kodak PhotoCD PCD0992 . . . . . . 40
3.13 Reconstruction results for image 1 in Kodak PhotoCD PCD0992 . . . . . . 41
4.1 Spectral sensitivity functions. Ordinates represent transmittance, abscissae
are wavelength in nm. (a),(b) RGB and CMY transmittances respectively
from ImagEval's vCamera toolbox. . . . . . . . . . . . . . . . . . . . . . . . 45
4.2 Representation of the image formation process in color image acquisition with
color filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3 Sampled spectra of common illuminants in the range 400-700 nm . . . . . . 49
4.4 The spectral correlation matrix R(1,1) for (a) the super-image obtained by
accumulating spectral data from all 22 sample images together and (b) for
the proposed model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.5 Sample multispectral images from Hordley et al. [70] rendered in sRGB space
for the D65 illuminant. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.6 Common periodic CFAs. (a) Bayer [14], (b) Gindele [18], (c) Yamanaka [48],
(d) Lukac [49], (e) striped, (f) diagonal striped [49], (g) CFA based on the
Holladay halftone pattern [50]. . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.7 (a)-(g) Optimal spectral sensitivity functions obtained for the CFA patterns
shown in Figs. 1(a)-1(g) respectively. Ordinates represent normalized transmittances.
The colors of transmittance curves are sRGB values for the respective
spectra. Bolder lines correspond to the optimal sensitivities obtained
at the location of the green filter in the respective CFA patterns. . . . . . . 64
4.8 The simulation pipeline. All variables are as described in preceding sections. 65
4.9 sRGB representations (for the D65 illuminant) of an image cropped from
image 3 from the database of multispectral images [70]. (o) Original image.
(a)-(g) From left to right: images reconstructed from the CFA sampled images
obtained from the RGB, CMY, and optimized color filters respectively.
s-CIELAB ΔE error images appear to the right of each reconstructed image. 67
4.10 sRGB representations (for the D65 illuminant) of an image cropped from
image 4 from the database of multispectral images [70]. (o) Original image.
(a)-(g) From left to right: images reconstructed from the CFA sampled images
obtained from the RGB, CMY, and optimized color filters respectively.
s-CIELAB ΔE error images appear to the right of each reconstructed image. 68
5.1 The Bayer Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.2 A typical image processing pipeline in a color digital camera . . . . . . . . . 74
5.3 HVS green channel MTF . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.4 HVS red and blue channel MTFs . . . . . . . . . . . . . . . . . . . . . . . . 78
5.5 An 8×8 array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.6 Block diagram for calculating the error criterion . . . . . . . . . . . . . . . 89
5.7 Rod and cone sensitivities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.8 Array obtained by eliminating samples one at a time . . . . . . . . . . . . . 95
5.9 Block based array patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Chapter 1
Introduction
1.1 Statement of the problem
In digital image acquisition, the optical sensor is either a charge coupled device (CCD)
or complementary metal oxide semiconductor (CMOS) device that is inherently monochro-
matic [1]. At a particular pixel location on a sensor-array, the photosensitive device inte-
grates the incident energy over its entire spectrum to generate a charge that is indicative
of intensity. The sensor array is thus capable of acquiring only a grayscale representation
of the imaged scene. In color or multispectral imaging where different bands along the
signal spectrum carry distinct information about the scene, the incident energy needs to be
sampled along the wavelength range of interest. In these applications, color filter overlays
(typically color pigment dyes) are used to cover the optical sensor-array such that the array
only captures energy in a particular range of wavelengths. In consumer applications such as
digital cameras, where the object is to produce a color image that may be displayed either
on a display device (a cathode ray tube (CRT) or liquid crystal display (LCD)) or printed
on paper, at least three color channels or bands must be sampled along the range of visible
wavelengths.
Typically, digital color cameras sample three (with wavelengths centered around the
red, green, and blue regions of the visible spectrum), or four (cyan, magenta, yellow, and
white) bands while document scanners with special applications sometimes sample up to six
bands. One way to achieve multi-band acquisition is to use multiple sensor-arrays overlaid
Figure 1.1: Image acquisition with multiple sensor-arrays
with color filters such that energy in a distinct band is incident on a particular sensor-
array. In this case the number of sensor-arrays equals the number of bands to be sampled.
Figure 1.1 illustrates such a scheme where three distinct channels (red, green, and blue) are
sampled.
The optical sensor and its accompanying circuitry form a significant portion of the
total cost of a camera (up to 25% [2]), and multi-sensor arrays are limited only to the most
expensive digital cameras meant for professional use. Also, the beam-splitting arrangement,
which typically is a dichroic prism, adds weight to the imager. Finally, since the color bands
are acquired at different planes, an additional step of image registration is added to the
imaging pipeline.
An alternative arrangement uses sequential color sampling. A full color image is pro-
duced by taking multiple exposures while switching the color filter cascaded with the sensor-
array. The color filter in this case may be transmissive, dichroic, or a tunable liquid crystal
filter. The main disadvantage in this case is that the system is extremely sensitive to motion.
Only a few cameras targeted for studio use apply this technique.
Figure 1.2: Image acquisition with a single sensor-array
Lately, manufacturers of consumer-level cameras (including digital single-lens reflex
(SLR) cameras) and video cameras have predominantly used another alternative scheme
that eliminates the limitations in the above schemes at the cost of added digital image
processing. In this scheme only one sensor-array (Fig. 1.2) is used to acquire the full-color
image. An array of filters, referred to as a mosaic, is overlaid on the sensor-array such that
only one color is sampled at a given pixel location. The full color image is obtained during
a subsequent reconstruction step commonly referred to as demosaicking. This scheme offers
multiple advantages in terms of cost, weight, mechanical robustness, and the elimination of
the image registration step since registration in this case is exact.
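As a concrete sketch of the mosaic sampling just described, the snippet below simulates a CFA capture in NumPy. The GRBG Bayer layout and the `bayer_sample` helper are illustrative assumptions only, not the particular arrangements studied in this work:

```python
import numpy as np

def bayer_sample(img):
    """Keep only the mosaic-selected color at each pixel of an H x W x 3
    RGB image (GRBG Bayer layout assumed purely for illustration)."""
    H, W, _ = img.shape
    mosaic = np.zeros((H, W), dtype=img.dtype)
    mosaic[0::2, 0::2] = img[0::2, 0::2, 1]   # even rows, even cols: green
    mosaic[0::2, 1::2] = img[0::2, 1::2, 0]   # even rows, odd cols:  red
    mosaic[1::2, 0::2] = img[1::2, 0::2, 2]   # odd rows,  even cols: blue
    mosaic[1::2, 1::2] = img[1::2, 1::2, 1]   # odd rows,  odd cols:  green
    return mosaic

rgb = np.random.rand(4, 6, 3)   # a toy full-color image
cfa = bayer_sample(rgb)         # one color sample per pixel; shape (4, 6)
```

Two-thirds of the color information is discarded at capture; the demosaicking step must estimate the missing samples.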
Such a mosaic-based sampling scheme for multispectral imaging presents a slew of
new challenges and has attracted much research interest. The main issues that need to be
addressed are:
? selection of the shape, arrangement, and sampling rates of mosaic filters to ensure
optimal reconstruction
? selection of spectral sensitivities of the mosaic filters to ensure optimal performance
(color reproduction in case of color cameras)
? the design of the reconstruction algorithm.
Each of the above problems is affected by multiple factors. The choice of a sampling
scheme for the mosaic or color filter array (CFA) depends not only on the suitability of a
particular pattern from the point of view of image reconstruction quality, but also on mate-
rial properties of the color filter pigments and the semiconductor photosensitive elements.
For example, it is desirable from an image quality perspective that the sampling pattern
be random. This ensures that there are no reconstruction artifacts due to fixed patterns in
the imaged scene. On the other hand, from a strict semiconductor devices perspective, it is
desirable to have fixed repeated sampling patterns to prevent color inconsistencies due to
cross-contamination among adjacent colors on the array. Demosaicking algorithms present
trade-offs in terms of reconstruction quality and computational time. The selection of spec-
tral sensitivities for the color filters is dependent on particular applications and viewing
conditions for the final image.
1.2 Scope of the thesis
The research problems listed in Section 1.1 have been addressed to a large extent as
independent problems in the literature. Recently, demosaicking algorithms have been a
subject of extensive research and various new approaches have been used to reconstruct
full-color images from sub-sampled data: projections on convex sets [3], wavelet domain
processing [4], decision-theory [5], neural networks [6] etc. Traditional image reconstruction
techniques have also been used to address the problem of demosaicking [7, 8, 9]. The
problem of selection of spectral sensitivities has been addressed only from the point of view
of color reproduction accuracy when areas of uniform colors are sampled [10, 11, 12, 13].
The problem of selection of sampling patterns has seen surprisingly little interest in the
open literature, while actual sampling schemes and algorithms used by camera manufacturers
remain closely guarded proprietary information. Sampling schemes that have been patented
or published in the literature are predominantly based on heuristics and on convenience of
sensor-array read-out [14, 15, 16, 17, 18].
The unique problem of simultaneous spectral and spatial sampling presented by mosaic-
based sampling schemes does not appear to be addressed in the open literature. In this
work, we will propose methods to solve the above problems using unified approaches based
on signal processing principles. In addition, the methods proposed are parametric and are
flexible to the addition of constraints due to external factors.
Chapter 2 provides an overview of the fundamentals of human color vision and color
image processing. The subject of colorimetry, the measurement of color, is introduced. The
chapter also describes perceptually uniform color spaces that are commonly used to form
measures for color reproduction accuracy. Also, generalized image formation models for
multispectral image acquisition are detailed.
In Chapter 3 we present an algorithm for the recovery of color images from sparsely
sampled, noisy data. The proposed algorithm is based on the Bayesian framework, which
allows for the effective use of prior information in finding estimates for full-color true images.
We present results for a number of test images and demonstrate the efficacy of the proposed
algorithm.
In Chapter 4 we propose a method for the selection of optimal spectral sensitivities
for the color filters used in the CFA mosaic. The proposed method is based on a unique
joint spatial-spectral treatment that accounts for the simultaneous sampling in the spectral
and spatial domains, which is a characteristic of CFA-based imaging. Optimal color filter
transmittance functions for a number of common CFA arrangements are derived and shown
to perform better than standard RGB and CMY color filters in terms of both spatial
reconstruction quality and color fidelity.
In Chapter 5 we propose two methods for the selection of sampling arrangements
for CFAs. Both methods are based on optimization of criteria formed using standard
image processing techniques and incorporate the effects of human color vision in their
mathematical modeling.
In Chapter 6 we discuss the results obtained in previous chapters and summarize the
problems yet to be solved.
Chapter 2
Background
One of the primary features desirable in a color imaging system is an ability to faithfully
reproduce colors in a scene. The imaging system must also preserve the original colors during
the transfer and further processing of the acquired signal among different devices (e.g.,
camera to printer to scanner). To this end, it is critical that the imaging system account
for the mechanisms of color vision in the human visual system (HVS) and the limitations
of various devices in the imaging system regarding the processing of color signals.
2.1 Color fundamentals and human color vision
The foundations of color theory and the spectral nature of visible light originate with
the work of Isaac Newton. His experiments with prisms led to the understanding that the
visible part of electromagnetic radiation (the wavelength region between λmin = 360 nm and
λmax = 830 nm) can be decomposed into monochromatic components. It is important to
understand that although it is common to refer to radiation or objects possessing certain
colors, they only possess the ability to trigger a sensation that is perceived as a particular
color by the HVS. The appearance of a color is also dependent on viewing conditions,
foreground and background color, spatial characteristics of the scene, and ambient light. In
addition, color appearance is very subjective and differs widely among observers.
A consistent method for the specification and measurement of color (colorimetry) is
not possible without an understanding of the HVS properties. Sharma and Trussell [19]
summarize a history of the development of the understanding of color vision:
The wider acceptance of the wave theory of light paved the way for a better
understanding of both light and color [20], [21]. Both Palmer [22] and Young
[20] hypothesized that the human eye has three receptors, and the difference
in their responses contributes to the sensation of color. However, Grassmann
[23] and Maxwell [24] were the first to clearly state that color can be math-
ematically specified in terms of three independent variables. Grassmann also
stated experimental laws of color matching that now bear his name [[25], p.
118]. Maxwell [26], [27] demonstrated that any additive color mixture could be
matched by proper amounts of three primary stimuli, a fact now referred to as
trichromatic generalization or trichromacy. Around the same time, Helmholtz
[28] explained the distinction between additive and subtractive color mixing and
explained trichromacy in terms of spectral sensitivity curves of the three "color
sensing fibers" in the eye.
It has been determined that the human retina has two kinds of receptors, viz., rods
and cones. The primary function of the rods is to provide monochromatic vision under low
illumination levels (scotopic vision). A photosensitive pigment called rhodopsin that is sen-
sitive primarily in the blue-green region of the spectrum is responsible for sensing radiation
in the rods. Under normal illumination, the rods are saturated and the cones contribute to
vision (photopic luminosity). There are three types of cones, each sensitive in a portion of
the visible spectrum and thus named L (long wavelengths), M (medium wavelengths), and S
(short wavelengths) types of cones. The spectral sensitivities of the cones have been
determined through microspectrophotometric measurements [29], [30]. Figure 2.1(a) shows the
luminous response of rods and the aggregated response of the three cones and represents
(a) Photopic and scotopic luminosity functions for the HVS (efficiency vs. wavelength, 350-750 nm).
(b) Cone sensitivities corrected for peak optical transmittance of the ocular media and the
internal QE of the photoisomerization (S, M, and L curves; efficiency vs. wavelength, 350-750 nm).
Figure 2.1: Sensitivities of human rods and cones.
luminosity under scotopic and photopic conditions respectively. Figure 2.1(b) shows the
sensitivities of the three cones as determined by Stockman et al. [30] and is a representation
of the color sensitivity of the HVS.
2.1.1 Trichromacy
The responses of the three cones to radiation emitted or reflected by a scene can be
modeled by a linear system under fixed ambient conditions. For an incident radiation with
a spectral distribution given by f(λ), where λ represents wavelength, the responses of the
three cones are given by the 3×1 vector

$$c_i = \int_{\lambda_{\min}}^{\lambda_{\max}} s_i(\lambda)\, f(\lambda)\, d\lambda, \qquad i = 1, 2, 3, \tag{2.1}$$
where s_i(λ) is the sensitivity of the ith type of cone and the visible range of the
electromagnetic spectrum is between λmin = 360 nm and λmax = 830 nm. The cone responses are
a projection of the incident spectrum onto the three-dimensional space spanned by the cone
sensitivity functions. This space is called the human visual subspace (HVSS). Although
the actual colors perceived by the HVS are due to further non-linear processing by the hu-
man nervous system, under similar viewing conditions and ocular adaptation, a color may
be approximately specified by the responses obtained at the three types of cones.
Equation (2.1) may be written in the discrete form as

$$c = S^T f \tag{2.2}$$

where c is a 3×1 vector such that each element of c specifies the response obtained at
one type of cone, f is an n×1 vector that contains samples of the incident spectrum along
the wavelength range, and S is an n×3 matrix. The columns of S are the sampled cone
sensitivity functions. Typically, the visible range of wavelengths is sampled every 10 nm
such that n = 31. A higher sampling rate is used in applications involving fluorescent lamps
that have sharp spectral peaks [19].
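A quick numerical sketch of the discrete model c = S^T f in Eq. (2.2). The Gaussian-shaped sensitivities and the 400-700 nm grid (which yields n = 31 samples at 10 nm spacing) are assumptions for illustration, not measured cone data:

```python
import numpy as np

# Wavelength grid: 400-700 nm every 10 nm, giving n = 31 samples
# (the exact range is an assumption made here for illustration).
wl = np.arange(400, 701, 10)

def bump(center, width):
    """Illustrative bell-shaped sensitivity curve, not measured cone data."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# S is n x 3; columns are stand-ins for the sampled L, M, S sensitivities.
S = np.column_stack([bump(560, 50), bump(530, 45), bump(440, 30)])

f = np.ones(wl.size)     # a flat (equi-energy) incident spectrum, n x 1

# Discrete cone responses, Eq. (2.2): c = S^T f, a 3 x 1 vector.
c = S.T @ f
```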
Consider the vectors p_i, i = 1, 2, 3, such that S^T p_i are linearly independent. The vectors
p_i are said to constitute a set of color primaries. They are colorimetrically independent in
that no one color can be formed as a linear combination of the other two and the matrix
S^T P, where P = [p_1 p_2 p_3], is non-singular. For any spectrum f, we define the vector
a(f) = (S^T P)^{-1} S^T f such that S^T f = S^T P a(f). This implies that for any spectrum f,
there exists a linear combination of the primaries that elicits the same response at the
cones and thus matches the spectrum in color. This result, referred to as the principle of
trichromacy, is used in color matching experiments where the color of a particular spectrum
is matched to the color obtained by a linear combination of a set of primaries.
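The construction a(f) = (S^T P)^{-1} S^T f can be checked numerically. Random nonnegative matrices stand in for the cone sensitivities and primaries below (they make S^T P non-singular with probability one), so nothing here is real spectral data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 31

S = rng.random((n, 3))   # hypothetical sampled cone sensitivities
P = rng.random((n, 3))   # hypothetical primaries p_1, p_2, p_3 as columns

f = rng.random(n)        # an arbitrary test spectrum

# Weights on the primaries that match f in color:
# a(f) = (S^T P)^{-1} S^T f, solved without forming the inverse explicitly.
a = np.linalg.solve(S.T @ P, S.T @ f)

# The mixture P @ a is a metamer of f: a different spectrum that elicits
# the same cone responses.
assert np.allclose(S.T @ f, S.T @ (P @ a))
```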
Consider the set of unit-intensity orthonormal spectra given by {e_i}, i = 1, ..., n, where e_i is
an n×1 vector having a 1 in the ith position and zeros elsewhere. This set forms an
orthonormal basis for all visible spectra. Let a_i be the vector that denotes the weights
applied to a set of primaries to colorimetrically match the spectrum of e_i (S^T e_i = S^T P a_i).
For A = [a_1, a_2, ..., a_n]^T, we can form the color matching matrix A such that

$$S^T I_n = S^T P A^T. \tag{2.3}$$

The columns of A are referred to as the color matching functions (CMFs) associated with
the primaries that are the columns of P. Any spectrum f may be represented as a weighted
sum of {e_i}, i = 1, ..., n, as
f =
nsummationdisplay
i=1
fiei, (2.4)
where fi are the elements of f. From (2.3), it follows that the spectrum of f is colorimet-
rically matched by weighting the primaries with the elements of
nsummationdisplay
i=1
fiei = ATf. (2.5)
ATf is a 3?1 vector that represents the relative intensities of the primaries P that match
the color of f and is referred to as a tristimulus vector.
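These relations are straightforward to check numerically. The sketch below is a hypothetical example: the cone sensitivities S and primaries P are random stand-ins rather than measured data. It builds the color matching matrix A from S and P and verifies that weighting the primaries by the tristimulus vector A^T f produces a metamer of f:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 31                      # visible range sampled every 10 nm

# Hypothetical sampled cone sensitivities (n x 3) and primaries (n x 3).
S = rng.random((n, 3))
P = rng.random((n, 3))
assert np.linalg.matrix_rank(S.T @ P) == 3   # colorimetrically independent

# Color matching functions: A^T = (S^T P)^{-1} S^T, so A is n x 3.
A = np.linalg.solve(S.T @ P, S.T).T

f = rng.random(n)           # an arbitrary spectrum
a_f = A.T @ f               # tristimulus vector (weights on the primaries)

# Principle of trichromacy: the primary mixture P a(f) matches f at the cones.
assert np.allclose(S.T @ f, S.T @ (P @ a_f))
```

Note that f and P @ a_f are in general very different spectra; they merely elicit identical cone responses.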
2.2 Colorimetry
To offer a consistent means of measurement and comparison, tristimulus values ob-
tained from different experiments need to be defined with respect to a standard set of
color matching functions (CMFs). The International Commission on Illumination, CIE,
has defined a set of such CMFs that are used as standards in the industry. The CIE 1931
recommendations define a standard colorimetric observer by providing two equivalent sets
of CMFs.
The CIE RGB CMFs (r̄(λ), ḡ(λ), and b̄(λ)) are associated with monochromatic pri-
maries at wavelengths of 700.0, 546.1, and 435.8 nm respectively. The radiant intensities are
adjusted so that the tristimulus values for the constant spectral power distribution (SPD)
spectrum are equal. The CIE XYZ CMFs (x̄(λ), ȳ(λ), and z̄(λ)) are obtained by a linear
transformation of the CIE RGB CMFs, with the additional constraints that the XYZ CMFs
have no negative values, ȳ(λ) is coincident with the luminous efficiency func-
tion (the relative sensitivity of the human eye at each wavelength [31]), and the tristimulus
values are equal for the equi-energy spectrum. The CIE XYZ tristimulus values are most
commonly used in color research and applications. The Y tristimulus value is referred to
as the luminance and closely represents the perceived brightness or intensity of a radiant
spectrum. The X and Z tristimulus values contain information about color or chrominance.
2.3 Perceptually uniform color spaces
A unit for color difference that is commonly used in color research is the just noticeable
difference (JND). It has been established through psychovisual experiments that the JND
is highly variable across the CIE XYZ space and the space is perceptually non-uniform [32].
Figure 2.2: CIE XYZ and CIE RGB color matching functions. (a) CIE r̄(λ), ḡ(λ), b̄(λ) color matching functions (wavelength in nm); (b) CIE x̄(λ), ȳ(λ), z̄(λ) color matching functions (wavelength in nm).
Equal distances in the XYZ space do not correspond to equal differences in perceived color,
and thus the Euclidean distance between two points in the XYZ space cannot be used as a
reliable objective measure of the perceived difference between two colors.
A perceptually uniform color space is highly desirable in defining tolerances in color
reproduction systems and in objectively measuring the performance of various image pro-
cessing algorithms. There has been much research directed at defining suitable perceptually
uniform color spaces [31], [33]. The CIE has proposed two uniform color spaces for prac-
tical applications, viz., the CIE 1976 L*u*v* (CIELUV) space and the CIE 1976 L*a*b*
(CIELAB) space. The CIELAB space is most commonly used in the imaging and printing
industry as the preferred device independent color space.
2.3.1 The CIELAB space
The L*, a*, and b* components of the CIELAB space are defined in terms of the X,
Y, and Z components of the CIE XYZ space by the nonlinear transformation

L* = 116 f(Y/Y_n) − 16,
a* = 500 [f(X/X_n) − f(Y/Y_n)],  (2.6)
b* = 200 [f(Y/Y_n) − f(Z/Z_n)],

where X_n, Y_n, and Z_n are the D65 white point values in the XYZ color space and

f(x) = 7.787x + 16/116,  if 0 ≤ x ≤ 0.008856,
     = x^{1/3},          otherwise.

V_ij(f̃_ij) = a(b + c f̃_ij) e^{−(c f̃_ij)^d},  if f̃_ij > f_max,
           = 1.0,                             otherwise,  (5.1)
where the constants a, b, c, and d are calculated from empirical data to be 2.2, 0.192, 0.114,
and 1.1 respectively; f̃_ij is the radial spatial frequency in cycles/degree as subtended by
the image on the human eye, scaled for the viewing distance; and f_max is the frequency
corresponding to the peak of V_ij. Since we need the MTF in terms of discrete linear
frequencies along the vertical and horizontal directions (f_i, f_j), we must express (f_i, f_j) in
terms of the radial frequency f̃_ij. The discrete frequencies along the horizontal and vertical
directions depend on the pixel pitch Δ of the output device (print or display device) and
the total number of frequencies M. A location (i, j) in the frequency domain corresponds
to the following f_i and f_j in cycles/mm:

f_i = (i − 1)/(ΔM),
f_j = (j − 1)/(ΔM).  (5.2)
The linear frequencies are scaled for the viewing distance s and converted to radial frequency
as

f_ij = (π / (180 arcsin(1/√(1 + s^2)))) √(f_i^2 + f_j^2).  (5.3)
The MTF is not uniform along all directions. The HVS is most sensitive to spatial variation
along the horizontal and vertical directions. To account for this variation, the MTF is
normalized by an angle dependent function s(θ_ij) such that

f̃_ij = f_ij / s(θ_ij),  (5.4)

where

s(θ_ij) = ((1 − w)/2) cos(4θ_ij) + (1 + w)/2,  (5.5)

with w being a symmetry parameter and

θ_ij = arctan(f_j / f_i).  (5.6)
The response obtained for the green channel for w = 0.7, a viewing distance of 45 cm,
and a pixel pitch of 0.27 mm is shown in Fig. 5.3.
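For concreteness, the luminance MTF described above can be evaluated on a discrete frequency grid. The sketch below is a non-authoritative implementation assuming the values quoted in the text (w = 0.7, 450 mm viewing distance, 0.27 mm pixel pitch) and locating f_max numerically as the peak of the exponential model:

```python
import numpy as np

def green_mtf(M=64, pitch=0.27, dist=450.0, w=0.7,
              a=2.2, b=0.192, c=0.114, d=1.1):
    """Sketch of the luminance (green channel) HVS MTF, Eqs. (5.1)-(5.6)."""
    # Discrete linear frequencies in cycles/mm for pixel pitch `pitch` (Eq. 5.2).
    f = np.fft.fftshift(np.fft.fftfreq(M, d=pitch))
    fi, fj = np.meshgrid(f, f, indexing="ij")
    # Scale to radial frequency in cycles/degree at viewing distance `dist` mm (Eq. 5.3).
    deg_per_mm = 180.0 * np.arcsin(1.0 / np.sqrt(1.0 + dist ** 2)) / np.pi
    f_rad = np.hypot(fi, fj) / deg_per_mm
    # Angular normalization (Eqs. 5.4-5.6); arctan2 avoids division by zero.
    theta = np.arctan2(fj, fi)
    s = 0.5 * (1.0 - w) * np.cos(4.0 * theta) + 0.5 * (1.0 + w)
    f_t = f_rad / s
    # Locate the peak of the exponential model numerically to get f_max (Eq. 5.1).
    grid = np.linspace(0.0, 60.0, 6001)
    f_max = grid[np.argmax(a * (b + c * grid) * np.exp(-(c * grid) ** d))]
    V = a * (b + c * f_t) * np.exp(-(c * f_t) ** d)
    return np.where(f_t > f_max, V, 1.0)

V = green_mtf()
```

The low-frequency plateau at 1.0 avoids over-emphasizing frequencies below the peak of the contrast sensitivity curve.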
The response of the HVS to chrominance, or the contrast sensitivity to spatial variations
in the chrominance channels, falls off faster than the response to the luminance channel.
A simple chrominance response model corresponding to a decaying exponential is chosen
as a basis for the HVS response to the blue and red channels. The red and blue channel
response is modelled as

V_{B,R}(f_ij) = e^{−0.15 f_ij}.  (5.7)

The response obtained for the red and blue channels is shown in Fig. 5.4.
Figure 5.3: HVS green channel MTF (frequency axes in cycles/mm).
The HVS point spread functions h_i for i = red, green, blue are obtained as

h_G = F^{-1}{V_G(i, j)},
h_{R,B} = F^{-1}{V_{R,B}(i, j)}.  (5.8)

The matrices H_i are constructed from h_i such that multiplication of a column-ordered image
by H_i yields the 2-D convolution of the image by the point spread function h_i.
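Because the H_i are circulant, multiplying a column-ordered image by H_i amounts to circular convolution with h_i, which can be done as a pointwise product in the 2-D DFT domain. A sketch with an arbitrary stand-in MTF:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 16
# A stand-in MTF with the DFT symmetry V[-k] = V[k], so the PSF is real.
V = np.abs(np.fft.fft2(rng.random((M, M))))

h = np.real(np.fft.ifft2(V))          # point spread function (Eq. 5.8)
img = rng.random((M, M))

# Multiplying the column-ordered image by the circulant matrix built from h
# is circular convolution, done cheaply as a pointwise product of DFTs.
out_fft = np.real(np.fft.ifft2(np.fft.fft2(img) * V))

# Explicit circular convolution for comparison.
out_direct = sum(h[p, q] * np.roll(np.roll(img, p, axis=0), q, axis=1)
                 for p in range(M) for q in range(M))
assert np.allclose(out_fft, out_direct)
```

This DFT shortcut is what makes the matrix products involving H_i tractable in the sample selection algorithms later in the chapter.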
Figure 5.4: HVS red and blue channel MTFs (frequency axes in cycles/mm).
5.3.2 Mathematical model
We model the sub-sampled image as a linear transformation that maps the full-color
image to an image that contains only one color value at a particular pixel location. The
sub-sampled image is represented as

y_i = A_i x_i + u_i,  i = red, green, blue,  (5.9)

where x_i (mn × 1) and y_i (mn × 1) are the red, green, and blue channels of the original
and the sub-sampled m × n images arranged in a column-ordered form, and u_i (mn × 1)
are the similarly arranged noise terms. The matrices A_i are the sampling matrices. For the
fully-sampled case, the A_i are identical to the mn × mn identity matrix. For the sub-sampled
case, the matrices A_i contain only the rows corresponding to sampled pixel locations. We
assume that the image and noise are uncorrelated.
We form a regularization functional for each channel that contains an energy bound
on the residual A_i x_i − y_i and a penalty on the roughness as

Φ_i = ||A_i x_i − y_i||_2^2 + λ_i ||L_i x_i||_2^2.  (5.10)

The estimate of x_i found on minimizing the constrained least squares problem in (5.10) is

x̂_i = (A_i^H A_i + λ_i L_i^H L_i)^{-1} A_i^H y_i,  (5.11)
where A^H is the Hermitian transpose of A. To obtain the best estimate for the perceived
image, we minimize the discrepancy in the reconstructed image when viewed through the
HVS. Let the matrices H_i, i = red, green, blue, represent the filtering effect correspond-
ing to the point spread functions (PSFs) of the red, green, and blue channels of the HVS
respectively. We form a discrepancy function for one channel (dropping the subscript) as

d = E{||Hx − Hx̂||_2^2},  (5.12)

where E{·} represents expectation and ||·||_2 denotes the Euclidean norm.
d = E{||Hx − H(A^H A + λL^H L)^{-1} A^H A x||_2^2} + E{||H(A^H A + λL^H L)^{-1} A^H n||_2^2}
  = E{||H(A^H A + λL^H L)^{-1} λL^H L x||_2^2} + E{||H(A^H A + λL^H L)^{-1} A^H n||_2^2}.  (5.13)

Let P = (A^H A + λL^H L), such that

d = E{||H P^{-1} λL^H L x||_2^2} + E{||H P^{-1} A^H n||_2^2}.  (5.14)

Now,

E{||H P^{-1} A^H n||_2^2} = E{tr(n^H A P^{-H} H^H H P^{-1} A^H n)}
                          = tr(A P^{-H} H^H H P^{-1} A^H R_n),  (5.15)

where R_n is the correlation matrix for n and is described by the relation R_n = E{n n^H}.
We assume that the noise is independent and identically distributed such that R_n = σ^2 I. Also,
P is symmetric and P^H = P. Thus, Eq. (5.15) reduces to

E{||H P^{-1} A^H n||_2^2} = σ^2 tr(A P^{-1} H^H H P^{-1} A^H).  (5.16)

Also,

E{||H P^{-1} λL^H L x||_2^2} = E{tr(x^H λL^H L P^{-1} H^H H P^{-1} λL^H L x)}
                             = λ^2 tr(L^H L P^{-1} H^H H P^{-1} L^H L R_x),  (5.17)

where R_x is the correlation matrix for x and is described by the relation R_x = E{x x^H}.
From Eqs. (5.16) and (5.17), we have

d = tr(P^{-1} H^H H P^{-1} (σ^2 A^H A + λ^2 L^H L R_x L^H L)).  (5.18)

For L = R_x^{-1/2} and λ = σ^2, L^H L = R_x^{-1} and Eq. (5.18) reduces to

d = σ^2 tr(P^{-1} H^H H).  (5.19)
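The closed form for the discrepancy can be verified on a small synthetic problem. In the sketch below, all matrices are random stand-ins and Gaussian draws are assumed only for the Monte Carlo side; the regularized estimate uses L^H L = R_x^{-1} and λ = σ^2 as above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2, trials = 12, 0.1, 5000

B = rng.random((n, n))
Rx = B @ B.T + n * np.eye(n)                 # synthetic image correlation
A = np.eye(n)[rng.permutation(n)[:8]]        # keep 8 of 12 sample locations
H = rng.random((n, n))                       # stand-in HVS filter matrix

lam = sigma2                                 # lambda = sigma^2 (Eq. 5.19)
Rx_inv = np.linalg.inv(Rx)                   # L^H L = Rx^{-1}
P = A.T @ A + lam * Rx_inv
W = np.linalg.solve(P, A.T)                  # x_hat = W y (Eq. 5.11)

# Monte Carlo estimate of d = E{ ||H x - H x_hat||^2 }.
C = np.linalg.cholesky(Rx)
d_mc = 0.0
for _ in range(trials):
    x = C @ rng.standard_normal(n)
    y = A @ x + np.sqrt(sigma2) * rng.standard_normal(A.shape[0])
    r = H @ x - H @ (W @ y)
    d_mc += r @ r
d_mc /= trials

d_closed = sigma2 * np.trace(np.linalg.solve(P, H.T @ H))  # Eq. (5.19)
assert abs(d_mc - d_closed) / d_closed < 0.1
```

Since the derivation only uses second-order statistics, the agreement holds for any image and noise distribution with the assumed correlation matrices.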
We define an error function as a weighted sum of the channel discrepancy functions as

e = Σ_i α_i d_i = Σ_i α_i σ_i^2 tr((A_i^H A_i + λ_i R_{x_i}^{-1})^{-1} H_i^H H_i),  (5.20)

where the α_i are scaling factors that reflect the perceptual importance of the fidelity in a par-
ticular channel.
5.3.3 Sampling Strategy
The goal is to sample only one color channel at each sample location. Thus, we have
to select mn samples from a set of 3mn samples. The error criterion defined in (5.20) may
be used to optimize the selection procedure. The criterion does not depend on the scene
being imaged and may be used for sub-sampling a general scene if the statistical properties
(Rx and Rn) of the fully sampled image are defined accurately.
Each row in the matricesAi in (5.20) corresponds to a sample in the respective channel.
The error criterion defined in (5.20) may be used to obtain the row that, when eliminated,
would cause the least error in the reconstructed signal when viewed through the HVS.
An exhaustive optimization would require the computation of the error criterion for all
combinations of eliminated rows, i.e., (3mn)!/((2mn)! (mn)!) computations of the error
criterion. For a reasonably sized array, this computation would require immense resources.
The authors in [75] use a greedy algorithm for sequential backward selection (SBS)
of samples for signal reconstruction. The sequential backward selection algorithm cannot
be guaranteed to provide optimal results, but the authors in [76] have shown that the
algorithm consistently provides good results with a relatively tight upper bound on the
error criterion. We devise an SBS scheme for optimizing the criterion as follows. We start
with a fully sampled image with all mn samples in each channel. The error criterion is
computed after eliminating one row from one of the matrices A_i, and the row that gives the
least value for the criterion is eliminated. The matrix A_i from which the row is eliminated
is then of dimension (mn − 1) × mn. The error criterion is computed again, and rows are
successively eliminated in this manner, with the constraint that the three channels are
sampled in a mutually exclusive manner.
Computation of the error criterion requires the computation of the inverse of the matrix
P for each eliminated row. For an m × n array, P is of dimension mn × mn, and the
inversion requires considerable computation even for small arrays. The error criterion may
be simplified using the Sherman-Morrison matrix inversion formula such that we need to find
only an update term after each elimination. Also, the matrices H_i are circulant block-
circulant, and the matrix products involving H_i may be computed using DFTs. In spite of
these simplifications, the computation of the criterion is cumbersome since, in the form of
(5.20), it requires the storage of at least the three mn × mn initial matrices P_i^{-1}.
5.3.4 Experiments
The power spectral density of a random process is given by the Wiener-Khinchine
relation, S_x(jω) = F{R_x}. We obtained an R_x representative of a general scene imaged
by a digital camera from the mean, S_avg, of the power spectra of a large number of images
reflecting various image types as R_x = F^{-1}{S_avg}. The images used to obtain S_avg span
a wide range of categories including natural scenes, landscapes, portraits, and a few color
test images obtained from the USC-SIPI [77] image database.
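This estimation step can be sketched as follows. The "images" here are random 1-D stand-ins for brevity; the same averaging and inverse-DFT construction applies in 2-D:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_images = 32, 200

# Stand-in 1-D "image" signals; in practice, rows of the database images.
signals = np.cumsum(rng.standard_normal((n_images, n)), axis=1)

# Mean power spectrum S_avg over the collection.
S_avg = np.mean(np.abs(np.fft.fft(signals, axis=1)) ** 2, axis=0) / n

# Wiener-Khinchine: the autocorrelation is the inverse DFT of the PSD.
r = np.real(np.fft.ifft(S_avg))

# Circulant correlation matrix built from the autocorrelation sequence.
Rx = np.array([np.roll(r, i) for i in range(n)])

assert np.allclose(Rx, Rx.T)                    # r is even, so Rx is symmetric
assert np.min(np.linalg.eigvalsh(Rx)) > -1e-8   # and positive semidefinite
```

The resulting circulant R_x is automatically a valid (positive semidefinite) correlation matrix, because its eigenvalues are exactly the nonnegative averaged power spectrum values.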
The sample selection procedure detailed in Section 5.3.3 was applied for fully-sampled
RGB arrays of different sizes. The error criterion values obtained for a Bayer array (e_Bayer)
and an array obtained by the SBS scheme (e_SBS) detailed in Sec. 5.3.3 are shown in Table
5.1. The weights on the individual channel errors are α_red = 1, α_green = 1.6, and α_blue = 1.
The values of α_i reflect the relative importance of the green channel to image quality, and
precise values may be obtained through psychovisual experiments. An 8 × 8 array obtained
using SBS is shown in Fig. 5.5.
Table 5.1: Comparison of error criterion values with a Bayer array

Array size | e_Bayer  | e_SBS
8 × 8      | 28.8083  | 27.5952
12 × 12    | 46.0583  | 44.3362
16 × 16    | 74.9760  | 72.3530
32 × 32    | 218.4921 | 211.1279
5.4 Sample selection based on Wiener filtering
In the following sections we describe a design method for an RGB type CFA based
on the Wiener filtering of the sub-sampled CFA image. Since color differences in the RGB
Figure 5.5: An 8 × 8 array obtained using SBS (legend: red, green, blue).
space do not correspond to perceptual differences, in this work, we consider a model of the
HVS based on a uniform color space to quantify perceptual effects.
5.4.1 The YyCxCz color space
Various models have been proposed in the literature that use perceptually uniform
color spaces like the CIE L*a*b* to describe the modulation transfer functions (MTFs) of
the HVS. In this work, we use a model first described by Flohr et al. [78] to define the
MTFs of the HVS luminance and chrominance channels. This model served as a basis for
the HVS model used in Section 5.3.1. The Flohr model is channel-independent and is based
on a color space that is a linearization of the CIE L*a*b* color space. The transformation
from CIE L*a*b* to RGB is nonlinear, and Flohr et al. propose a linearization about the D65
white point to form a color space characterized by the channels Y_y, C_x, and C_z as

Y_y = 116 Y/Y_n − 16,
C_x = 500 [X/X_n − Y/Y_n],  (5.21)
C_z = 200 [Y/Y_n − Z/Z_n].

The Y_y component in this color space corresponds to luminance, and C_x and C_z are similar
to the R−G and B−Y opponent color chrominance components respectively.
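Because the transform is linear (apart from the offset), it inverts exactly. A sketch of the forward map and its inverse, using the D65 white point values quoted later in the text:

```python
import numpy as np

# D65 white point (chromaticity values quoted in the text).
Xn, Yn, Zn = 0.3127, 0.3290, 0.3583

def xyz_to_yycxcz(X, Y, Z):
    """Linearized opponent space of Eq. (5.21)."""
    Yy = 116.0 * Y / Yn - 16.0
    Cx = 500.0 * (X / Xn - Y / Yn)
    Cz = 200.0 * (Y / Yn - Z / Zn)
    return Yy, Cx, Cz

def yycxcz_to_xyz(Yy, Cx, Cz):
    """Inverse transform, Eq. (5.22)."""
    Y = Yn / 116.0 * (Yy + 16.0)
    X = Xn * Cx / 500.0 + Xn / 116.0 * (Yy + 16.0)
    Z = Zn / 116.0 * (Yy + 16.0) - Zn / 200.0 * Cz
    return X, Y, Z

xyz = (0.20, 0.21, 0.22)
assert np.allclose(yycxcz_to_xyz(*xyz_to_yycxcz(*xyz)), xyz)
```

At the white point itself the transform gives (Y_y, C_x, C_z) = (100, 0, 0): full luminance, zero chrominance.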
In this work, we derive an MSE criterion in the Y_yC_xC_z space to obtain an RGB
array, and we will need to transform the error to the RGB space. From Eq. (5.21), the
transformation from Y_yC_xC_z to XYZ may be obtained as

X = (X_n/500) C_x + (X_n/116)(Y_y + 16),
Y = (Y_n/116)(Y_y + 16),  (5.22)
Z = (Z_n/116)(Y_y + 16) − (Z_n/200) C_z.

The transformation from XYZ to RGB about the D65 white point is performed as

[R]   [ 3.240479  −1.537150  −0.498535 ] [X]
[G] = [ −0.969256  1.875992   0.041556 ] [Y].  (5.23)
[B]   [ 0.055648  −0.204043   1.057311 ] [Z]
The transformation from Y_yC_xC_z space to RGB space is achieved via the cascaded trans-
formation Y_yC_xC_z → XYZ → RGB as

[R]   [ 3.240479  −1.537150  −0.498535 ] ( [ X_n/116  X_n/500     0     ] [Y_y]   [ 16X_n/116 ] )
[G] = [ −0.969256  1.875992   0.041556 ] ( [ Y_n/116     0        0     ] [C_x] + [ 16Y_n/116 ] ),
[B]   [ 0.055648  −0.204043   1.057311 ] ( [ Z_n/116     0    −Z_n/200  ] [C_z]   [ 16Z_n/116 ] )
where the values Xn, Yn, and Zn for the D65 white point are 0.3127, 0.3290, and 0.3583
respectively such that
[R]   [ 0.0220356  −0.067728   0.000893 ] [Y_y]   [ 0.352569 ]       [Y_y]
[G] = [ 0.0138047   0.085737  −0.000074 ] [C_x] + [ 0.220875 ] = T_1 [C_x] + t.  (5.24)
[B]   [ 0.0031668  −0.009224  −0.001894 ] [C_z]   [ 0.050669 ]       [C_z]
5.4.2 The HVS MTFs
Flohr et al. propose a model that is a combination of the models detailed by Näsänen
[79] and Sullivan et al. [74]. The luminance MTF is modelled by an exponential that is
similar to the MTF of the green channel in (5.1) as

V_{Y_y}(f̃_ij) = K(L) e^{−α(L) f̃_ij},  (5.25)

where f̃_ij is the radial spatial frequency in cycles/degree as subtended by the image on the
human eye, and is a weighted magnitude of the linear frequency vector [f_i f_j]^T. L is the
average luminance for the display, K(L) = a L^b,

α(L) = 1/(c ln(L) + d),  (5.26)

and a = 131.6, b = 0.3188, c = 0.525, d = 3.91.

An approximation to experimental results obtained by Mullen [80] is used to obtain
the chrominance MTFs as

V_{C_x,C_z}(f_ij) = A e^{−α f_ij},  (5.27)

where α = 0.419 and A = 400 as determined by Kolpatzik and Bouman [81]. As evident
from Eqs. (5.26)-(5.27), the HVS model has a lowpass nature for both the luminance and
the chrominance channels. The MTF of the chrominance channels decays at a greater rate,
and the luminance channel MTF has lesser sensitivity at odd multiples of π/4.

The HVS point spread functions (PSFs) h_i for i = Y_y, C_x, C_z are obtained by taking
the two-dimensional inverse Fourier transforms of V_{Y_y}(f̃_ij) and V_{C_x,C_z}(f_ij) as follows:

h_{Y_y} = F^{-1}{V_{Y_y}},
h_{C_x,C_z} = F^{-1}{V_{C_x,C_z}}.  (5.28)
5.4.3 Sampling Strategy
Consider the image processing pipeline for a typical digital color camera depicted in
Fig. 5.2. We propose a variation in the pipeline for the purpose of determining an error
criterion (Fig. 5.6). During image acquisition, all three color channels are acquired at each
sample location and full information about the Y_y, C_x, and C_z channels is available. Intensity
values obtained from RGB sensors may be transformed into the Y_yC_xC_z space to obtain the
required values. The image is then sub-sampled so that we are left with only one channel
at a particular location and a demosaicking process is used to reconstruct the image. We
propose a reconstruction method based on the Wiener filter for this stage of the pipeline.
The HVS model detailed in Section 5.4.2 is used to characterize the perceptual error between
the original and the reconstructed image. Since we need to determine sample locations for
an RGB array, a color space transformation is applied to the output image obtained after
convolution with the HVS PSF to convert the values to RGB space.
An error criterion is defined as the MSE between the reconstructed and the original image
when passed through the HVS and after a color transformation into RGB space. We
start with the fully sampled image with all three color channels available at each pixel
location. The error criterion is then evaluated after eliminating all samples one at a time.
The sample value that leads to the least increase in the error criterion is eliminated and the
procedure is repeated with the remaining samples until only one channel is left at each pixel
location. The resulting sampling arrangement assures the least perceptual degradation in
the original fully-sampled image due to sparse sampling. The procedure neglects the effect
of color space transforms and quantization associated with the enhancement processes in
Figure 5.6: Block diagram for calculating the error criterion. The fully sampled Y_yC_xC_z image (captured directly, or RGB converted to Y_yC_xC_z) is sub-sampled by multiplication by A and demosaicked via Wiener reconstruction; both the reconstructed and the original images are passed through the HVS luminance/chrominance frequency response, color-transformed from Y_yC_xC_z to RGB, and compared via E{||·||^2}.
Step 4 (Fig. 5.2), and the display device model in Step 5. In effect, we assume that color
channel values obtained during acquisition are translated with reasonable fidelity to Step
4. Fig. 5.6 depicts the calculation of the error criterion in the form of a block diagram.
5.4.4 Mathematical Model
We assume that the effect of noise in the sub-sampling process may be neglected due
to its much lower magnitude when compared to pixel intensities. For an original image I
containing m × n pixels, the sub-sampled image is modelled as

y = Ax,  (5.29)

where x ∈ C^{3mn×1} is the fully sampled image and consists of the luminance and opponent
chrominance channels (viz. the Y_y, C_x, and C_z values) in column-ordered form, taking
the form x = [x_{Y_y}^T x_{C_x}^T x_{C_z}^T]^T. Thus, the kth, (mn + k)th, and (2mn + k)th
elements of x (k ≤ mn) represent the three channel values for the same pixel location. The
vector y ∈ C^{mn×1} is the similarly arranged sub-sampled image, and contains only one
channel at a particular pixel location. The matrix A ∈ C^{mn×3mn} is a sampling matrix
that represents a linear transformation that maps the fully-sampled image to an image that
is sub-sampled such that only one color channel is sampled at a particular location.
The Wiener filter solution for the estimate x̂ of x in Eq. (5.29) is found as

x̂ = R_xy R_y^{-1} y,  (5.30)

where R_xy = E{x y^T} and R_y = E{y y^T}; E{·} represents expectation. Substituting
explicit expressions for R_xy and R_y gives

x̂ = E{x y^T} (E{y y^T})^{-1} y
  = E{x (Ax)^T} (E{Ax (Ax)^T})^{-1} Ax
  = E{x x^T A^T} (E{A x x^T A^T})^{-1} Ax
  = R_x A^T (A R_x A^T)^{-1} Ax,  (5.31)
where, with x partitioned as [x_{Y_y}^T x_{C_x}^T x_{C_z}^T]^T,

R_x = E{x x^T}
    = [ R_{Y_y}      R_{Y_yC_x}   R_{Y_yC_z} ]
      [ R_{Y_yC_x}   R_{C_x}      R_{C_xC_z} ].  (5.32)
      [ R_{Y_yC_z}   R_{C_xC_z}   R_{C_z}    ]
The elements on the diagonals of R_x are the autocorrelation matrices for the three channels,
and the off-diagonal elements are the channel crosscorrelation matrices. An error functional
is formed as the mean square error of the original image and the reconstructed image when
viewed through the HVS and converted to RGB space as

e = E{||THx − THx̂||_2^2},  (5.33)

where ||·||_2 denotes the Euclidean norm. The matrix H is constructed such that
multiplication of a column-ordered image by H yields the 2-D convolution of the image by
the PSFs h_i obtained in Eq. (5.28). The three channels of the HVS model are assumed to
be independent such that H is block diagonal and of the form

H = [ H_{Y_y}    0        0      ]
    [ 0         H_{C_x}   0      ],  (5.34)
    [ 0          0       H_{C_z} ]

where the matrices H_i represent convolution of the individual channels by their respective
PSFs and have a circulant block circulant structure. The matrix T is obtained from Eq.
(5.24) such that multiplication of a column-ordered image by T achieves the color transfor-
mation from Y_yC_xC_z space to RGB space. T may be represented as a Kronecker matrix
product of the form T = T_1 ⊗ I_mn, where I_mn is the mn × mn identity matrix.
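Because of this Kronecker structure, T never needs to be formed explicitly: multiplying the stacked image by T_1 ⊗ I_mn is the same as applying the 3 × 3 transform T_1 across the three channel planes. A small numerical check, with stand-in values for T_1:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 4, 4
mn = m * n

T1 = rng.random((3, 3))               # 3x3 color transform (stand-in values)
x = rng.random(3 * mn)                # stacked [x_Yy; x_Cx; x_Cz]

# Explicit Kronecker product: T = T1 (x) I_mn, of size 3mn x 3mn.
T = np.kron(T1, np.eye(mn))
full = T @ x

# Equivalent channel-wise application, without forming T.
channels = x.reshape(3, mn)           # rows: Yy, Cx, Cz planes
fast = (T1 @ channels).reshape(-1)

assert np.allclose(full, fast)
```

The channel-wise form reduces the cost from O((mn)^2) to O(mn) per color transform, which matters for the repeated criterion evaluations below.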
The error criterion is thus

e = E{ ||TH x − TH R_x A^T (A R_x A^T)^{-1} A x||_2^2 }
  = E{ ||TH (I − R_x A^T (A R_x A^T)^{-1} A) x||_2^2 }
  = E{ tr( x^T (I − R_x A^T (A R_x A^T)^{-1} A)^T H^T T^T T H (I − R_x A^T (A R_x A^T)^{-1} A) x ) },

where tr(·) represents the trace of a matrix. Let P = (I − R_x A^T (A R_x A^T)^{-1} A), such that

e = E{ tr( x^T P^T H^T T^T T H P x ) } = tr( P^T H^T T^T T H P R_x ).  (5.35)
Note that the criterion described by Eq. (5.35) does not depend on a particular scene being
imaged. We only need to know the statistical properties of the scene as described by the
elements of Rx to evaluate the criterion.
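This scene independence can be checked numerically. In the sketch below all statistics and filters are random stand-ins, and Gaussian draws are assumed only for the Monte Carlo side; the empirical HVS-weighted MSE of the Wiener reconstruction of Eq. (5.31) matches the trace expression of Eq. (5.35):

```python
import numpy as np

rng = np.random.default_rng(6)
N, keep, trials = 12, 7, 5000

B = rng.random((N, N))
Rx = B @ B.T + N * np.eye(N)                  # scene statistics
A = np.eye(N)[rng.permutation(N)[:keep]]      # sampling matrix (keep 7 of 12)
TH = rng.random((N, N))                       # stand-in for the product T H

P = np.eye(N) - Rx @ A.T @ np.linalg.inv(A @ Rx @ A.T) @ A
e_closed = np.trace(P.T @ TH.T @ TH @ P @ Rx)   # Eq. (5.35)

# Monte Carlo estimate of e = E{ ||TH x - TH x_hat||^2 }.
C = np.linalg.cholesky(Rx)
e_mc = 0.0
for _ in range(trials):
    x = C @ rng.standard_normal(N)
    x_hat = Rx @ A.T @ np.linalg.solve(A @ Rx @ A.T, A @ x)   # Eq. (5.31)
    err = TH @ (x - x_hat)
    e_mc += err @ err
e_mc /= trials

assert abs(e_mc - e_closed) / e_closed < 0.1
```

Only R_x enters the closed form, so candidate sampling matrices A can be ranked offline, before any particular scene is imaged.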
5.4.5 Sampling Procedure
Two different sampling procedures are detailed in this section. In the first case, we
start with a fully-sampled image x with information about all three color channels. The
goal is to eliminate samples such that we are left with only one color channel at each pixel
location. As described in Section 5.4.3, we begin by eliminating the samples one at a time.
The error criterion is evaluated after each elimination, and the sample that leads to the least
increase in the error criterion is eliminated. Initially, the matrix A is of size 3mn × 3mn, and
each row of A corresponds to a sample of the original image. Eliminating a sample from
the original image is equivalent to eliminating a row from A. The error criterion defined
in Eq. (5.35) may be used to obtain the row that, when eliminated, would cause the least
error. Since the optimization requires immense computational resources, we once again use
the SBS technique (Section 5.3.3) to eliminate samples one at a time.
In the second case, we once again start with the fully-sampled image but instead of
eliminating a single sample, we eliminate a sub-array of samples from the original image.
Figure 5.7(a) represents one channel of the image. The light dots represent pixel locations
and the heavy dots represent a sub-array of samples. At each iteration, a shifted version of
this sub-array is eliminated. This leads to a periodic replication of a non-periodic sampling
pattern (Fig. 5.7(b)). The arrangement depicted in Figs. 5.7(a) and 5.7(b) leads to a
4 × 4 block periodic pattern. Such a block sampling pattern offers advantages in terms of
computational simplicity and ease in the design of demosaicking algorithms.
In both cases, computation of the error criterion may be simplified using the Sherman-
Morrison matrix inversion formula [82]. Instead of computing the inverse terms at each
iteration we can find only an update term after each elimination. Also, the block circulant
structure of H may be exploited for performing matrix multiplication via DFTs. In spite
of these simplifications, the algorithm places a great demand on computational and storage
resources.
Figure 5.7: (a) Sampling sub-array; (b) periodic pattern example
5.4.6 Experiments
We considered a 12 × 12 array. A variety of images that span a wide range of categories
including natural scenes, landscapes, portraits, and a few color test images were obtained
from the USC-SIPI [77] image database. The RGB channel values were converted to the
Y_yC_xC_z color space. Mean power spectra S_{m_i} for the individual channels and the mean
cross-spectra S_{m_ij} were found from the power spectra of the available images. Using the
Wiener-Khinchine relation for the power spectral density of a random process, S_x(jω) =
F{R_x}, we obtained the elements of an R_x representative of a general scene imaged by
a digital camera from the mean spectra S_{m_i} and S_{m_ij} as R_i = F^{-1}{S_{m_i}} and
R_ij = F^{-1}{S_{m_ij}}.
The sample selection procedures detailed in Section 5.4.5 were applied for a fully-
sampled 12 × 12 RGB array. Figure 5.8 shows the array obtained using the first method,
where the samples are eliminated one at a time. Figures 5.9(a), 5.9(b), and 5.9(c) show
the array patterns obtained using the second method with 6 × 6, 4 × 4, and 3 × 3 blocks
respectively. Figure 5.9(d) shows the array obtained with a 2 × 2 repeating block. This
array is identical to the Bayer array. The error criterion values obtained for these cases are
shown in Table 5.2.
Figure 5.8: Array obtained by eliminating samples one at a time (legend: red, green, blue).
Table 5.2: Comparison of error criterion values for a 12 × 12 array

Block size | e
12 × 12    | 610.0892
6 × 6      | 656.1477
4 × 4      | 673.0023
3 × 3      | 684.8360
2 × 2      | 692.3486
Figure 5.9: Block based array patterns: (a) array with 6 × 6 blocks; (b) array with 4 × 4 blocks; (c) array with 3 × 3 blocks; (d) array with 2 × 2 blocks.
5.5 Conclusions and discussion
In Sections 5.3 and 5.4 we proposed two design methodologies for selection of color
samples in CFAs. Both methods minimize error criteria obtained after reconstructing sub-
sampled images. The first method uses regularization for restoration and defines an error
criterion in the RGB space, while the second method uses Wiener filtering for restoration and
defines an error criterion in the perceptually uniform Y_yC_xC_z space. The SBS algorithm is
used to sequentially eliminate samples until we arrive at an optimal sampling arrangement.
The results of experiments are listed in Tables 5.1 and 5.2. Both algorithms give error
criterion values that are smaller than those obtained for the Bayer array. For the second
algorithm, the error is least when samples are eliminated one at a time rather than in blocks.
The error increases progressively as the block size is reduced and is maximum for the 2 × 2
case (which is identical to the Bayer array); the error criterion value is smaller than that of
the Bayer array for all other block sizes.
The second algorithm is more interesting since:
1. It defines the error criterion in a perceptually uniform space where the magnitude of
the error corresponds to the error perceived by a human observer.
2. It provides the ability to select block-based sampling patterns. This is useful for
a number of reasons. Primarily, since it results in symmetric array patterns, it is
simpler to design adaptive demosaicking algorithms for the resulting arrays. Also,
block-based patterns lend themselves to simplification in computation, as the criterion
in this case may be reduced to a structured form (circulant or Toeplitz). Finally,
it is important that a particular color sample be surrounded by an identical set of
color samples everywhere in the array. This is due to the phenomenon of spectral
bleeding that occurs in closely spaced photosensitive elements in the sensor-array.
A particular element in the array that is covered by a color filter will also generate
some current due to the spill-over from neighboring elements. This contaminates the
expected spectral response of the element in question. A consistent arrangement
allows the image processor to account for the spectral bleeding.
5.6 Future work
The algorithms proposed in this work are extremely computationally intesive. We have
shown that the resulting sampling arrangements perform better than the most commonly
used array pattern (the Bayer array), but to validate the efficacy of the resulting sampling
patterns, we need to design larger arrays. At this time, due to memory contraints, we can
only design array patterns for images of size upto 12?12. The second method has a block
structure and we are exploring ways to simplify computations to enable the design of larger
arrays.
Conventionally, images are stored and displayed such that individual pixels are rect-
angular in shape. In this work we have considered rectangular sensor elements in CFAs.
It has been shown that hexagonal arrangements have many advantages [83], [84], [85]. In
particular, a hexagonal sampling grid allows two-dimensional sub-sampling at sub-Nyquist
frequencies. Also, in hexagonal arrays, the distance between a particular element and its
immediate neighbors is the same, and this property can be used effectively in demosaicking
algorithms. The selection of sampling patterns for hexagonally sampled arrays is an
interesting problem to be considered in the future.
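The equidistant-neighbor property is easy to verify numerically. The sketch below uses a standard axial coordinate convention for hexagonal grids; it is an illustration, not part of the proposed design methods.

```python
import math

# Axial offsets of the six immediate neighbors on a hexagonal lattice.
HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def axial_to_cartesian(q, r, spacing=1.0):
    """Map axial hex coordinates (q, r) to the plane."""
    x = spacing * (q + r / 2.0)
    y = spacing * (math.sqrt(3) / 2.0) * r
    return x, y

def neighbor_distances(q, r):
    """Euclidean distances from element (q, r) to its six neighbors."""
    x0, y0 = axial_to_cartesian(q, r)
    return [math.hypot(x - x0, y - y0)
            for x, y in (axial_to_cartesian(q + dq, r + dr)
                         for dq, dr in HEX_NEIGHBORS)]

dists = neighbor_distances(0, 0)
# All six immediate neighbors lie at the same distance (1.0 here),
# unlike a rectangular grid, where diagonal neighbors are sqrt(2)
# times farther than horizontal or vertical ones.
```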
Chapter 6
Summary
6.1 Summary of results
The acquisition of multispectral images in the mosaicked form presents many advan-
tages in terms of cost, simplicity of design, and the elimination of the registration step
required in multi-sensor cameras. At the same time, mosaicked imaging presents many new
challenges. The mosaicked image must be reconstructed to form full-color images, and a
suitable algorithm must be designed for the purpose. The sampling arrangement and the
sampling rate for the color samples must be chosen, and spectral sensitivity functions must
be chosen for the colors used in the mosaic. In this work we have developed methods that
address each of the above issues.
In Chapter 3 we proposed a general framework for the recovery of color images from
sparse data [55]. An algorithm based on the Bayesian paradigm that may be used for
simultaneous deblurring, denoising, and demosaicking of CFA data [86] was developed.
The proposed algorithm relies on a hierarchical Bayesian formulation for the image model
that accounts for the high correlation among color channels of a typical image. The ICM
algorithm was then used to locally arrive at optimal pixel values given their neighboring ele-
ments. The proposed algorithm does not assume any particular CFA sampling arrangement
and can be used for demosaicking of arbitrary CFA arrangements.
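The ICM update step can be illustrated with a toy single-channel sketch. This is illustrative only; the algorithm described above uses a hierarchical cross-channel color prior rather than the simple quadratic smoothness prior assumed here.

```python
import numpy as np

def icm_denoise(observed, n_iters=5, beta=1.0, levels=None):
    """Toy ICM (Iterated Conditional Modes): each pixel is set to the
    candidate value minimizing a local energy (data fidelity plus
    smoothness with respect to its 4-neighbors), sweeping the image
    repeatedly. Illustrates only the local-update mechanism."""
    if levels is None:
        levels = np.unique(observed)
    x = observed.astype(float).copy()
    rows, cols = x.shape
    for _ in range(n_iters):
        for i in range(rows):
            for j in range(cols):
                nbrs = [x[i2, j2]
                        for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= i2 < rows and 0 <= j2 < cols]
                # Energy: squared data term + quadratic smoothness term.
                energies = [(v - observed[i, j])**2
                            + beta * sum((v - n)**2 for n in nbrs)
                            for v in levels]
                x[i, j] = levels[int(np.argmin(energies))]
    return x

noisy = np.full((8, 8), 10.0)
noisy[3, 3] = 0.0  # a single corrupted pixel
restored = icm_denoise(noisy, levels=np.array([0.0, 10.0]))
# The outlier is pulled to the value its neighborhood supports.
```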
A novel joint spatial-chromatic sampling framework for the optimization of CFA based
imaging parameters was proposed in Chapter 4 [68]. We addressed the problem of optimiza-
tion of spectral sensitivity functions for the color filters in the sensor-array. An objective
criterion that incorporates the effects of both spatial and spectral sampling in one
unified framework was introduced. Experimental results indicate that the optimized
transmittance functions found by minimizing the objective criterion greatly outperform standard
RGB and CMY color filters. Optimized color filter transmittances lead not only to reduced
chromatic errors, but they also lead to fewer spatial artifacts in the reconstructed images
[87]. Optimized transmittances were found for various common CFA arrangements and
shown to outperform standard color filters in each case [88].
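One common way to score a candidate set of sampling positions and transmittances is a Wiener-style linear-MMSE criterion. The sketch below is a generic formulation under that assumption, not necessarily the exact objective criterion of Chapter 4; the operator `S` is assumed to fold both the spatial sampling and the filter transmittances into one matrix.

```python
import numpy as np

def mmse_criterion(S, Rxx, noise_var=1e-6):
    """Linear-MMSE reconstruction error for a sampling operator S.

    Smaller values mean the samples captured by S retain more of the
    signal whose second-order statistics are described by Rxx.
    Generic Wiener formulation; illustrative only.
    """
    Ryy = S @ Rxx @ S.T + noise_var * np.eye(S.shape[0])  # observation cov.
    Rxy = Rxx @ S.T                                        # cross-covariance
    err_cov = Rxx - Rxy @ np.linalg.solve(Ryy, Rxy.T)      # MMSE error cov.
    return float(np.trace(err_cov))

S_full = np.eye(4)      # observe all four samples
S_half = np.eye(4)[:2]  # observe only two of the four
# With Rxx = I (uncorrelated signal), discarding samples must raise
# the criterion, since nothing can be inferred about unseen samples.
```

Minimizing such a criterion over candidate arrangements or transmittances is the optimization pattern the chapter describes.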
Two design methods for the selection of CFA sampling patterns were proposed in
Chapter 5 [51, 52]. Both methods incorporate the effects of the human visual system in
determining reconstruction quality of CFA sampled images. The quality of reconstructed
images is used to derive objective criteria which may be minimized with respect to CFA sam-
pling arrangements to derive optimal arrangements. The second method provides the ability
to select block-based sampling patterns, which simplifies the design of demosaicking
algorithms and of color filters with consistent effective transmittances across sensor-arrays.
6.2 Future work
There are several unresolved issues in the problem of multispectral imaging using focal-
plane arrays. In light of the methods proposed in this work, future work is called for in the
following areas:
1. In Chapters 4 and 5, objective criteria are derived to describe the distance between
original images and images reconstructed from sub-sampled CFA data. The efficacy
of the criterion hinges on the ability of the multi-dimensional autocorrelation matrix
Rxx to describe faithfully the properties of a natural scene. In this work we based our
correlation model on the key assumption that both spatial and spectral correlations
decay with distance in space and wavelength respectively. Spatial correlation does
indeed fall with distance in the general scene, but the nature of the relation between
elements of the autocorrelation function along the wavelength dimension is not easily
modeled. Research in this area will help refine the results obtained in this work.
2. Recently, researchers have started to explore the problem of CFA-based imaging with
a larger number of color bands (> 4) [89, 90, 91, 92]. There is great potential for realizing
the benefits of multispectral imaging with CFAs because of the steady increase
in sensor-array sizes. In Chapter 5 we have demonstrated that full-color images of
reasonable quality may be reconstructed from CFAs with sparse spatial sampling of a
particular color. For instance, blue is sampled at a much lower rate than green in the
5×5 optimal block-based array, without a great loss in the quality of reconstructed
images. This suggests that the sparse sampling of particular colors due to an increase
in the number of color bands is a reasonable trade-off and should be investigated in
the context of the joint spatial-chromatic sampling framework developed in this work.
3. The effect of noise has not been considered in the development of the spatial-chromatic
reconstruction method proposed in this work. Effective noise models will greatly
increase the usefulness of methods proposed here.
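The decaying-correlation assumption discussed in item 1 above can be sketched as a separable model: correlation falls off exponentially with spatial distance and with wavelength separation, and the joint model is their Kronecker product. The decay parameters below are illustrative, not values fitted in this work.

```python
import numpy as np

def correlation_model(positions, wavelengths, rho_s=0.9, rho_l=0.7):
    """Separable autocorrelation model for a multispectral signal.

    rho_s: spatial decay per unit distance; rho_l: spectral decay per
    unit wavelength separation. Both are assumed, illustrative values.
    """
    # Spatial factor: rho_s ** |distance| between sample positions.
    d_spatial = np.abs(positions[:, None] - positions[None, :])
    R_s = rho_s ** d_spatial
    # Spectral factor: rho_l ** |wavelength separation| (index units).
    d_spectral = np.abs(wavelengths[:, None] - wavelengths[None, :])
    R_l = rho_l ** d_spectral
    # Joint second-order model: Kronecker product of the two factors.
    return np.kron(R_l, R_s)

positions = np.arange(4, dtype=float)
wavelengths = np.arange(3, dtype=float)
Rxx = correlation_model(positions, wavelengths)
# Rxx is (3*4) x (3*4), symmetric, with unit diagonal.
```

Refining the spectral factor, whose true behavior along the wavelength dimension is the hard part noted in item 1, is exactly where better scene models would feed back into the criteria of Chapters 4 and 5.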
Bibliography
[1] S. Abdallah, B. Saleh, and A. Aboulsoud, "A general overview of solid state imaging
sensors types," in Photonics and Its Application at Egyptian Engineering Faculties and
Institutes, Third Workshop on, 2002, pp. 1–10.
[2] J. Adams, K. Parulski, and K. Spaulding, "Color processing in digital cameras," IEEE
Micro, vol. 18, no. 4, pp. 20–30, Nov/Dec 1998.
[3] B. Gunturk, Y. Altunbasak, and R. Mersereau, "Color plane interpolation using
alternating projections," Image Processing, IEEE Transactions on, vol. 11, no. 9, pp.
997–1013, Sept. 2002.
[4] J. Driesen and P. Scheunders, "Wavelet-based color filter array demosaicking," in
International Conference on Image Processing, Proceedings of, 2004, pp. V: 3311–3314.
[5] X. Wu and N. Zhang, "Primary-consistent soft-decision color demosaicking for digital
cameras (patent pending)," Image Processing, IEEE Transactions on, vol. 13, no. 9,
pp. 1263–1274, Sept. 2004.
[6] J. Go, K. Sohn, and C. Lee, "Interpolation using neural networks for digital still
cameras," Consumer Electronics, IEEE Transactions on, vol. 46, no. 3, pp. 610–616,
Aug 2000.
[7] H. Trussell, "A MMSE estimate for demosaicking," Proceedings of the International
Conference on Image Processing, vol. 3, pp. 358–361, 2001.
[8] H. Trussell and R. Hartwig, "Mathematics for demosaicking," IEEE Trans. Image
Processing, vol. 11, no. 4, pp. 485–492, April 2002.
[9] D. Taubman, "Generalized Wiener reconstruction of images from colour sensor data
using a scale invariant prior," International Conference on Image Processing, Proceedings
of, vol. 3, pp. 801–804, 2000.
[10] P. Vora, J. Farrell, J. Tietz, and D. Brainard, "Image capture: simulation of sensor
responses from hyperspectral images," IEEE Trans. Image Processing, vol. 10, no. 2,
pp. 307–316, Feb 2001.
[11] P. Vora and H. Trussell, "Mathematical methods for the analysis of color scanning
filters," IEEE Trans. Image Processing, vol. 6, no. 2, pp. 321–327, Feb 1997.
[12] ——, "Mathematical methods for the design of color scanning filters," IEEE Trans.
Image Processing, vol. 6, no. 2, pp. 312–320, Feb 1997.
[13] M. J. Vrhel and H. J. Trussell, "Optimal color filters in the presence of noise," IEEE
Trans. Image Processing, vol. 4, no. 6, pp. 814–823, June 1995.
[14] B. Bayer, "Color imaging array," U.S. Patent 3971065, July 1976.
[15] T. Yamagami, T. Sasaki, and A. Suga, "Image signal processing apparatus having a
color filter with offset luminance filter elements," U.S. Patent 5323233, June 1994.
[16] J. Hamilton, J. Adams, and D. Orlicki, "Particular pattern of pixels for a color filter
array which is used to derive luminance and chrominance values," U.S. Patent
6330029 B1, Dec. 2001.
[17] W. Zhu, K. Parker, and M. A. Kriss, "Color filter arrays based on mutually exclusive
blue noise patterns," Journal of Visual Communication and Image Representation,
vol. 10, pp. 245–247, 1999.
[18] E. Gindele and A. Gallagher, "Sparsely sampled image sensing device with color and
luminance photosites," U.S. Patent 6476865 B1, Nov. 2002.
[19] G. Sharma and H. J. Trussell, "Digital color imaging," IEEE Transactions on Image
Processing, vol. 6, no. 7, pp. 901–932, 1997.
[20] T. Young, "An account of some cases of the production of colors, not hitherto
described," Philos. Trans. R. Soc. London, p. 387, 1802.
[21] ——, "On the theory of light and colors," Philos. Trans. R. Soc. London, vol. 92, pp.
20–71, 1802.
[22] G. Palmer, Theory of Light. Paris, France: Hardouin and Gattey, 1786.
[23] H. Grassmann, "Zur Theorie der Farbenmischung," Poggendorff's Annalen Phys.
Chemie, vol. 89, pp. 69–84, 1853.
[24] J. Maxwell, "Theory of the perception of colors," Trans. R. Scottish Soc. Arts, vol. 4,
pp. 394–400, 1856.
[25] G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative
Data and Formulae, 2nd Edition. New York: Wiley, 1982.
[26] J. Maxwell, "The diagram of colors," Trans. R. Soc. Edinburgh, vol. 21, pp. 275–298,
1857.
[27] ——, "Theory of compound colors and the relations to the colors of the spectrum,"
Proc. R. Soc. Lond., vol. 10, pp. 404–409, 1860.
[28] H. L. F. von Helmholtz, "Theory of compound colors and the relations to the colors of
the spectrum," Phys. Opt., 1866.
[29] H. J. A. Dartnall, J. K. Bowmaker, and J. D. Mollon, Microspectrophotometry of human
photoreceptors. Academic, 1983.
[30] A. Stockman, D. I. A. MacLeod, and N. E. Johnson, "Spectral sensitivities of the
human cones," Journal of the Optical Society of America, vol. 10, pp. 2491–2521, 1993.
[31] R. S. Gentile, Device independent color in PostScript, ser. Proc. SPIE: Human Vision,
Visual Processing, and Digital Display IV. SPIE, 1993, vol. 1913, pp. 419–432.
[32] G. Wyszecki and G. H. Fielder, "Color difference matches," Journal of the Optical
Society of America, vol. 62, pp. 1501–1513, 1971.
[33] A. D. North and M. D. Fairchild, "Measuring color-matching functions. Part I," Color
Res. Appl., vol. 18, no. 3, pp. 155–162, June 1993.
[34] Y. Nayatani, K. Takahama, H. Sobagaki, and K. Hashimoto, "Color appearance model
and chromatic-adaptation transform," Color Res. Appl., vol. 15, pp. 210–221, Feb 1990.
[35] S. L. Guth, "Model for color vision and light adaptation," J. Opt. Soc. Amer. A, vol. 8,
pp. 976–993, Jun 1991.
[36] Y. Nayatani, K. Hashimoto, H. Sobagaki, and K. Takahama, "Comparison of color-appearance
models," Color Res. Appl., vol. 15, pp. 272–284, Oct 1990.
[37] X. Zhang and B. Wandell, "A spatial extension of CIELab for digital color image
reproduction," in Proc. Soc. Inform. Display 96 Digest, 1996, pp. 731–734.
[38] X. Zhang, D. Silverstein, J. Farrell, and B. Wandell, "Color image quality metric
S-CIELAB and its application on halftone texture visibility," in COMPCON97 Digest of
Papers, IEEE, 1997, pp. 44–48.
[39] D. R. Cok, "Reconstruction of CCD images using template matching," Proceedings of
IS&T's Annual Conference/ICPS, pp. 380–385, 1994.
[40] R. Kimmel, "Demosaicing: image reconstruction from color CCD samples," Image
Processing, IEEE Transactions on, vol. 8, no. 9, pp. 1221–1228, Sept. 1999.
[41] J. Glotzbach, R. Schafer, and K. Illgner, "A method of color filter array interpolation
with alias cancellation properties," International Conference on Image Processing,
Proceedings of, vol. 1, pp. 141–144, 2001.
[42] D. Su and P. Willis, "Demosaicing of color images using pixel level data-dependent
triangulation," in Theory and Practice of Computer Graphics, 2003. Proceedings, 3–5
June 2003, pp. 16–23.
[43] D. Alleysson, S. Süsstrunk, and J. Herault, "Linear demosaicing inspired by the human
visual system," Image Processing, IEEE Transactions on, vol. 14, no. 4, pp. 439–449,
Apr 2005.
[44] E. Dubois, "Frequency-domain methods for demosaicking of Bayer-sampled color
images," Signal Processing Letters, vol. 12, no. 12, pp. 847–850, December 2005.
[45] L. Chang and Y.-P. Tan, "Effective use of spatial and spectral correlations for color
filter array demosaicking," Consumer Electronics, IEEE Transactions on, vol. 50, no. 1,
pp. 355–365, Feb 2004.
[46] K. Hirakawa and T. Parks, "Joint demosaicing and denoising," Image Processing, IEEE
Transactions on, vol. 15, no. 8, pp. 2146–2157, Aug. 2006.
[47] B. Gunturk, Y. Glotzbach, Y. Altunbasak, R. Schafer, and R. Mersereau, "Demosaicking:
Color filter array interpolation in single chip digital cameras," IEEE Signal
Processing Magazine, vol. 22, no. 1, pp. 44–54, Jan. 2005.
[48] S. Yamanaka, "Solid state camera," U.S. Patent 4054906, 1977.
[49] R. Lukac and K. Plataniotis, "Color filter arrays: Design and performance analysis,"
IEEE Transactions on Consumer Electronics, vol. 51, no. 4, pp. 1260–1267, Nov. 2005.
[50] C. M. Hains, "Personal communication," Xerox Corp., July 2006.
[51] M. Parmar and S. J. Reeves, "Color filter array design based on a human visual model,"
Proceedings of IS&T/SPIE's International Conference on Electronic Imaging, Computational
Imaging II, vol. 5299, pp. 73–82, 2004.
[52] ——, "A perceptually based design methodology for color filter arrays," Acoustics,
Speech, and Signal Processing, 2004. Proceedings, IEEE International Conference on,
vol. 53, pp. 473–476, 2004.
[53] S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions and the Bayesian
restoration of images," IEEE Trans. on Pattern Analysis and Machine Intelligence,
vol. 6, pp. 721–741, 1984.
[54] R. Molina, J. Mateos, A. Katsaggelos, and M. Vega, "Bayesian multichannel image
restoration using compound Gauss-Markov random fields," Image Processing, IEEE
Transactions on, vol. 12, no. 12, pp. 1642–1654, Dec. 2003.
[55] M. Parmar, S. J. Reeves, and T. S. Denney, Jr., "Bayesian edge-preserving color image
reconstruction from color filter array data," in Computational Imaging III, C. A.
Bouman and E. L. Miller, Eds., vol. 5674, no. 1. SPIE, 2005, pp. 259–268.
[56] G. Sharma, Ed., Digital Color Imaging Handbook. Boca Raton, FL, USA: CRC Press,
Inc., 2002, ch. one: Color fundamentals for digital imaging.
[57] Eastman Kodak Company, "PhotoCD PCD0992," (http://r0k.us/graphics/kodak/).
[58] D. Griffeath, Introduction to Markov Random Fields. Springer, 1976, chapter 12 of
Denumerable Markov Chains by Kemeny, Knapp, and Snell (2nd edition).
[59] T. S. Denney, Jr. and S. J. Reeves, "Bayesian image reconstruction from Fourier-domain
samples using prior edge information," Journal of Electronic Imaging, vol. 14,
no. 4, p. 043009, 2005.
[60] J. Besag, "On the statistical analysis of dirty pictures," Journal of the Royal Statistical
Society, series B, vol. 48, pp. 259–302, 1986.
[61] R. Hunt, "Why is black-and-white so important in color," in Color Imaging Conference,
1995, pp. 54–57.
[62] M. J. Vrhel and H. J. Trussell, "Color filter selection for color correction in the
presence of noise," in IEEE International Conference on Acoustics, Speech, and Signal
Processing, 1993, pp. 313–316.
[63] ——, "Filter consideration in color correction," IEEE Trans. Image Processing, vol. 3,
no. 2, pp. 147–161, 1994.
[64] G. Sharma, H. Trussell, and M. Vrhel, "Optimal nonnegative color scanning filters,"
IEEE Transactions on Image Processing, vol. 7, no. 1, pp. 129–133, Jan 1998.
[65] N. Shimano, "Optimization of spectral sensitivities with Gaussian distribution functions
for a color image acquisition device in the presence of noise," Optical Engineering,
vol. 45, no. 1, p. 013201, 2006.
[66] M. Wolski, C. Bouman, J. Allebach, and E. Walowit, "Optimization of sensor response
functions for colorimetry of reflective and emissive objects," IEEE Trans. Image
Processing, vol. 5, no. 3, pp. 507–517, Mar 1996.
[67] D.-Y. Ng and J. P. Allebach, "A subspace matching color filter design methodology
for a multispectral imaging system," IEEE Transactions on Image Processing, vol. 15,
no. 9, pp. 2631–2643, Sep. 2006.
[68] M. Parmar and S. J. Reeves, "Selection of optimal spectral sensitivity functions for
color filter arrays," in International Conference on Image Processing, Proceedings of,
Oct. 2006, pp. 1005–1008.
[69] D. Alleysson, S. Süsstrunk, and J. Marguier, "Influence of spectral sensitivity functions
on color demosaicing," Proceedings of the Eleventh Color Imaging Conference: Color
Science and Engineering Systems, Technologies, Applications, pp. 351–357, November
2003.
[70] S. D. Hordley, G. D. Finlayson, and P. Morovic, "A multi-spectral image database
and an application to image rendering across illumination," in Proceedings of Third
International Conference on Image and Graphics, Hong Kong, China, December 2004,
pp. 349–355.
[71] J. E. Farrell, F. Xiao, P. B. Catrysse, and B. A. Wandell, "A simulation tool for
evaluating digital camera image quality," in Proceedings of IS&T/SPIE's Electronic
Imaging 2004: Image Quality and System Performance, vol. 5294, Jan. 2004.
[72] IEC 61966-2-1, Multimedia systems and equipment – Colour measurement and
management – Part 2-1: Colour management – Default RGB colour space
– sRGB. International Electrotechnical Commission, 1999. [Online]. Available:
http://www.srgb.com
[73] M. A. Kriss, "Color filter arrays for digital electronic still cameras," Proceedings of
IS&T's 49th Annual Conference, pp. 272–278, 1996.
[74] J. Sullivan, L. Ray, and R. Miller, "Design of minimum visual modulation halftone
patterns," IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 1, pp.
33–38, 1991.
[75] S. J. Reeves and L. P. Heck, "Selection of observations in signal reconstruction," IEEE
Transactions on Signal Processing, vol. 43, no. 3, pp. 788–791, March 1995.
[76] S. J. Reeves and Z. Zhao, "Sequential algorithms for observation selection," IEEE
Transactions on Signal Processing, vol. 47, no. 1, pp. 123–132, January 1999.
[77] USC-SIPI, "Color image database," available at:
http://sipi.usc.edu/services/database/Database.html.
[78] T. J. Flohr, B. W. Kolpatzik, R. Balasubramanian, D. A. Carrara, C. A. Bouman,
and J. P. Allebach, "Model Based Color Image Quantization," Human Vision, Visual
Processing, and Digital Display IV (1993), vol. SPIE 1913, pp. 270–281, 1993.
[79] R. Näsänen, "Visibility of halftone dot textures," IEEE Transactions on Systems, Man,
and Cybernetics, vol. 14, no. 6, pp. 920–924, 1984.
[80] K. T. Mullen, "The contrast sensitivity of human color vision to red-green and blue-yellow
chromatic gratings," J. Physiol., vol. 359, pp. 381–400, 1985.
[81] B. Kolpatzik and C. Bouman, "Optimized error diffusion for image display," Journal
of Electronic Imaging, vol. 3, no. 3, pp. 277–292, July 1992.
[82] G. H. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD: The Johns
Hopkins University Press, 1996.
[83] R. M. Mersereau, "The processing of hexagonally sampled two-dimensional signals,"
IEEE Proceedings, vol. 67, pp. 930–949, June 1979.
[84] M. Golay, "Hexagonal parallel pattern transformations," IEEE Trans. Comp., vol. 18,
pp. 733–740, 1969.
[85] I. Her, "Geometric transformations on the hexagonal grid," IEEE Trans. Image
Processing, vol. 4, no. 9, pp. 1213–1222, September 1995.
[86] M. Parmar, S. J. Reeves, and T. S. Denney, Jr., "Bayesian restoration of color images
using a non-homogeneous cross-channel prior," submitted to The International
Conference on Image Processing, 2007.
[87] M. Parmar and S. J. Reeves, "Optimization of spectral sensitivity functions for color
filter arrays," in review.
[88] ——, "Optimization of color filter sensitivity functions for color filter array based image
acquisition," in Proceedings of the Fourteenth Color Imaging Conference, Nov. 2006,
pp. 96–101.
[89] R. Ramanath, W. Snyder, H. Du, H. Qi, and X. Wang, "Band selection using
independent component analysis for hyperspectral image processing," Proceedings AIPR
Workshop, pp. 93–98, 2003.
[90] R. Ramanath, W. E. Snyder, and H. Qi, "Mosaic multispectral focal plane array
cameras," in Infrared Technology and Applications XXX, B. F. Andresen and G. F.
Fulop, Eds., Proc. SPIE, vol. 5406, Aug. 2004, pp. 701–712.
[91] L. Miao and H. Qi, "The design and evaluation of a generic method for generating
mosaicked multispectral filter arrays," IEEE Trans. Image Processing, vol. 15, no. 9,
pp. 2780–2791, August 2006.
[92] L. Miao, H. Qi, R. Ramanath, and W. Snyder, "Binary tree-based generic demosaicking
algorithm for multispectral filter arrays," IEEE Trans. Image Processing, vol. 15,
no. 11, pp. 3550–3558, November 2006.