A method of finding the response function of a camera, from pictures taken by differently illuminating the same subject matter, is presented. The method solves for the response function directly, using superposition constraints imposed by different combinations of two (or more) lights illuminating the same subject matter. Previous methods of computing camera response functions typically used differing exposures of identical subject matter, leading to uniqueness problems (the solution is underconstrained due to comparametric periodicity, also known as fractal ambiguity). The method presented in this paper overcomes this problem. Finally, we compare the method of the paper to previous methods and find that the new method outperforms the previous work.

While the geometric aspects of image formation are well understood [1][2], often much less attention is given to the camera response function (how the camera responds to light). In digital cameras, the camera response function maps the actual quantity of light impinging on each element of the sensor array to the pixel values that the camera outputs.

Superposition arises when we superimpose (superpose) pictures taken from differently illuminated instances of the same subject matter, using a simple property of light (the property that light is additive).

Previous methods typically recover the response function using different exposures of the same subject matter [3][4][5][6]. The method proposed in this paper differs from other methods in that it requires neither charts nor a camera capable of adjusting its exposure. The method is very easy to use, produces very accurate results, and requires only that the camera observe the same subject matter under different combinations of illumination. It is especially useful in situations where differently illuminated, rather than differently exposed, pictures are available. For example, a camera in a typical building or dwelling may observe an at least partially static scene during which time various lights are turned on and off throughout the day.

These methods compare differently exposed pictures of the same subject matter. With enough data, a direct nonparametric solution for the camera response function can be obtained; otherwise, a semi-parametric method such as Candocia's piecewise-linear comparametric method will often provide better results [3]. A drawback of completely nonparametric methods is that comparametric periodicity (periodicity in the amplitude domain, i.e. amplitude "ripples", also known as fractal ambiguity [9] and comperiodicity [10]) plagues the result unless more than two input images are used, with exposure differences that are inharmonic (in the amplitude domain).

This paper presents a superposition-based method of recovering the response function. In this method the linear constraint of superposition disambiguates comparametric periodicity.

The method recovers the photoquantity contributed by each light from pictures taken with each light on individually. The photoquantity is neither radiance, irradiance, luminance, nor illuminance; rather, it is a unit of light unique to the spectral response of a particular camera. The results obtained through this method were more accurate than those obtained using homogeneity (e.g. comparagrams) or typically available (coarsely quantized) charts.

Just as the comparagram provides an insightful analysis of homogeneity, the superposigram provides an insightful analysis of superposition.

The camera can be modeled as cascading two non-linear functions, as shown in figure 1. In this diagram, the two functions are a dynamic range compression function and a uniform quantizer. We call the first function the Range Compression Function because most camera response functions, such as the familiar gamma mapping, are convex. In this paper we assume that this function is monotonic and convex [11]. The quantizer in turn maps the range-compressed photoquantities into discrete pixel values.

Figure 1: The camera model: photoquantities are range-compressed and then quantized to yield pixel values.
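As a concrete illustration of this two-stage model, the following sketch composes a convex gamma-type range compression with a uniform quantizer. The gamma value and the 8-bit depth are assumptions for illustration, not measurements of any particular camera:

```python
import numpy as np

def range_compress(q, gamma=2.2):
    """Range Compression Function: a convex mapping of photoquantity
    q in [0, 1] (here a hypothetical gamma-type curve)."""
    return q ** (1.0 / gamma)

def quantize(v, levels=256):
    """Uniform quantizer mapping range-compressed values in [0, 1]
    to discrete pixel values 0 .. levels - 1."""
    return np.clip(np.round(v * (levels - 1)), 0, levels - 1).astype(int)

def camera_response(q, gamma=2.2, levels=256):
    """Full camera response f: photoquantity -> pixel value."""
    return quantize(range_compress(q, gamma), levels)
```

The composed function is monotonic and convex before quantization, matching the assumptions stated above.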

We model the response function as being linear between quantization points, and assume that the probability distribution of the measured photoquantities is uniform within each quantization interval. Therefore, given pixel values p_a, p_b, and p_ab, observed with the first light on, the second light on, and both lights on together, the additivity of light lets us form the following equation:

f^(-1)(p_a) + f^(-1)(p_b) = f^(-1)(p_ab)
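Numerically, additivity of light means that the inverse response applied to the two single-light pictures should sum to the inverse response applied to the both-lights picture, i.e. f^(-1)(p_a) + f^(-1)(p_b) = f^(-1)(p_ab). A minimal check of this, assuming a hypothetical invertible gamma-type response (the gamma value is illustrative):

```python
import numpy as np

GAMMA = 2.2  # hypothetical response shape, for illustration only

def f(q):
    """Forward response: photoquantity in [0, 1] -> pixel value in [0, 255]."""
    return np.round(255.0 * q ** (1.0 / GAMMA))

def f_inv(p):
    """Inverse response: pixel value -> photoquantity."""
    return (p / 255.0) ** GAMMA

# Photoquantities from two lights individually and together (light adds linearly).
q_a, q_b = 0.2, 0.3
p_a, p_b, p_ab = f(q_a), f(q_b), f(q_a + q_b)

# The superposition residual is non-zero only because of quantization.
residual = f_inv(p_a) + f_inv(p_b) - f_inv(p_ab)
```

The residual stays at the level of the quantization error, which is what the histogram-peak machinery below is designed to average away.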

Figure 2: Pictures of the same subject matter taken under different combinations of two light sources in an otherwise dark environment. Leftmost: Picture with only the upper lights turned on. Middle: Picture with only the lower lights turned on. Rightmost: Picture with both the upper and lower lights turned on together.

Pixel values are drawn from corresponding locations in the three pictures shown in Fig. 2.

Special care must be taken at the ends of the camera's range, where clipping occurs. In the remainder of the paper, we will assume that the camera outputs pixel values in the range 0 to 255; the extension to other conditions is very simple. With camera noise, the estimate becomes biased, and this bias becomes very significant in pixel ranges near both clipping points, so these ranges are excluded from the solution.

Assuming that the normalized histogram is a reasonable approximation of the actual probability distribution of c, we can use the peak of this histogram as our estimate. For a digital camera with 256 pixel values, this yields an estimate for each pixel value for each lighting combination. Data can be combined from multiple image sets by simply adding the Superposigrams produced by each set, thereby increasing the accuracy of our estimate of f and improving the reliability of the peak location in conditions where data is sparse or noisy.
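A superposigram can be accumulated as a sparse joint histogram over aligned pixel triples, and superposigrams from multiple image sets then combine by simple addition. A minimal sketch; the function and image names are illustrative, not the paper's:

```python
from collections import Counter
import numpy as np

def superposigram(img_a, img_b, img_ab):
    """Sparse joint histogram of (p_a, p_b, p_ab) pixel triples taken from
    three aligned pictures: light A only, light B only, both lights on."""
    return Counter(zip(img_a.ravel().tolist(),
                       img_b.ravel().tolist(),
                       img_ab.ravel().tolist()))

# Tiny example with 2x2 images.
a  = np.array([[10, 10], [20, 30]], dtype=np.uint8)
b  = np.array([[40, 40], [50, 60]], dtype=np.uint8)
ab = np.array([[55, 55], [75, 95]], dtype=np.uint8)
S = superposigram(a, b, ab)
# Superposigrams from multiple image sets add: S_total = S1 + S2 (Counter sum).
```

Using a Counter keeps the 256x256x256 histogram sparse, since real image sets populate only a thin subset of the bins.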

We call the surface formed by the peaks of the superposigram a superposigraph. Since most cameras exhibit a non-linear response, we expect this surface to be curved. See figure 3 for the Superposigram surface of a Nikon D1 digital camera.
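Slenderization of the superposigram, keeping only the peak along the p_ab axis for each pixel pair, can be sketched as follows; the triple-count mapping is a hypothetical representation, not the paper's data structure:

```python
# Slenderization sketch: for each observed pixel pair (p_a, p_b), keep only
# the most frequent p_ab value (the histogram peak along the third axis).
# `hist` maps (p_a, p_b, p_ab) -> observation count.

def slenderize(hist):
    """Return a dict mapping (p_a, p_b) -> peak p_ab location."""
    best = {}
    for (pa, pb, pab), count in hist.items():
        key = (pa, pb)
        if key not in best or count > best[key][1]:
            best[key] = (pab, count)
    return {key: pab for key, (pab, count) in best.items()}

# Example: the pair (10, 20) maps to 35 seven times and to 34 twice.
hist = {(10, 20, 35): 7, (10, 20, 34): 2, (10, 21, 36): 3}
peaks = slenderize(hist)
# peaks == {(10, 20): 35, (10, 21): 36}
```

The resulting peak set is the superposigraph surface in discrete form, one point per observed pixel pair.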

To verify the algorithm, random synthetic lightspace data was generated, and a known response function was applied to produce synthetic pixel values.

Figure 4: The recovered response function at 256 discrete points (plotted as variously shaped points). Note that the recovered points fall virtually on top of the original functions (plotted as solid lines).

To stress the procedure, Gaussian noise of standard deviation 10 was added to the pixel values. The singular value decomposition method was then used to recover the function, using the described constraint matrix, with the equations generated from the slenderized superposigram. The results of this procedure on the synthetic data are shown in figure 4. As can be seen in the figure, the high quantity of noise added significantly affected the recovered response functions; nevertheless, the experiment demonstrates the stability of the algorithm. When more reasonable noise levels are used, such as a standard deviation of 3, the results are very accurate.
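The nonparametric solve can be sketched as a linear system: each triple (p_a, p_b, p_ab) contributes one row asserting q[p_a] + q[p_b] - q[p_ab] = 0, where the unknown vector q holds f^(-1) at each pixel value, and the result is defined only up to scale. A minimal sketch on synthetic data; the gamma-2.2 response, the sample count, and the normalization pixel are assumptions, and the paper's actual slenderization step is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
GAMMA, N = 2.2, 256  # hypothetical response shape; 8-bit camera

def f(q):
    """Synthetic forward response: photoquantity -> pixel value."""
    return np.clip(np.round((N - 1) * q ** (1.0 / GAMMA)), 0, N - 1).astype(int)

# Synthetic lightspace data: photoquantity pairs whose sum stays in range.
qa = rng.uniform(0.0, 0.5, 5000)
qb = rng.uniform(0.0, 0.5, 5000)
pa, pb, pab = f(qa), f(qb), f(qa + qb)

# Constraint matrix: one row per triple asserting q[pa] + q[pb] - q[pab] = 0,
# plus one row loosely anchoring the otherwise arbitrary scale of q.
A = np.zeros((len(pa) + 1, N))
rows = np.arange(len(pa))
A[rows, pa] += 1.0
A[rows, pb] += 1.0
A[rows, pab] -= 1.0
A[-1, 168] = 1.0                  # 168: a midrange, well-covered pixel value
rhs = np.zeros(len(pa) + 1)
rhs[-1] = 1.0

# Least-squares solution (np.linalg.lstsq solves via SVD), then fix the scale.
q_est, *_ = np.linalg.lstsq(A, rhs, rcond=None)
q_est /= q_est[168]               # the solution is defined only up to scale
# After normalization, q_est[p] tracks the true inverse response up to scale.
```

Pixel values that never appear in any triple remain unconstrained; lstsq returns the minimum-norm value (zero) there, so in practice only the covered range of the response is reported.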

The superposigram surface of figure 3 was generated from the images presented in figure 2. To create more data which would span the entire range of the camera response function, several exposures were taken for each lighting combination. Although it is possible to recover the response function from one set of three images, using more image sets did in fact produce a more reliable response function. From the superposigraph that was generated from the multiple image sets (we used 10 in total), the response function for the camera was determined. As a check, the recovered inverse response function may be applied to the superposigraph; the linearized lightspace surface arising from f^(-1) should then be planar, since photoquantities add linearly. To evaluate the recovered response function, two tests were devised. The following sections describe these tests, after which our results are presented.

With the recovered camera response function, we can linearize the images by converting them from imagespace, f(q), to lightspace, q.

Evaluation by homogeneity

This test may be used to evaluate any camera response function (regardless of how it was obtained). The homogeneity test requires two pictures differing in exposure by a scalar factor of k, f(q) and f(kq). We convert the first picture from imagespace, f(q), to lightspace, q, by computing f^(-1); after multiplying by k, we convert it back to imagespace by applying f. Alternatively, we could apply f^(-1) to the second picture and divide by k. The result is then compared with the other picture.
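The homogeneity test can be sketched as follows, with the candidate response standing in as a gamma-type curve; the gamma value, image sizes, and helper names are illustrative assumptions:

```python
import numpy as np

GAMMA = 2.2  # placeholder for whatever candidate response is being evaluated

def f(q):
    """Candidate forward response (continuous, clipped to the pixel range)."""
    return np.clip(255.0 * np.clip(q, 0.0, 1.0) ** (1.0 / GAMMA), 0.0, 255.0)

def f_inv(p):
    """Candidate inverse response."""
    return (p / 255.0) ** GAMMA

def homogeneity_error(img1, img2, k):
    """Per-pixel MSE after predicting img2 (k-times the exposure of img1)
    by mapping img1 to lightspace, scaling by k, and mapping back."""
    predicted = f(k * f_inv(img1))
    return np.mean((predicted - img2) ** 2)

# Synthetic check: one scene, two exposures generated with the same response.
rng = np.random.default_rng(1)
q = rng.uniform(0.0, 0.4, size=(64, 64))
img1, img2 = f(q), f(2.0 * q)
err = homogeneity_error(img1, img2, 2.0)
# err is ~0 because the candidate response matches the true one exactly
```

A candidate response that deviates from the camera's true response inflates this error, which is how the rightmost column of Table 1 below is computed.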

Evaluation by superposition

This test uses the superposition property to evaluate the response functions found by each of various methods. The errors in response functions found by various methods (including previously published work) are compared in Table 1.
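Similarly, the superposition test scores a candidate response by how well it predicts the both-lights picture from the two single-light pictures. A sketch under the same illustrative gamma-type candidate as before:

```python
import numpy as np

GAMMA = 2.2  # placeholder candidate response

def f(q):
    return np.clip(255.0 * np.clip(q, 0.0, 1.0) ** (1.0 / GAMMA), 0.0, 255.0)

def f_inv(p):
    return (p / 255.0) ** GAMMA

def superposition_error(img_a, img_b, img_ab):
    """Per-pixel MSE of the prediction f(f^-1(p_a) + f^-1(p_b)) against p_ab."""
    predicted = f(f_inv(img_a) + f_inv(img_b))
    return np.mean((predicted - img_ab) ** 2)

# Synthetic check with single-light and both-lights pictures generated
# by the matching true response.
rng = np.random.default_rng(2)
qa = rng.uniform(0.0, 0.4, size=(64, 64))
qb = rng.uniform(0.0, 0.4, size=(64, 64))
err = superposition_error(f(qa), f(qb), f(qa + qb))
# err is ~0 since the candidate matches the true response
```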

We have shown that the camera response function can be recovered using the superposition property of light.

Table 1: Errors in response functions found by the new method and by previous work [8][4]. The middle column denotes how well the resulting response function superimposes images, based on testing the candidate response function on pictures of subject matter taken under different lighting positions. The rightmost column denotes how well the resulting response function amplitude-scales images, and was determined using differently exposed pictures of the same subject matter. The entries in the rightmost two columns are mean squared error divided by the number of pixels in an image.

Just as previous work found that improved results were obtained through slenderization of the comparagram, we found in this paper that improved results were obtained through slenderization of the superposigram. The new method was tested on various synthetic sequences with synthetic noise, to provide ground truth, as well as on actual data, to show that it works on real-world images as well.

pp. 842–849, December 11–13, 2001.

no. 8, pp. 1389–1406, August 2000, ISSN 1057-7149.

European Conference on Computer Vision (ECCV), Copenhagen, May 2002.

2002.

Pattern Recognition (CVPR), Wisconsin, June