Light source estimation using feature points from specular highlights and cast shadows

A method for light source estimation is proposed in this paper. The method utilizes feature points in cast shadows to estimate near light source positions from source directions estimated using specular highlights. Several existing methods can estimate light sources from scene images using either cast shadows or specular highlights; however, most of them are limited to directional light sources. The proposed method can estimate the positions and intensities of multiple near point light sources. Specular highlights on an object of known geometry are first used for light source direction estimation. Then, a discontinuity point in the object shape and the corresponding cast shadow on a ground plane are used for light source position estimation. Feature points obtained from an image of the cast shadow, however, can be inaccurate due to various factors. Information on diffuse light reflected from the Lambertian ground-plane surface is subsequently used to improve estimation accuracy. Experimental results were used to evaluate the performance of the proposed method.


INTRODUCTION
Light source estimation is a problem of interest in the fields of computer graphics and computer vision. In augmented reality applications, where the generation of a mixed environment containing virtual objects in real scenes is needed, illumination information and surface reflectance properties of real objects are required for consistent and realistic shading of virtual objects. Another application which requires lighting information is the retrieval of shape information from shading. The main goal of light source estimation is to recover the location, direction and intensity of light source(s) given one or more images of a real scene.
Many methods have been developed to estimate the properties of single and multiple light sources. These methods use information from one or a combination of shading, cast shadows, and specular reflections. The majority of these methods are based on shading information. For example, Zhang and Yan (2001) developed a method for parallel light direction estimation using shading on the surface of a spherical object in the scene. The sphere used in this method is assumed to have a Lambertian surface. With known geometric and positional information of the sphere in the scene, it is possible to estimate the locations of pixels known as 'critical points' from a shaded image of the sphere. From the obtained critical points, light source directions can be estimated. Wang and Samaras (2002) extended the use of this method to an object of arbitrary shape. The visible points on the object are mapped to a virtual sphere by matching the normal direction at each point. Some critical boundaries may be lost during this process, so they provide a method exploiting shadow information to solve the problem. Such a hybrid method was reported to offer improved accuracy.
In the study of Bouganis and Brookes (2004), an attempt was made to increase the accuracy and reduce the limitations of the method in Zhang and Yan (2001). The critical point detection used by Bouganis and Brookes (2004) differs from that of the original method (Zhang and Yan, 2001), but is similar to that of Wei (2003).
Shadow and reflected light information has also been used to estimate light source directions and intensities. For example, Sato et al. (1999, 2001) provide a method for estimating the illumination distribution of a real scene using information on the reflected light distribution over a Lambertian planar surface. By using the occlusion information of incoming light caused by an object of known geometry and location, the method can provide sampled directional light source directions from the estimated illumination distribution.
Specular highlights on a shiny surface in the scene can also provide information for light source estimation. For example, Powell et al. (2001) estimate the positions of light sources from specular highlights on a pair of calibration spheres in the scene. This method estimates the positions and surface normals at the highlights in order to triangulate the illuminants. Zhou and Kambhamettu (2002) present a method for locating multiple light sources and estimating their intensities from a pair of stereo images of a sphere. The sphere surface has both Lambertian and specular properties. The specular image is used to find the directions of the light sources, and the Lambertian image is used to find their intensities. Another hybrid method is presented by Li et al. (2003); it integrates cues from shading, shadow and specular reflections for estimating directional illumination in a textured scene.
Even though these previous methods are successful in estimating light source directions and intensities, they assume that the light sources in the scene are far-field, so that each light source illuminates all objects at any point from the same direction. However, in a real scene (e.g., in an indoor environment), it is quite common to have one or more near point light sources, which illuminate the objects from a finite distance. In this case, the common parallel/directional light assumption of those methods is invalid.
Methods have been developed to estimate the positions and intensities of near point light sources. Li et al. (2003) proposed two methods for recovering the surface reflectance properties of an object and the light source position from a single view without the distant illumination assumption. The first method is based on an iterative separating-and-fitting relaxation algorithm. The second method estimates the specular reflection parameters and the light source position simultaneously by linearizing the Torrance-Sparrow specular reflection model and by optimizing the sample correlation coefficient. However, the methods were applied to a single light source case, and are applicable only to convex objects. Extension of the methods to the multiple light source case, if possible, can dramatically increase computational complexity. Takai et al. (2009) present an approach for estimating light sources from a single image of a scene that is illuminated by major near point light sources and some directional light sources, as well as ambient light. They use a pair of reference spheres as light probes. A major step in the method involves differencing the intensities of two image regions of the reference spheres. From an image of such a difference sphere, parameters of the point light sources are estimated by an iterative operation. The input image is then updated by eliminating the lighting effects due to the estimated point light sources, and the parameters of the directional light sources and ambient light are estimated by another iterative operation. Schniders et al. (2010) presented an empirical analysis showing that light source estimation from a single view of a single sphere is not possible in practice, together with a closed-form solution for recovering a polygonal light source from a single view of two spheres and an iterative approach for rectangular light source estimation based on two views of a single sphere.
These methods for near point light source estimation are either limited to a single light source, or applicable only under a specific and often complex shooting setup. In this paper, we present a method to estimate the positions and intensities of multiple near point light sources from a single-view image. Unlike the methods of Schniders et al. (2010) and Takai et al. (2009), which use either two reference spheres in the scene or two views of a single sphere, our method uses only a single object of known geometry with specular reflection. Feature points in the cast shadow of that object on a ground plane with diffuse reflection are exploited for effective and efficient source position estimation. The use of cast-shadow feature points also helps speed up the computation. Note that, although there exists a method that uses feature points in cast shadows for light source estimation (Cao and Shah, 2005), that method assumes a directional light source and requires two perspective-view images of a scene. Note also that, although using a pair of mirror spheres or cameras with fish-eye lenses is an efficient approach, it is inconvenient to set up, especially when applied to near light source location estimation. The proposed method has the main benefit of a simpler equipment setup.
An overview of the proposed method and the corresponding scene setup is first described. The method utilizes a specular highlight from an object of known geometry and location (a sphere in this case), and a ground plane with diffuse reflection (Figure 1 shows the scene setup). The spherical object is required to contain some discontinuity on its surface (a sphere with a cone-shaped tip or a box corner in our case). Figure 1 illustrates a case where there are two point light sources at L1 and L2, and the corresponding specular peaks S1 and S2, as seen by a camera on the right.

USING SPECULAR HIGHLIGHT FOR LIGHT SOURCE DIRECTION ESTIMATION
Specular reflection is the mirror-like reflection of light from a surface, in which light from each incoming direction is reflected into a single outgoing direction. The incoming and outgoing light rays make the same angle with respect to the surface normal (Figure 2). As in Hara et al. (2005), in this paper a specular highlight is utilized to estimate a point source direction. In doing so, a specular peak pixel in an image of the real scene is first identified. The direction of the corresponding light source, which produces a highlight at the surface point P, can be calculated as:

d = 2(n . v)n - v     (1)

where d is a unit vector of the light source direction measured at the point P, n is the unit surface normal vector at P, and v is a unit vector of the viewing direction from P toward the camera.
The light source position that causes the specular peak at P is located along the direction of d. Given the light source direction so obtained, a method for estimating the light source position from feature points is explained next.
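The source-direction computation from a specular peak can be sketched numerically as follows. This is a minimal illustration of the mirror-reflection rule (reflecting the viewing direction about the surface normal); the function and variable names are illustrative, not part of the original method description.

```python
import numpy as np

def light_direction_from_highlight(n, v):
    """Direction from surface point P toward the light source.

    n : unit surface normal at the specular peak point P.
    v : unit viewing direction from P toward the camera.
    Returns the unit vector obtained by mirroring v about n,
    i.e. d = 2(n . v) n - v.
    """
    n = n / np.linalg.norm(n)
    v = v / np.linalg.norm(v)
    d = 2.0 * np.dot(n, v) * n - v
    return d / np.linalg.norm(d)

# Example: normal straight up, camera 45 degrees off in the x-z plane;
# the source direction is the mirror image of the viewing direction.
n = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
print(light_direction_from_highlight(n, v))  # -> approx. [-0.707, 0, 0.707]
```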

USE OF FEATURE POINTS FOR SOURCE POSITION ESTIMATION
Features are parts or aspects of an image which capture its salient characteristics. Features may be edges, corners, blobs or ridges. Feature detection is an image processing algorithm for identifying the presence and location of such characteristics in an inspected image. In this paper, a corner detector is used to find corner point(s) on the shadow edge of the spherical object cast on a ground plane. With known object geometry and location, the line connecting a detected corner point on the ground plane to the corresponding point on the object surface passes through the light source position. When used in combination with the source direction estimated from a specular highlight as explained above, the source position can be obtained as shown in Figure 3.
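The triangulation of Figure 3 amounts to intersecting two rays: one from the specular peak along the estimated source direction, and one from the shadow corner through the corresponding point on the object surface. A minimal sketch, using the midpoint of the shortest segment between the two (generally skew) lines; all names and the example geometry are illustrative assumptions:

```python
import numpy as np

def closest_point_between_rays(p1, d1, p2, d2):
    """Least-squares intersection of two (possibly skew) 3-D lines.

    Line i passes through point pi with direction di.  Returns the
    midpoint of the shortest segment joining the two lines, a natural
    position estimate when the rays do not meet exactly.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Normal equations for minimising |p1 + t1 d1 - (p2 + t2 d2)|.
    a = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Example: a ray from the specular peak toward the light, and a ray from
# the shadow corner through the surface discontinuity point.
light = np.array([1.0, 2.0, 3.0])
peak = np.array([0.0, 0.0, 1.0])        # specular peak on the sphere
corner = np.array([-1.0, 0.0, 0.0])     # shadow corner on the ground
tip = corner + 0.4 * (light - corner)   # discontinuity point on that line
est = closest_point_between_rays(peak, light - peak, corner, tip - corner)
print(est)  # -> approx. [1, 2, 3]
```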
In addition, with an a priori estimate of a source direction obtained using a specular highlight, feature points can be efficiently located by searching for them only along a certain contour on the shadow-casted surface. In the present case, where the shadow is cast on a ground plane, such a contour can be estimated by first constructing the plane that contains the surface discontinuity point and the points along the light source direction path, and then intersecting it with the ground plane (Figure 4). With respect to the specular peak position, the estimated n-th light source direction is denoted by d_n. Let s_n be the position of the n-th specular peak with respect to the discontinuity point. Given a suitable scalar value t, the intersections of the lines through the feature point candidates with the line s_n + t d_n give M possible positions of the n-th light source. These intersection points are denoted by L_n^1, ..., L_n^M (Figure 5). In practice, a soft shadow may appear instead of a hard one, and feature point detection can be less accurate under this scenario. Furthermore, in a real scene, the depth values from a depth camera may be inaccurate. For these reasons, several feature point candidates must be kept, as discussed above. To identify the actual feature point among those candidates, information collected from ground-plane diffuse reflection is used, as explained subsequently.

LIGHT SOURCE POSITION ESTIMATION FROM GROUND-PLANE DIFFUSE REFLECTION
Similar to the method of Sato et al. (1999, 2001), the proposed method utilizes the shadow of a known-geometry object cast on a ground plane. Different ground-plane illumination scenarios are shown in Figure 6. As illustrated in the figure, at some ground points the light coming from both point light sources is blocked; at other points, only one of the sources is occluded; and at points farther from the object, no occlusion occurs for either light source.
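The occlusion scenarios of Figure 6 reduce to testing whether the sphere blocks the segment from a ground point to a light position. A minimal sketch under the assumption of a single unit sphere resting on the ground plane; the function name and the example geometry are illustrative:

```python
import numpy as np

def is_occluded(x, light, center, radius):
    """True if the sphere (center, radius) blocks the segment from
    ground point x to the light position."""
    d = light - x
    seg_len = np.linalg.norm(d)
    d = d / seg_len
    t = np.dot(center - x, d)        # parameter of the closest approach
    if t < 0.0 or t > seg_len:
        return False                 # closest approach lies off the segment
    nearest = x + t * d
    return bool(np.linalg.norm(center - nearest) < radius)

center = np.array([0.0, 0.0, 1.0])   # unit sphere resting on the ground
light = np.array([0.0, 0.0, 5.0])    # point source directly above it
print(is_occluded(np.array([0.0, 0.0, 0.0]), light, center, 1.0))  # True
print(is_occluded(np.array([4.0, 0.0, 0.0]), light, center, 1.0))  # False
```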
Here, the ground plane is assumed to have a Lambertian surface. The amount of ground-plane reflected light observed at the k-th pixel is obtained as:

I_k = sum_n V_nk a_n (N_k . (P_n - X_k)) / |P_n - X_k|^3     (2)

From Equation 2, P_n and X_k are the real-world positions of the n-th light source and the k-th pixel. In addition, a_n and V_nk are, respectively, the light source intensity and the occlusion factor, which is zero when the n-th light source is occluded at X_k and one otherwise, while N_k denotes the unit surface normal at the k-th pixel.

EXPERIMENTAL RESULTS

The proposed method was evaluated on scenes rendered with the Blender 3D software. There were up to three point light sources and a single camera in the scene, all of which were placed above the object and looked down toward it. From this camera position, all specular highlights and cast shadows can be seen in the rendered pictures. The object was a sphere with a small conic shape on top. To evaluate the performance in the presence of soft shadows, non-point light sources were simulated by changing Blender's 'Samples' parameter from the default value of 1 to 8 and the 'Soft Size' parameter from the default value of 0.1 to 0.3. The experiments evaluated the accuracy of the proposed method by computing the Root Mean Square Error (RMSE) of the estimated light position. The RMSE percentage was calculated by dividing the obtained RMSE by the actual light source distance from the plane center (the point where the sphere was placed on the ground). The improvement obtained by using the reflected light fitting step (Step 7) was also investigated.
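As a rough illustration of the reflected light fitting step, candidate source positions can be scored by fitting non-negative intensities to observed ground-plane pixel values under a Lambertian point-source model and keeping the combination with the smallest residual. The sketch below assumes SciPy is available; all function and variable names are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

def fit_intensities(candidates, pixels, normals, observed, shadowed):
    """Fit non-negative source intensities for one candidate combination.

    candidates : (N, 3) candidate light source positions
    pixels     : (K, 3) ground-plane sample positions
    normals    : (K, 3) unit surface normals at those samples
    observed   : (K,)   observed pixel intensities
    shadowed   : (K, N) 1 where source n is occluded at pixel k, else 0
    Returns (intensities, residual); among competing candidate
    combinations, the one with the smallest residual would be kept.
    """
    K, N = observed.size, len(candidates)
    A = np.zeros((K, N))
    for j, p in enumerate(candidates):
        r = p - pixels                              # pixel-to-source vectors
        dist = np.linalg.norm(r, axis=1)
        cos = np.einsum('ij,ij->i', normals, r) / dist
        # Lambertian point-source falloff: cos(theta) / distance^2,
        # zeroed where the source is occluded.
        A[:, j] = np.clip(cos, 0.0, None) / dist**2 * (1.0 - shadowed[:, j])
    intensities, residual = nnls(A, observed)
    return intensities, residual

# Synthetic check: a single source of intensity 10 above an unshadowed plane.
pixels = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [2, 2, 0]])
normals = np.tile([0.0, 0, 1], (4, 1))
source = np.array([[0.0, 0, 4]])
r = source[0] - pixels
dist = np.linalg.norm(r, axis=1)
observed = 10.0 * (r[:, 2] / dist) / dist**2
x, res = fit_intensities(source, pixels, normals, observed, np.zeros((4, 1)))
print(x)  # -> approx. [10.]
```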

One light source case
There was only one light source in this experiment.
Based on results with twenty different light source positions, the RMSE of the distance between the real and estimated light source positions was calculated and used as the accuracy measure. The results using the methods with and without the reflected light fitting step are shown in Table 1. Figure 7a shows one of the original images, while Figure 7b shows the reconstructed image obtained using the estimated light source position.

Two light sources case
There were two light sources in this experiment. Based on results with ten different two-light-source positional settings, the RMSE was calculated as in the previous experiment. The results using the methods with and without the reflected light fitting step are shown in Table 2. Figure 8a shows one of the original images, while Figure 8b shows the reconstructed image obtained using the estimated light source positions.

Three light sources case
There were three light sources in this experiment. Based on results with ten settings of three different light source positions, the RMSE of the distance between the real and estimated light source positions was calculated and used as the accuracy measure. The results using the methods with and without the reflected light fitting step are shown in Table 3. Figure 9a shows one of the original images, while Figure 9b shows the reconstructed image obtained using the estimated light source positions.
From the results of the experiments, it can be seen that the reflected light fitting step improves the estimation accuracy.

Conclusions
In this paper, a method for estimating multiple near point light sources has been described. Our method uses information from specular highlights, feature points in cast shadows, and the diffuse reflection component on a Lambertian ground plane. A specular highlight is used to estimate a light source direction. Feature points, along with an estimated light source direction, give an estimate of a light source position. For cases where feature points cannot be accurately detected, each candidate solution is fitted to the ground-plane reflected light in a non-negative least squares sense, and the candidate with the minimum least squares error is chosen as the estimate of the light source positions. Experimental results demonstrating the method's effectiveness have been reported.

Figure 1. The system setup containing a spherical object of known geometric parameters, and a ground plane with Lambertian-type reflection.

Figure 2. Relationship of the three vectors involved in specular reflection.

Figure 3. Corner points on the object surface and the corresponding points in the object's shadow, used for light source position estimation.

Figure 4. The contour formed by intersecting the ground plane with the plane that contains a surface discontinuity point and the points along a light source direction path.

Figure 5. Possible positions of light sources estimated from feature point candidates.

Figure 6. Light sources are occluded at some points on the ground plane.

Figure 7. Sample images in the single light source case: (a) original image (b) reconstructed image.

Figure 8. Sample images in the two light sources case: (a) original image (b) reconstructed image.

Figure 9. Sample images in the three light sources case: (a) original image (b) reconstructed image.

Table 1. The estimation RMSEs for the single light source case.

Table 2. The estimation RMSEs for the two light sources case.

Table 3. The estimation RMSEs for the three light sources case.