Image Based Lighting Part 1: Acquisition of Images


This page is designed as a supplement to lectures. It is not intended to be a stand-alone tutorial.


Image Based Lighting can, at times, be a very efficient and accurate way of lighting and rendering a scene. It is often used to match a CG element to the cinematography of a live-action scene, which requires that the CG scene be rendered with exactly the same lighting as the live scene. Success depends on two steps: 1) acquiring the correct images to use and 2) correctly setting up the IBL node in Mental Ray to give the needed effect. This web supplement is in two sections: IBL acquisition and IBL rendering.

This is Part One and describes what type of images are needed for IBL and how to create them yourself. Part Two, Image Based Lighting: Mental Ray IBL Rendering, describes how to use the IBL node in Maya Mental Ray.

Image Based Lighting (IBL) requires an image that wraps entirely around the scene, including the zenith (top, or north pole) and nadir (bottom, or south pole). Mental Ray, like most rendering programs, can use two types of image projection in which a single image is mapped onto a sphere that surrounds the scene.

One is the equirectangular projection. This is a traditional type of projection and is easy to understand and edit. These images have to be twice as wide as they are tall: for example, 4000 pixels wide by 2000 pixels high.

The second is a projection that looks as if a mirrored ball had been placed in the scene and photographed. Often that is actually how the photo is taken: with a mirrored ball. The image is very distorted and difficult to edit, but it too shows the entire 360x180 degree world. This type of projection is called "Mirrored Ball" or "Angular."

Using Mental Ray, you would select "Angular" in the IBL node.
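To make the two mappings concrete, here is a minimal Python sketch (written for this page, not part of the course files) of how a pixel in each projection corresponds to a direction on the surrounding sphere. The axis convention and the "angular map" interpretation of the mirrored-ball image are assumptions on my part, not something Mental Ray dictates.

import math

def equirect_dir(u, v):
    """Direction on the unit sphere for a pixel in an equirectangular image.
    u, v are normalized coordinates in [0, 1]; u wraps horizontally (longitude),
    v runs from the zenith (top) to the nadir (bottom)."""
    lon = (u - 0.5) * 2.0 * math.pi        # -pi .. +pi around the vertical axis
    lat = (0.5 - v) * math.pi              # +pi/2 at zenith, -pi/2 at nadir
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            -math.cos(lat) * math.cos(lon))

def angular_dir(u, v):
    """Direction for a pixel in an angular ("mirrored ball" style) map.
    The distance from the image center maps linearly to the angle away from the
    forward axis, so one square image covers the full 360x180 sphere."""
    x, y = 2.0 * u - 1.0, 1.0 - 2.0 * v    # -1 .. +1, y up
    r = math.hypot(x, y)
    if r > 1.0:
        return None                        # outside the circular image area
    theta = r * math.pi                    # 0 at center, pi at the rim (directly behind)
    s = math.sin(theta) / r if r > 1e-8 else 0.0
    return (x * s, y * s, -math.cos(theta))

# e.g. the center of an equirectangular image looks straight ahead:
print(equirect_dir(0.5, 0.5))   # (0.0, 0.0, -1.0)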

In addition to using an image that wraps around the entire world, an IBL renderer, if it is to be realistic, also requires a High Dynamic Range (HDR) image.

A typical 8, 12, 14, or 16 bit image can only resolve a limited dynamic range of brightness. This image shows a cave opening shot with a DSLR camera. The four different exposures show that no single one of these typical images can resolve both the detail in the bright sky and the detail in the darkest pile of leaves at the bottom of the cave.

An HDR image, however, is 32 bit and can resolve the full dynamic range. HDR images cannot be taken directly with a normal camera. To create them, a software program blends different exposures together into one image that covers the full range.

A typical monitor cannot display a 32 bit image directly. To show the whole range, the software compresses (tone maps) the 32 bit image down to 8 bits so that all the detail is retained. This image is an 8 bit image showing all the detail of the 32 bit original. Because it has been compressed down, it does not really reproduce the original dynamic range in the cave.

An IBL renderer needs the huge differences in real-world brightness levels to accurately light the objects. An 8 bit image can be used, but it will look "flat": it cannot replicate the dramatic difference in brightness between light sources and shadow areas. Only a 32 bit HDR image can replicate this dynamic range.
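As a rough illustration of why the extra bits matter, here is a tiny Python example (the 0.18 midtone and the brightness ratios are just illustrative numbers): a pixel 1000 times brighter than a midtone keeps its true value in 32 bit float data, but once the values are squeezed into 8 bits, anything much brighter than the midtone clips to the same white.

import numpy as np

# Three pixels: a midtone, one 10x brighter, one 1000x brighter.
midtone = 0.18
hdr = np.array([midtone, 10 * midtone, 1000 * midtone], dtype=np.float32)

# Squeeze into 8 bits: scale so the midtone sits at a normal value, then clip.
ldr = np.clip(hdr * 255, 0, 255).astype(np.uint8)

print(hdr)   # [  0.18   1.8  180. ]  -> relative brightness preserved
print(ldr)   # [ 45 255 255]          -> the 10x and 1000x pixels clip to the same white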

So... creating an image that an IBL renderer can use is difficult. You not only need a panoramic image that covers the whole 360x180 environment, you also need it in a 32 bit HDR format.

Note: An 8 bit image can work, but it does not accurately represent the real-world dynamic range.

A viable solution is to download these images from websites that offer free, openly licensed HDRs. Openfootage.net is one of the best and distributes its images under a Creative Commons license. It offers both high-res versions for when you want the environment to appear as the background and low-res versions for when you only need the lighting information. This image shows an example and how a 3D CG model would render in that environment.

 

 

But maybe you are a DIY type of person, you strive for the highest quality, or, most likely, you have a unique lighting situation. What does it take to do it yourself?

If you only need the lighting info - which is 99% of the time - shooting a bracketed sequence of exposures using a mirrored ball will work fine. Here is a typical setup:

1) A mirrored ball. This can be a gazing ball, a Christmas tree ornament, a ball bearing, or anything that is reflective and spherical.

2) A DSLR with manual controls so you can shoot a bracketed sequence spanning 5-11 stops. A telephoto lens is very helpful.

 

The biggest drawback to the mirrored ball approach is that the reflection of you and the camera is in the shot. If you are only interested in the lighting info, this is probably not that big of a deal.

 

This image shows the problems with the reflection of the photographer. On close inspection you can also see why the image could not be used as a background image: it is fuzzy, blurry, warped, and messy.

Position the ball in the scene where your CG object will be. Using manual exposure, shoot a bracketed sequence that varies by 1 stop per frame. Go from an exposure where you can see detail in the brightest objects (sky, sun, light bulbs, etc.) down to an exposure where you can see detail in the darkest shadow areas.
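If it helps to plan the bracket, here is a small Python sketch of the arithmetic (the base shutter speed of 1/250 s and the nine-frame range are just example values): at a fixed aperture and ISO, each one-stop step doubles or halves the shutter time.

base_shutter = 1.0 / 250.0    # metered "middle" exposure, in seconds (example value)
stops = range(-4, 5)          # nine frames: 4 stops under to 4 stops over

for ev in stops:
    shutter = base_shutter * (2.0 ** ev)
    label = f"1/{round(1.0 / shutter)} s" if shutter < 1.0 else f"{shutter:g} s"
    print(f"{ev:+d} EV -> {label}")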

Use tripods for both the DSLR and the mirrored ball so that the images stay perfectly aligned.

 

Blending the individual exposures together requires a software program. One of the most widely used is Photomatix; many photographers use HDR techniques just to make compelling 8 bit images. Photoshop, however, can do a great job of making the 32 bit file, and that can save you time and money.

In Photoshop, go to File > Automate > Merge to HDR Pro. This window will appear. Browse to the folder with your separate images. If you used a tripod (and of course you did), you can keep the "Automatically Align" toggle off.

Photoshop will now bring in all the images and blend them together. This window appears, and it can be confusing. Photoshop assumes you want to make an 8 or 16 bit image and compress all the dynamic range. You DON'T want this. If you click "OK," you will lose all your 32 bit data.

Instead, use the "Mode" menu to change the mode to 32 bit.

Note: The window shows you all the images that are being blended, and you can toggle them on or off. Also, feel free to play around with the sliders and presets to see how much power you have with a 32 bit image as you compress it down to 8 bit. Remember, though, that you won't need any of the sliders to make or save the 32 bit image.

This window will now appear. You have no choices except to set the "White Point Preview." This does not affect the image data; it merely sets the preview so you can see what the image looks like.

Now click "OK" and your 32 bit HDR image will be saved and ready to go.
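If you prefer a scripted route instead of the Photoshop dialog, the same merge can be done with OpenCV in Python. This is a rough sketch, not the class workflow; the file names and shutter times below are placeholders for your own bracketed shots.

import cv2
import numpy as np

# Bracketed source frames and their shutter times, in seconds (placeholders).
files = ["ball_minus4ev.jpg", "ball_minus2ev.jpg", "ball_0ev.jpg",
         "ball_plus2ev.jpg", "ball_plus4ev.jpg"]
times = np.array([1/4000, 1/1000, 1/250, 1/60, 1/15], dtype=np.float32)

images = [cv2.imread(f) for f in files]          # 8-bit frames from the camera

# Recover the camera response curve, then merge to a linear 32 bit image.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

cv2.imwrite("mirror_ball.hdr", hdr)              # Radiance .hdr, ready for the IBL node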

Remember that when you bring it into the IBL node in Mental Ray, you need to select "Angular" mapping.
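For reference, the same settings can also be made from a Python script inside Maya. This is a minimal sketch, assuming the mental ray plug-in is loaded and the IBL node has already been created from Render Settings; the node name "mentalrayIblShape1" and the enum values are what I would expect on a stock IBL shape, so verify them in the Attribute Editor on your version.

import maya.cmds as cmds

ibl = "mentalrayIblShape1"                        # assumed default node name
cmds.setAttr(ibl + ".type", 0)                    # 0 = Image File (1 = Texture); assumed enum order
cmds.setAttr(ibl + ".texture", "mirror_ball.hdr", type="string")
cmds.setAttr(ibl + ".mapping", 1)                 # 1 = Angular (0 = Spherical); assumed enum order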

If you need a high-resolution, flawless background image, because the image itself will appear in the final render or composite, then you will need to construct a full 360x180 degree equirectangular image. You may also want this quality for reflections on your CG elements. Most likely you will need an image close to 10,000 x 5,000 pixels.

These images are made by stitching together separate photographs. The rig shown here is an 8mm lens on a Nodal Ninja pano head. It requires only 5 images stitched together to create the full 360x180 degree image.

However, you also need to make this an HDR, and that requires shooting bracketed exposures at each separate position of the head. This gets complicated fast.

To learn more about how panos are stitched together using PTGui, you can follow this shooting video or this stitching video.

Remember that you can use any type of image for an IBL. This is a watercolor image that was tiled and flipped to make a seamless 360x180 equirectangular panorama. It is only 8 bit, but it would give an appropriate color cast to the lighting in a toon-shaded or non-photorealistic render. An image like this can become very abstract and colorful.

In the IBL node in Mental Ray, you can even use a texture - such as a Ramp Shader - as the image.

Lastly and most importantly, if the image is only going to be used for lighting information and perhaps subtle reflections in a few CG objects, then it should be low res and possibly blurry.

High-res images are good for accurate, sharp reflections or for rendering the image in the background. However, these large images make for long renders.

For efficient renders where the image itself will not be seen, use a Mirrored Ball image (possibly blurred) and make it 1000 x 1000 pixels or even smaller. Hold on to the large master image; if you later need more accuracy and detail, you can switch back to it. Your render times will increase dramatically, however.
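If you want to prep that lighting-only version outside Maya, here is a rough Python/OpenCV sketch (the 1000 x 1000 size follows the suggestion above; the blur amount is an arbitrary starting point): load the float HDR, shrink it, soften it, and save it as a second file alongside the master.

import cv2

hdr = cv2.imread("mirror_ball.hdr", cv2.IMREAD_UNCHANGED)        # keep the 32 bit float data
small = cv2.resize(hdr, (1000, 1000), interpolation=cv2.INTER_AREA)
soft = cv2.GaussianBlur(small, (0, 0), sigmaX=5)                 # sigma in pixels; adjust to taste
cv2.imwrite("mirror_ball_lighting.hdr", soft)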

On to Part Two: Image Based Lighting: Mental Ray IBL Rendering.

UMBC Department of Visual Arts, Advanced Maya Courses, Dan Bailey