These are some of my notes from implementing image based lighting, a.k.a. IBL.

I thought I understood it pretty well until I started implementing it. Now, after a lot of reading, discussing and trying things out, I'm finally getting back to the stage where I tentatively believe I understand it. Hopefully these notes will save future me, or anyone else reading them, from going through the same difficulty. I'll update them if I find out anything further.


### Some helpful references

- GPU-Based Importance Sampling, *GPU Gems 3, Chapter 20*.
- Real-Time Computation of Dynamic Irradiance Maps, *GPU Gems 2, Chapter 10*.
- Physically Based Shading in Theory and Practice, *SIGGRAPH 2013 Course*. In particular, the course notes for "Real Shading in Unreal Engine 4" are very helpful.

### The radiance integral

$$L_o(v) = L_e(v) + \int_\Omega L_i(l) \, f(l, v) \, (n \cdot l) \, dl$$

- The amount of light reflected at some point on a surface is the sum, over all incoming directions, of the light from that direction multiplied by the BRDF for the surface material, multiplied by the cosine of the angle between the light and the relevant normal vector, plus any light emitted by the surface itself.
- We usually calculate this separately for the diffuse and specular lighting at the point, then sum the results.
- For diffuse lighting, the relevant normal vector is the surface normal.
- For specular lighting, the relevant normal vector is the half-vector: the vector halfway between the light direction and the view direction (where both vectors are pointing away from the surface point rather than towards it).
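To make the integral above concrete, here's a minimal Monte Carlo sketch (my own illustration, not taken from any of the references): for a Lambertian BRDF of `albedo / pi` under constant incoming radiance from every direction, and no emission, the estimate should converge to `albedo * L_i`.

```python
import math
import random

def sample_hemisphere():
    """Uniformly sample a direction on the unit hemisphere around n = (0, 0, 1).
    The pdf of each sample is 1 / (2 * pi)."""
    u1, u2 = random.random(), random.random()
    z = u1                                  # cos(theta), uniform in [0, 1)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def radiance_lambert(albedo, incoming_radiance, samples=200_000):
    """Monte Carlo estimate of the radiance integral for a Lambertian BRDF
    (albedo / pi) with constant L_i and no emission. Analytically this is
    exactly albedo * L_i."""
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(samples):
        l = sample_hemisphere()
        n_dot_l = l[2]                      # n = (0, 0, 1)
        brdf = albedo / math.pi
        total += incoming_radiance * brdf * n_dot_l / pdf
    return total / samples
```

This is also a useful sanity check on the `1 / pi` factor in the Lambertian BRDF: without it, the estimate would come out `pi` times too bright.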

### BRDF

- A BRDF is a function which, given a view vector and a light vector, returns the amount of light which will be reflected along the view vector.
- The view vector is the vector from the point being shaded to the view position, not the other way around.
- The light vector is the vector from the point being shaded to the light position, not the other way around.
- An isotropic BRDF is invariant under rotation around the surface normal: it depends on the angles the two vectors make with the normal (and with each other), not on their absolute orientation.
- BRDFs usually have other parameters which are constant for any given material.
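Since the direction conventions above are exactly the thing I kept getting wrong, here's a small sketch of building the inputs a BRDF expects (function names are mine, purely for illustration):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def shading_vectors(point, light_pos, eye_pos):
    """Build the vectors a BRDF expects. Both point *away* from the shaded
    point: l towards the light, v towards the viewer."""
    l = normalize(tuple(a - b for a, b in zip(light_pos, point)))
    v = normalize(tuple(a - b for a, b in zip(eye_pos, point)))
    # The half-vector used by specular BRDFs lies midway between l and v
    # (undefined when l and v are exactly opposite).
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    return l, v, h
```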

### Diffuse BRDF

- Diffuse contribution comes from light emerging from the surface after bouncing around inside it a bit.
- This means the light direction can be fairly random.
- So a fairly common diffuse BRDF is simply a constant value: the diffuse color for the material divided by pi (this is the Lambertian model).
- Rough materials will need a more complicated diffuse BRDF such as Oren-Nayar.
- Oren-Nayar is based on the microfacet model described below.
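The Lambertian model really is this small (a sketch in my own notation; the `max` clamp handles lights below the surface):

```python
import math

def lambert_brdf(albedo):
    """The Lambertian diffuse BRDF: a constant, independent of l and v.
    Dividing by pi keeps the surface energy-conserving: integrating
    (albedo / pi) * cos(theta) over the hemisphere gives exactly albedo."""
    return albedo / math.pi

def diffuse_term(albedo, light_radiance, n_dot_l):
    """One light's diffuse contribution at a shaded point."""
    return lambert_brdf(albedo) * light_radiance * max(0.0, n_dot_l)
```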

### Microfacet BRDF

$$f(l, v, h) = \frac{D(h) \, F(v, h) \, G(l, v, h)}{4 \, (n \cdot l) (n \cdot v)}$$

- Usually used for specular BRDFs, but Oren-Nayar (diffuse) is also based on microfacets.
- Assumes that a surface is composed of lots of tiny planar fragments whose normals vary according to some statistical distribution. This is the Cook-Torrance model.
- Breaks the BRDF into 3 main components:
- D(h), the normal distribution function.
- F(v, h), the Fresnel function.
- G(l, v, h), the geometric shadowing function.
- D(h) tells us the fraction of microfacet normals which point in a given direction.
- This describes how rough or smooth a material is.
- Rougher materials will have a more even spread of normals, whereas smoother ones will tend to have them concentrated around a single direction.
- Lots of options for this: Blinn-Phong, GGX, GTR, etc.
- F(v, h) gives us the fraction of light that gets reflected in each channel.
- This tells us the material's "colour".
- Pretty much everyone uses Schlick's approximation for this.
- G(l, v, h) describes how much of the reflected light is blocked by other microfacets.
- Tells us how "dark" a material is.
- Is (or *should be*) affected by how rough the material is.
- There are lots of options for this too.
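Putting the three components together, here's one concrete combination using terms named above: GGX for D, Schlick's approximation for F, and a Smith-style G matched to GGX. This is just a scalar sketch; the `roughness * roughness` remapping to alpha is a common convention (used in the UE4 course notes), not something these formulas require.

```python
import math

def d_ggx(n_dot_h, alpha):
    """GGX / Trowbridge-Reitz normal distribution function."""
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def f_schlick(v_dot_h, f0):
    """Schlick's approximation to the Fresnel term (per channel)."""
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def g_smith_ggx(n_dot_v, n_dot_l, alpha):
    """Smith geometric shadowing in the Schlick-GGX form.
    Note it depends on roughness, as a good G term should."""
    k = alpha / 2.0
    g1 = lambda x: x / (x * (1.0 - k) + k)
    return g1(n_dot_v) * g1(n_dot_l)

def microfacet_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, f0, roughness):
    """Cook-Torrance specular BRDF: D * F * G / (4 (n.l)(n.v))."""
    alpha = roughness * roughness
    d = d_ggx(n_dot_h, alpha)
    f = f_schlick(v_dot_h, f0)
    g = g_smith_ggx(n_dot_v, n_dot_l, alpha)
    return d * f * g / (4.0 * n_dot_l * n_dot_v)
```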

### IBL in general

- The idea of IBL is to bake out components of the radiance integral into textures ahead of time, so that our shaders can rapidly approximate it given just a direction, a surface colour and a roughness value.
- We can bake out the irradiance (incoming light) for any given direction:
- The irradiance will be from all pixels within a cone around the given direction.
- The solid angle of the cone is determined by the normal distribution function for your BRDF (which in turn is shaped by the material's roughness).
- This will be stored in a texture where we can look up values using a direction vector, i.e. a cube map or a lat-long 2D texture.
- You can use a mip-mapped texture to store the results for different roughness levels. Mip-level 0 will be the smoothest level; higher levels will be rougher.
- Mip-level N+1 should represent a cone covering twice as many pixels as Mip-level N, so that the sampling works out correctly.
- Because the mip-level contents depend on the mapping from the roughness parameter to a cone width, the irradiance map will be specific to a particular normal distribution function.
- You can precalculate the rest of the BRDF separately as well:
- This is the split sum approximation described in the SIGGRAPH 2013 course on physically based rendering.
- Generate a 2D texture where the u axis corresponds to dot(N, V) values >= 0.0 and the v axis corresponds to roughness values between 0.0 and 1.0.
- The texture contains a scale and bias to the material color.
- Apply the scale and bias, then multiply the result by the value from the irradiance map to get the final color for the pixel.
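At runtime the shader side of this boils down to two tiny steps, sketched here (function names are mine; a real shader would do the texture fetches, and real engines often use a fancier roughness-to-mip curve than the linear one shown):

```python
def roughness_to_mip(roughness, num_mips):
    """Pick a mip level in the prefiltered irradiance map. Linear mapping
    from roughness in [0, 1] to [0, num_mips - 1] is the simplest choice."""
    return roughness * (num_mips - 1)

def apply_split_sum(prefiltered_color, scale, bias, f0):
    """Combine the two precomputed halves of the split sum:
    prefiltered_color comes from the roughness-mipped irradiance map, and
    (scale, bias) from the 2D lookup texture indexed by (dot(N, V), roughness).
    f0 is the material's specular color at normal incidence."""
    return prefiltered_color * (f0 * scale + bias)
```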

### IBL vs. Reflections

- What's the difference between IBL and reflections?
- Theoretically: nothing. They're the same thing.
- I think there may be an accident of terminology here, where "IBL" is often used to mean *diffuse* IBL, and "reflections" is used to mean *specular* IBL.
- It's common to provide a low-res HDR image with a matching high-res LDR image. In this case:
- The low-res HDR image is for diffuse IBL.
- The high-res LDR image is for reflections.
- This implies that reflections are expected to sample from an LDR input, whereas IBL (of any kind, specular or diffuse) is better off sampling from an HDR input.