**Spot Lights**

In this post I'm going to add shadows to one of my lighting shaders. I've already got point lights and directional lights working, but for the sake of making my first attempt at shadows as easy as possible I'm going to use a spot light this time. Spot lights are very similar to point lights, with the only major difference being that we need to take into account angular attenuation as well as distance attenuation. The result is that we form a cone of light.

If we assume we have an inner and outer angle for the falloff, then we can calculate the cosine of each and hand the resulting values to our shader, defined as the Z and W values of the light-params vector in the following code. From there, assuming we also have the light direction and L, the vector from the surface to the light, we can calculate a falloff factor to multiply with our existing distance falloff value, using something similar to the following code.

```glsl
float cosSpotInnerAngle = u_lightParams1.z;
float cosSpotOuterAngle = u_lightParams1.w;
float cosAngle = dot(u_lightDirection.xyz, -L);
float falloff = clamp((cosAngle - cosSpotOuterAngle) /
                      (cosSpotInnerAngle - cosSpotOuterAngle),
                      0.0, 1.0);
```

Another small change is that where we use a sphere mesh as the light volume for a point light, we can get away with a frustum mesh instead, scaled to fit the volume created by the light cone. This isn't required, but it reduces overdraw a lot.

**Shadow Maps**

Shadow maps store the depth of each shadow-casting surface from the point of view of the light. Given this data, our lighting shader can compute, along the same light ray, both the distance to the surface being lit and the distance to the nearest occluder; a depth comparison between the two then tells us whether the surface is occluded (in shadow) or not.
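Stripped of all the coordinate-space plumbing, that comparison boils down to a couple of lines. A minimal sketch, with illustrative names rather than the real shader's (the bias term is explained later in the post):

```c
/* Returns 1 if the surface is lit, 0 if it is in shadow. Both depths are
   measured from the light along the same ray; bias counters small
   precision errors in the comparison. */
int IsSurfaceLit(float nearestOccluderDepth, float surfaceDepth, float bias)
{
    return (surfaceDepth - bias) <= nearestOccluderDepth;
}
```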

For a spot light we can use a perspective projection matching the shape of the light cone, and treat the light's matrices more or less as camera matrices for the purpose of rendering the occluders; in effect, the light itself becomes the camera for this step.

We render back faces, rather than front faces, as this helps to cut out self-shadowing artifacts. If we rendered front faces, there would be lots of surfaces where the nearest occluder and the lit surface sit at the same depth, and tiny inaccuracies in the calculations would then flip the comparison and produce errors in the results ("shadow acne"), which we don't want.
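In raw GL terms this is just a cull-state change for the shadow pass; a sketch, assuming the normal lit passes cull `GL_BACK` and that a plain `GL/gl.h` header (or your loader's equivalent) is in use:

```c
#include <GL/gl.h> /* or whichever GL loader header the project uses */

/* State for the shadow-map pass: cull front faces so only back faces
   are rendered into the depth map. */
void SetShadowPassCullState(void)
{
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);
}
```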

To keep things simple I'm going to allocate a 512x512 shadow map per light, but a better solution might be to start with one large shadow map and cut it up dynamically based on the number of lights active.

**Shadow Calculations**

Our lighting shaders already reconstruct the eye space position of each shaded fragment for lighting. To run the shadow tests we need to transform these positions into light clip space. To achieve that we can combine a few matrices and hand a matrix to the shader that encodes the full transformation. The sequence of transformations we need is...

- Camera eye space -> World space (inverse camera view matrix)
- World space -> Light eye space (light view matrix)
- Light eye space -> Light clip space (light projection matrix)

Combining the matrices representing each of these steps gives us the required camera-eye-space to light-clip-space matrix.
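The combination itself is a single multiply chain, applied right to left. A sketch with an illustrative column-major `Mat4` type (not from any particular math library):

```c
typedef struct { float m[16]; } Mat4; /* column-major, illustrative */

/* r = a * b for column-major 4x4 matrices. */
static Mat4 Mat4Mul(Mat4 a, Mat4 b)
{
    Mat4 r;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a.m[k * 4 + row] * b.m[col * 4 + k];
            r.m[col * 4 + row] = sum;
        }
    return r;
}

/* Camera eye space -> light clip space:
   lightProj * lightView * inverse(cameraView). The inverse of the camera
   view matrix is assumed to be computed elsewhere. */
Mat4 BuildEyeToLightMatrix(Mat4 lightProj, Mat4 lightView, Mat4 invCameraView)
{
    return Mat4Mul(lightProj, Mat4Mul(lightView, invCameraView));
}
```

The result is uploaded once per light, so the shader pays for a single matrix multiply per fragment rather than three.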

The resulting coordinates provide a direct mapping into the light's shadow map. The XY values just need a scale and offset applied to map them into the 0 to 1 range used by UVs. The Z value can then be compared against the depths stored in the shadow map.

A single shadow map comparison yields a binary on/off result, but we can perform several and average them to get a percentage-in-shadow result, which allows us to soften the edges of the shadow a little.

Our shadow calculation function looks something like this, where we also upload a bias value in the Y component of our lightParams (the Z and W components already hold the cone angles) to counter minor inaccuracies in the calculations.

```glsl
float CalculateSpotShadowFactor(vec3 eyePos)
{
    // Transform from eye space to light clip space.
    vec4 lightClipPos4 = u_eyeToLightMatrix * vec4(eyePos, 1.0);
    vec3 lightClipPos = lightClipPos4.xyz / lightClipPos4.www;

    // Work out UV coords for the shadow map.
    vec2 shadowMapUV = lightClipPos.xy * 0.5 + 0.5;

    // Bias the depth we compare against, to counter minor inaccuracies.
    float lightClipDepth = lightClipPos.z;
    float lightClipBias = u_lightParams1.y;
    float shadowMapCompare = lightClipDepth - lightClipBias;

    // Run 4 comparisons, offset by a small filter pattern.
    vec4 lightDepth4;
    lightDepth4.x = texture2D(shadowMapSampler, shadowMapUV.xy + u_filterPattern.xy).r;
    lightDepth4.y = texture2D(shadowMapSampler, shadowMapUV.xy + u_filterPattern.zw).r;
    lightDepth4.z = texture2D(shadowMapSampler, shadowMapUV.xy - u_filterPattern.xy).r;
    lightDepth4.w = texture2D(shadowMapSampler, shadowMapUV.xy - u_filterPattern.zw).r;

    // Map the stored depths from [0, 1] back to [-1, 1] to match clip space.
    lightDepth4 = lightDepth4 * 2.0 - 1.0;

    // Average the four binary results into a fractional shadow factor.
    vec4 inLight4 = vec4(1.0) - step(lightDepth4, vec4(shadowMapCompare));
    return dot(inLight4, vec4(0.25, 0.25, 0.25, 0.25));
}
```

**Multiple Lights**

As a finishing touch I've set up the example to use two lights rather than just one. Supporting multiple lights is easy: the contribution from each light is summed into the framebuffer using standard additive blending.
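The blend state for that accumulation is a one-liner; a sketch in raw GL calls, assuming each light is drawn as its own full pass over the lit geometry:

```c
#include <GL/gl.h> /* or whichever GL loader header the project uses */

/* Additive blending for the light passes: dest = dest + src, so each
   light's contribution simply sums into the framebuffer. */
void SetAdditiveLightBlendState(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
}
```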