Using Perlin Noise To Generate Terrain In Unity

It’s fairly common to find Perlin Noise being used to generate procedural terrain, among other things. Perlin Noise can be used to generate a heightmap, which in turn can be used to control the height of a chunk of terrain… and it would then be common to layer those heightmaps at differing frequencies to create the sort of surface detail that you would associate with real world terrain.

A single Perlin Noise heightmap might look like this.

I’m going to describe a quick test here where I’ll build a grid of terrain cells in Unity, using some custom shaders to make the rendering a bit more interesting. What I’m going to end up with is a simple interface, defined via a C# script, that in the Inspector looks something like this, and that will allow me to quickly build and edit the properties of my terrain.

The way I’ve set this up is such that clicking the Build button cleans out all children of the GameObject that the script is attached to, and then generates a set of new children representing the terrain tiles and the water. The surface area of the terrain is defined by the tile count and tile size.
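In script form, that Build handler might look something like the sketch below. TileCount, TileSize and BuildTile are hypothetical names of my own here, not Unity API; the important part is destroying the old children before generating the new ones.

      // Rough sketch of the Build routine. TileCount, TileSize and
      // BuildTile are hypothetical names, not part of the Unity API.
      void Build()
      {
          // Clean out everything left over from the previous build.
          for (int i = transform.childCount - 1; i >= 0; --i)
              DestroyImmediate(transform.GetChild(i).gameObject);

          // One child GameObject per terrain tile...
          for (int tx = 0; tx < TileCount; ++tx)
          {
              for (int tz = 0; tz < TileCount; ++tz)
              {
                  var tile = new GameObject("Tile_" + tx + "_" + tz);
                  tile.transform.parent = transform;
                  tile.transform.localPosition = new Vector3(tx * TileSize, 0.0f, tz * TileSize);
                  BuildTile(tile); // generate the heightmap and mesh for this tile
              }
          }

          // ...plus a single water plane spanning the whole surface area.
      }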

For each tile I’ll generate six sets of Perlin Noise, using different frequencies (input scales) and weights, which I’ll then sum and normalize, before raising to a power and finally scaling to fit the desired world height. We raise to a power in order to flatten out the lower terrain while leaving the taller features more or less untouched… which I find yields more interesting results. Once we have our heights we then create polygon meshes per tile to represent the terrain.

The important part of all this is that deep inside the inner loop of the script we do the following per-vertex calculation, where the s and w values are taken directly from the UI we’ve created.

       // sum the weighted heights.
       float y = 0.0f;
       y += Mathf.PerlinNoise(x / s0, z / s0) * w0;
       y += Mathf.PerlinNoise(x / s1, z / s1) * w1;
       y += Mathf.PerlinNoise(x / s2, z / s2) * w2;
       y += Mathf.PerlinNoise(x / s3, z / s3) * w3;
       y += Mathf.PerlinNoise(x / s4, z / s4) * w4;
       y += Mathf.PerlinNoise(x / s5, z / s5) * w5;
       y /= weightSum;

       // raising the height to a power gives a much nicer result
       // - more flat areas... recognizable mountains... etc
       // also factor in a height scale.
       y = Mathf.Pow(y, WorldHeightPower);
       y *= WorldHeightScale;
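Turning those heights into a tile mesh is then fairly mechanical Unity mesh construction. A hedged sketch, where vertsPerSide, quadSize, triangles and HeightAt are my own names rather than anything from the Unity API (HeightAt being the weighted noise calculation above):

      // Build a (vertsPerSide x vertsPerSide) grid of vertices for one tile.
      var vertices = new Vector3[vertsPerSide * vertsPerSide];
      for (int iz = 0; iz < vertsPerSide; ++iz)
      {
          for (int ix = 0; ix < vertsPerSide; ++ix)
          {
              float x = ix * quadSize;
              float z = iz * quadSize;
              vertices[iz * vertsPerSide + ix] = new Vector3(x, HeightAt(x, z), z);
          }
      }

      // ...fill an index array with two triangles per grid quad...

      var mesh = new Mesh();
      mesh.vertices = vertices;
      mesh.triangles = triangles;
      mesh.RecalculateNormals();
      meshFilter.sharedMesh = mesh;
      meshCollider.sharedMesh = mesh;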

With those meshes built and attached to a MeshFilter component (and a MeshCollider too) I’ll then link them to a custom shader. Assuming I have a shader in my asset library this is easy. I just need to add a MeshRenderer component and link it to my shader, like so. Since I’ve based my shader on the standard Unity surface shader, I also need to enable the metallic workflow and set the glossiness to prevent the terrain looking like shiny plastic.

      meshRenderer.sharedMaterial = new Material(Shader.Find("Custom/GOAT_Terrain"));
      meshRenderer.sharedMaterial.EnableKeyword("_SPECGLOSSMAP");
      meshRenderer.sharedMaterial.SetFloat("_Glossiness", 0.1f);

My custom shader samples two textures, one for grass and one for cliffs, and I also provide a scale for each. Whenever these are changed via the Inspector I can easily connect the new values to my components by doing the following for each terrain tile, bearing in mind that I’ve named them _GrassTex and _CliffTex in my shader code.

      meshRenderer.sharedMaterial.SetTexture("_GrassTex", GrassTexture);
      meshRenderer.sharedMaterial.SetTexture("_CliffTex", CliffTexture);
      meshRenderer.sharedMaterial.SetTextureScale("_GrassTex", new Vector2(GrassScale, GrassScale));
      meshRenderer.sharedMaterial.SetTextureScale("_CliffTex", new Vector2(CliffScale, CliffScale)); 

Finally… my shader uses the following code in place of the usual code that samples _MainTex, allowing the two textures to be sampled and blended based on the surface normal, meaning you automatically get the grass texture on the flatter areas and the cliff texture everywhere else…

      float lerpFactor = saturate(saturate(o.Normal.y - 0.8f) * 8.5f);
      fixed4 grassColor = tex2D(_GrassTex, IN.uv_GrassTex);
      fixed4 cliffColor = tex2D(_CliffTex, IN.uv_CliffTex);
      fixed4 textureColor = grassColor * lerpFactor + cliffColor * (1.0f - lerpFactor);
      o.Albedo = textureColor.rgb * IN.color.rgb;

The end result looks something like this…

The result thus far isn’t really what I was after. I was hoping to create something a little more stylized, with more interesting and not necessarily natural features, and I’m not sure this is taking me in the right direction. So while I could carry on and try to make this look super realistic, I’m going to leave this experiment there and look into other approaches.

Peter Panning Shadows In Unity

I’d like to find the cause of some problems I’ve seen with shadow casting lights in Unity and in order to do so I’d like to be able to edit and generally experiment with changes to the standard deferred shaders. It turns out you can do just that!

For reference this is the problem I want to address:

The first step is to download the built-in shaders from here

Extract Internal-DeferredShading.shader and rename it. I’ve called it GOAT-DeferredShading.shader… just because I like goats… but you can call it anything. From there, import the new shader into the project under Resources, and in the project’s Graphics Settings change Deferred to Custom Shader and point it at the imported shader.

Now we can try a modification to double-check it’s all hooked up. Any modification to CalculateLight should be immediately visible in the Game view. Try faking the GBuffer inputs or something similar and you should see the result right away if the new shader is working.
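For instance, in my copy the function starts like the fragment below, and hijacking it to return a constant color makes it very obvious whether the custom shader is live. This is a sanity check only, to be removed again once confirmed.

      half4 CalculateLight (unity_v2f_deferred i)
      {
          // Sanity check only: tint every lit pixel red to prove
          // the custom deferred shader is actually being used.
          return half4(1, 0, 0, 1);

          // ...the original lighting code below is now unreachable...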

I’m trying to find the cause of some shadow caster Peter Panning problems, so to make sure the relevant code is editable I also import UnityDeferredLibrary and UnityShadowLibrary, renaming them and then updating the includes to chain the files together, so that the custom versions of the code are used instead of the originals.

Almost right away I spot this code in UnityShadowLibrary.cginc:

inline half UnitySampleShadowmap (float3 vec)
{
    float mydist = length(vec) * _LightPositionRange.w;
    mydist *= 0.97; // bias

Hard coded numbers always make me suspicious so I try eliminating it, and this is the result.

This image shows shadow acne. It occurs where the depth of the occluder is more or less the same as the depth of the receiver, and the precision of the depth buffer isn’t enough to resolve the difference, so we end up with pixels incorrectly marked as in shadow. In effect the surface here is casting a shadow onto itself. The standard way of dealing with this would be to default to casting shadows from back faces only… where, from the point of view of a light, the back faces are never visible and are therefore always in shadow, and are almost always deep enough not to interfere with the front-facing polygons’ shadow calculations. Is this not how shadows work in Unity?

Changing this hard coded bias does however appear to fix the peter panning problems… it just causes lots of other problems at the same time.

If I’m going to fix this properly I need to be able to control the shaders that write to the shadow map. It turns out you can do that too. Unity comes with a shader called Standard.shader that is used by the standard materials. As before we can extract this, rename to GOAT-Standard.shader and import it. Then we apply this small modification to switch from front face to back face shadow casters…

     // ------------------------------------------------------------------
     //  Shadow rendering pass
     Pass {
        Name "ShadowCaster"
        Tags { "LightMode" = "ShadowCaster" }
        ZWrite On ZTest LEqual		
        Cull Front  // <---------------------------

At the same time as making that change we can comment out the line that multiplies the distance by 0.97… and, as if by magic, everything works more or less the way I wanted it to. With the bias removed a small amount of flickering is visible where the pillar joins the floor, but with this setup a tiny negative bias is actually more appropriate, so I end up with code that looks like this instead:

inline half UnitySampleShadowmap (float3 vec)
{
    float mydist = length(vec) * _LightPositionRange.w;
    mydist *= 1.01; // bias

And now the shadows are totally solid, and match the geometry perfectly!

First Steps in Unity

I’ve installed Unity (2017.1) for the first time… and want to get to grips with some of the basics, so I’m going to build a small scene with a few moving parts covering some of the things I expect to be important to me.

When you first open Unity you’re presented with a default setup that provides a few things out of the box: namely a camera, a directional light, and some default render settings.

Out of the box, what you don’t get is access to a deferred rendering pipeline. I want one of these as I think deferred lighting, bloom, decent reflections, etc, are all essential components, so the first thing I need to do is install the Post Processing Stack, described here. Once you’ve got that installed, you create a post-processing profile, which describes the precise pipeline you want to use, and add it to the camera via a Post Processing Behaviour component.

Next, Unity adds a lot of ambient light and reflections by default. I don’t want any of these, as I want tight control over the lighting. Basically, if I don’t add any lights I want everything to be black, not some default ambient color that it’s chosen to use for me. I understand why they do this… because an out-of-the-box experience where everything is black until you add some lights might be confusing for some people… but I want to get rid of this light source. To do that you need to go to the Lighting settings and set both Environment Lighting and Environment Reflections to be sourced from a color… and then make that color black.
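Incidentally, the same settings are exposed to scripts via RenderSettings, which is handy if you want to enforce this from code. This is a fragment rather than a full script, using Unity’s documented RenderSettings properties:

      using UnityEngine;
      using UnityEngine.Rendering;

      // Remove the default ambient light and reflections from code.
      RenderSettings.ambientMode = AmbientMode.Flat; // the "Color" source in the Lighting panel
      RenderSettings.ambientLight = Color.black;
      RenderSettings.reflectionIntensity = 0.0f;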

Having done all that we still need to activate the deferred pipeline. To do that you go to the project graphics settings, under Edit -> Project Settings -> Graphics and switch all tiers to deferred.

Finally, I’ve turned off the sky too, which you can do by setting the camera to clear to a solid black color instead of the skybox. I’ll add a sky again as and when I think I need one. For now it looks odd that I have so little light in the scene and then this bright sky sitting on top of everything, so I’m better off without it.
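That camera setting can be scripted too, using the standard Camera properties. A small fragment:

      // Clear to solid black instead of rendering the skybox.
      var cam = Camera.main;
      cam.clearFlags = CameraClearFlags.SolidColor;
      cam.backgroundColor = Color.black;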

Unless I’ve missed a step, you should now be able to drop in a shadow-casting point light and a few cubes and get something similar to this…

Next up, I want to add a few textures. I have some normal and albedo maps lying around from previous experiments, so I grab a few of those and drop them in as assets. For the normal map you need to open it up and set its type to Normal Map. Then you can create a material and connect them as Albedo and Normal Map under Main Maps. Finally, select the floor and map its material to the new material that we just created. In my case I’ve also had to set the tiling on the material to 15 to make the maps fit the large floor I’m using.

Now I want to animate a few things, which seems like an excuse to try adding a C# script to my scene. I start by making a new script in the asset library, then I open it, which should open VS2017 Community Edition. From there you are presented with methods called Start and Update to which you can add code. I type this.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour {

    public Vector3 initialPosition;
    public float time;

    void Start () {
        // Remember where the object started so we can animate relative to it.
        initialPosition = transform.position;
        time = 0.0f;
    }

    void Update () {
        time += Time.deltaTime;
        // Reset to the initial position, then offset: bob in Y, slide in X.
        transform.position = initialPosition;
        transform.Translate(Vector3.up * Mathf.Cos(time * 5.0f));
        transform.Translate(Vector3.left * Mathf.Sin(time) * 2.0f);
    }

}

Once saved, I drag the script into the space below the existing components on each light, which automatically creates a new script component for each. When you click Play the lights now slide back and forth in the X axis and bob up and down in Y.

There are a few things here I’m still not happy with. The first is that the shadows cast from the column exhibit what’s known as Peter Panning, where there is a disconnect between the caster and the shadow itself, making it look as though the object is not quite sat on the floor. I’ve checked, and the object actually intersects the floor, so there is no gap at all. Unity exposes a per-light shadow bias setting that seems to control this, but it appears that a bias of 0 doesn’t really mean 0, and so we still end up with a gap. I believe we should be able to get away with a much smaller bias and shouldn’t need to put up with this artifact, but I can’t see an obvious way to achieve that…
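The bias is at least accessible from a script as well as the Inspector, which makes experimenting easy. A small fragment, using the documented Light.shadowBias and Light.shadowNormalBias properties:

      // Try forcing the per-light bias right down from code.
      var lightComp = GetComponent<Light>();
      lightComp.shadowBias = 0.0f;       // even at 0 a visible gap remains
      lightComp.shadowNormalBias = 0.0f;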

Other things I’ve struggled with a little are the point lights, where I seem to have to make them very intense to get enough light into the scene, but doing so creates very intense lighting even some distance away, which doesn’t feel realistic. I need to do some experiments, as I’m not sure whether this is down to the bloom settings, the tone-mapper, eye adaptation, or (more likely) the light falloff, or some other thing… but when I’ve previously coded everything myself it seemed much easier to balance the lighting and get a more natural look to things.

I’ve a feeling that my next set of experiments might revolve around writing custom shaders for Unity.