Recreating snow with a shader

3. Snow deformation

To leave imprints in the snow, we need to work with deformation. From the research of Van Eijk (2021) we can see that there are two main methods of snow deformation on a plane. The raycast method deforms by casting rays downwards and making an imprint at the hit point. This is not efficient due to the need to customize the imprint of each object. Since this would take up a lot of time, the second option is chosen: a depth render texture.

3.1 Depth camera

A depth render texture works by creating an orthographic camera and using its output to generate a depth render texture. To work with a depth camera, we take the plane with tessellation and add the orthographic camera below it. As we can see below, the idea is that the camera ignores the plane and only checks the objects on top of it. The camera then measures how far away the objects are, and that information can be used to make a depth texture. This makes it easy to accurately get the depth of every kind of object, from spheres to feet.

To configure the camera as a depth camera, we first make sure it is set to orthographic projection. This ensures it only looks at objects between its clipping planes; everything outside of the clipping planes won't be seen by the camera. After making sure that the camera is orthographic, we set the clear flags to Depth only and set the culling mask so that the camera only renders objects on the layer we define.

We add a script to the camera with the line:

cam.depthTextureMode = DepthTextureMode.DepthNormals;

This tells the camera to generate a “screen-space depth and view space normals texture”. This gives us depth that we can use to make our render texture.

3.2 Depth Texture

To generate the depth texture, we add a render texture to the camera. To do this, we go to the camera we added in 3.1 Depth Camera and assign a texture to "Target Texture". This is where the camera output gets sent to. Then we make a shader to modify this output and create a simple black-and-red image, which we will later use to displace the vertices. The red values should be based on the depth (distance) from the camera.

To modify the render texture that the camera already generates, we use the function "OnRenderImage" in the script we made in 3.1 Depth Camera. This lets us manipulate the camera output. Graphics.Blit lets us modify the output with a different material (shader).

Graphics.Blit(source, destination, _CameraMaterial);

In the shader itself we use DecodeDepthNormal to decode the depth and normal values. The normal and depth values come from _CameraDepthNormalsTexture, which we got by setting our camera's depthTextureMode.

fixed4 NormalDepth;
// decode the depth (w) and view-space normals (xyz) from the camera texture
DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv), NormalDepth.w, NormalDepth.xyz);
col.rgb = 1 - NormalDepth.w; // closer objects get a higher value
Generated render texture

As you can see on the left, we have a render texture with different red values according to depth. We achieve this result by multiplying our values with the color red. This texture can be used to generate displacement on our surface.
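The depth-to-red mapping can be checked on the CPU; this is a minimal Python sketch of the same math, where `depth_to_red` is a hypothetical helper name (not from the shader itself):

```python
def depth_to_red(depth):
    """Map a normalized camera depth (0 = at the camera, 1 = far plane)
    to an (r, g, b) color. Closer objects press deeper into the snow,
    so they get a brighter red; multiplying by pure red keeps only R."""
    value = 1.0 - depth
    return (value, 0.0, 0.0)
```

An object at the far plane (depth 1) produces black, i.e. no imprint.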

3.3 Displacement

To add the displacement to our surface, we add a few lines to our vertex shader. These lines tell the vertices to go down where the pixel on our displacement map is red. This means that a vertex will go lower according to how high the red pixel value is.

The code:

float redValue = tex2Dlod(_Splat, float4(uv.xy, 0, 0)).r; // get red value of the texture
float downDisplacement = _Displacement * redValue; // when _Displacement = 0, there will be no tracks
v.vertex.xyz -= v.normal * downDisplacement; // apply displacement to lower according to the disp texture
v.vertex.xyz += v.normal * _Displacement; // raise the vertices so that the collider stays at the bottom
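The net effect of lowering by the red value and then raising by the full displacement can be verified with a small CPU-side sketch (Python, with a hypothetical `displaced_height` helper):

```python
def displaced_height(base_height, red, displacement):
    """Net vertex offset along the normal after the two shader steps:
    lower by displacement * red, then raise by displacement. Untouched
    snow (red = 0) ends up raised by the full displacement; a complete
    imprint (red = 1) stays at the collider height."""
    down = displacement * red
    return base_height - down + displacement
```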

3.4 Persistent Texture

Combining capture with persistent texture, Tran (2018)

Because the texture used in 3.2 Depth texture is refreshed every frame, we need to use a persistent texture. This means that our captured results need to be written to a persistent texture so that we can use that information in the next frame. We first update our camera script to make sure that our results are written to the persistent texture, using the line below. No material needs to be applied, since we already got the result we wanted in the temporary texture.

Graphics.Blit(tempTex, persistentTex);

We also pass the persistent texture to the shader from 3.2 Depth texture and use it to take the maximum red value of the two (this makes sure that old footsteps are copied into the temporary texture).

newColors.r = max(persistentCol.r, newColors.r);
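A minimal CPU-side sketch of this max blend, assuming the red channels are stored as plain lists of floats (a hypothetical `persist` helper):

```python
def persist(persistent, captured):
    """Per-pixel max of the stored red channel and the freshly captured
    one, mirroring newColors.r = max(persistentCol.r, newColors.r).
    Imprints never fade: whichever value is deeper wins."""
    return [max(p, c) for p, c in zip(persistent, captured)]
```

An old footprint (0.8) survives even when the new capture at that pixel is empty.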

3.5 Perlin Noise

To make the snow look more realistic, Perlin noise was added. Unity has a great example of how to make Perlin noise. This gives the surface a nice hilly feeling: because snow is not always even, it can pile up more in some places than in others. The Perlin map made below is 256 x 256; the texture does not need a higher resolution, since this already gives us the displacement detail we want.

Our way of implementing it (modified for clarity):

int pixWidth = 256, pixHeight = 256;
float xOrg = 0, yOrg = 0;

Color[] pix = new Color[pixWidth * pixHeight]; // create an array for all the pixels of the texture

for (int y = 0; y < pixHeight; y++) { // loop for height
    for (int x = 0; x < pixWidth; x++) { // loop for width
        float xCoord = xOrg + (float)x / pixWidth * scale;
        float yCoord = yOrg + (float)y / pixHeight * scale;
        float noise = Mathf.PerlinNoise(xCoord, yCoord); // make noise
        pix[y * pixWidth + x] = new Color(noise, 0, 0); // set pixel color in array
    }
}

noiseTexture.SetPixels(pix); // write the array to the texture
noiseTexture.Apply();

We also had to make some changes to our existing displacement code from 3.3 Displacement: the calculation of the imprints now accounts for the new Perlin displacement, and the new perlinDisplacement is added to the vertex.

float4 perlinCoordinate = tex2Dlod(_PerlinTex, float4(v.texcoord.xy,0,0)); // get perlin map
float4 depthCoordinate = tex2Dlod(_Splat, float4(v.texcoord.xy,0,0)); // get depth map

float perlinDisplacement = perlinCoordinate.r * _PerlinDisplacement; // height of perlin map
float imprintDisplacement = (perlinDisplacement + _Displacement) * depthCoordinate.r; // depth of tracks
v.vertex.xyz -= v.normal * imprintDisplacement; // apply displacement to lower the footsteps
v.vertex.xyz += v.normal * (_Displacement + perlinDisplacement); // raise so you can walk inside the snow
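The combined arithmetic can again be checked on the CPU; a Python sketch with a hypothetical `snow_height` helper:

```python
def snow_height(perlin_red, depth_red, displacement, perlin_displacement):
    """Net vertex offset combining base and Perlin displacement, as in
    the shader: untrodden snow (depth_red = 0) sits at its full Perlin
    height, while a full imprint (depth_red = 1) returns the vertex to
    the collider."""
    perlin = perlin_red * perlin_displacement
    imprint = (perlin + displacement) * depth_red
    return (displacement + perlin) - imprint
```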

3.6 Edge smoothing

To smooth out the edges, we make use of box blur. Box blur averages the current pixel with its surrounding pixels. For this test I made 3 versions in the vertex shader:

– The plane without any smoothing
– The plane with smoothing only applied to the edges
– The plane with smoothing applied to the full slope

The code is seen below. I made this in the shader that also deforms the footprints into the plane. The _size value is set to 3. This means that every vertex checks neighbors up to 3 pixels away. Making this value too large has a significant impact on performance.

No smoothing

 float height = coordinate.r;

Edge smoothing

float4 coordinate = tex2Dlod(_Splat, float4(uv.xy, 0, 0)); // get current image values
float2 texSize = _Splat_TexelSize.xy; // texel size = 1 / texture resolution

float height = 0;
if (coordinate.r <= _SmoothingLimit) { // smoothing limit sets how much gets box blurred
    for (int i = -_size; i <= _size; ++i) {
        for (int j = -_size; j <= _size; ++j) {
            height += tex2Dlod(_Splat, float4(uv.x + ((float)i * texSize.x), uv.y + ((float)j * texSize.y), 0, 0)).r;
        }
    }
    height /= pow(_size * 2 + 1, 2); // divide by the number of sampled pixels
} else {
    height = coordinate.r;
}

Full smoothing

float4 coordinate = tex2Dlod(_Splat, float4(uv.xy, 0, 0)); // get current image values
float2 texSize = _Splat_TexelSize.xy; // texel size = 1 / texture resolution

float height = 0;
for (int i = -_size; i <= _size; ++i) {
    for (int j = -_size; j <= _size; ++j) {
        height += tex2Dlod(_Splat, float4(uv.x + ((float)i * texSize.x), uv.y + ((float)j * texSize.y), 0, 0)).r;
    }
}

height /= pow(_size * 2 + 1, 2); // divide by the number of sampled pixels
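The averaging itself can be sketched on the CPU; a Python version of the full-smoothing pass, assuming out-of-range samples are clamped to the texture edge (in the shader the sampler's wrap mode handles this):

```python
def box_blur(pixels, x, y, size):
    """Average the (2*size + 1)^2 neighborhood around (x, y).
    'pixels' is a 2D list of red values; samples outside the texture
    are clamped to the nearest edge pixel."""
    total = 0.0
    for i in range(-size, size + 1):
        for j in range(-size, size + 1):
            sx = min(max(x + i, 0), len(pixels[0]) - 1)
            sy = min(max(y + j, 0), len(pixels) - 1)
            total += pixels[sy][sx]
    return total / (2 * size + 1) ** 2
```

A uniform image stays unchanged; a single bright pixel is spread over its neighborhood.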

As we can see in the images above, edge smoothing is close but not close enough. It results in a lot of sharp texture edges at the bottom, because the slope is too steep. Because of this, we implemented the full smoothing effect to make sure that all the edges look correct.

3.6.1 Implementing Phong tessellation

The edges were smooth, but there are other methods of making them even smoother. One of those methods is changing our edge-based tessellation to Phong tessellation, which smooths out edges that are not that smooth instead of adding more vertices to do the job. When we implemented Phong tessellation in our project, it did nothing: all normals have to be set correctly for it to work. Because we raised our vertices and added smoothing on top of that, the normals need to be recalculated, since they are no longer pointing up. We didn't succeed in recalculating the normals within the timeframe, and therefore Phong tessellation was not implemented.

3.7 Raising the edges

When you walk through snow, you move the snow. It does not just disappear: it either gets compressed under your feet or gets pushed forward as you walk through it. To improve on the existing depth texture, we add edges.

The first attempt at edges was to raise the box blur we already had. This worked but had a downside: as seen in the image below, raising it creates a steep part at the bottom and a sharp edge at the top.

Example of raising the box blurred edges

To create better and more realistic edges, we had to create a new layer on top of the render texture made in 3.2 Depth Texture. First, a blue edge was needed around objects to tell the displacement shader where the plane needs to be raised. To add this blue edge, we make another pass in the shader that colors our texture red according to depth. The new pass makes use of the idea behind box blur: for each pixel, we check within a range whether there is a red pixel nearby. If there is, the pixel is made blue.

bool redPixelFound(float redColor, v2f i) {
    fixed2 texSize = _MainTex_TexelSize.xy;
    float2 uv = i.uv;
    if (redColor == 0) { // only check if the pixel contains no red value
        for (int x = -_EdgeWidth; x <= _EdgeWidth; ++x) {
            for (int y = -_EdgeWidth; y <= _EdgeWidth; ++y) {
                if (tex2Dlod(_MainTex, float4(uv.x + ((float)x * texSize.x), uv.y + ((float)y * texSize.y), 0, 0)).r > 0)
                    return true;
            }
        }
    }
    return false;
}

persistentCol.b = redPixelFound(persistentCol.r, i) ? 1 : 0;
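The same neighborhood test can be sketched on the CPU; a Python version with a hypothetical `is_edge` helper, clamping samples at the texture border:

```python
def is_edge(pixels, x, y, edge_width):
    """A pixel with no red value becomes a blue edge when any pixel
    within edge_width of it is red (mirrors redPixelFound)."""
    if pixels[y][x] > 0:  # already part of an imprint, not an edge
        return False
    h, w = len(pixels), len(pixels[0])
    for i in range(-edge_width, edge_width + 1):
        for j in range(-edge_width, edge_width + 1):
            sx = min(max(x + i, 0), w - 1)
            sy = min(max(y + j, 0), h - 1)
            if pixels[sy][sx] > 0:
                return True
    return False
```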

Because this code does not work nicely with the current box blur, the box blur was moved from the vertex shader of the plane to the image itself. This is another pass we add, so now we have 3. We use the existing full blur method from 3.6 Edge smoothing and improve on it by using both the red and blue values in the blur.

height += pixelColor.r;
height -= pixelColor.b;

After that, we clear the blue and red values, and the color of a pixel is based on the height value generated above. If a value is lower than 0, it becomes an edge (blue); otherwise it becomes depth (red).

if (height <= 0)
    col.b = -height; // below zero: blue edge
else
    col.r = height; // above zero: red depth
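A CPU-side Python sketch of this recoloring step, with a hypothetical `height_to_color` helper:

```python
def height_to_color(height):
    """Negative blur results become blue edges, positive ones become
    red depth; returns an (r, g, b) tuple."""
    if height <= 0:
        return (0.0, 0.0, -height)
    return (height, 0.0, 0.0)
```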

The results:

Since our shader now has 3 passes, we need to make sure that the camera runs them one by one. To do this, we modify the code from 3.2 Depth Texture to run each pass sequentially. A temporary render texture is needed to store the data of one pass before it can be used in the next.

renderTemp = RenderTexture.GetTemporary(tempTex.width, tempTex.height);

Graphics.Blit(source, destination, _tempMaterial, 0); // pass 0: depth to red
Graphics.Blit(tempTex, persistentTex); // only copy the red values to the persistent texture

Graphics.Blit(persistentTex, renderTemp, _tempMaterial, 1); // pass 1: add the blue edges
Graphics.Blit(renderTemp, destination, _tempMaterial, 2); // pass 2: box blur
RenderTexture.ReleaseTemporary(renderTemp); // free the temporary texture


After this is done, the shader applied to the plane is modified to no longer contain the box blur and to simply use the height values generated by the texture above.
