Creating Pixel Art with Shaders

by Pepijn van der Linden – 500637477

Image 1 – by Cort3D – Pixel Style Rendering in Blender

1. Introduction

I’ve always been a fan of retro games, and seeing that pixel art style fills me with joy and a bit of bittersweet nostalgia. Drawing pixel art for games and animating sprites frame by frame can be a lot of work, and since I found out that several games have tried making this process easier by ‘cheating’ with 3D, I have been wanting to make my own pixel art graphics.

For the individual assignment of the Gameplay Engineering semester, I have looked into various techniques and combined them to create my own pixel art style for a 3D environment.


What really set me on this journey, however, was finding the YouTube video below. Its creator managed to convincingly turn his 3D environment into real-time pixel art by combining several effects.

Another interesting case is that of Dead Cells. During development, the character artist decided to build a tool that let him render a simple 3D model as a 2D sprite, so he could edit and generate his sprites faster.

2. What makes that retro look – Pixel Art Theory

Before I get into the technical details, I’m going to take a moment to talk about the individual pixel art techniques, why they are important to the final result, and when or why you would use them or leave them out.

Resolution & Detail
Image 2 – Pokémon Trainer Red sprite by Nintendo

The most obvious thing about pixel art is that it is.. well.. pixel-y. Choosing the resolution you work with is important because it dictates how much detail you can add to each piece: the lower the resolution, the more each individual pixel matters to the way you read the details. As an example, take a look at the two Pokémon sprites in the picture above. The bigger sprite has space to communicate features like a pose, a facial expression and clothing or accessories. The small sprite can only differentiate itself from other characters through a few key features: being male, wearing a cap, and little bits of dark hair sticking out from the side, which at the same time communicate the shape of the ears. The black t-shirt underneath not only communicates the upper body but also separates the head from the hands by just a few pixels.

If I’m using a 3D environment with low-poly objects, I would choose a mid or lower resolution, since there aren’t many intricate details I’m trying to capture in my textures, nor facial expressions I need to see on my character.

Light & Shadows
Image 3 – Shaded Spheres

Objects in Unity use ‘Diffuse’ shading by default: the object’s shading gradually fades from light to dark. Because pixel artists work with a limited number of colors in their palette, shadow colors are instead snapped to a fixed set of values. This effect is present in virtually all pixel art and is essential for a convincing look.


Outlines

While outlines are not required for pixel art, they are often used to separate important details from the background. Especially in games with a small color palette, outlines help separate each object from other things that use the same or a similar color. Apply this effect to important characters, enemies, items and pickups, and perhaps to background props that should stand out more than others. Black outlines stand out a lot; to soften the effect, you can make the outline only slightly darker than the surface.

Image 4 – Outlines

It also matters whether there is only an outline around the outside of the object, or internal outlines as well that separate parts of the object to create depth and improve readability.


Dithering

Dithering is an important technique that artists use to create the optical illusion of a smoother transition between two colors and of greater color depth. It is also used to add ‘roughness’ to an object: if you look at the spheres in the picture above, the cel-shaded sphere looks smoother than the dithered one.

Image 5 – Lucas Pope, dithering in “Return of the Obra Dinn”

There are different types of dithering: diffusion, pattern and noise. In the image above you can see a Bayer matrix pattern (left) and noise (right).

While dithering can give your art more depth and create interesting details in a cel-shaded environment, it also softens the definition of your edges, which can hurt the visual style and make everything less readable. The higher the resolution you’re using, the less you suffer from this effect.

3. Figuring out my Shaders

Image 6 – First Shader Test Environment by me.

When I started this project I knew very little about shaders, and in my first attempts at a pixel art shader I tried writing one single .shader file that handled all the effects at the same time. This meant that every object in the world needed a material with this shader to keep the look consistent, and couldn’t use a different shader of its own; only objects with my shader were rendered at a lower resolution. The technique I used for the outlines also only drew lines around the outside of each model, with no possibility of internal outlines. This outline technique uses a slightly bigger duplicate of the original mesh, makes it completely black and renders only its back faces, so you see a clean outline from every angle.

These first tests made me realize that it would be best to build the effects separately and write a script that combines them, rendering them on top of each other. It also led me to learn more about writing my own Post Processing effects, or Image Effect shaders. These apply the effect to the camera/screen instead of to an object, meaning I could render everything on screen at a lower resolution and use Edge Detection to draw my outlines.

Shaders to apply to an object: Cel-shading and Dithering

Post Processing: Pixelization, Outlines, Dithering, Color Palette

Dithering can either be applied as an Image Effect or to individual objects.

Image Effect Shader

To write our own Post Processing effect, a little setup is needed. You can create a base Image Effect Shader in Unity, which gives you a .shader file that looks a lot like a standard vertex-fragment shader. This shader, however, needs to be applied to the camera instead of to an object.

Image 7 – Creating Image Effect

All I need to do is attach a C# script to my camera. In that script I create a new Material with the correct shader applied and implement OnRenderImage(). This function is available to all classes that extend MonoBehaviour and provides two RenderTextures: ‘src‘, the image rendered by the camera, and ‘dst‘, which will be output to the screen. Graphics.Blit() applies the shader to src and writes the result to dst.

Image 8 – OnRenderImage function

If you’ve created a new Image Effect, the standard script will invert your colors, so you can test if it works!

4. Creating the effects


Image 9 – Using Unity’s LEGO Microgame project as testing ground for my effects

Pixelization

While I first planned to write a shader for this effect, which is perfectly possible, it turns out to be much easier to do this part in a C# script. It looks a lot like the script used to apply an Image Effect to the camera.

The tile count in one direction (xTileCount) is scaled by the aspect ratio of the RenderTexture to get the count in the other direction. These new dimensions are used to create a temporary RenderTexture that resizes the image. Lastly, the filter mode is changed to Point. The default filter mode, Bilinear, interpolates between pixels to obtain the colors for the final render, which makes the image blurry; Point keeps the pixels sharp and blocky.
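The resize-with-point-filtering idea can be sketched outside Unity as well. Below is a minimal Python version using nearest-neighbour sampling; the name x_tile_count mirrors the script’s variable, everything else is my own illustration, not the actual C# code:

```python
def pixelate(image, x_tile_count):
    """Downsample to x_tile_count columns (rows follow the aspect ratio),
    then upsample back with nearest-neighbour ('Point') sampling."""
    height, width = len(image), len(image[0])
    y_tile_count = max(1, round(x_tile_count * height / width))
    result = []
    for y in range(height):
        row = []
        for x in range(width):
            # Snap each output pixel to the centre of its low-res tile,
            # mimicking FilterMode.Point on the temporary RenderTexture.
            tx = int(x * x_tile_count / width)
            ty = int(y * y_tile_count / height)
            sx = int((tx + 0.5) * width / x_tile_count)
            sy = int((ty + 0.5) * height / y_tile_count)
            row.append(image[sy][sx])
        result.append(row)
    return result
```

Running this on a 4×4 image with x_tile_count = 2 collapses it into four uniform 2×2 blocks, which is exactly the blocky look we’re after.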

Image 10 – Low Res

Edge Detection

When trying to recreate Edge Detection with an Image Effect shader, we need a way to find places on the RenderTexture where the color changes drastically in lightness or hue. This is where the Sobel operator comes in.

Two calculations need to be made, each involving a 3×3 kernel: one in the x- and one in the y-direction. Both calculations are completed independently, after which Pythagoras’ Theorem gives the overall gradient: c = sqrt(a² + b²)
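To make the two kernels and the Pythagoras step concrete, here is the same math as a small Python sketch (the real effect does this per pixel in the fragment shader; the function name is my own):

```python
import math

# Sobel kernels for the x- and y-direction.
GX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]
GY = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]

def sobel_magnitude(img, x, y):
    """Convolve both 3x3 kernels at (x, y) on a grayscale image and
    combine the two gradients with Pythagoras: c = sqrt(a^2 + b^2)."""
    a = b = 0.0
    for j in range(3):
        for i in range(3):
            p = img[y + j - 1][x + i - 1]
            a += GX[j][i] * p
            b += GY[j][i] * p
    return math.sqrt(a * a + b * b)
```

On a flat region both gradients cancel out to zero; across a vertical edge the x-kernel responds strongly while the y-kernel stays silent.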

Edge detection can be done based on color, depth or normals. For each of these calculations we’re going to need the sampling points shown below:

Image 11 – Four Sampling Points
Color-based edges

The first edge detection is based on differences in color. In the code shown below, the .rgb values of the screen texture _MainTex are sampled at each of the sampling points and stored in col0 to col3. The gradients in the x- and y-direction are calculated and stored in c0 and c1; these represent our a and b when we apply Pythagoras’ Theorem.

You can see the formula being used to calculate an overall gradient strength, and the result, representing c, is stored in edgeCol. We only draw an edge if edgeCol is greater than the sensitivity threshold I have set.
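Since the shader code itself isn’t reproduced here, the sketch below is my reading of that calculation in Python. The names col0..col3 and edge_col follow the text; the exact pairing of the samples and the sensitivity value of 0.3 are assumptions:

```python
import math

def edge_strength(col0, col1, col2, col3, sensitivity=0.3):
    """Cross-difference the four RGB samples, combine the two resulting
    gradients with Pythagoras, and threshold against a sensitivity value."""
    a = [c1 - c0 for c0, c1 in zip(col0, col1)]  # gradient across one diagonal
    b = [c3 - c2 for c2, c3 in zip(col2, col3)]  # gradient across the other
    edge_col = math.sqrt(sum(x * x for x in a) + sum(x * x for x in b))
    return edge_col if edge_col > sensitivity else 0.0
```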

Depth-based edges

To handle depth-based edges we use a similar process. It is important to get the depth texture from the camera first and store it in a sampler2D variable ( sampler2D _CameraDepthTexture; ). For readers who haven’t seen a depth texture before: it’s a grayscale image that looks like the picture below.

Depth Texture image borrowed from Cyanilux

Using the four sampling points, we sample the depth texture. Unlike the color detection, where we looked at the .rgb channels, we only need a single channel (.r), stored as one float, because the image is grayscale.

Linear01Depth is a Unity function that converts the raw, non-linear depth buffer value to a linear range between 0 and 1, spanning the near and far clipping planes of the camera. Strictly speaking the edge detection can work without it, but linearizing keeps depth differences comparable across the whole scene.
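For reference, the classic (non-reversed-Z) form of this conversion can be sketched in Python. It mirrors Unity’s 1 / (_ZBufferParams.x * z + _ZBufferParams.y); the near and far clipping plane distances are passed in explicitly here as an assumption of how the sketch is driven:

```python
def linear01_depth(raw_depth, near, far):
    """Convert a non-linear depth-buffer value to a linear 0..1 range
    between the near and far clipping planes (classic, non-reversed-Z
    convention; assumes near > 0)."""
    zx = 1.0 - far / near   # _ZBufferParams.x
    zy = far / near         # _ZBufferParams.y
    return 1.0 / (zx * raw_depth + zy)
```

Note that the near plane maps to near/far rather than exactly 0, which is the behaviour of Linear01Depth as well.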

After this, the calculation is solved the same way as for our color-based edges: the gradients are stored in d0 and d1 and put through Pythagoras’ formula.

Normal-based edges

The last method looks at the normals of each pixel we see. You have probably seen a normal map used on a 3D model before; by declaring sampler2D _CameraDepthNormalsTexture; we get a normals texture of everything the camera sees.

Depth Normals Texture borrowed from Cheng Gu

Just like the previous techniques, we solve the calculations. Here we take the dot product of each difference vector with itself to get its squared magnitude, which we can then use in the Pythagoras formula.
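In Python terms, that step looks roughly like this (a sketch of the math only; the function name and the sample pairing are my own):

```python
import math

def normal_edge(n0, n1, n2, n3):
    """Difference the four sampled normals pairwise; dot(d, d) gives the
    squared magnitude of each difference vector, which slots straight
    into the Pythagoras formula as a^2 and b^2."""
    d0 = [x1 - x0 for x0, x1 in zip(n0, n1)]
    d1 = [x3 - x2 for x2, x3 in zip(n2, n3)]
    return math.sqrt(sum(x * x for x in d0) + sum(x * x for x in d1))
```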

Combining Techniques

To get the values of all the edges at the same time, we take the maximum. max() returns the larger of two numbers, so we compare edgeCol with edgeDepth, then compare the result with edgeNormal. The final color we return is multiplied by (1 − edge).
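The combination step boils down to a few lines; here is a Python sketch of it (the real code runs per pixel in the fragment shader):

```python
def combine_edges(edge_col, edge_depth, edge_normal, color):
    """Keep the strongest of the three edge responses, then darken the
    final color by multiplying it with (1 - edge)."""
    edge = max(max(edge_col, edge_depth), edge_normal)
    return tuple(c * (1.0 - edge) for c in color)
```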

The edge effect can be a bit much, especially if you draw all three lines at full strength (black) but after playing around with the values a bit I have found a result I like.

For the end result it does matter if you draw the edge lines before you pixelate everything or after you pixelate. For a pixel art effect you’re going to want to draw the edges first and then pixelate your screen.

Toon Shading / Cel Shading

The technique in its essence is quite simple. A regular diffuse calculation uses the dot product of the normal vector and the directional light’s direction, which gives the cosine of the angle between them. As such:

float NdotL = dot(normalize(normal),normalize(lightDir));

If you normalize the two vectors, the result will be a value between -1 and 1: 1 if the light shines straight onto the surface, 0 if it arrives at 90 degrees, and -1 if it comes from the exact opposite side.

The most basic two-tone cel-shading can be achieved by picking a cutoff point on the dot product value: the material is lit where it’s above the cutoff and in shadow where it’s below.

To create those extra rings in the shadow, I divide the NdotL I’m returning into several bands by dividing it by a step size and rounding down:

return floor(NdotL/0.3);
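A quick Python sketch of the whole lighting calculation. Note that, unlike the shader line above, I multiply the floored value back by the step size so the output stays in roughly 0..1; that rescale is my own assumption about how the bands are used afterwards:

```python
import math

def ndotl(normal, light_dir):
    """Dot product of the normalized vectors: the cosine of the angle
    between the surface normal and the light direction."""
    def normalize(v):
        m = math.sqrt(sum(x * x for x in v))
        return [x / m for x in v]
    return sum(a * b for a, b in zip(normalize(normal), normalize(light_dir)))

def banded(n_dot_l, step=0.3):
    """Snap the lighting into discrete bands: divide by the step size,
    round down, then scale back; negative (back-facing) light clamps to 0."""
    return max(0.0, math.floor(n_dot_l / step) * step)
```

With step = 0.3 this gives you a handful of distinct shadow rings instead of a smooth gradient.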


Dithering

The dithering effect can be created as a shader on a material or as a post-processing effect, as I’m showing in the images below. With the post-processing approach, the scene is first rendered with normal lighting. A noise texture is then used to threshold each pixel: pixels lighter than the noise texture become light, and lower values become dark. As you can see in the images below, you can use different patterns or noise to get a different dithering effect.

For this effect we use three sampler2D texture properties: a MainTexture, a NoiseTexture and a ColorRampTexture. First we calculate the luminosity of the main texture. To tile the noise texture over the screen, the UV coordinates of each pixel need to be divided by the size of the noise texture and multiplied by the size of the image texture. In the code you see two multiplications because of how a texture’s TexelSize is laid out: x and y contain one divided by the width and height, while z and w contain the width and height themselves. After this we sample the noise texture, calculate the luminance of that pixel, and perform the thresholding. Here you can either make it 1-bit by snapping the values to light or dark, or, as some projects do, use the ColorRampTexture to blend the colors.
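To make the thresholding concrete, here is the same logic as a small Python sketch. The 4×4 Bayer matrix stands in for the noise texture (it’s the classic pattern-dither choice), and the Rec. 601 luma weights are a common pick, not necessarily the exact ones in my shader:

```python
# A 4x4 Bayer matrix; entries 0..15 are normalized to thresholds in (0, 1).
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def luminance(r, g, b):
    """Perceptual luminance of an RGB color (Rec. 601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def dither_1bit(lum, x, y):
    """Threshold a pixel's luminance against the tiled Bayer pattern:
    lighter than the pattern value -> light (1), otherwise dark (0)."""
    threshold = (BAYER4[y % 4][x % 4] + 0.5) / 16.0
    return 1 if lum > threshold else 0
```

A mid-gray area ends up with half of its pixels light and half dark, which is exactly the checkered look you see in ordered dithering.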

Color Ramp

We can sample from the ColorRampTexture by first checking whether the luminance exceeds the threshold, then picking a different color from the ramp depending on how far it goes over.
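Treating the ramp as a small list of colors from dark to light, the lookup can be sketched like this in Python (the function name and list representation are my own):

```python
def sample_ramp(ramp, lum):
    """Map a 0..1 luminance onto one of the colors of a ramp texture,
    ordered dark (left) to light (right)."""
    i = min(int(lum * len(ramp)), len(ramp) - 1)
    return ramp[i]
```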

Don’t forget to also write a C# script with an OnRenderImage(RenderTexture src, RenderTexture dst) function for this effect, and to set the filterMode from Bilinear to Point if you haven’t already.

5. Conclusion

Concluding this blog post, the first thing I want to say is that recreating a pixel art style with shaders goes further than breaking down the most important pixel art techniques and rebuilding them one by one. Even though I’m getting closer to a nice retro look with each effect stacked on top of the others, the result sometimes still reads as a 3D environment and looks closer to a PlayStation One game.

To take the art style one step further, I must look beyond shaders. I’ve noticed effects in other projects that seem to add a lot to the style, like animations running at a lower framerate or textures rendered at a lower resolution.

Maybe, instead of just cel-shading my objects to make the shadows snap from light to dark, I need a post-processing effect that snaps all the colors on the screen to a limited set of values, making the transitions between colors less gradual.

That said, I am happy with the visual results of combining the shaders in this project. I think I’ve come a long way, but I’m definitely not quite there yet. I conclude by saying that these effects can be used as a base of must-have effects to build upon towards a perfected style.


Sources

Article about the art style of Unexplored 2:

Cross Hatch Shader Tutorial I used to learn about using textures on your shadows:

Daniel Illet – Basics of Image Effect Shader:

Shader Basics – Freya Holmér :

Minions Art – Shader Tutorials: , ,

Udemy Course by Jenny Hide:

Interesting forum posts from creator of Obra Dinn for dithering: ,

Rendering and shadows by Catlikecoding:

Ronja’s tutorial on Toon Shading:

Ronja’s tutorial on Halftone Shading:

Roystan on Toon Shading and Outlines: ,

ProPixelizer is a cool paid asset you can check out for a quick and easy effect:
