Procedurally generated dungeon
With mesh sculpting, puzzles and lighting
– By: Koen Groot
Table of contents
- Introduction
- Investigating algorithms
- Starting the dungeon generation
- Adding important pieces
- Getting back performance
- Making a mesh sculptor
- Altering the generation to add custom parts
- Adding extras
- Performance checks & GitHub
- Future of the project
Introduction
For my personal project I'm diving head first into procedurally generated dungeons and what makes them tick. To go a step beyond the norm, I want a certain degree of control over the dungeons that are generated. Specifically, I want to be able to designate a number of 'tiles' in the generated dungeon to my liking, whether empty or otherwise. In this case my goal is to leave them empty and create a secondary mechanic to bridge the empty space: a cliff-like gap where the player can draw a mesh to get across. Every dungeon should be completable with all of this in mind; if I can generate multiple dungeons that each still facilitate these goals, I will be content.
Note: the GitHub repository can be found here: https://github.com/KWgroot/DungeonGenerator.
Investigating algorithms
When I speak of a dungeon I specifically mean a collection of rooms connected by corridors. Through the examples provided by the teachers who gave us this assignment, I found three possible algorithms for this. First, cellular automata: an algorithm that produces open-ended cave systems rather than dungeons, but which has potential for modification.
Next is something called binary space partitioning (BSP), which creates literal rooms by using a tree-like algorithm.

And lastly Delaunay, which creates multiple overlapping rooms of varying sizes, then uses behavior based on the inverse-square law to push the squares apart until they no longer overlap. Finally it builds a graph by connecting the candidate rooms to each other, ending up with a dungeon-like structure.

Breaking down the possibilities of each algorithm boils everything down to three questions that need answering; the answers can be found in the table below.

With this knowledge I could already cross cellular automata off my list, because it does not generate proper dungeon-like structures but rather cave-like ones.
Having narrowed my options down to Delaunay and BSP, I just want to make sure I pick the one that can build the most efficient dungeon for my case. That leads me to the following two studies, the first about Delaunay: https://arxiv.org/pdf/1902.07554.pdf, the second about BSP and other algorithms: http://oslo.ewi.tudelft.nl/Publications-new/2016/SLTLB16/chapter03.Online.pdf. By comparing the findings of the two studies I can determine a few things. First, while Delaunay does give a bit more control over the generated dungeon, that extra control is negligible compared to the performance it loses when creating larger dungeons, as seen in the graph below:

So while it depends on what the creator wants, my choice in this case is BSP, mainly because my goal dungeon has a larger number of rooms to navigate through. The studies also make clear that Delaunay is considerably more difficult than BSP to build from scratch. Seeing as my time is limited to a short four weeks, BSP is without a doubt my preferred algorithm.
Starting the dungeon generation
To begin I needed some kind of mental image of how binary space partitioning actually creates its rooms; luckily http://www.roguebasin.com/index.php/Basic_BSP_Dungeon_generation has a perfect set of example images.

This tells me the exact order of steps I need to follow to properly get the generation started. First, set up variables for the overall dungeon size, the minimum room size, the corridor width and how many times each 'node' should be split.
But to determine where these nodes and rooms go, we still need to divide the designated area into smaller pieces that rooms can be placed inside of. This is where we get to the hardest part of the assignment: the algorithm. First I create a queue called graph, which represents my tree structure of nodes, and a list that will hold every node that needs a room. Then I take the current node from the queue and check whether its size exceeds the minimum room size mentioned earlier. If it does in either direction, the space has to be split at a random location to create two new child nodes. These two children are then added to the aforementioned list and queue so we can reference them later.
The division itself is done by randomly choosing a horizontal or vertical split and then, while respecting the minimum room size, placing the split at a random position inside the old parent node. The results are the two child nodes mentioned earlier.
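As a rough illustration of that splitting step, here is a minimal sketch of dividing one node into two children. The class, method and parameter names are placeholders of my own, not the project's actual code, and it assumes the caller has already verified the node is large enough to split in at least one direction.

using UnityEngine;

public static class BspSplitSketch
{
    // Split the space between bottomLeft and topRight into two children, either left/right
    // or bottom/top, at a random line that leaves at least minRoomSize on both sides.
    public static void Split(Vector2Int bottomLeft, Vector2Int topRight, int minRoomSize,
        out Vector2Int childOneTopRight, out Vector2Int childTwoBottomLeft)
    {
        bool canSplitVertically = topRight.x - bottomLeft.x >= 2 * minRoomSize;
        bool canSplitHorizontally = topRight.y - bottomLeft.y >= 2 * minRoomSize;
        bool splitVertically = canSplitVertically && (!canSplitHorizontally || Random.Range(0, 2) == 0);

        if (splitVertically)
        {
            int splitX = Random.Range(bottomLeft.x + minRoomSize, topRight.x - minRoomSize);
            childOneTopRight = new Vector2Int(splitX, topRight.y);     // left child keeps its bottom-left corner
            childTwoBottomLeft = new Vector2Int(splitX, bottomLeft.y); // right child keeps its top-right corner
        }
        else
        {
            int splitZ = Random.Range(bottomLeft.y + minRoomSize, topRight.y - minRoomSize);
            childOneTopRight = new Vector2Int(topRight.x, splitZ);     // bottom child
            childTwoBottomLeft = new Vector2Int(bottomLeft.x, splitZ); // top child
        }
    }
}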
But all this only sets up our candidate nodes without actually making any rooms out of them. Now that we have all the node data required, it's time to make some floor shapes for the dungeon. By traversing the graph to collect the lowest leaves of the tree, we get exactly the nodes that need to be drawn; all parent nodes have already been divided, so they are ignored from this point on. To generate the rooms we loop over every node that still needs one and generate two points for it, the bottom left and the top right (the other corners follow with some simple math). By making the room always 10% smaller than the full node, it will always fit. That gets us to the final step: generating a mesh to show our dungeon's floor plan. Some simple triangle drawing, setting the UVs and creating each required mesh gives a result like the one below. No corridors yet, of course, but you can see each section being neatly divided, with each section containing a room that doesn't overlap any other.
CreateMesh(listOfRooms[i].BottomLeftAreaCorner, listOfRooms[i].TopRightAreaCorner, roomMat); // Just create room (floor)
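The project's CreateMesh itself isn't reproduced here, but a minimal version of the idea, building a flat quad from the bottom-left and top-right corners and giving it a material, could look something like the sketch below. The names and the omitted start/end room handling are assumptions of mine.

using UnityEngine;

public static class FloorMeshSketch
{
    // Build one flat floor quad between two corners: four vertices, two triangles, simple 0..1 UVs.
    public static GameObject CreateFloorQuad(Vector2 bottomLeft, Vector2 topRight, Material material)
    {
        Vector3 bl = new Vector3(bottomLeft.x, 0, bottomLeft.y);
        Vector3 br = new Vector3(topRight.x, 0, bottomLeft.y);
        Vector3 tl = new Vector3(bottomLeft.x, 0, topRight.y);
        Vector3 tr = new Vector3(topRight.x, 0, topRight.y);

        Mesh mesh = new Mesh
        {
            vertices = new[] { tl, tr, bl, br },
            uv = new[] { new Vector2(0, 1), new Vector2(1, 1), new Vector2(0, 0), new Vector2(1, 0) },
            triangles = new[] { 0, 1, 2, 2, 1, 3 } // wound clockwise seen from above, so the floor faces up
        };
        mesh.RecalculateNormals();

        GameObject floor = new GameObject("Floor", typeof(MeshFilter), typeof(MeshRenderer));
        floor.GetComponent<MeshFilter>().mesh = mesh;
        floor.GetComponent<MeshRenderer>().material = material;
        return floor;
    }
}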

Now that we have some nice rooms, we should connect them. Luckily the method for doing so is relatively simple: connect each leaf node to its neighbouring leaf node at the same depth in the tree, and keep doing this all the way up to the top of the graph until all rooms have their connections. For the position of the actual corridor we need some extra information about each situation. For starters, where is the neighbouring leaf node relative to the room we are currently checking? Once that is known, start in the middle of the room and check whether anything prevents placing a corridor right there, for instance the edge of the room or the other room not lining up. Once the origin point of the corridor is found, simply create another mesh just like a room and draw it between the two rooms. For convenience I add all corridors to the room list, because they work exactly the same way.
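To make the horizontal case a little more concrete, here is a sketch of how the corridor rectangle between a left and a right room could be derived. It assumes the two rooms overlap on the Z axis by at least the corridor width; the real generator handles more edge cases than this, and all names here are placeholders.

using UnityEngine;

public static class CorridorSketch
{
    // Corridor between a room on the left and a room on the right, each given by its corners.
    public static void GetHorizontalCorridor(
        Vector2Int leftBottomLeft, Vector2Int leftTopRight,
        Vector2Int rightBottomLeft, Vector2Int rightTopRight, int corridorWidth,
        out Vector2Int corridorBottomLeft, out Vector2Int corridorTopRight)
    {
        // Work out the Z range both rooms share and aim for its middle.
        int overlapMin = Mathf.Max(leftBottomLeft.y, rightBottomLeft.y);
        int overlapMax = Mathf.Min(leftTopRight.y, rightTopRight.y);
        int centreZ = Mathf.Clamp((overlapMin + overlapMax) / 2,
            overlapMin + corridorWidth / 2, overlapMax - corridorWidth / 2);

        // The corridor spans the gap between the left room's right edge and the right room's left edge.
        corridorBottomLeft = new Vector2Int(leftTopRight.x, centreZ - corridorWidth / 2);
        corridorTopRight = new Vector2Int(rightBottomLeft.x, centreZ + corridorWidth / 2);
    }
}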

Up next: creating a start room and an end room. The start room is always the first room we generate; this works well because, thanks to the binary space partitioning, it's not always the same room, and it's never unreachable. The end room had me thinking for a bit until I realised that I always have exactly as many corridors as rooms, and the corridors are appended to the list after the rooms. That means the element in the exact middle of the list is the last room generated before the corridors start. After designating that as the end room, we have a completely randomly generated start and end room that we can treat specially.
if (i == 0)
    CreateMesh(listOfRooms[i].BottomLeftAreaCorner, listOfRooms[i].TopRightAreaCorner, startRoomMat, true); // Start room
else if (i == listOfRooms.Count / 2)
    CreateMesh(listOfRooms[i].BottomLeftAreaCorner, listOfRooms[i].TopRightAreaCorner, endRoomMat); // End room

Adding important pieces
But just a floor hardly makes a dungeon; it needs walls all around it and inside it. Because we already have most of the data about every room and its positions, this isn't too difficult. That said, I did take the easy route and instantiated two wall prefabs instead of generating meshes for them, mainly because figuring out the right orientation for every wall and then winding their triangles toward the centre of each room would simply take too much time. What I did do is keep a list of possible wall locations and a list of possible door locations. Because I use a very simple integer-based system, I can use walls that are always exactly one unit long. If I then iterate along a wall, say from bottom left to bottom right, I simply add every point in between where a wall should be placed to the list. When I try to add a point that is already in the list, it means a corridor meets the wall there; that location is saved in the possible door locations list instead and the wall is removed from that spot. Lastly, I take all the points from the list and instantiate a wall on each one, oriented horizontally or vertically as needed.
for (int row = (int)bottomLeftV.x; row < (int)bottomRightV.x; row++)
{
    Vector3 wallPosition = new Vector3(row, 0, bottomLeftV.z); // Set wall positions based on the floor mesh.
    AddWallPositionToList(wallPosition, possibleWallHorizontalPosition, possibleDoorHorizontalPosition);
}
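The duplicate check described above could look roughly like the sketch below. The real AddWallPositionToList in the project also deals with torch positions and wall orientation, which I leave out here.

using System.Collections.Generic;
using UnityEngine;

public static class WallListSketch
{
    // Register a wall point; if another room or corridor already claimed it, it must be an opening.
    public static void AddWallPositionToList(Vector3 wallPosition,
        List<Vector3> wallPositions, List<Vector3> doorPositions)
    {
        if (wallPositions.Contains(wallPosition))
        {
            // Two floors meet here, so instead of a wall this spot becomes a doorway.
            wallPositions.Remove(wallPosition);
            doorPositions.Add(wallPosition);
        }
        else
        {
            wallPositions.Add(wallPosition);
        }
    }
}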

A dungeon with walls is nice and all, but it's not complete without a roof either. To get one we could simply duplicate the floor everywhere and place it on top, but I'd rather generate the roof the same way I generate the floor. If I just copy the floor, however, the wrong side will render, so I flip the triangles on the newly created roof, place it right on top of the walls, and the ceiling is done.
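A minimal sketch of that flip, assuming the floor object and a known wall height, could look like this; the naming is mine, not the project's.

using UnityEngine;

public static class CeilingSketch
{
    // Copy the floor mesh, reverse its triangle winding so the underside renders, and lift it up.
    public static GameObject CreateCeilingFromFloor(GameObject floor, float wallHeight)
    {
        Mesh floorMesh = floor.GetComponent<MeshFilter>().mesh;

        Mesh ceilingMesh = new Mesh();
        ceilingMesh.vertices = floorMesh.vertices;
        ceilingMesh.uv = floorMesh.uv;

        int[] triangles = floorMesh.triangles; // returns a copy, so the floor itself is untouched
        System.Array.Reverse(triangles);       // reversing the index order flips every triangle's winding
        ceilingMesh.triangles = triangles;
        ceilingMesh.RecalculateNormals();

        GameObject ceiling = Object.Instantiate(floor);
        ceiling.name = "Ceiling";
        ceiling.GetComponent<MeshFilter>().mesh = ceilingMesh;
        ceiling.transform.position = floor.transform.position + Vector3.up * wallHeight;
        return ceiling;
    }
}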
Getting back performance
The dungeon is just about ready in its most basic form, but before I continue to the next part, the mesh sculpting mechanic, I need to look at performance for a bit. In the clips above you can see very few draw calls being made despite the many walls and floors. This is because I combine the meshes of all the walls into one giant mesh. By not having a directional light in the scene, the number of vertices that need to be drawn also drops, because a directional light requires a second geometry pass. And last but not least, by enabling fog and limiting the camera's render distance, only the parts that actually need to be rendered are.
MeshFilter[] meshFilters = parent.GetComponentsInChildren<MeshFilter>();
CombineInstance[] combine = new CombineInstance[meshFilters.Length];

for (int i = 1; i < meshFilters.Length; i++) // start at 1 to skip the parent's own MeshFilter
{
    combine[i].mesh = meshFilters[i].sharedMesh;
    combine[i].transform = meshFilters[i].transform.localToWorldMatrix;
    meshFilters[i].gameObject.SetActive(false); // the originals no longer need to be rendered
}

MeshFilter parentFilter = parent.transform.GetComponent<MeshFilter>();
parentFilter.mesh = new Mesh();
parentFilter.mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32; // allow more than 65k vertices
parentFilter.mesh.CombineMeshes(combine, true, true); // bake everything into one mesh in world space
Making a mesh sculptor
When I started researching mesh sculpting, I just couldn't find what I had in mind anywhere, until I found one single asset on the Unity Asset Store: https://assetstore.unity.com/packages/tools/modeling/mesh-pencil-3d-mesh-generator-from-2d-draw-data-179269. While this is a great tool, it wasn't really what I wanted to use either, because it is far too complicated for something as simple as what I had in mind. But it did give me some good ideas, for instance drawing on a collider and placing the result right there. I also realised that the drawing angle matters, so I needed a dedicated camera pointed at whatever this drawing object would be. So I got to work: I created two planes with a gap between them, placed a box collider in the middle to catch my raycasts, and started instantiating test objects wherever my mouse hit.
if (interacting && Input.GetMouseButton(0) && previousMousePos != Input.mousePosition) // Currently drawing and moving the mouse.
{
    Ray ray = camera.ScreenPointToRay(Input.mousePosition);
    RaycastHit hit;

    if (Physics.Raycast(ray, out hit, raycastDistance, layerMask))
    {
        if (hit.transform.name != "Segment" && previousPoint != Vector3.zero) // Hit an empty space.
        {
            // Place object
        }
    }
}
However, just instantiating prefabs is most certainly not what I had in mind, so I replaced that part with creating a mesh that has the width of a corridor and a length that can be altered in the editor.
That looked more like it, but it still lacked the smooth drawing effect I'm after. To get this I need to drop the length that's set in the editor and instead use the previous mouse position and the current one whenever I detect the mouse moving, draw a mesh with the right width between those two points, and combine all these segment meshes into one bridge mesh when the player is done drawing.
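As a sketch of one such segment, assuming the previous and current raycast hit points and the corridor width, the mesh in between could be built like this; the project's own drawing code may differ in the details.

using UnityEngine;

public static class DrawSegmentSketch
{
    // One flat quad of the given width between the previous and current hit points on the drawing plane.
    public static Mesh CreateSegmentMesh(Vector3 previousPoint, Vector3 currentPoint, float width)
    {
        Vector3 forward = (currentPoint - previousPoint).normalized;
        Vector3 side = Vector3.Cross(Vector3.up, forward) * (width / 2f); // sideways offset giving the segment its width

        Mesh mesh = new Mesh
        {
            vertices = new[]
            {
                previousPoint - side, previousPoint + side,
                currentPoint - side, currentPoint + side
            },
            uv = new[] { Vector2.zero, Vector2.right, Vector2.up, Vector2.one },
            triangles = new[] { 0, 2, 1, 1, 2, 3 } // wound so the top of the bridge segment is visible
        };
        mesh.RecalculateNormals();
        return mesh;
    }
}

Combining the per-segment meshes into the final bridge afterwards works the same way as the wall combining shown earlier.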
With that, the mechanic is ready to be tested in the actual dungeon. Note that this only covers the horizontal version; the vertical version needs slight alterations to work.
Altering the generation to add custom parts
The very first alteration is placing the previously created drawing area in exactly the right position, so that vertical areas only go on vertical corridors. Luckily I kept this in mind a while back and made sure every node knows its direction if it was created as a corridor node. By checking whether the node we are about to create is a corridor, checking its direction and randomly selecting which corridors to remove, we can place our custom area. The area then has to be placed at the centre of the removed corridor's mesh, which is easy enough because we still have all the positions. All of this results in a well-placed drawing area that only needs minor changes, like the camera placement.
GameObject area = GameObject.Instantiate(drawAreaHorizontal,
    new Vector3(listOfRooms[i].BottomLeftAreaCorner.x +
        (float)(listOfRooms[i].TopRightAreaCorner.x - listOfRooms[i].BottomLeftAreaCorner.x) / 2f,
        0,
        listOfRooms[i].BottomLeftAreaCorner.y +
        (listOfRooms[i].TopRightAreaCorner.y - listOfRooms[i].BottomLeftAreaCorner.y) / 2),
    Quaternion.identity);

area.transform.localScale = new Vector3(listOfRooms[i].TopRightAreaCorner.x - listOfRooms[i].BottomLeftAreaCorner.x, 1, 1);
area.transform.GetChild(0).GetComponent<Camera>().orthographicSize =
    1.2f + (listOfRooms[i].TopRightAreaCorner.x - listOfRooms[i].BottomLeftAreaCorner.x) / 10f;
area.transform.parent = transform;
Above you can only see the horizontal drawing area; the vertical one needs a different configuration because you don't draw along the X axis but along the Z axis. After making a new prefab and making sure it's placed in vertical corridors, the script only needed a small change to draw vertically or horizontally, and that was all that was required.
The dungeon may have its new generation, but it still looks terrible, because it's an enclosed space without any light in it. To fix this I started altering the generation to place 'torches' at calculated intervals. Just like all the parts before, the number of torches has to be adjustable so I can create any type of dungeon from variables. The idea I devised is that a 'torch' is placed along a wall in increments based on the length of that wall; the direction of the wall doesn't matter, as we simply walk the coordinates from its beginning to its end. By adjusting the step size of the increment, we get more or fewer torches everywhere.

Now that torches are placed, they just have to look good. Luckily I opted for URP at the start, so I used bloom and emission to get some nice-looking 'torches'. Next I wanted actual light sources, which the URP version I was on didn't really support, since only eight or fewer lights can be active at any given point. So I needed to investigate a bit further, because while I don't need more than twelve lights at any point, I still want them to always work. That led me to URP version 12, which supports deferred rendering and allows me to place as many lights as I want with minimal impact on performance. After converting the project, the lights all worked flawlessly and my dungeon is now well lit.
for (int row = (int)bottomLeftV.x; row < (int)bottomRightV.x; row++)
{
    Vector3 wallPosition = new Vector3(row, 0, bottomLeftV.z); // Set wall / torch positions based on the floor mesh.

    if (row == (int)bottomLeftV.x + (increment * index))
    {
        if (increment <= 1)
            index += 2;
        else
            index++;

        hasTorch = true; // This wall should have a torch.
    }

    AddWallPositionToList(wallPosition, possibleWallHorizontalPosition, possibleDoorHorizontalPosition, hasTorch, possibleTorchHorizontalPosition, true);
    hasTorch = false;
}
Adding extras
There are a few extras I added after finishing all my initial goals, all of them smaller tasks based on feedback I received during the project. First, making the torches actual torches instead of cubes with light sources.

Depending on whether the wall they are placed on is vertical or horizontal, they get a different rotation. Next up are corridors that are a bit more random, as they originally always sat in the middle of a room.

After that the dungeon just felt far too quiet, so I added some background audio and a footstep loop that plays when the player is walking.
The last two things I added are a staircase in the end room, which I defined very early on, and a spike pit underneath the drawing area.

There are a few things I haven't mentioned, like the menu options and the minimap, because they are not part of the actual dungeon generation. You can of course find them in the full project if you are interested.
Performance checks & GitHub
After all was said and done, I figured I should get rid of things I no longer use, both in the code and in objects that slow down the game. The second part is the most important: by looking through the profiler and the stats window I found a few issues. First, too many objects creating draw calls, and second, too many vertices and triangles at any given point. To reduce the draw calls I looked at the torches first, as their fire is a particle effect; by limiting the maximum number of particles emitted, the draw calls drop by at least 50%. Next I put an LOD group on each torch, so a torch only needs a draw call when you're close enough, and only shows the fire when you're even closer.
Next up: while I already combine the wall meshes, I don't yet do the same for the ceiling or floor. This isn't strictly required either, but it starts weighing in more heavily the more rooms the dungeon creates. A smaller dungeon won't notice any difference, while a larger one sheds around 100 draw calls on average. The only downside is that I can't use any vertex-reducing technique on them anymore, because once any part of the dungeon floor or ceiling is rendered, all of it is, since it's now one object. But considering I only gain about 1k vertices at most, this is negligible.
The biggest improvement I could still make performance-wise is switching to a different character model, but I don't have the time for that. Over 60% of the vertices and triangles in my entire dungeon scene now come from just that model, so a lot could be gained by swapping it out.
Difference without performance functions
First off, without any optimization the stats window shows just how many draw calls we would have to make, and double the number of triangles and vertices at any given point.

First optimization on the list: combining as many meshes as possible to start shaving off draw calls and vertices.
Combining mesh functions

When applied to the walls, floors and ceilings, the gain shows mostly in draw calls, but also in outright frames per second.

One-sided rendering
Originally all my walls used a double-sided material, meaning double the rendering effort while I only really needed one side. So when a wall is placed, the function can now be called with a flip parameter; when it's used, all triangles on that wall are flipped, so I can use front-facing materials on my walls. Strangely, this makes such a marginal difference that I can't even find it anywhere.

Deferred light rendering
While I can't show a performance difference for this part, the reason I can't is also the biggest reason it's the biggest performance gain. I switched to URP 12 to get deferred lighting because older URP versions did not allow me to have more than eight lights active in the scene at any point. What I could do, though, was make sure none of the lights cast shadows, meaning no extra vertices. Here are the stats with shadows on every light on the left, and no shadows on the right.
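Assuming the lights sit under the dungeon object, switching their shadows off from script is only a couple of lines like the sketch below; the project may simply do this on the light prefab instead, and dungeonRoot is a placeholder name.

// Turn off shadow casting on every light in the dungeon so they add no shadow-pass geometry.
// dungeonRoot is a placeholder for the parent of the generated dungeon.
foreach (Light dungeonLight in dungeonRoot.GetComponentsInChildren<Light>())
    dungeonLight.shadows = LightShadows.None;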

Amount of particles
Normally my flame particle effect would create up to 100 particles, which isn't terrible, but not great either. So I remade it with a cap of 10 particles in total. The biggest gain here is in the number of draw calls.
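For reference, the same cap can also be set from script through the particle system's main module; flame here is a placeholder reference to the torch's ParticleSystem.

// Cap the flame effect at 10 particles; 'flame' is a placeholder ParticleSystem reference.
ParticleSystem.MainModule main = flame.main;
main.maxParticles = 10;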

LOD system
But I was still always rendering ALL torches with ALL fire effects, so a Level of Detail group is added to every torch. Far enough away, the entire torch is culled; get closer and the mesh is rendered but not the flame; closer still and more of the flame's effects start playing, like smoke and small embers flying off. This results in the following stats.
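The same setup can be expressed in code as a sketch; the renderer groups and screen-size thresholds below are placeholders for whatever the torch prefab actually uses.

using UnityEngine;

public static class TorchLodSketch
{
    // Two LOD levels plus culling: full torch with flame up close, bare mesh at mid range, nothing far away.
    public static void SetUpTorchLods(GameObject torch, Renderer[] meshAndFlame, Renderer[] meshOnly)
    {
        LODGroup lodGroup = torch.AddComponent<LODGroup>();
        lodGroup.SetLODs(new[]
        {
            new LOD(0.20f, meshAndFlame), // above ~20% screen height: mesh plus flame effect
            new LOD(0.05f, meshOnly)      // between ~5% and ~20%: mesh only, no flame
                                          // below ~5% screen height the torch is culled entirely
        });
        lodGroup.RecalculateBounds();
    }
}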

Render distance and fog
And finally: right now the entire dungeon is always in view of the camera. Since we don't need to see that much in such a dark setting, the camera's render distance is changed from the default 1000 to 40, which gives the following results.
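In code that boils down to two settings; the far clip value of 40 is the one mentioned above, while the exact fog mode and colour here are assumptions on my part.

// Shorten the camera's draw distance and let fog hide the cutoff.
Camera.main.farClipPlane = 40f;
RenderSettings.fog = true;
RenderSettings.fogMode = FogMode.Exponential; // assumed mode; any dark fog hides the clipped geometry
RenderSettings.fogColor = Color.black;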

If you’ve made it this far and are interested in looking through all my code, you’re more than welcome over at: https://github.com/KWgroot/DungeonGenerator.
Future of the project
While I might not work on this project much for quite a while due to an upcoming internship, it's highly likely I will continue it in the future. Some of the features I still want to add are a physics-based puzzle system built on the mesh drawing mechanic, preferably one that can also be generated so it won't feel like the same puzzle every time. I also want to see if I can vary the look of the dungeon randomly, either between generated dungeons or within the same dungeon; things like a moss dungeon or an ice dungeon really interest me. Each of these versions would in turn generate different types of puzzles, and perhaps even enemies if they fit the world. Lastly, I had the idea of some kind of treasure system that either gives loot with stats or simply grants stat buffs. By stats I mean things like speed, jump height, jump speed, amount of drawing ink (optional) and so on.
When I will get to work on this project again is still unknown to me, but I do feel like this is a solid foundation I could use for something interesting.
References
Funke, D., Sanders, P. & Winkler, V. (2019, February 21). Load-balancing for parallel Delaunay triangulations: https://arxiv.org/pdf/1902.07554.pdf
Shaker, N., Liapis, A., Togelius, J., Lopes, R. & Bidarra, R. (draft). Constructive generation methods for dungeons and levels: http://oslo.ewi.tudelft.nl/Publications-new/2016/SLTLB16/chapter03.Online.pdf
Fleming Jr., R. (2020, October 12). Basic BSP Dungeon generation: http://www.roguebasin.com/index.php/Basic_BSP_Dungeon_generation
Korinnyi, N. (2021, April 13). Mesh Pencil: 3D mesh generator from 2D draw data: https://assetstore.unity.com/packages/tools/modeling/mesh-pencil-3d-mesh-generator-from-2d-draw-data-179269
Williams, N. M. (n.d.). An investigation into dungeon generation: http://www.nathanmwilliams.com/files/AnInvestigationIntoDungeonGeneration.pdf