Dynamic Terrain

This was a short 2.5-week specialization project for school in which I experimented with surface generation as a way to build terrain, exploring its potential uses and flaws. I used C++ and DirectX 11.





It will be presented in a follow-along kind of style, skipping a lot of fluff, sidetracks and details, and ending with conclusions.



The first step was to set up a basic project and research the field of surface generation in more detail, to know my options and potential pitfalls. Unfortunately it's not a field overflowing with information, but I found more than enough to get me started at least. I decided to go with the traditional cube marching method, as it was the most well-documented and widely used of the various methods; it was also very easy to understand and an excellent entry point.

Explained in an extremely simplified way, these surface generation algorithms all work by sampling various points in space and classifying each as 'inside' or 'outside'. If two neighbouring points differ, one inside and the other outside, then there is a surface somewhere between them, and you use this information to connect the dots.

The cube marching algorithm works by sampling 8 points (imagine them as the corners of a cube), then building a one-byte unsigned integer in which each bit represents the state of one of the 8 sampled points. This integer is then used as an index into a lookup table of possible surface configurations for the given states. The method can be extended by interpolating the positions of the vertices based on the relative influence of the sampled points, resulting in a more expressive surface.
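In rough terms, building that case index looks something like the sketch below (not the project's actual code; the corner-to-bit ordering and the inside/outside convention are assumptions, and the standard 256-entry edge and triangle lookup tables are omitted):

```cpp
#include <cstdint>

// Sketch: build the 8-bit case index for one cube of samples.
// density[i] holds the sampled field value at corner i; values below the
// iso level count as 'inside'. The resulting index (0-255) is used to look
// up which edges the surface crosses in a precomputed table.
uint8_t BuildCaseIndex(const float density[8], float isoLevel)
{
    uint8_t caseIndex = 0;
    for (int corner = 0; corner < 8; ++corner)
    {
        if (density[corner] < isoLevel)   // corner is inside the surface
            caseIndex |= (1u << corner);  // set the bit for this corner
    }
    return caseIndex;                     // 0 or 255 means no surface in this cube
}
```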

The next step was to create the structure for the data to sample from. There are many methods, but I went with a signed distance field. I then visualized the structure by setting the values to describe a hollow sphere and drawing a solid box at every 'inside' sample point.
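A signed distance field simply stores, for every sample point, the distance to the nearest surface, with the sign telling you whether the point is inside or outside. Filling a grid with a sphere could look roughly like this (a sketch with assumed names and grid layout, using a solid sphere for simplicity; a hollow one would measure distance to the shell instead):

```cpp
#include <cmath>
#include <vector>

// Sketch: fill a regular grid with the signed distance to a sphere.
// Negative values are inside the sphere, positive values are outside.
std::vector<float> BuildSphereField(int size, float radius)
{
    std::vector<float> field(size * size * size);
    const float c = (size - 1) * 0.5f;   // sphere centered in the grid
    for (int z = 0; z < size; ++z)
        for (int y = 0; y < size; ++y)
            for (int x = 0; x < size; ++x)
            {
                float dx = x - c, dy = y - c, dz = z - c;
                float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
                field[x + y * size + z * size * size] = dist - radius;
            }
    return field;
}
```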



In order to have a better testing environment, I used simplex noise to generate more interesting volume data:
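Something along these lines, where simplexNoise3 stands in for whichever 3D simplex noise implementation is used (the function name and the vertical gradient term are illustrative assumptions, not the project's code):

```cpp
#include <vector>

// Sketch: fill the field with 3D simplex noise plus a vertical gradient,
// so lower samples tend to be 'inside' and higher ones 'outside'.
// simplexNoise3 is a stand-in for any 3D simplex noise implementation
// returning values in roughly [-1, 1]; it is assumed to exist elsewhere.
float simplexNoise3(float x, float y, float z);

void FillNoiseField(std::vector<float>& field, int size, float frequency)
{
    for (int z = 0; z < size; ++z)
        for (int y = 0; y < size; ++y)
            for (int x = 0; x < size; ++x)
            {
                float noise = simplexNoise3(x * frequency, y * frequency, z * frequency);
                float gradient = (y - size * 0.5f) / (size * 0.5f); // -1 at bottom, +1 at top
                field[x + y * size + z * size * size] = noise + gradient;
            }
}
```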



To better visualize the form, I implemented tri-planar shading:
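The blending itself lives in the pixel shader, but the core of tri-planar shading is just weighting three axis-aligned texture projections by the surface normal. Shown here in illustrative C++ rather than HLSL (a sketch, not the project's shader):

```cpp
#include <cmath>

// Sketch of the tri-planar weighting idea: sample the texture projected along
// each axis and blend the three samples by how much the surface normal faces
// that axis. Only the blend-weight math is shown here.
struct Vec3 { float x, y, z; };

Vec3 TriplanarWeights(Vec3 normal)
{
    Vec3 w{ std::fabs(normal.x), std::fabs(normal.y), std::fabs(normal.z) };
    float sum = w.x + w.y + w.z;
    return Vec3{ w.x / sum, w.y / sum, w.z / sum };  // weights sum to 1
}
// finalColor = w.x * sampleYZ + w.y * sampleXZ + w.z * sampleXY
```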



I then began to implement the cube marching algorithm. The form looked right, but some triangles were black. My first suspicion was incorrect normals, since you couldn't see through the black triangles, and a quick look at the normals confirmed it (second picture). This was quickly fixed by using the field's sign information to calculate a reference vector, which was checked against the triangle normal to determine whether the triangle normal should be flipped.
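A sketch of that kind of fix, assuming the field gradient at the triangle centre is available (estimated with central differences, for example); the names are illustrative:

```cpp
// Sketch of the normal fix: the field gradient points from inside towards
// outside, so flip the geometric triangle normal if it disagrees with it.
struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

void OrientNormal(Vec3& triangleNormal, Vec3 fieldGradientAtCentre)
{
    // A negative dot product means the normal points back into the volume.
    if (Dot(triangleNormal, fieldGradientAtCentre) < 0.0f)
    {
        triangleNormal.x = -triangleNormal.x;
        triangleNormal.y = -triangleNormal.y;
        triangleNormal.z = -triangleNormal.z;
    }
}
```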



And done:



The next step was to finalize the interpolation extension of the cube marching algorithm, as well as fix issues with the value mapping of the simplex values. And there we go, cube marching:
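The interpolation extension just moves each edge vertex from the midpoint to where the field actually crosses the iso level. Something like this sketch (names assumed):

```cpp
struct Vec3 { float x, y, z; };

// Sketch of the edge interpolation: given the two corner positions and their
// field values, place the vertex where the field crosses the iso level.
// Called only for edges whose corners lie on opposite sides of the surface,
// so the denominator is never zero.
Vec3 InterpolateEdge(Vec3 p0, Vec3 p1, float v0, float v1, float isoLevel)
{
    float t = (isoLevel - v0) / (v1 - v0);  // 0 at p0, 1 at p1
    return Vec3{ p0.x + t * (p1.x - p0.x),
                 p0.y + t * (p1.y - p0.y),
                 p0.z + t * (p1.z - p0.z) };
}
```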



Next I wanted to experiment with manipulating the distance fields. I created a simple sphere function to add to and subtract from the field, and put an offset on the simplex algorithm since I was recalculating the surface every frame:
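On a signed distance field, adding and subtracting a sphere reduces to min/max operations against the sphere's own distance function. A sketch of such a brush (assumed names and grid layout, not the project's code):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of a sphere 'brush' on a signed distance field: union (add matter)
// takes the minimum of the field and the sphere distance, subtraction
// (remove matter) takes the maximum of the field and the negated sphere distance.
void ApplySphereBrush(std::vector<float>& field, int size,
                      float cx, float cy, float cz, float radius, bool addMatter)
{
    for (int z = 0; z < size; ++z)
        for (int y = 0; y < size; ++y)
            for (int x = 0; x < size; ++x)
            {
                float dx = x - cx, dy = y - cy, dz = z - cz;
                float sphere = std::sqrt(dx * dx + dy * dy + dz * dz) - radius;
                float& sample = field[x + y * size + z * size * size];
                sample = addMatter ? std::min(sample, sphere)    // add matter
                                   : std::max(sample, -sphere);  // remove matter
            }
}
```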



Until now I had been working on a single 'chunk' of terrain, so I decided it was time to create a structure to hold multiple chunks and load them in as the camera gets closer.
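A minimal sketch of what such a chunk store could look like, keyed by integer chunk coordinates (the types and hashing are illustrative assumptions):

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>

// Sketch of a chunk store keyed by integer chunk coordinates. Chunks are
// created lazily as the camera moves within load range; the real Chunk type
// would hold the distance field samples and the generated mesh buffers.
struct Chunk { /* field samples, vertex/index buffers, ... */ };

struct ChunkKey
{
    int x, y, z;
    bool operator==(const ChunkKey& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct ChunkKeyHash
{
    std::size_t operator()(const ChunkKey& k) const
    {
        return std::hash<int64_t>()(((int64_t)k.x * 73856093) ^
                                    ((int64_t)k.y * 19349663) ^
                                    ((int64_t)k.z * 83492791));
    }
};

class ChunkManager
{
public:
    Chunk& GetOrCreate(int cx, int cy, int cz)
    {
        return chunks[ChunkKey{ cx, cy, cz }];  // default-constructs on first access
    }
private:
    std::unordered_map<ChunkKey, Chunk, ChunkKeyHash> chunks;
};
```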



One of the most important aspects of my work was whether surface generation could reduce the amount of disk data required to store terrain. My idea was to use a sort of 'layering' system to build terrain when needed rather than storing the compiled result raw. Similarly to Photoshop, one would use layers, which could be anything from randomization algorithms, localized heightmaps, models, fields, functions or voxels, to generate the desired terrain.
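The shape of the idea can be sketched as a stack of layers that each rewrite the field in order (an illustrative sketch only; the layer types and names are assumptions, not the project's code):

```cpp
#include <memory>
#include <vector>

// Sketch of the layering idea: each layer modifies the distance field in
// sequence, so the 'compiled' terrain is rebuilt on demand from a small stack
// of layer descriptions instead of being stored raw.
struct Field { std::vector<float> samples; int size; };

struct TerrainLayer
{
    virtual ~TerrainLayer() = default;
    virtual void Apply(Field& field) const = 0;  // noise, heightmap, cutoff function, ...
};

struct CutoffTopLayer : TerrainLayer  // e.g. a function that cuts off the field's top
{
    float height = 0.0f;
    void Apply(Field& field) const override
    {
        for (int z = 0; z < field.size; ++z)
            for (int y = 0; y < field.size; ++y)
                for (int x = 0; x < field.size; ++x)
                    if (y > height)  // everything above 'height' becomes empty space
                        field.samples[x + y * field.size + z * field.size * field.size] = 1.0f;
    }
};

void BuildTerrain(Field& field, const std::vector<std::unique_ptr<TerrainLayer>>& layers)
{
    for (const auto& layer : layers)
        layer->Apply(field);  // layers are applied in order, like Photoshop
}
```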

Here's an example of the result of two functions (cutting off the field's top and bottom) working on a high-resolution simplex sample combined with a lower-resolution sample:



And to make it clearer, here's a much higher resolution with an amplified influence applied:



This felt very promising, but I had neither the time nor the code base to build tools for this layering experiment.

I moved on and implemented a ray-terrain intersection method so I could add and subtract matter from the terrain in a 'paint' kind of way:
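A sketch of the kind of ray-terrain intersection this implies, stepping along the ray until the field goes negative (SampleField and the fixed-step approach are assumptions; a true distance field would also allow proper sphere tracing):

```cpp
// Sketch of ray-terrain intersection against the distance field: step along
// the ray and report a hit where the sampled value crosses from outside
// (positive) to inside (negative). SampleField is a stand-in for trilinear
// sampling of the stored field and is assumed to exist elsewhere.
struct Vec3 { float x, y, z; };

float SampleField(const Vec3& position);

bool RaycastTerrain(Vec3 origin, Vec3 direction,   // direction assumed normalized
                    float maxDistance, float stepSize, Vec3& hitPoint)
{
    for (float t = 0.0f; t < maxDistance; t += stepSize)
    {
        Vec3 p{ origin.x + direction.x * t,
                origin.y + direction.y * t,
                origin.z + direction.z * t };
        if (SampleField(p) < 0.0f)   // crossed into the volume
        {
            hitPoint = p;            // good enough for placing a brush
            return true;
        }
    }
    return false;
}
```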



I also added more fitting textures and tweaked the layering to make more interesting terrain:



As a side-track, I had also seen some volumetric shading work and was inspired to implement some crappy volumetric lighting, as I thought it would fit nicely in this kind of environment:



At this point I had grown disillusioned with the current setup using cube marching. While extensions exist to create different levels of detail, it was not optimal: cube marching has problems accurately replicating certain surface features, such as sharp edges, and it often generates unnecessary mesh complexity. After doing another round of research, this time with a lot more insight, I ended up focusing on dual contouring.

Dual contouring works similarly to cube marching in that it samples 8 points and checks whether the surface crosses the edges between them, but instead of generating vertices on the edges, it takes the surface positions and normals at the edge crossings and combines them into a single vertex within the space of the 8 samples, then generates the polygons between these vertices.
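A heavily simplified sketch of the vertex placement: real dual contouring minimizes a quadric error function built from the edge crossings and their normals, while the plain average shown here is closer to surface nets and is only meant to illustrate where the vertex ends up:

```cpp
#include <vector>

// Simplified sketch of the dual vertex placement. Proper dual contouring
// solves a QEF over the crossing positions and normals; averaging the
// crossings is a common simplification shown here for illustration only.
struct Vec3 { float x, y, z; };

Vec3 PlaceDualVertex(const std::vector<Vec3>& edgeCrossings)
{
    // Assumes at least one crossing (cells without crossings are skipped).
    Vec3 v{ 0.0f, 0.0f, 0.0f };
    for (const Vec3& p : edgeCrossings)
    {
        v.x += p.x; v.y += p.y; v.z += p.z;
    }
    float n = static_cast<float>(edgeCrossings.size());
    return Vec3{ v.x / n, v.y / n, v.z / n };  // one vertex per cell, inside the cell
}
```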

While dual contouring can be applied to the same meta data I already had, I realized this was not the most optimal way of storing it. An often-used approach with more advanced surface generation is to store the meta data in an octree, which could potentially reduce its storage size and lead naturally to LOD creation as well as mesh simplification. However, this dramatically increases the algorithmic complexity, and in the end I never got around to finishing an implementation.
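For illustration only, since this was never finished, a node in such an octree could be as simple as the sketch below: leaves hold field samples, interior nodes subdivide only where the surface needs detail, and collapsing children back into their parent gives a natural path to LOD and mesh simplification.

```cpp
#include <array>
#include <memory>

// Sketch of an octree node for the meta data. Leaves store the field values
// at their corners; interior nodes own up to eight children covering their
// octants. Empty children mark leaves or homogeneous regions.
struct OctreeNode
{
    float corners[8];                                    // field values at the node's corners
    std::array<std::unique_ptr<OctreeNode>, 8> children; // empty for leaves

    bool IsLeaf() const { return !children[0]; }
};
```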



Conclusion:

I feel that these kinds of techniques have massive potential. Today, as far as I can tell, they're mostly used for terrain because of their moldable properties rather than their aesthetics, as they're somewhat hard to control, but I believe we're going to see them used more and more in all sorts of projects once they've matured a bit, especially in games with large open environments.

While terrain might be the most obvious use for some of these techniques, another very useful application is the automatic generation of levels of detail for meshes. The simpler and faster techniques might be suitable for generating volumes for volumetric rendering. If used on characters and objects, you could dynamically destroy or transform parts of them.

The possibilities are endless and I am very much looking forward to seeing how this is used in the future.



END