IsoMesh is a group of related Unity tools for converting meshes into signed distance field data, raymarching signed distance fields, and extracting signed distance field data back into meshes via surface nets or dual contouring. All the work is parallelized on the GPU using compute shaders.
My motivation for making this was simple: I want to make a game in which I morph and manipulate meshes, and this seemed like the right technique for the job. Isosurface extraction is most often used for stuff like terrain manipulation. Check out No Man's Sky, for example.
I decided to share my code here because it represents a lot of trial and error and research on my part. And frankly, I just think it's cool.
The project is currently being developed and tested on Unity 2021.2.0f1.
A signed distance field, or 'SDF', is a function which takes a position in space and returns the distance from that point to the surface of an object. The distance is negative if the point is inside the object. These functions can be used to represent all sorts of groovy shapes, and are in some sense 'volumetric', as opposed to the more traditional polygon-based way of representing geometry.
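For example, a minimal SDF for a sphere (the function and parameter names here are just illustrative, not taken from the project):

```hlsl
// Signed distance from point p to a sphere of radius 'radius' centred at the origin.
// Positive outside the surface, zero on it, negative inside.
float sdf_Sphere(float3 p, float radius)
{
    return length(p) - radius;
}

// Translating the sphere is just a matter of shifting the query point first.
float sdf_SphereAt(float3 p, float3 centre, float radius)
{
    return length(p - centre) - radius;
}
```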
If you're unfamiliar with SDFs, I would be remiss if I didn't point you to the great Inigo Quilez. I'm sure his stuff will do a much better job explaining it than I could.
While SDFs are really handy, they're mostly good for representing primitive shapes like spheres, cuboids, cones, etc. You can make some pretty impressive stuff just by combining and applying transformations to those forms, but for this project I wanted to try to combine the mushy goodness of SDFs with the versatility of triangle meshes. I do this by sampling points in a bounding box around a mesh, and then interpolating the in-between bits.
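Conceptually, the baked result is just a 3D grid of distance samples, and querying it means trilinearly interpolating the eight surrounding samples. A rough sketch of that lookup (the buffer layout and names here are illustrative, not the project's actual ones):

```hlsl
// Illustrative only: a baked SDF stored as a flat buffer of distances on a
// _Size x _Size x _Size grid covering the mesh's bounding box.
StructuredBuffer<float> _Samples;
int _Size;

float SampleCell(int3 c)
{
    c = clamp(c, 0, _Size - 1);
    return _Samples[c.x + c.y * _Size + c.z * _Size * _Size];
}

// 'uvw' is the query position remapped into [0, _Size - 1] grid space.
float SampleBakedSDF(float3 uvw)
{
    int3 c = (int3)floor(uvw);
    float3 t = frac(uvw);

    // Trilinear interpolation of the eight surrounding samples.
    float c00 = lerp(SampleCell(c + int3(0, 0, 0)), SampleCell(c + int3(1, 0, 0)), t.x);
    float c10 = lerp(SampleCell(c + int3(0, 1, 0)), SampleCell(c + int3(1, 1, 0)), t.x);
    float c01 = lerp(SampleCell(c + int3(0, 0, 1)), SampleCell(c + int3(1, 0, 1)), t.x);
    float c11 = lerp(SampleCell(c + int3(0, 1, 1)), SampleCell(c + int3(1, 1, 1)), t.x);
    return lerp(lerp(c00, c10, t.y), lerp(c01, c11, t.y), t.z);
}
```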
In order to add a mesh of your own, open Tools > 'Mesh to SDF'. Give it a mesh reference and select a sample size; I suggest 64. Remember this is cubic, so the workload and resulting file size increase very quickly.
There is also the option to tessellate the mesh before creating the SDF. This will take time and increase the GPU workload, but it will not alter the size of the resulting file. The advantage of the tessellation step is that the resulting polygons will have their positions interpolated according to the normals of the source vertices, turning the 'fake' surfaces of normal interpolation into true geometry. This can produce smoother looking results, but it's usually unnecessary.
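The normal-based interpolation is in the spirit of the 'Local interpolation of surfaces using normal vectors' paper linked at the bottom: project the flat-interpolated point onto each vertex's tangent plane and blend the projections with the barycentric weights. A minimal sketch of that idea (not the project's exact code):

```hlsl
// Project point p onto the tangent plane defined by vertex position v and normal n.
float3 ProjectOntoPlane(float3 p, float3 v, float3 n)
{
    return p - dot(p - v, n) * n;
}

// Given a triangle (p0, p1, p2) with vertex normals (n0, n1, n2) and barycentric
// weights w, return a curved surface point instead of the flat one.
float3 NormalInterpolatedPoint(float3 p0, float3 p1, float3 p2,
                               float3 n0, float3 n1, float3 n2, float3 w)
{
    float3 flat = w.x * p0 + w.y * p1 + w.z * p2;
    return w.x * ProjectOntoPlane(flat, p0, n0)
         + w.y * ProjectOntoPlane(flat, p1, n1)
         + w.z * ProjectOntoPlane(flat, p2, n2);
}
```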
If your mesh has UVs you can sample those too. This is currently just experimental: naively sampling UVs has a big assumption built in - namely that your UVs are continuous across the surface of your mesh. As soon as you hit seams you'll see pretty bad artefacts as the UVs rapidly interpolate from one part of your texture to the other.
In this project, you'll find some sample scenes, including one demonstrating mesh generation and one demonstrating raymarching. Both have very similar structures and are just meant to show how to use the tools.
SDF objects are divided into three different component types. These objects can be set to either 'min' or 'subtract': min (minimum) objects will combine with others, while subtract objects will 'cut holes' in all the objects above them in the hierarchy. These objects can be added to the scene by right-clicking in the hierarchy.
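In SDF terms, those two modes boil down to simple min/max operations on distances. A sketch of the idea (illustrative, not the project's exact code):

```hlsl
// 'Min' mode: the union of two shapes is the minimum of their distances.
float CombineMin(float a, float b)
{
    return min(a, b);
}

// 'Subtract' mode: cutting shape b out of shape a is the maximum of a and the negated b.
float CombineSubtract(float a, float b)
{
    return max(a, -b);
}
```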
The SDFPrimitive component is standalone and can currently represent only four objects: a sphere, a (rounded) cuboid, a torus, and a box frame. Each of them has a few unique parameters, and they have nice reliable UVs. For now these are the only SDF primitives and operations I've added, but many more exist.
SDFMeshes provide a reference to an SDFMeshAsset file generated by the Mesh to SDF tool. These objects behave much as you'd expect from a Unity GameObject: you can move them around, rotate them, etc.
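Moving or rotating an SDFMesh doesn't touch the baked data; conceptually, the query point is just transformed into the mesh's local space before the baked grid is sampled. A sketch, continuing the illustrative names from the grid-sampling example above:

```hlsl
float4x4 _WorldToLocal;   // inverse of the SDFMesh's transform
float3 _BoundsMin;        // minimum corner of the baked bounding box, in local space
float3 _BoundsSize;       // size of the baked bounding box, in local space

float SampleMeshSDF(float3 worldPos)
{
    // Move the query point into the mesh's local space...
    float3 localPos = mul(_WorldToLocal, float4(worldPos, 1.0)).xyz;

    // ...then remap it into the baked grid's coordinates and sample.
    // (Non-uniform scale would also distort the returned distance; that detail
    // is ignored in this sketch.)
    float3 uvw = (localPos - _BoundsMin) / _BoundsSize * (_Size - 1);
    return SampleBakedSDF(uvw);
}
```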
Currently there is only one SDFOperation supported - elongation. Operations work a little differently to primitives and meshes: an operation deforms the space of everything below it in the hierarchy. The elongation operation allows you to stretch space, which also works on the UVs!
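Elongation is a standard SDF trick (Inigo Quilez documents it): clamp the query point into a box before handing it to whatever is below, which stretches the result along each axis. A sketch of the idea:

```hlsl
// Stretch space by 'h' along each axis: every point inside the +/-h box maps to
// the origin, so the shapes below get 'pulled apart' by that amount.
float3 Elongate(float3 p, float3 h)
{
    return p - clamp(p, -h, h);
}

// Example: an elongated sphere, i.e. a capsule-like shape.
float sdf_ElongatedSphere(float3 p, float3 h, float radius)
{
    return length(Elongate(p, h)) - radius;
}
```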
You'll always have an 'SDFGroup' as the parent of everything within your system of SDFMeshes and SDFPrimitives. Objects within this system are expected to interact with one another and share the same visual properties such as colour and texture.
The final essential element is an 'SDFGroupRaymarcher' or 'SDFGroupMeshGenerator'. You can have as many of these as you want under one group. SDFGroupMeshGenerators can even represent chunks - though they need to overlap by the size of one cell on all sides, and they should all share the same settings.
Given a scalar field function, which returns a single value for any point, an isosurface is the set of points where the function returns the same fixed value. For SDFs, the isosurface of interest is simply the set of points where the distance to the surface is zero.
Isosurface extraction here refers to converting an isosurface back into a triangle mesh. There are many different algorithms for isosurface extraction, perhaps the most well known being the 'marching cubes' algorithm. In this project, I implemented two (very similar) isosurface extraction algorithms: surface nets and dual contouring. I won't go into any more detail on these algorithms here, as others have already explained them very well.
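Very roughly, both algorithms walk a grid of SDF samples: each cell the surface passes through gets one vertex, and vertices of neighbouring cells are stitched into quads. Surface nets place the vertex at the average of the edge crossings; dual contouring additionally uses the surface normals at those crossings to minimize a quadratic error function and recover sharp features. A sketch of the surface nets vertex placement (illustrative, not the project's actual kernel):

```hlsl
// The 12 edges of a cell, as pairs of corner indices. Corner i sits at offsets
// ((i >> 0) & 1, (i >> 1) & 1, (i >> 2) & 1) within the cell.
static const int2 edges[12] =
{
    int2(0, 1), int2(1, 3), int2(3, 2), int2(2, 0),
    int2(4, 5), int2(5, 7), int2(7, 6), int2(6, 4),
    int2(0, 4), int2(1, 5), int2(3, 7), int2(2, 6)
};

// Place one vertex in a cell whose 8 corners have SDF values 'corner[0..7]'
// at positions 'cornerPos[0..7]'. Only called for cells the surface crosses.
float3 SurfaceNetsVertex(float corner[8], float3 cornerPos[8])
{
    float3 sum = 0;
    int crossings = 0;

    for (int i = 0; i < 12; i++)
    {
        float a = corner[edges[i].x];
        float b = corner[edges[i].y];

        // The surface crosses this edge if the SDF changes sign along it.
        if ((a < 0) != (b < 0))
        {
            float t = a / (a - b); // linear estimate of the zero crossing
            sum += lerp(cornerPos[edges[i].x], cornerPos[edges[i].y], t);
            crossings++;
        }
    }

    // Average of the crossing points. Dual contouring would instead solve a QEF
    // built from these crossings and their normals.
    return crossings > 0 ? sum / crossings : cornerPos[0];
}
```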
As I say above, in order to use the isosurface extraction just add an SDFGroupMeshGenerator under an SDFGroup. The number of options on this component is almost excessive, but don't let that get you down: they all have tooltips which do some explaining, and if you've done your homework they should feel fairly familiar:
Normal settings are handy for controlling the appearance of the mesh surface. 'Max angle tolerance' will generate new mesh vertices where a vertex normal differs from its triangle's face normal by more than the given angle. I like to keep this value around 40 degrees, as it retains sharp edges while keeping smooth curves. 'Visual smoothing' changes the distance between samples when generating mesh normals via central differences.
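Central differences just means sampling the field slightly to either side of the point along each axis and normalizing the result. A minimal sketch, with a stand-in SampleSDF for whatever evaluates the whole group:

```hlsl
// Stand-in for evaluating the group's combined SDF at a point (here just a unit sphere).
float SampleSDF(float3 p)
{
    return length(p) - 1.0;
}

// Surface normal via central differences. 'eps' is the sampling offset to either
// side of the point - the kind of distance the 'visual smoothing' setting controls.
// Larger values average the field over a wider region and give softer-looking normals.
float3 SDFNormal(float3 p, float eps)
{
    float3 dx = float3(eps, 0, 0);
    float3 dy = float3(0, eps, 0);
    float3 dz = float3(0, 0, eps);

    return normalize(float3(
        SampleSDF(p + dx) - SampleSDF(p - dx),
        SampleSDF(p + dy) - SampleSDF(p - dy),
        SampleSDF(p + dz) - SampleSDF(p - dz)));
}
```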
I provide two techniques for finding the exact surface intersection points between SDF samples: interpolation is fast but gives kinda poor results at corners, while binary search provides much more exact results but is iterative and therefore more expensive.
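Given two samples that straddle the surface (one positive, one negative), linear interpolation estimates the crossing in one step, while binary search repeatedly halves the interval and re-samples. A sketch of both, reusing the SampleSDF stand-in from above:

```hlsl
// One-step estimate: assumes the field is linear between the two samples,
// which is exactly where corners get rounded off.
float3 FindCrossingLerp(float3 a, float3 b, float da, float db)
{
    float t = da / (da - db);
    return lerp(a, b, t);
}

// Iterative refinement: keep whichever sub-interval still straddles the surface.
// More samples, but much tighter results near sharp features.
float3 FindCrossingBinarySearch(float3 a, float3 b, float da, int iterations)
{
    for (int i = 0; i < iterations; i++)
    {
        float3 mid = (a + b) * 0.5;
        float dm = SampleSDF(mid);

        if ((dm < 0) == (da < 0)) { a = mid; da = dm; }
        else                      { b = mid; }
    }
    return (a + b) * 0.5;
}
```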
Gradient descent is another iterative improvement which simply moves the vertices back onto the isosurface. Honestly, I see no reason not to always have this on.
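Since an SDF tells you both how far a point is from the surface and (via its gradient) which direction the surface is in, this amounts to stepping each vertex along the normal by the signed distance a few times. A sketch, again using the SampleSDF and SDFNormal stand-ins from above:

```hlsl
// Pull a vertex back onto the zero isosurface by walking along the field's gradient.
// A handful of iterations is usually plenty; 0.01 is just a small offset for the
// gradient estimate.
float3 ProjectOntoSurface(float3 p, int iterations)
{
    for (int i = 0; i < iterations; i++)
    {
        float d = SampleSDF(p);
        p -= SDFNormal(p, 0.01) * d;
    }
    return p;
}
```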
If you're familiar with SDFs, you're familiar with raymarching. They very often go hand-in-hand. Raymarching will also be very familiar to you if you ever go on ShaderToy. Again I recommend checking out Inigo Quilez for an in-depth explanation, but raymarching is basically an iterative sort of 'pseudo-raytracing' algorithm for rendering complex surfaces like SDFs.
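The core idea: at each step, sample the SDF at the current point; that distance is guaranteed to contain no surface, so the ray can safely advance by that amount. Stop when the distance gets tiny (a hit) or you run out of steps or range (a miss). A minimal sphere-tracing loop, using the SampleSDF stand-in from above:

```hlsl
// Minimal sphere-tracing loop. Returns true and writes the hit point if the ray
// from 'origin' along the normalized direction 'dir' hits the surface within maxDist.
bool Raymarch(float3 origin, float3 dir, float maxDist, out float3 hitPoint)
{
    const int MAX_STEPS = 128;
    const float SURFACE_EPSILON = 0.001;

    float travelled = 0.0;

    for (int i = 0; i < MAX_STEPS; i++)
    {
        float3 p = origin + dir * travelled;
        float d = SampleSDF(p);

        if (d < SURFACE_EPSILON)
        {
            hitPoint = p;
            return true;
        }

        travelled += d;

        if (travelled > maxDist)
            break;
    }

    hitPoint = origin + dir * travelled;
    return false;
}
```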
In this project you can use an SDFGroupRaymarcher to visualize your SDFGroup. This component basically just creates a cube mesh and assigns it a special raymarching material. The resulting visual is much more accurate than the isosurface extraction, but it's expensive just to look at: unlike isosurface extraction, which does nothing while you're not manipulating the SDFs, raymarching is an active, per-frame cost on your GPU.
The raymarching material is set up to be easy to modify - it's built around this subgraph:
'Hit' is simply a 0/1 int which tells you whether a surface was hit, 'Normal' is that point's surface normal, and that should be enough to set up your shader. I also provide a 'thickness' value to start you on your way to subsurface scattering. Neat!
It also outputs a UV. UVs are generated procedurally from primitives and for meshes they're sampled. The final UV of each point is a weighted average of the UV of each SDF in the group. Texturing combined shapes can look really funky:
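As a rough sketch of what a weighted UV blend can look like - the inverse-distance weighting shown here is illustrative, not necessarily the exact scheme used in the shader:

```hlsl
// Illustrative blend of per-object UVs: each object's weight falls off with its
// distance to the query point, so the nearest surface dominates the final UV.
float2 BlendUVs(float2 uvs[4], float distances[4], int count)
{
    float2 weightedSum = 0;
    float totalWeight = 0;

    for (int i = 0; i < count; i++)
    {
        float w = 1.0 / max(abs(distances[i]), 0.0001);
        weightedSum += uvs[i] * w;
        totalWeight += w;
    }

    return weightedSum / totalWeight;
}
```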
You can also directly visualize the UVs and iteration count.
I also include a very fun sample scene showing how you might add physical interaction. Unfortunately, Unity doesn't allow for custom colliders at this time, nor does it allow for non-static concave mesh colliders, which leaves me pretty limited. However, Unity does allow for convex mesh colliders and even static concave mesh colliders. Creating mesh colliders is very expensive for large meshes though. This led me to experiment with generating very small colliders only around Rigidbodies, at fixed distance intervals.
It works surprisingly well, even when moving the SDFs around!
- I want to be able to add physics to the generated meshes. In theory this should be as simple as adding a MeshCollider and Rigidbody to them, but Unity probably won't play well with these high-poly non-convex meshes, so I may need to split them into many convex meshes.
- I intend to add more SDF operations which aren't tied to specific SDF objects, so I can stretch or bend the entire space.
- I'd like to figure out how to get the generated 'UV field' to play nicely with seams on SDFMeshes. Currently I just clamp the interpolated UVs if I detect too big a jump between two neighbouring UV samples.
- None of this stuff is particularly cheap on the GPU. I made no special effort to avoid branching and I could probably use fewer kernels in the mesh generation process.
- Undo is not fully supported in custom editors yet.
- Some items, especially SDF meshes, don't always cope nicely with all the different transitions Unity goes through, like entering play mode or recompiling. I've spent a lot of time improving stability in this regard but it's not yet 100%.
- I don't currently use any sort of adaptive octree approach. I consider this a "nice to have."
- I might make a component to automate the "chunking" process: positioning the distinct SDFGroupMeshGenerator components, disabling occluded ones, spawning new ones, etc.
Note: I left this here for posterity, but Unity has now officially released 2021.2 on the tech stream, so it's no longer in alpha!
You may notice there is an option to switch between 'Procedural' and 'Mesh Filter' output modes. This changes how the mesh data is handed over to Unity for rendering. The 'Mesh Filter' mode simply drags the mesh data back onto the CPU and passes it in to a Mesh Filter component. Procedural mode is waaaay faster - using Unity's DrawProceduralIndirect to keep the data GPU-side. However, you will need a material which is capable of rendering geometry passed in via ComputeBuffers. This project is in URP, which makes it a bit of a pain to hand-write shaders, and Unity didn't add a VertexID node until ShaderGraph 12, which is only supported by Unity 2021.2.
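On the shader side, the gist is that instead of Unity feeding you a vertex stream, you index into the ComputeBuffers yourself using the vertex ID (which is what ShaderGraph 12's VertexID node exposes). In raw HLSL the idea looks roughly like this - the buffer and matrix names are illustrative, not the project's actual ones:

```hlsl
// Illustrative vertex fetch for DrawProceduralIndirect-style rendering: the mesh
// generator leaves its results in ComputeBuffers and the vertex shader pulls from
// them by index instead of reading a normal vertex stream.
StructuredBuffer<float3> _MeshVertices;
StructuredBuffer<float3> _MeshNormals;
StructuredBuffer<int> _MeshTriangles;

float4x4 _ObjectToClip;   // stand-in for the usual object-to-clip-space matrix

struct Varyings
{
    float4 positionCS : SV_POSITION;
    float3 normal     : TEXCOORD0;
};

Varyings ProceduralVert(uint vertexID : SV_VertexID)
{
    int index = _MeshTriangles[vertexID];

    Varyings o;
    o.positionCS = mul(_ObjectToClip, float4(_MeshVertices[index], 1.0));
    o.normal = _MeshNormals[index];
    return o;
}
```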
If you want to try this out but don't want to use an alpha version of Unity, this is the only difference - you can import this into Unity 2020.3, which I was previously working in, and it should be fine except for the procedural output mode and the corresponding shader graph.
If you have the Amplify Shader Editor, I've included an Amplify custom node class which should let you do the same thing!
- Inigo Quilez
- Dual Contouring Tutorial
- Analysis and Acceleration of High Quality Isosurface Contouring
- Kosmonaut's Signed Distance Field Journey - a fellow SDF mesh creator
- DreamCat Games' tutorial on Surface Nets
- Local interpolation of surfaces using normal vectors - I use this during the tessellation process to produce smoother geometry
- Nick's Voxel Blog - a good source for learning about implementing the QEF minimizer (and their GitHub repo)
- MudBun - I came across this tool while already deep in development of this, but it looks awesome and way more clean and professional than this learning exercise.