So I've been tinkering away at this project for some months now. I came across some images of the neural structure of a cochlea, and thought it would be a good opportunity to delve into UE4's [still experimental] particle system, Niagara. It has since grown into something where I can explore various methods for constructing the cochlea in its entirety, with emulating confocal microscopy being one of the ideas. Another is merging it with a system that logically transforms soundwaves into the visual output, while keeping everything procedural enough that a user can interface with it.
To start, the primary simulation type is CPU-based, i.e. the mesh renderer and ribbon renderer; mesh particles aren't supported with GPU sim. So there's a lot of logic going on to generate the transforms for the neurons: primarily constructing a control spline via functions I set up in Blueprint, then accessing that in Niagara to work with the pertinent vectors.
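As a rough illustration of what those spline functions boil down to, here's a hypothetical C++ sketch (the real logic is Blueprint feeding Niagara parameters; the names here are mine):

```cpp
#include "Components/SplineComponent.h"

// Hypothetical sketch: sample the control spline at a normalized position
// and build a transform a mesh particle could use.
FTransform SampleControlSpline(const USplineComponent* Spline, float Alpha)
{
    const float Distance = Alpha * Spline->GetSplineLength();
    const FVector Location = Spline->GetLocationAtDistanceAlongSpline(
        Distance, ESplineCoordinateSpace::World);
    const FVector Tangent = Spline->GetTangentAtDistanceAlongSpline(
        Distance, ESplineCoordinateSpace::World).GetSafeNormal();
    // Orient along the spline tangent; scale stays at identity.
    return FTransform(Tangent.Rotation(), Location);
}
```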
On Somata & Optimization
One of the more immediate problems to solve is finding the most effective method for representing the shapes without exceeding the rendering budget. In the case of the neuron cell body I started off with a pre-tessellated static mesh that roughly fits the limit, but it was at least 100 triangles for a decent amount of curvature. Multiply that by the number of particles/instances and it adds up. Why not mesh LODs? You'll immediately find that the most common method of LODing (level of detail) -- reduced-polycount versions of the original -- doesn't actually function in mesh particle data/space. In the current implementation LOD 0 is the only thing that gets rendered, which makes that a no-go for managing triangles. So there are a number of other options.
I thought hey, why not just tessellate? The first step is to have starting geometry with a low triangle count that I can also morph into a sphere. I opted for an octahedron (a tetrahedron should also work, but the base geometry would have a higher profile disparity at the lowest tessellation). 8 triangles, not bad. I enabled tessellation, piped a scalar in & checked wireframe. Funnily enough, wireframe mode doesn't render tessellated triangles when the static mesh is in Niagara particle space; Cascade mesh particles are fine. We get to use either flat tessellation or PN triangles (a spline-based method that smooths). PN triangles don't seem to work with mesh particles, & even if they did, they wouldn't give me the displacement controls I wanted.
So flat tessellation, then some shader math: multiply the vertex normals by 0.5, retrieve the vector length, take the cosine of that, add n * x, and multiply the result by the vertex normals. Then multiply by the local scale. In the case of instances you can normally retrieve that scale by appending the lengths of the x/y/z basis vectors, which are transformed from local space into world space. However, the local-to-world transform function is only supported in the vertex, compute, or pixel shader. Since we're doing things in the domain/hull shader, my workaround is to pass the transformed vector through the vertex shader (via CustomizedUVs). Simple & effective in a variety of cases. That said, if you're dealing with a vec/float3 you'll need to use two CustomizedUV channels, since you unpack via a texcoord (which is a vec2).
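For readability, here's that math restated in UE4 C++ terms; it's my reading of the steps, not a literal transcription (in the material these are plain nodes on the displacement path, and N/X are just scalar parameters):

```cpp
// Restatement of the displacement math; LocalScale is the per-instance
// scale passed down through CustomizedUVs as described above.
FVector SphereizeDisplacement(const FVector& VertexNormal, float N, float X,
                              const FVector& LocalScale)
{
    const float Len = (VertexNormal * 0.5f).Size();  // vertex normals * .5, then length
    const float Amount = FMath::Cos(Len) + N * X;    // cosine of that, + n * x
    return (Amount * VertexNormal) * LocalScale;     // * vertex normals, * local scale
}
```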
Finally, to 'LODify' we simply linearly interpolate between two scalars (e.g. 0 and the highest tessellation multiplier), with the alpha input being the distance from the camera position to the absolute world position: vector length of (camera position - absolute world position), minus x, divided by y, saturate. The displacement function obviously goes into the World Displacement pin.
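In code form the factor looks something like this (again a hedged sketch; which end of the lerp counts as 'near' is a choice, and here near means maximum tessellation):

```cpp
// Distance-based tessellation multiplier: full detail up close, none far away.
// X is the fade start distance, Y the fade range, as in the text above.
float TessellationMultiplier(const FVector& CameraPos, const FVector& AbsWorldPos,
                             float X, float Y, float MaxTess)
{
    const float Alpha = FMath::Clamp(((CameraPos - AbsWorldPos).Size() - X) / Y, 0.f, 1.f);
    return FMath::Lerp(MaxTess, 0.f, Alpha);
}
```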
I would like to refine the function, but this works as a base to apply additional displacements to. Another option is to use the tetrahedron as a surface-domain bounding volume for rendering the cell body primarily via the pixel shader, e.g. via some version of SphereGradient-3D. We could then cut the base tri-count in half & choose whether or not to include tessellation. There are some complexities to deal with there. Either way it's an option, & I'm actually already using some of that for the 'nucleus' portion.
Procedural Mesh Creation
One of the enjoyable things to play with has been UE4's procedural mesh generation functionality. Typically you would just create the base models/geometry in your DCC (digital content creation software) of choice, i.e. model with that toolset & export to a file for import into the engine. In this case I wanted to play around with generating topology programmatically. The basic method is a construction script with logic to assemble the data necessary to create 'mesh sections': vertices, triangles, normals, tangents, UVs. Some things in the works are the octahedron for the cell body, synaptic planes, and a path-deforming shape profile (e.g. cochlear ducts/body).
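In C++ terms a mesh section amounts to something like this (a minimal sketch; the project assembles the same arrays with Blueprint nodes in the construction script):

```cpp
#include "ProceduralMeshComponent.h"

// Minimal sketch of building one mesh section from the raw arrays.
void BuildSection(UProceduralMeshComponent* Mesh)
{
    TArray<FVector> Vertices;
    TArray<int32> Triangles;     // 3 indices per face
    TArray<FVector> Normals;
    TArray<FVector2D> UVs;
    TArray<FProcMeshTangent> Tangents;
    TArray<FLinearColor> Colors;

    // ...fill the arrays with the shape's topology...

    Mesh->CreateMeshSection_LinearColor(0, Vertices, Triangles, Normals, UVs,
                                        Colors, Tangents, /*bCreateCollision=*/false);
}
```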
Below are a couple example snippets of vertex/triangle arrays. The algorithms are a bit dirty/not refined for a variable geodesic yet, but I just needed the 8-tri octa in this case. =] There are a variety of ways to solve for a particular topology. For example, the simplest vertex function for a straight-up octahedron could just be a switch-on-int statement with the 6 respective direction vectors. The harder part seems to be setting up a good algorithm for the triangle array.
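For the 8-tri octa those arrays boil down to something like the following (a sketch in C++ terms rather than BP nodes; the winding convention is my assumption, so flip the index order if the faces come out inverted):

```cpp
// The 6 vertices are just the axis direction vectors -- the switch-on-int
// mentioned above -- and the 8 faces index into them.
TArray<FVector> OctaVertices = {
    { 1, 0, 0}, {-1, 0, 0},   // +X, -X
    { 0, 1, 0}, { 0,-1, 0},   // +Y, -Y
    { 0, 0, 1}, { 0, 0,-1}    // +Z, -Z
};
TArray<int32> OctaTriangles = {
    4,0,2,  4,2,1,  4,1,3,  4,3,0,   // four faces around the top vertex (+Z)
    5,2,0,  5,1,2,  5,3,1,  5,0,3    // four faces around the bottom vertex (-Z)
};
```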
Also a note on grids/planes. I set up an algorithm for a variable-tessellation plane to work with, and it was still comparatively slower than a built-in function called Create Grid Mesh (an example of having code that's already cooked/done in C++ vs entirely BP nodes). A way to do certain shapes like a tube is to create a grid mesh section, then operate on those vertices before remaking the arrays & calling Create Procedural Mesh Section. Calculate Mesh Tangents is another nice built-in function for quickly getting the smoothed normals + tangents for the section; both show up in the sketch below.
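A hedged sketch of that grid-to-tube idea, using the KismetProceduralMeshLibrary helpers (I'm assuming CreateGridMeshWelded's default layout and spacing here, which may vary by engine version):

```cpp
#include "ProceduralMeshComponent.h"
#include "KismetProceduralMeshLibrary.h"

// Generate a welded grid, wrap one axis around a circle, recompute
// smoothed normals/tangents, then remake the mesh section.
void MakeTubeSection(UProceduralMeshComponent* Mesh, int32 NumX, int32 NumY, float Radius)
{
    TArray<int32> Triangles;
    TArray<FVector> Vertices;
    TArray<FVector2D> UVs;
    UKismetProceduralMeshLibrary::CreateGridMeshWelded(NumX, NumY, Triangles, Vertices, UVs);

    // Operate on the grid vertices: map the Y extent to a full revolution.
    const float GridSpacing = 16.f; // the helper's default spacing (assumption)
    for (FVector& V : Vertices)
    {
        const float Angle = (V.Y / ((NumY - 1) * GridSpacing)) * 2.f * PI;
        V = FVector(V.X, FMath::Cos(Angle) * Radius, FMath::Sin(Angle) * Radius);
    }

    TArray<FVector> Normals;
    TArray<FProcMeshTangent> Tangents;
    UKismetProceduralMeshLibrary::CalculateTangentsForMesh(Vertices, Triangles, UVs,
                                                           Normals, Tangents);
    const TArray<FLinearColor> NoColors;
    Mesh->CreateMeshSection_LinearColor(0, Vertices, Triangles, Normals, UVs,
                                        NoColors, Tangents, false);
}
```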
Still looking into some techniques for the cell packing, perhaps something with distance fields. At the moment it's based on curve-derived transforms in a [3D] array with offsets. I'll likely go over this in more depth in the next blog post.