DRAFT: Restructured Splat setup aimed at singular Splat object #196
Conversation
Beautiful work!

In the near future, I would also love to see an experimental …

If you want to dynamically alter the splat properties you can use shader hooks. But if you alter the positions of the splats, this can impact the required sorting. For a single … this means you'll need global sorting and to apply dynamic behaviour to a subset of the splats. In that case, being able to run a shader to update each splat entirely on the GPU would be a solution (as is currently possible on a …).
Can we remove …?

Looks like the …

Should be resolved now; rebased against the wrong commit the first time around.
Added a CPU-based sorting approach and a basic … For the CPU sorting, it can be improved by …
The interface is currently based on:

```js
const splat1 = await loader.loadAsync(splatURL);
scene.add(splat1);
const splat2 = await loader.loadAsync(splatURL);

const batchedSplat = new BatchedSplat(2);
scene.add(batchedSplat);

// Adds splat1 to the batch, sets splat1.visible = false and 'tracks' its transform
batchedSplat.addSplat(splat1);

// Adds (raw) splat data to the batch, returns a proxy entity that can be placed in the scenegraph
const splatProxy = batchedSplat.addSplatData(splat2.splatData);
// Changes to the splatProxy take effect in the batchedSplat
scene.add(splatProxy);
```
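As a rough illustration of the CPU sorting step, a minimal self-contained sketch (the function name and the flat position layout are assumptions, not this branch's code): splats must be ordered back-to-front in view space before alpha blending.

```javascript
// Hypothetical CPU depth sort: compute each splat's view-space depth and
// return a back-to-front index order. `positions` is a flat Float32Array
// [x0, y0, z0, x1, ...]; `viewMatrix` is a column-major 4x4 array
// (the three.js Matrix4.elements layout).
function sortSplatsByDepth(positions, viewMatrix) {
  const count = positions.length / 3;
  const depths = new Float32Array(count);
  for (let i = 0; i < count; i++) {
    const x = positions[3 * i];
    const y = positions[3 * i + 1];
    const z = positions[3 * i + 2];
    // View-space z is the third row of the view matrix: column-major
    // indices 2, 6, 10, 14.
    depths[i] = viewMatrix[2] * x + viewMatrix[6] * y + viewMatrix[10] * z + viewMatrix[14];
  }
  const order = Uint32Array.from({ length: count }, (_, i) => i);
  // The camera looks down -z in view space, so the most negative depth is
  // farthest away and should be drawn first.
  order.sort((a, b) => depths[a] - depths[b]);
  return order;
}
```

A radix sort over quantized depths (as splat renderers commonly use) would avoid the comparator sort's overhead for large splat counts.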
This is a PR exploring the benefits of refactoring the project to have a simplified/straightforward `Splat` object at the core. Advanced features can then be layered on top. This reduces coupling between the various features, avoids a combinatorial explosion, and allows for dedicated purpose-built implementations that users can opt into. The base case of loading and rendering a singular splat is the main focus.

Note
Despite being a Pull Request, the changes are so extensive that going through the diffs is impractical. The following text aims to provide the global overview of the changes, but checking out the branch directly is encouraged.
Global overview
The main class is `Splat`, which extends `THREE.Mesh` and is intended to be used in a similar fashion. The individual splat properties are provided by any instance adhering to the `SplatData` interface. This aims to abstract the CPU and GPU representation of the splat properties. Loading splats can be done using the `SplatLoader`, which returns `Splat` instances.

The following parts have been removed or replaced:
- `SparkRenderer`/`SparkViewpoint`/`SparkAccumulator`: the `Splat` class handles its own sorting and rendering
- `SparkControls`: no longer part of the library, though still used in the examples
- `SplatEdit`
- `SplatSkinning`

The following parts have been retained:
- `SplatWorker`
- `SplatEncoder` interface, improving load times by reducing indirections and abstracting the CPU representation of the splat properties from the loaders
- `splatVertex.glsl`/`splatFragment.glsl` shaders
- `PackedSplats`: reduced to a read-only and only "one of" the possible `SplatData` implementations
- `Splat` class: `Splat.fromImage`/`fromText`/...

Three.js
Spark integrates into the Three.js rendering pipeline. However, `SplatMesh` does not behave in ways one would expect from Three.js objects, in part due to the (implicit) `SparkRenderer`. With these indirections out of the way it becomes easier to align with idiomatic Three.js code. This should help people familiar with Three.js use Spark, as well as improve interoperability with other Three.js libraries or frameworks.

- `.clone()` works, instead of having to do `new SplatMesh({ packedSplats: source.packedSplats });`
- `visible`/`layers`/`renderOrder`/`frustumCulled` are supported, whereas `SplatMesh` only supported `visible` and the others needed to be set on the `SparkRenderer`, affecting all splat meshes
- `splat.geometry` now has a `boundingSphere` and (approximate) `boundingBox`
- No `.initialized` flag on `Splat`: a `Splat` instance is the splat object in question. The `SplatLoader` returns a promise that can be awaited, giving a ready-to-use `Splat`.

In general, the goal is to avoid hacks (like the implicit `SparkRenderer` insertion) and workarounds (like `SparkRenderer.renderEnvMap`), instead making sure standard Three.js approaches just work or require minimal logic in user code.

Global Sorting
The biggest feature regression in this branch is the lack of global sorting. This means that combining multiple (overlapping) splats won't always render correctly. As mentioned before, the idea is to layer more complex features on top of the base implementation.
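In spirit, a global sort over multiple static splats reduces to merging their data and sorting once. A simplified sketch (flat position arrays and the function name are assumptions; real data would also carry colors, scales, and rotations, and each set's world transform would be baked in first):

```javascript
// Combine two splat sets' positions into one array so a single
// back-to-front sort can cover both. The returned offset lets later code
// map a merged splat index back to its source set.
function mergeSplatPositions(setA, setB) {
  const merged = new Float32Array(setA.length + setB.length);
  merged.set(setA, 0);
  merged.set(setB, setA.length);
  return { merged, offsetB: setA.length / 3 };
}
```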
Options explored include:

- `BufferGeometryUtils.mergeGeometries`-style merging
- a `BatchedSplat`, analogous to `BatchedMesh`, that allows splats to be combined into a single draw-call yet retain their own transform
- … `PackedSplats` and combining them in a dedicated `DynamicSplat` class

API
The API surface is kept as minimal as possible. The main classes exposed to consuming projects are `Splat` and `SplatLoader`. This part isn't particularly fleshed out and can still easily change.

The common use-case of loading a splat is as follows:
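A minimal sketch of that flow, assuming the `Splat`/`SplatLoader` API described above (the classes here are simplified stand-ins so the snippet is self-contained, not Spark's real implementation):

```javascript
// Stand-in stubs mirroring the assumed API shape; in a real project these
// come from Spark, and the loader would fetch and parse a splat file.
class Splat {
  constructor(url) {
    this.url = url;
    this.visible = true; // behaves like a regular THREE.Mesh property
  }
}
class SplatLoader {
  async loadAsync(url) {
    return new Splat(url); // resolves to a ready-to-use Splat
  }
}

// Common use-case: await the loader and add the result to the scene
// like any other Mesh.
const loader = new SplatLoader();
loader.loadAsync("scene.spz").then((splat) => {
  // scene.add(splat); // with a real THREE.Scene
  console.log(splat.url, splat.visible);
});
```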
For procedural splats there are `Splat.construct`/`.fromImageUrl`/`.fromText`.

New possibilities
Some things became very hard to implement or even try out in the old setup. Changing the way the splat properties were encoded/packed on the CPU/GPU impacted all other parts. This is now mostly abstracted away and allows alternative representations to be implemented geared towards different use-cases.
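For illustration, one such representation could expose the per-splat properties as plain typed arrays, roughly along these lines (field names and layouts are assumptions, not the `SplatData` interface from this branch):

```javascript
// Hypothetical SplatData-like object: CPU-side typed arrays. An alternative
// implementation could back the same properties with packed GPU textures
// instead, without the rest of the code caring.
const splatData = {
  count: 2,
  // xyz center per splat
  centers: new Float32Array([0, 0, 0, 1, 0, 0]),
  // xyz scale per splat
  scales: new Float32Array([1, 1, 1, 0.5, 0.5, 0.5]),
  // quaternion (x, y, z, w) per splat
  rotations: new Float32Array([0, 0, 0, 1, 0, 0, 0, 1]),
  // RGBA per splat
  colors: new Uint8Array([255, 0, 0, 255, 0, 255, 0, 255]),
};
```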
Likewise, the `SplatData` is in principle assumed to be static. For sorting this means that a readback step is not strictly necessary, and alternative sorting methods can be implemented and tested.

Another avenue that could improve the out-of-the-box DX is that `SplatLoader` returns a `Splat` instance instead of `SplatData`. While not utilized yet in this branch, the idea is that the loaders could provide additional metadata (e.g. trained with AA or not, bounding boxes). Currently, when file formats expose such information it is simply lost, whereas it could be used to set the right properties automatically.

Examples
Several of the examples have been updated/ported. Below is an overview of which examples have been updated, which haven't, and what would be needed:
- … `BatchedSplat` implementation
- … `Splat` handles different cameras/viewpoints
- … `BatchedSplat` implementation to handle sorting among the different splats
- … `PackedSplats` representation. Instead of implementing raycasting, GPU picking might be a better option (can also take shader hooks into account)
- … `SplatEdit`-based original … `SplatData` source, though this example would require both dynamic splat properties and global sorting
- … `DynamicSplat`
- … `Splat` as it suppresses sorting
- … `Splat` any more

Closing remarks
Obviously, moving the project in this direction would be a big change. I do think there's merit in a simple foundation that aligns with the most common use-case. Currently users run into trade-offs and limitations stemming from the `SparkRenderer`/`SplatAccumulator` setup or the `PackedSplats` representation. While many of these can be worked around, the initial impression left behind isn't ideal. Flipping this around, having the base case work out-of-the-box as well as possible and instead requiring the user to opt in to things like global sorting, will work better IMHO. I'd expect people to react more positively to diving into the docs to see how to combine/sort multiple splats, compared to wondering why their splat doesn't quite render correctly.