Improve memory efficiency and speed of gltf importer #2553
yaRnMcDonuts merged 6 commits into jMonkeyEngine:master from
Conversation
-            stream.read(bin);
-            data.add(bin);
+            ByteBuffer buff = BufferUtils.createByteBuffer(chunkLength);
+            GltfUtils.readToByteBuffer(stream, buff, chunkLength, -1);
This fixes how the stream is read: stream.read, as it was used before, can return before the requested bytes have all been read if the stream stalls (e.g. when loading from a slow disk or over the network), so the data has to be read in a loop until the full chunk has arrived.
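For context, a minimal sketch of the read-until-full pattern (the actual helper in this PR is GltfUtils.readToByteBuffer, which fills a direct ByteBuffer; the name and signature below are illustrative only):

```java
import java.io.IOException;
import java.io.InputStream;

public class ReadFullySketch {
    // Loop until 'length' bytes have been consumed; a single read() call may
    // legally return fewer bytes than requested.
    static byte[] readFully(InputStream in, int length) throws IOException {
        byte[] data = new byte[length];
        int off = 0;
        while (off < length) {
            int n = in.read(data, off, length - off);
            if (n < 0) {
                throw new IOException("Unexpected end of stream after " + off + " bytes");
            }
            off += n;
        }
        return data;
    }
}
```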
            animations = null;
            skins = null;
            cameras = null;
            useNormalsFlag = false;
These collections can be quite large for big scenes, so keeping references to them after the import has finished is bad for devices with limited memory.
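To illustrate why the fields are cleared, here is a hypothetical, simplified loader (the field names mirror the ones above, but the class and methods are made up): per-load state lives in fields while parsing and is nulled out once the result has been built, so the intermediate data does not stay reachable for the lifetime of the loader instance.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

class SketchLoader {
    private List<Object> animations;
    private List<Object> skins;
    private List<Object> cameras;
    private boolean useNormalsFlag;

    Object load() throws IOException {
        try {
            animations = new ArrayList<>();
            skins = new ArrayList<>();
            cameras = new ArrayList<>();
            // ... parse the file and fill the lists ...
            return buildResult();
        } finally {
            // Drop references so the GC can reclaim the intermediate data.
            animations = null;
            skins = null;
            cameras = null;
            useNormalsFlag = false;
        }
    }

    private Object buildResult() {
        return new Object(); // placeholder for the assembled scene graph
    }
}
```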
            throw new IOException("Destination ByteBuffer too small: remaining=" + remaining + " < bytesToRead=" + bytesToRead);
        }

        ReadableByteChannel ch = Channels.newChannel(input);
In OpenJDK 21, this transfers directly into the ByteBuffer when the input is a FileInputStream; for generic streams it falls back to an internal heap-allocated byte array of at most 8192 bytes.
Overall, a larger chunk array could be beneficial, but I decided this micro-optimization isn't worth interfering with potential internal JVM optimizations.
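A rough sketch of the channel-based approach discussed here (the method name and signature are illustrative; the PR's actual helper is GltfUtils.readToByteBuffer):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class ChannelReadSketch {
    // Read exactly 'bytesToRead' bytes from the stream into the destination
    // buffer, which may be direct. The channel is not closed here because
    // that would also close the underlying stream.
    static void readToBuffer(InputStream input, ByteBuffer dest, int bytesToRead) throws IOException {
        if (dest.remaining() < bytesToRead) {
            throw new IOException("Destination ByteBuffer too small: remaining="
                    + dest.remaining() + " < bytesToRead=" + bytesToRead);
        }
        int oldLimit = dest.limit();
        dest.limit(dest.position() + bytesToRead); // cap the read at the requested chunk
        ReadableByteChannel ch = Channels.newChannel(input);
        while (dest.hasRemaining()) {
            if (ch.read(dest) < 0) {
                throw new IOException("Unexpected end of stream");
            }
        }
        dest.limit(oldLimit); // restore the caller's limit
    }
}
```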
Improve memory efficiency and speed of gltf importer
The GLTF importer does a lot of copying of byte arrays on the heap, which makes it slow and is often the cause of OOMs on Android with big models, due to the limited heap space.
This PR switches to direct buffers to get around that limitation. It also uses buffer views where possible to slice and convert data instead of copying it around; a sketch of that approach is shown below.
There is also some refactoring in the way streams are used; see the review comments for more details.
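For illustration only (the helper name and signature below are made up, not the PR's actual API), this is the kind of zero-copy view the buffer-view approach relies on:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class BufferViewSketch {
    // Expose a glTF bufferView (offset + length into the binary blob) as a
    // typed view of the existing memory instead of copying it into new arrays.
    static FloatBuffer viewAsFloats(ByteBuffer bin, int byteOffset, int byteLength) {
        ByteBuffer view = bin.duplicate();           // independent position/limit, shared memory
        view.position(byteOffset);
        view.limit(byteOffset + byteLength);
        return view.slice()                          // window over [offset, offset + length), no copy
                   .order(ByteOrder.LITTLE_ENDIAN)   // glTF binary data is little-endian
                   .asFloatBuffer();
    }
}
```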
This is tested with the bistro scene used here: #2137
The loading time on my machine is: