Virtual function call in destructor ~RenderInterface can cause texture resource leaks in RenderInterface implementations #222
Comments
Thanks for the detailed report. First, I want to emphasize what I wrote in the linked issue:

"So in normal cases, this is not an issue, as the render interface will/should be kept alive until after the call to Rml::Shutdown."

However, the lifetime requirement when using a non-default render interface for a context is actually tied to the context, as currently stated in RmlUi/Include/RmlUi/Core/Core.h, line 99 in 16e6a67.

So, this may become a problem if all of the following are true: …

Then, during 4., we would have a call to a virtual function in the destructor.

Proposed solution

I'm not sure if this would cause any trouble for users. I can't really see the use case for creating and destroying lots of render interfaces over and over again, which is the only case where I can see this lifetime requirement mattering. Thoughts?
The proposed solution sounds good. Sprinkle some asserts so that the code immediately breaks if the new lifetime rules are not honored, and it's perfect.
The render interface contains the GPU resources in its own format, so if I want, for example, to change that format (compression, mipmaps, ...), I can do this by recreating the render interface and the context. Other cases are GPU switches, offline rendering, ...
… was destroyed before the call to Rml::Shutdown, see #222. Update the lifetime requirements for the render interface, and add an assert in the render interface destructor to identify cases where this requirement is not respected.
Alright, this should now be fixed, let me know if there are any issues. I updated the lifetime requirements, but not as strictly as suggested previously. It is still possible to destroy the render interface before the call to Rml::Shutdown:

/// @lifetime If specified, the render interface must be kept alive until after the call to Rml::Shutdown. Alternatively, the render interface can be
/// destroyed after all contexts it belongs to have been destroyed and a subsequent call has been made to Rml::ReleaseTextures.

I went ahead and made that change. I hope this solution covers all your use cases @andreasschultes.
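To make the quoted rule concrete, here is a minimal sketch of the alternative teardown order. It assumes the RmlUi API of this era (Rml::CreateContext taking an optional per-context RenderInterface*, Rml::RemoveContext, and the Rml::ReleaseTextures call referenced in the documentation above); the concrete render interface implementation is the application's own and is not shown.

```cpp
#include <RmlUi/Core.h>

// Sketch only: teardown order for a render interface that should be destroyed
// before Rml::Shutdown, following the updated lifetime rule quoted above.
void DestroyPerContextRenderInterface(Rml::RenderInterface* render_interface)
{
    // Context created with a non-default render interface (the optional argument
    // of Rml::CreateContext referenced earlier in this thread).
    Rml::Context* context = Rml::CreateContext("main", Rml::Vector2i(1280, 720),
                                               render_interface);
    (void)context;

    // ... load documents, render frames ...

    Rml::RemoveContext("main"); // 1. destroy every context the interface belongs to
    Rml::ReleaseTextures();     // 2. release the textures held on its behalf
    delete render_interface;    // 3. only now destroy the render interface

    // Rml::Shutdown() can still be called later, once the remaining interfaces are done with.
}
```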
Thanks, I think that is a good solution.
Virtual function call in destructor ~RenderInterface can cause texture resource leaks in RenderInterface implementations
Reason:

RenderInterface::~RenderInterface (virtual)
-> TextureDatabase::ReleaseTextures (non-virtual)
-> TextureResource::Release (non-virtual)
-> RenderInterface::ReleaseTexture (virtual, same object as the initial destructor)

The custom implementation is never called!
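To illustrate the C++ rule behind this chain: during a base-class destructor the dynamic type has already reverted to the base class, so a virtual call made (directly or indirectly) from that destructor dispatches to the base implementation, never to the derived override. A minimal standalone sketch, not RmlUi code:

```cpp
#include <cstdio>

struct Base {
    virtual ~Base() { ReleaseAll(); }        // mirrors ~RenderInterface releasing textures
    void ReleaseAll() { ReleaseTexture(); }  // non-virtual helper, like TextureDatabase/TextureResource
    virtual void ReleaseTexture() { std::puts("Base::ReleaseTexture"); }
};

struct Derived : Base {
    // The custom implementation, like a user's ReleaseTexture override.
    void ReleaseTexture() override { std::puts("Derived::ReleaseTexture"); }
};

int main() {
    Derived d;
    // When d goes out of scope, ~Derived runs first, then ~Base. Inside ~Base the
    // virtual call resolves to Base::ReleaseTexture, so this program prints
    // "Base::ReleaseTexture" -- the derived override is skipped, and any GPU
    // resources it would have freed are leaked.
}
```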
To the use case:

It is not required to set a default Rml::RenderInterface; every Rml::Context can have its own Rml::RenderInterface. If the Rml::Context is destroyed, can the associated Rml::RenderInterface also be destroyed? I think there are some font textures which are still allocated. I'll check that out later.

TODO:

Rml::TextureResource creates a texture for each Rml::RenderInterface, either by loading it from disk or through a texture-creation callback function. All get functions take an Rml::RenderInterface as a parameter; if the texture hasn't been created yet, it is created for that Rml::RenderInterface. In conclusion, the GPU textures can be released if they are not in use by any rendering functions, and they can be recreated on request. A call to TextureDatabase::ReleaseTextures(renderInterface) will release all GPU textures belonging to renderInterface. The TextureResource is still alive and can be used by other Rml::RenderInterfaces, even ones that have not been created yet (see the sketch below).

Refs:
#133 (comment)
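As a user-side complement to the TODO above: since the virtual call from ~RenderInterface can never reach the derived override, a defensive pattern in the implementation is to track the GPU handles the interface owns and free whatever is left in the implementation's own destructor, which runs while the derived object is still intact. This is a hedged sketch against the RmlUi 4.x-era RenderInterface signatures; MyGpuTexture and DestroyGpuTexture are hypothetical stand-ins for the backend's texture type and deletion call.

```cpp
#include <RmlUi/Core.h>
#include <unordered_map>

class MyRenderInterface : public Rml::RenderInterface {
public:
    ~MyRenderInterface() override {
        // Runs before ~RenderInterface, while the derived members still exist:
        // free anything that was not already released through ReleaseTexture().
        for (auto& entry : textures)
            DestroyGpuTexture(entry.second);
        textures.clear();
    }

    void ReleaseTexture(Rml::TextureHandle handle) override {
        auto it = textures.find(handle);
        if (it != textures.end()) {
            DestroyGpuTexture(it->second);
            textures.erase(it);
        }
    }

    // Required overrides, stubbed here; a real backend would draw and scissor.
    // Every texture created in GenerateTexture/LoadTexture (omitted) would be
    // inserted into 'textures'.
    void RenderGeometry(Rml::Vertex* /*vertices*/, int /*num_vertices*/, int* /*indices*/,
                        int /*num_indices*/, Rml::TextureHandle /*texture*/,
                        const Rml::Vector2f& /*translation*/) override {}
    void EnableScissorRegion(bool /*enable*/) override {}
    void SetScissorRegion(int /*x*/, int /*y*/, int /*width*/, int /*height*/) override {}

private:
    struct MyGpuTexture { unsigned int id = 0; };    // hypothetical backend handle
    void DestroyGpuTexture(MyGpuTexture&) { /* e.g. glDeleteTextures(1, &id) */ }

    std::unordered_map<Rml::TextureHandle, MyGpuTexture> textures;
};
```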