
Looking for feedback/suggestions on changing ImTextureID from void* to u64 #1641

@ocornut

Description

Both DirectX12 (#301) and Vulkan (#914) use descriptors that are 64-bit.
This is a problem for our demos because ImTextureID is defined as pointer-sized (typedef void* ImTextureID), so on a 32-bit target ImTextureID is only 32 bits. That makes it difficult to use as a convenient way to pass a texture descriptor (in the case where the user wants to store a native descriptor inside ImTextureID).
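
To make the mismatch concrete, here's a minimal sketch (using uint64_t to stand in for the back-ends' actual handle types):

#include <cstdint>

typedef void* ImTextureID;   // current definition in imgui.h

// On a 32-bit target sizeof(void*) == 4, but VkDescriptorSet is a 64-bit
// handle there (VK_DEFINE_NON_DISPATCHABLE_HANDLE expands to uint64_t), and
// D3D12_GPU_DESCRIPTOR_HANDLE wraps a UINT64. Storing either into an
// ImTextureID silently truncates the value:
static_assert(sizeof(ImTextureID) >= sizeof(uint64_t),
              "ImTextureID cannot hold a 64-bit descriptor");   // fires on 32-bit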

I believe it is not a suuuper critical problem because:

  • It only fails when compiling for 32-bit, and we could agree that people shipping DirectX12 and Vulkan applications are perhaps unlikely to ship 32-bit builds?
  • Most real game engines based on DirectX12 and Vulkan wrap textures or materials into their own engine types, and would likely want to store e.g. MyTexture* or MyMaterial* into ImTextureID, which wouldn't be a problem any more (see the sketch just below). Though some engines may use 64-bit integer identifiers in all build modes?

(Let me know if you think those assumptions are wrong!)
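
For illustration, here's what that engine-side pattern looks like (MyTexture and its contents are made up):

struct MyTexture { ImU64 DescriptorHandle; /* engine-specific data */ };

void ShowTexture(MyTexture* tex)
{
    // A pointer is pointer-sized by definition, so this cast is lossless
    // on both 32-bit and 64-bit targets:
    ImGui::Image((ImTextureID)tex, ImVec2(256, 256));
}

// ...and the renderer back-end casts back the other way:
// MyTexture* tex = (MyTexture*)pcmd->TextureId;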

So I think this mostly affects (apart from our demo code):

  • Small-ish applications or quick experiments using DirectX12 and Vulkan that are likely to stay low-level and want to set up imgui easily. In theory those could/would compile for 64-bit, but when you aren't shipping, it's easy to just compile for 32-bit, and it would be odd if the default renderer didn't work there. Really not good for the perception of imgui "just working".

One solution is to make ImTextureID always 64-bit. However, C++ doesn't have a "pointer type that's always 64-bit", so it would have to be an integer, and that messes up some existing code.

(A) If I try that:
typedef ImU64 ImTextureID;
The kind of errors I get:

From imgui examples: (~7 errors)

// '=' : cannot convert from 'void *' to 'ImTextureID'
io.Fonts->TexID = (void *)(intptr_t)g_FontTexture;     // Store our identifier

// '==' : 'LPDIRECT3DTEXTURE9' differs in levels of indirection from 'ImTextureID'
IM_ASSERT(g_FontTexture == io.Fonts->TexID);

Another codebase: (~40 errors)

// 'void ImGui::Image(ImTextureID,const ImVec2 &,const ImVec2 &,const ImVec2 &,const ImVec4 &,const ImVec4 &)': cannot convert argument 1 from 'Pasta::ShadedTexture *' to 'ImTextureID'
ImGui::Image(&shadedTexture, textureSize);

// 'void ImDrawList::PushTextureID(ImTextureID)': cannot convert argument 1 from 'void *' to 'ImTextureID'
draw_list->PushTextureID(RawTex->ImID);

// '!=': no conversion from 'void *' to 'unsigned __int64'
bool push_id = draw_list->_TextureIdStack.back() != RawTex->GetImTexId();
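
For reference, fixing those call sites under (A) is mechanical; roughly (untested):

// Store an integer instead of a pointer
io.Fonts->TexID = (ImTextureID)(intptr_t)g_FontTexture;

// Compare through the integer type
IM_ASSERT((LPDIRECT3DTEXTURE9)(intptr_t)io.Fonts->TexID == g_FontTexture);

// User code now has to cast pointers explicitly
ImGui::Image((ImTextureID)(intptr_t)&shadedTexture, textureSize);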

(B) We can minimize the errors by using a higher-level primitive (not really in the spirit of imgui's codebase, but let's try):

struct ImTextureID
{
    ImU64 Value;                                 // user data to identify a texture (this is whatever you want it to be! read the FAQ about ImTextureID in imgui.cpp)

    ImTextureID()                                { Value = 0; }
    ImTextureID(void* v)                         { Value = (ImU64)v; }
    ImTextureID(const ImTextureID& v)            { Value = v.Value; }
    operator ImU64() const                       { return Value; }
    operator const void*() const                 { return (const void*)Value; }
    operator void*() const                       { return (void*)Value; }
    bool operator ==(const ImTextureID& v) const { return Value == v.Value; }
    bool operator !=(const ImTextureID& v) const { return Value != v.Value; }
};

Then we get, from the imgui codebase (down from ~7 errors to 1):

// 'type cast' : cannot convert from 'const ImTextureID' to 'LPDIRECT3DTEXTURE9'
g_pd3dDevice->SetTexture(0, (LPDIRECT3DTEXTURE9)pcmd->TextureId);

Other codebase: (down from ~40 to 5 errors)

// 'type cast': cannot convert from 'const ImTextureID' to 'Pasta::ShadedTexture *'
ShadedTexture* shddTex = (ShadedTexture*)(pcmd->TextureId);

// 'ImTextureID::operator !=': 3 overloads have similar conversions
bool push_id = draw_list->_TextureIdStack.back() != RawTex->GetImTexId();
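
The remaining ones would have to be patched by hand, along these lines (untested):

// Casting back out to a concrete pointer type needs to go through the
// void* conversion operator first:
g_pd3dDevice->SetTexture(0, (LPDIRECT3DTEXTURE9)(void*)pcmd->TextureId);
ShadedTexture* shddTex = (ShadedTexture*)(void*)pcmd->TextureId;

// The ambiguous '!=' can be resolved by constructing an ImTextureID
// explicitly, so only a single overload applies:
bool push_id = draw_list->_TextureIdStack.back() != ImTextureID(RawTex->GetImTexId());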

None of those solutions are perfect, and we'd be creating a new problem for people updating to a new version. They are fixable issues, so not the end of the world, but a little annoying considering it'd be for a change that benefits few people.

We'd also have to cast MyTexture* into ImTextureID explicitly now, whereas previously it worked implicitly because it would cast to void*.

(C)
Add an optional imconfig.h directive to make ImTextureID an unsigned long long, and add a static_assert in the dx12/vulkan back-ends in 32-bit mode that refers to this option.
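
A sketch of what (C) could look like (the exact mechanism is up for debate):

// In imconfig.h (user-side, opt-in):
#define ImTextureID ImU64

// In imgui.h, only provide the default when not overridden:
#ifndef ImTextureID
typedef void* ImTextureID;
#endif

// In imgui_impl_vulkan.cpp / imgui_impl_dx12.cpp:
static_assert(sizeof(ImTextureID) >= sizeof(VkDescriptorSet),
    "ImTextureID is too small to hold a VkDescriptorSet; "
    "#define ImTextureID ImU64 in your imconfig.h (see option C in #1641)");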

(D)
Specifically for Vulkan or DirectX12, decide that the binding can use another type, e.g. pass VkDescriptorSet* instead of VkDescriptorSet, and D3D12_GPU_DESCRIPTOR_HANDLE* instead of D3D12_GPU_DESCRIPTOR_HANDLE. I personally don't know either API at this point and don't know what would be best. Heck, I don't even understand what those things are, or what their lifetime is in a typical dx12/vulkan codebase (there is an open question in #914 about how ImTextureID should be interpreted in the default Vulkan binding).
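
A sketch of (D) for Vulkan, assuming the application keeps the descriptor set alive somewhere stable for as long as the draw data references it:

// The app owns the handle; imgui only carries a pointer to it,
// which always fits in a void*-sized ImTextureID:
VkDescriptorSet g_FontDescriptorSet;   // lifetime managed by the back-end
ImGui::Image((ImTextureID)&g_FontDescriptorSet, ImVec2(256, 256));

// In the render loop, dereference before binding:
VkDescriptorSet desc_set = *(VkDescriptorSet*)pcmd->TextureId;
vkCmdBindDescriptorSets(command_buffer, VK_PIPELINE_BIND_POINT_GRAPHICS,
                        pipeline_layout, 0, 1, &desc_set, 0, nullptr);

The catch is exactly the lifetime question above: the pointed-to handle must not move or be freed before the draw data is rendered.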

(E)
Sweep the issue under the carpet: remove the 32-bit builds of the Vulkan and DirectX12 demos from our example projects and/or the upcoming project generator, and leave the static asserts on.

(F)
Other suggestions?
