Describe the project you are working on
Core OpenXR support in Godot
Describe the problem or limitation you are having in your project
Currently our options for blending in skeletal data from sources other than animations loaded into an animation tree are limited.
A core use case in XR is the ability to apply hand tracking data to the skeleton of a mesh. On the horizon, this logic will be extended beyond hands to include body/arm/leg data, which only compounds the limitations we currently have.
We only support this in OpenXR in a very limited way: the tracking data is simply applied through a helper node (OpenXRHand) with no way to influence it, limiting its use to meshes specifically designed for the hands of the platform being used.
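For reference, the current limited setup looks roughly like this, a minimal sketch using the existing OpenXRHand helper (the node path is illustrative):

```gdscript
# Minimal sketch of the current approach: OpenXRHand applies tracking data
# directly to a Skeleton3D, with no way to blend or influence the result.
var hand := OpenXRHand.new()
hand.hand = OpenXRHand.HAND_LEFT
# Path is illustrative; it must point at a skeleton rigged for this
# platform's hand layout.
hand.hand_skeleton = NodePath("../LeftHandMesh/Armature/Skeleton3D")
add_child(hand)
```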
In its most basic form we want to be able to improve this by blending the data with animations, so selected fingers can be placed strategically while other fingers react to the finger placement of the user.
In its full form, we want to be able to take a humanoid skeleton and:
animate the fingers using the tracking data
animate the head using tracking data
apply IK to the arms using tracking data
animate the legs based on walk/run/idle animations.
So here we are talking about blending four sources of pose data into a single skeleton.
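With today's API this can only be approximated, since every blend tree source must be an Animation in a library. The sketch below stands in the tracked sources with plain AnimationNodeAnimation nodes; the proposal is precisely about replacing those stand-ins with nodes fed by live tracking data (animation names are illustrative):

```gdscript
# Sketch only: today, every source must be an Animation in a library.
# The "hand_tracking" source below is a stand-in; the proposal would
# replace it with a node that reads live skeleton/tracking data instead.
var root := AnimationNodeBlendTree.new()

var legs := AnimationNodeAnimation.new()
legs.animation = "walk"  # legs driven by regular walk/run/idle animations
root.add_node("legs", legs)

var fingers := AnimationNodeAnimation.new()
fingers.animation = "hand_tracking"  # would come from tracking data instead
root.add_node("fingers", fingers)

var blend := AnimationNodeBlend2.new()
root.add_node("blend", blend)
root.connect_node("blend", 0, "legs")
root.connect_node("blend", 1, "fingers")
root.connect_node("output", 0, "blend")

var tree := AnimationTree.new()
tree.tree_root = root
```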
While the above sketches out use cases for XR, discussing the proposed solution with the animation team identified a number of non-XR use cases:
Leg avoiding intersecting the ground
Procedural foot planting
Procedural arm planting
Doom-style IK-based inverse pose tracking
Active ragdolls
Describe the feature / enhancement and how it helps to overcome the problem or limitation
We change the XR implementation so a Skeleton3D node is populated with a hand skeleton that is posed according to the tracking data. Ideally we would have a skeleton resource that the node consumes, but that may be too disruptive a change, so having a Skeleton3D node populated would do. This could mean subclassing into an XRSkeleton3D and/or an OpenXRSkeleton3D node with the added logic embedded.
Then we add a new blend tree node that has a path to a Skeleton3D node and retargets the pose data in that skeleton onto the mesh being animated by the blend tree.
One of the suggestions made was to define a Godot standard skeleton for the XRSkeleton3D node; the XR platform is then responsible for adjusting the incoming data to what we're expecting. This would make it easier to build platform-agnostic games that can be deployed using OpenXR, WebXR, or alternative XR interfaces such as the one for Apple Vision Pro.
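As a hypothetical usage sketch of the proposal (none of these classes or properties exist yet; AnimationNodeSkeletonRetarget and XRSkeleton3D are placeholder names based on the description above):

```gdscript
# Hypothetical API from this proposal; nothing below exists in Godot today.
var retarget := AnimationNodeSkeletonRetarget.new()
# The proposed node holds a path to the Skeleton3D posed by tracking data...
retarget.skeleton_path = NodePath("XROrigin3D/LeftHand/XRSkeleton3D")
# ...and is blended like any other source in the tree.
blend_tree.add_node("tracked_left_hand", retarget)
blend_tree.connect_node("blend", 1, "tracked_left_hand")
```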
Note that one thing that needs to be taken into account is that there are two competing lines of thought in the XR space on how skeletons should work:
In many VR applications the hand of the player is decorated (see Half-Life: Alyx for a brilliant example) or shaped to match the character being played (think of a giant's hand). This results in the hand having fixed dimensions and the tracking data needing to adjust to them.
In many AR applications it is a requirement that the dimensions match the user's hand precisely, as it is overlaid on the user's actual hands. A size/placement discrepancy is immediately noticed. This often puts extra requirements on properly skinning a mesh so it deforms correctly based on size.
Both scenarios need to be supported.
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
See above
If this enhancement will not be used often, can it be worked around with a few lines of script?
Unless we introduce a way to create blend tree nodes from GDScript or GDExtension, no.
Is there a reason why this should be core and not an add-on in the asset library?
As the use cases for this cover other animation needs, it makes sense to have it as core functionality.
cc @lyuma @fire, sorry it took me a while to transcribe this after our meeting. Let's continue the discussion and see if we can add more detail to this as needed.
OpenXR hand tracking (XR_EXT_hand_tracking) is just one type of "live streaming" animation source. For example, Rokoko exposes its full-body tracking data over TCP or UDP. Other software (Faceware Studio, Live Link Face, OptiTrack, etc.) supports similar protocols.
It's exceptionally easy to stuff this tracking data into an Animation resource, but currently the only way to expose these Animation instances to an AnimationTree for configuration or mixing is to add them to an AnimationLibrary and add the library to the AnimationTree.
One solution might be to define AnimationProvider nodes that can be added as children to AnimationTree nodes, which automatically add their animations.
It may be possible to prototype this by:
Construct the OpenXRAnimationProvider as a GDScript tool node
The node dynamically constructs an AnimationLibrary with a single Animation
The node calls AnimationTree.add_animation_library() when ready
The node updates the Animation dynamically using the OpenXRInterface.get_hand_joint_xxx methods
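The steps above could be sketched along these lines. This is a rough prototype, not a finished implementation: the class name, library name, and track paths are placeholders, and only the OpenXRInterface, Animation, and AnimationLibrary calls are existing Godot 4.x API:

```gdscript
@tool
class_name OpenXRAnimationProvider
extends Node

# Placeholder prototype for the steps above: build an AnimationLibrary with
# a single Animation, register it on the parent AnimationTree, then rewrite
# the animation's keys every frame from live hand tracking data.
var _animation := Animation.new()
var _library := AnimationLibrary.new()
var _tracks := {}  # joint index -> [position_track_idx, rotation_track_idx]

func _ready() -> void:
	_animation.length = 0.0
	for joint in OpenXRInterface.HAND_JOINT_MAX:
		# Track paths are placeholders; a real version would map OpenXR
		# joints to the target skeleton's bone names.
		var path := "Skeleton3D:joint_%d" % joint
		var pos := _animation.add_track(Animation.TYPE_POSITION_3D)
		_animation.track_set_path(pos, path)
		var rot := _animation.add_track(Animation.TYPE_ROTATION_3D)
		_animation.track_set_path(rot, path)
		_tracks[joint] = [pos, rot]
	_library.add_animation("left_hand", _animation)
	var tree := get_parent() as AnimationTree
	if tree:
		tree.add_animation_library("openxr", _library)

func _process(_delta: float) -> void:
	var xr := XRServer.find_interface("OpenXR") as OpenXRInterface
	if xr == null:
		return
	for joint in _tracks:
		# Replace the single key on each track with the latest tracked pose.
		var pos_track: int = _tracks[joint][0]
		var rot_track: int = _tracks[joint][1]
		if _animation.track_get_key_count(pos_track) > 0:
			_animation.track_remove_key(pos_track, 0)
		if _animation.track_get_key_count(rot_track) > 0:
			_animation.track_remove_key(rot_track, 0)
		_animation.position_track_insert_key(pos_track, 0.0,
				xr.get_hand_joint_position(OpenXRInterface.HAND_LEFT, joint))
		_animation.rotation_track_insert_key(rot_track, 0.0,
				xr.get_hand_joint_rotation(OpenXRInterface.HAND_LEFT, joint))
```

Rewriting keys on a live Animation resource is wasteful, which is part of why a first-class provider node would be preferable to this workaround.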
Calinou changed the title from "Skeleton based retarget blend tree node for XR" to "Add a skeleton-based retargeting animation blend tree node for XR" on Aug 24, 2024.