Description
During execution, a graph is evaluated by calling from the network output and building up the call stack leftwards until it reaches the leaves. This callstack-building phase goes right-to-left, which isn't the data model users think of when interacting with the graph UI, where data flows left-to-right. That left-to-right data flow actually occurs during the subsequent callstack-popping phase, as each call returns its data rightward.
The call argument is what we pass along during the right-to-left callstack-building phase, which allows, for example, Footprint to be used as the call argument to enable the adaptive resolution system. This has to happen during that phase because some nodes, like the Transform node or the future Liquify node, need to modify the call argument before passing it along to their upstream (leftward) node. The call argument can also be edited on the fly between renders without any need to recompile the graph, which makes it useful for reducing render latency when panning.
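As a rough sketch of that two-phase model (all types and names here are invented simplifications for illustration; the real trait definitions differ):

#[derive(Clone, Copy, Debug)]
struct Footprint {
    scale: f64, // stand-in for the real resolution/viewport data
}

trait Node {
    type Output;
    // Calling eval() on the network output builds the call stack right-to-left;
    // the returned data then flows back left-to-right as the stack pops.
    fn eval(&self, footprint: Footprint) -> Self::Output;
}

struct SourceNode; // a leaf: the leftmost node, where the call stack bottoms out

impl Node for SourceNode {
    type Output = String;
    fn eval(&self, footprint: Footprint) -> String {
        format!("image rendered at scale {}", footprint.scale)
    }
}

struct TransformNode<N> {
    upstream: N, // the leftward node this one pulls data from
}

impl<N: Node> Node for TransformNode<N> {
    type Output = N::Output;
    fn eval(&self, mut footprint: Footprint) -> Self::Output {
        // A transform edits the call argument before passing it upstream,
        // which is why the Footprint must flow right-to-left.
        footprint.scale *= 2.0;
        self.upstream.eval(footprint)
    }
}

fn main() {
    let graph = TransformNode { upstream: SourceNode };
    println!("{}", graph.eval(Footprint { scale: 1.0 }));
}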
However, it's easier to think of nodes as blocks that process data flowing left-to-right, as is the case in the graph UI. Usually, a node will just pass the call argument along unchanged to its upstream (leftward) node. To avoid the boilerplate of making every node handle the callstack-building phase by personally calling its upstream nodes with .eval() to get the data it depends on, we have the automatic composition system (autocomp), which inserts compose nodes so that nodes using autocomp don't have to think about right-to-left data flow at all. Those nodes can be implemented using just the left-to-right data flow conceptual model, akin to the actual graph UI concepts. Any call arguments are automatically passed through along the primary data flow (but not the secondary ones), although they can't be read or modified.
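A compose node, the building block that autocomp inserts, might be sketched like this (again with invented, simplified types rather than the real definitions):

#[derive(Clone, Copy)]
struct Footprint { scale: f64 }

trait Node<Input> {
    type Output;
    fn eval(&self, input: Input) -> Self::Output;
}

struct ComposeNode<First, Second> {
    first: First,
    second: Second,
}

impl<Input, First, Second> Node<Input> for ComposeNode<First, Second>
where
    First: Node<Input>,
    Second: Node<First::Output>,
{
    type Output = Second::Output;
    fn eval(&self, input: Input) -> Self::Output {
        // The call argument (e.g. a Footprint) is consumed by the first node;
        // the second node only receives the produced data, which is why an
        // autocomp node can neither read nor modify the call argument.
        let data = self.first.eval(input);
        self.second.eval(data)
    }
}

struct SourceNode;
impl Node<Footprint> for SourceNode {
    type Output = f64;
    fn eval(&self, footprint: Footprint) -> f64 { footprint.scale }
}

struct DoubleNode;
impl Node<f64> for DoubleNode {
    type Output = f64;
    fn eval(&self, value: f64) -> f64 { value * 2.0 }
}

fn main() {
    let composed = ComposeNode { first: SourceNode, second: DoubleNode };
    assert_eq!(composed.eval(Footprint { scale: 1.5 }), 3.0);
}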
But that system was designed a while ago, before the methodology of the adaptive resolution system was fleshed out. Autocomp has many limitations when the Footprint call argument has to be passed through to secondary inputs, read (for example, by a cull node), or edited (for example, by a transform node). These cases all require opting for manual composition instead of autocomp, and it's becoming increasingly common for graphical nodes to do so, which reduces the value provided by the system.
The lack of standardization between the autocomp and manual comp conceptual models is also a significant burden for everyone using (and especially learning) the system, because the data flow directionality is counterintuitively reversed and the meaning of the different function parameters in the node_fn proto node definition becomes a source of confusion. With autocomp, the function's first parameter is the user-facing node's primary input, which doesn't appear in the node struct, while all secondary inputs are found in the struct. But with manual comp, the function's first parameter is the call argument (Footprint) and both the user-facing node's primary and secondary inputs appear in the struct. To avoid confusion, we instead want to standardize on the latter: the call argument is always given (and the user may ignore it with _) and the primary input is always part of the struct, just like the secondary inputs.
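To make the contrast concrete, here is a hedged illustration of the two parameter conventions, using made-up toy types and names rather than real node signatures:

#[derive(Clone, Copy)]
struct Footprint { scale: f64 }

#[derive(Clone, Copy)]
struct Image;

// Autocomp convention: the proto node function's first parameter is the
// user-facing primary input (absent from the node struct); `opacity` is a
// secondary input (stored in the struct). The call argument is invisible.
fn opacity_autocomp(image: Image, opacity: f64) -> Image {
    let _ = opacity;
    image
}

// Manual comp convention: the first parameter is the call argument, and the
// primary input arrives as something evaluatable stored in the struct
// alongside the secondary inputs.
fn opacity_manual(
    footprint: Footprint,
    image_source: impl Fn(Footprint) -> Image, // stand-in for upstream.eval()
    opacity: f64,
) -> Image {
    let _ = opacity;
    image_source(footprint)
}

fn main() {
    let _ = opacity_autocomp(Image, 0.5);
    let _ = opacity_manual(Footprint { scale: 1.0 }, |_| Image, 0.5);
}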
To address the above nonstandardization issues, the node_fn macro should be updated so that all cases have the same function signature expectations. It should also entirely auto-generate the struct to prevent the confusion arising from it (and its weird use of type arguments, which are basically a way to coax Rust and Rust-Analyzer into working despite essentially building a DSL out of a Rust function, rather than fully building our own DSL syntax that would lack tooling support).
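For context, this is roughly the kind of hand-written struct and type-argument trickery being referred to (an illustrative sketch only; the real macro expansion differs):

trait Node<Input> {
    type Output;
    fn eval(&self, input: Input) -> Self::Output;
}

// Today's hand-written style: `OpacityMultiplier` reads like a value type, but
// the macro actually substitutes a *node* type that evaluates to an f64.
struct OpacityNode<OpacityMultiplier> {
    opacity_multiplier: OpacityMultiplier,
}

// The field is only usable through .eval(), revealing the abuse of generics:
impl<O: Node<(), Output = f64>> OpacityNode<O> {
    fn multiplier(&self) -> f64 {
        self.opacity_multiplier.eval(())
    }
}

fn main() {}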
We should also follow the trend (as we move towards the full catalog of nodes supporting the adaptive resolution system) of removing autocomp from nodes: rip it out of all of them to retire the system, then replace it with a system that has similar goals but, importantly, gives access to the call argument in the node_fn macro's proto node function for direct use (and hopefully modification). For now, nodes will have to call .eval() on their upstream nodes to get their data, but we can then work towards abstracting that boilerplate away through the macro once we're in that position.
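An interim node under this replacement system might look roughly like the following sketch, where the trait and all names are invented stand-ins:

#[derive(Clone, Copy)]
struct Footprint { offset: (f64, f64) }

struct Image;

// Invented stand-in for the real node trait.
trait UpstreamNode<Input> {
    type Output;
    fn eval(&self, input: Input) -> Self::Output;
}

fn transform_node<N: UpstreamNode<Footprint, Output = Image>>(
    footprint: Footprint,
    upstream: N,
    translation: (f64, f64),
) -> Image {
    // A Transform node edits the call argument before handing it upstream...
    let mut modified = footprint;
    modified.offset.0 += translation.0;
    modified.offset.1 += translation.1;
    // ...and pays the .eval() boilerplate that the macro may later absorb.
    upstream.eval(modified)
}

struct Source;
impl UpstreamNode<Footprint> for Source {
    type Output = Image;
    fn eval(&self, _footprint: Footprint) -> Image { Image }
}

fn main() {
    let _ = transform_node(Footprint { offset: (0.0, 0.0) }, Source, (10.0, 5.0));
}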
Another change to make this happen will be the removal of the graph rewriting step which auto-inserts compose nodes. It's worth noting that, because all nodes will then be responsible for calling .eval() on their upstream nodes, we won't have a standardized way to hook into the composition system for distributed computing applications. But many of the performance-intensive nodes (the graphical ones) already opted out of autocomp and did their own .eval() calls, so this may be for the best anyways. When it's time to deal with distributed computation, we can add some kind of hook to the .eval() method itself, or some other place, for breaking up tasks across compute resources. That kind of hook will also be necessary for showing the progress of each node's execution within the graph UI (milliseconds spent per node, which ones have been executed or are waiting to be executed, etc.).
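As a sketch of what such a hook could look like (all names here are invented), a wrapper node that times each eval() call and could later report per-node milliseconds to the graph UI or hand work to a scheduler:

use std::time::Instant;

trait Node<Input> {
    type Output;
    fn eval(&self, input: Input) -> Self::Output;
}

struct ProfiledNode<N> {
    name: &'static str,
    inner: N,
}

impl<Input, N: Node<Input>> Node<Input> for ProfiledNode<N> {
    type Output = N::Output;
    fn eval(&self, input: Input) -> Self::Output {
        let start = Instant::now();
        let output = self.inner.eval(input);
        // In a real system this would be recorded for the UI, not printed.
        println!("{} took {:?}", self.name, start.elapsed());
        output
    }
}

struct AddOne;
impl Node<i32> for AddOne {
    type Output = i32;
    fn eval(&self, input: i32) -> i32 { input + 1 }
}

fn main() {
    let node = ProfiledNode { name: "Add One", inner: AddOne };
    assert_eq!(node.eval(41), 42);
}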
@TrueDoctor estimates 1 day for the macro syntax change, 1 day to migrate all the node usages, and 1 day to fix .then() usages in the node registry or other nodes that may also break.
The design for the node_fn macro's function signature usage should look like this:
#[node_macro::node_fn]
fn cull_node<T>(footprint: Footprint, cullable_data: T, ...) -> T {
    // The call argument (Footprint) is always the first parameter and is directly readable here.
    let returned_value_of_type_t = do_stuff(cullable_data);
    returned_value_of_type_t
}
#[node_macro::node_fn]
async fn construct_layer_node<Data: Into<GraphicElement> + Send>(
    footprint: crate::transform::Footprint,
    mut stack: GraphicGroup,
    graphic_element: Data,
) -> GraphicGroup {
    stack.push(graphic_element.into());
    stack
}