Add schematics RFC #64
Conversation
It's great to see this moving! I'm glad to see my initial proposal is well received. This looks like a great start! Thoughts in no particular order:
@MDeiml I'd be very interested to hear your thoughts on immediate conversion vs collect-resolve-apply. One thing I would emphasise is that we should be thinking of this as a feature aimed at a non-technical audience. Obviously implementing a schematic is a technical process, but a designer consuming them should not be assumed to have any technical experience. For this reason, things like invariant enforcement with good error descriptions are essential.
Thanks for your feedback! This was really fast. Gonna reorder your points to organize my answer better
You could just leave the corresponding entity empty. Maybe we could even spawn the entity lazily for the first inserted component. Losing this correspondence would mean one extra call to
That is tricky indeed. There is no … That being said, in most cases this should be fine. A typical case where you would
Similar to the last point I think. This might make sense in some cases, but we would really need to trust the developer to do it right. Imagine for example a schematic that generates some randomness like a random id when it is converted. It then wants to remember this id somewhere, which can only be in the schematic world. The other thing to consider here is that schematics as I described them are just systems. Currently it is not possible to restrict systems to be read-only at the schedule level.
Sounds good. I also think I didn't stress the "stable interface" part enough. Feel free to suggest specific changes to the document btw :)
Completely agree. There should also be error detection for e.g. not every schematic component being read, runtime components not being converted back, ...
I agree on both points. This question is not yet resolved with what we got so far. Gonna make an extra comment to keep things organized ;)
Now to the question of schematics interacting / resolving conflicts. My intuition here is the following: (almost) all runtime data needs to be converted to data in the schematic world somehow. Otherwise the scene format that is created using schematics would not really be "complete". You could also think about this from the other direction: every change you want to make to the runtime world should be achievable through changing something in the schematic world. That's the whole point of having schematics as an editor representation.

Now IMO it would be good to ensure, at least to some degree, that there is no duplicate data in the schematic world. As an example, imagine an entity that has both a "mesh renderer" component and a "particle system" component. Both of these depend on a "visibility" component. My argument is that (in the schematic world) "visibility" should neither be saved in the "particle system" component nor in the "mesh renderer" component, as otherwise we would save this data twice in the schematic world. For me the better design here would be to have a "visibility" component in the schematic world that is separate from "mesh renderer" and "particle system".

For me these design choices naturally lead to every piece of data in the runtime world having one and only one corresponding piece of data in the schematic world. Now the term "piece of data" is not very exact, but my suggestion would be to interpret it as "component", as components (in the runtime world) should already be "atomic", i.e. so small that breaking them apart into multiple pieces makes them kind of useless. A component that can be separated into multiple pieces is IMO bad design. Summarizing, what I'm saying is that "every component in the runtime world has one and only one corresponding component in the schematic world".

Of course this immediately makes the process of conversion quite a bit easier. There is no need for "conflict resolution" any more, other than checking that every component in the runtime world is only touched by one schematic (one system). As such there's also no need for "collect-resolve-apply". (Checking what I just mentioned should be possible by just looking at the commands that are produced by each system.) On the other hand it doesn't solve the problem of enforcing invariants, i.e. every entity with a "mesh renderer" component should also have a "visibility" component (in the schematic world). My suggestion here would be to solve this problem completely in the schematic world. We could for example look for the "mesh renderer / visibility" invariant in the schematic world and then resolve it there or give out an error message, whatever is more appropriate. See also bevyengine/bevy#1481.

Sry for the wall of text. Happy to hear your opinion on this.
A fair subtle roast of my organisational habits.
Is there any fundamental reason why there couldn't be change trackers for resources? On the "mutable access is an assumed change" principle of course. I can imagine "schematic resources" also being used for things like per-level configurations in the editor. In that case, we definitely would need to automatically rerun conversion when they changed. For me, giving responsibility to the developer is a last resort - the API should be as simple to use and as hard to break as possible.
Ah, your second comment will receive a second reply. Spoilers: we're going to disagree on that one.
FYI, resources do use change tracking: they're accessible via the
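For context, resource change detection in current Bevy looks roughly like this - a minimal sketch, where `LevelConfig` is a made-up example resource and the `Resource` derive assumes a reasonably recent Bevy version:

```rust
use bevy::prelude::*;

// Hypothetical per-level configuration resource, as mentioned above.
#[derive(Resource)]
struct LevelConfig {
    gravity: f32,
}

// `Res<T>` carries change ticks, so a system can react only when the resource
// was actually mutated - the same mechanism a schematic engine could use to
// decide when conversion needs to be rerun.
fn rerun_conversion_on_change(config: Res<LevelConfig>) {
    if config.is_changed() {
        println!("LevelConfig changed, gravity is now {}", config.gravity);
    }
}
```

This would be registered like any other system; the point is only that mutable access to a resource is tracked just like mutable access to a component.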
One-to-one schematic components to runtime components fundamentally contradicts the design goals for me, for a number of reasons.

Schematic components are intended to be logical units of functionality from the perspective of a designer. Often, that logical unit will be significantly larger than the atomic components used at runtime. My go-to example is a Rigidbody Schematic. The designer wants one big block they can add to an object, with neat checkboxes for things like "Is Kinematic". Dynamic and kinematic rigidbodies likely have quite different sets of runtime components, and certainly break into many atomic parts.

One-to-one is also incompatible with the goal of being flexible to changes (both small and large) in the runtime data layout. You may design the schematic as being one-to-one, then want to break the runtime data into multiple components later.

I see two broad categories of cooperation between schematics:
On the principle that we should empower users as much as possible, my inclination is towards a powerful underlying schematic "engine", capable of tracking all of these dependencies and managing constraints. We then build simple APIs for the common case on top of that engine. It's harder but (IMO) it's better.
Sry, I was not very clear. I proposed a "one-to-many" structure. I'll make a concrete example cause it's much easier to talk about it that way.

```rust
struct MeshRendererSchematic {
    mesh: Handle<Mesh>,
    material: Handle<Material>,
}

struct VisibilitySchematic {
    visible: bool,
}

struct TransformSchematic {
    translation: Vec3,
    rotation: Vec3,
    scale: Vec3,
}

fn main() {
    App::new()
        // ...
        .add_schematic(mesh_render_schematic)
        .add_schematic(visibility_schematic)
        .add_schematic(transform_schematic)
        // results in `VisibilitySchematic` automatically being added
        .add_archetype_invariant(MeshRendererSchematic::implies(VisibilitySchematic))
        // results in an error message if the entity doesn't have a `TransformSchematic`
        // (`implies` would also fit better here, just for example purposes)
        .add_archetype_invariant(MeshRendererSchematic::requires(TransformSchematic))
        // ...
        .run();
}

fn mesh_render_schematic(query: SchematicQuery<MeshRendererSchematic>) {
    for (mesh_renderer_schematic, commands) in query {
        commands.insert_or_update(mesh_renderer_schematic.mesh);
        commands.insert_or_update(mesh_renderer_schematic.material);
    }
}

fn transform_schematic(query: SchematicQuery<TransformSchematic>) {
    for (transform_schematic, commands) in query {
        commands.insert_or_update(Transform {
            translation: transform_schematic.translation,
            rotation: Quat::from_euler(
                EulerRot::default(),
                transform_schematic.rotation.x,
                transform_schematic.rotation.y,
                transform_schematic.rotation.z,
            ),
            scale: transform_schematic.scale,
        });
        // This could maybe even be an `ArchetypeInvariant` on the runtime world
        commands.insert_or_update(GlobalTransform::default());
    }
}

fn visibility_schematic(query: SchematicQuery<VisibilitySchematic>) {
    // you get the point
}
```

I guess this even allows a "many-to-many" relationship between components. Just not a "many-to-many" relationship between schematic systems and runtime components. Are we getting closer to agreeing? xD
That's great. Still means that developers need to check this manually. Do you think that's ok? Otherwise we could maybe do some magic and have something like
To summarize, we're disagreeing on two points (probably):
Ah, our positions are much closer than I thought then! The "every component in the runtime world has one and only one corresponding component in the schematic world" line threw me off. I've certainly been imagining that something like schematic implies/requires relationships would be possible, so we're agreed on that.

However, this brings up the question of whether we're expecting people to only have one canonical schematic to achieve a certain goal, and whether we're comfortable with those schematics being tightly coupled to each other. If there is one single … The other approach is to define compatibility purely on the basis of the interpreted runtime data. This approach is purer, more versatile, and (I think) strictly more powerful. However, it does require a significantly more complex implementation.

One issue with the runtime data compatibility approach is that it's possible for the compatibility between schematics to change accidentally if their implementation changes. Personally, I think I'm happy saying that if programmers mess that up then they've introduced a bug, go fix it.

I'll make a follow-up comment with a very rough sketch of how I imagine schematic systems may look, attempting to deal with neatly piping through the necessary information for change tracking, constraint resolution, and nice editor error messages and suggestions.
Here's a very rough vision for the collect-resolve-apply constraint based approach. I'm going to use the language of "interpreter functions" instead of "interpreter systems" to avoid ambiguity. Maybe they would be registered as systems, or maybe they would be managed through some schematic engine layer.

**Common interpreters**

```rust
fn interpret_transform(schem: &TransformSchematic, commands: SchematicCommands<TransformComponent>) {
    commands.require_exact(TransformComponent(schem.pos, schem.rot, schem.scale));
}
```

Note that this takes a single schematic component, not a query. The function will be invoked once for each schematic that needs interpretation - it's a map. The engine knows which schematic component is being interpreted, and tracks dependencies for the purpose of knowing which things to invalidate when schematic data changes. The only way for this automated tracking to be broken/cheated would be for the function to touch global data. The commands are in a declarative constraint format.
It would be an error to constrain a component that isn't included as a … We can also allow things like resource parameters here, with dependency tracking of them done in the same way as tracking of schematic components themselves, i.e. some outer system doing the tracking work and invoking interpreters.

**Query interpreters**

Occasionally, we may want a conversion process to depend on arbitrary data in the schematic world.

```rust
fn some_crazy_interpreter(
    query: SchematicQuery<(TransformSchematic, SomeOtherSchematic)>,
    resource: Res<SomeResource>,
    commands: SchematicCommands<(TransformComponent, SomeOtherComponent)>,
) {
    // Iterate and constrain, using arbitrary data.
}
```

In this version, all commands would be given a dependency on all schematic components in the query. You can use the data in arbitrary ways, but any change to any of the schematics will trigger reinterpretation of all of them. (We could give advanced APIs to override these dependencies with manual ones if people want to tune performance.) The resolution pass logic is actually relatively simple (check exact requirements first, then soft ones - if there's a conflict, throw an error), but I'll leave that to later.
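To make the "check exact requirements first, then soft ones" idea concrete, here's a rough standalone sketch of such a resolution pass - my own illustration, not part of the proposal; the `Constraint` type and `resolve` function are invented for the example:

```rust
#[derive(Debug)]
enum Constraint<T> {
    // The component must have exactly this value.
    Exact(T),
    // A default that only applies if nothing exact was requested.
    Soft(T),
}

fn resolve<T: PartialEq + Clone + std::fmt::Debug>(
    constraints: &[Constraint<T>],
) -> Result<Option<T>, String> {
    let mut resolved: Option<T> = None;
    // Pass 1: exact requirements. Two different exact values are a conflict.
    for c in constraints {
        if let Constraint::Exact(v) = c {
            if let Some(existing) = &resolved {
                if existing != v {
                    return Err(format!(
                        "conflicting exact constraints: {existing:?} vs {v:?}"
                    ));
                }
            } else {
                resolved = Some(v.clone());
            }
        }
    }
    // Pass 2: soft requirements only fill in when no exact value was set.
    if resolved.is_none() {
        for c in constraints {
            if let Constraint::Soft(v) = c {
                resolved = Some(v.clone());
                break;
            }
        }
    }
    Ok(resolved)
}

fn main() {
    let constraints = vec![Constraint::Soft(1.0_f32), Constraint::Exact(2.0_f32)];
    // The exact requirement wins over the soft default.
    assert_eq!(resolve(&constraints), Ok(Some(2.0)));
}
```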
To motivate why I prefer this over … Using the query approach, it's possible to cheat the tracking by storing information from previous iterations in a local. I don't believe that there's any way to 100% stop people from ever breaking the tracking, but forcing them to write to global data to break it feels more robust to me. It's more rusty.
Also seems like a good design.
I think we should talk about the duplicate API of "schematic functions" and "schematic systems" though. Tbh I think there are good reasons for and against both (feel free to add points, I'll edit this comment to stay up to date):

Pros:
Cons:
Finally regarding resolving conflicts in the schematic world vs as part of conversion: (The complexity of implementation should be roughly similar.) Pro "schematic world" / archetype invariants:
Pro "collect-resolve-apply":
Btw I'm amazed at how quickly this is moving :). Seems like together we'll be able to come up with a pretty nice design here.
You can also do a query style … The intent is to allow the engine to know which interpreter functions can create which components. This is useful for editor UX reasons. If we have a schematic with a
I still don't understand completely, I'm sorry. The call to
We want the engine to be able to work this out without running the interpretation function. There may be no … It works this out by seeing that there's a registered interpreter which takes … It's an attempt to recapture the niceness of … Under my proposed approach, the user experience would be:
Yep, I also thought that was very pretty. As a matter of personal style, I always like to encode constraints into the type system as heavily as possible. I will happily go to extra effort to move documented restrictions and runtime errors into compiler errors. Given that personal leaning - which I think Rust shares - I prefer the more locked down approach. I don't think it's a dealbreaker either way, it's a surface level difference.
Hmm, that seems like a rare but very fair concern.
I think maybe this is a good thing. In fact, maybe we want to encapsulate the schematic … If we want to enforce proper use of dependency/change tracking, we probably want to disallow people from doing untracked queries, which they would be able to do if it were just a plain system. This is my personal tendency towards locking things down (without restricting power) again.
Oh, I'm not sure I explained myself well here. My comment was regarding your query interpreters, where … Btw thanks for clarifying my question about the generic parameter, I understand it now. It's solving a problem, though, that only comes up with the collect-resolve-apply approach, not with archetype invariants.

Otherwise I think we've reached a point where we just have different but equally valid opinions, so we should maybe write down both designs with their pros and cons and then get feedback from other people. Do you agree? We could add two sections for archetype invariants and your collect-resolve-apply approach with a comment explaining that this is a "pick one out of two" situation. Also add a section for your "common interpreters" with a comment explaining the feature is optional.
Oh I see. Yes, I think this is just a matter of API style, they seem equivalent in all meaningful ways.
Agreed.
There's a big scary monster that we haven't addressed yet, and that's readback.

If we assume that it's possible to drop in and out of a "play mode" in the editor, we would ideally like changes to runtime state to somehow be reflected back into the schematics. Schematics are the interface that designers are used to using to interact with that data, and we'd like them to be able to stay in that familiar territory for basic observation of runtime state. We also want to be able to do things like edit the …

I don't believe an automated solution is possible in the general case, because we can't (and don't want to) enforce that the schematic -> runtime mapping is bijective. We could trivially create an interpreter that ignores a schematic field, and now there's no way to recreate the schematic from the runtime data. I expect that this will require some type of opt-in, manually implemented "schematic inference" functionality. It won't be possible for everything, but for something like …

A big question here is whether we attempt to fold that API into the existing interpreters (so an interpreter also specifies how to invert itself as part of the command) or whether the solution is to have a completely separate set of "inference systems" that go the other way. We should probably do some thinking about whether this part of the problem impacts the choice between the two designs before we solicit feedback on them.
Good point. I think I'll still write down both solutions for future reference.
Agreed
I also think that's the best tactic here. It would be nice though if the editor could output something like "TransformSchematic is not being read back" in an appropriate place, so there should be a mechanism to detect schematics that don't have this implemented.
Good question. Intuitively I'd suggest something like:
Thinking about this now (as you predicted) makes me more keen to include common interpreters for that exact reason. 😅
Edit note: Originally I called them Edit and Play schematics. This has been changed to forward and backward.

One possible solution to this is to have two classes of schematics, forward and backward. Forward schematics are the ones already discussed in this thread. Backward schematics are the ones that get inferred from the runtime state during play. Often, one schematic will hold both roles - but there's room for both forward-only and backward-only schematics.
Inference functions would be like inverse interpreters. They can perform queries over the runtime world, and infer schematics into the edit world. (Editing gets a bit complicated...) In the case of something like …

Every time I've tried to solve this readback problem previously, I've ended up concluding that my solution was fatally flawed in some way. So I wonder what's wrong with this one - any ideas?
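For concreteness, here's a rough sketch of what an inference function for the earlier transform example could look like - purely an illustration of the idea, not an API from the proposal, and the question of how it would be registered is left open:

```rust
use bevy::prelude::*;

// The schematic struct from the example earlier in the thread.
struct TransformSchematic {
    translation: Vec3,
    rotation: Vec3, // Euler angles, as a designer would edit them
    scale: Vec3,
}

// An "inference function": roughly the inverse of the transform interpreter.
// It reads runtime data and reconstructs the schematic representation, so the
// editor UI can keep showing the familiar fields during play mode.
fn infer_transform(runtime: &Transform) -> TransformSchematic {
    let (x, y, z) = runtime.rotation.to_euler(EulerRot::default());
    TransformSchematic {
        translation: runtime.translation,
        rotation: Vec3::new(x, y, z),
        scale: runtime.scale,
    }
}
```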
I'm happy to do the write up of mine if you want to do yours - it's probably best if each is presented by its strongest advocate.
Amusingly I actually don't think this is a good reason! In the inference model proposed above, I'm quite happy for them to be implemented independently, even if they're commonly side-by-side in code. An advantage of that inference approach is that it's not tied too closely to the schematic that generated the data. We should assume that runtime data will drift away from what the schematic specified (including components being added and removed) and our play UI should attempt to be as tolerant of that as possible. A little conceptual separation here may not be a bad thing.
I don't really agree, but I guess you invented the term, sooo. In my opinion the main purpose is to be the representation that is used for UI. I guess we just don't agree here, but in the end it's also the code author's decision what they use it for and not ours. We can just make suggestions.
I see that point. So in the end there would be four representations: schematic, schematic UI (which hopefully is not that different), runtime and runtime UI (which hopefully is the same as schematic UI). Changing runtime UI without changing schematics means, though, that the UI would suddenly behave differently when entering play mode. Maybe in a way that is not noticeable to the user. That seems dangerous to me.
There is a conversion process to UI components. Imagine for example that rotation in the UI is given in euler angles. There is a tiny error in floating point arithmetic when converting quaternions to euler angles and back. This error would appear both for readback and custom runtime UI.
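To illustrate the kind of drift being described - a tiny standalone sketch (illustrative only, using Bevy's math types) of a rotation being pushed through the euler-angle UI representation and back every frame:

```rust
use bevy::math::{EulerRot, Quat};

fn main() {
    let original = Quat::from_euler(EulerRot::default(), 0.3, 1.2, -0.7);
    let mut q = original;
    // Simulate an imgui-style UI that converts to euler angles for display and
    // back to a quaternion every frame: any error introduced by the conversion
    // is fed back in as the new source of truth, so it never gets reset.
    for _ in 0..1000 {
        let (x, y, z) = q.to_euler(EulerRot::default());
        q = Quat::from_euler(EulerRot::default(), x, y, z);
    }
    println!("accumulated drift: {}", (q - original).length());
}
```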
The author could either choose to round the float (there is an error, but the error would not multiply over time; also, obviously, the error would only come up if the designer actually edited the schematic during runtime). Alternatively the author could choose to change the schematic. I would strongly suggest the second option. Otherwise the value couldn't be set to e.g. 1.5 during the actual editing, so outside play mode.

Updated pro / contra (just meant to keep this structured):

Pro readback:
Pro runtime UI:
I feel like we're at a point where it might be possible that we can't agree. So I'd offer a different way to look at it. The scope of this RFC is schematics, not UI. Obviously we need to make sure that a UI can actually be built from what we're creating, but we don't need to think about the specifics. This means that we shouldn't focus on whether custom runtime UI is useful. I actually agree that it is.

The question is rather if readback is useful, if it is implementable and if it can cause unsafety / frustration. I think we might agree that it is useful. I claimed that it is also implementable, but obviously that question will be answered as soon as I write the "implementation" section in the RFC for this. So the remaining question is whether it can cause unsafety or frustration. I take it that your answer to this is "yes"?
TBH I don't understand how instability could ever be acceptable here. I think we're agreed that schematics are the format in which assets/scenes are serialized. In that case, how can it be acceptable for that format to be unstable? That would break compatibility with assets already built - which is a huge deal! There's some amount of tolerance here if we e.g. use Serde for serialization and use its field attributes to handle simple migrations, but that's fairly brittle and we want it to happen as little as possible.
I don't quite think of the UIs as "representations" because the data is never stored in them - they're more like mapping functions from …

However, customising the UI is optional in both places, so it's two representations + more if you care about making this one pretty. The simple case of just combining multiple runtime components into one UI could be automated too - it would just have to inline their default representations next to each other in one panel. A simple attribute or one-liner could take care of that.
I think it's on the programmer not to do anything too silly here.
This is a great point I'd never thought of - the imgui pattern might accumulate precision error in some cases! However, this is how Unity (and I think Unreal and Godot?) do it today and I've never heard of anybody complaining about it being a problem, so I guess it's not a big deal in practice.
Scenario: My game is on a grid, and it's only allowable to set those positions to grid-aligned values. However, they're floats at runtime because they can be animated between grid positions.
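A small sketch of that scenario - names are hypothetical, just to make the readback problem concrete:

```rust
use bevy::prelude::*;

// Hypothetical schematic: designers may only place things on whole grid cells.
struct GridPositionSchematic {
    cell: IVec2,
}

// The forward conversion is trivial: grid cell -> float transform.
fn interpret_grid_position(schem: &GridPositionSchematic) -> Transform {
    Transform::from_xyz(schem.cell.x as f32, schem.cell.y as f32, 0.0)
}

// Readback is the problem: halfway through an animation the entity sits at
// (2.4, 3.0), and no grid cell faithfully represents that state - naive
// inference would have to either reject it or silently snap it.
fn infer_grid_position(runtime: &Transform) -> Option<GridPositionSchematic> {
    let x = runtime.translation.x;
    let y = runtime.translation.y;
    let aligned = (x - x.round()).abs() < 1e-4 && (y - y.round()).abs() < 1e-4;
    if aligned {
        Some(GridPositionSchematic {
            cell: IVec2::new(x.round() as i32, y.round() as i32),
        })
    } else {
        None
    }
}
```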
Good summary!
Agree-ish. However, because schematic data and runtime data are sometimes fundamentally different, forced consistency may be bad.
Equally true for the UI route, where customisation only happens if you don't want to accept the defaults. Mixing defaults and customised is basically the same experience as merging.
I fear we may be.
Yes. I contend that for any solution you come up with, I will be able to give you plausible real-world scenarios that readback will not be able to handle satisfactorily, but that will be handled by the UI route. You're welcome to take that as a challenge. 😂
I think I'm going to put together a "rival" RFC - of course intended in the spirit of cooperation not rivalry! That way we can get a clear look at both approaches side by side, and perhaps bring others in to comment at that stage. I may play around with a prototype implementation too, as I find that's usually a good way to be quickly confronted by all of the ways in which I'm an idiot.
Sure, that will help a lot with discussion.
I'm going to, but first we need to solve a different question that I forgot just now and that is probably more important. I'm sorry 🤦

That is the question of "stable format", which we both agree is one of the motivations of what we're doing. You (if I understand correctly) see this as the most important motivation and interpret stable as "stable, unless stability is impossible". I see usage in editors as the most important motivation and interpret stable as "stable, unless the runtime representation changes enough". I'm gonna go out on a limb here and define my position a bit more strictly as "stable, unless schematics are no longer surjective" ("surjective" meaning that every sensible runtime state can at least be reached through schematics).

I'd like to give some context: Unity AFAIK does not have stable formats in your sense. If you change the attributes of a custom …

Do I understand correctly that you would want Bevy to behave differently?
I feel like if we don't agree on what's motivating schematics, then there's no way we'll agree on a design 😆
Stability unless stability is impossible, in which case friction should be minimized. If a game feature gets radically redesigned, then stability from old version assets to new is probably nonsense. If a feature's implementation gets changed (i.e. new/different systems - operating on a different ECS layout) stability should be maintained. I can write a whole new physics engine and keep the same rigidbody schematic, as long as the engine is exposing the same concepts. Serde style migration attributes on the schematic are acceptable as a last resort.
I would say it's every runtime state that the programmers want the designers to be able to create. I actually think it's perfectly sensible to create a …

In programming terms, schematics are a tool for encapsulation. Schematics are the public interface to the state - and programmers can choose what to make public, and what to keep private. Some states can only be reached "privately" - i.e. through progression of game logic. That's a good thing, because many states would be nonsensical - e.g. unnormalized direction vectors. The only reason you need to relax the stability from full to surjective is to accommodate the problems with the readback approach.
Sort of. In the case of Monobehaviours, they don't. That's bad and I want to fix it. However, it's a slightly less bad problem for them. Monobehaviours, being managed classes, are not particularly sensitive to data layout. Being stuck with an existing layout of serialized fields for backward compatibility is usually okay - you just do the equivalent of "interpreting" that data in …

In Bevy we're data and performance oriented all the way down. Data layout is very sensitive, and is also something we want to be able to change freely, either as we change our designs or improve performance. Being stuck for backward compatibility reasons is a much bigger problem.

When it comes to their ECS prototype, they sort of do have a stable layer. They use Monobehaviours to serialize their assets, and an interpretation pass to turn that into ECS data. Generally, people will avoid breaking the stability of those Monobehaviours for exactly this reason.
The reason why this is important is that it dictates whether "you can implement readback for every sensible schematic". If the schematic components cover every combination of components that are meant to come up during play - in mathematical terms, if they are surjective onto the set of states encountered during runtime - then (again mathematically speaking) there exists an inverse function. If on the other hand a situation like the example with …

I hope I'm not misinterpreting your answer, but our difference of opinion boils down to:
vs
I feel like both are very much valid. It's probably not surprising, then, that I see schematics as something that can also be used during runtime while you disagree.
Excellent summary! My stance is that only supporting surjectivity is an extra restriction, and we shouldn't make decisions that constrain our users without very clear justification. The fact that we disagree over whether the …

I'm not persuaded that there's anything that the readback approach does so dramatically better than the UI approach as to justify these restrictions. I think the path forward is that we both write up and/or prototype our respective versions, then come back and compare how they look in practice - perhaps challenging each other to show how we'd deal with some challenging scenarios.
Now since we won't come to agree on the topic of "stable data", I'm gonna adopt your stance on that for discussion's sake.
This of course means that I agree that there are schematics where you couldn't implement readback in a satisfactory way. But I'd still like to have readback as an optional feature of schematics, since I believe that the situations where it can't be implemented are very rare. An editor UI would find a way to deal with these rare cases, whether through "custom runtime UI", "merging" or some solution we haven't thought of yet. I'd expect that if someone learns of a feature in Bevy that helps in converting one representation into another, then a natural question would be "Is there also a way to convert back?". I'd like the answer to be "yes". Even if not, it makes sense to ensure that our design at least has the possibility for conversions the other way.
Haha, not sure how productive that would be, but sounds fun. I imagine that - having different goals as well as different scopes (I guess you're also going to talk about UI in your RFC) - we just wouldn't agree on which is better xD
GitHub says our discussion is too robust. I got locked out for 10 minutes by a rate limit! 😂 Yep, I agree with all of that. My challenge to myself is to make the UI proposal good enough to persuade you that readback is obsolete. I reckon I've got a 50% shot at that. I'm going to take some time to collect my thoughts and come back with a draft RFC and/or prototype - probably in a couple of days if I had to guess.
Didn't even know GitHub had a rate limit for discussions xD Sure, take your time.
I'm very unknowledgeable about the whole of the conversation/discussion, but I figured I would give my view on the subject anyways. Schematics - Editor or Artist representation of game data/systems. Two easy examples brought up so far were:
The idea of stability was brought up for Schematics, and while they would/should provide stability from changes to the Runtime representation, Schematics themselves would/should not be stable, because an Artist/Editor-user may need to change that representation as they see fit. I'm unaware how Archetype Invariants + Entity Kinds would work with Schematics; perhaps these are Runtime constraints only.
Just to check that I'm understanding you @Diddykonga, do you mean that a designer should be able to choose e.g. whether to input a rotation as Euler angles or as a quaternion? If so, I think that's best dealt with in one of two ways:
Both of these methods could be used with either my or @MDeiml's preferred approach.
My apologies, what I meant is that there would always be one for editor and one for runtime, but that both should have mutability/migration-ability, which was why I said it is unstable. Although I may be misunderstanding what stable is in this context ¯_(ツ)_/¯

So as long as the Editor Schematic is Euler Angles, that's what a designer/artist would use, but they should also be able to change it without changing the Runtime Schematic, to support designer/artist workflows more precisely.
I think you understood stability right. "Stability" would mean that the actual data structures used for the schematic representation don't change at all under any circumstances. If I understand correctly, you're actually mentioning another point that we haven't yet talked about, which is that artists / designers might want to intentionally change the data representation they see in the UI. An example would be if the developer / non-designer initially adds rotation to the UI as quaternions. In this situation a designer would probably complain that they don't understand quaternions and would rather input euler angles. The developer would then need to change the schematics to meet that request. So I guess optimizing for designer workflow is contrary to the schematic representation being stable...
In practice, what matters is long term stability. I want to be able to make runtime implementation changes without worrying about breaking assets that were built months ago. One would hope that the way that example would resolve itself is that the artist would complain as soon as the schematic was given to them, and changes would be made before a backlog of content was built on the bad version. Stability only matters once the team start relying on it.

Technically, stable doesn't have to mean that the schematic struct layout can't change at all - the real requirement is that any previous version can sensibly deserialize into the current version. If we assume that serialization is being done by …

Having to do that is tedious. Supporting deserialization from multiple prior versions of the same struct is error prone, and doing so responsibly probably requires writing unit tests for the migration. Nobody wants to do that for their supposedly quick & seamless schematics, but the fact that this is possible means that even the worst case isn't a catastrophe.
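For what it's worth, a small sketch of the kind of attribute-based migration being referred to - field names and types are hypothetical and simplified, this only illustrates standard Serde mechanisms:

```rust
use serde::Deserialize;

// A schematic whose layout has drifted since the first assets were saved.
#[derive(Deserialize)]
struct MeshRendererSchematic {
    mesh: String,
    material: String,
    // Field added later: old asset files simply fall back to the default.
    #[serde(default)]
    cast_shadows: bool,
    // Field renamed: old files that still say "visible" keep deserializing.
    #[serde(alias = "visible")]
    is_visible: bool,
}
```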
Rendered
Unresolved questions:
See also bevyengine/bevy#3877