Constraints language #13
Example of a multi-segment path constrained to a distance.
Example of a multi-segment path constrained to a distance.
The only thing is, I have no idea how to do curves. I think the implementation would need to be aware of how they’re represented with the formulas so that they can be symbolically manipulated. I’m realizing also that I think
I wrote these notes up independently, and only commented inline where I felt it would add something. Those notes are marked with (inline)
at their start when mentioned below. This is really cool. Especially the idea of custom defined constraints, I mean that's so powerful for libraries wow.
I would want to see more examples of this notation, and have some more sense of the implementation level of effort (although this is a hell of a start). And if you can demonstrate that it is a superset of existing KCL's capabilities I think that would help a lot.
affirmations
- The idea of explicitly defining points and constraints is powerful I think.
- While this does more directly map to the status quo of tradCAD solver-based approaches, I think that is a good thing in this case for the reasons you mentioned.
- I like the thinking about the spatial and temporal relationships in the design note in section 1, point 10
- Damn this allows for the expression of custom constraints that's fucking sick for real, cannot be understated how cool that would be for library authors.
criticisms
- (Inline) I think the `||v1||` notation would be opaque to most users. It's clear for your proposal, but I think having the default notation be `len(v1)` or something would be more immediately understandable.
- (Inline) The idea that anyone but power users would know to type `||` into the KCL expression box to initiate a selection seems a bit far-fetched to me. I would rather a UX where we add `len(l1)` (see my note above) on selection automatically, and if the user wants something other than magnitude they'd have to specify. But I think I understand this whole example flow as a deeply power user one anyway, and most would use simpler constraint applications like `parallel` anyway.
- (Inline) `p1.x = 5` — I'm not sure I understand this notation.
- (Inline) I could see `screen_point` being seen as the "default" way of making points confusing people writing KCL (not who I think we should prioritize), and `point` seeming more like an abstract power user way of defining a point. These seem powerful, but I would make the concrete one the shortest and easiest to write.
questions
- (Inline) I'm a little concerned about `screen_point`, because to me it reads as "screen space", as in it is dependent on the camera position, which seems odd. I think I need a bit more explanation on what that means (the `print_point` both helped and harmed what I thought my understanding was).
- (Inline) I could use a little more explanation around what it means to be fully constrained in this paradigm's sense. I think it means in the mathematical sense that the equation we're solving has more than one solution, but that seems distinct from the CAD meaning of the term, in which one or more positions are "movable". I think they are the same underlying idea in both senses, but in CAD you typically display "not fully constrained" sketch geometry (traditionally black geometry is constrained and blue geometry can be moved around).
- (Inline) Am I understanding correctly that this would actually use Datalog under the hood, or just conceptually, and continue to be in Rust?
- (Inline) This seems oriented (correctly) toward robustly describing 2D geometry, but I'm interested in how this maps into 3D geometry as well? This would help me understand what an "extrude" and "revolve" map to in this realm of thinking.
- (Inline) How is construction geometry represented?
- (Inline) During KittyCAMP 2024 we discussed the benefits of "having every segment be relative unless a 'fixed' constraint is applied". How are relative lines represented in this approach at all?
- How does this proposal interface with tags? Does it preclude the need for them, since every entity is represented explicitly? I don't think that is the case.
challenges
- Is there a way for us to allow this without breaking existing KCL models, as an enhancing non-breaking change?
- If we can think of ways to display partially-constrained geometry, perhaps with default values or hints for the unconstrained properties, would that support proper components in the same manner? More of a UX challenge for us than a language one.
examples requested
- Show what authoring that farmbot Onshape sketch would look like in this paradigm
- Show what authoring a basic cube would look like
- Show what at least a couple of our samples would look like, maybe starting with the
I'll post a couple diagrams of my understanding of what you talk about early in the proposal tomorrow, and if you like em you can add them inline.
> Additionally, KCL requires users to write functions that ultimately come down to computing the locations of points given other points. We have standard library functions that help make this somewhat higher-level, like `tangentialArc`, but it ultimately comes down to directly computing output locations based on inputs. This means that users must adapt their way of thinking to the tool rather than the tool adapting to them.
> Notation: `||AB||` is defined as the length of the line segment between points A and B.
I think the `||v1||` notation would be opaque to most users. It's clear for your proposal, but I think having the default notation be `len(v1)` or something would be more immediately understandable.
I'd argue that this is one of the few notations from math that's standard enough to use. Pretty sure we learned this in high school. There was talk of using `||` for logical OR. It would be nice if we got away from general-purpose programming language syntax and leaned into geometry and linear algebra notation.
But I don't care for this notation in particular if people don't like it, just the general idea. There are infinite possibilities, and everyone tends to use the same old thing. So I will continue to propose strange-at-first-glance language ideas 🙂 I initially had more strange things, but I dialed it back so that people would focus on the important ideas here.
Also, there's the common trade-off between helpful for beginners vs. helpful for experts. Go is a programming language whose design continually favored ease for beginners, and I personally don't want to use the result.
Totally! I like that as a general rule: lean into the syntax of geometry. If our mechies understand it at a glance I'd be all for it. I think I've been poisoned by seeing logical ORs as `||` for too long, so that magnitude notation has become strange to me since high school, but you're right.
My related observation is that the MechEs who liked math and data—the ones who would probably recognise the `||x||` notation—tended to gravitate away from CAD altogether.
Nerdy MechEs like me were somewhat automagically sorted into a Performance or Simulation or Vehicle Dynamics department, where you tended to write 'scientist code' all day. So, if we're targeting the people who stayed as designers, then I think we should stick to basically plain/natural language that's descriptive, and avoid symbols and math notation beyond the absolute basics.
Even for me, I've never looked at a function being called inside a for loop and thought: "Damn, what a terrible syntax—I wish that said `∀x ∈ S : f(x)`."
> 8. The user selects a line with their pointing device, and the app fills in `l1||`.
> 9. The user types `^2 Enter`, which finishes the expression.
> 10. At this point, the app appends at the end of the code the new line `||l3|| = 0.5 * ||l1||^2`.
>     - Design note: We could add this relationship next to the definition of `l3`. This is a reasonable choice. But I think we should prefer to append to the end by default because it retains the history of user operations in the order that they did them, which can be useful when reading code. Not only do UI operations correspond one-to-one with lines of code (a spatial relationship), but they correspond in order to how the author created them (a temporal relationship). This strengthens the tie between the UI and code in the user's mind. If they find that they'd like a different order, they can reorder lines without affecting the meaning of the code in many cases. (We'll discuss exactly which cases later.)
The idea that anyone but power users would know to type `||` into the KCL expression box to initiate a selection seems a bit far-fetched to me. I would rather a UX where we add `len(l1)` (see my note above) on selection automatically, and if the user wants something other than magnitude they'd have to specify. But I think I understand this whole example flow as a deeply power user one anyway, and most would use simpler constraint applications like `parallel` anyway.
> 3. Named points are natural snap points in the point-and-click UI. Similar to how we use variable names to autocomplete in the editor, when the user is drawing, we can use named points as the set of points that a user might want to snap to, which is the point-and-click analogue of autocomplete.
> 4. It allows creating unconstrained points.
>     1. In the above example, we used `p1 = screen_point(x: 10, y: 12)`, which is fully constrained to a coordinate relative to the screen's coordinate system.
>     2. However, we could make an unconstrained point `p1 = point()`. This is still useful for constraining in code later.
I could see `screen_point` being seen as the "default" way of making points confusing people writing KCL (not who I think we should prioritize), and `point` seeming more like an abstract power user way of defining a point. These seem powerful, but I would make the concrete one the shortest and easiest to write.
> 2. It allows creating points that are used purely for construction (not display or manufacturing). For example, users can create other geometry derived from the point through relationships.
> 3. Named points are natural snap points in the point-and-click UI. Similar to how we use variable names to autocomplete in the editor, when the user is drawing, we can use named points as the set of points that a user might want to snap to, which is the point-and-click analogue of autocomplete.
> 4. It allows creating unconstrained points.
>     1. In the above example, we used `p1 = screen_point(x: 10, y: 12)`, which is fully constrained to a coordinate relative to the screen's coordinate system.
I'm a little concerned about `screen_point`, because to me it reads as "screen space", as in it is dependent on the camera position, which seems odd. I think I need a bit more explanation on what that means (the `print_point` both helped and harmed what I thought my understanding was).
The important bit here is that it's a `something_point`, not just a `point`. Every coordinate is implicitly relative to a coordinate system. When you draw in the UI, the coordinates that it uses are relative to an origin and axes. This is totally arbitrary. Similar to units, coordinates do not mean anything unless you know what coordinate system they're in. That's why I believe all points should explicitly specify it. You can call it something other than "screen". I just didn't know a better name for the coordinate system we use when displaying on the screen.
Since we want everything to be relative when possible, we may not actually use this at all. Or maybe it's very rare.
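To make the "every coordinate is implicitly relative to a coordinate system" point concrete, here is a minimal sketch (names like `Frame` and `to_world` are hypothetical, not part of the proposal) showing why a pair of numbers only becomes a location once you say which frame it lives in:

```python
import math

# Hypothetical sketch: a coordinate frame is an origin plus an orientation.
# A point's (x, y) only means something together with its frame, which is
# the argument for constructors like screen_point() naming their frame.
class Frame:
    def __init__(self, origin, angle_rad):
        self.origin = origin    # frame origin, expressed in world coordinates
        self.angle = angle_rad  # rotation of the frame's axes vs. world axes

    def to_world(self, p):
        """Convert a point expressed in this frame to world coordinates."""
        x, y = p
        c, s = math.cos(self.angle), math.sin(self.angle)
        return (self.origin[0] + c * x - s * y,
                self.origin[1] + s * x + c * y)

# A "screen" frame whose origin happens to sit at world (100, 50), unrotated.
screen = Frame(origin=(100.0, 50.0), angle_rad=0.0)
world_p = screen.to_world((10.0, 12.0))  # i.e. screen_point(x: 10, y: 12)
```

The same `(10, 12)` would land somewhere else entirely under a different `Frame`, which is the argument for making the frame explicit in the point constructor's name.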
> ### Displaying Geometry
> Any geometry that is fully constrained will be shown by default. A user can opt-out of this using:
I could use a little more explanation around what it means to be fully constrained in this paradigm's sense. I think it means in the mathematical sense that the equation we're solving has more than one solution, but that seems distinct from the CAD meaning of the term, in which one or more positions are "movable". I think they are the same underlying idea in both senses, but in CAD you typically display "not fully constrained" sketch geometry (traditionally black geometry is constrained and blue geometry can be moved around).
I don't know what you mean. Maybe I need to read up on how CAD software uses the term. I intended "fully constrained" to mean that a variable, like the X-coordinate of a point, has a unique and computable value. If this isn't the case, we cannot know where to display it on the screen.
@jtran Under-constrained sketches are always displayed in CAD, with the program leveraging construction values as guesses (rendered in blue not black as @franknoirot said) or making sure the points don't fly away when a constraint is applied (likely doing some sort of distance minimization while meeting the constraints)
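The "construction values as guesses, with distance minimization" behavior described above can be sketched in a few lines. This is a hypothetical illustration, not the proposal's implementation: the point keeps its drawn coordinates until a constraint is applied, and then it moves the minimum distance needed to satisfy that constraint (here, "distance 5 from the origin"):

```python
import math

# Hypothetical sketch of "guess + minimal move": an under-constrained point
# keeps its drawn (guess) coordinates; applying a constraint moves it the
# least distance required. For a circle constraint |p| = r, the nearest
# satisfying point is the radial projection of the guess onto the circle.
def project_to_circle(guess, radius):
    """Return the point on the circle |p| = radius nearest to `guess`."""
    gx, gy = guess
    norm = math.hypot(gx, gy)
    if norm == 0.0:
        return (radius, 0.0)  # direction is arbitrary when guess is the center
    scale = radius / norm
    return (gx * scale, gy * scale)

drawn = (4.2, 3.1)                      # where the user happened to click
solved = project_to_circle(drawn, 5.0)  # minimal move satisfying the constraint
```

This is why the points "don't fly away": the guess supplies the direction, and the constraint only corrects the magnitude.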
The way I think about sketcher constraints is in terms of 'breaking' degrees of freedom.
A sketch is only fully constrained when it has no remaining degrees of freedom, so can't be wiggled around. For a line, for example, that means that as initially drawn it has 4 DOF: start x, start y, end x, end y.
You could 'break' all 4 DOF by fixing:
- Both point coordinates in both x and y.
- Start point coordinates in both x and y, plus an angle of travel, plus a length.
- Start point coordinates in both x and y, plus an end point x, plus a length.
- Start point coordinates in both x and y, plus an end point y, plus a length.
- Plus the inverse fixing end points instead of start etc...
You could manage to feed in values that aren't possible with some of these, but that's the broad pattern.
In terms of the math, I think that means it's a system of linear equations like
The simplest (good) example I could think up is a 3-4-5 triangle.
If we had a line set up like this, where:
- $x_1 = 0$, fixed
- $y_1 = 0$, fixed
- $x_2 = 4$, fixed
- $y_2$ = unknown
- $l = 5$, fixed

We know that the $y_2$ value has to be ±3, and I think the actual equation system setup is the distance constraint:

$(x_2 - x_1)^2 + (y_2 - y_1)^2 = l^2$, i.e. $16 + y_2^2 = 25$

To get that into matrix form, I think you need to linearise the square term, but then you can do the fancy stuff (Taylor expansion, Moore–Penrose) to get to a solution for
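The linearise-and-iterate step for this one-unknown example can be sketched as plain Newton iteration (a hypothetical illustration, not the proposal's solver): the residual is $f(y_2) = 16 + y_2^2 - 25$, each iteration linearises $f$ at the current guess, and the initial guess (e.g. where the user drew the point) selects which of the two roots, $+3$ or $-3$, the solver lands on:

```python
# Newton's method on f(y2) = (x2 - x1)^2 + y2^2 - l^2 = 16 + y2^2 - 25.
# The derivative 2*y2 is the (1x1) Jacobian; for bigger sketches this
# generalizes to a matrix and pseudo-inverse (Moore-Penrose) solves.
def solve_y2(guess, tol=1e-12, max_iter=50):
    y = guess
    for _ in range(max_iter):
        f = 16.0 + y * y - 25.0  # constraint residual
        df = 2.0 * y             # derivative of the residual
        step = f / df
        y -= step
        if abs(step) < tol:
            break
    return y

root = solve_y2(2.5)  # a drawn guess above the axis converges to +3
```

A guess below the axis, e.g. `solve_y2(-1.0)`, converges to the other root, which is exactly the role the "default" drawn coordinates play in disambiguating under-determined systems.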
> By adding `maximize(my_radius)`, a specific value can be computed for `my_radius` even though it's not fully constrained. This is useful for displaying geometry that would only be partially constrained otherwise.
> Resolution strategies are only used if values are not fully constrained otherwise. You can think of them as optional constraints. This is useful for library authors while writing constraint functions. Depending on what a constraint function is applied to, the result may or may not be fully constrained. Resolution strategies allow for a way to give hints to the runtime system that aren't strict requirements. I.e. resolution strategies will never cause geometry to be over-constrained. But they can help allow unconstrained or partially constrained geometry to become fully constrained.
This seems oriented (correctly) toward robustly describing 2D geometry, but I'm interested in how this maps onto 3D geometry as well. This would help me understand what "extrude" and "revolve" map to in this realm of thinking.
I don't know. We'll have to work through more examples. Part of me thinks that it's just more geometry. I think Bézier curves are pretty straight-forward math. But this is beyond my expertise. Until we try it, I don't know what challenges we'll face. Even curves in 2D are kind of over my head. 3D is probably even more challenging.
Yeah that's a totally fine place to be. Doing some examples as exercises seems the way to hammer those unknowns out, along with everyone else's thoughts.
> 1. User enters sketch mode, which defaults to line drawing.
> 2. User clicks a point. A variable `p1` is defined with the screen coordinates.
> 3. User clicks another point. A variable `p2` is defined at that point. Since they're in line-sketch mode, `l1 = line(p1, p2)` is also generated. The app knows to name it with the prefix `l` because it is generating the line, so it knows it's a line. Similarly with points.
How is construction geometry represented?
If I understand correctly that construction geometry is geometry that is used only for building the model, but not intended to be exported or manufactured as part of the model, this language design currently doesn't treat it differently from other geometry. All geometry that has known coordinates will be displayed.
Perhaps we could add something similar to `hide`, but instead of not displaying, it omits the geometry from export.
It's typically a UI toggle, yeah, going from solid lines to dashed lines.
Yeah in Onshape's FeatureScript it is a function argument for geometry functions. For example, here is a construction point at the midpoint of a line segment:
```
skPoint(sketch, "BRe0F8pfMCRv.middle", { "construction" : true, "index" : "0" });
```
> ```
> p1 = screen_point(x: 10, y: 12)
> p2 = screen_point(x: 20, y: 13)
> l1 = line(p1, p2)
> ```
During KittyCAMP 2024 we discussed the benefits of "having every segment be relative unless a 'fixed' constraint is applied". How are relative lines represented in this approach at all?
Yes, you're absolutely right. I completely screwed this up. I want everything to be relative. I will need to change a few things. Maybe this will inform how to fix `screen_point()` et al.
Sick, okay that makes sense though. Maybe the point definitions continue to be "absolute" (your adjustments to `screen_point()` notwithstanding), and it's actually the constraints that drive their positions, not the other way round. I'm not sure. I know that's why you can sometimes get unexpected movement of points on mostly-constrained sketches in solver-based CAD programs like Onshape; it's a kind of drift that occurs due to this relative/absolute tension. I can't seem to create an example video for you but I will post it here if I manage it.
Yes, this needs work.
I imagine that we'd either write our own Datalog engine in Rust or use a Rust library, like this one. The Rust compiler uses chalk, which is another guide.
I believe that by giving everything a variable by default, we get much of what tags provide. I've brought this up before: tag declarators in KCL today are essentially output parameters. We could rearrange return values of stdlib functions such that we could use standard variable bindings in place of local tags. But it would be more verbose. The whole thing where they modify

Challenges (one of them):
I mentioned this in the comment that I think it would be nice if, when you drew in the UI, it essentially created fallback coordinates/offsets that are only used if nothing else in the program constrains it to be somewhere else. But this is very under-baked. I'm not sure how we could implement this.
Yes, this makes sense; it aligns exactly with what you and @lf94 have been saying about tags as output parameters, another way of declaring.
> Note that the code is generated by the user clicking in the UI. The resulting code is in the order that it was created, meaning it's a history of actions that the user performed in the UI.
> ```
> p1 = screen_point(x: 10, y: 12)
> ```
The X/Y coordinates here are confusing. Are they constrained coordinates? Or are they just "first guess" coordinates based on approximate mouse-clicks? To me, this reads like it's fully constrained, which means all the constraints that follow are unnecessary -- because all the lines are between fully-constrained points, so the lines are all fully-constrained too.
I must be wrong about this, right? Should I be ignoring the points' x/y coordinates when I read them? But I can't just ignore them because they're used to calculate constraints later on.
Yes, I screwed this up. Others have pointed it out as well. I think that when the user clicks in the UI, it needs to be treated special as some kind of default or fallback coordinate that's only used if it's not constrained in another way. Apparently Onshape displays sketches as blue or black depending on whether it was drawn in the UI and movable vs. actually constrained and not movable. We'll need to do something similar.
I basically conflated the idea of known/unknown with constrained/unconstrained. They're separate concepts that need to be represented in the design and implementation.
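The known/unknown vs. constrained/unconstrained split can be made concrete with a tiny sketch (the `Var` class and method names below are hypothetical, just to illustrate the idea): every UI click records a fallback coordinate so geometry is always displayable, while a solved constraint overrides the fallback and flips the rendering from movable (blue) to pinned (black):

```python
# Hypothetical sketch: each scalar (e.g. a point's X-coordinate) carries a
# fallback value from the user's click (always known) and an optional
# solved value (set only when some constraint pins it).
class Var:
    def __init__(self, fallback):
        self.fallback = fallback  # from drawing in the UI; always known
        self.solved = None        # set only if a constraint determines it

    def display_value(self):
        # Known either way, so we can always display the geometry.
        return self.solved if self.solved is not None else self.fallback

    def is_constrained(self):
        # Drives blue (movable) vs. black (pinned) rendering.
        return self.solved is not None

p1_x = Var(fallback=10.0)
p1_x.solved = 5.0          # e.g. the constraint `p1.x = 5` was solved
p1_y = Var(fallback=12.0)  # untouched by constraints; stays movable
```

Here `p1_x` displays as 5.0 and renders as constrained, while `p1_y` still displays at its clicked position and renders as movable, which mirrors the Onshape blue/black behavior described above.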
Some high level comments (I have thoughts on syntax and stuff, but the details don't matter too much at this stage). I've addressed the proposal on its own terms (i.e., my comments are purely technical), but also this would be a big change at a late stage, and even once we implement it, it would require a lot of iteration to get right, so I think that adopting the proposal is more of a business decision than a technical one.
|
All 4 of these sub-points are great. @nrc if you don't mind making your list numbered, it would make referring to points a bit easier in conjunction with quote replies.
/r/ProgrammingLanguages thread from yesterday: programming languages where algebra is a central feature of the language. One link that has a neat interactive demo: Constrain, a JS library for animated, interactive web figures, based on declarative constraint solving.
First of all, I really appreciate you walking through how the code is generated for each of the user's UI interactions; syntax proposals without these walk-throughs are missing important context.

```
equalLength(l2, l4)
parallel(l1, l2) OR ||l2|| = 1.2 * ||l4||
<<l1<< = <<l2<<  // <<line<< is an arbitrary syntax for angles
```

These would achieve similar results (NOT equivalent, but similar in terms of user intention).

Question: what happens if something is over-constrained? I imagine in the UI we will have a way to stop users from adding extra constraints to already fully constrained segments, but we can't stop users from typing extra constraints. Assuming the constraint solving is done all within KCL (no engine needed), are we able to show diagnostics in the editor pointing out the problem?

Do you think there's confusion with using the same symbols for both computation and constraints?

The resolution strategies section makes me a bit nervous. Am I right in saying that this comes as a result of allowing inequalities like the following?

```
mySegmentAngle < 370
maximize(mySegmentAngle)
```

Maybe not important, but maximising an angle to 370 is really maximising it to 10, ya know; feels a little uncomfortable.

You lost me completely in the typing-in flow.

I have a question around tags. Obviously they are not needed for defining 2D constraints anymore with this paradigm, but tags' use goes beyond sketches, for selecting faces etc. Will this work with tags, or are tags not needed because everything is a named variable?

```
p1 = screen_point(x: 10, y: 12)
p2 = screen_point(x: 20, y: 13)
l1 = line(p1, p2)
```

Lastly, I'll link the reasons why I came up with the current chaining syntax and why I didn't want to go down a solver path.

I feel like I didn't see any examples of dimensions being constrained here; like, how would a user specify that lineA is 5 units away from lineB?
Obviously, this whole doc is very under-baked. So take everything with a grain of salt. It's just ideas, not fully thought out.
Interesting. I like weird language ideas 🙂
It would show an error displaying the source lines of all constraints that conflict. Yes, this is a problem in certain real programming languages. If everything is a constraint on equal footing, who's to say which one is wrong? I would love if we could find a sweet-spot the way programming languages have mostly settled on specifying the type on function signatures. That's authoritative. If a call site differs, the call site is wrong, not the function signature. Contrast this with call sites constraining what types function parameters are. That's what would happen if you treated all constraints as the same.
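The "display the source lines of all constraints that conflict" idea can be sketched very simply (a hypothetical illustration with made-up function and variable names, nothing from the proposal): record the source line of every constraint, and when two constraints pin the same variable to different values, report all of the contributing lines rather than blaming only the last one typed:

```python
# Hypothetical over-constraint diagnostic: each constraint carries its
# source line; a second, contradictory pin of the same variable is
# reported together with the line of the earlier pin.
def check_constraints(constraints):
    """constraints: list of (source_line, var_name, value) equality pins."""
    pinned = {}      # var_name -> (source_line, value)
    conflicts = []   # (earlier_line, later_line, var_name)
    for line, var, value in constraints:
        if var in pinned and pinned[var][1] != value:
            conflicts.append((pinned[var][0], line, var))
        else:
            pinned[var] = (line, value)
    return conflicts

errors = check_constraints([
    (3, "p1.x", 5.0),  # p1.x = 5
    (9, "p1.x", 7.0),  # p1.x = 7  -> conflicts with line 3
    (4, "p1.y", 2.0),
])
```

This only handles direct equality pins; the "equal footing" problem in the comment above is exactly that, once constraints interact through a solver, deciding which member of a conflicting set to blame is no longer this mechanical.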
Yes, I think it can be confusing to distinguish between pure computation and constraints. The main difference is which way data flows: into parameters or out. This is why I invented the split between
You're exactly right. It's because of inequalities. They complicate things. Dropping them would definitely simplify things. (See update below.)
Yes, I screwed this up. I conflated known/unknown with constrained/unconstrained. They're two separate things that need to be included in the design and implementation. Every piece of geometry the user draws in the UI gets a default/fallback known coordinate, meaning we can always display it. But constraints may move it somewhere else. Solved constraints override the default.
Yes. That's how I imagine it.
Interesting. I will give this a read. I really missed you last week!
Is this what you mean? My first comment on the PR above: #13. But if you think about it, that is only partially constrained.

I tried to work through the actual derivation by hand. It's long and reminds me of the painful parts of high school math classes. The exercise was helpful, though, because after you do all the symbolic substitution, at a certain point, you end up with something like this:

What do we do now? Maybe I'm just not that good at math, and this is a system of linear equations that's trivially solvable that I just can't see. But at a certain point, it seems like you have to start plugging in numbers using the default values. After all, we know they're only partially constrained. But how do we do that? An obvious approach is to start with

Update: I think I made a mistake with the square root. I think we need to derive the fact that the thing inside the square root is greater than zero. Otherwise, we'd have complex numbers. So I don't think inequalities can go away completely. But maybe we just error out instead of allowing for
I just read KittyCAD/modeling-app#111. The claim is that this CadQuery example is hard to read or difficult to understand.

```python
import cadquery as cq

result = (
    cq.Sketch()
    .segment((0, 0), (0, 3.0), "s1")
    .arc((0.0, 3.0), (1.5, 1.5), (0.0, 0.0), "a1")
    .constrain("s1", "Fixed", None)
    .constrain("s1", "a1", "Coincident", None)
    .constrain("a1", "s1", "Coincident", None)
    .constrain("s1", "a1", "Angle", 45)
    .solve()
    .assemble()
)
```

I tried to reproduce this in KCL, but it wasn't straightforward at all. I couldn't figure it out with our current `arc()`. I'm imagining what it might look like once we implement 3-point arc KittyCAD/modeling-app#1659. After about 15 minutes of trial and error, I came up with this. (The angle between the line and the arc is wrong, but it's the general shape.)
... which was actually completely unintuitive. I stumbled on this solution of using

So we again have lots of flavors of all our functions. They're opaque in that you as a user of KCL couldn't create such a thing yourself. I've found it to be common that I need to rotate my paths so that the starting point can change.

The claim was that direct computation using the style of KCL of today was more concise and easier to follow. Going through this exercise convinces me even more that KCL has a predefined way of doing things, and if a user tries to stray outside that way at all, they're SOL. I would even argue that the CadQuery example is indeed easier to read because our target audience thinks this way. They think in terms of high-level things like lines being parallel or points being coincident. On the other hand, they do not think in terms of directly computing points. I think the example is actually pretty intuitive.

### Mitigation

Direct computation of points in a path is easier to read or consume if you're trying to construct the points as the implementer. They correspond one-to-one with the output. But as a user, there's a big difference in mental model.

@jgomez720 expressed frustration with how all line segments in KCL/the Modeling App have a direction. I understand why this is, and after talking with Alyn about constraints, I think leaning into deltas (AKA relative path segments) is generally what we want under the hood. But could the UI shield users from this somehow?

@nrc brought up the idea that the UI could withhold writing KCL until the user exits sketch mode and "commits" to a sketch. The nice thing about this is that the user could do all kinds of complicated edits on lines and points in any order, and then once they've built up an entire thing, only then does the tool need to write the corresponding KCL. This has some nice effects. For example, because the tool knows the entire path, it can group things cleanly in the source.
The downside is that the user loses the live one-to-one connection between UI operations and KCL changes, which was always intended to aid users in learning KCL. And presumably users would want to edit existing sketches, so this doesn't actually save us much work in KCL refactoring, because we'd still need to be able to understand and refactor previously created sketches to properly edit them. I think this is an interesting, under-explored approach. Something might have to give.

To be clear, I'm not trying to say that the current way is so bad and a solver approach is so much better. I think that we need to address the big difference in mental model.

### Default Coordinates

One thing I'd like to add is how, in the initial draft of this PR, I conflated the idea of a point being known with it being constrained. When people pointed this out, I immediately thought it was an oversight that needed to be fixed. But the more I think about this, I'm actually not so sure. I think that part of why other CAD tools are so unintuitive in the solver department is because of these "initial guesses" for locations of points. I still haven't worked through the details yet, but it's something I'm thinking about. The whole idea of a "default" that somehow informs how things get solved is actually pretty weird.
I haven't been through all the comments yet, but have been through the original markdown doc. My impressions:
Will get into the comments and report back 🫡
Rendered
This is a very rough sketch of an idea that resulted from conversations during KittyCAMP 2024.