Constraints language #13


Closed

Conversation


@jtran jtran commented Oct 21, 2024

Rendered

This is a very rough sketch of an idea that resulted from conversations during KittyCAMP 2024.


jtran commented Oct 21, 2024

Example of a multi-segment path constrained to a distance.

```
// We'll use two segments, but there could be any number.
p1 = screen_point(x: 10, y: 12)
p2 = screen_point(x: 20, y: 13)
l1 = line(p1, p2)
p3 = screen_point(x: 21, y: 23)
l2 = line(p2, p3)

// The required span distance. Maybe this is a parameter.
d = 10

// Assuming they're not axis-aligned:
dist(p1, p3) = d

// Return the Euclidean distance between two points.
// (These points are 2D, so no z term is needed.)
fn dist(p1, p2) {
  return sqrt((p2.x - p1.x)^2 + (p2.y - p1.y)^2)
}
```
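A runtime would eventually have to turn a declaration like `dist(p1, p3) = d` into concrete coordinates. As a minimal numeric sketch of one possible resolution (in Python rather than KCL, with hypothetical helper names; a real solver would treat all coordinates as simultaneous unknowns), we can scale the as-drawn offset from `p1` to `p3` so the distance comes out to exactly `d`:

```python
import math

def enforce_distance(p1, p3, d):
    """Move p3 along the existing p1->p3 direction until dist(p1, p3) == d.

    This is only one resolution strategy: keep the direction, fix the length.
    """
    dx, dy = p3[0] - p1[0], p3[1] - p1[1]
    current = math.hypot(dx, dy)
    if current == 0:
        raise ValueError("p1 and p3 coincide; the direction is ambiguous")
    scale = d / current
    return (p1[0] + dx * scale, p1[1] + dy * scale)

# The as-drawn points from the example above:
p1 = (10.0, 12.0)
p3 = (21.0, 23.0)
new_p3 = enforce_distance(p1, p3, 10.0)
# The constraint dist(p1, new_p3) = 10 now holds.
```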

The only thing is, I have no idea how to do curves. I think the implementation would need to be aware of how they’re represented with the formulas so that they can be symbolically manipulated.

I’m realizing also that I think screen_point() needs to have special treatment as some kind of soft-constraint that only applies when not fully constrained otherwise.

@franknoirot franknoirot left a comment


I wrote these notes up independently, and only commented inline where I felt it would add something. Those notes are marked with (inline) at their start when mentioned below. This is really cool. Especially the idea of custom defined constraints, I mean that's so powerful for libraries wow.

I would want to see more examples of this notation, and have some more sense of the implementation level of effort (although this is a hell of a start). And if you can demonstrate that it is a superset of existing KCL's capabilities I think that would help a lot.

affirmations

  1. The idea of explicitly defining points and constraints is powerful I think.
  2. While this does more directly map to the status quo of tradCAD solver-based approaches, I think that is a good thing in this case for the reasons you mentioned.
  3. I like the thinking about the spatial and temporal relationships in the design note in section 1, point 10
  4. Damn this allows for the expression of custom constraints that's fucking sick for real, cannot be understated how cool that would be for library authors.

criticisms

  1. (Inline) I think the ||v1|| notation would be opaque to most users. It's clear for your proposal but I think having the default notation be len(v1) or something would be more immediately understandable.
  2. (Inline) The idea that anyone but power users would know to type || into the KCL expression box to initiate a selection seems a bit far-fetched to me. I would rather a UX where we add len(l1) (see my note above) on selection automatically, and if the user wants something other than magnitude they'd have to specify. But I think I understand this whole example flow as a deeply power user one anyway, and most would use simpler constraint applications like parallel anyway.
  3. (Inline) `p1.x = 5`: I'm not sure I understand this notation.
  4. (Inline) I could see screen_point being seen as the "default" way of making points confusing people writing KCL (not who I think we should prioritize), and point seeming more like an abstract power user way of defining a point. These seem powerful, but I would make the concrete one the shortest and easiest to write.

questions

  1. (Inline) I'm a little concerned about screen_point, because to me it reads as "screen space", as in it is dependent on the camera position which seems odd. I think I need a bit more explanation on what that means (the print_point both helped and harmed what I thought my understanding was).
  2. (Inline) I could use a little more explanation around what it means to be fully constrained in this paradigm's sense. I think it means in the mathematical sense that the equation we're solving for has more than one solution, but that seems distinct from the CAD meaning of the term, in which one or more positions are "movable". I think they are the same underlying idea in both senses, but in CAD you typically display "not fully constrained" sketch geometry (traditionally, black geometry is constrained and blue geometry can be moved around).
  3. (Inline) Am I understanding correctly that this would actually use Datalog under the hood, or just conceptually, and continue to be in Rust?
  4. (Inline) This seems oriented (correctly) toward robustly describing 2D geometry, but I'm interested in how this maps into 3D geometry as well? This would help me understand what an "extrude" and "revolve" map to in this realm of thinking.
  5. (Inline) How is construction geometry represented?
  6. (Inline) During KittyCAMP 2024 we discussed the benefits of "having every segment be relative unless a 'fixed' constraint is applied". How are relative lines represented in this approach at all?
  7. How does this proposal interface with tags? Does it preclude the need for them, since every entity is represented explicitly? I don't think that is the case.

challenges

  1. Is there a way for us to allow this without breaking existing KCL models, as an enhancing non-breaking change?
  2. If we can think of ways to display partially-constrained geometry, perhaps with default values or hints for the unconstrained properties, would that support proper components in the same manner? More of a UX challenge for us than a language one.

examples requested

  1. Show what authoring that farmbot Onshape sketch would look like in this paradigm
  2. Show what authoring a basic cube would look like
  3. Show what at least a couple of our samples would look like, maybe starting with the

I'll post a couple diagrams of my understanding of what you talk about early in the proposal tomorrow, and if you like em you can add them inline.


Additionally, KCL requires users to write functions that ultimately come down to computing the locations of points given other points. We have standard library functions that help make this somewhat higher-level, like `tangentialArc`, but it ultimately comes down to directly computing output locations based on inputs. This means that users must adapt their way of thinking to the tool rather than the tool adapting to them.

Notation: ||AB|| is defined as the length of the line segment between points A and B.


I think the ||v1|| notation would be opaque to most users. It's clear for your proposal but I think having the default notation be len(v1) or something would be more immediately understandable.


@jtran jtran Oct 22, 2024


I'd argue that this is one of the few notations from math that's standard enough to use. Pretty sure we learned this in high school. There was talk of using || for logical OR. It would be nice if we got away from general-purpose programming language syntax and leaned in to geometry and linear algebra notation.

But I'm not attached to this particular notation if people don't like it; it's the general idea that matters to me. There are infinite possibilities, and everyone tends to use the same old thing. So I will continue to propose strange-at-first-glance language ideas 🙂 I initially had more strange things, but I dialed it back so that people would focus on the important ideas here.

Also, there's the common trade-off between helpful for beginners vs. helpful for experts. Go is a programming language whose design continually favored ease for beginners, and I personally don't want to use the result.


Totally! I like that as a general rule, lean into the syntax of geometry. If our mechies understand it at a glance I'd be all for it. I think I've been poisoned by seeing logical ORs as || for too long, so that magnitude notation has become strange to me since high school, but you're right.


My related observation is that the MechEs who liked math and data—the ones who would probably recognise the ||x|| notation—tended to gravitate away from CAD altogether.

Nerdy MechEs like me were somewhat automagically sorted into a Performance or Simulation or Vehicle Dynamics department, where you tended to write 'scientist code' all day. So, if we're targeting the people who stayed as designers, then I think we should stick to basically plain/natural language that's descriptive, and avoid symbols and math notation beyond the absolute basics.

Even for me, I've never looked at a function being called inside a for loop and thought: "Damn, what a terrible syntax—I wish that said ∀x ∈ S : f(x)"

8. The user selects a line with their pointing device, and the app fills in `l1||`.
9. The user types `^2 Enter`, which finishes the expression.
10. At this point, the app appends at the end of the code the new line `||l3|| = 0.5 * ||l1||^2`.
- Design note: We could add this relationship next to the definition of `l3`. This is a reasonable choice. But I think we should prefer to append to the end by default because it retains the history of user operations in the order that they did them, which can be useful when reading code. Not only do UI operations correspond one-to-one with lines of code (a spatial relationship), but they correspond in order to how the author created them (a temporal relationship). This strengthens the tie between the UI and code in the user's mind. If they find that they'd like a different order, they can reorder lines without affecting the meaning of the code in many cases. (We'll discuss exactly which cases later.)

@franknoirot franknoirot Oct 22, 2024


The idea that anyone but power users would know to type || into the KCL expression box to initiate a selection seems a bit far-fetched to me. I would rather a UX where we add len(l1) (see my note above) on selection automatically, and if the user wants something other than magnitude they'd have to specify. But I think I understand this whole example flow as a deeply power user one anyway, and most would use simpler constraint applications like parallel anyway.

3. Named points are natural snap points in the point-and-click UI. Similar to how we use variable names to autocomplete in the editor, when the user is drawing, we can use named points as the set of points that a user might want to snap to, which is the point-and-click analogue of autocomplete.
4. It allows creating unconstrained points.
1. In the above example, we used `p1 = screen_point(x: 10, y: 12)`, which is fully constrained to a coordinate relative to the screen's coordinate system.
2. However, we could make an unconstrained point `p1 = point()`. This is still useful for constraining in code later.


I could see screen_point being seen as the "default" way of making points confusing people writing KCL (not who I think we should prioritize), and point seeming more like an abstract power user way of defining a point. These seem powerful, but I would make the concrete one the shortest and easiest to write.

2. It allows creating points that are used purely for construction (not display or manufacturing). For example, users can create other geometry derived from the point through relationships.
3. Named points are natural snap points in the point-and-click UI. Similar to how we use variable names to autocomplete in the editor, when the user is drawing, we can use named points as the set of points that a user might want to snap to, which is the point-and-click analogue of autocomplete.
4. It allows creating unconstrained points.
1. In the above example, we used `p1 = screen_point(x: 10, y: 12)`, which is fully constrained to a coordinate relative to the screen's coordinate system.


I'm a little concerned about screen_point, because to me it reads as "screen space", as in it is dependent on the camera position which seems odd. I think I need a bit more explanation on what that means (the print_point both helped and harmed what I thought my understanding was).


The important bit here is that it's a something_point, not just a point. Every coordinate is implicitly relative to a coordinate system. When you draw in the UI, the coordinates it uses are relative to some origin and axes, and that choice is totally arbitrary. Similar to units, coordinates don't mean anything unless you know what coordinate system they're in. That's why I believe all points should explicitly specify it. You can call it something other than "screen"; I just didn't know a better name for the coordinate system we use when displaying on the screen.

Since we want everything to be relative when possible, we may not actually use this at all. Or maybe it's very rare.


### Displaying Geometry

Any geometry that is fully constrained will be shown by default. A user can opt-out of this using:


I could use a little more explanation around what it means to be fully constrained in this paradigm's sense. I think it means in the mathematical sense that the equation we're solving for has more than one solution, but that seems distinct from the CAD meaning of the term, in which one or more positions are "movable". I think they are the same underlying idea in both senses, but in CAD you typically display "not fully constrained" sketch geometry (traditionally, black geometry is constrained and blue geometry can be moved around).


I don't know what you mean. Maybe I need to read up on how CAD software uses the term. I intended "fully constrained" to mean that a variable, like the X-coordinate of a point, has a unique and computable value. If this isn't the case, we cannot know where to display it on the screen.


@jtran Under-constrained sketches are always displayed in CAD, with the program leveraging construction values as guesses (rendered in blue not black as @franknoirot said) or making sure the points don't fly away when a constraint is applied (likely doing some sort of distance minimization while meeting the constraints)


Here's an example of a sketch that has two segments that are constrained (because I made the lower-right point coincident with the origin) and two that are not.

[Screenshot of the example sketch, 2024-10-22]


The way I think about sketcher constraints is in terms of 'breaking' degrees of freedom.

A sketch is only fully constrained when it has no remaining degrees of freedom, so can't be wiggled around. For a line, for example, that means that as initially drawn it has 4 DOF: start x, start y, end x, end y.

You could 'break' all 4 DOF by fixing:

  • Both point coordinates in both x and y.
  • Start point coordinates in both x and y, plus an angle of travel, plus a length.
  • Start point coordinates in both x and y, plus an end point x, plus a length.
  • Start point coordinates in both x and y, plus an end point y, plus a length.
  • Plus the inverse fixing end points instead of start etc...

You could manage to feed in values that aren't possible with some of these, but that's the broad pattern.
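The bookkeeping above can be stated as simple arithmetic: a 2D segment contributes four unknowns, and each independent scalar constraint removes one. A toy illustration (not part of the proposal):

```python
def remaining_dof(unknowns, scalar_constraints):
    """Degrees of freedom left after applying independent scalar constraints."""
    return max(unknowns - scalar_constraints, 0)

LINE_DOF = 4  # start x, start y, end x, end y

# Fixing both endpoints in x and y: four scalar constraints, fully constrained.
assert remaining_dof(LINE_DOF, 4) == 0
# Fixing only the start point: the segment can still rotate and stretch.
assert remaining_dof(LINE_DOF, 2) == 2
```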

In terms of the math, I think that means it's a system of equations for us to solve, $Ax=b$ once linearised, and the system is only 'constrained' when there's a single, unique solution.

The simplest (good) example I could think up is a 3-4-5 triangle.

If we had a line set up like this, where:

  • $x1$ = 0, fixed
  • $y1$ = 0, fixed
  • $x2$ = 4, fixed
  • $y2$ = Unknown
  • $l$ = 5, fixed

[Diagram of the line segment with the fixed values above]

We know that the y2 value has to be ±3, but I think the actual equation system setup is:

$$
\begin{aligned}
\text{Point 1: } & \begin{cases} x_1 = 0 \\ y_1 = 0 \end{cases} \\
\text{Point 2: } & \begin{cases} x_2 = 4 \\ (x_2 - x_1)^2 + (y_2 - y_1)^2 = 25 \end{cases} \\
\text{So: } \quad & 4^2 + y_2^2 = 25 \\
& 16 + y_2^2 = 25 \\
& y_2^2 = 9 \\
& y_2 = \pm 3
\end{aligned}
$$

To get that into matrix form, I think you need to linearise the squared term, but then you can do the fancy stuff (a first-order Taylor expansion, the Moore-Penrose pseudoinverse) to get to a solution $x = A^{-1}b$.
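The linearise-and-iterate step can be made concrete. Newton's method on the residual $f(y_2) = (x_2 - x_1)^2 + (y_2 - y_1)^2 - l^2$ converges to one of the two solutions, and the initial guess decides which, which is exactly why sketchers seed the solve with the as-drawn coordinates. A rough Python sketch of that iteration (an illustration, not anyone's actual solver):

```python
def solve_y2(guess, x1=0.0, y1=0.0, x2=4.0, length=5.0, iters=50):
    """Newton's method on f(y2) = (x2 - x1)^2 + (y2 - y1)^2 - length^2."""
    y2 = guess
    for _ in range(iters):
        f = (x2 - x1) ** 2 + (y2 - y1) ** 2 - length ** 2
        df = 2.0 * (y2 - y1)  # df/dy2
        if df == 0.0:
            raise ZeroDivisionError("flat Jacobian; pick a different guess")
        y2 -= f / df
    return y2

# The guess selects the branch: y2 = +3 or y2 = -3.
assert abs(solve_y2(1.0) - 3.0) < 1e-9
assert abs(solve_y2(-1.0) + 3.0) < 1e-9
```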


By adding `maximize(my_radius)`, a specific value can be computed for `my_radius` even though it's not fully constrained. This is useful for displaying geometry that would only be partially constrained otherwise.

Resolution strategies are only used if values are not fully constrained otherwise. You can think of them as optional constraints. This is useful for library authors while writing constraint functions. Depending on what a constraint function is applied to, the result may or may not be fully constrained. Resolution strategies allow for a way to give hints to the runtime system that aren't strict requirements. I.e. resolution strategies will never cause geometry to be over-constrained. But they can help allow unconstrained or partially constrained geometry to become fully constrained.
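One way to read resolution strategies operationally: hard constraints determine the solution set, and hints like `maximize(...)` only choose among whatever solutions remain, so they can never make a system unsatisfiable. A toy Python sketch of that split (hypothetical API, not the proposed implementation):

```python
def resolve(solutions, maximize=None):
    """Pick one solution from the set allowed by the hard constraints.

    The `maximize` hint (a key function) never rejects solutions, so it
    cannot over-constrain; it only disambiguates. With no hint and several
    solutions, the geometry is under-constrained and we arbitrarily take
    the first (a real system would report it as not fully constrained).
    """
    solutions = list(solutions)
    if not solutions:
        raise ValueError("over-constrained: no solution satisfies the constraints")
    if maximize is None:
        return solutions[0]
    return max(solutions, key=maximize)

# The 3-4-5 example leaves y2 in {-3, +3}; maximize(y2) picks +3
# without adding a hard constraint.
assert resolve([-3.0, 3.0], maximize=lambda y: y) == 3.0
assert resolve([7.0]) == 7.0  # fully constrained: the hint is irrelevant
```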


This seems oriented (correctly) toward robustly describing 2D geometry, but I'm interested in how this maps into 3D geometry as well? This would help me understand what an "extrude" and "revolve" map to in this realm of thinking.


I don't know. We'll have to work through more examples. Part of me thinks that it's just more geometry. I think Bézier curves are pretty straightforward math. But this is beyond my expertise. Until we try it, I don't know what challenges we'll face. Even curves in 2D are kind of over my head. 3D is probably even more challenging.


Yeah that's a totally fine place to be. Doing some examples as exercises seems the way to hammer those unknowns out, along with everyone else's thoughts.


1. User enters sketch mode, which defaults to line drawing.
2. User clicks a point. A variable `p1` is defined with the screen coordinates.
3. User clicks another point. A variable `p2` is defined at that point. Since they're in line-sketch mode, `l1 = line(p1, p2)` is also generated. The app knows to name it with the prefix `l` because it is generating the line, so it knows it's a line. Similarly with points.


How is construction geometry represented?


If I understand correctly that construction geometry is geometry that is used only for building the model, but not intended to be exported or manufactured as part of the model, this language design currently doesn't treat it differently from other geometry. All geometry that has known coordinates will be displayed.

Perhaps we could add something similar to hide, but instead of not displaying, it omits the geometry from export.


It's typically a UI toggle, yeah: going from solid lines to dashed lines.


Yeah in Onshape's FeatureScript it is a function argument for geometry functions. For example, here is a construction point at the midpoint of a line segment:

```
skPoint(sketch, "BRe0F8pfMCRv.middle", { "construction" : true, "index" : "0" });
```

```
p1 = screen_point(x: 10, y: 12)
p2 = screen_point(x: 20, y: 13)
l1 = line(p1, p2)
```


During KittyCAMP 2024 we discussed the benefits of "having every segment be relative unless a 'fixed' constraint is applied". How are relative lines represented in this approach at all?


Yes, you're absolutely right. I completely screwed this up. I want everything to be relative. I will need to change a few things. Maybe this will inform how to fix screen_point() et al.


Sick, okay that makes sense though. Maybe the point definitions continue to be "absolute" (your adjustments to screen_point() notwithstanding), and it's actually the constraints that drive their positions, not the other way round. I'm not sure. I know that's why you can sometimes get unexpected movement of points on mostly-constrained sketches in solver-based CAD programs like Onshape; it's a kind of drift that occurs due to this relative/absolute tension. I can't seem to create an example video for you but I will post it here if I manage it.


jtran commented Oct 22, 2024

  • (Inline) I'm a little concerned about screen_point, because to me it reads as "screen space", as in it is dependent on the camera position which seems odd. I think I need a bit more explanation on what that means (the print_point both helped and harmed what I thought my understanding was).

Yes, this needs work.

  • (Inline) I could use a little more explanation around what it means to be fully constrained in this paradigm's sense. I think it means in the mathematical sense that the equation we're solving for has more than one solution, but that seems distinct from the CAD meaning of the term, in which one or more positions are "movable". I think they are the same underlying idea in both senses, but in CAD you typically display "not fully constrained" sketch geometry (traditionally, black geometry is constrained and blue geometry can be moved around).

I don't know what you mean. Maybe I need to read up on how CAD software uses the term. I intended "fully constrained" to mean that a variable, like the X-coordinate of a point, has a unique and computable value. If this isn't the case, we cannot know where to display it on the screen.

  • (Inline) Am I understanding correctly that this would actually use Datalog under the hood, or just conceptually, and continue to be in Rust?

I imagine that we'd either write our own Datalog engine in Rust or use a Rust library, like this one. The Rust compiler uses chalk, which is another guide.

  • (Inline) This seems oriented (correctly) toward robustly describing 2D geometry, but I'm interested in how this maps into 3D geometry as well? This would help me understand what an "extrude" and "revolve" map to in this realm of thinking.

I don't know. We'll have to work through more examples. Part of me thinks that it's just more geometry. I think Bézier curves are pretty straightforward math. But this is beyond my expertise. Until we try it, I don't know what challenges we'll face. Even curves in 2D are kind of over my head. 3D is probably even more challenging.

  • (Inline) How is construction geometry represented?

If I understand correctly that construction geometry is geometry that is used only for building the model, but not intended to be exported or manufactured as part of the model, this language design currently doesn't treat it differently from other geometry. All geometry that has known coordinates will be displayed.

Perhaps we could add something similar to hide, but instead of not displaying, it omits the geometry from export.

  • (Inline) During KittyCAMP 2024 we discussed the benefits of "having every segment be relative unless a 'fixed' constraint is applied". How are relative lines represented in this approach at all?

Yes, you're absolutely right. I completely screwed this up. I want everything to be relative. I will need to change a few things. Maybe this will inform how to fix screen_point() et al.

  • How does this proposal interface with tags? Does it preclude the need for them, since every entity is represented explicitly? I don't think that is the case.

I believe that by giving everything a variable by default, like p1 and l1 in the examples, we're essentially tagging everything. In fact, we're able to tag points, which I don't think KCL can currently do.

I've brought up before that tag declarators in KCL today are essentially output parameters. We could rearrange return values of stdlib functions such that we could use standard variable bindings in place of local tags. But it would be more verbose. The whole thing where they modify .tags of the sketch is a separate thing.

Challenges

(one of them)

If we can think of ways to display partially-constrained geometry, perhaps with default values or hints for the unconstrained properties, would that support proper components in the same manner? More of a UX challenge for us than a language one.

I mentioned in the comment that I think it would be nice if, when you drew in the UI, it essentially created fallback coordinates/offsets that are only used if nothing else in the program constrains the geometry to be somewhere else. But this is very under-baked. I'm not sure how we could implement this.

@franknoirot

I believe that by giving everything a variable by default, like p1 and l1 in the examples, we're essentially tagging everything. In fact, we're able to tag points, which I don't think KCL can currently do.

I've brought this up before that tag declarators in KCL today are essentially output parameters. We could rearrange return values of stdlib functions such that we could use standard variable bindings in place of local tags. But it would be more verbose. The whole thing where they modify .tags of the sketch is a separate thing.

Yes this makes sense; it aligns exactly with what you and @lf94 have been saying about tags as output parameters, another way of declaring.

Note that the code is generated by the user clicking in the UI. The resulting code is in the order that it was created, meaning it's a history of actions that the user performed in the UI.

```
p1 = screen_point(x: 10, y: 12)
```

@adamchalmers adamchalmers Oct 22, 2024


The X/Y coordinates here are confusing. Are they constrained coordinates? Or are they just "first guess" coordinates based on approximate mouse-clicks? To me, this reads like it's fully constrained, which means all the constraints that follow are unnecessary -- because all the lines are between fully-constrained points, so the lines are all fully-constrained too.

I must be wrong about this, right? Should I be ignoring the points' x/y coordinates when I read them? But I can't just ignore them because they're used to calculate constraints later on.


Yes, I screwed this up. Others have pointed it out as well. I think that when the user clicks in the UI, the coordinate needs to be treated specially, as some kind of default or fallback that's only used if the point isn't constrained in another way. Apparently Onshape displays sketch geometry as blue or black depending on whether it was drawn in the UI and is movable vs. actually constrained and not movable. We'll need to do something similar.

I basically conflated the idea of known/unknown with constrained/unconstrained. They're separate concepts that need to be represented in the design and implementation.
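One known way to model that separation is weighted least squares: hard constraints get a huge weight, while as-clicked coordinates become tiny-weight fallback terms that only matter where nothing else pins a value. A one-dimensional Python sketch of the idea (an illustration of the technique, not the proposed design):

```python
def solve_1d(terms):
    """Weighted least squares for one scalar unknown.

    Each term is (target, weight): minimize sum(w * (x - target)^2),
    which has the closed form x = sum(w * t) / sum(w).
    """
    num = sum(w * t for t, w in terms)
    den = sum(w for _, w in terms)
    return num / den

FALLBACK, HARD = 1e-6, 1e6  # weights for clicks vs. real constraints

# The click placed the point at x=10, then a constraint demands x=4:
x = solve_1d([(10.0, FALLBACK), (4.0, HARD)])
assert abs(x - 4.0) < 1e-3  # the hard constraint wins

# No hard constraint: the as-drawn coordinate is used as-is.
assert abs(solve_1d([(10.0, FALLBACK)]) - 10.0) < 1e-9
```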


nrc commented Oct 22, 2024

Some high-level comments (I have thoughts on syntax and stuff, but the details don't matter too much at this stage). I've addressed the proposal on its own terms (i.e., my comments are purely technical), but this would also be a big change at a late stage, and even once we implemented it, it would require a lot of iteration to get right, so I think that adopting the proposal is more of a business decision than a technical one.

  1. The general concept of constraints is one which is bothering me in existing KCL. I think we can do better. I'm not sure that this is the approach I'd choose, but it might be a good one. I expect that even if we don't lean into it as hard as in this proposal, we might be able to take some ideas from it and apply them to evolve the current language.
  2. I like the idea of focusing on the spatial relationships rather than temporal ones. I had some idle thoughts about treating functions more like 'parametric geometry', but it didn't really go anywhere. Anyway, in some sense we do need to 'build up' geometry in the source code (and indeed that is still happening here, but in a different way). My intuition here is that this is a complex part of the design space and that 'building up' is good and necessary, but it is a bit too easy to over-do that and force correlation between the building up and the description of the scene.
  3. I like the idea of concretely specifying constraints, but making the whole language a constraint solving problem would have some big downsides:
    3.1. it is less straightforward to reason about because it is more abstract. It can also be less predictable: changing a small part of the code in a small way can have large and non-local effects on the output
    3.2. performance can be an issue and again can be unpredictable to naive users - two programs which look superficially similar can have wildly different performance.
    3.3. error messages can be much less useful - often it is difficult or impossible to explain how to fix an error or why an error occurs in a user-friendly manner
    3.4. I think having geometry hidden or shown depending on it being fully constrained is a bit too subtle and error-prone, but that is easy enough to address.
  4. leaning into the functional paradigm (which I think would be necessary for keeping functions processable) is likely to make the language more difficult to learn, especially for non-programmers.
  5. My biggest concern is a bit vague; it is around the relation to the UI, and to programming as a first-class way to produce CAD models. My feeling is that the UI and the source code should be alternate views onto an abstract model of the scene. There are likely to be things in each view which are purely to do with the medium, not with the scene itself, and these should not leak into the scene or the other views (for example, comments in the source code, or the camera position in the UI). I believe that in this proposal that principle is violated by the concept of screen_point and similar things: IMO this should be part of the UI, and only the more abstract concept of a point (albeit relative to a coord system or object) should be part of the model and thus reflected in the source code, although I do agree we need better representation of points. I think that preserving the ordering of clicks and constraints is also a violation of this principle. It feels like the source code becomes a 'denormalized' view of the abstract model, rather than a view which is optimal for representation in source code. As well as this vague feeling of denormalization, I have the feeling that it is also a bit too low-level a representation. This might be fixable by designing the std functions and constraints in a more high-level way, but there is something of the flavour of having lots of variables and lots of small expressions which makes me think the code would be hard to read and hard to build a mental model of the scene from.

@franknoirot

I like the idea of concretely specifying constraints, but making the whole language a constraint solving problem would have some big downsides:

All 4 of these sub-points are great. @nrc, if you don't mind making your list numbered, it would make referring to points a bit easier in conjunction with quote replies.

@jtran
Author

jtran commented Oct 23, 2024

/r/ProgrammingLanguages thread from yesterday: programming languages where algebra is a central feature of the language

One link that has a neat interactive demo: Constrain - a JS library for animated, interactive web figures, based on declarative constraint solving

@Irev-Dev

First of all, I really appreciate you walking through how the code is generated for each of the user's UI interactions; syntax proposals without these walk-throughs are missing important context.

||l2|| = 1.2 * ||l4|| and parallel(l1, l2) from two of your examples seem incongruent to me. I'm mostly talking about syntax, since one uses special syntax and the other uses a function call.
But another perspective is that one defines a length in reference to another, while the other defines a higher-level constraint. I can imagine that either

equalLength(l2, l4)
parallel(l1, l2)

OR

||l2|| = 1.2 * ||l4||
<<l1<< = <<l2<< // <<line<< is an arbitrary syntax for angles

Would achieve similar results (NOT equivalent, but similar in terms of user intention).
Maybe we should choose one (both is also fine), but I kinda like the expression syntax; it at least defines the length/angle of one of the segments as the reference length/angle.

Question: what happens if something is over-constrained? I imagine in the UI we will have a way to stop users from adding extra constraints to already fully constrained segments, but we can't stop users from typing extra constraints. Assuming the constraint solving is done all within KCL (no engine needed), are we able to show diagnostics in the editor pointing out the problem?

With

<expression> <relational operator> <expression>

Do you think there's confusion with using the same symbols like = and < since they have other uses?

The resolution strategies section makes me a bit nervous. Am I right in saying that this comes as a result of allowing the < or > relational operators? (Am I understanding correctly?) If so, maybe we just don't include those initially.
As a side note:

mySegmentAngle < 370
maximize(mySegmentAngle)

maybe not important, but maximising an angle to 370 is really maximising it to 10, ya know. It feels a little uncomfortable.
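To spell out the wrap-around with a toy helper of my own (not anything in KCL): angles are equivalent mod 360, so a bound of 370 is effectively a bound of 10.

```python
def normalize_angle(deg):
    # Angles wrap around a full turn, so 370 names the same direction as 10.
    return deg % 360

print(normalize_angle(370))  # 10
print(normalize_angle(-90))  # 270
```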

You lost me completely in the Displaying Geometry section. Why can't unconstrained geometry be displayed? I thought that the use of screen_point(x: 10, y: 12) gives all of the segments their initial values, which would then be overridden by the constraints? I guess I'm misunderstanding something.

Typing || and then going into a UI mode where the user selects a segment seems very jarring to me. I think bog-standard autocomplete makes more sense, and we can have a separate UI for adding common constraints (click the equal-length button, then select two segments).

I have a question around tags. Obviously they are not needed for defining 2D constraints anymore with this paradigm, but tag use goes beyond sketches, e.g. selecting faces. Will this work with tags, or are tags not needed because

p1 = screen_point(x: 10, y: 12)
p2 = screen_point(x: 20, y: 13)
l1 = line(p1, p2)

l1 becomes the tag?

Lastly, I'll link the reasons why I came up with the current chaining syntax and why I didn't want to go down a solver path:
KittyCAD/modeling-app#111
The alternative I'm comparing against is very different from this proposal

I feel like I didn't see any examples of dimensions being constrained here. Like, how would a user specify that lineA is 5 units away from lineB?

@nrc nrc mentioned this pull request Oct 24, 2024
@jtran
Author

jtran commented Oct 25, 2024

@Irev-Dev,

Obviously, this whole doc is very under-baked. So take everything with a grain of salt. It's just ideas, not fully thought out.

<<l1<< = <<l2<< // <<line<< is an arbitrary syntax for angles

Interesting. I like weird language ideas 🙂

Question: what happens if something is over-constrained?

It would show an error displaying the source lines of all constraints that conflict.
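As a toy sketch of what that diagnostic could look like (my own illustration in Python, nothing like the real solver), conflicting equalities on the same unknown get grouped and reported with every participating source line:

```python
from collections import defaultdict

# Hypothetical constraint records: (source location, variable, required value).
constraints = [
    ("main.kcl:3", "p1.x", 5.0),
    ("main.kcl:7", "p1.x", 7.0),   # conflicts with main.kcl:3
    ("main.kcl:9", "p1.y", 2.0),
]

# Group constraints by the variable they pin down.
by_var = defaultdict(list)
for src, var, val in constraints:
    by_var[var].append((src, val))

# Any variable pinned to two different values is over-constrained;
# report all the source lines involved, not just one.
errors = []
for var, entries in by_var.items():
    if len({val for _, val in entries}) > 1:
        sources = ", ".join(src for src, _ in entries)
        errors.append(f"over-constrained: {var} (see {sources})")

for e in errors:
    print(e)
```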

Yes, this is a problem in certain real programming languages. If everything is a constraint on equal footing, who's to say which one is wrong? I would love it if we could find a sweet spot, the way programming languages have mostly settled on specifying types in function signatures. The signature is authoritative: if a call site differs, the call site is wrong, not the function signature. Contrast this with call sites constraining what the types of function parameters are. That's what would happen if you treated all constraints as the same.

Do you think there's confusion with using the same simples like = and < since they have other uses?

Yes, I think it can be confusing to distinguish between pure computation and constraints. The main difference being which way data flows, into parameters or out. This is why I invented the split between fn and constraint. But I agree that it can be confusing which context you're in. Since I wrote this, I've been reading about logic programming and functional logic programming. The latter tries to unify logic programming and functional programming to get the best of both, and that's exactly what I'd like to do.

Am I right in saying that this comes as a result of allowing the < or > relational operators? (Am I understanding correctly?) If so, maybe we just don't include this initially?

You're exactly right. It's because of inequalities. They complicate things. Dropping them would definitely simplify things.

(See update below.)

You lost me completely in the Displaying Geometry, why can't unconstrained geometry be displayed?

Yes, I screwed this up. I conflated known/unknown with constrained/unconstrained. They're two separate things that need to be included in the design and implementation. When the user draws a piece of geometry in the UI, they give it a default/fallback known coordinate, meaning we can always display it. But constraints may move it somewhere else. Solved constraints override the default.
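A minimal sketch of that split (the names Point, default, solved, and display are mine, not from the proposal): every point carries the default the user drew, and the solver may attach a solved position that overrides it for display.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Point:
    default: Tuple[float, float]                   # from the UI click (always known)
    solved: Optional[Tuple[float, float]] = None   # set by the constraint solver

    def display(self) -> Tuple[float, float]:
        # Geometry is always displayable: a solved position wins,
        # otherwise we fall back to the default.
        return self.solved if self.solved is not None else self.default

p = Point(default=(10, 12))
print(p.display())   # (10, 12): unsolved, falls back to the default
p.solved = (8, 12)
print(p.display())   # (8, 12): the solver's answer overrides the default
```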

p1 = screen_point(x: 10, y: 12)
p2 = screen_point(x: 20, y: 13)
l1 = line(p1, p2)

l1 becomes the tag?

Yes. That's how I imagine it.

why I didn't want to go down a solver path
KittyCAD/modeling-app#111

Interesting. I will give this a read. I really missed you last week!

I feel like I didn't see any examples of dimensions being constrained here, like how would a user specify that lineA is 5 units away from lineB?

Is this what you mean? My first comment on the PR above: #13

But if you think about it, that is only partially constrained.

I tried to work through the actual derivation by hand. It's long and reminds me of the painful parts of high school math classes. The exercise was helpful, though, because after you do all the symbolic substitution, at a certain point, you end up with something like this:

// Default values from UI drawing.
default(p1.x) = 10
default(p1.y) = 12
default(p2.x) = 20
default(p2.y) = 13
default(p3.x) = 21
default(p3.y) = 23

// From line equations.
p2.y - p1.y = ((p2.y - p1.y) / (p2.x - p1.x)) * (p2.x - p1.x)
p3.y - p2.y = ((p3.y - p2.y) / (p3.x - p2.x)) * (p3.x - p2.x)

// From the Euclidean distance.
(p3.x - p1.x)^2 + (p3.y - p1.y)^2 = 100

What do we do now? Maybe I'm just not that good at math, and this is a system of equations that's trivially solvable and I just can't see it. But at a certain point, it seems like you have to start plugging in numbers using the default values. After all, we know the points are only partially constrained. But how do we do that? An obvious approach is to start with p1, but that's so arbitrary.

Update: I think I made a mistake with the square root. I think we need to derive the fact that the thing inside the square root is non-negative; otherwise, we'd have complex numbers. So I don't think inequalities can go away completely. But maybe we just error out instead of allowing for maximize and minimize.
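For what it's worth, here is one sketch (my own, in Python, and only for a single distance constraint) of what "plugging in the defaults" could mean: treat the default coordinates as a starting guess and move the endpoints minimally until the constraint holds. With a lone distance constraint there is even a closed form, scaling both endpoints about their midpoint:

```python
import math

# Default coordinates from the UI drawing in the worked example above.
p1 = (10.0, 12.0)
p3 = (21.0, 23.0)
d = 10.0  # required span distance

def solve_distance(a, b, target):
    # Move both endpoints symmetrically about their midpoint until the
    # distance between them equals `target`. For a single distance
    # constraint this is the minimal-motion solution starting from the
    # default guess.
    mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    hx, hy = (b[0] - a[0]) / 2, (b[1] - a[1]) / 2
    scale = target / (2 * math.hypot(hx, hy))
    return ((mx - hx * scale, my - hy * scale),
            (mx + hx * scale, my + hy * scale))

q1, q3 = solve_distance(p1, p3, d)
print(math.hypot(q3[0] - q1[0], q3[1] - q1[1]))  # 10.0 (up to rounding)
```

This obviously doesn't generalize to many interacting constraints, where something like iterative least-squares from the default guess would be needed, but it shows how the defaults can break the "where do we start?" arbitrariness.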

@jtran
Author

jtran commented Oct 25, 2024

I just read KittyCAD/modeling-app#111.

The claim is that this CadQuery example is hard to read or difficult to understand.

import cadquery as cq

result = (
    cq.Sketch()
    .segment((0, 0), (0, 3.0), "s1")
    .arc((0.0, 3.0), (1.5, 1.5), (0.0, 0.0), "a1")
    .constrain("s1", "Fixed", None)
    .constrain("s1", "a1", "Coincident", None)
    .constrain("a1", "s1", "Coincident", None)
    .constrain("s1", "a1", "Angle", 45)
    .solve()
    .assemble()
)

I tried to reproduce this in KCL, but it wasn't straightforward at all. I couldn't figure it out with our current arc(). I'm imagining what it might look like once we implement the 3-point arc, KittyCAD/modeling-app#1659. After about 15 minutes of trial and error, I came up with this. (The angle between the line and the arc is wrong, but it's the general shape.)

sketch001 = startSketchOn('XZ')
  |> startProfileAt([0, 0], %)
  |> arc({
    angleStart: 90,
    angleEnd: -120,
    radius: 100,
  }, %)
  |> close(%)

... which was actually completely unintuitive. I stumbled on this solution of using close() as the straight segment. I first tried to start the path with the straight segment and arc to the end. I was trying to figure it out with arc(), couldn't, then imagined how it might work with a hypothetical 3-point arc, which seemed easier. But I feel like the solution I ended up with is a hack that won't work for anything more complicated. close() is completely opaque. nrc pointed out that it might be nice to close a sketch using an arc or some other kind of segment, other than a straight line.

So we again have lots of flavors of all our functions. They're opaque in that you, as a user of KCL, couldn't create such a thing yourself.

I've found it to be common that I need to rotate my paths so that the starting point can change.

The claim was that direct computation using the style of KCL of today was more concise and easier to follow. Going through this exercise convinces me even more that KCL has a predefined way of doing things, and if a user tries to stray outside that way at all, they're SOL.

I would even argue that the CadQuery example is indeed easier to read because our target audience thinks this way. They think in terms of high level things like lines being parallel or points being coincident. On the other hand, they do not think in terms of directly computing points. I think the example is actually pretty intuitive.

Mitigation

Direct computation of points in a path is easier to read or consume if you're trying to construct the points as the implementer: they correspond one-to-one with the output. But as a user, there's a big difference in mental model. @jgomez720 expressed frustration with how all line segments in KCL/the Modeling App have a direction. I understand why this is, and after talking with Alyn about constraints, I think leaning into deltas (AKA relative path segments) is generally what we want under the hood. But could the UI shield users from this somehow?
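To make "leaning into deltas" concrete, here's a tiny sketch (my own illustration, not app code) of why relative segments help: the same delta list rebuilds the same shape from any starting point, so re-rooting or translating a path doesn't disturb its geometry.

```python
# An absolute path, and the same path as relative segments (deltas).
points = [(0.0, 0.0), (0.0, 3.0), (1.5, 1.5)]
deltas = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(points, points[1:])]

def rebuild(start, deltas):
    # Re-anchor the path: walk the deltas from any chosen starting point.
    out = [start]
    for dx, dy in deltas:
        x, y = out[-1]
        out.append((x + dx, y + dy))
    return out

print(rebuild((0.0, 0.0), deltas))  # recovers the original points
print(rebuild((5.0, 5.0), deltas))  # same shape, translated
```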

@nrc brought up the idea that the UI could withhold writing KCL until the user exits sketch mode and "commits" to a sketch. The nice thing about this is that the user could do all kinds of complicated edits on lines and points in any order, and then once they've built up an entire thing, only then does the tool need to write the corresponding KCL. This has some nice effects. For example, because the tool knows the entire path, it can group things cleanly in the source.

The downside is that the user loses the live one-to-one connection between UI operations and KCL changes, which was always intended to aid in users learning KCL. And presumably users would want to edit existing sketches, so this doesn't actually save us much work in KCL refactoring because we'd still need to be able to understand and refactor previously created sketches to properly edit them.

I think this is an interesting, under-explored approach. Something might have to give.

To be clear, I'm not trying to say that the current way is so bad and a solver approach is so much better. I think that we need to address the big difference in mental model.

Default Coordinates

One thing I'd like to add is how in the initial draft of this PR, I conflated the idea of a point being known with it being constrained. When people pointed this out, I immediately thought it was an oversight that needed to be fixed.

But the more I think about this, I'm actually not so sure. I think that part of why other CAD tools are so unintuitive in the solver department is because of these "initial guesses" for locations of points. I still haven't worked through the details yet, but it's something I'm thinking about. The whole idea of a "default" that somehow informs how things get solved is actually pretty weird.

@jtran jtran closed this Nov 20, 2024
@nickmccleery

I haven't been through all the comments yet, but have been through the original markdown doc.

My impressions:

  • The constraint language concept is super interesting. I really like the idea of separating the functionality of creating geometry from the definition of a set of constraints.
  • Completely agree on points as first class citizens, including the unconstrained p1 = point() example.
  • Things like adding min/max ops for partially constrained systems feel very much like a V2 or V3 sort of functionality if users of incumbent CAD are the target customer. There's already a paradigm shift of sorts being demanded, so throwing anything more complex in there might scare off users.
  • Hide and show functionality would be very useful.

Will get into the comments and report back 🫡
