Merged
For now, of course, this is only worth discussing in that it informs scope limits for the first pass. So here are comments on that. Signed-off-by: Eric Myhre <hash@exultant.us>
After further thought from this jumping-off point, the right choice is going to look mostly like a knowledge base with a series of concurrent actors. The work planner is only one of them. Publishers are another kind (this saves publish criteria from being too directly tied up with work planner logic, even if by default they might be quite causal). And perhaps most importantly, database mergers look no different than either of them. Thus, what we really need to do is get gated writes around the knowledge base, plus a consistent (and, critically, resumable) system of observables; then other read transactions can do pretty much whatever they want.

The first-pass interfaces for Catalogs and Watchables are probably wrong as well. Catalog isn't even an interface: it's a concrete data format. The mechanisms for discovering them (from dirs on disk, from git, etc.) are of course all implementations, but they're really implementations of a Catalog factory function. Caching and updating can and should be handled by the knowledge base component. And the watchables for catalog updates should look juuuust about the exact same as the ones for any other kind of change in the knowledge base (new stage2 formula, new runrecord/stage3, etc.) -- if you *need* (for some odd reason) a single catalog with caching and change notifications alone, build a tiny knowledge base for just it; that's fine.

Signed-off-by: Eric Myhre <hash@exultant.us>
Flipped catalogs to a concrete type as described in plans a few commit messages ago. This first actor implementation is using a strategy of selecting over a couple of different work-triggering events... and a channel that reports old stuff without triggers is simply another one of those: this model solves the problem of how to react to fresh inputs in realtime while still backfilling (whenever time permits) for the possibility of missed events when the actor wasn't yet alive and registered for observables. This pattern should work well for expressing that basic priority and scale freely to more kinds of triggers. Leaving lots of todos strewn about on the insides of the knowledge base components themselves, still; just trying to get all the pieces to line up so they can be coordinated efficiently and in race-free ways. Signed-off-by: Eric Myhre <hash@exultant.us>
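The trigger-plus-backfill pattern described here can be sketched in a few lines of Go. This is a minimal illustration, not repeatr's actual code; the `event` type, channel names, and `runActor` function are all hypothetical stand-ins. The key idea is that live triggers and backfilled old state share one `select`, so the actor reacts to fresh inputs while still catching up on anything it missed before it registered for observables.

```go
package main

import "fmt"

// event is a hypothetical trigger payload; real repeatr events are richer.
type event struct{ name string }

// runActor drains both a live-trigger channel and a backfill channel of
// older, possibly-missed items. Because both feed the same select, fresh
// triggers are handled promptly while backfill proceeds whenever time
// permits -- and adding more trigger kinds just means more cases.
func runActor(live, backfill <-chan event, done <-chan struct{}, handle func(event)) {
	for {
		select {
		case ev := <-live:
			handle(ev)
		case ev := <-backfill:
			handle(ev)
		case <-done:
			return
		}
	}
}

func main() {
	live := make(chan event, 4)
	backfill := make(chan event, 4)
	done := make(chan struct{})

	handled := make(chan string)
	go runActor(live, backfill, done, func(ev event) { handled <- ev.name })

	backfill <- event{"old-catalog"} // state that predates the actor
	fmt.Println(<-handled)
	live <- event{"new-catalog"} // a fresh trigger
	fmt.Println(<-handled)
	close(done)
}
```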
Starting to try to draft tests on this, which is good, because not everything in the current foreman draft is well separated for testability, and that is a major code smell now correctly highlighted. There will be more knowledge base implementations in the future (I'm thinking database-backed storage will be desirable at some scale), but haven't switched to an interface yet. Signed-off-by: Eric Myhre <hash@exultant.us>
"Plan" is shorter, but "Commission" seems to capture much better the associated concepts: that this thing has the authority to trigger more work, that it's goal-oriented while not being a concrete plan itself, etc. ("Procedure" was another candidate, specifically because "SOP"s are a *perfect* analogy, but nobody wants to talk about their sopping wet mess.)
Signed-off-by: Eric Myhre <hash@exultant.us>
These might not be the final boundaries of functions (it's still pretty high granularity), and also the event pumping and the formula evocation may end up in separate goroutines later. The key point for now is that some of the state is getting exposed, and the functions that mutate it can be called in single steps so we can observe them and test via prying open the state. Definitely not black box testing, so the test specs won't be reusable, but that's okay; different actors have the option of such radically different opinions about work planning that I'm not sure tests would ever be reusable anyway. Signed-off-by: Eric Myhre <hash@exultant.us>
Turns out you really need to set channels. I can't come up with a reasonably convenient way to make these tests immune to hangups if you accidentally ask for more things than will happen. Using a default in the select isn't cool because then you can't decide to block later; copy-pasting the rest of the select and toggling that behavior with a conditional does not sound great; so, giving up. Need to write more knowledge base stuff before we can really fly. Signed-off-by: Eric Myhre <hash@exultant.us>
What is being written now is a simple, in-memory-only form, but later I expect to write a disk-backed one using something simple/portable like boltdb, and perhaps later heavier types of relational database as well. Signed-off-by: Eric Myhre <hash@exultant.us>
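The shape of an in-memory-only knowledge base with gated writes and observables might look roughly like this. This is a hedged sketch; `memoryKB`, `Observe`, and `PublishCatalog` are illustrative names, not repeatr's API. The point is that writes are serialized behind a mutex and every update fans out to registered observer channels, which is the same contract a boltdb-backed variant could satisfy later.

```go
package main

import (
	"fmt"
	"sync"
)

// memoryKB is a hypothetical in-memory knowledge base: writes are gated by
// a mutex, and observers get a notification channel for catalog updates.
type memoryKB struct {
	mu        sync.Mutex
	catalogs  map[string]string // catalog name -> latest hash
	observers []chan string     // each receives the catalog name on update
}

func newMemoryKB() *memoryKB {
	return &memoryKB{catalogs: map[string]string{}}
}

// Observe registers a buffered channel that receives catalog names
// whenever a new edition is published.
func (kb *memoryKB) Observe() <-chan string {
	kb.mu.Lock()
	defer kb.mu.Unlock()
	ch := make(chan string, 16)
	kb.observers = append(kb.observers, ch)
	return ch
}

// PublishCatalog records a new edition and fans out the change event.
func (kb *memoryKB) PublishCatalog(name, hash string) {
	kb.mu.Lock()
	defer kb.mu.Unlock()
	kb.catalogs[name] = hash
	for _, ch := range kb.observers {
		ch <- name
	}
}

// Lookup reads the latest pinned hash for a catalog name.
func (kb *memoryKB) Lookup(name string) (string, bool) {
	kb.mu.Lock()
	defer kb.mu.Unlock()
	h, ok := kb.catalogs[name]
	return h, ok
}

func main() {
	kb := newMemoryKB()
	updates := kb.Observe()
	kb.PublishCatalog("base-image", "hash-abc")
	fmt.Println(<-updates) // the observer hears about "base-image"
}
```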
…o planning a formula! Signed-off-by: Eric Myhre <hash@exultant.us>
…orrectly. Signed-off-by: Eric Myhre <hash@exultant.us>
Well, had to fix the knowledgebase observer bus stuff to work like, At All. Should probably make unit tests for that in its own package... :I Signed-off-by: Eric Myhre <hash@exultant.us>
…ommission is retriggered. Just worked, :feels_good: Signed-off-by: Eric Myhre <hash@exultant.us>
Having a zero-cost executor with deterministic behaviors... or not!... seems like it should be incredibly useful for testing pipelining and planning stuff. Signed-off-by: Eric Myhre <hash@exultant.us>
Fix that executor to not mutate the formula it's handed in. I think that's going to be part of the standard behavior going forward; implementations will be adjusted to that as they're found. Meanwhile, the sanity check in the "stage2" ID function is pretty useful at catching BS. Also, wildly enough, this is finally the first time we're seeing use of the `conjecture` flag *matter*. So that's fun. (Not exhaustively testing it here, though: it's coincidentally relevant to the way nil-det mode *happens* to be implemented; it should really be covered by its own suite of tests in the model package. Will probably do that post-refactor of bringing all the existing `def` code into `model` (aka, at least one more PR down the road from the present).) Signed-off-by: Eric Myhre <hash@exultant.us>
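The "don't mutate the formula you're handed" contract can be shown with a toy clone-then-fill pattern. The types and the `run` function below are illustrative only (the real `Formula` lives in repeatr's model code): the executor works on a clone, so the caller's formula stays pristine even after outputs are resolved.

```go
package main

import "fmt"

// Formula is a toy stand-in for repeatr's real formula type.
type Formula struct {
	Inputs  map[string]string
	Outputs map[string]string
}

// Clone copies the maps key by key, which is deep enough for this toy.
func (f Formula) Clone() Formula {
	c := Formula{Inputs: map[string]string{}, Outputs: map[string]string{}}
	for k, v := range f.Inputs {
		c.Inputs[k] = v
	}
	for k, v := range f.Outputs {
		c.Outputs[k] = v
	}
	return c
}

// run is a hypothetical executor honoring the no-mutation contract: it
// fills in output hashes on a clone, never on the formula it was handed.
func run(f Formula) Formula {
	result := f.Clone()
	for name := range result.Outputs {
		result.Outputs[name] = "hash-of-" + name
	}
	return result
}

func main() {
	f := Formula{
		Inputs:  map[string]string{"/": "hash-rootfs"},
		Outputs: map[string]string{"/task/out": ""},
	}
	r := run(f)
	fmt.Println(f.Outputs["/task/out"] == "") // caller's formula untouched
	fmt.Println(r.Outputs["/task/out"])       // result carries the hashes
}
```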
All of these are for testing the foreman's response to existing and incoming catalog updates given some other state of commissions. So say that. I was originally thinking of moving on to add more tests with closing the loop here, but... that may not actually be the right API surface. Release publishing actors should *not* get entangled with this work planner implementation. Which means testing full-circle work stuff should come later, in another package for integration testing that pulls both hemispheres of the system together. Signed-off-by: Eric Myhre <hash@exultant.us>
Not functional yet but probably approximately right and has been sitting uncommitted for too long. Signed-off-by: Eric Myhre <hash@exultant.us>
At this point, saying "local" is redundant. And it's starting to look wholly possible that most of the planning parts being written here will actually be general to both farm and non-farm situations after all. Signed-off-by: Eric Myhre <hash@exultant.us>
We might need to keep more info around for handing to release publish actors. Also, the index math was making me feel a little hinky. Signed-off-by: Eric Myhre <hash@exultant.us>
(Mostly, dropping comments from previous design concepts, because on further reflection there's a reason those todos seemed awkward to implement: they implied some nasty semiotic flaws down the road.) Managing this whole merry-go-round at scale requires semantics we can use for garbage-collection. And I'd rather not have that end up requiring fudge factors based on timestamps to avoid discarding WIP stuff. So: GC is going to be based on strong references (because of course it is), we're going to start planning for that now, and the current obvious roots of interest are anything that made it into a release catalog. As a result, if you need state caching for WIP stuff, do it yourself; and inter-actor WIP message passing is currently seen as unspecified (because I don't want to deal with putting a "lease" pattern in the KB; trying to define sharing for those would be a snarl). Signed-off-by: Eric Myhre <hash@exultant.us>
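Strong-reference GC rooted at release catalogs can be sketched as a plain reachability walk. Everything here is a hypothetical toy model (the `kb` type and its `producedBy` map are invented for illustration): anything reachable backwards from a released ware, through the formula records that produced it, survives; unrooted WIP simply isn't marked.

```go
package main

import "fmt"

// kb is a toy knowledge base for the GC sketch. producedBy maps an output
// ware hash to the input ware hashes of the formula that produced it;
// real run records carry far more than this.
type kb struct {
	producedBy map[string][]string
}

// reachable walks backwards from the release-catalog roots, marking every
// ware hash that is still strongly referenced. Anything not marked is a
// GC candidate -- no timestamp fudge factors required.
func (k kb) reachable(roots []string) map[string]bool {
	live := map[string]bool{}
	stack := append([]string{}, roots...)
	for len(stack) > 0 {
		h := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if live[h] {
			continue
		}
		live[h] = true
		stack = append(stack, k.producedBy[h]...)
	}
	return live
}

func main() {
	k := kb{producedBy: map[string][]string{
		"released-app": {"base-image", "src-tarball"},
	}}
	live := k.reachable([]string{"released-app"})
	fmt.Println(live["base-image"], live["wip-scratch"]) // inputs live, WIP not
}
```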
We have that clone method now, happily. Signed-off-by: Eric Myhre <hash@exultant.us>
… info with plans. Commission info is retained for the whole trip because we want to refer back to it later in the cycle when plotting out whether and where to publish releases. Lease semantics should make it pretty easy to parallelize executors from here. (Though we may not end up getting much mileage out of cancels until we get to either farming or a crash-resistant plan queue.) Signed-off-by: Eric Myhre <hash@exultant.us>
Test basic catalog behavior and immutability. (As with a lot of things, this "immutability" and this "clone" function are approximations and still lean on you to play nice within the contracts. Slices aren't being deepcopied at any point here. Bugs that mutated those would have a Nasty spill radius.) Signed-off-by: Eric Myhre <hash@exultant.us>
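The "Nasty spill radius" of a shallow clone is easy to demonstrate. The `catalog` type below is a toy invented for this sketch: copying a struct that contains a slice copies only the slice header, so both copies share the backing array, and an in-place mutation through either one spills into the other. A deep clone isolates them.

```go
package main

import "fmt"

// catalog is a toy type showing why a field-by-field "clone" is only an
// approximation of immutability when slices are involved.
type catalog struct {
	Name     string
	Editions []string
}

// Clone is shallow: it copies the slice header, not the elements.
func (c catalog) Clone() catalog {
	return catalog{Name: c.Name, Editions: c.Editions}
}

// DeepClone copies the elements too, so the copies are truly independent.
func (c catalog) DeepClone() catalog {
	eds := make([]string, len(c.Editions))
	copy(eds, c.Editions)
	return catalog{Name: c.Name, Editions: eds}
}

func main() {
	a := catalog{Name: "base", Editions: []string{"v1", "v2"}}
	shallow := a.Clone()
	shallow.Editions[0] = "mutated"
	fmt.Println(a.Editions[0]) // "mutated" -- the spill radius in action

	b := catalog{Name: "base", Editions: []string{"v1", "v2"}}
	deep := b.DeepClone()
	deep.Editions[0] = "mutated"
	fmt.Println(b.Editions[0]) // "v1" -- deep copy stays isolated
}
```

Leaning on callers to "play nice within the contracts" avoids the deep-copy cost on every clone, which is a defensible tradeoff as long as the contract is documented.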
Many, many placeholders where configuration options will be needed later. Current objective is just to get to the point where we can feed a cycle forward. Signed-off-by: Eric Myhre <hash@exultant.us>
Previously missed this case. And it didn't immediately work.... although fortunately, that was just because of the placeholder catalog IDs not converging, so the bug was in the test. So we're good. Signed-off-by: Eric Myhre <hash@exultant.us>
This should be much DRYer now that we have all those construction methods on catalogs. Signed-off-by: Eric Myhre <hash@exultant.us>
Previously, both cases crashed with nil derefs, tsk tsk. Signed-off-by: Eric Myhre <hash@exultant.us>
This is the first time we've tested a new catalog appearing in the knowledge base entirely as a result of formulas planned from commission+catalog. Woohoo! (Well, we're still pushing the foreman through its paces manually for test purposes (and yes, that's concealing a bug where as currently written the planner part will block the evoker), but still.) Also, add sort interface for catalog IDs. I seem to be needing sortables with increasing frequency just to deal with testing. For the $n$'th time, I'd trade a sizable brick of gold for order-preserving maps. Signed-off-by: Eric Myhre <hash@exultant.us>
Multi-step chain with internal triggering: demo'd. Signed-off-by: Eric Myhre <hash@exultant.us>
Contributor:
You know, everyone trashes on English for having crazy spelling rules, but I'd have to say that it's way more fun to abuse English spelling than a phonetically spelled language like Czech :D. Still don't think it makes up for all those weekly spelling bees, though.
Member (Author):
Making fun of my opinionated, artisanal spelling of "evokation"? Hush, you!
Contributor:
"artisanal spelling" :D brb, I'm off to get my masters degree in mispronuciation.
Member (Author):
Merging to thunderous applause (cough) because I wanna get on with some refactors on master that'll make a real hash of this branch if it doesn't fold back in first. :)
This is the first draft of mechanisms behind pipelining. Lots of the infrastructure in repeatr to date is about defining highly isolated pieces of work, then helping refine that definition of work until the results are effectively immutable. Now it's becoming time to start building in the other direction, as well: knitting pieces of work together, and making it possible to pump updates triggered by changing any one piece of data all the way through a series of transitions to produce something that's both new... and reproducible and auditable once we've charted the way.
To that end, this sketch introduces a new layer of configuration -- the
Commission. Commissions are much like Formulas: they list inputs, actions to perform on them, and outputs to capture afterwards. The difference is that Commissions are allowed to refer to things by rough "names", which are human-readable and mutable -- where Formulas refer to inputs by hashes like git commits, Commissions refer to their inputs by names comparable to git branches. Both layers are important: Formulas are completely repeatable descriptions of work because they continue to pin all inputs precisely; Commissions are less precise, but by producing Formulas from a Commission, we can get the best of both worlds.

The mapping from names to hashes is performed by another new structure called a Catalog (catalog.Book in the code). Catalogs list a series of names, and tell you which hash each name should resolve to. When you want to publish a new release of a product? Publish a new edition of the Catalog with that new hash. Commissions which consume that Catalog name will be automatically triggered to emit a new Formula by...

... the Foreman! The Foreman is an actor upon a KnowledgeBase, which contains a whole suite of related Commissions and Catalogs, and the Formulas they've produced and Wares they all reference. The Foreman listens for new Catalogs and Commissions, and evaluates them to produce Formulas... which are then scheduled to run on an Executor (this is the old familiar turf, where we simply expect "formula in -> (hopefully deterministic) outputs out"). When the Executor returns results, the output wares may be fed back into releasing a new edition of a Catalog. This may continue to flow through a whole graph of dependent Commissions -- making it possible to update one ware and watch updated builds depending on it, and on things that depend on it, and so on, flow through the whole system automatically. 🎉
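The Commission-to-Formula pinning step described above can be sketched compactly. These are toy types with invented names, not repeatr's actual structures: a Commission holds mutable name references, a Catalog maps names to pinned hashes, and planning resolves every name through the Catalog to produce a fully pinned, repeatable Formula.

```go
package main

import "fmt"

// Commission refers to inputs by mutable, human-readable names
// (comparable to git branches).
type Commission struct {
	Inputs map[string]string // mount path -> catalog name
}

// Catalog maps each name to the hash it should currently resolve to
// (comparable to resolving a branch to a commit).
type Catalog map[string]string

// Formula pins every input precisely, so the work it describes is
// completely repeatable.
type Formula struct {
	Inputs map[string]string // mount path -> ware hash
}

// plan resolves each named input through the catalog; the result is a
// fully pinned formula, or an error if a name has no entry yet.
func plan(c Commission, book Catalog) (Formula, error) {
	f := Formula{Inputs: map[string]string{}}
	for path, name := range c.Inputs {
		hash, ok := book[name]
		if !ok {
			return Formula{}, fmt.Errorf("no catalog entry for %q", name)
		}
		f.Inputs[path] = hash
	}
	return f, nil
}

func main() {
	comm := Commission{Inputs: map[string]string{"/": "base-image"}}
	book := Catalog{"base-image": "tar:abc123"}
	f, err := plan(comm, book)
	if err != nil {
		panic(err)
	}
	fmt.Println(f.Inputs["/"]) // tar:abc123

	// Publishing a new catalog edition retriggers planning: same
	// commission, new pinned formula.
	book["base-image"] = "tar:def456"
	f2, _ := plan(comm, book)
	fmt.Println(f2.Inputs["/"]) // tar:def456
}
```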
And many other miscellaneous bits:
Other features hinted at in the future but as yet deferred to later rounds of drafts:
There's no connection to the main() method yet -- no config, nothing -- this is still purely sketching, self-consistency testing, and a couple judicious but extremely visible duct-tape placeholders. But it is demoing multi-stage pipelines, automatically triggering evaluation between dependents in response to updates. So that's pretty cool. Enough to keep iterating on.