
Add invariant manager #3263

Merged: 8 commits from andrewjdawson2016:InvariantManager into uber:master on May 26, 2020

Conversation

andrewjdawson2016 (Contributor)

Potential risks

@@ -140,13 +140,13 @@ func Open(state int) bool {
 func DeleteExecution(
 	exec *Execution,
 	pr PersistenceRetryer,
-) FixResult {
+) *FixResult {
Contributor
nit: Seems like other places are still returning the struct directly, instead of a pointer. Probably make them consistent?

I think returning the struct is fine here since the struct itself is not big. Returning a pointer would cause the object to be allocated on the heap and increase GC pressure.
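For context, a minimal standalone illustration of that trade-off (the types below are hypothetical, not the ones in this PR): a small struct returned by value can stay on the caller's stack, while returning a pointer to a local typically makes it escape to the heap, which `go build -gcflags=-m` will report.

package main

import "fmt"

// result is a small stand-in for a struct like FixResult (hypothetical, not the PR's type).
type result struct {
	ok   bool
	info string
}

// byValue returns the struct directly; the compiler can keep the value on the stack.
func byValue() result {
	return result{ok: true, info: "fixed"}
}

// byPointer returns a pointer to a local variable, which escapes to the heap
// (run `go build -gcflags=-m` to see the escape analysis decision).
func byPointer() *result {
	r := result{ok: true, info: "fixed"}
	return &r
}

func main() {
	fmt.Println(byValue(), *byPointer())
}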

-func (o *openCurrentExecution) Fix(execution common.Execution) common.FixResult {
-	return common.DeleteExecution(&execution, o.pr)
+func (o *openCurrentExecution) Fix(execution common.Execution, resources *common.InvariantResourceBag) common.FixResult {
+	fixResult, checkResult := checkBeforeFix(o, execution, resources)
Contributor
I am confused. To me, if a check is needed before a fix, then it's the caller's responsibility to first call Check() and then Fix(). Putting them together inside Fix() couples the logic and makes the code a bit inflexible.
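To make the two shapes concrete, here is a minimal sketch with simplified, hypothetical types (the real interfaces and the checkBeforeFix helper in this PR differ): in shape A, Fix re-runs the check internally before repairing, roughly what checkBeforeFix does in the diff above; in shape B, the caller sequences Check() and Fix().

package invariantsketch

// Simplified, hypothetical stand-ins for the types in this PR.
type Execution struct{ WorkflowID string }
type ResourceBag struct{}

type CheckResult struct{ Failed bool }

type FixResult struct {
	Skipped bool
	Fixed   bool
}

type Invariant interface {
	Check(exec Execution, res *ResourceBag) CheckResult
	Fix(exec Execution, res *ResourceBag) FixResult
}

// openCurrentExecution is a toy invariant used only to contrast the two shapes.
type openCurrentExecution struct{}

func (o *openCurrentExecution) Check(exec Execution, res *ResourceBag) CheckResult {
	// real check logic would live here
	return CheckResult{Failed: true}
}

// Shape A (as in the diff above): Fix re-runs the check itself and skips the
// repair when the invariant already holds, so a caller can call Fix on its own.
func (o *openCurrentExecution) Fix(exec Execution, res *ResourceBag) FixResult {
	if !o.Check(exec, res).Failed {
		return FixResult{Skipped: true}
	}
	// perform the actual repair here
	return FixResult{Fixed: true}
}

// Shape B (the suggestion in this comment): the caller sequences Check and Fix,
// and each invariant's Fix can assume its check has already failed.
func callerDrivenFix(inv Invariant, exec Execution, res *ResourceBag) FixResult {
	if !inv.Check(exec, res).Failed {
		return FixResult{Skipped: true}
	}
	return inv.Fix(exec, res)
}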

// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.

package shard
Contributor

What's this shard package for?

Contributor

+1 I think this can live in the invariants package

}
}

// InvariantTypes returns sorted list of all invariants that manager will run.
Contributor

nit: "returns a sorted list of all invariant types that the manager will run"


coveralls commented May 15, 2020

Coverage Status

Coverage increased (+0.04%) to 68.457% when pulling 0bc90d7 on andrewjdawson2016:InvariantManager into 55bf6da on uber:master.


// NewInvariantManager handles running a collection of invariants according to the policy provided.
// InvariantManager takes care of ensuring invariants are run in their correct dependency order.
func NewInvariantManager(
emrahs (Contributor) commented May 15, 2020

I don't want to overengineer this, but I have a slightly different idea about how we can implement this part about Policies and Manager and wanted to hear what you think about it:

  1. Don't limit the manager to a single policy. Make it default to running all policies unless a filter says otherwise. We can keep this filter in a config without hard-coding it in the application.
  2. Define a collection of all policies (maybe we shouldn't call this policy to avoid confusion), which would act like test cases, where each of them consists of an ordered list of Invariants to test a meaningful case end-to-end. Ideally, these policies should never be too long. An invariant should be able to appear in multiple policies. There shouldn't be any order/dependency requirements between policies in the collection.
  3. Manager should take a list of policies to include or exclude (the config I mentioned above), and decide to run all Policies one by one, without worrying about their order. As it checks for the invariants in each policy, it can populate a map of invariant types => check results, and use that map to avoid running a given invariant repeatedly across multiple policies. At the end of the iterations, all invariants that are involved in one or more included policies will get executed exactly once.

If we follow the design above, we get the following benefits:

  1. Separation of concerns between the invariants-related logic and the overall Scanner. The Scanner interacts with a single Manager to run the tests, and the only thing it provides is the config that tells which policies should run.
  2. More flexibility in the config without adding complexity. Rather than specifying a predefined policy to run, we can precisely tell what policies we want to run (and/or what policies we don't). Since the policies are not ordered or dependent on each other, the config is simple to manage.
  3. We avoid running the same invariant repeatedly if it was already covered by an earlier policy, without adding a complex invariant sorting algorithm to the mix.

For the number of invariants & policies you currently have in the PR, my proposal is definitely overkill. However, if we eventually have a dozen or more invariants spread over multiple overlapping policies, it should scale better in terms of maintainability.
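A rough sketch of the proposal above, using hypothetical names that are not part of this PR: an empty include set means "run all policies", and the per-type result map keeps an invariant shared by several policies from running more than once.

package managersketch

// Hypothetical, simplified types used only to sketch the proposal above;
// none of these names come from the PR itself.
type InvariantType string

type Execution struct{ WorkflowID string }

type CheckResult struct{ Failed bool }

type Invariant interface {
	Type() InvariantType
	Check(exec Execution) CheckResult
}

// Policy is an ordered list of invariants that together cover one meaningful
// end-to-end case; the same invariant may appear in several policies.
type Policy struct {
	Name       string
	Invariants []Invariant
}

// RunPolicies runs every included policy, caching results per invariant type
// so an invariant shared by multiple policies is executed exactly once.
func RunPolicies(policies []Policy, include map[string]bool, exec Execution) map[InvariantType]CheckResult {
	results := make(map[InvariantType]CheckResult)
	for _, p := range policies {
		if len(include) > 0 && !include[p.Name] {
			continue // policy filtered out by the config
		}
		for _, inv := range p.Invariants {
			if _, done := results[inv.Type()]; done {
				continue // already ran via an earlier policy
			}
			results[inv.Type()] = inv.Check(exec)
		}
	}
	return results
}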

@andrewjdawson2016 andrewjdawson2016 merged commit 11c17f9 into uber:master May 26, 2020
@andrewjdawson2016 andrewjdawson2016 deleted the InvariantManager branch May 26, 2020 20:04
mkolodezny pushed a commit to mkolodezny/cadence that referenced this pull request May 29, 2020
andrewjdawson2016 added a commit that referenced this pull request Jun 6, 2020
andrewjdawson2016 added a commit that referenced this pull request Jun 22, 2020
yux0 pushed a commit to yux0/cadence that referenced this pull request May 4, 2021
4 participants