
Removed replication mocks #4883

Conversation

vytautas-karpavicius
Contributor

What changed?
Removed all replication mocks.

Why?
They are either not used at all, or used internally in only a couple of places.
For those we can use lightweight fakes instead and have less code.

How did you test it?
Existing tests should pass.

Potential risks

Release notes

Documentation Changes

@vytautas-karpavicius vytautas-karpavicius marked this pull request as ready for review June 27, 2022 15:01
@vytautas-karpavicius vytautas-karpavicius requested a review from a team June 27, 2022 15:01
@coveralls

coveralls commented Jun 27, 2022

Pull Request Test Coverage Report for Build 0181cd19-b304-431d-a72d-bd3f7075fa6a

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • 90 unchanged lines in 15 files lost coverage.
  • Overall coverage decreased (-0.03%) to 57.802%

Files with coverage reduction:

| File | New Missed Lines | % |
|---|---|---|
| service/matching/matcher.go | 1 | 91.87% |
| common/membership/hashring.go | 2 | 83.54% |
| common/persistence/executionManager.go | 2 | 77.95% |
| common/persistence/statsComputer.go | 2 | 95.71% |
| common/task/weightedRoundRobinTaskScheduler.go | 2 | 89.64% |
| service/history/execution/mutable_state_util.go | 2 | 36.14% |
| service/history/task/transfer_active_task_executor.go | 2 | 71.41% |
| service/matching/taskListManager.go | 2 | 73.77% |
| service/history/task/task_util.go | 4 | 75.14% |
| service/history/task/transfer_standby_task_executor.go | 4 | 87.61% |
Totals:
Change from base Build 0181c1b1-1cf4-48fb-b8f0-c2e3ebb16a40: -0.03%
Covered Lines: 83802
Relevant Lines: 144981

💛 - Coveralls

@Groxx
Member

Groxx commented Jun 27, 2022

While I'm largely in favor of this, one advantage of lines like

	s.taskExecutor.EXPECT().execute(replicationTask, true).Return(0, nil).Times(1)

that is being lost is the ability to require that those mocks are used. The current fakes are silent about non-use.

Which may be entirely reasonable for those tests; I haven't read closely enough yet to say. Would there be any simple way to require them, though? In my experience, unused fakes leading to gradual drift and misunderstanding of tests has been a pretty big problem in past projects, and the "must be used" check is a pretty simple way to lock that down.
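A minimal sketch of the "must be used" check being discussed: a lightweight fake that records its calls so a test can fail when the fake is never exercised, mimicking gomock's `.Times(1)` guarantee. All names here (`fakeExecutor`, `replicationTask`, `assertUsed`) are illustrative, not taken from this PR.

```go
package main

import "fmt"

// replicationTask is a stand-in for the real task type (hypothetical).
type replicationTask struct{ id int }

// fakeExecutor is a lightweight fake that records every call, so a test
// can require that it was actually exercised.
type fakeExecutor struct {
	calls []replicationTask
}

func (f *fakeExecutor) execute(task replicationTask, forceApply bool) (int, error) {
	f.calls = append(f.calls, task)
	return 0, nil
}

// assertUsed returns an error if the fake was never called; in a real
// test this would be wired to t.Cleanup or checked at the end of the test.
func (f *fakeExecutor) assertUsed() error {
	if len(f.calls) == 0 {
		return fmt.Errorf("fakeExecutor was never used")
	}
	return nil
}

func main() {
	f := &fakeExecutor{}
	f.execute(replicationTask{id: 1}, true)
	fmt.Println("calls:", len(f.calls))
	fmt.Println("used check passed:", f.assertUsed() == nil)
}
```

Registering `assertUsed` with `t.Cleanup` would make the check automatic for every test that constructs the fake, which is roughly what gomock's controller does at `Finish()`.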

Member

@davidporter-id-au davidporter-id-au left a comment


Tbh I'm not sure fakes will be significantly easier to maintain in the long term, but 👍

@vytautas-karpavicius vytautas-karpavicius force-pushed the remove-replication-mocks branch from 6d187bc to 4c63863 Compare July 1, 2022 13:27
@vytautas-karpavicius
Contributor Author

> that is being lost is the ability to require that those mocks are used. The current fakes are silent about non-use.

Yes, you are entirely correct on this point. I have added a check back, so the test semantics remain the same.

> Unused fakes leading to gradual drift and misunderstanding of tests has been a pretty big problem in past projects in my experience, and the "must be used" check is a pretty simple way to lock that down.

Could you elaborate on what you mean by "gradual drift"?

Some thoughts in general. Cadence seems to be built in a very "Java"-like style: interfaces come first, necessary or not, along with mocks. That seems contrary to many best practices in Go. Specific to this discussion: return structs, and accept interfaces that are defined where they are used, with only the subset of methods actually needed. Smaller interfaces are easily faked, making unit tests easier to write. IMO it is very counter-productive to write unit tests where 90% of the test is mocking complex dependencies. Small interfaces along with fakes make unit testing much easier. I would very much like us to go in that direction.
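A minimal sketch of the "accept interfaces, return structs" style described above: the consumer defines a one-method interface at the point of use, and the test double is just a map. The names (`taskStore`, `process`, `fakeStore`) are illustrative, not from the Cadence codebase.

```go
package main

import "fmt"

// taskStore is defined by the consumer, with only the one method the
// consumer actually needs, rather than by the package that implements
// storage.
type taskStore interface {
	Get(id string) (string, error)
}

// process depends only on the small interface, so tests can fake it
// in a few lines instead of generating a mock.
func process(s taskStore, id string) (string, error) {
	v, err := s.Get(id)
	if err != nil {
		return "", err
	}
	return "processed:" + v, nil
}

// fakeStore is the entire test double: a map plus one method.
type fakeStore map[string]string

func (f fakeStore) Get(id string) (string, error) {
	v, ok := f[id]
	if !ok {
		return "", fmt.Errorf("not found: %s", id)
	}
	return v, nil
}

func main() {
	out, err := process(fakeStore{"t1": "payload"}, "t1")
	fmt.Println(out, err == nil)
}
```

Because `fakeStore` implements the interface implicitly, no generated mock or shared interface package is needed for this test.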

Of course mocks still have their place, especially for facade-style interfaces exposed by packages. One good example: all persistence managers and their corresponding mocks. However, any dependency that can be modeled as state, or is used entirely within a single package, can probably be simplified into a fake instead.

Feel free to argue these topics; I'm interested in other opinions as well.

@vytautas-karpavicius vytautas-karpavicius merged commit 354e6b0 into cadence-workflow:master Jul 5, 2022
@vytautas-karpavicius vytautas-karpavicius deleted the remove-replication-mocks branch July 5, 2022 08:29
@Groxx
Member

Groxx commented Jul 5, 2022

> Some thoughts in general. Cadence seems to be built in a very "Java"-like style: interfaces come first, necessary or not, along with mocks. That seems contrary to many best practices in Go. Specific to this discussion: return structs, and accept interfaces that are defined where they are used, with only the subset of methods actually needed. Smaller interfaces are easily faked, making unit tests easier to write. IMO it is very counter-productive to write unit tests where 90% of the test is mocking complex dependencies. Small interfaces along with fakes make unit testing much easier. I would very much like us to go in that direction.
>
> Of course mocks still have their place, especially for facade-style interfaces exposed by packages. One good example: all persistence managers and their corresponding mocks. However, any dependency that can be modeled as state, or is used entirely within a single package, can probably be simplified into a fake instead.
>
> Feel free to argue these topics; I'm interested in other opinions as well.

I pretty much completely agree, and yeah, the up-front interface-and-mock habits lead to very artificial tests. I'd much prefer to mock less, and further away from the code actually being tested, because it means you're getting additional checks of "this leads to behavior X, which is the actual goal of this method, even though it's three calls away". Super fine-grained mocks are often hard to read and understand, and they tend to break at the smallest touch, which makes them a real pain during refactoring (rather than helping it).


Re fakes and "drift": largely two things.

  1. Fakes tend to become less and less accurate as mimics of the thing they're faking, unless there's some way to automatically check that they produce identical(-enough) behavior. The tests often still pass, so they don't tend to be investigated until something breaks in production... and then when you fix it you discover that dozens of tests have been wrong for months.
  2. Fakes frequently do not require that they are actually used, or that the data you put in them is actually relevant to the test. E.g. if you put record X in a fake storage, it should (usually) be a test failure if X is never looked at; otherwise your test code is not accurately describing what happens. I've consistently seen heavy use of fakes turn into "this test prepares hundreds of fake records, less than 1% of which are relevant, and ~10% are not used by any test in the entire project".

^ and those often combine to create a gradual loss of accuracy. A lot of which can be traced back to their frequent inability to ensure they're relevant.
Mocks have the same issues (and some that are much worse), but the automated check of "expected X, X must happen" prevents a fairly large category of mistakes very simply.

None of which is unfixable. I just want us to establish a strong habit of e.g. "if you have a fake in a test, it must be relevant", with only rare exceptions.
And ideally a way to automatically check that the fake behaves the same as the real thing, though I realize that's frequently not worth the effort.
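The relevance check described in point 2 above can be sketched as a fake store that remembers which seeded records were ever read, so a test can fail when fixture data turns out to be irrelevant. This is an illustrative sketch, not code from this PR; the names (`trackingStore`, `unused`) are hypothetical.

```go
package main

import "fmt"

// trackingStore is a fake that remembers which records were ever read,
// so a test can flag seeded data that no code path touched.
type trackingStore struct {
	records map[string]string
	read    map[string]bool
}

func newTrackingStore(records map[string]string) *trackingStore {
	return &trackingStore{records: records, read: map[string]bool{}}
}

// Get returns the record and marks it as having been read.
func (s *trackingStore) Get(id string) (string, bool) {
	v, ok := s.records[id]
	if ok {
		s.read[id] = true
	}
	return v, ok
}

// unused reports every seeded record that was never looked at; a test
// could assert this is empty (e.g. in t.Cleanup).
func (s *trackingStore) unused() []string {
	var out []string
	for id := range s.records {
		if !s.read[id] {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	s := newTrackingStore(map[string]string{"a": "1", "b": "2"})
	s.Get("a") // only "a" is touched; "b" is fixture drift
	fmt.Println(s.unused())
}
```

Asserting `unused()` is empty at the end of each test gives fakes the same "expected X, X must happen" property that mocks get for free.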
