Replies: 4 comments 16 replies
-
@GazEdge I was thinking about this too; this is where there are possible issues regarding implementation purity. So the first question is: how is someone doing transactions? Two approaches:
I'm currently doing option 1, so I have a unit of work interface and repository interfaces. In the use case handler, I inject the repository (or repositories) and the unit of work. Let's say this is a SubmitOrderUseCase: I would check that, after executing the use case, the FakeOrderRepository contains the newly created order and that the commit() method has been executed exactly once. For invalid (negative) scenarios, I would check that the FakeOrderRepository is unchanged (i.e. it does not have the new order) and that the unit of work has not been called. The unit test does have awareness of the unit of work interface, but not of the real DB implementation.
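For concreteness, here's a rough sketch of what such a test could look like (JUnit 5 with hand-rolled fakes; SubmitOrderUseCase, OrderRepository, UnitOfWork, the command and exception types are placeholder names, not from any particular codebase):

```java
// Behaviour-level test using in-memory fakes instead of the real DB.
import org.junit.jupiter.api.Test;
import java.util.*;
import static org.junit.jupiter.api.Assertions.*;

class SubmitOrderUseCaseTest {

    // In-memory fake standing in for the real repository implementation.
    static class FakeOrderRepository implements OrderRepository {
        final Map<String, Order> orders = new HashMap<>();
        public void add(Order order) { orders.put(order.id(), order); }
        public Optional<Order> findById(String id) { return Optional.ofNullable(orders.get(id)); }
    }

    // Fake unit of work that only records how often commit() was called.
    static class FakeUnitOfWork implements UnitOfWork {
        int commits = 0;
        public void commit() { commits++; }
    }

    @Test
    void submittingAValidOrderStoresItAndCommitsOnce() {
        var orders = new FakeOrderRepository();
        var uow = new FakeUnitOfWork();
        var useCase = new SubmitOrderUseCase(orders, uow);

        useCase.handle(new SubmitOrderCommand("order-1", List.of("item-42")));

        assertTrue(orders.findById("order-1").isPresent());
        assertEquals(1, uow.commits);
    }

    @Test
    void submittingAnInvalidOrderLeavesTheRepositoryUntouchedAndNeverCommits() {
        var orders = new FakeOrderRepository();
        var uow = new FakeUnitOfWork();
        var useCase = new SubmitOrderUseCase(orders, uow);

        assertThrows(InvalidOrderException.class,
                () -> useCase.handle(new SubmitOrderCommand("order-2", List.of())));

        assertTrue(orders.orders.isEmpty());
        assertEquals(0, uow.commits);
    }
}
```

The test knows about the UnitOfWork interface (because it asserts on commit counts), but nothing about how the real transaction is implemented.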
-
Sorry for jumping into this, but I am curious about the topic too and how to model it :) My perspective is that a use case represents the primary concern of the application (the core domain) while transactions represent a secondary concern (system). That is, one represents my business goal (e.g. placing an order) and the other describes a non-functional requirement (e.g., ensuring that data stays consistent). You can have multiple secondary concerns, like security & audit, too, of course. Aspects (as in Aspect Oriented Programming) or their poor cousins, the interceptors, are a good fit for modelling these secondary, cross-cutting concerns and can be modelled separately. Therefore, I would:
From there on I start getting "pragmatic" (shame on me, wannabe clean programmer): I leverage existing frameworks / solutions to achieve this (e.g., for transactions I would use Spring's built-in transaction support, same for authorization / security, ...). I won't re-invent the wheel unless I don't trust my framework (but that is another story) or I have to implement my own system logic (e.g., custom integration or rules that I have to enforce across different use cases). I believe that by "wisely" adapting the test pyramid it is possible to test both primary and secondary concerns, ensure that the integration between them works, and still have robust tests against structural refactoring. My 2 cents. P.S.: testing asynchronous use cases (e.g., orchestrated sagas of microservices) is another interesting topic, but I will leave it for another entry :D
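For illustration, this is roughly what "leaning on the framework" could look like with Spring's declarative transaction support (PlaceOrderService, OrderRepository and the command type are made-up names):

```java
// The transaction (secondary concern) is handled by a framework-provided
// proxy/interceptor; the use case body never mentions it.
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PlaceOrderService {

    private final OrderRepository orders;

    public PlaceOrderService(OrderRepository orders) {
        this.orders = orders;
    }

    // The whole use case method runs inside a transaction: Spring begins it
    // before the method and commits afterwards, rolling back by default if a
    // RuntimeException escapes.
    @Transactional
    public void placeOrder(PlaceOrderCommand command) {
        Order order = Order.from(command);
        orders.add(order);
        // ...further domain logic; an escaping RuntimeException rolls everything back
    }
}
```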
-
@scalasm excellent comment! Based on this, I'm extending my response above to include the option you mentioned:
- Aspect-oriented modelling of transactions, whereby we would separately test the interceptor (meaning we don't test the transactions through our use case, but test the interceptor's task instead).
- Distinguishing between use cases where the whole use case handle method is transactional versus the case where only a part of the use case handle method is transactional. If the whole method is transactional, then AOP is fine. However, if we need granularity, then the only option I see is injecting a unit of work (see the sketch below).

I think this brings us to a series of options and trade-offs. Like a trade-off matrix.
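For the granular case, a rough sketch of what injecting a unit of work could look like (the UnitOfWork.run(...) shape and every other name here are assumptions for illustration only):

```java
// Only part of the handler needs to be atomic, so the unit of work is injected
// and scoped explicitly instead of wrapping the whole method with an aspect.
public class CloseAccountUseCase {

    private final AccountRepository accounts;
    private final NotificationGateway notifications;
    private final UnitOfWork unitOfWork;

    public CloseAccountUseCase(AccountRepository accounts,
                               NotificationGateway notifications,
                               UnitOfWork unitOfWork) {
        this.accounts = accounts;
        this.notifications = notifications;
        this.unitOfWork = unitOfWork;
    }

    public void handle(CloseAccountCommand command) {
        // Non-transactional part: reads and validation.
        Account account = accounts.findById(command.accountId())
                .orElseThrow(AccountNotFoundException::new);

        // Transactional part: only the state change is atomic.
        unitOfWork.run(() -> {
            account.close();
            accounts.update(account);
        });

        // Outside the transaction again: side effects we don't want rolled back.
        notifications.accountClosed(account.id());
    }
}
```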
-
Unit of work is not easily applied to every type of storage; it usually works best with relational DBs and multiple repositories. I usually simply stick with the repository implementation (and manually call save each time in the use case) and take care that each use case changes just a single aggregate. Synchronization between different aggregates is then performed asynchronously.
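A rough sketch of that style, with purely illustrative names (the event publisher could be an outbox, a message bus, etc.):

```java
// One aggregate changed per use case, saved explicitly via the repository;
// anything that touches other aggregates reacts asynchronously to an event.
public class ShipOrderUseCase {

    private final OrderRepository orders;
    private final EventPublisher events;

    public ShipOrderUseCase(OrderRepository orders, EventPublisher events) {
        this.orders = orders;
        this.events = events;
    }

    public void handle(ShipOrderCommand command) {
        Order order = orders.findById(command.orderId())
                .orElseThrow(OrderNotFoundException::new);

        order.markAsShipped();   // the only aggregate modified in this use case
        orders.save(order);      // single explicit save, no unit of work needed

        // Other aggregates (inventory, billing, ...) are synchronized asynchronously.
        events.publish(new OrderShipped(order.id()));
    }
}
```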
-
Wondering how transactional consistency is tested in true behaviour TDD.
I spent some time thinking about it and decided that the output of the behaviour has to be transactional, i.e. we get all of the behaviour or none of the behaviour. The issue is that transactional consistency depends on how you implement the solution. This would then force implementation details to leak into the tests (in the form of a failing mock) to verify that either all of the behaviour happens, or none of it happens.
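For illustration, a sketch of the kind of test I mean (JUnit 5; the failing fake and every other name here are hypothetical):

```java
// Force a failure partway through the behaviour and assert that nothing was
// persisted. The fake configured to fail is exactly the implementation-detail
// leak described above.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class TransferFundsAtomicityTest {

    @Test
    void whenTheSecondWriteFailsNothingIsPersisted() {
        // Fake repository set up to throw on its second save() call.
        var accounts = new FakeAccountRepository();
        accounts.failOnSaveNumber(2);
        var useCase = new TransferFundsUseCase(accounts, new FakeUnitOfWork());

        assertThrows(StorageFailure.class,
                () -> useCase.handle(new TransferFundsCommand("acc-1", "acc-2", 100)));

        // All-or-nothing: neither account balance was changed.
        assertEquals(0, accounts.persistedChanges());
    }
}
```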
Any thoughts?