System Design Patterns

- Layered Architecture (N-Tier Architecture)
- Microservices Architecture
- Service-Oriented Architecture (SOA)
- Event-Driven Architecture (EDA)
- Hexagonal Architecture (Ports and Adapters)
- Component-Based Architecture
- Blackboard Architecture
- Serverless Architecture
- Circuit Breaker Pattern
- Model-View-Controller (MVC) Pattern
CQRS (Command Query Responsibility Segregation)
CQRS and MediatR are often lumped together, but they solve different problems: CQRS separates read and write concerns, while MediatR decouples components through a mediator.
Understanding CQRS in Its Pure Form
CQRS is a pattern that separates read and write operations in your application. The pattern suggests that the models used for reading data should be different from those used for writing data.
That's it. No specific implementation details, no prescribed libraries: just a simple architectural principle.





- The pattern emerged from the understanding that in many applications, especially those with complex domains, the requirements for reading and writing data are fundamentally different.
- Read operations often need to combine data from multiple sources or present it in specific formats for UI consumption. Write operations need to enforce business rules, maintain consistency, and manage domain state.
This separation provides several benefits:
- Optimized read and write models for their specific purposes
- Simplified maintenance as read and write concerns evolve independently
- Enhanced scalability options for read and write operations
- Clearer boundary between domain logic and presentation needs
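As a minimal sketch of the principle, here is the separation in Python. The order domain, class names, and in-memory stores are hypothetical stand-ins; in practice the read model would usually be a projection or replica, not a reference to the write store.

```python
from dataclasses import dataclass

# --- Write side: enforces business rules ---
@dataclass
class CreateOrderCommand:
    order_id: str
    amount: float

class OrderWriteModel:
    def __init__(self):
        self._orders = {}  # normalized write-side store (stand-in for a DB)

    def handle(self, cmd: CreateOrderCommand) -> str:
        if cmd.amount <= 0:  # domain rule lives only on the write side
            raise ValueError("amount must be positive")
        self._orders[cmd.order_id] = cmd.amount
        return cmd.order_id

# --- Read side: shaped for UI consumption, no business rules ---
class OrderReadModel:
    def __init__(self, write_model: OrderWriteModel):
        self._write_model = write_model  # stand-in for a projection/replica

    def order_summary(self, order_id: str) -> dict:
        amount = self._write_model._orders[order_id]
        # The read model formats data for display, something the write
        # model should never care about.
        return {"id": order_id, "display_amount": f"${amount:.2f}"}
```

Note that the two models can now evolve independently: the write side can add validation rules and the read side can add display fields without touching each other.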
MediatR: A Different Tool for Different Problems
MediatR is an implementation of the mediator pattern. Its primary purpose is to reduce direct dependencies between components by providing a central point of communication. Instead of knowing about each other, the mediator connects the components.
The library provides several features:
- In-process messaging between components
- Behavior pipelines for cross-cutting concerns
- Notification handling (publish/subscribe)

The indirection MediatR introduces is its most criticized aspect. It can make code harder to follow, especially for newcomers to the codebase. However, you can largely mitigate this by defining each request in the same file as its handler.
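To make the mediator idea concrete, here is a toy sketch in Python. This is not MediatR's actual C# API; the `GetUserQuery` request and its handler are invented for illustration, defined side by side as the text suggests.

```python
class Mediator:
    """Central dispatch point: callers depend on the mediator,
    never directly on the handlers."""
    def __init__(self):
        self._handlers = {}

    def register(self, request_type, handler):
        self._handlers[request_type] = handler

    def send(self, request):
        # Dispatch purely by request type, mirroring in-process messaging.
        return self._handlers[type(request)](request)

# Request and handler kept in the same "file", which eases the
# follow-the-code criticism mentioned above.
class GetUserQuery:
    def __init__(self, user_id):
        self.user_id = user_id

def get_user_handler(query: GetUserQuery) -> dict:
    return {"id": query.user_id, "name": "Alice"}  # stubbed data store

mediator = Mediator()
mediator.register(GetUserQuery, get_user_handler)
```

The caller only ever writes `mediator.send(GetUserQuery(7))`; it never imports or constructs the handler, which is the whole point of the pattern.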




Sidecar/Sidekick Pattern
This pattern is helpful when you want to abstract a peripheral part of your main application into a separate microservice. Doing so promotes independence between services and breaks up tightly coupled components.
The Sidecar/Sidekick pattern is a good choice when the application and the sidecar use the same languages and libraries, and you need a service that shares the application's lifecycle but can be deployed independently. It is a poor choice when the resource cost of deploying a sidecar for each instance outweighs the advantage of isolation. Functionality such as logging and configuration can be abstracted away into another microservice, as shown in the example below. The sidecar has a 1:1 relationship with the primary service.
Backend For Frontend (BFF) Pattern
The main gist of this pattern is adding another backend layer between the frontends and the real backends, called the Backend For Frontend. It is one of the most popular patterns and is applied extensively in industry.
By adding this extra layer, you can orchestrate between different frontend and backend servers, validate and filter requests coming from the frontend, and map and convert the data models delivered by the backend.
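A rough sketch of the idea in Python, with the two backend services stubbed out as plain functions (their names, fields, and the mobile view shape are all hypothetical):

```python
# Stand-ins for two real backend services
def inventory_service(product_id: str) -> dict:
    return {"product_id": product_id, "stock": 12, "warehouse_codes": ["A", "B"]}

def pricing_service(product_id: str) -> dict:
    return {"product_id": product_id, "price_cents": 1999}

def mobile_bff_product_view(product_id: str) -> dict:
    """BFF endpoint: orchestrates backend calls and reshapes the
    combined result for one specific frontend (a mobile client)."""
    inventory = inventory_service(product_id)
    pricing = pricing_service(product_id)
    # Map/convert backend models into the shape the frontend needs,
    # dropping fields (warehouse_codes) the mobile UI never displays.
    return {
        "id": product_id,
        "inStock": inventory["stock"] > 0,
        "price": f"${pricing['price_cents'] / 100:.2f}",
    }
```

A web frontend would get its own BFF function with a different shape, which is exactly why the pattern exists: each frontend gets a backend tailored to it.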
Transactional Outbox Pattern
The transactional outbox pattern is a great solution to the dual write problem. It ensures that updates to both the database and an outbox table happen together in a single transaction. This keeps the database and outbox table in sync.
An asynchronous process then keeps an eye on the outbox table. When it sees a new entry, it reads it and sends the corresponding event to Kafka, ensuring downstream services get the latest information promptly.
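A minimal sketch of the two halves, using an in-memory SQLite database in place of a production database and a plain callback in place of a Kafka producer (table and event names are made up):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, amount REAL)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, "
           "published INTEGER DEFAULT 0)")

def place_order(order_id: str, amount: float) -> None:
    # The business write and the outbox entry commit in ONE transaction,
    # so they can never diverge -- this is what solves the dual write problem.
    with db:
        db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
        db.execute("INSERT INTO outbox (payload) VALUES (?)",
                   (json.dumps({"event": "OrderPlaced", "id": order_id}),))

def relay(publish) -> None:
    # The asynchronous relay: reads unpublished rows and forwards them
    # (to Kafka in the article's setup; `publish` is a stand-in callback).
    rows = db.execute(
        "SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, payload in rows:
        publish(json.loads(payload))
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()
```

In a real deployment the relay runs as a separate polling process (or uses CDC on the outbox table), and marking rows published only after a successful send gives at-least-once delivery.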
Change Data Capture (CDC)
Change Data Capture (CDC) systems monitor and record changes (like inserts, updates, and deletes) happening in the database. This process is asynchronous and separate from regular database operations. CDC can also help solve the dual write problem.
When changes are detected, they are sent as events to Kafka for the downstream services to pick up.
The four most common CDC implementation techniques are:
- Timestamp-based
- Trigger-based
- Snapshot-based
- Log-based
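As an illustration of the first technique, here is a toy timestamp-based capture loop in Python over an in-memory SQLite table (the table, column names, and checkpoint variable are invented for the sketch):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT, "
           "updated_at INTEGER)")

last_seen = 0  # checkpoint: highest updated_at already captured

def capture_changes() -> list:
    """Timestamp-based CDC: poll for rows modified since the checkpoint.
    Note this technique cannot see hard deletes (the row is gone), which
    is why log-based CDC is usually preferred in production."""
    global last_seen
    rows = db.execute(
        "SELECT id, name, updated_at FROM users WHERE updated_at > ? "
        "ORDER BY updated_at",
        (last_seen,),
    ).fetchall()
    if rows:
        last_seen = rows[-1][2]  # advance the checkpoint
    # In a real pipeline these records would be sent to Kafka.
    return [{"id": r[0], "name": r[1]} for r in rows]
```

Each poll returns only the rows changed since the previous poll, so downstream consumers see an incremental stream of changes rather than full snapshots.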
Event Sourcing
Event sourcing is another way to tackle the dual write problem. Instead of just storing the current state of an object, you store the entire history of events leading up to that state. This pattern is especially useful in industries like healthcare, banking, or finance, where tracking every change is crucial.
To solve the dual write problem, a microservice writes the event to an event log. Then, an asynchronous process reads from the event log and sends the corresponding signal to Kafka for downstream services.
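The core mechanic, storing history and deriving state by replay, fits in a few lines. This sketch uses a plain list as the event log and a banking example echoing the industries mentioned above; names are illustrative:

```python
# Append-only event log: the full history is stored, not just current state.
event_log = []

def append_event(account_id: str, kind: str, amount: int) -> None:
    event_log.append({"account": account_id, "kind": kind, "amount": amount})

def current_balance(account_id: str) -> int:
    """Derive the current state by replaying every event for the account."""
    balance = 0
    for e in event_log:
        if e["account"] == account_id:
            balance += e["amount"] if e["kind"] == "credit" else -e["amount"]
    return balance
```

Because no event is ever mutated or deleted, the log doubles as a complete audit trail, which is exactly why the pattern suits healthcare, banking, and finance.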
Two-Phase Commit (2PC)
When data needs to be atomically stored on multiple cluster nodes, nodes cannot make the data accessible to clients until the decision of other cluster nodes is known. Each node needs to know if other nodes successfully stored the data or if they failed.
The essence of two-phase commit, unsurprisingly, is that it carries out an update in two phases:
- The prepare phase asks each node if it can promise to carry out the update.
- The commit phase actually carries it out.
As part of the prepare phase, each node participating in the transaction acquires whatever it needs to assure that it will be able to do the commit in the second phase—for example, any locks that are required. Once each node is able to ensure it can commit in the second phase, it lets the coordinator know, promising the coordinator that it can and will commit in the second phase. If any node is unable to make that promise, then the coordinator tells all nodes to roll back, releasing any locks they have, and the transaction is aborted. Only if all the participants agree to go ahead does the second phase commence—at which point it's expected they will all successfully update. It is crucial for each participant to ensure the durability of their decisions using a pattern like the Write-Ahead Log. This means that even if a node crashes and subsequently restarts, it should be capable of completing the protocol without any issues.
Let’s take a simple example of a monolithic banking application. This system interacts with a single database server managing multiple tables. Assume that the database is managing user’s balances. The application is responsible for handling user’s bank transactions.
When user A transfers money to user B, we need to ensure the following:
- If the transaction succeeds, the system must credit user B's account and debit user A's account.
- The database server may crash after the transaction completes. On restart, it must recover to the state it was in before the crash.
- The transaction may fail for multiple reasons. For example, user A may not have sufficient balance. In this case, neither user's account should be updated.
- The database needs to be in a consistent state after the transaction completes. For example, user B shouldn't receive the credit without user A getting the debit.

If you use a relational database, it will guarantee all four of the above points. Relational databases use transactions to achieve this. A transaction is an abstraction that encapsulates a unit of work. Transactions guarantee atomicity in a database: either all operations complete successfully or none of them take effect.
In simple words, a transaction is a set of SQL statements that a database executes as a unit. If any statement fails, the database aborts the transaction. When a transaction is aborted, no change is made to the underlying data; from the state perspective, it's equivalent to not executing any statement at all.
If all the statements execute, the transaction is committed. Once a transaction is committed, the underlying data is modified and persisted.
We have now decided to scale our database, to cater to increasing customers. Data is distributed across multiple database servers. So, user A and user B’s database records may fall in different shards.
Can we still guarantee atomicity in the case of sharded databases? No: each database server guarantees atomicity only for its own local transactions. When dealing with many database servers, it's the application's responsibility to make a transaction atomic. We will look at the different error scenarios we need to handle.
We will have to execute the two SQL queries on two separate servers. If either of the SQL queries fails, it will result in an inconsistent state. We want to prevent such an inconsistent state.
We have to ensure that either the transaction completes successfully or fails. We don’t want to leave the transaction midway in an inconsistent state. 2-Phase Commit makes distributed transactions atomic in nature.
We will now take a look at the workings of the 2-Phase Commit protocol. We introduce a new entity called the Transaction Coordinator. This entity orchestrates the commit part of the transaction. The other servers managing the individual transactions are known as Participants.
In our example, we have two transactions: Txn Credit and Txn Debit. Txn Credit runs on Shard A and Txn Debit runs on Shard B. The client initiates both transactions and sends them to the two shards. The diagram below illustrates this process. Both database servers start transaction execution.
Later, the client sends a commit message to the Transaction Coordinator. The transaction commit is now divided into two phases by the Transaction Coordinator.
In the first phase, a RequestCommit message is sent to all the participant servers. Every server has to respond to this message with either an OK or a FAIL message. A server replies with an OK if it's able to execute the transaction successfully. A FAIL message is returned if there are any errors during execution, for example, if the account balance went negative during the debit transaction.
The Transaction Coordinator waits for a response from all the servers. Once it has received the responses, it decides to either Commit or Abort the transaction. This becomes the second phase of the commit. The transaction is committed only if every server replies with an OK message. If at least one server responds with a FAIL message, the transaction is aborted.
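The coordinator logic above can be sketched as follows. This is a deliberately simplified toy: real participants must acquire locks and durably log their promise (e.g., via a Write-Ahead Log) inside `prepare`, and a real coordinator needs timeouts and crash recovery, none of which is modeled here.

```python
class Participant:
    """A shard that can promise (prepare) and then commit or roll back."""
    def __init__(self, name: str, can_commit: bool = True):
        self.name, self.can_commit = name, can_commit
        self.state = "pending"

    def prepare(self) -> str:
        # Phase 1 vote. A real participant would take locks and write
        # its promise to a WAL before answering OK.
        return "OK" if self.can_commit else "FAIL"

    def commit(self) -> None:
        self.state = "committed"

    def rollback(self) -> None:
        self.state = "rolled_back"

def two_phase_commit(participants: list) -> str:
    # Phase 1: the coordinator asks every participant for a promise.
    votes = [p.prepare() for p in participants]
    # Phase 2: commit only if ALL voted OK; otherwise abort everywhere.
    if all(v == "OK" for v in votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "aborted"
```

In the banking example, Shard A (credit) and Shard B (debit) are the two participants; a single FAIL vote, such as an insufficient balance on the debit side, aborts both.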
Saga Pattern

- The Saga pattern is a design pattern used to manage distributed transactions in a microservices architecture.
- Unlike traditional transaction management methods that require locking resources across multiple services, the Saga pattern breaks a transaction into a series of smaller, independent sub-transactions.
- Each sub-transaction is executed by a different service and, upon completion, triggers the next sub-transaction.
- If a sub-transaction fails, the pattern invokes compensating actions to undo the changes made by previous sub-transactions, ensuring the system returns to a consistent state.
- This approach allows for more scalable and resilient transaction management, avoiding the performance bottlenecks and blocking issues associated with protocols like two-phase commit (2PC).
- The Saga pattern thus provides a robust framework for maintaining data integrity and consistency in distributed systems without compromising on the benefits of microservices architecture.
Sagas can be implemented in two main ways: orchestration and choreography.
In orchestration, a central coordinator controls the sequence of sub-transactions, while in choreography, each service reacts to events and triggers the next step in the process.
Choreography is a decentralized way of implementing sagas, where each microservice communicates with other microservices through events. There is no central coordinator that controls the flow of the saga. Instead, each microservice decides what to do based on the events it receives and publishes. For example, in the order saga, the inventory service might publish an event when it reserves a product, and the payment service might subscribe to that event and charge the customer. If the payment service fails, it might publish a failure event, and the inventory service might subscribe to that event and release the product.
- Is decentralized and decoupled
- Is good for highly independent microservices
- Is “easier” to implement, at least initially
- Is an easy choice for converting established monoliths to microservices
- Can make control flow unclear
- Can be challenging to debug
Example
- If a customer applies for a credit card, the bank has to perform a series of tasks before issuing one, such as ID Verification, CIF ID Creation, KYC Update, Credit Score Verification, Credit Card Processing, Card Dispatch, PIN Dispatch, and Confirmation. These tasks are performed independently, and the services communicate with each other asynchronously using events upon successful completion.
- Each service involved in the saga is responsible for coordinating its own actions and compensations.
- Reducing coupling between microservices
- Increasing scalability and availability
- Allowing for more flexibility and evolution.
- Increased complexity and testing effort
- Reduced visibility and monitoring
- The need for more coordination and consistency
- It requires microservices to agree on the event schema and semantics, as well as handle duplicate or out-of-order events. All of these factors can make choreography a complex task.
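The order-saga example above can be sketched with a tiny in-process event bus. Event names, the inventory dict, and the 100-unit payment limit are all invented for the sketch; notice there is no coordinator anywhere, each service only reacts to events:

```python
# Minimal event bus: services subscribe to event types and react independently.
handlers = {}

def subscribe(event_type, handler):
    handlers.setdefault(event_type, []).append(handler)

def publish(event_type, data):
    for h in handlers.get(event_type, []):
        h(data)

inventory = {"p1": 1}

# Inventory service: reserves stock and announces it; on a payment
# failure event it runs its own compensating action.
def reserve_product(order):
    inventory[order["product"]] -= 1
    publish("ProductReserved", order)

def release_product(order):  # compensating action
    inventory[order["product"]] += 1

# Payment service: reacts to the inventory event, no coordinator involved.
def charge_customer(order):
    if order["amount"] > 100:  # toy failure condition
        publish("PaymentFailed", order)
    else:
        publish("PaymentSucceeded", order)

subscribe("OrderPlaced", reserve_product)
subscribe("ProductReserved", charge_customer)
subscribe("PaymentFailed", release_product)
```

The control flow lives implicitly in the subscriptions, which is exactly why choreography can be hard to follow: to understand the saga you have to trace which service listens to which event.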
Orchestration is a centralized way of implementing sagas, where a single microservice acts as a coordinator that commands other microservices to perform local transactions. The coordinator knows the logic and the order of the saga, and it communicates with other microservices through commands and replies. For example, in the order saga, the coordinator might send a command to the inventory service to reserve a product, and wait for a reply. If the reply is positive, it might send a command to the payment service to charge the customer, and so on. If any of the commands fails, the coordinator might send compensating commands to the previous microservices to roll back the saga.
- Has one service issuing “commands” to execute microservices
- Makes control flow easier to understand
- Easier to build with greenfield applications
- Makes debugging and failure handling clearer
- Is “harder” to implement initially, but pays dividends later
- Has a single point of failure (the orchestrator)
- Simplifies complexity and testing
- Improves visibility and monitoring
- Requires less coordination and consistency work
- Increases coupling between microservices
- Decreases scalability and availability
- Limits flexibility and evolution
- Orchestration necessitates that microservices know the commands and replies they send and receive, not just the events. Additionally, the orchestrator becomes a single point of failure or bottleneck that can affect the whole saga.
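In contrast to choreography, an orchestrated saga keeps the whole sequence in one place. A minimal sketch of a coordinator, with hypothetical local transactions and their compensations (the 100-unit failure condition is again made up):

```python
def orchestrate_order_saga(order, steps):
    """Central coordinator: runs each (step, compensation) pair in order.
    On a failure, it runs the compensations of every completed step
    in reverse, returning the system to a consistent state."""
    completed = []
    for step, compensation in steps:
        try:
            step(order)
            completed.append(compensation)
        except Exception:
            for undo in reversed(completed):
                undo(order)
            return "aborted"
    return "completed"

# Hypothetical local transactions and their compensating actions
log = []

def reserve(order):
    log.append("reserved")

def release(order):  # compensates reserve
    log.append("released")

def charge(order):
    if order["amount"] > 100:  # toy failure condition
        raise RuntimeError("insufficient funds")
    log.append("charged")

def refund(order):  # compensates charge
    log.append("refunded")
```

Because the sequence and the compensation logic live in one function, the control flow is easy to read and debug, at the cost of every service being coupled to the orchestrator.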
- When determining which approach is better for implementing a saga, it depends on various factors such as the complexity and frequency of the saga, the reliability and latency of communication, and the autonomy and ownership of microservices.
- Orchestration may be more efficient and consistent if communication is reliable and fast, whereas choreography may be more scalable and flexible if the saga is complex and frequent. Additionally, choreography may be better suited to preserving independence and alignment if microservices are owned by different teams or organizations, while orchestration may be more convenient for coordinating collaboration and integration if they are owned by the same team or organization.
- Ultimately, you need to weigh the pros and cons of both approaches to choose the best one for your scenario; alternatively, you can use a hybrid approach where parts of the saga are choreographed and others are orchestrated depending on context and requirements.
- Concurrent Update Deadlocks in Microservices
- Microservice Scenarios
- CQRS and Check-then-Act in Microservices
- Redis Persistence in Microservices Architecture
- https://www.linkedin.com/pulse/essential-guidelines-effective-system-design-momen-negm-q2c5f/
- https://docs.aws.amazon.com/prescriptive-guidance/latest/modernization-data-persistence/cqrs-pattern.html