
Microservices

Full Stack edited this page Mar 16, 2025 · 32 revisions

Definition of a Microservice

A microservice is a small, independent, and loosely coupled service that performs a specific business function within a larger system. It is a software architectural style that structures an application as a collection of small, autonomous services that communicate over a network, typically using APIs. Each microservice is developed, deployed, and scaled independently.

A microservices architecture offers the following main advantages over a monolithic application:

  • Scalability: Individual microservices can be scaled independently based on demand, optimizing resource usage.
  • Flexibility: Different microservices can be developed, tested, deployed, and maintained using different technologies.
  • Faster Development: Smaller, focused teams can work on separate microservices concurrently, speeding up development cycles and release times.
  • Resilience: Failures in one microservice are isolated and less likely to affect the entire system, improving overall reliability.
  • Easier Maintenance: Smaller codebases for each microservice are easier to understand, modify, and debug, reducing technical debt.
  • Outsourcing flexibility: Intellectual property protection can be a concern when outsourcing business functions to third-party partners. A microservices architecture can help by isolating partner-specific components, ensuring the core services remain secure and unaffected.

On the other hand, it comes with some challenges:

  • Complexity: Developing and maintaining a microservices-based application typically demands more effort than a monolithic approach. Each service requires its own codebase, testing, deployment pipeline, and documentation.
  • Inter-Service Communication: Microservices rely on network communication, which can introduce latency, failures, and complexities in handling inter-service communication.
  • Data Management: Distributed data management can be challenging, as each microservice may have its own database, leading to issues with consistency, data synchronization, and transactions.
  • Deployment Overhead: Managing the deployment, versioning, and scaling of multiple microservices can require sophisticated orchestration and automation tools like Kubernetes.
  • Security: Each microservice can introduce new potential vulnerabilities, increasing the attack surface and requiring careful attention to security practices.

Key Considerations for Creating a Microservice

When designing a microservice, consider the following aspects:

1. Service Boundaries

  • Define clear business capabilities for each microservice.
  • Ensure services do not overlap in responsibilities.
  • Follow the Single Responsibility Principle to avoid monolithic behavior.

2. Communication & API Design

  • Use lightweight protocols like RESTful APIs, gRPC, or GraphQL.
  • Consider event-driven communication with message queues (Kafka, RabbitMQ, etc.) for async processing.
  • Handle service discovery to locate and interact with other services.

3. Data Management

  • Adopt a database-per-service approach to ensure data independence.
  • Use event sourcing or CQRS (Command Query Responsibility Segregation) for better data integrity.
  • Ensure data consistency via event-driven architectures instead of direct database calls.

4. Deployment & Scaling

  • Containerize microservices using Docker and orchestrate with Kubernetes.
  • Implement auto-scaling and load balancing for performance optimization.
  • Ensure CI/CD pipelines for seamless deployment and updates.

5. Fault Tolerance & Resilience

  • Implement circuit breakers (e.g., Netflix Hystrix) to handle service failures gracefully.
  • Use retry mechanisms and fallback strategies.
  • Enable health checks and monitoring (e.g., Prometheus, Grafana).

6. Security

  • Secure APIs with OAuth 2.0, JWT, or API gateways.
  • Apply role-based access control (RBAC).
  • Encrypt sensitive data and enforce SSL/TLS communication.

7. Observability & Logging

  • Implement centralized logging using tools like ELK stack (Elasticsearch, Logstash, Kibana).
  • Use distributed tracing (e.g., Jaeger, Zipkin) to track requests across services.
  • Set up metrics and monitoring to detect anomalies.

8. Performance & Optimization

  • Optimize database queries and caching (Redis, Memcached).
  • Use asynchronous processing for long-running tasks.
  • Minimize network overhead by reducing inter-service calls.


Popular microservices patterns


Database Per Service Pattern

The Database per Service pattern is a design approach in microservices architecture where each microservice has its own dedicated database, accessible only through that service's API. The database is effectively part of the service's implementation and cannot be accessed directly by other services.

If a relational database is chosen, there are three ways to keep each service's data private:

  • Private tables per service: Each service owns a set of tables that must only be accessed by that service.
  • Schema per service: Each service has a database schema that is private to that service.
  • Database server per service: Each service has its own database server.

Here are the main benefits of using this pattern:

  • Loose Coupling: Services are less dependent on each other, making the system more modular.
  • Technology Flexibility: Teams can choose the best database technology, and an appropriate database size, for each microservice's specific requirements.

A design pattern always comes with trade-offs; here are some challenges this pattern does not solve:

  • Complexity: Managing multiple databases, including backup, recovery, and scaling, adds complexity to the system.
  • Cross-Service Queries: Queries over data spread across multiple databases are hard to implement. The API Gateway or Aggregator pattern can be used to tackle this issue.
  • Data Consistency: Maintaining consistency across different services’ databases requires careful design and often involves other patterns like Event sourcing or Saga pattern.

Event Sourcing Pattern

The Event Sourcing pattern captures state changes as a sequence of events stored in an event store, instead of directly saving the current state. The event store acts like a message broker, allowing services to subscribe to events via an API. When a service records an event, it is sent to all interested subscribers. To reconstruct the current state, all events in the event store are replayed in sequence. This replay can be optimized with snapshots, so that only the events after the latest snapshot need to be replayed rather than the entire history.

Here are the main benefits of event sourcing pattern:

  • Audit Trail: Provides a complete history of changes, which is useful for auditing, debugging, and understanding how the system evolved over time.
  • Scalability: By storing only events, write operations can be easily scaled. This allows the system to handle a high volume of writes across multiple consumers without performance concerns.
  • Extensibility: New functionality is easy to add by introducing new event types, since the business logic for processing events is separated from event storage.

It comes with these drawbacks:

  • Complexity: Managing event streams and reconstructing state can be more complex than a traditional approach, and there is a learning curve to master the practice.
  • Higher storage requirements: Event Sourcing usually demands more storage than traditional methods, as all events must be stored and retained for historical purposes.
  • Complex querying: Querying event data can be more challenging than with traditional databases because the current state must be reconstructed from events.

Traditional systems only store the current state of data (e.g., a user’s account balance). But what if you need to know how that state changed over time? Event sourcing records every change as an event, giving you a full history.

It is like keeping a journal: instead of overwriting data, every change is saved as an event in an event store. You can replay these events to see how the data evolved.

Best Practices:

  • Use tools like Apache Kafka for storing events.
  • Take periodic snapshots to speed up rebuilding the current state.
  • Ensure events are immutable and well-documented.
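The record-and-replay mechanics above can be sketched in a few lines. This is a minimal in-memory sketch with an illustrative bank-account aggregate; the `Event` and `Account` names and the event types are assumptions for the example, and a real system would persist events in a durable store such as Kafka.

```python
from dataclasses import dataclass

@dataclass(frozen=True)        # events are immutable once recorded
class Event:
    type: str
    amount: int

class Account:
    def __init__(self):
        self.events = []       # append-only event log
        self.balance = 0       # current state, derived from events

    def record(self, event: Event):
        self.events.append(event)
        self._apply(event)

    def _apply(self, event: Event):
        if event.type == "deposited":
            self.balance += event.amount
        elif event.type == "withdrawn":
            self.balance -= event.amount

    @classmethod
    def replay(cls, events):
        # Rebuild the current state by replaying the full history in order.
        account = cls()
        for e in events:
            account.record(e)
        return account

acct = Account()
acct.record(Event("deposited", 100))
acct.record(Event("withdrawn", 30))
rebuilt = Account.replay(acct.events)
print(rebuilt.balance)  # 70
```

A snapshot optimization would simply store `balance` at some event index and replay only the events recorded after it.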

Strangler Fig Pattern

Developers mostly use the strangler design pattern to incrementally transform a monolithic application into microservices. This is accomplished by replacing old functionality with a new service, which is how the pattern gets its name: once the new service is ready to run, the old one is “strangled” so the new one can take over.

To accomplish this transfer from monolith to microservices, developers use a facade interface that exposes individual services and functions. The targeted functions are broken free from the monolith so they can be “strangled” and replaced.

Many companies have old, monolithic systems that are hard to maintain. Rewriting everything at once is risky and time-consuming. The Strangler Fig Pattern lets you modernize your system gradually. You replace parts of the old system with microservices one piece at a time. Over time, the old system “shrinks” as microservices take over.

Best Practices:

  • Start with smaller, less risky features.
  • Use metrics to monitor performance and issues during migration.
  • Document changes so teams can track the migration process.
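The migration facade described above can be sketched as a simple router that sends already-migrated paths to the new service and everything else to the monolith. The path prefixes and backend names are purely illustrative.

```python
# Paths whose functionality has already been extracted into microservices.
# As migration progresses, more prefixes move into this set and the
# monolith handles less and less traffic.
MIGRATED = {"/billing", "/invoices"}

def facade(path: str) -> str:
    """Route a request to the new service if migrated, else to the monolith."""
    for prefix in MIGRATED:
        if path.startswith(prefix):
            return f"new-service:{path}"
    return f"legacy-monolith:{path}"

print(facade("/billing/42"))   # new-service:/billing/42
print(facade("/reports/q1"))   # legacy-monolith:/reports/q1
```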

To fully understand this specific pattern, it’s helpful to understand how monolith applications differ from microservices.

API Gateway Pattern

Centralizes external access to your microservices, simplifying communication and providing a single entry point for client requests.

The API Gateway acts as a single entry point. Instead of connecting to multiple services, the client sends one request to the gateway, and it routes the request to the right services.

The API Gateway pattern is a design approach in microservices architecture where a single entry point (the API gateway) handles all client requests. The gateway acts as an intermediary between clients and the microservices, routing requests to the appropriate service, aggregating responses, and often managing cross-cutting concerns like authentication, load balancing, logging, and rate limiting.


Here are the main advantages of using an API Gateway in a microservice architecture:

  • Simplified Client Interaction: Clients interact with a single, unified API instead of dealing directly with multiple microservices.
  • Centralized Management: Cross-cutting concerns are handled in one place, reducing duplication of code across services.
  • Improved Security: The API gateway can enforce security policies and access controls, protecting the underlying microservices.

Here are the main drawbacks:

  • Single Point of Failure: If the API gateway fails, the entire system could become inaccessible, so it must be highly available and resilient.
  • Performance Overhead: The gateway can introduce latency and become a bottleneck if not properly optimized when scaling.
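The routing and centralized cross-cutting-concern handling can be sketched as a prefix-based routing table. The service addresses and the token check are illustrative stand-ins, not a real gateway framework.

```python
# Map URL prefixes to backend services; in production a gateway such as
# Kong, NGINX, or a cloud API gateway would do this, plus load balancing.
ROUTES = {
    "/orders": "http://orders-service:8080",
    "/users": "http://users-service:8080",
}

def route(path, token):
    # Authentication handled once at the gateway, not in every service.
    if token != "valid-token":
        return 401, "Unauthorized"
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return 200, f"forwarded to {backend}{path}"
    return 404, "No route"

print(route("/orders/42", "valid-token"))
# (200, 'forwarded to http://orders-service:8080/orders/42')
print(route("/orders/42", None))
# (401, 'Unauthorized')
```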

Aggregator Pattern

The Aggregator Pattern is a design pattern used to consolidate data or responses from multiple sources into a single, unified result. An aggregator component or service manages the collection of data from different sources, coordinating the process of fetching, merging, and processing the data.


Here are the main benefits from the Aggregator pattern:

  • Simplified Client Interaction: Clients interact with one service or endpoint, reducing complexity and improving ease of use.
  • Reduced Network Calls: Aggregates data from multiple sources in one place, minimizing the number of calls or requests needed from clients and improving overall efficiency.
  • Centralized Data Processing: Handles data processing and transformation centrally, ensuring consistency and coherence across different data sources.

Here are the drawbacks of this pattern:

  • Added Complexity: Implementing the aggregation logic can be complex, especially when dealing with diverse data sources and formats.
  • Single Point of Failure: Since the aggregator serves as the central point for data collection, any issues or failures with the aggregator can impact the availability or functionality of the entire system.
  • Increased Latency: Aggregating data from multiple sources may introduce additional latency, particularly if the sources are distributed or if the aggregation involves complex processing.
  • Scalability Challenges: Scaling the aggregator to handle increasing amounts of data or requests can be challenging, requiring careful design to manage load and ensure responsiveness.

An aggregator design pattern is used to collect pieces of data from various microservices and return an aggregate for processing. Although similar to the backend-for-frontend (BFF) design pattern, an aggregator is more generic and not explicitly used for UI.

To complete tasks, the aggregator pattern receives a request and sends out requests to multiple services, based on the tasks it was assigned. Once every service has answered the requests, this design pattern combines the results and initiates a response to the original request.
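The fan-out-and-combine flow just described can be sketched with concurrent requests. The two fetch functions are stand-ins for real HTTP or gRPC calls to separate services; their names and payloads are assumptions for the example.

```python
import asyncio

# Stand-ins for calls to two separate microservices.
async def fetch_user(user_id):
    return {"id": user_id, "name": "Alice"}

async def fetch_orders(user_id):
    return [{"order_id": 1}, {"order_id": 2}]

async def aggregate_profile(user_id):
    # Fan out to both services concurrently, then merge the results
    # into a single response for the original request.
    user, orders = await asyncio.gather(fetch_user(user_id),
                                        fetch_orders(user_id))
    return {**user, "orders": orders}

profile = asyncio.run(aggregate_profile(7))
print(profile["name"], len(profile["orders"]))  # Alice 2
```

Issuing the calls concurrently rather than sequentially keeps the latency close to that of the slowest backend, mitigating the latency drawback noted above.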

Backends for Frontends (BFF) Pattern

Creates dedicated backend services for each frontend, optimizing performance and user experience tailored to each platform.


A BFF provides exclusivity: it can offer features based on the client type (web, mobile, partner integration).

The Backend for Frontend (BFF) pattern is a design approach where a dedicated backend service is created for each specific frontend or client application, such as a web app, mobile app, or desktop app. Each BFF is designed to respond to the specific needs of its corresponding frontend, handling data aggregation, transformation, and communication with underlying microservices or APIs. The BFF pattern is best used in situations where there are multiple front-end applications that have different requirements.


Here are the benefits of such a pattern:

  • Optimized Communication with Frontends: Frontends get precisely what they need, leading to faster load times and a better user experience.
  • Reduced Complexity for Frontends: The frontend is simplified as the BFF handles complex data aggregation, transformation, and business logic.
  • Independent Evolution: Each frontend and its corresponding BFF can evolve independently, allowing for more flexibility in development.

However, this pattern comes with these drawbacks:

  • Complexity: Maintaining separate BFFs for different frontends adds to the development and maintenance complexity.
  • Potential Duplication: Common functionality across BFFs might lead to code duplication if not managed properly.
  • Consistency: Ensuring consistent behavior across different BFFs can be challenging, especially in large systems.

Think of the user-facing application as two components: a client-side application living outside your perimeter and a server-side component (the BFF) inside your perimeter. BFF is a variant of the API Gateway pattern, but it adds a layer between the microservices and each client type separately. Instead of a single point of entry, it introduces multiple gateways. Because of that, you can have a tailored API that targets the needs of each client (mobile, web, desktop, voice assistant, etc.) and remove a lot of the bloat caused by keeping it all in one place.


Why BFF?

Decoupling the backend from the frontend shortens time to market, since frontend teams can have dedicated backend teams serving their unique needs, and releasing a new feature for one frontend does not affect the others.

APIs become much easier to maintain and modify, and we can even provide API versioning dedicated to a specific frontend. That is a big plus from a mobile app perspective, as many users do not update the app immediately.

The system is simplified from both the frontend and backend perspectives, since neither side has to compromise.

The BFF can also hide unnecessary or sensitive data before it is transferred to the frontend, so keys and tokens for third-party services can be stored and used from the BFF.

Finally, a BFF allows sending pre-formatted data to the frontend, which minimizes the logic needed there, and it opens up possibilities for performance improvements and mobile-specific optimizations.
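The per-client shaping and data hiding described above can be sketched with two BFF functions serving the same underlying record. The product record and field choices are illustrative assumptions about what each frontend needs.

```python
# One record from an underlying catalog service; internal_sku is sensitive
# data that should never reach a client.
PRODUCT = {"id": 1, "name": "Widget", "description": "A long description...",
           "price_cents": 1999, "internal_sku": "W-001"}

def web_bff(product):
    # The web client gets the full description for its product page.
    return {"name": product["name"],
            "description": product["description"],
            "price": product["price_cents"] / 100}

def mobile_bff(product):
    # The mobile client gets a trimmed payload to save bandwidth;
    # sensitive fields like internal_sku never leave the BFF layer.
    return {"name": product["name"],
            "price": product["price_cents"] / 100}

print(web_bff(PRODUCT))
print(mobile_bff(PRODUCT))
```

Each BFF can now evolve with its frontend: adding a field for mobile never risks breaking the web client.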

Fault tolerance in BFF

Fault tolerance is the most significant power of BFF: most problems can be handled in the BFF layer instead of on the client (frontend). When a microservice changes, the change can be absorbed by the BFFs without an emergency deployment of frontend clients. Updating a mobile application on both stores is not an easy operation: it requires review time that is hard to estimate, and the outcome can even be a surprise rejection. With a BFF solution, we can cover versioning and backward compatibility separately per client, and whole fault-tolerance strategies can be managed in the BFF layer. For example, we can introduce a separate BFF for each mobile client, so that one client causing problems (such as an accidental self-DDoS) does not affect the whole system: we can disconnect that BFF and investigate the problem in isolation. This is certainly a good strategy for BFFs dedicated to third-party services.


When to use BFF?

  • Consider BFF when you plan to extend the number of frontend types; it makes sense when a significant amount of aggregation is required on the server side.

  • If your application needs an optimized backend for a specific frontend interface, or your clients need to consume data that requires aggregation on the backend, BFF is a suitable option. You might reconsider if the cost of deploying additional BFF services is high, but even then, the separation of concerns a BFF brings makes it a fairly compelling proposition in most cases.

  • Further reading: https://medium.com/mobilepeople/backend-for-frontend-pattern-why-you-need-to-know-it-46f94ce420b0


Service Discovery Pattern

Enables microservices to dynamically discover and communicate with each other, simplifying service orchestration and enhancing system scalability.
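The core mechanic is a registry that service instances register with and clients query at call time. This is a minimal in-memory sketch; real systems use a dedicated registry such as Consul, etcd, or Eureka, and the service names and addresses here are illustrative.

```python
import random

registry = {}   # service name -> list of live instance addresses

def register(name, address):
    """A service instance announces itself on startup."""
    registry.setdefault(name, []).append(address)

def discover(name):
    """A client looks up a live instance at call time."""
    instances = registry.get(name)
    if not instances:
        raise LookupError(f"no instances of {name}")
    return random.choice(instances)   # naive client-side load balancing

register("orders", "10.0.0.5:8080")
register("orders", "10.0.0.6:8080")
print(discover("orders"))  # one of the two registered addresses
```

A production registry would also evict instances that fail health checks, so clients never discover a dead address.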

Circuit Breaker Pattern

Implements a fault-tolerant mechanism for microservices, preventing cascading failures by automatically detecting and isolating faulty services.

When a service fails or becomes slow, it can cause the entire system to break down. For example, if a payment service is down, all orders might fail. The Circuit Breaker Pattern helps by stopping calls to a failing service until it recovers.

A circuit breaker acts like a switch:

  • Closed State: Requests go through as usual.
  • Open State: Requests are blocked to protect the system.
  • Half-Open State: Limited requests are sent to check if the service is working again.

Best Practices:

  • Combine with retries for temporary failures.
  • Monitor metrics to tune thresholds.
  • Provide meaningful fallbacks so users aren’t left hanging.
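The three states above can be sketched as a small class. Thresholds and timeouts are illustrative; a production system would use a library such as resilience4j (Java) or a service mesh rather than hand-rolling this.

```python
import time

class CircuitBreaker:
    """Minimal sketch of the closed/open/half-open states described above."""

    def __init__(self, failure_threshold=3, reset_timeout=5.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, func, fallback):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"   # probe with a limited request
            else:
                return fallback()          # block the call, protect the system
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"        # trip the breaker
                self.opened_at = time.monotonic()
            return fallback()
        self.failures = 0                  # success closes the circuit again
        self.state = "closed"
        return result

breaker = CircuitBreaker(failure_threshold=2, reset_timeout=0.1)

def flaky():
    raise RuntimeError("service down")

for _ in range(3):
    breaker.call(flaky, fallback=lambda: "cached response")
print(breaker.state)  # open
```

The fallback gives callers a degraded but meaningful answer (for example, cached data) instead of a timeout.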

Retry Pattern

Enhances microservices' resilience by automatically retrying failed operations, increasing the chances of successful execution and minimizing transient issues.

  • The retry with backoff pattern improves application stability by transparently retrying operations that fail due to transient errors.

  • In distributed architectures, transient errors might be caused by service throttling, temporary loss of network connectivity, or temporary service unavailability. Automatically retrying operations that fail because of these transient errors improves the user experience and application resilience. However, frequent retries can overload network bandwidth and cause contention. Exponential backoff is a technique where operations are retried by increasing wait times for a specified number of retry attempts.

Applicability

Use the retry with backoff pattern when:

  • Your services frequently throttle requests to prevent overload, returning a 429 Too Many Requests error to the calling process.

  • The network is an unseen participant in distributed architectures, and temporary network issues result in failures.

  • The service being called is temporarily unavailable, causing failures. Frequent retries might cause service degradation unless you introduce a backoff timeout by using this pattern.

Issues and considerations

  • Idempotency: If multiple calls to the method have the same effect as a single call on the system state, the operation is considered idempotent. Operations should be idempotent when you use the retry with backoff pattern. Otherwise, partial updates might corrupt the system state.

  • Network bandwidth: Service degradation can occur if too many retries occupy network bandwidth, leading to slow response times.

  • Fail fast scenarios: For non-transient errors, if you can determine the cause of the failure, it is more efficient to fail fast by using the circuit breaker pattern.

  • Backoff rate: Introducing exponential backoff can have an impact on the service timeout, resulting in longer wait times for the end user.
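Retry with exponential backoff can be sketched as a small helper. The delays, attempt count, and `TransientError` type are illustrative; jitter is added so that many clients retrying at once do not synchronize their requests, and as noted above the wrapped operation must be idempotent.

```python
import random
import time

class TransientError(Exception):
    """Stands in for throttling or temporary-unavailability errors."""

def retry_with_backoff(func, max_attempts=4, base_delay=0.1):
    for attempt in range(max_attempts):
        try:
            return func()
        except TransientError:
            if attempt == max_attempts - 1:
                raise              # out of attempts: surface the failure
            # Exponential backoff (0.1s, 0.2s, 0.4s, ...) plus random jitter.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

calls = {"n": 0}

def sometimes_fails():
    calls["n"] += 1
    if calls["n"] < 3:             # the first two calls fail transiently
        raise TransientError("429 Too Many Requests")
    return "ok"

print(retry_with_backoff(sometimes_fails))  # ok
```

Non-transient errors should not be caught here at all; as the considerations above note, those are better handled by failing fast or by a circuit breaker.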


Sidecar Pattern

Attaches additional components to your microservices, providing modular functionality without altering the core service itself.

Saga Pattern

The Saga Pattern is used in distributed systems to manage long-running business transactions across multiple microservices or databases. It does this by breaking the transaction into a sequence of local transactions, each updating the database and triggering the next step via an event. If a transaction fails, the saga runs compensating transactions to undo the changes made by previous steps.


Sagas can be coordinated in two ways:

Choreography: Each service listens to events and triggers the next step in the saga. This is a decentralized approach where services communicate directly with each other.


Orchestration: A central orchestrator service directs the saga, telling each service when to perform its transaction and managing the flow of the entire process.


Here are the main benefits of the saga pattern:

  • Eventual data consistency: It enables an application to maintain data consistency across multiple services.
  • Improved Resilience: By breaking down transactions into smaller, independent steps with compensating actions, the Saga Pattern enhances the system’s ability to handle failures without losing data consistency.

It comes with its drawbacks:

  • Complexity: Implementing the Saga Pattern can add complexity, especially in managing compensating transactions and ensuring all steps are correctly coordinated.
  • Lack of automatic rollback: Unlike ACID transactions, sagas do not have automatic rollback, so developers must design compensating transactions to explicitly undo changes made earlier in the saga.
  • Lack of isolation: The absence of isolation (the “I” in ACID) in sagas increases the risk of data anomalies during concurrent saga execution.

Manages distributed transactions across multiple microservices, ensuring data consistency while maintaining the autonomy of your services.

A saga is a series of local transactions. In microservices applications, the saga pattern can help maintain data consistency during distributed transactions.

The saga pattern is an alternative to distributed (two-phase commit) transactions: it allows a business operation to span multiple services by providing rollback opportunities through compensating transactions.

When multiple services are involved in a transaction, things get tricky. Imagine placing an order — one service deducts money, another updates inventory, and another creates the order. What if one step fails? The Saga Pattern ensures that all steps work together or everything rolls back cleanly.

The Saga Pattern breaks a transaction into smaller steps. Each service handles its step and then triggers the next one. If something goes wrong, it can undo its step.

Types of Sagas:

  • Choreography: Each service knows what to do next.
  • Orchestration: A central controller decides what happens next.

Best Practices:

  • Ensure each step can handle retries without errors.
  • Use logs to track what happened in case of issues.
  • Test your sagas thoroughly to avoid data inconsistencies.
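An orchestration-style saga can be sketched as a list of (action, compensation) pairs run by a central coordinator. The order-placement steps and the failure condition are illustrative; real service calls would replace the local functions.

```python
# Each saga step pairs a local transaction with its compensating transaction.
def charge_payment(order):
    order["paid"] = True

def refund_payment(order):
    order["paid"] = False

def reserve_stock(order):
    if order["qty"] > 5:                   # simulate an out-of-stock failure
        raise RuntimeError("out of stock")
    order["reserved"] = True

def release_stock(order):
    order["reserved"] = False

def run_saga(order):
    steps = [(charge_payment, refund_payment),
             (reserve_stock, release_stock)]
    compensations = []
    for action, compensate in steps:
        try:
            action(order)
            compensations.append(compensate)
        except Exception:
            # A step failed: undo completed steps in reverse order.
            for comp in reversed(compensations):
                comp(order)
            return "rolled back"
    return "committed"

print(run_saga({"qty": 2}))   # committed
print(run_saga({"qty": 9}))   # rolled back (payment refunded)
```

Because there is no automatic rollback, each compensation must be written explicitly, and, per the best practices above, must itself be safe to retry.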

CQRS (Command Query Responsibility Segregation) Pattern

The CQRS pattern is a design approach where the responsibilities of reading data (queries) and writing data (commands) are separated into different models or services. The separation of concerns enables each model to be tailored to its specific function:

  • Command Model: Can be optimized for handling complex business logic and state changes.
  • Query Model: Can be optimized for efficient data retrieval and presentation, often using denormalized views or caches.
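The command/query split can be sketched with two small classes. The bank-account domain and the immediate projection are illustrative assumptions; real systems often propagate updates to the read side asynchronously via events, which is where the consistency challenges below come from.

```python
class QueryModel:
    """Read side: a denormalized view optimized for presentation."""

    def __init__(self):
        self.summaries = {}

    def project(self, account, balance):
        # Pre-render the data the UI needs, so reads are a plain lookup.
        self.summaries[account] = f"{account}: {balance}"

    def summary(self, account):
        return self.summaries[account]

class CommandModel:
    """Write side: enforces business rules on state changes."""

    def __init__(self, view):
        self.balances = {}
        self.view = view

    def deposit(self, account, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")   # business rule
        self.balances[account] = self.balances.get(account, 0) + amount
        # Synchronize the read model (here immediately; often via events).
        self.view.project(account, self.balances[account])

queries = QueryModel()
commands = CommandModel(queries)
commands.deposit("alice", 50)
commands.deposit("alice", 25)
print(queries.summary("alice"))  # alice: 75
```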


Here are the main benefits of the CQRS pattern:

  • Performance Optimization: Each model can be optimized for its specific operations, enhancing overall system performance.
  • Scalability: Read and write operations can be scaled independently, improving resource utilization.
  • Maintainability: By separating command and query responsibilities, the codebase becomes more organized and easier to understand and modify.

Here are the challenges with this pattern:

  • Complexity: The need to manage and synchronize separate models for commands and queries adds complexity to the system.
  • Data Consistency: Ensuring consistency between the command and query models, especially in distributed systems where data updates may not be immediately propagated, can be challenging.
  • Data Synchronization: Synchronizing the read and write models can be challenging, particularly with large volumes of data or complex transformations. Techniques such as event sourcing or message queues can assist in managing this complexity.

Separates the read and write operations in a microservice, improving performance, scalability, and maintainability.


Best Practices

  • Focus on the UI/UX and the data it needs.
  • Don't try to make everything generic from the beginning; otherwise the component may end up used across the organization, with many people wanting to contribute.
  • Prefer serving a particular feature first over a generic-usage strategy. This is the best approach to keep a clean API dedicated to one client.
  • Apply the rule of three before generalizing.
