Essentials of Microservices

LiftOff LLC
Feb 23, 2021

Microservices have gained immense popularity in Software Development in the last few years as they provide many benefits for Agile and DevOps teams. 🤗
Netflix, eBay, Amazon, etc. have all evolved from a monolithic to a microservices architecture. 😮

Unlike microservices, a monolithic application is built as a single, autonomous unit. This makes changes to the application slow, since even a small change affects the entire system: a modification to a small section of code might require building and deploying an entirely new version of the software. Scaling a specific function of the application also means scaling the entire application.

Microservices solve these challenges of monolithic systems by being as modular as possible. In the simplest form, they help build an application as a suite of small services, each running in its own process and independently deployable. These services may be written in different programming languages and may use different data storage techniques. While this results in systems that are scalable and flexible, it also demands a significant shift in how teams design, build, and operate software.

Of course, no technology is a silver bullet. While microservices come with several benefits, they also bring several issues that need to be addressed, issues that are far less troublesome in a monolith 🤨. To list a few:

  • On what basis do you split/decompose an application into different services?
  • How does communication happen between them?
  • How is data consistency maintained?
  • How are Authentication & Authorization implemented across the application?
  • How are the services deployed & monitored to reduce downtime?

In this blog, I'll briefly talk about some of them. Let's get started!

1. Decomposing an Application into Services

Identifying an application’s services is an essential step in breaking it into its components. There’s no mechanical process to follow, but there are various decomposition strategies you can use. Each one tackles the problem from a different perspective and uses its own terminology. No matter which strategy you choose, the end objective should always be to ensure that:

  • Services are loosely coupled
  • Teams that work on one or more services stay autonomous with minimal collaboration with other teams
  • Each Service is small enough to be maintained/developed by a team
  • Services must conform to the Common Closure Principle — things that change together should be packaged together

Decompose by Business capability

It involves a two-step process:

  • Identify business capabilities: this is done by understanding the organization’s purpose, structure & business processes
  • Define services: each identified business capability, or a group of them, is mapped to a service

A sample mapping of the capabilities of a food-delivery business to services could look like this:
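(A purely illustrative sketch; the capability names and service boundaries below are my own assumptions, not a prescription.)

```python
# Illustrative only: a hypothetical mapping of business capabilities
# to candidate services for a food-delivery application.
CAPABILITY_TO_SERVICE = {
    "Order taking & tracking":          "Order Service",
    "Restaurant & menu management":     "Restaurant Service",
    "Courier & delivery management":    "Delivery Service",
    "Customer & account management":    "Customer Service",
    "Payments & invoicing":             "Payment Service",
    "Notifications (SMS/email/push)":   "Notification Service",
}

if __name__ == "__main__":
    for capability, service in CAPABILITY_TO_SERVICE.items():
        print(f"{capability:35s} -> {service}")
```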

A key benefit of organizing services around capabilities, which are stable, is that the resulting architecture will also be relatively stable. The individual components may evolve as aspects of the business change, but the overall structure of the architecture remains stable.

Decompose by Sub-domain

Decomposing an application by business capabilities might seem easy, but you will come across a few classes that are common to multiple services and therefore not easy to split. In such cases, define services corresponding to Domain-Driven Design (DDD) subdomains. DDD refers to the application’s problem space (the business) as the domain. A domain consists of multiple subdomains, and each subdomain corresponds to a different part of the business.

Broadly, it comprises the following steps: analyze the business domain, identify its subdomains, define a bounded context for each subdomain, and map each bounded context to a service.
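To make the “shared class” problem concrete, here is a small sketch of my own (hypothetical fields, not from any real codebase) showing how two bounded contexts each keep their own model of a customer instead of sharing one class:

```python
from dataclasses import dataclass

# In the Ordering subdomain, a "customer" is mostly a delivery target
# and a credit limit ...
@dataclass
class OrderingCustomer:
    customer_id: str
    delivery_address: str
    credit_limit: float

# ... while in the Marketing subdomain, the same real-world person is
# modelled by contact details and segments. Each bounded context owns
# its own definition, so the two can evolve and be deployed independently.
@dataclass
class MarketingCustomer:
    customer_id: str
    email: str
    segments: list[str]

if __name__ == "__main__":
    print(OrderingCustomer("c-42", "221B Baker Street", 500.0))
    print(MarketingCustomer("c-42", "holmes@example.com", ["frequent-buyer"]))
```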

2. Communication between Services

Communication between modules in a monolithic application running in a single process is done simply by invoking language-level method or function calls. These calls can be strongly coupled if you create objects directly in code (new ClassName()) or decoupled if you use Dependency Injection. Either way, the objects run within the same process.
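As a quick illustration, sketched in Python rather than a language with a `new` keyword, the two in-process styles look roughly like this (all class names are made up):

```python
class SmtpEmailSender:
    def send(self, to: str, body: str) -> None:
        print(f"SMTP -> {to}: {body}")

# Tightly coupled: the class builds its own dependency, the equivalent
# of `new SmtpEmailSender()` in Java/C#.
class SignupServiceTight:
    def __init__(self) -> None:
        self.sender = SmtpEmailSender()

    def register(self, email: str) -> None:
        self.sender.send(email, "Welcome!")

# Decoupled via dependency injection: any object with a compatible
# `send` method can be passed in, but it still runs in the same process.
class SignupServiceDI:
    def __init__(self, sender) -> None:
        self.sender = sender

    def register(self, email: str) -> None:
        self.sender.send(email, "Welcome!")

if __name__ == "__main__":
    SignupServiceTight().register("a@example.com")
    SignupServiceDI(SmtpEmailSender()).register("b@example.com")
```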

A microservices-based application, however, is a distributed system running across multiple processes, often on multiple machines, and naively replicating those in-process calls with RPC (Remote Procedure Call) for all communication can be very inefficient. Each service instance is typically a process, so services must interact using an inter-process communication protocol such as HTTP, AMQP, or a binary protocol like TCP, depending on the nature of each service. There are many ways clients & services can communicate, each suitable for a different scenario, and they can be classified along two axes.

The first axis defines whether the protocol is synchronous or asynchronous:

  • Synchronous: HTTP is a synchronous protocol. The client sends a request and waits for a response from the service; regardless of whether the client code itself runs synchronously or asynchronously, it can only continue its task once it receives the server’s response. REST over HTTP is the most popular architectural style here.
  • Asynchronous: protocols like AMQP use message-based communication. This is the most popular choice for making integrations loosely coupled and asynchronous, and it also makes the application more resilient and scalable. A message broker manages and processes the messages sent by producers, can persist them if required, and guarantees delivery to the consumer. There are two options: Point-to-Point and Publisher-Subscriber. Apache Kafka, RabbitMQ and ZeroMQ are a few examples (a small sketch of both styles follows this list).
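Here is the promised sketch of the two styles, using the `requests` and `pika` client libraries; the service URL, broker host and queue name are hypothetical:

```python
import json

import pika      # RabbitMQ client (pip install pika)
import requests  # HTTP client (pip install requests)

ORDER = {"order_id": "o-123", "amount": 49.9}

# --- Synchronous: the caller blocks until the Order Service responds. ---
def place_order_sync() -> dict:
    resp = requests.post("http://order-service:8080/orders", json=ORDER, timeout=5)
    resp.raise_for_status()
    return resp.json()   # we only continue once the response has arrived

# --- Asynchronous: publish a message to a broker and move on. -----------
def place_order_async() -> None:
    connection = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
    channel = connection.channel()
    channel.queue_declare(queue="orders", durable=True)
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=json.dumps(ORDER),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()   # the Order Service consumes the message whenever it's ready
```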

The second axis defines whether the communication has a single receiver or multiple receivers.

When to use what? Here are my suggestions:

  • Use REST for communication between the browser & your service, or to expose a public API
  • For communication between internal services, try to model your processes asynchronously using messages (an event/message-driven mechanism); where that isn’t possible, choose gRPC
  • When dealing with high volumes of messages over HTTP, or if you find latency or network saturation becoming a bottleneck, consider adopting a binary RPC protocol such as gRPC

Remember: the more synchronous dependencies you add between microservices, such as query requests, the worse the overall response time gets for the client apps! 😑

3. Data Management

There are several data management patterns in microservices, each solving a specific issue. Let's discuss the popular ones below.

The Database-per-Service pattern

The core characteristic of the microservices architecture is the loose coupling of services. To achieve that, each service must have its own private data store. For example, in an online store application, the Order Service and the Customer Service each store data in their own databases, so changes to one database don’t impact other microservices. The Customer Service’s database can’t be accessed directly by other microservices; each service’s persistent data can only be reached via its API. But what if a transaction spans multiple services 🤔? This leads us to the next pattern.
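A minimal sketch of that rule, assuming a hypothetical Customer Service API and using SQLite as a stand-in for the service’s private database:

```python
import sqlite3

import requests

class OrderService:
    """Owns its own data store; knows nothing about the customer database."""

    def __init__(self, customer_api_url: str = "http://customer-service:8080"):
        self.db = sqlite3.connect("orders.db")   # private to this service
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS orders (id TEXT, customer_id TEXT, total REAL)"
        )
        self.customer_api_url = customer_api_url

    def create_order(self, order_id: str, customer_id: str, total: float) -> None:
        # Customer data is only reachable through the Customer Service's API,
        # never by querying its database directly.
        resp = requests.get(f"{self.customer_api_url}/customers/{customer_id}", timeout=5)
        resp.raise_for_status()

        self.db.execute(
            "INSERT INTO orders VALUES (?, ?, ?)", (order_id, customer_id, total)
        )
        self.db.commit()
```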

The Saga pattern

For transactions that span multiple services, instead of using traditional distributed transactions (XA/2PC-based), you use a sequence of local transactions (aka a Saga), where each local transaction updates its own database and then triggers the next transaction through messaging.

When a customer places an order in an eCommerce store, two services are involved: the Order Service and the Customer Service. The Order Service first records the order in a pending state, and the saga then asks the Customer Service, via messaging, to confirm that the order can be fulfilled. Once the Order Service receives that reply, the saga either approves or rejects the order.

The final status of the order is then presented to the customer: either the order will be delivered and the buyer proceeds to the payment step, or, if the order cannot be fulfilled, the store sends an apology message or shows an out-of-stock indicator.
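A highly simplified, in-memory sketch of that flow (an orchestrated saga with one compensating step; in a real system each step would be triggered through a message broker rather than by direct method calls, and all names here are illustrative):

```python
class OrderService:
    def __init__(self):
        self.orders = {}

    def create_pending(self, order_id: str) -> None:
        self.orders[order_id] = "PENDING"           # local transaction #1

    def approve(self, order_id: str) -> None:
        self.orders[order_id] = "APPROVED"

    def reject(self, order_id: str) -> None:        # compensating action
        self.orders[order_id] = "REJECTED"


class CustomerService:
    def __init__(self, credit: float):
        self.credit = credit

    def reserve_credit(self, amount: float) -> bool:  # local transaction #2
        if self.credit >= amount:
            self.credit -= amount
            return True
        return False


def order_saga(orders: OrderService, customers: CustomerService,
               order_id: str, amount: float) -> str:
    """Each step is a local transaction; a failure triggers compensation
    instead of a distributed XA/2PC transaction."""
    orders.create_pending(order_id)
    if customers.reserve_credit(amount):
        orders.approve(order_id)
    else:
        orders.reject(order_id)
    return orders.orders[order_id]


if __name__ == "__main__":
    print(order_saga(OrderService(), CustomerService(credit=100.0), "o-1", 80.0))  # APPROVED
    print(order_saga(OrderService(), CustomerService(credit=10.0), "o-2", 80.0))   # REJECTED
```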

The CQRS pattern

To retrieve data that is scattered across multiple services, you can’t use a traditional distributed query mechanism 😤. That's where CQRS comes into the picture. An alternative is the API Composition pattern, which implements a query by invoking the services that own the data and combining the results; but for larger applications, the complexity of using this pattern quickly increases 🙄
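For contrast, an API Composition query is essentially “call the owning services and merge the results”. A rough sketch with hypothetical endpoints:

```python
import requests

def get_order_details(order_id: str) -> dict:
    """API Composition: the composer invokes each owning service
    and joins the results in memory."""
    order = requests.get(
        f"http://order-service:8080/orders/{order_id}", timeout=5
    ).json()
    customer = requests.get(
        f"http://customer-service:8080/customers/{order['customer_id']}", timeout=5
    ).json()
    return {**order, "customer": customer}
```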

CQRS suggests splitting the application into two parts: the command side and the query side. The command side handles Create, Update, and Delete requests. The query side answers queries using materialized views, which are kept up to date by subscribing to the stream of events; the Event Sourcing pattern is generally used alongside CQRS to produce an event for every data change. 🥱
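A toy, in-memory sketch of the idea: the command side emits events, and the query side builds a materialized view by consuming them. Everything here is illustrative, not a production design:

```python
from collections import defaultdict

EVENT_LOG = []                       # stand-in for an event store / broker

# --- Command side: handles writes and emits events ----------------------
def place_order(order_id: str, customer_id: str, total: float) -> None:
    EVENT_LOG.append({"type": "OrderPlaced", "order_id": order_id,
                      "customer_id": customer_id, "total": total})

# --- Query side: a materialized view built from the event stream --------
ORDERS_PER_CUSTOMER = defaultdict(list)   # the "read model"

def project(event: dict) -> None:
    if event["type"] == "OrderPlaced":
        ORDERS_PER_CUSTOMER[event["customer_id"]].append(event["order_id"])

def orders_for(customer_id: str) -> list[str]:
    return ORDERS_PER_CUSTOMER[customer_id]

if __name__ == "__main__":
    place_order("o-1", "c-42", 25.0)
    place_order("o-2", "c-42", 12.5)
    for e in EVENT_LOG:              # in production, a subscriber would do this
        project(e)
    print(orders_for("c-42"))        # ['o-1', 'o-2']
```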

Phew! That was long, but I'm glad you made it here 😇. There’s a lot more to discuss 😅, but don’t worry, I won’t go beyond this; I’ll leave it for the next part 😁.

If you enjoyed this article, share it with your friends and colleagues!



LiftOff LLC

We are a business accelerator working with startups and entrepreneurs to build products and launch companies.