Microservices Tradeoffs and Design Patterns

Setting aside the reasons why we should or should not jump into Microservices (covered in the previous post), here we talk about the tradeoffs of Microservices and the design patterns that were born to deal with them.

Building Microservices is not as easy as installing a few packages into your current system. Actually, you will install a lot of things :). The beauty of Microservices lies in the separation of services, which enables each module to be developed independently and keeps each module simple. But that separation is also the cause of new problems.

More I/O operations?

The first issue that we can easily recognize is the emergence of I/O calls between separated services. It looks exactly like integrating our system with 3rd-party services, except this time all those 3rd-party services are just our own internal ones. To get the API calls right, there will be effort spent documenting and synchronizing knowledge between the teams handling different services.

But here is the bigger problem: if every service has to keep a list of other services' addresses (to call their APIs), they become tightly coupled, meaning strongly dependent on each other, and that destroys the promised scalability of Microservices. This is when the Event-Driven style comes to the rescue.

Event Driven Design Pattern

Example tools: RabbitMQ, ActiveMQ, Apache Kafka, Apache Pulsar, and more.

The main idea of this pattern is that services do not need to know each other's addresses. Each service only needs to know an event pipe, or message broker, and entrusts it with distributing its messages and feeding back data from other services. There are no direct API calls between services. Each service only fires events to the pipe and listens for events arriving from the pipe.
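To make this concrete, below is a minimal sketch of the "fire an event to the pipe" side, assuming a Kafka broker at localhost:9092 and a hypothetical order-events topic (the same idea applies to RabbitMQ or Pulsar with their own client APIs):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventPublisher {
    public static void main(String[] args) {
        // Broker connection settings; localhost:9092 is an assumption for a local setup.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Fire an event to the pipe instead of calling another service's API directly.
            // The topic name and JSON payload are hypothetical.
            producer.send(new ProducerRecord<>("order-events", "order-42",
                    "{\"type\":\"OrderCreated\",\"orderId\":42,\"total\":99.5}"));
            producer.flush();
        }
    }
}
```

Any service interested in orders subscribes to the same topic and reacts to the events; the publisher never needs to know who is listening.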

Along with this design pattern, the mindset on how to store data needs to evolve too. We not only store the STATE of entities, but also the stream of EVENTs that constructs that STATE. This storage strategy is also very effective when dealing with concurrent modifications of the same entity, which can cause data inconsistency. There are two approaches to storing and consuming events, the Queue and the Log, which we will explore in later topics.

More Complex Query Mechanism?

Obviously there will be moments when we need to query data that requires cooperation between multiple services. In the Monolith style, where all the data of all services lives in the same database, writing an SQL query is simple. In the Microservices style, it isn't. Each service secures its own database, as the recommended practice goes. We suddenly can't JOIN tables, we lose the out-of-the-box rollback mechanism of the database's Transaction feature when something goes wrong while storing data, and we may see longer delays as each service waits for data from other services. These obstacles turn Event-Driven into a "must have" design for a Microservices system, since it is the foundation for the patterns that solve this querying issue, the most common being Event Sourcing, CQRS, and Saga.

Event Sourcing

The terms Event-Driven and Event Sourcing can be a bit confusing. Event-Driven is about the communication mechanism between services, while Event Sourcing is about the coding approach inside each service for retrieving the state of an entity: instead of fetching the entity from the database, we reconstruct it from an event stream. The event stream can be stored in many ways: in a database table, read from Event-Driven components such as Apache Kafka or RabbitMQ, or kept in a dedicated event stream database like EventStore. This method brings developers a new responsibility: they have to create and maintain the reconstruction algorithm for each type of entity.
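Here is a minimal, self-contained sketch of that reconstruction idea, with a hypothetical AccountEvent type standing in for whatever events your service actually stores:

```java
import java.util.List;

public class AccountProjection {
    // Hypothetical event type; a real service would have a richer event hierarchy.
    record AccountEvent(String type, long amountCents) {}

    // Reconstruct the current state by replaying the event stream in order.
    static long currentBalance(List<AccountEvent> stream) {
        long balance = 0;
        for (AccountEvent e : stream) {
            switch (e.type()) {
                case "Deposited" -> balance += e.amountCents();
                case "Withdrawn" -> balance -= e.amountCents();
                default -> { /* ignore event types this projection does not care about */ }
            }
        }
        return balance;
    }

    public static void main(String[] args) {
        List<AccountEvent> stream = List.of(
                new AccountEvent("Deposited", 10_000),
                new AccountEvent("Withdrawn", 2_500),
                new AccountEvent("Deposited", 500));
        System.out.println(currentBalance(stream)); // 8000 cents
    }
}
```

The "reconstruction algorithm" the developer must maintain is exactly this replay logic, one per entity type.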

As mentioned in the previous section, this strategy is helpful when dealing with concurrent data modification scenarios: collaboration features like those in Google Docs or Google Sheets, or simply two users hitting "Save" on the same form at nearly the same moment. But this reconstruction approach is not so friendly to the more complex queries that are natural in a traditional database like Oracle or PostgreSQL, the SELECT * WHERE ones. So, to cover this drawback, each service usually also maintains a traditional database that stores the states of entities and is used for querying. This combination forms a new pattern called CQRS (Command and Query Responsibility Segregation), where reads and writes of an entity happen on different databases.

CQRS (Command and Query Responsibility Segregation)

As mentioned above, this pattern separates read and update operations for a data store. A service can use the Event Sourcing technique to update an entity, or use an in-memory database such as H2 to quickly store updates on entities, while persisting the calculated states of entities back to, for example, a SQL database as quickly as possible. This pattern prevents data conflicts when many updates to a single entity arrive at the same time, while also keeping a flexible interface for querying data.
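Here is a minimal in-process sketch of the idea, with a hypothetical product-price entity; in a real system the event log and the read model would live in separate databases and the projection would be updated asynchronously:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CqrsSketch {
    record ProductPriceChanged(String productId, long priceCents) {}

    // Write side: commands only append events (stands in for the write store).
    static final List<ProductPriceChanged> eventLog = new ArrayList<>();
    // Read side: a denormalized view optimized for queries (stands in for the read store).
    static final Map<String, Long> latestPrices = new HashMap<>();

    static void handleChangePrice(String productId, long priceCents) {
        ProductPriceChanged event = new ProductPriceChanged(productId, priceCents);
        eventLog.add(event);  // 1. record the fact
        project(event);       // 2. update the read model (asynchronously in real systems)
    }

    static void project(ProductPriceChanged event) {
        latestPrices.put(event.productId(), event.priceCents());
    }

    // Queries never touch the write side.
    static Long queryPrice(String productId) {
        return latestPrices.get(productId);
    }

    public static void main(String[] args) {
        handleChangePrice("sku-1", 1999);
        handleChangePrice("sku-1", 1799);
        System.out.println(queryPrice("sku-1")); // 1799
    }
}
```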

This pattern is effective for scaling, since we can scale the read database and the write database independently, and it fits high-load scenarios: write requests can complete quicker because we reduce the calls to a database whose internal locking mechanisms can introduce delay. A quicker response means more room for other requests, especially in thread-based server technologies such as Servlet or Spring.

A drawback of this pattern is the coding complexity. The more components join the process, the more problems there are to handle. So it is not recommended where the domain or business logic is simple; simple features fit nicely with the traditional CRUD approach. Overusing anything is not good. I also want to remind you that if the whole system has no special load requirements or write-heavy features, switching to Microservices is not recommended either (the reason is here).

Saga

A saga is a long heroic story, and the story of Transactions inside Microservices is truly heroic and long. The Transaction is an important database feature that aims to maintain data consistency; it prevents partial failure when updating entities. With distributed services, we have distributed Transactions. Now the mission is to coordinate those separate Transactions to regain the attributes of a single Transaction, ACID (atomicity, consistency, isolation, durability), across distributed services. Put simply: Saga is a design pattern that aims to form the Transaction for Microservices.

The Saga pattern is about what the system must do if there is a failure inside a service. It should somehow reverse the previously successful operations to maintain data consistency, and the simplest way is to send out messages asking other services to roll back certain updates. To build a Saga, developers may have to anticipate many scenarios in which an operation can fail. Higher-level solutions for the rollback mechanism involve techniques like semantic locks or entity versioning, which we can discuss in other topics. The point here is that it also adds a lot of complexity to the source code. The recommendation is to divide services well to avoid writing too many Sagas. If some services are tightly coupled, we should think about merging them back into one Monolith service. Saga is less suitable for tightly coupled transactions.
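Below is a minimal orchestration-style sketch of the compensation idea: each step knows how to undo itself, and a failure rolls back the steps that already succeeded. The two steps (reserving stock, charging a card) are hypothetical; in a real Saga they would be messages to other services rather than local calls:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OrderSaga {
    // Each step knows how to execute and how to compensate (undo) itself.
    interface SagaStep {
        void execute() throws Exception;
        void compensate();
    }

    // Run steps in order; on failure, compensate the completed ones in reverse order.
    static void run(SagaStep... steps) {
        Deque<SagaStep> completed = new ArrayDeque<>();
        for (SagaStep step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (Exception failure) {
                completed.forEach(SagaStep::compensate);
                throw new RuntimeException("saga rolled back", failure);
            }
        }
    }

    public static void main(String[] args) {
        try {
            run(
                new SagaStep() { // hypothetical call to an inventory service
                    public void execute() { System.out.println("stock reserved"); }
                    public void compensate() { System.out.println("stock released"); }
                },
                new SagaStep() { // hypothetical payment call that fails
                    public void execute() throws Exception { throw new Exception("payment declined"); }
                    public void compensate() { System.out.println("charge refunded"); }
                }
            );
        } catch (RuntimeException e) {
            System.out.println("saga failed and was compensated: " + e.getCause().getMessage());
        }
    }
}
```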

More Deployment Effort?

Back in the Monolith realm, deployment means running a few command lines to build an API instance and a client-side application. When we go with Microservices, we obviously have more than one instance, and we need to deploy each of them, one by one.

To reduce this effort, we can use CI/CD tools such as Jenkins, or one of the cloud-based CI/CD services out there. We can also write our own tools; it won't be difficult. But there are still more issues than just running command lines.

Log Aggregation

Logging is a vital practice when building any kind of application; it provides a picture of how the system is doing and helps troubleshoot issues. Checking logs on separate services is not very convenient in Microservices, so it is recommended to stream all logs to one central place. There are many tools dedicated to this purpose nowadays, such as Graylog or Logstash. The most famous stack for collecting, parsing, and visualizing logs right now is ELK, the combination of Elasticsearch + Logstash + Kibana. The drawback of these logging technologies is that they require quite a lot of RAM and CPU, mostly to support searching the logs. For small projects, preparing a machine strong enough to run the ELK stack may not be very affordable. Logstash alone needs around 1-2 GB of RAM. Graylog requires Elasticsearch, so it needs around 8 GB of RAM and 4 CPU cores; the full ELK stack needs even more.

Health Check & Auto Restart

Besides logging, we also need a way to keep track of the availability of services. Each service may expose its own /healthcheck API that a tool periodically calls to check whether the service is alive. Or we can use proactive monitoring tools such as Monit or Supervisord to watch ports and processes and configure their behavior when errors occur, such as sending emails or notifications to a Slack channel.
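For illustration, a /healthcheck endpoint can be as small as the sketch below, which uses only the HTTP server built into the JDK; the port and the response body are arbitrary choices:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HealthCheckServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/healthcheck", exchange -> {
            // A richer implementation would also check the database, broker connections, etc.
            byte[] body = "{\"status\":\"UP\"}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // GET http://localhost:8080/healthcheck now returns {"status":"UP"}
    }
}
```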

Besides health checks, each service should be able to restart automatically when something takes it down. We can configure a process to start whenever the machine boots by adding scripts to /etc/init.d or /etc/systemd on most Linux servers. At the process level, we can use Docker to automatically bring a service back up right after it goes down. For the machine itself, if we use a physical machine we should enter the BIOS and enable auto-restart when power returns; with cloud machines, there is nothing to worry about.

These techniques are recommended not only for Microservices but for any Monolith system as well, to ensure availability.

Circuit Breaker

This one is for when bad things happen and we have no way to deal with them but to accept them. There are always such situations in life. For some reason, one or many services go down or become so slow due to network issues that users end up waiting a long time for a single button click. Most users are impatient and will likely retry the pending action, a lot, and you know the system can then get even worse. This is when a Circuit Breaker takes action. Its role is similar to an electrical circuit breaker: to prevent catastrophic cascading failure across the system. The circuit breaker pattern allows you to build a fault-tolerant and resilient system that can survive gracefully when key services are either unavailable or have high latency.

The Circuit Breaker is placed between the client and the actual servers hosting the services. A Circuit Breaker has two main states, Closed and Open, and the rules between them are:

  • In the Closed state, the Circuit Breaker simply forwards requests from clients to the services behind it.
  • Once the Circuit Breaker detects a failed request or high latency, it changes its state to Open.
  • In the Open state, the Circuit Breaker returns errors to client requests immediately; the user learns about the failure right away, which is better than letting them wait, and it also reduces the load on the system.
  • Periodically, the Circuit Breaker makes retry calls to the services behind it to check their availability. If they are healthy again, it switches back to Closed; if not, it remains Open.

Luckily, we may not have to implement this pattern ourselves. There are tools out there such as Hystrix, part of Netflix OSS, or Istio, the community option.
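If you are curious about the mechanics anyway, here is a minimal two-state sketch of the rules above (libraries like Hystrix add refinements such as a half-open state and sliding failure windows); the threshold and timing values are arbitrary:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class CircuitBreaker {
    private enum State { CLOSED, OPEN }

    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private Instant openedAt;
    private final int failureThreshold;
    private final Duration retryAfter;

    CircuitBreaker(int failureThreshold, Duration retryAfter) {
        this.failureThreshold = failureThreshold;
        this.retryAfter = retryAfter;
    }

    <T> T call(Supplier<T> remoteCall) {
        // Open state: fail fast until the retry window has passed, then let one trial call through.
        if (state == State.OPEN
                && Duration.between(openedAt, Instant.now()).compareTo(retryAfter) < 0) {
            throw new IllegalStateException("circuit open: failing fast");
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;
            state = State.CLOSED;        // the service behind looks healthy again
            return result;
        } catch (RuntimeException failure) {
            consecutiveFailures++;
            if (consecutiveFailures >= failureThreshold) {
                state = State.OPEN;      // stop hammering the failing service
                openedAt = Instant.now();
            }
            throw failure;
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(3, Duration.ofSeconds(30));
        for (int i = 0; i < 5; i++) {
            try {
                breaker.call(() -> { throw new RuntimeException("service unavailable"); });
            } catch (RuntimeException e) {
                System.out.println(e.getMessage()); // after 3 failures: "circuit open: failing fast"
            }
        }
    }
}
```

A caller wraps each remote call in breaker.call(...) and translates the fail-fast exception into an immediate error response for the user.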

Service Discovery

As mentioned in the Event-Driven section, services inside a Microservices system do not need to know each other's addresses when they use an event channel. But what if the team is not familiar with the event style and decides not to use it, or the services are simple enough to just expose REST APIs? Using Event-Driven is not a must, so in that case, how do we solve the addressing problem between services?

When the system needs to scale, instances of one or many services will be added, removed, or just moved around. To let every service know the addresses (IP, port) of the others, we need a man in the middle that keeps records of service addresses and keeps them up to date. This module is called Service Discovery and is usually used along with load balancing modules. We may discuss this more in other topics.
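Conceptually, the registry is just a name-to-addresses map that instances update as they come and go; here is a toy in-memory sketch (real registries such as etcd or Consul add heartbeats, leases, and replication):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class ServiceRegistry {
    // service name -> currently known instance addresses ("host:port")
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();

    // Called by an instance when it starts (real registries also expect heartbeats).
    public void register(String serviceName, String address) {
        instances.computeIfAbsent(serviceName, k -> new CopyOnWriteArrayList<>()).add(address);
    }

    // Called when an instance shuts down or fails its health check.
    public void deregister(String serviceName, String address) {
        List<String> known = instances.get(serviceName);
        if (known != null) {
            known.remove(address);
        }
    }

    // Callers look up addresses by name instead of hard-coding them.
    public List<String> lookup(String serviceName) {
        return instances.getOrDefault(serviceName, List.of());
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register("pricing-service", "10.0.0.5:8080");
        registry.register("pricing-service", "10.0.0.6:8080");
        System.out.println(registry.lookup("pricing-service")); // [10.0.0.5:8080, 10.0.0.6:8080]
    }
}
```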

We also don't need to create this component from scratch. There are tools out there such as etcd, Consul, and Apache ZooKeeper. Give them a try.

Ending

Above is an overview of what we need to know when moving to Microservices. Make sure you google all of these topics before really starting. Each pattern has its own pros, cons, and workarounds, which other topics will cover. Thanks for reading!!

Is Microservices good?

Yes and No.

Yes when we are facing the problems it solves, and No when we blindly follow the "trend".

Once, my boss read somewhere about how amazing Microservices are and instantly asked the development team to "do Microservices". He is purely a business man but always wants to apply the newest technology. Lucky me, but switching from one system design to another is also a challenge. It actually sounded cool to us too, so the boss and the developers quickly agreed. So let's do Microservices.

What is Microservices?

Microservices, plainly said, is a system design approach; I personally don't count it as a technology. A Microservices system itself is composed of multiple technologies, each solving a business problem or a problem emerging inside the Microservices architecture itself. The opposite approach is called the Monolith, one All-In-One Big Service; in short, what most systems today are, composed of a single API server and a database. Switching to Microservices, technically, means dividing the functions of the One Big Service into multiple small services running independently, wiring them together, and then choosing the fittest technologies for each small service. Each technology here can be a programming language, a framework, a piece of software, a third-party service, or a tool.

In its simplest form, we can think of a Microservices system as composed of multiple Monolith systems. Each Monolith contains its own server and database and exposes its own API gateway. The Monoliths communicate with each other by calling each other's APIs directly or by listening to shared event channels, depending on the use case.

Microservices is NOT a new skill set. Microservices are composed of multiple Monolith services, so to do Microservices, developers must be good at building Monoliths first.

What problems do Microservices solve and NOT solve?

There is a reason every boss wants to move to Microservices: they think it is good. But I think not everyone understands WHAT it is good for. Microservices is NOT simply a better design than other designs. It is an adaptation to overcome the problems that emerge when a system grows very large, in terms of both traffic and logic complexity. So if your system is not supposed to become the next Amazon or Netflix, the Monolith design is fine for you, since it is much simpler to set up and maintain. A few thousand users with a few hundred connections per second is within the capability of most technologies today, such as Spring, Node, Ruby on Rails, or PHP. But the threshold is hard to estimate because each system has different features, and the best way to find your maximum capability is to do a stress test: basically, send as many requests as possible and then analyze the response times. Once you know your system's capability, you have a reference number for deciding when to move to Microservices. Microservices is a journey; only carry on when you are well prepared.
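As a rough illustration of that kind of test, the sketch below sends a batch of requests to a hypothetical endpoint of your own system and reports the average response time; a real stress test would run many concurrent clients and track percentiles, not just the average:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; only point this at a system you own.
        URI target = URI.create("http://localhost:8080/api/products");
        int requests = 500;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(target).GET().build();

        long totalMillis = 0;
        for (int i = 0; i < requests; i++) {
            long start = System.nanoTime();
            client.send(request, HttpResponse.BodyHandlers.discarding());
            totalMillis += (System.nanoTime() - start) / 1_000_000;
        }
        System.out.println("average response time: " + (totalMillis / requests) + " ms");
    }
}
```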

Microservices does NOT magically increase the system's load threshold unless the services are divided and designed appropriately. Remember that I/O takes up the main part of the delay between request and response. Normally, in the Monolith design, all services live in the same memory space, which is the fastest way for services to cooperate with each other. But if we blindly deploy services to multiple different places just to make it look like Microservices, there will be more I/O time, since each service has to send more requests to the services it depends on, and the performance of the system will drop significantly. This may be the most common mistake when creating a Microservices system. Microservices is NOT about fanning out all services to multiple servers. We must measure and identify the bottleneck in the system before deciding to move some related services to an independent server. It is also NOT simply deploying the current service's source code to another server. The new server should bring some benefit, such as greater processing power that accelerates the service, or the move should come with redesigning the service around technologies that offer benefits the service needs.

A good example of redesigning a service is to separate READ and WRITE into two services for the same domain object (the same table), the purpose being to support a large amount of concurrent reading and writing with low latency. Assume we have a Monolith system, but after a period of growth we have a huge data volume and a complex schema on a SQL database, so that every time a heavy query is issued, the whole system freezes for a few seconds. This is bad and we want to improve it. At that moment, we may arrive at this solution: we divide the service along the READ and WRITE axis. The READ service may use a NoSQL database as persistent storage with fast read speed to reduce users' waiting time. The WRITE service may use an in-memory database such as H2 to process data updates as fast as possible, then gradually synchronize the in-memory data to the persistent storage of the READ service. The two services should run on different machines to maximize resource usage. This is a true Microservices story. If we simply deploy another identical service on another server to handle more traffic, routing traffic by IP or by zone, that is Load Balancing.
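Here is a tiny sketch of the write-behind idea in that story: the write path acknowledges updates against an in-memory store, and a background job periodically flushes the accumulated state toward the read side. The map stands in for H2, and the print statement stands in for a bulk write to the read service's database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WriteBehindSync {
    // Stands in for the WRITE service's in-memory store (H2 in the example above).
    private final Map<String, String> inMemory = new ConcurrentHashMap<>();

    public void write(String key, String value) {
        inMemory.put(key, value); // fast path: acknowledge the update immediately
    }

    // Periodically pushes the accumulated state toward the READ service's storage.
    public void startSync(ScheduledExecutorService scheduler) {
        scheduler.scheduleAtFixedRate(
                () -> inMemory.forEach(this::persistToReadStore), 1, 1, TimeUnit.SECONDS);
    }

    private void persistToReadStore(String key, String value) {
        // Placeholder for a bulk insert/update against the read database.
        System.out.println("synced " + key + " -> " + value);
    }

    public static void main(String[] args) throws InterruptedException {
        WriteBehindSync sync = new WriteBehindSync();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        sync.write("employee-1", "updated profile");
        sync.startSync(scheduler);
        Thread.sleep(2_500); // let the sync job run a couple of times
        scheduler.shutdown();
    }
}
```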

Microservices is NOT about reducing development cost. In fact, it increases it. First, we need more machines to run the independent services, as well as more machines to run the monitoring tools. Microservices is an architectural design approach; it is a view of the whole system, NOT of how each service is coded. It does NOT magically reduce bugs. You may read here for more on where bugs come from. But when services are divided well, the boundaries between services are strengthened, which can help developers avoid using the wrong components and avoid creating too many cross-cutting components full of hidden logic. Microservices creates a real need for DevOps positions, people responsible for deploying multiple services as quickly as possible to keep downtime between deployments low. They will have to build a CI/CD system to automate the deployment process, estimate system load, and create or install monitoring tools to keep track of how the services are doing. When a bug happens inside a Microservices system, it is more complicated to fix than in a Monolith, since there is now more than one place to look for the true source of the bug. Developers also have to set up an identical system on their local machines for development and testing; a system of many services requires stronger developer machines, and a system with too many services can be practically impossible to deploy on a single machine, so we may need mocking techniques to create fake API gateways on behalf of services. Writing automated tests gets harder too. Many behind-the-scenes chores like these will slow developers down when switching to Microservices. More work, more jobs, more salary.

Microservices is NOT a license to freely apply the latest technology. I bet your team does not want to work in a tech soup. Agreed, Microservices gives us the ability to mix multiple technologies and make use of their advantages. But remember that it requires us to understand those advantages before applying them, or your system will gain more complexity without any significant benefit, and the crying comes soon after. Microservices is NOT only about technology; it is also about people. It depends on how your team is organized, what their skills are, and what they are good at, because learning new things takes time, and if you are in a rush, go with the tools you are familiar with. For example, say we are about to build a small service to handle employee documents within one month, and we have only a thousand employees. Our developers are experts in Java, but Go is the new language and it is trending. You may have heard somewhere that "Go is faster", but here is the point: your developers will build that new service faster with Java than with Go, and one thousand users is not a limit that forces a switch from Java to Go.

Microservices is NOT about creating boundaries between teams. It creates boundaries only between the services your teams are building, technical boundaries only. The more developers know about other services, the more chances they have to catch problems early, and the lower the communication cost between teams. Don't use the architectural design as a political tool inside an organization. One developer can work on multiple services, depending on their ability, and such people usually act as an important bridge between services. I know some managers want to divide teams to rule more easily, but I feel it is not a good way to build an organization: people will come to work with doubts and envy because, more or less, all services are necessary at some point, but at any given moment some are more important than others. Teams without hard boundaries also enable cross-checking, which can push teams forward and works against the "job security" mindset. No sharing and no checking between employees will gradually lead a few of them to think they are irreplaceable, which is toxic thinking for an organization.

So when to go Microservices?

Microservices does not reduce costs, does not improve performance, and does not make things "better" by itself, so why is it trending? Because it comes from big tech companies, and people tend to believe that whatever comes from the big boys is always "better". We easily copy blindly without diving deep to understand why they do it. The big tech companies hit the limits of their technology: a single Monolith could not help them anymore, so they had to use multiple Monoliths to solve their problems, and the result is a system they named Microservices. Technology changes every day and who knows what will come in the next few years; we have seen many frameworks, languages, and platforms arrive as "better" options and then die. So the key to deciding whether to move to Microservices is knowing the limits of your current system by testing its load well.

Another reason we might need Microservices is to implement many projects at the same time. For example, if we need to build a Pricing Engine module at the same time as an ERP module for managing employees, we might assign them to two teams, since the business logic of the modules does not depend on each other. Each team can develop its own service on a separate server, so the deployments of the services are independent too. If the two modules were built into one Monolith service, an issue in one module could block the whole deployment process in order to prevent risk on the production environment. So the key point when dividing services among teams is the dependency between services: they should be loosely coupled, meaning each service can act as a separate product without knowing about, or needing the existence of, the other services.

When each service is truly independent, it can be reused too. For example, your company has multiple projects but the same employees use all of them, so to avoid duplicating features like authentication, employee management, or full-text search, we can carve them out into separate services that can be reused across projects.

One scenario where you may find your system starting to look like Microservices is when rewriting a legacy system with up-to-date technology. Rewriting the whole system is time-consuming, so we usually rewrite it module by module. Each rewritten module can be deployed on a separate server, and along the way of rewriting the legacy system, you are effectively doing Microservices.