How to compete with generative AI as a software engineer?

Before this decade's AI boom, software engineering was mostly about writing code that realizes requirements. Software engineers, to some extent, act as translators between human language and computer language. Today this translation can be accomplished by many generative AI products in seconds, and from my observation, the generated code often follows better patterns than code written by most developers. It is understandable that companies have begun laying off employees whose work overlaps with what existing AI can do. It is just cost optimization – a vital part of every business – and perhaps also one of the coldest truths of life.

What is generative AI good and not good at?

Recall the flow that each software engineer goes through daily:

Receive requirements -> Review current state of source code -> Define a target state of source code -> Retrieve information from documents of related tools, libraries and solutions -> Pick solutions -> Actually write code -> Aligning new code to existing code -> Deploy -> Testing -> Measuring results -> Read error messages -> Debugging.

Some steps of this flow are done better by generative AI, and some are done better by humans:

Each step, what it involves, and who wins (and why):

  • Receive requirements – capture goals, constraints, acceptance criteria, performance, security needs, and stakeholders' expectations. Winner: Human. Humans are better at eliciting ambiguous needs, negotiating trade-offs, and asking the right follow-ups with stakeholders. AI can help by summarizing long requirement documents and suggesting missing or inconsistent points.
  • Review current state of source code – read the codebase, architecture, tests, docs, build scripts, dependencies, and CI config. Winner: Human + AI. AI can quickly index and summarize files, find patterns and risky hotspots, and generate dependency graphs. But humans provide domain knowledge, historical context, and recognition of subtle intent (business logic, quirks, trade-offs).
  • Define a target state of source code – design the desired architecture, interfaces, data flows, APIs, and acceptance criteria for the new state. Winner: Human + AI. AI can propose multiple concrete design options and highlight trade-offs. Humans must pick the option that fits non-technical constraints (policy, team skill, product strategy).
  • Retrieve information from documents of related tools, libraries and solutions – find API docs, migration guides, best practices, configuration notes. Winner: AI. AI can extract key steps, call signatures, breaking changes, and concise examples from long docs much faster than manual reading. Humans validate and interpret edge cases.
  • Pick solutions – select libraries, patterns, and implementation approaches considering performance, security, licensing, and team skills. Winner: Human. Human decision-makers must weigh organizational constraints, long-term maintenance, licensing, and political factors.
  • Actually write code – implement features, refactor, add tests, update docs. Winner: AI. AI excels at generating boilerplate, test stubs, consistent code patterns, and translations across languages.
  • Aligning new code to existing code – ensure style, APIs, error handling, logging, and patterns match the codebase; maintain backward compatibility. Winner: Human + AI. AI can automatically reformat, rename for consistency, and propose refactors to match patterns; humans confirm that changes don't break conventions tied to tests or runtime behaviors.
  • Deploy – push to staging/production, run migration scripts, coordinate releases, prepare rollback plans. Winner: Human. Humans must coordinate cross-team tasks, business windows, and incident response. AI/automation is excellent at packaging, CI/CD scripts, and repeatable deployment steps.
  • Testing – run the application locally and manually verify that new changes behave as expected. Winner: Human. Manual testing relies heavily on intuition, product knowledge, and human perception (e.g., UX feel, layout issues, unexpected delays, weird state transitions).
  • Measuring results – monitor metrics, logs, user feedback, and testing results, and define success signals. Winner: Human + AI. AI can detect anomalies, summarize metrics, and surface correlations. Humans decide what metrics matter, interpret business impact, and choose next actions.
  • Read error messages – analyze stack traces, logs, exceptions, and failure contexts. Winner: Human + AI. AI quickly maps errors to likely root causes and suggests reproduction steps. Humans provide context (recent changes, infra issues) and confirm fixes.
  • Debugging – reproduce issues, step through code, identify the root cause, fix and validate. Winner: Human. AI speeds up discovery (identifying suspicious diffs, suggesting breakpoints, generating reproducer scripts), but complex debugging often needs human insight into domain rules, race conditions, and stateful behaviors.

How to compete with generative AI and secure a career as a software engineer?

Similar to the Industrial Revolution and the Digital Revolution, where labor was replaced by machines, some jobs disappeared but new jobs were created. To some extent, AI is just another machine – a huge one – so essentially we are still in the era of the Machine Revolution.

The answer is that we need to work where this huge machine cannot. So far, at the moment of this post, here is what we can do to compete with AI in software development:

Transition to Solution Architect

As AI becomes strong at writing code, humans can shift upward into architectural thinking. A Solution Architect focuses on shaping systems, not just lines of code. This involves interpreting ambiguous requirements, negotiating constraints across teams, balancing trade-offs between cost, performance, security, and future growth. AI can propose patterns, but only a human understands organizational politics, legacy constraints, domain history, and long-term impact. By moving into architecture, you operate at a layer where human judgment, experience, and foresight remain irreplaceable.

Become Reviewer / Validator

AI can produce solutions quickly, but it still needs someone to verify correctness, safety, and alignment with real-world constraints. A human reviewer checks assumptions, identifies risks, ensures compliance with business rules, and validates that AI-generated code or plans actually make sense in context. Humans excel at spotting hidden inconsistencies, ethical issues, and practical pitfalls that AI may overlook. Becoming a Validator means owning the final approval — the role of the responsible adult in the loop.

Become Orchestrator

Future engineers will spend less time typing code and more time coordinating AI agents, tools, workflows, and automation. An Orchestrator knows how to decompose problems, feed the right information to the right AI tool, evaluate outputs, and blend them into a coherent product. This role requires systems thinking, communication, and the ability to see the entire workflow end-to-end. AI is powerful but narrow; an Orchestrator provides the glue, strategy, and oversight that turns multiple AI capabilities into a real solution.

Study Broader knowledge

AI is good at depth — consuming a specific library or framework instantly — but humans win by having breadth. Understanding multiple domains (networking, security, product design, compliance, UX, devops, data, hardware) allows you to make decisions AI cannot contextualize. Breadth lets you spot cross-domain interactions, anticipate downstream consequences, and design better holistic systems. The wider your knowledge, the more you can see risks, opportunities, and real-world constraints that AI cannot infer from text alone.

Expertise in task description

In an AI-driven era, the most valuable skill is the ability to turn a messy idea into a clear, precise, constraints-rich task. This includes defining scope, edge cases, success criteria, and architectural boundaries. AI is only as good as the instructions it receives — so those who excel at describing tasks will control the quality of AI output. Humans who master problem framing, prompt engineering, and requirement decomposition gain leverage: they make AI more accurate, faster, and more predictable than others can.

Business Analyst

The heart of value creation lies in understanding the business, not writing the code. AI cannot replace someone who knows market dynamics, user behavior, budget constraints, prioritization logic, risk tolerance, stakeholder psychology, and regulatory boundaries. A Business Analyst bridges the gap between technology and real-world value. They decide why a feature matters, who it serves, how it impacts revenue or cost, and what risk it introduces — areas where AI can help, but not replace the human nuance needed.

Pentester

Security is one of the hardest domains for AI to master fully because it requires creativity, unpredictability, street knowledge, and adversarial thinking. A pentester does more than run scanners — they exploit human behavior, spot surprising vulnerabilities, and think like an attacker. Humans who understand security fundamentals, threat modeling, social engineering, and advanced exploitation techniques will stay in demand. AI helps automate scanning and code analysis, but a creative pentester stays ahead by understanding motives, tactics, and real-world constraints.


Essentially, the idea is to use AI as a super-assistant that can write code very well to realize our intentions.

What is refactoring and why does it matter?

For someone with no programming experience, the word “refactor” can be confusing at first, because it is mostly used in coding. To explain it simply, refactoring is making something better or clearer without changing what it does.

Why refactor when nothing changes?

“If it works, don’t touch it” is a principle that is still valid in the real world. It is truthful, pragmatic, and recommended to some extent when we do not have enough understanding of the system we are working on. From a business perspective, refactoring feels unproductive when no new features are added to the system. But just as a business sometimes needs to restructure processes, reorganize people, and rearrange tasks to maximize outcomes, the programming process also needs refactoring to optimize the coding experience, which makes the source code more readable, maintainable, and scalable. These benefits, in turn, accelerate developers when adding new features or fixing bugs later on.

What is readable code?

Readable code is code written clearly so another developer, or your future self, can read it quickly and know what is going on, sometimes just by guessing from variable and function names (a short before/after sketch follows the list). Some tactics that can be applied to ensure readable code are:

  • Clear naming: Variables, functions, and classes have names that explain their purpose.
  • Short, focused functions: Each function does one thing, not many things.
  • Consistent formatting: Proper indentation, spacing, and line breaks.
  • No unnecessary complexity: No overly clever tricks, just straightforward logic.
  • Helpful comments: Explain why, not what.
  • Use of standard patterns: Code follows common conventions so others instantly recognize the structure.
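As a tiny before/after illustration (the function and the business rule here are made up), the second version applies the naming and "one thing per function" tactics above without changing behavior:

```java
class CartPricing {
    // Hard to read: cryptic names, no hint of intent.
    double c(double[] a, double r) {
        double s = 0;
        for (double x : a) s += x;
        return s - s * r;
    }

    // Readable: clear names, one purpose, and a comment explaining "why".
    double totalAfterDiscount(double[] itemPrices, double discountRate) {
        double total = 0;
        for (double price : itemPrices) {
            total += price;
        }
        // The discount applies to the whole cart, not per item (business rule).
        return total * (1 - discountRate);
    }
}
```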

What is maintainable code?

  • Readable: as explained above.
  • Well-organized: Code is structured logically into modules, functions, or classes
  • Consistent: Follows the same style, naming, and patterns everywhere.
  • Well-tested: Covered by tests to catch bugs early and safely.
  • Documented: Has comments or docs explaining why and how things work.
  • Flexible: Easy to modify, extend, or adapt without breaking existing code.

What is scalable code?

  • Efficient: Uses memory and CPU wisely; avoids unnecessary heavy operations.
  • Modular: Pieces of code can be separated or duplicated easily.
  • Asynchronous / non-blocking when needed: Doesn’t freeze the whole system while waiting for one slow task.
  • Uses good architecture: Clear layers; can split into microservices or separate components if needed.
  • Uses proper data structures: For example, using a Map instead of a List for fast lookups (see the sketch after this list).
  • Database scalability: Indexes, caching, batching queries, sharding, etc.
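As a rough sketch of the data-structure point (the User type and data are made up), a List lookup scans every element, while a Map finds the entry directly:

```java
import java.util.List;
import java.util.Map;

class UserLookup {
    record User(String id, String name) {}

    // O(n): scans the whole list on every lookup.
    static User findInList(List<User> users, String id) {
        for (User u : users) {
            if (u.id().equals(id)) return u;
        }
        return null;
    }

    // O(1) on average: the map indexes users by id.
    static User findInMap(Map<String, User> usersById, String id) {
        return usersById.get(id);
    }
}
```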

Refactor safety

Because the goal is to keep the system working the same while rewriting code, there must be a metric that indicates sameness, or that detects differences in system behavior early. This is where Test-Driven Development shines.

Writing tests is often overlooked by inexperienced developers. Beginners usually think the programming job is just to write code, see it run, then move on to writing more code. Writing tests looks like extra work or an annoying requirement. That is okay, just as a young man may not yet understand “karma”. And the karma for this oversight usually is:

  • Bugs keep coming back
  • Bugs evolve as more code is added
  • Debugging takes a lot of time
  • The source code becomes a mess, and a small change can take months to add

When bugs bring enough pain, developers begin to gain experience.

Test Driven Development (aka TDD)

Test-Driven Development (TDD) is a software development process where you write tests before writing the actual code.

TDD follows a repeating 3-step loop:

  1. Write a failing test (yes, always fail first!)
    • The test describes what the code should do.
    • It fails because the feature doesn’t exist yet, or the bug is not fixed yet.
  2. Write the minimum code needed to make the test pass
    • Not perfect code, just enough to pass the test.
  3. Refactor – Clean up the code
    • Improve readability, maintainability, scalability
    • Keep the tests passing.

Then repeat the cycle for the next feature or bug fix (a minimal first iteration is sketched below).
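Here is a minimal sketch of one TDD iteration, assuming JUnit 5 and a hypothetical PriceCalculator class: the test is written first and fails because the class does not exist yet, then just enough code is added to make it pass, and finally both can be refactored while the test stays green.

```java
// Step 1: write the failing test first.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceCalculatorTest {
    @Test
    void appliesTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.discountedPrice(100.0, 0.10), 0.001);
    }
}

// Step 2: write the minimum code needed to make the test pass.
class PriceCalculator {
    double discountedPrice(double price, double discountRate) {
        return price * (1 - discountRate);
    }
}

// Step 3: refactor (rename, extract, clean up) while keeping the test green.
```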

Ideally, tests simulate the UX that users will experience on the real product. This cannot be 100% achieved, but keeping the principle in mind helps a lot when writing good tests. Depending on how close a test is to real-world UX, tests can be classified into 3 levels: Unit Test, Integration Test, and E2E Test.

Unit Test

Unit tests are ideal for testing the behavior of a function or a class. In each unit test, we check the output of a function given a particular input. We can anticipate what the inputs might be, even unrealistic ones (hackers usually supply unrealistic ones), to ensure our functions keep functioning regardless of the input. Unit tests can also be used as a debugging tool, because we can exercise a part of the system directly without reproducing the issue via the UI/UX. For functions that are well guarded by unit tests, developers can feel more confident making changes or refactoring, because bugs are caught early.

Integration Test

Integration tests check how multiple parts of the system work together. Functions, classes, and flows can be tested on how they interact inside a system. They ensure that the “pieces” of the system integrate properly. Similar to unit tests, we can anticipate and simulate flows to catch bugs early.

E2E Test

E2E (End-to-End) tests simulate a real user using the real app, by actually clicking buttons and typing text.
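For a feel of what that looks like in code, here is a minimal sketch assuming Selenium WebDriver; the page URL and element ids are hypothetical:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginE2ETest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Open the real app, just like a user would.
            driver.get("https://example.com/login");
            driver.findElement(By.id("username")).sendKeys("alice");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();

            // Verify the post-login page shows the expected greeting.
            String greeting = driver.findElement(By.id("greeting")).getText();
            if (!greeting.contains("Welcome")) {
                throw new AssertionError("Login flow failed: " + greeting);
            }
        } finally {
            driver.quit();
        }
    }
}
```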

This is the ultimate form of testing that can catch bugs that unit or integration tests cannot. E2E tests exercise the app in an environment closest to production. They validate the entire system, from UI/UX down to data storage. But they are the hardest tests to write, because a real system must be deployed for E2E tests to execute, and simulating user behavior in code requires more effort. This is why many teams stop at integration tests, and that is totally fine when the majority of bugs can be caught at the integration level. Writing E2E tests is time-consuming, so we should only write them for bugs that cannot be reproduced at the integration level. These are high-level bugs and should be addressed by high-level tests; they are mostly about concurrency, timing, and resources (a small race-condition sketch follows the list):

  • Race Condition: A situation where the correct behavior of a program depends on the relative timing or interleaving of multiple threads or processes. It’s about uncontrolled timing causing wrong results.
  • Deadlock: Two or more threads/processes are waiting on each other indefinitely, preventing progress. As a result, the system freezes because resources are locked in a circular wait.
  • Livelock: Threads or processes keep changing state in response to each other but make no actual progress. As a result, the CPU or threads are active, but nothing gets done.
  • Starvation: A thread never gets access to a needed resource or the CPU because other threads dominate it. As a result, the resource exists, but some threads never get a chance to execute.
  • Atomicity violation: A set of operations that should be executed as a single, indivisible unit is interrupted, causing incorrect results.
  • Order violation: Correct behavior depends on operations happening in a specific order, but the order is not guaranteed, which eventually leads to incorrect results.
  • Heisenbug: A bug that disappears or changes when you try to observe it (e.g., by debugging, logging, or adding print statements). It sounds like quantum physics, but yes, it does exist. These bugs are often caused by concurrency, timing, or memory issues.
  • Data corruption: Shared data is modified concurrently without proper synchronization, resulting in invalid or inconsistent values.
  • Lost update: Two concurrent operations overwrite each other’s results, causing data loss.
  • Dirty read / inconsistent read: A thread reads a partially updated or uncommitted value from another thread or transaction and then produces wrong results.
  • Priority inversion: A low-priority thread holds a resource needed by a high-priority thread, causing the high-priority thread to wait unnecessarily.
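To make the race-condition and lost-update categories concrete, here is a tiny sketch: two threads increment a shared counter without synchronization, so increments overwrite each other and the final value is usually less than the expected 200,000.

```java
public class LostUpdateDemo {
    private static int counter = 0; // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write: not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but lost updates usually make it smaller.
        System.out.println("counter = " + counter);
        // A fix: use AtomicInteger or synchronize the increment.
    }
}
```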

In conclusion, to refactor code safely, we need a lot of tests – good ones!

Phishing attacks at their ultimate form in Asia

Here is a poster that every building in Vietnam has to display to warn citizens about online scammers. Scammers today are tech-powered and even government-backed criminals, well funded and well organized!

The poster above lists popular tricks that scammers have used for a decade and that have caused extreme financial damage to citizens. Below is a summary of what is happening, with existing solutions at the end of this post.

Impersonate bankers

Scammers pretend to be bank employees, using forged caller IDs or fake emails to convince victims that their accounts have problems or suspicious activity. They pressure people to provide OTPs, passwords, or transfer money to “secure accounts,” exploiting the victim’s fear of losing funds.

Love trap on social networks

Criminals create fake profiles on Facebook, Zalo, or dating apps, using attractive photos and sweet messages to build emotional bonds. After gaining trust, they fabricate emergencies, travel problems, or gifts stuck at customs and ask the victim to send money to “help.”

Impersonate telecommunication officer

Fraudsters pose as telecom staff claiming your SIM will be locked, your number is involved in illegal activity, or you must update customer information. They then guide victims to provide ID details or install malicious apps that allow remote control of the phone.

Fake Sim 4G upgrade

Scammers contact victims saying their SIM card needs to be upgraded to 4G/5G and ask for OTP verification. When the victim shares the OTP, the scammer hijacks the phone number, enabling them to reset banking passwords and steal funds.

Recruit Partner

These scams offer “partnership” opportunities with fake companies or online stores. Victims are promised high profits or commissions, but after investing money, they cannot withdraw earnings, or the scammers disappear entirely.

Impersonate Social Insurance

Scammers claim to be from the social insurance authority, saying the victim has unpaid contributions, benefits problems, or involvement in illegal records. They create panic and manipulate victims into sharing personal data or making payments.

Impersonate charity

Fraudsters pose as charity organizations, exploiting compassion by collecting “donations” for fake causes such as medical emergencies, disaster relief, or orphan support. The collected money goes directly to the scammers’ accounts.

Gambling

Many scams involve illegal online betting sites. Victims are lured with promises of guaranteed wins or insider tips. After depositing money, the site manipulates the results or locks the account, making withdrawal impossible.

Impersonate Financial Organization

Scammers pretend to be from loan companies or investment firms, offering high returns or easy loan approval. They require “processing fees,” “insurance,” or initial deposits—after receiving the money, they vanish.

Forced loan

Victims receive a money transfer from strangers. The strangers then call, claim the money is a loan from black-credit firms, and threaten to collect by force if the victims do not pay.

Fake Crypto Trading Platform

Fraudulent crypto apps or websites show manipulated profit charts to convince victims they are earning money. When victims deposit larger amounts, withdrawals are blocked, and the platform disappears.

Recruit house cleaner

Scammers post fake job ads for housekeeping, offering high salaries. Applicants are then asked to pay “training fees,” “uniform fees,” or deposits for tools. Once paid, the job offer is withdrawn and the scammer disappears.

Buy / sell on digital platforms

In online marketplaces, scammers sell products they never deliver, or buy goods and send fake payment receipts. Some also lure victims into sending deposits to “hold” an item, then immediately block them.

Missions via strange apps

Victims are assigned “simple online tasks” such as liking posts or rating products, with small initial payouts. Later, the tasks require larger deposits to continue earning, and once enough money is collected, the scammers cut off contact.

Clone Facebook account

Fraudsters impersonate the victim by cloning their Facebook account, asking friends and family to send emergency money or mobile card codes. Others use the hacked account to run ads or steal linked personal information.

Impersonate government officers

Scammers masquerade as police, prosecutors, or tax officials, claiming the victim is involved in money laundering, tax evasion, or criminal cases. They use intimidation to force victims into transferring money to “verify” or “clear” their records.

Fake jackpot / gift

Victims receive messages claiming they’ve won a prize, iPhone, or overseas gift package. To claim it, they must pay customs fees or taxes. After sending the money, the supposed prize never arrives.

Terrorism via phone calls

Some scammers make threatening calls pretending to be criminals or debt collectors. They use fear—claiming harm, kidnapping, or legal consequences—to force victims to transfer money quickly without thinking.

Impersonate law firms

Scammers pose as lawyers claiming there is a lawsuit, unpaid debt, or urgent legal issue. They pressure victims to pay consulting fees or settlement amounts immediately to avoid prosecution.


Terribly, this keeps going on, at least at the time of this post, despite many efforts from the police of Vietnam, Korea, Singapore, and other countries. Because it is backed by some other governments, it is really hard to eliminate them all.

Well-organized criminal networks

Scam centers in Cambodia are hard to destroy because they are often backed by well-organized criminal networks that operate across multiple countries. These groups have resources, connections, and the ability to relocate quickly when law enforcement pressure increases. Their cross-border structure makes it difficult for any single government to completely shut them down.

Corruption & weak enforcement

Another reason is the presence of corruption and weak enforcement in certain regions. Some scam compounds operate in areas where local authorities have limited oversight or where bribery and influence allow criminals to continue operating with minimal interference. Even when raids happen, the networks frequently rebuild in nearby locations or migrate to neighboring countries.

Many scam centers also hide behind the facade of legal businesses, such as casinos, entertainment centers, or investment companies. These fronts make investigations more complicated because law-enforcement agencies need strong evidence before taking action. Criminals exploit this ambiguity to stay operational for long periods.

Human trafficking victims

Additionally, these scam operations rely on a steady supply of human trafficking victims brought in from various countries. Victims are forced to work under threats, making the operations difficult to expose. Because the workers are often imprisoned and isolated, reliable information rarely reaches the outside world, slowing down international rescue efforts.

High profitability and Low traceability

Finally, global factors contribute to their persistence. The rapid rise of online scams, cryptocurrency, and digital anonymity provides scam centers with high profitability and low traceability. As long as these operations generate massive revenue with relatively low risk, shutting them down completely requires coordinated international action—something that remains complex and slow.


Solutions

So it looks like citizens have to protect themselves before governments get things done.

Below are some protection tactics that can be observed in Vietnam.

Community-based reporting website

chongluadao.vn

Chongluadao.vn is a Vietnamese cybersecurity initiative that maintains a large database of verified scam websites, phishing pages, and fake online services. It allows users to check whether a link is safe and relies heavily on community submissions to keep its blacklist updated. It focuses on suspicious URLs and websites. Users can search past reports to know whether a page is a scam.

trangtrang.com

TrangTrang.com is another platform supporting community reporting of suspicious phone numbers. It focuses on gathering public complaints about calls. Users can search past reports before picking up a call, helping them avoid risks.

Firewalls on smartphone

Smartphone Firewalls can act as a digital shield that monitors network traffic to detect and block malicious connections. Unlike antivirus software that only reacts after threats appear, firewalls proactively prevent dangerous apps or websites from communicating with scam servers. They help stop phishing pages, data exfiltration, and suspicious background activities. This makes them especially useful in preventing scams delivered through fake apps or hidden links.

SafePhone (Firewall for smartphone)

SafePhone is a specialized mobile firewall designed to filter both internet traffic and incoming call threats. It can block incoming calls from known scam numbers. It can also prevent users from accessing scam websites when they tap URLs in messengers. By putting blacklists right on the user’s smartphone, it helps users defend against risks more seamlessly, without frequently looking things up on other websites.

Browser Extensions

Browser extensions can add an additional security layer directly inside the user’s web browser. They can warn about dangerous websites before loading, block trackers, stop pop-ups, and identify phishing attempts. Extensions with anti-scam features check every website against a global blacklist and use heuristics to detect fake login pages or fraudulent shopping sites. This type of protection is crucial because most scams start with a single click on a malicious link.

chongluadao.vn

Chongluadao.vn offers a browser extension that automatically warns users whenever they visit a suspicious or reported scam site.

SafePhone

SafePhone includes a feature called SafeBrowser. SafeBrowser is a secure browsing mode inside the SafePhone ecosystem. It routes traffic through SafePhone’s protection filters, blocking malicious domains and preventing users from accidentally accessing scam websites. This controlled environment is especially useful for elderly users, children, or anyone who prefers a safe but still simple browsing experience.


First time to NLP, huh?

Natural Language Processing (NLP) is a major research field of AI, and to most developers it sounds like a miracle. Lately I have taken an interest in this field, since the notably viral news about the GPT-3 model. I decided to learn to use it as a tool before it somehow replaces the developer job in the future, as many illustrious figures predict. But the more I study it, the more I realize how little I know. There is too much background knowledge to acquire before understanding each word of the GPT-3 paper. Below is a quick summary of the work behind the scenes, which will hopefully be useful to developers like me who want to make a leap to catch up with the progress of AI.

List of keywords

It is an inevitably long and exhausting journey to gain a fair, basic understanding of the terms below:

  • Convolutional Neural Network, Recurrent Neural Network, Activation Function, Loss Function, Back Propagation, Feed Forward.
  • Word Embedding, Contextual Word Embedding, Positional Encoding.
  • Long Short-Term Memory (LSTM).
  • Attention Mechanism.
  • Encoder – Decoder Architecture.
  • Language Model.
  • Transformer Architecture.
  • Pre-trained Model, Masked Language Modeling, Next Sentence Prediction.
  • Zero-shot learning, One-shot learning, Few-shot learning.
  • Knowledge Graph.
  • BERT, GPT, BART, T5

What existed before BERT and GPT?

A lot of research and prior work already existed in the NLP field. Working in NLP means solving the common tasks below:

  • Tagging Part of Speech.
  • Recognising Named Entities.
  • Sentiment Classification.
  • Question & Answering.
  • Text Generation.
  • Machine Translation.
  • Summarization.
  • Similarity Matching.

spaCy and NLTK are the two most famous libraries in the NLP field, providing tools, frameworks, and models that solve a few of the tasks above, but not everything. Each task usually had its own model, and there was no reuse or transfer between models, until the Transformer architecture was published. With its amazing performance and capability, researchers began to think about using this architecture to perform the NLP tasks above, so that one single model can do it all. The results are the BERT and GPT models, both of which use the Transformer. In fact, BERT powers the Google search engine, and GPT-3 is the model behind the ChatGPT application. Many more applications making use of these models can be found around the Internet.

Some Core Challenges when doing NLP

No matter what method is applied, the challenges that form the NLP field stay the same (a small sketch of the first one follows the list):

  • Computers do not understand words; they understand numbers. Find a method to convert each word in a sentence into a vector (a group of numbers) such that, given 2 words with similar meanings, their vectors are close in distance to represent that similarity.
  • Given a sentence with many words and variable length, find a vector that can represent the sentence.
  • Given a passage with many sentences and variable length, find a vector that can represent the whole passage.
  • From the vector of a word, sentence, or passage, find a method to convert it back into words/sentences/passages. This task in turn becomes Machine Translation or Text Summarization.
  • From the vector of a word, sentence, or passage, find a method to classify it into senses/intents. This task in turn becomes Sentiment Classification.
  • From the vector of a word, sentence, or passage, find a method to calculate its similarity to another vector. This task in turn becomes Question & Answering, Text Generation, or Text Suggestion.
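As a toy sketch of the first challenge (the 3-dimensional vectors below are invented; real word embeddings have hundreds of dimensions), cosine similarity is one common way to measure how close two word vectors are:

```java
public class WordSimilarity {
    // Cosine similarity: close to 1.0 means similar direction (similar meaning),
    // close to 0 means unrelated.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] king  = {0.80, 0.65, 0.10};  // made-up embedding
        double[] queen = {0.75, 0.70, 0.15};  // made-up embedding
        double[] apple = {0.10, 0.20, 0.90};  // made-up embedding
        System.out.println("king ~ queen: " + cosine(king, queen)); // high
        System.out.println("king ~ apple: " + cosine(king, apple)); // low
    }
}
```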

It would be too long to dive into each keyword here, so please hit the Subscribe button to receive upcoming posts from my learning journey.

Thanks for reading!

Microservices Tradeoffs and Design Patterns

Leaving aside the reasons why we should or should not jump into Microservices (covered in a previous post), here we talk more about the tradeoffs of Microservices and the design patterns that were born to deal with them.

Building Microservices is not as easy as installing some packages into your current system. Actually, you will install a lot of things :). The beauty of Microservices lies in the separation of services, which enables each module to be developed independently and keeps each module simple. But that separation is also the cause of new problems.

More I/O operations?

The first issue we can easily recognize is the emergence of I/O calls between the separated services. It looks exactly like integrating our system with 3rd-party services, but this time all those 3rd-party services are just our internal ones. To have correct API calls, there will be effort spent documenting and synchronizing knowledge between the teams handling different services.

But here is the bigger problem: if every service has to keep a list of other services’ addresses (to call their APIs), they become tightly coupled, meaning strongly dependent on each other, and that destroys the promised scalability of Microservices. This is when the Event-Driven style comes to the rescue.

Event Driven Design Pattern

Example tools: RabbitMQ, ActiveMQ, Apache Kafka, Apache Pulsar, and more.

The main idea of this pattern is that services do not need to know each other’s addresses. Each service just needs to know an event pipe, or message broker, and entrusts it with distributing its messages and feeding back data from other services. There are no direct API calls between services. Each service only fires events to the pipe and listens for events coming from the pipe.
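As a rough sketch of this idea using the Apache Kafka Java client (the topic name, broker address, and payload are made up), the producing service only fires an event to the pipe, and any interested service consumes it without either side knowing the other’s address:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventsSketch {
    public static void main(String[] args) {
        // Producer side: the order service fires an event to the pipe.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("order-events", "order-42", "ORDER_CREATED"));
        }

        // Consumer side: e.g. a shipping service listens on the same topic.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "shipping-service");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("order-events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Handling event: " + record.value());
            }
        }
    }
}
```

In a real system the consumer would, of course, poll in a loop inside its own service process.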

Along with this design pattern, the mindset about how to store data also needs to evolve. We not only store the STATE of entities, but also the stream of EVENTs that constructs that STATE. This storing strategy is also very effective when dealing with concurrent modifications on the same entity, which can cause inconsistent data. There are 2 approaches to storing and consuming events: using a Queue and using a Log, which we will explore in later topics.

More Complex Query Mechanism?

There will obviously be moments when we need to query data that requires cooperation between multiple services. In the past, with the Monolith style, when all data of all services lived in the same database, writing an SQL query was simple. In the Microservices style, it can’t be. Each service secures its own database, as a recommended practice. We suddenly can’t JOIN tables, we lose the out-of-the-box rollback mechanism of the database’s Transaction feature when something goes wrong while storing data, and we may see longer delays while each service waits for data from other services. Those obstacles turn Event-Driven into a “must have” design for a Microservices system, since that design is the foundation for the patterns that solve this querying issue, most commonly Event Sourcing, CQRS, and Saga.

Event Sourcing

The terms Event-Driven and Event Sourcing can be a bit confusing. Event-Driven is about the communication mechanism between services, while Event Sourcing is a coding solution inside each service for retrieving the state of an entity: instead of fetching the entity from the database, we reconstruct it from an event stream. The event stream can be stored in many ways: in a database table, read from Event-Driven components such as Apache Kafka or RabbitMQ, or kept in a dedicated event-stream database like EventStore. This method brings a new responsibility to developers: they have to create and maintain the reconstruction algorithm for each type of entity.
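A minimal sketch of that reconstruction idea, with a made-up account entity and events: instead of reading the balance from a table, we replay the event stream to rebuild it.

```java
import java.util.List;

public class AccountEventSourcing {
    // An event stream as it might be read from a table, Kafka, or an event store.
    record AccountEvent(String type, double amount) {}

    // Reconstruct the current state by replaying every event in order.
    static double currentBalance(List<AccountEvent> events) {
        double balance = 0;
        for (AccountEvent event : events) {
            switch (event.type()) {
                case "DEPOSITED" -> balance += event.amount();
                case "WITHDRAWN" -> balance -= event.amount();
                default -> throw new IllegalArgumentException("Unknown event: " + event.type());
            }
        }
        return balance;
    }

    public static void main(String[] args) {
        List<AccountEvent> stream = List.of(
                new AccountEvent("DEPOSITED", 100),
                new AccountEvent("WITHDRAWN", 30),
                new AccountEvent("DEPOSITED", 50));
        System.out.println("Balance = " + currentBalance(stream)); // 120.0
    }
}
```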

As mentioned in the previous section, this strategy is helpful when dealing with concurrent data-modification scenarios – collaboration features like those in Google Docs or Google Sheets, or simply two users hitting “Save” on the same form at nearly the same moment. But this reconstruction approach is not so friendly to more complex queries, the SELECT * WHERE ones that are so natural in traditional databases like Oracle or PostgreSQL. So, to cover this drawback, each service usually also maintains a traditional database that stores the states of entities and is used for querying. This combination forms a new pattern called CQRS (Command and Query Responsibility Segregation), where reads and writes on an entity happen on different databases.

CQRS (Command and Query Responsibility Segregation)

As mentioned above, this pattern separates read and update operations for a data store. A service can use the Event Sourcing technique for updating an entity, or use an in-memory database such as H2 to quickly store updates on entities, while persisting the calculated states of entities back to, for example, a SQL database as quickly as possible. This pattern prevents data conflicts when many updates on a single entity arrive at the same time, while also keeping a flexible interface for querying data.

This pattern is effective for scaling purposes, since we can scale the read database and the write database independently, and it fits high-load scenarios because write requests can complete quicker: it reduces calls to the database and the potential delays from its internal locking mechanisms. A quicker response means more room for other requests, especially in thread-based server technology such as Servlet or Spring.

A drawback of this pattern is coding complexity. With more components joining the process, there are more problems to handle. So it is not recommended in cases where the domain or business logic is simple: simple features fit nicely with the traditional CRUD method, and overusing anything is not good. I also want to remind you that if the whole system has no special needs regarding load, or write-heavy features, it is not recommended to switch to Microservices either (the reason is here).

Saga

Saga means a long heroic story. And the story of Transactions inside Microservices is truly heroic and long. A Transaction is an important database feature that aims to maintain data consistency: it prevents partial failures when updating entities. With distributed services, we have distributed Transactions. Now the mission is to coordinate those separate Transactions so that we regain the attributes of a single Transaction – ACID (atomicity, consistency, isolation, durability) – across distributed services. We can understand it simply as: Saga is a design pattern that aims to form the Transaction for Microservices.

The Saga pattern is about what the system must do if there is a failure inside a service. It should somehow reverse previously successful operations to maintain data consistency. The simplest way is to send out messages asking other services to roll back certain updates. To build a Saga, developers may have to anticipate many scenarios in which an operation can fail. More advanced solutions for the rollback mechanism are techniques like Semantic Lock or Versioning Entities; we can discuss these in other topics. But the point here is that it also brings much complexity to the source code. The recommendation is to divide services well to avoid writing too many Sagas. If some services are tightly coupled, we should think about merging them back into one Monolith service. Saga is less suitable for tightly coupled transactions.
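A very simplified sketch of the compensation idea (the services and methods here are hypothetical): when a later step fails, the Saga reverses the earlier successful steps with explicit compensating actions.

```java
public class CreateOrderSaga {
    interface OrderService    { String createOrder();          void cancelOrder(String id); }
    interface PaymentService  { void charge(String orderId);   void refund(String orderId); }
    interface ShippingService { void schedule(String orderId); }

    private final OrderService orders;
    private final PaymentService payments;
    private final ShippingService shipping;

    CreateOrderSaga(OrderService o, PaymentService p, ShippingService s) {
        this.orders = o; this.payments = p; this.shipping = s;
    }

    void run() {
        String orderId = orders.createOrder();       // step 1
        try {
            payments.charge(orderId);                // step 2
        } catch (RuntimeException e) {
            orders.cancelOrder(orderId);             // compensate step 1
            throw e;
        }
        try {
            shipping.schedule(orderId);              // step 3
        } catch (RuntimeException e) {
            payments.refund(orderId);                // compensate step 2
            orders.cancelOrder(orderId);             // compensate step 1
            throw e;
        }
    }
}
```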

More Deployment Effort?

Back in the Monolith realm, deployment meant running a few command lines to build an API instance and a client-side application. Going with Microservices, we obviously have more than 1 instance, and we need to deploy each instance one by one.

To reduce this effort, we can use CI/CD tools such as Jenkins, or some of the available cloud-based CI/CD services out there. We can also write our own tools; it won’t be difficult. But there are still more issues than just running command lines.

Log Aggregation

Logging is a vital practice when building any kind of application, providing a picture of how the system is doing and helping troubleshoot issues. Checking logs on separated services can be inconvenient in Microservices, so it is recommended to stream logs to one center. There are many tools dedicated to this purpose nowadays, such as Graylog or Logstash. The most famous stack for collecting, parsing, and visualizing logs at the moment is ELK, the combination of Elasticsearch + Logstash + Kibana. The drawback of these logging technologies is that they require quite a lot of RAM and CPU, mostly to support searching logs. For small projects, preparing a machine strong enough to run the ELK stack may not be very affordable: Logstash needs about 1-2 GB of RAM, Graylog requires Elasticsearch so it needs around 8 GB of RAM and a 4-core CPU, and the full ELK stack needs much more than that.

Health Check & Auto restart

Besides logging, we also must have a way to keep track of the availability of services. Each service may have its own /healthcheck API that a tool can call periodically to check whether the service is alive. Or we can use proactive monitoring tools such as Monit or Supervisord to monitor ports/processes and configure their behavior when errors occur, such as sending emails or notifications to a Slack channel.
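For example, such an endpoint could be as small as the sketch below, assuming Spring Boot (Spring Boot Actuator also ships a ready-made /actuator/health endpoint):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthCheckController {

    // A monitoring tool calls this periodically; any non-200 response
    // (or no response at all) means the service needs attention.
    @GetMapping("/healthcheck")
    public String healthCheck() {
        return "OK";
    }
}
```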

Besides health checks, each service should be able to auto-restart when something takes it down. We can configure a process to start whenever the machine boots by adding scripts to /etc/init.d or unit files under /etc/systemd on most Linux servers. For processes, we can make use of Docker to automatically bring services up right after they go down. For the machine itself, if we use a physical machine, we should enter the BIOS and enable auto-restart when power comes back; if we use cloud machines, there is nothing to worry about.

These techniques are recommended not only for Microservices but also for any Monolith system, to ensure availability.

Circuit Breaker

This is for when bad things happen and we have no way to deal with them but to accept them; there are always such situations in life. For some reason, one or many services go down or become so slow due to network issues that users wait a long time just for a button click. Most users are impatient and will likely retry the pending action, a lot, and you know the system can get worse. This is when a Circuit Breaker takes action. Its role is similar to an electrical circuit breaker: to prevent catastrophic cascading failure across the system. The circuit breaker pattern allows you to build a fault-tolerant and resilient system that can survive gracefully when key services are either unavailable or have high latency.

The Circuit Breaker is placed between the client and the actual servers containing the services. A Circuit Breaker has 2 main states: Closed and Open. The rules among those states are (a minimal sketch follows the list):

  • In the Closed state, the Circuit Breaker just forwards requests from clients to the services behind it.
  • Once the Circuit Breaker discovers a failed request or high latency, it changes its status to Open.
  • In the Open state, the Circuit Breaker returns errors to clients’ requests immediately, so users acknowledge the failure; this is better than letting users wait, and it also reduces the load on the system.
  • Periodically, the Circuit Breaker makes retry calls to the services behind it to check their availability. If those services are healthy again, it changes to the Closed state; if not, it remains Open.
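A bare-bones sketch of those rules (the threshold and retry interval are arbitrary, and real implementations usually add a Half-Open state):

```java
import java.util.function.Supplier;

public class SimpleCircuitBreaker {
    private enum State { CLOSED, OPEN }

    private final int failureThreshold;
    private final long retryIntervalMillis;
    private State state = State.CLOSED;
    private int failureCount = 0;
    private long openedAt = 0;

    public SimpleCircuitBreaker(int failureThreshold, long retryIntervalMillis) {
        this.failureThreshold = failureThreshold;
        this.retryIntervalMillis = retryIntervalMillis;
    }

    public <T> T call(Supplier<T> request, T fallback) {
        if (state == State.OPEN
                && System.currentTimeMillis() - openedAt < retryIntervalMillis) {
            return fallback; // fail fast while Open
        }
        try {
            // Closed, or Open but past the retry interval: probe the real service.
            T result = request.get();
            state = State.CLOSED;
            failureCount = 0;
            return result;
        } catch (RuntimeException e) {
            failureCount++;
            if (failureCount >= failureThreshold) {
                state = State.OPEN;
                openedAt = System.currentTimeMillis();
            }
            return fallback;
        }
    }
}
```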

Luckily we may don’t have to implement this pattern ourself. There are available tools out there such as : Hystrix – a part of Netflix OSS, or Istio – the community one

Service Discovery

As mentioned in the Event-Driven section, services inside a Microservices system do not need to know each other’s addresses when using an event channel. But what if the team is not familiar with the Event style and decides not to use it, or the services are simple enough to just expose REST APIs only? Using Event-Driven is not a must, so in that case, how do we solve the addressing problem between services?

When the system needs to be scaled, more instances of one or many services will be added, removed, or simply moved around. To let every service know the addresses (IP, port) of the others, we need a man in the middle that keeps records of the services’ addresses and keeps them up to date. This module is called Service Discovery and is usually used along with Load Balancing modules. We may discuss this more in other topics.

We also do not need to create this component from scratch. There are tools out there such as etcd, Consul, and Apache ZooKeeper. Let’s give them a try.

Ending

The above is an overview of what we need to know when moving to Microservices. Make sure you google them all before really starting. Each of these patterns has its pros and cons and its own mitigations, which other topics will cover. Thanks for reading!!

Is Microservices good?

Yes and No.

Yes when we are facing the problems it solves, and No when we blindly follow the “trend”.

Once, my boss read somewhere about how amazing Microservices is, and he instantly asked the development team: “Let’s do Microservices.” He is purely a business man but always wants to apply the newest technology. How lucky I am – but also what a challenge it is to switch from one system design to another. Actually, it sounded cool to us, so the boss and the developers quickly agreed. So let’s do Microservices.

What is Microservices?

Microservices, clearly said, is a system design approach; I personally don’t count it as a technology. A Microservices system itself is composed of multiple technologies, and each piece of technology solves a business problem or a problem emerging inside the Microservices itself. The opposite approach is called a Monolith – an all-in-one big service – which, in short, is what most systems nowadays are: a single set of an API server and a database. Switching to Microservices, technically, means dividing the functions of the One Big Service into multiple small services running independently, wiring them together, and then choosing the fittest technology for each small service. Each technology here can be a programming language, a framework, a software product, a third-party service, or a tool.

In its simplest form, we can think of a Microservices system as composed of multiple Monolith systems. Each Monolith contains its own server & database and exposes its own API gateway. The Monolith systems communicate with each other by calling each other’s APIs directly or by listening to shared event channels, depending on the use case.

Microservices is NOT a new skill set. Microservices is composed of multiple Monolith services, so to do Microservices, developers must first be good at building Monoliths.

What problems do Microservices solve and NOT solve?

There is a reason every boss wants to move to Microservices: they think it is good. But I think not everyone understands WHAT it is good for. Microservices is NOT a purely better design than other designs. It is an adaptation to overcome problems that emerge when a system grows to a big and huge size, in terms of both traffic and logic complexity. So if your system is not supposed to be the next Amazon or Netflix, the Monolith design is fine for you, since it is much simpler to set up and maintain. A few thousand users with a few hundred connections per second is within the capability of most technologies nowadays, such as Spring, Node, Ruby on Rails, or PHP. But it is hard to estimate the threshold, because each system has different features, and the best way to find its maximum capability is to do a stress test – basically to send as many requests as possible and then analyze the response time. When you know your system’s capability, you will have a numeric reference for deciding when to move to Microservices. Microservices is a journey; only carry on when you are well prepared.

Microservices does NOT magically increase the system’s load threshold unless the services are divided and designed appropriately. Remember that I/O takes the main part of the delay between request and response. Normally, in a Monolith design, all services live in the same memory space, which is the fastest way for services to cooperate with each other. But if we blindly deploy services to multiple different places just to make the system look like Microservices, there will be more I/O time, since each service has to send more requests to the services it depends on, and the performance of the system will go down significantly. This may be the most common mistake when creating a Microservices system. Microservices is NOT about fanning out all services to multiple servers. We must measure to identify the bottleneck in the system before deciding to move some related services to an independent server. And it is also NOT simply deploying the current service’s source code to another server. The new server should bring some benefit, such as greater processing power that accelerates the service, or the move should be an opportunity to redesign the service with other technologies that provide benefits the service needs.

A good example of redesigning a service is to separate READ and WRITE data into two services for the same domain object (the same table), with the purpose of supporting a large amount of concurrent reading/writing with low latency. Assume we have a Monolith system, but after a period of growth we have a huge amount of data and a complex schema on a SQL database, so that every time a query is issued it freezes the whole system for a few seconds. This is bad and we want to improve it. At that moment, we may arrive at this solution: we divide the service along the READ & WRITE axis. The READ service may use a NoSQL database as persistent storage with fast reading speed to reduce the user’s waiting time. The WRITE service may use an in-memory database such as H2 to process data updates as fast as possible, then gradually synchronize the in-memory data to the persistent storage of the READ service. Those two services should run on different machines to maximize resource usage. That is a true story of Microservices. If we simply deploy another identical service on another server to handle more traffic by routing by IP or by zone, that is Load Balancing.

Microservices is NOT about reducing development cost. In fact, it increases it. Firstly, we need more machines to run the independent services, as well as more machines to run the monitoring tools. Microservices is an architectural design approach; it is a view of the whole system, NOT of how each service is coded. It does NOT magically reduce bugs. You may read here for more understanding about the source of bugs. But when services are divided well, it does strengthen the boundaries between services, which can help developers avoid using the wrong components and avoid creating too many cross-cutting components with hidden logic. Microservices brings a real need for DevOps positions, who take responsibility for deploying multiple services as quickly as possible to ensure the lowest downtime between deployments. They obviously have to create CI/CD systems to automate the deployment process, calculate system load, and create or install monitoring tools to keep track of how the services are doing. When a bug happens inside a Microservices system, it is more complicated to fix than in a Monolith, since there is now more than one place to look for the true source of the bug. Developers also have to set up an identical system on their local machine for developing and testing, and a system of multiple services requires a stronger machine. A system with too many services can be nearly impossible to deploy on a single machine, so we may need mocking techniques to create fake API gateways on behalf of services. Writing automated tests gets harder too. Many behind-the-scenes tasks like these will disturb developers when switching to Microservices. More work, more jobs, more salary.

Microservices is NOT a license to freely apply the latest technology. I bet your team won’t want to work in a tech soup. Agreed, Microservices gives us the ability to mix multiple technologies to make use of their advantages. But remember that it requires us to understand those advantages before applying them, or your system will gain more complexity without any significant benefit, and crying will come soon. Microservices is NOT only about technology; it is also about people. It depends on how your team is organized, what their skills are, and what they are good at. Learning new things takes time, and if you are in a rush, go with the tools you are familiar with. For example, suppose we are about to create a small service to handle employees’ documents within 1 month, and we have only 1 thousand employees. Our developers are experts at Java, but Go is the new language and it is trending. You may hear somewhere that “Go is faster”, but here is the point: your developers will build that new service faster with Java than with Go, and one thousand users is not a limit that forces a switch from Java to Go.

Microservices is NOT about creating boundaries between teams. It only creates boundaries between the services your teams are building – technical boundaries only. The more developers know about other services, the more chances they have to find problems early, and the lower the communication cost between teams. Don’t use architectural design as a political tool inside an organization. One developer can work on multiple services depending on his or her ability, and such people usually act as an important bridge between services. I know some managers want to divide teams to rule more easily, but I feel it is not a good way to build an organization: people will go to work with doubts and envy, because, more or less, all services are necessary at some point, yet at each moment some are more important than others. Non-siloed teams also enable cross-checking, which can push teams forward and reduce “job security” thinking. No sharing and no checking between employees will gradually lead a few of them to think they are irreplaceable, which is a toxic mindset for an organization.

So when to go Microservices?

Microservices does not help reduce costs, does not help improve performance, and does not make a system inherently “better”, so why is it trending? Because it comes from big tech companies, and people tend to believe that whatever comes from the big boys is always “better”. We easily and blindly copy without diving deep to understand why they do it. Big tech companies hit the limits of technology; a single Monolith system can’t help them anymore, so they have to use multiple Monoliths to solve their problems, and the result is a system they named Microservices. Technology changes every day, and who knows what will come in the next few years; we have seen many frameworks, languages, and platforms arrive as “better” options and then die. So the key to deciding to move to Microservices is knowing the limits of the current system by testing the load well.

Another reason we might need Microservices is to implement many projects at the same time. For example, if we need to build a Pricing Engine module at the same time as an ERP module to manage employees, we might assign them to 2 teams, since the business logic of the modules does not depend on each other. Each team can develop its own service on a separate server, so the deployment of each service is independent too. If the 2 modules were built into 1 Monolith service, an issue in one module could block the whole deployment process to prevent risks on the production environment. So the key point when dividing services between teams is the dependency between services: they should be loosely coupled. It means each service can act as a separate product without knowing about, or needing the existence of, the other services.

When each service is truly independent, it can be reused too. For example, if your company has multiple projects but the same employees use all of them, then to avoid duplicating features like authentication, employee management, or full-text search, we can carve them out into separate services that can be reused by different projects.

One scenario where your system may come to look like Microservices is when rewriting a legacy system with up-to-date technology. Rewriting the whole system is time-consuming, so we usually have to rewrite it module by module. Each rewritten module can be deployed on a separate server, and on the way to rewriting the legacy system, you are using Microservices.