How Fake BTS Attacks Steal Your OTP — And How to Protect Yourself

If you receive one-time passwords (OTPs) via SMS for bank transfers, logins, or password resets, you should read this. This is a realistic attack that has happened in real life in many countries, and cybercriminals have stolen a great deal of money with it. Anyone can be a victim: people who live in countries that still run 2G mobile networks, who use older phones with 2G enabled by default, and who have something worth stealing.

1. What Is a 2G Mobile Network?

2G (Second Generation) is one of the earliest digital mobile network technologies, introduced in the 1990s. Unlike the old analog 1G systems, 2G transmits voice calls digitally, making communication clearer and more secure than 1G. 2G was designed mainly for voice calls, SMS text messages, and very slow mobile internet (GPRS/EDGE).

Compared to modern networks today such as 4G and 5G, 2G has extremely limited bandwidth and weak security protections. Many security mechanisms used by 2G were created decades ago and are now considered outdated.

Why 2G Still Exists

Even today, many telecom providers still keep 2G active because:

  • Old feature phones still depend on it
  • Some IoT devices use it
  • Rural areas may rely on legacy infrastructure
  • Emergency fallback compatibility

However, this backward compatibility also creates a serious security problem.

2. What Is a Base Transceiver Station (BTS)?

A Base Transceiver Station (BTS) is the radio communication equipment that connects mobile phones to a cellular network. In simple terms, a BTS is the “cell tower” your phone talks to when you:

  • make calls
  • send SMS
  • use mobile data
  • register to the network

Every time your phone shows signal bars, it means your device is communicating with a nearby BTS.


MS — Mobile Station

The Mobile Station is the physical mobile phone, plus the SIM card identity inside it. Each MS has identifiers such as:

  • IMSI (International Mobile Subscriber Identity)
  • IMEI (device identifier)

These identifiers are important and fake BTS attacks often try to capture them.

BTS — Base Transceiver Station

The BTS acts as the bridge between your phone and the telecom core network. Its responsibilities include:

  • transmitting radio signals
  • receiving signals from phones
  • managing communication channels
  • broadcasting network information
  • forwarding traffic to the carrier network

A BTS usually covers a geographic area called a “cell.” As you move around, your phone constantly switches between BTS towers through a process called handover (or roaming, when you move between carrier networks).

How MS and BTS Communicate

The communication between phone and BTS happens over radio frequencies using GSM protocols. The basic flow looks like this:

  1. Phone searches for nearby BTS signals
  2. BTS broadcasts network identity information
  3. Phone selects the strongest or preferred tower
  4. Phone registers itself to the network
  5. BTS assigns communication channels
  6. Voice/SMS/data traffic begins

In 2G GSM, the BTS continuously broadcasts:

  • MCC (country code)
  • MNC (carrier code)
  • Cell ID
  • supported encryption modes

The problem is that early GSM protocols were designed with a dangerous assumption: The phone trusts the BTS automatically. This becomes the core weakness exploited by fake BTS devices.

3. The Security Problem in 2G GSM

In modern 4G/5G systems, both sides, BTS and MS, authenticate each other. But in classic 2G GSM:

  • The network authenticates the user
  • The user does NOT authenticate the network

That means:

  • A fake tower can pretend to be a legitimate carrier
  • Nearby phones may connect automatically
  • Users often receive no warning

Attackers exploit this weakness by broadcasting a stronger signal than legitimate towers. Once the phone connects, the rogue BTS can:

  • Request IMSI identifiers: the attacker can identify your SIM and track you without your knowledge.
  • Downgrade the connection from 4G to 2G for weaker encryption: the attacker can then read your SMS.
  • Intercept SMS: the attacker can even impersonate you and send SMS to your friends under your name.
  • Send phishing messages: the attacker can spoof legitimate phone numbers, your boss’s for example, to send you a link asking you to enter your passwords.

This is the fundamental mechanism behind IMSI Catchers and Fake BTS attacks.

4. What Is a Fake BTS (IMSI Catcher)?

Mobile phones are designed to automatically search for the “best” available cellular signal. In GSM/2G networks, your phone usually prioritizes the BTS tower with the strongest signal. Attackers exploit this behavior by broadcasting:

  • a stronger signal than nearby legitimate towers
  • copied carrier information
  • attractive network parameters

To the phone, the fake BTS appears to be a normal carrier tower. Because classic GSM lacks proper network authentication, the device may connect automatically without warning the user.
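
Why the phone falls for this can be sketched in a few lines. Below is a hypothetical model (the `Tower` fields, the MCC/MNC values, and the dBm numbers are illustrative, not real protocol data) of the selection logic a 2G handset effectively follows: match the broadcast carrier codes against the home network, then pick the strongest signal, with no way to verify that the winner is genuine.

```python
from dataclasses import dataclass

@dataclass
class Tower:
    mcc: str          # broadcast country code
    mnc: str          # broadcast carrier code
    cell_id: int
    signal_dbm: int   # higher (less negative) = stronger signal
    legitimate: bool  # the phone has no access to this field!

def select_tower(towers, home_mcc="310", home_mnc="260"):
    """Pick the strongest tower that claims to be the home carrier."""
    matching = [t for t in towers if (t.mcc, t.mnc) == (home_mcc, home_mnc)]
    return max(matching, key=lambda t: t.signal_dbm)

nearby = [
    Tower("310", "260", cell_id=1234, signal_dbm=-85, legitimate=True),
    Tower("310", "260", cell_id=9999, signal_dbm=-50, legitimate=False),  # rogue, louder
]
print(select_tower(nearby).cell_id)  # prints 9999: the impostor wins on signal alone
```

Nothing in the selection step can distinguish the rogue tower from the real one, because classic GSM gives the handset no proof of the network's identity.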

IMSI stands for: International Mobile Subscriber Identity. It is a unique identifier stored inside the SIM card. An IMSI Catcher is named after its ability to trick phones into revealing this identifier. Once attackers collect IMSI numbers, they can:

  • Identify devices
  • Track movement
  • Target specific users

This is one of the first steps in many surveillance-oriented attacks.

5. Attack Setup (High-Level, No Harmful Instructions)

A simplified Fake BTS attack flow looks like this:

  1. The attacker activates rogue BTS equipment as a fake tower
  2. The fake tower advertises itself as a legitimate carrier
  3. Nearby phones detect the strong signal
  4. Devices connect automatically to the tower with the stronger signal
  5. The fake BTS requests device identifiers and takes control of the communication

Depending on the attacker’s goal, the fake tower can:

  • Downgrade your phone from 4G to 2G: the most common technique for stealing OTPs.
  • Disable encryption: so the attacker can read SMS content, which may contain OTP codes.
  • Forward traffic to real networks: the so-called Man-in-the-Middle attack, where you keep communicating normally but the attacker can eavesdrop on everything.
  • Inject phishing SMS messages: you can receive an SMS that appears to come from a friend’s number, but it was actually delivered by the fake BTS; your phone simply displays it.

Police in several countries have confiscated fake BTS equipment caught carrying out exactly this attack in public.

6. How to Defend Yourself

Symptoms of a Possible Fake BTS Attack

Detecting a Fake BTS in real life is extremely difficult. Modern rogue base stations are designed to look almost identical to legitimate carrier towers, and most smartphones provide very little visibility into low-level cellular behavior. Still, there are several warning signs that may indicate suspicious activity.

Sudden Drop to 2G or “E” Signal

One of the most common indicators is your phone suddenly falling back from 4G/5G to 2G, often showing an “E” icon instead of “4G” in the corner of the status bar. Attackers often force devices onto 2G because:

  • GSM security is weaker
  • Phones trust the network more easily
  • 2G encryption is easily cracked

A downgrade becomes more suspicious when 4G/5G coverage is normally strong in the area but the signal changes unexpectedly, and multiple nearby devices behave the same way.

Weak or Missing Encryption Indicator

In classic GSM networks, the BTS controls whether encryption is enabled. A rogue BTS can force weaker encryption, or request no encryption at all. Historically, some phones displayed warnings such as “unencrypted network” or “ciphering disabled.” But today most smartphones hide these low-level network details, so users rarely see a visible warning. As a result, users may have no obvious indication that something suspicious is happening.

Reality: Detection Is Extremely Difficult

The uncomfortable reality is: Most users cannot reliably detect a Fake BTS attack. Reasons include:

  • Users do not understand how calls and SMS work at the technical level.
  • Smartphones expose very little radio diagnostic information.
  • Rogue towers can imitate legitimate carrier behavior.

Even cybersecurity professionals often need specialized equipment to investigate suspicious cellular activity. Advanced detection may involve SDR (Software Defined Radio) analysis, baseband monitoring tools, and carrier database comparisons. But ordinary users typically have no easy way to confirm whether a nearby tower is genuine. That is one reason fake BTS attacks remain effective even decades after GSM was introduced.

Mitigation Strategies

Since detecting a fake BTS is unreliable, the reliable defense is to stop depending on OTPs sent via SMS. Using an authenticator app such as Google Authenticator or Authy for OTPs is highly recommended: the code is generated on your device and never travels over the cellular network. Besides that, make sure to disable 2G on your phone if it still supports it. Most modern phones disable 2G by default, so if you are using an older model, search for how to disable 2G on your specific device. Last but not least, avoid logging in, resetting passwords, or making bank transfers on public networks; do these only in places you trust.
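
To see why authenticator apps sidestep the whole attack, here is a minimal sketch of TOTP (RFC 6238), the algorithm that apps like Google Authenticator implement. The code is derived locally from a shared secret and the current time, so there is no SMS to intercept. This is a simplified illustration, not a production implementation.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    counter = timestamp // step                  # which time window we are in
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test-vector secret; at t = 59 s the 6-digit code is "287082"
print(totp(b"12345678901234567890", 59))
```

A real app stores the secret securely on the device and compares codes with a constant-time check; the point here is simply that the cellular network never sees the code at all.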


What Makes Social Networks Addictive (and What We Can Learn for Software Development)

Social Networks have become a social norm. Almost everyone has at least one profile on a platform such as Facebook, X, TikTok, or a few others. I was on Facebook when I was a student, and honestly I did not get what Facebook actually was or why people used it. I wrote something on my wall, then got a notification saying a friend liked my post. I also saw friends posting funny things on their walls, but I did not hit the like button, not because they were not funny, but because I was not aware that pressing like was what you did when you found something funny. I left Facebook because playing games was much more engaging.

Then I went to university, and so did my friends, but we lived in different districts and studied at different universities. It was really hard to meet as often as we had in school, and calls and SMS were costly for long conversations, and not much fun either. So I came back to Facebook, because most of my friends were using it. We got free messaging and video calls. We shared thoughts, opinions, and discussions via comments, and showed support with the like button. We shared moments by uploading photos and videos. We did not meet in person as often as before, but we felt we knew what the others were doing. Then I saw the first news stories about Social Network Addiction, and I did not understand. How does a tool that simply informs its users about something become addictive?

At first glance, Facebook, X, TikTok, or any Social Network looks simple: “someone posted something, then you see it.” But the addictiveness doesn’t come from the information itself; it comes from how that information is delivered, timed, and socially framed. This post will reveal the real mechanism behind it, or at least its core.

Before examining the whole mechanism, it is important to understand the artifacts that build it up: the Slot Machine Effect, the Social Validation Need, FOMO, Stopping Cues, Personalization, Triggers, and Social Obligation Pressure.

1. The slot machine effect

The slot machine effect is a nickname for a concept from behavioral psychology: variable-ratio reinforcement. Simply put, you repeat an action because the reward is unpredictable but sometimes great. It is essentially what happens inside a gambler’s mind. Each time we open a Social Network, what we get is random. Sometimes there is nothing interesting. Sometimes there is a funny post, a like, or a message. Sometimes there is something emotionally strong, such as a drama, a compliment, or a surprise, and we feel good. This unpredictability trains the brain to try again, because “maybe the next scroll will be good.” That is what keeps users opening the application and scrolling, like a hunt for emotions. And humans love hunting; the activity has been rooted deep in our brains since the very first days of humankind. But what we hunt is no longer simply food.
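
For illustration, the schedule is easy to simulate. The probabilities below are invented for the sketch; real feeds tune them per user.

```python
import random

def open_app(rng: random.Random) -> str:
    """Variable-ratio reinforcement sketch: the reward per check is random."""
    roll = rng.random()
    if roll < 0.60:
        return "nothing interesting"               # most checks pay nothing
    if roll < 0.90:
        return "a like, a message, a funny post"   # small reward
    return "something emotionally strong"          # rare jackpot

rng = random.Random(7)  # fixed seed so the run is reproducible
session = [open_app(rng) for _ in range(20)]
print(session.count("nothing interesting"), "empty checks out of 20")
```

A fixed schedule (“every 10th check pays off”) would extinguish the habit quickly; it is the unpredictability that keeps the checking behavior alive.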

2. Social Validation Need

Humans, by nature, care deeply about how others see them. This is a survival trait, evolved and deeply rooted in the brain over thousands of years, since the Tribal Age, when there was no law and how your tribe perceived you determined whether you lived or died. Our brains are wired to care about being accepted, being noticed, and not being rejected. Social Networks did not reinvent this; they measured and amplified it. In real life, validation is subtle. It is a feeling that emerges from daily interaction. Each person has their own way of showing validation, and each culture has its own customs for expressing it. On Social Networks, validation is visualized as numbers of likes, comments, and shares. 1 like vs. 100 likes! 0 comments vs. 20 comments! 0 shares vs. 10 shares! Comparison is triggered. This turns Social Validation into something closer to a scoring system than a natural feeling. Social Validation now becomes Social Comparison: evaluating our opinions, abilities, and worth by comparing ourselves to others.

As a blend of Social Validation and Social Comparison, the brain tends to translate likes into approval, comments into attention, and shares into influence. It is a translation from numbers to feelings, and a false one, because these numbers can be manipulated in many ways: psychological tricks, ad campaigns, paid engagement, or clone accounts. But the false translation is not easy to escape, because of Cognitive Ease: the brain loves simple things, and interpreting likes as approval is easier than reading real-life approval, which can be complex (tone, facial expression, context). This triggers dopamine (reward signaling) as well, making us want to check reactions, post again, and stay engaged.

3. Stopping Cues

Social Networks are, to some extent, like TV shows or books: they also provide content. The difference is that Social Network content is made by anyone, with or without the relevant knowledge, skills, or credentials. People on Social Networks may not be directors, scholars, or professors, but nothing stops them from telling stories, teaching, or bragging. TV shows and books have endpoints: we know when they end and we take time to relax. Social Networks remove that, on purpose.

A common design pattern in Social Networks is Infinite Scroll. This design keeps users in a continuous loop with no friction to stop. The brain relies on boundaries to end activities: the end of a chapter, a page, or an episode is a cue to stop. Infinite scroll deletes those cues. Without a clear “end,” the brain defaults to continuing. It pairs perfectly with the Slot Machine Effect, since unpredictable rewards keep behavior going longer than predictable ones. It also exploits Completion Bias: the psychological tendency to prioritize easy, quick tasks over more important, complex ones for a fleeting sense of accomplishment and a dopamine boost. This bias tricks the brain into valuing the “done” feeling, often leading to time wasted on trivial tasks rather than high-impact ones. And here, continuing to scroll feels easier than closing the app.
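
As a sketch of the pattern (hypothetical function and field names, not any platform's real API), cursor-based pagination shows how the stopping cue disappears: the client is never told the feed has ended, only handed another cursor to request.

```python
def fetch_page(feed: list, cursor: int = 0, size: int = 3):
    """Return one page of the feed plus the cursor for the next request.

    In a live system a ranking service refills `feed` faster than the user
    can scroll, so `next_cursor` always points at fresh content and the UI
    never has to render a natural endpoint.
    """
    page = feed[cursor:cursor + size]
    next_cursor = cursor + len(page)
    return page, next_cursor

feed = [f"post-{i}" for i in range(10)]
page, cursor = fetch_page(feed)              # first screenful
while page:                                  # the client's only exit is an empty page,
    page, cursor = fetch_page(feed, cursor)  # which a live feed never serves
print(cursor)  # -> 10: we only stopped because this toy feed is finite
```

Compare this with a paginated design (“page 3 of 7”), which hands the brain exactly the boundary cue that infinite scroll deletes.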

4. Fear of missing out (FOMO)

Fear of Missing Out (FOMO) is a psychological concept describing the anxiety people feel when others are having rewarding experiences without them. Simply put: you can feel anxious when you see others winning. This feeling is exploited heavily on Social Networks, where people frequently and easily compare their lives to others’ profiles via news feeds and counts of likes, comments, and shares, eventually leading to feelings of inadequacy or exclusion. FOMO reflects the human need for Social Validation and also stems from Social Comparison: the sense that a person must know, do, or have something in order to belong to a group. People with strong FOMO often experience greater dissatisfaction and more impulsive decision-making.

Social Networks amplify FOMO by providing constant updates about others’ activities, achievements, and lifestyles. This creates a loop of checking, posting, and comparing. Users feel anxious when comparing themselves to others, and the brain wants relief from that anxiety. It turns out the most relieving action is to check again whether things are as they fear. Checking via the app is fast, easy, even anonymous, so it is the brain’s first choice: Cognitive Ease again. Despite the anxiety, users do not flee. This is classic Negative Reinforcement: a behavior sticks because it removes an unpleasant feeling. The Social Network app creates the anxiety with one hand and, with the other, offers the fastest way to relieve it. It becomes addictive precisely because it is the fastest route to relief.

5. Personalization

Naturally, people don’t like people who hold different opinions. If a Social Network only showed content that contradicts a user’s perspective, they would not use the app. To keep people using it, the network needs to show what users like to see, and to a human there is nothing better than seeing what they already believe. This is Confirmation Bias: the brain automatically filters out whatever does not support an existing belief and focuses only on what does. Exploiting this bias, Social Networks analyze user behavior and show only what a user tends to like. Time spent on certain posts, likes, comments, shares, even demographic info or avatars, all become inputs to an algorithm that predicts what a user might enjoy. After watching people interact on the Internet long enough, these algorithms seem to know what their users like. And when the algorithm shows users only what they like, the whole Social Network starts to feel like it is full of people just like them. This is the Halo Effect: using a small cue to judge the whole thing. Because users like something posted on a Social Network, they come to like the Social Network itself. That illusion keeps users returning, because no one can resist seeing what they like.

6. Triggers

The artifacts above work by tapping deep psychological instincts, and because they are instincts, they are hard to resist. But instincts do not fire all the time; they need external triggers.

Humans have language in written form: the brain translates symbols into meaning, and depending on the meaning, a piece of text can trigger instincts just as a sound in a bush triggers a deer. Simply put, human instincts can be triggered by text. We may all know someone who reacts strongly to certain words; depending on past experience, the same word can carry very different feelings. Social Networks exploit this well via Notifications. A notification does not simply announce an event; its message is designed to trigger instincts. For example:

  • “You were mentioned in a comment” → triggers Social Validation (“someone is talking about me”)
  • “Someone liked your post” → triggers Social Validation (“people value what I shared”)
  • “You have 5 new notifications” → triggers FOMO (“what did I miss?”)
  • “Your friend just posted after a long time” → triggers FOMO (“this might matter”)
  • “This is getting a lot of attention” → triggers Social Validation (“this could be important or trending”)

Each message is short, but it is not neutral. It is designed to activate specific psychological responses such as curiosity, belonging, urgency, or FOMO. Over time, the brain begins to associate these phrases with emotional outcomes. This is why people feel an urge to check immediately, even when they were not planning to.

In this way, notifications function less like messages and more like triggers. They convert language into instinctive reactions, turning attention into a reflex rather than a deliberate choice.

7. Social Obligation Pressure

Social Obligation Pressure is the feeling that you owe a response, attention, or presence because of social expectations, even if you don’t actually want to engage at that moment. This obligation comes from Fear of Negative Judgment, a fear amplified by features such as read receipts and typing indicators, commonly used in chat boxes. The feeling itself is natural in humans; it helps societies form. But on Social Networks, people do not see each other’s faces, so by visualizing these cues through indicators, the platform ensures the fear exists and pushes users to engage, because no one wants to be seen as impolite. It’s not just “I should reply”; it’s closer to “If I don’t, people will think badly of me.”

Social Obligation Pressure, or Fear of Negative Judgment, targets identity, not just curiosity. Humans constantly assume they are being evaluated. We predict how others might interpret our behavior and try to avoid being seen as rude, dismissive, ungrateful, or socially incompetent. The fear is not about the action itself; it is about the meaning your brain anticipates others might assign to your action, which may not even be true. Many times we reply to someone on a Social Network not because we want to, but because we want to avoid negative judgment. Read receipts remove plausible deniability, typing indicators create an expectation of response, online status signals availability, and notifications create urgency. All of these features are designed around Social Obligation Pressure.

Put It All Together

Social Networks profit from advertising: the more addicted the users, the more revenue the platform earns. By combining all the artifacts above, Social Network applications train the brain into a behavior loop, exploiting human biases and instincts to keep users spending as much time as possible in the app. The loop follows these steps:

  1. It starts as a free tool that solves a real-life problem, communication: messaging, chat, video calls, etc.
  2. Triggers (the Notifications) are added to spark anxiety or FOMO
  3. Social Obligation Pressure pushes users to engage: reply to messages, check information, etc.
  4. Users open the Social Network app (e.g. Facebook / TikTok)
  5. The Personalization algorithm shows highly relevant, easy-to-consume content
  6. Slot Machine Effect: users get unpredictable rewards while scrolling
  7. Social Validation Need: users eventually get likes/comments that deliver dopamine hits
  8. No Stopping Cues: with no natural exit point, users doomscroll
  9. After leaving or pausing, anxiety, curiosity, or social pressure still lingers in the brain
  10. The Social Network introduces new trigger forms to create the urge to check again, and we are back at step 2!

We have all heard about the real-life harms caused by social network addictiveness: wasted time and reduced productivity, anxiety, low self-esteem, and constant comparison. Over time it can lead to irritability, anger, and strained relationships, as attention is pulled away from real-world interactions. In more serious cases, the cycle of validation and comparison can deepen emotional distress, contributing to isolation and even self-harm. What makes this especially concerning is that these outcomes are not caused by a single feature, but by a system of reinforcing loops that continuously pulls users back in, often without them realizing it.

Being aware of the mechanism behind Social Networks can be the first step toward escaping the addictiveness loop. If you know someone who is addicted to Social Networks, share this post with them!

Lessons for Software Design

Although the bad side effects of Social Networks are undeniable, their high user engagement is also a dream for any software company. As software creators, we all want our applications to be used daily, especially as competition grows every day. There is still a way to apply the mechanisms observed in Social Networks for good purposes. This post is already long, so I will continue that part in the next installment. To not miss out, please subscribe so you get a notification when the next part is available:


9 Reasons Why You Never Hit Your Goals (and what actually works)

Everyone loves setting goals. New year, new plans. New week, new habits. New project, new ambitions. But if we look honestly, most goals fail, don’t they? Ironically, they don’t fail because they’re too hard; they fail because they’re vague, emotional, or just unrealistic. If you have missed your goals too many times, this post is for you. But this post will not give you motivation; it will expose your misperceptions, and knowing these misperceptions is the first step toward your goals.

1. You are making wishes, not goals

This is the most common misperception, visible whenever you look into people’s to-do lists or new year resolutions. People usually write wishes instead of goals and are completely unaware of the difference. For example, it is easy to find lines like these in someone’s to-do list: “be better at something”, “build a great app”, “be rich”, “be happy”, “be confident”. These lines won’t produce any outcome; they are just wishes in a world without a genie. A goal must be Specific and Measurable, and because it is Measurable, it becomes Achievable.

To be Specific and Measurable, each goal must be written as a simple sentence using one number, one noun, one verb, and a deadline. Examples:

  • deliver 1 feature every 3 days
  • publish 10 blog posts in quarter 1
  • run a total of 15 km each week
  • save 50% of salary each month
  • read 1 page of any book each day

If you can’t measure your actions, you can’t achieve the outcome. Focus on numbers and get comfortable scoring yourself. Be the project manager of your life: avoid vague words, be specific. Writing to-dos in this format is the first step toward realizing any goal. When all the metrics are met, you score a goal! When a metric is not met, at least you will know why.
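
For the developers reading: the format is concrete enough to model in code. Here is a toy sketch (the field and method names are my own invention) of a goal written as one verb, one number, one noun, and a deadline, with progress tracked as a score:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Goal:
    """A goal in the number + noun + verb + deadline format."""
    verb: str        # e.g. "publish"
    number: float    # the measurable target, e.g. 10
    noun: str        # e.g. "blog posts"
    deadline: date
    achieved: float = 0.0

    def score(self) -> float:
        """Progress as a fraction of the target metric, capped at 100%."""
        return min(self.achieved / self.number, 1.0)

g = Goal(verb="publish", number=10, noun="blog posts", deadline=date(2025, 3, 31))
g.achieved = 4
print(f"{g.verb} {g.number:g} {g.noun} by {g.deadline}: {g.score():.0%}")
```

A wish like “be better at writing” cannot even be instantiated in this shape, which is exactly the point of the format.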

2. You expect outcomes to happen overnight

Good things take time. That is the one rule for any goal. Outcomes come from concentration, commitment, and consistency, not from your commands. An outcome is compounded from tiny results each day.

Setting goals creates an illusion of fast results. When you set a goal, your brain immediately imagines the outcome; if you have an imaginative brain, you might lock yourself inside that imaginary world without noticing the boundary with reality. Although the outcome is imaginary, it still triggers dopamine, and you “feel” success. That mental picture feels real, but it skips the process entirely. You are effectively borrowing dopamine from your future, and every loan has to be repaid. When life pulls you back to reality, when someone or something reminds you of your still-unfinished goals, a crack appears in that world, and the crack hurts a lot. It triggers stress hormones as well. This may explain why many people avoid mentioning their goals again, or get angry when someone else brings them up.

Expecting fast outcomes creates a dangerous loop: you start strong, you don’t see results quickly, you feel discouraged, then you quit or switch goals. This loop wastes your time, energy, and attention, and it is more harmful than you think. Not because the goal was wrong, but because your timeline was unrealistic. Treat your goals as seeds: they grow slowly, but surely. The most meaningful outcomes come late because they require many conditions to align; they usually arrive after weeks or months of invisible effort. So when setting the deadline for a goal, give it time. Counting in months is a good starting point.

3. You depend on emotions, not habits

Motivation! It is the emotion everyone loves. People even pay significant money just to attend meetings that “sell” motivation, and then that motivation expires the moment they leave. It’s expensive, and it smells like a scam. Your brain doesn’t need that fake, external motivation.

Motivation is unreliable. At the beginning of any goal, motivation feels strong: you’re excited, focused, ready to act. But motivation is temporary. Some days you’ll feel tired, distracted, or simply won’t feel like doing anything. You can burn out. No one will compliment you every time, everywhere, and not everyone understands your goals equally. If your actions depend on how you feel, your progress will always depend on external conditions; in other words, you lose control of your life.

Habits solve this problem by removing the need to decide what to do every day. Build a daily routine that makes it harder to fail than to succeed. No debate. No negotiation. You don’t rely on energy; you rely on the structure of your day. Remember the sleepy days, rainy days, hot-as-hell days when you still had to finish five classes before going to bed? That is a model of how goals get completed. Structure your timeline and turn it into habit. Research suggests that repeated actions can become habits after roughly three weeks; past that threshold, you act toward your goal almost unconsciously.

Habits are not just actions; they shape your identity. You don’t “try to work out”; you just run every morning at 5 AM. You don’t “try to learn”; you just study every night at 8 PM. Be specific about when you do what, and repeat it daily. Spend one month testing this theory and see (not feel) the result.

4. You review others too much instead of yourself

Comparison! It is not an easy feeling, but everyone does it unconsciously, at least for a while when they are young. It is normal behavior and can be a source of motivation. But since we have already learned not to depend on motivation, comparison is unnecessary too. Comparison slows you down more than you think.

It’s easy to spend time analyzing others: what they’re doing, how fast they’re growing, what strategies they use. It feels like learning, even productive, since you are absorbing new information. To some extent this gives you hints and keeps you moving, but in excess it backfires by wasting your time. Your progress stands still while you compare. At worst, two things happen when you focus too much on others: you feel too far behind and get discouraged, or you copy blindly and lose direction. You might give up, or try doing too many things at once, which is a guaranteed-to-fail strategy. You end up reacting instead of building, measuring your progress against someone else’s timeline, resources, and starting point, none of which you fully see.

Progress doesn’t come from observing others; it comes from observing yourself. If you don’t review your own actions, you miss what worked, what didn’t, where you wasted time, and where you improved. Without this loop, you are doomed to repeat the same mistakes; never expect the same methods to produce different results. Ignore others and focus on yourself. Track your progress toward your goals and update it daily. Don’t care what others post on profiles like LinkedIn or Facebook; often they lie, or at least exaggerate about themselves. Real professionals show their results, not lines on their CV.

5. You work hard, but not deep

Suppose you already have Specific and Measurable goals, with the right habits serving them, and you completely ignore other people on social networks. Now you are going to be busy: each day you answer messages, switch tasks, react to notifications, read a few articles, check some news, write some code, draw something, and you have repeated this routine for ten days. You were busy the whole time, yet none of the metrics you set were met. What went wrong? Isn’t the right goal enough? This kind of busyness is not productive; it is actually distraction. It is not “deep” enough. Deep work looks more like this: you spend seven continuous days making the first version of your app, then eight hours completing a drawing, then eight hours collecting relevant information from articles and news, then two hours replying to messages and notifications. The same ten days spent, but the result is different.

Deep work means focusing on one goal over a long timespan, long enough to deliver a meaningful result. Don’t switch tasks too much: your brain is a single-threaded machine, not a multi-threaded one. Switching tasks can make you feel busy and productive, but in fact it creates movement, not meaningful progress. Every switch wastes time and energy, because your brain has to reload the context and loses the short-term memory that is essential for solving difficult problems.

People tend to choose working hard over working deep. Hard work is easy to see and full of small wins, and the human brain loves that feeling. Deep work requires patience, honesty, and creativity. It does not give the instant satisfaction of small wins the way hard work does; instead it forces you to confront what you don’t know and where you struggle, and to accept how slow progress can be, which is not a comfortable feeling for most people. By default, the brain picks the easy option. Working hard is good, but it is not enough to complete goals, at least not meaningful ones!

6. You sense, but not score

How do you know you are doing good work?

Say you have already spent a month focusing on one goal: making a simple piece of software, building a website, or learning a new language. You barely switched tasks for that month. You focused on one goal. Good job, you are very close. Now it is time to score yourself, and a score is a number. For the sample goals above, gather data such as: how many users want to use your new software, how many people visit your website, or whether you can pass an official language test yet. Does that number meet the metric you set? If yes, wonderful. If no, let’s find out why!

It is completely okay not to hit 10/10 on a goal. That doesn’t even matter; what matters is honesty with yourself, which will adjust your strategy once it sees those scores. Scores act as feedback from reality. They measure the gap between your assumptions and reality and can tell you whether you are on the right track. If the goal is not met even though you escaped the five misperceptions above, the problem lies in your method, your approach, that is, in how you work on specific tasks. There must be missing steps, overdone steps, wrong assumptions, or underrated steps.

This scoring habit measures effectiveness. If your method is not effective yet, experiment with other methods and gather scores again. After a few tries, the scores can tell you what works, what doesn’t, and what only feels like it works. Focus on what actually works!

Scoring yourself has another psychological effect: it trains your brain to stay open-minded and flexible as you adapt multiple methods to the same goal, and it strips away some cognitive biases. The human brain carries many cognitive biases; the common ones that scoring yourself can fix are:

  • Confirmation bias: You notice only evidence that supports what you already believe and ignore what contradicts it.
  • Self-serving bias: You naturally credit yourself for what works and blame external factors for what doesn’t.
  • Effort justification: You assume that because you put in effort, you’re making progress.
  • Recency bias: You overvalue what happened recently.
  • Optimism bias: You overestimate how well things are going, or will go.
  • Availability bias: You judge based on what’s easiest to remember.
  • Consistency bias: You resist admitting that your current approach isn’t working because you’ve already committed to it.

Escape those biases, and you will be stronger than ever!

7. You put yourself in wrong environment

Now you are strong and the goal is right, but progress is slow. What’s wrong now?

Goals are like seeds: good seeds can still grow slowly in the wrong environment. Your progress is the same; it speeds up or slows down depending a lot on where you sit, what you eat, and who you collaborate with. Place, food, and supporters are the parts of your environment that affect your goals the most.

Place, where you actually do the work. It can be an office, your home, or a certain kind of coffee shop. If your space is full of distractions, noise, or easy escapes, your focus will always be fragile, and you’ll need constant willpower just to do basic work. Know yourself and measure yourself to find the place where you do your most productive work. Some people love working at an office, some at home, some at a coffee shop, some near the sea. Each person’s temperament decides which places are productive for them, and sometimes a particular fear decides which place offers no easy escape, leaving no retreat option so that the only choice is moving forward.

Food, your hidden performance system. What you eat doesn’t just affect your health; it affects your energy, clarity, and consistency. Feeding your brain low-quality fuel can lead to energy crashes, brain fog, and inconsistent output. You might think you lack discipline when sometimes you’re just under-fueled. You don’t need a perfect diet, but if your goal requires focus and long-term effort, your body needs a stable energy supply. So always feed yourself well, then work.

Supporters, the people who shape your standards. You don’t need a big network, but you do need the right people. The people around you influence what you consider “normal”, how high you aim, and how you respond to feedback. If your environment tolerates excuses, you’ll make them; if it values growth, you’ll feel pressure to improve. Support doesn’t always mean encouragement or compliments. Much of the time these people won’t even understand what you are doing; it’s simply who they are. Support can be accountability, it can be honesty, and sometimes it’s just being around people who take things seriously. The best supporter is someone who has already made it, who has already achieved whatever goals you set. Learning from them is best.

8. You have too many goals

Now you are super strong: your mind can focus, your body is full of energy, your environment fits. The goal is closer than ever. Just don’t take on too many goals!

At this point some other cognitive biases emerge, and you need to fix them too:

  • Shiny Object Syndrome: You’re attracted to new ideas simply because they’re new, and you slip back into switching tasks.
  • Opportunity Cost Neglect: You focus on what you gain from a new goal but ignore what it costs, so you overload yourself without realizing what you sacrificed.
  • Overconfidence Bias: Because things are going well, you assume you can handle more, underestimating the cognitive load and splitting your attention.
  • Planning Fallacy: You underestimate how long things actually take, so you stack multiple goals on unrealistic timelines.
  • FOMO (Fear of Missing Out): You get more excitement from starting than finishing, so you end up with a bunch of half-completed goals rather than completed ones.
  • Identity Expansion Bias: You want to become multiple versions of yourself at once, but then no one can identify who you really are, and eventually you lose opportunities because people don’t remember complex things.

Less is more! At this level, what you need is not adding goals but filtering goals. When everything is working, your biggest risk is not failure; it’s dilution. Time and energy are limited resources, and most of the time life provides just enough to complete one goal: the one that gives you your identity.

9. You don’t collaborate

Teamwork? No, it is not mandatory here.

People often use “collaboration” and “teamwork” as if they’re the same. They overlap, but they solve different problems.

Teamwork is shared execution: working as a unit toward a shared outcome. Roles are defined, responsibility is distributed, and success or failure is collective. No single person owns everything; the team does, and team members may change.

Collaboration is combining strengths: bringing people with different skills together to solve a problem. You stay responsible for your goal and involve others only when it adds value, that is, when they solve what you can’t. It’s a flexible and often temporary arrangement. You can hire or consult experts in the short term to overcome things outside your expertise. For example, when building software you can hire a designer for a few months to produce the final UI instead of drawing it yourself; when writing, you can have friends review and challenge your ideas. You’re still the owner; others enhance your work. And don’t forget to pay them, or help them back!

If you’ve already chosen one goal, you don’t necessarily need a full team. What you likely need is targeted collaboration. Jumping straight into teamwork can actually slow you down, because teamwork brings more coordination, more dependencies, and less flexibility. Stay the owner, but don’t stay isolated.



Merge vs Rebase: which is better?

I usually prefer Merge over Rebase, for safety first.

Merge and Rebase are two ways of combining changes from different branches in Git (and therefore on platforms built on it, such as GitHub). Since Merge seems to be enough to get things done in every case, why does Git include a Rebase command at all?

The answer relates to a team’s preference about commit history. Git maintains a tree of commits per repository, and each commit is a snapshot of all files. It is important to notice that Git stores project snapshots, not the diffs we see with the command git diff; diffs are calculated on the fly when we compare two commits. This design affects how Merge and Rebase actually behave under the hood:

How does Merge actually work?

When using git merge, for example to merge branch A into branch B, given that branch B was created from branch A, Git performs these steps:

  1. Find the common ancestor snapshot, i.e. the commit from which branch B was created.
  2. Compare the latest snapshot of branch A with the ancestor snapshot to get diff D1 (branch A’s tip is MERGE_HEAD).
  3. Compare the latest snapshot of branch B with the ancestor snapshot to get diff D2 (branch B’s tip is HEAD).
  4. Apply diffs D1 and D2 to the ancestor snapshot, then store the resulting merged snapshot in a new commit on branch B.

Because commits are snapshots:

  • Git doesn’t need to replay all intermediate diffs.
  • It just looks at 3 snapshots: the ancestor, HEAD, and MERGE_HEAD.

That’s why merging large histories is fast and doesn’t rewrite old commits: the snapshots are stable and immutable. If conflicts happen during a merge, they usually need to be resolved only once, because exactly three snapshots are taken into account and the output is a single new snapshot.
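The three-snapshot behavior can be reproduced in a throwaway repository. This is a sketch, assuming git 2.28+ (for `git init -b`) and run in a scratch directory:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -qb main demo && cd demo
git config user.email demo@example.com
git config user.name demo

# The ancestor snapshot both branches will share.
echo base > file.txt && git add . && git commit -qm "ancestor"

# Branch "feature" diverges with its own commit...
git checkout -qb feature
echo f > feature.txt && git add . && git commit -qm "feature work"

# ...while "main" moves on independently.
git checkout -q main
echo m > main.txt && git add . && git commit -qm "main work"

# Three-way merge: ancestor, HEAD (main) and MERGE_HEAD (feature).
git merge -q --no-edit feature
ls   # feature.txt  file.txt  main.txt
```

Both diffs land in one new merge commit; the old commits on either branch are untouched.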

How does Rebase actually work?

When using git rebase, for example to rebase branch B onto branch A, given that branch B was created from branch A, Git performs these steps:

  1. Calculate the diff between each commit (snapshot) of branch B and its parent commit. This effectively produces a series of “patches” describing, step by step, the changes made on branch B.
  2. Reapply those patches on top of the latest snapshot of branch A.
  3. Create new commits with new IDs (i.e. new snapshots).

So each time we rebase branch B onto branch A, new commits (snapshots) are created as if we had just made those changes on top of branch A’s snapshots. Because the diffs are reapplied on every rebase, any conflicts may have to be resolved again and again. And this is why I prefer Merge over Rebase.
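The commit-rewriting can also be seen in a throwaway repository. Again a sketch, assuming git 2.28+ and a scratch directory:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -qb main demo && cd demo
git config user.email demo@example.com
git config user.name demo

echo base > file.txt && git add . && git commit -qm "ancestor"

# Two commits on the feature branch...
git checkout -qb feature
echo f1 > f1.txt && git add . && git commit -qm "feature step 1"
echo f2 > f2.txt && git add . && git commit -qm "feature step 2"

# ...while main moves on.
git checkout -q main
echo m > main.txt && git add . && git commit -qm "main moved on"

# Rebase replays the two feature diffs on top of main's latest snapshot.
git checkout -q feature
old_tip=$(git rev-parse HEAD)
git rebase -q main
new_tip=$(git rev-parse HEAD)
[ "$old_tip" != "$new_tip" ] && echo "feature's commits were rewritten with new IDs"
```

After the rebase, main is an ancestor of feature and the history reads as one straight line.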

So, why does Rebase exist?

Rebase is mostly used when we have a reason to control how the commit history looks on a branch. This is useful when a team prefers a linear commit history that is easier to read and does not care what actually happened, such as when a branch was created or what was merged. Because it rewrites the history of a branch, Rebase is not recommended on the main branch, due to the risk of losing commits and resolving the same conflicts multiple times. Rebase is safer on a feature branch created from the main branch, and most importantly, that feature branch should be short-lived. On a long-lived feature branch, re-resolving conflicts can happen frequently, which slows down development and frustrates developers.

Conclusion

In summary, my suggestion on Merge vs Rebase is:

  1. Use Merge by default, for safety first.
  2. If you are working on a feature branch (NOT main or master), want a nicer commit history on that branch, and its development time is short, then Rebase is fine.

The story behind the ads you see on Facebook

TO PROTECT YOUR PRIVACY!!

Have you ever wondered why powerful tools like Google Search, Gmail, Facebook, X, TikTok, etc. are all free to use?

When you see they sell nothing, then you are what is being sold!

Advertising is the main source of profit that keeps most online tools free nowadays. Advertising is not bad in itself: it brings information to us proactively, so we don’t have to spend time investigating market options. But because of this indirect method of advertising, we can’t know who is actually behind it, and in practice it has become an ideal channel for scammers to lure users on social networks. Many students and elderly people have become victims because they have the least online knowledge and experience. And as many companies’ systems have been infiltrated, hacked, and looted, users’ personal data has leaked globally. Combined, these problems harm our privacy!

This post reveals some methods used in the online advertising industry on Facebook; the same techniques apply on every other social network as well.

How can anyone make you read something while you use Facebook?

Assume we all use some banking or non-banking applications, and on the news we hear that the companies behind those applications get hacked and their data leaks onto the dark web. Dark web sites operate outside the law and are ideal places for criminal activity, with selling hacked data being the most popular. Today anyone can buy leaked data using Bitcoin or Ethereum to hide their identity completely. When someone has our names, phone numbers, emails, or even addresses, they can find and stalk us on social networks such as Facebook, Instagram, or X, then pay to run a targeted ads campaign using the information they know about us, and let Facebook’s algorithm handle presenting those ads on our phones.

How can advertisers control who will see their ads:

1. Custom Audiences

Facebook allows anyone to use phone numbers, emails, or names to directly target a specific user:

  • Advertisers upload a list of customer data (CSV) to Facebook Ads Manager.
  • Facebook matches the data (phone/email/name) with existing user accounts.
  • Once matched, only those users in the list will see the ads.

This is the most direct method of delivering ads to a known individual.

2. Lookalike Audiences

Based on a seed audience (e.g., 100 users with names and emails), Facebook finds other users with similar behavior.

This is indirect targeting, used to expand reach to similar users even when advertisers don’t know their names or emails.

3. Geotargeting / Geofencing

Facebook allows anyone to use a user’s location (address or GPS) to limit where an ad appears. This is typically used by physical stores: if you have ever noticed that passing by certain stores makes their ads more likely to show up in your news feed, this is why.

4. Interest, Demographic, and Behavioral Targeting

When no personal data is available, Facebook allows anyone to filter audiences by:

  • Age, gender, region, job title
  • Online behavior (e.g., searching for a laptop, following specific pages)
  • Past engagement with posts, videos, or websites

This is an indirect way, but it still gets ads in front of us.

Lesson

By using the methods above, anyone, whether a real advertiser or a scammer, can push messages in front of us while we scroll Facebook, YouTube, Instagram, and the rest. Social network applications on the one hand create free tools that soothe people’s desire for connection, and on the other hand sell privacy to anyone willing to pay.

Although most ads are not harmful, make sure to share just enough on social networks, to avoid the worst situations, where scammers use ads too!

Microservices Tradeoffs and Design Patterns

Setting aside the reasons why we should or should not jump into Microservices (covered in a previous post), here we talk about the tradeoffs of Microservices and the design patterns born to deal with them.

Building Microservices is not as easy as installing some packages into your current system; actually, you will install a lot of things :). The beauty of Microservices lies in the separation of services, which lets each module be developed independently and keeps each module simple. But that separation is also the cause of new problems.

More I/O operations?

The first issue we can easily recognize is the emergence of I/O calls between the separated services. It looks exactly like integrating our system with third-party services, except this time all those “third parties” are our own internal services. Getting the API calls right takes effort to document and synchronize knowledge between the teams handling different services.

But here is the bigger problem: if every service has to keep a list of other services’ addresses (to call their APIs), they become tightly coupled, strongly dependent on each other, and that destroys the promised scalability of Microservices. This is where the Event-Driven style comes to the rescue.

Event Driven Design Pattern

Example tools: RabbitMQ, ActiveMQ, Apache Kafka, Apache Pulsar, and more

The main idea of this pattern is that services do not need to know each other’s addresses. Each service just needs to know an event pipe, or message broker, and entrusts it with distributing its messages and feeding back data from other services. There are no direct API calls between services; each service only fires events into the pipe and listens for events coming from the pipe.

Along with this design pattern, the mindset on how to store data needs to evolve too. We no longer store only the STATE of entities; we also store the stream of EVENTs that produced that state. This storage strategy is also very effective for dealing with concurrent modifications of the same entity, which can make data inconsistent. There are two approaches to storing and consuming events, the Queue and the Log, which we will explore in later topics.
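As a minimal sketch of the idea, here is a hypothetical in-memory broker standing in for RabbitMQ or Kafka; the event names and services are invented for illustration. Note that neither service knows the other’s address, only the broker:

```python
from collections import defaultdict

class Broker:
    """Toy in-memory event pipe; a real system would use RabbitMQ, Kafka, etc."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self.subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        # Deliver the event to every service listening on this event name.
        for handler in self.subscribers[event_name]:
            handler(payload)

broker = Broker()
shipped = []

# The shipping "service" only knows the broker, not the order service's address.
broker.subscribe("order_placed", lambda order: shipped.append(order["id"]))

# The order "service" fires an event instead of calling shipping's API directly.
broker.publish("order_placed", {"id": 42, "item": "book"})
print(shipped)  # [42]
```

Swapping the broker implementation (or adding a second subscriber) requires no change to the publishing service, which is exactly the decoupling the pattern promises.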

A More Complex Query Mechanism?

Obviously there will be moments when we need to query data that requires cooperation between multiple services. In the old monolithic style, when all data of all services lived in the same database, writing an SQL query was simple. In the Microservices style it isn’t, because each service guards its own database, as recommended practice. We suddenly can’t JOIN tables; we lose the out-of-the-box rollback mechanism of database transactions when something goes wrong while storing data; and we may see longer delays while each service waits for data from other services. These obstacles make Event-Driven a “must have” design for a Microservices system, since it is the foundation for the patterns that solve this querying issue, most commonly Event Sourcing, CQRS, and Saga.

Event Sourcing

The terms Event-Driven and Event Sourcing can be confusing. Event-Driven is about the communication mechanism between services, whereas Event Sourcing is a coding approach inside each service for retrieving the state of an entity: instead of fetching the entity from the database, we reconstruct it from an event stream. The event stream can be stored in many ways: in a database table, in Event-Driven components such as Apache Kafka or RabbitMQ, or in a dedicated event-stream database like EventStore. This method gives developers a new responsibility: they must create and maintain the reconstruction logic for each type of entity.
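An illustrative sketch of such reconstruction logic (the event shapes here are hypothetical, not from any specific framework): the current balance of an account is never read from a table, but replayed from its event stream.

```python
def reconstruct_balance(events):
    """Rebuild the current state of an account by replaying its event stream."""
    balance = 0
    for event in events:
        if event["type"] == "deposited":
            balance += event["amount"]
        elif event["type"] == "withdrawn":
            balance -= event["amount"]
    return balance

# The stored history of the entity: events, not a final state.
stream = [
    {"type": "deposited", "amount": 100},
    {"type": "withdrawn", "amount": 30},
    {"type": "deposited", "amount": 5},
]
print(reconstruct_balance(stream))  # 75
```

Because the history is append-only, two concurrent writers simply append two events; the conflict is resolved at replay time instead of being lost in a last-write-wins update.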

As mentioned in the previous section, this strategy is helpful in concurrent data-modification scenarios, for example the collaboration features seen in Google Docs or Google Sheets, or simply when two users hit “Save” on the same form at nearly the same moment. But reconstruction is not friendly to more complex queries, the SELECT ... WHERE kind that is so natural in traditional databases like Oracle or PostgreSQL. To cover this drawback, each service usually also maintains a traditional database that stores entity states and serves queries. This combination forms a new pattern called CQRS (Command and Query Responsibility Segregation), where reads and writes of an entity happen against different databases.

CQRS (Command and Query Responsibility Segregation)

As mentioned above, this pattern separates read and update operations for a data store. A service can use the Event Sourcing technique for updating an entity, or use an in-memory database such as H2 to quickly store updates, while persisting the computed entity states back to, say, a SQL database as quickly as possible. This pattern prevents data conflicts when many updates to a single entity arrive at the same time, while also keeping a flexible interface for querying data.

This pattern is effective for scaling, since we can scale the read database and the write database independently, and it fits high-load scenarios because write requests complete more quickly: there are fewer calls to a database whose internal locking can introduce delays. A quicker response means more room for other requests, especially in thread-based server technology such as Servlets or Spring.
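A toy sketch of the split (the stores here are plain Python structures; in practice they would be separate databases, and the projection step would be asynchronous): commands append to a write-side event log, while queries hit a denormalized read model.

```python
# Write side: an append-only event log (an event store or H2 in practice).
event_log = []

# Read side: a denormalized view optimized for queries (a SQL/NoSQL DB in practice).
read_model = {}

def project(event):
    """Fold one event into the read model; real systems run this asynchronously."""
    read_model[event["id"]] = read_model.get(event["id"], 0) + event["change"]

def handle_command(entity_id, change):
    """Commands never touch the read store directly; they only append events."""
    event = {"id": entity_id, "change": change}
    event_log.append(event)
    project(event)

handle_command("cart-1", +2)   # add two items
handle_command("cart-1", -1)   # remove one
print(read_model["cart-1"])    # 1
```

Since the two sides only meet through the projection, each can be scaled, indexed, or even re-implemented independently, which is the scaling benefit described above.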

A drawback of this pattern is coding complexity. With more components joining the process, there are more problems to handle, so it is not recommended when the domain or business logic is simple; simple features fit nicely with the traditional CRUD method. Overusing anything is not good. I also want to repeat that if the whole system has no special load requirements or write-heavy features, switching to Microservices is not recommended either (the reason is here).

Saga

A saga is a long heroic story, and the story of transactions inside Microservices is truly heroic and long. A transaction is an important database feature that maintains data consistency by preventing partial failure when updating entities. With distributed services, we have distributed transactions, and the mission becomes how to coordinate those separate transactions to regain the properties of a single transaction, ACID (atomicity, consistency, isolation, durability), across distributed services. Put simply: Saga is a design pattern that rebuilds transactions for Microservices.

The Saga pattern is about what the system must do if a failure occurs inside a service: it should somehow reverse the previously successful operations to maintain data consistency. The simplest way is to send messages asking other services to roll back certain updates. To build a Saga, developers may have to anticipate the many ways an operation can fail. Higher-level rollback mechanisms involve techniques such as semantic locks or entity versioning, which we can discuss in other topics. The point here is that Sagas also add a lot of complexity to the source code. The recommendation is to divide services well so you avoid writing too many Sagas; if some services are tightly coupled, consider merging them back into one monolithic service, because Saga is less suitable for tightly coupled transactions.
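The compensate-on-failure idea can be sketched as an orchestrated saga (the step names are invented; real steps would be messages to other services, and real sagas also handle failures of the compensations themselves):

```python
def run_saga(steps):
    """Run (action, compensation) pairs in order.

    On any failure, run the compensations of the already-completed
    steps in reverse order to restore consistency.
    """
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()  # roll back previously successful operations
        return "rolled back"
    return "committed"

log = []

def fail_shipping():
    raise RuntimeError("shipping failed")

steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (lambda: log.append("charge card"),   lambda: log.append("refund card")),
    (fail_shipping,                       lambda: None),
]
print(run_saga(steps))  # rolled back
print(log)  # ['reserve stock', 'charge card', 'refund card', 'release stock']
```

Note the reversed order of compensations: the card is refunded before the stock is released, mirroring how a database transaction unwinds.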

More Deployment Effort?

Back in the monolith realm, deployment meant running a few command lines to build an API instance and a client-side application. Going with Microservices, we obviously have more than one instance, and we need to deploy each of them, one by one.

To reduce this effort, we can use CI/CD tools such as Jenkins, or one of the cloud-based CI/CD services available out there. We can even write our own tools; it won’t be difficult. But there are still more issues than just running command lines.

Log Aggregation

Logging is a vital practice when building any kind of application: it provides a picture of how the system is doing and helps troubleshoot issues. Checking logs on separate services is inconvenient in Microservices, so it is recommended to stream logs to one center. Many tools are dedicated to this purpose nowadays, such as Graylog or Logstash. The most famous stack for collecting, parsing, and visualizing logs is currently ELK: Elasticsearch + Logstash + Kibana. The drawback of these logging technologies is that they need quite a lot of RAM and CPU, mostly to support searching logs. For small projects, preparing a machine strong enough to run the ELK stack may not be affordable: Logstash alone wants about 1-2 GB, Graylog requires Elasticsearch and thus roughly 8 GB of RAM and a 4-core CPU, and a full ELK deployment needs even more.

Health Check & Auto restart

Besides logging, we must also track the availability of services. Each service may expose its own /healthcheck API, which a tool can call periodically to check whether the service is alive. Alternatively, we can use proactive monitoring tools such as Monit or Supervisord to watch ports and processes, and configure their behavior when errors occur, such as sending emails or notifications to a Slack channel.
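A periodic health-check poller can be sketched in a few lines. Everything here is hypothetical for illustration: the service URL, and the injected `probe` function, which in a real tool would be an HTTP GET with a short timeout against the /healthcheck endpoint:

```python
import time

def watch(services, probe, on_failure, interval=30, rounds=1):
    """Periodically probe each service; call on_failure when a probe fails."""
    for _ in range(rounds):
        for name, url in services.items():
            if not probe(url):
                on_failure(name)  # e.g. send an email or a Slack notification
        time.sleep(interval)

alerts = []
services = {"orders": "http://orders.internal/healthcheck"}  # hypothetical URL

# A fake probe that always reports failure, to show the alerting path.
watch(services, probe=lambda url: False, on_failure=alerts.append,
      interval=0, rounds=1)
print(alerts)  # ['orders']
```

In production you would run this as a daemon (or let Monit/Supervisord do the equivalent) rather than a fixed number of rounds.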

Besides health checks, each service should restart automatically when something takes it down. We can configure a process to start whenever the machine boots by adding scripts to /etc/init.d or /etc/systemd on most Linux servers. For processes, we can use Docker to bring services back up right after they go down. For the machine itself: on physical hardware, enter the BIOS and enable auto-restart when power returns; on cloud machines, there is nothing to worry about.

These techniques are recommended not only for Microservices but for any monolithic system as well, to ensure availability.

Circuit Breaker

This one is for when bad things happen and we have no way to deal with them but to accept them; there are always such situations in life. For some reason, one or many services go down or become very slow due to network issues, making users wait a long time after a single button click. Most users are impatient: they will retry the pending action, a lot, and the system can get even worse. This is when a Circuit Breaker takes action. Its role is similar to an electrical circuit breaker: to prevent catastrophic cascading failure across the system. The circuit breaker pattern lets you build a fault-tolerant, resilient system that degrades gracefully when key services are unavailable or have high latency.

The Circuit Breaker is placed between the client and the actual servers hosting the services. It has two main states, Closed and Open, with these rules:

  • In the Closed state, the Circuit Breaker simply forwards requests from clients to the services behind it.
  • Once the Circuit Breaker detects a failed request or high latency, it changes its state to Open.
  • In the Open state, the Circuit Breaker immediately returns errors for clients’ requests. The user learns about the failure right away, which is better than being left waiting, and the load on the system is reduced.
  • Periodically, the Circuit Breaker makes a retry-call to the services behind it to check their availability. If they are healthy again, it changes back to Closed; if not, it remains Open.
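The rules above can be condensed into a toy breaker. This sketch trips on the first failure and retries on a timer; production breakers (Hystrix-style) use failure-rate thresholds and a distinct half-open state instead:

```python
import time

class CircuitBreaker:
    """Toy two-state breaker: Closed forwards calls, Open fails fast."""
    def __init__(self, retry_after=30):
        self.state = "closed"
        self.opened_at = 0.0
        self.retry_after = retry_after  # seconds between retry-calls

    def call(self, service):
        if self.state == "open":
            if time.time() - self.opened_at < self.retry_after:
                # Fail fast instead of making the user wait on a dead service.
                raise RuntimeError("circuit open: failing fast")
            # Otherwise fall through: this call doubles as the periodic retry.
        try:
            result = service()
        except Exception:
            self.state = "open"          # a failure trips the breaker
            self.opened_at = time.time()
            raise
        self.state = "closed"            # a success closes it again
        return result

breaker = CircuitBreaker(retry_after=60)

def flaky():
    raise TimeoutError("service is down")

try:
    breaker.call(flaky)            # the first failure trips the breaker...
except TimeoutError:
    pass
print(breaker.state)               # open
try:
    breaker.call(lambda: "ok")     # ...so the next call is rejected instantly
except RuntimeError as exc:
    print(exc)                     # circuit open: failing fast
```

After `retry_after` seconds, the next call would be allowed through, and a success would flip the breaker back to Closed.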

Luckily, we may not have to implement this pattern ourselves. There are tools available, such as Hystrix (part of Netflix OSS) or Istio (the community option).

Service Discovery

As mentioned in the Event-Driven section, services inside a Microservices system don’t need to know each other’s addresses if they use an event channel. But what if the team isn’t familiar with the event style and decides not to use it, or the services are simple enough to just expose REST APIs? Using Event-Driven is not a must, so in that case, how do we solve the addressing problem between services?

When the system needs to scale, instances of one or many services will be added, removed, or simply moved around. To let every service know the addresses (IP, port) of the others, we need a man in the middle that records service addresses and keeps them up to date. This module is called Service Discovery, and it is usually used together with load-balancing modules. We may discuss this more in other topics.
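At its core, the registry is just a mutable map from service names to live addresses. A toy sketch (instance addresses invented; etcd, Consul, or ZooKeeper play this role in practice, with leases and health checks on top):

```python
class ServiceRegistry:
    """Toy in-memory service registry: register, deregister, look up."""
    def __init__(self):
        self.instances = {}

    def register(self, service, address):
        self.instances.setdefault(service, []).append(address)

    def deregister(self, service, address):
        self.instances[service].remove(address)

    def lookup(self, service):
        # A load balancer would pick one instance here; we return them all.
        return self.instances.get(service, [])

registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.register("orders", "10.0.0.6:8080")
registry.deregister("orders", "10.0.0.5:8080")   # an instance went away
print(registry.lookup("orders"))  # ['10.0.0.6:8080']
```

Callers query the registry by name at call time, so instances can move without any other service changing its configuration.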

We don’t need to create this component from scratch either. There are tools out there such as etcd, Consul, and Apache ZooKeeper. Give them a try.

Ending

The above is an overview of what we need to know when moving to Microservices. Make sure you research them all before really starting. Each pattern has its pros and cons, and mitigations that other topics will cover. Thanks for reading!!

Is Microservices good ?

Yes and No.

Yes when we are facing the problems it solves, and No when we blindly follow the “trend”.

Once, my boss read somewhere about how amazing Microservices is, and he instantly asked the development team to “do Microservices”. He is purely a businessman, but he always wants to apply the newest technology. Lucky me, though switching from one system design to another is also a challenge. It sounded cool to us too, so boss and developers quickly agreed: let’s do Microservices.

What is Microservices ?

Microservices, clearly stated, is a system design approach; I personally don’t count it as a technology. A Microservices system is itself composed of multiple technologies, each solving a business problem or a problem emerging inside the Microservices itself. The opposite approach is called a Monolith, an all-in-one big service, which is what most systems today are: a single API server plus a database. Switching to Microservices, technically, means dividing the functions of the one big service into multiple small services running independently, wiring them together, and then choosing the fittest technologies for each small service. Each technology here can be a programming language, a framework, a piece of software, a third-party service, or a tool.

In its simplest form, a Microservices system can be thought of as multiple monolithic systems: each contains its own server and database and exposes its own API gateway, and they communicate by calling each other’s APIs directly or by listening to shared event channels, depending on the use case.

Microservices is NOT a new skill set. Because a Microservices system is composed of multiple monolithic services, developers must be good at building monoliths first.

What problems does Microservices solve, and NOT solve?

There is a reason every boss wants to move to Microservices: they think it is good. But I don’t think everyone understands WHAT it is good for. Microservices is NOT simply a better design than the others. It is an adaptation to overcome the problems that emerge when a system grows huge, in both traffic and logic complexity. So if your system is not supposed to be the next Amazon or Netflix, a monolithic design is fine, since it is much simpler to set up and maintain. A few thousand users with a few hundred connections per second is within the capability of most technology stacks nowadays, such as Spring, Node, Ruby on Rails, or PHP. It is hard to estimate the threshold, because each system has different features; the best way to find your system’s maximum capability is a stress test, basically sending as many requests as possible and analyzing the response times. Once you know your system’s capability, you have a number to reference when deciding whether to move to Microservices. Microservices is a journey; only embark when you are well prepared.
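A stress test can be sketched as firing many concurrent requests and summarizing the latencies. The `send_request` below is a stand-in that simulates a 10 ms server, so the sketch runs offline; a real test would make HTTP calls against your API, or use a dedicated tool such as ab, wrk, or JMeter:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(_):
    """Stand-in for one HTTP call to the system under test; returns latency."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate ~10 ms of server processing
    return time.perf_counter() - start

# Fire 200 requests with 50 concurrent workers and collect latencies.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = sorted(pool.map(send_request, range(200)))

avg = sum(latencies) / len(latencies)
p95 = latencies[int(len(latencies) * 0.95)]
print(f"avg={avg * 1000:.1f}ms  p95={p95 * 1000:.1f}ms")
```

Raising `max_workers` until the p95 latency degrades gives you the capacity number the paragraph above talks about.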

Microservices does NOT magically increase the system's load threshold unless the services are divided and designed appropriately. Remember that I/O takes the main part of the delay between request and response. In a Monolith design, all services share the same memory, and that is the fastest way for services to cooperate with each other. But if we blindly deploy services to multiple different places just to make the system look like microservices, there will be more I/O time, since services now have to send requests to the other services they depend on, and the performance of the system will drop significantly. This may be the most common mistake when creating a microservices system. Microservices is NOT about fanning out all services onto multiple servers. We must measure and identify the bottleneck in the system before deciding to move some related services to an independent server. And it also is NOT simply deploying the current service's source code to another server. The new server should bring some benefit, such as greater processing power that accelerates the service, or the move should be an opportunity to redesign the service with other technologies that offer advantages the service needs.
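The cost of turning an in-process call into a network call is easy to demonstrate. In this toy sketch, the function names are invented and the 2 ms round-trip is an assumed figure (real numbers depend on your network), simulated with a sleep; the point is only the multiplication effect when one request fans out into many remote calls.

```python
import time

NETWORK_ROUND_TRIP = 0.002  # assumed ~2 ms per remote hop (hypothetical figure)

def price_lookup(item):
    """In-process call: essentially free."""
    return {"book": 10, "pen": 2}[item]

def remote_price_lookup(item):
    """Same logic, but behind a simulated network hop."""
    time.sleep(NETWORK_ROUND_TRIP)
    return price_lookup(item)

items = ["book", "pen"] * 25  # one user request fans out into 50 lookups

t0 = time.perf_counter()
local_total = sum(price_lookup(i) for i in items)
local_time = time.perf_counter() - t0

t0 = time.perf_counter()
remote_total = sum(remote_price_lookup(i) for i in items)
remote_time = time.perf_counter() - t0

print(f"in-process: {local_time * 1000:.2f} ms, over network: {remote_time * 1000:.2f} ms")
```

Fifty in-process lookups finish in microseconds; the same fifty lookups over a 2 ms hop cost at least 100 ms. This is exactly the penalty you pay when a chatty dependency is moved to another server without redesign.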

A good example of redesigning a service is separating READ and WRITE into two services for the same domain object (the same table), in order to support a large number of concurrent reads and writes with low latency. Assume we have a Monolith system, but after a period of growth we have a huge amount of data and a complex schema on a SQL database, such that every query freezes the whole system for a few seconds. This is bad and we want to improve it. At that moment, we may arrive at this solution: divide the service along the READ and WRITE axis. The READ service may use a NoSQL database as persistent storage with fast read speed, to reduce the user's waiting time. The WRITE service may use an in-memory database such as H2 to process updates as fast as possible, then gradually synchronize the in-memory data to the persistent storage of the READ service. Those two services should run on different machines to maximize resource usage. That is a true microservices story. If we simply deploy another identical copy of the service on another server to handle more traffic, routing by IP or by zone, that is called Load Balancing.
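The read/write split described above (a CQRS-style design) can be sketched with two plain dictionaries. This is an illustration under stated assumptions: the dictionaries stand in for the in-memory write store (e.g. H2) and the denormalized read store (e.g. a NoSQL database), and the sync step, which would really run asynchronously, is a plain function call here.

```python
write_store = {}  # fast, in-memory: accepts updates immediately
read_store = {}   # persistent, denormalized: what user-facing queries hit

def write_service_update(product_id, price):
    """WRITE side: acknowledge the update as fast as possible."""
    write_store[product_id] = price

def sync_to_read_model():
    """Gradual synchronization. In a real system this is a background job
    or a change-data-capture pipeline, not a synchronous call."""
    read_store.update(write_store)

def read_service_get(product_id):
    """READ side: serves queries from the read model only."""
    return read_store.get(product_id)

write_service_update("p1", 100)
print(read_service_get("p1"))  # None - the read model has not synced yet
sync_to_read_model()
print(read_service_get("p1"))  # 100
```

Note the trade-off the first `print` exposes: between a write and the next sync, reads are stale. That eventual consistency is the price you pay for letting each side scale and choose storage independently.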

Microservices does NOT reduce development cost. In fact, it increases it. First, we need more machines to run the independent services, as well as more machines to run monitoring tools. Microservices is an architectural design approach; it is a view of the whole system, NOT of how each service is coded. It does NOT magically reduce bugs (you may read the article below for more understanding about the sources of bugs). But when services are divided well, the boundaries between them become stronger, which helps developers avoid using the wrong components and avoid creating too many cross-cutting components full of hidden logic. Microservices creates a real need for DevOps positions: people who take responsibility for deploying multiple services as quickly as possible to ensure the lowest downtime between deployments. They will have to build a CI/CD system to automate the deployment process, measure the system load, and install monitoring tools to keep track of how the services are interacting. When a bug happens inside a microservices system, it is more complicated to fix than in a Monolith, since there is now more than one place to look for the true source of the bug. Developers also have to set up an identical system on their local machines for development and testing, and a system of multiple services requires a stronger machine. A system with too many services can be practically impossible to deploy on a single machine, so we may need mocking techniques to create fake API gateways standing in for real services. Writing automated tests gets harder too. Many behind-the-scenes chores like these will burden developers when switching to microservices. More work, more jobs, more salary.
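The "fake API gateway" mocking technique mentioned above looks roughly like this. The `fetch_employee` function and the `/employees/{id}` path are hypothetical examples; the point is that code written against a gateway object can be tested on a laptop where the real remote service is not running.

```python
import unittest.mock as mock

def fetch_employee(gateway, emp_id):
    """Code under test: talks to a remote 'employee' service via a gateway
    object rather than a hard-coded HTTP client."""
    data = gateway.get(f"/employees/{emp_id}")
    return data["name"].title()

# On a developer machine the real employee service may not be deployable,
# so we stand in a mock gateway that answers with canned data.
fake_gateway = mock.Mock()
fake_gateway.get.return_value = {"name": "alice doe"}

print(fetch_employee(fake_gateway, 7))  # Alice Doe
fake_gateway.get.assert_called_once_with("/employees/7")
```

Because the dependency is passed in rather than constructed inside the function, swapping the real gateway for a fake one needs no changes to the code under test; this is the same seam a contract-testing tool would hook into.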

Microservices is NOT a license to freely apply the latest technology. I bet your team won't want to work in a tech soup. Agreed, microservices gives us the ability to mix multiple technologies and exploit their advantages. But it requires us to understand those advantages before applying them, or your system will gain more complexity without any significant benefit, and the crying will come soon. Microservices is NOT only about technology; it is also about people. It depends on how your team is organized, what their skills are, and what they are good at. Learning something new takes time, and if you are in a rush, work with the tools you are familiar with. For example, suppose we are about to create a small service to handle employee documents within one month, and we have only one thousand employees. Our developers are experts in Java, but Go is the new, trending language. You may hear somewhere that "Go is faster", but here is the point: your developers will build that new service faster with Java than with Go, and one thousand users is nowhere near a limit that forces a switch from Java to Go.

Microservices is NOT about creating boundaries between teams. It creates boundaries only between the services your teams are building: technical boundaries. The more developers know about other services, the more likely they are to find problems early, and the lower the communication cost between teams. Don't use an architectural design as a political tool inside an organization. One developer can work on multiple services, depending on his or her ability, and such people often act as an important bridge between services. I know some managers want to divide teams to rule more easily, but I don't think that is a good way to build an organization: people will go to work with doubts and envy, because, more or less, all services are necessary at some point, yet at any given moment some are more important than others. Teams without hard boundaries also enable cross-checking, which pushes teams forward and reduces the "job security" mindset. No sharing and no checking between employees will gradually lead a few of them to think they are irreplaceable. That is toxic thinking for an organization.

So when should you go Microservices?

Microservices does not reduce costs, does not improve performance by itself, and does not make a system "better", so why is it trending? Because it comes from big tech companies, and people tend to believe that whatever comes from the big boys is always "better". We easily copy blindly without diving deep to understand why they did it. The big tech companies hit the limits of their technologies: a single Monolith could not serve them anymore, so they had to use multiple Monoliths to solve their problems. The result is a system they named microservices. Technology changes every day, and who knows what will come in the next few years; we have seen many frameworks, languages, and platforms arrive as "better" options and then die. So the key to deciding when to move to microservices is knowing the limits of your current system through thorough load testing.

Another reason we might need microservices is running many projects at the same time. For example, if we need to build a Pricing Engine module at the same time as an ERP module for managing employees, we might assign them to two teams, since the business logic of the modules does not depend on each other. Each team can develop its own service on a separate server, so the deployments of each service are independent too. If the two modules were built into one Monolith service, an issue in one module could block the whole deployment process in order to prevent risks on the production environment. So the key point when dividing services between teams is the dependency between services: they should be loosely coupled. That means each service can act as a separate product without knowing about, or needing the existence of, the other services.

When each service is truly independent, it can be reused too. For example, if your company has multiple projects that share the same employees, then to avoid duplicating features like authentication, employee management, or full-text search, we can carve them out into separate services that can be reused across projects.

One scenario where your system naturally ends up looking like microservices is rewriting a legacy system with up-to-date technology. Rewriting the whole system is time-consuming, so we usually have to rewrite it module by module. Each rewritten module can be deployed on a separate server, and along the way of rewriting the legacy system, you are doing microservices.
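This module-by-module rewrite is often done behind a thin router (commonly called the strangler fig pattern): paths whose modules have already been rewritten go to the new service, everything else still goes to the legacy system. A minimal sketch, with hypothetical path prefixes and handler names:

```python
# Prefixes whose modules have already been rewritten (hypothetical examples).
REWRITTEN_PREFIXES = ["/invoices", "/reports"]

def legacy_handler(path):
    """Stands in for forwarding the request to the old system."""
    return f"legacy:{path}"

def new_service_handler(path):
    """Stands in for forwarding the request to a rewritten module."""
    return f"new:{path}"

def route(path):
    """The router in front of both systems decides who serves each request."""
    if any(path.startswith(prefix) for prefix in REWRITTEN_PREFIXES):
        return new_service_handler(path)
    return legacy_handler(path)

print(route("/invoices/2024"))  # new:/invoices/2024
print(route("/users/42"))       # legacy:/users/42
```

As each module is finished, its prefix moves into the rewritten list, and the legacy system is "strangled" one route at a time with no big-bang cutover.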


Why your software gets bugs year after year

Humans, machines, and money are all involved.

*"Software" here refers to every kind of PC application, web application, and mobile application.

The very first year of my software developer career was spent as a bug killer. In the year I joined, I could feel that nothing new was being added to the product; everyone seemed to spend a year, actually more, fixing bugs created in just the prior two years of coding. It sounds expensive, right? The cost of fixing bugs is not only employee wages but also the trust of customers, and sometimes losing to your competitors. At least that year taught me how a thing can be done wrong, and I do believe that knowing how to do it WRONG is even more important than knowing how to do it RIGHT. Every mistake is worth learning from, whether it is yours or not.

How is human involved ?

Testers usually blame developers for bugs. Actually, it's true, but remember that testers are the source of other, non-bug problems. People make mistakes all the time, unconsciously, and so do developers and testers. Seniors make fewer bugs than Juniors, not because Seniors are smarter, but because Seniors already made all of those mistakes in the past. Smartness only helps you figure out and solve problems faster; it does not help you avoid problems or mistakes. The job of testers is to help reveal developers' mistakes and to ensure the product's quality, so avoid carping at and blaming developers for bugs.

The easiest bugs to fix are Programming Errors. Yes, this is all about developers. Programming Errors include:

  • Syntax error: Each programming language has its own syntax and coding conventions. Newcomers to coding usually make these mistakes because they do not yet fully understand the predefined conventions; sometimes they are not even aware that such a coding style exists.
  • Semantic error: Once coders are familiar with the syntax, they can literally tell the computer to do the things they desire, in a chosen order of steps. But sometimes some steps conflict with others, such as reading a value that is not yet available, or steps executed in the wrong order that produce a wrong result. The null pointer exception may be the most famous error developers and testers ever hear about during their careers: it happens when the computer is asked to read a value that doesn't exist in its memory. The more experienced the developer, the more easily they avoid these errors.
  • Logical error: Even Senior developers with many years of experience still make this mistake. It happens when a developer has to turn their understanding of a requirement into lines of code. Being human, and depending on how efficient the communication was, they may misunderstand the logic and produce code with the wrong behavior. When they understand the requirements correctly but still produce the wrong thing, the later reasons below are to blame.
The main difference between Junior and Senior developers
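The "null pointer" class of semantic error has the same shape in every language. A minimal Python sketch of the bug and its defensive fix, with all names invented for illustration:

```python
def find_user(users, name):
    """Returns the matching user dict, or None when no user matches."""
    for user in users:
        if user["name"] == name:
            return user
    return None

users = [{"name": "alice", "email": "a@example.com"}]

# Buggy pattern: assumes find_user always succeeds.
#   find_user(users, "bob")["email"]
# raises TypeError, because find_user returned None - Python's flavor of
# "reading a value that doesn't exist".

# Defensive version: handle the missing case explicitly.
def get_email(users, name):
    user = find_user(users, name)
    if user is None:
        return "unknown"
    return user["email"]

print(get_email(users, "alice"))  # a@example.com
print(get_email(users, "bob"))    # unknown
```

The experienced developer's habit is exactly the `if user is None` branch: every function that can return "nothing" gets its nothing-case handled at the call site.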

The next level of bug complexity comes from Communication Errors. This time, it is about everyone: product owners, business analysts, developers, testers, managers, and the developer's cat.

There is always a communication gap between any pair of people. If a team lacks cooperation, or is weak at it, adding members does not reduce development time; it increases it, because the more inefficient the communication, the more problems arise. And again, please don't confuse communication with being talkative, chatting, or gossiping. Communication Error here means the mis-distribution of information between people: the lack of reporting systems between stakeholders. Lack of information leads to wrong understanding and even conflict, not only between people, but also in product design and the development process.

  • Reporting between Developer and Developer: This is the most common problem in a development team. A typical phenomenon is a developer reinventing something that already exists, not to improve it or overcome some limitation, but because he isn't aware of its existence. As we know, any invention must go through a lot of mistakes, and reinventing means facing those mistakes all over again. Mistakes are bugs!
    Among developers, the lack of reporting on what is done, what is being done, and what will be done creates blind spots in deciding how to write code. Writing new things is time-consuming and error-prone; using existing things without documentation or guidance is error-prone too. A well-organized tech meeting is good practice for distributing information. Serious technical documentation and good distribution methods also keep developers' knowledge synchronized and therefore less error-prone.

  • Reporting between Business Analyst and Developer: A lack of reporting on what is going well, what is going poorly, and why this and not that also creates fog in developers' minds. Everyone has their own past, and the past creates assumptions. Sometimes you find a developer building things in the way he believes is right, but it does not fit your current business logic. Don't assume people understand things the way you do. To make developers understand the situation, let them know what customers do, how they do it, and why they do it that way.
  • Reporting between Product Owner and Business Analyst: This is usually mere verbal communication. This step turns an idea into the detailed actions that developers will later engage with. An idea is mutable: a change in the idea creates a change in the requirements, and changes in requirements, if not clarified carefully, can conflict with or mismatch the old requirements. In that case, even if developers do exactly what the requirements say, the software still has bugs: business logic bugs. To reduce this, the business analyst should have a method for overseeing the interactions between business rules, so they can report potential problems early to the "idea generator" and then, together with the Product Owner, clarify the final, safe actions needed. The Product Owner should also give a clear vision and strategy, so that everyone has a particular destination in mind, which in turn helps everyone adjust their own actions to fit the strategy. Do you believe that the idea itself can be the bug?
  • Reporting between Employee and Manager: Managers are familiar with reporting; they use information reported by other stakeholders to create detailed action plans. For some reason, budget for example, or a mis-estimate of the required effort, the deadline for an action is sometimes too tight. That creates time pressure on developers and testers. With limited time, everyone chooses the fastest way: developers choose hard-coded solutions, testers skip some test cases, and eventually bugs emerge. Hard-coding is curing an illness by covering up the symptoms instead of applying time-consuming, proven treatments.

  • Reporting between Testers: The job of a tester is to manage test cases and go through them, often. With some coding knowledge, testers can write automated tests that run after every change in requirements, big or small. This is the most efficient way for testers to work. When testers are non-technical, developers usually take care of this part too. Without automated tests, testers have to manually re-test case by case after each change in requirements. Testers should not assume that developers will take care of all the effects a change has on other functions; sometimes developers are not even aware of them. Fixing bugs can create bugs too. So, if you are tired of manual tests, create automated tests.
  • The employee's cat: This is a metaphor for employees' mental health, which affects their alertness and is a cause of mistakes. A broken heart is more likely to break the system than a whole one, right? So make sure your team is healthy.
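The automated regression tests mentioned above can be as simple as a handful of assertion functions that rerun after every requirement change. A minimal sketch, with an invented business rule (a discount capped at 50%) as the function under test:

```python
def apply_discount(price, percent):
    """Business rule (hypothetical example): discount is capped at 50%."""
    percent = min(percent, 50)
    return round(price * (100 - percent) / 100, 2)

# Regression tests: rerun these after every requirement change, big or small.
def test_normal_discount():
    assert apply_discount(200, 10) == 180.0

def test_discount_is_capped():
    assert apply_discount(200, 80) == 100.0  # 80% is capped down to 50%

def test_zero_discount():
    assert apply_discount(99.99, 0) == 99.99

for test in (test_normal_discount, test_discount_is_capped, test_zero_discount):
    test()
print("all tests passed")
```

In practice you would let a runner such as `pytest` discover and execute these, but the essence is the same: the capped-discount rule is now checked automatically, so a future "fix" that silently removes the cap fails a test instead of reaching production.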
How is machine involved ?

Besides mistakes obviously made by humans, there are objective conditions that can make your software behave in undesired ways that customers report as bugs. Some may blame developers for their inability to anticipate those conditions, but remember that it is also the testers' inability to anticipate them, and they were not in the requirements written by the BAs or the Product Owner either.

Experienced developers can be aware of these situations and offer early solutions, but everything takes time, and solving these kinds of bugs requires more effort than the ones above. Testers should be aware of these situations too, and learn more advanced testing techniques that can help reveal such problems before they happen to end users.

  • Incompatibility: Your customer's machine may be too limited to run your software: a computer with low memory (RAM, hard disk), an old (weak) processor, or an out-of-date operating system. Every piece of software needs a minimum amount of available memory and some pre-existing components for initialization and further execution. When the computer runs out of resources, anything can run improperly. Every piece of software should ship with a system-requirements note; it can act as a disclaimer for your team.
  • Memory Consumption: For the same problem, each developer can have a different solution, and each solution has a different speed and memory consumption. Memory is a limited resource. More or less, this depends on the developer's skill. But with today's computing power, a few gigabytes of memory is normal, so most developers in most situations no longer care much about memory consumption, until it crashes.
    Running out of memory is not always caused by your own software: a computer runs many programs at the same time, so it is not always your fault. It is best if the software can notify users about its memory situation, so they can understand what is going wrong.
    In a web server context, each request is allocated a maximum amount of memory. This practice ensures the server can handle hundreds to thousands of requests per second. The code written by developers may not seem to consume much memory, yet it may exceed this threshold. Many other components behind the scenes, such as databases, proxies, and third-party services, are vulnerable to this problem too. So, developers: never assume everything is going well; always prepare for failure, because bad things do happen.
  • Unstable Network: Offline-only software has nothing to worry about here. This matters more for software with a client-server model, which most applications are nowadays. The connection quality between client and server is extremely important, especially for applications requiring real-time responses like stock markets, online gaming, or streaming services. The technology behind those applications already has tactics to recover from, or endure, an unstable network or low-bandwidth connection, but they are "best effort" only; don't expect magic. For some applications, such as video streaming, the ability to work under low bandwidth or an unstable connection is the key competitive factor. Testers should be aware of this and have serious test scenarios for it.
    To build solutions that keep software working in these situations, developers must have deep knowledge of computing and networking. Fixing these kinds of bugs is extremely hard. Don't expect a solution to always be available; our current civilization still has some limitations.

  • Offline Accidents: The electricity goes off, for whatever reason. A sudden shutdown can cause some functions in a program to fail partially, and can cause problems at the next start, such as mismatched or corrupted data. Software handling sensitive data, in the banking industry for example, usually has recovery strategies, digital and non-digital, to protect the business from damage.
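One of the standard "prepare for failure" tactics against an unstable network is retrying with exponential backoff. A minimal sketch, where the flaky endpoint is faked so the example is self-contained; in production the delays would be much larger and usually jittered.

```python
import time

def call_with_retry(operation, attempts=3, base_delay=0.01):
    """Retry a flaky operation, doubling the wait after each failure.
    Delays here are tiny for demonstration; real systems use larger,
    jittered delays to avoid retry storms."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))

# A fake flaky endpoint: fails twice, then succeeds (hypothetical stand-in
# for a real network call).
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("network hiccup")
    return "payload"

print(call_with_retry(flaky_fetch))  # payload
```

Note that the last attempt re-raises instead of swallowing the error: hiding the failure from the caller would just move the bug somewhere harder to find.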
How is money involved?

The budget of a project shapes the plan through time pressure and work priorities. Most of the time, developers focus first on writing code that fits the business logic and set aside potential machine problems. That makes sense, because we should avoid premature optimization: how sure are we that the potential problem will ever happen? But this priority-driven neglect produces something called Technical Debt, and every debt must be paid, sooner or later, with or without interest.

Budget determines the quality of team members. Obviously, the more experienced the employees, the more benefits they want. Experience means they are aware of mistakes and have ways to avoid them; and when they know how to avoid mistakes, they know how to do things right.

Budget affects employees' motivation. Motivation makes them do their best to build the best software they can. Sometimes your software does not yet make enough money to sustain that kind of motivation or minimum member quality; remember my question: "Do you believe that the idea itself can be the bug?" If your software does not make enough money YET, why not share your vision with everyone and let them be part of it? Excellent people who are willing to work for joy and opportunity still exist, trust me!

Ending

The above are some insights from my debugging era. I hope they help clear up some mysteries of the developer world and help teams take more appropriate actions against bugs. Feel free to comment on anything you are interested in or disagree with. Thanks for your time.