How to compete with generative AI as a software engineer?

Before this decade of AI burst onto the scene, software engineering was mostly about writing code that realizes requirements. Software engineers, to some extent, act as translators between human languages and computer languages. Today this translation can be done in seconds by many generative AI products, and from my observation, generated code often follows better patterns than code written by most developers. It is understandable that companies have begun laying off employees whose output does not match what AI can produce. It is just cost optimization – a vital part of every business – and perhaps also one of the coldest truths of life.

What is generative AI good and not good at?

Recall the flow that software engineers go through daily:

Receive requirements -> Review current state of source code -> Define a target state of source code -> Retrieve information from documentation -> Pick solutions -> Actually write code -> Align new code with existing code -> Deploy -> Test -> Measure results -> Read error messages -> Debug.

Some steps of this flow are done better by generative AI, and some are done better by humans:

Step: Receive requirements
Description: capture goals, constraints, acceptance criteria, performance and security needs, and stakeholders' expectations.
Winner: Human. Humans are better at eliciting ambiguous needs, negotiating trade-offs, and asking stakeholders the right follow-up questions. AI can help by summarizing long requirement documents and flagging missing or inconsistent points.

Step: Review current state of source code
Description: read the codebase, architecture, tests, docs, build scripts, dependencies, and CI config.
Winner: Human + AI. AI can quickly index and summarize files, find patterns and risky hotspots, and generate dependency graphs. But humans provide domain knowledge and historical context, and recognize subtle intent (business logic, quirks, trade-offs).

Step: Define a target state of source code
Description: design the desired architecture, interfaces, data flows, APIs, and acceptance criteria for the new state.
Winner: Human + AI. AI can propose multiple concrete design options and highlight trade-offs. Humans must pick the option that fits non-technical constraints (policy, team skill, product strategy).

Step: Retrieve information from documentation
Description: find API docs, migration guides, best practices, and configuration notes for related tools, libraries, and solutions.
Winner: AI. AI can extract key steps, call signatures, and breaking changes, and produce concise examples from long docs much faster than manual reading. Humans validate and interpret edge cases.

Step: Pick solutions
Description: select libraries, patterns, and implementation approaches considering performance, security, licensing, and team skills.
Winner: Human. Human decision-makers must weigh organizational constraints, long-term maintenance, licensing, and political factors.

Step: Actually write code
Description: implement features, refactor, add tests, update docs.
Winner: AI. AI excels at generating boilerplate, test stubs, consistent code patterns, and translations across languages.

Step: Align new code with existing code
Description: ensure style, APIs, error handling, logging, and patterns match the codebase; maintain backward compatibility.
Winner: Human + AI. AI can automatically reformat, rename for consistency, and propose refactors to match existing patterns; humans confirm that changes don't break conventions tied to tests or runtime behavior.

Step: Deploy
Description: push to staging/production, run migration scripts, coordinate releases and rollback plans.
Winner: Human. Humans must coordinate cross-team tasks, business windows, and incident response. AI/automation is excellent at packaging, CI/CD scripts, and repeatable deployment steps.

Step: Test
Description: run the application locally and manually verify that new changes behave as expected.
Winner: Human. Manual testing relies heavily on intuition, product knowledge, and human perception (e.g., UX feel, layout issues, unexpected delays, weird state transitions).

Step: Measure results
Description: monitor metrics, logs, user feedback, and testing results, and define success signals.
Winner: Human + AI. AI can detect anomalies, summarize metrics, and surface correlations. Humans decide which metrics matter, interpret business impact, and choose the next actions.

Step: Read error messages
Description: analyze stack traces, logs, exceptions, and failure contexts.
Winner: Human + AI. AI quickly maps errors to likely root causes and suggests reproduction steps. Humans provide context (recent changes, infra issues) and confirm fixes.

Step: Debug
Description: reproduce issues, step through code, identify the root cause, fix, and validate.
Winner: Human. AI speeds up discovery (identifying suspicious diffs, suggesting breakpoints, generating reproducer scripts), but complex debugging often needs human insight into domain rules, race conditions, and stateful behavior.

How to compete with generative AI to secure your career as a software engineer?

Just as in the Industrial Revolution and the Digital Revolution, where labor was replaced by machines, some jobs disappeared but new jobs were created. To some extent, AI is just another machine, a huge one, so essentially we are still in the era of the Machine Revolution.

The answer is that we need to work where this huge machine cannot. As of this post, here is what we can do to compete with AI in software development:

Transition to Solution Architect

As AI becomes strong at writing code, humans can shift upward into architectural thinking. A Solution Architect focuses on shaping systems, not just lines of code. This involves interpreting ambiguous requirements, negotiating constraints across teams, and balancing trade-offs among cost, performance, security, and future growth. AI can propose patterns, but only a human understands organizational politics, legacy constraints, domain history, and long-term impact. By moving into architecture, you operate at a layer where human judgment, experience, and foresight remain irreplaceable.

Become a Reviewer / Validator

AI can produce solutions quickly, but it still needs someone to verify correctness, safety, and alignment with real-world constraints. A human reviewer checks assumptions, identifies risks, ensures compliance with business rules, and validates that AI-generated code or plans actually make sense in context. Humans excel at spotting hidden inconsistencies, ethical issues, and practical pitfalls that AI may overlook. Becoming a Validator means owning the final approval — the role of the responsible adult in the loop.

Become an Orchestrator

Future engineers will spend less time typing code and more time coordinating AI agents, tools, workflows, and automation. An Orchestrator knows how to decompose problems, feed the right information to the right AI tool, evaluate outputs, and blend them into a coherent product. This role requires systems thinking, communication, and the ability to see the entire workflow end-to-end. AI is powerful but narrow; an Orchestrator provides the glue, strategy, and oversight that turns multiple AI capabilities into a real solution.

Study broader knowledge

AI is good at depth, consuming a specific library or framework instantly, but humans win by having breadth. Understanding multiple domains (networking, security, product design, compliance, UX, DevOps, data, hardware) allows you to make decisions AI cannot contextualize. Breadth lets you spot cross-domain interactions, anticipate downstream consequences, and design better holistic systems. The wider your knowledge, the more risks, opportunities, and real-world constraints you can see that AI cannot infer from text alone.

Gain expertise in task description

In an AI-driven era, the most valuable skill is the ability to turn a messy idea into a clear, precise, constraints-rich task. This includes defining scope, edge cases, success criteria, and architectural boundaries. AI is only as good as the instructions it receives — so those who excel at describing tasks will control the quality of AI output. Humans who master problem framing, prompt engineering, and requirement decomposition gain leverage: they make AI more accurate, faster, and more predictable than others can.

Become a Business Analyst

The heart of value creation lies in understanding the business, not writing the code. AI cannot replace someone who knows market dynamics, user behavior, budget constraints, prioritization logic, risk tolerance, stakeholder psychology, and regulatory boundaries. A Business Analyst bridges the gap between technology and real-world value. They decide why a feature matters, who it serves, how it impacts revenue or cost, and what risk it introduces: areas where AI can help but cannot replace the human nuance needed.

Become a Pentester

Security is one of the hardest domains for AI to master fully because it requires creativity, unpredictability, street knowledge, and adversarial thinking. A pentester does more than run scanners — they exploit human behavior, spot surprising vulnerabilities, and think like an attacker. Humans who understand security fundamentals, threat modeling, social engineering, and advanced exploitation techniques will stay in demand. AI helps automate scanning and code analysis, but a creative pentester stays ahead by understanding motives, tactics, and real-world constraints.


Essentially, the goal is to use AI as a super-assistant that can write code very well to realize our intentions.

What is refactoring and why does it matter?

For someone with no programming experience, the word "refactor" can be confusing at first, because it's mostly used in coding. Simply explained, refactoring is making something better or clearer without changing what it does.

Why refactor when nothing changes?

"If it works, don't touch it" is a principle that is still valid in the real world. It's truthful, pragmatic, and recommended to some extent when we do not have enough understanding of the system we are working on. From a business perspective, refactoring feels unproductive when no new features are added to the system. But just like a business, which sometimes needs to restructure processes, reorganize people, and rearrange tasks to maximize outcomes, the programming process also needs refactoring to optimize the coding experience, making source code more readable, maintainable, and scalable. These benefits, in turn, accelerate developers when adding new features or fixing bugs later on.

What is readable code?

Readable code is code written so clearly that another developer, or future you, can read it quickly and know what's going on, sometimes just by skimming variable and function names. Some tactics for keeping code readable are listed below, followed by a small before/after sketch:

  • Clear naming: variables, functions, and classes have names that explain their purpose.
  • Short, focused functions: each function does one thing, not many things.
  • Consistent formatting: proper indentation, spacing, and line breaks.
  • No unnecessary complexity: no overly clever tricks; straightforward logic.
  • Helpful comments: explain why, not what.
  • Standard patterns: code follows common conventions so others instantly recognize the structure.
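
To make this concrete, below is a small before/after sketch in TypeScript; the function names and the 10% price rule are made up purely for illustration.

```typescript
// Before: unclear names, clever one-liner, no visible intent.
function f(a: number[], t: number): number[] {
  return a.filter((x) => x > t).map((x) => x * 1.1);
}

// After: the same behavior, but the names explain the intent.
const PRICE_INCREASE_RATE = 1.1;

// Returns the prices above the given threshold, increased by 10%.
function increasePricesAboveThreshold(prices: number[], threshold: number): number[] {
  return prices
    .filter((price) => price > threshold)
    .map((price) => price * PRICE_INCREASE_RATE);
}
```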

What is maintainable code?

  • Readable: as explained above.
  • Well-organized: Code is structured logically into modules, functions, or classes
  • Consistent: Follows the same style, naming, and patterns everywhere.
  • Well-tested: Covered by tests to catch bugs early and safely.
  • Documented: Has comments or docs explaining why and how things work.
  • Flexible: Easy to modify, extend, or adapt without breaking existing code.

What is scalable code?

  • Efficient: uses memory and CPU wisely; avoids unnecessarily heavy operations.
  • Modular: pieces of code can be separated or duplicated easily.
  • Asynchronous / non-blocking when needed: doesn't freeze the whole system while waiting for one slow task.
  • Good architecture: clear layers; can be split into microservices or separate components if needed.
  • Proper data structures: for example, using a Map instead of a List for fast lookups (see the sketch after this list).
  • Database scalability: indexes, caching, batching queries, sharding, etc.
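
As a small illustration of the data-structure point above, here is a hypothetical User lookup in TypeScript: scanning an array costs O(n) per lookup, while a Map keyed by id is O(1) on average.

```typescript
interface User {
  id: string;
  name: string;
}

const users: User[] = [
  { id: "u1", name: "Alice" },
  { id: "u2", name: "Bob" },
];

// O(n): scans the whole array on every lookup.
const slow = users.find((u) => u.id === "u2");

// O(1) average: build a Map once, then look up by key.
const usersById = new Map<string, User>();
for (const u of users) usersById.set(u.id, u);
const fast = usersById.get("u2");

console.log(slow?.name, fast?.name); // Bob Bob
```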

Refactoring safely

Because the goal is to keep the system working the same while rewriting code, there must be a way to indicate sameness, or to detect differences in system behavior early. This is where Test-Driven Development shines.

Writing tests is often overlooked by inexperienced developers. Beginners usually think the programming job is just to write code, see it run, then move on to writing more code. Writing tests looks like extra work or an annoying requirement. This is understandable, just as a young man does not yet understand "karma". And the karma for this oversight usually is:

  • Bugs keep coming back.
  • Bugs evolve as more code is added.
  • Debugging takes much more time.
  • The source code becomes a mess, and a small change can take months to ship.

When bugs bring enough pain, developers begin to gain experience.

Test Driven Development (aka TDD)

Test-Driven Development (TDD) is a software development process where you write tests before writing the actual code.

TDD follows a repeating 3-step loop:

  1. Write a failing test (yes, always fail first!)
    • The test describes what the code should do.
    • It fails because the feature doesn’t exist yet, or the bug is not fixed yet.
  2. Write the minimum code needed to make the test pass
    • Not perfect code, just enough to pass the test.
  3. Refactor – Clean up the code
    • Improve readability, maintainability, scalability
    • Keep the tests passing.

Then repeat the cycle for the next feature or bug fix.
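
Here is a minimal sketch of one cycle, assuming a Jest-style test runner; the slugify function is a made-up example, not from the original post.

```typescript
// Step 1: write a failing test first. It fails because slugify() doesn't exist yet.
test("slugify turns a title into a URL-friendly slug", () => {
  expect(slugify("Hello World!")).toBe("hello-world");
});

// Step 2: write the minimum code needed to make the test pass.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs into one dash
    .replace(/^-+|-+$/g, ""); // trim leading/trailing dashes
}

// Step 3: refactor (clearer names, extracted constants, ...) while the test stays green.
```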

Ideally, tests simulate the experience users will have with the real product. This cannot be achieved 100%, but keeping the principle in mind helps a lot in writing good tests. Depending on how close a test is to real-world usage, tests can be classified into 3 levels: Unit Tests, Integration Tests, and E2E Tests.

Unit Test

Unit tests are ideal for testing the behavior of a single function or class. In each unit test, we check the output of a function given a particular input. We can anticipate what the inputs might be, even unrealistic ones (hackers usually send unrealistic ones), to ensure our functions keep working no matter what the input is. Unit tests can also be used as a debugging tool, since we can exercise a part of the system directly without reproducing the issue via the UI. For functions that are well guarded by unit tests, developers can feel more confident adding changes or refactoring, because bugs are caught early.
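
A short sketch of this idea, again in Jest style, with a made-up parseAge function that is deliberately fed hostile inputs:

```typescript
// A small validation function we want to guard with unit tests.
function parseAge(input: string): number | null {
  if (!/^\d+$/.test(input.trim())) return null; // digits only
  const age = Number(input);
  return age > 150 ? null : age; // reject absurd values
}

test("parseAge accepts normal input", () => {
  expect(parseAge("42")).toBe(42);
});

test("parseAge survives unrealistic input", () => {
  expect(parseAge("")).toBeNull();
  expect(parseAge("-1")).toBeNull();
  expect(parseAge("1e9")).toBeNull();
  expect(parseAge("DROP TABLE users;")).toBeNull();
});
```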

Integration Test

Integration tests check how multiple parts of the system work together. Functions, classes, and flows can be tested on how they interact inside a system, ensuring that the "pieces" of the system integrate properly. Similar to unit tests, we can anticipate and simulate flows to catch bugs early.

E2E Test

E2E (End-to-End) tests simulate a real user using the real app, by actually clicking buttons and typing text.

This is the ultimate form of testing, able to catch bugs that unit or integration tests cannot. E2E tests exercise the app in an environment closest to production, validating the entire system from UI/UX to data storage. But they are the hardest tests to build, because a real system must be deployed for E2E tests to run against, and simulating user behavior in code requires significant effort. This is why many teams stop at integration tests, and that is totally fine when the majority of bugs can be caught at the integration level. Writing E2E tests is time-consuming, so we should reserve them for bugs that cannot be reproduced at the integration level. These are high-level bugs that should be addressed by high-level tests, and they are mostly about concurrency, timing, and resources (a small sketch of one such bug follows the list):

  • Race condition: a situation where the correct behavior of a program depends on the relative timing or interleaving of multiple threads or processes. It's about uncontrolled timing causing wrong results.
  • Deadlock: two or more threads/processes wait on each other indefinitely, preventing progress. As a result, the system freezes because resources are locked in a circular wait.
  • Livelock: threads or processes keep changing state in response to each other but make no actual progress. As a result, CPU or threads are active, but nothing gets done.
  • Starvation: a thread never gets access to a needed resource or to the CPU because other threads dominate it. As a result, the resource exists, but some threads never get a chance to execute.
  • Atomicity violation: a set of operations that should execute as a single, indivisible unit is interrupted, causing incorrect results.
  • Order violation: correct behavior depends on operations happening in a specific order, but the order is not guaranteed, which eventually leads to incorrect results.
  • Heisenbug: a bug that disappears or changes when you try to observe it (e.g., by debugging, logging, or adding print statements). This sounds like quantum physics, but yes, they do exist. These bugs are often caused by concurrency, timing, or memory issues.
  • Data corruption: shared data is modified concurrently without proper synchronization, resulting in invalid or inconsistent values.
  • Lost update: two concurrent operations overwrite each other's results, causing data loss.
  • Dirty read / inconsistent read: a thread reads a partially updated or uncommitted value from another thread or transaction, and then produces wrong results.
  • Priority inversion: a low-priority thread holds a resource needed by a high-priority thread, causing the high-priority thread to wait unnecessarily.
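
To make one of these concrete, here is a minimal sketch of a lost update in TypeScript: two concurrent read-modify-write operations on a shared counter overwrite each other. The delay helper stands in for any async gap, such as a database round-trip.

```typescript
const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

let balance = 100; // shared state

async function withdraw(amount: number): Promise<void> {
  const current = balance;    // 1. read
  await delay(10);            // 2. an async gap (e.g., a DB round-trip)
  balance = current - amount; // 3. write back a now-stale value
}

async function main() {
  // Both withdrawals read balance = 100 before either writes.
  await Promise.all([withdraw(30), withdraw(50)]);
  console.log(balance); // expected 20, actually 50 or 70 (a lost update)
}

main();
```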

In conclusion, to refactor code safely, we need a lot of tests, and good ones!

How to avoid Merge Conflicts in software development

Besides ambiguous requirements, tight deadlines, and unstable legacy codebases, the merge conflict is another quiet fear that annoys and disrupts developers while making software.

Building on the fact, discussed in a previous post on how merge conflicts appear, that long-lived branches should be avoided as much as possible to mitigate the chance of conflicts, here are some guidelines to help any software team deal with this fear.

User Story based Task Description

The human brain is designed to consume stories, not ambiguity. In software development, a User Story is a short, simple description of a feature or requirement told from the perspective of the end user. It's a fundamental element of Agile and Scrum methods, meant to capture what the user wants and why, without prescribing how developers should implement it. A user story usually follows this format: As a [type of user], when [something happens], I want [some goal] so that [some reason]. For example: As a registered user, I want to reset my password so that I can access my account if I forget it. This helps teams understand who needs something, what they need, and why it matters.

Depending on how big the user's goal is, a User Story can be split into simpler stories. We don't have to write an essay in a task description; we are not at school. Clarity is the top priority when writing task descriptions, so don't overthink, just tell stories.

When a User Story is simple enough, a task stemming from it has a small scope of change, with limited and predictable effects on the codebase. A small scope of change helps mitigate the chance of conflicts. Even when conflicts happen, resolving them is easier because they occur in fewer places.

Merge/Rebase daily, resolve early

When tasks are well defined and the scope of change is limited, branches can be short-lived, which also makes the Rebase tactic viable. Depending on your preference for the commit history tree, Merge and Rebase are both fine. The recommended practice is to do it daily: merge the main branch into feature branches, or rebase feature branches onto the main branch. This gives early awareness of the places that might cause conflicts, so we can adjust our coding tactics or sync up with other developers about the changes being made.

FIFO Merging

Prioritizing tasks is obviously right, but do not apply priority to the order of merging code, because it can turn some branches into long-lived ones when higher-priority tasks keep being merged first. When a task reaches the Done column of your Kanban board, for example, it should be merged as soon as possible regardless of its priority. What is completed first should be merged first (First-In-First-Out order). Using a tool to do this automatically is highly recommended. Of course, completion of a task here includes the testing phase as well.

When a task is well defined with limited scope in User Story format, and it somehow gets stuck and turns into a long-lived branch, we need to review its necessity:

  • If we don't need the feature anymore, discard the task and close the branch.
  • If we still need the feature but it contains risky changes, which is why no one dares to merge it into the main branch, it is time to carve simpler stories out of the risky parts. Don't let tasks get stuck. Keep feature branches short-lived.

Commit with Task ID

Commit messages are encouraged to describe what changed, but nothing enforces that consistency; it depends on how well a developer can explain things. To make it simple and scalable, it is recommended to begin each commit message with the Task ID, for example: #1234 fix things. Whenever anyone wonders why a commit was added, they can trace back to the related task description and let the User Story explain.
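
As an illustration, here is a hypothetical commit-msg hook written as a small Node/TypeScript script; the #1234 Task ID format follows the example above, and wiring the script into .git/hooks/commit-msg is left to your setup.

```typescript
// Git passes the path of the commit message file as the hook's first argument.
import { readFileSync } from "node:fs";

const messageFile = process.argv[2];
const message = readFileSync(messageFile, "utf8").trim();

// Expected format: "#1234 fix things"
if (!/^#\d+ /.test(message)) {
  console.error("Commit message must start with a Task ID, e.g. '#1234 fix things'");
  process.exit(1);
}
```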

Long-lived branches as Microservices

If for any reason a branch is planned to be long-lived, such as when building a challenging feature that inevitably requires a long development time, consider turning this big part into a microservice. A microservices architecture keeps the new code in a separate repository, which in turn mitigates the risk of conflicts with the existing main branch. The main system can communicate with the new microservice in whatever way gets things done, without worrying about large changes from new features. The new microservice can have its own task board with its own User Stories, and the User here is the main system.

Time is money, so don't waste development time resolving merge conflicts!

Merge vs Rebase: which is better?

I usually prefer Merge over Rebase, for safety first.

Merge and Rebase are 2 ways of combining changes from different branches when using Git for source code management. Since Merge seems to be enough to get things done in every case, why does Git also include a Rebase method?

The answer relates to a team's preference for commit history. Git maintains a tree of commits per repository, and each commit is a snapshot of all files. It is important to note that Git stores project snapshots, not the diffs we see with the git diff command; diffs are calculated on the fly when we compare 2 commits. This nature of Git affects how Merge and Rebase actually behave under the hood:

How does Merge actually work?

When using git merge, for example to merge branch A into branch B, given that branch B was created from branch A, Git performs the steps below:

  1. Finds the common ancestor snapshot, i.e., the commit that branch B was created from.
  2. Compares the latest snapshot of branch A (MERGE_HEAD) to the ancestor snapshot to get diff D1.
  3. Compares the latest snapshot of branch B (HEAD) to the ancestor snapshot to get diff D2.
  4. Applies diffs D1 and D2 to the ancestor snapshot, then outputs a new merged snapshot, stored in a new commit on branch B.

Because commits are snapshots:

  • Git doesn’t need to replay all intermediate diffs.
  • It just looks at 3 snapshots: the ancestor, HEAD, and MERGE_HEAD.

That's why merging large histories is fast and doesn't rewrite old commits: the snapshots are stable and immutable. If conflicts happen when using Merge, then because only 3 snapshots are taken into account and the output is always 1 new snapshot, resolving conflicts typically happens only once.

How does Rebase actually work?

When using git rebase, for example to rebase branch B onto branch A, given that branch B was created from branch A, Git performs the steps below:

  1. Calculates the diff between each commit (snapshot) of branch B and its parent commit. This effectively creates a series of "patches" describing, step by step, the changes already made on branch B.
  2. Reapplies those diffs (patches) on top of the latest snapshot of branch A.
  3. Creates new commits with new IDs (i.e., new snapshots).

So each time we rebase branch B onto branch A, new commits (snapshots) are added as if we had just made those changes on top of branch A's snapshots. Because the diffs are reapplied on every rebase, if there are conflicts, we may have to resolve the same conflicts again and again. And this is why I prefer Merge over Rebase.

So, why does Rebase exist?

Rebase is mostly used when we have a reason to control how the commit history looks on a branch. This is useful when a team prefers a linear commit history that is easier to read, and does not care about what actually happened, such as when a branch was created or what was merged where. Because it rewrites the commit history of a branch, Rebase is not recommended on the main branch, due to the risk of losing commits and resolving conflicts multiple times. Rebase is safer on a feature branch created from the main branch, and most importantly, that feature branch should have a short development time. On a feature branch that lives long enough, re-resolving conflicts can happen frequently, which slows down development and frustrates developers.

Conclusion

In summary, my suggestion on Merge vs Rebase is:

  1. Always use Merge, for safety first.
  2. If we are working on a feature branch (NOT the main or master one), want a nicer commit history on that branch, and its development time is short, then Rebase can be used.

Store datetime as ISO 8601 instead of timestamp

Like many programmers, I used to store datetime values in the database as timestamps, until I suffered bugs related to timezones and DST (Daylight Saving Time). The most common symptom is date fields, such as created_date or updated_date, displaying 1 day earlier than what was set: say an admin sets created_date to Jan 1st 2025 00:00:00, and clients somehow see Dec 31st 2024, 23:00:00.

I tried using browser timezone information to adjust the timestamps, but the codebase became more complex, and the bug still couldn't be fixed when DST kicks in. The bug appears when:

  • the database stores datetime data as an integer number of milliseconds or microseconds (a timestamp), as most databases do
  • users are located in many countries with different timezones
  • Daylight Saving Time shifts happen on no fixed schedule

Then I realized the power of a standard, ISO 8601: it can store datetime values as plain text that is still sortable. Storing datetimes in the format YYYY-MM-DD HH:mm:ss at UTC (or any format defined by ISO 8601) removes the headaches of timestamps while retaining sorting and filtering capabilities. The only downside is that the frontend and backend have to make sure datetime data is sent and received in ISO 8601 format instead of integers, but this conversion is simple enough and can save days of debugging.
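
A minimal sketch of the round trip in TypeScript, using the YYYY-MM-DD HH:mm:ss UTC format mentioned above (the helper names are made up):

```typescript
// Serialize: always store UTC in an ISO 8601 shape, e.g. "2025-01-01 00:00:00".
function toUtcIso(date: Date): string {
  return date.toISOString().replace("T", " ").slice(0, 19);
}

// Parse: interpret the stored text as UTC again before adjusting for display.
function fromUtcIso(value: string): Date {
  return new Date(value.replace(" ", "T") + "Z");
}

const createdDate = toUtcIso(new Date(Date.UTC(2025, 0, 1))); // "2025-01-01 00:00:00"
console.log(fromUtcIso(createdDate).toISOString()); // "2025-01-01T00:00:00.000Z"

// Plain-text ISO 8601 values still sort correctly as strings:
const sorted = ["2025-01-01 00:00:00", "2024-12-31 23:00:00"].sort();
console.log(sorted[0]); // "2024-12-31 23:00:00"
```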

A thought about language

There are certainly a lot of programming languages on the market. Each language creates its own job positions and even a programming "religion" where engineers prefer a certain language over others. Thanks to that, the software development market thrives, because diversity in languages increases scarcity in the workforce.

So why do some languages offer higher salaries than others?

Well, a language is a tool, and each tool suits a certain purpose. Salary is compensation paid from the budget assigned to a purpose, so there is actually no standard market price per language; it is still a supply-and-demand game. The difference is that demand is influenced by the big players in the industry. Big players create programming languages and frameworks that overcome the limitations of what existed before: speed, memory efficiency, unfriendly syntax, missing built-in solutions for repetitive tasks. As a result, they save money on renting physical resources (computers) and save working hours (building and fixing errors), and those savings contribute to the compensation, i.e., the salary.

Can an engineer learn multiple languages?

Absolutely Yes.

Learning a language is always hard and time-consuming. There is a lot to remember: the syntax. But syntax can be translated from one language to another, on the same principle as human languages. For each language, what we need to remember is:

  • How to declare variables
  • How to write conditional checks (if-else)
  • How to write loops (for, while)
  • How to separate code into reusable blocks (classes, methods, functions, interfaces)
  • How to handle concurrency (processes, threads, event loops)
  • What libraries are offered out of the box by each language

The tactic: for the first language, learn it carefully and practice a lot. For each new language, compare it to one we already know, find the translation between the syntax sets, then practice a lot.
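
For instance, here is the checklist expressed in TypeScript, with comments noting a rough Java equivalent for each item (purely illustrative):

```typescript
// Declare a variable            (Java: int count = 0;)
let count: number = 0;

// Conditional check             (Java: if (count == 0) { ... } else { ... })
if (count === 0) {
  count = 1;
} else {
  count += 1;
}

// Loop                          (Java: for (int i = 0; i < 3; i++) { ... })
for (let i = 0; i < 3; i++) {
  count += i;
}

// Reusable block                (Java: a method inside a class)
function double(n: number): number {
  return n * 2;
}

// Concurrency                   (Java: threads; here, an async task on the event loop)
async function compute(): Promise<number> {
  return double(count);
}

compute().then((result) => console.log(result)); // 8
```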

Another tip is to search for open-source projects and read their code. This is also the quickest way to learn from top engineers.

Is it worth knowing more than one language?

Yes, I think.

If we only need a stable job, one language is enough. But remember that technology changes every day, and every project has a start date and an end date. So ensure your adaptability.

If we are interested in technology, learning new languages gives us a deeper understanding of the computer, as well as more than one way to solve the same problems. It keeps us open-minded and curious and provides a broad range of knowledge, which is true stability, as far as I have learned from many professionals.

And the whole activity of programming is the career, not any single language.

Is it tiring to know more than one language?

Yes, obviously. No need to explain.

Microservices Tradeoffs and Design Patterns

Leaving aside the reasons why we should or should not jump into Microservices, covered in a previous post, here we talk more about the tradeoffs of Microservices and the design patterns born to deal with them.

Building Microservices is not as easy as installing a few packages into your current system; actually, you will install a lot of things :). The beauty of Microservices lies in the separation of services, which enables each module to be developed independently and keeps each module simple. But that separation is also the cause of new problems.

More I/O operations?

The first issue we can easily recognize is the emergence of I/O calls between the separated services. It looks exactly like integrating our system with 3rd-party services, except this time all those 3rd-party services are our own internal ones. To keep API calls correct, there will be effort spent documenting and synchronizing knowledge between the teams handling different services.

But here is the bigger problem: if every service has to keep a list of other services' addresses (to call their APIs), they become tightly coupled, strongly dependent on each other, and that destroys the promised scalability of Microservices. This is when the Event-Driven style comes to the rescue.

Event Driven Design Pattern

Example tools: RabbitMQ, ActiveMQ, Apache Kafka, Apache Pulsar, and more

The main idea of this pattern is that services do not need to know each other's addresses. Each service only needs to know an event pipe, or message broker, and entrusts it with distributing its messages and feeding back data from other services. There are no direct API calls between services. Each service only fires events into the pipe and listens for events arriving from the pipe.

Along with this design pattern, the mindset on how to store data needs to evolve too. We not only store the STATE of entities, but also the stream of EVENTs that constructs that STATE. This storage strategy is also very effective when dealing with concurrent modifications of the same entity, which can cause data inconsistency. There are 2 approaches to storing and consuming events, the Queue and the Log, which we will explore in later topics.
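
A minimal in-memory sketch of the idea in TypeScript; a real system would use a broker such as RabbitMQ or Kafka, and the event names here are made up:

```typescript
type Handler = (payload: unknown) => void;

// A toy "event pipe": services know only this broker, not each other.
class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  publish(event: string, payload: unknown): void {
    for (const handler of this.handlers.get(event) ?? []) handler(payload);
  }
}

const bus = new EventBus();

// The billing service listens for an event; it knows no address of the order service.
bus.subscribe("order.created", (payload) => {
  console.log("billing: invoicing order", payload);
});

// The order service fires an event into the pipe and moves on.
bus.publish("order.created", { orderId: 42 });
```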

More complex query mechanism?

Obviously there will be moments when we need to query data that requires cooperation between multiple services. In the monolith style, when all data of all services lives in the same database, writing an SQL query is simple. In the Microservices style, it isn't: each service secures its own database, as recommended practice. We suddenly can't JOIN tables; we lose the out-of-the-box rollback mechanism of database transactions when something goes wrong while storing data; and we may see longer delays while each service waits for data from other services. These obstacles make Event-Driven a "must have" design for a Microservices system, since it is the foundation for the patterns solving this querying issue, most commonly Event Sourcing, CQRS, and Saga.

Event Sourcing

The terms Event-Driven and Event Sourcing can be a bit confusing. Event-Driven is about the communication mechanism between services, while Event Sourcing is a coding solution inside each service for retrieving the state of an entity: instead of fetching the entity from the database, we reconstruct it from an event stream. The event stream can be stored in many ways: in a database table, read from Event-Driven components such as Apache Kafka or RabbitMQ, or kept in a dedicated event stream database like EventStore. This method gives developers a new responsibility: they have to create and maintain the reconstruction algorithm for each type of entity.
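
A minimal sketch of the reconstruction idea in TypeScript, using a made-up bank-account entity:

```typescript
type AccountEvent =
  | { type: "opened"; initialBalance: number }
  | { type: "deposited"; amount: number }
  | { type: "withdrawn"; amount: number };

// Instead of reading a stored balance, replay the event stream to rebuild it.
function reconstructBalance(events: AccountEvent[]): number {
  let balance = 0;
  for (const event of events) {
    if (event.type === "opened") balance = event.initialBalance;
    else if (event.type === "deposited") balance += event.amount;
    else balance -= event.amount;
  }
  return balance;
}

const stream: AccountEvent[] = [
  { type: "opened", initialBalance: 100 },
  { type: "deposited", amount: 50 },
  { type: "withdrawn", amount: 30 },
];

console.log(reconstructBalance(stream)); // 120
```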

As mentioned in the previous section, this strategy is helpful in concurrent data modification scenarios, such as the collaboration features seen in Google Docs or Google Sheets, or simply when 2 users hit "Save" on the same form at nearly the same moment. But this reconstruction approach is not friendly to the more complex queries that are so natural in a traditional database like Oracle or PostgreSQL, the SELECT * WHERE kind. To cover this drawback, each service usually also maintains a traditional database storing the current states of entities, and uses it for querying. This combination forms a pattern called CQRS (Command and Query Responsibility Segregation), where reads and writes of an entity happen on different databases.

CQRS (Command and Query Responsibility Segregation)

As mentioned above, this pattern separates read and update operations on a data store. A service can use the Event Sourcing technique for updating an entity, or use an in-memory database such as H2 to quickly record updates, while persisting the calculated entity states back to, say, a SQL database as quickly as possible. This pattern prevents data conflicts when many updates to a single entity arrive at the same time, while keeping a flexible interface for querying data.

This pattern is effective for scaling, since we can scale the read database and the write database independently. It fits high-load scenarios where write requests must complete quickly, because it reduces calls to a database whose internal locking can add delay. Quicker responses mean more room for other requests, especially in thread-based server technology such as Servlets or Spring.

A drawback of this pattern is coding complexity. The more components join the process, the more problems there are to handle. So it is not recommended in cases where the domain or business logic is simple; simple features fit nicely with the traditional CRUD approach, and overusing anything is not good. I also want to remind you that if the whole system has no special load requirements or write-heavy features, switching to Microservices is not recommended either (the reason is here).

Saga

Saga means a long heroic story, and the story of transactions inside Microservices is truly heroic and long. A transaction is an important database feature aimed at maintaining data consistency; it prevents partial failure when updating entities. With distributed services, we have distributed transactions. Now the mission is to coordinate those separate transactions to regain the attributes of a single transaction, ACID (atomicity, consistency, isolation, durability), across distributed services. Simply put: Saga is a design pattern that aims to form the transaction for Microservices.

The Saga pattern is about what the system must do if there is a failure inside a service: it should somehow reverse previously successful operations to maintain data consistency. The simplest way is to send out messages asking other services to roll back certain updates. To build a Saga, developers may have to anticipate many scenarios in which an operation can fail. Higher-level solutions for the rollback mechanism involve techniques like semantic locks or entity versioning, which we can discuss in other topics. The point here is that it also adds a lot of complexity to the source code. The recommendation is to divide services well to avoid writing too many Sagas. If some services are tightly coupled, we should think about merging them back into one monolith service; Saga is less suitable for tightly coupled transactions.
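
A minimal orchestration sketch of the compensation idea in TypeScript; the order/payment/shipping steps are made up, and a real Saga would usually coordinate via the message broker:

```typescript
interface SagaStep {
  name: string;
  action: () => Promise<void>;     // forward operation in one service
  compensate: () => Promise<void>; // how to undo it if a later step fails
}

// Run steps in order; on failure, roll back the completed ones in reverse.
async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  try {
    for (const step of steps) {
      await step.action();
      completed.push(step);
    }
  } catch (error) {
    for (const step of completed.reverse()) {
      await step.compensate(); // e.g. publish a "refund payment" event
    }
    throw error;
  }
}

// Usage sketch: if shipping fails, the payment is refunded and the order cancelled.
runSaga([
  { name: "create order", action: async () => {}, compensate: async () => {} },
  { name: "charge payment", action: async () => {}, compensate: async () => {} },
  { name: "ship", action: async () => { throw new Error("out of stock"); }, compensate: async () => {} },
]).catch((e) => console.error("saga rolled back:", e.message));
```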

More deployment effort?

Back in the monolith realm, deployment means running a few command lines to build an API instance and a client-side application. Going with Microservices, we obviously have more than 1 instance, and we need to deploy each instance, one by one.

To reduce this effort, we can use CI/CD tools such as Jenkins, or the cloud-based CI/CD services out there. We can also write our own tools; it won't be difficult. But there are still more issues than just running command lines.

Log Aggregation

Logging is a vital practice when building any kind of application, providing a picture of how the system is doing and helping troubleshoot issues. Checking logs on separate services is not very convenient in Microservices, so it is recommended to stream logs to one center. There are many tools dedicated to this purpose nowadays, such as Graylog or Logstash. The most famous stack for collecting, parsing, and visualizing logs at the moment is ELK, the combination of Elasticsearch + Logstash + Kibana. The drawback of these logging technologies is that they require quite a lot of RAM and CPU, mostly to support searching logs. For small projects, preparing a machine strong enough to run the ELK stack may not be affordable: Logstash needs about 1-2 GB to run comfortably; Graylog requires Elasticsearch, so it needs around 8 GB of RAM and 4 CPU cores; a full ELK stack needs even more.

Health Check & Auto restart

Besides logging, we must also have a way to keep track of service availability. Each service can expose its own /healthcheck API that a tool periodically calls to check whether the service is alive. Or we can use proactive monitoring tools such as Monit or Supervisord to watch ports and processes and configure their behavior when errors occur, such as sending emails or notifications to a Slack channel.
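
A minimal /healthcheck endpoint sketch in TypeScript using Node's built-in http module; the port is an arbitrary choice for the example.

```typescript
import { createServer } from "node:http";

// Respond 200 on /healthcheck so a monitor (or load balancer) can probe liveness.
const server = createServer((req, res) => {
  if (req.url === "/healthcheck") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok", uptimeSeconds: process.uptime() }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8080);
```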

Besides health checks, each service should be able to restart automatically when something takes it down. We can configure a process to start whenever the machine boots by adding scripts to /etc/init.d or /etc/systemd on most Linux servers. For processes, we can use Docker to automatically bring services back up right after they go down. For the machine itself, if we use a physical machine, we should enter the BIOS and enable auto-restart when power returns. If we use cloud machines, there is no worry.

These techniques are recommended not only for Microservices but also for any monolith system, to ensure availability.

Circuit Breaker

This is for when bad things happen and we have no way to deal with them but to accept. There are always such situations in life. For some reason, one or many services go down or become so slow due to network issues that users wait a long time for a single button click. Most users are impatient and will likely retry the pending action, a lot, and you know the system can get worse. This is when a Circuit Breaker takes action. Its role is similar to an electrical circuit breaker: to prevent catastrophic cascading failure across the system. The circuit breaker pattern lets you build a fault-tolerant, resilient system that survives gracefully when key services are unavailable or suffering high latency.

The Circuit Breaker is placed between the client and the actual servers hosting services. A Circuit Breaker has 2 main states, Closed and Open. The rules among those states are:

  • In the Closed state, the Circuit Breaker simply forwards requests from clients to the services behind it.
  • Once the Circuit Breaker detects a failed request or high latency, it switches to the Open state.
  • In the Open state, the Circuit Breaker returns errors for clients' requests immediately. The user acknowledges the failure, which is better than a long wait, and it also reduces the load on the system.
  • Periodically, the Circuit Breaker makes a trial call to the services behind it to check their availability. If they are healthy again, it switches back to Closed; if not, it remains Open.

Luckily, we may not have to implement this pattern ourselves. There are tools available such as Hystrix, part of Netflix OSS, or Istio, the community option.
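
Still, to make the two states concrete, here is a minimal sketch in TypeScript; the retry window and the protected call are made-up examples, and real implementations add failure thresholds and a Half-Open state.

```typescript
type State = "CLOSED" | "OPEN";

class CircuitBreaker {
  private state: State = "CLOSED";
  private openedAt = 0;

  constructor(
    private call: () => Promise<string>,
    private retryAfterMs = 5000, // how long to stay Open before a trial call
  ) {}

  async request(): Promise<string> {
    if (this.state === "OPEN") {
      // Fail fast until the retry window has passed.
      if (Date.now() - this.openedAt < this.retryAfterMs) {
        throw new Error("circuit open: failing fast");
      }
      // Retry window passed: let one trial call through.
    }
    try {
      const result = await this.call(); // forward to the service behind
      this.state = "CLOSED";            // success closes the circuit again
      return result;
    } catch (error) {
      this.state = "OPEN";              // failure opens the circuit
      this.openedAt = Date.now();
      throw error;
    }
  }
}

// Usage sketch: wrap a flaky downstream call.
const breaker = new CircuitBreaker(async () => "response from service");
breaker.request().then(console.log).catch(console.error);
```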

Service Discovery

As mentioned in the Event-Driven section, services inside a Microservices system don't need to know each other's addresses when using an event channel. But what if the team is not familiar with the event style and decides not to use it, or the services are simple enough to just expose REST APIs? Using Event-Driven is not a must, so in this case, how do we solve the addressing problem between services?

When the system needs to scale, more instances of one or many services will be added, removed, or just moved around. To let every service know the addresses (IP, port) of the others, we need a man in the middle that keeps records of service addresses and keeps them up to date. This module is called Service Discovery, and it is usually used alongside Load Balancing modules. We may discuss this more in other topics.

We also don't need to create this component from scratch. There are tools out there such as etcd, Consul, and Apache ZooKeeper. Give them a try.

Ending

Above is an overview of what we need to know when moving to Microservices. Make sure you google it all before really starting. Each pattern has its own pros and cons, plus workarounds that other topics will cover. Thanks for reading!

What is a Senior software developer?

Shortly said: when you can handle the entire development cycle, end to end.

Most developers can quickly master the syntax of particular programming languages, but that does not mean a Senior is a developer who has mastered all syntax. A Senior must master syntax by default, plus many other skills.

A software development cycle includes these steps:

  • Collect requirements from customer needs.
  • Choose programming methods and technical solutions that fit the requirements and the development team.
  • Operate the continuous delivery process to bring the product to reality.
  • Troubleshoot issues.
  • Provide guidance to other developers to help them contribute to the development process.
  • Estimate development time.

Depending on each developer's abilities, some can tackle all phases in 2 years, some take 10 years, and some never get there, for reasons I will mention at the end of this post.

From the steps above, we can deduce the skills a Senior must have to do the job:

  • Be patient enough to listen to customers and understand their real needs.
  • Have a strong understanding of UI/UX to offer solutions; customers usually come with the best-case scenario in mind, but most of the time, bad things happen.
  • Can explain technical terms to non-tech customers. Sometimes we need the customer to empathize with the development team about how difficult a feature can be, why some features are more expensive than others, or why something is impossible.
  • Have experience with several programming paradigms. The most popular paradigm today is Object-Oriented Programming (OOP), but alongside it, Functional Programming and Reactive Programming are rising too. Each paradigm has its own pros and cons. Each language, depending on its features, can be classified into one or multiple paradigms. For example, Java is an OOP language, but since Java 8 it has libraries designed in a functional style; JavaScript was functional in the beginning, but today it supports an OOP style too. The pros and cons of each paradigm can make writing code for certain types of requirements harder or easier, more verbose or less. So it is important that a Senior understands as many different paradigms as possible, to ensure maximum code quality with minimum effort. This is why firms are willing to pay much more for a true Senior.
  • Know how to integrate with popular 3rd-party services. Those services solve common issues behind common features, such as sending and managing emails, uploading/downloading files, or billing. Integrating with 3rd parties helps avoid reinventing too much, reducing errors and development time.
  • Be proficient with command lines and shell scripts. Most product-packaging processes use the command line. Some modern IDEs can package the product for us, but in situations such as deploying web servers or automatically building mobile applications, we need the command line. A Senior on any kind of product must know how to build the application using the command line only. An IDE is just a convenient tool; don't depend on it too much.
  • Be proficient with version control tools such as Git and GitHub. The development process contains a lot of code changes from many developers: some good, some bad, some worthy, some trashy, some full of risks. A Senior must review others' code frequently, always pushing code quality toward the highest level.
  • Be proficient with debugging techniques to find the root cause of issues. Most IDEs today have good breakpoint support to stop the program at any point so developers can double-check everything. In some situations we have to use console logs to figure out the problem.
  • Can quickly understand source code, project structure, and business concepts, in order to diagnose issues quickly.
  • Can create automated tests to detect issues early after each code change.
  • Have knowledge of computer architecture and system design to provide optimization solutions when the application hits its limits.
  • Be good at technical explanation, to provide guidance and support for less experienced developers, as well as to cooperate with other Seniors.
  • Be good at technical writing, to contribute to the team's knowledge base.
  • Have a few past achievements, such as completed features, completed projects, or proven optimization solutions.
  • Have a sense of development time. Being able to give a reasonable estimate proves that a developer has seen the development process thoroughly.

To acquire the abilities above, there is no way but practice. The point is to know what to practice. Experience is not the number of years you have gone to work; it is the number of things you have done. Most importantly, external validation is not necessary: you don't need to wait for an exam or a certificate to call yourself Senior, especially in the software development world. As far as I know, there is no official contest granting the title to developers, yet the modernity of the current world has been cultivated by many no-title heroes. We give respect to those who contribute.

So now, what to practice?

There are many ways to practice: through your work, in your free time, via courses. I would like to suggest the following:

  • If you have a chance, do some small outsourcing projects. They will throw you into the entire development process and get you familiar with customer needs.
  • If you can't find an outsourcing project, create one for yourself as a personal project. Build any application you wish existed!
  • If you have no ideas, just try to clone some famous applications!
  • Watch tech conferences. It's easy to find them on YouTube now. They keep you up to date on the technology field, and listening to experts is one of the best ways to learn.
  • Read books on related subjects such as UI/UX design, design patterns, system design, operating systems, even management if you are interested, or psychology if you have trouble talking to clients. Books gradually feed you knowledge in an organized way, and someday it will all be useful. Don't expect any one book to change your life; one thousand books can!
  • Be curious! Spend your free time learning new technology; compare it to what you already know to see the differences, the improvements, or a new way to solve an old problem.
  • Write down everything. Don't trust your memory. Writing is a good way to learn again and learn deeper.
  • Practice explaining complex and abstract concepts by talking to friends or writing blogs like me. 🙂

And last but not least, what will prevent you from leveling up? In my opinion:

  • Doing the same tasks the same way for a long time. You won't get a broad enough view, and so you won't get a deep view either.
  • Being too passive. Remember that you are the change you need! Never wait for a chance from somebody else; people are tired enough finding chances for themselves.
  • Refusing to read or learn.
  • Being satisfied with what you already know. You never know what you don't know!

This post is just some thoughts intended to add detail to the abstract term "Senior" in the software development world. I hope it helps developers double-check their career status, and helps people in other fields understand important criteria when finding or hiring developers.

Thanks for your time.

Is Microservices good?

Yes and No.

Yes when we face the problems it solves, and No when we blindly follow the "trend".

Once, my boss read somewhere about how amazing Microservices are, and he instantly asked the development team: "Let's do Microservices". He's purely a business man but always wants to apply the newest technology. Lucky me, but also a challenge: switching from one system design to another. It actually sounded cool to us, so boss and developers agreed quickly. So, let's do Microservices.

What is Microservices?

Microservices, clearly said, is a system design approach; I personally don't count it as a technology. A Microservices system is itself composed of multiple technologies, each solving a business problem or a problem emerging inside the Microservices design itself. The opposite approach is called the Monolith – one All-In-One Big Service, which in short is what most systems are nowadays: a single API server plus a database. Switching to Microservices, technically, means dividing the functions of the One Big Service into multiple small services running independently, wiring them together, and then choosing the fittest technologies for each small service. Each technology here can be a programming language, a framework, a piece of software, a third-party service, or a tool.

In its simplest form, we can think of a Microservices system as composed of multiple monolith systems. Each monolith contains its own server and database and exposes its own API gateway. The monoliths communicate with each other by calling each other's APIs directly or by listening to shared event channels, depending on the use case.

Microservices is NOT a new skill set. A Microservices system is composed of multiple monolith services, so to do Microservices, developers must be good at building monoliths first.

What problems do Microservices solve and NOT solve?

Every boss wants to move to Microservices for a reason: they think it is good. But I think not everyone understands WHAT it is good for. Microservices is NOT purely a better design than other designs. It is an adaptation to overcome problems that emerge when a system grows to a huge size, in both traffic and logic complexity. So if your system is not supposed to be the next Amazon or Netflix, a monolith design is fine for you, since it is much simpler to set up and maintain. A few thousand users with a few hundred connections per second is within the capability of most technologies nowadays, such as Spring, Node, Ruby on Rails, or PHP. But the threshold is hard to estimate because each system has different features, and the best way to find its maximum capability is a stress test – basically, send as many requests as possible and analyze the response times. When you know your system's capability, you have a reference number for deciding when to move to Microservices. Microservices is a journey; only carry on when you are well prepared.

Microservices does NOT magically increase the system's load threshold unless the services are divided and designed appropriately. Remember that I/O takes the main share of the delay between request and response. Normally, in a monolith design, all services share the same memory, which is the fastest way for services to cooperate. If we blindly deploy services to multiple different places to make the system look like Microservices, there will be more I/O time, since each service has to send more requests to the services it depends on, and system performance will drop significantly. This may be the most common mistake when creating a Microservices system. Microservices is NOT fanning out all services to multiple servers. We must measure to identify the bottleneck in the system before deciding to move some related services to an independent server. And it is also NOT simply deploying the current service's source code to another server: the new server should bring some benefit, such as greater processing power that accelerates the service, or the service should be redesigned with other technologies that provide benefits it needs.

A good example of redesigning a service is separating READ and WRITE into two services for the same domain object (same table), in order to support a large number of concurrent reads/writes with low latency. Assume we have a monolith system, but after a period of growth, we have a huge amount of data and a complex schema on a SQL database, so that every time a query is issued it freezes the whole system for a few seconds. This is bad, and we want to improve. At that moment, we may arrive at this solution: divide the service along the READ and WRITE axis. The READ service may use a NoSQL database as persistent storage with fast read speeds to reduce users' waiting time. The WRITE service may use an in-memory database such as H2 to process data updates as fast as possible, then gradually synchronize the in-memory data to the READ service's persistent storage. These two services should run on different machines to maximize resource usage. That is a true Microservices story. If we simply deploy another identical service on another server to handle more traffic, routing by IP or by zone, that is Load Balancing.

Microservices does NOT reduce development cost. In fact, it increases it. First, we need more machines to run independent services, plus more machines for monitoring tools. Microservices is an architectural design approach, a view of the whole system, NOT of how each service is coded; it does NOT magically reduce bugs (you may read here for more about the sources of bugs). But when services are divided well, the boundaries between services are reinforced, which helps developers avoid using the wrong components and avoid creating too many cross-cutting components full of hidden logic. Microservices creates a real need for DevOps positions: people responsible for deploying multiple services as quickly as possible to ensure minimal downtime between deployments. They will have to create a CI/CD system to automate the deployment process, calculate system load, and create or install monitoring tools to track how the services are doing with each other. When a bug happens inside a Microservices system, it is more complicated to fix than in a monolith, since there is now more than one place to look for the true source of the bug. Developers also have to set up an identical system on their local machines for development and testing; a system of many services requires a stronger developer machine, and a system with too many services can be practically impossible to deploy on a single machine, forcing us to mock fake API gateways on behalf of some services. Writing automated tests gets harder too. Many behind-the-scenes chores like these will burden developers when switching to Microservices. More work, more jobs, more salary.

Microservices is NOT a license to freely apply the latest technology. I bet your team won’t want to work in a tech soup. Agreed, Microservices gives us the ability to mix multiple technologies and make use of their advantages. But it does require us to understand those advantages before applying them, or your system will gain complexity without any significant benefit, and crying will come soon. Microservices is NOT only about technology; it’s also about people. It depends on how your team is organized, what their skills are, and what they are good at, because learning something new takes time, and if you are in a rush, work with the tools you are familiar with. For example, suppose we need to build a small service to handle employees’ documents within one month, and we have only one thousand employees. Our developers are experts in Java, but Go is the new language and it is trending. You may hear somewhere that “Go is faster”, but here is the point: your developers will build that new service faster in Java than in Go, and one thousand users is nowhere near a limit that forces a switch from Java to Go.

Microservices is NOT meant to create boundaries between teams. It creates boundaries only between the services your teams are building: technical boundaries. The more developers know about other services, the earlier they can spot problems, and the lower the communication cost between teams. Don’t use an architectural design as a political tool inside an organization. One developer can work on multiple services, depending on his or her ability, and such people usually act as important bridges between services. I know some managers want to divide teams so they are easier to rule, but I feel it is not a good way to build an organization: people will come to work with doubts and envy, because, more or less, all services are necessary at some point, yet at any given moment some are more important than others. Teams without walls also enable cross-checking, which pushes teams forward and discourages hoarding knowledge for job security. With no sharing and no checking between employees, a few will gradually start to think they are irreplaceable. That is toxic thinking for an organization.

So when to go Microservices ?

Microservices does not reduce costs, does not improve performance, and does not make things “better” by itself, so why is it trending ? Because it comes from big tech companies, and people tend to believe that whatever comes from the big boys is always “better”. We easily copy blindly without diving deep to understand why they did it. Big tech companies hit the limits of the technology: a single Monolith could not carry them anymore, so they had to use multiple Monoliths to solve their problems, and the result is a system they named Microservices. Technology changes every day and who knows what will come in the next few years; we have seen many frameworks, languages and platforms arrive as “better” options and then die. So the key to deciding whether to move to Microservices is to know the limits of the current system by load testing it well (a rough sketch of such a probe follows below).
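As a rough illustration of what “testing the load” can start with, here is a sketch of a crude load probe in plain Java. The URL, thread count and request count are assumptions to adjust; dedicated tools such as JMeter, Gatling or k6 do this far more thoroughly.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;
import java.util.concurrent.atomic.LongAdder;

public class LoadProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical endpoint of the system under test.
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("http://localhost:8080/health")).build();
        int concurrency = 100, requestsPerWorker = 50;
        LongAdder totalMillis = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        for (int i = 0; i < concurrency; i++) {
            pool.submit(() -> {
                for (int j = 0; j < requestsPerWorker; j++) {
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception e) { /* a real tool would count failures */ }
                    totalMillis.add((System.nanoTime() - start) / 1_000_000);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        // Watch how this average degrades as concurrency grows: that is your limit.
        System.out.println("avg latency ms: "
                + totalMillis.sum() / (concurrency * requestsPerWorker));
    }
}
```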

Another reason we might need Microservices is to run many projects at the same time. For example, if we need to build a Pricing Engine module at the same time as an ERP module to manage employees, we might assign them to two teams, since the business logic of the modules does not depend on each other. Each team can develop its own service on a separate server, so the deployment of each service is independent too. If the two modules were built into one Monolith, an issue in one module could block the whole deployment process to prevent risks on the production environment. So the key point when dividing services between teams is the dependency between services: they should be loosely coupled, meaning each service can act as a separate product without knowing about, or needing the existence of, the other services.

When each service is truly independent, it can be reused too. For example, if your company has multiple projects but the same employees use all of them, then to avoid duplicating features like authentication, employee management, or full-text search, we can carve them out into separate services that different projects can reuse.

One scenario in which your system may end up looking like Microservices is when rewriting a legacy system with up-to-date technology. Rewriting the whole system is time-consuming, so we usually have to rewrite it module by module. Each rewritten module can be deployed on a separate server, and along the way of rewriting the legacy system, you are using Microservices.

Why your software gets bugs year after year

Humans, machines and money are all involved.

*“Software” here refers to every kind of PC application, web application and mobile application.

The very first year of my software developer career was spent as a bug killer. In the year I joined, I could feel that nothing new was added to the product; everyone seemed to spend the whole year, actually more, fixing bugs created in just the prior two years of coding. It sounds expensive, right ! The cost of fixing bugs is not only employees’ wages but also customers’ trust, and sometimes losing to your competitors. At least that year taught me how a thing can be done wrong, and I do believe that knowing how to do it WRONG is even more important than knowing how to do it RIGHT. Every mistake is worth learning from, whether it is yours or not.

How are humans involved ?

Testers usually blame developers for bugs. Actually, it’s true, but remember that developers are also the source of everything that is not a bug. People make mistakes all the time, unconsciously, and so do developers and testers. Seniors make fewer bugs than Juniors, not because Seniors are smarter, but because Seniors already made all of those mistakes in the past. Smartness only helps you figure out and solve problems faster; it does not help you avoid problems or mistakes. The job of testers is to help reveal developers’ mistakes and ensure the product’s quality, so avoid carping at and blaming developers for bugs.

The easiest-to-fix bugs are Programming Errors. Yes, this is all about developers. Programming Errors include:

  • Syntax error : Each programming language has its own syntax and coding conventions. Newbies usually make these mistakes because they do not yet fully understand the predefined rules; sometimes they are not even aware that such conventions exist.
  • Semantic error: Once a coder is familiar with the syntax, they can literally tell the computer to do the things they desire, in a chosen order of steps. But sometimes some steps conflict with others, such as reading a value that is not yet available, or steps are executed in the wrong order and produce a wrong result. The Null pointer exception may be the most famous error developers and testers ever hear about during their jobs: it occurs when the computer is asked to read a value that doesn’t exist in its memory (a small sketch follows below). The more experienced the developer, the more easily they can avoid these errors.
  • Logical error: Even Senior developers with many years of experience still make this mistake. It happens when a developer has to turn his understanding of a requirement into lines of code. Being human, and depending on how efficient the communication method is, he may misunderstand the logic and produce misbehaving code. When he understands the requirements correctly but still produces the wrong thing, the causes are the ones discussed later.
The main difference between a Junior and a Senior developer
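To illustrate the semantic error above, here is a tiny generic Java sketch of how a null pointer slips in, and one common way to guard against it. The map and key names are made up for the example.

```java
import java.util.Map;
import java.util.Optional;

public class NullExample {
    public static void main(String[] args) {
        Map<String, String> emails = Map.of("alice", "alice@example.com");

        // Unsafe: get() returns null for an unknown key, so trim() would throw
        // a NullPointerException at runtime.
        // String email = emails.get("bob").trim();

        // Safer: make the "maybe missing" case explicit.
        String email = Optional.ofNullable(emails.get("bob"))
                .map(String::trim)
                .orElse("no-email-on-file");
        System.out.println(email);
    }
}
```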

The next level of bug complexity comes from Communication Errors. This time, it is about everyone: product owners, business analysts, developers, testers, managers, and the developers’ cat.

There is always a communication gap between any pair of people. If a team lacks cooperation, or is weak at it, adding members does not reduce development time; it increases it, because the more inefficient the communication, the more problems arrive. And again, please don’t confuse communication with being talkative, chatting, or gossip. A Communication Error here is the mis-distribution of information between people: the lack of reporting systems between stakeholders. Lack of information leads to wrong understanding and even conflict, not only between people, but also in product design and the development process.

  • Reporting between Developer and Developer: This is the most common problem in a development team. A typical phenomenon is a developer reinventing a thing that already exists, not to improve it or to overcome some limitation, but because he is not aware of its existence. As we know, any invention must go through a lot of mistakes, and to reinvent is to face those mistakes all over again. Mistake is bug!
    Within a group of developers, the lack of reporting on what is done, what is being done, and what will be done creates blind spots in deciding how to write code. Writing new things is time-consuming and error-prone; using existing ones without documentation or guidance is error-prone too. A well-organized tech meeting is a good practice for distributing information. Serious technical documents and good distribution methods also keep developers’ knowledge synchronized, and therefore less error-prone.

  • Reporting between Business Analyst and Developer : A lack of reporting on what is doing well, what is doing poorly, and why this and not that, also creates smoke in developers’ minds. Everyone has their own past, and the past creates assumptions. Sometimes you find a developer building things in his own manner, which he believes is right, but which does not fit your current business logic. Don’t assume people understand things the way you do. So, to make developers understand the situation, also let them know what customers do, how they do it, and why they do it that way.
  • Reporting between Product Owner and Business Analyst : This is usually mere verbal communication. This step turns an idea into the detailed actions that developers will later engage with, and an idea is mutable. A change in the idea creates a change in the requirements, and changes in requirements, if not clarified carefully, can conflict with or mismatch old requirements. In that case, even if developers do exactly what the requirements say, the software still gets bugs: business logic bugs. To reduce this, the business analyst should have a method to oversee how business rules affect each other, so that potential problems can be reported early to the “idea generator” and then clarified with the Product Owner into final, safe actions. The Product Owner should also give a clear vision and strategy, so that everyone has a particular destination in mind and can adjust their own actions to fit it. Do you believe that the idea itself can bug?
  • Reporting between Employee and Manager: Managers are familiar with reporting. They use information reported by other stakeholders to create detailed action plans. For some reason, budget for example, or a mis-estimate of the required effort, the deadline for an action is sometimes too tight. That creates time pressure on developers and testers, and with limited time, everyone chooses the fastest way: developers choose hard-coded solutions, testers skip some test cases, and eventually, bugs emerge. Hard-coding is curing an illness by covering up the symptoms instead of applying time-consuming, proven treatments.

  • Reporting between Testers : The job of a tester is to manage test cases and go through them, often. With some coding knowledge, testers can write automation tests that run automatically after every change in requirements, big or small. This is the most efficient way for testers. Where testers are non-technical, developers usually take care of this part too. Without automation tests, testers must manually re-test case by case after each change in requirements. Testers should not assume that developers will take care of all the side effects a change has on other functions; sometimes developers are not even aware of them. Fixing bugs can create bugs too. So, if you are tired of manual tests, create automation tests (a minimal sketch follows after this list).
  • The cat of employees : This is a metaphor for employees’ mental health, which affects alertness and is a cause of mistakes. A broken heart is more likely to break the system than a whole one, right. So, make sure your team is healthy.
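As a minimal illustration of the automation tests mentioned above, here is a JUnit 5 sketch. The PriceCalculator class and its applyDiscount method are hypothetical; the point is that this check re-runs itself on every change instead of being re-tested by hand.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceCalculatorTest {
    @Test
    void discountIsAppliedOnce() {
        PriceCalculator calculator = new PriceCalculator();
        // 100.0 with a 10% discount should always come out as 90.0,
        // no matter what other features change around it.
        assertEquals(90.0, calculator.applyDiscount(100.0, 0.10), 0.001);
    }
}
```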
How are machines involved ?

Besides the mistakes obviously made by humans, there are objective conditions that can make your software behave in undesired ways that customers report as bugs. Someone may blame developers for their inability to anticipate those conditions, but remember that it is also the testers’ inability to anticipate them, and that they were not in the requirements written by the BAs or the Product Owner either.

Experienced developers can be aware of these situations and offer early solutions, but everything takes time, and solving these kinds of bugs requires more effort than the ones above. Testers should be aware of these situations too, and learn more advanced testing techniques that help reveal such problems before they reach end users.

  • Incompatibility: Your customer’s machine may be too limited to run your software: a computer with low memory (RAM, hard disk), an old (weak) processor, or an outdated Operating System. Every piece of software needs a minimum amount of available memory and certain pre-existing components to initialize and run. When the computer runs out of resources, anything can behave improperly. Every piece of software should ship with a system requirements note; it can act as a disclaimer for your team.
  • Memory Consumption : For the same problem, each developer can have a different solution, and each solution has a different speed and memory consumption. Memory is a limited resource, and more or less, this depends on the developer’s skill. But with today’s computing power, a few gigabytes of memory is normal, so most developers in most situations no longer care much about memory consumption, until it crashes.
    Running out of memory is not only caused by your own software. A computer runs many programs at the same time, so it is not always because of you. It is best if the software can notify users about its memory situation, so they can understand what is going wrong.
    In a web server context, each request is allocated a maximum amount of memory. This practice ensures the server can handle hundreds to thousands of requests per second. The code written by developers may not consume that much memory, but it may still exceed this threshold. Many other components behind the scenes, such as databases, proxies and third-party services, are vulnerable to this problem too. So, developers: never assume everything goes well; always prepare for failure, because bad things do happen.
  • Unstable Network: Offline-only software has nothing to worry about here. This matters for software with a client-server model, which is most applications nowadays. The connection quality between client and server is extremely important, especially for applications requiring real-time responses such as stock market apps, online gaming or streaming services. The technology behind those applications already has tactics to recover or endure under an unstable network or a low-bandwidth connection (retry with backoff is a common one; a sketch follows after this list), but they are “try best” only, so don’t expect magic. For some applications, such as video streaming, the ability to work under a low-bandwidth or unstable connection is the key competitive factor. Testers should be aware of this and have serious test scenarios for it.
    To optimize the software to keep working in these situations, developers must have deep knowledge of computing and networking. Fixing these kinds of bugs is extremely hard. Don’t expect a solution to always be available; our current civilization still has its limits.

  • Offline Accidents: The electricity goes off, for whatever reason. A sudden shutdown can make some functions of a program fail partway through, causing problems at the next start, such as mismatched or corrupt data. Software handling sensitive data, in the banking industry for example, usually has recovery strategies, digital and non-digital, to protect the business from damage (one common digital pattern is sketched below).
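One of the “try best” tactics mentioned under Unstable Network is retry with exponential backoff. Below is a minimal generic sketch in plain Java; the attempt limit and delays are assumptions to tune per application.

```java
import java.util.concurrent.Callable;

public class Retry {
    // Re-run a flaky call, waiting longer after each failure.
    public static <T> T withBackoff(Callable<T> call, int maxAttempts) throws Exception {
        long delayMillis = 100;
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) throw e; // give up after the last attempt
                Thread.sleep(delayMillis);
                delayMillis *= 2; // double the wait: 100 ms, 200 ms, 400 ms...
            }
        }
    }
}
```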
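And for Offline Accidents, one common recovery-friendly pattern is write-then-rename: write to a temporary file, then atomically swap it over the real file, so a crash mid-write never leaves half-written (corrupt) data behind. This sketch assumes a filesystem that supports atomic moves; the file names are illustrative.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class SafeSave {
    public static void save(Path target, String content) throws IOException {
        Path temp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(temp, content.getBytes(StandardCharsets.UTF_8));
        // ATOMIC_MOVE makes the swap all-or-nothing on supporting filesystems,
        // so readers see either the old file or the new one, never a mix.
        Files.move(temp, target, StandardCopyOption.ATOMIC_MOVE,
                StandardCopyOption.REPLACE_EXISTING);
    }
}
```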
How is money involved?

The budget of a project shapes the plan, the time pressure, and the priority of work. Most of the time, developers focus first on writing code that fits the business logic and set aside potential machine problems. That makes sense, because we should avoid premature optimization: how sure are we that the potential problem will ever happen? But this priority-driven neglect produces something called Technical Debt, and every debt must be paid, sooner or later, with or without interest.

Budget determines the quality of team members. Obviously, the more experienced the employees, the more benefits they want. Experience means they are aware of mistakes and have ways to avoid them, and when they know how to avoid mistakes, they know how to do things right.

Budget affects employees’ motivation, and motivation makes them do their best to build the best software they can. Sometimes your software does not make enough money to sustain that kind of motivation or a minimum member quality; remember my question: “Do you believe that the idea itself can bug” ? If your software does not make enough money YET, why not share your vision with everyone and let them be part of it. Excellent people willing to work for joy and opportunity still exist, trust me !!

Ending

The above are some lessons from my debugging era. I hope they point out some of the mysteries of the developer world and help teams take more appropriate actions against bugs. Feel free to comment on anything you are interested in or disagree with. Thanks for your time.