How to compete with generative AI as a software engineer?

Before this decade of AI, software engineering was mostly about writing code that realizes requirements. Software engineers, to some extent, acted as translators between human language and computer language. Today this translation can be done by many generative AI products in seconds, and from my observation, the generated code often follows better patterns than code written by most developers. It is understandable that companies have begun laying off employees who do not keep up with existing AI. It is just cost optimization, a vital part of every business, and perhaps also the coldest truth of this life!

What is generative AI good at and not good at?

Recall the flow that every software engineer goes through daily:

Receive requirements -> Review current state of source code -> Define a target state of source code -> Retrieve information from documents of related tools, libraries and solutions -> Pick solutions -> Actually write code -> Align new code to existing code -> Deploy -> Test -> Measure results -> Read error messages -> Debug.

Some steps of this flow are done better by generative AI, and some are done better by humans:

For each step: what it involves, who does it better, and why.

  • Receive requirements: capture goals, constraints, acceptance criteria, performance, security needs, and stakeholders' expectations. Winner: Human. Humans are better at eliciting ambiguous needs, negotiating trade-offs, and asking the right follow-ups with stakeholders; AI can help by summarizing long requirement documents and suggesting missing or inconsistent points.
  • Review current state of source code: read the codebase, architecture, tests, docs, build scripts, dependencies, and CI config. Winner: Human + AI. AI can quickly index and summarize files, find patterns and risky hotspots, and generate dependency graphs, but humans provide domain knowledge, historical context, and recognition of subtle intent (business logic, quirks, trade-offs).
  • Define a target state of source code: design the desired architecture, interfaces, data flows, APIs, and acceptance criteria for the new state. Winner: Human + AI. AI can propose multiple concrete design options and highlight trade-offs; humans must pick the option that fits non-technical constraints (policy, team skill, product strategy).
  • Retrieve information from documents of related tools, libraries and solutions: find API docs, migration guides, best practices, and configuration notes. Winner: AI. AI can extract key steps, call signatures, and breaking changes, and produce concise examples from long docs much faster than manual reading; humans validate and interpret edge cases.
  • Pick solutions: select libraries, patterns, and implementation approaches considering performance, security, licensing, and team skills. Winner: Human. Human decision-makers must weigh organizational constraints, long-term maintenance, licensing, and political factors.
  • Actually write code: implement features, refactor, add tests, update docs. Winner: AI. AI excels at generating boilerplate, test stubs, consistent code patterns, and translations across languages.
  • Align new code to existing code: ensure style, APIs, error handling, logging, and patterns match the codebase; maintain backward compatibility. Winner: Human + AI. AI can automatically reformat, rename for consistency, and propose refactors to match existing patterns; humans confirm that changes do not break conventions tied to tests or runtime behavior.
  • Deploy: push to staging/production, run migration scripts, coordinate releases, prepare rollback plans. Winner: Human. Humans must coordinate cross-team tasks, business windows, and incident response; AI and automation are excellent at packaging, CI/CD scripts, and repeatable deployment steps.
  • Test: run the application locally and manually verify that new changes behave as expected. Winner: Human. Manual testing relies heavily on intuition, product knowledge, and human perception (e.g., UX feel, layout issues, unexpected delays, weird state transitions).
  • Measure results: monitor metrics, logs, user feedback, and testing results, and define success signals. Winner: Human + AI. AI can detect anomalies, summarize metrics, and surface correlations; humans decide which metrics matter, interpret business impact, and choose next actions.
  • Read error messages: analyze stack traces, logs, exceptions, and failure contexts. Winner: Human + AI. AI quickly maps errors to likely root causes and suggests reproduction steps; humans provide context (recent changes, infra issues) and confirm fixes.
  • Debug: reproduce issues, step through code, identify the root cause, fix and validate. Winner: Human. AI speeds up discovery (identifying suspicious diffs, suggesting breakpoints, generating reproducer scripts), but complex debugging often needs human insight into domain rules, race conditions, and stateful behavior.

How to compete with generative AI and secure your career as a software engineer?

Similar to the Industrial Revolution and the Digital Revolution, where human labor was replaced by machines, some jobs disappeared but new jobs were created. To some extent, AI is just another machine, a huge one, so essentially we are still in the era of the Machine Revolution.

The answer to this question is that we need to work where this huge machine cannot. So far, at the moment of this post, this is what we can do to compete with AI in software development:

Transition to Solution Architect

As AI becomes strong at writing code, humans can shift upward into architectural thinking. A Solution Architect focuses on shaping systems, not just lines of code. This involves interpreting ambiguous requirements, negotiating constraints across teams, balancing trade-offs between cost, performance, security, and future growth. AI can propose patterns, but only a human understands organizational politics, legacy constraints, domain history, and long-term impact. By moving into architecture, you operate at a layer where human judgment, experience, and foresight remain irreplaceable.

Become Reviewer / Validator

AI can produce solutions quickly, but it still needs someone to verify correctness, safety, and alignment with real-world constraints. A human reviewer checks assumptions, identifies risks, ensures compliance with business rules, and validates that AI-generated code or plans actually make sense in context. Humans excel at spotting hidden inconsistencies, ethical issues, and practical pitfalls that AI may overlook. Becoming a Validator means owning the final approval — the role of the responsible adult in the loop.

Become Orchestrator

Future engineers will spend less time typing code and more time coordinating AI agents, tools, workflows, and automation. An Orchestrator knows how to decompose problems, feed the right information to the right AI tool, evaluate outputs, and blend them into a coherent product. This role requires systems thinking, communication, and the ability to see the entire workflow end-to-end. AI is powerful but narrow; an Orchestrator provides the glue, strategy, and oversight that turns multiple AI capabilities into a real solution.

Study broader knowledge

AI is good at depth — consuming a specific library or framework instantly — but humans win by having breadth. Understanding multiple domains (networking, security, product design, compliance, UX, devops, data, hardware) allows you to make decisions AI cannot contextualize. Breadth lets you spot cross-domain interactions, anticipate downstream consequences, and design better holistic systems. The wider your knowledge, the more you can see risks, opportunities, and real-world constraints that AI cannot infer from text alone.

Expertise in task description

In an AI-driven era, the most valuable skill is the ability to turn a messy idea into a clear, precise, constraints-rich task. This includes defining scope, edge cases, success criteria, and architectural boundaries. AI is only as good as the instructions it receives — so those who excel at describing tasks will control the quality of AI output. Humans who master problem framing, prompt engineering, and requirement decomposition gain leverage: they make AI more accurate, faster, and more predictable than others can.
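As a small, hypothetical illustration (the structure and field names below are my own, not any standard), here is one way to turn a messy idea into a constraints-rich task: a brief that captures scope, edge cases, constraints, and success criteria, and renders them into a prompt an AI assistant can act on.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure for a constraints-rich task description.
# The field names are illustrative, not a standard.
@dataclass
class TaskBrief:
    goal: str
    scope: List[str] = field(default_factory=list)          # what is in scope
    out_of_scope: List[str] = field(default_factory=list)
    constraints: List[str] = field(default_factory=list)    # performance, security, style
    edge_cases: List[str] = field(default_factory=list)
    success_criteria: List[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Render the brief as a prompt an AI assistant can act on."""
        sections = [
            ("Goal", [self.goal]),
            ("In scope", self.scope),
            ("Out of scope", self.out_of_scope),
            ("Constraints", self.constraints),
            ("Edge cases to handle", self.edge_cases),
            ("Success criteria", self.success_criteria),
        ]
        lines = []
        for title, items in sections:
            if items:
                lines.append(f"{title}:")
                lines.extend(f"- {item}" for item in items)
        return "\n".join(lines)

# The vague idea "add rate limiting to the API", framed as a precise task:
brief = TaskBrief(
    goal="Add per-user rate limiting to the public REST API",
    scope=["/api/v1/* endpoints", "Redis-backed request counter"],
    out_of_scope=["internal admin endpoints"],
    constraints=["at most 5 ms added latency", "no new external dependencies"],
    edge_cases=["burst traffic right after a deploy", "clock skew between nodes"],
    success_criteria=["429 returned after 100 requests/min per user", "existing tests still pass"],
)
print(brief.to_prompt())
```

The same discipline applies whether the brief is pasted into a chat window or sent through an API: the clearer the framing, the more predictable the output.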

Business Analyst

The heart of value creation lies in understanding the business, not writing the code. AI cannot replace someone who knows market dynamics, user behavior, budget constraints, prioritization logic, risk tolerance, stakeholder psychology, and regulatory boundaries. A Business Analyst bridges the gap between technology and real-world value. They decide why a feature matters, who it serves, how it impacts revenue or cost, and what risk it introduces — areas where AI can help, but not replace the human nuance needed.

Pentester

Security is one of the hardest domains for AI to master fully because it requires creativity, unpredictability, street knowledge, and adversarial thinking. A pentester does more than run scanners — they exploit human behavior, spot surprising vulnerabilities, and think like an attacker. Humans who understand security fundamentals, threat modeling, social engineering, and advanced exploitation techniques will stay in demand. AI helps automate scanning and code analysis, but a creative pentester stays ahead by understanding motives, tactics, and real-world constraints.


Essentially, it is to use AI as a super-assistant that can write code very well to realize our intentions.

First time with NLP, huh?

Natural Language Processing (NLP) is a major research field of AI, and to most developers it sounds like a miracle. I have been interested in this field since the viral news about the GPT-3 model. I decided to learn to use it as a tool before it somehow replaces the developer job in the future, as many illustrious figures have predicted. But the more I study it, the more I realize how little I know. There is too much background knowledge required before you can understand each word of the GPT-3 paper. Below is a quick summary of the work behind the scenes that is hopefully useful to developers like me who want to make a leap to catch up with AI progress.

List of keywords

It is inevitably a long and exhausting journey to build a fairly basic understanding of the terms below:

  • Convolutional Neural Network, Recurrent Neural Network, Activation Function, Loss Function, Backpropagation, Feedforward.
  • Word Embedding, Contextual Word Embedding, Positional Encoding.
  • Long Short-Term Memory (LSTM).
  • Attention Mechanism (a minimal sketch follows this list).
  • Encoder – Decoder Architecture.
  • Language Model.
  • Transformer Architecture.
  • Pre-trained Model, Masked Language Modeling, Next Sentence Prediction.
  • Zero-shot learning, One-shot learning, Few-shot learning.
  • Knowledge Graph.
  • BERT, GPT, BART, T5.
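To give a concrete taste of one keyword on this list, below is a minimal sketch of scaled dot-product attention, the core operation inside the Transformer architecture. It uses NumPy with random toy vectors; a real Transformer adds learned projection matrices, multiple heads, and positional encoding.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # how strongly each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                                        # weighted mix of the value vectors

# Three tokens, each represented by a 4-dimensional toy vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)                   # self-attention: Q = K = V
print(out.shape)                                              # (3, 4): one context-aware vector per token
```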

What existed before BERT and GPT?

There was already a lot of research and existing work in the NLP field. Working in NLP means solving the common tasks below:

  • Part-of-Speech Tagging.
  • Named Entity Recognition.
  • Sentiment Classification.
  • Question & Answering.
  • Text Generation.
  • Machine Translation.
  • Summarization.
  • Similarity Matching.

spaCy and NLTK are the two most famous libraries in the NLP field; they provide tools, frameworks, and models that solve a few of the tasks above, but not all of them. Each task usually had its own model, and there was no reuse or transfer between models, until the Transformer architecture was published. Given the Transformer's impressive performance and capabilities, researchers began using this architecture for the NLP tasks above, aiming for a single model that can do it all. The results are the BERT and GPT models, both of which use the Transformer. In fact, BERT helps power the Google search engine, and GPT-3 and its successors power the ChatGPT application. Many more applications that make use of these models can be found around the Internet.
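As a rough illustration of both eras, the sketch below first runs two classic per-task components from spaCy (part-of-speech tagging and named entity recognition), then calls a Transformer-based model through the Hugging Face transformers pipeline for sentiment classification. It assumes the spacy and transformers packages are installed and that the small English spaCy model has been downloaded; which model the pipeline fetches by default is a detail of the library.

```python
# Assumes: pip install spacy transformers, plus the small English model via
#          python -m spacy download en_core_web_sm
import spacy
from transformers import pipeline

text = "Google published BERT in 2018 and it now helps power web search."

# Classic per-task tooling: spaCy ships separate components for
# part-of-speech tagging and named entity recognition.
nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
print([(token.text, token.pos_) for token in doc])   # POS tags
print([(ent.text, ent.label_) for ent in doc.ents])  # named entities

# Transformer-era tooling: one pretrained model family behind a generic
# pipeline interface, here used for sentiment classification (a default
# model is downloaded on first use).
sentiment = pipeline("sentiment-analysis")
print(sentiment("The new search results feel much more relevant."))
```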

Some Core Challenges when doing NLP

No matter what method is applied, the challenges that form the NLP field remain the same:

  • Computers do not understand words; they understand numbers. We need a method to convert each word in a sentence into a vector (a group of numbers) such that two words with similar meanings map to vectors that are close to each other, representing their similarity (see the sketch after this list).
  • Given a sentence with many words and variable length, find a vector that can represent the sentence.
  • Given a passage with many sentences and variable length, find a vector that can represent the whole passage.
  • From the vector of a word, sentence, or passage, find a method to convert it back into words, sentences, or a passage. This task in turn becomes Machine Translation or Text Summarization.
  • From the vector of a word, sentence, or passage, find a method to classify it into senses/intents. This task in turn becomes Sentiment Classification.
  • From the vector of a word, sentence, or passage, find a method to calculate its similarity to other vectors. This task in turn becomes Question & Answering, Text Generation, or Text Suggestion.
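To make the first challenge concrete, here is a toy sketch with hand-made three-dimensional "word vectors" (invented for illustration, not learned from data), cosine similarity as the closeness measure, and a naive sentence vector built by averaging word vectors.

```python
import numpy as np

# Toy 3-dimensional "embeddings" invented for illustration; real word vectors
# (word2vec, GloVe, BERT embeddings) have hundreds of dimensions learned from data.
embeddings = {
    "cat":    np.array([0.90, 0.10, 0.00]),
    "kitten": np.array([0.85, 0.15, 0.05]),
    "car":    np.array([0.10, 0.90, 0.30]),
}

def cosine_similarity(a, b):
    """Close to 1 means the vectors point the same way (similar meaning)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))  # high
print(cosine_similarity(embeddings["cat"], embeddings["car"]))     # lower

# A naive sentence vector: the average of its word vectors, one simple answer
# to "find a vector that can represent the sentence".
sentence = ["cat", "kitten"]
sentence_vector = np.mean([embeddings[w] for w in sentence], axis=0)
print(sentence_vector)
```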

It would be too long to dive into each keyword here, so please hit the Subscribe button to receive upcoming posts from my learning journey.

Thanks for reading!