12 Camouflage Techniques that Scam Websites Are Using (and How To Detect Them)

Scammers today are well equipped with technology. They have IT teams as capable as those of any software company. These engineers might not run scam operations themselves, but they put dangerous tools and systems into scammers’ hands. It is unclear whether these highly educated people chose to work for the scam industry, are themselves victims of fraudulent recruitment, or are backed by cybercriminal gangs that are in turn backed by a few governments – which ones, you can guess :). But the uncomfortable fact remains: they have black hats on their side!

Fake, impersonated (or rogue) websites today are designed to look as polished as the official ones. Scam websites copy not only logos, but also the professional look and feel. Their weakness, however, usually lies in their domain names. Security researchers could once detect these websites easily with a web crawler, but it is no longer that easy: scam websites now use camouflage techniques to hide themselves from researchers.

This post lists techniques commonly used by scammers to hide their content from researchers, and a way to work around this problem.

1. Cookie-based cloaking

Cookie-based cloaking (also called cookie-based redirecting or cookie-gated content) is a web technique where a website changes its behavior depending on the cookies stored in the visitor’s browser. A cookie is a small piece of data a website saves in the browser to remember information such as login sessions, referral sources, advertising campaigns, previous visits, and tracking identifiers. A website can use this information to decide what content to show each visitor. Scam websites use the technique to:

  • Show trivial content – a skateboard product homepage, a small HR company landing page, etc. – to visitors who access the site directly by typing its domain name.
  • Show scam content – impersonating famous services or companies to trick visitors into downloading malware or paying in advance – only to visitors who arrive by clicking an ad on a social network.

With this trick, a web crawler never sees the scam content, so it may fail to flag the site as a scam.
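To illustrate how a researcher-side crawler can expose this trick, here is a minimal sketch that compares two HTML bodies fetched for the same URL – one requested directly, one requested as if arriving from an ad click. The fetching step is omitted; the sample pages and the 0.3 threshold are illustrative assumptions, not a production detector.

```python
# Sketch: compare the "direct visit" page against the "came from an ad" page.
# A large difference between the two responses is a cloaking signal.
from difflib import SequenceMatcher

def cloaking_score(html_direct: str, html_via_ad: str) -> float:
    """0.0 = identical pages, 1.0 = completely different pages."""
    return 1.0 - SequenceMatcher(None, html_direct, html_via_ad).ratio()

direct = "<html><body><h1>Skateboard Shop</h1><p>Welcome!</p></body></html>"
via_ad = "<html><body><h1>Verify Your Bank Account</h1><form>...</form></body></html>"

score = cloaking_score(direct, via_ad)
print(round(score, 2))  # well above a same-page threshold like 0.3
```

In a real scanner, the two inputs would come from two HTTP requests that differ only in their Referer header and cookies.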

2. Geo-Targeting

Similar to cookie-based cloaking, geo-targeted scam content activates only for visitors from certain countries or cities. Instead of cookie data, the site uses the visitor’s IP address to decide what content to display. Scam websites use this technique to hide from the cybersecurity researchers who hunt them: many security companies scan websites from US cloud providers, datacenter IP ranges, or known research networks, and scam sites detect these ranges and automatically hide their scam content from those locations.
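The server side of this trick is often as simple as an IP-range lookup. The sketch below shows the idea so researchers understand what they are up against; the two network ranges are illustrative placeholders, not a real blocklist.

```python
# Sketch: decide whether a visitor IP falls inside "scanner-looking" ranges.
# Real cloaking kits ship much larger lists of cloud/datacenter networks.
import ipaddress

DATACENTER_RANGES = [
    ipaddress.ip_network("3.0.0.0/9"),     # illustrative cloud-provider range
    ipaddress.ip_network("34.64.0.0/10"),  # illustrative cloud-provider range
]

def looks_like_datacenter(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATACENTER_RANGES)

print(looks_like_datacenter("34.80.1.1"))    # True  (inside a listed range)
print(looks_like_datacenter("203.0.113.5"))  # False (a residential-looking IP)
```

The defensive takeaway: a scanner that probes only from datacenter IPs will systematically miss geo-cloaked content, so probes from residential-looking vantage points matter.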

Another use of geo-targeting is to localize content in the visitor’s language. Scam content feels more convincing when it uses the local language, local currency, local phone numbers, local branding, and region-specific holidays or events. Victims are more likely to trust a page showing familiar information and symbols. Combined with a domain name only slightly different from the legitimate one, this fools a lot of people around the world.

3. Device-Based Targeting

Device-based targeting is a technique where a website changes its behavior depending on the visitor’s device, operating system, browser, or hardware characteristics. The same URL may show completely different content on Android phones, iPhones, Windows PCs, or macOS machines. Scammers use this technique to deliver platform-specific malware to specific victims. For example, to deliver Windows malware, they can make their website display the scam messages only when the visitor is on Windows. This is possible because browsers (Chrome, Firefox, …) attach OS information to every HTTP request. A researcher on macOS or on a phone will never see the scam messages. This is one of the most common camouflage methods in modern phishing and malvertising campaigns.
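The OS hint travels in the User-Agent request header, so the server-side check can be a plain substring match. The sketch below shows the same logic from the defender’s side: a scanner can classify and rotate User-Agent profiles to see whether responses diverge by platform. The marker table is a simplified assumption covering only a few common patterns.

```python
# Sketch: extract the OS hint a server sees in a browser's User-Agent header.
# Checked in order, so "Android" wins over the "Linux" it also contains.
OS_HINTS = {
    "Windows NT": "Windows",
    "Macintosh": "macOS",
    "Android": "Android",
    "iPhone": "iOS",
    "Linux": "Linux",
}

def os_from_user_agent(ua: str) -> str:
    for marker, name in OS_HINTS.items():
        if marker in ua:
            return name
    return "unknown"

print(os_from_user_agent(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
))  # → Windows
```

A crawler that fetches each URL once per OS profile and diffs the responses can catch device-based cloaking that a single-profile scan would miss.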

4. Time-Based Activation

Time-based activation is a camouflage technique where a scam website only becomes malicious during specific periods of time.

This technique is often combined with ad campaigns. Digital ad platforms such as Facebook and Google always review a website’s content before placing ads, and they strictly ban scam and impersonation content. Scammers can now bypass this ad review system: they put normal content on the website during the review period so it gets accepted, while programming the site to show scam content only at specific times – for example, only from 8PM to 10PM. Because ad review systems have no access to the website’s source code, they have no clue that a site uses this technique. As a result, scammers can guess when their victims are usually online and configure the scam website to show its real content at that time.

Time-based activation also helps them avoid detection by scanners, limits their exposure, and increases their success rate.

5. URL Shortener Abuse

URL shorteners such as Bitly or TinyURL shorten URLs so they look nicer when shared – and less dangerous. Scammers exploit these tools to make their links less suspicious. When a user clicks a shortened link, say one created with TinyURL, the browser (Chrome, Firefox) sends a request to TinyURL’s server, which then redirects the user to the scammer’s actual link. Scammers exploit this mechanism to hide their real domain names and borrow credibility from famous brands – here, Bitly and TinyURL. The method is especially common when scammers send links via SMS: because the URL is short and comes from a well-known service, victims may let their guard down and click.
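A scanner therefore has to unwrap shortened links before judging them. The sketch below follows a redirect chain recorded in an in-memory map (all URLs are made up); a real crawler would issue HTTP requests and read each Location header instead.

```python
# Sketch: resolve a short link to its final landing URL, hop by hop,
# with a loop guard and a hop limit.
REDIRECTS = {  # hypothetical chain, for illustration only
    "https://tinyurl.example/abc": "https://tracker.example/go?id=1",
    "https://tracker.example/go?id=1": "https://bank-login.scam.example/",
}

def resolve_chain(url: str, max_hops: int = 10) -> list:
    chain, seen = [url], {url}
    while url in REDIRECTS and len(chain) <= max_hops:
        url = REDIRECTS[url]
        if url in seen:  # redirect loop — stop following
            break
        chain.append(url)
        seen.add(url)
    return chain

print(resolve_chain("https://tinyurl.example/abc")[-1])
# → https://bank-login.scam.example/
```

Reputation checks should be applied to the final URL in the chain, not the shortener’s domain.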

6. One-Time URLs

Another effective camouflage method used by scammers is the use of “One-Time URLs.” One-Time URLs are links that display scam content only once; afterward, the content disappears or changes completely. Technically, this behavior is not difficult to implement — any experienced web developer can build such functionality, and organized scam operations often have dedicated IT teams capable of deploying it at scale.

In a typical scenario, when a targeted victim clicks a malicious link sent through SMS, email, social media, or advertisements, the page displays phishing content, fake login forms, investment scams, or malware download prompts. However, if the victim later revisits the same link — or sends it to a friend, bank employee, or cybersecurity researcher for verification — the page may suddenly become unavailable, return a “404 Not Found” error, redirect to a harmless website, or display completely normal content unrelated to the scam.

7. JavaScript-Only Payloads

Many web scanners depend on HTML content when analyzing websites. To hide scamming intention, modern scam websites increasingly avoid placing malicious text, phishing forms, or scam indicators directly inside the initial HTML response. Instead, they use JavaScript to dynamically generate content only after the page loads, often based on factors such as device type, browser behavior, cookies, location, or user interaction.

In many cases, the HTML page initially appears almost empty or completely harmless to automated scanners. The actual phishing interface, fake login form, or malicious redirect is later constructed in the browser using obfuscated JavaScript, remote payload downloads, or delayed execution techniques. Some scam pages even activate only for real mobile users while showing benign content to security researchers or automated bots.

This technique, commonly referred to as a JavaScript-only payload or client-side payload delivery, makes detection significantly more difficult because traditional scanners may never execute the necessary scripts long enough to observe the malicious behavior.
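A cheap first-pass heuristic for this pattern is to measure how much visible text versus inline script the initial HTML contains. Below is a minimal standard-library sketch; the 50- and 200-character thresholds are arbitrary illustrations.

```python
# Sketch: flag pages whose initial HTML has almost no visible text
# but a substantial amount of <script> content.
from html.parser import HTMLParser

class TextVsScript(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.text_len = 0
        self.script_len = 0
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True
    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
    def handle_data(self, data):
        if self.in_script:
            self.script_len += len(data)
        else:
            self.text_len += len(data.strip())

def looks_js_only(html: str) -> bool:
    p = TextVsScript()
    p.feed(html)
    return p.text_len < 50 and p.script_len > 200

page = "<html><body><script>" + "x=1;" * 100 + "</script></body></html>"
print(looks_js_only(page))  # True
```

A page flagged this way is not necessarily malicious – many legitimate single-page apps look the same – but it is a strong candidate for full browser-based rendering before a verdict.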

8. Image-Only Websites

Similar to JavaScript-Only Payloads, to bypass traditional scanners, some scam websites avoid placing meaningful textual content directly inside the HTML page and instead render their entire interface as images. Banking forms, warning messages, promotional banners, fake customer support chats, and even login screens may exist only as embedded images, while the underlying HTML remains nearly empty or harmless-looking.

Because many security systems primarily analyze HTML structure, DOM text, metadata, and visible keywords, image-only websites can significantly reduce the effectiveness of conventional phishing detection methods. Without performing advanced image analysis or OCR (Optical Character Recognition), automated scanners may fail to recognize brand impersonation, phishing instructions, or scam-related language contained inside the images themselves.

Some campaigns further combine this technique with JavaScript rendering, geo-targeting, or device-based targeting to dynamically serve different image payloads depending on the victim’s environment, making automated analysis even more difficult.
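A companion heuristic to the script-ratio idea is to count img tags against visible DOM text; pages that are nearly all images deserve OCR-based inspection. The 3-image and 50-character cutoffs below are arbitrary illustration values.

```python
# Sketch: flag pages rendered almost entirely as images with little DOM text.
from html.parser import HTMLParser

class ImageCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = 0
        self.text_len = 0
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images += 1
    def handle_data(self, data):
        self.text_len += len(data.strip())

def looks_image_only(html: str) -> bool:
    p = ImageCounter()
    p.feed(html)
    return p.images >= 3 and p.text_len < 50

page = '<html><body><img src="a.png"><img src="b.png"><img src="c.png"></body></html>'
print(looks_image_only(page))  # True
```

Flagged pages can then be routed to an OCR pipeline that looks for brand names and phishing language inside the images themselves.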

9. Compromised Legitimate Websites

This case rarely happens, but it does occur — even on legitimate government websites. In some countries, cybersecurity investment remains limited, outdated, or poorly maintained. As a result, official government websites may eventually get hacked through vulnerable CMS platforms, weak administrator passwords, outdated plugins, exposed servers, or neglected infrastructure.

Once attackers gain access, they may place scam advertisements, phishing links, fake investment promotions, gambling content, malware downloads, or redirects to rogue websites directly on the homepage or inside trusted government subpages. In other cases, attackers quietly inject hidden links or malicious JavaScript that redirects only selected visitors to scam pages while the website otherwise appears normal.

Because the malicious content is hosted on an official government domain, victims are far more likely to trust it. This case demonstrates an important reality: a trusted domain does not always guarantee trusted content. Even legitimate websites can be hacked and injected with scam campaigns if their systems are not properly secured and monitored.

10. SEO Poisoning

People today often trust Google search results more than their own judgment, and scammers actively exploit this behavior through a technique commonly known as SEO poisoning. Instead of sending suspicious links directly, attackers attempt to manipulate search-engine rankings so that their scam pages appear near the top of search results for popular or urgent keywords.

Scammers today have their own content-creation teams, responsible for producing convincing materials designed to build trust, attract victims, and make scam campaigns appear professional and legitimate. They also have SEO teams responsible for optimizing their websites’ search rankings. As a result, when a user searches for a solution on Google, they may land on a scammer’s website. These sites usually provide content that is 90% true and harmless; the remaining 10% is fake, mostly instructions that lead users – who already trust the site because of the 90% – to download malware or make advance payments.

11. Advertisement Abuse

When ranking at the top via SEO takes too long or is impossible, scammers have another option: they run ad campaigns, paying Google Ads to display their website at the top. These ads carry the word “Sponsored” under their name to distinguish them from organic results, but users often overlook this and simply trust the first result.

Scammers usually exploit this behavior by creating ads that imitate banks, airlines, government services, cryptocurrency platforms, technical support companies & package delivery services. The advertisement itself may appear completely legitimate, using official logos, professional descriptions and similar domain names. Some malicious campaigns even use typo-squatting domains that look visually similar to trusted brands.

Because advertising systems operate at massive scale, attackers sometimes manage to run malicious ads temporarily before automated moderation systems detect and remove them. During that window, thousands of users may already have clicked the scam advertisement.

12. Multi-Step Redirect Chains

This is not a new technique, but rather a combination of many of the camouflage methods described above. In a Multi-Step Redirect Chain attack, the victim does not directly land on the final scam page. Instead, they are silently redirected through multiple intermediate websites, tracking systems, shortened URLs, advertising networks, cloaking pages, or compromised domains before eventually reaching the malicious destination. Each step serves a specific purpose:

  • dynamically changing payloads
  • hiding the final destination
  • bypassing blacklist systems
  • filtering unwanted visitors
  • tracking victims
  • evading automated scanners

For example, a security scanner may inspect only the first redirect and conclude the link is harmless, while the actual phishing content appears only after several additional redirects triggered under very specific conditions. Some redirect chains additionally check:

  • IP reputation
  • country
  • browser fingerprint
  • mobile vs desktop
  • cookies
  • referral source
  • whether the visitor appears to be a scanner

If the visitor is suspected to be: a researcher, a security crawler, a virtual machine or a headless browser, the chain may terminate early and show harmless content instead of the real scam page.

Modern scam operations often treat redirect chains almost like traffic-routing infrastructure. Different victims may be sent to completely different scam pages depending on: language, location, device type, advertising campaign and time of day. This technique is particularly effective because no single website in the chain necessarily appears obviously malicious on its own. Some intermediate pages may even belong to legitimate ad networks, hacked government websites, trusted cloud platforms, URL shorteners or compromised websites.

As a result, automated detection becomes significantly harder because scanners must successfully follow every redirect step, emulate realistic user behavior, and trigger the correct environmental conditions before the final malicious payload is revealed.

How to detect camouflaged scam websites?

Given these known camouflage techniques, detection algorithms can no longer rely solely on static content analysis. Modern scam websites are increasingly capable of dynamically changing their behavior depending on the visitor’s device, location, cookies, referral source, browsing history, or even the current time. A webpage that appears completely harmless to an automated scanner may simultaneously display phishing forms, malware downloads, or fake investment dashboards to real victims under carefully selected conditions.

Because of this, modern detection systems must evolve from simple “page inspection” into behavioral and contextual analysis systems. Instead of analyzing only the final rendered HTML, security solutions increasingly need to observe:

  • redirect chains
  • device-specific responses
  • geo-dependent behavior
  • JavaScript execution
  • timing anomalies
  • browser fingerprint checks

For example, if a website behaves differently between mobile and desktop devices, changes content after several visits, or only activates after arriving from advertisements, these behavioral inconsistencies themselves may be stronger indicators than the visible content alone.
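In practice, this means probing each URL under many combinations of environmental signals. Below is a toy sketch of how a scanner might enumerate probe personas; every value in the lists is a placeholder.

```python
# Sketch: enumerate crawler "personas" so the same URL gets probed under
# many environmental conditions (device, referral source, language).
from itertools import product

USER_AGENTS = ["android-phone", "iphone", "windows-desktop"]
REFERERS = [None, "https://facebook.example/ad", "https://google.example/search"]
LANGUAGES = ["en-US", "vi-VN"]

def probe_profiles():
    return [
        {"user_agent": ua, "referer": ref, "accept_language": lang}
        for ua, ref, lang in product(USER_AGENTS, REFERERS, LANGUAGES)
    ]

profiles = probe_profiles()
print(len(profiles))  # 3 * 3 * 2 = 18 variants per URL
```

Each persona would drive one fetch (or one headless-browser session), and divergent responses across personas become the cloaking signal.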

This is one reason why modern phishing detection has become significantly more difficult than traditional spam filtering. Scam infrastructure is no longer static. It is adaptive, selective, and increasingly designed to study the visitor before revealing its real intent.

(There is a project actively applying this approach to combat the scamming plague: SafePhone. SafePhone for Android is now available on the Play Store, and its homepage is at https://safephone.io.vn/.)


4 Advanced Scam Techniques & How To Defend

In the past, scams were often easy to spot: suspicious messages with poor grammar, or random strangers asking for money. Today, things are very different – scams have evolved!

Modern scammers use psychology, social engineering, AI-generated voices and videos, fake phone systems, and carefully planned trust-building strategies. Even smart, experienced people are getting tricked and losing tens or even hundreds of thousands of dollars.

This article breaks down several advanced scam techniques that are becoming increasingly common, and more importantly, how you can defend yourself and your family.

1. AI Voice & Video Impersonation Scams

One of the most dangerous new scam trends involves AI-generated faces and voices. Imagine receiving a message from a relative urgently asking to borrow money for a surgery! Naturally, you become suspicious and decide to verify it with a video call. But during the call:

  • You clearly see their face
  • You hear their voice
  • They speak naturally
  • They say they need the money to save a life.

Everything looks real. Except it isn’t. Thanks to social networks and how carelessly people use them, scammers can now:

  • Collect photos and videos from social media
  • Generate realistic facial movements from collected photos
  • Clone a person’s voice from the audio in those videos
  • Create short fake video calls or deepfake clips from AI-generated photos and sounds

This is possible because modern AI systems can now copy not only facial expressions, but also eye movement, head pose, emotional tone of voice, and conversational timing.

Warning signs

A major limitation of AI-generated content is latency. If the conversation lags by more than 300–500 ms, humans start feeling something is “off.” That’s why “real-time” video calls from scammers are usually:

  • Very short, with excuses to avoid longer interaction: this happens regularly because the scammer can’t predict what you will ask and doesn’t have time to generate fake video for it.
  • Low resolution: if scammers commit to a long video call and let AI generate deepfake video and audio in real time, they need a very powerful computer. Lowering the resolution is one way to reduce the lag and the “off” feeling.
  • Delayed audio synchronization and awkward facial movement: although AI can clone a person’s voice and facial expressions, the processing takes time, so you may feel a delay in their responses.

In some cases, tiny details reveal the truth — such as outdated clothing, old work uniforms, or backgrounds that don’t match reality.

How to protect yourself

  • Never trust a video call asking to borrow money.
  • Call the person back on their phone number, not via a social-network video call.
  • Ask unexpected questions only the real person would know.

AI impersonation technology is improving rapidly. Verification habits must improve too.

2. Relationship-Based Business Scams

Some scams are no longer random attacks. They are long-term psychological operations.

The setup

A scammer spends weeks or months building trust with someone online by:

  • Buying products normally
  • Chatting regularly
  • Interacting professionally
  • Acting friendly and reliable

Eventually, they ask for a business introduction. For example:

  • “I’m looking for computer equipment suppliers.”
  • “Can you introduce me to someone trustworthy?”
  • “We have a large government or school contract.”

Because the relationship already feels genuine, the referral happens naturally.

The trap

The scammer then approaches the referred person with a seemingly legitimate business deal:

  • Large purchase orders
  • Attractive profit margins
  • Familiar references
  • Official-looking invoices
  • Corporate or government claims

After negotiations, the scammer introduces a “secondary supplier” or “special product batch” that requires advance payment. The victim may transfer money because they believe that:

  • The deal is legitimate
  • The introduction came from a trusted person
  • The final customer exists

Then, after receiving the money, the scammer disappears.

Why this scam is so effective

This attack exploits:

  • Trust between family members
  • Professional reputation
  • Fear of missing business opportunities
  • Emotional pressure from “special deals”
  • Greed mixed with familiarity

This scam is carefully calculated so that every step feels reasonable.

How to protect yourself

  • Never rely solely on personal referrals
  • Verify companies independently
  • Refuse unusual invoice-merging requests
  • Be suspicious of advance payments to third parties
  • Confirm contracts through official business channels
  • Slow down when large profits appear “too easy”

Professional scammers are patient. They may spend months preparing a single attack.

3. Fake Government & Military Procurement Scams

A similar scam targets small business owners.

Typical scenario

Scammers pretend to represent: Military departments, Government agencies, Schools, Hospitals, or Large organizations. They contact vendors claiming they need bulk purchases such as: Office supplies, Furniture, Electronics, Plastic chairs, Construction materials. The order appears legitimate and valuable. Then the scammer says:

“We also need another product that you don’t sell. We found another supplier already. Can you help combine the invoice?”

Soon afterward:

  • A fake supplier contacts the victim
  • Payment is requested upfront
  • The victim transfers money
  • Then everyone disappears

Why victims fall for it

Because:

  • The “customer” sounds official
  • The order size feels realistic
  • The opportunity seems profitable
  • The victim expects reimbursement later

This psychological manipulation is extremely effective.

Defense strategy

  • Government organizations rarely operate through informal personal arrangements
  • Never pay suppliers on behalf of customers without independent verification
  • Verify procurement requests using official government contact channels
  • Be suspicious of invoice manipulation requests

4. Caller ID Spoofing & Fake Support Calls

One of the scariest modern scams involves fake phone numbers and spoofed caller IDs.

What is caller ID spoofing?

Scammers with technical skills can manipulate what appears on your phone screen. You may receive a call that appears to come from your bank, the police, the tax authorities, telecom providers, or government agencies. But the displayed number or name is fake.

How they do it

Modern calling systems based on VoIP (Voice over Internet Protocol) allow attackers to manipulate caller information. Combined with high-tech attacks such as fake BTS systems, the scam can look extremely convincing.

Common scam scenarios

The caller claims:

  • Your bank account was hacked
  • Your identity is under investigation
  • Your SIM card will be disabled
  • Your tax records need updating
  • Suspicious transactions were detected

Then they pressure you into:

  • Sharing OTP codes
  • Installing apps
  • Clicking links
  • Sending money
  • Changing passwords

The golden rules

  • Never share OTP codes: No legitimate bank or authority should ever ask for your verification code over the phone.
  • Hang up and call back manually: If someone claims to represent an organization: End the call –> Visit the official website –> Call the publicly listed number yourself

Never trust incoming caller IDs alone.


Modern scams are no longer based on technical hacking alone. They rely heavily on emotional manipulation and social engineering. Scammers understand human psychology surprisingly well. Often, victims are not careless or unintelligent – they are simply manipulated under pressure.

Scams are evolving faster than ever. Artificial intelligence, voice cloning, deepfakes, caller ID spoofing, and long-term trust manipulation are making fraud far more convincing than traditional scams from the past. The most important defense today is not technology, it is awareness. A few extra minutes spent verifying information can prevent devastating financial losses. Stay skeptical. Stay informed. And most importantly, help educate the people around you, especially older family members who may be more vulnerable to these increasingly sophisticated attacks.


How Fake BTS Attacks Steal Your OTP — And How to Protect Yourself

If you receive OTPs via SMS for bank transfers, logins, or password resets, you must read this. This is a realistic attack that has happened in real life in many countries, and cybercriminals have stolen a lot of money with this trick. The victims can be anyone who lives in a country that still runs a 2G mobile network, uses an old phone with 2G network mode enabled by default, and has something worth stealing.

1. What is 2G mobile network

2G (Second Generation) is one of the earliest digital mobile network technologies, introduced in the 1990s. Unlike the old analog 1G systems, 2G allowed phones to transmit voice calls digitally, making communication clearer and more secure than 1G. 2G was designed mainly for: Voice calls, SMS text messages and Very slow mobile internet (GPRS / EDGE).

Compared to modern networks today such as 4G and 5G, 2G has extremely limited bandwidth and weak security protections. Many security mechanisms used by 2G were created decades ago and are now considered outdated.

Why 2G Still Exists

Even today, many telecom providers still keep 2G active because:

  • Old feature phones still depend on it
  • Some IoT devices use it
  • Rural areas may rely on legacy infrastructure
  • Emergency fallback compatibility

However, this backward compatibility also creates a serious security problem.

2. What Is a Base Transceiver Station (BTS)?

A Base Transceiver Station (BTS) is the radio communication equipment that connects mobile phones to a cellular network. In simple terms, a BTS is the “cell tower” your phone talks to when you:

  • make calls
  • send SMS
  • use mobile data
  • register to the network

Every time your phone shows signal bars, it means your device is communicating with a nearby BTS.


MS — Mobile Station

The Mobile Station is the physical mobile phone, plus the SIM card identity inside it. Each MS has identifiers such as:

  • IMSI (International Mobile Subscriber Identity)
  • IMEI (device identifier)

These identifiers are important and fake BTS attacks often try to capture them.

BTS — Base Transceiver Station

The BTS acts as the bridge between your phone and the telecom core network. Its responsibilities include:

  • transmitting radio signals
  • receiving signals from phones
  • managing communication channels
  • broadcasting network information
  • forwarding traffic to the carrier network

A BTS usually covers a geographic area called a “cell.” When you move around, your phone constantly switches between BTS towers through a process called handover (or roaming, when switching between carrier networks).

How MS and BTS Communicate

The communication between the phone and the BTS happens over radio frequencies using GSM protocols. The basic flow looks like this:

  1. Phone searches for nearby BTS signals
  2. BTS broadcasts network identity information
  3. Phone selects the strongest or preferred tower
  4. Phone registers itself to the network
  5. BTS assigns communication channels
  6. Voice/SMS/data traffic begins

In 2G GSM, the BTS continuously broadcasts:

  • MCC (country code)
  • MNC (carrier code)
  • Cell ID
  • supported encryption modes

The problem is that early GSM protocols were designed with a dangerous assumption: The phone trusts the BTS automatically. This becomes the core weakness exploited by fake BTS devices.

3. The Security Problem in 2G GSM

In modern 4G/5G systems, both sides – the BTS and the MS – authenticate each other. But in classic 2G GSM:

  • The network authenticates the user
  • The user does NOT authenticate the network

That means:

  • A fake tower can pretend to be a legitimate carrier
  • Nearby phones may connect automatically
  • Users often receive no warning

Attackers exploit this weakness by broadcasting a stronger signal than legitimate towers. Once the phone connects, the rogue BTS can:

  • Request IMSI identifiers: this means the attacker can identify your SIM – and thus your phone number – without asking.
  • Downgrade connections from 4G to 2G for weaker encryption: this means the attacker can read your SMS.
  • Intercept SMS: this means the attacker can even impersonate you and send SMS to your friends under your name.
  • Send phishing messages: the attacker can impersonate other legitimate phone numbers – your boss’s number, for example – to send you a link and ask you to enter passwords.

This is the fundamental mechanism behind IMSI Catchers and Fake BTS attacks.

4. What Is a Fake BTS (IMSI Catcher)?

Mobile phones are designed to automatically search for the “best” available cellular signal. In GSM/2G networks, your phone usually prioritizes connecting to the BTS tower with the strongest signal. Attackers exploit this behavior by broadcasting:

  • Stronger signals than nearby legitimate towers
  • Copied carrier information
  • Attractive network parameters

To the phone, the fake BTS appears to be a normal carrier tower. Because classic GSM lacks proper network authentication, the device may connect automatically without warning the user.

IMSI stands for: International Mobile Subscriber Identity. It is a unique identifier stored inside the SIM card. An IMSI Catcher is named after its ability to trick phones into revealing this identifier. Once attackers collect IMSI numbers, they can:

  • Identify devices
  • Track movement
  • Target specific users

This is one of the first steps in many surveillance-oriented attacks.

5. Attack Setup (High-Level, No Harmful Instructions)

A simplified fake BTS attack flow looks like this:

  1. The attacker activates rogue BTS equipment to act as a fake tower
  2. The fake tower advertises itself as a legitimate carrier
  3. Nearby phones detect the strong signal
  4. Devices automatically connect to the tower with the stronger signal
  5. The fake BTS then requests device identifiers and controls the communication process

Depending on the attacker’s purpose, the fake tower can:

  • Downgrade your phone from 4G to 2G: the most common technique for stealing OTPs.
  • Disable encryption: so the attacker can read SMS content, which may contain OTP codes.
  • Forward traffic to the real network: the so-called Man-in-the-Middle attack, where you keep communicating normally but the attacker can eavesdrop on everything.
  • Inject phishing SMS messages: you can receive an SMS that appears to come from a friend’s number, but it is actually delivered by the fake BTS tower – your phone just displays it.

Police have in fact confiscated fake BTS equipment operating in public while it was carrying out the attack described above.

6. How to defend

Symptoms of a Possible Fake BTS Attack

Detecting a Fake BTS in real life is extremely difficult. Modern rogue base stations are designed to look almost identical to legitimate carrier towers, and most smartphones provide very little visibility into low-level cellular behavior. Still, there are several warning signs that may indicate suspicious activity.

Sudden Drop to 2G or “E” Signal

One of the most common indicators is your phone suddenly falling back from 4G/5G to 2G, typically showing an “E” icon instead of “4G” in the top corner of the screen. Attackers often force devices onto 2G because:

  • GSM security is weaker
  • Phones trust the network more easily
  • Encryption protections are easily cracked

A downgrade becomes more suspicious when 4G/5G coverage is normally strong in the area, the signal change happens unexpectedly, and multiple nearby devices behave similarly.

Weak or Missing Encryption Indicator

In classic GSM networks, the BTS controls whether encryption is enabled, so a rogue BTS can force weaker encryption or request no encryption at all. Historically, some phones displayed warnings such as “unencrypted network” or “ciphering disabled,” but today most smartphones hide these low-level network details, so users rarely see any visible warning that something suspicious is happening.

Reality: Detection Is Extremely Difficult

The uncomfortable reality is that most users cannot reliably detect a Fake BTS attack. Reasons include:

  • Most users do not understand how calls and SMS work at the network level.
  • Smartphones show very little info about radio diagnostics.
  • Rogue towers can imitate legitimate carrier behavior.

Even cybersecurity professionals often require specialized equipment to investigate suspicious cellular activity. Advanced detection may involve SDR (Software-Defined Radio) analysis, baseband monitoring tools, and carrier database comparisons. But ordinary users typically have no easy way to confirm whether a nearby tower is genuine. That is one reason Fake BTS attacks remain effective even decades after GSM was introduced.

Mitigation Strategies

Because detecting a Fake BTS is unreliable, the most reliable defense is to stay away from OTP delivered via SMS. Using an authenticator app such as Google Authenticator or Authy for OTP is highly recommended. Besides that, disable 2G on your phone if it still supports it; many modern phones let you disable 2G in settings, so search for how to do it on your phone model. Last but not least, avoid logging in, resetting passwords, or making bank transfers on public networks; only do so in places you trust.


3 steps to avoid malicious mobile apps

Today everyone has a smartphone, from children to the elderly. Smartphones contain a bunch of applications that boost productivity in daily life. People today may spend more time with their phones than with other people. The smartphone has become a part of life, an accessory, and perhaps everyone’s secret keeper. People put almost everything in their phones, from photos and identity documents to bank accounts. This habit makes smartphones a top-priority target for hackers looking to steal secrets, or simply money. These hacking campaigns usually exploit users’ low awareness of mobile app security. Android and iOS provide many mechanisms by default to protect users, but the weakest point in any system is always human psychology: “Amateurs hack machines, professionals hack people.” If you are afraid of being hacked, this post is for you. Hopefully it can arm your mind against one of the highest-risk factors of the Internet era: cybercrime.

Most publicly known cybersecurity incidents begin with a very non-technical step that anyone can perform: Social Engineering. Social Engineering is a type of manipulation where someone tricks people into giving away sensitive information, access, or money by exploiting human psychology rather than hacking systems. To steal data from your phone, 99% of the time hackers need to trick you into installing a malicious application. Once installed, a malicious application silently steals data and sends it back to the hackers. So just by recognizing which apps can be malicious, you are already 99% safe. The remaining 1% involves Zero Day exploitation, which is real hacking requiring top-notch knowledge and skills, and is not covered in this post. To learn more about Zero Day exploitation, you can subscribe here and the-tech-lead.com will inform you when an article is available.

Now, back to the question: how do you know if a mobile app is malicious?

1. Pay double attention to the download source

As a golden rule, only download mobile applications from the trusted stores: the Play Store and the App Store, which come pre-installed on Android and iOS smartphones. Install apps only from the Play Store app (on Android phones such as Samsung or Pixel) and the App Store app (on iPhones). Do NOT install applications from outside these two official stores, regardless of the reason, the urgency, or who tells you to.

In the Android world, mobile applications are written in Java or Kotlin and exported as APK files (files with the .apk extension). These .apk files are then signed with the digital signature of their owner, who registered as a developer on the Play Store with their legal information. This process is essential: it tells us who is actually behind an application, and if we have evidence of malicious activity, we know who to sue. The developer of an application can be found in the “App support” section under its logo.
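As a side note, an APK is just a ZIP archive, so you can peek at its legacy (v1) signature entries with a few lines of standard-library Python. This is only a rough illustration; real verification should use the `apksigner verify --print-certs` tool from the Android SDK, and modern v2/v3 signatures live outside the ZIP entries, invisible to this check:

```python
import io
import zipfile

def v1_signature_files(apk) -> list[str]:
    """List JAR (v1) signature files inside an APK.
    `apk` is a path or file-like object; APKs are ZIP archives.
    v2/v3 signing blocks are stored outside the ZIP entries and will
    not show up here, so this is only a quick v1-era check."""
    with zipfile.ZipFile(apk) as zf:
        return [n for n in zf.namelist()
                if n.startswith("META-INF/")
                and n.upper().endswith((".RSA", ".DSA", ".EC"))]

# Demo with a hypothetical in-memory "APK" (a plain ZIP containing
# a v1 signature entry), since we do not ship a real APK here:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("classes.dex", b"...")
    zf.writestr("META-INF/CERT.RSA", b"...")
print(v1_signature_files(buf))  # -> ['META-INF/CERT.RSA']
```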

APK files can be installed directly on an Android phone with the user’s explicit consent. A user can tap a .apk file stored on the phone (in the Download or Documents folder, for example), and a popup will ask for installation permission. If the user grants it, the .apk is installed. This process is usually meant for developers testing applications before submitting them to the Play Store. For regular users, being asked to do this is a strong indicator of a malicious application. So if someone, for any reason, tells you to follow these steps manually, don’t trust them and report them to the police as soon as possible. A typical trick flow looks like this:

  1. You are on a social network such as Facebook and see a post claiming that installing an application gets you a free 1000 USD reward for early users.
  2. You click the download link, and your phone saves the file to the Download folder.
  3. You follow the “installation guide” written next to the download link, which tells you to open the Settings app, enable “install apps from unknown sources”, then open the Download folder and tap the APK file.
  4. Your Android phone shows a popup warning that the APK is from an unknown source, but the guide tells you to just press Accept.
  5. The malicious APK is installed and starts stealing your data.

Similarly, in the iPhone world, iOS applications are written in Swift or Objective-C and exported as .ipa files. IPA files can be installed via the App Store or through developer tools like Xcode. Normally, we can’t freely install IPA files unless the app is signed with a valid certificate or the iPhone is registered for development. But there is still a way hackers can trick users into installing malicious IPA files: by abusing TestFlight. TestFlight is Apple’s official tool for distributing beta (testing) versions of iOS apps before they go public on the App Store. Developers use it to invite testers, collect feedback, and fix bugs before release. TestFlight is legit, but it can be abused in social engineering attacks. A typical trick flow looks like this:

  1. Someone impersonating a bank employee calls you, states your exact name and address, and says something like “Your bank account is at legal risk due to a transfer from a criminal gang” or “The police are screening your account because they suspect money laundering”, in an urgent, serious, slightly threatening tone.
  2. They then send you a link to install their “internal” iOS app so you can prove you are innocent.
  3. You tap the link; because it is a TestFlight invitation and your iPhone does not have TestFlight installed, you are first redirected to install TestFlight.
  4. You are told to tap the link again, and this time the fake application is installed on your iPhone via TestFlight.
  5. The fake app looks identical to the bank’s official application, so you have no doubts.
  6. The app then steals data from your iPhone, or tricks you into entering your username, password, even OTP codes and your CVV number.

2. Make sense of app permissions

When users become smart enough to stop installing apps from untrusted sources, hackers move to level 2 of malice: Camouflage. A typical hacking plan looks like this:

  1. This time, hackers develop or purchase the source code of a normal mobile application and publish it on the Play Store or App Store in the usual way.
  2. Because the app is normal, the Play Store and App Store accept it and make it available.
  3. The hackers then ship updates to the app, adding new features that require system permissions such as reading the contact list, call logs, gallery, GPS, etc.
  4. Hackers advertise the app with awesome features promising outstanding results, exactly what some users are looking for.
  5. Out of curiosity, users install the app from the Play Store or App Store, depending on their phone’s OS.
  6. The app asks for quite a lot of permissions, but users usually don’t care or don’t understand, so they just accept.
  7. The app then steals call logs, photos, location data, and more from the phone, thanks to the user’s grant.

Both Android and iOS have default safeguards to protect user privacy. By default, no application can access sensitive data on the phone. For example, if an application wants to read photos, its developer must declare the “Access Gallery” permission. Whenever the application wants to use that permission, the operating system (Android/iOS) displays a message asking the user to grant it. Once granted, the application can see the photos on the phone. Similarly, other sensitive data such as call logs and GPS location also requires the user’s grant before an app can read it. To see which permissions an application wants, we can check directly on the Play Store for Android apps and the App Store for iOS apps.

How to check Permissions of Android application

Before installing:

  1. Open the app page on the Google Play Store
  2. Scroll down to “App info” → “Permissions”
  3. Tap “See more” to view full details
  4. Check what the app can access:
    • Location
    • Contacts
    • Storage
    • Microphone, etc.

After installing:

  1. Go to Settings → Privacy → Permission Manager
  2. Select a permission (e.g. Location)
  3. See which apps are using it
  4. You can:
    • Allow
    • Allow only while using
    • Deny

👉 Tip: Android also shows permission prompts on first use, so don’t just tap “Allow” automatically.

How to check Permissions of iOS application

Before installing:

  1. Open the app page on the App Store
  2. Scroll to “App Privacy” section
  3. Review what data the app may collect:
    • Location
    • Contacts
    • Identifiers
    • Usage data
    • etc …

After installing:

  1. Go to Settings → Privacy & Security
  2. Tap a category (e.g. Location, Photos, Microphone)
  3. Select the app
  4. Choose access level:
    • Never
    • Ask Next Time
    • While Using
    • Always (for location)

Review these permissions carefully and anticipate which features need them. If an app asks for far more permissions than its expected features justify, that is a red flag.

Here’s a practical mapping of common Android and iOS permissions you’ll see on the Google Play Store and App Store, along with the features that legitimately use them. This helps you judge whether a request makes sense.

Android Permissions & the Legit Features That Use Them

  • Read Contact, Write Contact. Legit: messaging apps (find friends), contact backup/sync, invite-friends features. Suspicious if: a simple game or flashlight app asks for this.
  • Read Call Log, Read Phone State. Legit: caller ID / spam detection apps, dialer and call management. Suspicious if: unrelated apps request call history.
  • Read SMS, Send SMS. Legit: messaging apps, OTP auto-fill. High risk: can intercept verification codes. Recommend: NEVER download.
  • Access Fine Location, Access Coarse Location. Legit: maps and navigation, ride-hailing/delivery, weather apps (local forecast). Suspicious if: a calculator or offline app asks for precise location.
  • Read External Storage, Media Access. Legit: photo uploads (social media), file managers, image/video editing apps. Suspicious if: the app doesn’t handle files but asks for access.
  • Record Audio. Legit: voice messages/calls, recording apps, voice assistants. Suspicious if: no voice feature exists.
  • Camera. Legit: taking photos/videos, QR/barcode scanning, video calls. Suspicious if: the app has no visual capture feature.
  • Notification access. Legit: notification managers, smart-reply apps. High risk: these apps can read OTPs and messages. Recommend: NEVER download.
  • Accessibility Service. Legit: screen readers (for the visually impaired), automation tools. High risk: these apps can control the screen and read inputs, and are commonly abused in scams. Recommend: NEVER download.

iOS Permissions & the Legit Features That Use Them

  • Contacts (NSContactsUsageDescription). Legit: messaging, contact sync, invite friends. Suspicious if: a game or simple app requests it.
  • Location / GPS (NSLocationWhenInUse / Always). Legit: maps, ride-hailing, delivery, weather. Suspicious if: the app doesn’t need location.
  • Photos / Media (NSPhotoLibraryUsageDescription). Legit: uploading images, editing apps. Suspicious if: the app doesn’t use images or files.
  • Camera (NSCameraUsageDescription). Legit: photos, video calls, QR scanning. Suspicious if: no camera-related feature.
  • Microphone (NSMicrophoneUsageDescription). Legit: voice calls, recording, voice input. Suspicious if: no audio-related feature.
  • Bluetooth (NSBluetoothAlwaysUsageDescription). Legit: IoT devices, wearables, accessories. Suspicious if: the app has no hardware/device interaction.
  • Notifications (UNUserNotificationCenter). Legit: alerts, messages, reminders. Suspicious if: spammy or excessive notifications.
  • Tracking (App Tracking Transparency, ATT). Legit: ads personalization, analytics. Suspicious if: an app unrelated to ads asks for tracking.
  • Local Network (NSLocalNetworkUsageDescription). Legit: smart home, device discovery. Suspicious if: no local device interaction.
  • Motion / Fitness (NSMotionUsageDescription). Legit: fitness apps, step tracking. Suspicious if: the app is unrelated to activity tracking.

Simple rule to evaluate permissions

When you are considering installing a new mobile application:

  • Anticipate what functions the app should have,
  • Check the permissions the app requires,
  • Then ask yourself: “Does this feature really need this permission?”

If some permissions are not aligned with the expected functions:

  1. Slow down; don’t rush to install it for any reason.
  2. Find alternative applications and compare their permissions.
  3. If you are unsure but still want to check the app, test it in an emulator first. Emulators are virtual smartphones and can be created with tools such as Genymotion, VirtualBox, and a few others. An emulator is an isolated environment and does not contain your data.
  4. If you know any experts in the cybersecurity field, ask them for advice.
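The rule above can be sketched as a tiny checklist script. The permission strings are real Android constants, but the per-app expected-permission sets are hypothetical examples you would adapt to the app you are evaluating:

```python
# Hypothetical expectations: which permissions each kind of app should need.
EXPECTED = {
    "flashlight": {"android.permission.CAMERA"},  # the flash sits behind the Camera API
    "weather": {"android.permission.ACCESS_COARSE_LOCATION"},
}

# Permissions that are dangerous regardless of the app's claimed purpose.
HIGH_RISK = {
    "android.permission.READ_SMS",
    "android.permission.BIND_NOTIFICATION_LISTENER_SERVICE",
    "android.permission.BIND_ACCESSIBILITY_SERVICE",
}

def evaluate(app_kind: str, requested: set[str]) -> list[str]:
    """Return warnings for permissions that don't match the app's expected features."""
    warnings = []
    for perm in sorted(requested):
        if perm in HIGH_RISK:
            warnings.append(f"HIGH RISK: {perm}: do not install unless fully trusted")
        elif perm not in EXPECTED.get(app_kind, set()):
            warnings.append(f"Unexpected: {perm}: does this feature really need it?")
    return warnings

# A "flashlight" app asking for SMS access should raise exactly one red flag:
for w in evaluate("flashlight", {"android.permission.CAMERA",
                                 "android.permission.READ_SMS"}):
    print(w)
```

Of course, real judgment is not this mechanical, but writing the rule down makes the mismatch between features and permissions easy to spot.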

3. Monitor your phone’s performance

Welcome to level 3 of malice: Zero Day Exploitation.

Thanks to the strict review processes of the App Store and Play Store, most malicious mobile apps are banned. But optimism is not a recommended trait in the cybersecurity field. Zero Days are vulnerabilities unknown to the public, even among experts, and in fact they are weaponized by many governments as a national asset.

Android and iOS are themselves software, and software can have bugs and security holes. These vulnerabilities are actively hunted by experts in the cybersecurity industry, sometimes sponsored by governments. Once a Zero Day is discovered, it becomes a secret weapon for cybercriminal groups to attack or infiltrate systems all over the world. Mobile apps are not immune: if there is a vulnerability in the operating system, here Android or iOS, it becomes the target for level 3 of malice.

Although rare, it is still something for us regular users to keep an eye on. After installing an application from the Google Play Store or App Store, pay attention to device performance:

  • whether it gets slower,
  • or hotter,
  • or laggy,
  • or shows any abnormal behavior.

Vulnerabilities take many forms, too many to explain in a single post, but many of them create a heavy workload on the device while attempting exploitation, which can make the phone slower, hotter, or laggy.

Example: a well-known Spyware

One of the most well-known cases of this level 3 of malice involves commercial spyware: Pegasus, developed by NSO Group. This spyware has successfully stolen sensitive data from users’ phones, often without any visible permission prompts. The trick flow goes like this:

  1. NSO Group delivers Pegasus via an app or link. The target receives a message that tricks them into installing the app. The app looks absolutely normal since it requires minimal permissions.
  2. Once installed, a hidden zero-day exploit triggers. The app, or content inside it, exploits an unknown vulnerability in the operating system.
  3. Privilege escalation: the exploit gains deeper system access than normal apps should have and bypasses the OS sandbox protections.
  4. Silent data access: the operators can then access messages, the camera and microphone, and location data without the user’s awareness.

These attacks are extremely expensive and used for targeted surveillance, not mass scams. Once the exploit method is discovered, it can be quickly patched by the developers behind Android and iOS. The problem is that it is really hard to discover.

There isn’t just one CVE for Pegasus. It has used multiple zero-day vulnerabilities over time, often chaining several together. Here are some of the most well-known ones:

Notable CVEs linked to Pegasus campaigns

1. FORCEDENTRY exploit chain (2021)

  • CVE-2021-30860
  • Affected: iOS (Apple devices)
  • Type: CoreGraphics / PDF parsing vulnerability

What it did:

  • Delivered via iMessage (no user interaction needed)
  • Exploited how the system processed malicious image/PDF data
  • Led to full device compromise

👉 This was one of the most advanced zero-click exploits ever discovered

2. WhatsApp exploit (2019)

  • CVE-2019-3568
  • Affected: WhatsApp on Android & iOS
  • Type: buffer overflow in VoIP call handling

What it did:

  • Attacker placed a WhatsApp call
  • Even if you didn’t answer → exploit could trigger
  • Installed spyware silently

3. Chrome sandbox escape (used in chains)

  • CVE-2020-6418
  • Affected: Google Chrome (Android)

What it did:

  • Used as part of a chain to escape browser sandbox
  • Combined with other bugs to gain deeper access

4. KISMET (suspected chain, 2020)

  • No single confirmed CVE publicly disclosed
  • Targeted iMessage (iOS 13)

What it did:

  • Zero-click exploit (no interaction)
  • Predecessor to FORCEDENTRY

To learn more about these CVEs in the future, please subscribe and you will be informed when the-tech-lead.com posts about them. Each CVE deserves a long post of its own.


What makes Social Networks addictive (and what we can learn for software development)

Social Networks have become a social norm today. Almost everyone has at least one profile on a platform such as Facebook, X, TikTok, or a few others. I was on Facebook as a student and, honestly, I did not get what Facebook actually was or why people used it. I wrote something on my wall, then got a notification saying a friend liked my post. I also saw friends posting funny things on their walls, but I did not hit the like button; not because they weren’t funny, but because I wasn’t aware I was supposed to press like if I found something funny. I left Facebook because playing games was much more engaging. Then I went to university, and so did my friends, but we lived in different districts and studied at different universities. It was hard to meet as often as we had in school, and calls and SMS were costly for long conversations, and not much fun either. So I came back to Facebook, because most of my friends were using it too. We got free messaging and video calls. We shared thoughts, opinions, and discussions via comments, and showed support with the like button. We shared moments by uploading photos and videos. We didn’t meet in person as often as before, but we felt we knew what everyone was doing. Then I saw the first news stories about Social Network Addiction, and I did not understand. How does a tool that simply informs its users about someone and something become addictive?

At first glance, Facebook, X, TikTok, or any Social Network looks simple: “someone posted something, then you see it.” But the addictiveness doesn’t come from the information itself; it comes from how that information is delivered, timed, and socially framed. This post will reveal the real mechanism behind it, or at least its core.

Before understanding the whole mechanism, it is important to understand the building blocks: the Slot Machine Effect, the Social Validation Need, FOMO, Stopping Cues, Personalization, Triggers, and Social Obligation Pressure.

1. The slot machine effect

The slot machine effect is a nickname for a behavioral-psychology mechanism: variable-ratio reinforcement. Simply put, you repeat an action because the reward is unpredictable but sometimes great. It is close to what happens in a gambler’s psychology. Each time we open a Social Network, what we get is random. Sometimes there is nothing interesting. Sometimes there is a funny post, a like, or a message. Sometimes there is something emotionally strong, such as drama, praise, or a surprise, and we feel good. This unpredictability trains the human brain to try again because “maybe the next scroll will be good.” That’s what keeps users opening the application and scrolling, like a hunt for emotions. And humans love to hunt; the activity has been deep-rooted in the brain since the very first days of humankind. But what we hunt is no longer simply food.

2. Social Validation Need

Humans, by nature, care deeply about how others see them. This is a survival trait, evolved and deep-rooted in the human brain over thousands of years, since the Tribal Age when there was no law and how your tribe perceived you determined whether you lived or died. Our brains are wired to care about being accepted, being noticed, and not being rejected. Social Networks did not reinvent this; they measured and amplified it. In real life, validation is subtle: a feeling conveyed through daily interaction. Every person has their own way of showing validation, and every culture has its own customs for expressing it. On Social Networks, validation is visualized by the number of likes, comments, and shares. 1 like vs 100 likes! 0 comments vs 20 comments! 0 shares vs 10 shares! Comparison is triggered. This turns Social Validation into something closer to a scoring system than a natural feeling. Social Validation becomes Social Comparison: evaluating our opinions, abilities, and worth by comparing ourselves to others.

Blending Social Validation and Social Comparison, the human brain tends to translate likes into approval, comments into attention, and shares into influence. It is a translation from numbers to feelings, and a false one, because these numbers can be manipulated in many ways: psychological tricks, ad campaigns, payments, or clone accounts. But that false translation is not easy to escape, because of Cognitive Ease: the human brain loves simple things, and interpreting likes as approval is easier than reading real-life approval, which can be complex (tone, facial expression, context). This triggers dopamine (reward signaling) as well, making us want to check reactions, post again, and stay engaged.

3. Stopping Cues

Social Networks, to some extent, are like TV shows or books, in that they also provide content. The difference is that Social Network content is made by anyone, without any required knowledge, skills, or credentials. People on Social Networks may be no directors, scholars, or professors, but nothing stops them from telling stories, teaching, or bragging. TV shows and books have endpoints: we know when they end and take time to relax. Social Networks remove that, on purpose.

A common design pattern in Social Networks is Infinite Scroll. This design keeps users in a continuous loop with no friction to stop. The human brain relies on boundaries to end activities: the end of a chapter, a page, or an episode is a cue for the brain to stop. Infinite scroll deletes those cues. Without a clear “end,” the brain defaults to keeping going. It pairs perfectly with the Slot Machine Effect, since unpredictable rewards keep behavior going longer than predictable ones. It also exploits Completion Bias: the psychological tendency to prioritize easy, quick tasks over more important, complex ones to gain a fleeting sense of accomplishment and a dopamine boost. This bias tricks the brain into valuing the “done” feeling, often leading to time wasted on trivial tasks rather than high-impact ones. And here, keeping scrolling feels easier than closing the app.

4. Fear of missing out (FOMO)

Fear of Missing Out (FOMO) is a psychological concept describing the anxiety people feel when others are having rewarding experiences without them. Simply put: you can feel anxious when you see others winning. This feeling is strongly exploited on Social Networks, where people frequently and easily compare their lives to others’ profiles via news feeds and counts of likes, comments, and shares, eventually leading to feelings of inadequacy or exclusion. FOMO reflects the human need for Social Validation and also stems from Social Comparison: the sense that a person must know, do, or have something to belong to a group. People with strong FOMO often experience greater dissatisfaction and more impulsive decision-making.

Social Networks amplify FOMO by providing constant updates about others’ activities, achievements, and lifestyles. This can create a loop of checking, posting, and comparing. Users feel anxious when comparing themselves to others, and the brain wants relief from that anxiety. It turns out the most relieving action is to check, and checking via the Social Network app is the fastest, easiest, even anonymous option, so it is the brain’s best choice (Cognitive Ease again). Despite the anxiety, users do not flee. This is classic Negative Reinforcement: a behavior sticks because it removes an unpleasant feeling. The app brings anxiety with one hand and, with the other, becomes the fastest way to relieve it. It becomes addictive precisely because it is the fastest path to relief.

5. Personalization

Naturally, people don’t like people with different opinions. If a Social Network only showed content contradicting a user’s perspective, they wouldn’t use the app. To keep people engaged, it needs to show what users like to see, and to a human, nothing beats seeing what they already believe. This is Confirmation Bias: the brain automatically filters out whatever does not support an existing belief and focuses only on what supports it. Exploiting this bias, Social Networks analyze user behavior and show only what a user tends to like. Time spent on certain posts, likes, comments, shares, even demographic info or avatars, all become inputs to an algorithm that predicts what a user might like. After watching people interact on the Internet for a long time, these algorithms seem to know what their users like. And when the algorithm shows users only what they like, the whole Social Network feels like it is full of people just like them; this is the Halo Effect, where humans use a small cue to judge the whole thing. Because users like something posted on a Social Network, they come to like the Social Network itself. This illusion keeps users returning, because no one can resist seeing what they like.

6. Triggers

The artifacts above are built on many psychological instincts of human beings. Because they are instincts, they are hard to resist. But instincts do not fire all the time; they need external triggers.

Humans have language in written form, and the human brain translates symbols into meaning. Depending on the meaning, text can trigger instincts just as a sound in a bush triggers a deer. Simply put, human instinct can be triggered by text. We all may have a friend who is set off by hearing or seeing certain words; it can be any word, but depending on past experience, words carry different feelings. Social Networks exploit this well via Notifications. A notification does not simply inform you of an event; its message is designed to trigger instincts. Examples:

  • “You were mentioned in a comment” → triggers Social Validation (“someone is talking about me”)
  • “Someone liked your post” → triggers Social Validation (“people value what I shared”)
  • “You have 5 new notifications” → triggers FOMO (“what did I miss?”)
  • “Your friend just posted after a long time” → triggers FOMO (“this might matter”)
  • “This is getting a lot of attention” → triggers Social Validation (“this could be important or trending”)

Each message is short, but it is not neutral. It is designed to activate specific psychological responses such as curiosity, belonging, urgency, or FOMO. Over time, the brain begins to associate these phrases with emotional outcomes. This is why people feel an urge to check immediately, even when they were not planning to.

In this way, notifications function less like messages and more like triggers. They convert language into instinctive reactions, turning attention into a reflex rather than a deliberate choice.

7. Social Obligation Pressure

Social Obligation Pressure is the feeling that you owe a response, attention, or presence because of social expectations, even if you don’t actually want to engage at that moment. This obligation comes from Fear of Negative Judgment. That fear is amplified by features such as read receipts and typing indicators, commonly used in chat boxes. The feeling itself is natural in humans and helps form societies. But on Social Networks people do not see each other’s faces, so by visualizing presence through indicators, the platforms make sure the fear exists and push users to engage, because no one wants to be seen as impolite. It’s not just “I should reply”; it’s more like “If I don’t, people will think something bad about me.”

Social Obligation Pressure, or Fear of Negative Judgment, targets identity, not just curiosity. Humans constantly assume they are being evaluated. We predict how others might interpret our behavior and try to avoid being seen as rude, dismissive, ungrateful, or socially incompetent. The fear is not about the action itself; it’s about the meaning your brain anticipates others will assign to your action, which may not be true. Many times we reply to someone on a Social Network not because we want to, but because we want to avoid negative judgment. Read receipts remove plausible deniability, typing indicators create an expectation of response, online status signals availability, and notifications create urgency. All of these features are designed around Social Obligation Pressure.

Put It All Together

Social Networks profit from advertising: the more addicted the users, the more revenue. By combining all the artifacts above, Social Network applications train the human brain into a behavior loop, exploiting biases and instincts to keep users on the app as long as possible, through the following steps:

  1. It starts as a free tool that solves a real-life problem, communication: Messenger, chat, video calls, etc.
  2. Triggers, the notifications, are added to provoke anxiety or FOMO.
  3. Social Obligation Pressure pushes users to engage: reply to messages, check information, etc.
  4. Users open the Social Network app (e.g. Facebook / TikTok).
  5. The Personalization algorithm shows highly relevant, easy-to-consume content.
  6. Slot Machine Effect: users get unpredictable rewards while scrolling.
  7. Social Validation Need: users eventually get likes and comments that deliver dopamine hits.
  8. No Stopping Cues: with no natural point to exit, doomscrolling begins.
  9. After leaving or pausing, anxiety, curiosity, or social pressure still lingers in the brain.
  10. The Social Network introduces new trigger forms to give users the urge to check again, and we are back to Step 2!

We have all heard about the real-life harms caused by social network addiction: wasted time and reduced productivity, anxiety, low self-esteem, and constant comparison. Over time, it can lead to irritability, anger, and strained relationships, as attention is pulled away from real-world interactions. In more serious cases, the cycle of validation and comparison can deepen emotional distress, contributing to isolation and even self-harm. What makes this especially concerning is that these outcomes are not caused by a single feature, but by a system of reinforcing loops that continuously pull users back in, often without them realizing it.

Being aware of the mechanisms behind Social Networks can be the first step toward escaping the addiction loop. If you know someone who is addicted to Social Networks, share this post with them!

Lessons for Software Design

Although the harmful side effects of Social Networks are undeniable, the high user engagement they achieve is a dream for any software company. As software creators, we all want our applications to be used daily, especially as competition grows fiercer every day. There are still ways to apply the mechanisms observed in Social Networks for good purposes. This post is already long, so I will continue that topic in the next part. To avoid missing out, please subscribe so you can get a notification when the next part is available:


9 habits that make you unsecured on Internet (and how to protect yourself)

In the digital age, personal data is an extremely valuable asset. However, many people unintentionally expose their own information due to habits that seem harmless. Below are common habits that make you vulnerable to data theft—and that you should stop immediately.

1. Using Weak or Reused Passwords

This is the most common mistake in personal security. In many data breach cases, users were found using extremely simple passwords like “123456” or “password”. Others create passwords based on personal information, making them easy to guess.

Many cybersecurity tools are designed to guess passwords, either by trying every possible combination (a technique known as brute force) or by trying likely candidates built from personal data and common words (dictionary attacks).

In addition, reusing the same password across multiple platforms makes things much worse. If one account is compromised, all others are at risk.

Best practice:

  • Use passwords with at least 10 characters
  • Avoid personal information
  • Combine letters, numbers, and special characters
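These practices can be automated. Below is a minimal Python sketch that generates a random password meeting the criteria above, using the standard `secrets` module (the function name and the particular set of special characters are illustrative choices, not a standard):

```python
import secrets
import string

SPECIALS = "!@#$%^&*"

def generate_password(length: int = 16) -> str:
    """Generate a random password containing lowercase, uppercase, digits, and specials."""
    if length < 10:
        raise ValueError("use at least 10 characters")
    alphabet = string.ascii_letters + string.digits + SPECIALS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every required character class is present.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw) and any(c in SPECIALS for c in pw)):
            return pw
```

Note the use of `secrets` rather than `random`: the former is designed for cryptographic use, while the latter is predictable and unsuitable for generating credentials.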

2. Saving Passwords in Browsers

Browsers like Chrome and Firefox offer password-saving features for convenience. However, this habit carries risks.

If these browsers have undiscovered vulnerabilities (known as zero-day vulnerabilities), attackers could potentially steal stored passwords.

Also, when using shared computers—such as in internet cafés, print shops, or even your workplace—you should never save passwords. Others may access your accounts through stored credentials.

Safer alternatives:

  • Memorize important passwords
  • Use encrypted password managers with biometric authentication
  • Always log out after use, especially on shared devices

3. Connecting to Unsafe Public Wi-Fi

Free Wi-Fi at cafés or airports is often poorly secured.

Common risks include:

  • Weak encryption:
    If a network uses WEP or WPA, avoid connecting. These encryption methods are outdated and easily cracked.
    The minimum safe standard today is WPA2 or higher (as of 2026).
  • Evil Twin attacks:
    Attackers create fake Wi-Fi networks with the same name as legitimate ones. If you connect, they can monitor your activity or steal login data.
  • Unnecessary data collection:
    Some Wi-Fi networks request personal information through surveys—you can usually skip this step.

4. Clicking on Suspicious Links (Phishing)

Phishing is one of the most common ways attackers steal data. It relies on psychological manipulation to trick users into revealing information or installing malware.

Common phishing scenarios:

  • Fake banking emails claiming your account has a problem
  • “You’ve won a prize” messages
  • Fake login pages imitating popular websites

To avoid being fooled, always double-check the domain name in the URL. A simple trick: search for the business name on Google and call its official customer support to confirm the situation.
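As a sketch of what "double-check the domain" means in practice, the Python snippet below extracts the registrable domain from a URL. It is deliberately naive (real tooling should consult the Public Suffix List), and the lookalike URL shown is an invented example, but it illustrates how a phishing link can display a famous brand while actually pointing somewhere else:

```python
from urllib.parse import urlparse

def registered_domain(url: str) -> str:
    """Naively return the last two labels of the URL's hostname.

    Illustrative only: real code should use the Public Suffix List,
    since e.g. 'example.co.uk' has a three-label registrable domain.
    """
    host = (urlparse(url).hostname or "").lower()
    return ".".join(host.split(".")[-2:])

# The brand name appears in both URLs, but only the first one
# actually belongs to the brand's domain.
print(registered_domain("https://secure.paypal.com/login"))
print(registered_domain("https://paypal.com.account-verify.example/login"))
```

The second URL prints `account-verify.example`: the string "paypal.com" there is just a subdomain label chosen by the attacker, which is exactly the trick this section warns about.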


5. Installing Apps from Untrusted Sources

Applications downloaded from unofficial sources may contain malware designed to steal data.

Attackers often disguise malware as:

  • Free “useful” software
  • Cracked versions of paid tools

Trusting unknown sources can lead to data theft or even ransomware.

Stay safe by:

  • Downloading software only from official websites
  • Verifying sources before installing

6. Oversharing on Social Media

People today spend more time on social media platforms like Facebook, TikTok, and X than in real life.

Sharing too much personal information can be dangerous. Scammers can collect:

  • Your name and location
  • Friends and family connections
  • Habits and interests

This information can be used for scams, impersonation, or malware attacks.

Even more concerning, modern AI can generate fake images or sensitive videos using just a few photos of your face.

Protect yourself by:

  • Limiting personal information shared online
  • Avoiding posting sensitive content
  • Enabling profile privacy settings

7. Not Enabling Two-Factor Authentication (2FA)

Many popular platforms like Gmail, Facebook, and X offer two-factor authentication (2FA).

This feature adds an extra layer of security by requiring:

  • OTP codes sent to your phone
  • Biometric verification

Even if your password is compromised, attackers still cannot fully access your account.

However, 2FA is often disabled by default.

Action step:
Review your accounts and enable 2FA as soon as possible.
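For the curious, the one-time codes produced by most authenticator apps are generated deterministically from a shared secret and the current time. Below is a minimal Python sketch of HOTP (RFC 4226) and TOTP (RFC 6238) using only the standard library; in practice you would simply use an authenticator app rather than roll your own:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP with the counter derived from the current time."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval, digits)
```

Because the code depends on a secret only you and the server share, a stolen password alone is not enough to log in, which is exactly the extra layer this section describes.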


8. Not Updating Software & Using Cracked Versions

Outdated software often contains serious unpatched vulnerabilities that attackers can exploit.

Many people think updates are only for:

  • New features
  • Better UI
  • Performance improvements

But the most important purpose is security patching.

Each update typically:

  • Fixes known vulnerabilities
  • Blocks new attack methods
  • Strengthens system defenses

Without updates, you may be using software with publicly known exploits.

In some cases, simply opening a malicious image, audio file, or website can infect your system through these vulnerabilities.

Best practice:

  • Always update to the latest version
  • Avoid cracked software, which may include hidden malware

9. Ignoring App Permissions

Many apps collect more data than necessary, but users often ignore this.

On app stores, applications must declare required permissions—but most users simply tap “Allow” without review.

This habit may result in:

  • Sharing personal data unnecessarily
  • Giving apps access to sensitive system features

Stay in control by:

  • Reviewing permissions before installing
  • Avoiding apps with excessive or unrelated access requests
  • Checking reviews or consulting experts if unsure

Conclusion

The habits that lead to personal data exposure are often small—but the long-term consequences can be severe.

By recognizing and correcting these behaviors, you can significantly improve your cybersecurity awareness and avoid unnecessary risks on the Internet.


7 risks on Internet that You must know

A normal morning.

You wake up, check your phone, read emails, scroll through social media, and pay a few bills. Everything feels fast, familiar—almost automatic.

But within those “normal” moments, countless hidden risks quietly exist in the digital world.

Cyberattacks are not always loud or obvious. Sometimes, they begin with a careless click, a rushed login, or a misplaced trust.

Below are familiar scenarios—each representing some of the most common threats on the internet today that you could encounter at any time.


1. Phishing (Impersonation Scams)

You receive an email from your “bank” warning about suspicious activity. The message looks professional, complete with logos and branding, and includes a link asking you to log in immediately to verify your account.

Feeling concerned, you click the link and enter your information. Everything seems normal… until a few hours later, your account is compromised.

Common signs of phishing:

  • Urgent, well-written emails that mimic official communication
  • Fake login websites that look almost identical to real ones
  • Suspicious domain names (typos, mismatched names, or strange subdomains)

This method exploits users who are unfamiliar with how domains and links work.

If you’re not confident in identifying suspicious links, consider using tools like SafePhone, which can detect and block phishing links before you even access them.


2. Malware (Malicious Software)

You download a free tool online because it “looks useful.” Installation is quick and smooth—nothing seems wrong.

But soon after, your device becomes slower, and your data may be accessed without your knowledge.

This could be malware—software designed to secretly monitor or steal your information.

Common sources:

  • Email attachments
  • Downloads from forums or unknown websites
  • Cracked or pirated software

How to stay safe:

  • Only download apps from trusted platforms like official app stores
  • Install reliable antivirus software
  • Avoid unknown or suspicious files

3. Ransomware (Data Extortion Malware)

One day, you turn on your computer—and all your files are locked. A message appears demanding payment to restore access.

No warning. No undo.

This is ransomware, one of the most serious cyber threats today.

Once inside your system, it will:

  • Encrypt all your data
  • Demand payment for a decryption key
  • Often require payment in cryptocurrencies like Bitcoin or Ethereum to avoid traceability

Prevention tips:

  • Only install software from official sources
  • Use updated antivirus protection
  • Regularly back up your data

4. Online Scams

A friend messages you on social media, saying they’re in urgent need of money. The message feels real—the tone is familiar. Without hesitation, you transfer the money.

Later, you find out their account was hacked.

Common scam patterns:

  • Impersonating friends by copying profile pictures and information
  • Fake investment opportunities
  • Requesting deposits and then disappearing
  • Tricking you into installing malware
  • Using your identity to scam others

How to protect yourself:

  • Lock your social media profiles
  • Be cautious with financial requests
  • Verify identity via video calls
  • Use shared private memories to confirm authenticity

5. Data Breaches

You reuse the same email and password across multiple services. One day, you receive a notification about a login from an unknown device.

It’s not necessarily your mistake—one of the services you used may have been breached.

Your data could have been exposed long ago and is now circulating on underground markets.

Risks include:

  • Compromised login credentials
  • Personal data leaks
  • Chain attacks across multiple accounts
  • Financial loss

Reduce risk by:

  • Using unique passwords for each service
  • Changing passwords regularly
  • Using encrypted password managers with biometric protection

6. Public Wi-Fi Attacks

You sit at a café and connect to free Wi-Fi. It’s convenient and fast.

But at the same time, someone could be monitoring your data.

Risks of public Wi-Fi:

  • Data interception if encryption is weak
  • Fake Wi-Fi networks (Evil Twin attacks)
  • Unauthorized access to your device

7. Social Engineering (Psychological Manipulation)

You receive a call from “technical support” asking for an OTP code to “verify your account.” They sound professional, trustworthy—even urgent.

In reality, they are not hacking systems—they are hacking you.

Common tactics:

  • Impersonating authorities
  • Creating urgent scenarios (accidents, penalties, account suspension)
  • Pretending to be someone you trust

Conclusion

The digital world isn’t dangerous in obvious ways—it’s dangerous because threats often appear in familiar forms.

An email. A message. An app.
Each could be the starting point of a serious incident.

Understanding these risks doesn’t just help you avoid them—it helps you make better decisions in moments that seem completely ordinary.

6 entrances that hackers use to infiltrate your company

If you are a business owner, you are likely no stranger to news about data breaches causing millions of dollars in losses across companies in all industries. The leaked data could be your customers’ information, and sometimes even employee login credentials for your internal systems. Regardless of the type of data, assessing and reviewing vulnerabilities is always a critical step for every company—especially in today’s digital era.

However, security vulnerabilities are an extremely complex concept and not easy to grasp, which makes them difficult for business owners and their teams to identify. While it is hard to pinpoint exact vulnerabilities, it is much easier to block the sources that commonly lead to them. Therefore, this article will highlight several common sources of serious security vulnerabilities and suggest solutions to strengthen security for you, your company, and anyone working in the modern digital age.

1. Outdated Software

Every business today uses various software tools to automate and optimize workflows—such as Chrome, Word, Excel, Photoshop, PDF readers, and many specialized tools. These software products are developed by different developers, who may or may not have strong expertise in security. As a result, features may contain vulnerabilities that even the creators are unaware of.

Software is constantly updated, and many updates include patches for bugs and security flaws. However, most people tend to stick with older versions or hesitate to update—sometimes simply because they are unaware of new releases. This habit can leave systems exposed to unpatched vulnerabilities, making them easy targets for hackers.

Information about known vulnerabilities can even be bought and sold on black markets, including the dark web and deep web. This makes outdated software a highly attractive entry point for attackers. Therefore, always keep your software up to date to reduce security risks.


2. Outdated Windows Operating System

Older Windows versions such as Windows 7, Windows XP, or unsupported Windows Server editions are prime targets for hackers. This is because Windows itself is a collection of system-level software components, many of which may contain unpatched vulnerabilities over time.

Taking advantage of users’ reluctance to upgrade, many hacking campaigns successfully infiltrate systems running outdated operating systems through known exploits. The consequences can include data loss, ransomware attacks, remote surveillance, and privacy violations.

To stay safe, regularly update your Windows system and only install applications from trusted sources.


3. Cracked Software

Cracked software often contains malware or hidden backdoors that can take control of your system. Many users prefer free software, and paid software is frequently cracked by hackers to bypass licensing.

However, downloading cracked versions from the internet is extremely risky. You have no way of knowing who modified the software or whether malicious code has been injected. Many cyberattacks originate from installing cracked software embedded with viruses or backdoors.

Whenever possible, use licensed software and keep it updated to avoid both malware and vulnerabilities in outdated versions.


4. Self-Developed Websites

Most companies today maintain their own websites to establish an online presence. Many also have internal IT teams responsible for building and maintaining these systems.

Just like external software, internal development teams may lack sufficient expertise or experience in cybersecurity. This reality often leads to unnoticed vulnerabilities within company-built systems. These weaknesses may exist in the operating systems, third-party libraries, or even in the system design itself.

To mitigate these risks, companies should continuously invest in security training for their IT teams. In urgent cases, hiring professional penetration testing (pentest) teams to audit and identify vulnerabilities is highly recommended, although it can be costly.


5. Email Phishing Attacks

Phishing emails are one of the most common methods used to compromise business accounts. These attacks require minimal technical skill but are highly effective because they exploit human psychology and general lack of technical awareness.

Common tactics include impersonating banks, government agencies, or reputable companies to trick recipients into entering login credentials or sharing OTP codes. In other cases, phishing emails disguise themselves as legitimate software downloads but actually contain malware.

Many businesses have customer support staff who may lack sufficient cybersecurity awareness, making them easy targets. Simply training employees is often not enough, as phishing techniques are becoming increasingly sophisticated.


6. Weak Operational Processes

Poorly controlled internal processes can allow hackers—or even insiders—to gain access to sensitive information. Some global cybercriminal groups have even deployed insiders by infiltrating companies as employees to create internal backdoors.

Companies with weak hiring, monitoring, and access control processes are especially vulnerable. Large multinational corporations face higher risks due to their scale, but small and medium-sized businesses are not immune—especially from competitors.

To reduce these risks, companies should enforce strict access control policies, granting employees only the permissions they need—and only for a limited time.


Conclusion

Prevention is better than cure. Identifying and addressing security vulnerabilities early is essential to protecting your company’s data, finances, and reputation.


What is Smart Contract ? – explained in plain English

A Smart Contract is a contract, but instead of being written in a human language, it is written in a programming language. Like a contract, a Smart Contract defines conditions and financial obligations among participants. Unlike a traditional contract, with a Smart Contract the financial obligations can be executed automatically when conditions are met, without underwriters, lawyers, or law enforcement entities.

A traditional contract can protect the financial rights of its participants only if a government enforces it. Financial obligations in contracts are settled in money, and money is managed and operated by banks. A Smart Contract, in contrast, is a program that executes on a blockchain: a network of computers obeying a protocol that provides bank-like services such as holding balances, transferring value, recording ownership, enforcing rules automatically, and keeping an immutable transaction history.

What Conditions can be added to a Smart Contract ?

Not every statement in a contract can be converted into a Smart Contract. A Smart Contract is a program, so it works best with precise numbers and clearly defined if-else conditions, such as money amounts, dates and times, vote counts, or temperatures.

For example, suppose a company uses a Smart Contract to pay employee salaries. The Smart Contract can easily implement the agreement that, on the 1st of every month, a fixed amount of money is automatically transferred from the company’s wallet to each employee’s wallet, provided that sufficient funds have been deposited in advance.

Once deployed, the smart contract does not rely on the company’s willingness to pay or on any manual action from accountants or banks. If the date condition is met, the payment is executed exactly as written. If the funds are not available, the payment simply does not occur, making the failure transparent and verifiable to all parties.

In this way, the smart contract replaces trust in the employer or intermediaries with trust in predefined rules and automated execution, ensuring predictable and timely salary payments without human discretion.

However, the smart contract cannot determine whether the employee actually worked, worked well, or should be fired. Those human decisions must be made outside the system. The smart contract only enforces what was clearly defined in advance: who gets paid, how much, and when.
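The salary agreement above can be sketched as a toy simulation in Python. This is illustrative only: a real smart contract would be written in a blockchain language such as Solidity and executed by the network, and the class name and wallet addresses here are invented. The point is how the payment logic reduces to precise, checkable conditions:

```python
import datetime

class SalaryContract:
    """Toy in-memory simulation of the salary agreement described above."""

    def __init__(self, employees: dict[str, int]):
        self.balance = 0
        self.employees = dict(employees)  # wallet address -> monthly salary
        self.ledger = []                  # immutable-style payment history

    def deposit(self, amount: int) -> None:
        self.balance += amount

    def run_payroll(self, today: datetime.date) -> bool:
        # Condition 1: payments only execute on the 1st of the month.
        if today.day != 1:
            return False
        # Condition 2: sufficient funds must have been deposited in advance;
        # otherwise the payment simply does not occur, visibly to all parties.
        total = sum(self.employees.values())
        if self.balance < total:
            return False
        for wallet, salary in self.employees.items():
            self.balance -= salary
            self.ledger.append((today.isoformat(), wallet, salary))
        return True
```

Notice that nothing in the code asks whether the employees worked well: the program can only enforce who gets paid, how much, and when.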

What Conditions can NOT be added to a Smart Contract ?

Smart Contracts cannot work with emotions, quality judgments, or real-life events.

For example, suppose a company hires a developer under this agreement:

“The developer will build a high-quality mobile app that meets business needs.
Payment will be made if the work is satisfactory.”

This is where a Smart Contract cannot replace a traditional contract. A smart contract, being a program, cannot decide:

  • Whether the app is “high-quality”
  • Whether it “meets business needs”
  • Whether the work is “satisfactory”

These require:

  • Human judgment
  • Discussion
  • Interpretation
  • Sometimes negotiation or compromise

Smart contracts are excellent at enforcing clear rules, but they cannot replace contracts that rely on human judgment, quality assessment, or trust.

How to save $99/year when build app on iOS

If $99 per year is dust to you then this post is not for you 🙂

If it is not, then please take a look!

Publishing mobile applications to the App Store costs $99 per year. For an indie developer taking the first steps toward publishing an app, this cost can cause some hesitation.

If your application is simple and does not depend on system-level APIs such as GPS, the file system, Bluetooth, background activities, or push notifications, you can make use of the PWA feature supported by the Safari browser, which is always available on iOS and macOS.

PWA stands for Progressive Web App. It is a web app, but it can be installed on smartphones like a mobile app. Simply put, instead of accessing it via a web browser like Safari or Chrome, users can find an icon on their phone, tap it, and open the app. This makes it feel like a mobile app, but underneath it opens a browser session and renders HTML, JS, and CSS. Although a PWA does not feel as smooth and optimized as a native mobile app, it is acceptable for simple tools, content-first applications, or admin dashboards.

I will take one of my favorite PWA applications, Meme Express, as an example. Meme Express is a meme editor that I use on my MacBook and iPhone whenever I want to make a meme. It is built with the Flutter framework. It has a native app on the Play Store for Android, and a PWA version for every other OS, including iOS, macOS, Windows, and Linux: essentially, any device that can run a browser.

How is the PWA version of Meme Express made?

Framework

In line with a mobile-first design, Flutter is used. For simple tools, Flutter is a great cross-platform solution: we can write the code once, then port it to iOS, Android, web, Windows, and Linux.

Deploy

For the Android version, it is published normally via the Play Store, here: https://play.google.com/store/apps/details?id=com.ease_studio.meme. Unlike other cross-platform frameworks that use an in-app web view to mimic a mobile app, Flutter compiles the application into a native Android app.

For the iOS version, Flutter can compile to native iOS code as well. But since $99/year is not an option here, the PWA version comes to the rescue.

To publish a PWA version, a hosting server is required, which normally incurs a monthly cost. Luckily, GitHub Pages allows us to deploy a web app from a repository for free, accessible via the URL username.github.io/app-name; for Meme Express, it is https://ease-studio.github.io/meme-pwa/ . GitHub Pages also allows mapping a custom domain name to it. For example, https://meme-express.io.vn/ actually points to https://ease-studio.github.io/meme-pwa/ .

Make it Installable

To make a web app installable, i.e. to make it a PWA, a manifest.json file is added. The manifest.json structure is defined at: https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps/Manifest.
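Below is a minimal example of what such a manifest.json might look like. The icon paths and colors are illustrative placeholders, not taken from Meme Express; see the MDN reference above for the full list of members:

```json
{
  "name": "Meme Express",
  "short_name": "Meme",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#ffffff",
  "icons": [
    { "src": "icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

The `display: "standalone"` member is what hides the browser UI so the app opens full screen, and the icons are what appear on the home screen after installation.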

Install PWA app

Below is a demonstration of installing a PWA app on an iPhone, taking Meme Express as an example. Simply put:

  1. Open Safari and go to https://meme-express.io.vn/
  2. Tap the “Share” icon
  3. Tap “Add to Home Screen”