Inside Predictive Surveillance: How They Watch Before You Act
Tue, Feb 10, 2026 • 6 min read
In 2023, a major city started a controversial surveillance program that used algorithms to predict possible crimes. As a result, some innocent people were flagged and monitored.
Today’s surveillance systems try to predict what people might do. Instead of asking, “What did this person do?” they now ask:
“What is this person likely to do next?”
From Observation to Prediction
Predictive targeting works by collecting lots of data and using algorithms that learn from it. Every bit of information you leave behind, like where you go, what you search for, who you contact, what you buy, what websites you visit, which apps you use, and who you talk to, is gathered into huge computer systems. On their own, these pieces of data seem harmless. But together, they create a kind of behavior fingerprint, similar to how a credit score is built from your financial actions to guess how reliable you are with money. Instead of judging who you are, these systems look at how you act. Once they know that, they can predict:
- your daily routines
- how risky you appear
- how much you influence others
- your moods and political views
- your financial health
- how likely you are to travel
- even your chances of disagreeing with authority

According to a Fox News report, Facebook CEO Mark Zuckerberg could not confirm how many data points the company collects per user, but estimates range into the tens of thousands. All of this data turns your activity into patterns that predict your behavior.
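The "behavior fingerprint" idea can be sketched as a feature profile aggregated from an event log. This is a toy illustration: the event types, user IDs, and log format are all invented, not drawn from any real system.

```python
from collections import Counter

# Hypothetical event log: (person_id, event_type, detail). All values invented.
events = [
    ("u1", "search", "flight prices"),
    ("u1", "location", "airport"),
    ("u1", "purchase", "luggage"),
    ("u2", "search", "recipes"),
]

def fingerprint(person_id, events):
    """Aggregate one person's events into a simple count-based profile."""
    counts = Counter(etype for pid, etype, _ in events if pid == person_id)
    return dict(counts)

print(fingerprint("u1", events))  # {'search': 1, 'location': 1, 'purchase': 1}
```

Each count on its own is trivial; the point is that the aggregate, like a credit score, becomes a compact summary of how you behave.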
How Predictive Surveillance Works in Practice
Predictive targeting follows a familiar pipeline:
- Collection
Data is gathered continuously from telecom providers, internet platforms, financial institutions, smart devices, cameras and sensors, public records, and private data brokers. Companies like Acxiom and Experian are key players in this ecosystem, quietly amassing vast amounts of personal information. Most of this happens behind the scenes, so you rarely notice it.
- Correlation
These systems link information that might seem unrelated. For example, your phone connects to a tower, your card makes a payment, your account logs in, and a camera sees your face. All of this is connected just by your actions.
Another example: a grocery purchase made with a credit card is recorded and may be linked to your online activity. If you recently researched international travel, algorithms may interpret the purchase as trip planning; if the combination resembles known risky patterns, you may face additional airport security checks purely because of these correlated actions.
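The correlation step amounts to joining records from unrelated sources on a shared identity key. A minimal sketch, assuming (hypothetically) that each source already tags records with the same identifier:

```python
from collections import defaultdict

# Records from different "sources" sharing one identity key. All data invented.
records = [
    {"source": "telecom", "id": "alice", "event": "tower ping, downtown"},
    {"source": "bank",    "id": "alice", "event": "card payment, grocery"},
    {"source": "web",     "id": "alice", "event": "search: visa requirements"},
    {"source": "bank",    "id": "bob",   "event": "card payment, cafe"},
]

def correlate(records):
    """Group seemingly unrelated records into one timeline per identity."""
    timeline = defaultdict(list)
    for r in records:
        timeline[r["id"]].append((r["source"], r["event"]))
    return dict(timeline)

profiles = correlate(records)
print(len(profiles["alice"]))  # 3 events from 3 sources, fused into one view
```

In practice the hard part is resolving identity across sources (phone number, face, card number); once records share a key, the fusion itself is a trivial group-by like this one.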
- Profiling
Algorithms group people into categories such as “Likely protest participant,” “High financial stress,” “Radicalization risk,” “Cross-border traveler,” “Network hub,” “Influencer node,” or “Low compliance probability.” You may never see these labels, but they exist and can follow you.
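Real profiling models are statistical, but the labeling logic can be sketched as threshold rules over a profile. Every field name, threshold, and label below is invented for illustration:

```python
# Hypothetical rule-based labeling. Thresholds and field names are invented.
def assign_labels(profile):
    """Map a behavior profile to the kinds of categories described above."""
    labels = []
    if profile.get("protest_pages_visited", 0) > 5:
        labels.append("Likely protest participant")
    if profile.get("overdraft_count", 0) > 3:
        labels.append("High financial stress")
    if profile.get("contact_count", 0) > 200:
        labels.append("Network hub")
    return labels

print(assign_labels({"protest_pages_visited": 7, "contact_count": 250}))
# ['Likely protest participant', 'Network hub']
```

Note that the person never sees the thresholds, the inputs, or the resulting labels, which is exactly the opacity problem the section describes.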
- Prediction
Based on your past patterns, these systems try to guess where you’ll go, who you’ll meet, what you’ll search for, what you’ll buy, when you’ll be active, and how you might react to stress.
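One of the simplest forms such a guess can take is a first-order Markov model: count which place tends to follow which in your history, then predict the most frequent successor. The history below is fabricated for illustration:

```python
from collections import Counter, defaultdict

# Fabricated location history for one person.
history = ["home", "work", "gym", "home", "work", "cafe", "home", "work", "gym"]

# Count transitions: how often each place follows each other place.
transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

def predict_next(place):
    """Predict the most frequently observed place after `place`."""
    counts = transitions[place]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("work"))  # 'gym' followed 'work' twice, 'cafe' only once
```

The same counting trick generalizes to searches, purchases, or contacts; richer systems swap the counter for a learned model, but the logic — past patterns extrapolated forward — is identical.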
You Can Be Flagged Without Doing Anything Wrong
This is the most troubling part. Predictive targeting doesn’t need you to do anything wrong. It works on probability, not proof, creating real risks such as being wrongly flagged, losing access to services, or facing suspicion without cause. You might be flagged because:
- your friends were flagged
- you visited certain places
- you read certain topics
- you joined certain groups
- you resemble known profiles
- your behavior "deviates" from norms

Some officials defend predictive systems as necessary for public safety, arguing that they help identify threats before crimes happen and prevent harm. According to the Brennan Center for Justice, supporters believe predictive policing can predict crime more accurately and effectively than traditional police methods.
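The "your friends were flagged" mechanism is guilt by association on a contact graph. A toy sketch, where the graph, the names, and the scoring rule are all invented:

```python
# Hypothetical contact graph; edges are "knows" relationships. All invented.
graph = {
    "alice": ["bob", "carol"],
    "bob":   ["alice"],
    "carol": ["alice", "dave"],
    "dave":  ["carol"],
}
flagged = {"bob", "dave"}  # people already on a watchlist

def association_score(person):
    """Fraction of a person's contacts that are already flagged."""
    neighbors = graph.get(person, [])
    if not neighbors:
        return 0.0
    return sum(n in flagged for n in neighbors) / len(neighbors)

print(association_score("alice"))  # 0.5 — elevated purely by who she knows
```

Alice has done nothing herself; her score comes entirely from her neighbors, which is precisely how someone can be flagged without any wrongdoing.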
Pre-Crime Logic: Acting Before Action
Some governments and agencies now use predictive algorithms to try to prevent problems. In theory, this sounds reasonable: “We want to stop harm before it happens.” In practice:
This can mean preemptive monitoring, travel restrictions, extra screening, financial scrutiny, account suspensions, watchlists, and silent blacklisting. The key risks of these measures include a lack of transparency, denial of due process, and the risk of wrongful targeting. All of this can happen without formal charges, explanation, or appeal. You are never told, "You were predicted to be risky." You just experience the consequences. This approach is similar to past preventive measures, such as the loyalty programs of the 1950s, which targeted people based on associations. These examples show that predictive logic is not new; it just uses more advanced technology today.
Commercial Surveillance Feeds State Surveillance
Predictive targeting is not just used by governments. Big tech companies and data brokers developed it first. Advertising systems already predict:
- what you’ll buy
- when you’ll buy it
- how much you’ll spend
- what messages persuade you
- when you’re most likely to be influenced

These same systems can be easily repurposed: what sells products today can flag "threats" tomorrow. The process is the same; only the goal changes. With the global ad-tech market worth over $600 billion, there is a strong financial incentive for this shift, showing how the machinery built for profit can be redirected toward government surveillance.
Why Models Are Not Neutral
Algorithms are not objective. They inherit biased data, unfair history, political goals, business interests, and cultural assumptions. The risk is circular: discrimination encoded in past data produces discriminatory predictions, and over-scrutinized communities stay labeled “high risk.”
Behavior Shaping: Surveillance That Trains You
When people know they might be watched, they self-censor, avoid sensitive topics, and stay away from controversial discussions. Here is a simple text exchange between friends:
"Hey, are we still meeting up tomorrow?" one friend texts.
There's a pause, and the reply comes cautiously: "Yeah, but let's not chat about the protest stuff over text...you never know who might be watching."
This subtle hesitation shows how social interactions change. People isolate and conform instead. Over time, society becomes quieter.
The Psychological Cost of Being “Predictable”
Living under predictive systems changes how people think. You start to wonder: "Will this look suspicious?" "Will this trigger something?" "Is this worth the attention?" "Should I avoid this?" Imagine standing at an airport checkpoint, passport in hand, as a security officer looks at you. You feel tense as their eyes linger a bit longer. "Will this trip be flagged?" you wonder.
Why Opting Out Is Nearly Impossible
Escaping predictive targeting is nearly impossible because:
- Phones are needed for daily life.
- IDs are required.
- Payments are tracked.
- The internet is run by a few companies.
- Systems are always monitored.

Worse, opting out itself draws suspicion. Avoiding social media can make it seem like you have something to hide, while using digital payments means your spending is always tracked. Modern tools are a necessity, not a choice, so privacy itself becomes suspicious: the system treats non-participation as a warning sign.
If this made you think differently about privacy, power, and modern surveillance:
Share it with someone who still thinks "I’ve got nothing to hide"