The surveillance infrastructure of 2026 is no longer a government project. It is a market. Photo: Unsplash
The week of March 3-7, 2026, will be remembered as the week America's surveillance architecture stopped being a conspiracy theory and became a press release. Customs and Border Protection admitted it purchased phone location data from the advertising industry. Workers in Nairobi revealed they were watching Meta smart-glasses footage of users in bathrooms, bedrooms, and intimate moments. The FBI disclosed "suspicious activity" on the very network segment that handles its wiretap data. And a highly sophisticated iPhone-hacking toolkit built for the US government turned up in the hands of foreign intelligence agencies and organized crime.
Any one of these stories would have dominated the news cycle in 2019. In March 2026, they dropped inside a weekly security roundup, buried under war coverage from the Gulf. That burial is itself part of the story. While the country watches bombs fall on Bushehr, the domestic surveillance infrastructure has quietly crossed lines that used to require congressional hearings even to debate.
This is not paranoia. This is documentation. Let's go through each thread, because they connect in ways that should alarm anyone who pays attention to power.
Thread One: Your Attention Is Being Sold to the Border Police
Every time you open an app and see an advertisement, something is happening that most people don't know about. In the milliseconds between you loading a screen and an ad appearing, an auction is taking place. Advertisers bid to show you that specific ad. This process - called real-time bidding, or RTB - generates a river of data about your device, your location, your browsing habits, and your behavioral fingerprints. That data flows to hundreds of brokers who aggregate, package, and resell it.
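To make the mechanics concrete, here is a minimal sketch of the kind of payload an RTB auction broadcasts. The field names follow the OpenRTB 2.x convention (`device.ifa`, `device.geo`); the app bundle and coordinate values are hypothetical, and a real exchange payload carries far more. The point is structural: every bidder in the auction receives this object, whether or not they win the ad slot.

```python
import json

# Illustrative bid request (not a real exchange payload); field names follow
# the OpenRTB 2.x convention for what an auction can broadcast to bidders.
bid_request = {
    "id": "auction-8f3a",
    "app": {"bundle": "com.example.weatherapp"},  # hypothetical app
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
        "os": "iOS",
        "geo": {"lat": 38.8977, "lon": -77.0365, "type": 1},  # GPS-derived
    },
}

def harvest(request):
    """What a losing bidder -- or a broker posing as one -- can still record."""
    dev = request["device"]
    return {
        "advertising_id": dev["ifa"],
        "location": (dev["geo"]["lat"], dev["geo"]["lon"]),
        "app": request["app"]["bundle"],
    }

record = harvest(bid_request)
print(json.dumps(record))
```

Note that `harvest` never needs to win the auction: participation alone delivers the device identifier and coordinates, which is exactly what makes the data resellable downstream.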
For years, privacy researchers warned that law enforcement could purchase this data to track people's movements without a warrant. No judge. No probable cause. No Fourth Amendment scrutiny. Just a government credit card and a subscription to the right data broker.
This week, those warnings became official government acknowledgment.
A Privacy Threshold Analysis document obtained by 404 Media through a Freedom of Information Act request confirmed that United States Customs and Border Protection purchased phone location data linked to real-time bidding processes. The document covers a trial CBP ran between 2019 and 2021 - but critically, the agency did not respond to 404 Media's follow-up question about whether it is still buying the data. That non-response is itself an answer.
"The data has been called a 'gold mine' for tracking people's daily activities." - WIRED, citing 404 Media's reporting on the CBP Privacy Threshold Analysis document
The mechanics here matter. RTB location data is not a clean GPS ping from a single app you knowingly installed. It is an aggregate drawn from dozens or hundreds of apps running on your phone - apps you gave location permission to for entirely unrelated purposes. A weather app, a game, a news reader. Each of these can quietly participate in RTB auctions, leaking your precise location coordinates to a network of data brokers who then sell that data to anyone willing to pay.
CBP is not the only agency looking at this market. The same reporting noted that ICE has reportedly planned to purchase access to a separate system called Webloc, which takes location surveillance a step further: it can monitor entire neighborhoods for mobile phone movements. Not individuals. Neighborhoods. The technical framing here is "geofencing at scale" - drawing a virtual perimeter around a geographic area and tracking every device that enters or exits it.
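"Geofencing at scale" is simpler than it sounds. A minimal sketch, assuming a broker feed of `(device_id, lat, lon)` pings (the device IDs and coordinates below are hypothetical): draw a circle, compute great-circle distance, and keep every device inside it.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6371000  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def devices_in_fence(pings, center, radius_m):
    """Every device whose ping falls inside the perimeter -- targeted or not."""
    return {did for did, lat, lon in pings
            if haversine_m(lat, lon, center[0], center[1]) <= radius_m}

# Hypothetical broker feed: (device_id, lat, lon)
pings = [
    ("dev-a", 40.7128, -74.0060),  # inside the fence
    ("dev-b", 40.7130, -74.0058),  # ~25 m away, inside
    ("dev-c", 40.7580, -73.9855),  # ~5 km away, outside
]
print(devices_in_fence(pings, (40.7128, -74.0060), 500))  # → {'dev-a', 'dev-b'}
```

The filter has no concept of a suspect. Every device inside the perimeter is returned, which is why "monitoring a neighborhood" and "monitoring everyone in a neighborhood" are the same query.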
The legal framework for all of this is thin. A 2018 Supreme Court ruling in Carpenter v. United States held that the government generally needs a warrant to obtain cell-site location records from telecom companies. But the RTB loophole exploits a different path: the government isn't subpoenaing a telecom. It's buying data from a commercial broker. Courts have not definitively settled whether the Carpenter protections apply to commercially purchased location data, and until they do, agencies will keep buying.
The second-order effect to watch: CBP's admission is the first official one. That doesn't mean it's the first or only agency doing this. It means CBP is the first one caught with FOIA-able documents. Treasury, DHS, DEA, state and local law enforcement - all of them have procurement budgets, and all of them have been quietly shopping at the same data brokerage market for years, according to prior investigative reporting by the Electronic Frontier Foundation and the ACLU.
Real-time bidding turns every ad auction into a location data harvest - and government agencies are buying. Photo: Unsplash
Thread Two: Meta's Smart Glasses Are Watching - and So Are the Humans Behind the AI
Meta's Ray-Ban smart glasses have been on the market since 2023, steadily growing in popularity as the clearest mainstream example of wearable AI becoming a real product. The glasses feature a built-in camera, microphones, and an AI assistant called "live AI" that lets you point the camera at things and ask questions about what you're seeing. Cooking instructions. Translation. Identifying objects.
The pitch is seamless AI integration with your physical world. The reality, this week, is more complicated.
Reporting by Swedish newspapers Svenska Dagbladet and Goteborgs-Posten - based on interviews with more than 30 current and former workers at the data-labeling firm Sama in Nairobi - revealed that human contractors routinely review footage captured by the glasses and used to train Meta's AI systems.
The footage includes videos of users in bathrooms. Videos of users undressing. Videos exposing financial information - bank screens, PIN entry, sensitive documents. Workers told reporters they have seen sexual content. Workers who raise concerns about what they're viewing report being threatened with dismissal.
"Contractors working for Meta say they've routinely reviewed sensitive footage captured by the company's AI-powered smart glasses, including videos showing users in bathrooms, undressing, or exposing financial information." - WIRED, summarizing Swedish investigative reporting
The technical explanation is straightforward: when users activate "live AI," the glasses record video and audio. Meta's policies permit the company to retain and review these recordings for AI training purposes. This is disclosed in the terms of service. What the contractors in Nairobi describe - and what appears to be the actual problem - is that many users are entirely unaware that humans, not just automated systems, can and do see this content.
This is the pattern of AI training data collection playing out at the most intimate physical scale yet. With language models, the human review of training data is abstract - text, websites, documents. With image and video models, that same review pipeline means humans in a labeling factory watching whatever the camera saw. And the camera was on a person's face. In their home. In their bathroom.
The second-order effect here is structural. Meta is building a product line that will eventually include true augmented reality glasses - the kind that overlay digital information onto the physical world in real time. The Ray-Ban glasses are the first-generation proof-of-concept for that vision. Every major tech company building AR hardware faces the same data pipeline problem: you cannot train AI that understands the physical world without collecting footage of the physical world. That footage will inevitably include private moments. The question is not whether this will happen again - it is how many companies are currently operating the same pipeline without a Swedish investigative journalism team to expose it.
Meta, for its part, has not disputed the core facts of the reporting. The company's position, essentially, is that users consented to this through terms of service. The workers' position is that those terms of service don't reflect the lived reality of what they're being asked to watch, in what quantities, under what working conditions.
The Data Collection Flywheel
Meta's smart glasses collect footage during "live AI" sessions. That footage is sent to data labeling contractors (in this case, Sama in Nairobi). Contractors annotate and review the footage to train AI models. Trained models make the glasses more capable. More capable glasses attract more users. More users generate more footage. The flywheel doesn't stop. And your bathroom isn't off-limits.
Thread Three: Someone Got Into the FBI's Wiretap Network
In December 2024, it emerged that Chinese state hackers from a group called Salt Typhoon had broken into virtually every major US telecom carrier. The entry point, in multiple cases, was the systems those telecoms operate specifically to enable court-ordered wiretaps on behalf of the FBI and other law enforcement agencies. The irony was exquisite and terrible: the infrastructure America built to surveil its own people had been turned against America by a foreign power.
Congress held hearings. The FBI director called it one of the worst intelligence breaches in American history. The Cybersecurity and Infrastructure Security Agency issued emergency directives. Fourteen months later, here we are again.
CNN reported this week that the FBI is investigating "suspicious activity" on a portion of its own network - specifically, the part that handles wiretaps and surveillance warrants. The FBI confirmed the incident is real and has prompted a response from senior officials focused on national security and civil liberties. Neither the FBI nor CNN provided further technical details about the nature of the suspicious activity or how far the intrusion may have extended.
The deliberate vagueness of "suspicious activity" is bureaucratic hedging language. It means something was detected, someone is investigating, and officials are not yet ready to characterize the scope. Whether this is an intrusion comparable to Salt Typhoon, a lateral movement attempt by the same or a different actor, or something less severe will take time to determine. But the timing - 14 months after Salt Typhoon, against the same category of infrastructure - is not coincidental noise.
Wiretap systems are a particularly high-value intelligence target. If you can get access to the infrastructure that routes law enforcement surveillance requests, you get three things simultaneously: you learn who law enforcement is currently watching, you learn what investigative techniques are being used, and you can potentially manipulate or corrupt what law enforcement actually receives. A sophisticated adversary doesn't need to read the wiretaps themselves - knowing that a target is being wiretapped, and by whom, is already a counterintelligence goldmine.
"Any mention of a potential breach of wiretap data calls to mind 2024's disastrous intrusions by China's Salt Typhoon hacker group, which broke into practically every US telecom, in some cases by exploiting their systems for enabling wiretaps on behalf of law enforcement." - WIRED
The structural problem that Salt Typhoon exposed - and that this new incident suggests has not been fixed - is that CALEA, the Communications Assistance for Law Enforcement Act passed in 1994, forced every telecom and eventually many internet service providers to build wiretap-compatible backdoors into their infrastructure. The law was designed to solve a narrow problem: making sure FBI could get court-ordered access to communications. The unintended consequence was creating a standardized attack surface. If you know the architecture of how US law enforcement accesses communications - and Salt Typhoon definitely knows now - you have a roadmap for the next intrusion.
Security experts have been warning about this exact failure mode since CALEA was first applied to internet communications in 2006. The argument was simple: any backdoor you build for the good guys will eventually be found and used by the bad guys. That argument lost the policy debate. The FBI's wiretap system is now experiencing the predicted consequences.
Thread Four: America's iPhone-Hacking Tools Are Now in Criminal Hands
For years, the US government paid extraordinary amounts of money to develop or purchase highly sophisticated techniques for hacking into iPhones. Some of this work went through contractors like the now-infamous NSO Group. Some was developed in-house by agencies like the NSA and CIA. Some was procured through other classified channels whose names have never been publicly disclosed.
These tools were supposedly tightly controlled. Export controls. Classification levels. Strict protocols around deployment. The theory was that the most dangerous phone-hacking capabilities would remain with governments sophisticated enough to use them responsibly (a contested premise, but the stated one).
WIRED reported this week, citing security research, that a highly sophisticated set of iPhone-hijacking techniques - likely originally built for or by the US government - has spread well beyond that intended user base. The toolkit has apparently infected tens of thousands of phones or more. It is in the hands of multiple other nation-states. It is also, critically, in the hands of criminal scammers who have used the tools to compromise victims' devices.
The path from classified government tool to criminal weapon follows a now-familiar pattern. Tools get leaked - through insider theft, contractor security failures, or direct nation-state espionage. They get reverse-engineered by sophisticated adversaries. They get copied, adapted, and eventually commoditized. What costs $10 million to develop in 2018 costs $50,000 to buy on the dark web in 2026.
Apple has been playing an increasingly sophisticated game of cat and mouse with these techniques, issuing security updates through its Rapid Security Response program and its annual iOS major releases. The company's Lockdown Mode, introduced in 2022 for high-risk users, aggressively limits attack surface by disabling functionality that sophisticated exploits commonly target. But Lockdown Mode requires a conscious decision to trade convenience for security - most users don't know it exists, and fewer still have enabled it.
Classified hacking tools have a consistent lifecycle: government development, contractor proliferation, adversary theft, criminal deployment. Photo: Unsplash
The Leakbase Takedown: What Good Law Enforcement Looks Like
It would be unfair to end this piece without noting that one piece of genuinely good news dropped this week in the security space. The FBI, Europol, and multiple European law enforcement agencies successfully dismantled Leakbase, a subscription cybercrime forum that had grown to 142,000 paying members since launching in 2021.
The operation resulted in 13 arrests, seizures of dark-web infrastructure from the Netherlands to Malaysia, and interviews with dozens of additional suspects. Leakbase was a well-known market for stolen credentials and breach data - the kind of platform that makes mass credential-stuffing attacks and identity theft operations viable at scale.
Brett Leatherman, the FBI's assistant director of cyber operations, confirmed the operation details to The Record. This is law enforcement cyber operations working as designed - international coordination, judicial process, operational disruption. The contrast with the surveillance overreach documented in the rest of this article is worth sitting with.
Good surveillance practice looks like: get a warrant, coordinate with allies, arrest actual criminals who cause direct harm. Bad surveillance practice looks like: buy location data from ad brokers, monitor neighborhoods without court approval, use opaque legal mechanisms to identify protesters.
Both happen simultaneously in 2026. The government that took down Leakbase is the same government that purchased RTB data. These are not contradictions - they are features of a system that has simultaneously developed rigorous tools for some targets and operated near-total impunity for others.
The Timeline: How We Got Here
1994: CALEA passes, requiring telecoms to build law enforcement backdoors into communications infrastructure. Security researchers warn this creates a permanent attack surface. Congress disagrees.
2019-2021: CBP runs a trial purchasing phone location data from RTB ad networks, according to FOIA documents released in 2026. The trial apparently concludes without triggering any public oversight review.
2022: Apple launches Lockdown Mode, its maximum-security iOS configuration, acknowledging that state-level attackers are a real and ongoing threat to specific user populations. Most users never enable it.
December 2024: Salt Typhoon, a Chinese state hacking group, is revealed to have penetrated virtually every major US telecom by exploiting CALEA wiretap infrastructure. Congressional hearings follow. The fundamental architecture is not changed.
2023 onward: Meta's Ray-Ban AI smart glasses expand to new markets. Data labeling operations begin using footage captured by the glasses to train AI models. Contractors in Nairobi begin reviewing the footage, including intimate and bathroom content.
March 2026: CBP admission via FOIA. Meta glasses bathroom footage exposed by Swedish journalists. FBI wiretap network breach announced. US iPhone-hacking toolkit confirmed in criminal hands. All in one week.
What This Means for the People Reading This
There is a tendency to compartmentalize these stories. CBP buying ad data is a government-overreach story. Meta glasses bathroom footage is a corporate-ethics story. FBI wiretap breach is a national-security story. iPhone hacking tools in criminal hands is a cybercrime story.
That compartmentalization is exactly why none of these problems get fixed. They are the same problem expressed at different layers of the same infrastructure. The underlying pattern is identical: data collected for one purpose (advertising optimization, AI training, law enforcement access, government security) flows to actors who were never supposed to have it (border agents, human contractors, foreign intelligence agencies, criminals).
The data flows because the legal and technical controls designed to contain it are either too weak, too porous, or too easily gamed. RTB data is sold through brokers who face essentially no legal obligation to verify how their customers will use it. AI training data is reviewed by contractors under terms that users never clearly understood. Wiretap infrastructure is maintained by telecommunications companies that are subject to CALEA mandates but lack the resources and oversight to secure that infrastructure against nation-state adversaries. And government hacking tools proliferate because the contractors who build and deploy them operate in a classified environment where accountability is thin and the incentives for secrecy override the incentives for security hygiene.
"The murkiest parts of the advertising industry can collect data from your device, including your phone's identifying details and location data; this is then repackaged and sold to companies and entities." - WIRED, on the RTB surveillance economy
The second-order effect that most people miss: as surveillance infrastructure becomes more capable and more widely distributed, the concept of targeted surveillance becomes increasingly fictional. When CBP can buy a data stream that covers an entire neighborhood's device movements, the surveillance is no longer targeted. It is ambient. You don't have to be a target to be tracked. You just have to exist in the same geographic cell as someone who is.
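The "same geographic cell" mechanic can be sketched in a few lines. This is a deliberately simplified stand-in for the spatial indexing a broker might actually use (real systems use geohashes or hex grids, not coordinate rounding), and all device names and coordinates are hypothetical:

```python
def grid_cell(lat, lon, precision=2):
    """Quantize coordinates into a coarse grid cell (roughly 1 km at
    precision=2). Simplified stand-in for a broker's spatial index."""
    return (round(lat, precision), round(lon, precision))

# Hypothetical feed: one actual target, two uninvolved bystanders
pings = {
    "target":     (40.7128, -74.0060),
    "bystander1": (40.7131, -74.0058),  # same block as the target
    "bystander2": (40.9000, -74.2000),  # a different town
}

target_cell = grid_cell(*pings["target"])
swept_up = [who for who, (lat, lon) in pings.items()
            if grid_cell(lat, lon) == target_cell]
print(swept_up)
```

Querying for the target's cell returns the target and `bystander1` alike: at cell granularity, the query cannot distinguish the person under investigation from whoever happens to share the block.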
When someone wears Meta's glasses at a party with the live AI feature running, everyone at that party is potentially being recorded and reviewed by a contractor in Nairobi, regardless of whether any of them consented to appearing in a smart-glasses AI training dataset. Surveillance has always had a blast radius. In 2026, that radius has expanded to include anyone who is simply present in physical space near someone carrying certain hardware.
Where the Regulatory Framework Fails
The United States has no comprehensive federal data privacy law. This is not news - privacy advocates have been pointing this out for two decades. But the specific absence matters now more than ever, because the US is unique among major democracies in leaving the commercial surveillance market essentially unregulated at the federal level.
The European Union's GDPR, which took effect in 2018, imposes strict requirements on data collection and processing, explicit consent standards, and meaningful penalties for violations. The UK has its own version. Brazil passed its Lei Geral de Proteção de Dados. Canada, Australia, Japan, India - all have or are developing comprehensive privacy frameworks. The US has California's CPRA, a patchwork of sector-specific laws (HIPAA for health, FERPA for education), and nothing else.
The RTB surveillance market that CBP purchased from is legal in the United States precisely because there is no federal law that prohibits it. Privacy researchers at Duke University published a report in 2023 documenting that data brokers would sell sensitive location data to essentially anyone with a credit card, including researchers posing as law enforcement. No warrant required. No regulatory review required. No meaningful oversight required.
The EU has been actively investigating RTB under GDPR for years, with Ireland's Data Protection Commission (home to most of the big ad tech companies' European headquarters) ruling in 2022 that IAB Europe's Transparency and Consent Framework - the technical standard that governs RTB data sharing - was itself a violation of GDPR. That ruling led to years of negotiations about how to restructure the ad tech market. In the US, the same market operates without challenge.
The FBI wiretap infrastructure problem has a different regulatory failure at its root: Congress passed CALEA without adequately funding or requiring the security measures that CALEA-compatible backdoor infrastructure demands. This is not a new argument. In 2010, the Obama administration sought to expand CALEA-style intercept mandates to internet communications despite security experts testifying that the expansion would create exactly the vulnerabilities that Salt Typhoon later exploited. The warnings were ignored. The vulnerabilities materialized on schedule.
The Path Forward Is Narrow But Not Closed
Pessimism is the easy read of a week like this. The honest read is more complicated. Each of these stories involves disclosure, which is the precondition for reform. CBP's admission came through FOIA - which means FOIA is working, which means investigative journalism is working. The Meta glasses story broke because workers in Nairobi were willing to talk to Swedish journalists despite threat of dismissal - whistleblower courage still functions.
The FBI wiretap breach disclosure, however vague, came within a week of the incident. The Salt Typhoon breach went undetected for months and ultimately came to light through the hackers' own operational mistakes. Faster disclosure, however uncomfortable, is progress on the institutional-accountability axis.
And the Leakbase takedown demonstrates that when law enforcement operates within its proper remit - with warrants, with judicial authorization, with international coordination - it can be extraordinarily effective against actual criminal infrastructure. The same tools used for overreach can be used correctly when the oversight structures are functioning.
What does functional oversight look like in 2026? It looks like Congress passing the Fourth Amendment Is Not For Sale Act, which would close the data broker loophole by requiring law enforcement to get a warrant before purchasing commercial location data. The bill was introduced in the Senate by Ron Wyden in 2021 and has not passed, but it has gained cosponsors with each new disclosure. CBP's admission this week is the kind of documented evidence that moves bills from committee to floor.
It looks like Apple continuing to develop and publicize Lockdown Mode, pushing the security/convenience tradeoff toward security for the highest-risk users while simultaneously improving baseline security for everyone. The iPhone-hacking toolkit story is a reminder that this arms race is permanent, not solvable - but Apple's approach of aggressively shrinking attack surface is more effective than pretending the threat doesn't exist.
It looks like the European regulators who have been suing IAB Europe over RTB data collection practices continuing that work, and US states with their own privacy laws - California, Virginia, Colorado, Connecticut - using their own enforcement mechanisms to pressure ad tech companies that operate in their jurisdictions.
And it looks like Swedish investigative journalists and 404 Media reporters and FOIA requesters doing exactly what they're doing: treating "we disclosed this in our terms of service" as the beginning of an accountability conversation rather than the end of one.
Sources: WIRED Security Roundup (March 7, 2026), 404 Media FOIA reporting on CBP Privacy Threshold Analysis, Svenska Dagbladet and Goteborgs-Posten investigation into Meta Sama labeling workers, CNN reporting on FBI wiretap network incident, WIRED / Andy Greenberg reporting on US government iPhone toolkit proliferation, The Record / Europol on Leakbase takedown.