The Ear in the Room: AI Wearables Are Listening to Everything. Now Someone Wants to Jam Them.
A Harvard graduate built a $1,199 ultrasonic orb to block always-listening AI wearables. The physics say it probably won't work. But the fact that it exists - and went viral - tells you everything about where the surveillance arms race is headed.
Always-on microphones are now worn on wrists, necks, and foreheads. The question is who controls what they capture. (Unsplash)
In a conference room in San Francisco, your colleague's yellow bracelet is recording your meeting. At your kitchen table, a pendant around your partner's neck is transcribing your argument. On the subway, the person across from you has a bead stuck to their temple that's logging your one-sided conversation about your landlord.
This is not a thought experiment. These devices exist, they're cheap - some as low as $50 - and they are multiplying fast. The AI wearable boom has produced a category of always-on listening devices that are rewriting what privacy means in shared physical space. And now a Harvard graduate has built a sleek orb to fight back.
The Deveillance Spectre I, announced this week, is a portable tabletop device designed to jam nearby microphones using ultrasonic frequencies and AI-guided targeting. Wired reported that its founder, Aida Baradari, was motivated by the exact anxiety that's spreading through privacy-conscious communities worldwide: the feeling that you can no longer have a conversation without an AI somewhere capturing and analyzing it.
The Spectre I went viral. Security researchers immediately called it too good to be true. The founder called it a necessary act of resistance. Both things are right - and the tension between them reveals a deeper crisis about consent, surveillance creep, and the gaping hole in law where meaningful regulation should exist.
The Wearables Already in the Wild
AI wearables have moved from concept to consumer product in under 18 months. (Unsplash)
To understand why the Spectre I resonated so hard, you need to understand what's already out there. The category coalesced at CES 2025, when a cluster of AI wearable companies announced devices that would do something no consumer product had done before: record continuously, all day, without requiring any user input to activate.
Bee AI's Pioneer wearable - a yellow bracelet that looks like a fitness tracker - uses two built-in microphones to record conversations, then processes them through large language models to produce to-do lists, meeting summaries, and personal insights. It costs $50. Bee AI was co-founded by Maria de Lourdes Zollo and Ethan Sutin, both veterans of Squad, which was acquired by Twitter (now X). Bee AI's wearable doesn't store raw audio long-term, but it does transmit voice data to Bee AI's servers in real time for processing. Wired documented the category's rise in January 2025.
Omi makes a bead that can be worn around the neck or stuck to your forehead near your temple. It adds an electroencephalography (EEG) sensor to the mix - reportedly allowing the device to detect when you're thinking about talking to it. Omi's device is $89. Then there's the Friend pendant, which offers a more emotional pitch: an AI companion that listens to your life and responds like a best friend who never sleeps.
Samsung announced this week that it remains on track to launch its first AR smart glasses in 2026. Jay Kim, Samsung's EVP of mobile, confirmed to CNBC that the device will feature a camera "at your eye level" - meaning whoever wears them will have a recording device pointed at whoever they're talking to, from the most natural conversation angle possible. Combined with Google's Android XR platform, Samsung's glasses represent the first mass-market attempt at continuous first-person recording from a major hardware company. The Verge confirmed Samsung's 2026 launch timeline this week.
The cumulative effect of these products is a fundamental shift in the social contract of conversation. Voice assistants like Alexa and Siri required wake words. These devices require nothing. They are always on, always listening, always uploading. The only question that remains - and it's one that legal frameworks have not answered - is whether that's legal when the person being recorded hasn't consented.
What the Spectre I Claims to Do
Aida Baradari launched Deveillance with one product: the Spectre I, a tabletop orb that uses a combination of ultrasonic frequency emitters and AI processing to detect and jam nearby microphones. The device, expected to sell for $1,199 in the second half of 2026, went viral after her announcement post on X was shared widely by privacy advocates and tech journalists alike.
Baradari, a recent Harvard graduate, built the device in response to the specific anxiety of having conversations in rooms where she couldn't be sure whether someone's AI wearable was recording her. Her pitch is simple: people should have control over what gets captured in their own conversations, regardless of what technology others are wearing.
"People should have a choice over what they want to share, especially in conversations. If we can't converse anymore without feeling scared of saying something that's potentially taken out of context or wrong, then how are we going to build human connection in this new age?" - Aida Baradari, Deveillance founder
The Spectre I's design goes further than traditional audio jammers. It claims not just to emit jamming frequencies but also to actively detect and log nearby microphones - creating a registry of recording devices in the room. The AI component, Baradari says, allows the device to adapt its jamming to different microphone types rather than broadcasting a static frequency pattern.
Ultrasonic microphone jammers are not new technology. They've been available as consumer products for years, sold on Amazon and Alibaba to journalists, lawyers, executives, and anyone else who wants to ensure their conversations stay private. These devices emit noise above 20 kHz - beyond human hearing but still within a microphone's pickup range - and the microphone's own electronics distort that signal down into the audible band, corrupting recordings with interference.
What Baradari is claiming to add on top of the established technology is AI-guided adaptation and active detection. Whether that translates into meaningfully better jamming performance than existing products - especially against the newer directional microphone arrays in modern wearables - is exactly what skeptics are questioning.
The Physics Problem (And Why It Matters)
Ultrasonic jamming works on the same physics that limits its effectiveness - spread over distance and blocked by objects. (Unsplash)
When the Spectre I announcement hit social media, the praise was immediate and passionate. So was the technical pushback. Musician and security researcher Benn Jordan, who has made detailed videos on audio surveillance and jamming technology, summarized the problem bluntly in a post that gained significant traction: "These are some pretty big promises. Unfortunately, they're kind of up against physics."
The core limitation of any ultrasonic jammer is propagation. Ultrasonic frequencies attenuate rapidly with distance: the energy spreads out spherically, and air absorbs ultrasound far more aggressively than it absorbs audible sound. A jammer powerful enough to work at range would need to be large, heavy, and draw considerable power - not the kind of portable tabletop orb Baradari is showing off. Making the device smaller means making it less powerful, which means its effective radius shrinks.
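The scale of the problem can be sketched with back-of-the-envelope numbers. The snippet below combines spherical spreading loss with atmospheric absorption to estimate how quickly a jammer's output fades with distance. The source level (110 dB at 1 m) and the absorption coefficient (0.6 dB/m, a rough room-temperature figure for frequencies around 25 kHz) are illustrative assumptions, not Spectre I specifications.

```python
import math

def jammer_spl_at_distance(source_spl_db: float, distance_m: float,
                           absorption_db_per_m: float = 0.6) -> float:
    """Estimate the sound pressure level of an ultrasonic source at a distance.

    Combines spherical spreading loss (20*log10(r), referenced to 1 m)
    with atmospheric absorption, which is far higher for ultrasound than
    for audible frequencies. The 0.6 dB/m default is an assumed
    room-temperature figure for ~25 kHz, not a measured device spec.
    """
    spreading_loss = 20 * math.log10(max(distance_m, 1e-6))
    absorption_loss = absorption_db_per_m * distance_m
    return source_spl_db - spreading_loss - absorption_loss

# A hypothetical 110 dB source (measured at 1 m) fades quickly:
for r in (1, 2, 4, 8):
    print(f"{r} m: {jammer_spl_at_distance(110, r):.1f} dB")
```

Under these assumptions the level drops by more than 20 dB between 1 m and 8 m - which is why a pocket-sized emitter struggles to blanket a whole conference room.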
There's also the directionality problem. Modern AI wearables - particularly Bee AI's Pioneer - use multiple microphones with noise-suppression algorithms designed to lock onto the speaker's voice and discard ambient sound. In theory, that same processing can treat a jamming signal as just another noise source and filter it out.
The detection claim is equally contested. Detecting nearby microphones passively requires either radio frequency scanning (looking for the RF emissions that some recording devices produce) or acoustic probing (sending a test signal and listening for the response). RF detection misses passive analog microphones entirely. Acoustic probing requires the device to make noise of its own, which creates its own complications in a quiet room. Claiming AI can solve these fundamental limitations is a bold assertion that requires proof-of-concept data - which Deveillance has not yet published.
None of this means the Spectre I is necessarily useless. Against simpler recording devices - a smartphone sitting on a table, a basic voice recorder - an ultrasonic jammer can work reasonably well in close proximity. The question is whether it performs against the new generation of AI-optimized, noise-cancelling, always-on wearables that it's specifically designed to counter. That remains unproven.
The more interesting question is not whether the Spectre I works. It's why so many people wanted it to.
The Surveillance Backdrop That Made This Inevitable
The Spectre I's viral moment did not happen in a vacuum. It landed in the middle of a week when Americans are grappling with multiple surveillance convergences that have fundamentally changed the relationship between government, corporations, and personal privacy.
US Customs and Border Protection spent years using commercial advertising data - the same targeting infrastructure that serves you banner ads - to track the physical locations of phones. Wired reported this week that CBP contracted with data brokers who aggregate location data harvested from apps across millions of devices. No warrant required. No probable cause. Just a commercial transaction, paid for with taxpayer money, that gave federal law enforcement the ability to track anyone who carries a smartphone. The CBP surveillance story had been reported in fragments before, but the full scope - the systematic, ongoing nature of it - landed with fresh force this week.
The same week, 404 Media published court records showing that Proton Mail - the encrypted email service beloved by privacy advocates - complied with a Swiss court order to hand over payment information tied to an account used by a Stop Cop City protester in Atlanta. Proton correctly notes that end-to-end encryption protected the email content. But the payment metadata - the name and billing information of the account holder - was enough to identify the person to the FBI. The story underscored that even privacy-first services operate within legal frameworks that governments can leverage.
Meanwhile, in February, Ring ran a Super Bowl commercial showing its neighborhood camera network being used to find a lost dog. The privacy community's reaction was swift and furious. Ring subsequently killed a planned partnership with Flock Safety, a license-plate-reading AI surveillance company that uses overseas gig workers to label surveillance footage. Ring's retreat happened under public pressure - but the infrastructure that enabled the partnership is still in place. The cameras still exist. The network still runs.
"People are kind of waking up to the idea that they may not have privacy at any given time." - Benn Jordan, musician and security researcher, speaking to Wired about audio surveillance
ICE, under the current administration, has built out surveillance systems targeting immigrants that span social media monitoring, location tracking, and cell tower interception. That expansion of federal surveillance capability has accelerated a broad awareness - felt across political lines, not just among civil libertarians - that the baseline expectation of private life in public or semi-public spaces has eroded dramatically.
The Spectre I is a product designed for this moment. Whether it works or not, its existence is a symptom of a genuine social crisis around ambient surveillance.
The Legal Void Where Consent Should Be
The AI Wearable Surveillance Timeline
Here is the fundamental problem: recording laws in the United States were written for a different era. Most US states are single-party consent jurisdictions, meaning you can legally record a conversation you are a participant in without notifying others. Two-party (or all-party) consent states require everyone in the conversation to agree. But the always-listening AI wearable sits in an awkward gap: the person wearing the device is "participating" in every conversation around them, even casual ambient ones, which could - under the most permissive legal readings - qualify them to record everyone they encounter without explicit consent.
Even in two-party consent states, enforcement is murky. What counts as "consent" when someone walks into a room where another person is wearing a listening device? Does the device need to announce itself? Does wearing it visibly satisfy some implied notice requirement? Nobody knows, because these questions haven't been litigated at scale yet.
Europe's GDPR offers somewhat more protection - recording someone without their knowledge and processing their voice data would generally require a legal basis under Article 6, and collecting biometric data triggers Article 9's higher threshold. But GDPR enforcement has been inconsistent and slow, and consumer AI wearable companies have so far not faced significant regulatory action in any jurisdiction.
The US Congress, meanwhile, has not passed comprehensive federal privacy legislation despite years of attempts. The American Privacy Rights Act, which came closest to passage in recent memory, died in committee. Without a federal privacy floor, states like California (via the CCPA) and a handful of others offer a patchwork of protections - and anyone recorded outside a covered state gets none of it.
The result: a two-year head start for AI wearables in a regulatory environment that has essentially decided not to decide. Companies like Bee AI, Omi, and whatever Samsung ships later this year are operating in a largely unconstrained space, writing their own data policies, making their own decisions about what gets stored and who can access it, and betting that regulators will continue to move slowly.
What Samsung's Eye-Level Camera Changes
Everything above concerns wearables you choose to wear or devices others choose to carry. Samsung's AR glasses represent something qualitatively different: a camera built into eyewear, a form factor that is quickly becoming mainstream fashion.
Meta's Ray-Ban smart glasses demonstrated that people will wear camera glasses when they look sufficiently like normal glasses. Samsung's partnership with Google on Android XR is specifically designed to make a more capable device - one with a heads-up display, phone connectivity, and a built-in camera - that still looks wearable in public. Samsung EVP Jay Kim confirmed to CNBC this week that the camera will be positioned at eye level, capturing what the wearer sees.
The second-order effect here is under-discussed. It's not just about the Samsung wearer recording what they see. It's about what happens when that footage is processed by AI systems. Meta Ray-Ban glasses were used last year - in a demonstration that went viral and sparked genuine alarm - to photograph strangers in public, run their faces through reverse image search tools, and surface their home addresses from public records within seconds. The workflow was banned by the platforms involved, but the underlying capability is still present in the hardware.
Samsung's AR glasses, once launched, will represent a much more capable version of that same device. The question is not whether bad actors will find similar uses. The question is what the acceptable use policy looks like, who enforces it, and what recourse strangers have when they're recorded without their knowledge on a public street. So far, the answers are: nothing, nobody, and none.
Deveillance's Spectre I targets audio. It does nothing for camera-equipped wearables. The arms race has multiple fronts.
The Counter-Surveillance Economy Being Born
A market is forming around counter-surveillance tech - from jammers to detector apps to signal-blocking fabrics. (Unsplash)
The Spectre I is not alone. It's one visible node in an emerging market for counter-surveillance technology aimed at AI wearables specifically. Another recent example: a hobbyist developer built an app that detects when someone nearby is wearing Meta Ray-Ban smart glasses, using Bluetooth scanning to identify the glasses' wireless signature. The app, documented by 404 Media, alerts the user with a notification - a sort of proximity warning for surveillance-enabled eyewear.
That app is simpler and arguably more practical than the Spectre I. It doesn't block recording; it just tells you when it might be happening. That's a different philosophy - transparency over jamming - and it sidesteps the physics problem entirely. Detection is fundamentally easier than prevention.
But detection only works if the device broadcasts something detectable. Many recording devices - a smartphone in a shirt pocket, a button-sized audio recorder - produce no distinctive RF signature. Detecting them requires either physical inspection or acoustic probing. The Spectre I claims to handle this; existing apps do not.
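Name-based detection of this kind ultimately reduces to string matching over whatever nearby devices advertise over Bluetooth. Here's a minimal sketch, assuming a scanner has already collected advertised names; the signature strings below are illustrative assumptions, not the actual identifiers the 404 Media app matches on.

```python
# Advertised-name fragments associated with camera/mic wearables.
# These strings are illustrative assumptions, not verified signatures.
KNOWN_WEARABLE_SIGNATURES = ("ray-ban", "meta glasses", "bee pioneer", "omi")

def flag_wearables(advertised_names):
    """Return the advertised device names that match a known wearable signature.

    A real detector (e.g. one built on a BLE scanning library) would feed
    this the names collected during a scan, and would likely match on
    manufacturer IDs rather than display names, which users can change.
    """
    return [name for name in advertised_names
            if any(sig in name.lower() for sig in KNOWN_WEARABLE_SIGNATURES)]

print(flag_wearables(["Ray-Ban Meta 9F2A", "JBL Flip 6"]))
```

The fragility is obvious: rename the device, or silence its Bluetooth advertising, and this approach sees nothing - which is exactly the gap between detecting cooperative hardware and detecting a hidden microphone.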
Faraday fabric and signal-blocking pouches - already sold for phone privacy - are finding new marketing angles around wearable AI. Some privacy consultants now recommend checking meeting rooms for wearables the same way they would sweep for bugs. The surveillance sweep, once a service reserved for corporate espionage concerns, is slowly becoming relevant to ordinary professional settings.
Amazon's acquisition of Bee AI, completed in February 2026, adds a new dimension to this market dynamic. Amazon already operates Ring's neighborhood camera network, Echo smart speakers, and a vast logistics surveillance apparatus. Adding an always-on voice AI wearable to that portfolio - one that travels with users through their entire day - gives Amazon a data stream that no fixed device can match. The privacy implications of that consolidation are significant and largely unexplored in public discourse.
Bee AI's privacy policy, like most in this space, is detailed but flexible. Data is processed by third-party AI providers. Aggregated insights may be used to improve the service. Retention periods are defined but contingent on account status. The language is careful but noncommittal. It offers users the feeling of control without the reality of it.
Where the Arms Race Ends - Or Doesn't
There is a version of this story where technology solves the problem technology created. Privacy-preserving AI wearables that process everything on-device, never upload raw audio, and give users cryptographic proof of what was and wasn't stored. Federated learning that improves AI performance without centralizing personal data. Hardware attestation that lets you verify, independently, what a wearable is doing with its microphone at any given moment.
Some of this is technically possible. The Friend pendant claims on-device processing. Omi has made claims about local storage options. But "claims" and "independently verifiable" are different things, and no AI wearable company has submitted to the kind of third-party technical audit that would let users actually trust those assertions. The privacy model for this entire device category is currently built on terms of service agreements, not cryptographic guarantees.
The Spectre I, whatever its limitations, is trying to solve a different problem: not whether companies can be trusted, but whether you can reclaim control of your immediate physical environment regardless. That's a harder problem. Physics is not a policy you can lobby against.
What's more likely than a technical solution - in the near term - is social and legal pressure. The Ring Super Bowl backlash showed that consumer sentiment can move companies quickly when privacy violations become visible enough. The CBP ad-data story and the Proton Mail case are building toward a political moment where privacy legislation becomes unavoidable. Several US states are considering laws that would require disclosure when AI wearables are recording in semi-public settings. Nothing has passed yet, but the legislative pipeline is more active than it was 18 months ago.
In the meantime, Aida Baradari is taking preorders on her orb. Bee AI is expanding its bracelet user base under Amazon's distribution muscle. Samsung is finalizing its eye-level camera glasses for a 2026 launch. ICE is signing new contracts with surveillance data brokers. And millions of conversations are being recorded, processed, and uploaded to servers in ways that participants can't audit, in a legal environment that hasn't decided whether any of this requires their consent.
The arms race is real. The outcome isn't.
Get BLACKWIRE reports first.
Breaking news, investigations, and analysis - straight to your phone.
Join @blackwirenews on Telegram

Sources: Wired (Deveillance Spectre I report, March 6, 2026); Wired (Bee AI/Omi always-listening wearables, January 2025); Wired (CBP ad data phone tracking, March 7, 2026); Wired (Proton Mail/FBI/Stop Cop City, via 404 Media); Wired (Samsung AR glasses, March 2026); The Verge (Samsung 2026 launch confirmation, March 6, 2026); 404 Media (smart glasses detection app).