

The Privacy Mirage: Perplexity AI Sued for Secretly Feeding Your Conversations to Meta and Google

A class-action lawsuit filed in San Francisco federal court alleges the AI search engine embedded 'undetectable' tracking software that transmitted user conversations to advertising giants - even when users explicitly chose Incognito mode

By PRISM Bureau • April 1, 2026 • 12 min read

The lawsuit claims every conversation you had with Perplexity's AI was silently forwarded to the world's largest advertising companies. Credit: Pexels

The promise was simple: ask anything, get answers, no strings attached. Perplexity AI built its $22 billion valuation on positioning itself as the thinking person's search engine - a clean, ad-free alternative to Google's cluttered results and ChatGPT's corporate fog. Millions of users trusted it with their most sensitive queries: tax strategies, medical concerns, financial planning, legal questions. The kind of information you whisper to a professional behind closed doors.

On Tuesday, April 1, 2026, a federal court filing in San Francisco shattered that trust. A class-action lawsuit, filed by a Utah man identified only as John Doe, accuses Perplexity AI of embedding hidden tracking software into its platform that silently transmitted user conversations to Meta Platforms and Alphabet's Google - the two largest advertising companies on Earth. The trackers allegedly activated the moment users logged into Perplexity's homepage. More damning still: they continued operating even when users explicitly selected Perplexity's "Incognito" mode, a feature whose entire purpose was supposed to be privacy protection.

The complaint names not just Perplexity but Meta and Google as defendants, accusing all three of violating federal and California state privacy and fraud laws. If certified as a class action, it could encompass every person who has ever used Perplexity AI - potentially tens of millions of users worldwide.

The Anatomy of the Alleged Betrayal


The complaint alleges that tracking tools embedded deep in Perplexity's code operated as a persistent surveillance mechanism. Credit: Pexels

According to the complaint, the mechanism worked like this: the moment a user landed on Perplexity's homepage and logged in, tracking software was silently downloaded onto their device. These weren't the standard analytics cookies that every website uses to measure traffic. The lawsuit describes them as "undetectable" trackers specifically designed to capture conversation data - the actual content of what users typed into the AI search engine and what answers they received back.

The trackers then created a backdoor channel, the complaint alleges, transmitting that conversational data directly to Meta and Google's advertising infrastructure. The purpose, according to the filing, was to enable targeted advertising and potential resale of sensitive user data to additional third parties. In practical terms: you ask Perplexity about divorce lawyers in your area, and suddenly your Instagram feed fills with family law advertisements. You research a medical condition, and Google's display network starts showing you pharmaceutical ads across the web.
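The complaint does not include code, but the behavior it describes - an event logger that forwards full query text to advertising endpoints while ignoring the privacy toggle - can be sketched abstractly. Everything below is a hypothetical illustration: the endpoint names, field names, and functions are invented for this article and are not Perplexity's actual code.

```python
# Hypothetical sketch of the behavior alleged in the complaint.
# Endpoint and field names are invented; this is NOT Perplexity's code.

AD_ENDPOINTS = [
    "https://ads.example-meta.invalid/collect",    # stand-in for a Meta pixel endpoint
    "https://ads.example-google.invalid/collect",  # stand-in for a Google tag endpoint
]

def build_tracking_events(user_id: str, query: str, incognito: bool) -> list[dict]:
    """Build one event per ad endpoint.

    The alleged defect: the incognito flag is recorded but never consulted,
    so the full conversation text is forwarded either way.
    """
    payload = {
        "uid": user_id,
        "query": query,          # full query text, not just keywords
        "incognito": incognito,  # stored, but does not gate transmission
    }
    return [{"endpoint": ep, "payload": payload} for ep in AD_ENDPOINTS]

def build_tracking_events_honest(user_id: str, query: str, incognito: bool) -> list[dict]:
    """What a privacy-respecting version would do: emit nothing in Incognito."""
    return [] if incognito else build_tracking_events(user_id, query, incognito)
```

The gap between the two functions is the entire lawsuit: a single conditional, allegedly absent, separating a cosmetic privacy toggle from a real one.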

What makes this allegation particularly explosive is the Incognito mode claim. Perplexity explicitly offers an Incognito feature, marketing it as a way for users to search privately without their queries being stored or tracked. The lawsuit alleges this was functionally a lie - the tracking software operated identically regardless of whether Incognito mode was enabled. The privacy toggle, if the allegations hold, was purely cosmetic.

The plaintiff - John Doe from Utah - claims he used Perplexity extensively for sensitive personal matters, including his family's financial planning, tax obligations, and personal investment strategies. The kind of information that, in the wrong hands, could be used for financial targeting, identity profiling, or worse. He is seeking to represent a broader class of affected users.

Jesse Dwyer, a Perplexity spokesperson, told Bloomberg: "We have not been served any lawsuit that matches this description, so we are unable to verify its existence or claims." A carefully worded non-denial that conspicuously does not address whether the tracking behavior described actually exists in Perplexity's codebase.

Meta, for its part, pointed reporters to a Facebook help page stating that it is against the company's rules for advertisers to send them sensitive information. Which raises its own question: if Meta's policies prohibit receiving sensitive data, what was Perplexity's alleged tracker sending them? Google declined to comment.

The Incognito Illusion: A Pattern Across Tech


Privacy modes across the tech industry have repeatedly been revealed as less protective than users believe. Credit: Pexels

Perplexity's alleged Incognito deception, if proven true, would join a growing list of privacy theater across the technology industry. The concept of "private browsing" has been systematically undermined for years, but the scale of the deception has escalated in the AI era.

Google itself agreed in 2023 to settle a class-action lawsuit that sought $5 billion in damages over claims that Chrome's Incognito mode still tracked users' browsing activity. That settlement, announced in December 2023 after a four-year legal battle, reinforced the principle that calling something "private" or "incognito" creates a legal expectation of actual privacy - not just the absence of local browsing history. The parallels to Perplexity's situation are striking.

But there is a crucial difference between a browser's incognito mode and an AI chatbot's privacy mode. When you browse the web in Chrome's Incognito, you're visiting public websites. The privacy expectation is about whether Google tracks that activity. When you use an AI chatbot in "private" mode, you're often sharing deeply personal information - medical symptoms, legal troubles, financial anxieties - in what feels like a confidential conversation. The betrayal of that trust, if it occurred, hits harder because the information is more intimate.

A Surfshark study published in late March 2026 found that 70% of AI chatbots collect user location data, and on average, AI chatbots collect 14 distinct types of personal data from their users. The study analyzed the data collection practices of every major AI chatbot available on iPhone, including Meta AI, Google Gemini, ChatGPT, and Perplexity. The findings painted a picture of an industry where data collection isn't a bug - it's the business model.

The Perplexity lawsuit adds a new dimension to this picture. It's not just about what data an AI company collects from you directly. It's about whether AI companies are functioning as covert data pipelines, funneling your most private thoughts to the advertising duopoly that already controls most of the internet's economic infrastructure. If Perplexity was indeed sending conversation data to Meta and Google, the AI search engine was functioning less like a search tool and more like a surveillance relay station with a friendly interface.

DuckDuckGo, the privacy-focused search engine, recently launched its own AI chatbot and has seen a significant surge in adoption. A ZDNET report from March 30 attributed the growth directly to increasing concerns about AI companies and their data practices. The demand for genuinely private AI is real and growing, which makes the Perplexity allegations all the more consequential for the broader industry.

Perplexity's Legal Siege: A Company Under Attack on Every Front


The data-sharing lawsuit is the latest in a cascade of legal challenges that now threaten Perplexity's business model from multiple directions. Credit: Pexels

The privacy lawsuit doesn't exist in a vacuum. Perplexity AI is currently fighting legal battles on at least four separate fronts, each attacking a different pillar of the company's business model. Taken together, they paint a picture of a startup that moved fast and broke things - and is now discovering that the things it broke have expensive lawyers.

The Amazon Comet Injunction

In early March 2026, Amazon won a temporary injunction blocking Perplexity's Comet AI browser from accessing Amazon's marketplace. The ruling, issued by District Judge Maxine Chesney in San Francisco, came after Amazon sued Perplexity in November 2025 over its agentic shopping feature. Comet allowed users to delegate purchases to an AI agent that would browse Amazon, compare products, and complete transactions on the user's behalf.

Amazon's argument was twofold: first, that Perplexity's automated agents were covertly accessing customer accounts while disguising automated activity as human browsing - essentially impersonating real shoppers. Second, that these AI agents posed security risks to Amazon customer data by operating within protected computer systems without authorization.

Perplexity's response was characteristically combative. The company called the lawsuit "a bald attempt" to block innovation, arguing that AI agents "don't have eyeballs to see the pervasive advertising Amazon bombards its users with." That argument - that AI agents disrupting ad-supported platforms is a feature, not a crime - may prove prophetic, but the court wasn't persuaded. The injunction stands.

On March 17, Reuters reported a brief reprieve when the court temporarily allowed Perplexity's shopping agents back on Amazon under strict conditions. But the fundamental legal question remains unresolved: can an AI act on your behalf inside a platform that explicitly forbids automated access?

The Copyright Avalanche

Simultaneously, Perplexity faces a cascade of copyright lawsuits from some of the world's most powerful media organizations. The New York Times sued in December 2025, accusing Perplexity of "illegally" copying, distributing, and displaying millions of articles to power its AI-generated answers. News Corp - parent of The Wall Street Journal and the New York Post - had filed a similar suit months earlier. In August 2025, Japanese publishers Asahi Shimbun and Nikkei also filed copyright infringement suits in Tokyo.

Reddit, too, sued Perplexity in October 2025 for allegedly bypassing its security measures to scrape user-generated content. Perplexity had actually admitted to the practice, which made the legal defense considerably more complicated.

The copyright cases strike at the heart of Perplexity's product: it generates answers by retrieving and synthesizing information from across the web. Publishers argue this is theft. Perplexity argues it's transformative use. The courts have not yet definitively ruled, but the trajectory isn't favorable for the startup. Every ruling that restricts AI companies from using copyrighted content without licensing narrows the pool of information Perplexity can draw from.


The $22 Billion Question: Can Perplexity Survive Its Own Tactics?


Perplexity's $22 billion valuation faces an existential test as legal threats mount from every direction. Credit: Pexels

Perplexity AI reached a valuation of approximately $22 billion following its Series E-6 funding round in early 2026, according to Tracxn data. The company has raised roughly $1.22 billion in total funding, with a $500 million Series E round in December 2024 being the largest single raise. Its annual recurring revenue grew from $80 million in late 2024 to an estimated $200 million by February 2026. In January 2026, the company committed $750 million to Microsoft Azure infrastructure - a bet on scaling that assumes the current business model survives.

$22B - Latest valuation
$1.22B - Total funding raised
$200M - Estimated ARR (Feb 2026)
$750M - Azure infrastructure commitment

But that business model is now under simultaneous legal assault. The copyright suits threaten the raw material - web content - that Perplexity synthesizes into answers. The Amazon injunction threatens the agentic commerce strategy that was supposed to be its next growth frontier. And the privacy lawsuit threatens the one thing no tech company can survive losing: user trust.

Consider the competitive context. Perplexity's entire pitch is that it's the smart alternative to Google Search. Cleaner, faster, more honest. If a court finds that Perplexity was secretly feeding user data to Google the entire time, the irony would be lethal. You left Google for Perplexity to escape surveillance capitalism, and your data ended up at Google anyway - just routed through an intermediary that also took your subscription fee.

The timing matters too. CEO Aravind Srinivas has been positioning Perplexity as a future public company, with the $22 billion valuation clearly intended to set the stage for an eventual IPO. Public market investors apply different standards than venture capitalists. They read lawsuits. They price in regulatory risk. A company entering the public markets with active litigation alleging that it deceived users about their privacy isn't entering from a position of strength - it's entering from a position of legal exposure that would need to be disclosed prominently in any S-1 filing.

The Perplexity privacy lawsuit also arrives at a peculiar moment for the AI industry's relationship with advertising revenue. OpenAI recently disclosed that its ads pilot is generating more than $100 million in annual recurring revenue after just six weeks. The advertising money in AI is real and growing. But the Perplexity case illustrates the tension: if AI companies want advertising revenue, they need user data. If they need user data, they have to collect it. If they collect it, they face the same privacy backlash that has dogged Google and Meta for a decade. There's no clean way to be both an advertising platform and a privacy-respecting AI tool.

The Deeper Pattern: AI as Surveillance Infrastructure


The Surfshark study found AI chatbots collect an average of 14 types of personal data - the era of 'private AI' may already be over. Credit: Pexels

Zoom out from Perplexity's specific legal problems and a systemic pattern emerges. The AI industry is building the most sophisticated surveillance infrastructure in human history, and it's doing so with the enthusiastic cooperation of the people being surveilled.

The fundamental problem is conversational. When you use a traditional search engine, you type keywords. "Best pizza near me." "Weather tomorrow." These are data points, but they're shallow ones. When you use an AI chatbot, you have a conversation. You explain your situation. You provide context. You ask follow-up questions that reveal the depth of your concern. A Google search for "headache" tells Google almost nothing. A conversation with Perplexity about "I've had persistent headaches for three weeks, they're worst in the morning, I'm 45 years old, my father died of a brain aneurysm, should I be worried?" tells the AI - and anyone listening - everything.

This is the category of information the Perplexity plaintiff alleges was being transmitted to Meta and Google. Not search terms. Conversations. The shift from keyword search to conversational AI represents an exponential increase in the intimacy of the data being generated, and the privacy frameworks governing it haven't caught up.

California's privacy laws - the California Consumer Privacy Act (CCPA) and its successor, the California Privacy Rights Act (CPRA) - are among the strongest in the United States. They require companies to disclose what personal information they collect, allow consumers to opt out of data sales, and impose penalties for violations. The Perplexity lawsuit alleges violations of these statutes specifically. But even these laws were written for an era of cookies and tracking pixels, not AI conversations. The legal infrastructure is playing catch-up with a technology that moves faster than any legislature can.

The European Union's AI Act, which entered into force in August 2024 and applies in stages, takes a more aggressive approach by classifying AI systems by risk level and imposing strict requirements on high-risk applications. But even the EU framework doesn't specifically address the scenario alleged in the Perplexity lawsuit: an AI system that functions as intended on the surface while secretly operating as a data pipeline for third-party advertisers underneath.

This matters because Perplexity won't be the last AI company accused of this kind of behavior. If the business model works - if you can charge users $20 per month for "private" AI search while simultaneously selling their conversation data to advertisers - the incentive structure guarantees other companies will try it. The Perplexity lawsuit may establish the legal framework that determines whether this dual-revenue model is possible or whether the courts will force AI companies to choose: subscriptions or surveillance, but not both.

What Happens Next: The Legal and Industry Fallout


The outcome of this lawsuit could reshape how every AI company handles user data. Credit: Pexels

The immediate next steps are procedural but consequential. Perplexity must be formally served with the lawsuit - the company's statement that it hasn't been served yet is a standard preliminary response, not a defense. Once served, Perplexity will likely file a motion to dismiss, arguing either that the tracking behavior described doesn't exist, that it was adequately disclosed in its terms of service, or that the plaintiff lacks standing to bring the suit.

If the case survives a motion to dismiss, the discovery phase will be devastating for Perplexity regardless of the ultimate outcome. Discovery would require the company to turn over internal documents about its tracking practices, communications with Meta and Google about data sharing, engineering documentation about how its Incognito mode actually functions, and financial records showing any revenue generated from user data sales. For a company preparing for a potential IPO, the forced transparency of litigation discovery is a nightmare scenario.

Class certification is the next critical milestone. If the court certifies the class to include all Perplexity users, the potential damages scale dramatically. California's privacy statutes allow for statutory damages of $100 to $750 per consumer per incident. With tens of millions of users, the math gets alarming fast. Even at the minimum statutory rate, a class of 10 million users would expose Perplexity to $1 billion in potential damages - nearly its entire fundraising history.
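The exposure arithmetic above is simple to reproduce. Using the $100-$750 statutory range the article cites, a sketch (the class size of 10 million is the article's illustrative figure, not a certified number):

```python
# Statutory damages range cited in the article, per consumer per incident.
MIN_PER_USER = 100  # statutory minimum ($)
MAX_PER_USER = 750  # statutory maximum ($)

def exposure(class_size: int) -> tuple[int, int]:
    """Return (minimum, maximum) total statutory damages for a class of this size."""
    return class_size * MIN_PER_USER, class_size * MAX_PER_USER

low, high = exposure(10_000_000)
print(f"${low:,} to ${high:,}")  # $1,000,000,000 to $7,500,000,000
```

At the statutory maximum, the same 10-million-user class would imply $7.5 billion in exposure - roughly six times everything Perplexity has ever raised.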

The naming of Meta and Google as co-defendants adds another layer of complexity. Both companies have massive legal teams and deep pockets, but they also have their own privacy-related vulnerabilities they'd prefer to keep out of court. Meta recently settled an $800 million data tracking lawsuit. Google's $5 billion Incognito settlement is still echoing through the tech industry. Neither company wants to be associated with allegations that they knowingly received stolen conversation data from an AI startup.

For the broader AI industry, the Perplexity lawsuit arrives at an inflection point. OpenAI just raised $122 billion at an $852 billion valuation, claiming 900 million weekly users and $2 billion in monthly revenue. Anthropic signed an AI safety memorandum with Australia. The industry is simultaneously reaching unprecedented scale and facing unprecedented scrutiny. The companies that survive the next five years of litigation will be the ones that figured out how to grow without treating user privacy as an acceptable casualty.

The Trust Economy: Why This Lawsuit Matters Beyond Perplexity


Every user who shared personal information with an AI chatbot is now asking the same question: who else was listening? Credit: Pexels

The most important consequence of the Perplexity lawsuit may not be legal but psychological. For two decades, the tech industry operated on a bargain: users gave up their data in exchange for free services. Google Search was free because Google sold your attention to advertisers. Facebook was free because Facebook sold your social graph to brands. The bargain was understood, if not loved.

AI chatbots were supposed to break that pattern. Perplexity charges $20 per month for its Pro tier. ChatGPT Plus costs $20 per month. Claude Pro runs $20 per month. These are paid products. Users paying for AI services have a reasonable expectation that the exchange is simple: money for service, no data extraction required. The Perplexity lawsuit, if its allegations hold, reveals that some AI companies are double-dipping - charging subscription fees while simultaneously running the same surveillance-capitalism playbook that made the free internet feel like a con.

This matters because trust is the entire foundation of the AI business model. People tell AI chatbots things they wouldn't tell their friends. They share medical fears, financial insecurities, relationship problems, career anxieties. Every one of those conversations is a trust transaction. The user is betting that the AI company will treat that information with the same confidentiality they'd expect from a doctor, a lawyer, or a therapist. If that trust collapses - if people start treating AI chatbots as hostile surveillance tools rather than helpful assistants - the industry's growth trajectory flattens overnight.

The early signals suggest the trust erosion is already underway. DuckDuckGo's privacy-first AI chatbot is experiencing its fastest growth period ever. Brave's Leo AI assistant, which processes queries on-device, is seeing increased adoption. Open-source local AI models - which run entirely on a user's own hardware and never transmit data to any server - are growing in popularity among privacy-conscious users. The market is speaking, and what it's saying is: we want AI that we can actually trust.

Perplexity's response to this lawsuit will signal whether the company understands the stakes. A combative defense that treats the allegations as a nuisance will read as an admission that user trust was never the priority. A transparent audit of its tracking practices - published openly, not buried in legal filings - would go further toward rebuilding trust than any legal victory. Whether Aravind Srinivas and his team choose the path of transparency or the path of legal combat will tell us whether Perplexity sees its users as customers to be served or data points to be monetized.

The AI industry is at a fork in the road. One path leads to a future where AI companies compete on trust, where privacy is a product feature rather than a marketing slogan, and where users can interact with AI systems without wondering who else is listening. The other path leads to a surveillance apparatus that makes Google's cookie-tracking era look quaint by comparison - one where every thought you share with an AI becomes a commodity traded in markets you'll never see.

Perplexity didn't create that fork. But its lawsuit may force the industry to choose a direction.
