
Perplexity AI Accused of Leaking Full Chat Transcripts to Google and Meta

A class-action lawsuit alleges Perplexity AI shares complete user prompts, including health and financial data, with Google and Meta via URL parameters, potentially violating privacy statutes. The case could set a precedent for AI-driven data-privacy regulation.

Overview

On February 28, 2026, a Utah resident filed a 140-page class-action complaint in the U.S. District Court for the Northern District of California accusing Perplexity AI, the fast-growing conversational search platform, of transmitting full user chat transcripts to Google and Meta (formerly Facebook) through analytics tools embedded on its website. The plaintiff, proceeding pseudonymously as John Doe, alleges that the practice is hidden from end users, violates multiple state and federal privacy statutes, and exposes highly sensitive health and financial information to third parties without consent.

The lawsuit claims that every time a user interacts with Perplexity’s AI engine, the entire prompt, along with an identifier tied to the user’s account email when the user is logged in, is appended to a URL query string that is then sent to Google Analytics and Meta Pixel endpoints. The complaint further contends that these data flows were not disclosed in the platform’s privacy policy, nor were users offered an opt-out mechanism.

Technical Details

According to the filing, Perplexity’s front-end JavaScript collects the following fields from each chat session:

  • prompt_text: The exact user-typed query, which can contain unstructured personal data (e.g., "What is the best treatment for liver cirrhosis?").
  • user_id: A hashed identifier linked to the user’s account email when logged in.
  • session_id: A UUID generated for each browser session.
  • timestamp: Epoch time of the request.

These values are concatenated into a query string and transmitted via HTTP GET requests to the following third-party endpoints:

https://www.google-analytics.com/collect?tid=UA-XXXXXX-Y&cid={user_id}&dp={prompt_text}&dt={timestamp}
https://www.facebook.com/tr?id=XXXXXXXXXX&ev=ChatPrompt&cd[prompt]={prompt_text}&cd[user]={user_id}

The use of GET requests means the full query is visible in server logs, proxy logs, and browser history. The complaint also alleges that Perplexity’s back-end logs the same data in plain text before forwarding it to its own analytics pipeline, creating multiple points of exposure.
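The alleged flow can be sketched as follows. This is an illustrative reconstruction based solely on the complaint’s examples; the endpoint, the `UA-XXXXXX-Y` tracking-ID placeholder, and the parameter names come from the filing, not from Perplexity’s actual source code:

```javascript
// Illustrative reconstruction of the beacon pattern the complaint describes.
// Endpoint and parameter names follow the filing's examples; the tracking ID
// is a placeholder, exactly as shown in the complaint.
function buildAnalyticsBeacon(promptText, userId, timestamp) {
  const params = new URLSearchParams({
    tid: "UA-XXXXXX-Y",    // tracking-ID placeholder from the complaint
    cid: userId,           // hashed account identifier
    dp: promptText,        // raw prompt text, URL-encoded into the query string
    dt: String(timestamp)  // epoch time of the request
  });
  // Because this is a GET request, the full prompt ends up visible in
  // server logs, proxy logs, and browser history.
  return `https://www.google-analytics.com/collect?${params}`;
}

const url = buildAnalyticsBeacon(
  "What is the best treatment for liver cirrhosis?",
  "a1b2c3",
  1745000000
);
console.log(url);
```

Even though URL encoding obscures spaces and punctuation, the prompt text remains trivially recoverable by any system that logs the request URL, which is the crux of the complaint.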

No CVE identifier is attached to the issue because it is not a vulnerability in the traditional sense; rather, it is a design-level data-handling practice that conflicts with privacy law. The pattern does, however, mirror a class of privacy-by-design failures that regulators have begun to treat as de facto security incidents.

Impact Analysis

The alleged data leakage impacts several stakeholder groups:

  • End-users: Individuals who ask Perplexity for medical, legal, or financial advice may have their sensitive details harvested by advertising and analytics giants.
  • Perplexity AI: Potential exposure to massive class-action damages, injunctive relief, and a loss of user trust.
  • Google and Meta: While the companies are merely recipients of the data, the suit alleges they knowingly accepted it, potentially implicating them under the California Invasion of Privacy Act (CIPA) and the California Consumer Privacy Act (CCPA).
  • Enterprise customers: Companies that embed Perplexity into internal tools could inadvertently violate their own compliance programs.

The severity is judged “high” because the disclosed data includes personally identifiable information (PII) combined with health and financial details, categories that trigger heightened protection under HIPAA-related state statutes and under the EU’s GDPR for any EU residents using the service.

Timeline of Events

  • 2024-Q3 - Perplexity launches a free-tier account system and integrates Google Analytics and Meta Pixel for marketing attribution.
  • 2025-01 - Internal audit by an independent security firm discovers that prompt text is sent as URL parameters to third-party endpoints.
  • 2025-06 - Perplexity updates its privacy policy to mention “aggregated usage data” but does not disclose raw prompt content.
  • 2026-02-28 - John Doe (pseudonym) files the class-action complaint in San Francisco federal court.
  • 2026-04-02 - MediaPost publishes the story, bringing the case to public attention.
  • 2026 (TBD) - Defendants have not yet filed an answer or motion to dismiss.

Mitigation/Recommendations

Organizations using Perplexity, or any conversational AI interface, should adopt a layered mitigation strategy:

  1. Data-Flow Audit: Conduct a thorough review of all outbound network calls from the front-end. Look for GET requests that embed user-generated content in query strings.
  2. Switch to POST: If analytics are required, send data in an HTTP POST body over TLS so that user content never appears in URLs, server logs, or browser history.
  3. Content Scrubbing: Strip or hash any PII from prompts before transmission. Implement a regex-based filter that redacts health, financial, or personally identifiable terms.
  4. Consent Management: Provide an explicit opt-in checkbox for users before any prompt is logged or shared with third parties. Record consent in an auditable log.
  5. Vendor Contracts: Update data-processing agreements with Google and Meta to reflect that raw prompt data is not shared unless expressly permitted.
  6. Policy Revision: Revise privacy notices to clearly disclose the exact nature of data shared with analytics providers, including examples of prompt content.
  7. Legal Review: Engage counsel to assess compliance with CCPA, CPRA, HIPAA (if applicable), and GDPR. Prepare for possible class-action exposure.
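Steps 2 and 3 above can be sketched in front-end JavaScript. The redaction patterns, the event schema, and the `analytics.example.com` endpoint are illustrative assumptions, not a production-grade PII filter:

```javascript
// Minimal sketch of mitigation steps 2 and 3: redact PII-like substrings
// from the prompt, then ship the event in a POST body rather than a URL.
// The patterns and endpoint below are placeholders for illustration.
const REDACTIONS = [
  [/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g, "[EMAIL]"], // email addresses
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD]"],                          // card-number-like digit runs
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"]                             // US SSN-like patterns
];

function scrubPrompt(text) {
  // Apply each redaction pattern in turn, replacing matches with a label.
  return REDACTIONS.reduce((out, [pattern, label]) => out.replace(pattern, label), text);
}

async function sendAnalyticsEvent(prompt, sessionId) {
  // POST keeps user content out of URLs, server logs, and browser history;
  // TLS protects it in transit.
  return fetch("https://analytics.example.com/events", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      event: "chat_prompt",
      session_id: sessionId,
      prompt: scrubPrompt(prompt) // only redacted text leaves the browser
    })
  });
}
```

A regex filter will miss free-text health or legal details ("my liver cirrhosis diagnosis" contains no matchable pattern), so it should be treated as a baseline control layered under consent management, not a substitute for it.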

Real-World Impact

For everyday users, the lawsuit underscores a growing privacy blind spot: conversational AI tools often treat user prompts as “non-PII” because they are perceived as search queries. In reality, a single prompt can contain a full medical history, credit-card numbers, or legal strategy. If leaked, such data can fuel targeted advertising, identity theft, or even blackmail.

Enterprises that have integrated Perplexity into internal knowledge bases or employee self-service portals may find themselves in breach of corporate data-handling policies. A breach could trigger mandatory breach notifications under state laws, insurance claim disputes, and potential regulatory fines.

From a market perspective, the case could accelerate the emergence of “privacy-first” AI platforms that explicitly keep prompts on-device or use homomorphic encryption to process queries without ever exposing raw text to third parties.

Expert Opinion

As a senior cybersecurity analyst, I view this lawsuit as a watershed moment for AI-driven privacy regulation. The core issue is not a technical vulnerability that can be patched; it is a systemic design choice that treats user-generated content as a benign analytics signal. In the pre-AI era, analytics pipelines rarely dealt with raw natural-language input, so the privacy implications were minimal. Today, with LLMs handling sensitive queries at scale, the old model is untenable.

Regulators are likely to lean on existing statutes, such as the CCPA, the CPRA, and the California Invasion of Privacy Act (CIPA), to argue that transmitting full prompts without consent is an unlawful interception of electronic communications. Moreover, the case may inspire new legislative language specifically targeting AI-generated data, akin to the provisions for “high-risk” AI systems in the EU’s AI Act.

From a defensive standpoint, organizations must shift from a “post-breach” mindset to a “privacy-by-design” approach for AI. That means building data minimization, consent, and auditability into the architecture from day one. Companies that fail to do so will not only face legal exposure but also risk eroding user trust, an asset that is increasingly hard to regain in the AI market.

In short, the Perplexity suit could become the catalyst that forces the entire conversational-AI ecosystem to re-examine how it handles the most intimate data users entrust to machines. The winners will be platforms that can prove they keep prompts private, while the laggards may see a wave of litigation, regulatory action, and user attrition.