The Sovereignty Crisis: Firefox Tor De-anonymized, Apple Handed Your Signal Messages to Cops, and Google Says 75% of Its Code Is AI
A Firefox IndexedDB bug links every Tor identity you have ever used. Apple cached deleted Signal messages in a notification database the FBI learned to read. Google now generates three-quarters of its own code with AI. George Hotz asks whether you really want the US to "win" AI. AI coding tools are rewriting entire functions when all you asked for was a one-line fix. The thread connecting all of it: who actually controls the software you depend on?
The infrastructure of trust is cracking. Photo: Unsplash
I. The Week Software Sovereignty Broke
Three stories hit on April 22, 2026. Taken individually, they read as two serious security vulnerabilities and one impressive corporate milestone. Taken together, they describe a single, coherent problem: the people who build the software you rely on do not fully understand what it does, and the institutions that should protect you from its failures are instead exploiting them.
Fingerprint.com published research showing that a bug in Firefox's IndexedDB implementation creates a stable, persistent identifier that links every Tor Browser profile a user has ever created on the same machine. Apple released iOS 26.4.2 to fix a bug where notification content, including messages from Signal that users had deliberately deleted, was cached in a database that law enforcement could extract with forensic tools. And Google CEO Sundar Pichai announced at Cloud Next that 75% of all new code at Google is now AI-generated, up from 50% last fall.
These are not the same problem. But they rhyme. Firefox's Tor de-anonymization is a browser architecture failure that survived for years because nobody looked. Apple's notification caching is an OS design choice that became a surveillance tool because nobody asked what happens when "deleted" does not actually mean deleted. Google's 75% AI code figure is a corporate efficiency milestone that raises the question: who is responsible when AI-generated code contains the same kind of bugs that just de-anonymized Tor users and leaked encrypted messages to the FBI?
The answer, increasingly, is nobody. And that is the sovereignty crisis.
II. The Firefox Tor Bug: Your Identities Were Never Separate
Every identity you thought was isolated, linked by a single database. Photo: Unsplash
The research from Fingerprint.com, published on April 22, is devastating in its simplicity. Firefox, and by extension the Tor Browser, which is built on Firefox, uses IndexedDB to store data for web applications. IndexedDB databases are scoped to an origin (scheme plus host plus port). But Firefox also writes metadata about these databases, including a unique identifier called the originCacheId, into a shared internal storage directory.
That originCacheId persists across Tor Browser sessions. It persists across different Tor circuits. It persists across different identities. If you use Tor Browser to browse as "User A" in one session and "User B" in another session on the same machine, both sessions generate IndexedDB activity that writes to the same shared metadata store. An attacker who can read that store, which any script with access to the browser's profile directory can do, can determine that both identities belong to the same physical device.
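The linking primitive this enables is simple enough to sketch. The snippet below is a hypothetical illustration only: the file name `storage-metadata.json` and the on-disk layout are assumptions made for the sketch, not Firefox's actual storage format. What matters is the shape of the attack: two profiles, one shared stable identifier.

```python
# Hypothetical illustration of the cross-profile linking described above.
# The metadata file name and structure are assumptions, not Firefox's
# real on-disk layout.
import json
from pathlib import Path

def read_cache_id(profile_dir: str) -> str:
    """Read the stable identifier that IndexedDB activity leaves behind
    in a profile's storage metadata (hypothetical file name)."""
    meta = json.loads(Path(profile_dir, "storage-metadata.json").read_text())
    return meta["originCacheId"]

def same_machine(profile_a: str, profile_b: str) -> bool:
    # If both profiles carry the same stable identifier, anything that
    # can read the profile directories can link the two identities.
    return read_cache_id(profile_a) == read_cache_id(profile_b)
```

The point of the sketch is that no network-level protection helps here: the correlation happens entirely on disk, below Tor's circuit isolation.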
This is not a theoretical attack. Fingerprint.com demonstrated a working proof of concept that extracts the stable identifier and correlates Tor sessions. The bug earned 729 upvotes on Hacker News in under 18 hours, making it one of the most-discussed security stories of the year so far. The implications are severe: journalists operating under authoritarian regimes, whistleblowers, activists, and anyone else who depends on Tor's identity isolation may have been de-anonymized by this bug for as long as it existed.
Key detail: The Firefox IndexedDB identifier is not a cookie or a tracking pixel that Tor Browser's existing protections were designed to block. It lives in the browser's own internal storage layer, below the reach of privacy extensions, first-party isolation, or Tor's circuit separation. You cannot clear it by deleting cookies. You cannot avoid it by using Tor's "New Identity" feature. It persists until you delete the entire browser profile.
The Mozilla team has been quick to respond, and a fix is in development. But the deeper problem remains: Tor Browser inherits Firefox's architecture, and Firefox was never designed with the adversary model that Tor users require. Firefox is a general-purpose browser that happens to support privacy extensions. Tor Browser is a privacy-critical tool built on top of a general-purpose browser. When the foundation has bugs that violate the upper layer's security model, the upper layer's users are exposed in ways they cannot detect and cannot mitigate.
This is the browser sovereignty problem. You depend on a codebase you do not control, maintained by developers who are optimizing for a different threat model than yours. When their architecture fails your security, you find out when a researcher publishes a paper about it, not when the bug is introduced. By then, the damage is already done.
III. Apple's Notification Cache: "Deleted" Was Never Deleted
Your messages said they self-destructed. Your phone disagreed. Photo: Unsplash
The Apple story is, if anything, worse in its practical consequences. On April 22, Apple released iOS 26.4.2 with a security fix described in its advisory as addressing an issue where "notifications marked for deletion could be unexpectedly retained on the device." The advisory language is dry. The reality is not.
Earlier in April, 404 Media reported that the FBI had successfully extracted deleted Signal messages from a suspect's iPhone using forensic tools. Signal messages that users had set to auto-delete, or that they had manually deleted from the app, were still present in iOS's notification database. When a Signal message arrives, iOS displays it as a notification. The content of that notification, including the full text of the message, gets written to a SQLite database called the notification storage database. When the user deletes the message in Signal, the notification that displayed it is supposed to be deleted too. But due to the bug Apple just fixed, the notification content was retained for up to a month.
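The forensic pattern is depressingly ordinary SQL. The sketch below is illustrative, not a reproduction of the real iOS notification store: the table and column names are assumptions, since Apple does not document the schema. It shows why the cache defeats end-to-end encryption: the query never touches Signal at all.

```python
# A minimal sketch of the extraction pattern described above. The table
# and column names ("notifications", "bundle_id", "body") are assumptions;
# the real iOS notification store's schema is undocumented.
import sqlite3

def retained_notifications(db_path: str, bundle_id: str) -> list[str]:
    """Return notification bodies still present for a given app,
    regardless of whether the user deleted the underlying message."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT body FROM notifications WHERE bundle_id = ?",
        (bundle_id,),
    ).fetchall()
    conn.close()
    return [body for (body,) in rows]
```

A forensic tool with a filesystem image runs something equivalent to this and reads message content in plaintext, no decryption required.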
Law enforcement with access to forensic extraction tools like Cellebrite or Magnet AXIOM could read these retained notifications. The FBI did exactly that. Signal president Meredith Whittaker responded publicly: "Notifications for deleted messages shouldn't remain in any OS notification database."
Notifications for deleted messages shouldn't remain in any OS notification database.
Meredith Whittaker, Signal president, on Bluesky
She is right, and the point extends far beyond Signal. Any messaging app that uses iOS notifications, including WhatsApp, Telegram, and Apple's own iMessage, was potentially affected. The notification database became a surveillance goldmine that existed entirely outside the encrypted messaging layer. End-to-end encryption protects messages in transit. It does not protect messages that the operating system has already read, cached, and stored in a plaintext database that the user cannot access and did not know existed.
Apple backported the fix to iOS 18, which means the company recognized this was serious enough to patch older software. That is commendable. But it also means that for an unknown period of time, every iPhone running iOS 18 or later was silently retaining notification content that users believed they had deleted. The fix comes after the vulnerability was exploited by law enforcement, not before. The FBI discovered the capability and used it. Apple found out from a media report and then fixed the bug.
This is the OS sovereignty problem. You chose Signal because you wanted your messages to disappear. Your phone's operating system made a different choice on your behalf, without telling you, and stored the evidence in a database you could not see or clear. The entity that violated your expectation of privacy was not the messaging app you selected, but the platform that the messaging app runs on. You trusted the app. You had no choice but to trust the OS. The OS betrayed that trust silently.
IV. Google's 75% AI Code: Who Writes the Software You Run?
Three quarters of new Google code: written by machines, approved by engineers who may not understand it. Photo: Unsplash
At Google Cloud Next 2026, Sundar Pichai announced that 75% of all new code at Google is now AI-generated and approved by engineers, up from 50% last fall. This is a staggering number. Google is not a startup experimenting with Copilot. Google is one of the largest software organizations on Earth, with billions of users and systems that underpin critical internet infrastructure. If three-quarters of the new code flowing into those systems is generated by AI models, the implications for software quality, security, and accountability are immense.
Pichai framed this as a productivity milestone. Engineers are "orchestrating fully autonomous digital task forces," he said. A complex code migration completed by agents and engineers working together was "six times faster than was possible a year ago with engineers alone." These are real efficiency gains. Nobody disputes that AI coding tools can accelerate certain categories of work.
But consider what "AI-generated and approved by engineers" actually means in practice. An engineer reviews AI-generated code. How carefully? Google's codebase is enormous. The volume of AI-generated code is enormous. The ratio of AI output to human review capacity is shifting in the wrong direction. Code review was already a bottleneck before AI started writing three-quarters of the changes. The math is simple: if AI produces more code than humans can deeply review, then some fraction of that code ships without thorough human understanding.
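The squeeze can be made concrete with a back-of-envelope calculation. Every number below is an illustrative assumption, not a Google figure; the point is the structure of the gap, not its exact size.

```python
# Back-of-envelope illustration of the review-capacity squeeze.
# All numbers are illustrative assumptions, not Google figures.
ai_share = 0.75              # fraction of new code that is AI-generated
total_loc_per_day = 100_000  # new lines landing per day (assumed)
review_capacity = 40_000     # lines an org can deeply review per day (assumed)

ai_loc = total_loc_per_day * ai_share
unreviewed = max(0, total_loc_per_day - review_capacity)
print(f"{unreviewed} lines/day ship without deep review "
      f"({unreviewed / total_loc_per_day:.0%} of output)")
```

With these assumed numbers, more than half of each day's output lands without a deep read. Doubling generation speed widens the gap; doubling review capacity is much harder.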
Now connect this to the Firefox and Apple stories. The Firefox IndexedDB bug existed because the browser's internal storage architecture had a side effect that nobody anticipated. Apple's notification caching existed because the OS stored notification content in a database that nobody thought to audit from a privacy perspective. Both bugs were subtle, systemic, and hard to find. They are exactly the kind of bugs that AI-generated code could introduce, because AI models optimize for functional correctness, not for the unintended consequences of architectural choices.
If 75% of Google's new code is AI-generated, how many IndexedDB-style bugs is it introducing? How many notification-caching-style privacy violations? Nobody knows. The existing testing infrastructure, which already failed to catch these bugs when humans wrote the code, is now reviewing AI output at even higher volume with even less human scrutiny per line. The result is not necessarily worse code, but it is code that fewer people understand deeply, and that is a different and possibly more dangerous problem.
This is the AI sovereignty problem. The software you run is increasingly written by systems you cannot audit, reviewed by humans who cannot keep up, and deployed into environments where the consequences of subtle bugs are catastrophic. When the next Firefox-style privacy bug emerges from AI-generated code, who is responsible? The model that wrote it? The engineer who approved it? The company that deployed it? The framework that generated the prompt? Accountability dissolves across a chain of automated systems, and the user is left holding the consequences.
V. Hotz's Question: Do You Want the US to "Win" AI?
Winning AI is not the same as winning with AI. Photo: Unsplash
George Hotz, the hacker known for jailbreaking the PS3, founding comma.ai, and building tinygrad, published a blog post on April 23 that cuts through the corporate narrative around AI competition with unusual directness. The post is titled "Do you really want the US to 'win' AI?" and it is worth reading in full.
Hotz's argument is not anti-AI. He has spent his career building AI systems. His argument is about power. When politicians and tech executives talk about the US "winning" AI, they mean American companies dominating the global AI stack. Hotz asks a different question: what does that victory look like for a normal person?
As an American, is this an investment into helping you and improving your life, or figuring out how to take your job and further extract from you? I think most Americans have been watching tech companies for the last 10 years and understand which one it is. They aren't going to get better with more power, they are going to get worse.
George Hotz, "Do you really want the US to 'win' AI?"
Hotz also takes aim at Anthropic and the effective altruism movement's tendency toward what he calls "fear-based marketing campaigns." He points out that the same people who launched the "GPT-2 is too dangerous to release" campaign in 2019, when they were at OpenAI, are now running the "AI is existentially dangerous" narrative at Anthropic. "It's literally the exact same people doing the same exact shit," he writes. His prescription is not regulation or restriction, but open distribution: "The good world is where everyone has AI, and not as a revokable privilege through an API, but through hard possession."
The connection to the sovereignty crisis is direct. When Hotz says AI should be "hard possession," he is talking about the same kind of sovereignty that Firefox users lost when IndexedDB linked their identities, and that Signal users lost when Apple cached their deleted messages. You have sovereignty over software when you can inspect it, modify it, and run it on hardware you control. You lose sovereignty when the software you depend on is controlled by a remote API, a closed operating system, or a browser whose internal architecture you cannot audit.
Hotz points to DeepSeek, which releases its models openly, and contrasts them with Anthropic, which has never open-sourced any LLM. The contrast is deliberate. Open models can be audited, forked, and run locally. Closed models cannot. When the closed model generates a bug that leaks your data, you cannot fix it. When the open model does, at least you have the option.
This is not a naive argument that open source automatically equals secure. Firefox is open source, and the IndexedDB bug still existed. The point is about power and recourse. When open source software fails, the community can find the bug, fix it, and verify the fix. When closed software fails, you wait for the vendor to acknowledge the problem, ship a patch, and hope the patch does not introduce new problems you also cannot audit. The Firefox bug was found by an external researcher. Apple's bug was revealed by a media report about FBI exploitation. In both cases, the users who were harmed had no way to detect or fix the problem themselves.
VI. The Over-Editing Problem: AI Does Not Know When to Stop
AI fixed your bug. It also rewrote your entire function. Photo: Unsplash
The same day as the Firefox and Apple stories, a different kind of AI software failure was climbing the Hacker News front page. A research post titled "Over-Editing" by nreHieW documented a systematic problem with AI coding models: they rewrite code that did not need rewriting.
The research is methodical. The author programmatically introduced single-line bugs into 400 problems from BigCodeBench, things like flipping a comparison operator or changing a boolean value. The ground truth fix is always one token. Then they asked frontier models to fix the bugs and measured how much code the models changed beyond the minimal fix.
The results are instructive. GPT-5.4 with high reasoning effort, when asked to fix a simple range(len(x) - 1) off-by-one error, rewrote the entire function. It added explicit None checks, introduced numpy array conversions, added finite-value masking, changed function signatures, and replaced plotting logic. The output passed all tests. It was functionally correct. And it bore almost no resemblance to the original code.
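The benchmark's setup is easy to reconstruct as a toy example. The function below is illustrative, not taken from the paper: the injected bug is a one-token off-by-one, and the ground-truth fix changes exactly one token.

```python
# Toy reconstruction of the benchmark setup (illustrative; not a problem
# from the actual BigCodeBench dataset).

def pairwise_diffs(xs):
    # Injected bug: range(len(xs)) overruns by one when reading xs[i + 1].
    return [xs[i + 1] - xs[i] for i in range(len(xs))]

def pairwise_diffs_fixed(xs):
    # Ground-truth fix: one token, len(xs) -> len(xs) - 1. Nothing else
    # needed to change. An over-editing model instead rewrites the whole
    # function: None checks, numpy conversion, a new signature.
    return [xs[i + 1] - xs[i] for i in range(len(xs) - 1)]
```

The minimal fix is trivially reviewable: one token moved, diff of one line. The over-edited version passes the same tests but forces the reviewer to re-verify an entire function that was never in question.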
The over-editing problem in practice: You ask the model to fix a single off-by-one error. The model rewrites the function, adds input validation you did not request, renames variables, introduces helper functions, and changes the control flow. Tests pass. Code review gets harder because nothing is recognizable. And the model introduced cognitive complexity that was not there before, making future maintenance more difficult. The bug is fixed, but the codebase is worse.
This connects to Google's 75% AI code figure in a specific way. Over-editing is invisible to test suites. A model that over-edits produces functionally correct code that degrades codebase quality. When 75% of new code is AI-generated, the accumulation of over-edits is a compounding problem. Each unnecessary rewrite makes the codebase slightly harder for humans to understand. Each additional layer of unsolicited complexity makes the next bug slightly harder to find. The tests pass, the features ship, and the software slowly becomes illegible to the people who are supposed to maintain it.
The research introduces two metrics for measuring over-editing. Token-level Levenshtein distance measures how far the model's output diverges from the original code. Added Cognitive Complexity measures how much harder the model's output is to understand compared to the original. Both metrics show that frontier models systematically over-edit, even when explicitly instructed to make minimal changes.
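The first metric is straightforward to implement. The sketch below computes token-level Levenshtein distance with naive whitespace tokenization, which is an assumption on my part; the paper may well use a language-aware tokenizer.

```python
# Token-level Levenshtein distance between original code and a model's
# output. Whitespace tokenization is a simplifying assumption; a real
# implementation would likely use a language-aware tokenizer.

def token_levenshtein(original: str, edited: str) -> int:
    a, b = original.split(), edited.split()
    # Classic dynamic-programming edit distance, computed row by row
    # over tokens instead of characters.
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        curr = [i]
        for j, tb in enumerate(b, 1):
            cost = 0 if ta == tb else 1
            curr.append(min(prev[j] + 1,        # delete token from a
                            curr[j - 1] + 1,    # insert token from b
                            prev[j - 1] + cost  # substitute (or match)
                            ))
        prev = curr
    return prev[-1]
```

A minimal one-token fix scores 1 on this metric; a wholesale rewrite of the same function scores in the hundreds, which is exactly the gap the research measures.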
This is the quality sovereignty problem. You accepted AI coding assistance to be more productive. The AI made you more productive, but it also made your codebase less understandable, and it did so in ways that no test can detect. You now have more code that works, and less code that anyone understands. When the next subtle bug emerges, the people looking for it will be navigating a codebase that was partially written by machines that rewrote things unnecessarily. The sovereignty you lost was not over privacy or security, but over comprehension. You can still read the code. You just might not recognize it.
VII. Zed's Parallel Agents: The Tool That Could Go Either Way
Parallel agents in Zed: powerful, and exactly the kind of tool that amplifies over-editing at scale. Photo: Unsplash
On April 22, the code editor Zed launched Parallel Agents, a feature that lets developers run multiple AI agents simultaneously in the same window. Each thread can use a different model, access different repositories, and work on different tasks. The Threads Sidebar provides orchestration, letting developers monitor and control agents as they work.
Zed's launch post is thoughtful about the relationship between AI and craftsmanship. Co-founder Nathan Sobo coined the term "agentic engineering" to describe "combining human craftsmanship with AI tools to build better software." The post explicitly positions Zed's approach between the extremes of "fully giving into the vibes" and disabling all AI features entirely.
But the timing is telling. Parallel Agents launched on the same day that research shows AI models systematically over-edit. The same day that Firefox's Tor de-anonymization bug reveals the consequences of architectural complexity that nobody fully understood. The same day that Apple's notification caching bug shows what happens when a platform layer silently violates the security properties of the application layer above it.
Parallel agents are a force multiplier. They can make a skilled developer dramatically more productive. They can also make an over-editing model dramatically more destructive, because the over-editing now happens across multiple threads at once. The research on over-editing shows that a single AI agent making unnecessary changes is already a problem. Multiple AI agents making unnecessary changes in parallel, across different parts of a codebase, is a compounding problem that grows with every agent added.
Zed's approach, with its emphasis on human oversight and per-thread isolation, is better than most. But the fundamental tension remains: the tools that make developers fastest are the same tools that make it easiest to ship code that nobody deeply understands. The sovereignty question is not whether to use these tools, but how to use them without losing the ability to comprehend what they produce.
VIII. The Tailscale Founder's New Cloud: Sovereignty by Architecture
Building a cloud for people who like computers. Photo: Unsplash
One more story from this week deserves attention in the sovereignty context. David Crawshaw, co-founder of Tailscale, announced that he is building a new cloud computing company called exe.dev. His blog post explaining why is titled simply "I am building a cloud," and it is the most cogent critique of current cloud infrastructure published this year.
Crawshaw's thesis is that current cloud abstractions are the wrong shape. Virtual machines are sold as fixed CPU/memory bundles when users want to buy raw compute and run as many VMs as they like. Remote block storage made sense for hard drives but imposes a 10x IOPS penalty on SSDs compared to local storage. Egress pricing is 10x what you pay racking a server in a normal data center. And Kubernetes, which exists to make clouds portable and usable, is "a product attempting to solve an impossible problem: make clouds portable and usable. It cannot be done."
I like computers. I always have. It is really fun getting computers to do things. So it is no small thing for me when I admit: I do not like the cloud today.
David Crawshaw, "I am building a cloud"
Crawshaw connects this to AI not as a buzzword but as a driver of demand. "Agents, by making it easiest to write code, means there will be a lot more software. Economists would call this an instance of Jevons paradox. Each of us will write more programs, for fun and for work. We need private places to run them."
This is sovereignty by architecture. Crawshaw is not arguing that people should self-host everything. He is arguing that the abstractions of the current cloud were designed for the convenience of cloud vendors, not for the capabilities of the computers underneath, and that the result is infrastructure that constrains users instead of enabling them. His new company aims to fix that by building abstractions that match the shape of actual computers rather than the shape of vendor billing models.
The parallel to the Firefox, Apple, and Google stories is exact. Firefox's IndexedDB bug exists because the browser's storage architecture was not designed with Tor's threat model in mind. Apple's notification caching exists because iOS's notification system was not designed with Signal's security model in mind. Google's 75% AI code exists because the development process was optimized for velocity, not for comprehension. In each case, the architecture serves the needs of the builder, not the needs of the user. The sovereignty crisis is what happens when that gap accumulates over years and across every layer of the stack.
IX. What Sovereignty Actually Requires
Software sovereignty is not self-hosting. It is not running Linux on a ThinkPad. It is not choosing open source over closed source, or local models over cloud APIs. Those are tools, not outcomes.
Sovereignty is the ability to understand, audit, modify, and fix the software you depend on. It is the condition where, when a bug de-anonymizes your Tor identities or leaks your encrypted messages to law enforcement, you have the technical and legal capability to detect the problem, mitigate it, and verify the fix. It is the condition where, when an AI model rewrites your codebase unnecessarily, you can recognize what it changed, reject what should not have changed, and maintain comprehension of what remains.
Today, that capability is eroding across every layer:
- Browser layer: Firefox's IndexedDB architecture leaks identity across Tor sessions. You cannot fix it without rebuilding the browser's storage subsystem.
- OS layer: Apple's notification cache retains deleted Signal messages. You cannot clear it because you cannot access it. You cannot detect it because Apple did not document it.
- Development layer: AI models generate 75% of new code at Google and systematically over-edit beyond what was requested. You cannot fully review it because the volume exceeds human capacity.
- Infrastructure layer: Cloud abstractions constrain compute, storage, and networking in ways that serve vendor economics rather than user capability. You cannot work around them without rebuilding the entire deployment stack.
At each layer, the user's ability to understand and control their own software stack is diminishing. The software works. The tests pass. The features ship. But the user's sovereignty, their capacity to be an active participant in the software they depend on rather than a passive consumer of it, is being quietly redistributed to the platforms that build, host, and generate that software.
X. The Question Nobody Is Asking
The Firefox bug will be patched. Apple has already released the fix. Google will continue to report rising AI code percentages. Zed will ship more agent features. Crawshaw will build his cloud. Each story, in isolation, has a narrative of progress: vulnerability found and fixed, efficiency increased, tools improved.
But the aggregate direction is not progress. The aggregate direction is a systematic transfer of comprehension and control from the people who use software to the systems and institutions that produce it. Every layer that becomes opaque, every AI-generated line that ships without deep review, every platform design choice that silently overrides the user's security model, is a small, cumulative loss of sovereignty.
Hotz asked the right question. Not "who is winning the AI race?" or "how much code can AI generate?" but "is this investment into helping you and improving your life, or figuring out how to take your job and further extract from you?" The same question applies to every layer of the stack. Is your browser helping you stay anonymous, or is its internal architecture quietly linking your identities? Is your phone protecting your encrypted messages, or is it caching them in a database the FBI can read? Is your AI coding tool making you more productive, or is it making your codebase less comprehensible while you are not looking?
The sovereignty crisis in one sentence: The software you depend on is increasingly written by systems you cannot audit, running on platforms you cannot inspect, generating code you cannot fully understand, and the institutions that should protect you from the consequences are instead the ones exploiting the vulnerabilities.
The fixes for individual bugs are important. The Firefox team will patch IndexedDB. Apple has patched notification caching. Researchers will continue to find and report these problems. But individual patches do not reverse the structural trend. The structural trend is toward more code, more complexity, more layers, more opacity, and less human comprehension per line of code in production. The sovereignty crisis is not a bug. It is the current architecture's defining feature.
The alternative is what Hotz calls "hard possession" and what Crawshaw calls "computers that are fun." Software that you can run locally, audit completely, modify freely, and fix yourself. Software where the architecture serves the user's threat model, not the vendor's billing model. Software where "deleted" actually means deleted, where identities are actually separate, and where the person running the code can still read it.
That world requires specific, difficult architectural choices. It requires browsers designed for Tor's threat model from the kernel up, not retrofitted onto a general-purpose browser. It requires operating systems where notification data belongs to the application that generated it, not to a system service with its own retention policy. It requires AI coding tools that optimize for minimal edits, not maximal generation. It requires cloud infrastructure that gives you the computer, not a billing abstraction of the computer.
These choices exist. The research on over-editing shows that models can be trained to make smaller, more faithful edits. Crawshaw is building a cloud with different abstractions. Hotz is pointing to open models as the path to distributed AI capability. The Firefox team can redesign IndexedDB's storage architecture. Apple can, and now has, fixed notification retention.
But these are all individual solutions to individual problems. The sovereignty crisis is systemic. It will not be solved by patches. It will only be solved when the people who build software start optimizing for user comprehension and control, not just for feature velocity and code volume. That is a design philosophy, not a bug fix. And right now, the industry's trajectory is running the opposite direction.
The Firefox bug was found by researchers. The Apple bug was revealed by a media report about FBI exploitation. The over-editing problem was documented by an academic. The cloud's wrong abstractions are being challenged by a startup founder. In each case, the people pointing out the problem are outside the system that created it. The people inside the system are too busy shipping code, 75% of which is AI-generated, to notice what they might be losing.
That is the sovereignty crisis. Not that the bugs exist, but that the system that produces them is accelerating in the direction that makes more of them inevitable, and the people who would advocate for a different direction are outnumbered and out-resourced by the machines and institutions that benefit from the current one.