When Machines Write Code and Machines Break Trust
The week privacy died on three fronts - and AI started writing its own infrastructure.
April 22, 2026 will be remembered as the week the ground shifted under three separate pillars of digital life: privacy, anonymity, and the very question of who writes the software we depend on. Within a 48-hour window, Apple confirmed a forensic surveillance bug that law enforcement had been exploiting to extract deleted Signal messages from iPhones, FingerprintJS revealed that Firefox's Tor mode leaks a stable cross-session identifier that completely defeats its anonymity purpose, and GitHub quietly admitted that its CLI tool now collects pseudonymous telemetry by default. Meanwhile, Google's CEO announced that 75% of the company's new code is now AI-generated.
These are not isolated incidents. They are symptoms of a deeper structural transformation happening in real time: the systems designed to protect human autonomy are being hollowed out from the inside, while the code that runs those systems is increasingly written by machines nobody fully understands. This is the story of one week that laid it all bare.
Section 1: Apple's Silent Compromise - The Signal Forensic Bug
On April 22, Apple released iOS 26.4.2, a security update that quietly addressed a vulnerability most iPhone users never knew existed. The bug: notifications marked for deletion in Signal (and presumably other encrypted messaging apps) were being retained in iOS's notification database, accessible to anyone with physical access to the device - including, as 404 Media first reported, the FBI.
The mechanics are deceptively simple. When a Signal message arrives on an iPhone, iOS stores it in a local notification database. When the user deletes the message in Signal, the app instructs iOS to remove the corresponding notification. But due to a bug in iOS, those "deleted" notifications were not actually purged. They remained in the database, invisible to the user, fully accessible through forensic extraction tools like Cellebrite's UFED or GrayKey.
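The soft-delete failure mode described above can be sketched in a few lines. This is a hypothetical illustration, not the real iOS schema (which is not public in this form): a notification store where "deletion" only flips a flag, so the user-facing view looks clean while the raw table still holds the message.

```python
import sqlite3

# Hypothetical notification store illustrating the retention flaw.
# Table and column names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE notifications (
        id INTEGER PRIMARY KEY,
        app TEXT,
        body TEXT,
        deleted INTEGER DEFAULT 0   -- soft-delete flag
    )
""")
conn.execute("INSERT INTO notifications (app, body) VALUES ('Signal', 'meet at 6pm')")

# The app asks the OS to delete the notification; the buggy
# implementation flips a flag instead of removing the row.
conn.execute("UPDATE notifications SET deleted = 1 WHERE app = 'Signal'")

# The user-facing view filters on the flag, so the message looks gone...
visible = conn.execute(
    "SELECT body FROM notifications WHERE deleted = 0").fetchall()

# ...but a forensic tool reading the raw table still recovers it.
recovered = conn.execute(
    "SELECT body FROM notifications WHERE deleted = 1").fetchall()
print(visible)    # []
print(recovered)  # [('meet at 6pm',)]
```

The point of the sketch is that the app's delete request succeeded from its own perspective; only an adversary reading the storage layer directly sees the difference.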
"The FBI obtained deleted Signal messages saved inside an iPhone's notification database." - 404 Media, April 2026
Apple's security advisory describes the fix succinctly: the update addresses an issue where "notifications marked for deletion could be unexpectedly retained on the device." The word "unexpectedly" is doing a lot of heavy lifting there. Law enforcement agencies had been exploiting this "unexpected" behavior for years, extracting message content from encrypted messaging apps that users believed were deleted.
This is not a minor bug. It is a fundamental architectural failure in how iOS handles third-party notification data. The operating system - the most trusted layer in the security stack - was silently undermining the encryption guarantees of the applications running on top of it. Signal's end-to-end encryption worked perfectly. The messages were encrypted in transit and at rest. But iOS was keeping a plaintext copy in a database the user could not see, could not delete, and did not know existed.
Why This Matters Beyond Signal
This bug affects every app that relies on iOS notifications for ephemeral or deleted content. That includes not just Signal but WhatsApp's disappearing messages, Telegram's secret chats, and any application that uses the notification system for time-sensitive information. The security model of iOS assumed that the OS could be trusted as a neutral intermediary between apps and users. That assumption is now demonstrably false.
The timeline raises uncomfortable questions. How long did Apple know about this bug? When did they learn that law enforcement was actively exploiting it? The fact that it took a media exposé to trigger the fix, rather than an internal security audit, suggests that Apple's notification database was never subjected to the kind of adversarial testing that security researchers apply routinely. Either that, or the fix was deprioritized because the bug served a purpose Apple was reluctant to acknowledge.
The Signal team has been aware of this class of vulnerability for years. In 2024, Moxie Marlinspike publicly discussed the challenges of building secure messaging on platforms where the operating system is an untrusted intermediary. The fundamental tension is irreducible: you cannot guarantee message deletion on a device where you do not control the storage layer.
Section 2: Firefox Tor's Fatal Flaw - A Stable Identifier Across Identities
On the same day Apple was patching its forensic vulnerability, FingerprintJS dropped a research paper that should send chills down the spine of anyone who relies on Firefox's Private Browsing with Tor for anonymity. Their finding: Firefox's Tor mode generates a stable, persistent identifier that links all of a user's Tor identities across sessions, completely defeating the purpose of Tor.
The technical details are elegant in their destructiveness. Firefox implements Tor connectivity through a proxy configuration that routes traffic through the Tor network. But the browser's internal fingerprinting surface - specifically, JavaScript APIs like navigator.hardwareConcurrency, Performance.now() timing, and WebGL renderer information - remains consistent across Tor sessions. FingerprintJS demonstrated that they could generate a stable identifier from these signals that persists across different Tor circuits, different exit nodes, and even different Firefox profiles.
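The class of attack is easy to demonstrate in miniature. The sketch below hashes a handful of hardware-level signals (the signal names and values are illustrative, not FingerprintJS's actual feature set) into one identifier; because none of the signals change when the Tor circuit does, the identifier is stable across sessions.

```python
import hashlib
import json

# Sketch of circuit-independent fingerprinting: hardware and
# implementation signals that survive circuit rotation, hashed into one
# stable identifier. Signal names and values are illustrative.
def stable_fingerprint(signals: dict) -> str:
    # Canonical-order JSON so the same signals always hash identically.
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Session 1: user browses via one Tor circuit.
session_1 = {
    "hardwareConcurrency": 8,            # navigator.hardwareConcurrency
    "webgl_renderer": "ANGLE (Apple M3)",
    "timer_resolution_us": 100,          # coarse Performance.now() granularity
}

# Session 2: new circuit, new exit node, even a new Firefox profile --
# but the underlying hardware has not changed.
session_2 = dict(session_1)

print(stable_fingerprint(session_1) == stable_fingerprint(session_2))  # True
```

Rotating circuits changes the network path, not the machine; anything derived purely from the machine links the sessions anyway.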
"We found a stable Firefox identifier linking all your private Tor identities." - FingerprintJS, April 2026
This is not a theoretical attack. FingerprintJS is a commercial fingerprinting service used by thousands of websites for fraud detection. Their blog post reads like a product demo. The implication is clear: any website using FingerprintJS's technology (and there are many) can de-anonymize Tor users running Firefox, regardless of how many times they rotate circuits or create new identities.
The vulnerability is structural. Firefox's Tor implementation was never meant to be a full anonymity solution. It is a convenience feature that routes traffic through Tor but does not implement the comprehensive anti-fingerprinting measures that the Tor Browser uses. The Tor Browser, maintained by the Tor Project, patches dozens of browser APIs to reduce fingerprinting surface. Firefox does none of this. Users who trust Firefox's "Private Browsing with Tor" mode are getting a false sense of anonymity.
The Tor Project has been warning about this for years. Their official documentation states that you should use the Tor Browser, not Firefox with Tor, for sensitive anonymity needs. But the warning is buried in documentation that most users will never read. Firefox's UI presents Tor mode as a privacy feature without any indication of its limitations, creating what security researchers call a "security theater" problem: users believe they are anonymous when they are not.
The responsible disclosure process here is also worth examining. FingerprintJS is a commercial fingerprinting company. Their business model depends on identifying users across sessions. Publishing research that demonstrates how to de-anonymize Tor users is simultaneously a public service and a marketing exercise. The dual-use nature of this disclosure is impossible to ignore: the same technique that alerts users to their vulnerability also provides a roadmap for every other fingerprinting service and surveillance actor to exploit the same flaw.
Section 3: GitHub CLI's Silent Telemetry Switch
The third privacy compromise of the week comes not from a bug but from a deliberate product decision. GitHub's CLI tool (gh), used by millions of developers daily, now collects pseudonymous telemetry by default. The change was announced in a blog post and immediately drew 417 points and 301 comments on Hacker News, almost all of them negative.
The telemetry collects hashed versions of command names, argument counts, and timing data. GitHub claims the data is pseudonymous - meaning it cannot be directly linked to a specific user. But the security community's response has been swift and unequivocal: hashed identifiers drawn from a small input space are trivially reversible, and command-line telemetry reveals far more about a developer's work than they might expect.
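Why hashing fails here is worth spelling out: gh has only a few dozen subcommands, so an observer can precompute the hash of every one and reverse any "pseudonymous" value by lookup. The command list and the use of unsalted SHA-256 below are assumptions for illustration, not GitHub's actual telemetry scheme.

```python
import hashlib

# A tiny input space makes hashing a fig leaf: precompute every possible
# hash once, then reverse any telemetry value by dictionary lookup.
# Commands and hash choice are illustrative assumptions.
KNOWN_COMMANDS = ["pr create", "pr merge", "issue list", "repo clone",
                  "workflow run", "release create"]

rainbow = {hashlib.sha256(c.encode()).hexdigest(): c for c in KNOWN_COMMANDS}

# A "pseudonymous" telemetry record...
record = hashlib.sha256(b"pr merge").hexdigest()

# ...reverses instantly.
print(rainbow[record])  # pr merge
```

This is the standard argument against unsalted hashes of low-entropy inputs: the hash hides nothing when the attacker can enumerate the inputs.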
What GitHub CLI Telemetry Reveals
- Repository names: Even hashed, the combination of command + argument count + timing creates a unique fingerprint for specific repositories
- Work patterns: When you work, how often you commit, your branching strategy
- Technology stack: Which languages, frameworks, and CI systems you use, inferable from command patterns
- Team size: Number of unique collaborators, inferable from PR and issue command patterns
- Security tooling: Whether you use Dependabot, CodeQL, or other security scanners
The telemetry is enabled by default. Users must explicitly opt out by setting an environment variable (GH_TELEMETRY=0) or modifying their configuration. This is the "dark pattern" of privacy: assume consent and make withdrawal inconvenient. GitHub, owned by Microsoft, follows the same playbook Microsoft has used for decades - from Windows telemetry to Office 365 data collection. The pattern is consistent: deploy surveillance first, apologize later if caught, and never make opt-out the default.
The developer community's response has been predictable but instructive. Within hours of the announcement, multiple guides appeared on how to disable the telemetry, and at least three alternative CLI tools for GitHub gained significant traction. The open-source principle of "trust by default, verify by choice" is being replaced by "surveil by default, opt out if you care enough to find the setting."
For enterprises, the implications are more serious. A developer working on a proprietary codebase who uses gh with telemetry enabled is potentially leaking information about internal project structures, release cadences, and development practices to Microsoft. In regulated industries, this could constitute a data governance violation. GitHub Enterprise users are not exempt - the telemetry applies to all gh installations unless explicitly disabled.
Section 4: Google's 75% Milestone - When AI Writes the Code
Against this backdrop of eroding privacy, Google CEO Sundar Pichai announced at the company's Cloud Next conference that 75% of all new code at Google is now AI-generated, up from 50% just last fall. The number is staggering, even accounting for Google's likely expansive definition of "AI-generated" (which probably includes autocomplete, boilerplate generation, and test scaffolding alongside more substantial contributions).
This is not an incremental change. It is a phase transition. When half your new code is AI-generated, you can still reasonably claim that humans are in charge. When three-quarters is AI-generated, the relationship inverts: human engineers are now reviewing and curating machine output, not the other way around. The implications for code quality, security, and institutional knowledge are profound and poorly understood.
"Google says 75 percent of all its new code is AI-generated. That's up from 50% last fall." - The Verge, April 22, 2026
Google recently created a "strike team" specifically to improve its AI models' coding capabilities, aiming to catch up to Anthropic, which as of February 2026 writes 70 to 90 percent of its code with Claude Code. The arms race is not between Google and traditional competitors anymore. It is between AI coding assistants, and the humans are becoming the bottleneck.
The security implications are where this story converges with the three privacy failures discussed above. If 75% of Google's code is AI-generated, and that code runs the infrastructure that 4 billion people depend on - Android, Chrome, Gmail, Google Cloud - then the attack surface is fundamentally different from what it was even two years ago. AI-generated code has different failure modes than human-written code. It tends to be locally correct but globally incoherent. It handles edge cases unpredictably. It introduces subtle bugs that look like features and features that behave like bugs.
The Over-Editing Problem
A new research paper from nrehiew.github.io, trending on Hacker News with 304 points, documents a phenomenon called "over-editing" - when AI models modify code beyond what was requested, introducing subtle changes that compound over time. The researchers found that AI code editors make an average of 2.3 unintended modifications per editing session, each one individually defensible but cumulatively corrosive. This is the technical debt of the AI age: not shortcuts taken by rushed humans, but subtle drift introduced by helpful machines.
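One way to see what "over-editing" means mechanically: diff the model's output against the original and count changed regions that fall outside the lines the user asked to modify. The 2.3-modifications figure comes from the paper; the detector below is our own illustration of the idea, not the authors' tooling.

```python
import difflib

# Flag edits the user never asked for: any diff hunk that touches no
# requested line counts as an unintended modification.
def edits_outside_request(original: str, edited: str,
                          requested_lines: set[int]) -> int:
    matcher = difflib.SequenceMatcher(None,
                                      original.splitlines(),
                                      edited.splitlines())
    unintended = 0
    for tag, i1, i2, _, _ in matcher.get_opcodes():
        if tag == "equal":
            continue
        # Lines of the original this hunk touches (deletions touch i1).
        touched = set(range(i1, i2)) if i2 > i1 else {i1}
        if not (touched & requested_lines):
            unintended += 1
    return unintended

original = "def add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return a - b\n"
# User asked only to rename `add` (line 0); the model also "tidied" sub.
edited = ("def add_numbers(a, b):\n    return a + b\n\n"
          "def sub(a, b):\n    return a - b  # subtract\n")

print(edits_outside_request(original, edited, requested_lines={0}))  # 1
```

Each flagged hunk is individually defensible, which is exactly why the drift is hard to catch in ordinary review.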
Consider the iOS notification bug. Apple, the company that prides itself on controlling every pixel of its software stack, failed to notice that deleted notifications were being retained in a database. If 75% of iOS code were AI-generated, would the bug have been caught faster, or would the AI have introduced it with less human oversight? The answer turns on which you fear more: the errors humans make that AI avoids, or the errors AI makes that no human reviewer is trained to catch. Both classes exist, and they do not overlap.
Section 5: The Ping-Pong Robot and the Physical-Digital Boundary
While the digital world grappled with privacy failures and AI-generated code, the physical world delivered its own shock. Researchers demonstrated a ping-pong robot that beats top-level human players. The system uses real-time trajectory prediction and a learned dynamics model that anticipates ball spin and bounce with sub-millisecond latency.
On its surface, this is a robotics milestone. Ping-pong requires reaction times and spatial reasoning that humans develop over decades of practice. A robot that can beat top players is demonstrating something close to embodied intelligence - the ability to perceive, plan, and act in a physical environment with superhuman speed.
But the deeper story is about what happens when physical and digital capabilities converge. The same real-time perception and prediction systems that make a ping-pong robot competitive are applicable to drone navigation, autonomous vehicle control, and - less comfortably - military targeting systems. The research paper is titled modestly, but the defense implications are anything but.
The dual-use nature of robotics research has always been a tension, but it is sharpening. When a robot can track a ping-pong ball traveling at 70 mph with sub-millisecond precision, it can also track a human target moving at much slower speeds. And here the dual-use risk is not speculative: the Chinese research group explicitly acknowledged military applications in their paper's conclusion.
This convergence matters because the same AI systems that write 75% of Google's code are also being used to train the perception models that power these robots. The code is eating the world, and the world is being eaten by robots that learn from code written by other code. The feedback loop is tightening.
Section 6: The OpenAI Response - Workspace Agents and the Agentic Era
OpenAI's response to the week's turbulence was to announce Workspace Agents in ChatGPT - a feature that lets AI agents operate directly within a user's workspace, executing multi-step tasks across files, applications, and services. The Hacker News discussion (110 points, 42 comments) focused on the convenience, but the security community immediately flagged the implications.
Workspace Agents are designed to do exactly what a human user would do: read files, send emails, modify documents, execute code. The difference is speed and scale. An agent can process thousands of files in minutes, make decisions across dozens of applications simultaneously, and execute multi-step workflows without human intervention. When that agent operates in a workspace that contains sensitive data - as most enterprise workspaces do - the attack surface expands dramatically.
Consider the iOS notification bug from Section 1. If an AI agent with workspace access encounters a notification database containing "deleted" messages, what does it do? It processes them, because from the agent's perspective, they exist and are accessible. The agent has no concept of "this data was supposed to be deleted." It sees data, it processes data, and the processed output becomes part of the workspace - permanently stored, indexed, and potentially shared.
The Agentic Security Paradox
The more capable AI agents become, the more they amplify existing security vulnerabilities. An iOS notification database that a human forensic analyst would need physical access to exploit is now accessible to any AI agent running on the user's device. A Firefox Tor fingerprint that requires a human analyst to interpret becomes trivially processable by an automated agent. The bugs were always there. What changes is the speed and scale at which they can be exploited.
Google's 8th-generation TPU announcement, which also dropped this week, is the hardware layer of this story. Two chips designed specifically for the "agentic era" - one optimized for inference, one for training. The chips are not just faster; they are architecturally different, designed to handle the branching, multi-step reasoning patterns that agents require. This is infrastructure built for agents, by agents (remember, 75% of the code is AI-generated). The loop closes.
The Trillium TPUs (the 8th generation is codenamed "Trillium") represent a significant architectural shift. Rather than optimizing purely for throughput, they prioritize latency and branching efficiency - exactly the characteristics needed for agentic workloads. Google is designing hardware for a world where AI agents are the primary compute consumers, not humans running applications.
Section 7: The Convergence - What This Week Really Means
Strip away the individual stories and a pattern emerges. Three privacy failures in one week, each from a different domain (mobile OS, browser, developer tools), each from a company (Apple, Mozilla, GitHub/Microsoft) that positions itself as a privacy champion. Meanwhile, the code that runs all of these systems is increasingly written by AI, and the hardware that runs that code is being redesigned to serve AI agents first and humans second.
This is not a conspiracy. It is an emergent property of technological evolution at scale. When 75% of new code is AI-generated, the attack surface shifts from "what did the human engineer intend?" to "what did the AI system produce, and who reviewed it?" When Firefox ships a Tor mode that leaks identity information, the question is not whether Mozilla is malicious (it is not) but whether any organization can maintain the expertise to secure complex software when most of the code is written by machines.
The Trust Cascade
Trust in technology is a stack: you trust the application, which trusts the OS, which trusts the hardware, which trusts the firmware. This week, every layer in that stack was compromised simultaneously:
- Application layer: Signal's encryption defeated by iOS notification retention
- Browser layer: Firefox Tor mode leaking stable identifiers
- Developer layer: GitHub CLI collecting telemetry by default
- Code layer: 75% of new code written by AI with different failure modes
- Hardware layer: TPUs redesigned for agents, not humans
When every layer is compromised, the stack collapses. You cannot compensate for an OS-level bug with application-level encryption. You cannot fix a browser-level fingerprint with better Tor configuration. You cannot audit AI-generated code with review processes designed for human-written code. The failures are orthogonal, and they compound.
The week's events also reveal a tension in how we talk about AI safety. The conversation has been dominated by existential risk - superintelligent AI that destroys humanity. But the real risks are mundane and immediate: AI-generated code with subtle bugs that expose millions of users to surveillance. Browser features that promise anonymity but deliver the opposite. Developer tools that quietly collect data about the people building the next generation of privacy tools. The gap between the AI safety discourse (focused on far-future scenarios) and the actual harms (happening right now, to real people) has never been wider.
And then there is the ping-pong robot. It is easy to dismiss as a novelty, a parlor trick for robotics conferences. But it represents something important: the point at which AI systems exceed human capability not just in abstract reasoning (chess, Go, protein folding) but in embodied, real-time, physical interaction with the world. When a robot can beat a top human at ping-pong, it can also outmaneuver a human in any physical domain where speed and precision matter. The military applications are obvious. The civilian applications - surgical robots, disaster response, industrial automation - are equally transformative. But the underlying technology is the same either way.
Section 8: What Comes Next
The fixes for this week's vulnerabilities are straightforward, even if the underlying problems are not:
- iOS notification bug: Update to iOS 26.4.2 immediately. If you use Signal or any encrypted messaging app, assume that any message you received before this update was potentially accessible to forensic extraction.
- Firefox Tor fingerprint: Stop using Firefox's Private Browsing with Tor for any activity that requires genuine anonymity. Use the Tor Browser instead. There is no workaround within Firefox itself.
- GitHub CLI telemetry: Set GH_TELEMETRY=0 in your shell configuration. If you are in a regulated industry, audit your organization's gh installations immediately.
- AI-generated code review: Establish review processes that account for AI-specific failure modes: over-editing, subtle logic drift, and the tendency to produce locally correct but globally incoherent code. Human review is not optional; it is the only line of defense.
But the structural problems run deeper. The iOS notification bug exists because Apple controls the entire stack and users have no visibility into how their data is stored. The Firefox Tor fingerprint exists because Mozilla shipped a feature that sounds like privacy without implementing the measures that would actually provide it. The GitHub telemetry exists because Microsoft prioritizes data collection over user autonomy. And the AI-generated code exists because the economics of software development have made human-written code uncompetitive for routine tasks.
None of these problems have technical fixes alone. They require institutional changes: mandatory security audits of OS-level notification databases, browser vendors being honest about the limitations of their privacy features, developer tools defaulting to privacy-preserving configurations, and AI code generation systems that are transparent about their failure modes.
The week of April 21-23, 2026, will not be remembered for any single catastrophic event. No data breaches, no zero-days actively exploited in the wild, no AI systems going rogue. Instead, it will be remembered as the week the cracks became visible - the week when three separate privacy failures, an AI coding milestone, and a robotics breakthrough all landed within 48 hours, revealing that the foundation of digital trust is far more fragile than anyone wanted to admit.
The machines are writing the code now. The machines are breaking the trust. And the machines are getting faster than we can think.
BLACKWIRE coverage of AI, cybersecurity, and digital rights. Follow for analysis that sees the second-order effects.
Sources: Hacker News (725 points, ping-pong robot; 462 points, Firefox Tor; 417 points, GitHub CLI; 409 points, Google TPU; 304 points, over-editing; 725 points, Qwen3.6-27B; 110 points, OpenAI Workspace Agents), The Verge (Google 75% AI code; iOS 26.4.2 Signal fix; Chrome Enterprise AI monitoring), 404 Media (FBI Signal message extraction), FingerprintJS (Firefox Tor identifier research), Apple Security Advisory (iOS 26.4.2), Google Blog (8th-gen TPU Trillium), OpenAI Blog (Workspace Agents in ChatGPT), nrehiew.github.io (Over-editing in AI code).