
Something shifted this week. Not in any single domain - in the architecture of how truth gets made, who gets to make it, and what happens when the gatekeepers find themselves outflanked by tools they don't control. Three stories broke within days of each other. Each one, on its own, would be a footnote. Together, they form a pattern that rewrites the rules.

A 23-year-old with no math training used ChatGPT to solve an Erdos problem that had resisted expert attack for six decades - and the AI used a method no human had thought of. The United States Department of Justice intervened in an xAI lawsuit to argue that requiring AI systems to avoid discrimination is itself unconstitutional. And OpenAI's super PAC was tied to a fake news site where AI-generated "reporters" wrote AI-generated stories designed to influence AI policy.

Three different arenas. One underlying question: who gets to decide what counts as correct, fair, or true - and what happens when the tools that produce answers are owned by the same entities trying to control the questions?

The Vibe-Maths Breakthrough


Liam Price is 23 years old. He has no advanced mathematics training. On an idle Monday afternoon, he typed an Erdos problem into ChatGPT Pro, the $200/month subscription that gives access to OpenAI's most powerful models, just to see what would happen. "I didn't know what the problem was," he told Scientific American. "I was just doing Erdos problems as I do sometimes, giving them to the AI and seeing what it can come up with. And it came up with what looked like a right solution."

The problem concerned primitive sets - collections of whole numbers in which no number evenly divides any other. The primes are the simplest example. The legendary mathematician Paul Erdos attached a score to each such set, a value now called its "Erdos sum," and conjectured in the 1960s that the lowest possible score approaches exactly 1 as the numbers in the set grow toward infinity. Stanford mathematician Jared Lichtman proved the upper bound as part of his 2022 doctoral thesis. But the lower bound - proving the sum cannot go below 1 - resisted every attempt by serious mathematicians.
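For the mathematically curious, here are the standard definitions, assuming the usual formulation (the article paraphrases the problem rather than stating it):

```latex
% Standard definitions (assumed; the article does not write them out).
% A set of integers greater than 1 is primitive if no element divides another:
A \subseteq \{2, 3, 4, \dots\} \text{ is primitive} \iff
  \forall\, a, b \in A : \; a \mid b \implies a = b

% The Erdos sum of a primitive set A (Erdos proved it is finite in 1935):
f(A) = \sum_{a \in A} \frac{1}{a \log a}

% The open half of the problem, as described above: showing that f(A)
% cannot dip below 1 for the class of sets in question.
```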

"There was kind of a standard sequence of moves that everyone who worked on the problem previously started by doing. The LLM took an entirely different route." - Terence Tao, UCLA

What makes this different from the recent spate of AI-math stories is that the method itself was genuinely novel. Previous AI solutions to Erdos problems tended to use approaches that, while clever, were essentially variations on known techniques. The LLM - specifically GPT-5.4 Pro - reached for a formula well known in adjacent areas of mathematics that no mathematician had thought to apply to this class of problems. As Tao put it: "What's beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block."

💡 The Second-Order Effect

The raw output of ChatGPT's proof was, by Lichtman's account, "quite poor" - he had to sift through it to identify the real insight. But that insight - now distilled by Tao and Lichtman into a clean proof - already shows promise for other problems. "We have discovered a new way to think about large numbers and their anatomy," Tao said. The method might transfer. This is what no benchmark captures: the possibility that AI doesn't just solve problems faster but opens doors that humans couldn't see.

Price and his collaborator Kevin Barreto, a second-year undergraduate at Cambridge, had jump-started the AI-for-Erdos trend late last year by prompting a free version of ChatGPT with open problems chosen at random. An AI researcher subsequently gifted them ChatGPT Pro subscriptions to encourage their "vibe mathing" - a term that has since become semi-official shorthand for the practice of throwing AI at problems without deep domain expertise.

The phrase "vibe maths" is doing a lot of work. It papers over the real question: if a tool can produce genuine insights that domain experts missed, what exactly is the domain expertise for? The answer, so far, is that expertise still matters - but as a filter. Price's initial AI output was incomprehensible without Lichtman's trained eye to identify the signal. The pattern is emerging: AI generates, experts verify, and sometimes the verification reveals something genuinely new.

The DOJ, xAI, and the Constitutional Right to Discriminate


On April 24, 2026, the United States Department of Justice filed a complaint in intervention in a lawsuit originally brought by Elon Musk's xAI against Colorado's SB24-205, the Consumer Protections for Artificial Intelligence Act. The law, set to take effect June 30, 2026, requires developers and deployers of "high-risk" AI systems to take "reasonable care" to avoid algorithmic discrimination. The DOJ's filing argues this requirement violates the Equal Protection Clause of the Fourteenth Amendment.

The legal theory is worth reading carefully, because it is extraordinary. The DOJ does not merely argue that the law is overbroad or poorly drafted. It argues that requiring AI systems to avoid discriminatory outcomes is itself a form of unconstitutional discrimination. The complaint asserts that SB24-205 "compel[s] persons to discriminate against other persons because of race" because it treats statistical disparities as evidence of discrimination and requires developers to adjust outputs to avoid differential impact.

Read that again. The federal government's position is that telling an AI company "your system should not produce racially disparate outcomes" is equivalent to telling it "your system must discriminate against certain races." The Equal Protection Clause, originally enacted to protect formerly enslaved people from state discrimination, is being weaponized to argue that anti-discrimination rules are themselves discriminatory.

✅ What SB24-205 Requires

Developers of high-risk AI must take "reasonable care" to avoid algorithmic discrimination. Deployers must conduct impact assessments. Consumers must be notified when interacting with AI in consequential decisions.

❌ What the DOJ/xAI Argues

The law "compels" discrimination by treating statistical disparities as evidence of bias. Requiring AI to produce "balanced" outcomes forces developers to distort neutral, merit-based systems. This is unconstitutional compulsion.

The complaint quotes extensively from Executive Order 14365, issued in December 2025, which states: "United States AI companies must be free to innovate without cumbersome regulation." It positions the case as a matter of national security: "Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits." The subtext is clear: regulation slows American AI; China does not regulate; therefore, regulation is a national security threat.

There is a reasonable debate about how to regulate AI. Colorado's law is not perfect - its definitions of "high-risk" are broad, its compliance burden falls disproportionately on small companies, and its enforcement mechanism gives the Attorney General broad discretion. But the DOJ's argument does not engage with any of these nuances. It goes straight for the throat: any requirement that AI outputs consider demographic impact is unconstitutional per se.

If the Government is prohibited from discriminating on the basis of race, then surely it is also prohibited from enacting laws mandating that third parties discriminate on the basis of race. - DOJ Complaint, citing Ricci v. DeStefano

The logical extension of this argument would invalidate virtually every civil rights law that considers disparate impact. The Fair Housing Act, the employment-discrimination provisions of Title VII, the Voting Rights Act - all of these treat statistical disparities as relevant evidence. The DOJ's theory does not distinguish between "mandating quotas" and "requiring systems to be audited for discriminatory effects." Both, in this framework, are unconstitutional.

⚠️ The Stakes

If xAI and the DOJ prevail, it would establish precedent that any state law requiring AI fairness audits is unconstitutional. It would also provide a roadmap for challenging existing civil rights laws that rely on disparate-impact theory. The case is not just about AI regulation - it is about whether the Fourteenth Amendment protects people from discrimination or protects systems from being asked whether they discriminate.

The Acutus Wire: OpenAI's Super PAC and the Fake News Factory


Nathan Calvin is the vice president and general counsel of Encode, an advocacy group focused on AI safety. Last week, he received an email from a reporter named Michael Chen, writing for a publication called The Wire by Acutus. Chen was seeking comment for a story about an AI bill in Tennessee. The email looked professional enough. But something was off.

Calvin forwarded the email to Tyler Johnston, who investigated and published his findings. The investigation revealed that Michael Chen almost certainly does not exist. Neither do any of the other "reporters" listed on Acutus. The site, which launched December 29, 2025, published 94 full-length articles in under four months - on AI policy, Senate races, pharmacy reform, nuclear energy, crypto regulation, and more. It has no masthead, no named editors, and no explanation of who runs it.

Johnston ran every article through Pangram, an AI content detector with a near-zero false-positive rate. Sixty-nine percent of the 94 articles - roughly 65 of them - came back flagged as fully AI-generated; another 28 percent - about 26 - as partially AI-generated. Only three articles were classified as human-authored.

But the rabbit hole goes deeper. The site's React JavaScript bundle - visible to anyone who inspects the page source - reveals an editorial interface with fields labeled "AI Background Context" (described as "Background information for the AI to use when generating questions and writing the story") and "Question Prompts" (described as "Suggested questions for the AI interviewer to ask"). There is a "Generate Story Draft" button. A "Regenerate" button. A multi-pass AI editorial review that scores output across editorial benchmarks. The entire workflow, from topic selection to publication, can be run without a human touching a keyboard.
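To make the mechanics concrete, here is a hypothetical Python sketch of the kind of pipeline those fields imply. The quoted field names come from Johnston's findings; every function, score, and threshold below is illustrative, not recovered code:

```python
# Hypothetical reconstruction of an Acutus-style story pipeline, inferred
# from the editorial fields found in the page source. The field names are
# as reported; the functions, scoring, and threshold are illustrative.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    # Placeholder for a chat-completions call to any LLM API.
    raise NotImplementedError("wire an LLM client in here")

@dataclass
class StoryBrief:
    topic: str
    ai_background_context: str  # "Background information for the AI to use..."
    question_prompts: list[str] = field(default_factory=list)  # "...questions for the AI interviewer"

def generate_story_draft(brief: StoryBrief) -> str:
    """Stand-in for the 'Generate Story Draft' button: a single LLM call."""
    return call_llm(
        f"Write a news story on {brief.topic}.\n"
        f"Background: {brief.ai_background_context}\n"
        f"Address: {'; '.join(brief.question_prompts)}"
    )

def editorial_review(draft: str) -> float:
    """Stand-in for the multi-pass review: the model scoring its own output."""
    return float(call_llm(f"Score this draft from 0 to 1 for editorial quality:\n{draft}"))

def publish(brief: StoryBrief, threshold: float = 0.8) -> str:
    """Topic in, 'journalism' out - the 'Regenerate' button, looped."""
    draft = generate_story_draft(brief)
    while editorial_review(draft) < threshold:
        draft = generate_story_draft(brief)
    return draft  # published under a fabricated byline; no human required
```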

The Money Trail

The financial trail leads to OpenAI. The Verge reports that the site appears to be funded by an OpenAI super PAC. This is the same OpenAI that released a Privacy Filter model this week, the same OpenAI whose CEO apologized to the town of Tumbler Ridge after a school-shooting suspect used ChatGPT to describe violent scenarios, the same OpenAI that is building the most widely used AI systems on the planet.

Consider the ecosystem: an AI company funds a political action committee, which funds a fake news site, which uses the same company's AI to generate articles that advocate for policies favorable to that company. The articles look like journalism. They quote real people. They reference real legislation. They arrive in the inboxes of real advocates and legislators. But the "reporter" who wrote them is a language model, and the "editorial judgment" is a scoring algorithm.

⚠️ Why This Is Different From Automation

Newsroom use of AI tools to assist reporting is not new. The Associated Press has used AI for earnings reports since 2014. The difference is transparency and accountability. The AP's AI-generated stories are clearly labeled and produced under editorial oversight, and the organization stands behind them. Acutus presents AI output as independent journalism while hiding its funding source, its lack of human editors, and the fact that its "reporters" are chatbots.

OpenAI's Privacy Filter: The Good Cop


In the same week that OpenAI's super PAC was linked to a fake news operation, OpenAI itself released something genuinely useful: an open-weight Privacy Filter model for detecting and redacting personally identifiable information in text.

The model is small (1.5 billion parameters, 50 million active), runs locally, processes 128,000-token contexts in a single pass, and achieves 96-97% F1 on the PII-Masking-300k benchmark. It detects eight categories of PII: private person names, addresses, emails, phone numbers, URLs, dates, account numbers, and secrets like API keys and passwords. It is context-aware, distinguishing between information that should be preserved because it is public and information that should be masked because it relates to a private individual.
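A sketch of what local use might look like, assuming the model ships in a standard Hugging Face token-classification format - the model id below is a placeholder, not the actual release name:

```python
# Local PII-redaction sketch. Assumes a standard Hugging Face
# token-classification interface; the model id is a placeholder.
from transformers import pipeline

redactor = pipeline(
    "token-classification",
    model="openai/privacy-filter",  # placeholder, not the real release name
    aggregation_strategy="simple",  # merge subword tokens into whole spans
)

def redact(text: str) -> str:
    """Replace each detected PII span with its category label."""
    # Splice from the end of the string so earlier offsets stay valid.
    for s in sorted(redactor(text), key=lambda s: s["start"], reverse=True):
        text = text[: s["start"]] + f"[{s['entity_group']}]" + text[s["end"] :]
    return text

print(redact("Reach Jane Doe at jane@example.com or +1-555-0100."))
# e.g. "Reach [NAME] at [EMAIL] or [PHONE]."  (labels depend on the model)
```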

This is exactly the kind of infrastructure the AI ecosystem needs. Local PII detection that doesn't send data to a server, fine-tunable for specific domains, with a transparent model card reporting limitations and failure modes. OpenAI even identified and corrected annotation issues in the benchmark dataset, achieving a corrected F1 of 97.43%.

And yet. The same organization releasing this privacy tool is, through its political arm, funding fabricated journalism designed to shape the regulatory environment around its products. The same company that built a model to protect personal information is connected to an operation that fabricates the appearance of human reporters. The left hand builds privacy infrastructure. The right hand undermines the social trust that makes privacy meaningful. Both hands belong to the same body.

GnuPG 2.5.19: Post-Quantum Crypto Enters the Mainstream


While the AI world was busy arguing about who gets to decide what's true, the cryptography world quietly achieved a milestone that will outlast every model currently in production. On April 24, Werner Koch released GnuPG 2.5.19, which introduces Kyber (officially ML-KEM, per FIPS-203) as a post-quantum encryption algorithm into the mainline OpenPGP implementation.

This is not a research prototype. This is GnuPG - the tool that millions of people use to encrypt email, verify software downloads, and sign git commits. The 2.5 series is a release branch; the 2.4 series reaches end-of-life in two months. If you run GnuPG on a production system, you will be upgrading to a version that speaks post-quantum cryptography within weeks, whether you know it or not.

The significance is hard to overstate. For nearly 30 years, the OpenPGP ecosystem has relied on RSA and elliptic-curve algorithms that are theoretically vulnerable to quantum computers. The transition to post-quantum cryptography has been discussed in abstract terms for years - NIST finalized the ML-KEM standard in 2024, and libraries like liboqs have provided experimental support. But GnuPG is not experimental. It is infrastructure. It is the difference between "post-quantum crypto exists in a paper" and "post-quantum crypto is what your system actually does."
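To see the new primitive in action, here is a key-encapsulation round trip using liboqs's Python bindings - the algorithm identifier follows current liboqs naming, and older builds expose the same scheme as "Kyber768":

```python
# ML-KEM (Kyber) key-encapsulation round trip with liboqs-python,
# the bindings for the liboqs library mentioned above.
import oqs

ALG = "ML-KEM-768"  # per FIPS-203; older liboqs builds name this "Kyber768"

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()  # receiver publishes this

    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender derives a fresh shared secret plus a ciphertext to transmit.
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # Receiver recovers the same secret from the ciphertext alone.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver  # both ends now share a symmetric key
```

GnuPG does not expose the KEM this nakedly: following the OpenPGP post-quantum draft, it pairs ML-KEM with an elliptic-curve exchange in a composite scheme, so security does not drop below the classical baseline even if one component fails.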

🔒 Why Timing Matters

There is a concept in cryptography called "harvest now, decrypt later." Any encrypted data captured today can be stored by adversaries until a quantum computer capable of breaking current encryption becomes available. The earlier post-quantum algorithms are deployed in production, the less data is vulnerable to this attack. Every month of delay is a month of data that could be retroactively decrypted. GnuPG's move means that new encrypted communications will start defaulting to quantum-resistant algorithms in practice, not just in theory.

The Convergence: Who Controls Intelligence


These stories are not separate. They are facets of the same phenomenon: the struggle over who controls the production and distribution of intelligence.

The Production of Truth

Liam Price proved that AI can generate mathematical insights that exceed what human experts have found. This is not incremental. This is a 23-year-old without domain expertise producing a solution that Terence Tao describes as revealing "a new way to think about large numbers and their anatomy." The insight came from a machine. The verification came from humans. The result is a new mathematical technique that may have broader applications.

But the same tools that can solve Erdos problems can also generate fake journalism. Acutus didn't need a human writer. It needed a prompt, an API key, and a pipeline. The cost of producing something that looks like truth has dropped to effectively zero. The cost of verifying that it is truth has not dropped at all - and may have increased.

The Distribution of Power

The DOJ's argument in the Colorado case is not just about AI regulation. It is about whether democratic institutions have the authority to impose any constraints on AI systems at all. The Equal Protection Clause argument, if accepted, would make it constitutionally impermissible for states to require AI systems to audit themselves for discriminatory outcomes. Not just quotas. Not just disparate impact thresholds. Any requirement to check whether a system treats different groups differently.

This would, in effect, create a constitutional shield around AI systems. If you cannot require an AI to be fair, you also cannot require it to be transparent, accountable, or safe - because those requirements would also impose "cumbersome regulation" that slows innovation.

The Infrastructure of Credibility

OpenAI released a privacy filter in the same week that its super PAC was linked to a fake news site. The privacy filter is real and useful. The fake news site is real and harmful. Both are products of the same organizational ecosystem. This is not hypocrisy - it is structural. An organization that builds tools for detecting personal information also has an interest in shaping the regulatory environment around those tools. The privacy filter is the carrot. The fake news site is the stick.

GnuPG, by contrast, is the opposite model. It is infrastructure built by a small team, released freely, maintained by a community, and deployed without any corporate entity shaping the regulatory environment around it. When Koch releases a version with post-quantum crypto, it is not because a lobbyist argued that quantum-resistant encryption is good for the encryption industry. It is because the mathematics says it is necessary and the engineering says it is ready.

The Four Futures

This week's stories sketch four possible futures for the relationship between AI and truth:

🌱 The Verification Future

AI generates insights, humans verify and extend them. The Price-Lichtman-Tao model. The GnuPG model. Expertise still matters, but as a filter and amplifier, not as the sole source of novelty. Slow, careful, and produces genuine new knowledge. This is the best case.

🔴 The Capture Future

AI companies use their tools to shape the regulatory environment in their favor, while building useful infrastructure that creates dependency. The DOJ-xAI-Acutus model. Regulation is unconstitutional, criticism is fake news, and the companies that build the systems also control the narrative about whether those systems should be regulated. This is the current trajectory.

🟠 The Flood Future

The cost of producing convincing content drops to zero. Acutus is the early warning, not the final form. When anyone can spin up a news site with 94 AI-generated articles in four months, verification doesn't scale. Trust collapses. The Erdos-level insights get buried in the noise. This is the default if nothing is done.

🔮 The Infrastructure Future

Open infrastructure like GnuPG, open-weight models like the Privacy Filter, and open verification processes like mathematical proof review create a layer of trustworthy, auditable systems. This is the path of maximum resilience and minimum corporate dependency. It requires funding, maintenance, and a community that values correctness over market position. It is possible but underfunded.

What Happens Next


The Colorado case will be the first major constitutional test of AI regulation in the United States. If the DOJ prevails, it will create a precedent that extends far beyond AI - potentially undermining the legal framework for every civil rights law that considers disparate impact. If Colorado prevails, it will establish that states have the authority to require AI systems to be audited for fairness, even when the federal government objects.

The Acutus story will continue to develop. The Model Republic investigation has identified the financial trail to OpenAI's super PAC, but the full extent of the operation - how many legislators were contacted, how many stories influenced policy debates, whether the operation extends beyond Acutus to other sites - remains unknown.

The Erdos solution will be formalized and submitted for peer review. The technique the AI discovered will be examined for applications beyond primitive sets. The phrase "vibe maths" will either mature into a recognized practice or be discarded as hype. The underlying reality - that AI can now produce genuinely novel mathematical insights - will not go away.

And GnuPG 2.5.19 will be deployed. The old 2.4 series will reach end-of-life. Systems will upgrade. Post-quantum cryptography will become a default, not a research project. This is how infrastructure changes: not with a press conference, but with a release announcement on a mailing list that most people have never heard of, written by a developer that most people have never heard of, protecting data that most people will never know was at risk.

The question is not whether AI will transform the production of truth. It already has. The question is whether the institutions that verify, distribute, and govern truth will adapt fast enough to matter - or whether they will be outflanked by the tools that produce convincing falsehoods at zero marginal cost, operated by the same companies that build the tools that produce genuine insights.

The week's stories are a demonstration project. The math is real. The lawsuit is real. The fake news site is real. The post-quantum crypto is real. The privacy filter is real. The question is which of these realities we choose to build on, and who gets to choose.

Timeline

April 18, 2026 - Liam Price prompts ChatGPT Pro with the Erdos primitive-set problem; the AI produces a novel solution approach.
April 22, 2026 - The solution is posted to erdosproblems.com; Terence Tao and Jared Lichtman validate and streamline the proof.
April 24, 2026 - The DOJ files a complaint in intervention in xAI v. Weiser, arguing Colorado's AI anti-discrimination law is unconstitutional.
April 24, 2026 - GnuPG 2.5.19 is released with Kyber/ML-KEM post-quantum encryption in mainline.
April 24, 2026 - OpenAI releases its open-weight Privacy Filter model for PII detection.
April 25, 2026 - The Model Republic investigation links the Acutus Wire fake news site to OpenAI's super PAC.
April 26, 2026 - Analysis: the convergence of AI truth production, regulatory capture, and cryptographic infrastructure.
