1 The knowledge class and its enemies
Writing for The Nation, Elizabeth Spiers reaches for Richard Hofstadter's Anti-Intellectualism in American Life to frame something that should have been nagging at the edges of tech criticism for quite some time. Hofstadter's great insight was that anti-intellectualism in America has historically come from below, targeting the knowledge elite from outside. What Silicon Valley has managed is something structurally different: anti-intellectualism from within the elite itself, produced by people who are themselves the beneficiaries of exactly the kind of education they're now busy dismantling.
Peter Thiel's programme pays students not to go to college. Marc Andreessen brags that he avoids introspection. The Suno CEO insists that making music isn't enjoyable, which would be news, as Spiers notes, to every musician, professional and amateur alike. The pattern isn't random eccentricity. It's the logical product of people who believe they've cornered the market on critical thinking and therefore have nothing left to learn. The result is a class that hires linguists to improve its large language models while actively sneering at the kind of person who becomes a linguist.
There's a power analysis here that Spiers makes explicit. An informed workforce is harder to control. Deep learning, in the human sense rather than the machine one, produces autonomy; autonomy produces organisation, which produces demands. The tech oligarchs' anti-intellectualism isn't merely a cultural preference: it's a class strategy, dressed up as meritocracy and sold as concern for working people.
These self-described lovers of rationalism, who love to talk about IQ and logic while dismissing emotion as weakness, have managed to produce a cognitive ecosystem so closed that it can no longer generate original thought. They've enshittified their own thinking. The model for this, per Spiers, is Curtis Yarvin, their favoured intellectual and a man whose central political theory is that California should be run like a profitable corporation by a CEO-king. That Silicon Valley's most rational minds have outsourced their political philosophy to someone who thinks feudalism was underrated says more about the intellectual health of the valley than any number of TED talks.
2 Can Sam Altman be trusted? I'm betting you know the answer
Ronan Farrow and Andrew Marantz spent over a year writing a 16,000-word New Yorker profile with a headline that doesn't mince words: can Sam Altman be trusted? John Gruber's long read of the piece is itself essential, not just as a summary but as a considered response from someone who knew Aaron Swartz well and finds the newly reported Swartz material the most significant thing in it.
That material: Aaron, in the months before his death, told friends that Altman "can never be trusted" and "is a sociopath — he would do anything." Swartz is not a casual source. He was brilliant, famously honest, and his warnings about specific people and institutions have consistently aged well. Gruber's point about Paul Graham is equally telling: Graham has spent the week since publication carefully explaining that Altman wasn't fired from Y Combinator, that nobody wanted him gone. What he hasn't said — not once, not unambiguously — is that Altman is honest, trustworthy, or a man of integrity.
The organisations building frontier AI models require leaders of extraordinary integrity precisely because the asymmetry of information is so large. Users, governments, and investors cannot independently verify what these systems are doing or whether the people running them are being straight about their capabilities and risks. The degree to which we're extending trust to OpenAI is, in that context, essentially a bet on Altman personally. The New Yorker piece makes clear that a significant number of people who've worked closely with him have concluded that bet is badly placed.
3 The post-American internet is already being built
Cory Doctorow's latest Pluralistic takes an origin story — his own dotcom-era company Opencola, where he and his co-founders once brainstormed a way to spam Google into uselessness, and then didn't do it, because they loved the web too much — and turns it into a theory of internet history. The difference between the early Tron-pilled builders who held the line against breaking the internet and the people who eventually did break it wasn't intelligence or technical capability. It was callousness. When good-faith technologists red-teamed the internet, they felt scared and wanted to protect it. When the Zuckerbergs and Musks of the world did the same exercise, they turned it into a pitch deck.
Cory is finishing a book called The Post-American Internet, framed explicitly as a geopolitical sequel to Enshittification. The argument, as he's developing it publicly, is that Trump's demolition of US soft power has created the conditions for an alternative internet that might actually be better — not because it's technically different but because the political project behind it is different. The week's tech policy news gives him significant supporting evidence.
Nick Heer's linklog piece on digital sovereignty aggregates several stories that, taken together, show this project in motion: France announcing it will migrate government computers from Windows to Linux; Schleswig-Holstein having already moved off Microsoft Exchange and Outlook; the International Criminal Court switching to openDesk; UK banks quietly beginning to explore a domestic alternative to Visa and Mastercard, prompted by the visible demonstration that US financial infrastructure is a geopolitical weapon. A Canadian ICC judge who authorised investigations into US conduct in Afghanistan now has to phone hotels ahead of time to explain why she can't pay by card. The lesson, taken seriously at an institutional level, is that reliance on American platforms is a strategic vulnerability.
Heer adds the necessary caution: if this produces domestic walls rather than greater international cooperation, it will be disappointing. I think that's right. Sovereignty as protectionism looks different from sovereignty as the conditions for a distributed alternative.
The good news is that a lot of the tentative moves toward digital sovereignty are based on open platforms and open standards. The bad news is that institutional politics has a habit of taking open systems and turning them to dust. Hopefully that won't happen this time, when we really need it not to.
4 X becomes uninhabitable
The Electronic Frontier Foundation has left X, and the reason it gives is less a statement about politics than a straightforward piece of platform analytics. The EFF used to get 50 to 100 million impressions per month on the platform. Last year, 1,500 posts earned roughly 13 million impressions for the entire year. A single post now delivers less than 3% of the views a single tweet delivered seven years ago.
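Those numbers are worth a back-of-envelope check. The posting-rate assumption below is mine: the EFF reports monthly impressions for the earlier period rather than post counts, so assume a roughly similar rate of about 125 posts a month then.

```python
# Rough check of the EFF's reach numbers. The posting-rate assumption
# (similar volume then and now) is mine, not the EFF's.
posts_last_year = 1500
impressions_last_year = 13_000_000
per_post_now = impressions_last_year / posts_last_year      # ~8,700

monthly_impressions_then = 50_000_000   # low end of the 50-100M range
posts_per_month = posts_last_year / 12  # ~125, the assumed posting rate
per_post_then = monthly_impressions_then / posts_per_month  # ~400,000

ratio = per_post_now / per_post_then
print(f"{ratio:.1%}")  # about 2.2%, consistent with "less than 3%"
```

Even taking the low end of the old impression range, a post today reaches roughly a fiftieth of what it once did.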
The EFF is not a fringe account. It is one of the most important civil liberties organisations in tech, with a long-established audience and a clear reason to be on platforms where technology and politics intersect. If its posts are reaching fewer than 3% of their previous audience, that reach suppression isn't an edge case — it's the product working as designed. For organisations whose purpose is information dissemination, that makes X not merely unwelcoming but structurally useless. Staying is a form of institutional self-harm.
The broader significance is about what kinds of organisations the platform has made untenable. It isn't just that political speech the platform's owner dislikes gets suppressed, though that happens. It's that the entire architecture of attention on X has been rebuilt around a different set of priorities — engagement over reach, monetisation over distribution, the owner's preferences over the ecosystem's health. Civil society organisations, independent journalism, and anyone whose value proposition is informational rather than algorithmic are the casualties.
What's left, largely, is a platform that works well for accounts the algorithm rewards: inflammatory, reactive, high-engagement posting that doesn't require an audience to take any particular action other than stay on X longer. That the EFF — which exists to protect digital rights, including rights that affect X's own users — finds no functional home there is not incidental. It's the platform's clearest statement about what it's for.
That vanished 97% of reach isn't going to come back. If your organisation is still on there, that's worth bearing in mind.
5 Your privacy is only as strong as your notification settings
404 Media's Joseph Cox reports that the FBI forensically extracted copies of Signal messages from a defendant's iPhone — even after the Signal app had been deleted — because incoming messages had been stored in iOS's push notification database. The mechanism is more subtle than it first appears, and understanding it matters for anyone who thinks that deleting an app erases their data.
Push notifications for end-to-end encrypted apps like Signal are received by the OS before being decrypted by the app. Signal and similar apps handle this by using a notification service extension that decrypts the content locally. The problem is that after decryption, the content gets written to the system notification database. When Signal is deleted, the app is gone — but the database entry stays. The FBI pulled the messages from there.
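The failure mode is easy to model. The sketch below is a simplified illustration, not the real iOS schema or any real forensic tooling: a system-owned store keeps its own copy of the decrypted text, so wiping the app's own storage never touches it.

```python
import os
import sqlite3
import tempfile

# Simplified model of the leak (an assumed schema, NOT the real iOS
# notification store): the OS keeps its own copy of decrypted text.
db_path = os.path.join(tempfile.mkdtemp(), "notifications.db")
system_store = sqlite3.connect(db_path)
system_store.execute("CREATE TABLE notifications (app TEXT, body TEXT)")

# The app's notification extension decrypts the push payload locally...
decrypted_body = "see you at six"
# ...then hands the plaintext to the OS so it can display the banner.
system_store.execute(
    "INSERT INTO notifications VALUES (?, ?)", ("org.signal", decrypted_body)
)
system_store.commit()

# "Deleting the app" removes the app's sandbox, but the system store is
# a separate database the app never controlled, so the record survives.
recovered = system_store.execute(
    "SELECT body FROM notifications WHERE app = ?", ("org.signal",)
).fetchall()
assert recovered == [("see you at six",)]  # plaintext still recoverable
```

The design choice that bites here is ownership: the app can delete its own data on uninstall, but it has no lifecycle hook into copies the operating system made on its behalf.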
The broader lesson here is one that keeps reasserting itself: privacy is a systems property and not just a matter of the right settings existing. The mental model most users have — that privacy is managed at the level of individual app permissions and toggles — is consistently inadequate to the actual architecture of modern operating systems. The data exists in places the permissions UI doesn't surface. The threat model includes not just the app but the OS, the notification infrastructure, and every forensic tool that can reach the file system.
For anyone who carries sensitive communications on their phone, the practical takeaway is to enable content-free notifications in Signal (Settings > Notifications > Show > No Name or Message). It's not a complete fix, but it closes this particular vector.
6 You don't own what you buy
Amazon is crippling older Kindle models from May 20. Devices released in 2012 or earlier — which already lost store access in 2022 — will from that date be unable to add any new content at all, whether bought, borrowed, or downloaded. Wendy Grossman at net.wars has the clearest framing of why this matters beyond the immediate inconvenience: functioning hardware will become electronic waste because a single company has decided that the relationship between its customers and their devices should be conditional on its continued goodwill.
The Kindle has always been the cleanest example of the ownership gap in platform economics. You don't buy a book on Kindle; you license access to a file through a device that Amazon can update, modify, or functionally degrade at will. The older models' users find themselves at the end of that contract, not because the device stopped working but because Amazon has decided it isn't worth supporting. There's a discount on newer hardware available, which is the product working as intended: the hardware was always a loss-leader for the ecosystem. Now that you've bought lots of books, Amazon has made its money back.
7 Age verification's dangerous gamble
Governments across the world are mandating online age verification, and Proton's analysis of the consequences makes a point that deserves to be made much more loudly: in the rush to protect children from harmful content, policymakers are creating concentrated stores of extremely sensitive personal data with inadequate security requirements, and the inevitable breaches will harm the children they were designed to protect.
The UK's Online Safety Act provides the test case. Discord, in compliance with the UK's age-verification requirements that took effect last July, was collecting photographs of users' government-issued IDs — passports, driving licences — through a third-party verification vendor. In September, an attacker compromised that vendor and extracted at least 70,000 of those images. The children whose IDs were in that database are now more vulnerable, not less, than they were before the verification regime existed.
The structural problem is one of threat modelling. Age verification as currently implemented requires users to prove their identity to third-party companies that have no core competency in security, that aggregate verification data across many services, and that therefore become extremely high-value targets for attackers. Ofcom's own research found that many companies operating under the UK law weren't maintaining records consistent with its guidance, and couldn't demonstrate how they were taking responsibility for the data. The compliance infrastructure is weak, the attack surface is large, and the data being protected is among the most sensitive that exists.
The companion Proton piece on alternatives to age verification outlines what thoughtful policymakers should be considering instead: zero-knowledge proofs, which can cryptographically demonstrate that a user meets an age threshold without revealing their identity; device-based verification that doesn't create third-party data stores; parental control systems that push the verification relationship closer to where it belongs, within families rather than on centralised servers. These aren't fringe technical proposals — they're well-understood approaches that have been available for years.
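To make the zero-knowledge idea concrete, here's a toy Schnorr-style proof of knowledge in Python: the prover convinces a verifier it knows a secret x without ever revealing it. Real age-proof systems layer signed credentials and range proofs on top of primitives like this; everything here, parameters included, is illustrative only and not production cryptography.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge of x such that y = g^x mod p.
# Illustrative parameters only: NOT secure, NOT a real age-proof scheme.
p = 2**127 - 1   # a Mersenne prime, far too small for real use
g = 3

x = secrets.randbelow(p - 1)   # the prover's secret (e.g. a credential key)
y = pow(g, x, p)               # the public value the verifier knows

# Prover commits to a random nonce.
r = secrets.randbelow(p - 1)
t = pow(g, r, p)

# Fiat-Shamir: derive the challenge by hashing the transcript.
c = int.from_bytes(hashlib.sha256(f"{g}:{y}:{t}".encode()).digest(), "big")

# Prover responds; the response reveals nothing useful about x on its own.
s = (r + c * x) % (p - 1)

# Verifier checks g^s == t * y^c (mod p) without ever learning x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("statement proved without revealing the secret")
```

The point of the sketch is the shape of the exchange: the verifier ends up convinced of a fact about the secret while holding nothing that identifies the person who proved it.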
The gap between what's technically possible and what's being legislated is, as usual, a gap in political will and industry lobbying, not capability. Age verification requirements have been strongly shaped by companies that profit from operating verification infrastructure. They have obvious commercial reasons to prefer centralised, identity-based approaches over distributed, privacy-preserving ones. Policymakers who don't understand the technical landscape are easy to lead.
8 Fork you (and some of the problem with open source)
A lot of the tools that I use on a daily basis are open source, and I thank the $DEITY every day that people devote their time and energy to make them. However… sometimes the community around open source can be a complete pain in the posterior.
Usually this isn’t the people actually making things, it’s the Believers who feel that, in order to promote the credo, they need to belittle anyone who doesn’t entirely buy into it.
One thing that has always bugged me is the insistence that, if you don’t entirely like a particular project, you should “fork the code”. HELLO DO I LOOK LIKE I CAN CODE? As Dave at HumanCode puts it, “You, a rich person with technical skills and time to spare, may be willing to bear the cost of forking a popular project, but others can’t. Think beyond your selfish self.”
9 A bicycle for the mind
The first time I came across the phrase “a bicycle for the mind” was in Uxbridge, which only makes sense if you also know that Apple UK was based there, and I was interning. And now you do know that, so I can continue.
Anyway, it’s always been one of my absolute favourite phrases about technology, because it encapsulates what technology should be. Just as a bicycle amplifies the strength of its rider, so computers can – and should – be amplifiers of their user’s own thinking. It has a resonance with the concept of the centaur in automation theory: a human head, driving the power of a horse’s body.
And I think that Parker Ortolani is absolutely right that the MacBook Neo is a bicycle for the mind. It’s affordable, powerful, personal, and a better exemplar of “the clearest expression of Apple’s vision of the future of personal computing” than the iPad turned out to be. It’s even repairable, or at least more repairable than other Macs have become.
10 The Loomerisation of tech politics
There's a telling detail buried in the WSJ's account of the war between Dario Amodei and the White House. When David Sacks wanted to signal that Anthropic had stepped out of line, he didn't reach for a policy lever or a regulatory threat. He went on his podcast for twelve minutes and then suggested that people in the network needed to be "Loomered" — shorthand for siccing Laura Loomer on them until they're fired.
This is how US politics works now. Loomer, a far-right activist with no formal role in government, has become a disciplinary mechanism. She called out members of the National Security Council who opposed Sacks's plans. They were let go. The threat is effective because it has worked.
Amodei's crime, in this telling, was straightforward. He warned publicly that AI could destroy half of all entry-level white-collar jobs. He added a former Netflix CEO and Democratic donor to Anthropic's board. He declined to tell the White House that there are only two genders. White House officials had been testing AI chatbots — including Anthropic's — specifically on that question, as a gauge of ideological compliance. In July, Trump signed an executive order banning government agencies from doing business with "woke" chatbots.
The administration then drafted an executive order titled "Preventing Woke AI in the Federal Government." Anthropic wasn't named, but everyone in Washington knew who it was aimed at.
Like AI or not, Amodei is at least trying to hold a line. His argument is that AI safety is a national security question, not a culture war one. "Some of the elements in government don't get it," he said, "and are doing exactly the wrong things." That's about as direct as a Silicon Valley CEO gets when talking about an administration that controls his regulatory environment and his government contracts.
There are lots of interesting things about this story, and whoever writes the well-sourced tell-all book about it is going to be rich. But what this story reveals is the choice being forced on every serious AI company in Washington right now: perform ideological compliance, or get Loomered. There's no third option on the menu.