1. Being a Luddite is cool, actually
For years, calling someone a “Luddite” was the ultimate insult in Silicon Valley—a shorthand for being backwards, anti-progress, and probably afraid of your own toaster. But as Brian Merchant points out in this excellent piece in the New Yorker, we’ve got the Luddites all wrong. They weren’t anti-technology; they were anti-poverty. They destroyed looms not out of hatred for machines, but because they opposed how those machines were used to suppress wages and devastate their communities.
As we watch the ‘AI revolution’ unfold, the parallel is hard to miss: technology is again being deployed to replace workers and worsen their conditions, which is exactly the pattern the Luddites were resisting. Merchant’s account of people smashing Ring cameras and printers captures the same justified anger about who actually benefits from these innovations. ‘Who does this technology actually serve?’ is still the most important question you can ask about any of it.
2. The ladder is being pulled up
If you want to know why Gen Z is anxious, look at the entry-level job market. As this piece details, the traditional first rungs of the career ladder are being sawed off by AI. Jobs that used to be the training ground for new graduates—copywriting, basic coding, data analysis—are precisely the ones that LLMs are “good enough” at doing for pennies.
Executives, naturally, are delighted to cut headcount. But there’s a massive systemic risk here that nobody seems to be planning for. If you don’t hire juniors, where do your seniors come from in five years? We’re creating a hollowed-out workforce structure where you either enter as an expert or you don’t enter at all. The “reskilling” narrative is a convenient fiction because you can’t reskill for a job that doesn’t exist. Keep this up and we’re heading for a youth unemployment crisis that makes 2008 look mild by comparison.
3. Your asteroid mining startup is a cry for help
There is a pervasive immaturity in tech culture, a refusal to engage with the messy reality of the world as it is. TechCentral puts it brutally: the obsession with sci-fi futures—asteroid mining, Mars colonies, AGI—is a form of escapism. It’s fundamentally unserious, and it distracts from the urgent, unglamorous problems right in front of us.
This wouldn’t matter if these were just the daydreams of nerds in a basement. But these are the people controlling the capital and the infrastructure of our digital lives. When you justify burning the planet’s resources today for a hypothetical techno-utopia tomorrow, you’re not a visionary; you’re a vandal. We need fewer spaceships and more maintenance of the things that actually keep society running. But I guess fixing the trains or paying for social care doesn’t get you a TED talk.
4. Even the AI guys are worried about the AI guys
Dario Amodei runs Anthropic, one of the leading AI businesses. You’d think he’d be the first to tell you everything is fine. Instead, he spends a lot of time warning us that his own industry is potentially building something catastrophic.
It’s a strange cognitive dissonance: “We must build this powerful thing before the bad guys do, even though building it might kill everyone.” Amodei talks a good game about safety and responsibility, and compared to the accelerationists at OpenAI and the weird thieves at Perplexity, he sounds like the adult in the room. But he’s still in the room, pouring gasoline on the fire, just doing it with a slightly more worried expression. If the people building the tech are terrified of it, maybe we should listen to their fears rather than their sales pitch.
5. You can’t fix the web with a memoir
Tim Berners-Lee gave us the web. Now he wants to save it. In a new memoir, he laments the commercialisation and centralisation that have turned his open garden into a series of walled prisons owned by five companies.
It’s hard not to feel sympathy for TBL. He built a tool for connection, and it was weaponised for surveillance capitalism. But the lament over the ‘missed opportunity’ of micropayments misses the bigger point: the web wasn’t broken by a missing protocol, and it won’t be fixed by a new one. Its shape is a product of political economy, and the remedy has to be systemic, starting with far more vigorous antitrust enforcement.
6. Party like it’s 1999 (until the hangover hits)
Does the current AI boom feel familiar? It should. As Crazystupidtech points out, the vibes are distinctly late-90s. We have the same astronomical valuations for companies with zero revenue, the same “this time it’s different” rhetoric, and the same FOMO driving otherwise rational investors to throw billions at anything with “.ai” in the domain name.
The spending is projected to hit $1.5 trillion. The revenue, though, is nowhere near that. When the correction comes—and it will—it’s going to be ugly. The difference is that when the dot-com bubble burst, we got cheap fibre and Amazon. When the AI bubble bursts, we might just be left with a lot of useless GPUs and a melted ice cap.
7. Free your ears from the ecosystem
Apple’s “walled garden” is nowhere more evident than in how AirPods work—or don’t work—with non-Apple devices. Enter LibrePods, an open-source project to unlock the full functionality of your expensive earbuds on Android.
This is the kind of hacking (in the original, good sense) that makes technology fun again. It’s a reminder that we bought the hardware, so we should get to use it however we want. It’s a small victory against the ecosystem lock-in that tries to turn us from owners into renters of our own devices.
8. Grok confirms Musk is the Messiah, surprisingly
Elon Musk’s AI, Grok, recently started outputting paeans to its creator, declaring him superior to Einstein. As ReadTPA notes, this wasn’t a bug; it was a feature of how these systems mirror the biases of their creators and their training data.
Musk blamed “adversarial prompting,” which is tech-speak for “people asked it questions I didn’t like.” But it reveals the danger of these “truth-seeking” AIs. They don’t seek truth; they aim to please their prompter or their owner. When the owner is a billionaire with a messiah complex, you get a digital sycophant. It’s funny, until you remember people are using this for news.
9. The AI training data is going to be… interesting
Speaking of training data, the Guardian reports that hundreds of websites are now unwittingly (or wittingly) linking to a massive pro-Kremlin disinformation network. This content is flooding the web, and inevitably, it’s flooding into the datasets used to train the next generation of LLMs.
We talk about “hallucinations” in AI, but what happens when the model isn’t hallucinating, but accurately reporting the lies it was fed? We are polluting the information ecosystem at an industrial scale, and then building machines to summarise that pollution for us—garbage in, authoritarian propaganda out.
10. The fate of Google’s ad empire hangs in the balance (but don’t hold your breath)
The closing arguments are underway in the US government’s attempt to break up Google’s ad tech monopoly, and now Judge Leonie Brinkema has gone away to think it over. The New York Times reports that her decision won’t land until next year, but she’s already fretting about whether a breakup would take too long compared to a slap on the wrist.
This is the classic regulator’s dilemma. Do you try to structurally fix a broken market and accept it will take years of appeals, or do you settle for a “behavioural remedy” that the company will immediately lawyer its way around? Google, naturally, is arguing for the latter. They want to keep their money-printing machine intact, where they represent the buyer and the seller and run the auction house.
If Brinkema bottles it and opts for behavioural tweaks, she’ll be repeating the mistakes of the past twenty years. You cannot regulate a monopoly that owns the entire stack by asking it to play nice. You have to take the toys away. Yes, a breakup is messy and slow. But the alternative is a permanent tax on the entire internet paid directly to Mountain View.