Ten Blue Links, “forever blowing bubbles” edition

Ian Betteridge
Nov 09, 2025

This week’s topic is AI. In some ways every week’s topic is AI at the moment, but this one especially so.

1. DeepSeek’s big debut comes with a cold shower about jobs

In DeepSeek’s first major public outing since it went global, a senior researcher at the company warned that AI’s short‑term upsides could give way to serious employment shocks within 5–10 years, as reported by Reuters. That’s a rare moment of candour in an industry that has basically made out this is all for the benefit of mankind.

China is, of course, positioning DeepSeek as proof it can innovate despite (or perhaps because of) US sanctions, and the company keeps shipping — including an upgraded model tuned for domestic chips. The subtext is that AI scale is coming either way. The only real question is whether policy and industry will manage the human transition or pretend it’ll sort itself out.

2. Meta’s fraud economy problem

Internal docs suggest Meta could book around 10% of 2024 revenue — roughly $16bn — from ads linked to scams and banned goods, according to Platformer. Pair that with the finding that a third of successful US scams touch Meta’s platforms, plus user reports being allegedly ignored 96% of the time, and you have a portrait of the company’s incentives gone feral. When fines are rounding errors and high‑risk ads are lucrative, why should Meta even bother trying to fix this?

3. Are AI bubbles good for society?

You can guess what I’m going to say. The romantic story is that bubbles leave behind useful infrastructure. The less romantic truth, as the FT notes: they also waste capital, invite fraud, and distort priorities. The AI boom is a typical bubble, with huge build‑out, overheated expectations and crowd psychology. Useful to remember when every cost is waved away with “progress.”

4. Big Tech quietly dims the diversity lights

Google, Microsoft, and Meta have stopped releasing workforce diversity statistics, citing shifting politics and priorities—a reversal covered by Wired. Apple, Amazon and Nvidia still publish. Transparency isn’t a panacea, but turning off the lights makes it harder to see whether representation improves or slides backwards. The message is clearly “this isn’t a focus anymore.” If it ever really was.

5. A self‑driving tragedy that says the quiet part out loud

After a Waymo car killed a beloved neighbourhood cat in San Francisco, the backlash wasn’t just about one incident. As Bay Area Current recounts, it tapped into a deeper resentment about tech occupying public space without owning the consequences. Corporate condolences don’t cut it when accountability feels optional. If autonomy wants public trust, it needs humility — and skin in the game when things go wrong.

6. Philanthro‑optimism meets politics

Bill Gates’s recent pivot from “cut carbon now” to the fuzzier ideal of “human flourishing” has been rightly read as a retreat from climate politics. The critique — laid out in Dave Karpf’s newsletter — is that technology can’t substitute for legitimacy, coalition‑building, and the grind of governance, especially under an administration openly hostile to climate action. If the plan relies on benevolent billionaires, it’s not a plan.

7. Amazon vs. Perplexity is a preview of the agentic web

Amazon sent a cease‑and‑desist to Perplexity over its Comet shopping agent operating on Amazon.com, alleging ToS violations and potential fraud, per Platformer. Perplexity says it’s enabling user intent rather than impersonating it. Beyond the legal wrangling is a bigger question: when AI agents do the browsing and buying, who holds power — platforms, publishers, or the agent itself (somehow)?

8. “Only biology can be conscious,” says Microsoft’s AI chief

Mustafa Suleyman wants developers to stop flirting with machine consciousness and focus on useful systems that don’t pretend to feel pain, as he told CNBC. Treating models as tools rather than quasi‑people might spare us a lot of anthropomorphic nonsense and some bad policy.

9. I just don’t know, why would kids not be working?

The number of young people not in education, employment or training is rising, and the government are bamboozled as to why. But the answer is pretty obvious: the very AI which the government has been championing is hitting the entry-level job market hard. And employers are finally admitting what anyone with a brain would know: they are using AI to cut headcount.

It’s going to get worse. Earlier this year, Dario Amodei, CEO of Anthropic, predicted that AI could take away half of all entry-level jobs, and that this would disproportionately affect what we used to call white-collar jobs. For decades, a university education has been pitched as the gateway to one of these higher-paying jobs. Now that that's gone, young people have less incentive to stay in education. Who wants to be saddled with £50,000 of debt when you’re going to end up unemployed anyway?

As demand for degrees falls, this will lead to further pressure on our already near-bankrupt universities. And… you can see where all this goes.

10. That rumble you can hear is the sound of the impending a(i)pocalypse. Maybe.

Two hundred billion dollars. That’s how much debt has been loaded onto the markets to fund the relentless expansion of AI capabilities that tech companies are currently indulging in. If that sounds scary, it’s understandable.

To put that into context, OpenAI’s Sam Altman has publicly stated the company is on its way to $100bn a year in revenue. And if those predictions turn out to be true, then that $200bn looks like a bargain.

Some people, though, have predicted we are in for a 2008-style crash when – not if – the AI bubble implodes. But there are some profound differences between now and 2008. The 2008 crisis was driven by loose underwriting, subprime defaults, and complex securitisations (MBS/CDOs) that transmitted losses through the global banking system. There’s no evidence that this is happening now.

A burst AI bubble would more likely manifest as equity drawdowns, capex cuts, and sector-specific spread widening rather than a cascading credit crisis via complex securitisations.

For me, the question marks over AI aren’t about potential financial risks, but about societal and cultural risks. Mass unemployment amongst the young rarely leads to a more stable society and will magnify the division between the young and the old, who have property and pensions to fall back on. Meanwhile, a class of billionaires will take the message from AI that they no longer need the rest of us. It’s going to be a difficult decade.
