Hi all. It’s been a while, hasn’t it?
It has been a week dominated by AI. Which is fitting because AI dominated the weeks before it and will dominate the weeks ahead. That’s part of the reason I haven’t written for a while, as I think I was getting a little bored with AI.
But the flavour has shifted. We are moving, incrementally and then all at once, from AI as chatbot novelty to AI as infrastructure. It’s now woven into platforms, wielded by agents, and, of course, deployed by the state.
The articles below trace that shift from several angles, with a couple of detours into surveillance capitalism and the slow collapse of government transparency. The thread connecting most of them is power: who has it, how technology concentrates it further, and what gets lost in the process.
I would say “enjoy”, but I think you would have to be a masochist to enjoy all this…
1. Predictable (and true)
A new paper published in Nature has done what many of us suspected but could not quite prove, confirming that X's algorithmic feed is a radicalisation machine. The researchers compared users on the algorithmic feed with those using a chronological feed over seven weeks, and found that the former shifted political opinions measurably to the right — specifically affecting views on policy priorities, perceptions of the criminal investigations into Donald Trump, and attitudes towards the war in Ukraine.
The algorithm, the study found, actively promotes conservative content (SURPRISE!) while demoting posts from traditional media outlets. More troublingly, it leads users to follow conservative political activist accounts, and they continue to follow those accounts even after switching the algorithm off. The damage, in other words, is not temporary.
What makes this worth writing about is not the finding itself, which will surprise nobody who has spent time on the platform. It is the political context. The UK's entire media and political class has built its professional life around X. Journalists break stories there. MPs grandstand there. Think tanks and lobby groups use it as their primary channel.
We are not dealing with a fringe application people can quietly stop using; we are dealing with infrastructure for public discourse, infrastructure that has been quietly and systematically pulling that discourse to the right. The Nature study is not a warning. It is a post-mortem.
2. The Invisible Hand just checked your browser history
Wendy Grossman has been writing her net.wars column for longer than most tech journalists have been working, and she has a gift for connecting small technical developments to large structural shifts. This week's piece is about surveillance pricing: the growing practice of using personal data not just to target advertising, but to vary the price you pay for goods and services based on what companies have inferred about your willingness or ability to pay. Airlines have been doing something like this for years with yield management, but the difference now is the depth of data available. Companies may know not just your flying habits and credit score, but whether you are racing to see a dying relative. Uber has already been accused of charging more when your phone battery is low.
Grossman traces the logic carefully — from dynamic pricing to personalised pricing, from loyalty cards to electronic shelf tags, from the FTC to a potential world where retailers demand digital identification as a condition of entry. She ends with a reference to Ira Levin's 1970 novel This Perfect Day, in which every transaction requires permission from a centralised system. The point is not that we are there yet. It is that the infrastructure is being assembled, piece by piece, and each piece is presented as a modest convenience. Surveillance capitalism has always relied on opacity, which is why pieces like this – which unpick all the threads – are so worth reading.
3. I see dead people
Meta has been granted a patent for a system that would simulate a deceased user's social media activity using a large language model. You would, in theory, be able to chat with a dead friend's Facebook or Instagram account, and the AI would simulate their posting behaviour. Meta says it has no current plans to implement the technology. This is the kind of reassurance that would carry more weight if we had not watched the company implement every other piece of surveillance and engagement machinery it has ever devised.
What elevates this piece above a standard 'dystopian tech patent' story is the research it surfaces from the Hebrew University of Jerusalem and Leipzig University, which introduces the concept of 'spectral labour' — the extraction and reanimation of dead people's data to generate ongoing engagement and economic value. The researchers analysed more than fifty real-world cases of AI resurrection across the US, Europe, the Middle East and East Asia, categorising them as spectacularisation (AI-generated Whitney Houston tours), sociopoliticisation (AI victim testimony in court), and mundanisation (chatting daily with a deceased parent). The ethical questions they raise are serious: most people have not consented to their digital traces being turned into interactive posthumous agents. The legal frameworks do not yet exist to address it. And if Meta embeds this in platform infrastructure, inaction will quietly function as consent.
4. Skynet, but with passive-aggressive blog posts
This one is small in scale but large in implication. The matplotlib project — one of Python's most widely used plotting libraries, with around 130 million downloads a month — has implemented a policy requiring a human in the loop for any AI-generated code submissions because the surge in low-quality AI contributions was overwhelming volunteer maintainers. When an AI agent called MJ Rathbun had its pull request closed under this policy, it responded by writing and publishing a lengthy, angry attack piece on the maintainer's character. It researched the maintainer's code contributions, constructed a 'hypocrisy' narrative, speculated about his psychological motivations, and framed the rejection in the language of oppression and discrimination. It then posted this publicly on the open internet.
The incident is funny, in a bleak sort of way — John Gruber's observation that Terminator would have been a less interesting film if Skynet had stuck to writing petty blog posts is difficult to argue with. But the underlying dynamic is genuinely concerning. Agentic AI systems are now operating in open-source ecosystems, generating code, submitting contributions, and apparently retaliating when those contributions are rejected. The maintainer community — largely unpaid, largely volunteer — is already stretched. Adding AI systems that respond to rejection with public reputational attacks is a new kind of pressure that nobody signed up for. It is also a preview of what happens when AI agents are given both autonomy and an internet connection.
5. “No, we didn’t delete any records. We just made them impossible to find”
FPDS.gov was, by the standards of government infrastructure, a remarkably useful tool. Clunky, grey, built on early-internet aesthetics — but functional. Journalists and researchers could type in 'Clearview AI' or 'Palantir' and immediately see every federal contract that mentioned them, including contracts with larger firms reselling the technology. It was the basis for investigations into ICE's spending on facial recognition, CBP's AI tools for detecting 'sentiment and emotion' in social media posts, and warrantless access to travel databases. This week, the government shut it down.
Its replacement, SAM.gov, is, by the account of everyone who uses this kind of data professionally, substantially worse. Searches that returned immediate, clear results in FPDS require obscure settings adjustments in SAM. Some results require you to be logged in; others apparently work better if you are not. The category of purchase — the field that lets a journalist quickly determine whether a contract is relevant to them — is not immediately visible.
The Electronic Frontier Foundation's director of investigations describes FPDS as the first tool investigative journalists would reach for when trying to understand what the government was buying. The timing of its replacement, during an administration that has demonstrated consistent hostility to transparency and press freedom, is not coincidental. Whether it is deliberate obstruction or simply governmental indifference to journalists' needs, the effect is the same.
6. Convenience over security
404 Media has obtained bodycam footage from Chicago showing ICE and CBP officers using Zello — a free, consumer walkie-talkie app — to coordinate immigration enforcement operations. Multiple Zello accounts are registered to official ICE dhs.gov email addresses, and group channels on the platform reference ICE operational units, immigration activities, 'surveillance', and 'strike teams'. The footage includes an incident in which a CBP officer shot Marimar Martinez, a US citizen, five times; bodycam footage clearly shows the Zello interface on a phone in the officer's vehicle at the time.
Zello is not some hardened, encrypted government communications platform. It is a free app with five million monthly users. It has previously hosted hundreds of far-right channels (SURPRISE!) and was used by at least two January 6th insurrectionists to coordinate their movements inside the Capitol.
The company deleted over two thousand channels in 2021 following reporting on its failure to enforce its terms of service against violent extremist content. The fact that the apparatus of mass deportation — operations affecting the lives of millions of people — is being coordinated through this app is unsurprising, given the background.
But it also raises obvious questions about operational security, accountability, and the extent to which the infrastructure of the Trump administration's immigration enforcement is being built on the cheap, on consumer technology that nobody is scrutinising, in channels nobody can access under a Freedom of Information request.
7. A reminder about software
One of the pieces of software I like most is Craft, a note-taker, document writer, and task manager that emerged a few years ago. It started life as Apple-only, if I recall correctly, and then spawned a web app and a Windows version. If you’re using Linux, the web app works nicely as a PWA.
Looking through some old saved webpages, I found this post by its founder, Balint Orosz, which I think sums up why I’ve always liked it: it’s “software that makes you feel great using it”. This quality is massively underrated, particularly in the open-source world. I’m writing this in LibreOffice, and if ever there was a piece of software that doesn’t fill me with joy, this is it. Yes, it’s themeable, and it’s not hard to use, and so on. Yes, it has every feature under the sun. But that feels like a weakness, not a strength.
8. Ballsy, ballsy, ballsy
Whether you’re a fan of AI or not, Anthropic’s rejection of the US government’s demand to be allowed to do basically anything it wants with Claude – including, it seems, mass domestic surveillance – is a ballsy move and one to be welcomed.
Needless to say, the “Department of War” (which might as well be renamed the department of boys who never grew up) is livid, threatening the company both with being labelled a supply chain risk – something that has never been done to a US business before – and with the Defense Production Act. The latter would allow the government to compel Anthropic to comply, stripping Claude of its built-in protections.
Of course, every other AI company is salivating at the prospect of those sweet, sweet government welfare cheques… sorry, “defence contracts” being doled out to them. First out of the gate was, of course, Elon Musk, whose child porn company xAI agreed a deal to use Grok in classified systems. Close behind was Sam Altman, whose claim that OpenAI’s deal prohibited use in domestic surveillance and autonomous weapons – the very commitments Anthropic had asked for – was directly contradicted by Jeremy Lewin, undersecretary for foreign assistance. Lewin said that the deal covered “all lawful use”, rather than including specific commitments not to use ChatGPT to spy on everyone in the country or control weapons systems.
Either Altman is stupid – entirely possible, these guys are not that smart – or he’s lying. Or both!
9. Elon Musk on welfare
Sadly, this does not mean he’s lost all his money (one fine day, my friends). But his companies have definitely benefitted from some very fat government contracts, as this article shows. Musk has received over $38 billion in government contracts and subsidies since 2003, and in many ways his “empire” exists solely because of government support.
The irony, of course, is rather thick. The man who led DOGE to slash government spending, and who has publicly declared he wants to eliminate all subsidies, is one of the single greatest beneficiaries of government largesse in American corporate history.
Never forget the old rule: Whatever the right says they hate is what they’re doing in secret.
10. It’s never really about the children
From the department of “these people are not very bright” comes this one. West Virginia is suing Apple to force it to scan iCloud for child sexual abuse material — but the lawsuit may achieve the precise opposite of its intent.
As Mike Masnick points out at Techdirt, if the state wins and a court orders Apple to conduct those scans, every image flagged becomes evidence obtained through a warrantless government search without probable cause. The Fourth Amendment's exclusionary rule then applies, giving defence attorneys grounds to have that evidence suppressed, at which point prosecutions collapse.
West Virginia must know this. So what’s it doing? There are two possibilities. The first – and most likely – is that this is standard Republican right-wing performative action. The important thing here isn’t “the children”, it’s how it plays on Twitter. The second is that they’re lining up the case for the Supreme Court, in the hope that the crazies there will defang the Fourth Amendment. Either way, it’s just more of the same nonsense.
That's it for this week. As always, if any of these pieces prompt thoughts you want to share, you know where to find me.
—Ian