Digital Welfare State Newsletter - edition 9

March 2026

To say there is a lot going on at the moment feels like a ridiculous understatement. While much of the international news might not seem connected to the digital welfare state, there are long and deep threads that link the firms and technologies involved. I start this edition with a few thoughts about it.

As always, if you have anything you’d like to share, including international news and commentary, or if you’d like to collaborate on a project, please don’t be shy about dropping me a line.

Anna

P.S. If you want to read any previous editions of the newsletter you can find them here, and you can join the generous folk who have donated towards the costs of putting the newsletter together by giving me a tip.


—————————————————————

Some rambling thoughts

I've had a wandering chain of thoughts that I'm going to try to articulate. It starts with the row between AI firm Anthropic and the Pentagon. Anthropic didn't want its contract with the US government to be extended to allow its tools to be used for 'any lawful use', as the Pentagon was requiring. Anthropic's technologies are already used by the US military, but this potential expansion into, for example, fully autonomous weapons was apparently beyond what the company could stomach.

Then came the horror and huge controversy over the targeting of a school in Iran by US bombs, which at first sight seemed to be the result of Anthropic's LLM Claude (essentially a chatbot). In fact, this article shows how Claude was a distraction, and that the real technology to blame was Maven, a system launched by Google and then taken over by Palantir. Maven seems to have used data on potential targets that was over a decade old; the school had once been part of a military compound but, tragically, no longer was.

This led me to thinking more about Palantir, a highly controversial company that many of you will be familiar with. They essentially provide the means to combine multiple datasets to allow detailed analysis by private and public sector organisations. In the UK they have contracts with the Ministry of Defence and the National Health Service, among others. Many people already have concerns about Palantir's operations and leadership, and about their deeply embedded role in the functioning of government (not just in the UK).

The potential for their systems to be used in even more troubling ways has recently been highlighted by the medical charity Medact. They warn that Palantir's work in the UK health system (a data platform which aims to connect health data across different parts of the system) could easily be repurposed to target people for deportation, echoing how Palantir’s tools are being used to enable ICE (US Immigration and Customs Enforcement). The political party Reform have promised to introduce their own version of ICE if they gain power, and have said they would use large-scale data sharing to enable it.

So this led me to think more broadly about the potential for government technologies to be repurposed, or used as instruments for deeply troubling purposes. We only have to look at DOGE in the US, or Aadhaar in India, to see how technologies can be hijacked (DOGE) or suffer scope creep (Aadhaar) and result in harm (quite apart from the harms that many produce in their original intended forms).

There is a naivety to the breathless excitement around AI and other tech in public services which is either unaware of this potential or chooses to ignore it, and which assumes that sufficient safeguards exist, or that no one would simply ignore them, as DOGE did. How can we design and govern a digital welfare state that resists anti-democratic takeover?

We also need to think far more deeply about the ease with which companies like Palantir embed themselves deep in our public services. The norms this embedding creates and the actions it enables, from pre-emptive policing to continuous surveillance, are troubling enough under a broadly functional democracy. Under leadership which has no qualms about ignoring norms and safeguards, the effects are terrifying. We need to be far more careful about what, and whom, we are inviting to shape the digital welfare state.

——————————————————————————

Things to read

The DWP has put out a call for technology companies to help it shape what the new digital Jobs and Careers Service will look like. It is undertaking 'preliminary market engagement', i.e. conversations with potential suppliers, to explore how digital products could support a wide range of activities including career coaching, job market insights, skills analysis and interview support. AI solutions are part of the mix.

——————————————————————————

This article in Tech Policy Press discusses data privacy and AI in the context of commercial platforms, but to my mind there is a lot of read-across to public services as well. The article describes how AI systems that predict user behaviour can infer a great deal of sensitive information about people without them explicitly sharing it. While discrimination on the basis of characteristics like ethnicity, gender or sexuality is generally prohibited by law, the article argues that AI can 'learn' these fundamental characteristics and then alter the content or recommendations it provides accordingly.

If digital welfare continues to adopt AI for more and more complex and consequential decisions, and to provide more personally tailored advice and information, are we going to see an expansion of biases and subsequent differential treatment of people?

——————————————————————————

Privacy International, the European Disability Forum, and Pirkko Mahlamäki, a Finnish Disability Forum representative, have published a statement warning of risks to privacy and human rights from new digital welfare measures in Finland. The new laws would allow the social security agency Kela to obtain financial data about people without their consent if they fail to provide information or are suspected of fraud. The legislation doesn't state what constitutes 'reasonable suspicion', so the door is left open to interpretation and, potentially, abuse. Other reforms potentially breach the right to privacy by allowing 'technological monitoring' of people living in social care, via for example wearable technology, without their express consent.

——————————————————————————

I've mentioned this anti-fraud mess a few times in previous newsletters: inaccurate data was used to work out whether people were fraudulently claiming child benefit in the UK, and many thousands of people were wrongly penalised and lost their benefit; in around 60% of cases, benefits were wrongly suspended. The scheme is to be reinstated, with new data used to cross-check suspected fraud cases, and benefits won't be suspended until a claimant has had a chance to clear up any mistakes or miscommunication. But the additional data (about earned income) won't cover everyone, so the risk of mistakes remains. Previous reporting indicated that officials were comfortable with the level of risk and mistakes; what other systems does that apply to?

——————————————————————————

Anna Dent