Digital Welfare State edition 006


December 2025

Welcome to December’s Digital Welfare State newsletter. Well done making it to the end of a turbulent, fast-moving and at times very challenging year. And thank you for reading! I’ve enjoyed putting the newsletter together, though it’s been hard to keep up with all the news.

This edition starts with a guest spot from David Nolan at Amnesty, introducing a new resource to support organisations in investigating algorithmic systems.

Have a great Christmas and New Year, and please do get in touch next year if you would like to share anything in a future edition. I’m still looking for more international news and commentary - please don’t be shy in dropping me a line.

I write this newsletter in my spare time with no funding, so if you read and enjoy it please consider giving me a tip.

Anna
P.S. If you missed the first five newsletters you can find them here.


——————————————————————————

Amnesty Algorithmic Investigation Toolkit

Amnesty has just launched our Algorithmic Accountability Toolkit. This toolkit is designed to support civil society organisations, journalists and community organisers in investigating and challenging harmful uses of AI in government and public institutions, with a particular focus on organisations that may not have worked directly on algorithmic accountability issues before. It offers practical guidance on:

  • Researching opaque AI and automated decision-making systems, including templates for freedom of information requests to governmental authorities

  • Centering human rights frameworks and lived experiences of affected communities

  • Strategies for advocacy, accountability and change.

This toolkit brings together years of knowledge from Amnesty International's algorithmic investigations into a clear, practical method combining legal analysis, community testimony and public records, and includes suggested routes for pursuing accountability. It also emphasises that the strongest algorithmic investigations are multi-disciplinary endeavours, bringing together research methods, auditing practices, advocacy, campaigning, strategic comms, media work and more.

David Nolan, Amnesty Tech

——————————————————————————

Things to Read: news and views

Further to my adventures in FoI in the last few months, a new entry in the Algorithmic Transparency Register sheds more light on the Whitemail AI I wrote about in my FoI special. It includes the detail that Whitemail uses a model called BART, from Facebook AI.

Whitemail doesn’t just look for evidence indicating vulnerability but also for other ‘themes’, such as a change of address. One element looks for general themes in the correspondence, then another looks specifically for vulnerabilities. The register entry seems to be incomplete: it describes how the system classifies correspondence according to general themes and themes relating to vulnerabilities, but I can’t find a list of these in the record. In fact it refers to a section which doesn’t exist, so it’s still not clear what words or phrases they consider to indicate particular vulnerabilities.
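For the curious, here’s a minimal sketch of how a BART-based theme classifier might work. BART fine-tuned on natural language inference is commonly used for zero-shot classification via Hugging Face’s transformers library, where a text is scored against an arbitrary list of candidate labels. The register doesn’t disclose Whitemail’s actual labels, model checkpoint or pipeline design, so everything below (the facebook/bart-large-mnli checkpoint, the two label lists, the threshold, the two-stage setup) is an illustrative assumption, not a description of the DWP’s system.

    # Illustrative sketch only: the labels, checkpoint, threshold and the
    # two-stage setup are assumptions; the register doesn't describe
    # Whitemail's actual design. Requires: pip install transformers torch
    from transformers import pipeline

    # BART fine-tuned on MNLI can score text against arbitrary labels
    # without task-specific training ("zero-shot" classification).
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    # Hypothetical stage one: general themes in the correspondence.
    general_themes = ["change of address", "payment query", "complaint"]
    # Hypothetical stage two: themes indicating vulnerability.
    vulnerability_themes = ["bereavement", "homelessness", "serious illness"]

    letter = "I have moved house recently and have been very unwell."

    for labels in (general_themes, vulnerability_themes):
        result = classifier(letter, candidate_labels=labels, multi_label=True)
        # Keep any label scoring above an arbitrary cut-off.
        flagged = [label for label, score
                   in zip(result["labels"], result["scores"]) if score > 0.5]
        print(flagged)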

——————————————————————————

Lots has already been written about the Carer’s Allowance (CA) debacle, in which people were penalised for honest mistakes in a badly designed system. The official review into what went wrong came out recently, and among its many findings was one particularly relevant to anyone interested in the digital aspects.

People receiving CA can only work and earn a certain amount before they lose eligibility for the benefit - this was the crux of the problem which led to thousands of people being penalised. Within the DWP is a system which flags CA recipients who have gone over their weekly earnings allowance, apparently in almost real time. You would think this would automatically trigger an alert to the individual so that they were aware and could make sure they did not breach the threshold again.

But instead, DWP staff decided whether or not to investigate a flagged case, driven by internal targets to tackle fraud rather than by ensuring claimants were receiving the correct amount. Some people accumulated years of debt before they were informed.
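To make the design point concrete, here is a minimal sketch of the gap between the automated flag and the human decision. All the names, the threshold figure and the structure are hypothetical; the review doesn’t describe the DWP’s actual implementation.

    # Hypothetical sketch of the reported design, not the DWP's actual system.
    from dataclasses import dataclass

    WEEKLY_EARNINGS_LIMIT = 151.00  # illustrative figure, not the real threshold

    @dataclass
    class Claimant:
        ref: str
        weekly_earnings: float

    def over_threshold(c: Claimant) -> bool:
        """The automated part: a near real-time flag on excess earnings."""
        return c.weekly_earnings > WEEKLY_EARNINGS_LIMIT

    def notify_claimant(c: Claimant) -> None:
        print(f"Alert to {c.ref}: your earnings are over the weekly limit.")

    def queue_for_fraud_review(c: Claimant) -> None:
        print(f"{c.ref} queued for manual review; claimant not informed.")

    claimant = Claimant(ref="CA-0001", weekly_earnings=180.00)
    if over_threshold(claimant):
        # What you might expect: notify_claimant(claimant)
        # What reportedly happened: the flag joined a queue where staff,
        # driven by fraud targets, chose whether to investigate, while
        # overpayment debt quietly accumulated.
        queue_for_fraud_review(claimant)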

This speaks to the limitations of automated systems when they are not set up to benefit the claimant, or are used as a tool for other policy priorities, and to the ongoing unfairness of a system which treats people with suspicion as the default. It also underlines that the digital and the human are not easily unravelled. At every stage in the process of digitisation someone has made decisions, whether about the choice to digitise a process in the first place or what to do with the outputs it creates. Having a ‘human in the loop’ is often suggested as a way to improve the fairness and transparency of digital systems, but in this case it seems to have had the opposite effect.

——————————————————————————

A report from the Digital Good Network, Migrant Voice, and the University of Warwick shows that the online-only system for proving immigration status is not functioning as intended or needed, with technical failures, unclear instructions, and organisations which need to see proof of immigration status not understanding it. People are being denied entry to the UK on the basis of its failures, and are left anxious and confused by using it.

——————————————————————————
Also on migration, age estimation AI is set to be introduced to support officials in estimating the age of people seeking asylum in the UK. Even companies that provide age estimation software are clear that it is not 100% accurate, so thankfully the AI won’t be used as the only tool; it will sit alongside other measures. However, refugee charities have still raised concerns about the safeguarding and fairness implications of flawed decisions based on the AI.

——————————————————————————

This report from the Public Law Project sets out how the DWP’s Universal Credit sanctions regime is causing harm to claimants. It finds that sanctions are applied for minor, first-time mistakes and failures to follow the requirements set out in a claimant’s commitment. One of the contributing factors in the unfair application of sanctions is the digital-first nature of Universal Credit: claimants talk about their reliance on mobile data and unreliable phones, which can cause them to miss calls and messages, leading to immediate sanctions. PLP argue that a complete overhaul of the sanctions system is needed.

——————————————————————————

In Sweden, an investigation by local reporters and Lighthouse Reports found that a benefit fraud algorithm was unfairly biased against certain groups, disproportionately triggering investigations into women and low earners, among others. The system has now been shut down while the government assesses whether it complies with EU AI regulation, and it has indicated that it does not currently intend to start using it again.

——————————————————————————

Spanish civil society organisation Civio has produced a searchable register of AI tools being used in the Spanish healthcare system.

——————————————————————————

In Australia, a phishing attack used GenAI to produce scam emails sent to people claiming welfare benefits, aiming to harvest personal details and then commit identity theft. It’s a scary mash-up of contemporary digital problems - phishing, identity theft - with the digitisation of welfare benefits, exposing vulnerable people to bad outcomes in a new arena for scams: digital public services. I wonder how many governments worldwide are implementing safety measures to reduce the risk of benefit claimants being scammed.
——————————————————————————

Finally, the Public Authorities Fraud, Error and Recovery Act has passed into law. I wrote about it back in May, and now the mass bank surveillance powers are set to be introduced. Codes of practice are being consulted on now - if you’re engaged in this area of policy and law, do respond.
——————————————————————————

Anna Dent