Digital Welfare State edition 008
February 2026
Somehow it’s already the end of February. Spring is coming here in the UK, and it can’t come soon enough.
Thank you to the generous folk who have made a donation in the last couple of months; I really do appreciate it. If you would like to follow suit, please consider giving me a tip.
As always, please get in touch if you have anything happy to share, as well as international news and commentary. Don’t be shy about dropping me a line.
Anna
P.S. if you want to read any previous editions of the newsletter you can find them here.
——————————————————————————-
Things to read: news and views
We start this month with a guest post from the excellent team at the Administrative Fairness Lab.
As touched on in the last newsletter, the Department for Work and Pensions' 'Targeted Case Review' has continued to ramp up. It involves a team of around 6,000 agents checking the accuracy of payments made to millions of Universal Credit recipients, and has already delivered substantial savings of over £1 billion to date.
As part of plans announced in the Budget 2025 to accelerate efforts to recoup funds lost to fraud and error, the scheme is to be extended into the 2030s, which the government expects to generate total savings of more than £13 billion.
A team of academic researchers based in the Administrative Fairness Lab – a research group led out of King's College London and the University of York – is currently engaged in a 2-year project exploring the roll-out and claimant experiences of the TCR scheme. In a forthcoming research paper drawing on original qualitative interviews with Universal Credit claimants who have been through the TCR process, they argue that the scheme raises serious questions about how efforts to reduce welfare fraud and error can be balanced with appropriate safeguards for claimant welfare and procedural fairness.
The team are looking to interview welfare rights advisers who have encountered TCRs in their advice work. If you are interested in participating, please contact Dr Mark Bennett at mark.bennett@york.ac.uk.
——————————————————————————-
The Ada Lovelace Institute has published a report into AI-powered transcription tools. Right now, they are most often used in social care, where social workers use them to convert conversations, meetings and other spoken discussion into text. The aim is to reduce the administrative load on frontline workers, theoretically freeing up more time for the relational aspects of their work, such as working directly with the people in their care.
The report points out that the rapid deployment of these tools is partly in response to the tight budgets in public sector organisations, but that this same lack of resources means it is very difficult for the organisations using them to properly monitor and evaluate their performance. Given the propensity for generative AI to 'hallucinate' (make things up) and the high stakes of some of the conversations which social workers take part in, any inaccuracies could have far-reaching consequences.
Social workers report some positives from using the tools, and if the problems with hallucination and bias could be resolved (a big 'if'), they could be beneficial. Currently, though, their output cannot be trusted. There is already evidence of transcription tools being used in other parts of the public sector, so the need to properly evaluate their use and, where necessary, put in limits and controls is growing more urgent.
——————————————————————————-
Public sector data sharing is a huge topic, which I won't be covering in its entirety here, but a UK story recently encapsulated some of the potential and the innate challenges. A committee of MPs has told the DWP that it is not doing enough to share data across government departments as part of its work to reduce fraud.
The report points out that some effective data sharing does exist, such as the DWP receiving real-time earnings data from HMRC. Rather than claimants having to report earnings themselves, benefit amounts can be calculated directly from the earnings information. It also notes the potential to use other sources of official data, such as school records from the Department for Education, to verify how many children are living in a household.
On the one hand, this approach can reduce the administrative burden for claimants, stop them from having to report the same information over and over again to different agencies, and go some way to ensuring that everyone receives the benefits they are entitled to (the report notes that there were overpayments of £1bn last year, and underpayments of £1.2bn). But it also constitutes a major invasion of privacy which people not claiming benefits would not be subject to.
The detailed and extremely personal profile that could be created through joining up more sources of government data is a highly sensitive topic. Data sharing can be done in a transparent way, with appropriate opt-outs and choice for citizens. But in the case of something like 'tackling fraud', individual rights are deprioritised in favour of administrative priorities.
——————————————————————————-
The Open Data Institute has found that commonly-used generative AI models (also often called chatbots) are regularly providing false information about government services. The questions posed in the research included several about how to claim different benefits. Many of the models provided wrong information, or returned confusingly large amounts of text, risking administrative and even legal errors by anyone following the faulty guidance. The main challenge seems to be that the models do not prioritise official information over any other source, so their outputs are a mix of fact and fiction. The reliability and accuracy of models needs to be improved before they can be trusted to provide information to the public.
——————————————————————————-
Speaking of chatbots, the most senior civil servant in the DWP has suggested that benefit claimants are likely to increasingly interact with technology such as chatbots rather than human advisers. He proposes that this will mainly apply to people who don't need as much human support, and so are perhaps more able to 'self-serve' with a digital tool.
——————————————————————————-
In almost-certainly-related news, big tech firm Anthropic will be working with the UK government to create AI assistants which will help people look for work and navigate other 'crucial life moments'. A pilot is scheduled for later in 2026. It's not clear (to me at least) if this is linked to the 'Jobcentre in your pocket' project, which will be a universal digital jobs and careers tool (see previous newsletter).
——————————————————————————-
The Netherlands has a chequered history with predictive algorithms in the public sector. The latest problematic use is an algorithm meant to predict whether someone is likely to reoffend, the results of which are used by judges to make important decisions about the length of sentences. The algorithm has been found to generate inaccurate predictions for 1 in 5 people, and the system has been suspended.
——————————————————————————-
A really compelling interview (to listen to or watch) by Alix Dunn of The Maybe, talking to Indian lawyer, activist and researcher Usha Ramanathan. They discuss the framing of AI for development - the promise that using AI in welfare and development policy and programmes will improve outcomes for the least well-off in society.
Data is a valuable asset, goes the theory, which can help to generate wealth for citizens who give it up to welfare systems. Ramanathan discusses this in the context of the Indian system Aadhaar, which I mentioned a couple of months ago. She has been instrumental in identifying problems with Aadhaar and how it excludes millions from support they are entitled to, and in questioning exactly what people's data is being used for. She points out that the onus is always placed on the individual to twist themselves to fit the system, rather than anyone expecting the system to work for everyone, and notes the level of precarity and anxiety that this creates. Definitely worth listening to or watching the whole thing.
——————————————————————————-