Digital Welfare State edition 007

DWS Newsletter - edition 7

January 2026

Welcome to the new year. In this edition there’s news from France, updates on digital ID and the child benefit data drama, and a look at the harms arising from a predictive tool used on children in Bristol. I also watched the recent Work and Pensions Committee session so you don’t have to.

I won’t even try to predict where the next 12 months will take us in terms of digital welfare, but I’ll share some hopes. First, that the slow progress on transparency around digital welfare systems and how they work gathers a bit more pace. Much of it is driven by investigations and campaigns by civil society organisations. I’d love government agencies to be more proactive about sharing what they’re doing without waiting to be forced.

I’d also love some more positive news! Good practice on fairness, community involvement in design, systems that prioritise user needs above surveillance or control. Please get in touch if you have anything happy to share, and I’m always keen to hear international news and commentary too - don’t be shy about dropping me a line.

I write this newsletter in my spare time with no funding, so if you read and enjoy it please consider giving me a tip.

Anna
P.S. If you missed the first six newsletters you can find them here.

——————————————————————————

Things to read: news and views

The latest update in the child benefit data mess. After tens of thousands of people had their child benefit payments suspended on the basis of flawed data, nearly 75% of them have now had their benefits restarted. The error rate in payment suspensions seems to have been huge; travel information was used to decide if someone had left the country, and rather than triggering further investigation it seems that payments were halted immediately.

According to reporting by the Guardian, HMRC believed this level of error to be 'tolerable', with a 'remote' chance of inflicting harm. The checks that might have reduced error rates, namely comparing travel data against earnings records, were removed from the process to make it more ‘efficient’, which almost certainly increased the rate of errors. How this can be described as a remote risk of harm is baffling. Given the vast majority of child benefit claims are made by women, we can reasonably assume that a lot more women than men had their child benefit cut off for no reason.

——————————————————

A collective of civil society organisations in France has come together to draw attention to and challenge the use of an algorithm that risk scores benefit claimants. It assigns a score to people according to the theoretical likelihood that they will commit benefit fraud. There are 25 organisations taking legal action against the benefits agency CNAF; they hope to get use of the algorithm halted. Organisations in the collective have been tracking the impact of the algorithm for several years, and have just succeeded in getting the source code released; this should enable more detailed analysis of how the algorithm works and its impacts. They already know that it disproportionately targets some vulnerable groups, including people on a low income or receiving disability benefits. I'll keep monitoring the case to see how they get on.

——————————————————

I missed this OECD report last year, looking at the use of AI in social protection schemes globally. It's a useful overview of how AI is being used, what impact it's having, and public attitudes towards it. It includes examples from Asia, Europe, the US and elsewhere, and breaks down AI usage into broad categories like fraud detection, back-office management, and AI chatbots. The public polling emphasises the gap between official enthusiasm for welfare AI and how the majority of people actually want to see AI used in public services.

——————————————————

This recent session of the Work and Pensions Committee saw officials from DWP answering questions about their Annual Reports and Accounts 2024-25. It includes discussion of the department’s use of algorithms, machine learning and related technologies. Assuming most of you won’t be watching it, here are a few highlights.

In a discussion about fairness of ML models, it’s mentioned that there are four more ML models in development. Senior DWP officials stress that a human will always have the final decision to stop someone’s benefits; it is never fully automated. They also state that they have published the details of all the algorithms currently in use. There are 12 entries on the Algorithmic Transparency Register from DWP - so can we be confident that these are all the algorithmic systems currently in use?

One thing I’m interested in is Targeted Case Review - a major programme to tackle fraud and error. There is a big team of reviewers assessing thousands of Universal Credit claims for overpayment, underpayment or potential fraud. In the Committee session it sounds like there may be some kind of algorithm flagging cases for human review - but it’s not very clear. Does anyone know more about this? Deloitte hold a big contract to provide support to iterate an ‘end to end Digital service’, but the details in the contract are pretty high level. This article points out the challenges of large-scale risk assessment; it would be good to know more about TCR and how it works to allay fears of mistakes and harms. There is nothing on the Algorithmic Transparency Register about TCR, so can we conclude that no algorithms are used in the process?

DWP is also working with the Home Office to understand if claimants are leaving the country on a long-term basis, as HMRC did with child benefit. How will they avoid similar errors? DWP say they will not automatically suspend benefits based on a single source of data, which is what HMRC did, and that they would not suspend payments while an investigation is ongoing.

The department is also working on a digital solution which will allow people to report a change of circumstances for one benefit and have this information replicated across all benefits that the person is receiving. Sounds like a good way to reduce the administrative burden on claimants. More of this please!

——————————————————

Public Technology has done a deep dive into the DWP's Advance Payments fraud algorithm, something also touched on in the W&P Committee session above. It discusses the controversy surrounding the tool, including the department's admission that it is not working as expected, in part because it disproportionately flags some groups for fraud investigation. Further details of how the tool works were released in December.

So far, the machine learning model has flagged 7,000 requests that were subsequently deemed to be fraudulent by a human reviewer. As the article points out, this is a fairly small proportion of the estimated scale of the problem, though it is believed to be more effective than the previous model of face-to-face meetings with all Advance claimants.

However, the trade-off between effectiveness and potential harms is also highlighted. The model flags some people more than others, and organisations such as Amnesty and the Public Law Project have pointed out that this means it may discriminate against some claimants. The department has committed to adjusting and re-testing the model this year. There's lots more detail in the article - do give it a read. It really exemplifies the constant juggling of fairness, transparency, effectiveness and value for money that tools like the Advances fraud model require.

——————————————————

On that topic, the Government Digital Service (GDS) has updated its framework providing guidance on the considerations government departments must take into account when developing new data and AI technologies. It now includes the requirement to consider privacy, societal impact, environmental sustainability and safety, as well as other concerns such as transparency. The guidance acknowledges that there may need to be 'trade-offs' between the different considerations. That feels like something of an understatement.

——————————————————

Issues of fairness and societal impact are key to the new 'bank spying' powers introduced by the Public Authorities Fraud, Error and Recovery Act. The Act enables mass surveillance of benefit claimants' bank accounts, among other things. Civil society actors raised concerns about the powers, warning of the risk of opaque decisions and of mistaken efforts to recoup money or pursue people for fraud. This article by Big Brother Watch goes into more detail.

——————————————————

This report from my local independent news outlet the Bristol Cable, along with Lighthouse Reports, Liberty Investigates and Wired, delves into a predictive system being used by the public sector in Bristol. The Think Family database brings together dozens of different datasets and is accessible to schools and social workers across the city. It uses the data to generate a 'risk score' for children and young people.

According to the reporting, the risk score is currently only used to ascertain whether a child is at high risk of being NEET (not in education, employment or training), but in the past it was used for other purposes, such as predicting who might be at risk of criminal exploitation. These models led to some children and young people being racially profiled and inaccurately labelled as being involved in criminal behaviour. There are still lots of questions about how the model has been used in the past, the impacts it may have had on children, and how it continues to be used now. I recommend reading the full article.

——————————————————

A quick update on digital ID in the UK - it will now not be mandatory for right to work checks, but one of several options. It seems as though government may be pivoting to positioning digital ID as one of a suite of optional identity and verification services rather than the 'illegal migration'-busting tool it was originally described as.

——————————————————————————

Anna Dent