Digital Welfare State edition 003
DWS Newsletter - edition 3
September 2025
Welcome to September’s Digital Welfare State newsletter. This edition is a bumper pack of things you might have missed in the last few weeks and months. Think of it as your reading list for the start of the new term. Make a cup of tea, get a biscuit and get stuck in.
Make sure you get to the end - there’s an invitation from the Public Law Project to contribute to their brand new project on litigation in the automated state.
As ever, if you would like to share anything in a future edition - a report, a comment piece, a rant - do get in touch. I’m particularly keen to include more content that isn’t UK-focused. If you have any feedback on the newsletter, let me know. If you’d like to collaborate on a project, drop me a line.
And just to whet your appetite for next month, October will be an FoI special! Lots of juicy and brand new content on Freedom of Information and its role in improving the transparency of the digital welfare state.
If you missed the first two newsletters you can find them here.
Things to read
Global Perspectives on Automated Welfare: Comparative Considerations for Assessing Impacts
This is an excellent paper by Victoria Adelmant of NYU. It’s an international comparison of digital welfare systems, specifically patterns of similar flaws, failures and legal issues with systems which automatically assess eligibility based on data matching and analysis. It is a really important piece of work, drawing out the patterns of use, how and why systems are deployed, similarities in design, and crucially the repeated problems which arise.
By looking at these cases only in isolation, we have not been able to build a compelling narrative or case for why governments should exercise more caution in the use of digital welfare tools, or to examine their reasoning for introducing them. Nor have we generated a shared bank of knowledge to help affected communities and civil society organisations know what to look out for and how to challenge the systems.
I won’t list out all the good bits (I really recommend you read it yourself), but there are a few points in the paper which echo many of my own thoughts and interests:
· These systems are not merely technical updates or quick fixes; they are ideologically driven and shaped. This includes outdated or very narrow understandings of what constitutes poverty and need, paternalistic instincts about how people with limited financial means should be living their lives, and unchallenged assumptions that people who need social welfare support should be subject to intensive automated scrutiny and surveillance. Some examples show an active disdain or disregard for the welfare or rights of those subject to the systems.
· This stuff is not new: the earliest case she cites is from 2013! And yet governments persist in thinking their version will be the one which is foolproof, the one which manages to avoid all the pitfalls.
· The intentional unwillingness to learn from mistakes and problems. It must be intentional because governments are very happy to adopt the latest ideas from each other while simultaneously ignoring the problems.
This last point is particularly interesting to me: why do these ideas and systems spread so easily across countries and continents? Why does every government seem to think they are going to do it right and avoid the pitfalls? What role does procurement play in this - how many of these systems were built in-house, and how many involved a commercial tech company selling the promise of efficiency and effectiveness? And what about the role of international institutions - at least one of the examples in the paper references the World Bank as an enthusiastic supporter of digital welfare. If anyone wants to collaborate on exploring these questions let me know!
——————————
This special edition of Debates in Social Issues in the Journal of Social Welfare and Family Law is great, and all open access. There are five articles on the implications of a digitally mediated welfare system, where people primarily, and sometimes only, engage through a digital interface. This is the introduction to the issue - the other articles are linked down the side.
——————————
This report from Amnesty UK is a wide-ranging analysis of the human rights impacts of the digitisation of social security via Universal Credit. It discusses the impact of digital exclusion, in which claimants do not have secure or consistent access to the digital platform, the extensive digitalisation of claimants’ lives and the surveillance this entails, the potential drawbacks of automated decision making, and the DWP’s increasing use of AI. As per Victoria Adelmant’s paper above, it’s notable how many of the issues raised are not unique to the UK.
It points out, with first-hand evidence from claimants, the negative impacts that digital welfare can have on people’s lives when it does not function as promised, and the difficulties inherent in trying to understand how and why decisions have been made. It calls for an overhaul of DWP’s digital systems to centre human rights, transparency and accountability.
——————————
An interesting paper from LSE explores the kinds of LLMs that local authorities might be using to make decisions about adult social care. It finds that they introduce bias into their summaries of people’s social care needs, downplaying needs when the subject is a woman and amplifying them when the subject is a man, even when the summaries are based on exactly the same information.
——————————
The DWP is using predictive AI to identify parents who might default on their child maintenance payments. While the aim is laudable, predictive AI does not have a stellar track record. How many flagged cases turn out to involve a genuine default? What does the equalities impact assessment say? What criteria contribute to someone being flagged? The more that predictive tools like this are rolled out, the more credible they appear, particularly when we don’t have ready access to information about how they perform; and there are already many in use.
——————————
The DWP is busy: they have also commissioned a prototype digital platform for a new jobs and careers service. There’s a lot in there to like - joined-up jobs and careers advice, a better offer to employers, dismantling recruitment barriers - but I’m not sure I would have chosen Deloitte to deliver it. There are organisations with far more experience and knowledge of how to effectively support jobseekers.
——————————
Speaking of Deloitte, it appears they may have used GenAI in a report for the Australian government which includes references to reports that do not exist. The report containing the mysterious references concerns a damning finding by the Commonwealth Ombudsman on the automated cancellation of income support payments. In 2022 the law in Australia was changed to require a claimant’s circumstances to be considered before their payments were cancelled, but this was not actually carried through into how the service functioned. Nearly 1,000 people had their income support cut off between 2022 and 2024.
——————————
If and when the Fraud Bill (for details see my article) passes into law (it is currently at Report stage in the House of Lords), banks will be obliged to comply with DWP requests to check benefit claimants’ identities and assets. It’s reported that this will begin with a ‘test and learn’ approach.
——————————
Perhaps the best known of the DWP’s automations is the machine learning model it uses to assess fraud risk in requests for advance payments. Advance payments can be requested by a new claimant to cover their costs during the five-week wait for their first payment (don’t get me started on why there has to be a five-week delay). There have been some significant advance payment scams, with organised groups appropriating other people’s identities to request advance payments under the guise of providing a service. Of course, they just make off with the money.
DWP’s ML model is designed to identify potentially fraudulent advance claims. A fairness assessment of the model was released earlier this year. The headline is that there are “minimal concerns of discrimination, unfair treatment or detrimental impact on legitimate claimants arising from the Advances model”. This is despite evidence from last year that it does in fact disproportionately select some claimants for investigation, and evidence in this analysis that some age groups are selected for investigation more than they should be. One reason the department is not concerned is that the final decision is always made by a human. Whether this continues now that the Data Use and Access Act is in place is unknown, as the Act drastically scales back the requirement for human oversight of automated decisions.
Interestingly, the report states that the UC Advances model is the only machine learning model “currently deployed at scale into live service”. We would of course love to know which other ML models, AI tools and automations are deployed at any scale in any part of the service, but this is not in the public domain.
——————————
In more lack-of-transparency news, the Home Office is denying there are significant problems with their digital eVisa system, which is meant to provide real-time proof of immigration status. The potential implications for people when it doesn’t work are huge.
——————————
The ICO is investigating another Home Office system. Two algorithmic tools used in immigration enforcement are under scrutiny for the level of intrusion into people’s private lives, and lack of transparency about how the resulting data is used.
——————————
This Tech Policy Press article by Haakon Huynh sets out the extent to which private technology firms are integrated into public infrastructure in India. While India is far from the only country pursuing this model, Aadhaar and the wider ‘India Stack’ are seen as world-leading and used as a template by many other countries. This is despite well-documented concerns about surveillance and the exclusion of potentially millions of Indians from essential support. The article draws a parallel between Shoshana Zuboff’s surveillance capitalism and a growing model of surveillance government.
——————————
You won’t be surprised to hear there’s more news on the DOGE incursion into the US social security system. Also not a surprise to learn that they have no regard for privacy or data protection. A senior official at the Social Security Administration has blown the whistle on DOGE activities, which include creating a copy of the personal and private data of millions of Americans in a potentially unsecured ‘cloud environment’ (Amazon Web Services), risking identity theft and the possible loss of social security benefits. He points out that bad actors could use the data to target people based on their identity or vulnerabilities.
——————————
And late last week it was revealed that Coventry City Council have signed a deal with Palantir to integrate the company’s data tools into the council’s processes for children who have special educational needs. They are already working together on children’s social care. Many people have expressed major concerns. Palantir works with the Israeli government and supports mass deportations in the USA, among many other things. The ethics and efficacy of automating processes in critical services such as social care are also the subject of much debate. I expect we will see many more of these deals between cash-strapped councils and the tech firms that claim to have all the answers.
And finally
The Public Law Project have launched a brand new project looking at transparency in automated decision making, and they would like your input:
Public Law Project are looking for people to take part in research interviews for their new project ‘Public Law Litigation in the Automated State’. We want to talk to people who have been involved with actual or potential litigation regarding automated decision-making by government, particularly litigators, individual litigants, and front-line and user-led organisations. If you’d be willing to talk to us about any experience you have with public law litigation in the automated state, please contact j.summers@publiclawproject.org.uk or PLLIAS@publiclawproject.org.uk.