Digital Welfare State Newsletter - edition 5
November 2025
Welcome to November’s Digital Welfare State newsletter.
Big news here in the UK is the announcement of a new digital ID scheme. It didn’t land very well with the public (nearly 3 million people signed a petition against it) and it sparked many, many hot takes. I won’t repeat those, but in this edition I reflect on the role it might play in digital welfare.
As ever, if you would like to share anything in a future edition - a report, a comment piece, a rant - do get in touch. I’m particularly keen to include more content that isn’t UK-focused. If you have any feedback on the newsletter, or you’d like to collaborate on a project, drop me a line.
I write this newsletter in my spare time with no funding, so if you read and enjoy it please consider giving me a tip.
Anna
P.S. If you missed the first four newsletters, you can find them here.
——————————————————————————
Digital ID and the digital welfare state
At the moment, the only thing the UK digital ID will be mandatory for is proving one’s right to work in the UK, but there are already declarations from ministers that indicate much bigger ambitions. Darren Jones MP has said that digital ID will be a key part of shutting down the ‘legacy state’ and making people’s experiences of public services ‘much better’. Jones has apparently also said he’d like to turn the whole of government into a Monzo-style app, which indicates to me a fundamental misunderstanding of what people need and want from public services.
In October a de facto test run was launched: a digital ID for armed forces veterans to enable them to prove their status and access things like medical support and consumer discounts. A lot of people will be keeping a close eye on how the rollout goes.
The initial press release about the national digital ID does suggest that it will play a part in accessing public services in future; digital ID in this context throws up some interesting issues. On paper, it could be a good thing if it makes it easier to put in a benefits application or report changes in circumstances without having to submit enormous amounts of evidence. But if we take a look at India we can get an indication of how it might actually pan out.
The Aadhaar system in India is often cited as a world-leading digital ID set-up, and it is almost certainly part of the inspiration for the UK scheme: Prime Minister Keir Starmer had discussions with Indian officials about Aadhaar on a recent visit to Mumbai. It was designed to ensure that Indian residents get access to the benefits and services they are entitled to, and to cut out unnecessary waste and corruption. It relies on biometric data (fingerprints and iris scans) to identify people.
While Aadhaar started with a relatively well-defined purpose, which it has fulfilled for many, it has had the opposite effect for many others. People are denied access to services if they are not registered with Aadhaar, but many cannot provide the data required and so are excluded from essential support. Older people, for example, may not have viable fingerprints, and people from some tribal communities do not have birth certificates. It’s also worth noting that people with facial differences can be locked out of systems which rely on facial recognition for ID.
The scope of Aadhaar has slowly crept wider and wider, with more and more organisations using it as their default proof of ID. To all intents and purposes it has become mandatory across public services and many private ones - getting a new SIM card, for example. Serious security issues have arisen, with sensitive personal information being leaked online, and some of its expansion has failed dismally: millions of Indian citizens were denied the right to vote because of a botched attempt to link Aadhaar and voter ID. And rather than cutting fraud, in some cases the misuse of Aadhaar IDs has enabled large-scale fraud by organised gangs.
Aadhaar is now the basis of a much more extensive database of all Indian residents. The more people are excluded from it the further they are pushed to the margins of society, missing out on essential welfare support.
Many commentators and experts are concerned that the UK digital ID will suffer from the same scope-creep as Aadhaar. The implications for privacy, surveillance and exclusion would be huge. Even if it never becomes legally mandated, if enough services and organisations start to require it or make other forms of ID difficult to use, there would be little opportunity to opt out. As seen in India, this could lead to people losing out on financial and other support they are entitled to.
——————————————————————————
Digital ID and fraud
UK digital ID is already being touted as a device to tackle benefit fraud. Politicians are obsessed with fraud, so it makes sense to hitch the ID idea to it (let’s not probe too deeply how it might actually work).
Trouble is, effective fraud prevention isn’t as easy as some would like it to be. It recently emerged that thousands of UK families have had their child benefit payments incorrectly suspended. The government department that administers child benefit (HMRC) was using data from another department (the Home Office) about people’s international travel to monitor if someone had permanently left the country while still claiming child benefit.
Unfortunately, the data was not up to the task: people who left from one airport or port and returned via another were flagged as having left permanently, and lost their child benefit. Some people who didn’t even leave the country were penalised (one family who couldn’t board a flight due to last-minute health issues were still cut off).
Breaking news yesterday - nearly half of the families whose child benefit was suspended were wrongly accused of fraud.
I’m sure it seemed like a good idea at the time: share data between government departments to check whether someone leaves the country and doesn’t come back. But when the data is inaccurate or incomplete, and decisions are made automatically, major mistakes happen. It was left to the individual parents to try to sort the mess out, with all the time and stress that surely involved. The government has now said that it will also cross-check travel data against additional tax data, but whether this will be enough to prevent further mistakes remains to be seen.
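How could this happen? The details of the real matching logic aren’t public, but here is a minimal sketch of how a naive join on exit and entry records could produce exactly this failure. Everything in it - the field names, the port-matching rule, the records - is my own invention for illustration, not a detail from HMRC or the Home Office:

```python
# Hypothetical sketch of naive travel-record matching. This is NOT the
# actual HMRC/Home Office system, whose implementation is not public.
from dataclasses import dataclass
from datetime import date

@dataclass
class TravelRecord:
    person_id: str
    port: str        # airport or seaport code
    direction: str   # "exit" or "entry"
    when: date

def appears_to_have_left(records: list[TravelRecord]) -> bool:
    """Flag a claimant as having left the UK if their last recorded exit
    has no later entry *at the same port* - the kind of faulty join that
    would misclassify anyone who flies out of one airport and comes home
    via another."""
    exits = [r for r in records if r.direction == "exit"]
    if not exits:
        return False
    last_exit = max(exits, key=lambda r: r.when)
    return not any(
        r.direction == "entry"
        and r.port == last_exit.port  # the faulty assumption
        and r.when > last_exit.when
        for r in records
    )

# A family leaves via Heathrow and comes home via Manchester:
trip = [
    TravelRecord("fam1", "LHR", "exit", date(2025, 7, 1)),
    TravelRecord("fam1", "MAN", "entry", date(2025, 7, 15)),
]
print(appears_to_have_left(trip))  # True - wrongly flagged as gone
```

The point isn’t that the real system used this exact rule; it’s that any automated match over messy, incomplete data embeds assumptions like this one, and each assumption is a way for an innocent family to get flagged.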
This case is unusual in a couple of ways. Firstly, we actually know quite a bit about what happened here. Most other digital welfare systems are carefully kept under wraps, with very little in the public domain about how they work or who they have affected. There is still a lot of detail we don’t yet know about this case, but it’s a clear example of the importance of greater transparency for public accountability and redress.
Next, there’s the question of who has been caught up in the problem. I expect many of those affected were unaware they were under this level of digital surveillance, perhaps assuming it only happened to people on out-of-work benefits. Working parents earning decent salaries are eligible for child benefit, and they have far less to lose by complaining in the national press than someone living on the below-subsistence income provided by unemployment or disability benefits. I’d like to think that this case might help to shift public opinion, or at the very least raise awareness that digital surveillance isn’t just something that happens to other people.
The ICO, the regulator in charge of data privacy among other things, has said that it is ‘in touch’ with HMRC, and that privacy concerns must be balanced with the potential benefits of data sharing. This principle will surely need to be applied to digital ID too if it is adopted as an anti-fraud tool. Would digital ID combine more data and therefore be less prone to error if it were used to detect fraud? Or would it come up against the same reality: data is not perfect, it contains flaws and cannot be assumed to be comprehensive, and automating decisions without proper oversight will always be problematic?
I also wonder how many of the automations that I shared in last month’s newsletter (see this doc) involve this kind of cross-departmental data sharing, and what safeguards are in place to ensure its accuracy. I can feel an avalanche of FoIs coming on.
—————————————————————————
Things to read
If you want to read more about Aadhaar and the wider digitisation of the welfare state in India, here are a couple of interesting articles:
This study published by UCL looks at the challenge of digitising physical identity documents, and the practicalities involved in creating digital ID infrastructure.
It’s easy to overlook the human labour needed to make digital systems work. This article from the Foundation for Responsive Governance about the frontline Indian data workers who interact with the public and enter information into digital welfare platforms provides a fascinating insight and establishes the importance of human oversight and discretion.
—————————————————————————
This report from the Centre for Democracy and Technology explores the importance of human oversight in more detail, looking at how it might be effectively applied to the use of AI in digital welfare.
—————————————————————————
Digital ID and digital welfare are part of the wider shift in how countries are run and how we access public services; the mechanisms that underpin this shift are often referred to as Digital Public Infrastructure. This report from FP Analytics (who are quite keen on ‘thought leadership’ as a service, just to warn you) includes some useful links to international examples of DPI, if you can overlook the plug for public-private partnerships that they keep dropping in.
—————————————————————————
DWP is using an algorithm to identify messages which might indicate that an individual is at risk of harm. The algorithm analyses messages written by benefit claimants in their online journal, which they use to communicate with officials. If the algorithm identifies certain words or phrases, it flags the message as high priority for review by a human operator. According to the official entry about the algorithm on the Algorithmic Transparency Register, the model was trained on around 53,500 urgent journal messages and over 5 million non-urgent messages. I am assuming that this is different to the Whitemail model, which analyses letters, as that was described to me as ‘pre-trained’ and I received no details of what data it was trained on (see this and this for more details).
If this journal algorithm and Whitemail are using different criteria and models to identify vulnerable claimants, what does that mean for people who are interacting with the department? Are people being picked up by one method and not the other? Is everyone who needs additional support being identified, and is everyone getting the support they need, given the department is running (at least) two different automations for very similar purposes?
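For readers who haven’t come across this kind of system: the Register entry’s description - flag a message for human review if it contains certain words or phrases - could be as simple as the sketch below. The trigger phrases and names here are entirely invented; DWP has not published its vocabulary or model, and the 53,500-to-5-million training split suggests the real thing is a statistical classifier rather than a plain keyword list:

```python
# Illustrative sketch only. The actual DWP model, its trigger phrases
# and its internals are not public; the phrases below are invented.
URGENT_PHRASES = [
    "about to be evicted",
    "can't afford food",
    "nowhere to sleep",
]

def flag_for_priority_review(message: str) -> bool:
    """Return True if a journal message should be escalated to a human
    operator - here a simple case-insensitive phrase match."""
    text = message.lower()
    return any(phrase in text for phrase in URGENT_PHRASES)

journal = [
    "I uploaded my payslip as requested.",
    "I am about to be evicted and don't know what to do.",
]
escalated = [m for m in journal if flag_for_priority_review(m)]
print(escalated)  # only the second message jumps the queue
```

Even in a toy version the design questions are visible: which phrases count, who reviews the flags, and what happens to the urgent message that uses words nobody anticipated. Run two systems with two different answers to those questions and you get exactly the inconsistency described above.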
—————————————————————————
Speaking of the Algorithmic Transparency Register, DWP has been ordered to publish a list of all of the AI tools that it plans to add to the Register. They had 30 days from the 24th of September to comply. That avalanche of FoIs is going to get bigger!
—————————————————————————
Please share this newsletter with your friends, colleagues and pets, and if you really, really love it please consider giving me a tip: I am putting this together in my spare time and any contribution makes a difference!