Digital Welfare State edition 004

DWS Newsletter - edition 4

October 2025

Welcome to October’s Digital Welfare State newsletter. This is my first special edition - it’s all about Freedom of Information, more commonly known as FoI or FoIA.

A right to ask public authorities for data and other information might seem underwhelming or extremely niche, but FoI requests have been used to uncover hugely important issues. Over 100 countries have some kind of freedom of information law, and 2025 marks the 25th anniversary of the FoI Act in the UK.

FoI is really pertinent to the digital welfare state because most bodies that design and deploy digital welfare systems like to keep the details under wraps, claiming that releasing them would help people to defraud the system.

This edition features a guest spot from Dr Morgan Currie of Edinburgh University, and my personal odyssey through FoIs, as well as the usual wider news about the DWS.

As ever, if you would like to share anything in a future edition - a report, a comment piece, a rant - do get in touch. I’m particularly keen to include more content that isn’t UK-focused. If you have any feedback on the newsletter, or you’d like to collaborate on a project, drop me a line.

P.S. If you missed the first three newsletters, you can find them here.

What does FoI reveal about welfare AI?

Guest spot: how FoI sheds light on automated welfare

Thank you to Dr Morgan Currie from Edinburgh University for contributing this piece about her project with Alli Spring. They took on a huge task to collate and analyse dozens of DWP FoI responses, and built up a fascinating picture of how they work.

Applying for Universal Credit? The DWP is currently using machine learning models (ML, a form of artificial intelligence) to flag UC applications for fraud and error. The department trains its models on data profiles of past fraudsters to catch future ones with similar profiles. Civil society groups are worried that the models could overly target certain groups – and in fact, a DWP fairness analysis found that they do, based on age, disability, marital status and nationality.
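For readers who want to see the mechanics, the sketch below shows the general pattern described above: a classifier is trained on past claims labelled as fraudulent, then used to score new applications. Everything in it (the features, data and threshold) is invented for illustration; it is not the DWP’s actual model.

```python
# A minimal, hypothetical sketch of the pattern described above: train on
# past claims labelled fraud/not-fraud, then score new applications.
# The features, data and threshold are all invented; this is NOT the
# DWP's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "historical claims": five made-up numeric features per claim,
# with label 1 meaning the claim was later confirmed as fraudulent.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# New applications get a fraud score; anything over a threshold is flagged
# for review. If any feature correlates with age, disability, marital
# status or nationality, the flags will skew towards those groups -- the
# fairness concern raised above.
scores = model.predict_proba(X_test)[:, 1]
flagged = scores > 0.7
print(f"{flagged.sum()} of {len(flagged)} test claims flagged for review")
```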

Yet we still know very little about these tools – which teams oversee them, how often they are tested for bias, and what underlying data they use. One way to gain a larger picture of the DWP’s practices is to look at the results of FoI requests. This summer, we searched the website WhatDoTheyKnow, run by UK civic tech organisation MySociety, to find 51 such requests from 2018 up to the present. We were able to get a chronology of some of the tools the DWP is using, some documents showing how they are evaluated, and some information on how they relate to staff roles. We also collected 44 internal documents that the DWP released to requesters (many heavily redacted). You can find our report on this project here.

This method also revealed how the DWP approaches the FoI process. The DWP fully answered only seven of the 51 requests, and another 24 were only partially answered. We see how the DWP cites lawful FoI exemptions to withhold this information, and how some requesters use legal manoeuvres, such as complaints to the Information Commissioner’s Office (ICO), to pursue disclosures through months-long back-and-forth correspondence.

Particularly concerning, the DWP takes an inconsistent approach to answering FoI requests: requesters who took the time to report the DWP to the ICO achieved greater success than other requesters asking for the same information. We argue that the DWP should be more forthcoming with the public about its automated fraud and error detection tools – especially by disclosing information about the internal safeguards deployed to protect claimants from their potential risks – and more consistent in releasing requested information about these technologies.

———————————————————————

Adventures in FoI

As part of this special edition, I’m sharing some of my adventures in FoI. I posed a series of questions to the DWP on the AI tools they are using, and wrote a blogpost a while ago based on my first round of FoIs, which you can read here. I then submitted a second round of questions. I recommend reading that blogpost first so that the following makes sense! 

Doing a successful FoI is a bit of an art, and I didn’t get back everything I wanted. Even when you do get a decent response, it often throws up more questions than it answers. 

Aigent and A-cubed

My initial questions about Aigent and A-cubed, tools for frontline workers to query official guidance and policy and speed up decisions, yielded some answers but still left lots of gaps. I asked for more details on how the tools were tested and evaluated, since the Guardian had reported them as halted, the implication being that they were in some way a failure.

The DWP response was that they were ‘not halted but concluded successfully as planned’, and that the results of ‘a range of different kinds of assessments of the proof-of-concepts’ are being used to improve further iterations of the tools.

A summary of the A-cubed evaluation notes that it was trialled with work coaches in four Jobcentres. Twenty work coaches were interviewed about their experience using the tool; their feedback was generally positive, as they found it saved them time looking for guidance, and some believed it could improve the quality of the service they provide. Others found its responses conflicted with their own knowledge. I assume, but don’t know, that ironing out the discrepancies will be part of the work on the next iteration.

I believe this next iteration is an AI called DWP Ask. I unfortunately asked too many questions about that and my FoI was refused on the grounds that it would take too long to answer. I’ll need to break it down into smaller questions and send them separately (it’s a lot of admin doing FoIs). 

Whitemail

Whitemail, an AI designed to screen customer correspondence and flag potentially vulnerable people who need extra support, was said to be based on a ‘pre-trained’ LLM. I asked for more information about this ‘pre-training’ but didn’t get an answer, other than ‘it is an open source, natural language model’. I asked some technically-minded folk what this might mean: one suggestion is that it might be Meta’s Llama. ‘Open source’ implies free and open to use, but my technical sources suggest that if it is Llama, this does not necessarily mean it’s free, and also that the department’s ability to build on or interrogate the model would be limited.
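For a sense of what an ‘open source, natural language model’ could look like in practice, here is a speculative sketch using a freely downloadable zero-shot classifier from Hugging Face. The model choice and the labels are entirely my assumptions; nothing here is confirmed to be what Whitemail actually runs.

```python
# A speculative sketch of screening correspondence with a pre-trained open
# model. The model and labels are my assumptions for illustration only;
# the DWP has not disclosed what Whitemail actually uses.
from transformers import pipeline  # pip install transformers torch

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

letter = ("I am writing about my claim. Since my husband passed away "
          "I have been struggling to cope and have nowhere to live.")

result = classifier(
    letter,
    candidate_labels=["vulnerable, may need extra support",
                      "routine enquiry"],
)

# Labels come back sorted by score; a sensible system would route
# likely-vulnerable letters to a human support team, not act automatically.
top_label, top_score = result["labels"][0], result["scores"][0]
if top_label.startswith("vulnerable"):
    print(f"Flag for extra support (confidence {top_score:.2f})")
```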

On the other hand, another source tells me that the Department for Transport built and hosted their own model to do something similar, so maybe that’s what the DWP has done. But the DWP’s response doesn’t say either way.

I then asked how it was decided what words, phrases or combinations of words might signal someone is vulnerable: these were ‘defined by the DWP, based on DWP’s definition of a vulnerable customer’, which in turn was ‘based on conversations with operational teams, who support vulnerable customers’. 
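The response doesn’t say how those words and phrases are actually applied, but at the other end of the spectrum from the LLM sketch above, the simplest conceivable implementation is a plain watchlist. Here is a hypothetical sketch, with a phrase list that is entirely my invention:

```python
# A hypothetical sketch of phrase-based vulnerability flagging. The
# watchlist is invented for illustration; the DWP has not disclosed its
# actual words, phrases or combinations.
import re

WATCHLIST = [
    "bereaved", "victim of crime", "addiction",
    "terminally ill", "homeless",
]

def vulnerability_signals(text: str) -> list[str]:
    """Return any watchlist phrases found as whole words in the text."""
    lowered = text.lower()
    return [phrase for phrase in WATCHLIST
            if re.search(r"\b" + re.escape(phrase) + r"\b", lowered)]

letter = "Since being made homeless I have struggled with addiction."
signals = vulnerability_signals(letter)
if signals:
    print("Possible vulnerability signals:", signals)
```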

So I went to look up the definition of a vulnerable customer. I found something (eventually), though I don’t know if it is the most up-to-date version. It states that the description of vulnerability for DWP purposes is “an individual who is identified as having complex needs and/or requires additional support to enable them to access DWP benefits and use our services”. It goes on to list life events and personal circumstances which may contribute to vulnerability, including bereavement, being a victim of crime, addiction, certain health conditions and disabilities.

The Data Protection Impact Assessment (DPIA) for Whitemail gives a bit more information about how it works. The DWP receives batches of customer correspondence from the scanning supplier (I don’t know who this is because it was redacted). These are encrypted and analysed to identify whether someone is a vulnerable claimant, presumably by checking for the personal circumstances or life events outlined in the vulnerability definition. The data analysed might include names, home addresses, health-related information, bank account details, credit card details, racial and sexual characteristics and children’s dates of birth, and the documents might include medical notes, passports and wage slips.

Lots of the DPIA was redacted, including pretty much the whole section on how and where the personal data will be held and the security measures that will be in place, and most of the section on any organisations other than DWP that will be involved in processing the data.

Another bit that caught my eye is in the section relating to how individuals will know that their personal data is being processed. I’ll quote it: “Our use of personal data is covered by: … DWP uses of profiling. DWP uses profiling to help: call handling and providing services… tailor support for individuals… improve DWP services”. In the department’s Personal Information Charter it also states that they use profiling to “detect and prevent fraud and error”. The mention of profiling sent me off down another rabbit hole, but that’s for another time.

The next section, which asks whether individuals have any choice about being part of the initiative, states that because customers have submitted evidence as part of their claim, and the reason for DWP processing their claim has not changed, the individual does “not need to know about their involvement”. While this might be true from a DPIA perspective, can we really say that it is true ethically? And in the section about profiling, under ‘explain how you will notify individuals about the profiling’ the response is “We will pick this up once the business have agreed with our identification methods for a vulnerable claimant”. Should that not be part of the DPIA process? 

The whole section on risk assessment is redacted, so I have no idea how the department rates the data protection risks of Whitemail.  

The Equalities Impact Assessment is pretty scant - basically it concludes that Whitemail will be beneficial in fostering good relations and advancing equality of opportunity regarding age and disability, and no adverse effects are noted.  

Other automations

In April there was a written question from Terry Jermy MP to the Secretary of State for Work and Pensions asking:

“What assessment she has made of the potential merits of increasing (a) digitisation and (b) automation in the provision of welfare services”.

The answer, from Andrew Western MP, states that “58 automations have been deployed across the DWP, with 38 of them currently active. These automation processes have handled a total of 44.46 million claims and saved 3.4 million staff hours”. 

Naturally I could not let this news go unnoted, so I put in an FoI request to find out what these 58 automations actually are. I got a lovely list in reply. (NB I have tried to embed this into this email so that you don’t have to download or open a PDF, but it was beyond my technical abilities).

They range from the pretty straightforward-sounding Get Your State Pension, which copies data from an online application and enters it into the DWP pensions system, to the much more consequential Third Party Deductions, which applies deductions to benefits when the claimant is in debt to a third party, and Employment Support Allowance Fit Notes, which “processes medical Fit Notes to ensure benefit eligibility”.
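To give a sense of how modest the ‘straightforward’ end of that range is, Get Your State Pension, as described, amounts to copying fields between systems. A hypothetical sketch (the field names and target schema are invented):

```python
# A hypothetical sketch of the simplest kind of automation on the list:
# copying fields from an online application into another system's record.
# All field names and the mapping are invented for illustration.

online_application = {
    "full_name": "Jane Doe",
    "date_of_birth": "1958-04-02",
    "national_insurance_number": "QQ123456C",
}

# Map web-form field names onto the (hypothetical) pension system's schema.
FIELD_MAP = {
    "full_name": "claimant_name",
    "date_of_birth": "dob",
    "national_insurance_number": "nino",
}

pension_record = {FIELD_MAP[field]: value
                  for field, value in online_application.items()}
print(pension_record)
# {'claimant_name': 'Jane Doe', 'dob': '1958-04-02', 'nino': 'QQ123456C'}
```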

Many of these would warrant their own FoI investigation: what exactly is automated, and how? Where are the DPIAs and Equalities Impact Assessments? How did they come up with that claim of 3.4 million staff hours saved?

I also think they have been a bit sneaky with the definition of ‘automation’. We know (see last newsletter) that there is at least one predictive tool in use by the Department which assesses Advance claims for potential fraud. That is not on this list, which makes me think that either for administrative purposes, or to keep certain things under wraps, some (more complex or controversial?) automations and AI have not been included. 

Doing all of these FoIs could become a full-time job. I’ve really only scratched the surface with these requests, and as I said, many of the responses just stir up more questions rather than providing a definitive answer. It shouldn’t be this difficult or time-consuming to understand what technologies are being used by a public service, and it shouldn’t be left to people like me to do in their spare time!

Things to read (and watch)

This webinar hosted by MySociety in August was a great run-through of how FoI can be used to reveal more about how AI is being used in public services. Morgan Currie spoke about the FoI project she shares in this newsletter; Gabriel Geiger spoke about his work at Lighthouse Reports on welfare AI in Europe (if you haven’t read these investigations already, you really should); and Jake Hurfurt from Big Brother Watch shared some of the work they have done on AI transparency.

—————————————————————————

Spanish transparency campaigning organisation Civio has successfully challenged the Spanish government’s secrecy around a social security algorithm. The BOSCO system, which automatically assesses eligibility for a payment meant to help people with the cost of their bills, seemed to be wrongly withholding payment from significant numbers of people. Civio requested access to the algorithm’s source code, but the government claimed that releasing it would jeopardise intellectual property. Civio have just won their case, with Spain’s Supreme Court ruling in their favour. The BOSCO system made fully automated decisions, without any human intervention - something that is likely to happen more often in the UK now that the Data (Use and Access) Act is in place.

—————————————————————————

Deaf people are being put at risk because of the flawed procurement of AI sign language technologies. The rapid roll-out of AI across the public sector includes the use of AI-driven interpretation tools, including those using sign language, in public services such as health and education. Deaf users and communities are not being sufficiently involved in the choice and deployment of systems, risking communication failure and exclusion from vital public services. This report from the Minderoo Centre for Technology and Democracy highlights the problem.

—————————————————————————

This report from the Work and Pensions Committee includes details about a forthcoming ‘Jobcentre in your pocket’ digital tool from DWP. Planned for 2028, it will bring together jobs and careers support and will “use up-to-date technology to offer personalised support”. More details will emerge in time I’m sure. The report also mentions that Jobcentre staff are using Copilot to help with CV writing for customers. 

—————————————————————————

A related but seemingly different digital tool has been announced: an AI agent to help citizens get ‘life admin’ done. The agent would be able to interact directly with government services on the user’s behalf, taking on “boring life admin by dealing with public services on your behalf – from filling in forms to completing applications and booking appointments”. One of the first areas the agent will be deployed in is tasks related to employment, education and skills, where it will provide tailored advice. How this might interact with the Jobcentre in your pocket project is unclear.

—————————————————————————

Honestly, every time I think I’ve put a newsletter to bed another story crops up, normally from the DWP. They have signed a big deal with IBM to deliver AI projects. It includes “support from subject matter experts in AI to build a deep and continually updated understanding of AI capabilities mapped against DWP’s key business challenges and opportunities” and support from IBM for the department as it “identifies, builds, and tests proof-of-concepts to investigate how AI can help the department address ‘to-be-identified’ business problems”. It’s an interesting choice to award the contract to a US firm, given the ongoing geo-technological-political dramas over there and the apparent interest in UK sovereign AI (definitions of which vary, to put it politely).

—————————————————————————

Public Technology has a further rundown of DWP digital activity - I hope someone has a really good Gantt chart of all this stuff, it’s hard to keep track of.

—————————————————————————

This is a really interesting report from the Digital Government Hub at Georgetown University in the US, looking at the challenges of identity verification when claiming digital benefits. It highlights the privacy compromises people face, and the added difficulties when someone does not have secure, stable access to the internet.

—————————————————————————

Related is this news that many millions of UK benefit claimants are struggling with digital systems, with a third saying they cannot navigate online claims without help.

—————————————————————————

The resources, advice and guidance on tackling digital welfare harms from the Benefits Tech Advisory Hub look super useful. It’s US-focused, but there are some transferable ideas and important solidarity for communities experiencing similar issues.

—————————————————————————

And finally

There’s been a big hoo-ha about digital ID in the UK, with an announcement that the ‘Britcard’ is going to be introduced for all adults. Many, many people have a take on this. I expect I’ll tackle it in the next newsletter, as it relates to digital welfare.

—————————————————————————

Please share this newsletter with your friends, colleagues and pets, and if you really, really love it please consider giving me a tip: I am putting this together in my spare time and any contribution makes a difference!

Anna Dent