Digital Welfare State newsletter edition 001

I’ll be publishing my newsletter as blog posts a week or so after I send them out, so if you’ve missed an email you’ll still be able to find everything here!

What’s going on? News from the last few weeks

UK Data legislation: millions could have their bank details routinely checked

Here in the UK, two pieces of important legislation are moving through parliament.

The Data Use and Access Bill (DUAB), among many other things, proposes to remove human oversight from many cases of automated decision-making in the public sector. Human oversight is definitely not the answer to every problem with automated decisions, but it does serve a couple of purposes.

It inserts a pause in the chain of automation where a human takes some accountability for the decisions made and their impact on the person receiving the decision. The ‘human in the loop’ can, in theory, step in to investigate, challenge or change an automated decision, which could reduce mistakes, bias and unfair exclusions or penalties. 

Its second benefit is more symbolic: it acknowledges that automated decisions are NOT foolproof, and that we can’t simply wave them all through with absolute confidence in their accuracy or fairness. 

The changes proposed under the Bill would limit human oversight to decisions which use certain categories of data, rather than taking account of the decision’s impact: the need for a human decision-maker would not depend on the risk of a wrong decision causing harm.

This is where the DUAB overlaps with the Public Authorities (Fraud, Error and Debt) Bill. One of the most contentious aspects of the Fraud Bill is the proposal to require banks to routinely scan benefit claimants’ bank accounts for as-yet undefined signals of fraud or error. Any accounts that are flagged will be investigated by DWP officials. The Bill talks of the importance of human oversight in the process of investigating and clawing back overpayments, but that oversight is enshrined only in DWP policy, not in the draft legislation. It could easily be removed in future, opening up the potential for digital surveillance, fraud investigation and the recouping of money to be entirely automated.

If you want to know more, Joseph Summers from the Public Law Project has more to say about the Data Bill here, and I’ve written about the Fraud Bill for Computer Weekly.

US Social Security: DOGE gets access to millions of people’s personal data 

In the US, the Supreme Court has granted DOGE unrestricted access to sensitive personal data held by the Social Security Administration, despite ongoing legal challenges. The stated aim of DOGE’s work is to “carry out commonsense efforts to eliminate waste, fraud and abuse and modernize government information systems.” How it intends to use its access to the records of millions of US citizens on deeply personal matters is not yet clear.

In April, the Social Security website crashed multiple times, and staff there said they feared the system would completely collapse as a result of DOGE’s ambition to migrate Social Security data and rewrite the agency’s code.

Actual rates of Social Security fraud appear to be low, with errors often caused by an already-understaffed team. An anti-fraud algorithm introduced for phone claims found that only a tiny percentage showed any indicators of intentional fraud, despite claims from DOGE staff that 40% of phone claims came from organised fraudsters. The main outcome of the new checks was a drastic increase in waiting times.

Cutting thousands of agency jobs, rewriting code they don’t understand, and closing down phone lines and in-person offices are highly unlikely to have any impact on fraud, but almost certain to exclude people from the support they are entitled to receive.

Things to read

This extract from the new book by Emily M. Bender and Alex Hanna, The AI Con, is a neat, if depressing, summary of some of the problems that can be caused by governments’ use of AI. From official government chatbots that suggest landlords can break the law, to GenAI systems designed to help tackle homelessness, there are many, many examples of the wrong technologies being used in the wrong places in the public sector.

A sobering article about an algorithmic system in New York which claims to identify at-risk children. Among the variables it uses to classify a family are the neighbourhood they live in, the age of the mother and how many siblings the child under investigation has. Rights organisations are concerned that families in poverty and parents of colour are more likely to be pinpointed for investigation by the algorithm. Families flagged by the system are not told that the algorithm is the reason, so they are unlikely ever to know why they’ve been deemed high risk. An internal audit by the department using the system recognised that it is likely to replicate existing biases, but the department has so far continued to use it.

This article by Rosa Curling from Foxglove spells out the extent to which private sector tech companies have inserted themselves into our public services. They are no longer simply suppliers of off-the-shelf software, but deeply involved in shaping public policy. Firms like Palantir are increasingly integral to critical services like the NHS. Whether they live up to their promises is difficult to quantify, and it is highly debatable whether the potential benefits are worth handing over our personal data and control of our public services.

Have you heard about…

If you’re new to the DWS, this is the section where I’ll be sharing landmark cases and examples of why I think we should worry about AI, algorithms and other digital tools being rolled out across the welfare state.

The ‘RoboDebt’ scandal in Australia is perhaps the most expensive DWS mistake (so far). An automated system wrongly flagged hundreds of thousands of people in receipt of benefits as being in debt to the government. People struggled to clear their names and have the debt cancelled, suffering immense stress and hardship in the process. Eventually, the Australian government was forced to admit its mistakes: it not only had to repay debts of over $700 million, it also paid out over $1 billion to settle a class action lawsuit.

A Royal Commission into the system and its devastating impacts reported in 2023: if you want to read more about it, the full report and recommendations are online. It shouldn’t take a Royal Commission to recommend that policies and services should have “a primary emphasis on the recipients [they are] meant to serve”, but there you go. It neatly sums up why so many digital welfare policies and services cause harm and fail to deliver: the people they deal with are at the bottom of the list of people to please.

Anna Dent