Digital Welfare State edition 010

DWS Newsletter - edition 10

April 2026

This is the tenth edition of this newsletter - not a milestone I was sure I’d reach! If you’ve been here from the first edition, thanks for sticking with me. If you’re new, hopefully you’ll enjoy being a subscriber.

This month seems to have been relatively light on digital welfare state news (not on other news, I’m well aware), so I’ve got space for a couple of original pieces. One discusses the importance of trust in the way digital welfare systems are designed and deployed, and the other is an overview of a Dutch automated fraud detection system which contributed to the collapse of the government.

As always, if you have anything you’d like to share - international news and commentary included - or if you’d like to collaborate on a project, please don’t be shy about dropping me a line.

Anna

P.S. if you want to read any previous editions of the newsletter you can find them here, and you can join the generous folk who have made a donation towards the costs of putting the newsletter together by giving me a tip. I research and write it without funding or support (or AI).

——————————————————————————

Trust and digital welfare

With digitalisation and automation becoming the norm in welfare systems globally, used in everything from communicating with citizens to assessing eligibility and attempting to predict fraud, our understanding of how they play out in society is ever more important.

Trust between citizens, staff and institutions is essential to the effective functioning of digital systems. Without it, people tend not to use them, potentially missing out on support and income they are entitled to. If staff don’t trust citizens, the relationship between them starts, and continues, on an unproductive footing, and staff time is taken up checking things like eligibility and compliance rather than delivering effective services.

Trust is not a given. In many contexts, digital public services are starting from a low base in terms of public trust. In the UK, for example, polling by the Ada Lovelace Institute and the Alan Turing Institute showed an increase in the proportion of people concerned about the use of AI in assessing welfare eligibility.

In this context, we need to think of trust as a socio-technical phenomenon, built and eroded by a mix of social and technical factors. How does it feel to interact with a service and its staff: is it open and friendly, or guarded and suspicious? Is the digital interface intuitive and easy to use, or disjointed and frustrating? If citizens don’t believe that a digital service will function as advertised, or that the intentions behind it are benevolent, building and maintaining trust will be extremely challenging.

The role of human relationships and human discretion is an interesting one in the digital welfare space. In a service like employment support, frontline staff have traditionally had a level of discretion to take individual circumstances into account. This can increase trust, as people feel listened to and seen as whole, legitimate individuals. But the promise of greater standardisation via digital systems holds the potential to build trust in a different way: without fallible, potentially biased human input, can we be more confident that everyone will be treated equally? Of course, we know that in reality this promise often falls apart and unequal treatment is automated rather than eliminated.

We have seen bias against women, people from migrant backgrounds, low income families and disabled people encoded in digital fraud systems; unemployed people locked out of benefits because facial recognition software doesn’t work reliably; and lives devastated by badly designed tools and institutions reluctant to investigate or undo their mistakes.

When more and more of the essential, basic functions of the welfare state are accessed through digital platforms, assessed partially or wholly automatically, and when more and more decisions are made with limited human oversight, trust needs to be front and centre of how systems are designed and deployed. If they are not trusted or trustworthy (not necessarily the same thing), citizens will withdraw from services and support they need, mistakes large and small will proliferate, and public trust in institutions and government will decline further than it already has. This puts us in dangerous political territory.

——————————————————————————

Have you heard about…?

An algorithm used by the Dutch government to detect and predict fraud in the childcare benefit system led to tens of thousands of parents being wrongly accused. The risk classification model was supposed to flag parents more likely to commit fraud, who were then investigated and required to submit additional information to prove their entitlement. Parents often found it impossible to discover why they had been flagged by the system, or what they were supposed to have done wrong.

Developed against a political backdrop of pressure to crack down on fraud, following cases of fraudulent childminding services and high-profile benefit fraud instigated by Bulgarian gangs, the algorithm was built on top of an already opaque and flawed system.

Parents who were accused and investigated often had to pay back thousands of euros, far more than they were deemed to have been wrongly paid. Many went into debt, losing jobs and homes; children were taken into care; physical and mental health conditions worsened. Years later, many parents are still in debt and trying to rebuild their lives. Women, particularly single mothers, were investigated more frequently than men, with knock-on effects for their children.

The Dutch data protection authority found that the system significantly violated the GDPR and used data in prohibited ways; among other flawed measures, it assigned higher risk scores to people without Dutch citizenship. Because it was a self-learning algorithm, staff did not know exactly how it reached its decisions, or how the model might have changed over time as it updated itself.

When the scale of the mistakes, the harshness of the system, and the government’s efforts to avoid accounting for its decisions came to light, the affair had far-reaching consequences. Parents filed official complaints, ministers resigned from government, and hundreds of millions of euros were pledged in compensation for the parents and children who had been affected.

If you want to read more about it, this article from Politico provides a good overview, and this report from Amnesty focuses on the racial profiling and discriminatory use of nationality as a risk factor. This article (in Dutch) hears from several women who tell their stories of being wrongly accused of fraud.

——————————————————————————

Things to read

This is a thought-provoking article from the US. Portland City Council has been trialling an algorithmic system which aims to identify residents who need help to pay their water bills. Through the machine learning system, people eligible for help will be automatically enrolled, their ability to pay estimated, and a bespoke discount of up to 80% applied to their bills.

The city council are working with an external commercial software supplier to pilot the scheme, and are sharing confidential data about the city’s residents in order for it to work.

It’s an interesting example of a version of ‘dynamic pricing’ appearing in the public sector. Normally used for commercial goods and services, dynamic pricing lets algorithms or AI adjust the price of things depending on market conditions and the profile of the customer. It has been shown to be potentially discriminatory, with, for example, low-income customers less able to shop around for cheaper alternatives. In this case, the algorithm is intended to make bills more affordable for those less able to pay. The city, and the software company, assert that it will reduce debts owed to the city and thereby help it to improve infrastructure. The software company is set to receive 7% of any additional revenue collected as a result of the new model.

However, it’s unclear what criteria are used by the algorithm to decide on the dynamic pricing; as is often the case, the technical details are considered commercially sensitive and therefore confidential. Although the service itself doesn’t collect information on household income, it does use data from a data broker, and the exact implications for data privacy are unclear. The article reports that officials may be questioning whether such a data-hungry system is really needed to achieve their goals. There is ongoing work to assess the risks of bias and other problems with the trial and its system, and officials have noted the importance of building technologies that deserve public trust.

This city-level pilot raises bigger questions: about the use of commercial-style tools in public service settings; about whether resource targeting via technology is actually efficient; about whether citizens are comfortable with potentially swapping privacy for financial support; and about what the role of commercial companies should be in our digital public services. I’m sure these questions are being debated in cities and national governments worldwide as I type.

——————————————————————————

This is an interesting addition to the list of public sector AI and algorithms that have been abandoned. The Finnish benefits agency Kela was trialling an AI system which aimed to find fraudulent pension claims. But in the absence of relevant case law, there was no clear mandate for using AI for such a purpose, nor established guidelines on how it should be used in the public sector more generally. The article gives a useful overview of the debates happening within Finland about AI use in settings such as social security.

——————————————————————————

Anna Dent