Digital Welfare State edition 002

Welcome! 

Welcome to edition 002 of the Digital Welfare State newsletter. Written by me, Anna Dent, it comes out approximately once a month with a round-up of news about the DWS, reflections from me (and hopefully other people) and links to interesting papers, research, commentary and other good stuff. You can also read lots of background and thoughts about the DWS here.  

If you missed the first edition you can find it here. I’ll be taking a break from the newsletter in August, unless something properly earth-shattering happens.

You can subscribe to the newsletter via the form at the bottom of this page.

What’s going on? News from the last few weeks

Amsterdam Smart Check failure shows de-biasing welfare algorithms is impossible

Investigative journalism organisation Lighthouse Reports, in partnership with Dutch newsroom Trouw and the MIT Technology Review, has published an in-depth report into yet another Dutch welfare AI system (see below for details on another).

In 2023 the city of Amsterdam announced it was building a ‘fair’ and ‘unbiased’ algorithm to identify fraudulent claims for welfare benefits. Aware of the history in the Netherlands and elsewhere of biased algorithmic systems, the city planned to implement every piece of ‘ethical AI’ guidance they could to ensure this new system didn’t make the same mistakes.

Digital rights advocates and an advisory board made up of benefit recipients and other advocates said from the start that the new system, Smart Check, would not achieve its aims, and advised the city not to develop it at all.

The city continued working on the system, however, and initially thought they had managed to adjust it to remove bias against particular nationalities. But over time they realised it was generating new biases, and was in fact no better at spotting possible fraud than the human reviewers it was designed to replace.

The officials who worked on the system deserve credit for both their intentions to make an unbiased system, and their transparency in releasing details about how it was designed, which enabled the reporting team to write this story. The city also did the right thing in eventually abandoning the system when it was clear it could not be ‘de-biased’.

But many would say this was an inevitable outcome: that an AI-driven fraud detection system could never be fair. Even when the city carefully implemented all the ethical AI principles, bias was impossible to remove, suggesting the whole premise was deeply flawed. On a technical and ethical level, the system was doomed from the start.

This is not just a Dutch problem. The global political obsession with welfare fraud encourages the creation of this kind of system, which, judging by the publicly available evidence, wrongly targets innocent people every time it is used. Benefit recipients, digital rights advocates, lawyers and researchers point out the flaws, but as in Amsterdam they are not listened to.

In most cases, we know far less than was available to the team that investigated Smart Check, meaning much of the inevitable bias is going unchecked. Governments normally keep the details of fraud detection systems under wraps to avoid ‘helping the scammers’, but this also makes them impossible to properly scrutinise for fairness and bias.

If the tide can’t be turned to stop the creation of these systems, the least we can ask for is more transparency, proper assessment of their impact on rights and fairness, and that the people who are actually affected, i.e. welfare recipients, are systematically and meaningfully consulted.

Things to read

More on Robodebt: I featured the Australian Robodebt scandal in issue 1. This longer article by Dom Moynihan explores it in more detail, and draws out lessons for governments to consider in their own automation efforts. It highlights not only the financial and legal implications when algorithms go wrong (When Algorithms Go Wrong should actually be a new Channel 5 documentary), but the political fallout as well. In Australia, as in the Netherlands (see below), a scandal resulting from a flawed automated system can cause enormous political damage to the administration that oversaw it.

The Orb (not the band): I haven’t got the space or frankly the willpower to explain this in loads of detail, but I recommend doing some reading on Tools for Humanity’s Orb. Backed by Sam Altman of OpenAI / ChatGPT infamy, the Orb is a biometric humanity-verifier. Yes, it’s as weird and dystopian as it sounds.

The reason I’m mentioning it in a newsletter about the digital welfare state is a) it was first trialled in countries in the Global South as a way to distribute ‘welfare’ (actually crypto called Worldcoin) and b) it’s apparently going to be used to provide a Universal Basic Income via the medium of crypto. Tech guys love the idea of UBI as an alternative to current models of the welfare state, as it relieves them of any guilt about wiping out jobs and would justify (in their minds) a huge roll-back of state support. The Orb’s rollout in countries like Kenya has not been without its controversies, as you might imagine.

While I don’t think it’s just around the corner, in my more doom-laden moments I can envisage us all scanning our irises with an Orb to book an appointment with the GP, apply for Universal Credit or collect our pensions.

Oh and the Orb is now in London.

More on DOGE and social security: if you or someone you know has access to the New York Times, I recommend reading this article about the further destruction of the US social security system. False information about fraud rates, unfettered access to the personal data of millions of Americans: none of it is cheerful reading, but it shows the potential for harm when people get hold of digital systems they don’t understand.

Have you heard about…

If you’re new to the DWS, I’ll be sharing some landmark cases and examples of why I think we should worry about AI, algorithms and other digital tools being rolled out across the welfare state.

As I mentioned above, the Netherlands is not new to welfare fraud algorithms. In the 2010s, an algorithmic system called SyRI was in use, designed to identify potential fraud in the child benefit system. Thousands of parents were subjected to investigations because of false allegations against them, prompted by SyRI.

SyRI was judged to be in breach of European human rights law as it was using families’ nationalities as one of the characteristics which could flag them as a fraud risk. You can read more about the case and how the truth was established in this article from the Digital Freedom Fund.

The fallout from the scandal contributed to the national government stepping down, but unfortunately it was not the end of the Netherlands’ algorithm experiments. This Lighthouse article shows how other fraud detection algorithms were targeted at low-income neighbourhoods across the country.

Anna Dent