I’m big on racist AI.
Government officials are using artificial intelligence (AI) and complex algorithms to help decide everything from who gets benefits to who should have their marriage licence approved, according to a Guardian investigation.
The findings shed light on the haphazard and often uncontrolled way that cutting-edge technology is being used across Whitehall.
Civil servants in at least eight Whitehall departments and a handful of police forces are using AI in a range of areas, but especially when it comes to helping them make decisions over welfare, immigration and criminal justice, the investigation shows.
The Guardian has uncovered evidence that some of the tools being used have the potential to produce discriminatory results, such as:
An algorithm used by the Department for Work and Pensions (DWP) which an MP believes mistakenly led to dozens of people having their benefits removed.
A facial recognition tool used by the Metropolitan police which has been found to make more mistakes recognising black faces than white ones under certain settings.
An algorithm used by the Home Office to flag up potential sham marriages, which has been disproportionately selecting people of certain nationalities.
Artificial intelligence is typically “trained” on a large dataset and then analyses that data in ways which even those who have developed the tools sometimes do not fully understand.
If the data shows evidence of discrimination, experts warn, the AI tool is likely to lead to discriminatory outcomes as well.
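The mechanism the experts describe can be shown with a deliberately tiny sketch. Everything below is invented for illustration (it is not any department's actual system, data, or code): a "model" that simply learns each group's historical approval rate from biased past decisions will faithfully reproduce that bias when asked to flag new applicants.

```python
# Toy illustration of biased training data producing biased outcomes.
# All records are fabricated for this example.

# Historical decisions: (group, approved) pairs.
# Group "B" was historically refused far more often than group "A".
history = ([("A", True)] * 90 + [("A", False)] * 10
         + [("B", True)] * 40 + [("B", False)] * 60)

def train(records):
    """'Train' by learning each group's historical approval rate."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def flag_for_scrutiny(model, group, threshold=0.5):
    """Flag an applicant if their group's historical approval
    rate falls below the threshold."""
    return model[group] < threshold

model = train(history)
print(model)                          # {'A': 0.9, 'B': 0.4}
print(flag_for_scrutiny(model, "A"))  # False: group A is never flagged
print(flag_for_scrutiny(model, "B"))  # True: every B applicant is flagged
```

Note that no one wrote "discriminate against group B" anywhere in the code; the disparity comes entirely from the historical data, which is why auditing training data matters as much as auditing the algorithm itself.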
Rishi Sunak recently spoke in glowing terms about how AI could transform public services, “from saving teachers hundreds of hours of time spent lesson planning to helping NHS patients get quicker diagnoses and more accurate tests”.
But its use in the public sector has previously proved controversial, such as in the Netherlands, where tax authorities used it to spot potential child care benefits fraud, but were fined €3.7m after repeatedly getting decisions wrong and plunging tens of thousands of families into poverty.
Experts worry about a repeat of that scandal in the UK, warning that British officials are using poorly understood algorithms to make life-changing decisions without the people affected by those decisions even knowing about it. Many are also concerned about the abolition earlier this year of an independent government advisory board which held public sector bodies accountable for how they used AI.
The NHS has used AI in a number of contexts, including during the Covid pandemic, when officials used it to help identify at-risk patients who should be advised to shield.
Wait… AI was used during the “Covid pandemic”???
Who would have thought…?
Do Palestinians deserve to be free? pic.twitter.com/bUa8uxU73S
— Ramy Abdu| رامي عبده (@RamAbdu) October 17, 2023
I got the same results pic.twitter.com/kNyQPhoAld
— Crosbie (@jecrosbie) October 22, 2023
I asked ChatGPT if Palestinians deserve equal rights, then I asked if Israelis deserve equal rights 🥲 pic.twitter.com/m5xett6Fo3
— Grave Jones (@iamgravejones) October 22, 2023