UK Government Criticised for Failing to Ensure Transparency in AI Use

The UK government’s pledge to make its use of artificial intelligence (AI) transparent has hit an alarming snag. Despite the announcement in February that all government departments would be required to register their AI systems, not a single department has complied. This lack of transparency raises significant concerns about the unchecked deployment of AI in public services, which could affect millions of lives.
What is the AI Transparency Problem?
AI is already influencing government decision-making in areas such as benefits, immigration enforcement, and even policing. Yet, only nine algorithmic systems have been logged on the mandatory public register, and none of them are from high-stakes departments like the Home Office or the police. This glaring gap highlights a troubling disconnect between policy and practice.
Recent reports reveal that public sector organisations have awarded dozens of AI-related contracts, including a £20 million facial recognition software initiative by a police procurement body. Transparency, a crucial element in fostering public trust, nonetheless remains conspicuously absent.
The Growing Role of AI in Government Functions
AI’s role within the public sector continues to expand. Notable examples include:
Department for Work and Pensions (DWP)
The DWP employs AI for tasks such as detecting fraud in Universal Credit claims and summarising large volumes of evidence for work coaches in jobcentres. Despite the scale of these deployments, none of the systems appears on the register.
Home Office
AI is being used in immigration enforcement, including a controversial “robo-caseworker” system. Critics claim the system operates without adequate human oversight, yet it has not been disclosed as part of the mandatory transparency initiative.
Policing
Several police forces deploy facial recognition technology to identify criminal suspects. Proposals such as the £20 million facial recognition contract have reignited fears of widespread biometric surveillance.
These examples underline AI’s increasing integration into government operations, making the lack of transparency even more disconcerting.
Why Transparency in AI Use Matters
The risks associated with AI adoption aren’t hypothetical. Prominent failures, such as the Post Office’s Horizon IT scandal, demonstrate how misused technology can have devastating consequences for people’s lives. Experts caution that unchecked AI systems could produce discriminatory outcomes or exacerbate existing inequalities.
Imogen Parker, an associate director at the Ada Lovelace Institute, said, “The public sector’s lack of transparency isn’t just keeping citizens in the dark; it risks undermining public trust in AI technologies altogether.” Transparency is a vital tool to ensure that algorithms serve the public rather than harm it.
Missed Opportunities for Public Accountability
The current situation has prompted criticism from various quarters. Peter Kyle, Secretary of State for Science, Innovation and Technology, admitted, “The government hasn’t taken seriously enough the need to be transparent in its use of algorithms.” Kyle acknowledged that public trust hinges on ensuring that AI systems are designed and used to serve citizens’ best interests.
Yet despite the government’s claims of prioritising transparency, public bodies have listed only three algorithms on the national register since late 2022. They are:
- A Cabinet Office system used to identify records of long-term historical value.
- An AI-powered camera analysing pedestrian crossings in Cambridge.
- A platform allowing NHS patients to share reviews about healthcare services.
The gulf between the government’s commitments and the handful of systems actually recorded casts doubt on its resolve to prioritise public accountability.
The Cost of Secrecy in AI Use
Since February, 164 contracts referencing AI have been signed by public bodies, according to Tussell, a contracts monitoring firm. These include agreements with major companies like Microsoft, Meta, and Google. Google Cloud, for instance, funded a report suggesting that increased deployment of generative AI could save the public sector £38 billion annually by 2030. However, such projections mean little without robust mechanisms to ensure ethical deployment and accountability.
AI contracts range widely in purpose, from improving local council efficiency to optimising education technology. But the opacity around these tools undermines their potential benefits. Madeleine Stone from Big Brother Watch described this secrecy as a “serious threat to data rights” and renewed calls for greater transparency.
The Broader Implications for the Public Sector
The government’s failure to enforce its own transparency requirements raises broader concerns. For organisations like the NHS, which is investing heavily in large-scale data and AI platforms, including a £330 million deal with the US company Palantir, public scrutiny is crucial. Critics worry that weak oversight could erode patient privacy and trust.
Similarly, the rollout of AI chatbots such as Redbox within government departments underlines the importance of documenting and overseeing technologies that can influence high-level decision-making.
The Path Forward
To restore public trust and maximise the social benefits of AI, experts and advocacy groups argue that the UK government must urgently address its transparency shortcomings. Concrete actions include:
- Publishing all AI systems on the public register, ensuring every active system is fully disclosed (a hypothetical sketch of such a record follows this list).
- Strengthening oversight mechanisms to independently evaluate the effectiveness and ethical implications of AI tools.
- Enabling public scrutiny by establishing accessible platforms that let citizens understand how these technologies affect their lives.
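To make the idea of “registering” an AI system more concrete, the sketch below shows the kind of information a transparency record could capture. It is a minimal illustration in Python: the field names and the completeness check are assumptions made for this example and do not reproduce the government’s actual Algorithmic Transparency Recording Standard schema.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class TransparencyRecord:
    """Hypothetical sketch of a public algorithmic transparency record.
    Field names are assumptions for illustration, not the official schema."""
    system_name: str
    owning_department: str
    purpose: str
    decision_role: str      # e.g. "decision support" or "fully automated"
    human_oversight: str    # how and when a person reviews the system's outputs
    data_sources: list[str] = field(default_factory=list)


def missing_fields(record: TransparencyRecord) -> list[str]:
    """Return the names of any free-text fields left empty, as a simple
    completeness check before a record is published."""
    return [name for name, value in asdict(record).items()
            if isinstance(value, str) and not value.strip()]


if __name__ == "__main__":
    record = TransparencyRecord(
        system_name="Example benefit-fraud screening model",
        owning_department="Example Department",
        purpose="Flag benefit claims for manual review",
        decision_role="decision support",
        human_oversight="A caseworker reviews every flagged claim before action is taken",
        data_sources=["claim history", "declared income"],
    )
    print(json.dumps(asdict(record), indent=2))       # the record as it might be published
    print("Missing fields:", missing_fields(record))  # [] when the record is complete
```

In practice, departments would publish records of this kind to the existing register rather than build new tooling; the point of the sketch is simply that disclosure amounts to filling in and publishing a short, structured description of each system.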
The Department for Science, Innovation and Technology has said that several new records will be published soon. However, restoring credibility will require more than symbolic gestures: it demands a sustained effort to close the transparency gap.
Building AI Accountability in Governance
Misuse of AI tools in governance, or secrecy around them, risks eroding public confidence in both the technology and the institutions deploying it. If the UK government wants to harness AI’s full potential, it must act now to keep accountability and transparency at the forefront of this transformation.