The use of facial recognition technology is advancing with few controls
February 2026
One of the defining features of the Chinese state is its massive use of facial recognition technology throughout China: a vast system of millions of cameras used to monitor every movement of its citizens. No one can move or meet anyone without it being observed and logged by the state. It is the ultimate example of the panopticon in a modern setting.
Some are remarkably relaxed about its use in the UK, believing claims that it will be properly controlled and used only to catch criminals, drug dealers and the like, so innocent people have nothing to fear. Read on …
Right to Privacy
The government’s Biometric Technology Consultation (which closes on 12 February) aims to help develop a new legal framework for the use of facial recognition and similar technologies by law enforcement. Despite the landmark UK Court of Appeal ruling in 2020, which found South Wales Police’s use of automated facial recognition (AFR) technology unlawful, police forces in different parts of the country have increased their use of Live Facial Recognition (LFR). As these systems hold the biometric data of huge numbers of people, some concerns raised by human rights groups, including Amnesty and Liberty, are briefly summarised here.
The first concern is that a new legal framework should apply to all use of ‘biometric technologies’ by law enforcement and other organisations, and should be transparent. Complex mathematical processing is used to identify facial features and generate ‘similarity scores’, but the internal logic of how a match is calculated is hidden from both the police operators and the public. The authorities cannot explain the specific basis for an intervention, nor account for whether, or why, the technology produces biased or inaccurate results.
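To illustrate why operators see a number rather than a reason, the sketch below shows, in simplified form, how such systems are commonly described as working: a face image is reduced to a numeric vector (an ‘embedding’), compared against a stored vector, and an alert is raised if the score crosses a threshold. Everything here is an assumption for illustration only (the 128-value embedding, the cosine-similarity measure, the 0.80 threshold and the random stand-in data); it is not any vendor’s actual method.

import numpy as np

def similarity_score(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two face-embedding vectors, rescaled from [-1, 1] to [0, 1].
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return (cos + 1.0) / 2.0

# Illustrative threshold only: real systems use vendor-chosen values that are rarely published.
MATCH_THRESHOLD = 0.80

rng = np.random.default_rng(0)
probe = rng.standard_normal(128)            # stand-in for an embedding from a live camera frame
watchlist_entry = rng.standard_normal(128)  # stand-in for an embedding stored on a watchlist

score = similarity_score(probe, watchlist_entry)
if score >= MATCH_THRESHOLD:
    print(f"Alert raised: score {score:.2f} (the operator sees a number, not a reason)")
else:
    print(f"No alert: score {score:.2f}")

The point of the sketch is that the output is a bare score: nothing in it records which facial features drove the result, so neither the operator nor the person stopped can interrogate why the system matched them.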
Second, it is concerning that the use of Facial Recognition Technology (FRT) is becoming widespread and easily accessible, with retail outlets taking on a quasi-law-enforcement role, aided and supported by the police and drawing on the same or similar databases. It is becoming normalised in schools and in commercial and retail settings, with information flowing between sectors under a patchwork of inconsistent laws which the public does not understand and finds almost impossible to challenge.
This was demonstrated in a recent Guardian report (6 February) on the apprehension of a customer, and his removal from a Sainsbury’s store in Elephant and Castle, after staff wrongly matched him with a photo of a different customer flagged by their Facewatch camera. To prove his innocence he had to apply to the company using a QR code and submit a photo and a copy of his passport before it declared him not to be on its blacklist.
The following is a list of factors of concern to human rights groups:
– Transparency: how and when the technology is being used, and whether clear, accessible information about rights is provided.
– Whether biometric data is acquired overtly or covertly.
– Whether it is collected voluntarily or involuntarily. Pervasive monitoring is leading to the normalisation of suspicionless surveillance.
– The subject’s status: whether someone is the intended subject of a police investigation or an innocent bystander walking past a camera.
– Who has access to the data and the results.
– The space, context and location of deployment. Expectations of privacy vary significantly between a quiet park and a busy thoroughfare.
– Whether the system is used to make inferences about a person’s internal state, their emotion or intent.
– Whether the interference is demonstrably ‘necessary’ and ‘proportionate’ to a legitimate aim, such as the prevention of serious crime.
– Whether assessments consider the Public Sector Equality Duty (PSED) to ensure the technology does not have a discriminatory impact.
– Algorithmic bias: whether the system performs differently based on race, gender or age, which could lead to the over-policing of marginalised communities.
– Watchlist bias: whether the criteria for inclusion are scrutinised to ensure groups are not disproportionately targeted based on protected characteristics.
– Bias in the interpretation of identification patterns that are claimed to provide predictive evidence.
Human rights groups say that any new regulation should protect privacy and limit data-sharing between public authorities, law enforcement and private companies, and that, as the state gains powerful new ways to monitor citizens, a strong and resilient oversight body with true independence is needed to protect human rights and dignity.

