A police force has defended its use of facial recognition technology after figures revealed the software had made thousands of mistakes.
Ahead of the UEFA Champions League Final in Cardiff last June, South Wales Police announced it would be using cameras to scan faces in the crowds and compare them against a database of custody images to identify potential troublemakers.
As 170,000 people descended on the Welsh capital for the game between Real Madrid and Juventus, 2,470 potential matches were identified.
However, according to data published on the force's website, 2,297 of those matches – around 93% – were found to be incorrect.
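The false-positive rate implied by these figures can be checked directly from the published numbers (a minimal sketch; both totals are taken from the force's own data):

```python
# Sanity-check the false-positive rate from South Wales Police's published figures.
total_matches = 2470    # potential matches flagged at the Champions League final
false_positives = 2297  # matches later found to be incorrect

rate = false_positives / total_matches * 100
correct_matches = total_matches - false_positives

print(f"False-positive rate: {rate:.1f}%")   # just under 93%
print(f"Correct matches: {correct_matches}")
```

That leaves 173 matches that were genuine out of the 2,470 flagged.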
South Wales Police blamed the high number of "false positives" on "poor quality images" supplied by agencies including UEFA and Interpol, as well as on the fact that it was the force's first major deployment of the technology.
It said that "no facial recognition system is 100% accurate" but that the technology had led to more than 450 arrests since its introduction.
The force also said no one had been arrested after an incorrect match.
A South Wales Police spokesman said: "Over 2,000 positive matches have been made using our 'identify' facial recognition technology with over 450 arrests.
"Successful convictions so far include six years in prison for robbery and four-and-a-half years' imprisonment for burglary. The technology has also helped identify vulnerable people in times of crisis.
"Technical issues are common to all face recognition systems, which means false positives will be an issue as the technology develops.
"Since initial deployments during the European Champions League Final in June 2017, the accuracy of the system used by South Wales Police has continued to improve."
Figures also revealed that 46 people were wrongly identified at an Anthony Joshua fight, while there were 42 false positives from a rugby match between Wales and Australia in November.
The force also said it had considered privacy issues "from the outset", and had built in checks to ensure its approach was justified and proportionate.
However, civil liberties campaign group Big Brother Watch criticised the technology.
In a post on Twitter, the group said: "Not only is real-time facial recognition a threat to civil liberties, it is a dangerously inaccurate policing tool."