Facial recognition system, concept image: portrait of a young man. Photograph: Getty Images/iStockphoto

Facial recognition… coming to a supermarket near you

The technology is helping to combat crimes police no longer deal with, but its use raises concerns about civil liberties

Paul Wilks runs a Budgens supermarket in Aylesbury, Buckinghamshire. Like most retailers, he’d had problems with shoplifting – largely carried out by a relatively small number of repeat offenders. Then, a year or so ago, exasperated, he installed something called Facewatch. It’s a facial-recognition system that watches people coming into the store; it holds a database of “subjects of interest” (SOIs), and if it recognises one, it sends a discreet alert to the store manager. “If someone triggers the alert,” says Paul, “they’re approached by a member of management and asked to leave, and most of the time they duly do.”

Facial recognition, in one form or another, is in the news most weeks at the moment. Recently, a novelty phone app, FaceApp, which takes your photo and ages it to show what you’ll look like in a few decades, caused a public freakout when people realised it was made by a Russian company and decided it was using their faces for surveillance. (It appears to have been doing nothing especially objectionable.) More seriously, the city authority in San Francisco has banned the use of facial-recognition technologies by the police and other government agencies; and the House of Commons science and technology committee has called for British police to stop using it as well until regulation is in place, though the then home secretary (now chancellor), Sajid Javid, said he was in favour of trials continuing.

There is a growing demand for the technology in shops, with dozens of companies selling retail facial-recognition software – perhaps because, in recent years, it has become pointless to report shoplifting to the police. Budgets for policing in England have been cut in real terms by about 20% since 2010, and a change in the law in 2014, which made shoplifting of goods worth less than £200 a summary offence (ie a less serious one, not tried by a jury), meant police directed time and resources away from shoplifting. The number of people arrested and charged has fallen dramatically, and less than 10% of shoplifting is now reported. The British Retail Consortium, a trade group, estimates that £700m is lost to theft annually. Retailers are looking for other methods, and the rapid improvement in AI technologies, along with the dramatic fall in their cost, means facial recognition is now viable as one of them.

“The systems are getting better year on year,” says Josh Davis, a psychologist at the University of Greenwich who works on facial recognition in humans and AIs. The US National Institute of Standards and Technology assesses the state of facial recognition every year, he says, and the ability of the best algorithms to match a new image to a face in a database improved 20-fold between 2014 and 2018. And, in an echo of Moore’s law – the observation that computing power roughly doubles every couple of years – the cost falls year on year as well.

Facial-recognition technology being tested in Romford, Essex, earlier this year. Photograph: Ian Davidson/Alamy

In ideal environments such as airport check-ins, where the face is straight on and well lit and the camera is high-quality, AI face recognition now outperforms humans, and has done since at least 2014. In the wild – with the camera looking down, the lighting often poor and the image lower-definition – it is far less effective, says Prof Maja Pantic, an AI researcher at Imperial College London. “It’s far from the 99.9% you get with mugshots,” she says. “But it is good, and moving relatively fast forward.”

Each algorithm is different but, fundamentally, they all work the same way: they are given large numbers of images of people, are told which ones show the same person, and analyse those images to pick out the features that identify them. Those features are not things like “size of ear” or “length of nose”, says Pantic, but something more like textures: the algorithm assesses faces by gradients of light and dark, which allow it to detect points on the face and build a 3D image. “If you grow a beard or gain a lot of weight,” she says, “very often a passport control machine cannot recognise you, because a large part of the texture is different.”
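
To make Pantic’s description concrete, here is a loose sketch, in Python, of how such a system compares two faces once a trained network has turned each image into a fixed-length feature vector (an “embedding”). The embed() function below is a placeholder for that network, not a real library call, and the matching threshold is purely illustrative.

```python
import numpy as np

# Placeholder for a trained face-recognition network. A real system would
# run a deep model over the gradients of light and dark that Pantic
# describes; here we just derive a deterministic unit-length vector.
def embed(face_image: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(seed=int(face_image.sum()) % (2**32))
    vector = rng.normal(size=128)
    return vector / np.linalg.norm(vector)

def same_person(image_a: np.ndarray, image_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Treat two images as the same person if their embeddings point in
    nearly the same direction (high cosine similarity)."""
    a, b = embed(image_a), embed(image_b)
    similarity = float(a @ b)  # cosine similarity: both vectors are unit length
    return similarity >= threshold
```

In a scheme like this, a beard or significant weight gain changes the textures the network sees, shifting the embedding and potentially pushing the similarity below the threshold – which is why the passport gate can fail to recognise you.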

But while the algorithms are understood at this quite high level, the specific things that they use to identify people are not and cannot be known in detail. It’s a black box: the training data goes into the algorithm, sloshes around a bit, and produces very effective systems, but the exact way it works is not clear to the developer. “We don’t have theoretical proofs of anything,” says Pantic. The problem is that there is so much data: you could go into the system and disentangle what it was doing if it had looked at a few tens of photos, perhaps, or a few hundred, but when it has looked at millions, each containing large amounts of data itself, it becomes impossible. “The transparency is not there,” she says.

Still, neither she nor Davis is unduly worried about the rise of facial recognition. “I don’t really see what the big issue is,” Pantic says. Police prosecutions at the moment often rely on eyewitnesses, “who say ‘sure, that’s him, that’s her’, but it’s not”: at least facial recognition, she says, can be more accurate. She is concerned about other invasions of privacy, of intrusions by the government into our phones, but, she says, facial recognition represents a “fairly limited cost of privacy” given the gains it can provide, and given how much privacy we’ve already given up by having our phones on us all the time. “The GPS knows exactly where you are, what you’re eating, when you go to the office, whether you stayed out,” she says. “The faces are the cherry on top of the pie, and we talk about the cherry and forget about the pie.”

As with all algorithmic assessment, there is reasonable concern about bias. No algorithm is better than its dataset, and – simply put – there are more pictures of white people on the internet than there are of black people. “We have less data on dark-skinned people,” says Pantic. “Large databases of Caucasian people, not so large on Chinese and Indian, desperately bad on people of African descent.” Davis says there is an additional problem: darker skin reflects less light, providing less information for the algorithms to work with. For these two reasons, algorithms are more likely to correctly identify white people than black people. “That’s problematic for stop and search,” says Davis. Silkie Carlo, the director of the not-for-profit civil liberties organisation Big Brother Watch, describes one case in which a 14-year-old black schoolboy was “swooped by four officers, put up against a wall, fingerprinted, phone taken, before police realised the face recognition had got the wrong guy”.

That said, the Facewatch facial-recognition system is, at least on white men under the highly controlled conditions of their office, unnervingly good. Nick Fisher, Facewatch’s CEO, showed me a demo version; he walked through a door and a wall-mounted camera in front of him took a photo of his face; immediately, an alert came up on his phone (he’s in the system as an SOI, so he can demonstrate it). I did the same thing, and it recognised me as a face, but no alert was sent and, he said, the face data was immediately deleted, because I was not an SOI.

Protesters in Seattle demonstrating against Amazon’s Rekognition technology, October 2018. Photograph: Elaine Thompson/AP

Facewatch are keen to say that they’re not a technology company but a data management company: they manage the watch lists in what they say is compliance with the EU’s General Data Protection Regulation (GDPR). If someone is seen shoplifting on camera or by a staff member, their image can be stored as an SOI; if they are then seen in that shop again, the shop manager gets an alert. GDPR allows these watch lists to be shared in a “proportionate” way, so if you’re caught on camera like this once, your image can be shared with other local Facewatch users – in London, says Fisher, that would mean an eight-mile radius. If you’re seen stealing repeatedly in many different cities, it could proportionately be shared nationwide; if you’re never seen stealing again, your face is taken off the database after two years.
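
As a purely hypothetical sketch of those rules – not Facewatch’s actual code – the retention and sharing logic might look something like this. The two-year expiry and the eight-mile London radius come from the article; every name and field here is invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

RETENTION_PERIOD = timedelta(days=2 * 365)  # faces removed after two years
LOCAL_SHARE_RADIUS_MILES = 8.0              # Fisher's London example

@dataclass
class SubjectOfInterest:
    face_id: str
    last_incident: datetime
    # Stores where this person was actually seen offending.
    incident_stores: set[str] = field(default_factory=set)

def should_alert(soi: SubjectOfInterest, store: str,
                 miles_from_incident: float, now: datetime) -> bool:
    """Decide whether a camera match against this SOI should raise an alert."""
    # Entries past the retention period are purged, never matched.
    if now - soi.last_incident > RETENTION_PERIOD:
        return False
    # A store where an offence happened always gets the alert.
    if store in soi.incident_stores:
        return True
    # "Proportionate" sharing: a single incident is shared only with
    # nearby stores; repeat incidents in many places could widen the scope.
    return miles_from_incident <= LOCAL_SHARE_RADIUS_MILES
```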

Carlo is not reassured: she says that it involves placing a lot of trust in retail companies and their security staff to use this technology fairly. “We’re not talking about police but security staff who aren’t held to the same professional standards. They get stuff wrong all the time. What if they have an altercation [with a customer] or a grievance?” The SOI database system, she says, subverts our justice system. “How do you know if you’re on the watch list? You’re not guilty of anything, in the legal sense. If there’s proof that you’ve committed a crime, you need to go through the criminal justice system; otherwise we’re in a system of private policing. We’re entering the sphere of pre-crime.”

Fisher and Facewatch, though, argue that it is not so unlike the age-old practice of shops and bars keeping pictures of regular troublemakers up in the staff room. The difference, they say, is that it does not rely on untrained humans to spot those troublemakers, but on a much more accurate system.

The problem is that, at the moment, there is very little regulation – other than GDPR – governing what you can and can’t do with a facial-recognition system. Facewatch say, loudly and often, that they want regulation, so they know what they are legally allowed to do. On the other hand, Carlo and Big Brother Watch, along with other civil liberties groups, want an urgent moratorium and a detailed democratic debate about the extent to which we are happy with technologies like these in our lives. “Our politicians don’t seem to be aware that we’re living through a seismic technological revolution,” she says. “Jumping straight to legislation and ‘safeguards’ is to short-circuit what needs to be a much bigger exercise.”

Either way, it needs to happen fast. In Buckinghamshire, Paul Wilks is already using the technology in his Budgens, and is finding it makes life easier. Before he introduced the system, his shop would have things stolen every day or two; since then, it has become less common. “There’s definitely been a reduction in unknown losses, and a reduction in disruptive incidents,” he says. As well as the financial gain, his staff feel safer, especially late at night, “which is good for team morale”. If enough retailers start using facial-recognition technology before the government takes notice, we may find that the democratic discussion has already been short-circuited.

A demonstration of the Chinese firm Megvii’s facial-recognition technology at their headquarters in Beijing, May 2018. Photograph: New York Times/eyevine

Hot spots: facial-recognition technology around the world

China
China has embraced facial recognition, using it to implement a national surveillance system and bolster its authoritarian regime. The technology is already pervasive in Chinese society, with facial recognition used for airport check-ins and cash withdrawals, and to monitor the attention of school students. In the Xinjiang region, it is increasingly used to aid the oppression of the Uighur Muslims, with the state collecting their biometric data, including face scans.

United Kingdom
In the UK, police are conducting trials of the technology in public areas in south Wales, Leicestershire and London. However, there are currently no laws or government policies in place to regulate its use. Police use of facial recognition is currently being challenged in the courts.

United States
One in two American citizens is on a law-enforcement facial-recognition database. Concerns over lack of regulation and privacy led to the city of San Francisco banning the use of facial recognition by the police in May this year, with Somerville, Massachusetts, following its lead. Dani Ellenby

The AI Does Not Hate You by Tom Chivers is published by Weidenfeld & Nicolson (£16.99). To order a copy go to guardianbookshop.com. Free UK p&p on all online orders over £15
