People protest to defund the police in a place they are calling the “City Hall Autonomous Zone” in support of Black Lives Matter in the Manhattan borough of New York City, June 30, 2020.
Carlo Allegri | Reuters

In June of last year, following pressure from civil rights advocates and national protests sparked by the murder of George Floyd, three of the biggest names in facial recognition technology imposed restrictions on their own sales of the technology to police.

But after a year of public discussion over the state of policing in America, the use of facial recognition technology to surveil the public, like many other policing practices, has largely yet to be reined in.

That’s left companies like Amazon and Microsoft, which enacted moratoriums to give Congress time to come up with fair rules of the road, in limbo. IBM, by contrast, said it would exit the business entirely.

In the year since these tech companies pressed pause on facial recognition, lawmakers are still grappling with how to properly regulate the technology at the state and federal level. A coalition of Democrats has pressed for a pause on the government’s use of the technology entirely until lawmakers can come up with better rules. So far, most of the action has taken place in a handful of states.

Privacy and civil liberties advocates say they view the companies’ moratoriums as a promising first step, but they also remain wary of other, worrisome forms of surveillance that technology companies continue to profit from.

And while Amazon and others restricted the sale of their facial recognition technology, police still appear to have used similar tools during the widespread protests against police brutality last summer, though law enforcement has not been forthcoming about their use.

The unique challenge of facial recognition

Facial recognition poses unique risks to citizens, privacy advocates say, even when compared with on-the-ground police surveillance. 

“With most of the digital surveillance, the difference isn’t that there’s more of a court oversight for that sort of activity in the analogue space, the difference is the cost,” said Albert Fox Cahn, Executive Director of the Surveillance Technology Oversight Project (STOP). While trailing someone undercover requires a huge investment of time and money, creating fake social media pages to keep tabs on people is cheap and quick, Cahn said.

Matt Mahmoudi, a researcher and advisor on artificial intelligence and human rights at Amnesty International, said another issue lies in the way facial recognition can be used without the subject’s knowledge.

“In a standard police lineup you’re well aware that you’re being lined up,” Mahmoudi said. “In the case of facial recognition, you have no idea that you’re in a virtual line up. You might at any moment be in a virtual lineup.”

The sense that facial recognition could be deployed at any time — and the lack of transparency around how law enforcement uses the technology — could chill speech and free expression, activists fear.

Facial-recognition grid
Stegerphoto | Peter Arnold | Getty Images

The potential threat of such tools is especially salient for Black and Brown people, whom facial recognition tools have proven less accurate at identifying, due in part to the fact that the algorithms tend to be trained on datasets that skew white and male.

Research has indicated that facial recognition software may contain racial and gender bias. In 2018, M.I.T. computer scientist Joy Buolamwini and renowned AI researcher Timnit Gebru co-authored a landmark paper showing that IBM’s and Microsoft’s facial recognition systems were significantly less accurate at identifying darker-skinned individuals.

Additionally, studies by the American Civil Liberties Union and M.I.T. found that Amazon’s Rekognition technology misidentifies women and people of color more frequently than it does white men.

Proponents of facial recognition technology, including Amazon, have argued that it can help law enforcement track down suspected criminals and reunite missing children with families. Amazon also disputed the ACLU and M.I.T. studies, arguing that researchers used Rekognition differently than how it recommends law enforcement agencies use the software.

Rep. Bobby Rush, D-Ill., himself an activist who joined the Student Nonviolent Coordinating Committee during the Civil Rights Movement of the 1960s and co-founded the Illinois chapter of the Black Panther Party, raised concerns about the technology’s biases and supported a federal moratorium on its use.

“There’s been a generations-long, I guess you would call it, trope in the Black community that all Black people look alike,” Rush said in an interview with CNBC. “Technically, with the advent of this facial recognition technology, that trope has become a truth.”

Tech companies are still ‘monetizing surveillance’

Amazon, Microsoft and IBM have placed sweeping restrictions on the sale of their facial recognition tools to police, but law enforcement agencies still have a wealth of surveillance tools at their disposal.

Microsoft has played a large role in aiding police surveillance outside of facial recognition. The company developed the Domain Awareness System in partnership with the New York Police Department, according to the department’s site. The system is billed as a “crime-fighting and counterterrorism tool” that uses “the largest networks of cameras, license plate readers and radiological sensors in the world.” Microsoft did not comment or provide further information on the DAS.

Amazon’s smart home security subsidiary, Ring, has also faced intense scrutiny from privacy advocates over its rapidly expanding work with police. Since 2018, Ring has formed more than 2,100 partnerships with police and fire departments that offer them access to video footage recorded by its users’ internet-connected cameras. Video clips are requested through Ring’s social-media-esque community safety app, called Neighbors, where users can upload and comment on recorded footage and discuss goings-on in their area.

Devin Hance | CNBC

Ring doesn’t disclose sales of its products, but in a letter to lawmakers last January, it said “there are millions of customers who have purchased a Ring device.” 

As Ring’s police partnerships have grown, privacy advocates have expressed concern that the program, and Ring’s accompanying Neighbors app, have turned residents into informants, while giving police access to footage without a warrant and with few guardrails around how they can use the material. 

Ring has argued it creates “safer, more connected communities.” Amazon in 2018 claimed that Ring’s video doorbell product reduces neighborhood burglaries by as much as 55%, though recent investigations by NBC News and CNET found there’s little evidence to support that claim.

Ring’s partnerships with public safety agencies have only grown in the year since Amazon put a pause on selling Rekognition to police. The company has announced 468 new partnerships with police departments since June 10, 2020, public records published by Ring show.

In the latest sign of how much the program has expanded, all 50 U.S. states now have police or fire departments participating in Amazon’s Ring network, according to data from the company’s active agency map.

Following Amazon’s moratorium on Rekognition and amid global protests around police violence, civil liberties and human rights groups seized on the moment to call for Ring to end its partnerships with police. At the time, the Electronic Frontier Foundation argued that Amazon’s statements of solidarity with the Black community rang hollow, given that Ring works with the police, providing them with tools that advocacy groups fear will heighten racial profiling of minorities.

Ring told CNBC in a statement that the company doesn’t tolerate racial profiling and hate speech in content shared from Ring devices and on the Neighbors app.

Privacy advocates who spoke to CNBC said they believe Ring doorbells and Rekognition raise similar concerns in that both products are adding to an increased network of police surveillance. 

“[Amazon is] clearly trying very hard to monetize surveillance technologies and to cozy up to police departments to make it profitable for themselves,” said Nathan Freed Wessler, a senior staff attorney with the ACLU’s Speech, Privacy and Technology Project. “Ring is less concerning in some fundamental ways than face recognition, but it’s really worrisome in that they are basically placing little surveillance cameras in residential neighborhoods across the country and providing police with a very efficient way to try to get access to that footage, which provides law enforcement with just a huge wealth of video of people going about their lives that they never would have had access to before.”

Police need consent to gain access to Ring camera footage. That process became more transparent as a result of an update by Ring last week, which requires police and fire departments to submit requests for user video footage via public posts in the Neighbors app. Previously, agencies could privately email users to request videos. Users can also opt out of seeing posts from public safety agencies in the Neighbors app.

Ring has said that the footage can be a valuable tool to help police investigate crimes like package theft, burglaries and trespassing. But advocates and lawmakers worry that Ring devices will lead to increased surveillance and racial profiling.   

In February, the Electronic Frontier Foundation obtained emails from the Los Angeles Police Department that showed the department requested access to Ring footage during Black Lives Matter protests last summer. The EFF called it “the first documented evidence that a police department specifically requested footage from networked home surveillance devices related to last summer’s political activity.”  

“The LAPD ‘Safe L.A. Task Force’ is asking for your help,” reads one email from LAPD Detective Gerry Chamberlain. “During the recent protests, individuals were injured & property was looted, damaged and destroyed. In an effort to identify those responsible, we are asking you to submit copies of any video(s) you may have for [redacted].”

Ring said its policies prohibit public safety agencies from submitting video requests for protests and other lawful activities. The company added that Ring requires all police requests for video in the Neighbors app to include a valid case number for active investigations, along with incident details.

Privacy and civil liberties advocates worry not only that home surveillance devices like Ring could lead to increased surveillance of protesters, but also that Ring footage could be used in concert with other technologies, like facial recognition, to let police quickly and easily identify individuals.

Law enforcement agencies aren’t prohibited from sharing Ring footage with third parties. Amazon told lawmakers in 2019 that police who download Ring footage can keep the videos forever and share them with anyone, even if the video includes no evidence of a crime, The Washington Post reported.

“Once police get that footage, if they’re in one of the many cities that does not yet ban face recognition, they can take Ring footage and then use a different company’s face recognition system to identify one person, or for that matter, anyone who walks by,” said Wessler. “There would be nothing technologically stopping them from running every face through the system to try to identify people.”

For its part, Ring said last August that it doesn’t use facial recognition technology in any of its devices or services and wouldn’t sell or offer the technology to law enforcement.

Facial recognition and protests

Last summer, privacy advocates warned of the dystopian ways in which protesters for racial justice could be tracked and identified by police. Articles about how to disguise faces with makeup and masks, and how to keep smartphones from sending out detailed location information, bounced around progressive circles.

A year later, there have been a handful of reports about how facial recognition and other surveillance technology might have been used on protesters. But activists say that the information that’s become public about protest surveillance barely scratches the surface of law enforcement capabilities — and that’s part of the problem.

In many cases, law enforcement is not required to disclose information about how it surveils citizens. It wasn’t until last June, in the midst of the protests, that the New York City Council passed a law requiring the police department to disclose how it uses surveillance technology on the public. Through a lawsuit over the NYPD’s lack of disclosure around its use of facial recognition, STOP found that the department’s Facial Identification Section handled more than 22,000 cases over three years, though little else has been revealed.

“It’s been like walking a little bit in the dark,” said Mahmoudi of Amnesty International. 

In one highly publicized case last summer, the NYPD appeared to use facial recognition to track down Black Lives Matter protester Derrick “Dwreck” Ingram, in an attempted arrest that resulted in an hours-long standoff when Ingram refused to let officers enter his apartment without a warrant. Ingram live-streamed the ordeal on social media as dozens of officers reportedly lined his block and a police helicopter flew overhead. The police eventually left and he turned himself in the next day.

In a statement to CNBC, an NYPD spokesperson said police were responding to an open complaint that Ingram had allegedly assaulted a police officer nearly two months prior during a demonstration by yelling into an officer’s ear with a megaphone. Ingram has denied the NYPD’s allegation of assault and the charges were ultimately dismissed.

Ingram said he was “taken aback” and “shaken” to learn that facial recognition tools seemed to be involved in his investigation. Sergeant Jessica McRorie, a spokesperson for the NYPD’s deputy commissioner of public information, did not comment on whether the tools were used in his case but said the NYPD “uses facial recognition as a limited investigative tool” and that a match would not count as probable cause for an arrest.


Ingram’s surprise was due in part to his fluency with surveillance tools: he has led sessions for other activists on how to protect themselves from surveillance by using encrypted apps, making their social media pages private and other strategies. Still, he didn’t think he would be tracked in such a way.

Now when he educates other activists about surveillance, he understands protesters like himself could still be tracked if law enforcement so chooses. 

“If the government, if police, want to use tools to monitor us, you will be monitored,” he said. “My pushback is that we should use those same tools to prove the harm that this causes. We should be doing the research, we should be fighting with legislation and really telling stories like mine to make what happens public and really expose the system for how much of a fraud and how dangerous it truly is.”

In the nation’s capital, law enforcement revealed in court documents its use of facial recognition tools to identify a protester accused of assault. At the time, the police official who headed the area’s facial recognition program told The Washington Post the tool would not be used on peaceful protests and was only used to generate leads. A new Virginia law restricting the use of facial recognition by local law enforcement will soon put an end to that system, the Post later reported. The system had been a pilot program used across Maryland, Virginia and Washington, D.C., requiring buy-in from each region.

Rep. Anna Eshoo, D-Calif., wrote to federal agencies seeking more detail about how they used surveillance tools during the racial justice protests last summer and urging them to limit their use of such tools, but said she was underwhelmed with the responses she received at the time.

“I received high-level responses, but very few details,” Eshoo said in an interview with CNBC. “What remains is a lot of unanswered questions.”

Representatives from the agencies to whom Eshoo wrote — the Federal Bureau of Investigation, Drug Enforcement Administration, National Guard and Customs and Border Protection — either did not respond or declined to comment on their responses or their use of facial recognition tools at protests.

Reining in facial recognition technology

Momentum for facial recognition laws has seemed to wax and wane over the past year and a half. Prior to the pandemic, several privacy advocates told CNBC they sensed progress on such regulations. 

But the public health crisis reset priorities and possibly even reshaped how some lawmakers and citizens thought about surveillance technologies. Soon, government agencies were discussing how to implement contact tracing on Americans’ smartphones, and the widespread use of masks eased some people’s concerns about technology that could identify their faces.

The social movement following the murder of Floyd by police renewed fears around facial recognition technology and specifically around how law enforcement might use it to surveil protesters. Privacy advocates and progressive lawmakers warned of a chilling effect on speech and free expression should such surveillance go unchecked. 

Lawmakers like Eshoo and Rush sent a flurry of letters to law enforcement agencies asking how they surveilled protests and signed onto new bills like the Facial Recognition and Biometric Technology Moratorium Act. That bill would pause the use of such technologies by federal agencies or officials unless Congress grants permission.

In an interview with CNBC, Eshoo emphasized that the moratorium was just that — not an outright ban, but a chance for Congress to place stronger guardrails on the use of the product.

“The goal in this is that the technology be used responsibly,” she said. “It can be a very useful and fair tool but we don’t have that now.”

But, Eshoo said, things haven’t moved along as quickly as she’d like.

“I’m not happy about where we are because I don’t think the needle has moved at all,” she said.

Where there has been some change is at the state and local level, where lawmakers in Somerville, Mass., San Francisco and Oakland, Calif., have opted to ban the use of facial recognition technology by their city agencies. California now has in place a three-year moratorium on the use of facial recognition technology in police body cameras. Last year, lawmakers in Portland, Ore., passed one of the broadest bans on the technology, and Washington state legislators opted to require more guardrails and transparency around government use of the technology.

It could take more of these laws for Congress to finally take action, just as the rise of state digital privacy laws has added urgency for a federal standard (though lawmakers have yet to coalesce around a single bill in that case either).

Still, many continue to call for a permanent ban of law enforcement use of the tools and for federal regulation. 

“While there’s lots of things happening at the state and local level that are incredibly important, we have to push our federal government to actually be able to pass legislation,” said Arisha Hatch, chief of campaigns at Color of Change.

Privacy advocates also remain wary of industry-supported legislation, as tech companies such as Amazon and Microsoft have built up heavy lobbying presences in state capitals across the U.S. to help craft facial recognition bills.

Microsoft CEO Satya Nadella (L) and Amazon CEO Jeff Bezos visit before a meeting of the White House American Technology Council in the State Dining Room of the White House June 19, 2017 in Washington, DC.
Chip Somodevilla | Getty Images

The concern is that technology companies will push for state laws that, in effect, allow them to continue selling and profiting from facial recognition with few guardrails. 

Advocates point to Washington state’s recently passed facial recognition law, which was sponsored by a state senator employed by Microsoft, as a weak attempt at regulating the technology. Versions of Washington’s law have since been introduced in several states including California, Maryland, South Dakota and Idaho.

Groups such as the American Civil Liberties Union argued the bill should have temporarily banned face surveillance until the public can decide if and how the technology should be used. The ACLU also took issue with the fact that, under the Washington law, it’s legal for government agencies to use facial recognition to deny citizens access to essential services such as “housing, health care, food and water,” as long as those decisions undergo “loosely defined ‘meaningful human review,'” the group said.  

At the federal level, tech giants like Amazon, IBM, Microsoft and Google have all voiced support for establishing rules governing facial recognition. But privacy advocates worry companies are calling for weaker federal regulation that, if passed, could end up preempting stronger state laws. 

“Any federal law that is less than a total ban on police use of facial recognition technology has to have a non-preemption provision,” meaning that the federal law wouldn’t supersede any state laws that are potentially more restrictive of facial recognition technology, said the ACLU’s Wessler.

Wessler added that any federal facial recognition law must give individuals the right to sue entities, such as police departments, that violate the law.

“Those are the two things that Amazon and Microsoft and the other companies want to avoid,” Wessler said. “They want a weak law that basically gives them the cover of saying, ‘We’re now a safe, regulated space, so don’t worry about it.'”

Though it could be some time before federal legislation reining in the technology is on the books, decisions by the private sector to place limits on the use of its own products, even if incomplete, could be helpful. Several privacy advocates critical of the technology and of the companies that sell it agreed that any limits on the use of the tools are significant.

“While it is great that Amazon put a pause and all of the other companies put a pause, people are still developing this,” said Beryl Lipton, investigative researcher at the Electronic Frontier Foundation.

There is little transparency into how facial recognition software developed by big technology companies is being used by police. For example, Amazon hasn’t disclosed which law enforcement agencies use Rekognition or how many use the technology. Additionally, when it announced its one-year moratorium on facial recognition sales to police, the company declined to say whether the ban applies to federal law enforcement agencies such as Immigration and Customs Enforcement, which was reportedly pitched the technology in 2018.

Large consumer brands like Amazon aren’t the only ones developing this technology or considering integrating it into their products. Lesser-known companies like facial recognition start-up Clearview AI have only begun to enter the public consciousness for their work with law enforcement. Rank One Computing, another company that supplies facial recognition technology to police, made headlines last year after its face matching service incorrectly matched a Detroit man’s license photo to surveillance video of someone shoplifting, leading to the first known wrongful arrest in the U.S. based on the technology.

That means limits can carry even more weight when they come from a company that deals directly with law enforcement or relies significantly on the sector’s business. Police body camera manufacturer Axon said in 2019 that it would not use facial recognition technology for the time being, after an independent research board it solicited for advice recommended avoiding the technology, due largely to ethical considerations. Lipton said that move felt like “meaningful action.”

