Illustration by Alex Castro / The Verge
As protesters square off against police across the country, California is readying a bill that could expand the state’s use of facial recognition, including for law enforcement purposes.
Introduced as Assembly Bill 2261, the bill would provide a framework by which companies and government agencies could legally engage in facial recognition, provided they give prior notice.
The bill has been moving slowly through the state legislature since February and is being considered by the Assembly Appropriations Committee this week. For supporters, it’s an important privacy measure, heading off the more extreme uses of widely available technology. Ed Chau, the assemblyman who introduced the bill, called it “the long overdue solution to regulate the use of facial recognition technology by commercial, state and local public entities,” in an editorial for CalMatters on Tuesday.
But critics — including the American Civil Liberties Union of Northern California — say the bill will only expand the use of the technology further. In particular, they allege that providing legal conditions under which the technology can be used undercuts outright bans that have been put in place by a number of California municipalities, including San Francisco, Oakland, and Berkeley.
Crucially, the local ACLU chapter says the bill makes it too easy to scan a person’s face without their permission, with no consent required for government agencies and only minimal requirements for businesses. “The bill would invite the use of facial recognition to deny health care, housing, financial products, and basic necessities,” Matt Cagle, an attorney with the ACLU of Northern California, told The Verge. “All a company would have to do is keep a human in the loop (even if that human is an employee of the company). Instead of providing real protection to Californians, this bill will further endanger the Black and brown people most harmed by COVID-19 and police violence.”
Police use of facial recognition has been widely criticized by activists and researchers. A 2019 study from Georgetown’s Center on Privacy and Technology found that police often used commercial systems incorrectly, submitting fraudulent or obscured images in order to get a desired result.
Update 5:09PM ET: Updated with new quote from ACLU-NC representative.