

Commentary

18 August 2020

Author:
Nahla Davies

US police are using facial recognition technology at protests - adding to systemic racism


Author: Nahla Davies, software developer and tech writer

If there’s one issue besides the COVID-19 pandemic that has drawn significant attention this year, it’s the deep inequalities within the US criminal justice system. That attention has been reflected in a major shift in attitudes, not only among the general public in the form of mass protests, but also among major corporations and institutions.

In June 2020, major corporations such as Microsoft, IBM, and Amazon announced that they would pause sales of facial recognition technology to police in the United States.

While it is welcome to see major companies taking action to fight racial injustice and show solidarity with activists, this step is also long overdue. It has become too consistent a pattern for companies to release statements of support while taking little to no action beyond that. In some cases, companies even engage in actions that directly conflict with their earlier statements of support for anti-racist causes.

Racial justice advocates have been proclaiming for years, with overwhelming evidence, that facial recognition technology in the hands of law enforcement is not only a tool that enables abuse by police but also a major threat to our privacy.

Facial recognition is one of the most dangerous technologies available to law enforcement: it poses a risk to everyone and a particular threat to racial minorities.

Facial recognition technology is racially biased

Facial recognition technology relies on a massive database of photos, such as driver’s licenses or mugshots, and uses biometrics to map facial features and help identify people. The primary concern about facial recognition is that the technology itself can be clearly racially biased.
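To make the mechanics concrete, here is a minimal sketch of how a one-to-many ("1:N") face search might work, assuming faces have already been converted into numeric feature vectors ("embeddings") by some model. The gallery, threshold, and random vectors below are hypothetical illustrations, not any vendor's actual system.

```python
# Minimal sketch of a 1:N face search over pre-computed embeddings.
# All names, sizes, and the similarity threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe, gallery, threshold=0.6):
    """Return every enrolled identity whose stored embedding is
    'similar enough' to the probe image's embedding."""
    matches = []
    for identity, enrolled in gallery.items():
        score = cosine_similarity(probe, enrolled)
        if score >= threshold:  # a looser threshold means more false matches
            matches.append((identity, score))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Hypothetical usage: real embeddings would come from a trained face model.
rng = np.random.default_rng(0)
gallery = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
probe = gallery["record_42"] + rng.normal(scale=0.1, size=128)  # a noisy photo of record_42
print(search_gallery(probe, gallery))  # roughly [('record_42', 0.99...)]
```

Everything downstream, from who gets flagged to who gets arrested, hinges on how accurate those similarity scores are for every demographic group, which is precisely where the research below finds large disparities.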

Research from the Massachusetts Institute of Technology found, for instance, that facial analysis algorithms misclassified people of color over a third of the time, while making hardly any such mistakes for white people.

As many software-based business models increasingly rely on facial recognition tech, these error-prone algorithms exacerbate already-pervasive racial biases against black, indigenous, and people of color (BIPOC). False matches can lead to wrongful arrests, longer detention times, and, in the worst case, police violence.

Facial recognition software is also tied into mugshot databases, which further amplifies racism. Each time an individual is arrested, law enforcement takes a mugshot and stores the image in a database alongside the individual’s personal information. Since people of color are more likely to be arrested for minor crimes, their faces are more likely to be stored in these databases, which increases the odds of misidentification and other errors.
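As a rough, back-of-the-envelope illustration of why over-representation in mugshot databases matters: if every comparison carries some small chance of a false match, a person with more records enrolled in the database has more chances of being wrongly returned by any given search. The error rate below is an assumed figure for illustration only, not a measured statistic.

```python
# Hypothetical sketch: chance of at least one false match grows with the
# number of records a person has enrolled in the database.
def prob_any_false_match(per_comparison_rate: float, enrolled_records: int) -> float:
    """P(at least one false match) = 1 - (1 - p) ** n, assuming independent comparisons."""
    return 1 - (1 - per_comparison_rate) ** enrolled_records

ASSUMED_RATE = 1e-3  # purely illustrative, not a measured error rate
for records in (1, 5, 20, 100):
    print(f"{records:>3} records enrolled -> "
          f"{prob_any_false_match(ASSUMED_RATE, records):.3%} chance of a false hit")
```

Under that assumption, someone with 100 enrolled records faces roughly a hundred times the risk of a false hit per search compared with someone who appears once, which is how arrest disparities compound into recognition disparities.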

How facial recognition technology is used at protests for racial justice

The massive error rate of facial recognition technologies when “identifying” people of color should be of tremendous concern because of the number of false positives it creates, resulting in people of color being identified as suspects or criminals when they are not.

These technologies are also being used by law enforcement at racial justice protests all over the US, and while facial recognition tech may have improved significantly over the last few years, police are still relying on after-the-fact systems. This means that footage recorded by CCTV cameras and other sources at protests is used to identify and then arrest protesters after the event is over, with their images matched against mugshot databases.

Making matters worse, there are few laws on how facial recognition can be used by police. The result is that racial minorities are subjected to more unwarranted surveillance, with a greater number of individuals likely to be misidentified and therefore arrested. The irony is that this is occurring at protests against precisely the issue these technologies help perpetuate.

Privacy concerns of facial recognition technology

With all of this in mind, the larger question needs to be: is facial recognition technology in the hands of law enforcement really designed to keep us safer, or is it just a form of intrusive state surveillance? The misuse of facial recognition technology is a concern for everybody, not just for minorities.

When Apple introduced its Face ID technology back in 2017, the technology came under scrutiny for being used by the Federal Bureau of Investigation (FBI) to gain access to data on the phones of criminal suspects. Since the key is literally our facial features, there is no need for active consent. In other words, facial recognition technology can tag and track us without our even knowing.

It’s not just in America that this is a problem. In Europe, while the GDPR has introduced sweeping privacy regulations, these only provide a framework and are not specifically focused on facial recognition tech.

In Canada, laws have failed to rein in Clearview AI (a company that provides facial recognition technology to private companies and law enforcement alike), to the point that the right to be forgotten is not even recognized under Canadian law.

The bottom line is that civilians all around the world need to recognize just how fast facial recognition technology is advancing and how it is proving to be a clear threat to racial justice and privacy. Facial recognition technology should be regulated so that it is not designed or used in discriminatory ways.

As technology continues to evolve, it is of the utmost importance to ensure that it is used to protect the rights of disadvantaged minorities and the privacy of the general public wherever it intersects with the law. The only way this can be accomplished is if the algorithms used in law enforcement tech, such as facial recognition systems, do not rely on erroneous assumptions that produce prejudiced results. And if manufacturers of this technology do not correct the bias in their systems, then more decisive action, such as banning the use of the technology by police altogether, may be the only recourse left.