

Opinion

18 August 2020

US police are using facial recognition technology at protests - adding to systemic racism

Author: Nahla Davies, software developer and tech writer

If there’s one issue besides the COVID-19 pandemic that has drawn significant attention this year, it’s the deep inequalities within the US criminal justice system. This has been reflected in a major shift in attitudes, not only from the general public in the form of mass protests, but also from major corporations and institutions.

In June 2020, major corporations such as Microsoft, IBM, and Amazon announced that they would pause sales of facial recognition technology to police in the United States.

While it is welcome to see major companies taking action to fight racial injustice and showing solidarity with activists, this step is also long overdue. It has become a consistent pattern for companies to release statements expressing support while taking little to no action beyond that. In some cases, companies engage in actions that directly conflict with their previous statements of support for anti-racist causes.

Racial justice advocates have argued for years, with overwhelming evidence, that facial recognition technology in the hands of law enforcement is not only a tool that enables abuse by police but also a major threat to our privacy.

Facial recognition is one of the most dangerous technologies available to law enforcement: it endangers everyone, and it poses a particular threat to racial minorities.

Facial recognition technology is racially biased

Facial recognition technology relies on a massive database of photos, such as driver’s licenses or mugshots, and uses biometric measurements of facial features to identify people. The primary concern about facial recognition is that the technology itself can be clearly racially biased.
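As a rough sketch of how this kind of one-to-many identification typically works, the Python snippet below compares a probe face’s embedding against a gallery of enrolled images and accepts the nearest match only if it falls under a distance threshold. The embed_face model, the Euclidean metric, and the 0.6 cut-off are illustrative assumptions, not the workings of any particular vendor’s system.

import numpy as np

# Hypothetical stand-in for a trained face-embedding model; a real system
# would map a face image to a fixed-length feature vector with a neural net.
def embed_face(image: np.ndarray) -> np.ndarray:
    raise NotImplementedError("replace with a real embedding model")

def identify(probe_embedding: np.ndarray,
             gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Return the enrolled identity closest to the probe, if close enough.

    gallery maps an identity (e.g. a driver's-licence or mugshot record)
    to its stored embedding. The threshold decides whether the best match
    counts as a "hit"; loosening it trades missed matches for false ones.
    """
    best_id, best_dist = None, float("inf")
    for identity, enrolled in gallery.items():
        dist = float(np.linalg.norm(probe_embedding - enrolled))
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist <= threshold else None

Everything downstream, from a “wanted” alert to an arrest, hinges on that single threshold comparison, which is why the error rates discussed below matter so much.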

Research from the Massachusetts Institute of Technology found, for instance, that facial analysis algorithms misclassified people of color over a third of the time, while for white people there were hardly any such mistakes.

As more software-based business models rely on facial recognition tech, these error-prone algorithms exacerbate the already-pervasive racial biases against Black, Indigenous, and people of color (BIPOC). False matches can lead to wrongful arrests, longer detention times, and, in the worst case, police violence.

Facial recognition software is also tied into mugshot databases, which further amplifies racism. Each time an individual is arrested, law enforcement takes a mugshot and stores the image in a database next to the individual’s personal information. Since people of color are more likely to be arrested for minor crimes, their faces are more likely to end up in these databases, which increases the odds of misidentification and other errors.
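To see how biased error rates and a skewed database compound each other, here is a back-of-the-envelope sketch in Python. The gallery shares and per-comparison false-match rates are made-up numbers chosen only to illustrate the mechanism, not measurements of any real system.

# Illustrative, made-up assumptions: group B is over-represented among
# enrolled mugshots AND the matcher's false-match rate is higher for it.
gallery_size     = 10_000                      # total enrolled mugshots
gallery_share    = {"A": 0.40, "B": 0.60}      # share of mugshots per group
false_match_rate = {"A": 1e-5, "B": 1e-4}      # per single comparison
n_probes         = 10_000                      # faces scanned, e.g. from footage

for group, share in gallery_share.items():
    comparisons = gallery_size * share
    # chance that one innocent probe falsely matches at least one mugshot
    # belonging to this group
    p_false_hit = 1 - (1 - false_match_rate[group]) ** comparisons
    print(group, round(n_probes * p_false_hit), "expected false identifications")

With these toy numbers, the over-represented group accrues roughly ten times more expected false identifications, even though its share of the gallery is only 1.5 times larger, because the two sources of bias multiply rather than add.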

How facial recognition technology is used at protests for racial justice

The massive error rate of facial recognition technologies when “identifying” people of color should be of tremendous concern because of the number of false positives it creates, resulting in people of color being identified as suspects or criminals when they are not.

These technologies are also being used by law enforcement at racial justice protests all over the US. While facial recognition tech may have improved significantly over the last few years, police are still relying on after-the-fact systems: footage recorded by CCTV cameras and other sources at protests is matched against mugshot databases to identify, and then arrest, protestors after the event is over.
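For a sense of what such an after-the-fact workflow can look like, the sketch below scans recorded footage offline and collects whatever identities a matcher claims to recognise. OpenCV is assumed only for reading frames, and match_against_mugshots is a hypothetical stand-in for a matcher like the one sketched above, not a real API.

import cv2  # OpenCV, assumed available for reading recorded video

# Hypothetical stand-in for a face matcher tied to a mugshot database;
# it would return the identities it claims to recognise in a frame.
def match_against_mugshots(frame) -> list[str]:
    raise NotImplementedError("replace with a real detection-and-matching step")

def identify_after_the_fact(video_path: str, every_nth_frame: int = 30) -> set[str]:
    """Scan recorded footage offline and collect all claimed identities."""
    claimed: set[str] = set()
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:                      # end of the recording
            break
        if frame_index % every_nth_frame == 0:
            claimed.update(match_against_mugshots(frame))
        frame_index += 1
    capture.release()
    return claimed

Because the matching happens long after the footage was captured, the people being identified have no way of knowing it occurred, let alone of contesting a false match before officers act on it.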

Making matters worse, there are few laws governing how police can use facial recognition. The result is that racial minorities are subjected to more unwarranted surveillance, with a greater number of individuals likely to be misidentified and therefore arrested. The irony is that this is occurring at protests against precisely the injustice the technology helps perpetuate.

Privacy concerns of facial recognition technology

With all of this in mind, the larger question needs to be: is facial recognition technology in the hands of law enforcement really designed to keep us safer, or is it just a form of intrusive state surveillance? The misuse of facial recognition technology is a concern for everybody, not just for minorities.

When Apple introduced its Face ID technology back in 2017, it came under scrutiny for being used by the Federal Bureau of Investigation (FBI) to gain access to data on the phones of criminal suspects. Since the key is literally our facial features, there is no need for active consent. In other words, facial recognition technology can tag us and track us without us even knowing.

It’s not just in America that this is a problem. In Europe, while the GDPR has introduced sweeping privacy regulations, these only provide a framework and are not specifically focused on facial recognition tech.

In Canada, laws have failed to rein in Clearview AI (a company that provides facial recognition technology to private companies and law enforcement alike), to the point that the right to be forgotten is not even recognized under Canadian law.

The bottom line is that civilians all around the world need to recognize just how fast facial recognition technology is advancing and how clear a threat it poses to racial justice and privacy. Facial recognition technology should be regulated so that it is not designed or used in discriminatory ways.

As technology continues to evolve, it is of the utmost importance to ensure that, where it meets the law, it is used to protect the rights of disadvantaged minorities and the privacy of the general public. That can only happen if the algorithms used in law enforcement tools such as facial recognition do not rest on erroneous assumptions that produce prejudiced results. And if manufacturers of this technology do not correct the bias in their systems, then more decisive action, such as banning police use of the technology altogether, may be the only recourse left.
