

Story

USA: Police used Clearview AI facial recognition tool to identify and arrest a pro-Palestinian student protester, despite its use being banned, incl. co. non-response

In July 2025, it was revealed that the NYPD circumvented its own ban on facial recognition by asking a New York City fire marshal to use the FDNY's access to Clearview AI to identify a pro-Palestinian student protester, Zuhdi Ahmed, who was involved in a 2024 clash at Columbia University.

The FDNY has had a contract with Clearview AI since 2022, and the marshal also accessed records normally off-limits to police, helping NYPD detectives identify Ahmed. He was charged with a hate crime, but the case was later dismissed, with the judge warning about government overreach and a lack of transparency in surveillance practices.

Civil rights groups, including the Legal Aid Society and the Surveillance Technology Oversight Project, condemned the NYPD and FDNY for violating city laws and using secret surveillance methods. Legal Aid has since filed a lawsuit to uncover how the FDNY uses facial recognition technology.

In September 2025, the Resource Centre contacted Clearview AI to request their response to these allegations, but the company did not respond.