

Article

13 June 2023

Author:
Amnesty International

EU: AI Act at risk as European Parliament may legitimize abusive technologies


The European Parliament must use a plenary vote cementing its final position on the European Union’s Artificial Intelligence Act (AI Act) to ban racist and discriminatory profiling systems that target migrants and other marginalized groups, Amnesty International said today, ahead of the vote on 14 June.

There is a risk the European Parliament may upend considerable human rights protections reached during the committee vote on May 11, opening the door for the use of technologies which are in direct conflict with international human rights law
Mher Hakobyan, Advocacy Advisor on AI regulation

The organization is calling for the European Parliament to ban the use of mass surveillance technologies, such as retrospective and live remote biometric identification tools, in the AI Act, a landmark piece of legislation. 

Amnesty International research shows that invasive facial recognition technology amplifies racist and discriminatory law enforcement against racialized people, including stop-and-search practices which disproportionately affect Black and brown people. It is also used to prevent and curtail the movement of migrants and asylum seekers.

“There is a risk the European Parliament may upend considerable human rights protections reached during the committee vote on May 11, opening the door for the use of technologies which are in direct conflict with international human rights law,” said Mher Hakobyan, Advocacy Advisor on AI regulation at Amnesty International.

Lawmakers must ban racist profiling and risk assessment systems, which label migrants and asylum seekers as ‘threats’; and forecasting technologies to predict border movements and deny people the right to asylum
Mher Hakobyan, Advocacy Advisor on AI regulation

In their attempt to strengthen the walls of “Fortress Europe”, EU member states have increasingly resorted to using opaque and hostile technologies to facilitate abuse against migrants, refugees and asylum seekers at their borders. 

“With such a persistently inhospitable environment towards people fleeing wars and conflict or in search of a better life, it is vital that the European Parliament doesn’t dismiss the harms of racist AI systems. Lawmakers must ban racist profiling and risk assessment systems, which label migrants and asylum seekers as ‘threats’; and forecasting technologies to predict border movements and deny people the right to asylum,” said Mher Hakobyan. 

While the AI Act can help prevent and reduce harm caused by new technologies in Europe, it is crucial that the EU does not contribute to human rights violations by exporting draconian technologies beyond its borders. The AI Act must prohibit the export of any systems whose use is banned within the EU, such as facial recognition and other surveillance technologies.

Amnesty International’s research has identified that cameras made by TKH Security, a Dutch company, are used in public spaces and attached to police infrastructure in occupied East Jerusalem to entrench the Israeli government’s control over Palestinians and Israel’s system of apartheid against Palestinians.

Similar investigations have also revealed that companies based in France, Sweden and the Netherlands sold digital surveillance systems, such as facial recognition technology (FRT) and network cameras, to key players in the Chinese mass surveillance apparatus. In some cases, the systems were exported directly for use in China’s indiscriminate mass surveillance programmes, with the risk of being used against Uyghurs and other predominantly Muslim ethnic groups throughout the country.

“The European Parliament has a duty to uphold human rights. Anything short of an outright ban on technologies used for mass surveillance, racist policing and profiling would be a failure of that duty,” said Mher Hakobyan.

“EU lawmakers must also ensure that technologies banned within the EU are not exported to commit human rights abuses elsewhere. This legislation must protect and promote the human rights of all people, not just people in Europe.” 

Background:

The European Commission proposed legislation governing the use of artificial intelligence on 21 April 2021. The Council of the EU, composed of EU national governments, adopted its position in December 2022. The European Parliament aims to have a final vote to form its official position on 14 June, after which the two institutions, together with the European Commission, will have to agree on a common text for the Regulation.

Amnesty International, as part of a coalition of civil society organizations led by the European Digital Rights Network (EDRi), has been calling for EU artificial intelligence regulation that protects and promotes human rights. 
