

Press release

28 September 2022

Surveillance technology companies complicit in human rights abuses in MENA region

Surveillance industry secrecy puts migrants, asylum seekers and refugees at risk due to lack of regulation over these technologies.

  • Lack of transparency: None of the companies surveyed stated which countries they operate in or to which they provide surveillance solutions, services, goods or equipment.
  • Lack of adequate and effective due diligence: Only four companies (Airbus, Thales Group, G4S and IrisGuard) said they undertake human rights impact assessments. None publicly disclosed their due diligence policies.
  • Lack of meaningful engagement with stakeholders: Only one company (IrisGuard) provided a substantial response when asked about stakeholder engagement processes.
  • Lack of adequate grievance mechanisms: Only one company (Airbus) confirmed employees, suppliers and third parties can raise anonymous complaints concerning human rights abuses facilitated by company products and services.

Companies are profiting from surveillance technologies which facilitate human rights abuses against migrants, asylum seekers and refugees in the Middle East and North Africa, new research finds. The Business & Human Rights Resource Centre (Resource Centre) surveyed 24 companies deploying surveillance technology for migration management and border control in the MENA region, seeking responses regarding their human rights policies and practices, as well as the steps they are taking to mitigate human rights risks related to the use of these technologies. In Palestine, Jordan and Libya, data and research from the Resource Centre and partners on the ground have illustrated how this technology can embed bias and discrimination, resulting in violations of autonomy and in the interception and return of asylum seekers fleeing torture, among other rights infringements.

None of the companies surveyed by the Resource Centre said which countries they operate in or to which they provide surveillance solutions, services, goods or equipment. In addition, none of the companies provided an updated list of the surveillance solutions, products, services or equipment they supply to governments in MENA. With companies hiding behind confidentiality clauses, the lack of transparency makes it difficult to identify the human rights impacts of their tech products or hold them accountable for any potential abuses linked to their operations.

There is an urgent need for companies to recognise the human rights risks associated with their products and adopt effective due diligence, including engaging in regular consultations with affected rightsholders and civil society organisations, and establishing robust and effective grievance mechanisms that enable victims of potential abuse to submit complaints and seek redress. Where such mechanisms have not been implemented, investors must cease investing in these companies.

Case study

Microsoft came under fire in 2019 for funding the Israeli facial recognition company AnyVision, which reportedly carried out surveillance on Palestinians crossing checkpoints in the West Bank – a practice critics argued could lead to bias and discrimination against the thousands of Palestinians who pass through checkpoints to visit friends and family. Following widespread criticism, Microsoft halted its investment in AnyVision, citing the need for greater oversight and control over the use of the technology.

Dima Samaro, MENA Regional Researcher & Representative, Business & Human Rights Resource Centre, said: “Governments in the MENA region are increasingly purchasing and using digital tools for border management and migration – including autonomous border security systems such as drones, facial recognition and biometric systems – which pose threats to people already marginalised by society. For example, in 2013 an iris scanning system for registering Syrian refugees was introduced in Jordan; refugees were forced to use the system if they wanted to withdraw money or buy groceries, stripping them of any autonomy. These iris scan technologies and biometric systems can be sold by companies as ‘humanitarian aid’, allowing them to be tested on a large scale while eroding the freedoms of marginalised groups.

“When operating in conflict-affected or high-risk regions such as the MENA region, the surveillance sector must undertake heightened human rights due diligence and, if it cannot do so or it identifies evidence of harm, it should stop selling its technology to companies or governments, as Microsoft did in the West Bank. A lack of adequate due diligence measures by private companies will only worsen the situation for those from marginalised communities, putting their lives in jeopardy, as the absence of robust regulation and effective mechanisms in the region allows surveillance technologies to operate freely and without scrutiny.”

//ENDS

Note to editors: