EU develops guidelines for trustworthy Artificial Intelligence (AI)


All components of this story

Article
26 April 2019

EU Commission pushes for ethical AI regulation

Author: Siddharth Venkataramakrishnan, Financial Times

'EU backs AI regulation while China and US favour technology', 25 April 2019

[…] If human-like AI robots were to gain legal standing of their own, companies could look to place the blame on them when things go wrong. […]

Fears incubated in popular culture are not entirely misguided, however: killer robots (officially called lethal autonomous weapon systems) are just one of many risks. A more pedestrian threat comes from automated hiring, where applicants are judged by AI that has learnt from historical data sets. “Discrimination that comes out of systems trained on data from people . . . reflects the behaviour of people [who previously carried out the job],” explains Yoshua Bengio, a professor in the University of Montreal’s department of computer science. In response to these concerns, ethical frameworks for AI are being written around the world. […]
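Bengio's point can be made concrete with a small, purely hypothetical sketch: a model fitted to historical hiring decisions that favoured one group will reproduce that preference, even when skill is distributed identically across groups. The data, feature names and thresholds below are invented for illustration only and do not come from any of the articles cited here.

```python
# Hypothetical sketch: a classifier trained on biased historical hiring
# decisions reproduces the bias encoded in those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: one legitimate feature (skill) and one protected
# attribute (group membership, 0 or 1). Skill is distributed identically
# across both groups.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Invented historical labels: past recruiters favoured group 0, so group 1
# applicants were hired less often at the same skill level.
hired = (skill + 1.0 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

# Train on the historical record, protected attribute included.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The learned model repeats the historical pattern: at identical skill,
# group 1 applicants receive a lower predicted hiring probability.
probe = np.array([[0.0, 0.0], [0.0, 1.0]])  # same skill, different group
print(model.predict_proba(probe)[:, 1])     # probability for group 0 > group 1
```

Simply dropping the protected attribute from the training data does not by itself remove the problem in realistic settings, since other features can act as proxies for group membership.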

Yet the tension between safeguarding citizens and fostering innovation can pull policymakers in opposite directions. […]

As with data and privacy regulation, the EU is pressing ahead with rulemaking for AI. The guidelines, published by the European Commission in April and drawing on the work of a high-level expert group, follow the idea of “Trustworthy AI”. They provide clear ethical principles and a checklist to be used when developing AI systems. The principles will now be tested by companies and other stakeholders in a pilot project starting in the summer of 2019.

The EU’s regulatory preparedness contrasts with the countries which are leading in AI research. “The US was on the path to really forward-thinking AI national policy under the Obama administration. Now, we’re not,” says Mark Latonero, a fellow at the USC Annenberg Center on Communication Leadership & Policy. [...]

China’s AI strategy has just two passing references to ethics. But the country is not alone: ethics remains a fundamentally international problem. “AI will have the tendency to scale very quickly without really any regards to national borders,” says Mr Latonero. [...]

Read the full post here

Article
9 April 2019

Expert commentary: "Ethics washing" made in Europe

Author: Thomas Metzinger, Tagesspiegel

[...] Europe has just taken the lead in the hotly contested global debate on the ethics of artificial intelligence (AI). On Monday in Brussels, the EU Commission presented its Ethics Guidelines for Trustworthy AI. The 52-member High-Level Expert Group on Artificial Intelligence (HLEG AI), of which I am a member, worked on the text for nine months. The result is a compromise of which I am not proud, but which is nevertheless the best in the world on the subject. The United States and China have nothing comparable. [...]

The underlying guiding idea of a “trustworthy AI” is, first and foremost, conceptual nonsense. Machines are not trustworthy; only humans can be trustworthy (or untrustworthy). If, in the future, an untrustworthy corporation or government behaves unethically and possesses good, robust AI technology, this will enable more effective unethical behaviour. Hence the Trustworthy AI narrative is, in reality, about developing future markets and using ethics debates as elegant public decorations for a large-scale investment strategy. [...]

The guidelines are lukewarm, short-sighted and deliberately vague. They ignore long-term risks, gloss over difficult problems (“explainability”) with rhetoric, violate elementary principles of rationality and pretend to know things that nobody really knows. [...]

Given this situation, who could now develop ethically convincing "Red Lines" for AI? Realistically, it looks as if it can only be done by the new EU Commission that starts its work after the summer. Donald Trump's America is morally discredited to the bone; it has taken itself out of the game. And China? Just as in America, there are many clever and well-meaning people there, and with a view to AI security, it could, as a totalitarian state, enforce any directive bindingly. But it's already far ahead in the employment of AI-based mass surveillance on its 1.4 billion citizens; we cannot expect genuine ethics there. As “digital totalitarianism 2.0”, China is not an acceptable source for serious ethical discussions. Europe must now bear the burden of a real historical responsibility. [...]

Read the full post here

Article
9 April 2019

The EU's 7 steps for trusty AI

Author: Zulfikar Abbany, Deutsche Welle

This being a new report from the European Commission, it should come as no surprise that its "seven essentials" for achieving trustworthy AI and robotics are themselves only one of three über-steps.

The other two steps are a "large-scale pilot with partners" and "building international consensus for human-centric AI" — more on both later. [...]

The European Commissioner for Digital Economy and Society, Mariya Gabriel, says the EU is taking "an important step towards ethical and secure AI. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society." [...]

The seven essentials have been released as part of the European Commission's 2018 AI strategy. They were drawn up by a high-level expert group, known as the HLEG. [...]

In a press release, Ursula Pachl, HLEG member and deputy director general of the consumer group, BEUC, said it was "crucial to go beyond ethics now and establish mandatory rules to ensure AI decision-making is fair, accountable and transparent." [...]

Read the full post here

Article
8 April 2019

EU Commission takes forward its work on ethical guidelines

Author: European Commission

[...] Building on the work of the group of independent experts appointed in June 2018, the Commission is today launching a pilot phase to ensure that the ethical guidelines for Artificial Intelligence (AI) development and use can be implemented in practice. The Commission invites industry, research institutes and public authorities to test the detailed assessment list drafted by the High-Level Expert Group, which complements the guidelines.

The Commission is taking a three-step approach: setting out the key requirements for trustworthy AI, launching a large-scale pilot phase for feedback from stakeholders, and working on international consensus building for human-centric AI. [...]

Seven essentials for achieving trustworthy AI

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. [...]
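The Commission's detailed assessment list itself is not reproduced here. Purely as an illustration of how a development team might keep track of the seven requirements above internally, the following sketch encodes them as a simple checklist; all class and field names are invented and are not part of the Commission's guidelines or its assessment list.

```python
# Illustrative sketch only: one way a team might track the seven key
# requirements during development. Names are hypothetical, not official.
from dataclasses import dataclass, field

REQUIREMENTS = [
    "Human agency and oversight",
    "Robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class TrustworthyAIChecklist:
    """Per-requirement notes and a simple assessed/not-assessed flag for one AI system."""
    system_name: str
    status: dict = field(
        default_factory=lambda: {r: {"assessed": False, "notes": ""} for r in REQUIREMENTS}
    )

    def record(self, requirement: str, assessed: bool, notes: str = "") -> None:
        # Record the outcome of assessing one requirement.
        if requirement not in self.status:
            raise KeyError(f"Unknown requirement: {requirement}")
        self.status[requirement] = {"assessed": assessed, "notes": notes}

    def outstanding(self) -> list:
        # Requirements not yet marked as assessed.
        return [r for r, s in self.status.items() if not s["assessed"]]

# Example usage with an invented system name
checklist = TrustworthyAIChecklist("automated-screening-prototype")
checklist.record("Transparency", True, "decision logs retained; traceability documented")
print(checklist.outstanding())  # the six requirements still to be assessed
```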

The Commission wants to bring this approach to AI ethics to the global stage because technologies, data and algorithms know no borders. To this end, the Commission will strengthen cooperation with like-minded partners such as Japan, Canada or Singapore and continue to play an active role in international discussions and initiatives including the G7 and G20. The pilot phase will also involve companies from other countries and international organisations. [...]

Read the full post here