The OECD Principles on Artificial Intelligence state that AI systems should be designed to respect human rights and include appropriate safeguards.
The OECD Principles on Artificial Intelligence...were adopted on 22 May 2019 by OECD member countries when they approved the OECD Council Recommendation on Artificial Intelligence... Beyond OECD members, other countries including Argentina, Brazil, Colombia, Costa Rica, Peru and Romania have already adhered to the AI Principles, with further adherents welcomed. The OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field...
The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Consistent with these values-based principles, the OECD also provides five recommendations to governments.