Closing the Human Rights Gap in AI Governance

Mozilla Foundation, Rockefeller Foundation, Element AI


Artificial intelligence is expected to generate significant social and economic gains, from transforming enterprise productivity to advancing the Sustainable Development Goals. At the same time, recurring revelations of harmful applications of AI, such as in the criminal justice system, predictive policing, public benefits systems, targeted surveillance programs, advertising, and disinformation, have highlighted the extent to which the misuse of AI poses an existential threat to universal rights. These rights include privacy, equality and non-discrimination, freedom of assembly, freedom of expression, freedom of opinion, and freedom of information, and in some cases even democracy itself.

In October 2019, Element AI partnered with the Mozilla Foundation and The Rockefeller Foundation to convene a workshop on the human rights approach to AI governance, with the aim of determining what concrete actions could be taken in the short term to help ensure that respect for human rights is embedded into the design, development, and deployment of AI systems. Global experts from the fields of human rights, law, ethics, public policy, and technology participated. This report summarizes the workshop discussions and presents the recommendations that emerged from the meeting.