The European Commission’s High-Level Expert Group on Artificial Intelligence has released a new set of guidelines for ensuring that AI is “trustworthy”, following a public consultation with feedback from over 500 contributors.

The updated guidelines set out the EU’s approach to helping developers and deployers achieve “trustworthy AI”, maximising the benefits and minimising the risks associated with this emerging technology.

Following the Commission’s European strategy on AI (published in April 2018), the guidelines were drafted by an independent expert group comprising 52 representatives from academia, industry and civil society.

How do you make sure AI is trustworthy?

The guidelines provide that “trustworthy AI” should be lawful, ethical and robust, from both a technical and a social perspective. They recognise that AI systems do not operate in a vacuum, and the guidelines do not aim to replace any existing laws or regulations applicable to AI. Instead, they largely focus on the ethical aspects of AI and call particular attention to protecting vulnerable groups, such as children.

Based on fundamental rights and ethical principles, the guidelines list seven key requirements that AI systems should meet in order to be considered trustworthy:

  1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  2. Technical robustness and safety: algorithms should be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  3. Privacy and data governance: citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  4. Transparency: the traceability of AI systems should be ensured.
  5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  6. Societal and environmental well-being: AI systems should be used to foster positive social change and to enhance sustainability and ecological responsibility.
  7. Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

Of particular interest to AI developers and deployers should be the non-exhaustive “AI trustworthiness assessment list”, which can be used as a practical checklist in AI risk assessments (for example to assess the appropriate level of human control for an AI system).
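For teams that want to operationalise such a checklist, it lends itself to a simple structured representation that can be tracked alongside other risk-assessment records. The sketch below is purely illustrative and is not part of the official guidelines: the seven requirement names come from the list above, but the `ChecklistItem` structure, the `open_items` helper and the example questions are all assumptions made for demonstration.

```python
from dataclasses import dataclass

# Illustrative only: the seven requirement names come from the guidelines,
# but the sample questions and the pass/fail tracking below are assumptions,
# not the Commission's official assessment list.

@dataclass
class ChecklistItem:
    requirement: str        # one of the seven key requirements
    question: str           # a concrete question to ask during a risk assessment
    satisfied: bool = False # set to True once the team has evidence for it
    notes: str = ""         # where that evidence (or the gap) is recorded

def open_items(checklist):
    """Return the items that still need attention."""
    return [item for item in checklist if not item.satisfied]

# A hypothetical, heavily abridged checklist for a single AI system.
checklist = [
    ChecklistItem("Human agency and oversight",
                  "Can a human override or halt the system's decisions?"),
    ChecklistItem("Technical robustness and safety",
                  "Has the model been tested against malformed or adversarial input?"),
    ChecklistItem("Privacy and data governance",
                  "Can data subjects access and correct data held about them?"),
    ChecklistItem("Transparency",
                  "Are decisions traceable to the data and model version that produced them?"),
    ChecklistItem("Diversity, non-discrimination and fairness",
                  "Has performance been evaluated across relevant user groups?"),
    ChecklistItem("Societal and environmental well-being",
                  "Has the system's wider social and environmental impact been assessed?"),
    ChecklistItem("Accountability",
                  "Is there a named owner responsible for the system's outcomes?"),
]

# List the requirements the team has not yet evidenced.
for item in open_items(checklist):
    print(f"[open] {item.requirement}: {item.question}")
```

The value of keeping the list in a structured form is that open items can be surfaced automatically in reviews, rather than relying on a static document that goes stale between assessments.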

What does this mean for AI architects?

Although the principles are somewhat abstract and the guidelines aren’t legally binding, they provide a good starting point for AI developers and deployers to assess whether their new AI technologies are ethical. The guidelines also demonstrate the EU’s approach to regulating this emerging technology and will likely form the basis of any future laws on AI. Businesses should continue to comply with existing laws and regulations while remaining mindful of this changing landscape and keeping abreast of new guidance published in the AI sphere.

What’s next?

The EC is now inviting all interested businesses to participate in a pilot phase of the “assessment list”, starting in June 2019, to provide practical feedback on how best to implement and verify the group’s recommendations. The EU has also launched the European AI Alliance, a forum for the exchange of best practices, which businesses interested in participating are encouraged to join. Following the pilot, and based on the feedback received, the expert group will propose a revised version of the assessment list in early 2020.