My comments on the draft ethics guidelines

Introduction: Rationale and Foresight of the Guidelines

I welcome the Communications issued by the Commission on 25 April 2018 and 7 December 2018. In my opinion, a hard-law proposal would have been more effective in sending the message that the EU is actually creating a common legislative framework on AI, and in preventing a fragmentation of the market. Such a legislative proposal could have ensured the defence of European values.
The goal of Trustworthy AI, pursued through the ethical purpose and technical robustness requirements promoted by this working document, is welcome. However, I would like to make some comments.

P.2 Trustworthy AI

“it should be noted that no legal vacuum currently exists, as Europe already has regulation in place that applies to AI”: It could be useful to have a reference to this existing regulation, and to the studies assessing the absence of legal gaps in the field of AI. Indeed, as you are already aware of my position, I have concerns regarding the existence of legal gaps, for example in the fields of liability, consumer law and data protection.

P. 2 The Role of AI Ethics

“It concerns itself with issues of diversity and inclusion […] as well as issues of distributive justice”: In my opinion, these goals and values are at the core of a desirable / winning AI in the EU, one that serves the general interest / common good first.

“This document should thus not be seen as an end point, but rather as the beginning of a new and open-ended process of discussion”: I would like to stress that I appreciate the living character of the document. It could be useful to explain / set up a process through which the ethical guidelines would be monitored and updated (Who? What? How?).

P.2 Purpose and Target Audience of the Guidelines + P.3 Scope of the Guidelines

“A mechanism will be put in place that enables all stakeholders to formally endorse and sign up to the Guidelines on a voluntary basis. This will be set out in the final version of the document.” + “The Guidelines are not an official document from the European Commission and are not legally binding. They are neither intended as a substitute to any form of policy-making or regulation, nor are they intended to deter from the creation thereof.” :
In my opinion, this notion of a “voluntary basis” is a problem, as these ethical guidelines should aim to apply to every stakeholder. Even as soft law, the universal character of these guidelines would be more in keeping with our EU values of treating all parties equally. At the very least, the creation of a label and / or incentives to conform to the guidelines should be sought.

Chapter I: Respecting Fundamental Rights, Principles and Values – Ethical Purpose

1. The EU’s Rights’ based approach to AI Ethics

P.5 “The AI HLEG considers that a rights-based approach to AI ethics brings the additional benefit of limiting regulatory uncertainty.”: Is there no legal vacuum (as stated on p.2), or does regulatory uncertainty exist? Does this imply that courts will have to interpret legislation according to these ethics guidelines?

2. From Fundamental rights to Principles and Values

P.5-6 I was surprised by the interaction proposed: fundamental rights ensure human dignity, implying human rights values that recognise the ethical principle of the autonomy of human beings. I had always thought that values gave birth to fundamental rights, which were then applied in practice through ethical principles. This part is not entirely clear to me, even with the reference to precedent. In my opinion, our EU values should lead the guidelines, not the principle of autonomy.

3. Fundamental rights of Human Beings

P.7 I support the list of the five fundamental rights. However, these principles could come into conflict with each other. How do you ensure a fair balance between these different fundamental rights? Who should be competent to assess this balance in case of conflict?

4. Ethical Principles in the Context of AI and Correlating Values

P.8 “It should also be noted that, in particular situations, tensions may arise between the principles when considered from the point of view of an individual compared with the point of view of society, and vice versa. There is no set way to deal with such trade-offs. In such contexts, it may however help to return to the principles and overarching values and rights protected by the EU Treaties and Charter. Given the potential of unknown and unintended consequences of AI, the presence of an internal and external (ethical) expert is advised to accompany the design, development and deployment of AI. Such expert could also raise further awareness of the unique ethical issues that may arise in the coming years.” :
The possibility of tensions in particular situations is accurate. However, I think the suggested presence of internal and external ethical experts should be described in more detail: Where do they come from? How are they selected? How are they paid? Is there a selection procedure? Are specific qualifications required? Do they have obligations of independence and transparency?

5. Critical concerns raised by AI

P.11 5.1 Identification without consent

“As current mechanisms for giving informed consent in the internet show, consumers give consent without consideration. This involves an ethical obligation to develop entirely new and practical means by which citizens can give verified consent to being automatically identified by AI or equivalent technologies.” I share the view of this paragraph on the inefficiency of the current system for collecting individuals' consent. It is essential to develop practical means of ensuring respect for the GDPR and data protection, guaranteeing the clear, free and informed consent of the consumer.

Exceptions could indeed be sought for criminal law purposes, or where strictly necessary for the functioning of the AI product with anonymisation of the data, e.g. driverless cars.

The right to be forgotten, as developed in Articles 17 and 19 of the GDPR and by the case law of the ECJ, should be strengthened.

The right of the consumer to be informed about the identification and the use of his or her data should be protected.

P11-12 5.2 Covert AI systems

A human shall always be informed that she/he is interacting with a robot; this information should be repeated in case of long-term interaction. The design of humanoid and android robots requires special attention to the respect of ethical principles and should be monitored by an ethical committee.

P.12 5.3 Normative and Mass Citizens Scoring without consent in deviation of Fundamental Rights

Although I share the ideal goal of an opt-out system, I think it is so far unrealistic in practice. For this reason, I would encourage monitoring by an ethical committee to frame this practice.

P.12-13 5.5 Potential longer-term concerns

Without falling into alarmism, follow-up and monitoring should be carried out to frame the development of technologies using AI. In case of doubts about the impact of AI, an ethical committee should carefully examine the risk assessment.

Chapter II: Realising Trustworthy AI

P.14 “This list is non-exhaustive and introduces the requirements for Trustworthy AI in alphabetical order, to stress the equal importance of all requirements.” : I appreciate the openness of this list, which could be updated as needs arise. A procedure to update it could be defined (Who? What? How?).

A protocol verifying respect for these requirements during development, before the marketing of the AI product, monitored by an ethical committee, could be useful.

P.14 1. Accountability

This paragraph addresses only the aftermath of an AI malfunction and remains very open on the possibilities for fixing it. The creation of a protocol to report the problem to the relevant producer / authority / committee should be considered and encouraged. Furthermore, the right to compensation should be strongly enforced to ensure respect for consumer rights. The consumer should be clearly informed of his or her rights when a problem occurs.

P.16 5. Non-Discrimination

I fully support the content of this paragraph. Concerning the control of the data with which the AI is fed and trained, I would like to underline a problem which could occur: repeatedly feeding the AI with the same data could lead to overcautious AI reasoning, preventing it from adapting to change. For example, an AI used in court to draft judgments would base its answers only on former case law, without any evolution.

P.16 6. Respect for ( & Enhancement of) Human Autonomy

It is very important to inform the user of, and explain, the indicators / parameters on which the AI based its decision. Such a right is proposed in the Omnibus Directive to improve the transparency of online platforms.

P.17 7 Respect for Privacy

To assess whether the GDPR is fit for the development and use of AI, I would encourage the launch of a study.
A strong and clear enforcement mechanism in case of breaches of data protection in the field of AI should be put in place.

P.17-18 8 Robustness

I support the different requirements. I think it would be useful to clarify the possible consequences if they are not respected.

P.19-21 Technical and Non-Technical Methods to achieve Trustworthy AI
1. Technical Methods

I very much welcome these practical guidelines, which pave the way for a proper protocol in the field of AI. These requirements, especially the last three (Testing and Validating, Traceability and Auditability, Explanation (XAI) Research), seem fundamental to me. It should be mandatory to respect all these requirements cumulatively.
This could lead to a label encouraging stakeholders to respect the ethical guidelines.

P.21-22 2. Non-Technical Methods

I have been calling for such a general framework in the EU since the creation of the Working Group on Robotics and AI in the European Parliament in 2015.

P. 22 Accountability governance

Having one person working at internal level is very different from having an external panel. The requirement could be made more precise to ensure the same level of protection.
A committee could also monitor the selection / choice of these experts to ensure the absence of conflicts of interest.

P.22 Diversity

There should be hard law ensuring respect for diversity and inclusion, with a proper enforcement mechanism.

As a general remark, a committee could monitor and report on the fulfilment of all the requirements. I recall the necessity of creating a framework that encourages stakeholders to respect the guidelines.

Chapter III: Assessing Trustworthy AI

Although the use of an assessment list could be useful and the questions proposed are a good start, I still believe this process will not be taken seriously without a proper procedure validated by a group of independent experts.
I agree that the process should be adapted to the size and means of the company (SMEs or big companies, etc.).

General Comments

I very much welcome the draft of these ethical guidelines and their content as a whole.
However, I have doubts about their practical application and impact.

The goal is to protect our EU values by creating trustworthy AI, but I do not see any incentive encouraging stakeholders to use these guidelines: it is soft law applied on a voluntary basis, without any proper control or benefit for the stakeholders.

While I support the main part of the text, I would delete every reference to the “voluntary basis”. Ethical principles should aim at universal application. The future of our European values and fundamental rights is at stake; it cannot be something you choose to take part in or not.

I would recommend the creation of a proper process explaining the concrete steps to take, how they can be controlled, and who is competent to assess compliance with the guidelines. The ideal scenario would be to create a certification or a label: control would be effective and it could create benefits for the stakeholders.

I am still convinced that the Commission should propose a regulatory framework to monitor the development of AI and enforce EU standards.

Draft ethics guidelines: https://ec.europa.eu/futurium/en/ai-alliance-stakeholders-consultation/draft-ethics-guidelines-trustworthy-ai
