The European Commission's first draft of a law on the application of AI

The European Commission recently released the first draft of its regulation on artificial intelligence, intended to govern how AI is used.

Four risk bands have been defined:

Unacceptable risk

Prohibition of particularly harmful uses that contravene EU values because they would violate fundamental rights (e.g., social scoring as implemented in China, exploitation of children’s vulnerabilities, the use of subliminal techniques and, apart from limited exceptions for law enforcement, remote, real-time biometric identification systems such as facial recognition in publicly accessible areas).

High risk

A limited number of AI systems that have a potentially negative impact on the safety of individuals or their fundamental rights, and which are therefore subject to stringent requirements (obligation to use high-quality datasets, creation of adequate technical documentation, record keeping, transparency and provision of information to users, human oversight, and robustness, accuracy, and IT security).*

Limited risk

AI systems on which specific transparency requirements are imposed, for example when there is a clear risk of manipulation (think of the use of chatbots). In those cases, transparency requires that users be made aware they are interacting with a machine.

Minimal risk

All other AI systems can be developed and used in compliance with existing legislation, without further legal obligations. According to the document, most artificial intelligence systems currently used in the EU fall into this category. Providers of these systems can choose, on a voluntary basis, to apply the requirements for trustworthy AI and to adhere to voluntary codes of conduct.

*In case of a violation, the national authorities will have access to the information necessary to investigate whether the AI system has been used in compliance with the law. In simple terms, this means that if an EU member state detects a violation in a US, Chinese, or Russian artificial intelligence system, the authorities of that country will be able to examine all the technical documentation, including the datasets used, which are sometimes rightly regarded by companies as trade secrets.

The text provides for fines of up to 6% of annual worldwide turnover for those who break the rules.

While the United States, China, and Russia work to obtain ever more precise and innovative results, Europe insists on risk, ethics, and the need to develop human-centric AI. After harmonizing the regulation with the EU Charter of Fundamental Rights, the proposed Regulation on data governance, and the Machinery Directive, it passes the ball to the legislators of the individual member states.

About facial recognition

Europe has taken a more cautious approach than other world powers, partly because of the many associations, 116 Members of the European Parliament, and citizens who have spoken out against facial recognition technologies and their controversial applications. It is already clear that the regulation’s approval process will be long and subject to many potential changes.

The first reactions to the draft are, predictably, polarized: on one side, those who fear excessive limitations on the applications of artificial intelligence and an increase in bureaucracy that would make the regulation almost “the enemy of growth”; on the other, those who consider these rules still too permissive (with omissions, ambiguous approaches, etc.) and call for stricter rules on its application.

An example of ambiguity is provided by Article 5.1.d (on uses considered prohibited): a judge may authorize the use of remote, real-time facial recognition (otherwise prohibited) to prevent a specific, substantial, and imminent threat (such as, for example, a suspected terrorist attack).

The European Data Protection Supervisor has assessed the text as too permissive: while on the one hand it prohibits real-time facial recognition, it allows its use “post facto”, thus only partially limiting intrusions into the private lives of citizens. Furthermore, the document prohibits the use of this technology by law enforcement but not by other organizations and private companies.

The European Union has taken the first step towards regulating such a complicated and dynamic matter, and the rules will also apply to companies not established in the Union that produce or distribute systems used by EU citizens. Let’s see how countries with an already very advanced AI sector, such as the United States, will react.

Full text of the Artificial Intelligence Act: https://ec.europa.eu/newsroom/dae/items/709090
