Proposal for an Artificial Intelligence Act by the EU Commission

The European Commission is working on a proposal for a regulation on Artificial Intelligence. The proposal originates from numerous initiatives and statements regarding the digital market and the impact of AI on ethics and law. Its goal is to create an effective EU-wide law that protects rights and values by categorizing AI applications, forbidding some, and requiring certain standards for others.

The proposal takes a very broad view of what AI is and what it wants to regulate. It covers software based on techniques such as machine learning, statistics (especially Bayesian statistics), logic programming, inference, and expert systems.

Category 1: Forbidden

Some Artificial Intelligence applications that contradict the rights and values of the European Union and its citizens shall be forbidden. The proposal names the following:

  • Social scoring by government agencies, i.e. scoring the social behavior of persons with negative consequences. Such systems are known from China, where non-conforming behavior can result in disadvantages when applying for housing, university admission, etc.
  • Manipulation of persons through subliminal techniques beyond their consciousness. A phenomenon of this kind was recently observed in US elections, where voters were manipulated via social media.
  • Remote biometric surveillance shall be banned, but with very generous exceptions, for example searching for missing children, victims of crime, and terrorists.

Category 2: High-Risk

Many Artificial Intelligence applications are considered to put the rights of individuals at high risk and therefore require special legal treatment. The proposal lists a wide range of possible high-risk applications, for example critical infrastructure, education, private and public services, law enforcement, migration, and biometric identification.

The proposal states that these applications require additional precautions such as risk management, data quality measures, documentation, human supervision, and hardening against security threats. It is also planned that high-risk applications must be registered in a database before being put to use.

Category 3: Low-Risk

Users of low-risk applications need to be informed that they are interacting with an AI application, for example when using a chat bot. AI-generated audio and video also require labeling to prevent deep fakes. Games won't need such labeling because it is obvious that they present artificially generated content.

Fines

The AI Act proposal suggests very high fines for violating the rules. Using forbidden AI systems may result in a fine of up to 30 million € or 6% of worldwide yearly revenue. Violating the requirements for high-risk applications may result in a fine of up to 20 million € or 4% of worldwide yearly revenue. Providing wrong or misleading information to national agencies may result in a fine of up to 10 million € or 2% of worldwide yearly revenue.
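The interplay of the fixed cap and the revenue percentage can be illustrated with a minimal sketch, assuming (as in the fine provisions of the proposal) that the higher of the two amounts applies; the function name and example revenue figure are illustrative, not from the act:

```python
def max_fine(fixed_cap_eur: float, revenue_share: float, worldwide_revenue_eur: float) -> float:
    """Upper bound of a fine: the higher of a fixed cap or a
    share of the company's worldwide yearly revenue."""
    return max(fixed_cap_eur, revenue_share * worldwide_revenue_eur)

# Hypothetical company with 1 billion € worldwide yearly revenue
# using a forbidden AI system (30 million € cap or 6% of revenue):
print(max_fine(30_000_000, 0.06, 1_000_000_000))  # → 60000000.0
```

For large companies the percentage dominates, while for smaller ones the fixed cap sets the upper bound.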

Industry statements regarding the AI Act

Many companies working on AI technology, such as IBM and Microsoft, have submitted their own statements on the act. Microsoft's statement puts the spotlight on some practical problems, including the following key points.

  • The AI Act proposal does not fully cover the complex AI ecosystem. There are providers of general-purpose AI components for tasks like image processing, text analysis, etc. In many cases these off-the-shelf components are included in new applications, which are then sold to customers and customized by yet other companies. Which participant in the ecosystem has to take care of the requirements defined for high- and low-risk applications?
  • Some requirements in the act are unnecessarily complex and unrealistic. For example, Article 10.3 requires error-free and complete data sets. This is almost impossible to achieve when training natural language or image processing applications.
  • The exceptions for remote biometric identification should be restricted. The current proposal lists remote biometric identification as forbidden but gives law enforcement a wide set of exceptions, for example searching for missing children, searching for victims of crime, looking for terrorists, and preventing crimes. Microsoft states that real-time biometric identification should only be used for serious and violent crimes. Moreover, transparency reports should be mandatory for law enforcement and judicial authorities, covering what techniques were used, which training and test datasets were used, and where, when, why, and for how long remote biometric identification was put to use.

This summary of the Artificial Intelligence Act proposal was created as part of the course "Algorithm, Discrimination and Law" at JKU Linz, winter term 2021.

About erpcoder
Azure Cloud Architect and Dynamics 365 enthusiast working in Research & Development for InsideAx
