The Artificial Intelligence Act (AI Act) is a proposed regulation of the European Union. Put forward by the European Commission on 21 April 2021, it aims to introduce a common regulatory and legal framework for artificial intelligence. Its scope encompasses all sectors (except the military) and all types of artificial intelligence. As a piece of product regulation, it would not confer rights on individuals, but would regulate the providers of artificial intelligence systems and the entities making use of them in a professional capacity.
The proposed EU Artificial Intelligence Act aims to classify and regulate artificial intelligence applications according to their risk of causing harm. This classification primarily falls into three categories: banned practices, high-risk systems, and other AI systems.
Banned practices are those that employ artificial intelligence for subliminal manipulation, exploit people's vulnerabilities in ways that may result in physical or psychological harm, make indiscriminate use of real-time remote biometric identification in public spaces for law enforcement, or allow public authorities to use AI-derived 'social scores' to unfairly disadvantage individuals or groups. The Act entirely prohibits the latter, while an authorisation regime is proposed for the first three in the context of law enforcement.
High-risk systems, as per the Act, are those that pose significant threats to the health, safety, or fundamental rights of persons. They require a compulsory conformity assessment, undertaken as a self-assessment by the provider, before being placed on the market. For particularly critical applications, such as medical devices, the provider's self-assessment under the AI Act's requirements must additionally be considered by the notified body conducting the conformity assessment under existing EU regulations, such as the Medical Devices Regulation.
AI systems outside the categories above are not subject to regulation under the Act, and Member States would be largely prevented from regulating them further as a consequence of maximum harmonisation.