The European Parliament Adopts the AI Act (EU AI Act)

The European Union has introduced a groundbreaking set of regulations to govern the use of AI technologies after extensive deliberation. These rules mark a significant milestone globally and are expected to profoundly impact not only the EU but also serve as a model for other nations navigating the complexities of AI advancement.

What Is the AI Act About?

At its core, the AI Act seeks to find a middle ground between encouraging innovation in AI and ensuring the protection of fundamental rights, democracy, and environmental sustainability. It does this through a risk-based framework, which means that different AI applications will have different levels of obligations depending on their potential impact and associated risks.

Who Is Concerned?

The AI Act extends its jurisdiction to cover importers and distributors of AI systems within the European Union. It also applies to providers of AI systems, meaning organizations that develop AI systems and place them on the market or put them into service under their own name or trademark, whether for payment or free of charge.

Importantly, the AI Act also applies to “deployers”, which are defined as legal or natural persons using AI under their authority in the course of their professional activities.

Where Does the AI Act Apply?

The AI Act has a significant extraterritorial effect: it covers providers who place AI systems on the EU market regardless of where they are established, and it extends to providers and deployers outside the EU where the output produced by their AI systems is used within the EU. Deployers are otherwise covered where they are established or located in the EU, as are importers, distributors, and affected persons located in the EU.

The AI Act excludes AI systems developed solely for scientific research and development, as well as research, testing, and development activities carried out before a system is placed on the market, with the exception of testing in real-world conditions. It also does not apply to AI systems released under free and open-source licences, unless they qualify as high-risk systems, fall under the prohibited practices, or are subject to the Act's transparency obligations (which cover generative AI).

Key Challenges Posed by the AI Act

In essence, the AI Act aims to uphold essential rights such as the protection of personal data, privacy, and the confidentiality of communications by promoting sustainable and ethical data practices in AI development and use. Complying with it nonetheless raises several challenges for organizations, including:

  • Fostering innovation and competitiveness in the AI ecosystem while meeting the Act's obligations.
  • Understanding the interplay between the AI Act and existing rules applicable to AI, including on data protection, intellectual property and data governance.
  • Navigating the complex supervision and enforcement stakeholder map that is forming.
  • Designing and implementing appropriate multi-disciplinary governance structures within organizations.

Scope: The Act extends beyond the borders of the EU, applying to organizations either operating within the EU or offering AI systems, products, or services to EU users, irrespective of where those organizations are located. Organizations should assess their role with respect to each AI system to understand the duties and commitments they must take on once the Act comes into force.

Applicability: The Act applies to providers, deployers, importers, and distributors of AI systems or general-purpose AI models, regardless of their location, where the systems or models are placed on the EU market, put into service in the EU, or their output is used within the EU.

Enforcement and Penalties: The AI Office, situated within the European Commission, will oversee AI systems built on a general-purpose AI model when both the model and the system are supplied by the same provider, and will be endowed with powers equivalent to those of a market surveillance authority. National market surveillance authorities will retain responsibility for supervising all other types of AI systems.

The AI Office will be responsible for harmonizing governance efforts across member states and overseeing the enforcement of regulations concerning general-purpose AI.

Member state authorities will lay down rules on penalties and other enforcement measures, including warnings and non-monetary measures. Individuals can lodge a complaint of infringement with a national competent authority, which can then launch market surveillance activities. The Act does not provide for individual damages.

The penalties are tiered (an illustrative calculation follows the list):

  • Prohibited AI practices: up to 7% of global annual turnover or 35 million euros, whichever is higher.
  • Most other violations: up to 3% of global annual turnover or 15 million euros, whichever is higher.
  • Supplying incorrect information to authorities: up to 1% of global annual turnover or 7.5 million euros, whichever is higher.
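
To make the structure of these caps concrete, here is a minimal sketch of how the maximum possible fine could be computed for a given turnover, assuming the general rule that the higher of the fixed amount and the turnover percentage applies (for SMEs and start-ups, the Act instead caps fines at the lower of the two). The tier names and function below are hypothetical, for illustration only.

```python
# Illustrative only: the maximum fine for each tier, combining the fixed EUR cap
# with the share-of-turnover cap. Assumes the general "whichever is higher" rule;
# for SMEs and start-ups the lower of the two applies.
PENALTY_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),  # (EUR cap, share of turnover)
    "other_violations":      (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the upper bound of the fine for the given violation tier."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    turnover_cap = turnover_share * worldwide_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Example: a company with EUR 2 billion worldwide turnover committing a prohibited-practice
# violation faces a ceiling of 7% of turnover, i.e. EUR 140 million.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```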

The AI Board will provide guidance on the implementation of the Act, facilitate communication among national authorities, and offer recommendations and opinions as necessary.

What Is the EU Approach to AI Regulation?

The AI Act adopts a risk-based approach, implying that varying requirements are imposed depending on the level of risk involved.

High risk: High-risk AI systems must adhere to rigorous standards outlined in the AI Act, including risk mitigation measures, quality datasets, activity logging, comprehensive documentation, transparent user information, human supervision, and robust cybersecurity. Examples of such systems include those governing critical infrastructures like energy and transportation, medical devices, and those involved in educational or employment access decisions.

Limited risk: Providers are obligated to ensure that AI systems, like chatbots, which directly engage with individuals, are designed to make it clear that they are interacting with an AI. Additionally, deployers of AI systems creating or altering deepfakes must disclose that the content is artificially generated or manipulated.

Minimal risk: AI systems with low-risk profiles, like AI-driven video games or spam filters, are not subject to specific obligations under the AI Act. Nonetheless, companies can choose to follow voluntary codes of conduct. General-purpose and generative AI models are addressed separately, as described below.

During the negotiations, an additional section on general-purpose AI models was incorporated into the AI Act. The law distinguishes between general-purpose AI models and a subset labelled "general-purpose AI models with systemic risk", i.e. models whose high-impact capabilities may pose risks at EU scale and which attract additional obligations.
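
For organizations building an internal inventory of their AI systems, a minimal sketch of how these tiers might be recorded is shown below. The tier names mirror the Act's structure, but the data model, field names, and obligation summaries are illustrative assumptions rather than anything prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sketch of an internal AI-system inventory keyed by risk tier.
class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned practices
    HIGH = "high"               # e.g. critical infrastructure, medical devices, hiring
    LIMITED = "limited"         # transparency duties (chatbots, deepfakes)
    MINIMAL = "minimal"         # no specific obligations; voluntary codes of conduct

HEADLINE_OBLIGATIONS = {
    RiskTier.HIGH: [
        "risk mitigation measures", "quality datasets", "activity logging",
        "comprehensive documentation", "transparent user information",
        "human oversight", "robust cybersecurity",
    ],
    RiskTier.LIMITED: ["disclose AI interaction", "label AI-generated or manipulated content"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: RiskTier
    is_general_purpose_model: bool = False  # GPAI models carry separate obligations

    def obligations(self) -> list[str]:
        """Headline obligations for this system's tier (simplified)."""
        return HEADLINE_OBLIGATIONS.get(self.risk_tier, [])

# Example inventory entry: an AI system used in employment access decisions
record = AISystemRecord(
    name="resume-screening-model",
    purpose="employment access decision support",
    risk_tier=RiskTier.HIGH,
)
print(record.obligations())
```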

Relationship With the GDPR

The European Union's regulations concerning the safeguarding of personal data, privacy, and the confidentiality of communications will govern the handling of personal data in relation to the AI Act. It's important to note that the AI Act does not alter the applicability or provisions of the GDPR (Regulation 2016/679) or the ePrivacy Directive 2002/58/EC, as stated in Article 2(7).

Getting Ready for Upcoming Regulations With Ardent Privacy

Ardent Privacy helps you accelerate responsible, transparent, and explainable AI workflows, and supports your AI governance: the directing, managing, and monitoring of your organization’s AI activities. It employs software automation to strengthen your ability to mitigate risks, manage policy requirements, and govern the life cycle of both generative AI and predictive machine learning (ML) models.

Ardent Privacy helps to drive model transparency, explainability and documentation in 3 key areas:

Compliance: Facilitate AI transparency and ensure adherence to regulations by integrating data with risk controls. Automate the documentation of model metadata through factsheets to streamline inquiries and audits.
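
A factsheet is essentially structured metadata maintained alongside a model. The sketch below shows what such a record might contain; the schema and field names are hypothetical examples, not Ardent Privacy's actual format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelFactsheet:
    """Illustrative factsheet schema: hypothetical examples of model metadata
    an auditor or regulator might ask for."""
    model_name: str
    version: str
    intended_purpose: str
    risk_tier: str
    training_data_sources: list[str]
    evaluation_metrics: dict[str, float]
    human_oversight_measures: list[str]
    last_reviewed: str

factsheet = ModelFactsheet(
    model_name="credit-scoring-v2",
    version="2.3.1",
    intended_purpose="creditworthiness assessment",
    risk_tier="high",
    training_data_sources=["internal loan history 2015-2023"],
    evaluation_metrics={"auc": 0.87, "disparate_impact_ratio": 0.92},
    human_oversight_measures=["manual review of declined applications"],
    last_reviewed="2024-05-01",
)

# Export as JSON so the record can be attached to audit or inquiry responses.
print(json.dumps(asdict(factsheet), indent=2))
```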

Risk management: Establish predefined risk thresholds to proactively identify and address potential risks associated with AI models. Continuously monitor for fairness, drift, bias, performance against evaluation metrics, instances of toxic language, and protection of personally identifiable information (PII). Utilize user-based dashboards and reports to gain insights into organizational risks.
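
As a rough sketch of what threshold-based monitoring can look like in practice, the snippet below flags a model whenever a monitored metric crosses its predefined limit. The metric names, thresholds, and values are hypothetical and not tied to any specific product.

```python
# Hypothetical threshold-based monitoring check; metrics and limits are illustrative.
RISK_THRESHOLDS = {
    "data_drift_score": 0.30,      # max tolerated drift between training and live data
    "disparate_impact_gap": 0.10,  # max tolerated gap in outcomes across groups
    "pii_leakage_rate": 0.0,       # any detected PII in model output is flagged
    "toxicity_rate": 0.01,         # share of outputs flagged as toxic
}

def evaluate_model_risk(metrics: dict[str, float]) -> list[str]:
    """Return the names of metrics that breach their predefined thresholds."""
    return [
        name for name, limit in RISK_THRESHOLDS.items()
        if metrics.get(name, 0.0) > limit
    ]

# Example monitoring snapshot for one model
snapshot = {"data_drift_score": 0.42, "disparate_impact_gap": 0.08,
            "pii_leakage_rate": 0.0, "toxicity_rate": 0.02}
breaches = evaluate_model_risk(snapshot)
if breaches:
    print(f"Risk review required: {', '.join(breaches)}")  # data_drift_score, toxicity_rate
```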

AI governance: Govern both generative AI and predictive machine learning models across the lifecycle using integrated workflows and approvals. Monitor the status of use cases, in-process change requests, challenges, issues, and assigned tasks.