EU AI Act: Framework and Application

The EU AI Act, published on July 12, 2024 and in force since August 1, 2024, harmonizes AI regulation across EU member states, promoting trustworthy AI aligned with EU values. It classifies AI systems by risk and mandates risk management, transparency, and documentation.


The European Union Artificial Intelligence Act (EU AI Act), officially known as Regulation (EU) 2024/1689, represents a pivotal legislative milestone in the regulation of AI technologies within the European Union. Published in the Official Journal on July 12, 2024, and in force since August 1, 2024, this comprehensive framework is designed to harmonize AI regulations across all EU member states, ensuring that AI systems are developed, deployed, and used in a manner that aligns with EU values and fundamental rights. The EU AI Act addresses the need for a unified approach to AI governance, aiming to mitigate risks while promoting innovation and ensuring public trust in AI systems.





Objectives of the EU AI Act


The primary objectives of the EU AI Act include:


  1. Ensuring Human-Centric and Trustworthy AI: The Act emphasizes the development of AI systems that prioritize human well-being and adhere to ethical standards. This includes ensuring transparency, accountability, and fairness in AI operations.
  2. Promoting Innovation and Competitiveness: By providing a clear regulatory framework, the EU AI Act aims to foster innovation and enhance the competitiveness of the European AI industry on a global scale.
  3. Protecting Fundamental Rights and Public Interests: The Act seeks to safeguard fundamental rights, such as privacy and non-discrimination, and protect public interests by mitigating potential harms associated with AI technologies.
  4. Harmonizing Regulatory Approaches: The EU AI Act aims to prevent legal fragmentation by establishing uniform rules and standards across all member states, facilitating the free movement of AI-based goods and services within the internal market.



Key Elements of the Regulation


The EU AI Act introduces several key regulatory elements:


  • Classification of AI Systems: The Act categorizes AI systems based on their risk levels, ranging from minimal risk to high risk, with corresponding regulatory requirements for each category.
  • Risk Management Framework: For high-risk AI systems, the Act mandates a comprehensive risk management framework that includes risk assessment, mitigation strategies, and continuous monitoring.
  • Transparency and Documentation: The Act requires detailed documentation and transparency measures to ensure that AI systems can be audited and understood by regulatory bodies and stakeholders.
  • Accountability and Governance: Organizations must establish governance structures that ensure compliance with the Act, including assigning responsibilities and implementing oversight mechanisms.
  • Public and Stakeholder Engagement: The Act encourages engagement with stakeholders, including public consultations and the involvement of civil society, to ensure that AI systems meet societal needs and expectations.
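
To make the risk-based structure concrete, the sketch below shows how an internal compliance tool might map risk tiers to the regulatory elements listed above. This is a minimal illustration in Python; the tier labels and obligation strings are simplifications for illustration, not terms lifted verbatim from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act (labels simplified for illustration)."""
    UNACCEPTABLE = "prohibited"   # banned practices, e.g. social scoring
    HIGH = "high-risk"            # e.g. credit scoring, insurance underwriting
    LIMITED = "limited-risk"      # transparency duties, e.g. chatbots
    MINIMAL = "minimal-risk"      # no mandatory obligations

# Illustrative mapping of tiers to the regulatory elements described above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["may not be placed on the market or used"],
    RiskTier.HIGH: [
        "risk management framework",
        "data governance",
        "transparency and documentation",
        "accountability and governance",
    ],
    RiskTier.LIMITED: ["disclosure that users are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```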

Legislative Process and Stakeholder Involvement


The development of the EU AI Act involved extensive consultations with a wide range of stakeholders, including industry representatives, civil society organizations, academic experts, and regulatory bodies. This collaborative approach ensured that the Act reflects diverse perspectives and addresses the concerns of various stakeholders.


EU AI Act: Scope and Applicability


Broad Geographic Scope


The EU AI Act is designed with a comprehensive geographic scope that extends its regulatory reach well beyond the borders of the European Union. This is crucial in ensuring that AI technologies used within the EU adhere to consistent standards, regardless of their origin. The Act applies to:


  1. Organizations within the EU: Any entity developing, deploying, or utilizing AI systems within EU member states must comply with the EU AI Act.
  2. Organizations outside the EU: Non-EU entities whose AI systems’ outputs are utilized within the EU are also subject to the Act. This extraterritorial application ensures that AI products and services entering the EU market are held to the same rigorous standards as those developed within the Union.

By encompassing both EU-based and international organizations, the EU AI Act aims to prevent regulatory fragmentation, fostering a uniform legal framework that facilitates the free movement of AI-based goods and services across the internal market. This broad applicability is essential in maintaining a level playing field and ensuring that all AI systems used in the EU are safe, trustworthy, and compliant with EU values and regulations.




Key Definitions


To fully understand the EU AI Act and its implications, it is essential to grasp the key definitions outlined in the regulation. These definitions provide clarity and scope, ensuring that the regulation is effectively applied and interpreted.


AI System


An AI System under the EU AI Act is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This definition encompasses a wide range of AI technologies, including but not limited to:


  • Machine Learning Approaches: These systems learn from data and can improve their performance over time. This includes supervised learning, unsupervised learning, reinforcement learning, and deep learning techniques.
  • Logic- and Knowledge-Based Approaches: These systems utilize encoded knowledge and logical rules to infer conclusions, often used in expert systems and symbolic AI.

Deployer


The term Deployer refers to any natural or legal person, including public authorities, that uses an AI system under their authority. This definition explicitly excludes personal, non-professional activities, focusing instead on the professional and organizational use of AI systems. Deployers are responsible for ensuring that the AI systems they use comply with the EU AI Act’s requirements, regardless of whether they developed the system themselves or procured it from a third party.


Biometric Data


Biometric Data is defined as personal data resulting from specific technical processing of a person's physical, physiological, or behavioral characteristics. This includes:


  • Physical Characteristics: Such as facial features, fingerprints, and iris patterns.
  • Physiological Characteristics: Including voice patterns and DNA.
  • Behavioral Characteristics: Such as gait, typing patterns, and other unique identifiers that can be measured and analyzed by AI systems.

Additional Definitions


The EU AI Act also introduces several other critical definitions to ensure clarity and precision in its application:


  • Providers: Entities that develop and place AI systems on the market.
  • Users: Individuals or organizations that deploy AI systems within their operations (the role the Act's final text calls deployers).
  • High-Risk AI Systems: AI applications that pose significant risks to health, safety, or fundamental rights, subject to stringent regulatory requirements.

European AI Act: Obligations and Penalties


The EU AI Act establishes a robust framework of obligations for organizations involved in the development, deployment, and use of AI systems. These obligations are designed to ensure that AI technologies are safe, ethical, and compliant with European values and fundamental rights. Non-compliance with these obligations can result in severe penalties, underscoring the importance of adhering to the regulatory requirements set forth by the Act.


Obligations for Organizations


High-Risk AI Systems


Organizations deploying high-risk AI systems face stringent requirements due to the potential impact of these systems on health, safety, and fundamental rights. These obligations include:


  1. Risk Management System: Organizations must implement a comprehensive risk management system that covers the entire lifecycle of the AI system. This includes:
    • Identifying and analyzing risks associated with the AI system.
    • Implementing measures to mitigate identified risks.
    • Continuously monitoring the AI system to identify new risks and assess the effectiveness of mitigation measures.
  2. Data Governance: Ensuring the quality and integrity of data used by the AI system is crucial. Organizations must:
    • Maintain high standards for training, validation, and testing data.
    • Implement processes to address data bias and ensure representativeness.
    • Document data sources and ensure data traceability.
  3. Technical Documentation: Detailed technical documentation must be maintained to ensure transparency and accountability. This includes:
    • Providing a clear description of the AI system's functionality and design.
    • Documenting the algorithms and models used.
    • Maintaining logs of the AI system's decisions and operations to facilitate auditing and compliance checks.
  4. Human Oversight: High-risk AI systems must be designed to allow for human oversight. This includes:
    • Ensuring that human operators can understand and intervene in the AI system's operations.
    • Implementing measures to prevent automation bias and ensure that humans remain in control of critical decisions.
  5. Post-Market Monitoring: Organizations must continuously monitor the performance of deployed AI systems. This involves:
    • Collecting and analyzing data on the AI system's performance and impact.
    • Implementing corrective actions if the AI system deviates from its expected behavior or poses new risks.
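
One way to operationalize the lifecycle obligations above is a per-system compliance record. The sketch below is a minimal, assumed schema; the field names are illustrative, not mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One identified risk for a high-risk AI system (illustrative schema)."""
    description: str
    severity: str          # e.g. "low" / "medium" / "high"
    mitigation: str
    last_reviewed: date
    resolved: bool = False

@dataclass
class HighRiskSystemRecord:
    """Per-system record tracking the five obligations listed above."""
    system_name: str
    risks: list[RiskEntry] = field(default_factory=list)
    data_sources_documented: bool = False
    technical_docs_current: bool = False
    human_oversight_defined: bool = False
    post_market_monitoring_active: bool = False

    def open_risks(self) -> list[RiskEntry]:
        return [r for r in self.risks if not r.resolved]

record = HighRiskSystemRecord(system_name="credit-scoring-model-v2")
record.risks.append(RiskEntry(
    description="Training data under-represents applicants over 65",
    severity="high",
    mitigation="Re-sample and re-validate before next release",
    last_reviewed=date(2025, 3, 1),
))
print([r.description for r in record.open_risks()])
```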



Penalties for Non-Compliance


The EU AI Act imposes significant penalties for non-compliance, reflecting the seriousness with which the EU approaches AI regulation. These penalties are designed to enforce compliance and deter organizations from neglecting their regulatory obligations:


  1. Breaches of Prohibitions: Violations of the Act's prohibitions, such as deploying banned AI systems or using AI in prohibited applications, can result in fines of up to €35 million or 7% of the organization's total worldwide annual turnover, whichever is higher.
  2. Other Provisions: Non-compliance with other key provisions of the Act, such as failing to implement required risk management or data governance practices, can lead to fines of up to €15 million or 3% of total worldwide annual turnover.
  3. Supply of Incorrect, Incomplete, or Misleading Information: Providing false or misleading information to regulators can result in fines of up to €7.5 million or 1% of total worldwide annual turnover. This underscores the importance of transparency and honesty in regulatory interactions.
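
Because each cap is defined as the higher of a fixed amount and a percentage of worldwide annual turnover, exposure grows with company size. A short worked example (the turnover figure is hypothetical):

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, percent: float) -> float:
    """Upper bound of a fine: the higher of a fixed cap and a turnover share."""
    return max(fixed_cap_eur, turnover_eur * percent / 100)

turnover = 2_000_000_000  # hypothetical EUR 2bn worldwide annual turnover

print(fine_cap(turnover, 35_000_000, 7))  # 140000000.0 (prohibited practices)
print(fine_cap(turnover, 15_000_000, 3))  # 60000000.0  (other provisions)
print(fine_cap(turnover, 7_500_000, 1))   # 20000000.0  (misleading information)
```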

Regulatory Enforcement


Regulators have extensive powers to enforce compliance with the EU AI Act. These include:


  • Mandating Compliance: Regulators can require organizations to take specific actions to bring their AI systems into compliance with the Act.
  • Market Withdrawal: In cases of severe non-compliance, regulators can order the withdrawal of non-compliant AI systems from the market to protect public safety and fundamental rights.
  • Audits and Inspections: Regulators can conduct audits and inspections to verify compliance with the Act's requirements. Organizations must cooperate with these regulatory activities and provide access to relevant documentation and systems.



EU Artificial Intelligence Act: Compliance Requirements


The EU AI Act establishes a timeline for compliance, outlining specific dates by which various obligations must be met. These requirements are designed to mitigate risks and ensure that AI technologies are developed and deployed responsibly. Below is a detailed breakdown of the key dates and the corresponding compliance requirements.
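
Teams often track these milestones in a machine-readable compliance calendar. A minimal sketch, with the dates taken from the sections below:

```python
from datetime import date

# Compliance calendar assembled from the milestones described below.
MILESTONES = {
    date(2025, 2, 2): "Prohibitions apply (e.g. emotion recognition in "
                      "workplaces/education, social scoring)",
    date(2025, 8, 2): "Obligations for general-purpose AI providers apply",
    date(2026, 8, 2): "High-risk AI system requirements apply",
    date(2027, 8, 2): "Obligations for high-risk AI embedded in products "
                      "covered by other EU safety legislation apply",
}

def next_milestone(today: date):
    """Return the next upcoming (date, obligation) pair, or None."""
    upcoming = sorted(d for d in MILESTONES if d > today)
    return (upcoming[0], MILESTONES[upcoming[0]]) if upcoming else None

print(next_milestone(date(2025, 6, 1)))  # the August 2, 2025 GPAI milestone
```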


Immediate Prohibitions (Effective February 2, 2025)


Starting from February 2, 2025, certain AI practices are outright prohibited due to their potential to cause significant harm. These prohibitions are critical for protecting fundamental rights and maintaining public trust in AI technologies.


Emotion Recognition


Emotion Recognition Systems are banned in workplaces and educational settings. This prohibition addresses the ethical and privacy concerns associated with using AI to infer emotions from biometric data, which can lead to intrusive surveillance and discriminatory practices. Organizations must cease the deployment of such systems in these contexts to comply with the Act.


Social Scoring


Social scoring systems are prohibited where they evaluate or classify individuals based on behavior across unrelated contexts. This includes using AI to score individuals' behavior for purposes unrelated to the original data collection context, which can lead to unfair discrimination and societal division. The prohibition ensures that AI systems do not undermine the principles of fairness and non-discrimination.




General Purpose AI (Effective August 2, 2025)


By August 2, 2025, providers of general-purpose AI tools, such as chatbots and large language models, must comply with specific transparency and information-sharing obligations.


Transparency and Information Obligations


Providers must ensure that downstream users are well-informed about the AI system's capabilities and limitations. This includes:


  • Providing Clear Documentation: Detailed documentation outlining the functionality, limitations, and intended use of the AI system.
  • User Instructions: Clear instructions on how to use the AI system safely and effectively.
  • Disclosure of AI Use: Informing users that they are interacting with an AI system, ensuring transparency in AI-human interactions.
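
A structured "model card" shared with downstream users is one common way to satisfy such documentation duties. The sketch below shows an assumed, minimal structure; none of the field names are prescribed by the Act.

```python
# A minimal, hypothetical "model card" a provider might hand to downstream
# users; the keys are illustrative, not field names prescribed by the Act.
model_card = {
    "model_name": "example-llm",  # hypothetical model
    "intended_use": "General-purpose text generation and summarization",
    "known_limitations": [
        "May produce factually incorrect output",
        "Knowledge limited by training-data cut-off",
    ],
    "usage_instructions": "Keep a human reviewer in the loop for "
                          "consequential decisions.",
    "ai_disclosure": "End users must be informed that they are "
                     "interacting with an AI system.",
}

for key, value in model_card.items():
    print(f"{key}: {value}")
```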

High-Risk AI Systems (Effective August 2, 2026)


From August 2, 2026, AI applications classified as high-risk must adhere to stringent risk management and governance requirements. High-risk AI systems include those used in critical areas such as credit risk assessment, insurance underwriting, and emotion recognition outside restricted settings.


Risk Management System


Organizations must implement a comprehensive risk management system throughout the AI system's lifecycle. This involves:


  • Risk Assessment: Identifying potential risks associated with the AI system.
  • Risk Mitigation: Implementing strategies to mitigate identified risks.
  • Continuous Monitoring: Regularly monitoring the AI system to identify new risks and assess the effectiveness of mitigation measures.

Data Governance


Ensuring the quality and integrity of data used by high-risk AI systems is crucial. Requirements include:


  • High-Quality Data: Using high-quality training, validation, and testing data to ensure accurate and reliable AI outputs.
  • Bias Mitigation: Implementing processes to address data bias and ensure representativeness.
  • Data Documentation: Documenting data sources and maintaining data traceability.
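
As an example of a concrete bias-mitigation control, the sketch below compares group shares in a training set against reference population shares and flags under-represented groups. The groups, shares, and tolerance threshold are illustrative assumptions.

```python
from collections import Counter

def representativeness_gaps(samples: list[str],
                            reference_shares: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, float]:
    """Flag groups whose share of `samples` falls short of the reference
    population share by more than `tolerance`."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tolerance:
            gaps[group] = round(expected - observed, 4)
    return gaps

# Hypothetical age-band attribute of 1,000 training records vs. population.
training_groups = ["18-40"] * 600 + ["41-65"] * 320 + ["65+"] * 80
population = {"18-40": 0.45, "41-65": 0.35, "65+": 0.20}
print(representativeness_gaps(training_groups, population))  # {'65+': 0.12}
```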

Documentation and Traceability


Organizations must maintain detailed technical documentation and logs to ensure transparency and accountability.


  • Functionality Description: A clear description of the AI system's functionality and design.
  • Algorithm Documentation: Detailed documentation of the algorithms and models used.
  • Decision Logs: Maintaining logs of the AI system's decisions and operations to facilitate auditing and compliance checks.
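
Decision logs are easiest to audit when every record captures the inputs, output, model version, and timestamp in an append-only store. A minimal sketch, assuming a JSON-lines log format:

```python
import json
import time
import uuid

def log_decision(logfile, model_version, inputs, output, operator=None):
    """Append one auditable decision record as a JSON line; returns its id."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_operator": operator,  # who could intervene, if anyone
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Hypothetical usage for a credit-scoring decision.
decision_id = log_decision(
    "decisions.jsonl",
    model_version="credit-scoring-model-v2",
    inputs={"applicant_id": "A-1042"},
    output="declined",
    operator="analyst-17",
)
print(decision_id)
```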

Integration with EU Product Safety Legislation (Effective August 2, 2027)


By August 2, 2027, high-risk AI systems falling under other EU product safety regulations, such as medical devices and machinery, must comply with additional obligations to ensure comprehensive regulatory coverage.


Compliance with Product Safety Regulations


These obligations ensure that AI systems integrated with products covered by other EU safety legislations meet all relevant safety standards.


  • Conformity Assessments: Ensuring that AI systems undergo rigorous conformity assessments to verify compliance with safety standards.
  • Harmonized Standards: Adhering to harmonized standards that align with both the EU AI Act and specific product safety regulations.
  • Cross-Sectoral Compliance: Implementing measures to ensure that AI systems comply with all applicable regulatory requirements across different sectors.

AI Act: Key Obligations for High-Risk AI Systems


The EU AI Act places significant emphasis on ensuring that high-risk AI systems are developed, deployed, and utilized in a manner that mitigates potential risks and upholds ethical standards. Organizations must adhere to stringent requirements to manage these systems effectively. Below, we delve into the detailed obligations and technical requirements that organizations must fulfill to comply with the Act.


Risk Management and Governance


Establishing a Robust Governance Framework


Organizations are required to establish a comprehensive governance framework tailored to the complexities of high-risk AI systems. This framework should be designed to systematically manage risks throughout the lifecycle of the AI system.


  1. Stakeholder Involvement
    • Engagement: It is crucial to involve relevant stakeholders across various functions within the organization, including business units, IT, data offices, procurement, legal, and HR. This multidisciplinary approach ensures that all potential risks are identified and managed effectively.
    • Information Needs: Clearly define the information needs of each stakeholder group to ensure they are equipped with the necessary knowledge to contribute to the risk management process.
  2. Legal Risk Library
    • Repository Creation: Develop a comprehensive repository of legal risks associated with AI systems. This library should include detailed descriptions of potential legal challenges and the regulatory requirements that must be met.
    • Playbooks for Non-Experts: Create playbooks that provide non-experts with practical guidelines for evaluating and managing legal risks. These playbooks should simplify complex legal concepts and offer step-by-step procedures for compliance.
  3. Triage and Evaluation
    • Triage Process: Implement a triage process to prioritize and categorize risks based on their severity and likelihood. This process should help allocate resources efficiently and focus on the most critical risks.
    • Right-Sizing Evaluation: Customize the evaluation efforts to align with the specific risks associated with different AI systems. This ensures that the evaluation process is neither too burdensome nor too lenient, striking a balance that facilitates effective risk management.
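
A simple severity-by-likelihood matrix is one way to implement such a triage, as sketched below; the scoring scheme and review tracks are illustrative assumptions, not requirements of the Act.

```python
SEVERITY = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}

def triage(severity: str, likelihood: str) -> str:
    """Map a risk to a review track via a simple matrix (illustrative)."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 6:
        return "full legal review"    # most critical risks
    if score >= 3:
        return "playbook evaluation"  # handled by non-experts via playbooks
    return "log and monitor"

print(triage("high", "likely"))   # full legal review
print(triage("medium", "rare"))   # log and monitor
```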

Continuous Monitoring and Improvement


High-risk AI systems require ongoing oversight to ensure they remain compliant and do not pose unforeseen risks. Organizations must establish processes for continuous monitoring and periodic reviews of their AI systems.


  • Performance Monitoring: Regularly track and analyze the performance of AI systems to identify any deviations from expected behavior. This includes monitoring accuracy, reliability, and the impact on users.
  • Periodic Reviews: Conduct periodic reviews of the AI system's risk management framework and governance processes. These reviews should assess the effectiveness of existing measures and identify areas for improvement.

Transparency and Accountability


Transparency and accountability are critical components of the EU AI Act. High-risk AI systems must be designed and operated in a manner that ensures these principles are upheld.


Clear Instructions


  • User Documentation: Provide users with comprehensive documentation that clearly explains how to use the AI system safely and ethically. This documentation should cover:
    • System Capabilities: Detailed descriptions of the AI system's functionalities and intended use cases.
    • Usage Guidelines: Instructions on how to operate the system correctly, including any limitations and potential risks.
    • Ethical Considerations: Guidance on ethical considerations related to the use of the AI system, emphasizing the importance of fairness and non-discrimination.

Human Oversight


  • Human-in-the-Loop: Ensure that AI systems are designed to allow for human oversight and intervention. This includes:
    • Intervention Mechanisms: Implementing mechanisms that enable human operators to intervene and override AI decisions when necessary. This is particularly important in high-stakes scenarios where AI decisions can have significant consequences.
    • Training for Operators: Providing adequate training for human operators to ensure they understand the AI system's operations and can effectively manage and intervene when required.
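
Intervention mechanisms can be as simple as gating decisions so that high-stakes or low-confidence outputs are routed to a human before taking effect. A minimal sketch, assuming a hypothetical model confidence score:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float  # hypothetical model confidence in [0, 1]

def escalate(decision: Decision) -> str:
    # Placeholder: in practice this would hand off to a human reviewer
    # with authority to confirm or override the AI's output.
    print(f"Escalated for human review: {decision}")
    return "pending human review"

def finalize(decision: Decision, high_stakes: bool,
             confidence_floor: float = 0.9) -> str:
    """Gate decisions: route to a human when stakes are high or
    confidence is low (illustrative human-in-the-loop check)."""
    if high_stakes or decision.confidence < confidence_floor:
        return escalate(decision)
    return decision.label

print(finalize(Decision("approve", 0.97), high_stakes=False))  # approve
print(finalize(Decision("decline", 0.97), high_stakes=True))   # escalated
```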

Documentation and Traceability


  • Technical Documentation: Maintain detailed technical documentation that provides transparency into the AI system's design and decision-making processes. This includes:
    • Algorithm Descriptions: Comprehensive descriptions of the algorithms and models used by the AI system.
    • Decision Logs: Keeping logs of the AI system's decisions and actions to facilitate auditing and accountability. These logs should be detailed enough to trace the reasoning behind specific decisions.
    • Audit Trails: Establishing audit trails that document changes and updates to the AI system, ensuring that any modifications are tracked and can be reviewed retrospectively.



Preparing for Compliance with the EU AI Act


Compliance with the EU AI Act requires organizations to take proactive and detailed steps to ensure that their AI systems meet all regulatory requirements. This involves creating an AI inventory, establishing a robust governance process, and developing a procurement process that aligns with regulatory standards. Below, we delve into each of these components in greater detail, emphasizing the technical aspects necessary for compliance.


Creating an AI Inventory


Developing a comprehensive inventory of AI systems is a critical first step in ensuring compliance with the EU AI Act. This inventory serves as a foundation for identifying regulatory obligations and managing risks associated with AI technologies.


Steps to Create an AI Inventory


  1. Identification of AI Systems:
    • Catalog All AI Systems: List all AI systems currently in use, under development, or planned for deployment. Include detailed descriptions of each system’s functionality, purpose, and operational context.
    • Classify AI Systems: Categorize AI systems based on their risk levels (e.g., minimal risk, high risk) as defined by the EU AI Act. High-risk AI systems require special attention due to their potential impact on health, safety, and fundamental rights.
  2. Assessment of Compliance Status:
    • Regulatory Compliance Check: Evaluate each AI system against the regulatory requirements outlined in the EU AI Act. Identify gaps and areas needing improvement to achieve compliance.
    • Documentation of Non-Compliance: Document instances where AI systems do not meet regulatory standards. Develop action plans to address these gaps and ensure timely compliance.
  3. Prohibited Practices and Systems:
    • Identify Prohibited AI Practices: Highlight AI systems that involve prohibited practices, such as emotion recognition in restricted contexts or inappropriate social scoring.
    • Plan for Decommissioning: Develop plans to decommission or modify AI systems that cannot be brought into compliance due to inherent prohibited practices.
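
The three steps above can be captured in a single inventory schema. A minimal sketch with illustrative field names and example entries:

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """One AI system in the compliance inventory (illustrative schema)."""
    name: str
    purpose: str
    risk_tier: str  # e.g. "minimal", "high", "prohibited"
    compliant: bool
    gaps: list[str] = field(default_factory=list)
    decommission_planned: bool = False

inventory = [
    InventoryEntry("resume-screener", "HR shortlisting", "high",
                   compliant=False, gaps=["no decision logs"]),
    InventoryEntry("office-chatbot", "internal FAQ", "minimal",
                   compliant=True),
    InventoryEntry("workplace-emotion-cam", "employee monitoring",
                   "prohibited", compliant=False,
                   gaps=["banned practice"], decommission_planned=True),
]

needs_action = [e.name for e in inventory if not e.compliant]
print(needs_action)  # ['resume-screener', 'workplace-emotion-cam']
```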

Establishing a Governance Process


A robust governance process is essential for managing AI risks and ensuring compliance with the EU AI Act. This process should integrate various organizational functions and stakeholders to create a cohesive compliance strategy.


Key Steps in Establishing a Governance Process


  1. Identifying Stakeholders:
    • Map Stakeholder Roles: Identify key stakeholders within the organization, including business units, IT, data offices, procurement, legal, and HR. Determine their roles and responsibilities in the governance process.
    • Define Information Needs: Establish what information each stakeholder group requires to effectively participate in governance activities.
  2. Creating Legal Risk Libraries:
    • Develop Risk Repositories: Create comprehensive repositories of legal risks associated with AI systems. Include detailed descriptions of regulatory requirements and potential legal challenges.
    • Design Playbooks for Non-Experts: Develop playbooks that provide non-experts with practical guidance on evaluating and managing legal risks. These playbooks should simplify complex regulatory concepts and offer actionable steps for compliance.
  3. Implementing Triage Processes:
    • Risk Prioritization: Implement triage processes to prioritize risks based on their severity and likelihood. This helps allocate resources effectively and focus on the most critical compliance areas.
    • Right-Sizing Evaluation Efforts: Customize evaluation efforts to align with the specific risks associated with different AI systems. This ensures that the evaluation process is appropriately scaled to manage regulatory requirements without being overly burdensome.

Developing a Procurement Process


Organizations must develop a procurement process for AI technologies that aligns with their governance framework. This ensures that AI systems are procured in a manner that supports compliance with the EU AI Act.


Steps to Develop a Compliant Procurement Process


  1. Aligning Procurement with Governance:
    • Governance Integration: Ensure that the procurement process is integrated with the governance framework. This involves aligning procurement policies with regulatory requirements and organizational risk management strategies.
  2. Vendor Assessment and Selection:
    • Evaluate Vendor Compliance: Assess potential vendors for compliance with the EU AI Act. Ensure that vendors provide detailed documentation of their AI systems’ compliance status.
    • Contractual Obligations: Include contractual obligations requiring vendors to maintain compliance with the EU AI Act. This includes provisions for data governance, risk management, and transparency.
  3. Ongoing Vendor Management:
    • Regular Compliance Audits: Conduct regular audits of vendor compliance with the EU AI Act. This includes verifying that AI systems continue to meet regulatory requirements throughout their lifecycle.
    • Update Procurement Policies: Continuously update procurement policies to reflect changes in regulatory requirements and industry best practices.
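
A recurring vendor audit can be reduced to a checklist evaluated against each supplier's latest evidence. The items below are illustrative, not an exhaustive or official list.

```python
# Illustrative vendor-compliance checklist for recurring audits; the items
# paraphrase the contractual and documentation points discussed above.
CHECKLIST = [
    "Current technical documentation supplied",
    "Contract contains EU AI Act compliance clauses",
    "Data governance and bias-mitigation processes evidenced",
    "Risk management system documented and up to date",
    "Transparency obligations passed through to end users",
]

def audit(vendor: str, answers: list[bool]) -> list[str]:
    """Return the checklist items the vendor failed."""
    return [item for item, ok in zip(CHECKLIST, answers) if not ok]

print(audit("example-vendor", [True, True, False, True, True]))
# -> ['Data governance and bias-mitigation processes evidenced']
```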
