AI Regulation: The EU's Upcoming AI Act

When will the EU AI Act become law?

The EU Artificial Intelligence Act (AI Act) was approved by the European Parliament on March 13, 2024, and is the world's first comprehensive law regulating AI. The AI Act entered into force 20 days after its publication in the Official Journal of the EU on July 12, 2024, that is, on August 1, 2024, with the majority of provisions applying from August 2, 2026. The ban on AI systems posing an unacceptable risk applies from February 2, 2025, and obligations for high-risk systems embedded in regulated products apply from August 2, 2027. The AI Act sets out a broad definition of AI systems and takes a risk-based approach to regulation, with greater potential risks leading to greater compliance obligations. The Act has extraterritorial scope, impacting international companies that are not based in the EU but still fall under its provisions.

Characteristics | Values
Approval by EU Parliament | 13 March 2024
Approval by Council | 21 May 2024
Publication in Official Journal | 12 July 2024
Entry into force | 1 August 2024
Applicability of prohibited AI practices | 2 February 2025
Applicability of most parts of the Regulation | 2 August 2026
Applicability of obligations for high-risk systems in regulated products | 2 August 2027
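
For readers who want to track these milestones programmatically, here is a minimal sketch (in Python; the MILESTONES mapping and the provisions_in_effect helper are illustrative assumptions, not anything defined by the Act) that encodes the dates cited in this article and reports which rule sets already apply on a given date.

    from datetime import date

    # Milestone dates cited in this article (AI Act phased timeline).
    MILESTONES = {
        date(2024, 8, 1): "Entry into force (20 days after Official Journal publication)",
        date(2025, 2, 2): "Prohibitions on unacceptable-risk AI practices apply",
        date(2025, 8, 2): "Rules for general-purpose AI systems apply",
        date(2026, 8, 2): "Majority of provisions apply",
        date(2027, 8, 2): "Obligations for high-risk systems in regulated products apply",
    }

    def provisions_in_effect(on: date) -> list[str]:
        """Return the milestones already reached on the given date."""
        return [label for start, label in sorted(MILESTONES.items()) if on >= start]

    # Example: which rule sets apply on 1 January 2026?
    for label in provisions_in_effect(date(2026, 1, 1)):
        print(label)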


The EU AI Act will become law 20 days after its publication in the Official Journal of the EU

The EU Artificial Intelligence Act (AI Act) is a comprehensive legal framework for the regulation of AI systems across the EU. It was approved by the EU Parliament on March 13, 2024, and by the Council on May 21, 2024. The AI Act became law 20 days after its publication in the Official Journal of the EU, which occurred on July 12, 2024.

The AI Act is the first-ever legal framework on AI that addresses the risks associated with AI technology and aims to position Europe as a leader in the global AI landscape. It establishes clear requirements and obligations for AI developers and deployers regarding specific AI uses. The regulation also seeks to reduce administrative and financial burdens for businesses, particularly small and medium-sized enterprises (SMEs).

The AI Act is part of a broader package of policy measures, including the AI Innovation Package and the Coordinated Plan on AI, which work together to guarantee the safety and fundamental rights of individuals and businesses in relation to AI. These measures also promote uptake, investment, and innovation in AI across the EU.

The AI Act entered into force across all 27 EU Member States on August 1, 2024, with the majority of its provisions applying from August 2, 2026. However, it's important to note that certain provisions have different applicability timelines. For instance, prohibitions on certain AI practices will take effect after six months, while the governance rules and obligations for general-purpose AI models will become applicable after 12 months. The rules for AI systems embedded into regulated products will apply after 36 months.

The EU AI Act is a significant development in the regulation of AI, ensuring that Europeans can trust the technology and that AI systems respect fundamental rights, safety, and ethical principles.


The Act will be fully applicable 24 months after it becomes law

The EU Artificial Intelligence Act (AI Act) became law 20 days after its publication in the Official Journal of the EU, which occurred on July 12, 2024. The Act will be fully applicable 24 months after its entry into force, on August 2, 2026, but some parts will be applicable sooner.

The ban on AI systems posing unacceptable risks will come into force six months after the AI Act enters into force. This includes AI systems that exploit vulnerabilities of a person or a specific group of persons, biometric categorisation systems, social scoring systems, AI systems for assessing the risk of committing a criminal offence, and AI systems to infer emotions in the workplace and educational institutions.

Codes of practice will apply nine months after the entry into force of the AI Act. These codes will be developed by the industry, with the participation of member states through the AI Board, and will be evaluated and approved by the AI Office.

Rules on general-purpose AI systems, which must comply with transparency requirements, will apply 12 months after the AI Act enters into force. These requirements include compliance with EU copyright law and the publication of detailed summaries of the content used for training.

Obligations for high-risk AI systems embedded in regulated products will become applicable 36 months after the entry into force of the AI Act; other high-risk systems are covered from the 24-month mark. High-risk AI is broadly defined as AI posing a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by materially influencing the outcome of decision-making. Examples include critical infrastructure, medical devices, and systems that determine access to jobs.

The EU AI Act is the world's first comprehensive horizontal legal framework for AI. It provides EU-wide rules on data quality, transparency, human oversight, and accountability. The Act will have a profound impact on companies conducting business in the EU, with significant compliance obligations and fines of up to EUR 35 million or 7% of global annual revenue, whichever is higher, for non-compliance.
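
As a rough illustration of how the penalty ceiling works out in practice (the max_fine_eur function and the EUR 2 billion turnover figure are hypothetical, not taken from the Act), the maximum fine is simply the higher of the two limits:

    def max_fine_eur(global_annual_turnover_eur: float) -> float:
        """Maximum fine cited in the article: EUR 35 million or 7% of
        worldwide annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

    # Example: a company with EUR 2 billion in worldwide annual turnover.
    print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000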


The ban on AI systems posing unacceptable risks will come into force six months after the Act becomes law

The EU Artificial Intelligence Act (AI Act) is the world's first comprehensive legal framework on AI. It was approved by the EU Parliament on March 13, 2024, and by the Council on May 21, 2024. The AI Act entered into force 20 days after its publication in the Official Journal, which occurred on July 12, 2024.

The Act introduces a risk-based framework for classifying AI systems, categorizing them as prohibited, high-risk, or low (or minimal) risk. Prohibited AI practices, or AI systems with unacceptable risk, are those that pose an inherent threat to core Union values and fundamental rights, including human dignity, freedom, equality, and privacy. These practices will be the first to be addressed in the Act's gradual application timeline, which spans 24 months from the entry into force. The prohibited practices include:

  • Subliminal, manipulative, and deceptive AI techniques with the risk of significant harm. This includes practices that alter human behavior and coerce individuals into making decisions they wouldn't otherwise consider, potentially undermining personal autonomy and freedom of choice.
  • AI systems that exploit the vulnerabilities of persons, such as age, disabilities, or specific social or economic situations, in a way that can cause significant harm.
  • AI systems used for the classification or scoring of people based on behavior or personality characteristics, leading to detrimental treatment.
  • Predictive policing based solely on AI profiling or AI assessment of personality traits.
  • Untargeted scraping of facial images to create AI facial recognition databases.
  • AI systems for inferring emotions in workplaces and education.
  • Biometric categorization AI systems to infer sensitive personal traits, such as race, political leanings, religious beliefs, or sexual orientation.
  • AI systems for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.

The ban on these unacceptable risk AI systems is a critical component of the AI Act, ensuring that practices that pose a threat to fundamental rights and values are prohibited in the EU.


Rules on general-purpose AI systems will apply 12 months after the Act becomes law

The EU's Artificial Intelligence Act (AI Act) is a groundbreaking piece of legislation that lays down the first-ever comprehensive legal framework for AI. The Act was approved by the EU Parliament on March 13, 2024, and subsequently by the Council on May 21, 2024. The AI Act is designed to regulate the use of AI, ensuring ethical deployment while safeguarding fundamental rights, boosting innovation, and fostering trust in AI technologies.

The Act entered into force 20 days after its publication in the Official Journal, which occurred on July 12, 2024, meaning it became law on August 1, 2024. While most provisions will become applicable two years after entry into force, on August 2, 2026, there are varying timelines for specific aspects of the legislation.

One of the critical aspects of the AI Act is the regulation of general-purpose AI systems. General-purpose AI systems are defined as AI systems that can serve multiple purposes, either directly or when integrated into other AI systems. Due to their versatility and widespread applicability, robust regulatory oversight is necessary to manage risks and ensure ethical deployment.

The rules on general-purpose AI systems will come into effect 12 months after the Act becomes law, meaning they will apply from August 2, 2025. This transition period allows stakeholders to prepare and ensure compliance with the new regulations. During this period, companies and organisations utilising AI technologies should take proactive steps to assess, document, and monitor their AI systems so they remain compliant and competitive in a rapidly evolving technological landscape.

The rules on general-purpose AI systems include several key obligations for providers of such systems. These obligations aim to ensure transparency, ethical use, and risk management. Providers of general-purpose AI systems must:

  • Draw up and maintain technical documentation of the system's training and evaluation.
  • Make available documentation and information to downstream providers to help them understand the system's capabilities and limitations.
  • Put in place a policy to comply with EU copyright law and ensure that all content used for training respects reserved rights.
  • Publish a summary of the content used for training the system.

For general-purpose AI systems with systemic risk, additional obligations apply, including performing model evaluations, assessing and mitigating systemic risks, ensuring cybersecurity protections, and reporting serious incidents.

The AI Act's rules on general-purpose AI systems are designed to ensure the safe and trustworthy use of AI innovations. By providing clear guidelines and obligations, the EU is fostering an environment that promotes ethical AI development and deployment while protecting the rights and interests of its citizens.
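
As a sketch of how a provider might track these baseline obligations internally (the GpaiProviderChecklist class and its field names are illustrative assumptions, not terminology from the Act):

    from dataclasses import dataclass, fields

    @dataclass
    class GpaiProviderChecklist:
        """Illustrative checklist of the provider obligations described above."""
        technical_documentation_maintained: bool = False  # training and evaluation docs
        downstream_documentation_shared: bool = False     # capabilities and limitations
        copyright_policy_in_place: bool = False           # EU copyright law compliance
        training_content_summary_published: bool = False

        def outstanding(self) -> list[str]:
            """List the obligations not yet marked as satisfied."""
            return [f.name for f in fields(self) if not getattr(self, f.name)]

    checklist = GpaiProviderChecklist(technical_documentation_maintained=True)
    print(checklist.outstanding())

Models posing systemic risk would carry the additional items noted above, such as model evaluations, systemic risk mitigation, cybersecurity protections, and incident reporting, on top of this baseline.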


Obligations for high-risk systems will apply 36 months after the Act becomes law

The EU Artificial Intelligence Act (AI Act) is a comprehensive legal framework for AI, providing EU-wide rules on data quality, transparency, human oversight, and accountability. The Act places obligations on providers, importers, distributors, and deployers of AI systems placed on the EU market or whose use affects people located in the EU.

The obligations for high-risk systems will apply in stages: for high-risk AI embedded in regulated products, 36 months after the AI Act enters into force; for other high-risk systems, from the 24-month mark. High-risk AI systems are those that pose a significant risk of harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. Examples include critical infrastructure, education, employment, essential private and public services, and systems used in law enforcement, migration and border management, justice, and democratic processes.

Deployers of high-risk AI systems are required to take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use. This includes assigning human oversight to individuals with the necessary competence, training, authority, and support. Deployers must also monitor the operation of the high-risk AI system and keep logs for an appropriate period, typically at least six months.

Before using a high-risk AI system in the workplace, deployers must inform workers' representatives and affected workers. Additionally, deployers who are public authorities or EU institutions must comply with specific registration obligations.

High-risk AI systems must also comply with extensive requirements regarding transparency, risk management, data and data governance, technical documentation, record-keeping, accuracy, robustness, and cybersecurity.


Frequently asked questions

When will the EU AI Act become law?

The EU AI Act was approved by the European Parliament on March 13, 2024, and was published in the Official Journal of the EU on July 12, 2024. It entered into force 20 days after its publication, on August 1, 2024, with most provisions applying from August 2, 2026.

What is the EU AI Act?

The EU AI Act is the world's first comprehensive horizontal legal framework for AI, providing EU-wide rules on data quality, transparency, human oversight, and accountability. It establishes obligations for AI based on its potential risks and level of impact, with a focus on protecting fundamental rights, democracy, and environmental sustainability.

What are the penalties for non-compliance?

The penalties for non-compliance are significant, with fines of up to EUR 35 million or 7% of the company's global annual turnover in the previous financial year, whichever is higher.
