AI Act: When Will Artificial Intelligence Be Regulated?

The EU AI Act is the world's first comprehensive AI law, regulating the use of artificial intelligence in the EU. The Act was provisionally agreed on 8 December 2023, entered into force on 1 August 2024, and becomes fully applicable 24 months after entry into force, with some parts applying sooner. The Act aims to ensure that AI systems placed on the European market and used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly, while stimulating investment and innovation in AI in Europe. Its risk-based approach sorts AI systems into four risk classes, with stricter rules for higher-risk systems. The EU AI Act is a significant development in the regulation of AI and is expected to have a substantial impact on AI governance worldwide.

Characteristics of the AI Act:

First regulation on AI: The AI Act is the world's first comprehensive AI law.
Purpose: To ensure the proper functioning of the EU single market by setting consistent standards for AI systems across EU member states.
Scope: AI systems "placed on the market, put into service or used in the EU".
Exemptions: AI systems used exclusively for military or defence purposes; AI developed and used solely for scientific research; and free and open-source AI systems and components.
Risk-based approach: Four categories of risk: unacceptable, high, limited, and minimal/none.
Unacceptable-risk systems: Prohibited outright; these include systems with a significant potential for manipulation or that exploit the vulnerabilities of specific groups.
High-risk systems: Fall into one of two categories: a safety component or product subject to existing safety standards and assessments, or a system used for a specific sensitive purpose.
Requirements for high-risk systems: Developers must meet requirements covering risk management, data governance, monitoring and record-keeping, detailed documentation, transparency and human oversight obligations, and standards for accuracy, robustness, and cybersecurity.
Governance: A new body within the European Commission, the European AI Office, will oversee implementation of the AI Act, supported by a scientific panel of independent experts.
Enforcement: Primarily through national market surveillance authorities in each Member State, coordinated at EU level by the European AI Office and the European AI Board.
Penalties: Fines depend on the type of infringement and company size, ranging from €7.5 million or 1% of total worldwide annual turnover to €35 million or 7%, whichever is higher in each case.
Timeline: Entered into force on 1 August 2024; most provisions apply two years later, from 2 August 2026.

lawshun

The EU AI Act will be the world's first comprehensive AI law

The EU AI Act is the world's first comprehensive AI law. In April 2021, the European Commission proposed the first EU regulatory framework for AI, and the European Parliament adopted its negotiating position in June 2023. The Act regulates the sale and use of AI in the EU, ensuring that AI systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

The Act establishes a risk-based approach to regulating AI, imposing different rules for different risk levels. AI systems deemed to pose a threat to people fall into the 'unacceptable risk' category and will be banned. This includes social scoring systems, systems that manipulate children or other vulnerable groups, and real-time remote biometric identification systems.

High-risk AI systems are those that negatively affect safety or fundamental rights. These are further divided into two subcategories: AI systems used in products under the EU's product safety legislation and AI systems falling into specific areas that need to be registered in an EU database. High-risk systems must undergo pre-market and post-market assessments.

Limited-risk AI systems, such as chatbots and image-, audio-, and video-generating AI, must comply with transparency requirements: users must be informed that they are interacting with AI so they can decide whether to continue using it. Generative AI models, like ChatGPT, must also be designed to prevent the generation of illegal content.

The EU AI Act is expected to have a significant impact on global AI governance and set a precedent for other countries' AI regulations. The Act will be fully applicable 24 months after entry into force, with some parts applicable sooner, such as the ban on AI systems posing unacceptable risks, which will apply six months after the entry into force.

The Act takes a risk-based approach, with four risk classes

The EU Artificial Intelligence Act (AIA) sets out four risk levels for AI systems: unacceptable, high, limited, and minimal (or no) risk. Each class has different regulations and requirements for organisations developing or using AI systems. The higher the risk category, the greater the legal requirements that must be met.

Unacceptable risk is the highest level of risk and covers eight main types of AI applications incompatible with EU values and fundamental rights. These applications will be prohibited in the EU. AI systems deemed to pose an unacceptable risk include those that involve subliminal manipulation, such as influencing a person's behaviour without their knowledge or consent, and the exploitation of vulnerable persons, such as voice-activated toys that encourage dangerous behaviour in children.

High-risk AI systems are the most regulated systems allowed in the EU market. This level includes safety components of already regulated products and stand-alone AI systems in specific areas, which could negatively affect the health and safety of people, their fundamental rights, or the environment. These AI systems can potentially cause significant harm if they fail or are misused. High-risk AI systems are subject to extensive obligations and must meet certain requirements before they can be put on the market and operated in the EU.

The third level of risk is limited risk, which covers AI systems that carry a risk of manipulation or deception. Such systems must be transparent: humans must be informed that they are interacting with AI (unless this is obvious), and any deep fakes must be labelled as such. Chatbots, for example, are classified as limited risk.

The lowest level of risk is minimal risk. This level includes all other AI systems that do not fall under the above-mentioned categories, such as a spam filter. AI systems under minimal risk do not have any restrictions or mandatory obligations. However, it is recommended that they follow general principles such as human oversight, non-discrimination, and fairness.
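The four-tier scheme described above can be sketched as a simple lookup table. This is an illustration only: the tier names come from the Act, but the obligation lists are paraphrased from this article, and real classification under the Act is a legal assessment, not code.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed, but heavily regulated
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no mandatory obligations

# Illustrative mapping of tiers to the obligations described above
# (paraphrased from this article, not quoted from the Act itself).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: ["conformity assessment", "risk management",
                    "data governance", "logging", "human oversight"],
    RiskTier.LIMITED: ["inform users they are interacting with AI",
                       "label deep fakes as such"],
    RiskTier.MINIMAL: [],  # only voluntary codes of conduct recommended
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

# A chatbot, for instance, falls under limited risk:
print(obligations_for(RiskTier.LIMITED))
```

The hierarchy is the point: each higher tier adds legal burden, which is why classification disputes under the Act are likely to centre on the boundary of the high-risk category.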

The EU AI Act proposes a risk-based approach to regulating AI systems, with the main idea being to regulate AI based on its capacity to cause harm to society. The act will ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. It will also stimulate investment and innovation in AI in Europe.

It will ban unacceptable-risk AI systems, such as biometric categorisation systems

The EU Artificial Intelligence Act is the world's first comprehensive AI law. It was approved by the European Parliament in March 2024 and formally endorsed by the Council in May 2024. The Act introduces a risk-based framework for classifying AI systems, categorising them as prohibited, high-risk, or low (or minimal) risk.

Unacceptable-risk AI systems are deemed a threat to human safety, livelihoods, and rights, and will be banned. These include biometric categorisation systems, which analyse biometric data such as facial characteristics or fingerprints to deduce sensitive personal traits, such as race, political leanings, religious beliefs, sexual orientation, or details about an individual's sex life. Using AI in this manner enables discriminatory practices across sectors, including employment and housing, reinforcing societal disparities and infringing fundamental rights such as privacy and equality.

For example, when landlords or housing managers employ these AI tools for screening prospective tenants, there is a tangible risk of biased decisions against people from specific racial or ethnic backgrounds, or discrimination based on sexual orientation or gender identity. Such practices not only undermine fairness but also contravene principles of nondiscrimination and personal dignity.

However, the AI Act acknowledges exceptions for legally permissible activities, including the organization of biometric data for specific, regulatory-approved purposes. Lawful uses might involve organizing images by attributes such as hair or eye color for objectives provided by law, including certain law enforcement activities, provided these actions comply with EU or national legislation. This nuanced approach aims to balance the benefits of AI technologies with the imperative to protect individual rights and prevent discrimination.

The ban on unacceptable-risk AI systems, including biometric categorisation systems, will apply six months after the entry into force of the AI Act, i.e. from 2 February 2025. Non-compliance with these prohibitions can result in administrative fines of up to €35,000,000 or up to 7% of the offender's total worldwide annual turnover, whichever is higher.
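The mechanics of that penalty ceiling can be shown in a short sketch, assuming (as the Act's final text provides) that the fixed amount and the turnover percentage are compared and the higher applies:

```python
def max_fine_prohibited_practice(worldwide_turnover_eur: float) -> float:
    """Penalty ceiling for prohibited-practice violations under the AI Act:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million fixed amount:
print(max_fine_prohibited_practice(1_000_000_000))
```

In practice this means the fixed amount acts as a floor for large undertakings: any company with more than €500 million in worldwide turnover faces a ceiling set by the 7% figure rather than the €35 million one.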

High-risk AI systems will be carefully regulated and face numerous obligations

The EU Artificial Intelligence Act (AI Act) is the world's first comprehensive AI law, which aims to regulate the use of AI in the EU. The law, which was adopted in March 2024, outlines specific rules and obligations for providers and users of AI systems, depending on the level of risk associated with the technology.

High-risk AI systems are those that negatively affect safety or fundamental rights and fall into two categories. The first category includes AI systems used in products under the EU's product safety legislation, such as toys, aviation, cars, medical devices, and lifts. The second category comprises AI systems used in critical areas such as the management and operation of critical infrastructure, education, employment, access to essential private and public services, migration, and law enforcement.

High-risk AI systems will face numerous obligations and strict requirements before they can be placed on the EU market. These obligations include:

  • Adequate risk assessment and mitigation systems
  • High-quality datasets to minimise risks and discriminatory outcomes
  • Logging of activity to ensure traceability of results
  • Detailed documentation providing all necessary information on the system and its purpose for authorities to assess compliance
  • Clear and adequate information to the deployer
  • Appropriate human oversight measures to minimise risk
  • High levels of robustness, security, and accuracy
  • Compliance with accessibility requirements

Additionally, all remote biometric identification systems are considered high-risk and are subject to strict requirements and regulations. The use of such systems in publicly accessible spaces for law enforcement purposes is generally prohibited, with narrow exceptions for specific situations like searching for a missing child or preventing a terrorist threat.

The obligations aim to ensure that high-risk AI systems respect fundamental rights, safety, and ethical principles while addressing the risks associated with powerful and impactful AI models. The EU AI Act is a significant step towards fostering the development and uptake of safe and trustworthy AI across the EU's single market.

The act will also govern the use of AI by law enforcement

The EU Artificial Intelligence Act (AIA) will govern the use of AI by law enforcement agencies. The act will introduce rules for AI systems used by law enforcement, categorised according to the risk they pose.

AI systems that pose an "unacceptable risk" will be banned. This includes cognitive behavioural manipulation, social scoring, and biometric identification and categorisation of people. However, some exceptions will be made for law enforcement purposes. For example, real-time remote biometric identification systems will be permitted in limited cases, and post-event remote biometric identification will be allowed for prosecuting serious crimes with court approval.

AI systems deemed "high-risk" will face stringent mandatory requirements and must undergo a conformity assessment. The AIA categorises certain AI tools used by law enforcement as "high-risk" due to their potential impact on individual freedoms and safety. These include post-event biometric identification, individual risk assessment of natural persons, emotion detection and polygraphs, evaluating the reliability of evidence, predictive policing, and criminal profiling.

Users, providers, developers, and sellers of "high-risk" AI systems in law enforcement will need to follow strict rules. Each application must undergo an exhaustive risk assessment and mitigation process to understand and counter potential hazards. The foundational data driving these AI systems must be of the highest quality to diminish risks and circumvent any discriminatory outcomes and algorithmic bias.

The AIA aims to balance innovation with the protection of fundamental rights and societal values. It provides a robust legal framework to guide the application of AI in law enforcement, upholding the EU's values while leveraging the transformative capabilities of the technology.

Frequently asked questions

What is the AI Act?
The AI Act is a legal framework that governs the sale and use of artificial intelligence in the EU. Its official purpose is to ensure the proper functioning of the EU single market by setting consistent standards for AI systems across EU member states.

When did the AI Act become law?
The AI Act was provisionally agreed on 8 December 2023 and entered into force on 1 August 2024. The majority of its provisions apply two years after entry into force, from 2 August 2026.

How does the AI Act classify AI systems?
The AI Act adopts a risk-based approach, classifying AI systems into four risk categories: unacceptable-risk, high-risk, limited-risk, and minimal/no-risk. The Act focuses on unacceptable-risk and high-risk AI systems, which are subject to various obligations and restrictions.

What does the AI Act do?
The AI Act sets out to ensure that AI systems in the EU are "safe, transparent, traceable, non-discriminatory, and environmentally friendly". It establishes obligations and restrictions for providers and users of AI systems depending on the level of risk. The Act also promotes innovation and investment in AI, providing opportunities for start-ups and small and medium-sized enterprises to develop and train AI models.
