Top Questions
What is the purpose of the Artificial Intelligence (AI) Act?
When was the AI Act formally adopted?
What are some prohibited uses of AI under the AI Act?
How does the AI Act categorize AI systems by risk?
Why have tech companies pushed back against the AI Act?
Artificial Intelligence Act (AI Act), European Union (EU) legislation that seeks to improve EU citizens’ experience, privacy, and safety when using artificial intelligence (AI). The act places limitations on how corporations and other entities that use AI may gather or share information, and it aims to help EU citizens avoid discrimination, which may occur when AI makes decisions that privilege some groups over others.
The AI Act was released alongside other initiatives designed to improve how corporations use AI. For example, in January 2024 the European Commission (the EU’s executive arm) launched an AI innovation package designed to support start-ups and small to medium-sized enterprises by giving them access to supercomputing infrastructure for training their AI models. The plan also aims to have companies use “AI factories,” supercomputing hubs around Europe dedicated to developing and training AI models.
The first proposal to regulate AI in the EU was put forth by the European Commission in April 2021. After three years of deliberation and revision, the Council of the European Union (composed of government ministers from each EU member country) gave the AI Act its final approval on May 21, 2024. Although the AI Act went into effect in August 2024, enforcement of its terms is being rolled out in stages, chiefly in August 2025 and August 2026, which allows businesses time to revise their practices.
The AI Act applies to any entity that creates or uses AI in its business. These entities include providers, such as OpenAI (which developed the ChatGPT generative AI model); deployers, or companies that use models such as ChatGPT or other AI chatbots; and importers, which are entities that bring AI technology into the EU from elsewhere. Although the act applies only within the EU, similar legislation has been adopted or proposed in other countries, including South Korea and Brazil, and in more than a dozen U.S. states, including Illinois, California, Colorado, and New York.
The AI Act puts forth several risk tiers based on how AI is used. AI systems that pose an “unacceptable risk” are prohibited outright; practices that fall into this category include the following:
AI cannot be used to manipulate or deceive its users. For example, AI-generated information that has not been fact-checked may encourage users to engage in risky behavior that results in serious injury.
AI cannot be used to discriminate against specific social groups. For instance, if autonomous vehicles use AI, developers must ensure that the vehicles can detect pedestrians of all skin colors to avoid accidents.
AI cannot be used to assign individuals a “social score.” This practice, which is used by the Chinese government, ranks citizens on a scale that determines whether they receive favorable or poor treatment.
AI cannot be used to discriminate based on biometric identifiers. Although biometric systems can be lawfully used (for example, identifying an office worker entering a building), such systems cannot be used to discriminate against social groups based on physical features.
AI cannot be used to create databases of individuals deemed most likely to commit crimes. This clause again targets discrimination based on appearance and addresses privacy concerns around CCTV (closed-circuit television) footage. Real-time information collection is also limited, depending on circumstance and necessity. AI may, however, be used to identify people who have already committed crimes.
“High risk” AI systems are subject to intense scrutiny but are not banned outright. These systems include critical safety infrastructure, such as traffic light controls and medical devices; biometrics (some forms of which fall under the “unacceptable risk” category and are banned); and employment-related AI, which can lead to discrimination against applicants when used in hiring. Companies that operate high-risk systems must submit documentation showing that their systems do not violate the act, and such transparency is a prerequisite for receiving government approval of those systems.
“Limited risk” AI systems have some potential to mislead consumers and therefore pose transparency risks, but to a much lesser extent than high-risk or unacceptable-risk systems. This level is especially relevant to generative AI systems and chatbots. Although such systems may at times be designed improperly, they are, according to the AI Act, unlikely to cause significant harm to their users. This provision also addresses deepfakes, or synthetically generated AI media. Companies must disclose when they distribute such content, because it can be difficult to distinguish from real images and videos.
The final category, “minimal risk,” addresses systems that do not inherently violate consumers’ rights and that are generally expected to follow principles of nondiscrimination. The AI Act also states that tech companies must notify individuals if their works are used to train the companies’ generative AI models. If a company commits the most serious violations of the act, the maximum penalty is a fine of €35 million or 7 percent of the company’s worldwide annual turnover, whichever is higher.
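The interplay between the fixed cap and the turnover-based cap can be worked through with simple arithmetic. The short Python sketch below, which uses hypothetical turnover figures and an illustrative function name of the author’s choosing, shows how the higher of the two ceilings determines the maximum possible fine; it is an illustration of the figures cited above, not legal guidance.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_rate: float = 0.07) -> float:
    """Maximum possible fine for the most serious violations: the higher of
    a fixed cap (EUR 35 million) and 7 percent of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_rate * worldwide_annual_turnover_eur)

# Hypothetical examples: a smaller firm and a large multinational.
print(max_fine_eur(100_000_000))     # 35,000,000 euros (fixed cap dominates)
print(max_fine_eur(10_000_000_000))  # 700,000,000 euros (7% of turnover dominates)
```

As the example shows, the fixed €35 million cap is the binding ceiling for smaller companies, while for large multinationals the 7 percent figure can yield a far larger maximum fine.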
Many large technology companies, including Meta and OpenAI, have called these regulations “cumbersome,” especially the requirement to notify people if their work has been used in training data, and say the regulations will slow innovation. OpenAI CEO and cofounder Sam Altman, during a panel discussion at the Technical University of Berlin, pushed for Europe to accept AI as the future, stating that the OpenAI team “want[s] to be able to deploy our products in Europe as quickly as we do in the rest of the world.” The statement hinted at impatience with the pace of AI rollout under EU restrictions.
Meta also took an aggressive stance. In February 2025 the company’s chief global affairs officer, Joel Kaplan, equated the EU’s technology fines with tariffs, echoing a similar statement by Meta CEO Mark Zuckerberg. Kaplan, like Altman, stressed the importance of innovation and argued that the EU would fall behind if AI were regulated at the AI Act’s proposed level of strictness. Many technology companies have pressed the EU especially hard to ease regulations since Donald Trump, who began his second term as U.S. president in January 2025, argued that the act stifles innovation.