By PYMNTS
The EU AI Act has been lauded as the world's most comprehensive set of regulations on artificial intelligence. But it is a set of general principles without implementation details.
The real work comes with the Code of Practice for general-purpose AI models, which details the compliance requirements for AI companies.
“Many outside Europe have stopped paying attention to the EU AI Act, deeming it a done deal. This is a terrible mistake. The real fight is happening right now,” wrote Laura Caroli, a senior fellow at the Wadhwani AI Center, for the Center for Strategic and International Studies.
The code of practice will undergo three drafts before being finalized at the end of April. These voluntary requirements take effect in August.
However, the third draft, which was due to be released on Feb. 17, has been delayed, with indications that it won't be out for another month, Risto Uuk, head of EU policy and research at the Future of Life Institute, told PYMNTS. MIT professor Max Tegmark is president of the advocacy group.
Uuk believes the draft’s delay was due to pressure from the tech industry. Particularly tricky are rules for AI models that pose a systemic risk, which apply to 10 to 15 of the biggest models created by OpenAI, Google, Meta, Anthropic, xAI and others, he added.
Big tech companies are boldly challenging EU regulations, believing they will have the support of the Trump administration, according to the Financial Times (FT). Meta has dispatched tech lobbyists in the EU to water down the AI Act.
The FT also said Meta refused to sign the code of practice, which is a voluntary compliance agreement, while Google’s Kent Walker, president of global affairs, told Politico that the code of practice was a “step in the wrong direction” at a time when Europe wants to be more competitive.
“Certain big technology companies are coming out saying they will not sign this code of practice unless it is changed according to what they want,” Uuk said.
One point of contention is how copyrighted material is used for training. Another is having an independent third party assess models for risks, according to Uuk.
The companies have complained that the code goes beyond the EU AI Act’s requirements, Uuk said. But he noted that many of them already follow these practices in collaboration with the U.K. AI Safety Institute and others. Tech companies also already release their technical reports publicly.
Uuk said there’s concern that the EU will weaken the safety provisions because of tech companies’ opposition. He also noted that the new European Commission administration that took office last December leans toward cutting red tape, simplifying rules and increasing innovation.
The Future of Life Institute is perhaps best known in AI circles as the organization that circulated an open letter in March 2023 calling for a six-month moratorium on advanced AI models until safety protocols were developed. It was signed by Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and AI pioneers Yoshua Bengio and Stuart Russell, among others.
Did the letter work? Uuk said there was no pause in AI development, and the fast pace of AI advancements has continued.
Moreover, “Many of these [AI] companies have not increased their safety work, which the ‘pause’ letter called for,” he added. “The pause was not just for the sake of pausing, but you would use it to increase AI safety work, and this work, arguably, in many cases, has not happened.”
In May 2024, OpenAI dissolved its AI safety team days after the resignations of its two AI safety leaders: Chief Scientist Ilya Sutskever and safety co-leader Jan Leike, who posted on X that OpenAI didn’t prioritize AI safety.
One silver lining, Uuk said, is that while tech companies have continued to build, regulatory action has gained momentum globally.
However, Uuk was disappointed by this year’s AI Action Summit in Paris, which followed the first two summits in the U.K. and South Korea.
Unlike past safety summits, the Paris summit leaned toward promoting AI innovation. “There was barely any discussion of safety,” Uuk said.