(The Center Square) – New York business groups are raising concerns about a legislative proposal to regulate artificial intelligence in the state, saying the plan would stifle innovation and create barriers for businesses looking to develop the new technology.
The Responsible AI Safety and Education Act, or the RAISE Act, filed by a group of Democrats earlier this week, seeks to limit harm from powerful frontier artificial intelligence models through safety reporting requirements. The measure, if approved, would require "large developers" of artificial intelligence models to have safety and security protocols, transparency measures, and mandatory reporting of incidents to the attorney general.
Backers of the plan say the new regulations are needed to prevent the emerging machine-learning technology from being abused.
"Artificial intelligence is evolving faster than any technology in human history," the bill's summary states. "It is driving groundbreaking scientific advances leading to life-changing medicines, unlocking new creative potential, and automating mundane tasks. At the same time, experts and practitioners in the field readily acknowledge the potential for serious risks."
But a coalition of business groups is pushing for changes to the proposal, which they say poses "significant threats across all New York industries that seek to leverage the latest in AI innovation."
"The bill's broad liability provisions and onerous compliance mandates risk substantially delaying new technological advancements from coming online, impeding the adoption of crucial tools," the groups wrote in a letter Wednesday to legislative leaders. "Businesses in critical sectors increasingly rely on AI to enhance efficiency, improve services, and drive innovation, but they would find the rollout of these beneficial technologies severely hindered by the proposed regulations."
A group of investors in New York startups and companies is also urging lawmakers to amend the bill to remove mandatory predeployment safety protocols and annual third-party audits for advanced systems, saying the requirements could choke off funding and stifle research and development.
"While the intent to ensure AI safety is laudable, the practical implications of these requirements are deeply troubling," the investors wrote to legislative leaders. "The current ecosystem for independent, qualified AI auditors is nascent at best, meaning that imposing such mandates now would create severe bottlenecks, significantly delaying critical development cycles."
New York is one of dozens of states scrambling to regulate the emerging technology and establish protections against AI-related harms, such as "deepfake" scams, political misinformation, algorithmic discrimination and job displacement. So far, at least three states – California, Colorado and Utah – have passed laws regulating its use in the commercial sector.
A proposal included in President Donald Trump's tax cut bill would preempt artificial intelligence laws and regulations passed recently in dozens of states, but the plan has drawn opposition from some Republicans and a bipartisan group of attorneys general who have regulated high-risk uses of the technology. The measure was approved by the U.S. House of Representatives and is being considered by the Senate.