The US Has Failed to Pass AI Regulation. New York City Is Stepping Up

The US Congress won’t pass federal AI regulation anytime soon. NYC is forging ahead with an AI Action Plan and a proposal for a new Office of Algorithmic Data Integrity.
Illustration: Andriy Onufriyenko/Getty Images

As the US federal government struggles to meaningfully regulate AI—or even function—New York City is stepping into the governance gap.

The city introduced an AI Action Plan this week that mayor Eric Adams calls a first of its kind in the nation. The set of roughly 40 policy initiatives is designed to protect residents against harms such as bias and discrimination from AI. It includes developing standards for AI purchased by city agencies and new mechanisms for gauging the risk of AI used by city departments.

New York’s AI regulation could soon expand even further. City council member Jennifer Gutiérrez, chair of the body’s technology committee, today introduced legislation that would create an Office of Algorithmic Data Integrity to oversee AI in New York.

If established, the office would provide a place for citizens to take complaints about automated decision-making systems used by public agencies, functioning like an ombudsman for algorithms in the five boroughs. It would also assess the city’s AI systems for bias and discrimination before they are deployed.

Earlier this year, several US senators suggested creating a new federal agency to regulate AI, but Gutiérrez says she has learned there’s no point in waiting for action in Washington, DC. “We have a unique responsibility because a lot of innovation lives here,” she says. “It’s really important for us to take the lead.”

New York City council member Jennifer Gutiérrez wants the city to create an office to regulate AI.

Photograph: William Alatriste

Gutiérrez supports a testing requirement for algorithms used by city government because AI is beginning to be widely used, she says. City departments are interested in using AI software for administrative tasks, like assessing the risk that a child will become a victim of abuse or gauging student learning rates. She is also wary of Adams’ fondness for technology like robotic dogs and AI that makes robocalls in languages he doesn’t speak. New York City has a reputation as a testing ground for surveillance technology, from a recent rise in drone use to questionable deployments of face recognition in housing, in stadiums, and by police.

New York was ahead of the federal government on AI regulation even before this week. The city formed a task force in 2018 to assess its use of the technology, and a law requiring businesses to check their hiring algorithms for bias went into effect earlier this year. But some protections have been dialed back. In January 2022, Adams rescinded an executive order signed by his predecessor, Bill de Blasio, that created an Algorithms Management and Policy Officer to work with city agencies on deploying AI equitably. A report issued by the officer in 2020 said city agencies used 16 kinds of algorithms with a potentially substantial impact on people’s rights, but it did not survey every AI model used by the New York City Police Department.

Gutiérrez says she still doesn’t have a comprehensive understanding of the algorithms used by city agencies. An audit released in February by the New York state comptroller found that the city takes an ad hoc and incomplete approach to AI governance, warning that as a result it can’t “ensure that the City’s use of AI is transparent, accurate, and unbiased and avoids disparate impacts.”

Julia Stoyanovich, an AI researcher and director of the Center for Responsible AI at New York University, helped compile that report, and she says she supports the legislation proposed by Gutiérrez—as long as the resulting agency works as an independent, external body. That will be determined as the bill garners sponsors on the city council and advances toward approval.

Stoyanovich is less complimentary about the city’s decision to release the AI Action Plan this week on the same day it expanded MyCity, a chatbot designed to answer small businesses’ questions about things like the permits and licenses they need. She finds it problematic that the project escaped the plan’s provisions, such as seeking public input. Jonah Allon, a spokesperson for the mayor’s office, says the chatbot was released at the same time to act as a pressure test for the evaluation process laid out in the plan, which will address existing and future uses of AI.

New York may have been early to regulate AI, but it has not been consistent, Stoyanovich says. “Especially considering that we were the first ones to start, it really pains me to see that we're not very far along five or six years after we started and no articulation of AI governance principles within the city government,” she says.

In the few places where US states, cities, and officials have moved to regulate AI, requiring systems to be assessed for potential harm before deployment has become a common approach. It features in an executive order signed by California governor Gavin Newsom last month and in a voluntary agreement President Biden struck with major AI companies in July. A common point of disagreement is whether such assessments can be carried out internally or must be done by an independent third party. The White House is expected to release an executive order on regulating AI in the coming weeks, sources familiar with the matter told WIRED. And with the House continuing to struggle to elect a new speaker this week, Congress appears unlikely to debate or pass AI legislation anytime soon.