With little federal guidance, US cities take lead on AI policymaking

By the time Stephanie Deitrick started writing an AI policy for the city of Tempe, Arizona, she worried it was already too late.

“It had been on my mind as something that we really need to look at … before we’re hit with something we’re not expecting,” said Ms. Deitrick, the city’s chief data and analytics officer. “And then ChatGPT was released.”

Almost overnight after its launch in November last year, a technology with wide-ranging implications that Ms. Deitrick had been considering in theory became widely used by the public, she told the Thomson Reuters Foundation.

She called the experience surreal. “It feels like everyone is racing against everyone else to see how to get more AI into what they’re doing,” Ms. Deitrick said.

Like many others in similar roles across the United States, Ms. Deitrick is now sprinting to catch up. In June, the city council adopted an “Ethical AI Policy” that she spearheaded, and in October a new governance committee started meeting to hash out the city’s future approach to AI tools.

Both in the United States and beyond, cities are trying to put in place AI policies largely in the absence of national or transnational guidance, said Mona Sloane, an assistant professor of data science and media studies at the University of Virginia. She calls this local-level leadership “AI localism.”

The U.S. cities of Boston, New York, Seattle, and San Jose have all in recent months adopted guidelines and policies around AI and “generative” AI tools such as ChatGPT that allow for easy text-based commands.

In October, President Joe Biden issued an executive order creating standards around privacy, safety and rights related to the use of AI, and Senate Majority Leader Charles Schumer has been spearheading a series of lawmaker meetings on potential legislation.

But Congress has yet to pass an AI law, leaving local authorities to step in.

Kate Garman Burns is executive director of MetroLab Network, a nonprofit working with 45 local governments to create policy guidance by next summer.

“The question I got the most is, what are you hearing that other people are doing?” she said.

She said cities felt under pressure to act in the next six to 12 months to understand what the technology can do to improve city services – and what to beware of.

Prioritizing efficiency or humanity?

In part that pressure is coming from the tech industry. A blog post from Microsoft on generative AI warns that “the public sector cannot remain frozen as AI changes the world around us.”

ChatGPT creator OpenAI did not respond to a request for comment on city efforts to craft guidance.

“This tech genie is out of the bottle,” Ms. Garman Burns said.

“This is in the hands of the public, and cities are trying to figure out how to respond and be responsible with it.”

For Ms. Deitrick, that meant emphasizing the central role of people in the use, oversight, and results of AI tools.

“I put the word human in there a lot,” she said of Tempe’s policy, “so we don’t prioritize efficiency over basic human dignity.”

Local initiatives can produce procurement rules, broad transparency requirements for a city’s deployment of these tools, or regulation of specific uses, such as New York’s new law on AI in hiring or rules around autonomous vehicles and facial recognition, said Ms. Sloane, of the University of Virginia.

Together these efforts will likely have knock-on effects for other places, Ms. Sloane said, creating “an environment of compliance practices that establish themselves as a standard that will affect an industry at large.”

That means cities have an opportunity to be key test beds, said Milou Jansen, Amsterdam-based coordinator of the Cities Coalition for Digital Rights, a global network of municipalities helping each other in digital rights policymaking.

That includes testing whether these tools work and offer actual efficiencies, but also looking at “what is the impact on the neighborhood, and does it address the needs of citizens?” Ms. Jansen said.

“Right now, we’re still discovering what kind of norms should be okay,” she said. “Maybe we want [AI tools] to be used for traffic light optimization, but not social security.”

Some cities are also looking into temporarily halting the use of AI, she said.

A global database of locally led “ethical” AI initiatives, the Atlas of Urban AI, lists 184 projects in 66 cities, including Dubai, Helsinki, and Mexico City.

These are scored in part on transparency, accountability, non-discrimination, and sustainability – the last of which ranks poorest among the atlas’s projects, said Alexandra Vidal, a researcher and project manager who helps lead the project at the Barcelona think tank CIDOB.

So far, such initiatives are found more in the Global North, said Marta Galceran Vercher, a research fellow at CIDOB, but she noted that cities such as Barcelona, where officials have passed an explicit mandate around ethical AI, offer significant models.

“Cities are stepping out ahead of the national governments to say, ‘We need to be more ambitious’,” she said.

“Do a lot more with what we have”

While machine learning and text analytics are not new, tools driven by generative AI offer significant opportunities for cities, Ms. Jansen, Ms. Garman Burns, and others emphasized.

In Williamsport, Pennsylvania, city council president Adam J. Yoder is excited by those prospects, though sober about the risks and about the work required for a small city such as his, with fewer than 30,000 residents, to take advantage of AI.

“This is a really interesting tool that can help us maximize our productivity, to do a lot more with what we have,” he said, pointing to possible benefits including producing documentation or streamlining permitting.

Such efficiency could be especially useful as Williamsport and other towns deal with shrinking revenues while still needing to provide robust municipal services.

Yet Williamsport is only now digitizing its processes, and education about these new tools will be critical, said Mr. Yoder, who is taking part in the national MetroLab discussions and looking forward to the guidance that results.

For now, the city has no policy either to introduce AI or to guard against related risks such as data privacy and cybersecurity threats, he said.

But on his own, Mr. Yoder is already using tools such as ChatGPT, including in his city work – summarizing large documents, reviewing op-eds he has written, even as a starting point for crafting legislation.

“This really enhances my ability to be more effective in the time I offer to the city,” said Mr. Yoder.

“It’s really good as a starting point or a review tool. You just can’t take what it gives you as gospel.”

This story was reported by the Thomson Reuters Foundation.
