The Associated Press reports that Adams told reporters Monday (Oct. 16) that he uses AI to render his voice in languages he doesn't speak, including Yiddish and Mandarin, for phone messages to residents.
Deepfakes, similar to Photoshopping a digital picture, use a form of artificial intelligence called deep learning to make fake voices and video images.
In the robocalls, Adams doesn't disclose that his voice is AI-generated or that he speaks only English.
That's a problem for watchdog groups calling for government regulation. But Congress has struggled to legislate guardrails as an avalanche of AI-generated deepfake videos and images is poised to deceive voters in the 2024 election cycle.
"The mayor is making deep fakes of himself. This is deeply unethical, especially on the taxpayer's dime. Using AI to convince New Yorkers that he speaks languages that he doesn't is outright Orwellian. Yes, we need announcements in all of New Yorkers' native languages, but the deep fakes are just a creepy vanity project," Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, told the AP.
But Adams sees nothing unethical about his robocalls, saying that he is simply trying to communicate with New Yorkers in the languages they understand. His administration has used technology from the startup ElevenLabs to imitate Adams' voice speaking multiple languages, a mayor's office spokesperson told the AP.
“I got one thing: I’ve got to run the city, and I have to be able to speak to people in the languages that they understand, and I’m happy to do so. And so, to all, all I can say is a ‘ni hao,’” Adams said.