
FCC bans deceptive AI-generated voices in robocalls

14:09 08.02.2024

The Federal Communications Commission (FCC) has taken a decisive step in combating the use of voice-cloning technology in illegal robocalls. In a unanimous ruling, the FCC has outlawed robocalls that use artificial intelligence (AI) voices, sending a clear message that the exploitation of this technology for scams and voter manipulation will not be tolerated.

The ruling specifically targets robocalls made with AI voice-cloning tools under the Telephone Consumer Protection Act, a law enacted in 1991 to restrict unsolicited calls that use artificial or prerecorded voice messages. This announcement comes as authorities in New Hampshire continue their investigation into AI-generated robocalls that imitated President Joe Biden's voice in an attempt to discourage voting in the state's primary election last month.

Effective immediately, the new regulation empowers the FCC to impose fines on companies that use AI voices in their robocalls or block the service providers that facilitate these calls. Additionally, the ruling enables call recipients to file lawsuits and provides state attorneys general with a new mechanism to crack down on violators, according to the FCC.

FCC Chairwoman Jessica Rosenworcel emphasized the threat posed by bad actors using AI-generated voices in robocalls, noting that they have been employed to misinform voters, impersonate celebrities, and extort family members. Rosenworcel stated, "It seems like something from the far-off future, but this threat is already here," underscoring the urgency of the FCC's action.

Under the existing consumer protection law, telemarketers are generally prohibited from using automated dialers or artificial and prerecorded voice messages to call cellphones. They must also obtain prior written consent from call recipients before making such calls to landlines. The new ruling classifies AI-generated voices in robocalls as "artificial," subjecting them to the same standards. Violators of the law can face substantial fines, with penalties reaching over $23,000 per call, and call recipients have the right to take legal action and potentially recover up to $1,500 in damages for each unwanted call.

The FCC's decision has been welcomed as a step forward in addressing the potential threat posed by AI in elections. However, experts caution that it may not entirely mitigate the risk of personalized spam targeting voters through various channels, including phone calls, text messages, and social media. Josh Lawson, director of AI and democracy at the Aspen Institute, warns that bad actors will continue to exploit the technology, emphasizing the need for continued vigilance.

The use of sophisticated generative AI tools, including voice-cloning software and image generators, has already become prevalent in elections worldwide, including the United States. In previous election cycles, campaign advertisements have utilized AI-generated audio or imagery, and some candidates have experimented with AI chatbots to engage with voters. Bipartisan efforts in Congress have sought to regulate AI in political campaigns, but no federal legislation has been passed thus far, with the general election only nine months away.

The recent incident in New Hampshire, where AI-generated robocalls impersonated President Biden, has highlighted the urgency of addressing the misuse of AI in elections. The calls used a voice similar to Biden's, employed his catchphrase, "What a bunch of malarkey," and falsely suggested that voting in the primary would prevent voters from casting a ballot in November. New Hampshire officials have identified the source of the calls as the Texas-based company Life Corp., owned by Walter Monk. The calls were transmitted by another Texas-based company, Lingo Telecom. Cease-and-desist orders and subpoenas have been issued to both companies, and a task force of attorneys general from all 50 states and Washington, D.C., has warned Life Corp. to cease originating illegal calls immediately.

While the FCC's ruling is a significant development, both Lingo Telecom and Life Corp. have previously faced scrutiny for illegal robocalls. Life Corp. received a citation from the FCC in 2003 for delivering illegal prerecorded and unsolicited advertisements, while Lingo Telecom has been accused by the task force of being the gateway provider for 61 suspected illegal calls from overseas. Lingo Telecom, formerly known as Matrix Telecom, was also subject to a cease-and-desist order from the Federal Trade Commission in 2022.

In response to the FCC's ruling, Lingo Telecom stated that it has cooperated with the investigation into the AI-generated robocalls impersonating President Biden and took immediate action by identifying and suspending Life Corp. when contacted by the task force. Life Corp. declined to comment on the matter.

The FCC's decision has been praised for providing states with additional tools to combat fraudulent robocalls. State attorneys general will now have the means to crack down on these scams and protect the public from fraud and misinformation. However, experts caution that the ruling may not entirely eliminate the threat of AI-generated disinformation campaigns in elections. As the 2024 campaign cycle intensifies, AI-generated images, videos, and audio continue to propagate online, further highlighting the need for comprehensive regulation and safeguards.

/ Thursday, February 8, 2024, 2:09 PM /


