Days before a pivotal election in Slovakia to determine who would lead the country, a damning audio recording spread online in which one of the top candidates seemingly boasted about how he’d rigged the election.
And if that wasn’t bad enough, his voice could be heard on another recording talking about raising the cost of beer.
The recordings immediately went viral on social media, and the candidate, who is pro-NATO and aligned with Western interests, was defeated in September by an opponent who supported closer ties to Moscow and Russian President Vladimir Putin.
While the number of votes swayed by the leaked audio remains uncertain, two things are now abundantly clear: The recordings were fake, created using artificial intelligence; and US officials see the episode in Europe as a frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election.
“As a nation, we are woefully underprepared,” said V.S. Subrahmanian, a Northwestern University professor who focuses on the intersection of AI and security.
Senior national security officials in the US have been gearing up for “deepfakes” to inject confusion among voters in a way not previously seen, a senior US official familiar with the issue told CNN. That preparation has involved contingency planning for a foreign government potentially using AI to interfere in the election.
State and federal authorities are also grappling with increased urgency to pass legislation and train election workers to respond to deepfakes, but limited resources within elections offices and inconsistent policies have led some experts to argue that the US is not equipped for the magnitude of the challenge, a CNN review found.
Already, the US has seen AI-generated disinformation in action.
In New Hampshire, a fake version of President Joe Biden’s voice was featured in robocalls that sought to discourage Democrats from participating in the primary. AI images that falsely depicted former President Donald Trump sitting with teenage girls on Jeffrey Epstein’s plane circulated on social media last month. A deepfake posted on Twitter last February portrayed a leading Democratic candidate for mayor of Chicago as indifferent toward police shootings.
Various forms of disinformation can shape public opinion, as evidenced by the widely held false belief that Trump won the 2020 election. But generative AI amplifies that threat by enabling anyone to cheaply create realistic-looking content that can rapidly spread online.
Political operatives and pranksters can pull off attacks just as easily as Russia, China or other nation state actors. Researchers in Slovakia have speculated that the vote-rigging deepfake their country faced was the work of the Russian government.
“I can imagine scenarios where nation state adversaries record deepfake audios that are disseminated using both social media as well as messaging services to drum up support for candidates they like and spread malicious rumors about candidates they don’t like,” said Subrahmanian, the Northwestern professor.
The FBI or Department of Homeland Security can move more swiftly to speak out publicly against a threat if they know that a foreign actor is behind a deepfake, said a senior US official familiar with the issue. But if an American citizen could be behind a deepfake, US national security officials would be more reluctant to counter it publicly out of fear of giving the impression that they are influencing the election or restricting speech, the official said.
And once a deepfake appears on social media, it can be nearly impossible to stop its spread.
“The concern is that there’s going to be a deepfake of a secretary of state who says something about the results, who says something about the polling, and you can’t tell the difference,” said the official, who was not authorized to speak to the press.
Efforts to regulate deepfakes and guard against their effects vary greatly among US states.
Some states including California, Michigan, Minnesota, Texas and Washington have passed laws that regulate deepfakes in elections. Minnesota’s law, for example, makes it a crime for someone to knowingly disseminate a deepfake intended to harm a candidate within 90 days of an election. Michigan’s laws require campaigns to disclose AI-manipulated media, among other mandates. More than two dozen other states have such legislation pending, according to a review by Public Citizen, a nonprofit consumer advocacy group.
CNN asked election officials in all 50 states about efforts to counter deepfakes. Of the 33 that responded, most described existing programs in their states to respond to general misinformation or cyber threats. Fewer than half of those states, however, referenced specific trainings, policies or programs crafted to respond to election-related deepfakes.
“Yes, this is something that keeps us all up at night,” said Alex Curtas, a spokesperson for New Mexico’s secretary of state, when asked about the issue. Curtas said New Mexico has plans for tabletop exercises with local officials that will include discussion of deepfakes, but he said the state is still looking for tools to share with the public to help determine whether content has been generated with artificial intelligence.
Jared DeMarinis, Maryland’s administrator of elections, told CNN his state issued a rule that requires political ads that involve AI-generated content to include disclaimers, but he said he hopes the state legislature will pass a law that gives the state more authority on the issue.
“I don’t believe you can completely …”
Some efforts to combat disinformation have triggered more distrust. Last year, Washington’s secretary of state’s office signed a contract with a tech company to track election-related falsehoods on social media, which would include deepfakes, a spokesperson told CNN. But in November, the state’s Republican Party submitted an ethics complaint related to that contract, alleging the secretary was using public funds to pay a company to “surveil voters … suppressing opposition views.” The state ethics board declined to move forward on the complaint, which elicited more protest from the party.
Multiple pieces of federal legislation on election-related deepfakes have been proposed. US law currently prohibits campaigns from “fraudulently misrepresenting” other candidates, but whether that includes deepfakes is an open question. The Federal Election Commission has been considering the idea but has not reached a decision on the matter.
Ilana Beller of Public Citizen, the consumer advocacy group, expressed cautious optimism over the rate at which both red- and blue-leaning states have been proposing and passing legislation on deepfakes, but she said more must be done.
“We would like to see more from the federal government, from the FEC and from many states that haven’t taken the step to regulate on this issue,” Beller said.
Some US candidates have been forced to personally figure out how to respond to deepfakes.
Paul Vallas, for example, ran for mayor of Chicago as a moderate Democrat last year and was targeted by an audio clip posted on X, formerly known as Twitter, by a mysterious account called “Chicago Lakefront News.”
“These days people will accuse a cop of being bad if they kill one person that was running away. Back in my day, cops would kill, say, 17 or 18 civilians in their career and nobody would bat an eye,” said the voice in the post that sounded nearly identical to Vallas. “We need to stop defunding the police and start refunding them.”
Vallas’ campaign responded by issuing a statement that denounced the video as fake and deceptive. But the clip had already been viewed thousands of times before it was deleted. While Vallas won the first round of voting, he ultimately lost the election in a runoff to a progressive candidate, Brandon Johnson.
Asked if he thinks the deepfake cost him the race, Vallas said, “No, you know, I think it was a factor in a close election.”
“We’ll never know who actually created the video, but clearly there was a campaign on multiple fronts to try to misrepresent my record and to try to characterize my candidacy as something that it was not,” he added. “There’s some damage that’s not repairable, so in a close race something like that can be a factor.”
Michal Šimečka, the leader of the Progressive Slovakia party, understands why some people could have been fooled by the deepfake that falsely purported to capture him discussing with a journalist a plan to manipulate votes at polling stations.
“It does sound like me,” Šimečka told CNN, referring to the audio, which he said played into conspiracy theories that a segment of the population already believed.
The fake audio emerged on the barely regulated messaging app Telegram two days before Slovakia’s parliamentary elections and quickly jumped to TikTok, YouTube and Facebook.
Šimečka said his team and others complained to social media platforms and law enforcement. Despite some platforms removing or slapping fact-check warnings on some posts containing the audio, it continued to spread.
Šimečka said there’s no way to know whether the deepfake altered the outcome of the election, which his party lost to a more Russia-friendly party, but said, “It probably had some effect.”
Daniel Milo, who until December ran a center within Slovakia’s Ministry of Interior set up to counter disinformation, said the debacle showed the way in which some major social media platforms lack processes to effectively respond to deepfakes.
TikTok and YouTube outright deleted copies of the deepfake, he said, while Facebook deleted some, marked others as false but did not touch others. He estimates hundreds of thousands of people saw posts containing the audio.
He said social media platforms need to “put measures in place” to prevent attempts to meddle with an election.
A spokesperson for Meta, Facebook’s parent company, said in a statement, “Our independent fact-checking network reviews and rates misinformation—including content that’s AI-generated—and we label it and down-rank it in feed so fewer people see it.” While the statement said content that violates company policies is removed, it did not address why some posts containing the Slovak deepfake were not marked as false.
While the original source of the vote-rigging deepfake has not been confirmed, Milo said that some of the earliest posts containing the audio came from pro-Russian politicians in Slovakia. He believes it’s not a coincidence that Russia’s government publicly pushed a similar conspiracy theory on the same day the deepfake emerged.
“In my professional capacity, I do believe that this deepfake was part of a wider influence campaign by Russia to interfere in the Slovak elections,” Milo said.
Janis Sarts, director of the NATO Strategic Communications Centre of Excellence, a NATO-accredited research organization based in Latvia, said in a statement that there’s no known evidence showing the deepfake originated in Russia, though he also noted that just over an hour before the deepfake surfaced, Russia’s Foreign Intelligence Service (SVR) released a press statement accusing the US of trying to influence Slovakia’s election in favor of Slovakia’s progressive party. The Russian statement specifically named Šimečka.
“The claims made in the Russian Intelligence Service’s statement and the content of the deepfake that went viral simultaneously correspond to each other. They both target Progressive Slovakia and promote the same false narrative,” Sarts said. He added that one of the politicians in Slovakia who first posted the deepfake appeared on the news of a Russian channel within a day and made similar claims.
Russia’s SVR did not respond to a request for comment.
Regardless of the source, Milo said the US and other nations with elections this year should get ready.
“My warning is brace yourself for upcoming barrage of deepfakes, of audio and video content that will be targeting presidential candidates that will try to polarize and disrupt the social cohesion in the US,” Milo said.
It was a sentiment echoed by Šimečka.
“I think this might be the year when we see a deepfake boom in election campaigns all across the world,” he said. “It’s effective. It’s fairly easy to produce. There isn’t regulation to combat it effectively.”