How AI-generated misinformation threatens election integrity

From robocalls to deepfakes, artificial intelligence is already playing a role in the 2024 election. Last week, the Federal Communications Commission declared robocalls using AI-generated voices illegal. Laura Barrón-López has been covering AI’s impact on the upcoming election and discussed the latest with Amna Nawaz.



  • Amna Nawaz:

    From robocalls to deepfakes, artificial intelligence is already playing a role in the 2024 election.

    Today, The Washington Post and Axios reported that a group of leading tech companies, including Meta, Google and TikTok, committed to limiting misleading A.I. content on their platforms.

    Laura Barron-Lopez has been covering what this means for the upcoming election and joins me now.

    So, Laura, how have we seen A.I. already playing a role in the election?

  • Laura Barron-Lopez:

    Last week, Amna, the Federal Communications Commission ruled that robocalls using A.I.-generated voices are illegal.

    That ruling comes, Amna, after the New Hampshire attorney general launched an investigation into robocalls that used A.I. to impersonate President Joe Biden's voice ahead of the state's primary. So far, that investigation has traced the robocalls back to a Texas company called Life Corp.

    But that investigation is still ongoing. And last year, we saw a number of ads using A.I.-generated content, including a "Beat Joe Biden" ad from the RNC, the Republican National Committee, that used A.I.-generated imagery and video to depict a dystopian future under a second Biden term.

    We also saw a super PAC aligned with Ron DeSantis' campaign using A.I. to mimic Donald Trump's voice, and, vice versa, Donald Trump's campaign putting out video that used A.I. to impersonate Ron DeSantis' voice.

    So you're seeing a whole range of A.I. uses, particularly by Republicans. And in preparation, the Biden campaign told me that it has assembled attorneys, as well as legal academics, to be ready to combat more A.I.-generated content.

  • Amna Nawaz:

    But what are the concerns here when it comes to the election? I mean, what do experts tell you about how A.I. is a potential threat to democracy?

  • Laura Barron-Lopez:

    So this is a change in degree.

    It's not that A.I. hasn't been used in past elections; it's that generative A.I. tools are now more widely available and much more sophisticated. The A.I. threats in 2024 include robocalls that can clone a voice, phishing e-mails that replicate official templates, increasingly realistic deepfake video and photography, and spoof accounts impersonating officials, offices, and news outlets.

    So the point, Amna, is that, unlike in 2016, A.I. content is now faster, cheaper, and easier to make because generative A.I. tools are so widely available.

    And I spoke to Katie Reisner, who's the senior counsel at the States United Democracy Center, a nonpartisan group focused on election security, and she summed up the danger to the election process.

  • Katie Reisner, States United Democracy Center:

    Election officials are already doing their jobs in such an elevated threat environment. They are facing harassment, threats of physical violence, disruptions to their administration of elections.

    They're having trouble recruiting sufficient staff and poll workers, and, ultimately, they don't have enough resources. So adding artificial intelligence to this mix is potentially going to make these election officials' jobs even more difficult.

    It's like pouring accelerant on this already very flammable substance.

  • Laura Barron-Lopez:

    So, one example, Amna: You may remember that, in the aftermath of the 2020 and 2022 elections, Republicans and others circulated debunked videos that they claimed showed poll workers cheating or throwing away ballots.

    And what A.I. gives bad actors is the ability to recirculate that video, manipulate it, and alter it to make it look real.

  • Amna Nawaz:

    So those kinds of videos, and the e-mails, images, and robocalls you mentioned, are they being targeted at certain groups? I mean, who is most at risk of being affected by A.I. in an election year?

  • Laura Barron-Lopez:

    The new power of A.I. allows bad actors to target specific groups.

    And so, in 2020, minority communities were targeted with robocalls that discouraged them from voting. But now, because generative A.I. tools are much more sophisticated, creators can tailor content to specific communities and make e-mails and calls much more convincing.

  • Amna Nawaz:

    Well, Meta recently announced that it will flag A.I.-generated images and content on its platforms. But, more broadly, is enough being done to safeguard against this kind of content?

  • Laura Barron-Lopez:

    So even though those companies, as you said, Amna, are deciding to label the content, they aren't outright banning it.

    And, notably, X, formerly Twitter, has not even agreed to label A.I.-generated content that might be fake.

    And so I spoke to experts like Lawrence Norden, the director of elections and democracy at the Brennan Center for Justice. He told me that labeling A.I. imagery and video is a good first step, but that, ultimately, it's up to the companies to act as gatekeepers and protect democracy.

  • Lawrence Norden, Brennan Center for Justice:

    They, I think, have the responsibility not only to ensure, to the extent possible, that anything generated by A.I. is labeled for the public, but also to expand their trust and safety teams to be on the lookout for coordinated bot activity that might be disinformation campaigns and for fake news sites, and to take them down when they find them.

    And I'd really like to see them take responsibility for our democracy and for the integrity of our democracy.

  • Laura Barron-Lopez:

    So, again, policing is all on the tech companies right now, because there is no federal legislation mandating that they do this. They have to do it of their own accord. And there is also no federal legislation banning the use of A.I. content in political ads.

    And, of course, even if there were, that doesn't stop foreign actors from using it.

  • Amna Nawaz:

    Laura, what about people themselves who are seeing this content? What can they do to stay vigilant and not get fooled?

  • Laura Barron-Lopez:

    This technology is very confusing for a lot of people. And many may not really understand the labels that companies say they will put on A.I.-generated content.

    Labeling is not always easy to see on an ad, a video, or a photograph. But the advice experts give is to rely on known sources. So if you see something you think might be fake floating around on the Internet, on social media, or from an influencer, go to a known news outlet.

    Also, of course, if it's a question about voting, go to your state or county election officials' Web sites.

  • Amna Nawaz:

    Great advice. Important information.

    Laura Barron-Lopez, thank you so much.

  • Laura Barron-Lopez:

    Thank you.
