Generative artificial intelligence (GenAI) is helping increase efficiency by automating the manual effort of gathering data and presenting it in a more digestible form. With GenAI, professionals can prepare reports and surface insights from volumes of data that were previously too unwieldy to review, evaluate, and summarize.
The Good and the Bad
AI is helping political candidates analyze data on voter demographics, social media behavior, past voting patterns, and more. This insight can drive more personalized campaigns to reach undecided or unmotivated voters. Applying GenAI aids in crafting and distributing more targeted messages to move the needle with these voters. Overall, AI assists political candidates in running data-driven, efficient, and targeted campaigns.
Unfortunately, this efficiency also benefits bad actors looking to spread political disinformation. These disinformation campaigns are run by domestic and foreign groups seeking to cast the candidate they believe will act in their best interest in a more favorable light. During this election cycle alone, a number of these campaigns have been uncovered:
- The U.S. Justice Department indicted two Russian state media employees for using state funds to covertly finance and control a Tennessee-based online content creation firm that pushed AI-developed, divisive political content to Americans.
- Microsoft’s threat intelligence unit found that Iran-backed hacker groups had accelerated efforts to sow division among American voters, creating websites and social media campaigns that cast doubt on election legitimacy and discouraged voter turnout.
It’s Not Over When It’s Over
The intelligence community fully expects foreign influence actors to continue their disinformation campaigns after the polls close by calling the validity of the election’s results into question. Once all the votes have been counted, foreign actors from China, Russia, and Cuba are expected to extend the work they did during the election to boost their preferred candidates by seeding false narratives of election fraud.
Fighting AI with AI
The tech industry is not passively accepting these threats. Nineteen leading tech firms answered a request from Congress to provide details about their commitment to, and process for, monitoring their platforms for AI-augmented content related to the forthcoming 2024 presidential election. While responses varied across the platforms, all are utilizing AI to identify and label AI-generated content, highlight language that is likely translated, and remove manipulated videos and images (deepfakes) that impersonate candidates’ voices and likenesses.
AI and the Future of Elections
The groundwork is already being laid to provide more oversight for AI’s impact on future elections. A bill, the Preparing Election Administrators for AI Act, has been introduced to require the Election Assistance Commission (EAC) to release public recommendations around the use of AI tools for election administration. The EAC has already allowed state election officials to use federal funds to “counter foreign influence in elections, election disinformation and potential manipulation of information on voting systems and/or voting procedures disseminated and amplified by AI technologies.”
As AI use continues to evolve, so does its impact on many areas of our lives, including how we elect our leaders. In the coming years we’ll see candidates and technology companies alike work to ensure that the efficiency and innovation of AI are used to make information more accessible, and elections more inclusive, while preventing its use for disinformation.
As the founder of GovEvents and GovWhitePapers, Kerry is on a mission to help businesses interact with, evolve, and serve the government. With 25+ years of experience in the information technology and government industries, Kerry drives the overall strategy and oversees operations for both companies. She has also served in executive marketing roles at a number of government IT providers.