AI-generated disinformation looms ahead of Nigeria’s 2027 elections
Days after gunmen slaughtered over 100 villagers in Benue State, Enoche was scrolling through Facebook at a roadside barber's shop in Abuja, waiting while her husband had a haircut, when a chilling video popped up on her timeline: footage that "exposed" Nigerian soldiers in full military gear guarding cows in "VIP format."
The footage, styled like a news broadcast, showed herds of cattle lounging on a stretch of plush green grass in Yelwata, her hometown, where the carnage had occurred. A stern-voiced anchor clenched a microphone and declared, "The army prioritises cows over human lives." Enoche's scrolling thumb froze mid-air. She stared at the footage, her heart sinking even as fury rose. It looked like a scene from a Nollywood thriller, yet it seemed provocatively plausible.
By the time Enoche passed the content to an acquaintance who fact-checks such claims, the video had travelled far, moving from WhatsApp to Telegram with the speed of outrage. It was eventually confirmed to be an AI-generated deepfake.
In a country already grappling with farmer-herder tensions, displaced communities, and a fragile security architecture, that video was the clearest warning yet that Nigeria’s 2027 elections are walking straight into a storm of synthetic deception.
As Nigeria approaches its 2027 general elections, the proliferation of deepfake technology on social media platforms like X, TikTok, and Facebook has begun seeding the early stages of AI-driven disinformation campaigns targeting the country’s upcoming polls.
This report uncovers accounts actively experimenting with deepfake political content, activity that seriously threatens the integrity of the elections and risks undermining the foundation of democratic trust.
The accounts and activities include local partisan operatives experimenting with generative AI tools to sway public opinion, fake celebrity endorsements, trails of foreign influence, and the covert role of social media 'influencers' in amplifying foreign-themed synthetic content.
From meticulously crafted deepfakes to deceptively simple shallowfakes engineered to inflame tensions, assassinate character, manufacture viral controversy, and stoke the most divisive narratives imaginable, Nigerian social media timelines now surge with AI-generated fabrications.
The 2023 presidential election served as a chilling preview: an "inundation" of AI-generated content that presented a new challenge for government and journalists alike. With 95% of online Nigerians using WhatsApp, 64% relying on social media for their political news, and social media campaigns having played a significant role in persuading voters in the 2023 general elections, the fertile ground for disinformation is unmistakable. Coupled with widespread digital illiteracy, this means a substantial portion of the population struggles to differentiate genuine from fabricated content, leaving it highly susceptible to deception.
From slapstick posts of President Bola Tinubu being whisked away by 'Interpol' at Eagle Square, to the President harassing suspended Rivers State Governor Siminalayi Fubara while embracing FCT Minister Nyesom Wike, to former President Olusegun Obasanjo and Tinubu in a fistfight, to former ruling party (APC) chairman Abdullahi Ganduje being hounded, what is at stake is not just electoral outcomes but palpable mayhem. A fabricated video depicting a candidate inciting violence, for instance, could quickly lead to bloodshed in Nigeria's politically unstable environment.
Waves of AI-driven propaganda flood X
A January 2025 report by Deloitte reveals the alarming rise of synthetic media created using AI to deceive, as well as its implications for digital platforms, technology firms, and society at large. It highlights the explosive 550% growth in deepfake content between 2019 and 2023, driven by increasingly accessible Generative AI tools.
The report reveals that 98% of deepfake content is adult-related, and it documents harms ranging from fraud to disinformation and reputational damage. It also flags the growing use of such content in political manipulation and misinformation campaigns.
Just six months after that report, one of many examples of deepfakes being used as active weapons against public figures surfaced, hijacking political discourse. On July 10, 2025, the X account Sarki (@Waspapping_) posted an image claiming to show former Labour Party presidential candidate Peter Obi having tea "way back" with the late General Sani Abacha, the former Head of State. A follow-up video of the photo, posted in the same thread by the X account THE TROJAN BEAST (@THETROJANBEAST), added further manipulation to the original post. The content surfaced on the very day Mr Obi denied ever meeting Abacha and clarified his relationship with the late ruler.
As of July 18, 2025, the content had generated about 134,000 views, 1,500 likes, and 40 bookmarks. DUBAWA and PRNigeria‘s fact check revealed the content to be an AI-generated narrative that employed a delegitimisation tactic, leveraging historical grievances and falsely associating Obi with a figure widely associated with democratic truncation.
Despite being effectively debunked, the flames of toxic engagement had already been stoked. Obi’s supporters alleged a coordinated campaign by rival political factions to smear their candidate. Reacting, CKing (@Obidient_Police) accused Atiku’s followers of being behind the narrative, stating, “They have doctored the picture from someone else we know into Peter Obi… Atiku online thugs.”
The video also resonated with people who believed it was genuine. A K I N O M O A K I N (@MichealAkinsuyi) said, “and he (Obi) said he never met Abacha in his lifetime. Peter Obi and Lie…)”.
Godsman Okenwa (@OkenwaGodsman), on the other hand, toed the ethnic line. "Yaribaboon's accounts are easy to identify. It surprises me how an entire tribe would be suffering from an inferiority complex," he replied.
Further investigation revealed that this is not the first instance of such image manipulation aimed at tarnishing political figures. A similar doctored image allegedly showing President Bola Tinubu with Abacha having tea in the same setting also made rounds on social media. In that case, too, no credible source validated the photograph’s authenticity.
The pattern suggests a deliberate strategy to tarnish reputations by linking current figures to Nigeria’s authoritarian past, a tactic that could intensify as the 2027 election nears.
Meanwhile, the two main actors in this narrative appear to be active political influencers. Sarki (@Waspapping_), whose timeline carries several posts targeting the Obidients (Peter Obi's supporters), describes himself as a "Voice of the North! Reputation builder and seller!", while his co-traveller in the thread, THE TROJAN BEAST (@THETROJANBEAST), reports on politics and other subjects. The latter's timeline is flooded with AI-generated content that appears to malign Mr Obi, as seen here, here, here, here and here.
Other waves of viral fakery on TikTok, Facebook, and YouTube
In the aftermath of the death of former Nigerian President Muhammadu Buhari on Sunday, July 13, 2025, a TikTok user shared a video of President Bola Tinubu in a hospital bed, gravely ill and gasping for breath. Tinubu's public appearances debunked the video: he was not only alive and well but actively participating in Buhari's burial ceremonies.
On March 5, 2024, at approximately 9:21 pm, a user identified as PIED (@obehieguakhide) shared a 12-second video depicting an individual purported to be the President of Nigeria. The caption read: "By the end of August, fuel will be sold at N100 per litre—Bola T, 2023."
“I also want to tell Nigerians that things are now working fine and that by the end of August, fuel will be sold at 100 Naira per litre,” the supposed president said in the clip.
As of July 17, 2025, the post had garnered 91,200 views, 540 reposts, 751 likes, and 72 bookmarks. Image and video forensic analysis revealed that the alleged clip was doctored and manipulated to spread a narrative. The original video obtained from a Feb. 8, 2023, Nigerian Economic Summit Group (NESG) Dialogue was given a manipulated voiceover to mock Tinubu’s governance. Many users fell for the gimmick, as Vivian Umukoro (@vian337) responded, “speaking under the influence of something.” The REFORMER (@Samreformer) wrote “stole people’s mandate and lost focus on what to do with it.”
On August 10, 2024, in the run-up to the US presidential election, an 18-second video was shared by the X account Bad Influence (@BadInfluence). It purportedly showed then-candidate Donald Trump "begging" Nigerians to vote for him, promising "free VISA to the US for all Nigerians" and pledging to "remove bad governance in Nigeria" in exchange for their votes.
The post was also shared on TikTok by an account, VeryDarkBlackMan09 (verydarkblackman09, now inactive), trading on the profile of the controversial activist and social media influencer. Pinned to the account's profile, the post garnered 73,500 likes, 17,700 shares, 6,696 comments, and 5,010 bookmarks at the time. However, The Cable's fact check found that it was an AI-generated video.
It remains unclear whether the account was operated by the real VeryDarkBlackMan, as no official verification or confirmation linked the handle directly to the well-known influencer.
On May 23, 2025, an AI-generated deepfake clip of US President Trump speaking extensively about Nigerian national life, particularly oil, politics, and the military, was shared on TikTok by BB Smart TV. The post drew 8,000 likes, 920 comments, and 18,800 shares, prompting users like James Favour to write, "Nigeria is the country blocking the progress of Africa." Its 421,500 total views demonstrate its far reach and illustrate how global figures are weaponised to influence local politics.
More insidiously, on June 18, 2025, a deepfake video in a broadcast news format emerged online, falsely claiming Nigerian soldiers were guarding cattle in VIP fashion in Yelwata, Benue State. This came just days after an overnight attack in the same town left over 100 people dead. The video’s caption, “Nigerian Army guarding cows in VIP format in Yelwata,” and the anchor’s commentary suggesting preferential treatment for cattle over human lives were debunked by PRNigeria. This manipulation was aimed at stoking ethnic tensions that may further a destabilisation agenda.
Beyond political disinformation, generative AI has also been weaponised for celebrity and social hoaxes. A viral image of Peter Obi kneeling before Tinubu during Pope Leo XIV's inauguration was digitally altered. Similarly, a fabricated video of Senator Saliu Mustapha endorsing a Ponzi scheme used Arise TV and BBC overlays to lend it false credibility. Other AI-generated videos falsely linked Burna Boy with Burkina Faso's junta and claimed Tiwa Savage had released a tribute song for the late President Buhari.
Other videos claimed Nigeria was sending troops to Israel or reported a fake flood in Abeokuta, raising public concern that “Na AI go finish us for this country.”
Shallowfakes are equally dangerous
Mimicking the techniques of "shallow fakes," AI-generated dances, slapstick arrests, and satirical boxing matches between real politicians, often accompanied by manipulated songs in vernacular languages, may provide comic relief, but they also serve as gateways for serious disinformation campaigns. Each meme or parody, however trivial it appears, contributes cumulatively to a climate of disbelief and cynicism in which even genuine news is met with instant scepticism.
Videos like this one on TikTok, claiming that the President of Nigeria has declared free "opua" for all youths going through hard times, feed the same ecosystem. "Opua" is emerging local slang for the act of engaging in sex.
Other examples include an AI voice rendition of a song from the Telugu blockbuster film RRR, with lyrics that translate as, "Tinubu you are ours but you betrayed us. With or without you, we will triumph"; a clip of Tinubu endorsing the use of cocaine and hard drugs in Hausa, calling on guys to sniff "wiwi" (drugs) to the extreme; and another video of him giving an address in Hausa despite his linguistic limitations.
Further examples include a video of him and the late Buhari enjoying the soft life, another of him with Senate President Godswill Akpabio, and a similar one of Peter Obi.
Reacting, Abdoll_Pc wrote on Instagram in Hausa (translated here): "Jonathan and Buhari have escaped. In their time, there was no crazy AI; they would have collected back-to-back."
Deepfakes – A blast from the past
During the heated 2023 Nigerian presidential race, deepfakes were predominantly used for negative manipulation, serving as mudslinging tools to tarnish opponents. Some of these deepfakes were also deployed to enhance a candidate’s image. One of the most impactful incidents involved a manipulated audio deepfake, released just hours before voting, purporting to capture former Vice President Atiku Abubakar discussing plans to rig the election. This audio, which quickly went viral, played directly into existing public cynicism about electoral integrity, strengthening “confirmation bias” among those who already believed the elections would be rigged. Skilled fact-checkers and digital forensic analysts later confirmed the audio was pieced together from previous recorded speeches, containing mismatched voices, unnatural pauses, and segments with altered backgrounds—hallmarks of a synthetic deepfake.
On the other hand, deepfakes were also used to enhance the image of Peter Obi, a candidate with a significant youth following. Videos surfaced depicting Hollywood actors, Elon Musk, and Donald Trump endorsing Obi, leveraging the perceived weighty validation of Western celebrity endorsements within Nigeria’s political landscape. These deepfakes were primarily attributed to semi-unaffiliated groups, such as the Obidients—Obi’s fervent social media followers—driven by a zealous desire to outshine rivals on social media platforms. This democratisation of deepfake creation, enabled by increasingly accessible AI tools, complicates accountability, making it harder to trace origins back to official campaigns.
Beyond sophisticated deepfakes, the 2023 election also saw a proliferation of "shallow fakes", media manipulated through more conventional means. One Reuters-debunked shallowfake, for instance, purported to show Bola Tinubu giving an incoherent response at a Chatham House event. Supporters of various candidates, including the "Atikulated" (for Atiku Abubakar) and the "BATified" (for Bola Tinubu), extensively employed such manipulated photos and videos in their disinformation activities.
The use of deepfakes and shallowfakes in Nigeria's political history isn't entirely new; pictures of former President Muhammadu Buhari, falsely depicting him as dead and replaced by a lookalike, circulated as early as 2018. Similarly, a deepfake video purporting to show former Edo State Governor Adams Oshiomhole in a "raw sex clip" was debunked as an AI-manipulated product designed to damage his reputation.
The threat extends beyond Nigeria’s borders. AI-generated deepfakes of Donald Trump have been widely disseminated on platforms like TikTok and X, specifically designed to mislead Nigerians. One viral deepfake, viewed 871,000 times on X, depicted Trump allegedly announcing changes to U.S.-Nigeria immigration policy. Another featured a phoney Trump calling for the release of separatist leader Nnamdi Kanu, threatening to withdraw U.S. aid if Kanu was not released. These transnational deepfakes highlight a growing phenomenon in which foreign actors are leveraging synthetic media to shape public discourse and potentially influence geopolitical relations or internal stability within Nigeria.
Indeed, foreign entities such as Cambridge Analytica and Team Jorge were implicated in the 2015 Nigerian presidential campaigns, aiming to discredit opposition candidates and disrupt their communication. Russia, too, employs a “franchising strategy” in Africa, funnelling guidelines and payments to residents and influencers to create manipulated content across various platforms, making pro-Russia narratives appear genuine and evade moderation. Iran has also conducted targeted disinformation campaigns in Nigeria using proxy social media accounts.
Deepfakes – A transnational trend
Pro-Russian politician Robert Fico's victory in the 2023 Slovak parliamentary elections thrust this small Central European country into the global spotlight. Fico's campaign pledges to oppose sanctions on Russia and end military support for Ukraine were noteworthy enough, but it was the potential influence of a deepfake that truly captured global attention.
Just two days before the election, a fake audio recording emerged, allegedly featuring pro-European candidate Michal Šimečka—Fico’s main rival—conspiring with a well-known journalist to commit electoral fraud. Although both parties swiftly dismissed the clip as fabricated, it spread rapidly online, gaining traction due to its release during Slovakia’s legally mandated electoral “silence period.” This period, a holdover from the traditional media era, is a 48-hour total media blackout restricting election-related reporting right before voting. Despite leading in the polls, Šimečka’s unexpected defeat sparked speculation that the election may have been “the first swung by deepfakes.”
The incident, now dubbed “the Slovak case,” is widely regarded as a turning point—a harbinger of a new era in disinformation, and a “test case” for assessing the vulnerability of democratic systems to AI-powered manipulation. As Casey Newton of Platformer noted, “what happened in Slovakia will likely soon occur in many more countries around the world.” Others issued starker warnings: “the deepfake genie is out of the bottle.”
Following the Slovak incident, in Argentina’s 2023 elections, leading presidential candidates reportedly used deepfakes, including in election posters and materials ridiculing opponents. While not necessarily “swinging” the election, it demonstrates the widespread adoption of the technology in campaigns. In Bangladesh’s January 2024 elections, a deepfake video circulated just before the polls, allegedly showing a candidate announcing her withdrawal.
Other incidents include India's 2024 general election, where deepfakes of popular deceased politicians were used to appeal to voters; the same "resurrecting the dead" technique surfaced in Indonesia the same year.
While the full impact of deepfakes on the United States general election is still being assessed, there have been several instances of deepfakes circulating, including an AI-generated image falsely depicting Trump with a convicted sex worker.
In Germany's February 2025 election, the Alternative for Germany (AfD) party used AI-generated imagery, including a video presenting an "idyllic" future without immigration, to advance its anti-immigrant agenda and influence voters, while Ghana's 2024 general elections saw notable use of AI-driven disinformation tactics, including deepfakes, bot networks, and manipulated media to spin narratives.
Threat to Nigeria’s Democratic Process
The rise of synthetic media, particularly deepfakes, poses an immediate and severe threat to the credibility of elections and public trust in their results. Digital disinformation directly threatens the integrity of Nigeria’s democratic processes.
False information is intentionally used to achieve political objectives, such as discrediting opponents or swaying voter behaviour. During elections, disinformation campaigns frequently exploit the country’s ethnic and religious divisions, intensifying tensions. Foreign influence operations also aim to manipulate public perceptions, influence electoral outcomes, and shape government policy, often linked to broader geopolitical agendas.
Studies indicate that a substantial portion of the Nigerian population encounters and acts upon misleading information; approximately 75% of Nigerians report encountering such content online, and a concerning 68% admit to believing and acting on fake news. This widespread acceptance of false narratives highlights a critical gap in the populace’s media literacy and verification skills.
The spread of fake news has been linked to instigating inter-ethnic conflicts, instilling fear and panic, and ruining the reputations of public figures and institutions. This can incite violence and culminate in civil unrest, posing a massive threat to national security.
Despite the growing prevalence of deepfakes, a significant portion of the Nigerian population remains vulnerable due to low digital literacy. Many lack the skills to differentiate between factual and fabricated digital content. While a study in Benin City found that most residents were knowledgeable about deepfake technology and viewed it negatively, it also confirmed its role in the faster spread of misinformation. This highlights a critical vulnerability: even with some awareness, persistent low digital literacy, particularly in rural areas, means a large segment of the population remains susceptible to manipulation.
The road ahead: Expert perspectives
The proliferation of sophisticated disinformation tactics presents significant challenges for journalists and fact-checkers. Verifying multimedia content in real-time is increasingly challenging without powerful AI detection technologies. A hostile environment for media professionals exacerbates this burden. Nigeria is ranked 112th out of 180 countries in the 2024 World Press Freedom Index, reflecting an alarming increase in attacks on journalists. Such threats severely impede the ability of journalists and fact-checkers to combat disinformation effectively.
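One building block behind many of the verification tools fact-checkers rely on is perceptual hashing: reducing an image or video frame to a tiny fingerprint so a suspect clip can be matched against the original footage it was lifted from. The sketch below is a minimal, illustrative average-hash in pure Python; the toy 8×8 grayscale grids stand in for decoded video frames, and real pipelines use dedicated libraries such as imagehash together with full video decoding.

```python
# Toy average-hash (aHash): fingerprint an 8x8 grayscale image,
# then compare fingerprints by Hamming distance. Small distances
# indicate near-duplicate frames; large distances, unrelated ones.

def average_hash(pixels):
    """pixels: 8x8 grid of 0-255 grayscale values -> 64-bit int fingerprint."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # One bit per pixel: is it brighter than the frame's average?
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A synthetic 'frame', a lightly re-encoded copy (uniform brightness shift),
# and a structurally different frame (inverted gradient).
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
copy = [[min(255, p + 3) for p in row] for row in original]
other = [[255 - p for p in row] for row in original]

d_copy = hamming(average_hash(original), average_hash(copy))
d_other = hamming(average_hash(original), average_hash(other))
print(d_copy, d_other)  # prints: 0 64
```

The brightness-shifted copy hashes identically (distance 0) because aHash compares each pixel to the frame's own mean, while the inverted frame flips every bit (distance 64). This robustness to re-encoding is what lets such fingerprints survive the compression a video picks up as it hops between WhatsApp, Telegram, and TikTok.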
A broader systemic challenge is researchers' insufficient access to platform data held by tech companies, which hinders their ability to accurately measure and address the risks posed by disinformation campaigns in African countries.
Meanwhile, as AI-generated deepfakes, shallow fakes, and the subsequent spread of disinformation continue to proliferate in elections worldwide, the Institute for Security and Technology (IST) has launched the Generative Identity Initiative (GII) to understand better how generative artificial intelligence affects social trust and institutional decision-making.
GII builds on the work of IST's Digital Cognition and Democracy Initiative (DCDI), which in 2022 published its foundational report, Rewired: How Digital Technologies Shape Cognition and Democracy. Drawing on a coalition of more than 50 subject-matter experts, DCDI found that digitally influenced cognition could undermine the core elements of a democratic society through its effect on independent, critically thinking minds. With advances in generative artificial intelligence, that risk is no longer hypothetical.
However, journalist Hannah Ajakaiye, founder of FactsMatterNg, in a March 2024 interview with the IST, argued that the problem lies not only in technological advances but also in the lack of updated laws to address synthetic content and AI-generated manipulations, particularly around elections.
She believes this is paramount because of “how generative AI has weaponised misinformation and made it far more believable, harder to detect, and scalable.”
While calling for algorithmic transparency by Big Tech, Hannah believes that journalism must also evolve to match the sophistication of AI-driven deceptions.
“We can’t fact-check our way out of this problem alone—we need systems, structures, and speed,” she added.
Samad Uthman, a digital investigations journalist with AFP, also believes Big Tech must get involved. “They must detect, label, and limit the reach of deepfakes, especially during elections,” he explained, emphasising that transparency with researchers and fact-checkers is also paramount.
Samad also stressed that journalists and newsrooms must work hard to unravel the ever-evolving tactics of deepfake peddling.
“To curb deepfakes, journalism bodies too need to work hard in driving public education to orient people on how best not to fall prey to synthetic lies,” he maintained, adding that more innovative detection tools and rapid public awareness remain key.
On his part, Isaac Nwokeocha, a tech and digital culture journalist, believes that the 2027 elections will be a defining moment for Nigeria, not just politically, but also technologically.
“If we fail to recognise and prepare for the weaponisation of artificial intelligence in the information space, we risk sleepwalking into a national crisis,” Isaac warned.
As Nigeria stands at the forefront of the synthetic media battle, with its 2027 general elections just two years away, the acclaimed African giant is at a critical juncture that demands proactive, coordinated effort across all sectors. Strengthening legal and policy frameworks, enhancing digital literacy, fostering collaboration among stakeholders, and promoting responsible platform governance are all imperative to ensure that Nigeria's democratic future is built on truth and transparency.