Combating Deepfake Videos: A Fight for Nigeria’s Soul

By Mukhtar Ya’u Madobi
Lately, a disturbing video clip made the rounds across Nigerian social media. It showed men in Nigerian Army uniforms purportedly escorting cattle in Yelewata, Benue State. The voice-over, laced with sarcasm, bemoaned how soldiers were protecting cows instead of citizens. In a country already grappling with farmer-herder tensions, displaced communities, and a fragile security architecture, such a video was bound to trigger outrage.
Fortunately, PRNigeria, a leading fact-checking platform, rose to the occasion. Its forensic analysis revealed the content for what it truly was: a deepfake. This digitally engineered manipulation wasn’t designed to inform, but to mislead and inflame.
This incident marks a new chapter in Nigeria’s ongoing information war—one increasingly fought with artificial intelligence (AI). The tools once reserved for Silicon Valley innovation are now being used to deepen mistrust, incite violence, and blur the lines between reality and falsehood. Nigeria isn’t just contending with insecurity on the ground; it’s now fighting a dangerous war of perception in the digital realm.
The video in question was confirmed to be AI-generated, with visual distortions, mismatched shadows, and uniform inconsistencies signaling manipulation. But these are not details an average internet user might notice. In a country where digital literacy remains relatively low, many Nigerians aren’t equipped to verify what they watch, making us all vulnerable.
The danger here is profound: If communities believe security forces are complicit in their marginalization, or worse, in protecting perpetrators, they may lose faith in the state and seek justice through retaliation. In flashpoints like Benue, Kaduna, Taraba, or Plateau States, such deliberate misinformation could cost lives. Deepfakes also erode trust in vital institutions. Insecurity is already pushing citizens to the brink. The military, despite its constraints, remains one of the few state institutions still commanding some respect.
When deepfakes attack that credibility, they weaken the very fabric holding the country together, threatening national unity and social cohesion. These manipulated videos also create fertile ground for further exploitation. In an election season or during national unrest, malicious actors can use AI tools to fabricate hate speech, frame opponents, and trigger mass hysteria. The 2023 and previous electoral cycles already saw spikes in fake news; AI only supercharges the problem.
Beyond perception, misinformation like this causes tangible harm. Every rise in insecurity in Benue, for instance, has led to a drop in crop output and livestock production. That’s not just an economic issue—it’s a food security crisis in the making. Communities already devastated by displacement, drought, or conflict cannot afford the added burden of false narratives triggering further unrest. For such populations, the damage from a fake video could mean the difference between a harvest and a famine. So, who benefits from these calumny campaigns?
The sponsors of these malicious acts clearly benefit from the chaos. The purveyors of these deepfakes often remain faceless, operating behind foreign servers, hidden usernames, and encrypted platforms. But their intent is clear: to weaponize misinformation for political, ethnic, or ideological gain. They exploit our fears and divisions, and their greatest weapon isn’t the AI software—it’s our own willingness to believe without verifying.
We are at a critical juncture, and this new threat cannot be tackled by security agencies alone. It requires a whole-of-society approach: government agencies, NGOs, and media platforms must launch nationwide campaigns to educate citizens on how to spot deepfakes. Indicators like unnatural movement, off-sync audio, lighting inconsistencies, or missing sources can be taught through infographics, school curricula, and community radio.
Fact-checking platforms and other independent media outfits should be supported and expanded to verify viral content in real time. Their fact-checks must be amplified just as widely as the fakes.

Tech companies must not sit on the sidelines. Platforms like Facebook, X (formerly Twitter), TikTok, and Instagram must improve content moderation in Nigeria. AI-generated content should carry warning labels or be demoted in visibility when flagged.

Nigeria also urgently needs a regulatory framework for deepfakes, one that doesn’t censor free speech but penalizes the deliberate creation and spread of malicious digital falsehoods.

At the community level, local and traditional leaders must be part of the solution. In rural areas where smartphones outnumber laptops, rumors spread faster than clarifications. Leaders must be empowered with correct information and the tools to douse tension before it erupts.
The “soldiers protecting cattle” video may have been debunked, but the next one could be more convincing, more viral, and more deadly in its consequences. As AI tools become more accessible, so too does the potential for chaos—unless we rise to the occasion. Nigeria is already battling poverty, insurgency, banditry, and political instability. We cannot afford to add algorithmic deception to the list. If we fail to act, we risk losing not just our grip on national security, but also on national sanity. The fight against deepfakes is not just a fight against technology—it is a fight for truth, for peace, and ultimately, for the soul of the nation. We must sustain the tempo.
MUKHTAR Ya’u Madobi is a Research Fellow at the Centre for Crisis Communication. He can be reached via [email protected].