Is AI Aiding or Undermining Fact-checking? A Closer Look at the Issues 

By Peter Oyinmiebi ThankGod, HTCI Fellow

‘Content is king’ is one of the most popular sayings of the 21st century. Virality is the vehicle, and user engagement is the end goal, driven by popularity, effective marketing and income generation. Lurking behind the guise of content, however, is an insidious and often glossed-over issue: the spread of misinformation and disinformation.

The rise of misinformation and disinformation is a global problem with enormous localised impacts, particularly for societies with low literacy and digital media awareness, or for polarised societies already grappling with existing social fractures. In either case, this two-headed challenge undermines fragile trust, reignites tensions and incites violence.

The rapid spread of vast amounts of information leaves consumers vulnerable and allows too little room for examining the integrity of what they encounter. Several digital and emerging technologies, such as social media and AI, contribute to the misinformation/disinformation epidemic. Yet, despite all the doom associated with AI, the technology has become an indispensable element of the media world and the entire information life cycle.

AI in the digital information age

Artificial intelligence plays a dual, transformative role, serving as a tool both for spreading and for countering misinformation. On the positive side, AI helps facilitate and strengthen the democratisation of information and data. The technology offers reporters, journalists, and researchers speed, efficiency, and cost-effectiveness in gathering, producing, and distributing news. Beyond easing tedious newsroom chores and data gathering, AI is an effective assistant for tailoring content, generating insights, optimising feeds, writing news reports and automating operations that enhance the dissemination of information.

But there’s more.

AI assistance: the surge of misinformation

As an emerging technology, AI is limited by hallucinations, lack of empathy, data and security breaches, and ethical complacency. Beyond these, the technology is being actively exploited to undermine the integrity of information ecosystems, with destructive impacts on society. Artificial intelligence poses its most significant threat through its capability to create “realistic” content: images, audio and videos that are becoming more humanised and harder to distinguish from the genuine article. These deepfakes are artificial creations bearing human likenesses and designed to be believable.

The upsurge of AI-generated content and deepfakes in recent years has been alarming, raising concerns about how it supercharges the wave of fake news, the broader information disorder, and disinformation campaigns aimed at eroding public trust, all of which complicates our collective fight against mis/disinformation. A report from DeepMedia found that deepfakes are growing exponentially, roughly doubling in number every six months. In 2023 alone, an estimated 500,000 AI-generated synthetic videos and audio clips circulated on social media globally, and about 8 million deepfakes are projected to find their way online in 2025. With AI at the centre of this global crisis, and given the absence of proper regulation and governance of this emerging technology, there is a need for practical alternatives to fight dis/misinformation until AI regulatory frameworks and policies are strengthened globally and locally.

What can be done? Fact-checking as a lifeline

With AI contributing to rising levels of information disorder in ever more sophisticated ways, we are left with no option but to check and recheck. This has created room for fact-checking, and a need to incentivise the verification of news and information before we allow it into the information space or consume it as individuals. Fact-checking is the process of verifying the accuracy of information or statements attributed to an individual or organisation and shared publicly. While scholars and experts agree that fact-checking holds positive prospects for fighting dis/misinformation, several limitations affect the process: low levels of media literacy, the tedium that fact-checking requires, and the belief that virality equals accuracy all undermine its effectiveness. Distressing as the situation looks, fact-checking is a step in the right direction in countering the potential effects of dis/misinformation on health, democracy and trust-building.

The other side: AI in fact-checking

AI is here to stay, and we cannot avoid it, not even in fact-checking. This realisation compels everyone concerned about fact-checking and the fight against disinformation not to ignore it. The enormous scale at which mis/disinformation spreads like wildfire globally has made it necessary to adopt technology-driven solutions to tackle the problem, and few tools match AI in carrying out this task at that scale. A growing number of fact-checking platforms and fact-checkers leverage AI's capabilities to verify information daily across different online platforms, using it to search for, retrieve, classify and process information in ways that simplify the fact-checking process.
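One concrete example of this kind of assistance is claim matching: comparing a newly spotted post against claims that have already been fact-checked, so reviewers do not repeat work that has been done before. The sketch below is purely illustrative; it assumes the open-source sentence-transformers library and its publicly available all-MiniLM-L6-v2 model, and the claims and post are invented for the example rather than drawn from any real fact-checking database.

```python
# Illustrative sketch: matching an incoming post against previously fact-checked
# claims using sentence embeddings. Assumes the open-source `sentence-transformers`
# library (pip install sentence-transformers); all claims below are invented.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical archive of claims that fact-checkers have already reviewed.
checked_claims = [
    "Drinking hot water cures malaria.",
    "The national election was postponed to March.",
    "5G towers spread viral infections.",
]

incoming_post = "Health officials confirm hot water is a cure for malaria."

# Embed the archive and the new post, then compare with cosine similarity.
claim_embeddings = model.encode(checked_claims, convert_to_tensor=True)
post_embedding = model.encode(incoming_post, convert_to_tensor=True)
scores = util.cos_sim(post_embedding, claim_embeddings)[0]

# Surface the closest previously checked claim for a human reviewer to confirm.
best_idx = int(scores.argmax())
print(f"Closest match ({float(scores[best_idx]):.2f}): {checked_claims[best_idx]}")
```

A similarity search like this does not decide whether the post is false; it only points a human fact-checker to an existing verdict that may apply, which is precisely the kind of tedious retrieval work AI handles well.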

AI also helps automate the mis/disinformation verification process. Fact-checkers can train AI models, or use existing ones, to identify, analyse and draw insights from potential and emerging misinformation in near real time, shrinking the lag between a false claim appearing and fact-checkers countering it. Artificial intelligence is one of the most effective complementary tools supporting human judgement in flagging dis/misinformation online: it can cross-reference claims, check for bias, and identify patterns in fake-news trends. Lastly, AI can help identify synthetic visuals or deepfakes, although this capability has limitations and will keep improving in the long run.
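To make the triage step concrete, here is a minimal, hedged sketch of how a model might sort incoming posts into rough categories, for instance separating checkable factual claims from opinion, so that human fact-checkers can prioritise what to verify first. It assumes the open-source Hugging Face transformers library and the publicly available facebook/bart-large-mnli model; the labels and example posts are made up for illustration, and real verification pipelines are considerably more elaborate.

```python
# Illustrative sketch: triaging posts with a zero-shot classifier so human
# fact-checkers can prioritise checkable claims. Assumes the open-source
# `transformers` library (pip install transformers torch); posts and labels
# are invented for this example.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

posts = [
    "The new vaccine was approved after completing all three trial phases.",
    "Honestly, the mayor's speeches just bore me to tears.",
]

# Rough triage labels chosen purely for illustration.
labels = ["verifiable factual claim", "personal opinion"]

for post in posts:
    result = classifier(post, candidate_labels=labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    print(f"{top_label} ({top_score:.2f}): {post}")
```

Even in this toy form, the model only routes likely claims to human reviewers; it does not and cannot establish truth, which is why AI remains a complement to, not a substitute for, human judgement.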

Conclusion

Artificial intelligence is a double-edged tool that can be used for good or ill, and it will continue to play a major role in the accelerating digital transformation of every facet of our lives, particularly in shaping how we produce, disseminate, and consume information. The ultimate decision, however, rests with us as users: to apply AI to solving social and personal problems rather than exploiting it for harm. In practice, that means fact-checking is becoming, and will remain, a critical survival skill for navigating the complexities of today’s information age and digital world.
