Modelling an Explainable AI Framework for Fake News Detection Using Machine Learning

DOI: 10.31673/2412-9070.2025.050411

Authors

  • М. С. Гнатишин, (Hnatyshyn M. S.) National Technical University of Ukraine «Igor Sikorsky Kyiv Polytechnic Institute»
  • О. Л. Недашківський, (Nedashkivskiy O. L.) State University of Information and Communication Technologies, Kyiv

Abstract

This article explores the pressing need for enhancing interpretability and transparency in fake news detection systems based on machine learning techniques. It introduces a novel conceptual framework designed to systematically integrate Explainable Artificial Intelligence (XAI) components to address the “black-box” nature of such models.
The primary goal is to strengthen system transparency, foster user trust, and support effective moderation, thereby improving efforts to counteract digital disinformation. The proposed approach outlines the architectural foundations required for embedding XAI mechanisms into fake news detection workflows. It also examines how established explanation methods — such as LIME, SHAP, and attention mechanisms — can be aligned with the specific informational needs of different stakeholder groups, including developers, moderators, journalists, and end users. Special consideration is given to the challenges posed by multimodal disinformation, which includes text, images, video, and other content types.
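The idea of aligning explanation methods with stakeholder needs can be illustrated with a minimal sketch of additive feature attribution for a linear bag-of-words fake news scorer. For a linear model over independent features, per-word contributions (weight × occurrence) coincide with SHAP values, so each word's share of the final score can be shown directly to a moderator or end user. All weights and vocabulary below are hypothetical, chosen purely for illustration; a real system would learn them from data.

```python
# Hypothetical log-odds weights "learned" by a linear model:
# positive weights push toward the "fake" label, negative toward "real".
WEIGHTS = {
    "shocking": 1.2,
    "miracle": 0.9,
    "reuters": -1.1,
    "study": -0.6,
}
BIAS = -0.2  # model intercept (hypothetical)

def explain(headline: str):
    """Return (score, attributions): score > 0 suggests 'fake'.

    For a linear model, each word's attribution is simply its weight,
    and the attributions sum (with the bias) to the final score --
    the additive property that SHAP-style explanations generalise.
    """
    tokens = headline.lower().split()
    attributions = {t: WEIGHTS[t] for t in tokens if t in WEIGHTS}
    score = BIAS + sum(attributions.values())
    return score, attributions

score, why = explain("Shocking miracle cure found")
print(f"score = {score:+.1f}")
# Show the words that contributed most, as a moderator-facing explanation.
for word, w in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{word:>10}: {w:+.1f}")
```

The same attributions could be rendered differently per audience: raw weights for developers, highlighted words for moderators, and a one-line summary for end users, which is the stakeholder-specific alignment the framework describes.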
The framework’s practical relevance is illustrated through simulated scenarios and example use cases that demonstrate its potential functional and operational benefits.
The scientific contribution of this work lies in the development of an integrated, stakeholder-centered XAI architecture, specifically tailored to the complex task of detecting fake news. Unlike ad hoc applications, the framework offers a systematic approach that encompasses multimodal content, defines integration-focused architectural considerations, and matches explanation types to differentiated user demands.
Findings from this conceptual study suggest that the proposed XAI structure offers a coherent pathway toward building more transparent, accountable, and effective fake news detection systems. Its adoption is expected to enhance users’ ability to critically assess information, improve model adaptability, and support human–AI collaboration — providing a foundation for further empirical research in combating online disinformation.

Keywords: fake news detection; explainable AI (XAI); machine learning; AI transparency; conceptual framework; software engineering; disinformation; web applications.

Published

2025-11-08

Section

Articles