Modelling an Explainable AI Framework for Fake News Detection Using Machine Learning
DOI: https://doi.org/10.31673/2412-9070.2025.050411

Abstract
This article explores the pressing need for enhancing interpretability and transparency in fake news detection systems based on machine learning techniques. It introduces a novel conceptual framework designed to systematically integrate Explainable Artificial Intelligence (XAI) components to address the “black-box” nature of such models.
The primary goal is to strengthen system transparency, foster user trust, and support effective moderation, thereby improving efforts to counteract digital disinformation. The proposed approach outlines the architectural foundations required for embedding XAI mechanisms into fake news detection workflows. It also examines how established explanation methods — such as LIME, SHAP, and attention mechanisms — can be aligned with the specific informational needs of different stakeholder groups, including developers, moderators, journalists, and end users. Special consideration is given to the challenges posed by multimodal disinformation, which includes text, images, video, and other content types.
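To make the distinction between a black-box prediction and a stakeholder-facing explanation concrete, the following minimal sketch pairs a toy classifier with a LIME-style perturbation explanation. Everything here is hypothetical: the keyword weights stand in for a trained model, and the leave-one-word-out attribution is a simplification of what LIME actually does (fitting a local linear surrogate over many random perturbations).

```python
import math

# Hypothetical toy model: probability that a text is fake, driven by
# hand-picked keyword weights (a stand-in for a trained ML classifier).
WEIGHTS = {"shocking": 2.0, "miracle": 1.5, "scientists": -0.5, "confirmed": -1.0}

def fake_probability(text: str) -> float:
    score = sum(WEIGHTS.get(w.lower(), 0.0) for w in text.split())
    return 1.0 / (1.0 + math.exp(-score))  # logistic squashing into (0, 1)

def explain(text: str) -> dict:
    """LIME-style perturbation attribution: drop each word in turn and
    record how much the fake probability falls. A positive value means
    the word pushed the prediction toward 'fake'."""
    words = text.split()
    base = fake_probability(text)
    contributions = {}
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        contributions[w] = base - fake_probability(perturbed)
    return contributions

expl = explain("shocking miracle cure confirmed")
```

An explanation of this shape (per-word contributions rather than a bare probability) is what a moderator or journalist could inspect to judge why a post was flagged, which is the kind of stakeholder-specific output the framework aims to systematize.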
The framework’s practical relevance is illustrated through simulated scenarios and example use cases that demonstrate its potential functional and operational benefits.
The scientific contribution of this work lies in the development of an integrated, stakeholder-centered XAI architecture, specifically tailored to the complex task of detecting fake news. Unlike ad hoc applications, the framework offers a systematic approach that encompasses multimodal content, defines integration-focused architectural considerations, and matches explanation types to differentiated user demands.
Findings from this conceptual study suggest that the proposed XAI structure offers a coherent pathway toward building more transparent, accountable, and effective fake news detection systems. Its adoption is expected to enhance users’ ability to critically assess information, improve model adaptability, and support human–AI collaboration — providing a foundation for further empirical research in combating online disinformation.
Keywords: fake news detection; explainable AI (XAI); machine learning; AI transparency; conceptual framework; software engineering; disinformation; web applications.