Connectivity https://con.duikt.edu.ua/index.php/communication <p><img src="/public/site/images/coneditor/Обкладинка_Звязок_№_6_(172)3.png"></p> <p><strong>Name of journal</strong> – «Connectivity» (Зв'язок)<br><strong>Founder</strong>: State University of Telecommunications.<br><strong>Year of foundation</strong>: 1995.<br><strong>State certificate of registration</strong>: <a href="http://www.irbis-nbuv.gov.ua/cgi-bin/irbis_nbuv/cgiirbis_64.exe?C21COM=2&amp;I21DBN=UJRN&amp;P21DBN=UJRN&amp;Z21ID=&amp;Image_file_name=IMG%2Fvduikt_s.jpg&amp;IMAGE_FILE_DOWNLOAD=0">КВ № 20996-10796 ПР від 25.09.2014</a>.<br><strong>ISSN</strong>: 2412-9070<br><strong>Subject</strong>: telecommunications, information technologies, computer engineering, education.<br><strong>Periodicity</strong> – six times a year.<br><strong>Address</strong>: Solomyanska Str., 7, Kyiv, 03110, Ukraine.<br><strong>Telephone</strong>: +380 (44) 249 25 42<br><strong>E-mail</strong>: <strong><a href="mailto:kpstorchak@ukr.net">kpstorchak@ukr.net</a></strong><br><strong>Website: </strong><a href="http://www.dut.edu.ua/" target="_blank" rel="noopener">http://www.dut.edu.ua/</a>, <a href="http://con.dut.edu.ua/">http://con.dut.edu.ua/</a></p> uk-UA Connectivity Title https://con.duikt.edu.ua/index.php/communication/article/view/2911 <p>Title</p> 2025-11-07 2025-11-07 5 1 1 Content https://con.duikt.edu.ua/index.php/communication/article/view/2912 <p>Content</p> 2025-11-07 2025-11-07 5 2 2 Construction of scenarios for the development of cascade failures in the power grid infrastructure https://con.duikt.edu.ua/index.php/communication/article/view/2913 <p>The study of cascading failure scenarios in critical infrastructure (the power grid) plays an important role in decision-making in such situations, using existing experience to reduce negative consequences for system components. The modeling and simulation process is complex and resource-intensive, and the volume of data grows rapidly with the number of components and the connections between them; the data must therefore be formalized for further storage, processing, and use in analytical models. The article describes the possibilities of using an ontological model to analyze cascading effects in the power grid. The ontological model is used to describe the structure of the network, the connections between power grid components, and their characteristics at the time of the scenario. The developed model helps the user formalize information about the operation of the power grid in a form understandable to both humans and machines. The defined semantic rules are used to verify data and draw logical conclusions, which facilitates understanding of the power grid in a given scenario. The network structure defined in the form of a graph facilitates visual perception of the connections between components, and the description of their characteristics increases the level of detail of the model. The developed model can be integrated with various data sources in the subject area, which are defined using concepts and the logical connections between them. The model can be expanded and supplemented with additional concepts and rules according to user requirements. The developed method and ontological model can be integrated with software tools to create a tool for working with data according to the user's needs.</p>
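<p>A minimal sketch of the kind of ontological description outlined above, written with the Python rdflib library (an assumption; the article does not name its tooling). The namespace, concept names, and the overload rule are illustrative, not the article's actual vocabulary:</p> <pre><code># Illustrative sketch only: namespace and concept names are assumed.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

GRID = Namespace("http://example.org/powergrid#")  # hypothetical namespace
g = Graph()
g.bind("grid", GRID)

# Concepts: grid components and the connections between them
g.add((GRID.Bus, RDF.type, RDFS.Class))
g.add((GRID.TransmissionLine, RDF.type, RDFS.Class))

# Instances describing a network fragment at scenario time
g.add((GRID.bus1, RDF.type, GRID.Bus))
g.add((GRID.bus2, RDF.type, GRID.Bus))
g.add((GRID.line12, RDF.type, GRID.TransmissionLine))
g.add((GRID.line12, GRID.connects, GRID.bus1))
g.add((GRID.line12, GRID.connects, GRID.bus2))
g.add((GRID.line12, GRID.loadPercent, Literal(92.5)))  # characteristic at scenario time

# A toy "semantic rule": mark overloaded lines as candidates for the next
# step of the cascade (a real model would use a reasoner, SWRL or SHACL).
for line in g.subjects(RDF.type, GRID.TransmissionLine):
    load = g.value(line, GRID.loadPercent)
    if load is not None and float(load) > 90.0:
        g.add((line, GRID.state, Literal("overloaded")))

print(g.serialize(format="turtle"))
</code></pre>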
<p><strong>Keywords:</strong> critical infrastructure; power grid; cascading failure; cascade effect; blackout; ontology; graphs; power flow model; scenario modeling; software.</p> Хоменко О. М. (Khomenko O. M.) Коваль О. В. (Koval O. V.) 2025-11-07 2025-11-07 5 3 12 Predictive software update management for the Internet of Things https://con.duikt.edu.ua/index.php/communication/article/view/2915 <p>The rapid proliferation of the Internet of Things (IoT) has created unprecedented challenges in maintaining and updating software across millions of heterogeneous devices operating in dynamic environments. This research paper addresses the critical problem of inefficient software update management in large-scale IoT networks, where traditional deployment methodologies often prove insufficiently flexible, secure, and reliable. The study introduces an intelligent framework that combines Artificial Intelligence (AI) algorithms with the canary release strategy to transform the update process for distributed IoT ecosystems. At the core of this approach lies a mathematical model that enables real-time optimization of deployment parameters through continuous monitoring of system performance metrics and failure patterns. <br>The proposed framework employs Reinforcement Learning (RL) techniques to create an autonomous decision-making system capable of dynamically adjusting rollout strategies based on actual network conditions and device performance. The AI agent operates within a formally defined state space encompassing critical parameters such as the number of successfully updated devices, current error rates, and system load indicators. Through iterative learning, the system develops an optimal policy for managing update deployments by evaluating actions against a comprehensive cost function that balances stability requirements with operational efficiency. This function incorporates weighted factors including failure rates, performance degradation, and total deployment duration, enabling the system to make intelligent choices between continuing, pausing, or rolling back updates. <br>Experimental results demonstrate that the AI-enhanced canary release model achieves marked improvements in deployment reliability and resource utilization compared to conventional approaches. The system reduces rollout-related failures while decreasing overall deployment time, significantly enhancing operational continuity in critical IoT applications. Furthermore, the framework optimizes network bandwidth consumption through intelligent scheduling and prioritization mechanisms, addressing one of the most pressing constraints in large-scale IoT environments. The mathematical formalization of the deployment process provides a solid theoretical foundation for reproducible results and further academic investigation. The proposed solution not only addresses immediate operational challenges but also paves the way for developing self-healing IoT infrastructures capable of adaptive behavior in increasingly complex networked environments. The paper concludes by outlining promising directions for future work, including the integration of federated learning for privacy-preserving analytics and the development of predictive maintenance capabilities for proactive system management.</p>
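<p>A minimal sketch of the weighted cost function and action choice described in this abstract; the weights, thresholds, and state fields are illustrative assumptions, not the paper's actual parameterization, and the greedy rule merely stands in for the learned RL policy:</p> <pre><code># Sketch only: weights, thresholds and state fields are assumed.
from dataclasses import dataclass

@dataclass
class RolloutState:
    updated_devices: int   # devices updated so far
    error_rate: float      # failure rate among updated devices
    load: float            # system load indicator, 0..1
    elapsed_min: float     # deployment duration so far, minutes

W_FAIL, W_PERF, W_TIME = 10.0, 4.0, 0.01  # assumed weights

def cost(s: RolloutState) -> float:
    """Weighted cost balancing stability against deployment speed."""
    return W_FAIL * s.error_rate + W_PERF * s.load + W_TIME * s.elapsed_min

def choose_action(s: RolloutState) -> str:
    """Greedy stand-in for the learned policy: continue, pause, or roll back."""
    if s.error_rate > 0.05:   # too many failures in the canary group
        return "rollback"
    if s.load > 0.85:         # system under pressure: wait
        return "pause"
    return "continue"         # expand the canary cohort

state = RolloutState(updated_devices=120, error_rate=0.01, load=0.4, elapsed_min=35)
print(choose_action(state), round(cost(state), 3))
</code></pre>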
<p><strong>Keywords:</strong> Internet of Things; software updates; artificial intelligence; reinforcement learning; canary release; deployment optimization; information systems; mathematical model.</p> Бондарчук А. П. (Bondarchuk A. P.) Глушак О. М. (Hlushak O. M.) Пронькін О. В. (Pronkin O. V.) Стражніков А. А. (Strazhnikov A. A.) 2025-11-08 2025-11-08 5 13 17 Design and implementation of a lightweight autonomous AI agent for SRv6 multi-criteria optimization within FRRouting https://con.duikt.edu.ua/index.php/communication/article/view/2916 <p>Modern info-communication networks, increasingly leveraging Segment Routing over IPv6 (SRv6) for enhanced flexibility and programmability, face significant challenges in achieving dynamic and effective multi-criteria optimization (MCO). The relentless growth in network traffic volume, the escalating diversity of services driven by 5G, IoT, and edge computing, and the highly dynamic nature of performance demands necessitate a paradigm shift from traditional, often manual or statically configured, network management towards more autonomous and intelligent control systems. SRv6 provides a powerful architectural foundation for this evolution by encoding routing instructions directly within the IPv6 data plane, yet harnessing this programmability to simultaneously optimize multiple, often conflicting criteria (such as latency, throughput, reliability, and resource utilization) remains a complex undertaking, particularly under fluctuating network conditions and diverse application requirements. <br>Traditional centralized network management approaches, including those based on Software-Defined Networking (SDN) controllers, often encounter limitations related to scalability, potential single points of failure, and the inherent latency involved in collecting global network state and distributing control commands. Conversely, deploying sophisticated decision-making intelligence directly onto network devices, while offering the promise of faster localized responses and enhanced resilience, is frequently hindered by the constraints in computational resources (CPU, memory) typical of standard routing hardware. This paper specifically addresses the feasibility of designing and deploying a controller-less, autonomous Artificial Intelligence (AI) agent directly on a Linux-based routing platform to perform MCO for SRv6 traffic engineering. A core aspect of our investigation is the agent's seamless integration with the widely adopted FRRouting open-source routing suite, which serves as the operational platform for both monitoring network state and enacting SRv6 policy modifications. <br>We present the detailed design principles and a comprehensive implementation strategy for a lightweight AI agent. This agent is specifically architected to utilize resource-efficient Reinforcement Learning (RL) and/or Graph Neural Network (GNN) techniques, which are particularly well suited for operation within such constrained environments. The proposed agent functions as a distinct software process, running independently yet interacting locally with the co-located FRRouting daemons (e.g., zebra, bgpd, ospfd/isisd, pathd). This interaction is facilitated through standard, well-defined Application Programming Interfaces (APIs), such as YANG/NETCONF, gRPC, or REST, operating over local Inter-Process Communication (IPC) mechanisms.
This local API-driven approach enables the agent to continuously monitor relevant network state parameters derived from FRRouting and autonomously apply SRv6 Traffic Engineering (TE) policy modifications back to the FRRouting suite without reliance on any remote controller. <br>The paper details the overall system architecture; the conceptual components underpinning the agent's decision-making process, including state representation derived from network telemetry, the defined action space corresponding to SRv6 policy controls, and the formulation of multi-criteria objective functions; the specific local API-based integration mechanisms with FRRouting; and the planned implementation methodology. This methodology leverages Python as the primary development language, augmented by standard AI libraries (e.g., TensorFlow, PyTorch), and employs a containerized environment (Docker) for consistent deployment and rigorous validation within the Mininet network emulator. The offline training of the AI models is envisioned to utilize scalable cloud platforms such as Google Cloud Platform (GCP) to handle computational demands. The primary scientific and practical contribution of this work lies in demonstrating the feasibility of designing and implementing this novel autonomous, on-device agent architecture. By showcasing its capability to interact effectively with a standard, production-grade routing platform like FRRouting for the purpose of SRv6 MCO, this research paves the way for the development of more adaptive, resilient, and intelligent network control strategies deployed directly within the network infrastructure, thereby fostering a new generation of decentralized and autonomous network management solutions.</p>
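<p>As a hedged illustration of the multi-criteria objective functions mentioned above, the sketch below scores candidate SRv6 segment lists with a weighted sum of normalized criteria. The paths, metrics, and weights are assumptions; in the described architecture the inputs would come from FRRouting telemetry over a local API (YANG/NETCONF, gRPC, or REST) and would feed an RL/GNN policy rather than a fixed rule:</p> <pre><code># Sketch of a multi-criteria objective over candidate SRv6 segment lists.
# Metrics, weights and normalization bounds are illustrative assumptions.
candidate_paths = {
    # segment list                       (latency_ms, loss, utilization)
    ("fc00::1", "fc00::4"):              (12.0, 0.0010, 0.70),
    ("fc00::2", "fc00::3", "fc00::4"):   (18.0, 0.0002, 0.35),
}

WEIGHTS = {"latency": 0.5, "loss": 0.3, "util": 0.2}  # assumed trade-off

def score(latency_ms, loss, util):
    # Normalize each criterion to 0..1 (lower is better) and combine.
    return (WEIGHTS["latency"] * min(latency_ms / 50.0, 1.0)
            + WEIGHTS["loss"] * min(loss / 0.01, 1.0)
            + WEIGHTS["util"] * util)

best = min(candidate_paths, key=lambda p: score(*candidate_paths[p]))
print("selected segment list:", best)
</code></pre>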
<p><strong>Keywords:</strong> autonomous network management; artificial intelligence; reinforcement learning; segment routing; SRv6; multi-criteria optimization; FRRouting; model; local API; inter-process communication (IPC).</p> Владарчик Ю. Л. (Vladarchyk Yu. L.) Нестеренко К. С. (Nesterenko K. S.) 2025-11-08 2025-11-08 5 18 24 Prompt optimization for large language models https://con.duikt.edu.ua/index.php/communication/article/view/2917 <p>The rapid development of large language models (LLMs) has significantly transformed natural language processing, enabling more natural and effective human-computer interaction. Large language models such as GPT, BERT, and others have demonstrated remarkable success in various natural language processing tasks, including text generation, translation, sentiment analysis, and more. However, the quality of the output from these models largely depends on how the input prompts are formulated. This paper explores prompt optimization strategies, including prompt engineering, automatic prompt tuning, and task-specific adaptation. <br>Prompt engineering is crucial to achieving high-quality results from large language models. It involves designing and testing various prompt formulations to find the most effective ones for specific tasks. For instance, in text generation tasks, it is essential to formulate the prompt correctly so that the model can understand the context and provide an appropriate response. Automatic prompt tuning is another important approach that allows for the automatic adjustment of prompts to achieve optimal results. This method uses machine learning algorithms to analyze and optimize prompts based on previous outcomes. <br>Task-specific adaptation is a vital aspect of prompt optimization. Large language models can be fine-tuned to perform various tasks, such as translation, sentiment analysis, text classification, and more. For each of these tasks, it is necessary to develop specific prompts that consider the task's nuances and ensure high-quality results. For example, in translation tasks, it is important to consider the context and cultural nuances of the language to ensure accurate translation. <br>We analyze existing studies, propose a categorization of optimization techniques, and present insights from practical experiments. In this study, we conducted a series of experiments using various language models and prompts to evaluate their effectiveness. The results showed that optimized prompts significantly improve the efficiency, predictability, and interpretability of language model responses. For instance, in text generation tasks, optimized prompts led to higher quality text, reduced grammatical errors, and improved logical structure. <br>An important aspect of prompt optimization is considering the context and specifics of the task. This requires a detailed analysis of the task and the development of specific prompts that account for all its aspects. For example, in sentiment analysis tasks, it is important to consider the context and emotional state of the text to ensure accurate sentiment analysis. In text classification tasks, it is crucial to consider the specifics of the text and develop prompts that allow the model to classify the text accurately. <br>The findings indicate that optimized prompts significantly enhance the efficiency, predictability, and interpretability of language model responses. This allows large language models to be used to solve various natural language processing tasks with high-quality results: in text generation, optimized prompts lead to higher quality text, fewer grammatical errors, and improved logical structure; in translation, they ensure accurate translation that takes into account the context and cultural nuances of the language. <br>Thus, prompt optimization is crucial to achieving high-quality results from large language models. Prompt engineering, automatic prompt tuning, and task-specific adaptation significantly improve the efficiency, predictability, and interpretability of language model responses. The analysis of existing studies and the results of empirical experiments confirm the importance of prompt optimization.</p>
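<p>A minimal sketch of the automatic prompt tuning loop described above: candidate prompt templates are scored against a reference answer and the best one is kept. The <code>generate</code> stub and the token-overlap metric are hypothetical stand-ins for a real model call and a real task metric:</p> <pre><code># Sketch only: generate() and task_score() are hypothetical stand-ins.
def generate(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g., an API request); echoes the
    # prompt so the sketch runs end to end.
    return prompt

def task_score(output: str, reference: str) -> float:
    # Stand-in metric: token overlap with a reference answer.
    out, ref = set(output.lower().split()), set(reference.lower().split())
    return len(out & ref) / max(len(ref), 1)

candidates = [
    "Summarize the text: {text}",
    "You are an expert editor. Summarize the text in two sentences: {text}",
    "Text: {text}\nWrite a concise, factual summary:",
]

def best_prompt(text: str, reference: str) -> str:
    """Score every candidate template on one example and keep the best."""
    scored = [(task_score(generate(p.format(text=text)), reference), p)
              for p in candidates]
    return max(scored)[1]

print(best_prompt("the cat sat on the mat", "a cat sits on a mat"))
</code></pre>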
<p><strong>Keywords:</strong> language model; natural language processing; query tuning; optimization; instructional design; GPT; zero-shot; few-shot; text generation; testing; algorithm; machine learning; query; mathematical model.</p> Сторчак К. П. (Storchak K. P.) Миколаєнко В. О. (Mykolaienko V. O.) Довженко Т. П. (Dovzhenko T. P.) 2025-11-08 2025-11-08 5 25 31 Overview of modern methods for detecting financial crimes using artificial intelligence agents https://con.duikt.edu.ua/index.php/communication/article/view/2918 <p>The article presents a comprehensive analysis of modern methods for detecting financial crimes using artificial intelligence (AI) agents. It examines the classification of AI agents (reactive, deliberative, autonomous, and multi-agent), their operational features, and their role in financial monitoring systems. A comparative analysis of rule-based approaches, machine learning methods, hybrid models, blockchain architectures, and graph algorithms has been conducted. The study reveals that hybrid solutions and graph neural networks demonstrate the highest levels of precision and recall in detecting suspicious transactions, as confirmed by consolidated metrics from peer-reviewed sources. <br>Special attention is given to the use of deep neural networks, time series processing techniques, natural language processing (NLP), and Explainable AI (XAI) methods, which enhance the transparency and interpretability of AI-driven decisions, an essential requirement in heavily regulated financial domains. The advantages of the multi-agent approach are emphasized, including the ability to analyze complex fraud schemes in parallel, dynamic adaptability, and system scalability, which position it as a promising direction for the development of distributed financial intelligence systems. Alongside the identified benefits, the study also outlines key challenges hindering real-world implementation: limited interpretability of deep learning models, the requirement for large volumes of high-quality and balanced data, high computational costs, and legal and ethical constraints related to the automated processing of sensitive financial information, especially under regulations such as GDPR and AMLD. <br>In terms of future development, the article highlights the potential for integrating AI agents with blockchain networks to ensure transactional transparency and immutability, applying quantum algorithms to the processing of complex financial graphs, and adopting edge computing for real-time anomaly detection on decentralized devices. Furthermore, the research underlines the importance of an interdisciplinary approach that combines expertise in artificial intelligence, cybersecurity, economics, and legal compliance in building robust and effective financial crime prevention systems. Thus, the findings confirm that effective financial crime detection today requires complex technological solutions based on AI agents, with an emphasis on transparency, scalability, and regulatory compliance. The conclusions presented in this study hold both theoretical and practical value for the development of modern financial transaction monitoring systems and the formulation of policies in the field of digital financial security.</p>
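<p>To make the graph-analysis direction concrete, here is a small sketch that models accounts as nodes and transfers as edges, then flags short high-value cycles (funds returning to their origin, a common layering pattern). The data, cycle length, and amount threshold are illustrative assumptions:</p> <pre><code># Illustrative sketch: toy data, assumed thresholds.
import networkx as nx

G = nx.DiGraph()
transfers = [  # (sender, receiver, amount)
    ("A", "B", 9000), ("B", "C", 8800), ("C", "A", 8700),
    ("A", "D", 120),  ("D", "E", 95),
]
for src, dst, amount in transfers:
    G.add_edge(src, dst, amount=amount)

# Short cycles of consistently large transfers are a classic red flag.
for cycle in nx.simple_cycles(G):
    if len(cycle) <= 4:
        amounts = [G[u][v]["amount"]
                   for u, v in zip(cycle, cycle[1:] + cycle[:1])]
        if min(amounts) > 5000:
            print("suspicious cycle:", cycle, amounts)
</code></pre>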
<p><strong>Keywords:</strong> financial crimes; artificial intelligence; artificial intelligence agents; machine learning; blockchain; graph analysis.</p> Калинюк Б. С. (Kalyniuk B. S.) Замрій І. В. (Zamrii I. V.) Калинюк А. М. (Kalyniuk A. M.) 2025-11-08 2025-11-08 5 32 42 Modeling the structure of explainable AI for fake news detection using machine learning https://con.duikt.edu.ua/index.php/communication/article/view/2919 <p>This article explores the pressing need for enhancing interpretability and transparency in fake news detection systems based on machine learning techniques. It introduces a novel conceptual framework designed to systematically integrate Explainable Artificial Intelligence (XAI) components to address the “black-box” nature of such models. <br>The primary goal is to strengthen system transparency, foster user trust, and support effective moderation, thereby improving efforts to counteract digital disinformation. The proposed approach outlines the architectural foundations required for embedding XAI mechanisms into fake news detection workflows. It also examines how established explanation methods, such as LIME, SHAP, and attention mechanisms, can be aligned with the specific informational needs of different stakeholder groups, including developers, moderators, journalists, and end users. Special consideration is given to the challenges posed by multimodal disinformation, which includes text, images, video, and other content types. <br>The framework's practical relevance is illustrated through simulated scenarios and example use cases that demonstrate its potential functional and operational benefits. <br>The scientific contribution of this work lies in the development of an integrated, stakeholder-centered XAI architecture specifically tailored to the complex task of detecting fake news. Unlike ad hoc applications, the framework offers a systematic approach that encompasses multimodal content, defines integration-focused architectural considerations, and matches explanation types to differentiated user demands. <br>Findings from this conceptual study suggest that the proposed XAI structure offers a coherent pathway toward building more transparent, accountable, and effective fake news detection systems. Its adoption is expected to enhance users' ability to critically assess information, improve model adaptability, and support human–AI collaboration, providing a foundation for further empirical research in combating online disinformation.</p>
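<p>As a hedged illustration of how an explanation method such as LIME can be attached to a text classifier, the sketch below trains a toy TF-IDF plus logistic regression model and asks LIME for per-word contributions that could back a moderator-facing explanation; the model and data are placeholders, not the article's system:</p> <pre><code># Toy model and data; only the LIME wiring is the point of this sketch.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["scientists confirm the study", "SHOCKING secret they hide!!!"]
labels = [0, 1]  # 0 = real, 1 = fake (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["real", "fake"])
exp = explainer.explain_instance("SHOCKING study they hide",
                                 model.predict_proba, num_features=4)
# Per-word contributions, usable in a moderator-facing explanation:
print(exp.as_list())
</code></pre>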
<p><strong>Keywords:</strong> fake news detection; explainable AI (XAI); machine learning; AI transparency; conceptual framework; software engineering; disinformation; web applications.</p> Гнатишин М. С. (Hnatyshyn M. S.) Недашківський О. Л. (Nedashkivskiy O. L.) 2025-11-08 2025-11-08 5 43 50 Study of the reliability and safety of an N-dimensional system by the method of stepwise orthogonalization https://con.duikt.edu.ua/index.php/communication/article/view/2920 <p>In the context of digital transformation, ensuring the reliability and security of information and communication systems is becoming increasingly critical, as their stable operation determines the continuity of managing safety-critical objects and the protection of the information environment. This paper proposes a stepwise orthogonalization method for analyzing complex N-dimensional systems operating under stochastic feedback loops. Unlike traditional approaches, such as the Gaussian elimination method, the proposed solution accounts for the incompatibility of events arising in control loops, making it suitable for studying multi-loop structures with a high level of uncertainty. <br>The method combines tools of linear algebra and logic-probabilistic analysis, thus establishing a methodological link between deterministic and stochastic approaches. The algorithm relies on the sequential elimination of unknown variables and the substitution of compatible events with incompatible ones, which allows constructing expressions for evaluating the probabilities of information flows within the system. Its applicability is demonstrated using a graph with n vertices, modeling the distribution of information flows in environments with multiple control circuits. A key outcome of the method is the ability to accurately consider the influence of ε-events associated with the passage of signals through graph edges. <br>The proposed approach provides opportunities to assess the reliability and security of N-dimensional systems both at the level of individual information flows and across the entire infrastructure. The stepwise orthogonalization method helps identify hidden stochastic dependencies, evaluate the probability of critical component failures, and determine their impact on overall system performance. The study emphasizes that this method can be effectively applied in practical safety management tasks, particularly in the design of information and communication networks, as well as in the integration of modern technologies such as MEC, NFV, and URLLC. <br>The obtained results confirm the efficiency of the stepwise orthogonalization method as a tool for reducing risks, improving reliability, and ensuring the resilience of complex systems under conditions of uncertainty. Future research may focus on extending the applicability of the method to more complex topologies, multiparametric scenarios, and systems with advanced adaptive control mechanisms.</p>
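<p>The substitution of compatible events with incompatible ones mentioned above is the standard orthogonalization identity of logic-probabilistic analysis; a minimal worked form, with the independence assumption stated explicitly, is:</p> <pre><code>A_1 \cup A_2 = A_1 \vee \bar{A}_1 A_2,
\qquad
P(A_1 \cup A_2) = P(A_1) + P(\bar{A}_1 A_2)
                = P(A_1) + \bigl(1 - P(A_1)\bigr) P(A_2)
\quad \text{(for independent } A_1, A_2\text{)};

\bigcup_{i=1}^{n} A_i
  = A_1 \vee \bar{A}_1 A_2 \vee \bar{A}_1 \bar{A}_2 A_3
    \vee \dots \vee \bar{A}_1 \cdots \bar{A}_{n-1} A_n,
\qquad
P\Bigl(\bigcup_{i=1}^{n} A_i\Bigr)
  = \sum_{i=1}^{n} P\bigl(\bar{A}_1 \cdots \bar{A}_{i-1} A_i\bigr).
</code></pre> <p>The terms on the right-hand side are pairwise incompatible, so their probabilities simply add, which is what makes the stepwise elimination of unknowns tractable.</p>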
<p><strong>Keywords:</strong> information and communication system; N-dimensional system; orthogonalization; stochastic connection; system of event equations; logical-probabilistic analysis; event graph; control loop; flow probability.</p> Галаган Н. В. (Halahan N. V.) Борисенко І. І. (Borysenko I. I.) Гізун А. І. (Gizun A. I.) Хаб'юк Н. С. (Khabiuk N. S.) 2025-11-08 2025-11-08 5 51 57 The impact of AI-enabled tools on the architecture and testing of high-load information systems https://con.duikt.edu.ua/index.php/communication/article/view/2921 <p>In the modern world, information systems (IS) play a key role in the functioning of nearly all areas of activity, from finance and telecommunications to healthcare and public administration. As data volumes, user numbers, and information exchange speeds continue to grow, the load on IS computing resources increases significantly. This generates new challenges in ensuring their reliability, scalability, and stability, particularly in high-load environments. Ensuring the sustainable operation of such systems requires not only classical engineering solutions but also modern approaches to design, analysis, and testing. In recent years, AI-enabled tools have drawn increasing attention, as they are becoming more deeply integrated into the development, monitoring, and operation of IS. AI tools offer capabilities such as automated architectural decision generation, predictive testing, real-time anomaly detection, and resource usage optimization. However, the implementation of such solutions in high-load systems comes with a range of risks and limitations related to model validation, security, and explainability. The goal of this article is to analyze the impact of AI-enabled tools on architectural decisions and testing methods for high-load information systems. Special attention is given to both the advantages and the potential challenges that arise during the integration of such tools into the design and maintenance processes of systems with complex and dynamic workloads. In this context, AI-enabled tools are increasingly viewed as promising instruments for enhancing the efficiency and reliability of high-load information systems. These tools can support automated generation of test scenarios, predictive modeling of system behavior under varying workloads, and early detection of potential bottlenecks or failures. Moreover, AI-driven monitoring can adaptively allocate resources, helping to maintain optimal performance even under sudden spikes in demand. Despite these advantages, the adoption of AI in critical IS environments raises important concerns. Model accuracy, interpretability, and robustness under unforeseen conditions remain significant challenges, while security and compliance considerations impose additional constraints on deployment. Therefore, the integration of AI tools requires careful validation, iterative testing, and alignment with existing engineering practices to ensure that they genuinely contribute to the stability and resilience of high-load systems.</p> <p><strong>Keywords:</strong> information system; artificial intelligence; architecture; computing system; high-load system; testing; software; performance optimization.</p> Корнага Я. І. (Kornaga Y. I.) Олексій А. В. (Oleksii A. V.) 2025-11-08 2025-11-08 5 58 65 Evaluating rule-based vs. machine learning approaches for fraudulent transaction detection https://con.duikt.edu.ua/index.php/communication/article/view/2922 <p>Financial institutions nowadays rely heavily on rule engines (thresholds, white/black lists, velocity checks) to flag suspicious transactions. Machine learning (ML) models, on the other hand, while promising higher accuracy and adaptability, remain dependent on data characteristics, class imbalance, latency constraints, and interpretability requirements. In this paper, I present a controlled evaluation of a configurable rule-based baseline and several supervised ML models (logistic regression, random forest, gradient boosting) on an imbalanced transaction dataset. I measure detection performance (ROC-AUC, PR-AUC, precision/recall at operating points), operational costs (false-positive rate, alerts per 1k transactions), and engineering trade-offs (inference latency, feature complexity, interpretability). Results show that while rules remain competitive in high-precision, low-recall regimes, ML approaches achieve substantially better recall at comparable precision, especially when coupled with calibrated thresholds and class-imbalance handling. I discuss deployment-oriented considerations and outline a hybrid strategy that combines rules for policy compliance with ML for generalization.</p>
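<p>A minimal sketch of the evaluation protocol described in this abstract: a rule-style baseline and the three supervised models are compared by ROC-AUC and PR-AUC on a synthetic, heavily imbalanced dataset. The single-feature "rule" and all thresholds are illustrative assumptions:</p> <pre><code># Synthetic-data sketch of the rule-vs-ML comparison.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=12,
                           weights=[0.99], random_state=0)  # ~1% "fraud"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Rule-baseline stand-in: flag when one "amount-like" feature is extreme.
rule_scores = (X_te[:, 0] > np.quantile(X_tr[:, 0], 0.99)).astype(float)
print("rule     ROC-AUC %.3f  PR-AUC %.3f" % (
    roc_auc_score(y_te, rule_scores),
    average_precision_score(y_te, rule_scores)))

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    "gb": GradientBoostingClassifier(random_state=0),
}
for name, m in models.items():
    p = m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print("%-8s ROC-AUC %.3f  PR-AUC %.3f" % (
        name, roc_auc_score(y_te, p), average_precision_score(y_te, p)))
</code></pre>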
<p><strong>Keywords:</strong> fraud detection; financial risk; anomaly detection; rule engine; machine learning; class imbalance; interpretability.</p> Гайна Г. А. (Gaina G. A.) Масюк Д. В. (Masiuk D. V.) 2025-11-08 2025-11-08 5 66 71 Identifying the potential for applying neural network models in the context of natural language processing https://con.duikt.edu.ua/index.php/communication/article/view/2923 <p>Today, mental health remains the number one issue. According to the latest 2024 report on the state of healthcare, 45% of respondents consider mental health to be one of the main healthcare problems facing their country; cancer ranks second (38%) and stress third (31%) across 31 countries [1]. <br>This article focuses on determining the potential of natural language processing models for analyzing the diagnosis of psychological disorders. Three models were selected for analysis and research: a support vector machine classifier, a logistic regression model, and a DistilBERT transformer model. A dataset was created from the open Reddit Mental Health Dataset, and data for a neutral class without pronounced markers of psychological disorders was selected from Kaggle. <br>First, an analysis of the general differences between the algorithms and the models' operating principles was performed. Then, testing was performed on the same volume of pre-processed data. Based on the results of the research and comparisons, it was determined that even on a relatively small amount of data the transformer model shows better results than the classical models. The classical models also performed well, but as the amount of data increases their results deteriorate rapidly, while the transformer model improves.</p>
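<p>A hedged sketch of the transformer branch of this comparison, scoring short posts with a DistilBERT classifier through the Hugging Face transformers pipeline. The public sentiment checkpoint below is only a stand-in for the disorder-marker model fine-tuned in the study:</p> <pre><code># Sketch: the checkpoint is a public sentiment model standing in for the
# study's fine-tuned disorder-marker classifier.
from transformers import pipeline

clf = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english")

posts = [
    "Lately I can't sleep and nothing feels worth doing.",
    "Had a great walk today, feeling calm and rested.",
]
for post, pred in zip(posts, clf(posts)):
    print(pred["label"], round(pred["score"], 3), "-", post)
</code></pre>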
<p><strong>Keywords:</strong> neural network models; logistic regression; support vector machine; transformer models; psychological diagnosis; psychological disorders; natural language processing.</p> Давиденко К. О. (Davydenko K. O.) Заячковський А. В. (Zayachkovskyi A. V.) Антипенко Р. В. (Antypenko R. V.) 2025-11-08 2025-11-08 5 72 78 Application of machine learning to filter inefficient trading signals generated by mechanical approach indicators https://con.duikt.edu.ua/index.php/communication/article/view/2924 <p>The subject of this study is the application of machine learning algorithms to filter out ineffective trading signals generated by indicators of the mechanistic approach to the analysis of cryptocurrency markets. The purpose of the work is to develop and test ML models capable of increasing the profitability of trading strategies by filtering false signals that arise when using mechanistic indicators. The research tasks include: 1) formalization of mechanistic indicators (mechanistic moving average, MAS Buy, MAS Sell); 2) use of the triple barriers method to classify signals into effective and ineffective; 3) application and comparison of five machine learning algorithms (Summary Classifier, Catch22, Rocket, TimeCNN, Stacking); 4) evaluation of the effectiveness of the models using the ROC-AUC, Precision, Recall, Average Precision, and Sharpe Ratio metrics; 5) ranking of the models using a multi-criteria approach to decision-making. The results obtained showed that the machine learning models outperform the basic Dummy model in most cases, especially for long positions, where higher Sharpe Ratio and total return values were recorded. The best results were demonstrated by the Catch22, Rocket, and Stacking classifiers. On the other hand, short positions turned out to be less effective, which is associated with the upward trend of cryptocurrencies in 2019–2025. It was also found that variations in window size (16, 32, 64, 128) significantly affect the results, confirming the importance of parameter optimization. Thus, the work demonstrates the feasibility of integrating machine learning algorithms into a mechanistic approach to improve the quality of trading signals and the profitability of strategies. The results can be used to improve algorithmic trading systems and to develop new approaches to the application of ML in quantitative finance.</p>
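<p>A minimal sketch of the triple-barrier labeling used above to classify signals as effective or ineffective: the first barrier hit (take-profit, stop-loss, or the time horizon) decides the class. The barrier widths, horizon, and simulated price path are assumptions:</p> <pre><code># Sketch of triple-barrier labeling for a long signal; parameters assumed.
import numpy as np

def triple_barrier_label(prices, entry, tp=0.02, sl=0.01, horizon=16):
    """+1 if take-profit is hit first, -1 if stop-loss is hit first,
    0 if the vertical (time) barrier is reached."""
    p0 = prices[entry]
    for p in prices[entry + 1 : entry + 1 + horizon]:
        ret = p / p0 - 1.0
        if ret >= tp:
            return 1    # effective long signal
        if ret <= -sl:
            return -1   # ineffective: stopped out
    return 0            # ambiguous: time barrier hit first

rng = np.random.default_rng(7)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.005, 300))  # toy price path
labels = [triple_barrier_label(prices, i) for i in range(0, 280, 16)]
print(labels)
</code></pre>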
<p><strong>Keywords:</strong> mechanistic approach; cryptocurrency; retrospective testing; machine learning; algorithm; filtering; classifier; adaptive learning; metric; target variable.</p> Цапро І. В. (Tsapro I. V.) 2025-11-08 2025-11-08 5 79 86 Mathematical model of seismoacoustic monitoring of blast fields for remote reconnaissance https://con.duikt.edu.ua/index.php/communication/article/view/2925 <p>This article discusses the construction of a mathematical model of seismoacoustic monitoring of explosion fields for remote exploration. As is known, seismoacoustic monitoring of blast fields is used for remote sensing and is a set of routine observations, whereby the mode of the observations themselves and the spectral parameters of the object under study depend on the research task at hand. As shown in the article, the reliability of explosive signal classification is improved through the use of remote sensing information technologies based on seismoacoustic monitoring. To achieve the research goal, it is necessary to construct a mathematical model of a continuous explosive field signal that reflects the most important aspects of the process of monitoring explosive field signals. In constructing such a model, it is necessary to take into account both the parameters describing the process itself and the parameters of interference and natural background noise, as well as the characteristics of the transfer function of the medium. <br>To monitor explosive fields, it is necessary to collect statistical data for various explosive field signals and for the transfer functions of the media in which the signal propagates. This provides a priori information about both the explosive field at the study points and the explosive field signals themselves, which significantly reduces the impact of background interference on the evaluation of the studied explosive field signal. The work takes into account the influence of the instability of the parameters of the studied process and optimizes the procedure for processing the observed data according to criteria that take into account the characteristics of natural background interference. It is shown that the process of monitoring explosive fields reduces to the evaluation of informative parameters of parametric mathematical models of individual and continuous signals of the explosive field, the superposition of which forms the explosive field itself. The set of all informative parameters of each signal of the explosive field forms a vector of these parameters in n-dimensional Euclidean space. The optimal estimation of signal parameters involves determining the vector of free parameters that minimizes the value of the consistency criterion between the model and the observation data. Such a model provides good consistency in the case of modeling a linear system of oscillatory objects and thus takes into account the oscillatory nature of explosive signals. Thus, the article presents a new mathematical model of the explosive seismic field, which takes into account different types of signals in the explosive field, and provides a mathematical apparatus for solving this model. To assess the adequacy of the model for non-separable signals in the explosive field, simulation modeling of non-separable signals was carried out within the framework of an improved seismoacoustic monitoring methodology based on seismoacoustic analysis.</p>
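<p>The estimation step described above, determining the vector of free parameters that minimizes the consistency criterion between model and observations, can be sketched as a least-squares fit of a damped oscillatory signal model; the model form, true parameters, and noise level are assumptions for illustration:</p> <pre><code># Sketch: damped-sinusoid model and synthetic noisy data are assumed.
import numpy as np
from scipy.optimize import least_squares

def model(theta, t):
    a, decay, freq, phase = theta  # informative parameter vector
    return a * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t + phase)

rng = np.random.default_rng(1)
t = np.linspace(0, 2.0, 400)
true = (1.0, 1.5, 6.0, 0.4)
observed = model(true, t) + rng.normal(0, 0.05, t.size)  # background noise

# Consistency criterion: residual between model and observation data.
residual = lambda th: model(th, t) - observed
fit = least_squares(residual, x0=[0.5, 1.0, 5.5, 0.0])
print("estimated parameters:", np.round(fit.x, 3))
</code></pre>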
<p><strong>Keywords:</strong> seismoacoustic monitoring; parametric mathematical model; seismoacoustic analysis; explosive fields; seismoacoustic signal; seismoacoustic signal model.</p> Ярмолай І. О. (Yarmolay I. O.) 2025-11-08 2025-11-08 5 87 94 Modern approaches to modeling adaptive behavior of agents in virtual ecosystems https://con.duikt.edu.ua/index.php/communication/article/view/2926 <p>Modeling adaptive behavior in virtual ecosystems is a promising interdisciplinary research area that combines the achievements of computer science, ecology, sociology, and artificial intelligence. This article reviews modern methods and tools for creating multi-agent systems (MAS) that can adapt to environmental changes in virtual space. <br>Particular attention is paid to key technologies such as neural networks, evolutionary algorithms, and physically based simulations. The advantages and limitations of popular platforms, including AnyLogic, Unity, TensorFlow, and ML.NET, are also analyzed. <br>Agent-based modeling (ABM) is a basic tool for creating autonomous agents that can respond to environmental changes. The capabilities of platforms such as NetLogo and AnyLogic are compared: the former is convenient for building simple models, while the latter allows implementing more complex scenarios but requires deeper technical knowledge and more computational resources. <br>Neural networks and machine learning (ML) methods play a key role in the development of adaptive agent behavior. TensorFlow shows high efficiency when working with large amounts of data, while PyTorch is distinguished by its flexibility and convenience for rapid prototyping, which is especially important in the initial stages of research. <br>Evolutionary algorithms and genetic programming have proven themselves well in adaptation and optimization tasks. Libraries such as DEAP (Python) and GALib (C++) allow modeling the mechanisms of natural selection, although they require careful parameter tuning and significant computing power. <br>Multi-agent systems (MAS) are considered an extension of the ABM approach, with an emphasis on the interaction of agents with each other. The Repast and MASON platforms allow modeling complex collective dynamics in both biological and social systems. The integration of physical simulations in Unity ML-Agents or Unreal Engine allows creating more realistic scenarios of agent interaction with the environment: Unity is distinguished by its broad support for ML tools, while Unreal Engine provides extremely high-quality visualization. <br>The application of adaptive behavior modeling covers a wide range of fields, from ecology (modeling interactions between species) to economics (analysis of consumer behavior) and sociology (the study of information dissemination in networks). This once again confirms the universality of approaches to creating virtual ecosystems. <br>At the same time, certain challenges remain: significant computational costs, the complexity of achieving plausible agent behavior, and the need for close interdisciplinary cooperation. In the future, active adoption of the latest technologies is expected, in particular quantum computing, real-time data integration via the IoT, and the combination of different approaches to increase the accuracy of simulations. <br>As a result, adaptive behavior modeling opens up new horizons in the analysis of complex systems. For simple models NetLogo is sufficient, while for more complex and more realistic simulations TensorFlow, Unity, or AnyLogic are better suited. The prospects of this direction lie in hybrid solutions that combine the advantages of neural networks, agent-based approaches, and evolutionary algorithms, creating large-scale and reliable virtual ecosystems.</p> <p><strong>Keywords:</strong> adaptive behavior; multi-agent systems; machine learning; neural networks; virtual ecosystems; evolutionary algorithms; TensorFlow; Unity ML-Agents; AnyLogic.</p> Бур'янов Д. С. (Burianov D. S.) 2025-11-08 2025-11-08 5 95 100
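<p>To give a concrete picture of the adaptation loop that the platforms surveyed in the last abstract implement at much larger scale, here is a minimal pure-Python agent-based sketch; the energy budget, the alternating seasons, and the effort-adjustment rule are all assumptions made purely for illustration:</p> <pre><code># Toy agent-based model: all dynamics are assumed for illustration.
import random

class Agent:
    def __init__(self):
        self.effort = random.uniform(0.1, 0.9)  # foraging effort
        self.energy = 1.0

    def step(self, food_density):
        gain = self.effort * food_density
        cost = 0.05 + 0.1 * self.effort
        self.energy += gain - cost
        # Adaptation: invest more effort when the environment rewards it.
        delta = 0.05 if food_density > 0.1 else -0.05
        self.effort = min(0.9, max(0.1, self.effort + delta))

agents = [Agent() for _ in range(100)]
for season in range(50):
    food = 0.2 if season % 10 < 5 else 0.05   # environment alternates
    for a in agents:
        a.step(food)
    agents = [a for a in agents if a.energy > 0]  # selection pressure

mean_effort = sum(a.effort for a in agents) / max(len(agents), 1)
print(len(agents), "agents survive; mean effort", round(mean_effort, 2))
</code></pre>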