https://con.duikt.edu.ua/index.php/communication/issue/feedConnectivity2026-01-06T00:16:41+00:00Open Journal Systems<p><img src="/public/site/images/coneditor/p_131_893638051.jpg"></p> <p><strong>Name of journal</strong> – «Connectivity» (Зв'язок)<br><strong>Founder</strong>: State University of Telecommunications.<br><strong>Year of foundation</strong>: 1995.<br><strong>State certificate of registration</strong>: <a href="http://www.irbis-nbuv.gov.ua/cgi-bin/irbis_nbuv/cgiirbis_64.exe?C21COM=2&I21DBN=UJRN&P21DBN=UJRN&Z21ID=&Image_file_name=IMG%2Fvduikt_s.jpg&IMAGE_FILE_DOWNLOAD=0">КВ № 20996-10796 ПР від 25.09.2014</a>. <br><strong>ISSN</strong>: 2412-9070<br><strong>Subject</strong>: telecommunications, information technologies, computer engineering, education.<br><strong>Periodicity</strong> – six times a year.<br><strong>Address</strong>: Solomyanska Str., 7, Kyiv, 03110, Ukraine.<br><strong>Telephone</strong>: +380 (44) 249 25 42<br><strong>E-mail</strong>: <strong><a href="mailto:kpstorchak@ukr.net">kpstorchak@ukr.net</a></strong><br><strong>Website: </strong><a href="http://www.dut.edu.ua/" target="_blank" rel="noopener">http://www.dut.edu.ua/</a>, <a href="http://con.dut.edu.ua/">http://con.dut.edu.ua/</a></p>https://con.duikt.edu.ua/index.php/communication/article/view/2928Title2026-01-02T13:30:46+00:00<p>Title</p>2025-12-30T15:39:26+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2929Content2026-01-02T13:30:49+00:00<p>Content</p>2025-12-30T15:42:49+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2930Per-order ordered parallelism method in microservice event delivery systems2026-01-02T13:30:40+00:00Колодюк А. В. (Kolodiuk A.)con@duikt.edu.uaВолощук О. Б. 
(Voloschuk O.)con@duikt.edu.ua<p>This paper presents a per-order ordered parallelism method for event delivery in microservice-based architectures, designed to ensure strict execution order within each logical workflow while sustaining high throughput and fault tolerance. The proposed approach introduces an application-level ordering layer built atop the RabbitMQ message broker, avoiding any modification of broker internals. The method relies on per-event sequence identifiers (X-Sequence-ID), dynamic per-key queue instantiation, and a gap-replay mechanism for recovering missing states, thereby providing deterministic processing and reliable restoration of system state even under message loss or redelivery. <br>The scientific contribution consists in the development of a formal execution model that integrates at-least-once delivery semantics, idempotent event handling, controlled redelivery, and bounded buffering. An analytical model is constructed to quantify the influence of parallelism on system latency, throughput, and stability, taking into account probabilistic message loss, queue load factors, resource constraints, and fallback HTTP channels. This model enables formal evaluation of the system overhead ε and predictive assessment of behavior during broker degradation, partial failures, or system updates, offering a mathematically substantiated basis for tuning concurrency and recovery mechanisms. <br>Experimental evaluation within the AutoGivex microservice ecosystem demonstrates reduced processing latency, increased stability, and improved scalability compared to traditional broker-level ordering strategies. The results confirm that the proposed method provides a robust foundation for building resilient event-driven distributed systems that require strict per-order consistency, deterministic behavior, and adaptive throughput optimization. 
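The per-key ordering mechanism summarized in this abstract (sequence identifiers, per-key buffering, gap detection for replay) can be illustrated with a minimal sketch. The class and method names below are invented for the example and do not come from the paper; the real system attaches X-Sequence-ID headers to RabbitMQ messages, while this fragment models only the application-level reordering logic:

```python
from collections import defaultdict

class PerKeyOrderedBuffer:
    """Buffers events per ordering key and releases them strictly by
    sequence number (the role the X-Sequence-ID header plays in the
    described design). Out-of-order arrivals wait in the buffer, and
    missing sequence numbers can be detected for a replay request."""

    def __init__(self):
        self.expected = defaultdict(lambda: 1)   # next sequence number per key
        self.pending = defaultdict(dict)         # key -> {seq: payload}

    def accept(self, key, seq, payload):
        """Store an event; return the payloads that are now deliverable
        in order. Duplicates and already-delivered sequence numbers are
        silently dropped, which gives idempotent handling on redelivery."""
        if seq >= self.expected[key]:
            self.pending[key][seq] = payload
        released = []
        while self.expected[key] in self.pending[key]:
            released.append(self.pending[key].pop(self.expected[key]))
            self.expected[key] += 1
        return released

    def gaps(self, key):
        """Sequence numbers missing below the highest buffered one:
        candidates for a gap-replay request to the producer."""
        if not self.pending[key]:
            return []
        top = max(self.pending[key])
        return [s for s in range(self.expected[key], top) if s not in self.pending[key]]
```

Events for different keys never block one another, which is the source of the parallelism; within one key, delivery stays strictly sequential.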
The findings can be applied to a wide range of domains—financial, logistics, enterprise, and real-time platforms—where correctness and reliability of event sequencing are essential. </p> <p><strong>Keywords:</strong> ordered parallelism; microservices; event delivery; RabbitMQ; X-Sequence-ID; idempotency; gap-replay; concurrency model; fault tolerance; analytical model; AutoGivex information technology.</p>2025-12-30T16:01:09+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2931Method for ensuring the functional resilience of a software-defined computer network based on state prediction using a generalized parameter2026-01-02T13:30:46+00:00Фісун О. С. (Fisun O.)con@duikt.edu.uaДовженко Т. П. (Dovzhenko T.)con@duikt.edu.ua<p>The article proposes a method for ensuring the functional resilience of a software-defined computer network (SDN) based on predicting its operational state using a generalized parameter. The study focuses on developing a predictive model that evaluates the dynamic behavior of SDN elements and determines the probability of the network transitioning to critical or unstable states. The approach integrates statistical normalization of network parameters, Bayesian classification of operational conditions, and nonparametric estimation of probability densities to form a robust assessment of the current and future states of the system. The generalized parameter is derived from multiple monitored metrics (such as flow rate, packet loss, and delay), allowing the model to capture multidimensional dependencies and reduce the influence of noise or partial data loss. <br>The developed method enables real-time forecasting of network degradation trends and provides a basis for proactive reconfiguration and load redistribution within the control plane. 
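As a toy illustration of how several monitored metrics can be folded into one generalized parameter, the sketch below uses min-max normalization and a weighted average. This is an assumption-laden simplification: the function name, bounds, and weights are invented for the example, and the paper's actual method additionally applies Bayesian classification and nonparametric density estimation, which are not reproduced here:

```python
def generalized_parameter(metrics, bounds, weights):
    """Fold several monitored SDN metrics (e.g. packet loss, delay) into a
    single generalized parameter in [0, 1]. Each metric is min-max
    normalized against its operational bounds and clamped, then combined
    as a weighted average; higher values indicate a worse network state."""
    total = 0.0
    for name, value in metrics.items():
        lo, hi = bounds[name]
        norm = min(max((value - lo) / (hi - lo), 0.0), 1.0)
        total += weights[name] * norm
    return total / sum(weights.values())
```

A controller could then compare this scalar against calibrated thresholds to classify the state as normal, degraded, or critical before triggering reconfiguration.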
Simulation results demonstrate that the predictive model improves decision-making accuracy in selecting stable network configurations and reduces the time required to restore optimal operation after disturbances. The proposed approach ensures adaptive resilience against both internal failures and external cyber impacts by combining probabilistic modeling and dynamic prediction. <br>The results of the research can be applied to enhance SDN controllers, network management systems, and automated security mechanisms that require continuous monitoring of network stability. The method contributes to the advancement of intelligent network management, where resilience is achieved not only by redundancy or recovery but also through predictive adaptation based on probabilistic assessment of state evolution. </p> <p><strong>Keywords:</strong> software-defined network (SDN); functional stability; traffic; algorithm; Bayesian method; data; SDN controller; metric.</p>2025-12-30T16:13:34+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2932Technological aspects of implementing IoT networks in intelligent transport systems2026-01-02T13:30:50+00:00Петренко В. О. (Petrenko V.)con@duikt.edu.uaГригоренко О. В. (Hryhorenko O.)con@duikt.edu.ua<p>This paper explores the technological aspects of implementing Internet of Things (IoT) networks in intelligent transport systems (ITS) within urban infrastructure. A conceptual model of IoT architecture is presented, highlighting key components, cybersecurity solutions, and directions for practical application. The main objective of the study is to design a conceptual IoT network model for ITS that takes into account real-time data collection, transmission, and processing capabilities, along with the integration of cybersecurity mechanisms such as encryption methods, device authentication, and event monitoring. 
The object of the study is the automation of processes within urban transport infrastructure using IoT technologies. Rapid urbanization and the growing number of vehicles increase pressure on transport systems, demanding advanced solutions to enhance traffic efficiency, safety, and environmental sustainability. IoT-based intelligent transport systems enable automated interaction between vehicles, infrastructure, and users by employing sensors, video surveillance cameras, data transmission gateways, and cloud platforms. The paper presents a case study of modeling IoT architecture for a real intersection in Kyiv using LoRaWAN technology, the MQTT protocol, server-side processing, and the AWS cloud platform. The stages of model development, optimal placement of IoT devices, and data processing architecture are described in detail. Simulation results using Cisco Packet Tracer demonstrated the effectiveness of the proposed architecture and its readiness for real-world integration and scaling within urban environments. </p> <p><strong>Keywords:</strong> intelligent transport systems; urban infrastructure; Internet of Things; V2X communication; LoRaWAN; cloud computing; sensor devices; network gateways; cybersecurity; MQTT protocol; simulation; traffic management.</p>2025-12-30T16:25:56+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2933The use of artificial intelligence in fifth-generation and subsequent generation communication networks2026-01-02T14:03:47+00:00Галаган Н. В. (Galagan N.)con@duikt.edu.uaКравченко В. І. (Kravchenko V.)con@duikt.edu.uaБлаженний Н. В. (Blazhennyi N.)con@duikt.edu.uaСазонов О. О. 
(Sazonov O.)con@duikt.edu.ua<p>The article examines the use of artificial intelligence in fifth-generation and subsequent communication networks, beginning with an assessment of its basic principles, key tasks, and the most current approaches and solutions. It is shown that artificial intelligence enables the network to react to real-world conditions. At the same time, infocommunication networks must meet the requirements of ultra-reliable, ultra-low-latency communication while carrying the growing volume of machine-to-machine traffic. <br>Next-generation networks must control and manage a large number of modern and innovative services that impose strict requirements on the quality of user service, as well as the requirements of the tactile Internet, sensory perception, and the telepresence effect. <br>Therefore, when assessing the feasibility of applying artificial intelligence, its basic properties must be considered first. <br>The article highlights the innovative approach used at the Department of Mobile and Video Information Technologies to improve existing and prospective artificial intelligence models intended for fifth-generation and subsequent networks. 
An innovative model is considered that additionally includes three or more stages of reinforcement learning, aimed at identifying improved reasoning models and aligning them with human preferences, as well as three or more stages of supervised fine-tuning, which form the basis of the model's reasoning and non-reasoning modes. The gain function and loss function are taken into account at each stage of operation, and the model carries out self-checking, reflection, and the generation of long chains of thought.</p> <p><strong>Keywords:</strong> artificial intelligence; Internet of Things; "smart home"; network service; tactile Internet; radio access networks.</p>2025-12-30T16:38:53+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2934Real-time face image recognition method using convolutional neural network MobileNetV32026-01-02T13:30:58+00:00Ілащук М. М. (Ilashchuk M.)con@duikt.edu.uaМельничук С. В. (Melnychuk S.)con@duikt.edu.ua<p>This article describes a developed methodology for real-time face detection and recognition. The methodology is based on an algorithmic pipeline for parallel data processing and exchange between three threads: video camera image acquisition, neural network processing, and recognized image visualization. <br>The study involved the development of a full real-time data processing pipeline that integrates image acquisition from a webcam, preprocessing of frames, neural network inference, and visual display of recognition results. The MobileNetV3 model was selected due to its balance between recognition accuracy and computational efficiency. The main improvement in data processing is achieved through the use of two Convolutional Neural Network (CNN) models with the MobileNetV3 architecture. CNN model #1 was trained for face detection, and CNN model #2 was trained for face recognition. 
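The three-thread structure described in this abstract (capture, inference, visualization) can be sketched with Python's standard threading and queue modules. The frame source and the model are stubbed out here, so the skeleton shows only the hand-off structure between threads; it is a sketch, not the authors' implementation:

```python
import queue
import threading

def run_pipeline(frames, infer):
    """Minimal three-stage skeleton: one thread feeds frames, one runs
    inference, one collects results for display. `frames` is any iterable
    of frames and `infer` any callable standing in for the CNN models, so
    the threading structure can be tested in isolation."""
    capture_q = queue.Queue(maxsize=8)   # bounded: back-pressure on capture
    result_q = queue.Queue()
    out = []

    def capture():
        for f in frames:
            capture_q.put(f)
        capture_q.put(None)              # sentinel: no more frames

    def inference():
        while (f := capture_q.get()) is not None:
            result_q.put(infer(f))
        result_q.put(None)

    def display():
        while (r := result_q.get()) is not None:
            out.append(r)                # real code would draw the frame here

    threads = [threading.Thread(target=t) for t in (capture, inference, display)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out
```

The bounded capture queue is one common way to keep end-to-end latency under control on CPU-only machines: a slow model then throttles capture instead of accumulating stale frames.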
Based on the proposed methodology, a software application was developed using the Python language. Key attention was focused on optimizing the methodology to achieve minimal latency during frame processing. <br>To overcome the latency inherent to CPU-only systems, a multi-threaded software architecture was designed, consisting of three independent threads responsible for image capture, neural network inference, and visualization. The conducted experiments demonstrated the effectiveness of the developed methodology, which ensures stable operation on a personal computer even without a Graphics Processing Unit (GPU). An average performance of 5.5 frames per second (FPS) with a latency of 0.72 seconds was obtained. The proposed methodology remains robust under changes in operating system load. </p> <p><strong>Keywords:</strong> facial image recognition; convolutional neural networks; image processing; MobileNetV3; Python; machine learning; detection; algorithm; computer vision.</p>2025-12-30T16:52:40+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2935Nonlinear dynamics of market segment evolution: mathematical modeling based on the cusp catastrophe concept2026-01-02T13:31:02+00:00Соломаха С. А. (Solomakha S.)con@duikt.edu.uaЯсінецький О. О. (Yasinetskyi O.)con@duikt.edu.ua<p>A mathematical model describing the nonlinear evolution of market segments is developed based on the cusp catastrophe (catastrophe of the “cusp” type) from René Thom’s catastrophe theory. The proposed framework conceptualizes market development not as a smooth, continuous trajectory but as a sequence of stable phases separated by abrupt transitions. These transitions emerge from the interaction of three core factors: unsatisfied demand, consumer satisfaction, and the intensity of competition. Their dynamic interplay shapes the structure of the potential function whose minima correspond to stable market states. 
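For reference, the canonical cusp potential is a textbook form from catastrophe theory, with a and b standing for the model's control factors (the paper's own parametrization may differ):

```latex
V(x) = \tfrac{1}{4}x^{4} + \tfrac{1}{2}\,a\,x^{2} + b\,x,
\qquad
\frac{\partial V}{\partial x} = x^{3} + a x + b = 0,
\qquad
\Delta = 4a^{3} + 27b^{2}.
```

Equilibria are the real roots of the cubic: inside the bifurcation set, where Δ < 0, there are three equilibria (two stable phases separated by an unstable transitional state, the bistability the abstract describes), while for Δ > 0 only one equilibrium remains, so crossing Δ = 0 produces the abrupt phase transition.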
<br>The model introduces the canonical potential function of the cusp catastrophe along with its equilibrium condition, discriminant, and bifurcation manifold, which together determine the number and stability of possible market equilibria. It is shown that under specific parameter configurations, the system exhibits bistability, where two stable phases (growth and saturation) coexist alongside an unstable transitional state. This structure explains the possibility of sudden market shifts driven by small variations in external parameters—analogous to market saturation events, competitive shocks, or abrupt changes in consumer behavior. <br>In addition, the model highlights the role of characteristic diffusion—when key product attributes spread among competitors—in reducing differentiation and increasing competitive pressure. This mechanism gradually pushes the system toward the bifurcation region, where phase transitions become unavoidable. Illustrative examples from the smartphone industry and historical production cycles demonstrate how these nonlinear dynamics manifest in real markets. <br>The results support the applicability of catastrophe theory as a tool for analyzing market instability and structural transformations. The model offers a compact yet powerful mathematical instrument for identifying critical points, forecasting nonlinear responses, and enhancing strategic decision-making in competitive economic environments. </p> <p><strong>Keywords:</strong> cusp catastrophe; nonlinear dynamics; potential function; market segment; phase transitions; catastrophe theory; diffusion of characteristics.</p>2025-12-30T17:04:46+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2936Method for integrating multimodal generative models into DSP platforms with semantic control for dynamic advertising content generation2026-01-02T13:31:06+00:00Коротін Д. С. (Korotin D.)con@duikt.edu.uaЛащевська Н. О. 
(Lashchevska N.)con@duikt.edu.ua<p>The article presents a scientific method for integrating multimodal generative models into Demand-Side Platforms (DSPs) to enable dynamic generation of advertising content based on user behavioral features. The proposed architecture consists of four interconnected modules: a prompt generator, a multimodal content generator, a semantic control module, and a DSP connector. <br>A key contribution of this research is the development of the semantic control module, which performs multi-level verification of generated content, including linguistic safety, visual consistency, and brand compliance. A mathematical optimization model is proposed to maximize content utility by balancing semantic similarity, contextual relevance, and content safety. <br>Experimental modeling was carried out using simulated data of 10,000 user sessions. The results demonstrated significant improvements compared to a baseline DSP campaign: click-through rate (CTR) increased by 24 %, engagement rate (ER) by 18 %, and conversion rate (CR) by 12 %. Statistical testing confirmed the reliability of these improvements (p < 0.05). <br>The developed method provides a scalable framework for adaptive, AI-driven content generation within DSP ecosystems. Its practical implementation through REST APIs allows advertisers to automate creative production, enhance personalization, and ensure semantic and ethical control of advertising content. </p> <p><strong>Keywords:</strong> multimodal models; DSP; generative artificial intelligence; semantic control; adaptive advertising; content security.</p>2025-12-30T17:15:36+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2937Architecture of the software system for evaluating the level of internationalization of the scientific institutions2026-01-02T13:31:09+00:00Стативка Ю. І. (Statyvka Y.)con@duikt.edu.uaМінцзюнь Ч. 
(Mingjun Z.)con@duikt.edu.ua<p>The article presents the development of a basic architecture of a software system designed to evaluate the level of internationalization of scientific institutions. The need for such a system arises due to the growing importance of internationalization in scientific activity and the lack of standardized software tools that support this process. Developed on the basis of a generalized process model, the proposed architecture meets the key functional requirements for the software system and ensures the implementation of the main use cases of all users, including experts from the executive group, experts from the community of stakeholders, managers of institutions, and public users. The architecture consists of four main modules: design of the evaluation system, construction and selection of methodologies, conducting computational evaluations, and publication of results. <br>The proposed approach simplifies the development of software systems to support decision-making based on adequate and substantiated processing and analysis of indicators of international scientific cooperation and competitiveness for heads of scientific institutions and other stakeholders. </p> <p><strong>Keywords:</strong> internationalization of scientific institutions; evaluation of the level of internationalization; architecture of the software system; software engineering.</p>2025-12-30T17:27:51+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2938Information system for intelligent measurement of psychological state coefficients2026-01-02T13:31:13+00:00Шаповал В. П. (Shapoval V.)con@duikt.edu.uaТарасенко Я. В. (Tarasenko Ya.)con@duikt.edu.ua<p>The paper presents an information system for the intelligent measurement of psychological state coefficients that indicate the cause of the temporal dynamics of psychological states. 
The phases of intelligent measurement, a contextual diagram, and a structural-logical diagram of the system's functioning are provided. A two-level architecture is proposed based on the determination of primary psychological functional states and in-depth analysis of the psychophysiological state. In-depth analysis refines the measured values of primary psychological functional states obtained through video monitoring. The parametric-dynamic model for predicting the cause of the temporal dynamics of psychological states has been improved through a two-level correlation-regression analysis. The presented architecture and improved prediction model have made it possible to increase the accuracy of determining psychological coefficients by 8-10% while reducing the average probability of Type I and Type II errors by 20% compared to the parametric-dynamic model before improvement and to other similar models applied to the problems addressed in this work. The intelligent measurement process takes into account the weighting coefficients of both the initial analysis and video monitoring. The measurement results are refined by clarifying psycholinguistic tests. The accuracy of the intelligent measurement of psychological states is improved by identifying the probable causes of their temporal dynamics. The proposed information system is important for the development of information complexes to counter various types of insider attacks in the private and public sectors. The improved model forms the basis for further development of approaches to the intelligent measurement of psychological state coefficients, increasing their accuracy and the speed of processing the data arriving at the intelligent measurement system. 
</p> <p><strong>Keywords:</strong> information monitoring system; psychological coefficients; threat prediction; intelligent measurement; parametric-dynamic model.</p>2025-12-30T18:25:46+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2939Methods of data cleaning for forecasting investments in education2026-01-02T14:18:08+00:00Бажан Т. О. (Bazhan T.)con@duikt.edu.uaКриворучко В. Ф. (Kryvoruchko V.)con@duikt.edu.ua<p>This article addresses the relevance of data cleaning in the context of forecasting investments in the educational sector, emphasizing that the quality of input data is crucial for the accuracy and reliability of machine learning prognostic models. Poor quality data inevitably leads to distorted patterns and, consequently, to erroneous investment forecasts, which can negatively impact the allocation of financial resources and the development of the educational system. The specificity of educational data, its diversity, and susceptibility to errors highlight the urgent need for thorough cleaning. <br>Based on an analysis of existing literature, it was found that while there is a significant body of research on general data cleaning methods and the application of machine learning in education, there is a lack of focused studies that specifically investigate the effectiveness of various data cleaning methods for improving the accuracy of investment forecasting in the educational field. This underscores the scientific novelty and relevance of the conducted research. <br>The aim of the study is to develop and substantiate an effective data cleaning method aimed at improving the accuracy of forecasting investments in education. 
To achieve this goal, a number of tasks were set, including analyzing existing methods, conducting a comparative analysis of their effectiveness, identifying the most suitable approaches, developing possible enhancements, creating a block diagram of the proposed method, and formulating practical recommendations.<br>The paper thoroughly examines the fundamental stage of data cleaning in the machine learning pipeline, which precedes the creation and training of models. A comparative analysis of key data cleaning methods is presented, including handling missing values (row/column deletion, mean/median/mode imputation, predictive imputation), outlier detection and treatment (visualization, statistical methods, machine learning algorithms, outlier transformation), duplicate removal, error and inconsistency correction (spell check/format validation, source reconciliation, rule-based validation), as well as data scaling and normalization (Min-Max Scaling, StandardScaler) and data type conversion. <br>Selection of the best cleaning methods for forecasting investments in education is proposed, considering the specificity of educational data. These include comprehensive missing value handling, robust outlier detection and treatment, thorough duplicate detection and elimination, strict rule-based data validation using domain knowledge, and format harmonization and data type conversion. Opportunities for improving data cleaning methods are discussed, specifically the development of hybrid approaches, consideration of the context of educational data, automation of the cleaning process using machine learning, creation of interactive tools, and evaluation of the impact of cleaning methods on forecast quality. 
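A compact sketch of three of the cleaning steps compared above, applied to one numeric indicator: median imputation, Tukey-fence outlier clipping, and Min-Max scaling. The function name and the 1.5·IQR fences are conventional illustrative choices, not the article's prescribed pipeline; in practice library implementations such as scikit-learn's scalers would replace these steps:

```python
import statistics

def clean_series(values):
    """Clean one numeric indicator: (1) impute missing values (None) with
    the median, (2) clip outliers to the 1.5*IQR Tukey fences, and
    (3) min-max scale the result to [0, 1]."""
    present = [v for v in values if v is not None]
    med = statistics.median(present)
    filled = [med if v is None else v for v in values]

    q1, _, q3 = statistics.quantiles(present, n=4)   # quartiles of observed data
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    clipped = [min(max(v, lo), hi) for v in filled]

    mn, mx = min(clipped), max(clipped)
    return [(v - mn) / (mx - mn) for v in clipped] if mx > mn else [0.0] * len(clipped)
```

Ordering matters here: imputing before computing the fences keeps the imputed median from being treated as an outlier, while scaling last keeps the output comparable across indicators.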
<br>Practical recommendations for using data cleaning methods for educational investment forecasts are provided, with an emphasis on understanding the specific characteristics of educational data (its origin, hierarchical structure, temporal dependencies, categorical features, sensitivity to policy changes), comprehensive handling of missing values, robust outlier detection and treatment, specific cleaning methods for educational data (standardization of categorical features, consistency control between levels, validation based on standards), integration and reconciliation of data from different sources, and evaluation of the impact of cleaning on forecast accuracy. The importance of involving experts from the educational sector at all stages of the process is highlighted. <br>In the conclusions, it is stated that high-quality data cleaning is critically important for building reliable prognostic models in the field of educational investment. The proposed comprehensive approach, combining missing value handling and robust outlier detection and treatment methods, significantly improves the quality of input data and enhances forecast accuracy. Prospects for further research include testing the method on larger volumes of real-world data and comparing its effectiveness with other existing data cleaning approaches.</p> <p><strong>Keywords:</strong> data cleaning; data quality; machine learning; investment forecasting; educational data; predictive models.</p>2025-12-30T19:27:48+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2940Propaganda detection in textual modality on Telegram by means of transformers2026-01-02T13:31:24+00:00Сніцаренко О. О. (Snitsarenko O.)con@duikt.edu.ua<p>The state propaganda machine of the Russian Federation, supported by multi-billion annual budgets, strategically employs the Telegram platform as the primary channel for creating and disseminating propagandistic content. 
This activity significantly affects the perception of the war in Ukraine, both in the international information environment and within Russia itself. Since Telegram already has one billion active users, the platform plays a crucial role in shaping public narratives and information flows. Moreover, the modern development of large language models makes the dissemination of propaganda more automated and rapid. This paper explores deep transformer-based neural networks, their role, and effectiveness in detecting Russian propaganda, using an algorithmically created dataset of Telegram channels that reflects the digital footprint of Russian propaganda during the full-scale invasion period. An effective transformer-based model is proposed, post-trained, and evaluated on this dataset. <br>The results of the study demonstrated that certain transformer-based models outperform other multilingual models, including other transformers. These models also show higher effectiveness compared to classical and ensemble baselines. Moreover, the developed approach to processing and constructing efficient Telegram datasets—through an asynchronous algorithm with a well-developed systematic approach, randomized sampling, and adherence to data construction principles and configurations—has proven to make machine learning algorithms highly effective, achieving accuracy levels between 80% and 96%. 
<br>Future research should focus on expanding multimodal aspects (text combined with images and metadata), exploring cross-linguistic transformer models, and developing online monitoring systems for tracking propaganda campaigns in social networks.</p> <p><strong>Keywords:</strong> deep neural networks; transformers; computational propaganda; social media; Telegram.</p>2025-12-30T19:53:34+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2941Enhancing reliability of energy management software through predictive modeling and automated repair2026-01-06T00:16:41+00:00Verlan A. (Verlan A.)con@duikt.edu.uaZhinai W. (Zhinai W.)con@duikt.edu.uaYunhai Z. (Yunhai Z.)con@duikt.edu.ua<p>This research is conducted within the Department of Software Engineering for Power Industry, NTUU KPI, and the Foreign Expert Studio for Demand Response at the Shandong-Uzbekistan Technological Innovation Research Institute collaboration under Project H20240943 "Quality Assurance Project for Intelligent Energy Management Software Based on AI Methods" and the "Development and Industrialization of Intelligent Grid Demand Response Technology" project. <br>Intelligent Energy Management Software (IEMS) plays a vital role in forecasting, optimization, and anomaly detection within modern energy infrastructures. However, evolving data distributions and heterogeneous deployment conditions introduce high risks of software defects and unstable behavior. This paper proposes a prediction–repair framework that unifies defect prediction with automated multi-level repair to ensure both accuracy and reliability. The prediction module employs hybrid models combining temporal and structural features, while the repair module operates at software, model, and system levels. Public datasets – NASA MDP and PROMISE for software defect prediction, NAB for anomaly detection, and UCI Energy for calibration assessment – are used for validation. 
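For readers unfamiliar with the calibration metric used in this evaluation, a minimal sketch of Expected Calibration Error for binary predictions follows. This is the standard binned formulation; the bin count and naming are the usual conventions rather than this paper's code:

```python
def expected_calibration_error(probs, labels, n_bins=10):
    """Standard ECE: group predictions into equal-width confidence bins,
    then take the weighted mean absolute gap between each bin's average
    confidence and its empirical accuracy. `probs` are predicted
    probabilities in [0, 1]; `labels` are 0/1 outcomes."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)   # p == 1.0 falls in the last bin
        bins[idx].append((p, y))
    ece, n = 0.0, len(probs)
    for b in bins:
        if b:
            conf = sum(p for p, _ in b) / len(b)
            acc = sum(y for _, y in b) / len(b)
            ece += (len(b) / n) * abs(conf - acc)
    return ece
```

A well-calibrated model has confidence close to accuracy in every bin, so its ECE is near zero; the 0.032 reported in the abstract is on this scale.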
Results show that the proposed method consistently outperforms baseline approaches, yielding F1-score improvements of 5–10 points on defect prediction and an 8-point gain on NAB anomaly detection (0.70 → 0.78). Calibration reliability also increases, reducing Expected Calibration Error to 0.032 and Negative Log Likelihood to 0.18. Furthermore, integrated repair improves recovery to 87% and reduces latency by 36% compared with single-level strategies. These findings demonstrate that coupling predictive modeling with automated repair enhances robustness and trustworthiness of IEMS under distributional shift, providing a practical route for reliable deployment in residential, commercial, and industrial contexts.</p> <p><strong>Keywords:</strong> intelligent energy management software (IEMS); defect prediction; automated repair; anomaly detection; calibration; robustness; software quality assurance.</p>2025-12-30T20:07:38+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2944Comparing the performance of Python Django and Java Hibernate ORM frameworks2026-01-02T13:56:44+00:00Гавор А. С. (Havor A.)con@duikt.edu.uaНіщеменко Д. О. (Nishchemenko D.)con@duikt.edu.uaГордієнко К. О. (Hordiienko K.)con@duikt.edu.uaАронов А. О. (Aronov A.)con@duikt.edu.ua<p>The article presents a comparative analysis of the performance of ORM frameworks Django ORM (Python) and Hibernate (Java) in large-scale relational database environments. The relevance of the study is determined by the need to optimize the interaction between the application layer and the database in systems with high demands for performance and data consistency. <br>The purpose of the work is to experimentally evaluate performance, latency, SQL query intensity, CPU usage, and memory consumption for both ORM frameworks in a unified PostgreSQL environment. The modeling included CRUD operations, join queries, and aggregation scenarios with a dataset exceeding 900,000 records. 
<br>The results showed that Django ORM outperforms Hibernate in basic operations and complex data selections due to its lower level of abstraction and reduced number of intermediate layers. Hibernate, in contrast, demonstrates stability and consistency under high transactional load, ensuring scalability through its multi-layered architecture and JVM-level optimizations. <br>It is recommended to use Django ORM in web applications and rapid-development systems, while Hibernate is more suitable for corporate and financial systems where stability and object state control are critical.</p> <p><strong>Keywords:</strong> ORM, Django, Hibernate, relational database, performance, latency, transactions, PostgreSQL, Python, Java, programming languages.</p>2026-01-02T13:56:43+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2942The method of increasing the security of distributed database systems based on optimization approaches and blockchain technologies2026-01-02T13:58:04+00:00Жебка С. В. (Zhebka S.)con@duikt.edu.uaСініцин І. П. (Sinitsyn I.)con@duikt.edu.ua<p>This paper presents a novel method for enhancing the security of distributed database systems based on optimization techniques and blockchain technologies, aimed at ensuring system resilience under dynamic cyber-threat conditions. A mathematical framework is developed using stochastic models of node compromise probability, shard failure likelihood, and blockchain consensus disruption. These models are integrated into a unified risk function that reflects system vulnerability considering cryptographic strength, replication structure, consensus parameters, and environmental factors. A multi-objective optimization model is proposed to minimize the combined metric of risk, operational cost, and latency, achieving an optimal balance between security and performance in hybrid architectures involving on-chain and off-chain data storage. 
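The Django/Hibernate comparison above measures CRUD latency and query intensity against a shared database. A framework-agnostic sketch of that timing methodology, using Python's stdlib sqlite3 in place of PostgreSQL (the `orders` schema and the 1,000-row batch size are illustrative, not the study's 900,000-record dataset):

```python
import sqlite3
import time

# Framework-agnostic sketch of the CRUD timing harness described in the
# ORM comparison above: run an operation batch, record wall-clock
# latency. sqlite3 stands in for PostgreSQL; schema and sizes are
# illustrative only.

def timed(fn):
    """Run fn, returning (elapsed_seconds, result)."""
    start = time.perf_counter()
    result = fn()
    return time.perf_counter() - start, result

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

def insert_batch():
    # bulk INSERT, the kind of basic operation the study benchmarks
    conn.executemany("INSERT INTO orders (total) VALUES (?)",
                     [(float(i),) for i in range(1000)])
    conn.commit()

def aggregate():
    # aggregation scenario: one round trip, work pushed to the database
    return conn.execute("SELECT COUNT(*), AVG(total) FROM orders").fetchone()

insert_latency, _ = timed(insert_batch)
agg_latency, (count, avg) = timed(aggregate)
print(count)  # → 1000
```

Comparing such per-operation latencies, together with the number of SQL statements each ORM emits for the same logical operation, is what grounds conclusions like the abstraction-layer trade-off reported above.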
<br>The operational algorithm functions as a closed adaptive loop, continuously performing risk estimation, checking hard and probabilistic SLA constraints, analyzing threat deviations, and selecting optimal security configurations. The method incorporates SAA (sample average approximation), ADMM (alternating direction method of multipliers), and MPC (model predictive control), enabling efficient large-scale optimization across numerous shards and ensuring real-time adaptation. Blockchain serves as a mechanism for immutable audit logging, decentralized access control, and system state verification. <br>The proposed solution reduces the compromise risk by 25–30%, decreases unnecessary security overhead, and ensures predictable system behavior even under peak loads or active cyberattacks. The method offers an integrated approach for building adaptive, resilient, and scalable distributed databases suitable for corporate, cloud, and decentralized environments.</p> <p><strong>Keywords:</strong> distributed databases; blockchain; optimization; risk; consensus; replication; cryptographic security; SAA; ADMM; MPC; adaptive security; on-chain/off-chain storage.</p>2026-01-02T13:12:35+00:00##submission.copyrightStatement##https://con.duikt.edu.ua/index.php/communication/article/view/2943Resolving the Zadeh paradox: axiomatic possibility theory as a foundation for reliable artificial intelligence2026-01-02T13:58:17+00:00Бичков О. С. (Bychkov O.)con@duikt.edu.uaІщеряков С. М. (Ishcheriakov S.)con@duikt.edu.uaЛитвинчук Х. М. (Lytvynchuk H.)con@duikt.edu.uaАнтонов В. В. (Antonov V.)con@duikt.edu.ua<p>The paper builds a mathematically consistent model for describing uncertainty, which replaces the traditional one-dimensional approach to evaluating events with a dual system of measures: the measure of possibility and the measure of necessity. It is shown that the Zadeh paradox is not a random anomaly, but a critical failure of the conflict resolution mechanism in the Dempster-Shafer theory. 
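The adaptive loop in the distributed-database method above repeatedly scores candidate security configurations against a combined risk/cost/latency metric under SLA constraints. A minimal sketch of that selection step, in which the weights, the SLA bound, and the candidate configurations are all invented for illustration and are not taken from the paper:

```python
# Sketch of the configuration-selection step of the closed adaptive
# loop described above: weighted-sum scalarization of the
# multi-objective (risk, cost, latency) metric, with a hard latency
# SLA as a feasibility filter. All numbers are illustrative.

def objective(cfg, w_risk=0.6, w_cost=0.25, w_lat=0.15):
    """Combined metric to minimize; weights encode security priority."""
    return (w_risk * cfg["risk"]
            + w_cost * cfg["cost"]
            + w_lat * cfg["latency"])

def select_config(candidates, sla_latency=0.8):
    """Drop configurations violating the hard SLA, then pick the
    feasible one with the lowest combined objective."""
    feasible = [c for c in candidates if c["latency"] <= sla_latency]
    return min(feasible, key=objective)

candidates = [
    {"name": "light",  "risk": 0.40, "cost": 0.20, "latency": 0.30},
    {"name": "medium", "risk": 0.20, "cost": 0.45, "latency": 0.50},
    {"name": "heavy",  "risk": 0.05, "cost": 0.90, "latency": 0.95},  # SLA violation
]
print(select_config(candidates)["name"])  # → medium
```

In the full method this scalar evaluation would be driven by the stochastic risk models (hence SAA for sampling) rather than fixed numbers, and re-run each loop iteration as threat estimates change.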
<br>An axiomatic theory of possibility, using dual measures of possibility and necessity, is presented as a principled alternative. This approach provides a more honest and complete representation of an uncertain and contradictory state of knowledge. Instead of hiding the conflict, it brings it to the forefront, allowing the system to communicate the fundamental ambiguity of the situation. <br>An approach to modeling uncertainty for artificial intelligence systems is proposed that combines Dempster-Shafer theory with possibility theory, providing a consistent representation of epistemic uncertainty through confidence intervals and possibility/necessity measures. A modified approach to combining evidence in high-conflict scenarios is developed. A criterion for source consistency and a mechanism for adaptive source weighting in the expert assessment process are proposed.</p> <p><strong>Keywords:</strong> possibility theory; artificial intelligence; reliability; mathematical modeling; uncertainty.</p>2026-01-02T13:30:10+00:00##submission.copyrightStatement##
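The conflict-resolution failure this abstract refers to can be reproduced directly with Zadeh's classic two-expert example: each expert assigns the "compromise" hypothesis C only mass 0.01, yet after Dempster's combination rule C receives all the mass. A self-contained sketch (the frozensets are focal sets; the masses are Zadeh's original numbers):

```python
from itertools import product

# Zadeh's paradox reproduced with Dempster's rule of combination.
# Two experts almost fully disagree (A vs. B); their only overlap is
# the hypothesis C that both consider nearly impossible. Dempster
# normalization discards the massive conflict and certifies C.

def dempster_combine(m1, m2):
    """Combine two mass functions; returns (combined masses, conflict K)."""
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2  # mass assigned to contradictory pairs
    # Dempster's rule renormalizes by the non-conflicting mass 1 - K
    return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
m1 = {A: 0.99, C: 0.01}   # expert 1: almost certainly A
m2 = {B: 0.99, C: 0.01}   # expert 2: almost certainly B
m, k = dempster_combine(m1, m2)
print(round(m[C], 6), round(k, 4))  # → 1.0 0.9999
```

The dual possibility/necessity representation advocated above would instead report C as barely possible and not at all necessary, exposing the near-total conflict rather than normalizing it away.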