Analysis of risks when ensuring information safety
DOI: https://doi.org/10.31673/2412-9070.2026.028906

Abstract
The article presents a methodology for optimizing Large Language Models (LLMs) to process ultra-large contextual datasets for managerial decision support. This approach addresses the growing need for high-tech processing of unstructured information in digital management. The study examines the “lost-in-the-middle” problem and the resulting degradation in factual retrieval accuracy when input volume exceeds 100,000 tokens. Architectural limitations of transformers often lead to arithmetic hallucinations during complex calculations.
A hybrid concept is proposed, based on the integration of technical model auditing and Prompt Engineering techniques, including Dynamic Analytical Model Injection (DAP) and the transmission of precomputed statistical parameters (regression coefficients) in structured textual formats (JSON, Markdown, YAML, CSV). This ensures that the LLM operates on verified mathematical foundations rather than attempting to derive complex calculations from raw text alone.
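A minimal sketch of the structured-transmission idea described above: precomputed regression parameters are serialized as JSON and embedded in the prompt, so the model interprets verified numbers instead of deriving them from raw text. The parameter names and values here are illustrative assumptions, not taken from the article, and the prompt wording is a hypothetical template.

```python
import json

# Hypothetical precomputed regression parameters (illustrative values only).
# In the article's workflow these would come from an external statistical tool.
regression_params = {
    "model": "linear_regression",
    "intercept": 12.4,
    "coefficients": {"ad_spend": 0.87, "headcount": -0.15},
    "r_squared": 0.91,
}

def build_prompt(params: dict, question: str) -> str:
    """Embed verified statistical parameters as a JSON block inside the prompt,
    instructing the model to use them as-is rather than re-estimating them."""
    payload = json.dumps(params, indent=2)
    return (
        "Use ONLY the precomputed regression parameters below; "
        "do not attempt to re-derive them from the text.\n"
        f"```json\n{payload}\n```\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    regression_params,
    "Forecast the revenue change if ad_spend rises by 10 units.",
)
```

The same payload could equally be rendered as Markdown, YAML, or CSV; JSON is shown because it round-trips losslessly and is trivially validated before injection.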
A system of critical technical metrics (TTFT, TPOT, Needle-in-a-Haystack) is defined, which directly correlates with the responsiveness and validity of managerial decisions. The proposed structured data transmission methodology enables the minimization of arithmetic hallucinations and reduces the model’s cognitive load. This approach ensures stable model performance and maintains analytical accuracy even when processing ultra-large context volumes. The application of Prompt Engineering techniques allows the transformation of LLMs from text generation tools into full-fledged interpreters of complex analytical models without additional fine-tuning.
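The Needle-in-a-Haystack metric mentioned above can be sketched as a small harness: a known fact (the “needle”) is planted at a chosen depth in a long filler context, and retrieval is scored by whether the model's answer contains it. The filler text, needle, and the stand-in `mock_model` function are assumptions for illustration; a real evaluation would replace `mock_model` with an actual LLM call and sweep many depths and context lengths.

```python
def make_haystack(needle: str, filler_sentences: int, depth: float) -> str:
    """Build a long context with the needle inserted at a relative depth (0.0–1.0)."""
    filler = ["The committee reviewed the quarterly figures."] * filler_sentences
    filler.insert(int(len(filler) * depth), needle)
    return " ".join(filler)

def retrieved(answer: str, expected: str) -> bool:
    """Score retrieval: did the answer contain the expected fact?"""
    return expected.lower() in answer.lower()

# Stand-in "model" that just searches the context; a real benchmark
# would substitute an LLM API call here.
def mock_model(context: str, question: str) -> str:
    return "4217" if "4217" in context else "unknown"

needle = "The access code for vault B is 4217."
scores = {}
for depth in (0.0, 0.5, 1.0):  # start, middle ("lost-in-the-middle" zone), end
    ctx = make_haystack(needle, 500, depth)
    answer = mock_model(ctx, "What is the access code for vault B?")
    scores[depth] = retrieved(answer, "4217")
```

With a real model, accuracy at `depth=0.5` typically degrades first as context length grows, which is exactly the lost-in-the-middle effect the article targets.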
This hybrid approach enhances the transparency and reproducibility of analytical workflows within Decision Support Systems (DSS). By structuring quantitative parameters into machine-readable formats and aligning them with contextual evidence, the approach ensures a more reliable interpretation of diverse datasets. This minimizes analytical errors in large contexts and provides a reliable basis for strategic planning.
Keywords: Large Language Models (LLM), Prompt Engineering, RAG emulation, Decision Support Systems, managerial decision-making, analytical models, RAG, JSON, Markdown.