THREE-LEVEL KNOWLEDGE ARCHITECTURE AS A TOOL FOR MINIMIZING LOGICAL DRIFT IN AI-ASSISTED RESEARCH
DOI: 10.31673/2412-4338.2026.019008
Abstract
This paper addresses the problem of "logical drift" and statistical hallucination in large language models (LLMs) in the context of fundamental scientific research. The author proposes and formalizes a method of Induced AI-Theory Expansion (IAI-TE) based on a three-level knowledge architecture: an axiomatic core (A-Core), a conceptual codex (S-Template), and a full specification. The key innovation of the method is that it turns the generative capacity of AI from a source of error into an instrument of rigorous deduction by imposing artificial reality filters. A Consistency-Enforcement Protocol (CE-Protocol) is developed to provide dual verification: textual (logical coherence) and symbolic (dimensional analysis). The method is validated in practice through the complete deductive reconstruction of the Temporal Theory of the Universe (TTU) from a compact 7.2 KB core. Experimental results confirm 100% recovery of the theory's 47 fundamental equations after 23 iterations of the CE-Protocol, demonstrating a transition from memorization to genuine deduction. The proposed method lays the foundation for a new epistemological paradigm, AI-Resilient Science, in which scientific theories become executable algorithms capable of self-regeneration and scalable expansion without loss of logical integrity.
Keywords: IAI-TE, artificial intelligence, scientific methodology, axiomatic core, theory coherence, LLM, logical drift, AI-Resilient Science, Temporal Theory of the Universe (TTU), post-book science, algorithmic epistemology, consistency-enforcement protocol, self-regenerating theories.
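The symbolic half of the CE-Protocol's dual verification rests on dimensional analysis: an equation regenerated by an LLM is accepted only if every term carries the same physical dimension. A minimal sketch of such a check, representing dimensions as exponent vectors over the SI base dimensions (mass, length, time), might look as follows; the function names and the example equation are illustrative assumptions, not part of the published protocol.

```python
# Illustrative sketch of a dimensional-consistency check, the "symbolic"
# verification stage of a CE-Protocol-style filter. All names here are
# hypothetical; the paper does not specify an implementation.

# A dimension is a tuple of exponents over (M, L, T): mass, length, time.
DIMLESS = (0, 0, 0)

def mul(a, b):
    """Dimension of a product: exponents add."""
    return tuple(x + y for x, y in zip(a, b))

def power(a, n):
    """Dimension of a power: exponents scale."""
    return tuple(x * n for x in a)

def check_equation(lhs, rhs_terms):
    """An equation passes only if every summed right-hand-side term
    has exactly the dimension vector of the left-hand side."""
    return all(term == lhs for term in rhs_terms)

# Example: E = m * c^2, so [E] must equal M * (L/T)^2 = M L^2 T^-2.
MASS = (1, 0, 0)
VELOCITY = (0, 1, -1)
ENERGY = (1, 2, -2)

assert check_equation(ENERGY, [mul(MASS, power(VELOCITY, 2))])  # consistent
assert not check_equation(ENERGY, [mul(MASS, VELOCITY)])        # drifted form E = m*c fails
```

In a full pipeline, a regenerated equation that fails this check would be rejected and the generation step repeated, so dimensional errors cannot accumulate across iterations.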