JOURNAL ARTICLE

Large Language Models and Logical Reasoning

Robert Friedman

Year: 2023 | Journal: Encyclopedia | Vol: 3 (2) | Pages: 687-697 | Publisher: Multidisciplinary Digital Publishing Institute

Abstract

In deep learning, large language models are typically trained on a corpus of data taken as representative of current knowledge. However, natural language is not an ideal form for the reliable communication of concepts. Formal logical statements are preferable, since they support verifiability, reliability, and applicability. Another reason for this preference is that natural language was not designed for an efficient and reliable flow of information and knowledge; it is instead an evolutionary adaptation shaped by a prior set of natural constraints. As a formally structured language, logical statements are also more interpretable. They may be constructed informally as natural language statements, but a formalized logical statement is expected to follow a stricter set of rules, such as the use of symbols for the logic-based operators that connect multiple simple statements into verifiable propositions.
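To illustrate the abstract's point that symbolic logic-based operators connect simple statements into verifiable propositions, here is a minimal propositional-logic sketch in Python. The operator names and the example proposition (modus ponens) are illustrative choices, not taken from the article:

```python
from itertools import product

# Logic-based operators expressed as functions over truth values.
def NOT(p): return not p
def AND(p, q): return p and q
def OR(p, q): return p or q
def IMPLIES(p, q): return (not p) or q

def is_tautology(formula, num_vars):
    """Verify a proposition by exhaustively checking every truth assignment."""
    return all(formula(*assignment)
               for assignment in product([True, False], repeat=num_vars))

# Modus ponens as one compound, verifiable proposition: ((p -> q) and p) -> q
modus_ponens = lambda p, q: IMPLIES(AND(IMPLIES(p, q), p), q)
print(is_tautology(modus_ponens, 2))  # True: holds under all assignments
```

Exhaustive truth-table checking is exactly the kind of mechanical verification that natural language statements do not admit; it works here because a propositional formula over n variables has only 2^n assignments.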

Keywords:
Computer science; Statement (logic); Natural language; Artificial intelligence; Object language; Logical consequence; Set (abstract data type); Defeasible reasoning; Natural language processing; Logical form; Formal language; Verifiable secret sharing; Datalog; Programming language; Linguistics

Metrics

Cited By: 11
FWCI (Field Weighted Citation Impact): 2.81
Refs: 37
Citation Normalized Percentile: 0.89
Is in top 1%
Is in top 10%

Citation History

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Language and cultural evolution (Social Sciences → Social Sciences → Cultural Studies)