JOURNAL ARTICLE

EconNLI: Evaluating Large Language Models on Economics Reasoning

Abstract

Large Language Models (LLMs) are widely used to write economic analysis reports and provide financial advice, but their ability to understand economic knowledge and reason about the potential consequences of specific economic events has not been systematically evaluated. To address this gap, we propose a new dataset, Natural Language Inference on Economic Events (EconNLI), to evaluate LLMs' knowledge and reasoning abilities in the economic domain. We evaluate LLMs on (1) their ability to correctly classify whether a premise event will cause a hypothesis event and (2) their ability to generate reasonable events resulting from a given premise. Our experiments reveal that LLMs are not sophisticated economic reasoners and may produce wrong or hallucinated answers. Our study raises awareness of the limitations of using LLMs for critical decision-making involving economic reasoning and analysis. The dataset and code are available at https://github.com/Irenehere/EconNLI.
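The first evaluation task described above — classifying whether a premise event causes a hypothesis event — can be sketched as a binary prompt to an LLM. The prompt wording and example events below are illustrative assumptions, not the paper's actual template or data:

```python
def build_causal_nli_prompt(premise: str, hypothesis: str) -> str:
    """Build a binary causal-NLI prompt for an LLM.

    Illustrative wording only; EconNLI's actual prompt template may differ.
    """
    return (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Question: Would the premise event plausibly cause the hypothesis "
        "event? Answer 'yes' or 'no'."
    )

# Hypothetical premise/hypothesis pair in the style the abstract describes.
prompt = build_causal_nli_prompt(
    "The central bank raises interest rates.",
    "Borrowing costs for firms increase.",
)
print(prompt)
```

The model's yes/no answer would then be compared against the gold label; the second task (generating plausible resulting events) would instead ask the model to complete the hypothesis itself.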

Keywords:
Computer science
