Accurate syntactic parsing is fundamental to natural language processing (NLP) applications such as machine translation, text summarization, and sentiment analysis. Traditional rule-based and statistical parsing methods often struggle with complex sentence structures, ambiguity, and cross-linguistic variation. Large language models (LLMs) have recently demonstrated a remarkable ability to capture linguistic patterns, offering new opportunities to improve the accuracy and efficiency of syntactic parsing. This paper proposes an optimized syntactic structure parsing framework based on LLMs that integrates self-attention mechanisms with fine-tuned transformer architectures to improve parsing precision and generalization. The framework employs a hybrid training approach that combines supervised learning with reinforcement learning from human feedback (RLHF) to refine parsing decisions. Experimental evaluations on benchmark datasets show that the proposed method outperforms conventional dependency and constituency parsers in accuracy, robustness, and adaptability to complex sentence structures. These findings highlight the potential of LLM-driven syntactic parsing to advance NLP applications.
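The abstract does not detail the parsing architecture, so the following is only an illustrative sketch of one plausible reading of "self-attention mechanisms with fine-tuned transformer architectures": a biaffine arc-scoring head, in the style of Dozat and Manning (2017), placed on top of a pretrained transformer encoder. The encoder name, hidden dimensions, and all helper code are assumptions for illustration, not the authors' implementation; the supervised and RLHF training stages are indicated only in comments.

```python
# Illustrative sketch, NOT the paper's method: a biaffine dependency-arc
# scorer over a pretrained transformer encoder. Model name ("bert-base-cased")
# and arc_dim are placeholder assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class BiaffineParser(nn.Module):
    def __init__(self, encoder_name="bert-base-cased", arc_dim=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Separate projections for tokens acting as dependents vs. heads.
        self.dep_mlp = nn.Sequential(nn.Linear(hidden, arc_dim), nn.ReLU())
        self.head_mlp = nn.Sequential(nn.Linear(hidden, arc_dim), nn.ReLU())
        # Biaffine weight; the extra row handles a bias term via an
        # appended constant-1 feature on the dependent representations.
        self.U = nn.Parameter(torch.randn(arc_dim + 1, arc_dim) * 0.01)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        dep = self.dep_mlp(h)                       # (B, T, d)
        head = self.head_mlp(h)                     # (B, T, d)
        ones = torch.ones(*dep.shape[:2], 1, device=dep.device)
        dep = torch.cat([dep, ones], dim=-1)        # (B, T, d+1), bias feature
        # Arc scores: entry (b, i, j) = score of token j heading token i.
        return dep @ self.U @ head.transpose(1, 2)  # (B, T, T)


tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = BiaffineParser()
batch = tokenizer(["The cat sat on the mat ."], return_tensors="pt")
arc_scores = model(batch["input_ids"], batch["attention_mask"])
print(arc_scores.shape)  # (1, sequence_length, sequence_length)
# Supervised fine-tuning would apply cross-entropy between arc_scores and
# gold head indices; an RLHF stage, as the abstract suggests, could then
# further refine the scorer from human preference feedback.
```

In this reading, supervised training teaches the head to recover gold dependency arcs, and the RLHF stage adjusts parsing decisions where annotated data is ambiguous; how the paper actually combines the two stages is not specified in the abstract.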
Anton Cagle, Ahmed Ceifelnasr Ahmed