This research addresses the challenge of summarizing legal documents using Natural Language Processing. It examines how state-of-the-art models such as XLNet and BART can be applied to abstractive summarization tailored to lengthy legal cases. The study assesses these models' ability to condense complex legal texts, highlighting the constraints imposed by input token limits. Through a comparison of XLNet and BART against legal-specific criteria, the research introduces a new approach that improves summarization by leveraging each model's strengths while mitigating its limitations. Summaries are evaluated with ROUGE scores. This study advances the understanding of abstractive summarization of legal texts, offering insights for both legal professionals and NLP researchers.
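The input-token limit mentioned above is the central practical constraint: BART, for example, accepts roughly 1024 subword tokens, so a lengthy legal case must be split into chunks that are summarized separately. The sketch below illustrates that chunking step and a simple unigram-overlap ROUGE-1 F1 score as a stand-in for the paper's evaluation; the chunk size, whitespace tokenization, and helper names are illustrative assumptions, not details taken from the study.

```python
from collections import Counter

MAX_TOKENS = 1024  # BART's input window (subword tokens; approximated here with words)

def chunk_document(text: str, max_tokens: int = MAX_TOKENS) -> list[str]:
    """Split a long document into pieces that fit the model's input window."""
    words = text.split()
    return [" ".join(words[i:i + max_tokens]) for i in range(0, len(words), max_tokens)]

def rouge1_f(reference: str, candidate: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a reference and a candidate summary."""
    ref, cand = Counter(reference.lower().split()), Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

In a full pipeline, each chunk would be passed to an abstractive summarizer (e.g. a BART summarization model) and the partial summaries concatenated before scoring against the reference summary.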
Miracle Aurelia, Sheila Monica, Abba Suganda Girsang