JOURNAL ARTICLE

Prune Once for All: Sparse Pre-Trained Language Models

Ofir Zafrir, Ariel Larey, Guy Boudoukh, Haihao Shen, Moshe Wasserblat

Year: 2021
Journal: Zenodo (CERN, European Organization for Nuclear Research)
Publisher: European Organization for Nuclear Research

Citation

@article{zafrir2021prune,
  title={Prune Once for All: Sparse Pre-Trained Language Models},
  author={Zafrir, Ofir and Larey, Ariel and Boudoukh, Guy and Shen, Haihao and Wasserblat, Moshe},
  journal={arXiv preprint arXiv:2111.05754},
  year={2021}
}
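
For context, the paper trains sparse pre-trained Transformer models by combining gradual magnitude pruning with knowledge distillation during pre-training, and the resulting sparsity pattern is kept fixed while fine-tuning on downstream tasks. Below is a minimal PyTorch sketch of gradual magnitude pruning with a cubic sparsity schedule of the kind the method builds on; it is an illustration under stated assumptions, not the authors' implementation, and all function names and hyperparameters are invented for this example.

import torch

def cubic_sparsity(step, total_steps, s_init=0.0, s_final=0.9):
    # Cubic sparsity schedule (Zhu & Gupta, 2017): ramps the target
    # sparsity from s_init to s_final over total_steps pruning steps.
    t = min(step / total_steps, 1.0)
    return s_final + (s_init - s_final) * (1.0 - t) ** 3

def magnitude_prune_(weight, sparsity):
    # Zero the smallest-magnitude entries of `weight` in place and return
    # the boolean mask. Re-applying the mask after every optimizer step
    # keeps the sparsity pattern fixed, which is what lets a sparse
    # pre-trained model transfer to downstream tasks without re-pruning.
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight, dtype=torch.bool)
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    weight.mul_(mask)
    return mask

# Example: prune a stand-in for one Transformer weight matrix halfway
# through a 1000-step schedule targeting 85% final sparsity.
layer = torch.nn.Linear(768, 768)
with torch.no_grad():  # in-place edit of a leaf parameter must skip autograd
    target = cubic_sparsity(step=500, total_steps=1000, s_final=0.85)
    magnitude_prune_(layer.weight, target)
print(f"zeroed weights: {(layer.weight == 0).float().mean().item():.1%}")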

Keywords:
Computer science, Natural language processing, Artificial intelligence

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 28
Citation Normalized Percentile: (no value shown)

Topics

Topic Modeling (Physical Sciences → Computer Science → Artificial Intelligence)
Natural Language Processing Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Speech Recognition and Synthesis (Physical Sciences → Computer Science → Artificial Intelligence)

Related Documents

BOOK-CHAPTER

Pre-trained Language Models

Huaping Zhang, Jianyun Shang

Year: 2025 Pages: 73-90
BOOK-CHAPTER

Pre-trained Language Models

Gerhard Paaß, Sven Giesselbach

Artificial Intelligence: Foundations, Theory, and Algorithms. Year: 2023 Pages: 19-78
BOOK-CHAPTER

Improving Pre-trained Language Models

Gerhard Paaß, Sven Giesselbach

Artificial Intelligence: Foundations, Theory, and Algorithms. Year: 2023 Pages: 79-159