JOURNAL ARTICLE

Run-time versus compile-time instruction scheduling in superscalar (RISC) processors: performance and tradeoffs

Abstract

The RISC revolution has spurred the development of processors with increasing degrees of instruction-level parallelism (ILP). To realize the full potential of these processors, multiple instructions must continuously be issued and executed in a single cycle. Consequently, instruction scheduling plays a crucial role as an optimization in this context. While early attempts at instruction scheduling were limited to compile-time approaches, current trends are aimed at providing dynamic support in hardware. In this paper, we present the results of a detailed comparative study of the performance advantages to be derived from the spectrum of instruction scheduling approaches: from limited basic-block schedulers in the compiler, to novel and aggressive schedulers in hardware. A significant portion of our simulation-based experimental study is devoted to understanding the performance advantages of run-time scheduling. Our results indicate that run-time scheduling is effective in extracting the ILP inherent in the program trace being scheduled, over a wide range of machine and program parameters. Furthermore, we show that this effectiveness can be further enhanced by a simple basic-block scheduler in the compiler that optimizes for the presence of the run-time scheduler in the target; current basic-block schedulers are not designed to take advantage of this feature. We demonstrate this fact by presenting a novel basic-block scheduling algorithm that is sensitive to the lookahead hardware in the target processor.
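To make the compile-time end of this spectrum concrete, the following is a minimal sketch of a classic greedy list scheduler for a basic block: instructions form a dependence DAG, and in each cycle up to `issue_width` ready instructions are issued, longest critical path first. This is a generic textbook illustration with made-up function names and a toy latency model, not the lookahead-sensitive algorithm proposed in the paper.

```python
def critical_path(instr, deps, latency, memo=None):
    """Length of the longest latency-weighted path starting at instr.
    deps maps an instruction to the list of instructions it depends on."""
    if memo is None:
        memo = {}
    if instr in memo:
        return memo[instr]
    # successors of instr are the instructions that list instr as a dependence
    succs = [d for d, srcs in deps.items() if instr in srcs]
    memo[instr] = latency[instr] + max(
        (critical_path(s, deps, latency, memo) for s in succs), default=0)
    return memo[instr]

def list_schedule(instrs, deps, latency, issue_width=2):
    """Greedy cycle-by-cycle scheduling; returns a map instr -> issue cycle."""
    done_at = {}    # instr -> cycle at which its result is available
    schedule = {}   # instr -> cycle at which it issues
    remaining = set(instrs)
    memo = {}
    cycle = 0
    while remaining:
        # an instruction is ready when all its predecessors have completed
        ready = [i for i in remaining
                 if all(done_at.get(p, cycle + 1) <= cycle
                        for p in deps.get(i, []))]
        # issue up to issue_width ready instructions, by critical-path priority
        ready.sort(key=lambda i: -critical_path(i, deps, latency, memo))
        for i in ready[:issue_width]:
            schedule[i] = cycle
            done_at[i] = cycle + latency[i]
            remaining.remove(i)
        cycle += 1
    return schedule
```

For example, with instructions a, b, c, d where c depends on a and b, d depends on c, and b has a two-cycle latency, the scheduler issues a and b together in cycle 0, then stalls one cycle waiting for b before issuing c and d. A run-time scheduler in hardware would fill such stall cycles dynamically from a lookahead window, which is the interaction the paper studies.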

Keywords:
Computer science, Instruction scheduling, Compiler, Scheduling (production processes), Parallel computing, Reduced instruction set computing, Compile time, Dynamic priority scheduling, Computer architecture, Instruction set, Two-level scheduling, Operating system, Schedule

Metrics

Cited By: 1
FWCI (Field Weighted Citation Impact): 0.00
Refs: 21
Citation Normalized Percentile: 0.22

Topics

Parallel Computing and Optimization Techniques (Physical Sciences → Computer Science → Hardware and Architecture)
Distributed and Parallel Computing Systems (Physical Sciences → Computer Science → Computer Networks and Communications)
Embedded Systems Design Techniques (Physical Sciences → Computer Science → Hardware and Architecture)

Related Documents

JOURNAL ARTICLE

Run-Time versus Compile-Time Instruction Scheduling in Superscalar (RISC) Processors: Performance and Trade-Off

Allen Leung, Krishna V. Palem, Cristian Ungureanu

Journal: Journal of Parallel and Distributed Computing, Year: 1997, Vol: 45 (1), Pages: 13-28
JOURNAL ARTICLE

Data buffering: run-time versus compile-time support

H. Mulder

Journal: ACM SIGARCH Computer Architecture News, Year: 1989, Vol: 17 (2), Pages: 144-151