JOURNAL ARTICLE

Clairvoyance: look-ahead compile-time scheduling

Abstract

To enhance the performance of memory-bound applications, hardware designs have been developed to hide memory latency, such as the out-of-order (OoO) execution engine, at the price of increased energy consumption. Contemporary processor cores span a wide range of performance and energy-efficiency options: from fast and power-hungry OoO processors to efficient but slower in-order processors. The more memory-bound an application is, the more aggressive the OoO execution engine has to be to hide memory latency. This proposal targets the middle ground, as seen in a simple OoO core, which strikes a good balance between performance and energy efficiency and currently dominates the market for mobile, hand-held devices and high-end embedded systems. We show that these simple, more energy-efficient OoO cores, equipped with the appropriate compile-time support, considerably boost the performance of single-threaded execution and reach new levels of performance for memory-bound applications. Clairvoyance generates code that is able to hide memory latency and better utilize the OoO engine, thus delivering higher performance at lower energy cost. To this end, Clairvoyance overcomes restrictions which rendered conventional compile-time techniques impractical: (i) statically unknown dependencies, (ii) insufficient independent instructions, and (iii) register pressure. Thus, Clairvoyance achieves a geomean execution-time improvement of 7% for memory-bound applications with a conservative approach and 13% with a speculative but safe approach, on top of standard O3 optimizations, while maintaining the high performance of compute-bound applications.

Keywords:
Computer science, Compiler, Memory latency, Parallel computing, Energy efficiency, Energy consumption, Instruction scheduling, Embedded system, Out-of-order execution, Memory controller, Engineering

Metrics

Cited By: 16
FWCI (Field Weighted Citation Impact): 3.21
Refs: 41
Citation Normalized Percentile: 0.92

Topics

Parallel Computing and Optimization Techniques (Physical Sciences → Computer Science → Hardware and Architecture)
Interconnection Networks and Systems (Physical Sciences → Computer Science → Computer Networks and Communications)
Embedded Systems Design Techniques (Physical Sciences → Computer Science → Hardware and Architecture)

Related Documents

JOURNAL ARTICLE

Disk scheduling at compile time

Per Brinch Hansen

Journal: Software Practice and Experience  Year: 1976  Vol: 6 (2)  Pages: 201-205
JOURNAL ARTICLE

Continuous-time look-ahead flexible ramp scheduling in real-time operation

Avishan Bagherinezhad, Roohallah Khatami, Masood Parvania

Journal: International Journal of Electrical Power & Energy Systems  Year: 2020  Vol: 119  Pages: 105895
BOOK-CHAPTER

Resource Modeling and Compile Time Scheduling

Orlando Moreira, Henk Corporaal

Book: Embedded Systems  Year: 2013  Pages: 77-116