T. Simunic, L. Benini, Giovanni De Micheli
The policy optimization problem for dynamic power management has received considerable attention in the recent past. We formulate policy optimization as a constrained optimization problem on continuous-time semi-Markov decision processes (SMDPs). SMDPs generalize the stochastic optimization approach based on discrete-time Markov decision processes (DTMDPs) presented in earlier work by relaxing two limiting assumptions. In an SMDP, decisions are made at each event occurrence instead of at every discrete time interval as in a DTMDP, which saves power and yields higher performance. In addition, SMDPs allow general inter-state transition time distributions, giving greater generality and accuracy when modeling real-life systems in which transition times between power states are not geometrically distributed.
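The event-driven decision principle described above can be illustrated with a minimal sketch. The power values, transition overhead, and threshold policy below are illustrative assumptions, not parameters from the paper; the point is that the controller decides once per event (the start of an idle period) rather than at every clock tick, so idle periods may follow any distribution, including heavy-tailed ones.

```python
import random

# Hypothetical power-state parameters (illustrative only; not from the paper).
P_IDLE = 1.0     # watts consumed while idle but powered on
P_SLEEP = 0.1    # watts consumed in the sleep state
E_TRANS = 0.5    # joules of overhead per sleep/wake transition
# Idle time beyond which entering the sleep state pays off:
T_BREAK_EVEN = E_TRANS / (P_IDLE - P_SLEEP)

def event_driven_energy(idle_times, threshold):
    """Decide at each event (start of an idle period) whether to sleep.

    Unlike a discrete-time controller that re-evaluates every tick, the
    decision is made once per event, so the idle-period distribution is
    unrestricted (here the periods are simply given as data)."""
    energy = 0.0
    for t in idle_times:
        if t > threshold:                    # long idle period: sleep
            energy += P_SLEEP * t + E_TRANS
        else:                                # short idle period: stay idle
            energy += P_IDLE * t
    return energy

random.seed(0)
# Non-geometric inter-event times, e.g. heavy-tailed Pareto idle periods.
idle = [random.paretovariate(1.5) for _ in range(1000)]

e_sleep_always = event_driven_energy(idle, 0.0)
e_never_sleep = event_driven_energy(idle, float("inf"))
e_threshold = event_driven_energy(idle, T_BREAK_EVEN)
# The break-even threshold picks the cheaper option per idle period,
# so it is never worse than either fixed policy.
print(e_threshold <= min(e_sleep_always, e_never_sleep))
```

The break-even threshold policy chooses, per idle period, the pointwise cheaper of sleeping and idling, so its total energy is bounded by both always-sleep and never-sleep; an SMDP-optimal policy would additionally account for performance constraints and transition-time distributions.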