Daniel J. Gauthier, Erik Bollt, Aaron Griffith, Wendson A. S. Barbosa
Abstract

Reservoir computing is a best-in-class machine learning algorithm for processing information generated by dynamical systems using observed time-series data. Importantly, it requires very small training data sets, uses linear optimization, and thus requires minimal computing resources. However, the algorithm uses randomly sampled matrices to define the underlying recurrent neural network and has a multitude of metaparameters that must be optimized. Recent results demonstrate the equivalence of reservoir computing to nonlinear vector autoregression, which requires no random matrices, fewer metaparameters, and provides interpretable results. Here, we demonstrate that nonlinear vector autoregression excels at reservoir computing benchmark tasks and requires even shorter training data sets and training time, heralding the next generation of reservoir computing.
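The nonlinear vector autoregression described in the abstract can be sketched in a few lines: the feature vector is built from time-delayed copies of the observed data plus nonlinear (here quadratic) combinations of them, and the only trained part is a linear readout fit by ridge regression. All specifics below (two delay taps, the ridge strength, and the logistic-map demo series) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Demo data: a scalar time series from a simple nonlinear map
# (an assumed stand-in for the paper's benchmark systems).
T = 400
x = np.empty(T)
x[0] = 0.3
for t in range(T - 1):
    x[t + 1] = 3.7 * x[t] * (1.0 - x[t])  # logistic map

k = 2  # number of delay taps (illustrative choice)

def features(x, t):
    """Constant term, linear delay taps, and all quadratic monomials of them."""
    lin = x[t - k + 1 : t + 1]                    # [x_{t-1}, x_t] for k = 2
    quad = np.outer(lin, lin)[np.triu_indices(k)]  # x_{t-1}^2, x_{t-1}x_t, x_t^2
    return np.concatenate(([1.0], lin, quad))

# Assemble the design matrix and the one-step-ahead targets.
Phi = np.array([features(x, t) for t in range(k - 1, T - 1)])
y = x[k:]

# Linear readout via ridge regression -- the only training step in NVAR,
# hence no random matrices and minimal computing resources.
lam = 1e-6
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

# One-step prediction error on the training data.
err = np.max(np.abs(Phi @ W - y))
print(f"max one-step error: {err:.2e}")
```

Because the logistic map's update rule is itself quadratic in the current state, it lies in the span of these features and the fit is essentially exact; the learned weights in `W` are directly interpretable as coefficients on named monomials, which is the interpretability advantage the abstract refers to.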