How to predict chaotic systems from data: an accessible introduction
This chapter explains, in plain terms, how to approach the prediction of chaotic systems from data. The authors first review the basic ideas of dynamical systems and chaos theory. They then describe machine‑learning methods for time series forecasting — notably echo state networks (ESNs) and long short‑term memory networks (LSTMs) — but always from the viewpoint of dynamical systems. The chapter also includes simple examples with classic chaotic models (for instance the Lorenz system) and points to online coding tutorials at https://github.com/MagriLab/Tutorials.
The text starts by defining the kind of systems under study: deterministic systems whose state evolves in time according to a fixed rule. Chaotic behaviour means extreme sensitivity to tiny changes in the initial condition: two nearly identical starting points diverge exponentially fast. The chapter introduces attractors (the set of states the system settles onto after transients die out) and the assumption of ergodicity, under which long-time averages along a trajectory equal averages over the attractor.
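The Lorenz system used later in the chapter makes these ideas concrete. The sketch below, which is illustrative rather than the chapter's own code, integrates the Lorenz equations with a hand-rolled fourth-order Runge-Kutta scheme (standard parameters sigma = 10, rho = 28, beta = 8/3; the initial conditions, step size, and averaging window are arbitrary choices) and checks ergodicity in practice: long-time averages of the z component agree regardless of where the trajectory starts, once transients are discarded.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz '63 right-hand side with the standard chaotic parameters."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trajectory(s0, n_steps, dt=0.01):
    """Integrate the Lorenz system for n_steps starting from s0."""
    out = np.empty((n_steps + 1, 3))
    out[0] = s0
    for i in range(n_steps):
        out[i + 1] = rk4_step(lorenz, out[i], dt)
    return out

# Two unrelated initial conditions; discard an initial transient so that
# both trajectories have settled onto the attractor before averaging.
n_transient, n_sample = 2000, 30000
traj_a = trajectory(np.array([1.0, 1.0, 1.0]), n_transient + n_sample)
traj_b = trajectory(np.array([-5.0, 7.0, 30.0]), n_transient + n_sample)

# Ergodicity in practice: time averages of z from the two runs nearly coincide.
mean_a = traj_a[n_transient:, 2].mean()
mean_b = traj_b[n_transient:, 2].mean()
```

The two averages agree to within the slow statistical convergence of the time average, even though the individual trajectories are completely different, which is exactly what the ergodicity assumption asserts.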
To quantify predictability, the authors explain linearization around a trajectory. The key mathematical objects are the Jacobian (which linearizes the dynamics near a point) and the tangent propagator (which describes how infinitesimal perturbations grow in time). From these follows the largest, or dominant, Lyapunov exponent: a number that measures the average exponential growth rate of very small errors. The chapter gives a practical recipe to estimate that exponent: run a simulation until the trajectory has converged onto the attractor, add a tiny perturbation (they suggest norms in the range 10^−9 to 10^−3), evolve both trajectories, identify the time window where the logarithm of their separation grows linearly, and take the slope of that line as the exponent.
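The recipe above can be sketched for the Lorenz system. The code below follows the four steps directly; the perturbation norm of 10^−9 sits inside the suggested range, while the step size, the perturbation direction, and the fitting window are illustrative choices of this sketch, not prescriptions from the chapter.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz '63 right-hand side with the standard chaotic parameters."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01

# Step 1: integrate past the transient so the state lies on the attractor.
s = np.array([1.0, 1.0, 1.0])
for _ in range(2000):
    s = rk4_step(lorenz, s, dt)

# Step 2: add a tiny perturbation (norm within the suggested 1e-9..1e-3 range).
eps = 1e-9
s_pert = s + eps * np.array([1.0, 0.0, 0.0])

# Step 3: evolve both trajectories, recording the log of their separation.
n_steps = 2000
log_sep = np.empty(n_steps)
for i in range(n_steps):
    s = rk4_step(lorenz, s, dt)
    s_pert = rk4_step(lorenz, s_pert, dt)
    log_sep[i] = np.log(np.linalg.norm(s_pert - s))
times = dt * np.arange(1, n_steps + 1)

# Step 4: fit a straight line in the window of clean exponential growth
# (after the perturbation aligns with the dominant direction, before the
# separation saturates at the size of the attractor).
window = (times > 2.0) & (times < 18.0)
lam_max = np.polyfit(times[window], log_sep[window], 1)[0]
```

For the Lorenz system with these parameters, the accepted value of the dominant Lyapunov exponent is about 0.9, and a single finite-time fit like this one typically lands near it, with some scatter depending on the trajectory segment and fitting window.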
On the data‑driven side, the chapter treats recurrent neural networks commonly used to forecast time series. Echo state networks (ESNs) are discussed together with their dynamical interpretation, architecture, and training; long short‑term memory (LSTM) networks are also covered. The authors address several practical topics: closed‑loop prediction (where the model feeds its own outputs back as inputs), validation metrics and strategies, variants that add physical knowledge (physics‑informed ESN and PI‑LSTM), and how to compute Jacobians of these networks. Pedagogical examples, such as the Lorenz system, make the ideas concrete.
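A minimal ESN with closed-loop prediction can be sketched in a few dozen lines. This is not the chapter's implementation: the reservoir size, spectral radius, input scaling, ridge parameter, and the toy sine-wave signal are all assumptions made here for illustration. The structure, however, is the standard one: a fixed random reservoir driven open-loop by the data (teacher forcing), a linear readout trained by ridge regression, and then a closed loop in which the network's own output becomes its next input.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy signal standing in for a generic time series (a chaotic one in the chapter).
dt = 0.1
signal = np.sin(dt * np.arange(1000))
n_train = 800          # teacher-forced training steps
n_predict = 100        # closed-loop prediction horizon
washout = 100          # initial reservoir states discarded before regression

# Fixed random reservoir, rescaled to spectral radius 0.9 (an assumed value).
n_res = 200
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

# Open-loop (teacher-forced) run: drive the reservoir with the true data.
x = np.zeros(n_res)
states = np.empty((n_train - 1, n_res))
for t in range(n_train - 1):
    x = np.tanh(W @ x + W_in * signal[t])
    states[t] = x
targets = signal[1:n_train]

# Train the linear readout by ridge regression on post-washout states.
X, Y = states[washout:], targets[washout:]
ridge = 1e-8
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

# Closed loop: feed the network's own prediction back as the next input.
u = signal[n_train - 1]
preds = np.empty(n_predict)
for t in range(n_predict):
    x = np.tanh(W @ x + W_in * u)
    u = W_out @ x
    preds[t] = u

truth = signal[n_train:n_train + n_predict]
```

Only the readout W_out is trained; the reservoir weights W and input weights W_in stay fixed, which is what makes ESN training a cheap linear least-squares problem rather than backpropagation through time.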