How to recover a low‑dimensional neural model from one noisy signal
Researchers show a way to estimate the unknown parameters of a low-dimensional mean-field model and to reconstruct its unobserved variables using only a single measured macroscopic signal from a finite neural network. The approach assumes the form of the mean-field equations is known, but not their parameters. Given a time series of one observable, for example the average membrane potential or firing rate, the authors fit the model so that it reproduces that signal and then read out the hidden macroscopic variables the fitted model contains.
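To make "low-dimensional mean-field model" concrete, here is a minimal Python sketch of the well-known two-variable firing-rate and mean-voltage equations for QIF networks (the Montbrió-Pazó-Roxin form). The parameter values, function names, and the simple Euler integrator are illustrative choices, not the paper's setup; the paper's examples additionally include synaptic kinetics or adaptation.

```python
import numpy as np

def mpr_rhs(r, v, eta_bar, J, delta):
    """Right-hand side of the two-variable QIF mean-field model:
    r is the population firing rate, v the mean membrane potential."""
    dr = delta / np.pi + 2.0 * r * v
    dv = v**2 + eta_bar + J * r - (np.pi * r) ** 2
    return dr, dv

def simulate(eta_bar=-5.0, J=15.0, delta=1.0, dt=1e-3, steps=20000):
    """Forward-Euler integration; returns the observable r(t)
    and the hidden variable v(t)."""
    r, v = 0.1, -2.0
    rs, vs = np.empty(steps), np.empty(steps)
    for k in range(steps):
        dr, dv = mpr_rhs(r, v, eta_bar, J, delta)
        r, v = r + dt * dr, v + dt * dv
        rs[k], vs[k] = r, v
    return rs, vs

rs, vs = simulate()
```

In an inference setting only one of the two trajectories (say r) would be measured; the other is exactly the kind of hidden macroscopic variable the fitted model lets you read out.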
To find the parameters, the authors use a derivative-free global optimizer called Differential Evolution: a population-based algorithm that evolves many candidate parameter vectors through mutation, recombination, and selection, keeping the candidates that fit the data best. It is robust to moderate noise and never requires derivatives of the model. The authors combine this optimizer with a synchronization scheme so that the model does not need exact initial values for the unobserved variables.
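A minimal sketch of this kind of derivative-free fit, using SciPy's `differential_evolution` on a toy problem: a damped exponential stands in for the single macroscopic observable, and two of its parameters are recovered from a noisy recording. The stand-in model, noise level, and bounds are all illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)

# "Measured" signal: a damped exponential plus additive noise,
# playing the role of the one observed macroscopic time series.
a_true, b_true = 2.0, 0.7
y_obs = a_true * np.exp(-b_true * t) + 0.02 * rng.standard_normal(t.size)

def loss(params):
    """Mean squared mismatch between the model output and the measurement.
    No derivatives of this function are ever computed."""
    a, b = params
    return np.mean((a * np.exp(-b * t) - y_obs) ** 2)

result = differential_evolution(loss, bounds=[(0.0, 10.0), (0.0, 5.0)],
                                seed=1, tol=1e-8)
a_hat, b_hat = result.x
```

The optimizer only ever evaluates `loss`, which is why the same loop works unchanged when the model is a simulated mean-field system rather than a closed-form curve.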
Synchronization is the key technical idea, and it addresses the problem of unknown initial conditions. The mean-field model is coupled to the measured output of the finite network in a master-slave arrangement, so the model trajectory is driven toward the observed signal. After a short transient the driven model "forgets" its initial state and follows the measured component, which lets the optimization focus on parameters rather than on fitting unknown starting values. The paper discusses two coupling styles, called noninvasive and invasive, and evaluates the loss only after the transient so that transient mismatches do not distort the fit.
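The master-slave idea can be sketched as follows: a "master" simulation with known parameters produces the observed rate r(t), and a "slave" copy of the model, started from deliberately wrong initial conditions, is driven toward it by an invasive-style coupling term K*(r_obs - r) added to the rate equation, with the loss accumulated only after a transient. The model form, the gain K, the trial parameter values, and the transient length are all illustrative assumptions.

```python
import numpy as np

def rhs(r, v, eta, J):
    """Two-variable QIF-type mean-field equations (rate r, mean voltage v)."""
    dr = 1.0 / np.pi + 2.0 * r * v
    dv = v**2 + eta + J * r - (np.pi * r) ** 2
    return dr, dv

def run_master(eta, J, dt, steps):
    """Stand-in for the measured network: record the observable r(t)."""
    r, v = 0.5, -1.0
    out_r = np.empty(steps)
    for k in range(steps):
        dr, dv = rhs(r, v, eta, J)
        r, v = r + dt * dr, v + dt * dv
        out_r[k] = r
    return out_r

def driven_loss(params, r_obs, dt, K=20.0, transient=10000):
    """Slave model driven toward the measured r(t); the loss skips the
    transient, so the (wrong) initial conditions do not matter."""
    eta, J = params
    r, v = 0.1, 0.0                      # deliberately wrong initial state
    err, count = 0.0, 0
    for k in range(r_obs.size):
        dr, dv = rhs(r, v, eta, J)
        dr += K * (r_obs[k] - r)         # invasive-style coupling term
        r, v = r + dt * dr, v + dt * dv
        if k >= transient:
            err += (r - r_obs[k]) ** 2
            count += 1
    return err / count

dt, steps = 1e-3, 20000
r_obs = run_master(eta=1.0, J=6.0, dt=dt, steps=steps)
loss_true = driven_loss((1.0, 6.0), r_obs, dt)    # correct parameters
loss_wrong = driven_loss((-2.0, 3.0), r_obs, dt)  # wrong parameters
```

With the correct parameters the driven slave synchronizes and the post-transient loss is tiny; with wrong parameters a residual mismatch survives the coupling, which is what gives the optimizer a usable objective.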
The method is tested on two example networks made of quadratic integrate-and-fire (QIF) neurons. One network is inhibitory with synaptic kinetics and shows periodic collective oscillations. The other is excitatory with spike-frequency adaptation and shows chaotic collective dynamics. In both examples the authors report that parameters were recovered with relative errors below 1% when the simulated network had more than about 1,000 neurons. They also reconstruct the hidden macroscopic variables and study how finite-size fluctuations (the noise that arises because the number of neurons is finite) affect inference accuracy as the network size changes.
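The reason network size matters can be seen in a toy calculation: a macroscopic signal averaged over N independent sources has fluctuations that shrink like 1/sqrt(N). This is a generic statistical illustration of finite-size noise, not the paper's simulation; the function name and sample counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_signal_std(n_neurons, n_samples=2000):
    """Std of a population-averaged signal built from n_neurons independent
    unit-variance sources: a toy stand-in for finite-size fluctuations."""
    samples = rng.standard_normal((n_samples, n_neurons)).mean(axis=1)
    return samples.std()

sizes = [100, 1000, 10000]
stds = [population_signal_std(n) for n in sizes]
```

Each tenfold increase in N shrinks the fluctuations by about sqrt(10), which is consistent with inference becoming accurate once the network is large enough for the mean-field description to hold.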