Graph neural networks reconstruct global entanglement from local measurement records in monitored quantum circuits
Researchers show that a type of machine learning called graph neural networks (GNNs) can infer a global measure of quantum entanglement from only local projective measurement records in monitored quantum circuits. The task is to predict the half-chain von Neumann entanglement entropy — a standard way to quantify how strongly the left and right halves of a chain are quantum-mechanically correlated — using the classical outcomes of measurements performed at specific qubits and times. The main finding is that prediction quality improves as the network can integrate information over a larger spacetime region, and that architecturally different networks collapse onto the same performance once compared at a single effective spacetime scale.
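For reference, the quantity being predicted can be written out explicitly. The following is a standard textbook definition consistent with the description above, not a formula quoted from the paper:

```latex
S_{N/2} = -\operatorname{Tr}\!\left(\rho_{L} \log \rho_{L}\right),
\qquad
\rho_{L} = \operatorname{Tr}_{R} |\psi\rangle\langle\psi|,
```

where $\rho_{L}$ is the reduced density matrix of the left half of the chain (the right half $R$ traced out), and the learning target is the normalized value $S_{N/2}/(N/2)$.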
The authors test this idea on one-dimensional monitored random quantum circuits. These circuits evolve a chain of N qubits for t_max = 30 layers of two-qubit random gates arranged in a brick-wall pattern. After each layer, every qubit is measured in the computational basis with probability p. Each simulated run produces a classical spacetime record of where and when measurements happened and what their outcomes were. The learning target is the final half-chain von Neumann entropy S_{N/2}, normalized by N/2 to remove the leading size dependence. For N up to 16 the dynamics were simulated with exact state vectors; for larger sizes the authors used matrix product states (MPS) with truncation errors kept under control.
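A minimal state-vector sketch of this data-generating process can be written in a few dozen lines. This is an illustrative reconstruction, not the authors' code: the Haar-random gates, seeding, and function names are assumptions, and it only covers the small-N exact regime (not the MPS simulations).

```python
import numpy as np

def haar_unitary(dim, rng):
    # Haar-random unitary via QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so the measure is Haar

def apply_gate(psi, u, i, n):
    # apply a two-qubit gate u on neighboring qubits (i, i+1)
    psi = np.moveaxis(psi.reshape([2] * n), [i, i + 1], [0, 1])
    shape = psi.shape
    psi = (u @ psi.reshape(4, -1)).reshape(shape)
    return np.moveaxis(psi, [0, 1], [i, i + 1]).reshape(-1)

def measure(psi, i, n, rng):
    # projective computational-basis measurement of qubit i
    psi = np.moveaxis(psi.reshape([2] * n), i, 0)
    p0 = float(np.sum(np.abs(psi[0]) ** 2))
    outcome = 0 if rng.random() < p0 else 1
    collapsed = psi.copy()
    collapsed[1 - outcome] = 0.0
    collapsed /= np.sqrt(p0 if outcome == 0 else 1.0 - p0)
    return np.moveaxis(collapsed, 0, i).reshape(-1), outcome

def run_circuit(n, t_max, p, seed=0):
    # brick-wall random circuit with rate-p measurements after each layer;
    # returns the final state and the classical spacetime record
    rng = np.random.default_rng(seed)
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    record = []  # entries (qubit, layer, outcome)
    for t in range(t_max):
        for i in range(t % 2, n - 1, 2):  # alternating brick-wall offset
            psi = apply_gate(psi, haar_unitary(4, rng), i, n)
        for i in range(n):
            if rng.random() < p:
                psi, m = measure(psi, i, n, rng)
                record.append((i, t, m))
    return psi, record

def half_chain_entropy(psi, n):
    # von Neumann entropy of the left half via the Schmidt decomposition
    s = np.linalg.svd(psi.reshape(2 ** (n // 2), -1), compute_uv=False)
    lam = s ** 2
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))
```

Each call to `run_circuit` yields one training example: the record of measurement locations and outcomes as input, and `half_chain_entropy(psi, n) / (n // 2)` as the regression target.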
To feed spacetime measurement records to the learner, each run is converted into a directed spacetime graph whose nodes are qubit–time events and whose edges follow causal connections: worldline edges from (i,t) to (i,t+1) and gate-induced diagonal edges between neighboring qubits across time layers. Node features include whether a measurement happened, the measurement outcome, a normalized time coordinate, and simple boundary-aware position flags. No raw qubit indices are included so the same model can be used across different system sizes. The authors compare two graph-based architectures. A single-scale directed GraphSAGE model stacks K message-passing layers so that information propagates only along paths of length up to K, making the reachable spacetime region grow roughly linearly with K. An RG-inspired hierarchical model repeatedly coarse-grains spacetime in 2×2 blocks so the effective receptive field grows much faster with the number of coarse-graining levels.
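The graph encoding described above can be sketched directly. All names and feature orderings here are hypothetical, chosen to match the description rather than the paper's actual implementation:

```python
def build_spacetime_graph(n, t_max, record):
    """Build a directed spacetime graph from a measurement record.

    record: list of (qubit, layer, outcome) events, as produced by a
    single monitored-circuit run.  Returns node list, per-node feature
    vectors, and directed edges as (source_index, target_index) pairs.
    """
    measured = {(i, t): m for (i, t, m) in record}
    nodes, feats = [], []
    for t in range(t_max):
        for i in range(n):
            nodes.append((i, t))
            was_measured = (i, t) in measured
            outcome = measured.get((i, t), 0)
            feats.append([
                1.0 if was_measured else 0.0,          # measurement flag
                (1.0 - 2.0 * outcome) if was_measured  # outcome as +/-1,
                else 0.0,                              # 0 if unmeasured
                t / (t_max - 1),                       # normalized time
                1.0 if i == 0 else 0.0,                # left-boundary flag
                1.0 if i == n - 1 else 0.0,            # right-boundary flag
            ])  # note: no raw qubit index, so the model is size-agnostic
    index = {v: k for k, v in enumerate(nodes)}
    edges = []
    for t in range(t_max - 1):
        for i in range(n):
            edges.append((index[(i, t)], index[(i, t + 1)]))  # worldline
        # diagonal edges for the brick-wall gates between layers t and t+1
        for i in range((t + 1) % 2, n - 1, 2):
            edges.append((index[(i, t)], index[(i + 1, t + 1)]))
            edges.append((index[(i + 1, t)], index[(i, t + 1)]))
    return nodes, feats, edges
```

On such a graph, K rounds of GraphSAGE message passing let each node see only nodes within K directed hops, so the receptive field grows linearly in K; each 2×2 coarse-graining level of the hierarchical model instead doubles the effective scale, growing it like 2^L in the number of levels L.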