Neural-network models struggle to compress nuclear ground states with high “quantum magic”
What the paper is about: The authors test how well a simple neural-network wavefunction can represent the ground states of medium-mass atomic nuclei. These nuclear states are both strongly entangled and carry a property called non‑stabilizerness (also called “quantum magic”), which quantifies how far a state is from the stabilizer states, a class of states that can be simulated efficiently on classical computers. The study asks whether non‑stabilizerness makes nuclear states harder for neural networks to learn and compress.
What the researchers did: They adapted a second‑quantized neural quantum state (NQS) method to nuclear problems and used a restricted Boltzmann machine (RBM) as the neural ansatz. In this setup each single-particle orbital becomes a visible node whose value encodes occupation (filled or empty). They worked in the sd shell, an active space containing 12 proton and 12 neutron orbitals, and compared RBM results to exact diagonalization within that active space, i.e., the interacting shell model (ISM). The RBM parameters were optimized either by maximizing the overlap (fidelity) with the exact ISM ground state or by minimizing the RBM energy using variational Monte Carlo and a standard optimizer.
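The second-quantized RBM ansatz described above can be sketched in a few lines: each occupation bitstring gets an amplitude obtained by summing out the hidden units analytically. This is a minimal illustration with real-valued parameters on a hypothetical four-orbital toy space; the actual study uses much larger sd-shell spaces, and NQS calculations typically use complex parameters and Monte Carlo sampling rather than full enumeration. The function names are ours, not the paper's.

```python
import numpy as np
from itertools import product

def rbm_amplitude(v, a, b, W):
    """Unnormalized RBM amplitude for an occupation bitstring v:
    psi(v) = exp(a . v) * prod_j 2*cosh(b_j + (W v)_j),
    where the hidden-unit sum has been carried out analytically."""
    theta = b + W @ v
    return np.exp(a @ v) * np.prod(2.0 * np.cosh(theta))

def rbm_state(a, b, W, n_vis):
    """Normalized state vector over all 2**n_vis occupation configurations
    (feasible only for toy spaces; realistic spaces must be sampled)."""
    psi = np.array([rbm_amplitude(np.array(v, dtype=float), a, b, W)
                    for v in product([0, 1], repeat=n_vis)])
    return psi / np.linalg.norm(psi)

rng = np.random.default_rng(0)
n_vis, alpha = 4, 2                 # 4 orbitals, hidden-unit density alpha = 2
n_hid = alpha * n_vis
a = 0.1 * rng.standard_normal(n_vis)
b = 0.1 * rng.standard_normal(n_hid)
W = 0.1 * rng.standard_normal((n_hid, n_vis))
psi = rbm_state(a, b, W, n_vis)
# Fidelity with an exact target vector psi_exact would be |psi_exact . psi|**2,
# which is the overlap quantity maximized in fidelity-based training.
```

Increasing `alpha` adds rows to `W` and entries to `b`, which is the knob the study turns to make the network more expressive.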
How it works at a high level: The RBM ties the visible occupation numbers to a layer of hidden units. The number of hidden units relative to visible nodes (the hidden‑unit density α) controls the network’s expressivity. The authors measured non‑stabilizerness with a stabilizer Rényi entropy, denoted M2, which grows when a state is farther from stabilizer states. They then tracked how RBM accuracy, measured by fidelity with the exact ISM ground state, depended on M2 and on the network size.
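For small systems, a stabilizer 2‑Rényi entropy of the kind denoted M2 can be evaluated exactly by summing fourth powers of Pauli expectation values over all Pauli strings. A minimal sketch (the function name `stabilizer_renyi_m2` is ours; realistic sd-shell states require sampling-based estimators instead of this brute-force sum):

```python
import numpy as np
from itertools import product

# Single-qubit Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def stabilizer_renyi_m2(psi):
    """M2 = -log2( (1/d) * sum_P <psi|P|psi>^4 ) over all n-qubit Pauli
    strings P, with d = 2**n. M2 = 0 for stabilizer states and grows
    with the state's non-stabilizerness ('magic')."""
    n = int(np.log2(len(psi)))
    d = 2 ** n
    total = 0.0
    for paulis in product([I, X, Y, Z], repeat=n):
        P = paulis[0]
        for p in paulis[1:]:
            P = np.kron(P, p)
        exp_val = np.real(np.conj(psi) @ P @ psi)
        total += exp_val ** 4
    return -np.log2(total / d)

zero = np.array([1, 0], dtype=complex)                         # stabilizer state
t_plus = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)    # T|+>, a magic state
# stabilizer_renyi_m2(zero)   -> 0.0
# stabilizer_renyi_m2(t_plus) -> log2(4/3), about 0.415
```

The brute-force sum runs over 4**n Pauli strings, which is why exact evaluation is limited to small qubit counts.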
What they found and why it matters: For a fixed number of many-body configurations, RBMs systematically reproduce states with low non‑stabilizerness more accurately than states with high non‑stabilizerness. The trend persists as the network size (α) grows: larger networks improve fidelity overall, but states with larger M2 remain harder to learn. As a concrete example, 24Mg — with 28,503 many‑body configurations in the chosen symmetry sector — showed one of the highest M2 values and the lowest RBM fidelity among the tested nuclei. The result suggests that non‑stabilizerness is a key factor limiting the compression and representational efficiency of RBM-type neural quantum states in entangled regimes.