May’s Complexity-Stability Hypothesis in Neural Networks
Published in Work in Progress, 2026
In this project I study May’s complexity-stability hypothesis, the prediction from random matrix theory that sufficiently complex systems are generically unstable, in the context of neural networks. Real ecosystems evade this bound through structural mechanisms shaped by natural selection: heavy-tailed interaction-strength distributions, negative pairwise correlations, and non-random topology. The central question is whether neural networks, whether trained by gradient descent or by explicit selection pressure, develop analogous structure, and which properties of the optimization process (the objective function, the selection intensity, the training dynamics) determine whether a system can escape the random-matrix regime.
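May's criterion itself is easy to demonstrate numerically: for a random community matrix of size S with connectance C, off-diagonal interaction strengths drawn from N(0, σ²), and self-regulation −d on the diagonal, the system is stable (all eigenvalues have negative real part) roughly when σ√(SC) < d. A minimal sketch (parameter values are illustrative, not from the project):

```python
import numpy as np

rng = np.random.default_rng(0)

def leading_real_part(S, C, sigma, d=1.0):
    """Largest real part among eigenvalues of a random community matrix.

    Off-diagonal entries are nonzero with probability C (connectance) and
    drawn from N(0, sigma^2); the diagonal is -d (self-regulation).
    Linear stability requires this value to be negative.
    """
    A = rng.normal(0.0, sigma, (S, S)) * (rng.random((S, S)) < C)
    np.fill_diagonal(A, -d)
    return np.linalg.eigvals(A).real.max()

# May's criterion: stable (generically) iff sigma * sqrt(S * C) < d.
# Holding sigma and C fixed while S grows pushes the spectrum's
# rightmost edge past zero, producing the generic instability.
for S in (50, 200, 800):
    sigma, C = 0.1, 0.2
    complexity = sigma * np.sqrt(S * C)
    lam = leading_real_part(S, C, sigma)
    print(f"S={S:4d}  sigma*sqrt(SC)={complexity:.2f}  max Re(lambda)={lam:+.2f}")
```

By the circular law, the eigenvalues fill a disk of radius σ√(SC) centered at −d, so the rightmost edge sits near σ√(SC) − d; the structural mechanisms listed above (correlations, heavy tails, topology) matter precisely because they deform this disk.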
