Three dynamic regimes for learning in a two-layer neural network with m hidden neurons (labeled MF, Feat. learning, Feat. learning and Overfit/Unlearn in the figure). © Andrea Montanari, Pierfrancesco Urbani

A scenario for learning in overparametrized neural networks

Scientific news

Accepted as an oral contribution to the NeurIPS 2025 conference, this study reveals a novel scenario in which, in massively overparametrized neural networks, the learning of relevant features coexists with overfitting, but the two occur at distinct moments during training, thanks to a separation of time scales.
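The sketch below is a minimal, hypothetical illustration of the quantity at stake, not the authors' code or setting: it trains a wide two-layer network on a noisy toy teacher and monitors training and test error over long times, where one would look for an early drop in test error (feature learning) followed, much later, by a slow rise as the label noise is fitted (overfitting). The teacher, the network sizes, and all hyperparameters are illustrative assumptions, and at these toy sizes the separation of time scales is far less sharp than in the large-network regime analysed in the paper.

```python
# Minimal, hypothetical sketch (not the authors' code): train a wide two-layer
# network on a noisy toy teacher and track train/test error over long times.
# All sizes, the teacher, and the hyperparameters are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

d, m, n, n_test = 30, 500, 200, 2000     # input dim, hidden width, sample sizes
sigma_noise = 0.3                        # label noise that can later be overfit

# Toy teacher: y = relu(<w*, x>) + noise, with a single relevant direction w*
w_star = rng.standard_normal(d) / np.sqrt(d)
def teacher(X, noisy=True):
    y = np.maximum(X @ w_star, 0.0)
    if noisy:
        y = y + sigma_noise * rng.standard_normal(len(y))
    return y

X_train, X_test = rng.standard_normal((n, d)), rng.standard_normal((n_test, d))
y_train, y_test = teacher(X_train), teacher(X_test, noisy=False)

# Student in a mean-field-style parametrization: f(x) = (1/m) sum_j a_j relu(<w_j, x>)
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.standard_normal(m)

def forward(X):
    H = np.maximum(X @ W.T, 0.0)         # hidden activations, shape (batch, m)
    return H @ a / m, H

lr, steps = 0.1, 20_000
for t in range(steps + 1):
    pred, H = forward(X_train)
    err = pred - y_train
    if t % 2_000 == 0:
        test_pred, _ = forward(X_test)
        print(f"t={t:6d}  train={np.mean(err**2):.4f}  "
              f"test={np.mean((test_pred - y_test)**2):.4f}")
    # Gradients of 0.5 * mean squared error; the 1/m from the output scaling is
    # compensated by multiplying the step by m (the usual mean-field time scaling).
    grad_a = H.T @ err / (n * m)
    grad_W = ((np.outer(err, a / m) * (H > 0)).T @ X_train) / n
    a -= lr * m * grad_a
    W -= lr * m * grad_W
```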

The present study was carried out in the following CNRS laboratory:

  • Institut de physique théorique (IPhT, CEA/CNRS)

References:

Dynamical Decoupling of Generalization and Overfitting in Large Two-Layer Networks, Andrea Montanari and Pierfrancesco Urbani, The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025).
Open archives: OpenReview, arXiv

Contact

Pierfrancesco Urbani
CNRS researcher at the Institut de physique théorique (IPhT)
CNRS Physique communication office