
Early Stopping Strategy using Neural Tangent Kernel Theory and Rademacher Complexity

Published in - 2025 American Control Conference (ACC 2025)

Authors: Daniel Martin Xavier, Ludovic Chamoin, Laurent Fribourg

The early stopping strategy consists of halting the training process of a neural network (NN) on a set S of input data before the training error reaches its minimum. The advantage is that the NN then retains good generalization properties, i.e., it gives good predictions on data outside S, and a good estimate of the statistical error ("population loss") is obtained. Using the theories of Rademacher complexity and the neural tangent kernel, we present two stopping strategies that minimize upper bounds on the population loss. These methods are well suited to the underparameterized regime (where the number of parameters is moderate compared with the number of data points). They are illustrated on the example of an NN simulating the model predictive control of a Van der Pol oscillator.
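The paper's two NTK-based stopping criteria are not reproduced in this abstract. As a rough sketch of the general idea of bound-minimizing early stopping, the snippet below trains a linear least-squares model by gradient descent and stops at the iteration minimizing a surrogate upper bound on the population loss, built from the classical norm-based Rademacher term for linear predictors (2·L·X_max·||w||/√n). The data, constants, and the bound itself are illustrative stand-ins, not the bounds derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underparameterized setting: far more samples (n) than parameters (d).
n, d = 500, 20
X = rng.normal(size=(n, d)) / np.sqrt(d)     # rows have norm ~ 1
w_true = rng.normal(size=d)
noise_std = 0.5
y = X @ w_true + noise_std * rng.normal(size=n)

w = np.zeros(d)
lr = 0.5
X_max = np.linalg.norm(X, axis=1).max()      # bound on the input norms

records = []
for t in range(2000):
    residual = X @ w - y
    train_loss = np.mean(residual ** 2)
    # Surrogate generalization bound: empirical loss plus the classical
    # norm-based Rademacher term for linear predictors, 2*L*X_max*||w||/sqrt(n),
    # with a nominal Lipschitz constant L = 1 and the confidence term omitted.
    bound = train_loss + 2.0 * X_max * np.linalg.norm(w) / np.sqrt(n)
    # Exact population loss, computable here only because the data-generating
    # process is known: ||w - w_true||^2 / d + noise variance.
    pop_loss = np.sum((w - w_true) ** 2) / d + noise_std ** 2
    records.append((t, train_loss, bound, pop_loss))
    w -= lr * (2.0 / n) * (X.T @ residual)   # gradient step on the MSE

t_stop = min(records, key=lambda r: r[2])[0]  # iteration minimizing the bound
print(f"bound-minimizing stop at t = {t_stop}; "
      f"population loss there = {records[t_stop][3]:.3f}")
```

Because the stopping rule only uses the training loss and a complexity term, it requires no held-out validation set, which is the practical appeal of bound-based criteria in the data-limited settings the paper targets.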