garden/KGBicheno/Artificial Intelligence/Introduction to AI/Week 3 - Introduction/Definitions/Convergence.md by @KGBicheno
convergence
Go back to the [[AI Glossary]]
Informally, convergence often refers to a state reached during training in which training loss and validation loss change very little or not at all from iteration to iteration after a certain number of iterations. In other words, a model reaches convergence when additional training on the current data will not improve it. In deep learning, loss values sometimes stay constant, or nearly so, for many iterations before finally descending, temporarily producing a false sense of convergence.
See also [[early stopping]].
See also Boyd and Vandenberghe, Convex Optimization.
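The plateau criterion described above can be sketched as a simple check on a loss history. This is an illustrative helper, not a standard library function: the name `has_converged` and the `patience` and `tol` parameters are assumptions chosen for the example.

```python
def has_converged(losses, patience=5, tol=1e-4):
    """Return True when the last `patience` iteration-to-iteration loss
    changes are all smaller than `tol` (i.e. the curve has plateaued)."""
    if len(losses) <= patience:
        return False
    recent = losses[-(patience + 1):]
    deltas = [abs(recent[i] - recent[i + 1]) for i in range(patience)]
    return all(d < tol for d in deltas)

# A loss curve that descends, then flattens out.
history = [1.0, 0.5, 0.3, 0.2, 0.15,
           0.1499, 0.14985, 0.14984, 0.14983, 0.14982, 0.14981]
print(has_converged(history))  # the trailing deltas are all below tol
```

Note that, as the definition warns, a plateau like this can be temporary in deep learning, which is why early stopping implementations typically use a `patience` window rather than halting on the first flat step.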