📚 node [[interpretability]]

interpretability

Go back to the [[AI Glossary]]

The degree to which a model's predictions can be readily explained. Deep models are often hard to interpret; that is, the contributions of a deep model's individual layers and weights can be difficult to decipher. By contrast, linear regression models and wide models are typically far more interpretable, because each learned weight maps directly to a single input feature.
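
A minimal sketch of that contrast, assuming scikit-learn is available: fitting a linear regression and reading its coefficients shows why such a model is considered interpretable. The feature names and data here are hypothetical, purely for illustration.

```python
# Minimal sketch: a linear model's learned coefficients map one-to-one to
# input features, so a prediction can be read off as a weighted sum.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical features: square footage and number of bedrooms of a house.
X = np.array([[1200, 2], [1500, 3], [1800, 3], [2400, 4]], dtype=float)
y = np.array([200_000, 250_000, 290_000, 380_000], dtype=float)

model = LinearRegression().fit(X, y)

# Each coefficient states how much the prediction changes per unit of that
# feature, holding the others fixed; the explanation is the model itself.
for name, coef in zip(["square_feet", "bedrooms"], model.coef_):
    print(f"{name}: {coef:.2f}")
print(f"intercept: {model.intercept_:.2f}")
```

A deep model offers no such direct reading: its prediction passes through many layers of nonlinear transformations, so no single weight can be interpreted in isolation.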

📖 stoas
⥱ context