📕 subnode [[@KGBicheno/deep learning]] in 📚 node [[deep-learning]]

Deep Learning

Go to [[Week 2 - Introduction]] or back to the [[Main AI Page]]

The point: Even with limited examples, neural networks can generalize and successfully deal with unseen examples.

Deep learning stacks layers of algorithms into a neural network, a structure loosely modelled on the brain's neurons and their connections, enabling AI systems to keep learning on the job and improve the quality and accuracy of their results.

Deep learning neural nets have many "hidden layers" of [[Perceptrons]] through which the inputs are passed, with each layer's weights and biases tuneable so the model learns the patterns it is meant to find.
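The layered pass described above can be sketched as a few lines of NumPy. This is a minimal illustration, not a reference implementation: the layer sizes, random weights, and sigmoid activation are all assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    # Squash each value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    # Pass the input through each layer in turn:
    # weighted sum, plus bias, through the activation function.
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x

rng = np.random.default_rng(0)
# A tiny 3 -> 4 -> 2 network; the weights and biases are the tuneable parts
layers = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]
output = forward(np.array([0.5, -1.0, 2.0]), layers)
```

Training would adjust the weights and biases in `layers`; here they are random, so `output` is just two numbers between 0 and 1.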

The basic flowchart of a deep learning model

A more accurate view of a neural net.

Some nets can be smaller, some can be much larger

Neural networks are inspired by their [[Biological comparisons]]: the neurons in our brains.

Artificial neural networks pass numbers as signals: each input is multiplied by a weight on its way into a neuron, the neuron adds its own bias (a single value applied to the weighted sum, the same regardless of which inputs arrive), and the result is passed through an activation function (such as a sigmoid) to become that neuron's output. This process taken in isolation is a perceptron, as described in [[Perceptrons]].

A graphical representation of a sigmoid perceptron

📖 stoas
⥱ context