Since the ImageNet moment in 2012, Deep Learning has grown by leaps and bounds. It has become part and parcel of our daily lives, with algorithms running behind the scenes when we talk to our voice assistants, use our home automation systems, write emails, and so on. In fact, its impact has been so great that you can find books on Amazon titled “Neural Networks for Babies” :D

**How does “Deep” Learning occur?**

At its core, Deep Learning is a miniature version of how the human brain works (ignoring the actual complexity of real brains, which is still very, very difficult to replicate). The computer learns a mapping from inputs to outputs using hundreds or even thousands of neuron connections arranged in deep layers (hence the word ‘deep’ in deep learning). Each of these neuron connections has a weight associated with it.
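To make the idea of weighted neuron connections concrete, here is a minimal sketch of a single artificial neuron in plain Python. The input values, weights, and bias below are made up purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs: one weight per incoming connection
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the result into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs and weights, just to show the mechanics
output = neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.2], bias=0.05)
```

A deep network is essentially many of these units stacked in layers, with each layer’s outputs feeding the next layer’s inputs.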

The weights get optimized in each iteration so that the loss between the predicted and actual outputs is minimized and the model’s predictions become as accurate as possible. Computers are machines, and machines understand only numbers. Hence, at the ground level, all of these weights we are talking about are n-dimensional arrays, or **Tensors.**
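The iterative weight update described above can be sketched with gradient descent on a toy one-weight model. The data, learning rate, and iteration count here are arbitrary choices for illustration, not a real training setup:

```python
# Toy model: y = w * x, trained with gradient descent on squared error.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # true relationship is y = 2 * x

w = 0.0                # initial weight
lr = 0.05              # learning rate

for _ in range(200):
    # Gradient of loss = mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    # Step against the gradient so the loss shrinks each iteration
    w -= lr * grad
```

After enough iterations, `w` converges close to the true value 2.0. Real networks do exactly this, just with millions of weights and gradients computed by backpropagation.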

Since each weight is an n-dimensional array, or tensor, learning and optimizing the weights involves millions of matrix multiplications. Over the last 6–7 years, many DL frameworks have emerged that simplify this task.
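To see why matrix multiplication is the core operation, here is a sketch of a fully connected layer written out by hand: the layer is just the weight matrix multiplied by the input vector, plus a bias. The shapes (3 inputs, 2 outputs) and the numbers are invented for illustration:

```python
def matmul_vec(W, x):
    # Multiply matrix W (rows x cols) by vector x (length == cols)
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W = [[0.2, -0.5, 1.0],   # weights feeding output neuron 1
     [0.7,  0.1, -0.3]]  # weights feeding output neuron 2
b = [0.1, -0.2]          # one bias per output neuron
x = [1.0, 2.0, 3.0]      # input vector

# One layer's forward pass: W @ x + b
y = [wx + bi for wx, bi in zip(matmul_vec(W, x), b)]
```

Frameworks like TensorFlow and PyTorch do the same thing on batched tensors, dispatching these multiplications to highly optimized GPU kernels.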