These five Python notebooks are an illustrated introduction to core PyTorch idioms. Click below to run them on Colab.
Tensor arithmetic: the notation for manipulating n-dimensional arrays of numbers on CPU or GPU.
Autograd: how to get derivatives of any scalar with respect to any tensor input.
Optimization: ways to update tensor parameters to reduce any computed objective, using autograd gradients.
Network modules: how PyTorch represents neural networks for convenient composition, training, and saving.
Datasets and Dataloaders: for efficient multithreaded prefetching of large streams of data.
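As a taste of the first topic, here is a minimal sketch (not taken from the notebooks) of tensor arithmetic: elementwise operations, broadcasting, matrix multiplication, and moving the same computation to GPU when one is available.

```python
import torch

# Elementwise arithmetic on n-dimensional tensors.
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.tensor([10.0, 100.0])   # shape (2,) broadcasts across rows of a
c = a * b + 1                     # elementwise multiply, then add a scalar
m = a @ a                         # matrix multiplication

# The same expressions run unchanged on GPU after moving the data.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
c_on_device = c.to(device)

print(c)   # tensor([[ 11., 201.], [ 31., 401.]])
```

Broadcasting follows the same rules as numpy: trailing dimensions are matched, and size-1 or missing dimensions are stretched to fit.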
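The autograd idea can be sketched in a few lines: mark a tensor with `requires_grad=True`, compute a scalar from it, and call `backward()` to populate the gradient.

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # a scalar: 2^2 + 3^2 = 13
y.backward()         # fills x.grad with dy/dx = 2x

print(x.grad)        # tensor([4., 6.])
```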
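Optimization builds directly on autograd: an optimizer such as `torch.optim.SGD` uses the gradients to update parameters. A toy sketch (illustrative, not from the notebooks) minimizing f(x) = (x - 3)^2:

```python
import torch

x = torch.tensor([0.0], requires_grad=True)
opt = torch.optim.SGD([x], lr=0.1)

for _ in range(100):
    opt.zero_grad()              # clear gradients from the previous step
    loss = ((x - 3) ** 2).sum()  # recompute the objective
    loss.backward()              # compute d(loss)/dx
    opt.step()                   # gradient descent update

print(x.item())                  # converges toward 3.0
```

The same three-call loop (`zero_grad`, `backward`, `step`) works unchanged for any objective and any optimizer in `torch.optim`.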
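A network module bundles parameters and a forward computation in one object. A minimal two-layer example (names like `TwoLayer`, `fc1`, `fc2` are illustrative choices, not from the notebooks):

```python
import torch

class TwoLayer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(4, 8)   # parameters registered automatically
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

net = TwoLayer()
out = net(torch.randn(5, 4))   # a batch of 5 four-dimensional inputs
print(out.shape)               # torch.Size([5, 2])

# state_dict() gathers every parameter for saving with torch.save.
params = net.state_dict()
```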
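Finally, a Dataset defines how to fetch one example, and a DataLoader batches and iterates over it. A toy sketch (the dataset here is invented for illustration); in real use, setting `num_workers` on the DataLoader enables parallel prefetching.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """Toy dataset yielding (i, i*i) pairs."""
    def __len__(self):
        return 10
    def __getitem__(self, i):
        return torch.tensor(float(i)), torch.tensor(float(i * i))

loader = DataLoader(SquaresDataset(), batch_size=4, shuffle=False)
batch_sizes = [xb.shape[0] for xb, yb in loader]
print(batch_sizes)   # [4, 4, 2] -- the last batch holds the remainder
```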
PyTorch is a numerical library that makes it convenient to train deep networks on GPU hardware. It introduces a programming vocabulary that goes a few steps beyond ordinary numerical Python code. Although PyTorch code can look simple and concrete, much of the subtlety of what happens is invisible, so when working with PyTorch code it helps to thoroughly understand the runtime model.