Tutorial: The PyTorch Library for Deep Learning

Day II – PyTorch: Dive in and Finer points

When

11 a.m., April 24, 2020

Have you been interested in training and using Deep Neural Networks (DNNs)? In this two-part tutorial, we will present what PyTorch is and how to use it, with a focus on live examples using Jupyter Notebooks. Though not mandatory, attendees are encouraged to bring their laptops. For more advanced topics, we provide an overview and links for attendees to learn more. Zoom: https://arizona.zoom.us/j/92580666693


4) Recurrent Neural Networks (RNNs) (15 min)

- Structure and basics of RNNs

- PyTorch implementation – introduces the various RNN modules (nn.RNN, nn.LSTM, nn.GRU) and their use cases

- Importance of the hidden state and ways to initialize it (manual and automatic)

- Example of building and training an RNN/LSTM (see the sketch after this list)

- Caveats and tips for practical use
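A minimal sketch of the hidden-state discussion above, assuming a toy nn.LSTM with made-up sizes (input_size=10, hidden_size=20), showing both automatic (zero-filled) and manual initialization:

    import torch
    import torch.nn as nn

    # Toy setup: batch of 3 sequences, 5 time steps, 10 input features each.
    lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=1, batch_first=True)
    x = torch.randn(3, 5, 10)

    # Automatic initialization: omit the hidden state and PyTorch zero-fills it.
    out, (h_n, c_n) = lstm(x)

    # Manual initialization: shapes are (num_layers, batch, hidden_size).
    h0 = torch.zeros(1, 3, 20)
    c0 = torch.zeros(1, 3, 20)
    out, (h_n, c_n) = lstm(x, (h0, c0))

    print(out.shape)  # torch.Size([3, 5, 20]) – outputs for every time step
    print(h_n.shape)  # torch.Size([1, 3, 20]) – final hidden state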
 

5) Backpropagation/Automatic Differentiation in PyTorch (15 min)

- What is Autograd? Why is it important for NNs?

- Forward and Reverse mode Autograd

- PyTorch implementation of AD & differences from other implementations

- Examples and use cases 

- Verifying the accuracy and consistency of AD (see the sketch after this list)

- When and how to extend Autograd with custom Autograd Functions
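As a sketch of the last two points, the toy Cube function below (a made-up example, not from the tutorial materials) extends Autograd with a hand-written backward pass and verifies it against finite differences using torch.autograd.gradcheck:

    import torch

    # Custom autograd Function computing y = x**3 with a hand-written gradient.
    class Cube(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return x ** 3

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            return grad_output * 3 * x ** 2  # dy/dx = 3x^2

    # gradcheck compares the analytic backward() against numerical finite
    # differences; double precision is needed for its tight default tolerances.
    x = torch.randn(4, dtype=torch.double, requires_grad=True)
    print(torch.autograd.gradcheck(Cube.apply, (x,)))  # True if gradients match

    Cube.apply(x).sum().backward()
    print(torch.allclose(x.grad, 3 * x.detach() ** 2))  # True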
 

6) Best Practices and Advanced Topics – Overview (15 min)

- Loading a CUDA-trained model on the CPU for inference (this and several items below are sketched in code after the list)

- Using DataLoaders for efficiency: the lag between CPU data-loading time and GPU compute time, and how to choose the right number of workers to load data

- Writing custom Datasets/DataLoaders for your own data

- CPU/GPU device-agnostic code

- torch.nn vs. torch.nn.functional – differences and use cases

- Dynamic graph adjustments 

- Moving tensors between devices

- Parallel and distributed training: useful resources to get started with

- Mixed-precision/half-precision training and its memory savings, plus issues to be aware of

- Developing custom NN architectures
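A minimal custom Dataset and DataLoader sketch, with made-up random tensors standing in for real data; num_workers > 0 loads batches in background processes so CPU load time hides behind GPU compute time:

    import torch
    from torch.utils.data import Dataset, DataLoader

    # A toy map-style Dataset: __len__ and __getitem__ are all that is required.
    class RandomDataset(Dataset):
        def __init__(self, n=1000):
            self.x = torch.randn(n, 10)
            self.y = torch.randint(0, 2, (n,))

        def __len__(self):
            return len(self.x)

        def __getitem__(self, idx):
            return self.x[idx], self.y[idx]

    if __name__ == "__main__":  # guard needed when workers use the spawn start method
        # Tune num_workers to your machine; a common starting point is the
        # number of physical CPU cores feeding each GPU.
        loader = DataLoader(RandomDataset(), batch_size=32, shuffle=True,
                            num_workers=2, pin_memory=True)
        for xb, yb in loader:
            pass  # training step would go here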
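Device-agnostic code, moving tensors between devices, and loading a CUDA-trained checkpoint on a CPU-only machine, sketched with a toy model (the file name model.pt is made up):

    import torch
    import torch.nn as nn

    # Pick the device once; nothing below hardcodes "cuda" or "cpu".
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Linear(10, 2).to(device)  # move the model's parameters
    x = torch.randn(8, 10).to(device)    # move input tensors the same way
    out = model(x)

    # Save on a GPU box, load on a CPU-only box: map_location remaps the
    # CUDA-tagged tensors inside the checkpoint onto the CPU.
    torch.save(model.state_dict(), "model.pt")
    state = torch.load("model.pt", map_location=torch.device("cpu"))
    cpu_model = nn.Linear(10, 2)
    cpu_model.load_state_dict(state)
    cpu_model.eval()  # inference mode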
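The torch.nn vs. torch.nn.functional distinction in brief: the module form is an object that can hold state (parameters) and compose with nn.Sequential, while the functional form is a plain stateless call; a small sketch:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    x = torch.randn(4, 10)

    # Module form: an object, usable inside nn.Sequential; parameterized ops
    # like nn.Linear store their weights this way.
    relu_module = nn.ReLU()
    y1 = relu_module(x)

    # Functional form: a plain call, convenient inside forward() for stateless
    # ops; parameterized ops (e.g., F.linear) make you pass weights yourself.
    y2 = F.relu(x)

    print(torch.equal(y1, y2))  # True – same computation, different packaging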
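A mixed-precision training step, assuming PyTorch 1.6+'s torch.cuda.amp (NVIDIA Apex was the earlier route) and a CUDA GPU, with a toy model and data: autocast runs eligible ops in half precision for memory and speed savings, and GradScaler rescales the loss so fp16 gradients do not underflow – the main issue to be aware of:

    import torch
    import torch.nn as nn

    device = torch.device("cuda")         # autocast below targets CUDA ops
    model = nn.Linear(10, 2).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler()  # guards against fp16 gradient underflow

    x = torch.randn(32, 10, device=device)
    y = torch.randint(0, 2, (32,), device=device)

    opt.zero_grad()
    with torch.cuda.amp.autocast():       # eligible ops run in half precision
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()         # scale the loss before backward
    scaler.step(opt)                      # unscales grads, then optimizer step
    scaler.update()                       # adjusts the scale factor over time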