Deep learning (DL) is currently the most effective way to model unstructured data such as images, videos, and time-series signals. While there are many excellent online resources that teach deep learning at different levels of complexity, it is still of interest to develop and offer a deep learning crash course for the NHLBI DIR community. The motivation behind this course includes:
- Introduce the basics of deep learning for convolutional neural networks, recurrent neural networks, and generative models
- Introduce practical techniques for model training, debugging, testing, and deployment
- Help everyone become comfortable coding up their own models
- Grow interest inside DIR and raise community awareness of applying DL in biomedical R&D
- Provide an opportunity for social gathering by bringing together colleagues who are interested in these topics
- Prepare trainees and fellows for DL-related jobs
Hui Xue | David C. Hansen |
---|---|
hui.xue@nih.gov | davidchansen@gradientsoftware.net |
Hui is an active researcher developing deep learning-based cardiac imaging applications. The AI imaging products he built are deployed globally and used daily. | David is the founder and technical director of Gradient Software. He has developed AI solutions for oncology planning, image denoising and enhancement, and reporting. |
The assignments and lab content are the meat of this course. The goal is to demonstrate practical skills for solving real-world DL problems with neural networks and to teach useful software packages and tools for effective model development.
Each lecture comes with a suggested reading list. Please read it.
There are 7 assignments in this offering.
Assignment | What it covers |
---|---|
Setup and basic NN | Set up the development environment, including the GPU. Code up an MLP and a CNN in NumPy: forward pass, backprop, optimization, training loop, etc. (a NumPy sketch follows this table) |
PyTorch model building | Build the CNN model in PyTorch and implement training and validation (a minimal training-loop sketch appears below) |
CNN for segmentation and detection | Build a full-size U-Net for segmentation and detection; ResNet; add your own loss function |
RNN for trigger detection | Build an RNN model for trigger detection from a time signal; solve the problem with LSTM, GRU, and Transformer |
GAN and fun | Build a generative adversarial network to create new cardiac images; will your images be good enough for your segmentation model (from Assignment 3) to segment them? |
Data and experiment management | Use DVC to manage data and trained models; add wandb for experiment management |
Model deployment | Use Streamlit to deploy your model from Assignment 2; run the model as a web service and demo it to your lab |
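To give a flavor of Assignment 1, here is a minimal NumPy sketch of a one-hidden-layer MLP with a forward pass, a mean-squared-error loss, manual backprop, and a plain gradient-descent update. The data, shapes, and hyperparameters are illustrative placeholders, not the assignment's reference solution.

```python
import numpy as np

# Toy one-hidden-layer MLP: x -> ReLU(x W1 + b1) -> W2 + b2 (illustrative shapes)
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 16))              # batch of 32 samples, 16 features
y = rng.standard_normal((32, 1))               # regression targets
W1, b1 = 0.1 * rng.standard_normal((16, 64)), np.zeros(64)
W2, b2 = 0.1 * rng.standard_normal((64, 1)), np.zeros(1)
lr = 1e-2

for step in range(100):
    # forward pass
    h_pre = x @ W1 + b1
    h = np.maximum(h_pre, 0)                   # ReLU
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)           # MSE loss

    # backprop: chain rule, layer by layer
    d_y_hat = 2.0 * (y_hat - y) / y.size
    dW2 = h.T @ d_y_hat
    db2 = d_y_hat.sum(axis=0)
    d_h_pre = (d_y_hat @ W2.T) * (h_pre > 0)   # ReLU gradient mask
    dW1 = x.T @ d_h_pre
    db1 = d_h_pre.sum(axis=0)

    # plain gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```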
After going through the assignments, you should feel comfortable and confident building, debugging, and deploying your own models! 👏👏👏
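For Assignment 2, the core PyTorch pattern is a model, an optimizer, and a training/validation loop. Below is a minimal sketch; the random tensors, the toy CNN, and the hyperparameters are assumptions made for illustration, not the assignment's actual data or model.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: 256 single-channel 28x28 images with 10 classes
x = torch.randn(256, 1, 28, 28)
y = torch.randint(0, 10, (256,))
train_loader = DataLoader(TensorDataset(x[:200], y[:200]), batch_size=32, shuffle=True)
val_loader = DataLoader(TensorDataset(x[200:], y[200:]), batch_size=32)

# Toy CNN classifier
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    # training loop
    model.train()
    for xb, yb in train_loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()

    # validation loop
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for xb, yb in val_loader:
            xb, yb = xb.to(device), yb.to(device)
            correct += (model(xb).argmax(dim=1) == yb).sum().item()
            total += yb.numel()
    print(f"epoch {epoch}: val accuracy {correct / total:.3f}")
```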
Lecture | Title | Topics | Reading list | Assignment |
---|---|---|---|---|
L1 | Foundation of deep learning (Hui) | Building blocks, MLP, course overview | | |
L2 | Foundation of deep learning, cont. (Hui) | Backprop, optimization, loss functions ... | | Assignment 1 |
L3 | Convolutional neural network (David) | Convolution and its variants, backprop of conv, common architectures | | Assignment 2 |
L4 | Convolutional neural network, cont. (David) | ResNet, U-Net, using CNNs for classification, segmentation, detection | | Assignment 3 |
L5 | Recurrent neural network (Hui) | RNN basics, vanilla RNN, LSTM | | |
L6 | RNN and Transformer (Hui) | GRU, self-attention, word embeddings, Transformers (a self-attention sketch follows this schedule) | | Assignment 4 |
L7 | Model training (David) | Nuts and bolts, parallel training, learning rate scheduling, OneCycle, debugging | | |
L8 | Generative model (Hui) | Generative vs. discriminative, GAN and its cost function, GAN variants, CycleGAN, CycleGAN+U-Net | | Assignment 5 |
L9 | Tooling and infrastructure (Hui) | Data management, experiment management, testing, tooling | | Assignment 6 |
L10 | Other topics (Hui) | Latest progress, meta-learning, model deployment, building your own DL lab | | Assignment 7 |
L11 | Invited lecture 1 | Weak supervision and Software 2.0 | | |
L12 | Invited lecture 2 | Full-stack deep learning | | |
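The self-attention block covered in the RNN and Transformer lecture (L6) boils down to a few matrix multiplications. Here is a single-head, scaled dot-product sketch; the shapes and projection matrices are illustrative, not the lecture's exact notation.

```python
import torch
import torch.nn.functional as F

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    x: (batch, seq_len, d_model); Wq/Wk/Wv: (d_model, d_head) projections.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv                          # queries, keys, values
    scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)   # (batch, seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)                       # attention weights per query position
    return weights @ v                                        # weighted sum of values

x = torch.randn(2, 10, 32)                    # 2 sequences, 10 time steps, 32 features
Wq, Wk, Wv = (torch.randn(32, 16) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)           # -> (2, 10, 16)
```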