Introduction to Supervised Learning with TensorFlow

This repository contains the slides, code, and generated output of my talk “Introduction to Supervised Learning with TensorFlow”. The target audience is people new to Machine Learning. To follow along, basic linear algebra and rudimentary programming knowledge should be enough. The talk is designed to take roughly two hours. It presents the formal components of a supervised learning problem, the two basic models Linear Regression and Logistic Regression, how to generalize them to Feedforward Neural Networks, as well as their implementations in the Machine Learning framework TensorFlow.
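To give a flavor of what the talk builds up to: a supervised learning problem combines a model, a loss, and an optimization procedure. The talk's notebooks implement this in TensorFlow; the sketch below is a hypothetical, framework-free illustration in plain NumPy of the first experiment's idea (fitting a "disturbed line" with gradient descent). All constants and names here are invented for illustration, not taken from the notebooks.

```python
import numpy as np

# Synthetic "disturbed line": y = 2.0 * x + 0.5 plus Gaussian noise.
rng = np.random.default_rng(seed=0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=100)

# Model parameters for y_pred = w * x + b, trained by gradient descent
# on the mean squared error loss.
w, b = 0.0, 0.0
learning_rate = 0.1
for _ in range(500):
    error = (w * x + b) - y
    grad_w = 2.0 * np.mean(error * x)  # d(MSE)/dw
    grad_b = 2.0 * np.mean(error)      # d(MSE)/db
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b)  # close to the true parameters 2.0 and 0.5
```

The TensorFlow versions in the notebooks follow the same recipe, but let the framework compute the gradients automatically.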

Code

All code is written in Python and provided as Jupyter Notebooks. Each experiment expands upon the previous one, so they are best read in order. To experiment with the code without having to install Python and the required dependencies, you can use Google Colaboratory, a free Jupyter Notebook environment for research and education. The following links open the respective notebooks directly in Colaboratory:

  1. Simple linear regression (disturbed line)
  2. Simple linear regression (housing)
  3. Linear regression (housing)
  4. Linear regression (housing) with standardization
  5. Linear regression (housing) with batching
  6. Linear regression (spam)
  7. Logistic regression (spam)
  8. Multinomial logistic regression (wine)
  9. Feedforward neural network (housing)
  10. Feedforward neural network (housing) advanced
  11. Feedforward neural network (housing) with regularization
  12. Feedforward neural network (spam)

To make Git and Jupyter Notebooks play nice, all cell output and metadata is stripped from the notebooks. Instead, the cell outputs used in the talk are persisted as HTML files in the output folder. Because of their Machine Learning nature, all experiments involve randomness, so rerunning them will not yield identical results, only similar ones. The regenerate.sh script can be used to clean the notebooks before committing, as well as to rerun them and persist the results into the output folder.
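The run-to-run variation comes from sources such as random parameter initialization and data shuffling. A hypothetical sketch (the seeding function and names are invented here, not part of the repository's scripts) of why fixing a seed makes such runs reproducible:

```python
import numpy as np

def run_experiment(seed=None):
    """Stand-in for a training run: draws 'initial weights' at random."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=3)

a = run_experiment(seed=42)
b = run_experiment(seed=42)
print(np.array_equal(a, b))  # True: runs sharing a seed match exactly
```

Without a fixed seed, repeated runs draw different random numbers and therefore end at slightly different parameters, which is why reruns of the notebooks yield similar rather than identical results.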

Dependencies

All dependencies required to run the code locally are specified in the Pipfile. To enter a virtual environment satisfying these dependencies, install Pipenv and run:

pipenv install --skip-lock
pipenv shell

Alternatively, you can use Docker:

docker build -t talk-supervised-learning-tensorflow .
docker run -it talk-supervised-learning-tensorflow

Acknowledgements

I thank René Pickardt for extensive feedback on early versions of the experiments.

License


The contents of this talk are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).

All code and generated output is hereby released into the public domain (CC0).
