stefbraun/rnn_benchmarks
rnn_benchmarks

Welcome to the rnn_benchmarks repository! We offer:

  • A training-speed comparison of different LSTM implementations across deep learning frameworks
  • Common input sizes, network configurations and cost functions from automatic speech recognition
  • Best-practice scripts for learning how to code up a network, optimizers, loss functions, etc.

Update June 4th 2018

Run the benchmarks

Go to the folder 'main' and execute the 'main.py' script in the corresponding benchmark folder. Before running 'main.py', you need to specify the paths to the Python environments that contain the corresponding frameworks. The 'main.py' script creates a 'commands.sh' script that executes the benchmarks. The measured execution times are written to 'results/results.csv'. The toy data and default parameters are provided by 'support.py', which ensures that every script uses the same hyperparameters.
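The timing-and-logging pattern described above can be sketched in plain Python. This is an illustrative sketch, not the repository's actual code: the function names (`benchmark`, `write_results`), the dummy workload, and the CSV column names are assumptions chosen for the example.

```python
import csv
import time
from pathlib import Path


def benchmark(fn, runs=3):
    """Time a callable over several runs; return per-run durations in seconds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return times


def write_results(path, framework, times):
    """Append one row per measured run to a CSV, writing a header if the file is new."""
    path = Path(path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["framework", "run", "seconds"])
        for i, t in enumerate(times):
            writer.writerow([framework, i, f"{t:.6f}"])


# Dummy workload standing in for one training step of an LSTM.
times = benchmark(lambda: sum(i * i for i in range(100_000)), runs=3)
write_results("results.csv", "dummy", times)
```

Appending rows rather than overwriting lets runs for different frameworks accumulate in one results file, which is convenient for later comparison.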
