caffetrain.sh

This shell script is generated by CreateTrainJob.m and runs caffe on a specific model within the directory in which the script resides.

Usage

usage: caffetrain.sh [-h] [--numiterations NUMITERATIONS] [--gpu GPU]
                              [--base_lr BASE_LR] [--power POWER] 
                              [--momentum MOMENTUM] 
                              [--weight_decay WEIGHT_DECAY] 
                              [--average_loss AVERAGE_LOSS] 
                              [--lr_policy POLICY] [--iter_size ITER_SIZE] 
                              [--snapshot_interval SNAPSHOT_INTERVAL]
                              model trainoutdir

              Version: 1.6.0

              Runs caffe on CDeep3M model specified by model argument 
              to perform training. The trained model will be stored in
              <trainoutdir>/<model>/trainedmodel directory
              Output from caffe will be redirected to <trainoutdir>/<model>/log/out.log
 
              For further information about parameters below please see: 
              https://github.com/BVLC/caffe/wiki/Solver-Prototxt

    
positional arguments:
  model                The model to train, should be one of the following:
                       1fm, 3fm, 5fm
  trainoutdir          Directory created by runtraining.sh containing the
                       output of training.

optional arguments:
  -h, --help           show this help message and exit
  --gpu                Which GPU to use; can be a number, i.e. 0 or 1, or
                       all to use all GPUs (default all)
  --base_learn         Base learning rate (default 1e-02)
  --power              Used in poly and sigmoid lr_policies. (default 0.8)
  --momentum           Indicates how much of the previous weight update will
                       be retained in the new calculation. (default 0.9)
  --weight_decay       Factor of (regularization) penalization of large
                       weights (default 0.0005)
  --average_loss       Number of iterations to use to average loss
                       (default 16)
  --lr_policy          Learning rate policy (default poly)
  --iter_size          Accumulate gradients across batches through the 
                       iter_size solver field. (default 8)
  --snapshot_interval  How often caffe should output a model and solverstate.
                       (default 2000)
  --numiterations      Number of training iterations to run (default 30000)
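
Because caffe's output is redirected to <trainoutdir>/<model>/log/out.log as described above, training progress can be followed while the script runs. A minimal sketch, assuming the example model and output directory used in the next section:

tail -f ~/trainout/1fm/log/out.log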

Example usage

./caffetrain.sh 1fm ~/trainout
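
Optional arguments go before the positional arguments, as shown in the usage line above. A sketch (option names as listed in the help text; the values here are illustrative only) that trains the 3fm model on GPU 0 for 10000 iterations, snapshotting every 1000 iterations:

./caffetrain.sh --gpu 0 --numiterations 10000 --snapshot_interval 1000 3fm ~/trainout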