With this project, we propose a viable pipeline for application-aware optimization of both spiking and non-spiking neural networks by means of the NNI toolkit (NNI webpage, NNI GitHub repository).
First presented with a focus on the human activity recognition (HAR) task in the paper "Human activity recognition: suitability of a neuromorphic approach for on-edge AIoT applications", the procedure described here is flexible enough to be tailored to different problems. Specifically, since we employed it for both convolutional and recurrent neural networks, its application domain extends to both time-dependent and time-independent tasks.
The picture below schematically summarizes the different steps:
As a first step (a), since the proposed workflow is an application-aware procedure, the dataset(s) of interest is/are identified. Based on this selection, the network architectures of interest are defined (b). The NNI experiment is then designed by means of a search space for the (hyper)parameters to be optimized (c) and of a configuration file with the details on how to perform the optimization (d). Once the experiment is fully defined, it can be run (e), resulting in an optimized classifier for the specific task of interest (f).
In the following, practical information to reproduce all the steps is provided.
Vittorio Fra, Evelina Forno, Riccardo Pignari, Terrence Stewart, Enrico Macii, and Gianvito Urgese:
Human activity recognition: suitability of a neuromorphic approach for on-edge AIoT applications.
Neuromorphic Computing and Engineering 2022.
See article on IOPscience
To clone the project:
git clone https://github.com/neuromorphic-polito/NeHAR
To reproduce and activate the (conda) environment:
conda env create -f env_nni.yml
conda activate env
To define and run an NNI optimization experiment, three different instruction files are needed:
1) a `.json` file for the search space definition;
2) a `.yml` file to specify the experiment execution details;
3) a `.py` file to actually define the experiment to be run.
The latter contains all the information about the network to be optimized: its structure, training and evaluation are all defined in this file. In the experiments folder, the `.py` files for all the networks investigated within this project (namely LSTM, CNN, spiking CNN, LMU and spiking LMU) are available.
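For orientation, the minimal sketch below shows how such a trial file typically interacts with NNI; the training loop and the metric are placeholders (the actual model, dataset and training code are those in the experiments folder), while the `nni` calls are the standard trial API.

```python
import nni


def main():
    # Ask NNI for the (hyper)parameter values sampled for this trial
    # from the search space defined in the corresponding .json file.
    params = nni.get_next_parameter()

    # In the real trial files the network is built, trained and evaluated here
    # with the sampled (hyper)parameters; a dummy metric keeps this sketch
    # self-contained ("epochs" is an illustrative parameter name).
    for epoch in range(params.get("epochs", 3)):
        accuracy = 0.5 + 0.1 * epoch              # placeholder for the real test accuracy
        nni.report_intermediate_result(accuracy)  # shown per epoch in the NNI web UI

    # The final metric is what the tuner uses to guide the optimization.
    nni.report_final_result(accuracy)


if __name__ == "__main__":
    main()
```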
In the `.json` file, the characteristics of the search space to be explored during the optimization are instead defined. In more detail, all the (hyper)parameters accounted for are listed here together with their range of values. The specific file for each of the above-mentioned networks is available in the searchspaces folder.
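Such files follow the standard NNI search space format, where each (hyper)parameter is associated with a sampling type and a range of values; the parameter names and ranges in the example below are purely illustrative, not the ones used in this project.

```json
{
    "learning_rate": {"_type": "loguniform", "_value": [0.0001, 0.1]},
    "batch_size": {"_type": "choice", "_value": [32, 64, 128]},
    "hidden_size": {"_type": "choice", "_value": [64, 128, 256]},
    "dropout": {"_type": "uniform", "_value": [0.0, 0.5]}
}
```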
Finally, the `.yml` file defines the configuration of the NNI experiment:
- experiment duration;
- number of trials to be performed;
- tuner and optimization rule;
- GPU usage (to be specified depending on whether a GPU is available or not);
- paths of the `.json` and `.py` files to be used.
In the configuration files, the command to run the proper `.py` file is also defined.
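As a sketch of what such a file contains, assuming the NNI v1-style configuration schema and purely illustrative values and paths (the actual settings are those in the configurations folder):

```yaml
# Illustrative NNI (v1-style) configuration; values and paths are examples only.
authorName: default
experimentName: har_example
trialConcurrency: 1
maxExecDuration: 24h            # experiment duration
maxTrialNum: 100                # number of trials to be performed
trainingServicePlatform: local
searchSpacePath: searchspaces/example_searchspace.json
useAnnotation: false
tuner:
  builtinTunerName: TPE         # tuner and optimization rule
  classArgs:
    optimize_mode: maximize
trial:
  command: python experiments/example_trial.py
  codeDir: .
  gpuNum: 1                     # set to 0 if no GPU is available
localConfig:
  gpuIndices: "0"               # indices of the GPUs to be used
```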
Before starting an NNI experiment, check whether a GPU can be used or not and set the `gpuNum` and `gpuIndices` settings in the `.yml` file accordingly. Then, to start the NNI optimization experiment, the following command has to be used:
nnictl create --config configurations/{NameOfTheYmlFile}
The `.yml` files available in the project are:
nni_cnn_lstm_trial.yml
nni_lmu_trial.yml
nni_scnn_trial.yml
nni_slmu_trial.yml
Note that, if the `--port` option is not specified, the default port 8080 will be used.
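For instance, a custom port can be selected as follows (8081 being just an example):
nnictl create --config configurations/nni_lmu_trial.yml --port 8081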
All the experiments will be saved in:
os.path.expanduser('~')/nni-experiments/
There, logs and results of each trial will be saved in a relative path defined as:
{ExperimentID}/trials/{TrialID}
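For reference, the folder of a specific trial can thus be composed as in the snippet below, where the experiment and trial IDs are placeholders to be replaced with the actual ones:

```python
import os

# Placeholders: replace with the IDs of the experiment/trial of interest.
experiment_id = "ExperimentID"
trial_id = "TrialID"

# Folder where NNI saved the logs and results of that trial.
trial_dir = os.path.join(
    os.path.expanduser("~"), "nni-experiments", experiment_id, "trials", trial_id
)
print(trial_dir)
```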
In the output folder, the network weights (giving the best test results) will instead be saved following the path defined by the `out_dir` variable in the corresponding `.py` file.
The post-optimization notebook can then be employed to load the optimized (hyper)parameters obtained from the NNI experiments and to obtain, for each network, the confusion matrix, an evaluation of the memory footprint and an assessment of the energy consumption. In order to do so successfully, the experiment ID and the trial ID of the best results from the NNI optimization must be used, as explained at the beginning of the notebook itself.