Designing an end-to-end Pipeline for Developing and Deploying IoT Solutions on Embedded Neuromorphic Platforms
M.Sc. Thesis Project
Author: Marco Bramini
Thesis Link: https://webthesis.biblio.polito.it/29384/
This project is designed to run on Linux with Conda installed. On such a system, a complete virtual environment can be created with the following command:

```
conda env create -f requirements.txt -n myenv
```
On other operating systems, build the virtual environment manually and install the packages one by one:

```
conda create -n myenv
conda activate myenv
conda install pip
pip install "rockpool[all]"
pip install nni
pip install tonic
pip install -U "jax[cuda12_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
pip install ipykernel
```
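To verify the environment (a convenience check, not part of the project), import the main packages and list the JAX devices:

```python
# Sanity check: all imports should succeed, and on a machine with a CUDA GPU
# jax.devices() should report a GPU backend.
import jax
import nni
import rockpool
import tonic

print("rockpool:", rockpool.__version__)
print("nni:", nni.__version__)
print("tonic:", tonic.__version__)
print("jax devices:", jax.devices())
```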
Use 7-Zip to unpack the multipart zip file. The extraction must produce the following files:

- `wisdm_watch_full_40.npz`
- `wisdm_watch_full_40_encoded.npy`
- `wisdm_watch_full_40_classes.json`

The provided scripts assume these files are contained in a root folder named `data/`.
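A quick way to confirm the files are readable (the array and key names inside them are not documented here, so the snippet only lists them):

```python
# List the contents of the dataset files; key/array names are not assumed.
import json
import numpy as np

archive = np.load("data/wisdm_watch_full_40.npz")
print("npz arrays:", archive.files)

# allow_pickle=True is a guess in case the .npy stores Python objects.
encoded = np.load("data/wisdm_watch_full_40_encoded.npy", allow_pickle=True)
print("encoded:", encoded.shape, encoded.dtype)

with open("data/wisdm_watch_full_40_classes.json") as f:
    print("classes:", json.load(f))
```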
The `task-definition` folder contains all the scripts needed for the Task Definition step (an illustrative DTW sketch follows this list).

- Configure the global settings in the head of the script `task-definition/generate_tasks_dtw.py`.
- Run the script with the command:

  ```
  cd task-definition
  python generate_tasks_dtw.py
  ```

- Configure the global settings in the head of the script `task-definition/generate_tasks_kld.py`.
- Run the script with the command:

  ```
  cd task-definition
  python generate_tasks_kld.py
  ```

- Configure the global settings in the head of the script `task-definition/plot_kde.py`.
- Run the script with the command:

  ```
  cd task-definition
  python plot_kde.py
  ```
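For orientation, `generate_tasks_dtw.py` presumably derives tasks from dynamic-time-warping distances between classes, as its name suggests. The snippet below is a minimal, generic DTW distance in NumPy, not the script's actual code:

```python
# Minimal dynamic-time-warping (DTW) distance between two 1-D signals, shown
# only to illustrate the kind of class-to-class distance a task-definition
# script could rely on.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Toy usage: similar signals give a smaller distance than dissimilar ones.
t = np.linspace(0, 2 * np.pi, 50)
print(dtw_distance(np.sin(t), np.sin(t + 0.3)))   # small
print(dtw_distance(np.sin(t), np.cos(3 * t)))     # larger
```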
- Configure the training pipeline by editing the file `training-pipeline/run_xylo_rockpool_train.py`. In particular:
  - Select the model architecture at line 37, choosing one of the predefined architectures in the library (a sketch of a comparable feed-forward network follows this list): `xylo_networks.get_ff_simple(...)`, `xylo_networks.get_ff_deep(...)`, `xylo_networks.get_ff_deep_res(...)`, `xylo_networks.get_ff_deep_deep_res(...)`, `xylo_networks.get_rec_simple(...)`, `xylo_networks.get_rec_deep(...)`.
  - Edit the parameter `input_params["enabled_classes"]` at line 177 with the class labels associated with the selected task.
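For orientation, the predefined `xylo_networks` architectures are presumably built from Rockpool modules (the training script is Rockpool-based). The sketch below assembles a comparable small feed-forward spiking network directly in Rockpool; the layer sizes and module choices are placeholders and this is not the library's code:

```python
# Hypothetical sketch of a small feed-forward spiking network of the kind the
# xylo_networks generators return. Dimensions are placeholders.
import torch
from rockpool.nn.modules import LinearTorch, LIFTorch
from rockpool.nn.combinators import Sequential

n_in, n_hidden, n_out = 12, 24, 6            # placeholder dimensions

net = Sequential(
    LinearTorch((n_in, n_hidden)),           # input projection
    LIFTorch(n_hidden),                      # hidden spiking layer
    LinearTorch((n_hidden, n_out)),          # readout projection
    LIFTorch(n_out),                         # output spiking layer
)

# Evolve the network over a random spike raster: (batch, time, channels)
spikes = (torch.rand(1, 100, n_in) < 0.1).float()
output, state, recordings = net(spikes)
print(output.shape)                          # -> torch.Size([1, 100, 6])
```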
- Run NNI with the following commands:

  ```
  cd training-pipeline
  python run_nni_hpo.py --config_path nni_experiment_configs/xylo_as.json --port 32000
  ```

  The process can be followed on the NNI GUI at `localhost:32000` (a generic sketch of the NNI trial API follows this step).
- At the end of the process, the full list of experiments and model checkpoints will be stored in the folders `training-pipeline/experiments/{EXPERIMENT_ID}/{TRIAL_ID}`. The best performing model can also be retrieved with the script `training-pipeline/get_best_trial_id.py`, after setting the experiment ID at line 4.
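For reference, NNI trials receive their sampled parameters and report metrics through NNI's standard trial API; the generic pattern is sketched below (it is not taken from `run_nni_hpo.py` or the training scripts):

```python
# Generic NNI trial pattern (standard NNI API); the actual integration lives
# in run_nni_hpo.py and the run_*_rockpool_train.py scripts.
import nni

params = nni.get_next_parameter()          # hyperparameters sampled by the tuner
lr = params.get("lr", 1e-3)                # "lr" is a placeholder name

val_accuracy = 0.0
for epoch in range(10):
    val_accuracy = 0.1 * epoch             # placeholder for a real training loop
    nni.report_intermediate_result(val_accuracy)

nni.report_final_result(val_accuracy)      # metric used to rank the trial
```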
- Configure the training pipeline by editing the file `training-pipeline/run_(dynapse2|xylo)_rockpool_train.py`. In particular:
  - Edit the parameter `input_params["enabled_classes"]` at line 177 with the class labels associated with the selected task.
- Run NNI with the following commands:

  ```
  cd training-pipeline
  python run_nni_hpo.py --config_path nni_experiment_configs/(dynapse2|xylo)_hpo.json --port 32000
  ```
- At the end of the process, the full list of experiments and model checkpoints will be stored in the folders `training-pipeline/experiments/{EXPERIMENT_ID}/{TRIAL_ID}`. The best performing model can also be retrieved with the script `training-pipeline/get_best_trial_id.py`, after setting the experiment ID at line 4.
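`get_best_trial_id.py` is project-specific. As a rough illustration only, an experiment whose web server is still up (e.g. on port 32000) can also be queried with NNI's experiment API, assuming each trial reported a single scalar final metric:

```python
# Rough illustration: query trial results from a still-running NNI experiment.
# This is an assumption about the approach, not the code of get_best_trial_id.py.
from nni.experiment import Experiment

exp = Experiment.connect(32000)          # attach to the running experiment
results = exp.export_data()              # one entry per finished trial

best = max(results, key=lambda r: r.value)
print("best trial:", best.trialJobId, "metric:", best.value)
print("parameters:", best.parameter)
```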
- Configure the training pipeline by editing the file `training-pipeline/run_(dynapse2|xylo)_rockpool_train.py`. In particular:
  - Edit the parameter `input_params["enabled_classes"]` at line 177 with the class labels associated with the selected task.
- Directly run the training pipeline with the following command:

  ```
  cd training-pipeline
  PYTHONPATH="lib" python run_(dynapse2|xylo)_rockpool_train.py
  ```
- At the end of the process, the folder `best_model/` will contain the checkpoint of the trained model.
This process requires a trained model. The tuning parameters will be automatically applied to the model checkpoint.
- Configure the NNI Quantization Tuning experiment by editing the file `training-pipeline/nni_experiment_configs/(dynapse2|xylo)_tuning_(task).json`. In particular:
  - Change the model path at line 2.
- Run NNI with the following commands:

  ```
  cd training-pipeline
  python run_nni_hpo.py --config_path nni_experiment_configs/(dynapse2|xylo)_tuning_(task).json --port 32000
  ```
Refer to the Python notebooks in `dynapse2-deploy/` and `xylo-deploy/` for the generation of the hardware configuration.
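The notebooks are hardware- and version-specific. For orientation only, the generic Rockpool flow for turning a trained network into a Xylo hardware configuration is sketched below; module paths and the chip-revision submodule depend on the installed Rockpool version, so treat this as an assumption rather than the notebooks' actual code:

```python
# Generic Rockpool -> Xylo flow (graph -> mapper -> quantization -> config).
# Module paths and chip-revision submodules vary with the Rockpool version.
from rockpool.nn.modules import LinearTorch, LIFTorch
from rockpool.nn.combinators import Sequential
from rockpool.devices import xylo
from rockpool.transform import quantize_methods as q

# Placeholder network; in practice this would be the trained model from best_model/.
net = Sequential(
    LinearTorch((8, 16)),
    LIFTorch(16),
    LinearTorch((16, 4)),
    LIFTorch(4),
)

spec = xylo.mapper(net.as_graph())          # map the computation graph onto Xylo resources
spec.update(q.global_quantize(**spec))      # quantize parameters to hardware precision
config, is_valid, msg = xylo.config_from_specification(**spec)
print("valid configuration:", is_valid, msg)
```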