A proof-of-concept pipeline for performing hyperparameter optimization of machine learning models with Nextflow.
Install Nextflow (version 23.10.x or higher):

```sh
curl -s https://get.nextflow.io | bash
```
Launch the pipeline:

```sh
# use conda natively (requires Conda)
./nextflow run nextflow-io/hyperopt -profile conda

# use Wave containers (requires Docker)
./nextflow run nextflow-io/hyperopt -profile wave
```
When the pipeline completes, you can view the training and prediction results in the `results` folder.
Note: the first time you execute the pipeline, Nextflow will take a few minutes to download the pipeline code from this GitHub repository and any related software dependencies (e.g. conda packages or Docker images).
The hyperopt pipeline consists of the following steps:
- Download a dataset
- Split the dataset into train/test sets
- Visualize the train/test sets
- Train a variety of models on the training set
- Evaluate each model on the test set
- Select the best model based on evaluation score
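The steps above can be sketched in plain scikit-learn. This is a minimal illustration only, not the pipeline's actual code: the models, metric, and dataset loader here are assumptions (scikit-learn bundles a copy of the same Wisconsin breast cancer data that `wdbc` refers to on OpenML); see the `train` module for the real options.

```python
# Minimal sketch of the hyperopt steps: split, train several models,
# evaluate each on the test set, and pick the best by score.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load a dataset (the pipeline downloads wdbc from OpenML by default;
# here we use scikit-learn's bundled copy of the same data)
X, y = load_breast_cancer(return_X_y=True)

# Split the dataset into train/test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a variety of models on the training set
models = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(random_state=42),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    # Evaluate each model on the test set
    scores[name] = accuracy_score(y_test, model.predict(X_test))

# Select the best model based on evaluation score
best = max(scores, key=scores.get)
print(f"best model: {best} (accuracy {scores[best]:.3f})")
```

The pipeline runs each of these stages as a separate Nextflow process, so the trainings can execute in parallel.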
You can control many aspects of this workflow with the pipeline parameters, including:
- Enable/disable each individual step
- Download a different dataset (default is `wdbc`; see OpenML.org to view available datasets)
- Provide your own training data instead of downloading it
- Provide your own pre-trained model and test data
- Select different models (see the `train` module for all available options)
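If you plan to provide your own training data, a tabular file with one column per feature plus a label column is the usual layout for this kind of workflow. The column names and CSV format below are assumptions for illustration; check the pipeline parameters in `nextflow.config` for the format it actually expects.

```python
import csv

# Hypothetical training data: two feature columns and a label column.
# The exact input format expected by the pipeline is an assumption here.
rows = [
    {"radius_mean": 17.99, "texture_mean": 10.38, "diagnosis": "M"},
    {"radius_mean": 13.54, "texture_mean": 14.36, "diagnosis": "B"},
]

with open("my_train_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["radius_mean", "texture_mean", "diagnosis"]
    )
    writer.writeheader()
    writer.writerows(rows)
```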
See the `nextflow.config` file for the list of pipeline parameters.
Since Nextflow provides an abstraction between the pipeline logic and the underlying execution environment, the hyperopt pipeline can be executed on a single computer or an HPC cluster without any modifications.
Visit the Nextflow documentation to see which HPC schedulers are supported, and how to use them.
The hyperopt pipeline uses Python (>=3.10) and several Python packages for machine learning and data science. These dependencies are defined in the `conda.yml` file.