CodeFlare is a framework to simplify the integration, scaling and acceleration of complex multi-step analytics and machine learning pipelines on the cloud.
Its main features are:
- Pipeline execution and scaling: CodeFlare Pipelines facilitates the definition and parallel execution of pipelines. It unifies pipeline workflows across multiple frameworks, while providing nearly optimal scale-out parallelism on pipelined computations.
- Deploy and integrate anywhere: CodeFlare simplifies deployment and integration by enabling a serverless user experience through integration with Red Hat OpenShift and IBM Cloud Code Engine, and by providing adapters and connectors that make it simple to load data and connect to data services.
This project is under active development. See the Documentation for design descriptions and the latest version of the APIs.
CodeFlare can be installed from PyPI.
Prerequisites:
- Python 3.8+
- Jupyter Lab (to run examples)
We recommend installing Python 3.8.7 using pyenv.
Install from PyPI:
pip3 install --upgrade codeflare
Alternatively, you can build and install locally with:
git clone https://github.com/project-codeflare/codeflare.git
cd codeflare
pip3 install --upgrade pip
pip3 install .
pip3 install -r requirements.txt
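Either way, a quick way to verify the installation is to import the package from Python (a minimal smoke test; it only assumes the installed top-level module is named codeflare, matching the PyPI package name):

```python
# Minimal smoke test: confirm the codeflare package is importable after installation.
import codeflare
print("CodeFlare imported successfully")
```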
You can try CodeFlare by running the Docker image from Docker Hub; projectcodeflare/codeflare:latest has the latest released version installed.
The command below starts the latest image in a clean environment:
docker run -it -p 8888:8888 projectcodeflare/codeflare:latest jupyter-lab --debug
It should produce output similar to the example below; copy the URL into your local browser to run CodeFlare from a Jupyter notebook.
To access the notebook, open this file in a browser:
...
Or copy and paste one of these URLs:
http://<hostname>:8888/?token=<token>
or http://127.0.0.1:8888/?token=<token>
You can try out some of CodeFlare's features using the My Binder service.
Click on a link below to try CodeFlare in a sandbox environment, without having to install anything.
CodeFlare Pipelines reimagines pipelines, providing a more intuitive API for data scientists to create AI/ML pipelines, data workflows, pre-processing and post-processing tasks, and more, all of which can scale seamlessly from a laptop to a cluster.
The API documentation can be found here, and reference use case documentation here.
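For a feel of the programming model, the sketch below builds a small two-node pipeline and fits it on in-memory data. It is only an illustration based on the API documentation and the sample notebooks; the class and method names used here (Pipeline, EstimatorNode, PipelineInput, Xy, execute_pipeline, ExecutionType) should be verified against the notebooks for the release you have installed.

```python
# Illustrative sketch of the CodeFlare Pipelines API; verify names against the
# notebooks and API documentation for your installed release.
import ray
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

import codeflare.pipelines.Datamodel as dm
import codeflare.pipelines.Runtime as rt
from codeflare.pipelines.Runtime import ExecutionType

ray.init()  # starts a local Ray cluster; see the deployment options below for the cloud

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Two estimator nodes connected by an edge: scale the features, then classify.
pipeline = dm.Pipeline()
scale = dm.EstimatorNode('scale', MinMaxScaler())
classify = dm.EstimatorNode('classify', DecisionTreeClassifier(max_depth=3))
pipeline.add_edge(scale, classify)

# Feed the data to the first node and fit the whole pipeline in parallel on Ray.
pipeline_input = dm.PipelineInput()
pipeline_input.add_xy_arg(scale, dm.Xy(X, y))
fitted = rt.execute_pipeline(pipeline, ExecutionType.FIT, pipeline_input)
```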
Examples are provided as executable notebooks: notebooks.
Examples can be run locally with:
jupyter-lab notebooks/<example_notebook>
If running with the container image, the examples can be found in codeflare/notebooks and executed directly from the Jupyter environment.
As a first example, we recommend the sample pipeline.
The pipeline will use ray.init() to start a local Ray cluster. See the deployment options below to run a Ray cluster in the cloud, or the details here if you are running a Ray cluster locally.
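For reference, the Ray initialization in the notebook can be swapped between a throwaway local cluster and an existing one. These are standard Ray calls rather than CodeFlare-specific APIs, the remote address below is a hypothetical placeholder, and the exact connection options depend on your Ray version:

```python
import ray

# Start a throwaway local Ray cluster (what the sample pipeline notebook does).
ray.init()

# Or reuse a Ray cluster already running on this machine:
# ray.init(address="auto")

# Or connect to a remote cluster (e.g. deployed on Code Engine or OpenShift) by
# pointing at its head node; <cluster-host> is a placeholder for your endpoint.
# ray.init(address="ray://<cluster-host>:10001")
```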
Unleash the power of pipelines by seamlessly scaling on the cloud. CodeFlare can be deployed on any Kubernetes-based platform, including IBM Cloud Code Engine and Red Hat OpenShift Container Platform.
- IBM Cloud Code Engine for detailed instructions on how to run CodeFlare on a serverless platform.
- Red Hat OpenShift for detailed instructions on how to run CodeFlare on OpenShift Container Platform.
If you are interested in joining us and making CodeFlare better, we encourage you to take a look at our Contributing page.