This repository is now part of 👉 nuvlaedge/nuvlaedge 👈.
This repository contains the source code for the NuvlaEdge Peripheral Manager for GPU devices. This microservice is responsible for the discovery, categorization, and management of all NuvlaEdge GPU peripherals.
This microservice is an integral component of the NuvlaEdge Engine.
NOTE: this microservice is part of a loosely coupled architecture, so when deployed by itself it might not provide all of its functionality. Please refer to https://github.com/nuvlaedge/deployment for a fully functional deployment.
This repository is already linked with Travis CI, so with every commit, a new Docker image is released.
A POM file handles the multi-architecture and stage-specific builds.
If you're developing and testing locally on your own machine, simply run `docker build .`, or deploy the microservice via the local compose files to have your changes built into a new Docker image and saved to your local filesystem.
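For example, a minimal local build might look like the sketch below (the image tag is illustrative and not defined by this repository; the compose files remain the reference for the full runtime configuration):

```bash
# Build a local image from the repository root
docker build -t peripheral-manager-gpu:local .

# Confirm the image was saved to your local Docker image store
docker images peripheral-manager-gpu
```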
If you're developing in a non-master branch, please push your changes to the respective branch and wait for Travis CI to finish the automated build. You'll find your Docker image in the nuvladev organization on Docker Hub, named nuvladev/peripheral-manager-gpu:<branch>.
The NuvlaEdge Peripheral Manager for GPU will only work if a Nuvla endpoint is provided and a NuvlaEdge has been added in Nuvla.
Why? Because this microservice has been built to report directly to Nuvla. Every GPU device will be registered in Nuvla and associated with an existing NuvlaEdge.
Prerequisites:

- Docker (version 18 or higher)
- Docker Compose (version 1.23.2 or higher)
- Linux
| Environment variable | Description |
| -------------------- | ----------- |
| NUVLAEDGE_UUID | (required) before starting the microservice, make sure you export the ID of the NuvlaEdge you've created through Nuvla: `export NUVLAEDGE_UUID=<nuvlaedge id from nuvla>` |
| NUVLA_ENDPOINT_INSECURE | if you're using an insecure Nuvla endpoint, set this to True: `export NUVLA_ENDPOINT_INSECURE=True` |
| NUVLA_ENDPOINT | if you're not using nuvla.io, set this to your Nuvla endpoint: `export NUVLA_ENDPOINT=<your endpoint>` |
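Putting these together, a typical environment setup before launching might look like this (the values are placeholders to be replaced with your own):

```bash
# Required: the ID of the NuvlaEdge resource created in Nuvla
export NUVLAEDGE_UUID=<nuvlaedge id from nuvla>

# Optional: only needed when not using nuvla.io
export NUVLA_ENDPOINT=<your endpoint>

# Optional: only needed for insecure Nuvla endpoints
export NUVLA_ENDPOINT_INSECURE=True
```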
To build and launch the microservice, simply run `docker-compose up --build`. To use the localhost compose file instead (for example when testing against a Nuvla running on localhost), run `docker-compose -f docker-compose.localhost.yml up --build`.
This microservice is completely automated: as long as the proper environment variables have been set and the dependencies have been met, the Docker container will start by itself and automatically begin registering GPU peripherals in Nuvla, in real time.
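Once it is running, you can follow the container's output to confirm that GPU peripherals are being discovered and registered (the service name below is an assumption; check the compose file for the actual name):

```bash
# Follow the logs of the peripheral manager service started by docker-compose
docker-compose logs -f peripheral-manager-gpu
```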
This is an open-source project, so all community contributions are more than welcome. Please read CONTRIBUTING.md.
Copyright © 2021, SixSq SA