A Docker image that bundles multiple types of NN model optimization environments. The host PC's GUI and camera can be accessed directly from the container to verify model operation, and both Intel iHD GPUs (iGPU) and NVIDIA GPUs (dGPU) are supported.
- Docker 20.10.5, build 55c4c88
- Ubuntu 20.04 x86_64
- CUDA 11.2
- cuDNN 8.1
- TensorFlow v2.5.0-rc1 (MediaPipe Custom OP, FlexDelegate, XNNPACK enabled)
- tflite_runtime v2.5.0-rc1 (MediaPipe Custom OP, FlexDelegate, XNNPACK enabled)
- edgetpu-compiler
- flatc 1.12.0
- TensorRT cuda11.1-trt7.2.3.4-ga-20210226
- PyTorch 1.8.1+cu112
- TorchVision 0.9.1+cu112
- TorchAudio 0.8.1
- OpenVINO 2021.3.394
- tensorflowjs
- coremltools
- onnx
- tf2onnx
- tensorflow-datasets
- openvino2tensorflow
- tflite2tensorflow
- onnxruntime
- onnx-simplifier
- MXNet
- gdown
- OpenCV 4.5.2-openvino
- Intel-Media-SDK
- Intel iHD GPU (iGPU) support
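As a quick smoke test of the bundled converters once inside the container, here is a minimal sketch, assuming a TensorFlow SavedModel already exists at ./saved_model (all paths and file names below are placeholders):

# Convert the SavedModel to ONNX with tf2onnx (opset 13 is just an example choice)
$ python3 -m tf2onnx.convert --saved-model saved_model --opset 13 --output model.onnx

# Simplify the exported graph with onnx-simplifier
$ python3 -m onnxsim model.onnx model_simplified.onnx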
https://hub.docker.com/repository/docker/pinto0309/mtomo/tags?page=1&ordering=last_updated
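The prebuilt image can be pulled directly from Docker Hub, for example the tag referenced in the run command below:

$ docker pull pinto0309/mtomo:ubuntu2004_tf2.5.0-rc1_torch1.8.1_openvino2021.3.394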
$ xhost +local: && \
docker run -it --rm \
--gpus all \
-v `pwd`:/home/user/workdir \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e LIBVA_DRIVER_NAME=iHD \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
pinto0309/mtomo:ubuntu2004_tf2.5.0-rc1_torch1.8.1_openvino2021.3.394
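Once the container starts, device and display access can be spot-checked before running any workload. A minimal sketch, assuming the NVIDIA driver and nvidia-container-toolkit are installed on the host and a camera is attached as /dev/video0:

# Inside the container
$ nvidia-smi                  # dGPU visibility provided by --gpus all
$ ls -l /dev/dri /dev/video0  # iGPU render nodes and camera (exposed via --privileged / --device)
$ echo $DISPLAY               # X11 display forwarded from the host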
$ git clone https://github.com/PINTO0309/mtomo.git && cd mtomo
$ docker build -t {IMAGE_NAME}:{TAG} .
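For example, with an arbitrary local image name and tag of your choosing:

$ docker build -t mtomo:local .   # "mtomo:local" is just an example for {IMAGE_NAME}:{TAG}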
$ xhost +local: && \
docker run -it --rm \
--gpus all \
-v `pwd`:/home/user/workdir \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e LIBVA_DRIVER_NAME=iHD \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
{IMAGE_NAME}:{TAG}
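Because xhost +local: relaxes X server access control for local connections, it can be revoked again once you are done:

$ xhost -local: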