
[Bug]: exporting model in OpenVINO and Torch is order dependent (CPU) #2447

Open

FedericoDeBona opened this issue Dec 3, 2024 · 0 comments

FedericoDeBona commented Dec 3, 2024

Describe the bug

After training the model, I exported it in both OpenVINO and Torch formats. The export order significantly affects the inference results.

  • Case 1: Exporting to Torch first, then to OpenVINO, produces visually different results between the two inferencers.
  • Case 2: Exporting to OpenVINO first, then to Torch, produces visually identical results. It seems that the OpenVINO export overwrites some shared model state, which then affects the subsequent Torch export (see the diagnostic sketch after the two cases below).

Case 1:

engine.export(model=model, export_type=ExportType.TORCH)
engine.export(model=model, export_type=ExportType.OPENVINO)

Torch inference: (screenshot)
OpenVINO inference: (screenshot)

Case 2:

engine.export(model=model, export_type=ExportType.OPENVINO)
engine.export(model=model, export_type=ExportType.TORCH)

Torch inference: (screenshot)
OpenVINO inference: (screenshot)
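
One way to test the shared-state hypothesis (my assumption, not confirmed against the anomalib internals) is to snapshot the model's state_dict before the OpenVINO export and diff it afterwards; any key that changes would explain the order dependence:

import copy
import torch

# Hedged diagnostic sketch: snapshot the weights, run the OpenVINO export,
# then report any tensors the export mutated in place.
before = copy.deepcopy(model.state_dict())
engine.export(model=model, export_type=ExportType.OPENVINO)
after = model.state_dict()

changed = [k for k in before if k not in after or not torch.equal(before[k], after[k])]
print("mutated keys:", changed or "none")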

By doing something like the following, case 2 (first OpenVINO, then Torch) gives the same results as case 1, since each export starts from a freshly loaded checkpoint:

del engine
del model
del datamodule

engine = Engine()
engine.export(
	model=Patchcore(),
	export_type=ExportType.OPENVINO,
	ckpt_path=f"/home/trainer/trainer_engine/results/Patchcore/MVTec/{CATEGORY}/latest/weights/lightning/model.ckpt",
	export_root=f"/home/trainer/trainer_engine/results/Patchcore/MVTec/{CATEGORY}/latest",
)

del engine
engine = Engine()
engine.export(
	model=Patchcore(),
	export_type=ExportType.TORCH,
	ckpt_path=f"/home/trainer/trainer_engine/results/Patchcore/MVTec/{CATEGORY}/latest/weights/lightning/model.ckpt",
	export_root=f"/home/trainer/trainer_engine/results/Patchcore/MVTec/{CATEGORY}/latest",
)
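
If reloading from the checkpoint every time is too heavy, a lighter variant of the same idea (an untested sketch on my side) would be to hand the OpenVINO export a deep copy of the trained model, so any in-place mutation cannot leak into the Torch export:

import copy

# Hedged sketch: give the OpenVINO export its own copy of the model so the
# in-memory model passed to the Torch export stays untouched.
engine.export(model=copy.deepcopy(model), export_type=ExportType.OPENVINO)
engine.export(model=model, export_type=ExportType.TORCH)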

Full code

Dataset: drive
Notes
• For the OpenVINO Inferencer, the input must be an image read with cv2.
• For the Torch Inferencer, the input is the image path.
• Both inferencers are configured to run on the CPU.

import os
from glob import glob

import cv2
from IPython.display import display  # assumption: running in a notebook; display() is used below
from PIL import Image

from anomalib import TaskType
from anomalib.data import Folder
from anomalib.deploy import ExportType, OpenVINOInferencer, TorchInferencer
from anomalib.engine import Engine
from anomalib.models import Patchcore
from anomalib.utils.visualization import ImageVisualizer
from anomalib.utils.visualization.image import VisualizationMode

datamodule = Folder(
	name="burulli",
	root="/home/trainer/trainer_engine/datasets/MVTec/burulli",
	normal_dir="train/good",
	normal_test_dir="test/good",
	normal_split_ratio=0,
	abnormal_dir="test/defect",
	task=TaskType.CLASSIFICATION,
	image_size=(256, 256),
)
datamodule.setup()
model = Patchcore()
engine = Engine()

engine.fit(datamodule=datamodule, model=model)

# Change the export order here to reproduce case 1 vs. case 2
engine.export(model=model, export_type=ExportType.OPENVINO)
engine.export(model=model, export_type=ExportType.TORCH)

vino_inferencer = OpenVINOInferencer(
    path="/home/trainer/trainer_engine/results/Patchcore/burulli/latest/weights/openvino/model.bin",
    metadata="/home/trainer/trainer_engine/results/Patchcore/burulli/latest/weights/openvino/metadata.json",
    device="CPU",
)
torch_inferencer = TorchInferencer(
    path="/home/trainer/trainer_engine/results/Patchcore/burulli/latest/weights/torch/model.pt",
    device="cpu",
)

visualizer = ImageVisualizer(mode=VisualizationMode.FULL, task=TaskType.SEGMENTATION)

for defect_path in sorted(glob("/home/trainer/trainer_engine/datasets/MVTec/burulli/test/*")):
	defect = os.path.basename(defect_path)
	print(f"===== {defect} =====")
	for img_path in sorted(glob(f"/home/trainer/trainer_engine/datasets/MVTec/burulli/test/{defect}/*")):
		img_name = os.path.basename(img_path)
		print(f"Inferencing {img_name}")
		vino_res = vino_inferencer(cv2.imread(img_path))  # OpenVINO takes the decoded cv2 image
		torch_res = torch_inferencer(img_path)  # Torch takes the image path

		print("OPENVINO", "predscore:", vino_res.pred_score)
		display(Image.fromarray(cv2.resize(visualizer.visualize_image(vino_res), (500 * 2, 125 * 2))))
		print("TORCH", "predscore:", torch_res.pred_score)
		display(Image.fromarray(cv2.resize(visualizer.visualize_image(torch_res), (500 * 2, 125 * 2))))
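
For a less subjective comparison than the visualizations, the per-image loop above could also diff the raw outputs (a sketch on my side; it assumes both inferencers return anomalib ImageResult objects exposing pred_score and anomaly_map):

import numpy as np

# Hedged addition for the per-image loop: quantify the OpenVINO/Torch divergence.
score_delta = abs(float(vino_res.pred_score) - float(torch_res.pred_score))
map_delta = float(np.abs(vino_res.anomaly_map - torch_res.anomaly_map).max())
print(f"pred_score delta: {score_delta:.6f}, anomaly_map max delta: {map_delta:.6f}")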

Dataset

Other (please specify in the text field below)

Model

PatchCore

Steps to reproduce the behavior

See above

OS information

OS information:

  • OS: Ubuntu 24.04
  • Python version: 3.10.14
  • Anomalib version: 1.2.0.dev0
  • PyTorch version: 2.4.0+cu118
  • CUDA/cuDNN version: 11.8
  • GPU models and configuration: GeForce RTX 3090 Ti
  • Any other relevant information: I'm using a custom dataset (link above)

Expected behavior

I am not certain, but I would expect the exported models to produce nearly identical results in both OpenVINO and Torch, regardless of the export order.

Screenshots

No response

Pip/GitHub

GitHub

What version/branch did you use?

1.2.0.dev0

Configuration YAML

-

Logs

-

Code of Conduct

  • I agree to follow this project's Code of Conduct