Adding the pipeline for the task explanation and LLM #2190

Open. Wants to merge 50 commits into base: main.

The diff below shows changes from 4 of the 50 commits.

Commits (50):
adbca17  Add Task EXPLANATION and the visualization of images with description. (Bepitic, Jul 15, 2024)
5611ec1  upd dataset task with explanation (Bepitic, Jul 15, 2024)
8ed23a3  fix tasktype on metrics, depth, cataset, inferencer. (Bepitic, Jul 15, 2024)
a463b5b  Merge branch 'main' into llm-pipeline (Bepitic, Jul 15, 2024)
d5baf6b  fix lint on visualization/image (Bepitic, Jul 16, 2024)
b7c8eaa  Merge branch 'openvinotoolkit:main' into llm-pipeline (Bepitic, Jul 18, 2024)
5b563d9  Merge branch 'llm-pipeline' of github.com:Bepitic/anomalib into llm-p… (Bepitic, Jul 18, 2024)
bfd936e  Fix formatting dataset (Bepitic, Jul 18, 2024)
f541316  fix format data/base/depth (Bepitic, Jul 18, 2024)
4e392a9  Fix formatting openvino_inferencer (Bepitic, Jul 18, 2024)
5fc70ba  fix formatting (Bepitic, Jul 18, 2024)
75099af  Add Explanation to error-msg. (Bepitic, Aug 2, 2024)
e5040d3  OpenAI - VLM init (Bepitic, Aug 3, 2024)
86ad803  Add wrapper to run OpenAI (Bepitic, Aug 4, 2024)
3678f72  add in ppyproject (Bepitic, Aug 4, 2024)
7413842  Add Test and fix description/title (Bepitic, Aug 12, 2024)
dc42cbd  Add Readme and fix bug. (Bepitic, Aug 13, 2024)
5788d22  Update src/anomalib/models/image/openai_vlm/lightning_model.py (Bepitic, Aug 13, 2024)
e4f6bec  Update src/anomalib/models/image/openai_vlm/__init__.py (Bepitic, Aug 13, 2024)
5437467  Add fix pipeline bug. (Bepitic, Aug 13, 2024)
982c9ca  Add test. (Bepitic, Aug 13, 2024)
642fd26  Merge branch 'OpenAI-VLM' of github.com:Bepitic/anomalib into OpenAI-VLM (Bepitic, Aug 13, 2024)
b8cacf0  add changes (Bepitic, Aug 16, 2024)
0929dc9  Add integration test and unit test + skip export. (Bepitic, Aug 16, 2024)
39cf996  change to LANGUAGE (Bepitic, Aug 16, 2024)
671693d  Update images in Readme. (Bepitic, Aug 17, 2024)
224118b  Update src/anomalib/models/image/chatgpt_vision/__init__.py (Bepitic, Aug 20, 2024)
b703a41  Update src/anomalib/models/image/chatgpt_vision/chatgpt.py (Bepitic, Aug 20, 2024)
24c5486  Update src/anomalib/models/image/chatgpt_vision/lightning_model.py (Bepitic, Aug 20, 2024)
68e757e  Update tests/integration/model/test_models.py (Bepitic, Aug 20, 2024)
86714a1  Update src/anomalib/models/image/chatgpt_vision/lightning_model.py (Bepitic, Aug 20, 2024)
196d2a3  Update src/anomalib/models/image/chatgpt_vision/lightning_model.py (Bepitic, Aug 20, 2024)
b7f345a  fix comments (Bepitic, Aug 20, 2024)
b285d10  remove last file of chatgpt_vision. (Bepitic, Aug 20, 2024)
a688530  fix tests (Bepitic, Aug 20, 2024)
0fb5f79  Merge pull request #1 from Bepitic/OpenAI-VLM (GPTVad) (Bepitic, Aug 20, 2024)
6503543  Merge branch 'main' into llm-pipeline (Bepitic, Aug 20, 2024)
8e92e5e  Update src/anomalib/models/image/gptvad/chatgpt.py (Bepitic, Aug 21, 2024)
5ab044d  upd: language -> VISUAL_PROMPTING (Bepitic, Aug 21, 2024)
3f9ca93  fix visual prompting and model_name (Bepitic, Aug 21, 2024)
391b4c4  fix GPT for Gpt and the folder of the tests. (Bepitic, Aug 21, 2024)
ca1a0bb  fix: change import error outside. (Bepitic, Aug 21, 2024)
022dcb7  fix readme pointing to the right model. (Bepitic, Aug 21, 2024)
af7b9e9  fix import cycle, and separate usecase by explicit if. (Bepitic, Aug 21, 2024)
faf334f  upd: add comments to the few shot / zero shot. (Bepitic, Aug 21, 2024)
3ed8d3f  fix: dataset expected colums (Bepitic, Aug 21, 2024)
7f454c4  upd: add the same logic of the label on visualize_full. (Bepitic, Aug 22, 2024)
45bd520  Merge branch 'main' into llm-pipeline (Bepitic, Aug 22, 2024)
44586d6  Fix in the logic of the code. (Bepitic, Aug 22, 2024)
7adb835  Merge branch 'llm-pipeline' of github.com:Bepitic/anomalib into llm-p… (Bepitic, Aug 22, 2024)
1 change: 1 addition & 0 deletions src/anomalib/__init__.py
@@ -22,3 +22,4 @@ class TaskType(str, Enum):
     CLASSIFICATION = "classification"
     DETECTION = "detection"
     SEGMENTATION = "segmentation"
+    EXPLANATION = "explanation"
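With this addition, "explanation" becomes a valid value anywhere a `TaskType` is parsed. A minimal sketch of how the new member behaves (only the import below is assumed):

```python
from anomalib import TaskType

# TaskType is a str-backed enum, so the new member round-trips through
# its string value, e.g. when tasks are read from YAML configs.
task = TaskType("explanation")
assert task is TaskType.EXPLANATION
assert task == "explanation"  # str subclass: compares equal to its value
```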
34 changes: 27 additions & 7 deletions src/anomalib/callbacks/metrics.py
@@ -75,7 +75,7 @@ def setup(
         pixel_metric_names: list[str] | dict[str, dict[str, Any]]
         if self.pixel_metric_names is None:
             pixel_metric_names = []
-        elif self.task == TaskType.CLASSIFICATION:
+        elif self.task in (TaskType.CLASSIFICATION, TaskType.EXPLANATION):
             pixel_metric_names = []
             logger.warning(
                 "Cannot perform pixel-level evaluation when task type is classification. "
@@ -88,14 +88,23 @@
             )

         if isinstance(pl_module, AnomalyModule):
-            pl_module.image_metrics = create_metric_collection(image_metric_names, "image_")
-            if hasattr(pl_module, "pixel_metrics"):  # incase metrics are loaded from model checkpoint
+            pl_module.image_metrics = create_metric_collection(
+                image_metric_names,
+                "image_",
+            )
+            if hasattr(
+                pl_module,
+                "pixel_metrics",
+            ):  # incase metrics are loaded from model checkpoint
                 new_metrics = create_metric_collection(pixel_metric_names)
                 for name in new_metrics:
                     if name not in pl_module.pixel_metrics:
                         pl_module.pixel_metrics.add_metrics(new_metrics[name])
             else:
-                pl_module.pixel_metrics = create_metric_collection(pixel_metric_names, "pixel_")
+                pl_module.pixel_metrics = create_metric_collection(
+                    pixel_metric_names,
+                    "pixel_",
+                )
         self._set_threshold(pl_module)

     def on_validation_epoch_start(
@@ -121,7 +130,11 @@ def on_validation_batch_end(

         if outputs is not None:
             self._outputs_to_device(outputs)
-            self._update_metrics(pl_module.image_metrics, pl_module.pixel_metrics, outputs)
+            self._update_metrics(
+                pl_module.image_metrics,
+                pl_module.pixel_metrics,
+                outputs,
+            )

     def on_validation_epoch_end(
         self,
@@ -156,7 +169,11 @@ def on_test_batch_end(

         if outputs is not None:
             self._outputs_to_device(outputs)
-            self._update_metrics(pl_module.image_metrics, pl_module.pixel_metrics, outputs)
+            self._update_metrics(
+                pl_module.image_metrics,
+                pl_module.pixel_metrics,
+                outputs,
+            )

     def on_test_epoch_end(
         self,
@@ -181,7 +198,10 @@ def _update_metrics(
         image_metric.update(output["pred_scores"], output["label"].int())
         if "mask" in output and "anomaly_maps" in output:
             pixel_metric.to(self.device)
-            pixel_metric.update(torch.squeeze(output["anomaly_maps"]), torch.squeeze(output["mask"].int()))
+            pixel_metric.update(
+                torch.squeeze(output["anomaly_maps"]),
+                torch.squeeze(output["mask"].int()),
+            )

     def _outputs_to_device(self, output: STEP_OUTPUT) -> STEP_OUTPUT | dict[str, Any]:
         if isinstance(output, dict):
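Because `EXPLANATION` shares the classification branch in the `setup` hook above, any configured pixel-level metric names are discarded for that task and only image-level metrics are created. A toy sketch of that branch in isolation (the metric names are illustrative, not mandated by the PR):

```python
from anomalib import TaskType

task = TaskType.EXPLANATION
image_metric_names = ["AUROC", "F1Score"]
pixel_metric_names = ["AUROC"]  # would require masks, which this task lacks

# Mirrors the guard in the setup hook: pixel metrics are dropped for
# classification-like tasks, with a warning logged.
if task in (TaskType.CLASSIFICATION, TaskType.EXPLANATION):
    pixel_metric_names = []

print(image_metric_names)  # ['AUROC', 'F1Score']
print(pixel_metric_names)  # []
```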
21 changes: 17 additions & 4 deletions src/anomalib/data/base/dataset.py
@@ -20,9 +20,11 @@
 from anomalib.data.utils import LabelName, masks_to_boxes, read_image, read_mask

 _EXPECTED_COLUMNS_CLASSIFICATION = ["image_path", "split"]
+_EXPECTED_COLUMNS_EXPLANATION = ["image_path", "split"]
 _EXPECTED_COLUMNS_SEGMENTATION = [*_EXPECTED_COLUMNS_CLASSIFICATION, "mask_path"]
 _EXPECTED_COLUMNS_PERTASK = {
     "classification": _EXPECTED_COLUMNS_CLASSIFICATION,
+    "explanation": _EXPECTED_COLUMNS_EXPLANATION,
     "segmentation": _EXPECTED_COLUMNS_SEGMENTATION,
     "detection": _EXPECTED_COLUMNS_SEGMENTATION,
 }
@@ -61,7 +63,11 @@ class AnomalibDataset(Dataset, ABC):
             Defaults to ``None``.
     """

-    def __init__(self, task: TaskType | str, transform: Transform | None = None) -> None:
+    def __init__(
+        self,
+        task: TaskType | str,
+        transform: Transform | None = None,
+    ) -> None:
         super().__init__()
         self.task = TaskType(task)
         self.transform = transform
@@ -83,7 +89,11 @@ def __len__(self) -> int:
         """Get length of the dataset."""
         return len(self.samples)

-    def subsample(self, indices: Sequence[int], inplace: bool = False) -> "AnomalibDataset":
+    def subsample(
+        self,
+        indices: Sequence[int],
+        inplace: bool = False,
+    ) -> "AnomalibDataset":
         """Subsamples the dataset at the provided indices.

         Args:
@@ -169,7 +179,7 @@ def __getitem__(self, index: int) -> dict[str, str | torch.Tensor]:
         image = read_image(image_path, as_tensor=True)
         item = {"image_path": image_path, "label": label_index}

-        if self.task == TaskType.CLASSIFICATION:
+        if self.task in (TaskType.CLASSIFICATION, TaskType.EXPLANATION):
             item["image"] = self.transform(image) if self.transform else image
         elif self.task in (TaskType.DETECTION, TaskType.SEGMENTATION):
             # Only Anomalous (1) images have masks in anomaly datasets
@@ -204,5 +214,8 @@ def __add__(self, other_dataset: "AnomalibDataset") -> "AnomalibDataset":
             msg = "Cannot concatenate datasets that are not of the same type."
             raise TypeError(msg)
         dataset = copy.deepcopy(self)
-        dataset.samples = pd.concat([self.samples, other_dataset.samples], ignore_index=True)
+        dataset.samples = pd.concat(
+            [self.samples, other_dataset.samples],
+            ignore_index=True,
+        )
         return dataset
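For dataframe validation, an explanation dataset therefore needs the same columns as a classification one: `image_path` and `split`. A minimal sketch of a samples dataframe that would satisfy the per-task column check (paths and labels are made up for illustration):

```python
import pandas as pd

# Hypothetical samples dataframe for an explanation-task dataset.
samples = pd.DataFrame({
    "image_path": ["images/good_000.png", "images/crack_001.png"],
    "split": ["train", "test"],
    "label_index": [0, 1],
})

expected = ["image_path", "split"]  # _EXPECTED_COLUMNS_PERTASK["explanation"]
assert all(col in samples.columns for col in expected)
```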
8 changes: 6 additions & 2 deletions src/anomalib/data/base/depth.py
@@ -46,9 +46,13 @@ def __getitem__(self, index: int) -> dict[str, str | torch.Tensor]:

         image = to_tensor(Image.open(image_path))
         depth_image = to_tensor(read_depth_image(depth_path))
-        item = {"image_path": image_path, "depth_path": depth_path, "label": label_index}
+        item = {
+            "image_path": image_path,
+            "depth_path": depth_path,
+            "label": label_index,
+        }

-        if self.task == TaskType.CLASSIFICATION:
+        if self.task in (TaskType.CLASSIFICATION, TaskType.EXPLANATION):
             item["image"], item["depth_image"] = (
                 self.transform(image, depth_image) if self.transform else (image, depth_image)
             )
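As in the 2-D dataset, a depth sample for a classification or explanation task carries image tensors but no mask. A toy illustration of the resulting item keys (values are placeholders, not real tensors):

```python
# Hypothetical item produced by the depth dataset's __getitem__ for an
# explanation-task sample; tensor values are elided as strings here.
item = {
    "image_path": "bottle/test/broken_large/000.png",
    "depth_path": "bottle/test/broken_large/000_depth.tiff",
    "label": 1,
    "image": "<transformed RGB tensor>",
    "depth_image": "<transformed depth tensor>",
}
```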
27 changes: 21 additions & 6 deletions src/anomalib/deploy/inferencers/openvino_inferencer.py
@@ -27,7 +27,9 @@
 if TYPE_CHECKING:
     from openvino import CompiledModel
 else:
-    logger.warning("OpenVINO is not installed. Please install OpenVINO to use OpenVINOInferencer.")
+    logger.warning(
+        "OpenVINO is not installed. Please install OpenVINO to use OpenVINOInferencer.",
+    )


 class OpenVINOInferencer(Inferencer):
@@ -110,7 +112,10 @@

         self.task = TaskType(task) if task else TaskType(self.metadata["task"])

-    def load_model(self, path: str | Path | tuple[bytes, bytes]) -> tuple[Any, Any, "CompiledModel"]:
+    def load_model(
+        self,
+        path: str | Path | tuple[bytes, bytes],
+    ) -> tuple[Any, Any, "CompiledModel"]:
         """Load the OpenVINO model.

         Args:
@@ -143,7 +148,11 @@
             cache_folder.mkdir(exist_ok=True)
             core.set_property({"CACHE_DIR": cache_folder})

-        compile_model = core.compile_model(model=model, device_name=self.device, config=self.config)
+        compile_model = core.compile_model(
+            model=model,
+            device_name=self.device,
+            config=self.config,
+        )

         input_blob = compile_model.input(0)
         output_blob = compile_model.output(0)
@@ -238,7 +247,11 @@
         """
         return self.model(image)

-    def post_process(self, predictions: np.ndarray, metadata: dict | DictConfig | None = None) -> dict[str, Any]:
+    def post_process(
+        self,
+        predictions: np.ndarray,
+        metadata: dict | DictConfig | None = None,
+    ) -> dict[str, Any]:
         """Post process the output predictions.

         Args:
@@ -277,11 +290,13 @@
             pred_idx = pred_score >= metadata["image_threshold"]
             pred_label = LabelName.ABNORMAL if pred_idx else LabelName.NORMAL

-        if task == TaskType.CLASSIFICATION:
+        if task in (TaskType.CLASSIFICATION, TaskType.EXPLANATION):
             _, pred_score = self._normalize(pred_scores=pred_score, metadata=metadata)
         elif task in (TaskType.SEGMENTATION, TaskType.DETECTION):
             if "pixel_threshold" in metadata:
-                pred_mask = (anomaly_map >= metadata["pixel_threshold"]).astype(np.uint8)
+                pred_mask = (anomaly_map >= metadata["pixel_threshold"]).astype(
+                    np.uint8,
+                )

             anomaly_map, pred_score = self._normalize(
                 pred_scores=pred_score,
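End to end, an explanation-task inference through `OpenVINOInferencer.post_process` takes the classification path: the prediction score is normalized and labeled, and no pixel mask is produced. A hedged usage sketch (the paths are placeholders, and passing `task="explanation"` assumes you want to override the task stored in the exported metadata, per the constructor change above):

```python
from anomalib.deploy import OpenVINOInferencer

# Placeholder paths to an exported OpenVINO model and its metadata file.
inferencer = OpenVINOInferencer(
    path="results/weights/openvino/model.xml",
    metadata="results/weights/openvino/metadata.json",
    task="explanation",  # handled like classification in post_process
)

predictions = inferencer.predict(image="datasets/MVTec/bottle/test/broken_large/000.png")
print(predictions.pred_score, predictions.pred_label)
```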