
[Bug]: cannot import name '_encode_image' from 'ollama._client' #2453

Open
1 task done
zhangjy328 opened this issue Dec 6, 2024 · 8 comments · May be fixed by #2456
Assignees
Labels
Dependencies Pull requests that update a dependency file

Comments

@zhangjy328

Describe the bug

When I run train.py, it fails with:
ImportError: cannot import name '_encode_image' from 'ollama._client'

Dataset

Other (please specify in the text field below)

Model

PatchCore

Steps to reproduce the behavior

Install anomalib using the Anaconda prompt.

OS information


  • OS: win10
  • Python version: 3.10
  • Anomalib version: 1.2.0
  • PyTorch version: 2.5.1+cpu
  • CUDA/cuDNN version: none
  • GPU models and configuration: cpu
  • Any other relevant information: [e.g. I'm using a custom dataset]

Expected behavior

Traceback (most recent call last):
  File "E:\model\anomalib-main\train.py", line 3, in <module>
    from anomalib.models import Patchcore
  File "E:\model\anomalib-main\src\anomalib\models\__init__.py", line 15, in <module>
    from .image import (
  File "E:\model\anomalib-main\src\anomalib\models\image\__init__.py", line 23, in <module>
    from .vlm_ad import VlmAd
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\__init__.py", line 6, in <module>
    from .lightning_model import VlmAd
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\lightning_model.py", line 14, in <module>
    from .backends import Backend, ChatGPT, Huggingface, Ollama
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\backends\__init__.py", line 9, in <module>
    from .ollama import Ollama
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\backends\ollama.py", line 23, in <module>
    from ollama._client import _encode_image
ImportError: cannot import name '_encode_image' from 'ollama._client' (C:\Users\zjy\AppData\Local\anaconda3\envs\anomalib_env\lib\site-packages\ollama\_client.py)

Screenshots

No response

Pip/GitHub

GitHub

What version/branch did you use?

No response

Configuration YAML

# Import the required modules
from anomalib.data import MVTec
from anomalib.models import Patchcore
from anomalib.engine import Engine
from anomalib.data import Folder
from anomalib.utils.normalization import NormalizationMethod


# Initialize the datamodule, model and engine
# datamodule = MVTec(num_workers=0)
model = Patchcore()
engine = Engine(
    default_root_dir="./result",
    task="classification",
    callbacks=None,
    normalization=NormalizationMethod.MIN_MAX,
    threshold="F1AdaptiveThreshold",
    image_metrics=None,
    pixel_metrics=None,
    logger=None,
)

datamodule = Folder(
    name="mvtec",
    root="../datasets/CSP",
    normal_dir="normal",
    abnormal_dir="abnormal",
    task="classification",
    train_batch_size=16,
    eval_batch_size=16,
    num_workers=0,
    image_size=[1024, 1024],
    mask_dir=None,
    normal_split_ratio=0.2,
    seed=None,
)

datamodule.setup()

# Train the model
engine.train(datamodule=datamodule, model=model)

Logs

Traceback (most recent call last):
  File "E:\model\anomalib-main\train.py", line 3, in <module>
    from anomalib.models import Patchcore
  File "E:\model\anomalib-main\src\anomalib\models\__init__.py", line 15, in <module>
    from .image import (
  File "E:\model\anomalib-main\src\anomalib\models\image\__init__.py", line 23, in <module>
    from .vlm_ad import VlmAd
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\__init__.py", line 6, in <module>
    from .lightning_model import VlmAd
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\lightning_model.py", line 14, in <module>
    from .backends import Backend, ChatGPT, Huggingface, Ollama
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\backends\__init__.py", line 9, in <module>
    from .ollama import Ollama
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\backends\ollama.py", line 23, in <module>
    from ollama._client import _encode_image
ImportError: cannot import name '_encode_image' from 'ollama._client' (C:\Users\zjy\AppData\Local\anaconda3\envs\anomalib_env\lib\site-packages\ollama\_client.py)

Code of Conduct

  • I agree to follow this project's Code of Conduct
@samet-akcay
Contributor

samet-akcay commented Dec 6, 2024

Looks like the API has changed a bit in ollama. Can you try installing the library as follows: pip install "ollama<0.4"

@samet-akcay
Contributor

Until we adapt the new API changes, this should work
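For context, the private `_encode_image` helper in older ollama releases essentially base64-encodes the image bytes for a chat message, so a caller can sidestep the private import entirely. A minimal sketch of an equivalent helper (the name `encode_image` is my own, not anomalib's or ollama's public API):

```python
import base64
from pathlib import Path


def encode_image(image) -> str:
    """Base64-encode an image for an Ollama message payload.

    Accepts raw bytes or a file path, mirroring what ollama's
    private _encode_image helper did in pre-0.4 releases.
    """
    data = image if isinstance(image, bytes) else Path(image).read_bytes()
    return base64.b64encode(data).decode("utf-8")
```

This removes the dependency on a private symbol, so the code keeps working regardless of how ollama reorganizes `_client`.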

@samet-akcay samet-akcay added the Dependencies Pull requests that update a dependency file label Dec 6, 2024
@zhangjy328
Author

But I have a new question: I'm seeing "LightningModule.configure_optimizers returned None, this fit will run with no optimizer" and "training_step returned None" warnings:

C:\Users\zjy\AppData\Local\anaconda3\envs\anomalib_env\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
INFO:anomalib.data.base.datamodule:No normal test images found. Sampling from training set using a split ratio of 0.20
dict_keys(['image_path', 'label', 'image'])
dict_keys(['image_path', 'label', 'image'])
dict_keys(['image_path', 'label', 'image'])
INFO:anomalib.models.components.base.anomaly_module:Initializing Patchcore model.
INFO:timm.models._builder:Loading pretrained weights from Hugging Face hub (timm/wide_resnet50_2.racm_in1k)
INFO:timm.models._hub:[timm/wide_resnet50_2.racm_in1k] Safe alternative available for 'pytorch_model.bin' (as 'model.safetensors'). Loading weights using safetensors.
INFO:timm.models._builder:Missing keys (fc.weight, fc.bias) discovered while loading pretrained weights. This is expected if model is being adapted.
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
INFO:anomalib.data.base.datamodule:No normal test images found. Sampling from training set using a split ratio of 0.20
WARNING:anomalib.metrics.f1_score:F1Score class exists for backwards compatibility. It will be removed in v1.1. Please use BinaryF1Score from torchmetrics instead
C:\Users\zjy\AppData\Local\anaconda3\envs\anomalib_env\lib\site-packages\lightning\pytorch\core\optimizer.py:182: LightningModule.configure_optimizers returned None, this fit will run with no optimizer

| Name | Type | Params | Mode

0 | model | PatchcoreModel | 24.9 M | train
1 | _transform | Compose | 0 | train
2 | normalization_metrics | MetricCollection | 0 | train
3 | image_threshold | F1AdaptiveThreshold | 0 | train
4 | pixel_threshold | F1AdaptiveThreshold | 0 | train
5 | image_metrics | AnomalibMetricCollection | 0 | train
6 | pixel_metrics | AnomalibMetricCollection | 0 | train

24.9 M Trainable params
0 Non-trainable params
24.9 M Total params
99.450 Total estimated model params size (MB)
15 Modules in train mode
174 Modules in eval mode
C:\Users\zjy\AppData\Local\anaconda3\envs\anomalib_env\lib\site-packages\lightning\pytorch\trainer\connectors\data_connector.py:424: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the num_workers argument to num_workers=11 in the DataLoader to improve performance.
C:\Users\zjy\AppData\Local\anaconda3\envs\anomalib_env\lib\site-packages\lightning\pytorch\trainer\connectors\data_connector.py:424: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the num_workers argument to num_workers=11 in the DataLoader to improve performance.
Epoch 0: 0%| | 1/250 [00:51<3:35:14, 0.02it/s]C:\Users\zjy\AppData\Local\anaconda3\envs\anomalib_env\lib\site-packages\lightning\pytorch\loops\optimization\automatic.py:132: training_step returned None. If this was on purpose, ignore this warning...

@samet-akcay
Contributor

Patchcore doesn't have an optimizer, so this is on purpose. As long as you get a val/test score, everything works as expected
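To illustrate why both warnings are benign: memory-bank models like Patchcore collect feature embeddings during training_step rather than optimizing a loss, so neither hook returns anything. A hypothetical stand-in class (not anomalib's actual code) showing the pattern Lightning is warning about:

```python
class MemoryBankModelSketch:
    """Hypothetical stand-in for a Patchcore-style LightningModule."""

    def __init__(self):
        self.memory_bank = []

    def configure_optimizers(self):
        # No loss to optimize, so Lightning warns:
        # "configure_optimizers returned None, this fit will run with no optimizer"
        return None

    def training_step(self, batch, batch_idx):
        # Collect embeddings instead of returning a loss, so Lightning warns:
        # "training_step returned None"
        self.memory_bank.append(batch)
        return None
```

The anomaly scores come from comparing test features against the collected memory bank, which is why the val/test metrics are the real health check here.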

@samet-akcay samet-akcay linked a pull request Dec 6, 2024 that will close this issue
9 tasks
@zhangjy328
Author

Thanks! So everything is normal, then? I just wait for it to finish training and produce results?

@zhangjy328
Author

And I also have another question: how can I continue training the model based on the results of the previous training session?

@neoragex2002

neoragex2002 commented Dec 10, 2024

Looks like the api has changed a bit in ollama. Can you try installing the library as follows: pip install "llama<0.4"

it is not pip install "llama<0.4"

it is pip install "ollama<0.4" ...

@samet-akcay
Contributor

Well, it was a typo, thanks for catching it.
