[Bug]: cannot import name '_encode_image' from 'ollama._client' #2453
Comments
Looks like the API has changed a bit in ollama. Can you try installing the library as follows:
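(The pinned install command from the original comment was not preserved in this copy. A plausible equivalent, assuming `_encode_image` was dropped from `ollama._client` in the 0.4 rewrite of the ollama Python client, would be to pin an earlier release:)

```shell
# Assumption: ollama Python client releases before 0.4 still ship
# ollama._client._encode_image, which anomalib's Ollama backend imports.
pip install "ollama<0.4"
```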
Until we adapt to the new API changes, this should work.
But I have a new question. When training starts, I get this warning and model summary:

C:\Users\zjy\AppData\Local\anaconda3\envs\anomalib_env\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers

  | Name  | Type           | Params | Mode
0 | model | PatchcoreModel | 24.9 M | train
Patchcore doesn't have an optimizer, so this is on purpose. As long as you get a val/test score, everything works as expected.
Thanks. So is everything normal like this? Do I just wait for it to finish training and produce results?
And I also have another question: how can I continue training the model from the results of a previous training session?
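(Not an official answer from the maintainers, but a minimal sketch of the workflow being discussed, assuming the v1-style anomalib API (`Engine`, a datamodule such as `MVTec`) and that `Engine.fit` forwards `ckpt_path` to the underlying Lightning `Trainer`; the checkpoint path shown is hypothetical:)

```python
from anomalib.data import MVTec      # placeholder; the reporter uses a custom ("Other") dataset
from anomalib.engine import Engine
from anomalib.models import Patchcore

datamodule = MVTec()
model = Patchcore()                  # memory-bank model: no optimizer, so no loss/LR logging
engine = Engine()

# First run: fit builds the memory bank, test reports the val/test scores
# the maintainer mentions (e.g. image/pixel AUROC).
engine.fit(model=model, datamodule=datamodule)
engine.test(model=model, datamodule=datamodule)

# To pick up from an earlier run, point the engine at the saved checkpoint.
# The path below is hypothetical; anomalib writes checkpoints under its results directory.
engine.fit(
    model=model,
    datamodule=datamodule,
    ckpt_path="results/Patchcore/MVTec/bottle/v0/weights/lightning/model.ckpt",
)
```

Note that PatchCore builds its memory bank in a single pass with no gradient updates, so "continuing training" in practice means reloading the checkpoint for further validation, testing, or inference rather than resuming an optimizer state.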
it is not it is |
Well, it's a typo. Thanks for the fix.
Describe the bug
When I run train.py, I get: ImportError: cannot import name '_encode_image' from 'ollama._client'
Dataset
Other (please specify in the text field below)
Model
PatchCore
Steps to reproduce the behavior
Install anomalib using the Anaconda Prompt, then run train.py.
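(For context, the traceback further down fails on the very first anomalib import in train.py, so a minimal reproduction needs nothing beyond that import. A sketch, assuming anomalib is installed from source and the installed ollama client no longer provides `_encode_image`:)

```python
# train.py -- minimal reproduction sketch (assumption: anomalib installed from
# source and an ollama Python client without ollama._client._encode_image).
from anomalib.models import Patchcore  # the import chain reaches the Ollama VLM backend and fails here

print(Patchcore)  # never reached; the ImportError in the traceback below is raised during import
```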
OS information
Expected behavior
Traceback (most recent call last):
  File "E:\model\anomalib-main\train.py", line 3, in <module>
    from anomalib.models import Patchcore
  File "E:\model\anomalib-main\src\anomalib\models\__init__.py", line 15, in <module>
    from .image import (
  File "E:\model\anomalib-main\src\anomalib\models\image\__init__.py", line 23, in <module>
    from .vlm_ad import VlmAd
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\__init__.py", line 6, in <module>
    from .lightning_model import VlmAd
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\lightning_model.py", line 14, in <module>
    from .backends import Backend, ChatGPT, Huggingface, Ollama
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\backends\__init__.py", line 9, in <module>
    from .ollama import Ollama
  File "E:\model\anomalib-main\src\anomalib\models\image\vlm_ad\backends\ollama.py", line 23, in <module>
    from ollama._client import _encode_image
ImportError: cannot import name '_encode_image' from 'ollama._client' (C:\Users\zjy\AppData\Local\anaconda3\envs\anomalib_env\lib\site-packages\ollama\_client.py)
Screenshots
No response
Pip/GitHub
GitHub
What version/branch did you use?
No response
Configuration YAML
Logs
Code of Conduct