Set model attributes from runner? #3392
Unanswered
RobbieFernandez asked this question in Q&A
Replies: 1 comment
-
I wasn't able to do it through the `Runner` directly, so I ended up writing a custom `Runnable` that takes the threshold as a method argument:

```python
import bentoml
import torch


class YoloRunnable(bentoml.Runnable):
    SUPPORTS_CPU_MULTI_THREADING = True

    def __init__(self, model_file):
        # Load the custom YOLOv5 weights through torch.hub
        self.model = torch.hub.load(
            'ultralytics/yolov5',
            'custom',
            path=model_file,
            trust_repo=True
        )

    @bentoml.Runnable.method()
    def predict(self, image, confidence_threshold):
        # Mutating this object seems like a bad idea :(
        self.model.conf = confidence_threshold
        return self.model(image)


runner = bentoml.Runner(
    YoloRunnable,
    name="fish_finder",
    runnable_init_params={"model_file": "my_model_file"}
)
```

My question now is: are there any issues with mutating the model like that? If multiple requests are handled asynchronously, will they interfere with each other?
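(Editorial illustration, not from the thread.) Because `SUPPORTS_CPU_MULTI_THREADING = True` lets the runnable's `predict` run on several threads at once, one request could overwrite `self.model.conf` while another is mid-inference. A minimal sketch of one way to avoid that, reusing the same custom runnable, is to hold a lock so the attribute write and the forward pass happen as one step:

```python
import threading

import bentoml
import torch


class YoloRunnable(bentoml.Runnable):
    SUPPORTS_CPU_MULTI_THREADING = True

    def __init__(self, model_file):
        self.model = torch.hub.load(
            'ultralytics/yolov5', 'custom', path=model_file, trust_repo=True
        )
        # Serialize access to the shared model so that setting `conf`
        # and running inference behave as one atomic step per request.
        self._lock = threading.Lock()

    @bentoml.Runnable.method()
    def predict(self, image, confidence_threshold):
        with self._lock:
            self.model.conf = confidence_threshold
            return self.model(image)
```

The trade-off is that inference is serialized inside the runnable, so the multi-threading flag no longer buys any concurrency for this method.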
-
I'm using BentoML to wrap a YOLO model that's created with this library: https://github.com/fcakyon/yolov5-pip.
From the library docs, this is how it can be used outside of BentoML (note that there are extra parameters that can be set on the model instance that affect inference):
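The snippet from the library docs wasn't captured in this extract; following the yolov5-pip README it is roughly along these lines (the weight file and image path are placeholders):

```python
import yolov5

# Load a model (local weights or a hub model name)
model = yolov5.load('yolov5s.pt')

# Extra attributes that affect inference
model.conf = 0.25     # NMS confidence threshold
model.iou = 0.45      # NMS IoU threshold
model.max_det = 1000  # maximum number of detections per image

# Perform inference
results = model('image.jpg')
```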
I'm saving the model with this code:
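The original save snippet is also missing from this extract; a plausible version, assuming the BentoML PyTorch integration and a hypothetical model name and weights path, would be:

```python
import bentoml
import yolov5

# Hypothetical weights path; substitute your own trained model file
model = yolov5.load('my_model_file.pt')

# Persist the model to the local BentoML model store
bentoml.pytorch.save_model("fish_finder", model)
```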
My question is: when writing a service for this model, is it possible to expose those additional attributes (`model.conf` etc.)? I would like clients to be able to specify the confidence threshold when making inference requests, but I'm not sure how to set those attributes through a `Runner` instance.
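(Editorial illustration, not part of the original post.) One way to expose the threshold to clients, assuming the custom runnable from the reply above and BentoML's 1.x service API, is to accept it as part of the request and forward it to the runner method:

```python
import bentoml
from bentoml.io import Image, JSON, Multipart

svc = bentoml.Service("fish_finder_service", runners=[runner])


@svc.api(input=Multipart(image=Image(), params=JSON()), output=JSON())
async def detect(image, params):
    # The client supplies the threshold per request; fall back to a default.
    conf = params.get("confidence_threshold", 0.25)
    results = await runner.predict.async_run(image, conf)
    # Convert the yolov5 Detections object into something JSON-serializable.
    return results.pandas().xyxy[0].to_dict(orient="records")
```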