modes/export/ #7933
73 comments · 199 replies
-
Where can we find working examples of a tf.js exported model?
-
How can I use an exported .engine file to run inference on a directory of images?
-
I trained a custom model starting from yolov8n.pt (backbone) and I want to register the model in MLflow in the .engine format. Is that possible directly, without the export step? Has anyone dealt with something similar? Thanks for your help!
-
Hi, I really appreciate the awesome work within Ultralytics. I have a simple question: what is the difference between
-
Hello @pderrenger, can you please help me with how to use the PaddlePaddle format to extract text from images? Your response is very important to me; I'm waiting for your reply.
-
My code:

```python
from ultralytics import YOLO

model = YOLO('yolov8n_web_model/yolov8n.pt')  # load an official model
model = YOLO('/path_to_model/best.pt')
```

I got an error; the trace log is below:

```
What you should do instead is wrap
ERROR: input_onnx_file_path: /home/ubuntu/Python/runs/detect/train155/weights/best.onnx
TensorFlow SavedModel: export failure ❌ 7.4s: SavedModel file does not exist at: /home/ubuntu/Python/runs/detect/train155/weights/best_saved_model/{saved_model.pbtxt|saved_model.pb}
```

What is wrong, and what do I need to do to fix it? Thanks a lot.
-
Hello! The error I get is "TypeError: Model.export() takes 1 positional argument but 2 were given".
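A minimal sketch of why this TypeError typically appears, assuming `export()` accepts keyword arguments only (as Ultralytics' `Model.export` does in recent versions). The `Model` class below is a hypothetical stand-in, not the real library code:

```python
class Model:
    """Toy stand-in for an Ultralytics-style model (hypothetical, for illustration)."""

    def export(self, **kwargs):
        # Accepts keyword arguments only, e.g. format="onnx".
        return kwargs.get("format", "torchscript")


m = Model()
print(m.export(format="onnx"))  # keyword argument: works

try:
    m.export("onnx")  # positional argument triggers the TypeError
except TypeError as e:
    print(e)  # e.g. "Model.export() takes 1 positional argument but 2 were given"
```

If the real call site passes the format positionally, e.g. `model.export("onnx")`, changing it to `model.export(format="onnx")` would be the first thing to try.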
-
Are there any examples of getting the output of a pose estimation model in C++ using a TorchScript file? I'm getting an output of shape (1, 56, 8400) for an input of size (1, 3, 640, 640) with two people in the sample picture. How should I interpret/post-process this output?
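Not an official Ultralytics post-processing routine, but a sketch (in NumPy for clarity) of how a (1, 56, 8400) pose tensor is commonly unpacked: 56 = 4 box coords (cx, cy, w, h) + 1 person confidence + 17 COCO keypoints × (x, y, visibility), and the 8400 columns are candidate anchors from strides 8/16/32 on a 640×640 input:

```python
import numpy as np

def decode_pose(output, conf_thres=0.25):
    """Unpack a (1, 56, 8400) YOLOv8-pose output into boxes, scores, keypoints."""
    preds = output[0].T                     # (8400, 56): one row per candidate
    boxes_xywh = preds[:, :4]               # cx, cy, w, h in input-image pixels
    scores = preds[:, 4]                    # person confidence
    kpts = preds[:, 5:].reshape(-1, 17, 3)  # (8400, 17, 3): x, y, visibility
    keep = scores > conf_thres              # confidence filter (NMS still needed after this)
    return boxes_xywh[keep], scores[keep], kpts[keep]

# Smoke test on random data shaped like the real output:
out = np.random.rand(1, 56, 8400).astype(np.float32)
boxes, scores, kpts = decode_pose(out, conf_thres=0.5)
assert boxes.shape[1] == 4 and kpts.shape[1:] == (17, 3)
```

With two people in frame you would expect two surviving candidates after confidence filtering plus non-maximum suppression; the same indexing translates directly to C++ by treating the TorchScript output as a row-major 56 × 8400 array.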
-
I trained a yolov5 detection model a little while ago and successfully converted it to tensorflowjs. That tfjs model works as expected in code only slightly modified from the example available at https://github.com/zldrobit/tfjs-yolov5-example. My version of the relevant section:

I have now trained a yolov8 detection model on very similar data. The comments in https://github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/exporter.py#L45-L49 suggested the same usage would apply. However, that does not seem to be the case. The v5 model output is the 4-length array of tensors (which is why the destructuring assignment works), but the v8 model output is a single tensor of shape [1, X, 8400], so the example code errors out complaining that the model result is non-iterable when it attempts to destructure. From what I understand, [1, X, 8400] is the expected output shape of the v8 model. Is further processing of the v8 output required, or did I do something wrong during the pt -> tfjs export?
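Not the official tfjs example, but a sketch of the extra decoding the v8 head needs (shown in Python/NumPy for clarity; the same steps apply to the tf.js tensor). The single [1, X, 8400] tensor packs 4 box coords plus X−4 per-class scores per column, with no separate objectness score and no pre-applied NMS, unlike the v5 export's four-tensor output:

```python
import numpy as np

def decode_v8(output, conf_thres=0.25):
    """Decode a (1, 4 + nc, 8400) YOLOv8 detection output."""
    preds = output[0].T                       # (8400, 4 + nc)
    boxes = preds[:, :4]                      # cx, cy, w, h
    class_scores = preds[:, 4:]               # per-class confidences
    cls_ids = class_scores.argmax(axis=1)     # best class per candidate
    confs = class_scores.max(axis=1)
    keep = confs > conf_thres
    b = boxes[keep]
    # Convert cx,cy,w,h -> x1,y1,x2,y2 for the kept candidates
    xyxy = np.stack([b[:, 0] - b[:, 2] / 2, b[:, 1] - b[:, 3] / 2,
                     b[:, 0] + b[:, 2] / 2, b[:, 1] + b[:, 3] / 2], axis=1)
    return xyxy, confs[keep], cls_ids[keep]   # still needs NMS

out = np.zeros((1, 84, 8400), dtype=np.float32)   # 80-class COCO model
out[0, :4, 0] = [320, 320, 100, 50]               # one fake box at the center
out[0, 4, 0] = 0.9                                 # class-0 score
xyxy, confs, cls_ids = decode_v8(out)
print(xyxy)  # [[270. 295. 370. 345.]]
```

So the destructuring assignment from the v5 example has to be replaced by this kind of transpose-and-split decoding plus NMS when consuming the v8 tfjs output.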
-
I was wondering if anyone could help me with this code. I exported my custom-trained yolov8n.pt model to .onnx with `model.export(format='onnx', int8=True, dynamic=True)`, but now my code is not working: I am having trouble using the outputs after running inference. My code:

```python
def load_image(image_path):
    ...

def draw_bounding_boxes(image, detections, confidence_threshold=0.5):
    ...

def main(model_path, image_path):
    ...

if __name__ == "__main__":
    ...
```

Error:
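Raw ONNX outputs still need confidence filtering and non-maximum suppression before a `draw_bounding_boxes`-style function can use them. Here is a NumPy-only greedy NMS sketch (boxes assumed already converted to x1, y1, x2, y2; this is a plain reference implementation, not Ultralytics' internal one):

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.45):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = scores.argsort()[::-1]          # highest score first
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with the remaining candidates
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thres]  # drop heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=np.float32)
scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)
print(nms(boxes, scores))  # [0, 2] — the second box overlaps the first too much
```

The surviving indices select the detections that get passed on to drawing.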
-
"batch_size" is not in arguments as previous versions? |
Beta Was this translation helpful? Give feedback.
-
I converted the model I trained with custom data to TFLite format. Before converting, I set the int8 argument to true. But when I examined the TFLite file on the Netron website, I saw that the input is still float32. Is this normal, or is it a bug? Also, thank you very much for answering every question without getting bored.
-
```
!yolo export model=/content/drive/MyDrive/best-1-1.pt format=tflite
```

export failure ❌ 33.0s: generic_type: cannot initialize type "StatusCode": an object with that name is already defined
-
Hi, I have tried all the TFLite export routes to convert best.pt to .tflite, but none is working. I have also checked my runtime and all the latest imports (`pip install -U ultralytics`), and I have tried the code you gave to someone in the comments, but the issue is not resolving.

Step 1: Export to TensorFlow SavedModel

```
!yolo export model='/content/drive/MyDrive/best-1-1.pt' format=saved_model
```

Step 2: Convert the exported SavedModel to TensorFlow Lite

```python
import tensorflow as tf
```

Save the TFLite model:

```python
with open('/content/drive/MyDrive/yolov8_model.tflite', 'wb') as f:
```

But the same error comes back.
-
Can we export SAM / MobileSAM models to TensorRT or ONNX?
-
Hi, can you show me example code to use with the engine format? Is `model = YOLO("yolov8l.engine")` good? With the .pt file the model detects the correct objects, but not with the .engine. Anyway, this is how I export to .engine: `out = model.export(format="engine", half=True, device=0)`
-
Thanks!
-
Hello! Thanks for YOLO. Is it possible to change the image input type (to uint8) and the outputs (score, location, number of detections) during conversion to TFLite?
-
I get an error:
-
I am trying to load a YOLO TensorFlow Lite model using the GPU, but the current approach is not working. I have attempted loading with
-
Hello @pderrenger, @glenn-jocher. Does YOLOv8 support converting PT (PyTorch) models into apps with the .app extension, or into binary files without an extension?
-
How do I use the exported ONNX file?
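Loading the file with onnxruntime is one common route. Since I can't confirm the environment, here is only the NumPy preprocessing half: a letterbox resize producing the (1, 3, 640, 640) float32 tensor the exported detector expects. A nearest-neighbour resize is used to avoid extra dependencies; Ultralytics itself uses a cv2-based letterbox:

```python
import numpy as np

def letterbox(img, size=640, pad_value=114):
    """Resize an HWC uint8 image to size x size, preserving aspect ratio with
    gray padding, and return a (1, 3, size, size) float32 tensor in [0, 1]."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = round(h * scale), round(w * scale)
    # Nearest-neighbour resize via index arrays (stand-in for cv2.resize)
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    canvas = np.full((size, size, 3), pad_value, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    x = canvas.astype(np.float32) / 255.0    # HWC uint8 -> HWC float [0, 1]
    return x.transpose(2, 0, 1)[None]        # -> (1, 3, size, size) NCHW

img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
batch = letterbox(img)
assert batch.shape == (1, 3, 640, 640) and batch.dtype == np.float32
```

The resulting tensor can then be fed to an `onnxruntime.InferenceSession`, and the raw output decoded and NMS-filtered afterwards.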
-
Is there any roadmap for fixing the dependency problems for the export case? (onnx2tf seems not to be working with the export method in Ultralytics.) The problem is at the last line: No module named 'imp'.

```python
model = YOLO('yolo11n.pt')
model.export(format="tflite",
```

```
Ultralytics 8.3.20 🚀 Python-3.12.7 torch-2.5.0+cpu CPU (12th Gen Intel Core(TM) i7-12700H)
PyTorch: starting from 'yolo11n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (5.4 MB)
TensorFlow SavedModel: starting export with tensorflow 2.17.0...
ONNX: starting export with onnx 1.17.0 opset 19...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
```
-
model.export(format="onnx") only supports onnx<=1.16.0; the newest 1.17.0 is not supported.
-
Why is it that when I export an 'onnx' file with 'int8' and 'dynamic' it works, but when I export with 'float16' and 'dynamic' it doesn't?
-
Hi, could I set the ONNX opset to 13, 11, or 10? Is it possible to downgrade the ONNX opset version?
-
Hello!

Load a model:

```python
model = YOLO("./runs/detect/train_v8_bird10/weights/best.pt")  # load a custom trained model
```

Export the model:

```python
model.export(format="torchscript", imgsz=320)
```

```
PyTorch: starting from 'runs\detect\train_v8_bird10\weights\best.pt' with input shape (1, 3, 320, 320) BCHW and output shape(s) (1, 5, 2100) (5.9 MB)
TorchScript: starting export with torch 1.12.1+cu116...
```
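A guess at interpreting that (1, 5, 2100) TorchScript output, assuming a single-class YOLOv8 detection head at imgsz=320: each of the 2100 columns is one candidate carrying cx, cy, w, h plus one class confidence (2100 = 40² + 20² + 10² anchor positions at strides 8/16/32):

```python
import numpy as np

def filter_single_class(output, conf_thres=0.25):
    """Keep candidates above the confidence threshold from a (1, 5, N) output."""
    preds = output[0].T            # (N, 5): cx, cy, w, h, confidence
    keep = preds[:, 4] > conf_thres
    # Returns boxes (xywh, input-image pixels) and scores; NMS is still needed.
    return preds[keep, :4], preds[keep, 4]

out = np.zeros((1, 5, 2100), dtype=np.float32)
out[0, :, 7] = [160, 160, 40, 80, 0.88]   # one fake bird at the image center
boxes, scores = filter_single_class(out)
assert boxes.shape == (1, 4)
```

Coordinates come out in the 320×320 letterboxed frame and have to be mapped back to the original image size.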
-
I am using a Tesla T4 GPU but I can't achieve 3 ms inference time; it takes 7-8 ms for the OpenVINO model.
-
Hello team, could you let me know why this happens? Why is my code not running smoothly, with the expected speed-up, on the NVIDIA Jetson AGX Orin 64GB Developer Kit?
-
Hello, I'm trying to convert a YOLOv8 custom-trained classification model .pt file to IMX format to use with the new Raspberry Pi AI Camera. However, during initialization it tries to find labels, but since it is a classification model and not a detection model, there are no labels.
Will this be a problem during export, or should I use other arguments? Thanks!
-
modes/export/
Step-by-step guide on exporting your YOLOv8 models to various formats like ONNX, TensorRT, CoreML and more for deployment. Explore now!
https://docs.ultralytics.com/modes/export/