-
@7assanx7 Using the generator as shown below, you can run YOLO frame by frame:

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
results = model.predict(source=src, verbose=False, stream=True)  # src = your video/webcam source

for r in results:
    if 0 in r.boxes.cls:  # the boxes object contains the bbox outputs
        # YOUR CODE
        # play sound function
        pass
```
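Here is a minimal sketch of what the play-sound step could look like, assuming the `playsound` package (`pip install playsound`) and a local `alert.mp3` file; the package, the file path, and the webcam source are placeholders, and any mp3 playback library would work the same way:

```python
import time
from threading import Thread

from playsound import playsound  # assumption: any mp3 playback library works here
from ultralytics import YOLO

ALERT_MP3 = 'alert.mp3'  # hypothetical path to your sound file
COOLDOWN_S = 2.0         # minimum seconds between alerts
last_played = 0.0

model = YOLO('yolov8n.pt')

# stream=True returns a generator, so results arrive one frame at a time
# instead of being buffered for the whole video.
for r in model.predict(source=0, verbose=False, stream=True):  # source=0 = default webcam
    if 0 in r.boxes.cls and time.time() - last_played > COOLDOWN_S:  # class 0 = 'person' in COCO weights
        # Play in a background thread so the detection loop keeps running.
        Thread(target=playsound, args=(ALERT_MP3,), daemon=True).start()
        last_played = time.time()
```

Playing the file in a daemon thread keeps the detection loop from stalling, and the cooldown stops the alert from re-triggering on every consecutive frame.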
-
```python
# Ultralytics YOLO 🚀, GPL-3.0 license

import torch

from ultralytics.yolo.engine.predictor import BasePredictor
from ultralytics.yolo.engine.results import Results
from ultralytics.yolo.utils import DEFAULT_CFG, ROOT, ops
from ultralytics.yolo.utils.plotting import Annotator, colors, save_one_box


class DetectionPredictor(BasePredictor):
    ...  # detection post-processing methods omitted in this excerpt


def predict(cfg=DEFAULT_CFG, use_python=False):
    model = cfg.model or 'yolov8n.pt'
    source = cfg.source if cfg.source is not None else ROOT / 'assets' if (ROOT / 'assets').exists() \
        else 'https://ultralytics.com/images/bus.jpg'


if __name__ == '__main__':
    predict()
```
How can I play a sound for each detection or class during real-time detection? I want to play an mp3 file whenever class 0 is detected.