Replies: 2 comments
-
👋 Hello @vdt104, thank you for reaching out and contributing to the Ultralytics community 🚀! We understand overlapping label predictions can be challenging. We recommend taking a look at our comprehensive Docs, where you can find insights on using YOLO from Python and the CLI; these resources often cover common issues. Since you mentioned a problem with NMS affecting your evaluation scores, understanding the context in which NMS is applied can be vital.

If this is a 🐛 Bug Report, please help us assist you by providing a minimum reproducible example so we can dive deeper into potential issues. If this is instead a custom training ❓ Question, detailed information, including dataset examples and training logs, is crucial for us to give precise advice. Make sure also to follow our Tips for Best Training Results.

You're also welcome to engage with the Ultralytics community for more support. For real-time conversations, join us on Discord 🎧; for more detailed discussions, visit our Discourse or explore threads on our Subreddit.

Upgrade: To ensure any recent patches or improvements potentially relevant to your issue have been applied, upgrade to the latest version with `pip install -U ultralytics`.

Environments: Consider trying YOLO in these verified environments, each preconfigured with the necessary dependencies, including CUDA, cuDNN, Python, and PyTorch:
Status: This badge reflects the current status of our Ultralytics CI tests. If green, all tests validating YOLO operations on macOS, Windows, and Ubuntu are passing, confirming current stability. This response is automated, but an Ultralytics engineer will review your discussion and provide additional assistance soon. 😊
-
@vdt104 to address label overlap issues, consider adjusting the confidence threshold and IoU threshold for Non-Maximum Suppression (NMS) to better differentiate between classes. Additionally, reviewing and refining your dataset annotations might help improve model accuracy.
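One likely reason the same vehicle keeps both a car box and a motorbike box is that standard NMS is applied per class, so boxes of different classes never suppress each other. A class-agnostic NMS pass compares boxes across all classes and keeps only the highest-scoring one. Here is a minimal sketch of that idea (not the Ultralytics implementation; the box format, scores, and threshold below are illustrative assumptions):

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def agnostic_nms(dets, iou_thres=0.5):
    """dets: list of (box, score, cls). Suppresses overlapping boxes
    by IoU regardless of class, keeping the higher-scoring one."""
    dets = sorted(dets, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score, cls in dets:
        if all(iou(box, k[0]) < iou_thres for k in kept):
            kept.append((box, score, cls))
    return kept

# A car (cls 1, high score) overlapped by a near-identical
# motorbike box (cls 0, lower score): only the car survives.
dets = [((10, 10, 50, 50), 0.92, 1), ((11, 11, 51, 49), 0.40, 0)]
print(agnostic_nms(dets))
```

In Ultralytics YOLO you can get this behavior at inference time by passing `agnostic_nms=True` (alongside `conf` and `iou`) to `model.predict()`. Note that class-agnostic suppression can lower mAP@50 when genuinely distinct objects of different classes overlap, which may explain the score drop you observed.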
-
My model, trained with YOLO, predicts on the test set, and some vehicles labeled as cars (label 1) are being detected as both 0 (0 means motorbikes) and 1, with both bounding boxes almost overlapping. Ideally, those objects should only have label 1. What should I do to resolve this issue? I’ve tried using NMS, but it significantly reduces the evaluation mAP@50 score.