How to reduce the model's over-generalization #17044
wylzhouzhou asked this question in Q&A · Unanswered
Replies: 1 comment
-
@wylzhouzhou to improve your model's specificity, consider increasing the dataset size with more diverse examples of unadorned palms. You might also try fine-tuning the model with a higher learning rate or using a more specific loss function. Upgrading to YOLO11 could offer enhanced features, but dataset quality remains crucial. For more tips, visit our model training guide at https://docs.ultralytics.com/guides/model-training-tips/.
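A minimal sketch of what such a fine-tuning run could look like with the Ultralytics Python API. The dataset file, checkpoint, and hyperparameter values below are illustrative assumptions, not settings taken from this thread:

```python
from ultralytics import YOLO

# Start from a pretrained detection checkpoint (swap in your own weights if you have them).
model = YOLO("yolov8n.pt")

# "palms.yaml" is a hypothetical dataset config. The train split would include the
# newly collected, more diverse bare-palm images; unlabeled images of gloved or
# ringed hands act as background examples and can help suppress those false positives.
results = model.train(
    data="palms.yaml",
    epochs=300,
    imgsz=640,
    lr0=0.02,      # higher initial learning rate than the 0.01 default
    patience=50,   # stop early once validation metrics stop improving
    device=0,
)

# Evaluate the tuned model, ideally on a validation set that also contains gloved hands.
metrics = model.val(data="palms.yaml")
```

Whether the higher learning rate helps depends on how far the new images shift the data distribution, so it is worth comparing against a run that keeps the default lr0.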
-
I have recently been training a model to detect palms. My dataset is small, only 100 images in which the palm is clearly visible, and every image shows a bare palm with no occluding items. I need the model to detect only palms that are not wearing gloves, rings, or any other accessories.
However, with the default training parameters, the trained model also detects many palms wearing gloves of various colors or rings.
I adjusted the data augmentation parameters, setting "erasing=0.0, translate=0.0" and raising the schedule to "epochs=1500". The complete training call was: results = model.train(data=data_yaml, epochs=1500, batch=1, device=0, cache=True, patience=0, translate=0.0, hsv_h=0.0, hsv_s=0.2, hsv_v=0.1, degrees=90.0, flipud=0.5, mosaic=0.0, erasing=0.0, crop_fraction=0.1). The result is still unsatisfactory: the model does detect palms, but I want it to detect only palms without any gloves or accessories.
I think the model is generalizing too much, but training with a much lower "epochs=50" did not achieve my goal either.
What should I do? I am still using YOLOv8; will I run into the same problem if I upgrade to YOLO11?
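For reference, here is the training call described above rewritten as a self-contained script with the augmentation settings commented. The checkpoint name and dataset path are assumptions, since the thread does not show the full script:

```python
from ultralytics import YOLO

# The thread only says "YOLOv8", so the exact variant is assumed here.
model = YOLO("yolov8n.pt")

data_yaml = "palms.yaml"  # hypothetical path to the 100-image bare-palm dataset

# Training call as reported in the question.
results = model.train(
    data=data_yaml,
    epochs=1500,
    batch=1,
    device=0,
    cache=True,
    patience=0,         # as reported; controls early stopping
    translate=0.0,      # random translation disabled
    hsv_h=0.0,          # no hue jitter
    hsv_s=0.2,          # mild saturation jitter
    hsv_v=0.1,          # mild brightness jitter
    degrees=90.0,       # random rotation up to +/-90 degrees
    flipud=0.5,         # vertical flip with probability 0.5
    mosaic=0.0,         # mosaic augmentation disabled
    erasing=0.0,        # random erasing disabled
    crop_fraction=0.1,  # as reported in the question
)
```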