Thank you for reaching out and providing detailed information about the issue you're encountering. It appears that the pycocotools (COCO API) metrics are reporting near-zero values even though the built-in validation metrics look correct.
To better assist you, could you please provide a minimal reproducible example of your code? This will help us investigate the issue more effectively. You can refer to our guide on creating a minimal reproducible example here: Minimum Reproducible Example.
In the meantime, please ensure that you are using the latest versions of torch and the YOLOv5 repository. You can update your repository and dependencies with the following commands:
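A typical way to update both (a sketch, assuming you are inside your local clone of the YOLOv5 repository; adjust the torch install command for your CUDA version):

```shell
# From inside your yolov5 clone: pull the latest code
git pull

# Refresh the pinned dependencies, including torch
pip install -r requirements.txt --upgrade
```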
Additionally, the warning message `confidence threshold 0.1 > 0.001 produces invalid results` indicates that your confidence threshold is too high for mAP computation. COCO mAP is computed by integrating the precision-recall curve, which requires keeping low-confidence predictions; filtering at 0.1 discards them and skews the reported AP/AR. Try lowering the threshold back to the default of 0.001 to see if it affects the results.
Here's a quick example of how you might adjust the confidence threshold:
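For example, rerunning your command from the log above with the default threshold (paths shown are from your log; adjust as needed):

```shell
# Validate with the default conf threshold of 0.001 so mAP is computed
# over the full precision-recall curve, and still export the COCO JSON
python val.py --data data/coco.yaml --weights yolov5s.pt --batch-size 4 \
    --imgsz 640 --conf-thres 0.001 --iou-thres 0.45 --save-json
```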
This should help ensure that the confidence threshold is not impacting your results.
Please let us know if the issue persists after trying these steps, and don't hesitate to share the minimal reproducible example for further investigation.
Search before asking
YOLOv5 Component
No response
Bug
(yolov5) F:\Tensorrt\yolov5>python val.py
val: data=data\coco.yaml, weights=yolov5s.pt, batch_size=4, imgsz=640, conf_thres=0.1, iou_thres=0.45, max_det=300, task=val, device=0, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs\val, name=exp, exist_ok=False, half=False, dnn=False
WARNING confidence threshold 0.1 > 0.001 produces invalid results
YOLOv5 v7.0-334-g100a423b Python-3.10.13 torch-2.1.0+cu118 CUDA:0 (NVIDIA GeForce RTX 3060 Ti, 8192MiB)
Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
val: Scanning F:\LSR\datasets\coco\labels\val2017... 4952 images, 48 backgrounds, 0 corrupt: 100%|██████████| 5000/5000 [00:09<00:00, 503.19it/s]
val: WARNING Cache directory F:\LSR\datasets\coco\labels is not writeable: [WinError 183] : 'F:\LSR\datasets\coco\labels\val2017.cache.npy' -> 'F:\LSR\datasets\coco\labels\val2017.cache'
Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 1250/1250 [00:49<00:00, 25.46it/s]
all 5000 36335 0.661 0.525 0.597 0.412
Speed: 0.1ms pre-process, 2.9ms inference, 1.0ms NMS per image at shape (4, 3, 640, 640)
Evaluating pycocotools mAP... saving runs\val\exp5\yolov5s_predictions.json...
loading annotations into memory...
Done (t=0.53s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.41s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=6.09s).
Accumulating evaluation results...
DONE (t=1.55s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.002
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.007
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.005
Results saved to runs\val\exp5
With save_json=True, the COCO API metrics are almost 0, but the val metrics are correct. Why?
Environment
No response
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?