
metric question #13162

Open
1 of 2 tasks
HuKai97 opened this issue Jul 3, 2024 · 1 comment
Labels
bug Something isn't working

Comments


HuKai97 commented Jul 3, 2024

Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

No response

Bug

(yolov5) F:\Tensorrt\yolov5>python val.py
val: data=data\coco.yaml, weights=yolov5s.pt, batch_size=4, imgsz=640, conf_thres=0.1, iou_thres=0.45, max_det=300, task=val, device=0, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs\val, name=exp, exist_ok=False, half=False, dnn=False
WARNING confidence threshold 0.1 > 0.001 produces invalid results
YOLOv5 v7.0-334-g100a423b Python-3.10.13 torch-2.1.0+cu118 CUDA:0 (NVIDIA GeForce RTX 3060 Ti, 8192MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
val: Scanning F:\LSR\datasets\coco\labels\val2017... 4952 images, 48 backgrounds, 0 corrupt: 100%|██████████| 5000/5000 [00:09<00:00, 503.19it/s]
val: WARNING Cache directory F:\LSR\datasets\coco\labels is not writeable: [WinError 183] : 'F:\LSR\datasets\coco\labels\val2017.cache.npy' -> 'F:\LSR\datasets\coco\labels\val2017.cache'
Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 1250/1250 [00:49<00:00, 25.46it/s]
all 5000 36335 0.661 0.525 0.597 0.412
Speed: 0.1ms pre-process, 2.9ms inference, 1.0ms NMS per image at shape (4, 3, 640, 640)

Evaluating pycocotools mAP... saving runs\val\exp5\yolov5s_predictions.json...
loading annotations into memory...
Done (t=0.53s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.41s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=6.09s).
Accumulating evaluation results...
DONE (t=1.55s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.002
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.007
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.005
Results saved to runs\val\exp5

With save_json=True, the pycocotools COCO API metrics are almost all 0, but the val.py metrics look correct. Why?

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes, I'd like to help by submitting a PR!
HuKai97 added the bug label Jul 3, 2024
glenn-jocher (Member) commented

@HuKai97 hello,

Thank you for reaching out and providing detailed information about the issue you're encountering. It appears that the COCO API metrics are near zero even though the val.py metrics look normal.

To better assist you, could you please provide a minimal reproducible example of your code? This will help us investigate the issue more effectively. You can refer to our guide on creating a minimal reproducible example here: Minimum Reproducible Example.

In the meantime, please ensure that you are using the latest versions of torch and the YOLOv5 repository. You can update your repository and dependencies with the following commands:

git pull  # update YOLOv5 repo
pip install -r requirements.txt  # update dependencies

Additionally, the warning confidence threshold 0.1 > 0.001 produces invalid results indicates that your confidence threshold is too high for mAP computation: mAP integrates over the full precision-recall curve, so low-confidence predictions must be kept for the curve to be traced correctly. Try lowering it to the default 0.001 and see whether the results change.

Here's a quick example of how you might adjust the confidence threshold:

python val.py --weights yolov5s.pt --data coco.yaml --img 640 --conf-thres 0.001 --iou-thres 0.45 --max-det 300 --device 0 --save-json

This should help ensure that the confidence threshold is not impacting your results.
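To illustrate why the threshold matters, here is a minimal sketch with made-up confidence scores (not YOLOv5's actual filtering code): COCO mAP is computed from the full precision-recall curve, and low-confidence detections still contribute recall at the low-precision end, so pre-filtering them at conf_thres=0.1 truncates the curve and deflates the AP integral.

```python
# Illustrative sketch (hypothetical scores, not YOLOv5 internals):
# a high confidence threshold discards low-score detections that the
# COCO mAP integral still needs to trace the precision-recall curve.

detections = [0.92, 0.85, 0.40, 0.12, 0.05, 0.02]  # made-up confidence scores

def keep(scores, conf_thres):
    """Return the detections that survive pre-filtering at conf_thres."""
    return [s for s in scores if s >= conf_thres]

print(len(keep(detections, 0.001)))  # all 6 kept -> full PR curve
print(len(keep(detections, 0.1)))    # only 4 kept -> curve tail is lost
```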

Please let us know if the issue persists after trying these steps, and don't hesitate to share the minimal reproducible example for further investigation.
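If lowering the threshold does not resolve it, one debugging step worth trying (a hedged suggestion, not an official diagnosis) is to verify that the image_id values in the saved predictions JSON match the ids in the COCO annotation file, since mismatched or re-mapped image ids are a common cause of near-zero COCO AP while the internal val.py metrics remain normal. A minimal sketch, assuming the standard COCO JSON layouts; check_image_ids and the paths in the usage comment are hypothetical names for illustration:

```python
import json

def check_image_ids(anno_path, pred_path):
    """Report how many predicted image_ids exist in the annotation file."""
    with open(anno_path) as f:
        anno_ids = {img["id"] for img in json.load(f)["images"]}
    with open(pred_path) as f:
        pred_ids = {d["image_id"] for d in json.load(f)}
    matched = pred_ids & anno_ids
    print(f"{len(matched)}/{len(pred_ids)} predicted image_ids found in annotations")
    return len(matched), len(pred_ids)

# Example (substitute your own files):
# check_image_ids("instances_val2017.json",
#                 r"runs\val\exp5\yolov5s_predictions.json")
```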
