
Train Custom Data Tutorial ⭐ #12

Open · glenn-jocher opened this issue Jun 3, 2020 · 212 comments
Labels: documentation (Improvements or additions to documentation)

@glenn-jocher (Member) commented Jun 3, 2020

📚 This guide explains how to train your own custom dataset with YOLOv5 🚀. See YOLOv5 Docs for additional details. UPDATED 13 April 2023.

Before You Start

Clone repo and install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7. Models and datasets download automatically from the latest YOLOv5 release.

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Train On Custom Data



Creating a custom model to detect your objects is an iterative process of collecting and organizing images, labeling your objects of interest, training a model, deploying it into the wild to make predictions, and then using that deployed model to collect examples of edge cases to repeat and improve.

1. Create Dataset

YOLOv5 models must be trained on labelled data in order to learn classes of objects in that data. There are two options for creating your dataset before you start training:

Use Roboflow to create your dataset in YOLO format ⭐

1.1 Collect Images

Your model will learn by example. Training on images similar to the ones it will see in the wild is of the utmost importance. Ideally, you will collect a wide variety of images from the same configuration (camera, angle, lighting, etc.) in which you will ultimately deploy your project.

If this is not possible, you can start from a public dataset to train your initial model and then sample images from the wild during inference to improve your dataset and model iteratively.

1.2 Create Labels

Once you have collected images, you will need to annotate the objects of interest to create a ground truth for your model to learn from.

Roboflow Annotate is a simple
web-based tool for managing and labeling your images with your team and exporting
them in YOLOv5's annotation format.

1.3 Prepare Dataset for YOLOv5

Whether you label your images with Roboflow or not, you can use it to convert your dataset into YOLO format, create a YOLOv5 YAML configuration file, and host it for importing into your training script.

Create a free Roboflow account
and upload your dataset to a Public workspace, label any unannotated images,
then generate and export a version of your dataset in YOLOv5 PyTorch format.

Note: YOLOv5 does online augmentation during training, so we do not recommend
applying any augmentation steps in Roboflow for training with YOLOv5. But we
recommend applying the following preprocessing steps:

  • Auto-Orient - to strip EXIF orientation from your images.
  • Resize (Stretch) - to the square input size of your model (640x640 is the YOLOv5 default).

Generating a version will give you a point in time snapshot of your dataset so
you can always go back and compare your future model training runs against it,
even if you add more images or change its configuration later.

Export in YOLOv5 PyTorch format, then copy the snippet into your training
script or notebook to download your dataset.
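
A typical Roboflow download snippet looks like the following sketch; the API key, workspace, project name, and version number here are placeholders, and your exported snippet will contain your own values:

from roboflow import Roboflow  # pip install roboflow

rf = Roboflow(api_key="YOUR_API_KEY")  # placeholder key
project = rf.workspace("your-workspace").project("your-project")  # placeholder names
dataset = project.version(1).download("yolov5")  # downloads images, labels and data.yaml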

Now continue with 2. Select a Model.

Or manually prepare your dataset

1.1 Create dataset.yaml

COCO128 is an example small tutorial dataset composed of the first 128 images in COCO train2017. These same 128 images are used for both training and validation to verify our training pipeline is capable of overfitting. data/coco128.yaml, shown below, is the dataset config file that defines 1) the dataset root directory path and relative paths to train / val / test image directories (or *.txt files with image paths) and 2) a class names dictionary:

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco128  # dataset root dir
train: images/train2017  # train images (relative to 'path') 128 images
val: images/train2017  # val images (relative to 'path') 128 images
test:  # test images (optional)

# Classes (80 COCO classes)
names:
  0: person
  1: bicycle
  2: car
  ...
  77: teddy bear
  78: hair drier
  79: toothbrush
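
For a custom dataset the same structure applies. As a minimal sketch, assuming a two-class dataset stored at ../datasets/mydata with class names chosen purely for illustration:

# mydata.yaml (illustrative)
path: ../datasets/mydata  # dataset root dir
train: images/train  # train images (relative to 'path')
val: images/val  # val images (relative to 'path')

# Classes
names:
  0: cat
  1: dog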

1.2 Create Labels

After using an annotation tool to label your images, export your labels to YOLO format, with one *.txt file per image (if no objects in image, no *.txt file is required). The *.txt file specifications are:

  • One row per object
  • Each row is class x_center y_center width height format.
  • Box coordinates must be in normalized xywh format (from 0 to 1). If your boxes are in pixels, divide x_center and width by image width, and y_center and height by image height (see the conversion sketch after this list).
  • Class numbers are zero-indexed (start from 0).
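
If your annotations are in pixel (x_min, y_min, x_max, y_max) format, a minimal conversion sketch (the function name is our own, not part of YOLOv5):

def xyxy_to_yolo(x_min, y_min, x_max, y_max, img_w, img_h):
    # convert a pixel xyxy box to normalized YOLO xywh
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return x_center, y_center, width, height

print(xyxy_to_yolo(100, 200, 300, 400, 640, 480))  # ≈ (0.3125, 0.625, 0.3125, 0.4167)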

For example, the label file for an image containing 2 persons (class 0) and a tie (class 27) would contain three rows, one per object, like the illustrative file below:
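
0 0.481 0.634 0.690 0.713
0 0.741 0.524 0.314 0.933
27 0.364 0.796 0.078 0.400

The first two rows are the person boxes (class 0) and the last row is the tie (class 27); the coordinates here are illustrative, not taken from a real image.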

1.3 Organize Directories

Organize your train and val images and labels according to the example below. YOLOv5 assumes /coco128 is inside a /datasets directory next to the /yolov5 directory. YOLOv5 locates labels automatically for each image by replacing the last instance of /images/ in each image path with /labels/. For example:

../datasets/coco128/images/im0.jpg  # image
../datasets/coco128/labels/im0.txt  # label
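
This convention can be expressed as a small sketch (a simplified version of what YOLOv5 does internally, not the exact library function):

def img2label_path(img_path):
    # swap the last '/images/' for '/labels/' and the image suffix for '.txt'
    sa, sb = '/images/', '/labels/'
    return sb.join(img_path.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt'

print(img2label_path('../datasets/coco128/images/im0.jpg'))  # ../datasets/coco128/labels/im0.txt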

2. Select a Model

Select a pretrained model to start training from. Here we select YOLOv5s, the second-smallest and fastest model available. See our README table for a full comparison of all models.

[YOLOv5 model comparison chart]

3. Train

Train a YOLOv5s model on COCO128 by specifying dataset, batch-size, image size and either pretrained --weights yolov5s.pt (recommended), or randomly initialized --weights '' --cfg yolov5s.yaml (not recommended). Pretrained weights are auto-downloaded from the latest YOLOv5 release.

# Train YOLOv5s on COCO128 for 3 epochs
$ python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt

💡 ProTip: Add --cache ram or --cache disk to speed up training (requires significant RAM/disk resources).
💡 ProTip: Always train from a local dataset. Mounted or network drives like Google Drive will be very slow.

All training results are saved to runs/train/ with incrementing run directories, i.e. runs/train/exp2, runs/train/exp3 etc. For more details see the Training section of our tutorial notebook.

4. Visualize

Comet Logging and Visualization 🌟 NEW

Comet is now fully integrated with YOLOv5. Track and visualize model metrics in real time, save your hyperparameters, datasets, and model checkpoints, and visualize your model predictions with Comet Custom Panels! Comet makes sure you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes!

Getting started is easy:

pip install comet_ml  # 1. install
export COMET_API_KEY=<Your API Key>  # 2. paste API key
python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt  # 3. train

To learn more about all of the supported Comet features for this integration, check out the Comet Tutorial. If you'd like to learn more about Comet, head over to our documentation and get started by trying out the Comet Colab Notebook.

[Comet UI screenshot]

ClearML Logging and Automation 🌟 NEW

ClearML is completely integrated into YOLOv5 to track your experimentation, manage dataset versions and even remotely execute training runs. To enable ClearML:

  • pip install clearml
  • run clearml-init to connect to a ClearML server (deploy your own open-source server here, or use our free hosted server here)

You'll get all the features expected from an experiment manager: live updates, model upload, experiment comparison, etc., but ClearML also tracks things like uncommitted changes and installed packages. Thanks to this, ClearML Tasks (our term for experiments) are also reproducible on different machines! With only one extra line, you can schedule a YOLOv5 training task on a queue to be executed by any number of ClearML Agents (workers), as in the sketch below.
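
As a minimal sketch of that one extra line, assuming a ClearML queue named 'default' exists on your server:

from clearml import Task

# register this run with ClearML, then enqueue it for an Agent instead of running locally
task = Task.init(project_name='yolov5', task_name='remote training')
task.execute_remotely(queue_name='default')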

You can use ClearML Data to version your dataset and then pass it to YOLOv5 simply using its unique ID. This will help you keep track of your data without adding extra hassle. Explore the ClearML Tutorial for details!

[ClearML experiment management UI screenshot]

Local Logging

Training results are automatically logged with Tensorboard and CSV loggers to runs/train, with a new experiment directory created for each new training as runs/train/exp2, runs/train/exp3, etc.

This directory contains train and val statistics, mosaics, labels, predictions and augmented mosaics, as well as metrics and charts including precision-recall (PR) curves and confusion matrices.

[Local logging results screenshot]

Results file results.csv is updated after each epoch, and then plotted as results.png (below) after training completes. You can also plot any results.csv file manually:

from utils.plots import plot_results
plot_results('path/to/results.csv')  # plot 'results.csv' as 'results.png'

[results.png plot]

Next Steps

Once your model is trained you can use your best checkpoint best.pt to:

  • Run CLI or Python inference on new images and videos
  • Validate accuracy on train, val and test splits
  • Export to TensorFlow, Keras, ONNX, TFLite, TF.js, CoreML and TensorRT formats
  • Evolve hyperparameters to improve performance
  • Improve your model by sampling real-world images and adding them to your dataset

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

  • Notebooks with free GPU (Google Colab, Kaggle)
  • Google Cloud Deep Learning VM (see GCP Quickstart Guide)
  • Amazon Deep Learning AMI (see AWS Quickstart Guide)
  • Docker image (see Docker Quickstart Guide)

Status

[YOLOv5 CI badge]

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@shenglih commented Jul 7, 2020

I used a my_training.txt file that includes a list of training images instead of a path to the folder of images and annotations, but it always returns AssertionError: No images found in /path_to_my_txt_file/my_training.txt. Could anyone kindly give some pointers on where it went wrong? Thanks

@zuoxiang95

I get the same error @shenglih

@synked16

hey @glenn-jocher can i train a model on images with size 450x600??

@glenn-jocher (Member, Author)

@justAyaan sure, just use --img 600, it will automatically use the nearest correct stride multiple.
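
For reference, YOLOv5 rounds the requested size up to the nearest multiple of the model's maximum stride (32 here), so --img 600 trains at 608. A simplified sketch of that rounding logic:

import math

def round_to_stride(img_size, stride=32):
    # round the requested image size up to the nearest stride multiple
    return math.ceil(img_size / stride) * stride

print(round_to_stride(600))  # 608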

@synked16 commented Jul 22, 2020

@glenn-jocher
ok.. will try.. just one doubt.. is the value of x_center = (x + w)/2
OR is it x + w/2??

@glenn-jocher (Member, Author)

x_center is the center of your object in the x dimension, i.e. x + w/2 if x is the left edge of the box, then normalized by image width.
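
As a worked example: a box with left edge x = 100 px and width w = 50 px in a 640 px wide image has x_center = (100 + 50/2) / 640 ≈ 0.195 and width = 50 / 640 ≈ 0.078.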

@clxa commented Aug 1, 2020

I used YOLOv5 training on a Kaggle kernel, and this error occurred. I am a novice. How can I solve this problem?

Apex recommended for faster mixed precision training: https://github.com/NVIDIA/apex
Using CUDA device0 _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', total_memory=16280MB)

Namespace(batch_size=4, bucket='', cache_images=False, cfg='/kaggle/input/yolov5aconfig/yolov5x.yaml', data='/kaggle/input/yolov5aconfig/wheat0.yaml', device='', epochs=15, evolve=False, hyp='', img_size=[1024, 1024], local_rank=-1, multi_scale=False, name='yolov5x_fold0', noautoanchor=False, nosave=False, notest=False, rect=False, resume=False, single_cls=False, sync_bn=False, total_batch_size=4, weights='', world_size=1)
Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/
2020-08-01 04:17:02.964977: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Hyperparameters {'optimizer': 'SGD', 'lr0': 0.01, 'momentum': 0.937, 'weight_decay': 0.0005, 'giou': 0.05, 'cls': 0.5, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'fl_gamma': 0.0, 'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4, 'degrees': 0.0, 'translate': 0.0, 'scale': 0.5, 'shear': 0.0}

             from  n    params  module                                  arguments                     

0 -1 1 8800 models.common.Focus [3, 80, 3]
1 -1 1 115520 models.common.Conv [80, 160, 3, 2]
2 -1 1 315680 models.common.BottleneckCSP [160, 160, 4]
3 -1 1 461440 models.common.Conv [160, 320, 3, 2]
4 -1 1 3311680 models.common.BottleneckCSP [320, 320, 12]
5 -1 1 1844480 models.common.Conv [320, 640, 3, 2]
6 -1 1 13228160 models.common.BottleneckCSP [640, 640, 12]
7 -1 1 7375360 models.common.Conv [640, 1280, 3, 2]
8 -1 1 4099840 models.common.SPP [1280, 1280, [5, 9, 13]]
9 -1 1 20087040 models.common.BottleneckCSP [1280, 1280, 4, False]
10 -1 1 820480 models.common.Conv [1280, 640, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 5435520 models.common.BottleneckCSP [1280, 640, 4, False]
14 -1 1 205440 models.common.Conv [640, 320, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 1360960 models.common.BottleneckCSP [640, 320, 4, False]
18 -1 1 5778 torch.nn.modules.conv.Conv2d [320, 18, 1, 1]
19 -2 1 922240 models.common.Conv [320, 320, 3, 2]
20 [-1, 14] 1 0 models.common.Concat [1]
21 -1 1 5025920 models.common.BottleneckCSP [640, 640, 4, False]
22 -1 1 11538 torch.nn.modules.conv.Conv2d [640, 18, 1, 1]
23 -2 1 3687680 models.common.Conv [640, 640, 3, 2]
24 [-1, 10] 1 0 models.common.Concat [1]
25 -1 1 20087040 models.common.BottleneckCSP [1280, 1280, 4, False]
26 -1 1 23058 torch.nn.modules.conv.Conv2d [1280, 18, 1, 1]
27 [] 1 0 models.yolo.Detect [1, [[116, 90, 156, 198, 373, 326], [30, 61, 62, 45, 59, 119], [10, 13, 16, 30, 33, 23]], []]
Traceback (most recent call last):
  File "/kaggle/input/yolov5/yolov5-master/train.py", line 469, in <module>
    train(hyp, tb_writer, opt, device)
  File "/kaggle/input/yolov5/yolov5-master/train.py", line 80, in train
    model = Model(opt.cfg, nc=nc).to(device)
  File "/kaggle/input/yolov5/yolov5-master/models/yolo.py", line 70, in __init__
    m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))])  # forward
  File "/kaggle/input/yolov5/yolov5-master/models/yolo.py", line 100, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/kaggle/input/yolov5/yolov5-master/models/yolo.py", line 120, in forward_once
    x = m(x)  # run
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/kaggle/input/yolov5/yolov5-master/models/yolo.py", line 27, in forward
    x[i] = self.m[i](x[i])  # conv
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py", line 147, in __getitem__
    return self._modules[self._get_abs_string_index(idx)]
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py", line 137, in _get_abs_string_index
    raise IndexError('index {} is out of range'.format(idx))
IndexError: index 0 is out of range

@glenn-jocher (Member, Author) commented Aug 1, 2020

This is old code. Suggest you git clone the latest. PyTorch 1.6 is a requirement now.

@twangnh commented Aug 5, 2020

@glenn-jocher yolov5 is really flexible for training on custom datasets! could you please share the script for processing and converting coco to the train2017.txt file? I already have this file but would like to regenerate it with some modifications.

@rwin94 commented Aug 5, 2020

Hi, I have a newbie question...
Could you explain the difference between providing pretrained weights vs. not? When should each be used?

  • Start training from pretrained --weights yolov5s.pt, or from randomly initialized --weights ''

Should I expect better results if I use yolov5s.pt as pretrained weights?
Thanks

@glenn-jocher (Member, Author) commented Aug 5, 2020

@twangnh train2017.txt is just a text file with a list of images. You can use the glob package to create this, but it's not necessary, as YOLOv5 data YAMLs will also accept a simple directory of training images. You can see this format in the coco128 dataset:

# COCO 2017 dataset http://cocodataset.org - first 128 training images
# Download command: python -c "from yolov5.utils.google_utils import *; gdrive_download('1n_oKgR81BJtqk75b00eAjdv03qVCQn2f', 'coco128.zip')"
# Train command: python train.py --data coco128.yaml
# Default dataset location is next to /yolov5:
#   /parent_folder
#     /coco128
#     /yolov5

# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ../coco128/images/train2017/  # 128 images
val: ../coco128/images/train2017/  # 128 images
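
Should you want to generate such a list file yourself, a minimal sketch (the dataset path is an assumption; adjust it to your layout):

from glob import glob

# write every training image path into train2017.txt, one per line
paths = sorted(glob('../coco128/images/train2017/*.jpg'))
with open('train2017.txt', 'w') as f:
    f.write('\n'.join(paths))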

@glenn-jocher (Member, Author) commented Aug 5, 2020

@rwin94 for small datasets or for quick results yes always start from the pretrained weights:

python train.py --cfg yolov5s.yaml --weights yolov5s.pt

You can see a comparison of pretrained vs from scratch in the custom data training tutorial:
https://docs.ultralytics.com/yolov5/tutorials/train_custom_data#6-visualize

@Lg955 commented Aug 6, 2020

I want to ask a question: it shows
File "/content/yolov5/utils/datasets.py", line 344, in <listcomp> labels, shapes = zip(*[cache[x] for x in self.img_files]) KeyError: '../dota_data/images/val/P0800__1__552___0.png
when training in Google Colab.

But everything is OK on my own laptop, so it cannot train with .png?

@liumingjune

Hello, I'd like to ask you something. You said "if no objects in image, no *.txt file is required". For images without labels in the training data (there is no target in the image), how do they participate in training as negative samples?

@glenn-jocher (Member, Author)

@liumingjune all images are treated equally during training, irrespective of the labels they may or may not have.

@liumingjune

Thanks for your reply. I have a question after looking at the code. Do you only keep the latest and best models under the runs directory when saving during training? Is it possible to save the model at a specified epoch? Is it OK to choose the latest model directly after training? And if the best checkpoint does not work well, how can I select a model from another epoch?

@glenn-jocher (Member, Author)

@liumingjune best.pt and last.pt are saved, which are the best performing model across all epochs and the most recent epoch's model. You can customize checkpointing logic here:

yolov5/train.py

Lines 333 to 348 in 9ae8683

# Save model
save = (not opt.nosave) or (final_epoch and not opt.evolve)
if save:
    with open(results_file, 'r') as f:  # create checkpoint
        ckpt = {'epoch': epoch,
                'best_fitness': best_fitness,
                'training_results': f.read(),
                'model': ema.ema.module if hasattr(ema, 'module') else ema.ema,
                'optimizer': None if final_epoch else optimizer.state_dict()}

    # Save last, best and delete
    torch.save(ckpt, last)
    if best_fitness == fi:
        torch.save(ckpt, best)
    del ckpt
# end epoch ----------------------------------------------------------------------------------------------------

@liumingjune

Hello, thank you for your reply. I want to know if YOLOv5 has done any work on small target detection.

@glenn-jocher (Member, Author)

@liumingjune yes, of course. All models will work well with small objects without any changes. My only recommendation is to train at the largest --img-size you can, up to native resolution.

We do have custom small object models with extra ops targeted to higher resolution layers (P2 and P3) which we have not open sourced. These custom models outperform our official models on COCO, in particular in the small object class, but also come with a speed penalty, so we decided not to release them to the community at the present time.

@liumingjune

> @liumingjune all images are treated equally during training, irrespective of the labels they may or may not have.

Hello, I have a question. My test results showed a high false alarm rate, so I want to add some large images with no target, only background, as negative samples in training. Obviously, in the format you requested, these images are not labeled with *.txt files. I have lots of big images with no targets here, so do you have any suggestions on the number of large images with no targets? I am concerned that an imbalance in the number of positive and negative samples will affect the effectiveness of training. Awaiting your reply. Thanks!

@glenn-jocher (Member, Author)

@liumingjune I can't advise you on custom dataset training.

@liumingjune

> @liumingjune I can't advise you on custom dataset training.

Thanks. I understand. I just want to find out if the ratio of positive to negative samples (the number of images with labels vs. without) has any effect on training.

@TianFuKang

> I want to ask a question: it shows
> File "/content/yolov5/utils/datasets.py", line 344, in <listcomp> labels, shapes = zip(*[cache[x] for x in self.img_files]) KeyError: '../dota_data/images/val/P0800__1__552___0.png
> when training in Google Colab.
>
> But everything is OK on my own laptop, so it cannot train with .png?

Delete the *.cache file, then run python3 train.py again.

@glenn-jocher (Member, Author)

@Minxiangliu in general I would first train with all default settings to establish a baseline before attempting to improve anything.

@kenil22 commented May 12, 2022

How can I use a YOLOv5 pretrained model to detect only car objects in an image, ignoring all other objects?

@mannaandpoem

Hello. How do I go from a fully trained YOLOv5n model to YOLOv5s?
I first used YOLOv5n, but I find the network is too small. I want to further improve my mAP and plan to use YOLOv5s. How can I go from a fully trained YOLOv5n to YOLOv5s?

@glenn-jocher (Member, Author)

@kenil22 use the --classes argument with detect.py
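
For example, class index 2 is 'car' in the default COCO-trained models, so a sketch of the command would be:

python detect.py --weights yolov5s.pt --source path/to/images --classes 2  # keep only class 2 ('car') detections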

@mannaandpoem you don't. You start a new training: python train.py --weights yolov5s.pt --data DATA.yaml

@liuchangf

First of all, thank you for sharing.

I made my custom dataset, but the sizes of the images differ. Some are 790 × 1181, some are 720 × 450, others are 425 × 283. Does the data need to be preprocessed? Waiting for your kind reply.

@glenn-jocher (Member, Author)

@liuchangf no

@SwHaraday commented May 18, 2022 via email

@willwang-cv

it's so gorgeous!

@Dilipraj6 commented Jun 3, 2022

Hello,

I'm doing object detection using YOLOv5 with a custom dataset. There are only 2 classes, but there are around 5k images. My issue is that while training it does not find the labels automatically; I'm getting the error "AssertionError: train: No labels. Can not train without labels".

PLEASE do not reply with the link to "Train Custom Data" — that is the guide I'm already using. I'm asking for help here because it doesn't work even though I'm following that page. I'm providing all the details below. I would really appreciate it if you could take a look and tell me where I'm going wrong.

My dataset folders are organized as below:

Dataset/
  Images/
    train
    val
  labels/
    train
    val

Below is a screenshot of the format of my labels: [screenshot]

Below is the .yaml file: [screenshot]

And I'm saving the .yaml file in the yolov5 folder. I also tried saving it in the yolov5/data folder, but I still get the same error.

Please kindly review the above details and let me know where I'm going wrong.

@glenn-jocher (Member, Author)

@PonyPC this is user error, your custom dataset is structured incorrectly. Follow the tutorial and use COCO128 as an example:

python train.py --data coco128.yaml

@PonyPC commented Jun 3, 2022

@glenn-jocher @Dilipraj6

@SwHaraday commented Jun 4, 2022 via email

@00kar commented Jun 24, 2022

Hi there,
Could YOLO work with related classes and associate them? For example, I want YOLO to recognize me (as class person1) and my car (as class car1) and associate us together. Is that feasible?
Thanks in advance.

@glenn-jocher (Member, Author) commented Jun 24, 2022

@00kar 👋 Hello! Thanks for asking about handling inference results. YOLOv5 🚀 PyTorch Hub models allow for simple model loading and inference in a pure python environment without using detect.py.

Simple Inference Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the YOLOv5 'small' model. For details on all available models please see the README. Custom models can also be loaded, including custom trained PyTorch models and their exported variants, i.e. ONNX, TensorRT, TensorFlow, OpenVINO YOLOv5 models.

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # yolov5n - yolov5x6 official model
#                                            'custom', 'path/to/best.pt')  # custom model

# Images
im = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, URL, PIL, OpenCV, numpy, list

# Inference
results = model(im)

# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.

results.xyxy[0]  # im predictions (tensor)
results.pandas().xyxy[0]  # im predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

See YOLOv5 PyTorch Hub Tutorial for details.

Good luck 🍀 and let us know if you have any other questions!

@myt-sheila commented Dec 15, 2022

Hello, @glenn-jocher!

Suppose I would want to train a model based on YOLOv5 that could detect face mask wearing condition. The classes are fully_covered, not_covered, and partially_covered. I have a few questions:

  1. Would this be possible using only YOLOv5, or would I need other technologies?
  2. How would image labeling be done in the case of the partially_covered class? Would I enclose the bounding box around the whole face of the person incorrectly wearing a mask?

Any help is appreciated. Thank you! 🙏

@glenn-jocher (Member, Author)

@myt-sheila yes you could train a mask detector with only YOLOv5, though you need a labelled dataset :)

Yes face labelling would work well for full/partial/none, or mask labelling for partial/full would also work.

@glenn-jocher glenn-jocher changed the title Train Custom Data Tutorial 🚀 Train Custom Data Tutorial ⭐ Feb 8, 2023
@ultralytics ultralytics deleted a comment from dipto9991 Apr 1, 2023
manole-alexandru added a commit to manole-alexandru/yolov5-uolo that referenced this issue Apr 9, 2023
manole-alexandru added a commit to manole-alexandru/yolov5-uolo that referenced this issue Apr 10, 2023
manole-alexandru added a commit to manole-alexandru/yolov5-uolo that referenced this issue Apr 11, 2023
ultralytics#12 led to overfitting. We attempt to address this in the added segmentation path
@o0GoldLike0o

Hi, I have test.txt in my YAML, but it seems it's not working:
train: train.txt
val: val.txt
test: test.txt

@TomyangSydney

Hi @glenn-jocher, what's the difference between the probability argument p in the copy_paste function and the copy_paste hyperparameter in the hyp.scratch-low.yaml file? If I want to use the copy_paste function to do copy-paste augmentation on some particular types of data, should I set copy_paste in the hyperparameter file to 0?

@jianganghuang

I made my custom dataset and put it in the right position. When I execute python train.py, it appears as below but doesn't print any other output. When I execute nvidia-smi, the result is as in the second picture: on GPU 1 it shows it is working but occupies very low memory.

[screenshots]
