very low mAP on coco val2014 when training from scratch #818

Open
fyw1999 opened this issue Mar 19, 2023 · 18 comments

Comments

@fyw1999

fyw1999 commented Mar 19, 2023

The code is elegant and concise, but the training performance on coco val2014 is poor. The mAP is only 0.00912 after 24 epochs when I train the model from scratch.

@fyw1999 fyw1999 closed this as completed Mar 19, 2023
@fyw1999 fyw1999 reopened this Mar 19, 2023
@Flova
Collaborator

Flova commented Mar 19, 2023

Did you use the ImageNet pretrained backbone weights (weights/darknet53.conv.74)?

Training entirely from random initialization is not feasible on COCO in 24 epochs. Even if you use the pretrained backbone, more than 50 epochs are needed; ideally you train for a couple hundred epochs (the default value is 300).
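
For reference, loading the backbone looks roughly like the following. This is only a minimal sketch assuming the repo's Darknet class and its load_darknet_weights method; the module path may differ between versions, and train.py also accepts the file via a --pretrained_weights argument.

```python
from pytorchyolo.models import Darknet  # older checkouts may use `from models import Darknet`

# Build the model from the cfg and load only the ImageNet-pretrained
# Darknet-53 backbone (darknet53.conv.74) before training on COCO.
model = Darknet("config/yolov3.cfg")
model.load_darknet_weights("weights/darknet53.conv.74")
```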

@fyw1999
Author

fyw1999 commented Mar 20, 2023

Thanks for your quick reply! I forgot to load the ImageNet pretrained backbone weights before, but now I have loaded them and trained the model for 60 epochs, and the mAP is 0.03232. Is this a normal value? Can you give me some suggestions? Thank you very much!

@Flova
Collaborator

Flova commented Mar 20, 2023

I also started a training run after you opened the issue, and I also get a mAP of ~3 at the same epoch. I would train it for a couple hundred more epochs and maybe try to tune the hyperparameters a bit. I don't train COCO from scratch that often with this repo; I mostly train on in-house datasets and get mAPs in the high 90s for those, but the default hyperparameters should work for COCO, so I will check that.

@fyw1999
Author

fyw1999 commented Mar 21, 2023

Thank you for your attention to this issue. I have now trained on top of the ImageNet pretrained backbone for 114 epochs, but the mAP is only 0.03480. I suspect that even after 300 training epochs the performance will not be good.

@Flova
Collaborator

Flova commented Mar 21, 2023

That is strange. You could try to deactivate the data augmentation and see what happens.

@Flova
Collaborator

Flova commented Mar 22, 2023

I am currently testing a hyperparameter set that achieves 11.6 mAP at epoch 3. I'll keep you updated.

@fyw1999
Author

fyw1999 commented Mar 23, 2023

I trained the code with the default hyperparameters on my own dataset and it works well, so I think the default hyperparameters are just not suitable for the COCO dataset. What dataset did you test with, was it COCO? Thanks for sharing.

@Flova
Collaborator

Flova commented Mar 23, 2023

I am currently trying to find better hyperparameters for COCO and already have a few promising sets. Sadly, training on COCO takes quite some time; even if you are running 4 nodes with different hyperparameters, progress is slow :/ I'll keep you updated.

@fyw1999
Author

fyw1999 commented Mar 26, 2023

I think the reason for the poor performance on the COCO dataset is that the learning rate decays so fast that it is too small after 50 epochs. In your code, the learning rate is multiplied by 0.1 once the epoch is greater than 50 and by 0.01 once it is greater than 56.
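
To illustrate, the schedule I mean behaves roughly like this (a minimal sketch, not the actual code; the model and base LR below are placeholders):

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder for the Darknet model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # placeholder base LR

def step_decay(epoch):
    # The LR factor drops to 10% after epoch 50 and to 1% after epoch 56,
    # so for most of a long COCO run the learning rate is already tiny.
    if epoch > 56:
        return 0.01
    if epoch > 50:
        return 0.1
    return 1.0

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=step_decay)
# call scheduler.step() once per epoch
```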

@Flova
Collaborator

Flova commented Mar 26, 2023

Exactly. This is one of the reasons. I implemented a few fixes and training is still going on.
They include:

  • Fix for burn-in (leads to better "initialization" at the beginning); it was skipped due to an off-by-one error...
  • Linear interpolation for the LR decay
  • Decay the LR based on optimizer steps rather than the number of batches, as they differ due to gradient accumulation; as you suggested, this leads to a slower decay
  • Use of SGD with Nesterov momentum instead of Adam (brings a surprising benefit with some hyperparameters)
  • Higher initial learning rate (0.01 for SGD)
  • Multiplication of the loss by the mini-batch size to account for the split gradients

I will create a PR soon, but I am currently on vacation and my training is still running.
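
Not the actual PR code, but a rough sketch of how the SGD/Nesterov switch plus the burn-in and interpolated decay fit together (all step counts and values below are placeholders):

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder for the Darknet model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, nesterov=True)

burn_in_steps = 1000     # placeholder warm-up length, in optimizer steps
total_steps = 100_000    # placeholder total number of optimizer steps

def lr_factor(step):
    if step < burn_in_steps:
        # burn-in: ramp the LR up from 0 to its full value
        return step / burn_in_steps
    # afterwards: linearly interpolated decay instead of hard LR steps
    return max(0.0, 1.0 - (step - burn_in_steps) / (total_steps - burn_in_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

# In the training loop the scheduler is advanced once per *optimizer* step,
# i.e. after the accumulated mini-batches have been applied, and the loss is
# scaled by the mini-batch size before backward() to account for the split
# gradients:
#   (loss * mini_batch_size).backward()
#   if batches_done % gradient_accumulations == 0:
#       optimizer.step(); optimizer.zero_grad(); scheduler.step()
```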

@Flova
Collaborator

Flova commented Mar 26, 2023

[phone screenshot of the training curves]

Sorry for the bad phone screenshot, my laptop broke during my vacation...
Green is SGD with the higher LR etc., and blue is the beginning of the training run with the fixed burn-in and the longer, interpolated LR decay.

@J-LINC

J-LINC commented Apr 6, 2023

May I ask if you are using the Adam optimizer in the .cfg without making any changes?

@Flova
Collaborator

Flova commented Apr 6, 2023

Adam is the default at the moment afaik
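
If you want to try SGD instead, the change in the training script is roughly along these lines. This is only a hedged sketch; the hyperparameter dict and its values below are made up for illustration and are not the repo's actual keys.

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder for the Darknet model
hyperparams = {"optimizer": "sgd", "learning_rate": 0.01,
               "momentum": 0.9, "decay": 0.0005}  # illustrative values only

# Pick the optimizer from a (hypothetical) hyperparameter dict instead of
# hard-coding Adam.
if hyperparams["optimizer"] == "sgd":
    optimizer = torch.optim.SGD(model.parameters(),
                                lr=hyperparams["learning_rate"],
                                momentum=hyperparams["momentum"],
                                weight_decay=hyperparams["decay"],
                                nesterov=True)
else:
    optimizer = torch.optim.Adam(model.parameters(),
                                 lr=hyperparams["learning_rate"],
                                 weight_decay=hyperparams["decay"])
```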

@J-LINC

J-LINC commented Apr 6, 2023

Wow, can you show me the modifications you mentioned above in the yolov3.cfg? I see you mentioned that changing Adam to SGD might work better.

@J-LINC

J-LINC commented Apr 6, 2023

I found that there are only four data augmentation operations, which rotate the image and change its saturation, and they are the first entries in the cfg file. What else is there? I also found that there seems to be only one strategy for adjusting the learning rate, which is multiplying it by 0.1 according to the steps.
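
As far as I can tell, that darknet-style policy=steps schedule is interpreted roughly like this (a small sketch; the steps/scales values are the common yolov3.cfg defaults, please check your own file):

```python
# Each time the batch counter passes an entry in `steps`, the LR is
# multiplied by the corresponding entry in `scales`.
steps = [400000, 450000]   # common yolov3.cfg defaults, may differ in your file
scales = [0.1, 0.1]

def lr_multiplier(batches_done):
    factor = 1.0
    for step, scale in zip(steps, scales):
        if batches_done > step:
            factor *= scale
    return factor

print(lr_multiplier(10_000))    # 1.0   (full learning rate)
print(lr_multiplier(420_000))   # 0.1
print(lr_multiplier(460_000))   # 0.01  (0.1 * 0.1)
```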

@J-LINC

J-LINC commented Apr 6, 2023

I'm sorry, I'm asking a lot of questions, but I really want to know, since it might be useful for me when training large datasets.

@Flova Flova mentioned this issue Apr 12, 2023
@maximelianos

I can confirm the current settings work well for COCO, using the pretrained darknet weights. On the COCO test set this checkpoint gets mAP 0.52318.

@lmz123321

Setting the learning rate to 1e-3 and cancelling the lr_decrease helps in my case.
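
In code this amounts to something like the following minimal sketch (placeholder model, constant LR, no scheduler attached):

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder for the Darknet model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # constant LR of 1e-3
# simply don't attach an LR scheduler, i.e. never call scheduler.step()
```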
