
downloaded pretrained model and number of training images and converged loss value #55

JohnTargaryen opened this issue Feb 27, 2020 · 4 comments


@JohnTargaryen

JohnTargaryen commented Feb 27, 2020

Greetings! Thank you for your excellent work.
I have come across three problems and hope that you or the GitHub community can provide me with some insights.

  1. I downloaded the pretrained model paris_streetview provided on the main page, which I unzipped to get the folder below:
    [screenshot of the unzipped checkpoint folder]

However, when I run the command to test this pretrained model as instructed:

python test.py --dataset paris_streetview --data_file ./test.txt --load_model_dir ./checkpoints/paris-streetview_256x256_rect --random_mask 0

an error occurs:
[screenshot of the error message]

I wonder if the model I downloaded is somehow damaged? (If so, could you kindly send the pretrained model to my e-mail 294484601@qq.com, since it is hard for me to access Google Drive.) Or have I done something wrong elsewhere?

  2. How many images are there in your training datasets? What is the optimal number of training images? I tried to train the network on the Adobe FiveK dataset, which contains 5,000 images of various scenes and portraits. Is that too many or too few training images?

  3. To what value should the loss converge in the pre-training phase and the fine-tuning phase, respectively?

Your prompt response would be highly appreciated. Thank you for your time, and I wish you all the best!

@shepnerd
Owner

The mentioned pretrained models are for TensorFlow instead of PyTorch. You can try them with the TensorFlow testing code.
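(For anyone else hitting this: a quick heuristic, not code from this repo, is to guess a checkpoint's framework from its file names. The helper below is hypothetical, assuming you point it at the unzipped folder.)

```python
import os

def checkpoint_framework(ckpt_dir):
    """Guess which framework a checkpoint directory belongs to,
    judging only by typical file names (a heuristic sketch)."""
    names = os.listdir(ckpt_dir)
    # PyTorch checkpoints are usually single .pth/.pt files.
    if any(n.endswith((".pth", ".pt")) for n in names):
        return "pytorch"
    # TensorFlow checkpoints come as .index/.meta/.data-* files,
    # often alongside a text file literally named "checkpoint".
    if any(n.endswith((".index", ".meta")) or ".data-" in n or n == "checkpoint"
           for n in names):
        return "tensorflow"
    return "unknown"
```

If this reports "tensorflow" for the folder you unzipped, the PyTorch `test.py` cannot load it, which would match the error above.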

@JohnTargaryen
Author

> The mentioned pretrained models are for TensorFlow instead of PyTorch. You can try them with the TensorFlow testing code.

Thank you for your quick reply. I will download the PyTorch pretrained model instead.
Also, could you please share some thoughts on the other two questions? Thanks!

@shepnerd
Owner

shepnerd commented Feb 28, 2020

For your other two questions,

  1. It is hard to determine the optimal number of training images. The basic rule is: the required number of training images is relatively small when the data are well aligned (e.g., aligned human faces in CelebA), while that number should be large for a data distribution containing various categories (e.g., Places2). The training set sizes for each dataset we used are listed as:
    [screenshot of the per-dataset training set sizes]
    As far as I know, the image contents in Adobe5K are quite diverse, and 5k seems fewer than what we would expect. However, increasing the number of training iterations may help: since the images in Adobe5K are large, random patch sampling may cover the real data distribution given enough samples.

  2. I may check the number later. For a relatively small dataset (e.g., CelebA-HQ or Paris StreetView), 20-40 training epochs are enough for the pre-training stage. The same range also works for the fine-tuning stage.
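To make the patch-sampling point in item 1 concrete, here is a minimal sketch (my own illustration, not code from this repo, assuming H x W x C NumPy images) of drawing random 256x256 crops from large training images:

```python
import numpy as np

def random_patch(image, patch_size=256, rng=None):
    """Sample one random patch_size x patch_size crop from a large
    training image given as an H x W x C array (illustrative only)."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    if h < patch_size or w < patch_size:
        raise ValueError("image is smaller than the patch size")
    # Pick a uniformly random top-left corner that keeps the crop in bounds.
    top = int(rng.integers(0, h - patch_size + 1))
    left = int(rng.integers(0, w - patch_size + 1))
    return image[top:top + patch_size, left:left + patch_size]
```

Because each pass over the data draws fresh crops, more training iterations effectively expose the network to more distinct 256x256 samples, which is why extra iterations can partly compensate for having only 5k source images.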

@JohnTargaryen
Author


Thank you! It helps a lot!
