
Possible bug with is_training parameter #7

Open
lucgiffon opened this issue Oct 1, 2018 · 0 comments
lucgiffon commented Oct 1, 2018

Hello,

Going through the code of your project, I think the is_training parameter is not taken into account for the CNN model in mnist_cnn_train.py.

I've seen that the cnn_model.CNN function takes an "is_training" argument that defaults to True, which is what prevents the code from crashing.

In mnist_cnn_train.py, you define the is_training placeholder but you don't pass it when calling cnn_model.CNN. You do use it in the training and testing loops of the same file, so I assume this is not intended behavior.

I haven't tested it yet, but I think the is_training entry of the feed_dict is simply ignored, and this causes dropout to be applied during the testing loop (the same goes for batch normalization). This bug could be the cause of issue #1.
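
For reference, here is a rough sketch of what I mean (TF 1.x style; apart from cnn_model.CNN, the shapes and names below are assumptions on my part, I haven't run this):

```python
import tensorflow as tf
import cnn_model  # the project's model definition

x = tf.placeholder(tf.float32, [None, 784])                # MNIST images (assumed shape)
y_ = tf.placeholder(tf.float32, [None, 10])                # labels (assumed shape)
is_training = tf.placeholder(tf.bool, name='is_training')

# Current code (as I read it): the placeholder is never wired into the graph,
# so CNN() falls back to its default is_training=True and dropout / batch
# normalization stay in training mode even at test time.
# y = cnn_model.CNN(x)

# Suggested fix: pass the placeholder explicitly.
y = cnn_model.CNN(x, is_training=is_training)

# With the fix, the is_training entry of the feed_dict actually takes effect:
#   sess.run(train_step, feed_dict={x: batch_x, y_: batch_y, is_training: True})
#   sess.run(accuracy,   feed_dict={x: test_x,  y_: test_y,  is_training: False})
```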
