fix dropout = 1.0 issue. If dropout = 1.0, it should not run dropout … #202
mpjlu wants to merge 3 commits into jakeret:master
Conversation
|
Thanks for your contribution. I see why this is better during training. But how should we control the dropout during validation and prediction? There we want to set the dropout to 1.
|
For prediction, we don't need dropout.
|
Right. So during training we want dropout to be < 1 and during validation it should be = 1.
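To make the rule concrete (dropout active with keep_prob < 1 during training, a pure pass-through with keep_prob = 1 during validation), here is a minimal NumPy sketch of inverted dropout. The function name and the explicit keep_prob == 1 short-circuit are my own illustration, not code from this repo:

```python
import numpy as np

def dropout(x, keep_prob, rng=None):
    """Inverted dropout: zero each unit with probability 1 - keep_prob and
    scale the survivors by 1/keep_prob so the expected activation is
    unchanged. keep_prob == 1 makes the layer a no-op."""
    if keep_prob == 1.0:  # validation/prediction: skip dropout entirely
        return x
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(x.shape) < keep_prob  # True = keep this unit
    return x * mask / keep_prob

x = np.ones((4, 4))
train_out = dropout(x, keep_prob=0.5)  # kept units scaled to 2.0, rest zeroed
valid_out = dropout(x, keep_prob=1.0)  # identical to x, no mask applied
```

Skipping the op entirely at keep_prob == 1, rather than multiplying by an all-ones mask, is exactly the saving this PR is after.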
|
We can create two UNets with different keep_prob values, one for training and one for validation. What do you think about that?
|
Don't we have to train two models then?
|
Hi @jakeret, any comments on the data? The numbers were measured on CPU.
|
A 16% performance improvement is nice.
|
I am sorry for the late reply.
|
I don't see how this could be implemented. The computation graph would be different for the two networks, which makes it hard to transfer the weights from one to the other.
|
The dropout layer has no weights, so it is fine to save the model from the training net and restore it in the validation net.

The Python dropout op uses the following code to check the keep_prob value:

```python
if tensor_util.constant_value(keep_prob) == 1:
    return x
```

If keep_prob is a placeholder, tensor_util.constant_value(keep_prob) returns None, so the condition is always false and dropout runs even when keep_prob is fed as 1.
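The failure mode above can be shown without TensorFlow. Below, a toy constant_value stands in for tensor_util.constant_value and a toy Placeholder class stands in for a TF placeholder; both names are illustrative, not the real TF implementation:

```python
class Placeholder:
    """Stand-in for a TF placeholder: its value is only known at run time."""
    pass

def constant_value(t):
    """Mimics tensor_util.constant_value: returns the Python value when the
    tensor is a graph-time constant, and None when it is not (e.g. a
    placeholder)."""
    return t if isinstance(t, float) else None

def would_skip_dropout(keep_prob):
    # The check inside the dropout op: the layer is only elided when
    # keep_prob folds to the constant 1 at graph-construction time.
    return constant_value(keep_prob) == 1

print(would_skip_dropout(1.0))           # constant 1.0: dropout elided
print(would_skip_dropout(Placeholder())) # None == 1 is False: dropout runs
```

Because None == 1 is False, feeding 1.0 through a placeholder at session run time never triggers the early return, which is why the op still pays the full dropout cost during validation.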