problem when training ECNN individually #5
Hi,
I trained the ECNN individually using the provided code (training_reflection_ecnn.lua), but the loss only decreased from about 10^8 to about 10^6 and then got stuck.
Tracing the code, I found that the images are rescaled by 255 (lines 208-209) before the edges are computed, and that every element of the image is then divided by 0.02 in computeEdge (line 75). I guess this is why the initial MSE error looks so large.
Is there a reason for rescaling the image to 0~255 and then rescaling it again before computing the edges?
I'm not sure whether this is what causes the network to get stuck at such a high loss.
I will be grateful if anyone can give me some tips!
p.s. I trained on VOC2012 (cropped to 224x224), as done in the paper.
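To illustrate what I mean about the loss magnitude, here is a small sketch (not the repo's code, just hypothetical pixel values): multiplying by 255 and then dividing by 0.02 scales every pixel by 255 / 0.02 = 12750, so a squared-error loss on such values is inflated by roughly 12750^2 ≈ 1.6e8, which matches the ~10^8 initial loss I observed.

```python
# Sketch: how rescaling to 0~255 and then dividing by 0.02 inflates MSE.
# Each pixel is multiplied by 255 / 0.02 = 12750, so any squared-error
# term grows by a factor of 12750**2 (about 1.6e8).
scale = 255.0 / 0.02           # combined rescaling factor, 12750
pred, target = 0.40, 0.45      # hypothetical pixel values in [0, 1]

err_unit = (pred - target) ** 2                     # error on [0, 1] data
err_scaled = (scale * pred - scale * target) ** 2   # error after rescaling

# The ratio is scale**2 ~ 1.6e8, i.e. a loss of ~0.01 on normalized
# data shows up as a loss on the order of 10^6 to 10^8 after rescaling.
print(err_scaled / err_unit)
```

This doesn't change the gradient direction (it's a constant factor), but it does mean the effective learning rate is scaled by the same huge constant, which could explain the training getting stuck.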