problem when training ECNN individually #5

@yutseng318

Description

Hi,
I trained the ecnn individually using the provided codes (training_reflection_ecnn.lua) but found that the loss went down from about 10^8 to about 10^6 and just stuck.
Tracing the code, I found that the images were rescaled by 255 (line 208,209) before computing the edges and all the elements of the image were divided with 0.02 in computeEdge (line 75). I guessed this was the reason why the initial MSE error looked so large.
Is there any reason why rescaling the image to 0~255 and rescaling again before computing the edges?
I'm not sure if this cause the network stuck to such high loss.
I will be grateful if anyone can give me some tips!

p.s. I trained on VOC2012 (cropped to 224x224), just as the paper does.
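To illustrate the scale effect described above: if an image normalized to [0, 1] is first multiplied by 255 and then divided by 0.02, its values are scaled by 255 / 0.02 = 12750, and MSE grows by the square of that factor, roughly 1.6e8. The sketch below (NumPy, not the repo's Lua code; the arrays are hypothetical stand-ins for prediction and target edge maps) shows that scaling alone can account for the ~10^8 initial loss:

```python
import numpy as np

# Hypothetical prediction/target edge maps in [0, 1].
rng = np.random.default_rng(0)
pred = rng.random((224, 224))
target = rng.random((224, 224))

# Combined rescaling factor: multiply by 255, then divide by 0.02.
scale = 255.0 / 0.02  # = 12750

mse_unit = np.mean((pred - target) ** 2)
mse_scaled = np.mean((scale * pred - scale * target) ** 2)

# MSE scales with the square of the rescaling factor.
ratio = mse_scaled / mse_unit  # ~ 12750**2, about 1.6e8
```

So a loss plateauing near 10^6 in these units may correspond to a much smaller error in [0, 1] units; whether that is the cause of the training stall is a separate question.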
