
The final loss gradient is 1D but the network output is (1,2). How is the gradient propagated? #70

@prateethvnayak

Description

I was wondering about the gradient dimensions: the output of tf.reduce_sum and y are 1-D, so the MSE cost term is 1-D as well, but the gradient to be propagated back needs the same dimensions as the network output, i.e. (1, ACTIONS) = (1, 2). Is the final loss gradient just replicated across both dimensions, i.e. (1, 1) -> (1, 2)?
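
For concreteness, here is a minimal TF2 sketch of the kind of setup described above, assuming a typical DQN-style loss where the network output is multiplied by a one-hot action mask before tf.reduce_sum; the names `readout`, `a`, and `y` are hypothetical and not necessarily the repo's exact variables. It just makes the shapes in question visible: the gradient that reaches the (1, ACTIONS) output is produced by the chain rule through tf.reduce_sum and the mask, not by copying the scalar loss gradient into both columns.

```python
import tensorflow as tf

ACTIONS = 2
readout = tf.Variable([[1.5, -0.3]])   # hypothetical network output, shape (1, ACTIONS)
a = tf.constant([[1.0, 0.0]])          # one-hot mask for the action actually taken
y = tf.constant([2.0])                 # TD target, shape (1,)

with tf.GradientTape() as tape:
    # Q-value of the chosen action: reduce_sum collapses (1, 2) -> (1,)
    readout_action = tf.reduce_sum(readout * a, axis=1)
    # Scalar MSE cost between target and chosen Q-value
    cost = tf.reduce_mean(tf.square(y - readout_action))

# Gradient w.r.t. the (1, 2) output: non-zero only where the mask is 1
grad = tape.gradient(cost, readout)
print(grad)  # e.g. [[-1.0, 0.0]] for the values above
```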
