Would it be possible to wrap a pre-trained model with the nested learning architecture? #4

@bjoern79de

Description

I'm wondering if it would be possible to initialize the "long-term memory" layers with pre-trained weights and fine-tune only the HOPE-specific layers (inspired by https://github.com/fabienfrfr/tptt).
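A minimal sketch of what this could look like, assuming a PyTorch/Hugging Face backbone: the pre-trained weights are frozen to serve as the initialized "long-term memory," and only newly added layers receive gradients. `HopeMemoryLayer` and `HopeWrappedModel` are hypothetical placeholders, not this repo's actual API.

```python
# Sketch: freeze a pre-trained backbone, train only new HOPE-specific layers.
import torch
import torch.nn as nn
from transformers import AutoModel


class HopeMemoryLayer(nn.Module):
    """Hypothetical stand-in for a HOPE-specific layer (assumption, not the repo's API)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual update keeps the frozen backbone's features intact at init.
        return hidden_states + self.proj(hidden_states)


class HopeWrappedModel(nn.Module):
    def __init__(self, base_name: str = "gpt2"):
        super().__init__()
        # Pre-trained weights act as the initialized "long-term memory".
        self.backbone = AutoModel.from_pretrained(base_name)
        for param in self.backbone.parameters():
            param.requires_grad_(False)  # freeze the pre-trained weights
        self.hope_layer = HopeMemoryLayer(self.backbone.config.hidden_size)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.backbone(input_ids=input_ids).last_hidden_state
        return self.hope_layer(hidden)


model = HopeWrappedModel()
# Fine-tune only the HOPE-specific parameters.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

This mirrors the adapter-style approach used by tptt (wrap, freeze, train the delta); whether HOPE's nested update rules stay well-behaved on top of a backbone that was never trained with them is the open question.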
