Training dataset sharding #111

@danifranco

Description

Currently the dataset is replicated in full for each spawned worker. We should shard it across workers, as is already done in "by chunks" inference, in order to save memory when training with large datasets.
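
For reference, a minimal sketch of the idea, in the spirit of how "by chunks" inference divides work across ranks: each rank keeps only its own slice of the chunk list before loading anything into memory. The names here (`shard_chunk_list`, `all_chunks`) are hypothetical and this is not BiaPy's actual implementation, just an illustration of the sharding step.

```python
# Hypothetical sketch: shard a list of training chunks across distributed
# workers so each rank loads only its slice instead of a full dataset copy.
import torch.distributed as dist


def shard_chunk_list(all_chunks, rank=None, world_size=None):
    """Return the subset of `all_chunks` that this rank should load."""
    if rank is None:
        rank = dist.get_rank() if dist.is_initialized() else 0
    if world_size is None:
        world_size = dist.get_world_size() if dist.is_initialized() else 1
    # Round-robin split: rank r keeps chunks r, r + world_size, r + 2*world_size, ...
    return all_chunks[rank::world_size]


# Example: with 4 workers, rank 1 would load only chunks 1, 5, 9, 13.
local_chunks = shard_chunk_list([f"chunk_{i:04d}.h5" for i in range(16)])
```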

Metadata

Labels

enhancement (New feature or request)
