
Standardize usage of batch_size #212

Open
@lballes

Description


Currently, our ER-based methods use a batch consisting of batch_size points from the current task plus memory_batch_size points from the memory. This makes it inconvenient to compare against, or standardize with, other learners (e.g., Joint, GDumb, Fine-tuning), and it also results in a smaller total batch size during the first training stage (when no memory exists yet).

I propose that all methods use batches of size batch_size. ER-based methods can take an additional argument, memory_frac, that determines which fraction of the batch is filled with points from the memory.
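As a minimal sketch of the proposed scheme (the function name, signature, and sampling details are hypothetical, not an existing API in the codebase): the batch always has size batch_size, a memory_frac fraction is drawn from the memory when available, and the batch is topped up from the current task otherwise.

```python
import random


def compose_batch(current_task_data, memory, batch_size, memory_frac):
    """Draw one training batch of fixed size ``batch_size``.

    Up to ``memory_frac * batch_size`` points come from the replay memory;
    the rest come from the current task. If the memory is empty (e.g.,
    during the first task) or too small, the shortfall is filled from the
    current task, so the total batch size stays constant.
    """
    n_memory = min(int(round(memory_frac * batch_size)), len(memory))
    n_current = batch_size - n_memory
    batch = random.sample(memory, n_memory) + random.choices(
        current_task_data, k=n_current
    )
    random.shuffle(batch)
    return batch
```

With this interface, Joint or Fine-tuning simply corresponds to memory_frac = 0, and the first training stage degenerates to the same case automatically because the memory is empty.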

Labels

enhancement (New feature or request)
