
Perplexity dtype restriction too strict #2224

Closed
@ZhaofengWu

Description


🐛 Bug

The perplexity metric requires the input dtype to be either fp32 or fp64, so it rejects other floating-point inputs such as fp16, and users need to manually recast their tensors before calling the metric.

_TORCH_FLOAT_OR_DOUBLE = (torch.float32, torch.float64)

Expected behavior

The metric should accept other floating point dtypes.
