
Half precision (float16 or bfloat16) support #539

Open
@CloudyDory

Description


Does BrainPy fully support half-precision floating point numbers? I have tried changing some of my own BrainPy code from brainpy.math.float32 to brainpy.math.float16 or brainpy.math.bfloat16 (by explicitly setting the dtype of all variables and using a debugger to verify that they are not promoted to float32), but the GPU memory consumption and running speed are almost the same as with float32.
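For reference, a minimal check of whether arrays actually stay in half precision might look like the following. This is a sketch, assuming brainpy.math mirrors the NumPy array-creation API (bm.zeros, bm.asarray, .dtype); the float16/bfloat16 dtypes are the ones mentioned above:

```python
import brainpy.math as bm

# Create a variable explicitly in half precision.
v = bm.Variable(bm.zeros(1024, dtype=bm.float16))
print(v.dtype)  # expect float16; float32 here would indicate silent promotion

# Mixed-dtype arithmetic can promote results back to float32,
# so intermediate values are worth checking too.
u = v + bm.asarray(1.0, dtype=bm.float16)
print(u.dtype)  # expect float16
```

When measuring memory, note that JAX preallocates a large fraction of GPU memory by default, which can hide per-array savings in nvidia-smi readings.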

Labels: brainpy.dyn, brainpy.math, enhancement
