Description
Does BrainPy fully support half-precision floating-point numbers? I have tried changing some of my own BrainPy code from `brainpy.math.float32` to `brainpy.math.float16` or `brainpy.math.bfloat16` (by explicitly setting the dtype of every variable and using a debugger to confirm that none of them are promoted back to `float32`), but the GPU memory consumption and running speed are almost the same as with `float32`.
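For reference, here is the kind of array-level sanity check I mean; this is a plain NumPy sketch of the debugging workflow (not BrainPy's API), but `jax.numpy` arrays expose the same `.dtype` and `.nbytes` attributes:

```python
import numpy as np

# Hypothetical sanity check (not BrainPy API): confirm that half-precision
# arrays really occupy half the memory of float32 arrays, and that
# arithmetic with a Python scalar does not silently promote the dtype.
a16 = np.ones((1024, 1024), dtype=np.float16)
a32 = np.ones((1024, 1024), dtype=np.float32)

print(a16.nbytes, a32.nbytes)           # the float16 buffer is half the size
assert a16.nbytes * 2 == a32.nbytes
assert (a16 * 2.0).dtype == np.float16  # no promotion from a Python float
```

These host-side checks pass in my setup, which is why I expected a visible difference in device memory as well.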