Description
Currently, when users of the Python and Java interfaces construct an index with the float8 storage datatype, we automatically use a scaling ratio of <1, 127> (python, java).

While this seems like a sane default, it effectively reduces the precision of the value space (by a factor of roughly 127/actual_range, I think). For some use cases the expected range of the input values is known ahead of time, for example -10.0 to 10.0, in which case we would get a higher-precision quantization by using a scale factor of <1, 10>.
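To illustrate the effect, here is a minimal sketch assuming the float8 storage type is a fixed-point encoding where a value is stored as `round(value * den / num)` in an int8 for a scaling ratio <num, den>, and decoded by multiplying back by num / den. This is an illustration of the idea only, not the core implementation:

```python
import numpy as np

# Assumed fixed-point round trip for a scaling ratio <num, den>:
# encode to int8, then decode back to float. Illustrative only.
def roundtrip(values, num, den):
    stored = np.clip(np.round(values * den / num), -127, 127).astype(np.int8)
    return stored.astype(np.float32) * num / den

values = np.linspace(-10.0, 10.0, 5, dtype=np.float32)

# Default ratio <1, 127>: representable range is roughly [-1, 1],
# so inputs outside that range are clipped.
print(roundtrip(values, 1, 127))   # -> [-1. -1.  0.  1.  1.]

# Ratio <1, 10>: representable range grows to roughly [-12.7, 12.7],
# so the full -10.0..10.0 input range survives quantization.
print(roundtrip(values, 1, 10))    # -> [-10.  -5.   0.   5.  10.]
```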
It would be good to expose this parameter in the bindings' constructors, since it is already available in core.
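For example, the exposed parameter could look something like the following in the Python binding. The `scaling_ratio` keyword and the surrounding constructor arguments are hypothetical placeholders, not the current API:

```python
# Hypothetical sketch of surfacing the core's scaling ratio in the Python
# constructor; argument names are illustrative only.
index = Index(
    num_dimensions=128,
    storage_data_type=StorageDataType.Float8,
    scaling_ratio=(1, 10),  # forwarded to the existing <num, den> ratio in core
)
```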