Description
Lucene now has sandbox support for BigInteger (LUCENE-7043), and hopefully BigDecimal will follow soon. We should look at what needs to be done to support them in Elasticsearch.
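For context, this is roughly what the sandbox support offers: a minimal sketch assuming the `BigIntegerPoint` field from LUCENE-7043 (it lives in the lucene-sandbox module, and class/method names may shift between Lucene versions):

```java
import java.math.BigInteger;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.BigIntegerPoint;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.store.RAMDirectory;

public class BigIntegerPointSketch {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            // BigIntegerPoint indexes 128-bit signed integers, well beyond long range
            doc.add(new BigIntegerPoint("counter", new BigInteger("18446744073709551617"))); // 2^64 + 1
            writer.addDocument(doc);
        }
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query q = BigIntegerPoint.newRangeQuery("counter",
                    new BigInteger("18446744073709551616"),  // 2^64
                    new BigInteger("36893488147419103232")); // 2^65
            System.out.println("hits: " + searcher.count(q)); // 1
        }
    }
}
```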
I propose adding `big_integer` and `big_decimal` types which have to be specified explicitly - they shouldn't be types which can be detected by dynamic mapping.
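To make the proposal concrete, an explicit mapping might be built like this on the Java side. Note that `big_integer` and `big_decimal` are the proposed (not yet existing) type names, and the `XContentBuilder` package and `string()` method shown here are the 2.x/5.x ones:

```java
import java.io.IOException;

import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class BigTypeMappingSketch {
    public static void main(String[] args) throws IOException {
        // The user must opt in explicitly; dynamic mapping would never pick these types
        XContentBuilder mapping = XContentFactory.jsonBuilder()
            .startObject()
                .startObject("properties")
                    .startObject("tx_id").field("type", "big_integer").endObject()
                    .startObject("balance").field("type", "big_decimal").endObject()
                .endObject()
            .endObject();

        // Would be sent as the body of a create-index or put-mapping request
        System.out.println(mapping.string());
    }
}
```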
Many languages don't support big integers/decimals natively. JavaScript will convert them to floats or throw an exception if a number is out of range. This can be worked around by always rendering these numbers in JSON as strings. We could possibly accept known bigints/bigdecimals as numbers, but there are a few places where this could be a problem:
- indexing into a field already mapped as a big type (do we know ahead of time that a floating point value needs to be parsed as a BigDecimal?)
- dynamic mapping (a floating point number could have lost precision before the field is defined as big_decimal; see the sketch just after this list)
- ingest pipeline (ingest doesn't know about field mappings)
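To illustrate the dynamic-mapping concern: once a value has been parsed as a double, the lost digits cannot be recovered by re-mapping the field later. A minimal demonstration in plain Java:

```java
import java.math.BigDecimal;

public class PrecisionLossDemo {
    public static void main(String[] args) {
        // More significant digits than a 64-bit double can represent
        String source = "12345678901234567890.12345678901234567890";

        // If the JSON layer parses the value as a double first...
        double asDouble = Double.parseDouble(source);
        System.out.println(new BigDecimal(asDouble)); // digits beyond ~17 are already gone

        // ...whereas parsing the original text directly keeps every digit
        System.out.println(new BigDecimal(source));
    }
}
```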
These problems could be worked around by telling Jackson to parse floats and ints as BIG* (`USE_BIG_DECIMAL_FOR_FLOATS` and `USE_BIG_INTEGER_FOR_INTS`), but this may well generate a lot of garbage for what is an infrequent use case.
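For reference, this is roughly what those two Jackson settings do when enabled on a plain `ObjectMapper`; Elasticsearch drives Jackson through its own XContent layer, so the actual wiring (and the garbage profile) would differ:

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonBigNumbersDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper()
                .enable(DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS)
                .enable(DeserializationFeature.USE_BIG_INTEGER_FOR_INTS);

        Object floatValue = mapper.readValue("12345678901234567890.12345", Object.class);
        Object intValue = mapper.readValue("123456789012345678901234567890", Object.class);

        System.out.println(floatValue.getClass()); // class java.math.BigDecimal
        System.out.println(intValue.getClass());   // class java.math.BigInteger
    }
}
```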
Alternatively, we could just say that Big* should always be passed in as strings if they are to maintain their precision.
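A minimal sketch of that convention from the input side, assuming values arrive as JSON strings: the text never passes through a float, so nothing is lost:

```java
import java.math.BigDecimal;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class BigDecimalAsStringDemo {
    public static void main(String[] args) throws Exception {
        // The client sends the value as a string, not as a bare JSON number
        String json = "{\"balance\": \"12345678901234567890.12345678901234567890\"}";

        JsonNode root = new ObjectMapper().readTree(json);
        BigDecimal balance = new BigDecimal(root.get("balance").textValue());

        System.out.println(balance); // every digit preserved
    }
}
```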