Closed
Description
There is a consistency issue between the default way decimals are printed and the way they are inferred. Values like 1.8, -2.22, 3.4 will be inferred to be Decimal. However, if you printf "%a" these values, they will be represented as 1.8M, -2.22M, 3.4M. Reading them back in will then produce strings instead of decimals. It would be nice if there were more consistency here.
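A minimal sketch of the round trip in F#, assuming the inferred values are ordinary `decimal`s (the trailing M is F#'s decimal literal suffix; the parse call is only illustrative, not the library's actual reader):

```fsharp
// The values the library infers as Decimal:
let values = [ 1.8M; -2.22M; 3.4M ]

// "%A" structured formatting keeps the literal suffix, so a decimal
// is printed as "1.8M" rather than "1.8":
values |> List.iter (printfn "%A")

// A reader that does not strip the suffix cannot parse it back as a
// decimal, so it falls back to treating the cell as a string:
let ok, _ = System.Decimal.TryParse "1.8M"   // ok = false
```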
There are several solutions I can think of, each with its own pros and cons:
- The type of numbers ending with M could be inferred to be Decimal.
- Numbers like 1.8, -2.22, 3.4 could be inferred to be floats.
- We could allow the user to select what to do in case of ambiguity. For example, a column that contains 1.8, 2.3, 5.6 could be converted to strings, floats, or decimals. The current default is to convert them to decimals, but we could allow the user to specify that in such cases floats are preferred.
- There are probably many other ways in which it could be resolved.