Description
Currently, metadata is supported when converting Python models via the server-side CLI converter. However, if you train a model in the browser, there is no way to save custom key/value pairs that you may need for future pre/post processing.
For example, if you normalize your training data, you need to know the min/max values for each attribute so that any new prediction data coming in at inference time can be normalized the same way after the model is trained.
Currently, a user has to save this data in a separate pipeline from the TensorFlow.js one, which is inconvenient and error-prone. It would be far better to expose a simple interface that lets the end user store their metadata as part of the model.json file and read those values back on model load, so they can be retrieved and used as needed for any custom pre/post processing.
All users training models in the browser, either fully or via transfer learning, could benefit from this feature.