Closed
Description
I am fine-tuning several models that use different prompt styles, so at inference time I need to format prompts appropriately for each model. It would be great if I could save a metadata or config JSON when creating a fine-tuned model and retrieve it along with the model later. Otherwise I would need to either: (1) manage my own external database, which is cumbersome when working from Google Colab; (2) hack the model suffix to encode my config params (let's not do that); or (3) reverse-engineer my training JSONL to deduce each model's specific prompt styling, which is difficult. Are there any other options?
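
For context, this is roughly the kind of per-model prompt config I end up tracking by hand today. It's just a sketch of the workaround, not a proposal for the API; the model IDs, field names, and helper functions below are all made up for illustration.

```python
import json

# Hypothetical sidecar mapping from fine-tuned model ID to the prompt styling
# that model was trained with. This is the metadata I would like to attach to
# the model itself instead of managing separately.
PROMPT_CONFIGS = {
    "ft:base-model:org:support-bot:abc123": {      # made-up model ID
        "prompt_template": "### Question:\n{question}\n\n### Answer:\n",
        "stop": ["###"],
    },
    "ft:base-model:org:summarizer:def456": {       # made-up model ID
        "prompt_template": "Summarize:\n{text}\n\nSummary:",
        "stop": ["\n\n"],
    },
}


def save_configs(path="prompt_configs.json"):
    """Persist the mapping so it can be reloaded in a fresh Colab session."""
    with open(path, "w") as f:
        json.dump(PROMPT_CONFIGS, f, indent=2)


def build_prompt(model_id, **fields):
    """Style a prompt using the config recorded for the given model."""
    cfg = PROMPT_CONFIGS[model_id]
    return cfg["prompt_template"].format(**fields)


# Example usage:
# prompt = build_prompt("ft:base-model:org:support-bot:abc123",
#                       question="How do I reset my password?")
```

Keeping this file next to the training JSONL works, but it has to be re-uploaded or re-mounted every Colab session, which is exactly the friction I'd like to avoid by having the config stored with the model.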