Hi,
Thanks for such a useful library 🙏
I have been testing some of the models listed in the documentation, including several of the `databricks/databricks-*` models, but they all seem to fall back to the default encoding with this warning:

`Warning: model not found. Using cl100k_base encoding.`

Below is a minimal example to reproduce the issue:
```python
from tokencost import calculate_prompt_cost, calculate_completion_cost, count_string_tokens

model = "databricks/databricks-meta-llama-3-1-405b-instruct"
prompt_string = "Hello world"
completion = "How may I assist you today?"

prompt_cost = calculate_prompt_cost(prompt_string, model)
completion_cost = calculate_completion_cost(completion, model)
print(f"{prompt_cost} + {completion_cost} = {prompt_cost + completion_cost}")
print(count_string_tokens(prompt=prompt_string + completion, model=model))
```
How can I get costs/tokens for the databricks models listed here: https://docs.databricks.com/aws/en/machine-learning/model-serving/foundation-model-overview ?
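In case it helps anyone else as a stopgap: while a model is missing from the library's price table, the cost can still be computed by hand from the per-token rates published on the Databricks pricing page. A minimal sketch of that arithmetic is below; the two rates are placeholder values I made up for illustration, not real Databricks prices, and the token counts would have to come from the model's own tokenizer.

```python
from decimal import Decimal

# Placeholder per-token rates -- NOT real Databricks prices; substitute the
# input/output rates listed for your model on the Databricks pricing page.
INPUT_COST_PER_TOKEN = Decimal("0.000005")
OUTPUT_COST_PER_TOKEN = Decimal("0.000015")

def manual_cost(prompt_tokens: int, completion_tokens: int) -> Decimal:
    """Total cost = prompt tokens x input rate + completion tokens x output rate."""
    return (prompt_tokens * INPUT_COST_PER_TOKEN
            + completion_tokens * OUTPUT_COST_PER_TOKEN)

# Example: 9 prompt tokens and 7 completion tokens at the placeholder rates.
print(manual_cost(9, 7))  # → 0.000150
```

Obviously this only covers the pricing half of the problem; accurate token counts for Llama-family models would still need the matching tokenizer rather than `cl100k_base`.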