Hello everyone,
currently I'm developing an agent that uses tools which themselves require LLM calls. Let's say the workflow of a tool looks like this: the LLM extracts a country from the user query -> the country is double-checked against a list to see whether it is relevant -> the country is returned if it is in the list.
More happens in the tool, but for simplicity let's keep it at that.
Inside the tool I would then need to call the LLM. I understood that it is possible to send custom values to a tool via **kwargs, but these shouldn't be complex objects, so would it make more sense to create the LLM API object inside the tool?
In addition, I get my prompts from Langfuse via their API for tracking and version control. This introduces another object, which either needs to be initialized inside the function or passed in as an object via **kwargs.
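One alternative to sending clients through **kwargs is a small factory function that closes over clients created once at startup; this is a hedged sketch, not the original post's approach, and DummyLLM and make_extract_country are illustrative stand-ins for the real LLM API object and tool wiring:

```python
import asyncio

# Hypothetical stand-in for the real LLM API object; only the
# ainvoke signature matters for this sketch.
class DummyLLM:
    async def ainvoke(self, prompt: str) -> str:
        return "Germany"

def make_extract_country(llm_client, prompt_template: str, allowed_countries: set):
    """Factory: the returned tool closes over the clients, so nothing
    complex has to travel through **kwargs."""
    async def extract_country(user_query: str):
        final_prompt = prompt_template.format(user_query=user_query)
        country = (await llm_client.ainvoke(final_prompt)).strip()
        # double-check against the allow-list before returning
        return country if country in allowed_countries else None
    return extract_country

extract_country = make_extract_country(
    DummyLLM(), "Extract the country from: {user_query}", {"Germany", "France"}
)
print(asyncio.run(extract_country("weather in Berlin")))  # prints Germany with the dummy client
```

The factory runs once per tool, so the real Langfuse and LLM clients would be created a single time and reused across calls.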
In the end the agent will have multiple of these extract_something tools.
So, my overall question is: what is the best approach here?
Or am I missing an overall conceptual point here?
Would it make sense to create a class that has the LLM API object, the Langfuse client, and so on as attributes, with the tool functions as class methods? Then I could call self.llm_client, and the method signature would look like def extract_country(self, user_query).
Here is some pseudo code of the idea:
@tool
async def extract_country(user_query, **kwargs):
    # clients are rebuilt on every call from plain values passed via **kwargs
    langfuse_client = Langfuse(**kwargs["langfuse_config"])  # api values from kwargs
    prompt_template = ...  # get prompt template from langfuse_client
    final_prompt = prompt_template.format(user_query=user_query)
    llm_client = LLMAPIObject(**kwargs["llm_config"])  # api values from kwargs
    response = llm_client.invoke(final_prompt)
    tool_output = ...  # process response, do some checks to get the final tool output
    return tool_output
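The class-based variant from the question, with the clients as attributes and the tools as methods, could be sketched like this; DummyLLM and DummyPromptStore are hypothetical stand-ins for the real LLM API object and the Langfuse client, not real library APIs:

```python
class DummyLLM:
    def invoke(self, prompt: str) -> str:
        # placeholder: a real client would call the LLM API here
        return "Germany"

class DummyPromptStore:
    def get_template(self, name: str) -> str:
        # placeholder: a real store would fetch a versioned prompt from Langfuse
        return "Extract the country from: {user_query}"

class ExtractionTools:
    """Holds the expensive clients once; every extract_* tool is a method."""

    def __init__(self, llm_client, prompt_store, allowed_countries):
        self.llm_client = llm_client
        self.prompt_store = prompt_store
        self.allowed_countries = allowed_countries

    def extract_country(self, user_query: str):
        template = self.prompt_store.get_template("extract_country")
        country = self.llm_client.invoke(template.format(user_query=user_query)).strip()
        # double-check against the list; only relevant countries are returned
        return country if country in self.allowed_countries else None

tools = ExtractionTools(DummyLLM(), DummyPromptStore(), {"Germany", "France"})
print(tools.extract_country("weather in Berlin"))  # prints Germany with the dummy client
```

With this layout, every future extract_something tool becomes another method that reuses self.llm_client and self.prompt_store instead of rebuilding them per call.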