
Conversation

nikhil23-Rao

  • Changed the output schema of the reduce operation in folded outputs, ultimately reducing the number of tokens the LLM has to output.
  • Edited the cached_call_llm function, which contains the logic for processing function calls and storing the result in an updated state.
  • At the end of all the folds, the localized state is converted into the user's self-defined schema in reduce.py.
  • Moved all testing data into a testing-data folder to avoid polluting the project root.
  • Found that certain experiments with the code resulted in faster run times and lower costs.
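The flow described above can be sketched as follows. This is a minimal illustration, not the actual reduce.py implementation: it assumes a compact internal state with short keys (so each fold's LLM output is smaller) that is expanded into the user's self-defined schema only once, after the last fold. All function and field names here are hypothetical.

```python
def fold_batch(state: dict, batch: list[dict]) -> dict:
    """Merge one batch into the compact internal state.

    In the real pipeline an LLM call (via cached_call_llm) would
    produce this update; here it is a plain aggregation for clarity.
    Short keys ("n", "s") stand in for the token-lean fold schema.
    """
    state["n"] = state.get("n", 0) + len(batch)
    state["s"] = state.get("s", 0) + sum(item["value"] for item in batch)
    return state


def to_user_schema(state: dict) -> dict:
    """Convert the compact internal state into the user's schema."""
    return {
        "count": state["n"],
        "total": state["s"],
        "average": state["s"] / state["n"] if state["n"] else 0.0,
    }


def folded_reduce(items: list[dict], batch_size: int = 2) -> dict:
    """Run the fold loop, converting to the user schema only at the end."""
    state: dict = {}
    for i in range(0, len(items), batch_size):
        state = fold_batch(state, items[i:i + batch_size])
    return to_user_schema(state)


result = folded_reduce([{"value": v} for v in (1, 2, 3, 4)])
# result == {"count": 4, "total": 10, "average": 2.5}
```

Because the expansion to the verbose user-defined schema happens exactly once rather than on every fold, intermediate LLM outputs stay short, which is where the token savings come from.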
