Add Dataclasses and RepoEnv Info refac #50


Merged: @matheper merged 19 commits into main from repoenv-info-dataclass on Feb 11, 2025

Conversation

matheper (Collaborator)

  • Create some dataclasses to keep track of the environment information (EnvInfo) and LLM responses (LLMResponse and TokenUsage); see the sketch after this list
  • Methods step and reset always return an EnvInfo object instead of tuples
  • LLM always returns an LLMResponse
  • Move token usage tracking to HistoryTracker
  • Combine HistoryTracker.save_prompt_response_pairs into HistoryTracker.step
  • Remove Random LLM
  • Test tool and toolbox
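
As a rough illustration of the new return types, here is a minimal sketch; the field names below are assumptions for illustration, not the exact fields from this PR:

```python
from dataclasses import dataclass


@dataclass
class TokenUsage:
    # Hypothetical field names; the PR's actual fields may differ.
    prompt: int = 0
    response: int = 0


@dataclass
class LLMResponse:
    # The prompt can be a plain string or a list of chat messages.
    prompt: str | list[dict]
    response: str
    token_usage: TokenUsage | None = None


@dataclass
class EnvInfo:
    # Illustrative subset of what step()/reset() now return instead
    # of a tuple; the real EnvInfo likely carries more fields.
    step_observation: str
    score: int = 0
    done: bool = False
```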

```python
prp = self.prompt_response_pairs[game_step]
if prp and include_prompt_response_pairs:
    # Serialize this step's prompt/response pairs only when requested.
    json_out["prompt_response_pairs"] = self._format_prompt_response_pairs(
```
matheper (Collaborator Author)

I'm changing the format of the prompt_response_pairs when the prompt is a list of messages, see _format_prompt_response_pairs and tests/test_agents.py:193. I'm not sure I understand the previous logic when there are multiple message turns. Now the messages are concatenated into one prompt.
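
For illustration, concatenating a message list into a single prompt could look like this sketch (the helper below is hypothetical, not the PR's _format_prompt_response_pairs):

```python
def concat_messages(messages: list[dict]) -> str:
    # Hypothetical flattening: one "role: content" line per message turn.
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)
```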

xingdi-eric-yuan (Collaborator)

This was for the cases where there are multiple LLM calls in a single env.step. For instance, in some CoT settings, we first call the LLM to generate the reasoning string, then, conditioned on this string, we ask the LLM again to generate an action. I believe it's better to keep the list instead of concatenating, because a list renders more cleanly under json.dumps when creating prompts.
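
To make the two-call pattern concrete, here is a sketch under assumed names (llm and the message contents are invented; only the keep-it-as-a-list point comes from the comment above):

```python
# Two LLM calls inside a single env.step, e.g. a CoT flow:
# first ask for reasoning, then ask for an action conditioned on it.
reasoning = llm(prompt=[{"role": "user", "content": "Explain the bug."}])
action = llm(prompt=[
    {"role": "user", "content": "Explain the bug."},
    {"role": "assistant", "content": reasoning.response},
    {"role": "user", "content": "Now choose an action."},
])

# Keep one entry per call; json.dumps then renders each turn
# separately instead of as one concatenated string.
prompt_response_pairs = [reasoning, action]
```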

matheper (Collaborator Author)

I removed the string format, but moved the token_usage inside each prompt_response_pair, since there is a usage record for each of the LLM calls.
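
The serialized shape could then look roughly like this (key names are assumptions, building on the sketches above):

```python
json_out["prompt_response_pairs"] = [
    {
        "prompt": pair.prompt,        # string or list of messages
        "response": pair.response,
        "token_usage": {              # one usage record per LLM call
            "prompt": pair.token_usage.prompt,
            "response": pair.token_usage.response,
        },
    }
    for pair in prp
]
```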

@matheper changed the title from "Repoenv info dataclass" to "Add Dataclasses and RepoEnv Info refac" on Feb 10, 2025
@xingdi-eric-yuan (Collaborator) left a comment

let's go!

@matheper merged commit 2fbeae5 into main on Feb 11, 2025
4 checks passed
@matheper deleted the repoenv-info-dataclass branch on February 11, 2025 at 21:42