best effort tito #955

Open

eligotts wants to merge 1 commit into main from eli/best-effort-tito

Conversation

@eligotts (Contributor) commented Feb 24, 2026

Description

Best effort TITO: instead of assuming extension and looking back at strictly the last trajectory step, we walk backward until we find a MESSAGES-level prefix hit.

Tested on both wiki-search and eligottlieb/poker-multiagent; the latter previously produced the loud failure with TITO because it does explicit rewriting of history (like context folding).

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Test improvement

Testing

  • All existing tests pass when running uv run pytest locally.
  • New tests have been added to cover the changes

Checklist

  • My code follows the style guidelines of this project as outlined in AGENTS.md
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

Additional Notes



@eligotts eligotts changed the title best effort tito, we look back in the trajectory list for last step w… best effort tito Feb 24, 2026
@mikasenghaas (Member) left a comment:


yeap, this works!

prev_turn_completion_ids = prev_turn_tokens["completion_ids"]
prev_turn_ids = prev_turn_prompt_ids + prev_turn_completion_ids

def normalize_for_comparison(value: Any) -> Any:
Member:


should we make this a general message_util? seems useful in other places too? also, I vaguely remember we already have a similar util to this, but I might be wrong

return [normalize_for_comparison(item) for item in value]
return value
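For context, a plausible shape for such a normalizer, reconstructed from the visible fragment (this is a hedged sketch, not the PR's exact code; in particular, dropping `None` values is my assumption):

```python
from typing import Any


def normalize_for_comparison(value: Any) -> Any:
    """Recursively normalize a value so semantically equal message
    payloads compare equal (illustrative sketch)."""
    if isinstance(value, dict):
        # Sort keys and drop None values so optional/absent fields
        # don't break equality between otherwise-identical messages.
        return {
            k: normalize_for_comparison(v)
            for k, v in sorted(value.items())
            if v is not None
        }
    if isinstance(value, list):
        return [normalize_for_comparison(item) for item in value]
    return value
```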

async def find_largest_prefix_match_tokens() -> list[int] | None:
Member:


maybe a small docstring here as well


return 0

# we add suffix_ids to prev_turn_ids. suffix_ids are tokens that are added
Member:


I know this is unrelated to this PR, but I think we might be able to remove the suffix part, since we compute env_response_ids = full_ids[len(prev_turn_ids):] rather than tokenizing the env response ids in isolation
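The simplification this comment suggests -- deriving the env-response tokens by slicing the full tokenization after the previous turn's ids, instead of tokenizing the env response in isolation -- looks roughly like this (function name and the prefix assertion are illustrative):

```python
def env_response_ids_by_slicing(full_ids: list[int], prev_turn_ids: list[int]) -> list[int]:
    """Derive env-response token ids as the slice of the full token
    sequence after the previous turn's ids. Assumes prev_turn_ids is a
    prefix of full_ids (sketch of the comment's suggestion)."""
    # Guard the prefix assumption: slicing is only valid if the previous
    # turn's tokens lead the full sequence exactly.
    assert full_ids[: len(prev_turn_ids)] == prev_turn_ids, "prev turn must be a prefix"
    return full_ids[len(prev_turn_ids):]
```

Because the slice comes from one tokenization of the whole sequence, it avoids boundary-merge artifacts that can appear when the env response is tokenized on its own.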

