
first shim wrapper around litellm #37

Merged
miguelg719 merged 5 commits into local_support from miguel/stg-339-integrate-litellm
Apr 23, 2025
Conversation

@miguelg719
Collaborator

It's time to start doing a full rewrite. LiteLLM will help a lot.
Here is a sample test:

import asyncio
import os

from pydantic import BaseModel
from stagehand import Stagehand, StagehandConfig


class Joke(BaseModel):
    joke: str
    explanation: str
    setup: str
    punchline: str

class Jokes(BaseModel):
    jokes: list[Joke]

async def main():
    # Build a unified configuration object for Stagehand
    config = StagehandConfig(
        env="BROWSERBASE",
        api_key=os.getenv("BROWSERBASE_API_KEY"),
        project_id=os.getenv("BROWSERBASE_PROJECT_ID"),
        model_name="gemini/gemini-2.5-flash-preview-04-17",
        model_client_options={"apiKey": os.getenv("MODEL_API_KEY")},
        # Use verbose=3 for debug logs (1=minimal, 2=medium-detail)
        verbose=3,
    )

    stagehand = Stagehand(
        config=config,
        server_url=os.getenv("STAGEHAND_SERVER_URL"),
    )
    response = stagehand.llm.create_response(
        messages=[{"role": "user", "content": "Hello, how are you? Can you tell me a few jokes?"}],
        model="gemini/gemini-2.0-flash",
        response_format=Jokes,
    )
    print("Received={}".format(response.choices[0].message.content))

asyncio.run(main())

@linear

linear bot commented Apr 22, 2025

@miguelg719 miguelg719 marked this pull request as ready for review April 22, 2025 23:08
@filip-michalsky
Collaborator

Can we please target this to a different branch than main? I was thinking something like v2 or python-rewrite. Once we are done, we merge to main. What do you think? This way, if we need to fix issues in the "stagehand API wrapper" version, we can still address them in parallel.

requirements.txt (Outdated)

 rich
-browserbase
\ No newline at end of file
+browserbase
+litellm
\ No newline at end of file
Collaborator

I would probably pin the lib versions here - litellm moves very fast in my experience, and it's easy to introduce breaking changes.
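Pinning could look like this in requirements.txt (the version numbers below are illustrative placeholders, not recommendations from this PR):

```
rich==13.7.1
browserbase==1.0.5
litellm==1.67.2
```

Exact pins, or at least a bounded range like `litellm>=1.67,<2`, keep a fast-moving dependency from silently introducing breaking changes between installs.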

@miguelg719 miguelg719 changed the base branch from main to local_support April 23, 2025 19:34
        try:
            # Use litellm's completion function
            if self.async_mode:
                response = litellm.acompletion(**filtered_params)
Collaborator

You would need to await this in the async mode, right? This call is otherwise blocking the event loop, I think.

Collaborator Author

Yeah, good catch. Gonna remove acompletion until we find a use case for it.
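For context on the bug discussed above: calling an async function like `litellm.acompletion` without `await` does not block or execute the request at all; it just returns an un-run coroutine object. A minimal stdlib-only sketch (using a stand-in coroutine, not the real litellm API) shows the difference:

```python
import asyncio

async def acompletion(**params):
    # Stand-in for litellm.acompletion; the real call would hit an LLM API.
    await asyncio.sleep(0)
    return {"params": params}

async def main():
    # Without `await`, this returns an un-run coroutine object, not a response.
    pending = acompletion(model="gemini/gemini-2.0-flash")
    assert asyncio.iscoroutine(pending)
    pending.close()  # discard it to silence the "never awaited" warning

    # Awaiting it actually executes the request and yields the result.
    response = await acompletion(model="gemini/gemini-2.0-flash")
    assert response["params"]["model"] == "gemini/gemini-2.0-flash"
    print("awaited ok")

asyncio.run(main())
```

So `response = litellm.acompletion(**filtered_params)` would hand later code a coroutine instead of a completion; the sync `litellm.completion` path avoids that until an async use case appears.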

Collaborator

@filip-michalsky filip-michalsky left a comment


LGTM -

Great start. I assume the next step is to wire up the first Stagehand primitive here... maybe observe, as Ani said?

Please merge to a different branch than main, though.

@miguelg719 miguelg719 merged commit 92494cd into local_support Apr 23, 2025
@filip-michalsky filip-michalsky deleted the miguel/stg-339-integrate-litellm branch May 4, 2025 12:33
