models - llamaapi - async #372

Open: pgrayy wants to merge 4 commits into main

Conversation

pgrayy (Member) commented Jul 7, 2025

Description

Use the AsyncLlamaAPIClient Python client so that Llama API requests no longer block the event loop.
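
For context, a minimal sketch of what the switch looks like against the client itself, assuming llama-api-client's AsyncLlamaAPIClient mirrors the sync client's chat-completions interface with awaitable methods (an illustration, not this PR's implementation):

import asyncio

from llama_api_client import AsyncLlamaAPIClient


async def demo():
    # Requests on the async client are coroutines, so multiple in-flight
    # calls can overlap on a single event loop instead of blocking it.
    client = AsyncLlamaAPIClient(api_key="****")
    response = await client.chat.completions.create(
        model="Llama-4-Maverick-17B-128E-Instruct-FP8",
        messages=[{"role": "user", "content": "What is 2+2?"}],
    )
    print(response)


asyncio.run(demo())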

Related Issues

#83

Type of Change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation update
  • Other (please describe):

Testing

How have you tested the change? Verify that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli

  • I ran hatch run prepare
  • Wrote new unit and integration tests. Note: I don't yet have a Llama API key, so the integration tests have not been run.
  • Prepared the following test script (will run it once I have a Llama API key):
# Imports assumed for this script: asyncio, Agent from strands, and the
# LlamaAPIModel provider under test (module path per this repo's models package).
import asyncio

from strands import Agent
from strands.models.llamaapi import LlamaAPIModel

model = LlamaAPIModel(client_args={"api_key": "****"}, model_id="Llama-4-Maverick-17B-128E-Instruct-FP8")
prompt = "What is 2+2? Think through the steps."


async def func_a():
    agent = Agent(model=model, callback_handler=None)
    result = await agent.invoke_async(prompt)
    print(f"FUNC_A: {result.message}")


async def func_b():
    agent = Agent(model=model, callback_handler=None)
    result = await agent.invoke_async(prompt)
    print(f"FUNC_B: {result.message}")


async def func_c():
    agent = Agent(model=model, callback_handler=None)
    result = await agent.invoke_async(prompt)
    print(f"FUNC_C: {result.message}")


async def func_d():
    agent = Agent(model=model, callback_handler=None)
    result = await agent.invoke_async(prompt)
    print(f"FUNC_D: {result.message}")


async def main():
    await asyncio.gather(func_a(), func_b(), func_c(), func_d())


asyncio.run(main())

With the sync LlamaAPI Python client, the four functions ran sequentially and the total run time averaged X seconds. With the AsyncLlamaAPI Python client, they ran concurrently and the total run time averaged Y seconds.
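
For anyone reproducing the comparison, a minimal way to capture those timings is to wrap the entry point with time.perf_counter (a sketch; it replaces the bare asyncio.run(main()) call in the script above):

import time

start = time.perf_counter()
asyncio.run(main())
elapsed = time.perf_counter() - start
print(f"Total run time: {elapsed:.2f}s")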

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

pgrayy temporarily deployed to auto-approve July 7, 2025 23:59 with GitHub Actions
pgrayy marked this pull request as ready for review July 8, 2025 00:02
pgrayy mentioned this pull request Jul 11, 2025