
[bug] llm_api function triggering twice #681

Closed
@edisontim

Description


Describe the bug
I have a wrapper function around the async openai.AsyncClient().chat.completions.create. Printing inside the function after passing it to the guard makes the print happen twice, so I'm wondering whether the API call is also being triggered twice.

To Reproduce
Steps to reproduce the behavior:

import openai
from guardrails import Guard
from pydantic import BaseModel, Field


class AsyncOpenAiClient:
    client: openai.AsyncOpenAI

    def __init__(self):
        self.client = openai.AsyncClient()

    async def request_prompt_completion(self, input_str: str, *args, **kwargs) -> str:
        # Guardrails passes the guard's `instructions` through kwargs, but OpenAI's
        # chat.completions.create rejects unknown arguments, so pop it out and send
        # it as the system message instead.
        system_prompt = kwargs.pop("instructions")

        print("Printing")
        response = await self.client.chat.completions.create(
            messages=[
                {
                    "role": "user",
                    "content": input_str,
                },
                {"role": "system", "content": system_prompt},
            ],
            *args,
            **kwargs,
        )
        msg = response.choices[0].message.content
        return msg

    async def request_embedding(self, input, **kwargs) -> list[float]:
        response = await self.client.embeddings.create(input=input, **kwargs)
        return response.data[0].embedding


class DialogueSegment(BaseModel):
    full_name: str = Field(description="Full name of the NPC speaking.")
    dialogue_segment: str = Field(description="The dialogue spoken by the NPC.")

class Thought(BaseModel):
    full_name: str = Field(description="Full name of the NPC expressing the thought.")
    value: str = Field(
        description="""The NPC's thoughts and feelings about the discussion, including nuanced emotional responses and sentiments towards the topics being discussed."""
    )

class Townhall(BaseModel):
    dialogue: list[DialogueSegment] = Field(
        description="""Discussion held by the NPCs, structured to ensure each NPC speaks twice, revealing their viewpoints and
            emotional reactions to the discussion topics."""
    )
    thoughts: list[Thought] = Field(
        description="""Collection of NPCs' thoughts post-discussion, highlighting their reflective sentiments and emotional
            responses to the topics covered."""
    )
    plotline: str = Field(description="The central theme or main storyline that unfolds throughout the dialogue.")


llm_client = AsyncOpenAiClient()
guard = Guard.from_pydantic(output_class=Townhall, instructions="whatever", num_reasks=0)

_raw_llm_response, validated_response, *_rest = await guard(
    llm_api=llm_client.request_prompt_completion,
    prompt="Hello world!",
    model="gpt-4-0125-preview",
    temperature=1,
)

Expected behavior
Whatever happens in my wrapper function should only be triggered once per guard call.
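
For what it's worth, this is roughly how I confirmed the double invocation. The decorator below is just my own diagnostic sketch (not anything from guardrails), applied to the wrapper before handing it to the guard:

import functools

def count_calls(fn):
    """Wrap an async callable and record how many times it is awaited."""
    @functools.wraps(fn)
    async def wrapper(*args, **kwargs):
        wrapper.calls += 1
        print(f"{fn.__name__} call #{wrapper.calls}")
        return await fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

# counted = count_calls(llm_client.request_prompt_completion)
# _ = await guard(llm_api=counted, prompt="Hello world!", model="gpt-4-0125-preview", temperature=1)
# With num_reasks=0 I would expect counted.calls == 1, but it ends up at 2.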

Library version:
0.4.2

Additional context
By the way, I struggled to get OpenAI's async client to work and I'm wondering if I even did it the right way (note how the wrapper function has to delete the instructions field from kwargs, because otherwise OpenAI raises an exception). The documentation on the guardrails website seems to rely on an old version of the OpenAI Python API; could you update it please? It would be very much appreciated :)
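
For reference, this is the bare async pattern I believe the current (>= 1.x) openai package expects, independent of guardrails; the model name and prompt are just the ones from my repro:

import asyncio
import openai

async def main() -> None:
    client = openai.AsyncOpenAI()  # picks up OPENAI_API_KEY from the environment
    response = await client.chat.completions.create(
        model="gpt-4-0125-preview",
        messages=[
            {"role": "system", "content": "whatever"},
            {"role": "user", "content": "Hello world!"},
        ],
        temperature=1,
    )
    print(response.choices[0].message.content)

asyncio.run(main())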
