Make chat calls using your LLM.
The llm.chat method accepts the following parameters.
| Parameter | Type | Description |
|---|---|---|
| input | str | The input message to send to the chat model. |
| is_stream | bool | Whether to stream the response back chunk by chunk. |
| **kwargs | dict | Additional parameters to pass to the chat model. |
Refer to your provider-specific documentation for additional kwargs you can use.
| Output | Type | Description |
|---|---|---|
| ChatCompletion | object | A chat completion object in the OpenAI format, plus metrics computed by LLMstudio. |
Here's how to use .chat() to make calls to your LLM.
<Tabs>
<Tab title="String format">
```python
message = "Hello, how are you today?"
```
</Tab>
<Tab title="OpenAI format">
```python
message = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello, how are you today?"}
]
```
</Tab>
</Tabs>
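If you assemble the OpenAI-format list programmatically, a small sanity check can catch malformed entries before the call. This is a hypothetical helper, not part of LLMstudio:

```python
def validate_messages(messages):
    """Check that each entry is a dict with a valid role and string content."""
    allowed_roles = {"system", "user", "assistant"}
    for i, msg in enumerate(messages):
        if not isinstance(msg, dict):
            raise TypeError(f"message {i} is not a dict")
        if msg.get("role") not in allowed_roles:
            raise ValueError(f"message {i} has invalid role: {msg.get('role')!r}")
        if not isinstance(msg.get("content"), str):
            raise ValueError(f"message {i} has non-string content")
    return messages

message = validate_messages([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, how are you today?"},
])
```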
</Step>
<Step>
<Tabs>
<Tab title="Non-stream response">
Get your response.
```python
response = llm.chat(message)
```
Visualize your response.
```python
print(response)
```
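Since the response follows the OpenAI ChatCompletion layout, the reply text should sit at `response.choices[0].message.content` (the exact attribute path is assumed from that schema). A sketch with a mocked response to illustrate the path:

```python
from types import SimpleNamespace

# Mocked stand-in for the real response; attribute layout assumed from the OpenAI schema
response = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="I'm doing well, thank you!"))]
)

print(response.choices[0].message.content)  # just the assistant's reply text
```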
</Tab>
<Tab title="Stream response">
Get your response.
```python
response = llm.chat(message, is_stream=True)
```
Visualize your response.
```python
for chunk in response:
print(chunk)
```
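To rebuild the full reply from a stream, accumulate each chunk's delta. A minimal sketch assuming OpenAI-style chunk objects, mocked here as simple namespaces:

```python
from types import SimpleNamespace

def make_chunk(text):
    # Mocked stand-in for a streamed chunk; delta layout assumed from the OpenAI schema
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

stream = [make_chunk("Hello"), make_chunk(", "), make_chunk("world!")]

full_text = ""
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # a final chunk's delta may be empty
        full_text += delta

print(full_text)  # Hello, world!
```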
</Tab>
</Tabs>
<Check>You are done chatting with your **LLMstudio LLM**!</Check>
</Step>