75 changes: 75 additions & 0 deletions docs/docs/03-hooks/01-natural-language-processing/useLLM.md
@@ -210,6 +210,81 @@ To configure model (i.e. change system prompt, load initial conversation history

- [`topp`](../../06-api-reference/interfaces/GenerationConfig.md#topp) - Only samples from the smallest set of tokens whose cumulative probability exceeds topp.
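
To make the `topp` behavior concrete, here is a minimal sketch of nucleus (top-p) filtering over a token probability map. `topPFilter` is a hypothetical helper for illustration only, not part of the react-native-executorch API:

```tsx
// Illustrative sketch of top-p (nucleus) filtering: keep the smallest
// set of highest-probability tokens whose cumulative probability
// reaches the topp threshold.
function topPFilter(probs: Record<string, number>, topp: number): string[] {
  // Sort tokens by descending probability.
  const sorted = Object.entries(probs).sort((a, b) => b[1] - a[1]);
  const kept: string[] = [];
  let cumulative = 0;
  for (const [token, p] of sorted) {
    kept.push(token);
    cumulative += p;
    // Stop once the cumulative probability reaches topp.
    if (cumulative >= topp) break;
  }
  return kept;
}

// With topp = 0.9, the low-probability tail token 'd' is excluded:
// topPFilter({ a: 0.5, b: 0.3, c: 0.15, d: 0.05 }, 0.9) -> ['a', 'b', 'c']
```

A lower `topp` keeps fewer candidate tokens, making generation more deterministic; a value close to 1 keeps nearly the whole distribution.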

### Model configuration example

```tsx
import { useEffect } from 'react';
import {
  MessageCountContextStrategy,
  DEFAULT_SYSTEM_PROMPT,
  LLMTool,
  ToolCall,
  useLLM,
  LLAMA3_2_1B_SPINQUANT,
} from 'react-native-executorch';

const TOOL_DEFINITIONS: LLMTool[] = [
{
name: 'get_weather',
description: 'Get/check weather in given location.',
parameters: {
type: 'dict',
properties: {
location: {
type: 'string',
description: 'Location where user wants to check weather',
},
},
required: ['location'],
},
},
];

const getWeather = async (_call: ToolCall) => {
return 'The weather is great!';
};

const executeTool: (call: ToolCall) => Promise<string | null> = async (
call
) => {
switch (call.toolName) {
case 'get_weather':
return await getWeather(call);
default:
      console.error(`Unknown tool: ${call.toolName}`);
return null;
}
};

const llm = useLLM({ model: LLAMA3_2_1B_SPINQUANT });

const { configure } = llm;
useEffect(() => {
configure({
chatConfig: {
systemPrompt: `${DEFAULT_SYSTEM_PROMPT} Current time and date: ${new Date().toString()}`,
initialMessageHistory: [
{
role: 'user',
content: 'What is the current time and date?',
},
],
contextStrategy: new MessageCountContextStrategy(6),
},
toolsConfig: {
tools: TOOL_DEFINITIONS,
executeToolCallback: executeTool,
displayToolCalls: true,
},
generationConfig: {
outputTokenBatchSize: 15,
batchTimeInterval: 100,
temperature: 0.7,
topp: 0.9,
},
});
}, [configure]);
```

### Sending a message

To send a message to the model, use the following code:
2 changes: 1 addition & 1 deletion docs/docs/06-api-reference/interfaces/LLMConfig.md
@@ -18,7 +18,7 @@ Object configuring chat management, contains the following properties:

`initialMessageHistory` - An array of `Message` objects that represent the conversation history. This can be used to provide initial context to the model.

`contextWindowLength` - The number of messages from the current conversation that the model will use to generate a response. The higher the number, the more context the model will have. Keep in mind that using larger context windows will result in longer inference time and higher memory usage.
`contextStrategy` - Defines a strategy for managing the conversation context window and message history (e.g. `MessageCountContextStrategy`).
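
To illustrate what such a strategy does, here is a sketch of a message-count strategy that keeps only the most recent N messages when building the prompt. `MessageCountStrategySketch` and its internals are assumptions for illustration, not the library's implementation:

```tsx
// A conversation message, matching the shape used in the docs above.
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Illustrative context strategy: retain only the last `maxMessages`
// entries of the history when generating a response.
class MessageCountStrategySketch {
  constructor(private readonly maxMessages: number) {}

  apply(history: Message[]): Message[] {
    // slice with a negative index keeps the tail of the array.
    return history.slice(-this.maxMessages);
  }
}

const strategy = new MessageCountStrategySketch(2);
const trimmed = strategy.apply([
  { role: 'user', content: 'first' },
  { role: 'assistant', content: 'second' },
  { role: 'user', content: 'third' },
]);
// trimmed now holds only the last two messages.
```

Keeping fewer messages shortens inference time and lowers memory usage, at the cost of the model seeing less of the conversation.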

---

2 changes: 1 addition & 1 deletion packages/react-native-executorch/src/types/llm.ts
@@ -138,7 +138,7 @@ export interface LLMConfig {
*
* `initialMessageHistory` - An array of `Message` objects that represent the conversation history. This can be used to provide initial context to the model.
*
* `contextWindowLength` - The number of messages from the current conversation that the model will use to generate a response. The higher the number, the more context the model will have. Keep in mind that using larger context windows will result in longer inference time and higher memory usage.
 * `contextStrategy` - Defines a strategy for managing the conversation context window and message history.
*/
chatConfig?: Partial<ChatConfig>;

@@ -51,7 +51,6 @@ useEffect(() => {
llm.configure({
chatConfig: {
systemPrompt: 'You are a helpful assistant',
contextWindowLength: 10,
},
generationConfig: {
temperature: 0.7,
@@ -67,6 +66,81 @@
console.log(llm.messageHistory);
```

## Model configuration

```tsx
import { useEffect } from 'react';
import {
  MessageCountContextStrategy,
  DEFAULT_SYSTEM_PROMPT,
  LLMTool,
  ToolCall,
  useLLM,
  LLAMA3_2_1B_SPINQUANT,
} from 'react-native-executorch';

const TOOL_DEFINITIONS: LLMTool[] = [
{
name: 'get_weather',
description: 'Get/check weather in given location.',
parameters: {
type: 'dict',
properties: {
location: {
type: 'string',
description: 'Location where user wants to check weather',
},
},
required: ['location'],
},
},
];

const getWeather = async (_call: ToolCall) => {
return 'The weather is great!';
};

const executeTool: (call: ToolCall) => Promise<string | null> = async (
call
) => {
switch (call.toolName) {
case 'get_weather':
return await getWeather(call);
default:
      console.error(`Unknown tool: ${call.toolName}`);
return null;
}
};

const llm = useLLM({ model: LLAMA3_2_1B_SPINQUANT });

const { configure } = llm;
useEffect(() => {
configure({
chatConfig: {
systemPrompt: `${DEFAULT_SYSTEM_PROMPT} Current time and date: ${new Date().toString()}`,
initialMessageHistory: [
{
role: 'user',
content: 'What is the current time and date?',
},
],
contextStrategy: new MessageCountContextStrategy(6),
},
toolsConfig: {
tools: TOOL_DEFINITIONS,
executeToolCallback: executeTool,
displayToolCalls: true,
},
generationConfig: {
outputTokenBatchSize: 15,
batchTimeInterval: 100,
temperature: 0.7,
topp: 0.9,
},
});
}, [configure]);
```

## Tool Calling

```tsx