Serverless function (SF) that posts prompts to an LLM and stores the response back in annotation fields.
Currently, the following LLMs are supported:
- OpenAI
- AWS Bedrock
- Copy the code of the SF for your selected provider
- In the Rossum UI, create a new extension
- Select the Custom function extension type and Python as the programming language
- Paste the code and save the extension configuration
- Edit the extension and set the configuration & secrets as described below
The following configuration fields are required:
input: Configures annotation fields as variables for the prompt, in the form of key-value pairs (both header fields and line items are supported):
"input": {
"variable": "{fieldId}"
}
for example:
"input": {
"input_address": "sender_address"
}
You can then use the variable input_address in the prompt definition.
output: Defines the field_id where the function stores its result:
"output": "sender_address_parsed_json"
prompt: Defines the prompt that will be submitted to the LLM:
"prompt": "This is an address extracted from a document: \"input_address\" Parse the address into the following fields: name, street, house number, city, state, country. Return valid JSON. Return no extra text."
Putting it all together, an example configuration:

{
    "keep_history": true,
    "configurations": [
        {
            "input": {
                "input_address": "sender_address"
            },
            "output": "sender_address_parsed_json",
            "prompt": "This is an address extracted from a document: \"input_address\" Parse the address into the following fields: name, street, house number, city, state, country. Return valid JSON. Return no extra text."
        },
        ...
    ]
}
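The variable substitution can be pictured as a plain string replacement of each input variable with the current value of its mapped field. The following is a minimal sketch under that assumption, not the shipped implementation; build_prompt and field_values are hypothetical names:

```python
def build_prompt(configuration: dict, field_values: dict) -> str:
    """Render one configuration's prompt by replacing each input
    variable with the value extracted from its mapped field."""
    prompt = configuration["prompt"]
    for variable, field_id in configuration["input"].items():
        prompt = prompt.replace(variable, field_values.get(field_id, ""))
    return prompt

# Example: yields the address prompt with the real sender address inlined
# build_prompt(config["configurations"][0],
#              {"sender_address": "John Doe, 123 Main St, Springfield, IL, USA"})
```

The rendered prompt is then sent to the LLM and the model's answer is written to the datapoint named by output.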
The secrets hold the credentials for the selected provider, e.g. for AWS Bedrock:

{
    "key": "<access_key_id>",
    "secret": "<access_key_secret>"
}
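Rossum passes both the extension settings and the secrets to the function in the hook payload, so the function can be expected to read them roughly like this (a sketch only; the actual shipped code may differ):

```python
def rossum_hook_request_handler(payload):
    # Configuration as entered in the extension's settings
    settings = payload.get("settings", {})
    # Credentials stored in the separate secrets section
    secrets = payload.get("secrets", {})

    access_key_id = secrets.get("key")
    access_key_secret = secrets.get("secret")
    configurations = settings.get("configurations", [])

    # ... call the LLM for each configuration and build replace
    # operations for the output fields ...
    return {"messages": [], "operations": []}
```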
Every LLM provider comes with its own parameters, such as the model, token limit, or region. Configuration of the following fields is optional.
For AWS Bedrock:
- model: Specifies the Claude model to use (default: anthropic.claude-3-haiku-20240307-v1:0)
- region: AWS Bedrock region to use (default: us-east-1)
- max_tokens: Maximum number of tokens in the output (default: 64)
- temperature: Controls output randomness (range: 0.0 to 1.0, default: 0.0)
- batch_size: For line items; defines how many items are processed per request. The default is 1 (items are processed individually). Set it to 0 to process all items in one request, or to any other number to send that many line items per request. Note: processing multiple items in one request increases the risk of AI hallucinations.
- keep_history: Determines whether the model retains session history for context-aware responses (default: false)
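Assuming the optional fields sit at the top level of the settings next to keep_history (as in the example above), a Bedrock configuration might look like this (values are illustrative):

```json
{
  "model": "anthropic.claude-3-haiku-20240307-v1:0",
  "region": "us-east-1",
  "max_tokens": 256,
  "temperature": 0.0,
  "batch_size": 1,
  "keep_history": false,
  "configurations": [
    {
      "input": { "input_address": "sender_address" },
      "output": "sender_address_parsed_json",
      "prompt": "..."
    }
  ]
}
```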
- Model availability and pricing based on region - https://aws.amazon.com/bedrock/pricing/#Anthropic
- Model IDs - https://docs.anthropic.com/en/docs/about-claude/models#model-names (AWS Bedrock column)
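For reference, these parameters map onto a Bedrock call roughly as follows. This sketch uses boto3 and the Anthropic messages format and is only an approximation of what the function does internally:

```python
import json
import boto3

def call_bedrock(prompt, access_key_id, access_key_secret,
                 model="anthropic.claude-3-haiku-20240307-v1:0",
                 region="us-east-1", max_tokens=64, temperature=0.0):
    client = boto3.client(
        "bedrock-runtime",
        region_name=region,
        aws_access_key_id=access_key_id,
        aws_secret_access_key=access_key_secret,
    )
    body = {
        # Version marker required by Anthropic models on Bedrock
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = client.invoke_model(modelId=model, body=json.dumps(body))
    result = json.loads(response["body"].read())
    # The answer is in the first text block of the response content
    return result["content"][0]["text"]
```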
For OpenAI:
- model: Specifies the model to use (default: gpt-3.5-turbo)
- max_tokens: Maximum number of tokens in the output (default: 64)
- temperature: Controls output randomness (range: 0.0 to 2.0, default: 0.0)
- batch_size: For line items; defines how many items are processed per request. The default is 1 (items are processed individually). Set it to 0 to process all items in one request, or to any other number to send that many line items per request. Note: processing multiple items in one request increases the risk of AI hallucinations.
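Similarly hedged (assuming the openai Python client with the v1 interface and an API key supplied via the extension secrets), the parameters map onto a chat completion call like so:

```python
from openai import OpenAI

def call_openai(prompt, api_key, model="gpt-3.5-turbo",
                max_tokens=64, temperature=0.0):
    client = OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        model=model,
        max_tokens=max_tokens,
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    # The completion text is in the first choice
    return response.choices[0].message.content
```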