Building a Custom Agent - LlamaIndex 🦙 0.9.24 #193

irthomasthomas opened this issue Jan 2, 2024 · 0 comments

The easiest way to build a custom agent is to subclass CustomSimpleAgentWorker and implement a few required functions. You have complete flexibility in defining the agent's step-wise logic.

This lets you add arbitrarily complex reasoning logic on top of your RAG pipeline.
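As a minimal sketch, the subclass only needs to fill in a handful of hooks. The method names below follow the 0.9.x CustomSimpleAgentWorker interface as we understand it; treat the exact imports and signatures as approximate:

```python
from typing import Any, Dict, Optional, Tuple

from llama_index.agent import CustomSimpleAgentWorker, Task
from llama_index.chat_engine.types import AgentChatResponse


class MyAgentWorker(CustomSimpleAgentWorker):
    """Skeleton of a custom agent worker (illustrative only)."""

    def _initialize_state(self, task: Task, **kwargs: Any) -> Dict[str, Any]:
        # Per-task state (counters, scratchpads, prior responses) lives here.
        return {}

    def _run_step(
        self, state: Dict[str, Any], task: Task, input: Optional[str] = None
    ) -> Tuple[AgentChatResponse, bool]:
        # Execute one step of reasoning; return the response and whether the task is done.
        ...

    def _finalize_task(self, state: Dict[str, Any], **kwargs: Any) -> None:
        # Optional cleanup once the task is complete.
        pass
```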

We show you how to build a simple agent that adds a retry layer on top of a RouterQueryEngine, backed by both a SQL tool and a vector index query tool. Even if a tool makes an error or only answers part of the question, the agent keeps retrying the query until the task is complete.
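A rough sketch of the two underlying tools might look like the following; the query engines and descriptions are placeholders, not taken from the original notebook:

```python
from llama_index.tools import QueryEngineTool

# Assumes `sql_query_engine` (e.g. over a SQL database) and
# `vector_query_engine` (over a VectorStoreIndex) are already built.
sql_tool = QueryEngineTool.from_defaults(
    query_engine=sql_query_engine,
    name="sql_tool",
    description="Translates natural-language questions into SQL over structured tables.",
)
vector_tool = QueryEngineTool.from_defaults(
    query_engine=vector_query_engine,
    name="vector_tool",
    description="Answers semantic questions over unstructured documents.",
)
```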

Set Up the Custom Agent
Here we set up the custom agent.
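Below is a simplified sketch of what the retry worker could look like, filling in _run_step from the skeleton above: it routes the query across the tools, asks the LLM whether the answer fully addresses the question, and only marks the step as final when it does (or after a retry cap). This is our own simplified version, not the exact logic from the notebook, and it assumes the base class exposes the tools passed to from_tools as self.tools and the LLM as self.llm:

```python
from typing import Any, Dict, Optional, Tuple

from llama_index.agent import CustomSimpleAgentWorker, Task
from llama_index.chat_engine.types import AgentChatResponse
from llama_index.query_engine import RouterQueryEngine


class RetryAgentWorker(CustomSimpleAgentWorker):
    """Agent worker that retries a routed query until the answer looks complete."""

    def _initialize_state(self, task: Task, **kwargs: Any) -> Dict[str, Any]:
        return {"attempts": 0, "feedback": None}

    def _run_step(
        self, state: Dict[str, Any], task: Task, input: Optional[str] = None
    ) -> Tuple[AgentChatResponse, bool]:
        # Fold feedback from earlier attempts into the query so a retry can improve it.
        query_str = task.input
        if state["feedback"]:
            query_str += f"\n(A previous attempt was incomplete: {state['feedback']})"

        # Route the query across the underlying tools (SQL + vector index).
        router = RouterQueryEngine.from_defaults(query_engine_tools=self.tools)
        response = router.query(query_str)

        # Ask the LLM to judge whether the response fully answers the question.
        eval_prompt = (
            f"Question: {task.input}\nAnswer: {response}\n"
            "Does the answer fully address the question? Reply YES or NO."
        )
        verdict = self.llm.complete(eval_prompt).text.strip().upper()

        state["attempts"] += 1
        is_done = verdict.startswith("YES") or state["attempts"] >= 3  # retry cap
        if not is_done:
            state["feedback"] = str(response)
        return AgentChatResponse(response=str(response)), is_done

    def _finalize_task(self, state: Dict[str, Any], **kwargs: Any) -> None:
        pass
```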

Refresher
An agent in LlamaIndex consists of an agent runner plus an agent worker. The agent runner is an orchestrator that stores state (like memory), whereas the agent worker controls the step-wise execution of a Task. Agent runners support both sequential and parallel execution. More details can be found in our lower-level API guide.

Most core agent logic (e.g. ReAct, function-calling loops) can be executed in the agent worker. We've therefore made it easy to subclass an agent worker, letting you plug it into any agent runner.
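As an illustrative sketch of that wiring, assuming the RetryAgentWorker and tools from above and the 0.9.x AgentRunner / from_tools APIs:

```python
from llama_index.agent import AgentRunner
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-4")

# Build the custom worker from the tools, then wrap it in a generic agent runner.
agent_worker = RetryAgentWorker.from_tools([sql_tool, vector_tool], llm=llm, verbose=True)
agent = AgentRunner(agent_worker)

# Hypothetical query; any question answerable by the SQL or vector tool works.
response = agent.chat("Which city has the highest population, and what is it known for?")
print(str(response))
```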
