Eidolon helps developers design and deploy agent-based services.
With Eidolon, agents are services, so there is no extra work when it comes time to deploy. The HTTP server is built in.
Since agents are services with well-defined interfaces, they can easily communicate with one another via tools generated dynamically from the OpenAPI JSON schemas those agent services define.
With a focus on modularity, Eidolon makes it easy to swap out components. Grab an off-the-shelf LLM, RAG implementation, tools, etc., or define your own.
This means no vendor lock-in and minimal work when upgrading individual parts of an agent. Without this flexibility, developers cannot adapt their agents to the rapidly changing AI landscape.
Check out Eidolon's website to learn more.
git clone https://github.com/eidolon-ai/eidolon-quickstart.git
cd eidolon-quickstart
make docker-serve
This command will download the dependencies required to run your agent machine and start the Eidolon HTTP server in "dev-mode". It will also start the Eidolon webui, which you can access at http://localhost:3000.
The first time you run this command, you may be prompted to enter credentials that the machine needs to run (e.g., an OpenAI API key):
OPENAI_API_KEY (required):
Please note that your OPENAI_API_KEY must belong to an account with at least a $5 balance; otherwise you won't be able to use the GPT-4 Turbo model the server requires.
You'll know this is an issue if you see this output:
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4-turbo` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
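If you want to check model access before starting the server, here is a minimal sketch using only the Python standard library. The `/v1/models/{id}` endpoint is part of OpenAI's public API, not Eidolon's, and returns 404 for models your account can't use:

```python
# Sketch: check whether an OpenAI API key can access a given model.
# Uses only the standard library; no Eidolon code involved.
import os
import urllib.error
import urllib.request


def model_request(model: str, api_key: str) -> urllib.request.Request:
    """Build a GET request against OpenAI's retrieve-model endpoint."""
    return urllib.request.Request(
        f"https://api.openai.com/v1/models/{model}",
        headers={"Authorization": f"Bearer {api_key}"},
    )


def can_use(model: str, api_key: str) -> bool:
    """True if the key can see the model; False on a 404 (or other HTTP error)."""
    try:
        with urllib.request.urlopen(model_request(model, api_key)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False


if __name__ == "__main__":
    key = os.environ["OPENAI_API_KEY"]
    print("gpt-4-turbo available:", can_use("gpt-4-turbo", key))
```

If this prints `False`, top up your account balance before retrying `make docker-serve`.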
If your AgentMachine started successfully, you should see the following logs in your terminal:
INFO - Building machine 'local_dev'
INFO - Starting agent 'hello-world'
INFO - Server Started
You can also check out your machine's swagger docs.
Head over to another terminal, where we will install a CLI, create a new process, and then converse with our agent on that process.
pip install 'eidolon-ai-client[cli]'
export PID=$(eidolon-cli processes create --agent hello-world)
eidolon-cli actions converse --process-id $PID --body "Hi! I made you"
Believe it or not, you are already up and running with a simple agent!
Now that you have a running agent machine with a simple agent, let's start customizing!
- Add new capabilities via logic units (tools)
- Enable agent-to-agent communication
- Swap out components (like the underlying LLM)
- Use structured inputs for prompt templating
- Leverage your agent's state machine
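Customizations like these typically mean editing the agent's YAML resource. A rough sketch of what swapping a component might look like (the `apu` field and the implementation name here are illustrative assumptions; check the quickstart repo for the exact schema):

```yaml
apiVersion: server.eidolonai.com/v1alpha1   # version string may differ
kind: Agent
metadata:
  name: hello-world
spec:
  description: "A simple hello-world agent"
  apu:
    implementation: ClaudeSonnet   # illustrative: swap the APU to change the underlying LLM
```

Because components are resolved from configuration, changing the LLM is a one-line edit rather than a code change.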
Eidolon is a completely open source project. Keep your dirty money!
⭐️ But we love your stars ⭐️
We welcome and appreciate contributions!
Reach out to us on Discord if you have any questions or suggestions.
If you need help with the mechanics of contributing, check out the First Contributions Repository.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!