A Ruby-based AI agent that performs tasks through a collection of tools and a choice of LLM providers.
- Multiple LLM Provider Support:
  - OpenAI (GPT-4)
  - DeepSeek
  - Perplexity
  - Moonshot (Kimi)
- Built-in Tools:
  - Wikipedia Search
  - Google Search
  - Safe Calculator
- Clean Terminal UI:
  - Emoji-based iteration tracking
  - Markdown-formatted responses
  - Verbose mode for debugging
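The actual implementation of the safe calculator lives in `lib/tools/calculate_tool.rb`; as a rough illustration of the idea only (the helper name and validation rule below are assumptions, not the project's code), a calculator can whitelist arithmetic characters before evaluating:

```ruby
# Illustrative sketch of a "safe" calculator: only digits, whitespace,
# and arithmetic operators pass the whitelist before evaluation.
# (Hypothetical helper -- not the project's actual CalculateTool.)
def safe_calculate(expression)
  unless expression.match?(/\A[\d\s+\-*\/().%]+\z/)
    raise ArgumentError, "unsafe expression: #{expression.inspect}"
  end
  eval(expression) # acceptable only because of the whitelist above
end

puts safe_calculate("2 + 3 * 4") # => 14
```

Anything containing letters or quotes (e.g. `system('ls')`) is rejected before `eval` ever sees it.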
- Ruby 3.0+
- Required API Keys:
  - OpenAI API Key (for OpenAI)
  - DeepSeek API Key (for DeepSeek)
  - Perplexity API Key (for Perplexity)
  - Moonshot API Key (for Moonshot/Kimi)
  - Google Search API Key and Search Engine ID (for Google Search)
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/simple_agent_rb.git
  cd simple_agent_rb
  ```

- Install dependencies:

  ```bash
  bundle install
  ```

- Set up your environment variables:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and add your API keys:

  ```bash
  OPENAI_API_KEY=your_openai_key
  DEEPSEEK_API_KEY=your_deepseek_key
  PERPLEXITY_API_KEY=your_perplexity_key
  MOONSHOT_API_KEY=your_moonshot_key
  GOOGLE_SEARCH_API_KEY=your_google_key
  GOOGLE_SEARCH_ENGINE_ID=your_search_engine_id
  ```
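Assuming the project loads `.env` via the common `dotenv` convention (not confirmed here), a small startup check can fail fast with a readable message when a key is missing; the helper and key list below are illustrative:

```ruby
# Hypothetical startup check: report which required API keys are unset.
# Key names match the .env example above.
REQUIRED_KEYS = %w[
  OPENAI_API_KEY DEEPSEEK_API_KEY PERPLEXITY_API_KEY MOONSHOT_API_KEY
].freeze

def missing_keys(env = ENV)
  # A key counts as missing when it is absent or blank.
  REQUIRED_KEYS.reject { |key| env[key] && !env[key].empty? }
end

# Demonstration with an explicit hash instead of the real ENV:
puts missing_keys({ "OPENAI_API_KEY" => "sk-test" }).inspect
# => ["DEEPSEEK_API_KEY", "PERPLEXITY_API_KEY", "MOONSHOT_API_KEY"]
```

At boot you might `warn "Missing API keys: #{missing_keys.join(', ')}"` so a misconfigured provider fails loudly rather than mid-conversation.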
Run the agent:

```bash
ruby bin/main.rb
```

With verbose mode (shows detailed tool execution):

```bash
ruby bin/main.rb -v
# or
ruby bin/main.rb --verbose
```

By default, the agent uses Moonshot. You can modify `bin/main.rb` to use a different provider:
```ruby
# Use OpenAI
agent = Agent.new(:openai)

# Use Perplexity
agent = Agent.new(:perplexity)

# Use Moonshot with a specific model
agent = Agent.new(:moonshot, "kimi-k2-0711-preview")

# Specify a model
agent = Agent.new(:openai, "gpt-4")
```

```
simple_agent_rb/
├── bin/
│   └── main.rb                   # Entry point
└── lib/
    ├── agent/
    │   └── agent.rb              # Main agent logic
    ├── llm_clients/              # LLM providers
    │   ├── llm_client.rb         # Base LLM client
    │   ├── openai_client.rb
    │   ├── deepseek_client.rb
    │   ├── perplexity_client.rb
    │   └── moonshot_client.rb
    └── tools/                    # Available tools
        ├── tool.rb               # Base tool class
        ├── tool_registry.rb      # Tool management
        ├── wikipedia_tool.rb
        ├── google_search_tool.rb
        ├── simon_blog_search_tool.rb
        └── calculate_tool.rb
```
In normal mode, the agent displays clean, emoji-based logs:
- 🔄 Agent iteration tracking
- 📞 Tool call indicators
- ↻ Loop continuation messages
- Beautiful markdown-formatted responses
Enable with the `-v` or `--verbose` flag to see:
- Detailed tool execution logs
- Full observation outputs
- Debugging information
- Create a new tool class in `lib/tools/` that inherits from `Tool`:

  ```ruby
  require_relative "tool"

  class MyNewTool < Tool
    def initialize
      super("my_new_tool") # The name used in prompts
    end

    def call(input)
      # Implement your tool logic here
    end
  end
  ```

- The tool will be automatically registered and available to use.
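As a concrete (hypothetical) example, here is a word-count tool. The minimal `Tool` stub at the top stands in for the project's actual base class in `lib/tools/tool.rb` so the snippet runs on its own:

```ruby
# Minimal stand-in for the project's Tool base class, included only
# so this example is self-contained; the real class may differ.
class Tool
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

# A hypothetical tool: counts whitespace-separated words in the input.
class WordCountTool < Tool
  def initialize
    super("word_count") # The name used in prompts
  end

  def call(input)
    input.split.size
  end
end

tool = WordCountTool.new
puts tool.name                        # => word_count
puts tool.call("count these words")   # => 3
```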
- Create a new client class in `lib/llm_clients/` that inherits from `LLMClient`:

  ```ruby
  require_relative "llm_client"

  class MyNewClient < LLMClient
    def initialize(system = "", model = nil)
      super(system, model)
      @api_key = ENV["MY_NEW_API_KEY"]
    end

    private

    def execute
      # Implement your API call here
    end
  end
  ```

- Update `Agent#create_llm_client` to support your new provider.
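The dispatch in `Agent#create_llm_client` is likely a simple `case` over the provider symbol; a sketch of what the added branch might look like (the `Struct` stubs replace the real client classes so the snippet runs standalone, and `:my_new_provider`/`MyNewClient` are the hypothetical additions):

```ruby
# Stub clients so the sketch runs on its own; the real ones live in
# lib/llm_clients/ and take the same (system, model) arguments.
StubClient       = Struct.new(:system, :model)
OpenAIClient     = StubClient
DeepseekClient   = StubClient
PerplexityClient = StubClient
MoonshotClient   = StubClient
MyNewClient      = StubClient

# Sketch of the provider dispatch; the real method may differ.
def create_llm_client(provider, model = nil)
  case provider
  when :openai          then OpenAIClient.new("", model)
  when :deepseek        then DeepseekClient.new("", model)
  when :perplexity      then PerplexityClient.new("", model)
  when :moonshot        then MoonshotClient.new("", model)
  when :my_new_provider then MyNewClient.new("", model) # the new branch
  else
    raise ArgumentError, "Unknown provider: #{provider}"
  end
end

client = create_llm_client(:moonshot, "kimi-k2-0711-preview")
puts client.model # => kimi-k2-0711-preview
```

Raising on an unknown symbol keeps a typo like `Agent.new(:moonhot)` from failing silently later.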
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -am 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.