Summary
Add additional provider backends that conform to the existing BaseProvider interface so users can choose their LLM vendor without changing agent logic. Implementations should mirror the capabilities of OpenAIProvider while remaining vendor-agnostic at the agent level.
Motivation
- Increase flexibility and reduce vendor lock-in.
- Enable users to leverage their preferred LLMs and enterprise contracts.
- Standardize provider behavior behind `BaseProvider` for consistent agent UX.
Scope
- Ensure the providers implement the three async methods defined in `BaseProvider`:
  - `generate_response(messages, system_prompt=None, triggered_by_user_message=False, **kwargs) -> str`
  - `should_respond(messages, elapsed_time, context, **kwargs) -> bool`
  - `calculate_sleep_time(wake_up_pattern, min_sleep_time, max_sleep_time, context, **kwargs) -> tuple[int, str]`
- Wire each provider for easy import and usage in the agent.
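For orientation, the contract implied by those three signatures can be sketched as an abstract base class. This is a sketch only; the real `BaseProvider` in `proactiveagent/providers/base.py` is authoritative, and the `self.model`/`self.config` attributes here are assumptions.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional


class BaseProvider(ABC):
    """Sketch of the provider interface new backends must satisfy."""

    def __init__(self, model: str, **kwargs):
        # Assumed storage of configuration; the real base class may differ.
        self.model = model
        self.config = kwargs

    @abstractmethod
    async def generate_response(
        self,
        messages: List[Dict[str, str]],
        system_prompt: Optional[str] = None,
        triggered_by_user_message: bool = False,
        **kwargs,
    ) -> str:
        """Return the assistant's reply text."""

    @abstractmethod
    async def should_respond(
        self,
        messages: List[Dict[str, str]],
        elapsed_time: int,
        context: Dict[str, Any],
        **kwargs,
    ) -> bool:
        """Decide whether the agent should speak proactively."""

    @abstractmethod
    async def calculate_sleep_time(
        self,
        wake_up_pattern: str,
        min_sleep_time: int,
        max_sleep_time: int,
        context: Dict[str, Any],
        **kwargs,
    ) -> tuple[int, str]:
        """Return (sleep_seconds, reasoning)."""
```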
Non-Goals
- Changes to decision engines or the agent’s scheduling logic.
- Adding tests (can be tracked separately if needed).
Current Architecture (for reference)
- Interface:
proactiveagent/providers/base.py(BaseProvider) - Example implementation:
proactiveagent/providers/openai_provider.py - Provider usage:
proactiveagent/agent.py(accepts aBaseProviderinstance)
Design and Implementation Details
- Create one file per provider in `proactiveagent/providers/`: `anthropic_provider.py`
- Each class should:
  - Accept `model: str` and provider-specific `**kwargs` in `__init__` and store configuration.
  - Use the vendor's official SDKs/clients if available; otherwise, a minimal HTTP client.
  - Respect the same message schema used in `OpenAIProvider` (list of dicts with `role` and `content`).
  - Keep behavior consistent with `OpenAIProvider` for system prompts and `triggered_by_user_message`.
  - Implement vendor-appropriate logic for `should_respond` and `calculate_sleep_time` while returning the same types and honoring the min/max constraints for sleep time.
- Update `proactiveagent/providers/__init__.py` to export new providers via `__all__`.
- Document environment variables and config keys required by each provider (e.g., API keys, endpoints, regions).
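The export wiring in `proactiveagent/providers/__init__.py` might then look like this (a sketch; the module and class names for the new provider are assumptions):

```python
# proactiveagent/providers/__init__.py
from .base import BaseProvider
from .openai_provider import OpenAIProvider
from .anthropic_provider import AnthropicProvider  # new provider module

__all__ = [
    "BaseProvider",
    "OpenAIProvider",
    "AnthropicProvider",
]
```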
Minimal Provider Skeleton

```python
from typing import List, Dict, Any, Optional

from .base import BaseProvider


class AnthropicProvider(BaseProvider):
    def __init__(self, model: str, **kwargs):
        super().__init__(model, **kwargs)
        # Initialize the vendor client here.

    async def generate_response(
        self,
        messages: List[Dict[str, str]],
        system_prompt: Optional[str] = None,
        triggered_by_user_message: bool = False,
        **kwargs,
    ) -> str:
        # Call the vendor API and return the response text.
        return "..."

    async def should_respond(
        self,
        messages: List[Dict[str, str]],
        elapsed_time: int,
        context: Dict[str, Any],
        **kwargs,
    ) -> bool:
        # Vendor-backed decision or a lightweight heuristic.
        return True

    async def calculate_sleep_time(
        self,
        wake_up_pattern: str,
        min_sleep_time: int,
        max_sleep_time: int,
        context: Dict[str, Any],
        **kwargs,
    ) -> tuple[int, str]:
        # Compute an int within [min_sleep_time, max_sleep_time],
        # plus a short reasoning string.
        return min_sleep_time, "reason"
```

Developer Experience
- Provide simple usage examples in `examples/` showing how to instantiate `ProactiveAgent` with each new provider (similar to existing examples).
- Document provider selection and required env vars in `README.md` and `proactiveagent/providers/README.md`.
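A matching example script might look like the following. This is illustrative only: the `ProactiveAgent` constructor arguments and the Anthropic model name are assumptions to be checked against the actual codebase.

```python
import asyncio

from proactiveagent import ProactiveAgent
from proactiveagent.providers import AnthropicProvider


async def main() -> None:
    # Model name and keyword arguments are placeholders; consult the
    # vendor's docs for real values (e.g. ANTHROPIC_API_KEY in the env).
    provider = AnthropicProvider(model="claude-sonnet-4-20250514")
    agent = ProactiveAgent(provider=provider)
    # ... start the agent loop as the existing examples do ...


if __name__ == "__main__":
    asyncio.run(main())
```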
Additional Context
- Reference `OpenAIProvider` for structure and behavior parity.
- Ensure async boundaries are respected to avoid blocking the agent loop.
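On that last point: if a vendor SDK only exposes synchronous calls, one way to keep the agent loop unblocked is to push them onto a worker thread with `asyncio.to_thread`. A minimal sketch, where `slow_sdk_call` is a stand-in for a real blocking SDK method:

```python
import asyncio
import time


def slow_sdk_call(prompt: str) -> str:
    # Stand-in for a blocking vendor SDK request.
    time.sleep(0.1)
    return f"echo: {prompt}"


async def generate_response(prompt: str) -> str:
    # asyncio.to_thread runs the blocking call in a worker thread,
    # so the event loop (and the agent's scheduler) keeps running.
    return await asyncio.to_thread(slow_sdk_call, prompt)


async def main() -> None:
    # While the SDK call blocks its worker thread, other tasks
    # (here, a concurrent sleep) still make progress.
    reply, _ = await asyncio.gather(
        generate_response("hi"),
        asyncio.sleep(0.05),
    )
    print(reply)  # echo: hi


if __name__ == "__main__":
    asyncio.run(main())
```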