
[Provider] Add Anthropic provider using the BaseProvider interface #1

@leomariga

Description


Summary

Add additional provider backends that conform to the existing BaseProvider interface so users can choose their LLM vendor without changing agent logic. Implementations should mirror the capabilities of OpenAIProvider while remaining vendor-agnostic at the agent level.

Motivation

  • Increase flexibility and reduce vendor lock-in.
  • Enable users to leverage their preferred LLMs and enterprise contracts.
  • Standardize provider behavior behind BaseProvider for consistent agent UX.

Scope

  • Ensure the providers implement the three async methods defined in BaseProvider:
    • generate_response(messages, system_prompt=None, triggered_by_user_message=False, **kwargs) -> str
    • should_respond(messages, elapsed_time, context, **kwargs) -> bool
    • calculate_sleep_time(wake_up_pattern, min_sleep_time, max_sleep_time, context, **kwargs) -> tuple[int, str]
  • Wire each provider for easy import and usage in the agent.
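The contract above can be sketched as an abstract base class. This is a hypothetical reconstruction from the listed signatures; proactiveagent/providers/base.py remains the source of truth:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional, Tuple


class BaseProvider(ABC):
    """Assumed shape of the provider interface, inferred from the Scope section."""

    def __init__(self, model: str, **kwargs):
        # Store the model name and any provider-specific configuration.
        self.model = model
        self.config = kwargs

    @abstractmethod
    async def generate_response(
        self,
        messages: List[Dict[str, str]],
        system_prompt: Optional[str] = None,
        triggered_by_user_message: bool = False,
        **kwargs,
    ) -> str: ...

    @abstractmethod
    async def should_respond(
        self,
        messages: List[Dict[str, str]],
        elapsed_time: int,
        context: Dict[str, Any],
        **kwargs,
    ) -> bool: ...

    @abstractmethod
    async def calculate_sleep_time(
        self,
        wake_up_pattern: str,
        min_sleep_time: int,
        max_sleep_time: int,
        context: Dict[str, Any],
        **kwargs,
    ) -> Tuple[int, str]: ...
```

Because all three methods are abstract, a provider that misses one fails at instantiation time rather than at call time.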

Non-Goals

  • Changes to decision engines or the agent’s scheduling logic.
  • Adding tests (can be tracked separately if needed).

Current Architecture (for reference)

  • Interface: proactiveagent/providers/base.py (BaseProvider)
  • Example implementation: proactiveagent/providers/openai_provider.py
  • Provider usage: proactiveagent/agent.py (accepts a BaseProvider instance)

Design and Implementation Details

  • Create one file per provider in proactiveagent/providers/:
    • anthropic_provider.py
  • Each class should:
    • Accept model: str and provider-specific **kwargs in __init__ and store configuration.
    • Use the vendor’s official SDK/client if available; otherwise, fall back to a minimal HTTP client.
    • Respect the same message schema used in OpenAIProvider (list of dicts with role and content).
    • Keep behavior consistent with OpenAIProvider for system prompts and triggered_by_user_message.
    • Implement vendor-appropriate logic for should_respond and calculate_sleep_time while returning the same types and honoring the min/max constraints for sleep time.
  • Update proactiveagent/providers/__init__.py to export new providers via __all__.
  • Document environment variables and config keys required by each provider (e.g., API keys, endpoints, regions).
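One concrete wrinkle in keeping schema parity: some vendors (Anthropic's Messages API among them) take the system prompt as a top-level parameter rather than as a message with role "system". A small conversion helper can bridge this; the function name is an illustration, not part of the existing codebase:

```python
from typing import Dict, List, Optional, Tuple


def split_system_messages(
    messages: List[Dict[str, str]],
) -> Tuple[Optional[str], List[Dict[str, str]]]:
    """Separate system-role entries from the OpenAI-style message list.

    Returns the combined system text (or None) and the remaining chat turns,
    ready for APIs that accept the system prompt out-of-band.
    """
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat = [m for m in messages if m["role"] != "system"]
    system = "\n".join(system_parts) if system_parts else None
    return system, chat
```

This keeps agent-side code writing the same role/content dicts regardless of which provider is configured.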

Minimal Provider Skeleton

from typing import List, Dict, Any, Optional
from .base import BaseProvider

class AnthropicProvider(BaseProvider):
    def __init__(self, model: str, **kwargs):
        super().__init__(model, **kwargs)
        # init vendor client here

    async def generate_response(
        self,
        messages: List[Dict[str, str]],
        system_prompt: Optional[str] = None,
        triggered_by_user_message: bool = False,
        **kwargs
    ) -> str:
        # call vendor API and return text
        return "..."

    async def should_respond(
        self,
        messages: List[Dict[str, str]],
        elapsed_time: int,
        context: Dict[str, Any],
        **kwargs
    ) -> bool:
        # vendor-backed decision or lightweight heuristic
        return True

    async def calculate_sleep_time(
        self,
        wake_up_pattern: str,
        min_sleep_time: int,
        max_sleep_time: int,
        context: Dict[str, Any],
        **kwargs
    ) -> tuple[int, str]:
        # compute int within [min_sleep_time, max_sleep_time], plus reasoning
        return min_sleep_time, "reason"
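However calculate_sleep_time derives its candidate value (vendor call, heuristic, or pattern parsing), the result should be clamped into the configured bounds before returning. A minimal sketch; the helper name is hypothetical:

```python
def clamp_sleep_time(proposed: int, min_sleep_time: int, max_sleep_time: int) -> int:
    """Clamp a proposed sleep duration into [min_sleep_time, max_sleep_time]."""
    return max(min_sleep_time, min(int(proposed), max_sleep_time))
```

Centralizing the clamp means every provider honors the constraints even when a model returns an out-of-range number.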

Developer Experience

  • Provide simple usage examples in examples/ showing how to instantiate ProactiveAgent with each new provider (similar to existing examples).
  • Document provider selection and required env vars in README.md and proactiveagent/providers/README.md.

Additional Context

  • Reference OpenAIProvider for structure and behavior parity.
  • Ensure async boundaries are respected to avoid blocking the agent loop.
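If a vendor ships only a synchronous client, one way to respect that async boundary is to offload the call to a worker thread. A sketch using asyncio.to_thread; the blocking function here is a placeholder for a real SDK call:

```python
import asyncio


def blocking_vendor_call(prompt: str) -> str:
    # Placeholder for a synchronous SDK call; a real provider would
    # invoke the vendor client here.
    return prompt.upper()


async def generate(prompt: str) -> str:
    # Offload the blocking call so the agent's event loop keeps running.
    return await asyncio.to_thread(blocking_vendor_call, prompt)


result = asyncio.run(generate("hello"))
```

Vendors with native async clients (e.g. Anthropic's AsyncAnthropic) can be awaited directly and do not need this wrapper.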
