
Fix/model config init #61


Closed

Conversation

AlexCuadron

No description provided.

feat: enhance LiteLLM integration with async and streaming support

- Add async support for all model decoders (sketched after this list)
- Add streaming support for all model decoders
- Improve error handling with LiteLLM-specific exceptions
- Add exponential backoff for retries
- Refactor code for better organization and reusability
- Add proper type hints and docstrings
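A minimal sketch of what the async + streaming decoder path with retries could look like, assuming litellm's `acompletion()` API and the exception classes from `litellm.exceptions`; the function name and retry policy are illustrative, not the PR's actual code:

```python
# Illustrative only: the PR's real decoder classes and retry policy may differ.
import asyncio

import litellm
from litellm.exceptions import APIConnectionError, RateLimitError, ServiceUnavailableError


async def stream_completion(model: str, messages: list[dict], max_retries: int = 3):
    """Yield streamed text chunks, retrying transient errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            response = await litellm.acompletion(model=model, messages=messages, stream=True)
            async for chunk in response:
                delta = chunk.choices[0].delta.content
                if delta:
                    yield delta
            return
        except (RateLimitError, APIConnectionError, ServiceUnavailableError):
            if attempt == max_retries - 1:
                raise
            # Exponential backoff: wait 1s, 2s, 4s, ... between retries.
            await asyncio.sleep(2 ** attempt)


async def main() -> None:
    async for piece in stream_completion("gpt-4o-mini", [{"role": "user", "content": "Hello"}]):
        print(piece, end="", flush=True)


if __name__ == "__main__":
    asyncio.run(main())
```

Because litellm normalizes provider errors into its own exception hierarchy, a single retry wrapper like this can cover OpenAI-, Anthropic-, and Qwen-style backends alike.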
- Add TOML-based configuration system
- Add model factory for creating models from config (see the configuration sketch after this list)
- Add model and agentless configuration classes
- Update model classes to use configuration
- Add model-specific feature support checks
- Add all LiteLLM parameters to configuration
- Add example configurations for different model types
- Add validation and adjustment of parameters based on model capabilities
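The TOML-driven configuration and model factory described above might look roughly like this; the `[model]` schema, field names, and stub decoder classes are assumptions for illustration, not the PR's actual layout:

```python
# Illustrative config schema and stub classes; not the PR's exact implementation.
import tomllib  # stdlib in Python 3.11+
from dataclasses import dataclass

EXAMPLE_TOML = """
[model]
name = "gpt-4o"
temperature = 0.0
max_tokens = 1024
"""


@dataclass
class ModelConfig:
    name: str
    temperature: float = 0.0
    max_tokens: int = 1024


class OpenAIChatDecoder:        # stand-in for the project's real decoder class
    def __init__(self, config: ModelConfig):
        self.config = config


class AnthropicChatDecoder:     # stand-in for the project's real decoder class
    def __init__(self, config: ModelConfig):
        self.config = config


def create_model(config: ModelConfig):
    """Factory: pick a decoder implementation based on the configured model name."""
    if config.name.startswith(("claude", "anthropic/")):
        return AnthropicChatDecoder(config)
    return OpenAIChatDecoder(config)


config = ModelConfig(**tomllib.loads(EXAMPLE_TOML)["model"])
model = create_model(config)
print(type(model).__name__, config.name)
```

Keeping every LiteLLM parameter in the TOML file and validating it against the selected model's capabilities means unsupported options can be adjusted or rejected at load time rather than failing at request time.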
- Add comprehensive LiteLLM integration guide
- Document all configuration options
- Document model-specific feature support
- Add usage examples and error handling
- Update README with LLM integration section
- Convert dataset to list before using with ThreadPoolExecutor (see the sketch after this list)
- Update iteration to use list instead of dataset directly
- Fix potential thread safety issues when accessing dataset items
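A sketch of the dataset/threading change, under the assumption that the dataset is a Hugging Face `datasets.Dataset` and that the dataset name and `process_instance()` are placeholders for the real per-instance work:

```python
# Illustrative only: the dataset name and worker function are placeholders.
from concurrent.futures import ThreadPoolExecutor

from datasets import load_dataset


def process_instance(instance: dict) -> str:
    return instance["instance_id"]      # placeholder for the real per-instance work


dataset = load_dataset("princeton-nlp/SWE-bench_Lite", split="test")

# Materialize the dataset into a plain list up front so worker threads iterate
# over ordinary dicts rather than indexing the Dataset object concurrently.
instances = list(dataset)

with ThreadPoolExecutor(max_workers=8) as executor:
    results = list(executor.map(process_instance, instances))

print(f"processed {len(results)} instances")
```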
- Fix ModelConfig initialization in OpenAIChatDecoder and AnthropicChatDecoder
- Use explicit parameters instead of **kwargs (see the sketch after this list)
- Add Qwen model configuration
- Remove duplicate tools definition
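The ModelConfig initialization fix might look roughly like this: the decoder constructor takes named parameters and builds the config explicitly, rather than forwarding an opaque `**kwargs` dict. The field names here are assumptions; only the class names appear in the PR description:

```python
# Field names are assumed for illustration; the class names come from the PR.
from dataclasses import dataclass


@dataclass
class ModelConfig:
    name: str
    temperature: float = 0.0
    max_tokens: int = 1024
    top_p: float = 1.0


class OpenAIChatDecoder:
    # Before (roughly): __init__(self, **kwargs) forwarded straight into ModelConfig,
    # so typos and unsupported options passed through silently.
    def __init__(self, name: str, temperature: float = 0.0,
                 max_tokens: int = 1024, top_p: float = 1.0):
        self.config = ModelConfig(
            name=name,
            temperature=temperature,
            max_tokens=max_tokens,
            top_p=top_p,
        )


decoder = OpenAIChatDecoder(name="gpt-4o", temperature=0.2)
print(decoder.config)
```

The AnthropicChatDecoder change would be analogous, and the Qwen configuration presumably adds another entry in the same TOML format used by the configuration system above.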