# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Build and Development Commands

### Building the Project
```bash
./gradlew build                 # Build all modules
./gradlew clean build           # Clean and build all modules
./gradlew assemble              # Assemble outputs without running tests
```

### Running Tests
```bash
./gradlew test                  # Run tests for all platforms
./gradlew jvmTest               # Run JVM tests only
./gradlew macosArm64Test        # Run macOS ARM64 tests
./gradlew iosSimulatorArm64Test # Run iOS simulator tests
./gradlew allTests              # Run tests for all targets with aggregated report
./gradlew check                 # Run all verification tasks
```

### Test Coverage
```bash
./gradlew koverHtmlReport       # Generate HTML coverage report for all code
./gradlew koverXmlReport        # Generate XML coverage report
./gradlew koverVerify           # Run coverage verification (min 86% required)
```

### Running Specific Module Tests
```bash
./gradlew :openai-client:openai-client-core:test
./gradlew :anthropic-client:anthropic-client-core:test
./gradlew :ollama-client:ollama-client-core:test
./gradlew :gemini-client:gemini-client-core:test
./gradlew :openai-gateway:openai-gateway-core:test
```

## Project Architecture

This is a Kotlin Multiplatform project providing AI/LLM client implementations for multiple providers. The codebase follows a modular architecture with clear separation of concerns.

### Core Architecture Patterns

1. **Multiplatform Structure**: Each client module has platform-specific implementations:
   - `-core`: Common implementation shared across platforms
   - `-darwin`: Apple platform-specific implementations (iOS, macOS)
   - `-cio`: JVM-specific implementations using Ktor's CIO engine

2. **Provider Pattern**: The `openai-gateway` module implements a gateway pattern that allows switching between different LLM providers (OpenAI, Anthropic, Ollama, Gemini) behind a unified interface; see the sketch after this list.

3. **HTTP Client Abstraction**: The `common` module provides a shared `HttpRequester` interface that abstracts HTTP operations across platforms, using Ktor under the hood.

4. **Dependency Injection**: Uses Koin for dependency injection across the codebase, with platform-specific configurations.

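As a rough illustration of the provider/gateway pattern from item 2, here is a minimal, self-contained Kotlin sketch. All type and function names below are invented for the example; the repository's actual `OpenAIGateway` and `OpenAIProvider` interfaces are richer and differ in detail.

```kotlin
// Illustrative sketch only -- none of these names exist in the repository.
data class ChatRequest(val model: String, val prompt: String)
data class ChatResponse(val text: String)

// Each vendor-specific client implements the same small surface.
interface Provider {
    val models: Set<String>
    suspend fun chat(request: ChatRequest): ChatResponse
}

class FakeOpenAIProvider : Provider {
    override val models = setOf("gpt-4o")
    override suspend fun chat(request: ChatRequest) = ChatResponse("openai: ${request.prompt}")
}

class FakeAnthropicProvider : Provider {
    override val models = setOf("claude-3-haiku")
    override suspend fun chat(request: ChatRequest) = ChatResponse("anthropic: ${request.prompt}")
}

// The gateway routes a request to whichever provider serves the requested model,
// so callers depend on one interface regardless of the underlying vendor.
class Gateway(private val providers: List<Provider>) {
    suspend fun chat(request: ChatRequest): ChatResponse =
        providers.first { request.model in it.models }.chat(request)
}

suspend fun demo() {
    val gateway = Gateway(listOf(FakeOpenAIProvider(), FakeAnthropicProvider()))
    println(gateway.chat(ChatRequest(model = "claude-3-haiku", prompt = "Hello")))
}
```
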
### Module Structure

- **common/**: Shared networking and utility code
  - HTTP client abstraction (`HttpRequester`)
  - Ktor configuration for different platforms
  - JSON serialization utilities

- **openai-client/**: OpenAI API client implementation
  - Chat completions, images, and legacy completions APIs
  - Streaming support for chat completions

- **anthropic-client/**: Anthropic Claude API client
  - Messages API implementation
  - Image support with base64 encoding

- **ollama-client/**: Ollama local LLM client
  - Chat and generate endpoints
  - Local model management

- **gemini-client/**: Google Gemini API client
  - Text generation with multimodal support

- **openai-gateway/**: Unified gateway for all providers
  - Provider abstraction allowing runtime switching
  - Adapter pattern to convert between provider-specific and OpenAI formats
  - Extension functions for converting requests and responses between provider formats (see the sketch below)

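As a hedged illustration of that conversion layer, the following sketch uses invented stand-in types; the repository's actual request/response models and conversion extensions have different names and fields.

```kotlin
// Illustrative only: invented types standing in for the repository's
// provider-specific and OpenAI-style models and its conversion extensions.
data class AnthropicStyleMessage(val role: String, val content: String)
data class AnthropicStyleResponse(val model: String, val messages: List<AnthropicStyleMessage>)

data class OpenAIStyleChoice(val index: Int, val role: String, val content: String)
data class OpenAIStyleResponse(val model: String, val choices: List<OpenAIStyleChoice>)

// An extension function adapts one provider's response shape to the OpenAI-style shape,
// so gateway callers only ever see a single response format.
fun AnthropicStyleResponse.toOpenAIStyle(): OpenAIStyleResponse =
    OpenAIStyleResponse(
        model = model,
        choices = messages.mapIndexed { i, m -> OpenAIStyleChoice(i, m.role, m.content) },
    )
```
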
### Key Interfaces

- `OpenAI`: Main interface for OpenAI operations (Chat, Images, Completions)
- `OpenAIGateway`: Gateway interface for multi-provider support
- `HttpRequester`: HTTP client abstraction for cross-platform requests (sketched below)
- `OpenAIProvider`: Provider interface for different LLM services

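As a rough guess at the shape of the `HttpRequester` abstraction, here is a minimal sketch built on Ktor. The name `SimpleRequester`, its single method, and the wiring are assumptions made for illustration, not the repository's actual definitions.

```kotlin
// Illustrative only: a minimal HTTP abstraction in the spirit of HttpRequester;
// the repository's real interface has a different shape.
import io.ktor.client.HttpClient
import io.ktor.client.request.HttpRequestBuilder
import io.ktor.client.request.request
import io.ktor.client.statement.HttpResponse

interface SimpleRequester {
    suspend fun perform(block: HttpRequestBuilder.() -> Unit): HttpResponse
}

// Common implementation: each platform supplies its own engine (CIO on the JVM,
// Darwin on Apple targets) when constructing the underlying HttpClient.
class KtorRequester(private val client: HttpClient) : SimpleRequester {
    override suspend fun perform(block: HttpRequestBuilder.() -> Unit): HttpResponse =
        client.request(block)
}
```
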
### Configuration

Each client uses a configuration pattern (e.g., `OpenAIConfig`, `AnthropicConfig`) that takes:
- Base URL (with defaults for each provider)
- API key
- Optional provider-specific settings

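A hypothetical illustration of this configuration shape, using property names and defaults that are assumptions and may not match the repository's actual `OpenAIConfig`:

```kotlin
// Hypothetical shape only -- property names and defaults are assumptions, not the real OpenAIConfig.
data class ExampleOpenAIConfig(
    val apiKey: String,
    val baseUrl: String = "https://api.openai.com/v1/", // provider default, overridable for proxies
    val organization: String? = null,                    // optional provider-specific setting
)

// Typical construction: only the API key is mandatory (JVM-side env lookup shown here).
val config = ExampleOpenAIConfig(apiKey = System.getenv("OPENAI_API_KEY") ?: error("OPENAI_API_KEY not set"))
```
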
### Testing Strategy

The project uses:
- Unit tests with mocked HTTP clients (`MockHttpClient`)
- Integration tests (files ending with `ITest`) for actual API calls
- Platform-specific test configurations
- Kover for code coverage with an 86% minimum threshold
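
As a hedged sketch of the unit-test style, the example below uses Ktor's generic `MockEngine` rather than the repository's `MockHttpClient` helper, and the test body is invented; the real tests exercise the client classes instead of a bare `HttpClient`.

```kotlin
// Generic sketch: a Ktor MockEngine answers every request with a canned body,
// so no network call is made. Not the repository's actual test helpers.
import io.ktor.client.HttpClient
import io.ktor.client.engine.mock.MockEngine
import io.ktor.client.engine.mock.respond
import io.ktor.client.request.get
import io.ktor.client.statement.bodyAsText
import io.ktor.http.HttpHeaders
import io.ktor.http.HttpStatusCode
import io.ktor.http.headersOf
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertEquals

class ChatCompletionsTest {
    @Test
    fun returnsCannedCompletion() = runTest {
        val engine = MockEngine { _ ->
            respond(
                content = """{"id":"chatcmpl-123"}""",
                status = HttpStatusCode.OK,
                headers = headersOf(HttpHeaders.ContentType, "application/json"),
            )
        }
        val client = HttpClient(engine)

        // The URL is illustrative; any request against this client gets the canned response.
        val body = client.get("https://example.invalid/v1/chat/completions").bodyAsText()
        assertEquals("""{"id":"chatcmpl-123"}""", body)
    }
}
```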