# Reflection Agent Service 🤖✨

[CI Status](https://github.com/PRYSKAS/REFLECTION_PATTERN_AGENT/actions)

This project is an AI microservice built around the **Reflection agent design pattern** to iteratively analyze, critique, and refine Large Language Model (LLM) outputs. It showcases the end-to-end engineering process of transforming a core AI script into a robust, containerized, and production-ready service.

## 🧠 Core Concept: The Reflection Pattern

The Reflection Pattern enhances the quality and reliability of LLM outputs through a structured, three-step self-critique process:

1. **Generate:** The agent produces an initial draft in response to a prompt.
2. **Reflect:** The agent analyzes its own draft, identifies flaws or areas for improvement, and generates a list of actionable, constructive critiques.
3. **Refine:** The agent re-attempts the original task, this time using its own critiques as a guide to generate a superior final output.

This cycle mimics the human process of drafting and revision, leading to responses that are more coherent, accurate, and aligned with the user's intent.
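
The loop can be sketched in a few lines of Python. This is a minimal sketch, not the repository's actual implementation: the `ReflectionAgent` name mirrors the architecture diagram, and `llm` stands in for whatever OpenAI client wrapper the service uses (injected as a callable so the loop can be exercised without network calls):

```python
# Minimal sketch of the generate-reflect-refine loop. Class and method names
# are illustrative; `llm` is any callable mapping a prompt string to a
# completion string (e.g. a thin wrapper around the OpenAI API).
from typing import Callable, List


class ReflectionAgent:
    def __init__(self, llm: Callable[[str], str]):
        self.llm = llm  # injected dependency, so the loop is testable offline

    def run(self, prompt: str) -> dict:
        # 1. Generate: produce an initial draft.
        draft = self.llm(prompt)
        # 2. Reflect: critique the draft; one critique per line.
        critique = self.llm(
            f"Critique this draft and list concrete improvements:\n{draft}"
        )
        reflections: List[str] = [
            line for line in critique.splitlines() if line.strip()
        ]
        # 3. Refine: redo the task with the critiques as guidance.
        final = self.llm(
            f"Rewrite the draft below, applying every critique.\n"
            f"Draft:\n{draft}\nCritiques:\n{critique}\nTask: {prompt}"
        )
        return {
            "initial_draft": draft,
            "reflections": reflections,
            "final_output": final,
        }
```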

## 🚀 Engineering & MLOps Highlights

This project emphasizes the engineering required to serve an AI model reliably and at scale.

* **Microservice API:** The agent's logic is exposed via a RESTful API built with **FastAPI**, with clear data contracts enforced by **Pydantic** for robust I/O validation.
* **Containerization:** The entire application is containerized with **Docker**, ensuring a consistent execution environment and simplifying deployment across platforms.
* **Unit Testing:** The agent's core business logic is tested with **Pytest** and **pytest-mock**, guarding the reliability and integrity of each component.
* **Automated CI/CD:** A **GitHub Actions** pipeline runs on every push to `main`, automatically performing:
  * **Linting** with **Ruff** to enforce code quality and style consistency.
  * **Unit testing** to prevent regressions and ensure code health.
* **Secrets Management:** API keys and sensitive credentials are handled via `.env` files for local development and **GitHub Actions Secrets** in the CI/CD pipeline, keeping them out of the repository.
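
The mocked-LLM testing style mentioned above can be illustrated as follows. A sketch only: this `ReflectionAgent` is a hypothetical stand-in redefined here for self-containment, and the standard library's `unittest.mock.Mock` is used in place of `pytest-mock`'s `mocker` fixture (in the real suite, the OpenAI client would be patched instead):

```python
# Illustrative unit test with a mocked LLM, mirroring the Pytest approach
# described above. Names are hypothetical, not the repository's actual API.
from unittest.mock import Mock


class ReflectionAgent:
    """Minimal agent under test: generate -> reflect -> refine."""

    def __init__(self, llm):
        self.llm = llm

    def run(self, prompt: str) -> dict:
        draft = self.llm(prompt)
        reflections = self.llm(f"Critique:\n{draft}")
        final = self.llm(f"Rewrite using critiques:\n{reflections}")
        return {
            "initial_draft": draft,
            "reflections": reflections,
            "final_output": final,
        }


def test_run_calls_llm_three_times():
    # The mock returns a different canned completion on each call,
    # so no network access or API key is needed.
    mock_llm = Mock(side_effect=["draft", "- be specific", "final"])
    result = ReflectionAgent(mock_llm).run("Write a tweet")
    assert mock_llm.call_count == 3
    assert result == {
        "initial_draft": "draft",
        "reflections": "- be specific",
        "final_output": "final",
    }
```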

## 🏗️ Service Architecture

```mermaid
graph TD
    A[User/Client] -->|HTTP POST Request| B(FastAPI Service);
    B -->|prompt| C{ReflectionAgent};
    C -->|1. Generate Draft| D[OpenAI API];
    D -->|Draft| C;
    C -->|2. Generate Reflections| D;
    D -->|Reflections| C;
    C -->|3. Generate Final Output| D;
    D -->|Final Output| C;
    C -->|Complete Response| B;
    B -->|JSON Response| A;
```

## 🏁 Getting Started

### Prerequisites
* Git
* Python 3.9+
* Docker Desktop (running)

### 1. Running Locally (for Development)

1. **Clone the repository:**
   ```bash
   git clone https://github.com/PRYSKAS/REFLECTION_PATTERN_AGENT.git
   cd REFLECTION_PATTERN_AGENT
   ```

2. **Set up the environment:**
   * Create a `.env` file from the example: `copy .env.example .env` (on Windows) or `cp .env.example .env` (on Unix/macOS).
   * Add your `OPENAI_API_KEY` to the new `.env` file.

3. **Install dependencies:**
   ```bash
   pip install -r requirements.txt
   pip install -e .
   ```

4. **Run tests to verify the setup:**
   ```bash
   pytest
   ```

5. **Start the API server:**
   ```bash
   uvicorn main:app --reload --port 8001
   ```
   The interactive API docs will be available at `http://127.0.0.1:8001/docs`.

### 2. Running with Docker (Production Mode)

This is the recommended way to run the service for a stable, isolated deployment.

1. **Build the Docker image:**
   ```bash
   docker build -t reflection-agent-service .
   ```

2. **Run the container:**
   ```bash
   docker run -d -p 8001:8001 --env-file .env --name reflection-agent reflection-agent-service
   ```
   The service now runs in the background. Access the API documentation at `http://127.0.0.1:8001/docs`.
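
As an alternative to the two commands above, the same setup can be expressed as a Docker Compose file. This is a hypothetical sketch assuming the defaults shown in this README (port `8001`, a `.env` file in the project root); the repository itself may not ship a compose file:

```yaml
# docker-compose.yml (hypothetical) — mirrors the build/run commands above.
services:
  reflection-agent:
    build: .
    ports:
      - "8001:8001"   # host:container, matching the uvicorn port
    env_file:
      - .env          # supplies OPENAI_API_KEY without baking it into the image
```

With this file in place, `docker compose up -d` replaces the separate build and run steps.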

## 📡 API Endpoint

### `POST /run`

Executes the agent's full generate-reflect-refine cycle.

**Request Body:**
```json
{
  "prompt": "Write a tweet about the importance of CI/CD in AI engineering."
}
```

**Success Response (200 OK):**
```json
{
  "initial_draft": "CI/CD is crucial in AI engineering. #AI #MLOps",
  "reflections": [
    "- The tweet is too short and generic.",
    "- It could add a specific benefit, like 'accelerating value delivery'.",
    "- An emoji would increase engagement."
  ],
  "final_output": "🚀 CI/CD in AI Engineering isn't a luxury; it's a necessity! It accelerates value delivery by automating testing and deployment, ensuring robust models reach production faster. #MLOps #AIEngineering"
}
```