
LLM-COMPARE-FASTAPI

Unleashing AI Power, One Language Model at a Time!


🔗 Table of Contents

  • 📍 Overview
  • 👾 Features
  • 📁 Project Structure
  • 📂 Project Index
  • 🚀 Getting Started
  • 🤖 Usage
  • 🧪 Testing
  • 🔰 Contributing
  • 🎗 License


πŸ“ Overview

LLM-Compare-FastAPI is an open-source project for comparing AI language models side by side. It combines a FastAPI backend with a Streamlit frontend, giving users a simple interface to enter a prompt and view the responses of different models, including DeepSeek, OpenAI GPT, Google Gemini, Anthropic Claude, and Cohere Command. It is aimed at AI enthusiasts, researchers, and developers who want a quick, practical way to evaluate and compare the behavior of different language models.


👾 Features

⚙️ Architecture
  • The project uses a two-tier application architecture, with a backend service built with FastAPI and a frontend service using Streamlit.
  • The backend and frontend services are orchestrated using a docker-compose.yml file, ensuring both services run on a shared network.
  • The backend service is responsible for language processing tasks and server operations, while the frontend service handles user interaction and API calls to the backend.
🔩 Code Quality
  • The project is written primarily in Python, with a clean and modular code structure.
  • It uses Docker for containerization, enhancing the project's portability and scalability.
  • The project leverages the dotenv module to load API keys for the various AI services from a .env file.
📄 Documentation
  • The project provides detailed installation and usage commands for both pip and docker.
  • It outlines the necessary dependencies for both the backend and frontend services in requirements.txt files.
  • The project also provides a test command using pytest.
🔌 Integrations
  • The project integrates with various AI services such as DeepSeek, OpenAI, Google AI, Anthropic, and Cohere.
  • It provides routes to generate text using different AI models via the endpoints.py file in the backend application.
  • The frontend service makes API calls to the backend and displays response times.
🧩 Modularity
  • The project is structured into separate backend and frontend services, each with its own Dockerfile and requirements.txt file.
  • The backend service is further divided into core and API modules, with a central config.py file for managing API keys.
  • The frontend service is encapsulated in a single app.py file, handling user interaction and API calls.
🧪 Testing
  • The project provides a test command using pytest.
  • Specific test cases and test files are not yet documented.
⚡️ Performance
  • The project uses FastAPI for the backend service, known for its high performance and asynchronous capabilities.
  • The frontend service uses Streamlit, enabling rapid prototyping and efficient user interaction.
🛡️ Security
  • The project uses the dotenv module to load API keys for the various AI services from a .env file, keeping them out of the source code.
  • Beyond this, no specific security measures are documented.

πŸ“ Project Structure

└── LLM-Compare-FastAPI/
    ├── LICENSE
    ├── README.md
    ├── backend
    │   ├── Dockerfile
    │   ├── app
    │   └── requirements.txt
    ├── docker-compose.yml
    └── frontend
        ├── Dockerfile
        ├── app.py
        └── requirements.txt

📂 Project Index

LLM-COMPARE-FASTAPI/
__root__
docker-compose.yml - Orchestrates the deployment of the two-tier application: a backend service built with FastAPI and a frontend service using Streamlit. It runs both services on a shared network, makes the frontend depend on the backend, and restarts both automatically if they fail.
backend
requirements.txt - Lists the backend dependencies, including language processing libraries and server frameworks, ensuring the correct packages are installed for language processing tasks and server operations.
Dockerfile - Sets up a Python environment for the backend, installs the dependencies from requirements.txt, and prepares the application to run in a container, so it behaves consistently across platforms.
app
main.py - The entry point for the FastAPI LangChain API, located in backend/app. It wires together the application's endpoints and starts the FastAPI server locally on port 8000.
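A minimal sketch of what this entry point might look like (the router import path is an assumption, not the project's verbatim code):

```python
# Hypothetical sketch of backend/app/main.py.
import uvicorn
from fastapi import FastAPI

from app.api.endpoints import router  # assumed module path

app = FastAPI(title="FastAPI LangChain API")
app.include_router(router)

if __name__ == "__main__":
    # Start the API locally on port 8000, as described above.
    uvicorn.run(app, host="0.0.0.0", port=8000)
```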
core
config.py - A central hub for managing API keys in the backend. It uses the dotenv module to load keys for OpenAI, Google AI, Anthropic, and Cohere, enabling integration with these external services.
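A minimal sketch of such a configuration module (the exact environment variable names are assumptions):

```python
# Hypothetical sketch of backend/app/core/config.py.
import os
from dotenv import load_dotenv

load_dotenv()  # read key=value pairs from a local .env file

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
COHERE_API_KEY = os.getenv("COHERE_API_KEY")
```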
models.py - An interface to the various AI chat models. It provides functions to send prompts to GPT-3.5 Turbo, Gemini, Claude 2.1, and Cohere's chat model and return their responses, which is the core of the project's model comparison.
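A minimal sketch of two such functions, using LangChain's chat model wrappers (package and function names are assumptions and depend on the installed LangChain version):

```python
# Hypothetical sketch of backend/app/core/models.py; assumes the
# langchain-openai and langchain-anthropic packages are installed.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

def ask_gpt(prompt: str) -> str:
    """Send a prompt to GPT-3.5 Turbo and return the reply text."""
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    return llm.invoke(prompt).content

def ask_claude(prompt: str) -> str:
    """Send a prompt to Claude 2.1 and return the reply text."""
    llm = ChatAnthropic(model="claude-2.1")
    return llm.invoke(prompt).content
```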
api
endpoints.py - The API gateway for the backend, providing one route per model (OpenAI GPT, Google Gemini, Anthropic Claude, and Cohere Command) to generate text, plus a route to check the API's health status.
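A minimal sketch of one such route plus the health check (route paths, the request schema, and the models import are assumptions):

```python
# Hypothetical sketch of backend/app/api/endpoints.py.
from fastapi import APIRouter
from pydantic import BaseModel

from app.core import models  # assumed import path

router = APIRouter()

class PromptRequest(BaseModel):
    prompt: str

@router.post("/generate/openai")
def generate_openai(request: PromptRequest):
    # Forward the prompt to the OpenAI wrapper and return its reply.
    return {"response": models.ask_gpt(request.prompt)}

@router.get("/health")
def health():
    return {"status": "ok"}
```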
frontend
app.py - The Streamlit user interface. It lets users enter a prompt, adjust model settings, and view responses from OpenAI GPT, Google Gemini, Anthropic Claude, and Cohere Command, handling the API calls to the backend and displaying each model's response time.
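A minimal sketch of such a frontend (the backend hostname and route names are assumptions):

```python
# Hypothetical sketch of frontend/app.py.
import time

import requests
import streamlit as st

BACKEND_URL = "http://backend:8000"  # assumed service name from docker-compose

st.title("LLM Compare")
prompt = st.text_area("Enter a prompt")

if st.button("Compare") and prompt:
    for model in ["openai", "gemini", "claude", "cohere"]:
        start = time.time()
        resp = requests.post(f"{BACKEND_URL}/generate/{model}",
                             json={"prompt": prompt})
        st.subheader(model)
        st.write(resp.json().get("response"))
        st.caption(f"Response time: {time.time() - start:.2f}s")
```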
requirements.txt - Lists the frontend dependencies, streamlit and requests, so the user interface can run and make HTTP requests to the backend.
Dockerfile - Sets up a Python environment for the frontend, installs the dependencies from requirements.txt, and configures Streamlit to serve app.py on a specified server port and address.

🚀 Getting Started

☑️ Prerequisites

Before getting started with LLM-Compare-FastAPI, ensure your runtime environment meets the following requirements:

  • Programming Language: Python
  • Package Manager: Pip
  • Container Runtime: Docker

βš™οΈ Installation

Install LLM-Compare-FastAPI using one of the following methods:

Build from source:

  1. Clone the LLM-Compare-FastAPI repository:
❯ git clone https://github.com/serkanyasr/LLM-Compare-FastAPI
  2. Navigate to the project directory:
❯ cd LLM-Compare-FastAPI
  3. Install the project dependencies:

Using pip

❯ pip install -r backend/requirements.txt -r frontend/requirements.txt

Using docker (builds both service images via docker-compose.yml)

❯ docker compose build

🤖 Usage

Run LLM-Compare-FastAPI using one of the following methods:

Using pip (start the backend, then the frontend in a second terminal)

❯ python backend/app/main.py
❯ streamlit run frontend/app.py

Using docker

❯ docker compose up

🧪 Testing

Run the test suite using the following command:

❯ pytest
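Since no test files are documented yet, the following is only an illustration of what a minimal test could look like, using FastAPI's TestClient against the assumed /health route:

```python
# Hypothetical example test (e.g. backend/tests/test_health.py); the
# app import path and the /health route are assumptions.
from fastapi.testclient import TestClient

from app.main import app

client = TestClient(app)

def test_health_endpoint_returns_ok():
    response = client.get("/health")
    assert response.status_code == 200
```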

🔰 Contributing

  • 💬 Join the Discussions: Share your insights, provide feedback, or ask questions.
  • 🐛 Report Issues: Submit bugs found or log feature requests for the LLM-Compare-FastAPI project.
  • 💡 Submit Pull Requests: Review open PRs, and submit your own PRs.
Contributing Guidelines
  1. Fork the Repository: Start by forking the project repository to your GitHub account.
  2. Clone Locally: Clone your forked repository to your local machine using a git client.
    git clone https://github.com/<your-username>/LLM-Compare-FastAPI
  3. Create a New Branch: Always work on a new branch, giving it a descriptive name.
    git checkout -b new-feature-x
  4. Make Your Changes: Develop and test your changes locally.
  5. Commit Your Changes: Commit with a clear message describing your updates.
    git commit -m 'Implemented new feature x.'
  6. Push to GitHub: Push the changes to your forked repository.
    git push origin new-feature-x
  7. Submit a Pull Request: Create a PR against the original project repository. Clearly describe the changes and their motivations.
  8. Review: Once your PR is reviewed and approved, it will be merged into the main branch. Congratulations on your contribution!


🎗 License

This project is licensed under the MIT License. For more details, refer to the LICENSE file.

