This repository provides a basic large language model (LLM) application built with the LangChain Expression Language (LCEL). It covers the basics of building an API with FastAPI, making HTTP requests with the requests library, running the server with uvicorn, and creating a client with Streamlit. You can also test the API with Postman. Screenshots are attached to help illustrate the workflow.
The project demonstrates how to:
- Build a FastAPI server that leverages Langchain components for prompt engineering and language model inference.
- Create a simple client using the requests library and Streamlit to interact with the API.
- Integrate several important libraries, including FastAPI, uvicorn, python-dotenv, langchain_groq, langchain_core, and streamlit.
- Cover the basics of API development, deployment, and testing.
- FastAPI Server: Provides a RESTful API endpoint that accepts a text input and a target language, translates the input using the configured language model, and returns the response.
- Client Application: A simple Streamlit app that lets users input text, select a target language, and see the translated output.
- API Testing: You can test the API endpoint in Postman by sending a POST request to the /chain/invoke endpoint.
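The translation chain behind this endpoint is composed with LCEL's `|` (pipe) operator, which wires a prompt template, a model, and an output parser into a single runnable. As a rough illustration of how that composition works, here is a minimal plain-Python mimic (these are not the real LangChain classes; the actual server uses `langchain_core` and `langchain_groq`):

```python
class Step:
    """Tiny stand-in for an LCEL runnable: wraps a function and supports |."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Piping two steps yields a new step that runs them in sequence.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Hypothetical stages standing in for prompt | model | parser
prompt = Step(lambda d: f"Translate to {d['language']}: {d['text']}")
model = Step(lambda p: {"content": p.upper()})   # fake "model" call
parser = Step(lambda out: out["content"])        # extract the string

chain = prompt | model | parser
print(chain.invoke({"language": "French", "text": "hello"}))
# → TRANSLATE TO FRENCH: HELLO
```

In the real app, the prompt, Groq chat model, and output parser compose the same way, and LangServe exposes the resulting chain at `/chain/invoke`.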
- FastAPI: A modern, fast (high-performance) web framework for building APIs with Python.
- uvicorn: An ASGI server used to run the FastAPI application.
- requests: For making HTTP requests in the client.
- python-dotenv: Loads environment variables from a .env file.
- langchain_groq: For interacting with the language model via the Groq API.
- langchain_core: Provides core prompt templating and output parsing functionalities.
- Streamlit: Used to build the simple client UI for interacting with the API.
- Clone the repository:
git clone https://github.com/EniolaAdemola/LLM-App-With-LCEL.git
cd LLM-App-With-LCEL
- Create and activate a virtual environment (optional but recommended; conda is used here):
conda create -p venv python==3.10 -y
conda activate venv
- Install the required libraries:
pip install -r requirements.txt
- Set up environment variables: create a .env file in the root directory with the following key:
GROQ_API_KEY=your_groq_api_key
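The server is expected to read `GROQ_API_KEY` at startup. A minimal sketch of how that lookup might work (in the actual project, python-dotenv's `load_dotenv()` populates the environment from `.env` first; the helper name here is hypothetical):

```python
import os

def get_groq_api_key() -> str:
    """Read GROQ_API_KEY from the environment, failing loudly if unset.

    In the real app, python-dotenv's load_dotenv() would be called before
    this so that values from the .env file are available in os.environ.
    """
    key = os.environ.get("GROQ_API_KEY")
    if not key:
        raise RuntimeError("GROQ_API_KEY is not set; add it to your .env file")
    return key

os.environ["GROQ_API_KEY"] = "demo-key"  # simulate a loaded .env for this example
print(get_groq_api_key())  # → demo-key
```

Failing fast with a clear message is preferable to letting the Groq client raise an opaque authentication error later.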
To run the FastAPI server, execute:
python server\server.py
The server will be accessible at http://127.0.0.1:8000. Append /docs to view the LangServe API documentation (http://127.0.0.1:8000/docs).
To start the client application, run:
streamlit run client\client.py
You can then interact with the LLM application using the provided web interface.
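Under the hood, the Streamlit client simply POSTs a JSON payload to `/chain/invoke` and unpacks the response, which LangServe wraps in an `output` field. A small sketch of the helpers such a client might use (function names are illustrative, not taken from `client.py`):

```python
def build_payload(text: str, language: str) -> dict:
    """Build the JSON body expected by the /chain/invoke endpoint."""
    return {
        "input": {"language": language, "text": text},
        "config": {},
        "kwargs": {},
    }

def extract_output(response_json: dict) -> str:
    """LangServe's /invoke wraps the chain result in an 'output' field."""
    return response_json["output"]

payload = build_payload("Good morning", "French")
print(payload["input"]["language"])  # → French

# Simulated server response (no network call in this sketch):
print(extract_output({"output": "Bonjour"}))  # → Bonjour
```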
POST /chain/invoke This endpoint accepts a JSON payload with the following structure:
{
"input": {
"language": "French",
"text": "Your input text here"
},
"config": {},
"kwargs": {}
}
It returns a JSON response with the translated or processed output.
You can test the API on Postman by sending a POST request to:
http://127.0.0.1:8000/chain/invoke
Make sure to set the request body to JSON with the structure shown above.
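If you prefer scripting the test instead of using Postman, the same request can be issued from Python. The sketch below builds the POST with the standard library's `urllib`; the request object is constructed but not sent, so it runs without the server:

```python
import json
import urllib.request

url = "http://127.0.0.1:8000/chain/invoke"
body = {
    "input": {"language": "French", "text": "Your input text here"},
    "config": {},
    "kwargs": {},
}

# Build the POST request with a JSON body and the matching Content-Type.
req = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.get_method())  # → POST
print(req.full_url)      # → http://127.0.0.1:8000/chain/invoke

# With the server running, send it with:
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["output"])
```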
Contributions are welcome! Feel free to submit a PR or open an issue.
For inquiries, reach out to Eniola Ademola.