Ship production-ready LangChain projects with FastAPI.
- supports token streaming over HTTP and WebSocket
- supports multiple LangChain `Chain` types
- simple Gradio chatbot UI for fast prototyping
- follows FastAPI responses naming convention
There are great low-code/no-code open-source solutions for deploying LangChain projects. However, most of them are opinionated about the cloud provider or deployment code. This project aims to give FastAPI users a cloud-agnostic, deployment-agnostic solution that can be easily integrated into existing backend infrastructure.
The library is available on PyPI and can be installed via pip:

```bash
pip install fastapi-async-langchain
```
```python
from dotenv import load_dotenv
from fastapi import FastAPI
from langchain import ConversationChain
from langchain.chat_models import ChatOpenAI
from pydantic import BaseModel

from fastapi_async_langchain.responses import StreamingResponse

load_dotenv()  # load environment variables (e.g. the OpenAI API key) from .env

app = FastAPI()


class Request(BaseModel):
    query: str


@app.post("/chat")
async def chat(request: Request) -> StreamingResponse:
    # streaming=True enables token-by-token callbacks from the chat model
    chain = ConversationChain(llm=ChatOpenAI(temperature=0, streaming=True), verbose=True)
    # StreamingResponse.from_chain runs the chain and streams generated tokens over HTTP
    return StreamingResponse.from_chain(chain, request.query, media_type="text/event-stream")
```
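With the route above, start the server the usual FastAPI way (for example `uvicorn app:app`, assuming the snippet is saved as `app.py`), and the streamed tokens can be read with any HTTP client. Below is a minimal client sketch using `httpx`; the localhost URL and port are assumptions based on uvicorn's defaults:

```python
import httpx

# Hypothetical client for the /chat route above; assumes the server runs
# on uvicorn's default address (http://localhost:8000).
with httpx.stream(
    "POST",
    "http://localhost:8000/chat",
    json={"query": "Tell me a joke."},
    timeout=None,  # keep the connection open while tokens stream in
) as response:
    for chunk in response.iter_text():
        print(chunk, end="", flush=True)
```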
See the `examples/` directory for the list of available demos.
Create a `.env` file using `.env.sample` and add your OpenAI API key to it before running the examples.
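If the key does not load, the examples will fail on the first call to OpenAI. A quick, hypothetical sanity check (assuming the variable is named `OPENAI_API_KEY`, which is what `ChatOpenAI` reads by default):

```python
import os

from dotenv import load_dotenv

# Reads .env from the current working directory; OPENAI_API_KEY is the
# variable ChatOpenAI looks for by default.
load_dotenv()
assert os.getenv("OPENAI_API_KEY"), "Copy .env.sample to .env and set your OpenAI API key"
```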
Contributions are more than welcome! If you have an idea for a new feature or want to help improve fastapi-async-langchain, please create an issue or submit a pull request on GitHub.
See CONTRIBUTING.md for more information.
The library is released under the MIT License.