A React + AWS Serverless full-stack implementation of the 30 example applications found in the official OpenAI API documentation. This repository is used as an instructional tool for the YouTube channel "Full Stack With Lawrence" as well as for the University of British Columbia course "Artificial Intelligence Cloud Technology Implementation", taught by Lawrence McDaniel.
New in v0.10: A new chat app named "OpenAI Function Calling". See lambda_openai_function for examples, including the fully implemented "get_current_weather()" from the official OpenAI API documentation, and also a fun example of how to get OpenAI to not only recognize you but also say flowery, nice things about you!
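For orientation, the snippet below is a minimal, self-contained sketch of the function-calling round trip using the openai Python client (v1.x style). The `get_current_weather()` stub, the model name, and the prompt are illustrative placeholders, not the repository's actual implementation in lambda_openai_function.

```python
import json

from openai import OpenAI  # assumes openai>=1.0

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def get_current_weather(location: str, unit: str = "celsius") -> str:
    """Illustrative stub -- a real implementation would call a weather source."""
    return json.dumps({"location": location, "temperature": 22, "unit": unit})


# Describe the function to the model so it can decide when to call it.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City and country"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Vancouver?"}]
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

# If the model chose to call the tool, execute it locally with the arguments
# it supplied. The result can then be appended as a "tool" message and sent
# back to the model for a final natural-language answer.
for call in response.choices[0].message.tool_calls or []:
    if call.function.name == "get_current_weather":
        args = json.loads(call.function.arguments)
        print(get_current_weather(**args))
```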
IMPORTANT DISCLAIMERS:
- AWS API Gateway imposes a hard 29-second integration timeout, and OpenAI API calls often take longer than this; in that case the API Gateway endpoint returns a 504 "Gateway timeout" response to the React client. This happens frequently with apps that use gpt-4. Each of the 30 OpenAI API example applications is nonetheless implemented exactly as specified in the official documentation. One way to mitigate the timeout is sketched after these disclaimers.
- Distribution upload packages for AWS Lambda functions and AWS Lambda Layers are limited to 50 MB zipped (250 MB unzipped). This often poses serious limitations for Layers, which are intended to store your PyPI / NPM package dependencies. Incidentally, these code samples also serve as good scaffolding for alternative Docker-based deployment strategies using Elastic Container Service and/or Elastic Kubernetes Service.
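One way to soften the timeout described in the first disclaimer (a sketch only, not necessarily how this repository handles it) is to cap the OpenAI client's own timeout a few seconds below API Gateway's 29-second cutoff, so the Lambda can return a well-formed JSON error that the React client can render instead of API Gateway's generic 504 page. Assumes the openai v1.x Python client; the model name and response shape are placeholders.

```python
import json

from openai import OpenAI, APITimeoutError  # assumes openai>=1.0

# Keep the upstream call comfortably under API Gateway's 29-second cutoff.
client = OpenAI(timeout=25.0)


def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    try:
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[{"role": "user", "content": body.get("input", "")}],
        )
        return {
            "statusCode": 200,
            "body": json.dumps({"reply": completion.choices[0].message.content}),
        }
    except APITimeoutError:
        # Return a clean JSON error the React client can render,
        # rather than letting the gateway time out on its own.
        return {
            "statusCode": 504,
            "body": json.dumps({"error": "OpenAI request timed out"}),
        }
```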
Code composition as of Jan-2024:
-------------------------------------------------
Language          files   blank  comment     code
-------------------------------------------------
HCL                  29     346      714     2324
Markdown             52     765        6     2293
Python               26     603      611     2244
YAML                 20     112      109     1308
JavaScript           39     114      127     1088
JSX                   6      45       47      856
CSS                   5      32       14      180
make                  1      27       30      119
Text                  6      13        0      117
INI                   2      15        0       70
HTML                  2       1        0       65
Jupyter Notebook      1       0      186       48
Bourne Shell          5      17       55       47
TOML                  1       1        0       23
Dockerfile            1       4        4        5
-------------------------------------------------
SUM:                196   2,095    1,903   10,787
-------------------------------------------------
Complete documentation is located here.
React app that leverages Vite.js, @chatscope/chat-ui-kit-react, and react-pro-sidebar.
- Robust, highly customizable chat features
- A component model for implementing your own highly personalized OpenAI apps
- Skinnable UI for each app
- Includes default assets for each app
- Small, compact code base
- Robust error handling for non-200 response codes from the custom REST API
- Handles direct text input as well as file attachments
- Info link to the official OpenAI API code sample
- Build-deploy managed with Vite
Complete documentation is located here.
A REST API implementing each of the 30 example applications from the official OpenAI API Documentation using a modularized Terraform approach. Leverages OpenAI's suite of AI models, including GPT-3.5, GPT-4, DALL·E, Whisper, Embeddings, and Moderation.
- OpenAI API library for Python. LangChain-enabled API endpoints where designated.
- Pydantic-based, CI/CD-friendly Settings configuration class that consistently and automatically manages Python Lambda initializations from multiple sources, including bash environment variables, .env and terraform.tfvars files (a sketch follows this feature list).
- CloudWatch logging
- Fully automated and parameterized Terraform build. Usually builds your infrastructure in less than a minute.
- Secure: uses AWS role-based security and custom IAM policies. Best-practice handling of secrets and sensitive data in all environments (dev, test, CI/CD, prod). Proxy-based API that hides your OpenAI API calls and credentials. Runs on HTTPS with an AWS-managed SSL/TLS certificate.
- Excellent documentation
- AWS serverless implementation. Free or nearly free in most cases
- Deploy to a custom domain name
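To make the Settings and proxy ideas above concrete, here is a minimal sketch, assuming pydantic v2 with the pydantic-settings package and the openai v1.x client; the field names, default model, and handler shape are illustrative assumptions, not the repository's actual code:

```python
import json

from openai import OpenAI
from pydantic_settings import BaseSettings, SettingsConfigDict  # assumes pydantic v2


class Settings(BaseSettings):
    """Resolve configuration from the environment and, for local runs, a .env file.

    By default, environment variables (for example those set on the Lambda by
    Terraform from terraform.tfvars) take precedence over values read from .env.
    """

    model_config = SettingsConfigDict(env_file=".env", env_file_encoding="utf-8")

    openai_api_key: str                      # maps to OPENAI_API_KEY
    default_model: str = "gpt-3.5-turbo"     # placeholder default


settings = Settings()                        # evaluated once per Lambda container
client = OpenAI(api_key=settings.openai_api_key)


def handler(event, context):
    """Minimal proxy-style endpoint: the browser never sees the OpenAI credentials."""
    body = json.loads(event.get("body") or "{}")
    completion = client.chat.completions.create(
        model=settings.default_model,
        messages=[{"role": "user", "content": body.get("input", "")}],
    )
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"reply": completion.choices[0].message.content}),
    }
```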
Requirements:
- git. Pre-installed on Linux and macOS.
- make. Pre-installed on Linux and macOS.
- AWS account
- AWS Command Line Interface
- Terraform. If you're new to Terraform then see Getting Started With AWS and Terraform
- OpenAI platform API key. If you're new to OpenAI API then see How to Get an OpenAI API Key
- Python 3.11: for creating the virtual environment used to build the AWS Lambda Layer, and used locally by pre-commit linters and code formatters.
- NodeJS: used with NPM for the local ReactJS developer environment, and for configuring/testing Semantic Release.
- Docker Compose: used by an automated Terraform process to create the AWS Lambda Layer for OpenAI and LangChain.
Optional requirements:
- Google Maps API key. This is used by the OpenAI API Function Calling code example, "get_current_weather()".
- Pinecone API key. This is used for the OpenAI API Embedding examples (a sketch follows this list).
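For context on how these optional keys come into play, here is a minimal sketch pairing OpenAI embeddings with Pinecone, assuming the openai v1.x and pinecone v3+ Python clients; the index name, model, and sample text are placeholders, not the repository's configuration:

```python
import os

from openai import OpenAI
from pinecone import Pinecone  # assumes the pinecone v3+ client

openai_client = OpenAI()                               # reads OPENAI_API_KEY
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("example-index")                      # placeholder index name

# Embed a document and store its vector in Pinecone.
text = "OpenAI embeddings turn text into vectors for semantic search."
embedding = openai_client.embeddings.create(
    model="text-embedding-ada-002",
    input=text,
).data[0].embedding

index.upsert(vectors=[{"id": "doc-1", "values": embedding, "metadata": {"text": text}}])

# Embed a query and retrieve the most semantically similar documents.
query_vector = openai_client.embeddings.create(
    model="text-embedding-ada-002",
    input="How do I search text by meaning?",
).data[0].embedding

matches = index.query(vector=query_vector, top_k=3, include_metadata=True)
print(matches)
```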
Detailed documentation for each endpoint is available here: Documentation
To get community support, go to the official Issues Page for this project.
This project demonstrates a wide variety of coding best practices for managing mission-critical, cloud-based microservices in a team environment, most notably its adherence to the 12-Factor Methodology. Please see Code Management Best Practices for additional details.
We want to make this project more accessible to students and learners as an instructional tool without adding undue code review workload for anyone with merge authority for the project. To this end, we've also added several pre-commit code linting and code style enforcement tools, as well as automated procedures for version maintenance of package dependencies, pull request evaluations, and semantic releases.
We welcome contributions! There are a variety of ways for you to get involved, regardless of your background. In addition to pull requests, this project would benefit from contributors focused on documentation and how-to video content creation, testing, community engagement, and stewards to help us ensure that we comply with evolving standards for the ethical use of AI.
For developers, please see:
- the Developer Setup Guide
- and these commit comment guidelines 😬😬😬 for managing CI rules for automated semantic releases.
You can also contact Lawrence McDaniel directly.