A simple starter project for experimenting with the OpenAI API. This repository includes basic examples in both JavaScript (Node.js) and Python. To run them you will need:
- An OpenAI API key (get one at OpenAI's website)
- Node.js or Python installed on your system
To get started:

- Clone this repository
- Create a `.env` file based on `.env.example` and add your OpenAI API key (see the example below)
- Choose your preferred language and follow the setup instructions below
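The `.env` file only needs to hold your API key. Assuming `.env.example` uses the standard variable name that the OpenAI SDKs read by default, it looks like this:

```bash
# .env — keep this file out of version control
# Variable name assumed to match .env.example
OPENAI_API_KEY=your-api-key-here
```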
For JavaScript (Node.js):

```bash
# Install dependencies
npm install

# Run basic examples
node js/chat-completion.js
node js/image-generation.js
node js/embeddings.js

# Run advanced examples
node js/assistants-api.js
node js/chat-response-formats.js
node js/chat-with-memory.js
```
For Python:

```bash
# Create a virtual environment (optional but recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run basic examples
python python/chat_completion.py
python python/image_generation.py
python python/embeddings.py

# Run advanced examples
python python/assistants_api.py
python python/chat_response_formats.py
python python/chat_with_memory.py
```
The examples cover:

- Chat completions using GPT models (see the minimal sketch after this list)
- Image generation using DALL-E
- Text embeddings
- Assistants API: Creating assistants, threads, and running conversations with tools
- Chat Response Formats: Different ways to customize and format chat responses
- Chat with Memory: Interactive chat with conversation history management
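A minimal sketch of a basic chat completion with the official `openai` Python SDK (the model name and prompt are illustrative, and the repository's `python/chat_completion.py` may differ in detail):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask a GPT model a single question and print the reply
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Explain embeddings in one sentence."}],
)
print(response.choices[0].message.content)
```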
The Chat Response Formats examples showcase various ways to use the Chat Completions API:
- Standard Chat Responses: Basic completions with default settings
- JSON Mode: Forcing structured JSON output for parsing (see the sketch after this list)
- Multiple Completions: Generating multiple response variations
- Streaming: Receiving responses token by token in real time
- Function Calling: Having the model identify when to call functions and with what parameters
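For example, JSON mode constrains the model to emit valid JSON. A minimal sketch with the `openai` Python SDK (model name illustrative; note that the prompt must mention JSON or the API rejects the request):

```python
from openai import OpenAI

client = OpenAI()

# JSON mode: the response is guaranteed to be parseable JSON
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)  # e.g. {"city": "Paris", "country": "France"}
```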
The Chat with Memory examples demonstrate how to:
- Build an interactive command-line chat interface
- Maintain conversation history for contextual responses
- Implement simple conversation management (viewing history, clearing context)
- Handle token limits by pruning older messages
This provides a foundation for building chat applications where the model remembers previous parts of the conversation.
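A minimal sketch of that pattern with the `openai` Python SDK (the repository's `python/chat_with_memory.py` will differ in detail): keep a running message list, append each turn, and drop the oldest turns when the list grows too long.

```python
from openai import OpenAI

client = OpenAI()
MAX_TURNS = 20  # illustrative pruning threshold
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})

    # Send the whole history so the model sees prior turns
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,
    )
    reply = response.choices[0].message.content
    print("Assistant:", reply)
    history.append({"role": "assistant", "content": reply})

    # Crude token-limit handling: keep the system prompt, prune the oldest turns
    if len(history) > MAX_TURNS:
        history = [history[0]] + history[-(MAX_TURNS - 1):]
```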
The Assistants API examples demonstrate how to:
- Create an assistant with specific instructions and capabilities
- Create a thread for conversation
- Add user messages to the thread
- Run the assistant on the thread
- Retrieve and display the conversation
This is useful for building conversational AI applications with memory and specialized tools.
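A rough sketch of that flow using the `openai` Python SDK's beta Assistants interface (the assistant's name, instructions, and model are illustrative, and `python/assistants_api.py` may differ in detail):

```python
import time
from openai import OpenAI

client = OpenAI()

# 1. Create an assistant with instructions (and, optionally, tools)
assistant = client.beta.assistants.create(
    name="Math Tutor",  # illustrative name
    instructions="Answer math questions step by step.",
    model="gpt-4o-mini",  # illustrative model name
)

# 2. Create a thread and add a user message to it
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is 12 * 17?"
)

# 3. Run the assistant on the thread and poll until it finishes
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# 4. Retrieve and display the conversation (listed newest-first by default)
messages = client.beta.threads.messages.list(thread_id=thread.id)
for message in reversed(messages.data):
    print(f"{message.role}: {message.content[0].text.value}")
```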
This project is licensed under the MIT License.