A conversational AI assistant for package shipping and tracking, built with an OpenAI-compatible LLM, Gradio, and Python. This project demonstrates how to build an AI agent with function-calling capabilities for a real-world use case.
- Route Information: Find available shipping destinations and routes
- Price Calculation: Calculate shipping costs based on weight and services
- Package Tracking: Track packages using tracking numbers
- Estimated Arrival: Get delivery date estimates based on departure dates
- Service Options: Three service types:
  - Pick from home: Home pickup service
  - Drop in home: Service center drop-off
  - Express: Priority shipping with enhanced tracking
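The price-calculation feature above can be sketched roughly as follows. The base rate, service names, and multipliers here are illustrative assumptions for this sketch, not the project's actual values (those live in `data.py`):

```python
# Illustrative pricing sketch -- all rates and multipliers below are
# hypothetical; the real values live in data.py.
BASE_RATE_PER_LB = 2.50  # assumed flat rate per pound

SERVICE_MULTIPLIERS = {
    "drop_in_home": 1.0,   # drop-off at a service center as the baseline
    "pick_from_home": 1.2, # home pickup adds a surcharge
    "express": 1.5,        # priority shipping costs extra
}

def calculate_price(weight_lb: float, service: str) -> float:
    """Return an estimated shipping cost for a given weight and service."""
    if weight_lb <= 0:
        raise ValueError("weight must be positive")
    multiplier = SERVICE_MULTIPLIERS.get(service)
    if multiplier is None:
        raise ValueError(f"unknown service: {service}")
    return round(weight_lb * BASE_RATE_PER_LB * multiplier, 2)
```

The same shape (a lookup table plus a small pure function) applies to the route and tracking features, which makes each one easy to expose to the LLM as a tool.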
- Download and install Ollama from https://ollama.ai/
- Pull the llama3.2 model: `ollama pull llama3.2`
- Start the Ollama server: `ollama serve`
- Clone the repository (or navigate to the project directory): `cd package_shipping_assintant`
- Install dependencies using uv: `uv sync`
This will:
- Create a virtual environment
- Install all project dependencies
- Generate a lock file if needed
- Ensure Ollama is running: `ollama serve`
- Run the application: `uv run python main.py`

  Or, to activate the virtual environment first:

      source .venv/bin/activate   # macOS/Linux (use .venv\Scripts\activate on Windows)
      python main.py
- Open your browser to the URL shown in the terminal (typically http://127.0.0.1:7860)
- Start chatting with the DemoLivery AI assistant about your shipping needs!
    package_shipping_assintant/
    ├── main.py           # Main application entry point with Gradio interface
    ├── data.py           # Shipping destinations, pricing, and tracking data
    ├── functions.py      # Business logic functions (routing, pricing, tracking)
    ├── llm_tools.py      # LLM tool definitions and configurations
    ├── pyproject.toml    # Project configuration and dependencies
    ├── README.md         # This file
    └── docs/
        └── TUTORIAL.md   # Detailed technical tutorial
- OpenAI: Client for interacting with LLM models
- Gradio: Web UI framework for the chat interface
- Ollama: Local LLM server running llama3.2
- dateparser: Natural language date parsing
- python-dateutil: Date utility functions
- uv: Fast Python package manager
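In the arrival-estimate feature, dateparser turns phrases like "tomorrow" into concrete dates, which are then combined with a per-route transit time. A minimal stdlib-only sketch of that second step (the routes and transit days below are made up for illustration; the real data lives in `data.py`):

```python
from datetime import date, timedelta

# Hypothetical transit times in days per (origin, destination) route --
# illustrative only; the project's actual routes live in data.py.
TRANSIT_DAYS = {
    ("United States", "Canada"): 3,
    ("United States", "Mexico"): 5,
}

def estimate_arrival(origin: str, destination: str, departure: date) -> date:
    """Add the route's transit time to the (already parsed) departure date."""
    days = TRANSIT_DAYS.get((origin, destination))
    if days is None:
        raise ValueError(f"no route from {origin} to {destination}")
    return departure + timedelta(days=days)
```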
For a detailed explanation of how the system works, see docs/TUTORIAL.md.
This project demonstrates:
- Function Calling: How to make LLMs call custom functions
- Tool Use: Defining and implementing LLM tools
- Prompt Engineering: Creating effective system prompts for AI agents
- Data-Driven Design: Structuring business logic with Python functions
- Web UI Development: Creating interactive chat interfaces with Gradio
- Local LLM Integration: Using Ollama for privacy-focused AI applications
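The function-calling pattern listed above boils down to two pieces: describe each Python function to the model as a JSON-schema tool, and dispatch the model's tool calls back to the real function. A minimal sketch of both halves (the tool name, parameters, and hardcoded statuses here are illustrative, not the project's actual definitions from `llm_tools.py`):

```python
import json

# A tool definition in the OpenAI tools format, describing a function the
# model may call. The name and schema here are illustrative assumptions.
TRACK_TOOL = {
    "type": "function",
    "function": {
        "name": "track_package",
        "description": "Look up the status of a package by tracking number.",
        "parameters": {
            "type": "object",
            "properties": {"tracking_number": {"type": "string"}},
            "required": ["tracking_number"],
        },
    },
}

def track_package(tracking_number: str) -> str:
    # Stand-in for the real lookup implemented in functions.py.
    statuses = {"TRK123456789": "In transit"}
    return statuses.get(tracking_number, "Not found")

# Registry used to route the model's tool calls to real Python functions.
TOOL_REGISTRY = {"track_package": track_package}

def dispatch(tool_name: str, arguments_json: str) -> str:
    """Run the function the model asked for, with its JSON-encoded arguments."""
    args = json.loads(arguments_json)
    return TOOL_REGISTRY[tool_name](**args)
```

In the chat loop, the model's response contains the tool name and an arguments string; `dispatch` executes it and the return value is sent back to the model as a tool message so it can phrase the answer for the user.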
Try asking the assistant:
- "What destinations can I ship to from the United States?"
- "How much does it cost to ship a 10 lb package from New York to Toronto?"
- "Track my package TRK123456789"
- "When will a package shipped tomorrow from US to Canada arrive?"
- "I want to schedule a pick from home service"
- This is a demo application with simulated data
- The tracking numbers are hardcoded examples
- Pricing and routes are for demonstration purposes only
- Ollama must be running locally for the application to work
Issue: "Connection refused" or "Failed to connect to Ollama"
- Solution: Make sure Ollama is running with `ollama serve`
Issue: "Model llama3.2 not found"
- Solution: Run `ollama pull llama3.2` to download the model
Issue: Dependencies not installing
- Solution: Make sure you have uv installed: `curl -LsSf https://astral.sh/uv/install.sh | sh`
This project is for educational purposes.