DemoLivery AI - Package Shipping Assistant

A conversational AI assistant for package shipping and tracking, built with an OpenAI-compatible LLM, Gradio, and Python. This project demonstrates how to build an AI agent with function-calling capabilities for a real-world use case.

🚀 Features

  • Route Information: Find available shipping destinations and routes
  • Price Calculation: Calculate shipping costs based on weight and services
  • Package Tracking: Track packages using tracking numbers
  • Estimated Arrival: Get delivery date estimates based on departure dates
  • Service Options: Three service types:
    • Pick from home: Home pickup service
    • Drop in home: Service center drop-off
    • Express: Priority shipping with enhanced tracking

📋 Prerequisites

  • Python 3.12 or higher
  • uv package manager
  • Ollama running locally with the llama3.2 model

Installing Ollama

  1. Download and install Ollama from https://ollama.ai/
  2. Pull the llama3.2 model:
    ollama pull llama3.2
  3. Start the Ollama server:
    ollama serve
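
Optionally, you can verify that the server is reachable and that llama3.2 has been pulled before moving on. This quick check is not part of the project; it only assumes Ollama's default port (11434) and its standard /api/tags endpoint:

  # Stand-alone check using only the Python standard library.
  import json
  import urllib.request

  # Ollama lists the locally available models at /api/tags.
  with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
      models = [m["name"] for m in json.load(resp)["models"]]

  print("llama3.2 available:", any(name.startswith("llama3.2") for name in models))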

🛠️ Installation

  1. Clone the repository (or navigate to the project directory):

    cd package_shipping_assintant
  2. Install dependencies using uv:

    uv sync

    This will:

    • Create a virtual environment
    • Install all project dependencies
    • Generate a lock file if needed

🎯 Running the Application

  1. Ensure Ollama is running:

    ollama serve
  2. Run the application:

    uv run python main.py

    Or if you want to activate the virtual environment first:

    source .venv/bin/activate  # On macOS/Linux
    # or
    .venv\Scripts\activate  # On Windows
    
    python main.py
  3. Open your browser to the URL shown in the terminal (typically http://127.0.0.1:7860)

  4. Start chatting with the DemoLivery AI assistant about your shipping needs!
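
Under the hood, main.py serves the chat through a Gradio ChatInterface. The snippet below is only a minimal sketch of that pattern; the placeholder chat function stands in for the project's real handler, which forwards messages to llama3.2 and dispatches tool calls:

  import gradio as gr

  def chat(message, history):
      # Placeholder reply; the real handler calls the LLM and the shipping tools.
      return f"You asked about: {message}"

  # ChatInterface serves a chat UI, by default on http://127.0.0.1:7860
  gr.ChatInterface(fn=chat, title="DemoLivery AI").launch()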

📁 Project Structure

package_shipping_assintant/
├── main.py              # Main application entry point with Gradio interface
├── data.py              # Shipping destinations, pricing, and tracking data
├── functions.py         # Business logic functions (routing, pricing, tracking)
├── llm_tools.py         # LLM tool definitions and configurations
├── pyproject.toml       # Project configuration and dependencies
├── README.md            # This file
└── docs/
    └── TUTORIAL.md      # Detailed technical tutorial
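
To give a feel for the split between files: functions.py holds plain Python business logic that the LLM ends up calling, while llm_tools.py describes those functions to the model. The sketch below is illustrative only; the names, rates, and surcharges are invented and the real code may differ:

  # Hypothetical example of a business-logic function as it might appear in
  # functions.py -- all values here are made up for illustration.
  SERVICE_SURCHARGE = {"pick_from_home": 5.0, "drop_in_home": 0.0, "express": 15.0}

  def calculate_price(weight_lbs: float, service: str = "drop_in_home") -> float:
      """Return a demo shipping price based on weight and service type."""
      base = 10.0 + 2.5 * weight_lbs  # made-up demo rate
      return round(base + SERVICE_SURCHARGE.get(service, 0.0), 2)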

🔧 Technology Stack

  • OpenAI: Python client library for interacting with the LLM
  • Gradio: Web UI framework for the chat interface
  • Ollama: Local LLM server running llama3.2
  • dateparser: Natural language date parsing
  • python-dateutil: Date utility functions
  • uv: Fast Python package manager
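
Because Ollama exposes an OpenAI-compatible endpoint, the standard OpenAI Python client can talk to the local llama3.2 model, and dateparser handles phrases like "tomorrow". A minimal sketch of that wiring (not the project's actual code; it assumes Ollama's default port and accepts any placeholder API key):

  import dateparser
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
      api_key="ollama",                      # any non-empty string works locally
  )

  response = client.chat.completions.create(
      model="llama3.2",
      messages=[{"role": "user", "content": "Say hello in one sentence."}],
  )
  print(response.choices[0].message.content)

  # dateparser turns natural-language dates into datetime objects.
  print(dateparser.parse("tomorrow"))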

📚 Documentation

For a detailed explanation of how the system works, see docs/TUTORIAL.md.

🎓 Learning Objectives

This project demonstrates:

  • Function Calling: How to make LLMs call custom functions
  • Tool Use: Defining and implementing LLM tools
  • Prompt Engineering: Creating effective system prompts for AI agents
  • Data-Driven Design: Structuring business logic with Python functions
  • Web UI Development: Creating interactive chat interfaces with Gradio
  • Local LLM Integration: Using Ollama for privacy-focused AI applications
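
As a rough illustration of the function-calling objective, the condensed loop below shows the general pattern: describe a tool to the model, let it request a call, run the matching Python function, and feed the result back. The identifiers (track_package, AVAILABLE_TOOLS) are illustrative, not the project's actual names:

  import json
  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

  def track_package(tracking_number: str) -> str:
      return f"Package {tracking_number} is in transit."  # stand-in for functions.py

  AVAILABLE_TOOLS = [{
      "type": "function",
      "function": {
          "name": "track_package",
          "description": "Track a package by its tracking number.",
          "parameters": {
              "type": "object",
              "properties": {"tracking_number": {"type": "string"}},
              "required": ["tracking_number"],
          },
      },
  }]

  messages = [{"role": "user", "content": "Track my package TRK123456789"}]
  reply = client.chat.completions.create(
      model="llama3.2", messages=messages, tools=AVAILABLE_TOOLS
  ).choices[0].message

  if reply.tool_calls:
      messages.append(reply)  # keep the assistant's tool request in the history
      for call in reply.tool_calls:
          args = json.loads(call.function.arguments)
          result = track_package(**args)  # dispatch to the matching function
          messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
      final = client.chat.completions.create(model="llama3.2", messages=messages)
      print(final.choices[0].message.content)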

🤝 Usage Examples

Try asking the assistant:

  • "What destinations can I ship to from the United States?"
  • "How much does it cost to ship a 10 lb package from New York to Toronto?"
  • "Track my package TRK123456789"
  • "When will a package shipped tomorrow from US to Canada arrive?"
  • "I want to schedule a pick from home service"

📝 Notes

  • This is a demo application with simulated data
  • The tracking numbers are hardcoded examples
  • Pricing and routes are for demonstration purposes only
  • Ollama must be running locally for the application to work

🔍 Troubleshooting

Issue: "Connection refused" or "Failed to connect to Ollama"

  • Solution: Make sure Ollama is running with ollama serve

Issue: "Model llama3.2 not found"

  • Solution: Run ollama pull llama3.2 to download the model

Issue: Dependencies not installing

  • Solution: Make sure you have uv installed: curl -LsSf https://astral.sh/uv/install.sh | sh

📄 License

This project is for educational purposes.
