Banner

πŸš€ Universal Ollama Optimizer

Developed by JTGSYSTEMS.COM | JointTechnologyGroup.com

A professional bash script for launching and optimizing any Ollama AI model with intelligent profiles, system monitoring, and automated configuration.

License: MIT | Platform: Ollama | Language: Bash

πŸ†• Latest Updates (September 2025)

Added 5 Critical Missing Models Based on Community Research:

  • ⭐ GLM-4 - Ranks 3rd overall on hardcore benchmarks, beats Llama 3 8B
  • ⭐ Magicoder - OSS-Instruct trained coding specialist with 75K synthetic instruction data
  • ⭐ Gemma3n - Multimodal model optimized for everyday devices (phones, tablets, laptops)
  • ⭐ Granite 3.3 - IBM's improved models with 128K context (2B and 8B variants)
  • ⭐ Gemma3:270M - Ultra-compact 270M model with 0.75% battery usage on mobile devices

Enhanced Menu Organization:

  • Expanded model selection from 50+ to 55+ latest 2025 models
  • Added new performance benchmarks and descriptions
  • Updated all optimization profiles and system recommendations

✨ Features

  • 🎯 Universal Model Support - Works with any Ollama model (llama3.2, mistral, gemma2, etc.)
  • βš™οΈ Smart Optimization Profiles - 6 pre-configured profiles: Balanced, Technical, Creative, Code, Reasoning, Roleplay
  • πŸ“Š System Resource Monitoring - Real-time GPU, RAM, and disk space monitoring
  • πŸš€ Auto-Download & Validation - Intelligent model downloading with space checking
  • 🎨 Professional Interface - Clean, colorful terminal UI with progress indicators
  • ⚑ Configuration Management - File-based config with runtime overrides
  • πŸ”§ Custom Parameters - Full manual parameter control and Modelfile creation

πŸ“‹ Requirements

  • Linux (Ubuntu 20.04+, other distros compatible)
  • Bash 5.0+ (pre-installed on most systems)
  • Ollama (Download here)
  • 8GB+ RAM recommended
  • GPU optional but recommended (NVIDIA with CUDA support)

πŸš€ Quick Start

One-Command Install & Run

# Download and run
curl -fsSL https://raw.githubusercontent.com/your-username/universal-ollama-optimizer/main/universal-ollama-optimizer.sh -o universal-ollama-optimizer.sh
chmod +x universal-ollama-optimizer.sh
./universal-ollama-optimizer.sh

Manual Installation

# Clone repository
git clone https://github.com/your-username/universal-ollama-optimizer.git
cd universal-ollama-optimizer

# Make executable
chmod +x universal-ollama-optimizer.sh

# Run
./universal-ollama-optimizer.sh

πŸ“– Usage

Interactive Mode (Default)

./universal-ollama-optimizer.sh
  1. Select Model - Choose from available models or enter new model name
  2. Choose Profile - Select an optimization profile (1-6) or an advanced option (7-9)
  3. Launch Model - Model starts with optimized settings

Example Session

╔════════════════════════════════════════════╗
β•‘         Universal Ollama Optimizer         β•‘
β•‘    JTGSYSTEMS.COM | JointTechnologyGroup   β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

βœ“ Ollama service is running

Available Local Models:
  β€’ llama3.2:latest (4.7GB)
  β€’ mistral:7b (4.1GB)
  β€’ gemma2:9b (5.4GB)

Enter model name: llama3.2:latest

Model Information:
  β€’ Model: llama3.2:latest
  β€’ Size: 4.7GB
  β€’ Parameters: 3.2B

System Status:
  β€’ GPU Memory: 16380 MB
  β€’ System RAM: 32GB
  β€’ Available Disk: 150GB

Optimization Profiles:
  1) Balanced          - General purpose (temp: 0.5)
  2) Technical/Factual - Precise answers (temp: 0.2)
  3) Creative Writing  - Imaginative content (temp: 1.0)
  4) Code Generation   - Programming tasks (temp: 0.15)
  5) Reasoning/Logic   - Problem solving (temp: 0.3)
  6) Roleplay/Chat     - Conversational (temp: 0.8)

Select profile [1-6]: 1

Starting llama3.2:latest with Balanced profile...

βš™οΈ Configuration

Config File Location

~/.config/universal-ollama-optimizer/config.conf

Example Configuration

# Default profile (balanced, technical, creative, code, reasoning, roleplay)
DEFAULT_PROFILE="balanced"

# Auto-start Ollama service if not running
AUTO_START_OLLAMA=true

# System monitoring
SHOW_SYSTEM_INFO=true
SHOW_GPU_INFO=true

# Download settings
DOWNLOAD_TIMEOUT=1800
MIN_DISK_SPACE_GB=5
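On startup the script can merge this file with built-in defaults. A minimal sketch of such a loader, assuming the variable names from the example above (the actual script's loading logic may differ):

```shell
# Hypothetical config loader: set defaults first, then let the user's
# config.conf override them if it exists.
CONFIG_FILE="${HOME}/.config/universal-ollama-optimizer/config.conf"

DEFAULT_PROFILE="balanced"
AUTO_START_OLLAMA=true
MIN_DISK_SPACE_GB=5

# shellcheck disable=SC1090
[ -f "$CONFIG_FILE" ] && . "$CONFIG_FILE"

echo "profile=$DEFAULT_PROFILE min_disk=${MIN_DISK_SPACE_GB}GB"
```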

🎯 Optimization Profiles

Profile     Temperature  Top-P  Top-K  Best For
Balanced    0.5          0.85   30     General use, Q&A
Technical   0.2          0.8    20     Documentation, facts
Creative    1.0          0.95   50     Stories, brainstorming
Code        0.15         0.7    15     Programming, debugging
Reasoning   0.3          0.75   25     Logic, analysis
Roleplay    0.8          0.9    40     Character chat
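The table above can also be read programmatically. A minimal bash sketch mirroring those values (the `profile_params` function name is illustrative, not part of the script):

```shell
# Map a profile name to its sampling parameters (values match the table above).
profile_params() {
  case "$1" in
    balanced)  echo "temperature=0.5 top_p=0.85 top_k=30" ;;
    technical) echo "temperature=0.2 top_p=0.8 top_k=20" ;;
    creative)  echo "temperature=1.0 top_p=0.95 top_k=50" ;;
    code)      echo "temperature=0.15 top_p=0.7 top_k=15" ;;
    reasoning) echo "temperature=0.3 top_p=0.75 top_k=25" ;;
    roleplay)  echo "temperature=0.8 top_p=0.9 top_k=40" ;;
    *)         return 1 ;;  # unknown profile
  esac
}

profile_params code   # -> temperature=0.15 top_p=0.7 top_k=15
```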

πŸ› οΈ Advanced Features

Custom Parameters

Choose option 7 for manual parameter configuration:

  • Temperature (0.0-2.0)
  • Top-P (0.0-1.0)
  • Top-K (1-100)
  • Context length
  • Max tokens per response
  • Repeat penalty
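Manually entered values benefit from range validation before being applied. A hedged sketch of such a check (the `in_range` helper is hypothetical; awk handles the floating-point comparison that `[` cannot):

```shell
# in_range VALUE MIN MAX - succeed if MIN <= VALUE <= MAX (floats allowed).
in_range() {
  awk -v v="$1" -v lo="$2" -v hi="$3" 'BEGIN { exit !(v >= lo && v <= hi) }'
}

in_range 0.7 0.0 2.0 && echo "temperature 0.7 is valid"
in_range 1.5 0.0 1.0 || echo "top-p 1.5 is out of range"
```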

Modelfile Creation

Choose option 8 to create and save custom model configurations:

# Creates permanent optimized model variants
# Example: llama3.2-coding, mistral-creative
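For illustration, here is what such a variant might look like using standard Ollama Modelfile directives (the model name, file name, and system prompt are examples, not the script's output):

```shell
# Write a Modelfile for a coding-tuned variant of llama3.2.
cat > Modelfile.coding <<'EOF'
FROM llama3.2:latest
PARAMETER temperature 0.15
PARAMETER top_p 0.7
PARAMETER top_k 15
SYSTEM You are a concise coding assistant.
EOF

# Register the variant with Ollama (requires the service to be running):
# ollama create llama3.2-coding -f Modelfile.coding
```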

Runtime Commands

Once model is running, use these commands:

/set parameter temperature 0.7    # Adjust parameters
/set system "You are a coding assistant"  # Change system prompt
/show parameters                  # View current settings
/save my-conversation            # Save session
/load my-conversation            # Load session

🚨 Troubleshooting

Common Issues

Ollama Not Found

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
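To verify the install, you can probe the Ollama server, which answers on port 11434 by default (the `ollama_up` helper is illustrative):

```shell
# Succeeds only if the Ollama HTTP API responds.
ollama_up() { curl -fsS http://localhost:11434/api/version >/dev/null 2>&1; }

if ollama_up; then
  echo "Ollama service is running"
else
  echo "Ollama not reachable; start it with: ollama serve" >&2
fi
```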

Permission Denied

# Make script executable
chmod +x universal-ollama-optimizer.sh

Model Download Fails

# Check internet connection and disk space
df -h
ping ollama.ai
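The script's pre-download space check can be approximated like this (the `free_gb` helper is hypothetical; `MIN_DISK_SPACE_GB` matches the config example above):

```shell
# Report free space in whole GB on the filesystem holding the given path.
free_gb() { df -Pk "${1:-.}" | awk 'NR==2 { printf "%d", $4 / 1024 / 1024 }'; }

MIN_DISK_SPACE_GB=5
if [ "$(free_gb "$HOME")" -lt "$MIN_DISK_SPACE_GB" ]; then
  echo "Not enough free disk space to download the model" >&2
fi
```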

GPU Not Detected

# Check NVIDIA drivers
nvidia-smi
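A sketch of graceful GPU detection with a CPU fallback, using only standard `nvidia-smi` query flags:

```shell
# Print GPU name and VRAM if an NVIDIA GPU is present; otherwise note the fallback.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "No NVIDIA GPU detected; Ollama will fall back to CPU inference"
fi
```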

πŸ“ Project Structure

universal-ollama-optimizer/
β”œβ”€β”€ universal-ollama-optimizer.sh    # Main script
β”œβ”€β”€ README.md                        # This file
β”œβ”€β”€ LICENSE                          # MIT License
└── .github/
    β”œβ”€β”€ workflows/
    β”‚   └── test.yml                 # CI/CD tests
    β”œβ”€β”€ ISSUE_TEMPLATE/              # Issue templates
    └── PULL_REQUEST_TEMPLATE.md     # PR template

🀝 Contributing

We welcome contributions! Please see our Contributing Guidelines.

Development Setup

# Fork and clone
git clone https://github.com/your-username/universal-ollama-optimizer.git
cd universal-ollama-optimizer

# Test the script
./universal-ollama-optimizer.sh

# Run with debug mode
bash -x universal-ollama-optimizer.sh

Reporting Issues

  • Use the issue tracker
  • Include OS version, Ollama version, and error messages
  • Provide steps to reproduce

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Jesus Christ - Our Lord and Saviour, for all gifts and abilities
  • Ollama Team - For creating an excellent local AI platform
  • Community Contributors - For feedback and improvements
  • JTGSYSTEMS.COM - For development and maintenance

πŸ“ˆ Star History

Star History Chart

πŸ“ž Support


πŸ€– Recommended Ollama Models (September 2025)

Based on latest community recommendations and performance benchmarks

πŸ† Top-Tier Models (September 2025)

🧠 Best Overall Performance

  • llama3.3:70b - Meta's flagship 2025 model, rivals GPT-4 performance locally
  • glm4:latest ⭐ NEW - Ranks 3rd overall, beats Llama 3 8B in benchmarks
  • llama3.1:8b - Community favorite, best balance of performance and efficiency
  • llama3.1:70b - High-performance for complex reasoning and enterprise use
  • deepseek-r1 - Powerhouse for deep logical reasoning and analysis

πŸ’» Premier Coding Models

  • deepseek-coder:33b - #1 coding model, excels at complex programming tasks
  • magicoder:latest ⭐ NEW - OSS-Instruct trained specialist, 75K synthetic data
  • qwen3-coder:30b - Alibaba's latest 2025 coding model with major improvements
  • codellama:34b - Meta's specialized coding model with excellent context understanding
  • qwen2.5-coder:32b - Previous Alibaba model with solid code generation

⚑ Resource-Efficient Champions

  • granite3.3:8b ⭐ NEW - IBM's improved model with 128K context, rivals Llama 3.1
  • mistral:7b-instruct - Community-recommended for beginners, excellent performance/resource ratio
  • phi4:14b - Microsoft's 2025 state-of-the-art efficiency model
  • granite3.3:2b ⭐ NEW - IBM's efficient enterprise model for edge deployment
  • llama3.2:3b - Compact Llama for lightweight deployments
  • gemma3:270m ⭐ NEW - Ultra-compact 270M model, 0.75% battery usage on mobile
  • gemma2:9b - Google's efficient model, great for general tasks

🎨 Creative & Multimodal

  • gemma3n:latest ⭐ NEW - Multimodal optimized for everyday devices (phones, tablets)
  • llava:latest - Leading vision model for image analysis and VQA
  • qwen-vl - Advanced multimodal model for document and image processing
  • gemma2:27b - Excellent for creative writing and content generation

πŸ“Š Performance Matrix (September 2025)

Use Case                 Top Model             Alternative          RAM Required  Best Profile
General Chat             llama3.3:70b          glm4:latest ⭐        64GB / 9GB    Balanced
Code Development         deepseek-coder:33b    magicoder:latest ⭐   32GB / 7GB    Code
Reasoning Tasks          deepseek-r1           glm4:latest ⭐        16GB / 9GB    Reasoning
Creative Writing         gemma2:27b            gemma3n:latest ⭐     32GB / 8GB    Creative
Resource-Limited         granite3.3:2b ⭐      gemma3:270m ⭐        2GB / 300MB   Balanced
Vision/Multimodal        llava:latest          gemma3n:latest ⭐     16GB / 8GB    Technical
Enterprise/128K Context  granite3.3:8b ⭐      llama3.1:8b          8GB           Technical
Edge/Mobile              gemma3:270m ⭐        granite3.3:2b ⭐      300MB / 2GB   Balanced
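The RAM column above can drive an automatic model suggestion. An illustrative sketch (the thresholds and the `pick_model` helper are assumptions, not the script's actual logic):

```shell
# Suggest a model tier for a given amount of system RAM in GB.
pick_model() {
  if   [ "$1" -ge 64 ]; then echo "llama3.3:70b"
  elif [ "$1" -ge 16 ]; then echo "glm4:latest"
  elif [ "$1" -ge 8 ];  then echo "granite3.3:8b"
  else                       echo "gemma3:270m"
  fi
}

# Read total RAM from /proc/meminfo (Linux) and print a suggestion.
total_ram_gb=$(awk '/MemTotal/ { printf "%d", $2 / 1024 / 1024 }' /proc/meminfo)
pick_model "$total_ram_gb"
```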

πŸš€ Quick Download Commands (September 2025)

# Most recommended overall (2025 flagship)
ollama pull llama3.3:70b-instruct

# NEW: High-performance alternative (ranks 3rd overall)
ollama pull glm4:latest

# NEW: Specialized coding assistant with OSS-Instruct training
ollama pull magicoder:latest

# Premier coding powerhouse
ollama pull deepseek-coder:33b

# NEW: Enterprise model with 128K context
ollama pull granite3.3:8b

# NEW: Ultra-efficient for mobile/edge (270M parameters)
ollama pull gemma3:270m

# NEW: Multimodal for everyday devices
ollama pull gemma3n:latest

# Advanced reasoning powerhouse
ollama pull deepseek-r1

# Vision and image analysis
ollama pull llava:latest

# Best for beginners/limited hardware
ollama pull mistral:7b-instruct

πŸ’‘ 2025 Community Insights

  • Llama 3.3 has become the gold standard for local deployment
  • DeepSeek models dominate coding and reasoning benchmarks
  • Mistral 7B remains the go-to recommendation for newcomers
  • Vision models like LLaVA are gaining massive adoption
  • Over 1,700 models are now available in the Ollama ecosystem

Note: Model availability and performance may vary. Check ollama.com/library for the latest models.

πŸ” SEO Keywords & Search Terms

Primary Keywords

ollama optimizer, ollama launcher, universal ollama, ollama bash script, ollama automation, ollama profiles, ollama configuration, ollama setup, local AI, AI model launcher

Technical Keywords

ollama cli, ollama command line, ollama parameters, ollama temperature, ollama top-p, ollama top-k, bash automation, linux ollama, model optimization, AI model tuning, LLM parameters

Use Case Keywords

ollama for developers, code generation optimizer, creative writing AI, AI assistant launcher, local chatbot, offline AI, private AI, self-hosted LLM, enterprise AI solution

Long-tail Keywords

how to optimize ollama models, best ollama configuration, ollama performance tuning, local AI model management, automated ollama setup, ollama bash automation script

Repository: universal-ollama-optimizer | Developer: JTGSYSTEMS.COM | Technology: Bash, Linux, Ollama, AI

SEO Keyword Cloud

ollama optimizer bash linux terminal cli launcher profiles tuning optimization performance monitoring gpu cpu memory benchmarking scheduling logging configuration modelfile inference deployment reasoning coding creative roleplay balanced technical custom parameters temperature topp topk context streaming prompts chat assistant selfhosted privacy enterprise workflow toolkit scripts orchestration automation analytics roadmap integration support
