Developed by JTGSYSTEMS.COM | JointTechnologyGroup.com
A professional bash script for launching and optimizing any Ollama AI model with intelligent profiles, system monitoring, and automated configuration.
Added 5 critical missing models based on community research:
- GLM-4 - ranks 3rd overall on hardcore benchmarks, beats Llama 3 8B
- Magicoder - OSS-Instruct-trained coding specialist, built on 75K synthetic instruction samples
- Gemma3n - multimodal model optimized for everyday devices (phones, tablets, laptops)
- Granite 3.3 - IBM's improved models with 128K context (2B and 8B variants)
- Gemma3:270M - ultra-compact 270M model with 0.75% battery usage on mobile devices
Enhanced Menu Organization:
- Expanded model selection from 50+ to 55+ latest 2025 models
- Added new performance benchmarks and descriptions
- Updated all optimization profiles and system recommendations
- Universal Model Support - works with any Ollama model (llama3.2, mistral, gemma2, etc.)
- Smart Optimization Profiles - 6 pre-configured profiles: Balanced, Technical, Creative, Code, Reasoning, Roleplay
- System Resource Monitoring - real-time GPU, RAM, and disk space monitoring
- Auto-Download & Validation - intelligent model downloading with space checking
- Professional Interface - clean, colorful terminal UI with progress indicators
- Configuration Management - file-based config with runtime overrides
- Custom Parameters - full manual parameter control and Modelfile creation
- Linux (Ubuntu 20.04+, other distros compatible)
- Bash 5.0+ (pre-installed on most systems)
- Ollama (download from ollama.ai)
- 8GB+ RAM recommended
- GPU optional but recommended (NVIDIA with CUDA support)
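Before installing, the requirements above can be verified with a short preflight script. This is a hypothetical sketch for a Linux host; the helper names are illustrative and not part of the optimizer itself:

```shell
#!/usr/bin/env bash
# Hypothetical preflight sketch for the requirements listed above.

check_bash_version() {
  # Bash 5.0+ required; BASH_VERSINFO[0] is the major version.
  (( BASH_VERSINFO[0] >= 5 ))
}

check_ollama_installed() {
  command -v ollama >/dev/null 2>&1
}

check_ram_gb() {
  # Succeeds when total RAM (from /proc/meminfo) meets the minimum in GB.
  local min_gb=${1:-8} total_gb
  total_gb=$(awk '/MemTotal/ { printf "%d", $2 / 1024 / 1024 }' /proc/meminfo)
  (( total_gb >= min_gb ))
}

check_bash_version     || echo "Bash 5.0+ required (found ${BASH_VERSION})"
check_ollama_installed || echo "Ollama not found - see https://ollama.ai"
check_ram_gb 8         || echo "Less than 8GB RAM; large models may not fit"
```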
```shell
# Download and run
curl -fsSL https://raw.githubusercontent.com/your-username/universal-ollama-optimizer/main/universal-ollama-optimizer.sh -o universal-ollama-optimizer.sh
chmod +x universal-ollama-optimizer.sh
./universal-ollama-optimizer.sh
```

```shell
# Clone repository
git clone https://github.com/your-username/universal-ollama-optimizer.git
cd universal-ollama-optimizer

# Make executable
chmod +x universal-ollama-optimizer.sh

# Run
./universal-ollama-optimizer.sh
```

- Select Model - choose from available models or enter a new model name
- Choose Profile - Select optimization profile (1-9)
- Launch Model - Model starts with optimized settings
╔══════════════════════════════════════════════╗
║          Universal Ollama Optimizer          ║
║     JTGSYSTEMS.COM | JointTechnologyGroup    ║
╚══════════════════════════════════════════════╝
✓ Ollama service is running
Available Local Models:
  • llama3.2:latest (4.7GB)
  • mistral:7b (4.1GB)
  • gemma2:9b (5.4GB)

Enter model name: llama3.2:latest

Model Information:
  • Model: llama3.2:latest
  • Size: 4.7GB
  • Parameters: 3.2B

System Status:
  • GPU Memory: 16380 MB
  • System RAM: 32GB
  • Available Disk: 150GB
Optimization Profiles:
1) Balanced - General purpose (temp: 0.5)
2) Technical/Factual - Precise answers (temp: 0.2)
3) Creative Writing - Imaginative content (temp: 1.0)
4) Code Generation - Programming tasks (temp: 0.15)
5) Reasoning/Logic - Problem solving (temp: 0.3)
6) Roleplay/Chat - Conversational (temp: 0.8)
Select profile [1-6]: 1
Starting llama3.2:latest with Balanced profile...
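The System Status figures shown in the session above could be collected with standard tools; a minimal sketch, assuming an NVIDIA GPU and a Linux host (function names are illustrative, not the script's own):

```shell
# Hypothetical sketch of the system monitoring shown above.

gpu_memory_mb() {
  # Total VRAM in MB via nvidia-smi; prints N/A when no NVIDIA GPU is present.
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | head -n1
  else
    echo "N/A"
  fi
}

system_ram_gb() {
  awk '/MemTotal/ { printf "%d", $2 / 1024 / 1024 }' /proc/meminfo
}

available_disk_gb() {
  # Free space (GB) on the filesystem holding the Ollama model store.
  local dir="${OLLAMA_MODELS:-$HOME/.ollama}"
  [ -d "$dir" ] || dir="$HOME"
  df -BG --output=avail "$dir" | tail -n1 | tr -dc '0-9'
}

echo "GPU Memory: $(gpu_memory_mb) MB"
echo "System RAM: $(system_ram_gb)GB"
echo "Available Disk: $(available_disk_gb)GB"
```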
`~/.config/universal-ollama-optimizer/config.conf`

```shell
# Default profile (balanced, technical, creative, code, reasoning, roleplay)
DEFAULT_PROFILE="balanced"

# Auto-start Ollama service if not running
AUTO_START_OLLAMA=true

# System monitoring
SHOW_SYSTEM_INFO=true
SHOW_GPU_INFO=true

# Download settings
DOWNLOAD_TIMEOUT=1800
MIN_DISK_SPACE_GB=5
```

| Profile | Temperature | Top-P | Top-K | Best For |
|---|---|---|---|---|
| Balanced | 0.5 | 0.85 | 30 | General use, Q&A |
| Technical | 0.2 | 0.8 | 20 | Documentation, facts |
| Creative | 1.0 | 0.95 | 50 | Stories, brainstorming |
| Code | 0.15 | 0.7 | 15 | Programming, debugging |
| Reasoning | 0.3 | 0.75 | 25 | Logic, analysis |
| Roleplay | 0.8 | 0.9 | 40 | Character chat |
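Any profile in this table can also be baked into a permanent model variant with a Modelfile, which is what the Modelfile-creation option below automates. A hand-rolled sketch for the Code profile (the variant name is hypothetical, not one the script reserves):

```shell
# Hypothetical sketch: bake the Code profile into a permanent variant.
# "llama3.2-coding" is an illustrative name.
cat > Modelfile.code <<'EOF'
FROM llama3.2:latest
PARAMETER temperature 0.15
PARAMETER top_p 0.7
PARAMETER top_k 15
SYSTEM You are a precise programming assistant.
EOF

# Register the variant (skipped when Ollama is not installed).
if command -v ollama >/dev/null 2>&1; then
  ollama create llama3.2-coding -f Modelfile.code
fi
```

Once created, `ollama run llama3.2-coding` starts the model with these parameters applied every time.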
Choose option 7 for manual parameter configuration:
- Temperature (0.0-2.0)
- Top-P (0.0-1.0)
- Top-K (1-100)
- Context length
- Max tokens per response
- Repeat penalty
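The ranges above lend themselves to simple input validation before values are handed to Ollama; a hypothetical sketch (helper names are illustrative):

```shell
# Hypothetical validation sketch for the manual parameter ranges above.

in_range() {
  # in_range VALUE MIN MAX - succeeds when MIN <= VALUE <= MAX (floats OK).
  awk -v v="$1" -v lo="$2" -v hi="$3" 'BEGIN { exit !(v >= lo && v <= hi) }'
}

validate_temperature() { in_range "$1" 0.0 2.0; }
validate_top_p()       { in_range "$1" 0.0 1.0; }
validate_top_k()       { in_range "$1" 1   100; }

validate_temperature 0.7 && echo "temperature OK"
```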
Choose option 8 to create and save custom model configurations:
```shell
# Creates permanent optimized model variants
# Example: llama3.2-coding, mistral-creative
```

Once the model is running, use these commands:
```
/set parameter temperature 0.7            # Adjust parameters
/set system "You are a coding assistant"  # Change system prompt
/show parameters                          # View current settings
/save my-conversation                     # Save session
/load my-conversation                     # Load session
```

Ollama Not Found
```shell
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
```

Permission Denied
```shell
# Make script executable
chmod +x universal-ollama-optimizer.sh
```

Model Download Fails
```shell
# Check internet connection and disk space
df -h
ping -c 3 ollama.ai
```

GPU Not Detected
```shell
# Check NVIDIA drivers
nvidia-smi
```

```
universal-ollama-optimizer/
├── universal-ollama-optimizer.sh   # Main script
├── README.md                       # This file
├── LICENSE                         # MIT License
└── .github/
    ├── workflows/
    │   └── test.yml                # CI/CD tests
    ├── ISSUE_TEMPLATE/             # Issue templates
    └── PULL_REQUEST_TEMPLATE.md    # PR template
```
We welcome contributions! Please see our Contributing Guidelines.
```shell
# Fork and clone
git clone https://github.com/your-username/universal-ollama-optimizer.git
cd universal-ollama-optimizer

# Test the script
./universal-ollama-optimizer.sh

# Run with debug mode
bash -x universal-ollama-optimizer.sh
```

- Use the issue tracker
- Include OS version, Ollama version, and error messages
- Provide steps to reproduce
This project is licensed under the MIT License - see the LICENSE file for details.
- Jesus Christ - Our Lord and Saviour, for all gifts and abilities
- Ollama Team - For creating an excellent local AI platform
- Community Contributors - For feedback and improvements
- JTGSYSTEMS.COM - For development and maintenance
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Website: JTGSYSTEMS.COM
Based on latest community recommendations and performance benchmarks
- `llama3.3:70b` - Meta's flagship 2025 model, rivals GPT-4 performance locally
- `glm4:latest` ★ NEW - ranks 3rd overall, beats Llama 3 8B in benchmarks
- `llama3.1:8b` - community favorite, best balance of performance and efficiency
- `llama3.1:70b` - high-performance for complex reasoning and enterprise use
- `deepseek-r1` - powerhouse for deep logical reasoning and analysis

- `deepseek-coder:33b` - #1 coding model, excels at complex programming tasks
- `magicoder:latest` ★ NEW - OSS-Instruct-trained specialist, 75K synthetic instructions
- `qwen3-coder:30b` - Alibaba's latest 2025 coding model with major improvements
- `codellama:34b` - Meta's specialized coding model with excellent context understanding
- `qwen2.5-coder:32b` - previous Alibaba model with solid code generation

- `granite3.3:8b` ★ NEW - IBM's improved model with 128K context, rivals Llama 3.1
- `mistral:7b-instruct` - community-recommended for beginners, excellent performance/resource ratio
- `phi4:14b` - Microsoft's 2025 state-of-the-art efficiency model
- `granite3.3:2b` ★ NEW - IBM's efficient enterprise model for edge deployment
- `llama3.2:3b` - compact Llama for lightweight deployments
- `gemma3:270m` ★ NEW - ultra-compact 270M model, 0.75% battery usage on mobile
- `gemma2:9b` - Google's efficient model, great for general tasks

- `gemma3n:latest` ★ NEW - multimodal optimized for everyday devices (phones, tablets)
- `llava:latest` - leading vision model for image analysis and VQA
- `qwen-vl` - advanced multimodal model for document and image processing
- `gemma2:27b` - excellent for creative writing and content generation
| Use Case | Top Model | Alternative | RAM Required | Best Profile |
|---|---|---|---|---|
| General Chat | llama3.3:70b | glm4:latest ★ | 64GB / 9GB | Balanced |
| Code Development | deepseek-coder:33b | magicoder:latest ★ | 32GB / 7GB | Code |
| Reasoning Tasks | deepseek-r1 | glm4:latest ★ | 16GB / 9GB | Reasoning |
| Creative Writing | gemma2:27b | gemma3n:latest ★ | 32GB / 8GB | Creative |
| Resource-Limited | granite3.3:2b ★ | gemma3:270m ★ | 2GB / 300MB | Balanced |
| Vision/Multimodal | llava:latest | gemma3n:latest ★ | 16GB / 8GB | Technical |
| Enterprise/128K Context | granite3.3:8b ★ | llama3.1:8b | 8GB | Technical |
| Edge/Mobile | gemma3:270m ★ | granite3.3:2b ★ | 300MB / 2GB | Balanced |
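The RAM thresholds in the table can drive a simple picker for the General Chat column; a hypothetical sketch (the function is illustrative, not part of the script):

```shell
# Hypothetical helper mirroring the "RAM Required" column above:
# pick a chat model by available RAM in GB.
recommend_chat_model() {
  local ram_gb=$1
  if   (( ram_gb >= 64 )); then echo "llama3.3:70b"
  elif (( ram_gb >= 9  )); then echo "glm4:latest"
  elif (( ram_gb >= 2  )); then echo "granite3.3:2b"
  else                          echo "gemma3:270m"
  fi
}

recommend_chat_model 32   # -> glm4:latest
```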
```shell
# Most recommended overall (2025 flagship)
ollama pull llama3.3:70b-instruct

# NEW: High-performance alternative (ranks 3rd overall)
ollama pull glm4:latest

# NEW: Specialized coding assistant with OSS-Instruct training
ollama pull magicoder:latest

# Premier coding powerhouse
ollama pull deepseek-coder:33b

# NEW: Enterprise model with 128K context
ollama pull granite3.3:8b

# NEW: Ultra-efficient for mobile/edge (270M parameters)
ollama pull gemma3:270m

# NEW: Multimodal for everyday devices
ollama pull gemma3n:latest

# Advanced reasoning powerhouse
ollama pull deepseek-r1

# Vision and image analysis
ollama pull llava:latest

# Best for beginners/limited hardware
ollama pull mistral:7b-instruct
```

- Llama 3.3 has become the gold standard for local deployment
- DeepSeek models dominate coding and reasoning benchmarks
- Mistral 7B remains the go-to recommendation for newcomers
- Vision models like LLaVA are gaining massive adoption
- Over 1,700+ models now available in Ollama ecosystem
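The quick-start pulls above can also be batched, skipping models that are already present locally; a hypothetical sketch:

```shell
# Hypothetical batch-pull sketch; skips models already in `ollama list`.
pull_missing() {
  local model
  for model in "$@"; do
    if ollama list 2>/dev/null | awk 'NR > 1 { print $1 }' | grep -qx "$model"; then
      echo "already present: $model"
    else
      echo "pulling: $model"
      ollama pull "$model" || echo "pull failed: $model" >&2
    fi
  done
}

pull_missing glm4:latest magicoder:latest granite3.3:8b gemma3:270m
```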
Note: Model availability and performance may vary. Check ollama.com/library for the latest models.
ollama optimizer, ollama launcher, universal ollama, ollama bash script, ollama automation, ollama profiles, ollama configuration, ollama setup, local AI, AI model launcher
ollama cli, ollama command line, ollama parameters, ollama temperature, ollama top-p, ollama top-k, bash automation, linux ollama, model optimization, AI model tuning, LLM parameters
ollama for developers, code generation optimizer, creative writing AI, AI assistant launcher, local chatbot, offline AI, private AI, self-hosted LLM, enterprise AI solution
how to optimize ollama models, best ollama configuration, ollama performance tuning, local AI model management, automated ollama setup, ollama bash automation script
Repository: universal-ollama-optimizer | Developer: JTGSYSTEMS.COM | Technology: Bash, Linux, Ollama, AI
ollama optimizer bash linux terminal cli launcher profiles tuning optimization performance monitoring gpu cpu memory benchmarking scheduling logging configuration modelfile inference deployment reasoning coding creative roleplay balanced technical custom parameters temperature topp topk context streaming prompts chat assistant selfhosted privacy enterprise workflow toolkit scripts orchestration automation analytics roadmap integration support
