Performance Settings
This guide explains how to optimize the performance of the Acode AI CLI Assistant Plugin for your specific device and usage patterns.
Performance settings allow you to balance functionality with resource consumption, ensuring the plugin works efficiently on your device while providing the best possible experience.
Control how frequently the plugin performs real-time analysis (a sketch of the debounce logic follows this list):
- High Sensitivity: 1-second debounce (immediate feedback)
- Medium Sensitivity: 5-second debounce (balanced approach)
- Low Sensitivity: 10-second debounce (resource conservation)
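As a rough sketch, the sensitivity levels translate into debounce delays before analysis runs. The `scheduleAnalysis` helper and millisecond values below are illustrative, not the plugin's actual code:

```typescript
// Illustrative only: how sensitivity levels could map to debounce delays.
type Sensitivity = "high" | "medium" | "low";

const DEBOUNCE_MS: Record<Sensitivity, number> = {
  high: 1_000,   // 1-second debounce (immediate feedback)
  medium: 5_000, // 5-second debounce (balanced approach)
  low: 10_000,   // 10-second debounce (resource conservation)
};

let pending: ReturnType<typeof setTimeout> | undefined;

// Re-schedules analysis each time the editor content changes; analysis
// only fires once typing has paused for the configured delay.
function scheduleAnalysis(sensitivity: Sensitivity, analyze: () => void): void {
  if (pending !== undefined) clearTimeout(pending);
  pending = setTimeout(analyze, DEBOUNCE_MS[sensitivity]);
}
```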
Adjust how the plugin caches AI responses (see the sketch after this list):
- Short Cache: 1-minute expiration (most current results)
- Medium Cache: 5-minute expiration (balanced performance)
- Long Cache: 30-minute expiration (maximum resource savings)
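A minimal sketch of a time-to-live response cache, assuming hypothetical class and field names:

```typescript
// Illustrative TTL cache for AI responses; names and durations are hypothetical.
class ResponseCache {
  private entries = new Map<string, { value: string; expiresAt: number }>();

  constructor(private ttlMs: number) {} // e.g. 60_000, 300_000, or 1_800_000

  get(prompt: string): string | undefined {
    const entry = this.entries.get(prompt);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(prompt); // expired: drop and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(prompt: string, value: string): void {
    this.entries.set(prompt, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```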
The plugin intelligently filters what content is sent to AI providers (see the sketch after this list):
- Context Limiting: Only sends relevant portions of your code
- File Size Checking: Avoids sending extremely large files
- Smart Selection: Focuses on selected code when available
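The filtering could look roughly like the sketch below; the character limit and the `buildContext` helper are assumptions for illustration:

```typescript
// Illustrative content filter: prefer the selection, cap very large files.
const MAX_CONTEXT_CHARS = 8_000; // hypothetical limit, not the plugin's real value

function buildContext(fileText: string, selection?: string): string {
  // Smart selection: focus on selected code when available.
  if (selection && selection.trim().length > 0) return selection;
  // File size checking: avoid sending extremely large files in full.
  if (fileText.length > MAX_CONTEXT_CHARS) {
    return fileText.slice(0, MAX_CONTEXT_CHARS);
  }
  return fileText;
}
```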
Monitor your token consumption (a tracking sketch follows this list):
- Session Tokens: Current session usage
- Daily Tokens: 24-hour usage cycle
- Total Tokens: All-time token consumption
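A sketch of how these counters might be kept, with hypothetical field names and a rolling 24-hour reset:

```typescript
// Illustrative token usage tracker; resets the daily counter every 24 hours.
interface TokenUsage {
  session: number; // current session usage
  daily: number;   // rolling 24-hour usage
  total: number;   // all-time consumption
}

const usage: TokenUsage = { session: 0, daily: 0, total: 0 };
let dailyResetAt = Date.now() + 24 * 60 * 60 * 1000;

function recordTokens(count: number): void {
  if (Date.now() > dailyResetAt) {
    usage.daily = 0;
    dailyResetAt = Date.now() + 24 * 60 * 60 * 1000;
  }
  usage.session += count;
  usage.daily += count;
  usage.total += count;
}
```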
The plugin automatically adjusts its behavior when the battery is low (see the sketch after this list):
- Reduced Animations: Disables non-essential visual effects
- Extended Debounce: Increases the wait time between analysis passes
- Limited Features: Temporarily disables resource-intensive features
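In a WebView, low battery can be detected with the standard Battery Status API (`navigator.getBattery`); the 20% threshold and the settings object below are assumptions, not the plugin's actual implementation:

```typescript
// Illustrative low-battery handling via the Battery Status API
// (available in Chromium-based WebViews; not guaranteed everywhere).
const settings = { animations: true, debounceMs: 5_000, realTimeAnalysis: true };

async function applyBatterySavingsIfNeeded(): Promise<void> {
  const nav = navigator as any;
  if (typeof nav.getBattery !== "function") return; // API not available
  const battery = await nav.getBattery();
  if (!battery.charging && battery.level < 0.2) { // hypothetical 20% threshold
    settings.animations = false;        // reduced animations
    settings.debounceMs *= 2;           // extended debounce
    settings.realTimeAnalysis = false;  // limited features
  }
}
```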
You can manually configure battery-conscious behavior:
- Disable Real-Time Analysis: Turn off live code suggestions
- Reduce Animation Intensity: Minimize visual effects
- Limit History Storage: Reduce conversation history retention
Optimizations for phones and tablets:
- Memory Management: Efficient cleanup of unused resources
- Network Handling: Better management of intermittent connections
- UI Scaling: Appropriate sizing for touch interfaces
Settings for powerful hardware:
- Enhanced Features: Enable all plugin capabilities
- Frequent Analysis: Reduce debounce timing for immediate feedback
- Rich Animations: Enable all visual effects
Network request handling can also be tuned (a retry sketch follows this list):
- Retry Logic: Automatic retry for failed requests
- Timeout Settings: Configurable request timeout values
- Offline Mode: Limited functionality when offline
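A sketch of retry and timeout handling using `fetch` with an `AbortController`; the retry count, timeout, and backoff values are illustrative defaults, not the plugin's settings:

```typescript
// Illustrative retry-with-timeout wrapper around fetch.
async function requestWithRetry(
  url: string,
  body: unknown,
  retries = 3,
  timeoutMs = 15_000,
): Promise<Response> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const response = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
        signal: controller.signal, // enforces the timeout
      });
      if (response.ok) return response;
    } catch {
      // Network error or timeout: fall through and retry.
    } finally {
      clearTimeout(timer);
    }
    if (attempt < retries) {
      await new Promise((r) => setTimeout(r, 1_000 * attempt)); // simple backoff
    }
  }
  throw new Error("Request failed after all retries");
}
```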
Data transfer is optimized in several ways (a batching sketch follows this list):
- Compression: Minimize data transfer size
- Batching: Combine multiple requests when possible
- Prioritization: Send most important requests first
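A sketch of window-based batching, assuming a hypothetical `sendBatch` helper and window length:

```typescript
// Illustrative request batching: prompts arriving within a short window
// are combined into a single request instead of many.
const BATCH_WINDOW_MS = 500; // hypothetical window length
let batch: string[] = [];
let flushTimer: ReturnType<typeof setTimeout> | undefined;

function queuePrompt(prompt: string, sendBatch: (prompts: string[]) => void): void {
  batch.push(prompt);
  if (flushTimer === undefined) {
    flushTimer = setTimeout(() => {
      sendBatch(batch); // one combined request
      batch = [];
      flushTimer = undefined;
    }, BATCH_WINDOW_MS);
  }
}
```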
Control how conversation history is stored (see the sketch after this list):
- History Limit: Set maximum number of saved conversations
- Auto-Cleanup: Automatically remove old conversations
- Storage Location: Configure where history is saved
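A sketch of history limiting and auto-cleanup using `localStorage`; the key name and limit are hypothetical:

```typescript
// Illustrative conversation-history retention; names and limit are assumptions.
const HISTORY_KEY = "ai-assistant-history"; // hypothetical storage key
const HISTORY_LIMIT = 20;                   // hypothetical maximum conversations

function saveConversation(conversation: object): void {
  const raw = localStorage.getItem(HISTORY_KEY);
  const history: object[] = raw ? JSON.parse(raw) : [];
  history.push(conversation);
  // Auto-cleanup: drop the oldest entries beyond the configured limit.
  while (history.length > HISTORY_LIMIT) history.shift();
  localStorage.setItem(HISTORY_KEY, JSON.stringify(history));
}
```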
Manage local caching behavior:
- Cache Size: Limit total cache storage
- Cache Duration: Set how long items remain cached
- Selective Caching: Choose which operations to cache
Memory usage is managed through:
- Cleanup Intervals: How frequently to clean up memory
- History Retention: How many conversations to keep in memory
- Code Cache: How much code context to retain
Processing is prioritized so the interface stays responsive (a queue sketch follows this list):
- Background Processing: Lower priority for non-critical tasks
- UI Responsiveness: Higher priority for interface updates
- AI Request Queue: Manage order of AI requests
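A sketch of a two-level request queue that keeps UI-facing work ahead of background tasks; the names are illustrative:

```typescript
// Illustrative priority queue for pending AI requests.
type Priority = "ui" | "background";

interface QueuedRequest {
  priority: Priority;
  run: () => Promise<void>;
}

const queue: QueuedRequest[] = [];

function enqueue(request: QueuedRequest): void {
  queue.push(request);
  // UI-facing work stays ahead of background tasks.
  queue.sort((a, b) =>
    a.priority === b.priority ? 0 : a.priority === "ui" ? -1 : 1,
  );
}

async function drainQueue(): Promise<void> {
  while (queue.length > 0) {
    const next = queue.shift()!;
    await next.run();
  }
}
```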
The plugin tracks several performance indicators (a simple collector sketch follows this list):
- Response Time: Average AI response time
- Memory Usage: Current memory consumption
- Network Requests: Number of API calls made
- Cache Hits: Percentage of requests served from cache
- Session Statistics: Track performance during current session
- Error Rates: Monitor failed requests and issues
- Resource Consumption: Measure CPU and memory usage
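A sketch of a simple metrics collector for response time and cache hit rate, using hypothetical field names:

```typescript
// Illustrative performance-metrics collector.
const metrics = {
  responseTimesMs: [] as number[],
  networkRequests: 0,
  cacheHits: 0,
};

function recordRequest(durationMs: number, servedFromCache: boolean): void {
  metrics.networkRequests += 1;
  if (servedFromCache) metrics.cacheHits += 1;
  metrics.responseTimesMs.push(durationMs);
}

function averageResponseTime(): number {
  const times = metrics.responseTimesMs;
  return times.length === 0 ? 0 : times.reduce((a, b) => a + b, 0) / times.length;
}

function cacheHitRate(): number {
  return metrics.networkRequests === 0 ? 0 : metrics.cacheHits / metrics.networkRequests;
}
```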
If responses are slow:
- Check your internet connection speed
- Verify AI provider is functioning properly
- Adjust debounce timing in settings
- Clear cache to resolve potential issues
If memory usage is high:
- Reduce conversation history retention
- Limit cache size in settings
- Restart Acode to clear memory
- Disable real-time analysis if needed
If you see API errors:
- Check API key validity
- Verify provider quotas and limits
- Adjust retry logic settings
- Check network connectivity
If battery drain is excessive:
- Enable low battery mode
- Reduce animation intensity
- Increase debounce timing
- Limit background processing
To verify performance optimizations are working:
- Open the AI assistant
- Check settings menu for performance options
- Monitor response times and resource usage
- Verify battery-conscious features activate when needed
To conserve battery and resources:
- Use medium or low sensitivity settings
- Enable battery conservation features
- Limit conversation history retention
- Use local AI providers (Ollama) when possible
For frequent use with fast feedback:
- Use high sensitivity for immediate feedback
- Maintain longer cache durations for frequently asked questions
- Keep conversation history for reference
- Monitor token usage to stay within provider limits
For minimal resource usage:
- Disable real-time analysis
- Use longer debounce timing
- Limit cache storage
- Reduce animation effects
If you encounter performance issues:
- Check the Common Issues documentation
- Visit our GitHub Issues page
- Join our Discussions community for support