AI-Specific Issues
Troubleshooting guide for AI integration problems in Gonzo. This covers issues with OpenAI, LM Studio, Ollama, and other AI providers.
General AI Issues
AI Features Not Working
Symptom: Pressing `i` or `c` does nothing, or shows an "AI not configured" error.
Diagnosis & Solutions:
Verify API key is set
```shell
# Check if the key exists
echo $OPENAI_API_KEY
# Should show: sk-... or your provider's key format

# If empty, set it
export OPENAI_API_KEY="sk-your-actual-key-here"
```

Test API key validity
```shell
# For OpenAI
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
# Should return a list of models, not an error
```

Specify model explicitly
```shell
# Instead of auto-select
gonzo -f logs.log --ai-model="gpt-3.5-turbo"
```

Check API base URL
```shell
# Should be set for custom providers
echo $OPENAI_API_BASE

# Unset for OpenAI (it uses the default)
unset OPENAI_API_BASE
```
No Response from AI
Symptom: AI analysis appears to hang or never returns.
Causes & Solutions:
Network connectivity
API rate limits
Wait a minute and try again
Check your provider's rate limit status
Use a different model (some have higher limits)
Large log context
Very long logs may time out
Try analyzing smaller log entries
Increase timeout (provider dependent)
Provider service outage
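When analysis hangs, a probe with a hard timeout helps separate a dead network path from a slow model. A minimal sketch (OpenAI's endpoint shown; substitute your provider's base URL):

```shell
# Probe the provider endpoint with a hard timeout so a hang becomes visible
endpoint="https://api.openai.com/v1/models"
if curl -sS --max-time 10 "$endpoint" \
     -H "Authorization: Bearer ${OPENAI_API_KEY:-}" >/dev/null 2>&1; then
  reachable=yes
else
  reachable=no
fi
echo "endpoint reachable: $reachable"
```

If the probe succeeds quickly but Gonzo still hangs, the bottleneck is likely model inference (large context or an overloaded provider) rather than connectivity.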
Model Selection Modal Empty
Symptom: Pressing `m` shows an empty modal or "No models available".
Diagnosis & Solutions:
Verify AI service is configured
Test API endpoint
Provider-specific fixes:
See sections below for each provider
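To see what the modal should be listing, you can query the endpoint's model list directly. A sketch assuming the same `OPENAI_API_BASE`/`OPENAI_API_KEY` setup used elsewhere in this guide (for Ollama, use its native `/api/tags` path instead):

```shell
# Ask the configured endpoint for its model list; empty or error output
# here explains an empty model modal in Gonzo
base="${OPENAI_API_BASE:-https://api.openai.com}"
models_json="$(curl -sS --max-time 10 "$base/v1/models" \
  -H "Authorization: Bearer ${OPENAI_API_KEY:-}" 2>/dev/null || true)"
echo "${models_json:-no response - the model modal will be empty too}"
```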
AI Analysis Returns Errors
Symptom: Error messages when attempting AI analysis.
Common Error Messages:
"Invalid API key"
Key is incorrect or expired
Regenerate key from provider dashboard
Ensure no extra spaces in key
"Model not found"
Specified model doesn't exist or you don't have access
Use auto-select instead: remove the `--ai-model` flag
Check available models: press `m` in Gonzo
"Rate limit exceeded"
You've hit API rate limits
Wait and try again
Upgrade your provider plan
Switch to local model (Ollama/LM Studio)
"Context length exceeded"
Log entry is too long for model
Try analyzing a shorter log
Use a model with a larger context window (e.g., gpt-4 instead of gpt-3.5-turbo)
OpenAI Issues
Authentication Failed
Symptom: "Incorrect API key provided" or 401 errors.
Solutions:
Verify key format
Check for whitespace
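Stray whitespace often sneaks in when a key is pasted. A small sketch that flags it (the helper name is illustrative, not part of Gonzo):

```shell
# check_key: report whether a key contains whitespace (spaces, tabs, newlines)
check_key() {
  trimmed="$(printf '%s' "$1" | tr -d '[:space:]')"
  if [ "${#1}" -eq "${#trimmed}" ]; then
    echo clean
  else
    echo whitespace
  fi
}

check_key "${OPENAI_API_KEY:-}"
```

If it prints `whitespace`, re-export the key without the extra characters.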
Generate new key
Go to https://platform.openai.com/api-keys
Create new key
Update environment variable
Verify account status
Check billing at https://platform.openai.com/account/billing
Ensure you have credits or valid payment method
Model Access Denied
Symptom: "You don't have access to model gpt-4" or similar.
Solutions:
Use available model
Check model availability
GPT-4 access
GPT-4 requires separate approval
Check https://platform.openai.com/account/limits
Use gpt-3.5-turbo as alternative
Rate Limit Errors
Symptom: "Rate limit exceeded" or 429 errors.
Solutions:
Wait and retry
Limits reset after time period
Wait 1-2 minutes
Use lower tier model
Upgrade plan
Free tier has low limits
Pay-as-you-go has higher limits
Check https://platform.openai.com/account/limits
Switch to local model
Use Ollama or LM Studio (no rate limits)
See sections below
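To sidestep cloud rate limits entirely, you can point Gonzo at a local Ollama instance. A sketch assuming, as earlier sections do, that the base URL comes from `OPENAI_API_BASE` and that `llama3` is already pulled:

```shell
# Ollama's base URL takes no /v1 suffix (see the Ollama section below)
export OPENAI_API_BASE="http://localhost:11434"
# Then run Gonzo against the local model:
# gonzo -f logs.log --ai-model="llama3"
```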
LM Studio Issues
Cannot Connect to LM Studio
Symptom: Connection refused or timeout errors.
Diagnosis & Solutions:
Verify LM Studio is running
Open LM Studio application
Ensure a model is loaded
Check server is started (green indicator)
Check URL format (CRITICAL)
Test server is responding
Check port number
Firewall issues
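The checks above can be scripted; note the `/v1` suffix, which LM Studio's OpenAI-compatible server requires (port 1234 is LM Studio's default):

```shell
# The /v1 suffix is required for LM Studio's OpenAI-compatible endpoint
lmstudio_base="http://localhost:1234/v1"
if curl -sS --max-time 5 "$lmstudio_base/models" >/dev/null 2>&1; then
  echo "LM Studio server is responding"
else
  echo "no response on port 1234 - start the server in LM Studio"
fi
```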
LM Studio Model Not Loading
Symptom: Server starts but model doesn't load.
Solutions:
Check model is downloaded
In LM Studio, verify model is in "My Models"
Download if missing
Insufficient RAM
Large models need significant RAM
Try smaller model variant
Close other applications
Restart LM Studio
Quit completely
Reopen and load model fresh
Wrong Model Selected in Gonzo
Symptom: Gonzo uses wrong LM Studio model.
Solutions:
Specify exact model name
Load only one model in LM Studio
Unload other models to avoid confusion
Load desired model only
Use model selection in Gonzo
Start Gonzo
Press `m` to select a model interactively
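Putting the above together, a sketch of pinning LM Studio explicitly (the model name is an example; use exactly what LM Studio shows in "My Models"):

```shell
export OPENAI_API_BASE="http://localhost:1234/v1"   # LM Studio: /v1 required
# Pin the model by its exact name as displayed in LM Studio:
# gonzo -f logs.log --ai-model="llama-3.2-3b-instruct"
```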
Ollama Issues
Ollama Service Not Running
Symptom: Connection refused to localhost:11434.
Solutions:
Start Ollama service
Verify service is running
Check for port conflicts
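A quick way to run these checks (11434 is Ollama's default port; `/api/tags` is Ollama's native model-listing endpoint):

```shell
# If nothing answers, start the server (it normally runs as a background service):
# ollama serve &

if curl -sS --max-time 5 http://localhost:11434/api/tags >/dev/null 2>&1; then
  ollama_up=yes
else
  ollama_up=no
fi
echo "ollama responding on 11434: $ollama_up"
```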
Model Not Found
Symptom: "Model 'llama3' not found" or similar.
Solutions:
List installed models
Pull missing model
Use exact model name
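For example, using the `ollama` CLI (guarded so the snippet is a no-op where it isn't installed):

```shell
if command -v ollama >/dev/null 2>&1; then
  have_ollama=yes
  # See exactly which model names are installed
  ollama list 2>/dev/null || echo "ollama installed but not responding"
  # Fetch a missing model (large download):
  # ollama pull llama3
else
  have_ollama=no
  echo "ollama not installed"
fi
```

The name Gonzo uses must match `ollama list` exactly, including any tag such as `llama3:8b`.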
Wrong Ollama URL Format
Symptom: Errors about invalid endpoint or /v1 path.
CRITICAL: Ollama URL format is different from OpenAI/LM Studio.
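Side by side, assuming (as earlier sections do) that Gonzo reads the base URL from `OPENAI_API_BASE`:

```shell
# Correct for Ollama: bare host and port, no /v1
export OPENAI_API_BASE="http://localhost:11434"

# Wrong for Ollama with Gonzo:
# export OPENAI_API_BASE="http://localhost:11434/v1"

# LM Studio, by contrast, DOES need the /v1 suffix:
# export OPENAI_API_BASE="http://localhost:1234/v1"
```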
Ollama API Timeouts
Symptom: Analysis hangs or times out with Ollama.
Solutions:
Check model size vs RAM
Use smaller model variant
Reduce concurrent requests
Only analyze one log at a time
Wait for previous analysis to complete
Check GPU utilization
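`ollama ps` shows where loaded models are running (guarded in case the CLI isn't installed):

```shell
if command -v ollama >/dev/null 2>&1; then
  have_ollama=yes
  # The PROCESSOR column shows the GPU/CPU split for each loaded model
  ollama ps 2>/dev/null || echo "ollama installed but not responding"
else
  have_ollama=no
  echo "ollama not installed"
fi
```

A model that reports 100% CPU will be far slower than one offloaded to the GPU; switching to a smaller variant often lets it fit in VRAM.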
Custom Provider Issues
Custom API Endpoint Not Working
Symptom: Errors connecting to custom OpenAI-compatible API.
Solutions:
Verify endpoint URL
Check authentication method
Verify API compatibility
Must be OpenAI-compatible API
Should support /v1/chat/completions endpoint
Test with known working example
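A minimal compatibility probe against the required `/v1/chat/completions` endpoint. The host, key variable, and model name below are placeholders; substitute your provider's values:

```shell
# Placeholder endpoint and key - substitute your provider's values
base="https://your-api.example.com/v1"
resp="$(curl -sS --max-time 10 "$base/chat/completions" \
  -H "Authorization: Bearer ${YOUR_API_KEY:-}" \
  -H "Content-Type: application/json" \
  -d '{"model":"your-model","messages":[{"role":"user","content":"ping"}]}' \
  2>/dev/null || true)"
echo "${resp:-no response - check the URL, auth header, and /v1 path}"
```

A compatible server returns a JSON chat completion; anything else (HTML, a 404, an empty body) means the endpoint is not OpenAI-compatible as configured.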
SSL/TLS Certificate Errors
Symptom: Certificate verification failed errors.
Solutions:
For local development only
Install proper certificates
Use HTTP for local services
Performance Issues
AI Analysis Very Slow
Symptom: AI analysis takes very long to complete.
Causes & Solutions:
Using large model
Network latency
Use local model (Ollama/LM Studio)
Check internet speed
Large log context
Analyzing very long logs takes time
Break into smaller chunks
Local model on CPU
Local models run slowly on CPU-only machines
Use GPU if available
Use smaller model variants
AI Consumes Too Much Memory
Symptom: High memory usage when using AI features.
Solutions:
Use cloud API instead of local
Use smaller local model
Close other applications
Local LLMs need substantial RAM
Close unnecessary programs
Debugging AI Issues
Enable Verbose Logging
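Gonzo's exact debug flag may vary by version, so check `--help` first; capturing stderr while reproducing the problem is also useful when filing a bug. A guarded sketch:

```shell
if command -v gonzo >/dev/null 2>&1; then
  have_gonzo=yes
  # Look for a verbose/debug option in this build
  gonzo --help 2>&1 | grep -iE 'verbose|debug' || echo "no verbose/debug flag listed"
else
  have_gonzo=no
  echo "gonzo not on PATH"
fi

# Capture stderr while reproducing the issue:
# gonzo -f logs.log 2> gonzo-debug.log
```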
Test AI Provider Independently
Before using with Gonzo, verify your AI setup works:
OpenAI:
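A minimal check against the standard models endpoint (needs a valid `OPENAI_API_KEY`):

```shell
# Prints the JSON model list on success, or an API error body
resp="$(curl -sS --max-time 10 https://api.openai.com/v1/models \
  -H "Authorization: Bearer ${OPENAI_API_KEY:-}" 2>/dev/null || true)"
echo "${resp:-no response from api.openai.com}"
```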
LM Studio:
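Assuming the default port 1234 and the required `/v1` suffix:

```shell
resp="$(curl -sS --max-time 5 http://localhost:1234/v1/models 2>/dev/null || true)"
echo "${resp:-LM Studio not reachable on port 1234}"
```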
Ollama:
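Assuming the default port 11434 and Ollama's native `/api/tags` endpoint (no `/v1`):

```shell
resp="$(curl -sS --max-time 5 http://localhost:11434/api/tags 2>/dev/null || true)"
echo "${resp:-Ollama not reachable on port 11434}"
```

If all three checks fail, fix the provider setup first; Gonzo cannot work around an unreachable backend.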
Quick Reference: Provider URLs
| Provider | Base URL | `/v1` suffix | Default port |
| --- | --- | --- | --- |
| OpenAI | `https://api.openai.com` | ✅ Yes (auto) | 443 (HTTPS) |
| LM Studio | `http://localhost:1234` | ✅ Yes (must add) | 1234 |
| Ollama | `http://localhost:11434` | ❌ No | 11434 |
| Custom | (varies) | ⚠️ Usually yes | (varies) |
Getting Help
Provide This Info When Reporting AI Issues
Related Resources
Common Issues - General troubleshooting
AI Setup Guide - Initial configuration
AI Providers Guide - Provider-specific setup
GitHub Issues - Report bugs
Never share your actual API keys when reporting issues. Use placeholders like sk-... or <redacted>.