# AI-Specific Issues

Troubleshooting guide for AI integration problems in Gonzo. This covers issues with OpenAI, LM Studio, Ollama, and other AI providers.

## General AI Issues
### AI Features Not Working

**Symptom:** Pressing `i` or `c` does nothing, or shows an "AI not configured" error.

**Diagnosis & Solutions:**

1. **Verify the API key is set**

   ```bash
   # Check if key exists
   echo $OPENAI_API_KEY
   # Should show: sk-... or your provider's key format

   # If empty, set it
   export OPENAI_API_KEY="sk-your-actual-key-here"
   ```

2. **Test API key validity**

   ```bash
   # For OpenAI
   curl https://api.openai.com/v1/models \
     -H "Authorization: Bearer $OPENAI_API_KEY"
   # Should return a list of models, not an error
   ```

3. **Specify the model explicitly**

   ```bash
   # Instead of auto-select
   gonzo -f logs.log --ai-model="gpt-3.5-turbo"
   ```

4. **Check the API base URL**

   ```bash
   # Should be set for custom providers
   echo $OPENAI_API_BASE

   # Unset for OpenAI (uses the default)
   unset OPENAI_API_BASE
   ```
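The key checks above can be rolled into one quick sanity check before launching Gonzo. A minimal sketch (the `check_ai_env` helper is hypothetical, and the whitespace test assumes an OpenAI-style key in `OPENAI_API_KEY`; adapt it to your provider):

```bash
# Minimal, illustrative config check: catches the two most common
# key problems (unset key, stray whitespace) before anything else.
check_ai_env() {
  if [ -z "$OPENAI_API_KEY" ]; then
    echo "missing key"
  elif printf '%s' "$OPENAI_API_KEY" | grep -q '[[:space:]]'; then
    echo "key contains whitespace"
  else
    echo "key looks ok"
  fi
}

check_ai_env
```

Run it in the same shell you launch Gonzo from, since `export`s in another terminal won't apply.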
### No Response from AI

**Symptom:** AI analysis appears to hang or never returns.

**Causes & Solutions:**

1. **Network connectivity**

   ```bash
   # Test internet connection
   ping api.openai.com

   # Test API endpoint
   curl -I https://api.openai.com
   ```

2. **API rate limits**
   - Wait a minute and try again
   - Check your provider's rate limit status
   - Use a different model (some have higher limits)

3. **Large log context**
   - Very long logs may time out
   - Try analyzing smaller log entries
   - Increase the timeout (provider dependent)

4. **Provider service outage**
   - OpenAI: check https://status.openai.com
   - Other providers: check their status pages
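When timeouts come from very large inputs, one workaround is to split the file before analysis. A sketch (the 500-line chunk size and `chunk-` prefix are arbitrary choices, and `seq` stands in for a real log file):

```bash
# Split a big log into fixed-size pieces, then analyze each one separately
seq 1200 > logs.log              # stand-in for a real log file
split -l 500 logs.log chunk-     # produces chunk-aa, chunk-ab, chunk-ac
ls chunk-* | wc -l               # 1200 lines / 500 per chunk -> 3 files

# Then analyze each piece, e.g.:
#   for f in chunk-*; do gonzo -f "$f"; done
```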
### Model Selection Modal Empty

**Symptom:** Pressing `m` shows an empty modal or "No models available".

**Diagnosis & Solutions:**

1. **Verify the AI service is configured**

   ```bash
   # Check all AI-related env vars
   env | grep OPENAI
   # Should show:
   # OPENAI_API_KEY=...
   # OPENAI_API_BASE=... (if using a custom provider)
   ```

2. **Test the API endpoint**

   ```bash
   # For OpenAI
   curl https://api.openai.com/v1/models \
     -H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'

   # For LM Studio
   curl http://localhost:1234/v1/models | jq '.data[].id'

   # For Ollama
   curl http://localhost:11434/api/tags | jq '.models[].name'
   ```

3. **Provider-specific fixes:** see the sections below for each provider.
### AI Analysis Returns Errors

**Symptom:** Error messages when attempting AI analysis.

**Common Error Messages:**

**"Invalid API key"**
- The key is incorrect or expired
- Regenerate the key from your provider's dashboard
- Ensure there are no extra spaces in the key

**"Model not found"**
- The specified model doesn't exist or you don't have access to it
- Use auto-select instead: remove the `--ai-model` flag
- Check available models: press `m` in Gonzo

**"Rate limit exceeded"**
- You've hit API rate limits
- Wait and try again
- Upgrade your provider plan
- Switch to a local model (Ollama/LM Studio)

**"Context length exceeded"**
- The log entry is too long for the model
- Try analyzing a shorter log
- Use a model with a larger context window (e.g. gpt-4 instead of gpt-3.5)
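For "Context length exceeded", a common rule of thumb is roughly 4 characters per token for English text, which lets you estimate whether a log entry will fit before sending it. This is only an approximation (real tokenizers vary by model); the log line below is illustrative:

```bash
# Back-of-envelope token estimate: characters / 4
line='2024-01-01T00:00:00Z ERROR payment service timeout after 30s'
chars=$(printf '%s' "$line" | wc -c)
echo "approx tokens: $((chars / 4))"
```

Compare the estimate against your model's advertised context window before analyzing a very long entry.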
## OpenAI Issues

### Authentication Failed

**Symptom:** "Incorrect API key provided" or 401 errors.

**Solutions:**

1. **Verify the key format**

   ```bash
   # OpenAI keys start with sk-
   echo $OPENAI_API_KEY | grep -E '^sk-'
   ```

2. **Check for whitespace**

   ```bash
   # Trim any whitespace
   export OPENAI_API_KEY=$(echo $OPENAI_API_KEY | tr -d ' \t\n\r')
   ```

3. **Generate a new key**
   - Go to https://platform.openai.com/api-keys
   - Create a new key
   - Update the environment variable

4. **Verify account status**
   - Check billing at https://platform.openai.com/account/billing
   - Ensure you have credits or a valid payment method
### Model Access Denied

**Symptom:** "You don't have access to model gpt-4" or similar.

**Solutions:**

1. **Use an available model**

   ```bash
   # Try gpt-3.5-turbo (widely available)
   gonzo -f logs.log --ai-model="gpt-3.5-turbo"
   ```

2. **Check model availability**

   ```bash
   curl https://api.openai.com/v1/models \
     -H "Authorization: Bearer $OPENAI_API_KEY" \
     | jq '.data[].id'
   ```

3. **GPT-4 access**
   - GPT-4 requires separate approval
   - Check https://platform.openai.com/account/limits
   - Use gpt-3.5-turbo as an alternative
### Rate Limit Errors

**Symptom:** "Rate limit exceeded" or 429 errors.

**Solutions:**

1. **Wait and retry**
   - Limits reset after a time period
   - Wait 1-2 minutes

2. **Use a lower-tier model**

   ```bash
   # gpt-3.5-turbo has higher rate limits
   gonzo -f logs.log --ai-model="gpt-3.5-turbo"
   ```

3. **Upgrade your plan**
   - The free tier has low limits
   - Pay-as-you-go has higher limits
   - Check https://platform.openai.com/account/limits

4. **Switch to a local model**
   - Use Ollama or LM Studio (no rate limits)
   - See the sections below
## LM Studio Issues

### Cannot Connect to LM Studio

**Symptom:** Connection refused or timeout errors.

**Diagnosis & Solutions:**

1. **Verify LM Studio is running**
   - Open the LM Studio application
   - Ensure a model is loaded
   - Check the server is started (green indicator)

2. **Check the URL format (CRITICAL)**

   ```bash
   # ✅ CORRECT - must include /v1
   export OPENAI_API_BASE="http://localhost:1234/v1"

   # ❌ WRONG - missing /v1
   export OPENAI_API_BASE="http://localhost:1234"
   ```

3. **Test the server is responding**

   ```bash
   curl http://localhost:1234/v1/models
   # Should return JSON with the model list
   ```

4. **Check the port number**

   ```bash
   # Default is 1234; verify in LM Studio settings
   # If different, update the URL
   export OPENAI_API_BASE="http://localhost:PORT/v1"
   ```

5. **Firewall issues**

   ```bash
   # Ensure the port is open
   # macOS
   sudo lsof -i :1234

   # Linux
   sudo netstat -tulpn | grep 1234
   ```
### LM Studio Model Not Loading

**Symptom:** The server starts but the model doesn't load.

**Solutions:**

1. **Check the model is downloaded**
   - In LM Studio, verify the model appears under "My Models"
   - Download it if missing

2. **Insufficient RAM**
   - Large models need significant RAM
   - Try a smaller model variant
   - Close other applications

3. **Restart LM Studio**
   - Quit completely
   - Reopen and load the model fresh
### Wrong Model Selected in Gonzo

**Symptom:** Gonzo uses the wrong LM Studio model.

**Solutions:**

1. **Specify the exact model name**

   ```bash
   # List available models
   curl http://localhost:1234/v1/models | jq '.data[].id'

   # Use the exact name
   gonzo -f logs.log --ai-model="openai/gpt-oss-120b"
   ```

2. **Load only one model in LM Studio**
   - Unload other models to avoid confusion
   - Load only the desired model

3. **Use model selection in Gonzo**
   - Start Gonzo
   - Press `m` to select a model interactively
## Ollama Issues

### Ollama Service Not Running

**Symptom:** Connection refused to localhost:11434.

**Solutions:**

1. **Start the Ollama service**

   ```bash
   ollama serve

   # Or as a background service (Linux)
   systemctl start ollama

   # macOS: Ollama usually runs as an application
   ```

2. **Verify the service is running**

   ```bash
   curl http://localhost:11434/api/tags
   # Should return a list of models
   ```

3. **Check for port conflicts**

   ```bash
   # See what's using port 11434
   lsof -i :11434
   ```
### Model Not Found

**Symptom:** "Model 'llama3' not found" or similar.

**Solutions:**

1. **List installed models**

   ```bash
   ollama list
   ```

2. **Pull the missing model**

   ```bash
   # Pull a specific model
   ollama pull llama3

   # Or the one you need
   ollama pull mistral
   ollama pull gpt-oss:20b
   ```

3. **Use the exact model name**

   ```bash
   # Include the tag if needed
   gonzo -f logs.log --ai-model="llama3:8b"
   ```
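The name passed to `--ai-model` must match the first column of `ollama list` exactly, tag included. A sketch of extracting those names; the `list_models` helper and the sample output below are illustrative, not real command output:

```bash
# `ollama list` prints a header row, then one model per line;
# the first column is the exact name to use with --ai-model.
list_models() {
  awk 'NR > 1 { print $1 }'    # skip the header, keep the first column
}

# Captured-output stand-in for `ollama list` (illustrative values)
sample='NAME            ID              SIZE    MODIFIED
llama3:8b       365c0bd3c000    4.7 GB  2 days ago
mistral:latest  2ae6f6dd7a3d    4.1 GB  5 days ago'

printf '%s\n' "$sample" | list_models
```

Against a live daemon you would pipe the real command instead: `ollama list | awk 'NR > 1 { print $1 }'`.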
### Wrong Ollama URL Format

**Symptom:** Errors about an invalid endpoint or /v1 path.

**CRITICAL:** The Ollama URL format is different from OpenAI/LM Studio.

```bash
# ✅ CORRECT - NO /v1 suffix for Ollama
export OPENAI_API_BASE="http://localhost:11434"

# ❌ WRONG - don't add /v1
export OPENAI_API_BASE="http://localhost:11434/v1"
```
### Ollama API Timeouts

**Symptom:** Analysis hangs or times out with Ollama.

**Solutions:**

1. **Check model size vs RAM**

   ```bash
   # Large models need more RAM; check system memory
   free -h    # Linux
   vm_stat    # macOS
   ```

2. **Use a smaller model variant**

   ```bash
   # Instead of 70b, try 13b or 7b
   ollama pull llama3:8b
   gonzo -f logs.log --ai-model="llama3:8b"
   ```

3. **Reduce concurrent requests**
   - Only analyze one log at a time
   - Wait for the previous analysis to complete

4. **Check GPU utilization**

   ```bash
   # If using a GPU
   nvidia-smi   # NVIDIA GPU
   # You may need to configure Ollama for GPU use
   ```
## Custom Provider Issues

### Custom API Endpoint Not Working

**Symptom:** Errors connecting to a custom OpenAI-compatible API.

**Solutions:**

1. **Verify the endpoint URL**

   ```bash
   # Check the URL is correct and accessible
   curl $OPENAI_API_BASE/models
   ```

2. **Check the authentication method**

   ```bash
   # Most use a Bearer token
   curl $OPENAI_API_BASE/models \
     -H "Authorization: Bearer $OPENAI_API_KEY"
   ```

3. **Verify API compatibility**
   - Must be an OpenAI-compatible API
   - Should support the /v1/chat/completions endpoint

4. **Test with a known working example**

   ```bash
   # OPENAI_API_BASE already includes /v1 for most custom providers
   curl $OPENAI_API_BASE/chat/completions \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer $OPENAI_API_KEY" \
     -d '{
       "model": "gpt-3.5-turbo",
       "messages": [{"role": "user", "content": "Hello"}]
     }'
   ```
### SSL/TLS Certificate Errors

**Symptom:** Certificate verification failed errors.

**Solutions:**

1. **For local development only**

   ```bash
   # Disable certificate verification (NOT for production)
   export OPENAI_SKIP_VERIFY=true
   ```

2. **Install proper certificates**
   - Better: fix the certificate issue by installing the CA certificate for your provider

3. **Use HTTP for local services**

   ```bash
   # If running locally, use http instead of https
   export OPENAI_API_BASE="http://localhost:8080/v1"
   ```
## Performance Issues

### AI Analysis Very Slow

**Symptom:** AI analysis takes a very long time to complete.

**Causes & Solutions:**

1. **Using a large model**

   ```bash
   # Switch to a faster model
   gonzo -f logs.log --ai-model="gpt-3.5-turbo"  # Fast
   # Instead of gpt-4 (slower but better)
   ```

2. **Network latency**
   - Use a local model (Ollama/LM Studio)
   - Check your internet speed

3. **Large log context**
   - Analyzing very long logs takes time
   - Break them into smaller chunks

4. **Local model on CPU**
   - Local models are slow on CPU
   - Use a GPU if available
   - Use smaller model variants
### AI Consumes Too Much Memory

**Symptom:** High memory usage when using AI features.

**Solutions:**

1. **Use a cloud API instead of a local model**

   ```bash
   # Cloud APIs don't use local RAM
   export OPENAI_API_KEY="sk-..."
   unset OPENAI_API_BASE
   ```

2. **Use a smaller local model**

   ```bash
   # Instead of a 70b-parameter model
   ollama pull llama3:8b   # Much smaller
   ```

3. **Close other applications**
   - Local LLMs need substantial RAM
   - Close unnecessary programs
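As a rough sizing guide for picking a local model, a common assumption (not a Gonzo or Ollama guarantee) is about 0.5-1 GB of RAM per billion parameters for 4-8-bit quantized models:

```bash
# Back-of-envelope RAM estimate for a quantized local model
# (0.5-1 GB per billion parameters is an approximation for 4-8-bit quants)
params_b=8   # e.g. llama3:8b
echo "expect roughly $((params_b / 2))-${params_b} GB of RAM"
```

If the estimate exceeds your free memory, pick a smaller variant before blaming Gonzo for the usage.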
## Debugging AI Issues

### Enable Verbose Logging

```bash
# See detailed AI interactions
gonzo -v -f logs.log --ai-model="gpt-3.5-turbo" 2> ai-debug.log

# Check the debug output
cat ai-debug.log
```
### Test AI Provider Independently

Before using it with Gonzo, verify your AI setup works.

**OpenAI:**

```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

**LM Studio:**

```bash
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

**Ollama:**

```bash
curl http://localhost:11434/api/generate \
  -d '{
    "model": "llama3",
    "prompt": "Say hello"
  }'
```
## Quick Reference: Provider URLs

| Provider  | Base URL                | `/v1` suffix?    | Port        |
|-----------|-------------------------|------------------|-------------|
| OpenAI    | https://api.openai.com  | ✅ Yes (auto)     | 443 (HTTPS) |
| LM Studio | http://localhost:1234   | ✅ Yes (must add) | 1234        |
| Ollama    | http://localhost:11434  | ❌ No             | 11434       |
| Custom    | (varies)                | ⚠️ Usually yes    | (varies)    |
## Getting Help

### Provide This Info When Reporting AI Issues

```bash
# Provider info
echo "Provider: [OpenAI/LM Studio/Ollama/Other]"
echo "API Base: $OPENAI_API_BASE"
echo "Model: [model name]"

# Test connection
curl -I $OPENAI_API_BASE/models

# Gonzo version
gonzo --version

# Error message
# [paste complete error]
```
### Related Resources

- Common Issues - General troubleshooting
- AI Setup Guide - Initial configuration
- AI Providers Guide - Provider-specific setup
- GitHub Issues - Report bugs

Never share your actual API keys when reporting issues. Use placeholders like `sk-...` or `<redacted>`.
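Before pasting debug output into an issue, you can scrub anything that looks like an OpenAI-style key automatically. A sketch (the `redact` helper is hypothetical; extend the pattern for other providers' key formats):

```bash
# Replace anything matching an sk-... key with a placeholder
redact() {
  sed -E 's/sk-[A-Za-z0-9_-]+/<redacted>/g'
}

echo 'Authorization: Bearer sk-abc123XYZ' | redact
# -> Authorization: Bearer <redacted>
```

Pipe your collected diagnostics through it, e.g. `env | grep OPENAI | redact`.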