AI-Specific Issues

Troubleshooting guide for AI integration problems in Gonzo. This covers issues with OpenAI, LM Studio, Ollama, and other AI providers.

General AI Issues

AI Features Not Working

Symptom: Pressing i or c does nothing, or shows "AI not configured" error.

Diagnosis & Solutions:

  1. Verify API key is set

    # Check if key exists
    echo $OPENAI_API_KEY
    # Should show: sk-... or your provider's key format
    
    # If empty, set it
    export OPENAI_API_KEY="sk-your-actual-key-here"
  2. Test API key validity

    # For OpenAI
    curl https://api.openai.com/v1/models \
      -H "Authorization: Bearer $OPENAI_API_KEY"
    
    # Should return list of models, not an error
  3. Specify model explicitly

    # Instead of auto-select
    gonzo -f logs.log --ai-model="gpt-3.5-turbo"
  4. Check API base URL

    # Should be set for custom providers
    echo $OPENAI_API_BASE
    
    # Unset for OpenAI (uses default)
    unset OPENAI_API_BASE
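A key set with export only lasts for the current shell session, so a key that keeps "disappearing" usually just wasn't persisted. A minimal sketch, assuming zsh (use ~/.bashrc for bash; the key value is a placeholder):

```shell
# Persist the key across sessions by appending it to your shell profile
echo 'export OPENAI_API_KEY="sk-your-actual-key-here"' >> ~/.zshrc

# Reload the profile so the current shell picks it up
source ~/.zshrc
```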

No Response from AI

Symptom: AI analysis appears to hang or never returns.

Causes & Solutions:

  1. Network connectivity

    # Test internet connection
    ping api.openai.com
    
    # Test API endpoint
    curl -I https://api.openai.com
  2. API rate limits

    • Wait a minute and try again

    • Check your provider's rate limit status

    • Use a different model (some have higher limits)

  3. Large log context

    • Very long logs may time out

    • Try analyzing smaller log entries

    • Increase the timeout (provider-dependent)

  4. Provider service outage

    # Check status
    # OpenAI: https://status.openai.com
    # Provider-specific status pages
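Transient rate limits and network blips often clear on their own, so a manual retry with exponential backoff is a reasonable first workaround. A minimal sketch (the function name and attempt counts are illustrative, not part of Gonzo):

```shell
#!/bin/sh
# Retry a command with exponential backoff: waits 1s, 2s, 4s, ... between attempts
retry_with_backoff() {
  attempts=$1; shift
  delay=1
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0            # command succeeded
    fi
    echo "attempt $i failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))  # double the wait each time
    i=$((i + 1))
  done
  return 1                # all attempts exhausted
}

# Example: probe the API endpoint up to 3 times
# retry_with_backoff 3 curl -fsS https://api.openai.com/v1/models \
#   -H "Authorization: Bearer $OPENAI_API_KEY" > /dev/null
```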

Model Selection Modal Empty

Symptom: Pressing m shows an empty modal or "No models available".

Diagnosis & Solutions:

  1. Verify AI service is configured

    # Check all AI-related env vars
    env | grep OPENAI
    
    # Should show:
    # OPENAI_API_KEY=...
    # OPENAI_API_BASE=... (if using custom provider)
  2. Test API endpoint

    # For OpenAI
    curl https://api.openai.com/v1/models \
      -H "Authorization: Bearer $OPENAI_API_KEY" | jq '.data[].id'
    
    # For LM Studio
    curl http://localhost:1234/v1/models | jq '.data[].id'
    
    # For Ollama
    curl http://localhost:11434/api/tags | jq '.models[].name'
  3. Provider-specific fixes:

    • See sections below for each provider

AI Analysis Returns Errors

Symptom: Error messages when attempting AI analysis.

Common Error Messages:

  1. "Invalid API key"

    • Key is incorrect or expired

    • Regenerate key from provider dashboard

    • Ensure no extra spaces in key

  2. "Model not found"

    • Specified model doesn't exist or you don't have access

    • Use auto-select instead: remove --ai-model flag

    • Check available models: press m in Gonzo

  3. "Rate limit exceeded"

    • You've hit API rate limits

    • Wait and try again

    • Upgrade your provider plan

    • Switch to local model (Ollama/LM Studio)

  4. "Context length exceeded"

    • Log entry is too long for model

    • Try analyzing a shorter log

    • Use a model with larger context window (gpt-4 vs gpt-3.5)
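To gauge whether a log will fit in a model's context window, a common rule of thumb for English text is roughly 4 characters per token. A quick estimate (the 4:1 ratio is an approximation, not real tokenizer output):

```shell
#!/bin/sh
# Rough token estimate for a file: character count divided by 4
estimate_tokens() {
  chars=$(wc -c < "$1")
  echo $((chars / 4))
}

# Example: compare against your model's context window
# (gpt-3.5-turbo variants range from about 4k to 16k tokens)
# estimate_tokens logs.log
```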

OpenAI Issues

Authentication Failed

Symptom: "Incorrect API key provided" or 401 errors.

Solutions:

  1. Verify key format

    # OpenAI keys start with sk-
    echo $OPENAI_API_KEY | grep -E '^sk-'
  2. Check for whitespace

    # Trim any whitespace
    export OPENAI_API_KEY=$(echo $OPENAI_API_KEY | tr -d ' \t\n\r')
  3. Generate new key

    • Go to https://platform.openai.com/api-keys

    • Create new key

    • Update environment variable

  4. Verify account status

    • Check billing at https://platform.openai.com/account/billing

    • Ensure you have credits or valid payment method

Model Access Denied

Symptom: "You don't have access to model gpt-4" or similar.

Solutions:

  1. Use available model

    # Try gpt-3.5-turbo (widely available)
    gonzo -f logs.log --ai-model="gpt-3.5-turbo"
  2. Check model availability

    curl https://api.openai.com/v1/models \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      | jq '.data[].id'
  3. GPT-4 access

    • GPT-4 requires separate approval

    • Check https://platform.openai.com/account/limits

    • Use gpt-3.5-turbo as alternative

Rate Limit Errors

Symptom: "Rate limit exceeded" or 429 errors.

Solutions:

  1. Wait and retry

    • Limits reset after a set time window

    • Wait 1-2 minutes

  2. Use lower tier model

    # gpt-3.5-turbo has higher rate limits
    gonzo -f logs.log --ai-model="gpt-3.5-turbo"
  3. Upgrade plan

    • Free tier has low limits

    • Pay-as-you-go has higher limits

    • Check https://platform.openai.com/account/limits

  4. Switch to local model

    • Use Ollama or LM Studio (no rate limits)

    • See sections below
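Switching to a local provider is mostly an environment change. A sketch assuming Ollama on its default port (the model name is illustrative; note Ollama's base URL takes no /v1 suffix):

```shell
# Point Gonzo at a local Ollama instead of OpenAI
export OPENAI_API_BASE="http://localhost:11434"
gonzo -f logs.log --ai-model="llama3"
```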

LM Studio Issues

Cannot Connect to LM Studio

Symptom: Connection refused or timeout errors.

Diagnosis & Solutions:

  1. Verify LM Studio is running

    • Open LM Studio application

    • Ensure a model is loaded

    • Check server is started (green indicator)

  2. Check URL format (CRITICAL)

    # ✅ CORRECT - must include /v1
    export OPENAI_API_BASE="http://localhost:1234/v1"
    
    # ❌ WRONG - missing /v1
    export OPENAI_API_BASE="http://localhost:1234"
  3. Test server is responding

    curl http://localhost:1234/v1/models
    
    # Should return JSON with model list
  4. Check port number

    # Default is 1234, verify in LM Studio settings
    # If different, update URL
    export OPENAI_API_BASE="http://localhost:PORT/v1"
  5. Firewall issues

    # Ensure port is open
    # macOS
    sudo lsof -i :1234
    
    # Linux
    sudo netstat -tulpn | grep 1234

LM Studio Model Not Loading

Symptom: Server starts but model doesn't load.

Solutions:

  1. Check model is downloaded

    • In LM Studio, verify model is in "My Models"

    • Download if missing

  2. Insufficient RAM

    • Large models need significant RAM

    • Try smaller model variant

    • Close other applications

  3. Restart LM Studio

    • Quit completely

    • Reopen and load model fresh

Wrong Model Selected in Gonzo

Symptom: Gonzo uses wrong LM Studio model.

Solutions:

  1. Specify exact model name

    # List available models
    curl http://localhost:1234/v1/models | jq '.data[].id'
    
    # Use exact name
    gonzo -f logs.log --ai-model="openai/gpt-oss-120b"
  2. Load only one model in LM Studio

    • Unload other models to avoid confusion

    • Load desired model only

  3. Use model selection in Gonzo

    • Start Gonzo

    • Press m to select model interactively

Ollama Issues

Ollama Service Not Running

Symptom: Connection refused to localhost:11434.

Solutions:

  1. Start Ollama service

    ollama serve
    
    # Or as a background service (Linux)
    systemctl start ollama
    
    # On macOS, Ollama usually runs as a menu bar application
  2. Verify service is running

    curl http://localhost:11434/api/tags
    
    # Should return list of models
  3. Check for port conflicts

    # See what's using port 11434
    lsof -i :11434

Model Not Found

Symptom: "Model 'llama3' not found" or similar.

Solutions:

  1. List installed models

    ollama list
  2. Pull missing model

    # Pull specific model
    ollama pull llama3
    
    # Or the one you need
    ollama pull mistral
    ollama pull gpt-oss:20b
  3. Use exact model name

    # Include tag if needed
    gonzo -f logs.log --ai-model="llama3:8b"

Wrong Ollama URL Format

Symptom: Errors about invalid endpoint or /v1 path.

CRITICAL: Ollama URL format is different from OpenAI/LM Studio.

# ✅ CORRECT - NO /v1 suffix for Ollama
export OPENAI_API_BASE="http://localhost:11434"

# ❌ WRONG - don't add /v1
export OPENAI_API_BASE="http://localhost:11434/v1"

Ollama API Timeouts

Symptom: Analysis hangs or times out with Ollama.

Solutions:

  1. Check model size vs RAM

    # Large models need more RAM
    # Check system memory
    free -h  # Linux
    vm_stat  # macOS
  2. Use smaller model variant

    # Instead of 70b, try 13b or 7b
    ollama pull llama3:8b
    gonzo -f logs.log --ai-model="llama3:8b"
  3. Reduce concurrent requests

    • Only analyze one log at a time

    • Wait for previous analysis to complete

  4. Check GPU utilization

    # If using GPU
    nvidia-smi  # NVIDIA GPU
    
    # May need to configure Ollama for GPU

Custom Provider Issues

Custom API Endpoint Not Working

Symptom: Errors connecting to custom OpenAI-compatible API.

Solutions:

  1. Verify endpoint URL

    # Check URL is correct and accessible
    curl $OPENAI_API_BASE/models
  2. Check authentication method

    # Most use Bearer token
    curl $OPENAI_API_BASE/models \
      -H "Authorization: Bearer $OPENAI_API_KEY"
  3. Verify API compatibility

    • Must be OpenAI-compatible API

    • Should support /v1/chat/completions endpoint

  4. Test with known working example

    # Note: $OPENAI_API_BASE already includes /v1 for most custom
    # providers, so don't repeat it in the path
    curl $OPENAI_API_BASE/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}]
      }'

SSL/TLS Certificate Errors

Symptom: Certificate verification failed errors.

Solutions:

  1. For local development only

    # Disable certificate verification (NOT for production)
    export OPENAI_SKIP_VERIFY=true
  2. Install proper certificates

    # Better: Fix the certificate issue
    # Install CA certificate for your provider
  3. Use HTTP for local services

    # If running locally, use http instead of https
    export OPENAI_API_BASE="http://localhost:8080/v1"

Performance Issues

AI Analysis Very Slow

Symptom: AI analysis takes very long to complete.

Causes & Solutions:

  1. Using large model

    # Switch to faster model
    gonzo -f logs.log --ai-model="gpt-3.5-turbo"  # Fast
    # Instead of gpt-4 (slower but better)
  2. Network latency

    • Use local model (Ollama/LM Studio)

    • Check internet speed

  3. Large log context

    • Analyzing very long logs takes time

    • Break into smaller chunks

  4. Local model on CPU

    • Local models run slowly on CPU

    • Use a GPU if available

    • Use smaller model variants
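One way to act on the "break into smaller chunks" advice above is to split the file and feed Gonzo one piece at a time. A sketch (the helper name and 1000-line chunk size are arbitrary):

```shell
#!/bin/sh
# Split a large log into fixed-size chunks for piecemeal analysis
split_log() {
  # $1 = log file, $2 = lines per chunk
  split -l "$2" "$1" "${1}.chunk_"
}

# Example: 1000-line chunks, then analyze one at a time
# split_log logs.log 1000
# gonzo -f logs.log.chunk_aa
```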

AI Consumes Too Much Memory

Symptom: High memory usage when using AI features.

Solutions:

  1. Use cloud API instead of local

    # Cloud APIs don't use local RAM
    export OPENAI_API_KEY="sk-..."
    unset OPENAI_API_BASE
  2. Use smaller local model

    # Instead of 70b parameter model
    ollama pull llama3:8b  # Much smaller
  3. Close other applications

    • Local LLMs need substantial RAM

    • Close unnecessary programs

Debugging AI Issues

Enable Verbose Logging

# See detailed AI interactions
gonzo -v -f logs.log --ai-model="gpt-3.5-turbo" 2> ai-debug.log

# Check debug output
cat ai-debug.log

Test AI Provider Independently

Before using with Gonzo, verify your AI setup works:

OpenAI:

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'

LM Studio:

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'

Ollama:

curl http://localhost:11434/api/generate \
  -d '{
    "model": "llama3",
    "prompt": "Say hello",
    "stream": false
  }'

# Without "stream": false, Ollama returns a stream of JSON lines

Quick Reference: Provider URLs

Provider     Base URL                    /v1 Suffix?          Default Port
OpenAI       https://api.openai.com      ✅ Yes (auto)        443 (HTTPS)
LM Studio    http://localhost:1234       ✅ Yes (must add)    1234
Ollama       http://localhost:11434      ❌ No                11434
Custom       (varies)                    ⚠️ Usually yes       (varies)

Getting Help

Provide This Info When Reporting AI Issues

# Provider info
echo "Provider: [OpenAI/LM Studio/Ollama/Other]"
echo "API Base: $OPENAI_API_BASE"
echo "Model: [model name]"

# Test connection (falls back to the OpenAI default if unset)
curl -I ${OPENAI_API_BASE:-https://api.openai.com/v1}/models

# Gonzo version
gonzo --version

# Error message
# [paste complete error]
Related Documentation

  • Common Issues - General troubleshooting

  • AI Setup Guide - Initial configuration

  • AI Providers Guide - Provider-specific setup

  • GitHub Issues - Report bugs
