AI-Specific Issues

Troubleshooting guide for AI integration problems in Gonzo. This covers issues with OpenAI, LM Studio, Ollama, and other AI providers.

General AI Issues

AI Features Not Working

Symptom: Pressing i or c does nothing, or shows "AI not configured" error.

Diagnosis & Solutions:

  1. Verify API key is set

    # Check if key exists
    echo $OPENAI_API_KEY
    # Should show: sk-... or your provider's key format
    
    # If empty, set it
    export OPENAI_API_KEY="sk-your-actual-key-here"
  2. Test API key validity

    # For OpenAI
    curl https://api.openai.com/v1/models \
      -H "Authorization: Bearer $OPENAI_API_KEY"
    
    # Should return list of models, not an error
  3. Specify model explicitly

    # Instead of auto-select
    gonzo -f logs.log --ai-model="gpt-3.5-turbo"
  4. Check API base URL

    # Should be set for custom providers
    echo $OPENAI_API_BASE
    
    # Unset for OpenAI (uses default)
    unset OPENAI_API_BASE

No Response from AI

Symptom: AI analysis appears to hang or never returns.

Causes & Solutions:

  1. Network connectivity (see the connectivity check after this list)

  2. API rate limits

    • Wait a minute and try again

    • Check your provider's rate limit status

    • Use a different model (some have higher limits)

  3. Large log context

    • Very long logs may time out

    • Try analyzing smaller log entries

    • Increase timeout (provider dependent)

  4. Provider service outage (check the provider's status page)
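
For item 1, a quick connectivity check (a minimal sketch assuming the default OpenAI endpoint; use your custom base URL if one is configured):

    # Confirm the API endpoint is reachable at all
    curl -sS --max-time 10 https://api.openai.com/v1/models \
      -H "Authorization: Bearer $OPENAI_API_KEY"

    # For a custom or local provider, test the configured base URL instead
    curl -sS --max-time 10 "$OPENAI_API_BASE/models"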

Model Selection Modal Empty

Symptom: Pressing m shows empty modal or "No models available".

Diagnosis & Solutions:

  1. Verify AI service is configured

  2. Test API endpoint (see the example after this list)

  3. Provider-specific fixes:

    • See sections below for each provider
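
For item 2, a sketch of listing models from the configured endpoint (this assumes an OpenAI-compatible /v1 endpoint and that Gonzo populates the picker from the provider's model list; Ollama's API differs, see the Ollama section):

    # List the models the provider reports
    curl "${OPENAI_API_BASE:-https://api.openai.com/v1}/models" \
      -H "Authorization: Bearer $OPENAI_API_KEY"

    # An empty or error response here explains an empty modal in Gonzo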

AI Analysis Returns Errors

Symptom: Error messages when attempting AI analysis.

Common Error Messages:

  1. "Invalid API key"

    • Key is incorrect or expired

    • Regenerate key from provider dashboard

    • Ensure no extra spaces in key

  2. "Model not found"

    • Specified model doesn't exist or you don't have access

    • Use auto-select instead: remove --ai-model flag

    • Check available models: press m in Gonzo

  3. "Rate limit exceeded"

    • You've hit API rate limits

    • Wait and try again

    • Upgrade your provider plan

    • Switch to local model (Ollama/LM Studio)

  4. "Context length exceeded"

    • Log entry is too long for model

    • Try analyzing a shorter log

    • Use a model with larger context window (gpt-4 vs gpt-3.5)

OpenAI Issues

Authentication Failed

Symptom: "Incorrect API key provided" or 401 errors.

Solutions:

  1. Verify key format (see the check after this list)

  2. Check for whitespace

  3. Generate new key

    • Go to https://platform.openai.com/api-keys

    • Create new key

    • Update environment variable

  4. Verify account status

    • Check billing at https://platform.openai.com/account/billing

    • Ensure you have credits or valid payment method
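
A quick way to check the key format and catch stray whitespace (a minimal sketch; compare the key against what the provider dashboard shows):

    # OpenAI keys start with "sk-"; surrounding spaces or newlines break auth
    printf '%s' "$OPENAI_API_KEY" | head -c 3               # expect: sk-
    printf '%s' "$OPENAI_API_KEY" | grep -c '[[:space:]]'   # expect: 0

    # Re-export cleanly after regenerating the key
    export OPENAI_API_KEY="sk-your-actual-key-here"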

Model Access Denied

Symptom: "You don't have access to model gpt-4" or similar.

Solutions:

  1. Use available model

  2. Check model availability (see the check after this list)

  3. GPT-4 access

    • GPT-4 requires separate approval

    • Check https://platform.openai.com/account/limits

    • Use gpt-3.5-turbo as alternative
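
To confirm which models your key can use, list them and pass one explicitly (gpt-3.5-turbo is an example):

    # Look at the "id" fields in the response
    curl https://api.openai.com/v1/models \
      -H "Authorization: Bearer $OPENAI_API_KEY"

    # Start Gonzo with a model you actually have access to
    gonzo -f logs.log --ai-model="gpt-3.5-turbo"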

Rate Limit Errors

Symptom: "Rate limit exceeded" or 429 errors.

Solutions:

  1. Wait and retry

    • Limits reset after a set time period

    • Wait 1-2 minutes

  2. Use lower tier model

  3. Upgrade plan

    • Free tier has low limits

    • Pay-as-you-go has higher limits

    • Check https://platform.openai.com/account/limits

  4. Switch to local model

    • Use Ollama or LM Studio (no rate limits)

    • See sections below

LM Studio Issues

Cannot Connect to LM Studio

Symptom: Connection refused or timeout errors.

Diagnosis & Solutions:

  1. Verify LM Studio is running

    • Open LM Studio application

    • Ensure a model is loaded

    • Check server is started (green indicator)

  2. Check URL format (CRITICAL): LM Studio requires the /v1 suffix (see the example after this list)

  3. Test server is responding

  4. Check port number

  5. Firewall issues
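
A minimal connection check, assuming the default LM Studio port (1234) and that the endpoint is supplied via OPENAI_API_BASE as described earlier; note the required /v1 suffix:

    # LM Studio's OpenAI-compatible server needs the /v1 suffix
    export OPENAI_API_BASE="http://localhost:1234/v1"

    # Should list the loaded model(s); "connection refused" means the server isn't running
    curl http://localhost:1234/v1/models

    # If you changed the port in LM Studio's server settings, adjust both lines above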

LM Studio Model Not Loading

Symptom: Server starts but model doesn't load.

Solutions:

  1. Check model is downloaded

    • In LM Studio, verify model is in "My Models"

    • Download if missing

  2. Insufficient RAM

    • Large models need significant RAM

    • Try smaller model variant

    • Close other applications

  3. Restart LM Studio

    • Quit completely

    • Reopen and load model fresh

Wrong Model Selected in Gonzo

Symptom: Gonzo uses the wrong LM Studio model.

Solutions:

  1. Specify exact model name (see the example after this list)

  2. Load only one model in LM Studio

    • Unload other models to avoid confusion

    • Load desired model only

  3. Use model selection in Gonzo

    • Start Gonzo

    • Press m to select model interactively
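
For item 1, a sketch of passing the exact identifier (the model name below is a placeholder; use whatever /v1/models reports for your loaded model):

    # Ask LM Studio which model identifier it exposes
    curl http://localhost:1234/v1/models

    # Pass that identifier to Gonzo
    gonzo -f logs.log --ai-model="your-loaded-model-name"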

Ollama Issues

Ollama Service Not Running

Symptom: Connection refused to localhost:11434.

Solutions:

  1. Start Ollama service (see the commands after this list)

  2. Verify service is running

  3. Check for port conflicts
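
The usual checks, assuming a default Ollama install on port 11434:

    # 1. Start the Ollama service (or launch the desktop app)
    ollama serve

    # 2. Verify it responds; this lists installed models as JSON
    curl http://localhost:11434/api/tags

    # 3. See what is bound to the port if the connection is refused or flaky
    lsof -i :11434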

Model Not Found

Symptom: "Model 'llama3' not found" or similar.

Solutions:

  1. List installed models (see the commands after this list)

  2. Pull missing model

  3. Use exact model name
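
For example, using llama3 from the error message above (substitute your own model and tag):

    # List models that are already installed
    ollama list

    # Pull the missing model
    ollama pull llama3

    # Use the exact name, including any tag, when starting Gonzo
    gonzo -f logs.log --ai-model="llama3"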

Wrong Ollama URL Format

Symptom: Errors about invalid endpoint or /v1 path.

CRITICAL: Ollama's URL format differs from OpenAI and LM Studio; use the base URL with no /v1 suffix, as shown below.
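
Assuming the endpoint is supplied via OPENAI_API_BASE as in the earlier sections, the difference looks like this (see also the quick-reference table below):

    # Ollama: base URL only, no /v1 suffix
    export OPENAI_API_BASE="http://localhost:11434"

    # Contrast with LM Studio, which does require /v1:
    # export OPENAI_API_BASE="http://localhost:1234/v1"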

Ollama API Timeouts

Symptom: Analysis hangs or times out with Ollama.

Solutions:

  1. Check model size vs RAM (see the commands after this list)

  2. Use smaller model variant

  3. Reduce concurrent requests

    • Only analyze one log at a time

    • Wait for previous analysis to complete

  4. Check GPU utilization
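
For items 1, 2, and 4, recent Ollama releases can report what is loaded and how it is split between CPU and GPU (a sketch; the model tag is an example):

    # Show loaded models, their size, and the CPU/GPU split
    ollama ps

    # If the model spills to CPU or exceeds RAM, switch to a smaller variant
    # (example: the 8b tag of llama3 rather than the 70b one)
    ollama pull llama3:8b
    gonzo -f logs.log --ai-model="llama3:8b"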

Custom Provider Issues

Custom API Endpoint Not Working

Symptom: Errors connecting to custom OpenAI-compatible API.

Solutions:

  1. Verify endpoint URL

  2. Check authentication method

  3. Verify API compatibility

    • Must be OpenAI-compatible API

    • Should support /v1/chat/completions endpoint

  4. Test with a known working example (see the curl sketch after this list)
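
A sketch of a known-good request against an OpenAI-compatible endpoint (replace the base URL, key, and model name with your provider's values; assumes the base URL already ends in /v1):

    curl "$OPENAI_API_BASE/chat/completions" \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model": "your-model-name", "messages": [{"role": "user", "content": "Say hello"}]}'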

SSL/TLS Certificate Errors

Symptom: Certificate verification failed errors.

Solutions:

  1. For local development only

  2. Install proper certificates

  3. Use HTTP for local services (see the example below)
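
For local services there is usually no need for TLS at all, which avoids certificate problems entirely (a sketch using the LM Studio defaults from above):

    # Point Gonzo at the plain-HTTP local endpoint instead of https://
    export OPENAI_API_BASE="http://localhost:1234/v1"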

Performance Issues

AI Analysis Very Slow

Symptom: AI analysis takes a very long time to complete.

Causes & Solutions:

  1. Using large model

  2. Network latency

    • Use local model (Ollama/LM Studio)

    • Check internet speed

  3. Large log context

    • Analyzing very long logs takes time

    • Break into smaller chunks

  4. Local model on CPU

    • Local models are slow on CPU

    • Use GPU if available

    • Use smaller model variants

AI Consumes Too Much Memory

Symptom: High memory usage when using AI features.

Solutions:

  1. Use cloud API instead of local

  2. Use smaller local model

  3. Close other applications

    • Local LLMs need substantial RAM

    • Close unnecessary programs

Debugging AI Issues

Enable Verbose Logging

Test AI Provider Independently

Before using with Gonzo, verify your AI setup works:

OpenAI:
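
A minimal end-to-end request (assumes the standard OpenAI endpoint; the model name is an example):

    curl https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Say hello"}]}'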

LM Studio:
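
Assuming the default port and a model loaded in the LM Studio server:

    # A JSON model list here means the server side is working
    curl http://localhost:1234/v1/models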

Ollama:
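
Assuming the default port and that llama3 (an example name) has been pulled:

    # Single non-streaming generation to confirm the model actually answers
    curl http://localhost:11434/api/generate \
      -d '{"model": "llama3", "prompt": "Say hello", "stream": false}'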

Quick Reference: Provider URLs

  Provider    Base URL                  /v1 Suffix?         Default Port
  OpenAI      https://api.openai.com    ✅ Yes (auto)        443 (HTTPS)
  LM Studio   http://localhost:1234     ✅ Yes (must add)    1234
  Ollama      http://localhost:11434    ❌ No                11434
  Custom      (varies)                  ⚠️ Usually yes       (varies)

Getting Help

Provide This Info When Reporting AI Issues

When opening an issue, include your AI provider and model, how Gonzo is configured (relevant flags and environment variables, with API keys redacted), and the exact error message you see.

Related guides:

  • Common Issues - General troubleshooting

  • AI Setup Guide - Initial configuration

  • AI Providers Guide - Provider-specific setup
