AI Setup & Integration
Transform your log analysis with artificial intelligence. Gonzo's AI integration brings natural language understanding, pattern recognition, and intelligent insights to your logs, making complex debugging faster and more intuitive.
Quick Start: Set up your AI provider, then press i on any log entry to get instant AI analysis, or use m to switch between AI models on the fly.
AI Integration Overview
Gonzo supports multiple AI providers and models, giving you flexibility in cost, performance, and privacy (quick preview below):
OpenAI - Paid API
Local Ollama - Free
Local LM Studio - Free
Custom APIs - Varies
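All of these options are configured through the same OpenAI-compatible environment variables. Here is a quick preview, using the key values and endpoints covered in the detailed setup sections later on this page:
# OpenAI (paid API) - key from https://platform.openai.com/api-keys
export OPENAI_API_KEY="sk-your-key-here"

# Local Ollama (free) - point Gonzo at the local Ollama endpoint
export OPENAI_API_KEY="ollama"
export OPENAI_API_BASE="http://localhost:11434"

# Local LM Studio (free) - any non-empty key, note the /v1 suffix
export OPENAI_API_KEY="local-key"
export OPENAI_API_BASE="http://localhost:1234/v1"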
Core AI Features
Intelligent Log Analysis
Press i on any log entry for instant analysis:
Original Log:
{"timestamp":"2024-01-15T10:30:05Z","level":"error","service":"api-gateway","message":"Database connection timeout after 30s","user_id":"12345","endpoint":"/api/users","duration":30542}
AI Analysis:
"This error indicates a database connectivity issue affecting user 12345's request to the /api/users endpoint. The 30-second timeout suggests either:
1. Database server overload or unresponsiveness
2. Network connectivity issues between API gateway and database
3. Potential deadlock or long-running query
Recommended investigation steps:
- Check database server health and active connections
- Review recent database queries for performance issues
- Verify network connectivity between services
- Look for similar timeouts in the same timeframe to identify scope"
AI-Powered Chat
Press c in log details for interactive analysis:
You: Why are we seeing so many authentication failures?
AI: Based on the logs, I see 156 authentication failures in the last hour. The pattern suggests:
1. Geographic clustering - 80% from IP ranges in Eastern Europe
2. Timing pattern - Bursts every 2-3 minutes, suggesting automation
3. Username patterns - Targeting common usernames (admin, root, user)
This appears to be a coordinated brute force attack. I recommend:
- Implementing rate limiting on authentication endpoints
- Geographic IP blocking for suspicious regions
- Enhanced monitoring for automated attack patterns
Would you like me to help identify the specific IP ranges involved?
You: Yes, show me the top attacking IPs
Contextual Understanding
AI analyzes logs in context (see the sketch after this list):
Service relationships - Understands how microservices interact
Timing correlation - Connects events across time
Pattern significance - Explains why patterns matter
Business impact - Relates technical issues to user experience
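To give the AI that cross-service context, you can feed related logs together. A minimal sketch, reusing the multi-file glob and flags from the production monitoring example later on this page (the path is illustrative; point it at your own services):
# Stream several related service logs so AI analysis sees cross-service context
gonzo -f /var/log/services/*.log --follow --ai-model="gpt-4"
# Press 'i' on an error for an explanation informed by the surrounding services,
# or 'c' to ask how events in one service relate to another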
Getting Started with AI
Quick Setup Path
# 1. Choose your AI provider (OpenAI is easiest to start)
export OPENAI_API_KEY="sk-your-key-here"
# 2. Start Gonzo with AI enabled
gonzo -f your-logs.log --ai-model="gpt-4"
# 3. Try AI features immediately:
# - Press 'i' on any log entry
# - Press 'c' for interactive chat
# - Press 'm' to switch models
OpenAI Setup
Step 1: Get Your API Key
Visit OpenAI API Platform
Create account or sign in
Navigate to API Keys (https://platform.openai.com/api-keys)
Create new secret key
Copy the key (starts with sk-)
Important: Save your API key securely. OpenAI only shows it once, and you'll need it for Gonzo configuration.
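One optional way to follow that advice (a sketch, not specific to Gonzo): keep the key in a separate file with restricted permissions and source it from your shell startup file, rather than pasting the key directly into shared dotfiles. The ~/.openai_key path is just an example.
# Store the key in a file only you can read (illustrative path)
echo 'export OPENAI_API_KEY="sk-your-actual-api-key-here"' > ~/.openai_key
chmod 600 ~/.openai_key

# Load it from your shell configuration
echo 'source ~/.openai_key' >> ~/.bashrc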
Step 2: Configure Environment
Method 1: Environment Variable (Recommended)
# Add to your ~/.bashrc, ~/.zshrc, or ~/.profile
export OPENAI_API_KEY="sk-your-actual-api-key-here"
# Reload your shell configuration
source ~/.bashrc  # or ~/.zshrc
Method 2: Session Variable
# Set for current session only
export OPENAI_API_KEY="sk-your-actual-api-key-here"
# Verify it's set
echo $OPENAI_API_KEY
Method 3: Configuration File
# Create Gonzo config file
mkdir -p ~/.config/gonzo
cat > ~/.config/gonzo/config.yml << EOF
# AI Configuration
ai-model: "gpt-4"
# Environment variables can also be set in config
# But API keys are more secure as environment variables
EOF
Step 3: Test Your Setup
# Test with automatic model selection
gonzo -f your-logs.log
# Test with specific model
gonzo -f your-logs.log --ai-model="gpt-4"
# Test with cheaper model for development
gonzo -f your-logs.log --ai-model="gpt-3.5-turbo"Step 4: Verify AI Features Work
# 1. Start Gonzo with your logs
gonzo -f application.log --ai-model="gpt-4"
# 2. Navigate to a log entry and press 'i'
# You should see AI analysis of the log entry
# 3. Try the model switcher with 'm'
# You should see available OpenAI models
# 4. Test AI chat with 'c' in log details
# You should be able to have a conversation about the logs
Local AI Setup (Ollama)
Step 1: Install Ollama
# Linux: install with the official script
curl -fsSL https://ollama.ai/install.sh | sh
# Verify installation
ollama --version

# macOS, option 1: Download from website
# Visit https://ollama.ai/download and download the macOS installer
# macOS, option 2: Homebrew
brew install ollama
# Verify installation
ollama --version

# Windows: Download the installer from https://ollama.ai/download
# Run the installer, then open PowerShell or Command Prompt
# Verify installation
ollama --version
Step 2: Start Ollama Service
# Start Ollama server (required for Gonzo to connect)
ollama serve
# This should show:
# Ollama is running on http://localhost:11434
Step 3: Download AI Models
# Download recommended models for log analysis
ollama pull llama3 # Good general-purpose model (4.7GB)
ollama pull mistral # Faster, smaller model (4.1GB)
ollama pull codellama # Good for technical logs (3.8GB)
# Or download a larger, more capable model
ollama pull llama3:70b # Very capable but requires 40GB+ RAM
# List available models
ollama list
Step 4: Configure Gonzo for Ollama
# Set environment variables for Ollama
export OPENAI_API_KEY="ollama" # Special key for Ollama
export OPENAI_API_BASE="http://localhost:11434" # Ollama endpoint
# Verify Ollama is accessible
curl http://localhost:11434/api/tags
Step 5: Test Ollama Integration
# Test with automatic model selection
gonzo -f your-logs.log
# Test with specific model
gonzo -f your-logs.log --ai-model="llama3"
# Test model switching
# Press 'm' in Gonzo to see available Ollama models
Local LM Studio Setup
Step 1: Install LM Studio
Download LM Studio from https://lmstudio.ai/
Install the application for your operating system
Launch LM Studio
Step 2: Download Models
Open LM Studio
Go to "Discover" tab
Search and download recommended models:
microsoft/DialoGPT-medium (lightweight, good for testing)
meta-llama/Llama-2-7b-chat-hf (balanced performance)
meta-llama/Llama-2-13b-chat-hf (better quality, needs more RAM)
Step 3: Start Model Server
Go to "Local Server" tab in LM Studio
Select your downloaded model
Click "Start Server"
Note the server URL (usually http://localhost:1234)
Step 4: Configure Gonzo for LM Studio
# Set environment variables for LM Studio
export OPENAI_API_KEY="local-key" # Any non-empty value
export OPENAI_API_BASE="http://localhost:1234/v1" # Note the /v1 suffix
# Test connectivity
curl http://localhost:1234/v1/models
Step 5: Test LM Studio Integration
# Test with LM Studio
gonzo -f your-logs.log
# The model will be auto-selected from whatever's running in LM Studio
# Use 'm' to see available models
Custom API Setup
Enterprise AI Services
Azure OpenAI Service:
export OPENAI_API_KEY="your-azure-key"
export OPENAI_API_BASE="https://your-resource.openai.azure.com/"
export OPENAI_API_TYPE="azure"
export OPENAI_API_VERSION="2023-05-15"
AWS Bedrock (via compatible proxy):
export OPENAI_API_KEY="your-aws-access-key"
export OPENAI_API_BASE="https://your-bedrock-proxy.amazonaws.com/v1"
Custom OpenAI-Compatible API:
export OPENAI_API_KEY="your-custom-api-key"
export OPENAI_API_BASE="https://your-ai-service.com/v1"
Testing Custom APIs
# Test API connectivity
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
"$OPENAI_API_BASE/models"
# Test with Gonzo
gonzo -f test-logs.log --ai-model="your-model-name"
Real-World Examples
See AI features in action:
AI and a TUI: Practical Logging Tools for SREs - Practical AI use cases for incident response
AI Features Deep Dive
Model Auto-Selection
Gonzo intelligently selects the best available AI model:
OpenAI Priority:
gpt-4 → gpt-3.5-turbo → first available
Ollama Priority:
gpt-oss:20b → llama3 → mistral → codellama → first available
LM Studio:
First available model loaded in LM Studio
Runtime Model Switching
Press m anywhere to switch models:
┌─ MODEL SELECTION ─────────────────────┐
│ Available Models: │
│ │
│ ✓ gpt-4 (current) │ ← Currently active
│ gpt-3.5-turbo │
│ gpt-3.5-turbo-16k │
│ text-davinci-003 │
│ │
│ Navigation: ↑/↓, Enter to select │
│ ESC to cancel │
└───────────────────────────────────────┘
Benefits (example below):
Cost optimization - Use expensive models only when needed
Performance tuning - Fast models for quick questions, powerful models for complex analysis
Experimentation - Compare responses from different models
Fallback options - Switch if one model is unavailable
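For example, you might start each session with a model matched to the task and switch up only when needed. A small sketch using the flags already shown on this page (which model names are available depends on the provider you configured):
# Day-to-day tailing: a fast, cheaper model
gonzo -f app.log --follow --ai-model="gpt-3.5-turbo"

# Deep incident analysis: a more capable model
gonzo -f incident-logs.log --ai-model="gpt-4"

# Fully local and free, if Ollama is configured
gonzo -f app.log --ai-model="llama3"

# Press 'm' at any time to compare models or fall back to another one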
AI Analysis Types
Single Log Entry Analysis:
# Use case: Understanding specific errors
# Trigger: Press 'i' on error log entry
AI provides:
- Error explanation in plain English
- Potential root causes
- Investigation steps
- Related log patterns to look for
- Business impact assessment
Example Response:
"This NullPointerException in the user authentication service suggests the user profile wasn't properly loaded before authentication validation. This typically happens when:
1. Database query returned null (user doesn't exist)
2. Cache miss during user lookup
3. Race condition during user session creation
Impact: Users can't log in, affecting user experience and potentially revenue.
Next steps: Check user existence in database, verify cache hit rates, examine concurrent authentication requests."
Multi-Log Pattern Analysis:
# Use case: Understanding recurring issues
# Trigger: Filter logs, then use AI analysis
AI provides:
- Pattern significance explanation
- Trend analysis
- System health implications
- Optimization recommendations
- Preventive measures
Example Response:
"The recurring 'slow query' warnings show a performance degradation pattern:
- Frequency increased 300% in last 2 hours
- Affects primarily user lookup queries
- Correlates with increased user activity
This suggests:
1. Database index degradation
2. Query plan optimization needed
3. Potential need for query caching
Without intervention, this will likely escalate to timeout errors and user impact within 1-2 hours."
Overall System Health Analysis:
# Use case: Understanding system-wide issues
# Trigger: AI analysis on filtered time ranges
AI provides:
- System health assessment
- Component interaction analysis
- Failure cascade identification
- Recovery recommendations
- Prevention strategies
Example Response:
"System analysis for the last 30 minutes shows a cascade failure pattern:
Timeline:
1. 14:15 - Database slow queries began
2. 14:18 - API gateway timeouts started
3. 14:20 - Authentication service became unresponsive
4. 14:22 - Load balancer began failing health checks
Root cause: Database performance degradation triggered system-wide impact.
Recovery priority:
1. Immediate: Restart database connections
2. Short-term: Scale database resources
3. Long-term: Implement circuit breakers to prevent cascade failures"
AI Workflow Integration
Development Workflow
# AI-enhanced development debugging
gonzo -f logs/debug.log --follow --ai-model="gpt-3.5-turbo"
# Workflow:
# 1. Reproduce issue while monitoring
# 2. AI identifies unusual patterns automatically
# 3. Press 'i' on error logs for instant explanation
# 4. Use AI chat to explore root causes
# 5. Get specific debugging recommendations
Production Monitoring
# AI-powered production monitoring
gonzo -f /var/log/app/*.log --follow --ai-model="gpt-4"
# Benefits:
# - Automatic anomaly detection
# - Intelligent alert context
# - Root cause hypothesis generation
# - Impact assessment
# - Recovery recommendations
Incident Response
# AI-assisted incident response
gonzo -f incident-logs.log --ai-model="gpt-4"
# Capabilities:
# - Rapid timeline reconstruction
# - Intelligent root cause analysis
# - Impact assessment
# - Recovery priority recommendations
# - Post-incident learning extraction
Troubleshooting AI Issues
Common Setup Problems
API Key Issues:
# Verify API key is set correctly
echo $OPENAI_API_KEY
# Test API connectivity
curl -H "Authorization: Bearer $OPENAI_API_KEY" https://api.openai.com/v1/modelsModel Availability:
# Check available models
gonzo --ai-model="test" 2>&1 | grep "available models"
# Test specific model
gonzo -f test.log --ai-model="gpt-3.5-turbo"
What's Next?
Ready to set up AI integration? Continue with these detailed guides:
Setup & Configuration - Get your AI provider configured
AI Providers Guide - Detailed setup for each provider
Using AI Features - Master AI-powered workflows
Or explore how AI integrates with other advanced features:
Log Analysis - Combine AI with algorithmic analysis
Format Detection - Optimize data for AI processing