# AI Setup & Integration

Transform your log analysis with artificial intelligence. Gonzo's AI integration brings natural language understanding, pattern recognition, and intelligent insights to log analysis, making complex debugging faster and more intuitive.

{% hint style="success" %}
**Quick Start:** Set up your AI provider, then press `i` on any log entry to get instant AI analysis, or use `m` to switch between AI models on the fly.
{% endhint %}

### AI Integration Overview

Gonzo supports multiple AI providers and models, giving you flexibility in cost, performance, and privacy:

| Provider                | Cost     |
| ----------------------- | -------- |
| **OpenAI**              | Paid API |
| **Local Ollama**        | Free     |
| **Local LM Studio**     | Free     |
| **Custom APIs**         | Varies   |

### Core AI Features

#### Intelligent Log Analysis

**Press `i` on any log entry for instant analysis:**

```
Original Log:
{"timestamp":"2024-01-15T10:30:05Z","level":"error","service":"api-gateway","message":"Database connection timeout after 30s","user_id":"12345","endpoint":"/api/users","duration":30542}

AI Analysis:
"This error indicates a database connectivity issue affecting user 12345's request to the /api/users endpoint. The 30-second timeout suggests either:

1. Database server overload or unresponsiveness
2. Network connectivity issues between API gateway and database
3. Potential deadlock or long-running query

Recommended investigation steps:
- Check database server health and active connections
- Review recent database queries for performance issues
- Verify network connectivity between services
- Look for similar timeouts in the same timeframe to identify scope"
```
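Structured entries like the one above are easier to read (and to paste into an AI prompt) when pretty-printed. `python3 -m json.tool` is a stock way to do that; the log line below is the sample from above:

```shell
# Pretty-print a structured log line before analysis (assumes python3 is on PATH)
line='{"timestamp":"2024-01-15T10:30:05Z","level":"error","service":"api-gateway","message":"Database connection timeout after 30s","user_id":"12345","endpoint":"/api/users","duration":30542}'
printf '%s' "$line" | python3 -m json.tool
```

This prints one field per line with indentation, which makes the `level`, `service`, and `duration` fields easy to spot at a glance.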

#### AI-Powered Chat

**Press `c` in log details for interactive analysis:**

```
You: Why are we seeing so many authentication failures?

AI: Based on the logs, I see 156 authentication failures in the last hour. The pattern suggests:

1. Geographic clustering - 80% from IP ranges in Eastern Europe
2. Timing pattern - Bursts every 2-3 minutes, suggesting automation
3. Username patterns - Targeting common usernames (admin, root, user)

This appears to be a coordinated brute force attack. I recommend:
- Implementing rate limiting on authentication endpoints
- Geographic IP blocking for suspicious regions
- Enhanced monitoring for automated attack patterns

Would you like me to help identify the specific IP ranges involved?

You: Yes, show me the top attacking IPs
```

#### Contextual Understanding

AI analyzes each log entry in its surrounding context:

* **Service relationships** - Understands how microservices interact
* **Timing correlation** - Connects events across time
* **Pattern significance** - Explains why patterns matter
* **Business impact** - Relates technical issues to user experience

### Getting Started with AI

#### Quick Setup Path

```bash
# 1. Choose your AI provider (OpenAI is easiest to start)
export OPENAI_API_KEY="sk-your-key-here"

# 2. Start Gonzo with AI enabled
gonzo -f your-logs.log --ai-model="gpt-4"

# 3. Try AI features immediately:
# - Press 'i' on any log entry
# - Press 'c' for interactive chat
# - Press 'm' to switch models
```

### OpenAI Setup

#### Step 1: Get Your API Key

1. **Visit** [**OpenAI API Platform**](https://platform.openai.com/)
2. **Create account or sign in**
3. **Navigate to API Keys** (<https://platform.openai.com/api-keys>)
4. **Create new secret key**
5. **Copy the key** (starts with `sk-`)

{% hint style="warning" %}
**Important:** Save your API key securely. OpenAI only shows it once, and you'll need it for Gonzo configuration.
{% endhint %}

#### Step 2: Configure Environment

**Method 1: Environment Variable (Recommended)**

```bash
# Add to your ~/.bashrc, ~/.zshrc, or ~/.profile
export OPENAI_API_KEY="sk-your-actual-api-key-here"

# Reload your shell configuration
source ~/.bashrc  # or ~/.zshrc
```

**Method 2: Session Variable**

```bash
# Set for current session only
export OPENAI_API_KEY="sk-your-actual-api-key-here"

# Verify it's set
echo $OPENAI_API_KEY
```
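Before launching Gonzo, you can also sanity-check the key format offline. `check_key` is an illustrative helper, not part of Gonzo; it only looks for the `sk-` prefix and never calls the API:

```shell
# Offline sanity check: pattern only, no API request is made
check_key() {
  case "$1" in
    sk-*) echo "looks like an OpenAI key" ;;
    "")   echo "OPENAI_API_KEY is not set" ;;
    *)    echo "unexpected key format" ;;
  esac
}

check_key "$OPENAI_API_KEY"
```

A passing check doesn't prove the key is valid — only the `curl` test against the API does that — but it catches the common copy-paste mistakes (empty variable, stray quotes) before you start Gonzo.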

**Method 3: Configuration File**

```bash
# Create Gonzo config file
mkdir -p ~/.config/gonzo
cat > ~/.config/gonzo/config.yml << EOF
# AI Configuration
ai-model: "gpt-4"

# Environment variables can also be set in config
# But API keys are more secure as environment variables
EOF
```

#### Step 3: Test Your Setup

```bash
# Test with automatic model selection
gonzo -f your-logs.log

# Test with specific model
gonzo -f your-logs.log --ai-model="gpt-4"

# Test with cheaper model for development
gonzo -f your-logs.log --ai-model="gpt-3.5-turbo"
```

#### Step 4: Verify AI Features Work

```bash
# 1. Start Gonzo with your logs
gonzo -f application.log --ai-model="gpt-4"

# 2. Navigate to a log entry and press 'i'
# You should see AI analysis of the log entry

# 3. Try the model switcher with 'm'
# You should see available OpenAI models

# 4. Test AI chat with 'c' in log details
# You should be able to have a conversation about the logs
```

### Local AI Setup (Ollama)

#### Step 1: Install Ollama

{% tabs %}
{% tab title="Linux" %}

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Verify installation
ollama --version
```

{% endtab %}

{% tab title="macOS" %}

```bash
# Option 1: Download from website
# Visit https://ollama.ai/download and download macOS installer

# Option 2: Homebrew
brew install ollama

# Verify installation
ollama --version
```

{% endtab %}

{% tab title="Windows" %}

```bash
# Download from https://ollama.ai/download
# Run the installer
# Open PowerShell or Command Prompt

# Verify installation
ollama --version
```

{% endtab %}
{% endtabs %}

#### Step 2: Start Ollama Service

```bash
# Start Ollama server (required for Gonzo to connect)
ollama serve

# This should show:
# Ollama is running on http://localhost:11434
```

{% hint style="info" %}
**Keep this running:** The `ollama serve` command needs to stay running for Gonzo to access AI features. Consider running it in a separate terminal or as a background service.
{% endhint %}

#### Step 3: Download AI Models

```bash
# Download recommended models for log analysis
ollama pull llama3        # Good general-purpose model (4.7GB)
ollama pull mistral       # Faster, smaller model (4.1GB)
ollama pull codellama     # Good for technical logs (3.8GB)

# Or download a larger, more capable model
ollama pull llama3:70b    # Very capable but requires 40GB+ RAM

# List available models
ollama list
```

#### Step 4: Configure Gonzo for Ollama

```bash
# Set environment variables for Ollama
export OPENAI_API_KEY="ollama"                    # Special key for Ollama
export OPENAI_API_BASE="http://localhost:11434"   # Ollama endpoint

# Verify Ollama is accessible
curl http://localhost:11434/api/tags
```
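If `jq` isn't installed, the model names can be pulled out of the `/api/tags` response with standard tools. The JSON below is a made-up sample of the response shape; in practice you would substitute `tags=$(curl -s http://localhost:11434/api/tags)`:

```shell
# Sample /api/tags payload (illustrative stand-in for the curl output)
tags='{"models":[{"name":"llama3:latest"},{"name":"mistral:latest"}]}'

# Extract each "name" field with grep/cut — one model per line
models=$(printf '%s' "$tags" | grep -o '"name":"[^"]*"' | cut -d'"' -f4)
echo "$models"
```

The names printed here are the same identifiers you pass to `--ai-model` or see in Gonzo's model switcher.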

#### Step 5: Test Ollama Integration

```bash
# Test with automatic model selection
gonzo -f your-logs.log

# Test with specific model
gonzo -f your-logs.log --ai-model="llama3"

# Test model switching
# Press 'm' in Gonzo to see available Ollama models
```

### Local LM Studio Setup

#### Step 1: Install LM Studio

1. **Download LM Studio** from <https://lmstudio.ai/>
2. **Install the application** for your operating system
3. **Launch LM Studio**

#### Step 2: Download Models

1. **Open LM Studio**
2. **Go to "Discover" tab**
3. **Search and download recommended models:**
   * `microsoft/DialoGPT-medium` (lightweight, good for testing)
   * `meta-llama/Llama-2-7b-chat-hf` (balanced performance)
   * `meta-llama/Llama-2-13b-chat-hf` (better quality, needs more RAM)

#### Step 3: Start Model Server

1. **Go to "Local Server" tab in LM Studio**
2. **Select your downloaded model**
3. **Click "Start Server"**
4. **Note the server URL** (usually `http://localhost:1234`)

#### Step 4: Configure Gonzo for LM Studio

```bash
# Set environment variables for LM Studio
export OPENAI_API_KEY="local-key"                     # Any non-empty value
export OPENAI_API_BASE="http://localhost:1234/v1"     # Note the /v1 suffix

# Test connectivity
curl http://localhost:1234/v1/models
```

#### Step 5: Test LM Studio Integration

```bash
# Test with LM Studio
gonzo -f your-logs.log

# The model will be auto-selected from whatever's running in LM Studio
# Use 'm' to see available models
```

### Custom API Setup

#### Enterprise AI Services

**Azure OpenAI Service:**

```bash
export OPENAI_API_KEY="your-azure-key"
export OPENAI_API_BASE="https://your-resource.openai.azure.com/"
export OPENAI_API_TYPE="azure"
export OPENAI_API_VERSION="2023-05-15"
```

**AWS Bedrock (via compatible proxy):**

```bash
export OPENAI_API_KEY="your-aws-access-key"
export OPENAI_API_BASE="https://your-bedrock-proxy.amazonaws.com/v1"
```

**Custom OpenAI-Compatible API:**

```bash
export OPENAI_API_KEY="your-custom-api-key"
export OPENAI_API_BASE="https://your-ai-service.com/v1"
```
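Since all three setups differ only in the base URL, it can help to see exactly which endpoint a given configuration resolves to. `models_url` is an illustrative helper, not a Gonzo command; the default mirrors the convention of typical OpenAI-compatible clients:

```shell
# Build the model-listing URL from a base URL; default to OpenAI's endpoint (illustrative)
models_url() {
  base="${1:-https://api.openai.com/v1}"
  # Strip a trailing slash so Azure-style bases don't produce a double slash
  echo "${base%/}/models"
}

models_url "https://your-resource.openai.azure.com/"
models_url   # no argument: default OpenAI endpoint
```

Whatever this prints is the URL the connectivity test in the next section should hit.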

#### Testing Custom APIs

```bash
# Test API connectivity
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
     "$OPENAI_API_BASE/models"

# Test with Gonzo
gonzo -f test-logs.log --ai-model="your-model-name"
```

### Real-World Examples

See AI features in action:

* [AI and a TUI: Practical Logging Tools for SREs](https://www.controltheory.com/blog/ai-and-a-tui-practical-logging-tools-for-sres/) - Practical AI use cases for incident response

### AI Features Deep Dive

#### Model Auto-Selection

Gonzo intelligently selects the best available AI model:

**OpenAI Priority:**

```
gpt-4 → gpt-3.5-turbo → first available
```

**Ollama Priority:**

```
gpt-oss:20b → llama3 → mistral → codellama → first available
```

**LM Studio:**

```
First available model loaded in LM Studio
```
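The Ollama fallback order above can be sketched as a small shell function. `pick_model` is illustrative only — Gonzo performs this selection internally — and it assumes the available models are passed as a space-separated list:

```shell
# Return the first preferred model present in the available list,
# falling back to the first available model (sketch of the fallback order)
pick_model() {
  available="$1"
  for preferred in "gpt-oss:20b" llama3 mistral codellama; do
    case " $available " in
      *" $preferred "*) echo "$preferred"; return ;;
    esac
  done
  # None of the preferred models found: take the first available one
  echo "$available" | awk '{print $1}'
}

pick_model "codellama mistral"   # mistral outranks codellama
pick_model "phi3"                # no preferred match: first available
```

The space-padded `case` match avoids false positives on substrings (e.g. `llama3` inside `codellama:latest`).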

#### Runtime Model Switching

**Press `m` anywhere to switch models:**

```
┌─ MODEL SELECTION ─────────────────────┐
│ Available Models:                     │
│                                       │
│ ✓ gpt-4                    (current)  │ ← Currently active
│   gpt-3.5-turbo                       │
│   gpt-3.5-turbo-16k                   │
│   text-davinci-003                    │
│                                       │
│ Navigation: ↑/↓, Enter to select      │
│ ESC to cancel                         │
└───────────────────────────────────────┘
```

**Benefits:**

* **Cost optimization** - Use expensive models only when needed
* **Performance tuning** - Fast models for quick questions, powerful models for complex analysis
* **Experimentation** - Compare responses from different models
* **Fallback options** - Switch if one model is unavailable

#### AI Analysis Types

{% tabs %}
{% tab title="Error Analysis" %}
**Single Log Entry Analysis:**

```
# Use case: Understanding specific errors
# Trigger: Press 'i' on error log entry

AI provides:
- Error explanation in plain English
- Potential root causes
- Investigation steps
- Related log patterns to look for
- Business impact assessment
```

**Example Response:**

```
"This NullPointerException in the user authentication service suggests the user profile wasn't properly loaded before authentication validation. This typically happens when:

1. Database query returned null (user doesn't exist)
2. Cache miss during user lookup
3. Race condition during user session creation

Impact: Users can't log in, affecting user experience and potentially revenue.

Next steps: Check user existence in database, verify cache hit rates, examine concurrent authentication requests."
```

{% endtab %}

{% tab title="Pattern Analysis" %}
**Multi-Log Pattern Analysis:**

```
# Use case: Understanding recurring issues
# Trigger: Filter logs, then use AI analysis

AI provides:
- Pattern significance explanation
- Trend analysis
- System health implications
- Optimization recommendations
- Preventive measures
```

**Example Response:**

```
"The recurring 'slow query' warnings show a performance degradation pattern:

- Frequency increased 300% in last 2 hours
- Affects primarily user lookup queries
- Correlates with increased user activity

This suggests:
1. Database index degradation
2. Query plan optimization needed
3. Potential need for query caching

Without intervention, this will likely escalate to timeout errors and user impact within 1-2 hours."
```

{% endtab %}

{% tab title="System Analysis" %}
**Overall System Health Analysis:**

```
# Use case: Understanding system-wide issues
# Trigger: AI analysis on filtered time ranges

AI provides:
- System health assessment
- Component interaction analysis
- Failure cascade identification
- Recovery recommendations
- Prevention strategies
```

**Example Response:**

```
"System analysis for the last 30 minutes shows a cascade failure pattern:

Timeline:
1. 14:15 - Database slow queries began
2. 14:18 - API gateway timeouts started
3. 14:20 - Authentication service became unresponsive
4. 14:22 - Load balancer began failing health checks

Root cause: Database performance degradation triggered system-wide impact.

Recovery priority:
1. Immediate: Restart database connections
2. Short-term: Scale database resources
3. Long-term: Implement circuit breakers to prevent cascade failures"
```

{% endtab %}
{% endtabs %}

### AI Workflow Integration

#### Development Workflow

```bash
# AI-enhanced development debugging
gonzo -f logs/debug.log --follow --ai-model="gpt-3.5-turbo"

# Workflow:
# 1. Reproduce issue while monitoring
# 2. AI identifies unusual patterns automatically
# 3. Press 'i' on error logs for instant explanation
# 4. Use AI chat to explore root causes
# 5. Get specific debugging recommendations
```

#### Production Monitoring

```bash
# AI-powered production monitoring
gonzo -f /var/log/app/*.log --follow --ai-model="gpt-4"

# Benefits:
# - Automatic anomaly detection
# - Intelligent alert context
# - Root cause hypothesis generation
# - Impact assessment
# - Recovery recommendations
```

#### Incident Response

```bash
# AI-assisted incident response
gonzo -f incident-logs.log --ai-model="gpt-4"

# Capabilities:
# - Rapid timeline reconstruction
# - Intelligent root cause analysis
# - Impact assessment
# - Recovery priority recommendations
# - Post-incident learning extraction
```

### Troubleshooting AI Issues

#### Common Setup Problems

**API Key Issues:**

```bash
# Verify API key is set correctly
echo $OPENAI_API_KEY

# Test API connectivity
curl -H "Authorization: Bearer $OPENAI_API_KEY" https://api.openai.com/v1/models
```

**Model Availability:**

```bash
# Check available models
gonzo --ai-model="test" 2>&1 | grep "available models"

# Test specific model
gonzo -f test.log --ai-model="gpt-3.5-turbo"
```


### What's Next?

Ready to set up AI integration? Continue with these detailed guides:

* **Setup & Configuration** - Get your AI provider configured
* **AI Providers Guide** - Detailed setup for each provider
* **Using AI Features** - Master AI-powered workflows

Or explore how AI integrates with other advanced features:

* **Log Analysis** - Combine AI with algorithmic analysis
* **Format Detection** - Optimize data for AI processing

***

