Integration Examples
Learn how to integrate Gonzo with popular tools, platforms, and workflows. From container orchestration to cloud log services, these guides show you how to use Gonzo in real-world production environments.
Why Integration Matters
Gonzo's power multiplies when integrated into your existing toolchain:
🐳 Container Ecosystems - Seamless log analysis from Docker, Kubernetes, and other container platforms
☁️ Cloud Services - Direct integration with AWS CloudWatch, Azure Monitor, and GCP Logging
📊 Log Storage Systems - High-performance analysis of logs stored in VictoriaLogs, Elasticsearch, and other time-series databases
🔧 Development Tools - Integration with CI/CD, monitoring systems, and alerting platforms
🖥️ System Administration - Enhanced workflows for traditional system log analysis
Real-World Focus: These guides are based on actual production deployments and common use cases from the Gonzo community.
Integration Overview
Gonzo integrates with your infrastructure in three main ways:
Direct Piping - Stdin processing for real-time streaming (e.g. Docker logs, kubectl, stern)
File Analysis - File reading for archived logs and batch processing (e.g. CloudWatch exports, log files)
OTLP Receiver - Network endpoint for OpenTelemetry integration (e.g. instrumented applications)
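Each method is a one-liner to start; the flags shown here (-f, --follow, --otlp-enabled) are the same ones used throughout this guide:
# Direct piping: stream any stdout into Gonzo
kubectl logs -f deployment/my-app | gonzo
# File analysis: read a file (add --follow to keep tailing it)
gonzo -f application.log
# OTLP receiver: listen for OpenTelemetry logs on gRPC :4317 and HTTP :4318
gonzo --otlp-enabled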
Featured Integrations
Container Orchestration
Kubernetes Integration
Analyze logs from Kubernetes clusters with powerful tooling integration:
kubectl logs - Direct pod log analysis
Stern integration - Multi-pod log streaming with Gonzo
Container insights - Understanding deployment and pod logs
Namespace-wide analysis - Cluster-level log investigation
Time to complete: 15-20 minutes
Prerequisites: Kubernetes cluster access, kubectl installed
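For example, a namespace-wide sweep can combine a label selector with per-pod prefixes (the app label and namespace below are placeholders for your own):
# Stream all pods matching a label, with each line prefixed by its pod name
kubectl logs -f -l app=backend -n my-namespace --prefix | gonzo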
Cloud Log Services
AWS CloudWatch Integration
Stream and analyze logs from AWS CloudWatch:
CloudWatch Logs streaming - Real-time log analysis from AWS
AWS CLI integration - Efficient log retrieval and processing
Log group analysis - Multi-service AWS monitoring
Cost optimization - Efficient CloudWatch log querying
Time to complete: 20-25 minutes
Prerequisites: AWS account, AWS CLI configured
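For example, pulling a bounded time window instead of holding a live stream keeps CloudWatch query costs predictable (the log group name is a placeholder):
# Analyze the last hour of a log group, then exit
aws logs tail /aws/lambda/my-function --since 1h | gonzo
# Or keep the stream open for live analysis
aws logs tail /aws/lambda/my-function --follow | gonzo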
High-Performance Log Storage
VictoriaLogs Integration
Analyze logs from VictoriaLogs time-series database:
VictoriaLogs querying - High-performance log retrieval
Time-series analysis - Historical log investigation
Query optimization - Efficient VictoriaLogs integration
Large-scale deployment - Production-grade log analysis
Time to complete: 20-25 minutes
Prerequisites: VictoriaLogs installation or access
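One way to feed VictoriaLogs results into Gonzo is its LogsQL HTTP query endpoint; the host, port, and query below are assumptions for a default local install (see the full guide for details):
# Query VictoriaLogs over HTTP and pipe the matching records in
curl -s http://localhost:9428/select/logsql/query -d 'query=_time:1h error' | gonzo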
Container Platforms
Docker Integration
Seamless integration with Docker container logs:
Docker logs command - Container log analysis
Docker Compose - Multi-container log aggregation
Container lifecycle - Monitoring container events
Development workflows - Local container debugging
Time to complete: 15-20 minutes
Prerequisites: Docker installed
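For example (container and project names are placeholders):
# Single container, with stderr merged into the stream
docker logs -f my-container 2>&1 | gonzo
# All services in a Compose project at once
docker-compose logs -f | gonzo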
System Administration
System Log Analysis
Enhanced workflows for traditional system administration:
Syslog integration - System log monitoring
Multiple log sources - Unified system analysis
Security monitoring - Auth log analysis
Performance debugging - System health investigation
Time to complete: 15-20 minutes
Prerequisites: Linux system access
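A typical unified session combines the system and auth logs (paths assume a Debian-style layout):
# System plus security logs in one view (use sudo if auth.log is root-readable only)
gonzo -f /var/log/syslog -f /var/log/auth.log --follow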
Development Workflows
Development Integration
Integrate Gonzo into development workflows:
IDE integration - Log analysis during development
Local debugging - Application log investigation
Test result analysis - CI/CD test log processing
Hot reload monitoring - Watch mode integration
Time to complete: 10-15 minutes
Prerequisites: Development environment
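For example, piping a test run or a local dev log through Gonzo (the exact commands depend on your stack; these are generic placeholders):
# Analyze test output as it runs
npm test 2>&1 | gonzo
# Watch an application log during local development
tail -f logs/dev.log | gonzo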
Integration Patterns
Streaming Pattern (Recommended)
Real-time log analysis through piping:
# Kubernetes with kubectl
kubectl logs -f deployment/my-app | gonzo
# Docker containers
docker logs -f my-container 2>&1 | gonzo
# System logs
tail -f /var/log/syslog | gonzo
# Cloud services
aws logs tail /aws/lambda/my-function --follow | gonzo
Benefits:
✅ Real-time analysis as logs are generated
✅ No intermediate storage required
✅ Works with any tool that outputs to stdout
✅ Minimal resource overhead
File Analysis Pattern
Batch processing of log files:
# Local files
gonzo -f application.log
# Cloud exports
aws logs tail /aws/lambda/function --since 1h > export.log
gonzo -f export.log
# Archived logs
gonzo -f /var/log/app.log.1
# Multiple sources
gonzo -f app.log -f nginx.log -f db.log
Benefits:
✅ Works with archived logs
✅ Repeatable analysis
✅ Can combine multiple sources
✅ Good for historical investigation
OTLP Integration Pattern
OpenTelemetry Protocol receiver:
# Start Gonzo as OTLP receiver
gonzo --otlp-enabled
# Configure your applications to send logs to:
# gRPC: localhost:4317
# HTTP: http://localhost:4318/v1/logs
# Monitor real-time + backup to file
gonzo --otlp-enabled -f backup.log --follow
Benefits:
✅ Native OpenTelemetry integration
✅ Structured log data preserved
✅ Multiple applications simultaneously
✅ Standard protocol
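To verify the HTTP endpoint end to end, you can post a minimal OTLP/JSON record by hand (a hand-rolled test payload, not something Gonzo ships):
# Send one test log record to the OTLP/HTTP logs endpoint
curl -X POST http://localhost:4318/v1/logs \
  -H "Content-Type: application/json" \
  -d '{"resourceLogs":[{"scopeLogs":[{"logRecords":[{"severityText":"INFO","body":{"stringValue":"hello from curl"}}]}]}]}'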
Quick Start by Use Case
"I use Kubernetes"
# Start here: Kubernetes Integration
# 1. Install stern (optional but recommended)
brew install stern
# 2. Stream logs with stern + Gonzo
stern backend | gonzo --ai-model="gpt-4"
# 3. Or use kubectl directly
kubectl logs -f -l app=backend | gonzo
→ Full Kubernetes Guide
"I use AWS"
# Start here: CloudWatch Integration
# 1. Install AWS CLI
# 2. Configure credentials
# 3. Stream CloudWatch logs
aws logs tail /aws/lambda/my-function --follow | gonzo
# 4. Or analyze log groups
aws logs tail my-log-group --since 1h | gonzo --ai-model="gpt-4"
→ Full CloudWatch Guide
"I use Docker"
# Start here: Container Integration
# 1. Start your containers
# 2. Monitor container logs
docker logs -f my-container 2>&1 | gonzo
# 3. Or Docker Compose services
docker-compose logs -f | gonzo --ai-model="gpt-3.5-turbo"
→ Full Docker Guide
"I have traditional servers"
# Start here: System Administration
# 1. Identify your log files
# 2. Monitor system logs
sudo tail -f /var/log/syslog | gonzo
# 3. Or analyze multiple sources
gonzo -f /var/log/syslog -f /var/log/auth.log --follow
→ Full System Admin Guide
Integration Best Practices
🎯 Choose the Right Integration Method
Real-time monitoring → Use streaming (piping)
Historical analysis → Use file analysis
OpenTelemetry apps → Use OTLP receiver
Mixed sources → Combine multiple methods
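Mixed sources can share one session; for example, the OTLP pattern above already combines a network receiver with a followed file:
# OTLP receiver plus a tailed backup file in the same session
gonzo --otlp-enabled -f backup.log --follow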
⚡ Performance Optimization
# For high-volume streams, pre-filter
kubectl logs -f deployment/app | grep -E "(ERROR|WARN)" | gonzo
# Adjust buffer sizes for volume
docker logs -f container | gonzo --log-buffer=20000
# Use appropriate update intervals
tail -f /var/log/busy.log | gonzo --update-interval=5s
🔐 Security Considerations
# Use appropriate authentication
export AWS_PROFILE=production
aws logs tail /aws/lambda/function | gonzo
# Handle sensitive data
# Use local AI for sensitive logs
export OPENAI_API_BASE="http://localhost:11434"
kubectl logs sensitive-app | gonzo --ai-model="llama3"
# Respect access controls
# Use proper IAM roles, RBAC, etc.
📝 Documentation and Sharing
# Document your integration commands
cat > analyze-prod-logs.sh << 'EOF'
#!/bin/bash
# Production log analysis with Gonzo
# Usage: ./analyze-prod-logs.sh
kubectl logs -f -l app=backend,env=prod | \
gonzo --config prod.yml --ai-model="gpt-4"
EOF
chmod +x analyze-prod-logs.sh
# Share with team
git add analyze-prod-logs.sh
git commit -m "Add production log analysis script"
Common Integration Patterns
Multi-Source Aggregation
# Combine multiple log sources
(kubectl logs -f deploy/api & \
kubectl logs -f deploy/worker & \
kubectl logs -f deploy/scheduler) | gonzo
# Multiple containers
# (group in a subshell so all three streams reach the pipe,
# not just the last one)
(docker logs -f api 2>&1 & \
docker logs -f db 2>&1 & \
docker logs -f cache 2>&1) | gonzo
# Multiple files with follow
gonzo -f /var/log/app/*.log --follow
Filtered Streaming
# Pre-filter for performance
kubectl logs -f deployment/app | \
grep -v DEBUG | \
gonzo --log-buffer=10000
# Filter by severity
stern backend | \
grep -E "(ERROR|WARN|FATAL)" | \
gonzo --ai-model="gpt-4"
# Combine filters
aws logs tail /aws/lambda/fn --follow | \
jq -r 'select(.level=="error") | .message' | \
gonzo
Scheduled Analysis
# Cron job for daily analysis
0 2 * * * /usr/local/bin/analyze-logs.sh
# analyze-logs.sh
#!/bin/bash
aws logs tail /aws/lambda/function \
--since 24h \
--format short | \
gonzo --config daily-analysis.yml \
--ai-model="gpt-4" > /var/log/daily-analysis.txt
Troubleshooting Integrations
Connection Issues
# Test connectivity first
kubectl cluster-info
aws sts get-caller-identity
docker ps
# Verify tool output
kubectl logs pod-name | head -10
aws logs tail log-group --since 5m | head -10
# Then add Gonzo
kubectl logs pod-name | gonzo
Performance Issues
# If Gonzo is slow with integration:
# 1. Check log volume
kubectl logs pod-name | wc -l
# 2. Pre-filter if needed
kubectl logs pod-name | grep ERROR | gonzo
# 3. Adjust Gonzo settings
kubectl logs pod-name | gonzo --update-interval=5s --log-buffer=2000
Format Issues
# If logs aren't parsing correctly:
# 1. Check log format
kubectl logs pod-name | head -5
# 2. Test format detection
echo '{"test":"log"}' | gonzo
# 3. Ensure proper JSON/logfmt
# Gonzo auto-detects but prefers structured logs
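A quick sanity check is to feed one record of each supported structured format (the sample records below are made up for the test):
# JSON record
echo '{"level":"error","msg":"db timeout"}' | gonzo
# logfmt record
echo 'level=error msg="db timeout"' | gonzo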
What's Next?
Choose the integration guide that matches your infrastructure:
Kubernetes Integration - K8s clusters with kubectl and stern
AWS CloudWatch - AWS cloud log services
VictoriaLogs - High-performance log storage
Docker Containers - Container log analysis
System Administration - Traditional system logs
Development Workflows - IDE and development integration
Or explore advanced topics:
Configuration - Optimize for your integration
Advanced Features - Powerful analysis techniques
Troubleshooting - Integration-specific issues
Integrate Gonzo into your existing workflows for powerful log analysis anywhere! 🚀 From cloud platforms to container orchestration, Gonzo adapts to your infrastructure.