## Overview
IncidentFox connects to your existing observability and data platforms to investigate incidents. This section covers how to configure each data source.
## Supported Data Sources

| Platform | Status | Capabilities |
|---|---|---|
| Coralogix | Supported | Log search, metrics, alerts, Olly integration |
| Datadog | Supported | Metrics, logs, APM traces |
| Grafana | Supported | Prometheus queries, dashboards, alerts, annotations |
| Prometheus | Supported | PromQL queries, instant queries, alerts |
| Sentry | Supported | Error tracking, issues, project stats, releases |
| New Relic | Supported | NRQL queries, APM summary |
| Elasticsearch | Supported | Log search, aggregations |
| Splunk | Supported | SPL queries, log search |
| Loki | Supported | LogQL queries |
### Cloud Providers

| Platform | Status | Capabilities |
|---|---|---|
| AWS | Supported | CloudWatch, EC2, Lambda, RDS, ECS, CodePipeline |
| Azure | Supported | Monitor, VMs, App Services |
| GCP | Supported | Cloud Logging, Compute, Cloud Run |
### Infrastructure

| Platform | Status | Capabilities |
|---|---|---|
| Kubernetes | Supported | Pod logs, events, deployments, metrics |
| Docker | Supported | Container logs, exec, stats, events, inspect (15 tools) |
| Terraform | Supported | State inspection, planning |
### Databases

| Platform | Status | Capabilities |
|---|---|---|
| Snowflake | Supported | SQL queries, data enrichment |
| PostgreSQL | Supported | Query execution, schema inspection |
| MySQL | Supported | Query execution, schema inspection |
| BigQuery | Supported | SQL queries, analytics |
### Code & CI/CD

| Platform | Status | Capabilities |
|---|---|---|
| GitHub | Supported | Code search, PRs, Actions, commits, webhooks (16 tools) |
| GitLab | Supported | Repositories, merge requests |
| Jenkins | Supported | Build status, logs |
### Documentation & Knowledge

| Platform | Status | Capabilities |
|---|---|---|
| Confluence | Supported | Wiki search, page retrieval |
| Notion | Supported | Workspace search |
| Google Docs | Supported | Runbook search |
### Messaging & Streaming

| Platform | Status | Capabilities |
|---|---|---|
| Kafka | Supported | Topic inspection, consumer lag |
| Debezium | Supported | CDC monitoring |
| Schema Registry | Supported | Schema management |
## Data Source Architecture

### Credential Management
All credentials should be stored securely using vault references:
```json
{
  "tools": {
    "coralogix": {
      "api_key": "vault://secrets/coralogix-api-key"
    }
  }
}
```
Never store credentials in plain text in configuration files.
IncidentFox supports the following secret backends:
- AWS Secrets Manager
- HashiCorp Vault
- Environment variables (for development)
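To illustrate how a `vault://` reference might be resolved, here is a minimal sketch. The `resolve_credential` function and the environment-variable fallback naming convention are hypothetical stand-ins, not IncidentFox's actual implementation:

```python
import os
import re

def resolve_credential(ref: str, vault_client=None) -> str:
    """Resolve a config value that may be a vault:// reference (sketch)."""
    match = re.fullmatch(r"vault://(.+)", ref)
    if not match:
        # Not a vault reference; treat as a literal (development only).
        return ref
    path = match.group(1)
    if vault_client is not None:
        # e.g. a HashiCorp Vault or AWS Secrets Manager client (assumed API)
        return vault_client.read(path)
    # Development fallback: "secrets/coralogix-api-key" -> CORALOGIX_API_KEY
    env_name = path.rsplit("/", 1)[-1].upper().replace("-", "_")
    value = os.environ.get(env_name)
    if value is None:
        raise KeyError(f"secret {path!r} not found in environment")
    return value

os.environ["CORALOGIX_API_KEY"] = "example-key"
print(resolve_credential("vault://secrets/coralogix-api-key"))  # example-key
```

The key point is that configuration files only ever contain the reference, never the secret itself.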
## Quick Setup

1. **Choose your data sources.** Identify which platforms you want IncidentFox to access.
2. **Create API keys.** Generate read-only API keys for each platform.
3. **Store them in your vault.** Add the credentials to your secrets manager.
4. **Configure in the Web UI.** Add the data source configuration in the Team Console.
5. **Test the connection.** Verify IncidentFox can access each data source.
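Following the schema of the Coralogix example above, a multi-source configuration might look like the sketch below. The field names for Datadog and Prometheus are illustrative assumptions; check each data source's own setup page for the exact keys:

```json
{
  "tools": {
    "datadog": {
      "api_key": "vault://secrets/datadog-api-key",
      "app_key": "vault://secrets/datadog-app-key"
    },
    "prometheus": {
      "url": "https://prometheus.internal.example.com"
    }
  }
}
```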
## Required Permissions
Each data source requires specific permissions. Generally, IncidentFox needs read-only access for investigation.
| Data Source | Required Permissions |
|---|---|
| Coralogix | API key with read access |
| AWS | CloudWatch read, EC2 describe, RDS read, Lambda read |
| Kubernetes | Pod logs, events, describe resources |
| GitHub | Repo read, issues read, PRs read |
| Snowflake | SELECT on relevant tables |
| Datadog | API key + App key with read access |
| Prometheus | Query access to /api/v1/query endpoint |
| Grafana | Viewer role, API key with read access |
| Sentry | Project read, issue read |
| Elasticsearch | Read access to indices |
| PostgreSQL | SELECT on relevant tables |
| Docker | Docker socket access or API access |
Principle of least privilege: Only grant permissions that are necessary for investigation. IncidentFox doesn’t need write access unless you enable auto-remediation.
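As one concrete example, a read-only AWS IAM policy covering the CloudWatch, EC2, RDS, and Lambda permissions from the table might look like this sketch. Scope `Resource` down to specific ARNs where your environment allows:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricData",
        "cloudwatch:ListMetrics",
        "logs:FilterLogEvents",
        "ec2:Describe*",
        "rds:Describe*",
        "lambda:Get*",
        "lambda:List*"
      ],
      "Resource": "*"
    }
  ]
}
```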
## Data Flow
When IncidentFox investigates an incident:

1. The agent determines which data sources are relevant
2. Tools are invoked to query each data source
3. Data is retrieved and processed locally
4. Results are correlated across sources
5. Findings are reported back to the user
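The steps above can be sketched in miniature as follows. The tool functions and their signatures are hypothetical stand-ins; IncidentFox's real agent selects and invokes tools dynamically:

```python
# Stand-in tools: each queries one data source for a given service and window.
def query_logs(service, window):
    return [{"source": "logs", "service": service, "msg": "OOMKilled"}]

def query_metrics(service, window):
    return [{"source": "metrics", "service": service, "memory_pct": 98}]

def investigate(service, window_hours=4):
    window = f"last {window_hours}h"
    # Steps 1-3: pick relevant sources, invoke tools on demand,
    # keep results in memory.
    results = []
    for tool in (query_logs, query_metrics):
        results.extend(tool(service, window))
    # Step 4: correlate across sources (here: group by service).
    correlated = [r for r in results if r["service"] == service]
    # Step 5: report findings back to the user.
    return {"service": service, "evidence": correlated}

finding = investigate("checkout")
print(len(finding["evidence"]))  # 2
```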
Data is:
- Retrieved on-demand (not continuously polled)
- Processed in-memory (not stored long-term)
- Filtered by time range (typically last 1-24 hours)
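A minimal sketch of the time-range filtering, assuming timestamped records and a configurable lookback window (the function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def within_window(records, hours=24, now=None):
    """Keep only records inside the lookback window (sketch)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=hours)
    return [r for r in records if r["ts"] >= cutoff]

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
records = [
    {"ts": now - timedelta(hours=2), "msg": "error spike"},
    {"ts": now - timedelta(hours=30), "msg": "old deploy"},
]
print(len(within_window(records, hours=24, now=now)))  # 1
```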
## Next Steps