Overview
IncidentFox uses RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval), a hierarchical knowledge system based on research published at ICLR 2024. Its tree of recursively generated summaries lets it handle 100+ page runbooks without losing context.
Why RAPTOR?
Traditional RAG (Retrieval-Augmented Generation) struggles with long, interconnected operational documents:
| Challenge | Traditional RAG | RAPTOR |
|---|---|---|
| Long documents | Loses context | Hierarchical abstraction |
| Complex relationships | Flat retrieval | Multi-level reasoning |
| Cross-document queries | Limited | Knowledge graph |
| Learning over time | Static | Pattern recording |
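At a high level, RAPTOR builds its hierarchy by recursively grouping text chunks and summarizing each group into a parent node. The sketch below illustrates that loop only; the `build_tree` and `summarize` helpers are stand-ins, not IncidentFox APIs, and a real system clusters by embedding similarity and summarizes with an LLM.

```python
def summarize(cluster):
    # Placeholder: a real system would call an LLM to write an
    # abstractive summary of the cluster's text.
    return "summary(" + " + ".join(cluster) + ")"

def build_tree(chunks, max_levels=4, cluster_size=3):
    """Recursively cluster chunks and summarize each cluster upward.

    Returns a list of levels: levels[0] is the raw chunks, each later
    level holds summaries of groups from the level below.
    """
    levels = [chunks]
    current = chunks
    for _ in range(max_levels - 1):
        if len(current) <= 1:
            break
        # Group neighboring chunks; a real implementation clusters
        # semantically rather than by position.
        clusters = [current[i:i + cluster_size]
                    for i in range(0, len(current), cluster_size)]
        current = [summarize(c) for c in clusters]
        levels.append(current)
    return levels
```

Retrieval then searches every level at once, so a query can match either a specific leaf chunk or a high-level summary that spans many pages.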
Architecture
Knowledge Types
RAPTOR organizes knowledge into abstraction levels:
| Level | Type | Examples |
|---|---|---|
| L1 | Procedural | Step-by-step runbooks, remediation steps |
| L2 | Factual | Service configurations, thresholds, SLAs |
| L3 | Temporal | Past incidents, deployment history |
| L4 | Policy | Escalation rules, on-call rotations |
Adding Knowledge
Via API
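The source does not show the request shape, so the following is a hedged sketch: the endpoint path (`/api/v1/kb/documents`), field names, and auth header are assumptions to illustrate the idea, not a confirmed IncidentFox API.

```python
import json
import urllib.request

# Hypothetical payload shape -- adjust fields to your deployment.
payload = {
    "title": "payments-api high latency runbook",
    "content": "1. Check p99 latency dashboards...",
    "services": ["payments-api"],
    "category": "runbook",
}

req = urllib.request.Request(
    "https://incidentfox.example.com/api/v1/kb/documents",  # illustrative URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <YOUR_TOKEN>"},  # placeholder token
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send
```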
Via Slack
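As an illustration only (the slash-command syntax below is an assumption, not documented IncidentFox usage):

```
/incidentfox kb add "When payments-api p99 exceeds 2s, scale the worker pool" --service payments-api --category runbook
```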
Via Web UI
- Navigate to Knowledge Base > Add Knowledge
- Paste or upload content
- Tag with services and categories
- Submit for processing
Document Sources
IncidentFox can ingest knowledge from:
| Source | Method |
|---|---|
| Confluence | API integration |
| Google Docs | OAuth connection |
| Notion | API integration |
| Markdown files | Direct upload |
| Past incidents | Automatic extraction |
Confluence Integration
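A configuration for the Confluence source might look like the fragment below. The keys are illustrative assumptions, not documented settings; check your deployment's configuration reference.

```yaml
# Illustrative shape only -- key names are assumptions.
sources:
  confluence:
    base_url: https://yourcompany.atlassian.net/wiki
    spaces: [SRE, RUNBOOKS]
    sync_interval: 6h
    auth:
      token_env: CONFLUENCE_API_TOKEN  # read from the environment
```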
Knowledge Graph
Beyond the tree structure, RAPTOR maintains a knowledge graph:
Relationships
| Relationship | Description |
|---|---|
| depends_on | Service dependencies |
| owned_by | Team ownership |
| expert_in | Individual expertise |
| related_to | Related incidents/runbooks |
Querying the Graph
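To show what graph queries make possible, here is a toy traversal over the relationship types above. The edge data and `transitive` helper are illustrative, not the IncidentFox query API.

```python
from collections import deque

# Toy graph keyed by (entity, relationship) -- illustrative data only.
edges = {
    ("checkout", "depends_on"): ["payments-api", "cart"],
    ("payments-api", "depends_on"): ["postgres"],
    ("payments-api", "owned_by"): ["payments-team"],
}

def transitive(entity, rel="depends_on"):
    """All entities reachable from `entity` via `rel` edges (BFS)."""
    seen, queue = set(), deque([entity])
    while queue:
        for nxt in edges.get((queue.popleft(), rel), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

A query like "what does checkout ultimately depend on?" follows `depends_on` edges transitively, which is what lets the agent reason about blast radius during an incident.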
Learning from Investigations
IncidentFox automatically learns from successful investigations.
Pattern Recording
After each investigation, IncidentFox:
- Extracts cause-solution pairs
- Tags with services and symptoms
- Stores in knowledge base
- Increases confidence with repetition
Example Pattern
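A recorded pattern might look like the fragment below; the field names are illustrative assumptions, not a documented schema.

```json
{
  "symptoms": ["p99 latency spike", "connection pool exhausted"],
  "services": ["payments-api", "postgres"],
  "cause": "Connection pool sized below peak traffic",
  "solution": "Raise the pool's max connections and restart the pods",
  "confidence": 0.85,
  "occurrences": 3
}
```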
Finding Similar Investigations
IncidentFox surfaces past investigations that share:
- Similar symptoms
- Same services
- Related error patterns
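One simple way to score that kind of overlap is a weighted Jaccard similarity over symptom and service tags. This is a sketch of the idea; the weights and the function itself are assumptions, not IncidentFox's actual matcher.

```python
def similarity(a, b):
    """Weighted Jaccard overlap across symptom and service tags.

    `a` and `b` are dicts with "symptoms" and "services" lists.
    Weights (0.6 / 0.4) are illustrative, not tuned values.
    """
    def jaccard(x, y):
        x, y = set(x), set(y)
        return len(x & y) / len(x | y) if x | y else 0.0

    return (0.6 * jaccard(a["symptoms"], b["symptoms"])
            + 0.4 * jaccard(a["services"], b["services"]))
```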
Importance Scoring
RAPTOR uses 9+ signals to rank knowledge relevance:
| Signal | Weight | Description |
|---|---|---|
| Recency | High | Recently updated knowledge |
| Usage | High | Frequently referenced |
| Confidence | Medium | Verification status |
| Service match | High | Relevant to current service |
| Symptom match | High | Matches current symptoms |
| Author expertise | Medium | Written by domain expert |
| Freshness decay | Dynamic | Older knowledge decays |
| Contextual boost | Dynamic | Current investigation context |
| Feedback | Medium | User feedback signals |
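Combining signals like these typically means a weighted sum with a decay term. The sketch below shows that shape only; the weight values, signal names, and 90-day half-life are illustrative assumptions, not IncidentFox's actual scoring function.

```python
import math

# Illustrative weights -- not IncidentFox's real values.
WEIGHTS = {"recency": 0.2, "usage": 0.2, "service_match": 0.2,
           "symptom_match": 0.2, "confidence": 0.1, "feedback": 0.1}

def importance(signals, age_days=0, half_life_days=90):
    """Weighted signal sum, scaled by exponential freshness decay.

    `signals` maps signal name -> score in [0, 1]; missing signals
    contribute zero. Older knowledge decays toward zero.
    """
    base = sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    decay = math.exp(-math.log(2) * age_days / half_life_days)
    return base * decay
```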
Configuration
RAPTOR Settings
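A settings block might look like the following; every key here is an illustrative assumption, so consult your deployment's configuration reference for the real names.

```yaml
# Illustrative keys -- not documented setting names.
raptor:
  max_tree_depth: 4          # matches the L1-L4 levels above
  chunk_size: 512            # tokens per leaf chunk
  rebuild_schedule: nightly  # when summaries are regenerated
```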
Retrieval Settings
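Likewise, retrieval tuning might look like this fragment; the keys and values are assumptions for illustration only.

```yaml
# Illustrative keys -- not documented setting names.
retrieval:
  top_k: 8                      # results returned per query
  relevance_threshold: 0.35     # drop matches scoring below this
  include_levels: [L1, L2, L3, L4]
  freshness_half_life_days: 90  # controls freshness decay
```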
API Endpoints
Retrieve Knowledge
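A hedged example of calling a retrieval endpoint; the path and field names are assumptions, not a confirmed API.

```python
import json
import urllib.request

# Hypothetical endpoint and fields -- adjust to your deployment.
query = {"query": "payments-api p99 latency spike", "top_k": 5}
req = urllib.request.Request(
    "https://incidentfox.example.com/api/v1/kb/retrieve",  # illustrative URL
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send
```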
Get Answer with Sources
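For answer-style queries, the request might carry a question and a flag asking for source nodes. Again the endpoint and fields below are illustrative assumptions.

```python
import json
import urllib.request

# Hypothetical endpoint -- the response would pair an answer with
# the tree/graph nodes it was grounded in.
body = {"question": "How do I remediate connection pool exhaustion?",
        "include_sources": True}
req = urllib.request.Request(
    "https://incidentfox.example.com/api/v1/kb/answer",  # illustrative URL
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```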
Provide Feedback
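Feedback feeds the scoring signal in the table above. This request shape is an assumption for illustration, not a confirmed API.

```python
import json
import urllib.request

# Hypothetical endpoint and fields for the feedback signal.
feedback = {"document_id": "doc-123", "helpful": True,
            "comment": "Threshold in step 3 is outdated"}
req = urllib.request.Request(
    "https://incidentfox.example.com/api/v1/kb/feedback",  # illustrative URL
    data=json.dumps(feedback).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```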
Tree Statistics
- Total documents
- Tree depth
- Node counts by level
- Last sync time
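A statistics response covering those fields might look like this; the shape and values are illustrative only.

```json
{
  "total_documents": 1240,
  "tree_depth": 4,
  "nodes_by_level": {"L1": 860, "L2": 240, "L3": 110, "L4": 30},
  "last_sync": "2024-05-01T03:00:00Z"
}
```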
Best Practices
Document Quality
- Keep runbooks up to date
- Include specific commands and thresholds
- Add examples and expected outcomes
- Tag with relevant services
Feedback Loop
- Provide feedback on retrieved knowledge
- Flag outdated information
- Suggest improvements
Service Tagging
Tag all knowledge with:
- Service name
- Environment (prod, staging)
- Category (runbook, alert, architecture)
Troubleshooting
Knowledge Not Retrieved
- Check that the document is indexed: GET /api/v1/kb/status
- Verify tagging and metadata
- Check relevance threshold settings
Stale Knowledge
- Set up automatic sync from sources
- Configure freshness decay
- Regularly review and update

