How does RunLLM ensure the safety and trustworthiness of its automated actions, especially when starting in read-only mode?
RunLLM prioritizes safety by starting in a read-only mode, investigating incidents without making any changes. It uses OAuth-based access with scoped permissions that your existing tools already support. For any actions that could modify your system, such as opening PRs, RunLLM requires explicit human-in-the-loop approval. Every agent is isolated, no data is shared between agents, and all steps are subject to audit logging and policy enforcement.
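The safety model described above (read-only by default, explicit approval for mutations, audit logging on every step) can be sketched as a simple policy gate. This is a hypothetical illustration, not RunLLM's actual implementation; the `PolicyGate`, `Action`, and `Mode` names are invented for this example.

```python
from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    READ_ONLY = "read_only"
    READ_WRITE = "read_write"

@dataclass
class Action:
    name: str
    mutates: bool  # True for anything that could modify the system, e.g. opening a PR

@dataclass
class PolicyGate:
    """Hypothetical policy layer: read-only by default; mutating actions
    require both read-write mode and explicit human approval."""
    mode: Mode = Mode.READ_ONLY
    audit_log: list = field(default_factory=list)

    def authorize(self, action: Action, human_approved: bool = False) -> bool:
        allowed = (not action.mutates) or (
            self.mode is Mode.READ_WRITE and human_approved
        )
        # Every decision is recorded, allowed or not.
        self.audit_log.append((action.name, allowed))
        return allowed

gate = PolicyGate()
print(gate.authorize(Action("query_logs", mutates=False)))  # reads are always allowed
print(gate.authorize(Action("open_pr", mutates=True)))      # blocked in read-only mode
```

Note that even with `human_approved=True`, a mutation is refused while the gate is in read-only mode; the human-in-the-loop check only matters once write access has been deliberately enabled.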
What specific types of technical data does RunLLM ingest and how does it process this information to provide causal context?
RunLLM ingests massive volumes of technical data including logs, traces, metrics, tickets, and documentation. It processes this information through custom pipelines, structures it into a knowledge graph, and uses GraphRAG to map dependencies and historical events. This allows it to identify causal context, not just correlations, by understanding service relationships and historical incident patterns.
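The distinction between causal context and mere correlation can be made concrete with a dependency walk over a service graph. The sketch below is a minimal stand-in for the knowledge-graph idea, not RunLLM's GraphRAG pipeline; the `ServiceGraph` class and service names are assumptions for illustration.

```python
from collections import defaultdict

class ServiceGraph:
    """Hypothetical dependency graph: an alert's causal candidates are the
    services it transitively depends on, not every service that happens to
    be noisy at the same time."""
    def __init__(self):
        self.depends_on = defaultdict(set)  # service -> upstream services it calls

    def add_dependency(self, service: str, upstream: str) -> None:
        self.depends_on[service].add(upstream)

    def causal_candidates(self, alerting_service: str) -> set:
        """Walk upstream dependencies; a fault anywhere on this path can
        cause the alert downstream."""
        seen, stack = set(), [alerting_service]
        while stack:
            svc = stack.pop()
            for upstream in self.depends_on[svc]:
                if upstream not in seen:
                    seen.add(upstream)
                    stack.append(upstream)
        return seen

g = ServiceGraph()
g.add_dependency("checkout", "payments")
g.add_dependency("payments", "postgres")
print(sorted(g.causal_candidates("checkout")))  # ['payments', 'postgres']
```

Layering historical incident records onto the same graph (e.g. "the last three `checkout` alerts traced back to `postgres`") is what turns dependency structure into the historical pattern matching the answer describes.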
How does RunLLM's continuous learning mechanism improve its performance and adapt to a specific organization's incident patterns?
RunLLM continuously learns from every investigation and user-provided correction. It identifies which checks and queries are most effective for specific alert patterns and reuses proven investigation steps from similar past incidents. This process captures tribal knowledge, automatically updates runbooks, and refines its models, reducing mean time to resolution (MTTR) and producing more accurate, organization-specific incident responses over time.
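Identifying which checks work best for a given alert pattern amounts to keeping per-pattern effectiveness statistics and ranking checks for reuse. The sketch below is an assumed, simplified feedback store; the `CheckStats` class and the alert/check names are hypothetical.

```python
from collections import defaultdict

class CheckStats:
    """Hypothetical feedback store: track how often each diagnostic check
    helped resolve a given alert pattern, and rank checks for reuse."""
    def __init__(self):
        # pattern -> check -> [times helpful, times run]
        self.stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def record(self, pattern: str, check: str, helpful: bool) -> None:
        entry = self.stats[pattern][check]
        entry[0] += int(helpful)
        entry[1] += 1

    def best_checks(self, pattern: str, top_n: int = 3) -> list:
        """Checks with the highest helpful/run ratio go first on the next
        incident matching this pattern."""
        checks = self.stats[pattern]
        return sorted(checks, key=lambda c: checks[c][0] / checks[c][1],
                      reverse=True)[:top_n]

store = CheckStats()
store.record("db_latency_spike", "check_slow_queries", helpful=True)
store.record("db_latency_spike", "check_slow_queries", helpful=True)
store.record("db_latency_spike", "check_disk_io", helpful=False)
print(store.best_checks("db_latency_spike"))  # ['check_slow_queries', 'check_disk_io']
```

User corrections fit the same loop: a correction is just a `record(..., helpful=False)` on the step that misled the investigation, which demotes it the next time the pattern recurs.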
Beyond incident resolution, how does RunLLM contribute to preventing future incidents and improving system reliability proactively?
RunLLM proactively prevents future incidents by continuously analyzing past incidents, logs, and customer tickets to surface risks early, before they impact customers. It also uses human feedback to refine future responses, clusters recurring issues to highlight documentation gaps, and ensures runbooks and knowledge bases evolve automatically, thereby reducing system drift and improving overall reliability.
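Clustering recurring issues to surface documentation gaps can be illustrated with a toy similarity grouping over ticket text. This is a deliberately naive sketch (greedy clustering by word overlap), not the actual analysis pipeline; the `cluster_tickets` function, the threshold, and the sample tickets are all assumptions.

```python
def cluster_tickets(tickets: list, threshold: float = 0.5) -> list:
    """Hypothetical greedy clustering by Jaccard word overlap. A recurring
    cluster with no linked runbook would flag a documentation gap."""
    clusters = []  # each entry: (representative word set, list of ticket texts)
    for text in tickets:
        words = set(text.lower().split())
        for rep, members in clusters:
            jaccard = len(words & rep) / len(words | rep)
            if jaccard >= threshold:
                members.append(text)
                break
        else:
            clusters.append((words, [text]))
    return [members for _, members in clusters]

tickets = [
    "pod stuck in CrashLoopBackOff after deploy",
    "deploy pod stuck in CrashLoopBackOff",
    "TLS certificate expired on ingress",
]
recurring = [c for c in cluster_tickets(tickets) if len(c) > 1]
print(recurring)  # the two CrashLoopBackOff tickets form one recurring cluster
```

Any cluster that keeps growing without a corresponding runbook entry is a signal that the knowledge base has fallen behind what operators actually hit, which is the kind of drift the answer describes.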
Can RunLLM integrate with both proprietary and open-source observability and ticketing systems, and what is the typical setup time?
Yes, RunLLM is designed for universal integration, offering connectors for popular tools like Datadog, Grafana, PagerDuty, Jira, and Zendesk, as well as open APIs for custom and homegrown systems. Teams typically connect their tools and go live in days rather than weeks, with no installation required on your infrastructure.
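One common way such universal integration works is a shared connector contract: hosted tools get prebuilt connectors, while homegrown systems push events through an open webhook-style API that normalizes to the same shape. The interface and class names below (`Connector`, `WebhookConnector`) are hypothetical, offered only to illustrate the pattern.

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Hypothetical connector contract shared by prebuilt integrations
    (e.g. Datadog, PagerDuty) and homegrown systems alike."""
    @abstractmethod
    def fetch_events(self, since: int) -> list:
        """Return normalized incident/alert events at or after `since`."""

class WebhookConnector(Connector):
    """A homegrown system pushes JSON payloads; we normalize them into a
    common event shape so downstream analysis is tool-agnostic."""
    def __init__(self):
        self.buffer = []

    def receive(self, payload: dict) -> None:
        self.buffer.append({
            "source": payload.get("system", "custom"),
            "message": payload["message"],
            "ts": payload["ts"],
        })

    def fetch_events(self, since: int) -> list:
        return [e for e in self.buffer if e["ts"] >= since]

c = WebhookConnector()
c.receive({"system": "homegrown-alerts", "message": "disk 90% full", "ts": 100})
print(c.fetch_events(since=50))
```

Because every connector emits the same normalized events, adding a new system is a matter of writing one adapter rather than reworking the analysis layer, which is what makes days-not-weeks onboarding plausible.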