A deep dive into the architectural decisions that make DevRev fundamentally different from legacy platforms.
Legacy platforms designed before AI cannot be retrofitted with GenAI. We've been AI-native since June 2020 - years before the hype.
AI Woven Into Every Layer: Object, Event, Analytics
Not a collection of SaaS tools stitched together - a purpose-built computer. Objects DB, CDC Bus, Platform layer, Search, Data Warehouse, and Metrics all engineered as one system. Multi-tenant to 100k orgs.
Parts DB + Works DB + Users DB on Mongo. CDC Bus for real-time data flow. Platform layer with gRPC, AuthN/Z, K8s. Semantic Search on a patented vector DB. All connected. All purpose-built.
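One way to picture the CDC Bus is as a change-event fan-out: every write to an Objects DB emits an event that downstream consumers — the search indexer, the warehouse loader — subscribe to. A minimal in-process sketch; the event fields, consumer names, and `CDCBus` class are illustrative assumptions, not DevRev's actual schema, and the real bus is a distributed log, not a Python list:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChangeEvent:
    # Hypothetical CDC envelope: tenant-scoped so 100k orgs can share one bus.
    tenant_id: str
    collection: str   # e.g. "works", "parts", "users"
    op: str           # "insert" | "update" | "delete"
    doc: dict

class CDCBus:
    """Toy in-process bus; stands in for a real distributed change log."""
    def __init__(self):
        self.consumers: list[Callable[[ChangeEvent], None]] = []

    def subscribe(self, consumer: Callable[[ChangeEvent], None]) -> None:
        self.consumers.append(consumer)

    def publish(self, event: ChangeEvent) -> None:
        for consumer in self.consumers:
            consumer(event)

# Two downstream systems fed by the same event stream.
search_index, warehouse = [], []
bus = CDCBus()
bus.subscribe(lambda e: search_index.append((e.tenant_id, e.doc["id"])))
bus.subscribe(lambda e: warehouse.append(e.doc))

bus.publish(ChangeEvent("org-42", "works", "insert",
                        {"id": "TKT-1", "title": "Login fails"}))
```

The point of the sketch: one write, many synchronized views — search and analytics never poll the source databases.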
Founding team from Nutanix - the same distributed systems architecture that displaced VMware. Infrastructure scales from 9 to 9,000 agent instances with zero human intervention.
Vertical Copilots · Multimodal · Continuity · Edge AI · Specialists - long-horizon context, fewer escalations
Human-in-the-loop · Real-time indexing · Reasoning & explainability · Interoperability · Contextual decision-making
We replicate any application's object model - Jira, Salesforce, etc. - into our intelligence layer. The UI becomes replaceable; memory compounds infinitely.
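Replication of an external object model can be sketched as a projection into a canonical shape that keeps the raw source record for provenance. The field mapping below is invented for illustration — DevRev's real sync schema is not shown here:

```python
# Hypothetical Jira-to-canonical field mapping (not DevRev's actual schema).
JIRA_TO_CANONICAL = {
    "key": "display_id",
    "summary": "title",
    "status": "stage",
    "assignee": "owned_by",
}

def replicate(source: str, record: dict, mapping: dict) -> dict:
    """Project a source system's record into a canonical work object,
    keeping the raw payload so no source-specific detail is lost."""
    obj = {canon: record[src] for src, canon in mapping.items() if src in record}
    obj["origin"] = source   # provenance: the source UI becomes replaceable
    obj["raw"] = record      # the original record keeps compounding as memory
    return obj

work = replicate("jira",
                 {"key": "ENG-101", "summary": "Crash on login",
                  "status": "In Progress"},
                 JIRA_TO_CANONICAL)
```

Fields absent from the source record (here, `assignee`) are simply omitted rather than invented — the intelligence layer only ever holds what the source actually said.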
Support Users, Not Tickets. Build Products, Not Projects.
Our SQL layer doesn't depend on source systems supporting SQL. Query across any ingested object model with natural language - regardless of origin.
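Because replicated objects land in DevRev's own store, one SQL query can span systems that never spoke SQL themselves. An illustrative sketch using SQLite as a stand-in for the actual SQL layer — the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Objects ingested from two different origins land in one queryable store.
    CREATE TABLE works (id TEXT, title TEXT, origin TEXT, account_id TEXT);
    CREATE TABLE accounts (id TEXT, name TEXT, origin TEXT, arr INTEGER);
    INSERT INTO works VALUES ('TKT-1', 'Crash on login', 'jira', 'A1');
    INSERT INTO accounts VALUES ('A1', 'Acme Corp', 'salesforce', 500000);
""")
# One join across objects that originated in Jira and Salesforce.
row = conn.execute("""
    SELECT w.title, a.name, a.arr
    FROM works w JOIN accounts a ON w.account_id = a.id
""").fetchone()
```

Neither Jira nor Salesforce needs to expose SQL — the query runs against the ingested object models, which is what makes natural-language querying over them possible.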
Every enterprise sits on trillions of tokens. AI agents work best with thousands. DevRev's Memory bridges this nine-order-of-magnitude gap.
AirSync keeps tokens organized inside the knowledge graph. The retrieval engine selects just the right tokens — 10–100× fewer than naive RAG.
Text2SQL — exact facts, 100% deterministic
Vector Search — relevant passages, not whole docs
Reverse Index — lightning-fast keyword lookups
A single query uses Text2SQL for ARR, vector search for feedback, and reverse index for the exact ticket — assembling a tight context in a fraction of the tokens naive RAG needs. Fewer tokens = lower cost, faster response, fewer hallucinations.
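The routing described above can be sketched as a tiny dispatcher: each engine returns only the tokens it is best at, and the assembled context stays small. The routing rules and return values below are invented for illustration:

```python
# Toy retrieval router. Each mode contributes a few precise tokens
# instead of whole documents; mode names follow the three engines above.

def text2sql(q: str) -> list[str]:
    # Exact facts, deterministic (e.g. an ARR figure from the SQL layer).
    return ["arr=500000"] if "ARR" in q else []

def vector_search(q: str) -> list[str]:
    # Relevant passages, not whole docs.
    return ["'login keeps failing on SSO'"] if "feedback" in q else []

def reverse_index(q: str) -> list[str]:
    # Lightning-fast exact keyword lookups.
    return ["TKT-1"] if "TKT-1" in q else []

def assemble_context(query: str) -> list[str]:
    parts = []
    for engine in (text2sql, vector_search, reverse_index):
        parts.extend(engine(query))
    return parts

ctx = assemble_context("What is Acme's ARR, recent feedback, and status of TKT-1?")
```

Three engines, three precise fragments — versus naive RAG stuffing entire documents into the window.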
Most agentic platforms pile data into context windows. The larger the context, the higher the hallucination rate. DevRev inverts this with a dynamic Knowledge Graph - a real-time context engine.
DevRev's Knowledge Graph synthesizes real-time data across structured and unstructured sources - enabling dimensional clustering, temporal event clustering, and enhanced similarity analysis.
General-purpose platforms default to the most recent data. Older but highly relevant context gets deprioritized. DevRev weights data by mathematical relevance - never timestamps.
A 2-year-old policy document can and should outrank a 2-day-old irrelevant ticket. This requires architectural choices that can't be retrofitted.
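The difference between the two ranking policies is easy to see in miniature. A sketch with invented similarity scores — a recency-decay ranker picks the fresh-but-irrelevant ticket, a relevance ranker picks the old policy document:

```python
import math, time

DAY = 86400.0

def recency_score(doc: dict, now: float) -> float:
    # What general-purpose platforms default to: newer always wins.
    return math.exp(-(now - doc["ts"]) / (30 * DAY))

def relevance_score(doc: dict, query_sim: dict) -> float:
    # Weight purely by similarity to the query, never by timestamp.
    return query_sim[doc["id"]]

now = time.time()
docs = [
    {"id": "policy", "ts": now - 730 * DAY},  # 2-year-old policy document
    {"id": "ticket", "ts": now - 2 * DAY},    # 2-day-old irrelevant ticket
]
query_sim = {"policy": 0.92, "ticket": 0.11}  # hypothetical similarity scores

by_recency = max(docs, key=lambda d: recency_score(d, now))
by_relevance = max(docs, key=lambda d: relevance_score(d, query_sim))
```

Under recency decay the policy document is buried; under relevance weighting it surfaces exactly when it should.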
No scalable vector DB existed for enterprise AI five years ago. DevRev built one in-house on top of RDS - patented. Vectors are namespaced per tenant, powering semantic search across 100k+ organizations.
Built alongside Syntactic Search (Elastic) and a CDC Bus for real-time data flow. This isn't a feature - it's the foundation the entire DevRev computer is built on.
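Per-tenant namespacing is the key isolation property: a query from one org can never match another org's vectors, even for identically named documents. A pure-Python sketch of the idea — the cosine-similarity store below is a toy stand-in, not DevRev's patented engine:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy store: vectors keyed by (tenant, doc_id). The tenant filter below is
# the namespacing — every search is scoped to one org's slice of the index.
store = {
    ("org-1", "kb-9"):     [0.9, 0.1],
    ("org-2", "secret-7"): [0.1, 0.9],  # another tenant's document
}

def semantic_search(tenant: str, query_vec: list[float], top_k: int = 1):
    hits = [(cosine(query_vec, vec), doc)
            for (t, doc), vec in store.items() if t == tenant]
    return [doc for _, doc in sorted(hits, reverse=True)[:top_k]]
```

Note that a query from `org-1` returns `kb-9` even when `org-2`'s `secret-7` is a closer vector match — cross-tenant leakage is impossible by construction.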
Enterprise knowledge is trapped in PDFs — support articles, invoices, contracts, specs. Poor extraction means AI can't reason over this data. We benchmarked 19 tools to find the best pipeline.
Our Baseline Pipeline (basic text extraction) couldn't handle tables at all and lost formatting. Hi-Res Pipeline (layout-aware parsing) improved tables but was still lossy on complex docs. We needed something better.
Built a custom evaluation dataset — multi-column layouts, scanned docs, merged-cell tables, charts, handwritten text. The GLM OCR model achieves a 5.6× improvement over the baseline across all document types.
Text Metric — 0 to 1, lower = better extraction
Table Metric — 0 to 1, higher = better structure preserved
Overall = (1 - Text) + Table, max score 2.0
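The scoring rule above combines a loss-style text metric with a gain-style table metric, which a few lines make concrete:

```python
def overall_score(text_metric: float, table_metric: float) -> float:
    """Combine the two metrics as defined above: text is a 0-1 loss
    (lower = better extraction), table is a 0-1 gain (higher = better
    structure preserved). Maximum possible score is 2.0."""
    return (1 - text_metric) + table_metric

# A perfect pipeline: no text loss, full table structure preserved.
perfect = overall_score(0.0, 1.0)   # 2.0
# A pipeline that garbles half the text and half the table structure.
middling = overall_score(0.5, 0.5)  # 1.0
```

Inverting the text metric before summing puts both components on a "higher is better" footing, so the overall score ranks pipelines directly.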
Every layer reinforces every other. Replicating one piece is hard — replicating the compounding effect of all eight together is nearly impossible.