Ontora’s pipeline is built around three layers (a rough sketch of their data shapes follows the list):
- Conversation layer — voice or chat interviews conducted by an autonomous agent.
- Knowledge layer — transcripts processed through chunking, embedding, and entity extraction into Postgres + Neo4j.
- Insight layer — synthesis outputs (cartography, roadmap, personas) and a GraphRAG query endpoint over the campaign.
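For orientation only, here is a minimal sketch of how the artifacts of the three layers might relate. The class names, fields, and example values are assumptions for illustration, not Ontora's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class Conversation:
    """Conversation layer: one voice or chat interview run by the agent."""
    id: str
    workspace_id: str
    transcript: str


@dataclass
class Chunk:
    """Knowledge layer: a transcript slice with its embedding (Postgres + pgvector)."""
    conversation_id: str
    text: str
    embedding: list[float]


@dataclass
class Entity:
    """Knowledge layer: an extracted node written to Neo4j."""
    name: str
    label: str  # "Person", "Topic", or "Process"


@dataclass
class Insight:
    """Insight layer: a synthesis output such as cartography, roadmap, or personas."""
    kind: str
    body: dict = field(default_factory=dict)
```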
Pipeline
When a campaign completes a conversation, a job pipeline kicks off: the transcript moves through the knowledge layer (chunking, embedding, and entity extraction into Postgres and Neo4j) and feeds the campaign's insight-layer outputs.
Multi-tenancy
Every record carries a workspace_id. API keys are workspace-scoped — there is no cross-workspace data access. See Workspaces.
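As an illustration of workspace scoping, a key might be used as below. The base URL, endpoint path, header name, and response shape are assumptions here, not documented values; consult the REST API reference for the real ones.

```python
import requests

API_KEY = "wk_live_placeholder"          # placeholder; a workspace-scoped key grants access to one workspace only
BASE_URL = "https://api.ontora.com/v1"   # assumed base URL, not confirmed by the docs

# List conversations visible to this key's workspace.
resp = requests.get(
    f"{BASE_URL}/conversations",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()

for conversation in resp.json().get("data", []):
    # Every record carries the workspace_id of the key's workspace.
    print(conversation["id"], conversation["workspace_id"])
```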
What gets stored
| Store | Holds | Used for |
|---|---|---|
| Postgres (pgvector) | Documents, chunks, embeddings, jobs, conversations | Vector search, transactional state |
| Neo4j | Entities (Person, Topic, Process), relations | Graph traversal during GraphRAG |
| Object storage | Raw transcripts, audio | Export endpoints |
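As a hedged illustration of how the two primary stores divide work during retrieval, the sketch below runs a pgvector nearest-neighbour query and then a Neo4j traversal. Connection strings, table and column names, and the workspace id are placeholders, not Ontora's actual schema; only the Person and Topic labels come from the table above.

```python
import psycopg2
from neo4j import GraphDatabase

WORKSPACE_ID = "ws_123"  # placeholder workspace id

# --- Postgres (pgvector): nearest-neighbour search over chunk embeddings ---
# Table and column names ("chunks", "embedding") are illustrative only.
query_vec = "[0.12, -0.03, 0.88]"  # embedding of the user's question, abbreviated
pg = psycopg2.connect("dbname=ontora user=ontora")
with pg.cursor() as cur:
    cur.execute(
        """
        SELECT id, text
        FROM chunks
        WHERE workspace_id = %s
        ORDER BY embedding <=> %s::vector
        LIMIT 5
        """,
        (WORKSPACE_ID, query_vec),
    )
    top_chunks = cur.fetchall()

# --- Neo4j: traverse relations between extracted entities ---
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    result = session.run(
        """
        MATCH (p:Person)-[r]->(t:Topic)
        WHERE p.workspace_id = $workspace_id
        RETURN p.name, type(r), t.name
        """,
        workspace_id=WORKSPACE_ID,
    )
    for record in result:
        print(record["p.name"], record["type(r)"], record["t.name"])
```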
Three integration surfaces
The same data is exposed through three interfaces, all backed by the same workspace API key (a sketch of a call follows this list):
- REST API — request/response for programmatic integration
- MCP server — tool-calling interface for AI agents
- CLI — terminal and CI-friendly wrapper around the REST API
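For example, a campaign-level GraphRAG query over the REST surface might look like the sketch below. The endpoint path, campaign id, and payload are assumptions, not the documented contract.

```python
import requests

API_KEY = "wk_live_placeholder"  # placeholder; the same workspace-scoped key backs REST, MCP, and CLI

# "cmp_123" and the /query path are hypothetical; check the REST API reference for the real route.
resp = requests.post(
    "https://api.ontora.com/v1/campaigns/cmp_123/query",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"question": "Which onboarding steps did interviewees describe as blockers?"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```

The MCP server exposes the same operation as a tool an agent can call, and the CLI wraps the same endpoint for terminal and CI use, so the choice of surface does not change what data is reachable.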