Documentation Index

Fetch the complete documentation index at: https://docs.powabase.ai/llms.txt

Use this file to discover all available pages before exploring further.

What is Agentic Platform?

Agentic Platform is the infrastructure layer for AI-powered applications. Most teams building with LLMs end up assembling the same stack: a vector database for RAG, an agent framework for tool use, a workflow engine for automation, an auth layer, a file storage service, and a database — then writing the glue code to connect them. Agentic Platform replaces that entire stack with a single API.

Unlike frameworks like LangChain or Agno, which give you libraries to run in your own infrastructure, Agentic Platform is a fully managed backend — you don’t deploy or operate anything. Unlike workflow-only tools like n8n or Dify, the platform provides deep AI primitives (multiple indexing strategies, four retrieval algorithms, ReAct agents with tool execution, multi-agent orchestration) alongside automation. And unlike Supabase, which provides a general-purpose backend, Agentic Platform is purpose-built for AI workloads, with first-class support for embeddings, vector search, LLM sessions, and streaming.

Each project gets a fully isolated stack: its own Postgres database with pgvector, API gateway, auth service, file storage, and AI service worker. There is no shared state between projects — isolation is enforced at the infrastructure level.

Three Core Modules

The platform is organized around three modules that can be used independently or composed together. A simple project might use only Context Engineering for document search. A chatbot might combine Context Engineering with Agent Orchestration. A fully automated pipeline uses all three — ingesting documents, reasoning over them with agents, and triggering downstream workflows.

1. Context Engineering Suite (RAG-as-a-Service)

The Context Engineering suite handles the entire RAG pipeline: document ingestion, content extraction, indexing, and retrieval. Upload any document — PDFs, Word files, images, spreadsheets — and the platform extracts text (with OCR for scanned pages), then indexes it using the strategy you choose. This isn’t a one-size-fits-all chunking pipeline. The platform offers multiple indexing strategies and four retrieval algorithms, each designed for different document types and query patterns.
| Indexing Strategy | What It Does | Best For |
| --- | --- | --- |
| ChunkEmbed | Splits text into overlapping chunks and generates vector embeddings | General RAG — most documents, fastest and cheapest |
| PageIndex | Builds a hierarchical document tree with LLM-generated summaries | Long structured PDFs (legal, compliance, specs) |
| GraphIndex | Extends PageIndex with entity extraction and cross-reference enrichment | Dense cross-referenced documents (regulations, codebases) |
| Doc2JSON | Extracts structured fields from documents using a user-defined schema | Invoices, forms, resumes — structured data extraction |
| Retrieval Method | How It Works | Best For |
| --- | --- | --- |
| Vector Search | Cosine similarity over embeddings | Fast semantic matching |
| Full-Text Search | BM25 keyword scoring | Exact phrases, error codes, IDs |
| Hybrid Search | Vector + BM25 fused via Reciprocal Rank Fusion | Production RAG (recommended default) |
| Tree Search | LLM reasons over document structure to select sections | PageIndex KBs — complex structural queries |
The suite also supports cross-encoder reranking (Cohere, Jina, Voyage, and more) for precision-critical applications, three chunking strategies (recursive, fixed-size, markdown-header), configurable embedding models, and project-level defaults for all parameters. This depth of control is what separates RAG-as-a-Service from a basic vector store.
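As a reference for how Hybrid Search combines the two rankings, here is a generic Reciprocal Rank Fusion sketch. This is not the platform's internal implementation; the `k = 60` smoothing constant is the common default from the RRF literature, not a documented platform setting:

```python
def rrf_fuse(ranked_lists, k=60):
    """Fuse several ranked result lists with Reciprocal Rank Fusion.

    Each input list is ordered best-first; each item is a document id.
    A document's fused score is the sum of 1 / (k + rank) across lists.
    """
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Vector search and BM25 each return their own ranking; a document
# that appears near the top of both rankings wins overall.
vector_hits = ["doc_a", "doc_b", "doc_c"]
bm25_hits = ["doc_b", "doc_d", "doc_a"]
print(rrf_fuse([vector_hits, bm25_hits]))
# ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

Because RRF works on ranks rather than raw scores, it needs no score normalization between the vector and BM25 result sets, which is why it is a common fusion choice for hybrid retrieval.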

Knowledge Bases & Indexing

Deep dive into every indexing strategy, retrieval algorithm, and configuration option.

Sources & Extraction

How document ingestion and text extraction work.

2. Agent Orchestration

Agents are LLM-powered conversational entities that use a ReAct (Reason + Act) loop to handle complex tasks. Each agent has a system prompt, a set of tools, optional knowledge bases, and session-based memory. When a user sends a message, the agent reasons about what to do, calls tools as needed, observes the results, and iterates until it can respond — all streamed in real time via Server-Sent Events.

Tools come in three forms:

- Built-in tools: database query, database write, HTTP requests, code execution, storage read/write.
- Custom tools: your own endpoints that the platform calls with tool arguments.
- MCP servers: external tool providers that the agent discovers at runtime via the Model Context Protocol.

For high-stakes operations, the approval flow pauses execution before a tool call and waits for human approval via the API — enabling human-in-the-loop patterns for production deployments.

Multi-agent orchestrations take this further: a coordinator agent analyzes incoming messages and delegates subtasks to specialized entity agents based on their role descriptions. Each entity runs independently with its own tools and knowledge bases, and the coordinator synthesizes their results. This enables domain-specialized teams — a billing agent, a technical support agent, and a sales agent all coordinated by a single orchestration endpoint.
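The ReAct loop can be sketched in miniature. Everything below is illustrative: the platform's actual agent runtime, message format, and tool interface are not documented in this overview, so the model is a stub and `get_weather` is a made-up tool:

```python
import json

def react_loop(model, tools, user_message, max_steps=5):
    """Minimal ReAct sketch: on each step the model either requests a
    tool call or produces a final answer; tool results are appended to
    the transcript as observations for the next reasoning step."""
    transcript = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        decision = model(transcript)
        if "answer" in decision:
            return decision["answer"]
        # Act: run the requested tool, then observe its result
        result = tools[decision["tool"]](**decision["args"])
        transcript.append({"role": "observation", "content": json.dumps(result)})
    raise RuntimeError("agent did not converge within max_steps")

# Stub model: call the weather tool once, then answer from the observation.
def stub_model(transcript):
    if transcript[-1]["role"] == "user":
        return {"tool": "get_weather", "args": {"city": "Oslo"}}
    observation = json.loads(transcript[-1]["content"])
    return {"answer": f"It is {observation['temp_c']}°C in Oslo."}

tools = {"get_weather": lambda city: {"temp_c": 4}}
print(react_loop(stub_model, tools, "What's the weather in Oslo?"))
# It is 4°C in Oslo.
```

In the real platform the model call would be an LLM request and each step would be streamed over SSE, but the reason → act → observe shape of the loop is the same.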

Agents & Tools

The ReAct loop, tool types, sessions, MCP, hooks, and approval flows.

Multi-Agent Orchestration

The coordinator pattern for multi-agent collaboration.

3. Workflow Automation

Workflows are DAG-based automation pipelines for semi-deterministic tasks. Unlike agents (which decide what to do), workflows follow a fixed graph of blocks and edges — but individual blocks can contain LLM calls, agent runs, or code execution, so the output is still dynamic. Workflows are ideal when you know the steps but the content within each step requires AI reasoning: classify an email, extract fields, route to the right team, send a notification.

Each workflow is built from composable blocks:

- starter: manual/API trigger with typed inputs
- webhook: HTTP trigger from external systems
- agent: run an existing agent with a message
- code: execute Python or JavaScript
- condition: branch the flow based on expressions
- split: parallel fan-out execution
- platform_api: call platform resources like knowledge base search or agent runs
- general_api: call external HTTP APIs
- response: return results

Blocks reference upstream outputs using template syntax, creating a data pipeline through the graph. Workflows support three trigger mechanisms: manual execution via the API, webhook triggers from external systems (Stripe events, GitHub hooks, form submissions), and scheduled execution via interval timers or cron expressions. Once deployed, a workflow becomes a persistent automation that runs unattended. The AI Copilot can also generate workflow graphs from natural language descriptions, accelerating development.
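Conceptually, executing such a graph amounts to a topological sort over block dependencies, with each block consuming its upstream outputs. This is a sketch of that idea, not the platform's engine; the block functions here are plain Python callables standing in for real block types:

```python
from graphlib import TopologicalSorter

def run_workflow(blocks, edges, inputs):
    """Execute a tiny block DAG: each block is a function of the
    outputs of its upstream blocks; the starter provides the inputs."""
    deps = {name: set() for name in blocks}
    for src, dst in edges:
        deps[dst].add(src)
    outputs = {"starter": inputs}
    # static_order() yields blocks only after all their dependencies
    for name in TopologicalSorter(deps).static_order():
        if name == "starter":
            continue
        upstream = {d: outputs[d] for d in deps[name]}
        outputs[name] = blocks[name](upstream)
    return outputs

blocks = {
    "starter": None,  # provides the trigger payload
    "classify": lambda up: {"label": "billing" if "invoice" in up["starter"]["email"] else "other"},
    "respond": lambda up: {"reply": f"Routed to {up['classify']['label']} team"},
}
edges = [("starter", "classify"), ("classify", "respond")]
result = run_workflow(blocks, edges, {"email": "Where is my invoice?"})
print(result["respond"]["reply"])
# Routed to billing team
```

The fixed graph is what makes the pipeline semi-deterministic: the routing structure never changes between runs, while a block's output (here the classifier, in the platform an LLM or agent call) can.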

Workflows

Block types, graph execution, deployment, and webhooks.

Streaming & SSE

Real-time events for agents, orchestrations, and workflows.

How the Modules Work Together

The three modules are designed to compose. A knowledge base created in the Context Engineering suite can be attached to an agent in the Agent Orchestration module — the agent automatically gets a search tool that queries the KB during conversations. An agent can be used as a block in a workflow, bringing LLM reasoning into a deterministic pipeline. A workflow can call the Platform API block to search knowledge bases, run agents, or query the database programmatically. And webhooks connect workflows to external systems, closing the loop between your AI backend and the rest of your infrastructure.
Example: End-to-end document processing

A webhook-triggered workflow receives a customer question → a platform_api block searches a knowledge base for relevant context → an agent block reasons over the context and drafts a response → a condition block checks confidence → a general_api block posts the answer to Slack or escalates to a human. Each step is a block in the workflow graph, combining all three modules.
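A pipeline like the one above might be expressed as a graph definition along these lines. The schema is hypothetical: the field names, the `{{block.field}}` template references, and the Slack URL are illustrative assumptions based on the block types described earlier, not the platform's documented wire format:

```python
# Hypothetical graph definition for the support pipeline (illustrative schema).
support_pipeline = {
    "name": "answer-customer-question",
    "blocks": [
        {"id": "incoming", "type": "webhook"},          # customer question arrives
        {"id": "search", "type": "platform_api",        # KB lookup for context
         "config": {"action": "kb_search", "query": "{{incoming.body.question}}"}},
        {"id": "draft", "type": "agent",                # agent reasons over context
         "config": {"message": "Answer using: {{search.results}}"}},
        {"id": "gate", "type": "condition",             # branch on confidence
         "config": {"expression": "{{draft.confidence}} > 0.8"}},
        {"id": "notify", "type": "general_api",         # post answer to Slack
         "config": {"url": "https://hooks.slack.com/..."}},
        {"id": "done", "type": "response"},
    ],
    "edges": [
        ("incoming", "search"), ("search", "draft"),
        ("draft", "gate"), ("gate", "notify"), ("notify", "done"),
    ],
}
```

Note how each stage exercises a different module: the search block comes from Context Engineering, the draft block from Agent Orchestration, and the graph itself from Workflow Automation.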
Of course, not every use case requires all modules. The platform is designed to be versatile: use only the parts you need to build your app.

Per-Project Infrastructure

Every project gets a fully isolated infrastructure stack: Postgres with pgvector (your database — both the AI schema and your own tables), Kong API gateway (routing and auth), GoTrue (user authentication), Storage API (file management), and a dedicated AI service worker (document extraction, indexing, agent execution). You get direct PostgREST access to your public schema for building application features alongside AI capabilities, and a full auth system for managing end users.
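Because the public schema is exposed over PostgREST, application queries use PostgREST's standard `column=op.value` filter syntax (e.g. `status=eq.open`, `created_at=gte.2024-01-01`). The helper and the `https://<project>.powabase.ai/rest/v1` base URL below are assumptions for illustration; only the filter syntax itself is standard PostgREST:

```python
from urllib.parse import urlencode

def postgrest_url(base, table, select="*", **filters):
    """Build a PostgREST query URL from a table name, a column
    selection, and `column=op.value` filter pairs."""
    params = {"select": select, **filters}
    return f"{base}/{table}?{urlencode(params)}"

# Hypothetical per-project gateway URL; the actual base path and the
# required auth headers depend on your project's configuration.
url = postgrest_url(
    "https://<project>.powabase.ai/rest/v1",
    "tickets",
    select="id,subject,status",
    status="eq.open",
)
print(url)
# https://<project>.powabase.ai/rest/v1/tickets?select=id%2Csubject%2Cstatus&status=eq.open
```

In a real request you would also send the project's API key (PostgREST deployments typically expect a bearer token or key header), but the query-string shape above is what direct database access looks like.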

Architecture

Control plane, data plane, per-project isolation, and database schemas.

Database Access

Direct PostgREST access to your project database.

Getting Started

The fastest way to see the platform in action is the Quickstart guide, which builds a RAG agent end-to-end in about 5 minutes. If you prefer to understand the architecture first, start with the Architecture page. If you want to explore a specific module, jump directly to its concept page above.

Quickstart

Build an end-to-end RAG agent in 5 minutes.

Authentication

Set up your API keys and make your first request.

Architecture

Understand the control plane, data plane, and per-project isolation.