
In this guide you will upload a document, create a knowledge base, index the document into it, spin up an agent backed by that knowledge base, and run a streaming conversation — all through the REST API. By the end you will have a fully functional RAG agent that can answer questions grounded in your own content.
Prerequisites:
  • A project with API keys configured (see Authentication guide)
Step 1: Authenticate

Set up your base URL and authentication headers. Every request to the project API requires the service_role key.

Headers: apikey + Authorization
import requests

BASE_URL = "{BASE_URL}"
API_KEY = "{API_KEY}"

headers = {
    "apikey": API_KEY,
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
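Rather than pasting keys directly into the script, the same setup can read them from environment variables so credentials never land in source control. A minimal sketch — the variable names POWABASE_URL and POWABASE_API_KEY are naming conventions for this guide, not variables the platform requires:

```python
import os

# Read connection details from the environment; fall back to placeholders
# so the script is runnable before real values are configured.
BASE_URL = os.environ.get("POWABASE_URL", "http://localhost:8000")
API_KEY = os.environ.get("POWABASE_API_KEY", "dev-placeholder-key")

headers = {
    "apikey": API_KEY,
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```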
Step 2: Upload a document

Upload a file to create a Source. The platform automatically extracts its text content for indexing.

Endpoint: POST /api/sources/upload
with open("product-docs.pdf", "rb") as f:
    response = requests.post(
        f"{BASE_URL}/api/sources/upload",
        headers={"apikey": API_KEY, "Authorization": f"Bearer {API_KEY}"},
        files={"file": ("product-docs.pdf", f, "application/pdf")},
    )
source = response.json()
source_id = source["id"]
print(f"Source created: {source_id}, status: {source['extraction_status']}")

# Poll until extraction reaches a terminal status
import time

TERMINAL = {"extracted", "attention_required", "failed", "cancelled"}
while True:
    res = requests.get(f"{BASE_URL}/api/sources/{source_id}", headers=headers)
    status = res.json()["extraction_status"]
    if status in TERMINAL:
        print(f"Extraction ended with status: {status}")
        break
    time.sleep(2)
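The polling loop above can be factored into a reusable helper with a timeout, so a stuck extraction doesn't block forever. This is a sketch — wait_for_extraction is a name invented for this guide, and it takes the status lookup as a callable so the loop logic stays independent of the HTTP layer:

```python
import time

# Terminal extraction states, matching the polling loop above.
TERMINAL = {"extracted", "attention_required", "failed", "cancelled"}

def wait_for_extraction(fetch_status, interval=2.0, timeout=300.0):
    """Poll fetch_status() until it returns a terminal status.

    fetch_status is any zero-argument callable returning the current
    extraction_status string. Raises TimeoutError if no terminal status
    arrives within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)
    raise TimeoutError("extraction did not reach a terminal status in time")
```

With the endpoint above, fetch_status would be something like `lambda: requests.get(f"{BASE_URL}/api/sources/{source_id}", headers=headers).json()["extraction_status"]`.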
Response:
{
  "id": "source-uuid",
  "filename": "product-docs.pdf",
  "content_type": "application/pdf",
  "extraction_status": "pending",
  "created_at": "2026-01-01T00:00:00Z"
}
Step 3: Create a knowledge base and index the document

Create a knowledge base, then add the source to it. Adding a source triggers chunking and vector indexing automatically.

Endpoint: POST /api/knowledge-bases
# Create the knowledge base
response = requests.post(
    f"{BASE_URL}/api/knowledge-bases",
    headers=headers,
    json={
        "name": "Product Docs",
        "description": "Product documentation knowledge base",
    },
)
kb = response.json()
kb_id = kb["id"]
print(f"Knowledge base created: {kb_id}")

# Add the source to trigger indexing
response = requests.post(
    f"{BASE_URL}/api/knowledge-bases/{kb_id}/sources",
    headers=headers,
    json={"source_id": source_id},
)
print(f"Source added, indexing started: {response.json()}")
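None of the snippets so far check HTTP status codes, so a 4xx or 5xx would only surface later as a confusing KeyError. A small wrapper keeps that check in one place — a sketch; post_json is not part of any platform SDK:

```python
import requests

def post_json(url, headers, payload):
    """POST a JSON payload and return the decoded body, raising on HTTP errors."""
    response = requests.post(url, headers=headers, json=payload)
    # Surface 4xx/5xx immediately instead of failing later on a missing key.
    response.raise_for_status()
    return response.json()
```

For example, the knowledge-base creation above becomes `kb = post_json(f"{BASE_URL}/api/knowledge-bases", headers, {"name": "Product Docs"})`.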
Step 4: Create an agent with the knowledge base

Create an agent and link the knowledge base to it. The agent automatically gets a search tool for each linked knowledge base.

Endpoint: POST /api/agents
# Create the agent
response = requests.post(
    f"{BASE_URL}/api/agents",
    headers=headers,
    json={
        "name": "Docs Assistant",
        "model": "gpt-4o",
        "system_prompt": "You are a helpful assistant. Use the knowledge base to answer questions about our product documentation.",
        "temperature": 0.7,
    },
)
agent = response.json()
agent_id = agent["id"]
print(f"Agent created: {agent_id}")

# Link the knowledge base
response = requests.post(
    f"{BASE_URL}/api/agents/{agent_id}/knowledge-bases",
    headers=headers,
    json={"knowledge_base_id": kb_id},
)
print(f"Knowledge base linked: {response.json()}")
Step 5: Chat with your agent (streaming)

Send a message and consume the SSE stream. The agent will search the knowledge base, reason about the results, and stream back an answer.

Endpoint: POST /api/agents/{id}/run/stream
The agent will emit tool_call and tool_result events as it searches the knowledge base, followed by chunk events containing the streamed answer.
import json

response = requests.post(
    f"{BASE_URL}/api/agents/{agent_id}/run/stream",
    headers=headers,
    json={"message": "How do I get started with the product?"},
    stream=True,
)

session_id = None
for line in response.iter_lines():
    if not line:
        continue
    text = line.decode("utf-8")
    if text.startswith("data: "):
        event = json.loads(text[6:])
        if event["event"] == "start":
            session_id = event["session_id"]
        elif event["event"] == "chunk":
            print(event["content"], end="")
        elif event["event"] == "tool_call":
            print(f"\n[Searching: {event['tool_name']}]")
        elif event["event"] == "tool_result":
            print("[Results received]")
        elif event["event"] == "complete":
            print(f"\n\nDone! Session: {session_id}")
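The line-by-line SSE handling above can be pulled into a small generator, which keeps the event-dispatch code readable and makes multi-turn handling easier to test. A sketch — iter_sse_events is a name invented for this guide:

```python
import json

def iter_sse_events(lines):
    """Yield decoded event dicts from an iterable of raw SSE byte lines."""
    for line in lines:
        if not line:
            continue  # SSE separates events with blank lines
        text = line.decode("utf-8")
        if text.startswith("data: "):
            yield json.loads(text[len("data: "):])
```

With the streaming request above, the loop becomes `for event in iter_sse_events(response.iter_lines()): ...`.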

What’s Next

Agents & Tools

Understand the ReAct loop, tool types, and how agents reason.

Streaming Responses

Deep dive into SSE event handling and multi-turn sessions.

Agents API Reference

Full endpoint documentation for agents.