
Agents are the conversational interface to your AI features. This guide walks through creating an agent, giving it a tool and a knowledge base, then running a streaming multi-turn conversation.
Prerequisites:
  • Authentication configured (see Authentication guide)
  • Optional: A knowledge base for RAG (see Create a Knowledge Base guide)
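The examples below assume `BASE_URL` and `headers` are already configured as described in the Authentication guide. A minimal sketch of that setup (the environment variable names and base URL shown here are assumptions, not documented values):

```python
import os
import requests

# Hypothetical values -- substitute your instance URL and API key
# from the Authentication guide.
BASE_URL = os.environ.get("POWABASE_URL", "https://api.powabase.ai")
API_KEY = os.environ.get("POWABASE_API_KEY", "")

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```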
1. Create the agent

Define the agent with a name, model, and a system prompt that describes its behavior.

Endpoint: POST /api/agents
import requests

# BASE_URL and headers are configured in the Authentication guide.
response = requests.post(
    f"{BASE_URL}/api/agents",
    headers=headers,
    json={
        "name": "Support Bot",
        "model": "gpt-4o",
        "system_prompt": "You are a helpful support assistant. Answer questions using the knowledge base when available.",
        "temperature": 0.7,
    },
)
response.raise_for_status()
agent = response.json()
agent_id = agent["id"]
print(f"Agent created: {agent_id}")
2. Assign a builtin tool

Enable the agent to use builtin tools such as database_query, http_request, or code_execute.

Endpoint: POST /api/agents/{id}/tools
response = requests.post(
    f"{BASE_URL}/api/agents/{agent_id}/tools",
    headers=headers,
    json={"tool_name": "database_query"},
)
print(response.json())
3. Link a knowledge base

Assign a knowledge base so the agent automatically gets a search tool for it. During conversations, the agent can search the KB to ground its responses.

Endpoint: POST /api/agents/{id}/knowledge-bases
# kb_id comes from the Create a Knowledge Base guide.
response = requests.post(
    f"{BASE_URL}/api/agents/{agent_id}/knowledge-bases",
    headers=headers,
    json={"knowledge_base_id": kb_id},
)
print(response.json())
4. Chat with the agent (streaming)

Send a message and receive a Server-Sent Events (SSE) stream. Events include tool calls, tool results, and the final response.

Endpoint: POST /api/agents/{id}/run/stream

SSE events: start, chunk, step_started, tool_call, tool_result, step_completed, approval_requested, complete, error.
import json

response = requests.post(
    f"{BASE_URL}/api/agents/{agent_id}/run/stream",
    headers=headers,
    json={"message": "What does the product documentation say about getting started?"},
    stream=True,
)

session_id = None
for line in response.iter_lines():
    if not line:
        continue
    text = line.decode("utf-8")
    if text.startswith("data: "):
        event = json.loads(text[6:])
        if event["event"] == "start":
            session_id = event["session_id"]
        elif event["event"] == "chunk":
            print(event["content"], end="")
        elif event["event"] == "tool_call":
            print(f"\n[Tool: {event['tool_name']}]")
        elif event["event"] == "complete":
            print(f"\n\nDone. Session: {session_id}")
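The line-by-line parsing above can be factored into a small reusable helper. A sketch (the helper is illustrative, not part of the documented API):

```python
import json

def iter_sse_events(lines):
    """Yield parsed event dicts from an iterable of raw SSE byte lines.

    Expects the `data: <json>` framing used by the stream endpoint;
    blank keep-alive lines and comment lines are skipped.
    """
    for line in lines:
        if not line:
            continue
        text = line.decode("utf-8")
        if text.startswith("data: "):
            yield json.loads(text[len("data: "):])

# Usage with a live response:
#   for event in iter_sse_events(response.iter_lines()):
#       handle(event)
```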
5. Continue the conversation

Pass the session_id from the previous run to continue the multi-turn conversation. The agent retains the full message history within the session.

Endpoint: POST /api/agents/{id}/run/stream
response = requests.post(
    f"{BASE_URL}/api/agents/{agent_id}/run/stream",
    headers=headers,
    json={
        "message": "Can you summarize that in bullet points?",
        "session_id": session_id,
    },
    stream=True,
)

for line in response.iter_lines():
    if not line:
        continue
    text = line.decode("utf-8")
    if text.startswith("data: "):
        event = json.loads(text[6:])
        if event["event"] == "chunk":
            print(event["content"], end="")
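Across turns, the only state the client needs to carry is the session_id. A sketch of a helper that folds a stream of parsed events into a complete reply (the helper name and structure are illustrative, based on the event fields used above):

```python
def collect_reply(events):
    """Fold parsed SSE event dicts into (session_id, full_text).

    Records the session_id from the `start` event, concatenates
    `chunk` content, and stops at `complete` or `error`.
    """
    session_id = None
    parts = []
    for event in events:
        kind = event["event"]
        if kind == "start":
            session_id = event.get("session_id")
        elif kind == "chunk":
            parts.append(event["content"])
        elif kind in ("complete", "error"):
            break
    return session_id, "".join(parts)
```

Pair this with a parser over `response.iter_lines()` to get back both the full reply text and the session_id to pass into the next turn.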

What’s Next
  • Streaming Responses: a deep dive into SSE event handling.
  • Advanced Agent Config: add MCP servers, hooks, and approval flows.
  • Agents & Tools: understand the ReAct loop and tool system.