Flow API

Each agent you save in the TruCopilot builder becomes a callable HTTP endpoint. This page documents the public shape of that API. For per-agent input schemas, response templates, and a test token, sign in and open /ai-management/.../llm-hub/api-docs for that agent.

Overview

You design an agent on the visual canvas: pick a chat provider, attach a system prompt, declare typed Input variables, and optionally connect tools, retrieval, memory, switches, or other agents. When you save the flow, it gets a stable slug. From that point on, the flow is reachable as a single HTTP endpoint.

  • One endpoint per agent. No SDK required — plain HTTP works.
  • OpenAI-compatible. Use the OpenAI client lib by swapping the base_url.
  • Token-authenticated. Tokens are issued per organization.
  • Streaming optional. Server-Sent Events or JSON Lines.

Endpoint

POST https://trucopilot.com/api/llm/{org_code}/flows/{slug}/run
POST https://trucopilot.com/api/llm/{org_code}/flows/{slug}/run/stream

Path param   Description
org_code     Your organization's stable identifier. Found on the dashboard.
slug         The slug assigned when you saved the agent in the builder.
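
For example, with a hypothetical org_code of acme and slug support-bot, the blocking endpoint would be:

POST https://trucopilot.com/api/llm/acme/flows/support-bot/run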

Authentication

Send an organization API token in the Authorization header. Tokens are managed in your dashboard → LLM Hub → Tokens.

Authorization: Bearer YOUR_TOKEN_HERE

Heads up: tokens are scoped to a single organization and inherit the agent's permissions. Rotate freely; old tokens can be revoked from the dashboard.

Request body

The request body shape is determined by the Input nodes in your flow. Each declared variable becomes a field in the input object, with the type the builder enforces.

{
  "input": {
    "question": "How do I reset my password?",
    "user_id": "u_42"
  },
  "stream": false,
  "metadata": { "session_id": "abc-123" }
}

Field      Type      Description
input      object    Required. Keys must match the agent's Input variables.
stream     boolean   Optional. If true and you POST to /run, the response is JSON Lines. Use /run/stream for SSE.
metadata   object    Optional. Free-form metadata logged with the request for tracing.
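
As a quick sketch, here is that same body sent from Python with the requests library (ORG, SLUG, and TOKEN are placeholders, matching the code samples further down):

import requests

body = {
    "input": {"question": "How do I reset my password?", "user_id": "u_42"},
    "stream": False,                         # blocking JSON response
    "metadata": {"session_id": "abc-123"},   # free-form, logged for tracing
}
r = requests.post(
    f"https://trucopilot.com/api/llm/{ORG}/flows/{SLUG}/run",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
    timeout=60,
)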

Response shapes

The agent's Output node controls the response body. Three preset shapes:

  • Raw — just the model's text reply, no envelope.
  • OpenAI ChatCompletion — matches openai.chat.completions.create() exactly. Pick this when you want a drop-in for the OpenAI SDK.
  • Custom JSON template — you define the keys in the builder; values can interpolate node outputs.

Example OpenAI-compatible response:

{
  "id": "flowrun_abc123",
  "object": "chat.completion",
  "created": 1762550400,
  "model": "trucopilot-flow",
  "choices": [{
    "index": 0,
    "message": { "role": "assistant", "content": "..." },
    "finish_reason": "stop"
  }],
  "usage": { "prompt_tokens": 312, "completion_tokens": 88, "total_tokens": 400 }
}
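
Once decoded, the reply text and token accounting sit at fixed paths in this envelope. A minimal extraction sketch in Python, assuming resp_json holds the decoded body from one of the code samples below:

reply = resp_json["choices"][0]["message"]["content"]   # the model's text
finish = resp_json["choices"][0]["finish_reason"]       # "stop" on a clean completion
used = resp_json["usage"]["total_tokens"]               # prompt + completion tokens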

Streaming

Two streaming transports are supported:

Server-Sent Events

POST to /run/stream. Each chunk is an SSE data: line carrying a delta object. Terminate on data: [DONE]. Compatible with the OpenAI streaming format.

data: {"choices":[{"delta":{"content":"How "}}]}
data: {"choices":[{"delta":{"content":"can "}}]}
data: {"choices":[{"delta":{"content":"I help?"}}]}
data: [DONE]
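
A minimal Python consumer for this transport (a sketch using the requests library; ORG, SLUG, and TOKEN are placeholders, as in the code samples below):

import json
import requests

r = requests.post(
    f"https://trucopilot.com/api/llm/{ORG}/flows/{SLUG}/run/stream",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"input": {"question": "..."}},
    stream=True,  # read the body incrementally instead of buffering it
)
for line in r.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue  # skip blank keep-alive lines
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break  # end-of-stream sentinel
    chunk = json.loads(payload)
    print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)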

JSON Lines

POST to /run with "stream": true. Response is a stream of newline-delimited JSON objects — useful when SSE is awkward in your runtime.
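
A matching sketch for JSON Lines, where each non-empty line is one complete JSON object (the exact per-line shape follows your agent's Output node, so this just decodes and prints each record):

import json
import requests

r = requests.post(
    f"https://trucopilot.com/api/llm/{ORG}/flows/{SLUG}/run",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"input": {"question": "..."}, "stream": True},
    stream=True,
)
for line in r.iter_lines(decode_unicode=True):
    if line:  # skip empty lines between records
        print(json.loads(line))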

Code samples

cURL:

curl https://trucopilot.com/api/llm/$ORG/flows/$SLUG/run \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "input": { "question": "..." } }'

JavaScript (fetch):

const r = await fetch(
  `https://trucopilot.com/api/llm/${ORG}/flows/${SLUG}/run`,
  {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ input: { question: "..." } }),
  }
);
const data = await r.json();

Python (requests):

import requests

r = requests.post(
    f"https://trucopilot.com/api/llm/{ORG}/flows/{SLUG}/run",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"input": {"question": "..."}},
    timeout=60,
)
print(r.json())

OpenAI SDK compatible

If your agent's Output node is set to OpenAI ChatCompletion, the official OpenAI clients work as-is — just point base_url at the flow.

from openai import OpenAI

client = OpenAI(
    base_url=f"https://trucopilot.com/api/llm/{ORG}/flows/{SLUG}",
    api_key=TOKEN,
)

resp = client.chat.completions.create(
    model="trucopilot-flow",
    messages=[{"role": "user", "content": "Summarize today's standup."}],
)
print(resp.choices[0].message.content)

Streaming works through the SDK's usual stream=True path.
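
A sketch of that path, continuing the client from the snippet above:

stream = client.chat.completions.create(
    model="trucopilot-flow",
    messages=[{"role": "user", "content": "Summarize today's standup."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # some chunks carry role or finish metadata, not text
        print(delta, end="", flush=True)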

Per-agent dynamic docs

Once you sign in, each agent has its own dynamic documentation page that shows the typed input schema, an interactive request builder, and a fresh test token. Open the agent and click API docs, or visit /ai-management/{org}/llm-hub/api-docs.
