Compose agents on a drag-and-drop canvas — LLM provider, system prompt, tools, memory, retrieval, agent-to-agent. Deploy with a slug. Call with one HTTP request, OpenAI-SDK compatible.
Your AI loses context between sessions. We fix that. A Kanban board with native MCP integration — every task, decision, and file change is tracked automatically. Your LLM picks up exactly where it left off, across days and projects.
add_comment · get_project_status · create_ticket
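Those tool names are exposed over MCP, so a client invokes them with a standard JSON-RPC `tools/call` request. A minimal sketch of that payload, assuming the shape defined by the Model Context Protocol spec (the tool name is real; the argument fields below are illustrative, not the board's actual schema):

```python
import json

# JSON-RPC 2.0 "tools/call" request, per the Model Context Protocol spec.
# The argument names below are hypothetical placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_ticket",
        "arguments": {
            "title": "Fix login redirect",      # hypothetical field
            "description": "Loop on /auth/cb",  # hypothetical field
        },
    },
}
print(json.dumps(request, indent=2))
```

Your MCP client library builds this envelope for you; the sketch just shows what crosses the wire.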
Compose agents from typed nodes on a Drawflow-powered canvas. Wire an LLM provider into a system-prompt node, attach tools and a vector store, branch with switch / if-else logic. Save the flow, give it a slug, and you have a callable endpoint.
Once the flow has a slug, it runs over HTTP. Three patterns teams build in their first afternoon:
Classify incoming tickets, route billing to a finance-RAG agent, route bugs to engineering, escalate the rest. One Switch node, three downstream agents.
Send a PDF or webpage. The agent runs OCR, extracts entities into a typed JSON schema, and writes the result back to your database. Custom-Python node closes the loop.
Decompose the query, fan out parallel retrieval against your vector store, summarize per-source, then merge into a single grounded answer. Agent-to-Agent edges orchestrate the rest.
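The triage pattern above, for instance, is just a small graph: one classifier, one Switch, three downstream agents. A hand-written sketch of what the canvas represents (the node and edge field names here are illustrative, not TruCopilot's saved-flow format):

```python
# Illustrative graph for the ticket-triage pattern.
# Field names are made up for the sketch; the real flow format may differ.
flow = {
    "slug": "ticket-triage",
    "nodes": [
        {"id": "classify", "type": "llm",
         "system_prompt": "Label the ticket: billing, bug, or other."},
        {"id": "route",    "type": "switch", "on": "classify.label"},
        {"id": "billing",  "type": "agent",  "ref": "finance-rag"},
        {"id": "bugs",     "type": "agent",  "ref": "engineering"},
        {"id": "escalate", "type": "agent",  "ref": "human-handoff"},
    ],
    "edges": [
        ("classify", "route"),
        ("route", "billing"),   # case: billing
        ("route", "bugs"),      # case: bug
        ("route", "escalate"),  # default
    ],
}
print(f"{len(flow['nodes'])} nodes, {len(flow['edges'])} edges")
```

On the canvas you draw this; the slug is what turns it into an endpoint.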
Each saved flow becomes a callable endpoint. Token-authenticated, typed input vars, configurable response shape (raw, OpenAI ChatCompletion, or your own JSON template).
curl https://trucopilot.com/api/llm/{org_code}/flows/{slug}/run \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "input": { "question": "How do I reset my password?" } }'
const r = await fetch(
  "https://trucopilot.com/api/llm/{org_code}/flows/{slug}/run",
  {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ input: { question: "..." } }),
  }
);
const data = await r.json();
import requests

r = requests.post(
    "https://trucopilot.com/api/llm/{org_code}/flows/{slug}/run",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"input": {"question": "..."}},
)
print(r.json())
from openai import OpenAI

client = OpenAI(
    base_url="https://trucopilot.com/api/llm/{org_code}/flows/{slug}",
    api_key=TOKEN,
)
resp = client.chat.completions.create(
    model="trucopilot-flow",
    messages=[{"role": "user", "content": "..."}],
)
print(resp.choices[0].message.content)
Replace {org_code} and {slug} with values from your dashboard.
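Since the response shape is configurable, your caller should match the shape you chose. A minimal sketch of handling the two common cases, assuming the ChatCompletion layout follows OpenAI's schema (the "output" key for the raw shape is an assumption; check your flow's response config):

```python
# Extract the answer text from either response shape.
# "choices[0].message.content" follows OpenAI's ChatCompletion schema;
# the "output" key for the raw shape is an assumed placeholder.
def extract_answer(payload: dict) -> str:
    if "choices" in payload:  # OpenAI ChatCompletion shape
        return payload["choices"][0]["message"]["content"]
    return payload.get("output", str(payload))  # raw shape (assumed key)

completion = {"choices": [{"message": {"role": "assistant",
                                       "content": "Reset via Settings."}}]}
print(extract_answer(completion))  # → Reset via Settings.
```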
Tools we built because we hit the problems ourselves — image hosting, image generation, hallucination tracing, PHP caching, and the full management dashboard.
Quick, focused image hosting for vibe-coding workflows. Drop a screenshot, get a public URL, paste into any LLM chat. No accounts, no tracking.
Open Image Host →
Generate and edit images with Google Gemini directly from Claude Code, Cursor, or Windsurf. Text-to-image and image editing — assets save locally, no context-window crashes.
View on GitHub →
Catch LLM hallucinations before they ship. Trace every claim back to its source document and score the citation confidence. Built for RAG-heavy enterprise stacks.
Visit HalluTrace AI →
The PHP caching library trusted by thousands of production systems. Drivers for Redis / Memcached / APCu / Files / SQLite under one PSR-6 / PSR-16 interface.
Visit phpFastCache →
The full management surface: agents, knowledge, calendar, members, check-in, CMS, facility booking, and the Vibe Tickets board — all in one place per organization.
Open Dashboard →
No credit card. No quotas. Sign in with Google and create your first agent in under a minute.
Sign in →