## Prerequisites
- Docker + Docker Compose
- Python 3.11+
- A Slack webhook URL (optional, for alerts)
## 1. Start the backend
Clone the repo, copy the environment file, and bring up the stack with Docker Compose.
```bash
git clone https://github.com/dunetrace/dunetrace
cd dunetrace
cp .env.example .env
docker compose build
docker compose up -d
```
Four services come up:
| Service | Port | What it does |
|---|---|---|
| Dashboard | :3000 | Mission control — static HTML |
| Ingest API | :8001 | Accepts events from the SDK |
| Customer API | :8002 | Read-only API for dashboard + integrations |
| Postgres | :5432 | Shared state |
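To confirm all four services are accepting connections before moving on, a quick check with plain TCP sockets works (nothing here is Dunetrace-specific; the ports are the ones from the table above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in [("dashboard", 3000), ("ingest", 8001),
                   ("customer-api", 8002), ("postgres", 5432)]:
    status = "up" if port_open("127.0.0.1", port) else "DOWN"
    print(f"{name:>12} :{port} {status}")
```

A service that builds successfully but crashes on startup will show as DOWN here; `docker compose logs <service>` is the next place to look.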
## 2. Install the SDK
```bash
pip install dunetrace                        # any framework
pip install 'dunetrace[langchain]'           # LangChain / LangGraph
pip install 'dunetrace[langchain,langfuse]'  # LangChain + Langfuse integration
pip install 'dunetrace[otel]'                # OpenTelemetry export
```
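If you are unsure which extras made it into your environment, you can check which top-level modules are importable. The module names below are assumptions about how the extras are packaged; adjust them to match your install:

```python
import importlib.util

def missing(modules):
    """Return the subset of module names that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# e.g. after `pip install 'dunetrace[langchain]'`:
print(missing(["dunetrace", "langchain"]))
```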
## 3. Instrument your agent
The fastest path for a Python agent using OpenAI or Anthropic is the decorator:
```python
from openai import OpenAI

from dunetrace import Dunetrace

openai_client = OpenAI()

dt = Dunetrace()              # no api_key needed for local dev
dt.init(agent_id="my-agent")  # patches openai, anthropic, httpx, requests

@dt.agent()
def run_agent(query: str) -> str:
    # LLM + HTTP calls inside here are tracked automatically
    resp = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

run_agent("What is the capital of France?")
dt.shutdown()
```
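Conceptually, a decorator like `@dt.agent()` wraps your function in a span that records timing and outcome. A toy sketch of the pattern, not Dunetrace's actual implementation:

```python
import functools
import time

def traced(agent_id: str):
    """Toy tracing decorator: record duration and outcome of each call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.monotonic()
            status = "ok"
            try:
                return fn(*args, **kwargs)
            except Exception:
                status = "error"
                raise
            finally:
                span = {
                    "agent_id": agent_id,
                    "name": fn.__name__,
                    "duration_s": time.monotonic() - start,
                    "status": status,
                }
                print(span)  # the real SDK would ship this to the Ingest API

        return inner
    return wrap

@traced("my-agent")
def answer(q: str) -> str:
    return q.upper()

answer("hello")
```

The `finally` block is what makes failed runs observable too: the span is emitted whether the call returns or raises.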
For LangChain, use the callback handler:
```python
from dunetrace import Dunetrace
from dunetrace.integrations.langchain import DunetraceCallbackHandler

dt = Dunetrace()
callback = DunetraceCallbackHandler(dt, agent_id="my-langchain-agent")

# `agent` is your existing LangChain / LangGraph runnable
result = agent.invoke(
    {"messages": [("human", "What is the capital of France?")]},
    config={"callbacks": [callback]},
)
```
## 4. Open the dashboard
Navigate to http://localhost:3000. Your run should appear within fifteen seconds.
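If you'd rather verify from a script than the browser, you can poll the Customer API until the run shows up. The `/runs` path below is a guess, so check the actual route in your deployment; the polling helper itself is generic:

```python
import json
import time
import urllib.request

def poll_until(fetch, pred, timeout_s=15.0, interval_s=1.0):
    """Call fetch() until pred(result) is truthy or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = fetch()
        if pred(result):
            return result
        time.sleep(interval_s)
    raise TimeoutError("condition not met within timeout")

def fetch_runs():
    # Hypothetical endpoint -- adjust to the real Customer API route.
    with urllib.request.urlopen("http://localhost:8002/runs") as resp:
        return json.load(resp)

# runs = poll_until(fetch_runs, lambda r: len(r) > 0)
```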
No api_key is required here: the backend accepts unauthenticated requests when AUTH_MODE=dev. Production deployments should generate and use a dt_live_ key.

## 5. Trigger a failure to verify
The SDK ships with failure scenarios so you can confirm signals fire end-to-end before pointing Dunetrace at production traffic:
```bash
SCENARIO=failures python examples/decorator_agent.py
# → triggers TOOL_LOOP, RETRY_STORM, RAG_EMPTY_RETRIEVAL

SCENARIO=tool_loop python examples/langchain_agent.py
# → triggers TOOL_LOOP via LangChain
```
Each signal appears in the dashboard Alerts page within fifteen seconds and fires a Slack alert if SLACK_WEBHOOK_URL is set.
## 6. Wire up Slack (optional)
Add to .env:
```env
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/xxx/yyy/zzz
SLACK_CHANNEL=#agent-alerts
SLACK_MIN_SEVERITY=HIGH   # LOW | MEDIUM | HIGH | CRITICAL
```
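SLACK_MIN_SEVERITY presumably gates alerts by ordering the four levels. A sketch of that filtering logic, assuming the obvious semantics rather than quoting the worker's actual code:

```python
SEVERITIES = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def should_alert(severity: str, min_severity: str = "HIGH") -> bool:
    """True when severity is at or above the configured threshold."""
    return SEVERITIES.index(severity) >= SEVERITIES.index(min_severity)

print([s for s in SEVERITIES if should_alert(s, "HIGH")])
# → ['HIGH', 'CRITICAL']
```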
Restart the alerts worker:
```bash
docker compose up -d --force-recreate alerts
```
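To confirm the webhook itself works independently of Dunetrace, you can post a test message directly. Slack incoming webhooks accept a JSON body with a `text` field:

```python
import json
import urllib.request

def build_test_message(text: str) -> bytes:
    """Build the JSON body Slack incoming webhooks expect."""
    return json.dumps({"text": text}).encode()

def send(webhook_url: str, text: str) -> None:
    req = urllib.request.Request(
        webhook_url,
        data=build_test_message(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)  # Slack returns 200 on success

# send("https://hooks.slack.com/services/xxx/yyy/zzz", "Dunetrace test alert")
```

If this posts to your channel but Dunetrace alerts never arrive, the problem is in the alerts worker's configuration rather than the webhook.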
## Next steps
- Detector reference — what each of the 15 checks catches, how to tune thresholds.
- Architecture — how the pipeline fits together.
- All integrations — FastAPI, Flask, OpenTelemetry, Loki.