Fix login timeout on /api/session
Sessions are expiring after 5 minutes instead of the configured 24 hours. Reproduces on staging when refresh tokens are issued out of order.
human:alice created the task · just now
Backlog is a local-first task queue your AI coding agents read and write directly. Same database as your CLI. One binary. No server. Built for the agentic loop.
```json
{
  "ref": "TASK-42",
  "title": "Fix login timeout on /api/session",
  "type": "bug",
  "priority": 2,
  "status": "todo",
  "actor": "human:alice",
  "project": "api"
}
```
We spent a decade teaching humans to fit into Jira. Now we're going to ask an LLM to do it? An AI agent's backlog should be a file it can read — not a SaaS it has to apologize to.
A loop is one unit of work: pick → plan → run → review → ship. Backlog stores the queue, attributes every step, and keeps multiple agent sessions from stepping on each other. Four parallel sessions pull from the same queue all day — we've measured ~12× the throughput of one developer running a single agent.
And because each task spawns a fresh subagent with only the context it needs, the average session fits in under 50k tokens instead of the 500k a single long-running thread bloats into — same quality, roughly 10× cheaper per loop.
Each agent reads `backlog task list --status todo --limit 1`, moves the task to `doing`, attaches a plan, ships a PR, posts a completion comment, and marks it `done`. The next session picks up immediately. The DB enforces attribution; the activity log records every step.
The Backlog DB handles concurrent writers from multiple sessions. Each task is owned by exactly one actor at a time. Plans are versioned, so an agent never overwrites another's work. Conflicts surface as commits, not silent drift.
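The single-owner guarantee can be pictured as a conditional update: a claim succeeds only while the task is still unclaimed, so two sessions racing for the same row cannot both win. A minimal Python/SQLite sketch (the `tasks` table and its columns are hypothetical stand-ins, not Backlog's actual schema):

```python
# Illustrative sketch only: table and column names are assumptions,
# not Backlog's real schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (ref TEXT PRIMARY KEY, status TEXT, owner TEXT)")
db.execute("INSERT INTO tasks VALUES ('TASK-42', 'todo', NULL)")
db.commit()

def claim(actor: str) -> bool:
    """Atomically move one todo task to doing, owned by exactly one actor."""
    cur = db.execute(
        "UPDATE tasks SET status = 'doing', owner = ? "
        "WHERE ref = 'TASK-42' AND status = 'todo'",
        (actor,),
    )
    db.commit()
    return cur.rowcount == 1  # one row updated means this actor won the claim

print(claim("ai:session-1"))  # True  -- first claim wins
print(claim("ai:session-2"))  # False -- status is no longer 'todo'
```

The conditional `WHERE status = 'todo'` is what makes the claim atomic: the loser's update matches zero rows instead of silently overwriting the owner.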
The queue is just rows in the Backlog DB. Four sessions is a starting point — add more for bigger backlogs, or run nightly batches against the same database. The CLI, the MCP server, and the web UI all share the same writes.
Baseline: one developer running a single agent session, plan-then-run-then-review per task. Same developer, same agent model, same workday — four sessions hitting one queue. Throughput measured by done tasks in the activity log. The 12× figure is the median across week-long runs; we've seen higher with longer queues and lower with tasks that require human review on every PR.
One long agent thread accumulates everything it has seen — old plans, dead branches, files it loaded an hour ago. Token bills scale with that bloat. With backlog, each task spawns a focused subagent with only what the task needs: the task description, the relevant memory entries, and the linked plan. Most loops finish in 30–60k tokens. The same work in a single 500k-token session costs an order of magnitude more for the same output.
Different work needs different context. A SAST finding gets a security-focused subagent with the scanner output and the auth-related docs. A feature task gets a feature-shaped subagent with the PRD memory and the affected handler files. Each subagent is born fresh, knows only what it needs, and is gone when the task is done. No cross-contamination, no runaway context.
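The context-assembly step can be sketched in a few lines: the subagent's context is built from the task plus only the memory entries whose tags match. Everything here (field names, tag filter) is illustrative, not Backlog's API:

```python
# Hypothetical sketch of per-task context assembly. Field names are
# illustrative; the real payload shape is Backlog's internal concern.
task = {"ref": "TASK-42", "description": "Fix login timeout on /api/session"}
memory = [
    {"tag": "auth", "text": "Refresh tokens are rotated on every call"},
    {"tag": "billing", "text": "Invoices are generated nightly"},  # unrelated
]
plan = "1. Reproduce with out-of-order refresh tokens\n2. Fix ordering check"

def build_context(task, memory, plan, tags):
    """Give the subagent only the task, the matching memory, and the plan."""
    relevant = [m["text"] for m in memory if m["tag"] in tags]
    return {"task": task["description"], "memory": relevant, "plan": plan}

ctx = build_context(task, memory, plan, tags={"auth"})
print(len(ctx["memory"]))  # 1 -- the billing entry never enters the context
```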
Run `backlog init` in any directory. Three files are created. The workspace is ready.

```shell
backlog init
backlog task add -p api -t "Fix login timeout" --type bug --priority P2
```
The whole backlog is one file, `backlog.db`. Commit it with your code. No sync needed.

```shell
git add backlog.db backlog.json && git commit -m "chore: update backlog"
```
Every task, plan, comment, and doc is tagged with a typed actor at the database level — not a log on top, the row itself. Filter your backlog by who (or what) created anything.
Learn about actors →

```shell
# Human opens a vulnerability task
backlog task add -p api \
  -t "SQL injection in /search" \
  --type vulnerability --priority P1 \
  --as human:alice

# Security scanner imports findings in bulk
backlog import-findings findings.json \
  --as ai:semgrep

# See exactly who did what
backlog task list --actor-kind ai
backlog task list --actor-name alice --type vulnerability
```
Connect Claude Code, Cursor, Codex, or OpenCode directly via the MCP stdio server. The AI reads tasks, writes plans, leaves comments — all attributed with its own actor name. One config, full access.
```json
{
  "mcpServers": {
    "backlog": {
      "command": "backlog",
      "args": ["mcp", "serve", "--as", "ai:claude-code"],
      "env": {
        "BACKLOG_DB": "/path/to/backlog.db"
      }
    }
  }
}
```
Attach a markdown plan to any task. Every edit creates an immutable version — no history is ever lost. See exactly what changed, who changed it, and why.
How versioned plans work →

| VER | TITLE | ACTOR | NOTE |
|---|---|---|---|
| v1 | Fix unsigned JWT rejection | ai:claude-code | — |
| v2 | Fix unsigned JWT rejection (revised) | human:alice | added key rotation step |
| v3 | Fix unsigned JWT rejection (final) | human:alice | removed step 4 per review |
```shell
backlog plan history $PLAN_ID
```
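Under the hood this style of history is append-only: an edit inserts a new version row and never updates an old one, which is why no history can be lost. A sketch with an assumed `plan_versions` table (not Backlog's real schema):

```python
# Sketch of immutable plan versioning: every edit is an INSERT, never an
# UPDATE. The schema here is an assumption, not Backlog's actual one.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE plan_versions (
    plan_id TEXT, ver INTEGER, title TEXT, actor TEXT, note TEXT,
    PRIMARY KEY (plan_id, ver))""")

def save(plan_id, title, actor, note=None):
    """Append a new version; prior versions are never touched."""
    (last,) = db.execute(
        "SELECT COALESCE(MAX(ver), 0) FROM plan_versions WHERE plan_id = ?",
        (plan_id,)).fetchone()
    db.execute("INSERT INTO plan_versions VALUES (?, ?, ?, ?, ?)",
               (plan_id, last + 1, title, actor, note))

save("PLAN-7", "Fix unsigned JWT rejection", "ai:claude-code")
save("PLAN-7", "Fix unsigned JWT rejection (revised)", "human:alice",
     "added key rotation step")

history = db.execute(
    "SELECT ver, actor FROM plan_versions ORDER BY ver").fetchall()
print(history)  # [(1, 'ai:claude-code'), (2, 'human:alice')]
```

Because `(plan_id, ver)` is the primary key, a concurrent writer that computes the same next version fails loudly instead of clobbering the other's edit.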
A polished workspace served straight from the binary. List, board, grid, and timeline views. Inline editing on every property. No build step, no separate front-end repo.
Stable URLs for everything: `/tasks/TASK-42`, `/docs/architecture`, `/memory?tag=decision` — paste one in Slack, refresh, hit back, all work.

Tasks are the core, but the agent loop needs more — plans the agent writes, docs it reads, memory it remembers, attachments it analyses, an activity feed humans audit.
Tasks have a status, type, priority, assignee, and a TASK-N ref humans actually type. The CLI, MCP, and web UI all hit the same row.
```shell
backlog task list --status todo --priority P2 --limit 1 --json
```
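An agent-side consumer of that `--json` output might look like this. The payload below mirrors the task object shown earlier; treat it as an example shape, not a guaranteed wire format:

```python
# Sketch of an agent parsing `backlog task list ... --json` output.
# The JSON shape is an assumption based on the task object shown above.
import json

raw = """[{"ref": "TASK-42", "title": "Fix login timeout on /api/session",
           "type": "bug", "priority": 2, "status": "todo"}]"""

tasks = json.loads(raw)
todo = [t for t in tasks if t["status"] == "todo"]
next_task = min(todo, key=lambda t: t["priority"])  # lower number = higher priority
print(next_task["ref"])  # TASK-42
```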
Whether you're working alone with an AI agent, running a security program, or shipping as a team — the workflow fits.
You create tasks. Your AI agent drafts plans, leaves comments, imports findings — all tagged with its own actor name. At the end of the sprint you can see exactly what was done and by whom. No guessing what the AI touched.
```shell
# What has the AI worked on?
backlog task list --actor-kind ai

# What did alice close this week?
backlog task list --actor-name alice --status done

# AI-created plans on open bugs
backlog task list --type bug --actor-kind ai --status todo
```
Point your scanner at the findings format. Import in bulk with a single command. Every vulnerability becomes a TASK-N with source, external ref, and a pre-attached remediation plan — attributed to the tool that found it.
```shell
backlog import-findings scan.json --as ai:semgrep

# Review what was imported
backlog task list \
  --actor-name semgrep \
  --type vulnerability \
  --priority P1
```
Commit backlog.db and backlog.json to the repo. Every team member — and every AI agent — has the same backlog from the moment they check out the branch. No sync service. No separate account to create.
```shell
# On checkout — sync manifest with DB
backlog sync

# Open the shared web UI
backlog web --port 8080

# Export for the weekly report
backlog export --format md --project api
```
A typed work queue with first-class actor attribution at the row level.
`human:name` or `ai:name` on every write.

The pieces an agent needs to plan, ship, and pick up the next task.
CLI, MCP, web UI, and skills — all on the same service layer.
`--json` on every command. `/backlog` + `/backlog-enhance-tasks`.

The boring parts that make a v1 actually trustworthy.
Back up with a single SQLite `VACUUM INTO`.

Backlog loses on real-time collaboration and 500-person rollouts. Different tools, different problems.
| | Backlog | Linear | Jira | GitHub Issues |
|---|---|---|---|---|
| AI agent can read & write directly | native (MCP) | via API + glue | via API + glue | via API + glue |
| Typed actor on every row | yes — column | no | no | no |
| Immutable plan versions | yes — every edit | edit overwrites | edit overwrites | edit overwrites |
| Workspace = file in your repo | single Backlog DB | cloud-only | cloud-only | tied to repo |
| Self-hosted / offline | single binary | no | on-prem tier | enterprise tier |
| Real-time multi-cursor editing | no | yes | yes | partial |
| Built-in integrations marketplace | no | extensive | extensive | extensive |
| Pricing | free · MIT | per seat | per seat | per repo |
If you need a sprint board for fifty stakeholders, use Linear. If you want a queue your agents can drain overnight, this.
Commit `backlog.db` and `backlog.json` to the same repo as your code. Every checkout has the full backlog. Pulls merge it like any other file. For day-to-day collaboration during a session, point your team at one machine running `backlog web` on a port.

Merge conflicts on `backlog.db`? Treat `backlog.db` like a build artefact — one person rebases, runs `backlog import` from the other branch's DB, and commits the merged result. The `--dry-run` mode previews the merge first. For tighter sync, `backlog sync` reconciles against `backlog.json`.

Need to leave? `backlog export --format json`, or pull rows through the HTTP API. The data model is documented; importing into a Postgres-backed clone is a weekend, not a migration. You're never locked in — your backlog is a flat file.

Single static binary. No CGO. No runtime dependencies. Go 1.22+.
```shell
go install github.com/mazen160/backlog/cmd/backlog@latest
```

Or grab a release binary:

```shell
# macOS (Apple Silicon)
curl -L https://github.com/mazen160/backlog/releases/latest/download/backlog_darwin_arm64.tar.gz | tar xz
sudo mv backlog /usr/local/bin/

# Linux (amd64)
curl -L https://github.com/mazen160/backlog/releases/latest/download/backlog_linux_amd64.tar.gz | tar xz
sudo mv backlog /usr/local/bin/
```
Then run `backlog init` in any directory to create your first workspace.