Headless Mode

Run Auto-Coder in non-interactive environments with auto-coder.run — ideal for scripts, CI/CD pipelines, Claw, and other automation scenarios.

Headless Mode (auto-coder.run)

When you need to run Auto-Coder in non-interactive environments — Claw, CI/CD pipelines, batch scripts, cron jobs — an interactive terminal is a poor fit. auto-coder.run is the headless CLI entrypoint built for this: one invocation reads a prompt, executes the task, emits the result, and exits.

auto-coder.run and auto-coder.cli are two aliases for the same entrypoint; use whichever you prefer.

How It Differs from Interactive Mode

| Aspect | auto-coder.chat (interactive) | auto-coder.run (headless) |
| --- | --- | --- |
| Execution model | Long-running terminal session | Single one-shot invocation |
| Input | Slash commands (/auto, /async, …) | CLI args, file, or stdin |
| Output | Rich terminal rendering | Plain text / JSON / stream JSON |
| Best for | Day-to-day coding collaboration | Scripts, automation, CI/CD, agent platforms |
| Sessions | Built-in /new, /list | Resumed via --continue / --resume <id> |

Minimal Example

Pass the prompt as a positional argument — one shot, one result:

auto-coder.run "Add JSDoc comments to every function in src/utils.ts"

Auto-Coder auto-selects the first model that has an API key configured, runs the task, and prints the final result to stdout.

You can also pipe the prompt via stdin:

echo "Fix all typos in the README" | auto-coder.run

For automation, we recommend this combination:

auto-coder.run \
  --from-prompt-file task.md \
  --verbose \
  --output-format stream-json

What each flag gives you:

  • --from-prompt-file task.md — Prompt lives in a file: version-controlled, reusable, and friendly for long task descriptions.
  • --verbose — Emits the full execution trace (LLM thoughts, tool calls, tool results) for debugging and auditing.
  • --output-format stream-json — Streams each event as a JSON object so an upstream program can parse progress in real time.

Three Output Formats

| Format | Best For |
| --- | --- |
| text (default) | Human-readable terminal output; only the final result |
| json | One-shot, fully aggregated event list; easy for batch scripts |
| stream-json | One JSON object per line per event; ideal for real-time consumers |

auto-coder.run "<your prompt>" --output-format json
auto-coder.run "<your prompt>" --output-format stream-json

Common event types include start, llm_thinking, llm_output, tool_call, tool_result, completion, error, token_usage, commit, and pull_request.

Common Options

Input

| Option | Description |
| --- | --- |
| <prompt> | Positional argument — the task text |
| --from-prompt-file <file> | Read prompt content from a file |
| --input-format text\|json\|stream-json | Input format for stdin (default text) |

Priority: positional argument > --from-prompt-file > stdin.
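This precedence can be sketched as a small wrapper function. Note that resolve_prompt is a hypothetical helper for illustration — the real CLI applies this logic internally:

```shell
#!/usr/bin/env bash
# Sketch of the documented input precedence:
# positional argument > --from-prompt-file > stdin.
# resolve_prompt is a hypothetical helper, not part of auto-coder.run itself.
resolve_prompt() {
  local positional="$1" prompt_file="$2"
  if [ -n "$positional" ]; then
    printf '%s\n' "$positional"   # 1. positional argument wins
  elif [ -n "$prompt_file" ] && [ -f "$prompt_file" ]; then
    cat "$prompt_file"            # 2. then --from-prompt-file
  else
    cat                           # 3. finally stdin
  fi
}

echo "stdin prompt" | resolve_prompt "" ""     # -> stdin prompt
resolve_prompt "positional prompt" "task.md"   # -> positional prompt
```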

Output

| Option | Description |
| --- | --- |
| --output-format text\|json\|stream-json | Output format (default text) |
| -v, --verbose | Emit detailed execution trace |

Session Reuse

Each headless invocation is independent by default. To carry context across calls:

# Continue the most recent session
auto-coder.run -c "Polish that snippet further"

# Resume a specific session
auto-coder.run -r <SESSION_ID> "Continue from where we left off"

| Option | Description |
| --- | --- |
| -c, --continue | Continue the most recent session |
| -r, --resume <SESSION_ID> | Resume a specific session |

--continue and --resume are mutually exclusive; passing both raises an error.

Model & Behavior

| Option | Description |
| --- | --- |
| --model <name> | Model name (e.g. v3_chat). If omitted, the first model with an API key is auto-selected |
| --max-turns <n> | Max conversation turns (default 10000) |
| --system-prompt <text> | Inline system prompt |
| --system-prompt-path <file> | Load system prompt from a file (mutually exclusive with --system-prompt) |
| --permission-mode manual\|acceptEdits | Permission mode. acceptEdits auto-accepts edit operations — suitable for fully unattended runs |
| --allowed-tools tool1 tool2 ... | Whitelist of tools the agent may use |
| --include-rules | Inject project rule files (.autocoderrules/) into the context |
| --include-agent-definitions | Include agent definitions |
| --include-skills | Include skills |
| --include-libs lib1,lib2 | Auto-add LLM-friendly packages (comma-separated) |
| --pr | Create a Pull Request after the task completes |
| --loop <n> | Run the task n times (default 1) |
| --loop-keep-conversation | Keep conversation continuity across loops |
| --loop-additional-prompt <text> | Extra prompt injected on each loop iteration |
| --workflow <name\|file> | Execute a named or file-based workflow |

Unattended tip: --permission-mode acceptEdits auto-accepts code edits. Use it deliberately, and pair it with --pr or a dedicated branch so changes remain reviewable.
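One way to keep acceptEdits runs reviewable is to create a throwaway branch before the run. A rough sketch, with the actual auto-coder.run call commented out so the snippet runs standalone, and using a temporary repo plus an illustrative branch-naming scheme:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Illustrative only: put unattended edits on a dedicated branch so
# auto-accepted changes stay reviewable. A temp repo stands in for yours.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=bot -c user.email=bot@example.com \
  commit -q --allow-empty -m "base"
branch="auto/unattended-task"   # naming scheme is an assumption
git -C "$repo" checkout -qb "$branch"
# auto-coder.run --from-prompt-file task.md --permission-mode acceptEdits --pr
echo "edits would land on: $(git -C "$repo" rev-parse --abbrev-ref HEAD)"
```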

Configuration Management

In addition to running tasks, auto-coder.run has a built-in config subcommand:

# Set a single value
auto-coder.run config model=v3_chat

# Set multiple values at once
auto-coder.run config model=v3_chat max_turns=10

# Show help
auto-coder.run config --help

These settings are persisted at the project level so you can omit flags like --model in later calls.

Memory Scheduler

auto-coder.run also exposes a memory subcommand for controlling the project's memory task scheduler:

# Start the scheduler with a 600-second interval (default 300s)
auto-coder.run memory start 600

# Inspect status
auto-coder.run memory status

# List memory tasks
auto-coder.run memory list

# Run a task manually
auto-coder.run memory run <task_name>

# Stop the scheduler
auto-coder.run memory stop

Parallel Execution: --async

For large tasks, --async splits a single Markdown document by headings or by a delimiter into multiple sub-tasks, then runs them in parallel inside isolated git worktrees.

Simplest form:

cat tasks.md | auto-coder.run --async

Split Modes

| Option | Description |
| --- | --- |
| --split h1\|h2\|h3\|any\|delimiter | Split mode (default h1) |
| --delimiter <str> | Custom delimiter (used with --split delimiter; default ===) |
| --min-level <n> | Minimum heading level (used with --split any) |
| --max-level <n> | Maximum heading level (used with --split any) |

Examples:

# Split on H2 headings
cat tasks.md | auto-coder.run --async --split h2

# Split on any heading between H1 and H3
cat tasks.md | auto-coder.run --async --split any --min-level 1 --max-level 3

# Split on a custom delimiter
cat tasks.md | auto-coder.run --async --split delimiter --delimiter "==="
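Conceptually, heading-based splitting just cuts the document wherever a heading of the chosen level begins. A rough awk sketch of h2 splitting — not the actual implementation, and the real splitter's handling of preambles and edge cases may differ:

```shell
#!/usr/bin/env bash
# Rough sketch of what --split h2 does: cut a Markdown doc into one
# sub-task file per "## " heading. Illustration only.
cd "$(mktemp -d)"

split_h2() {
  # bump the counter at each "## " line; route lines to the current file
  awk '/^## /{n++} n{print > ("subtask_" n ".md")}' "$1"
}

cat > tasks.md <<'EOF'
## Add tests
Cover the auth module.
## Update docs
Refresh the README.
EOF

split_h2 tasks.md
ls subtask_*.md   # subtask_1.md  subtask_2.md
```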

Worktrees & Background Runs

| Option | Description |
| --- | --- |
| --workdir <path> | Async workdir (default ~/.auto-coder/async_agent) |
| --from <branch> | Base branch for each worktree (auto-detected when empty) |
| --worktree-name <name> | Explicit worktree name (auto-generated when empty) |
| --task-prefix <prefix> | Prefix for task names, useful for grouping |
| --bg | Run in the background and return a log file path immediately |
| --pr | Create a PR for each completed sub-task |

Example:

# Run in background, auto-create PRs, group by prefix
cat tasks.md | auto-coder.run --async --bg --pr --task-prefix "feature_auth"

Headless async artifacts land under ~/.auto-coder/async_agent/tasks/, with metadata in ~/.auto-coder/async_agent/meta/. See Async Mode for the broader concept.

Typical Scenarios

Scripted Invocation

#!/usr/bin/env bash
set -euo pipefail

auto-coder.run \
  --from-prompt-file tasks/refactor-auth.md \
  --model v3_chat \
  --permission-mode acceptEdits \
  --include-rules \
  --output-format stream-json \
  --verbose

CI/CD Pipeline

Automatically generate a code change from an issue description and open a PR:

auto-coder.run \
  --from-prompt-file .github/prompts/fix-from-issue.md \
  --model v3_chat \
  --permission-mode acceptEdits \
  --pr
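In GitHub Actions, this might look like the following hypothetical job. The workflow file name, trigger, install step, and secret name are all assumptions — adapt them to your setup:

```yaml
# .github/workflows/fix-from-issue.yml (hypothetical)
name: fix-from-issue
on:
  workflow_dispatch:   # or an issue-triggered event
jobs:
  auto-fix:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Auto-Coder   # install command assumed; check your setup docs
        run: pip install auto-coder
      - name: Run headless task
        env:
          MODEL_API_KEY: ${{ secrets.MODEL_API_KEY }}   # secret name assumed
        run: |
          auto-coder.run \
            --from-prompt-file .github/prompts/fix-from-issue.md \
            --model v3_chat \
            --permission-mode acceptEdits \
            --pr
```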

Claw / Agent Platform Integration

For upstream agent platforms, use stream-json to stream events line by line:

auto-coder.run \
  --from-prompt-file /workspace/task.md \
  --verbose \
  --output-format stream-json

The upstream consumer parses one JSON object per line, dispatches on event_type, and gets LLM thoughts, tool calls, and the final result as a live feed.
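A minimal consumer sketch, assuming each line is a JSON object carrying an event_type field. The heredoc stands in for actual auto-coder.run output, and any field beyond event_type is an assumption:

```shell
#!/usr/bin/env bash
# Sketch of a line-by-line stream-json consumer. The heredoc simulates
# auto-coder.run output; only the event_type field is relied on here.
consume() {
  while IFS= read -r line; do
    # crude field extraction to stay dependency-free; prefer jq in real scripts
    type=$(printf '%s' "$line" | sed -n 's/.*"event_type" *: *"\([^"]*\)".*/\1/p')
    case "$type" in
      llm_output)  echo "model said something" ;;
      tool_call)   echo "tool invoked" ;;
      completion)  echo "done" ;;
      *)           echo "other: $type" ;;
    esac
  done
}

consume <<'EOF'
{"event_type": "start"}
{"event_type": "tool_call", "tool": "edit_file"}
{"event_type": "completion"}
EOF
```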

Batch-Process a Plan Document

Fan out a multi-section plan.md into parallel worktrees:

cat plan.md | auto-coder.run \
  --async \
  --split h2 \
  --model v3_chat \
  --permission-mode acceptEdits \
  --bg \
  --pr \
  --task-prefix "plan_q2"

Exit Codes

| Exit Code | Meaning |
| --- | --- |
| 0 | Success |
| 1 | Failure (invalid arguments, runtime error, etc.) |

On failure the error message goes to stderr. With --verbose a full stack trace is attached for easier diagnosis.
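A caller can branch on the exit status directly. In this sketch `false` stands in for a failing auto-coder.run invocation so the snippet runs standalone:

```shell
#!/usr/bin/env bash
# Sketch: react to the documented 0/1 exit codes. `false` simulates a
# failed auto-coder.run call; swap in the real invocation in practice.
run_task() {
  false   # e.g. auto-coder.run --from-prompt-file task.md
}

if run_task; then
  status=ok
else
  status="failed ($?)"
fi
echo "task status: $status"   # task status: failed (1)
```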

Next Steps

  • Review Basic Usage for the other interaction modes
  • Read Async Mode to understand worktree isolation and parallel merging
  • Check Configuration for model and API key setup