Herd your
AI coding agents
Self-hosted orchestrator that turns Slack messages into tested, reviewed pull requests. Pipeline automation, browser verification, and cost tracking included.
How it works
Three steps from idea to pull request
Connect Slack & GitHub
Add your Slack bot token and GitHub PAT. Gooseherd connects to both and starts listening.
Describe the change
Tell the bot what you need in Slack. It triages the task, picks a pipeline, and spins up an agent.
Get a PR with tests
Receive a tested pull request with screenshots, cost tracking, and browser verification results.
Everything you need
Built for teams that ship fast and want agents that keep up
Pipeline Engine
Configurable YAML pipelines with goto, sub-pipelines, and on_failure loops. Ship complex workflows without code changes.
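A pipeline might look something like this. This is an illustrative sketch only: the `goto`, sub-pipeline, and `on_failure` concepts come from the feature above, but the exact keys and schema are assumptions, not documented configuration.

```yaml
# Hypothetical pipeline sketch -- field names are illustrative, not a documented schema
name: fix-and-verify
nodes:
  - id: plan
  - id: implement
    on_failure:
      goto: plan            # loop back and re-plan on failure
      max_retries: 2
  - id: verify
    pipeline: browser-check  # delegate to a sub-pipeline
  - id: open-pr
```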
Browser Verification
Stagehand-powered browser checks with video recording. Catches visual regressions before reviewers do.
Observer Triggers
Auto-fix from Sentry errors, GitHub issues, and Slack messages. Your agents work while you sleep.
Real-time Dashboard
Track runs, replay agent activity, view screenshots and costs. Search, filter, and deep-link to any run.
Cost Tracking
Per-model pricing with per-run cost breakdown. Know exactly what each change costs across providers.
Sandbox Isolation
Docker-out-of-Docker with non-root containers. Each run is isolated with its own filesystem and process space.
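The isolation model above can be sketched in Compose terms. This is a hypothetical illustration of the standard Docker-out-of-Docker pattern (host socket mount plus a non-root user), not Gooseherd's actual compose file; the service name and paths are assumptions.

```yaml
# Hypothetical sketch of the isolation model -- not the shipped compose file
services:
  runner:
    image: ghcr.io/chocksy/gooseherd:latest
    user: "1000:1000"                               # non-root inside the container
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # Docker-out-of-Docker: sibling containers
      - ./work:/work                                # only the work directory is shared
```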
See it in action
Real-time dashboard with run tracking, agent activity replay, and cost analytics
Dashboard screenshot coming soon
Up and running in 5 minutes
Docker pull, configure, done
# Pull the image
docker pull ghcr.io/chocksy/gooseherd:latest
# Configure your tokens
cp .env.example .env
vim .env # add Slack + GitHub tokens
# Launch
docker compose up -d
# Open the dashboard
open http://localhost:8787
Or install from source:
git clone https://github.com/chocksy/gooseherd.git && cd gooseherd && npm ci && npm run build && npm start
Simple pricing
Self-host for free, or let us handle the infrastructure
Run on your own servers with full control over data and configuration.
- MIT licensed
- Your infrastructure, full control
- Unlimited runs
- All features included
- Community support
We manage the infrastructure. You focus on shipping.
- Managed service, no Docker
- Pay per run
- Automatic updates
- Priority support
- Team management
Frequently asked questions
What AI models does Gooseherd support?
Any model available through OpenRouter, plus direct Anthropic and OpenAI APIs. Configure per-pipeline or per-node: use Claude for planning, GPT-4o for browser verification, or any mix.
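Per-node model selection could be expressed along these lines. The key names and model identifiers below are illustrative assumptions, not the documented schema; OpenRouter-style IDs are shown only as an example of mixing providers.

```yaml
# Hypothetical model configuration -- keys and model IDs are illustrative
model: anthropic/claude-sonnet-4   # pipeline-wide default
nodes:
  - id: plan                       # inherits the default model
  - id: browser-verify
    model: openai/gpt-4o           # per-node override
```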
Which repositories can it work with?
Any GitHub repository you grant access to via a Personal Access Token or GitHub App. Works with monorepos, multi-repo setups, and private repositories.
How much does it cost to run?
The orchestrator itself is free and open source. Your only costs are LLM API calls, typically $0.10–$2.00 per task depending on complexity and model choice. The dashboard tracks every dollar.
Is it secure?
Each run executes in an isolated Docker container with a non-root user. No host filesystem access beyond the work directory. Tokens are never logged or exposed in the dashboard.
Can I use it without Slack?
Yes. Trigger runs from the dashboard, GitHub webhooks, or the observer daemon. Slack is optional but provides the best interactive experience.
How does browser verification work?
Gooseherd deploys your PR to a preview environment, then uses Stagehand (Playwright-based) to navigate the app and verify changes visually. It records video and takes screenshots as evidence.
Ready to herd your agents?
Deploy in 5 minutes. Free and open source forever.