MergeWatch runs specialized AI agents on every pull request — before your reviewer opens the diff. Security issues, logic bugs, style violations, and architectural risks surface as inline comments. Add your own custom agents for anything else. Your reviewer makes the final call.
AGPL v3 — the whole codebase, not just the parts we’re comfortable showing you.
v1.0 · Updated April 2026 · Actively maintained
Runs on AWS · GCP · Azure · Bare metal · Fly.io · Railway
7 stars on GitHub
Real numbers, not testimonials. The full pipeline is public — agent prompts, orchestrator, comment templates. Audit what runs on your code before you install it.
View the repo →
AGPL v3 · Self-host anywhere
Your code never has to leave your infrastructure. Run docker-compose up and point it at Anthropic, OpenAI via LiteLLM, Ollama for air-gapped environments, or Amazon Bedrock with IAM auth. No API keys leave your network.
SQL injection, XSS, secrets, OWASP Top 10
User input passed to exec() without sanitization
Null dereferences, off-by-ones, race conditions
Array index i+1 can exceed arr.length
Naming, dead code, missing types
Exported function has no return type annotation
PR intent, risk rating, scope
Adds rate limiting to /api/upload — medium risk
Architecture impact, Mermaid flowchart
Control flow diagram of changed paths
All agents run in parallel — including your custom ones. Total latency is bounded by the slowest agent, not the sum. Most reviews complete in under 60 seconds. Define custom agents in .mergewatch.yml with a name and a prompt.
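A custom agent entry might look like the sketch below. Only the name and prompt fields are confirmed above; the surrounding layout and the example agents are illustrative, not taken from the MergeWatch docs.

```yaml
# .mergewatch.yml — custom agents run in parallel alongside the built-in five.
# Only `name` and `prompt` are documented; treat the rest as a sketch.
agents:
  - name: django-orm-check
    prompt: >
      Review the diff for Django ORM anti-patterns: N+1 queries,
      missing select_related/prefetch_related, and raw SQL that
      bypasses the ORM. Report the file and line for each finding.
  - name: team-conventions
    prompt: >
      Flag any new module that breaks our conventions: snake_case
      file names, one exported class per file, no default exports.
```

Each entry becomes one more agent in the parallel fan-out, so adding checks does not add latency as long as the custom prompt is not the slowest agent.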
MergeWatch runs five parallel specialist agents on every pull request: security (OWASP Top 10, SQL and command injection, exposed secrets, path traversal), bugs (null dereferences, race conditions, off-by-one errors, resource leaks), style (naming, dead code, missing types, unused imports), summary (PR intent, scope, and risk rating), and architectural impact (a Mermaid diagram of changed control flow). All agents execute in parallel via Promise.all, so total latency is bounded by the slowest agent, not the sum — most reviews complete in under 60 seconds end-to-end. You can define additional custom agents in .mergewatch.yml with just a name and a prompt, which makes it easy to add framework-specific checks or team conventions. Findings are deduplicated, ranked by severity and confidence, and posted as a single upsert-style comment on the pull request.
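The Promise.all fan-out described above can be sketched in TypeScript. The agent names come from the text; runAgent and the Finding shape are hypothetical stand-ins, not MergeWatch's actual API.

```typescript
// Sketch of the parallel fan-out described above. `runAgent` is a
// hypothetical stand-in: a real agent would call an LLM with its
// specialist prompt plus the diff.
type Finding = { agent: string; severity: string; message: string };

async function runAgent(name: string, diff: string): Promise<Finding[]> {
  return [{ agent: name, severity: "info", message: `reviewed by ${name}` }];
}

async function reviewPullRequest(diff: string): Promise<Finding[]> {
  const agents = ["security", "bugs", "style", "summary", "architecture"];
  // All five start at once; total latency is bounded by the slowest
  // agent, not the sum of all of them.
  const results = await Promise.all(agents.map((a) => runAgent(a, diff)));
  // Flatten per-agent findings before dedup/ranking and posting.
  return results.flat();
}
```

Because the map starts every promise before any is awaited, a slow style agent never delays the security findings from being collected.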
MergeWatch is priced by pull request volume, not per developer seat. A five-person team and a hundred-person team merging the same number of PRs pay the same amount, so hiring engineers never inflates your bill. The self-hosted distribution is free forever under the GNU AGPL v3 license — you bring your own LLM provider and pay that provider directly, with no markup from MergeWatch on top. The managed SaaS includes five free reviews every month, then runs on prepaid credits priced at actual LLM cost plus a small platform fee. There is no credit card required to start, no seat minimum, and no annual commitment — you can cancel at any time from the dashboard. See mergewatch.ai/pricing for the full interactive breakdown and per-PR cost calculator.
Yes. MergeWatch ships as open-source software under the GNU AGPL v3 license, and the full source code — including every agent prompt, the orchestrator, and all comment templates — is available at github.com/santthosh/mergewatch.ai. Self-hosting requires running docker-compose up, which starts an Express server backed by Postgres on any Docker-capable host. You supply your own GitHub App credentials, database URL, and LLM provider via environment variables, and the server auto-runs Drizzle migrations on startup. MergeWatch runs on AWS, GCP, Azure, bare metal, Fly.io, Railway, or any environment that can run a container. Your code never leaves your infrastructure, which makes the self-hosted distribution appropriate for regulated industries, air-gapped environments, and organizations with strict data residency or compliance requirements.
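A minimal self-hosted setup might look like the compose file below. The image name, port, and the exact variable names beyond DATABASE_URL and LLM_PROVIDER are assumptions — the repo ships the canonical docker-compose.yml, so defer to that.

```yaml
# Hypothetical docker-compose.yml sketch; the repo's own file is canonical.
services:
  mergewatch:
    image: ghcr.io/santthosh/mergewatch:latest   # assumed image name
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://mergewatch:mergewatch@db:5432/mergewatch
      LLM_PROVIDER: anthropic        # anthropic | bedrock | litellm | ollama
      ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}
      GITHUB_APP_ID: ${GITHUB_APP_ID}
      GITHUB_PRIVATE_KEY: ${GITHUB_PRIVATE_KEY}
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: mergewatch
      POSTGRES_PASSWORD: mergewatch
      POSTGRES_DB: mergewatch
```

On startup the server runs its Drizzle migrations against DATABASE_URL, so no separate schema step is needed before the GitHub App starts receiving webhooks.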
MergeWatch supports four LLM provider backends out of the box. Anthropic (direct Claude API) is the default for self-hosted installs and the fastest way to get started. Amazon Bedrock (IAM-authenticated Claude models) powers the managed SaaS and eliminates the need to manage API keys anywhere in your infrastructure. LiteLLM is an OpenAI-compatible proxy that gives access to 100+ providers including OpenAI, Google Gemini, Azure OpenAI, Groq, Together AI, Mistral, and Fireworks. Ollama supports local models like Llama 3 and Qwen for air-gapped or privacy-sensitive environments and is currently experimental. Self-hosted deployments select a provider via the LLM_PROVIDER environment variable. The ILLMProvider interface in @mergewatch/core is a single method, so contributing a new backend usually takes less than a hundred lines of code.
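Since the ILLMProvider interface is a single method, a new backend is mostly plumbing. The method name and signature below are guesses — check @mergewatch/core for the real shape — but the structure of a contribution would look like this:

```typescript
// Sketch of a new LLM backend. The real ILLMProvider lives in
// @mergewatch/core; the method name and signature here are guesses.
interface ILLMProvider {
  complete(prompt: string): Promise<string>;
}

// Hypothetical provider that echoes its input — useful as a test
// double when wiring up a real backend (HTTP client, auth, retries).
class EchoProvider implements ILLMProvider {
  async complete(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}
```

A real backend would replace the body of complete with a call to the provider's API and map its response back to a plain string, which is why most contributions stay under a hundred lines.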
Already checked for you:
No secrets or tokens detected
Lock files look clean
847 lines scanned across 12 files, 40 known vulnerability patterns checked
Focus your energy on:
High risk — your attention here will matter most
Adds authentication middleware to admin routes. One bypass path detected in routes/admin.ts — may be intentional.
| Severity | Confidence | Location | Finding |
|---|---|---|---|
| critical | Likely | src/api/handler.ts:42 | Unsanitized input passed to exec() |
| high | Likely | routes/admin.ts:18 | Auth middleware bypassed on /health |
| warning | Worth checking | lib/db.ts:91 | Missing null check on optional user |
Before you approve, consider:
☐ Is the auth bypass in routes/admin.ts:18 intentional?
☐ Does the new retry logic handle network timeouts?
These are flags, not verdicts. You know this codebase.
Posted as inline review comments + a top-level summary. Re-triggers automatically when new commits are pushed.
Most review tools charge per developer per month. Every engineer you hire makes your bill bigger — the tool that’s supposed to help you scale penalizes growth. MergeWatch prices by PR volume, not headcount. A 5-person team and a 100-person team merging the same number of PRs pay the same.
AGPL v3. Not “source available.” Not a limited open-core wrapper around a closed engine. The full review pipeline — every agent prompt, every orchestrator, every comment template — is in the repo. Your security team can audit it. Your engineers can fork it.
Self-host with a single docker-compose up. Use Anthropic, OpenAI via LiteLLM, Ollama for air-gapped environments, or Amazon Bedrock with IAM-native auth — no API keys to manage. GCP, AWS, Azure, bare metal. If you can run Docker, you can run MergeWatch.
Set up in 2 minutes. No credit card required.