From 0 hours to 102 hours—why cycle time varies 102x across top projects.
Cycle Time Variance
homebrew-cask (0h) vs zephyr (102h)
Based on 3,387,250 merged PRs | GitHub Archive / BigQuery | December 2024
When we analyzed cycle time (PR open to merge) across GitHub's most active projects, we found a staggering 102x difference between the fastest and slowest teams (taking homebrew-core's 1-hour median as the fastest nonzero baseline).
Fastest projects:
| Project | Median Cycle Time | PRs Analyzed |
|---|---|---|
| Homebrew/homebrew-cask | 0h | 8,060 |
| microsoft/vscode | 0h | 1,651 |
| Homebrew/homebrew-core | 1h | 8,459 |
| elastic/elasticsearch | 2h | 3,030 |
| elastic/kibana | 3h | 5,000 |
| grafana/grafana | 3h | 2,019 |
| home-assistant/core | 4h | 2,843 |
Slowest projects:
| Project | Median Cycle Time | PRs Analyzed |
|---|---|---|
| zephyrproject-rtos/zephyr | 102h | 1,813 |
| llvm/llvm-project | 23h | 5,044 |
| rust-lang/rust | 22h | 1,570 |
| NixOS/nixpkgs | 20h | 13,990 |
| cockroachdb/cockroach | 19h | 2,430 |
"Homebrew merges PRs in 0 hours median. Zephyr takes 102 hours. Same platform, same tools—102x difference in speed."
Time to first review varies 12x across projects—from 2 hours to 25 hours median.
| Project | Median Time to First Review |
|---|---|
| homebrew-core | 2h |
| homebrew-cask | 2h |
| core | 7h |
| rust | 8h |
| llvm-project | 10h |
| next.js | 11h |
| nixpkgs | 13h |
| pytorch | 16h |
| kibana | 18h |
| dagster | 24h |
| odoo | 24h |
| App | 25h |
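Time to first review is straightforward to compute from event timestamps: take the earliest review comment or approval on a PR and subtract the PR's creation time. A minimal sketch in Python (the function name and the sample timestamps are illustrative, not from the study's pipeline; timestamp format follows GitHub's ISO-8601 convention):

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    """Parse a GitHub-style ISO-8601 timestamp, e.g. '2024-12-03T09:00:00Z'."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def hours_to_first_review(pr_created_at: str, review_events: list[str]):
    """Hours from PR creation to the earliest review comment or approval.

    review_events holds the timestamps of all review activity on the PR;
    returns None for PRs that were never reviewed."""
    if not review_events:
        return None
    first = min(parse(ts) for ts in review_events)
    return (first - parse(pr_created_at)).total_seconds() / 3600

# Illustrative: a PR opened at 09:00 and first reviewed at 11:30.
print(hours_to_first_review(
    "2024-12-03T09:00:00Z",
    ["2024-12-03T11:30:00Z", "2024-12-03T15:00:00Z"],
))  # 2.5
```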
Homebrew consistently tops our benchmarks with 0–1 hour median cycle times and 2-hour median time to first review across both homebrew-core and homebrew-cask. How do they do it?
Extensive CI automation validates package updates. If the tests pass, it often merges automatically. Human review is the exception, not the rule.
Most PRs are version bumps or new package additions—low risk, templated changes. The review burden is minimal because the blast radius is small.
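Homebrew's actual automation lives in its own tap CI infrastructure, but the general pattern—let CI validate low-risk, templated changes and queue them for merge without waiting on a human—can be sketched as a GitHub Actions workflow. The workflow file and the `version-bump` label below are hypothetical; `gh pr merge --auto` is a real GitHub CLI flag that enables auto-merge once required checks pass:

```yaml
# Hypothetical workflow illustrating the auto-merge pattern;
# not Homebrew's actual setup.
name: automerge-version-bumps
on:
  pull_request:
    types: [labeled]
jobs:
  automerge:
    # Only PRs explicitly labeled as low-risk, templated changes.
    if: github.event.label.name == 'version-bump'
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      # Queue the PR for merge; GitHub merges it once all required
      # status checks succeed. No human review in the loop.
      - run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

The design choice is the point: humans gate only the changes that can't be validated mechanically.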
Key Insight: Homebrew optimized for throughput by standardizing contributions.
8,060 PRs merged in homebrew-cask with a 0-hour median cycle time.
"VS Code merges PRs in 0 hours median. LLVM takes 23 hours. Elite teams aren't just fast—they've designed their process to be fast."
The Zephyr RTOS project has a 102-hour median cycle time—the slowest in our dataset. But this isn't a problem to fix; it's a feature.
Zephyr powers embedded systems, medical devices, and IoT infrastructure. A bug here can brick hardware or create safety hazards. Thorough review is mandatory.
Changes go through architecture review, code review, and testing gates. Multiple maintainers must sign off. Speed is sacrificed for correctness.
Key Insight: Zephyr's slow cycle time is appropriate for its domain.
1,813 PRs merged with rigorous review—because the cost of bugs is measured in recalls and safety incidents, not just Slack messages.
"Elite" doesn't mean "fastest." It means "appropriate for context."
Homebrew's 0-hour median isn't better than Zephyr's 102-hour median—they're solving different problems. The real question isn't "how do we get faster?" but "what's the right speed for our risk profile?" A package manager can afford to ship fast and revert. An RTOS kernel can't. Know your domain before setting benchmarks.
"Review times vary 12x across elite projects—from 2 hours to 25 hours."
Don't compare your enterprise backend to Homebrew. Find projects with similar risk profiles and complexity. Elastic and Grafana (2–3h medians) are better comparisons for most teams.
Homebrew's speed comes from automation, not heroics. Identify your version-bump equivalents—changes that can be validated by CI without human review.
Smaller, focused PRs get reviewed faster everywhere. VS Code's 0-hour median comes partly from a culture of small, atomic changes.
If you're building infrastructure, databases, or safety-critical systems, Zephyr and LLVM are your peers. A 20+ hour cycle time might be exactly right.
This analysis is based on 3,387,250 merged pull requests from GitHub Archive / BigQuery during December 2024. Cycle time is measured from PR creation to merge event. Review time is measured from PR creation to first review comment or approval. Projects shown had significant PR volume to ensure statistical reliability. For full methodology, see the complete study.
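The cycle-time metric defined above can be reproduced from raw PR timestamps. A minimal sketch in Python, assuming records shaped like the GitHub API's `created_at`/`merged_at` fields (the sample data below is illustrative, not from the study):

```python
from datetime import datetime
from statistics import median

def parse(ts: str) -> datetime:
    """Parse a GitHub-style ISO-8601 timestamp, e.g. '2024-12-01T10:00:00Z'."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def median_cycle_time_hours(prs: list[dict]):
    """Median whole hours from PR creation to merge.

    A 0h median means more than half of the PRs merged within
    their first hour. Returns None if no PRs were merged."""
    hours = [
        (parse(pr["merged_at"]) - parse(pr["created_at"])).total_seconds() // 3600
        for pr in prs
        if pr.get("merged_at")  # only merged PRs count toward cycle time
    ]
    return median(hours) if hours else None

# Illustrative data: two PRs merged within the hour, one after 102 hours.
prs = [
    {"created_at": "2024-12-01T10:00:00Z", "merged_at": "2024-12-01T10:20:00Z"},
    {"created_at": "2024-12-01T09:00:00Z", "merged_at": "2024-12-01T09:45:00Z"},
    {"created_at": "2024-12-02T08:00:00Z", "merged_at": "2024-12-06T14:00:00Z"},
]
print(median_cycle_time_hours(prs))  # 0.0
```

Note how a heavy tail disappears in the median: one 102-hour PR leaves the 0-hour median untouched, which is why the study reports medians rather than means.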
CodePulse shows you exactly where your cycle time and review time stand—and what's slowing you down.