Global GitHub stats are misleading. Here's what matters for teams.
Median Cycle Time for reviewed PRs: 3h
Based on 117,413 reviewed PRs | GitHub Archive / BigQuery | October 2025
When you see "GitHub average merge time" statistics, you're not seeing enterprise reality. You're seeing a mix of solo projects, hobby repos, automated bots, and instant self-merges.
85.4% of GitHub PRs have no code review at all. When you include them in "averages," you're not measuring code review—you're measuring the absence of it.
To find meaningful benchmarks for teams that actually practice code review, we filtered to PRs that received at least one review event. That's only 14.6% of all PRs—but it's the slice that matters for enterprise engineering teams.
"85% of GitHub PRs have no review. Using global averages is like measuring traffic speed by including parked cars."
By filtering to PRs with actual code review, we get metrics that reflect team-based development workflows—the kind most enterprises care about.
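The filtering effect is easy to see with a toy calculation. The numbers below are hypothetical, purely to illustrate how a majority of instant self-merges drags a "global" median toward zero while the reviewed subset tells a different story:

```python
from statistics import median

# Hypothetical PR cycle times in hours (illustrative, not study data).
self_merged = [0.0, 0.0, 0.0, 0.1, 0.2]  # the ~85% with no review
reviewed = [3.0, 5.5, 24.0]              # the ~15% with at least one review

all_prs = self_merged + reviewed

print(f"global median:   {median(all_prs):.1f}h")   # dominated by instant merges
print(f"reviewed median: {median(reviewed):.1f}h")  # the team-relevant benchmark
```

The global median lands near zero even though every reviewed PR in the sample took hours; that is the "parked cars" effect in miniature.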
- 117,413 reviewed PRs
- 49,808 unique repos
- 50,274 unique authors
- 14.6% of all PRs

All GitHub PRs: includes solo projects, hobby repos, and self-merged code.
Reviewed PRs: received at least one code review, making them more representative of team workflows.
The 3-hour median cycle time breaks down into three phases. Waiting for review dominates—but the story is more nuanced than the averages suggest.
- Waiting (PR opened to first review): median 0.6h
- Review (first review to approval): median 0h
- Merge (approval to merged): median 0.1h
92% of time is waiting
Waiting for review (96.9h average) dominates the cycle. Actual review time (4.8h) and merge delay (6.9h) are relatively quick once attention arrives. The averages sit far above the medians because a long tail of stale, long-forgotten PRs pulls them upward.
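The three phases are simple timestamp differences between PR events. A minimal sketch, using illustrative timestamps and field names (not the GitHub Archive schema):

```python
from datetime import datetime

# Hypothetical event timestamps for a single PR.
opened       = datetime(2025, 10, 1, 9, 0)
first_review = datetime(2025, 10, 1, 9, 36)  # ends the waiting phase
approved     = datetime(2025, 10, 1, 9, 36)  # review phase is 0h here
merged       = datetime(2025, 10, 1, 9, 42)  # ends the merge phase

def hours(delta):
    return delta.total_seconds() / 3600

waiting = hours(first_review - opened)    # PR opened -> first review
review  = hours(approved - first_review)  # first review -> approval
merge   = hours(merged - approved)        # approval -> merged

print(f"waiting {waiting:.1f}h, review {review:.1f}h, merge {merge:.1f}h")
# waiting 0.6h, review 0.0h, merge 0.1h
```

Compute these per PR, then take the median of each phase across the dataset to get the benchmark figures above.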
"The 3-hour median cycle time is the real enterprise benchmark—not GitHub's 0h (which reflects instant self-merges)."
Even among PRs that receive code review, a significant portion are still self-merged. But the gap tells an important story.
All GitHub PRs: 71.5% self-merged
Reviewed PRs: 52% self-merged (19pp lower than global)
The 19 percentage point gap shows that review culture makes a difference. Repos that practice code review have meaningfully lower self-merge rates—but 52% is still high. Even "reviewed" PRs often get approved and then merged by the author.
Stop comparing your team to GitHub averages. They're meaningless for enterprise engineering.
The 3-hour median cycle time from reviewed PRs is your real benchmark. If your team is slower, focus on the waiting phase—that's 92% of the problem. If you're faster, you're likely operating at elite levels. Global stats include too much noise from solo projects and instant self-merges to be useful for team workflows.
If your median PR cycle time exceeds 3 hours, you have room to improve. Elite teams ship faster—but this is the benchmark for teams with healthy review practices.
92% of cycle time is waiting for that first review. Dedicated review times, smaller PRs, and better reviewer assignment can cut this dramatically.
The 52% self-merge rate among reviewed PRs sets a realistic baseline. If your rate is higher, your branch protection rules may not be effective.
Reports showing instant merge times are measuring self-merges, not code review. They're not relevant to your engineering culture goals.
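To check your own self-merge rate, you need a classification rule. The study does not spell one out, so the definition below is an assumption: a PR counts as self-merged when the merging user is also the author.

```python
# Assumed definition: self-merged means the PR author performed the merge.
def is_self_merged(pr: dict) -> bool:
    return pr["merged_by"] == pr["author"]

# Illustrative sample, not real data.
prs = [
    {"author": "alice", "merged_by": "alice"},  # self-merged
    {"author": "bob",   "merged_by": "carol"},  # merged by someone else
    {"author": "dave",  "merged_by": "dave"},   # self-merged despite review
]

rate = sum(is_self_merged(pr) for pr in prs) / len(prs)
print(f"self-merge rate: {rate:.0%}")
```

Note that a PR can be reviewed, even approved, and still count as self-merged under this rule, which is exactly the pattern behind the 52% figure.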
Related reading:
- The full year-over-year analysis with 800K+ PRs.
- 92% of PR cycle time is waiting for review. Deep dive into the phases.
- 71.5% of PRs are self-merged, up from 68% last year.
- Codex, Gemini CLI, and Claude Code ship 70% faster with 6x review engagement.

This analysis uses the "reviewed PRs" subset from our 2025 Engineering Benchmarks study. A PR is classified as "reviewed" if it received at least one PullRequestReviewEvent before merge. Data source: GitHub Archive / BigQuery for October 2025. Sample: 117,413 reviewed PRs from 49,808 repositories. Cycle time phases are calculated from timestamp differences between PR events.
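The classification rule can be sketched in a few lines. The event dicts below are illustrative stand-ins, not the raw GitHub Archive schema:

```python
# A PR is "reviewed" if at least one PullRequestReviewEvent precedes the merge.
def is_reviewed(pr_events: list[dict]) -> bool:
    merged_at = next(e["ts"] for e in pr_events if e["type"] == "MergedEvent")
    return any(
        e["type"] == "PullRequestReviewEvent" and e["ts"] < merged_at
        for e in pr_events
    )

# Illustrative event streams for two PRs.
pr_a = [{"type": "PullRequestReviewEvent", "ts": 1}, {"type": "MergedEvent", "ts": 2}]
pr_b = [{"type": "MergedEvent", "ts": 1}]  # instant self-merge, no review

print(is_reviewed(pr_a), is_reviewed(pr_b))  # True False
```

Applied at scale, this rule is what separates the 14.6% reviewed subset from the global firehose.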
CodePulse shows you exactly where your cycle time goes—waiting, review, or merge.