What changed in how software ships—and what to watch in 2026
| Headline | Metric | Scope | vs 2024 |
|---|---|---|---|
| 90% | No review | 1000+ line PRs | +7 pp |
| 71% | Self-merged | All PRs | +3 pp |
| 20x | Less scrutiny per line | Large PRs | Similar |
| 38% | Longer wait | New contributors | Improved from 53% |
Based on analysis of 802,979 merged PRs from GitHub Archive / BigQuery | October 2025
A year after our landmark 2024 study (3,387,250 PRs), we analyzed over 800,000 merged pull requests from GitHub's public archive to see how code review practices have evolved. The trends are clear, and they are concerning.
The 2025 story: Self-merge rates climbed to 71%, massive PRs now ship without review 90% of the time, and bot PRs collapsed to just 15.5%. The automation boom is over—but the review gap is widening.
Here's what changed in the past year. Some trends accelerated, others reversed—but the overall picture is one of faster shipping with less oversight.
| Metric | Change vs 2024 | Takeaway |
|---|---|---|
| Self-merge rate | +3.45 pp | More code shipping without peer review |
| Bot PRs | -22.37 pp | Dramatic decline in automated PRs |
| No review (1000+ lines) | +7.7 pp | Large PRs getting even less scrutiny |
| First-timer wait penalty | -15.5 pp | Onboarding experience improving |
| Same-day merges | +2.6 pp | Faster merge cycles |
| Peak merge day | Shifted | Merge patterns moved mid-week |
GitHub's 800K+ monthly PRs include everything from solo hobby projects to enterprise teams. Comparing your team to "GitHub average" can be misleading. We've split the data into two benchmarks: all PRs (the full picture) and reviewed PRs only (more representative of team-based development).
- All PRs: includes solo projects, hobby repos, and self-merged code
- Reviewed PRs only: PRs with code review, more representative of team workflows (✓ recommended benchmark for team comparisons)
Why this matters: The "0 hour median cycle time" for all GitHub PRs reflects reality—most code ships instantly without review. But if your team does code reviews, the 3-hour median for reviewed PRs is a more meaningful benchmark. First-timers in reviewed repos wait 15.2 hours (vs 1.4h for repeat contributors)—a 10.9x penalty.
For PRs that go through code review, we can break down the cycle into three phases. This is the real data your team can benchmark against—and it maps directly to DORA's Lead Time for Changes metric.
Based on 117,413 PRs that received at least one code review. The P90 (90th percentile) shows what "slow" looks like—useful for SLA planning.
Wait times generally grow with PR size, but XL PRs (8.7h) actually wait longer than massive ones (7.3h), suggesting that massive PRs may be auto-generated changes or batch imports.
Our "Total Cycle Time" maps to DORA's Lead Time for Changes (code committed → running in production). For reviewed PRs, the 3-hour median and 149-hour P90 give you real benchmarks. Elite teams target under 1 hour; high performers under 1 day; medium performers under 1 week.
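To make the cycle-time benchmarks and DORA bands concrete, here is a minimal sketch, assuming each merged PR carries created_at and merged_at timestamps (the field names are illustrative, not the study's schema). It computes the median and P90 cycle time in hours and maps the median onto the bands quoted above.

```python
# Minimal sketch: median and P90 PR cycle time in hours, mapped onto the DORA
# Lead Time for Changes bands quoted above. Field names are illustrative.
from datetime import datetime

def cycle_hours(pr):
    created = datetime.fromisoformat(pr["created_at"])
    merged = datetime.fromisoformat(pr["merged_at"])
    return (merged - created).total_seconds() / 3600

def percentile(values, pct):
    """Nearest-rank percentile of a non-empty list."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def dora_band(median_hours):
    if median_hours < 1:
        return "elite (< 1 hour)"
    if median_hours < 24:
        return "high (< 1 day)"
    if median_hours < 24 * 7:
        return "medium (< 1 week)"
    return "low (> 1 week)"

prs = [
    {"created_at": "2025-10-01T09:00:00", "merged_at": "2025-10-01T12:00:00"},
    {"created_at": "2025-10-02T10:00:00", "merged_at": "2025-10-08T10:00:00"},
]
hours = [cycle_hours(pr) for pr in prs]
median = percentile(hours, 50)
print(f"median {median:.1f}h, P90 {percentile(hours, 90):.1f}h -> {dora_band(median)}")
```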
The uncomfortable truth from 2024 has gotten worse. Across all PR sizes, the vast majority ship without any documented review process. For massive PRs (1000+ lines), 90% now ship without formal review—up from 83% in 2024.
Based on 788,166 merged PRs. "No formal review" = zero approvals, zero change requests, zero review comments.
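The "no formal review" rule above reduces to a small predicate. A minimal sketch, with illustrative field names rather than the study's actual schema:

```python
# Hedged sketch of the "no formal review" rule: a merged PR counts as unreviewed
# only if it has zero approvals, zero change requests, and zero review comments.
def has_formal_review(pr):
    return (
        pr.get("approvals", 0) > 0
        or pr.get("change_requests", 0) > 0
        or pr.get("review_comments", 0) > 0
    )

def no_review_rate(prs):
    return sum(1 for pr in prs if not has_formal_review(pr)) / len(prs)

prs = [
    {"approvals": 0, "change_requests": 0, "review_comments": 0},  # unreviewed
    {"approvals": 0, "change_requests": 0, "review_comments": 2},  # reviewed
]
print(f"{no_review_rate(prs):.0%} merged with no formal review")  # 50%
```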
"90% of pull requests over 1,000 lines ship without any code review—up from 83% last year."
Self-merge rates climbed from 68% in 2024 to 71% in 2025. Nearly three-quarters of all code now ships without another developer clicking the merge button. The "code review culture" many teams claim to have is increasingly a fiction.
- Self-merged (author = merger): 573,883 PRs (71.48%)
- Merged by someone else (different person reviewed): 228,935 PRs (28.52%)
- Up 3.5 percentage points from 2024 (68.03%)
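The self-merge metric itself is simple: a PR counts as self-merged when the author's login matches the merging user's login. A toy sketch with assumed field names:

```python
# Toy sketch of the self-merge metric: author login equals merger login.
def self_merge_rate(prs):
    merged = [pr for pr in prs if pr.get("merged_by")]
    self_merged = sum(1 for pr in merged if pr["author"] == pr["merged_by"])
    return self_merged / len(merged) if merged else 0.0

prs = [
    {"author": "alice", "merged_by": "alice"},  # self-merged
    {"author": "bob",   "merged_by": "carol"},  # merged by someone else
    {"author": "dave",  "merged_by": "dave"},   # self-merged
]
print(f"{self_merge_rate(prs):.0%} self-merged")  # 67% on this toy sample
```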
"Self-merge rates hit 71%, meaning nearly three-quarters of code gets no peer review before shipping."
The pattern from 2024 holds: the bigger the change, the less anyone looks at it. Review comments per 100 lines drops from 0.98 for tiny PRs to just 0.05 for massive ones—a 20x reduction in scrutiny per line.
As PR size increases, reviewers leave far fewer comments per line of code, suggesting cognitive overload and "rubber-stamping."
Larger PRs take longer to merge (13h median for tiny vs 22h for massive), but not proportionally to their size—suggesting review depth doesn't scale with complexity.
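The per-line scrutiny figure quoted above (review comments per 100 changed lines) can be reproduced in a few lines; the size-bucket thresholds below are illustrative, not the study's exact cut-offs.

```python
# Hedged sketch of the scrutiny metric: review comments per 100 changed lines,
# aggregated per PR-size bucket. Thresholds are illustrative.
def size_bucket(lines_changed):
    if lines_changed < 50:
        return "tiny"
    if lines_changed < 1000:
        return "medium"
    return "massive"

def comments_per_100_lines(prs):
    totals = {}  # bucket -> [total comments, total lines]
    for pr in prs:
        bucket = totals.setdefault(size_bucket(pr["lines_changed"]), [0, 0])
        bucket[0] += pr["review_comments"]
        bucket[1] += pr["lines_changed"]
    return {name: c / lines * 100 for name, (c, lines) in totals.items()}

prs = [
    {"lines_changed": 40,   "review_comments": 1},
    {"lines_changed": 2500, "review_comments": 1},
]
print(comments_per_100_lines(prs))  # {'tiny': 2.5, 'massive': 0.04}
```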
Good news: the first-contributor wait penalty dropped from 53% in 2024 to 38% in 2025. Teams are getting better at onboarding new contributors, but the gap remains significant.
| Contributor type | PRs analyzed | Median time to merge | P90 |
|---|---|---|---|
| First-time contributors | 21,289 | 22h | 280h (11.7 days) |
| Repeat contributors | 171,251 | 16h | 175h (7.3 days) |

First-time contributors wait 37.5% longer (6 extra hours).
"Good news for newcomers: first-time contributor wait times dropped from 53% to 38% longer than veterans."
A significant shift from 2024: Wednesday has become the peak merge day at 23.55%, dethroning Monday (which led at 19% in 2024). This suggests teams are moving away from "clear the backlog Monday morning" toward more distributed workflows.
Wednesday dominates at 23.55%, followed by Tuesday (14.33%) and Monday (13.91%). In 2024, Monday led at 19.08%.
"Wednesday is the new Monday: 23.5% of PRs now merge mid-week, shifting from the traditional Monday peak."
The most dramatic change since 2022: bot PRs (Dependabot, Renovate, CI automation) have collapsed from 62% at peak to just 15.5% in October 2025. The automation boom is definitively over. Teams are becoming far more selective about automated dependency updates.
2022 saw peak bot activity at 62%. By 2025, bot PRs dropped to 31.5% (full year) and just 15.5% in October—a four-fold reduction from peak.
- Bot PRs (Oct 2025): 15.5% (475,905 PRs)
- Human PRs (Oct 2025): 84.5% (2,594,901 PRs)
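Separating bot PRs from human PRs typically relies on GitHub's "[bot]" login suffix plus a short list of well-known automation accounts. The heuristic below is an assumption for illustration, not the study's exact classification rule.

```python
# Hedged sketch of bot detection: bot accounts normally carry a "[bot]" login
# suffix (e.g. "dependabot[bot]", "renovate[bot]"). The allow-list is illustrative.
KNOWN_BOTS = {"dependabot", "renovate", "github-actions"}

def is_bot(author_login):
    login = author_login.lower()
    return login.endswith("[bot]") or login in KNOWN_BOTS

def bot_share(prs):
    return sum(1 for pr in prs if is_bot(pr["author"])) / len(prs)

prs = [
    {"author": "dependabot[bot]"},
    {"author": "alice"},
    {"author": "renovate[bot]"},
    {"author": "bob"},
]
print(f"{bot_share(prs):.0%} bot PRs")  # 50% on this toy sample
```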
"Bot PRs collapsed from 62% (2022 peak) to just 15.5% in 2025. The automation boom is over."
A quarter of all code pushes happen on Saturday and Sunday. The "always-on" engineering culture continues, though weekend work has slightly decreased from 27% in 2024.
- Weekend pushes: 25.4% (down from 27% in 2024)
- Peak merge day: Wednesday (23.55% of merges)
- Friday merges: 13.12% ("Friday deploy" lives)
"Same-day merges hit 55%, up from 53%. Code is shipping faster than ever."
Different language ecosystems have different velocities. PowerShell leads with a 4h median (likely CI/automation scripts), while C is slowest at 24h (rigorous systems programming).
| Language | Merged PRs | Median Hours | Avg PR Size |
|---|---|---|---|
| PowerShell | 1,292 | 4h | 234 lines |
| Dockerfile | 1,174 | 11h | 110 lines |
| Shell | 5,230 | 14h | 148 lines |
| Nix | 1,366 | 14h | 77 lines |
| TypeScript | 43,177 | 15h | 503 lines |
| JavaScript | 16,904 | 15h | 461 lines |
| Ruby | 3,534 | 15h | 174 lines |
| HCL | 1,492 | 15h | 133 lines |
| YAML | 1,416 | 15h | 29 lines |
| C# | 6,762 | 16h | 463 lines |
| HTML | 5,903 | 16h | 447 lines |
| Python | 29,761 | 17h | 371 lines |
| Vue | 1,575 | 17h | 526 lines |
| Scala | 1,466 | 17h | 139 lines |
| CSS | 1,397 | 17h | 496 lines |
"PowerShell remains the speed king at 4 hours median merge time—6x faster than C at 24 hours."
We took a closer look at three prominent AI-powered developer tools to see how their engineering teams ship compared to the GitHub average. The results reveal fascinating differences in velocity, review culture, and automation patterns.
- Codex: faster than enterprise teams. OpenAI ships 23% faster than typical team workflows.
- Gemini: exceptional review culture. 86% of Gemini PRs go through review (6x the GitHub rate).
- AI tools are human-driven. Bot PRs are nearly absent; these are human-crafted projects.
- Gemini reviews are thorough. 65% of Gemini PRs have review comments (2x typical).
| Repo | Merged PRs | Median Merge | Self-Merge % | No Review % | Bot PRs % |
|---|---|---|---|---|---|
| OpenAI Codex (453 contributors) | 1,031 | 11h | 73.5% | 73.6% | 2.9% |
| Gemini CLI (523 contributors) | 948 | 21h | 58.4% | 35.1% | 2.4% |
| Claude Code (53 contributors) | 54 | 100h | 68.5% | 81.5% | 0.0% |
| Reviewed PRs Benchmark | — | 3h | 52.1% | 67.8% | 15.5% |
How do AI tool repos compare to the reviewed PRs benchmark (3h median)?
- Claude Code: 0.9h (70% faster than benchmark)
- Codex: 2.3h (23% faster than benchmark)
- Gemini: 7.8h (2.6x longer; thorough reviews)
"Gemini CLI has 86% review engagement vs 14.6% GitHub average—6x the rate of typical open source."
Starting November 2025, GitHub Archive event payloads have significantly reduced detail. Fields including user.login, additions, deletions, merged_by, and review_comments are no longer populated.
- Affected months: November 2025, December 2025
- Last good month: October 2025
- Impact: many PR metrics (self-merge rate, PR size analysis, contributor analysis) cannot be calculated from November 2025 onwards
- Recommendation: teams relying on GitHub Archive for analytics should monitor data quality and consider alternative data sources
"GitHub Archive data quality degraded in late 2025—a warning sign for the analytics ecosystem."
All data comes from GitHub Archive, a public dataset that records all public GitHub events. We queried the data using Google BigQuery's public dataset.
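For readers who want to reproduce the numbers, here is a minimal sketch of the kind of query involved, run through the google-cloud-bigquery Python client. The table name, JSON paths, and filters reflect our understanding of the public githubarchive dataset and may differ from the study's exact SQL; target October 2025 or earlier given the data quality warning above.

```python
# Minimal sketch of a self-merge-rate query over the public githubarchive
# dataset on BigQuery. Schema assumptions: events keyed by `type`, with a JSON
# `payload` string containing the pull_request object.
from google.cloud import bigquery

QUERY = """
SELECT
  COUNT(*) AS merged_prs,
  COUNTIF(
    JSON_VALUE(payload, '$.pull_request.user.login') =
    JSON_VALUE(payload, '$.pull_request.merged_by.login')
  ) / COUNT(*) AS self_merge_rate
FROM `githubarchive.month.202510`
WHERE type = 'PullRequestEvent'
  AND JSON_VALUE(payload, '$.action') = 'closed'
  AND JSON_VALUE(payload, '$.pull_request.merged') = 'true'
"""

client = bigquery.Client()  # requires Google Cloud credentials
row = next(iter(client.query(QUERY).result()))
print(row.merged_prs, f"{row.self_merge_rate:.1%} self-merged")
```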
| Metric | Sample Size |
|---|---|
| PR size and review analysis | 802,979 merged PRs |
| Unique repositories | 262,212 repositories |
| Unique authors | 191,099 developers |
| Self-merge analysis | 802,818 merged PRs |
| Contributor analysis | 192,540 merged PRs |
2024 data comes from December 2024 (3,387,250 PRs); 2025 data comes from October 2025 (802,979 PRs). Because the snapshots cover different months, seasonal variation may affect year-over-year comparisons.
These benchmarks come from public open source projects. How do your private repositories stack up? CodePulse tracks review coverage, self-merge rates, and contributor wait times for your team.
No credit card required. 5-minute setup. Read-only GitHub permissions.
See how 2025 compares to our landmark 2024 study that first revealed the review gap.
Read: 3.4 Million PRs, One Uncomfortable Truth (2024)