Where Your PRs Spend Their Time
89% of PR cycle time is idle waiting time.
Based on 117,413 reviewed PRs | GitHub Archive / BigQuery | October 2025
When engineering leaders talk about "speeding up delivery," they often focus on the wrong thing. The data reveals a surprising truth: actual review time is a tiny fraction of your PR lifecycle. The real bottleneck? Waiting.
Every PR that goes through code review passes through three distinct phases. Understanding where time actually goes is the first step to optimization.
PR lifecycle phases shown to scale based on average time spent in each phase
Waiting: from PR opened to first review comment. The developer is blocked, waiting for someone to look at their code.
Review: from first review to approval. Active collaboration between author and reviewers.
Merge: from approval to merge. The PR is approved but not yet merged—often waiting for CI, deployment windows, or final checks.
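For teams that want to reproduce this breakdown on their own repositories, here is a minimal sketch of the phase calculation, assuming you have already extracted the four timestamps per PR (the column names and sample rows are illustrative, not the GitHub Archive schema):

```python
import pandas as pd

# Illustrative sample; in practice these timestamps would come from your own
# PR data (e.g. extracted from GitHub Archive events), not be hard-coded.
prs = pd.DataFrame({
    "opened_at":       pd.to_datetime(["2025-10-01 09:00", "2025-10-02 14:00"]),
    "first_review_at": pd.to_datetime(["2025-10-01 09:30", "2025-10-03 10:00"]),
    "approved_at":     pd.to_datetime(["2025-10-01 10:00", "2025-10-03 11:30"]),
    "merged_at":       pd.to_datetime(["2025-10-01 10:15", "2025-10-03 12:00"]),
})

def hours(delta):
    """Convert a timedelta Series to fractional hours."""
    return delta.dt.total_seconds() / 3600

prs["waiting_h"] = hours(prs["first_review_at"] - prs["opened_at"])    # opened -> first review
prs["review_h"]  = hours(prs["approved_at"] - prs["first_review_at"])  # first review -> approval
prs["merge_h"]   = hours(prs["merged_at"] - prs["approved_at"])        # approval -> merge
prs["total_h"]   = hours(prs["merged_at"] - prs["opened_at"])

# Median (P50) and P90 summarize the typical PR and the slow tail.
print(prs[["waiting_h", "review_h", "merge_h", "total_h"]].quantile([0.5, 0.9]))
```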
"89% of PR cycle time is spent waiting for the first review. The actual review takes less than 10% of total time."
The breakdown is stark: 89% of your PR lifecycle is pure idle time. Developers aren't slow—they're blocked.
P90 marks the 90th percentile: the slowest 10% of PRs take at least this long. At P90, total cycle time reaches 148.9 hours.
Larger PRs don't just take longer to review—they wait longer to be picked up. The data shows a clear correlation between PR size and total cycle time.
| PR Size | PRs | Median Wait | Median Total |
|---|---|---|---|
| Tiny (<10) | 31,491 | 0.2h | 1.1h |
| Small (10-50) | 25,754 | 0.5h | 2.6h |
| Medium (50-200) | 23,192 | 0.9h | 4.7h |
| Large (200-500) | 14,465 | 0.9h | 5.8h |
| XL (500-1000) | 8,356 | 1.1h | 8.7h |
| Massive (1000+) | 14,155 | 0.8h | 7.3h |
Note: Massive PRs (1000+) show lower median wait than XL PRs, suggesting these are often automated or pre-approved changes that bypass normal review queues.
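The same bucketing is easy to reproduce. Below is a rough sketch that extends the earlier DataFrame, assuming PR size means lines changed (additions plus deletions); the unit is an assumption here, and the bucket edges simply mirror the table above:

```python
import pandas as pd

# Continuing from the previous sketch; add a size column (hard-coded here,
# in practice additions + deletions per PR). Treating "PR size" as lines
# changed is an assumption about the bucketing unit.
prs["lines_changed"] = [8, 340]

bins   = [0, 10, 50, 200, 500, 1000, float("inf")]
labels = ["Tiny (<10)", "Small (10-50)", "Medium (50-200)",
          "Large (200-500)", "XL (500-1000)", "Massive (1000+)"]

prs["size_band"] = pd.cut(prs["lines_changed"], bins=bins, labels=labels, right=False)

summary = (
    prs.groupby("size_band", observed=True)
       .agg(prs_analyzed=("total_h", "size"),
            median_wait_h=("waiting_h", "median"),
            median_total_h=("total_h", "median"))
)
print(summary)
```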
DORA's "Lead Time for Changes" measures the time from code commit to production. PR cycle time is a key component of this metric. Here's how the data maps to DORA performance tiers.
Elite: deploy on demand, multiple times per day.
High: deploy between once per day and once per week.
Medium: deploy between once per week and once per month.
Low: deploy less frequently than once per month.
With a median of 3 hours, reviewed PRs on GitHub land solidly in the High tier. But P90 at 149 hours shows the tail extends into Medium territory—there's room to improve.
Based on our analysis of 117,413 reviewed PRs, here are the cycle time benchmarks for each performance tier.
Elite: target <1 hour total cycle time. Achieved through near-instant review pickup (median 0.2h for tiny PRs), automated testing, and streamlined merge processes.
High: target <4 hours total cycle time. The median for reviewed PRs is 3 hours—if you're here, you're doing well. Focus on reducing wait time for your larger PRs.
Medium: target <24 hours total cycle time. Same-day merge is the goal. If PRs are regularly carrying over to the next day, investigate your review queue.
Low: >24 hours cycle time is the warning zone. At P90, PRs take 149+ hours. This usually indicates understaffed review capacity, unclear ownership, or process bottlenecks.
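If you want to tag your own PRs against these tiers, the thresholds fold into a small helper; treating each bound as an exclusive upper limit is a judgment call rather than anything the benchmarks specify:

```python
def cycle_time_tier(total_hours: float) -> str:
    """Map a PR's total cycle time (in hours) to the benchmark tiers above."""
    if total_hours < 1:
        return "Elite"    # < 1 hour
    if total_hours < 4:
        return "High"     # < 4 hours
    if total_hours < 24:
        return "Medium"   # < 24 hours
    return "Low"          # > 24 hours: investigate review capacity and ownership

# The dataset's median (3h) and P90 (148.9h) land in very different tiers.
print(cycle_time_tier(3.0))    # High
print(cycle_time_tier(148.9))  # Low
```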
Stop optimizing review time. Start optimizing wait time.
The data is clear: 89% of cycle time is waiting for someone to start reviewing. The actual review? Just 4%. Teams obsess over "faster reviews" when they should focus on "faster pickup." Assign reviewers automatically, set SLAs for first response, and make review queue visibility a team priority. The review itself isn't the bottleneck—getting started is.
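One way to operationalize a first-response SLA is a small check that flags open PRs with no review after a threshold. Here is a rough sketch against the GitHub REST API; the owner, repo, and 4-hour SLA are placeholders, and a real version would also handle pagination, draft PRs, and bot authors:

```python
import os
from datetime import datetime, timezone

import requests

# Hypothetical repo and SLA; adjust for your team. Requires a GITHUB_TOKEN
# environment variable with read access to the repository.
OWNER, REPO, SLA_HOURS = "your-org", "your-repo", 4
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
API = "https://api.github.com"

def hours_since(iso_ts: str) -> float:
    """Hours elapsed since an ISO 8601 timestamp like '2025-10-01T09:00:00Z'."""
    opened = datetime.fromisoformat(iso_ts.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - opened).total_seconds() / 3600

open_prs = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls",
                        params={"state": "open", "per_page": 100},
                        headers=HEADERS).json()

for pr in open_prs:
    reviews = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls/{pr['number']}/reviews",
                           headers=HEADERS).json()
    waited = hours_since(pr["created_at"])
    if not reviews and waited > SLA_HOURS:
        print(f"#{pr['number']} '{pr['title']}' has waited {waited:.1f}h with no review")
```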
Related reading:
- The full study: 800K+ PRs analyzed for year-over-year trends.
- What reviewed PRs reveal about team-based development.
- First-time contributors wait 10.9x longer for review.
- 90% of massive PRs ship without review. Large PRs get 20x less scrutiny.
This analysis is based on 117,413 merged pull requests that received at least one code review event, from GitHub Archive / BigQuery during October 2025. Cycle time phases are calculated as: Waiting = PR opened to first review; Review = first review to approval; Merge = approval to merge. PRs without review events are excluded from this analysis. For full methodology and all findings, see the 2025 Engineering Benchmarks.
CodePulse shows exactly where your PRs spend time—and where to optimize.