You know your PR cycle time is too long. But where exactly is the bottleneck? Is it developers taking too long to write code? Reviewers not responding quickly enough? Endless back-and-forth during review? Or delays getting approved code merged?
This guide breaks down cycle time into its four components and shows you exactly how to identify which stage is causing the slowdown—and what to do about it. Most teams focus on total cycle time without understanding that the fix for slow pickup time is completely different from the fix for slow review duration. Get the diagnosis right, and improvements become obvious.
Our Take
Pickup time—the time PRs wait before anyone looks at them—is the silent killer of engineering velocity. We've analyzed thousands of teams and consistently find that 60-70% of cycle time is pure waiting. Not coding. Not reviewing. Waiting. Yet most teams obsess over review efficiency when the real problem is that nobody picks up the PR for 18 hours.
The 4 Phases of Cycle Time
Total cycle time—the time from PR creation to merge—is actually made up of four distinct phases. Each phase has different causes, different owners, and different solutions. Treating them as interchangeable leads to misdiagnosis and wasted effort.
PR LIFECYCLE PHASES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
PR Created       Review           First Review     Approval       Merged
│                Requested        │                │              │
▼                ▼                ▼                ▼              ▼
┌────────────────┬────────────────┬────────────────┬──────────────┐
│  CODING TIME   │  PICKUP TIME   │ REVIEW DURATION│  MERGE DELAY │
│                │                │                │              │
│  Author work   │  Waiting for   │  Active review │ Post-approval│
│  before ready  │  first review  │  & iterations  │  processes   │
└────────────────┴────────────────┴────────────────┴──────────────┘
TYPICAL DISTRIBUTION (unhealthy team):
Coding: 15% ██████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
Pickup: 45% ██████████████████░░░░░░░░░░░░░░░░░░░░░░░░ ← Hidden killer
Review: 25% ██████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
Merge: 15% ██████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
TYPICAL DISTRIBUTION (healthy team):
Coding: 25% ██████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
Pickup: 15% ██████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ← Under control
Review: 45% ██████████████████░░░░░░░░░░░░░░░░░░░░░░░░ ← Actual work
Merge:  15% ██████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░

Phase 1: Coding Time
What it measures: Time from when the PR is created until a review is requested.
FORMULA:
  coding_time = first_review_requested_at - created_at

CALCULATION NOTES:
- If review requested at creation: coding_time = 0
- Excludes draft PR time if draft opened before ready
- Watch for: PRs opened "for discussion" skew this metric
This phase captures how long the PR sits before the author considers it ready for review. High coding time might indicate:
- PRs being opened as drafts and worked on incrementally
- Authors forgetting to request review
- PRs that are too large (taking days to complete)
- Unclear requirements leading to rework before review
- Authors "parking" work in PRs without intending immediate review
"A PR without a review request is a message in a bottle. It might get found. It probably won't."
Phase 2: Pickup Time (Wait for Review)
What it measures: Time from review request until the first reviewer starts looking at the code.
FORMULA:
  pickup_time = first_review_at - first_review_requested_at

CALCULATION NOTES:
- "First review" = first comment, approval, or changes requested
- Bot comments don't count (linter warnings, CI results)
- Timezone-adjusted: excludes nights/weekends for some calculations

KEY INSIGHT: This is the #1 bottleneck for most teams. It's pure waste—no value is being added while the PR waits.
This is often the largest component of cycle time. High wait time typically means:
- Reviewers are overloaded with too many requests
- No clear ownership of who should review
- Timezone mismatches in distributed teams
- PRs sitting over weekends or holidays
- Authors not tagging the right reviewers
- Review being seen as "when I have time" rather than a team responsibility
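Here's a sketch of the pickup-time calculation that applies the bot-filtering rule from the notes above. The event structure is an assumption, not a fixed API; GitHub App accounts conventionally have logins ending in `[bot]`:

```python
from datetime import datetime

def pickup_time_hours(requested_at: str, review_events: list[dict]) -> float | None:
    """pickup_time = first_review_at - first_review_requested_at.

    `review_events` is an assumed list of {"author", "submitted_at"}
    dicts covering comments, approvals, and change requests. Returns
    None while the PR is still waiting. Working-hours adjustment omitted.
    """
    requested = datetime.fromisoformat(requested_at)
    human_reviews = [
        datetime.fromisoformat(event["submitted_at"])
        for event in review_events
        if not event["author"].endswith("[bot]")  # skip linters, CI reporters
    ]
    after_request = [t for t in human_reviews if t >= requested]
    if not after_request:
        return None
    return (min(after_request) - requested).total_seconds() / 3600
```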
Our Take
Pickup time over 4 hours during working hours is a process failure, not a people failure. If your team regularly leaves PRs waiting overnight, you don't have lazy reviewers—you have no system for review ownership. Fix the system: designate daily reviewers, set up rotation, create alerts. The problem isn't motivation; it's that "someone should review this" means no one will.
For strategies on distributing review load, see our Review Load Balancing Guide.
Phase 3: Review Duration
What it measures: Time from first review activity until approval is granted.
FORMULA:
  review_duration = first_approval_at - first_review_at

SUB-COMPONENTS:
  time_to_first_review_feedback = first_comment_at - first_review_at
  iteration_time = time_between_author_updates
  follow_up_review_time = subsequent_reviews - author_updates

CALCULATION NOTES:
- Includes all back-and-forth cycles
- Excludes weekends/holidays in working-hours calculations
- PRs with multiple required reviewers: tracks to LAST approval
This captures the active review period, including any back-and-forth. Long review duration might indicate:
- Complex code that requires deep understanding
- Multiple rounds of changes requested
- Reviewers and authors in different timezones
- Unclear code that needs extensive explanation
- Nitpicky reviewers blocking on minor issues
- Lack of coding standards leading to subjective debates
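The formula above uses the first approval, while the calculation notes track multi-reviewer PRs to the last one. This sketch implements the notes' version; with a single required reviewer the two coincide. Field names are assumptions:

```python
from datetime import datetime, timedelta

def review_duration(first_review_at: str, approval_times: list[str]) -> timedelta:
    """Active review span: first review activity to the approval that
    actually unblocks the merge (the LAST one when several reviewers
    are required)."""
    start = datetime.fromisoformat(first_review_at)
    unblocking = max(datetime.fromisoformat(t) for t in approval_times)
    return unblocking - start
```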
Time-to-First-Review vs Total Review Time
These metrics tell different stories:
TIME-TO-FIRST-REVIEW (TTFR)
━━━━━━━━━━━━━━━━━━━━━━━━━━━
Measures: How quickly reviewers engage with the code
Owner:    Reviewers / team process
Healthy:  < 4 hours during working hours
Action:   Review rotation, alerts, SLAs

TOTAL REVIEW TIME
━━━━━━━━━━━━━━━━━━━━━━━━━━━
Measures: How long the full review process takes
Owner:    Both authors and reviewers
Healthy:  < 24 hours for normal PRs
Action:   Smaller PRs, better code quality, clearer requirements

SCENARIO ANALYSIS:
┌─────────────────────────────────────────────────────────────────┐
│ TTFR: Fast (2h)  │ Total: Fast (6h)   │ Healthy team            │
│ TTFR: Fast (2h)  │ Total: Slow (48h)  │ Too many revision cycles│
│ TTFR: Slow (24h) │ Total: Slow (48h)  │ Pickup time problem     │
│ TTFR: Slow (24h) │ Total: Fast (26h)  │ Reviews are rubber-stamp│
└─────────────────────────────────────────────────────────────────┘
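The scenario table reduces to a small decision rule. This sketch uses the "healthy" thresholds quoted above (4 hours for TTFR, 24 hours total); the rubber-stamp cutoff is our own illustrative assumption, so tune it to your baseline:

```python
def review_health(ttfr_hours: float, total_hours: float) -> str:
    """Classify a PR into one of the four scenarios above."""
    if ttfr_hours < 4:
        return "Healthy team" if total_hours < 24 else "Too many revision cycles"
    # First review was slow; did any real review happen after pickup?
    remaining = total_hours - ttfr_hours
    if remaining <= 4:  # assumed cutoff: approval landed right after pickup
        return "Reviews may be rubber-stamps"
    return "Pickup time problem"
```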
Phase 4: Approval to Merge (Merge Delay)
What it measures: Time from approval until the PR is actually merged.
FORMULA:
  merge_delay = merged_at - first_approval_at

CALCULATION NOTES:
- Includes CI time after approval
- Includes merge queue wait time
- May include release schedule delays
- PRs closed without merging: excluded from this metric
This final stage is often overlooked but can be significant. Delays here usually come from:
- Slow CI pipelines running after approval
- Manual deployment gates or release schedules
- Merge conflicts requiring rebasing
- Authors not merging promptly after approval
- Waiting for multiple required approvals
- Merge freeze periods (code freeze before releases)
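With all four formulas in hand, the full phase split is just timestamp subtraction. A combined sketch, again using hypothetical field names, a single `approved_at` timestamp, and no working-hours adjustment:

```python
from datetime import datetime

PHASES = [
    ("coding", "created_at", "first_review_requested_at"),
    ("pickup", "first_review_requested_at", "first_review_at"),
    ("review", "first_review_at", "approved_at"),
    ("merge", "approved_at", "merged_at"),
]

def phase_percentages(pr: dict) -> dict[str, float]:
    """Each phase's share of total cycle time, in percent, for one
    merged PR. Averaging these across PRs gives the distribution
    bars shown earlier."""
    seconds = {}
    for name, start_key, end_key in PHASES:
        start = datetime.fromisoformat(pr[start_key])
        end = datetime.fromisoformat(pr[end_key])
        seconds[name] = (end - start).total_seconds()
    total = sum(seconds.values())
    return {name: round(100 * s / total, 1) for name, s in seconds.items()}
```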
Industry Benchmarks by Phase
Understanding what "good" looks like for each phase helps you set realistic targets. These benchmarks are based on high-performing teams:
CYCLE TIME PHASE BENCHMARKS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
                    │   ELITE    │    HIGH    │   MEDIUM   │ NEEDS WORK
━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━┿━━━━━━━━━━━━┿━━━━━━━━━━━━┿━━━━━━━━━━━━
Coding Time         │ < 1 hour   │ < 4 hours  │ < 8 hours  │ > 24 hours
Pickup Time         │ < 2 hours  │ < 4 hours  │ < 12 hours │ > 24 hours
Review Duration     │ < 4 hours  │ < 12 hours │ < 24 hours │ > 48 hours
Merge Delay         │ < 30 mins  │ < 2 hours  │ < 4 hours  │ > 8 hours
━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━┿━━━━━━━━━━━━┿━━━━━━━━━━━━┿━━━━━━━━━━━━
TOTAL CYCLE TIME    │ < 8 hours  │ < 24 hours │ < 48 hours │ > 72 hours
NOTES:
- Benchmarks assume working hours only
- Elite teams often have 80%+ of PRs same-day merge
- "Needs Work" isn't failure—it's opportunityBenchmarks by PR Size
PR size dramatically impacts every phase. Larger PRs take exponentially longer:
CYCLE TIME BY PR SIZE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
PR SIZE (lines)   │ PICKUP    │ REVIEW      │ TOTAL      │ ABANDON RATE
━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━┿━━━━━━━━━━━━━┿━━━━━━━━━━━━┿━━━━━━━━━━━━━
< 50 lines        │ 2 hours   │ 1 hour      │ 4 hours    │ < 2%
50-200 lines      │ 3 hours   │ 4 hours     │ 12 hours   │ 3%
200-400 lines     │ 6 hours   │ 8 hours     │ 24 hours   │ 5%
400-800 lines     │ 12 hours  │ 16 hours    │ 48 hours   │ 12%
800+ lines        │ 24+ hours │ 36+ hours   │ 72+ hours  │ 25%
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
KEY INSIGHT: PRs over 400 lines don't just take longer—they get worse reviews. Reviewers skim large PRs. Bug detection drops. Research shows:
- PRs < 200 lines: 70% defect detection rate
- PRs > 500 lines: 30% defect detection rate
"Every 100 lines added to a PR doubles the time reviewers spend but halves the bugs they catch. Large PRs are reviewed poorly. They just feel thorough because they take forever."
For detailed guidance on managing PR size, see our PR Size Optimization Guide.
The "Review Roulette" Problem
One of the most common causes of long pickup time is what we call "review roulette": the practice of tagging multiple reviewers and hoping that someone picks the PR up.
THE REVIEW ROULETTE ANTI-PATTERN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Scenario: Author tags 4 reviewers on a PR

What author expects:
  "4 people tagged = 4x faster review!"

What actually happens:
  Reviewer A: "B, C, or D will get it"
  Reviewer B: "A, C, or D will get it"
  Reviewer C: "A, B, or D will get it"
  Reviewer D: "A, B, or C will get it"

Result: PR sits for 18 hours

THE FIX: Single Primary Reviewer
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. Tag ONE primary reviewer (owns responsibility)
2. Tag others as "FYI" or "optional"
3. Primary has 4-hour SLA to respond OR explicitly hand off

Alternative: Review Rotation
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
- Daily "reviewer on duty" assignment
- All PRs without explicit reviewer go to duty reviewer
- Clear handoff at end of shift
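The rotation variant is easy to make deterministic, so nobody has to maintain a schedule. A minimal sketch, assuming a static roster (handling weekends and absences is left out):

```python
from datetime import date

ROSTER = ["alice", "bob", "carol", "dave"]  # hypothetical reviewer roster

def reviewer_on_duty(day: date | None = None) -> str:
    """Deterministic daily rotation: anyone can compute today's duty
    reviewer locally. PRs with no explicit reviewer go to this person."""
    day = day or date.today()
    return ROSTER[day.toordinal() % len(ROSTER)]
```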
Our Take
Diffusion of responsibility is real. When everyone is responsible, no one is responsible. We've seen teams cut pickup time in half just by switching from "tag everyone who might know this code" to "tag exactly one person who will review or find someone else within 2 hours." It feels less efficient but works dramatically better.
For more strategies on assigning reviewers effectively, see our Optimal Reviewer Count Guide.
Identifying Your Specific Bottleneck
Different teams have different bottleneck patterns. Here's how to diagnose yours:
Bottleneck Pattern Analysis
BOTTLENECK DIAGNOSIS FLOWCHART
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Is your total cycle time > 48 hours?
│
├─ NO → You're doing fine. Focus on consistency, not speed.
│
└─ YES → Which phase is largest?
│
├─ CODING TIME > 40%
│ ├─ Check: Are PRs opened as drafts?
│ ├─ Check: Are authors forgetting to request review?
│ └─ Action: PR templates, auto-request on non-draft
│
├─ PICKUP TIME > 40%
│ ├─ Check: Is review load balanced?
│ ├─ Check: Is there review ownership?
│ ├─ Check: Timezone coverage?
│ └─ Action: Review rotation, SLAs, alerts
│
├─ REVIEW DURATION > 40%
│ ├─ Check: Average PR size?
│ ├─ Check: Review iteration count?
│ ├─ Check: Cross-timezone reviewers?
│ └─ Action: Smaller PRs, coding standards, sync review
│
└─ MERGE DELAY > 20%
├─ Check: CI pipeline duration?
├─ Check: Manual gates?
├─ Check: Merge conflict frequency?
└─ Action: Optimize CI, auto-merge, smaller PRs

Reading the Breakdown in CodePulse
Navigate to your Dashboard and look for the Cycle Time Breakdown component. This shows:
- Total Cycle Time: The headline number showing average hours from PR creation to merge
- Four Stacked Bars: Visual representation of each component's contribution to total time
- Percentage Breakdown: What proportion of cycle time each phase represents
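If you can export those percentages, the diagnosis flowchart above runs as code. The thresholds mirror the flowchart and are starting points to tune, not fixed rules:

```python
THRESHOLDS = {"coding": 40, "pickup": 40, "review": 40, "merge": 20}  # percent

ACTIONS = {
    "coding": "PR templates, auto-request review on non-draft PRs",
    "pickup": "review rotation, SLAs, alerts",
    "review": "smaller PRs, coding standards, synchronous review",
    "merge": "optimize CI, enable auto-merge, shrink PRs",
}

def diagnose(percentages: dict[str, float], total_hours: float) -> str:
    """Apply the bottleneck flowchart to a phase breakdown, e.g. the
    output of the phase_percentages() sketch shown earlier."""
    if total_hours <= 48:
        return "You're doing fine. Focus on consistency, not speed."
    flagged = [p for p, limit in THRESHOLDS.items() if percentages[p] > limit]
    if not flagged:
        return "No single dominant phase; check PR size and outlier PRs."
    worst = max(flagged, key=lambda p: percentages[p])
    return f"Bottleneck: {worst} time. Try: {ACTIONS[worst]}"
```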
How to Read This in CodePulse
On the Dashboard, the Cycle Time Breakdown shows all four components:
- Look for the longest bar—that's your bottleneck
- Use the time period selector to compare 7d vs 30d vs 90d
- Filter by repository to see if bottlenecks differ per repo
- Compare weekdays vs weekends to identify coverage gaps
Time Period Comparisons
Switch between time periods to see if bottlenecks are consistent or changing:
- 7 days: Shows recent performance, useful for checking if a recent change helped
- 30 days: Good default view that smooths out weekly variance
- 90 days: Reveals longer-term trends and patterns
Per-Repository Analysis
Different repositories often have different bottleneck patterns. Filter by repository to discover:
- Which repos have the longest wait-for-review (usually: fewest qualified reviewers)
- Which repos have the longest review duration (usually: most complex code)
- Which repos have the longest approval-to-merge (usually: slowest CI)
The Impact of PR Size on Each Phase
PR size doesn't just affect total time—it affects each phase differently:
HOW PR SIZE IMPACTS EACH PHASE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
CODING TIME:
- Large PRs: Authors work longer before requesting review
- Small PRs: Faster to complete, easier to keep in working memory
- Impact multiplier: ~1.5x per 100 lines
PICKUP TIME:
- Large PRs: Reviewers procrastinate ("I'll need an hour for this")
- Small PRs: Reviewers grab them in between tasks
- Impact multiplier: ~2x per 100 lines (psychological barrier)
REVIEW DURATION:
- Large PRs: More to review = more comments = more iterations
- Small PRs: Reviewers can hold whole change in their head
- Impact multiplier: ~3x per 100 lines (compound effect)
MERGE DELAY:
- Large PRs: Higher chance of merge conflicts
- Small PRs: Quick CI runs, easy rebases
- Impact multiplier: ~1.2x per 100 lines
TOTAL IMPACT VISUALIZATION:
━━━━━━━━━━━━━━━━━━━━━━━━━━━
PR Size: 100 lines │████░░░░░░░░░░░░░░░░│ 8 hours
PR Size: 200 lines │████████░░░░░░░░░░░░│ 16 hours
PR Size: 400 lines │████████████████░░░░│ 32 hours
PR Size: 800 lines │████████████████████│ 72 hours
The relationship is NOT linear—it's exponential.

"Two 200-line PRs merge faster than one 400-line PR. Every time. The math isn't intuitive, but it's consistent: the overhead of splitting work is almost always less than the overhead of reviewing large batches."
Phase-Specific Improvement Strategies
Reducing Coding Time
If coding time is your bottleneck (PRs sitting incomplete after creation):
CODING TIME REDUCTION STRATEGIES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IMMEDIATE (This Week):
□ Add PR template with "Review requested" checkbox
□ Set up notification for PRs without review request after 4 hours
□ Clarify team definition of "ready for review"

SHORT-TERM (This Month):
□ Train on breaking large features into reviewable increments
□ Introduce "draft PR" policy—when to use, when not to
□ Add metrics tracking for PR creation to review request

LONG-TERM (This Quarter):
□ Implement feature flag infrastructure for incremental delivery
□ Establish work breakdown practices in planning
□ Build culture of "small and often" over "big and done"
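For the notification item, here's a sketch against GitHub's public REST API (a real endpoint and real fields; pagination and error handling trimmed for brevity). One caveat: `requested_reviewers` only lists reviewers who haven't responded yet, so treat hits as candidates to check rather than certainties:

```python
from datetime import datetime, timedelta, timezone

import requests  # pip install requests

def prs_without_reviewer(owner: str, repo: str, token: str, max_age_hours: int = 4) -> list[str]:
    """URLs of open, non-draft PRs past the cutoff with no reviewer requested."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        headers={"Authorization": f"Bearer {token}"},
        params={"state": "open", "per_page": 100},
        timeout=30,
    )
    resp.raise_for_status()
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    return [
        pr["html_url"]
        for pr in resp.json()
        if not pr["draft"]
        and not pr["requested_reviewers"]
        and datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00")) < cutoff
    ]
```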
Reducing Pickup Time
If pickup time is your bottleneck (PRs waiting for first review):
PICKUP TIME REDUCTION STRATEGIES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IMMEDIATE (This Week):
□ Implement "reviewer on duty" daily rotation
□ Set up Slack alerts for PRs waiting > 4 hours
□ Single primary reviewer policy (no review roulette)

SHORT-TERM (This Month):
□ Audit and balance review load across team
□ Cross-train reviewers for all repositories
□ Establish 4-hour SLA with escalation path
□ Add timezone coverage if distributed team

LONG-TERM (This Quarter):
□ Build review culture: "unblocking others is priority one"
□ Include review responsiveness in performance conversations
□ Consider dedicated review time blocks
□ Implement auto-assign based on expertise and availability
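The 4-hour Slack alert can be a small script on a scheduler, posting to a standard Slack incoming webhook. A sketch that pairs naturally with the stale-PR query from the previous section; the webhook URL is a placeholder:

```python
import requests  # pip install requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"  # placeholder

def post_pickup_alert(pr_urls: list[str]) -> None:
    """Ping the team channel about PRs past the pickup SLA."""
    if not pr_urls:
        return
    bullets = "\n".join(f"• {url}" for url in pr_urls)
    requests.post(
        SLACK_WEBHOOK_URL,
        json={"text": f"PRs waiting > 4 hours for first review:\n{bullets}"},
        timeout=10,
    )
```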
Our Take
The "reviewer on duty" pattern is transformative. One person each day is responsible for ensuring no PR waits more than 4 hours. They don't have to do all the reviews—they just have to ensure someone does. This single change has cut pickup time by 60% in teams we've worked with. The key is explicit ownership, not implicit hope.
For detailed SLA implementation guidance, see our Implementing PR SLA Guide.
Reducing Review Duration
If review duration is your bottleneck (long active review periods):
REVIEW DURATION REDUCTION STRATEGIES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IMMEDIATE (This Week):
□ Classify review comments: blocking vs suggestion vs nitpick
□ Allow "approve with suggestions" (non-blocking improvements)
□ Time-box review iterations—call if stuck after 2 rounds

SHORT-TERM (This Month):
□ Enforce PR size limits (< 400 lines recommended)
□ Add better PR descriptions explaining WHY, not just what
□ Document coding standards to reduce subjective debates
□ Pair program on complex changes instead of async review

LONG-TERM (This Quarter):
□ Build shared understanding of code architecture
□ Invest in automated linting for style debates
□ Create "review playbook" for common scenarios
□ Track and address chronic "long review" repositories
For more on review best practices, see our Code Reviewer Best Practices Guide.
Reducing Merge Delay
If merge delay is your bottleneck (approved PRs not merging):
MERGE DELAY REDUCTION STRATEGIES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
IMMEDIATE (This Week):
□ Enable auto-merge when CI passes
□ Notify authors immediately on approval
□ Review and remove unnecessary manual gates

SHORT-TERM (This Month):
□ Optimize CI pipeline—parallelize, cache dependencies
□ Reduce flaky tests causing CI reruns
□ Smaller PRs = fewer merge conflicts
□ Trunk-based development to reduce branch divergence

LONG-TERM (This Quarter):
□ Invest in CI infrastructure (faster runners, better caching)
□ Implement merge queues for high-volume repos
□ Evaluate test pyramid—too many slow e2e tests?
□ Consider feature flags over branch-based deployment
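For the auto-merge checkbox, GitHub's `gh` CLI supports this directly, so enabling it can be a one-liner per PR. A thin wrapper, assuming `gh` is installed and authenticated and that your repository allows squash merges:

```python
import subprocess

def enable_auto_merge(pr_number: int) -> None:
    """Turn on GitHub auto-merge so an approved PR merges itself
    once all required checks pass."""
    subprocess.run(
        ["gh", "pr", "merge", str(pr_number), "--auto", "--squash"],
        check=True,
    )
```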
For CI optimization strategies, see our Test Failure Rate Guide.
Cycle Time Breakdown Dashboard Template
Here's a template for building a cycle time breakdown dashboard that surfaces actionable insights:
CYCLE TIME BREAKDOWN DASHBOARD TEMPLATE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SECTION 1: HEADLINE METRICS (Top of Dashboard)
┌────────────────────────────────────────────────────────────────────┐
│ TOTAL CYCLE TIME    │ PICKUP TIME         │ REVIEW DURATION       │
│ 24.5 hours          │ 8.2 hours (33%)     │ 10.1 hours (41%)      │
│ ↓ 15% from last wk  │ ↑ 8% from last wk   │ ↓ 22% from last wk    │
│ Target: < 24h       │ Target: < 4h        │ Target: < 8h          │
│ Status: ✓ ON TRACK  │ Status: ⚠ AT RISK   │ Status: ✓ ON TRACK    │
└────────────────────────────────────────────────────────────────────┘

SECTION 2: PHASE BREAKDOWN CHART (Visual)
┌────────────────────────────────────────────────────────────────────┐
│ Coding: ████░░░░░░░░░░░░░░░░░░░░░░░░   3.1 hours (13%)            │
│ Pickup: ████████████░░░░░░░░░░░░░░░░   8.2 hours (33%)  ← FOCUS   │
│ Review: ██████████████████░░░░░░░░░░  10.1 hours (41%)            │
│ Merge:  ████░░░░░░░░░░░░░░░░░░░░░░░░   3.1 hours (13%)            │
└────────────────────────────────────────────────────────────────────┘

SECTION 3: TRENDS (Past 12 Weeks)
┌────────────────────────────────────────────────────────────────────┐
│ Week │ Total │ Coding │ Pickup │ Review │ Merge │ Note            │
│ ─────┼───────┼────────┼────────┼────────┼───────┼──────────────── │
│ W-12 │ 32h   │ 4h     │ 14h    │ 10h    │ 4h    │                 │
│ W-11 │ 30h   │ 4h     │ 12h    │ 10h    │ 4h    │ Review SLA set  │
│ ...  │       │        │        │        │       │                 │
│ W-1  │ 26h   │ 3h     │ 9h     │ 11h    │ 3h    │                 │
│ W-0  │ 24h   │ 3h     │ 8h     │ 10h    │ 3h    │ Current         │
└────────────────────────────────────────────────────────────────────┘

SECTION 4: REPOSITORY BREAKDOWN
┌────────────────────────────────────────────────────────────────────┐
│ Repository       │ Total  │ Bottleneck        │ Action            │
│ ─────────────────┼────────┼───────────────────┼────────────────── │
│ api-service      │ 18h    │ Review (50%)      │ Add reviewers     │
│ web-frontend     │ 32h    │ Pickup (55%)      │ Review rotation   │
│ ml-pipeline      │ 48h    │ Merge (40%)       │ Optimize CI       │
│ shared-libs      │ 12h    │ None              │ Maintain          │
└────────────────────────────────────────────────────────────────────┘

SECTION 5: ALERTS & OUTLIERS
┌────────────────────────────────────────────────────────────────────┐
│ ⚠ 3 PRs waiting > 8 hours for first review                        │
│ ⚠ 1 PR in review for > 48 hours                                   │
│ ✓ No PRs blocked on merge                                         │
│                                                                    │
│ OUTLIER PRs THIS WEEK:                                             │
│ - PR #1234: 72 hours total (800+ lines - split recommended)        │
│ - PR #1256: 36 hours pickup (reviewer on vacation)                 │
└────────────────────────────────────────────────────────────────────┘
Setting Up Alerts for Each Phase
Don't wait for weekly reports to notice bottlenecks. Set up alerts to catch problems in real time.
Navigate to Alerts
Go to Alerts → Alert Rules tab → Create Rule.
Recommended Alert Configurations
PHASE-SPECIFIC ALERT CONFIGURATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Alert: High Wait for Review (PRIORITY #1)
  Metric:      wait_for_review_hours
  Operator:    >
  Threshold:   4 (hours)
  Severity:    warning
  Description: "PR waiting > 4 hours for first review"
  Channel:     #engineering-alerts

Alert: Critical Wait for Review
  Metric:      wait_for_review_hours
  Operator:    >
  Threshold:   8 (hours)
  Severity:    critical
  Description: "PR waiting > 8 hours - escalate immediately"
  Channel:     #engineering-alerts + @team-lead

Alert: Long Review Duration
  Metric:      review_to_approval_hours
  Operator:    >
  Threshold:   24 (hours)
  Severity:    warning
  Description: "PR in active review > 24 hours"
  Channel:     #engineering-alerts

Alert: Stalled Review
  Metric:      review_to_approval_hours
  Operator:    >
  Threshold:   48 (hours)
  Severity:    critical
  Description: "PR stuck in review > 48 hours - sync needed"
  Channel:     #engineering-alerts + author + reviewers

Alert: Slow Merge After Approval
  Metric:      approval_to_merge_hours
  Operator:    >
  Threshold:   4 (hours)
  Severity:    info
  Description: "Approved PR not merging - check CI or conflicts"
  Channel:     #engineering-alerts

Alert: Total Cycle Time Critical
  Metric:      cycle_time_hours
  Operator:    >
  Threshold:   72 (hours)
  Severity:    critical
  Description: "PR cycle time > 3 days - needs attention"
  Channel:     #engineering-alerts
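Outside CodePulse, the same rules are just threshold comparisons. Here's a sketch of the evaluation logic (the metric names mirror the table above; this is not CodePulse's API):

```python
ALERT_RULES = [
    # (metric, threshold_hours, severity)
    ("wait_for_review_hours", 4, "warning"),
    ("wait_for_review_hours", 8, "critical"),
    ("review_to_approval_hours", 24, "warning"),
    ("review_to_approval_hours", 48, "critical"),
    ("approval_to_merge_hours", 4, "info"),
    ("cycle_time_hours", 72, "critical"),
]

def triggered_alerts(pr_metrics: dict[str, float]) -> list[tuple[str, str]]:
    """(metric, severity) pairs for every rule one PR currently breaches."""
    return [
        (metric, severity)
        for metric, threshold, severity in ALERT_RULES
        if pr_metrics.get(metric, 0.0) > threshold
    ]
```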
Alert Notification Best Practices
Configure where alerts are sent in the Notifications tab:
- Slack: Send to your team channel for visibility
- Email: Daily digest of triggered alerts
- Quiet hours: Suppress overnight alerts if your team is co-located
- Escalation: Route critical alerts to specific individuals
For more on alert configuration, see our Slack Alerts for Engineering Metrics guide and our Alert Rules Guide.
Deep Dive: Diagnosing Each Bottleneck
High Coding Time
Symptom: The "coding time" component is disproportionately large, often 40%+ of total cycle time.
Diagnostic questions:
- Are PRs being opened as drafts? Check if authors work on PRs for days before requesting review.
- What's the average PR size? Large PRs take longer to complete.
- Are requirements clear? Unclear specs lead to rework before review.
- Are authors "parking" work in PRs for later?
Actions:
- Encourage smaller, more frequent PRs
- Clarify the definition of "ready for review"
- Consider using PR templates to prompt review request
- Track time from first commit to PR open (an even earlier signal)
High Wait for Review (Pickup Time)
Symptom: PRs sit waiting for hours or days before anyone looks at them. Often the biggest bottleneck.
Diagnostic questions:
- Who are the designated reviewers? Are there enough?
- Is review load evenly distributed? Check whether 2-3 people handle 80% of reviews (a quick way to measure this is sketched after the actions below).
- What time of day are PRs opened? Late-day PRs wait until tomorrow.
- Are reviewers tagged individually or is it "review roulette"?
Actions:
- Set a review SLA (e.g., first review within 4 hours)
- Expand the pool of qualified reviewers
- Implement review rotation or "reviewer on duty" schedules
- Set up alerts for PRs waiting too long
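Here's the load-distribution check mentioned above as a quick sketch: feed it one reviewer login per completed review over your chosen window. A result near 0.8 with `top_n=3` matches the overload pattern:

```python
from collections import Counter

def top_reviewer_share(reviewer_logins: list[str], top_n: int = 3) -> float:
    """Fraction of all reviews handled by the top-N reviewers."""
    if not reviewer_logins:
        return 0.0
    counts = Counter(reviewer_logins)
    busiest = sum(count for _, count in counts.most_common(top_n))
    return busiest / len(reviewer_logins)
```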
High Review Duration
Symptom: Once review starts, it takes days to reach approval due to multiple rounds of feedback.
Diagnostic questions:
- How many review rounds does a typical PR go through?
- Are reviewers blocking on minor issues?
- Is feedback clear and actionable?
- Are author and reviewer in different timezones?
Actions:
- Classify feedback as "blocking" vs "suggestion" vs "nitpick"
- Establish coding standards to reduce subjective feedback
- Encourage synchronous discussion for complex reviews
- Check review culture and sentiment for problematic patterns
High Approval to Merge (Merge Delay)
Symptom: PRs are approved but sit for hours or days before actually merging.
Diagnostic questions:
- How long do CI pipelines take to run?
- Are there manual gates (QA sign-off, release schedules)?
- Do merge conflicts frequently require rebasing?
- Are authors notified promptly when PRs are approved?
Actions:
- Optimize slow CI pipelines (parallelize, cache dependencies)
- Enable auto-merge when CI passes
- Reduce merge conflicts by encouraging smaller PRs
- Review manual gates—are they all necessary?
Improvement Strategies by Timeline
Quick Wins (This Week)
- Set expectations: Communicate that first review should happen within 4 hours of request
- Enable notifications: Ensure reviewers have GitHub/Slack notifications on
- Create a review channel: A place to @-mention when PRs need attention
- Implement daily reviewer rotation: One person "owns" pickup each day
Medium-Term (This Month)
- Balance review load: Audit who's doing reviews and spread the work more evenly
- Optimize CI: Profile your slowest test suites and parallelize or cache where possible
- PR size guidelines: Establish a soft limit (e.g., 400 lines) to keep PRs reviewable
- Implement alerts: Set up automated notifications for each phase threshold
Long-Term (This Quarter)
- Cross-train reviewers: Reduce knowledge silos so more people can review any PR
- Automation: Auto-assign reviewers, auto-merge when approved and CI passes
- Culture shift: Make "unblocking others" a celebrated behavior (see the "Unblock Hero" award in Awards)
- Process refinement: Quarterly review of cycle time data to identify new patterns
Tracking Progress
After implementing changes, use the cycle time breakdown to measure impact:
- Compare 30-day averages before and after changes
- Look for the specific component you targeted to improve
- Set up a recurring calendar reminder to check progress weekly
- Celebrate improvements publicly—teams that see impact stay motivated
Our Take
Improving cycle time is iterative, not transformative. Teams that try to fix everything at once fix nothing. Pick your biggest bottleneck—usually pickup time—and focus on that for 4-6 weeks. Get it under control, then move to the next phase. Small, sustained improvements compound into dramatic results. Teams that cut pickup time from 18 hours to 4 hours often find that everything else improves too, because context stays fresh and PRs stay small.
Common Patterns and Anti-Patterns
Pattern: The "5pm PR Drop"
THE 5PM PR DROP ANTI-PATTERN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Symptom: PRs opened between 4-6pm have 3x longer pickup time

Cause:
- Authors open PR as "end of day" activity
- No reviewers available before EOD
- PR sits overnight (12+ hours)

Analysis:
  PRs opened 9am-2pm: avg 3 hour pickup
  PRs opened 4pm-6pm: avg 14 hour pickup

Fix:
1. Encourage opening PRs early in the day
2. Async handoff to different timezone team
3. "Last reviewer of day" handles late PRs
4. Accept that late PRs will merge next day
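To test whether your team has this pattern, bucket average pickup time by the local hour each PR was opened. A sketch over precomputed records with assumed field names; a spike in the 16-18 buckets confirms the drop:

```python
from collections import defaultdict
from statistics import mean

def avg_pickup_by_open_hour(prs: list[dict]) -> dict[int, float]:
    """Average pickup hours keyed by PR-open hour (0-23, local time).

    Each record is assumed to carry `opened_hour` and `pickup_hours`.
    """
    buckets: dict[int, list[float]] = defaultdict(list)
    for pr in prs:
        buckets[pr["opened_hour"]].append(pr["pickup_hours"])
    return {hour: round(mean(values), 1) for hour, values in sorted(buckets.items())}
```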
Pattern: The "Expert Bottleneck"
THE EXPERT BOTTLENECK ANTI-PATTERN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Symptom: Certain repositories have 4x longer cycle time

Cause:
- Only 1-2 people "qualified" to review specific code
- Those experts are overloaded
- Knowledge hasn't been distributed

Analysis:
  Repository: ml-pipeline
  Qualified reviewers: 2
  Average review queue per reviewer: 8 PRs
  Average pickup time: 36 hours

Fix:
1. Pair junior engineers with experts during reviews
2. Document architecture and review guidelines
3. Gradually expand "approved reviewer" list
4. Accept slightly longer reviews during transition

Timeline: 2-3 months to expand reviewer pool safely
For more on addressing knowledge silos, see our Code Hotspots and Knowledge Silos Guide.
Pattern: The "Merge Conflict Loop"
THE MERGE CONFLICT LOOP ANTI-PATTERN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Symptom: Approved PRs take 8+ hours to merge

Root cause analysis:
  PR approved → conflicts detected → author rebases →
  CI reruns (45 min) → new conflicts → rebase again →
  CI reruns (45 min) → finally merges

Contributing factors:
- High-churn files modified by multiple PRs
- Long-lived branches diverging from main
- Slow CI making each iteration expensive

Fix:
1. Smaller PRs = fewer conflicts
2. Trunk-based development
3. Merge queues to serialize conflicting PRs
4. Faster CI = cheaper iteration
5. Lock files for coordinated changes
Related Resources
Continue your cycle time improvement journey with these related guides:
- Reduce PR Cycle Time — 30-day playbook for overall cycle time reduction
- Review Load Balancing Guide — Strategies for distributing review work fairly
- PR Size Optimization Guide — Breaking down large PRs for faster review
- Implementing PR SLA — Setting and enforcing review time commitments
- DORA Metrics Guide — How cycle time fits into broader delivery metrics
- Code Reviewer Best Practices — Making reviews faster without sacrificing quality
"The goal isn't the fastest possible cycle time. It's the shortest time consistent with quality. Find that balance, and your team will ship better code faster while enjoying the process more."
See these insights for your team
CodePulse connects to your GitHub and shows you actionable engineering metrics in minutes. No complex setup required.
Free tier available. No credit card required.
Related Guides
We Cut PR Cycle Time by 47%. Here's the Exact Playbook
A practical playbook for engineering managers to identify bottlenecks, improve review processes, and ship code faster—without sacrificing review quality.
Your Best Engineer Is About to Quit. (Check Their Review Load)
Learn how to identify overloaded reviewers, distribute review work equitably, and maintain review quality without burning out your senior engineers.
The Slack Alert That Catches Stuck PRs Before Standup
How to configure alerts for stuck PRs, review SLA breaches, and key metric changes to stay informed without constant dashboard checking.