"How much can your team deliver next quarter?" Every engineering leader hears this question. Story points and gut feelings only go so far. Pull request data gives you a grounded answer: here is what we shipped, here is our pace, and here is what we can commit to next.
Why PR Data Reveals True Capacity
Pull requests represent completed, reviewed, merged work. That makes them a better signal than story points (which vary by team) or hours logged (which rarely correlate with output). PR metrics are:
- Objective: Based on actual code shipped, not estimates
- Consistent: Same measurement across teams and time periods
- Forward-looking: Trends predict capacity issues before they hit
- Automatically collected: No manual tracking required
What PR Data Can Tell You
| Question | PR Metric | Insight |
|---|---|---|
| How much can we ship? | PRs merged per week/sprint | Raw throughput baseline |
| Is throughput sustainable? | Cycle time trends | Rising cycle time = capacity strain |
| Who's contributing? | Active contributors | Effective team size (not headcount) |
| Are we bottlenecked? | Review wait time | Review capacity constraints |
| Is quality slipping? | Test failure rate, PR size | Rushing = future capacity drain |
Free Download: Capacity Planning Calculator - An interactive tool to estimate team delivery capacity using your PR data.
Key Metrics for Capacity Planning
1. Throughput: PRs Merged
Your simplest capacity metric. Track PRs merged per time period:
- Weekly: Best for sprint planning, shows recent trends
- Monthly: Smooths out vacation/holiday variation
- Quarterly: Strategic planning, executive reporting
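As a sketch, counting merged PRs per period is a one-liner once you have merge dates from your Git host's API. The dates below are hypothetical; everything else is standard library:

```python
from collections import Counter
from datetime import date

def weekly_throughput(merge_dates):
    """Count merged PRs per ISO (year, week) from a list of merge dates."""
    return dict(Counter(d.isocalendar()[:2] for d in merge_dates))

# Hypothetical merge dates pulled from your Git host's API
merged = [date(2024, 3, 4), date(2024, 3, 6), date(2024, 3, 12)]
print(weekly_throughput(merged))  # {(2024, 10): 2, (2024, 11): 1}
```

Swap the key for `(d.year, d.month)` to get the monthly view that smooths out vacation variation.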
Capacity Planning Formula
Calculate expected throughput by adjusting the historical average for planned time off and meeting/planning overhead (~15-20%).
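A minimal sketch of that adjustment in Python. The 17.5% default is the midpoint of the 15-20% overhead range above, and the example numbers are hypothetical:

```python
def expected_prs(historical_avg, pct_time_off, overhead=0.175):
    """Expected PRs for the period: historical average, reduced by
    planned time off and meeting/planning overhead (default 17.5%,
    the midpoint of the 15-20% range)."""
    return historical_avg * (1 - pct_time_off) * (1 - overhead)

# e.g. 40 PRs/sprint historically, 10% of engineer-days off next sprint
print(round(expected_prs(40, 0.10), 1))  # -> 29.7
```

A team that averages 40 PRs per sprint should plan for roughly 30 when a tenth of its engineer-days are out of office.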
2. Velocity Indicators: Cycle Time
Watch cycle time trends to judge whether your current pace can last:
- Stable or declining: Team has capacity headroom
- Rising: Team is at or over capacity, and work is piling up
- Volatile: Inconsistent process or external blockers
See our Cycle Time Breakdown Guide to diagnose where time is going.
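One way to label a cycle-time series along these lines, as a sketch: a high coefficient of variation flags "volatile", and a large jump between the early and late halves flags "rising". Both thresholds here are illustrative assumptions, not industry benchmarks:

```python
from statistics import mean, pstdev

def cycle_time_trend(weekly_hours, volatile_cv=0.4, rising_pct=0.15):
    """Label a cycle-time series 'volatile', 'rising', or 'stable'.
    Thresholds are illustrative assumptions, not benchmarks."""
    avg = mean(weekly_hours)
    if avg and pstdev(weekly_hours) / avg > volatile_cv:
        return "volatile"  # noisy process or external blockers
    half = len(weekly_hours) // 2
    early, late = mean(weekly_hours[:half]), mean(weekly_hours[half:])
    if late > early * (1 + rising_pct):
        return "rising"  # work is piling up; capacity strain
    return "stable"

print(cycle_time_trend([20, 21, 19, 26, 28, 30]))  # -> rising
```

Tune the cutoffs to your own history before acting on the labels.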
3. Contributor Count: Effective Team Size
Headcount and capacity are different numbers. Track active contributors instead:
- Active contributor: Developer who merged at least 1 PR in the time period
- Effective team size: Usually 70-85% of headcount due to PTO, meetings, non-coding work
Effective Capacity Check
If you have 10 engineers but only 7 merged PRs last week, your effective team size is 7, not 10.
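The check above is just a distinct count over PR authors. A sketch, with hypothetical author logins standing in for data from your Git host:

```python
def effective_team_size(headcount, pr_authors):
    """Active contributors = developers with >=1 merged PR in the period.
    Returns (active count, utilization vs headcount)."""
    active = len(set(pr_authors))
    return active, active / headcount

# Hypothetical author logins from last week's merged PRs
authors = ["ana", "ben", "ana", "chen", "dee", "ben", "eli", "fay", "gus"]
size, utilization = effective_team_size(10, authors)
print(size, f"{utilization:.0%}")  # -> 7 70%
```

A utilization persistently below the 70-85% range is worth investigating rather than assuming slack.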
📊 View Capacity Metrics in CodePulse
Navigate to the Dashboard to see your capacity indicators:
- PRs Merged: Total merged PRs with trend indicator
- Cycle Time: Current average and breakdown by phase
- Deployment Frequency: PRs merged per working day
- Developers page: Individual contribution patterns
Identifying Capacity Constraints
Signs of Overloaded Teams
| Signal | What It Means | Action |
|---|---|---|
| Rising cycle time | Work waiting longer to complete | Reduce WIP, add capacity, or cut scope |
| Growing PR queue | More PRs opened than merged | Review bottleneck. Rebalance reviewers |
| Increasing PR size | Batching to reduce overhead | Often a sign of process friction |
| Declining test pass rate | Rushing, cutting corners | Quality will create future capacity drain |
| Weekend/late commits | Team working overtime | Unsustainable. Burnout risk |
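The "growing PR queue" signal from the table is easy to compute: subtract merged from opened each week and watch for a consistently positive series. A sketch with hypothetical weekly counts:

```python
def queue_growth(opened_per_week, merged_per_week):
    """Net PRs added to the review queue each week. A consistently
    positive series signals a review bottleneck."""
    return [o - m for o, m in zip(opened_per_week, merged_per_week)]

# Hypothetical counts over four weeks
print(queue_growth([12, 14, 15, 16], [12, 12, 11, 10]))  # -> [0, 2, 4, 6]
```

Here the queue grows faster every week, a clear prompt to rebalance reviewers before cycle time follows.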
Signs of Underutilized Teams
- Very low cycle times with idle wait periods
- PRs approved instantly with minimal review
- Declining PR count without corresponding time off
- High context-switching across many small tasks
Low output usually points to unclear priorities or blocked work, not excess capacity. Investigate before piling on more tasks.
Planning with Historical Throughput
Building a Capacity Baseline
- Collect 6-12 weeks of data: Enough to smooth out anomalies
- Calculate weekly averages: PRs merged, cycle time, active contributors
- Note variations: What caused high/low weeks?
- Identify sustainable pace: The rate you can maintain without quality degradation
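The four steps above can be sketched as a small summary over 6-12 weeks of counts. Using the median alongside the mean is a deliberate choice here, since it resists outlier weeks (holidays, incidents) that the mean absorbs:

```python
from statistics import mean, median

def capacity_baseline(weekly_prs):
    """Summarize 6-12 weeks of merged-PR counts into a planning baseline."""
    return {
        "mean": mean(weekly_prs),      # pulled down by anomaly weeks
        "median": median(weekly_prs),  # robust "typical week"
        "low": min(weekly_prs),        # worst week; what caused it?
        "high": max(weekly_prs),       # best week; is it repeatable?
    }

# Hypothetical 8-week history; week 3 was a holiday week
print(capacity_baseline([38, 41, 22, 40, 39, 43, 37, 40]))
```

The gap between mean (37.5) and median (39.5) in this example quantifies how much the holiday week distorts the average.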
Adjusting for Known Factors
Capacity Adjustment Checklist
Time Off & Availability
- Planned PTO and holidays
- On-call rotation impact
- Company events, offsites
Team Changes
- New hires ramping up (expect 50% productivity for first month)
- Departures (knowledge transfer overhead)
Project Overhead
- Major initiatives (migrations, refactors)
- End-of-quarter planning overhead
- Technical debt paydown allocation
Setting Realistic Targets
A common mistake is planning to 100% of theoretical capacity. Build in buffer:
- 80% rule: Plan to deliver 80% of historical throughput
- Risk adjustment: Novel work takes longer than expected, so pad estimates for it by 20-30%
- Interrupt buffer: Reserve 15-20% for bugs, support, and urgent requests
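Stacking the 80% rule and the interrupt buffer gives a concrete commit number. A sketch, using the midpoint of the 15-20% buffer range; the inputs are hypothetical:

```python
def commit_target(historical_avg, plan_ratio=0.80, interrupt_buffer=0.175):
    """What to commit to stakeholders: 80% of historical throughput,
    minus a reserve for bugs, support, and urgent requests
    (default 17.5%, the midpoint of the 15-20% range)."""
    return historical_avg * plan_ratio * (1 - interrupt_buffer)

# Hypothetical: team historically merges 40 PRs per sprint
print(round(commit_target(40), 1))  # -> 26.4
```

Committing to ~26 rather than 40 feels conservative, but it is the difference between a reliable track record and chronically missed commitments.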
Communicating Capacity to Stakeholders
What Executives Want to Know
Reframe engineering metrics in the language your executives already use:
| Executive Question | Data to Show | How to Frame It |
|---|---|---|
| "Can we ship Feature X by Q2?" | Historical throughput + feature size estimate | "Based on our delivery rate, we can ship X features of this size per quarter" |
| "Why is the team slower?" | Cycle time breakdown, contributor trends | "Review wait time increased 40%. We need to rebalance reviewers" |
| "Do we need more engineers?" | Throughput vs demand, bottleneck analysis | "We're at 95% capacity. Additional scope requires headcount or tradeoffs" |
Creating Capacity Reports
A good recurring capacity report covers five areas:
- Throughput trend: PRs merged over time (weekly or monthly)
- Velocity health: Cycle time trend (is it stable?)
- Utilization: Active contributors vs headcount
- Quality check: Test failure rate, review coverage
- Forward look: Known capacity impacts (PTO, initiatives)
See our Board-Ready Engineering Metrics guide for executive presentation tips.
Avoiding Capacity Planning Pitfalls
Common Mistakes
- Counting PRs without context: A 10-line bugfix isn't equal to a 500-line feature
- Ignoring quality signals: High throughput with rising defects is borrowing from the future
- Planning to 100%: No buffer for surprises guarantees missed commitments
- Static planning: Capacity shifts. Review monthly
- Individual tracking: Use team metrics, not developer scorecards
Healthy Practices
- Review capacity weekly in standups (quick pulse check)
- Recalibrate monthly based on actual vs planned
- Communicate capacity constraints proactively, not reactively
- Use data to advocate for sustainable pace, not to push harder
For benchmarks on what "good" looks like, see our PR Cycle Time Benchmarks guide.
See these insights for your team
CodePulse connects to your GitHub and shows you actionable engineering metrics in minutes. No complex setup required.
Free tier available. No credit card required.
Related Guides
Engineering Metrics Dashboard: The 7 Metrics You Need
Skip vanity metrics. Here are the 7 engineering metrics VPs actually need to track team performance, delivery, and quality.
Your PR Cycle Time Is Fine (Here's the Benchmark)
What is a good PR cycle time? Benchmarks and targets based on team size, industry, and engineering maturity.
I Got $2M in Budget With These 5 Engineering Metrics
Learn how to create engineering metrics presentations that resonate with board members, investors, and C-suite executives.
The Data That Made My CFO Say Yes to 5 New Engineers
Build a data-driven business case for engineering headcount using throughput metrics, benchmarks, and trend analysis that executives understand.