GitHub gives you pull request data. Timestamps, diffs, review comments, merge status. What it does not give you is analytics: why PRs take 3 days to merge, which reviewers are overloaded, and whether your cycle time is getting better or worse. This guide bridges that gap.
## How do I get pull request analytics from GitHub?
GitHub's built-in Insights show contributor activity but lack PR-specific analytics like cycle time breakdown or review bottleneck detection. For real PR analytics, connect a tool like CodePulse (5-minute setup via GitHub App) that analyzes PR lifecycle data to surface median cycle time, time-to-first-review, review load distribution, and merge patterns. Industry benchmarks from 803,000 PRs show elite teams merge in under 4 hours.
## The GitHub Data Gap
GitHub tracks everything about a pull request: when it was opened, who reviewed it, what changed, and when it merged. But it does not calculate the metrics that matter to engineering managers. There is no "average cycle time" chart, no "review bottleneck" view, no "this team's velocity is declining" alert.
GitHub Insights shows commit frequency and contributor counts. That is activity, not analytics. Knowing that your team made 200 commits last week tells you nothing about whether features are shipping faster or slower.
The data you need lives in the gaps between GitHub events: the time between PR creation and first review, between approval and merge, between the last commit and the review request. These intervals are where delivery bottlenecks hide.
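As a sketch, those gaps are just datetime arithmetic over event timestamps the GitHub API already exposes. The timestamps below are hypothetical placeholders; in practice they come from the pull request and review objects:

```python
from datetime import datetime

def interval_hours(start: datetime, end: datetime) -> float:
    """Hours elapsed between two PR lifecycle events."""
    return (end - start).total_seconds() / 3600

# Hypothetical PR lifecycle timestamps (in practice, fetched from the
# GitHub REST or GraphQL API).
pr = {
    "opened_at":    datetime(2024, 5, 1, 9, 0),
    "first_review": datetime(2024, 5, 1, 15, 30),
    "approved_at":  datetime(2024, 5, 2, 10, 0),
    "merged_at":    datetime(2024, 5, 2, 11, 0),
}

# The intervals GitHub never charts for you:
time_to_first_review = interval_hours(pr["opened_at"], pr["first_review"])  # 6.5 hours
approval_to_merge = interval_hours(pr["approved_at"], pr["merged_at"])      # 1.0 hour
```

Aggregate these intervals across all merged PRs and the bottlenecks become visible.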
## 🔥 Our Take
GitHub Insights is a vanity dashboard. It tells you your team is busy. It does not tell you your team is effective.
The distinction matters. A team merging 50 small PRs per week with 2-hour cycle times is dramatically more effective than a team merging 10 large PRs with 5-day cycle times, even if the second team writes more code. GitHub Insights cannot show you this difference.
## PR Metrics That Actually Matter
Not all PR metrics are created equal. Here are the ones that drive real improvement, ranked by how actionable they are:
| Metric | What It Measures | Why It Matters | Benchmark |
|---|---|---|---|
| Cycle Time | First commit to merge (median) | Overall delivery speed | Elite: <4h, Good: <24h |
| Time to First Review | PR open to first review comment | Review responsiveness | Elite: <2h, Good: <8h |
| Review Turnaround | Review request to approval | Review throughput | Elite: <4h, Good: <24h |
| PR Size | Lines changed per PR | Reviewability, risk | Optimal: 100-300 lines |
| Review Coverage | % PRs with at least 1 review | Quality gate enforcement | Target: >95% |
| Merge Without Approval | % PRs merged without review | Process compliance | Target: <5% |
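Several of these metrics reduce to simple aggregations over per-PR records. A minimal sketch, using hypothetical PR data (field names are illustrative, not a real API schema):

```python
from statistics import median

# Hypothetical per-PR records; fields mirror the metrics in the table above.
prs = [
    {"cycle_hours": 3.0,  "review_count": 2, "approved_before_merge": True},
    {"cycle_hours": 20.0, "review_count": 1, "approved_before_merge": True},
    {"cycle_hours": 72.0, "review_count": 0, "approved_before_merge": False},
]

# Median, not mean: one 72-hour outlier should not dominate the metric.
median_cycle = median(p["cycle_hours"] for p in prs)

# Review coverage: share of PRs with at least one review (target >95%).
review_coverage = sum(p["review_count"] > 0 for p in prs) / len(prs)

# Merge-without-approval rate (target <5%).
merged_without_approval = sum(not p["approved_before_merge"] for p in prs) / len(prs)
```

Using the median keeps the benchmark comparison honest, since cycle-time distributions are heavily right-skewed.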
> "Cycle time is the single best proxy for engineering delivery health. If you can only track one PR metric, track this one."
## Breaking Down PR Cycle Time
Aggregate cycle time is useful for spotting trends. But to fix problems, you need to break it into phases. Every PR moves through four:
- **Coding Time**: First commit to PR open. Measures how long active development takes before requesting review.
- **Waiting for Review**: PR open to first review. This is idle time where the PR sits in a queue. It is usually the biggest bottleneck.
- **In Review**: First review to final approval. Measures the back-and-forth of the review process.
- **Merge Time**: Approval to merge. Usually short unless blocked by CI checks or merge conflicts.
In our analysis of 803,000 pull requests, the waiting-for-review phase accounts for 40-60% of total cycle time. That means the most effective way to speed up delivery is not to code faster, but to review faster.
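The four-phase split can be sketched as a function over a PR's event timestamps. The timestamps and field names below are hypothetical:

```python
from datetime import datetime

def phase_breakdown(pr: dict) -> dict:
    """Split a PR's cycle time into the four phases above, in hours."""
    def hours(a: str, b: str) -> float:
        return (pr[b] - pr[a]).total_seconds() / 3600
    return {
        "coding":             hours("first_commit", "opened"),
        "waiting_for_review": hours("opened", "first_review"),
        "in_review":          hours("first_review", "approved"),
        "merge":              hours("approved", "merged"),
    }

# Hypothetical timestamps for a single PR.
pr = {
    "first_commit": datetime(2024, 5, 1, 8, 0),
    "opened":       datetime(2024, 5, 1, 12, 0),
    "first_review": datetime(2024, 5, 2, 9, 0),
    "approved":     datetime(2024, 5, 2, 14, 0),
    "merged":       datetime(2024, 5, 2, 15, 0),
}

phases = phase_breakdown(pr)
# Here waiting_for_review dominates: 21 of 31 total hours.
```

In this example the PR looks "slow," but the coding and review work took only a third of the elapsed time; the rest was queue time.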
## Review Patterns and Bottlenecks
PR analytics reveal review patterns that are invisible from individual PRs. Common patterns to watch for:
### The Review Bottleneck
One or two developers reviewing 60%+ of all PRs. This creates a single point of failure and burns out your best engineers. The fix: use CodePulse's Review Network visualization to see review load distribution and rebalance assignments.
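Detecting the pattern from raw data is a small aggregation. A sketch with hypothetical reviewer logins, applying the 60% rule above:

```python
from collections import Counter

# Hypothetical review records: one entry per completed review.
reviews = ["alice", "alice", "bob", "alice", "carol", "alice", "bob", "alice"]

counts = Counter(reviews)
total = sum(counts.values())
load = {who: n / total for who, n in counts.most_common()}

# Flag anyone carrying 60%+ of the review load.
bottlenecks = [who for who, share in load.items() if share >= 0.6]
# alice handled 5 of 8 reviews (62.5%): a single point of failure.
```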
### The Rubber Stamp
Reviews that approve within minutes of opening, with no substantive comments. Our research on 3.4 million PRs found that 26% of approved PRs received zero review comments. That is not code review. That is a checkbox.
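A simple heuristic catches this pattern automatically. The 10-minute threshold below is an illustrative assumption, not a universal rule; tune it to your team's PR sizes:

```python
from datetime import datetime, timedelta

def is_rubber_stamp(opened_at: datetime, approved_at: datetime,
                    comment_count: int,
                    min_review_minutes: int = 10) -> bool:
    """Flag approvals that arrive within minutes with zero comments."""
    elapsed = approved_at - opened_at
    return comment_count == 0 and elapsed < timedelta(minutes=min_review_minutes)

# Hypothetical example: approved 4 minutes after opening, no comments.
flag = is_rubber_stamp(
    datetime(2024, 5, 1, 9, 0),
    datetime(2024, 5, 1, 9, 4),
    comment_count=0,
)
```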
### The Size Creep
PRs gradually getting larger over time. Large PRs take disproportionately longer to review and have higher defect rates. Our data shows PRs over 1,000 lines take 5x longer to merge than PRs under 300 lines. Set up alerts for PRs exceeding your team's size threshold.
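A size alert is a one-line filter over diff stats. The 300-line threshold below is an assumption taken from the optimal range cited earlier; the additions/deletions fields mirror GitHub's per-PR diff stats:

```python
def size_alerts(prs: list[dict], threshold: int = 300) -> list[int]:
    """Return PR numbers whose total diff exceeds the team's size threshold."""
    return [p["number"] for p in prs
            if p["additions"] + p["deletions"] > threshold]

# Hypothetical PR list.
prs = [
    {"number": 101, "additions": 120, "deletions": 40},   # 160 lines: fine
    {"number": 102, "additions": 900, "deletions": 350},  # 1,250 lines: alert
]

flagged = size_alerts(prs)  # [102]
```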
> "The fastest way to ship faster is not to code faster. It is to review faster. Most teams have a review bottleneck they cannot see."
## Building PR Analytics Dashboards
A good PR analytics dashboard answers three questions at a glance:
- Are we getting faster or slower? (Cycle time trend over 4-8 weeks)
- Where is the bottleneck? (Phase breakdown: coding vs. waiting vs. review vs. merge)
- Is the load balanced? (Review distribution across team members)
### 📊 How to See This in CodePulse
CodePulse builds this dashboard automatically from your GitHub data:
- Dashboard shows cycle time breakdown with phase-by-phase analysis
- Velocity page shows trend over time with configurable granularity
- Review Network visualizes who reviews whom and where bottlenecks form
- Alert Rules notify you when PRs stall or cycle time exceeds thresholds
## Tools for GitHub PR Analytics
| Tool | Setup Time | PR Analytics Depth | Price |
|---|---|---|---|
| CodePulse | 5 minutes | Deep (cycle time phases, review patterns, bottlenecks) | Free tier, Pro from $149/mo |
| LinearB | 30 minutes | Deep (with CI/CD integration) | Free tier, paid plans available |
| GitHub Insights | 0 minutes | Basic (activity only, no PR lifecycle) | Included with GitHub |
| Apache DevLake | 1-2 days | Configurable (requires setup) | Free (self-hosted) |
| Custom (Grafana + API) | 2-4 weeks | Whatever you build | Engineering time |
## Getting Started
1. **Connect your repos.** Sign up for CodePulse and install the GitHub App. Your first PR analytics will appear within 5 minutes.
2. **Check your cycle time.** Is it improving, stable, or degrading? Look at the last 30 days vs. the prior 30 days.
3. **Identify the bottleneck phase.** Is it waiting for review (most common), review duration, or merge delays?
4. **Set up one alert.** Start with a "PR waiting for review > 8 hours" alert to catch stalled PRs before they age.
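The stalled-PR check in the last step is a filter over open PRs that have not yet received a first review. A minimal sketch with a hypothetical open-PR snapshot:

```python
from datetime import datetime, timedelta

def stalled_prs(open_prs: list[dict], now: datetime,
                max_wait: timedelta = timedelta(hours=8)) -> list[int]:
    """PR numbers still waiting for a first review past the threshold."""
    return [p["number"] for p in open_prs
            if p["first_review_at"] is None
            and now - p["opened_at"] > max_wait]

now = datetime(2024, 5, 1, 18, 0)

# Hypothetical snapshot of currently open PRs.
open_prs = [
    {"number": 7, "opened_at": datetime(2024, 5, 1, 8, 0),  "first_review_at": None},
    {"number": 8, "opened_at": datetime(2024, 5, 1, 16, 0), "first_review_at": None},
]

stale = stalled_prs(open_prs, now)  # PR #7 has waited 10 hours: alert
```

Run on a schedule, this gives you the alert before the PR ages into a multi-day cycle-time outlier.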
For more depth on PR optimization, see our guides on reducing PR cycle time, PR size optimization, and review load balancing.
## Frequently Asked Questions

**Which PR metrics should I track?**

Track cycle time (first commit to merge), time to first review, review turnaround time, PR size (lines changed), review coverage (% of PRs with at least one review), and merge-without-approval rate. These metrics reveal bottlenecks in your delivery pipeline that aggregate commit or issue metrics miss.
## See these insights for your team
CodePulse connects to your GitHub and shows you actionable engineering metrics in minutes. No complex setup required.
Free tier available. No credit card required.
## Related Guides
We Cut PR Cycle Time by 47%. Here's the Exact Playbook
A practical playbook for engineering managers to identify bottlenecks, improve review processes, and ship code faster—without sacrificing review quality.
The 4-Minute Diagnosis That Reveals Why Your PRs Are Stuck
Learn to diagnose exactly where your PRs are getting stuck by understanding the 4 components of cycle time and how to improve each one.
GitHub Is Hiding Your Repo's Real Health Score
Learn which repository-level metrics matter for engineering managers and staff engineers, how to track them across multiple repos, and when to intervene based on health signals.
GitHub Metrics: What to Track and What to Ignore
GitHub is a goldmine of engineering data. This guide shows which GitHub metrics actually matter (PR cycle time, review load, knowledge distribution), how to build a useful dashboard, and which vanity metrics to avoid.