You want to measure DORA metrics like deployment frequency and lead time for changes, but you don't have access to your CI/CD pipeline data. Or maybe you have CI access but the data is fragmented across Jenkins, GitHub Actions, and a custom deploy script. Can you still track these critical delivery metrics?
Yes. This guide shows you how to measure deployment frequency and lead time using only GitHub data—PR merges to your main branch and commit timestamps. You'll learn how to proxy DORA's key delivery metrics without instrumenting a single pipeline, and how to use CodePulse to track delivery cadence across all your teams.
Why Deployment Frequency Matters (Without CI Access)
Deployment frequency is one of the four DORA (DevOps Research and Assessment) metrics proven to correlate with high-performing engineering teams. It measures how often you ship code to production. Elite teams deploy multiple times per day; low performers deploy once per month or less.
The Traditional Problem: No CI/CD Visibility
Most engineering analytics tools tell you to instrument your deployment pipeline: "Just send us a webhook when you deploy!" But in practice:
- DevOps owns the pipeline, so engineering managers don't have access
- You deploy from multiple pipelines (staging, production, hotfixes)
- Your deploy definition varies by service (Docker push? Kubernetes apply? Lambda deploy?)
- Historical data doesn't exist—you'd only track deploys going forward
The GitHub-Based Alternative
For most teams, merging to the main branch is a deployment proxy. If your team follows trunk-based development or GitFlow with regular releases, every merge to main or master represents a unit of work that's ready to ship (or ships automatically via CD).
Deployment frequency via GitHub = PRs merged to main branch per working day
This works because:
- Complete historical data: GitHub has your entire PR history; you can calculate deployment frequency retroactively for years
- No instrumentation required: Just connect to GitHub—no pipeline changes needed
- Consistent definition: Works the same way across all repositories, regardless of deploy tooling
- Leading indicator: Even if deploy happens 10 minutes after merge, the merge timestamp is close enough for trend analysis
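As a rough sketch, the merge-based formula above reduces to counting PR merge timestamps (the GitHub API's `merged_at` field) against working days. The sample dates below are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical merge dates, e.g. the `merged_at` field of PRs merged
# into main, fetched from the GitHub API.
merge_dates = [
    date(2024, 3, 4), date(2024, 3, 4), date(2024, 3, 5),
    date(2024, 3, 6), date(2024, 3, 8), date(2024, 3, 11),
]

def deployment_frequency(dates, start, end):
    """PRs merged to main per working day (Mon-Fri) between start and end."""
    working_days = sum(
        1 for n in range((end - start).days + 1)
        if (start + timedelta(days=n)).weekday() < 5  # weekday() 0-4 = Mon-Fri
    )
    merges = sum(1 for d in dates if start <= d <= end)
    return merges / working_days

# 6 merges over 10 working days (Mar 4-15, 2024) -> 0.6 per working day
freq = deployment_frequency(merge_dates, date(2024, 3, 4), date(2024, 3, 15))
```

A production version would also subtract company holidays from the working-day count, as the working-day definition used later in this guide does.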
When This Approach Doesn't Work
GitHub-based deployment frequency won't be accurate if:
- You merge to main but batch-deploy weekly (main != production)
- You deploy without PRs (e.g., direct commits to main)
- Different repositories have different release cycles (monorepo with independent services)
For these cases, you'll need actual deployment instrumentation. But for most teams, GitHub PR merges are a reliable proxy.
Measuring Lead Time from GitHub Data
Lead time for changes—another DORA metric—measures time from code written to code running in production. Like deployment frequency, you can measure this from GitHub without CI access.
Lead Time = First Commit to Merge
The traditional DORA definition is "commit to deploy." Since we're using merge-to-main as our deployment proxy:
Lead time = time from first commit on a branch to PR merged to main
This captures:
- Time spent coding (first commit to PR open)
- Time spent in review (PR open to approval)
- Time waiting to merge (approval to merge)
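Under these definitions, both lead time and cycle time reduce to simple timestamp arithmetic. A minimal sketch, assuming you've already fetched the relevant timestamps for one PR (the values below are invented):

```python
from datetime import datetime

# Hypothetical timestamps for a single PR.
pr = {
    "first_commit": datetime(2024, 3, 4, 9, 0),   # first commit on the branch
    "opened":       datetime(2024, 3, 5, 14, 0),  # PR created
    "merged":       datetime(2024, 3, 6, 11, 0),  # merged to main
}

def hours_between(a, b):
    return (b - a).total_seconds() / 3600

lead_time_h = hours_between(pr["first_commit"], pr["merged"])  # DORA-style lead time
cycle_time_h = hours_between(pr["opened"], pr["merged"])       # PR cycle time
```

Here the lead time is 50 hours while the cycle time is 21 hours, illustrating how much of "lead time" can accrue before the work is even visible to the team.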
Why Cycle Time Is a Better Metric
In practice, PR cycle time (PR open to merge) is more actionable than full lead time because:
- Starts when work becomes visible: The first commit might happen days before the PR is opened, but the team can't act on invisible work
- Excludes local experimentation: Developers often commit multiple times while exploring a solution; those early commits aren't "real" lead time
- Measurable bottlenecks: Once a PR is open, you can see exactly where delays happen (waiting for review, review duration, etc.)
CodePulse tracks both:
- Cycle time: PR created to merged (the actionable metric for most teams)
- Lead time: First commit to merged (the DORA-compliant metric for benchmarking)
The Four Components of Lead Time
CodePulse breaks cycle time (our lead time proxy) into four measurable phases:
- Coding Time: PR created to review requested—how long the PR sits before the author marks it ready
- Wait for Review: Review requested to first review—how long reviewers take to start looking
- Review Duration: First review to approval—how long the active review process takes
- Approval to Merge: Approval to merged—how long it takes to actually land the code
This breakdown lets you identify exactly where your lead time is slowing down. For a detailed analysis of each component, see our Cycle Time Breakdown Guide.
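The four phases are just differences between consecutive PR timeline events. A sketch using illustrative event names (these are not literal GitHub API field names):

```python
from datetime import datetime

# Illustrative timeline events for one PR, in chronological order.
events = {
    "created":          datetime(2024, 3, 5, 9, 0),
    "review_requested": datetime(2024, 3, 5, 10, 0),
    "first_review":     datetime(2024, 3, 5, 16, 0),
    "approved":         datetime(2024, 3, 6, 9, 0),
    "merged":           datetime(2024, 3, 6, 10, 0),
}

def hours_between(a, b):
    return (b - a).total_seconds() / 3600

order = ["created", "review_requested", "first_review", "approved", "merged"]
names = ["coding_time", "wait_for_review", "review_duration", "approval_to_merge"]

# Each phase is the gap between adjacent events in the timeline.
breakdown = {
    name: hours_between(events[a], events[b])
    for name, a, b in zip(names, order, order[1:])
}
```

With these invented timestamps, the phases sum to the full 25-hour cycle time, and "review duration" dominates because approval landed the next morning.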
Understanding Flow Efficiency
Once you have lead time broken down, you can calculate flow efficiency—a metric from Lean manufacturing that reveals how much of your lead time is active work vs. waiting.
What Is Flow Efficiency?
Flow efficiency measures what percentage of lead time is spent actively working vs. waiting. For software delivery:
- Active work = Coding time + Review duration
- Wait time = Wait for review + Approval to merge
- Flow efficiency = Active work / (Active work + Wait time)
Interpretation: a PR with 25% flow efficiency spent 75% of its lifecycle waiting and only 25% being actively worked on.
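The calculation itself is tiny. A sketch, taking the four cycle-time phase durations as inputs (the example numbers are invented):

```python
def flow_efficiency(coding_h, wait_for_review_h, review_h, approval_to_merge_h):
    """Fraction of total cycle time spent on active work (coding + review)."""
    active = coding_h + review_h
    total = active + wait_for_review_h + approval_to_merge_h
    return active / total

# Invented numbers: 4h of active work inside a 16h cycle -> 25% flow efficiency
eff = flow_efficiency(coding_h=2, wait_for_review_h=10, review_h=2,
                      approval_to_merge_h=2)
```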
Why Flow Efficiency Matters
Most engineering teams have flow efficiency under 15%. That means 85% of the time between starting work and deploying it is spent waiting. Common causes:
- Overloaded reviewers (high wait-for-review time)
- Slow CI pipelines (high approval-to-merge time)
- Asynchronous review cycles with timezone delays (high review duration)
Flow efficiency tells you where to focus. If your flow efficiency is 10%, you don't have a "developers are slow" problem—you have a process bottleneck problem.
Using CodePulse to Calculate Flow Efficiency
CodePulse's Cycle Time Breakdown component on the Dashboard shows you all four time components. To estimate flow efficiency:
- Look at the breakdown percentages for each component
- Add coding time % + review duration % = active work %
- The remainder is wait time
If your breakdown shows:
- Coding time: 10%
- Wait for review: 60%
- Review duration: 20%
- Approval to merge: 10%
Your flow efficiency is roughly 30% (10% + 20%). That means 70% of lead time is waiting—a clear opportunity to improve by reducing review wait times.
Setting Delivery Targets by Team Type
What's a "good" deployment frequency or lead time? It depends on your team type, size, and maturity. Here's how to set realistic targets.
Deployment Frequency Targets
DORA research defines four performance levels for deployment frequency:
| Performance Level | Deployment Frequency | PRs Merged per Day |
|---|---|---|
| Elite | Multiple times per day | 5+ per working day |
| High | Once per day to once per week | 1-5 per working day |
| Medium | Once per week to once per month | 0.2-1 per working day |
| Low | Less than once per month | < 0.2 per working day |
Lead Time Targets
Similarly, DORA defines lead time performance levels:
| Performance Level | Lead Time for Changes | Cycle Time (CodePulse) |
|---|---|---|
| Elite | Less than 1 hour | < 4 hours |
| High | 1 hour to 1 day | 4-24 hours |
| Medium | 1 day to 1 week | 1-7 days |
| Low | More than 1 week | > 7 days |
Adjusting Targets by Team Type
Application teams (building customer-facing features) should aim for higher deployment frequency:
- Target: 3-5 PRs merged per day (elite/high performance)
- Target cycle time: < 24 hours (high performance)
- Small, frequent deploys reduce risk and enable fast feedback
Platform teams (building internal tools, infrastructure) often have longer natural cycles:
- Target: 1-2 PRs merged per day (high performance)
- Target cycle time: 1-3 days (medium/high performance)
- Changes are larger, require more thorough review, and have bigger blast radius
For more on platform team metrics, see our Platform Team Metrics Guide.
Adjusting for Team Size
Smaller teams naturally merge fewer PRs per day:
- 2-3 person team: Aim for 1-2 merges per day total (0.5 per person)
- 5-8 person team: Aim for 3-5 merges per day total (0.5-0.7 per person)
- 10+ person team: Aim for 5-10 merges per day total (0.5-1 per person)
If individual developers are merging more than 1 PR per day on average, check PR size—they may be splitting work too finely.
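A quick way to sanity-check these per-person rates against your own numbers (all figures below are hypothetical):

```python
def per_person_merge_rate(prs_merged, working_days, team_size):
    """Average PRs merged per developer per working day."""
    return prs_merged / working_days / team_size

# Hypothetical: a 6-person team merged 32 PRs over a 10-working-day sprint.
rate = per_person_merge_rate(prs_merged=32, working_days=10, team_size=6)
# ~0.53 per person per day, inside the 0.5-0.7 band for a 5-8 person team
```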
Using CodePulse to Track Delivery Cadence
Now that you understand the metrics, here's how to track them in CodePulse without any CI/CD instrumentation.
Dashboard View: Deployment Frequency
Navigate to your Dashboard and look for the Velocity section. You'll see:
- Deployment Frequency: PRs merged to main branch per working day (excludes weekends and holidays)
- PRs Merged: Total count of PRs merged in the selected time period
- Trend indicator: Whether deployment frequency is increasing or decreasing vs. previous period
📊 Finding Deployment Frequency in CodePulse
To see your team's deployment cadence:
- Go to Dashboard → Velocity Metrics
- Look for Deployment Frequency per Day (PRs merged to main)
- Use the time period selector to view 7d, 30d, or 90d trends
- Click Trends in the nav to see historical patterns over time
- Click Repositories to compare deployment frequency across teams
Dashboard View: Lead Time / Cycle Time
Also in the Dashboard, the Cycle Time metric card shows:
- Average Cycle Time: Mean hours from PR creation to merge for all merged PRs
- Cycle Time Breakdown: Stacked bar chart showing the four components of cycle time
- Trend indicator: Whether cycle time is improving (decreasing) or regressing
Trends Page: Historical Analysis
For deeper historical analysis, go to Trends:
- Deployment Frequency Over Time: Line chart showing PRs merged per day over the past 90 days (or custom range)
- Cycle Time Trend: Line chart showing average cycle time over the same period
- Correlation analysis: See if improving cycle time correlates with higher deployment frequency
Look for:
- Weekly patterns: Do deploys cluster on certain days? (Monday merges after weekend work, Friday slowdowns)
- Sprint boundaries: If you see spikes every 2 weeks, your team may be batching work to sprint ends
- Seasonal changes: Summer and December often show lower deployment frequency due to vacations
Repository Comparison: Cross-Team Analysis
If you manage multiple teams, use the Repositories page to compare:
- frontend-app (Application Team)
- backend-api (Application Team)
- platform-infra (Platform Team)
This view helps you:
- Identify which teams need support improving delivery speed
- Find high performers to learn best practices from
- Justify different targets for different team types (e.g., platform vs. application)
Improving Deployment Frequency Over Time
Now that you're tracking deployment frequency and lead time, how do you actually improve them? Here are proven strategies.
Strategy 1: Reduce Batch Size (Smaller PRs)
The most common reason for low deployment frequency is large PRs. If your average PR is 800 lines, developers can only complete one per week. Smaller PRs (200-400 lines) can be completed and reviewed in a day.
Actions:
- Check average PR size in the Dashboard's Quality Metrics section
- Set up an alert when PRs exceed 500 lines (see our PR Size Optimization Guide)
- Encourage feature flagging to ship incomplete features incrementally
- Celebrate developers who split large changes into multiple small PRs
Strategy 2: Speed Up Cycle Time
Deployment frequency is limited by cycle time. If your cycle time is 3 days, you can't deploy more than 2-3 times per week even with perfect parallelization.
Actions:
- Use the Cycle Time Breakdown to identify your biggest bottleneck
- If "wait for review" is high, balance review load across more team members
- If "approval to merge" is high, optimize your CI pipeline or enable auto-merge
- Set a goal to reduce cycle time by 25% this quarter
Strategy 3: Increase Parallel Work in Progress
If each developer works on one PR at a time, your maximum deployment frequency is (team size) × (1 / cycle time in days). For a 5-person team with 2-day cycle time, that's 2.5 deploys per day—capped at "high" performance.
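That capacity ceiling is simple arithmetic; here is the worked example from the paragraph above as code:

```python
# 5 developers, 2-day cycle time, one PR in flight per developer.
team_size = 5
wip_per_developer = 1
cycle_time_days = 2.0

# Maximum deployment frequency = team size x WIP x (1 / cycle time in days)
max_deploys_per_day = team_size * wip_per_developer / cycle_time_days  # 2.5
```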
To reach "elite" performance (5+ deploys per day), developers need to work on multiple PRs simultaneously:
- Open a new PR for the next feature while waiting for review on the current one
- Use stacked PRs (PR B depends on PR A) to parallelize work
- Pair program to increase throughput without increasing WIP per person
Caution: Don't increase WIP by having developers start new work before responding to review feedback. That increases cycle time and decreases quality.
Strategy 4: Automate the Merge
If your "approval to merge" time is high (see cycle time breakdown), enable auto-merge:
- GitHub's auto-merge feature: approved PRs merge automatically when CI passes
- Mergify, Kodiak, or similar bots that handle merge conflicts and queuing
- Reduces "approval to merge" time from hours to minutes
Strategy 5: Set Up Alerts for Delivery Slowdowns
Don't wait for the end of the month to notice your deployment frequency has dropped. Set up proactive alerts.
Go to Alerts → Alert Rules → Create Rule:
Alert: Low Deployment Frequency
- Metric: deployment_frequency_per_day
- Operator: <
- Threshold: 2.0
- Severity: warning
- Description: "Deployment frequency dropped below 2 per day"
Alert: Cycle Time Regression
- Metric: cycle_time_hours
- Operator: >
- Threshold: 48
- Severity: warning
- Description: "Cycle time exceeded 2 days"
Alert: PRs Waiting Too Long
- Metric: wait_for_review_hours
- Operator: >
- Threshold: 8
- Severity: critical
- Description: "PRs waiting over 8 hours for first review"
These alerts catch problems early—before they become visible in monthly reports.
Strategy 6: Tie Cycle Time to Deployment Frequency in Reviews
In weekly or monthly engineering reviews with leadership, always show both metrics together:
- Current deployment frequency: 2.8 PRs/day (high performance)
- Current cycle time: 32 hours (medium performance)
- Goal: Reduce cycle time to 20 hours, increase deployment frequency to 4 PRs/day
This helps leadership understand the connection: improving cycle time directly increases deployment frequency. For more on creating board-ready metrics, see our Board-Ready Engineering Metrics Guide.
Tracking Progress
After implementing these strategies, use CodePulse to measure impact:
- Weekly: Check Dashboard to see if deployment frequency is trending up
- Monthly: Compare 30-day averages for deployment frequency and cycle time
- Quarterly: Use Trends page to visualize 90-day improvement and share with stakeholders
Remember: elite deployment frequency (5+ per day) isn't the goal for every team. The goal is consistent improvement toward a target that matches your team's context.
Related Guides
DORA Metrics Are Being Weaponized. Here's the Fix
DORA metrics were designed for research, not management. Learn how to use them correctly as signals for improvement, not targets to game.
The 4-Minute Diagnosis That Reveals Why Your PRs Are Stuck
Learn to diagnose exactly where your PRs are getting stuck by understanding the 4 components of cycle time and how to improve each one.
Platform Teams: You're Measuring the Wrong Things
How platform and infrastructure teams can use engineering metrics to demonstrate impact, track deployment frequency, and communicate value to leadership.