
Measuring Deploy Frequency Without CI/CD (The Hack That Works)

Master DORA deployment frequency and lead time using GitHub data alone, without requiring CI/CD pipeline access.

11 min read · Updated January 15, 2025 · By CodePulse Team

You want to measure DORA metrics like deployment frequency and lead time for changes, but you don't have access to your CI/CD pipeline data. Or maybe you have CI access but the data is fragmented across Jenkins, GitHub Actions, and a custom deploy script. Can you still track these critical delivery metrics?

Yes. This guide shows you how to measure deployment frequency and lead time using only GitHub data—PR merges to your main branch and commit timestamps. You'll learn how to proxy DORA's key delivery metrics without instrumenting a single pipeline, and how to use CodePulse to track delivery cadence across all your teams.

Why Deployment Frequency Matters (Without CI Access)

Deployment frequency is one of the four DORA (DevOps Research and Assessment) metrics proven to correlate with high-performing engineering teams. It measures how often you ship code to production. Elite teams deploy multiple times per day; low performers deploy once per month or less.

The Traditional Problem: No CI/CD Visibility

Most engineering analytics tools tell you to instrument your deployment pipeline: "Just send us a webhook when you deploy!" But in practice:

  • DevOps owns the pipeline, engineering managers don't have access
  • You deploy from multiple pipelines (staging, production, hotfixes)
  • Your deploy definition varies by service (Docker push? Kubernetes apply? Lambda deploy?)
  • Historical data doesn't exist—you'd only track deploys going forward

The GitHub-Based Alternative

For most teams, merging to the main branch is a deployment proxy. If your team follows trunk-based development or GitFlow with regular releases, every merge to main or master represents a unit of work that's ready to ship (or ships automatically via CD).

Deployment frequency via GitHub = PRs merged to main branch per working day

This works because:

  • Complete historical data: GitHub has your entire PR history; you can calculate deployment frequency retroactively for years
  • No instrumentation required: Just connect to GitHub—no pipeline changes needed
  • Consistent definition: Works the same way across all repositories, regardless of deploy tooling
  • Leading indicator: Even if deploy happens 10 minutes after merge, the merge timestamp is close enough for trend analysis
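The calculation itself is a few lines of date arithmetic. Here's a minimal sketch, assuming you've already fetched the merge dates of PRs landed on main (for example via the GitHub API); the function names are illustrative, not part of any CodePulse API:

```python
from datetime import date, timedelta

def working_days(start: date, end: date) -> int:
    """Count weekdays (Mon-Fri) in the inclusive range [start, end]."""
    days = 0
    d = start
    while d <= end:
        if d.weekday() < 5:
            days += 1
        d += timedelta(days=1)
    return days

def deployment_frequency(merge_dates: list[date], start: date, end: date) -> float:
    """PRs merged to main per working day over the window."""
    merged = sum(1 for d in merge_dates if start <= d <= end)
    return merged / working_days(start, end)

# Example: 14 merges spread over a two-week window (10 working days) = 1.4/day
merges = [date(2025, 1, 6) + timedelta(days=i % 10) for i in range(14)]
print(deployment_frequency(merges, date(2025, 1, 6), date(2025, 1, 19)))  # 1.4
```

Note that holidays are ignored here for brevity; a production version would subtract them from the working-day count, as CodePulse does.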

When This Approach Doesn't Work

GitHub-based deployment frequency won't be accurate if:

  • You merge to main but batch-deploy weekly (main != production)
  • You deploy without PRs (e.g., direct commits to main)
  • Different repositories have different release cycles (monorepo with independent services)

For these cases, you'll need actual deployment instrumentation. But for 80% of teams, GitHub PR merges are a reliable proxy.


Measuring Lead Time from GitHub Data

Lead time for changes—another DORA metric—measures time from code written to code running in production. Like deployment frequency, you can measure this from GitHub without CI access.

Lead Time = First Commit to Merge

The traditional DORA definition is "commit to deploy." Since we're using merge-to-main as our deployment proxy:

Lead time = time from first commit on a branch to PR merged to main

This captures:

  • Time spent coding (first commit to PR open)
  • Time spent in review (PR open to approval)
  • Time waiting to merge (approval to merge)
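Both lead time and cycle time reduce to timestamp differences. A minimal sketch, assuming you have the first-commit, PR-open, and merge timestamps (all available from the GitHub API; the names here are illustrative):

```python
from datetime import datetime

def lead_time_hours(first_commit: datetime, merged: datetime) -> float:
    """DORA-style proxy: first commit on the branch to PR merged to main."""
    return (merged - first_commit).total_seconds() / 3600

def cycle_time_hours(pr_opened: datetime, merged: datetime) -> float:
    """PR cycle time: PR opened to merged."""
    return (merged - pr_opened).total_seconds() / 3600

first_commit = datetime(2025, 1, 6, 9, 0)   # Monday morning: start coding
pr_opened = datetime(2025, 1, 7, 10, 0)     # Tuesday: PR goes up for review
merged = datetime(2025, 1, 8, 15, 0)        # Wednesday afternoon: merged

print(lead_time_hours(first_commit, merged))   # 54.0
print(cycle_time_hours(pr_opened, merged))     # 29.0
```

The gap between the two numbers (25 hours of pre-PR work in this example) is exactly the "invisible work" the next section argues you should exclude.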

Why Cycle Time Is a Better Metric

In practice, PR cycle time (PR open to merge) is more actionable than full lead time because:

  • Starts when work becomes visible: The first commit might happen days before the PR is opened, but the team can't act on invisible work
  • Excludes local experimentation: Developers often commit multiple times while exploring a solution; those early commits aren't "real" lead time
  • Measurable bottlenecks: Once a PR is open, you can see exactly where delays happen (waiting for review, review duration, etc.)

CodePulse tracks both:

  • Cycle time: PR created to merged (the actionable metric for most teams)
  • Lead time: First commit to merged (the DORA-compliant metric for benchmarking)

The Four Components of Lead Time

CodePulse breaks cycle time (our lead time proxy) into four measurable phases:

  1. Coding Time: PR created to review requested—how long the PR sits before the author marks it ready
  2. Wait for Review: Review requested to first review—how long reviewers take to start looking
  3. Review Duration: First review to approval—how long the active review process takes
  4. Approval to Merge: Approval to merged—how long it takes to actually land the code

This breakdown lets you identify exactly where your lead time is slowing down. For a detailed analysis of each component, see our Cycle Time Breakdown Guide.
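Given the underlying PR event timestamps, the four phases are just successive differences. A hypothetical sketch (the event parameters are assumptions about your data model, not a CodePulse API):

```python
from datetime import datetime

def cycle_breakdown(created, review_requested, first_review, approved, merged):
    """Split cycle time into the four phases, in hours, from PR event timestamps."""
    def hours(a, b):
        return (b - a).total_seconds() / 3600
    return {
        "coding_time": hours(created, review_requested),
        "wait_for_review": hours(review_requested, first_review),
        "review_duration": hours(first_review, approved),
        "approval_to_merge": hours(approved, merged),
    }

events = [
    datetime(2025, 1, 6, 9),   # PR created
    datetime(2025, 1, 6, 11),  # review requested
    datetime(2025, 1, 7, 3),   # first review
    datetime(2025, 1, 7, 7),   # approved
    datetime(2025, 1, 7, 9),   # merged
]
print(cycle_breakdown(*events))
# {'coding_time': 2.0, 'wait_for_review': 16.0, 'review_duration': 4.0, 'approval_to_merge': 2.0}
```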

Understanding Flow Efficiency

Once you have lead time broken down, you can calculate flow efficiency—a metric from Lean manufacturing that reveals how much of your lead time is active work vs. waiting.

What Is Flow Efficiency?

Flow efficiency = (active work time / total lead time) × 100

It measures what percentage of lead time is spent actively working versus waiting. For software delivery: active work = coding time + review duration; wait time = wait for review + approval to merge.

Examples:

  • Typical PR (Coding: 2h, Wait: 16h, Review: 4h, Merge: 2h): active work is 6h of a 24h total = 25%
  • Fast PR (Coding: 2h, Wait: 2h, Review: 1h, Merge: 0.5h): active work is 3h of a 5.5h total ≈ 55%

Interpretation:

  • > 40%: Excellent, minimal wait time in your process
  • 25-40%: Good, typical for healthy teams
  • 15-25%: Average, room for improvement
  • < 15%: Poor, significant process bottlenecks

A PR with 25% flow efficiency spent 75% of its lifecycle waiting and only 25% being worked on.
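The formula is easy to check against the two examples above:

```python
def flow_efficiency(coding, wait, review, merge):
    """Active work (coding + review) as a percentage of total cycle time."""
    active = coding + review
    total = coding + wait + review + merge
    return 100 * active / total

print(round(flow_efficiency(2, 16, 4, 2)))   # 25  (the typical PR)
print(round(flow_efficiency(2, 2, 1, 0.5)))  # 55  (the fast PR)
```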

Why Flow Efficiency Matters

Most engineering teams have flow efficiency under 15%. That means 85% of the time between starting work and deploying it is spent waiting. Common causes:

  • Overloaded reviewers (high wait-for-review time)
  • Slow CI pipelines (high approval-to-merge time)
  • Asynchronous review cycles with timezone delays (high review duration)

Flow efficiency tells you where to focus. If your flow efficiency is 10%, you don't have a "developers are slow" problem—you have a process bottleneck problem.

Using CodePulse to Calculate Flow Efficiency

CodePulse's Cycle Time Breakdown component on the Dashboard shows you all four time components. To estimate flow efficiency:

  1. Look at the breakdown percentages for each component
  2. Add coding time % + review duration % = active work %
  3. The remainder is wait time

If your breakdown shows:

  • Coding time: 10%
  • Wait for review: 60%
  • Review duration: 20%
  • Approval to merge: 10%

Your flow efficiency is roughly 30% (10% + 20%). That means 70% of lead time is waiting—a clear opportunity to improve by reducing review wait times.


Setting Delivery Targets by Team Type

What's a "good" deployment frequency or lead time? It depends on your team type, size, and maturity. Here's how to set realistic targets.

Deployment Frequency Targets

DORA research defines four performance levels for deployment frequency:

| Performance Level | Deployment Frequency | PRs Merged per Day |
| --- | --- | --- |
| Elite | Multiple times per day | 5+ per working day |
| High | Once per day to once per week | 1-5 per working day |
| Medium | Once per week to once per month | 0.2-1 per working day |
| Low | Less than once per month | < 0.2 per working day |

Lead Time Targets

Similarly, DORA defines lead time performance levels:

| Performance Level | Lead Time for Changes | Cycle Time (CodePulse) |
| --- | --- | --- |
| Elite | Less than 1 hour | < 4 hours |
| High | 1 hour to 1 day | 4-24 hours |
| Medium | 1 day to 1 week | 1-7 days |
| Low | More than 1 week | > 7 days |
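If you want to bucket your own numbers against these bands, a small classifier following the two tables above might look like this (thresholds taken directly from the tables; the function names are illustrative):

```python
def deploy_freq_level(prs_per_day: float) -> str:
    """Map PRs merged per working day to a DORA-style performance band."""
    if prs_per_day >= 5:
        return "Elite"
    if prs_per_day >= 1:
        return "High"
    if prs_per_day >= 0.2:
        return "Medium"
    return "Low"

def cycle_time_level(hours: float) -> str:
    """Map average cycle time in hours to a performance band."""
    if hours < 4:
        return "Elite"
    if hours <= 24:
        return "High"
    if hours <= 7 * 24:
        return "Medium"
    return "Low"

print(deploy_freq_level(2.8), cycle_time_level(36))  # High Medium
```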

Adjusting Targets by Team Type

Application teams (building customer-facing features) should aim for higher deployment frequency:

  • Target: 3-5 PRs merged per day (elite/high performance)
  • Target cycle time: < 24 hours (high performance)
  • Small, frequent deploys reduce risk and enable fast feedback

Platform teams (building internal tools, infrastructure) often have longer natural cycles:

  • Target: 1-2 PRs merged per day (high performance)
  • Target cycle time: 1-3 days (medium/high performance)
  • Changes are larger, require more thorough review, and have bigger blast radius

For more on platform team metrics, see our Platform Team Metrics Guide.

Adjusting for Team Size

Smaller teams naturally merge fewer PRs per day:

  • 2-3 person team: Aim for 1-2 merges per day total (0.5 per person)
  • 5-8 person team: Aim for 3-5 merges per day total (0.5-0.7 per person)
  • 10+ person team: Aim for 5-10 merges per day total (0.5-1 per person)

If individual developers are merging more than 1 PR per day on average, check PR size—they may be splitting work too finely.

Using CodePulse to Track Delivery Cadence

Now that you understand the metrics, here's how to track them in CodePulse without any CI/CD instrumentation.

Dashboard View: Deployment Frequency

Navigate to your Dashboard and look for the Velocity section. You'll see:

  • Deployment Frequency: PRs merged to main branch per working day (excludes weekends and holidays)
  • PRs Merged: Total count of PRs merged in the selected time period
  • Trend indicator: Whether deployment frequency is increasing or decreasing vs. previous period

📊 Finding Deployment Frequency in CodePulse

To see your team's deployment cadence:

  • Go to Dashboard → Velocity Metrics
  • Look for Deployment Frequency per Day (PRs merged to main)
  • Use the time period selector to view 7d, 30d, or 90d trends
  • Click Trends in the nav to see historical patterns over time
  • Click Repositories to compare deployment frequency across teams

Dashboard View: Lead Time / Cycle Time

Also in the Dashboard, the Cycle Time metric card shows:

  • Average Cycle Time: Mean hours from PR creation to merge for all merged PRs
  • Cycle Time Breakdown: Stacked bar chart showing the four components of cycle time
  • Trend indicator: Whether cycle time is improving (decreasing) or regressing

Trends Page: Historical Analysis

For deeper historical analysis, go to Trends:

  • Deployment Frequency Over Time: Line chart showing PRs merged per day over the past 90 days (or custom range)
  • Cycle Time Trend: Line chart showing average cycle time over the same period
  • Correlation analysis: See if improving cycle time correlates with higher deployment frequency

Look for:

  • Weekly patterns: Do deploys cluster on certain days? (Monday merges after weekend work, Friday slowdowns)
  • Sprint boundaries: If you see spikes every 2 weeks, your team may be batching work to sprint ends
  • Seasonal changes: Summer and December often show lower deployment frequency due to vacations

Repository Comparison: Cross-Team Analysis

If you manage multiple teams, use the Repositories page to compare:

| Repository | Team | Deploy Frequency | Cycle Time | Status |
| --- | --- | --- | --- | --- |
| frontend-app | Application | 4.2/day (Elite) | 18h (High) | Good |
| backend-api | Application | 2.8/day (High) | 36h (Medium) | Watch: cycle time elevated, review bottleneck analysis |
| platform-infra | Platform | 0.8/day (Medium) | 62h (Medium) | Watch: platform teams typically have longer cycles due to larger blast radius |

This view helps you:

  • Identify which teams need support improving delivery speed
  • Find high performers to learn best practices from
  • Justify different targets for different team types (e.g., platform vs. application)

Improving Deployment Frequency Over Time

Now that you're tracking deployment frequency and lead time, how do you actually improve them? Here are proven strategies.

Strategy 1: Reduce Batch Size (Smaller PRs)

The most common reason for low deployment frequency is large PRs. If your average PR is 800 lines, developers can only complete one per week. Smaller PRs (200-400 lines) can be completed and reviewed in a day.

Actions:

  • Check average PR size in the Dashboard's Quality Metrics section
  • Set up an alert when PRs exceed 500 lines (see our PR Size Optimization Guide)
  • Encourage feature flagging to ship incomplete features incrementally
  • Celebrate developers who split large changes into multiple small PRs

Strategy 2: Speed Up Cycle Time

Deployment frequency is limited by cycle time. If your cycle time is 3 days, you can't deploy more than 2-3 times per week even with perfect parallelization.

Actions:

  • Use the Cycle Time Breakdown to identify your biggest bottleneck
  • If "wait for review" is high, balance review load across more team members
  • If "approval to merge" is high, optimize your CI pipeline or enable auto-merge
  • Set a goal to reduce cycle time by 25% this quarter

Strategy 3: Increase Parallel Work in Progress

If each developer works on one PR at a time, your maximum deployment frequency is (team size) × (1 / cycle time in days). For a 5-person team with 2-day cycle time, that's 2.5 deploys per day—capped at "high" performance.
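The cap described above is a one-line formula. In this sketch, wip_per_dev is a hypothetical knob showing how parallel work-in-progress raises the ceiling:

```python
def max_deploys_per_day(team_size: int, cycle_time_days: float,
                        wip_per_dev: float = 1.0) -> float:
    """Upper bound on deploys/day if each developer carries wip_per_dev PRs at once."""
    return team_size * wip_per_dev / cycle_time_days

print(max_deploys_per_day(5, 2))       # 2.5 -> capped at "high" performance
print(max_deploys_per_day(5, 2, 2.0))  # 5.0 -> reaches "elite" territory
```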

To reach "elite" performance (5+ deploys per day), developers need to work on multiple PRs simultaneously:

  • Open a new PR for the next feature while waiting for review on the current one
  • Use stacked PRs (PR B depends on PR A) to parallelize work
  • Pair program to increase throughput without increasing WIP per person

Caution: Don't increase WIP by having developers start new work before responding to review feedback. That increases cycle time and decreases quality.

Strategy 4: Automate the Merge

If your "approval to merge" time is high (see cycle time breakdown), enable auto-merge:

  • GitHub's auto-merge feature: approved PRs merge automatically when CI passes
  • Mergify, Kodiak, or similar bots that handle merge conflicts and queuing
  • Reduces "approval to merge" time from hours to minutes

Strategy 5: Set Up Alerts for Delivery Slowdowns

Don't wait for the end of the month to notice your deployment frequency has dropped. Set up proactive alerts.

Go to Alerts → Alert Rules → Create Rule:

Alert: Low Deployment Frequency
  Metric: deployment_frequency_per_day
  Operator: <
  Threshold: 2.0
  Severity: warning
  Description: "Deployment frequency dropped below 2 per day"

Alert: Cycle Time Regression
  Metric: cycle_time_hours
  Operator: >
  Threshold: 48
  Severity: warning
  Description: "Cycle time exceeded 2 days"

Alert: PRs Waiting Too Long
  Metric: wait_for_review_hours
  Operator: >
  Threshold: 8
  Severity: critical
  Description: "PRs waiting over 8 hours for first review"

These alerts catch problems early—before they become visible in monthly reports.

Strategy 6: Tie Cycle Time to Deployment Frequency in Reviews

In weekly or monthly engineering reviews with leadership, always show both metrics together:

  • Current deployment frequency: 2.8 PRs/day (high performance)
  • Current cycle time: 32 hours (medium performance)
  • Goal: Reduce cycle time to 20 hours, increase deployment frequency to 4 PRs/day

This helps leadership understand the connection: improving cycle time directly increases deployment frequency. For more on creating board-ready metrics, see our Board-Ready Engineering Metrics Guide.

Tracking Progress

After implementing these strategies, use CodePulse to measure impact:

  • Weekly: Check Dashboard to see if deployment frequency is trending up
  • Monthly: Compare 30-day averages for deployment frequency and cycle time
  • Quarterly: Use Trends page to visualize 90-day improvement and share with stakeholders

Remember: elite deployment frequency (5+ per day) isn't the goal for every team. The goal is consistent improvement toward a target that matches your team's context.
