"How fast are we shipping?" It's one of the most common questions engineering leaders face. But measuring engineering velocity incorrectly leads to gaming, burnout, and a false sense of productivity. This guide shows you how to measure velocity in ways that actually predict delivery success.
Velocity isn't about working faster—it's about sustainable, predictable delivery that improves over time. The right metrics help you identify obstacles, optimize flow, and demonstrate value to stakeholders.
Velocity vs Speed: What's the Difference?
Speed Is Not Velocity
In physics, speed is how fast you're moving. Velocity includes direction. The same distinction matters in engineering:
- Speed: How many lines of code, PRs, or tickets did we complete?
- Velocity: How much progress did we make toward our goals?
A team that ships 50 PRs but builds the wrong features has high speed and zero velocity. A team that ships 10 PRs that solve the customer's top problem has high velocity.
The Problem with Speed Metrics
When teams optimize for speed metrics, you see predictable dysfunction:
- PRs split artificially to increase count
- Easy tickets cherry-picked, hard problems avoided
- Quality sacrificed for throughput
- Technical debt accumulated faster than value
- Burnout from unsustainable pace
Sustainable Velocity
Real velocity is sustainable. A team that sprints for two weeks and crashes for two weeks averages only half its sprint-pace output, and it accumulates technical debt from the rushed work.
Sustainable velocity means a pace you can maintain indefinitely, with consistent output week over week.
The Velocity Metrics That Actually Matter
Cycle Time
What it measures: Time from first commit to production deployment.
Cycle time is the single most important velocity metric. It measures how quickly you can turn an idea into customer value. Shorter cycle times mean:
- Faster feedback loops
- Smaller batches of work (lower risk)
- More experiments possible
- Quicker response to customer needs
Learn how to break this down and improve it in our cycle time breakdown guide.
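As a rough sketch, cycle time for a single change is just the elapsed time between two timestamps: the first commit and the production deploy. The function and timestamps below are hypothetical, assuming ISO 8601 inputs without timezones:

```python
from datetime import datetime

def cycle_time_hours(first_commit_at: str, deployed_at: str) -> float:
    """Hours from first commit to production deploy (ISO 8601 timestamps)."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    start = datetime.strptime(first_commit_at, fmt)
    end = datetime.strptime(deployed_at, fmt)
    return (end - start).total_seconds() / 3600

# Example: committed Monday morning, deployed Wednesday afternoon
print(cycle_time_hours("2024-03-04T09:00:00", "2024-03-06T15:30:00"))  # 54.5
```

In practice you would pull these timestamps from your Git host and deployment system; tracking the distribution (not just the average) is what exposes bottlenecks.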
📊 How to See This in CodePulse
Navigate to Dashboard to see your cycle time breakdown:
- Total cycle time with trend over time
- Breakdown into coding, review wait, review, and merge phases
- Filter by repository or team to find bottlenecks
Deployment Frequency
What it measures: How often you release to production.
Deployment frequency is a key indicator of delivery capability. Teams that deploy more often tend to:
- Ship smaller, safer changes
- Find problems faster
- Respond to customer needs quickly
- Have more mature CI/CD practices
| Frequency | Industry Benchmark |
|---|---|
| On demand (multiple per day) | Elite performers |
| Daily to weekly | High performers |
| Weekly to monthly | Medium performers |
| Monthly to quarterly | Low performers |
See our deployment frequency guide for improvement strategies.
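A minimal sketch of how you might compute deployment frequency and map it onto the benchmark tiers above. The function names and the exact thresholds are illustrative assumptions, not an official DORA formula:

```python
def deploys_per_week(deploy_dates, period_days):
    """Average deployments per 7-day window over the observed period."""
    return len(deploy_dates) / (period_days / 7)

def tier(per_week: float) -> str:
    """Map a weekly deploy rate to an approximate benchmark tier."""
    if per_week >= 7:
        return "Elite (on demand)"
    if per_week >= 1:
        return "High (daily to weekly)"
    if per_week >= 0.25:
        return "Medium (weekly to monthly)"
    return "Low (monthly or less)"

# e.g. 12 deploys over a 4-week window -> 3 per week
print(tier(deploys_per_week(range(12), 28)))  # High (daily to weekly)
```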
PR Throughput
What it measures: PRs merged per time period (team-level).
PR throughput is useful as a relative measure—comparing a team against its own history. An increasing trend suggests improving efficiency. A declining trend might indicate:
- Growing complexity in the codebase
- Review bottlenecks
- Scope creep in work items
- Team focus shifting to other activities
Caution: Don't compare throughput across teams or use it as an individual metric; PR size and complexity vary too much for those comparisons to mean anything.
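Because throughput only makes sense as a team-vs-its-own-history trend, a weekly bucketed count is usually enough. A minimal sketch, assuming you already have each PR's merge date:

```python
from collections import Counter
from datetime import date

def weekly_throughput(merge_dates):
    """Count merged PRs per ISO (year, week) bucket, oldest first."""
    counts = Counter(d.isocalendar()[:2] for d in merge_dates)
    return dict(sorted(counts.items()))

# Hypothetical merge dates: two merges in week 1, one in week 2
print(weekly_throughput([date(2024, 1, 1), date(2024, 1, 2), date(2024, 1, 8)]))
```

Plotting these buckets over a quarter shows the trend without inviting cross-team comparisons.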
Flow Efficiency
What it measures: Active work time divided by total elapsed time.
Flow efficiency reveals how much of your cycle time is actual work versus waiting. A PR that is open for five working days (40 working hours) but sees only 4 hours of coding and 1 hour of review has a flow efficiency of about 12%.
Typical flow efficiency ranges:
- 5-15%: Common but indicates significant waste
- 15-40%: Good range for most teams
- 40%+: Excellent, approaching optimal
Low flow efficiency usually indicates waiting—for reviews, dependencies, or decisions.
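The calculation itself is a single ratio; the hard part is attributing hours to "active work" versus waiting. A minimal sketch, assuming elapsed time is measured in working hours:

```python
def flow_efficiency(active_hours: float, elapsed_working_hours: float) -> float:
    """Active work time as a fraction of elapsed working time."""
    return active_hours / elapsed_working_hours

# 4 hours coding + 1 hour review across a 5-day (40 working-hour) cycle
print(flow_efficiency(5, 40))  # 0.125, i.e. roughly 12%
```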
Sustainable Pace vs Burnout Sprint
Recognizing Unsustainable Velocity
High velocity metrics can mask unsustainable practices. Warning signs:
- After-hours activity: Commits clustering late nights and weekends
- Accelerating churn: More code being rewritten shortly after shipping
- Declining review quality: Fewer comments, faster approvals
- Growing PR sizes: Batching work instead of flowing it
- Rising cycle time variability: Some PRs fast, others stuck for days
Learn more about identifying these patterns in our burnout signals guide.
The Productivity J-Curve
When you push for higher velocity, short-term output often increases while long-term productivity decreases—a J-curve pattern:
- Weeks 1-2: Velocity spikes as people work harder
- Weeks 3-4: Quality issues emerge and some rework is needed
- Week 5 onward: Technical debt compounds and velocity crashes below baseline
Sustainable velocity avoids this trap by maintaining consistent quality standards.
DORA Metrics and Velocity Connection
The Four Key Metrics
The DORA research program identified four metrics that predict software delivery performance:
- Deployment Frequency: How often you release to production
- Lead Time for Changes: Time from commit to production
- Change Failure Rate: Percentage of changes causing incidents
- Time to Restore: How quickly you recover from failures
The first two measure velocity; the last two measure stability. The research shows elite performers score high on all four—velocity and stability reinforce each other.
Deep dive into these in our complete DORA metrics guide.
Why Velocity Without Stability Fails
Teams that optimize only for velocity often see rising change failure rates. Each incident creates unplanned work that reduces future velocity. The fastest sustainable pace is one that maintains low change failure rates.
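Change failure rate is the simplest of the four to compute once you can label a deploy as having caused an incident or rollback. A hypothetical sketch (the labeling itself is the hard part and comes from your incident tracker):

```python
def change_failure_rate(deploys: int, failed_deploys: int) -> float:
    """Share of production changes that caused an incident or rollback."""
    if deploys == 0:
        return 0.0
    return failed_deploys / deploys

# e.g. 3 incident-causing deploys out of 40 in the period
print(change_failure_rate(40, 3))  # 0.075
```

Watching this number alongside deployment frequency is what tells you whether added velocity is sustainable.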
Improving Velocity Without Adding Headcount
Reduce Wait Time
Most cycle time is waiting, not working. Target:
- Review wait time: Set review SLAs, balance reviewer load. See our review load balancing guide.
- CI/CD time: Parallelize tests, optimize build pipelines
- Decision latency: Empower teams to make decisions without escalation
- Dependency wait: Reduce coupling between teams
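A review SLA is only useful if you can check it automatically. A minimal sketch of an SLA breach check, with a hypothetical 24-hour window and timestamps you would pull from your Git host:

```python
from datetime import datetime, timedelta

REVIEW_SLA = timedelta(hours=24)  # example SLA; tune to your team

def breaches_sla(requested_at: datetime, first_review_at: datetime) -> bool:
    """True if the first review arrived outside the SLA window."""
    return first_review_at - requested_at > REVIEW_SLA

# Requested Monday 9:00, first review Tuesday 12:00 -> 27h, breach
print(breaches_sla(datetime(2024, 1, 1, 9), datetime(2024, 1, 2, 12)))  # True
```

Running this over open PRs each morning gives reviewers a concrete queue instead of a vague norm. (A production version would skip weekends and holidays.)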
Reduce Batch Size
Smaller batches flow faster through the system:
- Smaller PRs get reviewed and merged faster
- Smaller deployments are safer and easier to debug
- Smaller features deliver value incrementally
See our PR size optimization guide.
Reduce Rework
Every bug found in production, every PR revision, every unclear requirement creates rework. Invest in:
- Better requirements clarity upfront
- Automated testing to catch bugs early
- Code review quality to prevent defects
- Documentation to reduce misunderstandings
Reduce Context Switching
Each interruption can cost 20+ minutes of recovery time. Help developers focus:
- Batch meetings to preserve focus blocks
- Asynchronous communication as default
- Clear ownership to reduce cross-team pings
- Automated alerts instead of manual status checks
Explore this in our context switching guide.
Setting Realistic Velocity Targets
Baseline First
Before setting targets, establish your current state:
- What's your current cycle time distribution?
- How frequently are you deploying?
- What's your flow efficiency?
- How much variability exists week-to-week?
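A baseline should capture the distribution, not just the mean, because cycle time is usually skewed by a few long-running outliers. A minimal sketch using the standard library, over a hypothetical sample of cycle times:

```python
import statistics

def baseline(cycle_times_hours):
    """Median, p75 and p90 of a cycle time sample."""
    qs = statistics.quantiles(cycle_times_hours, n=20)  # cut points at 5% steps
    return {
        "median": statistics.median(cycle_times_hours),
        "p75": qs[14],  # 15th of 19 cut points = 75th percentile
        "p90": qs[17],  # 18th of 19 cut points = 90th percentile
    }

# Hypothetical sample: 100 PRs with cycle times of 1..100 hours
print(baseline(list(range(1, 101))))
```

Target improvements at the p75 or p90, where the waiting lives; the median often looks fine even on struggling teams.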
Industry Benchmarks
| Metric | Good | Elite |
|---|---|---|
| Cycle time | <1 week | <1 day |
| Deployment frequency | Weekly | On demand |
| Flow efficiency | >15% | >40% |
See our PR cycle time benchmarks for detailed comparisons.
Improvement Targets
Rather than targeting absolute numbers, target improvement rates:
- "Reduce cycle time by 20% this quarter"
- "Increase deployment frequency by 50% over 6 months"
- "Reduce review wait time to under 24 hours"
Improvement targets are more motivating than arbitrary benchmarks, and they account for your team's starting point.
💡 Remember
Velocity should improve quality of life, not degrade it. If improving velocity makes engineers miserable, you're doing it wrong. The goal is sustainable delivery that makes work better for everyone—developers, managers, and customers.
See these insights for your team
CodePulse connects to your GitHub and shows you actionable engineering metrics in minutes. No complex setup required.
Free tier available. No credit card required.
Related Guides
DORA Metrics Are Being Weaponized. Here's the Fix
DORA metrics were designed for research, not management. Learn how to use them correctly as signals for improvement, not targets to game.
Measuring Deploy Frequency Without CI/CD (The Hack That Works)
Master DORA deployment frequency and lead time using GitHub data alone, without requiring CI/CD pipeline access.
Your PR Cycle Time Is Probably Fine (Here's the Real Benchmark)
What is a good PR cycle time? Benchmarks and targets based on team size, industry, and engineering maturity.