New to software metrics? This guide covers the fundamentals—what to measure, what to ignore, and how to use data to improve your team without creating a surveillance culture. No prior metrics experience required.
Engineering metrics can feel overwhelming. There are hundreds of things you could measure. The key is understanding which metrics actually matter and why. This guide gives you a solid foundation to build on.
What Are Software Metrics?
Software metrics are quantitative measures of software development activities and outcomes. They help answer questions like:
- How fast is our team delivering value?
- Is our code quality improving or declining?
- Where are the bottlenecks in our process?
- Are we getting better over time?
Think of metrics as instruments on a dashboard. They give you visibility into what's happening without having to ask everyone for status updates constantly.
The Four Categories of Software Metrics
1. Velocity Metrics (How Fast?)
These measure how quickly work moves through your development process:
| Metric | What It Measures | Good Range |
|---|---|---|
| Cycle Time | First commit to merged PR | 1-5 days |
| Lead Time | Ticket created to production | Varies by org |
| Deployment Frequency | How often you ship to production | Daily or more |
| PR Throughput | PRs merged per time period | Trending up or stable |
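To make cycle time concrete, here is a minimal Python sketch. The function name and timestamp format are illustrative assumptions, not any specific tool's API:

```python
from datetime import datetime

ISO = "%Y-%m-%dT%H:%M:%S"

def cycle_time_days(first_commit: str, merged: str) -> float:
    """Days between a PR's first commit and its merge, from ISO timestamps."""
    delta = datetime.strptime(merged, ISO) - datetime.strptime(first_commit, ISO)
    return delta.total_seconds() / 86400

# A PR with its first commit Monday 09:00 and a merge Wednesday 15:00
# took 2.25 days, comfortably inside the 1-5 day range above.
```

In practice you would pull these two timestamps from your Git host's API per PR and aggregate.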
2. Quality Metrics (How Good?)
These measure the reliability and maintainability of your software:
| Metric | What It Measures | Good Range |
|---|---|---|
| Change Failure Rate | Deployments that cause incidents | <15% |
| Bug Escape Rate | Bugs found in production vs. total | <10% |
| Test Coverage | Code exercised by automated tests | 70-85% |
| Review Coverage | PRs reviewed before merge | >95% |
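Bug escape rate is simple division once you can count bugs by where they were first found. A hedged sketch (the function name is made up for illustration):

```python
def bug_escape_rate(production_bugs: int, total_bugs: int) -> float:
    """Percentage of bugs that were first found in production."""
    if total_bugs == 0:
        return 0.0  # no bugs recorded, nothing escaped
    return 100 * production_bugs / total_bugs

# 4 of 50 bugs found in production -> 8.0%, inside the <10% guideline above
```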
3. Stability Metrics (How Reliable?)
These measure how well your systems handle problems:
| Metric | What It Measures | Good Range |
|---|---|---|
| MTTR | Mean time to recover from incidents | <1 hour |
| Uptime | System availability percentage | >99.9% |
| Incident Frequency | Production incidents per time period | Trending down |
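To see what an uptime target implies in practice, a quick sketch of the arithmetic (the helper name is hypothetical):

```python
def downtime_budget_minutes(uptime_pct: float, period_days: int = 30) -> float:
    """Minutes of downtime a given uptime target allows over a period."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

# 99.9% uptime over a 30-day month leaves roughly 43 minutes of downtime budget
```

This is why "three nines" feels very different from "two nines": 99% over the same month would allow over seven hours of downtime.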
4. Team Health Metrics (How Sustainable?)
These measure whether your pace is sustainable:
| Metric | What It Measures | Warning Signs |
|---|---|---|
| Work in Progress | Open items per developer | >3 per person |
| After-Hours Work | Commits outside business hours | Increasing trend |
| Review Load Balance | Distribution of code reviews | One person doing 3x+ average |
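The review-load warning sign above can be checked mechanically. A minimal sketch, assuming you can count reviews per person over some window (the names and counts are made up):

```python
def overloaded_reviewers(reviews_by_person: dict[str, int],
                         factor: float = 3.0) -> list[str]:
    """People reviewing more than `factor` times the team average."""
    if not reviews_by_person:
        return []
    average = sum(reviews_by_person.values()) / len(reviews_by_person)
    return [name for name, count in reviews_by_person.items()
            if count > factor * average]

# {"ana": 31, "ben": 4, "caro": 3, "dev": 3} -> only "ana" is flagged
```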
The DORA Metrics: The Industry Standard
DORA (DevOps Research and Assessment) identified four key metrics that predict software delivery performance. These are the most important metrics to start with:
1. Deployment Frequency
How often does your organization deploy code to production?
- Elite: Multiple deploys per day
- High: Daily to weekly
- Medium: Weekly to monthly
- Low: Monthly or less
Higher deployment frequency usually means smaller, lower-risk changes.
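Mapping a raw deploy count onto these bands is straightforward. A sketch using monthly counts as an illustrative normalization (the exact cutoffs are an approximation of the bands above, not official DORA thresholds):

```python
def deploy_frequency_tier(deploys_per_month: float) -> str:
    """Approximate DORA band for an average monthly deploy count."""
    if deploys_per_month > 30:   # more than one deploy per day
        return "Elite"
    if deploys_per_month >= 4:   # daily to weekly
        return "High"
    if deploys_per_month >= 1:   # weekly to monthly
        return "Medium"
    return "Low"                 # monthly or less
```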
2. Lead Time for Changes
How long does it take to go from code committed to code running in production?
- Elite: Less than 1 day
- High: 1 day to 1 week
- Medium: 1 week to 1 month
- Low: More than 1 month
Long lead times often indicate process bottlenecks or excessive manual steps.
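Lead time skews badly with outliers, so a median is usually more honest than a mean. A hedged sketch (timestamp format and function name are assumptions):

```python
from datetime import datetime
from statistics import median

ISO = "%Y-%m-%dT%H:%M:%S"

def median_lead_time_hours(changes: list[tuple[str, str]]) -> float:
    """Median hours from commit to production deploy.

    `changes` holds (committed_at, deployed_at) ISO timestamp pairs.
    """
    hours = [(datetime.strptime(dep, ISO) - datetime.strptime(com, ISO)).total_seconds() / 3600
             for com, dep in changes]
    return median(hours)
```

With this framing, one week-long straggler will not hide the fact that most changes ship within a day.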
3. Change Failure Rate
What percentage of changes to production result in degraded service or require remediation?
- Elite: 0-15%
- High: 16-30%
- Medium/Low: 31% or higher
High change failure rates suggest insufficient testing or review processes.
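Change failure rate and its band take only a few lines of arithmetic. An illustrative sketch (the band cutoffs follow the list above):

```python
def change_failure_rate(failed_deploys: int, total_deploys: int) -> float:
    """Percentage of production deploys that needed remediation."""
    if total_deploys == 0:
        return 0.0
    return 100 * failed_deploys / total_deploys

def cfr_band(rate_pct: float) -> str:
    """Approximate DORA band for a change failure rate percentage."""
    if rate_pct <= 15:
        return "Elite"
    if rate_pct <= 30:
        return "High"
    return "Medium/Low"

# 3 failed deploys out of 40 -> 7.5%, inside the Elite band
```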
4. Mean Time to Recovery (MTTR)
How long does it take to restore service after an incident?
- Elite: Less than 1 hour
- High: Less than 1 day
- Medium: 1 day to 1 week
- Low: More than 1 week
Fast recovery depends on good observability, automated rollbacks, and practiced incident response.
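MTTR itself is just the mean of incident durations. A small sketch, assuming you record a start and a restored timestamp per incident:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M"

def mttr_minutes(incidents: list[tuple[str, str]]) -> float:
    """Mean minutes from incident start to service restored."""
    durations = [(datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 60
                 for start, end in incidents]
    return sum(durations) / len(durations)

# A 30-minute outage plus a 90-minute outage average out to 60 minutes,
# right at the Elite boundary above.
```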
💡 Why DORA Matters
Research shows these four metrics predict both delivery performance AND organizational outcomes like profitability and market share. Teams that excel at DORA metrics outperform competitors. Learn more in our comprehensive DORA guide.
Metrics Beginners Should Avoid
Not all metrics are helpful. Some are actively harmful:
Lines of Code (LOC)
Counting lines of code rewards verbosity, not value. A developer who deletes 500 lines while maintaining functionality has done better work than one who adds 500 unnecessary lines.
Commit Counts
Some developers make many small commits; others make fewer large ones. Neither style is better—it's personal preference. Counting commits measures style, not productivity.
Individual Velocity Rankings
Comparing individuals creates competition instead of collaboration. Someone with "low" metrics might be:
- Working on complex architecture that takes longer
- Mentoring junior developers
- Handling production incidents
- Learning a new technology stack
Story Points
Story points are estimation tools, not productivity measures. Using them for performance evaluation creates perverse incentives to inflate estimates.
Getting Started: Your First Metrics
Don't try to measure everything at once. Here's a beginner roadmap:
Week 1-2: Cycle Time
Start with cycle time—the time from first commit to merged PR. This single metric reveals a lot:
- Long cycle time? Look for review bottlenecks
- Highly variable? Check for scope creep or unclear specs
- Consistently short? Great—your process is efficient
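One way to spot the "highly variable" case above is to compare the median to a high percentile: a large gap means a handful of PRs are dragging. A sketch using only Python's standard library:

```python
from statistics import median, quantiles

def cycle_time_summary(days: list[float]) -> dict[str, float]:
    """Median and 90th-percentile cycle time in days.

    A p90 far above the median signals high variability.
    """
    p90 = quantiles(days, n=10, method="inclusive")[-1]  # 90th percentile
    return {"median": median(days), "p90": p90}
```

For a team whose cycle times are mostly 1-4 days with one 10-day outlier, the median stays near 3 while the p90 jumps, which is exactly the pattern worth a retro conversation.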
Week 3-4: PR Size and Review Coverage
Add two related metrics:
- PR size: Are PRs reviewable (<400 lines)? Large PRs get superficial reviews.
- Review coverage: Are all PRs getting reviewed? Self-merges bypass quality gates.
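Both checks are easy to automate. A hedged sketch (the PR ids are made up, and the 400-line limit follows the guideline above; adapt it to your team):

```python
def oversized_prs(pr_sizes: dict[str, int], limit: int = 400) -> list[str]:
    """PR ids whose changed-line count exceeds the reviewability limit."""
    return [pr for pr, lines in pr_sizes.items() if lines > limit]

def review_coverage_pct(reviewed_prs: int, merged_prs: int) -> float:
    """Percentage of merged PRs that received at least one review."""
    if merged_prs == 0:
        return 100.0  # nothing merged, nothing unreviewed
    return 100 * reviewed_prs / merged_prs

# {"#101": 120, "#102": 950, "#103": 380} -> only "#102" is flagged
# 48 reviewed of 50 merged -> 96.0%, meeting the >95% target
```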
Month 2: DORA Metrics
Once you're comfortable with the basics, add the DORA metrics to understand your delivery performance holistically.
Month 3+: Team Health
Add sustainability metrics like work distribution and after-hours activity. These help you spot burnout before it happens.
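After-hours activity is one of the easier health signals to compute, with a caveat: commit timestamps must be in each author's local time zone, and this simple sketch ignores weekends:

```python
from datetime import datetime

def after_hours_share(commit_times: list[str],
                      start_hour: int = 9, end_hour: int = 18) -> float:
    """Fraction of commits made outside business hours (local timestamps)."""
    hours = [datetime.strptime(t, "%Y-%m-%dT%H:%M").hour for t in commit_times]
    outside = sum(1 for h in hours if h < start_hour or h >= end_hour)
    return outside / len(hours)

# Watch the trend: a rising share week over week is the warning sign,
# not any single value.
```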
Using Metrics the Right Way
Do: Use Metrics for Conversation Starters
Good example: "I noticed our cycle time increased last week—what happened?" This opens dialogue and helps identify problems together.
Don't: Use Metrics for Blame
Bad example: "Your cycle time is 40% worse than the team average." This creates fear and gaming behavior.
Do: Track Trends Over Time
A single snapshot is meaningless. Is cycle time improving? Is quality stable? Trends tell the story.
Don't: Compare Individuals
Compare team performance over time. Compare your team to industry benchmarks. Don't create leaderboards.
Do: Combine Metrics with Context
Low throughput during a major refactoring is expected. High throughput during "technical debt sprint" is suspicious. Context matters.
Don't: Set Targets Without Understanding
"Let's reduce cycle time by 50%" sounds great until you understand why cycle time is high. Address root causes, not symptoms.
Common Beginner Mistakes
Measuring Too Much
More metrics doesn't mean more insight. It usually means more noise. Start with 3-5 metrics and add more only when you understand those well.
Expecting Instant Results
Meaningful patterns emerge over weeks or months, not days. Give metrics time to tell their story.
Hiding Bad Numbers
Bad numbers are signals to investigate, not hide. A high change failure rate is valuable information—it tells you where to focus improvement efforts.
Forgetting About Gaming
Any metric that's tied to rewards will be gamed. If you reward low cycle time, people will split PRs unnecessarily. If you reward high throughput, people will cherry-pick easy work.
Tools for Measuring Software Metrics
Free/Built-In Options
- GitHub Insights: Basic repository statistics
- GitLab Analytics: Built-in CI/CD metrics
- Jira Reports: Sprint and velocity tracking
Dedicated Engineering Analytics Tools
- CodePulse: Comprehensive engineering metrics with team and individual views
- LinearB, Jellyfish, Swarmia: Enterprise-focused platforms
The Engineering Analytics Tools Comparison guide covers options in detail.
Your Next Steps
- Pick one metric: Start with cycle time. It's easy to understand and immediately actionable.
- Establish a baseline: What's your current cycle time? You need a starting point to measure improvement.
- Share with your team: Make metrics visible. Discuss them in retros. Build shared understanding.
- Iterate: After 4-6 weeks, add another metric. Build your measurement capability gradually.
Metrics are a tool, not a goal. They help you understand what's happening so you can make better decisions. Used well, they create transparency and drive continuous improvement. Used poorly, they create fear and gaming. The choice is yours.
Related Guides
DORA Metrics Are Being Weaponized. Here's the Fix
DORA metrics were designed for research, not management. Learn how to use them correctly as signals for improvement, not targets to game.
The Only 7 Metrics Your VP Dashboard Actually Needs
Skip vanity metrics. Here are the 7 engineering metrics VPs actually need to track team performance, delivery, and quality.
Engineering Metrics That Won't Get You Reported to HR
An opinionated guide to implementing engineering metrics that build trust. Includes the Visibility Bias Framework, practical do/don't guidance, and a 30-day action plan.