
How It Works

Welcome to your complete guide to understanding CodePulse. Learn what each metric means, why it matters, and how to use these insights to improve your team's workflow.

What This Platform Does

CodePulse automatically tracks your team's work on GitHub and turns it into clear, actionable insights. We measure velocity, quality, productivity, and collaboration to help you:

  • Identify bottlenecks in your development process
  • Recognize high performers and areas where team members need support
  • Track improvements over time
  • Make data-driven decisions about process changes
  • Build a culture of continuous improvement

Important: These metrics are tools for improvement, not weapons for judgment. Context always matters, and numbers don't tell the whole story.

Key Concepts You Should Know

Time Periods

Metrics are calculated over different time windows:

  • Daily: Today only
  • Weekly: Last 7 days
  • Monthly: Last 30 days
  • Quarterly: Last 90 days

What Gets Counted

We count pull requests that are:

  • Merged in the time period (completed work)
  • Open and actively updated (work in progress)
  • Not closed without merging (abandoned PRs are excluded)

What Gets Excluded

These files don't count toward code metrics:

  • 📦 Dependency files (package-lock.json, Gemfile.lock)
  • 📄 Documentation files (README.md, .md files)
  • 🤖 Bot activity (dependabot, github-actions)

Why? Dependency lock files and bot commits are maintenance, not code you wrote, and documentation is valuable but measured separately. We want code metrics to reflect your actual coding work.
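
As an illustration, here is a minimal sketch of the kind of filename and author filtering described above. The exclusion lists and the counts_toward_metrics name are assumptions made for the example, not the platform's actual rule set:

    # Sketch: decide whether a changed file should count toward code metrics.
    # The exclusion lists below are illustrative, not the real configuration.
    EXCLUDED_FILENAMES = {"package-lock.json", "Gemfile.lock", "yarn.lock"}
    DOC_EXTENSIONS = {".md", ".txt"}
    BOT_AUTHORS = {"dependabot[bot]", "github-actions[bot]"}

    def counts_toward_metrics(path: str, author: str) -> bool:
        name = path.rsplit("/", 1)[-1]
        if name in EXCLUDED_FILENAMES:          # dependency lock files
            return False
        if any(name.endswith(ext) for ext in DOC_EXTENSIONS):  # documentation files
            return False
        if author in BOT_AUTHORS:               # bot activity
            return False
        return True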

Working Days Config

Your organization can customize:

  • Which days count as work days (default: Mon-Fri)
  • Company holidays

This ensures "per day" metrics are fair and account for actual work schedules.
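
For example, a "per day" denominator could be computed roughly like this. This is a sketch assuming Monday-Friday defaults (0-4 are Python's weekday numbers) and an explicit holiday set; the real configuration lives in Organization Settings:

    from datetime import date, timedelta

    def count_working_days(start: date, end: date,
                           workdays=frozenset({0, 1, 2, 3, 4}),  # Mon-Fri by default
                           holidays=frozenset()) -> int:
        """Count configured work days between start and end, inclusive."""
        total = 0
        current = start
        while current <= end:
            if current.weekday() in workdays and current not in holidays:
                total += 1
            current += timedelta(days=1)
        return total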

Important

Why PRs Matter

Commits pushed directly to the main branch bypass pull requests entirely, which means they skip code review, aren't measured by our metrics, and miss opportunities for team collaboration. All production code should go through pull requests to ensure quality and knowledge sharing.

Velocity Metrics

How fast work moves through your development pipeline

  • Cycle Time: AVG(merged_at - created_at) in hours. Excludes closed PRs (not merged) and stale PRs not updated in the period.
  • PRs Merged: COUNT(PRs where merged_at is not null)
  • PRs Merged per Working Day: prs_merged / working_days
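
Putting the formulas above together, a minimal sketch in Python might look like this. The created_at and merged_at field names mirror GitHub's API, but the input structure is an assumption, and closed or stale PRs are assumed to have been filtered out already:

    def velocity_metrics(prs, working_days):
        """prs: dicts with datetime 'created_at' and 'merged_at' (None if not merged)."""
        merged = [pr for pr in prs if pr["merged_at"] is not None]
        hours = [(pr["merged_at"] - pr["created_at"]).total_seconds() / 3600
                 for pr in merged]
        return {
            "avg_cycle_time_hours": sum(hours) / len(hours) if hours else 0.0,
            "prs_merged": len(merged),
            "prs_per_working_day": len(merged) / working_days if working_days else 0.0,
        }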

Quality Metrics

How well your team maintains code quality and process discipline

  • Average PR Size: AVG(SUM(additions + deletions per PR)). Excludes documentation files, dependency files, and generated code markers.
  • Failing Check Rate: (PRs with failing checks / total PRs) * 100
  • Review Coverage: (PRs with reviews / total PRs) * 100
  • Merge Without Approval Rate: (PRs merged without approval / total merged PRs) * 100
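
A rough sketch of these four quality ratios, assuming each PR has already been reduced to a small record. The field names here are illustrative, not the platform's schema:

    def quality_metrics(prs):
        """prs: dicts with 'size' (additions + deletions after exclusions) and
        'has_failing_checks', 'has_review', 'merged', 'approved' flags."""
        total = len(prs)
        if total == 0:
            return {}
        merged = [pr for pr in prs if pr["merged"]]
        return {
            "avg_pr_size": sum(pr["size"] for pr in prs) / total,
            "failing_check_rate": 100 * sum(pr["has_failing_checks"] for pr in prs) / total,
            "review_coverage": 100 * sum(pr["has_review"] for pr in prs) / total,
            "merged_without_approval_rate": (
                100 * sum(not pr["approved"] for pr in merged) / len(merged)
                if merged else 0.0
            ),
        }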

Developer Productivity Metrics

How much code individual developers are writing and shipping

  • LOC per Day: SUM(additions + deletions) / working_days. Excludes dependency files, documentation files, and bot commits.
  • Commits per Day: COUNT(commits) / working_days. Excludes merge commits and bot commits.
  • LOC by File Type: SUM(additions + deletions) grouped by file extension. Excludes dependency files, documentation files, and binary files.
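
As a sketch of the per-developer productivity calculations: the commit structure below is an assumption, and each commit's files are assumed to have already been filtered of excluded paths (for example by a check like the one sketched earlier):

    import os
    from collections import defaultdict

    def productivity_metrics(commits, working_days):
        """commits: dicts with 'is_merge', 'is_bot', and 'files'
        ([{'path', 'additions', 'deletions'}, ...])."""
        loc = 0
        loc_by_extension = defaultdict(int)
        commit_count = 0
        for commit in commits:
            if commit["is_bot"]:
                continue                      # bot commits never count
            if not commit["is_merge"]:
                commit_count += 1             # merge commits excluded from commit counts
            for f in commit["files"]:
                changed = f["additions"] + f["deletions"]
                loc += changed
                loc_by_extension[os.path.splitext(f["path"])[1]] += changed
        return {
            "loc_per_day": loc / working_days if working_days else 0.0,
            "commits_per_day": commit_count / working_days if working_days else 0.0,
            "loc_by_extension": dict(loc_by_extension),
        }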

Collaboration Metrics

How well your team works together on code review and knowledge sharing

  • Review Load Ratio: prs_reviewed / prs_created
  • Time to First Review: AVG(first_review_at - created_at) in hours
  • Reviewer Diversity: COUNT(DISTINCT reviewers) on the developer's PRs
  • PRs Created: COUNT(PRs) by author
  • PRs Merged: COUNT(PRs merged) by author
  • Reviews Given: COUNT(reviews) submitted
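
A compact sketch of the first three collaboration formulas, assuming the inputs have already been aggregated per developer (the parameter names are illustrative):

    def collaboration_metrics(prs_created, prs_reviewed, first_review_delays_hours, reviewers):
        """prs_created / prs_reviewed: counts; first_review_delays_hours: list of
        (first_review_at - created_at) in hours; reviewers: reviewer logins seen
        on this developer's PRs."""
        return {
            "review_load_ratio": prs_reviewed / prs_created if prs_created else 0.0,
            "avg_time_to_first_review_hours": (
                sum(first_review_delays_hours) / len(first_review_delays_hours)
                if first_review_delays_hours else 0.0
            ),
            "reviewer_diversity": len(set(reviewers)),
        }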

Advanced Developer Metrics

Deeper insights into code quality, timing, and work patterns

  • Median Cycle Time: PERCENTILE_CONT(0.5) of (merged_at - created_at)
  • P95 Cycle Time: PERCENTILE_CONT(0.95) of (merged_at - created_at)
  • Average PR Size: AVG(SUM(additions + deletions per PR)). Excludes documentation files and dependency files.
  • Average Commit Size: AVG(additions + deletions per commit). Excludes merge commits.
  • Code Churn: (deletions / (additions + deletions)) * 100. Excludes documentation files and dependency files.
  • Commits per PR: AVG(commits_count per PR)
  • Files Changed per PR: AVG(changed_files per PR)
  • Test Pass Rate: (PRs without failing checks / total PRs) * 100
  • Test Code Ratio: (test file LOC / (test LOC + app LOC)) * 100. Excludes documentation files and dependency files.
  • File Type Diversity: COUNT(DISTINCT file_extension). Excludes documentation files and dependency files.
  • Time to First Approval: AVG(first_approval_at - created_at) in hours
  • Review Response Time: AVG(review_submitted_at - PR created_at), measured as a reviewer
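
The median and P95 figures above come from percentile calculations. Here is a rough Python equivalent of SQL's PERCENTILE_CONT using linear interpolation, with illustrative cycle times:

    def percentile_cont(values, fraction):
        """Linear-interpolation percentile, roughly what SQL's PERCENTILE_CONT computes."""
        ordered = sorted(values)
        if not ordered:
            return None
        rank = fraction * (len(ordered) - 1)
        lower = int(rank)
        upper = min(lower + 1, len(ordered) - 1)
        return ordered[lower] + (ordered[upper] - ordered[lower]) * (rank - lower)

    cycle_times_hours = [4.0, 9.5, 12.0, 26.0, 30.0, 120.0]       # illustrative sample
    median_cycle_time = percentile_cont(cycle_times_hours, 0.5)   # 19.0 hours
    p95_cycle_time = percentile_cont(cycle_times_hours, 0.95)     # 97.5 hours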

Repository Metrics

Team-level metrics scoped to specific repositories

About Repository Metrics

Most repository metrics mirror organization and developer metrics, but scoped to a specific repository. For example, Repository Cycle Time shows the average time to merge PRs in just that repo, while Organization Cycle Time averages across all repos.

These metrics use the exact same formulas and exclusions - they're just filtered to one repository. This helps you compare repositories and see which have faster/slower processes.

  • Commits per Day: COUNT(commits) / working_days. Excludes merge commits and bot commits.
  • Hotspot Files: COUNT(top 10 most frequently changed files)
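
Hotspot detection is essentially a frequency count across PRs. A minimal sketch, where the changed_files field is an assumption about the input shape:

    from collections import Counter

    def hotspot_files(prs, top_n=10):
        """Return the files changed in the most PRs, with their PR counts."""
        counts = Counter()
        for pr in prs:
            for path in set(pr["changed_files"]):   # count each file once per PR
                counts[path] += 1
        return counts.most_common(top_n)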

Additional Developer Metrics

More specialized metrics for understanding developer work patterns and specialization.

  • Files Touched: COUNT(DISTINCT file_path) across all merged and active PRs by the developer. Excludes documentation files, dependency files, and auto-generated files.
  • PRs Reviewed: COUNT(DISTINCT pull_request_id) where the developer left at least one review
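
Both of these reduce to counting distinct values. A short sketch, assuming per-developer lists of PRs and submitted reviews (field names are illustrative):

    def breadth_metrics(developer_prs, developer_reviews):
        """developer_prs: PR dicts with 'changed_files' (already filtered of excluded files);
        developer_reviews: review dicts with 'pull_request_id'."""
        files_touched = {path for pr in developer_prs for path in pr["changed_files"]}
        prs_reviewed = {review["pull_request_id"] for review in developer_reviews}
        return {
            "files_touched": len(files_touched),
            "prs_reviewed": len(prs_reviewed),
        }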

Awards & Leaderboards

Beyond just tracking metrics, we celebrate excellence! Our Awards page features 15 different leaderboards that highlight developers who excel in specific areas. These aren't about punishment or comparison - they're about recognition and learning.

Speed & Efficiency (4 awards)

🚀 Speed Champion
Fastest median cycle time
⚡ Lightning Reviewer
Fastest review response time
🔓 Unblock Hero
Fastest time to first review given
📦 Rapid Shipper
Best P95 cycle time (handles outliers well)

Code Quality (4 awards)

🛡️ Quality Guardian
Highest test pass rate
✨ Clean Code Advocate
Smallest average PR size (most reviewable)
✅ Careful Coder
Fewest failing checks
♻️ Refactoring Hero
Highest code churn (deletes legacy code)

Collaboration & Teamwork (4 awards)

🤝 Team Player
Highest review load ratio
👨‍🏫 Code Mentor
Most reviews given to teammates
⭐ Collaboration Star
Most diverse reviewer network
🔧 Cross-Pollinator
Touches most areas of codebase

Consistent Output (3 awards)

📈 Steady Contributor
Most consistent commits per day
🚢 Reliable Shipper
Most PRs merged consistently
💪 Productivity Champion
Highest LOC per day

Best Practice

Why Awards?

Awards aren't about competition - they're about celebration and learning. See someone winning "Lightning Reviewer"? Learn from them! Winning "Clean Code Advocate"? Your team might want to adopt your PR habits. Awards surface role models and best practices.

Good to Know

Award Time Periods

You can view awards for the last 30 days (monthly), last 90 days (quarterly), or all-time. Monthly awards show current hot streaks. Quarterly awards show sustained excellence. All-time awards recognize consistent long-term contributors.

Important

Not Winning? That's OK!

These 15 categories can't possibly capture all the ways developers add value. Writing great documentation, mentoring juniors, fixing gnarly bugs, designing systems - none of these win awards, but they're crucial. Awards highlight some good behaviors, not all of them.

Best Practices for Using Metrics

Best Practice

Look at Trends, Not Snapshots

A single week's numbers mean nothing. Look at trends over months. Is cycle time going up or down? Are PRs getting smaller or larger? Trends tell the real story.

Pro Tip

Combine Multiple Metrics

Never judge performance on one metric. High LOC per day is great IF PR size is reasonable AND test coverage is good AND review coverage is high. Context comes from the full picture.

Important

Metrics Can Be Gamed

If you tie bonuses or promotions directly to metrics, people will game them. Splitting PRs artificially, inflating LOC with comments, rubber-stamping reviews. Use metrics for insight, not reward/punishment.

Good to Know

Talk to Your Team

Metrics raise questions, they don't answer them. If cycle time spikes, ask why. If someone's LOC drops, find out what changed. The conversation is more valuable than the number.

Avoid This

Never Punish Based on Metrics Alone

Using metrics for performance reviews requires extreme care. Low numbers might mean someone's doing critical infrastructure work, mentoring juniors, or dealing with personal issues. Always get context before making judgments.

How to Spot Metric Gaming

When metrics become targets, people game them. Watch for these red flags:

  • Tiny PRs everywhere: 20 one-line PRs instead of one coherent change
  • WIP commits: Hundreds of "WIP" commits to inflate commit count
  • Rubber-stamp approvals: Instant approvals with no actual review
  • Code churn: Adding and removing the same code repeatedly
  • Self-reviewing: Creating and immediately merging PRs
  • Comment padding: Inflating LOC with unnecessary comments

The fix: Build a culture where the work matters more than the numbers. Celebrate impact, not metrics.

Glossary of Terms

Quick reference guide to common terms and concepts used throughout the platform.

Pull Request (PR)

A proposed code change submitted for review before merging into the main codebase. All our metrics track work through PRs - direct commits to main branches are not counted.

Cycle Time

Total time from when a PR is created to when it's merged. Measures how long it takes to get code from "done coding" to "in production" (or at least merged to main branch).

Median vs Average

Median is the middle value (half above, half below). Average is total divided by count. We prefer median because it resists outliers - one 10-day PR doesn't skew a median of 2 days the way it would skew an average.

P95 (95th Percentile)

The value where 95% of measurements fall below it. If P95 cycle time is 48 hours, that means 95% of your PRs merge faster than 48 hours. Useful for finding worst-case scenarios.

Lines of Code (LOC)

Total lines added plus lines deleted. Measures code change volume, not quality. A 10-line bug fix can be more valuable than a 1,000-line feature.

Code Churn

Lines deleted divided by total lines changed. High churn (lots of deletions) usually means cleaning up legacy code. Low churn means mostly adding new code.

Review Coverage

Percentage of merged PRs that received at least one code review. Industry best practice is 100% - every PR should be reviewed before merging.

Review Load Ratio

Reviews given divided by PRs created. A ratio of 2.0 means you review twice as many PRs as you create. A ratio of 0.5 means you create more than you review. Healthy teams aim for 1.0+.

Deployment Frequency

Number of merges to the main branch per day. Assumes merges to main are deployed via CI/CD. Part of DORA metrics - high-performing teams deploy multiple times per day.

DORA Metrics

Four key metrics from DevOps Research and Assessment: Deployment Frequency, Lead Time for Changes (cycle time), Change Failure Rate (test failures), and Mean Time to Recovery. Industry standard for measuring dev team performance.

Test Pass Rate

Percentage of PR status checks that pass. Includes CI/CD pipelines, linters, unit tests, integration tests. A passing PR has all green checks.

Hotspot Files

Files that are changed in many different PRs. High-churn files that might need refactoring or extra test coverage. Not always bad - config files and routes are naturally hotspots.

Working Days

Configurable setting for which days count toward "per day" metrics. Default is Monday-Friday. You can customize working days and holidays in Organization Settings.

Bot Filtering

Automated accounts (dependabot, github-actions, renovate) are excluded by default from metrics. You can toggle this on any page to include/exclude bot activity.

Dependency Files

Files like package-lock.json, Gemfile.lock, yarn.lock that are auto-generated when updating dependencies. Excluded from LOC metrics because they're not hand-written code.

Documentation Files

Markdown files (.md), text files, changelogs. Excluded from LOC metrics to focus on code changes. Writing docs is valuable but measured separately.

Active PRs

PRs that were either merged in the time period OR are still open and actively updated. Closed (abandoned) PRs are excluded. This captures both completed work and work-in-progress.

Time Periods

Most metrics support multiple views: Last 7 days (weekly), Last 30 days (monthly), Last 90 days (quarterly), Last 365 days (yearly). Pick the period that matches your team's planning cycle.

Reviewer Diversity

Number of unique people who reviewed a developer's PRs. High diversity means getting feedback from many teammates. Low diversity might mean always asking the same person.

Merge Without Approval

PR was merged without anyone explicitly approving it (clicking the "Approve" button). Usually indicates the code review process was bypassed - a quality risk.

Remember

These metrics are tools to help your team improve, not weapons to punish underperformers. Use them to start conversations, identify bottlenecks, and celebrate wins. The best teams use metrics as a guide, not a gospel.

Questions about how to use these metrics? Want to suggest improvements? We're always learning. Your feedback helps us build better tools.