How It Works
Welcome to your complete guide to understanding CodePulse. Learn what each metric means, why it matters, and how to use these insights to improve your team's workflow.
What This Platform Does
CodePulse automatically tracks your team's work on GitHub and turns it into clear, actionable insights. We measure velocity, quality, productivity, and collaboration to help you:
- Identify bottlenecks in your development process
- Recognize high performers and areas where team members need support
- Track improvements over time
- Make data-driven decisions about process changes
- Build a culture of continuous improvement
Important: These metrics are tools for improvement, not weapons for judgment. Context always matters, and numbers don't tell the whole story.
Key Concepts You Should Know
Time Periods
Metrics are calculated over different time windows:
- Daily: Today only
- Weekly: Last 7 days
- Monthly: Last 30 days
- Quarterly: Last 90 days
What Gets Counted
We count pull requests that are:
- ✅ Merged in the time period (completed work)
- ✅ Open and actively updated (work in progress)
We exclude pull requests that were:
- ❌ Closed without merging (abandoned work)
What Gets Excluded
These files don't count toward code metrics:
- 📦 Dependency files (package-lock.json, Gemfile.lock)
- 📄 Documentation files (README.md, .md files)
- 🤖 Bot activity (dependabot, github-actions)
Why? Library updates are maintenance, not code you wrote. We want metrics to reflect your actual coding work.
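To make the exclusions concrete, here is a minimal SQL sketch of such a filter. The pr_files and pull_requests tables and their columns are illustrative stand-ins, not our actual schema.

```sql
-- Illustrative only: sum LOC while skipping dependency files, docs, and bots.
-- Table/column names (pr_files, pull_requests, author_login) are assumptions.
SELECT SUM(f.additions + f.deletions) AS counted_loc
FROM pr_files f
JOIN pull_requests pr ON pr.id = f.pull_request_id
WHERE f.file_path NOT IN ('package-lock.json', 'Gemfile.lock', 'yarn.lock')
  AND f.file_path NOT LIKE '%.md'
  AND pr.author_login NOT IN ('dependabot[bot]', 'github-actions[bot]');
```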
Working Days Config
Your organization can customize:
- Which days count as work days (default: Mon-Fri)
- Company holidays
This ensures "per day" metrics are fair and account for actual work schedules.
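As a rough sketch of how that working-day denominator might be computed, assuming a PostgreSQL backend and a hypothetical company_holidays table:

```sql
-- Count working days in the last 30 days: skip weekends and any dates in a
-- hypothetical company_holidays(holiday_date) table. Mon-Fri is the default.
SELECT COUNT(*) AS working_days
FROM generate_series(CURRENT_DATE - INTERVAL '29 days',
                     CURRENT_DATE, INTERVAL '1 day') AS d(day)
WHERE EXTRACT(ISODOW FROM d.day) < 6  -- ISODOW: 1 = Monday ... 7 = Sunday
  AND d.day::date NOT IN (SELECT holiday_date FROM company_holidays);
```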
Why PRs Matter
Commits directly to the main branch don't generate pull requests, which means they bypass code review, don't get measured by our metrics, and miss opportunities for team collaboration. All production code should go through pull requests to ensure quality and knowledge sharing.
Velocity Metrics
How fast work moves through your development pipeline
Cycle Time
Velocity · Lower is Better
How long it takes from opening a PR to merging it. This is your team's speed metric.
Formula: AVG(merged_at - created_at) in hours
PRs Merged (Throughput)
Velocity · Higher is Better
How many pull requests your team ships. This measures your output volume.
Formula: COUNT(PRs where merged_at is not null)
Deployment Frequency
Velocity · Higher is Better
How often your team ships code to production per day. This is a core DORA metric.
Formula: prs_merged / working_days
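Taken together, the three velocity formulas might look like this in SQL - a sketch only, assuming a pull_requests table with created_at and merged_at timestamps:

```sql
-- Velocity metrics over a 30-day window; 22 is a placeholder for the
-- working_days value from your organization's calendar config.
SELECT
  -- AVG skips NULLs, so unmerged PRs don't affect cycle time.
  AVG(EXTRACT(EPOCH FROM (merged_at - created_at)) / 3600) AS cycle_time_hours,
  COUNT(*) FILTER (WHERE merged_at IS NOT NULL)            AS prs_merged,
  COUNT(*) FILTER (WHERE merged_at IS NOT NULL) / 22.0     AS deployment_frequency
FROM pull_requests
WHERE created_at >= CURRENT_DATE - INTERVAL '30 days';
```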
Quality Metrics
How well your team maintains code quality and process discipline
Average PR Size
Quality · Lower is Better
How many lines of code change in each PR on average. Smaller is usually better.
Formula: AVG(SUM(additions + deletions per PR))
Test Failure Rate
Quality · Lower is Better
What percent of PRs have failing CI checks. Lower is better.
Formula: (PRs with failing checks / total PRs) * 100
Review Coverage
Quality · Higher is Better
What percent of PRs get at least one code review. Higher is better.
Formula: (PRs with reviews / total PRs) * 100
Merge Without Approval Rate
Quality · Lower is Better
What percent of merged PRs never got an approval. Lower is better.
Formula: (PRs merged without approval / total merged PRs) * 100
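A single query could compute all four quality metrics over merged PRs. The has_failing_checks, reviews_count, and approvals_count columns below are hypothetical stand-ins for CI and review data:

```sql
-- Quality metrics over the last 30 days of merged PRs (illustrative schema).
SELECT
  AVG(additions + deletions)                                                AS avg_pr_size,
  100.0 * COUNT(*) FILTER (WHERE has_failing_checks) / NULLIF(COUNT(*), 0)  AS test_failure_rate,
  100.0 * COUNT(*) FILTER (WHERE reviews_count > 0)  / NULLIF(COUNT(*), 0)  AS review_coverage,
  100.0 * COUNT(*) FILTER (WHERE approvals_count = 0) / NULLIF(COUNT(*), 0) AS merge_without_approval_rate
FROM pull_requests
WHERE merged_at IS NOT NULL
  AND merged_at >= CURRENT_DATE - INTERVAL '30 days';
```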
Developer Productivity Metrics
How much code individual developers are writing and shipping
Lines of Code per Day
Productivity · Higher is Better
How many lines of code a developer writes on average per working day.
Formula: SUM(additions + deletions) / working_days
Commits per Day
Productivity · Higher is Better
How many commits a developer makes on average per working day.
Formula: COUNT(commits) / working_days
Language Distribution
Productivity · Higher is Better
What programming languages a developer works in, measured by lines of code.
Formula: SUM(additions + deletions) grouped by file extension
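For example, language distribution could be derived from per-file stats like this; file_extension is a hypothetical denormalized column, and 'octocat' a stand-in login:

```sql
-- One developer's language distribution by LOC; schema is illustrative.
SELECT f.file_extension,
       SUM(f.additions + f.deletions) AS loc
FROM pr_files f
JOIN pull_requests pr ON pr.id = f.pull_request_id
WHERE pr.author_login = 'octocat'
GROUP BY f.file_extension
ORDER BY loc DESC;
```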
Collaboration Metrics
How well your team works together on code review and knowledge sharing
Review Load Ratio
Collaboration · Higher is Better
How many PRs a developer reviews compared to how many they create. Above 1.0 means they review more than they write.
Formula: prs_reviewed / prs_created
Time to First Review
Collaboration · Lower is Better
How long it takes on average for a PR to get its first review after being opened.
Formula: AVG(first_review_at - created_at) in hours
Reviewer Diversity
Collaboration · Higher is Better
How many different people review a developer's PRs. Higher means better knowledge distribution.
Formula: COUNT(DISTINCT reviewers) on developer's PRs
PRs Created
Collaboration · Higher is Better
How many pull requests a developer opens. Shows overall activity level.
Formula: COUNT(PRs) by author
PRs Merged
Collaboration · Higher is Better
How many pull requests a developer successfully merges. Shows completed work.
Formula: COUNT(PRs merged) by author
Reviews Given
Collaboration · Higher is Better
How many code reviews a developer submits. Includes multiple reviews on the same PR.
Formula: COUNT(reviews) submitted
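Review Load Ratio combines two of the counts above. A sketch, assuming a reviews(reviewer_login, pull_request_id) table:

```sql
-- PRs created vs. PRs reviewed per developer; NULLIF avoids division by zero.
WITH created AS (
  SELECT author_login AS dev, COUNT(*) AS prs_created
  FROM pull_requests GROUP BY author_login
), reviewed AS (
  SELECT reviewer_login AS dev, COUNT(DISTINCT pull_request_id) AS prs_reviewed
  FROM reviews GROUP BY reviewer_login
)
SELECT c.dev,
       r.prs_reviewed::numeric / NULLIF(c.prs_created, 0) AS review_load_ratio
FROM created c JOIN reviewed r USING (dev);
```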
Advanced Developer Metrics
Deeper insights into code quality, timing, and work patterns
Median Cycle Time
Velocity · Lower is Better
The middle value of all PR cycle times. More robust to outliers than the average.
Formula: PERCENTILE_CONT(0.5) of (merged_at - created_at)
P95 Cycle Time
Velocity · Lower is Better
The cycle time that 95% of PRs beat. Shows worst-case performance excluding outliers.
Formula: PERCENTILE_CONT(0.95) of (merged_at - created_at)
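Both percentile metrics map directly onto PostgreSQL's ordered-set aggregates. A sketch over the same hypothetical pull_requests table:

```sql
-- Median and P95 cycle time in hours, computed over merged PRs only.
SELECT
  PERCENTILE_CONT(0.5) WITHIN GROUP (
    ORDER BY EXTRACT(EPOCH FROM (merged_at - created_at)) / 3600) AS median_hours,
  PERCENTILE_CONT(0.95) WITHIN GROUP (
    ORDER BY EXTRACT(EPOCH FROM (merged_at - created_at)) / 3600) AS p95_hours
FROM pull_requests
WHERE merged_at IS NOT NULL;
```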
Average Lines per PR
Productivity · Lower is Better
How many lines change in each PR on average for a specific developer.
Formula: AVG(SUM(additions + deletions per PR))
Average Lines per Commit
Productivity · Lower is Better
How many lines typically change in each commit.
Formula: AVG(additions + deletions per commit)
Code Churn Rate
Productivity · Higher is Better
What percentage of code changes are deletions. High churn means refactoring/simplifying.
Formula: (deletions / (additions + deletions)) * 100
Average Commits per PR
Productivity · Higher is Better
How many commits typically go into each PR. Shows workflow style.
Formula: AVG(commits_count per PR)
Average Files per PR
Productivity · Lower is Better
How many files typically change in each PR. Shows scope of work.
Formula: AVG(changed_files per PR)
Test Pass Rate
Quality · Higher is Better
What percentage of a developer's PRs pass all tests. Higher is better.
Formula: (PRs without failing checks / total PRs) * 100
Test Coverage Ratio
Quality · Higher is Better
What percentage of code changes are in test files. Shows testing discipline.
Formula: (test file LOC / (test LOC + app LOC)) * 100
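Deciding what counts as a "test file" is a classification step. The LIKE patterns below are illustrative guesses and would need tuning per codebase:

```sql
-- Test coverage ratio: share of changed lines that live in test files.
-- The path patterns are examples, not CodePulse's actual classification rules.
SELECT 100.0 *
       SUM(additions + deletions) FILTER (WHERE file_path LIKE '%_test.%'
                                             OR file_path LIKE '%_spec.%'
                                             OR file_path LIKE 'test/%'
                                             OR file_path LIKE 'spec/%')
     / NULLIF(SUM(additions + deletions), 0) AS test_coverage_ratio
FROM pr_files;
```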
Language Diversity Count
Productivity · Higher is Better
How many different programming languages a developer works in.
Formula: COUNT(DISTINCT file_extension)
Time to Approval
Velocity · Lower is Better
How long it takes on average for a PR to get approved after being opened.
Formula: AVG(first_approval_at - created_at) in hours
Review Response Time
Collaboration · Lower is Better
How quickly this developer responds when asked to review others' PRs.
Formula: AVG(review_submitted_at - PR created_at) as reviewer
Repository Metrics
Team-level metrics scoped to specific repositories
About Repository Metrics
Most repository metrics mirror organization and developer metrics, but scoped to a specific repository. For example, Repository Cycle Time shows the average time to merge PRs in just that repo, while Organization Cycle Time averages across all repos.
These metrics use the exact same formulas and exclusions - they're just filtered to one repository. This helps you compare repositories and see which have faster/slower processes.
Repository Commits per Day
Repository · Higher is Better
How many commits land in this repository per working day on average.
Formula: COUNT(commits) / working_days
File Hotspots
Repository · Higher is Better
The top 10 most frequently changed files in a repository. Identifies refactoring candidates.
Formula: COUNT(changes) per file, top 10 shown
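A hotspot query is essentially a GROUP BY with a LIMIT. A sketch, with a made-up repository id and the same hypothetical tables as earlier:

```sql
-- Top 10 most frequently changed files in one repository.
SELECT f.file_path,
       COUNT(DISTINCT f.pull_request_id) AS times_changed
FROM pr_files f
JOIN pull_requests pr ON pr.id = f.pull_request_id
WHERE pr.repository_id = 42  -- hypothetical id
GROUP BY f.file_path
ORDER BY times_changed DESC
LIMIT 10;
```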
Additional Developer Metrics
More specialized metrics for understanding developer work patterns and specialization.
Unique Files Touched
Productivity · Higher is Better
How many different files a developer has edited across all their PRs in the time period.
Formula: COUNT(DISTINCT file_path) across all merged and active PRs by this developer
PRs Reviewed
Collaboration · Higher is Better
Number of unique pull requests this developer has reviewed (not total reviews - one PR can have multiple reviews).
Formula: COUNT(DISTINCT pull_request_id) WHERE developer left at least one review
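The DISTINCT is what separates this metric from Reviews Given, as one query makes clear (reviews is the same hypothetical table shape as above):

```sql
-- Reviews Given counts every submitted review; PRs Reviewed counts each PR once.
SELECT reviewer_login,
       COUNT(*)                        AS reviews_given,
       COUNT(DISTINCT pull_request_id) AS prs_reviewed
FROM reviews
GROUP BY reviewer_login;
```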
Awards & Leaderboards
Beyond just tracking metrics, we celebrate excellence! Our Awards page features 15 different leaderboards that highlight developers who excel in specific areas. These aren't about punishment or comparison - they're about recognition and learning.
Speed & Efficiency (4 awards)
Code Quality (4 awards)
Collaboration & Teamwork (4 awards)
Consistent Output (3 awards)
Why Awards?
Awards aren't about competition - they're about celebration and learning. See someone winning "Lightning Reviewer"? Learn from them! Winning "Clean Code Advocate"? Your team might want to adopt your PR habits. Awards surface role models and best practices.
Award Time Periods
You can view awards for the last 30 days (monthly), last 90 days (quarterly), or all-time. Monthly awards show current hot streaks. Quarterly awards show sustained excellence. All-time awards recognize consistent long-term contributors.
Not Winning? That's OK!
These 15 categories can't possibly capture all the ways developers add value. Writing great documentation, mentoring juniors, fixing gnarly bugs, designing systems - none of these win awards, but they're crucial. Awards highlight some good behaviors, not all of them.
Best Practices for Using Metrics
Look at Trends, Not Snapshots
A single week's numbers mean nothing. Look at trends over months. Is cycle time going up or down? Are PRs getting smaller or larger? Trends tell the real story.
Combine Multiple Metrics
Never judge performance on one metric. High LOC per day is great IF PR size is reasonable AND test coverage is good AND review coverage is high. Context comes from the full picture.
Metrics Can Be Gamed
If you tie bonuses or promotions directly to metrics, people will game them: splitting PRs artificially, inflating LOC with comments, rubber-stamping reviews. Use metrics for insight, not reward/punishment.
Talk to Your Team
Metrics raise questions, they don't answer them. If cycle time spikes, ask why. If someone's LOC drops, find out what changed. The conversation is more valuable than the number.
Never Punish Based on Metrics Alone
Using metrics for performance reviews requires extreme care. Low numbers might mean someone's doing critical infrastructure work, mentoring juniors, or dealing with personal issues. Always get context before making judgments.
How to Spot Metric Gaming
When metrics become targets, people game them. Watch for these red flags:
- Tiny PRs everywhere: 20 one-line PRs instead of one coherent change
- WIP commits: Hundreds of "WIP" commits to inflate commit count
- Rubber-stamp approvals: Instant approvals with no actual review
- Code churn: Adding and removing the same code repeatedly
- Self-reviewing: Creating and immediately merging PRs
- Comment padding: Inflating LOC with unnecessary comments
The fix: Build a culture where the work matters more than the numbers. Celebrate impact, not metrics.
Glossary of Terms
Quick reference guide to common terms and concepts used throughout the platform.
Pull Request (PR)
A proposed code change submitted for review before merging into the main codebase. All our metrics track work through PRs - direct commits to main branches are not counted.
Cycle Time
Total time from when a PR is created to when it's merged. Measures how long it takes to get code from "done coding" to "in production" (or at least merged to main branch).
Median vs Average
Median is the middle value (half above, half below). Average is the total divided by the count. We prefer the median because it resists outliers - one 10-day PR doesn't skew a median of 2 days the way it skews an average.
P95 (95th Percentile)
The value where 95% of measurements fall below it. If P95 cycle time is 48 hours, that means 95% of your PRs merge faster than 48 hours. Useful for finding worst-case scenarios.
Lines of Code (LOC)
Total lines added plus lines deleted. Measures code change volume, not quality. A 10-line bug fix can be more valuable than a 1,000-line feature.
Code Churn
Lines deleted divided by total lines changed. High churn (lots of deletions) usually means cleaning up legacy code. Low churn means mostly adding new code.
Review Coverage
Percentage of merged PRs that received at least one code review. Industry best practice is 100% - every PR should be reviewed before merging.
Review Load Ratio
Reviews given divided by PRs created. A ratio of 2.0 means you review twice as many PRs as you create. A ratio of 0.5 means you create more than you review. Healthy teams aim for 1.0+.
Deployment Frequency
Number of merges to the main branch per day. Assumes merges to main deploy automatically (CI/CD). Part of DORA metrics - high-performing teams deploy multiple times per day.
DORA Metrics
Four key metrics from DevOps Research and Assessment: Deployment Frequency, Lead Time for Changes (cycle time), Change Failure Rate (test failures), and Mean Time to Recovery. Industry standard for measuring dev team performance.
Test Pass Rate
Percentage of a developer's PRs whose status checks all pass. Checks include CI/CD pipelines, linters, unit tests, and integration tests. A passing PR has all green checks.
Hotspot Files
Files that are changed in many different PRs. High-churn files that might need refactoring or extra test coverage. Not always bad - config files and routes are naturally hotspots.
Working Days
Configurable setting for which days count toward "per day" metrics. Default is Monday-Friday. You can customize working days and holidays in Organization Settings.
Bot Filtering
Automated accounts (dependabot, github-actions, renovate) are excluded by default from metrics. You can toggle this on any page to include/exclude bot activity.
Dependency Files
Files like package-lock.json, Gemfile.lock, yarn.lock that are auto-generated when updating dependencies. Excluded from LOC metrics because they're not hand-written code.
Documentation Files
Markdown files (.md), text files, changelogs. Excluded from LOC metrics to focus on code changes. Writing docs is valuable but measured separately.
Active PRs
PRs that were either merged in the time period OR are still open and actively updated. Closed (abandoned) PRs are excluded. This captures both completed work and work-in-progress.
Time Periods
Most metrics support multiple views: Last 7 days (weekly), Last 30 days (monthly), Last 90 days (quarterly), Last 365 days (yearly). Pick the period that matches your team's planning cycle.
Reviewer Diversity
Number of unique people who reviewed a developer's PRs. High diversity means getting feedback from many teammates. Low diversity might mean always asking the same person.
Merge Without Approval
PR was merged without anyone explicitly approving it (clicking "Approve" button). Usually indicates bypassing code review process - a quality risk.
Remember
These metrics are tools to help your team improve, not weapons to punish underperformers. Use them to start conversations, identify bottlenecks, and celebrate wins. The best teams use metrics as a guide, not a gospel.
Questions about how to use these metrics? Want to suggest improvements? We're always learning. Your feedback helps us build better tools.