
The Monorepo Metrics Trap (And How to Escape It)

How to aggregate, compare, and analyze engineering metrics across multiple repositories or within a monorepo structure.

11 min read · Updated January 15, 2025 · By CodePulse Team

Whether you're running a monorepo with dozens of services or managing separate repositories per team, aggregating engineering metrics across codebases presents unique challenges. How do you compare performance across repos? How do you roll up metrics for executive reporting? And how do you avoid the "my repo is different" excuse?

This guide covers strategies for measuring engineering performance across multiple repositories, comparing team metrics fairly, and setting up org-level visibility.

Monorepo vs Multi-Repo Metrics Challenges

The Monorepo Challenge

In a monorepo, all code lives in one repository. This creates specific measurement challenges:

  • Attribution: Who owns which directory? How do you slice metrics by team?
  • Scale: Thousands of PRs per week can overwhelm simple dashboards
  • Cross-cutting changes: Some PRs touch multiple teams' code
  • Noise: Infrastructure changes can skew metrics for everyone

The Multi-Repo Challenge

With separate repositories per team or service, the challenges differ:

  • Aggregation: How do you roll up metrics across 50 repos?
  • Comparison: Is it fair to compare the platform repo to a small service repo?
  • Visibility: Leaders need org-level views, not repo-by-repo spelunking
  • Consistency: Different repos may have different branching strategies or review norms

Hybrid Approaches

Many organizations use a mix: a core monorepo plus satellite repos for specific purposes (mobile apps, documentation, tools). This compounds both sets of challenges.


Aggregating Metrics Across Repositories

What to Aggregate

Not all metrics should be aggregated the same way (see the sketch after these lists):

Sum across repos:

  • Total PRs merged
  • Total commits
  • Active contributors
  • Lines changed

Average across repos (weighted):

  • Cycle time (weighted by PR volume)
  • Review coverage
  • First review time

Minimum/maximum:

  • Worst-case cycle time (highlights problems)
  • Best-case deployment frequency (shows what's possible)
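
To make the three modes concrete, here is a minimal Python sketch. The `RepoMetrics` structure and its field names are hypothetical, for illustration only, not a CodePulse API or export format:

```python
from dataclasses import dataclass

@dataclass
class RepoMetrics:
    # Hypothetical per-repo rollup; field names are illustrative only.
    name: str
    prs_merged: int
    cycle_time_days: float    # average PR cycle time for this repo
    deploys_per_week: float

def aggregate(repos: list[RepoMetrics]) -> dict:
    """Combine per-repo metrics using the three modes above: sum, weighted average, min/max."""
    total_prs = sum(r.prs_merged for r in repos)
    return {
        # Sum across repos: raw volume.
        "total_prs_merged": total_prs,
        # Weighted average: cycle time weighted by each repo's PR volume.
        "org_cycle_time_days": (
            sum(r.cycle_time_days * r.prs_merged for r in repos) / total_prs
            if total_prs else 0.0
        ),
        # Min/max: surface the extremes.
        "worst_cycle_time_days": max(r.cycle_time_days for r in repos),
        "best_deploys_per_week": max(r.deploys_per_week for r in repos),
    }

print(aggregate([
    RepoMetrics("payments", 100, 2.0, 4.0),
    RepoMetrics("platform", 12, 3.2, 1.0),
]))
```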

Weighting Strategies

Simple averages can be misleading. A small repo with 2 PRs and a 10-day cycle time shouldn't carry the same weight as a busy repo with 200 PRs and a 1-day cycle time.

Weighting approaches:

1. PR-weighted average:
   Org cycle time = Sum(repo_cycle_time × repo_pr_count) / Total PRs

2. Contributor-weighted:
   Weight by number of active contributors in each repo

3. Strategic weighting:
   Manually assign weights based on business importance

Example:
  Repo A: 100 PRs, 2-day cycle time
  Repo B: 10 PRs, 5-day cycle time

  Simple average: (2 + 5) / 2 = 3.5 days
  PR-weighted: (100×2 + 10×5) / 110 = 2.27 days

The PR-weighted average better reflects actual team experience.
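
Here is a minimal sketch of the simple vs PR-weighted calculation, using the numbers from the example above (plain Python, no particular tool assumed):

```python
repos = [
    {"name": "A", "prs": 100, "cycle_time_days": 2.0},
    {"name": "B", "prs": 10, "cycle_time_days": 5.0},
]

# Simple average: every repo counts equally, regardless of volume.
simple = sum(r["cycle_time_days"] for r in repos) / len(repos)

# PR-weighted average: each repo counts in proportion to its PR volume.
total_prs = sum(r["prs"] for r in repos)
weighted = sum(r["cycle_time_days"] * r["prs"] for r in repos) / total_prs

print(f"simple:      {simple:.2f} days")    # 3.50
print(f"PR-weighted: {weighted:.2f} days")  # 2.27
```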

Handling Outliers

Some repos will always have unusual metrics:

  • Documentation repos: Fast cycle times, low complexity
  • Infrastructure/DevOps repos: Slower reviews, high scrutiny
  • Legacy repos: Minimal activity, occasional maintenance
  • Prototype repos: High velocity, low quality standards

Consider excluding or separately categorizing these in org-level aggregations.
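
One simple way to do this is a category map that tags each repo and keeps non-product categories out of the org rollup. The categories and repo names below are illustrative, not a built-in feature of any tool:

```python
# Illustrative categories; adjust to your own repo inventory.
REPO_CATEGORY = {
    "payments": "product",
    "user-service": "product",
    "docs-site": "documentation",
    "infra-terraform": "infrastructure",
    "legacy-billing": "legacy",
}

EXCLUDE_FROM_ORG_ROLLUP = {"documentation", "legacy", "prototype"}

def rollup_repos(all_repos: list[str]) -> list[str]:
    """Return only the repos that should count toward the org-level aggregate."""
    return [
        repo for repo in all_repos
        if REPO_CATEGORY.get(repo, "product") not in EXCLUDE_FROM_ORG_ROLLUP
    ]

print(rollup_repos(list(REPO_CATEGORY)))
# ['payments', 'user-service', 'infra-terraform']
```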

📊 How CodePulse Helps

CodePulse automatically aggregates metrics across all your repositories:

  • Dashboard shows org-level metrics aggregated across all repos
  • Repositories page lists per-repo breakdowns for comparison
  • Filter any view by repository to drill down into specific teams/services

Comparing Team/Service Performance

Fair Comparison Principles

Comparing repos/teams requires acknowledging context:

  • Team size matters: A 3-person team can't match the throughput of a 15-person team
  • Domain complexity differs: Payment processing requires more scrutiny than marketing pages
  • Stage of development: Greenfield projects move faster than mature systems
  • External dependencies: Some teams are blocked by vendors, other teams, or compliance

Normalization Strategies

To compare fairly, normalize metrics (a per-capita sketch follows these lists):

Per-capita metrics:

  • PRs merged per engineer per week
  • Commits per contributor
  • Review load per reviewer

Efficiency ratios:

  • Cycle time (comparable regardless of team size)
  • First review time
  • Review-to-merge time

Quality metrics:

  • Merge without approval rate
  • Revert rate
  • Bug fix ratio (fixes as % of total PRs)
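
A minimal sketch of per-capita normalization, assuming you already have per-repo counts of merged PRs and active contributors (the field names and numbers are illustrative):

```python
def prs_per_engineer_per_week(prs_merged: int, contributors: int, weeks: int = 1) -> float:
    """Normalize raw throughput by team size so repos of different sizes compare fairly."""
    if contributors == 0 or weeks == 0:
        return 0.0
    return prs_merged / contributors / weeks

teams = {
    "payments":     {"prs": 24, "contributors": 8},
    "user-service": {"prs": 18, "contributors": 5},
    "frontend":     {"prs": 45, "contributors": 12},
}

for name, t in teams.items():
    rate = prs_per_engineer_per_week(t["prs"], t["contributors"])
    print(f"{name:12s} {rate:.2f} PRs/engineer/week")
# payments 3.00, user-service 3.60, frontend 3.75 (matches the per-capita table below)
```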

Comparison Dashboards

Repository Comparison Template:

| Repo/Team      | Contributors | PRs/Week | Cycle Time | First Review |
|----------------|--------------|----------|------------|--------------|
| payments       | 8            | 24       | 1.8 days   | 4.2 hours    |
| user-service   | 5            | 18       | 1.2 days   | 2.1 hours    |
| frontend       | 12           | 45       | 0.8 days   | 1.8 hours    |
| platform       | 6            | 12       | 3.2 days   | 8.4 hours    |
| mobile-ios     | 4            | 8        | 2.1 days   | 6.2 hours    |

Per-capita view (PRs/engineer/week):

| Repo/Team      | PRs/Engineer | Notes                        |
|----------------|--------------|------------------------------|
| payments       | 3.0          | Stable, good efficiency      |
| user-service   | 3.6          | High performer               |
| frontend       | 3.75         | Highest throughput           |
| platform       | 2.0          | Complex infrastructure work  |
| mobile-ios     | 2.0          | Release cycle constraints    |

For more on benchmarks, see PR Cycle Time Benchmarks by Team Size.


Identifying Cross-Repo Patterns

What Patterns to Look For

Analyzing across repos reveals patterns invisible at the individual repo level:

Consistency patterns:

  • Do all repos have similar cycle times, or is there high variance?
  • Are review practices consistent across teams?
  • Do some repos have much higher quality issues than others?

Bottleneck patterns:

  • Do the same reviewers appear across multiple repos?
  • Are certain repos blocking others (dependencies)?
  • Is one team's slow velocity affecting downstream teams?

Risk patterns:

  • Which repos have the lowest review coverage or the most merges without approval?
  • Are any repos dependent on a single contributor or reviewer?
  • Do reverts and bug-fix PRs cluster in a few repos?

Cross-Team Dependencies

In multi-repo environments, teams often depend on each other:

Dependency analysis questions (a correlation sketch follows the list):

1. When team A's velocity drops, does team B's follow?
   → Suggests dependency or shared resources

2. Do certain PRs require reviewers from multiple repos?
   → Cross-cutting concerns that may need better ownership

3. Are there "critical path" repos that block many others?
   → Investment in those repos has outsized impact

4. Do release cycles of different repos align or conflict?
   → Coordination overhead that could be reduced
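
One way to probe question 1 is to correlate two teams' weekly PR counts; a sustained positive correlation is a hint of a dependency or shared resource, not proof. A minimal sketch in plain Python, with hypothetical weekly counts:

```python
import statistics

def correlation(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equally long weekly series."""
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    if var_x == 0 or var_y == 0:
        return 0.0
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical weekly PR counts for two teams over eight weeks.
team_a = [22, 25, 18, 10, 12, 20, 24, 23]
team_b = [15, 16, 14,  8,  9, 13, 15, 16]

print(f"velocity correlation: {correlation(team_a, team_b):.2f}")
# A value near 1.0 suggests the two teams' velocities move together.
```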

Setting Up Org-Level Dashboards

Executive View

For leadership, the org-level view should be simple:

Org Engineering Health Dashboard

Overall Health: B+ (up from B last quarter)

Key Metrics:
  Total PRs this quarter: 1,247 (+12% QoQ)
  Avg cycle time: 1.6 days (target: <2 days) ✓
  Active contributors: 47
  Review coverage: 94%

Highlights:
  ✓ Frontend team cut cycle time by 40%
  ✓ Zero P0 incidents this quarter
  ⚠ Platform team velocity down (hiring ramp)

By Repository:
  [Expandable list with drill-down to repo details]

For more on executive reporting, see Board-Ready Engineering Metrics.

Team Lead View

Team leads need more detail on their repos plus context on how they compare:

Payments Team Dashboard

My Repos: payments, payment-gateway, billing-service

Combined Metrics:
  PRs this week: 24
  Avg cycle time: 1.8 days (org avg: 1.6)
  First review: 4.2 hours (org avg: 3.1)
  Contributors: 8

How We Compare:
  Cycle time: 75th percentile (room to improve)
  Throughput per engineer: 90th percentile (strong)
  Review coverage: 100% (best in org)

Attention Needed:
  - payment-gateway has 2-day first review time
  - 3 PRs currently blocked >24 hours

Building the Dashboard

Key decisions when building org-level dashboards (a deviation-alert sketch follows the list):

  • Refresh frequency: Daily for operational, weekly for executive
  • Time ranges: Show trends (week, month, quarter) not just snapshots
  • Drill-down capability: Org → Team → Repo → Individual PR
  • Alerting: Notify when metrics deviate significantly from norm
  • Export: Enable data export for custom analysis
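
For the alerting point above, one common approach is a simple z-score check against the trailing weeks. This sketch assumes you export weekly metric values somewhere queryable; the threshold and history window are illustrative:

```python
import statistics

def deviates_from_norm(history: list[float], latest: float, threshold: float = 2.0) -> bool:
    """Flag the latest weekly value if it sits more than `threshold` standard
    deviations away from the trailing history."""
    if len(history) < 4:
        return False  # not enough data to establish a norm
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Hypothetical trailing eight weeks of org cycle time (days), then a spike.
cycle_time_history = [1.5, 1.7, 1.6, 1.4, 1.6, 1.8, 1.5, 1.6]
print(deviates_from_norm(cycle_time_history, 3.2))  # True: worth an alert
```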

🏢 How CodePulse Helps

CodePulse is built for multi-repository visibility:

  • Executive Summary aggregates across all repos with a single health grade
  • Repositories page enables side-by-side comparison
  • Every view can filter by repository for team-specific analysis
  • Alerts can be set at org or repo level

Avoiding Dashboard Sprawl

The danger with multi-repo metrics is creating too many dashboards:

  • Start with one org-level view that serves 80% of needs
  • Add team-level views only when requested and justified
  • Resist per-repo dashboards unless there's clear ownership
  • Consolidate rather than creating new views for each question

The goal is visibility that drives action, not data that drives dashboard maintenance. Keep it simple, make it actionable, and iterate based on what questions actually get asked.

See these insights for your team

CodePulse connects to your GitHub and shows you actionable engineering metrics in minutes. No complex setup required.

Free tier available. No credit card required.