RAG status—Red, Amber, Green—is the most common way engineering leaders communicate project health to stakeholders. It's simple, visual, and universally understood. It's also frequently misused, gamed, and ultimately misleading. This guide covers how to use RAG status effectively for engineering reporting, and when to supplement it with actual metrics.
"A project that's Green-Green-Green for 8 weeks then suddenly Red is a reporting failure, not a project failure. The warning signs were there—the status hid them."
What is RAG Status?
RAG stands for Red, Amber, Green—a traffic light system for communicating project health at a glance. It originated in project management but has become standard in engineering reporting, especially for executive updates and steering committee meetings.
| Status | Meaning | Action Required |
|---|---|---|
| Green | On track. Progressing as planned. | Continue monitoring |
| Amber | At risk. Issues exist but manageable. | Team can address; flag to leadership |
| Red | Critical. Immediate intervention needed. | Executive attention required |
Some organizations add Blue (completed) and Gray (unknown/insufficient data). These are useful for portfolio views but aren't standard everywhere.
Applying RAG to Engineering Projects
In engineering, RAG status typically applies to multiple dimensions of a project:
Common RAG Dimensions
| Dimension | Green | Amber | Red |
|---|---|---|---|
| Schedule | On or ahead of plan | 1-2 weeks behind | >2 weeks behind |
| Scope | Requirements stable | Minor scope creep | Major scope changes |
| Quality | Test coverage high, few bugs | Tech debt accumulating | Critical bugs blocking release |
| Team Health | Velocity stable, no burnout signs | Overtime increasing | Key people leaving/burning out |
| Dependencies | All external teams delivering | 1 dependency at risk | Blocked by external team |
Example: Engineering Status Report

```
┌─────────────────────────────────────────────────────┐
│ Project: Payment Processing v2.0                    │
│ Week: 12 of 16                                      │
├─────────────────────────────────────────────────────┤
│ Overall Status: 🟡 AMBER                            │
├─────────────────────────────────────────────────────┤
│ Schedule:     🟢 Green │ 1 week ahead of plan       │
│ Scope:        🟢 Green │ No changes this week       │
│ Quality:      🟡 Amber │ Test coverage at 72%       │
│ Team Health:  🟡 Amber │ 2 engineers on vacation    │
│ Dependencies: 🟢 Green │ API team on track          │
├─────────────────────────────────────────────────────┤
│ Key Risks:                                          │
│ • Test coverage needs to reach 85% before release   │
│ • Team capacity reduced through end of month        │
├─────────────────────────────────────────────────────┤
│ Actions:                                            │
│ • Prioritizing test coverage this sprint            │
│ • Delayed non-critical features to Q2               │
└─────────────────────────────────────────────────────┘
```
/// Our Take
RAG status is necessary but insufficient. The real question isn't "what color is this project?" but "what do I need to do about it?"
The biggest problem with RAG is that it's subjective. Two engineers can look at the same project and assign different colors. That's why we recommend supplementing RAG with objective metrics like cycle time, deployment frequency, and test coverage—data that's far harder to game or interpret differently.
The Problems with RAG Reporting
"Projects don't go from Green to Red overnight. They go from 'Green but actually Amber' to 'Amber but we said Green' to 'Red and everyone knew but nobody said.'"
RAG status has known failure modes. Understanding these helps you avoid them:
1. Watermelon Projects
A "watermelon" project is Green on the outside, Red on the inside. The status report says Green, but anyone working on the project knows it's in trouble. This happens when:
- Teams are afraid to report bad news
- There's pressure to maintain "Green" status
- The person reporting isn't close enough to the work
- Metrics aren't objective—just vibes
2. Status Inflation
When every project is Green, Green loses meaning. Teams learn that Amber means "you'll get asked hard questions" and Red means "you've failed," so they report Green until it's undeniably Red. The Amber zone—where action can still help—gets skipped entirely.
3. Subjectivity Without Calibration
What does "1-2 weeks behind" mean for Schedule Amber? Is it calendar weeks or sprint weeks? Behind the original estimate or the current plan? Without clear, agreed-upon thresholds, RAG becomes whatever the reporter feels like.
4. Lagging Indicators
By the time a project is clearly Red, it's often too late to fix. RAG status typically reflects what has already happened, not what's about to happen. Leading indicators—like increasing cycle time, declining test coverage, or review bottlenecks—warn you before the project status changes.
Setting Better RAG Thresholds
The fix for subjective RAG is objective thresholds. Define specific, measurable criteria for each status:
Schedule Thresholds (Example)
| Status | Threshold | Measurement |
|---|---|---|
| Green | Within 10% of planned completion | Story points completed vs. planned |
| Amber | 10-25% behind plan | Story points completed vs. planned |
| Red | >25% behind plan | Story points completed vs. planned |
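The schedule thresholds above reduce to a single percentage comparison, so they are easy to automate. Here's a minimal sketch in Python; the function name and the `Gray` fallback for missing data are illustrative choices, not part of any standard:

```python
# Map schedule slippage to a RAG status using the example thresholds above:
# within 10% of plan = Green, 10-25% behind = Amber, >25% behind = Red.
def schedule_rag(points_planned: int, points_completed: int) -> str:
    """Return a RAG color from story points completed vs. planned."""
    if points_planned <= 0:
        return "Gray"  # insufficient data (optional status some orgs use)
    behind = max(0.0, (points_planned - points_completed) / points_planned)
    if behind <= 0.10:
        return "Green"
    if behind <= 0.25:
        return "Amber"
    return "Red"

print(schedule_rag(100, 95))  # → Green (5% behind plan)
print(schedule_rag(100, 80))  # → Amber (20% behind plan)
print(schedule_rag(100, 70))  # → Red (30% behind plan)
```

Once the status is a function of the numbers, two reporters can no longer assign different colors to the same project.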
Quality Thresholds (Example)
| Status | Threshold | Measurement |
|---|---|---|
| Green | Test coverage >80%, <5 open P1 bugs | CI coverage reports, bug tracker |
| Amber | Coverage 60-80% OR 5-10 open P1 bugs | CI coverage reports, bug tracker |
| Red | Coverage <60% OR >10 open P1 bugs | CI coverage reports, bug tracker |
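The quality thresholds use OR logic: one critical signal is enough to downgrade the whole dimension. A sketch of that evaluation order (worst status checked first), with illustrative names:

```python
# Combine test coverage and open P1 bug count into one RAG status.
# Red is checked first because either critical signal alone forces Red.
def quality_rag(coverage_pct: float, open_p1_bugs: int) -> str:
    if coverage_pct < 60 or open_p1_bugs > 10:
        return "Red"
    if coverage_pct <= 80 or open_p1_bugs >= 5:
        return "Amber"
    return "Green"  # coverage >80% AND fewer than 5 open P1 bugs

print(quality_rag(85, 2))   # → Green
print(quality_rag(72, 2))   # → Amber (coverage in the 60-80% band)
print(quality_rag(90, 12))  # → Red (bug count alone forces Red)
```

Checking Red before Amber matters: a project with 90% coverage but 12 open P1 bugs should never land on Green just because one signal looks healthy.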
Delivery Flow Thresholds (Example)
| Status | Threshold | Measurement |
|---|---|---|
| Green | Cycle time <48 hours, review pickup <4 hours | Git analytics (CodePulse) |
| Amber | Cycle time 48-96 hours OR pickup 4-8 hours | Git analytics (CodePulse) |
| Red | Cycle time >96 hours OR pickup >8 hours | Git analytics (CodePulse) |
📊 How to See This in CodePulse
CodePulse provides objective metrics to inform your RAG status:
- Dashboard shows cycle time breakdown—if pickup time is spiking, that's an early Amber signal
- Executive Summary gives health grades that can map directly to RAG thresholds
- Alerts can notify you when metrics cross thresholds before you update status
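Threshold alerting like this doesn't depend on any particular tool. A minimal sketch, assuming metrics arrive as a plain dict from whatever source you export (metric names and limits here are made up for the example):

```python
# Flag metrics that have crossed their Amber threshold, regardless of
# where the numbers come from (git analytics export, CI, bug tracker).
THRESHOLDS = {"cycle_time_hours": 48, "review_pickup_hours": 4}

def crossed(metrics: dict) -> list:
    """Return the names of metrics above their configured limit."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

print(crossed({"cycle_time_hours": 52, "review_pickup_hours": 3.1}))
# → ['cycle_time_hours']
```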
Using Leading Indicators
The best RAG reports combine lagging indicators (what happened) with leading indicators (what's likely to happen). Here's how:
Lagging vs. Leading Indicators
| Dimension | Lagging (What Happened) | Leading (What's Coming) |
|---|---|---|
| Schedule | Story points delivered | PR cycle time trend, blocked items count |
| Quality | Bugs found in production | Test coverage trend, code churn rate |
| Team Health | People who left | After-hours commits, review load imbalance |
| Dependencies | Integration failures | External team RAG status, API stability |
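A leading indicator is often just a trend check on a metric that's still inside the Green band. An illustrative sketch (the numbers and the three-week streak rule are arbitrary example choices):

```python
# Flag when a weekly metric has risen for several consecutive weeks,
# even though its absolute value is still under the Green threshold.
def rising_streak(weekly_values: list, weeks: int = 3) -> bool:
    """True if the last `weeks` week-over-week changes were all increases."""
    recent = weekly_values[-(weeks + 1):]
    return len(recent) == weeks + 1 and all(
        a < b for a, b in zip(recent, recent[1:]))

cycle_times = [30, 31, 29, 33, 38, 44]  # hours; still under a 48h Green limit
print(rising_streak(cycle_times))  # → True: an early Amber signal
```

Every value in that series is "Green" by the snapshot thresholds, yet the streak says the project is drifting toward Amber weeks before the threshold trips.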
"Leading indicators are uncomfortable because they force you to admit problems before they're undeniable. That's exactly why they're valuable."
RAG Reporting Cadence
How often should you update RAG status? It depends on the audience:
| Audience | Cadence | Focus |
|---|---|---|
| Board/C-Suite | Monthly or quarterly | Portfolio-level RAG, major risks only |
| Steering Committee | Bi-weekly | Project-level RAG with actions |
| VP/Director | Weekly | Detailed RAG with leading indicators |
| Team Leads | Daily (standups) | Not RAG—use actual metrics |
The higher up the audience, the more RAG makes sense—executives don't need cycle time charts, they need "should I be worried?" The closer to the work, the more you should use actual metrics instead of traffic lights.
RAG Status Templates
Weekly Engineering Update Template
```
# Engineering Status - Week of [DATE]

## Overall Status: [🟢/🟡/🔴]

### Summary
[2-3 sentences on what happened this week]

### RAG by Dimension
| Dimension    | Status | Notes                             |
|--------------|--------|-----------------------------------|
| Schedule     | 🟢     | On track for [milestone]          |
| Quality      | 🟡     | Test coverage at 74% (target 80%) |
| Team Health  | 🟢     | No concerns                       |
| Dependencies | 🟡     | Waiting on API team review        |

### Key Metrics (from CodePulse)
- Cycle Time: 32 hours (target: <48h) ✅
- Review Pickup: 3.2 hours (target: <4h) ✅
- PRs Merged: 24 (up from 18 last week)
- Review Coverage: 98%

### Risks & Blockers
1. [Risk 1] - Mitigation: [action]
2. [Risk 2] - Mitigation: [action]

### Next Week Focus
- [Priority 1]
- [Priority 2]
```
Steering Committee Template
```
# Project: [NAME] - Steering Committee Update

## Executive Summary
Overall: [🟢/🟡/🔴] | Previous: [🟢/🟡/🔴] | Trend: [↑/→/↓]

[One paragraph summary of project status]

## Status Dashboard
┌────────────────┬────────┬─────────────────────────┐
│ Dimension      │ Status │ Notes                   │
├────────────────┼────────┼─────────────────────────┤
│ Schedule       │ 🟢     │ Week 8 of 12            │
│ Budget         │ 🟡     │ 92% of allocation used  │
│ Scope          │ 🟡     │ 2 features descoped     │
│ Quality        │ 🟢     │ 0 P1 bugs open          │
│ Resources      │ 🟡     │ 1 backfill in progress  │
└────────────────┴────────┴─────────────────────────┘

## Decisions Required
1. [Decision needed from steering committee]

## Key Risks
| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| [X]  | High   | Medium      | [Action]   |

## Accomplishments This Period
- [Achievement 1]
- [Achievement 2]

## Next Period Plan
- [Goal 1]
- [Goal 2]
```
Preventing RAG Gaming
When RAG status affects perception, people will game it. Here's how to prevent that:
1. Make Thresholds Objective
If Green/Amber/Red is defined by measurable thresholds (cycle time <48h, coverage >80%), there's less room for interpretation. The metric is the status.
2. Separate Messenger from Status
If the team reporting the status also gets judged by it, they have incentive to inflate. Consider having metrics pulled automatically (from CodePulse, CI systems, bug trackers) rather than manually reported.
3. Normalize Amber
Amber should be normal, not shameful. Most projects should be Amber at some point. If everything is always Green, either your thresholds are too loose or people are inflating. Celebrate teams that catch issues early (Amber) before they become crises (Red).
4. Focus on Trend, Not Snapshot
A project that goes Green → Amber is healthier than one that's been "Green" for months then suddenly Red. Show the trend over time, not just the current status.
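The steering-committee template above already carries "Previous" and "Trend" fields; deriving the trend arrow from a status history is a one-liner's worth of logic. A sketch with illustrative names (`↓` here means the color worsened, matching the template's arrows):

```python
# Summarize a RAG history as "current status (trend arrow)".
ORDER = {"Green": 0, "Amber": 1, "Red": 2}

def status_with_trend(history: list) -> str:
    """Compare the two most recent statuses and append a trend arrow."""
    current, previous = history[-1], history[-2]
    direction = (ORDER[current] > ORDER[previous]) - (ORDER[current] < ORDER[previous])
    arrow = {1: "↓", 0: "→", -1: "↑"}[direction]
    return f"{current} ({arrow})"

print(status_with_trend(["Green", "Green", "Amber"]))  # → Amber (↓)
print(status_with_trend(["Red", "Amber"]))             # → Amber (↑)
```

The two example outputs make the point of this section: both projects are Amber today, but one is deteriorating and one is recovering, and only the trend distinguishes them.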
Related Guides
Learn more about objective engineering metrics that can inform your RAG reporting:
- Board-Ready Engineering Metrics — How to present engineering metrics to executives
- Weekly Engineering Status Report Template — Templates for regular status updates
- Engineering Health Scorecard — Building a comprehensive health dashboard
- Engineering Metrics Dashboard Guide — Setting up objective metrics tracking
Conclusion
RAG status isn't going away—it's too useful for executive communication. But it's only as good as the objectivity behind it. The best RAG reports:
- Use measurable thresholds instead of gut feel
- Include leading indicators to catch issues early
- Normalize Amber as a healthy state, not a failure
- Pull from real data when possible, not manual reports
- Show trends over time, not just snapshots
"The goal isn't a Green dashboard. The goal is early visibility into problems while they're still fixable. A dashboard that's always Green is lying to you."
Start by defining objective thresholds for your most important dimensions. Use tools like CodePulse to get real-time metrics that can inform—or even automate—your RAG status. The result is RAG reporting that actually helps you ship, not just report.
