A developer on my team once asked me: "Why does it take three weeks for a one-line bug fix to reach users?"
It was a fair question. The actual coding took five minutes; writing the test took another ten. So where did the other three weeks go?
That's lead time for changes in action—and it's often the most eye-opening DORA metric for engineering teams. While deployment frequency tells you how often you ship, lead time tells you how fast you ship.
Lead time measures the clock time from when a developer commits code until it's running in production and available to users. It's a direct measure of your development pipeline's efficiency.
#What Lead Time Actually Measures
Lead time isn't just development time. It includes everything that happens between "I've written the code" and "users can use it":
- Code review time
- Testing and quality assurance
- Build and CI/CD pipeline execution
- Approval and release processes
- Deployment and rollout time
The DORA research classifies teams into four performance tiers by lead time:
- Elite performers: Less than one day
- High performers: Between one day and one week
- Medium performers: Between one week and one month
- Low performers: Between one month and six months
Most teams are shocked when they first measure their lead time. What feels like "fast development" often reveals weeks of hidden delays.
#The Hidden Costs of Long Lead Times
Context switching kills productivity
When lead time is long, developers finish a feature, move on to something else, then get pulled back to fix issues with the original feature. Every context switch wastes time and mental energy.
I watched a developer spend an entire morning remembering what their code from two weeks ago actually did. That's pure waste created by long lead times.
Delayed feedback hurts quality
The longer the gap between writing code and seeing it in production, the less likely you are to catch problems early. By the time users report issues, the developer has moved on and the context is lost.
Business opportunity cost
Every day a valuable feature sits in your pipeline instead of in front of users is lost revenue, delayed feedback, and missed competitive advantage.
Team frustration
Nothing kills developer morale like working on "urgent" features that won't reach users for weeks. Long lead times make everything feel less important.
#Common Lead Time Killers
Let me walk you through the biggest culprits I've seen:
#The Approval Bottleneck
The problem: Every change requires manual approval from senior developers, architects, or managers.
Real example: I worked with a team where the CTO had to personally approve every production deployment. The CTO was busy (obviously), so deployments backed up for days waiting for approval.
The solution: Replace human gates with automated quality gates. Good CI/CD pipelines can check for security issues, performance regressions, and code quality automatically—and they never take vacation.
#Review Purgatory
The problem: Pull requests sit for days waiting for code review.
Why it happens:
- Reviewers are overloaded with their own work
- PRs are too large and intimidating to review
- No clear expectation for review turnaround time
- Reviewers don't prioritize reviews over new development
The fix: Treat code review as a team priority, not an individual favor. Set SLAs (like "all PRs reviewed within 4 hours during business hours") and make review velocity visible.
#Batch Processing
The problem: Changes accumulate in staging environments and get deployed together in batches.
Why teams do this: It feels more efficient to deploy multiple changes at once.
Why it's wrong: Batch deployments increase lead time for individual changes and make it harder to identify problems when they occur.
#Environmental Drift
The problem: Staging environments differ from production, causing last-minute integration issues.
Real example: A team I worked with had a staging database that was 6 months out of date. Features would work perfectly in staging, then fail in production due to schema differences. Every deployment turned into a debugging session.
#Manual Testing Cycles
The problem: QA processes that require human intervention for every change.
Why it's slow: Humans don't work 24/7, humans make mistakes, and manual testing doesn't scale.
#How to Measure Lead Time Accurately
Most teams measure lead time wrong. Here are the key measurement points:
Start time: When code is committed to the main branch (not when a feature branch is created)
End time: When code is running in production and available to users (not when deployment "completes")
What to include:
- Code review time
- Build and test execution time
- Deployment pipeline time
- Any manual approval steps
- Rollout and verification time
What NOT to include:
- Feature development time
- Time spent on feature branches
- Time waiting for business requirements
Track lead time for different types of changes separately:
- Hot fixes: Should be fastest (elite teams: < 1 hour)
- Small features: Regular pipeline (elite teams: < 1 day)
- Large features: May require longer but should still be optimized
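To make this concrete, here is a minimal Python sketch of the calculation. The timestamps are hypothetical stand-ins for data you would pull from your version control and deployment tooling:

```python
from datetime import datetime, timedelta

def lead_time(commit_time: datetime, production_time: datetime) -> timedelta:
    """Clock time from merge to main until the change is live for users."""
    return production_time - commit_time

# Hypothetical data: (change type, committed to main, live in production)
changes = [
    ("hotfix",  datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 9, 45)),
    ("feature", datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 2, 8, 30)),
    ("feature", datetime(2024, 5, 2, 11, 0), datetime(2024, 5, 4, 16, 0)),
]

# Track each change type separately, as recommended above.
by_type: dict[str, list[timedelta]] = {}
for kind, committed, live in changes:
    by_type.setdefault(kind, []).append(lead_time(committed, live))

for kind, times in by_type.items():
    avg = sum(times, timedelta()) / len(times)
    print(f"{kind}: average lead time {avg}")
```

Even this crude version surfaces the pattern that matters: hotfixes measured in minutes, regular features measured in days.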
#Practical Strategies to Reduce Lead Time
#1. Eliminate Manual Steps
Automate everything possible:
- Code quality checks (linting, security scans)
- Test execution (unit, integration, e2e)
- Database migrations
- Deployment processes
- Health checks and verification
Replace approvals with automation: Instead of "senior dev must approve," create automated checks for the things senior devs look for: security vulnerabilities, performance regressions, code complexity metrics.
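As an illustration, an automated gate can be a plain script in the pipeline. The check names, report keys, and thresholds below are hypothetical stand-ins for your real scanners and profilers:

```python
# Hypothetical automated gate: each check returns (name, passed, detail).
# In a real pipeline these would call your scanner, profiler, and linter.

def check_security(report: dict) -> tuple[str, bool, str]:
    vulns = report.get("high_severity_vulns", 0)
    return ("security", vulns == 0, f"{vulns} high-severity findings")

def check_performance(report: dict) -> tuple[str, bool, str]:
    regression = report.get("p95_latency_change_pct", 0.0)
    return ("performance", regression <= 5.0, f"p95 latency +{regression}%")

def check_complexity(report: dict) -> tuple[str, bool, str]:
    worst = report.get("max_cyclomatic_complexity", 0)
    return ("complexity", worst <= 15, f"worst function complexity {worst}")

def quality_gate(report: dict) -> bool:
    """Pass only if every automated check passes -- no human in the loop."""
    checks = (check_security, check_performance, check_complexity)
    results = [check(report) for check in checks]
    for name, passed, detail in results:
        print(f"{'PASS' if passed else 'FAIL'} {name}: {detail}")
    return all(passed for _, passed, _ in results)
```

The gate encodes exactly what a senior reviewer would look for, but it answers in seconds instead of days.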
#2. Optimize Your Review Process
Set clear expectations:
- Reviews should happen within 4 hours during business hours
- If a PR sits unreviewed for more than 8 hours, escalate
- Small PRs (< 200 lines) get priority review
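A small script can make these SLAs enforceable. The sketch below triages open PRs by age; in practice the `(pr_number, created_at)` pairs would come from your code host's API rather than being hardcoded:

```python
from datetime import datetime, timedelta

REVIEW_SLA = timedelta(hours=4)   # review expected within 4 hours
ESCALATION = timedelta(hours=8)   # escalate after 8 hours unreviewed

def triage_prs(open_prs, now):
    """Sort open PRs into on-track / overdue / escalate buckets."""
    buckets = {"on_track": [], "overdue": [], "escalate": []}
    for number, created_at in open_prs:
        age = now - created_at
        if age > ESCALATION:
            buckets["escalate"].append(number)
        elif age > REVIEW_SLA:
            buckets["overdue"].append(number)
        else:
            buckets["on_track"].append(number)
    return buckets

# Hypothetical snapshot of the review queue.
now = datetime(2024, 5, 1, 17, 0)
prs = [(101, datetime(2024, 5, 1, 15, 30)),   # 1.5 hours old
       (102, datetime(2024, 5, 1, 11, 0)),    # 6 hours old
       (103, datetime(2024, 4, 30, 9, 0))]    # well past escalation
print(triage_prs(prs, now))
```

Run it on a schedule and post the "overdue" and "escalate" buckets to the team channel to make review velocity visible.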
Make reviews easier:
- Keep PRs small and focused
- Write clear PR descriptions explaining what changed and why
- Include tests and documentation in the same PR
- Use draft PRs for work-in-progress to get early feedback
Distribute review load:
- Don't make one person the bottleneck reviewer
- Use rotation systems or automated reviewer assignment
- Train more people to give quality reviews
#3. Improve Your CI/CD Pipeline
Optimize for speed:
- Run tests in parallel
- Cache dependencies and build artifacts
- Use faster test databases (in-memory when possible)
- Fail fast on obvious problems
Make pipelines reliable:
- Eliminate flaky tests that cause false failures
- Use retry logic for network-dependent tests
- Have clear failure messages that help developers fix issues quickly
#4. Use Trunk-Based Development
Avoid long-lived feature branches:
- Feature branches that last weeks create integration nightmares
- The longer branches live, the harder they are to merge
- Big merge conflicts slow down lead time
Instead: Work in small chunks that can be merged to main daily, using feature flags to hide incomplete features.
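A feature flag can be as simple as a guarded branch in code. This is a minimal in-process sketch (the flag and function names are illustrative); real teams usually back flags with a config store or flag service:

```python
# Minimal in-process feature-flag sketch. The flag name and checkout
# functions are hypothetical examples.
FLAGS = {
    "new_checkout_flow": False,  # merged to main daily, hidden until ready
}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def checkout(cart):
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)    # incomplete code ships "dark"
    return legacy_checkout(cart)

def legacy_checkout(cart):
    return f"legacy:{len(cart)}"

def new_checkout(cart):
    return f"new:{len(cart)}"
```

Because the new path is dark until the flag flips, half-finished work can merge to main every day without ever reaching users.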
#5. Implement Continuous Deployment
Remove deployment scheduling:
- Don't wait for "deployment windows"
- Don't batch changes for weekly releases
- Deploy as soon as code passes all checks
Make deployments safe:
- Use blue-green or rolling deployments
- Implement automatic rollback on health check failures
- Deploy to small percentages of users first (canary releases)
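The automatic-rollback logic above can be sketched in a few lines. The `deploy`, `health_check`, and `rollback` hooks are placeholders for calls to your orchestrator and monitoring endpoints:

```python
events = []

def deploy_with_rollback(deploy, health_check, rollback, checks=3):
    """Deploy, verify health repeatedly, roll back automatically on failure."""
    deploy()
    for _ in range(checks):
        if not health_check():
            rollback()
            return "rolled_back"
    return "deployed"

# Hypothetical hooks -- in practice these would call your orchestrator
# and poll a monitoring endpoint (e.g. a /healthz probe).
result = deploy_with_rollback(
    deploy=lambda: events.append("deploy v2"),
    health_check=lambda: False,          # simulate a failing health probe
    rollback=lambda: events.append("rollback to v1"),
)
print(result, events)
```

The point is that recovery requires no human decision: a failed probe triggers the rollback before most users notice.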
#Lead Time Optimization: A Real Case Study
Let me share a transformation I witnessed:
Starting point:
- Lead time: 3-4 weeks average
- Process: Feature branch → manual testing → staging → weekly production release
- Problems: Long review queues, manual QA bottleneck, batch deployments
Changes made:
- Automated the QA process: Replaced 2 days of manual testing with 20 minutes of automated testing
- Implemented trunk-based development: Eliminated long-lived feature branches
- Set up continuous deployment: Removed weekly release cycles
- Created review SLAs: All PRs reviewed within 4 hours or escalated
- Added monitoring: Could detect problems within minutes instead of hours
Results after 4 months:
- Lead time: 2-4 hours average (a roughly 99% reduction)
- Deployment frequency: From weekly to multiple times daily
- Bug reports: Decreased by 60% (faster feedback meant better quality)
- Developer satisfaction: Significantly improved (features reached users the same day)
#Advanced Lead Time Techniques
#Feature Flags and Progressive Delivery
Instead of waiting for features to be "complete" before deploying, use feature flags to:
- Deploy incomplete features (hidden from users)
- Test in production with internal users
- Gradually roll out to increasing percentages of users
- Instantly disable features if problems occur
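Gradual rollout needs deterministic bucketing so a user's experience doesn't flicker between requests. One common approach, sketched here with an assumed feature name, hashes the user ID into a stable bucket:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket users: the same user always gets the same
    answer, and raising `percent` only ever adds users."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
    bucket = (digest[0] * 256 + digest[1]) % 100   # stable bucket 0..99
    return bucket < percent

# Ramp: 0% -> internal users -> 10% -> 50% -> 100%. Setting percent
# back to 0 instantly disables the feature for everyone.
enabled = sum(in_rollout(f"user{i}", "new_search", 10) for i in range(1000))
print(f"{enabled / 10:.1f}% of users enabled at 10% rollout")
```

Keying the hash on both feature and user ID means different features roll out to different (uncorrelated) slices of your user base.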
#Deployment Pipeline Parallelization
Run different types of checks in parallel instead of sequentially:
- Unit tests, integration tests, and security scans can run simultaneously
- Multiple deployment stages can run in parallel for different services
- Database migrations can run before application deployment
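Parallelizing independent checks is straightforward with a thread pool. The stages below are stand-ins that sleep instead of shelling out to real tooling, just to show the wall-clock effect:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for real pipeline stages (each would invoke your tooling).
def unit_tests():
    time.sleep(0.1); return ("unit tests", True)

def integration_tests():
    time.sleep(0.1); return ("integration tests", True)

def security_scan():
    time.sleep(0.1); return ("security scan", True)

stages = [unit_tests, integration_tests, security_scan]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda stage: stage(), stages))
elapsed = time.perf_counter() - start

# Wall-clock time is roughly one stage, not the sum of all three.
print(results, f"{elapsed:.2f}s")
```

Three sequential stages would take the sum of their runtimes; run in parallel, the pipeline takes only as long as the slowest stage.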
#Environment Consistency
Make your staging and production environments as similar as possible:
- Use the same infrastructure code (Terraform, CloudFormation)
- Keep data schemas in sync
- Use similar data volumes and traffic patterns
- Monitor environment drift and fix it quickly
#Measuring Lead Time Components
Break down your lead time to identify the biggest opportunities:
Code Review Time: From PR creation to approval
- Target: < 4 hours for most PRs
- Optimization: Better PR practices, reviewer availability
Build Time: From commit to successful build
- Target: < 10 minutes for most builds
- Optimization: Parallel execution, better caching
Test Execution: From build to test completion
- Target: < 15 minutes for full test suite
- Optimization: Test parallelization, faster test data
Deployment Time: From test completion to production availability
- Target: < 5 minutes for application deployment
- Optimization: Better deployment strategies, health checks
Manual Steps: Any human intervention required
- Target: Zero manual steps in normal flow
- Optimization: Automation, better tooling
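Once you capture a timestamp at each stage, the breakdown is simple arithmetic. A sketch with hypothetical timestamps for a single change:

```python
from datetime import datetime

# Hypothetical stage timestamps for one change, commit to live.
t = {
    "committed":   datetime(2024, 5, 1, 9, 0),
    "pr_approved": datetime(2024, 5, 1, 12, 30),
    "build_done":  datetime(2024, 5, 1, 12, 42),
    "tests_done":  datetime(2024, 5, 1, 12, 58),
    "live":        datetime(2024, 5, 1, 13, 5),
}

stages = [("review", "committed", "pr_approved"),
          ("build",  "pr_approved", "build_done"),
          ("test",   "build_done", "tests_done"),
          ("deploy", "tests_done", "live")]

breakdown = {name: t[end] - t[start] for name, start, end in stages}
slowest = max(breakdown, key=breakdown.get)

for name, duration in breakdown.items():
    print(f"{name:7s} {duration}")
print(f"biggest opportunity: {slowest}")
```

Here the 3.5-hour review dwarfs every automated stage combined, which is exactly the kind of signal this breakdown is meant to surface.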
#Lead Time Patterns by Team Size
Small teams (2-5 developers):
- Typical lead time: 2-5 days
- Main bottlenecks: Manual processes, limited reviewer availability
- Quick wins: Automation, clear review expectations
Medium teams (5-15 developers):
- Typical lead time: 1-2 weeks
- Main bottlenecks: Review queues, coordination overhead
- Focus areas: Review distribution, better branching strategy
Large teams (15+ developers):
- Typical lead time: 2-4 weeks
- Main bottlenecks: Approval processes, integration complexity
- Solutions: Microservices, automated quality gates, better tooling
#Common Anti-Patterns That Increase Lead Time
The "Safety Theater" Approval
Multiple approval steps that don't actually improve quality but add days to lead time.
The "Big PR" Problem
Waiting until features are "complete" before creating PRs, resulting in massive, hard-to-review changes.
The "Test Environment Queue"
Limited test environments that create bottlenecks as teams wait for their turn to test.
The "Release Train" Mentality
Fixed release schedules that force teams to wait for the next "train" instead of deploying when ready.
The "Perfect Staging" Fallacy
Spending weeks trying to make staging environments perfectly match production instead of deploying to production with proper safeguards.
#Your Lead Time Reduction Action Plan
#Week 1: Measure Current State
- Track lead time for 20 recent changes
- Identify the longest delays in your pipeline
- Map out every step from commit to production
- Survey developers about their biggest frustrations
#Week 2: Quick Wins
- Set PR review SLAs and make them visible
- Automate the most time-consuming manual steps
- Remove the most obviously unnecessary approval steps
- Speed up your slowest tests
#Week 3: Process Changes
- Implement trunk-based development
- Set up basic continuous deployment
- Create automated quality checks to replace manual reviews
- Start measuring lead time components
#Week 4: Cultural Changes
- Celebrate fast lead times, not just completed features
- Make lead time visible to the team
- Do a retrospective on what's working and what isn't
- Plan the next round of improvements
#Conclusion
Lead time for changes is often the most actionable DORA metric. Unlike deployment frequency, which requires cultural change, or failure rates, which require better engineering practices, lead time can often be improved quickly by removing obvious bottlenecks.
The key insight is that lead time is usually not limited by how fast developers can write code—it's limited by everything that happens after the code is written. Focus on optimizing your pipeline, not your developers.
Every day you reduce lead time is a day your users get value faster, your developers get feedback sooner, and your business moves faster than the competition.
Start measuring your lead time today. You might be surprised by what you find—and excited by how much you can improve it.
Ready to track your team's lead time automatically? Coderbuds' DORA Metrics Dashboard provides real-time lead time measurement and identifies your biggest pipeline bottlenecks.
Next in this series: Change Failure Rate: Balancing Speed and Quality - Learn how to ship faster without breaking production.