Your security dashboard shows thousands of vulnerabilities patched, millions of logs collected, and 99.9% uptime. Impressive numbers. But do they tell you if you're more secure today than yesterday? Can you justify your security budget with these metrics? Would they help you explain security posture to your CEO or board?
Probably not.
After building security programs for dozens of organizations, we've learned that most security metrics fall into one of two categories: vanity metrics that look good but don't drive decisions, or operational metrics that matter for day-to-day work but don't communicate business value.
This guide helps you identify and track the metrics that actually matter.
The problem with traditional security metrics
Common security metrics often measure activity, not outcomes:
Vanity metrics:
- Logs collected per day
- Number of security tools deployed
- Vulnerabilities scanned
- Security training completion rate
- Incidents responded to
These metrics show that you're doing something, but not whether you're doing the right things or whether those things actually work.
The test: Can an attacker succeed despite good performance on these metrics? If yes, they're vanity metrics.
Example: You might have 100% security training completion, but if users still fall for phishing attacks, the metric didn't measure what matters.
Framework: Metrics that matter
We organize security metrics into three tiers:
Tier 1: Executive & Board Metrics (Quarterly)
These communicate business risk and security program effectiveness to non-technical stakeholders.
1. Security Incident Financial Impact
What it measures: The actual cost of security incidents to the business.
Why it matters: Executives understand dollars. This metric directly connects security program effectiveness to business outcomes.
How to calculate:
Total Incident Cost =
Direct costs (forensics, legal, notification, credit monitoring)
+ Operational disruption (downtime, lost productivity)
+ Reputational damage (customer churn, lost deals)
+ Regulatory fines and penalties
Example:
- Q1 2024: $45,000 (phishing incident, 3 compromised accounts, 8 hours response)
- Q2 2024: $12,000 (malware detected and blocked before spread)
- Q3 2024: $8,000 (failed intrusion attempt, no data loss)
- Q4 2024: $5,000 (automated response to credential stuffing)
Trend: 89% reduction in quarterly incident costs from Q1 to Q4.
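If you track these cost components per incident, the quarterly trend falls out directly. Here's a minimal Python sketch; the cost categories mirror the formula above, and the figures are the illustrative ones from the example, not real data.

```python
from dataclasses import dataclass

@dataclass
class IncidentCost:
    """Cost components for a single security incident, in dollars."""
    direct: float        # forensics, legal, notification, credit monitoring
    disruption: float    # downtime, lost productivity
    reputational: float  # customer churn, lost deals
    regulatory: float    # fines and penalties

    @property
    def total(self) -> float:
        return self.direct + self.disruption + self.reputational + self.regulatory

def reduction(quarterly_totals: list[float]) -> float:
    """Percentage reduction from the first to the last quarter."""
    first, last = quarterly_totals[0], quarterly_totals[-1]
    return (first - last) / first * 100

# Illustrative quarterly totals matching the example above
quarters = [45_000, 12_000, 8_000, 5_000]
print(f"Reduction Q1 -> Q4: {reduction(quarters):.0f}%")  # ~89%
```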
2. Mean Time to Detect (MTTD) & Mean Time to Respond (MTTR)
What it measures: How quickly you find and stop attacks.
Why it matters: IBM's Cost of a Data Breach Report found that breaches contained in under 200 days cost an average of $3.93M, versus $4.76M for those that took longer. Speed matters.
How to calculate:
MTTD = Time from initial compromise to detection
MTTR = Time from detection to containment
Both measured in hours/days, segmented by severity tier.
Example dashboard:
Critical Severity Incidents:
- MTTD: 2.3 hours (down from 14 hours last quarter)
- MTTR: 45 minutes (down from 4 hours last quarter)
High Severity Incidents:
- MTTD: 8 hours (down from 2 days last quarter)
- MTTR: 3 hours (down from 12 hours last quarter)
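A sketch of how to derive these numbers from incident records, assuming each record carries compromise, detection, and containment timestamps (the field names and dates are illustrative):

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# One record per incident; add more entries as incidents close out.
incidents = [
    {"severity": "critical",
     "compromised_at": datetime(2024, 10, 3, 9, 0),
     "detected_at":    datetime(2024, 10, 3, 11, 20),
     "contained_at":   datetime(2024, 10, 3, 12, 5)},
    # ... more incidents
]

def hours(delta):
    return delta.total_seconds() / 3600

by_severity = defaultdict(lambda: {"mttd": [], "mttr": []})
for inc in incidents:
    times = by_severity[inc["severity"]]
    times["mttd"].append(hours(inc["detected_at"] - inc["compromised_at"]))
    times["mttr"].append(hours(inc["contained_at"] - inc["detected_at"]))

for severity, times in by_severity.items():
    print(f"{severity}: MTTD {mean(times['mttd']):.1f}h, MTTR {mean(times['mttr']):.1f}h")
```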
3. Security Control Effectiveness
What it measures: Are your security controls actually stopping threats?
Why it matters: Shows ROI on security investments. A control that never detects threats might be misconfigured, have coverage gaps, or provide false comfort.
How to calculate:
Track "prevented attacks" vs "successful attacks" by control layer:
Prevention Layer:
- Email security: Blocked 12,450 malicious emails (98.5% of inbound threats)
- EDR: Blocked 34 malware execution attempts (100% prevention rate)
- WAF: Blocked 1,200 application-layer attacks (99.2% prevention rate)
Detection Layer:
- SIEM: Detected 8 anomalies that bypassed prevention (100% detected within SLA)
- Threat hunting: Discovered 2 incidents missed by automated detection
- User reports: 15 phishing attempts reported by users (good security awareness)
Response Layer:
- Mean time to contain: 45 minutes for critical incidents
- Automation rate: 70% of incidents contained automatically
- Escalation accuracy: 92% of escalated incidents were true positives
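Prevention rate per layer is simply blocked attempts divided by total attempts seen by that control. A small sketch, using illustrative counts consistent with the dashboard above:

```python
# Blocked vs. missed attempt counts per control; numbers are illustrative.
controls = {
    "email_security": {"blocked": 12_450, "missed": 190},
    "edr":            {"blocked": 34,     "missed": 0},
    "waf":            {"blocked": 1_200,  "missed": 10},
}

for name, c in controls.items():
    attempts = c["blocked"] + c["missed"]
    rate = c["blocked"] / attempts * 100 if attempts else 0.0
    print(f"{name}: {rate:.1f}% prevention rate ({c['blocked']}/{attempts})")
```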
4. Cyber Insurance Premium & Coverage
What it measures: Third-party assessment of your security posture.
Why it matters: Insurance underwriters assess your actual risk. If premiums decrease or coverage increases, your security is improving in ways that matter to risk professionals.
Track:
- Annual premium costs
- Coverage limits and exclusions
- Deductibles
- Changes quarter-over-quarter
Example: "Our cyber insurance premium decreased 18% YoY while coverage limits increased 25%, reflecting improved security posture recognized by underwriters."
Tier 2: Security Leadership Metrics (Monthly)
These help security leaders manage the program and make resourcing decisions.
1. Detection Coverage (MITRE ATT&CK)
What it measures: Percentage of relevant attack techniques you can detect.
Why it matters: Shows coverage gaps and helps prioritize detection engineering efforts.
How to calculate:
Map your detections to MITRE ATT&CK framework:
Coverage by Tactic:
- Initial Access: 85% (11/13 techniques)
- Execution: 75% (9/12 techniques)
- Persistence: 92% (12/13 techniques)
- Privilege Escalation: 70% (7/10 techniques) ← Gap
- Defense Evasion: 65% (13/20 techniques) ← Gap
- Credential Access: 88% (7/8 techniques)
- Discovery: 80% (8/10 techniques)
- Lateral Movement: 90% (9/10 techniques)
- Collection: 75% (6/8 techniques)
- Exfiltration: 91% (10/11 techniques)
- Command & Control: 82% (9/11 techniques)
Overall Coverage: 80% (101/126 relevant techniques)
Target: 90% by Q2 2025
Focus on techniques relevant to your environment - don't worry about detecting attacks against systems you don't use.
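If you store per-tactic counts of covered versus relevant techniques, the per-tactic and overall figures are a straightforward aggregation. A sketch using the example counts above (truncated for brevity):

```python
# (covered, relevant) technique counts per tactic; values mirror the example above.
coverage = {
    "Initial Access":       (11, 13),
    "Execution":            (9, 12),
    "Persistence":          (12, 13),
    "Privilege Escalation": (7, 10),
    "Defense Evasion":      (13, 20),
    # ... remaining tactics
}

for tactic, (covered, relevant) in coverage.items():
    print(f"{tactic}: {covered / relevant:.0%} ({covered}/{relevant})")

total_covered = sum(c for c, _ in coverage.values())
total_relevant = sum(r for _, r in coverage.values())
print(f"Overall: {total_covered / total_relevant:.0%} ({total_covered}/{total_relevant})")
```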
2. Mean Time to Patch (MTTP) Critical Vulnerabilities
What it measures: How quickly you patch actively exploited or critical vulnerabilities.
Why it matters: Unpatched critical vulnerabilities are attacker entry points. Speed of patching reduces exposure window.
How to calculate:
MTTP = Average days from CVE publication (or vulnerability scan finding) to patch deployment
Segment by severity and system type:
Critical Severity (CVSS 9.0+, known exploitation):
- Production servers: 3 days (SLA: 7 days)
- Workstations: 5 days (SLA: 14 days)
- Legacy systems: 15 days (SLA: 30 days, requires change control)
High Severity (CVSS 7.0-8.9):
- Production: 14 days (SLA: 30 days)
- Non-production: 30 days (SLA: 45 days)
Track exceptions: How many systems missed SLA and why?
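A sketch of the MTTP and SLA-exception calculation, assuming each finding records its publication date, patch date, severity, system type, and SLA (field names and dates are illustrative):

```python
from datetime import date
from statistics import mean

# Illustrative vulnerability records exported from your scanner or ticketing system.
vulns = [
    {"severity": "critical", "system": "production",
     "published": date(2024, 11, 1), "patched": date(2024, 11, 4), "sla_days": 7},
    {"severity": "critical", "system": "legacy",
     "published": date(2024, 11, 1), "patched": date(2024, 12, 2), "sla_days": 30},
    # ... more findings
]

def days_to_patch(v):
    return (v["patched"] - v["published"]).days

groups = {}
for v in vulns:
    groups.setdefault((v["severity"], v["system"]), []).append(days_to_patch(v))

for (severity, system), times in groups.items():
    print(f"{severity}/{system}: MTTP {mean(times):.1f} days")

exceptions = [v for v in vulns if days_to_patch(v) > v["sla_days"]]
print(f"SLA exceptions: {len(exceptions)}")
```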
3. Security Debt
What it measures: Accumulated technical debt that increases security risk.
Why it matters: Like financial debt, security debt accumulates interest. Track it to justify remediation projects.
Examples:
Security Debt Inventory:
- Legacy applications without MFA: 12 apps (down from 40 last quarter)
- Systems running unsupported OS: 8 servers (down from 22)
- Unencrypted data stores: 2 databases (both scheduled for migration)
- Shadow IT applications: 15 discovered (up from 8, needs investigation)
- Overprivileged service accounts: 45 accounts (down from 120)
- Public-facing assets without WAF: 3 applications (2 being migrated to cloud)
Total Security Debt Score: 85 (down from 190 last quarter)
Target: <50 by end of year
Assign point values based on risk and track reduction over time.
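One way to do this is a simple weighted sum over the inventory. The weights below are illustrative assumptions; calibrate them to your own risk model, so the resulting score won't match the example figures above exactly.

```python
# Point value per unit of each debt type; weights are illustrative assumptions.
DEBT_WEIGHTS = {
    "legacy_app_without_mfa": 3,
    "unsupported_os": 4,
    "unencrypted_data_store": 5,
    "shadow_it_app": 1,
    "overprivileged_service_account": 0.5,
    "public_asset_without_waf": 4,
}

def debt_score(inventory: dict[str, int]) -> float:
    """Weighted sum of outstanding debt items."""
    return sum(DEBT_WEIGHTS[item] * count for item, count in inventory.items())

current = {
    "legacy_app_without_mfa": 12,
    "unsupported_os": 8,
    "unencrypted_data_store": 2,
    "shadow_it_app": 15,
    "overprivileged_service_account": 45,
    "public_asset_without_waf": 3,
}
print(f"Security debt score: {debt_score(current):.0f}")
```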
4. SOC Analyst Efficiency
What it measures: How effectively your SOC operates.
Why it matters: Burned-out analysts miss threats. Measure operational health to prevent alert fatigue and turnover.
Track:
Alert Quality:
- Daily alert volume: 180 (down from 2,000)
- False positive rate: 12% (down from 88%)
- Analyst-rated alert quality: 4.2/5 (up from 2.1/5)
Analyst Time Allocation:
- Triage and investigation: 35% (down from 75%)
- Proactive threat hunting: 30% (up from 5%)
- Detection engineering: 20% (up from 10%)
- Training and development: 15% (up from 10%)
Team Health:
- Average backlog age: 2.4 hours (down from 36 hours)
- Burnout score (survey): 3.1/10 (down from 8.2/10)
- Turnover rate: 0% last 12 months (was 40% previously)
Happy, effective analysts = better security outcomes.
Tier 3: Operational Metrics (Weekly/Daily)
These drive day-to-day security operations.
1. Phishing Test Results
What it measures: User susceptibility to social engineering.
Why it matters: Users are a critical control layer. Track effectiveness of security awareness.
Quarterly Phishing Simulation Results:
- Emails sent: 500
- Click rate: 8% (down from 22% last quarter)
- Credential submission: 2% (down from 12%)
- User reports: 45 users reported email as suspicious (up from 12)
Improvement: 64% reduction in successful phishing
Target: <5% click rate, >30% user reporting rate
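The arithmetic is simple, but keeping it in code makes quarter-over-quarter comparisons repeatable. A sketch with illustrative counts that reproduce the example rates:

```python
def phishing_metrics(sent: int, clicks: int, submissions: int, reports: int) -> dict:
    """Rates from one simulation round, as percentages of emails sent."""
    return {
        "click_rate": clicks / sent * 100,
        "submission_rate": submissions / sent * 100,
        "report_rate": reports / sent * 100,
    }

# Illustrative counts for two consecutive quarters
previous = phishing_metrics(sent=500, clicks=110, submissions=60, reports=12)
current = phishing_metrics(sent=500, clicks=40, submissions=10, reports=45)

reduction = (previous["click_rate"] - current["click_rate"]) / previous["click_rate"] * 100
print(f"Click rate: {current['click_rate']:.0f}% (reduction: {reduction:.0f}%)")
```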
2. Vulnerability Exposure Window
What it measures: Days between vulnerability discovery and remediation.
Why it matters: Smaller window = less time for attackers to exploit.
Track:
- Mean exposure window (days)
- Median exposure window (days)
- 90th percentile exposure window (days)
- Number of vulnerabilities exceeding SLA
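All four figures can be computed from a list of exposure durations for closed findings. A sketch using Python's statistics module and illustrative data:

```python
from statistics import mean, median, quantiles

# Days from discovery to remediation for closed findings; values are illustrative.
exposure_days = [2, 3, 3, 5, 7, 8, 10, 14, 21, 30, 45]
SLA_DAYS = 30

print(f"Mean:   {mean(exposure_days):.1f} days")
print(f"Median: {median(exposure_days):.1f} days")
# quantiles(n=10) returns the nine deciles; index 8 is the 90th percentile
print(f"P90:    {quantiles(exposure_days, n=10)[8]:.1f} days")
print(f"Over SLA: {sum(1 for d in exposure_days if d > SLA_DAYS)}")
```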
3. Access Review Completion
What it measures: Timeliness of periodic access reviews and privilege recertification.
Why it matters: Privilege creep leads to excessive access. Regular reviews limit blast radius.
Track:
- % of access reviews completed on time
- % of reviews resulting in access revocation
- Average time to remediate revoked access
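A sketch of these three figures from a list of review records; the field names are assumptions about how you might export review data:

```python
from datetime import date

# Illustrative access-review records
reviews = [
    {"due": date(2024, 12, 1), "completed": date(2024, 11, 28), "revoked": 3,
     "revocation_remediation_days": [1, 2, 2]},
    {"due": date(2024, 12, 1), "completed": date(2024, 12, 5), "revoked": 0,
     "revocation_remediation_days": []},
    # ... more reviews
]

on_time = sum(1 for r in reviews if r["completed"] <= r["due"])
with_revocation = sum(1 for r in reviews if r["revoked"] > 0)
remediation_days = [d for r in reviews for d in r["revocation_remediation_days"]]

print(f"Completed on time: {on_time / len(reviews):.0%}")
print(f"Reviews resulting in revocation: {with_revocation / len(reviews):.0%}")
if remediation_days:
    avg_days = sum(remediation_days) / len(remediation_days)
    print(f"Avg days to remediate revoked access: {avg_days:.1f}")
```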
4. Security Tool Coverage
What it measures: Percentage of assets with security controls deployed.
Track:
Asset Coverage:
- EDR deployed: 98% of endpoints (target: 99%)
- Patch management: 100% of servers, 95% of workstations
- Log collection: 100% of production systems
- Vulnerability scanning: 100% of external assets, 95% of internal
- MFA enabled: 100% of privileged users, 95% of standard users
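Coverage is deployed assets divided by total assets, compared against a per-control target. A sketch with illustrative asset counts:

```python
# Deployed vs. total asset counts per control; numbers and targets are illustrative.
coverage = {
    "edr_endpoints":        {"deployed": 1_960, "total": 2_000, "target": 0.99},
    "mfa_privileged_users": {"deployed": 48,    "total": 48,    "target": 1.00},
    "log_collection_prod":  {"deployed": 310,   "total": 310,   "target": 1.00},
}

for control, c in coverage.items():
    actual = c["deployed"] / c["total"]
    status = "OK" if actual >= c["target"] else "GAP"
    print(f"{control}: {actual:.1%} (target {c['target']:.0%}) [{status}]")
```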
Metrics to avoid (or reframe)
"Number of vulnerabilities found"
- Why it's misleading: More vulnerabilities might mean better scanning, not worse security
- Reframe as: Mean time to remediate critical vulnerabilities
"Logs collected per day"
- Why it's misleading: Volume doesn't equal visibility
- Reframe as: % of critical systems with adequate logging + MTTD
"Security tools deployed"
- Why it's misleading: More tools ≠ more secure
- Reframe as: Control effectiveness rate by security layer
"Security training completion rate"
- Why it's misleading: Completion ≠ behavior change
- Reframe as: Phishing simulation click rate + user-reported suspicious emails
"Number of incidents responded to"
- Why it's misleading: More incidents might mean better detection, not more attacks
- Reframe as: Incident financial impact + MTTD/MTTR trends
Building your security metrics program
1. Start small: Pick 3-5 metrics (one from each tier) and track consistently before expanding.
2. Automate collection: Manual metrics are never up to date. Instrument your security tools to feed dashboards automatically.
3. Establish baselines: Track for 2-3 months before setting targets. Understand your starting point.
4. Set realistic targets: Aim for continuous improvement, not perfection. 10-20% improvement per quarter is excellent.
5. Review regularly:
- Weekly: Operational metrics with SOC team
- Monthly: Security leadership metrics with security managers
- Quarterly: Executive metrics with leadership and board
6. Tell the story: Metrics without context are just numbers. Explain what changed, why it matters, and what's next.
Example executive summary
Here's how to present security metrics to non-technical stakeholders:
Q4 2024 Security Posture Summary
Overall Trend: Significant improvement in threat detection and response capabilities
Key Achievements:
✓ Zero security incidents with data loss (down from 2 incidents in Q3)
✓ 89% reduction in incident financial impact ($5K vs $45K in Q1)
✓ Cyber insurance premium decreased 18% with 25% increase in coverage
✓ Mean time to detect critical threats: 2.3 hours (down from 14 hours)
Improvement Areas:
⚠ Detection coverage gaps in privilege escalation techniques (70% vs 90% target)
⚠ Legacy application modernization behind schedule (12 apps remaining)
Investments Delivering ROI:
• SOC automation: 70% of incidents now auto-contained, saving 40 analyst hours/week
• Zero Trust implementation: Eliminated lateral movement in penetration test
• Detection engineering: False positive rate dropped from 88% to 12%
Q1 2025 Priorities:
1. Expand detection coverage for privilege escalation (target: 90%)
2. Migrate 6 remaining legacy apps to modern authentication
3. Reduce mean time to patch critical vulnerabilities from 3 days to 24 hours
Conclusion
Effective security metrics share these characteristics:
- Outcome-focused: Measure results, not activities
- Actionable: Drive decisions and prioritization
- Contextual: Explained with business impact
- Trending: Track improvement over time
- Honest: Highlight gaps, not just successes
Start with a small set of high-value metrics, track consistently, and expand over time. The goal isn't perfect measurement - it's informed decision-making.
Need help building a metrics program for your security team? Contact us to discuss measurement frameworks tailored to your maturity level.