Spec Coverage¶
Spec Coverage analyzes the gap between your specification acceptance criteria and existing tests.
Overview¶
When you write specifications with acceptance criteria, you want to ensure those criteria are actually tested. Spec Coverage helps you:
- Identify which acceptance criteria have tests
- Find gaps where tests are missing
- Get suggestions for writing missing tests
- Track coverage over time
```mermaid
graph LR
    A[requirements.md] --> B[Extract Criteria]
    C[test_*.py] --> D[Extract Tests]
    B --> E[Match]
    D --> E
    E --> F[Coverage Report]
    F --> G[Suggestions]
```
Quick Start¶
```bash
# Check overall coverage
specmem cov

# Get detailed report for a feature
specmem cov report user-authentication

# Get test suggestions
specmem cov suggest user-authentication
```
Understanding Coverage¶
What Gets Analyzed¶
Acceptance Criteria are extracted from requirements.md files in your specs:
```markdown
### Requirement 1

**User Story:** As a user, I want to log in securely.

#### Acceptance Criteria

1. WHEN user provides valid credentials THEN system SHALL authenticate
2. WHEN user provides invalid credentials THEN system SHALL reject login
3. WHEN user fails login 5 times THEN system SHALL lock account
```
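As a rough sketch of the extraction step, numbered WHEN/THEN lines can be pulled out with a regular expression. The `extract_criteria` helper below is illustrative only, not SpecMem's actual parser:

```python
import re

# Illustrative only: pull numbered EARS-style criteria out of requirements.md text.
CRITERION_RE = re.compile(r"^\s*(\d+)\.\s+(WHEN\b.*\bSHALL\b.*)$", re.MULTILINE)

def extract_criteria(markdown: str) -> list[tuple[str, str]]:
    """Return (number, text) pairs for lines like '1. WHEN ... THEN ... SHALL ...'."""
    return [(num, text.strip()) for num, text in CRITERION_RE.findall(markdown)]

doc = """\
#### Acceptance Criteria

1. WHEN user provides valid credentials THEN system SHALL authenticate
2. WHEN user provides invalid credentials THEN system SHALL reject login
"""
print(extract_criteria(doc))
```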
Tests are scanned from your test files:
- Python: `test_*.py`, `*_test.py` (pytest)
- JavaScript/TypeScript: `*.test.js`, `*.spec.ts` (jest, vitest)
- E2E: `*.spec.ts` (playwright)
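Pattern-based discovery like this can be reproduced with the standard library. A minimal sketch using `fnmatch` and the patterns listed above (SpecMem's real scanner may use different discovery rules):

```python
from fnmatch import fnmatch
from pathlib import PurePath

# Patterns from the list above; illustrative, not SpecMem's internal configuration.
TEST_PATTERNS = ("test_*.py", "*_test.py", "*.test.js", "*.spec.ts")

def is_test_file(path: str) -> bool:
    """True when the file's basename matches a recognised test-file pattern."""
    name = PurePath(path).name
    return any(fnmatch(name, pattern) for pattern in TEST_PATTERNS)

print(is_test_file("tests/test_auth.py"))  # True
print(is_test_file("src/auth.py"))         # False
```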
Matching Strategy¶
Coverage uses two matching strategies:

1. **Explicit Links** (confidence: 1.0)
    - Matches `Validates: X.Y` references in test docstrings (see Linking Tests to Criteria below)
2. **Semantic Matching** (confidence: 0.0-1.0)
    - Compares criterion text with test names and docstrings
    - Uses text similarity algorithms

A criterion is "covered" when confidence ≥ 0.5.
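SpecMem's similarity algorithm isn't documented here; as a rough illustration of how a 0.5 threshold behaves, here is a minimal sketch using Python's standard `difflib` (the `confidence` helper is hypothetical, not part of SpecMem):

```python
from difflib import SequenceMatcher

THRESHOLD = 0.5  # a criterion counts as covered at confidence >= 0.5

def confidence(criterion: str, test_name: str) -> float:
    """Crude text similarity between a criterion and a test identifier."""
    words = test_name.replace("_", " ").removeprefix("test ")
    return SequenceMatcher(None, criterion.lower(), words.lower()).ratio()

score = confidence(
    "WHEN user fails login 5 times THEN system SHALL lock account",
    "test_user_fails_login_5_times",
)
print(score >= THRESHOLD)  # True
```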
CLI Commands¶
specmem cov¶
Shows overall coverage summary:
```
$ specmem cov

Spec Coverage Report
========================================
Overall: 374/463 criteria covered (80.8%)

Coverage by Feature
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┓
┃ Feature             ┃ Tested ┃ Total ┃ Coverage ┃ Gap      ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━┩
│ user-authentication │ 8      │ 12    │ 66.7%    │ 33.3% ⚠️  │
│ payment-processing  │ 15     │ 15    │ 100.0%   │ 0.0% ✅   │
└─────────────────────┴────────┴───────┴──────────┴──────────┘
```
specmem cov report¶
Shows detailed coverage for a feature:
```
$ specmem cov report user-authentication

user-authentication ⚠️
Coverage: 8/12 (66.7%)

✅ AC 1.1: WHEN user provides valid credentials... → tests/test_auth.py:45
✅ AC 1.2: WHEN user provides invalid credentials... → tests/test_auth.py:67
⚠️ AC 1.3: WHEN user fails login 5 times... → NO TEST FOUND
⚠️ AC 1.4: WHEN session inactive 30min... → NO TEST FOUND
```
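The per-feature figures are plain ratios; a one-liner reproducing the `Coverage: 8/12 (66.7%)` summary line:

```python
def coverage_line(covered: int, total: int) -> str:
    """Format a per-feature summary line like the one in the report."""
    pct = 100.0 * covered / total if total else 0.0
    return f"Coverage: {covered}/{total} ({pct:.1f}%)"

print(coverage_line(8, 12))  # Coverage: 8/12 (66.7%)
```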
specmem cov suggest¶
Get test suggestions for uncovered criteria:
```
$ specmem cov suggest user-authentication

Test Suggestions for: user-authentication
==================================================

1. AC 1.3:
   "WHEN user fails login 5 times THEN system SHALL lock account"

   Suggested test approach:
   - Test file: tests/test_user_authentication.py
   - Test name: test_user_fails_login_5_times
   - What to verify:
     • Verify: the system SHALL lock account

💡 Copy these suggestions to your agent to generate the actual tests.
```
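The suggested names appear to be derived from the criterion's WHEN clause; a hypothetical reconstruction of that step (not SpecMem's actual generator):

```python
import re

def suggest_test_name(criterion: str) -> str:
    """Derive a snake_case test name from a criterion's WHEN clause."""
    match = re.match(r"WHEN\s+(.*?)\s+THEN\b", criterion, re.IGNORECASE)
    condition = match.group(1) if match else criterion
    slug = re.sub(r"[^a-z0-9]+", "_", condition.lower()).strip("_")
    return f"test_{slug}"

print(suggest_test_name(
    "WHEN user fails login 5 times THEN system SHALL lock account"
))  # test_user_fails_login_5_times
```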
specmem cov badge¶
Generate a coverage badge for your README with `specmem cov badge`.
specmem cov export¶
Export coverage data:
```bash
# JSON format
specmem cov export --format json > coverage.json

# Markdown format
specmem cov export --format markdown > COVERAGE.md
```
Python API¶
```python
from specmem import SpecMemClient

client = SpecMemClient()

# Get overall coverage
result = client.get_coverage()
print(f"Coverage: {result.coverage_percentage:.1f}%")

# Get coverage for a specific feature
result = client.get_coverage("user-authentication")
for match in result.features[0].criteria:
    status = "✅" if match.is_covered else "⚠️"
    print(f"{status} {match.criterion.number}")

# Get test suggestions
suggestions = client.get_coverage_suggestions("user-authentication")
for s in suggestions:
    print(f"Missing: {s.criterion.text}")
    print(f"Suggested: {s.suggested_name}")

# Generate badge
badge = client.get_coverage_badge()
```
Linking Tests to Criteria¶
For best results, add explicit links in your tests:
Python (pytest)¶
```python
def test_account_lockout_after_failed_attempts():
    """Test that accounts are locked after 5 failed login attempts.

    Validates: 1.3
    """
    # Test implementation
```
JavaScript (jest/vitest)¶
Multiple Criteria¶
```python
def test_authentication_flow():
    """Test the complete authentication flow.

    Validates: 1.1, 1.2, 1.3
    """
    pass
```
CI Integration¶
Add coverage checks to your CI pipeline:
```yaml
# .github/workflows/coverage.yml
name: Spec Coverage

on: [push, pull_request]

jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install SpecMem
        run: pip install specmem
      - name: Check coverage
        run: specmem cov
      - name: Export report
        run: specmem cov export --format json > coverage.json
      - name: Upload report
        uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: coverage.json
```
Fail on Low Coverage¶
```bash
#!/bin/bash
# coverage-check.sh

COVERAGE=$(specmem cov --json | jq '.coverage_percentage')

if (( $(echo "$COVERAGE < 80" | bc -l) )); then
  echo "Coverage too low: $COVERAGE%"
  exit 1
fi

echo "Coverage OK: $COVERAGE%"
```
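The same gate can be written in Python against the exported JSON, assuming the export has a top-level `coverage_percentage` field as the `jq` query above implies:

```python
import json

def check_coverage(report_path: str, minimum: float = 80.0) -> bool:
    """Return True when spec coverage in the exported JSON meets the minimum."""
    with open(report_path) as f:
        report = json.load(f)
    pct = report["coverage_percentage"]
    print(f"Coverage {'OK' if pct >= minimum else 'too low'}: {pct:.1f}%")
    return pct >= minimum
```

In CI, `sys.exit(0 if check_coverage("coverage.json") else 1)` reproduces the shell script's pass/fail behaviour.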
Best Practices¶
- **Link Tests Explicitly:** Add `Validates: X.Y` comments to your tests for accurate matching.
- **Run Coverage Regularly:** Check coverage after adding new acceptance criteria.
- **Use Suggestions:** Copy suggestions to your AI agent to generate missing tests.
- **Track Over Time:** Export coverage reports to track improvement.
See Also¶
- CLI: specmem cov - Full CLI reference
- API: Coverage - Python API reference
- SpecValidator - Validate spec quality