Prevalence is the silent “third variable” in every test question: it doesn’t change the test itself (sensitivity/specificity), but it can completely flip what a positive or negative result means for your patient. If you’ve ever missed a PPV/NPV question because the stem felt “too clinical,” this post is your reset.
Tag: Biostatistics > Study Design & Probability
The clinical vignette (Q-bank style)
A hospital introduces a rapid PCR screening test for Disease X.
- Sensitivity: 90%
- Specificity: 90%
You are told the test characteristics are stable across populations.
Two groups are screened:
- Group A (general population): prevalence = 1%
- Group B (high-risk clinic): prevalence = 20%
Question: Compared with Group A, which of the following is true in Group B?
A. Sensitivity increases
B. Specificity decreases
C. Positive predictive value increases
D. Negative predictive value increases
E. False-positive rate decreases
Step-by-step: Why the correct answer is C. PPV increases
Key principle
- Sensitivity and specificity do not depend on prevalence.
- PPV and NPV do depend on prevalence.
The intuition
When prevalence rises, a random positive test is more likely to be a true positive (because there are simply more true cases floating around). That pushes PPV up.
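This intuition is just Bayes' theorem in disguise. With prevalence p:

PPV = (Se × p) / [Se × p + (1 − Sp) × (1 − p)]

As p rises, the true-positive term in the numerator grows while the false-positive term (1 − Sp)(1 − p) shrinks, so PPV must climb. Plugging in Se = Sp = 0.90 and p = 0.20 gives 0.18 / 0.26 ≈ 69.2%, matching the 2×2 table below.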
Prove it quickly with a 2×2 table (high-yield move)
Assume 10,000 people in each group.
Group A: prevalence 1%
- Diseased: 100
- Not diseased: 9,900
Using Se = 90%, Sp = 90%:
| Group A | Disease + | Disease − | Total |
|---|---|---|---|
| Test + | TP = 90 | FP = 990 | 1080 |
| Test − | FN = 10 | TN = 8910 | 8920 |
| Total | 100 | 9900 | 10000 |
- PPV = TP / (TP + FP) = 90 / 1080 ≈ 8.3%
- NPV = TN / (TN + FN) = 8910 / 8920 ≈ 99.9%
Group B: prevalence 20%
- Diseased: 2,000
- Not diseased: 8,000
| Group B | Disease + | Disease − | Total |
|---|---|---|---|
| Test + | TP = 1800 | FP = 800 | 2600 |
| Test − | FN = 200 | TN = 7200 | 7400 |
| Total | 2000 | 8000 | 10000 |
- PPV = TP / (TP + FP) = 1800 / 2600 ≈ 69.2%
- NPV = TN / (TN + FN) = 7200 / 7400 ≈ 97.3%
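The 2×2 arithmetic above is mechanical enough to script. Here is a minimal sketch (the function name `predictive_values` is my own, not from any library) that rebuilds both groups' tables from sensitivity, specificity, and prevalence:

```python
def predictive_values(se, sp, prevalence, n=10_000):
    """Build a 2x2 table from test characteristics and prevalence,
    then return (PPV, NPV)."""
    diseased = n * prevalence       # true cases in the cohort
    healthy = n - diseased          # non-diseased pool
    tp = se * diseased              # sensitivity acts on the diseased
    fn = diseased - tp
    tn = sp * healthy               # specificity acts on the healthy
    fp = healthy - tn
    ppv = tp / (tp + fp)            # P(disease | positive test)
    npv = tn / (tn + fn)            # P(no disease | negative test)
    return ppv, npv

for label, prev in [("Group A", 0.01), ("Group B", 0.20)]:
    ppv, npv = predictive_values(0.90, 0.90, prev)
    print(f"{label}: PPV = {ppv:.1%}, NPV = {npv:.1%}")
```

Running it reproduces the table values: Group A gives PPV ≈ 8.3%, NPV ≈ 99.9%; Group B gives PPV ≈ 69.2%, NPV ≈ 97.3%.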
What changed?
- PPV skyrocketed (8.3% → 69.2%) with higher prevalence.
- NPV fell (99.9% → 97.3%) with higher prevalence.
So the correct answer is C. Positive predictive value increases.
Why every other answer choice is wrong (systematic distractor breakdown)
A. Sensitivity increases — Wrong
- Sensitivity is TP / (TP + FN).
- It’s a property of the test among people who have the disease. Changing how common the disease is in the population doesn’t change test performance within diseased individuals.
- In both groups: sensitivity stays 90%.
USMLE trap: Stems say “high-risk clinic” and students assume the test “works better.” That’s a prevalence change, not a test-tech change.
B. Specificity decreases — Wrong
- Specificity is TN / (TN + FP).
- It’s measured among people without the disease; prevalence doesn’t alter it.
- In both groups: specificity stays 90%.
High-yield: If a question implies Sp changed, they must be changing the test threshold/technology or introducing bias (e.g., verification bias), not merely changing prevalence.
D. Negative predictive value increases — Wrong
- NPV is TN / (TN + FN).
- As prevalence increases, negatives are more likely to be false negatives, so NPV tends to decrease.
Rule to memorize:
- Prevalence ↑ → PPV ↑, NPV ↓
- Prevalence ↓ → PPV ↓, NPV ↑
E. False-positive rate decreases — Wrong
- False-positive rate (FPR) is FP / (FP + TN) = 1 − specificity.
- Since specificity is unchanged, FPR is unchanged.
Common confusion: Students mix up:
- FPR (a test characteristic) vs
- the number of false positives (which can change with prevalence, because the size of the non-diseased pool changes)
In our example:
- Group A FP = 990
- Group B FP = 800
The count of FP changed, but the rate among non-diseased stayed:
FPR = 1 − Sp = 1 − 0.90 = 0.10 (10% in both groups)
The two “universals” you should always check in stems
1) Are they changing prevalence or threshold?
- Prevalence change → affects PPV/NPV only.
- Threshold change (moving cutoff) → trades off sensitivity vs specificity.
Quick threshold tradeoff:
- Lower cutoff (call more people “positive”) → Se ↑, Sp ↓
- Higher cutoff (stricter positive definition) → Se ↓, Sp ↑
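The cutoff tradeoff is easy to see with a toy example. This sketch uses made-up score distributions (the numbers and the helper `se_sp_at_cutoff` are purely illustrative) in which diseased patients tend to score higher than healthy ones but the distributions overlap:

```python
# Hypothetical continuous test scores: diseased patients tend to score
# higher than healthy ones, but the distributions overlap.
diseased_scores = [4, 5, 6, 6, 7, 8, 8, 9]
healthy_scores = [1, 2, 2, 3, 3, 4, 5, 6]

def se_sp_at_cutoff(cutoff):
    """Call a score 'positive' when it is >= cutoff; return (Se, Sp)."""
    se = sum(s >= cutoff for s in diseased_scores) / len(diseased_scores)
    sp = sum(s < cutoff for s in healthy_scores) / len(healthy_scores)
    return se, sp

for cutoff in (3, 5, 7):
    se, sp = se_sp_at_cutoff(cutoff)
    print(f"cutoff {cutoff}: Se = {se:.0%}, Sp = {sp:.0%}")
```

Lowering the cutoff catches every diseased patient (Se → 100%) but mislabels more healthy ones (Sp falls); raising it does the reverse. Prevalence never enters this calculation, which is exactly why it cannot change Se or Sp.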
2) Are they asking for probability “given disease” vs “given test result”?
- “Given disease status” → sensitivity/specificity
- “Given test result” → PPV/NPV
High-yield summary table (memorize this)
| Quantity | Definition | Depends on prevalence? | Goes up when prevalence rises? |
|---|---|---|---|
| Sensitivity | TP / (TP + FN) | No | No (unchanged) |
| Specificity | TN / (TN + FP) | No | No (unchanged) |
| PPV | TP / (TP + FP) | Yes | Yes |
| NPV | TN / (TN + FN) | Yes | No (usually decreases) |
Test-day heuristics (fast and reliable)
- If prevalence is low, most positives are false positives → PPV is low.
- If prevalence is high, a larger share of negatives are false negatives → NPV is lower.
- Sensitivity/specificity are “inside the lab”; PPV/NPV are “at the bedside.”
One-liner takeaway
Prevalence doesn’t change the test; it changes what the test result means. In higher-prevalence settings, a positive result is more believable → PPV increases.