To conduct diagnostic testing, the results of the diagnostic test of interest must be compared with the results of an existing "gold standard" method of diagnosis in a defined population. The results of both the diagnostic test and the "gold standard" must be quantified as a dichotomous categorical variable (positive or negative, "+" or "-").
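As a sketch of this setup, the paired dichotomous results can be cross-tabulated into the four cells of a 2x2 contingency table (the data below are hypothetical, for illustration only):

```python
# Tally the four cells of the 2x2 contingency table from paired
# dichotomous results (hypothetical data for illustration).
test = ["+", "+", "-", "+", "-", "-", "+", "-"]  # diagnostic test of interest
gold = ["+", "-", "-", "+", "+", "-", "+", "-"]  # "gold standard" diagnosis

tp = sum(t == "+" and g == "+" for t, g in zip(test, gold))  # true positives
fp = sum(t == "+" and g == "-" for t, g in zip(test, gold))  # false positives
fn = sum(t == "-" and g == "+" for t, g in zip(test, gold))  # false negatives
tn = sum(t == "-" and g == "-" for t, g in zip(test, gold))  # true negatives

print(tp, fp, fn, tn)  # prints: 3 1 1 3
```

All of the measures discussed in this section are computed from these four counts.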
Sensitivity is the ability of a diagnostic test to detect disease. It is the percentage of people with the disease (positive or "+" by the "gold standard") who also test positive with the diagnostic test. A diagnostic test with high sensitivity is good at picking up cases of a given disease state; because such a test rarely misses disease, a negative result can be used to "rule out" the disease.
Specificity is the ability of a diagnostic test to identify the healthy. It is the percentage of people without the disease (negative or "-" by the "gold standard") who also test negative with the diagnostic test. A diagnostic test with high specificity is good at identifying people who are free of disease and do not require further workup or treatment; because such a test rarely mislabels the healthy, a positive result can be used to "rule in" the disease.
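Under these definitions, sensitivity and specificity follow directly from the four cells of the 2x2 table. A minimal sketch, with hypothetical counts:

```python
def sensitivity(tp, fn):
    # Proportion of diseased people (gold standard "+") who test "+".
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of healthy people (gold standard "-") who test "-".
    return tn / (tn + fp)

# Hypothetical 2x2 table: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives.
print(sensitivity(90, 10))  # prints: 0.9
print(specificity(80, 20))  # prints: 0.8
```

Note that sensitivity is computed only among the diseased and specificity only among the healthy, so neither depends on how common the disease is in the population.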
It is optimal to have a diagnostic test that can both detect disease (sensitivity) and identify the healthy (specificity). In practice, the two trade off against each other: for a test with an adjustable cutoff, raising sensitivity lowers specificity, and raising specificity lowers sensitivity. A well-accepted criterion for a balanced diagnostic test is 80% for both sensitivity and specificity. However, depending on the clinical context, a test favoring either higher sensitivity or higher specificity may be warranted.
If the diagnostic test results are measured along a numerical continuum, a receiver operating characteristic (ROC) curve can be plotted to identify the cutoff value that best balances sensitivity and specificity. ROC curves can also be used to compare the diagnostic efficacy of several tests concurrently by comparing their areas under the curve (AUC).
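A minimal sketch of this procedure, using hypothetical scores: each candidate cutoff yields one (1 - specificity, sensitivity) point on the ROC curve; Youden's J statistic (sensitivity + specificity - 1) is one standard criterion for the cutoff that maximizes both, and the trapezoidal rule approximates the AUC.

```python
# Hypothetical continuous test scores and gold-standard disease status.
scores  = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9]
disease = [0,   0,   0,    1,   0,    1,   1,   1,   1]

points = []
for cutoff in sorted(set(scores)):
    tp = sum(s >= cutoff and d for s, d in zip(scores, disease))
    fn = sum(s < cutoff and d for s, d in zip(scores, disease))
    tn = sum(s < cutoff and not d for s, d in zip(scores, disease))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, disease))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    points.append((1 - spec, sens, cutoff))  # one (FPR, TPR) point per cutoff

# Youden's J statistic (sens + spec - 1 = TPR - FPR) picks the cutoff
# that best balances sensitivity and specificity.
best = max(points, key=lambda p: p[1] - p[0])

# Trapezoidal approximation of the area under the ROC curve.
roc = [(0.0, 0.0)] + sorted((fpr, tpr) for fpr, tpr, _ in points) + [(1.0, 1.0)]
auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(roc, roc[1:]))

print("best cutoff:", best[2])     # prints: best cutoff: 0.6
print("AUC:", round(auc, 2))       # prints: AUC: 0.95
```

An AUC of 0.5 corresponds to a test no better than chance and 1.0 to a perfect test, which is what makes AUC a convenient single number for comparing several tests at once.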