The sensitivity of a test, also known as the true positive rate (TPR), is the percentage of actual positives that return a positive result with the test in question. For instance, a test that correctly flags nearly all of the diseased samples in a clinical trial is highly sensitive. The higher the sensitivity, the fewer false negatives the test produces, and the less likely a genuine case is to be missed.
In clinical trials, the observed sensitivity depends on how accurate the screening test is and how many false negatives it produces. A high-sensitivity test is one in which true positives far outnumber false negatives. A low-sensitivity test misses many genuine cases, returning false negatives instead.
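The two definitions above can be sketched in a few lines of code. This is a minimal illustration with invented confusion-matrix counts, not data from any real trial:

```python
# Sensitivity and specificity from confusion-matrix counts.
# sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of diseased subjects the test flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of healthy subjects the test clears."""
    return tn / (tn + fp)

# Example counts (made up for illustration):
tp, fn, tn, fp = 90, 10, 80, 20
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.80
```

With these numbers the test detects 90 of 100 true cases (sensitivity 0.90) while the 10 missed cases are its false negatives.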
To sum up, then, sensitivity and overall accuracy depend on how many false negative and false positive results arise when interpreting a clinical trial. It is also worth remembering that disease-specific tests tend to have higher sensitivities than generic screening tests. Lastly, we can look at the relationship between sensitivity and the case-detection curves that follow an outbreak of a disease: with a high-sensitivity test (Curve One), detected cases rise sharply as soon as the outbreak begins, whereas with a low-sensitivity test (Curve Two), detected cases climb only gradually throughout the epidemic because many early cases are missed.
However, it is important to note that the curves do not perfectly follow a normal distribution. For instance, detected-case counts for one screening programme (say, a cancer screening test) may respond strongly to changes in test sensitivity, while another (say, routine physical examination) may barely respond at all. These variations make it hard to generalize about screening tests and their sensitivity to cancer, even though in aggregate the counts still follow a distribution that resembles a bell-shaped curve. There are also situations in which sensitivity can be increased deliberately: lowering a test's decision threshold catches more true positives, but at the cost of more false positives, that is, lower specificity.
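The threshold trade-off mentioned above can be demonstrated with a short sketch. The marker values below are invented purely for illustration (a hypothetical PSA-like score), not measurements from any study:

```python
# Lowering the cut-off on a continuous marker raises sensitivity
# but also raises the false-positive rate (lowers specificity).
diseased = [4.2, 5.1, 6.3, 7.0, 8.4, 3.9]   # marker values, true positives
healthy  = [1.2, 2.0, 2.8, 3.5, 4.0, 4.6]   # marker values, true negatives

def rates(threshold: float):
    """Sensitivity and specificity at a given decision threshold."""
    tp = sum(x >= threshold for x in diseased)
    fp = sum(x >= threshold for x in healthy)
    sens = tp / len(diseased)
    spec = 1 - fp / len(healthy)
    return sens, spec

for t in (2.0, 4.0, 6.0):
    sens, spec = rates(t)
    print(f"threshold {t:.1f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Sweeping the threshold this way traces out the test's ROC curve: the loosest cut-off catches every case but clears almost no healthy subjects, and the strictest does the reverse.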
The high sensitivity vs specificity debate has been rekindled by new research into possible biological explanations for false results. A false negative can arise from cross-contamination or degradation of the test material, or from interference by substances in the patient's own body. Thus, a patient with a false-negative result might show no flagged sign of cancer and go through life believing there is no disease. Conversely, healthy people who receive false-positive results may be put through uncomfortable and unnecessary follow-up procedures.
To answer the question above, then, we would need to be able to control for the latter. The third factor considered in this post, therefore, is whether the sensitivity or specificity reported in a news release is related to the media outlet releasing the information. It should be noted that there are times when a release reports very high sensitivity and/or specificity but is poorly handled or misreported by the outlet that disseminates it. This is not always the case, however. Some press releases contain only minor irregularities that nevertheless lead readers to incorrect conclusions about a product. In such cases, the reported sensitivity or specificity is likely unrelated to the media outlet itself.
Related to this is whether a single test can match a clinical trial at predicting the clinical outcome of the PSA test. Clinical trials are typically large, double-blind studies of treatment effects in large numbers of patients; the primary trial may include thousands of PSA determinations. Conversely, some news reports are based on small numbers of PSA determinations from a single clinic, and reported sensitivity or specificity figures can be distorted by this disparity in sample size. It should be noted, however, that the PSA test itself remains approved by regulatory bodies such as the US Food and Drug Administration.
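The sample-size disparity can be made concrete with a rough sketch: the same observed sensitivity carries very different uncertainty depending on how many positives it was estimated from. The counts below (45 of 50 vs 900 of 1000 positives detected) are illustrative only, and the interval used is the standard Wilson score interval:

```python
# Wilson score confidence interval for an estimated proportion,
# used here to compare uncertainty in a sensitivity estimate
# from a small single-clinic sample vs a large trial.
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for successes/n."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

print(wilson_interval(45, 50))     # small clinic sample: wide interval
print(wilson_interval(900, 1000))  # large trial: much narrower interval
```

Both samples estimate a sensitivity of 0.90, but the small-sample interval is several times wider, which is exactly the disparity that can make single-clinic figures misleading in news coverage.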
While the factors I have described are an attempt to categorize the variability of sensitivity and specificity in detecting prostate cancer, it is important to note that every person is different. While the prior factors illustrate common patterns in positive results, they do not fully capture the variability of PSA testing. It is important to note that no blood test can give you a definitive diagnosis of prostate cancer; a positive PSA result must be confirmed by your doctor with a biopsy. Thus, while the aforementioned factors are helpful for understanding the range of sensitivities and specificities acceptable for screening, you as the patient, together with your doctor, need to determine what a good PSA test result means for you.