Testing for statistical significance depends on comparing the p-value with a chosen significance level, and the choice of that level should take the sample size into account (Banerjee, Chitnis, Jadhav, Bhawalkar, & Chaudhury, 2009; Ioannidis, 2005). The significance level fixes where the rejection region falls on the null distribution, and the form of the alternative hypothesis determines whether the left-hand tail, the right-hand tail, or both tails are used. For a one-sided alternative in the positive direction, the p-value measures how far the test statistic lies in the right-hand tail of the null distribution; lowering the significance threshold therefore requires the statistic to lie farther out in that tail before the result is declared significant. Small p-values signify strong evidence against the null hypothesis, so a stricter (smaller) threshold is appropriate when wrongly rejecting the null hypothesis is the main concern, because only very strong evidence against the null will then justify its rejection.
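As a rough illustration of the tail areas described above, the sketch below computes one-tailed and two-tailed p-values for a hypothetical z statistic with SciPy; the test statistic and significance level are assumed values chosen for illustration, not figures from any study cited here.

```python
# A minimal sketch of obtaining a p-value as a tail area of the null
# distribution. The z statistic and significance level are assumptions.
from scipy import stats

z = 1.88        # hypothetical test statistic under the null distribution
alpha = 0.05    # hypothetical pre-chosen significance level

# Right-hand tail: used when the alternative hypothesis is "greater than".
p_right = stats.norm.sf(z)            # P(Z >= z)

# Left-hand tail: used when the alternative hypothesis is "less than".
p_left = stats.norm.cdf(z)            # P(Z <= z)

# Both tails: used for a two-sided alternative hypothesis.
p_two = 2 * stats.norm.sf(abs(z))     # P(|Z| >= |z|)

print(f"one-tailed (right): {p_right:.3f}")
print(f"one-tailed (left):  {p_left:.3f}")
print(f"two-tailed:         {p_two:.3f}")
print("reject H0 (two-tailed)?", p_two < alpha)
```

With these assumed numbers the right-tailed p-value falls below 0.05 while the two-tailed p-value does not, which shows how the choice of tail, fixed by the alternative hypothesis, can change the significance decision.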
Explain how p-values that are 0.050, 0.049, and 0.051 might be different when interpreting statistical results.
The p-value tests two assumptions: that the interventions have no effect, and that there is no difference in intervention effects between heterogeneous studies. Appropriate interpretation of statistical results based on p-values should be guided by the sample size and the 95% confidence interval (Banerjee et al., 2009; Ioannidis, 2005). When a p-value of 0.051 is obtained, it may be misinterpreted as ‘evidence of no effect’; the better reading, ‘no strong evidence of an effect’, is quite different from that outlook. P-values of 0.050 and 0.051 are at or above the set limit and therefore signify ‘no strong evidence that the interventions have effects’. However, such p-values are common in small studies and small meta-analyses, where the confidence interval may span a range of effects that includes both a substantial effect and no intervention effect. In such cases results should not be described simply as non-significant or not statistically significant but as showing ‘no strong evidence that the interventions have effects’ (Banerjee et al., 2009; Ioannidis, 2005). A p-value just below the set limit, such as 0.049, may reflect detection of a trivial effect and should not be interpreted as implying that the interventions have important effects without considering the confidence interval and the point estimate.
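The point that 0.049, 0.050, and 0.051 carry almost identical evidence can be made concrete with a small sketch. Assuming, purely for illustration, an effect estimate reported on a z scale with a standard error of 1, the code below backs out the estimate implied by each two-sided p-value and prints its 95% confidence interval; the interval boundaries differ only in the third decimal place around the null value.

```python
# A minimal sketch, under assumed numbers, of why p = 0.049, 0.050, and 0.051
# describe nearly identical evidence. The standard error of 1 is an assumption.
from scipy import stats

se = 1.0                          # hypothetical standard error of the estimate
z_crit = stats.norm.ppf(0.975)    # about 1.96 for a 95% confidence interval

for p in (0.049, 0.050, 0.051):
    effect = stats.norm.isf(p / 2) * se   # estimate implied by a two-sided p
    lo, hi = effect - z_crit * se, effect + z_crit * se
    print(f"p = {p:.3f}: estimate = {effect:.3f}, 95% CI = ({lo:+.3f}, {hi:.3f})")
```

The confidence interval for p = 0.049 barely excludes the null value, the one for p = 0.051 barely includes it, and the one for p = 0.050 sits essentially on the boundary, which is why the point estimate and interval, not the side of 0.05 the p-value falls on, should drive the interpretation.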
Explain why p-values are significant in contributing to public health practice. Be specific and provide examples.
P-values are the basis for accepting or rejecting the research hypotheses that guide public health practice. This is a significant contribution to the sector because the current trend in the medical fraternity is an evidence-based approach: clinicians design treatment practices and diagnostic regimens based on the available studies (Banerjee et al., 2009). The p-value reported in any important study must therefore reflect the outcome of the research. Selecting a significance level appropriate to the sample size, and interpreting the resulting p-value correctly, guides good practice in the medical profession and has a significant impact on the policies and other decisions that govern the provision of such services (Browner & Newman, 1987; Ioannidis, 2005). Scientists carrying out research need to select an appropriate significance level for their study based on the sample size and to give an interpretation that can help formulate policies for public health practice.
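Browner and Newman (1987) and Ioannidis (2005) frame this with a diagnostic-test analogy: a ‘significant’ p-value is like a positive test result, and its value for practice depends on the prior plausibility of the hypothesis, the study’s power, and the significance level. The sketch below illustrates that analogy with assumed numbers only; the function name and the specific prior probabilities and power values are illustrative, not taken from the cited papers.

```python
# A minimal sketch of the diagnostic-test analogy for significant p-values.
# All numbers are assumptions chosen for illustration.
def prob_finding_true(prior, power, alpha):
    """Probability that a statistically significant result reflects a true effect."""
    true_positive = prior * power           # real effect, and it is detected
    false_positive = (1 - prior) * alpha    # no real effect, yet p < alpha
    return true_positive / (true_positive + false_positive)

# A well-powered trial of a plausible intervention versus an exploratory,
# underpowered study of a long-shot hypothesis, both using alpha = 0.05.
print(f"plausible, well-powered: {prob_finding_true(prior=0.50, power=0.80, alpha=0.05):.2f}")
print(f"long shot, underpowered: {prob_finding_true(prior=0.05, power=0.30, alpha=0.05):.2f}")
```

Under these assumed inputs the same p < 0.05 corresponds to roughly a 94% chance of a true effect in the first scenario but only about a 24% chance in the second, which is why public health decisions should weigh study design and plausibility alongside the p-value itself.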
Suggest how p-values may compromise the reliability of statistical results in public health practice.
The significance level chosen for a particular study shapes how its outcome is reported, and that choice depends on the sample size of the population under study. If the sample is small, a less stringent significance level may have to be accepted when testing the alternative hypothesis, whereas a larger sample allows a smaller significance level and therefore more reliable output. When designing a diagnostic test, an appropriate sample size needs to be selected so that the outcome is representative. The significance level selected in such a case governs the probability of Type I and Type II errors, which may compromise the utilization of the research (Banerjee et al., 2009).
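The trade-off described above can be shown with a small simulation. The effect size, sample sizes, and number of replications below are assumptions chosen only to illustrate how Type I and Type II error rates behave as the sample grows at a fixed significance level.

```python
# A minimal sketch, with assumed effect size and sample sizes, of Type I and
# Type II error rates. Repeated one-sample t-tests are simulated under the
# null (no effect) and under a modest true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, effect, n_sims = 0.05, 0.4, 2000   # assumed values for illustration

for n in (20, 100):
    type1 = type2 = 0
    for _ in range(n_sims):
        null_sample = rng.normal(0.0, 1.0, n)     # no true effect
        alt_sample = rng.normal(effect, 1.0, n)   # true effect present
        if stats.ttest_1samp(null_sample, 0.0).pvalue < alpha:
            type1 += 1                            # false positive (Type I)
        if stats.ttest_1samp(alt_sample, 0.0).pvalue >= alpha:
            type2 += 1                            # false negative (Type II)
    print(f"n = {n:3d}: Type I rate ~ {type1 / n_sims:.3f}, "
          f"Type II rate ~ {type2 / n_sims:.3f}")
```

In this sketch the Type I error rate stays near the chosen significance level regardless of sample size, while the Type II error rate falls sharply as the sample grows, which is the sense in which an inadequate sample size can compromise the reliability of statistically significant (or non-significant) results.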
References
Banerjee, A., Chitnis, U. B., Jadhav, S. L., Bhawalkar, J. S., & Chaudhury, S. (2009). Hypothesis testing, type I and type II errors. Industrial Psychiatry Journal, 18(2), 127-131.
Browner, W. S., & Newman, T. B. (1987). Are all significant p values created equal? The analogy between diagnostic tests and clinical research. Journal of the American Medical Association, 257(18), 2459-2463.
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.