## Probability distribution

### Introduction

A probability distribution can be defined as a record of the probabilities assigned to the values that a random variable can take. It should be noted that the term “random” does not describe the values themselves; rather, it refers to the process that produces the outcome. Once observed, the value of a random variable is definite. This paper will explore the effect of increasing sample size on variability. Furthermore, the paper will explore the effect of frequency on probability. Lastly, the paper will examine the relevance of p-values to probability and frequency distributions (Lumley, Diehr, Emerson, & Chen, 2002).


### Why increasing the sample size decreased the variability in the interactive media piece

Large sample sizes are considered more beneficial to statistical research than small sample sizes. For instance, large samples give a closer view of the whole population than small samples, and it is easier to select an unrepresentative sample when the sample size is small. As a result, sampling variability is greater when utilizing small samples instead of large samples. Furthermore, the size of the sampling error depends on sample size, since the standard error is obtained as the ratio of the standard deviation to the square root of the sample size (Lumley et al., 2002). From the explanation above, it can be noted that variability is highly dependent on sample size: when the sample size is increased, the standard error is reduced, and variability decreases with it. In addition, when unusual observations are encountered in a small sample, they produce large sampling variability, whereas the same observations in a large sample produce only small variability (Newsom, 2008). This is illustrated in Fig. 1 below.
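The relationship described above — standard error as the ratio of the standard deviation to the square root of the sample size — can be sketched in Python. The population parameters and sample sizes below are illustrative assumptions, not values from the paper:

```python
import math
import random

def standard_error(sample):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return sd / math.sqrt(n)

# Illustrative population: normal with mean 50 and standard deviation 10
random.seed(1)
population = [random.gauss(50, 10) for _ in range(100_000)]

# Larger samples yield smaller standard errors, i.e. less variability
for n in (5, 50, 500):
    sample = random.sample(population, n)
    print(f"n={n}: standard error ≈ {standard_error(sample):.3f}")
```

Because the sample size appears under a square root in the denominator, quadrupling the sample roughly halves the standard error.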

### Example

Suppose one researcher uses a sample size of 5 and another uses a sample size of 100. If 2 observations in the small sample are unusual, they make up 40% of that sample (2/5). On the other hand, if the same number of observations were unusual in the large sample, they would make up only 2% of it (2/100). Accordingly, variability decreases with a larger sample.

### How frequency is used to inform probability and why this is important

Frequency is essential in cases that deal with repeated trials, and it is also significant in repeated sampling from an ensemble. Frequency models can be utilized in both independent and dependent trials. Frequency is related to probability through repeated experiments, although the link between the two is subtle. Frequencies are essential because they can be used to infer the probability of the outcome of a subsequent trial. Conversely, if frequencies are unavailable, they can be predicted from the probabilities (Loredo, 2006).
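The idea of using observed frequency to estimate a probability can be sketched as follows; the true probability of 0.3 and the number of trials are illustrative assumptions:

```python
import random

def relative_frequency(outcomes):
    """Estimate an event's probability by its relative frequency: successes / trials."""
    return sum(outcomes) / len(outcomes)

# Simulate repeated trials of an event with an assumed true probability of 0.3
random.seed(7)
trials = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]

# With many trials, the relative frequency approaches the true probability
print(f"estimated probability ≈ {relative_frequency(trials):.3f}")
```

This estimate can then be used as the probability of success on the next trial, which is the inference step the section describes.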

### The relevance of p-values to probability and their relationship with frequency distributions

The p-value can be defined as the probability, under the null hypothesis, of obtaining a test statistic at least as extreme as the one observed (the area in the right tail beyond the statistic). P-values are significant because they quantify the strength of the evidence in support of the alternative hypothesis as opposed to the null hypothesis. The interpretation of a p-value is usually linked to the frequency interpretation of probability: a p-value can be construed in terms of how often such a statistic would arise over repeated studies. However, it should be understood that the p-value is not the probability that the null hypothesis is true. For instance, a p-value greater than 0.10 is interpreted as being consistent with the null hypothesis, not as proof of it.
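For a standard-normal test statistic, the right-tail p-value described above can be computed with the complementary error function; the z values below are illustrative:

```python
import math

def right_tail_p_value(z):
    """P(Z >= z) for a standard-normal test statistic (one-sided, right tail)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Larger test statistics give smaller p-values, i.e. stronger
# evidence against the null hypothesis
for z in (0.0, 1.0, 3.0):
    print(f"z={z}: p = {right_tail_p_value(z):.4f}")
```

At z = 0 the p-value is 0.5 (no evidence either way), and it shrinks toward 0 as the statistic moves further into the right tail.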

## Conclusion

Sample size is significant in determining the variability of samples: small samples lead to large variability, while large samples lead to small variability. Additionally, frequency is significant in repeated trials, where it helps to predict future outcomes. Lastly, p-values quantify the strength of the evidence against the null hypothesis.

## Reference List

Loredo, T. (2006). *Probability and Frequency: Lecture 3*. Web.


Lumley, T., Diehr, P., Emerson, S., & Chen, L. (2002). The importance of the normality assumption in large public health data sets. *Annual Review of Public Health, 23*, 151–169.

Newsom, J. (2008). *Lecture 4: Sample Size*. Web.