# 10.6: Inferential Statistics (Summary)

## Key Takeaways

- Null hypothesis testing is a formal approach to deciding whether a statistical relationship in a sample reflects a real relationship in the population or is just due to chance.
- The logic of null hypothesis testing involves assuming that the null hypothesis is true, finding how likely the sample result would be if this assumption were correct, and then making a decision. If the sample result would be unlikely if the null hypothesis were true, then it is rejected in favor of the alternative hypothesis. If it would not be unlikely, then the null hypothesis is retained.
- The probability of obtaining the sample result if the null hypothesis were true (the *p* value) is based on two considerations: relationship strength and sample size. Reasonable judgments about whether a sample relationship is statistically significant can often be made by quickly considering these two factors.
- Statistical significance is not the same as relationship strength or importance. Even weak relationships can be statistically significant if the sample size is large enough. It is important to consider relationship strength and the practical significance of a result in addition to its statistical significance.
- To compare two means, the most common null hypothesis test is the *t* test. The one-sample *t* test is used to compare one sample mean with a hypothetical population mean of interest, the dependent-samples *t* test is used to compare two means in a within-subjects design, and the independent-samples *t* test is used to compare two means in a between-subjects design.
- To compare more than two means, the most common null hypothesis test is the analysis of variance (ANOVA). The one-way ANOVA is used for between-subjects designs with one independent variable, the repeated-measures ANOVA is used for within-subjects designs, and the factorial ANOVA is used for factorial designs.
- A null hypothesis test of Pearson’s *r* is used to compare a sample value of Pearson’s *r* with a hypothetical population value of 0.
- The decision to reject or retain the null hypothesis is not guaranteed to be correct. A Type I error occurs when one rejects the null hypothesis when it is true. A Type II error occurs when one fails to reject the null hypothesis when it is false.
- The statistical power of a research design is the probability of rejecting the null hypothesis given the expected strength of the relationship in the population and the sample size. Researchers should make sure that their studies have adequate statistical power before conducting them.
- Null hypothesis testing has been criticized on the grounds that researchers misunderstand it, that it is illogical, and that it is uninformative. Others argue that it serves an important purpose—especially when used with effect size measures, confidence intervals, and other techniques. It remains the dominant approach to inferential statistics in psychology.
- In recent years psychology has grappled with a failure to replicate research findings. Some have interpreted this as a normal aspect of science, but others have suggested that it highlights problems stemming from questionable research practices.
- One response to this “replicability crisis” has been the emergence of open science practices, which increase the transparency and openness of the research process. These open practices include digital badges to encourage pre-registration of hypotheses and the sharing of raw data and research materials.
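As a rough illustration, the null hypothesis tests summarized above correspond to standard routines in SciPy. This is only a sketch, assuming NumPy and SciPy are available; the data are simulated for demonstration and do not come from any study discussed in this chapter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated scores for illustration only
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.normal(loc=5.5, scale=1.0, size=30)
group_c = rng.normal(loc=6.0, scale=1.0, size=30)

# One-sample t test: one sample mean vs. a hypothetical population mean
t1, p1 = stats.ttest_1samp(group_a, popmean=5.0)

# Dependent-samples t test: two means in a within-subjects design
t2, p2 = stats.ttest_rel(group_a, group_b)

# Independent-samples t test: two means in a between-subjects design
t3, p3 = stats.ttest_ind(group_a, group_b)

# One-way ANOVA: more than two means, one independent variable
f, p4 = stats.f_oneway(group_a, group_b, group_c)

# Test of Pearson's r against a hypothetical population value of 0
r, p5 = stats.pearsonr(group_a, group_b)

# Each p value is the probability of a sample result at least this
# extreme if the null hypothesis were true; by convention the null
# hypothesis is rejected when p < .05.
```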
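The point that statistical significance is not the same as relationship strength can be seen in a small simulation: with a very large sample, even a trivially weak correlation comes out "significant." Again a sketch assuming NumPy and SciPy; the slope of .02 is an arbitrary choice to produce a weak relationship.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000  # a very large sample

x = rng.normal(size=n)
# y is only very weakly related to x (true r is about .02)
y = 0.02 * x + rng.normal(size=n)

r, p = stats.pearsonr(x, y)
# r is tiny (a weak relationship), yet p falls below .05 because the
# sample is so large -- statistically significant but of little
# practical importance.
```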
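Statistical power, as described above, can be estimated before a study is run. One simple approach is simulation: generate many fake datasets at the expected effect size and count how often the null hypothesis is rejected. This is a sketch assuming NumPy and SciPy; the function `simulated_power` is illustrative, not a library routine.

```python
import numpy as np
from scipy import stats

def simulated_power(effect_size, n_per_group, alpha=0.05,
                    n_sims=2000, seed=0):
    """Estimate the power of an independent-samples t test by
    simulating many studies at the expected effect size (Cohen's d)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            rejections += 1
    return rejections / n_sims

# A medium effect (d = .50) with 64 participants per group yields
# power near the conventional .80 target.
power = simulated_power(effect_size=0.5, n_per_group=64)
```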
