Statistical Package for the Social Sciences (SPSS; IBM Corp., Armonk, NY) is a statistical software application that allows researchers to enter and manipulate data and conduct various statistical analyses. Step-by-step methods for conducting and interpreting more than 60 statistical tests are available in Research Engineer. Videos are coming soon. Click on a link below to access the methods for conducting and interpreting each statistical analysis in SPSS.
Comparison of independent groups on an outcome
Number of groups, scales of measurement, and meeting statistical assumptions
Between-subjects statistics are used when comparing independent groups on an outcome. Independent groups are groups that are distinct from each other according to some characteristic. With between-subjects designs, participants can be part of only one group (independence) and are observed only once (independence of observations, IOO).
One chooses a between-subjects statistical test based on the following:

1. Number of independent groups being compared (one group, two groups, or three or more groups)
2. Scale of measurement of the outcome (categorical, ordinal, or continuous)
3. Meeting statistical assumptions (independence of observations, normality, and homogeneity of variance)

Here is a list of between-subjects statistical tests and when they are utilized in applied quantitative research (this decision logic is also sketched in code after the list):

1. Chi-square Goodness-of-fit - One group, categorical outcome, a priori hypothesis for dispersal of outcome
2. One-sample median test - One group, ordinal outcome, a priori hypothesis for median value
3. One-sample t-test - One group, continuous outcome, meets the assumptions of IOO and normality, a priori hypothesis for mean value
4. Chi-square - Two independent groups, categorical outcome, and the chi-square assumption is met (at least five expected observations in each cell)
5. Fisher's Exact test - Two independent groups, categorical outcome, and the chi-square assumption is not met
6. Mann-Whitney U - Two independent groups, ordinal outcome, or when the assumption of homogeneity of variance for the independent samples t-test is violated
7. Independent samples t-test - Two independent groups, continuous outcome, meets the assumptions of IOO, normality (skewness and kurtosis statistics), and homogeneity of variance (also known as homoscedasticity, tested with Levene's test)
8. Unadjusted odds ratio - Three or more independent groups, categorical outcome, chi-square assumption met; choose a reference category and compare each independent group to the reference
9. Kruskal-Wallis - Three or more independent groups, ordinal outcome, or when the assumption of homogeneity of variance is violated
10. ANOVA - Three or more independent groups, continuous outcome, meets the assumptions of IOO, normality, and homogeneity of variance
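As a rough illustration of this decision logic, here is a minimal Python sketch. The function name and arguments are hypothetical, and `assumptions_met` collapses IOO, normality, homogeneity of variance, and the chi-square cell-count rule into a single flag:

```python
def choose_between_subjects_test(n_groups, scale, assumptions_met=True):
    """Map the three criteria above to a between-subjects test.

    scale: "categorical", "ordinal", or "continuous"
    assumptions_met: stand-in for IOO, normality, and homogeneity of
    variance (continuous), or the five-per-cell rule (categorical).
    """
    if scale == "categorical":
        if n_groups == 1:
            return "Chi-square Goodness-of-fit"
        if n_groups == 2:
            return "Chi-square" if assumptions_met else "Fisher's Exact test"
        return "Unadjusted odds ratios against a reference category"
    if scale == "ordinal":
        return {1: "One-sample median test",
                2: "Mann-Whitney U"}.get(n_groups, "Kruskal-Wallis")
    if scale == "continuous":
        if not assumptions_met:
            # Rank-based fallbacks when homogeneity of variance is
            # violated (items 6 and 9 above).
            return "Mann-Whitney U" if n_groups == 2 else "Kruskal-Wallis"
        return {1: "One-sample t-test",
                2: "Independent samples t-test"}.get(n_groups, "ANOVA")
    raise ValueError("scale must be categorical, ordinal, or continuous")
```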
Chi-square vs. Fisher's Exact Test
Meeting the chi-square assumption of at least five expected observations per cell

There is a fundamental difference between chi-square and Fisher's Exact test, yet they are often used interchangeably, both in everyday empirical discourse and in the literature. Many free online calculators will compute inferential statistics for chi-square tests of independence and Fisher's Exact test, and without the proper statistical competencies, researchers can employ the wrong test. Here is how to know which of these tests to use with your research data:
1. Chi-square - This non-parametric test is used when you are looking at the association between dichotomous categorical variables. The primary inference yielded from this test is the unadjusted odds ratio with 95% confidence interval. EACH CELL of the 2x2 table MUST have at least five expected observations.
2. Fisher's Exact Test - This non-parametric test is also employed when you are looking at the association between dichotomous categorical variables, and the primary inference here is likewise the unadjusted odds ratio with 95% confidence interval. However, Fisher's Exact Test is used instead of chi-square if ANY CELL of the 2x2 table has FEWER than five expected observations (see the sketch after this list).
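For readers working outside SPSS, here is a minimal scipy sketch of that decision; the function name and the toy table are mine, and `chi2_contingency` conveniently returns the expected cell counts used by the five-per-cell rule:

```python
import numpy as np
from scipy import stats

def two_by_two_association(table):
    """Chi-square on a 2x2 table, falling back to Fisher's Exact
    when any expected cell count drops below five."""
    table = np.asarray(table)
    chi2, p, dof, expected = stats.chi2_contingency(table)
    if expected.min() < 5:
        odds_ratio, p = stats.fisher_exact(table)
        return "Fisher's Exact test", p
    return "Chi-square", p

print(two_by_two_association([[12, 8], [3, 17]]))  # toy counts
```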
Parametric statistics are more powerful statistics
Non-parametric statistics are used with categorical and ordinal outcomes

As we continue our journey to break through the barriers associated with statistical lexicons, here is another dichotomy of popular statistical terms that are spoken commonly but not always understood by everyone.
Parametric statistics are used to assess differences and effects for continuous outcomes. These statistical tests include one-sample t-tests, independent samples t-tests, one-way ANOVA, repeated-measures ANOVA, ANCOVA, factorial ANOVA, multiple regression, MANOVA, and MANCOVA.

Non-parametric statistics are used to assess differences and effects for:

1. Ordinal outcomes - One-sample median test, Mann-Whitney U, Wilcoxon, Kruskal-Wallis, Friedman's ANOVA, proportional odds regression
2. Categorical outcomes - Chi-square, Chi-square Goodness-of-fit, odds ratio, relative risk, McNemar's, Cochran's Q, Kaplan-Meier, log-rank test, Cochran-Mantel-Haenszel, Cox regression, logistic regression, multinomial logistic regression
3. Small sample sizes (n < 30) - Smaller sample sizes make it harder to meet the statistical assumptions associated with parametric statistics; non-parametric statistics can generate valid statistical inferences in these situations.
4. Violations of statistical assumptions for parametric tests - Normality, homogeneity of variance, normality of difference scores

The sketch after this list contrasts a parametric test with its non-parametric counterpart on the same data.
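Here is a toy contrast between the two families in scipy; the data are made up for illustration only:

```python
from scipy import stats

group_a = [4, 5, 5, 6, 7, 8, 9]   # made-up scores
group_b = [2, 3, 3, 4, 5, 5, 6]

# Parametric: independent samples t-test (continuous outcome,
# assumes normality and homogeneity of variance).
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric counterpart: Mann-Whitney U (rank-based, suitable
# for ordinal outcomes or violated parametric assumptions).
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test p = {t_p:.3f}, Mann-Whitney U p = {u_p:.3f}")
```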
Chi-square p-value
Odds ratio with 95% confidence interval should be reported and interpreted

Most people who need statistics are focused only on the almighty p-value of less than .05. When running chi-square analyses between a dichotomous categorical predictor and a dichotomous categorical outcome, the p-value is not the primary inference that should be interpreted for practical purposes. The lack of precision and accuracy in categorical measures, coupled with sampling error, makes the p-values yielded from chi-square analyses virtually worthless in the applied sense.
The correct statistic to run is an unadjusted odds ratio with 95% confidence interval. This is the best measure for interpreting the magnitude of the association between two dichotomous categorical variables collected in a retrospective fashion. (Relative risk can be calculated when the association is assessed in a prospective fashion.) The width of the 95% confidence interval, and whether it crosses 1.0, dictates the significance and precision of the association between the variables. With smaller sample sizes, the 95% confidence interval will be wider and less precise; larger sample sizes will yield more precise effects.
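For the curious, here is a minimal sketch of the standard Woolf (log) method for the unadjusted odds ratio and its 95% confidence interval; the function name and cell labels are my own, and it assumes all four cell counts are nonzero:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf 95% CI from a 2x2 table:
    a/b = exposed with/without outcome, c/d = unexposed with/without.
    Assumes all cells are nonzero."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

or_, (lo, hi) = odds_ratio_ci(12, 8, 3, 17)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # CI crossing 1.0 => not significant
```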
Ordinal measures and normality
Ordinal level measurement can become interval level with assumed normality

Here is an interesting trick I picked up along the way when it comes to ordinal outcomes and some unvalidated measures. If you run skewness and kurtosis statistics on the ordinal variable and its distribution meets the assumption of normality (skewness and kurtosis statistics are less than an absolute value of 2.0), then you can "upgrade" the variable to a continuous level of measurement and analyze it using more powerful parametric statistics.
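A minimal sketch of that screening step, assuming scipy's bias-corrected statistics approximate the skewness and (excess) kurtosis values SPSS reports:

```python
from scipy import stats

def can_treat_as_continuous(values, cutoff=2.0):
    """Heuristic from the post: "upgrade" an ordinal variable to
    continuous if |skewness| and |kurtosis| are both under 2.0."""
    skewness = stats.skew(values, bias=False)
    kurt = stats.kurtosis(values, bias=False)  # excess kurtosis
    return abs(skewness) < cutoff and abs(kurt) < cutoff

print(can_treat_as_continuous([1, 2, 2, 3, 3, 3, 4, 4, 5]))  # toy Likert data
```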
This type of thinking is the reason that the SAT, ACT, GRE, MCAT, LSAT, and validated psychological instruments are perceived as continuous measures. The scores yielded from these instruments, by definition, are not continuous because a "true zero" does not exist. Scores from these tests are often norm- or criterion-referenced to the population so that they can be interpreted in the correct context. Therefore, with the subjectivity and measurement error associated with classical test theory and item response theory, the scores are actually ordinal. With that being said, if the survey instrument or ordinal outcome is used often in the empirical literature and it meets the assumption of normality as per skewness and kurtosis statistics, treat the ordinal variable as continuous and run analyses using parametric statistics (t-tests, ANOVA, regression) rather than non-parametric statistics (Chi-square, Mann-Whitney U, Kruskal-Wallis, McNemar's, Wilcoxon, Friedman's ANOVA, logistic regression).
Research questions lead to choice of statistical design
Differences between between-subjects and within-subjects designs

There are terms in statistics that many people do not understand from a practical standpoint. I'm a biostatistical scientist, and it took me YEARS to wrap my head around some fundamental aspects of statistical reasoning, much less the lexicon. I would hypothesize that 90% of the statistics reported in the empirical literature as a whole fall into two categories, between-subjects and within-subjects. Here is a basic breakdown of the differences between these types of statistical tests:
1. Between-subjects - When you are comparing independent groups on a categorical, ordinal, or continuous outcome variable, you are conducting between-subjects analyses. The "between-" denotes the differences between mutually exclusive groups or levels of a categorical predictor variable. Chi-square, Mann-Whitney U, independent samples t-tests, odds ratio, Kruskal-Wallis, and one-way ANOVA are all considered between-subjects analyses because of the comparison of independent groups.
2. Within-subjects - When you are comparing THE SAME GROUP on a categorical, ordinal, or continuous outcome ACROSS TIME OR WITHIN THE SAME OBJECT OF MEASUREMENT MULTIPLE TIMES, you are conducting within-subjects analyses. The "within-" relates to the differences within the same object of measurement across multiple observations, time, or literally, "within-subjects." Chi-square Goodness-of-fit, Wilcoxon, repeated-measures t-tests, relative risk, Friedman's ANOVA, and repeated-measures ANOVA are within-subjects analyses because the same group or cohort of individuals is measured at several different time-points or observations. The sketch after this list contrasts the two designs with the simplest case in each family, the t-test.
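To make the distinction concrete, here is a hedged scipy sketch; the data are made up:

```python
from scipy import stats

# Between-subjects: two independent groups, each person measured once.
control   = [10, 12, 11, 14, 9, 13]
treatment = [13, 15, 14, 16, 12, 15]
t_between, p_between = stats.ttest_ind(control, treatment)

# Within-subjects: the SAME people measured twice; the paired test
# analyzes each subject's pre-to-post difference score.
pre  = [10, 12, 11, 14, 9, 13]
post = [12, 14, 12, 16, 11, 14]
t_within, p_within = stats.ttest_rel(pre, post)

print(f"between p = {p_between:.3f}, within p = {p_within:.3f}")
```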
Author
Eric Heidel, Ph.D. is Owner and Operator of Scalë, LLC.