Statistical Package for the Social Sciences (SPSS; IBM Corp., Armonk, NY) is a statistical software application that allows researchers to enter and manipulate data and conduct various statistical analyses. Step-by-step methods for conducting and interpreting over 60 statistical tests are available in Research Engineer. Videos will be coming soon. Click on a link below to gain access to the methods for conducting and interpreting each statistical analysis in SPSS.
Comparison of independent groups on an outcome
Number of groups, scales of measurement, and meeting statistical assumptions
Between-subjects statistics are used when comparing independent groups on an outcome. "Independent" means that the groups are different from each other according to some characteristic. With between-subjects designs, participants can only be part of one group (independence) and can only be observed once (independence of observations, IOO).
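As a concrete sketch of how this choice plays out in software, the snippet below (Python with scipy.stats; all data and variable names are hypothetical, made up for illustration) checks the homogeneity-of-variance assumption for two independent groups and then picks between a parametric and a nonparametric test:

```python
import numpy as np
from scipy import stats

# Hypothetical continuous outcome for two independent groups (illustration only)
group_a = np.array([12.1, 14.3, 11.8, 15.0, 13.2, 12.7, 14.8, 13.9])
group_b = np.array([10.2, 11.5, 9.8, 12.0, 10.9, 11.1, 10.4, 11.8])

# Check the homogeneity-of-variance assumption with Levene's test
lev_stat, lev_p = stats.levene(group_a, group_b)

if lev_p > 0.05:
    # Assumption met: parametric independent samples t-test
    test_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=True)
else:
    # Assumption violated: fall back to the nonparametric Mann-Whitney U
    test_stat, p_value = stats.mannwhitneyu(group_a, group_b,
                                            alternative="two-sided")
```

The same assumption-check-then-test logic carries through the whole decision list that follows.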
One chooses a between-subjects statistical test based on the following:

1. Number of independent groups being compared (one group, two groups, or three or more groups)
2. Scale of measurement of the outcome (categorical, ordinal, or continuous)
3. Meeting statistical assumptions (independence of observations, normality, and homogeneity of variance)

Here is a list of between-subjects statistical tests and when they are utilized in applied quantitative research:

1. Chi-square Goodness-of-fit - One group, categorical outcome, a priori hypothesis for the distribution of the outcome
2. One-sample median test - One group, ordinal outcome, a priori hypothesis for the median value
3. One-sample t-test - One group, continuous outcome, meets the assumptions of IOO and normality, a priori hypothesis for the mean value
4. Chi-square - Two independent groups, categorical outcome, and the chi-square assumption is met (an expected count of at least five in each cell)
5. Fisher's Exact test - Two independent groups, categorical outcome, and when the chi-square assumption is not met
6. Mann-Whitney U - Two independent groups, ordinal outcome, and when the assumption of homogeneity of variance for the independent samples t-test is violated
7. Independent samples t-test - Two independent groups, continuous outcome, meets the assumptions of IOO, normality (skewness and kurtosis statistics), and homogeneity of variance (also known as homoscedasticity, tested with Levene's test)
8. Unadjusted odds ratio - Three or more independent groups, categorical outcome, chi-square assumption is met; choose a reference category and compare each independent group to the reference
9. Kruskal-Wallis - Three or more independent groups, ordinal outcome, and when the assumption of homogeneity of variance is violated
10. ANOVA - Three or more independent groups, continuous outcome, meets the assumptions of IOO, normality, and homogeneity of variance

Parametric statistics are more powerful statistics
Nonparametric statistics are used with categorical and ordinal outcomes
As we continue our journey to break through the barriers associated with statistical lexicons, here is another dichotomy of popular statistical terms that are commonly used but not always understood by everyone.
Parametric statistics are used to assess differences and effects for continuous outcomes. These statistical tests include one-sample t-tests, independent samples t-tests, one-way ANOVA, repeated-measures ANOVA, ANCOVA, factorial ANOVA, multiple regression, MANOVA, and MANCOVA. Nonparametric statistics are used to assess differences and effects for:

1. Ordinal outcomes - One-sample median tests, Mann-Whitney U, Wilcoxon, Kruskal-Wallis, Friedman's ANOVA, proportional odds regression
2. Categorical outcomes - Chi-square, Chi-square Goodness-of-fit, odds ratio, relative risk, McNemar's, Cochran's Q, Kaplan-Meier, log-rank test, Cochran-Mantel-Haenszel, Cox regression, logistic regression, multinomial logistic regression
3. Small sample sizes (n < 30) - Smaller sample sizes make it harder to meet the statistical assumptions associated with parametric statistics. Nonparametric statistics can generate valid statistical inferences in these situations.
4. Violations of statistical assumptions for parametric tests - Normality, homogeneity of variance, normality of difference scores

McNemar's can be used as a post hoc test
Significant main effects for Cochran's Q need to be explained
Nonparametric tests like chi-square, Fisher's exact test, Kruskal-Wallis, Cochran's Q, and Friedman's ANOVA do not have built-in post hoc analyses to explain significant main effects. In order to conduct these post hoc analyses, researchers have to resort to using subsequent nonparametric tests for two groups.
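As an illustration of this pattern (an omnibus test followed by two-group follow-ups), here is a minimal sketch in Python. The data are hypothetical; Cochran's Q is computed from its textbook formula, and the exact McNemar test is built from the binomial distribution on the discordant pairs:

```python
import numpy as np
from scipy import stats

def cochrans_q(x):
    """Cochran's Q test for k related dichotomous (0/1) observations.
    x: (n_subjects, k_conditions) array; Q ~ chi-square on k-1 df under H0."""
    x = np.asarray(x)
    _, k = x.shape
    col = x.sum(axis=0)                     # successes per condition
    row = x.sum(axis=1)                     # successes per subject
    total = x.sum()
    q = (k - 1) * (k * np.sum(col**2) - total**2) / (k * total - np.sum(row**2))
    return q, stats.chi2.sf(q, k - 1)

def mcnemar_exact(a, b):
    """Exact McNemar test on paired 0/1 vectors (binomial on discordant pairs)."""
    a, b = np.asarray(a), np.asarray(b)
    n10 = int(np.sum((a == 1) & (b == 0)))  # discordant: 1 -> 0
    n01 = int(np.sum((a == 0) & (b == 1)))  # discordant: 0 -> 1
    n_disc = n10 + n01
    if n_disc == 0:
        return 1.0
    return min(1.0, 2 * stats.binom.cdf(min(n10, n01), n_disc, 0.5))

# Hypothetical: 10 subjects, pass (1) / fail (0) at three time points
data = np.array([
    [0, 1, 1], [0, 0, 1], [0, 1, 1], [1, 1, 1], [0, 0, 1],
    [0, 1, 0], [0, 0, 1], [0, 1, 1], [0, 0, 0], [0, 1, 1],
])

q, p_main = cochrans_q(data)
# With a significant main effect, pairwise McNemar tests explain where change occurs
p_12 = mcnemar_exact(data[:, 0], data[:, 1])
p_13 = mcnemar_exact(data[:, 0], data[:, 2])
p_23 = mcnemar_exact(data[:, 1], data[:, 2])
```

In practice the pairwise McNemar tests would only be interpreted after a significant omnibus Q.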
In a prior post, I explained how Mann-Whitney U tests were used in a post hoc fashion for significant main effects found with Kruskal-Wallis analyses. This is pertinent for between-subjects tests. If you are using a within-subjects design with three or more observations of a dichotomous categorical outcome, you utilize Cochran's Q test to assess main effects. If a significant main effect is found, then McNemar's tests have to be employed for post hoc group comparisons. Significant post hoc tests (or relative risk calculations) will provide evidence of significant differences across observations or within-subjects.

Nonparametric statistics should be employed more often than they are in the literature. Many published studies use small sample sizes and ordinal or categorical outcomes. The statistical assumptions of more powerful parametric statistics often cannot be met with these types of designs. Nonparametric statistics are robust to these violations and should be used accordingly. Post hoc analyses are important in nonparametric statistics, just like in parametric statistics.

Nonparametric statistics are robust to small sample sizes
The right way to conduct statistics
Mark Twain said it best: "There are lies, damned lies, and statistics." Statistics can be misleading both from the standpoint of the person conducting the analyses and the person interpreting them. Many between-subjects studies have small sample sizes (n < 20), and the statistical assumptions for parametric statistics cannot be met.
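In that situation, the nonparametric route can be sketched as follows (Python with numpy and scipy.stats; the skewed data are simulated purely for illustration): summarize each group with its median and interquartile range, and test with Mann-Whitney U rather than a t-test.

```python
import numpy as np
from scipy import stats

# Simulated right-skewed scores for two small groups (illustration only)
rng = np.random.default_rng(7)
group_a = rng.exponential(scale=2.0, size=15)
group_b = rng.exponential(scale=3.5, size=15)

def median_iqr(x):
    """Median and interquartile range, the usual summary for nonparametric tests."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return med, q3 - q1

med_a, iqr_a = median_iqr(group_a)
med_b, iqr_b = median_iqr(group_b)

# Mann-Whitney U makes no normality assumption about the skewed distributions
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
```

Reporting the medians and IQRs alongside the p-value keeps the summary consistent with the test that produced it.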
For basic researchers that operate day in and day out with small sample sizes, the answer is to use nonparametric statistics. Nonparametric statistical tests such as the Mann-Whitney U, Kruskal-Wallis, Wilcoxon, and Friedman's ANOVA are robust to violations of statistical assumptions and skewed distributions. These tests yield interpretable medians, interquartile ranges, and p-values. Nonparametric statistics are also useful in the social sciences due to the inherent measurement error associated with assessing human behaviors, thoughts, feelings, intelligence, and emotional states. The underlying algebra associated with psychometrics relies on intercorrelations amongst constructs or items. Correlations can easily be skewed by outlying observations and measurement error. Therefore, in concordance with mathematical and empirical reasoning, nonparametric statistics should be used often for between-subjects comparisons of surveys, instruments, and psychological measures.

Ordinal measures and normality
Ordinal level measurement can become interval level with assumed normality
Here is an interesting trick I picked up along the way when it comes to ordinal outcomes and some unvalidated measures. If you run skewness and kurtosis statistics on the ordinal variable and its distribution meets the assumption of normality (skewness and kurtosis statistics have absolute values less than 2.0), then you can "upgrade" the variable to a continuous level of measurement and analyze it using more powerful parametric statistics.
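That check can be sketched in a few lines of Python with scipy.stats (the Likert responses below are hypothetical; note that scipy reports excess kurtosis, where a normal distribution scores 0, which is the convention the 2.0 rule of thumb assumes):

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses (illustration only)
scores = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 3, 2, 4, 3, 4, 5, 3, 4])

skewness = stats.skew(scores)
excess_kurtosis = stats.kurtosis(scores)  # excess kurtosis: normal = 0

# The rule of thumb from the text: both statistics within an absolute value of 2.0
treat_as_continuous = abs(skewness) < 2.0 and abs(excess_kurtosis) < 2.0
```

If `treat_as_continuous` holds, the variable would be analyzed with parametric tests; otherwise it stays ordinal and gets a nonparametric test.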
This type of thinking is the reason that the SAT, ACT, GRE, MCAT, LSAT, and validated psychological instruments are perceived at a continuous level. The scores yielded from these instruments, by definition, are not continuous because a "true zero" does not exist. Scores from these tests are often norm- or criterion-referenced to the population so that they can be interpreted in the correct context. Therefore, with the subjectivity and measurement error associated with classical test theory and item response theory, the scores are actually ordinal. With that being said, if the survey instrument or ordinal outcome is used often in the empirical literature and it meets the assumption of normality as per skewness and kurtosis statistics, treat the ordinal variable as continuous and run analyses using parametric statistics (t-tests, ANOVA, regression) rather than nonparametric statistics (Chi-square, Mann-Whitney U, Kruskal-Wallis, McNemar's, Wilcoxon, Friedman's ANOVA, logistic regression).

Research questions lead to choice of statistical design
Differences between between-subjects and within-subjects designs
There are terms in statistics that many people do not understand from a practical standpoint. I'm a biostatistical scientist and it took me YEARS to wrap my head around some fundamental aspects of statistical reasoning, much less the lexicon. I would hypothesize that 90% of the statistics reported in the empirical literature fall into two different categories, between-subjects and within-subjects. Here is a basic breakdown of the differences between these types of statistical tests:
1. Between-subjects - When you are comparing independent groups on a categorical, ordinal, or continuous outcome variable, you are conducting between-subjects analyses. The "between" denotes the differences between mutually exclusive groups or levels of a categorical predictor variable. Chi-square, Mann-Whitney U, independent-samples t-tests, odds ratio, Kruskal-Wallis, and one-way ANOVA are all considered between-subjects analyses because of the comparison of independent groups.

2. Within-subjects - When you are comparing THE SAME GROUP on a categorical, ordinal, or continuous outcome ACROSS TIME OR WITHIN THE SAME OBJECT OF MEASUREMENT MULTIPLE TIMES, you are conducting within-subjects analyses. The "within" relates to the differences within the same object of measurement across multiple observations, time, or literally, "within subjects." Chi-square Goodness-of-fit, Wilcoxon, repeated-measures t-tests, relative risk, Friedman's ANOVA, and repeated-measures ANOVA are within-subjects analyses because the same group or cohort of individuals is measured at several different timepoints or observations.

Mann-Whitney U and Wilcoxon as post hoc tests
Explain significant main effects from Kruskal-Wallis tests and Friedman's ANOVA
Nonparametric statistics are used when analyzing categorical and ordinal outcomes. These statistics are also used with smaller sample sizes (n < 20) and when the assumptions of certain statistical tests are violated.
The Mann-Whitney U test is employed when comparing two independent groups on an ordinal outcome. It is also used when the assumptions of an independent samples or unpaired t-test are violated (normality, homogeneity of variance). The Wilcoxon test is used when comparing ordinal outcomes at two different points in time or within-subjects. It is further used when the assumptions of a repeated-measures t-test are violated (independence of observations, normality of difference scores).

A lesser known use for these two nonparametric tests is when significant main effects are found for nonparametric Kruskal-Wallis and Friedman's ANOVA tests. Much like with a parametric one-way ANOVA or repeated-measures ANOVA, if a significant main effect is found using nonparametric statistics, then a post hoc analysis must be undertaken to explain the significant main effect. Nonparametric statistics do not have Tukey, Scheffe, and Dunnett tests like parametric statistics! When a significant main effect is found using a Kruskal-Wallis test, subsequent Mann-Whitney U tests must be employed in a post hoc fashion to explain where amongst the independent groups the actual differences exist. The same holds true for Friedman's ANOVA. If a significant main effect is found, then Wilcoxon tests must be used in a post hoc fashion to explain where the significant changes occur amongst the observations or within-subjects.
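A minimal sketch of the Kruskal-Wallis-then-Mann-Whitney workflow in Python with scipy.stats (the group labels and ordinal ratings are hypothetical; the Bonferroni-adjusted alpha shown is one common, if conservative, way to handle the multiple pairwise comparisons, though the post above does not prescribe a particular correction):

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical ordinal ratings for three independent groups (illustration only)
groups = {
    "A": np.array([1, 2, 2, 3, 1, 2, 3, 2]),
    "B": np.array([3, 4, 3, 5, 4, 3, 4, 4]),
    "C": np.array([2, 3, 2, 4, 3, 2, 3, 3]),
}

# Omnibus Kruskal-Wallis test across the three groups
h_stat, p_main = stats.kruskal(*groups.values())

post_hoc = {}
if p_main < 0.05:
    # Significant main effect: pairwise Mann-Whitney U tests locate the differences
    pairs = list(combinations(groups, 2))
    adjusted_alpha = 0.05 / len(pairs)  # Bonferroni-adjusted alpha (assumption)
    for g1, g2 in pairs:
        _, p = stats.mannwhitneyu(groups[g1], groups[g2],
                                  alternative="two-sided")
        post_hoc[(g1, g2)] = (p, p < adjusted_alpha)
```

The same structure applies to Friedman's ANOVA (`stats.friedmanchisquare`) with pairwise Wilcoxon tests (`stats.wilcoxon`) as the follow-up.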
Archives: March 2016
Author: Eric Heidel, Ph.D. is Owner and Operator of Scalë, LLC.