Statistical Package for the Social Sciences (SPSS; IBM Corp., Armonk, NY) is a statistical software application that allows researchers to enter and manipulate data and conduct various statistical analyses. Step-by-step methods for conducting and interpreting over 60 statistical tests are available in Research Engineer. Videos will be coming soon. Click on a link below to gain access to the methods for conducting and interpreting each statistical analysis in SPSS.
The Greenhouse-Geisser correction is often employed when analyzing data with repeated-measures ANOVA. The statistical assumption of sphericity, as assessed by Mauchly's test in SPSS, is violated more often than not. The Greenhouse-Geisser correction is robust to violations of this assumption in repeated-measures ANOVA, and the means and standard deviations from the analysis can then be interpreted.
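For readers who script their analyses, here is a minimal sketch of the same workflow outside of SPSS, using Python's third-party pingouin package. The data frame and its subject/time/score columns are hypothetical, invented purely for illustration.

```python
import pandas as pd
import pingouin as pg  # third-party package: pip install pingouin

# Hypothetical long-format data: one row per subject per time point.
df = pd.DataFrame({
    "subject": [s for s in range(1, 7) for _ in range(3)],
    "time": ["t1", "t2", "t3"] * 6,
    "score": [5.1, 6.3, 7.0, 4.8, 5.9, 6.4, 5.5, 6.1, 7.2,
              4.9, 6.0, 6.8, 5.3, 6.5, 7.1, 5.0, 5.8, 6.9],
})

# Mauchly's test of sphericity (the assumption discussed above).
print(pg.sphericity(df, dv="score", within="time", subject="subject"))

# correction=True reports Greenhouse-Geisser-corrected p-values
# alongside the uncorrected ones.
aov = pg.rm_anova(data=df, dv="score", within="time",
                  subject="subject", correction=True)
print(aov)
```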
Analyze three or more measures of an ordinal outcome
Wilcoxon is used as a post hoc test for significant main effects

Friedman's ANOVA, in my experience, does not make many appearances in the empirical literature. Few people take three or more within-subjects or repeated measures of an ordinal outcome to answer their primary research question, I guess. It is a non-parametric statistical test because the data are measured at an ordinal level. When a significant main effect is found with Friedman's ANOVA, post hoc comparisons must be made within-subjects, or amongst observations, using Wilcoxon tests.

Friedman's ANOVA, while being a non-parametric statistic, may have the most statistical power when employed with cross-sectional data yielded from a survey instrument that has limited reliability and validity evidence. Likert scales and composite scores from such instruments may be naturally skewed due to systematic and unsystematic error. Friedman's ANOVA is robust to these types of distributions that come from cross-sectional studies in the social sciences. If the assumption of normality among the difference scores between observations of a continuous outcome cannot be met, then Friedman's ANOVA can be used to yield inferential evidence.

But it is always a better idea to first check for outliers in a distribution (individual observations that are more than 3.29 standard deviations away from the mean) and decide whether to 1) delete the observation in a listwise fashion, or 2) run a logarithmic transformation on the distribution. If you choose the transformation, you will have to transform the other observations of the outcome as well. The means and standard deviations of transformed variables cannot be interpreted, but the p-values can; report the median and interquartile range for transformed variables. Deleting observations can introduce bias into the statistical analysis, and it should only be done if the outliers constitute less than 10% of the overall distribution. One can also run between-subjects comparisons between participants with all observations of the outcome and participants without them. If there are no differences on predictor, confounding, and outcome variables between these two groups, then lessened observation bias can be assumed.
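Here is a minimal sketch of the Friedman-then-Wilcoxon workflow described above, in Python using scipy.stats; the three time points and their values are made up for illustration.

```python
from scipy import stats

# Made-up ordinal scores for the same eight subjects at three time points.
t1 = [3, 4, 2, 5, 3, 4, 2, 3]
t2 = [4, 5, 3, 5, 4, 5, 3, 4]
t3 = [5, 5, 4, 6, 5, 6, 4, 5]

# Omnibus test: Friedman's ANOVA across the three repeated measures.
chi2, p = stats.friedmanchisquare(t1, t2, t3)
print(f"Friedman chi-square = {chi2:.2f}, p = {p:.4f}")

# If the main effect is significant, follow up with pairwise Wilcoxon
# signed-rank tests. A Bonferroni adjustment (alpha / 3 here) is one
# common way to control for the three comparisons.
if p < 0.05:
    pairs = {"t1 vs t2": (t1, t2), "t1 vs t3": (t1, t3), "t2 vs t3": (t2, t3)}
    for label, (a, b) in pairs.items():
        w, p_pair = stats.wilcoxon(a, b)
        print(f"{label}: W = {w:.1f}, p = {p_pair:.4f}")
```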
Parametric statistics are more powerful statistics
Non-parametric statistics are used with categorical and ordinal outcomes

As we continue our journey to break through the barriers associated with statistical lexicons, here is another dichotomy of popular statistical terms that are spoken commonly but not always understood by everyone.
Parametric statistics are used to assess differences and effects for continuous outcomes. These statistical tests include one-sample t-tests, independent-samples t-tests, one-way ANOVA, repeated-measures ANOVA, ANCOVA, factorial ANOVA, multiple regression, MANOVA, and MANCOVA. Non-parametric statistics are used to assess differences and effects for:

1. Ordinal outcomes - One-sample median tests, Mann-Whitney U, Wilcoxon, Kruskal-Wallis, Friedman's ANOVA, and proportional odds regression.

2. Categorical outcomes - Chi-square, Chi-square Goodness-of-fit, odds ratio, relative risk, McNemar's, Cochran's Q, Kaplan-Meier, log-rank test, Cochran-Mantel-Haenszel, Cox regression, logistic regression, and multinomial logistic regression.

3. Small sample sizes (n < 30) - Smaller sample sizes make it harder to meet the statistical assumptions associated with parametric statistics. Non-parametric statistics can generate valid statistical inferences in these situations.

4. Violations of statistical assumptions for parametric tests - Normality, homogeneity of variance, and normality of difference scores (a short code sketch contrasting the two families follows this list).
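To make the dichotomy concrete, here is a minimal sketch in Python (scipy.stats) that runs a parametric test and its non-parametric counterpart on the same made-up two-group data.

```python
from scipy import stats

# Made-up continuous scores for two independent groups.
group_a = [12.1, 14.3, 11.8, 15.2, 13.7, 12.9, 14.8, 13.3]
group_b = [10.2, 11.5, 9.8, 12.4, 10.9, 11.1, 12.0, 10.5]

# Parametric: independent-samples t-test (compares means).
t, p_t = stats.ttest_ind(group_a, group_b)

# Non-parametric counterpart: Mann-Whitney U (compares rank distributions).
u, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test:       t = {t:.2f}, p = {p_t:.4f}")
print(f"Mann-Whitney: U = {u:.1f}, p = {p_u:.4f}")
```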
Non-parametric statistics are robust to small sample sizes
The right way to conduct statistics

Mark Twain said it best: "There are lies, damned lies, and statistics." Statistics can be misleading from both the standpoint of the person conducting the analysis and the person interpreting it. Many between-subjects studies have small sample sizes (n < 20), and the statistical assumptions for parametric statistics cannot be met.
For basic researchers that operate day in and day out with small sample sizes, the answer is to use non-parametric statistics. Non-parametric statistical tests such as the Mann-Whitney U, Kruskal-Wallis, Wilcoxon, and Friedman's ANOVA are robust to violations of statistical assumptions and skewed distributions. These tests yield interpretable medians, interquartile ranges, and p-values. Non-parametric statistics are also useful in the social sciences due to the inherent measurement error associated with assessing human behaviors, thoughts, feelings, intelligence, and emotional states. The underlying algebra associated with psychometrics relies on intercorrelations amongst constructs or items, and correlations can easily be skewed by outlying observations and measurement error. Therefore, in concordance with mathematical and empirical reasoning, non-parametric statistics should be used often for between-subjects comparisons of surveys, instruments, and psychological measures.
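Here is a minimal sketch of that reporting style in Python (numpy and scipy.stats), with made-up survey scores standing in for real data: medians and interquartile ranges alongside the Mann-Whitney U result.

```python
import numpy as np
from scipy import stats

# Made-up survey composite scores for two small independent groups.
treatment = [18, 22, 19, 25, 21, 30, 17, 24]
control = [15, 14, 18, 16, 13, 19, 12, 17]

# Report the median and interquartile range for each group.
for name, scores in (("treatment", treatment), ("control", control)):
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    print(f"{name}: median = {med:.1f}, IQR = {q1:.1f} to {q3:.1f}")

# Non-parametric between-subjects comparison.
u, p = stats.mannwhitneyu(treatment, control, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")
```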
Ordinal measures and normality
Ordinal level measurement can become interval level with assumed normality

Here is an interesting trick I picked up along the way when it comes to ordinal outcomes and some unvalidated measures. If you run skewness and kurtosis statistics on the ordinal variable and its distribution meets the assumption of normality (skewness and kurtosis statistics are less than an absolute value of 2.0), then you can "upgrade" the variable to a continuous level of measurement and analyze it using more powerful parametric statistics.
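Here is a minimal sketch of that screening rule in Python using scipy.stats; the composite scores are invented. Note that scipy reports excess kurtosis (0 for a normal distribution), which is the scale this kind of |2.0| rule of thumb typically assumes.

```python
from scipy import stats

# Made-up composite scores from an ordinal survey measure.
scores = [21, 24, 23, 27, 25, 22, 26, 28, 24, 23, 25, 26]

skewness = stats.skew(scores)
# scipy's default is excess kurtosis, so a normal distribution scores 0.
kurt = stats.kurtosis(scores)

# Apply the |2.0| screening rule described above.
if abs(skewness) < 2.0 and abs(kurt) < 2.0:
    print(f"skew = {skewness:.2f}, kurtosis = {kurt:.2f}: "
          "treat as continuous and use parametric tests")
else:
    print(f"skew = {skewness:.2f}, kurtosis = {kurt:.2f}: "
          "keep ordinal and use non-parametric tests")
```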
This type of thinking is the reason that scores from the SAT, ACT, GRE, MCAT, LSAT, and validated psychological instruments are perceived as continuous. The scores yielded from these instruments, by definition, are not continuous because a "true zero" does not exist. Scores from these tests are often norm- or criterion-referenced to the population so that they can be interpreted in the correct context. Therefore, with the subjectivity and measurement error associated with classical test theory and item response theory, the scores are actually ordinal. With that being said, if the survey instrument or ordinal outcome is used often in the empirical literature and it meets the assumption of normality as per skewness and kurtosis statistics, treat the ordinal variable as continuous and run analyses using parametric statistics (t-tests, ANOVA, regression) rather than non-parametric statistics (Chi-square, Mann-Whitney U, Kruskal-Wallis, McNemar's, Wilcoxon, Friedman's ANOVA, logistic regression).
Research questions lead to choice of statistical design
Differences between-subjects and within-subjects designs

There are terms in statistics that many people do not understand from a practical standpoint. I'm a biostatistical scientist, and it took me YEARS to wrap my head around some fundamental aspects of statistical reasoning, much less the lexicon. I would hypothesize that 90% of the statistics reported in the empirical literature fall into two different categories, between-subjects and within-subjects. Here is a basic breakdown of the differences in these types of statistical tests:
1. Between-subjects - When you are comparing independent groups on a categorical, ordinal, or continuous outcome variable, you are conducting between-subjects analyses. The "between-" denotes the differences between mutually exclusive groups or levels of a categorical predictor variable. Chi-square, Mann-Whitney U, independent-samples t-tests, odds ratio, Kruskal-Wallis, and one-way ANOVA are all considered between-subjects analyses because of the comparison of independent groups.

2. Within-subjects - When you are comparing THE SAME GROUP on a categorical, ordinal, or continuous outcome ACROSS TIME OR WITHIN THE SAME OBJECT OF MEASUREMENT MULTIPLE TIMES, you are conducting within-subjects analyses. The "within-" relates to the differences within the same object of measurement across multiple observations, time, or literally, "within-subjects." Chi-square Goodness-of-fit, Wilcoxon, repeated-measures t-tests, relative risk, Friedman's ANOVA, and repeated-measures ANOVA are within-subjects analyses because the same group or cohort of individuals is measured at several different time-points or observations (see the sketch after this list).
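Here is a minimal sketch of the two designs in Python (scipy.stats): the same made-up numbers analyzed once as independent groups and once as paired observations from the same people.

```python
from scipy import stats

# Between-subjects: two independent groups of different people.
group_1 = [34, 29, 31, 36, 28, 33]
group_2 = [27, 25, 30, 26, 24, 28]
t_between, p_between = stats.ttest_ind(group_1, group_2)

# Within-subjects: the SAME six people at baseline and follow-up,
# so each entry in `pre` pairs with the same position in `post`.
pre = [34, 29, 31, 36, 28, 33]
post = [30, 27, 30, 31, 26, 29]
t_within, p_within = stats.ttest_rel(pre, post)

print(f"between-subjects: t = {t_between:.2f}, p = {p_between:.4f}")
print(f"within-subjects:  t = {t_within:.2f}, p = {p_within:.4f}")
```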
Author: Eric Heidel, Ph.D. is Owner and Operator of Scalë, LLC.