
Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

3/18/2016


 

Newest publication from Dr. Eric Heidel, creator of Research Engineer

Directly related to the Statistical Power engine of Research Engineer

My newest published article in Scientifica is now available for download online and on the Research Engineer website. The creation of the Statistical Power engine of Research Engineer led me to write the article. Click the Download Article button below to download a PDF directly from the website, or click the Statistical Power button to be taken to the aforementioned engine. Many thanks and regards to everyone who uses Research Engineer! -EH

Download Article
Statistical Power
Research Engineer Home Page
Scale, LLC
1 Comment

Writing survey items

11/10/2014


 

Write survey items that cover content areas

Survey items are composed of item stems and response sets

When writing survey items that use Likert scales as response sets, use 5-point Likert scales presented in increasing order. A 5-point scale is preferable to a 4-point, 3-point, or dichotomous scale because it allows for more variance in responses and it includes a "neutral" rating.

Variance in responses is needed to properly assess the diversity that may exist in a population. Increased variance is also important for the underlying mathematics of reliability analysis, exploratory factor analysis, validity analysis, and confirmatory factor analysis.
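
As an illustration of why item-level variance matters, here is a minimal sketch (assuming Python with numpy and made-up 5-point Likert responses; rows are respondents, columns are items) that computes per-item variances and Cronbach's alpha. Items with little or no variance contribute almost nothing to a reliability analysis.

    # A minimal sketch with hypothetical 5-point Likert responses
    # (rows = respondents, columns = items).
    import numpy as np

    responses = np.array([
        [4, 5, 4, 3],
        [3, 4, 3, 3],
        [5, 5, 4, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
        [1, 2, 2, 1],
    ])

    # Per-item variance: items with no spread add little to reliability
    # or factor analyses.
    item_variances = responses.var(axis=0, ddof=1)

    # Cronbach's alpha from the classical formula:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    k = responses.shape[1]
    total_scores = responses.sum(axis=1)
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_scores.var(ddof=1))

    print("Item variances:", item_variances.round(2))
    print("Cronbach's alpha:", round(alpha, 2))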

The use of 5-point Likert scales also works well aesthetically for structuring a survey. Space and time can be saved in survey administration when items from similar content areas share the same 5-point Likert response set; these questions can be formatted into a matrix.

Finally, the response options of a Likert scale should be presented in increasing order, going from left to right.

For example:

Strongly Disagree, Disagree, Neither Agree Nor Disagree, Agree, Strongly Agree
Never, Rarely, Sometimes, Often, Always
Very Poor, Poor, Moderate, Good, Very Good
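
When it comes time to analyze responses, each label in a response set like the ones above is typically coded 1 through 5 in increasing order. A minimal sketch, assuming Python and using the agreement scale shown above:

    # Hypothetical coding of the 5-point agreement response set above,
    # in increasing order from 1 to 5.
    AGREEMENT_SCALE = {
        "Strongly Disagree": 1,
        "Disagree": 2,
        "Neither Agree Nor Disagree": 3,
        "Agree": 4,
        "Strongly Agree": 5,
    }

    raw_answers = ["Agree", "Strongly Agree", "Neither Agree Nor Disagree"]
    coded = [AGREEMENT_SCALE[answer] for answer in raw_answers]
    print(coded)  # [4, 5, 3]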

Scale, LLC
0 Comments

95% confidence intervals

10/5/2014


 

Precision and consistency of treatment effects

95% confidence intervals are dependent upon sample size

If there is ANY statistical calculation that holds true day-to-day value for researchers and clinicians, it is the 95% confidence interval wrapped around the findings of inferential analyses. Statistics is not an exact science in the way other mathematical sciences are: measurement error is inherent whenever we attempt to measure anything related to human beings, and FEW tried-and-true causal effects have been proven scientifically. The strength of statistics as a mathematical science lies in its ability to build confidence intervals around findings and put them into a relative context.
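
As a concrete illustration of how the interval depends on sample size, here is a minimal sketch (assuming Python with scipy and made-up summary statistics) that builds a 95% confidence interval around a sample mean at two different sample sizes. The interval narrows considerably as n grows.

    # 95% confidence interval around a mean: mean +/- t * (SD / sqrt(n)).
    # The summary statistics below are made up purely for illustration.
    from math import sqrt
    from scipy import stats

    def mean_ci(mean, sd, n, level=0.95):
        # t critical value with n - 1 degrees of freedom
        t_crit = stats.t.ppf(1 - (1 - level) / 2, df=n - 1)
        margin = t_crit * sd / sqrt(n)
        return mean - margin, mean + margin

    print(mean_ci(mean=120, sd=15, n=25))   # wider interval, small sample
    print(mean_ci(mean=120, sd=15, n=400))  # narrower interval, large sample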

95% confidence intervals also serve as the primary inference for unadjusted odds ratios, relative risk, hazard ratios, and adjusted odds ratios. If the confidence interval for one of these ratio measures crosses 1.0, the effect is not statistically significant. Wide 95% confidence intervals are indicative of small sample sizes and reflect decreased precision of the effect. Constricted or narrow 95% confidence intervals reflect increased precision and consistency of a treatment effect.
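
To make the "crosses 1.0" rule concrete, the following sketch (assuming Python with scipy and a hypothetical 2x2 table of exposure by outcome) computes an unadjusted odds ratio and its 95% confidence interval on the log scale, then checks whether the interval crosses 1.0.

    # Unadjusted odds ratio with a 95% CI from a hypothetical 2x2 table:
    #                outcome+   outcome-
    #   exposed          a          b
    #   unexposed        c          d
    from math import exp, log, sqrt
    from scipy.stats import norm

    a, b, c, d = 30, 70, 20, 80  # made-up cell counts

    odds_ratio = (a * d) / (b * c)
    se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    z = norm.ppf(0.975)                      # ~1.96 for a 95% CI
    lower = exp(log(odds_ratio) - z * se_log_or)
    upper = exp(log(odds_ratio) + z * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
    print("Non-significant" if lower <= 1.0 <= upper else "Significant")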

In essence, p-values should not be what people get excited about when it comes to statistical analyses. The interpretation of your findings within the context of the estimated population means, odds, risks, hazards, and their 95% confidence intervals IS the real "meat" of applied statistics.

Scale, LLC
0 Comments

Effect size, sample size, and statistical power

9/15/2014


 

Effect size, sample size, and statistical power

Choose an effect size to maximize statistical power and decrease sample size

Effect size, sample size, and statistical power are nebulous empirical constructs, and each requires a strong conceptual working knowledge. Basic interdependent relationships also exist among the three constructs: holding everything else constant, a change in one will ALWAYS produce a predictable change in the other two.

An effect size is the difference researchers hypothesize, in an a priori fashion, between independent groups (between-subjects analyses) or across time or repeated observations (within-subjects analyses), or the magnitude and direction of the hypothesized association between constructs (correlations and multivariate analyses).

Effect size planning is perhaps the HARDEST part of designing a research study. Oftentimes, researchers have NO IDEA what type of effect size they are trying to detect.

First and foremost, when researchers cannot state the hypothesized differences in their outcomes, they should use an evidence-based measure of effect drawn from a published study that is theoretically or conceptually similar to the phenomenon of interest. Using an evidence-based measure of effect in an a priori power analysis demonstrates greater empirical rigor on the part of the researchers and, by anchoring the study to published values, increases its internal validity.
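
For instance, if a conceptually similar published study reports group means and standard deviations, those values can be converted into a standardized effect size for the a priori power analysis. A minimal sketch, assuming Cohen's d with a pooled standard deviation and purely hypothetical published values:

    # Cohen's d from published summary statistics (values are hypothetical).
    from math import sqrt

    def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
        # Pooled standard deviation across the two published groups
        pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (mean1 - mean2) / pooled_sd

    # e.g., a published trial reporting 24.0 (SD 6.0) vs. 21.0 (SD 5.5)
    print(round(cohens_d(24.0, 6.0, 40, 21.0, 5.5, 38), 2))  # ~0.52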

Sample size is the absolute number of participants sampled from a given population for the purpose of running inferential statistics. The term inferential reflects the basic empirical reasoning at work: we draw a representative sample from a population and then conduct statistics in order to make inferences back to that population. An important part of preliminary study planning is to specify the inclusion and exclusion criteria for participation in your study and then to estimate how large a participant pool is available to you from which to draw a sample.

Because standard errors shrink as the number of observations grows, large sample sizes will drastically increase your chances of detecting a statistically significant finding, or in other terms, drastically increase your statistical power. Large sample sizes will also allow you to detect both large and small effect sizes, regardless of the outcome's scale of measurement, the research design, and the magnitude, variance, and direction of the effect. Small sample sizes will decrease your chances of detecting statistically significant differences (statistical power), especially with categorical and ordinal outcomes, between-subjects and multivariate designs, and small effect sizes.
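
The sketch below illustrates this relationship with a normal approximation to the power of a two-sided, two-sample comparison of means (assuming Python with scipy; the effect size of 0.4 is an arbitrary example). As the per-group sample size grows, power climbs toward 1.

    # Approximate power of a two-sided two-sample test via the normal
    # approximation: power ~ Phi(d * sqrt(n/2) - z_{1 - alpha/2}).
    from math import sqrt
    from scipy.stats import norm

    def approx_power(d, n_per_group, alpha=0.05):
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.cdf(d * sqrt(n_per_group / 2) - z_crit)

    # Power is roughly 0.24, 0.52, 0.81, 0.98 for these sample sizes.
    for n in (20, 50, 100, 200):
        print(n, round(approx_power(d=0.4, n_per_group=n), 2))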

Statistical power is the chance you have as a researcher of rejecting the null hypothesis, given that the treatment effect actually exists in the population. Basically, statistical power is the chance of finding a significant difference or main effect when running statistical analyses. Statistical power is what you are interested in when you ask, "How many people do I need to find significance?"
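
Turning that question into arithmetic, a standard closed-form approximation for a two-sided, two-group comparison of means is n per group ~ 2 * ((z_{1-alpha/2} + z_{power}) / d)^2. A minimal sketch under those assumptions, using scipy for the normal quantiles:

    # Approximate per-group sample size for a two-sided two-group
    # comparison of means: n ~ 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    from math import ceil
    from scipy.stats import norm

    def n_per_group(d, power=0.80, alpha=0.05):
        z_alpha = norm.ppf(1 - alpha / 2)
        z_power = norm.ppf(power)
        return ceil(2 * ((z_alpha + z_power) / d) ** 2)

    # Larger hypothesized effects need far fewer participants per group.
    for d in (0.2, 0.5, 0.8):
        print(d, n_per_group(d))  # roughly 393, 63, 25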

In the applied empirical sense, measuring for large effect sizes increases statistical power. Trying to detect small effect sizes will decrease your statistical power. Continuous outcomes increase statistical power because of increased precision and accuracy in measurement. Categorical and ordinal outcomes decrease statistical power because of decreased variance and objectivity of measurement. Within-subjects designs generate more statistical power due to participants serving as their own controls. Between-subjects and multivariate designs require more observations to detect differences and therefore decrease statistical power.      

Scale, LLC
0 Comments

    Author

    Eric Heidel, Ph.D. is Owner and Operator of Scalë, LLC.

    Contact Form

Contact Dr. Eric Heidel
consultation@scalelive.com
(865) 742-7731

Copyright © 2022 Scalë. All Rights Reserved. Patent Pending.