Statistical Consultation Line: (865) 742-7731
Accredited Professional Statistician For Hire
  • Contact Form

Operationalization of constructs and behaviors

9/29/2014

0 Comments

 

Operationalization leading to understanding

Measurement of new phenomena

The term operationalization is very near and dear to my heart since I conducted my dissertation on operationalizing and validating the construct of isomorphism in supervision. Operationalization essentially means defining observable and measurable components of a given construct or behavior.

The term is used often in the social sciences because scientists in that field have to spend so much time creating and validating their constructs of interest, just to be able to measure them. From an empirical standpoint, they have to operationalize the construct as it exists within the perception, context, experience, and environment of members of a population.

Many social scientists use survey methodologies (cross-sectional) to operationalize an abstract, new, or unique construct or behavior. They master the content area related to the construct, create a survey, and then administer it to a sample from a targeted population to see what content areas or items account for the most variance. Principal components analysis and confirmatory factor analysis are used to establish the construct validity of survey instruments.
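The variance-accounting step above can be sketched in a few lines of Python. This is a minimal illustration with simulated survey data; the item count, two-factor structure, and the Kaiser eigenvalue-greater-than-one retention rule are my own illustrative assumptions, not a prescription for any particular instrument:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 6-item survey: items 1-3 and 4-6 each driven by one latent factor
n = 200
factor_a = rng.normal(size=n)
factor_b = rng.normal(size=n)
items = np.column_stack([
    factor_a + rng.normal(scale=0.5, size=n),
    factor_a + rng.normal(scale=0.5, size=n),
    factor_a + rng.normal(scale=0.5, size=n),
    factor_b + rng.normal(scale=0.5, size=n),
    factor_b + rng.normal(scale=0.5, size=n),
    factor_b + rng.normal(scale=0.5, size=n),
])

# Principal components come from the correlation matrix of the items
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]      # largest first
explained = eigenvalues / eigenvalues.sum()       # proportion of variance

# Kaiser criterion: retain components with eigenvalue > 1
n_components = int((eigenvalues > 1).sum())
print(n_components)            # expect 2 latent components here
print(explained[:2].round(2))
```

With real survey data you would follow this exploratory step with confirmatory factor analysis in dedicated software before claiming construct validity.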

Medical professionals use cross-sectional research designs to establish the prevalence of disease states. Operationalization within physiology deals more with using "gold standard" techniques and concrete measures such as lab values.  Treatment protocols are another form of operationalization within medicine.  Certain procedures like a central line insertion require 20+ sequential steps to be conducted by surgical team members, every time.  With the advent of the Affordable Care Act and upcoming clinical pathways, operationalization will play an even larger role in building economical, efficient, and effective standards of care.    

Scale, LLC
0 Comments

Evidence-based medicine and its applications

9/27/2014

0 Comments

 

Critical appraisal of the clinical evidence

The cart before the horse

As I'm getting ready to add an Education section to the website, I decided to go back to first principles.  Bloom's Taxonomy had a pervasive impact on my philosophy of learning, teaching, and cognitive complexity.  I used it back in February of this year for an evidence-based medicine (EBM) presentation at work.  Bloom's Taxonomy* stipulates six levels of "knowing," or cognitive complexity.  The six levels, in increasing order, are knowledge, comprehension, application, analysis, synthesis, and evaluation.  

Here is the conundrum that Bloom's Taxonomy exacts upon applied EBM practice:

There are five steps to EBM:  Asking, acquiring, appraising, applying, and assessing.  

With asking, the EBM literature posits that clinicians experience "cognitive dissonance" when they have a knowledge gap in their clinical practice.  In order to deter the dissonance, the clinician decides to ask a clinical question to fill that gap.

With acquiring, the clinician uses the PICO (population, intervention, comparator, outcome) mnemonic to acquire the best clinical evidence, given the resources and time available.

Now we get to critical appraisal of the literature.  Looking at the nomenclature, the word "appraisal" reflects the highest level of "knowing" or cognitive complexity in Bloom's Taxonomy: evaluation.  EBM stipulates that clinicians must be able to critically appraise the methods and statistical analyses of published studies.  This means that clinicians have to be functioning at a very high cognitive level to do this correctly.

However, past literature has shown that researchers feel anxious and intimidated by statistics due to a lack of experience and competency.**  Also, undergraduate and graduate medical training rarely equips clinicians with the necessary competencies to conduct and effectively interpret clinical research evidence.***

So, how can your everyday clinician with limited empirical/statistical training, experiencing "cognitive dissonance" for a second time within the five steps of EBM, critically appraise the literature?  Therein lies the conundrum, in my opinion.  

I'm positing that we need to refocus our efforts on the lower echelons of Bloom's Taxonomy by educating physicians, residents, fellows, faculty, pharmacists, nurses, and staff to better understand (knowledge), recognize (comprehension), choose (application), examine (analysis), and design (synthesis) research studies before we can expect them to critically appraise (evaluation) the literature.  

Scale, LLC

*Bloom, B. S.; Engelhart, M. D.; Furst, E. J.; Hill, W. H.; Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: David McKay Company.
**Marquardt, D. W. Criteria for evaluating the performance of statistical consultants in industry. The American Statistician 1981;35:216-219.
***Wegwarth, O. Statistical illiteracy in residents: What they do not learn today will hurt their patients tomorrow. Journal of Graduate Medical Education 2013;5:340-341.
0 Comments

G*Power for the masses

9/26/2014

0 Comments

 

G*Power is a necessary tool for every researcher's toolkit

Easy statistical power and sample size calculations

I'm trying to run an online business, so I'm fully Google-integrated. From behind-the-scenes tracking measures, I can see that many search queries, in various derivations, relate to sample size calculation.

There is an open-source tool available to EVERYONE that allows you to calculate your own a priori and post hoc power analyses. It is called G*Power and as your personal statistical consultant, I highly suggest you go to the following web address and download Version 3.0 to your respective device:

http://www.gpower.hhu.de/en.html    

The researchers who developed this program have made a great contribution to science. It is truly a great and FREE program that can run a litany of different power analyses. You can find out in minutes how large a sample size you need, given that you have an idea of the effect size you are attempting to detect in your study.

Use means, proportions, and variance measures from published studies in your field to have the most empirically rigorous hypothesized effect. Enter these values into G*Power and then adjust the variance and magnitude of the effect size to see how the required sample size changes.   
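For readers who prefer to script the same calculation, here is a minimal Python sketch of the standard normal-approximation sample-size formula for comparing two group means. The inputs (Cohen's d = 0.5, alpha = .05, power = .80) are illustrative assumptions; G*Power's exact t-based answer will typically run a participant or so higher than this approximation:

```python
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-group comparison of means with standardized effect size d."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = norm.ppf(power)            # quantile corresponding to desired power
    return 2 * ((z_alpha + z_beta) / d) ** 2

# Medium effect (d = 0.5), alpha = .05, power = .80
print(math.ceil(n_per_group(0.5)))   # 63 per group by this approximation
```

Rerunning with a larger hypothesized effect (say d = 0.8) shows how quickly the required sample shrinks, which is exactly the sensitivity exploration recommended above.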

Click on the Sample Size button to access the methods of conducting and interpreting sample size calculations for ten different statistical tests.

Scale, LLC
0 Comments

Preliminary statistical consultation

9/26/2014

0 Comments

 

Support your local statistician!

Seek out methodological and statistical consultation

If you have access to statistical consultants or statisticians within your empirical or clinical environment, seek out their services in the preliminary phases of planning your study. Here is a list of things that I do for residents, fellows, faculty, physicians, pharmacists, nurses, and staff at an academic regional medical campus:

1. Sample Size - I conduct sample size calculations for at least 80-85% of my first-time clients. They often want to know how many people they need to reach a significant p-value. We work through the process of acquiring an evidence-based measure of effect that reflects what their research question is trying to answer.

It feels good knowing that you have a good chance of detecting significance with a small sample size. It is also good to find out early that you will have to collect A LOT more observations than you thought. Post hoc power analyses should be run for any non-significant main effects that may be Type II errors (limited or small sample sizes).

2. Statistical analysis - Real biostatistical scientists and statisticians will conduct your statistical analyses in an objective and expeditious manner to help you answer your research questions. Please help them understand, to the best of your ability, what your research question is and what research design you want to use to answer it. They will be able to help you choose the correct statistic given that you can tell them the scale of measurement for your primary outcome and what type of design (between-subjects, within-subjects, correlational, mixed, or multivariate) you want to use to answer your question. It is also important to know WHO or WHAT you want to include in your sample in terms of inclusion and exclusion criteria. Finally, know your content area. We may not share your knowledge/philosophical base, and we need to understand the entire picture, as much as you can tell us.

3. Database management - Go ahead and let us build your database in a basic Excel spreadsheet and send an accompanying code book in Word so that we are all on the same page. It helps us all know what is going on, what variables are being collected, what they mean, how they are measured, and how the analysis will work. Share it with all members of the research team. Use the code book when entering your data. Tell the rest of us if you make changes to the code book or database. These simple tasks and communicative efforts can mean the difference between your statistics being run in five minutes versus five weeks.  SERIOUSLY.

4. Write-up of findings for publication - We will give you an annotated write-up of your findings with statistical outputs and give you basic and unbiased interpretations of the statistical results of your study. We can help you write up the statistical methods and results sections of your abstracts and manuscripts. We can even help you design tables and graphs that will make your study findings more aesthetically and visually appealing to your audience.

When it comes to authorship, if you feel that your statistical professional's contribution to the design, execution, and interpretation of your study warrants authorship, offer it to them. They will greatly appreciate it! However, YOU SHOULD NEVER BE REQUIRED TO GIVE US AUTHORSHIP JUST BECAUSE WE RAN YOUR STATISTICS FOR YOU.  IT IS UNETHICAL FOR US TO REQUIRE AUTHORSHIP FOR DOING OUR JOB. THAT IS, IF OUR JOB IS TO RUN STATISTICS IN YOUR EMPIRICAL OR CLINICAL ENVIRONMENT.          

Scale, LLC
0 Comments

Using naturally skewed continuous variables as outcome variables

9/23/2014

0 Comments

 

Transformed outcomes

Some continuous variables will be naturally skewed

In medicine, there is an important metric that signifies efficiency and quality in healthcare: length of stay (LOS) in the hospital. When thinking about the distribution of a variable such as LOS, you have to put it into a relative context. The vast majority of people will have an LOS of 0-3 days, given the type of treatment or injury that brought them to the hospital. VERY FEW individuals will stay at the hospital one month, six months, or a year. Therefore, the distribution looks nothing like the normal curve and is extremely positively skewed.  

As a researcher, you may want to predict for a continuous variable that has a natural and logical skewness to its distribution in the population. Yet, the assumption of normality is a central tenet of running statistical analyses. What is one to do in this situation?

The answer is to first run skewness and kurtosis statistics to assess the normality of your continuous outcome.  If either statistic is above an absolute value of 2.0, then the distribution is non-normal. Check for outliers in the distribution that are more than 3.29 standard deviations away from the mean. Make sure that the outlying observations were entered correctly.
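As a sketch of that normality check, here is how the skewness, kurtosis, and 3.29-SD outlier screen might look in Python. The simulated length-of-stay data and its exponential shape are hypothetical stand-ins for a real skewed outcome:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
# Hypothetical length-of-stay data: mostly short stays, a long right tail
los = rng.exponential(scale=2.0, size=500)

print(round(skew(los), 2))      # around 2 for exponential data -> non-normal
print(round(kurtosis(los), 2))  # positive excess kurtosis, also flags non-normality

# Screen for outliers more than 3.29 SDs from the mean
z_scores = (los - los.mean()) / los.std()
print(int((np.abs(z_scores) > 3.29).sum()))
```

With a real dataset, any observation flagged here should be checked against the source records before you decide among the options below.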

You now have a choice:

1. You can delete the outlying observations in a listwise fashion. This should be done only if the number of outlying observations is less than 10% of the overall distribution. This is the least preferable choice.

2. You can conduct a logarithmic transformation on the outcome variable. Doing this will normalize the distribution so that you can run the analysis using parametric statistics. The unstandardized beta coefficients, standard errors, and standardized beta coefficients are no longer interpretable on the original scale, but the significance of the associations between the predictor variables and the transformed outcome can yield some inferential evidence.

3. You can recode the continuous outcome variable into a lower level scale of measurement such as ordinal or categorical and run non-parametric statistics to seek out any associations. Of course, you are losing the precision and accuracy of continuous-level measurement and introducing measurement error into the outcome variable, but you will still be able to run inferential statistics.

4. You can use non-parametric statistics without changing the skewed variable at all. That is one of the primary benefits of non-parametric statistics: They are robust to violations of normality and homogeneity of variance. Instead of interpreting means and standard deviations, you will interpret medians and interquartile ranges with non-parametric statistics. 
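Options 2 and 4 can be illustrated side by side. This sketch uses hypothetical skewed length-of-stay data for two groups; the group sizes, exponential scales, and log(x + 1) transform are illustrative choices, not recommendations for any particular study:

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu, skew

rng = np.random.default_rng(7)
# Hypothetical skewed LOS (days) for two treatment groups
group_a = rng.exponential(scale=2.0, size=100)
group_b = rng.exponential(scale=4.0, size=100)

# Option 2: log-transform, then run the parametric test on the new scale
log_a, log_b = np.log(group_a + 1), np.log(group_b + 1)
t_stat, t_p = ttest_ind(log_a, log_b)

# Option 4: non-parametric test on the raw, skewed values
u_stat, u_p = mannwhitneyu(group_a, group_b)

print(round(skew(group_a), 2))       # positive skew on the raw scale
print(round(t_p, 4), round(u_p, 4))  # p-values from the two approaches
```

Note that the two p-values answer slightly different questions: the t-test compares means of the log-transformed values, while the Mann-Whitney U compares the rank distributions, which is why medians and interquartile ranges are the right summary statistics for the latter.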

Click on the Statistics button to learn more.

Scale, LLC

0 Comments

Statistical tests

9/22/2014

0 Comments

 

Statistical tests are used to answer research questions

It's not about the statistics, it's about the question.

In my experience, statistics is a cognitive dissonance-inducing mathematical science, and no one tends to recall their personal and professional statistical experiences with much zeal. It's as if there is an automatic recoil when the topic of statistics enters the discussion and planning of a research study. The literature has posited that statistics are intimidating and nebulous because many people do not possess the necessary competencies and experience with statistics, and because they do not understand the lexicon of the science.

The most important thing to remember about applied statistics, despite their prevalence, relevance, and utility in everyday life, is that they are tools that human beings use to communicate the results of data analysis. Hypothesis testing is employed in empirical research so that researchers can present their findings in a relative context that is interpretable and applicable in other research and applied environments.

Statistics are useful ONLY when they are used to answer useful, appropriate, answerable, relevant, and valid research questions that are grounded in the empirical literature.

Scale, LLC
0 Comments

The research question is the foundation of everything empirical

9/20/2014

0 Comments

 

Foundation for measurement, design, power, and statistics

80% of preliminary study planning should be given to the research question

As a biostatistical consultant at an academic regional medical campus, I am supposed to spend 80% of my time working with residents, fellows, faculty, clinicians, researchers, nurses, pharmacists, and hospital staff to formulate and refine their research question. THAT is how important it is to any research study. 

A research question is cultivated through researchers' efforts to know the existing literature, their clinical expertise and interests, their collaboration with peers, and their intrinsic motivation towards scientific discovery and innovation. Answerable, appropriate, meaningful, and purposeful research questions make valid and needed contributions to the literature.

Deductive reasoning should be used when formulating a research question. Oftentimes, researchers will want to answer EVERY possible question and collect data on EVERY single variable that they can in hopes of finding SOMETHING SIGNIFICANT. This is not the way that REAL science works. A focused and refined research question is the basis for constructing and executing research. This does not mean that researchers cannot ask secondary, tertiary, and ancillary research questions as demographic, clinical, and confounding variables are yielded from literature reviews! Of course, these are important questions to ask and often lead to great discoveries! (Example:  Viagra) However, having ONE research question that serves as the foundation for a study is extremely important and should not be overlooked!

Many novice researchers will plan an entire study around a type of research design or a statistic that they read in an article. REMEMBER, research designs and statistical tests are chosen to answer research questions, NOT the inverse.

All of this being said, there are two existing frameworks that greatly assist in formulating (FINER) and refining (PICO) research questions. FINER stands for feasible, interesting, novel, ethical, and relevant. PICO stands for population, intervention, comparator, and outcome.

Scale, LLC
0 Comments

Prevalence vs. Incidence

9/19/2014

0 Comments

 

Prevalence and incidence used correctly

Difference in important epidemiological measures

The terms prevalence and incidence are often used interchangeably. However, they are extremely different in their utility and interpretability within epidemiology.

Prevalence is the proportion of cases or disease states that exist in a population at any given time.  Prevalence is established using cross-sectional research designs.  Measures of prevalence can be used to generate odds ratios for outcomes occurring given an exposure or non-exposure.  It is calculated when data is collected in a retrospective fashion. 

Incidence is the number of new cases or disease states that occur in a population.  Incidence is established in cohort designs.  Measures of incidence are used to establish the relative risk of an outcome given treatment or no treatment.  It is calculated when data is collected in a prospective fashion.
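The two measures can be worked through on a small 2x2 table. The counts below are hypothetical, chosen only to make the arithmetic visible; in a real study the table would come from your observed data:

```python
# Hypothetical 2x2 table:          Outcome   No outcome
# Exposed / treated                  a=30       b=70
# Unexposed / control                c=15       d=85
a, b, c, d = 30, 70, 15, 85

# Odds ratio: prevalence-based, from cross-sectional or case-control data
odds_ratio = (a * d) / (b * c)

# Relative risk: incidence-based, from prospective cohort data
risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
relative_risk = risk_exposed / risk_unexposed

print(round(odds_ratio, 2))     # 2.43
print(round(relative_risk, 2))  # 2.0
```

Notice that the odds ratio overstates the relative risk here (2.43 vs. 2.0); the two converge only when the outcome is rare, which is one reason the distinction between prevalence and incidence designs matters.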

​Click on the Epidemiology button below to continue.

Scale, LLC
0 Comments

Non-parametric statistics as post hoc tests

9/18/2014

0 Comments

 

Mann-Whitney U and Wilcoxon as post hoc tests

Explain significant main effects from Kruskal-Wallis tests and Friedman's ANOVA

Non-parametric statistics are used when analyzing categorical and ordinal outcomes.  These statistics are also used with smaller sample sizes (n < 20) and when the assumptions of certain statistical tests are violated.

The Mann-Whitney U test is employed when comparing two independent groups on an ordinal outcome.  It is also used when the assumptions of an independent samples or unpaired t-test are violated (normality, homogeneity of variance).

The Wilcoxon test is used when comparing ordinal outcomes at two different points in time or within-subjects.  It is further used when the assumptions of a repeated measures t-test are violated (independence of observations, normality of difference scores).

A lesser known use for these two non-parametric tests is when significant main effects are found for non-parametric Kruskal-Wallis and Friedman's ANOVA tests.  Much like with a parametric one-way ANOVA or repeated-measures ANOVA, if a significant main effect is found using non-parametric statistics, then a post hoc analysis must be undertaken to explain the significant main effect. Non-parametric statistics do not have Tukey, Scheffe, and Dunnett tests like parametric statistics!

When a significant main effect is found using a Kruskal-Wallis test, subsequent Mann-Whitney U tests must be employed in a post hoc fashion to explain where amongst the independent groups the actual differences exist.

The same holds true for Friedman's ANOVA.  If a significant main effect is found, then Wilcoxon tests must be used in a post hoc fashion to explain where the significant changes occur amongst the observations or within-subjects.
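The Kruskal-Wallis-then-Mann-Whitney workflow above can be sketched as follows. The three simulated groups, the shift in group C, and the Bonferroni adjustment for the three pairwise comparisons are illustrative assumptions:

```python
import itertools
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(3)
# Hypothetical outcome scores for three independent groups (C shifted upward)
groups = {
    "A": rng.normal(0.0, 1.0, 40),
    "B": rng.normal(0.0, 1.0, 40),
    "C": rng.normal(1.5, 1.0, 40),
}

# Omnibus test across the three independent groups
h_stat, p_main = kruskal(*groups.values())
print(round(p_main, 6))   # a significant main effect is expected with this shift

# Post hoc Mann-Whitney U tests at a Bonferroni-adjusted alpha
posthoc = {}
pairs = list(itertools.combinations(groups, 2))
alpha_adj = 0.05 / len(pairs)   # 0.05 / 3 comparisons
for g1, g2 in pairs:
    _, p = mannwhitneyu(groups[g1], groups[g2])
    posthoc[(g1, g2)] = p
    print(g1, "vs", g2, "p =", round(p, 4))
```

The Friedman's ANOVA case follows the same pattern, swapping in Wilcoxon tests for the pairwise within-subjects comparisons.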

Scale, LLC
0 Comments

Sampling methods in research

9/18/2014

0 Comments

 

Probability vs. non-probability

Establishing causal effects vs. associations

Experimental research designs, like randomized controlled trials, can yield evidence of causal effects, while observational designs like case series, case-controls, and cohorts cannot determine any cause and effect relationships. The reason is that random selection and random assignment of participants allow any differences at baseline to occur purely by chance AND allow these differences to be adjusted for in subsequent statistical analyses.

From a conceptual standpoint, a sample assembled in a completely random fashion will be more REPRESENTATIVE of the actual population. Always remember that inferential statistics are conducted on samples to make INFERENCES BACK TO THE POPULATION. With a randomized sample, all of the biodiversity that exists in the real world has a better chance of being accounted for in the statistical analyses.  

Random selection (every member of a given population has an equal chance of being selected for the study) and random assignment (selected participants are randomly allocated to either the treatment or control group) are the primary components of probability sampling.

There are three types of probability sampling:

1. Simple random sampling - Every member of a population has an equal chance of being selected for participation in the study.  

2. Stratified random sampling - Independent strata within a given population are randomly sampled.  Each stratum must be overtly defined and homogeneous in some relative way.  Simple random sampling is then conducted on the stratum (singular) or strata (plural) of interest. 

3. Clustered random sampling - Naturally occurring or defined subgroups of a given population are randomly sampled. The subgroups need to be defined and are often grouped according to socioeconomic, demographic, clinical, or theoretical characteristics.
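The three probability sampling methods can be sketched with a toy sampling frame. The frame of 1,000 patients, the two clinic strata, and the cluster structure below are all hypothetical constructions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sampling frame of 1,000 patient IDs with a stratum label
population = np.arange(1000)
strata = np.repeat(["clinic_1", "clinic_2"], 500)

# 1. Simple random sampling: every member has an equal chance of selection
simple = rng.choice(population, size=100, replace=False)

# 2. Stratified random sampling: a simple random sample within each stratum
stratified = np.concatenate([
    rng.choice(population[strata == s], size=50, replace=False)
    for s in np.unique(strata)
])

# 3. Clustered random sampling: randomly pick whole subgroups (clusters)
clusters = population.reshape(100, 10)          # 100 clusters of 10 patients
picked = rng.choice(100, size=10, replace=False)
clustered = clusters[picked].ravel()

print(len(simple), len(stratified), len(clustered))  # 100 100 100
```

Note the trade-off the sketch makes visible: stratified sampling guarantees balanced representation across the strata, while clustered sampling is cheaper to field but samples entire subgroups at once.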

Non-probability sampling is used in observational research designs. The lack of randomization in these designs introduces selection and observation biases that can greatly skew the inferences yielded from statistics.

There are two types of non-probability sampling techniques:

1. Convenience sampling is the most prevalent form of non-probability sampling. Researchers just access retrospective data available to them in their empirical or clinical environment, or via existing databases, and conduct statistical analyses.

2. Purposive sampling is a more focused approach to sampling where specific groups of individuals are targeted for participation in the study.    

Scale, LLC

0 Comments

    Author

    Eric Heidel, Ph.D. is Owner and Operator of Scalë, LLC.



Copyright © 2022 Scalë. All Rights Reserved. Patent Pending.