Scale-related Pet-Peeves
Blog #8 (by Cheryl Jarvis, jarvisc@fau.edu)
Confusing Reflective & Formative Scales - Part 2
In my previous post, I explained the difference between formatively-indicated and reflectively-indicated latent variables. In this post, I’ll answer the question, “So what?” Who should care and why should they care about this distinction in measurement models?
Any academic or practitioner conducting research using multi-item scales to represent latent variables should be aware of the difference and take the steps needed to correctly specify their measurement models. Let me explain why, and give some examples of the consequences of failing to correctly specify formative measures.
If a latent variable is misspecified so that the direction of the causal arrow between the construct and its measured items is reversed, there would be no effect at all on the estimates provided by the measurement model itself. That is, in a confirmatory factor model, the regression parameters representing the relationship between the construct and its measures would not change – just as in algebra, where if X equals Y, it is also true that Y equals X. Thus, you could reverse the causality and get the same result for the relationship between the variable and its measure. However, failing to recognize the conceptual difference between reflective and formative indicators could lead to two key failures.
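Before turning to those failures, here is a minimal sketch of the symmetry point (my own toy example in Python, not drawn from the studies cited below): with standardized variables, the regression slope equals the correlation in either direction, so the estimate alone cannot tell us which way the arrow points.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a construct and a single indicator driven by it.
construct = rng.normal(size=1000)
indicator = 0.8 * construct + rng.normal(scale=0.6, size=1000)

def z(v):
    """Standardize a variable to mean 0, standard deviation 1."""
    return (v - v.mean()) / v.std()

c, i = z(construct), z(indicator)

# For standardized variables, the OLS slope in either direction
# equals the correlation, so the fitted value is the same whether
# we model construct -> indicator or indicator -> construct.
slope_reflective = np.polyfit(c, i, 1)[0]  # construct -> indicator
slope_formative = np.polyfit(i, c, 1)[0]   # indicator -> construct
print(slope_reflective, slope_formative)   # both ~= corr(c, i)
```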
First, misspecifying formative indicators as reflective can lead researchers to apply the wrong statistics in refining those scales. Specifically, if the researcher doesn’t recognize the formative nature of the measures and treats the construct as reflective, he or she will inappropriately apply an internal consistency measure of reliability (such as Cronbach’s alpha) to refine the scale, as dictated by classical test theory for reflective latent variables. Applying such a statistic would lead the researcher to eliminate measures that don’t correlate highly with the other items. However, as discussed in my first post, formative scale items need not correlate with each other – they may do so, but it is not necessary. And, in formatively-indicated constructs, every item represents a different and necessary component of the latent variable. Thus, eliminating items with low inter-item correlations would eliminate part of the meaning of the formative construct, damaging construct validity. This consequence should be a concern for any researcher using multi-item scales to measure latent variables, whether the final hypotheses are tested with structural equation modeling or another statistical approach, such as ANOVA or regression.
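To make the alpha problem concrete, here is a small, hypothetical sketch (my own construction, not from the cited studies): four items driven by one common factor earn a high alpha, while four nearly independent formative-style facets earn an alpha near zero, even though each facet carries unique and necessary meaning.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

def cronbach_alpha(items):
    """Cronbach's alpha for an (n, k) array of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Reflective-style items: all driven by one underlying factor.
factor = rng.normal(size=n)
reflective = np.column_stack(
    [factor + rng.normal(scale=0.5, size=n) for _ in range(4)])

# Formative-style items: distinct, nearly independent components
# that jointly define the construct but need not correlate.
formative = rng.normal(size=(n, 4))

print(cronbach_alpha(reflective))  # high (~0.9): alpha is appropriate
print(cronbach_alpha(formative))   # near 0: low alpha, but dropping
                                   # items would discard construct meaning
```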
For those who use these measurement models as part of a larger structural equation model, the consequences compound further. Although the misspecification arises in the measurement model, its consequences are felt throughout the structural model that incorporates those measures, distorting both the assessment of the structural model’s fit and the regression parameters that test the hypothesized relationships between constructs.
By using Monte Carlo simulations, my colleagues and I demonstrated that if a formatively-indicated construct is misspecified as reflective, SEM fit statistics such as the CFI, GFI, RMSEA, and SRMR can be biased by as much as 100 percent, depending on the location of the misspecified construct within the model, the sample size, and the inter-item correlations (Jarvis, MacKenzie and Podsakoff 2003; MacKenzie, Podsakoff and Jarvis 2005). Thus, a researcher would be misled by the fit statistics, falsely judging that a misspecified model fits the data well, or, vice versa, falsely rejecting a correctly specified model. In the first case, the results of the model and its hypothesis tests could be used to make predictions about relationships among variables that in reality aren’t supported. In the second, good data might be thrown away and valuable findings lost.
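For readers who want to probe this themselves, here is a hedged sketch of the idea, assuming the open-source semopy package (my own toy setup, not the far more extensive published simulation designs): it generates data in which a construct is formed by its items, then fits a misspecified reflective model and prints the resulting fit indices.

```python
import numpy as np
import pandas as pd
import semopy  # third-party SEM package, assumed available

rng = np.random.default_rng(1)
n = 500

# Three distinct causes that FORM the construct; a small shared
# component gives them the modest inter-correlations formative
# items are allowed (but not required) to have.
common = rng.normal(size=n)
x = rng.normal(size=(n, 3)) + 0.4 * common[:, None]
eta = x @ np.array([0.4, 0.4, 0.4]) + rng.normal(scale=0.5, size=n)

# Two reflective indicators of a downstream outcome construct.
y1 = eta + rng.normal(scale=0.5, size=n)
y2 = eta + rng.normal(scale=0.5, size=n)
df = pd.DataFrame(np.column_stack([x, y1, y2]),
                  columns=["x1", "x2", "x3", "y1", "y2"])

# Misspecified model: treats x1-x3 as REFLECTIVE indicators of eta.
desc = """
eta =~ x1 + x2 + x3
out =~ y1 + y2
out ~ eta
"""
model = semopy.Model(desc)
model.fit(df)
print(semopy.calc_stats(model).T)  # inspect CFI, GFI, RMSEA, etc.
```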
Even more disturbing, however, our simulations found that misspecifying a formatively-indicated construct as reflective resulted in regression estimates that were biased by as much as 555 percent. In some cases, depending on the location of the misspecified construct in the structural model, the bias in the regression estimates was negative; in others it was positive. Likewise, the standard errors of the estimates were also biased, and as a result the Type II error rate was as high as 19 percent in some conditions. In other words, a manager could look at a piece of incorrectly designed research and wind up thinking, “If I implement this action, it will increase sales by 25%,” when the action might actually decrease sales, have no effect at all on sales, or increase sales by more than five times as much as estimated.
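To illustrate the managerial point with a deliberately simple toy (again my own construction, with invented weights and effect sizes, and using composite scoring rather than the full SEM estimation of the published studies): when items with very unequal formative weights are averaged into a single reflective-style score, the predicted payoff from improving one component can be several times its true payoff.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 1000
w = np.array([0.1, 0.6, 0.9])  # hypothetical, very unequal formative weights
beta = 0.5                     # true structural effect of eta on sales y

pred_effects = []
for _ in range(reps):
    x = rng.normal(size=(n, 3))            # three distinct causes
    eta = x @ w                            # formatively-indicated construct
    y = beta * eta + rng.normal(scale=0.5, size=n)

    # Misspecified scoring: average the items as if they were
    # interchangeable reflective indicators of eta.
    proxy = x.mean(axis=1)
    b_hat = np.polyfit(proxy, y, 1)[0]

    # Sales lift the misspecified model predicts for a one-unit
    # improvement in x1 alone (x1 moves the average by 1/3).
    pred_effects.append(b_hat / 3)

true_effect = beta * w[0]  # what a one-unit change in x1 actually does
print(f"true effect of x1:     {true_effect:.3f}")            # 0.050
print(f"mean predicted effect: {np.mean(pred_effects):.3f}")  # ~0.27
```

In this toy, the misspecified model overstates the payoff of improving the weakest component by more than a factor of five, which is exactly the kind of decision error the simulations warn about.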
So, the bottom line is that not only can’t we trust the magnitude or statistical significance of the results of our hypothesis tests, we can’t even trust the direction of those estimates. Thus, misspecification of measurement models can lead to inappropriate conclusions and bad decisions on the part of both researchers and managers. Given the troublingly large percentage of misspecified models in the marketing literature, it becomes obvious that a substantial proportion of the empirical results regarding the relationships between latent constructs in the marketing literature may be misleading.
The third and final post on this topic will answer the remaining question: “So what do we do about it?” I’ll offer advice on how to identify and correctly model formatively-indicated constructs.
References:
Bagozzi, Richard P. (1980), Causal Models in Marketing, New York: Wiley.
Fornell, Claes and Fred L. Bookstein (1982), “Two Structural Equation Models: LISREL and PLS Applied to Consumer Exit-Voice Theory,” Journal of Marketing Research, 19 (November), 440-452.
Jarvis, Cheryl Burke, Scott B. MacKenzie and Philip M. Podsakoff (2003), “A Critical Review of Construct Indicators and Measurement Model Misspecification in Marketing and Consumer Research,” Journal of Consumer Research, 30 (September), 199-218.
MacKenzie, Scott B., Philip M. Podsakoff and Cheryl Burke Jarvis (2005), “The Problem of Measurement Model Misspecification in Behavioral and Organizational Research and Some Recommended Solutions,” Journal of Applied Psychology, 90 (4), 710-730.