Giving Scales "Bad" Names
As I have reviewed psychometric scales over the last two decades, one of the challenges I have dealt with is what to call the measures. Should I simply use the names given by the authors of the articles, or should I put more thought into it, especially when trying to give similar names to measures of similar constructs?
Below are some examples of scales that I named differently from their users. On the left in each pair is the name the researchers used for their scale. After looking at the items and what previous researchers (if any) had called similar scales, I decided on a title that I thought better communicated to future scale users what was actually being measured.
- Modification / Service Personalization Effort
- Psychic cost perceptions / Store Atmosphere Evaluation
- Inertia / Switching Costs (General)
- Partner Quality / Trust in the Company
- Being Hooked / Ad Message Involvement
- Perceptions of Sacrifice / Attitude About the Product Price
- Make a Difference / Boycotting Effectiveness
- Discrepancy Index / Brand Extension Fit
- Justifiability / Choice Difficulty
- Stigma by Association / Financial Status
- Ability in Future Co-creation / Self-Efficacy
- Warranty Bond Credibility / Attitude Toward the Company
There are several reasons why I have chosen at times to use a different name for scales than the one used by the authors of an article. First, I prefer that scales have names that refer to a general construct so as to facilitate comparison of scales that are supposed to measure the same thing. There are plenty of cases where authors have called their scales something so idiosyncratic that others would never guess the name based upon reading the items themselves. When several measures of the same thing have different names, it becomes much more difficult for a researcher to figure out what measurement options are available.
A second reason I sometimes give scales different names than their users is because there are cases where multiple sets of authors use essentially the same scale but refer to it with different names. For example, there is a scale that measures a person's perceived cognitive effort in responding to a question. It was called the effort index by Menon, Raghubir, and Schwarz (1995), the accessibility manipulation by Raghubir and Menon (1998), the cognitive effort index by Menon, Block, and Ramanathan (2002), and the difficulty index by Menon and Raghubir (2003). I decided to call it Response Difficulty.
A third reason I sometimes change names is because I prefer them short, say five words or fewer. Having said that, I have found it very difficult at times to convey in a few words the breadth of what some scales measure. That is why there are cases where I have felt it necessary to use a "long" title (6-8 words). Such scales usually measure a construct in a very specific context, one for which it is difficult, if not impossible, to generalize to a simple construct name. For example, there is a measure by Petroshius, Titus, and Hatch (1995) that doesn't just measure attitude toward advertising, nor does it just measure attitude toward pharmaceutical advertising. It measures attitude toward pharmaceutical advertising to physicians. Although that title was longer than I prefer, I ended up using it for the scale.
A final point to make is that when I decide to change a name significantly, I also provide the name used by the authors in the description field so readers can see the alternative ways it has been referred to. By the way, in many cases, no name is given to a scale by the users. The measure is merely described in some way such as “the items were summed and used as a manipulation check.” In those cases, it is totally up to me to figure out what the construct is and what other authors have called it.
The bottom line is that I am asking authors to strongly consider giving short, generic names to their scales. The name should be based on the common name for the construct being measured rather than something new or different unless there is a compelling reason to do otherwise.
Menon, Geeta and Priya Raghubir (2003), “Ease-of-Retrieval as an Automatic Input in Judgements: A Mere-Accessibility Framework?” Journal of Consumer Research, 30 (September), 230-243.
Menon, Geeta, Lauren G. Block, and Suresh Ramanathan (2002), “We’re At As Much Risk As We Are Led to Believe: Effects of Message Cues on Judgments of Health Risk,” Journal of Consumer Research, 28 (March), 533-549.
Menon, Geeta, Priya Raghubir, and Norbert Schwarz (1995), “Behavioral Frequency Judgments: An Accessibility-Diagnosticity Framework,” Journal of Consumer Research, 22 (September), 212-228.
Petroshius, Susan M., Philip A. Titus, and Kathryn J. Hatch (1995), “Physician Attitudes Toward Pharmaceutical Drug Advertising,” Journal of Advertising Research, 35 (November/December), 41-51.
Raghubir, Priya and Geeta Menon (1998), “AIDS and Me, Never the Twain Shall Meet: The Effects of Information Accessibility on Judgments of Risk and Advertising Effectiveness,” Journal of Consumer Research, 25 (June), 52-63.