Provide Evidence of Scale Quality
Part of being a good researcher is caring about the quality of the measures we use. If the quality of a measure is “low,” we should understand that it is either not measuring the intended construct well or measuring a different construct altogether. I have little doubt there are contexts in industry where the quality of a scale is not considered relevant; adequate quality is simply assumed. If the researcher claims that X measures “engagement,” “satisfaction,” “loyalty,” or whatever, it is not questioned. But it should be. In academia, when professors conduct research with the intention of publishing it, it is assumed that journal reviewers will scrutinize their scales along with everything else they have done in their studies. At least, they are supposed to, but my 20 years of experience show that there are far too many exceptions.
My pet peeve is how often someone in this process is not doing their job. I don't know if it is the authors, the reviewers, or the editors who drop the ball, but I am appalled at how often evidence of a scale's quality is not provided. Page limits in a printed journal article are understandable, but that constraint should be mitigated to a great extent nowadays by the nearly unlimited space that can be made available online. For example, current articles in the AMA journals (JM, JMR) routinely provide appendices online. What is a reader to conclude, however, when our field's other top journals publish articles that do not provide evidence of scale quality anywhere the reader can access? It tells me that the journal does not consider measurement quality to be an important issue. I beg to disagree.
On a related issue, I have noted many times that authors report some basic information, such as Cronbach's alpha, for their own scales, but do not when describing a well-known scale. It is as if the authors are saying, “This scale is so well accepted that its popularity counts as validity and I don't need to provide any further evidence for it.” Rubbish! There are very few, if any, scales that are so “validated” that they don't benefit from routine checks being run in each and every case they are used. Based on experience, my guess is that in some cases (many cases?), those “well-known” scales do not perform well when tests of quality are run. The worst case I can think of after my 20 years of reviews (as well as using the scale myself) is the Marlowe-Crowne Social Desirability scale (1960). Few users in our field have reported its reliability, and even fewer have said anything about its dimensionality and validity. The reason appears to be that the scale does not have what we now consider to be good psychometric quality. If this poor quality is reported, reviewers might ask you to redo some portion of your research. If the quality (or lack thereof) is not reported, the paper may get published. But when readers see that no evidence of quality was provided for the well-known scale, it can lead them to believe it is beyond reproach and can be safely used as is.
The point being made here is that no scale is beyond reproach. Even when a scale has undergone extensive validation in one context or at one point in time, that does not negate the importance of continued testing and reporting, even if it is merely at some simple level. BTW, what do I mean by simple testing? At the very least, factor analysis should be run to make sure a multi-item scale is unidimensional. Once that is known, some form of internal consistency should be calculated and reported, e.g., Cronbach's alpha. Reporting those results doesn't require much time during the analysis stage or much space in a printed article. There is little excuse for not providing this information. If there is an excuse, I'd like to hear about it so we can tell our doctoral students the reason why some of our leading researchers who publish articles in our leading journals do not think it is important. Maybe I missed the memo.
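To show just how little effort these simple checks require, here is an illustrative sketch (not from the original text) using synthetic data and an assumed respondents-by-items score matrix. An eigenvalue decomposition of the item correlation matrix stands in for a full factor analysis, with a dominant first eigenvalue being consistent with unidimensionality, followed by Cronbach's alpha:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 200 respondents answer 5 items driven by one latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = latent + 0.5 * rng.normal(size=(200, 5))  # correlated items plus noise

# Dimensionality check: eigenvalues of the item correlation matrix.
# One large eigenvalue with the rest near zero suggests a single factor.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted largest first
print("eigenvalues:", np.round(eigvals, 2))

# Internal consistency of the (apparently unidimensional) scale
print("alpha:", round(cronbach_alpha(scores), 2))
```

The whole check is a handful of lines, which is the point: there is no practical barrier to running and reporting it every time a multi-item scale is used.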
Crowne, Douglas P. and David Marlowe (1960), “A New Scale of Social Desirability Independent of Psychopathology,” Journal of Consulting Psychology, 24 (August), 349-354.