When Journals Don't Adequately Vet Scales
In several of my past pet-peeve posts, I have lamented that too many scholars have had their work published in academic journals despite reporting their usage of multi-item scales with insufficient care. In this post, I will focus more on the role played by journal editors and the reviewers of the manuscripts.
It is really disappointing when journal editors and reviewers do not do their jobs. Regardless of the authors’ roles, it is still up to editors and reviewers to ensure two things regarding scale usage and reporting: 1) that papers will not be accepted if authors have failed to use scales of a certain quality, and 2) that papers will not be published until authors provide a minimal amount of information about the scales they used. I will address these two points below.
To accomplish point 1, standards must be set. Maybe some journals have them, but I am not aware of them. Certainly, I have not come across them in my interactions with marketing journals. There may be unwritten standards, but if they are unwritten, we can’t expect authors and reviewers to uniformly understand what is expected. I won’t present standards in this post but, for the time being, let me say I am not arguing for rigid standards, nor that every journal should have the same standards. I am merely advocating that journals have standards of some sort and that they be communicated clearly to reviewers, authors, and the readership.
To accomplish point 2, journal reviewers should be asked by their respective editors to explicitly examine submissions for adherence to the stated standards. Exceptions may be made. As I said above, I am not advocating strict rules, especially not at first. But I do expect reviewers to critically determine whether authors have used the right measures to test their hypotheses and whether sufficient details attesting to scale quality have been provided. It never ceases to amaze me how many poor-quality scales are used in articles published in our top journals. It makes me wonder what the reviewers were doing. Did they not pay enough attention to the scales? Did they not understand how poor the measures were for the purposes intended? I can tell you this: I have no doubt that the reviewing “culture” varies among the top marketing journals. One of them in particular, which will remain anonymous at this time, seems to have thrown scale quality out the window.
Having gone through the publication process dozens of times myself, I realize that in many cases authors used appropriate measures and reported an adequate amount of psychometric information in their initial journal submissions. Unfortunately, that information was later dropped or greatly reduced for some reason, e.g., space limitations. While that might have been understandable years ago, it is unacceptable today. Thankfully, more journals have instituted web appendices where details about measures and other aspects of studies can be provided. The question is, why aren’t all journals expecting such information from authors and providing it either in the published articles or in web appendices?
In my next post, I will elaborate on what I see as a set of minimal standards that any journal should adopt for authors who use multi-item scales and want their research to be published.