Deceptive Scale Reporting
In my years of reviewing scales, one of the most upsetting issues I have had to deal with is deception. By that I mean, there have been countless articles in which something was said that led me to believe one thing, but I learned later (or concluded) that it was not true. In some cases, no doubt, the misleading information was unintentional, while in other cases I would swear the authors knew full well what they were doing. Yes, that is a strong accusation, but that is all it is, since I have no proof of fraud. All I have is my sense that something weird was going on, based on the information given in the article or the response given by an author when I have requested clarification.
Unfortunately, we know full well that deception occurs. In recent years we have heard about how it has happened in other fields as well as our own (e.g., Jarrett 2012). Those are the researchers who were caught. Does anyone have an educated guess about how many were not caught? I imagine that journal editors have an idea.
What sets off my alarms? First, there is the use and abuse of the word "adapted." I have bemoaned this practice in a past blog. Briefly, far too many authors say they "adapted" a scale from so-and-so but, when their scale and the one cited are compared, there is little if any common content. In the case of scales, the word "adapted" implies to me that a few words have been changed. The term is not appropriate when a completely new scale has been created that shares little content with a previous scale, even if it measures the same construct. Instead, the authors should say they drew ideas from work by particular authors.
At the next level of deception, there are multiple cases of authors saying they used scales that I eventually concluded did not exist. The typical way this has worked is that authors have cited an article as the source of the scale they are using but, upon checking those articles, I found no such scale. Could an innocent mistake have been made? Sure, but my point is that I have experienced this so many times that it is difficult for me to write them all off as innocent mistakes.
More challenging for me are cases where my experience with developing scales and purifying them tells me that something just doesn't seem to be right. For example, I have seen authors take items from scales measuring different constructs, mash them into one scale, and then provide stats indicating the scale was unidimensional in their study. Could it happen? Well, yes, I suppose so, but in some of these cases, where items were shown in past studies to clearly measure different constructs, I seriously doubt the truthfulness of the information.
But, it gets even worse. Sometimes I contact authors hoping that something they can say or provide will assuage my doubts. Instead, they end up using sophisticated versions of "the dog ate my homework" excuse or send me on wild goose chases. For example, I recently sought information to substantiate what some authors said in their article that did not make sense to me. (I will use the plural in referring to the author(s) in order to be neutral with regard to gender and number of authors involved.) On the surface, the article appeared to be great work, but there were some things missing about the development and purification of several scales that I wanted to clarify. Further, a footnote in the article referred to other work by the authors that supported their scales' validities. I contacted the authors for the information and they referred me to another publication of theirs. Yet, the information I was looking for about the scales was not in that article either. I decided to push the authors a bit further, and then they admitted that the information referred to in the footnote, which laid the foundation for much of the article, had never been written up! Maybe that was true but, why didn't they write it up, even if it was never published? Why didn't the journal reviewers demand the material? I spent many hours over several days trying to figure out the nature of the scales and their origin. Eventually, I decided that the authors were trying to obscure the details about reliability and validity or, at least, made little effort to clarify what they did.
What can be done to minimize these practices and admonish those who do them? That is a very good question. The best time for it to happen is during the journal review process. Reviewers should expect information about the source of scales to be provided and then check those sources to see if the descriptions are accurate. If they are not accurate, then reviewers should demand clarification and/or correction as a condition of acceptance. Likewise, when authors say they drew items for a scale from multiple scales and, in the opinion of the reviewers, those scales measure different constructs, then evidence of the new scale's unidimensionality should be required from the authors. Finally, when authors refer to unpublished work that was done to test the psychometric quality of their scales, it should be provided in some form to the reviewers and, hopefully, made available to readers as well if the article is published.