Science, bad science, and pseudoscience: How bad statistics come to life

How do we know whether we can trust what we read? What is a good source of information and what is a bad source? How can we tell them apart? Everyone loves to quote statistics. But how can you tell a good statistic from a bad one? When looking at the basis for what seems like a logical, reasonable argument, one must learn to distinguish between science, bad science, and pseudoscience.

In the above paragraph I stated the following: “Everyone loves to quote statistics.” It seems reasonable enough. But the fact is, it is incorrect. Common sense should tell us that not “everyone” will love to quote statistics. There is enough diversity out there that we should be capable of finding someone who hates to quote statistics. That’s lesson number one. Absolutes are generally false.

In order to begin, we will need to know how to determine the difference between science, bad science, and pseudoscience. Yes, there is a difference between bad science and pseudoscience. Bad science is still science, but it is poorly done. Pseudoscience isn’t science at all, but it pretends to be. So let’s start with some basic definitions.

Science follows what has become known as the scientific method. It is a rigorous procedure wherein objective findings can be made about hypotheses that can be proven or disproven. The scientific method typically begins with an observation that becomes a statement of a hypothesis. This is a statement that purports to explain a particular phenomenon or a correlation between two or more phenomena. The hypothesis can then be used to make predictions about the phenomena. Once predictions are made, the hypothesis can then be tested via observation or experimentation. These tests will allow the scientist to confirm the hypothesis or to falsify it.

It may or may not be possible to test a particular hypothesis directly. Over time, a well-supported hypothesis may develop into a theory, which is a proposed explanation of a phenomenon. For a theory to be useful, it must be consistent with what is known and have some predictive value. The main difference between a hypothesis and a theory is the amount of evidence that exists to support it. A scientific theory is typically regarded as having enough evidential support to be treated as fact.

The theory of evolution is such a theory. It is consistent with what is known (prior science) and it holds some predictive value. It has also been modified over time in order to remain consistent with what is known. This is another key factor in distinguishing science from pseudoscience. Scientific theory is subject to change. As science advances, prior theories are changed or discarded and new theories are proposed. Thus, while a theory must be well-supported to be treated as fact, it is also understood that it is not fact and that it may be altered as new evidence is found.

Bad science and pseudoscience can produce similar results, but they are two different things. Bad science follows the scientific method but uses outdated methods or sloppy designs and procedures; it may draw erroneous conclusions, fail to explore alternative explanations of results, and so on. It is science that contains errors, omissions, or falsehoods, or is incomplete. It might also be based on false assumptions, faulty reasoning, or poor logic. Two examples of bad science that are frequently bandied about as “factual” statistics are Mary Koss’ study from the mid-1980s, asserting that one in four women will be the victim of a rape or an attempted rape between adolescence and the completion of college, and the Eugene Kanin study indicating that 41% of rape reports are false.

The Koss study had a number of problems, but the worst was that Koss ignored statements by her subjects indicating they had not been victims of sexual assault (or attempts). This places the study dangerously close to the category of pseudoscience. Kanin, on the other hand, used a definition of false allegation that may have been far-fetched and overly broad in some respects.

Another feature of bad science is bias on the part of the researchers, buried within the report itself. This can be illustrated by the World Economic Forum’s Global Gender Gap Report. The authors of this study intentionally incorporated bias into their design in order to preclude the possibility of finding anything other than what they were looking for. This particular analysis described any gender imbalance that favored women as an area of “equality,” while describing any imbalance that favored men as contributing to the disadvantage of women and as evidence of discrimination. The report also failed to include categories in which women might be more likely to hold an advantage. This report, even more so than Koss, might qualify as pseudoscience or outright fraud.

James Lett (in Ruscio) described six characteristics of scientific reasoning: falsifiability, logic, comprehensiveness, honesty, replicability, and sufficiency. Falsifiability is the ability to disprove a hypothesis. Logic dictates that the premises must be sound and that the conclusion must follow validly from them. Comprehensiveness requires that all the pertinent data be accounted for, not just some of it. Honesty means that any and all claims must be truthful and not deceptive. Replicability is the idea that similar results can be obtained by other researchers in other labs using similar methods. For this to have any meaning, the methods must also be transparent; in other words, the method used to obtain the results must be described in detail. Finally, sufficiency means that all claims must be backed by sufficient evidence. Any study that does not meet all of these criteria is not scientifically sound.

In the examples above, the Koss study meets the criteria for falsifiability and replicability. The subjects were given a questionnaire and it was possible for them to answer in a manner that would have disproved the hypothesis. Further, other studies have obtained similar results using similar methods. However, the study may fail the test of logic. Koss asserted that an affirmative answer to particular questions indicated that a subject was a victim of a sexual assault or an attempted sexual assault because the questions described these offenses in such a manner as to be consistent with the legal definitions.

But what Koss failed to consider was that these questions might also elicit affirmative responses describing experiences that did not meet the legal criteria; the questions were, therefore, too broad. In fact, after administering the questionnaire, she interviewed subjects who had answered in the affirmative. The majority of those subjects denied being victims of a sexual assault or an attempted assault. She dismissed this denial without any further testing, claiming that the subjects simply did not know what constituted rape. While she was honest about her dismissal, she continued with her claim of one in four. This makes her results misleading: she fails the test for comprehensiveness (she does not account for all the data), which leads to the failure of the test for honesty. And if the basic premise (that affirmative answers indicate sexual assault) is wrong, then she begins with a false premise and fails the test for logic. Simply because her results were falsifiable and replicable does not make them scientifically valid.
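The point that passing one or two of Lett's tests is not enough can be made concrete with a short sketch. The checklist below is my own illustration, not part of any cited study; the verdicts for the Koss study are taken from the discussion above, and sufficiency is simply left unassessed because it is not addressed here.

```python
# A minimal sketch of Lett's six-test checklist, with verdicts for the Koss
# study taken from the discussion above. None marks a test not assessed here.
# A study is scientifically sound only if it passes every test.

LETT_TESTS = ["falsifiability", "logic", "comprehensiveness",
              "honesty", "replicability", "sufficiency"]

def scientifically_sound(verdicts):
    """Return True only if the study explicitly passes all six tests."""
    return all(verdicts.get(test) is True for test in LETT_TESTS)

koss_study = {
    "falsifiability":    True,   # subjects could have answered so as to disprove the hypothesis
    "logic":             False,  # rests on the premise that affirmative answers equal assault
    "comprehensiveness": False,  # follow-up denials by most subjects were set aside
    "honesty":           False,  # the one-in-four claim was kept despite those denials
    "replicability":     True,   # similar studies with similar methods found similar results
    "sufficiency":       None,   # not assessed in the discussion above
}

print(scientifically_sound(koss_study))  # False: passing two tests is not enough
```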

Another problem with the logic of Koss’ study is that the results don’t fit the reality of sexual assault reporting on college campuses. Very few incidents are reported on college campuses each year. Koss and many others claim that only a small percentage of sexual assaults are reported and point to studies like hers as “evidence” of widespread underreporting. The problem is that after several decades of attempts to increase awareness and reporting of these crimes, reporting has not increased and may in fact be decreasing. This discrepancy between the research and the observable reality is an indicator that the research may be faulty. Instead, true believers in the research use the discrepancy to “prove” the hypothesis of underreporting. This is a basic characteristic of pseudoscience, as described below.
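That mismatch can be framed as a simple consistency check. The figures in the sketch below are entirely hypothetical and come from no study or campus record; they are there only to show the arithmetic. Given a survey-based prevalence estimate and the number of reports actually logged, one can compute the reporting rate the two figures jointly imply. A very low implied rate is consistent with either massive underreporting or an inflated prevalence estimate, and the numbers alone cannot say which.

```python
# A back-of-envelope consistency check with purely hypothetical numbers.
# Nothing here comes from Koss or from any campus's actual crime statistics.

women_enrolled = 10_000        # hypothetical campus
years_at_risk = 5              # roughly adolescence through completion of college
claimed_prevalence = 0.25      # the "one in four" figure under examination
reports_logged_per_year = 20   # hypothetical number of reports actually filed

# If the prevalence claim were accurate, roughly this many new victimizations
# would occur each year among currently enrolled women.
implied_incidents_per_year = women_enrolled * claimed_prevalence / years_at_risk

implied_reporting_rate = reports_logged_per_year / implied_incidents_per_year
print(f"Implied incidents per year: {implied_incidents_per_year:.0f}")   # 500
print(f"Implied reporting rate: {implied_reporting_rate:.1%}")           # 4.0%

# A rate this low is read by some as proof of underreporting and by others as
# a sign that the prevalence estimate itself is inflated; the arithmetic
# cannot distinguish between the two.
```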

The Global Gender Gap Report fails nearly every test. It is not falsifiable. The method ensures that there is no way to disprove the hypothesis that women are at a relative disadvantage to men in every country in the world. Given this faulty methodology, it is nearly impossible for the report to meet the demands of the remaining tests, with the possible exception of replicability.
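To see why such a design cannot be disproven, consider a toy scoring rule of the kind described above: any ratio favoring women is truncated at parity, so a female advantage can never raise a country's score, while any ratio favoring men lowers it. This is my own simplified illustration of the described method, not the report's actual formula.

```python
# A toy illustration of a capped-ratio scoring rule of the kind described
# above (not the report's actual formula). Scores run from 0 to 1, where 1
# means parity or better for women.

def capped_score(female_value, male_value):
    """Female-to-male ratio, truncated at parity."""
    return min(female_value / male_value, 1.0)

print(round(capped_score(40, 60), 2))  # 0.67 -> male advantage registers as a gap
print(round(capped_score(60, 40), 2))  # 1.0  -> female advantage is invisible

# Because no indicator can ever score above 1.0, no possible data can show a
# country where women hold a net advantage; the hypothesis that women are
# everywhere at a relative disadvantage cannot be disproven by this method.
```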

The Kanin study fails the test for logic. His basic premise depended upon an adequate definition of a false report. His definition could be considered far too broad and likely included many reports in which the women recanted for reasons other than having lied in the first place, which may have biased his results by a little or a lot.

Pseudoscience is fake science. It may look like real science, but it does not follow the scientific method, ignores contradictory evidence, and isn’t falsifiable (it can’t be disproven). There is usually an indifference to facts: facts that don’t fit are discarded or ignored, and the “facts” that are presented have generally not been proven in any scientific way. The research is often sloppy and may rely on hearsay, news reports, ancient myths, anecdotes, rumor, personal history, or case examples rather than scientific study.

Pseudoscientific research usually begins with a spectacular or implausible hypothesis and searches for evidence that will support it instead of designing experiments that might disprove it. It often confuses correlation with causation, and pseudoscientists rarely test their theories. When they do test them, failures are often explained away as anomalies (the spirits just weren’t willing, or there was a nonbeliever present). The “science” itself rarely progresses. Some new technology may be incorporated as a way to increase the mystery, but the pseudoscientific theory remains unchanged.

Ignorance and fallacy are used in place of fact. The lack of proof to the contrary is taken to prove the pseudoscientific theory: if it can’t be disproven, it must be true. Science hasn’t been able to prove that widespread underreporting does not exist; therefore, widespread underreporting does exist. Pseudoscientists also appeal to authority or emotion: “Believe the woman.” “We must take action to protect women.” “We must protect our daughters.”

This is similar to the claim made by Susan Brownmiller that “Rape is a conscious process of intimidation by which all men keep all women in a state of fear.” It is an appeal to emotion, a deliberate attempt to play on the insecurities of women by using their natural fear of being raped. It also relies upon a complete redefinition of the word: rape is no longer a sexual act; it has become a “process of intimidation.” In the same book she also made an appeal to authority, stating that only 2% of all rape allegations are false. This was supposedly based on statements made by a New York City sex crimes investigator. It was not a reference to any scientifically conducted study and was eventually discredited.

Another characteristic of pseudoscience is that there is typically a profit to be made. Often, those who make extraordinary claims are selling something, attempting to secure funding, or advancing a political agenda. Rape is a huge industry, especially on college campuses, where federal and state funds are used to support research, prevention programs, and counseling centers. This, of course, is where most of the research is conducted, and it is conducted by researchers whose jobs depend upon findings consistent with a high prevalence of rape and other forms of sexual assault. Koss received considerable support and assistance from Ms. Magazine in order to conduct her research. Ms. Magazine is a radical feminist publication that pushes a political agenda. Koss credits it with helping to secure her funding and with providing office space and other assistance. Studies funded, conducted, or supported by stakeholders who have a financial or political interest in the outcome should be regarded as highly questionable.

Conclusions and statistics generated by bad science and pseudoscience die hard, if at all. They seem to find their way into research years after being exposed. An example of this is the Super Bowl Hoax of 1993, which alleged that more women are abused on Super Bowl Sunday than on any other day of the year. The statement was made without any scientific support and relied entirely on anecdotal evidence. No scientific study has found it to be true. Yet it continues to be trotted out in various forms on an almost annual basis, its latest incarnations being the abuse of women during the World Cup and the sex trafficking of minors during the Super Bowl. The problem with such statistics is that repetition breeds belief: the more people who repeat a claim, the more believable it becomes. Even when we know that the statement isn’t true, the idea it advances often lives on. The Super Bowl Hoax helped generate the fear and media attention that led to the passage of the Violence Against Women Act (VAWA) in 1994. This legislation eroded some of our most important constitutional rights and protections. It also contained explicit language forbidding the funding of legitimate scientific research that might have demonstrated that the basic premise of the law was flawed. The law also forbade funding for services for an entire class of victims, men, by denying that this class even exists.

The dangers of bad science and pseudoscience are not only that they establish false beliefs, but that they may also be quite harmful. It is important that we learn the differences between science, bad science, and pseudoscience and that we respond to them accordingly.

References

Brownmiller, S. (1975). Against our will: Men, women, and rape. New York: Simon & Schuster.

Coker, R. (2001). Distinguishing science and pseudoscience. Quackwatch. Retrieved 6/20/2011 from http://www.quackwatch.org/01QuackeryRelatedTopics/pseudo.html

Hausmann, R., Tyson, L., & Zahidi, S. (2009). The global gender gap report. World Economic Forum. Retrieved 6/21/2011 from https://members.weforum.org/pdf/gendergap/report2009.pdf

Kanin, E. (1994). An alarming trend: False rape allegations. Ananda Answers. Retrieved 6/21/2011 from http://www.anandaanswers.com/pages/naaFalse.html

Koss, M., Gidycz, C., & Wisniewski, N. (1987). The scope of rape: Incidence and prevalence of sexual aggression and victimization in a national sample of higher education students. Journal of Consulting and Clinical Psychology, 55(2), 162-170.

Ruscio, J. (2006). Critical thinking in psychology: Separating sense from nonsense. Belmont, CA: Wadsworth, Cengage Learning.
