LIS 504 - Fallacies

A fallacy is an error in logic. There are many kinds of fallacy that researchers must watch out for; some of them are outlined below.

The ecological and exception fallacies

The ecological fallacy is an error in deduction; it involves drawing conclusions about individuals based only on analyses of group data. Noting that the average library user takes a week to read a book and concluding that books should be due a week after they are taken out would be an example of this fallacy.

The exception fallacy is an error in induction; it involves drawing a conclusion about a group on the basis of exceptional cases. Concluding that a book is not acceptable to your community because you have received complaints about it could be an example of this fallacy.

Ex post facto hypothesizing

In ex post facto hypothesizing, the same data that were used to develop a hypothesis are used to test it. The trouble is that a hypothesis can be tailor-made to fit almost any data set, yet fail to hold up for other, similar data; as a result, it may be fairly useless.

For example, in looking at data on online searches that failed for technical reasons, we might note a week-by-week sequence such as:
Week       1    2    3    4    5    6
Failures  16   15   14   13   14   11
It might strike us that the rate is declining somewhat, and, if we tested this hypothesis on the data that produced it, we would indeed find a marginally statistically significant correlation (r=-0.90). But it might be that the correlation is just the result of random or atypical events. After another two weeks, we might see the following pattern emerge:
Week       1    2    3    4    5    6    7    8
Failures  16   15   14   13   14   11   15   13
Now the correlation is weaker and no longer statistically significant (r=-0.54). Our hypothesis that failures are generally declining is no longer supported by the data.
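For readers who want to check these figures, the following Python sketch reproduces the two correlations. It is not part of the original notes and assumes that the scipy library is available:

from scipy.stats import pearsonr

weeks_6 = [1, 2, 3, 4, 5, 6]
failures_6 = [16, 15, 14, 13, 14, 11]

weeks_8 = [1, 2, 3, 4, 5, 6, 7, 8]
failures_8 = [16, 15, 14, 13, 14, 11, 15, 13]

# Correlation on the data that suggested the hypothesis
r6, p6 = pearsonr(weeks_6, failures_6)
print(f"Weeks 1-6: r = {r6:.2f}, p = {p6:.3f}")   # r is about -0.90

# Correlation after two more weeks of data are added
r8, p8 = pearsonr(weeks_8, failures_8)
print(f"Weeks 1-8: r = {r8:.2f}, p = {p8:.3f}")   # r is about -0.54

The point is not the particular numbers but that the test only counts as a test when it is run on data other than the data that suggested the hypothesis in the first place.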

The hidden factor fallacy

Paying insufficient attention to the third criterion of causality - the absence of other plausible causal agents - can lead to the hidden factor fallacy. For example, research might show that children who were exposed to computers as toddlers subsequently engaged in more Web searching as teenagers than those who were not; but a conclusion that early computer exposure causes increased later use of computer search engines might be wrong, because both might be due to a third factor, such as the socioeconomic status of parents.
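A small, purely hypothetical simulation can make the mechanism concrete: if a hidden factor drives both variables, the two end up correlated even though neither influences the other. The variable names below (ses for parental socioeconomic status, early_exposure, teen_searching) are illustrative only and do not come from any real study:

import random

random.seed(1)

n = 1000
ses = [random.gauss(0, 1) for _ in range(n)]              # the hidden factor
early_exposure = [s + random.gauss(0, 1) for s in ses]    # driven by ses
teen_searching = [s + random.gauss(0, 1) for s in ses]    # also driven by ses

def corr(x, y):
    # Pearson correlation, computed from scratch
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# A substantial correlation appears even though exposure never enters
# the formula for searching, and vice versa.
print(f"r(exposure, searching) = {corr(early_exposure, teen_searching):.2f}")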

The regression fallacy

If a researcher selects items with extreme scores at one point in time and then looks at the scores of the same, or related, items at another point in time, the second set of scores can be expected to average closer to the mean value for the population as a whole. Failing to recognize this natural regression toward the mean constitutes the regression fallacy.

To take a silly example, suppose we have a group of people each throw a die and then select those who got low scores (1 or 2). We apply a "treatment", such as giving them all "magic feathers", and have them each roll the die again. We might end up with a result like the following:
Score         Without magic feather   With magic feather
1                       6                      2
2                       6                      1
3                       0                      3
4                       0                      2
5                       0                      3
6                       0                      1
Mean score             1.5                    3.5

Substitute some real-life variables, such as "visits to the library" for "score on the throw" and "library promotion campaign" for "magic feather", and you can see the potential dangers of this fallacy.
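The same point can be made by simulating the die-rolling example: select only the low scorers, have them roll again, and the group mean drifts back toward the population mean of 3.5 with no help from the feathers at all. This is a minimal sketch, not part of the original notes:

import random

random.seed(42)

n = 1000
first_rolls = [random.randint(1, 6) for _ in range(n)]

# "Treat" only the people who rolled 1 or 2 the first time
low_scorers = [i for i, roll in enumerate(first_rolls) if roll <= 2]
second_rolls = [random.randint(1, 6) for _ in low_scorers]  # the die has no memory

before = sum(first_rolls[i] for i in low_scorers) / len(low_scorers)
after = sum(second_rolls) / len(second_rolls)

print(f"Selected group, first roll:  mean = {before:.2f}")   # about 1.5
print(f"Selected group, second roll: mean = {after:.2f}")    # about 3.5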

