Fallacious Arguments
4 Mistakes in Reasoning: The World of Fallacies

Have you ever heard of Plato, Aristotle, Socrates? Morons! —Vizzini, The Princess Bride

So far we have looked at how to construct arguments and how to evaluate them. We’ve seen that arguments are constructed from sentences, with some sentences providing reasons, or premises, for another sentence, the conclusion. The purpose of arguments is to provide support for a conclusion. In a valid deductive argument, we must accept the conclusion as true if we accept the premises as true. A sound deductive argument is valid and its premises are true. Inductive arguments, in contrast, are evaluated on a continuous scale from very strong to very weak: the stronger the inductive argument, the more likely the conclusion, given the premises.

What We Will Be Exploring

We will look at mistakes in reasoning, known as fallacies. We will examine how these kinds of mistakes occur. We will see that errors in reasoning can take place because of the structure of the argument. We will discover that other errors in reasoning arise from using language illegitimately, which requires that close attention be paid to that language.

Generally, we want our arguments to be “good” arguments—sound deductive arguments and strong inductive arguments. Unfortunately, arguments often look good when they are not. Such arguments are said to commit a fallacy, a mistake in reasoning. A wide range of fallacies has been identified, but we will look at only some of the most common ones. When trying to construct a good argument, it is important to be able to identify what bad arguments look like. Then we can avoid making these mistakes ourselves and prevent others from trying to convince us of something on the basis of bad reasoning!

4.1 What Is a Fallacy?

The French village of Roussillon at sunrise. Roussillon is in Vaucluse, Provence. It would be a fallacy to assume that because someone lives in France, he or she lives in Paris.

Most simply, a fallacy is an error in reasoning. It is different from simply being mistaken, however. For instance, if someone were to say that “2 + 3 = 6,” that would be a mistake, but it would not be a fallacy. Fallacies involve inferences, the move from one sentence (or a set of sentences) to another. Here’s an example:

If I live in Paris, then I live in France. I live in France. Therefore, I live in Paris.

Here, we have two premises and a conclusion. The first sentence is a conditional, and we can accept it as true. Let’s assume the second sentence is also true. But even if those two premises were true, the conclusion would not have to be true. While it may be true that if I live in Paris then I live in France, and it may be true that I live in France, it does not follow that I live in Paris, because I could live in any number of other places in France. Thus, the inference from the premises to the conclusion is fallacious because of a mistake in the reasoning.

Technically, this argument is said to commit the formal fallacy of “affirming the consequent” of the conditional. In a conditional sentence, “If P then Q,” P is the antecedent—it provides the condition—and Q is the consequent, or what follows from that conditional. So in this sentence, “If I need to get cash, then I can go to an ATM,” “I need to get cash” is the antecedent, and “I can go to an ATM” is the consequent.
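For readers who like to check such things mechanically, the failure of this pattern can be confirmed by brute force: list every combination of truth values for the two component sentences and look for a case in which both premises are true while the conclusion is false. A minimal Python sketch (our own illustration, not part of the original text) might look like this:

    from itertools import product

    def implies(a, b):
        # The conditional "if a then b" is false only when a is true and b is false.
        return (not a) or b

    # P = "I live in Paris", Q = "I live in France"
    for P, Q in product([True, False], repeat=2):
        premises_true = implies(P, Q) and Q   # "If P then Q" and "Q"
        conclusion_true = P                   # "Therefore, P"
        if premises_true and not conclusion_true:
            print("Counterexample: P =", P, "Q =", Q)

The single counterexample it prints, P false and Q true, is exactly the situation described above: someone who lives in France but not in Paris. Because such a case exists, the premises can be true while the conclusion is false, and that is what makes the inference invalid.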
We can see the difference in the arguments here by looking at a very similar one that does not commit this fallacy (because it affirms the antecedent) and is in fact valid:

If I live in Paris, then I live in France. I live in Paris. Therefore, I live in France.

In learning to spot fallacies, we must be very careful to see whether the conclusion actually follows from the premises; if it does not, we need to determine why. Sometimes, as in our first argument here, the mistake is structural, or formal. At other times, the mistake is more subtle, and we have to examine the content of the argument—its meaning—to determine why it commits the fallacy; these kinds of mistakes in reasoning are often called “informal fallacies.” Here again is the famous informal fallacy we looked at in Chapter 2:

Nothing is better than eternal happiness. A ham sandwich is better than nothing. Therefore, A ham sandwich is better than eternal happiness.

The fallacy involved here is not structural; an argument with this structure actually can provide a valid inference, as in this example:

Mary is taller than Susan. Susan is taller than Amanda. Therefore, Mary is taller than Amanda.

This is an example of what is known as the transitive property, as in arithmetic: if 10 is less than 20, and 20 is less than 30, then we know—just from these two sentences—that 10 is less than 30. The transitive property allows us to infer that if Billy is taller than Sally, and Sally is taller than Jeff, then Billy must be taller than Jeff.

In contrast, the mistaken inference in the argument about the ham sandwich involves the meaning of the words, specifically the word “nothing.” In the first premise, to say there is nothing better than eternal happiness is to say there exists no thing better. But in the second premise, “nothing” seems to change meaning in order to say it is better to have a sandwich than to have nothing (as in the phrase “well, it’s better than nothing”). The word “nothing” subtly changes meaning from one sentence to the next, but the argument treats it as if it meant the same thing in both. This then appears to allow us to draw the conclusion, but the mistake should be clear, and so we see why we cannot, on the basis of these premises, accept the conclusion that a ham sandwich is better than eternal happiness. Thus, the inference is made illegitimately, and that illegitimate inference is what results in a fallacy. While the ham sandwich argument is a bit silly, it is a good example of how, even if we are sure that there is a mistake in the reasoning, it can be a bit tricky to say what, precisely, that mistake is.

There are many formal fallacies, mistakes in reasoning that occur due to the structure of the argument (the fallacy of affirming the consequent is, therefore, a formal fallacy). There are also hundreds of informal fallacies. In this chapter, we look at some of the best-known informal fallacies, and a couple of the most common formal fallacies. It is obvious why we want to avoid fallacies as a general rule; after all, fallacies are mistakes, and we want to avoid making mistakes. But here we also consider why we want to avoid the specific kinds of errors committed by fallacious reasoning.

Why Should We Avoid Fallacies?

We have already seen that philosophers use the term argument differently from how we use it in everyday conversation: to a philosopher, an argument simply provides reasons for accepting a conclusion.
As we have also seen, our everyday reasoning usually includes a mixture of both deductive and inductive arguments. Obviously enough, when we try to establish a conclusion on the basis of evidence and reasoning, we want our arguments to be good arguments: valid (and sound) deductive arguments and strong inductive arguments. Fallacies are, in this context, somewhat like a virus, or a disease. That is, fallacies infect our reasoning and can give an argument the appearance that its conclusion should be accepted when it really shouldn’t be. We may never be able to “cure” our reasoning of the fallacies that threaten to infect it, but the more we are aware of the problem, the better our chance of being able to avoid it. Healthy reasoning, then, always requires that we be on the lookout for fallacies; in this case, as the old saying goes, an ounce of prevention is worth a pound of cure. Fallacies can be like cracks in a building, undermining the strength of our arguments.

One clear result of studying and understanding fallacies is that we become aware of the problems they can cause in our own reasoning. Presumably, when we give an argument of our own, we want it to be the best argument we can construct; we assume, that is, that we aren’t willing to abandon sound principles of reasoning to win the argument. (There are contexts, of course, where this might not be the case, and we will look at some of these later.) We want to win our arguments, of course, but we also want to construct them correctly. Being aware of the various fallacies will improve our arguments and make them more difficult to defeat. After all, if our opponent in an argument can expose our reasoning as fallacious, our opponent will win, or at least show that our argument fails.

We also, of course, don’t want to be fooled by our opponent into accepting reasoning that is not legitimate. Perhaps you are in a debate with someone who argues that raising taxes is bad for the economy. Your opponent points out that the last time taxes were raised, the economy did badly; therefore, raising taxes caused the bad economy. You may want to resist this conclusion, and being aware of fallacies allows you to point out that this argument commits the fallacy of the “false cause”: just because some event follows another event, it does not necessarily mean that the first event caused the second event. To make this fallacy clear to your opponent, you may provide a counterexample that uses the same kind of logic. “I took my dog for a walk, and then it rained. But walking my dog didn’t cause it to rain, did it?” Revealing the flawed reasoning in this case doesn’t mean that we have established that raising taxes is good for the economy, or that it is bad for the economy. But by demonstrating that the argument commits this fallacy, you can reject this argument as given, and you and your opponent can move on, in order to look for better arguments.

4.2 Mistakes in Reasoning: Informal Fallacies

Now that we have seen why we should be aware of fallacies, and why we should try to avoid them, we will identify and examine the most common informal fallacies. These fallacies are frequently encountered at work as well as among friends and family and in the media.
For each of these fallacies, we will begin with an example and then specify the mistake involved in each. One of the best ways to become familiar with fallacies, once you understand them, is to construct one of your own that commits the same kind of error.

Ad Hominem Fallacy

Frank works for a big oil company. So of course, Frank doesn’t believe in global climate change.

If we put this in premise-conclusion form, the argument would look like this:

Frank works for a big oil company. Therefore, Frank doesn’t believe in global climate change.

An ad hominem fallacy occurs when the reason for an argument is solely based on a person’s character or nature. The name of the ad hominem fallacy comes from the Latin for “to the person”: that is, the conclusion is to be accepted or rejected because of the person (and the characteristics of that person) involved, rather than the actual argument, or reason(s), supporting the conclusion. In our example, then, the reason put forth for Frank’s belief has little to do with the evidence Frank may have for that belief. Rather, the fact that he works for a big oil company provides the basis for why we attribute to Frank the belief we do. Of course, this is fallacious; Frank may have very good reasons, very bad reasons, or no reasons at all for his belief. But the fact that he works for a company that may be adversely affected by the politics of climate change doesn’t allow us to conclude that this is the reason for Frank’s view on the matter. Because this refers to Frank’s circumstances, this fallacy is often made more precise by labeling it an ad hominem argument (circumstantial). As always with fallacies, the conclusion does not follow from the premise(s).

We can see this mistake in a rather ridiculous example. Presuming the communist dictator of the former Soviet Union, Josef Stalin, was a very bad person, what if someone made this argument?

Josef Stalin believed that the sun rises in the east. Stalin was one of the worst monsters of the twentieth century. Therefore, we shouldn’t believe that the sun rises in the east.

Clearly, the sun rises in the east regardless of what we think about Stalin; his character certainly doesn’t allow us to reject the claim. Here again, we see that the reason put forth for the conclusion is simply about the person involved. But, as should be obvious, even the most tyrannical dictator may hold beliefs that are true. In contrast to the ad hominem (circumstantial), this is a mistake based on the character of the person. Stalin’s character may well be worth attacking; but his personal failures, in this case, don’t have anything to do with whether his belief about the sun is true or not. Hence, we have two distinct kinds of ad hominem arguments: one based on the circumstances of the person, such as Frank’s job, and one based on the character of the person, such as Stalin’s.

To spot an ad hominem fallacy, we determine whether the reason given for the conclusion rests solely on the characteristics or nature of the person who holds the view in question. And if those characteristics are not relevant to the conclusion, there is a good chance an ad hominem fallacy is being committed. Sometimes, however, those characteristics can be quite relevant, as in the following example:

Mary is a devout Christian, so of course she believes in God.

One of the defining characteristics of being a Christian is to believe in God; so if Mary is a devout Christian, it does follow that she believes in God.
In this case, unlike the cases of Frank and Stalin, Mary’s personal characteristics are quite relevant to the conclusion and provide ample support for it.

One other version of this fallacy is often referred to by its Latin name tu quoque, meaning “you’re another.” We are probably familiar with this fallacy from grade school; if you object to someone’s behavior, he or she might respond that your behavior is no better. This reply, of course, does not respond to your objection; rather, the claim seems to be that you can’t object because you have your own share of problems. If Robyn objects to Tom’s cheating on a test, and Tom replies that because Robyn once cheated on a test herself she cannot legitimately object, he commits this fallacy. An actual historical example of the tu quoque fallacy was committed by the government of South Africa when it defended its apartheid policy of racial separation and discrimination. In some of its literature sent to the United States, this argument was made by the South African government:

The U.S. treated its native citizens very badly, including putting them on reservations. Therefore, The U.S. cannot criticize our treatment of our own native citizens.

The premise may well be accepted as true here, but it doesn’t follow that one cannot still criticize the South African policy. In this case, we may recall the phrase from our childhood, “two wrongs don’t make a right.” Hence the ad hominem fallacy is committed when the conclusion is rejected on the basis of characteristics of the person who puts forth the conclusion, and the characteristics of that person are not relevant to the conclusion. Once you are aware of the mistake in reasoning involved here, you may be surprised at how often you encounter the ad hominem fallacy.

Stop and Think: Lose It, Don’t Abuse It!

Celebrities such as Oprah Winfrey, Dr. Phil, and Suzanne Somers have all written books on nutrition and weight loss. Some critics have dismissed their advice outright, citing Oprah’s weight fluctuations (“Why would anyone take diet advice from a dieter who repeatedly fails?”), Dr. Phil’s larger physique (“Why would anyone take weight loss advice from Dr. Phil, who seems unable to lose that last 20–30 pounds?”), and the possibility that Somers may not practice what she preaches (“This queen of all things natural fills her face with Botox and the like”). Each of these comments qualifies as an ad hominem attack. For example, whether or not Dr. Phil is a few pounds overweight has no bearing upon the relative merits of his weight loss program. It may seem quite natural to dismiss a person’s claims outright on the basis of ad hominem considerations. An overweight person telling us how to lose weight strikes us as hypocritical, and no one likes a hypocrite. Nonetheless, we must remember that even the biggest hypocrites can, at least on occasion, speak the truth. As we can see, ad hominem appeals on their own do not demonstrate any weaknesses in these weight loss programs. If that is so, how should one go about assessing the merits of diet advice? What sorts of considerations are, in fact, relevant to such an analysis?

Begging the Question

Abortion is murder, and murder is illegal, so abortion should be illegal.

To beg the question is to commit a mistake in reasoning by assuming what one seeks to prove. Often this kind of reasoning is criticized as “circular reasoning,” in that the premise that supports the conclusion is in turn supported by the conclusion, and thus goes in a circle.
To “beg the question” is to make a leap of logic by assuming what needs to be established. In the preceding argument, we may be quite willing to accept that murder is illegal. But the controversy over abortion really involves the first premise, whether or not abortion qualifies as murder. To assume that abortion is murder, then, begs the question, for that is the very issue that is at stake in the argument. It is important to see that rejecting this argument because it is fallacious doesn’t establish anything about the topic of abortion. Rather, it indicates that this argument, as structured, relies on an illegitimate inference, or commits a fallacy. Thus, it isn’t better as an argument than the following:

Capital punishment is murder, and murder is illegal, so capital punishment should be illegal.

In this case, one cannot legitimately assume that capital punishment is murder; one would have to provide an argument for that premise. Again, this argument doesn’t establish anything about capital punishment, because the argument is fallacious. In both the argument about abortion and the argument about capital punishment, we see that because the question is begged, these arguments fail. This doesn’t mean that one cannot construct good arguments about either topic, however. Perhaps we can see this more clearly with a ridiculous argument that has exactly the same structure:

Sunbathing is murder, and murder is illegal, so sunbathing should be illegal.

While many people argue over the ethical and moral questions that surround abortion and capital punishment, probably no one would argue that sunbathing is murder. But all three of these arguments are identical in structure, and now we can see a bit better why that structure is fallacious: we simply cannot legitimately assume what we seek to establish.

In logic, to beg the question is to assume what one wishes to prove, although one often hears people in the media use the phrase to indicate that one answer leads to another. A politician, for instance, may be told that her response in an interview “begs the question,” or that her response raises further issues. This is not the precise, technical meaning of the phrase as used in logic, and here, as elsewhere, we will discover that logicians often use language in a way that is much more specific and explicit than it is in other contexts. It should also be noted that arguments that beg the question, or argue circularly, are technically valid. In all three of our examples, if the premises are accepted as true, we must accept the conclusion as true. But as we saw most obviously in the sunbathing example, the premise may well not be true. This is yet one more reason to remember that just because an argument is valid does not necessarily mean we should accept its conclusion!

Slippery Slope Arguments

We must not allow libraries to ban any books; if they ban some books, they may well ban all of them.

The slippery slope fallacy is committed when one takes an example and extends it indefinitely to show that a given undesirable result will inevitably follow. Often the idea is that if an exception is allowed to a rule, then more and more exceptions will follow, leading to the inevitable result that few people, if any, will follow the rule. But this conclusion isn’t always warranted. A library may well wish to prohibit certain kinds of material, such as pornography, but that doesn’t mean that libraries will end up banning all kinds of materials.
Here’s another example:

The police won’t ticket you if you drive one mile an hour over the speed limit. The police won’t ticket you if you drive two miles an hour over the speed limit. The police won’t ticket you if you drive three miles an hour over the speed limit. Therefore, The police won’t ticket you if you drive n miles an hour over the speed limit.

Eventually, it seems that the police, by making these exceptions, may not be able to ticket anyone no matter how much over the speed limit he or she drives. But that conclusion doesn’t follow from these premises; just because there is some degree of tolerance, or minor exceptions to the rule, that does not mean the rule itself is abandoned. And anyone who has gotten a speeding ticket has learned this the hard way!

A slippery slope fallacy takes one example and extends it indefinitely to an undesirable conclusion. While these kinds of arguments commit the slippery slope fallacy, there are other ways of making this kind of mistake. Perhaps Rosemary thinks it is fine to have a glass of wine or two at dinner, but Franklin does not. Franklin tells her that if she has a glass of wine at dinner, pretty soon she will end up drinking a whole bottle of wine at dinner. There is some point between drinking no wine and drinking too much wine, but the idea that one glass of wine automatically leads to drinking too much wine seems to commit a rather obvious slippery slope fallacy.

Determining whether an argument actually commits the slippery slope fallacy can be difficult. A teacher may make an exception to the rule “no late work is accepted” and allow a student to turn in a paper late. This may have a “snowball effect,” because the other students can point to this exception and ask why they aren’t also allowed to turn their work in late. Parents who enforce a strict bedtime may also worry that if they make exceptions, the idea of “bedtime” will become so flexible that it will become very difficult to get the kids to bed at a reasonable time.

For these kinds of reasons, some philosophers have argued that certain rules cannot have any exceptions. For instance, consider the rule that you should never lie, that without exception, you should always tell the truth. The concern is that if an exception is made in one case, there may be exceptions in other cases, and eventually no one will be expected to tell the truth. In this case, we have to be very strict; if some lies are permitted, we may well end up not being able to say where they are not permitted. On this view, it could be argued that there is a “cascading” effect in which some lying leads to too much lying, and the argument would not commit a slippery slope fallacy. One can see a similar idea with counterfeit money. A society cannot make exceptions, suggesting that sometimes counterfeit money is acceptable, for if even one exception is made, it is clear that we won’t possess the needed confidence that the money in circulation is genuine. Thus, to avoid this situation, no exceptions can be made. To try to prevent counterfeit money from circulating therefore seems legitimate; there isn’t a slippery slope involved in thinking that if some counterfeit money is allowed to circulate, we may have significant problems in determining what is and what is not genuine money.
Logic in the Real World: Forced Euthanasia

The following passage from a personal website is a classic example of the “slippery slope” argument:

When euthanasia becomes law it will start out on a strictly voluntary basis for the terminally ill. Then it will become available to anyone who wants it, and finally it will be involuntary, practiced on anyone who is a strain on the system: the elderly, the handicapped, the unemployable—potentially anyone who doesn’t benefit the system.

Now, if we knew for certain that legalizing euthanasia would result in cases where people were put to death against their own will, we would have a strong reason not to enact such a law. However, the inevitability of this causal connection is far from established. Forms of euthanasia are legal in select states here in the United States, and involuntary cases have yet to be a problem. When dealing with arguments asserting a number of causal links between events, it is important to keep this key point in mind: it must be demonstrated that the original practice in question will likely lead to the highly undesirable outcome. If this is accomplished, no fallacy is involved. If it is not, then the argument should not persuade us. Do any of the arguments you have heard in debates about the legalization of marijuana, animal rights, immigration, or stem cell research resemble slippery slope reasoning? Do you feel that the example you came up with contains legitimate reasoning, or is a fallacious “slippery slope” involved? Explain your response.

In general, then, one has to examine the premises of the specific argument to determine if, in fact, they support the conclusion. The premises must be shown to lead to the conclusion, and the connection between the premises and conclusion must be demonstrated. If one simply assumes that one or more exceptions to a rule will lead to the rule being entirely ignored—as we saw in the example of the speeding ticket—then we may well have a slippery slope fallacy on our hands.

Hasty Generalization

I went to that new restaurant the other day, and I didn’t like what I had. I don’t think that restaurant is any good.

We are probably familiar both with having generalized a bit too quickly ourselves and having heard others do so. The fallacy of hasty generalization is committed when the conclusion is based on insufficient information: a generalization is made too quickly. Thus, here, on the basis of having eaten at a restaurant one time, a very broad conclusion is drawn. Of course, the restaurant may not be any good, but one meal on one occasion isn’t enough to support that conclusion. The chef could have had a bad night; the restaurant, being new, might still be getting things figured out; it could have just been bad luck. But the conclusion that the restaurant isn’t any good does not follow from the premise, because the premise doesn’t provide sufficient support for that conclusion. In science, researchers expend considerable effort making sure data samples are large enough, and representative enough, to provide support for the conclusion. For instance, if a medical study seeks to establish a connection between cholesterol and heart disease using a data sample of a few patients, it might just be a coincidence if all the patients have high cholesterol and suffer from heart disease.
But if the study involves numerous patients, from a wide variety of backgrounds, ages, and so forth, and all of the patients have both high cholesterol and heart disease, that would offer much stronger support for the view that they are causally related.

Generally, then, the fallacy of hasty generalization is committed when one has inadequate support for the conclusion, but one still jumps to a conclusion. The hasty generalization fallacy can be summed up in the phrase “to paint with a broad brush,” which means to characterize without bothering with details or specifics. Consider the following argument, for instance:

I’ve met a couple of people from China who studied English but were difficult to understand. I don’t think Chinese people can learn English well.

Given that there are over a billion people in China, and assuming only one percent of them study English, that would be over ten million Chinese people studying English! To generalize on the basis of two people would be very hasty, indeed. The evidence does not adequately support this conclusion, and the conclusion therefore does not follow from the premise as stated.

Often the fallacy of hasty generalization can lead to damaging stereotypes made on the basis of just a few examples. Stereotypes about women, religious groups, minorities, ethnic groups, and so forth are often based on this type of reasoning. Drawing broad and very general conclusions based on insufficient evidence can therefore lead to harmful results, not only for the victim of the stereotype but also for the person doing the stereotyping. For instance, consider this argument:

I had a guy from Peru working for me once, and he always came to work late. I won’t be hiring any more people from Peru.

The generalization here, drawn on the basis of a single example, is that all Peruvians come to work late. Not only does this attitude discriminate against an entire group of people, but it also prevents the employer from discovering that Peruvians may be the best workers he ever hired. By making a mistake in reasoning and committing the fallacy of hasty generalization, the employer harms both those being stereotyped and himself.

Argument from False Authority

Albert Einstein was a brilliant man and believed in ghosts. So it seems that ghosts actually exist.

The fallacy committed by appealing to a false authority draws a conclusion based on an authority whose expertise is irrelevant to the conclusion. Just because Einstein was a world-famous physicist doesn’t make him a legitimate authority on ghosts. (It isn’t really clear whether he did or did not believe in ghosts, by the way.) So the conclusion does not follow here, because Einstein doesn’t have the right kind of expertise to provide support for it.

English socialite Tara Palmer-Tomkinson appears in a commercial for potato chips. Celebrities endorse products all the time, and we don’t often stop to think that they might not be an authority on such a product. Naturally, if we sought Einstein’s views on a question in physics, we would be on much safer ground. There is no question that in physics, his authority is legitimate, and we could rely on his expertise. Hence, the name of this fallacy is important: the argument from false authority. In looking at arguments, it is important to determine whether the person whose view is being used to support the conclusion is truly an authority, and if so, whether that authority is relevant to the conclusion.
Another way this mistake is often made is to suggest that a source of information has a conflict of interest: a person may benefit from some outcome, and we may think that such a benefit can call that person’s claim into question. Imagine a university president arguing that the basketball team should purchase a particular brand of shoes. She claims it is because they are the best shoes one can get at a good price, but she also has substantial holdings in the company that makes the shoes. Are we sure the university president is not biased in promoting the purchase of this brand of shoes? After all, she stands to make more money if the company’s stock does well. At the same time, the fact that she owns this stock doesn’t mean that the shoes are not the best shoes available for the price. Such conflicts of interest can be very challenging, for it is not that unusual that one’s arguments are driven by one’s self-interest. But simply because that self-interest may be involved does not mean that the argument definitely is driven by that self-interest. Each case must be looked at carefully. For this reason, politicians often sell stocks and other investments to avoid even the appearance of such a conflict of interest. Judges who are asked to decide cases in which they may have a financial interest frequently recuse themselves: they do not hear such cases, so that there is not even the appearance of a conflict of interest.

Perhaps the most common version of this argument can be seen in television commercials. For instance:

A world-famous golfer says he likes to drive a certain model of car. So that model of car must be pretty great!

Of course, if we stop to think about it, it isn’t clear why we should think that being excellent at golf establishes one’s credentials in evaluating automobiles. Similarly, basketball players may not know any better than we do if a given fast-food restaurant is particularly good, and there is no reason to think that a famous football player is an expert on jeans. Yet it is hard to turn on a television without seeing a celebrity endorse a product, lending their reputation for expertise in their own field to a product they are paid to advertise. In such cases, the conclusion—that a product is good—does not follow from the premise; namely, that a celebrity whose fame comes from a completely different area of life says it is good.

On occasion, a celebrity may actually be an authority in another field. For example, a movie star who is an expert chef may recommend a certain brand of kitchen knife. If she says the knife is good, we could accept her recommendation if (and only if) we were also able to determine that she was an authority in the relevant field. But the fallacy of appealing to an illegitimate authority is committed when the support provided by the authority is not relevant. When examining an argument that appeals to an authority, we must see what the credentials are of the authority and whether those credentials are relevant to the conclusion being put forth. This may not always be clear-cut; if a physician runs for political office, does her expertise in medicine indicate expertise in making quality political decisions? Do the kinds of questions physicians deal with give them advantages in making political decisions? Or are political and medical decisions so distinct that expertise in medicine is irrelevant to expertise in politics?
In such cases, we need to learn more about the candidate in question and whether the candidate possesses the appropriate background, credentials, and expertise. But we may see why identifying someone as a good doctor may well not be sufficient to make that person a good political leader.

Appeals to Pity and Popularity

Your honor, I’m innocent. I haven’t been able to find work for several months, and I’ve been very sick. So I shouldn’t be found guilty.

That book must be very good; it has been on the best-seller list for weeks.

Two related fallacies, the appeal to pity and the appeal to popularity, make very similar mistakes in reasoning. In the appeal to pity, the reason put forth does not genuinely support the conclusion; it offers information that is, logically, irrelevant. The appeal to pity indicates that one should accept a conclusion because of the unfortunate situation of the person putting forth that conclusion. In the same way, in the appeal to popularity, the reason put forth does not genuinely support the conclusion; it, too, offers irrelevant information. The appeal to popularity indicates that a conclusion should be accepted simply because many people think it is true. In both cases, of course, the conclusion does not follow from the premises. Someone accused of a crime isn’t innocent of the crime just because he or she is in bad circumstances; rather, guilt or innocence is based on whether the person actually committed the crime. Similarly, a book isn’t good just because it is popular; presumably we want to evaluate a book’s quality by characteristics other than just its popularity. As we know, sometimes books that are not very good sell many copies, just as some very good books do not sell many copies.

Teachers frequently encounter the appeal to pity, but they also encounter arguments that seem to commit this fallacy but actually do not. Appeals to pity (and popularity) can lead to true or false conclusions; the key is determining whether the conclusion follows from the premises. Compare the following two arguments:

I need to get an A in this course, because if I don’t, I will lose my scholarship.

I couldn’t get my paper turned in on time, because there was a tornado and all the power went out.

The first argument appeals to pity by suggesting that the reason the student should get an A is that if he doesn’t, he will lose his scholarship. That conclusion, of course, does not follow; he should get an A if his work deserves it. Presumably, it is not just this course that is leading to this result, anyway. The second argument appears to offer a similar kind of reasoning, but, in fact, such a power outage might well be a legitimate reason for a late paper. Similarly, an appeal to popularity may not always lead to a false conclusion; again, we have to determine whether the conclusion follows from what is being stated.

The pizza in that place must be great; it always has a long line of customers.

Of course, the reason the pizza is good isn’t because the pizza place has a long line of customers; the reason the pizza is good is because, well, it’s good! But one might see that there is another premise here, one that is not explicitly stated: customers line up only for a product that is really good. In that case, if there is a long line of customers, and customers are willing to stand in line only for something that is good, then a long line of customers for this pizza suggests that it is good.
In both of these fallacies, then, one must look at the premises and see if they support the conclusion; as always, the question is, does the conclusion follow from those premises? There may be “hidden” premises, as we saw in the pizza case, or there may not be. But after looking at the information provided, if the reason to accept a given conclusion is solely because of the sad circumstances of the person putting forth the conclusion, that argument may well commit the fallacy of the appeal to pity. And, in the same way, if the reason to accept a given conclusion is solely because a lot of people accept it, the argument may well commit the fallacy of the appeal to popularity.

Logic in the Real World: The “Dying Card”

As we have learned, not all arguments that may touch us on an emotional level are necessarily fallacious. Television satirist Stephen Colbert provides a good example in an interview with a doctor promoting the value of having children immunized. According to the doctor, “[if we don’t immunize children] every year we would have thousands of children dying from measles or whooping cough, or we’d have congenital birth defects from rubella or [children] being paralyzed by polio.” To this, Colbert replied, “See, now this isn’t fair because you’re playing the children dying card. How am I supposed to fight that? Let’s keep this intellectual.” What Colbert is doing here is falsely accusing the doctor of using an appeal to pity in his statement. The doctor’s reference to potential child mortality is relevant to the question of immunization, and his mention of this possibility does not take his argument out of the realm of intellectual analysis, as claimed by Colbert. Arguments that may move heartstrings may or may not be fallacious; it is up to us to figure out whether or not we are encountering legitimate reasoning. This can sometimes prove difficult. Can you think of any ways to help us determine if an argument that invokes an emotional response is in fact fallacious? Are such arguments more difficult to assess accurately? Why or why not?

Loaded Question

I asked Susan the other day if she had stopped smoking marijuana. She said no, so she must still be smoking marijuana.

The fallacy of the loaded question is committed when separate questions are combined unfairly. The resulting question cannot be answered without accepting an unfair assumption. If I were to ask Susan this question and she said “yes,” then that would lead to the conclusion that she did smoke marijuana but has now stopped. If I were to ask Susan this question and she said “no,” then that would lead to the conclusion that she did smoke marijuana and continues to do so. But these aren’t really fair alternatives for Susan because of the way the question is worded. In this example, what is “disguised” is that there are really two questions:

1. Have you smoked marijuana?
2. If so, have you stopped?

Clearly enough, if the answer to the first question here is “no,” then the second question doesn’t apply. By combining the two questions into one, the questioner illegitimately assumes that the person has been doing the activity in question. In this way, the fallacy of the loaded question can be associated with a fallacy we saw earlier, that of begging the question, because both rest on an assumption that has not been established: in the current example, it is illegitimate to assume that the person ever smoked marijuana.
In response to this question, Susan should have pointed out that it is unfairly worded, and that it assumes something that cannot be legitimately assumed. A question itself, of course, is not an argument, but if the question leads to a conclusion, it can provide the materials for an argument, as we can see in this example:

Chris: I don’t support affirmative action.
Bob: Chris, why don’t you support equal opportunities for women and minorities?

Bob’s implied argument, when broken down, looks like this:

Chris is against affirmative action. Therefore, Chris is against equal opportunities for women and minorities.

When looked at this way, it is clear that Bob is assuming that affirmative action is necessary for women and minorities to receive equal opportunities. Affirmative action may be necessary for those opportunities, and it may not be; the point is that Bob cannot simply assume that it is necessary. Rather, he has to argue for the point, and by wording the question in the way he does, he makes an illegitimate assumption. For this reason, such questions are also frequently called “complex questions” because the question is, in fact, more complex than it may appear. As always, we see that when a fallacy is committed, the conclusion of the argument (whether that argument is explicit or merely implied) does not follow from the premises, or the reasons given, for that conclusion.

Straw Man Fallacy

Senator Jones wants to cut defense spending. I guess he doesn’t care if we can’t protect ourselves.

The straw man fallacy takes an opponent’s claim, characterizes that claim unfairly, and then criticizes the opponent on the basis of that unfair characterization. In addition to not really addressing the opponent’s claim, the straw man fallacy also draws a conclusion by criticizing a different position than that advocated by the opponent. For that reason, the conclusion does not follow from the premise. In our example, there may be a significant difference between cutting defense spending by some percentage and having an inadequate defense. Presumably, one can argue that a country can, or cannot, still defend itself while spending less. Of course, whether or not that is the case is not the issue here; what is at issue is what Senator Jones’s claim actually is. Here, the claim seems to be mischaracterized, then criticized on that basis. This sets up a “straw man”—an unfair description of an opponent’s viewpoint—and then that straw man is “knocked down” by criticizing not the view actually put forth, but the view as unfairly represented. Characterizing an opponent’s claim unfairly essentially sets up a “dummy” argument that is easy to knock down and doesn’t fight back.

Amy thinks the way factory farms raise chickens is cruel. Amy must think we can live on just nuts and berries.

Amy’s position here is that certain methods of raising chickens involve some degree of cruelty. But her position is mischaracterized to imply that she thinks all methods of food production involving animals involve cruelty; this seems to imply, further, that since cruelty is wrong, all such methods should be prohibited. Thus, her opponent concludes that Amy believes everyone should eat solely “nuts and berries,” or, at least, follow a vegetarian or vegan diet. But clearly enough, Amy’s claim isn’t fairly characterized, and thus what might be implied by that characterization is an illegitimate inference.
Attributing a view to Amy, then criticizing her on the basis of that attribution, is a mistake in reasoning. The premise in this kind of argument is not fair to Amy, because it misrepresents her position (thus setting up a straw man). The conclusion based on that premise, then, does not follow from Amy’s own claim; it follows only from this unfair description of her claim (thus, knocking down the straw man). If we were to put this argument into premise-conclusion form, the fallacy committed becomes even clearer, and the bracketed premises—not stated in the original argument—show the mistaken assumption being made:

Amy thinks it is cruel to raise chickens on factory farms.
[Raising animals on factory farms is cruel.]
[Most of the animals we eat are raised on factory farms.]
[We should not do what is cruel.]
[The only way to avoid this kind of cruelty is not to eat animals.]
Therefore, Amy thinks we should live on just nuts and berries (that is, not eat animals).

It is probably clear that this could very well mischaracterize Amy’s position; there are, for instance, ways of raising animals for food that are not cruel. But by providing the specifics of the argument here, we can see that a number of assumptions are being made—although not stated in the original argument—that one (Amy, for one) might well challenge or dispute. The trick with the straw man fallacy, however, is that there can be serious disputes about what is and isn’t a fair characterization of an opponent’s view. In our preceding examples there may be legitimate disputes about whether Senator Jones is proposing cuts to defense spending that risk weakening the military too much. There may also be disagreements with Amy, about whether there is in fact cruelty involved in factory farming, and if there is, how much cruelty is involved. The straw man fallacy is committed when it is obvious that an opponent’s position is being criticized based on a clearly unfair characterization of that position. But there may be legitimate disagreement about whether the opponent’s position is being unfairly represented.

Highlights: Two Frameworks You Can Use to Help Identify Fallacies

Fallacies can be difficult to identify. Putting arguments into premise-conclusion form, or equation form as it is sometimes called, can help you identify the connection between the premises and the conclusion; in other words, the relevance, or the logic. Some fallacies, like ad hominem, red herring, and straw man, occur more frequently in debates between two people; identifying them can be a little trickier. Following are two frameworks you can use to identify and distinguish between some of the fallacies you’ve learned about here. We’ll use some of the examples that are scattered throughout the chapter to illustrate the two frameworks in action. First, try to figure out if the fallacy occurs in someone’s response to another person’s argument or claim. If it does, use the “Debate” framework below. Otherwise, use the “Premise-Conclusion” framework.

Debate Framework

1. Identify the issue. Try to plug it into the following sentence: “The arguable issue is whether or not . . .”
2. Identify person A’s argument: both the conclusion and the premises.
3. Identify person B’s response to person A’s argument or claim. Does B attack A’s character in an attempt to discount the argument? That’s an ad hominem fallacy. Does B distort A’s claim in an attempt to make it ridiculous, easier to “knock down”? That’s a straw man fallacy.
Does B bring in another issue attempting to distract from A’s argument or claim? That’s a red herring fallacy.

Premise-Conclusion Framework

1. Identify the conclusion. Figure out what one is being persuaded to believe or do. Look for conclusion indicator words.
2. Identify the reasons offered in support. Look for premise indicator words.
3. Put the statements in premise-conclusion form, so the logic is easier to evaluate.
4. Compare what you get to the generic forms listed on this site: http://www.nizkor.org/features/fallacies

Examples:

I went to that new restaurant the other day, and I didn’t like what I had. I don’t think that restaurant is any good.
P: I went to the new restaurant and did not like what I had. (insufficient evidence)
C: Therefore, their food is not good. (overgeneralization)

Albert Einstein was a brilliant man and believed in ghosts. So it seems that ghosts actually exist.
P: Albert Einstein was a brilliant man.
P: Albert Einstein believed in ghosts.
C: Therefore, ghosts exist.

That book must be very good; it has been on the best-seller list for weeks.
P: That book is popular.
C: Therefore, it must be very good.

False Cause (Post Hoc) Fallacy

The day before the election, the candidate decided to wear her clothes inside out. Since she won, that must have caused her victory.

If one thing causes another thing to happen, the first event, of course, precedes the second event. For instance, if I put a pot of water on very high heat and the water then boils, we generally are willing to say the heating of the water caused it to boil. Lucky talismans, like a rabbit’s foot, often figure in false cause fallacies. However, just because one thing precedes another does not mean the first causes the second. To use the terms we saw earlier, for one thing to cause another, it is a necessary condition that the cause precede the effect. But one thing preceding another is not a sufficient condition to establish a causal relationship between the two things, as this example should make clear:

Every morning the rooster crows, and then the sun comes up. The rooster, therefore, must cause the sun to come up.

To claim that one thing causes another solely because it occurs first is to commit the false cause fallacy. Another, more traditional name for this fallacy also reveals the mistake made in the reasoning: “post hoc, ergo propter hoc”—that is, “after this, therefore because of this.”

Superstitions are a standard example of the false cause fallacy. If my luck improves (or at least doesn’t get any worse) when I carry around a rabbit’s foot, or when I tie my right shoe before tying my left shoe, or when I avoid walking under ladders, then I may be tempted to say that these practices caused or helped cause my good luck. But there are a couple of problems here. First, my luck could always be worse, so it is very difficult to tell that such superstitions really caused the results involved. Imagine I carry around a lucky penny, but I am badly injured when run over by a car. Yet my luck could have still been worse: perhaps I reason that if I had not had my lucky penny, I would have been killed by the car. But, more important, we might have difficulty establishing a causal relationship between a superstitious act and the luck that follows, were we to put it to a scientific test. And such a test, of course, would include making quite specific what such “luck” actually involved.
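One way to appreciate how weak mere sequence is as evidence: two streams of events that have nothing to do with each other will still line up fairly often just by chance. The short Python sketch below (our own illustration, not part of the original text; the fifty-fifty probabilities are arbitrary assumptions) generates “carried the lucky penny” days and “good luck” days independently, then counts how often good luck follows the penny:

    import random

    random.seed(1)  # fixed seed so the illustration is repeatable
    days = 1000
    carried_penny = [random.random() < 0.5 for _ in range(days)]
    good_day = [random.random() < 0.5 for _ in range(days)]  # generated independently of the penny

    # Count days on which the penny was carried and the day also turned out well.
    both = sum(1 for c, g in zip(carried_penny, good_day) if c and g)
    print("Penny days:", sum(carried_penny), "of which 'lucky':", both)

Roughly half of the penny-carrying days will come out “lucky” even though the two lists are generated independently, which is exactly why “after this” is never, by itself, evidence for “because of this.”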
In looking at cause and effect, we might want to distinguish among coincidences, correlations, and causes. If it rains after I wash my car, it may just be an unhappy accident (this would be a coincidence). If this happens with surprising frequency, I may think that it seems to rain almost every time I wash my car (this would be a correlation). But do we ever get to the point where we wish to claim that washing my car causes it to rain? Those who study the methods employed by science often try to determine whether a correlation actually supports the strong idea of a causal connection, as we see with this example:

Every day after a full moon, the stock market goes up 10 percent. So the full moon causes the stock market to go up 10 percent.

Here we may be tempted to regard this relationship as a mere coincidence and to think that making the stronger causal claim would be to commit the false cause fallacy. After all, just because the stock market went up after a full moon does not, by itself, indicate that the full moon caused it to go up. But what if someone noticed that there was a historical connection and went back through the records to discover that this relationship was very frequent—that almost every time the moon was full, the stock market then went up 10 percent the next day? How do we determine whether this correlation was not just coincidence, but a genuine causal relationship? At this point, of course, we move from logic to actual scientific inquiry, carefully examining the data and testing it in various ways. In general, we have to carefully state what the evidence is and what conclusion is being drawn, and we must examine the relationship, if any, that exists between the evidence and the conclusion. The fallacy of false cause is committed if we take a sequence of events—one thing followed by another—as by itself establishing a causal relationship. Just because B follows A, it does not follow that A causes B. And if we assert this conclusion on no other basis than the sequence “A then B,” we make a mistaken inference and, thus, commit the fallacy of the false cause.

Red Herring

Officer, you shouldn’t give me a speeding ticket. There are a lot of people out there who are much more dangerous than I am, and you should be chasing them, not me.

The red herring fallacy is a very old mistake in reasoning—discussions of it go back at least to Aristotle—and also a very common one. A red herring fallacy is committed by someone who tries to avoid the issue by introducing another, irrelevant issue, hoping that it will then attract attention away from the issue that should be discussed. As we can see in this example, whether or not the driver deserves a speeding ticket should be determined by whether he or she was speeding. But by introducing the idea of those who break the law in more threatening ways, the driver hopes to divert the attention away from the question of whether he or she was speeding. The fallacy involved here can be made explicit by putting the example in premise-conclusion form:

There are worse crimes than speeding. Therefore, I shouldn’t be given a ticket for speeding.

Like a smelly fish, a red herring is an irrelevant issue designed to throw the opponent off the true scent of the argument. As always with the fallacies we have been looking at, we see that the conclusion does not follow from the premise. While it certainly is true that there are many worse crimes than speeding, that doesn’t mean the driver was not speeding.
Whether or not a speeding ticket should be given, therefore, has to be argued on a different basis. Parents are quite familiar with this kind of fallacy. Imagine Suzy says, You shouldn’t make me be home by midnight, Mom. None of my friends has to be home by midnight. To see the fallacy involved, we can put this into premise-conclusion form as well: None of my friends has to be home by midnight. Therefore, I should not have to be home by midnight. Parents, of course, have a traditional response to this argument (it might be worth considering if a fallacy is committed in this response!): If all of your friends jumped off a bridge, would you? Whether or not Suzy’s friends have to be home by midnight is irrelevant; the question is whether Suzy has to be home by midnight. By getting the parents to address the issue of other children and other rules set down by their parents, Suzy may hope to distract her own parents from their point and get them to focus on other issues. The red herring fallacy is one example of numerous fallacies that fall under the more general title of “fallacies of irrelevance” (the argument from false authority is another fallacy of irrelevance). All fallacies make the same general mistake in reasoning, leading to the overall result that the conclusion does not follow from the premise, or premises. Many fallacies make similar kinds of errors; for instance, all fallacies of irrelevance use premises that are irrelevant to the conclusion. It can get confusing keeping the various names and sub-fallacies straight, but it is more important to see that a fallacy is committed, and to be able to explain what mistake is involved. Logic in the Real World: Red Shark Fin Shark fin soup, thought to have curative powers, has a long tradition in Chinese culture. However, some fear overharvesting is causing a dangerous decline in shark populations, not to mention the cruel and wasteful practice of throwing sharks back into the water after removal of the fins. A legislator in San Francisco—home to the largest Chinatown in the United States—recently proposed a citywide ban on shark fin sales and possession. Another legislator stated in an opposing response, It seems that there are more and more examples where individuals or groups of individuals are trying to limit our heritage and our culture. It was not so many years ago that, if you happened to be Chinese, you could not go to school outside of Chinatown. Preventing Chinese students from going to school outside of Chinatown would be wrong. However, we can see that this matter has nothing at all to do with the issue at hand—that is, whether measures should be taken to stop the slaughter of sharks for their fins. In this response, the opposition threw up a smokescreen in attempting to divert our attention toward Chinese children of San Francisco and away from sharks. When analyzing arguments, it is important for us always to keep our focus upon the real issue. Doing so is the only way we can ensure that nothing “fishy” slips past us! In the specific case of the red herring, what reveals the error in the argument is the idea that a tangent, or irrelevant issue, is introduced, designed to distract one’s opponent from the issue at hand. Before we assert that the red herring fallacy has been committed, therefore, we must show that an irrelevant issue is being introduced as a distraction. In the following example, we see that one person’s “red herring” might be another person’s genuine concern about a relevant issue. 
For instance, imagine in a political campaign that one candidate, Ms. Smith, says this about her opponent, Mr. Brown:

My worthy opponent Mr. Brown advocates policies that will require more government interference in our lives, and thus should be rejected.

There may be several fallacies involved here (possibly a slippery slope fallacy, for instance). But Mr. Brown might respond that the focus should be on the specific policies in question; from his perspective, Ms. Smith’s introduction of the topic of government interference may be intended as a red herring that distracts her audience from those policies. Ms. Smith, on the other hand, might think “government interference” is an important implication of the policies Mr. Brown advocates. So here we can see that although logic can help us identify when a fallacy is committed, logic cannot provide a complete account. In this case, what needs to be argued is whether Ms. Smith’s introduction of the topic of government interference is relevant (and thus not a red herring) or not relevant (and thus, as distracting from the issue at hand, a red herring). Logic by itself is not in a position to settle that dispute!

Logic in the Real World: The Top Three “Debate” Fallacies in Action

Some fallacies, like ad hominem, red herring, and straw man, are ones that occur in debates between two people. Here we will illustrate the differences among these three, using a single issue: Should marijuana be legalized?

Ad Hominem: Person B Discounts Person A’s Argument by Focusing on Character
Person A: I think marijuana should be legalized, because it would rid the prison population of many nonviolent offenders, and that would save us money.
Person B: Of course you’d say that, you’re a pot head!

Straw Man: Person B Distorts Person A’s Argument to Make It Easier to Knock Down
Person A: I think marijuana should be legalized, because it would rid the prison population of many nonviolent offenders, and that would save us money.
Person B: You think we should legalize drugs?! That’s ridiculous!

Red Herring: Person B Distracts from Person A’s Argument by Bringing in Another Issue
Person A: I think marijuana should be legalized, because it would rid the prison population of many nonviolent offenders, and that would save us money.
Person B: So you think nonviolent criminals shouldn’t be in prison!

False Dichotomy

Emma doesn’t think prayer should be allowed in public schools. Therefore, Emma must be an atheist.

The fallacy of the false dichotomy is also known as the fallacy of “the false dilemma” and the fallacy of “black and white thinking.” The mistake in reasoning committed here is to present two, and only two, choices, when in fact there may be many other options available. For instance, in the preceding example, there may be many people who are not atheists but who do not support prayer in public schools. To suggest that there are only two options—support for such prayers or atheism—is to ignore the many other options available. Therefore, the conclusion does not follow from the premise, and the argument commits the fallacy of the false dichotomy. We saw an earlier argument that committed the fallacy of the straw man—when Amy’s view was misrepresented and then criticized on the basis of that misrepresentation:

Amy thinks the way factory farms raise chickens is cruel.
Amy must think we can live on just nuts and berries.
This argument also commits the fallacy of the false dichotomy, in that it at least implies that either one completely ignores how animals are raised for food, or one must advocate vegetarianism. But, of course, one can be a carnivore and care about the treatment of animals. So this argument commits both the straw man fallacy and the fallacy of the false dichotomy. Either mistake is sufficient to reject the argument as stated, but it is good to keep in mind that a bad argument may make more than one mistake!

A false dichotomy fallacy suggests that there are only two options.

On occasion, one may be presented with a choice in which there really is no third option. For instance, the following argument does not seem to commit the fallacy of the false dichotomy, for there truly are only two possibilities (if we exclude vampires):

Nick is either dead or alive.
Nick is not dead.
Therefore, Nick is alive.

To understand the fallacy of the false dichotomy, then, we must examine the premises. If the premises present a false choice by ignoring other options, then the conclusion will not follow from the premises as stated. This can sometimes take some care, and we need to be aware that some seemingly persuasive arguments are, upon closer examination, fallacious. For instance, one may encounter the following bumper sticker:

America—Love It or Leave It

The implied argument here seems to be that certain actions that might be critical of America would indicate that one doesn’t love America. The argument would then look something like this in premise-conclusion form:

You must either love America or you must leave America.
If you criticize America, you don’t love America.
You have criticized America.
Therefore, You must leave America.

However, the argument implied by this bumper sticker seems to present two options when there are many others. One might, for instance, want to improve America and thus offer criticism as a way of improving it. Similarly, one might criticize one’s spouse and children while also loving them. There may be debates over what is and is not justifiable criticism; but in this case, it seems that one is presented with two choices when actually there are other choices available. If that is the case, then this argument falsely presents two choices and derives its conclusion on the basis of those two, and only those two, choices. Because there are more choices, this is a mistake in reasoning: an inference is made illegitimately, and the conclusion does not follow from the premises.
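The structural point in play here can be made explicit. The Nick argument has the form logicians call disjunctive syllogism, which is valid; the Emma argument and the bumper-sticker argument borrow that same surface form, but their either/or premise is false because it leaves out live options. The following sketch, written in LaTeX, sets the two side by side; the letters P, Q, and R are schematic placeholders introduced here for illustration, not part of the original examples.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb} % provides \lor, \lnot, and \therefore
\begin{document}

% Valid disjunctive syllogism (the "Nick is either dead or alive" argument):
% here the either/or premise really does exhaust the options.
\[
  P \lor Q, \quad \lnot P \;\therefore\; Q
\]

% A false dichotomy borrows the same surface form, but the either/or
% premise is false because at least one further option R has been ignored
% (for example, rejecting school prayer without being an atheist, or
% criticizing America in order to improve it):
\[
  P \lor Q \;\; (\text{false: the real options are } P \lor Q \lor R),
  \quad \lnot P \;\therefore\; Q \;\; (\text{does not follow})
\]

\end{document}
```

Seen this way, spotting a false dichotomy is not a matter of the argument’s shape, which is identical in the two cases, but of asking whether the either/or premise is actually true.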
Ch 4 What Did We Find?

We discovered that many arguments that may appear to be persuasive actually commit fallacies.
We saw that fallacies can occur for various reasons.
We examined how one identifies the kinds of mistakes made by fallacious arguments.
We found that being aware of fallacious arguments can help us avoid them in our own reasoning, and also help us spot when others are using such arguments against us.

Some Final Questions

Consider some of the commercials you’ve seen. How might such commercials have employed fallacious reasoning to convince you to buy something?
Try to come up with the kind of argument you might see a politician make. What fallacies do you see committed by some politicians? Why do you think such fallacies seem to be so common in politics?
Are most people aware of fallacies? How can being aware of fallacious reasoning improve one’s own arguments?
How can understanding fallacies help prevent us from being taken in by arguments that look good at first but are not?

Web Links

The Nizkor Project
Visit this site for additional examples of logical fallacies.
http://www.nizkor.org/features/fallacies/

Mission: Critical—Logical Fallacies
Visit this site, sponsored by San Jose State University, for additional exercises on logical fallacies.
http://www.sjsu.edu/depts/itl/graphics/induc/ind-ded.html