© John Turri, CC BY-NC-ND http://dx.doi.org/10.11647/OBP.0083.03
This chapter answers the main criticisms of the knowledge account of assertion.
Probably the most popular and persistent objection to the knowledge account is that it fumbles cases of reasonable ignorant assertions. A reasonable ignorant assertion has two features: the speaker reasonably believes that the assertion’s content is true, but she does not know that it is true. Critics have repeatedly discussed two types of example that supposedly fit this description.
The first type involves reasonable false assertions. In this type of case, a speaker has good evidence for believing that, say, she owns a certain type of watch. And she tells someone that she owns that type of watch. But her assertion turns out to be false despite the evidence. Perhaps the vendor mislabeled the watch so that she is wrong about what type it is, or perhaps her very reliable memory failed her on this particular occasion.
Critics of the knowledge account report having the intuition that reasonable false assertions are perfectly fine. They claim that this intuition is “obvious” and reflects ordinary practice (Hill & Schechter 2007: 109; Douven 2006: 476ff). Stronger yet, some claim that there is “no intuitive sense” in which a reasonable false assertion is improper, and that “there is no practice” of counting them as inappropriate (Douven 2006: 480; Hill & Schechter 2007: 109). If this is all on the right track, then the knowledge account cannot, without complication, “explain our intuitions about false but reasonable assertions” (Douven 2006: 478).
A series of experiments tested whether the critics have correctly described our ordinary practice of evaluating reasonable false assertions (Turri 2013b). People in these experiments considered a simple story about Maria. Maria is a watch collector who owns so many watches that she cannot keep track of them all by memory alone, so she maintains a detailed inventory of them. She knows that the inventory, although not perfect, is extremely accurate. One day someone asks Maria whether she has a 1990 Rolex Submariner in her collection. She consults the inventory and it says that she does have one. At the end of the story, one group of people was told that the inventory was right. Another group of people was told that the inventory was wrong. Everyone then answered the same question: should Maria say that she has a 1990 Rolex Submariner in her collection?
The results were absolutely clear. When the assertion would be true, virtually everyone said that Maria should make the assertion. But when the assertion would be false, the vast majority said that she should not make the assertion. This same basic pattern persisted when people were questioned in different ways. It also persisted across other differences that often can influence evaluative judgments and social cognition. For example, the pattern persisted whether the stakes were low (a “neighbor asking out of idle curiosity”) or high (a “federal prosecutor asking in the course of an official investigation”). It also persisted when the stimuli were systematically switched so that the inventory said that Maria does not have the watch, and people had to evaluate whether Maria should make a negative assertion (that is, “I don’t have one” as opposed to “I do have one”). When asked to explain their evaluation, a strong majority said that the statement’s truth-value was more important than Maria’s evidence.
One study in particular demonstrated how much subtlety and sophistication informs ordinary judgments about assertability. Instead of answering, yes or no, whether Maria should make the assertion, or rating their agreement with the statement that Maria should make the assertion, people performed a much more open-ended task of identifying what Maria should say. When the assertion would be true, the vast majority of people answered that Maria should assert that she owns the watch. But when the assertion would be false, very few people answered that way. Instead the most common response was that Maria should assert that she “probably” owns one, which, on the most natural interpretation of the case, is actually true because of Maria’s evidence.
The second type of ignorant assertion discussed by critics is “Gettiered assertion.” The idea here is that it is sometimes reasonable to believe true propositions that you nevertheless fail to know, due to objectionable forms of luck. These are often called “Gettiered beliefs,” named after Edmund Gettier, the philosopher who sparked discussion of such examples in the mid-twentieth century (Gettier 1963; for an overview, see Turri 2012a; for some pre-1963 history of such cases, see Matilal 1986: 135–37; Chisholm 1989: 92–93). According to conventional philosophical wisdom, Gettiered beliefs fall short of knowledge. But, critics claim, intuitively there is no sense in which you should not assert Gettiered beliefs. Hence, critics argue, knowledge is not the norm of assertion (e.g. Hill & Schechter 2007; Lackey 2007; Brown 2008; Smithies 2012; Smith 2012; Coffman 2014).
“Gettier cases” come in many varieties. Here I will focus on two basic types frequently mentioned in the assertion literature. There might be no theoretically neutral way of describing the structure of these cases, but I will try to remain as theoretically neutral as possible.
On the one hand, there are “environmental threat” or “fake barn” cases (the latter label is due to Goldman 1976: 772–73, crediting Carl Ginet; see Goldman 2009: 79 n. 5). This is the most popular type of case among critics of the knowledge account. In an environmental threat case, the agent believes that something is true because she directly perceives it. If that were the end of the story, then intuitively she would know that the proposition is true. But it turns out that the agent is in an environment where her perceptual evidence could very easily have been misleading and led her to form a false belief. Intuitively, many philosophers claim, this real and very near possibility of error prevents the agent from knowing (e.g., Goldman 1976; Sosa 1991: 238–39; Neta & Rohrbaugh 2004: 401; Pritchard 2005: 161–62; Kvanvig 2008: 274). For example, suppose Sarah looks out her car window and sees a roadside barn as she drives along. Everything about Sarah and the barn is normal. But Sarah does not realize that the area she is driving through is being used as a movie set and the set designers have constructed many fake-barn façades that look just like real barns. Sarah is looking at the one real barn among all the nearby fakes. Clearly Sarah does not know that it is a barn, the critic claims, but surely Sarah should, if asked, say that it is a barn.
On the other hand, there are “explanatory disconnect” or “apparent evidence” cases (the latter label is due to Starmans & Friedman 2012). In an apparent evidence case, an agent believes a true proposition based on good but fallible evidence. If that were the end of the story, then presumably he would know that the proposition is true. But it turns out that the agent’s evidence is misleading and his belief is made true by something completely unrelated to his evidence. Intuitively, the unexpected explanatory disconnect between evidence and truth prevents the agent from knowing. For example, suppose that Angelo is in the forest during deer hunting season. Two very loud, sharp bangs ring out nearby. Angelo judges that somebody is hunting deer nearby. And there is somebody hunting deer nearby. But the bangs Angelo heard were just backfire from a vehicle, and his belief is true because a camouflaged hunter is stalking a deer nearby with bow-and-arrow, silent and unseen. Clearly Angelo does not know that someone is hunting nearby, the critic claims, but surely Angelo should, if asked, say that someone is hunting nearby.
Because “explanatory disconnect” cases can be so peculiar, it is worth describing another example. Suppose that Geno’s mother is completing a home improvement project and she needs a set of metric wrenches. Her old set is lost and no one can find it. So Geno goes to the hardware store, buys a new set, then puts them in the garage. But Geno did not notice that he actually bought Imperial wrenches rather than, as he thought, metric wrenches. However, there is a set of metric wrenches in the garage: his mother’s old set is under some scrap metal in a garbage can where it will never be found. Clearly Geno does not know that there are metric wrenches in the garage, the critic claims, but Geno should, if asked, say that there are metric wrenches in the garage.
A series of experiments tested whether the critics have correctly described our ordinary practice of evaluating knowledge and assertion in “Gettier” cases (Turri in press b; see also Turri in press e). People in these experiments considered stories very similar to the ones described above about Sarah, Angelo and Geno. In the fake barn case, an overwhelming majority of people judged that Sarah both knew the proposition and should assert it. Indeed, the fake barn case was judged no differently than a closely matched “cheap barn” control case where there was no salient possibility of encountering a fake. This leads me to suspect that, in fake barn cases, the critics’ intuitions about assertability are tracking their implicit judgments about knowledge, their ordinary competence in applying that concept. The reason it seems clear that Sarah should make the assertion is that she has knowledge. But contemporary philosophers have also been trained to say, perversely, that someone in Sarah’s situation obviously lacks knowledge. This in turn causes them to misinterpret such cases as problematic for the knowledge account, even though the cases actually support it.
In the explanatory disconnect cases, judgments were more mixed. In some of them, the central tendency was to attribute both knowledge and assertability. In others, the central tendency was to deny both knowledge and assertability. Either way, the important point is that a very strong majority always kept their judgments of knowledge and assertability united, defying what critics say is the intuitive reading of such cases.
More generally, the philosophical literature and lore on Gettier cases is a vast and confusing labyrinth built adventitiously over many decades. The nominal category “Gettier case” masks radical diversity in underlying causal structure. These differences are extremely important in both theory and ordinary practice — so important that they render the nominal category, as we have inherited it, utterly useless. In the line of research just summarized, some “Gettier cases” elicited rates of knowledge attribution exceeding 80%, while others struggled to top 20%. The mere fact that something is a “Gettier case” is consistent with its being both overwhelmingly judged knowledge and overwhelmingly judged ignorance, thereby masking differences that radically affect the psychology of knowledge attributions and depriving the category of any diagnostic or predictive value (Turri, Buckwalter, & Blouw 2015; Blouw, Buckwalter, & Turri in press; Turri 2016). The lesson here is that philosophers should stop grouping into one category cases with radically different causal structures.
A second objection to the knowledge account is a simple argument linking blame and rule-breaking (for versions of this, see Lackey 2007: 603, 597; Douven 2006: 476–77; Hill & Schechter 2007: 109). A speaker who makes a reasonable false assertion is not thereby properly criticizable or blameworthy. A speaker who is not properly criticizable or blameworthy has probably not broken the norm of assertion. So the norm of assertion probably does not require truth. But knowledge requires truth. So the norm of assertion probably is not knowledge.
The crucial assumption here is that blamelessness is a defeasibly good indication that no rule has been broken. Is the assumption true? More importantly, does it accurately reflect the way people actually judge particular cases?
It turns out that it does not. Instead, when people consider cases of blameless rule-breaking, many prefer to describe events in a way that validates their desire to excuse. This can lead them to think and say false things about the agent’s conduct. In particular, it can lead them to falsely claim that no rule has been broken. This tendency is known as excuse validation (Turri 2013b: Experiment 5; Turri & Blouw 2015). It is related to another tendency known as blame validation, which causes people to describe events in a way that validates their desire to blame (Alicke 1992; Alicke 2000; Alicke et al. 2008; see also Alicke 2008; Alicke & Rose 2010).
Several studies compared judgments about cases of reasonable false assertion to judgments about obvious cases of blameless rule-breaking. One obvious case of blameless rule-breaking involved Brenda, who recently entered a baking contest and has just started preparing her dish. Contest rules say that only natural sugar may be used as a sweetener, so Brenda was careful to buy only sweetener clearly labeled “natural sugar.” But the label on the package is wrong because there was a mix-up at the factory: an artificial sweetener that looks just like sugar was accidentally packed in a package labeled “natural sugar” without anybody noticing. Brenda is not aware that this happened and, as a result, she is actually using artificial sweetener. Obviously Brenda should not be criticized for this, and people’s response to the case clearly reflects this: nearly everyone says that she should not be criticized. Equally obviously, Brenda is breaking contest rules, but people’s response to the case does not clearly reflect this. When they are asked, “Did Brenda break the rules?” roughly half of people say that she did not break the rules. In other words, roughly half of people’s answers contradict the plain facts of the case.
Now consider the analogous case of a reasonable false assertion. Robert recently started collecting coins. Today he purchased an 1804 US silver dollar at a local coin shop. But the coin dealer cheated Robert: the coin is actually a 1904 US silver dollar that has been altered to read “1804.” Robert is not aware that the dealer did this and, as a result, he tells his dinner guests that he has an 1804 US silver dollar. Obviously Robert should not be criticized for this, and people’s response to the case clearly reflects this: nearly everyone says that he should not be criticized. Equally obviously, Robert makes a false assertion, but people’s response to the case does not clearly reflect this. When they are asked, “Did Robert make a false statement to his guests?” roughly half of people say that Robert did not make a false statement. Again, roughly half of people’s answers contradict the plain facts of the case.
Across a wide range of activities, from baking to farming to asserting to playing chess, we observe the same exact pattern of response to blameless rule-breaking: basically everyone agrees that the agent should not be blamed, and roughly half of people falsely answer that the agent did not break the rule.
Now suppose that we ask people to consider the exact same cases of blameless rule-breaking, but we change the question ever so slightly. Rather than asking whether the agent “broke the rule” or “made a false statement,” instead we ask whether the agent “unintentionally broke the rule” or “unintentionally made a false statement.” This slight change causes a dramatic shift — now everyone answers “yes.” Everyone identifies it as blameless rule-breaking, even though half of people fail to identify it as rule-breaking. But unintentional rule-breaking entails rule-breaking, so how could this be?
The explanation is quite simple and, once stated, can seem completely obvious. People answer “no” to the original question because they want to avoid indirectly blaming a blameless agent. A factually accurate answer — “yes, she broke the rules” — could easily seem unfair and many people prefer to avoid giving that impression. The adverb “unintentionally” is often used to indicate that the agent should not be blamed for a bad outcome. So the modified question — “yes, she unintentionally broke the rules” — liberates people to answer accurately; it does not force them to choose between answering accurately and avoiding unfairness. Instead, by agreeing that the agent unintentionally broke the rules, people can simultaneously accurately identify the rule-breaking and excuse it.
Excuse validation is a very robust tendency. As already mentioned, it occurs when evaluating a wide range of activities. We observe it in both women and men. It persists when the consequences of rule-breaking are trivial and when they are momentous. For example, in one study less than half of people said that an agent broke the rules when the result was that a database would have to be updated manually. In a closely matched condition, less than half of people said that the agent broke the rules when the result was that the nation goes to war! Excuse validation also occurs when people evaluate other people’s statements about blameless rule-breaking, rather than judging it directly for themselves.
Taken together these findings completely undermine the attempt to use cases of reasonable false assertion against the knowledge account. A predictable proportion of people react to blameless rule-breaking by engaging in excuse validation: they literally deny that a rule was broken, even when it obviously was broken. Just as Brenda blamelessly broke the baking contest’s rule by using artificial sweetener, so too did Robert blamelessly break assertion’s rule by making a false statement. The critic’s intuitions here are simply excuse validation in action.
John Stuart Mill once wrote of moral judgment, “We do not call anything wrong unless we mean to imply that a person ought to be punished in some way or other for doing it” (Mill 1863/1979: ch. 5). This insightful observation is but part of a much larger picture: many of us are unwilling to identify even trivial, non-moral instances of blameless rule-breaking as rule-breaking. And many of us are willing to contradict others who accurately identify blameless rule-breaking as rule-breaking.
In keeping with prior theoretical work on assertion, much of the experimental work discussed here assumes that the “should” of assertability differs from other familiar sources of normativity, such as morality, rationality, politeness, or legality. That is, assessments of assertability ordinarily do not reduce to assessments of the assertion’s morality, rationality, etiquette, or legality. To illustrate this assumption with an analogy, consider a chess match. The goal of chess is to checkmate your opponent. The rules of chess allow rooks to move along an unobstructed vertical or horizontal path. If you can checkmate an opponent by moving a rook along an unobstructed vertical path, then there is a clear sense in which you should make that move. But if your opponent is a child who would be utterly devastated by the defeat or a violent mobster who will react violently to a loss, then there is also a clear sense in which you should not make the move. In these ways, the normativity distinctive of chess differs from the normativity of morality or practical rationality. Similarly, experimental research on assertion has assumed that there is a distinctive “should” of assertability.
Contrary to that assumption, some theorists have worried that patterns favoring factive accounts “actually track moral considerations rather than those that are proper to assertion” (Pagin 2015: 22). Similarly, one might worry that the attributions are tracking assessments of rationality, etiquette, or legality.
These worries have been directly tested experimentally (Turri under review). People were divided into groups and read a brief story. Everyone read the same basic story in which an agent has evidence for a proposition and is asked whether it is true. In one version of the story, the proposition is true; in another version, the proposition is false despite the evidence. Researchers also varied how much was at stake for the agent. For instance, she might be having an idle conversation with a neighbor (lower stakes), or under question by a federal prosecutor (higher stakes). After reading the story, participants rated whether the agent should make the assertion. Participants also rated the assertion’s morality, rationality, etiquette, and legality, in addition to its truth-value and how serious the situation was for the speaker. Researchers then used regression analysis to statistically analyze which of these judgments and other variables predicted assertability attributions.
The results ruled out the worries and provide further strong evidence that assertion has a factive norm. Even when controlling for all the other factors’ influence, evaluations of truth-value significantly predicted assertability attributions. Indeed, evaluations of truth-value were the strongest predictor. This occurred when the stakes were lower and when they were higher. No other quality significantly predicted assertability attributions in both stakes conditions. When the stakes were lower, evaluations of etiquette also contributed significantly to assertability attributions. When the stakes were higher, evaluations of rationality and legality also contributed significantly to assertability attributions. Regardless of stakes, evaluations of morality and the seriousness of the situation did not predict assertability attributions. Assertability attributions were also unaffected by participant gender or age.
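The kind of regression analysis described can be sketched in a few lines of Python. Everything below is synthetic and purely illustrative: the variable names, effect sizes, and data are assumptions made for the sketch, not the study’s actual materials or results.

```python
import numpy as np

# Synthetic illustration of the regression logic described in the text:
# predict assertability ratings from ratings of truth-value, morality,
# rationality, etiquette, and legality.
rng = np.random.default_rng(0)
n = 500

truth = rng.normal(size=n)
morality = rng.normal(size=n)
rationality = rng.normal(size=n)
etiquette = rng.normal(size=n)
legality = rng.normal(size=n)

# Hypothetical "ground truth": assertability driven mostly by truth-value,
# with a small contribution from etiquette, plus noise.
assertability = 1.5 * truth + 0.3 * etiquette + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), truth, morality, rationality, etiquette, legality])
coefs, *_ = np.linalg.lstsq(X, assertability, rcond=None)

labels = ["intercept", "truth", "morality", "rationality", "etiquette", "legality"]
for name, b in zip(labels, coefs):
    print(f"{name}: {b:+.2f}")
```

With data built this way, ordinary least squares recovers truth-value as by far the strongest predictor, mirroring the shape of the reported finding.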
The observational data supporting the knowledge account include appropriate challenges to assertions. When someone makes an assertion, it is normally perfectly appropriate to ask them, “How do you know that?” or, more aggressively, to say, “You don’t know that.” But it is also perfectly appropriate to say, “That is not true,” “All the evidence suggests otherwise,” or, “You don’t believe that.” Don’t these latter challenges support weaker accounts of assertion’s norm, namely, a truth account, an evidence account, or a belief account (Kvanvig 2009)?
Taken in isolation, the propriety of these challenges does provide some evidence for the alternative accounts mentioned. But it does not favor these alternative accounts over the knowledge account because the knowledge account explains their propriety very well. We have theoretical and empirical evidence that, on the ordinary conception of knowledge, knowledge requires truth, belief, and not believing what goes strongly against the evidence (Buckwalter 2014; Starmans & Friedman 2012; Buckwalter, Rose & Turri 2015; Turri, Buckwalter & Blouw 2015; Turri & Buckwalter in press). So to question whether an assertion is true, whether the speaker believes what he is saying, or whether it goes strongly against the evidence is, by implication, to question whether an assertion expresses knowledge. The knowledge account, therefore, easily explains the relevance of these weaker challenges. More generally, the knowledge account easily explains the propriety of any challenge featuring an intuitively plausible requirement of knowledge.
Some critics argue that some of the observational data we have been discussing are less “pre-theoretic” and more tendentious than I have supposed. For instance, consider again the challenge, “That’s not true,” said in response to an assertion. Advocates of the knowledge account assume that the challenge constitutes a criticism of the assertion, as opposed to merely forcing the speaker into a position where he must either defend his assertion or retract it. But is it intuitively clear that the challenge really does constitute a criticism of the assertion? Critics claim “not to share such intuitions” and they suspect that the supposed datapoint is actually a “theory-laden intuition” (Rescorla 2009: 123). The most substantive, genuinely “pre-theoretic” datapoint we should allow, they argue, is that when someone challenges your assertion, you must either defend its truth or retract it. This falls short of the claim that a false assertion violates the norm of assertion (Rescorla 2009: 125).
I note two points in response to this line of reasoning. First, it is certainly true that multiple explanations are possible for any particular observation. But one-off explanations are cheap and rival hypotheses must be judged by how well they explain the entire range of data. It remains to be seen whether the proposed “defend or retract” account can well explain other relevant phenomena, let alone the entire range of data on the table. Second, the available evidence fully addresses the speculative worry that the intuitions in question are “theory-laden.” In multiple studies, the vast majority of people judged that false assertions should not be made. This is certainly not due to some theoretical commitment that these people all happened to share. Moreover, in many cases these judgments were prospective. No assertion had yet been made, let alone challenged. While a “defend or retract” norm is logically consistent with this — it is possible that people were quickly and implicitly evaluating a counterfactual situation in which the speaker is challenged and cannot defend himself — the fit is strained and ad hoc. But why settle for that when the knowledge account is a perfect fit?
Some have argued that a certain version of the knowledge account entails a paradox and so must be false (Pelling 2011, 2012). The version in question says that knowledge is not only necessary for assertability, but also sufficient. That is, knowledge is both necessary and sufficient to license assertion. For the sake of argument, grant that this stronger (“biconditional”) version of the knowledge account is preferable.
We can represent the argument as proceeding in three separate stages. First, we are asked to consider an isolated utterance of a sentence named “A1”: “This assertion is improper.” Second, we are told that an utterance of A1 causes serious trouble for the hypothesis that truth is both necessary and sufficient to license assertion. What serious trouble? Suppose that I say, “This assertion is improper.” If my assertion is true, then it is improper. If my assertion is false, then it is improper. Either way, we are told, it is a counterexample to the truth account. This creates “a self-referential paradox for the truth account.” Third, we are told that the paradox extends to afflict the knowledge account if it is possible to know two things: on the one hand, that the knowledge account is true; on the other hand, that if the knowledge account is true, then an utterance of A1 is true. A bit of further conditional reasoning leads to the paradoxical conclusion that if the knowledge account is true, then it is true both that I can know that A1 is true, and that I cannot know that A1 is true. Since it implies a contradiction, the knowledge account cannot be true.
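The second stage can be laid out schematically. Writing $P(A_1)$ for “an utterance of A1 is proper” and $T(A_1)$ for “an utterance of A1 is true” (abbreviations introduced only for this sketch), the alleged paradox for the truth account amounts to a chained biconditional:

```latex
\begin{align*}
  P(A_1) &\leftrightarrow T(A_1)      && \text{truth account: proper iff true}\\
  T(A_1) &\leftrightarrow \neg P(A_1) && \text{content of $A_1$: true iff improper}\\
  P(A_1) &\leftrightarrow \neg P(A_1) && \text{chaining the two: contradiction}
\end{align*}
```

The third stage then argues that, if both premises can be known, the contradiction transfers to the biconditional knowledge account.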
I note two points in response to this argument. The first point is that someone who responds to the knowledge account this way has misunderstood the nature of the project. Even supposing that the argument works flawlessly, it is misguided to conclude that knowledge is not the norm of assertion. For there is no reason to suppose that, when combined with various other assumptions, a social practice’s rules will not have contradictory implications. To illustrate the point, suppose we are discussing the official rulebook of a legislative chamber (or a chess club, baseball league, baking contest, etc.). Now I proceed to prove that, when combined with assumptions about weird self-referential acts that no one ever actually performs, the rules imply a contradiction. Would it follow that these rules are not the legislative rules after all? Of course not! The entire exercise was meant to show that these rules have some paradoxical implication. If this in turn implied that they were not the legislative rules, then we would have, paradoxically, failed to show that the legislative rules have a paradoxical implication. Similarly, returning to the knowledge account, even if it did paradoxically imply an inconsistency, it would not follow that knowledge is not the norm of assertion.
The second response is that the argument does not work flawlessly. In fact, once identified, its principal assumption appears highly dubious. The argument assumes that an isolated utterance of “This assertion is improper” counts as asserting some definite proposition. But no reason is offered in favor of this crucial assumption and there is reason to doubt it. If someone weirdly uttered the isolated sentence “This command is improper” or “Obey this command,” they would not be offering a command. The most natural reaction to such an utterance is to wonder, “What command are you talking about?” Similarly, if someone weirdly uttered the sentence “This question is improper” or “Is this question improper?” it is, at the very least, unclear that they would be asking an actual question. The natural reaction is to wonder, “What question are you talking about?” Analogous points can be made about weird utterances involving other speech acts such as, “This guess is improper,” “This hypothesis is improper,” “This announcement is improper,” and so on. My reaction to a de-contextualized utterance of “This assertion is improper” follows precisely that pattern. I am left wondering, “What assertion are you talking about?”
Of course, we can easily imagine contexts in which some particular assertion is the topic of conversation — perhaps some particularly outrageous statement by a politician, comedian, or bigot — in which case saying “This assertion is improper” would make good sense. But that is because in that context we naturally interpret the noun phrase “this assertion” as referring to a salient pre-existing assertion. But the argument against the knowledge account assumes a radically different understanding of the phrase “this assertion.” In order for the argument to get off the ground, that phrase must be understood self-referentially and, furthermore, in such a way as to imply a contradiction. But it is highly doubtful that the phrase ever must be understood that way.
Some critics argue that knowledge is not the norm of assertion because knowledge requires belief, whereas assertability does not require belief (Lackey 2007). The argument is, as usual, defended almost entirely by appealing to intuitions about thought experiments. The thought experiments feature what are called “selfless assertions.” A “selfless assertion” is supposedly an assertion that has two crucial features. First, it is an assertion that, intuitively, the agent should make. Second, we naturally interpret the agent as neither believing nor, as a result, knowing the proposition asserted. By this point, the attentive reader will immediately question the dependability of these intuitions. And such skepticism would be very well placed.
The most widely discussed example of “selfless assertion” features Sebastian, a well-respected pediatrician and researcher who has extensively studied childhood vaccines (Lackey 2007: 599). Sebastian “recognizes and appreciates that all the scientific evidence shows that there is absolutely no connection between vaccines and autism.” But Sebastian’s own eighteen-month-old daughter was recently diagnosed with autism shortly after receiving one of her vaccines. The emotional trauma of his daughter’s diagnosis causes Sebastian to begin doubting his previous views about vaccines and autism, and he is aware that this is the source of his doubt. Moreover, he still recognizes that the evidence shows that there is no link. So when a baby’s parents ask Sebastian about the rumors of a link, he tells them, “There is no connection between vaccines and autism.”
Critics claim that two things are obvious about Sebastian. First, he does not believe that there is no link between vaccines and autism. Second, he should tell the parents that there is no link. So Sebastian should assert what he does not believe. Assuming that knowledge requires belief, it follows that Sebastian should assert what he does not know. Critics conclude that the knowledge account faces a “fundamental difficulty” (Pritchard 2014: 160; see also Wright 2014: 255).
Some researchers have responded to cases like Sebastian’s by proposing that, on the most natural interpretation of the case, he does believe that there is no link (Turri 2014c). Who is right? In order to answer that question, we need better evidence about how the case is most naturally understood. Do people judge that Sebastian should make the assertion? Do people judge that Sebastian believes the claim in question? Do people judge that Sebastian knows the claim in question?
A recent study investigated these questions experimentally (Turri 2015c). The results confirmed that it is definitely intuitive that Sebastian should assert that there is no link between vaccines and autism: over 80% agreed that Sebastian should make the assertion. However, it is also definitely intuitive that Sebastian both believes and knows that there is no link: nearly 90% attributed belief and knowledge. These results completely contradict the critics’ interpretation of the case and, ironically, end up providing further confirmation of the knowledge account.
Another example of “selfless assertion” features Stella, a “devoutly” religious “creationist teacher” who teaches science to fourth-graders (Lackey 2007: 599). Stella’s “deep faith” includes “a belief in the truth of creationism and, accordingly, the falsity of evolutionary theory.” Nevertheless, Stella “fully recognizes” the “overwhelming scientific evidence against creationism and in favor of evolutionary theory.” This leads Stella to tell her students, “Modern humans evolved from more ape-like ancestors called hominids.” Should Stella make this assertion? Does she believe that humans evolved? Does she know that humans evolved? When this case was tested, people overwhelmingly agreed that she should make the assertion. However, they also overwhelmingly agreed that she believes and knows that humans evolved. Again the results completely contradict the critics’ interpretation of the case and provide further confirmation of the knowledge account.
Critics have offered other examples of “selfless assertion.” But they are ill suited to test intuitions about assertability. They involve provocative, even incendiary, subject matter that can potentially interfere with people’s judgment. For instance, one case involves a “racist juror” sitting in judgment of an innocent black man accused of interracial sexual assault. The experiments discussed above focused on less provocative but still emotionally and morally charged examples. The examples of Sebastian and Stella involve socially controversial issues: the safety of vaccines and the antagonism between creationism and evolutionary theory. The stories also raise the prospect of harming innocent babies and children by threatening their physical health or intellectual well-being. It is not mere speculation that all this will trigger strong moral feelings. In the very same study discussed above, people also said it would be highly immoral for Stella not to make the relevant assertion. Moreover, religious belief has a privileged social status in Western culture, so many people might feel uncomfortable explicitly attributing beliefs that conflict with someone’s avowed religious faith.
Aside from involving highly emotionally charged themes, all the cases critics have discussed are long, complicated, and confusing. They are confusing because they send mixed signals about the agent’s state of mind. For example, the agent is described as “fully recognizing” that there is “overwhelming scientific evidence” in favor of a certain proposition, but in the same paragraph it is explicitly stipulated that the agent “neither believes nor knows” the proposition. In other cases the agent is described as experiencing a cognitive roller-coaster, first knowing, then doubting, then “recognizing” that the doubt was irrational, followed by asserting the proposition in question.
In general, theoretical debate is not well served by focusing on complicated, confusing, and provocative cases. They introduce irrelevant factors that could easily cause performance errors or otherwise degrade social cognition. And yet, despite all of that, when tested these cases produced results fully consistent with the knowledge account.
But the defects of particular cases are not the fundamental issue. A deeper problem lurks here: thought experiments intended to probe for mental state attributions should not conflict with basic principles that guide social cognition. Previous work on social cognition shows that assertion is a powerful cue to belief attribution. Indeed, assertion can sometimes be a stronger cue to belief attribution than even a robust and consistent profile of non-verbal behavior (Rose, Buckwalter & Turri 2014). And work in developmental psychology shows that even very young children operate with a default assumption that people believe what they say (Roth & Leslie 1991; see also Nichols & Stich 2003). Even if critics devise simpler, coherent, more mundane cases of “selfless assertion,” we cannot magically stipulate away our tendency to interpret people as believing what they say. If thought experimentation is worth doing, it is worth doing well.
As we have seen, critics have tried to produce counterexamples to the knowledge account. The counterexamples are often interpreted as motivating weaker norms of assertion, such as belief or justification. But these counterexamples have been carefully studied and, one by one, they have all been effectively dismissed. However, a different objection to the knowledge account does not proceed by trying to pump intuitions about alleged counterexamples. Instead, it tries to identify data that the knowledge account might not explain so well.
One observation that the knowledge account explains well is the default propriety of many challenges to assertion. For instance, when I make an assertion, even if the content of the assertion has nothing to do with me or what I know, it is still normally appropriate to ask, “How do you know that?” If knowledge is the norm of assertion, then we can explain the propriety of this question by pointing out that by making the assertion I represent myself as knowing. However, it also seems appropriate to ask, “Are you certain?” or, “How can you be sure?” If we assume, as many do, that knowledge does not require certainty, then the knowledge account cannot as simply explain the propriety of this latter challenge. Some take this to motivate the certainty account: you should assert a proposition only if you are certain that it is true (Stanley 2008).
Some have proposed explanations of the “certainty” challenge that are consistent with the knowledge account. For instance, some have suggested that to be certain is, roughly, to know that you know. The propriety of the “certainty” challenge could then be explained as follows: by making an assertion you represent yourself as knowing, and the “certainty” challenge is appropriate because it asks you whether you have accurately represented yourself (Turri 2010b).
Alongside this explanation, it has been proposed that data on how we prompt assertion show that assertability is more closely connected to knowledge than certainty. For example, we naturally prompt assertion by asking, “What time is it?” Equally naturally, we can prompt assertion by asking, “Do you know what time it is?” Competent speakers respond to these similarly. The knowledge account can explain this on the grounds that we prompt assertion by asking whether you satisfy the norm of assertion, just as we can make a request by asking whether someone is in a position to grant the request — for example, one might ask an officious bureaucrat, “Are you authorized to make an exception in this case?” By contrast, we do not naturally prompt assertion by asking, “Are you certain (about) what time it is?” Questions about certainty typically become appropriate only after an assertion has been made. Aside from this, proponents of the certainty account have yet to address the very large amount of observational and experimental data discussed above in Chapter 1.
There is also a more direct test of the competing proposals. If assertion is more closely connected to knowledge than certainty, then this will have detectable behavioral consequences. In particular, it implies that people will be more willing to attribute assertability without certainty than assertability without knowledge. The matter has been tested (Turri in press c). People read a story about Angelo very similar to one discussed above. Angelo is camping with his daughter in a wooden cabin at the edge of the forest. As they settle in to sleep for the night, the daughter has her headphones on and Angelo is reading near the window. Angelo hears two very loud, sharp bangs ring out in the forest behind the cabin. It is deer-hunting season. Angelo’s daughter takes off her headphones and asks, “Dad, what’s going on? Is somebody hunting deer nearby?” After reading the story, one group of people was asked to evaluate assertability in relation to certainty, while another group was asked to evaluate assertability in relation to knowledge.
The primary question is how frequently people were willing to unlink certainty and assertability, on the one hand, and knowledge and assertability, on the other. A unified response keeps the epistemic status and assertability together. For knowledge, a unified response either attributes both knowledge and assertability, or denies both knowledge and assertability. For certainty, a unified response either attributes both certainty and assertability, or denies both certainty and assertability. A disunified response is simply the opposite of a unified one. The results were clear. The vast majority of people offered a unified response for knowledge, whereas only half of people offered a unified response for certainty. In fact, the odds of someone offering a disunified response were over four times greater for certainty than for knowledge. Assertability is more closely linked to knowledge than to certainty.
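To make the “four times greater” comparison concrete, recall how odds relate to proportions. If a proportion $p$ of respondents give a disunified response, the odds of a disunified response are $p/(1-p)$, and the odds ratio comparing certainty ($p_C$) to knowledge ($p_K$) is

$$\text{OR} = \frac{p_C/(1-p_C)}{p_K/(1-p_K)}.$$

The figures in this illustration are hypothetical, chosen only to show how such a ratio arises, not the study’s actual percentages: if half of respondents gave disunified responses for certainty ($p_C = 0.5$, odds $= 1$) while one in five did so for knowledge ($p_K = 0.2$, odds $= 0.25$), the odds ratio would be $1/0.25 = 4$, i.e., disunified responses would be four times more likely, in the odds sense, for certainty than for knowledge.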
Related experimental findings were discussed in Chapter 1. To briefly reiterate — and focusing only on the results relevant to comparing knowledge and certainty — people were divided into groups (Turri, Friedman & Keefner in press). Everyone read the same basic story, in which the key proposition is true. But there was one small difference. In one version of the story, the agent is certain that the proposition is true. In the other version, the agent knows that it is true. People then rated whether the agent should assert the proposition. People who were told that the agent knows agreed that she should make the assertion. By contrast, people who were told that the agent is certain disagreed. This difference emerged even though truth was held constant across the conditions. Again, assertability is more closely linked to knowledge than to certainty.
Philosophers have said and assumed many things about the relationship between knowledge and certainty (for example, Descartes 1641/2006; Unger 1975; Wittgenstein 1975; Moore 1959; Klein 1981; Chisholm 1989). But very little is known about how these categories are related in ordinary social cognition. Some notable theorists have argued that knowledge requires being rightfully sure of a proposition (Ayer 1956: 34), though nowadays there seems to be wide agreement among professional philosophers that knowledge does not require certainty. But much recent empirical work has shown that professional philosophers often have, or at least report having, idiosyncratic and stylized intuitions about knowledge and related matters. Moreover, philosophers often seem unaware that their intuitions and assumptions deviate substantially from deep patterns in ordinary social cognition. It would be valuable to investigate how knowledge and certainty are related in ordinary social cognition. The results could then inform theorizing about the norms of assertion. In particular, they could reveal a form of certainty that is equivalent to, or required for, knowledge, as it is ordinarily understood. I would not be surprised if that turned out to be true. If it does, then the knowledge and certainty accounts are not necessarily competitors after all.
A few years ago, when I first considered writing a book defending the knowledge account, I imagined that it would include a chapter or two dedicated to evaluating rival accounts in detail. That is because, in the past, critics tried to argue that alternative accounts could explain all the evidence as well as the knowledge account did. Those days are now gone, however, because recently the quantity and variety of evidence has grown exponentially. Indeed, so much additional evidence has accumulated that rival accounts are essentially forced back to the starting line. Additionally, some evidence that rivals spent considerable time trying to explain, such as assertions about losing lottery tickets, never impressed me and is entirely absent from the present discussion. Nowadays the knowledge account has no rival. Instead of a chapter or two, all that is left is the previous section on certainty and this lonely paragraph. Would-be critics have their work cut out for them — provided that they do not mind working in vain.