Plato’s Meno is a puzzling dialogue. Taking place towards the end of Socrates’ life, it is ostensibly about whether virtue (aretē) can be taught, whether it is acquired by practice, whether it is simply given in a person’s nature, or whether its source is something else entirely. But it also includes the so-called Meno paradox and the geometry lesson that Socrates gives to illustrate Plato’s solution to it. And it includes the claim that knowledge is more valuable than mere true belief, as well as Plato’s account of why that is. It is this latter portion of the dialogue that interests me here.
Plato’s claim has spawned a large literature, which thrived particularly between the mid-1990s and the end of the 2000s.1 An interesting feature of that literature is that the concept of knowledge with which it is concerned is different from the concept of knowledge that scholars now agree Plato is concerned with in the Meno. Here is what Plato has Socrates say about the value of knowledge over merely true belief:
For in the case of true beliefs as well, as long as they remain they are a fine thing and they achieve everything good. But they are not willing to remain for a long time, and instead run away from the person’s soul, so that they are not worth much until one ties them down with a reasoning out of the cause [aitias logismōi]. […] When they have been tied down, they first become pieces of knowledge [epistēmai], and then they are such as to remain. And it is because of these things that knowledge is more valuable than correct belief, and knowledge differs from correct belief in being tied down.
We’ll come back to this argument below, because it is very close to one that Tim Williamson gives. But the thing to note here is the phrase aitias logismōi, translated here by Lindsay Judson as ‘reasoning out of the cause’. The scholarly consensus now seems to be that what it means for someone to know a proposition, in Plato’s sense, is for them to grasp an explanation—perhaps, particularly, the canonical explanation—of why the proposition is true; that is, to know p is to understand why p. That is what it means to reason out the cause of p.
In contemporary epistemology, knowledge that p and understanding why p tend to be treated as quite different cognitive states. We say that someone knows something if it is true and their belief about it is connected to its truth in a particular way, while we say that someone understands something if they grasp why it is true. There are doubtless relations between these two states, but they are different. In particular, in contemporary epistemology, it is taken to be possible to know p without understanding why p—knowledge from testimony will often put you in this position, for instance, as will non-inferential knowledge gained through perception. And some, such as Alison Hills, argue that we can understand why p without knowing or even believing that p.
The contemporary discussion of the claim that knowledge-in-the-contemporary-analytic-philosophy-sense is more valuable than true belief has often focussed on whether the veritist, who says that the sole fundamental source of epistemic value is truth, can accommodate that claim.2 I want to argue that, whether or not they can, they shouldn’t, because it is not always true; and, indeed, veritism predicts exactly when it is true and when it is false. But I also want to argue that the veritist can say why understanding why p tends to be more valuable than not understanding why p, and so they can recover Plato’s sense of the claim: knowledge-in-Plato’s-sense (namely, understanding-why) is more valuable than true belief.
Clarifying the claim
Before we get going in earnest, it’s worth issuing a few clarifications.
First, from now on, I’ll use ‘knowledge’ and ‘understanding’ in their contemporary senses, so that you might know that something is so without understanding why it is so.
Second, while we often talk as if objects are the bearers of value, I will say that it is really states of affairs that are the fundamental bearers of value.3 There are general reasons to say this, but in our particular case, we might add to them that it is not clear that beliefs are objects, and so we shouldn’t formulate our claim as a claim about the value of beliefs with certain properties, but rather as a claim about states of affairs in which people believe things in certain ways. We often talk as if beliefs are objects, of course, but fundamentally a person believes a proposition; that is, there is a relationship between the person and the proposition; that is, the state of affairs contains the person and the proposition and, in that state of affairs, the two stand in a particular relationship to one another. We can often restate this by saying that the person has a belief, but it is hard to see what this thing we call a belief might be. On many accounts of what it means for a person to believe a proposition, it would not be straightforward to extract an object that we would call that person’s belief in the proposition, just as if I say that my department hopes for a new colleague in the autumn, it would not be straightforward to identify some object we might say is that hope. So the claim is not that the belief itself is more valuable if it counts as knowledge than if it doesn’t. Or at least, that is not the fundamental fact. It’s rather that a state of affairs that includes a belief that counts as knowledge is more valuable than a state of affairs that is the same in all ways except that the belief does not count as knowledge.
Third, we must distinguish practical value from purely epistemic value from all-things-considered value. Plato doesn’t distinguish these, but his answer to the puzzle suggests he’s talking about either practical or all-things-considered value. We’ll talk of both here at different points.
Fourth, it is clear the claim cannot be that knowledge of p is always more valuable than merely true belief in q. Were I choosing on behalf of someone whose epistemic welfare is dear to me, I would prefer that they have merely true belief about some topic that interests them deeply rather than knowledge of the number of blades of grass on Brandon Hill in Bristol.
Fifth, building on the previous clarification, the claim must compare a situation in which someone has knowledge of a proposition and a situation in which they have merely true belief in that same proposition. And, what’s more, the two situations must differ as little as possible consistent with the change in the status of the belief.
So the claim that knowledge is more valuable than merely true belief is this:
K>TB: Suppose:
S is a state of affairs in which you believe p, p is true, and your belief in p counts as knowledge.
S’ is the closest state of affairs to S in which you believe p, p is true, but your belief in p does not count as knowledge.
Then S is more epistemically/practically/all-things-considered valuable than S’.
No false lemmas and the value of knowledge
Let’s begin with the epistemic version of K>TB: that is, the version on which it is a claim about purely epistemic value.
An early response to Edmund Gettier’s counterexamples to the justified true belief analysis of knowledge stated that what distinguishes knowledge from justified true belief is that a belief that counts as knowledge is never inferred from false premises, but a belief that is true and justified might be. This is sometimes called the no false lemmas account.4
Now suppose we’re comparing someone who knows a proposition and inferred it entirely from true premises with someone who truly believes that proposition and inferred it from premises some of which are false. Then it is easy for the veritist to say why the former state of affairs is more valuable than the latter. In the latter state of affairs, the person has some false beliefs relating to the proposition believed, while in the former they don’t.
Suppose I am standing in a parkland in a Brazilian city and about thirty yards away stands a very realistic plastic model capybara. I see it and form the belief that there is a capybara in front of me, and on the basis of that infer that there are capybaras in this part of the world. In fact, there are capybaras in this part of the world, but the object I see isn’t one of them. I have a true belief based on a false premise, and so my belief doesn’t count as knowledge. The veritist says that I get some epistemic value from my true belief and some epistemic disvalue from my false premise. Now compare to a case in which I hear someone next to me say that there are capybaras in this part of the world, I come to believe that they’ve said this, and from that infer that there are capybaras in this part of the world. The veritist says that I get some epistemic value from my true belief that there are capybaras and I get some epistemic value from the true premises on which my belief was based.
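To make the veritist’s comparison concrete, here is a toy calculation; the weights (+1 for each true belief, −1 for each false belief) are my own simplifying assumption, not anything the veritist is committed to:

\[
V(\text{model capybara}) = \underbrace{+1}_{\text{true conclusion}} + \underbrace{(-1)}_{\text{false premise}} = 0,
\qquad
V(\text{testimony}) = \underbrace{+1}_{\text{true conclusion}} + \underbrace{+1}_{\text{true premise}} = 2.
\]

On any weighting on which truth contributes positive value and falsity negative value, the state of affairs in which the belief counts as knowledge comes out ahead here.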
And notice that this account doesn’t fall foul of the swamping problem. According to the swamping problem, we must not value a state of affairs that includes a valuable outcome of a reliable process more than a state of affairs that includes an equally valuable outcome of a less reliable process. For instance, to use Linda Zagzebski’s wonderful example, we must not value a good cup of coffee made by a reliable coffee-maker more than a good cup of coffee made by an unreliable coffee-maker. The value of the actual outcome of the process ‘swamps’ the value of the process. But in the case I just mentioned, the greater value assigned to the state of affairs in which the belief counts as knowledge does not flow from the value of the process that brought it about, but from the value of the beliefs that act as inputs to the process—namely, the premises in the inference that led to the belief. And that value isn’t swamped by the value of the outcome of the inference.
There’s an interesting possibility that this account raises. Consider two scenarios. In the first, my friend Barra tells me that air is a little over 20% oxygen, and I come to believe it non-inferentially. My belief counts as knowledge by testimony. In the second, my friend Barra tells me that air is a little over 20% oxygen. I then run through a complex chain of reasoning: Barra is my friend; he doesn’t feel the need to show off his knowledge to me; so he wouldn’t lie to me; and so if he says this is the case, he believes it’s the case; also, he studied geography and chemistry at school; so if he believes it, it’s true; therefore, what he says to me is true. But let’s say that he studied geography at school, but not chemistry, and he learned this fact about the composition of air in geography. So my belief is true, but it is based on reasoning that includes a false lemma, and so it isn’t knowledge. Nonetheless, the veritist seems to have to say that the state of affairs just described is more valuable than the first state of affairs, since I’ve got more true beliefs in the second state of affairs than in the first, and while I’ve got a false belief, it surely doesn’t outweigh all the true beliefs I used in my reasoning.
I won’t explore this too much, but there are a couple of things we might say. First, the veritist might say that we value someone’s doxastic state not by its total epistemic value, but by its average epistemic value. In the first state of affairs just described, where I have fewer beliefs, but all are true, the average epistemic value is greater than in the second state of affairs, where I have many more beliefs, but one is false. Second, we might just bite the bullet and say that knowledge is sometimes less valuable than merely true belief. This doesn’t seem implausible to me. The claim K>TB is usually made completely generally without considering specific cases. Once we start to spell out the specific cases, it might well be that we realise that, in typical cases, the claim is true, and it is those that trigger our intuitions when we hear the general claim and find it plausible; but in some atypical cases, such as the one just described, K>TB doesn’t hold.
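To see how totals and averages come apart, here is the same sort of toy calculation, with the same assumed weights (+1 per true belief, −1 per false belief), counting the testimonial Barra scenario as involving two true beliefs (that Barra said it; that air is a little over 20% oxygen) and the inferential one as involving seven true beliefs plus the one false lemma; the exact counts are my assumption, and only the pattern matters:

\[
\text{Testimony: } V_{\text{total}} = 2, \quad V_{\text{avg}} = \tfrac{2}{2} = 1;
\qquad
\text{Inference: } V_{\text{total}} = 7 - 1 = 6, \quad V_{\text{avg}} = \tfrac{7-1}{8} = 0.75.
\]

Total value favours the inferential state of affairs, in which the belief falls short of knowledge; average value favours the testimonial one, in which it doesn’t.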
Non-inferential belief and the value of knowledge
So much for cases of merely true belief that don’t count as knowledge because the belief is inferred from false beliefs. Let’s turn now to the counterexamples to the no false lemmas account of knowledge—the cases in which the belief doesn’t count as knowledge but it isn’t inferred from any false premises. These are, quite naturally, cases in which the belief is not inferred from any more fundamental premises at all, false or otherwise, but is simply formed non-inferentially. A classic such case is Alvin Goldman’s Fake Barn County example.
My friend Asa is on a road trip through the American Midwest. One day, unbeknownst to them, they drive through Fake Barn County. In this county, there is only one actual barn, but there are thousands of barn facades that have been used over the years as filming locations. From the road, the single actual barn and the many barn facades are indistinguishable. At one point, as they’re driving, Asa glances to their right, where they see the single actual barn in the county, and come to believe on that basis that there’s a barn to their right. Their belief is clearly true, but many people say it’s not knowledge because it’s very lucky—had Asa looked out the window to their right at any other point on their journey through the county, they’d have formed the same belief, but in that situation it would have been false.
Let us also consider a scenario exactly like the one just described, but in this case, Asa is driving through Real Barn County. In this county, there are no barn facades—everything that looks like a barn is a barn. When Asa glances right and sees a barn in the field and comes to believe there’s a barn to their right, the belief they form surely does count as knowledge.
So, if it’s true that knowledge is more valuable than merely true belief, the belief formed in the second version of this case is more valuable than the belief formed in the first; or, more precisely, the second state of affairs is more valuable than the first. But if that’s true, then, were I to know exactly what would happen should Asa drive through Fake Barn County and what would happen should they drive through Real Barn County—if I were to know it down to the detail of exactly when they’d glance to their right and form their belief—and if I were to care epistemically about my friend, so that I want for Asa what is more epistemically valuable, then I should strictly prefer that they drive through Real Barn County rather than Fake Barn County. And if that’s so, I should be willing to pay some price for this to happen; indeed, there’s some price for Asa to pay such that I would think them better off if they were to pay it and go through Real Barn County than if they were not to pay it and go through Fake Barn County. But, at least from where I’m sitting, I just don’t see that I would be willing to pay this price, nor think it best for Asa that they pay it. My unwillingness is not based on any miserliness or lack of care for Asa, I should say; it just doesn’t seem that I would improve their situation at all by paying to send them through Real Barn County and obtain for them a belief that is true and not luckily so. I don’t imagine that, if I were to hear that Asa had ended up driving through Fake Barn County rather than Real Barn County, I would feel any regret on their behalf. I would not think: ‘Oh, what a shame! If only they’d turned right at the junction instead of left, they’d have gone through Real Barn County and ended up in a better epistemic state.’
Non-inferential belief and the value of justified true belief
One natural reaction to the case of Asa is to say that, even if it’s right, it shows only that knowledge is sometimes no more valuable than justified true belief. After all, as I described it, Asa’s belief about the barn is true and justified in both Real and Fake Barn County; it just doesn’t rise to the level of knowledge in Fake Barn County because it’s lucky. So what about an analogous case in which the belief isn’t even justified? Perhaps it will turn out that knowledge is always better than true belief that counts neither as knowledge nor as justified.
Let me again describe two states of affairs involving Asa. In both, Asa is out walking in the wilderness and realises they’ve forgotten their watch and phone and so have no timepieces to hand; in both they use methods for telling the time from the height of the sun above the horizon; and in both they come to believe truly on the basis of those methods that it’s about 3pm. In the first case, the method is very reliable across nearly all similar situations—if it had been about 4pm, the method would have given that; if it had been about 5pm, it would have given that. In the second case, by contrast, the method is extremely unreliable, though it does get it right in the particular case in which Asa applies it.
In the first case, Asa knows it’s about 3pm; in the second, they truly believe that, but their belief is not justified and it does not count as knowledge. And yet again, I might be asked to choose in advance which method Asa will use, knowing that, whichever method it is, it will deliver for them the true belief that it’s about 3pm. And I might know that they will never again use these methods for telling the time, because the experience of being uncertain about the time during this walk so unnerved them that they’ll never again forget to take a timepiece into the wilderness. Would I pay to ensure that Asa uses the method that gives them knowledge rather than the method that gives them merely true unjustified belief? Again, I think not! And again I think I would feel no regret were I to hear they used the unreliable method but it got it right in this case. It really seems that, in such a case, the epistemic value of the outcome of the process does indeed swamp the expected epistemic value of the process itself.
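The swamping point can be put in the same toy terms. Suppose, purely for illustration, that the reliable method delivers a true verdict with probability 0.99 and the unreliable one with probability 0.1, and keep the assumed weights of +1 for a true belief and −1 for a false one. Then, before the walk:

\[
\mathbb{E}[V_{\text{reliable}}] = 0.99(+1) + 0.01(-1) = 0.98,
\qquad
\mathbb{E}[V_{\text{unreliable}}] = 0.1(+1) + 0.9(-1) = -0.8.
\]

Ex ante, the reliable method is much the better bet. But ex post, given that both methods in fact deliver the true belief that it’s about 3pm, and given that neither will ever be used again, the veritist’s ledger for the two states of affairs is identical: \(V_{\text{reliable}} = V_{\text{unreliable}} = +1\).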
Plato and Williamson on the resilience of knowledge
So much for the epistemic value of knowledge. But what of its practical value? After all, it is its practical value that interests Plato. Prior to the passage quoted above, Meno has agreed with Socrates that a true belief is no worse than knowledge for practical purposes, as long as the true belief is present:
Socrates: So true opinion is in no way a worse guide to correct action than knowledge. […]
Meno: So it seems.
But, as we see in the passage quoted, Socrates thinks that your true opinion is more likely to persist if it is knowledge than if it is not.
Williamson’s deer stalkers
Tim Williamson gives a similar style of argument in Knowledge and its Limits, but of course he is using knowledge-in-the-contemporary-analytic-philosophy-sense, as opposed to knowledge-in-Plato’s-sense, i.e., something like understanding. Nonetheless, let’s start with Williamson’s version of the argument and then look to Plato’s.
Here is Williamson:
Knowledge is superior to mere true belief because, being more robust in the face of new evidence, it better facilitates action at a temporal distance. Other things being equal, given rational sensitivity to new evidence, present knowledge makes future true belief more likely than mere present true belief does. This is especially clear when the future belief is in a different proposition, that is, when the future belief can differ in truth‐value from the present belief.
Some hunters see a deer disappear behind a rock. They believe truly that it is behind the rock. To complete their kill, they must maintain a true belief about the location of the deer for several minutes. But since it is logically possible for the deer to be behind the rock at one moment and not at another, their present‐tensed belief may be true at one moment and false at another. By standard criteria of individuation, a proposition cannot change its truth‐value; the sentence ‘The deer is behind the rock’ expresses different propositions at different times. In present terminology, it is logically possible for the unchanging condition that the deer is behind the rock to obtain at one moment and not at another. If the hunters know that the deer is behind the rock, they have the kind of sensitivity to its location that makes them more likely to have future true beliefs about its location than they are if they merely believe truly that it is behind the rock. If we are to explain why they later succeeded in killing the deer, given the foregoing situation, then it is more relevant that they know that the deer is behind the rock than that they believe truly that it is behind the rock.
Let us suppose the situation is as follows: The stalkers witness the deer going behind the rock at 12 noon. They assume there’s no way for the deer to stop being behind the rock without them seeing it emerge. By 12:01pm, they haven’t seen it emerge, so they infer that it’s still behind the rock.
Now consider two cases:
In the first, they’re right that there is no way for the deer to stop being behind the rock without them seeing it emerge, and indeed that’s true of all rocks in the vicinity. And so, at 12:01pm, they know the deer is behind the rock.
In the second, they’re right that there is no way for the deer to stop being behind the rock without them seeing it emerge, but they’re very lucky that their assumption is true in their case, since every other rock in the vicinity stands over a wide tunnel through which the deer could disappear and end up somewhere else in the forest without the stalkers seeing it. And so, at 12:01pm, they truly believe the deer is still behind the rock, but they don’t know it.
The deer stalkers are not more likely to have a true belief about the location of the deer at a later time, say 12:02pm, in the first scenario than in the second. They’re exactly as likely to have a true belief at a later time in the two cases.
What’s more, there will be cases in which they are more likely to have a true belief at a later time in a scenario in which they have merely true belief to begin with than in a scenario in which they have knowledge. Perhaps in the former, they lack knowledge because of a false premise in the reasoning that leads them to their belief in the first place, but this false premise also protects them against misleading evidence that comes in after a short time. Let me give an example:
While they are waiting for the deer to emerge, a large, dark-coloured bird lands in a tree a little way off from the stalkers. The first stalker reasons as follows: from its size, silhouette, and colour against the sky, it’s either a corvid or a raptor; raptors more often spend their time alone; crows more often hang out in groups; this one is alone; therefore, it’s a raptor. The second stalker reasons as follows: from its size, silhouette, and colour against the sky, it’s either a corvid or a raptor; raptors tend to hang out in groups, but when they arrive in a new place, one arrives first to scout it out; crows, on the other hand, are mostly solitary, but when they first arrive in a new place, they arrive in pairs; this one arrived alone; therefore, it’s a raptor. Both have true beliefs, for the bird is in fact a raptor. The first stalker knows it, since they reasoned to their conclusion from true premises that we can assume they know; the second stalker doesn’t know it, since their assumption about the sociability of both families of birds is false, but by good fortune it got them to the right conclusion in this particular case. Minutes later, another large, dark-coloured bird lands on the branch beside the first one. The first stalker reasons as follows: oh, I was wrong; this bird is hanging out in a group; it’s a corvid. The second stalker reasons as follows: ah, I was right; here’s another bird coming to hang out; it’s definitely a raptor. In fact, it is a raptor, but one displaying very unusual social behaviour. For the first stalker, who knows how raptors and corvids behave, the arrival of the second bird is very misleading evidence: it is evidence that is very unlikely conditional on it being a raptor, and rather likely conditional on it being a corvid, and so it leads them to conclude incorrectly that it’s a corvid. For the second stalker, however, who has false beliefs about the social behaviour of the two families of birds, the evidence is not misleading; indeed, it confirms the belief they already had. And so, in this case, it was the stalker with the merely true belief who was more likely to retain their true belief than the stalker with knowledge.
Plato’s guides to Larissa
So I think that what contemporary epistemologists call knowledge isn’t always more likely to give rise to true belief in the future than mere true belief is. But what about what Plato called knowledge, that is, the ability to grasp why the proposition known is true? Are you more likely to believe truly in the future if you have this sort of knowledge than if you have only true opinion? Does knowledge-in-Plato’s-sense “tie down” true belief in the way that chains tie down the statues of Daedalus, which are wont to run away if not constrained?
For the same reasons I gave above, I don’t think so. Someone might understand why p is true at time t, receive evidence e at t’, and conclude on the basis of e that p is no longer true at t’. But it might be that evidence e is misleading, and p is in fact still true at t’. And it might be that someone misunderstands why p is true at t, receives that same evidence e at t’, and does not conclude on that basis that p is no longer true at t’, and indeed concludes p is still true at t’, and does so precisely because of the false belief that witnesses their misunderstanding of why p is true at t. That is, the false belief inoculates them against being misled by the evidence e.
Hills on the value of understanding
So perhaps knowledge-in-Plato’s-sense, or understanding-why, as we now say, does not have practical value for this reason. But it’s clear that the veritist must say that it has epistemic value. After all, to understand why p is, very roughly speaking, to grasp a bunch of propositions that collectively provide an explanation why p is true, and to grasp that they do provide an explanation. There is some debate about whether grasping these propositions must involve believing them or merely accepting them; and there is some debate about whether these propositions must be true or merely close to the truth. But either way, it’s clear that the veritist can say that understanding why p is epistemically valuable.
Some go further than Plato and take understanding why p to involve more than merely grasping an explanation why p is true. For instance, Alison Hills says:
[I]f you understand why p (and q is why p), then you believe that p, and that q is why p, and in the right sort of circumstances you can successfully:
(i) follow some explanation of why p given by someone else.
(ii) explain why p in your own words.
(iii) draw the conclusion that p (or that probably p) from the information that q.
(iv) draw the conclusion that p’ (or that probably p’) from the information that q’ (where p’ and q’ are similar to but not identical to p and q).
(v) given the information that p, give the right explanation, q.
(vi) given the information that p’, give the right explanation, q’.
Not all of these extra clauses will add what the veritist considers epistemic value, since some are abilities to provide publicly and verbally what one has privately in one’s mind, and it’s what is in the mind that confers epistemic value. But the veritist needn’t say that they all add value in order to say that understanding why p is more valuable than not understanding why p. Also, clause (iv) demands that you have a disposition to use your understanding to produce new true beliefs in relevant situations, and the veritist can easily say what’s valuable about that, as long as there is some probability you will be in the situations that trigger that disposition.
But Hills thinks that there is another source of epistemic value for understanding. It is broadly veritist in spirit, but somewhat different in detail. Here is the relevant passage:
True beliefs are valuable for their own sake because they are an accurate reflection of the way things are: they are a mirror of nature. What does it mean for a belief to mirror nature? The metaphor has typically been understood in terms of the relationship between the content of the belief and the facts. A belief of the content: “Tibbles is a cat” mirrors the world if Tibbles is indeed a cat.
But a set of beliefs might also mirror the world in virtue of their form; by which I mean the similarities between the relationships between those beliefs and the relationships between the facts in the world: for instance a dependence between two beliefs might mirror a dependence between two facts. Suppose Tibbles is a mammal in virtue of being a cat (that is, Tibbles’s being a cat explains her being a mammal). And suppose that you draw the conclusion Tibbles is a mammal on the basis of your belief that Tibbles is a cat (that is, your belief that she is a cat explains your belief that she is a mammal) then clearly there is a similarity—a mirroring—between your beliefs and the world, that cannot be explained fully in terms of the content of those beliefs alone, but also must refer to the relationship between them: one of your beliefs depends on the other, just as there is a dependence between the facts in the world.
But I don’t think this can be right. After all, we very often don’t base our belief in the proposition to be explained on the proposition doing the explaining. Indeed, we very often go exactly the other way. I hear a scratching noise at night, and I formulate the hypothesis that the attic of my building might be housing some mice. The next day, I poke my head up into the space and see what look very much like mouse droppings; I notice that a stack of papers I stored up there the week before bears the distinctive marks of tiny teeth; and when I look down from the top of the ladder to the landing below, I see my neighbour’s cat looking hungrily up at the aperture. I come to believe my conjecture. Indeed, I think we would say that I now understand why there was a scratching noise, why there were droppings, why the paper in the stack looked munched, and why Sapphire was looking so intently towards the attic. But I didn’t derive any of these from the proposition that explains them, namely, that there are mice up there. Rather, I inferred that proposition from them. The direction of the dependency relations between my beliefs is exactly the opposite of the direction of the causal relationships in the world. And yet I do understand, and my understanding is every bit as valuable as if I had first seen the mice and then come to believe on that basis there would be scratchings and droppings and munching and murderous glances from the cat.
Veritism vindicated
So it seems to me that veritism can account for all that we should want to account for in this area. It says that, when a true belief falls short of knowledge because it is based on false premises, knowledge would be more valuable. But it doesn’t say that, when true belief falls short of knowledge because it is true due to environmental luck or because it is based on an unreliable non-inferential process, knowledge would be more valuable—and that seems the correct answer there. And it says that understanding why something is true is more valuable than not understanding why something is true.
1. As so often, the Stanford Encyclopedia of Philosophy entry by Duncan Pritchard, John Turri, and J. Adam Carter gives a wonderful overview.
2. See, for instance, Linda Zagzebski or Jon Kvanvig.
3. Thanks to Ralph Wedgwood for urging me to clarify this in a comment on last week’s post, where I adopted the locution of the debate I was describing and wrote as if it is objects that are the fundamental bearers of value. Ralph and I share this view, and Ralph pointed to the second chapter of Stephen Finlay’s Confusion of Tongues, which persuaded him of its truth.
4. David Armstrong and Michael Clark both proposed versions.
Comments
Hi Richard, just a note of surprise about how you ended up glossing "K>TB" under Clarifying the Claim. I would have thought "K>TB" amounted to the claim that (for instance) my knowing that p would be epistemically better than my merely truly believing that p. (Here "my knowing that p" and "my merely truly believing that p" pick out possible states of affairs.) I worry that the way you put things is subject to usual problems with counterfactual analyses (since it appeals to the "closeness" relation). E.g., to change the context, consider the claim that ecstasy is prudentially better than mere pleasure. I would gloss that as saying something like, my being in ecstasy would be better for me than my experiencing mere pleasure. On the other hand, we can dream up an exotic state of affairs, perhaps involving an evil demon, in which I'm in ecstasy, but if I were experiencing mere pleasure I would have stronger friendships. So (it seems) this is a state of affairs in which I'm in ecstasy, but the closest state of affairs in which I'm experiencing mere pleasure is one in which I'm (arguably) better off over all. I don't think this is a counterexample to the intuitive claim that ecstasy is better than pleasure; all it shows is that there are tradeoffs. Subtler examples might show failures of separability, instead of tradeoffs. I'm not sure whether this affects your arguments very much, though it might open up some gaps.
One thing about Asa in Fake Barn County - if you think they're still in Fake Barn County, then you might legitimately wish they had passed through Real Barn County, since that would reduce their risk of forming new false beliefs. But if you think they're done with the trip, then it no longer matters.
I have parallel thoughts about Newcomb problems - if I think someone's going to face a Newcomb problem, I can hope on their behalf that they're a one-boxer, but if I think they just did it, I might hope on their behalf that they two-boxed.