This is a great paper and I’m glad you’re talking about it. I can’t believe that I forgot to include some discussion of it in this paper that Michael Nielsen and I wrote on the topic of non-partitional updating: https://link.springer.com/article/10.1007/s10992-025-09814-6
Their result works by summing the inaccuracy of an update rule across all the possible updating situations in which you use it, but I think this is basically equivalent to assuming you are equally likely to be using it in any of those situations and then taking its expected inaccuracy. In their framework, one always receives a proposition as one's evidence, and that proposition is true in every world where one receives it as evidence.
In our framework, we don’t insist that what one updates on is a proposition, and just call it a “signal”. We show that out of all functions from signals to updates, the one with highest expected accuracy is the one that shifts each world’s probability in proportion to the likelihood of the received signal in that world. I think we can reconstruct a version of their picture in ours if we think each signal must be a proposition that is true in the actual world, but that which proposition is received as the signal is chosen uniformly at random from the set of all propositions true at the world.
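The rule described above can be sketched in a few lines of Python. This is just my illustrative reconstruction, not the authors' own notation or code: worlds, signals, and the likelihood table are all hypothetical names, and the rule is simply "multiply each world's prior probability by the likelihood of the received signal in that world, then renormalize".

```python
def update(prior, likelihood, signal):
    """Shift each world's probability in proportion to the likelihood
    of the received signal in that world, then renormalize.

    prior:      dict mapping world -> prior probability
    likelihood: dict mapping world -> (dict mapping signal -> probability)
    signal:     the signal actually received
    """
    unnormalized = {w: p * likelihood[w][signal] for w, p in prior.items()}
    total = sum(unnormalized.values())
    return {w: v / total for w, v in unnormalized.items()}

# Toy example: two worlds, and a signal "s" that is more likely in w1.
prior = {"w1": 0.5, "w2": 0.5}
likelihood = {
    "w1": {"s": 0.8, "t": 0.2},
    "w2": {"s": 0.2, "t": 0.8},
}
posterior = update(prior, likelihood, "s")
# posterior is {"w1": 0.8, "w2": 0.2}
```

When each signal is a proposition true exactly in the worlds where it can be received, the likelihood table collapses to 0s and positive values and the rule reduces to ordinary conditionalization, which is one way to see the reconstruction of their picture described above.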
Cool! I was wondering about that sort of thing. In the book, I also talk about David Blackwell's approach to value of information, which uses a sort of signal-style approach, and Kevin Dorst's, which cuts out any representation of the evidence and moves straight to the posteriors, and I was wondering how Alex and Snow's result works for those. I'll think more about this! And this is a good reminder that I must include something about your paper in the book!
>in The Journal of Philosophy
Looks like it's in PhilReview!
Oops!
It’s either JPhil, PhilReview, or fandango.
I knew I could rely on you to be as taken with the names of the colours as I was, Daniel!