Speaker: Martin Smith (Glasgow)
Title: Entitlement and Evidence
Abstract: Entitlement is conceived as a kind of positive epistemic status, attaching to certain propositions, that involves no cognitive or intellectual accomplishment on the part of the beneficiary - a kind of positive epistemic status that is in place by default. In this paper I will argue that the notion of entitlement - or something like it at any rate - falls out of an idea that may at first blush seem quite unrelated: that the evidential support relation can be understood as a kind of variably strict conditional (in the sense of Lewis, 1973). Lewis provided a general recipe for deriving what he termed inner modalities from any variably strict conditional governed by a logic meeting certain constraints. On my proposal, entitlement need be nothing more exotic than the inner necessity associated with evidential support. Understanding entitlement in this way helps to answer some common concerns - in particular, the concern that entitlement could only be a pragmatic, and not genuinely epistemic, status.

Speaker: Tomoji Shogenji (Rhode Island)
Title: Explanatory Power of Truth from the First-Person Perspective
Abstract: Deflationism about truth claims that truth is merely a technical device for semantic ascent with no explanatory power. It is common for those who favor the representational theory of truth to seek the needed explanatory power in the assignment of representational contents to other people's mental states. For example, assigning representational contents to someone else's mental states may help us explain her behavior. This third-person approach appears sensible because all we need from truth in the first-person perspective seems to be the equivalence schema that my belief that p is true if and only if p. However, it is difficult to see what gain we can make from the third-person perspective beyond non-representational causal explanation. This paper argues that we should seek the explanatory power of representational truth in our own mental states. More specifically, I argue that we can explain our visual experience better by representationally parsing our visual input from the first-person perspective.

Speaker: Julien Dutant (Geneva)
Title: Methods Models for Belief and Knowledge
Abstract: We introduce a formal representation of belief and knowledge based on the idea that knowledge is a matter of forming a belief through a sufficiently error-free method. We first model methods and their infallibility, then define belief and knowledge in terms of them. The resulting models are a significant extension of so-called "neighbourhood models". We argue that epistemological notions and problems such as Gettier cases, inductive knowledge, fallible justification, epistemic contextualism, and the failure of logical omniscience are represented in a more satisfactory way in these models than in standard epistemic logic. In general, our models only validate the claim that knowledge is true belief; but we show that a full S5 system can be derived from a set of natural idealisations. The derivation provides some explanation of why and when the S5 axioms should hold, and a vindication of their use.

Speaker: Jens Christian Bjerring (ANU/Copenhagen)
Title: Non-Ideal Epistemic Spaces
Abstract: In a possible-world framework, an agent a can be said to know a proposition p just in case p is true at all worlds that are epistemically possible for a. Roughly, a world is epistemically possible for a just in case the world is not ruled out by anything a knows.
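
The definition just stated can be rendered compactly; the notation below is an illustrative gloss (with $E_a$ standing for the set of worlds epistemically possible for $a$), not the author's own formalism:

```latex
K_a\,p \;\iff\; \forall w \in E_a:\; w \models p,
\qquad
E_a \;=\; \{\, w \;:\; w \text{ is not ruled out by anything } a \text{ knows} \,\}
```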

This framework presupposes an underlying space of worlds that we can call epistemic space. Traditionally, worlds in epistemic space are identified with possible worlds, where possible worlds are the kinds of entities that at least verify all logical truths. If so, it follows that any world that may remain epistemically possible for an agent verifies all logical truths. As a result, all logical truths are known by any agent, and the corresponding framework only allows us to model logically omniscient agents. This is one of the familiar hyperintensional problems that emerge in the standard possible-world framework, and it shows that the framework cannot be used to model non-ideal agents that fall short of logical omniscience.

A familiar attempt to model non-ideal agents within a broadly world-involving framework centers around the use of impossible worlds where the truths of logic can be false. If we admit impossible worlds where "anything goes" in epistemic space, it is easy to avoid logical omniscience. If any logical truth is false at some impossible world, then no logical truth need be known by any agent. As a result, we can use an impossible-world involving framework to model extremely non-ideal agents that do not know any logical truths.

A much harder, and considerably less investigated, challenge is to ensure that the resulting epistemic space can also be used to model moderately ideal agents that are not logically omniscient but nevertheless logically competent. Intuitively, while such agents may fail to rule out impossible worlds that verify complex logical falsehoods, they are nevertheless able to rule out impossible worlds that verify obvious logical falsehoods. To model such agents, we need a construction of a non-trivial epistemic space that partly consists of impossible worlds where not "anything goes". This involves imposing substantive constraints on impossible worlds to eliminate from epistemic space, say, trivially impossible worlds that verify obvious logical falsehoods. In this paper, I will show that the following claims form an inconsistent triad:

(Non-Omni) Worlds in epistemic space allow us to model agents that are not logically omniscient.

(Non-Tri) Worlds in epistemic space are either possible or non-trivially impossible.

(Max) Worlds in epistemic space are maximal.

Derivatively, I will argue that this shows that successful constructions of epistemic spaces that can safely navigate between the Charybdis of logical omniscience and the Scylla of "anything goes" are hard, if not impossible, to find.