I will survey a recent research program investigating “possibility semantics,” a generalization of possible world semantics that starts with the notion of a partial possibility instead of the notion of a complete possible world.
Theories that use expected utility maximization to evaluate acts -- like subjective utilitarianism or decision theory -- have difficulty handling cases with infinitely many utility contributions. In this paper I present and motivate a way of modifying such theories to deal with these cases, employing what I call “Direct Difference Taking”. This proposal has a number of desirable features: it’s natural and well-motivated, it satisfies natural dominance intuitions, and it yields plausible prescriptions in a wide range of cases. I then compare my account to the most plausible alternative, a proposal offered by Arntzenius (2014). I argue that while Arntzenius’s proposal has some attractive features, it runs into a number of problems which Direct Difference Taking avoids.
Some agents are willfully ignorant regarding the behavior in which they propose to engage; they deliberately forgo the opportunity to inquire into the features that determine the behavior’s moral status. Examples include driving a car across an international border, suspecting that—but not verifying whether—the car contains contraband; buying cheap clothing, suspecting that—but not verifying whether—it was manufactured in a sweatshop; and so on. The law (when it applies) typically holds such agents to be just as culpable as those who engage in the same behavior but who are not ignorant of the relevant details, and legal and moral philosophers have tended to agree with this claim. In order to assess this claim, I present a paradigm case in which ignorance of wrongdoing affords its agent an excuse for that wrongdoing, and I compare and contrast this case with a paradigm case of willfully ignorant behavior. I argue that the case for equal culpability is not easily made.
Free will compatibilists are typically focused on arguing that determinism is compatible with free will. Most compatibilists also think, however, that indeterminism is compatible with free will. Unsurprisingly, given compatibilism’s main aim, little work has been done on this aspect of compatibilism. Still, it is important to think about this, if one is interested in developing a view of free will that doesn’t hinge on determinism being actually true or false—and I am one of those compatibilists. In this paper, I will look at this issue from the perspective of a compatibilist view that I have developed and defended elsewhere (Causation and Free Will, OUP 2016): a view where freedom is accounted for in terms of responsiveness to reasons, and where responsiveness to reasons is in turn a feature that is directly reflected in the actual causal histories of our behavior. In the first part of the paper I will argue that, assuming this compatibilist view of free will, indeterminism is compatible with free will. Still, as we will see, the assumption of indeterminism gives rise to some novel and interesting questions concerning the nature of indeterministic causation. The second part of the paper will be concerned with motivating and discussing those questions.
TBA
This talk will introduce the central concepts of my account of toxic speech, drawing from both philosophy of language and social epidemiology. A medical conception of toxicity looks at the toxin, the dose, frequency, route of delivery, susceptibility of the individual or population, and more. All these factors can be fruitfully explored in connection to understanding the mechanisms by which speech acts and discursive practices can inflict harm, making sense of claims about harms arising from speech devoid of slurs, epithets, or a narrower class I call ‘deeply derogatory terms.’ By highlighting the role of uptake and susceptibility, this model suggests a framework for thinking about damage variation. Toxic effects vary depending on one’s epistemic position, access, and authority. An inferentialist account of discursive practice plus a dynamic view of the power of language games offers tools to analyze the toxic power of speech acts. Even a simple account of language games helps track changes in our discursive practices. Identifying patterns contributes to an epidemiology of toxic speech, which might include tracking increasing use of derogatory terms, us/them dichotomization, terms of isolation, new essentialisms, and more. Finally, I will highlight a few ways to respond to toxic speech, some predictable from past theories (Brandom’s challenging, Langton’s blocking) and some perhaps more unexpected.
Identity and distinctness facts come in many varieties, but here is an uncontroversial pair: The Eiffel Tower is identical with The Eiffel Tower, and Donald Trump is distinct from Barack Obama. David Lewis claimed that “there is never any problem about what makes something identical to itself.” I disagree: I believe there is a problem of determining what makes objects identical to themselves. We should provide a metaphysical explanation of identity facts. And, ideally, distinctness facts should come along for the ride: we should be able to metaphysically explain them as well. I will argue that many straightforward attempts to explain identity and distinctness facts fail. We cannot explain facts involving object identity and distinctness by appealing to the existence of the objects in question, by appealing to which properties the objects share (at least not along the lines of the traditional proposals), nor by appealing to facts about parthood. Instead, I suggest that we identify and distinguish objects on the basis of which facts they are constituents of. And we should identify and distinguish facts on the basis of their position in a network of metaphysical ground.
In my talk I begin by introducing the commonsense idea that there appears to be a legitimate question, “What is the state obligated to its people to do?”, that should constitute the focal point of a philosophical sub-discipline, which I call ‘political morality’. I then point out that there appears to be no such philosophical sub-discipline. I argue that we political philosophers should accept these two appearances as veridical and admit that we’ve been dropping the ball; i.e. we’ve been failing to inquire into something that demands our attention. I also begin the task of giving it the attention it deserves. I argue that political morality, given its apparent contours, would have to be entirely artificial. From that conclusion I infer the further conclusion that political morality would have to apply to artificial agency; that is, it would be the morality that applies to certain individuals only insofar as they exercise artificial agency. And from that conclusion I infer, finally, that political morality must be role morality, since inhabiting a role is the only way of exercising artificial agency.
Rarely do we utter 'every F' to talk about absolutely every F, or 'some G' to talk about any G whatsoever. But there's no commonly-accepted account of just how speakers manage to use such phrases to talk about more restricted classes of things. After considering and rejecting some earlier suggestions, I offer a novel account of this phenomenon. On my account, speakers bear a relationship of linguistic authority over quantified statements, such that they can use those statements to express certain kinds of thoughts. But there are limits to such authority: both a practical limit, since speakers' thoughts are typically only so precise, and normative limits, since one's linguistic community will only tolerate a limited range of thoughts being expressible via a particular quantified statement.
Some of the most interesting rules governing human conversations are epistemic in nature: in fact, it is argued that conversational turn-taking is fundamentally driven by the creation and resolution of epistemic imbalances. Drawing on recent work in conversational analysis, this paper argues that our natural vulnerability to epistemological skepticism is at least in part a by-product of the background epistemic monitoring system that supports ordinary conversational exchanges.
This talk is about the 19th century Scottish philosopher Mary Shepherd’s response to Hume on induction, and some of her own metaphysics of induction.
Causal pluralism is, perhaps not surprisingly, plural. My aim in this talk will be to explore the relationship between two kinds of pluralism that have received attention in recent years. The first pluralism, which we owe chiefly to Ned Hall, holds that some causal claims are concerned with production while others are concerned with dependence or difference making. The second, which we owe chiefly to Elizabeth Anscombe, tells us that the word ‘cause’ is a schema term that stands in for more specific varieties of activities like pushing, wetting or scraping. My hope will be to show how we can understand the differing semantic and explanatory roles of these concepts, while connecting them to a reasonably economical ontology that is grounded in mechanisms.
By giving valid consent, we can waive rights that we hold against other people, thereby permitting them to perform actions that they otherwise weren't allowed to perform. However, our consent can fail to grant them permissions when we give it under duress. I examine the questions of how duress invalidates consent, and why it does so. My answers appeal to rights' function of protecting us from interference from others and consent's function of enabling us to share our lives with others.
Familiar, popular controversies -- like the one last summer over the memo written by a Google employee -- pit accusations of racism or sexism against competing accusations of political correctness. These disagreements can be fruitfully understood as grounded, at least in part, in a philosophical disagreement about whether moral considerations rightly play a role in what we believe. One side endorses the view that a commitment to nondiscrimination includes epistemic demands, and rightfully so. The other side views this stance as both irrational and dangerous. The aim of the paper is to link these debates in our social world to a philosophical debate about whether moral and pragmatic considerations can and do affect beliefs and credences.
The quantum pragmatism of Healey will be compared to Bub's information theoretic account of the quantum. Concerns about both views will be raised. It will be argued that the Relational Blockworld account of quantum mechanics, which as we will show has much in common with these two views, provides a realist psi-epistemic take on the quantum that deflates the measurement problem, explains superposition and entanglement without invoking wave-like phenomena or non-locality, and explains the Born rule. The three views will be compared and contrasted for their relative merits. Ultimately, we will conclude that the Relational Blockworld approach has certain advantages.
When the attempt is made to determine empirically what it is that makes an individual choose to become involved in terrorist acts, one of the few factors with predictive value is prior involvement in petty crime. This is in part because terrorist recruiters tend to seek out those who are on the fringes of society. It is also connected with the fact that terrorists often cooperate with criminal organizations in matters of finance and tactics. Terrorist acts and criminal acts differ, however. Very roughly: the former, but not the latter, have as their objective to send a message. We shall explore what this means for an ontology of terrorism, and describe how such an ontology might be useful, for instance in predicting terrorist acts.
JL Austin said that existing is not something that things do all the time. Was he right? Under what conditions has something done something anyway? Perhaps surprisingly, I think that the key to answering this and a host of other questions in metaphysics can be found in a distinction from linguistics, the distinction between "stative" and "non-stative" verbs. Other questions to which this distinction is relevant include: under what conditions does an event occur? What is the difference between a cause and a background condition? Must a disposition be a disposition to act? What is it to manifest a disposition? What is the most basic kind of causation?
One of the few points of unquestioned agreement in virtue theory is that the virtues are supposed to be excellences. One way to understand this is to claim that the virtues always yield correct moral action and that we cannot be "too virtuous": the virtues cannot be had in excess or "to a fault". If we take this seriously, however, it yields the surprising conclusion that many traits which have been traditionally thought of as "virtues" fail to make the grade. A solution to the problem, proposed by Gary Watson (1984) and reminiscent of Aristotle's view, is found to generate more problems than it solves.
There have been several recent efforts to naturalize first-order moral inquiry by incorporating findings from empirical moral psychology. According to one line of argument, our intuitive “deontological” judgments result from evolved emotional reactions, adaptive in the past but insensitive to countervailing considerations in the present. Consequently, these judgments should be jettisoned in favor of a more rational moral framework aimed at maximizing overall utility (Greene, 2014; Singer, 2005). I argue that this attempt to impugn intuitive moral judgments misses the mark. According to the alternative approach I propose, however, empirical findings can be used in more targeted ways to guide moral belief revision.
TBA
TBA
TBA
Recent challenges in international security posed by two terrorist organizations, Al Qaeda and the Islamic State of Iraq and Syria (ISIS), have highlighted an urgent domestic and foreign policy problem, namely, how to address the threat posed by violent non-state actors while adhering to the rule of law values that form the core of democratic governance. Despite the vital importance of this topic, the legal framework for conducting operations of this magnitude against non-state actors has never been clearly identified. The Law of Armed Conflict (LOAC) is organized around the assumption that parties to an armed conflict are “combatants,” meaning that they are members of a state military acting in the name of that state. Norms of conduct are unclear with regard to non-state actors, and there are few consistent legal principles to provide guidance. In 2002, the Bush Administration declared members of Al Qaeda and other violent non-state actors “unlawful combatants,” and as such not subject to the Geneva Conventions. Legal scholars tend to agree, and many have written that LOAC must adapt to fit the new asymmetric nature of armed conflict. Law, however, is generally thought of as a constraint, rather than an instrument for achieving other goals. This article will address the status of unlawful combatants under existing International Humanitarian Law and Just War Theory and ask what the right legal framework is for addressing the threat posed by non-state actors in current asymmetric conflict. It will argue that violent non-state actors are more properly thought of as international criminals than as combatants of any sort. It will also examine the meaning of rule of law reasoning in the context of war. Results-oriented legal analysis treats law as failing to provide reasons to individual actors, and privileges form over substance. I argue that this approach must be rejected if war is to be constrained by law.
Contemporary art curators such as Harald Szeemann, Pontus Hulten, and Hans Ulrich Obrist are sometimes likened to artists: their work involves communicating views about exhibited artworks through largely visual means (the material arrangements of exhibition materials) -- the same means employed by visual artists, especially installation artists.
Contemporary debate in museum studies often focuses on the search for new ways to engage visitors and present exhibited objects accurately and without bias: it is sometimes suggested that effective strategies can be borrowed from the art-making context (see, e.g., David Carrier 2006; Hilde Hein 2006; James Putnam 2009).
In this talk I consider whether such claims about the affinity of certain exhibitions and museums to works of art have philosophical bite. In particular, I focus on the following issues: (1) Are certain museums and exhibitions similar to artworks in that they possess aesthetic properties? (2) Are certain exhibitions and museums akin to works of installation art? (3) Can exhibitions or museums qualify as works of art?
Trendsetters are the "first movers" in social change. To study the dynamics of change, we need to study the interplay between trendsetters' actions and individual thresholds. It is this interplay that explains why change may or may not occur.
In this talk I give an overview of a theory of well-being that understands well-being as the fulfillment of an appropriate set of values that can be sustained over time. I then show that when the theory is applied to people who value friendship (that is, most of us), it provides a two-pronged argument for developing the virtue of humility.
We have moral ideals to which we aspire. We also routinely and predictably fail to live up to those ideals. Considered as a practical moral problem, this may seem to have a straightforward solution—we should just act better! This answer, while true, is far too simple. Failures to live up to ideals are not all of a kind. Some we cannot or will not acknowledge as failures in the first place. Some failures call for guilt or remorse; others we may think justified in light of conflicting ideals or circumstantial constraints. Even justified failures to live up to ideals may produce moral residue or generate other kinds of moral obligations. In this talk, I explore the concept of a regulative normative ideal and the practical ramifications of such ideals for flawed moral agents. I argue that in a non-ideal world, living by an ideal demands that agents take up a wide range of actions and attitudes related to that ideal. These actions and attitudes are essential to the moral life and our efforts to engage in it well.
What is Spinoza's account of the relationship between the human mind and body? I'll argue that Spinoza has two different answers to two different versions of this question, starting from two different sets of premises. First, Spinoza starts with subjective data like felt sensations and our experience of moving our bodies, and from that he concludes that the human mind is united to the human body by representing it. Second, to account for apparent psychophysical interaction in general, Spinoza develops parallelism. While these accounts of the mind-body relationship are often run together (including by Spinoza), I'll show that Spinoza argues for them in relative isolation from one another and does not resolve them. I'll trace this back to Descartes, who also offers two irreconcilable approaches to two different questions about the mind-body relationship, and I'll argue that we can still find a sense of this tension in contemporary philosophy of mind.
The lives for headaches argument concludes that, while the exact number may be open to debate, there is a finite amount of headache relief that is sufficient to outweigh an innocent life. This conclusion presents a well-known conflict within value theory. Though arguably based on sound reasoning, it is highly counterintuitive to most. My aim is to assess lives for headaches by way of an analysis of its underlying argument. Focus will be given to Dorsey’s (2009) formulation, and subsequent rejection, of this argument. Contra Dorsey I suggest that there is motivation to accept the reasoning leading to lives for headaches. Reconciling the counterintuitive consequences of lives for headaches with the strength of its supporting argument therefore remains an open and important puzzle.
Does our government have a right to demand that you file your taxes by April 15th and a right to punish you if you don’t file your taxes on time? A dominant belief in political philosophy is that states must be entitled to authorize the use of coercion in order to be justified in coercing their subjects. Anarchists believe, however, that states invariably engage in unjustified uses of coercion. They argue that it is morally wrong to restrict any liberty-rights without sufficient justification.
In this essay, I argue that it is morally problematic only if a state restricts a special class of liberty-rights without a compelling moral justification. An implication of my account is that states can engage in justified uses of coercion without having an entitlement to tell you what to do.
In 1814 Pierre Laplace drew an epistemic consequence of the determinism of the Newtonian equations of motion, in a passage quoted in almost every philosophical discussion of determinism:
“An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”
But what if there were a device that was specifically designed to take a prediction of its behavior and do the opposite? Would a Laplacean intelligence with knowledge of the laws and the initial state of the universe be able to predict the behavior of a device like this? You might be tempted to think a device like this is impossible in a deterministic world. In fact, such devices are quite trivial to construct. The puzzle presented by counterpredictive devices was introduced into the philosophical literature as the “Paradox of Predictability” in a 1965 paper by Michael Scriven. A number of people wrote papers about the Paradox in the late 1960s and early 70s, but the discussion ended somewhat inconclusively, because there were features of the original presentation that unnecessarily diverted the discussion. I want to revisit the ‘Paradox’, present it a little more cleanly, and try to understand it as a purely physical phenomenon. It holds some important lessons about determinism and the nature of physical laws.
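For a concrete sense of how trivial such a device is, here is a minimal sketch (a toy of my own, not anything presented in the talk): the function below simply does the opposite of whatever behavior is announced for it, so any publicized prediction of its output is falsified even though every step is deterministic.

```python
def counterpredictive_device(predicted_output: bool) -> bool:
    # Deterministic rule: do the opposite of whatever is predicted.
    return not predicted_output

# However the prediction is produced, announcing it to the device guarantees
# that the prediction comes out false -- with nothing indeterministic anywhere.
for announced_prediction in (True, False):
    actual_behavior = counterpredictive_device(announced_prediction)
    assert actual_behavior != announced_prediction

print("Every announced prediction of the device's output is falsified.")
```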
The claim that visual diagrams can justify mathematical beliefs faces two main challenges. The first challenge argues that diagrams are inherently unreliable, given a number of well-known cases in which diagrams or visual intuitions have proved to be misleading. I argue that these ‘problem cases’ have been misdiagnosed. In all such cases, the erroneous judgments result, not from any defect inherent to visual thinking as such, but rather from a specific set of cognitive heuristics that operate at an unconscious level. The heuristic-based errors prove, in fact, to be correctable through the appropriate use of diagram-based visual understanding. I conclude that there is no reason at all to think that diagrams are inherently unreliable as guides to mathematical truth.
I then turn to the second challenge, which argues that the use of diagrams is not rigorous, and hence cannot justify. Recent work in formal diagrammatic systems shows that it is simply false that reasoning with diagrams cannot be rigorous. Nonetheless, it is true that in many cases of interest, diagrammatic demonstrations do not qualify as mathematically rigorous. I argue, however, that this does not imply that they cannot justify. Here I distinguish between rigorous justification and intuitive justification. I argue that both are properly regarded as kinds of justification, that both constitute central and permanent aims of mathematical practice, and that there is an inherent tension between the two. From this perspective, it is mistaken to say that because ‘visual proofs’ are not rigorous, they fail to justify. It is rather that they pursue the intuitive kind of justification at the expense of the rigorous kind. I conclude that the two main challenges to the justificatory status of mathematical diagrams are both unsuccessful.
The extent to which human psychology obeys optimal epistemic principles has been a subject of much recent debate. The general trend is to interpret violations of such principles as instances of irrationality. Yet, deviations from optimal rationality have been found to be pervasive in decision-making. Less pessimistic interpretations postulate that epistemic principles are more contextual and limited. Both interpretations focus exclusively on the epistemic features of human psychology. I propose that research on memory provides a more productive and empirically sensible way to understand the epistemic and non-epistemic features of human psychology. In particular, I argue that epistemic constraints on memory capacities are insufficient to explain the cognitive integration of memories, particularly memory reconsolidation. A central claim I defend is that there are epistemic as well as narrative thresholds that need to be balanced. This approach provides a positive interpretation of forms of cognition that are not strictly epistemic, arguing that they play a critical role in information processing. It does so by appealing to their role in cognitive integration, in which much more is at stake than accuracy or compliance with epistemic principles.
A central project in philosophy of language and in linguistics has been to find a systematic relation between the sounds we make and the meanings we manage to convey in making them. The meanings speakers mean appear to depend in various ways on context. On the orthodox view, a syntactically disambiguated sound completely determines what is directly asserted by the speaker (what is said), relative to a context that serves only to fix values for indexicals and deictic expressions. However, there appear to be cases of context-dependence that resist orthodox treatment (Bach 1994). When orthodoxy seems implausible, one of two options has generally been favored. Contextualists depart from orthodoxy and allow a greater role for context, including pragmatic enrichments of what is said (Recanati 2002). Others have attempted to preserve the orthodox account by enriching the symbol with covert syntactic items (Stanley 2000). On at least some accounts, this choice is an empirical one about the psychology of speakers and hearers. I therefore discuss how a series of self-paced reading time experiments, reported in McCourt et al. forthcoming, might inform these debates. I argue that these studies undermine a commonly accepted orthodox treatment of implicit control of reason clauses (The ship was sunk to collect the insurance), and therefore weaken the general motivation for orthodoxy.
Before we can design medical decision procedures which adequately balance patient autonomy against broader well-being and social concerns, we must first understand what autonomy is and why it is important. An adequate conception of autonomy ought to be both theoretically sound and pragmatically useful. I propose an expanded and clarified individualistic conception of autonomy for use in bioethics that provides action-guidance in regard to obtaining genuinely informed consent and avoids many of the pitfalls facing more revisionary notions of autonomy.
Experiences justify beliefs about our environment. Sometimes the justification is immediate: seeing a red light immediately justifies believing there is a red light. Other times the justification is mediate: seeing a red light justifies believing one should brake in a way that is mediated by background knowledge of traffic signals and cars. How does this distinction map onto the distinction between what is and what isn’t part of the content of experience? Epistemic egalitarians think that experiences immediately justify whatever is part of their content. Epistemic elitists deny this and think that there is some further constraint the contents of experience must satisfy to be immediately justified. Here I defend epistemic elitism, propose a phenomenological account of what the further constraint is, and explore the resulting view’s consequences for our knowledge of other minds, and in particular for direct perception accounts of this knowledge.
Many people have expressed hope that with the advent of widespread technology to capture moving images, we will finally be in a position to acknowledge and address the systemic problem of racialized police brutality. When the true details of this violence are visible to all, it is thought, the presence of a serious problem and the need to address it cannot be denied. In this essay, I will give reasons, connected to specific aesthetic phenomena regarding our interpretation and use of images, for skepticism about this hope. I will argue that standard ways of looking at these moving images tend to reinforce racialized forms of perception, and they tend to undermine attention to the systemic phenomena that contribute to racialized police violence. I will suggest that we must engage in forms of resistance to standard ways of looking. We must also attend to the acts of naming that shape what we perceive within these moving images.
Deontic logic is concerned with such normative concepts as obligation, permission, and prohibition. My talk will consist of two parts. In the first, I will introduce the field by discussing the most studied normative reasoning formalism, the so-called “Standard Deontic Logic” (SDL), as well as some of its open problems. In the second part of the talk, I will zoom in on one of these problems: how to draw inferences in the presence of normative conflicts. One prominent answer suggests restricting attention to maximally consistent subsets of the premise set. I will show how to turn this suggestion into a full-blown logic with a semantics and a proof theory. I will conclude by discussing this logic's merits.
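To see the shape of the maximally-consistent-subsets idea, here is a minimal sketch (a toy of my own, with an invented consistency test; it is not the formalism developed in the talk): premises stand in crudely for obligations, a set counts as inconsistent when it obliges both a literal and its negation, and the function enumerates the maximal consistent "repairs" of a conflicting premise set.

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def consistent(norms):
    # Toy test: a norm set is inconsistent iff it obliges both a literal
    # and its negation (O(p) together with O(~p)).
    return not any(negate(l) in norms for l in norms)

def maximal_consistent_subsets(premises):
    premises = list(premises)
    mcs = []
    # Enumerate subsets from largest to smallest, keeping the consistent ones
    # that are not properly contained in a consistent subset already found.
    for k in range(len(premises), 0, -1):
        for combo in combinations(premises, k):
            s = set(combo)
            if consistent(s) and not any(s < m for m in mcs):
                mcs.append(s)
    return mcs

# Read 'p' as "it is obligatory that p"; 'p' and '~p' conflict.
premises = ['p', '~p', 'q']
for s in maximal_consistent_subsets(premises):
    print(sorted(s))
# Two maximal consistent subsets: ['p', 'q'] and ['q', '~p'].
# 'q' follows on every repair, 'p' only on some -- the contrast behind
# "skeptical" versus "credulous" consequence in conflict-tolerant logics.
```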
This talk is about the nature and causal determinants of both reflective thinking and, more generally, the stream of consciousness. I argue that conscious thought is always sensory based, relying on the resources of the working-memory system. This system has been much studied by cognitive scientists. It enables sensory images to be sustained and manipulated through attentional signals directed at midlevel sensory areas of the brain. When abstract conceptual representations are bound into these images, we consciously experience ourselves as making judgments or arriving at decisions. Thus one might hear oneself as judging, in inner speech, that it is time to go home, for example. However, our amodal (non-sensory) propositional attitudes are never actually among the contents of this stream of conscious reflection. Our beliefs, goals, and decisions are only ever active in the background of consciousness, working behind the scenes to select the sensory-based imagery that occurs in working memory. They are never themselves conscious.
When you “have an inclination” to do something, prior to having decided whether or not to do it, you regard the inclination as something you can “act on” or not. My focus is on this moment in the story of action. I argue that there is a puzzle about how to form a coherent conception of this “having” relation. To have an inclination is to be influenced by something that simultaneously takes the form of a discursive proposal (“how about doing this?”) and brute pressure (the motivational oomph that is the correlate of ‘will power’). I explain why standard conceptions of desire fail to capture this duality, and I offer an alternative conception of my own.
Relativists hold that reasons are related to contingent attitudes, but judgments about what people are morally required to do are profoundly insensitive to beliefs about these attitudes. This conflict with moral discourse is widely believed to be the central problem for ethical relativism. Many relativists have argued that a commitment to attitude-independent reasons requires belief in a spooky (non-naturalistic) metaphysics or epistemology. I’ll argue that the most promising defense of relativism will call upon resources that ‘quasi-realists’ have deployed in order to protect moral discourse from any commitment to spooky nonnaturalism. According to quasi-realists, a commitment to the existence of attitude-independent reasons is to be understood as a practical commitment about how (not) to justify normative claims, not a (spooky) metaphysical one. I agree with quasi-realists about this. However, quasi-realists and relativists (and everyone else) have failed to see that by turning an arcane metaphysical debate into a fundamentally practical one, quasi-realists throw relativists into their briarpatch. That is, relativists should be happy for the chance to make their stand against moral discourse on fundamentally normative ground.
Outcomes matter when it comes to distributing medical resources. However, it can be difficult to determine what sort of factors ought properly to be taken into account when characterizing such outcomes. If all we care about is maximizing well-being, then it seems that we should take all of the impacts of a proposed distribution into account. This would commit us to, e.g., prioritizing a successful individual with a loving family and many friends over an unemployed loner in a case where both are candidates for receiving a lifesaving organ transplant. But it seems somehow unfair that someone's social utility (or lack thereof) should determine whether that individual is given the chance to receive a lifesaving medical resource. Although distributing medical goods on the basis of such a straightforward social utility calculation seems clearly mistaken, it is a challenge to delineate what exactly ought to be taken into account in such cases. Does fairness require that we rule out certain considerations from our deliberation, and if so, which? Should only the health-related benefits of a given distribution be taken into account, or should we consider certain social benefits as well? An appeal to separate spheres can help answer these questions. Drawing on the work of Dan Brock and Frances Kamm, I aim to sketch out and motivate how outcomes ought to be characterized in the sphere of medicine as well as what type of distribution priority considerations ought properly to fall within that sphere.
One important fruit of the scientific project—arguably second only to the formulation of dynamic theories—is the principled organization of our universe’s constituents into categories and kinds. Such groupings come in two principal flavors: historical and synchronic. Historical categories group entities by their relationships to past events, as when an organism’s species is a function of the population from which it descended. By contrast, synchronic categories make group membership depend exclusively on current features of the universe, whether these are intrinsic or extrinsic to the things categorized. This talk explores just why scientists deploy historical categories when they do, and synchronic ones otherwise. After reviewing a number of examples, I formulate a principle designed to both describe and explain this feature of our scientific classificatory practice. According to this proposal, a domain is apt for historical classifications just when the probability of the independent emergence of similar entities (PIES) in that domain is very low. In addition to rationalizing this principle and showing its ability to correctly account for classification practices across the natural and social sciences, I will consider the nature of the probabilities that are at its core.
A number of prominent authors—Levi, Spohn, Gilboa, Seidenfeld, and Price among them—hold that rational agents cannot assign subjective probabilities to their options while deliberating about which they will choose. This has been called the “deliberation crowds out prediction” thesis. The thesis, if true, has important ramifications for many aspects of Bayesian epistemology, decision theory, and game theory. The stakes are high.
The thesis is not true—or so I maintain. After some scene-setting, I will precisify and rebut several of the main arguments for the thesis. I will defend the rationality of assigning probabilities to options while deliberating about them. Deliberation welcomes prediction.
This colloquium talk is part of the Philosophy of Probability Workshop.
Sometimes grave harm is an aggregate effect of many actions' consequences. In such cases, each individual contribution may in fact be so tiny that its effect is imperceptible and, therefore, can't be said to cause harm. This creates a problem: If no agent's contribution causes harm, then s/he may lack sufficient reason not to contribute to the overall harmful effect. Hence, the harmful effect would seem normatively inevitable. In this talk, I discuss and reject two broadly consequentialist answers to the problem and I defend a virtue ethicist solution. According to the former views, a thorough analysis of all relevant cases reveals that there really are no cases of imperceptible contributions to harm. According to (my favored version of) the latter approach, agents should (in part) not participate in harmful practices, because a reasonably empathetic agent would not act in this way.
We evaluate newspapers according to two dimensions: whether their stories are well-researched and accurate (did the reporter check their facts?), and which stories they choose to print in the first place (are the stories relevant to the public? newsworthy? important?). Could an analogous distinction apply to the representational states in an individual's mind? We use epistemic norms to evaluate beliefs according to whether they are true and well-founded. But discussions of which thoughts should populate the mind in the first place are far less common in epistemology. I discuss whether there are norms of salience that apply to the mind, and if so, what kinds of norms these might be.
Quantum entanglement is real and important. It's also widely believed to be mysterious and spooky. The aim of this talk is twofold: to explain the basics of entanglement in a way that doesn't presuppose any prior knowledge of quantum mechanics, and to probe the question of whether entanglement really is spooky or mysterious. I will argue that it may be, but that if so, this isn't as obvious as is typically assumed.
TBA
In this paper, I argue that we have obligations not only to perform certain actions, but also to form certain attitudes (such as desires, beliefs, and intentions), and this despite the fact that we rarely, if ever, have direct voluntary control over which attitudes we form. Moreover, I argue that whatever obligations we have with respect to actions derive from our obligations with respect to attitudes. More specifically, I argue that an agent is obligated to perform an act if and only if it's the action that she would perform if she were to have the attitudes that she ought to have. This view, which I call attitudism, has three important implications. First, it implies that an adequate practical theory must not be exclusively act-oriented. That is, it must require more from us than just the performance of certain voluntary acts. Additionally, it must require us to (involuntarily) form certain attitudes. Second, it implies that an adequate practical theory must be attitude-dependent. That is, it must hold that which acts we ought to perform depends on which attitudes we ought to form. Third, it implies that no adequate practical theory can require us to perform acts that we would not perform even if we were to have the attitudes that we ought to have. I then show how these implications can help us both to address certain puzzling cases of rational choice and to understand why most typical practical theories (utilitarianism, virtue ethics, rational egoism, Rossian deontology, etc.) are mistaken.
In “Seeing-as in the Light of Vision Science,” Ned Block (2014) argues that adaptation effects (the perceptual effects that arise from neural processes shifting their response profiles) provide an empirical criterion for distinguishing perceptual from cognitive representational content. The proposal, very roughly, is that where one finds adaptation effects, one finds perceptual processes. If this is correct, philosophers and cognitive scientists have a powerful tool for addressing the vexed question of where perception ends and cognition begins. And this would be good news for those, like Block, who think that “there is a joint in nature between percepts and concepts” (p. 2). However, it’s not clear from Block’s discussion why adaptation is an exclusively perceptual phenomenon. His arguments are neither well developed nor, as I argue, particularly convincing. Nevertheless, Block’s discussion raises an important question: Are adaptation effects exclusively a manifestation of perceptual mechanisms, or are they in fact characteristic of a broad range of neuronal processes? In this paper, I offer tentative evidence for the latter of these options, and I argue that adaptation effects alone will therefore not suffice as a test for perceptual content.
The greatest challenge to aggregative consequentialism is infinitarian paralysis: if there are infinitely many beings like us in the universe, no action can change the cardinal sum of value or disvalue and so (apparently) all actions are morally indifferent. I survey some approaches to this problem developed by Bostrom and conclude (as he does) that they are unsatisfactory. I then offer a new solution, based on a modification to Hume’s principle, and show that it succeeds where previous solutions fail in rescuing most ordinary consequentialist moral judgments. But my solution also carries significant metaphysical commitments and has potentially unwelcome implications concerning the moral status of future generations.
It is generally agreed that novels can be fully appreciated only through an experiential engagement with their well-formed instances. But what sort of entities can play the role of such instances? According to an orthodox view---accepted by Gregory Currie, David Davies, Stephen Davies, Nelson Goodman, Robert Howell, Peter Lamarque, Jerrold Levinson, Guy Rohrbaugh, Richard Wollheim, and Nicholas Wolterstorff, among others---the entities that play this role are primarily inscriptions---concrete sequences of symbol tokens, typically written/printed on something (say, paper, papyrus, or parchment) or displayed on the screen of some device (e.g., a computer or an e-reader). Thus, on this view, well-formed instances of, say, War and Peace include its original manuscript, printed copies (e.g., the copy lying on my table), and electronic text tokens (e.g., the text displayed on Anna's computer screen).
My goals in this paper are (a) to show that, despite its popularity, the orthodox view is misguided, and (b) to provide an alternative. I begin, in Section 1, with a clarification of the expression "well-formed instance of an artwork." Next, in Section 2, I explain why inscriptions cannot be regarded as well-formed instances of novels. In particular, I argue that to be a well-formed instance of a novel, an inscription must be capable of manifesting certain sonic properties of this novel; however, no inscription can manifest such properties. Finally, in Section 3, I (a) draw a distinction between non-visual novels, or novels that do not contain any aesthetically relevant graphic elements, and visual novels, or novels that do contain such elements, and (b) argue that well-formed instances of non-visual novels are readings, whereas well-formed instances of visual novels are sums of readings and graphic elements.
Returning service members often carry the weight of their war in messy moral emotions that are hard to process and sometimes hard to feel. Some of these emotions can get sidelined in clinical discussions of posttraumatic stress, when the stressor is narrowed to exposure to life threat, and symptoms are streamlined to hyper-vigilance, numbing, and flashbacks. In recent years, a number of military psychological researchers and clinicians have pressed to expand the clinical focus and recognize the prevalence and distinctiveness of a dimension of psychological stress that is moral—hence the notion of moral injury and its emotions and interventions. Still what often goes unremarked in that research is the ubiquity (and sometimes, naturalness) of moral emotions such as guilt, shame, resentment, disappointment, empathy, trust, and hope outside the clinical arena. These can be a part of healthy processing of war, and part and parcel of ordinary practices of holding persons responsible and subject to normative expectations, or what the British philosopher Peter Strawson famously called “reactive attitudes.”
In this essay, which draws from my forthcoming book, Afterwar (Oxford University Press, May, 2015), I explore the idea of hope in persons as a kind of positive reactive attitude that focuses our attention on pockets of good will in self or others, and on occasions for aspiration and investment. I am also interested in hope for outcomes, and how the two kinds of hope support each other.
With 2.4 million U.S. service members returning from a decade of war in which many have served long, multiple, deployments in complex and challenging partnerships, a philosophical discussion of reactive attitudes in the context of war is timely. And that the issues span more general concerns in moral psychology is a welcome way of bringing the moral psychology of soldiering into more mainstream philosophical discussion. In this essay, I explore moral injury and healing through soldiers’ own voices, based on extensive interviews I have conducted.
Susan Wolf (1982) argues that a person who follows a moral theory perfectly (a “moral saint”) would necessarily be unattractive and dull, and would fail to lead a life of value. Secondly, she argues that we should reconsider the status of moral reasons when we act, since the unattractiveness of moral saints is rooted not in a particular moral theory, but in the nature of morality.
I argue that none of Wolf’s complaints about the Western conception of moral saints apply to the Confucian moral ideal, junzi (君子). I will show that following Confucian ethics perfectly (i.e. becoming a junzi) is consistent with being attractive, interesting, and leading a life of value, and that a society in which everyone is a junzi is a society worth striving for.
This paper explores the nature of and possible justifications for the property-like claims that contemporary nation-states make over things associated with or comprising their particular geographical territories. Modern states, of course, claim legal jurisdiction over particular areas of the earth’s surface, claiming in the process authority over persons located within those areas. But in addition to such jurisdictional rights (“of control”), states also claim rights that more closely resemble the kinds of claims to landownership made by individuals. These property-like (“exclusionary”) rights include the right to control the borders of the territory, as a landowner has the right to fence or otherwise exclude others from entering or using her land. And relative to those borders, states also make property-like claims over the non-human, physical stuff in, around, and comprising their areas – the things often referred to as “natural resources”. These rights emanate from, but are not confined to, the surface shapes of states that we draw on maps and globes. States claim rights not only to a bounded surface and to the things on it – the land and surface water themselves, the timber, plants, and animals found there – but also to what lies beneath that surface – the rocks and dirt, the metals and minerals, the oil, gas, and water – and to what is located above and around their surface shapes. The paper asks: what arguments might be offered in favor of such claimed rights, and to what extent are those arguments persuasive?
In my talk I offer two answers to the above question. The first is semantical: I show that we can conceive of propositions about chance as empirical, and why this helps to make sense of the determination of chances in statistics. The second answer I give is metaphysical: chances come about because of the interplay between levels of description that cannot be made to mesh. As will be seen, both answers rely on early work about chances done by von Mises.
The thesis that the mental supervenes on the physical (that the mental could not have been other than as it is without the physical having been other than as it is) has been much discussed. I will be suggesting three reasons one might doubt that the issue has all the importance it has widely been taken to have. These will illustrate more general reasons for uneasiness about the status of modal metaphysics.
Correlation is not causation. As such, there are decision-making contexts (like Newcomb's Problem) where it is entirely reasonable for an agent to believe that the world is likely to be better when she x's, but also believe that x-ing would cause the world to be worse. Should agents x in such contexts? In this paper, I use the interventionist approach to causation to help answer this question. In particular, I argue that whether an agent should x depends on her credence that her decision constitutes an intervention. I also propose and defend a decision rule that takes stock of the exact way in which what an agent should do depends on this credence.
Cohen and Callender's (2009, 2010) Better Best System (BBS) analysis of laws of nature is fashioned to accommodate laws in the special sciences by allowing for any set of kinds to be adopted as basic prior to the determination of the laws. Thus, for example, setting biological kinds as basic will yield biological laws as the output of the best system competition. I will argue in this talk that the BBS suffers from two significant problems: (1) it is unable to single out a set of laws as fundamental, and (2) it cannot accommodate cases of interfield interactions that muddy the boundary between the basic kinds of individual fields (e.g. photon talk in biology). I then propose a new Best System style view and argue for its ability to account for special science laws, fundamental laws, and cases of interfield interactions.
Social choice theory offers a number of formal models that allow for the aggregation of ordinal and numerical scales. Although it is most often used to model elections and social welfare, it has also been used to model a wide variety of other phenomena, including the selection of scientific theories and the collective judgments of computers. But the skeptical findings of social choice theory have had particular impact within political philosophy and political science. For instance, William Riker famously used Arrow's impossibility theorem, which shows that no voting rule satisfies a number of reasonable conditions on democratic choice, to argue that populist theories of democracy are false. I argue that such skeptical findings are often the result of the context-free nature of orthodox social choice models. The addition of assumptions plausible only within particular theoretical contexts often produces models that avoid many of these skeptical results. I examine three examples of context-specific social choice models that are appropriate within the following theoretical contexts: scientific theory selection, elections, and the measurement of public opinion. I argue that such context-specific models typically represent their respective phenomena better than context-free models, and consequently are better able to determine what problems may or may not exist within their specific contexts. Finally, I suggest that a similar context-specific strategy might also be used in other formal areas of philosophy.
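The flavor of these skeptical results can be seen in the simplest case, the Condorcet cycle; the sketch below (a standard textbook toy, not a model from the talk) shows three rankings under which pairwise majority voting is intransitive, the kind of breakdown that Arrow's theorem generalizes.

```python
# Three voters' rankings over options A, B, C, most- to least-preferred.
rankings = [
    ['A', 'B', 'C'],
    ['B', 'C', 'A'],
    ['C', 'A', 'B'],
]

def majority_prefers(x, y):
    """True iff a strict majority of voters rank x above y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

for x, y in [('A', 'B'), ('B', 'C'), ('C', 'A')]:
    print(f"majority prefers {x} over {y}:", majority_prefers(x, y))
# All three lines print True: A beats B, B beats C, and C beats A, so pairwise
# majority voting yields no coherent collective ranking here.
```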
It's completely obvious that people who bring children into the world typically have some responsibility to parent them. Sure, marginal cases involving assisted reproductive technology or coercion, deception, or other extreme circumstances are tough to call, but in everyday cases, it seems like procreators have special moral reasons to look after their children from the moment of birth (at the latest). In this talk, I'm going to try to convince you that, while this moral intuition is fairly clear, an adequate justification for it is sorely lacking, and that this has some troubling implications for philosophical and social debates about parental rights, child support, and the role of the nuclear family.
The standard framework of cognitive science -- the computational-representational theory of mind (CRT) -- has it that mental processes are the result of brains implementing computations that range over representations. These representations have standardly been thought of as intentional states -- that is, states that have contents such that they are about or representations of particular things. For example, one standard interpretation of generative linguistics is that the mind performs computations over states that are representations of, inter alia, noun phrases, phonemes, and theta roles.
Recently, however, Chomsky (2000), Burge (2010), and others generally supposed to be adherents of the CRT, have argued that at least some of the so-called representations posited under the framework are not intentional: they are not about anything at all. Jones and Love (2011) assert, in particular, that Bayesian accounts of perception rely on computations that range over non-intentional states. This claim is particularly remarkable because Bayesian inference is often construed as a variety of hypothesis testing-- a process notoriously difficult to characterize in non-intentional terms.
Thus far, the arguments for these claims have been unsatisfying. To rectify the situation, I’ll examine a particular Bayesian account of color perception given by Allred (2012) and Brainard et al. (2008; 1997). Even though their account is couched in the intentional idiom of hypothesis testing, I’ll argue that we can preserve its explanatory power without attributing intentional states to the early perceptual system it describes.
The conclusion of this analysis is not that intentionality can be eliminated wholesale from cognitive science, as Stich, Dennett, and the early behaviorists have advocated. Rather, analysis of why intentional explanation proves unnecessary in this particular case sheds light on the conditions under which intentional explanation does prove explanatorily fruitful.
On standard accounts of modal expressions, sentences like (1) and (2) have been taken to express the same propositions, (2) making explicit the epistemic nature of the modality left implicit by (1).
(1) Jones might be dead.
(2) For all we know, Jones might be dead.
A problem for such accounts, however, is the fact that (1) and (2) do not support the same counterfactual continuations. (3), for example, is an acceptable follow-up to (2) but not to (1).
(3) But that’s just kind of a fluke, since we could have investigated his disappearance much more thoroughly.
This sort of problem does not generalize to other, non-epistemic modals, as (4) and (5) show.
(4) You can get a license in Georgia when you’re 16. But that’s just kind of a fluke, since Georgia could have had the laws New Jersey did.
(5) Given its laws, you can get a license in Georgia when you’re 16. But that’s just kind of a fluke, since Georgia could have had the laws New Jersey did.
Why should this implicit-explicit distinction be important for epistemic modals but not for non-epistemic ones? Some have argued on independent grounds that implicit epistemic modals exhibit idiosyncratic behavior (Yalcin (2007)), but such accounts are insufficient to handle the contrast exemplified by (1) and (2). I argue, instead, that the nature of epistemic modality has been misunderstood, that (1) does not contain an epistemic modal, and that this fact explains the difference between (1) and (2). Getting clear on the nature of epistemic modality also potentially helps clear up a host of other problems that (presumed) epistemic modals have posed for standard semantic theories.
Human beings’ capacity for cooperation vastly outstrips that of other great apes. The shared intentionality hypothesis explains this difference in terms of motivational and representational discontinuities, particularly the capacity to represent joint goals. In this paper, I first present an argument as to why we should reject the shared intentionality hypothesis’ hyper-competitive characterization of chimpanzees’ social cognitive abilities, and provide reasons to be skeptical of the generalizability of experimental findings from captive chimpanzees. Next, I outline an alternative account of the contrast between human and great ape social cognition that emphasizes gradual differences in domain-general reasoning rather than novel domain-specific representational abilities. Lastly, I review further cognitive and motivational factors that might affect human beings’ capacity for cooperation.
The aim of this talk is to offer a formal understanding of common law reasoning -- especially the nature of this reasoning, but also its point, or justification, in terms of social coordination. I will present two, possibly three, formal models of the common law, and argue for one according to which courts are best thought of, not as creating and modifying rules, but as generating a priority ordering on reasons. The talk is not technical, although it draws on tiny bits of logic, and also on work in AI and Law; it contributes to legal theory, and also, possibly, to applied ethics.
Contemporary epistemology offers us two very different accounts of our epistemic lives. According to Traditional epistemologists, the decisions that we make are motivated by our desires and guided by our beliefs, and these beliefs and desires all come in an all-or-nothing form. In contrast, many Bayesian epistemologists say that these beliefs and desires come in degrees and that they should be understood as subjective probabilities and utilities.
What are we to make of these different epistemologies? Are the Traditionalists and the Bayesians in disagreement, or are their views compatible with each other? Some Bayesians have challenged the Traditionalists: Bayesian epistemology is more powerful and more general than the Traditional theory, and so we should abandon the notion of all-or-nothing belief as something worthy of philosophical analysis. The Traditionalists have responded to this challenge in various ways. I shall argue that these responses are inadequate and that the challenge lives on.
The past few decades have seen an expansion in the use of cost-benefit analysis as a tool for policy evaluation in the public sector. This slow, steady creep has been a source of consternation to many philosophers and political theorists, who are inclined to view cost-benefit analysis as simply a variant of utilitarianism, and consider utilitarianism to be completely unacceptable as a public philosophy. I will attempt to show that this impression is misleading. Despite the fact that when construed narrowly, cost-benefit analysis does look a lot like utilitarianism, when seen in its broader context, in the way that it is applied, and the type of problems to which it is applied, it is better understood as an attempt by the state to avoid taking sides with respect to various controversial conceptions of the good.
Scientific anti-realism is usually assumed to be a thesis about the scope of scientific theories with regard to unobservables. To many, this makes anti-realism an unattractive option, since it commits us to an arbitrary divide based on the limits of human perceptual organs and involves a skepticism about entities few want to reject. I argue that this view of what anti-realism should amount to comes from an inflationary and non-naturalistic meta-semantics that should be rejected on independent grounds. I propose an alternative picture of anti-realism about science that draws no such arbitrary divide, but still helps us to dissolve the measurement problem--a problem that persistently resists realist solutions.
A number of seminal figures in the history of probability, including Keynes, de Finetti, Savage, and others, held that comparative probability judgments -- as expressed, e.g., by statements of the form 'E is more likely than F' -- might be, in one way or another, more fundamental than quantitative probabilistic judgments. Such comparative judgments have mostly been studied in relation to quantitative notions, via representation theorems. After briefly discussing how this older work on representation theorems relates to contemporary questions in linguistic semantics about what locutions like 'E is more likely than F' mean, I will then argue that we need a better understanding of normative considerations concerning comparative probability that is, at least potentially, independent of quantitative representations.
I will present a new result -- analogous to, but weaker than, Dutch book arguments for standard quantitative probability -- characterizing exactly when an agent maintaining given comparative judgments is susceptible to a blatant kind of pragmatic incoherence. It turns out that quantitative representability (or at least 'almost representability') can be motivated in this way, without presupposing quantitative representation on the part of the agent. Finally, I will illustrate how this result might bear on the aforementioned questions in semantics, as well as more general questions about the role of qualitative probability judgments in practical reasoning.
Understanding how gene regulatory networks work, and how they evolve, is central to understanding the evolution of complex phenotypes. A common way to model these networks is to represent genes as simple boolean logic switches, and networks as complex circuits of these switches wired together. Such models have been used to explore theoretical issues about adaptation and evolvability, and have successfully captured the key workings of well-studied developmental systems.
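To make the modeling idiom concrete, here is a minimal sketch of such a Boolean network in Python. The genes, wiring, and update rules below are invented purely for illustration; they are not drawn from any particular model discussed in the talk.

```python
# Minimal sketch of a Boolean gene regulatory network: each gene is a
# 0/1 switch, and each gene's next state is given by a small logic
# function of the current states of its regulators. All names and
# rules are hypothetical.

def step(state):
    """Synchronously update every gene by one time step."""
    return {
        # geneA is constitutively expressed
        "geneA": 1,
        # geneB turns on when geneA is on and geneC is off
        "geneB": int(state["geneA"] and not state["geneC"]),
        # geneC turns on when geneB is on (a simple feedback loop)
        "geneC": int(state["geneB"]),
    }

def run(state, steps=6):
    """Return the trajectory of network states over a number of steps."""
    trajectory = [state]
    for _ in range(steps):
        state = step(state)
        trajectory.append(state)
    return trajectory

if __name__ == "__main__":
    start = {"geneA": 0, "geneB": 0, "geneC": 0}
    for t, s in enumerate(run(start)):
        print(t, s)
```

Even this toy circuit settles into a repeating pattern of expression states, which is the kind of dynamical behavior such models are used to study.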
In this paper, I introduce a model in this same tradition, but one that explicitly incorporates recent philosophical work on signaling systems. The primary work is by Brian Skyrms, who has extended David Lewis’s ideas from “Convention”, placing them in an evolutionary context and connecting them to information theory. Skyrms uses these ideas to show how evolutionary processes can “create information” (and perhaps even proto-meaning). The model thus provides an important bridge between a standard tool for thinking about gene regulatory evolution, and new ideas about evolution of communication, information, and meaning.
I discuss two initial results from exploring this model. First, I show that the model provides a clear way to underwrite some “information-talk” in developmental biology that has previously been dismissed by philosophers. Second, I show how one shortcoming of the Lewis/Skyrms model — the inability to distinguish directive and assertive force — connects to recent ideas about evolvability in gene regulatory networks.
Philosophers are, of course, much concerned with issues of injustice. Vast literatures address wrongful incursions committed along lines of race, gender, sexual preference, religion and, of course, economic class. Comparatively little attention has been paid to impositions across generational lines, and where such unfairness has been invoked the story is often gotten backwards (“ageism”). This paper argues that during the preceding half century increased burdens have been placed on young cohorts for the direct benefit of the old, that almost every major social policy in recent years has further disadvantaged the young, and that this is not only an American problem but one that pervades the developed world. These injustices can be understood as failures of reciprocity, non-imposition, and democratic accountability. Unlike other perceived injustices, this one shows itself uniquely resistant to redress through liberal democratic institutions. I conclude by speculating that this immunity to melioration is not accidental but rather that the root cause of eating the young is liberal democracy itself.
Over the last three centuries, a series of scientific research programs have sought to establish the existence of genetically-based differences between human racial groups in socially-important psychological traits (e.g., intelligence and aggressiveness). These claims have long been subjected to criticism not only on empirical grounds, but on moral grounds as well, including charges of racism. Practitioners of racial science hotly deny such charges, and these debates about the “racist” nature of racial science have generally been unproductive. I suggest that one reason for this lack of productive debate is the absence of a well-defined and shared understanding of precisely what racism is. Thus, in this talk I examine various claims of racial science in the light of several philosophical analyses of racism, including (a) racism as inferiorization or pernicious belief, (b) racism as an institution, and (c) racism as racial ill-will or disregard. I conclude that although charges of racism are often more difficult to sustain than many have supposed, there is room for a moral critique of racial science under the racial ill-will/disregard conception of racism. In closing I offer some suggestions for how practitioners of racial science can mitigate some of its morally problematic features.
Self-determination is a cardinal principle of international law. But its meaning is often obscure. While the exact contours of the principle are disputed, international law clearly recognizes decolonization as a central application of self-determination. Most ordinary people also agree that the liberation of colonized peoples was a moral triumph. In the paper, I pursue a particular strategy for theorizing the principle: I start with a case where self-determination is widely considered appropriate—the case of decolonization—and try to get clear on precisely which values justified self-determination in that case. Specifically, I examine three philosophical theories of self-determination’s value: an instrumentalist theory, a democratic theory, and my own associative theory. I argue that our intuitions about decolonization can be fully justified only by invoking an interest on the part of alienated groups in redrawing political boundaries. This interest, I believe, may also justify self-determination in other cases, such as autonomy for indigenous peoples, and greater independence for Scotland or Quebec. Those who strongly support decolonization may have reason to endorse independence for these other minorities as well.
In this talk, I will first argue that Hauser, Chomsky, and Fitch's (2002) claim that recursion is unique to human language is misleading because they fail to distinguish recursive procedures (I-recursion) from recursive patterns (E-recursion). I contend that in order to explain the emergence of recursive hierarchical structures in human language, as opposed to animal communication systems (bee dances and bird songs), the underlying recursive generative mechanism should be explained. I will compare two such accounts: an evolutionary account by Miyagawa et al. (2013) and the Minimalist syntax account of Hornstein (2007), and argue for the latter.
Defenders of “democratic authority” claim that the democratic process in some way confers legitimacy on the state and generates an obligation for citizens to obey democratically made laws. This idea may be based on (a) the value of democratic deliberation, (b) the importance of respecting others’ judgments, or (c) the importance of equality. I argue that all three proposed bases for democratic authority fail, and thus that the democratic process does not confer legitimacy, nor does it create political obligations.
In a world where time travel is possible, could there be a closed loop of knowledge? Could a future Dunlap scholar bring me, in his time machine, the contents of my dissertation, so that I don't have to write it myself? In this paper, I address some conceptual issues surrounding Deutsch's solution to this paradox---known as the Knowledge Paradox---which stems from his influential framework for analyzing the behavior of quantum systems in the presence of closed timelike curves (CTCs). I argue that Deutsch's acceptance of the existence of the many worlds of the Everett interpretation, in order to ensure that there is always an author of the dissertation (albeit in another world), creates a unique problem. The Many Worlds Interpretation commits Deutsch to believing that any history that is physically possible is actualized in some world. Among those histories, I argue, are ones which are indistinguishable in every way from worlds in which a Knowledge Paradox scenario plays out, wherein the dissertation exists, but was not written by anyone. Furthermore, in these worlds, the existence of the dissertation is not the result of time travel, but merely appears to be. So Deutsch's use of the Many Worlds framework to solve the Knowledge Paradox cuts both ways. It commits him to the existence of worlds in which the dissertation is genuinely a "free lunch"---it exists, but was not the result of the intellectual effort of rational beings---which is exactly the situation Deutsch was trying to avoid.
Many of the inferences we draw in our day-to-day reasoning are defeasible—their conclusions can be withdrawn in the light of new evidence. This inferential property, known as non-monotonicity, requires the development of non-standard logics to model our reasoning practices, for, in standard logic, the conclusions drawn from a premise set are never withdrawn when new premises are added to that set. Non-standard logics, however, face a unique challenge. Defeasible inferences have varying strengths and how to model the interaction between such inferences is an outstanding question. In this paper, I provide an answer to this question within the framework of the default logic originally proposed by Reiter (1980) and recently developed by Horty (2012). Focusing on a few sample cases, I show where Horty (2012)’s answer to the above question stands in need of development and how my proposed revisions to his account address these deficiencies. I also consider other developments of Reiter (1980)’s default logic which avoid some of Horty (2012)’s problems and suggest that these accounts face a fundamental difficulty which my account does not.
An intentional-historical formalist definition of poetry such as the one offered in Ribeiro (2007) inevitably raises the question of how poetry first emerged, and why. On this view, repetitive linguistic patterning is seen as a historically central feature of poems, and one that has both an aesthetic and a cognitive dimension. Combining the Darwinian idea of a musical protolanguage with analyses of ‘babytalk’, I suggest that this central feature of poetic practices first emerged as a vestige of our musical proto-speech and of our earliest form of communication with our caregivers. Conversely, I suggest that the existence and universality of ‘babytalk’, together with the universality and antiquity of poetic practices, argue in favor of the musilanguage hypothesis over its competitors, lexical and gestural protolanguage. One consequence of this proposal is a reversal of how we understand poetic repetition: rather than being justified in terms of the mnemonic needs of oral cultures, it is now understood as an aesthetically pleasing exploitation of features already found in speech.
Ben Caplan and Carl Matheson have recently advocated musical perdurantism—the view that amounts to the conjunction of the following theses:
(1) Musical works are identical to mereological sums of their temporal parts (performances);
(2) Musical works persist by perduring, that is, “by having different temporal parts [performances] at every time at which they exist” (Caplan and Matheson 2006, 60).
In this talk I will argue that musical perdurantism faces a number of serious problems and, hence, cannot be accepted.
Talk Postponed
I argue that lexically primitive proper nouns (PNs) are systematically polysemous in the sense that they can be systematically used in different linguistic contexts to express at least three formally distinct yet analytically related extralinguistic concepts. Given independently plausible assumptions about the nature of these concepts, and of the lexically encoded meanings of primitive PNs, my overarching aim is to show how this framework offers a theoretically attractive way to explain Frege’s puzzle regarding the intersubstitution of so-called “coreferential” PNs.
Many contemporary philosophers of physics (and philosophers of science more generally) follow Bertrand Russell in arguing that there is no room for causal notions in physics. Causation, as James Woodward has put it, has a ‘human face’, which makes causal notions sit ill with fundamental theories of physics. In this talk I examine some anti-causal arguments and show that the human face of causation is the face of scientific representations much more generally.
In Republic I, Plato makes the following claim about how injustice works in us: “[Injustice] will make [the individual] incapable of acting because of inner faction and not being of one mind with himself; second, it will make him his own enemy as well as the enemy of just people (Republic 352a 1-8).” But what would it mean for an individual's being unjust to (1) cause disharmony within that individual to the point that she is incapable of action and (2) cause that individual to become an enemy of herself? Christine Korsgaard argues that we cannot act at all (in the sense of acting as agents) unless we are acting as a unified person. But if Korsgaard is correct, in what sense could we ever truly act as our own enemies? I will argue that Korsgaard's requirements on what is constitutive of action are too stringent, and that one need not be internally unified in order to act as an agent. (I hold that I need not abandon a constitutional account of the soul/self in order to do this.) I will then present an account of how factionalization in one's soul/self leads an agent to act as her own enemy by obscuring her perception of the good. I will ultimately argue that, consistent with Plato's constitutional model of the soul, there is an order and internal consistency that is necessary for an agent to function justly and correctly, but this order is not necessary in order for an agent to act at all.
If you were forced to choose between saving one or ten other strangers in a lifeboat from certain death, which would you choose? Most people believe that one should save the lifeboat with the greater number of lives. However, many moral theorists have argued that the best deontological theory in town, Scanlon’s contractualism, is incapable of justifying this verdict.
On Scanlon’s account, a rescuer must decide whether to adopt a principle that favors saving the greater number of lives on the basis of the comparative strength of the sets of objections that we can expect to be posed by each person. However, since each person has an equally forceful complaint against the rescuer adopting a principle that does not save his or her life, no one’s objection is stronger than any other. Scanlon’s account gives us no guidance about what to do in the lifeboat scenario. It seems that we are left with making a decision by flipping a coin or holding a weighted lottery, both of which Scanlon rejects. As an alternative, he offers the “balancing view,” which demands that the person in the one-person lifeboat has her interests balanced against one person on the opposing side; those that are not balanced out in the larger group are used as tiebreakers. The balancing view entails that we must save the lifeboat with the larger group from certain death.
I argue that the balancing view is inconsistent with other core tenets of Scanlon’s contractualism and that his account obligates us to use a weighted lottery to figure out who should be saved.
What is the origin and meaning of our aesthetic sense? Is it genetically encoded or is it culturally inherited? The aim of this essay is to answer such questions by defining the emergent and meta-functional character of the aesthetic attitude. First, I propose to include desire, somewhat controversially, in the free play of the cognitive faculties at the heart of Kant's Critique of Judgment. This step is justified, in part, by a brief analysis of Darwin's controversial remarks on the pre-human birth of aesthetics and its relationship with sexual selection (§§ 1-2). The point of discontinuity between a merely animal aesthetic sense and a proto-human one is then located in desire's becoming indeterminate and in the correlative diversification of aesthetic attractors (§§ 3-4). I next deal with the supervenient character of the aesthetic and its anticipatory value. After giving a short genealogy of the notion of supervenience, I then develop its affinity with that of epigenesis (§§ 5-6). What then follows is a critical review of two contemporary evolutionary perspectives on aesthetics: T. Deacon's essay on the "aesthetic faculty" and J. Tooby and L. Cosmides' thesis concerning the evolutionary meaning of aesthetic-fictional activities (§§ 7-8). My concluding section attempts to say, in light of the foregoing discussion, what the epigenesis of the aesthetic mind consists in (§§ 9-10).
In this talk, I will carefully examine purported counterexamples to two postulates of iterated belief revision. I will show that the examples are better seen as failures to apply the theory of belief revision in sufficient detail than as counterexamples to the postulates. More generally, I will focus on the observation that it is often unclear whether a specific example is a “genuine” counterexample to an abstract theory or a misapplication of that theory to a concrete case, and on what this means for a normative theory of belief revision.
This talk is based on joint work with Paul Pedersen (Max Planck Institute) and Jan-Willem Romeijn (Groningen University).
There is some sense of "ought" in which what an agent ought to do depends on her epistemic state--e.g., such that she ought to take whatever she justifiably regards as the best available course of action. Oughts of this kind are closely connected to action-guidance, since unlike "objective oughts" which are epistemic state-invariant, they seem to be epistemically accessible to agents under most circumstances. I argue, however, that under some conditions (namely, conditions of normative uncertainty) there may be no interesting species of oughts to which agents have epistemic access. This constitutes a challenge for any theory of rational choice that aims to provide agents with all-things-considered action guidance.
A central premise of Chalmers’ anti-materialism argument claims that we can conceive of a possible world where we hold fixed all physical facts but exclude certain phenomenological facts from obtaining. In this paper, I argue that attempting to conceive of such a scenario poses a puzzle: if, in this possible world, Roger lacks any phenomenal properties, this is itself a physical fact, in the same way that the fact that wood and coal lack phlogiston is a physical fact. But since both worlds share their physical facts, this must also be true at the actual world. By hypothesis, however, Roger has phenomenal properties. Hence the puzzle. I anticipate resistance to the idea that Roger's lacking phenomenal properties is a physical fact, and address some possible (and actual) objections. Independently of this issue, however, I argue that the conceivability premise is entirely question-begging.
The argument set out in this talk is part of a larger defense of what I call the diagram-based view: This is the view that we can come to grasp certain mathematical truths by perceiving spatial relations in suitable visual diagrams. The diagram-based view thus maintains that there is a distinctive visual route to genuine mathematical knowledge (for at least some nontrivial body of mathematics). Here I focus on one of the most forceful challenges to the diagram-based view, the particularity problem, and on the specific theorem with which this problem is usually associated: the angle-sum theorem, an elementary truth of Euclidean plane geometry. The problem is this: Even granting that the diagram allows us to see that the result holds for the particular triangle depicted, how could perception of the diagram ever warrant the judgment that the theorem is true in the general case, that is, for any triangle whatsoever? I consider both historical and contemporary solutions to the particularity problem, and conclude that none are satisfactory. I then provide a novel solution, arguing that we can indeed see that the angle-sum theorem holds in general, by perceiving the diagram in a way that is animated by the combined application of two different kinds of ‘dynamic’ visual imagery.
In modern democracies, political representatives often must use several different conflicting considerations when making policy decisions. I focus on two such considerations that are commonly thought to be in direct conflict: responsiveness to the wishes of the public and a commitment to do what is in the best interests of the public. Because the public is often mistaken about the likely outcomes of their preferred policies, if representatives commit themselves to being responsive to the wishes of the public then there will be many cases in which they cannot do what is in the best interests of the public (and vice versa). Additionally, the policy preferences of the public are often unstable or dependent upon the views of political elites. However, many theories of democratic governance (especially those in the American tradition) maintain that political responsiveness is key to political legitimacy. Therefore, it is not satisfactory to simply ignore the policy preferences of the public. I argue that this seeming dilemma can be reconciled if we model the wishes of the public in terms of preferences over possible policies and the interests of the public in terms of preferences over possible outcomes of those policies. Consequently, policy choices can be made taking into account both explicit policy preferences (wishes) and outcome preferences (interests). Although such a model substantially simplifies policy decisions, it requires that preferences over policies and outcomes be describable by interpersonally comparable cardinal utility measures. I hint at what such measures might be like and show that this has important implications for public opinion polling methodology.
Over the last several decades, a new strategy for attempting to understand quantum mechanics has emerged: analyzing the theory in terms of what quantum mechanical systems can do. This information-theoretic approach to the interpretation of quantum mechanics characterizes the difference between the quantum world and the classical world by delineating what kinds of classically impossible computations and communication protocols can be achieved by exploiting quantum effects.
This approach has opened up a potential avenue for synthesizing quantum mechanics with a particular aspect of general relativity: the possible existence of closed timelike curves (CTCs). A CTC is a path through spacetime along which a system can travel, which will lead it to its own past. David Deutsch (1991) developed the first quantum computational model with negative time-delayed information paths, which is intended to give a quantum mechanical analysis of the behavior of CTCs.
However, Deutsch's model is controversial because it entails certain effects that ordinary quantum mechanics rules out as impossible. Exactly how to adjudicate this conflict has been debated in the recent literature. In my paper, I will detail a protocol that generates one of these disputed effects. The example I’ll focus on shows how a CTC-assisted quantum computational circuit can be used for the instantaneous transmission of information between two spatially separated observers, which is impossible according to ordinary quantum mechanics.
I will consider an argument by Cavalcanti et al. (2012) which purports to show that the protocol must fail. I will argue that this position is not well justified. The argument the authors explicitly give for their view is based upon a misinterpretation of the special theory of relativity, and the most plausible justifications they would offer instead are weak enough that we should consider the possibility of the instantaneous transmission of information to be an open question.
How many words are in this abstract? If you had to guess, without counting, you would probably get pretty close to the true number. However, if you and your friends all made guesses, the average of your guesses would likely be better than your typical guess. This is an example of the so-called Wisdom of Crowds effect.
The effect is surprisingly reliable — for example, the "Ask the Audience" lifeline on Who Wants to Be a Millionaire? has a 95% success rate. This seems to cry out for an explanation, especially given how irrational collectives can be — think: committee decisions, tulip prices, and rioting football fans. Scott Page (2008) has made some initial progress on this question with what he calls the Diversity Prediction Theorem. Roughly speaking, the theorem shows that if a collective is diverse, then its collective judgements are guaranteed to be better than the typical individual judgements. So it would seem that we have an explanation for the Wisdom of Crowds effect and its reliability: it's a mathematical necessity.
Not quite. For the theorem to have any explanatory power, it needs to be supplemented with bridge principles that connect the theorem to the explanandum. I will tease out these principles and show that they have some serious defects. An interesting consequence of these defects is that it appears to be impossible for there to be a Wisdom of Crowds effect for collective credences — i.e., averaged degrees of belief. However, after developing a Bayesian interpretation of Socrates' thoughts on wisdom (from the Apology), I will argue that collective credences are the only collective judgements that can be genuinely wise.
(In case you're still guessing: there are 286 words in this abstract, including these ones.)
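As a hedged aside on the Diversity Prediction Theorem invoked above: the identity behind it says that the crowd's squared error equals the average individual squared error minus the variance ("diversity") of the guesses. The short sketch below checks this numerically; the guesses are invented, and the true value simply echoes the word count given at the end of the abstract.

```python
# Numerical check of the Diversity Prediction Theorem:
# (crowd error)^2 = average individual squared error - prediction diversity,
# where diversity is the variance of the guesses around their mean.
# The guesses below are made up for illustration.

truth = 286  # the quantity being guessed (here, the abstract's word count)
guesses = [250, 300, 310, 270, 295]

mean_guess = sum(guesses) / len(guesses)
crowd_error = (mean_guess - truth) ** 2
avg_individual_error = sum((g - truth) ** 2 for g in guesses) / len(guesses)
diversity = sum((g - mean_guess) ** 2 for g in guesses) / len(guesses)

print(crowd_error)                       # collective squared error
print(avg_individual_error - diversity)  # equals the line above
```

Both printed values coincide, as the theorem guarantees; whether that identity explains the Wisdom of Crowds effect is exactly the question the abstract raises.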
Sentences like (1) present familiar puzzles for the familiar idea that declarative sentences of a natural language have truth conditions.
(1) The first numbered sentence in 'Framing Event Variables' is false.
Action reports like (2) and (3), which might be used to describe a scene in which two chipmunks chased each other, illustrate other (perhaps even harder) puzzles for this idea.
(2) Alvin chased Theodore gleefully and athletically but not skillfully.
(3) Theodore chased Alvin gleelessly and unathletically but skillfully.
I'll argue against various (broadly Davidsonian) attempts to reconcile intuitions regarding (2) and (3) with the claim that these sentences have truth conditions. In my view, the puzzles reflect deep "framing effects"--of the sort that Kahneman and Tversky made famous, though my central example is due to Thomas Schelling. If this is the right diagnosis of the puzzles regarding sentences like (2) and (3), then I think we need a conception of linguistic meaning according to which sentence meanings *do not* determine truth conditions, not even relative to contexts. And as it happens, I've been peddling such a conception for a while now: it's better to think of meanings as instructions for how to build concepts, which might be used (when conditions allow) to form truth-evaluable judgments in contexts.
With commentary from Georges Rey!
Theodosius Dobzhansky in his 1937 Genetics and the Origin of Species claimed that "the mechanisms of evolution as seen by a geneticist" consist of mechanisms at three levels. This multilevel analysis still captures the key mechanisms of evolutionary change. First, mechanisms produce the variations that are the raw material for change, including mutation mechanisms of imperfect copying of DNA (including repair mechanisms), as well as larger scale chromosomal changes and recombination. The second level includes mechanisms that change populations, genotypically and phenotypically. The most important is the mechanism of natural selection, which is the only known mechanism for producing adaptations. In the natural selection mechanism, the crucial joint activities of variant organisms and a critical environmental factor produce populational changes in subsequent generations. Finally, isolating mechanisms give rise to new species that are reproductively isolated from previous conspecifics. This paper argues that natural selection is, indeed, a mechanism (despite recent claims to the contrary) and places the natural selection mechanism into the context of the multilevel mechanisms of evolutionary change.
One of the problems for Russell's quantificational analysis of definite descriptions is that it generates unattested readings in the context of non-doxastic attitude verbs such as want. The Fregean-Strawsonian presuppositional analysis is designed to overcome such shortcomings of Russell’s quantificational analysis while keeping its virtues. Anders J. Schoubye (forthcoming), however, criticizes the Fregean-Strawsonian solution to the problem of non-doxastic attitude verbs as being inadequate by generalizing the problem to indefinite descriptions. Schoubye claims that the generalized problem calls for a radical revision of the semantics of definite and indefinite descriptions, and he attempts to develop a dynamic semantic account of descriptions. In this paper I defend the standard non-dynamic semantics of descriptions by refuting Schoubye’s objections to the Fregean-Strawsonian analysis. I argue that, once we take into account Elizabeth Villalta's (2008) recent analysis of non-doxastic attitude verbs, we can solve Schoubye's generalized problem concerning descriptions.
Introspection and certain sorts of “intuitions” have been regarded by “Cartesians” as a peculiarly reliable source of evidence in linguistics, psychology and traditional philosophy. This reliability has been called into question by a number of different “anti-Cartesians” in the last decade, specifically by Michael Devitt with regard to linguistic intuitions, and Peter Carruthers with regard to introspection. I defend here the possibility of a moderate Cartesianism about both phenomena, more critical than the traditional approach, and open to empirical confirmation in a way that anti-Cartesians have not sufficiently appreciated. Briefly: our intuitions and introspections are reliable insofar as they are the causal consequence of internal representations that are produced by a specific competence whose properties they are then reasonably taken to reflect.
With commentary by Paul Pietroski.
In this paper I examine a well-known articulation of human nature skepticism, a paper by Hull (1986). I then review a recent reply to Hull by Machery (2008), which argues for what he claims is an account of human nature that is both useful and scientifically robust. I show that Machery’s account of human nature, though it successfully avoids Hull’s criticisms, is not very useful and is scientifically suspect. Finally, I introduce an alternative account of human nature—the “life-history trait cluster” conception of human nature—which I hold is scientifically sound, pragmatically useful, and makes sense of (at least some of) our intuitions about—and desiderata for—human (or, more generally, species) nature. The desiderata that it satisfies are that human nature should (1) be the empirically accessible (and thus not based on occult essences) subject of the human (psychological, anthropological, economic, biological, etc.) sciences, (2) help clarify related concepts like innateness, naturalness, and inevitability, which are associated with human nature, and (3) characterize human uniqueness.
This paper concerns the ethics of humor. More specifically, it is concerned with a certain category of jokes that can be labeled immoral jokes. I claim that such jokes exist, and that many of them are funny despite being immoral; that is to say, their immorality does not wholly undermine their humorousness, and may even somehow contribute to it. So a first task of the paper is to say what a joke’s being funny or humorous roughly amounts to. A second and more important task is to say what it is for a joke to be immoral, or, what may come to the same thing, pernicious. But a third task will be to decide what attitude or behavior is appropriate to such jokes in light of their immorality, and whether their total proscription is justified, or even humanly possible.
This is intended to be a serious talk, in spite of the title. The idea is that quantum mechanics is about probabilistic correlations, i.e., about the structure of information, since a theory of information is essentially a theory of probabilistic correlations. To make this clear, it suffices to consider measurements of two binary-valued observables performed by Alice in a region A and by Bob in a separated region B -- or, to emphasize the banality of the phenomena, two ways of peeling a banana, resulting in one of two tastes. The imagined bananas of Bananaworld are non-standard, with probabilistic correlations for peelings and tastes that cannot be simulated by Alice and Bob if they are restricted to classical resources. The conceptually puzzling features of quantum mechanics are generic features of such nonclassical correlations. As far as the conceptual problems of quantum mechanics are concerned, we might as well talk about bananas.
The shared intentionality hypothesis aims to explain the evolution and psychology of human cooperation, but it lacks the means to deal with the free-rider problem. To resolve this problem, I propose that the shared intentionality hypothesis can be supplemented with an account of how punitive sentiment in humans evolved as a psychological mechanism for strong reciprocity. Supplementing the shared intentionality hypothesis in this manner affords us additional insight into the normative nature of human cooperation.
I begin with the following moral dilemma: we are inclined to say that the harder an agent finds it to act virtuously the more virtue she shows if she does act well, but we are also inclined to say that the harder an agent finds it to act virtuously the more it shows how imperfect in virtue she is. I argue that this dilemma is the result of a deeper conflict between conceiving of morality as a corrective constraint on immoral temptations and conceiving of morality as consisting in being a good human. I am concerned with whether, conceiving of morality as being a good human, we might still accommodate our deep-seated intuition that morality is both “corrective” and “constraining”. I give Philippa Foot’s account of virtue as a largely successful attempt to do justice to our inclination toward conceiving of morality as both corrective and constraining, but I ultimately find it lacking since it fails to capture the considered belief that morality can be a corrective constraint on one’s individual moral deficiencies. Improving upon Foot’s account of virtue, I posit the existence of intermediate virtues that would not be possessed by the ideally virtuous person, but are nonetheless essential to becoming an ideally virtuous person.
David Milner and Melvyn Goodale's (1995, 2006, 2008) influential dual visual systems hypothesis (DVSH) proposes a functional description of the two main cortical visual pathways in the primate brain. The dorsal stream processes fast, accurate, and egocentrically-specified visual information for the fine-grained implementation of skilled, online motor control. The ventral stream is thought to process slow, “inaccurate”, and allocentrically-specified visual information that supports the recognition and identification of objects and events, and other forms of visual processing associated with conscious visual experience. This functional gloss presupposes that vision for action employs quite different visual information from “vision for perception”. I argue that the type of information employed by motor systems will generally be task sensitive and can, contra Milner and Goodale, recruit “scene-based” spatial information. Furthermore, vision for perception is not coded in allocentric coordinates, and it is, at best, misleading to conceive of the spatial information underlying this type of vision as “inaccurate”.
Here I present a challenge to prioritarianism, which is, in Derek Parfit's words, the view that "we have stronger reasons to benefit people the worse off these people are." We have such reasons simply by virtue of the fact that a person's utility "has diminishing marginal moral importance". In discussions of prioritarianism, it is typically left unspecified what constitutes a greater, lesser, or equal improvement in a person's utility. I shall argue that this view cannot be assessed in such abstraction from an account of the measure of utility. In particular, prioritarianism cannot accommodate the widely accepted and normatively compelling measure of utility that is implied by the axioms of John von Neumann and Oskar Morgenstern's expected utility theory. Nor can it accommodate plausible and elegant generalizations of this theory that have been offered in response to challenges to von Neumann and Morgenstern. This is, I think, a theoretically interesting and unexpected source of difficulty for prioritarianism, which I shall explore in the paper.
Natural language exists in different modalities: spoken, written, signed, or brailled. Although there is ample evidence showing that languages in different modalities exhibit rather different structures, there is a ‘glottocentric bias’ among theorists: sound is taken to be central, if not essential, to language. Speech is usually considered to be the primary linguistic modality, while other modalities are taken to involve merely surface perceptual differences that arise at the interfaces between the language faculty and the perceptual systems (as with sign language). Theorists generally assume that explanations of the structural facts of spoken language can be generalized to other modalities. Many linguists (e.g. Chomsky, Bromberger) have used observed features of spoken language to draw inferences about the nature of the human language faculty, on the assumption that the principles governing spoken language also govern linguistic phenomena in other modalities. In this talk, I will argue that this traditional picture, which takes spoken language as the primary modality through which to understand the other modalities of language, is probably mistaken. I propose to look at the written and spoken modalities of logographic languages like Chinese, and to compare the syllable structures of speech and sign language. With respect to Chinese, I will show that there are cases in which meanings cannot be disambiguated by analyzing the spoken sounds alone; rather, we need to look at the written words. I argue that there might therefore be different procedures for mapping forms of different modalities to meanings. As for sign language, I argue that its so-called ‘syllable structure’ is rather different from that of spoken language, and that the difference might be at the linguistic rather than merely the perceptual level.
I challenge a recent attempt by Antony Eagle to defend the possibility of deterministic chance. Eagle argues that statements of the form '$x$ has a (non-trivial) chance to $\varphi$' are equivalent in common usage (and in their truth-conditions) to those of the form '$x$ can $\varphi$'. The effect of this claim on the debate about the compatibility of (non-trivial) chances with a deterministic world seems to be relatively straightforward. If '$x$ has a chance to $\varphi$' is equivalent to '$x$ can $\varphi$' and statements of the form '$x$ can $\varphi$' are able to be truthfully uttered in a deterministic world, then statements of the form '$x$ has a chance to $\varphi$' are also able to be truthfully uttered in such a world. Drawing upon the work of Angelika Kratzer and David Lewis, Eagle shows how our best semantic theories allow statements of the form '$x$ can $\varphi$' to be truthfully uttered in deterministic worlds. Under the assumption that the truth-makers of statements like '$x$ has a chance to $\varphi$' are objective chances, compatibilism about chance seems to follow. I argue, however, that we have reasons independent of the debate about compatibilism about chance to reject a semantic theory that yields the sort of results Eagle claims for the Kratzer-Lewis account. If we make the necessary modifications to our semantic theory, however, then compatibilism about chance follows only, if at all, with great difficulty.
When engaging with a work of fiction one task we must accomplish is determining what is true of the fictional world described by the work. Fictions prescribe particular authorized games of make-believe. It is a challenging task to determine which fictional truths are prescribed by a fiction even when dealing with paradigms of fiction such as literature and film. Inconsistencies and incomplete aspects threaten to make such fictional worlds deeply problematic, seeming to prescribe impossible or incoherent imaginings about the fictional worlds. Kendall Walton has pointed out a set of principles of generation for fictional truths that serve as helpful guides in determining such truths and assuaging apparent problems. Videogames, however, present a unique problem in the generation of fictional truths due to their interactivity. The question is what games of make-believe any particular videogame prescribes and authorizes for those who interact with it. In this paper I will present the main difficulties facing such a task, namely the special problem of fiction in videogames; examine extant principles and the work they do; consider which understanding of those principles is helpful once interactivity is taken into account; and finally propose an understanding of how fictional truths are generated in videogames that is in line with Walton's general project and that resolves the initially troubling inconsistencies.
Propensity interpretations of objective probability have been plagued by many serious objections since their origin with Karl Popper (1957). In this talk, I attempt to offer a novel way of understanding propensity such that, in certain contexts, these problems go away. I call this approach mechanistic propensity. The central claim is this: in cases where stochastic biological phenomena are the result of underlying mechanisms, propensities can be understood as properties of these mechanisms. I suggest that this (localized) account of propensity, if successful, enjoys several benefits that traditional accounts have lacked. Mechanistic propensities (1) aren't deeply mysterious, (2) can explain frequencies, (3) are capable of accommodating single-case probabilities, (4) can cohere with determinism (both local and global), (5) avoid being too modal for science, and (6) seem not to face the reference class problem.
Representations and their contents are a core explanatory posit in both philosophy of mind and cognitive science; however, research in these fields has largely progressed independently of each other. On the one hand, philosophers are critical of theories and models in cognitive science that are already couched in intentional or representational terms and hence fail to provide non-circular analyses of content. On the other, philosophical theories have largely been developed based on a priori reflection, and fail to make testable predictions or claims that would be of interest to cognitive scientists. I lay out how one might begin to overcome both theoretical weaknesses, by marrying a well-known theory of content in philosophy, Fodor's asymmetric dependency account, to a well-known framework in empirical psychology, signal detection theory (SDT). Fodor's theory has the advantage of being (arguably) non-circular, while SDT is fundamental to many statistical and theoretical models in cognitive science. In particular, I lay out how Fodor's theory can be re-formulated using SDT, thus suggesting how philosophers' theories of content can be made empirical. I then gesture at how one might keep the re-formulated theory from becoming circular.
Prior (1959) and others have argued that the widely shared preference for future wellbeing over past wellbeing (the preference that unpleasant experiences be located in one's past, and pleasant experiences in one's future) provides decisive evidence in favor of the A-theory of time (which holds that time objectively passes). The B-theory, which does away with passage, seems challenged to explain why it is rational to care more about one's future than one's past, if one is not 'moving towards' the future and 'away from' the past. I argue, with the passage theorist, that the stock B-theoretic responses to this problem have been unconvincing. But I then defend two additional claims: first, that the A-theorist's case has in fact been understated, in that the B-theory undermines not just the asymmetric concern for one's future over one's past, but also the belief that one has any genuinely self-interested stake in the wellbeing of one's 'past/future selves' at all; but, second, there is no compelling reason to regard our (albeit enormously strong) A-theoretic intuitions on these questions as veridical, and that the B-theorist can give a plausible deflationary explanation that renders them almost wholly non-evidential. I conclude, therefore, that while the problem of time-asymmetric preferences offers no compelling reason to adopt either theory of time, it does raise the stakes of the debate by showing that the B-theory, if correct, would demand a fundamental reordering of ordinary intuitions regarding the nature of our mental lives and the foundations of prudential rationality.
In this talk I begin to draw together, and package into a coherent philosophical position, a number of ideas that in the last 25 years I have alluded to, or sometimes stated explicitly, concerning the properties and the merits of the measure of deductive dependence $q(c | a)$ of one proposition $c$ on another proposition $a$; that is, the degree to which the (deductive) content of $c$ is included within the content of $a$. At an intuitive level the function $q$ is not easily distinguished from the logically interpreted probability function $p$ that may, in finite cases, be defined from it by the formula $p(a | c) = q(c' | a')$, where the accent represents negation, and indeed in many applications the numerical values of $p(c | a)$ and $q(c | a)$ may not differ much. But the epistemological value of the function $q$, I shall maintain, far surpasses that of the probability function $p$, and discussions of empirical confirmation would be much illuminated if $p$ were replaced by $q$. Each of $q(c | a)$ and $p(c | a)$ takes its maximum value 1 when $c$ is a conclusion validly deduced from the assumption $a$, and each provides a generalization of the relation of deducibility. But the conditions under which $q$ and $p$ take their minimum value 0 are quite different. It is well known that if $a$ and $c$ are mutual contraries, then $p(c | a) = 0$, and that this condition is also necessary if $p$ is regular. Equally, if $a$ and $c$ are subcontraries ($a \vee c$ is a logical truth) then $q(c | a) = 0$, and this condition is also necessary if $p$ is regular. It follows that $q(c | a)$ may exceed 0 when $a$ and $c$ are mutually inconsistent. The function $q$ is therefore not a degree of belief (unless a positive degree of belief is possible in a hypothesis that contradicts the evidence). But that does not mean that $q$ may not be a good measure of degree of confirmation. Evidence nearly always contradicts (but not wildly) some of the hypotheses in whose support it is adduced. The falsificationist, unlike the believer in induction, is interested in hypotheses $c$ for which $q(c | a)$ is low; that is, hypotheses whose content extends far beyond the evidence. I shall provide an economic argument (reminiscent of the Dutch Book argument) to demonstrate that $q(c | a)$ measures the rate at which the value of the hypothesis $c$ should be discounted in the presence of the evidence $a$.
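To make the relation between $q$ and $p$ concrete, here is a minimal computational sketch under simplifying assumptions not made in the abstract: a two-atom propositional language and a uniform logical probability over truth-value assignments, with $q(c | a)$ computed via the stated identity $p(a | c) = q(c' | a')$, i.e. $q(c | a) = p(a' | c')$. It is meant only to illustrate the boundary behavior described above, not the author's own treatment.

```python
# Sketch: logical probability p over the four truth-value assignments to
# two atoms, with q(c|a) computed as p(not-a | not-c), following the
# identity quoted in the abstract. The uniform measure over assignments
# is a simplifying assumption used only for illustration.

from itertools import product

WORLDS = list(product([True, False], repeat=2))  # assignments to atoms A, B

def p(prop, given):
    """Conditional logical probability with a uniform measure on worlds."""
    given_worlds = [w for w in WORLDS if given(w)]
    return sum(1 for w in given_worlds if prop(w)) / len(given_worlds)

def q(c, a):
    """Deductive dependence of c on a, via q(c|a) = p(not-a | not-c)."""
    return p(lambda w: not a(w), lambda w: not c(w))

A = lambda w: w[0]
A_and_B = lambda w: w[0] and w[1]
A_or_B = lambda w: w[0] or w[1]
notA_or_B = lambda w: (not w[0]) or w[1]

# a validly entails c  =>  q(c|a) takes its maximum value 1
print(q(A, A_and_B))        # 1.0  (A-and-B entails A)
# a and c are subcontraries (a or c is a logical truth)  =>  q(c|a) = 0
print(q(A_or_B, notA_or_B)) # 0.0
# for comparison, the probability p(c|a) in the first case
print(p(A, A_and_B))        # 1.0
```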
Two projects are pursued in this paper. First, a new account of objective chance (Humean-Classical Chance) is presented that is Humean in kind, but distinguishes itself from other Humean accounts of objective chance by dispensing with any Best Systems Analysis, and by appealing to a Principle-of-Indifference-like assumption, as opposed to frequentism. Second, by being central to the functioning of the account presented, the minimum average code length interpretation of entropy that was developed by C.E. Shannon is explored. A frank assessment of the advantages and disadvantages of Humean-Classical Chance is also provided, largely in the interest of further illuminating what can and cannot be done with the minimum average code length interpretation of entropy.
Claims that certain traits are innate abound in the biological and cognitive sciences. Having legs is said to be an innate trait for frogs (and many other species), and the capacity to learn a language is said to be innate for humans. These claims appear to be intended as explanatory statements: the fact that a trait is innate is meant to perform some work towards explaining things that we would like our biological and cognitive theories to explain. But what exactly are the explananda of innateness claims? Do innateness claims seek to explain instances of particular traits emerging in individuals? Or are they aimed at explaining the existence of commonalities or differences in a population (such as a species)? Or perhaps both of these? Is there any one thing that innateness claims are meant to explain across the different scientific disciplines in which they occur, or does innateness mean different things in different domains? In this talk I examine the explanatory role of innateness claims across a range of contexts and offer the following two tentative conclusions: (i) innateness claims are best construed as providing only individual-level (rather than population-level) explanations; and (ii) the explananda of innateness claims differ from context to context, owing to both theoretical and pragmatic considerations. Thus, I argue, there is no one thing that innateness means across all scientific contexts. In particular, the notion of innateness that is generally most useful in cognitive science differs from any of the notions that might be useful in biological contexts.
There has been a lot of interest in how to derive some broadly decision-theoretic verdicts concerning deontic modalities and their interactions with conditionals. It is easy to argue that a traditional Kratzer-style premise-semantics needs some revisions in order to get these facts right. The difficulty is how to develop a semantic theory that gets those facts while remaining, as much as possible, 'ethically neutral'. In this talk, I investigate three ways of going beyond the traditional Kratzer-style premise-semantics. Each successive grade makes more serious use of decision-theoretic machinery.
At Grade I, we add sets of mutually exclusive alternatives ('decision problems'). Cariani, Kaufmann and Kaufmann (CKK) developed a version of Kratzer-semantics within the confines of Grade I: I will summarize that proposal and defend it from some objections, but I will also flag some reasons to go beyond it.
At Grade II, we add probabilities to the mix, so that our raw materials are an ordering source, a decision problem, and a probability space (in Yalcin's sense). This is, in my view, the best grade to work at. The core of my talk consists in spelling out the details and benefits of my favorite Grade II semantics.
At Grade III, we have decision problems, probabilities and utilities: at this stage we inevitably cross the line and end up with an ethically compromised theory. I will argue we should not go this far.
The independence problems for functionalism stem from the worry that functional properties that are defined in terms of their causes and effects are not sufficiently independent of those purported causes and effects. I distinguish three different ways the independence problems can be filled out in terms of necessary connections, conceptual connections and vacuous explanations. I argue that none of these present serious problems. Instead, they bring out some important and over-looked features of functionalism.
Much of the work in traditional game theory is focused on the analysis of solution concepts (typical examples include the Nash equilibrium and its refinements or the various notions of iterated dominance). A solution concept is intended to represent the "rational" outcome of a strategic interactive situation. That is, it is what (ideally) rational players would do in the situation being modeled. This talk will focus on a key foundational question: How do the (rational or not-so rational) players decide what to do in a strategic situation? This has both a normative component (What are the normative principles that guide the players' decision making?) and a descriptive component (Which psychological phenomena best explain discrepancies between predicted and observed behavior in game situations?). This question directs our analysis to aspects of a strategic interactive situation that are not typically covered by standard game-theoretic models. Much of the work in game theory is focused on identifying the rational outcomes of a game-theoretic situation. This is in line with the standard view of a strategy as a "general plan of action" describing what players (should) do when required to move according to the rules of the game. Recent work on epistemic game theory has demonstrated the importance of the "informational context" of a game situation in assessing the rationality of the players' choices. This naturally shifts the focus to the underlying *process* of deliberation that leads (rational) players to adopt certain plans of action.