Archive for the ‘Philosophy of Mind’ Category

The Revised Principle of Alternate Possibilities and Galen Strawson’s Basic Argument

March 24, 2012

I have a new post up at the Florida Student Philosophy Blog, which can be found here. It concerns the Revised Principle of Alternate Possibilities (RPAP) and the use of Galen Strawson’s Basic Argument to nullify the issues caused by the RPAP. I will not be re-posting it here at Philosophy & Polity, so head over to the UNFSPB and check it out!

Nonreductive Agent Causation Part II: Four Points of Analysis

February 20, 2012 4 comments

"But if nonreduction is true then we would never have Maple Syrup." Photo courtesy of Smittenkitchen.com.

In Part I of this two-part post I introduced an extended dialogue between Timothy O’Connor and Derk Pereboom that spans physicalism, reductionism, agency theory, and quantum physics. O’Connor posits a purely physicalist theory of agency based on macroproperties that are instantiated by sets of microproperties once those microproperties reach a certain threshold of complexity. Once this level is reached, an emergent macroproperty, constituted as an agent-causal power, can enact downward causal influence over its microproperties without being subject to upward causation or determination by them. Pereboom takes O’Connor to task for failing to account for the influence of distal causes, which determine the behavior of the agent-causal power despite its alleged emergence; even if we swap the deterministic model for a statistical one, Pereboom alleges, we are still left with distal causes as the ultimate originators of action. In the comment section of a previous post, Aaron Kenna rightly makes mention of this, viz. that statistical, indeterministic, and deterministic worldviews all fail to provide the freedom required by agency theories and moral responsibility. In a future post I shall discuss this point further, using Strawson’s “basic argument” as an example. But for now, let’s turn to four points of analysis on the conversation between O’Connor and Pereboom to see what we can make of it. Read more…

Nonreductive Agent Causation Part I: A Dialogue Between O’Connor and Pereboom

February 19, 2012 6 comments

"To reduce, or not to reduce - that is the question."

I have recently come to believe that the crux of disagreements in contemporary discussions on physicalism and agency is the seemingly impassable divide between reductionist and non-reductionist positions. Perhaps one of the clearest examples of this disconnect can be seen in a dialogue between Derk Pereboom and Timothy O’Connor regarding the plausibility of a certain type of physicalist agency theory. The conversation is multi-faceted and invokes emergent agent causal powers (which I have mentioned here before, though only in passing) as well as quantum indeterminism. In this post I would like to introduce the reduction/non-reduction divide by unfolding the conversation between Pereboom and O’Connor. Part I will be heavily exegetical, but in Part II I offer up four points of analysis on the dialogue at large and the theories therein. Read more…

Philosophers’ Carnival: January 30th, 2012

January 30, 2012 6 comments

Welcome to the January 30th, 2012 edition of the Philosophers’ Carnival! The goal of the Carnival is to highlight the best and most engaging blog posts in the area of philosophy – we have a lot of great submissions, so let’s dig in.

Epistemology

Clayton over at Think Tonk brings us a pithy post on a lack of evidence for evidentialism. Clayton argues that there exist instances wherein a person could in good faith believe she has good reason to believe that she is warranted in believing p, all the while lacking sufficient evidence for believing p. There is also a valuable exchange in the comments section of the post. An excerpt from the main post:

Here, now, is my anti-evidentialist argument. William has sufficient justification to believe that he permissibly believes that he permissibly believes God exists. William, however, does not have sufficient evidence to believe that God exists. So, according to [the positive accessibility thesis], it is permissible to believe without sufficient evidence. According to the evidentialist, it is never permissible to believe without sufficient evidence. Thus, the evidentialist view is mistaken.

Following in the vein of beliefs, Jim over at Agent Intellect presents an explication of the differences between traditional Global Skepticism à la Descartes and Plantinga’s Evolutionary Argument against Naturalism. While he admits there is a measure of truth in likening the EAAN to Global Skepticism, he claims they differ in substantial ways:

Plantinga’s EAAN is significantly different from classical global skepticism. First, we do not have to have a reason for a belief if it is properly basic, and such a belief can constitute knowledge even if we don’t know that we know it. We are justified, or our beliefs are warranted, up until the point where we have a reason for thinking them to be false. The EAAN provides just such a reason: if naturalism is true, then it is improbable or inscrutable that any given belief would be true. After this, the EAAN has the same effect as the more traditional global skeptical arguments: any reason you can give for a particular belief is itself subject to the EAAN and is therefore not trustworthy. There is no stopping the rot once it’s started. Indeed, part of the genius of Plantinga’s argument is that it amounts to a global skeptical argument that arises from within externalism.

Injecting a little bit of Hume into the mix, Maryann from the Examiner discusses the is-ought distinction, arguing that for an ought statement to be true there must exist something to which that statement corresponds (or which it describes), though that correspondence does not by itself justify the statement. An excerpt from the piece:

Translating from epistemology back over to ethics, there needs to be a real ought in order for there to be moral knowledge, but 1) the real ought is not justified by its correspondence to reality—that would be saying its correspondence justifies its correspondence (begging in a circle) and 2) a particular ought is not made to correspond by its justification—that would be like saying that the act of believing made something real to believe in (also begging in a circle). No, there must be ‘both’ justification ‘and’ correspondence. If one or both is lacking (by depending on the other, or for some other reason), knowledge is lacking.

Metaphysics

Occasional Philosophy has an interesting re-imagining of Tegmark’s Quantum Suicide thought experiment, which traditionally limits hypothetical conclusions to the experimenter only. Instead, the author proposes the Quantum Homicide thought experiment, which allegedly allows outside observers to draw conclusions about many-worlds vs. Copenhagen interpretations of quantum mechanics. A snippet of the proposed tweak:

The Quantum Homicide thought experiment proposes a modification to the gun used in the experiment. In this case, if the particle is measured as spin up then the gun fires and kills the experimenter, just as before (in fact, the killing of the experimenter isn’t necessary for the experiment to work but I prefer the aesthetics of the continuity between the quantum suicide and quantum homicide cases). On the other hand, if the particle is measured as spin down then the gun fires a time travel ray, sending the experimenter one day into the past.

Noah Greenstein, the eponymous curator of Blog of Noah Greenstein, discusses the role emotional states play in hindering our reasoning. Based on this, he introduces the Future Rationality Cone, which attempts to include emotion and thought in predicting the relative rationality of future beliefs by way of their distance, as it were, from other beliefs:

Considering a person’s consciousness at some point, we can map what we consider rational and irrational based upon the potential mood and thought changes. Any possible future belief (a combination of thought and mood) will be a combination of changes in prior moods and thoughts. Beliefs that require too great a change in both thought or mood may be outside the realm of rationality for a person, while beliefs that require little effort will fall within the realm of rationality. Hence, the rationality cone.

Lewis from the group blog The Mod Squad tackles Leibniz’s views on the worth of “blind thought,” i.e. cognition concerning signifiers absent an apparent regard for the signified, and offers up a contrast among Locke, Berkeley, and Hume on the subject:

This discussion, in which Leibniz first introduces blind thought, occurs in the midst of Leibniz’s commentary on Locke’s views on power and freedom. Specifically, it appears that Leibniz introduces the notion in response to Locke’s view that the main determinant of the will is not the prospect of a greater good, but instead, some strong present unease…As suggested by the initial illustration of algebraic reasoning, Leibniz’s stance on blind thought is not that it is always problematic. In a later discussion, relating to the purpose and origins of language, Leibniz suggests that blind thought can be of great utility.

Ethics

Switching gears ever so slightly, Greg at Cognitive Philosophy expounds on the potential threat to ethics posed by genetic modification (given a biologically contingent definition of ethics).

Changing the types of biological organisms that we are could conceivably change what is or is not right to do in any particular situation. It might change the very people that we should be striving to be. Yes, it’s unlikely we’ll change ourselves to the point where harming others is a good thing (though not impossible), but to what degree our systems of ethics will have to change is not something we can predict in advance. Now, let me be clear. I’m not making the naturalistic fallacy (or at least I’m not trying to). My point is that facts about our biology and psychology are going to *constrain* our ethical theories, not wholly *determine* them. Ethics is tricky business. Philosophers have been arguing about it for thousands of years, and while we all have some intuitive notions of what is good and what is bad, what is right and what is wrong, we’re certainly not anywhere close to having all the answers. Changing who we are as human beings will cause us to have to rethink some problematic notions.

Richard from Philosophy, et cetera discusses what he views as major lacunas in a recent argument against immigration that attempts to use environmental concerns to justify its position. He argues that general increases in human welfare outweigh any alleged damage to American wages, and similarly that if anything, mass immigration highlights rather than hides fundamental issues in countries facing an exodus:

Stepping back: If we want to get the most welfare “bang” for our ecological “buck”, barring the global poor access to economic opportunities is surely not the way to go. (It’s less extreme than outright killing them, but I think ultimately misguided for fundamentally similar reasons.) We should strive for improved efficiency in less humanly damaging ways: emissions taxes, reduced animal (esp. cattle) farming, increased urban density / efficient transit, etc. Not to mention investing in scientific research to uncover new solutions — investments which are more easily made by a wealthier, better educated populace.

Assorted Topics: Logic, and our lack of Kants

On the Logic side of philosophy, Tristan at Sprachlogic serves up a new notation for propositional modal operators, by way of which he seeks to answer the following:

It is common to see the following list of four modal operators presented, sometimes as though it were exhaustive: possibility, necessity, contingency and impossibility. But reflect again that, of these four modalities, possibility is an odd one out, since it is non-commital on truth-value. Also, note that systems have been developed where other operators, e.g. one for non-contingency, are taken as primitive. This can give rise to an uneasy, lost feeling. Are the usual four modal operators just a hodge-podge? What modal operators are there (could there be)? Is there a systematic way of producing them all? And is there then a systematic way of determining logical relations between them?
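(A quick gloss for readers rusty on the notation – my shorthand, not Tristan’s: writing ◇ for possibility, necessity is ¬◇¬p, impossibility is ¬◇p, and contingency, on the usual reading where it entails truth, is p ∧ ◇¬p. Put this way, it is easier to see his point that possibility alone is noncommittal about whether p actually holds.)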

Concerning philosophers themselves, Eric at Splintered Mind discusses the charge that specialization in contemporary philosophy signals the demise of interdisciplinary giants, using Kant as an example. An excerpt:

Consider by century: It seems plausible that no philosopher of at least the past 60 years has achieved the kind of huge, broad impact of Locke, Hume, or Kant. Lewis, Quine, Rawls, and Foucault had huge impacts in clusters of areas but not across as broad a range of areas. Others like McDowell and Rorty have had substantial impact in a broad range of areas but not impact of near-Kantian magnitude. Going back another several decades we get perhaps some near misses, including Wittgenstein, Russell, Heidegger, and Nietzsche, who worked ambitiously in a wide range of areas but whose impact across that range was uneven. Going back two centuries brings in Hegel, Mill, Marx, and Comte about whom historical judgment seems to be highly spatiotemporally variable. In contrast, Locke, Hume, and Kant span a bit over a century between them. But still, three within about hundred years followed by a 200 year break with some near misses isn’t really anomalous if we’re comparing a peak against an ordinary run.

Philosophy News

-I regret to say that Common Sense Atheism is closing its digital doors, as it were. The site will remain as an archive, and the site’s author, Luke Muehlhauser, will be continuing his work in the area of artificial intelligence.

-Peter Ludlow discusses the implications of a hypothetical dissolution of the APA, courtesy of the Leiter Report.

-Gary Gutting, frequent contributor to the New York Times, discusses the purpose of philosophy in our current climate. I highlight this Stone article in particular because I don’t imagine there is a single reader who has not had to brave such questioning!

-Neal Tognazzini at Flickers of Freedom celebrates the 50th anniversary of P.F. Strawson’s Freedom and Resentment. The College of William & Mary will be hosting a two-day conference examining themes across his work.

-Daniel Dennett has been awarded the Erasmus Prize 2012. The 2012 award celebrates those who have promoted “the cultural meaning of the natural sciences.”

-Matthew Mullins at Prosblogion posts on the John Templeton Foundation’s open online submission cycle for funding inquiries. The areas of focus are philosophy and theology.

“Determinism al Dente” Review Part I: Intuition-Based Theorizing

October 5, 2011

Published in 1995, Derk Pereboom’s article “Determinism al Dente” has had a great impact on my view of determinism and moral responsibility. In it, Pereboom affirms the truth of determinism, a lack of moral responsibility, and the inapplicability of praise or blame judgments to human action. What he does support is the view that what is traditionally called ‘hard’ determinism can be modified to coherently include moral principles and values that remain undamaged despite a lack of metaphysical moral responsibility. Because this article was so crucial not only in shaping my understanding of contemporary theories in philosophy of mind but also in leading me to consider and adopt a determinist worldview, I thought it appropriate to give a brief summary of, and my reflections on, what I consider an important piece of the current dialogue surrounding freedom of the will.

Perhaps the most salient feature of Pereboom’s article is that he draws attention to the rather vague and simple dichotomy that exists in the philosophy of mind literature between hard and soft determinist viewpoints, a lack of distinction that hides a wealth of diversity in theories of mind. Pereboom highlights three compatibilist conceptions of freedom in particular: the traditional Humean conception of freedom of action, i.e. uncoerced action that stems from the genuine desires of the agent; Frankfurtian freedom of the will, based on second-order desires becoming effective; and the ‘ability to will’ elucidated by Bernard Gert, Timothy Duggan, and John Fischer, on which deliberation and a rational thought process do the heavy lifting in defining how free human action can be. Pereboom then poses a simple thought experiment regarding a murder that is freely committed in all three senses and asks whether moral responsibility for it is truly compatible with determinism:

Let us consider a situation involving an action that is free in all of the three senses we have just discussed. Mr. Green kills Ms. Peacock for the sake of some personal advantage. His act of murder is caused by desires that are genuinely his, and his desire to kill Ms. Peacock conforms to his second-order desires. Mr. Green’s desires are modified, and some of them arise, by his rational consideration of the relevant reasons, and his process of deliberation is reasons-responsive…Given that determinism is true, is it plausible that Mr. Green is responsible for his actions?

Pereboom’s reply to this question invokes the distinction between proximal and distal causes, though he does not use these terms himself. In the Frankfurtian and Fischerian senses of compatibilism, Mr. Green would have acted freely because the proximal cause of his actions was either his second-order desires becoming effective or the deliberative process, respectively. However, the distal cause ultimately lies outside of the control of the agent, as the causes of the desires or of the deliberation process extend into the past, even prior to the existence of the agent. In this sense, incompatibilism states that a person cannot be held morally responsible or free if the ultimate cause of her actions lies outside of her control. Contrary to this, compatibilists argue that, because the first- and second-order desires and the deliberative process are all consistent with, and stem from, the agent’s biology, it matters little whether the causal chain extends beyond the agent’s control.

To demonstrate the falsity of such claims, Pereboom constructs a parallel thought experiment wherein neuroscientists fulfill all of the criteria required for compatibilists to say Mr. Green has acted freely, and yet our intuition is that, because the neuroscientists have affected Mr. Green in the ways they have, he cannot be considered morally responsible for his actions. Elsewhere Fischer argues that reasons-responsiveness cannot be constructed by neuroscientists, but Pereboom rightly points out that in a physicalist worldview it must necessarily be possible to induce such behavior. Pereboom then unfurls a thorough, if at times tedious, list of four rejoinders and counter-replies which essentially demonstrate the untenability of positing moral responsibility in a deterministic worldview. An interesting point Pereboom makes, and one to which I shall return in my reflection at the end of this summary, is the unreliability of common sense intuitions surrounding moral responsibility and freedom of the will. In order to fully represent it, the following is Pereboom’s thought experiment (case #4):

Case 4: Physicalist determinism is true. Mr. Green is a rationally egoistic but (otherwise) ordinary human being, raised in normal circumstances. Mr. Green’s killing of Ms. Peacock comes about as a result of his undertaking the reasons-responsive process of deliberation, and he has the specified organization of first and second-order desires.

As Pereboom points out, many soft determinists intuitively believe Mr. Green to be morally responsible in this example for the reasons stated above (he possesses first- and second-order desires pertaining to and effective for his action, which are rooted in a rational, reasons-responsive process of deliberation, thereby fulfilling the required components of some compatibilist theories). Following traditional physicalist and reductionist lines, however, because the distal cause of Mr. Green’s behavior lies outside of his control, we must concede that he is not morally responsible. Pereboom secures this conclusion by replacing agent causation with event causation and appealing to the transitivity of the resulting causal chain. As a side note, I do not believe this is necessarily a straight trade, as there is a much richer ontology to be had from holding that, though still determined, agent causation is not necessarily reducible to event causation. Be that as it may, the ultimate conclusion ought to be that distal causes, no matter their nature, render agents incapable of being held morally responsible (assuming a PAP model of moral responsibility; see here for a discussion of this model).
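(Schematically, the transitivity point can be put like this – my own gloss, not Pereboom’s notation: write C(x, y) for “x causes y,” let d be a distal event, p a proximal event, and a the action. If C(d, p) and C(p, a), then by transitivity C(d, a); and since d lies outside the agent’s control, so, on the incompatibilist reading, does a.)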

Pereboom then argues that the intuitions soft determinists employ to argue Mr. Green is morally responsible in case #4 are mistaken because those alleging such intuitions have not internalized the implications of a deterministic worldview: “In making moral judgments in everyday life, we do not assume that agents’ choices and actions result from deterministic causal processes that trace back to factors beyond their control. Our ordinary intuitions do not presuppose that determinism is true, and they may even presuppose that it is false…If we did assume determinism and internalize its implications, our intuitions might well be different.” Now, this point gave me great pause while reading Pereboom’s piece, since intuitions often form the backbone of our evaluation of thought experiments in all fields, not just philosophy of mind. A few reflections on this point:

(1) Upholding the faultiness of these moral intuitions commits us to re-evaluating the use of moral intuitions in all manner of philosophical inquiry. Take Foot’s famed Trolley Problem – do we presume the intuitions most people have regarding the appropriate action to be faulty because they presuppose agency and the ability to do otherwise? What about thought experiments surrounding the classification of a painting as art or not-art based on whether it has ever been viewed? Or whether it was intentionally created? While accepting Pereboom’s stance on this point might not upend the whole of intuition-based responses, it certainly would seem to create more problems than it solves.

(2) This move is especially helpful in evaluating arguments for agency, like Richard Taylor’s, which rely upon intuitive, ‘common sense’ pieces of data that are, if Pereboom is right, rooted heavily in the very position they are intended to verify, making such arguments circular. While this is beneficial to determinists of many flavors, arguments of Taylor’s style are now few and far between and appear to be in their twilight, if indeed anyone still adheres to such theories of agency.

Off the top of my head there appear to be two alternate possibilities for responding to (1), though I am sure there are many more than just the two. The differences are partially related to the controversy surrounding intuitive responses as having either true content (3.a) or apparent content (3.b).

(3.a) Pereboom is clearly correct, though I believe he paints the picture in too broad strokes. I say this because, unless we are prepared to commit ourselves to the problems outlined in (1), we must draw a distinction between feeling- or emotion-based intuitions and reasoned intuitions. In case #4, we are told that determinism is true. We begin from a position wherein we must factor in determinative laws, causal transitivity, etc. This makes for a reason-based intuition, because it requires the application of premises which may or may not run counter to reality, or counter to the nature of reality – as when we are asked to assume the perspective of another gender or another race, or to presume the existence of non-real entities, e.g. “So, if unicorns were real, would it be unethical to raise and slaughter unicorn foals for delicious unicorn veal?” All of these premises require that the responder do more than simply reply from an emotional reaction, and more so than in a thought experiment that accords more closely with reality. I do, however, understand that thought experiments are by nature artificial and often require caveats that are odd, if not unrealistic.

(3.b) The soft determinists who intuitively respond to case #4 that Mr. Green is morally responsible given the conditions either fail to appropriately apply the premise that physicalist determinism is true, or misunderstand the content of said premise. This is because, operating under the PAP model of moral responsibility, one must have the ability to do otherwise to be held morally responsible for one’s actions. Assuming we all agree on the transitive nature of distal and proximal causes, all human behavior in a deterministic world occurs due to factors outside of the control of the humans who inhabit it. This obviates (1) and (3.a) in that we need not posit that intuitions are faulty due to the nature of human perception itself, but rather due to a mistaken understanding or application of a premise of the thought experiment. A sticking point for this alternative would be securing agreement between soft and hard determinists regarding the nature of universal causal determinism, the relationship between distal and proximal causes, etc., which may itself be a tall order.

The possibility also exists that (3.a) and (3.b) are two sides of the same coin. This would be because a failure to accurately or fully consider a premise of the thought experiment would constitute a non-rational intuitive response rather than a rational one. I am curious to hear other people’s thoughts on this.

That wraps up Part I of this review. In the next installment, I’ll discuss Pereboom’s attempt to find a middle ground between soft and hard determinism that upholds moral values but legitimately affirms a consistent determinist worldview.

Science, Philosophy, and Freedom

September 14, 2011 1 comment

Having but a lowly undergraduate degree from a SLAC, I recognize all too often that my knowledge of many philosophical topics is limited in both breadth and depth, even in those in which I feel most read. Despite this, I am no stranger to some of the more developed arguments for and against freedom of the will, and I have recently taken an interest in neurophilosophy and neuroscience. As some readers may note, I offered an extended treatment of the Soon et al. study, and elsewhere I have tried to use studies of this type to argue that emergentist and similar agency theories have significant hurdles to overcome if they are to maintain and prove the conclusions they draw regarding the role of conscious deliberation in human action.

Recently over at Flickers of Freedom, a piece from Nature was featured that allowed a rare rebuttal from some in the philosophy community in response to a 2007 study almost identical in scope and findings to the Soon et al. study. There is still a lively and interesting discussion going on in the comment section of that post that is well worth checking out.

Despite my depressing lack of knowledge in many of these fields, especially given that I have not attended graduate school for philosophy, there still seem to be far more vestiges of agency theory left in the community than I would have thought. I am not such a dyed-in-the-wool determinist that I am not open to re-evaluating how we define freedom; on the contrary, I believe we must reconcile what we know from reason and science with how we perceive the world and the behavior of its inhabitants. That being said, some of the approaches offered by titans like Daniel Dennett (expanding our conception of the self to include our biology) do little, as far as I can understand, to solve the key issue posed by studies like Libet’s, Soon’s, and the most recent one: how does deliberation enter the picture if predictive antecedent brain activity exists, and even once it has entered the picture, how can it play a causal role without being determined?

In my senior thesis I examined Timothy O’Connor’s theory of emergent agent causation, in particular his claim that emergentism eliminates the problem of interaction. Using Jaegwon Kim’s supervenience argument, I demonstrated that O’Connor’s particular theory of emergent downward causation (a form of nonreductive physicalism) results in overdetermination. O’Connor also posits that emergent agent causation is a much simpler explanation for the behavior of human beings than complicated physicalist laws, but I call this into question as well. All of this is to say that, before we even begin to discuss deliberation and the participation of consciousness in our actions, agency theorists must recognize, and reconcile with their theories, the findings of studies like these. Though I clearly cannot claim to know the vast body of O’Connor’s cogent and thought-provoking work, in my research I did not find a response from O’Connor to neurostudies like these. It is well-reasoned (though flawed) monist and physicalist agency theories like his, not dualist approaches (which surely must have fallen far out of vogue by now), that must also reconcile their positions with these studies. The piece in Nature paints too simplistic a picture of how these studies can be brushed aside if you are not a mind/body dualist, and I sincerely wonder what theories exist that would prompt statements like: “Nowadays, says Mele, the majority of philosophers are comfortable with the idea that people can make rational decisions in a deterministic universe.” Rational, sure – but free?
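(For readers unfamiliar with Kim’s supervenience argument, a rough schematic of the version I used: let the emergent mental property M supervene on physical base P, and suppose M causes some later property M* with its own physical base P*. M can bring about M* only by bringing about P*; but P is already causally sufficient for P*, so P* ends up with two sufficient causes – one mental, one physical. Unless M simply reduces to P, that is overdetermination.)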

I look forward to reading more by folks like Kathleen Vohs, Al Mele (thanks, Nick!), and Adina Roskies in an attempt to better understand exactly which determinist elements are being affirmed, and what reasons each gives for being simultaneously unsurprised by such findings and insistent that free will is clearly not threatened by them. I must have missed the memo!

For what it is worth, below are some concessions and postulations about the limitations of current neurostudies, as well as what ought realistically to suffice for philosophers to begin taking neurostudies seriously rather than treating them like elements of an intellectual turf war. Details of the study can be found in my aforementioned post.

Predictability

Depending on which camp one falls into, the 60% predictability is either impressive or lackluster. Given that, at least in the Soon studies (details can be found here), the choice is between left and right, we automatically expect the probability to hover around 50%, so a 10-point increase is noteworthy – though to some, not by much.
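As a rough gut-check on the “noteworthy, but not by much” dispute, here is a back-of-the-envelope calculation – a minimal sketch in Python, where the figure of 1,000 trials is a hypothetical of my own rather than a number from the studies, so treat the output as illustrative only:

from math import comb

def chance_of_at_least(hits, trials, p=0.5):
    """Probability of at least `hits` correct guesses out of `trials`
    two-choice trials if the predictor were merely guessing at chance."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical numbers: 60% accuracy over 1,000 left/right trials.
print(chance_of_at_least(600, 1000))  # roughly 1e-10 -- far beyond luck

The toy calculation makes plain that whether 60% is “impressive” depends on the number of trials: over a handful of trials it is indistinguishable from coin-flipping, while over many hundreds it is astronomically unlikely to be luck.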

It ought to go without saying that an increase in the predictive capability of the studies would increase the persuasive power of their conclusions regarding free will. But what many often lose sight of is not only the massive gains made by the most recent studies but also the sheer weight of the implications of their concrete findings. For example, in Libet’s studies in the 1980s there was no way to predict choices – now there is, and such predictions are accurate more than half the time. To reiterate: a computer is connected to an fMRI machine, literally watches and measures human brain activity, and uses that activity to predict future actions. I may be on the stodgy side, but given that a mere 25 years ago we could neither make such predictions nor map and record brain activity in this way, the technology and the studies have grown by leaps and bounds. Given this, I am confident that as technology improves, so too will the predictive capacity of these studies. The Nature article cited above describes several studies currently in the works or in the stages of publication that seek to mitigate concerns over the role of the subject in the study, timing, scope of measurement, etc. I am particularly excited about the study that seeks to remove the subjective element of the test subject becoming conscious of choice by using a video game setup.

Scope of Claims

I do agree with the spirit of the Nature article and some of the sentiments therein: these studies do not unequivocally disprove the existence of free will as traditionally conceived. Clearly these studies are artificial in nature (as all experiments are), and the nature of choice and subjective human experience as we understand them makes such studies very difficult to parse. For who, except the subject, can tell whether true deliberation took place? Who, if anyone, can say whether the 40% of trials in which the computer strikes out represent true freedom or a limitation of our technology?

All of this is not to discount the role philosophers have and have not had in this process. Though I do not doubt that some have risen to the occasion and addressed these studies proactively and head-on (or conducted them!), there remains an underlying impression that any engagement is reluctant and occurs only once science has ‘overstepped its bounds,’ as it were. We are at a point in our development as a species where science and philosophy can no longer avoid one another. Social contract theory is threatened by evolutionary evidence that our ancestors were always social creatures. Religion and faith are under assault by scientific evidence that evolutionary triggers explain much of the mass appeal of religious belief. So, too, is the traditional conception of ourselves as wholly free agents under attack by scientific evidence that our brains do more behind the scenes than we previously thought. The rise of neurophilosophy gives me hope that more and more thinkers are becoming willing to incorporate these findings in their philosophical considerations, though I do wonder about the ‘old guard.’ Are we witnessing a backlash against science’s role in the intellectual and philosophical world, or do the sentiments in the Nature article represent genuine and appropriate hesitation to read too much into these studies, or to explain away the complicated workings of the human brain? Time will elucidate this question, but I wonder if it will ever provide an answer.

J. Anderson Thomson on Perceived Agency

September 8, 2011 3 comments

“Humans are strongly biased to interpret unclear evidence as being caused consciously by an agent, almost always a humanlike agent. This cognitive ability to attribute agency to abstract sights or sounds may have helped our distant ancestors survive, allowing them to detect and evade enemies. It kept them alert, attentive toward possible danger. Better to jump at shadows than risk something or someone jumping at you.

This ability was adaptive, so therefore it is natural for us to assume the presence of unseen beings and to believe that such beings can influence our lives. It is equally natural to assume that such a being, if asked, can alter or affect what happens to us. Asking easily becomes praying…As social beings with these adaptations, we are now set up for belief in a divine attachment figure. We can attribute agency to it, transfer some of our early-life emotions to it, and as a result can believe that such a being desires to interact with us.”

-From Why We Believe in Gods: A Concise Guide to the Science of Faith.

A full review of this great book can be found here.