The peril of power

January 4, 2010

2009 continued the trend of unhealthy obsession with celebrities and their personal lives, and ended with a public relations nightmare for perhaps the decade’s best athlete: Tiger Woods.  One night while sitting at the dinner table, my mom professed that she couldn’t understand why people with power and money have a hard time sticking to one person.

My opinion was the exact opposite.  I think it is completely clear why humans have a hard time being monoamorous (having only one lover).  We have a biological impulse to pass on our genes which becomes a huge drive once we’re physically capable of doing so.  I don’t think it’s something restricted to the rich and powerful, either; it’s just that the rich and powerful are usually in a unique position to attract others to them, and thus fulfill their desires for polyamory.

From the perspective of evolutionary psychology, it is very obvious why those who are the most well off might be the most polyamorous.  When we are picking out partners, we are subtly looking for a good genetic match.  We are biologically coded to look for certain factors that indicate a partner capable of: a) producing good offspring, b) providing for and taking care of that offspring.  The affluent are in a unique position to handle these tasks, and are thus very appealing to us.

Now the Kellogg School of Management at Northwestern University has published some interesting research confirming that the most powerful have a certain inability to practice what they preach.

In all cases, those assigned to high-power roles showed significant moral hypocrisy by more strictly judging others for speeding, dodging taxes and keeping a stolen bike, while finding it more acceptable to engage in these behaviors themselves.

Galinsky noted that moral hypocrisy has its greatest impact among people who are legitimately powerful. In contrast, a fifth experiment demonstrated that people who don’t feel personally entitled to their power are actually harder on themselves than they are on others, which is a phenomenon the researchers dubbed “hypercrisy.” The tendency to be harder on the self than on others also characterized the powerless in multiple studies.

This study confirms what seems to be patently clear in the public sphere, and we can be reasonably certain that this will continue to be an ongoing battle as social networking and the media continue to bring everyone’s private life into the public purview.


A brief (and rather pointless) journey in semantics

January 2, 2010

I follow Phil Plait of Bad Astronomy on Twitter (@BadAstronomer), and earlier he stirred up a bit of a hornet’s nest regarding what, exactly, constitutes a decade.  This is a little ironic as I had just discussed this very issue a couple days prior with a friend.

The question being asked: when does a decade begin or end?  Though Phil has presented his own argument, I would like to make my own view clear.

First, we should define what a decade is.  A decade is a period of 10 years, in the same way that a century is a period of 100 years and a millennium is a period of 1000 years.  Some of the confusion arises from what we saw on January 1st, 2000.  Many were hailing the beginning of a new millennium, and the start of the 21st century.  The problem was that the 21st century didn’t actually start until January 1, 2001 because there was no year 0.  Similarly, some argued that “the new millennium” didn’t start until January 1, 2001.

I will help clarify this issue of decades by using the year 2000 as an example.  Since a millennium means only a period of 1000 years, it was actually reasonable to say that January 1, 2000 was the beginning of a new millennium, because there is nothing in the definition of a millennium that says when our starting point must begin.  A person may justly say that a new millennium had begun, so long as they acknowledged that the beginning of the last millennium in their context was January 1, 1000.

The question of whether it was the 21st century, though, is a more rigidly defined problem.  When we talk of whether it is the 21st century, the 11th century, or the 5th century, we are referring to how many centuries have occurred since the start of the common era (or, as the religious might call it, Anno Domini).  So given that there was no year 0, it could not possibly have been the 21st century since the start of the common era unless it was January 1, 2001 or later.
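To make the “no year 0” arithmetic concrete, here is a minimal illustrative sketch of my own (not from any source): since the common era begins at year 1, century n spans years 100(n−1)+1 through 100n, so a year’s ordinal century comes out of simple integer division.

```python
def ordinal_century(year):
    """Return which century of the common era a given year falls in.

    There is no year 0, so century n spans years 100*(n-1)+1
    through 100*n (e.g. the 20th century is 1901 through 2000).
    """
    if year < 1:
        raise ValueError("the common era has no year 0")
    return (year + 99) // 100

print(ordinal_century(2000))  # 20 -- December 31, 2000 still belongs to the 20th century
print(ordinal_century(2001))  # 21 -- the 21st century begins January 1, 2001
```

The same division by 10 instead of 100 gives a year’s ordinal decade since the common era began.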

How does this relate to decades?  First, we have the more general view.  January 1, 2010 is indeed the beginning of a new decade, since nothing in the definition of decade necessarily implies that a decade begin on a year ending in 1, 2, 3, etc.  To make it more clear, we might say that it is the end of the 00’s decade, as we have colloquially referred to decades in terms of the number in the tens slot.  So December 31, 2009 was the end of a decade in the same way that December 31, 1969 was the end of a decade.

However, on the more particular view, one could not argue that January 1, 2010 was the beginning of the 202nd decade; only January 1, 2011 could be, given that there was no year 0.  (Counting from 1 CE, the 201st decade spans 2001 through 2010, just as the 21st century spans 2001 through 2100.)

Personally, I think the short-hand of referring to periods as the 60’s, 70’s, 80’s, 90’s, or “Aughties” is not very fruitful.  It doesn’t save us any appreciable time, but does add ambiguity (since there have been 20 different decades of the 60’s thus far).  For clarity’s sake, I think one should be explicit in stating their starting point.  For example, there is more clarity in saying that December 31, 1999 was the end of the 90’s, since we have specifically stated what our starting date is.  I choose to scrap such nomenclature altogether and stick only to counting decades since the common era began (I would say, for example, that 2010 is the last year of the 201st decade), but alas I suppose I’m in the minority there.  I find referring to decades by the number in the tens slot to be a tad short-sighted, but I leave it to the reader to determine whether this is necessary short-hand given our extraordinarily short lifetime when compared against the scale of the universe.

The inherent weakness of cultural and subjective relativism

December 30, 2009

One of the challenges in critical thinking is overcoming barriers to our way of thinking.  These barriers can mostly be categorized as follows:

1) Barriers erected because of how we think

2) Barriers erected because of what we think

Of interest to me today are two ideas that fall into this latter category.  When we say that something is “right” because it is right in a culture, we are being cultural relativists.  When we say that something is “right” because it is right in the mind of a person, we are being subjective relativists.  These two hindrances are both saying the same thing: there is nothing intrinsically right or wrong about actions; what is right is only a product of the culture and what we think is correct.

In a liberal, democratic, pluralist society, this sort of position seems quite appealing.  We want to consider everyone’s opinions from all cultures, and we don’t wish to enforce our opinions on someone else just because we hold a different opinion.  Still, I think it is very clear that just because someone holds an opinion does not make it right, and we can balance arguments amongst different cultures (or subjective perspectives) by comparing the reasoning that supports these arguments.

The most classic example of a challenge to relativism is that of the Holocaust in World War II.  If cultural or subjective relativism were true, then we are not in a position to challenge the morality of what the German Nazis did to the Jewish people and those who supported them.  After all, exterminating Jews was right in the mind of Hitler, and it was deemed appropriate by the German culture of the time.

Similarly, the cultural relativist would say that we have no grounds to challenge the morality of slave-holders, since they were largely following convention at the time.

And yet our intuition screams to us that neither the German Nazis nor the US slave-holders were behaving morally, and you would be hard pressed to build a case that they were doing so in light of all the rights and liberties we consider so important today.

The reason I bring this up today is a discussion I was involved in about Iran’s approach to civil liberties.  Here are some statements from Monday that are relevant:

Iran has barred single women from working for a state firm that operates a huge gas field and petrochemical plants on the shores of the Gulf, the Fars news agency reported on Monday.

“Oil Minister (Masoud Mirkazemi) has emphasised that single women should not be present in Assalouyeh,” the deputy director of the Pars Special Economic Energy Zone Company, Pirouz Mousavi, said.

It should be noted that Iran has formally said that single men will be unable to work in the future as well, although it is interesting that the phasing out of single women has happened first.  I imagine this can be explained by two factors: 1) Iran’s typically misogynist theocracy, 2) Most of the workers in the Iranian oil fields are single men, and sending them all home would be crippling to the industry.  The basis for this policy is a supposed religious imperative that all people must get married and start a family.

My objections to this action were immediate.  Not only does it hamper religious and personal freedoms of all people who wish to be irreligious or unmarried (as is, unfortunately, to be expected in Iran), it has also been applied in a way that targets women first and foremost.

The counter-objection I was met with was that of cultural relativism.  “Sure, that infringes upon our ideas of rights and liberties, but they have a very different definition of those in Iran and it’s none of our business.”  To this I reply: utter nonsense.

I do not believe in relativism for the reasons I outlined above.  While I do not wish to compare the methods through which this sort of relativism is carried out (there’s admittedly a huge difference between killing Jews and firing single women), it is a very core idea of relativism that I wish to challenge: the idea that what is truly right can only be determined by the individual or culture in which an action is taken or a belief is formed.  To me, an argument using cultural relativism to support Iranian theocracy is just as weak as an argument using cultural relativism to support German Nazism.

There is also a slightly obscene unspoken suggestion here, as well, which is that all Iranian citizens support such a policy.  The near-revolution that surrounded the Iranian election earlier this year showed a strong counter-current of young people, particularly young females, who were aspiring for a more secular, liberal democracy.  It has been countered that if these people are really unhappy, they should just leave Iran, but I think that is far too simplistic.  Iranian women are generally far under-educated, far poorer, and far behind the cultural zeitgeist of basically every developed country.  Not only would the culture shock be tremendously difficult to overcome, but an Iranian refugee would also lack the resources to make such a move happen.  Imagine the time, effort, and resources it would take in your own life to pick up and move to another country, and that’s not even accounting for the tremendous difficulty of leaving family and friends behind.

Let’s broaden the scope a little bit.  In Iran, an LGBT individual can be sentenced to death.  Same goes for an apostate, or someone who has left the Islamic faith.  Does any person who falls into either of these two categories deserve the punishment just because it is right in Iran?  Any person I would consider sane would say no (though they may be insane for any number of other reasons!).

I have nothing but respect for a multi-cultural and diverse community.  I encourage every person to speak their viewpoint loudly with the understanding that they will have to defend that viewpoint.  I believe strongly in a free market of ideas, where everyone can put forth their views and the best will ultimately win out, no matter how arduous the process may be.  However, let’s not equate listening to ideas with accepting ideas as true or right.  Certainly some areas are so grey that finding an absolute is difficult (and sometimes unimportant), but there is no room for subjective or cultural relativism when we’re talking about the absolute, universal, and fundamental rights of every single person on this planet.

Dreaming of free will

December 28, 2009

Dreaming my way in

As of late, I’ve been getting a lot better at lucid dreaming.  Essentially I use the Mnemonic Induction of Lucid Dreams (MILD) technique, where I try to focus on sign-posts that indicate I’m dreaming.  Before I fall asleep, I fix my attention on a pattern of thought or topic from my waking state, with the intent that recognizing it in a dream will trigger lucidity.  As an example, last night I woke up with a vague sense of having dreamed about free will, so before I went back to sleep, I paid special attention to this subject with the intent of triggering a lucid dream as soon as it came upon me in my sleep.

It worked quite well; this was perhaps my most successful attempt yet.  Of course, the mind can work in a rather peculiar fashion when we’re dreaming, so it’s important to re-investigate what we were dreaming of to know if there’s anything of any use that was discovered.

I ran across quite a train of thoughts in my mind last night, and, as I’ve already suggested, it centered around the notion of free will.  This is an interesting philosophical topic to me, a topic on which I’ve spent a great deal of energy attempting to understand more clearly which arguments are well-reasoned and which ones are wishful thinking.

What are the arguments?

It became clear to me about 6 months ago that determinism poses a real challenge to free will.  Determinism is the view that everything that has happened or is going to happen was determined by a causal chain of prior events.  So if I decide to have a glass of milk, a determinist argues that I could only have chosen that option given my past experiences.

Our common notion of free will, on the other hand, says that we make choices all the time which are completely of our own volition and are in no way determined.  From these two views, we have a variety of philosophical positions:

– Incompatibilist determinism: free will and determinism are not compatible, and determinism is true
– Incompatibilist libertarianism: the two are not compatible, and free will is true
– Compatibilism: the two can jibe together

I’ll present a bit of a jumbled version of my argument, but hopefully it will make some sense.

Determinism vs indeterminism

Beginning with this view of determinism, I ask in what way would it make sense for our actions to NOT be determined.  When I “choose” to drink a glass of milk, it is for a variety of reasons.  Maybe I was just thirsty and wanted to try something new, maybe I’ve had milk regularly for years and enjoy it more than other drinks, and maybe I felt the milk would go more appropriately with the toast I was about to have.  A person arguing for some vague sense of free will (or, more precisely, that the world is indeterminate) would then respond that you could have chosen anything else, and it is this objection I want to address first.

The Scottish philosopher David Hume spotted what is wrong with this argument a few centuries ago.  The indeterminist is saying “You made choice A, when you could have made choice B.”  A determinist replies that this sort of rationalization only arises by positing that you were someone different at the time you made your choice than the person you really were.  So what the indeterminist is really saying is as follows:

“If I were someone different at the time of the decision, I would have chosen differently.”

Undoubtedly this is true, but it doesn’t escape determinism.  The simple fact is that you weren’t someone different at the time, which is why you chose what you did.  Only because of all your past experiences, and your current situation, did you choose what you did.  So, by my light, I would say that determinism appears to be true.

What does “free will” really mean, and can it be compatible?

To look at this a little further, though, we need to carefully analyze precisely what we mean by free will.  Again, the argument typically goes that the world is indeterminate and therefore we have free will (or vice-versa).  I ask this question: if our actions are not determined by a causal chain of past events, how is it that we make our decisions?  A simplistic way of looking at it: if we have no reasons or experience to prefer one option over another, what is left?  The answer, typically, is chance, but it is unclear to me how this is any closer to the free will we’re seeking to establish.  If what we end up deciding is indeed random, in what way can we say that we’ve really made our own decision?  Indeed, all that happened was we rolled the intellectual dice and ended up with a particular outcome over which we had no control.  It seems clear to me that this sort of contra-causal (i.e., without cause or against cause) free will is basically nonsensical.

Is there any sense, then, in which we could say that we do have free will?  The only meaningful way in which I can define free will, in light of our previous inquiries, is that free will is having the ability to follow through in the way that has been predetermined without coercion from an external source.  In other words, there is no sort of malignant deity or something to that effect that impinges on our making choices based on our causal chain of experience.  If this is how we define free will, then I would consider myself a compatibilist.  Any other definition of free will is, to me, a complete non-starter, and would leave me in the position of an incompatibilist determinist (that is, I side with determinism and find it incompatible with this definition of free will).

The case still unclear

Certainly there are some challenges to this position, beginning with determinism.  We now know that the microscopic world is a quantum one, where the relationship between cause and effect begins to weaken and there is a high degree of randomness that takes place.  In physics, we have seen that, despite this quantum randomness at the microscopic level, there is still a great deal of regularity at the macroscopic level.  We haven’t discarded Newtonian mechanics at the macroscopic level just because we’ve learned quantum mechanics at the microscopic level.  So this quantum challenge was typically written off as something that really only applies at small scales.

This issue is further complicated by the growth of neuroscience, the science of the brain.  Earlier this year, an article in the Journal of the American Psychological Society showed that subjects believed they had made a conscious decision only after the decision had already been made.  In other words, the decision may have occurred at a sub-conscious level, then been artificially reflected as a conscious decision.  Though the mechanism of this is not yet understood, it does raise serious questions about whether or not we could really say that we’re exercising free will.

Countering such evidence, though, was a paper in 2006 from the Journal of Integrative Neuroscience.  Edwin Lewis and Ronald MacGregor argued that determinism, particularly in the brain, was a faith-based position. It seems unclear to me which analysis will ultimately hold true.  Perhaps the world is indeterministic AND we don’t have any free will.

Relation to criminal justice

One of the reasons why I got so heavily interested in the free will vs determinism argument is that it has a very real impact on criminal justice.  If, as a determinist would argue, our actions are completely caused by other factors (such as our past experiences, our environment, our genetics), then we may need to re-evaluate the way in which criminals are charged.  Going back to an argument from lawyer Clarence Darrow in 1924, who was defending two young men who had pleaded guilty to the murder of a 14-year-old boy:

For over twelve hours Darrow reminded Judge Caverly of the defendants’ youth, genetic inheritance, surging sexual impulses, and the many external influences that had led them to the commission of their crime. Never before or since the Leopold and Loeb trial has the deterministic universe, this life of “a series of infinite chances”, been so clearly made the basis of a criminal defense. In pleading for Loeb’s life Darrow argued, ” Nature is strong and she is pitiless. She works in mysterious ways, and we are her victims. We have not much to do with it  ourselves. Nature takes this job in hand, and we only play our parts. In the words of old Omar Khayyam, we are only Impotent pieces in the game He plays Upon this checkerboard of nights and days, Hither and thither moves, and checks, and slays, And one by one back in the closet lays. What had this boy had to do with it? He was not his own father; he was not his own mother….All of this was handed to him. He did not surround himself with governesses and wealth. He did not make himself. And yet he is to be compelled to pay.”

The question is this: if we are not the ultimate cause of our actions, can we be held responsible for them?  Many argue that, in light of determinism, we would not be able to hold anyone accountable for their actions and, thus, we would be permitting a new society of decadent violence protected under the guise of determinism.  No crime, no matter how offensive, would be deserving of a punishment since there was no act of volition.  Indeed, the “intent” of an action is of key importance in the courts.

My position is, however, a little bit different.  Given the arguments for determinism, I think it undoubtedly is important to focus our legal spotlight on rehabilitation, not retribution.  Our goal as a society should be to change the environment and circumstances under which criminals come to be so as to take a proactive approach against crime, rather than a reactive approach involving punishment.

With that said, there is still a good reason for having retributive sentences: they act as a deterrent.  Take, for example, the crime of murder: one of the mitigating factors in preventing someone from murdering is their understanding that they will be jailed for life (potentially) for what they are about to do.  If we remove such penalties, that is just one less obstacle in the causal chain leading to murder.  For this reason, I think keeping lengthy sentences for a crime like murder is completely justified.

Relation to Theology

The question of free will and determinism is, of course, tremendously important to religion.  In Christian theology in particular, the presumption of free will in humans is an especially frequent counter to many objections against the faith.  Take, for example, the Problem of Evil, which can be broken down into a number of different categories.

The Logical Problem of Evil says that there is something logically inconsistent in the notion that there is an omnipotent, omniscient, omnibenevolent being who creates a world where evil occurs.  The typical counter from Christian apologists is that there is no logical contradiction here; some evil may have been necessary to achieve other goals (like greater spiritual development).

This leads to the Empirical Problem of Evil.  The argument here is not that there needn’t be any evil at all, but that there need not be so much.  As far as I’m aware, there is no sufficient counter-argument to this point, and it would be hard to develop one.  This argument relies on a largely subjective notion of what is enough evil, and what is too much evil, and thus is not the strongest (but likely the least controversial) tack to take.

Within these two categories, the type of evil can be broken down into yet another two categories.  First, we have the Problem of Human Evil.  Humans certainly encounter a great number of hardships, and from the religious perspective (in particular, the Christian perspective), this suffering could be the fault of the person experiencing it: they’ve failed to live a good life by their own hands.  Or this suffering could come from any other human who likewise is failing to live a righteous life.  The question from the non-religious is this: if god has all the qualities claimed of him, why can’t he prevent us from doing evil to each other?  The answer from the Christian apologist is a simple one: god was incredibly kind in giving us free will, and evil only arises from us misusing this gift.  The fault, then, is not to be placed at god’s feet (indeed, he is now even better than we had supposed!), but rather to be borne on the backs of the beings he created.  Surely we wouldn’t want to turn down free will, right?

So you see, very clearly, how free will is critically important to solving this problem.  If it could be demonstrated that there is no free will, then perhaps the most popular theodicy would go right down the toilet.  Given that I think the notion of free will being discussed in this sense is utterly non-sensical, I don’t give it much weight, but let’s briefly delve a little further.

Why the free will theodicy makes no sense

Suppose that free will does account for the Problem of Human Evil.  I mentioned above that there is a second category that is, as of now, still unexplained which we will turn to.  This is the Problem of Natural Evil.  Certainly a great deal of evil, hardship, and suffering can be attributed to the way we treat each other and the “choices” we make as individuals, but there exists yet another type of suffering over which we have no control: nature.  We live precariously on a single pale blue dot in the midst of a vast universe; a blue dot that can support life some of the time in some of its places.  If the Earth is too hot or too cold, it doesn’t make for good living amongst us homo sapiens sapiens.  Even if we find places to live where temperature, for example, could be considered fairly consistent, what are we to do about every storm that thunders our way?  Amazingly, the victims of Hurricane Katrina did not see this angle (indeed, many reports suggested that the faith of the victims had only grown stronger, which supports my personal hypothesis that religion and god are largely mechanisms of consolation).  Sure, god couldn’t interfere with the free will of the looters, but did the hurricane itself have any free will?  Couldn’t god have blown the hurricane away from the levees?  Unfortunately, the only response on this question is one that is not even respectable: as pastor John Hagee put it, New Orleans was being punished for its sinful past.  While one wonders about the morality of drowning babies for the crimes of their forefathers, I can’t help but think this works surprisingly well with the substitution of punishment that is epitomized in the story of Jesus Christ.

Free will in heaven

So far, I would say we have established a few principles in the mind of the Christian apologist.  Free will is an incredible gift which god would not deny us.  Given that free will, we humans inevitably cause or create some evil in our environment.  What can we say about heaven, then?  Heaven is often described as the most perfect place one can imagine.  Does such an imagination jibe well with what we humans do when we’re given free will?  In other words, can we have free will in heaven and still have it be completely free of evil?  As the Christian apologist has argued earlier, these are clearly incompatible properties, which explains the evil present on Earth.  So either we have free will in heaven and there is evil present, or we do not have free will in heaven and there is no evil.  If we take the former to be true, then heaven is certainly not perfect, and it’s unclear how it would be significantly better than Earth.  If we take the latter to be true, then I have to ask in what sense we are still us; without a free will to exercise, are we not reduced to the deterministic machines that the religion is trying to save us from?

A counter argument here is that heaven is so good that no one even wants to commit evil.  Sure, we could if we really wanted to, but no one has the desire to exercise their free will for the purpose of doing evil.  God has done such an incredible job with heaven, we must ask ourselves: why did he fail so miserably in the natural universe?

Determinism and First-Cause Arguments

There is an interesting way here in which Christian apologists like to have their cake and eat it, too.  They assert that we have contra-causal free will, and this is why there is evil in the world.  To assert this free will, they argue that god has given it to us, but to get to god, they typically use the Cosmological (or First-Cause) argument.  Loosely stated, it says that there is a cause for everything, and that if we trace the causal chain backwards, we inevitably arrive at some first cause which was itself uncaused.  If this were not so, we would have an infinite regress of events backwards in time, each event caused by some prior one ad infinitum.  So the answer to such a problem is that god is the unmoved mover, the uncaused cause.  He exists outside of space-time (though I don’t have the slightest clue of what “existence” is supposed to mean in this sense), and thus he was able to create time and space without logically requiring his own creator.  I won’t get into the Cosmological argument now (google for refutations of the Kalam Cosmological Argument, likely the strongest form), but what I want to draw attention to is this two-sided thinking.  To the Christian apologist, we know god exists because everything has a cause and there must necessarily be some uncaused cause, while at the same time not everything requires a cause because we exercise our free will.  In other words, the causal chain that is used to establish god’s existence is broken by the free will that god’s existence gives us.

Conclusion

The idea of free will has always been appealing to me.  As someone who would be considered a Classical Liberal (or, in the Western world, what we call a Libertarian), the idea that we have the free will and liberty to do as we so choose seems to be an important part of my philosophy.  That is why attacking my own ideas of free will has proven so painstaking, but ultimately necessary.  I can return to my Libertarian philosophy, for the time being, by preferring influence and interaction within a diverse free market over the typically more mono-culture government.  Still, it is clear that the truth of free will and determinism has an inescapable impact on our ideas of justice and, indeed, our religious sensibilities, and so I look forward to continued scientific and philosophical enlightenment on an oft-forgotten but ever-important question.

Religious Superstition taken to the extreme in Africa

December 5, 2009

For about 6 months now, the Center for Inquiry has been battling against a practice that seems rather archaic here in North America: witch burning.  Led by Norm Allen, the executive director of African Americans for Humanism for CFI, the effort has produced a growth in skepticism, particularly amongst the youth communities.

News comes today, however, of a lawsuit against CFI’s Nigerian representative, Leo Igwe.  In particular:

The suit, scheduled for a hearing on Dec.17, is seeking an injunction preventing Igwe and other humanist groups from holding seminars or workshops aimed at raising consciousness about the dangers associated with the religious belief in witchcraft. The suit aims to erect a legal barrier against rationalist or humanist groups who might criticize, denounce or otherwise interfere with their practice of Christianity and their “deliverance” of people supposedly suffering from possession of an “evil or witchcraft spirit.” The suit also seeks to prevent law enforcement from arresting or detaining any member of the Liberty Gospel Church for performing or engaging in what they say are constitutionally protected religious activities. These activities include the burning of three children, ages 3 through 6, with fire and hot water, as reported by James Ibor of the Basic Rights Counsel in Nigeria on August 24, 2009. The parents believed their children were witches.

Hopefully the lawsuit will be laughed out of court.  Religious superstition should not be permitted to override a human being’s right to live, and kudos go to CFI for their continued battle, in often hostile environments, to spread science and reason to those who need it most.

Why I don’t celebrate Christmas

December 2, 2009

“‘Tis the season to be jolly” the popular Christmas carol begins.  Indeed, Christmas day is one of the most enjoyed holidays of the year, from the gift-opening on Christmas morning to the family feasts that fill our gullet till our belts need loosening.  So what could possibly prompt a person to walk away from such a celebration?

For me, it has been a fairly gradual process.  The magic of the season undoubtedly dissipates for all of us as we grow older, but my reasons are about much more than a process of maturation.

First, the religious background of Christmas.  As an atheist, I have no religious affinity to Christmas.  The certainty with which the birth of Jesus is presented is not matched by the reliability of the New Testament.  The celebration itself, held on December 25th, is misplaced: the account of Luke suggests that if Jesus did actually exist, he was probably born in the spring or summer.  Originally, Christmas was usually celebrated at the beginning of January.  So celebrating the day as the birth of Christ seems, to me, a mischaracterization of history.  Even if such an account were accurate, I wouldn’t celebrate Christmas for the same reason I don’t practice any other religious holidays: I don’t believe the tenets on which they are based.  The best holiday would be one that we could all celebrate no matter our religious affiliation (or lack thereof).

Certainly Christmas has its less Christian aspects.  The Christmas tree is a pagan concept.  The story of Santa Claus is undoubtedly secular in nature.  The animated Rudolph the Red-nosed Reindeer was an annual favourite of mine as a child, and still holds a certain grip on my unfortunately oft-nostalgic mind.  Still, tradition and a good story are not enough to keep me celebrating a holiday.

I don’t exchange gifts, either.  The social pressure to keep this practice up is tough to escape.  Still, I see a lot of good reasons for not exchanging gifts, and have thus far been able to resist such pressure.  There is the more practical aspect: if I need to get something, no one is likely to get me precisely what I need.  Further, I shouldn’t really be getting something just because it’s a day on which you give gifts.  If I need something and reach out to you at any time of the year, there’s no reason why a gift can’t be given then.  If I don’t need anything, you needn’t buy me a gift.  Truth be told, neither of us really needs anything.  I think here of the argument Peter Singer has often put forward: spending lavishly on the haves is basically an unethical or immoral action against the have-nots.  If there is someone who needs something at Christmas (or any time of the year), it almost certainly is not you or I.

What can be enjoyed during this season?  Well, I say there are two practices I find unobjectionable about the holidays, and we need not wait for Christmas to put them to work:

1. Take a moment with friends and family,

2. Give to those who are in need.

Reason’s Greetings!

Deepak Chopra: “Skeptics trust in nothing”

December 2, 2009

The Huffington Post is a publication that I came across quite a bit in 2008.  Though it had many interesting articles during the 2008 US election, its leftish slant still left a lot to be desired.  In particular, the Huffington Post has run articles and op-eds supporting a lot of junk medicine, including homeopathy and the anti-vaccination perspective.  So it was no surprise when they published an article in which Deepak Chopra, a true master of “woo,” complained about how skeptics are, in essence, pathetic nihilists who’ve never been ahead of the curve.

First, it stands to reason that Deepak Chopra has a very understandable reason for disliking skeptics: Chopra firmly believes in the New Age mystic principle of what he calls “Quantum Healing.”  Put simply, it says that the mind can heal the body.  Pay no heed to any medicine; you have all that you need sloshing around in your skull.  Chopra makes use of the randomness seen in quantum mechanics at the sub-atomic level to suggest that there is also something going on between the mind and the body which is beyond our senses.  Touting this principle has led to a fair amount of success for Chopra, who gets his fair share of recognition for his philosophical outlook.  The only catch: there is, of course, no evidence for such a thing.

It is here where the rubber meets the road.  Skeptics such as myself pride ourselves on a certain standard of evidence; anyone who ignores claims with significant supporting evidence is being dogmatic, something certainly frowned upon within the skeptic community.  Indeed, the skeptic who acknowledges strong evidence quickly is the strongest of the bunch.  Chopra, though, sees things quite differently:

It never occurs to skeptics that a sense of wonder is paramount, even for scientists. Especially for scientists. Einstein insisted, in fact, that no great discovery can be made without a sense of awe before the mysteries of the universe. Skeptics know in advance — or think they know — what right thought is. Right thought is materialistic, statistical, data-driven, and always, always, conformist. Wrong thought is imaginative, provisional, often fantastic, and no respecter of fixed beliefs.

Here I think Chopra is heading down the completely wrong path.  Practical skeptics embrace the wonder of science, and have reverence and wonder for that which is unknown.  Our sense of awe is fully engaged by the natural, empirical world without ascribing it to something undemonstrated; it is enough for me to look into the Hubble Ultra Deep Field and feel an incredible sense of unity and solidarity simultaneously.  When I look into the sky, I see my connection to the universe and find it unfathomable in a way Carl Sagan might have described: I’m but a mote of dust in the wind, yet I’m made of the same starry stuff that everything else is.  There is real awe in such a moment which need not be answered by mystical claims.  I think, on the other hand, it is perfectly sufficient to give a natural account of what we know, and remain humbled by our ignorance of that which we don’t know.

Contrary to Chopra’s assertion, skeptics do not think they know in advance what right thought is: they simply believe what the strongest evidence indicates.  When the evidence changes to support a different opinion or hypothesis, the skeptic community goes along with it.  As an example, I think here of the large number of skeptics who came to accept Global Warming due to a preponderance of evidence in support of it.  Interestingly enough, the recent emails leaked from the Climatic Research Unit in the UK are sparking new arguments in the skeptic community about the validity of Global Warming.  So clearly, skeptics are open to new ways of thinking about things and are, in this particular case, very non-conformist.

The problem here is not one of being ideologically driven; it is a question of standards of evidence.  By all means, a skeptic will accept a principle like quantum healing, but you must be able to demonstrate such a thing.  It is only by providing positive evidence for our claims that we are able to separate the real from the non-real.  An idea that is wonderfully imaginative is only useful if it advances our understanding of the way the world really works; whether it is the product of fantasy or not is irrelevant.  So here comes the final blow from Chopra:

So whenever I find myself labeled the emperor of woo-woo, I pull out the poison dart and offer thanks that wrong thinking has gotten us so far. Thirty years ago no right-thinking physician accepted the mind-body connection as a valid, powerful mode of treatment. Today, no right-thinking physician (or very few) would trace physical illness to sickness of the soul, or accept that the body is a creation of consciousness, or tell a patient to change the expression of his genes. But soon these forms of wrong thinking will lose their stigma, despite the best efforts of those professional stigmatizers, the skeptics.

I hope it is immediately clear how flawed such an argument is.  Chopra feels that if he can just show one instance where people thought wrongly in the past, it can support his assertion that people are thinking wrongly now.  Yet this method of equivocation is invalid here, since there are two separate claims.  The reason physicians came to accept that the mind played a role in treatment (notably the placebo effect) was because the preponderance of evidence supported such an assertion.  Case studies were put forth in medical and psychological journals that showed the relationship between a positive outlook and positive outcomes.  Of course, the evidence didn’t demonstrate that one could be healed just by having the right state of mind, but that doesn’t stop Chopra.  No, it is we skeptics who hold evidence in the highest regard who are of wrong thought.  If only we would stop subjecting his ideas to the same burden of proof that allows us to ensure the highest degree of reliability in our personal beliefs, we’d get to the same “ahead of the curve” type of thinking that Chopra is espousing.

By all means, skeptics will ride the wave at the front of the curve side-by-side with Chopra, but we’ll wait until we’ve got good, reliable reasons to do so first.  It is in that which we trust: that the best evidence will ultimately rise to the top, and by proportioning our beliefs to the amount of evidence that supports them, we can be reasonably sure that our actions are justified.

Evolution, Morality, Atheism, and Invoking Godwin’s Law

November 29, 2009

Undoubtedly one of the most frustrating arguments I engage in when discussing atheism is that of how one finds (or knows) one’s moral compass. Almost every religious apologist (William Lane Craig being a prime example) seeks to undermine natural morality because it supposedly leaves only one alternative: religion and, more specifically, god provide us our morality. This claim, I think, is false on its face. Looking at most cultures, we see morality that converges on a common principle: do unto others as you would have them do unto you. I think the general perception in the West is that this sort of wisdom was given to us divinely by Christ, but I find it more compelling that the Chinese philosopher Confucius espoused this principle 500 years before Christ, without resorting to god or organized religion to develop such an idea:

“Adept Kung asked: “Is there any one word that could guide a person throughout life?”
The Master replied: “How about ‘shu’ [reciprocity]: never impose on others what you would not choose for yourself?””
– Analects XV.24

Still, the argument often goes that not believing in god but believing in evolution will lead to doing things we would consider immoral. To support such a point, it is often said that Hitler was an atheist, and that his practice of Eugenics (artificially selecting against the “lesser” individuals so that only the “finest” humans reproduce, with the purpose of creating a “better” human species) was directly tied to his belief in evolution. Thus, it follows that atheism and evolution can be seen as intellectual positions that in some way endorse or promote immorality.  Personally, I am of the opinion that those who use Hitler as a key part of their argument are not only mischaracterizing the position they are challenging, but also trivializing the real evil committed by Hitler.  Interesting though this argument may seem (and perhaps even plausible to some), I think a careful look at facts, not hyperbole, best explains why it is simply incorrect.

First, let’s speculate a moment on whether or not Hitler believed in evolution. One wonders why, if Hitler believed in evolution and was an atheist, the Nazis adhered to the following guidelines in book banning:

When Books Burn: Lists of Banned Books, 1933-1939

“6. Writings of a philosophical and social nature whose content deals with the false scientific enlightenment of primitive Darwinism and Monism”

“c) All writings that ridicule, belittle or besmirch the Christian religion and its institution, faith in God, or other things that are holy to the healthy sentiments of the Volk.”

Hitler’s literary work, Mein Kampf, also has a similar leaning:

“Hence today I believe that I am acting in accordance with the will of the Almighty Creator: by defending myself against the Jew, I am fighting for the work of the Lord.”

“What we must fight for is to safeguard the existence and reproduction of our race and our people, . . . so that our people may mature for the fulfillment of the mission allotted it by the creator of the universe.”

“The undermining of the existence of human culture by the destruction of its bearer seems in the eyes of a folkish philosophy the most execrable crime. Anyone who dares to lay hands on the highest image of the Lord commits sacrilege against the benevolent Creator of this miracle and contributes to the expulsion from paradise.”

The faith of Hitler has long been a matter of debate because each side assumes that if it can put Hitler on the other side, it has won the argument.  The general consensus among historians is that Hitler was probably an atheist who manipulated the religious devotion of the German citizens, while presenting himself as a sort of Messiah, to achieve his goals.  Some consider that a “win” for theistic morality, though I think it speaks more to the manner in which adherence to irrational religion can be manipulated; in other words, it speaks to the element of danger involved in forming beliefs that aren’t grounded in reason or science.

But let’s consider exactly what is being suggested here: if person P believes claim C and claim D, does it follow that every person who believes claim C also believes claim D? On the contrary, I think it is clear that we develop our beliefs for a wide range of reasons, and this often creates an asymmetry between multiple claims. It doesn’t take much work for us to discover some beliefs at which we arrived for reasons we would discard immediately for other claims. In the context of this conversation, I think it can be said that we should assess claims not by comparing the actions of others who believed that claim, but by assessing the claim itself. Thus, if we want to determine whether atheism or evolution leads to immorality, we should be assessing atheism and evolution, not the behaviour of those who professed belief. After all, it is perfectly possible for us to take factual information and distort it to support irrational actions or conclusions; in fact, this is the usual religious explanation for why so many bad things, historically speaking, have been done in the name of god.

So now, let’s actually assess whether or not it is possible to be moral without god if we are the products of evolution rather than a divine plan.

It is pretty obvious that most atheists are perfectly able to reconcile their lack of belief with a desire to be good to other people. That is a simple fact borne out by interacting with people of such disbelief. Being a young, white male living in the Western world, I am a part of the fastest-growing group in religious polling: those who check “None of the above.” Yet we see that people we know to be atheists, including closet atheists, are certainly capable of functioning morally in society.

Of course, it is completely possible that these people are acting irrationally; perhaps their lack of belief really does mean that they should rape, pillage, and plunder, but they have been coerced by the religious aspects of society to behave in what is typically considered a moral manner. Nonetheless, I think understanding our evolutionary background doesn’t hinder us from being moral; it gives a real account of why we are good to others.

I will now pose a hypothesis; I call it a hypothesis simply because I’m not current enough on the literature to claim authority, though I know much of this hypothesis has been advanced successfully in the past, and I believe that evolutionary psychology, neuroscience, and anthropology support such a hypothesis.

Many moons ago, we were tribal animals. With scarce resources and high competition, we learned that we could be more successful hunter-gatherers if we pooled our skills with others and worked as a team. As one group formed, so did another, and instead of a battle of the individual, we had battles between tribes. This was the first step in developing our moral sense. Within each group, a certain type of conduct was necessary; failing to behave in a way that was condoned by the rest of the group would result in being ousted, leaving you to fend against organized groups as a lone individual. I would consider this the beginning of codified rules, as informal as they may have been. Having established that a group code would have inevitably formed, we must wonder how groups came up with rules we generally consider to be good or moral. Here, I think the answer is obvious: any actions that are detrimental to the group’s success would be considered wrong. So murder within the group goes out the window; killing off your group members would defeat the purpose of forming the group in the first place. We can also extrapolate our desire to remain in the group to a more individual perspective as well: staying within the group will result in our personal success; we are more likely to stay in the group if none of the members of the group dislike us; and the members of the group are unlikely to dislike us if we do nothing to warrant it. Initially, this was likely a reactive position: we weren’t able to reason it out in advance, but discovered it after a multitude of bad “social experiments.” It wasn’t until much later that we were able to determine such things proactively using our limited intellect, prior to taking action.

So within the group, I think we’ve established that there were good reasons for being moral. A common question is: why would we extend the same rules to those outside our group, particularly to those who are not of our lineage? The answer here is actually a little disappointing. First, we don’t do this very well; humans are very ethnocentric and often cave to in-group thinking. Still, our social nature, established in more primitive times, encourages us to extend morality and general decency to those with whom we interact. The field of neuroscience, in which the brain is scanned and images are generated highlighting areas of activity, has shown repeatedly that we respond very well to performing altruistic acts: when we do something for someone else, the areas of the brain associated with satisfaction and pleasure “light up.” We see similar behaviour in other social species as well.

Aside from that, we have some very real evidence that tells us that behaving in what is considered a moral manner is a beneficial plan for us to be successful, and not just pleasing to us personally. Many game theory experiments have shown that a Tit for Tat method, or generally being altruistic until you get burned, will, over the long haul, provide you with greater success, even if we define success in terms of acquiring resources. I think we can go out on a day-to-day basis and experience the same thing: people will generally do better by you if you do better by them. Certainly, there are those who “get away with it”, but that is rarely a good plan for long-term success. What is often called Karma is nothing other than our inability to escape from our past misdeeds; it is the deserved punishment from society for abandoning what is ultimately the best course of action for all of us.
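The Tit for Tat result can be made concrete with a small simulation. The following is a minimal, illustrative sketch of an iterated prisoner’s dilemma round-robin; the strategy mix, payoff values, and round count are my own assumptions (not from any particular published tournament), chosen to show the general pattern: a strategy that cooperates until burned tends to outscore unconditional defection over many interactions.

```python
# Iterated prisoner's dilemma round-robin: a minimal sketch.
# Standard payoffs: mutual cooperation beats mutual defection,
# but defecting against a cooperator pays best in a single round.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def grudger(opponent_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in opponent_history else "C"

def play(strat_a, strat_b, rounds=200):
    """Total scores for two strategies over repeated rounds."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(moves_b)  # each side sees only the opponent's past moves
        b = strat_b(moves_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

strategies = {
    "tit_for_tat": tit_for_tat,
    "always_defect": always_defect,
    "always_cooperate": always_cooperate,
    "grudger": grudger,
}
totals = {name: 0 for name in strategies}
names = list(strategies)
for i, name_a in enumerate(names):
    for name_b in names[i:]:  # each pairing once, including self-play
        s_a, s_b = play(strategies[name_a], strategies[name_b])
        totals[name_a] += s_a
        if name_a != name_b:
            totals[name_b] += s_b

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

In this particular mix, the reciprocating strategies (tit_for_tat and grudger) finish at the top, while always_defect profits only against the unconditional cooperator. Changing the population of strategies changes the rankings, which fits the point above: reciprocal altruism pays off in a society of agents who remember how you treated them.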

So it seems clear to me that atheism and evolution are perfectly congruent with being a moral person. Our ability to survive and pass on our DNA is better advanced by behaving morally in society than it would be to behave immorally. Still, three issues seem to me to be on the table.

1. Haven’t we made mistakes in the past? Hasn’t our moral sense misguided us many, many times throughout history? If specific morality hasn’t been revealed to us divinely in a simple set of rules (say, the Ten Commandments), how do we determine what is right? Though I think we can argue that following the golden rule or perhaps John Stuart Mill’s “harm principle” is something we can intuitively agree on, I want to question whether the divine revelation of moral principles has actually been practiced, or whether it works any better. Is it not true that there are extraordinary disagreements on what constitutes morality amongst the religious, even within a particular sect? Is it not true that what is considered moral has changed over time even within a particular sect? I think here of slavery, which has significant support from the Bible but was abolished because we discovered it to be immoral. Or, more recently, I think the unfortunately gradual shift on the morality of same-sex attraction, particularly in liberal Christianity, shows the same thing. My point is that even the most steadfast rules asserted by the religious require interpretation and value judgements. We are always discovering more about what works best in a moral sense, and we are always re-evaluating whether our rules and goals are consistent with what we see to be true in natural reality. This is why morality, whether secular or religious, has always been evolving, and will continue to do so in the future. It is up to us as a species to recognize our commonality with all other Homo sapiens sapiens, and rationally determine the best moral approach for ensuring the best opportunity for success for all.

2. Was Darwin’s theory (though it should rightly be credited to Alfred Russel Wallace as well) really the catalyst for Eugenics? Though I have already argued that in the case of Hitler this is untrue, I would also say that Darwin’s theory had nothing to do with Eugenics whatsoever. Eugenics is the practice of artificial selection (as opposed to Darwin’s theory of natural selection), and this concept is something we understood long before Darwin published On the Origin of Species in 1859. In fact, it was how we were able to domesticate dogs from wolves. Where Darwin changed our understanding was in how the constant tinkering and refining of natural selection produced all species, whereas Eugenics suggests that artificial selection will produce a more successful single species. These are two very different things.

3. If evolution is real (which I believe the evidence supports), wasn’t Hitler still justified in practicing Eugenics? Though we may have disagreed with his stance on a personal basis, could it not be argued that he was attempting to do what is ultimately best for all of us? I say, emphatically, that he could not be more wrong. Our knowledge of nature and of ourselves is incredibly insufficient. We are not in a position to know what is best now or will continue to be best in the future; that is something that remains to be seen. Suppose, for example, that we killed all people with a certain genetic defect. Suppose, however, that a future mutation of that specific gene would allow us to survive through unprecedented changes in our environment. Exaptation and adaptation have shown us again and again how old parts can be cobbled together to serve new, important purposes. Indeed, I think the best way of ensuring the continued success of the human species is to maintain as diverse a gene pool as is reasonably possible; only by having many alternatives for nature to select from can we be certain that we can respond, in some way, to changing environmental conditions and stave off our own extinction.

It seems quite clear to me that the arguments put forth to discredit atheism not only do a poor job of achieving their stated goal, but also fail to put forth a positive argument for theistic morality (such as properly answering the Problem of Evil or Euthyphro’s Dilemma, or accounting for the misguidance given in many passages of divine text, whether it’s the New Testament, Old Testament, Koran, etc.). To those who are moral and feel that their morality was only discovered through divine text, I encourage you to give yourself more credit, but I have no quarrel with anyone who seeks only to be a good person. It is only when one attempts to take a moral high ground without justification that disagreements will arise, and hopefully that is something we can escape when we look at the evidence and arguments for each position through the shining light of reason.

The Value of Skepticism

November 27, 2009

When we look at the whole of human history, there is no doubt that we can find a plethora of beliefs that were held by a majority which proved themselves to be false upon closer inspection.  I think here of some very obvious examples: that the Sun went round the Earth, that illnesses could be relieved by bloodletting, or that enslaving another human being was a moral action.  In recent history, claims of paranormal or supernatural experience, dubious claims of alternative medicine, or even simple things like a fear of going into the basement, should give us pause to wonder if the beliefs on which we are taking our actions are reasonably true.  I doubt very seriously that any reasonable person wants to take an action that is unjustified, and I am sure you are in agreement when I say that we want to believe things because we have our own reasons for believing them, rather than accepting those ideas that are passed on to us by our social environment without critical analysis.

The best way to accomplish these two goals is to apply critical thinking and skepticism to all claims put forth.  Here I think of the 17th Century French philosopher René Descartes, who brought about a revival in skepticism when he sought to establish truths in life that were indubitable; that is, ideas that could not be doubted.  Though I think in practice we need not subscribe to a form of radical skepticism that would leave us in a state of complete lack of knowledge (a state of epistemological darkness), I think the best way to ascertain truth is to withhold assenting to a claim until we have been presented with supporting arguments or evidence that we consider sufficient.

This question of sufficiency is a tough one to answer, as it relies on a subjective view of what constitutes good evidence, and the grey area between certain knowledge and complete lack of knowledge leaves a lot of room for each of us to determine what is sufficient for us to believe.  Still, I think there are tools we can use to prevent ourselves from being deceived by other people and by our own internal biases and psychological make-up.

Namely, I think understanding what constitutes good science and what constitutes good reason will allow us to separate the wheat from the chaff.  When assessing arguments, it is important to be able to identify logical fallacies and poor evidence.  If we can use these tools to determine what arguments are good and what arguments are poor, we can proportion our beliefs to the evidence and reason that warrants such a belief.  By doing so, we can be relatively confident that our beliefs are our own, that they are reasonably true, and that any action we take based on those beliefs is justified and rational.
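The idea of proportioning belief to evidence has a well-known formal counterpart in Bayes’ rule. As a toy sketch (the numbers here are purely illustrative assumptions of mine, not data from any study), this shows how an initially skeptical credence in a claim should rise as independent pieces of supporting evidence accumulate, rather than jumping straight to certainty or staying fixed at zero:

```python
# Bayes' rule as a model of "proportioning belief to evidence":
# posterior = P(E|claim) * prior / [P(E|claim) * prior + P(E|~claim) * (1 - prior)]

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Credence in a claim after observing one piece of evidence."""
    true_branch = p_evidence_if_true * prior
    false_branch = p_evidence_if_false * (1 - prior)
    return true_branch / (true_branch + false_branch)

# Start skeptical: 10% credence. Assume each independent study is four
# times as likely to appear if the claim is true than if it is false.
belief = 0.10
for _ in range(3):  # three independent supporting studies
    belief = bayes_update(belief, 0.8, 0.2)
    print(round(belief, 3))  # credence climbs: 0.308, 0.64, 0.877
```

The point of the sketch is the shape of the curve, not the particular numbers: each piece of genuine evidence moves the credence part of the way, so belief ends up proportioned to how much evidence has actually come in.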