18 August 2014

Thoughts on the Ice Bucket Challenge

ALS is a nasty disease. Two people I greatly admire — Stephen Hawking and Jason Becker — have lived with the disease for many years. To most, it's not so kind. Years ago I trained a group of young people whose job it was to care for a wealthy man in his 50s who had ALS. I once asked one of the nurses if she thought she could live with the disease, and she bluntly said, "I would rather die".

If you're not familiar with the heavily viral "Ice Bucket Challenge", the gist is that someone gets a bucket of ice water dumped over them, pledges to donate to help ALS research, and then 'tags' several friends to repeat the challenge. Given its focus on social networking and its novelty, it's been hugely successful in raising money for ALS research. That's a good thing, right?

Yes and no. Raising money to fund treatment for a nasty disease is certainly a good thing. But as William MacAskill – a researcher in moral philosophy at Cambridge – pointed out, there's a problem of 'funding cannibalism'. He notes,
Because people on average are limited in how much they’re willing to donate to good causes, if someone donates $100 to the ALS Association, he or she will likely donate less to other charities.
It's also worth noting that in most of the videos I've seen, no mention is made of where a charitable donation is supposed to go, and I'd be willing to bet that plenty of people participated without making any donation at all, settling for the nebulous effect of "raising awareness". Raising awareness about ALS accomplishes little without action and, more importantly, long-term commitment.

I'm not in with the cynical crowd who asks, "What does dumping a bucket of ice water over your head have to do with ALS?", because the answer is, "About as much as running a 5k has to do with breast cancer." Lots of charities create novelty events to raise money, but as MacAskill argues, this isn't a good long-term solution:
[...] competitive fundraising ultimately destroys value for the social sector as a whole. We should not reward people for minor acts of altruism, when they could have done so much more, because doing so creates a culture where the correct response to the existence of preventable death and suffering is to give some pocket change.
[...] Rather than making a small donation to a charity you’ve barely heard of, you could make a commitment to find out which charities are most cost-effective, and to set up an ongoing commitment to those charities that you conclude do the most good with your donations. Or you could publicly pledge to give a proportion of your income.
These would be meaningful behavior changes: they would be structural changes to how you live your life; and you could express them as the first step towards making altruism part of your identity. No doubt that, if we ran such campaigns, the number of people who would do these actions would be smaller, but in the long term the total impact would be far larger.

For my part, I generally decline solicitations to give to charity, as I already sponsor a charity I think is important and my donations are budgeted out of my regular income. That's not to say I can't forgo a dinner out to make a one-off donation to a good cause, but I generally dislike doing so for the same reason: if I gave every time I was solicited, it would cut into the giving I've already budgeted.

The Ice Bucket Challenge has raised over $10 million for ALS research, and that's a good thing, but we should all take a moment to consider longer-term commitments to causes we find meaningful.

13 August 2014

The militarization of police is one of the most important civil rights issues America has ever faced

There's an old saying that when you're a hammer, everything looks like a nail. So what happens when, despite violent crime being at a 44-year low, the Department of Defense is allowing local law enforcement agencies to acquire its surplus tactical armament? The answer is precisely what happened in Ferguson, MO — police confronted unarmed protesters with rubber bullets and tear gas while brandishing body armor and assault rifles based on the M-4 Carbine. This is a picture of police in Ferguson, from a poignant article in Business Insider:
[Image: police in body armor with assault rifles confronting protesters in Ferguson, via Business Insider]
One could be forgiven for thinking that these men do not look at all like American police officers. Indeed, were it not for the "Police" sticker slapped on the front of their body armor, they could be mistaken for an arbitrary paramilitary force. Give police a soldier's armament, and you'll convince them that they are soldiers. And if they're urban soldiers, the streets become their battlefield, and everyone looks like a potential enemy combatant.

This isn't some idle, slippery-slope conjecture; it's happening. Heavily armed border police have killed 19 Mexicans for the crime of throwing rocks. The ACLU has documented seven people killed and 47 injured in unnecessary SWAT raids, which, as John Stossel notes, are now used primarily to arrest nonviolent drug offenders, with a big margin for error:
SWAT raids are dangerous, and things often go wrong. People may shoot at the police if they mistake the cops for ordinary criminals and pick up guns to defend their homes against invasion. Sometimes cops kill the frightened homeowner who raises a gun.
Stossel also argues, rightly, that this affects all of us — not just, as some conservatives would like us to believe, people behaving badly:
It took only [90 minutes] for authorities to deem [comedian Joe Lipari] a threat and authorize a raid by a dozen armed men. Yet, says Lipari, "if they took 90 seconds to Google me, they would have seen I'm teaching a yoga class in an hour, that I had a comedy show."
Lipari has no police record. If he is a threat, so are you.
But while this affects us all, in the wake of Michael Brown's death it's become clear how critical an issue this is for minorities in America. The aforementioned ACLU report found that minority communities were disproportionately targeted for these violent raids. In a sobering article for The Concourse, Greg Howard contrasts the violence against young black Americans with the hullabaloo over "open carry" laws championed by white ultra-conservatives:
There are reasons why white gun's rights activists can walk into a Chipotle restaurant with assault rifles and be seen as gauche nuisances while unarmed black men are killed for reaching for their wallets or cell phones, or carrying children's toys.
Our Constitution guarantees us the right to peaceably assemble, and the freedom of the press. This week in Ferguson, peaceful protesters and journalists alike were pelted with rubber bullets and tear gas by paramilitary police. The freedom of the press, in particular, exists to protect the interests of civilians by forcing transparency of the state. When journalists are threatened with arrest and assaulted by police, there is no one to hold police accountable for questionable actions they may take against civilians.

With surplus military gear still pouring into police departments, this trend is unlikely to change any time soon unless we stand against it. What can we do? We can call our elected officials, and partner with the ACLU. And just for the hell of it, I went ahead and created a petition at Whitehouse.gov, which you can sign here.


12 August 2014

Dan Dennett and William Lane Craig on the decline of the church

It's pure coincidence that these were released around the same time, but they both provide unique perspectives on the decline of the Christian church here in the West.


Dan Dennett: "Can churches survive the new transparency?"

William Lane Craig: "Reasons youth are leaving the church" (podcast)

11 August 2014

Update on current projects

You know how, in the past, I've said that I'm working on a book (or several)? Well, I'm working on two, and they're coming along briskly. One is simply a sort of "best of", which will be called Confessions of an A-Unicornist. I've picked, with your help, my best work from the past 4½ years of this blog, and I'm organizing it by topic and lightly editing the posts for flow. I honestly have no idea when it will be done, but I'm making a point to work on it a little bit every day.

The other book is on a topic that's been on my mind a lot lately, and I don't want to let out too many spoilers here, but I think it's a topic that all intellectually engaged non-believers will find themselves reflecting on sooner or later. It's my primary writing project right now, which is why the blogging has been slow and will likely continue to be. It's one of those projects that started out as a very short book, but research into the subject has greatly deepened my thinking on it.

As for the blog itself, if I can get some good content up here once a week, I'll be satisfied. Right now I want to keep focusing primarily on the books, and get 'em done! I haven't decided yet how I'll go about publishing, but one thing's for sure: they'll be dirt cheap.

09 August 2014

A brain on a chip?

In Star Trek: Voyager, the titular spaceship had computers that ran on "bio-neural circuitry" stored in gel packs. Like human bodies, the bio-neural circuitry was prone to viral infection and, in one episode, was treated with a makeshift "fever" created by an "inverted warp field", because Star Trek.

How far-fetched is the idea? As it turns out, not very. Scientists at IBM have developed a chip that mimics the neural structure of a brain. Wired explains:
In a [conventional] von Neumann computer, the storage and handling of data is divvied up between the machine’s main memory and its central processing unit. To do their work, computers carry out a set of instructions, or programs, sequentially by shuttling data from memory (where it’s stored) to the CPU (where it’s crunched). Because the memory and CPU are separated, data needs to be transferred constantly.
[....]
Neuromorphic chips developed by IBM and a handful of others don’t separate the data-storage and data-crunching parts of the computer. Instead, they pack the memory, computation and communication parts into little modules that process information locally but can communicate with each other easily and quickly. This, IBM researchers say, resembles the circuits found in the brain, where the separation of computation and storage isn’t as cut and dry, and it’s what buys the thing added energy efficiency—arguably the chip’s best selling point to date.
It's an interesting concept, and as the New York Times notes, it is both power-efficient and capable of massively parallel processing:
The chip contains 5.4 billion transistors, yet draws just 70 milliwatts of power. By contrast, modern Intel processors in today’s personal computers and data centers may have 1.4 billion transistors and consume far more power — 35 to 140 watts.
Today’s conventional microprocessors and graphics processors are capable of performing billions of mathematical operations a second, yet the new chip system clock makes its calculations barely a thousand times a second. But because of the vast number of circuits working in parallel, it is still capable of performing 46 billion operations a second per watt of energy consumed, according to IBM researchers.
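To put those efficiency figures in rough perspective, here's a quick back-of-the-envelope comparison. This is only a sketch: the neuromorphic figure comes from the quote above, but the conventional CPU's throughput and power draw below are assumed values picked from the ranges the article mentions, not measurements.

```python
# Back-of-the-envelope efficiency comparison based on the figures quoted above.
# The conventional-CPU numbers are illustrative assumptions, not measured values.

neuromorphic_ops_per_watt = 46e9   # IBM's figure: 46 billion operations/sec per watt
neuromorphic_power_watts = 0.070   # 70 milliwatts for the whole chip

# Assume a conventional CPU doing 100 billion operations/sec at 100 W
# ("billions of operations a second", "35 to 140 watts" per the quote).
conventional_ops_per_sec = 100e9
conventional_power_watts = 100.0
conventional_ops_per_watt = conventional_ops_per_sec / conventional_power_watts

print(f"Neuromorphic chip: {neuromorphic_ops_per_watt:.2e} ops/sec per watt")
print(f"Conventional CPU:  {conventional_ops_per_watt:.2e} ops/sec per watt (assumed)")
print(f"Rough efficiency advantage: {neuromorphic_ops_per_watt / conventional_ops_per_watt:.0f}x")
```

Under those assumptions the neuromorphic design comes out roughly forty-odd times more efficient per watt, which is the whole selling point.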
Large-scale applications are still a long way off, and unlike brains and conventional computers, the new chips can't learn. So supercomputers bearing any resemblance to the human brain may be further off still. And these new chips are a good many steps away from the Voyager computers, because they merely mimic an aspect of brain structure — they don't actually contain organic matter.

Still, it's kinda cool to think about. Yann LeCun, a researcher in the field, is skeptical:
“This avenue of research is not going to pan out for quite a while, if ever. They may get neural net accelerator chips in their smartphones soonish, but these chips won’t look at all like the IBM chip. They will look more like modified GPUs.”
So, like, Assassin's Creed Unity in 4k resolution on my gaming PC? I guess that'll hold me over until we build some starships.

03 August 2014

Necessary beings don't exist

As I'm prone to do, I hopped over to William Lane Craig's ironically named website Reasonable Faith last night and read the latest Q&A. This one addressed what I think remains the single most atrocious argument for God's existence — the ontological argument. The argument comes in several forms, but the theme is always the same: God exists by definition. And it still astounds me that otherwise bright people think this makes for a persuasive argument.

The Q&A discussion begins with a reader's inverse take on the argument:
When I think about the concept of God --a maximally great being-- it seems clear that God, if he exists, exists necessarily. So if God exists in the actual world, then there is by definition no possible world in which God does not exist. But the problem is this: there seem to be a nearly infinite number of possible worlds in which God does not exist
I'll let you read the full question for yourself, but the gist is that to accept the modal ontological argument, one has to accept that there is no possible world in which God does not exist; to reject it, one merely has to accept that there is at least one possible world in which God does not exist.

Craig's response is that imagining a possible world in which God does not exist "begs the question by assuming that the concept of maximal greatness is incoherent. Just because we can imagine a world in which a single particle (or whatever) exists gives no reason for thinking that such a world is metaphysically possible".

Le sigh. "Maximally great"? "Possible world"? "Metaphysically possible"? Half of the chore of addressing these arguments is deciphering the bizarre and often nebulous terminology. So let's look at the terms:

1) I don't think it's readily apparent that the concept of "maximal greatness" is coherent, because 'maximal' denotes a quantitative property, and 'greatness' denotes a qualitative one. In other words, there is no universally accepted definition of what constitutes 'greatness' in the first place, much less how greatness could be quantified as 'minimal' or 'maximal'.

2) "Possible world" is just philosopher slang for 'possible'. It seems to me then that it's utterly superfluous. If you're trying to reason about whether something is possible, just say "it's possible" or "it's not possible".

3) It's impossible to know what is or isn't "metaphysically possible" because the term 'metaphysically' is nebulously defined. Indeed Craig himself tacitly admits this in an old Q&A when he concedes, "What we take to be metaphysically necessary/possible depends on our intuitions about such matters."

—————————————————

The "possible world" semantics can be seen for how ridiculous they are simply by looking at one of the key premises in Alvin Plantinga's version of the argument:
4. If a maximally great being exists in every possible world, then it exists in the actual world.
Combined with the argument's other premises, this amounts to saying, "If it is possible that a maximally great being exists, then a maximally great being exists".

What? There has to be some sort of hidden premise here, because it's obviously a non sequitur to simply say "It is possible that a exists, ergo a exists". The hidden premise is embedded in the idea of a so-called 'maximally great' being. Namely, these theologians conceive of a maximally great being who exists as being greater than a maximally great one who doesn't. Confused? You ought to be. Here's Craig's explanation:
When you think about it, anything that exists must have the property of existing in every world in which it exists! So you're right that you, I, and everyone else has existence as part of his or her essence in that sense. Rather the claim here is that God exists in every possible world. What God has that we don't, then, is the property of necessary existence. And He has that property de re, as part of His essence. God cannot lack the property of necessary existence and be God. Of course, if something has the property of necessary existence, it can't lose that property, since if it did, there would be a possible world in which it lacked necessary existence and so it was never necessarily existing in the first place!
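For what it's worth, here is roughly how the modal machinery is supposed to get from possibility to actuality. This is my own compressed sketch rather than Plantinga's or Craig's exact formulation; the heavy lifting is done by defining maximal greatness to include necessary existence, and by the S5 axiom that whatever is possibly necessary is necessary.

```latex
% A compressed reconstruction (mine, not Plantinga's exact wording),
% where G abbreviates "a maximally great being exists":
\begin{align*}
1.\;& \Diamond G && \text{it is possible that a maximally great being exists} \\
2.\;& \Box(G \rightarrow \Box G) && \text{by definition, maximal greatness includes necessary existence} \\
3.\;& \Diamond \Box G && \text{from 1 and 2} \\
4.\;& \Box G && \text{from 3, by the S5 axiom } \Diamond\Box p \rightarrow \Box p \\
5.\;& G && \text{from 4, since what is necessary is actual}
\end{align*}
```

Everything therefore hangs on premise 1 and on the definitional move in premise 2, which is exactly where the dispute over 'maximal greatness' and 'necessary existence' lies.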
And here we find the elephant in the room: the property of existence. From the Stanford Encyclopedia of Philosophy:
There is a long and distinguished line of philosophers, including David Hume, Immanuel Kant, Gottlob Frege, and Bertrand Russell, who followed Aristotle in denying that existence is a property of individuals, even as they rejected other aspects of Aristotle's views. Hume argued (in A Treatise of Human Nature 1.2.6) that there is no impression of existence distinct from the impression of an object, which is ultimately on Hume's view a bundle of qualities. As all of our contentful ideas derive from impressions, Hume concluded that existence is not a separate property of an object. Kant's criticism of the ontological arguments for the existence of God rested on a rejection of the claim that existence is a property of an object. Proponents of the ontological argument argue that the concept of God as an entity with all perfections or a being of which no greater can be conceived entails God's existence, as existence is a perfection and a being that exists is greater than a being that does not exist. Kant objected (in his Critique of Pure Reason, A596/B624-A602/B630) that existence is not a property. “Thus when I think a thing, through whichever and however many predicates I like (even in its thoroughgoing determination), not the least bit gets added to the thing when I posit in addition that this thing is. For otherwise what would exist would not be the same as what I had thought in my concept, but more than that, and I could not say that the very object of my concept exists” (A600/B628). Finally, both Frege and Russell maintained that existence is not a property of individuals but instead a second-order property—a property of concepts, for Frege, and of propositional functions, for Russell.
What these philosophers were getting at is that conceptual abstractions do not have literally real properties; their properties are, themselves, conceptual abstractions. I can say for example that a unicorn (an abstraction) has the property of looking like a horse, having a horn, being delicious when canned, etc. But these properties are nothing more than conceptual abstractions — representative processes in the human brain. I cannot claim that by adding the property of "existence" to a unicorn, a unicorn is now a real thing. It's the other way around: something has to exist in order to have properties in the first place. To put it more plainly, imaginary things have imaginary properties.
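The Frege–Russell point can be put in symbols. This is a standard textbook rendering, not something specific to the SEP entry: to say "unicorns exist" is to say that the concept is instantiated, not to predicate a further property of some individual unicorn.

```latex
% "Unicorns exist" on the Frege-Russell analysis: existence is carried by the
% quantifier, not predicated of an individual.
\exists x\, \mathrm{Unicorn}(x)
\qquad \text{rather than} \qquad
\mathrm{Unicorn}(u) \wedge \mathrm{Exists}(u)
```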

This means that just because I can conceive of a being who is 'maximally great' — however I choose to define maximal greatness — I don't have any reason to think such a being actually exists. All I've done is conjure up some imaginary thing, and its actual existence still needs to be demonstrated independently of my ability to conceive it.

—————————————————

[Image: Some properties of Equinas Unicornus]
There are other forms of this argument: Leibniz claimed that God is a necessary being because the explanation of a series of contingent things cannot itself be a contingent thing; Aquinas claimed that 'essence' and 'existence' are identical in God, so that God being non-existent is not only paradoxical, but inconceivable.

What strikes me about all these arguments is the peculiar way in which they bandy about commonly used terms. For example, Aquinas' argument relies on a concept of 'pure being'; God, in whatever ineffable way, is not an amalgamation of properties but rather a being in which all his properties have somehow melded together and are indistinguishable from existence. It's bizarre because we've never seen anything like this, and we don't have any reason to believe that distinct properties can meld together in that way, somehow becoming identical to each other. I think Hume's argument, above, is appropriate here: there is no impression of 'existence' that is distinct from the impression of an object, which is (more or less) a bundle of qualities. We don't have any reason to think that 'pure being' is even coherent (it seems obviously paradoxical to me), but even if we did we've only conjured up a conceptual abstraction — the coherency of a concept is necessary, but not sufficient, to show that it corresponds to reality.

Why do these semantic 'proofs' of God rely on such nebulous and equivocal terms as 'metaphysical necessity', 'perfection', 'maximal greatness', and 'pure being'? Sean Carroll nicely captures the allure of this type of convoluted thinking:
If you have God intervening in the world, you can judge it by science and it’s not a very good theory. If on the other hand God is completely separate from the universe, what’s the point? But if God is a necessary being, certainly existing but not necessarily poking into the operation of the world, you can have your theological cake without it being stolen by scientific party-crashers, if I may mix a metaphor. The problem is, there are no necessary beings. There is only what exists, and we should be open to all the possibilities.
The simplest and most rational view is that the entire concept of necessary beings is inane theological gobbledygook. We don't have any reason to think that existence is something that can be ascertained in a way distinct from an object that is itself an amalgamation of observable properties. We don't have any reason to think that existence can be a property of something, or that 'maximally great' is anything more than a theological conjecture dependent upon idiosyncratic definitions of terms. Most importantly, we don't have any reason to think that our mere ability to conjure up a seemingly coherent concept is reason enough to think that it corresponds to reality. As Carroll himself often says, we can't know what reality is just by thinking about it; we can contemplate possible ways it could be, but eventually we have to actually get out there and look.

30 July 2014

Was Richard Dawkins 'mansplaining' rape?

I cringe every time I hear the pseudo-word "mansplain". It's hardcore feminist slang for when a man comments on women's issues in some purportedly unenlightened manner, and it exemplifies the kind of black-and-white thinking and reactionary antagonism that characterizes a small but vocal subset of modern feminists: the term exists to broadly undermine any discourse that doesn't kowtow to an idiosyncratic point of view. Anyone who dares to disagree is immediately branded and marginalized, further reinforcing the tribal groupthink that led to the creation of such a deplorable term in the first place.

Anyway, though. It's because of this:
[Embedded tweet from Richard Dawkins]
Which led to this article by blogger Erin Gloria Ryan:

Thank Goodness Richard Dawkins Has Finally Mansplained Rape

As usual, a little context clarifies the issue. So, here's Richard Dawkins in his own words, as part of a rather fantastic essay:
I now turn to the other Twitter controversy in which I have been involved this week.

'"Being raped by a stranger is bad. Being raped by a formerly trusted friend is worse." If you think that hypothetical quotation is an endorsement of rape by strangers, go away and learn how to think.'

That was one way I put the hypothetical. It seemed to me entirely reasonable that the loss of trust, the disillusionment that a woman might feel if raped by a man whom she had thought to be a friend, might be even more horrible than violation by a stranger. I had previously put the opposite hypothetical, but drew an equivalent logical conclusion:

"Date rape is bad. Stranger rape at knifepoint is worse. If you think that's an endorsement of date rape, go away and learn how to think."

These two opposite hypothetical statements were both versions of the general case, which I also tweeted:

"X is bad. Y is worse. If you think that's an endorsement of X, go away and don't come back until you've learned how to think properly."

The point was a purely logical one: to judge something bad and something else very bad is not an endorsement of the lesser of two evils. Both are bad. I wasn't making a point about which of the two was worse. I was merely asserting that to express an opinion one way or the other is not tantamount to approving the lesser evil.
In other words, people have different opinions about what constitutes a greater or lesser moral evil; Dawkins is simply saying that expressing an opinion one way or the other does not imply endorsing what one perceives to be the lesser evil. Seems perfectly sensible to me, and a far cry from "Saying with laughable certainty that rape can be neatly categorized and quantified in terms of 'bad,' and that certain categories always affect victims more profoundly than other categories", as Ryan charged.

But hey, who am I to question someone who uses sophisticated terms like "yellthinker" and "mansplaining"? It's always easier to draw lines in the sand than it is to engage in rational discourse, and what kills me the most about this kind of black and white, reactionary thinking is that the overwhelming majority of us are on the same side on these issues. Perhaps it'd be wiser to treat each other with a modicum of charity.


P.S. This also shows why Twitter is not exactly the ideal place to engage people in civil debate on controversial and sensitive topics.

Does truly selfless altruism exist?

There's a pretty thorough scientific body of evidence that a great deal of 'moral' behavior in humans can be explained by reciprocal altruism, and that's a point that even the most hard-nosed theist is generally hesitant to dispute. Reciprocity drives an incalculable range of human cooperation, and it's an essential component of social behavior given our obligatory interdependence. The 'Golden Rule' itself is a maxim of reciprocal altruism, essentially saying I will respect your needs and interests as I wish you to respect my own.

But is there such a thing as true altruism, behavior that has absolutely no selfish component whatsoever? I'm skeptical. I tried to think of the most extreme example of altruism, and I got a little help from Wikipedia: altruistic suicide. The example is a soldier who, in wartime, jumps on a grenade to save his comrades. Clearly, there can be no reciprocal benefit since the soldier is dead. But does this really defy explanation via reciprocal altruism?

The key is to think not about the final act itself, where there is clearly no reciprocal benefit, but rather about the benefits of being in a squad of soldiers who hold self-sacrifice as a virtue. Let's say there are ten soldiers in such a squad, and you are one of them. If all ten soldiers are in combat, all else being equal, there's a one in ten chance that you will be the one who has to save everyone else by jumping on the grenade. But more importantly, there is a 90% chance that someone else will jump on the grenade, substantially increasing your likelihood of survival. Self-sacrifice is considered virtuous precisely because, on average, it increases the odds of everyone's survival. This clearly falls under the purview of reciprocal altruism, and indeed research indicates that self-sacrifice is more likely among strongly cohesive groups [1].
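As a toy illustration of that arithmetic, here's a sketch comparing survival odds with and without the norm. Every probability in it is an assumption made for the sake of the example, not an empirical figure.

```python
# Toy comparison of survival odds for the grenade scenario described above.
# All probabilities are illustrative assumptions, not empirical data.

squad_size = 10

# With a self-sacrifice norm: exactly one soldier (each equally likely) absorbs
# the blast, so your chance of surviving the grenade is 9 in 10.
p_survive_with_norm = 1 - 1 / squad_size

# Without the norm: suppose an unabsorbed grenade kills half the squad on average
# (the 50% casualty figure is made up purely for illustration).
p_survive_without_norm = 0.5

print(f"P(you survive) with the self-sacrifice norm:    {p_survive_with_norm:.0%}")
print(f"P(you survive) without it (assumed casualties): {p_survive_without_norm:.0%}")
```

The exact numbers don't matter; the point is that, averaged over many such moments, the norm pays for itself for everyone in the group.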

If a soldier gives in to cowardice and runs from the grenade, allowing it to kill several of his comrades, it's likely that his survival will be short-lived or, at best, miserable. The reduced ranks of his company mean that his odds of survival on the battlefield are drastically lessened, and any surviving comrades will almost certainly ostracize him or even kill him. If he makes it back, he'll spend the rest of his life shamed and, depending on the nation and the point in history, imprisoned or executed. A key aspect of a culture of honor that values self-sacrifice is that death is viewed as preferable to a life of shame and dishonor. And a virtue of self-sacrifice among soldiers that is not honored in combat is one that might as well not exist at all.

Still, it could reasonably be asked why the soldier would not turn away at the last moment anyway. Even a marginal chance of survival beats certain death, and a life of shame is still more evolutionarily advantageous for that individual than the grave, if only marginally so. I think the answer is that the split-second decision to jump on the grenade is not rational. The virtue of self-sacrifice is deeply ingrained in the group, both fueled by and fostering strong in-group cohesion. Even if no one precisely articulates the idea that a norm of self-sacrifice increases everyone's likelihood of survival, it is intuitively understood that what is good for the group is, on average, good for the individual. When the moment comes, the soldier does not pause to ponder the possible consequences of his decision; he reacts swiftly and instinctively.

This comports with the explanation Richard Dawkins has given that some altruism may essentially be misdirected reciprocal behavior (misdirected in an evolutionary sense, not a moral one). The quintessential example is giving aid to impoverished people in Africa. Clearly, aside from a sense of pride in helping others, there is little if any obvious reciprocal benefit to be had. There are a couple of explanations here, however. One is that the idea that charitable acts are virtuous is itself a cultural norm, and that does indeed square with reciprocal altruism, since any of us could easily find ourselves in a situation in which we needed others' help. A society in which charity is considered noble is a better society for all. Given that, helping someone in Africa — even though the recipients likely cannot reciprocate — can be seen as a boon to one's social status, itself a strong evolutionary advantage.

But that can't explain all acts of charity — it seems incredibly cynical, and almost certainly false, that people are charitable only because of a subconscious desire to increase their social status, and even if it were true, it wouldn't explain anonymous acts of charity. More likely, our innate empathy for other humans is simply being irrationally redirected. There's a reason why, for example, ads soliciting aid for African children don't monotonously list the ways that aid will help them; instead, the ads play somber music and show pictures of the children looking sad and helpless. If such charity were rational, the ads would appeal to our sense of reason and not our sense of empathy. The fact that an act is irrational doesn't preclude it from being rationalized, of course, just as the irrational self-sacrifice of a soldier can be rationalized as part of a larger framework of reciprocal altruism. If, for example, the majority of African nations could become technologically advanced players on the world stage, it would undoubtedly contribute incalculably to scientific research, global trade, tourism, and much more. A reciprocal component still exists within the larger framework of human flourishing, even if it's not readily obvious.

So, can truly selfless altruism exist? Within the narrow confines of the acts of individuals and the immediate consequences, it may appear so. But when we look at the cultural norms and social psychology that lead to such selflessness being viewed as virtuous, those sacrifices both small and large are undeniably part of a larger framework of reciprocity that is integral to the survival and well-being of all.

29 July 2014

John Loftus' Outsider Test for Faith

Last night as I was replacing bookmarks thanks to my new install of Windows, I stopped by my old Christian stomping ground at Randal Rauser's blog. I was dismayed to see the self-indulgent 'critique' of John Loftus' new book. You have to love the pomposity of a statement like "his book is beset by cognitive biases and lack of epistemic virtue as I have demonstrated in parts 1-9 of this review" — not argued, mind you, but demonstrated. Checkmate, atheist!

Anyway, the old curmudgeon brings up a sensible point:
[Loftus'] test is generally presented as a punctiliar event or delimited process of religious self-examination. This too limits its value, for human beings always need to check our biases and cultivate epistemic virtue. We are forever works in process. You don’t pass a single test and then get confirmed as “clear” (and that includes Tom Cruise). Consequently, Loftus’ so-called outsider test conveys a very misleading impression that one can pass a particular test and then be found rational in perpetuity. That is dangerous self-delusion.
This is one of those rare circumstances in which I find myself in strong agreement with a Christian apologist, especially one so persistently cantankerous.

This is how John Loftus originally phrased the 'OTF', as he likes to shorthand it:
If you were born in Saudi Arabia, you would be a Muslim right now, say it isn't so? That is a cold hard fact. Dare you deny it? Since this is so, or at least 99% so, then the proper method to evaluate your religious beliefs is with a healthy measure of skepticism. Test your beliefs as if you were an outsider to the faith you are evaluating. If your faith stands up under muster, then you can have your faith. If not, abandon it, for any God who requires you to believe correctly when we have this extremely strong tendency to believe what we were born into, surely should make the correct faith pass the outsider test. If your faith cannot do this, then the God of your faith is not worthy of being worshipped.
I've always thought Loftus' 'test' works just fine as a general principle of skepticism, but fares rather poorly as an argument regarding the truth or falsity of any particular religious claim. It may be, however improbable, that the Lord and Creator of the entire Universe decided to make the mostly illiterate, frequently barbaric, and not particularly advanced tribes of Bronze Age Israel his sole 'chosen people', to whom he revealed the one correct faith, sitting idly in Heaven as thousands upon thousands of other cultures spanning the globe throughout history worshiped the wrong gods. I mean, believing such a thing takes a pretty extraordinary degree of intellectual compartmentalization, but its sheer prima facie absurdity doesn't prove it false.

Ed Brayton has quipped that studying other religions is one of the best ways to lose your faith in the religion you were raised with, and I think he's right, for several reasons. Firstly, recognizing that our cultural upbringing intrinsically subjects us to ethnocentrism and in-group/out-group biases very quickly leads one to treat with skepticism the notion that the religion one happened to be raised with, or happened to be surrounded by in one's culture, is the one correct faith out of all the thousands spanning human history. Lucky you, being fortunate enough to be raised in the culture that worships the correct God and, perhaps, even to attend the church or seminary that happens to have the correct nuanced theological understanding of that God.

Secondly, when one studies religion from an anthropological perspective (as in Pascal Boyer's exceptional book Religion Explained) and understands how religious beliefs form and change, as well as how they are integrated into cultural norms, the illusion that one's religion is uniquely true becomes much harder to entertain. One sees that one's own religion is subject to the same cultural forces that have shaped every other religion in history, and that one's beliefs are nothing extraordinary or special.

And finally, there is research which shows that people mold God into a reflection of their own sociocultural biases. This is hardly surprising; anecdotal observation reveals that religious people have a remarkable tendency to believe that God's outlook mirrors their own in important ways; the key distinction is that the religious person thinks that God has informed their outlook, when science reveals the opposite to be true — God is created in man's own image.


Religion is on the decline in the West, and has been for some time. In the age of the Internet, with communication making the world smaller and smaller, an insular, ethnocentric perspective becomes far more fragile than it once was. John Loftus' OTF doesn't demonstrate any religion to be false, but it does highlight the sheer cognitive compartmentalization that believers must maintain in order to sustain their innumerable idiosyncratic religious perspectives. To break out of this cognitive prison, people don't necessarily need to be exposed to some 'sophisticated' philosophical argument; they just have to see that the world is bigger than the space in their heads.