
Wednesday, July 2, 2014

my teaching philosophy

One of the universities I'm applying to asked me to include, as part of my application, a "personal statement" containing a detailed account of my teaching experience and my teaching philosophy. I thought this was a welcome challenge. I might have overdone it by sending the university a 5-page, single-spaced document, but the directions did say "detailed."

I don't think I said everything I had wanted to say, but what I've written below captures at least 90% of what I think and feel. I've cut and pasted only the last half of my personal statement below—the half dealing with my teaching philosophy. Agree or disagree as you will.



TEACHING PHILOSOPHY

Thanks in part to those linguistics and pedagogy courses I took in college back in the late 1980s and early 1990s, and thanks largely to my own experiences in the classroom, I have formed a clear teaching philosophy. I cannot claim to implement this philosophy perfectly, but it represents an ideal toward which I strive. In a nutshell: the ideal EFL classroom is student-centered and task-oriented. The teacher never lectures, and to the greatest degree possible, students are encouraged to take control of their own learning. In Korea, students are generally trained to be passive in the classroom; most of the classes they have with Korean professors will involve lectures. Classes on English grammar or literature will also be teacher-centered lectures, and the lecturers will speak primarily in Korean, which I find ironic. Students do little more than take notes during these sessions; they are not encouraged to question the professor or to “flex their English muscles”; instead, they sit in silence, just writing. How constructive is this? In my view, class is much more exciting and beneficial when the students take control and the professor stands back to let this happen. The professor, in my ideal classroom, is merely a guide or a facilitator; it is the students who are in the driver’s seat, even teaching each other lessons from the curriculum or completing tasks individually or in teams. People learn more when they are given responsibility: to learn to ride a bicycle, one must actually get on a bike, not merely hear a lecture about bike-riding.

In French pedagogical linguistics, a distinction is made between parler de la langue and parler dans la langue: speaking about the language versus speaking in the language. The former is a bad idea, but this is what happens when professors lecture on English grammar in Korean. The latter is a superior approach because it exposes the students to more actual English and forces them to think about what they are hearing. Linguist Stephen Krashen put forward his "input hypothesis," often notated "i + 1," decades ago; the idea is that, if the students' ability is at level i, the teacher must speak at level i + 1 to force the students to make an extra effort at comprehending the teacher's utterances. Lazier students might resent this kind of challenge.

I also disagree with modern “oral proficiency” and “communicative” approaches that sacrifice the teaching of grammar for some vague, airy-fairy notion of “fluency.” These modern approaches do indeed get students producing English faster than the old-school methods ever did, but their major disadvantage is that the students, though speaking with confidence, often cannot speak well. Their speech tends to be garbled and incoherent, shot through with errors, and this is because the students have not learned the necessary grammatical structures on which to hang their ideas. When a Korean student says, “I go school” or “When you homework?”, I hear a grammar issue. Teaching EFL students how to structure “Wh-” and “yes/no” questions, how to reply intelligibly to such questions, and how to frame their thoughts in an organized manner is an essential part of a good language curriculum.

A personal example of the flaws of “oral proficiency”-oriented programs: my brother Sean went through a French curriculum that stressed communicative competence over grammar. Because I am fluent in the language, I would often try talking with my brother in French. I found that his pronunciation was not bad, and he was able to reply to my questions with short bursts of verbiage, but longer utterances were beyond him. When I took a look at Sean’s French writing, I saw it was atrocious: my brother had learned little to nothing about verb conjugation, grammatical gender, tense control, or any of the other myriad details that make one’s language clear and coherent. This was not Sean’s fault: the curriculum had failed to stress the structural, technical aspects of French, favoring instead a fuzzy, holistic approach that produced students who could gabble in French, but who had already begun to form a raft of bad speech habits that would be hard to undo later on in life.

This brings me back to EFL in Korea. Most of my Korean students have formed terrible speech habits because no one has bothered to correct their technical errors. I have taught writing classes in Korea in which my students were horrified to see how much red ink I had scrawled all over their short essays. This horror is the direct result of a lack of mindfulness caused by curricula that emphasize production and fluency, but neglect to consider correct grammar, mechanics, and so on.

There are, unfortunately, Western teachers in Korea who buy into the myth that “Korean students don’t need to learn more English grammar” or “Korean students have had enough grammar.” True: Korean students might be very good at recognizing grammar errors on a quiz, but that says nothing about those students’ ability to produce grammatically correct language. The problem with the “Koreans have had enough grammar” crowd is that these people do not realize that Koreans might have a good storehouse of passive grammar, but they have next to nothing when it comes to active grammar. The same goes for vocabulary: university students will have studied English for years, and will have a large mental lexicon of passive vocabulary (i.e., the vocabulary that is recognized through listening and reading), but they will have precious little active vocabulary (i.e., the vocabulary that one relies on when speaking and writing). Active vocabulary can only be developed through proactive use, which is again why lecture is a terrible way to teach English. Passive students will never develop active vocabulary.

In that sense, I do agree with the oral-proficiency school that the students need to be speaking, speaking, and speaking some more. But unstructured speech, “free talk,” and the avoidance of error correction are all harmful to students’ FL learning. Grammar drills and other focused exercises must be part of a language curriculum, however corny and old-school that might sound.

I have, lately, been encouraging my intermediate students to engage in a round-robin English activity in which the students take over, entirely, the responsibility of teaching, while the teacher stands back and monitors, providing occasional correction and leading the post-activity review segment. In my round-robin classroom, the students are divided into four teams. Each team is assigned a certain amount and type of content to teach. Team 1 will teach its material to Teams 2, 3, and 4; Team 2 will teach its material to Teams 1, 3, and 4, and so on. This is done in three rounds, with the combinations of teams rotating every round. Each team teaches its own material three times (and becomes expert at it by the third round); each team is taught different material by each of the other teams. By the end of three rounds, all four teams will have been exposed to all four teams' material. The material itself is designed to be internally reinforcing, so there is a good bit of repetition and overlap among the teams' lessons to aid students in remembering what they have learned. My intermediate kids love the round-robin approach; I told them that it gives them a small taste of American-style graduate-school seminars, in which it is incumbent on the students, not the professor, to provide the material for a given day's lessons. My feeling is that you learn when you teach, and teaching something is an excellent way to take responsibility for it.
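
For the curious, here is one way to generate that rotation, sketched in Python. This is a minimal sketch, assuming the teams pair off each round and teach each other; the team names and the "circle method" pairing scheme are my illustration, not a fixed part of the activity.

def round_robin_schedule(teams):
    # Circle method: fix the first team in place and rotate the rest.
    # Over len(teams) - 1 rounds, every team meets every other team exactly once.
    teams = list(teams)
    n = len(teams)
    rounds = []
    for _ in range(n - 1):
        rounds.append([(teams[i], teams[n - 1 - i]) for i in range(n // 2)])
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]  # rotate all but the first
    return rounds

for r, pairs in enumerate(round_robin_schedule(["Team 1", "Team 2", "Team 3", "Team 4"]), 1):
    print(f"Round {r}: " + "; ".join(f"{a} and {b} teach each other" for a, b in pairs))

With four teams, this produces three rounds in which every team teaches its material exactly three times and is taught once by each of the other three teams, which is the pattern described above.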

There are two other aspects to my pedagogical philosophy: I favor the use of behavioral objectives and the use of humor. Behavioral objectives stand in contrast to cognitive objectives. A cognitive objective might be something like, “By the end of the class, students will have developed an appreciation for Impressionist art.” The words develop and appreciation are frustratingly ill-defined in this context. Meanwhile, a behavioral objective will focus on things that are tangible and, where possible, quantifiable. For example: “By the end of the class, students will write a two-paragraph report summarizing the work of one Impressionist painter and expressing a well-defended opinion about that painter’s work.” As a pragmatist, I have a strong bias toward behavioral objectives because they can be used to measure students’ progress. As for the use of humor in the classroom, this should be so obvious as to go without saying. Humor softens the hard edges of social interaction in a classroom full of unfamiliar people. In Krashen’s terms, humor “reduces the affective filter,” lowering stress levels and allowing for better learning. It is an invaluable tool, not to mention one of the teacherly qualities for which an instructor will be long remembered.

To sum up, then: I am a strong advocate of student-centered, task-oriented FL learning. I am an enemy of lecture as a teaching method because it encourages student passivity and does nothing to improve students’ active vocabulary and active grammar. I believe in the old-school notion that grammar is absolutely crucial for good and proper production of language, but I speak here of grammar as it applies to the productive macroskills—speaking and writing. I also think that the teacher, far from being the center of attention in class, ought to be as far away from the center as possible, to allow the students to take charge of their own learning. While I am not against using Korean on occasion as a time-saving device, I believe that FL students should be exposed as much as possible to the target language, not to lectures in the students’ native tongue. Finally, I am a pragmatist who advocates the use of measurable, tangible behavioral objectives in lesson planning, and I also advocate the use of humor as a way to reduce stress and facilitate better learning.

These are some of the modest insights that I have gained from years of teaching. They have stood me in good stead, but because life is always evolving and people are always learning, I know that this philosophy will, inevitably, evolve as well.


_

Saturday, December 15, 2012

a faulty axiological argument for the existence of God

I was alerted, on my Twitter feed, to the existence of a five-minute Prager University video by Dr. Peter Kreeft (rhymes with "strafed"), professor of philosophy at Boston College, in which Dr. Kreeft attempts to prove the existence of God by arguing that good and evil enjoy objective existence. I will lay out Dr. Kreeft's argument, phase by phase, and then demonstrate why it resoundingly fails to prove God's existence.

1. The Argument

Dr. Kreeft's argument has two principal phases:

a. Establish that all non-objective (i.e., atheistic/naturalistic) explanations for the existence of morality are unsatisfactory.

b. Conclude from the failure of all naturalistic explanations that morality has an objective basis, which must be supernatural, i.e., God.

Establishing (a) is challenge enough, but much more depends on whether Dr. Kreeft can succeed at establishing (b) satisfactorily. In the video, Dr. Kreeft breaks (a) down into five parts. This five-part argument, a systematic rejection of several naturalistic explanations for the existence of morality, begins this way:

I'm going to argue for the existence of God from the premise that moral good and evil really exist. They are not simply a matter of personal taste-- not merely substitutes for I like and I don't like.

We can therefore call this an axiological argument for the existence of God. The term axiology refers to the study of value, i.e., ethics, morals, the Good, etc. Note, too, that Dr. Kreeft is aiming to establish that good and evil are objective realities, i.e., they reside in the world, independent of any particular person's perspective.

Dr. Kreeft continues:

Before I begin, let's get one misunderstanding out of the way. My argument does not mean that atheists can't be moral. Of course: atheists can behave morally, just as theists can behave immorally.

This is an important concession, but I'm not sure how relevant it is, given what Dr. Kreeft argues later: at the end of his spiel, Dr. Kreeft seems to imply that an atheist who believes morals to have an objective basis is actually a closet theist. This comes perilously close to the claim that there are no atheists, a claim that drives most atheists crazy. (It's a bit like defining religion so inclusively that even atheists turn out to be religious. I've been guilty of making that move myself.)

Here is the transcript (all typos are my responsibility) of the rest of Dr. Kreeft's axiological argument for God's existence:

Let's start, then, with a question about good and evil. Where do good and evil come from? Atheists typically propose a few possibilities. Among these are

-evolution
-reason
-conscience
-human nature, and
-utilitarianism.

I will show you that none of these can be the ultimate source of morality.

Why not from evolution? Because any supposed morality that is evolving can change. If it can change for the good or the bad, there must be a standard above these changes to judge them as good or bad. For most of human history, more powerful societies enslaved weaker societies, and prospered. That's just the way it was, and no one questioned it. Now, we condemn slavery. But, based on a merely evolutionary model—that is, an ever-changing view of morality—who is to say that it won't be acceptable again one day? Slavery was once accepted, but it was not therefore acceptable: if you can't make that distinction between accepted and acceptable, you can't criticize slavery. And if you can make that distinction, you are admitting to objective morality.

What about reasoning? While reasoning is a powerful tool to help us discover and understand morality, it cannot be the source of morality. For example, criminals use reasoning to plan a murder, without their reason telling them that murder is wrong. And was it reasoning, or something higher than reasoning, that led those Gentiles who risked their lives to save Jews during the Holocaust? The answer is obvious: it was something higher than reasoning, because risking one's life to save a stranger was a very unreasonable thing to do.

Nor can conscience alone be the source of morality. Every person has his own conscience, and some people apparently have none. Heinrich Himmler, chief of the brutal Nazi SS, successfully appealed to his henchmen's consciences to help them do the "right" thing in murdering and torturing millions of Jews and others. How can you say your conscience is right and Himmler's is wrong, if conscience alone is the source of morality? The answer is: you can't.

Some people say human nature is the ultimate source of morality. But human nature can lead us to do all sorts of reprehensible things. In fact, human nature is the reason we need morality. Our human nature leads some of us to do real evil, and leads all of us to be selfish, unkind, petty, and egocentric. I doubt you would want to live in a world where human nature was given free rein.

Utilitarianism is the claim that what is morally right is determined by whatever creates the greatest happiness for the greatest number. But, to return to our slavery example, if 90% of the people would get great benefit from enslaving the other 10%, would that make slavery right? According to utilitarianism, it would!

We've seen where morality can't come from. Now, let's see where it does come from.

What are moral laws? Unlike the laws of physics or the laws of mathematics, which tell us what is, the laws of morality tell us what ought to be. But like physical laws, they direct and order something, and that something is right human behavior. But since morality doesn't exist physically—there are no moral or immoral atoms or cells or genes—its cause has to be something that exists apart from the physical world. That thing must therefore be above nature, or supernatural. The very existence of morality proves the existence of something beyond nature and beyond man. Just as a design suggests a designer, moral commands suggest a moral commander. Moral laws must come from a moral lawgiver. Well, that sounds pretty much like what we know as God.

So the consequence of this argument is that, whenever you appeal to morality, you are appealing to God, whether you know it or not. You're talking about something religious, even if you think you're an atheist.

I'm Peter Kreeft, professor of philosophy at Boston College, for Prager University.


2. My Critique

My first reaction to this video was that an axiological argument for the existence of God has to be one of the more bizarre attempts at proving God's existence that I've seen. St. Anselm's ontological proof for the existence of God, while flawed, strikes me as more rigorously logical than Dr. Kreeft's strange undertaking. St. Thomas Aquinas's cosmological proofs—the so-called Five Ways—also strike me as more tightly reasoned than this morality-centered approach, although they, too, are flawed.

My objections to Dr. Kreeft's arguments can be summed up thus:

1. In attempting to refute a mere subset of the total number of naturalistic arguments for the existence/ultimate source of good and evil, Dr. Kreeft has failed to address all the possible arguments and thus cannot proceed directly to the supernatural.

2. Many, if not most, of Dr. Kreeft's objections merely reject possibilities because they are distasteful, not for any rigorously logical reason. These are aesthetic objections, not logical objections.

3. Even if we consider Dr. Kreeft successful in having refuted all the naturalistic arguments for the existence/ultimate source of morality, Dr. Kreeft has failed to demonstrate that a theistic source for morality is the only remaining option. Buddhism builds its system of morality not upon theism, but upon the basic empirical fact of dukkha (suffering, unsatisfactoriness) and the relational, processual, intercausal nature of reality. No god is needed in this moral framework.

Science has also been exploring the question of morality. You might want to take a look at Robert Wright's talk with Dr. Steven Pinker over at Meaningoflife.tv (see here). Fast-forward to about minute 34, then listen as Pinker and Wright talk about the notion of objective "moral laws" (i.e., moral realism, the idea that moral laws have objective existence), which enjoy an almost Platonic status, toward which evolving organisms are converging over time—laws that govern, say, cooperative survival strategies, tendencies toward reciprocal behavior, various pancultural forms of the Golden Rule, etc. Nowhere in that discussion is God explicitly invoked.

4. At several points in his argument, Dr. Kreeft assumes what he wishes to prove. A good example of that fallacious move occurs here, early in his argument:

For most of human history, more powerful societies enslaved weaker societies, and prospered. That's just the way it was, and no one questioned it. Now, we condemn slavery. But, based on a merely evolutionary model—that is, an ever-changing view of morality—who is to say that it won't be acceptable again one day? Slavery was once accepted, but it was not therefore acceptable: if you can't make that distinction between accepted and acceptable, you can't criticize slavery. And if you can make that distinction, you are admitting to objective morality.

The notion that "slavery was once accepted, but it was not therefore acceptable" is the crucial phrase here: Dr. Kreeft is merely asserting, not arguing. He offers no support, that I can see, for his contention that slavery wasn't acceptable back in the old days: obviously it was acceptable, or it would never have been practiced! To say that slavery was never acceptable is to say it was never acceptable from a God's-eye point of view—and that's precisely where Dr. Kreeft is assuming what he wishes to prove.

5. Dr. Kreeft's argument suffers from the same problem that plagues most arguments for an objective morality: whose morality, from which culture, counts as the objective one? There are so many moralities out there, and not all of them share even such basic tenets as "killing/murder is bad." This is Cultural Anthropology 101, folks: moralities may overlap, but as with Wittgenstein's notion of family resemblances, distant-cousin moral systems may have little to nothing in common.

6. If we assume that Dr. Kreeft has successfully made the case for theism, Dr. Kreeft still faces all the logical and moral objections to theism itself. To wit: how moral is a jealous and vindictive God? Is the petty, bloodthirsty God of the Old Testament (a God who, in Christian reckoning, sacrifices his son in the New Testament) truly worthy of worship? What about the logical problems that burden most traditional concepts of God? Divine foreknowledge is incompatible with human freedom, for example, and we associate freedom with responsible, moral action. Etc., etc.

I think that about covers my objections to Dr. Kreeft's argument. Basically, I feel that the professor has failed to make the move from "No naturalistic explanation for morality is satisfactory" to "Only theism can explain the existence of morality." His objections to naturalistic explanations are more aesthetic than logical; he fails to answer all the naturalistic arguments for the existence of morality; he fails to provide a compelling case that theism is the only inevitable alternative in the face of naturalism's failures (cf. Buddhism and science on morality); he assumes what he wishes to prove; he fails to deal adequately with the diversity of moral systems; and finally, even if he has succeeded in making the case for God, he faces a mountain of logical and moral objections to theism itself.

That any argument for the existence of God can hold water is doubtful at best. Over the course of human history, no argument has yet proven universally acceptable, and this axiological approach strikes me as one of the stranger—not to mention weaker—attempts at supporting theism.

My thanks to my brother Sean for nudging me to write this post.


_

Friday, June 1, 2012

"spiritual, not religious"?
well, that's a punch in the face for you, then!

Quite possibly a new book for my collection: Dispirited: How Contemporary Spirituality Makes Us Selfish, Stupid, and Unhappy by Dr. Dave Webster, professor of religion, philosophy, and ethics at the University of Gloucestershire, England. Excerpt:

When someone tells me that they are “Not religious, but very spiritual,” I want to punch them in the face.

Hard…

[Interviewer (Webster himself, really)] What’s the most important take-home message for readers?

[Dave Webster] That the idea of being “spiritual, but not religious” is, at the very least, problematic. As I suggest in the book, mind-body-spirit spirituality is in danger of making us stupid, selfish, and unhappy.

Stupid—because its open-ended, inclusive and non-judgemental attitude to truth-claims actually becomes an obstacle to the combative, argumentative process whereby we discern sense from nonsense. To treat all claims as equivalent, as valid perspectives on an unsayable ultimate reality, is not to really take any of them seriously. It promotes a shallow, surface approach, whereby the work of discrimination, of testing claims against each other, and our experience in the light of method, is cast aside in favour of a lazy, bargain-basement-postmodernist relativism.

Selfish—because the ‘inner-turn’ drives us away from concerns with the material; so much so that being preoccupied with worldly matters is somehow portrayed as tawdry or shallow. It’s no accident that we see the wealthy and celebrities drawn to this very capitalist form of religion: most of the world realizes that material concerns do matter. I don’t believe that we find ourselves and meaning via an inner journey. I’m not even sure I know what it means. While of course there is room for introspection and self-examination, this, I argue, has to be in a context of concrete social realities.

Finally, I argue that the dissembling regarding death in most contemporary spirituality—the refusal to face it as the total absolute annihilation of the person and all about them—leaves it ill-equipped to help us truly engage with the existential reality of our own mortality and finitude. In much contemporary spirituality there is an insistence on survival (and a matching vagueness about its form) whenever death is discussed. I argue that any denial of death (and I look at the longevity movements briefly too) is an obstacle to a full, rich life, with emotional integrity. Death is the thing to be faced if we are to really live. Spirituality seems to me to be a consolation that refuses this challenge, rather seeking to hide in the only-half-believed reassurances of ‘spirit’, ‘energy’, previous lives, and ‘soul’.

I can tell already that I'm going to disagree with some or most of the author's contentions, but the book still sounds fascinating.


_

Friday, April 20, 2012

metaphysical froth

[NB: This post is actually a repost of an essay I wrote back in 2008-- here.]



Philip Pullman's His Dark Materials trilogy (The Golden Compass, The Subtle Knife, and The Amber Spyglass) is a wild metaphysical ride-- imagine Tom Robbins for kids-- that takes the reader through multiple alternate universes, many of which appear to be variations on our own, but at least one of which features an earth on which no human life evolved (the mulefa of that world are sentient, but not human: imagine elephants on motorcycles). We encounter only an infinitesimal fraction of the universes out there; more are born every moment.

The manner in which Pullman's universes are born is boilerplate sci-fi (for a classic example, see Larry Niven's short story, "All the Myriad Ways," in his collection of the same title): as sentient beings are faced with choices, each choice results in a mitotic split by which new universes are born, each universe containing an alternate version of the sentient being who has passed beyond the moment of choice. If a certain Being X has twenty possible choices at a given moment, then twenty different universes will be born, each one instantiating one of those twenty choices. Of course, assuming the existence of libertarian free will, each sentient individual actually faces an infinity of possible actions each moment, so each individual is "producing" infinities upon infinities of universes every moment. If you think that's complicated, apply that scenario to every sentient being.
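
To get a feel for how fast the froth multiplies, here is a quick back-of-the-envelope sketch in Python (the numbers are mine, chosen purely for illustration):

# If each sentient being faces b choices per moment, then a single being
# spawns b**k universes after k choice-moments -- and that's before we
# grant free will its infinity of options per moment.
b, k = 20, 10            # 20 choices per moment, over a mere 10 moments
print(f"{b ** k:,}")     # 10,240,000,000,000 universes from one being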

The idea that we live in an ever-burgeoning froth of universes is evocative, but is also, in my opinion, unworkable. I want to talk first about the narrative problems it poses for Pullman's plot (this will require explaining the story a bit), and later on about the philosophical problems inherent in a frothy metaphysic.

1. Narrative Problems

Pullman obviously can't lead us through every single universe; for his story to have any coherence, he must confine his narrative to just a few universes. The ones we encounter are:

1. Lyra Belacqua's world
2. Will Parry's world (which is also our world)
3. The world of Cittàgazze (characterized by the predominance of Italian culture, the presence of Coca Cola, and the general lack of adults in the big cities)
4. The world of Lord Asriel's fortress
5. The world of the mulefa, where Mary Malone constructs her amber spyglass

Beings from other universes appear in the story, but we never visit those places.

All the parallel/alternate universes are connected, however, by the existence of Dust, which consists of particles of consciousness. When matter evolves to the point where sentience appears, there Dust is found. The universes are also connected to the Abyss: interdimensional explosions that rip the fabric of space-time can create holes in many alternate worlds at once, and the Dust from those worlds will begin to drain into that singular Abyss.

It is possible that the universes are also connected "at the top": the idea that all the universes are the products of a single creator God is alluded to in the books, although God is never actually seen, and God's existence is never confirmed. Much of the story focuses instead on The Authority, the first and greatest angel to be formed from the coalescence of Dust. The Authority crowned himself God and told all who followed that he was the creator.

Angels can pass easily between alternate universes without disturbing the overall frothy structure of the Great Metaphysic (my term for the sum total of all universes, not Pullman's). It seems that angels, despite being the most highly sentient of sentient beings, do not produce new universes with their choices. Pullman never directly addresses this issue. Humans, too, can pass from one universe to another; in fact, many doors between worlds remain open because the humans who created them have forgotten to reclose them.

If I've read Pullman correctly, the human ability to travel between worlds began about three hundred years ago in the world of Cittàgazze, where someone or a group of someones created a tool called the "subtle knife." The blade is of modest size and two-edged; one edge cuts through any material (reminiscent of a lightsaber); the other edge, when the proper owner of the knife achieves the correct state of mind, cuts through the fabric of one's universe and, depending on the direction of one's concentration, can slice a window or hole into an alternate world. Shifts in cuts and concentration are what allow the knife wielder to open doors to different worlds. A conscientious user of the knife can step through the threshold and reseal the tear, if he so wishes.

But over the course of three centuries, the various users of the subtle knife have secretly entered different universes, pilfering items and technologies found in them, often leaving the doors between worlds open. Each tear allows a little of the Abyss to peek through, and soul-eating Specters, the children of the Abyss, are created every time a cut with the subtle knife is made. As a result, a good part of the trilogy is devoted to the question of how to repair the open doors, stanch the flow of Dust into the Abyss, and stop the spread of the Specters.

It's all quite complicated, and I'm afraid it's also unworkable from a narrative point of view. The problem is this: if there's one Cittàgazze world, there are many-- an infinity of them, in fact. The moment the subtle knife was created, there wouldn't have been only one such knife, as Pullman's story implies: there would have been an infinity of them, too, with an infinity of people doing an infinite amount of damage to the Great Metaphysic. The plot of the trilogy wraps itself up far too neatly (and happily), and this is problematic because Pullman obviously wants to write a smart story for smart kids, a story that works on many levels. Astute young readers will catch on to the same problems I'm talking about here, and will have the same doubts about the conclusion of Pullman's trilogy.

With a blossoming infinity of subtle knives out there, a simple resolution is quite impossible. How would you track down and stop the wielders when each wielder is producing an infinity of new wielders at every moment? I conclude that Pullman bit off more than he could chew when he decided on such a freewheeling many-worlds scenario for his story. He could have avoided the chaos by hewing to a more modest alternate-universe paradigm, such as can be found in CS Lewis's Narnia series, where God's anteroom is a forest filled with still pools of water, each pool a gateway to a self-contained universe, and little to no interpenetration between universes except whatever God allows. Pullman could also have gone for an even more restricted scenario, such as the one in Stephen R. Donaldson's Thomas Covenant series, which deals with only one alternate world created by a being who, in our world, appears to be a Hindu monk. In terms of narrative, neatness counts, and the more I think about Pullman's story, the less I like this aspect of it. What a contrast with that other well-known series, the Harry Potter heptalogy! JK Rowling offers us only one world, one with quite enough action to keep us occupied, thank you very much. When put next to Pullman's trilogy, Rowling's series looks relentlessly linear.

2. Philosophical Problems

Now let's turn to the matter of the frothy metaphysic itself.

I'm a big fan of Occam's Razor, which states that we should "not multiply entities beyond necessity." This is normally interpreted to mean "the simplest, most elegant explanation for a given state of affairs is probably the correct one," but in the case of Pullman's Great Metaphysic, there's no need to reinterpret Occam: Pullman's story quite literally multiplies entities beyond necessity!

But let's think for a moment in terms of simple, elegant explanations. Which explanation for the current state of affairs strikes you as simpler and more elegant?

1. There is only one universe.

2. There is an infinity of universes, with new ones being created all the time as sentient beings make choices.

The idea that this one reality (and there can only ever be one reality, as I explained back in this post) contains one universe strikes me intuitively as correct. Parallel universes seem to me to feed an anthropocentric need to spread our egos as far and wide as possible: what a nice fantasy to think that somewhere out there is an alternate Kevin who is at this very moment sipping Mai Tais and surrounded by gorgeous women!

So the froth model seems to fail the test of Occam's Razor, a truly subtle knife if ever there was one. I also think the notion of a frothing reality presents us with a problem only vaguely alluded to earlier: the problem of freedom.

Freedom, conventionally defined, is the ability to do otherwise than what one has done. This suggests that, at a given choice-moment, there is the actual choice made and, potentially, an infinity of counterfactuals, the ghosts of alternatives unexplored. In the froth model, however, there are no counterfactuals: all possibilities are actualized! Stepping back to the God's-eye view, we can see that this means there is no freedom, no shadowy "otherwise." Those "otherwises" actually exist in-- as-- other worlds.

Let's simplify the situation and pretend that at moment M, when Kevin makes a choice, reality suddenly switches to the froth model, and that only Kevin is the generator of universes. What this means, from the God's-eye perspective, is that Kevin is a being whose true shape spreads across a multiverse and resembles a great, branching structure. That structure contains no potentiality, because every single one of Kevin's choices is actualized in some universe somewhere. The shape of this structure is therefore fixed: the branch-Kevin, taken as a whole, is not free. If we follow Kevin along only one world-line, we can see how he might think of himself as free-- how, from his limited perspective, he might come to regret the would-haves and could-haves in his life. But Kevin in his entirety, the infinitely ramified Kevin, isn't free at all: his plural existences cover all possibilities, leaving no counterfactuals.

I somehow doubt that reality is this complicated. I may be wrong, but Occam's Razor is quite persuasive: it's more fruitful to think we all inhabit a single, non-frothing reality, and that counterfactuals, whatever they are and whatever their ontological status, drop away as we pass through each moment of choice.* It also makes little sense, thermodynamically speaking, to say that we, or our decisions, somehow create whole universes. Easier to adopt the creaturely view that we arise out of a universal matrix, retain some coherence for a time, and then slough back into the cosmic churning-- scattered and dissipated, and never to return exactly as we were.

In conclusion, then: while I found Pullman's trilogy to be a great read, it may have failed in the exploration of one of its most central ideas-- the notion of a ramifying multiverse. The neat conclusion of the trilogy did not take the metaphysic seriously enough, and as a result, the conclusion rang false.





*The same could be said for quantum-level fluctuations in the structure of abiotic matter. Why should sentience be the sole producer of universes?


_

Friday, March 30, 2012

do you have free will?

Sam Harris, speaking at Caltech, thinks you don't.

Harris's points seem almost to be grounded in Indian philosophy:

• Consciousness is the one thing that can't be illusory.
• The self, meanwhile, is an illusion.*
• Decisions, being based on previous states of affairs that include both previous decisions and random factors, cannot be parsed in such a way as to reveal free will at any point in the decision-making process.

There's more going on in this talk-- much more. If you find yourself with about 80 minutes to spare, I highly recommend watching Harris's spiel and the brief Q&A period that follows it.

My own sense that I have free will is both strong and undeniable, but Harris makes a pretty good case for the idea that a combination of deterministic and random factors can never be a recipe for freedom in the cherished philosophical sense, i.e., that I am somehow the "author" (Harris's term) of my actions. I wish he'd had more time to tease out the moral implications of this way of thinking. The talk heads, somewhat fuzzily, in the direction of emphasizing compassion and understanding-- especially regarding violent criminals-- as core values in this new, post-libertarian ethos, but Harris's spiel does little to unpack these concepts.

I approach these ideas with caution, partly because I'm extremely wary of attempts at social engineering. When people propose new moral paradigms, I feel as if I'm witnessing a sort of top-down attempt at restructuring human interaction. Of course, Harris isn't seriously proposing a thorough, comprehensive reparadigming; the lack of detail in his talk is enough to make that clear. But as a prominent author and respected neuroscientist, he's in a position to influence many people, and his facility for accessible explanations means he can insert his ideas into the pop-cultural nomos with ease. There is indeed a top-down dynamic at work here, and it's worrisome.

All of this has made me want to read more Herbert Fingarette. Fingarette has done a lot of work in the areas of freedom and responsibility, and I think he comes down on the side of moral agency: there is some sense in which we are morally responsible for what we do. He talks about two senses of the word "responsibility": (1) being the locus of an action, and (2) being an accountable moral agent. The first sense applies when we think of, say, a bear attacking someone: no one seriously attributes malice to the bear. The second sense is more in line with how we approach premeditated murder: the killer is not only the enactor of the murder; he is also someone who can be held accountable for having done wrong.

Harris's way of thinking detracts nothing from sense (1), but it certainly complicates our evaluation of sense (2). I may watch this talk again soon. If I do, I'll likely have more to say on the matter.



*This is somewhat unfortunately phrased, since the term "illusion" requires a self that grounds the perspective from which illusions can be perceived. Harris might have done better to say that the self doesn't exist.


_

Friday, March 23, 2012

"How does the brain secrete morality?"

Here.

Interesting excerpt:

“The brain secretes thought as the liver secretes bile,” asserted 18th century French physiologist Pierre Cabanis. Last week, the Potomac Institute for Policy Studies convened a conference of neuroscientists and philosophers to ponder how our brains secrete thoughts about ethics and morality. The first presenter was neuroeconomist Gregory Berns from Emory University whose work peers into brains to see in which creases of gray matter those values we hold sacred lodge. The study, “The Price of Your Soul: neural evidence for the non-utilitarian representation of sacred values,” was just published in the Philosophical Transactions of the Royal Society B.

Philosophers often frame arguments over the bases of ethics in terms of deontology (right v. wrong irrespective of outcomes) and utilitarianism (costs v. benefits of potential outcomes). Both utilitarians and deontologists would argue that it is wrong to kill innocent human beings. A utilitarian might tote up the costs of being caught in murder or the harms to a victim’s family, whereas a deontologist would assert it is moral duty to avoid killing the innocent. For most people, a utilitarian reckoning in this case seems cold and psychologically broken (e.g., the kind of calculation that a psychopath would make). The researchers define personal sacred values as those for which individuals resist trade-offs with other values, particularly economic or materialistic incentives.

It is this distinction that Berns probes using functional magnetic imaging (fMRI) to see in which parts of subjects’ brains their moral decision-making is localized. Such scans identify areas of the brain that are activated by measuring blood flow.

I've generally heard the dichotomy referred to as deontology versus consequentialism. The first term comes from the Greek deon, which means "duty," a beloved concept of the philosopher Immanuel Kant, who wove duty into his Grounding for the Metaphysics of Morals. (The term ontology comes from a different Greek root: on or ontos, which means "being" or "existence." Don't confuse ontology with deontology; they aren't the same animal!) There is, when you think about it, something deontological about the consequentialist stance, and there's also something consequentialist about the deontological stance. As they might say in Zen Buddhism, the concepts are not-two (不二), i.e., they're distinct yet inseparable-- nondual.


_

Friday, February 17, 2012

a Philosophy of Religion course

In 2010, I created a syllabus for teaching my very own Philosophy of Religion course. Here it is, available through Google Docs. The original intent was to use the syllabus as a sample to apply for a full-time position at local community colleges, but the syllabus itself is solid enough for me to use it as the framework for an actual course in philosophy of religion.*

I had wanted to make this 16-week course available back in January for people wishing to learn on a face-to-face basis, but there didn't seem to be any interest (of course, my readership at the time was half of what it is now; the blog is growing!). If you're interested in learning about the philosophy of religion with me via Skype, however, I'm willing to teach you. The cost for the course is $286 (see here for an explanation of the rate), plus the cost of the two textbooks, both of which are available through various online sources.

As currently set up, the course assumes 3 hours per week for 16 weeks. Right now, my best available teaching day is Sunday; I'd recommend having the session between meals, from 2PM to 5PM, Eastern time. Skype will allow me to handle several callers at once, if I'm not mistaken, so we can have an actual class: me plus three or four students.

If you're in a different time zone, I hope you can rearrange your own schedule to fit this time frame. If not, we can see about arranging something privately, with the caveat that we stick to 3 hours a week, and that we keep strictly to whatever schedule we decide upon. For example: if we choose to have classes at 10AM every Monday, Wednesday, and Friday, we will not shift suddenly to 11AM during the third week of the course. I'm not a fan of wishy-washy behavior once a schedule has been established. Of course, there may be unforeseeable circumstances (e.g., my company occasionally schedules 11AM staff meetings).

General information about the Philosophy of Religion course has been available on this blog at this link. The syllabus itself provides more detail. If you're unclear on how to register, click the "Rates and Registration" tab under the banner, then click the "How to Register" link.

Just to put my price into perspective: $286 for 16 weeks' worth of education is extremely cheap. At 3 hours per week, that's 48 hours of class time. Divide $286 by 48 hours, and you get an hourly rate of $5.96. Who's insane enough to charge only $5.96 per hour for tutoring? Only someone who's more interested in cosmic questions than in money, that's who.
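
(If you'd like to check my arithmetic, a trivial sketch:)

weeks, hours_per_week, tuition = 16, 3, 286
print(round(tuition / (weeks * hours_per_week), 2))   # 5.96 dollars per hour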

Get a few friends to sign up along with you, and let's get this thing rolling!




*A note on capitalization: whenever I refer generically to philosophy of religion, I leave the phrase uncapitalized. When I refer, however, to the name of the course I'd like to teach, I capitalize the phrase. No inconsistencies here.


_

Friday, February 10, 2012

agree and disagree

I've been a faithful reader of the writings of Dr. William Vallicella for years. He and I have some fundamental disagreements, but I admire the clarity of his writing and can appreciate the reasonableness of his positions. His recent post on Daniel Dennett, anthropomorphism, and the "deformation" of the God-concept offers a good example of how I can read a "Vallicellian" essay and come away both agreeing and disagreeing with its various claims.

A bit of background: Vallicella is a theist, i.e., he believes that ultimate reality is personal. Regarding the status of human beings, he advocates a point of view that he styles ontotheological personalism. The onto- comes from the Greek on/ontos, which means "being/existence." (The terms ontology and ontological are central to most Western philosophy.) The personalism in question is, roughly, the idea that there is something about human beings that is irreducibly personal, i.e., people cannot be explained fully by scientific/empirical examination and analysis; their personhood can't be broken down into smaller parts. This personalism has its being (ontos) grounded in God (theos): hence ontotheological personalism.

This puts Vallicella in conflict with scientific atheists who believe, like philosopher Daniel Dennett, that the human mind can be explained in purely physical terms (i.e., brain activity). On his blog, Vallicella routinely critiques physicalism, the philosophy of mind that says The mind is what the brain does. Lately, he has also been writing on the spectrum of possible God-concepts, ranging from a God that is utterly physical and totally anthropomorphic to a God that is so depersonalized as to be no more than an abstract concept. Vallicella wishes to avoid these two extremes.

My own theological orientation is far different from Vallicella's. While I consider myself Christian, this is more of a sociological designation than a theological one: I've been too steeped in Asian philosophy to be a theological Christian. There's very little, in terms of Christian doctrine, that I literally believe; my own sympathies, at this point, are mostly with scientific skeptics and philosophically inclined Taoists and Buddhists; I haven't been a classical theist for a long time (I'd call myself a nontheist, i.e., someone for whom the question "Does God exist?" has no rational, discursive answer). I see reality as an intercausal being-in-process and take a very dim view of most shows of religious piety. My own philosophy of mind is probably much closer to Dennett's than it is to Vallicella's: I see the mind as something that arises from the brain; it is, in fact, utterly dependent on the brain for its existence. At the same time, I'm not so naïve as to think that the brain's activity is totally predictable: cogitation, being a supervenient phenomenon (i.e., something that arises from a lower stratum of being), follows its own rules. As author Robert Pirsig analogized it in his book Lila (I'm taking some liberties, here): it's like the difference between computer hardware and software-- each follows its own rules, but software depends on the hardware for its functioning.

With that background in place, let's turn to Vallicella's post on Dennett, anthropomorphism, and the "deformation" of the God-concept. He writes:

One of the striking features of Daniel C. Dennett's Breaking the Spell: Religion as a Natural Phenomenon (Viking 2006) is that Dennett seems bent on having a straw man to attack. This is illustrated by his talk of the "deformation" of the concept of God: "I can think of no other concept that has undergone so dramatic a deformation." (206) He speaks of "the migration of the concept of God in the Abrahamic religions (Judaism, Christianity, and Islam) away from concrete anthropomorphism to ever more abstract and depersonalized concepts." (205)

Why speak of deformation rather than of reformation, transformation, or refinement?

I think Vallicella has a point, here. Atheists, especially these days as the so-called New Atheism gains in popularity, seem unable to acknowledge that modern folk might actually conceptualize ultimate reality in ways that are philosophically and morally sophisticated. This is unfortunate, because it does indeed mean the atheists are furiously attacking straw men as opposed to real targets. There can't be any real dialogue when people insist on talking past each other. I'd add that this problem isn't confined to the atheists: religious folk too often attack science before they've made the effort to understand it. One example might be the Christian fundamentalist's dismissal of evolutionary theory because "the probability that development X or Y could have occurred is infinitesimally small." This sort of argument shows great ignorance about the massive timescales on which biologists have to think when pondering the phenomenon of evolution. No legitimate scientist believes evolution is "just a theory": there are theories of how evolution works, but evolution itself is a fact. (To his credit, Vallicella has no problem with the idea that humans evolved. He's a philosophical theist, not a religious fundamentalist.)

Later on, Vallicella writes:

Dennett's view is that the "original monotheists" thought of God as a being one could literally listen to, and literally sit beside. (206) If so, the "original monotheists" thought of God as a physical being: "The Old Testament Jehovah, or Yahweh, was quite definitely a super-man (a He, not a She) who could take sides in battles, and be both jealous and wrathful." (206, emphasis in original). The suggestion here is that monotheism in its original form, prior to deformation, posited a Big Guy in the Sky, a human being Writ Large, something most definitely made in the image of man, and to that extent an anthropomorphic projection.

What Dennett is implying is that the original monotheistic conception of God had a definite content, but that this conception was deformed and rendered abstract to the point of being emptied of all content. Dennett is of course assuming that the only way the concept of God could have content is for it to have a materialistic, anthropomorphic content. Thus it is not possible on Dennett's scheme to interpret the anthropomorphic language of the Old Testament in a figurative way as pointing to a purely spiritual reality which, as purely spiritual, is neither physical nor human. Dennett thereby simply begs the question against every sophisticated version of theism.

Dennett seems in effect to be confronting the theist with a dilemma. Either your God is nothing but an anthropomorphic projection or it is so devoid of recognizable attributes as to be meaningless. Either way, your God does not exist. Surely there is no Big Guy in the Sky, and if your God is just some Higher Power, some unknowable X, about which nothing can be said, then what exactly are you affirming when you affirm that this X exists? Theism is either the crude positing of something as unbelievable as Santa Claus or Wonder Woman, or else it says nothing at all.

Either crude anthropomorphism or utter vacuity. Compare the extremes of the spectrum of positions I set forth in Anthropomorphism in Religion.

Here, too, I agree with Vallicella's analysis of Dennett. This is indeed a popular form of attack on theism. Dennett might be accused, here, of committing the fallacy of the excluded middle: he's offering two stark alternatives on the (false) assumption that no middle-ground option is available.

Thus far, I've been in agreement with Vallicella, not because I'm a theist as he is, but because his accusations against Dennett strike me as reasonable. Dennett could have strengthened his own arguments by targeting a more philosophically sophisticated concept of God. Attacking the God of scriptural literalists is far too easy. (Dennett might shoot back that the world is full of scriptural literalists, which would be a fair point!) But Vallicella also makes some claims with which I disagree. To wit:

Dennett's Dilemma -- to give it a name -- is quite reasonable if you grant him his underlying naturalistic and scientistic (not scientific) assumptions, namely, that there is exactly one world, the physical world, and that (future if not contemporary) natural science provides the only knowledge of it. On these assumptions, there simply is nothing that is not physical in nature. Therefore, if God exists, then God is physical in nature. But since no enlightened person can believe that a physical God exists, the only option a sophisticated theist can have is to so sophisticate and refine his conception of God as to drain it of all meaning. And thus, to fill out Dennett's line of thought in my own way, one ends up with pablum such as Tillich's talk of God as one's "ultimate concern." If God is identified as the object of one's ultimate concern, then of course God, strictly speaking, does not exist. Dennett and I will surely agree on this point.

But why should we accept naturalism and scientism? It is unfortunately necessary to repeat that naturalism and scientism are not scientific but philosophical doctrines with all the rights, privileges, and liabilities pertaining thereunto. Among these liabilities, of course, is a lack of empirical verifiability. Naturalism and scientism cannot be supported scientifically. For example, we know vastly more than Descartes (1596-1650) did about the brain, but we are no closer than he was to a solution of the mind-body problem. Neuroscience will undoubtedly teach us more and more about the brain, but it takes a breathtaking lack of philosophical sophistication — or else ideologically induced blindness — to think that knowing more and more about the physical properties of a lump of matter will teach us anything about consciousness, the unity of consciousness, self-consciousness, intentionality, and the rest.

This is where Vallicella and I part ways. First, I find his dismissal of Tillich's theology to be overly hasty. Tillich was, in my opinion, saying something quite meaningful in defining God as "ultimate concern." The phrase was never intended to mean, the way his detractors argued, that "If golf is my ultimate concern, because I think about it all the time, then golf is effectively my God." The word "ultimate," as used by Tillich, still refers to that which lies at the uttermost edge of reality. Golf, while entertaining, doesn't fit that criterion. The term "concern," too, was well chosen, for this is what human beings, at their best, are supposed to embody: concern for others, for the world, for all of existence. Concern involves an outward turn-- what theologian John Hick might call a shift from self-centeredness to Reality-centeredness. Ultimate concern, then, is concern about the ultimate. How is this so different from what other philosophers and mystics have said and written?

I also disagree completely with Vallicella's characterization of neuroscience. For him, neuroscience will never "teach us anything about consciousness." The reality, though, is that neuroscientific theories are paving the way for us to make machines-- robots-- whose behaviors are becoming increasingly complex. If one definition of "intelligence" is "problem-solving ability," then by that standard we have been building increasingly intelligent machines for years. Soon, intelligence will come to mean more than the ability to win at chess or participate in a Jeopardy! competition: it will mean the advent of machines that react without confusion in fluid social or physical situations. While true machine consciousness is probably a long way off, I don't see its realization as an impossible goal. Intelligence isn't consciousness, but it's a vital component of consciousness. One day, a machine is going to stare at us with the same speculative curiosity we train on it.

My point is that the increasing complexity of machine behaviors is the result of scientific theories that are grounded in a naturalistic (or, more precisely, physicalist) philosophy of mind. If mind is indeed utterly dependent on matter, as I believe it is, then we will one day be able to arrange matter in such a way as to form minds. This won't convince the diehard substance dualists,* of course; they'll go on believing that mind is somehow independent of matter without ever being able to explain how a particular mind is connected to a particular body. Unfortunately, their philosophy of mind can promise no progress: you can't strive to create an artificial mind if you believe such a thing is inherently unachievable.

As I wrote in Water from a Skull, the problem for people in Vallicella's camp is that they are engaged in willful ignorance about the nature of mind. They spend their time critiquing the constructive efforts being made by scientists while offering no new insights of their own. Their stance is little more than a case against physicalism; there's no real case for substance dualism. In fact, for their stance to hold water, they have to deny that mind-- consciousness-- has a knowable nature. The so-called "zombie" problem in philosophy of mind makes this clear.

Imagine a being that looks and acts perfectly human, yet has no actual consciousness-- no real feelings, no true sense of selfhood, none of what comes with possessing an ego. It might cry, but that act is merely an observable behavior, indicating nothing about the being's inner reality. It might laugh at jokes, but that's no indication, either, that it's experiencing the humor behind the joke. Philosophers call such a hypothetical being a zombie, and there's a long-running debate over whether zombies could possibly exist. The TV series Battlestar Galactica (and, before it, the movie Blade Runner) dealt with the zombie problem. Are the Cylons, who were created by humans and who look and act just like them, actual persons? Or are they "toasters"-- lifeless robots that merely simulate humans? The TV show ends up promoting the idea that Cylons are people, too: they have thoughts, feelings, inner lives. They're capable of love and hate; they have dreams and ambitions.

Let's snap back to our own reality. Imagine an AI (artificial intelligence) expert talking with a substance dualist about the possibility of creating Cylon-like artificial life. "All you'll end up creating is a zombie!" declares the substance dualist. "It won't have sentience! No feelings, no real self-awareness, no interiority!" "And you know this how?" asks the AI expert. "Can we ever design a test to detect consciousness?" "No!" blusters the dualist. You see, the substance dualist is trying to argue two things at once: (1) that we'll never know whether we've created a true machine consciousness, and (2) that whatever we create will be a zombie. Obviously, these two prongs are contradictory-- you can't claim that machine consciousness is undetectable while also claiming to know that this particular machine lacks it-- but let's concentrate on the first prong.

Dualists can't argue that "we'll never know whether the being's really conscious" unless they're convinced that the nature of mind is essentially unknowable, i.e., that we'll always be ignorant about mind. If you want to make a test to determine whether someone has a disease, you have to know the markers for the disease in question: you have to know something about the disease's nature. The more you know, the more accurate the test. By the same token, if you want to know whether something has a mind, you have to know something about the nature of consciousness. It's a lame cop-out to argue that we can never know what mind is, but that's basically what substance dualists have been doing for years, and it's the only argument they've got. All the other arguments they make against physicalism are in support of this basic thesis.

Vallicella's positions are always well thought-out and reasonable, but there are some areas in which he and I are doomed, I think, to eternal disagreement. Philosophy of mind is one of those areas; theism is another. He thinks the physicalists are blinded by their scientistic ideology; physicalists see him (and substance dualists in general) as deliberately ignoring the evidence of science. I'm willing to grant that the mind remains a mystery, but I believe the mystery isn't insoluble.

It's possible to respect people with whom one disagrees, and even to learn from them. To any students who might have taken the time to read this meditation: I hope you find yourselves challenged and invigorated by the different points of view that you'll run across in your high school and college readings. I hope you encounter thinkers who make you angry, who challenge your assumptions, who shock you into looking at the world from a different perspective. I hope you enrich your lives by incorporating those perspectives into your own. Life is all about growth and constructive change, but sometimes the best change involves the tearing down of old mental paradigms so that new, more robust paradigms can replace them. I hope your perspective matures as you wrestle with various authors, and that you never dismiss the entirety of a thinker's argument simply because you dislike parts of it. A mature viewpoint involves an appreciation of the world's complexity. Beware black-and-white solutions to complicated problems.

As process philosopher Alfred North Whitehead said: "Seek simplicity, and distrust it."

*Substance dualism, a perspective most famously laid out by philosopher René Descartes (he of cogito ergo sum fame), is the belief that mind and matter are substantially different from each other. Thoughts are mental phenomena, not physical. Substance dualists come in different shapes and sizes; many of them would argue that there is some sort of mind-brain connection, but even the dualists who acknowledge this connection would say that there remains a fundamental difference between, as Descartes called them, res cogitans (thinking substance) and res extensa (extended substance). Vallicella has never overtly called himself a substance dualist, but he repeatedly expresses sympathy with the substance-dualist point of view.

