Not Today, Santa: I Have Seen the Father of Lies, and He is Us

“You sit on a throne of lies.”
—Buddy the “Elf”

It being the Christmas season and all, this past Friday Family Movie Night found my little nuclear family all huddled in obeisance before the flat-screened iconostasis, watching the 2003 Will Ferrell flick Elf.  Part of what makes the movie so brilliant to me is its ability to shine an uncomfortable light on current social issues, even as my kids have no idea they’re watching anything other than a light-hearted fish-out-of-water story combined with a sweet, if problematic, love interest and a Santa-Claus-is-real! exuberance. 

For instance, just do a quick Google search for whether the derogatory term the human-elf Buddy applies to himself early on in the film—“cotton-headed ninny muggins”—is racist. I grew up in the Deep South on the east coast of the United States, where there is a history of representing a stylized dark-skinned African American slave child dressed in rags and referred to with the derogatory term pickaninny. Buddy’s self-directed put-down, which elicits gasps of shock from the other elves in Santa’s workshop, always reminds me of that particular ugly term, with which it enjoys obvious phonetic and possibly conceptual connections. And to judge by the Google search results, plenty of other people out there in internet-land find something suspiciously racist-sounding in the epithet as well.

Then there’s the clever way in which adult Buddy’s plight at the North Pole follows the familiar contours of institutionalized economic and racial discrimination that tends to lock disfavored populations into a downward spiral of social immobility. He’s racially different from the rest of the happy, industrious, GDP-increasing North Polians, who consider their particular emic racial characteristics a necessary precondition for economic success in their rarefied world: “It’s a job only an elf can do. …Our nimble fingers, natural cheer and active minds are perfect for toy-building.” Not surprisingly, then, we soon find a clearly struggling, unhappy, and worried Buddy busily but clumsily constructing Etch-A-Sketches while his elven supervisor stands behind him, peering grumpily over the maladroit worker’s shoulder and recording what is doubtless the worst sort of evaluation on the clipboard he clutches before him. Buddy next finds himself “redistributed” and shunted off to an isolated basement-like workspace where he is forced to test Jack-in-the-Boxes for quality control, obviously terrorized by a boring, repetitive job that still manages to entail the worker’s traumatization at the inevitable moment each “successful” toy springs its menacing clown from an inviting, brightly colored Pandora’s box. The scene makes me think of the YouTube TED talk by artist and media activist James Bridle in which he discusses how “underpaid, precarious contract workers without proper mental health support” are being employed by YouTube and Facebook to view and moderate the loads of questionable and even outright disturbing content uploaded to the services in an effort to filter out material that could cause real, lasting trauma to kid viewers. Bridle notes that such menial workers are themselves “being damaged by it as well” over the course of their bleak employ (see video here, at roughly the 12:10 mark). The irony is not lost on me that the class of YouTube kids’ videos with which Bridle opens his discussion consists precisely of clips featuring nothing but an unseen person opening scores of chocolate Kinder Surprise Eggs.

Then, when our newly demoted Buddy discovers the horrible secret of his ethnic and cultural origins, the all-father Santa also lets him in on the secret that his real, human father is on Santa’s naughty list: a not-so-subtle nod to the way hegemonic discriminatory systems lay claim not just to their victim’s awful present, but even coopt their past in an effort to enforce oppressed individuals’ subservient status as if a matter of the ineluctable working out of unsentimental, disinterested fate. It’s as though all the other “normal” elves were inwardly tsk-tsking, saying to themselves: Figures. What else could you expect from someone whose genetically inferior progenitors are just plain naughty? It comes as but little surprise that Buddy, equally out of place and stigmatized in the human world of New York City where he is scourged in menial employment-hell yet again in a corporate mail room, soon finds himself standing atop a city bridge, contemplating suicide. Merry freaking Christmas.


The aspect of the film I really want to concentrate on for this post, however, comes to the fore near the end of the picture, when Santa Claus, portrayed by a somewhat sinister and irascible Ed Asner, is talking with Buddy’s younger half-brother and remarks: 

“Christmas Spirit is about believing, not seeing. If the whole world saw me, all would be lost.” 

The comment comes at a moment when the younger boy is seeking to understand why Santa’s sleigh, which runs on “Christmas spirit,” won’t fly and, when he finds out the source of its fantastic power, inquires as to why Santa, like the Abrahamic God, doesn’t just show himself and remove all reasonable doubt, thereby ensuring a never-ending supply of energy for his evangelizing peregrination. 

The New Testament epistle to the Hebrews, chapter 11, verse 1, defines the word and concept of “faith” as “the conviction of things not seen.” In the movie version of astronomer and physicist Carl Sagan’s 1985 novel Contact, the roguishly handsome religious apologist and insider political player portrayed by Matthew McConaughey and derogated as “Father Joss” converses about faith and empirical proof with scientist Ellie Arroway (played by Jodie Foster) on a Washington, D.C. balcony with stunning views of the obelisk to the eponymous statesman. When Arroway protests that she’d need such proof before she could assent to the prospect of a universe with God at its helm, Joss asks whether she loved her now deceased father who raised her singlehandedly (of course she did!) and, when she responds in the affirmative, commands her: “Prove it!” As an intellectual “argument,” this rhetorical tack naturally falls disastrously flat, but that’s precisely because it is not the appeal to reason the scientist was demanding; rather, it represents a call-out to pure emotion.

Here, the religious figure seems to implicitly acknowledge what moral psychologist Jonathan Haidt makes explicit in his elephant-and-rider analogy to represent the relationship and relative influential weight of human emotional and autonomic responses in proportion to our rational faculties. Cognitive scientists Steven Sloman and Philip Fernbach make clear that so-called “sacred values” are unique in not being amenable to the questioning process designed to elicit the limits of our “illusions of explanatory depth” and provoke critical reflection on and adjustment of beliefs about the world. This resistance, even imperviousness, to change comes about because “sacred values” emerge first and foremost from the irrational elephant of emotion, not the desperate attempts of its reasoning rider to rein the massive animal in and steer it whichever way he might. You feel the presence of God in your life, just like you do the depth of your love for a dearly departed relative or friend. This is precisely why religions, like all good marketing schemes nowadays, seek to get at you while you’re young, so that they can invest the very idea of God with fond childhood memories and experiences of Durkheim’s “collective effervescence” resulting from throwing what Haidt calls “the hive switch” to leave the feelings of belonging and wholeness attendant upon group membership and worship etched indelibly on your impressionable mind. The reason religious faith is able to reverse the usual epistemological processes by which we form beliefs and check them empirically and experimentally against a reality that exists outside of our heads is that it rests primarily on feelings instead of reason.


Once upon a time, I was a student of formal linguistics. As part of my studies of presuppositions and especially the split between what Finnish-American linguist Lauri Karttunen called “hard factive” predicates like regret and “semifactives” like know, I concentrated on some intriguing facts about how expressions of perception, belief formation, and emotional reaction to knowledge work both syntactically and semantically. 

Consider the 1999 Groove Armada dance hit “I see you baby.” The key, repeated refrain from that song is: “I see you, baby, shaking that ass.” The formal syntactic term to describe the structurally ambiguous string of words to the right of the verb see in this sentence is a “small clause.” There’s a question as to whether the pronoun you forms the direct object of the verb see, with the participial string shaking that ass as a modifying element following the appositional endearment baby separating pronoun and modifier, or whether the entire string you, baby, shaking that ass is really what is being seen as a single, whole event: like the equivalent of the sentence You are shaking that ass, only embedded within an utterance reporting someone else’s direct, personal witness. The fact that you can rearrange the sentence into the inverted pseudo-cleft formation You shaking that ass is what I see, baby makes the latter interpretation more likely. It’s striking that you can just as well replace the participle shaking with the bare infinitive form shake in both the original and clefted versions of the sentence without producing any degree of ungrammaticality: I see you, baby, shake that ass; You shake that ass is what I see, baby. 

Verbs of sense perception like see also permit their small clause objects to contain stative, as well as eventive, types of verbs. Groove Armada could have had the chorus run I see you, baby, being all sexy as… or some such, in which case the bare-infinitive version of the small clause complement becomes a little more difficult to accept as well-formed: ?I see you be all sexy as…. [Here, the initial question mark indicates a sentence whose grammaticality is strained or doubtful, though not flatly unacceptable.] But for the fact that it’s difficult to talk of hearing someone shake their ass or being as sexy as something else, other verbs of perception like hear function similarly to see.

The semifactive verb know also allows small clauses as objects, though of a fundamentally different sort. Know and other semifactives like think and believe will not permit eventive-type small clauses in the same way as verbs of perception. *I know/think/believe you, baby, shaking that ass is completely unacceptable. [Here, the initial asterisk serves to mark an ungrammatical sentence.] Even the stative *I know/think/believe you, baby, being all sexy as… doesn’t work so long as the verb in the complement remains in its participial form.  You can salvage this latter expression, however, by replacing the participle being with the full infinitive to be: I know/think/believe you, baby, to be all sexy as…, though in standard modern English this usage tends to be perceived as somewhat archaic. At times, it helps to rephrase such expressions as questions to see that they actually do work grammatically: Do/did you know/think/believe him to be sexy? 

There is a way to get an eventive verb into the complement of a semifactive expression, as in I know/think/believe you, baby, shake that ass, although, when you do that, the predicate describes not an unfolding event per se, but something more closely resembling an idea about an event. You can see this plainly by, one: realizing that you can freely insert the propositional complementizer that into the sentence directly after the semifactive verb (I know/think/believe that you, baby, shake that ass); two: noticing that you cannot keep the appositive baby in such sentences without the optional that (*I know/think/believe you, baby, shake that ass) unless you really want to start playing around with pragmatic pauses, emphatic stress, and intonation patterns; and three: understanding that adding in a modifying expression to the complement makes its propositional, fact-like character even more explicit (I know/think/believe that you, baby, shake that ass well) [Notice that, now, you’re reading the sentence with an intonation pattern pretty similar to what you would have had to have used in order to make the previous, ungrammatical sentence better! Note, too, that it is becoming impossible to keep reading these sentences without significant pauses around the appositive baby in a way not at all required for its use in the original song lyric with just the perception verb see!]. 

These semifactive verbs take small clauses that express ideas, not events. What’s truly fascinating about them is that, despite the fact that the small clause objects of semifactive verbs seem like they express factual information, they actually express only ideas about potential facts. That is, these verbs denote a mental process logically prior to the formation of factual knowledge, something you can clearly see by trying to substitute the explicitly factive complementizer phrase the fact that in for the simple complementizer that: *I know/think/believe the fact that you, baby, shake that ass. Doesn’t quite work, does it?

We can now contrast the semifactives with full, so-called “hard” factives like regret. Full factives don’t permit small clause objects whatsoever, no matter what form their verbs take. Instead, they require full sentential complements with the explicit complementizers that and the fact that: *I regret you, baby, shaking that ass/being sexy; *I regret you, baby, shake that ass/(to) be sexy; I regret (the fact) that you, baby, shake that ass/are sexy. 

This sliding scale of increasing syntactic and semantic complexity patterns together with the increasing complexity of the epistemological processes the different expressions denote. Perceptual predicates like see, hear, sense, and even catch in the sense of ‘to come upon someone in the act of something’ all take bare infinitive or participial small clauses denoting either events or states of being, while semifactives take either non-eventive full infinitive small clauses or even outright sentential complements (all denoting ideas), and hard factives take full sentential complements denoting complete propositions only. This syntax matches the burgeoning epistemology, with perception of events and states giving way to initial idea formation on the way to becoming full “knowledge” expressed as sentential facts. The fact that verbs of perception like see and hear also have a second, semifactive sense and use perhaps obscures the picture somewhat (I see/hear ((*the fact) that you, baby, shake that ass (well))), but you can still get the lay of the land, I think.
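For readers who would like to see this hierarchy laid out schematically, here is a minimal toy sketch in Python (my own illustration, not a formalism drawn from the linguistics literature): it simply encodes the verb classes and complement types from the examples above and re-checks a handful of the grammaticality judgments already given.

```python
# Toy encoding of the complementation hierarchy sketched above. The verb
# classes, complement types, and judgments simply restate the examples in
# this post; the data structure itself is purely illustrative.

COMPLEMENT_TYPES = {
    # Perception verbs: eventive or stative small clauses, participial or bare infinitive.
    "perception": {"participial_small_clause", "bare_infinitive_small_clause"},
    # Semifactives: full-infinitive small clauses or that-clauses (ideas, not events).
    "semifactive": {"to_infinitive_small_clause", "that_clause"},
    # Hard factives: only full sentential complements ("that" / "the fact that").
    "hard_factive": {"that_clause", "the_fact_that_clause"},
}

VERB_CLASS = {
    "see": "perception", "hear": "perception",
    "know": "semifactive", "think": "semifactive", "believe": "semifactive",
    "regret": "hard_factive",
}

def acceptable(verb: str, complement: str) -> bool:
    """True if the verb's class is recorded as accepting this complement type.

    (The secondary, semifactive sense of see/hear noted above is not modeled.)
    """
    return complement in COMPLEMENT_TYPES[VERB_CLASS[verb]]

# A few of the judgments from the post:
assert acceptable("see", "participial_small_clause")          # I see you, baby, shaking that ass
assert acceptable("see", "bare_infinitive_small_clause")      # I see you, baby, shake that ass
assert not acceptable("know", "participial_small_clause")     # *I know you, baby, shaking that ass
assert acceptable("know", "to_infinitive_small_clause")       # I know you, baby, to be all sexy as...
assert not acceptable("know", "the_fact_that_clause")         # *I know the fact that you shake that ass
assert acceptable("regret", "the_fact_that_clause")           # I regret the fact that you shake that ass
assert not acceptable("regret", "bare_infinitive_small_clause")  # *I regret you, baby, shake that ass
```

Nothing hangs on this particular encoding; the point is simply that the three verb classes line up with three increasingly “propositional” kinds of complement.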

By the way, this same pattern holds true, mutatis mutandis, in languages other than English. As a qal-wa-homer (a fortiori) argument in support of this contention, let me mention that this increasing syntactico-semantic complexity has also been observed for the perceptual, semifactive, and factive verbs of numerous languages in the northeast Caucasus, an area known for linguistic systems that include a vast array of some of the most typologically unusual and marked features and patterns anywhere on the globe. If the pattern holds true for such “exotic” and far-flung languages as those in the Nakh-Daghestanian family, how much more should one expect it of languages syntactically closer to home? Because I know I have at least one regular French reader of the blog, let me demonstrate with that language: Je te vois remuer les fesses; *Je te sais/pense/crois remuer les fesses; Je sais/pense/crois (*le fait) que tu remues les fesses; *Je te regrette remuer les fesses; Je regrette (le fait) que tu remues les fesses. Be sure to notice in these sentences the language-specific feature of complement clauses for verbs of perception: the odd, seeming direct object you from the original Groove Armada lyric has essentially been sucked up into the main clause of the sentence, where the objective form te falls even before the main verb of perception itself! Note, too, the uneven distribution of that objective-case form te as opposed to the nominative form tu, which is restricted to use with semifactives and full factives.


To bring this abstruse linguistic discussion back around to the main issue of religion, the epistemological progression mirrored in the syntax and semantics of natural human languages as discussed above is not arbitrary, but rather quite natural. There’s a sense of simplicity and immediacy reflected in the near mono-clausality of sentences with verbs of sense perception—which themselves express vivid moments-in-action of when sense perceptions come crashing into awareness—that is entirely lacking in those with either semifactive or hard factive predicates. These latter classes of verbs express, respectively, the first formations of ideas on the way to possibly becoming facts, and emotional reactions after the fact to knowledge already presumed factual. And the fact that the whole sentences containing them are really compounded of two independent clauses is made fully explicit with the shift toward more finite verb forms in the complements and more elaborate complementizers marking them, either overtly in the syntax or at least in the phonology in the form of a pause.

As a special case of “knowing through faith,” itself a matter of arriving at “a conviction of things not seen,” religious epistemology effaces this clear progression, actually turning the interrelation between the stages of sense perception, belief formation, and knowledge-judgments on its head. When Santa Claus urges acceptance of the proposition that believing is somehow logically prior and necessary to sense perception in the form of seeing, he subordinates the very avenue by which we acquire the information we use to form beliefs and opinions on the way to knowledge in the first place to the prior possession of an already formed or forming opinion or belief!

To adopt this procedure as a practical heuristic requires the individual to already have a ready model or narrative into which she fits novel sensory and cognitive experiences, rather than a fundamental openness to novel sensory and cognitive experience which itself provides the data out of which a coherent narrative of the world is to be constructed. Framed in this way, narratives lay claim to our allegiance, obliging us to acceptance of truths prior to putting such truths to practical examination as urged by the scientific method of observation, hypothesis forming, and testing. Since stories used in this way demand that they be accepted a priori, a competition emerges, not just between competing narratives, stories, or models, but between narratives as a whole and the “lower-order” ways of knowing through sense perception and belief formation on the basis of observation, as well.   

At the dawn of Western literature, the mischievous shepherd Hesiod wrote in the prologue to his story of the origins of the gods—the so-called Theogony—about a curious warning he allegedly received regarding this competitive aspect to narratives, one issued from none other than the Muses themselves, no less: those daughters of Zeus whose task it is to inspire human beings to the heights of art, music, and literature, and other fields of endeavor and excellence. The Muses, Hesiod recalls, appeared to him one day as he was out shepherding his flock in the shadow of Mt. Helicon, home to two springs sacred to the Nine, both of whose waters were said to bring poetic inspiration to whoever should drink of them. The Muses invested Hesiod as a poet on the spot, he claims, capping off their initial address to him with this curious couplet: 

“We know how to speak many false things as though they were true;
But we know, when we wish, how to sing true things too.” (Lines 27-28)

The fickle goddesses promptly gave the shepherd a rhapsode’s staff and laurel crown and then “breathed into” him a “divine voice,” that he might make famous both things to come and things that were before. They instructed Hesiod to sing ever of the gods, but always to begin and end his songs by celebrating themselves. And then, with a wink and a nod, the newly minted poet ends his preamble with the befuddling line, “But why go on about irrelevant matters?” The Greek literally says “Why all this about oak and stone,” an ancient proverb or idiom for mere bagatelles.

Set at the beginning of a massive, epic-scale weaving together of local Greek traditions concerning the natures and origins of the gods, Hesiod’s proem can hardly be called irrelevant. The whole thing smells fishy. The Muses appear out of nowhere, hurling insults at the would-be poet  and warning him that the narratives they inspire may be either true or false-while-still-seeming-true. Then, having once received his marching orders along with some new duds, the poet promptly dismisses the whole affair as a passing fancy of the field to which sun-dazzled, hungry shepherds—“mere bellies,” the Muses call them—sometimes fall victim, as to hallucinations. Maybe what the shepherd actually did was climb Mt. Helicon for a respite from the mounting sun, pausing beside the spring Aganippe or Hippocrene to drink a few handfuls of some tainted, hallucinogenic refreshment. What was in those waters anyway? We might follow Classicist Louise Pratt in observing that all of this “seems to put Hesiod’s truth claim into a competitive context.” Telling stories, hearing stories, believing stories, disbelieving them—these are all, the Muses assure us, highly partisan affairs.


Aristotle once wrote that humankind is a “political animal” (Politics 1:1253a), meaning that we tend by nature to gather together in dense, stratified settlements, a pattern that seems inevitably to build up toward dwelling in cities, in Greek poleis (singular: polis), whence the adjective politikos and English derivatives politics and political. I would offer that we are also an enepic animal, from the Greek verb enepein, meaning to tell of or relate. I would have used the adjective epic to describe this basic human story-telling tendency, a modifier whose origin hearkens back to the same basic Greek verb in an unaugmented form. Yet that word has long since acquired additional senses and associations in English that make it unsuited for my present purposes. The point is: as basic as our human tendency toward sociability and gregariousness is, so also we possess—or rather are possessed by—another ingrained pull: toward story-telling, whether for pure entertainment or for education (to make sense of things and convey information)—or both.

Humanity’s enepic nature isn’t just a soft tendency, either. It’s hard-wired into us as a part of our neural makeup. In high school, I had the good fortune to be exposed to psychologist Julian Jaynes’ controversial bestseller The Origin of Consciousness in the Breakdown of the Bicameral Mind. In the book, the author develops a theory of consciousness based on the idea of bicameralism (literally, “two-chamberedness”): that there exists in the brain a basic division of cognitive functions, with one part of the mind in essence cast in the role of a speaker or narrator and another piece playing the part of the audience, doing the listening and, on rare occasions, obeying. The specific arguments Jaynes mounts in support of his idea, especially the historical timeline he urges for the so-called “breakdown of the bicameral mind,” have proven controversial. But the basic notion of a narrator part of us somewhere in our heads, spinning tales that we interpret as consciousness, resonates well with others’ research and ideas about conscious awareness.

In the 1970s and early 80s, neuropsychologist Michael Gazzaniga worked with split-brain epileptics and ended up hypothesizing the existence of a neural network in the brain whose function is very much that of story-teller. When patients suffer from particularly severe and violent seizures, one possible method of treatment is to surgically sever the dense bundle of fibers connecting the two hemispheres of the brain known as the corpus callosum. Since the corpus callosum provides the principal means for the brain’s hemispheres to communicate with one another, this callosotomy procedure has the effect of forcing localization of the seizures to one or the other half of the brain, thereby making them more manageable. But since human linguistic centers like Broca’s area, associated with the production of language, and Wernicke’s area, associated with processing language we hear, are largely lateralized in the left hemisphere, callosotomy also has the effect of isolating thoughts arising in the right hemisphere of the brain and preventing their thinker from articulating them in language.

Gazzaniga exploited this fact in devising an experiment for split-brain patients that separated their fields of vision and simultaneously presented two different stimulus images, one to each visual field. Since the optic nerves cross over at the bottom of the brain in the optic chiasm, what is perceived through the right field of vision is shunted directly off to the left brain for processing, while what is perceived through the left field of vision flows first into the right hemisphere. When the two hemispheres have been sundered from one another through callosotomy and the subjects’ fields of vision have been separated by a clever experimental apparatus, these differing visual inputs will remain forever separated from one another within the brain, with only one, that coming from the right field of vision, available for contemplation and articulation within the linguistic centers in the left hemisphere.

So when Gazzaniga showed two different visual images to split-brain patients—a chicken claw in the right field of vision and a snow scene in the left—and then asked subjects to pick from an array of objects possibly connected to the images in the stimuli those they felt best related to what they had seen, what he discovered surprised him. The patients’ left hands pointed to a shovel, which obviously related to the snow scene presented in the left field of vision but which was locked in the right hemisphere of the brain and could not be articulated. The patients’ right hands pointed to a picture of a chicken. When Gazzaniga asked the patients to explain why they had chosen those two objects, they replied: “Oh, that’s simple. The chicken claw goes with the chicken. And you need a shovel to clean out the chicken shed.” Some part of these patients’ left brain confabulated a perfectly plausible, but entirely incorrect, explanation for a behavior whose genesis lay trapped in their non-verbal right hemisphere. This part of their left brains created order out of an unruly experience by taking available data from both the environment and other accessible areas of the brain and weaving it into a coherent, if flawed, narrative about the world and their actions within it. 

Gazzaniga saw this function of the mind again in a patient he was treating who suffered from reduplicative paramnesia, the delusional belief that a location has been duplicated, exists in more than one place at a time, or has been moved to a different locale. His patient, though in a hospital in New York, insisted she was in fact back in her home in Freeport, Maine. When Gazzaniga tried to reason with her, asking why, if she were back in her house on Main Street, there were elevators right outside the door of her hospital room, the woman responded, without skipping a beat: “Doctor, do you know how much it cost me to have those put in?” (p. 49). Gazzaniga called the function or process of the left hemisphere that could integrate information on the fly and produce such astoundingly plausible, even if fictitious, narratives the “Interpreter.” The functions Gazzaniga identified with the Interpreter’s processes are largely coextensive with the phenomenon we know as the self, as consciousness.

In a 1995 essay on what it might mean for animals to be “conscious,” philosopher Daniel Dennett invokes a strikingly similar conception of consciousness. In an organism with a multi-layered neural make-up, there are many ways to account for what goes on inside. In the case of human beings, one possible way of accounting for the goings-on under the hood, so to speak, comes from the organism itself, in the form of a story whose narrator is some first-person sensation with which the organism itself personally identifies. As Dennett “crudely” puts it (his words, not mine!): “it is the story you or I will tell if asked.” Dennett suggests that in organisms like bats that lack the human capacity for language, the possibility exists that there is no such sense of a central narrative or narrator to explain the workings of the organism to itself and others: judgments are made and the bat acts, but there is no self-conscious way of expressing those judgments or explaining in language the resultant actions. Dennett’s description sounds remarkably like what Gazzaniga found to be happening inside the right hemispheres of his split-brain patients: in-coming perceptions impacted the brain and even spurred reactions on the part of the individuals, but the subjects had no conscious way of inspecting those perceptions and the motivations they drove, no way of expressing or explaining (or apparently even “knowing”) why their left hands reached for snow shovels.


It is no secret in modern neuroscience that many of our motivations and actions precede our conscious awareness of them. Electroencephalograms (EEGs) show motor activity in the brain some 0.3 seconds before a person’s awareness of the intention to move a given body part. And delaying sensory feedback of the actual movement has been found to cause an individual’s judgment of when she intended the motion to shift in time. We apparently infer our intention retroactively on the basis of subsequent sensory experience. Other studies have shown areas of the brain activated between seven and a full ten seconds before the subject consciously registers having decided which of two buttons to press. Much of our motivation and intentionality seems to reside in a Schrödinger’s box which the part of us that is conscious can only open after the fact to discover either a living or a dead cat. Only then do we begin considering our explanations to the animal cruelty people.

Research on the so-called rubber hand illusion has shown that our brains use a similar inferential method to determine even where the limits of our own bodies lie. A rubber hand is placed on a table before a participant, with her own hands spread and likewise placed on the table in front of her such that one of her real hands is hidden from view behind a screen. When researchers take brushes and stroke both the fake rubber hand that the participant is staring at and the real hand hidden from view behind the screen, the participant comes to believe that the rubber hand is, in fact, a part of her body: to such an extent, in fact, that, when she catches out of the corner of her eye another researcher with a fork making to stab the rubber hand, she starts in terror and pulls her real hand out from behind the screen to avoid the damage threatened only to the false extremity.

We apparently infer our own intentionality and “incarnation” with respect to our bodies retroactively on the basis of subsequent sensory experience. The feeling of the result of movement, like that of touch in the rubber hand experiment, is used by our brains as data on the basis of which to infer our intention to move and the limits of our physical being. And this sensation can be manipulated from without, causing us to misconstrue the proper bounds of both our intentionality and embodiment. 

If you’re a parent, like me, you’ve no doubt witnessed the gathering clouds of a child’s impending behavioral storm darken overhead, followed by the first distant rumblings of her refusal to accept a world contrary to her wishes and imaginings, and then—finally—the full-throated deluge, punctuated by lightning flashes and thunderclaps of screaming, stomping, hitting, and throwing things. Like me, you may have found yourself, afterwards, taking the offending child to task with, of all things, reproachful questions headed by why: “Why did you do that?”, “Why did you hit your sister?”, “Why did you throw that toy?”, and the like. Of course, these questions are pointless and ridiculous. You know this because, like me, you’ve watched your kid in those searching moments, scrutinizing her diminutive face for some sign of awareness, of realization, of remorse. But nope! There is no such thing. The child’s behavior seems as much a mystery to her as to you. You might get a story about the perceived injustice that sparked the whole affair—“Well, Jane had the truck I wanted, and I asked her for it nicely, but she still said no!”—but you have to teach a child to say things like “I was frustrated that I couldn’t have the toy.” There’s a reason the mantra of modern parentspeak to kids has become: “Use your words.” By teaching kids to cast their feelings and desires into socially acceptable wording, we are cultivating their sense of self-awareness, self-analysis, and responsibility—literally, the ability to respond and account for themselves to others. To this extent, as Dennett has intimated, at least part of our consciousness results from inculturation. We are teaching our children to reason their way into their own behavioral black boxes and come to a realization about themselves, or rather their selfs.


In the 1980s and 1990s, Satanic Panic gripped both the United States and the United Kingdom. Lurid tales of Satanic Ritual Abuse (SRA) involving children, animals, drugs, sex, violence, and forced captivity circulated both in the news media and in popular gossip. Legal cases resulted and, in many instances, destroyed lives. Anchoring most, if not all, such cases was the testimony of alleged victims, usually children, who worked intensively with therapists of varying credentials and underlying agendas to “recover” their memories of the supposed abuse. While there remains an ongoing debate in scholarly, popular, and legal circles over the existence and nature of recovered memories and whether or not traumas can ever be totally forgotten and then later reliably remembered, almost all of the spectacular cases of SRA prosecuted at the height of the Panic have been proven false, the recovered “memories” at their core shown to be pure confabulations on the basis of moral outrage and suggestion by overzealous individuals intent on finding real-life bases to justify their prior prejudices.

While there may be controversy over the existence and nature of false memory, the reconstructive nature of memory itself has long constituted an uncontroversial pillar of modern psychology. Our brains are not like video and audio recorders. We do not simply witness, tape, and store all of our experiences and then play them back upon demand when we wish to recall something. Rather, memory proceeds on the basis of isolated bits and pieces of actually recalled information—small details like a particular sight or smell—combined with emotional responses to such informational gleanings and conscious reasoning that takes those disparate details and weaves them together with general beliefs and knowledge about oneself and the world around us into a coherent narrative we recognize as our memory. If someone asks us, for instance, whether we have ever cheated on an exam and we have no clear memory of a momentous occasion when we did cheat and thereby brought disastrous consequences upon ourselves, we do not simply play back in our heads every time we sat for an exam and scrutinize the record for evidence of cheating. Rather, we cast back for some isolated cheating-related recollection (maybe a time when we thought about cheating on a test and felt ashamed), combine that with a self-observation like “I’m not the kind of person who cheats on exams,” and decide that, no, we probably have not ever cheated on an exam, which we regard as a shameful behavior anyway. We might even respond with something like “No, I don’t think so,” all but confessing that our “memory” is really a reconstruction on the basis of both limited evidence and rationalization. Like consciousness itself, our memories are, in large part, just stories that we tell ourselves and others, confabulated on the best available—but still quite incomplete—evidence combined with culturally-determined patterns of thinking and believing about ourselves.


Philosophers and neuropsychologists like Daniel Dennett and Michael Gazzaniga conclude that human consciousness and self-awareness involve more than a little story-telling. Parts of our mental make-up are, in essence, narrative machines, specialized to take disparate details from sensory awareness and internal “awareness” of other parts of our own organism and weave them together into a single, coherent narrative that we take both for who we are and how the world around us is. This happens at both the unconscious level of how we perceive the world around us and the slightly more conscious level of how we conceive of ourselves, our motivations, and our action within the world. And both levels of story-telling “instinct” help make us better able to function in the world.

While we read, our eyes constantly engage in large leaps across the page rather than simply scanning across a line of text one letter, or even word, at a time. Psycholinguists call these leaps saccades, and we launch into a saccade on average every quarter-second. While our eyes are in mid-flight, as it were, during a saccade, we are temporarily blinded. We do not take in new visual information again until our eyes “land” and fixate on another word, usually several words over from where we launched the saccade. The intervening material is taken in via vision just outside of the central focus (so-called parafoveal vision) and peripheral vision. Despite the resulting disjointed way in which we actually take in the visual information while reading, we experience a constant, unmoving image of the page and the words on it. This stable image is, in effect, a hallucination. It is a product of our brain’s ability to weave a constant image of reality out of what we know from the fields of neuropsychology and psycholinguistics is in fact a chaotic and disconnected set of snapshots. The constant image of the page of text is a best estimate, an inference of a solid, unchanging object based on a series of still images and our expectation that they all represent tiny snippets of a larger, constant real thing.

Research conducted with infants shows that we come hard-wired with expectations about the macrophysics of the world around us and use those expectations to craft compelling narratives about how objects should behave in the real world. And like any good narrative, the stories babies unconsciously craft about objects in their brains can be led into dazzling plot twists by cunning researchers. Babies express surprise, for instance, when a researcher or parent places a ball behind a screen and then secretly removes the ball before taking down the screen to reveal nothing but empty space. The babies expect that objects won’t disappear when blocked from view. When a ball is hidden from view and then secretly replaced with a teddy bear, babies express equal surprise and delight, because they expect objects to maintain the same shape and appearance over time, even when they pass out of view. Babies also expect objects to be cohesive and not pull apart when they tug on them, and so forth.

In the field of cognitive linguistics, our expectations about the physical world have been examined and discussed under the concepts of frames and scripts: essentially pre-set cognitive frameworks that help us make sense of experience by guiding our expectations of it beforehand. Frames and scripts are important not just in our hard-wired expectations about physics, but also in our acculturated expectations about social scenarios and interactions. Once we’re socialized into our various societies, we come to have ready-made models about what typically happens and who typically interacts (and how!) in restaurants, hospitals, grocery stores, affectionate families, and the like. Much of our humor and parody exploits the ironic possibilities inherent in violating these mental models, as in this SNL skit.


At a much more macro level, narratives prove crucial to how we process and interact with the world around us in yet another way. Having a pre-prepared script or model for a given experience can greatly color our perceptions of that experience. British comedian Peter Kay has a great bit about misheard song lyrics. In the segment, he plays snippets from pop songs right after telling the audience his highly original, quirky, and funny interpretation of what the lyrics are saying. Then, once the music plays, the audience cannot help but hear precisely what the comedian just patterned for them in narrative, to both their chagrin and great delight. Something similarly hilarious often results when an individual listens to a song in a foreign language and “hears” the lyrics as phrases from his or her own native language, as this clip demonstrates.

The 1990s brought us not only grunge music, the sitcom Friends, and the layered “Rachel” haircut, but also the pop-culture phenom that was the WWJD bracelet or wristband. By asking themselves “What Would Jesus Do?”, those who wore such bracelets (usually young people judged to be in danger of temptation from drugs, sex, and other behaviors deemed sinful in popular Christianity) put themselves in mind of the Gospel narratives and Epistles of the New Testament and wondered: “If Jesus were in my shoes now, what choice would he make?” This kind of creative narrative imagining of a possible future with a different actor was supposed to help ensure that the individuals make a better moral choice in the heat of the moment. In his study of the religion-like qualities and effects of fantasy role-playing games, religious studies scholar Joseph P. Laycock notes that some RPG-players do the same with the elaborate characters they’ve created for their gameplay. 

Author, publisher, and television producer Lisa Cron has spoken of something similar she does with narratives drawn from everywhere from Star Wars to the sitcom Roseanne to great books of previous centuries. Stories engage our emotions, mirror neurons, and neurochemical production to the point where, during a narrative, our brains light up in precisely those areas, and flood with precisely those neurotransmitters, that would be activated if we ourselves were caught in the same circumstances and performing the same actions as the protagonist. Author Sarah-Jane “SJ” Murray refers to this as “neural coupling” and sees in it the neuropsychological basis for the common advice for writers to “show not tell.” The anonymous author of the first-century CE work of literary criticism entitled On the Sublime discussed this same phenomenon in great literature and prescribed the same advice. Of the classic poem of jealous love by archaic Greek poet Sappho known to tradition by its first two words phainetai moi (“he seems to me”), the author of On the Sublime writes:

“Is it not wonderful how she summons at the same time soul, body, hearing, tongue, sight, skin, all as though they had wandered off apart from herself? She feels contradictory sensations—freezes, burns, raves, reasons—so that she displays not a single emotion, but a whole confluence of emotions. Lovers show all such symptoms, but what gives supreme merit to her art is, as I said, the skill with which she takes up the most striking and combines them into a single whole.” (Pseudo-Longinus, On the Sublime, 10)

This is the very essence and definition of what the author of the treatise calls “the sublime.” 

Research has shown that these same traits in dreams—a coherent plot structure and strong, verisimilar emotional content—are precisely the most consistent predictors, alongside the stage of sleep from which we are awoken, of whether or not we remember our dreams upon waking. The ability of good story-telling to so involve the hearer or reader and cause them to try on others’ perspectives and attempt to see the world through others’ eyes forms the basis for the use of novels to cultivate empathy in everyone from soldiers recently returned home from combat to troubled, anti-social teens teetering on the edge of collapse into criminality. Narratives help us make sense of both ourselves and others and, accordingly, can be used both to forge better understanding between individuals, as well as to help us banish our own inner demons.   


Despite our widespread, innate use of narratives in the form of a priori beliefs to structure other ways of knowing, including sensory perceptions, we also have recourse to an altogether different way of knowing that relates to the first in the same way as the now proverbial dual systems of cognition that go under the rubric of “thinking, fast and slow.” We’re conscious of the fact that our default setting of using a priori beliefs to structure our approach to ourselves and the world around us is a quick-and-dirty heuristic designed to enable snap judgment and fast reaction times. This accelerated reactivity helps a lot when it’s a matter of life and death out on the open evolutionary savannah, but in the modern world of so many false appearances and mental traps, it can prove our undoing if allowed to progress completely unchecked. The check we use as a counterbalance is our ability to think slowly, using conscious reasoning and rational reflection-cum-introspection, as much as it’s available to us. 

When someone or something makes us conscious of the fallible, reconstructed natures of our memory, for instance, we stop, take a step back, and begin actually sifting through real recollected data in an effort to suss out the error. At such times, we become conscious of the fact that, syntactically and semantically, the verbs remember and recall actually function identically to verbs of sense perception like see and hear: I recall/remember you, baby, shaking that ass. Carefully amassing and reassembling such actual recollections, we proceed back up the natural epistemological scale, using the data of real memory to form a posteriori beliefs we can check against observed states of affairs.

A new religion may get us initially with a warm, welcoming embrace by a faith community at a social, a mixer, or a casual ceremony or seminar, pairing that first flood of positive emotion with a follow-up of intense interest in us as individuals that makes us feel all important and cherished and vital to the community. However, once we’ve been in for a while and the shine of that first blush of new love or infatuation has worn off (how like dating finding a new religious community, or even just a collection of friends, can be!), we begin to see and think past the fast thinking and let the natural epistemological process mimicked in our language percolate upwards. We start looking and listening, two actions that differ from seeing and hearing solely in the addition of intentionality to the process of sense perception. [Notice how the added degree of intention in these predicates likewise finds an echo in their syntax: listen and look permit the exact same syntactic patterns in their complements as other verbs of sense perception like see and hear, except that the object-like you is now held at one more degree of separation, cast as the object of prepositions: I look at you, baby, shaking that ass; I listen to you, baby, shaking that ass.] We intentionally and attentionally attend to the warning signs, the places where the jolly, tidy narrative breaks down, and the cult-like qualities and mental control exerted through stories shine (or rather glower) through. And then we take all that we’ve begun to really see and hear and let it lead us to the formation of a posteriori ideas and beliefs about the group cast in this harsh, new light, at which point we test our initial opinions and may become convinced of new and troubling truths about our communities of faith against which we react emotionally, coming finally to regret the fact that we ever became involved with them to begin with. In this way, faith is lost, eyes are opened, and new conclusions spur novel action which usually results in a parting of ways.


Coming at long last back around to the innocent seasonal flick Elf with which this interminable disquisition began, the moment where Santa Claus indoctrinates Buddy’s younger half-brother into the cult of Father Christmas comes amid news coverage of widespread sightings in New York’s Central Park of “something falling from the sky.” The reporter interviews a couple of people whose sense perceptions of the event are, as always, disorderly and incomplete and then flashes on a father and his young daughter, the latter of whom is convinced that what she saw fits a narrative already in her head about what, or rather who, the object could have been and what the Christmas season is all about (no, you dummy, not Jesus: Santa Claus!). Even as the reporter laughs the girl’s fervor off as youthful fancy, Buddy’s bro is scouring the park, looking for the object he, too, glimpsed in the sky. He eventually finds Santa, “asks” him “So…you’re really Santa Claus” with that huh-like intonation, and receives back the enigmatic response: “You never can tell, kid.” 

Yet, after clutching his very own, much sought-after “real Huff skateboard” that Santa somehow just knew to bring him, Buddy’s brother Michael determines that he very much can tell and decides to turn prophet for his newfound God-like revelation, making off with Santa’s master-list of gifts-to-recipients in book form. We next see him emerging from the undergrowth before the reporter and assembled crowd, clutching the revelatory book whose “gospel” of occult knowledge soon convinces even the hardened television journalist of the veracity of his unlikely tale. Evangelist Michael, whose Hebrew name appropriately means ‘Who is Like God?’, has won his first convert to the new faith.

One of the great historical ironies in the “religion of the book” that is Christianity attends the fact that the second-to-third century CE Latin Church Father Tertullian defended his and his fellow believers’ allegiance to the narratives of their sacred text in an early-third-century apologetic work De Carne Christi (“On the Flesh of Christ”) by making recourse to a principle from Aristotle’s Rhetoric, book two, chapter 23, section 22. There, the great Greek philosopher of a prior age notes that, as counterintuitive as it may seem, accounts of events in narrative that seem at first implausible are even more likely to be “either true or almost true” precisely because they strain credulity. This is the whole “truth is stranger than fiction” idea: “You can’t make this stuff up, folks!” In commenting on how the Bible could possibly narrate the incarnation in human flesh of a god’s offspring and, moreover, that divine offspring’s death by crucifixion, a blasphemy no reputable Greek philosopher would ever have allowed to be entertained of something as supposedly unbounded and perfect as divinity, Tertullian notes in chapter five, section four, of his On the Flesh: prorsus credibile est, quia ineptum est (“It is assuredly credible precisely because it seems an ill fit [for the concept of deity].”) This (in)famous line would later be much misquoted and misinterpreted by Enlightenment thinkers like Voltaire, who, intent on smearing Christian faith and its venerable old worthies, reduced the line to the now-common, and commonly disparaged, credo quia absurdum est or “I believe because it is absurd.” The irony here lies in the fact that the principle itself stems from the unimaginable richness and breadth of experience that often belies our own, narrow narratives of the world and of what’s possible and impossible within it. Aristotle’s idea, and Tertullian’s use of it to defend the Bible, constitute an appeal to the way in which, to quote Hamlet (1.5.167-168), “There are more things in heaven and earth, Horatio, // Than are dreamt of in your philosophy.” And yet now we have the Bible as a chief source of precisely such narrow narratives of the world that would seek to deny much of the richness and vitality of our lived experiences. 

Make no mistake: whenever someone tries to convince you that you must first believe a narrative of hidden realities—be they of the character of traditional religion or more in line with modern “New Age” spirituality or esoteric occultism—in order to then be able to see, appreciate, and benefit from the alleged effects of such realities in the external world, they’re selling you a totalizing discourse whose aim is to efface the slow, rational epistemology by which we put our narratives to the test in order to suss out what’s “either true or almost true” from what is manifestly not. And when they attempt to gaslight us by pointing up precisely the kinds of limitations on our sensory apparatuses and imperfect self-knowledge that I have discussed above, beware their hidden premise that pointing out the fallibility of our paths to knowledge is somehow tantamount to proving the contingency of all knowledge, how it’s just “narratives all the way down,” such that we might as well accept their particular story of things as any other. Indeed, what they want is precisely for you to accept only their narrative, and they will make you doubt the integrity of your own self and knowledge of the world in order to help sway you to their way of thinking and believing.

The powers of our minds to shape and even alter our experienced reality are most definitely profound, and we’re learning more and more about them every day. But we should never be misled into hypostatizing those powers and falling into the traditional religious trap of casting our own mental apparatuses out into the cosmic void and pretending that it’s all “mind only” or just “manifestations of consciousness” or that our own, particular narrative of the world is somehow reflected in (and a reflection of) a universal (and suspiciously anthropomorphic) divine order. Our experience of a constant, unchanging page of static words of text as we read may have hallucinatory qualities, but one of the key techniques employed by lucid dreamers to distinguish between their conscious dream-states and their waking moments is precisely the fact that words of text in dreams have a way of spontaneously rearranging themselves that text in the real world most certainly does not. I have seen the Father of Lies, and He is definitely and definitively Us.


As Hesiod puts it, the Muses threw down the celebrated gauntlet long ago, warning us of the double-edged quality to the particular narrative art with which they invested the hapless shepherd. We can come to fervently believe in any number of possible lies, and it may even emotionally feel really good to do so. But hidden in the very words we use to describe the progression of perception to thought to belief and knowledge lie the blazes of another, slower, perhaps more lonely, but certainly also more honest path to knowing: one that starts with submitting the narrowness of our own conceptions to the boundless, almost incomprehensible vastness of experience, rather than the other way around.

So Merry Christmas to one and all, my independent, iconoclastic lovelies! And to all fondly cherished lies: good night!        
