Coping With Stage Fright


The first thing that needs to be understood about stage fright is that either you have it, or you don’t.  If you don’t have it, good for you, enjoy your life.  But if you do happen to have stage fright, you will more than likely have to put up with it for the rest of your life.  I’m not saying that to be discouraging; I’m just trying to be honest about my own personal experiences of dealing with stage fright.  There is good news, however, in that even if you can’t completely exorcise the irrational phobia from your consciousness, there are ways to manage it; ways to make your anxiety undetectable to lay observers.

Before I get into all the details let me give a little bit of background about my own issues with stage fright.  I have always had stage fright.  As a kid in grade school, I hated being called on in class, I hated having to take part in school plays/performances, I hated everything and anything that had me standing (or sitting) in front of a group of people, focusing on my words/actions.  Whenever I found myself in such a situation, the first thing that would happen was my heart rate speeding up; next I would feel the blood pump into the back of my head (which was deafening to my ears); my knees would feel both light and heavy at the same time; my mouth would go dry; my voice would give out; and my face would turn as red as a tomato.  This went on for a long, long time.  I never “grew out of it,” as people kept telling me I would.  Even today, as someone whom most people would identify as a sociable, talkative kind-of-guy, I still feel a silent dread at having to address a group of people.  But because I’ve learned how to cope with my stage fright, the uncomfortable experience remains solely a private one.

Let me just say upfront that my ability to manage my stage fright is not the result of counseling or medication.  Additionally, I tried meditating as a teen, thinking that relaxation techniques would help me get over my anxiety, and all it ended up doing was making me more anxious (I worried so much about whether I was doing the breathing right that I couldn’t relax for a second; also, it made me dizzy).  Self-hypnosis was a complete waste of time.  And no, thinking positive thoughts didn’t do much either, because (for me at least) worrying thoughts are never voluntary to begin with.  What helped me cope with my stage fright in my late teens were the following realizations:

1.)  Stage fright can never fully go away.  It sounds a bit strange, but it’s probably the most important lesson I had to learn.  I was so focused on getting to a point where all of the symptoms I associated with my stupid phobia would simply go away for good, that any effort that yielded a lesser result would come across to me as a failure; leading to more anxiety on my part.  Finally, in college I began to concentrate on getting to just a barely adequate presentation level (enough to at least be passably understood by my audience).  Gradually, my confidence grew with each presentation, which started easing up the physical signs of my nervousness; i.e. my voice stopped changing pitch, and my face stopped blushing (this one was more of a steady improvement over time; first the blushing was restricted to just my cheeks, then it only showed up for the first minute or so of my talk, and now it’s gone completely).  Now, don’t misunderstand me, I was (and remain) terrified of speaking in front of people.  However, once my goal changed from trying to beat my stage fright, to simply trying to hide it from the audience, my mind started to feel less overwhelmed by the experience, giving me enough time to take a breath and at least feign confidence to the audience.  Which eventually developed into real confidence.

2.)  Starting with a joke is always a good idea.  I know it sounds cliché and overly simplistic, but I find that even the most mundane of humorous comments at the start of the talk will make the experience so much easier when it comes to my stage fright.  It puts not just you at ease, but the audience as well.  Not to mention that if you do slip up in the middle of your presentation, establishing yourself as a humorous person at the very start makes it so much easier to recover (it makes the audience more forgiving too).  I know that some of us aren’t gifted with perfect comedic timing, but you need to remember that no one is expecting you to bring them to tears from laughter; no one is expecting you to be funny at all.  A slight quip about the scenery or some local activity (keep it conservative), or even about the talk you’re giving, will do the job just fine, for no other reason than that it is not expected by the audience, which will make them appreciate your effort to make them more comfortable.  And will make you more comfortable in return.

3.)  People don’t care about you.  Well, some people care about you, but your audience doesn’t.  At most they care about what you’re about to present, not you as a person.  And if you don’t draw attention to yourself, they’ll easily forget about everything in your talk that didn’t concern their preexisting interest.  This is a point that I was aware of long before I learned to deal with my stage fright; the problem was that it did me no good because I didn’t actually believe it.  In the back of my head (my nervous, blood-pulsing head) I firmly held on to the idea that everyone was listening to every word I said, as if their lives depended on hearing my stupid 5-10 minute oral report.  Getting me to actually abandon this bit of irrational thinking took some time and a lot of effort, but it was a needed step in being able to handle my stage fright problem.

There are several more things that go into it, as well as numerous subsequent details that accompany the three above, but I think this covers the essentials as they pertain to my experience.  And it needs to be remembered, this is based on my experience.  Personalities are different, and people respond differently to different stimuli, so there is always the chance that my methods aren’t good enough for you.  However, I think the points I listed are universal enough to at least push most people in the right direction of coping with their stage fright.

Generation C(ynical)

With the first month of the new year coming to a close, I’m left sensing the same old aroma of destitution oozing from the pores of my generation.  For the longest time I could not trace or deduce its origin, but its stench rose up with the passing of each year nonetheless.  It’s particularly evident in the restlessness we exhibit towards our relations with the rest of the world.  Our attention span is gradually eroding away, as we become unable to focus on one thing long enough to satisfactorily digest any of it.  In turn, we try to compensate for this defect by focusing on several things at once, but never registering enough of anything to feel fully content with ourselves, making us dependent on a continuous supply of novel information and content to keep us entertained (often erroneously confused with being happy).  We have by necessity become accustomed to multitasking everything, not as a result of a higher functionality, but out of a never-ending search for higher stimuli.  We want to be part of something grand, and we are sure that ours is the era of unparalleled social transformation, but as we look around our search is left unfulfilled by the unimpressive characters that bumble before us to signal the beginning of the new epoch.

There is a banner that hangs above our heads, and it depressingly reads:  “No heroes here to be seen, no glory left for me.”  We desperately want relevance (just check out the wide array of YouTube videos; or even easier, look at the large number of blogs written by individuals eager to share their personality with an audience–including this one), but we have lost interest in the form this relevance can take.  We have given up on the notion of heroes who affirm life; what we desire now is a continuous supply of cynics.  We do not believe that, as a person, as a generation, as a species, glory can be achieved anymore in our social interactions, so we dare not even attempt it.  The revolutionary spirit has come to a screeching halt, and the occasional sparks of it seen across the world could very well be nothing more than the reflexive cry of an amnesia-afflicted body.

Like our predecessors, we are eager to achieve, to innovate, to create, to socially progress, but we are constantly being told that our ambitions are misplaced; how we ought to look to the past for guidance rather than compose our own future.  Yes, we are being told that the generation that has brought about one of the largest gaps of global socioeconomic inequality in modern history, that has produced (and continues to produce) one economic blunder after another, whose self-appointed wisdom has left half the globe starved or reeling in anguish, is the generation we need to model ourselves after.  These are the individuals we are expected to emulate as a generation?  The “wise elders” we are to turn to for guidance?  We’d be better off seeking advice from recycled fortune cookies than from this group of chronic failures!  But they keep that banner solidly pinned over our heads, and condition us to believe that we are dependent on their leadership to endure the problems they have created.  And we go along with it, because tradition says we have to respect ancient wisdom, and we cannot violate traditions–can we?  Well, I don’t know about you, but I sure as hell can.  Because I choose to stand under a very different banner, one I have willingly nailed over my own head, and ask no one else to adopt, unless they so choose.  My banner holds no cynicism about the future; in fact it welcomes the coming of new eras, new innovations, new ideas and ideals.  It reads:  “For progress to occur, traditions must die.”

The concept of ancient wisdom is imaginary.  Had humanity always been concerned with being governed by the values of the dead, we’d still be stuck with our ancestors’ superstitious explanations of where the sun disappears to after it sets every night.  We cannot afford to conserve values that hold no relevance to us; we must adapt to a changing scenery, or (literally) die trying.

Differentiating between the Objective and the Absolute

For something to be an absolute it must by definition be consistently changeless and impervious to new data–it always remains the same, in every situation, under every condition.  Thus, if someone claims to be taking an absolutist position he is essentially proclaiming that said position is immune to refinement and scrutiny, and will never need to be even slightly reexamined or amended, ever.  This is a mindset that I thoroughly reject, and I do so by necessity.  No idea, no proposal, no hypothesis, no theory, no fact, has ever been, or will ever be, barred from the scrutiny of newly emergent data.  It is through a process of rigorous examination and reexamination that existing data about reality is reaffirmed, refined, or displaced by a better working model.  The introduction of absolutes does nothing to further our knowledge or our understanding of reality; in fact, it negates both with its inflexibility to change.  This is what I mean when I say that I reject all absolutes, and this is where some people falsely conclude that this must mean that I think all facts and claims about reality are just subjective opinions.

For something to be objectively true it must be verified to exist independent of any subject’s perception, feeling, or thought on the matter.  There are schools in philosophy which deny the possibility of objective facts on the basis that everything we perceive to exist does so solely through our subjective human perception of it, therefore what we call objective facts can never be anything more than our subjective human perception.  I am definitely not an adherent to such a mindset, and I’ll tell you why:  1,000 years ago various strains of viral and bacterial infections made plague and disease a common occurrence in people’s lives.  The fact that these people had no knowledge of the viruses and bacteria that were causing their ailments (and no knowledge of germ theory, in general) made no difference to the reality of their existence, because the viruses and bacteria did not care whether or not they were perceived or known by the organisms they were infecting, maiming, and killing–that is to say, they existed independent of the subject’s perception, feeling, or thought on the matter; their existence was an objective fact whether anyone perceived it or not, as was their effect whether anyone understood it or not.  Likewise, prior to Newton people were largely unaware of the fact that gravity was accelerating them (and everything else) downward at 9.8 m/s^2, regardless of their subjective feeling or thought on the matter.  And similar things can be said about a number of other things, where subjective perception is irrelevant to objective data: the heliocentric solar system, the age of the earth, the shape of the planet, and so on and so forth.

But I can already hear a faint cry of protestation here, “Wait a minute,” someone might be inclined to say, “doesn’t the fact that gravity is acting on us right now, and has always done so, mean that it is an objective fact, and an absolute, which contradicts your previous rejection of absolutes?”  In short, no.  The theory of gravity is accepted as the most reliable conclusion about the various relationships we see between matter on earth, the solar system, and the universe, and, thus far, has survived all measures of scientific scrutiny–but this by definition means that it is open to scrutiny, hence it is open to being overturned if (and that’s a big if) future observable, testable, verifiable, falsifiable, empirical data were to demand such a verdict.  If, hypothetically, extensive research were to demonstrate that what we think of as gravity is really the effect of three different, yet-unnamed, forces working together to produce what we have mistakenly been calling gravity, there isn’t a competent physicist in the world who would in defiance of all evidence dogmatically cling to the previous gravitational model–this is what keeps scientific theories from being absolutes, while still remaining objective facts; namely, that objective facts don’t need to be impervious to future revisions to remain objective, they just need to be verifiable independent of a subject’s perception, feeling, or thought on the matter.

A point of contention that arises from this is the claim that due to our fallible human perception, what we deem to be objective facts will always be dictated by our subjective observations; thus facts about reality cannot be verified fully independent of a subject’s perception, feeling, or thought on any matter.  Proponents of this philosophical position would agree with me about rejecting absolutes, but would also insist that my attempt to defend objective facts is dubious, because our interpretation of available data is unavoidably limited, and biased, on account of our flawed human conception.  I accept the fact that our sub-Saharan, anthropocentric, primate brains are very good at concocting a flawed image of reality; hence, the once held belief that the earth is stationary and the center of the universe.  However, doesn’t the fact that, no matter what people might have thought on the subject, the earth was still revolving around the sun, in the corner of one tiny galaxy, indicate that verifiable objective facts still exist despite what our subjective perception tells us?  If we subjectively perceive the sun to be moving across the sky, but objectively know that it is the earth that is actually moving around the sun, does that not serve as a viable demonstration that despite all our flawed human thinking, we can still differentiate between the subjective and the objective?  After all, it is not our flawed human perception that is telling us that we live in a heliocentric solar system (our perception says the opposite), it is the accumulation of observable, testable, falsifiable, empirical data.

To continuously challenge this by claiming, “But you can’t fully know if you’re interpreting the data accurately,” is to delve into the realm of what I would call absolutist subjectivism–where one’s insistence that all physical facts are subjective starts to very much resemble the opposing view that facts are absolute (and I have already explained why I reject absolutist positions).  Such a dedication to deem all facts as merely the subjective perceptions of the mind ignores the reality that our perceptions are not solely the product of internal factors, but are also largely dependent on and shaped by factors and circumstances of the external world.  The sun isn’t bright simply because we internally perceive it to be so; we perceive it to be bright because we are responding to external stimuli telling us it is so (the sun’s objective brightness couldn’t care less what we perceived one way or the other).

The pure solipsist would not be satisfied with any of this, because (according to solipsism) only one’s mind can be sure to exist; all else (including physical observations and personal perceptions) is liable to be an illusion (such as a hallucination or dream) created by said mind.  Generally, I consider solipsism to be too unfalsifiable a position (which is a point to its detriment, not its favor) to spend time arguing against.  I’m skeptical as to why, if reality is wholly an illusion of my mind, I’m imagining myself to be nearsighted.  I have had bad vision since I was about 11 years old, and to this date never have I had one dream in which my dream self was afflicted with myopia.  The reason for this doesn’t seem all that mysterious to me: my brain doesn’t need my eyes to create images while I’m asleep; it works with the images registered in my conscious memory.  But if solipsism is true, and my mind is the only thing that truly exists, why does my imaginary self need eyes and glasses to perceive a world that is essentially a hallucination?  Why am I imagining myself to be dependent on physically external factors (my glasses, my contacts, my optometrist), in a reality that is essentially a product of my own conscious creation?  Yes, I know that solipsists will probably come up with some long-winded philosophical musing about how solipsism does not suppose that the content produced by the sole existence of the mind necessitates any sort of control over said content; which does nothing to explain why I need my physical, material eyes and glasses to perceive an immaterial reality.  But it doesn’t matter, because it would be a waste of time to bother refuting the specifics of solipsism.

For the sake of argument, let us accept what the solipsist says: mine is the only mind that exists (or, at least, the only one that is verifiable to me), and the physical world I perceive is a creation of my mind.  How would one actually go about differentiating a solipsistic reality from a non-solipsistic reality?  Even if solipsism is true [which I highly doubt], am I still not bound and limited by the parameters set up within this reality my conscious self is inhabiting?  Even if the force of gravity is something that my solipsistic mind has created, isn’t my inability to levitate off of the ground (even if just an imaginary perception) a fact within the reality I am inhabiting?  And doesn’t the fact that, despite whatever my mindful feelings, thoughts, or desires are on the matter, I am incapable of imagining myself defying gravity by levitating off of the ground make gravity an objective fact, at least within the conscious reality I am inhabiting?  Even if I turn out to just be dreaming all of this through some mind-only, brain-in-a-jar kind of state, if the parameters of this reality operate independent of my subjective perception, I am still bound by the physical world that I am apparently hallucinating myself in.  And if I have no means by which to escape from this dream world, I ask again: how is a solipsistic reality different from a non-solipsistic reality?  What exactly does solipsism offer to the discussion, besides a bunch of useless, baseless, non-consequential propositions?  Nothing, nothing at all.  (And if you happen to be a solipsist, and you disagree with what I’ve said, you should keep in mind that by disagreeing with me, you are essentially disagreeing with yourself, on account that I–and this blog–are just a creation of your mind.)

Now, a fair point to all of this would be to stop me right here and mention how, when people in the modern world are discussing absolute and objective facts, what they are usually debating over isn’t the cold, mechanical facts of scientific inquiry into physical reality, which hold no direct consequence for their personal values in life (though this is a debatable point, depending on the particular scientific inquiry in question).  What people really are asking is whether or not there exists such a thing as absolute moral judgments, or objective moral judgments.  This, to me, is a much more intricate question to ponder.  Personally, I am still inclined to say that absolutes do not exist even when it comes to moral judgments.  For instance, do I think that lying is morally reprehensible?  Yes.  Can I think of instances when lying would not be morally reprehensible?  Yes.  I cannot see how an absolutist moral framework allows for such a disparity on a single moral judgment to occur, since something that is absolutely right or wrong demands that it apply equally to all circumstances, lest one admits to the circumstantial (non-absolute) nature of moral principles.

“Intelligentsia is Dead! Hooray!”

Historically, the word intelligentsia refers to someone occupying a murky upper-class status on the basis of their intellectual contributions to culture and society.  These select few would (more often than not) share two major criteria amongst themselves:  1. They were rich.  2. On account of criterion 1, they didn’t have to work for a living, and thus could spend all their time philosophizing about life and its hardships (unlike those philistine farmers who were too busy collecting crops for the village to sit back and reflect about what really matters to people).  Since the end of feudalism, and with aristocracies reduced to their laughably archaic status, intelligentsia has come to refer to just about anybody who writes a book that educated people hold in high regard, whether it contributes anything to our social consciousness or not.

Admittedly, the notion of what is, and is not, to be deemed intellectually worthy is quite subjective.  Speaking for myself, I would rather read the worst dime novel imaginable than the most academically praised book on anything political.  Regardless, I have no issues with the diverse opinions people hold about good and bad writing or art.  What I’m getting at is how intelligentsia, as an applicable term, is entirely nonsensical in any contemporary meaning.

Whether it was genuinely well-intentioned, or the product of a corrupt system, the artists and writers that made up the intelligentsia of the past did produce works that creatively immortalized pieces of human history.  They gave a frame of reference to a past culture; something we can nostalgically look back on and draw inspiration from to progress forward through moments of social gridlock (for example, the way the Renaissance was inspired by the intellectual contributions of ancient thinkers).  I can’t imagine such a thing happening with any of the works being produced by the public intellectuals of today.  That’s not to say that there are no good books being written in literature, or that modern art is devoid of aesthetic skill (though my septuagenarian neighbor would beg to differ).  But none of these are truly capable of sparking the imagination of the people as they once did, partly because we would have to be removed from them and forget about them first (which in today’s information age is impossible).

It is noteworthy that the title of the public intellectual has never been assigned on the basis of popular opinion, but on the basis of what other public intellectuals promote amongst each other as just too brilliant and sophisticated.  And everyone goes along with it, because it’s assumed that these people must know what they’re talking about (and nobody wants to risk looking unsophisticated and lowbrow).  This is just the nature of the animal; unlike the sciences, Arts and Humanities studies have no such thing as a decent peer-review process, largely because the peers themselves are removed from the broader social culture they reside in.

The intelligentsia of society used to be polymaths, whose expertise would roam across academic disciplines.  That is no longer a viable position to occupy.  Our knowledge and data are too broad to be encapsulated by any one mind; specialization is a necessity.  The era of the intelligentsia is dead and gone, and I for one welcome its passing as an important testament to our educational progress as a society.  We have accumulated so much data, so much raw knowledge, that it cannot be confined to the few.  Despite the pessimistic nature of these posts, some words do deserve to die.  When a word becomes too rigid to be properly applied in any meaningful way, the responsible thing to do is to retire it, and let it rest in peace.  Now, all we need to do is let the self-styled public intellectuals in on this fact.

Ayn Rand’s Atlas Shrugged: Analysis and Critique

Part One:  Analysis

Exposing the means by which the looters of a nation are able to exploit the abilities of the productive members of society lies at the heart of Ayn Rand’s novel Atlas Shrugged.  The plot of the novel makes a clear distinction between the two factions through the values each side exhibits in its worldview, and more importantly the imagery by which they express such ideals.  This is best illustrated in a dialogue that occurs halfway through the novel, in which the pirate character Ragnar Danneskjöld—speaking to industrialist Hank Rearden—declares his intent to erase the figure of Robin Hood from human consciousness.  To him, Robin Hood is the embodiment of the misplaced mentality modern society has come to embrace.  He claims that it is through the legacy surrounding his exploits that the looters of today are offered a convenient excuse to promote their detrimental moral superiority, which holds that the need of one man justifies the sacrifice of another.  Ragnar Danneskjöld’s aim to eradicate the ideals, legend, and righteousness of Robin Hood, as a means to free mankind from his own self-imposed deprecation and provide him with the independent morality necessary to survive, stands as a perfect metaphorical expression of Rand’s philosophical stance on the virtue of self-interest over the misguided value of self-sacrifice.

Right from the start Ragnar Danneskjöld makes it abundantly clear how his worldview contrasts with that of the man he is out to destroy, Robin Hood, and why these differences create an odious contempt for the man, and the ideals embodied by him.  He explains that where Robin Hood sought to take from the rich and give to the poor, he in turn is “the man who robs the thieving poor and gives back to the productive rich” (p.532).  Here, Danneskjöld is careful with his diction, so as not to give a misconstrued representation of his words.  He uses thieving to describe the needy underprivileged poor and illustrate his belief that a group who seeks compensation, without the intent of earning it, is in fact robbing from those who have accumulated their wealth through relentless labor and resourcefulness (the productive rich).  The idea of Robin Hood, Danneskjöld states, creates a false warrant of merit where it is the inept who are justified in demanding aid from the skilled, a logic which holds little ground against objective reasoning, based on the notion that strength and intellect are factors of dominance, not servitude.  Danneskjöld’s views differ in that he sees each man as responsible for his own wellbeing, but realizes that this is challenged by the faux guilt hanging over the conscience of the productive few, and the widespread assurance that those with manufacturing ability have a responsibility to provide for the survival of others with lesser capabilities.  He encapsulates this partiality thus:  “the need of some men is the knife of a guillotine hanging over others—that all of us must live with our work, our hopes, our plans, our efforts, at the mercy of the moment when that knife will descend upon us” (p.532).  Such a grim depiction further serves to emphasize the intense abhorrence Danneskjöld feels for the looters’ ideology.  He sees the admiration of men like Robin Hood as a prime factor in how society has come to delude itself with the supposed inherent justice of altruism, which is being offered as the only humane quality necessary for people to possess.  By encouraging these sentiments, the looters render any counterargument nonsensical simply through the public impression that all other takes on the matter are immoral by definition.  Hence, the producers and providers of society are left to be drained and disposed of as the needy masses see fit.

Ragnar Danneskjöld’s vehement reproach towards Robin Hood stems not so much from the actual reality of the man as from the pretense that has come to symbolize him.  On page 532 he says, “It is said that he fought against the looting rulers and returned the loot to those who had been robbed, but that is not the meaning of the legend which has survived.”  Clearly, Danneskjöld is able to differentiate the folk hero image Robin Hood represented as someone who stood against an abusive ruling authority—and even accept the gallantry associated with it—from what he sees as a prevarication created by those seeking to use his actions to elevate their own ideology amongst the populace.  His contempt is aimed not so much at eradicating the man, but the myth of him that has come to serve the plunderers of society.  Nonetheless, Danneskjöld also understands that in order to free the world from the self-deprecation brought on by the legend, no distinction can be made between man and myth.  The reason being that as long as a man like Robin Hood exists to serve as a guiding example for the looters, it is necessary to deal with the two entities as one and the same, due to the extent the myth has come to overtake every aspect of the man’s personhood.  Danneskjöld explains his rationale plainly when he gives his assessment of what Robin Hood has become:  “He is held to be the first man who assumed a halo of virtue by practicing charity with wealth which he did not own, by giving goods which he had not produced, by making others pay for the luxury of his pity” (p.532).  It is through his selfless servitude that Robin Hood’s legacy evolved into the defender of the poor, rather than the robbed.  Such an image caused the distortion which Danneskjöld hopes to destroy: the idea that the true nature of mankind involves the demand for self-sacrifice.  And although Danneskjöld considers it complete folly, he understands the depth to which man is capable of falling if such nonsensical sentiment continues to be valued as morally correct.

The righteousness of Robin Hood is what Ragnar Danneskjöld ultimately wishes to remove from human consciousness.  He considers it a personal duty to relieve man of the foul virtues he has accepted through centuries of fanciful tales, which have caused the discarding of realistic sensibility.  Danneskjöld argues against an ideology that considers the preservation of the self immoral, yet praises the belief that, “in order to be placed above rights, above principles, above morality, placed where anything is permitted to him, even plunder and murder, all a man has to do is to be in need” (p.533).  The championing of need is in Danneskjöld’s eyes the greatest depravity luring mankind away from realizing the importance of personal interest.  A world where all men are held accountable to provide for themselves, Danneskjöld argues, is a world where every member of society will labor to achieve the highest proficiency, rather than depend on someone else’s productive output.  He sees the preservation of egotism not just as a necessity for his own values to survive, but for the existence of mankind as a whole.  This is best summarized by Danneskjöld in his closing words, “Until men learn that of all human symbols, Robin Hood is the most immoral and the most contemptible, there will be no justice on earth and no way for mankind to survive” (p.533).  To remove Robin Hood from the moral pedestal society has set him on would deprive the looters of a functioning symbol to hold over the heads of men striving to earn their wealth instead of waiting for free hand-outs.  As long as the idolization of someone like Robin Hood persists amongst the general public, no hope lies for the true providers of society—working not to serve another man’s needs, but solely their own interest.

The fundamental plot of Ayn Rand’s novel Atlas Shrugged exhibits the struggle between those few in society who have rejected the moral superiority of altruistic self-sacrifice, and the looters who use the concept of need to subjugate any trace of personal interest and basic individuality.  The character of Ragnar Danneskjöld serves to illustrate the idea of what man should strive to be: resourceful, fearless, and not dependent on other men’s capabilities.  He declares Robin Hood the one man he is out to destroy, making it his personal mission to rid the world of its unyielding thirst for need.  He views the ideals, legend, and righteousness of the Sherwood Forest archer as the primary symbol serving the looters’ false ethical cause.  Ragnar Danneskjöld reasons that the fatal blow necessary for man to see through the gilded façade the looters have erected to cover their noxious ideology is the death of their idol, the original offender against the nature of man: Robin Hood.

Part Two:  Critique

Although the brief exchange between Hank Rearden and Ragnar Danneskjöld is meant by Ayn Rand to logically outline her philosophical position concerning the morally destitute nature of altruism, a number of apparent logical faults can be found right in the midst of the impassioned dialogue.  On page 532, Ragnar Danneskjöld explains how he has never robbed a single private or military vessel during his pirate campaigns against the looters of society.  The reason for the first is self-evident given the stance of Rand’s ideals on capitalism; as to why military vessels are not to be attacked, Danneskjöld notes, “because the purpose of a military fleet is to protect from violence the citizens who paid for it, which is the proper function of a government” (p.532).  However, on the same page, the pirate names his ideological foe as “the idea that need is a sacred idol requiring human sacrifice.”  The problem with this line is that it seemingly negates the point he has made about the necessity of preserving the military, since the military is a prime example of an institution that operates primarily on the notion of self-sacrifice for the sake of a particular society/country/community as a whole.  Danneskjöld’s acceptance of the need for such an establishment runs counter to his vehement promotion of individual self-interest.

Rand might argue that this is irrelevant, on account that Ragnar Danneskjöld specifically mentions that the military is supposed to protect those who “paid for it,” but such a rationalization does not solve the philosophical dilemma at hand, and even leads to a number of further conundrums.  Namely, it does not address the fact that in a world where self-interest is heralded as the ideal standard of behavior, the fundamental principles of military combatants will be eradicated, because it is universally understood that a soldier is expected to give his life for his brothers in arms, and for his country, if the situation calls for it; the interests of the individual are secondary to the interest of the unit as a whole.  And, on the point of the military serving those who paid for it, one is left bemused by what exactly Danneskjöld means by this.  He mentions that such is the proper function of government, thus implying he does support the notion that the government is to be the arbiter of the armed forces.  However, further on in the text, Danneskjöld firmly condemns taxation as a form of robbery (p.534), suggesting that the method by which citizens are to pay for their military protection must come from some other means—more than likely, what is implied is a direct payment of some sort.  This leads to a major problem that is left ignored by Rand throughout the dialogue: the possibility that if the military is privatized to protect those who have paid for its service, the result will be unmanageable disparity that can lead to losses amongst all economic sectors of society.

As a thought experiment, say, for example, that the East Coast of the U.S. is the more affluent part of the country (let us assume this is due to it having more entrepreneurs investing in a growing industrial economy) and uses its affluence to thoroughly protect its shores from any possible threats that might harm its source of wealth; while the West Coast is significantly less affluent, and, as a result, cannot afford as much military protection for its shores.  Let’s also say that the material resources the industrial centers on the East Coast use to produce their wealth are located in the uncultivated areas of the West Coast.  Presumably, the entrepreneurs of the East Coast would have a vested interest in keeping the West Coast as nonindustrial as possible, so as to keep the production costs of their products lower than the selling price.  But, because the West Coast cannot afford to properly protect its shores, its material resources (which are used by the East Coast) lie more vulnerable to external threats of theft.  Should the East Coast pay for the needed military protection of the West Coast?  And, if so, in whose individual self-interest is it to cover the cost?  Ideally, the West Coast should be expected to cover the cost itself, but in order for it to produce the wealth necessary to properly protect its shore it will need to increase prices on its material goods; at the expense of the East Coast.  Thus, it would appear, the East Coast is left picking up the bill no matter the angle one chooses to look at this dilemma from.

Now, since the government is the arbiter of the military (as implied by Danneskjöld), one would be justified in proposing that it should also bear the responsibility of paying for the expenses that go into deploying its forces.  But where would the government get the revenue to make such payments?  Presumably taxes, but Danneskjöld has already established that taxation is equivalent to robbery; therefore for the government to tax its citizens would be criminal in nature.  It is true that the average worker has some interest in keeping the resources of their employer protected, lest they risk losing their place of employment.  However, to what degree should a menial employee be expected to pay for the protection of resources whose total revenue potential he will only receive a fraction of (in comparison to the individuals who run the company)?  Perhaps the wealthy entrepreneurs and industrialists, who have the greatest interest in protecting the West Coast, can be expected to provide the greatest payment to the government in order to finance the needed military protection (and, yes, it would have to be given through the government, since Danneskjöld has already acknowledged that the government’s function is to be in charge of the military).  Thus, the burden to pay falls on those who earn the most from the protected resources.  This seems like a viable position, but the question then becomes: how, in practice, is this any different from taxation?  It would appear that the only difference would be a lack of coercion on behalf of the wealthy, and maybe that’s the underlying point, but if the end result is still the same as before, what sense is there in pretending that the current system is a form of tyranny when the solution will essentially be the exact same thing, only promoted under the tenets of a different ideological principle?

Another major point of contention arises through the message Ayn Rand is trying to present through Ragnar Danneskjöld’s condemnation of Robin Hood (and altruism in general).  In his dialogue with Hank Rearden, Ragnar Danneskjöld makes the case that wealth is an inherent indication of productivity.  The implication is that due to the competitive nature of the market those who are wealthiest will also be those who possess the greater intellect and talent, and, thereby, are by definition the most deserving of all the riches and power they can accumulate; while those who occupy the lower ranks of society do so by the merits of their own failures.  However, this is clearly not as absolute as Danneskjöld is making it sound, and as the Robin Hood fables are meant to convey.  The rich Robin Hood stole from were corrupt monarchs who demanded servitude from the lower classes of society, not because they had gained their wealth by the merit of their work, but due to an arbitrary right of birth.  In this scenario, the most productive members of society were the underprivileged poor—the looters, as Ragnar Danneskjöld would call them—who had no means to benefit from their productivity due solely to the fact that they were born in poor households.  Hence, in such a system, it would be fundamentally disingenuous to claim that the lower classes’ lack of economic mobility is the result of a lack of productivity, just as it would be insincere to proclaim that wealth is a representation of intellect or work ethic.

The question of inherited vs. earned wealth is an issue that Ayn Rand never delves into in Atlas Shrugged, even though any defense of her philosophy demands a clarification on this point; especially if one branches out to the greater narrative of the novel.  For example, two of the main characters in the novel, Dagny Taggart and Francisco d’Anconia (both of whom are presented throughout the prose as the epitome of the productive capitalist), lay claim to their fortunes strictly by an accident of birth.  Both have inherited their wealth through the work of their productive ancestors, not through, shall we say, the sweat of their brow.  It is true that they are shown to be ardent entrepreneurs (although, for the aristocratic d’Anconia, this is more a matter that the reader is just supposed to grant as a given for the sake of the narrative; he is never actually shown creating anything industrially successful throughout the plot), but how these characters would have succeeded had they not been born into such a privileged position remains an open question.  This is particularly noteworthy, since the majority of the named antagonists in the novel (who seek to undermine all the values Rand’s protagonists hold dear) are also wealthy industrialists; thus, the plot subtly acknowledges the point that the possession of wealth is not an ideal indicator of productivity.

The pivotal event Atlas Shrugged is leading up to is the point at which the productive few of society unanimously go on strike, and allow the looters of society to fully see the catastrophic fate that their self-sacrificing policies will inevitably lead to; i.e. the complete collapse of civilization.  Although the novel ends at this point, in his dialogue with Rearden, Danneskjöld gives the reader a glimpse of what is to follow thereafter.  He states, “When we are free and have to start rebuilding from out of the ruins, I want to see the world reborn as fast as possible” (p.535).  Here, he is giving justification for his work as a pirate: he is simply collecting the money that has been looted away from the productive, to be utilized by them to remold society after the coming collapse (ironically drawing parallels with the criminal aspects of Robin Hood).  He continues, “If there is, then, some working capital in the right hands—in the hands of our best, our most productive—it will save years for the rest of us and, incidentally centuries for the history of the country” (p.535).  Thereby, those productive few who currently are held down by the looting majority will be well compensated in the imminent future.  However, this once again brings up the topic of earned vs. inherited wealth.  While those whom Danneskjöld sees as worthy today are bound to continue accumulating their wealth in this approaching utopia, what exactly will happen to those who might possess the potential to be entrepreneurs, but were unfortunate enough to have been born amongst the looting majority?  The narrative seems to imply that once the virtue of self-sacrifice has been thoroughly annihilated in favor of self-interest, those who deserve to rise through the social ladder will be able to do so.  However, it goes without saying that, whether or not the potential for advancement exists, few will be able to actually occupy the ranks of the rich, simply because the number of available spots will always pale in comparison to the number of lower-ranking poor.  Therefore, most people will have to be content with the lower position they occupy in society, and these will be the ones upon whom the fortunes of the rich few will be founded; meaning that, once more, the social reality that is to arise from the coming collapse will not be much different from the society that exists today.

Furthermore, the question is still open as to how someone such as the aristocratic Francisco d’Anconia, who has never been shown to produce anything of worth, and whose entire fortune is based on the merits of his last name, deserves to be amongst the ranks of the productive few, other than strictly through his association with the other protagonists in the narrative.  How is someone born poor in this post-looter society expected to compete with the generations’ worth of wealth that d’Anconia has inherited from his ancestors?  (This point still stands even if one takes into account the fact that d’Anconia’s mission is to undermine the current social order by wasting the wealth he has, because Danneskjöld’s words to Rearden clearly imply that he will be reimbursing all the productive rich in the coming era for their present losses.)  While a reader can speculate one scenario after another, the truth is that all of these points remain unaddressed by the plot itself.

Ayn Rand’s Atlas Shrugged is meant to convince the reader of the superiority of promoting strict capitalism in all aspects of a person’s life.  It is a simple philosophy, best articulated by the pirate character Ragnar Danneskjöld in his dialogue against the legend of Robin Hood, and the virtue of self-sacrifice the looting masses have accepted as morally viable.  Although there are times in which Danneskjöld seems to be conveying a deeper truth pertinent to the advancement of an industrial society, upon scrutiny, much of the foundation on which he sets out to build this new ideology of self-interest rests on flimsy premises that leave too many factors unexamined (two of which, the proper function of government and the dilemma of inherited vs. earned wealth, are pointed out here).  As such, this simple philosophy comes across as too simplistic to hold any practical application.

Bibliography

Rand, Ayn. Atlas Shrugged. New York: Signet, 1992 (originally published 1957).

A Brief Word on Art

Some time back, I was eating dinner out for a change of pace (there are times when even we hermits feel the need to breathe in the humidly fluorescent air of city life).  In the middle of my meal, I couldn’t help but overhear a conversation between two individuals seated somewhere behind me (I couldn’t see them, but judging by their voices I think it’s safe to assume they were women).  They were discussing how popular musicians are resorting more and more to the use of cheap gimmicks to promote shock value for their image (they gave examples of needless profanity, absurd fashion, over-the-top antics, etc.).  Then one of them said something I’ve heard repeated many times before:  “The point of all art is to provoke and challenge people.”  This is one of those statements that on the surface sounds like it simply has to be necessarily true.  After all, who would argue that the most memorable works they can recall off the top of their head were not pieces that initially provoked a high degree of emotion or thought in them (for better or worse)?  The idea that the purpose of paintings, photographs, music, poems, literature, graphics, furniture designs–whatever else people create to artistically engage onlookers–is to stimulate a response from potential admirers and detractors alike, seems all too obvious when we consider how important the emotional response of an audience is in immortalizing the aesthetic longevity of any work of art (and by extension, the artist).  And yet, I still find myself disagreeing with the original statement.

The claim that the purpose of art is to provoke and challenge the individuals who come across it seems somewhat glib to me.  Now, I can see that as a factor in the greater equation, or as a possible end result, but ultimately I feel that it misses a key point in what makes art such an indispensable part of human expression.  Art provokes, and it challenges; but what about the times it doesn’t?  Does it cease to be art?  When I’m walking through a museum, and I’m glancing at the classic works of history, I cannot say I’m really being challenged by them.  I suppose you could say that they provoke a sense of admiration in me, but they certainly don’t do much in provoking any new insights for me.  Not to mention, quite a few pieces evoke complete indifference on my part, but that still doesn’t diminish my ability to recognize them as decent works of art.  They are still good and beautiful expressions of art, which they are simply for the sake of being art, independent of my subjective liking of them.  Or, to put it more articulately: the point of art, in my opinion, is first and foremost to exist for its own sake.  The meanings we assign, and emotions we ascribe, seem to me like secondary functions.

The art itself is adaptable to an evolving landscape, and its specific appeal changes with time and surroundings, but the aesthetic value innate to the work remains untouched.  Even if you dislike a particular painting, you will still not dismiss paintings as a whole.  Even if you just hate a particular song or genre of music, you will still see the artistic value in music.  The same goes for poetry and literature, and a multitude of other modes of artistic expression you have no personal interest in.  The reason being that, even when a piece of art fails to appeal to us precisely because it neither provokes nor challenges us, we are still able to acknowledge some potential aesthetic value in its existence (even if not for our own tastes).

Unless you happen to be a professional art critic or social commentator, both of whom nowadays seem to get paid to dismiss everything.

The Internet as the Rabbit Hole

Every now and then I decide to briefly try going on somewhat of a web-detox regimen.  Not for any deep reasons; I just feel that my web usage occasionally reaches a critically high point.  Mind you, I can’t just cut off my internet connection completely, because the sheer prevalence of online services in managing my daily chores is too great to allow for that sort of liberty (I still have to check my emails daily in order to pay my bills).  But, to my surprise, when put to the test these necessary online duties take me under 15 minutes to complete from log on to log off.  This was surprising to me, considering I’ve previously been known to spend hours on end staring at my laptop screen.  My excuse for racking up these net overtime hours was always that I’m doing something productive (reading fancy-pants articles, and whatnot), in addition to pursuing leisurely activities like online games and YouTube.  But in reality, I was just trying to find excuses to continue staying online for any reason whatsoever.  The internet just has this way of making me feel as if all the important things that occur in life revolve around this omnipresent series of tubes that places the world at our fingertips.

Just about everyone reading this will probably have little trouble understanding the initial stages of withdrawal I experienced throughout the last week, and how I craved that psychedelic high that comes with navigating from one site to another (picking up bits and pieces of information from dozens of different sources, at record speed).  But I don’t want to fall into the trap of sounding overly melodramatic about what should really be a mild nuisance.  Yet it is a noticeable annoyance, in that even now that I have broken my semi-netfree fast, I feel a sense of hesitation about resuming my previous web surfing habits.  Almost as if, now that the routine has been broken, I fear falling back into it again.  The fact that this is having an effect in making me question what would normally be my usual course of action makes me think that some kind of psychological dependency–even if only in the most superficial recesses of my mind–has been severed.  And I’m left with these undefined reservations about reestablishing the normal mode of operation again.

Despite the fact that so much of my personal and professional life incorporates online services, the reality is that the dominance of the virtual world we create for ourselves on the internet is largely illusory.  The all-encompassing presence I am (and I imagine many of you are, too) keen on attributing to websites, forums, online groups, and blogs, is very much a self-maintained delusion, sustained by the fact that cyberspace allows us to do something meatspace doesn’t: transcend social limitations and decorum.

In the four days of my net abstinence, I saw how tediously slowly information in the real world moves.  This makes the speediness and efficiency of online data a very attractive alternative (ironically, however, the lack of easily available distractions made whatever task I was doing also go by much quicker).  Furthermore, I saw how unaware a great deal of people are about internet culture and memes (and not just the elderly), even though I always considered these things to be fairly widespread in popular culture.  The jokes, the tweets, the web-dramas, and the multitude of online communities don’t have much of an existence outside of their cyber confines (either that, or people simply feel stupid referencing them in person).  But the primary difference I took notice of was the general way people communicated with one another.

Whether you believe me or not, I make it a habit to write on this blog in the same manner and diction I do in my daily life.  Now, of course the blog format allows me to correct the occasional grammar mistake, and rephrase poorly articulated statements to better convey my opinions, but the basic tone expressed is the same as it would be if you were sitting across the table from me (just with fewer “ums” and awkward pauses mid-sentence as I fumble over my words).  However, when I see some of the more blunt and vitriolic comments left online, I find myself wondering just how many of these individuals would be equally daring with their choice of insults in a face-to-face conversation.  In person, even more confrontational personalities remain for the most part reserved when they are facing possible opposition in thought from a second party.  There is a level of empathy and solidarity in play; even if you hate the person speaking to you, it’s difficult not to humanize someone whose face is right in front of you.

When forced to interact in person, most people have somewhat of a filter that prevents a lot of faux pas and breaches in social etiquette from leaking through.  Online, where the person you are interacting with is nothing more than a far-off abstraction of typed words, this filter is virtually discarded in favor of apathetic aloofness (see what I did there with “virtually”, ’cause we’re talking about “virtual” reality; try to keep up with my linguistic subtleties nOObs).  And the tiny personal transgressions we are willing to overlook in the fellow human being seated across from us are thrown aside when that human being is reduced to nothing more than a screen.  I imagine it’s so much like having an internal monologue (where anything goes) that we forget there are actual people reading our diatribes.

This brings me to the core realization that hit me this week: the internet is essentially imaginary.  Not in the sense of being nonexistent, but in the sense of it mirroring our impulsive inner ramblings.  Hence, it’s no surprise that it can deliver such a satisfying high to our psyche, since it practically serves as a reflection of our deepest thoughts.  This isn’t necessarily a bad thing, but I think I’ll try to limit my daily dose and remember that there is a space outside of cyberspace, on which real life hinges.

Things I’ve Learned From Late Night Infomercials

My sleeping pattern has been steadily returning to normal in the last two weeks, which is great for my overall stamina.  Nonetheless, insomnia still has a habit of occasionally slipping into my bed at night and wringing her decadent claws around me (worst of all, she never bothers to leave any money on the dresser, despite having her way with me all night.  What kind of a cheap skank does she take me for?).  Still having to bear the occasional case of sleeplessness has at least given me a chance to become reacquainted with a long-neglected friend from my youth: television.

Yes, the internet has spoiled me with its easy access to high-quality resources, so is it any surprise how neglectful I have been toward that lonely square box complementing my entertainment center (which is neither located in the center of anything, nor provides much in the area of entertainment these days)?  But now, I return to you, sweet, patient television, to give my restless nights some ease of mind.  Unfortunately, my time away from TV has left me unprepared to deal with the fact that a.m. programming is the abyss in which infomercials reign supreme.  Naturally, like any person eager to be bored into a comatose stupor (that ought to show that bitchy insomnia what’s what), I watched, and allowed the unwanted lovechild of consumerism and cheesy soap-opera dialogue to try and work its charm on me.  In this experience, I have picked up a few seemingly important life lessons from these late night/early morning infomercial ads.

  • College is serious business!  Are you living in the United States, and can’t afford to go to college?  Don’t worry, despite the fact that low-income students qualify for government grants that don’t need to be paid back–and are usually enough to cover the bill to attend most modestly ranked, in-state public universities–what you should really consider as an adolescent with no credit history or real-life financial experience is taking out loans to attend a privately run, online college.  According to the infomercials, even Brenda Walsh from Beverly Hills 90210 got a Liberal Arts degree this way, and if she can do it with her busy acting schedule, who are you not to?!
  • Baldness is a death sentence!  Of course, I have yet to personally appreciate the life-altering impact of male pattern baldness (though judging by my family album, I have a 50/50 chance of finding out all about it in the coming decade or two).  But if there is one thing that infomercials have taught me about this phenomenon, it’s that once a receding hairline begins, a man might as well start to contemplate how many years of his life he is willing to sell to the Devil just so he can retain enough hair strands to manage a decent comb-over.  The message is clear: if you’re not foaming it, transplanting it, or lasering it, you have metaphorically castrated yourself into a perpetual state of phallic solitude.  Yet, having now been given this great insight into human sex appeal, I’m left wondering why the bald guy I share a wall with is (by the wall-piercing sound of it) still getting laid more than I am.  Also, is that aforementioned pact with the Devil in any way voidable?
  • Acne is a merciless cancer on society that Hollywood needs to defeat one Proactiv infomercial at a time!  Speaking as someone who went through his adolescent years with a moderate degree of pimples on his face, and who still wakes up to the occasional zit now and again (the battle for clear skin never ends, and takes no prisoners–damn it!), I can easily understand the sentiment behind this A-list celebrity crusade against the pangs of acne-laden skin.  What I don’t understand is why, if this cause is as important as the fancy graphics and voice-over narration would lead me to believe, young people with virtually no income of their own are being asked to cough up $39.99 for a product whose main active ingredient is the same benzoyl peroxide you can pick up at any drug store for under five bucks.  Also, I can’t help but notice how far good lighting and a fair amount of foundation go to *ahem* clear up those celebrity faces.
  • Your soul’s salvation depends on your willingness to send money to some guy, who heads some obscure ministry, in some awkwardly named place in California!  For the longest time, I was under the impression that religious clerics had to undergo some kind of seminary training, or at least an apprenticeship of some sort (if that ends up being turned into a reality show starring Donald Trump on NBC, I swear to every conceivable deity and space creature that I will personally bring forth great wrath and vengeance upon the lands of the earth, and all its inhabitants…look, I’m just saying, please cancel The Apprentice already, it’s not even ironically funny anymore).  Apparently my impression was dead wrong, because all you need to offer pious counseling is a P.O. Box and vaguely threatening, thick eyebrows with which to pierce and guilt the very souls you are trying to save.  Sometimes senders are promised gifts for their charitable donations (though if you’re doing it for the free Gideon Bible, I suggest just swiping one from any motel room), but sometimes viewers are offered more urgent reasoning, like, “Fool, it’s the end of the world, so do this one decent thing and send the money, or else…”  What else, you ask?  Who cares, man.  Do you want to take the risk to find out?  I didn’t think so.

The only thing really worth asking now is how those mail-order prayers can be utilized to cure the acne menace of the young, fund the college tuition of the slightly older, and reverse the baldness of the even older.  This is just one of the many ways infomercials are bringing the lessons of life full circle, one sleep-deprived mind at a time.  Now, if you’ll excuse me, insomnia appears to have had enough of me for the evening.

C.S. Lewis’ Abolition of Man, “Men Without Chests”: A Critique


C.S. Lewis may very well be one of the most prolific writers of the 20th century, having gained eminence through his apologetics writings (Mere Christianity, The Problem of Pain, etc.) and the popular children’s book saga The Chronicles of Narnia.  In his 1944 lecture compilation, The Abolition of Man, Lewis sets out to defend the reality of universal, absolute human values against what he perceives to be the relativistic subjectivism of modern society.  His first lecture, “Men Without Chests,” attempts to raise the reader’s consciousness to the prevailing menace that Lewis insists is eating away at the essence of humanity, and the method by which it permeates popular thought.

Lewis sets up the lecture as a critical response to a pair of elementary textbook authors (referred to as Gaius and Titius), and the faulty reasoning by which the prose in their work (referred to as The Green Book) is irreparably corrupting the minds of young children with its promotion of subjectivist values.  Lewis makes sure to clarify that he does not believe the authors to be acting out of intentional malice: “I do not want to pillory two modest practicing schoolmasters who were doing the best they knew: but I cannot be silent about what I think the actual tendency of their work.”[1]  In this view, the authors are as much a product of the greater problem they are propagating as the root cause of it.  Lewis presents his first case against the authors by quoting a section from their textbook, “‘When the man said This is sublime, he appears to be making a remark about the waterfall…Actually…he was not making a remark about the waterfall, but a remark about his own feeling,’” which they clarify with, “‘This confusion is continuously present in language as we use it.  We appear to be saying something very important about something: and actually we are only saying something about our own feelings.’”[2]  Lewis takes issue with these statements for two specific reasons: firstly, they will teach a young student “that all sentences containing a predicate of value are statements about the emotional state of the speaker,” and secondly, “that all such statements are unimportant.”[3]  Lewis goes on to acknowledge that neither of the authors has actually stated this much in so many words, but Lewis “is not concerned with what they desired but with the effect their book will certainly have on the schoolboy’s mind,”[4] since Lewis has already conceded that the authors are as unaware of the harm they are causing as the young pupil is of the harm that is subconsciously being done to him.[5]

Lewis’ position is that the reduction of emotive language to the realm of subjective thought is a subversion of the greater essence of humanity; it cuts out man’s soul long before he is able to fully appreciate the transcendent reality of his emotional experiences.[6]  Lewis sees this as going well beyond providing young minds with a proper education, and calls such tactics an attempt to debunk emotions on the basis of commonplace rationalism: “They see the world around them swayed by emotional propaganda—they have learned from traditions that youth is sentimental—and they conclude that the best thing they can do is to fortify the minds of young people against emotions.”[7]  An action Lewis loathes, because “by starving the sensibility of our pupils we only make them easier prey to the propagandist when he comes.”[8]

What Lewis is doing here (as he does in most of his apologetic works) is setting up a false dichotomy, infused with imaginative hyperbole: either educators teach a student to give full credence to the objective truth of his emotional introspections, or something vital has been “cut out of his soul.”[9]  Lewis presents no logical, coherent argument to support any of his claims, other than his own subjective opinion that he is clearly right on this matter.  It is not self-evidently true that explaining to a young student how our tendency to attribute traits to inanimate objects reflects our own personal feelings about an object, and is not an actual attribute of the object, will cause him to develop long-lasting character deficiencies.  When I stub my toe on my coffee table, my instinctive reaction is to curse the table for hurting me.  I know that the table is not alive; I know that the table didn’t actually set out to hurt me; I know that the table is not malicious; I know that the foul words I’m attributing to the table are a subjective emotional response, and not an actual reflection of the table itself; I know that the table cannot hear or sympathize with me; but I still can’t help but animate the inanimate object.  Why?  Because I’m human, and I can’t control the chemistry in my brain that dictates my responses to the stimuli of my environment.  Knowing and recognizing this reality has not hindered, or stunted, my emotional development, nor has it done so for anybody else.  And even if it did have negative repercussions for the human psyche, this still would not be an argument against the claim that our emotional attributions to the surrounding world are an entirely subjective experience.  As it stands, Lewis’ entire reasoning for opposing this view rests on the basis that he finds it unpleasant and harmful.  To which the only salient response can be: so what?  The veracity of a claim does not depend on its supposed bleakness or the unpleasantness of its implications.

Lewis also tries to give further authority to his position by claiming that, prior to modern times, all men believed that “objects did not merely receive, but could merit, our approval and disapproval, our reverence or our contempt.”[10]  But prior to modern times, men also attributed the occurrence of epilepsy to demonic possession, instead of a treatable neurological disorder; the mistaken beliefs of the past need not hold credence for us in the present, especially as we gather more information and knowledge about the world.  Also, the claim that objects can merit approval and disapproval is a baseless assertion.  Objects can cause us to respond toward them in one manner or another, but they do not merit our response, since objects are devoid of any kind of intent, and thereby do not and cannot strive to live up to anyone’s conceived expectations.  Not to mention, our responses to objects are entirely dependent on the context of the situation we find ourselves in, and are likely to change under different circumstances.  Hence, our emotional responses remain a subjective experience any way one wishes to look at it.

At times, Lewis seems to acknowledge that emotional attributes are person-specific; he states [quoting Plato], “The little human animal will not at first have the right responses.  It must be trained to feel pleasure, liking, disgust, and hatred at those things which really are pleasant, likeable, disgusting and hateful.”[11]  So, to clarify: our emotional responses towards objects (or anything else, for that matter) are objectively true, but we need to be trained in order to feel the “right responses”?  Does that not imply that if my initial emotional response to an object strays from the response Lewis considers to be the “right response,” my emotional response is not objectively true to begin with?  If my emotional responses have to be trained to follow suit with those of others, are they even still my emotional responses anymore?  Am I not just subverting my emotions in favor of someone else’s?  And if that’s the case, how can I trust that Lewis’ interpretation of what constitutes the right emotional responses is any more trustworthy than my own?

Lewis’ response to this is to posit the existence of a universally recognizable “greater thing,” which he identifies as the Tao: “It [referring to the Tao] is the reality beyond all predicates, the abyss that was before the Creator Himself.”[12]  It would be completely appropriate to stop Lewis right there, and point out the disingenuous way he presents an Eastern concept–the Tao–as if it were congruent with the monotheistic, Abrahamic worldview of the West.  (Although his following sentence does a better job of characterizing the Tao: “It is Nature, it is the Way, the Road.  It is the Way in which the universe goes on, the Way in which things everlastingly emerge, stilly and tranquilly, into space and time.”)[13]  It is important to note that no warrant is given by Lewis to justify this sleight of hand, in which he misconstrues the Tao by associating it with his Christian conception of a conscious “Creator,” and in particular with his desire to designate this creator as a “Him.”  Lewis’ motivation here is to demonstrate that since our emotional responses are kind-of-sort-of similar across cultural lines, we must collectively be appealing to a universal, objective authority as a point of reference:

And because our approvals and disapprovals are thus recognitions of objective value or responses to an objective order, therefore emotional states can be in harmony with reason (when we feel liking for what ought to be approved) or out of harmony with reason (when we perceive that liking is due but cannot feel it).[14]

But Lewis has failed to logically establish that our approvals and disapprovals are recognitions of anything but our own subjective experiences.  It certainly has not been shown that our value judgments are any indication of an objective order (or arbiter of any sort).  Not to mention that Lewis’ only defense against the prevalence of divergent emotional responses to particular situations/objects seems to be a weak call for the need to “train” people to have the “right responses.”  The question he continually fails to answer definitively is why, if he is right, people’s experiences are not convergent on all matters of emotional response.  And even on matters where they do converge, people will often demonstrate no unified reasoning for their responses.  It can be said that my observational experience that the sky is blue is objective; no one absent some kind of physical or neurological disorder would deny that the sky is blue.  However, my emotional experience that the sky is sublime is not objective, since another person can honestly say that his emotional experience is that the sky is dull; or he could agree with me that the sky is sublime, but for a varying array of reasons that have nothing to do with my own experience.  Neither one of our subjective claims holds more merit than the other.  And no resolution on the matter can be reached, since we can both accuse one another of not being “trained” to hold the “right response” towards the sky.

A frustrating part about Lewis is his apparent inability to differentiate between the objective fact of a matter (such as the fact that I happen to have feelings XYZ about an object) and the subjective response that stems from it (the actual emotions caused by feelings XYZ, the specifics of which, in any particular situation, are unique to me alone).  He states, “It can be reasonable or unreasonable only if it conforms or fails to conform to something else,”[15] in an attempt to make his notion of an absolute objective value sound assertive.  But being assertive doesn’t make an unfounded claim any more true, because even if one grants the veracity of his statement (namely, that we judge things as reasonable only as they pertain to other things), this admission does not warrant the stipulation of any sort of objective, or absolute, greater value judgment.  Our interactions with our surroundings foster the values and emotional responses we attribute to objects and matters; meaning that we are the fundamental arbiters of our perceptive values.  Furthermore, our values and emotional responses change as we gain more information and data about our surroundings.  No universally objective point of reference is needed.  This does not invalidate the reality of our emotional experiences, but it is nonsensical for Lewis to claim that the mere existence of our emotional experiences must also confirm the existence of some kind of objective source for our emotions.

Towards the end of the lecture, Lewis begins to settle into a string of fallacious and bullying tactics against his detractors:

Either [Gaius and Titius] must go the whole way and debunk this sentiment like any other, or must set themselves to work to produce, from outside, a sentiment which they believe to be of no value to the pupil and which may cost him his life, because it is useful to us (the survivors) that our young men should feel it.[16]

“Which may cost him his life”: here Lewis is either keen on overdramatizing matters, or he is the most deranged man that has ever lived.  Telling a student that the emotional attributes he assigns to inanimate objects (which was the point on which Lewis started his argument) are not in reality a reflection of the objects themselves, but a subjective value that reflects the feelings of the person making the attribution, does not, in any way, rob said student of the emotions he is experiencing.  Lewis has not established, in any way imaginable, that this is the case.  Being able to understand the subjectivity of one’s emotional experience will not render one some kind of blasé automaton, since the emotions we feel are involuntary to begin with (we can’t stop feeling them).  Lewis tries to squirm out of the fact that he has not logically presented his case by stating, “In battle it is not syllogism that will keep reluctant nerves and muscles to their post in the third hour of bombardment.”[17]  This, combined with his insistence that emotional responses which diverge from what he perceives to be the “right response” must be trained into conformity, is evidence enough that Lewis cannot accept that a person is under no obligation to give even the slightest credence to his subjective, emotional diatribes, absent any logically coherent and consistent argument.

To some readers this might sound especially harsh, but they might want to consider the manner in which Lewis addresses his opponents: “It is an outrage that they should be commonly spoken of as Intellectuals.  This gives them the chance to say that he who attacks them attacks Intelligence.”  The last line is particularly ironic, since this form of fallacious engagement is best characterized by Lewis himself: “a perceived devotion to truth, a nice sense of intellectual honour, cannot be long maintained without the aid of a sentiment which Gaius and Titius could debunk as easily as any other.”[18]  The message Lewis is presenting to the reader here is that one cannot disagree with what he has said, because only those who accept his premise of an absolute, objective value have any basis upon which to argue about truth.  Of course, this is completely dishonest and unfounded to anyone who does not already agree with Lewis’ [subjective] point of view.

The authors of the textbook he has been arguing against don’t say that there exists no means by which to perceive truth, nor is there any rational extension by which one can make such a claim (this is another one of Lewis’ retreats into fallacy).  Instead, what they rightly say is that one’s personal feelings on a matter are irrelevant when it comes to evaluating reality, because reality is not contingent on any person’s emotional response to it; nor does it ultimately care about one’s meager opinions.  But Lewis cannot accept this, which is why this entire lecture can be summarized as follows: “I don’t like the implication of X, therefore X needs to be wrong.”  His entire justification of the objective truth of emotional responses collapses into one giant emotional response; one subjectively giant emotional response.


[1] Lewis, C.S. The Abolition of Man, “Men Without Chests.”  HarperOne, 1944, pp. 1-2.

[2] Lewis, pp. 2-3.

[3] Lewis, p. 4.

[4] Lewis, pp. 4-5.

[5] Lewis, p. 5.

[6] Lewis, p. 9.

[7] Lewis, p. 13.

[8] Lewis, p. 14.

[9] Lewis, p. 9.

[10] Lewis, p. 15.

[11] Lewis, p. 16.

[12] Lewis, p. 18.

[13] Lewis, p. 18.

[14] Lewis, p. 19.

[15] Lewis, p. 20.

[16] Lewis, p. 22.

[17] Lewis, p. 24.

[18] Lewis, p. 25.

My Grievances With 3 Classic Cartoon Characters

Like most children, I wasted a good deal of my early development waking up near dawn each and every day just to catch the regular lineup of morning cartoon classics.  But unlike most children, I wasn’t content with just viewing and enjoying the programs–No!  Because even from a young age, and even when it comes to matters we hold dearest to our hearts, I believe there is a limit to how much nonsense a person ought to be willing to accept from their entertainment.  And some of the logical gaffes of classic cartoon series are too great not to be called out and challenged directly.  In that light, consider this a serious list of grievances that is decades in the making; a set of hangups I have been nursing for as long as I can remember being a conscious agent.  This was my first venture into social commentary and cultural polemics (and a man never forgets his first), without which I may not have become the blogger who stands…er…writes before you today (and I doubt that’s a world any of us would want to imagine).

So, skipping any further introductions, let’s get started on this list of my childhood grievances, in order of personal annoyance, from least to worst offender.

3.  The Problem With Scooby-Doo:  Fred.  Just so there is no confusion for those of you not too familiar with the show, this guy is Fred:


Look at him, with that stupid orange ascot. I hope it chafes his neck.

Fred isn’t the brains, he isn’t charismatic, and he sure as hell isn’t the comic relief.  All he does is state the obvious aloud, and then come up with the most imbecilic plan to catch the weekly crooks (“Oh, hey Shaggy, why don’t you and Scooby lure the bad guy to step onto this puddle of oil, so he can slip on it?  I’m sure you’ll be fine.  FYI, I’ll be hiding safely in the bushes over there.”).

But even if I’m willing to overlook all of this, there is still one major character flaw that makes Fred an irredeemable jackass in my eyes.  Scooby and Shaggy are always presented as being 100% convinced that the place the group is investigating is actually haunted (Daphne’s stance is more or less ambiguous, but she generally falls into this same line of thinking when the “monsters” appear).  In contrast, throughout the show’s run it is established that Fred and Velma are repeatedly unconvinced that any of the places Mystery Inc. investigates are really haunted, which is why the two of them always look into alternative explanations right from the start.  So far so good.  Yet, if Fred is convinced that the unarmed ghost running at him is just some guy clothed in a loosely fitting bed sheet, why doesn’t he just tackle the bastard?  Remember, Scooby and Shaggy (and probably Daphne) actually think it’s a ghost, and Velma is too small in stature to be much of a match for a full-grown man.  Fred, however, is presented as an athletically fit young man.  He could do some serious damage to the group’s would-be assailants if he stopped moping around like a waste of carbon and used his physical traits to contribute to the team.  Honestly, I’m glad he was written out of most of the later incarnations of the show, as it gave me the opportunity to imagine how one day he tripped over his bell-bottom jeans, fell out of the Mystery Machine, and the rest of Mystery Inc. just never went back for him because they couldn’t be bothered to care.

2. The Problem With The Smurfs: Their Incomprehensible Gender Issues.  Everybody points out the fact that the Smurfs have a serious male-to-female ratio problem in their mushroom village.  Few people bother to point out that this is only a minor symptom of the greater dilemma in the Smurf biosphere.

The problem isn’t that there is only one (later two) female Smurfs; it’s that there are no naturally birthed female Smurfs–period.  Smurfette was artificially created by Gargamel, and then later on in the series Sassette was artificially created by the three young Smurflings.


Just to jog your memory, this one is Sassette. I realize you were racking your brain trying to remember.

Now, I can accept that within the reality the Smurfs inhabit, they are magical creatures that are (literally) delivered by a stork every some-odd season or so.  However, since every single occasion of a Smurf being born naturally produces only (presumably) male offspring, this heavily implies that Smurfs are organically a one-gendered species.  This itself is not the problem, either.  The problem is: how can a one-gendered, essentially asexual population of creatures still feel sexual attraction towards an artificially created opposite-gendered individual (i.e. Smurfette), when she isn’t really a natural product of their biological makeup?  In numerous episodes the male Smurfs are shown swooning madly over Smurfette simply on account of her being of the opposite gender, despite the fact that Smurfs obviously aren’t gender binary (i.e. they have no opposite gender).  If anything, all the Smurfs should either feel no sexual attraction towards anyone, or all the (presumably) male Smurfs ought to be getting it on with one another.


At least Vanity Smurf seems to make a lot more sense now.

1. The Problem With Tom and Jerry:  Tom was the wronged party in the series, and no one seems to care but me!  Let’s look at the facts, shall we?  Tom is the unseen homeowner’s pet, and thus essentially an official resident of the house he occupies.  Jerry, on the other hand, is a rodent; a pest that’s constantly breaking into the residence to steal the tenants’ food and take advantage of their living fixtures.  Jerry is obviously the intruder here, while Tom is just doing what he was probably brought into the house to do in the first place: keep pests away.  Sure, it’s all fun and games for Jerry to ravage and plunder someone else’s fridge and hoard away their belongings, but what do you think will happen to poor Tom when his owners get fed up with his inability to do what he was (by all estimation) brought into the house to do?  He’ll probably be put out on the street, or worse, be dragged off to the animal shelter–where he will eventually be put to sleep if no one takes him (and since he is already an older cat, no one probably will, because all those snobbish kids care about saving is that cute little kitten in the back).  Are Jerry’s antics still funny to you?  Yeah?  Then consider the fact that, if you pay attention to the episodes, about 80% of the time Jerry is the one provoking Tom (who is, as we have already established, simply protecting his and his caretakers’ property from a commonly disease-carrying vermin).  You know what, fuck that mouse.  I hope he choked on that ill-gotten cheese.


Yeah, enjoy it.  You thievingly amoral home invader, you.

All right, now that I’ve finally gotten all of that off my chest, the healing process can at last begin.