Coping With Stage Fright

The first thing that needs to be understood about stage fright is that either you have it, or you don’t.  If you don’t have it, good for you, enjoy your life.  But if you do happen to have stage fright, you will more than likely have to put up with it for the rest of your life.  I’m not saying that to be discouraging, I’m just trying to be honest about my own personal experiences of dealing with stage fright.  There is good news, however, in that even if you can’t completely exorcise the irrational phobia from your consciousness, there are ways to manage it; ways to make your anxiety undetectable to lay observers.

Before I get into all the details let me give a little bit of background about my own issues with stage fright.  I have always had stage fright.  As a kid in grade school, I hated being called on in class, I hated having to take part in school plays/performances, I hated everything and anything that had me standing (or sitting) in front of a group of people, focusing on my words/actions.  Whenever I found myself in such a situation the first thing that would happen was my heart rate speeding up; next I would feel the blood pump into the back of my head (which was deafening to my ears), my knees would feel both light and heavy at the same time, my mouth would go dry, my voice would give out, and my face would turn as red as a tomato.  This went on for a long, long time.  I never “grew out of it,” as people kept telling me I would.  Even today, as someone whom most people would identify as a sociable, talkative kind of guy, I still feel a silent dread at having to address a group of people.  But because I’ve learned how to cope with my stage fright, the uncomfortable experience remains solely a private one.

Let me just say upfront that my ability to manage my stage fright is not the result of counseling or medication.  Additionally, I tried meditating as a teen, thinking that relaxation techniques would help me get over my anxiety, and all it ended up doing was making me more anxious (I worried so much about whether I was doing the breathing right that I couldn’t relax for a second; also, it made me dizzy).  Self-hypnosis was a complete waste of time.  And no, thinking positive thoughts didn’t do much either, because (for me at least) worrying thoughts are never voluntary to begin with.  What helped me cope with my stage fright in my late teens were the following realizations:

1.)  Stage fright can never fully go away.  It sounds a bit strange, but it’s probably the most important lesson I had to learn.  I was so focused on getting to a point where all of the symptoms I associated with my stupid phobia would simply go away for good, that any effort that yielded a lesser result would come across to me as a failure; leading to more anxiety on my part.  Finally, in college I began to concentrate on getting to just a barely adequate presentation level (enough to at least be passingly understood by my audience).  Gradually, my confidence grew with each presentation, which started easing up the physical signs of my nervousness; e.g. my voice stopped changing pitch, my face stopped blushing (this one was more of a steady improvement over time; first the blushing was restricted to just my cheeks, then it only showed up for the first minute or so of my talk, until finally it was gone completely).  Now, don’t misunderstand me, I was (and remain) terrified of speaking in front of people.  However, once my goal changed from trying to beat my stage fright, to simply trying to hide it from the audience, my mind started to feel less overwhelmed by the experience, giving me enough time to take a breath and at least project confidence to the audience.  Which eventually developed into real confidence.

2.)  Starting with a joke is always a good idea.  I know it sounds cliché and overly simplistic, but I find that even the most mundane of humorous comments at the start of a talk will make the experience so much easier when it comes to my stage fright.  It doesn’t just put you at ease, it puts the audience at ease too.  Not to mention that if you do slip up in the middle of your presentation, establishing yourself as a humorous person at the very start makes it so much easier to recover (it makes the audience more forgiving too).  I know that some of us aren’t gifted with perfect comedic timing, but you need to remember that no one is expecting you to bring them to tears from laughter; no one is expecting you to be funny at all.  A slight quip about the scenery or some local activity (keep it conservative), or even the talk you’re giving itself, will do the job just fine for no other reason than that it is not expected by the audience, which will make them appreciate your effort to make them more comfortable.  And will make you more comfortable in return.

3.)  People don’t care about you.  Well, some people care about you, but your audience doesn’t.  At most they care about what you’re about to present, not you as a person.  And if you don’t draw attention to yourself, they’ll easily forget about everything in your talk that didn’t concern their preexisting interest.  This is a point that I was aware of long before I learned to deal with my stage fright; the problem was that it did me no good because I didn’t actually believe it.  In the back of my head (my nervous, blood-pulsing head) I firmly held on to the idea that everyone was listening to every word I said, as if their lives depended on hearing my stupid 5-10 minute oral report.  Getting myself to actually abandon this bit of irrational thinking took some time and a lot of effort, but it was a needed step in being able to handle my stage fright problem.

There are several more things that go into it, as well as numerous subsequent details that accompany the three above, but I think this covers the essentials as they pertain to my experience.  And it needs to be remembered, this is based on my experience.  Personalities are different, and people respond differently to different stimuli, so there is always the chance that my methods won’t work for you.  However, I think the points I listed are universal enough to at least push most people in the right direction of coping with their stage fright.

Generation C(ynical)

With the first month of the new year coming to a close, I’m left sensing the same old aroma of destitution oozing from the pores of my generation.  For the longest time I could not trace or deduce its origin, but its stench rose up with the passing of each year nonetheless.  It’s particularly evident in the restlessness we exhibit towards our relations with the rest of the world.  Our attention span is gradually eroding away, as we become unable to focus on one thing long enough to satisfactorily digest any of it.  In turn, we try to compensate for this defect by focusing on several things at once, but never registering enough of anything to feel fully content with ourselves, making us dependent on a continuous supply of novel information and content to keep us entertained (often erroneously confused with being happy).  We have by necessity become accustomed to multitasking everything, not as a result of a higher functionality, but out of a never ending search for higher stimuli.  We want to be part of something grand, and we are sure that ours is the era of unparalleled social transformation, but as we look around our search is left unfulfilled by the unimpressive characters that bumble before us to signal the beginning of the new epoch.

There is a banner that hangs above our heads, and it depressingly reads:  “No heroes here to be seen, no glory left for me.”  We desperately want relevance (just check out the wide array of YouTube videos; or even easier, look at the large number of blogs written by individuals eager to share their personality with an audience–including this one), but we have lost interest in the form this relevance can take.  We have given up on the notion of heroes who affirm life; what we desire now is a continuous supply of cynics.  We do not believe that, as a person, as a generation, as a species, glory can be achieved anymore in our social interactions, so we dare not even attempt it.  The revolutionary spirit has come to a screeching halt, and the occasional sparks of it seen across the world could very well be nothing more than the reflexive cry of an amnesia-afflicted body.

Like our predecessors, we are eager to achieve, to innovate, to create, to socially progress, but we are constantly being told that our ambitions are misplaced; how we ought to look to the past for guidance rather than compose our own future.  Yes, we are being told that the generation that has brought about one of the largest gaps of global socioeconomic inequality in modern history, that has produced (and continues to produce) one economic blunder after another, whose self-appointed wisdom has left half the globe starved or reeling in anguish, is the generation we need to model ourselves after.  These are the individuals we are expected to emulate as a generation?  The “wise elders” we are to turn to for guidance?  We’d be better off seeking advice from recycled fortune cookies than from this group of chronic failures!  But they keep that banner solidly pinned over our heads, and condition us to believe that we are dependent on their leadership to endure the problems they have created.  And we go along with it, because tradition says we have to respect ancient wisdom, and we cannot violate traditions–can we?  Well, I don’t know about you, but I sure as hell can.  Because I choose to stand under a very different banner, one I have willingly nailed over my own head, and ask no one else to adopt, unless they so choose.  My banner holds no cynicism about the future; in fact it welcomes the coming of new eras, new innovations, new ideas and ideals.  It reads:  “For progress to occur, traditions must die.”

The concept of ancient wisdom is imaginary.  Had humanity always been concerned with being governed by the values of the dead, we’d still be stuck with our ancestors’ superstitious explanations of where the sun disappears to after it sets every night.  We cannot afford to conserve values that hold no relevance to us; we must adapt to a changing scenery, or (literally) die trying.

Differentiating between the Objective and the Absolute

For something to be an absolute it must by definition be consistently changeless and impervious to new data–it always remains the same, in every situation, under every condition.  Thus, if someone claims to be taking an absolutist position, he is essentially proclaiming that said position is immune to refinement and scrutiny, and will never need to be even slightly reexamined or amended, ever.  This is a mindset that I thoroughly reject, and I do so by necessity.  No idea, no proposal, no hypothesis, no theory, no fact, has ever been, or will ever be, barred from the scrutiny of newly emergent data.  It is through a process of rigorous examination and reexamination that existing data about reality is reaffirmed, refined, or displaced by a better working model.  The introduction of absolutes does nothing to further our knowledge or our understanding of reality; in fact, it negates both with its inflexibility to change.  This is what I mean when I say that I reject all absolutes, and this is where some people falsely conclude that I must think all facts and claims about reality are just subjective opinions.

For something to be objectively true it must be verified to exist independent of any subject’s perception, feeling, or thought on the matter.  There are schools in philosophy which deny the possibility of objective facts on the basis that everything we perceive to exist does so solely through our subjective human perception of it, therefore what we call objective facts can never be anything more than our subjective human perception.  I am definitely not an adherent to such a mindset, and I’ll tell you why:  1,000 years ago various strains of viral and bacterial infections made plague and disease a common occurrence in people’s lives.  The fact that these people had no knowledge of the viruses and bacteria that were causing their ailments (and no knowledge of germ theory, in general) made no difference to the reality of their existence, because the viruses and bacteria did not care whether or not they were perceived or known by the organisms they were infecting, maiming, and killing–that is to say, they existed independent of the subject’s perception, feeling, or thought on the matter; their existence was an objective fact whether anyone perceived it or not, as was their effect whether anyone understood it or not.  Likewise, prior to Newton people were largely unaware of the fact that gravity was accelerating them (and everything else) downward at 9.8 m/s^2, regardless of their subjective feeling or thought on the matter.  And similar things can be said about a number of other cases, where subjective perceptions are irrelevant to objective data: the heliocentric solar system, the age of the earth, the shape of the planet, and so on and so forth.

But I can already hear a faint cry of protestation here.  “Wait a minute,” someone might be inclined to say, “doesn’t the fact that gravity is acting on us right now, and has always done so, mean that it is an objective fact, and an absolute, which contradicts your previous rejection of absolutes?”  In short, no.  The theory of gravity is accepted as the most reliable conclusion about the various relationships we see between matter on earth, in the solar system, and in the universe, and, thus far, has survived all measures of scientific scrutiny–but this by definition means that it is open to scrutiny, hence it is open to being overturned if (and that’s a big if) future observable, testable, verifiable, falsifiable, empirical data were to demand such a verdict.  If, hypothetically, extensive research were to demonstrate that what we think of as gravity is really the effect of three different, yet-unnamed, forces working together to produce what we have mistakenly been calling gravity, there isn’t a competent physicist in the world who would in defiance of all evidence dogmatically cling to the previous gravitational model–this is what keeps scientific theories from being absolutes, while still remaining objective facts; namely, that objective facts don’t need to be impervious to future revisions to remain objective, they just need to be verifiable independent of a subject’s perception, feeling, or thought on the matter.

A point of contention that arises from this is the claim that due to our fallible human perception, what we deem to be objective facts will always be dictated by our subjective observations, thus facts about reality cannot be verified fully independent of a subject’s perception, feeling, or thought on any matter.  Proponents of this philosophical position would agree with me about rejecting absolutes, but would also insist that my attempt to defend objective facts is dubious, because our interpretations of available data are unavoidably limited and biased, on account of our flawed human cognition.  I accept the fact that our sub-Saharan, anthropocentric, primate brains are very good at concocting a flawed image of reality; hence the once held belief that the earth is stationary and the center of the universe.  However, doesn’t the fact that, no matter what people might have thought on the subject, the earth was still revolving around the sun, in the corner of one tiny galaxy, indicate that verifiable objective facts still exist despite what our subjective perception tells us?  If we subjectively perceive the sun to be moving across the sky, but objectively know that it is the earth that is actually moving around the sun, does that not serve as a viable demonstration that despite all our flawed human thinking, we can still differentiate between the subjective and the objective?  After all, it is not our flawed human perception that is telling us that we live in a heliocentric solar system (our perception says the opposite), it is the accumulation of observable, testable, falsifiable, empirical data.

For one to continuously try and challenge this by claiming, “But you can’t fully know if you’re interpreting the data accurately,” delves into the realm of what I would call absolutist subjectivism–where one’s insistence that all physical facts are subjective starts to very much resemble the opposing view that facts are absolute (and I have already explained why I reject absolutist positions).  Such a dedication to deeming all facts as merely the subjective perceptions of the mind ignores the reality that our perceptions are not solely the product of internal factors, but are also largely dependent on and shaped by factors and circumstances of the external world.  The sun isn’t bright simply because we internally perceive it to be so; we perceive it to be bright because we are responding to external stimuli telling us it is so (the sun’s objective brightness couldn’t care less what we perceived one way or the other).

The pure solipsist would not be satisfied with any of this, because (according to solipsism) only one’s mind can be sure to exist; all else (including physical observations and personal perceptions) is liable to be an illusion (such as a hallucination or dream) created by said mind.  Generally, I consider solipsism to be too unfalsifiable of a position (which is a point to its detriment, not its favor) to spend time arguing against.  I’m skeptical as to why, if reality is wholly an illusion of my mind, I’m imagining myself to be nearsighted.  I have had bad vision since I was about 11 years old, and to this date never have I had one dream in which my dream self was afflicted with myopia.  The reason for this doesn’t seem all that mysterious to me: my brain doesn’t need my eyes to create images while I’m asleep; it works with the images registered in my conscious memory.  But if solipsism is true, and my mind is the only thing that truly exists, why does my imaginary self need eyes and glasses to perceive a world that is essentially a hallucination?  Why am I imagining myself to be dependent on physically external factors (my glasses, my contacts, my optometrist), in a reality that is essentially a product of my own conscious creation?  Yes, I know that solipsists will probably come up with some long-winded philosophical musing about how solipsism does not suppose that the content produced by the sole existence of the mind necessitates any sort of control over said content; which does nothing to explain why I need my physical, material eyes and glasses to perceive an immaterial reality.  But it doesn’t matter, because it would be a waste of time to bother refuting the specifics of solipsism.

For the sake of argument, let us accept what the solipsist says: mine is the only mind that exists (or, at least, the only one that is verifiable to me), and the physical world I perceive is a creation of my mind.  How would one actually go about differentiating a solipsistic reality from a non-solipsistic reality?  Even if solipsism is true [which I highly doubt], am I still not bound and limited by the parameters set up within this reality my conscious self is inhabiting?  Even if the force of gravity is something that my solipsistic mind has created, isn’t my inability to levitate off of the ground (even if just an imaginary perception) a fact within the reality I am inhabiting?  And doesn’t the fact that, despite whatever my mindful feelings, thoughts, or desires are on the matter, I am incapable of imagining myself defying gravity by levitating off of the ground make gravity an objective fact, at least within the conscious reality I am inhabiting?  Even if I turn out to just be dreaming all of this through some mind-only, brain-in-a-jar kind of state, if the parameters of this reality operate independent of my subjective perception, I am still bound by the physical world that I am apparently hallucinating myself in.  And if I have no means by which to escape from this dream world, I ask again, how is a solipsistic reality different from a non-solipsistic reality?  What exactly does solipsism offer to the discussion, besides a bunch of useless, baseless, non-consequential propositions?  Nothing, nothing at all.  (And if you happen to be a solipsist, and you disagree with what I’ve said, you should keep in mind that by disagreeing with me, you are essentially disagreeing with yourself, given that I–and this blog–are just creations of your mind.)

Now, a fair point to all of this would be to stop me right here and mention how, when people in the modern world are discussing absolute and objective facts, what they are usually debating over isn’t the cold, mechanical facts of scientific inquiry into physical reality, which hold no direct consequence for their personal values in life (though this is a debatable point, depending on the particular scientific inquiry in question).  What people really are asking is whether or not there exists such a thing as absolute moral judgments, or objective moral judgments.  This, to me, is a much more intricate question to ponder.  Personally, I am still inclined to say that absolutes do not exist even when it comes to moral judgments.  For instance, do I think that lying is morally reprehensible?  Yes.  Can I think of instances when lying would not be morally reprehensible?  Yes.  I cannot see how an absolutist moral framework allows for such a disparity on a single moral judgment to occur, since something that is absolutely right or wrong demands that it apply equally to all circumstances, lest one admit to the circumstantial (non-absolute) nature of moral principles.

“Intelligentsia is Dead! Hooray!”

Historically, the word intelligentsia refers to a class of people occupying a murky upper-class status on the basis of their intellectual contributions to culture and society.  These select few would (more often than not) share two major criteria amongst themselves:  1. They were rich.  2. On account of criterion 1, they didn’t have to work for a living, and thus could spend all their time philosophizing about life and its hardships (unlike those philistine farmers who were too busy collecting crops for the village to sit back and reflect about what really matters to people).  Since the end of feudalism, and the laughably archaic status of aristocracies, intelligentsia has come to refer to just about anybody who writes a book that educated people hold in high regard, whether it contributes anything to our social consciousness or not.

Admittedly, the notion of what is, and is not, to be deemed intellectually worthy is quite subjective.  Speaking for myself, I would rather read the worst dime novel imaginable than the most academically praised book on anything political.  Regardless, I have no issues with the diverse opinions people hold about good and bad writing or art.  What I’m getting at is how intelligentsia, as an applicable term, is entirely nonsensical in any contemporary context.

Whether it was genuinely well intentioned, or the product of a corrupt system, the artists and writers that made up the intelligentsia of the past did produce works that creatively immortalized pieces of human history.  They gave a frame of reference to a past culture; something we can nostalgically look back on and draw inspiration from to progress forward through moments of social gridlock (for example, the way the Renaissance was inspired by the intellectual contributions of ancient thinkers).  I can’t imagine such a thing happening with any of the works being produced by the public intellectuals of today.  That’s not to say that there are no good books being written in literature, or that modern art is devoid of aesthetic skill (though my septuagenarian neighbor would beg to differ).  But none of these are truly capable of sparking the imagination of the people as they once did, partly because we would have to be removed from them and forget about them first (which in today’s information age is impossible).

It is noteworthy that the title of the public intellectual has never been assigned on the basis of popular opinion, but on the basis of what other public intellectuals promote amongst each other as just too brilliant and sophisticated.  And everyone goes along with it, because it’s assumed that these people must know what they’re talking about (and nobody wants to risk looking unsophisticated and lowbrow).  This is just the nature of the animal; unlike the sciences, the arts and humanities have no such thing as a decent peer-review process, largely because the peers themselves are removed from the broader social culture they reside in.

The intelligentsia of society used to be polymaths, whose expertise would roam across academic disciplines.  That is no longer a viable position to occupy.  Our knowledge and data are too broad to be encapsulated by any one mind; specialization is a necessity.  The era of the intelligentsia is dead and gone, and I for one welcome its passing as an important testament to our educational progress as a society.  We have accumulated so much data, so much raw knowledge, that it cannot be confined to the few.  Despite the pessimistic nature of these posts, some words do deserve to die.  When a word becomes too rigid to be properly applied in any meaningful way, the responsible thing to do is to retire it, and let it rest in peace.  Now, all we need to do is let the self-styled public intellectuals in on this fact.

Ayn Rand’s Atlas Shrugged: Analysis and Critique

Part One:  Analysis

Exposing the means by which the looters of a nation are able to exploit the abilities of the productive members of society lies at the heart of Ayn Rand’s novel Atlas Shrugged.  The plot of the novel makes a clear distinction between the two factions through the values each side exhibits for their worldview, and more importantly the imagery by which they express such ideals.  This is best illustrated in a dialogue that occurs halfway through the novel, in which the pirate character Ragnar Danneskjöld—speaking to industrialist Hank Rearden—declares his intent to erase the person of Robin Hood from human consciousness.  To him, Robin Hood is the embodiment of the misplaced mentality modern society has come to embrace.  He claims that it is through the legacy surrounding his exploits that the looters of today are offered a convenient excuse to promote their detrimental moral superiority, which holds that the need of one man justifies the sacrifice of another.  Ragnar Danneskjöld’s aim to eradicate the ideals, legend, and righteousness of Robin Hood, as a means to free mankind from his own self-imposed deprecation and provide him with the independent morale necessary to survive, stands as a perfect metaphorical expression of Rand’s philosophical stance on the virtue of self-interest over the misguided value of self-sacrifice.

Right from the start Ragnar Danneskjöld makes it abundantly clear how his worldview contrasts with that of the man he is out to destroy, Robin Hood, and why these differences create an odious contempt toward the man, and the ideals which are embodied by him.  He explains that where Robin Hood sought to take from the rich and give to the poor, he in turn is “the man who robs the thieving poor and gives back to the productive rich” (page 532).  Here, Danneskjöld is careful with his diction, so as not to give a misconstrued representation of his words.  He uses thieving to describe the needy underprivileged poor and illustrate his belief that a group that seeks compensation, without the intent of earning it, is in fact robbing from those who have accumulated their wealth through relentless labor and resourcefulness (the productive rich).  The idea of Robin Hood, Danneskjöld states, creates a false warrant of merit where it is the inept who are justified in demanding aid from the skilled, a logic which holds little ground against objective reasoning, based on the notion that strength and intellect are factors of dominance, not servitude.  Danneskjöld’s views differ in that he sees each man as responsible for his own wellbeing, but realizes that this is challenged by the faux guilt hanging over the conscience of the productive few, and the widespread assurance that those with manufacturing ability have a responsibility to provide for the survival of others with lesser capabilities.  He encapsulates this partiality as, “the need of some men is the knife of a guillotine hanging over others—that all of us must live with our work, our hopes, our plans, our efforts, at the mercy of the moment when that knife will descend upon us” (p.532).  Such a grim depiction further serves to emphasize the intense abhorrence Danneskjöld feels for the looters’ ideology.  He sees the admiration of men like Robin Hood as a prime factor in how society has come to delude itself with the supposed inherent justice of altruism, which is being offered as the only humane quality necessary for people to possess.  By encouraging these sentiments, the looters render any counterargument nonsensical simply through the public impression that all other takes on the matter are immoral by definition.  Hence, leaving the producers and providers of society to be drained and disposed of as the needy masses see fit.

Ragnar Danneskjöld’s vehement reproach towards Robin Hood stems not so much from the actual reality of the man, but from the pretense which has come to symbolize him.  On page 532 he says, “It is said that he fought against the looting rulers and returned the loot to those who had been robbed, but that is not the meaning of the legend which has survived.”  Clearly, Danneskjöld is able to differentiate the folk hero image Robin Hood represented as someone who stood against an abusive ruling authority—and even accept the gallantry associated with it—from what he sees as a prevarication created by those seeking to use his actions to elevate their own ideology amongst the populace.  His contempt is aimed not so much at eradicating the man, but the myth of him that has come to serve the plunderers of society.  Nonetheless, Danneskjöld also understands that in order to free the world from the self-deprecation brought on by the legend, no distinction can be made between man and myth.  The reason being that as long as a man like Robin Hood exists to serve as a guiding example for the looters, it is necessary to deal with the two entities as one and the same, due to the extent to which the myth has come to overtake every aspect of the man’s personhood.  Danneskjöld explains his rationale plainly when he gives his assessment of what Robin Hood has become: “He is held to be the first man who assumed a halo of virtue by practicing charity with wealth which he did not own, by giving goods which he had not produced, by making others pay for the luxury of his pity” (p.532).  It is through his selfless servitude that Robin Hood’s legacy evolved into that of the defender of the poor, rather than the robbed.  Such an image caused the distortion which Danneskjöld hopes to destroy; the idea that the true nature of mankind involves the demand for self-sacrifice.  And although Danneskjöld considers it complete folly, he understands the depth to which man is capable of falling if such nonsensical sentiment continues to be valued as morally correct.

The righteousness of Robin Hood is the ultimate goal Ragnar Danneskjöld wishes to remove from human consciousness.  He considers it a personal duty to relieve man of the foul virtues he has accepted through centuries of fanciful tales, which have caused the discarding of realistic sensibility.  Danneskjöld argues against an ideology that considers the preservation of the self immoral, yet praises the belief that, “in order to be placed above rights, above principles, above morality, placed where anything is permitted to him, even plunder and murder, all a man has to do is to be in need” (p.533).  The championing of need is in Danneskjöld’s eyes the greatest depravity luring mankind away from realizing the importance of personal interest.  A world where all men are held accountable to provide for themselves, Danneskjöld argues, is a world where every member of society will labor to achieve the highest proficiency, rather than depend on someone else’s productive output.  He sees the preservation of egotism not just as a necessity for his own values to survive, but for the existence of mankind as a whole.  This is best summarized by Danneskjöld in his closing words, “Until men learn that of all human symbols, Robin Hood is the most immoral and the most contemptible, there will be no justice on earth and no way for mankind to survive” (p.533).  To remove Robin Hood from the moral pedestal society has set him on would deprive the looters of a functioning symbol to hold over the heads of men striving to earn their wealth instead of waiting for free hand-outs.  As long as the idolization of someone like Robin Hood persists amongst the general public, no hope lies for the true providers of society—working not to serve another man’s needs, but solely their own interest.

The fundamental plot of Ayn Rand’s novel Atlas Shrugged exhibits the struggle between those few in society who have rejected the moral superiority of altruistic self-sacrifice, and the looters who use the concept of need to subjugate any trace of personal interest and basic individuality.  The character of Ragnar Danneskjöld serves to illustrate what man should strive to be: resourceful, fearless, and not dependent on other men’s capabilities.  He declares Robin Hood the one man he is out to destroy, making it his personal mission to rid the world of its unyielding thirst for need.  He views the ideals, legend, and righteousness of the Sherwood Forest archer as the primary symbol serving the looters’ false ethical cause.  Ragnar Danneskjöld reasons that the fatal blow necessary for man to see through the gilded façade the looters have erected to cover their noxious ideology is the death of their idol, the original offender against the nature of man: Robin Hood.

Part Two:  Critique

Although the brief exchange between Hank Rearden and Ragnar Danneskjöld is meant by Ayn Rand to logically outline her philosophical position concerning the bankrupt nature of altruism, a number of apparent logical faults can be found right in the midst of the impassioned dialogue.  On page 532, Ragnar Danneskjöld explains how he has never robbed a single private or military vessel during his pirate campaigns against the looters of society.  The reason for the first is self-evident given Rand’s capitalist ideals; as to why military vessels are not to be attacked, Danneskjöld notes, “because the purpose of a military fleet is to protect from violence the citizens who paid for it, which is the proper function of a government” (p.532).  However, on the same page, the pirate names his ideological foe as “the idea that need is a sacred idol requiring human sacrifice.”  The problem with this line is that it seemingly negates the point he has made about the necessity of preserving the military, since the military is a prime example of an institution that operates primarily on the notion of self-sacrifice for the sake of a particular society, country, or community as a whole.  Danneskjöld’s acceptance of the need for such an establishment runs counter to his vehement promotion of individual self-interest.

Rand might argue that this is irrelevant, on account that Ragnar Danneskjöld specifically mentions that the military is supposed to protect those who “paid for it,” but such a rationalization does not solve the philosophical dilemma at hand, and even leads to a number of further conundrums.  Namely, it does not address the fact that in a world where self-interest is heralded as the ideal standard of behavior, the fundamental principles of military combatants will be eradicated, because it is universally understood that a soldier is expected to give his life for his brothers in arms, and for his country, if the situation calls for it; the interests of the individual are secondary to the interests of the unit as a whole.  And, on the point of the military serving those who paid for it, one is left bemused by what exactly Danneskjöld means.  He mentions that such is the proper function of government, implying that he does support the notion that the government is to be the arbiter of the armed forces.  However, further on in the text, Danneskjöld firmly condemns taxation as a form of robbery (p.534), suggesting that the method by which citizens are to pay for their military protection must come from some other means—more than likely, a direct payment of some sort.  This leads to a major problem that is left ignored by Rand throughout the dialogue: the possibility that if the military is privatized to protect those who have paid for its service, the result will be an unmanageable disparity that can lead to losses across all economic sectors of society.

As a thought experiment, say, for example, that the East Coast of the U.S. is the more affluent part of the country (let us assume it is due to its having more entrepreneurs investing in a growing industrial economy) and uses its affluence to thoroughly protect its shores from any possible threats that might harm its source of wealth; while the West Coast is significantly less affluent, and, as a result, cannot afford as much military protection for its shores.  Let us also stipulate that the material resources the industrial centers of the East Coast use to produce their wealth are located in the uncultivated areas of the West Coast.  Presumably, the entrepreneurs of the East Coast would have a vested interest in keeping the West Coast as nonindustrial as possible, so as to keep the production costs of their products lower than the selling price.  But, because the West Coast cannot afford to properly protect its shores, its material resources (which are used by the East Coast) lie more vulnerable to external threats of theft.  Should the East Coast pay for the needed military protection of the West Coast?  And, if so, in whose individual self-interest is it to cover the cost?  Ideally, the West Coast should be expected to cover the cost itself, but in order to produce the wealth necessary to properly protect its shores it will need to increase the prices of its material goods, at the expense of the East Coast.  Thus, it would appear, the East Coast is left picking up the bill no matter the angle from which one chooses to look at this dilemma.

Now, since the government is the arbiter of the military (as implied by Danneskjöld), one would be justified in proposing that it should also bear the responsibility of paying for the expenses that go into deploying its forces.  But where would the government get the revenue to make such payments?  Presumably taxes, but Danneskjöld has already established that taxation is equivalent to robbery; therefore, for the government to tax its citizens would be criminal in nature.  It is true that the average worker has some interest in keeping the resources of his employer protected, lest he risk losing his place of employment.  However, to what degree should a menial employee be expected to pay for the protection of resources whose total revenue potential he will only receive a fraction of (in comparison to the individuals who run the company)?  Perhaps the wealthy entrepreneurs and industrialists, who have the greatest interest in protecting the West Coast, can be expected to provide the greatest payment to the government in order to finance the needed military protection (and, yes, it would have to be given through the government, since Danneskjöld has already acknowledged that the government’s function is to be in charge of the military).  Thus, the burden to pay falls on those who earn the most from the protected resources.  This seems like a viable position, but the question then becomes: how, in practice, is this any different from taxation?  It would appear that the only difference is a lack of coercion toward the wealthy, and maybe that is the underlying point.  But if the end result is the same as before, what sense is there in pretending that the current system is a form of tyranny, when the solution will essentially be the exact same thing, only promoted under the tenets of a different ideological principle?

Another major point of contention arises through the message Ayn Rand is trying to present through Ragnar Danneskjöld’s condemnation of Robin Hood (and altruism in general).  In his dialogue with Hank Rearden, Ragnar Danneskjöld makes the case that wealth is an inherent indication of productivity, implying that due to the competitive nature of the market those who are wealthiest will also be those who possess the greatest intellect and talent, and are thereby by definition the most deserving of all the riches and power they can accumulate; while those who occupy the lower ranks of society do so by the merits of their own failures.  However, this is clearly not as absolute as Danneskjöld makes it sound, or as the Robin Hood fables are meant to convey.  The rich Robin Hood stole from were corrupt monarchs who demanded servitude from the lower classes of society, not because they had gained their wealth by the merit of their work, but due to an arbitrary right of birth.  In this scenario, the most productive members of society were the underprivileged poor (the looters, as Ragnar Danneskjöld would call them), who had no means to benefit from their productivity due solely to the fact that they were born into poor households.  Hence, in such a system, it would be fundamentally disingenuous to claim that the lower classes’ lack of economic mobility is the result of a lack of productivity, just as it would be insincere to proclaim that wealth is a representation of intellect or work ethic.

The question of inherited vs. earned wealth is an issue that Ayn Rand never delves into in Atlas Shrugged, even though any defense of her philosophy demands a clarification on this point; especially if one branches out to the greater narrative of the novel.  For example, two of the main characters in the novel, Dagny Taggart and Francisco d’Anconia (both of whom are presented throughout the prose as the epitome of the productive capitalist), lay claim to their fortunes strictly by an accident of birth.  Both have inherited their wealth through the work of their productive ancestors, not through, shall we say, the sweat of their brow.  It is true that they are shown to be ardent entrepreneurs (although, for the aristocratic d’Anconia, this is more a matter that the reader is simply supposed to grant as a given for the sake of the narrative; he is never actually shown creating anything industrially successful throughout the plot), but the question of how these characters would have succeeded had they not been born into such a privileged position remains open.  This is particularly noteworthy, since the majority of the named antagonists in the novel (who seek to undermine all the values Rand’s protagonists hold dear) are also wealthy industrialists; thus, the plot subtly acknowledges the point that the possession of wealth is not an ideal indicator of productivity.

The pivotal event Atlas Shrugged leads up to is the point at which the productive few of society unanimously go on strike, and allow the looters of society to fully see the catastrophic fate that their self-sacrificing policies will inevitably lead to; i.e. the complete collapse of civilization.  Although the novel ends at this point, in his dialogue with Rearden, Danneskjöld gives the reader a glimpse of what is to follow thereafter.  He states, “When we are free and have to start rebuilding from out of the ruins, I want to see the world reborn as fast as possible” (p.535).  Here, he is giving justification for his work as a pirate: he is simply collecting the money that has been looted away from the productive, to be utilized by them to remold society after the coming collapse (ironically drawing parallels with the criminal aspects of Robin Hood).  He continues, “If there is, then, some working capital in the right hands—in the hands of our best, our most productive—it will save years for the rest of us and, incidentally centuries for the history of the country” (p.535).  Thereby, those productive few who are currently held down by the looting majority will be well compensated in the imminent future.  However, this once again brings up the topic of earned vs. inherited wealth.  While those whom Danneskjöld sees as worthy today are bound to continue accumulating their wealth in this approaching utopia, what exactly will happen to those who might possess the potential to be entrepreneurs, but were unfortunate enough to have been born amongst the looting majority?  The narrative seems to imply that once the virtue of self-sacrifice has been thoroughly annihilated in favor of self-interest, those who deserve to rise through the social ladder will be able to do so.  However, it goes without saying that, whether or not the potential for advancement exists, few will be able to actually occupy the ranks of the rich, simply because the number of available spots will always pale in comparison to the number of lower-ranking poor.  Therefore, most people will have to be content with the lower positions they occupy in society, and these will be the ones upon whom the fortunes of the rich few are founded; meaning that, once more, the social reality that is to arise from the coming collapse will not be much different from the society that exists today.

Furthermore, the question is still open as to how someone such as the aristocratic Francisco d’Anconia, who has never been shown to produce anything of worth and whose entire fortune is based on the merits of his last name, deserves to be amongst the ranks of the productive few, other than strictly through his association with the other protagonists in the narrative.  How is someone born poor in this post-looter society expected to compete with the generations’ worth of wealth that d’Anconia has inherited from his ancestors?  (This point still stands even if one takes into account the fact that d’Anconia’s mission is to undermine the current social order by wasting the wealth he has, because Danneskjöld’s words to Rearden clearly imply that he will be reimbursing all the productive rich in the coming era for their present losses.)  While a reader can speculate on one scenario after another, the truth is that all of these points remain unaddressed by the plot itself.

Ayn Rand’s Atlas Shrugged is meant to convince the reader of the superiority of promoting strict capitalism in all aspects of a person’s life.  It is a simple philosophy, best articulated by the pirate character Ragnar Danneskjöld in his dialogue against the legend of Robin Hood and the virtue of self-sacrifice the looting masses have accepted as morally viable.  Although there are times in which Danneskjöld seems to be conveying a deeper truth pertinent to the advancement of an industrial society, upon scrutiny, much of the foundation on which he sets out to build this new ideology of self-interest rests on flimsy premises that leave too many factors unexamined (two of which, the proper function of government and the dilemma of inherited vs. earned wealth, are pointed out here).  As such, this simple philosophy comes across as too simplistic to hold any practical application.

Bibliography

Rand, Ayn. Atlas Shrugged. New York: Signet, 1992 (originally published 1957).

A Brief Word on Art

Some time back, I was eating dinner out for a change of pace (there are times when even we hermits feel the need to breathe in the humidly fluorescent air of city life).  In the middle of my meal, I couldn’t help but overhear a conversation between two individuals seated somewhere behind me (I couldn’t see them, but judging by their voices I think it’s safe to assume they were women).  They were discussing how popular musicians are resorting more and more to the use of cheap gimmicks to promote shock value for their image (they gave examples of needless profanity, absurd fashion, over-the-top antics, etc.).  Then one of them said something I’ve heard repeated many times before:  “The point of all art is to provoke and challenge people.”  This is one of those statements that on the surface sounds like it simply has to be true.  After all, who would argue that the most memorable works s/he can recall off the top of her/his head were not pieces that initially provoked a high degree of emotion or thought in her/him (for better or worse)?  The idea that the purpose of paintings, photographs, music, poems, literature, graphics, furniture designs–whatever else people create to artistically engage onlookers–is to stimulate a response from potential admirers and detractors alike seems all too obvious when we consider how important the emotional response of an audience is in immortalizing the aesthetic longevity of any work of art (and, by extension, the artist).  And yet, I still find myself disagreeing with the original statement.

The claim that the purpose of art is to provoke and challenge the individuals who come across it seems somewhat glib to me.  I can see that as a factor in the greater equation, or as a possible end result, but ultimately I feel that it misses a key point in what makes art such an indispensable part of human expression.  Art provokes, and it challenges; but what about the times it doesn’t?  Does it cease to be art?  When I’m walking through a museum, glancing at the classic works of history, I cannot say I’m really being challenged by them.  I suppose you could say that they provoke a sense of admiration in me, but they certainly don’t do much in provoking any new insights.  Not to mention, quite a few pieces evoke complete indifference on my part, yet that doesn’t diminish my ability to recognize them as decent works of art.  They are still good and beautiful expressions of art, which they are simply for the sake of being art, independent of my subjective liking of them.  Or, to put it more articulately: the point of art, in my opinion, is first and foremost to exist for its own sake.  The meanings we assign, and the emotions we ascribe, seem to me like secondary functions.

Art itself is adaptable to an evolving landscape, and its specific appeal changes with time and surroundings, but the aesthetic value innate to a work remains untouched.  Even if you dislike a particular painting, you will still not dismiss paintings as a whole.  Even if you hate a particular song or genre of music, you will still see the artistic value in music.  The same goes for poetry and literature, and a multitude of other modes of artistic expression you have no personal interest in.  The reason is that even when we recognize that a piece of art does not appeal to us, not because it provokes or challenges us, but precisely because it fails to do either, we are still able to acknowledge some potential aesthetic value in its existence (even if not for our own tastes).

Unless you happen to be a professional art critic or social commentator, the kind who nowadays seems to get paid to dismiss everything.

The Internet as the Rabbit Hole

Every now and then I decide to briefly try going on somewhat of a web-detox regimen.  Not for any deep reasons; I just feel that my web usage occasionally reaches a critically high point.  Mind you, I can’t just cut the ethernet cable to my Wi-Fi completely, because the sheer prevalence of online services in managing my daily chores is too great to allow for that sort of liberty (I still have to check my emails daily in order to pay my bills).  But, to my surprise, when put to the test these necessary online duties took me under 15 minutes to complete, from log-on to log-off.  This was surprising to me, considering I’ve previously been known to spend hours on end staring at my laptop screen.  My excuse for racking up these net overtime hours was always that I was doing something productive (reading fancy-pants articles, and whatnot), in addition to pursuing leisurely activities like online games and YouTube.  But in reality, I was just trying to find excuses to stay online for any reason whatsoever.  The internet just has this way of making me feel as if all the important things that occur in life revolve around this omnipresent series of tubes that places the world at our fingertips.

Just about everyone reading this will probably have little trouble understanding the initial stages of withdrawal I experienced throughout the last week, and how I craved that psychedelic high that comes with navigating from one site to another (picking up bits and pieces of information from dozens of different sources, at record speed).  But I don’t want to fall into the trap of sounding overly melodramatic about what should really be a mild nuisance.  Yet it is a noticeable one, in that even now that I have broken my semi-netfree fast, I feel a sense of hesitation about resuming my previous web surfing habits.  Almost as if, now that the routine has been broken, I fear falling back into it again.  The fact that this is having an effect in making me question what would normally be my usual course of action makes me think that some kind of psychological dependency–even if only in the most superficial recesses of my mind–has been severed.  And I’m left with these undefined reservations about reestablishing the normal mode of operation.

Despite the fact that so much of my personal and professional life incorporates online services, the reality is that the dominance of the virtual world we create for ourselves on the internet is largely illusory.  The all-encompassing presence I am (and I imagine many of you are, too) keen on attributing to websites, forums, online groups, and blogs is very much a self-maintained delusion, sustained by the fact that cyberspace allows us to do something meatspace doesn’t: transcend social limitations and decorum.

In the four days of my net abstinence, I saw how tediously slowly information in the real world travels.  This makes the speed and efficiency of online data a very attractive alternative (ironically, however, the lack of easily available distractions also made whatever task I was doing go by much quicker).  Furthermore, I saw how unaware a great deal of people are of internet culture and memes (and not just amongst the elderly), even though I had always considered these things to be fairly widespread in popular culture.  The jokes, the tweets, the web-dramas, and the multitude of online communities don’t have much of an existence outside of their cyber confines (either that, or people simply feel stupid referencing them in person).  But the primary difference I took notice of was the general way people communicated with one another.

Whether you believe me or not, I make it a habit to write on this blog in the same manner and diction I use in my daily life.  Of course the blog format allows me to correct the occasional grammar mistake, and rephrase poorly articulated statements to better convey my opinions, but the basic tone is the same as it would be if you were sitting across the table from me (just with fewer “ums” and awkward pauses mid-sentence as I fumble over my words).  However, when I see some of the more blunt and vitriolic comments left online, I find myself wondering just how many of these individuals would be equally daring with their choice of insults in a face-to-face conversation.  In person, even the more confrontational personalities remain for the most part reserved when facing possible opposition in thought from a second party.  There is a level of empathy and solidarity in play; even if you hate the person speaking to you, it’s difficult not to humanize someone whose face is right in front of you.

When forced to interact in person, most people have somewhat of a filter that prevents a lot of faux pas and breaches in social etiquette from leaking through.  Online, where the person you are interacting with is nothing more than a far-off abstraction of typed words, this filter is virtually discarded in favor of apathetic aloofness (see what I did there with “virtually”, ’cause we’re talking about “virtual” reality; try to keep up with my linguistic subtleties, nOObs).  And the tiny personal transgressions we are willing to overlook in the fellow human being seated across from us are thrown aside when that human being is reduced to nothing more than a screen.  I imagine it’s because typing online is so much like having an internal monologue (where anything goes) that we forget there are actual people reading our diatribes.

This brings me to the core realization that hit me this week: the internet is essentially imaginary.  Not in the sense of being nonexistent, but in the sense of mirroring our impulsive inner ramblings.  Hence, it’s no surprise that it can deliver such a satisfying high to our psyche, since it practically serves as a reflection of our deepest thoughts.  This isn’t necessarily a bad thing, but I think I’ll try to limit my daily dose, and remember that there is a space outside of cyberspace, on which real life hinges.