“Have-to-Marry” vs. “Will-Never-Marry”

Have-to-Marry: “Marriage is a great personal bond between two individuals, and provides a person with added stability in life.”

Will-Never-Marry: “Marriage is an archaic institution, founded on a misplaced desire to placate familial and/or societal expectations instead of the desires of the individual her/himself.  Rather than offering stability in life, it makes the individual more readily complacent with a less fulfilling life.”

Have-to-Marry: “You’re confusing contentment with complacency.  For a person to be content in life is for her/him to finally have gained the maturity to appreciate what s/he has in the moment…”

Will-Never-Marry: “At the expense of all that they could have had were they not legally chained to the wellbeing of another person.”

Have-to-Marry: “To refer to marriage as being chained is a cheap piece of rhetoric.  Marriage in today’s society is, more often than not, a contract of affection and trust between individuals.”

Will-Never-Marry: “Except when it’s not.”

Have-to-Marry: “It’s faulty reasoning to look at exceptions and pretend that they represent the whole.”

Will-Never-Marry: “But these exceptions (such as when marriage is entered into more for utilitarian reasons than sentimental ones) are a prominent part of the deal in our society.  If marriage is all about affection and love, why accompany it with things like tax breaks, better mortgage rates, and healthcare packages at all?”

Have-to-Marry: “The fact that society has put in place material incentives to make marriage an appealing prospect for individuals doesn’t negate the truth that people who marry are gaining personal–i.e. psychological and spiritual–benefits from the act.  It only means that people were astute enough to construct a society that recognizes the beneficial effects of the practice on the community as a whole.”

Will-Never-Marry: “Which are what again?”

Have-to-Marry: “Marriage symbolizes to the community that you have a stake in the long-term well-being of said community.  The act of committing to one person shows that you are willing to set aside your egocentric interests and can consider the good of others as well as your own.  This breeds a certain level of trust and respectability in most people’s eyes.”

Will-Never-Marry: “So (as I said before) you should get married to appease societal expectations?  So that the people around you will trust and think well of you as a person?”

Have-to-Marry: “Not just for those reasons, but on a macro level they certainly ought to be factors to consider.  Humans are social beings; it’s how we’ve survived this long.  And if we are to continue to survive we have to commit to and trust one another.  Marriage is one of the ways (though, admittedly, not the only way) we do this, on an individual level.”

Will-Never-Marry: “Human beings survived for hundreds of thousands of years prior to the advent of marriage.  Not to mention that for the majority of the institution’s existence it served more as a business venture between families than as a demonstration of love, maturity, or trust.”

Have-to-Marry: “That’s a very simplistic rendition of the complicated history of it all.  Regardless, it’s irrelevant to the discussion of the benefits marriage affords to the individual, and by extension society, today.”

Will-Never-Marry: “You brought up history, so I’m just following your train of thought here.  It’s pretty much a given that the segment of society that favors a particular practice will argue how this particular practice is indispensable for the ‘wellbeing of society’ (whatever that means).  The same line of reasoning can be made (and has been made) to argue in favor of establishing a single state-enforced religion, a monarchical government, or various forms of slavery.  The proponents of all of these institutions have always resorted to the old canard that sans said institutions the society or community would not function properly, and might even descend into disarray.  The truth is that if people started to abandon any currently common social practice, the only outcome we can predict is that the practice would just cease to be common.  That’s it.  In Western societies today we no longer frown on premarital relationships; the stigma once attached to them has been all but removed from people’s psyche.  So to claim that we wouldn’t have affection or commitment between people without marriage, on a social or individual level, is unfounded speculation at best.  And putting aside societal imperatives for a moment, given that there is no longer a stigma concerning premarital relations, please tell me what specific benefits I, the individual, gain from marriage (besides a couple of utilitarian financial incentives)?”

Have-to-Marry: “Every single study, every single survey, every single bit of data gathered on the subject clearly shows that married individuals experience greater health (physical and mental) and longevity than non-married individuals.  This holds true even when one compares them to cohabiting non-married couples.  I might only be able to speculate as to the reason why this is so, but you can’t pretend that this fact is untrue just because we don’t know all the variables involved.  At the end of the day, we can look at the data and conclusively point to the fact that married people report better health, and that married people are living longer (it’s not as if we can fake the latter, right?).”

Will-Never-Marry: “Correlation doesn’t equal causation.  The advent of marriage as a social practice in human history also correlates with the advent of slavery; however, to therefore imply that one causes the other would be as fallacious as the reasoning you displayed in your previous remark.”

Have-to-Marry: “As of this moment the common denominator for all this data is marriage.  If you can find a better explanation, by all means do so, but until you do society is under no obligation to discard the most reasonable explanation that is currently on offer for the data observed.”

Will-Never-Marry: “But it’s the validity of the cited data that I’m calling into question.  You’re ignoring that I began this conversation by mentioning that marriage makes people complacent with what they have.  When you invest that much of your identity into something, of course you’re going to self-report higher satisfaction with the decision, but there is still no means by which to differentiate between those who are reporting their sincere beliefs, and those who have willingly deluded themselves into a stupor because the prospect of admitting their dissatisfaction is too great of a personal failure to bear.  As to the issue of longevity, of course you’re going to live longer when you have another person present in every waking moment of your private life discouraging you from taking any potential risks.  Being bubble-wrapped in a monotonous life of adequate mediocrity hardly counts as a fulfilled life, no matter how long it lasts.”

Have-to-Marry: “Tell me, how exactly are these speculations of yours against marriage any more valid than my so-called speculations in favor of marriage?”

Will-Never-Marry: “Simple.  You know that my opinions are authentic, because I don’t have a spouse breathing down my neck about what opinions they think I ought to have.”

Have-to-Marry: “Maybe if you did you’d be able to come up with a more mature counterargument.”

Will-Never-Marry: “You can’t prove that.”

The End of Revolutionaries

A century ago, if someone was referred to as a revolutionary, there was nothing obscure about the character of the person being talked about.  Sure, the cause to which he or she was dedicated may have fallen anywhere from the far left to the far right of the political spectrum, but there was no doubt that the individual revolutionary was a person who had drastically altered public consciousness (for better or worse).  Moreover, a revolutionary was an individual who had usurped (or at least had attempted to usurp) an existing political order in favor of a fresh one; in short, a revolutionary was one who actually took part in revolutions.

Nowadays, the original implications of the term have been completely lost on us.  Revolutionary has become a filler word, utilized for both derision and adulation.  A clear example is the 2008 election of Barack Obama.  Right-wing pundits made headlines comparing it to some sort of Bolshevik takeover (when, in reality, if a Bolshevik regime were in charge you would already be either silenced or dead), while left-wingers hailed it as the dawning of a revitalized new era in American history (when, in reality, the political system was entirely unchanged, occupied by the same individuals, the same groups, with the same interests as always).  The problem is that we use the word revolutionary when we actually mean transitional.  A transition is simply a modified carry-over from what preceded it, while a revolution is a wholehearted discarding of the previous order.

The confusion is made worse by the large number of individuals who see fit to assign the revolutionary label to satisfy whatever narrative they wish to present.  A year ago, I read a horribly self-aggrandizing book called The Broken Compass: How British Politics Lost its Way (a work I cannot in good conscience recommend even as a tool of torture against my worst enemy), in which the author refers to himself as having once been a revolutionary in his youth.  But this is clearly nonsensical by any conceivable measure.  What actual revolution had he personally taken part in?  None.  What revolution had he influenced?  None.  So, how does he fancy himself a revolutionary?  Apparently, because in his youth he identified as a Marxist, and yearned for some fanciful social uprising that he couldn’t be bothered to actually lift a finger to bring about.  In America, we have a word for such a person: poser (though failure would also be quite appropriate).  It is possible to be a failed revolutionary (such as Guy Fawkes, whose idiotic plot blew up in his face before it went anywhere), but many of those in history who have been pinned with the revolutionary label are not failures in this sense, but complete nonstarters.  Take, for instance, Emma Goldman, held in high regard as a revolutionary figure by the extreme left.  Goldman spent all of her active years living in America, contributing written works promoting anarchism (among other social causes).  Last I checked, the government functioned throughout her life unharmed by her “revolutionary” prose, so in what way is she really a revolutionary?  None that I can see.

The last point I want to make will be the most controversial one; namely, that to be a revolutionary one must by definition be unreasonably dogmatic.  No, revolutionaries are not critical thinkers, or clear-thinking in any imaginable way.  They are uncompromising, and unwilling to reevaluate the positions and values they hold, blindly proclaiming their ideological doctrine as infallible in the great scheme of history.  That makes them the intellectual enemies of all sensible persons.  I don’t care what ideology it comes from: a lack of self-scrutiny is an admission of idiocy.  And revolutionaries never self-scrutinize, which is why I’m glad that the word has lost all meaning in Western thought.  I welcome the mild, watered-down facade that has been erected in its place.  We’re better off for it.

The Tower of Babel: An Alternative Perspective

When people speak of a need for their faith in God/s, they almost always come around to expressing how–though they’ll readily grant that organized religion, as an institution, may at times fall short of the ideal–the faith and grace of the Almighty still resonate in the hearts of all mankind (whether they acknowledge His omnipresence or not), and serve as the one true guiding force by which we may hope to find solidarity; through which we can strive to attain peace of mind, and (ultimately) peace on Earth, as surely as we are to find it in the coming hereafter.

When viewed through the scope of the narrative found in the Book of Genesis, important events like man’s banishment from Eden, and the subsequent Great Flood meant to purge the world of the sinfulness man had spawned thereafter, are further reassurances of the need man has for God’s eternal presence in his life, without which he is doomed to lose both personal solace and eternal salvation.  Moreover, if we delve further into the Christian perspective, it is in the figure of Jesus Christ–wherein God became man, and died at the hands of man, for the sake of absolving man of his sin so that he may once more gain eternal life in Heaven at the side of his Creator–that we find the long-awaited mending of the rift between man and his spiritual soul, and the brokering of peace between the physical and metaphysical realms.

Given all of the above, the Tower of Babel stands as a rarely explored peculiarity within the common narrative.  The story of the Tower begins in the first verse of Chapter 11 of the Book of Genesis (after the banishment from Eden, and after the Great Flood had already taken place):

1 Now the whole world had one language and a common speech.

2 As people moved eastward, they found a plain in Shinar and settled there.

The whole earth was of one language, and presumably of a common understanding, as evidenced by the fact that men journeyed and lived in some sort of union.  Though subtle, the placement of this story at this point in the Book is very significant in its relation to the theological underpinnings explored at the beginning of this post.  The story continues:

3 They said to each other, “Come, let’s make bricks and bake them thoroughly.” They used brick instead of stone, and tar for mortar.

So united was man in his pursuits that he began to lay the stepping stones for architecture and human innovation by improving on common building techniques.  A symbolic act indicating the advent of a greater civilization, one meant to sustain a decently sized population.

4 And they said, “Come, let us build ourselves a city, and a tower whose top is in the heavens; let us make a name for ourselves, lest we be scattered abroad over the face of the whole earth.”

The common theological perspective is that this verse signifies how, rather than a symbol of man’s ingenuity, the Tower is a symbol of man’s pride.  The emphasis falls on the hubris of mere men wanting to make a name for themselves by reaching the realm of God through earthly means, rather than spiritual ones, thereby making a mockery of the very concept of salvation through the grace of God.  This reasoning is satisfying to many of the faithful, but rings hollow on a number of counts.  The first is that nowhere in the verse is there any reference to God, his grace, subverting his grace, or even wanting to reach Heaven to reside there against the wishes of God.  At its most basic interpretation, what the verse does demonstrate is a wish to push human innovation beyond its limitations, to surpass our natural inhibitions and master them to our advantage.  And if this is a grave sin, then one might as well deem all modern technological achievements sinful (and if you’re reading this post by means of some technological device, one can safely assume you are not of this opinion).  Furthermore, such speculation is rendered moot by the subsequent verses, wherein God clearly states his reasons for disapproving of man’s construction of the Tower:

5 But the Lord came down to see the city and the tower the people were building.

6 The Lord said, “If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them.

The construction of the Tower isn’t the problem for God.  His concern is the implication it holds concerning man’s collective potential to rise higher than his nature (where nothing “they plan to do will be impossible for them”).  There’s no mention of man’s pride–his hubris, if you will–nor is it even hinted that God’s concerns rest in anything other than his own self-interest, as he only identifies two contentions with man’s construction of the Tower: (1) they are doing it as one people, and (2) the construction of the Tower symbolizes that man’s power is limitless.  Now, God’s solution to this problem is a simple one.  Since 2 stems directly from 1, he sets out to undo 1:

7 Come, let us go down and confuse their language so they will not understand each other.”

8 So the Lord scattered them from there over all the earth, and they stopped building the city.

Bible scholars will easily identify the Tower of Babel as a clear example of an etiological myth, meaning a myth/story/legend meant to explain the origin of a phenomenon (e.g. the tale of how man received the gift of fire after Prometheus stole it from the Olympians).  In this case, the phenomenon being explained through the legend of the Tower of Babel is the origin of the diversification of languages.  Acknowledging this, from a philosophical/theological perspective, the actions of God as a character in the narrative are a far more interesting indication of the dynamic between man and the Divine.  Because, for those who take this narrative seriously, God’s actions are responsible not just for the diversification of man’s languages, but also for man’s segregation into different tribes, many of which undoubtedly grew to become opposing tribes, which inevitably led to these tribes waging war on one another on account of these differences.  Therefore, as the instigator of the tribalism among men, God can be credited as the direct catalyst of the warfare that came about as a result of said tribalism.  That is, if one takes the narrative seriously.  For those with a more scholarly interest in the subject, the greater plot implications between the characters are still equally intriguing.

Thus, to summarize the whole plot:  In a world following man’s banishment from paradise, following the Great Flood–a world just about all theologians and the faithful identify as fallen and plagued by sin–humanity managed to surpass the great odds stacked against it and unite as one people, and coexist in such unity that it not only survived, but thrived in the harsh environment on the basis of its ingenuity alone.  According to the Bible itself, this great human unity did not need an appeal to the Divine to be achieved, nor did it require a blood sacrifice on the part of the Creator to bring peace and solace to the hearts of man.  And, amazingly, it was not man’s sins that halted this progress.  Nor was it man’s inherent wickedness that tore at the base of this serenity.  It was God, Himself.  Why?  In accordance with the story, it can be simply put as God being afraid of man.

As heretical as it might sound, this underlying fear of man’s potential is not an uncommon theme throughout ancient mythology (when stories like the Tower of Babel would have been crafted).  The lineage of the Greek pantheon is a direct testament to this very concept.  The Titans were deposed by the very Olympians they had spawned, just as the Titans themselves had deposed the ancient gods that preceded them.  Given this tradition of cyclical deicide, it is not a farfetched interpretation to read the constant demand the Olympian gods place on being revered and worshipped by mankind not as a testament to their strength, but as a revelation of their fear that their own creation–man–will one day follow in the same tradition that all the higher beings in their history have done, and depose the makers that made them.

Aristotle could never rationally fathom why any god would be concerned with the daily happenings of a lower order of beings like mankind, and so proposed a deity that took a laissez-faire approach towards human endeavors.  But perhaps Aristotle was not thinking creatively enough.  For what are gods without worship?  How many gods throughout the ages have met their fate in the graveyard of mythology simply because man stopped paying them any attention?  From this perspective, the prospect of man turning inward, both to his own strength and ingenuity and to that of his fellow man, is antithetical to the interests (and downright survival) of any halfway competent God.  And the God of the Book of Genesis is no exception to this, as shown by His own conduct in the story of the Tower of Babel.

Love is a Casual Greeting

Love is a casual greeting.  Love used to feel like a word that made the strongest weak in the knees.  Made the blood flush to their cheeks, as they tried to control the pitch of their voice and the queasy feeling in their stomachs, so they wouldn’t give away the obvious (which everybody had already figured out): they were in love.  But love is a casual greeting.

It means hello, or goodbye–like Aloha!–it’s been colloquialized.  People date for two hours, they say they love each other.  For two days, they are in love.  Six months later, they no longer say they love that person.  Now they both say they love someone else, as casually as they once said they loved each other.  Because to say you love someone is the right thing to say when you see them; it is as expected as saying, “Hello!”  To not say it would be impolite.  And decorum and civility trump passion.

The word love used to hurt, so it had to be declawed.  The sting is removed with every casual use.  It becomes normal–boring, even!  Ask someone how their day was, and you’re already bored before they can even answer.  Words that are boring have no power.  They can’t hurt, or disappoint.  Saying words like hello or goodbye yields no commitment or expectation; the emotional investments are net-neutral.  If the word love means saying hello or goodbye, then love yields no commitment or expectation; love’s emotional investments are net-neutral.

Words don’t mean what we want them to mean, they mean how they are used.  And if love is used like a casual greeting, then love is a casual greeting.

Much love to you all,

KR

Education System’s Deficiency in Deterring Disorderliness

There is a form of punishment used in many U.S. public schools.  Most of these schools have their own unique abbreviation/initialism for it, but essentially it can best be described as in-school detention.  Although policies can vary from district to district, the standard procedure by which this form of detention operates is to segregate troublemaking students from the rest of their peers for a designated period of time, usually by placing them in a room with other equally unmanageable youths.  The reasoning underlying this practice is based on the principle of deterrence: we are aiming to deter further behavioral problems in the troublemaking students by cutting them off from the rest of the school network.  Moreover, we are aiming to deter any bad influences said students could have on their fellow classmates.  Unlike suspension, in-school detention presumably provides a set of extra motivating factors to correct bad student behavior.

When a student is sent to be disciplined in this way, her/his teachers will often get a note in their office box asking if they could either send out busywork for the students currently “doing time” in in-school detention (they are still in school, after all), or step in to supervise the troublemakers during one of their off periods.  Normally, none of this is mandatory, and unless one of the disciplined students is their own, faculty members generally don’t feel much obligation to respond.  The problem a lot of teachers are seeing, however, is that every time they receive the list of names of students placed in in-school detention, they notice a strange anomaly in the rationale behind this deterrence system; namely, that the names on the list are almost always the same.  Which means it’s the exact same students getting repeatedly crammed into this same room, for largely the same reasons they were there a week ago.

Now, I may not be the brightest bloke around, but how does a form of discipline, operating under the rationale of deterring bad behavior in students, keep having to discipline the same exact students for the same exact behavior it’s set up to deter?  At what point does the whole deterring factor actually come into play here?

I understand that something needs to be done about students who act out badly.  I know that undue lenience can lead to the same result as completely ignoring an obvious problem.  But is this form of punishment really any different from ignoring a problem?  If not, then why are the exact same students always the recipients of a punishment whose primary purpose is supposed to be to correct their bad behavior?  At what point can we say that the policies and procedures in practice have failed to fulfill their desired goal, and maybe–just possibly–an alternate route ought to be considered?  Because if the goal is to truly correct bad behavior (instead of just putting troublemakers out of sight, out of mind), what we are doing now in schools is definitely not working.

One suggestion I have heard is that the punishment is not severe enough.  The rationale here being that young people need to be frightened into a certain mode of behavior, and in-school detention is just not frightening enough to mischievous youths.  To individuals who feel this way I would like to–in return–suggest that they first take a look at how well emphasizing severity in punishment has worked out in the U.S. criminal justice system over the last 40 years in terms of deterring criminal behavior amongst convicted criminals.  Judging by the fact that the recidivism rate of criminal offenders has suffered no drop whatsoever (while arrests, and re-arrests, have increased exponentially in most metropolitan areas), I would suggest that perhaps we brainstorm a few other ideas first.

Furthermore, speaking as someone who has supervised in-school detention periods before, I can assure you that the students who are sent there aren’t happy or proud about it.  They understand it’s a punishment, they view it as a punishment, and they feel no pleasure from being there.  What they do feel is anger, which is fine if the goal is to make them feel pissed off for having been caught breaking the rules, but that’s not what these sorts of disciplinary measures claim as their goal.  The claimed goal of in-school detention is to deter and correct bad behavior.  Getting people angry about the bad things they do can never correct bad and stupid behavior, because anger is a catalyst for more bad and stupid behavior.  When you’re angry you don’t sit back and reflect rationally on your bad judgment.  In fact, in such a state of mind you are much more likely to exercise even more bad judgment.  Besides, short of going back to the olden days of beating badly behaved students with a ruler, I don’t really see what more we could do on the severity scale, given that for the hours they spend in in-school detention we are already treating them like incarcerated criminals.  And make no mistake about it, that is exactly how they feel, and it’s doing nothing to remedy whatever is instigating their behavior.

“Okay, Sascha.  How about then you step down from your soapbox, stop being a condescending ‘know-it-all’ blogger, and actually tell us a practical alternative route we can take.”

Fair enough, hypothetical reader (who somehow always seems to know just what to say to keep the plot of my posts moving along smoothly).  You don’t correct bad behavior by ignoring it, and you don’t correct it by emphasizing the severity of punishment above all else either.  In my experience what works best is a combination of swiftness and discretion.  If a student breaks the rules in place in a classroom, it makes no sense to me to send her/him to the principal’s office, which will then send her/him to another office, which will then place her/him in a segregated classroom for about a week, where s/he will sit isolated from the rest of the school, without a word of explanation from anyone as to why s/he shouldn’t be behaving the way s/he did (aside from the patronizing, “Because it’s against school policy”).  But this is exactly what so many educators do.  Instead of dealing with the problem swiftly and directly, they pass the responsibility of assigning concrete consequences on to someone else; i.e. out of sight, out of mind.

In my experience, the best chance for actual long-term behavioral correction comes from confronting the student about their behavior, on your own terms, and explaining the consequences that will result from this behavior immediately following its occurrence, without constantly shipping her/him through some asinine administrative roundabout.  If the offense is committed in my classroom, it is my responsibility to deal with it, because individual cases of misbehavior require individual consequences.  And I promise you, when consequences are customized to each breach of conduct committed by each individual student, it leaves a much more lasting impression on the mischievous youth than a generalized decree ordained by some top-level administrator who doesn’t even know the student’s name, let alone the circumstances surrounding her/his behavioral issues.

Please don’t misunderstand what I mean by “confronting the student” as some sort of public declaration against her/him in front of the class.  Consequences for bad behavior have to be swift, but they are also most effective when disclosed in private.  When someone publicly confronts you about your behavior (even when you know that you’re in the wrong), you are much less receptive to the message than you would be if the person made the effort to avoid humiliating you in front of your peer group.  I don’t care how your intuition tells you to approach it; the plain fact is that discretion in these situations works because it gives the teenager–whose full reasoning faculties are still developing–the thing s/he craves most:  the courtesy of being spoken to like an adult, instead of being repeatedly scolded like a child.

I know someone reading this is probably dismissing me as just too soft on those disorderly “little assholes”.  If you’re one of these readers, let me ask you: what do you think is more likely to happen when a teacher publicly calls out a student for behaving like the “little asshole” s/he most probably is?  Do you think that the student will rationally consider the consequences of their misbehavior, and thereby learn the value of civil conduct?  Or is the student going to feel that s/he has to go on the defensive and save face in front of her/his friends by stubbornly locking horns with the teacher?  All of this depends on what results you’re aiming for.  I’m aiming to actually correct delinquent behavior; if you’re aiming to simply piss off “little assholes” and “give ‘em what they’ve got comin’,” have fun engaging in your pissing match with an adolescent, just remember only one of you is expected to be the adult in the situation.

The reason I care about this issue is that I’m concerned the policies and procedures we have in place to deal with misbehavior in the educational system parallel too closely the practices prevalent within the criminal justice system.  Educators are entrusted with the responsibility of nurturing minds that are still maturing.  As the adults in the room–the sole individuals with the experience and developed cognitive faculties to exercise the restraint and judgment necessary to deal with a volatile situation–the consequences of how disorderly conduct is treated and corrected have to fall on our shoulders, whether we like it or not.  But I don’t see how we can live up to this duty, how we can mold well-adjusted adults as a society, if we keep placing disorderly students into ineffective programs and policies that are firstly failing to address the underlying issues causing bad behavior, and secondly failing to correct said behavior.  I care about this because there is a decent chance that today’s delinquents will be tomorrow’s felons–and if we condition them at a young age to believe that social institutions have no means or interest in dealing with them as thinking persons, there is little reason for them to behave as such.

Four Movies That Bored Me to Tears, But Almost Everyone Else Thinks are Awesome

Three years ago, I finally got around to seeing a movie called Prometheus, because I was tired of every-freaking-person I know constantly telling me how, “This movie will blow your mind, man.  If you don’t see it, and you don’t like it, you’re officially too stupid to function.”  Well, I saw it, and I guess I’m “officially too stupid to function.”  I found the movie to be visually appealing, and the acting was much better than I expected it would be.  But, overall, I didn’t think much of it.  Yes, I got all the “nuanced” intricacies about the frailty of human existence and the endless search to find meaning in life, etc., etc., etc. (so please spare me the 2,000-word email philosophizing about how I must not have truly “gotten” the plot because I don’t love the movie as much as you do).  When I saw the movie, everyone from my old college roommate to my own mother bombarded me with reasons why I’m wrong not to appreciate the stupendous beauty of it all.  All of this is strange to me because when it comes to movies I’m a firm believer that brilliance is in the eyes of the individual viewer.  You and I can watch the same movie and leave the theater with completely different perceptions about what we just saw; neither one of us is wrong and neither one of us is right about whether or not we personally connect with a film–it either hits us intellectually and/or emotionally, or it doesn’t.  Thus, I’m more than willing to agree to disagree with anyone whose opinion differs from mine on this Prometheus movie, or any other movie I may have enjoyed/disliked in the past.  However, I’ve noticed that a lot of people simply cannot let it go if someone doesn’t enjoy their favorite film as much as they do; therefore, they must convince you about how awesome their favorites are, or shun/ridicule you for your inability to appreciate the “great nuances” of the mindful plot they are so keen on.

This post isn’t going to be a review of Prometheus.  Instead I want to briefly list and discuss four movies that most people I have met are willing to get into fistfights over if I so much as dare to share my lack of enthusiasm for them.  If you are a semi-regular movie watcher you have probably heard of these films, and if I say something that offends you, remember that this is just my take on the matter, and not meant to be an absolute verdict on anything.  Ready?  Okay, let’s start in reverse:

4. Napoleon Dynamite (2004):  The fan following this movie developed early on in its release amazed me to no end.  It was quoted everywhere, by everyone.  People had “Vote for Pedro” shirts within weeks of its first showing; not to mention, the dance scene was reenacted by more random people of my acquaintance than I’m willing to admit.  Personally, the movie bored me.  I know it was meant to be quirky, and kind of dopey, and I can definitely understand how this adds to the charm for those who enjoyed it.  But it bored me.  In the end, I left the theater convinced that Napoleon wasn’t socially ostracized by his peers because he was nerdy, but because he was kind of an asshole.  And I find it hard to sympathize with a protagonist whose well-being I don’t give a shit about.

No matter how devilishly fashionable the particular jerk happens to be.

3. From Dusk Till Dawn (1996):  Oh. My. Gawd!–People love this movie.  At least, people who have lived/interacted somewhere within my general vicinity.  I don’t know if it’s because Quentin Tarantino and George Clooney are in it, or because Robert Rodriguez has somewhat of a cult following amongst movie fans, but everyone has been preaching to me about the brilliance of this movie since the 7th grade.  Like I said before, I get it.  It’s witty in many places, and the action scenes are original for its time (especially the scene with the crotch-gun).  Also, the fact that it’s supposed to be a bit corny didn’t elude me either.  Yet, there is a point at which corny becomes silly, which in turn becomes stupid.  Three-quarters of the time the human survivors were stuck in that bar (after the initial vampire attack/brawl), I found myself thinking, “WTF is the point of this scene right here?”  [Like the part where, after being attacked by a horde of vampires, and finding themselves having to fight an entourage of newly made vampires, and being possibly surrounded by another innumerable horde of vampires outside, the character Frost starts reciting an overly dramatic war story from the Vietnam War that the other characters just can’t help but calmly listen to, oblivious to the dangers of their surroundings.  If I were there I would have slapped that guy’s face in the middle of his story and stated, “There are fucking vampires around us.  I don’t give a shit about what you did in motherfucking Vietnam.  Now, grab something sturdy and help me board up the windows & doors, jerk.”  But that’s just me.]

The crotch-gun was still pretty awesome though.

2.  Scarface (1983):  Arguably one of Al Pacino’s most memorable roles, the criticism this movie usually gets stems from its excessive depiction of violence, drugs, and profane language (which is, ironically, also the primary reason why so many people enjoy the film).  I couldn’t care less about any of that, and would personally never discount a movie just because it made use of some colorful material.  My problems with the movie are the pacing, the sloppy editing, and the beyond-belief feats performed by the characters in what is supposed to be an otherwise reality-based movie (I mean, come on, how much cocaine can Tony Montana snort without passing out?  How many bullets can he take without at least tipping over?).  Nevertheless, I find myself in an awkward spot whenever this movie comes up in casual company, because I do see value in it.  I enjoy watching Al Pacino be Al Pacino, but with a Cuban accent.  But I have to be honest that it isn’t as great to me as it probably is to you.  A simple statement that’s usually more than enough to arouse the contempt of any suburban gangsta within earshot.


1.  Deliverance (1972):  This movie is ranked as one of the top achievements in American cinema.  In 2008, it was even selected for preservation in the National Film Registry for being “culturally, historically, or aesthetically significant.”  My indifference to this movie has been suggested as the ultimate proof of my ineptitude in casting any opinion on movies whatsoever (possibly a worthwhile thing for the readers of this post to consider when evaluating my opinion here).  Let me first start off by saying, no, I am not offended by this movie because of its depiction of Southerners.  I am unimpressed by this movie because it nearly bored me to death when everybody promised it would be, “the most horrifically thrilling film in existence.”  It wasn’t.  In fact, I found it to be pretty tame–allow me to explain, before you condemn my philistine judgment.  When I first saw this movie I was about nine years old, and I fell asleep before the character Drew died on the canoe (belated spoiler alert).  Years later, I decided to give it another shot, convinced that my initial apathy was caused by my prepubescent brain being unable to fully appreciate all the subtle “nuances” (there is that word again.  I just hate that word so much) of the plot.  Yeah, well, I was left bored again.  Only this time I couldn’t blame youth or anything else.  Although I could appreciate the aesthetic beauty of the setting, I’m someone who cannot be swayed into liking a movie because it has pretty trees or a mesmerizing lake in the scenery; the plot and the characters matter to me.  The problem is that in this movie, they didn’t.  The movie was slow, but not in a way that focused my attention further on the details of the plot.  The dialogue didn’t make me ponder anything deep, despite the annoyingly constant attempts by the script to throw armchair philosophy at me ad nauseam.  The characters weren’t as engaging as I would have liked.  Moreover, nothing [absolutely nothing] in the movie caused even the slightest bit of terror or unease or disturbance within me.  And yes, I’m aware there is a suggested male-male rape scene; no, it didn’t even cause me to flinch in horror for a second (by then I guess I was too comatose from boredom to care).  For 109 minutes, I was just bored.  I’m willing to accept that the problem is with me, and not the film, but I cannot pretend to have liked something I didn’t.  If you disagree, then we disagree.

One thing we can all agree on is that this kid didn’t get nearly enough screen time to warrant his popular association with this film.

I should mention that the above movies are not, in my opinion, the worst movies ever made.  They are simply a list of movies I didn’t like and appreciate as much as most people I’ve met in my life have.  Trust me, if I were to make a list of movies I genuinely hated, these four wouldn’t even crack the top 10 (except maybe From Dusk Till Dawn; that scene with the Vietnam vet really pissed me off).

Why We Write

I’ve heard it said on several occasions that a writer’s best ally is maintaining a high degree of modesty in her/his work.  The reasoning behind this is obvious to most people, in that no self-respecting reader wants to support the scribbles of a smug narcissist convinced that her/his words are the world’s gift to human expression.  However, despite its practical value, the adage does ignore an important attribute all writers share to some degree:  namely, writers are narcissists.

Whether we’re aiming to share our words through traditional print media, or blogging free of charge in our spare time, there is something undeniably self-indulgent in our conviction that we not only have something of importance to add to a topic of discussion, but that the greater public might benefit from hearing our take on the subject, too.  And if you really don’t feel that you have anything worthwhile to say on a topic, why are you typing so many grueling posts testifying to the contrary?

Let it be clear, this is not a social criticism on my part.  Rather, it is a personal affirmation.  I write this blog precisely because I feel it is worth the time and effort, and–in a broader sense–hope that it offers some benefit to somebody, somewhere (even if just to avoid having to read for that book report you failed to research on your own).  Moreover, I have no shame in recognizing the benign egotism of the act.  I find humility, of course, in knowing that my opinions on the topics I discuss hold no more inherent value than anybody else’s.  Likewise, I find aggrandizement in the conviction that my opinions have as much of a right to be heard and shared as anybody else’s (whether anyone else agrees, I’ll readily leave to the free marketplace of ideas to decide).

Modesty, by virtue of consistent self-scrutiny, is definitely a valued tone writers and commentators should strive to maintain in the work they share with the world.  Yet, I feel it is also important to acknowledge how those of us who take the step of actually putting our thoughts and musings into a public forum are also just a little bit full of ourselves.  Which, although always capable of evolving into a boorish vice, can also serve as an indispensable catalyst for creativity.  Ovid probably captured this sentiment best in the epilogue to his Metamorphoses:

Now I have done my work.  It will endure,

I trust, beyond Jove’s anger, fire and sword,

Beyond Time’s hunger.  The day will come, I know,

So let it come, that day which has no power

Save over my body, to end my span of life

Whatever it may be.  Still, part of me,

The better part, immortal, will be borne

Above the stars; my name will be remembered

Wherever Roman power rules conquered lands,

I shall be read, and through all centuries,

If prophecies of bards are ever truthful,

I shall be living, always.

Our words, too, are immortalized in the far reaches of cyberspace, leaving a piece of ourselves to forever be either derided or appreciated long after we have lost the ability to partake in the conversation.  Possibly a very humbling thought, but not really all that modest.

Consider this my welcoming post to the coming summer days, and the approaching half-year mark.  In case anyone can’t tell, my new year’s resolution was to strive to be more honest with myself (also to start spending more time outdoors, but I’ve been making that one 10 years in a row, and quit every time I notice just how uncomfortably bright the sun is).  So a jovial greeting to whoever happens to be reading.  Stay safe, positive, and slightly eccentric, wherever you are.

My Problem With Personality Tests

I would wager that astrology has suffered a big drop in its number of believers over the last few decades.  Some people still undoubtedly read their horoscopes now and again, but few would take it seriously if the prophetic paragraph typed in the flimsy pages of their morning newspaper told them to avoid going outdoors because doom would be awaiting them.  It’s easy enough to point to examples that call into question the rational basis of astrology (like pointing out how people born on the same date, at the same time, emerging from the exact same womb, can still go on to have completely different futures, and even personalities); so much so that much of the practice has been reduced to something of a passive interest for most of its practitioners, and is rarely avowed amongst new acquaintances.

For many millennials the idea of orienting their personality around what astrological sign they were born under seems silly; at the same time, however, many within this same generation seem to accept alternative personality groupings, like the Myers-Briggs personality test, as something more concrete.  While the test purports to indicate psychological preferences in how a person perceives the world and makes her/his decisions as a result, its reliability, practical utility, and scientific validity leave much to be desired.

The problem with Myers-Briggs is the same as the problem with all personality tests: it is self-administered and reflects a self-selected view of one’s personality, failing to take into account the fact that how you see yourself in your inner dialogue might not be how you come across in your external interactions with others, all of which considerably influences the perceptions you form and the decisions you make on a daily basis.  This is compounded by a personal bias innate to all such tests, making it damn near impossible to be objective when the subject matter in question is, in fact, yourself.

The reason I compare such personality tests to a horoscope is that when I read through the various personality summaries of each astrological sign, there is not a single one that will not reflect some basic component of my personality and character that I can choose to focus on, if it happens to please me to do so.  And when I read through the descriptions of each personality type on the Myers-Briggs scale (or any other personality test), I can also see ways in which every category listed could apply to me if I just focused on different aspects of my personality.

I get that having a ready-made list of attributes makes it easier for many of us to interact with and make decisions about other people (as well as how we choose to view and carry ourselves, personally).  But something as fluid and adaptable as a “personality type” comes across to me as much too situation-specific to be neatly labeled by any one-, two-, three-, or seven-scale test, any more than by reading celestial patterns to determine one’s lucky/unlucky days.  I find that actually talking with each other is much more effective at gauging another person’s personality, just like having an honest dialogue with oneself goes much further in helping us figure out what makes us who we are deep down.

Friedrich Nietzsche on Religion and Atheism


Believe it or not, there actually exists some contention in Nietzschean circles about the philosopher’s religiosity (or lack thereof).  While most people maintain that Friedrich Nietzsche was undoubtedly an atheist, a few commentators see his screeds against Christianity as being indicative of a deeper understanding of the mystical, leaving room open for a belief in the divine.  Adding to the possible confusion for some readers are the writings of certain cranks (i.e. Thomas J.J. Altizer), who promote a wholly bizarre “Death of God” theology that stretches Nietzsche’s writings to absurd lengths.

But the best way to put the issue to rest is to go straight to the source himself.  In his final and most autobiographical book, Ecce Homo, Nietzsche begins the second chapter, “Why I Am So Clever,” by plainly stating his position on religious matters.

He states:  “‘God,’ ‘immortality of the soul,’ ‘redemption,’ ‘beyond’–without exception, concepts to which I never devoted any attention, or time; not even as a child.  Perhaps I have never been childlike enough for them?”  Here, he clearly frames his worldview as completely divorced from what one would call religious sentiments and, one could argue from the inclusion of ‘beyond,’ as devoid of the supernatural in general.  It is important to bring attention to the way Nietzsche claims to have never “devoted” any time to anything vaguely religious, because it is vital in understanding the manner by which he addresses theological positions in his writings.

Some have quoted the next paragraph in the text, where Nietzsche says, “I do not by any means know atheism as a result; even less as an event,” to indicate that Nietzsche might have still held to a spiritual sort of mysticism.  But this is unfounded in the actual text, because it places too much emphasis on the first part of the sentence, while ignoring the last.  Nietzsche qualifies that he did not know atheism as a result or event precisely because his unbelief was not the product of some grand epiphany; he did not lose faith, because he never had it to begin with.  He goes on to explain, “it is a matter of course for me, from instinct.  I am too inquisitive, too questionable, too exuberant to stand for any gross answer.”  To Nietzsche, disbelief is his natural disposition; his inquisitive nature will not permit him to settle for any gross answer.

I mentioned earlier that it is noteworthy how Nietzsche never bothered to entertain any notion of the supernatural, and how this sentiment affected his approach to theology.  Unlike other prominent atheist writers of the 19th century, who saw fit to argue against the existence of deities and religions, Nietzsche never bothered to engage or refute any of the arguments for the existence of gods.  He repeatedly affirms that gods do not exist, but his affirmations are meant to be taken as solid proclamations, rather than logical arguments.  The reason for this is that Nietzsche would have considered such engagements insulting to his person, because to him, “God is a gross answer, an indelicacy against us thinkers–at bottom merely a gross prohibition for us:  you shall not think!”  To even go so far as to refute the standard theological arguments would have been too big of a concession in Nietzsche’s mind.  To him the nonexistence of gods was a given fact, unworthy of debate (a position that greatly influenced later existentialist thinkers, like Jean-Paul Sartre).

This might seem odd, since anyone who has read Nietzsche can attest to the fact that he spends a multitude of pages mentioning God.  Indeed, it can be argued that the topic was somewhat of an obsession for the philosopher, even if he claims not to have devoted any time to it.  However, one must be very careful here.  In much of his writing, Nietzsche’s atheism takes on a very post-theistic tone (The Gay Science, Thus Spoke Zarathustra, etc.), where he asserts the death of God, not as an actual entity, but as a psychological concept.  Primarily, because that’s all gods are to Nietzsche:  man-made concepts, whose humble origins have been forgotten.  What he discusses in his writings is not any sort of deity recognizable to the religious, but the role, power, and influence the concept of God has had on the psychology of humanity, as well as how modernity is leading to the gradual (and unavoidable) erosion of this concept from our psyche, as supernatural suppositions become more and more untenable in contemporary discourse.

In these regards, Nietzsche’s post-theistic atheism is a unique take on the issue of religion and God, but one should avoid assigning to it any deeper meaning than the philosopher himself intended.

Bibliography

Nietzsche, Friedrich. Ecce Homo, “Why I Am So Clever,” Section 1.

The specific translation used for the quotes in this post comes from Walter Kaufmann’s Basic Writings of Nietzsche (1967; 2000 reprint), The Modern Library: New York, pp. 692-693.

Dale Carnegie Was Wrong

If you’ve ever taken a Communications or Business class, or sat in on any sort of marketing/networking seminar, there is a very good chance that Dale Carnegie’s How to Win Friends & Influence People was listed among the recommended readings on the syllabus.

In the book, Carnegie sets out to give a list of very basic advice on how to successfully interact with people, and increase your own potential by doing so.  The advice given seems very reasonable in a broad sense, and can be summed up in terms of being genuine and polite towards others, approaching everyone with a positive attitude, and reaping the personal satisfaction and interpersonal accolades that come from it.  All this is well and good, and I’ll be the first to tell people that if you wish for others to like you, not behaving like a complete dick towards them will go a long way in accomplishing this goal.  Moreover, if Carnegie’s book(s) help anyone achieve a mental state that makes them feel more empowered and confident in how they communicate and carry themselves (not to mention, increases their overall happiness), I have no qualms with that aspect of his method.

However, all that being said, it would be dishonest if I did not mention that there is something that has always irked me about Carnegie’s writing, in particular this one book.  I think my problems with the self-help author are best summarized at the very beginning of Part Three, Chapter One, titled “You Can’t Win an Argument.”  In it, Carnegie tells the story of a conversation in which an acquaintance mistakes an obvious quotation from Hamlet for a passage from the Bible.  Carnegie, aware of the error, corrects the man, and looks to his accompanying friend (an expert on the subject) to back him up in the correction.  Surprisingly, the friend sides with the gentleman who is in error, later telling Carnegie he did so because correcting the man would not accomplish anything positive.  Carnegie happily agrees with this reasoning, and advises readers to take it to heart that you should not correct such obvious mistakes made by others, on account that it would make you argumentative, and being argumentative will not make people like you.  Presumably, the proper thing to do when confronted with such a situation is to be accommodating and refrain from saying anything that is not agreeable.

I take issue with this line of thinking.  Not because I see great merit in being argumentative with people, but because I see something disturbingly manipulative in this tactic of communication, which I believe to be a problem at the core of much of the self-help market.  Carnegie asks us what good there is in standing firm and proving to the mistaken man that he is wrong, pointing to the desire to be held in high esteem as the main priority.  But why should being liked be a greater priority in this situation than being honest?  If it’s because it will be personally beneficial for you to always be seen in agreeable terms by those around you in case you need to call on them for favors down the road, then you are not looking to make real friends or honestly communicate with people at all; your real interest lies in simply using people for your personal ends.  Because if this is not the case, and the stated goal is to form genuine relationships with people, then you should (as politely and lovingly as you can) seek to uphold a standard of honesty with those around you.  This includes being honest when you know that an acquaintance has made a minor mistake, such as misattributing the source of a quote.

I have made silly mistakes and the occasional faux pas on many occasions (and will undoubtedly make many more to come).  Sometimes, those around me corrected said mistakes; other times, no correction was made and my ignorance remained unchecked until I happened to come across the truth of the matter first-hand.  Every time it happened, I felt like an idiot, and, yes, slightly resentful that my ignorance was in full view of the public.  But you know what definitely never happened again?  A repetition of that same display of ignorance on my part, on that same subject I was previously so wrong about.

I believe this is something Carnegie fails to address in his work.  And the reason this is a problem is that books like How to Win Friends & Influence People present themselves as being based on the principle that the fundamental way to succeed in getting what you want from others is to first be mindful of the wants and desires of other people.  In terms of building empathy, this is a principle I can truly get behind.  What I can’t get behind is the idea that pandering to the ignorance of those we wish to like us is something an honest person should strive for.  What I outright reject is the idea that communication skills ought to be built on chess-level moves of strategy and tactic, wherein the goal is to say just the right buzzwords to manipulate a desired outcome.

And no, despite what some self-appointed “straight-talkers” with a public platform wish to promote, standing up for what is true should not require you to disregard sensitivity towards others’ dignity, nor does it give you a license to be a total asshole in how you communicate with people under the guise of honesty.  To be honest is to simply be sincere with what you know to be true, and I believe making friends on the basis of such sincerity is a better approach than looking to avoid making enemies by kissing the ass of anyone who might seem influential enough to give you a leg up in life simply for being their Yes-Man.