Practicing Self-Scrutiny

Genuine self-scrutiny is a personal virtue that is much easier preached than practiced.  Usually the furthest most of us are willing to go is a relativistic acknowledgment that opinions differing from our own exist and that, all things considered, we would be willing to change our minds if these alternative viewpoints were to persuade us sufficiently.  But, in my opinion, this sort of tacit relativism isn’t much in the way of self-scrutiny.  To self-scrutinize is to actively challenge the values and ideals we hold dear–to dare to shake the foundation holding up our most cherished beliefs, so to speak, and test whether the structure housing them is sturdy enough to withstand a direct attack.  In contrast, the aforementioned acknowledgment that differing (and potentially equally valid) views exist is a very passive stance, as it relies strictly on an external source to come along and challenge our position(s), with no actual self-scrutiny involved in the process.

Up to this point, this very post can rightfully be characterized as the passive variant; i.e. it’s me (an external source) attempting to challenge you to question the manner in which you view the world around you.  Although there are occasionally posts on this blog in which I sincerely try to adopt stances opposed to my own, the truth is that I do this primarily to strengthen my own position by better understanding what I’m arguing against.  This, too, is not self-scrutiny.  And it would be dishonest to pretend otherwise.  To truly self-scrutinize I would have to pick a position–a value, an ideal–around which I orient my worldview, and mercilessly strip it to the bone.  The frustrating part of such a mental exercise is the inevitability of having to rely on generalizations of my own opinions in order to paraphrase them thoroughly enough, without getting trapped in a game of meaningless semantics.  The important thing to remember is that the points I will be arguing over (largely with myself) in this post are admittedly stripped of their nuances regarding obvious exceptions and impracticalities, so as not to lose focus of the underlying principles being addressed.  Consider this a disclaimer for the more pedantic-minded amongst my readers (you know who you are).

First, it would be helpful if I stated a value around which I orient my worldview, prior to trying to poke holes in it.  Above all else, for as long as I can remember I have valued the egalitarian approach to most facets of human interaction.  I truly do believe that the most effective, just, and fair way for society to function is for its sociopolitical and judiciary elements to strive for as equitable an approach to administering their societal roles as possible.  I also recognize that this can more realistically be considered an ideal for society to endeavor towards than an all-encompassing absolute–nonetheless, I still see it as a valuable ideal for modern society to be striving towards.  Additionally, I should clarify that I do not claim this personal value of mine to be derived from anything higher than my own preferences for how I think society ought to be.  Yes, it is subjective, as it is subject to my desires and interests; however, I would argue that this is true of just about any alternative or opposing viewpoint that may be brought up.  Furthermore, the merits and benefits I believe to be implicit in my personal preference for an egalitarian society (though admittedly subjective) are (in my opinion) independently verifiable outside of my own internal desires.  In short, I value egalitarianism because, having no just and tangible means by which to sift through who merits which position in the social hierarchy we all live in, I consider it important that (if nothing else, at least in the basic application of our political and judicial proceedings) we hold all members of society to an equal standard.
Moreover, not that it matters to determining the validity of the egalitarian viewpoint, but I’m convinced that the majority of the people reading this will have little trouble agreeing with the benefits of such a worldview (though probably more in principle, while leaving room for disagreement over the most practical means by which to apply said principle in the social framework).

Now, the immediate issue I see arising with this stance of mine is the objection that genuine egalitarianism can easily lead to outright conformity–especially enforced conformity–as a society built on the model of complete equality might find it difficult to function unless it actively sets out to maintain the equality it’s seeking to establish.  It is a harsh fact that large-scale human interaction is not naturally egalitarian; meaning that, left to their own devices, there is little in the historical evidence to suggest that a complex society of people will not diversify themselves into a multi-layered hierarchy, thereby instinctively creating the social disparity that the egalitarian mindset is aiming to combat.  The most obvious response would be to insist that egalitarianism simply means that the basic functions of society (i.e. the law) have to be applied equally, and that as long as such measures are upheld, the system will self-correct to its default setting.  Yet this outlook is only convincing as long as one is inclined to have faith in the sincerity of the application of the law, in terms of holding everyone in society to an equal standard.  This also brings us to the issue of who is to be the arbiter warranted with upholding the principles of an egalitarian system.  The judiciary?  The policymakers?  And does this then bestow on these individuals a degree of authority (i.e. power and privilege) that creates a disparity which in itself violates the very premise of a truly egalitarian model?

“In a democratic society, the authority rests with the people in the society to ultimately decide on who is to be the arbiter(s) to ensure that equality is being upheld in said society on the people’s behalf.”

But maintaining social equality by means of representative democracy brings us to the issue of having those in the minority opinion be subject to the whims of the majority.  And is this not also in itself a violation of what an egalitarian society ought to be striving for?  When we play out the potential pitfalls of every one of these concerns, what we end up with is the realization that, in practice, egalitarianism seems to function only when applied on a selective basis.  Complete equality, across the board, on all matters, has the serious consequence of ending either in social gridlock (rendering all manner of progress on any issue impossible) or in coercion (negating the benignity that is ideally associated with egalitarianism).

I’ve heard it said that in this sort of discussion it is important to differentiate between equality of outcome and equality of opportunity; that the latter is the truly worthwhile goal an egalitarian ought to be striving for in order to ensure a just and fair society.  I’m not sure this does much to address the primary issue at hand.  If there exists no disparity in opportunity, but we preserve an inequity in outcome, then will it not still be the case that a select number of individuals end up occupying a higher role in the social hierarchy than others?  And once the foundation is laid for such a development, is it not just as likely that those who end up occupying a higher role could put in place measures of greater benefit to themselves, even at the expense of those who fell into lower social roles?  (Meaning that even though in this model all opportunity was equally available at first, the caveat that different people can have different outcomes–fall into more or less favorable social conditions–leaves open the question of what safeguard there is that those who rise high enough will not manipulate matters in society to their advantage, including stifling the outcome and opportunity potentials of future generations, thereby undermining the whole egalitarian ideal on which the system was meant to be founded.)  If the rebuttal is that in a truly egalitarian society measures would be in place to prevent this, we fall back to the question of who exactly is to be the arbiter warranted with upholding the principles of an egalitarian system–bringing us full circle to the line of inquiry mentioned in the preceding paragraphs.

These are objections that, even as someone who considers himself an egalitarian, I have a lot of sympathy for.  Mainly because I don’t have any way to refute them without appealing to a personal intuition that these concerns are not endemic to an egalitarian model and that it’s ultimately feasible to avoid such potential pitfalls.  However, I have to admit that I’m not entirely sure of this myself.  This problem brings me directly to the confrontation of what I value more in society:  the principle of equality, or the autonomous individual?  The threat that removing all disparity between individuals might stifle the distinct individuality of people is something I believe is worth worrying over.  What good is a world where equality is triumphant but reigns on the basis of enforced sameness?  Not to mention, what will happen to the human ingenuity all of us in modern life depend on for our survival as a society?  The prospect of attaining personal achievement rests on one’s ability to stand out from the crowd and create something unique and distinct from the common.  The possibility that this drive will be held suspect in a completely egalitarian world, in the name of preemptively combating all forms of perceived inequality, no matter how unpleasant it might be to my core values, is not something I can dismiss simply because it’s inconvenient to my worldview.  Essentially, I believe it would be unwise to simply brush off the point that a world safeguarded to the point where no one falls is also potentially a world where no one rises.

When I started writing this post I had a standard set of points I knew I would raise to fulfill my interest in demonstrating a genuine attempt at unrestrained self-scrutiny.  I know that some readers might wonder why I’m not doing more to combat the objections I’ve raised here against my own egalitarian perspective, and the simple truth is that I understand my desire for egalitarianism to be practical and feasible rests almost entirely on the fact that I want both of those things to be true, because it would validate my presupposed worldview.  Nonetheless, I do understand that reality does not depend on my personal whims and wishes.  In all honesty, having actually reasoned out the premises here, I’m left wondering: if, for the sake of practicality, we will undoubtedly always be forced to be to some extent selective in our approach to egalitarianism, why do we (I) still even bother calling it egalitarianism at all?  Perhaps there is a term out there that more honestly fits what most of us mean when we seek to uphold what we refer to as egalitarian principles.  That, however, is a wholly separate discussion from my intentions here.  My goal was to hold my own views and values to the fire and see where it ends up.  In that goal, I think I’ve gone as far as this medium allows…what results from it will take a bit more thinking on my part to figure out.


Job Interviews: Plainly Simple, or Just Plain Stupid

Is it just me, or does anyone else who has ever been to a job interview think that the person doing the interviewing asks the stupidest questions imaginable?  By far the dumbest thing that comes up in every job interview (in my experience) is the question, “Why do you want to work here?”

When this happens most of us will smile and mumble on about how much we respect the company/business/field/whatever and how much potential we see in the employer, and how we wish to contribute even a small part to the blah, blah, blah.

What most of us really want to say to the question, Why do you want to work here?, is much simpler:  “Money.  I want money.  I want you to give me a paycheck on a regular basis so that I can afford to pay my bills, and feed myself, and otherwise survive in modern society.  I couldn’t care less about this place, or its success; as long as it in no way impedes my ability to earn a living, this entire industry can just be scraping by for the next 50 years with no prospect for growth.  What I want is to get paid, and I’ll do the job for it because I have to.  Think of me as a sexless prostitute, if you will.  But you know that already.  You must know that!  You spend all day, every day interviewing people who give you the exact same insincere, pre-prepared response they found while searching Google for ‘interviewing tips’ the night before.  Heck, you were in the same place, for every job you’ve ever had in the past, so cut the crap and stop wasting my time with this nonsense.  You have my freaking resume, you have my freaking credentials.  You have all you need to know to make an informed opinion about whether or not I qualify for this job.  If I do, great, give it to me and I’ll start earning my salary.  If I don’t, thanks for your time and let me be on my way.  And what is with this whole second and third interview shit?  I said all I have to say in the first interview.  My answers to your vague, overly simplistic questions will not change the second or third time.  There are only so many ways we can say the same thing over and over again before we run out of words.  Believe it or not, there actually is a limit to the amount of bullshit the English language can be spun into over the course of a 30 minute conversation.  You’re smothering me, man, you’re smothering me!  The main goal of any job is to earn money, otherwise we’re just slave labor.  And I’d rather be a prostitute for the job market than a slave.  Got it?  Good, now let’s talk benefits, shall we?”

I will pillage, conquer, and surrender a kingdom* to any person out there who is willing to give this response to a really annoying job interview question.

All right then.  /Rant over.


*Timeline to claim pillaged, conquered, and surrendered kingdom falls to the discretion of the pillaging, conquering, and surrendering party.  Terms and conditions are amendable at said party’s whim and interest.  No refunds or evidence of the existence or plausibility of a kingdom’s pillaging, conquering, surrendering will be issued prior to melodramatic outburst.  All rights reserved.

“How you like them Apples…”

Privacy in the digital age often seems far more complicated than it ought to be.  If asked whether there exists a case in which one’s personal information should be shared or viewed by a third party or government agency, most people would respond with a resounding, “Hell no!”  And rightfully so.  However, it’s also common knowledge that if you have a profile up on any social network, or even if you just have a Google account through which you surf the web, your personal information and interests have been, and currently are, undoubtedly being shared with market parties you have no say in, with so little protest from the public that most of us have simply come to accept it as an unavoidable fact of the technologies we depend on to get by in the modern world.  Thus, it is no surprise that there exists a side in the current fight between the Apple corporation and the FBI which readily wishes to dismiss us worrywarts as being overly paranoid in the debate.  Because from their perspective, allowing the FBI to gain access to one phone, from one customer, connected to an act of terrorism is not equivalent to Big Brother moving in to dispose of our civil liberties.  In this view, trading a bit of privacy for added security is a no-brainer.  If you haven’t caught on yet, I strongly disagree with this viewpoint.

Maybe it bears no repeating, but government agencies in the U.S. (and elsewhere) haven’t exactly had the best record when it comes to respecting their citizens’ right to privacy.  In fact, it is a callous mode of operation that continues to recur, again and again.  So forgive us, brave protectors of our collective security, if we are not optimistic about the prospect of allowing a precedent to be set on our privacy rights, one liable to be abused by institutions that have done nothing to earn our trust in their beneficence in upholding this area of the law.

Looking back, I remember the indignation with which President Bush scolded those of us who cared to raise objections to the warrantless wiretapping conducted by the NSA under his orders.  I also remember President Obama sternly rejecting calls of hypocrisy when he allowed this same invasion of privacy and liberties to continue under his presidency, despite having spent so much time campaigning against such abuses of executive power during his bid for the White House.  There is one thing, however, I agree with President Obama about from when he was laughably trying to defend himself from critics (especially those who had once supported him) at the onset of the Edward Snowden leaks.  I remember the President mentioning that the conversation about choosing between privacy and security was an important one to have.  And this is absolutely true!  Unfortunately, what the president does not–or will not–understand is that it is a conversation we ought to be having before such breaches of our liberties are undertaken.  Discussing it after the fact, and then dismissing our hostility at having our privacy violated as imbecilic, is insulting to our collective intelligence and undignified, to say the least.

For those who say they are willing to grant institutions like the FBI access to their phones, if it means a possible increase in combating global terrorism and societal security, I wish you well with your optimism.  But do not dare speak on my behalf to implement your priorities into our laws.  And please don’t condescend to me as if you’re the levelheaded adult in this conversation, and we’re all a bunch of babbling infants, too stupid to understand the bigger picture you are wont to protect.  Because I can see a bigger picture too, and it does not involve overreaching agencies suddenly learning the importance of restraint with next to unlimited power, when all notable evidence points to them never having learned it to begin with.

The State of the American Education System and Its Sputnik Legacy

As someone whose early academic background was split fairly evenly between Europe and the United States, I’m occasionally asked by my now-fellow Americans whether I believe there is any truth to the oft-cited inadequacy of the U.S. education system.  The simple answer is an obvious, “Yes.”  Of course, as I write this, it needs to be remembered that there really is no such thing as the U.S. education system, for the same reason that there is no such thing as the U.S. culture.  Education is primarily a state matter in this country; therefore what we have is a loose collection of 50 independent educational systems (which themselves house several diverse school districts) that at times exchange information and resources, but ultimately set their own standards on what is to be considered academically adequate within their individual state borders.  I’m aware that non-American readers might hear this and still wonder what sort of nation could possibly allow any segment of its population to fall behind academically while letting another segment flourish.  This is an understandable though somewhat hasty reaction, as it ignores the difficulty that comes with uniformly managing a nation as heavily populated as the United States (the latest census has Americans numbering over 300 million, stretching from one ocean to another and beyond, if we count Alaska, Hawaii, and a handful of protectorate territories).  Taking all that into consideration, it is still undeniably true that the U.S. could be doing a whole lot better of a job when it comes to fostering a decent education for its young citizenry.

The figures usually cited by cynics of the American education system read along the lines of how, nowadays, only 53% of American adults know how long it takes for the Earth to revolve around the sun, only 59% know that humans and dinosaurs did not live at the same time, only 47% know what percent of the Earth’s surface is covered in water (and only 1% know what percent of that is fresh water), and only 21% of American adults are able to answer all three of the above questions correctly.[1]  Now, one needs to keep in mind that this sort of data is collected through survey polls, and the limitation of relying on a particular population sample–those who happen to randomly come across the survey–always leaves the possibility that the gathered results make the issue in question seem worse than it really is.  Nonetheless, one doesn’t really need the controversial figures above to note a clear lack of academic rigor in contemporary U.S. schools; the lackluster national averages on basic test scores do that just fine (if you need proof, ask a random American to add two fractions and see what happens).  When one considers that today’s generation has wider access to resource material than any generation in human history–where all you really need to do to research a subject is use Google–figures showing even half of the percentages quoted above would still be rather frightening.

Figures from academic disciplines I have personal exposure to don’t fare much better nationwide.  Americans from all backgrounds show a troubling lack of knowledge in matters concerning U.S. history and politics:  only 50% of U.S. adults can name all three branches of government (meaning that half of us cannot), and only 54% know that the power to declare war belongs to Congress (40% incorrectly thought it belonged to the president); as for those noble souls we elect to public office, only 57% know what the purpose of the electoral college is.[2]  I admit there has occasionally been a part of me that shuddered to think how, if we can’t pass tests regarding our own country, we would perform if asked about events beyond our borders.

When it comes to the people who comment on and frame sociopolitical conclusions about the American educational system, the question I believe they really want answered is why the U.S., as a leading first-world country, can’t seem to find a way to curb its descending academic standing.  However, those who hold this question in mind would probably be surprised to learn that it’s really not so much a question of “why can’t” as of “why won’t.”  The United States, in its not-too-distant past, experienced a similar fall from scholastic grace, only to emerge from it as a leader in 20th-century technological and scientific advancement.  But to fully explain the circumstances, I’ll have to give a brief history lesson.

On October 4, 1957, the United States of America received the greatest blow in the Cold War struggle up to that point.  Surprisingly, it did not come from a military loss, nor was it the result of any covert attack.  Yet it was an event that hit deep into the recesses of our society, and shook us harder than any missile ever could.  And for the first time—in a long time—it made us question our position as a leader on the world stage, while simultaneously bruising our pride as a nation for having allowed ourselves to suffer such a humiliating defeat.  This event, of course, was Sputnik, and as the Soviet Union launched its satellite into outer space, America fell into stupefied awe at the enemy’s technological advancement, and loathing bewilderment at our own programs’ shortcomings.  Did not our earlier attempts at space exploration with Project Vanguard end in utter failure?  How could these “godless commies” beat us to it–by what right?  Bemusement and anxiety followed quickly thereafter, but the core of the problem was clear to all:  We had fallen behind, and something would have to change if we were ever to reclaim dominance.

The Sputnik crisis spurred an immediate response from the US government–unwilling to allow those nogoodniks in Moscow to surpass us by any means–leading to a drastic increase in federal spending on science research and education.  The effort paid off when, on July 20, 1969, America planted its flag on the Moon’s surface, becoming the first nation to do so.  Yes, a compelling argument can certainly be made that the underlying reasons for the sudden concern with the country’s educational well-being, and all the achievements that stemmed from it, were inspired by a jingoistic impulse to establish American dominance in the Cold War effort.  But acknowledging this fact doesn’t make it any less effective in its immediate outcome: improving the standard of the then-mediocre U.S. educational system [we’re speaking on average here, not across the board].  Therefore, I don’t see the question of “why can’t” the U.S. identify and fix its academic problems (the exact content and contributors of which are open for debate) as a viable one, since it did manage to do just that in the past (for however long or brief a time it was).  The difference between now and then is an issue of incentive and priority.  Immediately post-Sputnik, education became a matter of national defense.  These days, however, America’s national defense priorities do not involve enemies looking to match us on the scholastic front (destruction is really the motive in today’s theaters of operation).  Thus, outside of offering a few rhetorical points during election seasons, policymakers really have no great interest in investing the effort and funds to reassess how education is administered in this country.

To summarize:  In 1969 our enthusiasm for scientific discovery and educational progress as a component of national security brought us to the moon and beyond.  Faced with a potential danger, we recognized our faults and took action to combat an issue that was threatening our sociopolitical status; today, as a nation, our attitudes, enemies, and priorities on the matter have simply changed.

So, does the U.S. have a problem in its educational system(s)?  Yes.  Can it be fixed?  Sure.  Will it be fixed?  Probably not any time soon.


[1] ScienceDaily, “American Adults Flunk Basic Science”, March 13, 2009, http://www.sciencedaily.com/releases/2009/03/090312115133.htm.

[2] Figures from the Intercollegiate Studies Institute, reported by NBC Los Angeles, “Americans Don’t Know Much About History”, January 26, 2009, http://www.nbclosangeles.com/news/local-beat/Study-Americans-Dont-Know-About-Much-About-History.html.

Treatise on Profanity

I like profanity.  I like how it adapts to whichever situation the speaker wants to thrust it in.  I like how it effortlessly fluctuates from endearment to abuse.  And I love how, once spoken, the reaction reveals more about the listener than the speaker.

Like any mode of expression, profanity is codependent on the speaker and the listener to add context to its message.  If worn out ad nauseam, profanity becomes stale, bland, and too normalized for heterodox consumption.  However, if used with tact, distributed with precise attention to detail, it can elevate even the dullest of conversations to a respectable level of fringe rebelliousness.  But here, too, one must proceed cautiously.

The most beautiful part of profanity is its apparent authenticity.  If it comes across too calculated, too forced, the effect is ruined; worse still, the disgust will come to be associated with profanity itself rather than with the speaker’s failure to profane properly.  Essentially, profanity must be mindful, but not overly so.  It must resonate with the audience–good or bad–without drowning them in a sea of senseless babble.

When it comes to the listeners (or maybe I should say responders) to the profanity being spoken, the reaction is often one of self-righteous disgust at the words.  In this circumstance, no effort is made to understand the context in which the words are spoken, let alone to appreciate the emotive experience they produce.

Are you offended by profanity?  That’s good.  Now, aim to dig deeper and understand the power the words have over you.  If you are offended or made uncomfortable by a profane word (or profanity in general), resist the urge either to apologize for your initial feelings (they are involuntary, after all) or to demand an apology from the speaker to soothe your offense.  Instead, try to appreciate the great depth of emotion these so-called vulgarities have forced you to confront.  That power alone is why profanity deserves better than to be dismissed as too lowbrow for intellectual discourse.  Why it deserves an honored place in literary and cultural discussion.  If anything, to ignore that which challenges our most base values and senses, which evokes so much heated passion from us, would be all the worse for intellectual discourse.

Fucking A!

The Social Contract Theories of Hobbes and Rousseau: A Critique

[This post makes references to two previous analyses on the social theories of Thomas Hobbes and Jean-Jacques Rousseau, which can be found here, and here, respectively.]

Social contract theorists, like Thomas Hobbes and Jean-Jacques Rousseau, aim to systematically establish the basic components that warrant the formation of human communities, giving rise to the creation of governing entities, all through an initial set of covenants a people agree to enter into in order to strengthen their prospects for individual self-preservation as members of a greater society; this is the social contract.

Although Hobbes and Rousseau diverge greatly on the framework and mode of governance that is to ensue from the social contract, both agree that absent such a pact, the individual is transported back into what can be called the state of nature.  To Hobbes, this is an anarchic, cruel, savage existence where no law or peace can exist, and a perpetual state of war is the norm (hence giving man, as a rational animal, the incentive to enter into covenants with his fellows as a means to avoid such a dire reality).  Rousseau, on the other hand, takes a much gentler view of the state of nature.  He agrees with Hobbes that in this state man is left to a solitary existence, but instead of viewing this as a savage realm, he sees it as peaceful and ideal, where the general will of the individual is not subverted to the will of any other person.

These fundamental disparities between the two philosophers are of secondary concern to this critique, since my focus will be to show that both thinkers have failed to account for the exact means by which modern communities exist in relation to the state of nature they present as their starting premise, and have therefore failed to establish the intellectual integrity of social contract theory.

In Leviathan, Thomas Hobbes proposes a social system built on covenants between individuals, which subsequently form what he calls the commonwealth (i.e. society).  In this model, justice is defined as performing the agreed-upon covenants; injustice is naturally that which runs counter to the established covenants of the commonwealth.  And this is to be enforced by an authoritarian sovereign, acting as proprietor of the said social contract.  Hobbes maintains that the incentive individuals have to hold to the laws of the covenants is their desire to avoid the savage state of nature to which they are bound to be banished should they fail to live up to the social contract.  However, there is a premise here that Hobbes fails to demonstrate; namely, what grants that a failure to perform the social covenants will automatically place one back in the state of nature at all?  For example, in all social communities possessing established laws (i.e. covenants), there more than likely exist individuals who, at times, break these laws (i.e. fail to perform their covenants), but are not definitively banished from the community itself (i.e. the commonwealth).  Almost always, mechanisms exist within the community itself that deal with criminal perpetrators while still allowing them to retain their citizenship status within the society.  In fact, the judicial systems of much of modern society operate on the basis of punishment, yes, but also rehabilitation; Hobbes’s social contract does not give measure to this latter, important aspect of criminal justice.  Instead, he wishes to place all breaches of social covenants on an equal plane of offense, which ultimately renders his social contract impractical by definition, because it will be unable to adapt to the issues and concerns that are bound to arise as society progresses.
Unavoidable technological, social, and political advancements will mean that with each passing generation, individuals will be born into covenants to which they did not consent and whose decrees do not pertain to their cultural orientation, yet they will nevertheless be judged by the merits of an archaic framework with little relevance to their modern lives.  Thus, for any political model to survive the test of time, a means of amending the initial covenants must be put into place from the start.

Furthermore, Hobbes’s insistence that the enforcement of the covenants is to reside with an authoritative sovereign also fails to take into account the ever-changing demographics of a populace, and gives no proper account of why individuals born after the initial covenants were made–who therefore did not consent to empower the ruling sovereign as the proprietor of their commonwealth–ought to be subjected to decrees authorized prior to their existence.  As already stated, the next generation will not necessarily agree with the initial pact that created the social contract, thus new covenants will be required every few decades; but this by definition subverts the entire point of Hobbes’s authoritarian system.  Hobbes’s social program is innately static, but–unfortunately for Hobbes–society and life are not.  If one were to take Hobbes’s account of the state of nature and incorporate it into his proposed social system, the end result would be a constant calamity of communal covenants, erected and foundering with the passing of time.  And, perhaps, such a view of society is historically defensible, but it is not the sort of stable commonwealth Hobbes was arguing for.

In The Social Contract, Rousseau’s argument rests on even flimsier premises than Hobbes’s.  The state of nature Rousseau depicts–peaceful, harmonious with nature, man’s ideal state of being–renders his entire proposal for setting up a proper society and government (even a popularly democratic one) redundant, since if one were to accept his account of the state of nature, the philosopher’s real task ought to be to argue for the dissolution of government and society as a whole.

Rousseau proposes that man entered into the social contract because the conditions of his solitary state (though peaceful and ideal) were insufficient to ensure the individual’s self-preservation; therefore, he formed into communities to strengthen his chances against the forces of nature within the safety of the group, while still retaining his general will (and without subverting the will of others).  But Rousseau’s entire basis for this premise sounds like a case of special pleading; why did man have to form a pact with other men, if his existence prior to the advent of communities was peaceful and fruitful?  If he had already enjoyed the greatest freedom possible absent an established society, what reason is there to argue in favor of keeping any social order whatsoever (even one that is largely run as a direct democracy)?  Also of note, Rousseau mentions that any individual who wishes to leave the social contract is free to do so at his/her discretion.  Hence, if we follow the reasoning Rousseau outlines for us, these individuals leaving the social contract would be returning to the state of nature, where they will have peace and be free.  So, once again, what was the purpose of entering the social contract, where one’s general will is capable of being conformed to the will of the community?  The philosopher’s failure to address these questions concerning the most fundamental aspects of his argument makes his entire case suffer as a result, and gives no good reason as to why his proposals should be taken seriously.

Rousseau’s entire program would be much more coherent if he had given a more thorough rationale as to why man actually benefits from societal life, in contrast to a solitary one.  But to do so would, of course, undermine all the previous work in his Discourse, where he affirms that man is innately a solitary being.  (That, however, is a critique for another post, on another day.)

The problem with social contract theory, in general, is that it places too much emphasis on man’s conscious entrance into communal life, when in reality a much more cohesive account can be made for the idea that we–by and large–do not actively consent to any covenants, or social contracts, but are instead born into them.  Rather than forging social communities, we extend and modify those that we already have.  Hence the relatively slow (millennia-long) progression of human civilizations.  No clear account can be given of what the initial spark was that caused sophisticated communities to emerge, but there is no reason to speculate that it must have involved a great deal of conscious sophistication on behalf of the individuals involved; remember, the original purpose of the most mundane of habits can be forgotten, and transformed into the most innate and sacred of customs, within a stretch of only a generation or two.  Thus, to attribute too much forethought to the habits of our ancestors would be a grave error of reasoning.  Time erodes all matters, but along the way it can also modify and sharpen things into something more pragmatic and tasteful than how they initially began.

Depression Impression

This is a topic I have been wanting to touch on for some time, but I usually found myself pausing, as it proved difficult to articulate what essentially comes down to a concerned observation on my part.  (I suppose one could consider this post an attempt to verbalize a matter that’s been unsettling me, in the hope that it will make more sense once I finally manage to shape it into coherent prose.)

Throughout my years of schooling and tutoring, I have noticed several trends and patterns emerging.  Notwithstanding the ever-fluid fashion sense of adolescent youths, a concerning trend I repeatedly take note of is how, as time goes by, the number of people being prescribed antidepressants continues to increase.  This trend also holds among colleagues, supervisors, family members, close friends, and casual acquaintances.  And demographic studies seem to confirm my observation that over the last three decades the number of people being treated for depression, and prescribed antidepressants, has continuously risen (at least in the U.S.) with no signs of leveling off.

One possible explanation is that people have recently become more willing to seek proper treatment for their depression than ever before, which would make the increase in prescribed antidepressants a positive development, as it indicates that a greater number of individuals in need of medical/psychiatric care are receiving it.  However, although I would love nothing more than to wholeheartedly embrace this optimistic outlook on the observed trend, I can’t help but feel that it overlooks a rather important anomaly in the pattern:  namely, if there are now more people than ever seeking and receiving treatment for their depression, why is the rate of depression on a seemingly never-ending rise?  In other words, if we are being proactive by treating depression head-on, shouldn’t we see a correlating decrease in depression with the increase in prescribed antidepressants (i.e. the exact opposite of the trend we’ve been seeing over the last 20-30 years)?

As a point of preemptive clarity, I feel the need to state that I hope this post doesn’t come across as the scribblings of an internet conspiracy theorist, raving against “Big Pharma” and “the ills of modern medicine”.  I also feel somewhat silly having to actually say this, but (again, just for clarity’s sake) I’m not opposed to medications, or vaccinations, or hospitals, and I have no issue giving due credit to modern medical science as an irrefutable component of the overall rise in improved health for the large segment of the globe that has enjoyed it for the better part of a century.  But none of this has anything to do with the issue that is blatantly staring at me when it comes to depression and the increased dependency on antidepressants I see in the people around me (which seems to mirror the data gathered on the national population as a whole).  Given this observed trend, I can’t help but at least consider that something important is being overlooked.  Perhaps it is not always the depression itself that is the root affliction; perhaps, in at least some of these cases, the depression is a psychological response to an unaddressed stress factor that goes unexamined because we are more content with medicating people and sedating them into bliss than with considering the possibility that a deeper–possibly environmental or societal–problem exists here.

Like I said before, I am not an opponent of medicine or medication, but I can’t ignore the fact that I keep seeing more and more people around me resorting to antidepressants to treat their distress, with no apparent long-term plan or indication that these pills will actually subdue and eliminate the cause of their depression.  What I’m saying is that if we are going to numb a portion of people’s neurological senses, we had better be damn sure that what we are doing is actually treating the cause of people’s suffering, rather than just assuming we’re on the right track and continuing to prescribe medication that is simply not bringing about the expected result (i.e. actually reducing the number of people afflicted with depression).