
Egalitarianism: A Practice in Self-Scrutiny

Genuine self-scrutiny is a personal virtue that is much easier preached than practiced.  Usually the furthest most of us are willing to go is a relativistic acknowledgment that differing opinions exist and that, all things considered, we would be willing to change our minds if these alternative viewpoints were to persuade us sufficiently.  But, in my opinion, this sort of tacit relativism isn’t much in the way of self-scrutiny.  To self-scrutinize is to actively challenge the values and ideals we hold dear to our person–to dare to shake the foundation holding up our most cherished beliefs, and to test whether the structure in which we house those beliefs is sturdy enough to withstand a direct attack.  In contrast, the aforementioned acknowledgment that differing (and potentially equally valid) views to our own exist is a very passive stance, as it relies strictly on an external source to come along and challenge our position(s), with no actual self-scrutiny involved in the process.

Up to this point, this very post can rightfully be counted among the passive variety; i.e. it’s me (an external source) attempting to challenge you to question the manner in which you view the world around you.  Although there are occasionally posts on this blog in which I sincerely try to adopt stances opposed to my own, the truth is that I do this primarily to strengthen my own position by better understanding what I’m arguing against.  This, too, is not self-scrutiny.  And it would be dishonest to pretend otherwise.

To truly self-scrutinize I would have to pick a position–a value, an ideal–around which I orient my worldview, and mercilessly strip it to the bone.  The frustrating part of such a mental exercise is the inevitability of having to rely on generalizations of my own opinions in order to paraphrase them thoroughly enough without getting trapped in a game of petty semantics.  The important thing to remember is that the points I will be arguing over with myself in this post are admittedly stripped of their nuances regarding some obvious exceptions and caveats, so as not to lose focus on the underlying principles being discussed.  Consider that a disclaimer for the more pedantic-minded among my readers (you know who you are).

First, it would be helpful if I stated a value around which I orient my worldview, prior to trying to poke holes in it.  Above most else, for as long as I can remember, I have valued the egalitarian approach to most facets of human interaction.  I truly do believe that the most effective, just, and fair means for society to function is for its sociopolitical and judiciary elements to strive for as equitable an approach to administering their societal roles as possible.  In saying this, I also recognize that this can more realistically be considered an ideal for society to endeavor towards rather than an all-encompassing absolute–nonetheless, I still see it as a valuable ideal for modern society to strive towards, even if we must acknowledge that its perfect implementation may forever be out of our grasp.

Additionally, I should clarify that I do not necessarily claim this personal value of mine to be derived from anything higher than my own personal preferences for how I think society ought to be.  Yes, it is subjective, because it is subject to my desires and interests; however, I would argue that this is true of just about any alternative or opposing viewpoint that may be brought up.  Furthermore, the merits and benefits I believe to be implicit in my personal preference for an egalitarian society (though admittedly subjective) are, in my opinion, independently verifiable outside of just my own internal desires.  In short, I value egalitarianism because, having no just and tangible means by which to sort out who merits which position in the social hierarchy, I consider it important that (if nothing else, at least in the basic application of our political and judicial proceedings) we hold all members of society to an equal standard.  Moreover, not that it matters to determining the validity of the egalitarian viewpoint, but I’m convinced that the majority of people reading this will have little trouble agreeing with the benefits of such a worldview (though probably more in principle, while leaving room for disagreement on the most practical means by which to apply said principle in a social framework).

Now, the immediate issue I see arising with this stance of mine is the objection that genuine egalitarianism can easily lead to outright conformity–especially enforced conformity–as a society built on the model of complete equality might find it difficult to function unless it actively sets out to maintain the equality it’s seeking to establish.

It is a harsh fact that large-scale human interaction is not naturally egalitarian; that is, there is little historical evidence to suggest that a society of people left to their own devices will not stratify into a multi-layered hierarchy, thereby instinctively creating the very social disparity that the egalitarian mindset aims to combat.  The most obvious response would be to insist that egalitarianism simply means that the basic functions of society (i.e. the laws) have to be applied equally, and that as long as these measures are upheld, the system can self-correct to its default setting.  Yet this outlook is only convincing as long as one is inclined to have faith in the sincerity of the application of the law, in terms of holding everyone in society to an equal standard.  It also brings us to the issue of who is to be the arbiter entrusted with upholding the principles of an egalitarian system.  The judicial system?  The policymakers?  The public at large?  And does this then bestow on these individuals a degree of authority (i.e. power and privilege) that creates a disparity which in itself violates the very premise of a truly egalitarian model?

“In a democratic society, the authority rests with the people in the society to ultimately decide on who is to be the arbiter(s) to ensure that equality is being upheld in said society on the people’s behalf.”

But maintaining social equality by means of representative democracy brings us to the issue of having those in the minority opinion be subject to the whims of the majority.  And is this not also in itself a violation of what an egalitarian society ought to be striving for?

When we play out the potential pitfalls of every one of these concerns, what we end up with is the realization that, in practice, egalitarianism seems to function only when applied on a selective basis.  Complete equality, across the board, on all matters, has the serious consequence of ending either in social gridlock (rendering all manner of progress on any issue impossible) or in coercion (negating the benignity ideally associated with egalitarianism).

I’ve heard it said that in this sort of discussion it is important to differentiate between equality of outcome and equality of opportunity; that the latter is the truly worthwhile goal an egalitarian ought to be striving for in order to ensure a just and fair society.  I’m not sure this does much to address the primary issue at hand.

If there exists no disparity in opportunity, but we reserve room for an inequity in outcome, then will it not be the case that you still end up with a select number of individuals occupying a higher role in the social hierarchy than others?  And once the foundation is laid for such a development, is it not just as likely that those who end up occupying a higher role could put in place measures that serve their interests alone, or even come at the expense of those who fall into lower social roles?  Meaning that even though in this model all opportunity was equally available at first, the caveat that different people can have different outcomes–fall into more favorable and less favorable social conditions–fails to safeguard against the dilemma of having those who manage to rise high enough manipulate matters in society to their advantage, thereby stifling the outcome and opportunity potentials of future generations.  If the rebuttal is that in a truly egalitarian society measures would be in place to prevent this, we fall back to the question of who exactly is to be the arbiter entrusted with upholding the principles of an egalitarian system.  This brings us full circle to the line of inquiry mentioned in the preceding paragraphs; hence, drawing a distinction between equality of outcome and equality of opportunity does little to nothing to resolve the issues being discussed here.

All these objections are ones that, even as someone who considers himself an egalitarian, I can sympathize with.  Mainly because I don’t have any way to refute them without appealing to a personal intuition that these concerns are not endemic to an egalitarian model, and that it’s ultimately feasible to avoid such pitfalls when we leave room within the social system for it to be amended through debate and revision.  However, I also have to admit that I’m not always entirely sure of this myself.

This problem brings me directly to a confrontation over what should be valued more in society:  the complete equality of all people, or the value of the autonomous individual?  And over whether creating such a dichotomy is even necessary, or whether a balance can be struck that satisfies the interests of both.

The threat that removing all disparity between individuals might stifle people’s distinct individuality is something I believe is worth worrying over.  What good is a world where equality is triumphant but reigns on the merits of absolute sameness?  Not to mention, what will happen to the human ingenuity all of us in modern life depend on for our survival as a society?  The prospect of attaining personal achievement rests on one’s ability to stand out from the crowd and create something unique and distinct from that which is common.  The possibility that this drive will be held suspect in a completely egalitarian world, in the name of preemptively combating all forms of perceived inequality, is not something I can dismiss simply because it’s inconvenient to my worldview, no matter how unpleasant it might be to my core values to acknowledge.  Essentially, I believe it would be unwise to simply brush off the point that a world safeguarded to the point where no one falls is also potentially a world where no one rises.

When I started writing this post I had a standard set of points I knew I would raise to fulfill my interest in demonstrating a genuine attempt at unrestrained self-scrutiny.  I know that some readers might wonder why I’m not doing more to combat the objections I’ve raised here against my own egalitarian perspective, and the simple truth is that I understand my desire for egalitarianism to be practical and feasible rests almost entirely on the fact that I want both of those things to be true, as it would validate my presupposed worldview by fiat.  Nonetheless, I do understand that reality does not depend on my personal whims and wishes.  In all honesty, having actually reasoned out the premises here, I’m left wondering why, if for the sake of practicality we will undoubtedly always be forced to be to some extent selective in our approach to egalitarianism, we (myself included) even bother calling it egalitarianism at all.  Perhaps there is a term out there that more honestly fits what most of us mean when we strive to uphold what we refer to as egalitarian principles.  That, however, is a wholly separate discussion from my intentions here.  My goal was to hold my own views and values to the fire and see where they end up.  In that goal, I think I’ve succeeded…what results from it will take a bit more thinking on my part to figure out.


The Power of Names

Shakespeare invited us to consider, “What’s in a name?  That which we call a rose, by any other word would smell as sweet.”  The Bard’s musings on the subject notwithstanding, the truth is that names do hold a fair bit of power in forging our perception of other people, as well as ourselves.

If you are a foreign-born individual who goes about your adopted land of residence with a first name that points clearly to your nation of origin, you immediately know how vital a role a name can play when trying to integrate yourself with the local population (so much so that many foreigners will give in and change their foreign-sounding names to something more palatable to the culture they aim to assimilate into).  Although few of us will readily admit to it, we are all susceptible to making generalizations about people we come across in our daily life based on superficial features.  Names are definitely one such feature.  That is not to say that every assumption made about someone based on such features is either wrong or malicious.  It’s not wrong (factually or morally) to deduce that a person with an obviously Asian-sounding name is in some way culturally connected to Asia.  Same with a man named Hans Gunterkind most likely being of some kind of Germanic heritage, or Jean-Pierre Neauvoix being French.  So on and so forth.

(It goes without saying that the contemptible part of forging a preconception about someone isn’t the initial preconception itself; it’s what you do with it from there forward.  If recognizing that you’re about to speak with Chen Huiyin leads you to assume she is probably Asian before seeing her, no sensible person will raise an eyebrow at that assumption.  If, however, you take your preconception further and assume she is in some way personally inferior to someone who isn’t Asian, that’s where we run into issues of bigotry that will rightly be condemned by much of the public at large.)

Issues of what might be called ethnic names aside (are not all names relatively ethnic to different cultures, one might be inclined to ask here?), there are naming norms within American culture that occasionally shape our interactions with each other.  When you’re in the middle of everyday America and come across the name Kevin, it is unavoidable that you will imagine a man.  Unless you just happen to know a woman named Kevin, but even then you are likely to ascribe it to a rare anomaly.  What if, over the course of the next three decades, a swarm of new parents decide that Kevin makes for a great name for their baby girls, and the social paradigm shifts so that suddenly you run into more female Kevins than male ones?  Would you easily adjust to the new cultural trend, or still stick to the norm you had been accustomed to of Kevin being a predominantly male name?  If this sounds like an unlikely scenario, think about how the name Ashley in America went from mostly male at the start of the 20th Century to predominantly female by the start of the 21st.

Not to belabor a point past my humble reader’s generous patience, but it would feel disingenuous not to touch on my personal experience here.  Growing up in continental Europe as a boy named Sascha/Sasha, the social assumption was that my parents must be bland, unimaginative, and possibly even a tad conservative in their leanings, precisely because boys named Sascha/Sasha are so common to come across there.  At the time, it left me with the impression of being just another average lad going about my business, similar to how I imagine an American youth named Michael or David would feel on the matter in contemporary American culture.  When I moved to the U.S. in my early teens, I came to find out that my name was somewhat of a peculiarity to my peers; one that definitely demanded further explanation on my part.  Suddenly, I was no longer merely a random guy with an average-to-boring name; I was a random guy whose androgynous-to-feminine name invited further conversation (occasionally schoolyard taunts, too, but I’m pretty good at deflecting unkind commentary and rolling with the punches, so I bear no grudges over it).

I would argue that your name is the most basic qualifier of your identity, and people’s reactions to it form a great deal of your learned behavior when interacting with others.  I can honestly say that the change in how people reacted to my name on moving to the U.S.–as opposed to the reaction it received back in Europe–did affect how I carry myself and interact with others to some non-trivial extent.  At least in that I know, when I introduce myself to others, I can be sure of two things:  1. I will be pegged as foreign regardless of my citizenship status; 2. I may be asked an awkward follow-up question regarding my name (to which, when I’m feeling lazy, my typical response will be either “My parents were really hoping for a girl, and were surprised when I popped out, dick-swinging and all,” or “I wanted to be able to better relate to women, but Nancy Sunflowerseed sounded too butch, so Sascha had to do”).

Believe it or not, the purpose of this post was not to regale anyone with anecdotes about naming cultures, as a clever ruse to sneak in a dick-swinging joke.  It’s to touch on a greater point about forging better writing habits and being mindful of one’s intended audience’s social palate.  Sooner or later, just about all writers find themselves fretting over picking out the perfect name to convey their characters’ personalities and backgrounds effortlessly to the reader.  And there are definitely right and wrong names one can decide on, for the roundabout reasons stated above.

If you’re writing a story about a street-wise, inner-city black kid, born and bred in the Bronx, who is named Hans Jorgenson Gunterkind, well, you had better be ready to explain how the hell that came to be.  Same if you’re writing a story about a 15th Century samurai named Steven.  While clever names can add exotic intrigue to characters, and piece together unspoken–unwritten?–context about their personal interactions with their environments, they can also needlessly distract the reader if they aren’t really meant to be a focal point of the narrative.

It’s perfectly fine to be bold and go for something unconventional when you’re crafting your written world, but don’t bend over backwards to convey uniqueness unnecessarily, to the point that it hinders the reader’s ability to become immersed in the narrative.  A story that has five characters named Mike to show the absurd commonality of the name can be witty and fun, or it can end up confusing and frustrating to the reader.  Take a moment to consider how the greater world you have created interacts with this dynamic, and whether it helps or hurts the story you’re setting out to tell.  Reading practicality should not be dispensed with for the sake of creativity; the two should operate together to form a coherent story that can be enjoyably read.

You can’t please everyone, and someone will hate your work no matter what or how you write.  Which is why all my writing advice starts with being honest with every story’s first reader: its author.  And if, as you put pen to paper (or, more realistically, fingers to keyboard), what seemed like a great name in the first outline becomes harder to work with as the story progresses, rather than forcing the narrative to conform, there is no shame in revising the basics–character names included.

Suck on that, Shakespeare, is what I’m really trying to say here.

The Cynic’s Political Dictionary

  • Centrist: n. the act of claiming not to care about identity politics in order to feed one’s own already narcissistic self-value.
  • Communism: adj. crippled by Progress (see Progress).
  • Conservative: n. a desire to recapture an imaginary Golden Age, and to cease caring.
  • Corporation: n. the benchmark of personhood for Conservatives; n. the Great Satan of Liberals.
  • Economics: n. the act of attempting to predict the future through a broken crystal ball.
  • Elections: n. the greatest theater production money can buy.
  • Family Values: n. absolute control of the person (see Person), and her/his genitalia.
  • Fascism: n. the act of feigning fear.
  • Free-market: n. the omniscient, omnibenevolent, omnipotent God of Libertarianism (see Libertarianism).
  • Independent Voter: n. a disgruntled Conservative/Liberal; n. a committed Moderate (see Moderate).
  • Labo(u)r: n. an archaic animal of antiquity that invokes nostalgia in Liberals (see Liberal), and disdain in Conservatives (see Conservative).
  • Liberal: n. a state of perpetual inability to cease seeing faults everywhere in society.
  • Libertarianism: n. the completely rational belief that faceless, easily corruptible conglomerates are more honest and trustworthy than faceless, easily corruptible governments.
  • Middle-class: n. a mythical being with no clear definition; n. a rhetorical token point.
  • Moderate: n. white bread.
  • Person: n. the state of being valued by your monetary and/or societal contribution; n. a corporation (see Corporation).
  • Politics: n. the art of self-interest.
  • Progress: n. the infantilization of humanity; n. hope for change with no plan to act.
  • Religion: n. a source of false humility for the socially powerful, and a source of false power for the socially humiliated.
  • Socialism: n. the elder brother of Communism (see Communism); adj. beyond redemption.
  • The People: n. a device that creates the impression of human compassion.
  • Voting: n. a dramatic tragedy.

Stranger Danger, Knocking at the Door of Society

In Austin there has been a series of bomb explosions this month from an as-yet unidentified perpetrator* (see update below).  Of course it goes without saying that all of us here are hoping that the person/s responsible is/are apprehended sooner rather than later.  Living in the city, what I’ve seen is that life is more or less carrying on as usual in the public sphere.  This is to be expected, as people by and large still have duties and obligations to concern themselves with that force them to carry on regardless of the danger that may surround them (bills still have to be paid, after all, and kids still have to get to school).  That is to say, while I know many individuals are certainly taking any and every precaution they can to be safe at a time like this, the city’s social life remains largely undisturbed.

This observation caused a coworker of mine to remark how surprised she was that everyone (referring to those of us who reside within Austin) is responding far more nonchalantly to these bombing incidents than one would expect of people in similar situations.  Although I can somewhat see what she meant by the comment, I feel it also brings up the further question of how exactly one is expected to act while this kind of situation is going on.  How do you as a person properly respond to potential danger that is far enough away to be an abstraction to you subjectively, even though you rationally know it’s objectively close enough (mere miles if you’re an Austinite) that it ought to keep you on high alert?  In this regard, trying to gauge one’s safety risk is comparable to standing in fog–those outside can see you’re in it, but you (precisely because you’re in it) still identify it as something some distance removed from you.

The southwest Houston neighborhoods I spent my teen years growing up in were not particularly safe places (it unfortunately goes without saying that most urban areas in big US cities aren’t).  During that time, I was held up and robbed–and intimately knew many others who were held up and robbed–by street gangs and desperate individuals enough times to have developed a sixth sense about which way to move, what sort of characters to avoid, and how to secure my home to ease my mind on the matter as much as I can (as a precautionary rule, the little chain lock on the door does little good).  My point is that, like most city folk, being surrounded by some degree of criminal activity is not something new to me.  Nevertheless, no matter how much personal familiarity one has with this nation’s crime rate, the news that a neighbor or coworker has been assaulted and/or robbed within walking distance of you (or that random packages are detonating in the city) will always stir a certain level of anxiety in a person’s mind.

I know people who use this to argue that the human “heart” is naturally inclined to do evil in times of desperation.  But I’m unconvinced by this line of reasoning.  Just as I doubt that man is naturally disposed to be good, I’m equally skeptical of suggestions of his innate wickedness.  Man is adaptive; his behavior situational.  Which is why I see no necessary contradiction in the fact that a person can be a callous murderer at one moment in time, and a genuinely loving parent in another.  In fact, I’m fairly certain that the three men who robbed me at gunpoint a few years ago probably spent that very evening exchanging pleasantries and joy with some loved one or another (quite possibly with my money; in which case, I at least hope it managed to bring someone happiness).

But this doesn’t do anything to relieve the reality that social communication is breaking down in the densely populated areas of the world.  And it leads me to ponder a few things.  Namely, what if in the future someone who sincerely requires my assistance knocks on my door for help?  Will I readily trust the person, or will I assume that it must be a clever ploy to get me to leave the safer confines of my home, concocted by individuals looking to prey on the average person’s sympathy towards a helpless voice?  I don’t know.  Ideally, I like to think I’m empathetic enough to answer the call for help.  Shamefully, I’m inclined to admit that there’s a chance I might not respond to a doorstep plea.  But it’s easy to philosophize about different scenarios when one is safely removed from the moment of action.  In the moment, a normally rational person can easily be overtaken by anxiety-induced irrationality.  I have even been told by many friends that their social anxiety has reached the point where they don’t feel comfortable having people approach them as they are getting into their cars, because their minds instantly start to recall all the horror stories of victims assaulted (or worse) by opportunistic criminals.  (I personally have also always been of the opinion that there is no inquiry that cannot be made by a stranger just as well from several paces away from my car door as from right in front of it.)

For me, all of this raises the question of how exactly we’re supposed to create a more socially cohesive and cooperative society, when for the sake of our very survival we have little choice but to be vigilantly suspicious of the individuals we are stuck sharing society with.

*Update, 03/21/2018:  A person believed to be responsible for the bombings was identified by law enforcement authorities today.  He took his own life as authorities moved in to apprehend him.

On Arguing Economics

Just to get the main point across, allow me to start this post by simply stating that there exists no such thing as the economic model from which we can impartially derive any sort of self-evident conclusions, policies, or values.  By which I mean that there is no purity test to determine which economic model is somehow more objectively “valid” than another.

For example, take two modern economic models that stand on completely opposite sides of the spectrum:  Marxist communism and laissez-faire freemarket capitalism.  [I’m aware that different people have over the decades attempted to give varying definitions within both these models, thereby making an overarching analysis on my part impossible; hence, I will primarily be addressing elements that are agreed-upon components by almost all professional voices in the aforementioned fields.]  Putting aside what Marxism has come to mean to the layperson through the various revolutionary forces that carried its banner in the 20th Century, at the core of the economic model is the proposition that societal development is best understood as the process by which humans–as a collective–produce the necessities of life (often referred to as historical materialism among Marxist scholars).  While the nuances of the whole thing can get very convoluted from here on out, the basic framework Marx was working from, within this scope of historical materialism, is that human society is better served if the workers who physically produce the products necessary for the life of all of society retain economic control over said products.  From this he further postulated the emergence of a commune-like market of commerce, in which production is owned and distributed equally among all sectors of society (i.e. communism), as a historical inevitability towards which human development is progressively heading in the modern era.

The theoretical problem, of course, in the Marxist economic model is that the validity of historical materialism depends on the notion that we accept the validity of historical materialism; this is otherwise known as a tautology (or circular argument), and is fallacious by definition.  The practical part being ignored in this model is that the perception of human progress as developing towards one specific sociocultural norm or another is only evident in hindsight, and any economic/social course that ends up developing can in retrospect be rationalized in terms of its preceding events; this is true even for identical situations that yield contrasting outcomes.  Not to mention, if we are to approach economics from a historical perspective (as Marxism claims), a decent case could be made that human nature (even in modern, industrial times) seems to be more inclined towards creating hierarchical social structures than collective communes.

Before any freemarket advocates who might be reading this start handing out congratulatory “Likes” to my dismantling of Marxism (I’m looking your way, libertarians and self-styled classical liberals), it needs to be said that the reasoning underlying laissez-faire freemarket capitalism fares no better than its socialist antipode.  The premise that economic sectors perform at their best when market forces are allowed to compete unmolested by non-market factors (like the government) rests on the idea that little to no regulation will in itself create an environment in which all the various forces that make up the marketplace have to compete against one another; theoretically leaving the final word on which products/services are to succeed in the freemarket to the consumers (i.e. all of us).  In theory, this sounds great; in practice, just as with Marxist economics, historical data casts a few doubts on the extent to which laissez-faire capitalism holds up.

First, the proposition that the freemarket is something akin to a self-sustaining, self-correcting organism ignores the fact that the freemarket is–above all else–entirely man-made.  The freemarket, as an economic plane on which human beings exchange commerce, is no more a naturally occurring phenomenon than a locomotive is; we purposefully invented it to serve our economic needs.  Thus, arguing for a “hands-off” approach to an entity whose very existence is owed to primarily “hands-on” interests can be seen as more than a bit short-sighted.

More than that, when we look at the era in which laissez-faire freemarket capitalism thrived unmitigated in the U.S.–the late 19th and early 20th Centuries–instead of seeing a marketplace of robust competition driven by the needs of the consumer, we see a gradual concentration of market power in the hands of a handful of conglomerates.  The reason being that, economically speaking, the initial surge of competition experienced in a newly emerging market, left to its own devices, can in time allow a minority of businesses to surpass their competition to the point that they are virtually the only option left on the market for the consumer.  In this historical scenario, the presence of a laissez-faire freemarket did not create a healthy competitive environment, nor did it have any means to correct the centralization of commercial power in the hands of the few over the many.  (In fact, in this case the government actually did have to step in and implement anti-monopoly laws to try to introduce competition back into the market.)  Therefore, the unanswered (or unanswerable) question concerning laissez-faire capitalism is this: given the proposition that faceless, easily corrupted government agencies cannot be trusted to interfere with the business operations of the freemarket, why should faceless, easily corruptible conglomerates be seen as more trustworthy in this regard?

Although this much should be obvious by now, the point of this post isn’t to convince anyone to accept the superiority of one economic theory over another.  Even with the two (admittedly more extreme) examples cited above, I’m sure that given more time and interest we could all go back and forth listing the sincere benefits and advantages of both Marxism and laissez-faire capitalism.  Acknowledging this, my greater point about economics remains the same: while the historical study of economics can produce viable, scientifically tangible insights about some aspects of human societies (primarily developments in the commercial and fiscal sectors), proposed economic theories themselves lack this level of scientific rigor.  All economic theories (be they Marxism, laissez-faire capitalism, or anything in between) by necessity begin with an assumed conclusion (“human society is naturally moving towards a collective communal state,” “the freemarket operates best when left unregulated,” etc.) and then go on to selectively interpret all socioeconomic developments through the lens of whatever situation is most conducive to promoting the favored economic conditions already accepted by the theory in question.

From this it certainly does not logically follow that all economic theories are equal in their outcomes (whether for good or bad), or that any one economic theory couldn’t be claimed as preferable for a specific society (I think most reading this can agree that feudalism would generally be a horrible model for modern society).  What it does mean is that there is no such thing as an all-encompassing, omniscient economic system deduced from unfiltered objective reality, as opposed to individual, subjective human preferences.  In light of that, I think perhaps talk of economics between opposing viewpoints is due a bit more humility and reservation about one’s own pet theories than what is currently on display in public discourse.

Just some food for thought, savor it as you wish.

The Golden Age of Conspiracy

I have an unhealthy obsession with conspiracy theories.  When I say this, please don’t misunderstand me.  I don’t actually buy into the stated details of conspiracy theories; I’m just fascinated by how much devotion and faith people put into them; how a person will take several demonstrable facts and then loosely connect them into something which–at first glance–sounds like a plausible narrative that will appeal to a wide spectrum of people.  Despite what some might think, I am wholly unconvinced that either intelligence or education plays a significant role in deterring people from believing in conspiracy theories, because such theories are not really about filling the gaps of our mind’s ignorance and shortcomings.  They are more about satisfying a base desire to witness something greater, higher, that is closed off to the majority of the “deluded” masses.  This is what makes conspiracy theories appealing to their proponents.

I was still young when Lady Diana died in 1997, but I was old enough to take note of the reactions people around me had to the news.  It took about four minutes after hearing the news for several members of my family to staunchly announce that they didn’t accept the “mainstream” story.  Why didn’t they accept it?  What tangible evidence did they have to make them doubt the news report?  Essentially none, but it didn’t matter.  Their suspicion was that the simple answer must be a distraction to cover up the real story.  Or as my mother put it, “I cannot believe that there isn’t more to this whole thing.”  This sentence, I believe, captures the mindset most of us have, most of the time, when we are confronted with some awe-inspiring piece of news.  The official report of the incident was that Diana and her boyfriend died after crashing in a road tunnel in Paris, due to the driver losing control of the vehicle.  But this just wasn’t big enough for most people, who to this day maintain there has to be more to it.  And no investigation will be enough to convince any of them otherwise, because any investigator who comes up with a different conclusion will simply be evidence of the greater conspiracy.  Most conspiracy theories follow a similar line of reasoning.

We have an innate aversion to simplicity.  Just repeating a story we hear isn’t enough; we need to add more complex details onto it to make it more digestible for wider consumption; refine it and move the narrative forward with facts we think ought to be included with the official details.  It can’t be that politicians are simply corrupt and self-serving; they must also be secretly operating under the direction of an unknown shadow government, which is menacingly pulling the strings behind the curtain [and (occasionally) this shadow government has to be made up of shape-shifting, inter-dimensional lizards, whose bloodline traces back to ancient Babylon].  It’s not enough to say that life on earth is simply adaptive to its environment; there has to be more to it; some kind of grand purpose and intent operating on a level too complex, too powerful for our meager minds to fathom.  This line of thinking is especially strong when we don’t have enough facts to draw any kind of clear conclusion; in such a case we’ll reason that even a conspiracy theory is better than no theory.

Simple reasons and answers are often not enough to do the job for us, because simplicity can never meet the expectations of our innately suspicious imaginations.  What does satisfy our suspicion is a narrative that goes counter to the “mainstream”; one that only those of us of the most elite intellect can grasp.  “The Illuminati may be fooling you, but it’ll never fool me,” is the popular tagline.  Part of the appeal of conspiracy theories is the layer of excitement they bring to everyday facts.  It is stimulating beyond belief to lose oneself in all the various plots and details of a hidden world, even if its veracity is only verified by a very questionable set of complex circumstances; this just makes it more exciting.  The other part of the appeal is the strange level of remote plausibility it brings to the table.  For instance, there is no denying that people have conspired in the past (and still do today), often for ominous reasons (an example being the long, documented history of unethical human experimentation in the United States).  And this air of remote plausibility is more than enough to keep people’s suspicions on high alert, except when it comes to scrutinizing the various details being used to support the particular conspiracy theory they have chosen to embrace.

We know that the human mind is in many ways constrained in its ability to rationalize the world; thus we are constantly seeking the higher, the greater, the unimaginable as our answer of choice.  The strange thing is that as the answer we are seeking becomes more nuanced and complex, the simpler it will begin to seem to us, and we will insist that our highly elaborate–immensely complicated and circumstantial–answer is really the most simple and obvious of them all.  Because by that point we have already accepted the narrative of the conspiracy, where the grand conclusion is being used to fill in the details, instead of the observable details being used to arrive at the most probable conclusion (be it simple or complex).

Precisely because there appears to be something innate about the way the human mind is drawn to conspiracies, the ease with which ideas are exchanged in our lifetime makes this a ripe golden age for conspiracy theories and conspiracy theorists to thrive.  The reason being that this greater medium of communication, and the great vastness of information available to us in which we can indulge our niche interests, also makes it possible to feel as though we are exploring new pieces of data every day without ever really having to step outside the conclusions of the particular niche interest we are drawn to.  Given enough time, we’ll cease wanting to hear from an opposing view contradicting the knowledge we have invested so much time in attaining.  The deeper secrets we have learned will become a part of the way we view and interact with the world.  In short, the conspiracy will become a part of your identity, a personal matter for you to defend, and all competing and alternative data will work only to confirm what you have already accepted to be true, reducing reality to a matter of popular vs fringe consensus, the veracity of which is to be decided based on how titillating it is to one’s cynically credulous senses.

Agony by Eye Contact

I have always been told that I have an eye contact problem.  When most people hear this, they assume I mean that I have trouble maintaining eye contact.  However, my apparent problem is the exact opposite; I’m told that I make too much eye contact with people while speaking with them.

It is a complaint that has followed me throughout my childhood (and subsequent adult years), from people alleging that I am not showing them proper respect because I insist on “staring” at them as we talk.  Yet, despite numerous attempts to remedy this supposed faux pas of mine, I have never really been able to figure out what the socially acceptable amount of eye contact is supposed to be.  Hence, what results is me trying to simultaneously give someone my complete attention, while worrying that I have given her/him too much attention and made her/him feel uncomfortable because of it.

The reason I have always been inclined to make direct eye contact with whomever I happen to be speaking to at the moment is my desire to hear and understand every word being spoken to me by said individual.  I make the assumption that if you find it worthwhile to approach me in conversation about a topic, you want me to actually listen to what you have to say, and not nod my head and shift my eyes aimlessly, looking for a distraction to avoid looking at your eyes.

The strangest part is that when I’m confronted about my intense eye contact habit, and told that I’m being rude to the person whose words I’m trying to hear, my sincere request for some constructive feedback on the matter is always met with scorn.  “You should already know why it’s obviously wrong,” is the answer I usually get (which is an asinine answer, since I obviously don’t know).  The second most common answer is that it makes the person I’m speaking to uncomfortable, which, though reasonable, still doesn’t validate the notion that my behavior is wrong.

Breaking the routine of a person with obsessive-compulsive disorder will definitely make the person afflicted with OCD uncomfortable, but doing so is a necessary step in getting the person to break away from her/his compulsion (assuming the person wants to break from it).  In that same regard, how can I be sure that it is not society’s aversion to eye contact that is the problem here?

I know from my experience teaching in a classroom that students who actually look at me as I’m lecturing tend to retain more information than those who never lift their heads from the paper in front of them.  This is because communication is not strictly verbal, so being told to listen with just my ears and never my eyes comes across as a strange demand to me, since I know that I will register more of what you’re saying if I look at you while we’re conversing.  Do you not want me to grasp and thoroughly contemplate everything you have to say?

And, yes, I’m aware that there are people who have different kinds of social anxieties and communicative disorders, who are physically and psychologically incapable of making eye contact with others.  But I have a hard time believing that the vast majority of people I happen to come across in casual conversation fall into this category.  Also, as someone who suffers from stage fright, I can totally understand the desire not to have people gawk at you incessantly while you’re giving a talk.  However, the issue I’m referring to here is limited strictly to one-on-one conversation, usually started by someone approaching me to discuss a topic s/he feels is important enough to speak to me about.  The idea that it is impolite to maintain eye contact with someone who has chosen to speak with me baffles me to no end, and honestly makes me wonder about the state of our self-worth as a people, when we are so easily unnerved and intimidated by anyone who dares to closely observe and pay attention to what we have to say.

Despite having said all this, I do constantly try to accommodate people’s desires and limit the amount of eye contact I give a person during conversation, but I really wish someone would give me guidelines on how much is too much, or not enough, since I obviously am not able to figure it out on my own.