Christopher Ryan’s Civilized to Death: The Price of Progress presents the hypothesis that humanity’s move from a forager/hunter-gatherer existence into agricultural societies (and from there into modernity) was the first step in our decline as a species. The further we progressed towards modern life, and away from our forager origins, the more we stepped away from our natural state, in which we were happier, healthier, and more in tune with our environment, and became the sad, sickly, fragile creatures we are today.
First things first: there are several parts of the book I found interesting, such as the distinction between life span and life expectancy, which Ryan uses to show that our prehistoric ancestors lived far fuller lives than many of us are inclined to believe. I’m also someone who’s not opposed to the notion that the sharp increase in anxiety, depression, substance abuse, and general listlessness seen in modern society is in no small part aggravated by (if not caused by) the luxuries we have adopted in our daily lives (social media being an obvious contributor to this malaise). However, all in all, I don’t think Ryan makes a convincing case for the underlying issues he raises, and there are several holes throughout his one-sided narrative.
Ryan relies heavily on the self-reported happiness of current-day forager societies, and uses that to infer that prehistoric man must have been equally satisfied with his existence. But as anthropologists are quick to point out, foragers today are not prehistoric, and shouldn’t be thought of as such; they live in the current world same as we do. The conditions of their environment are not necessarily equivalent to those of prehistoric man, and their demeanor and customs have undergone as long a development and adaptation as those of societies living in industrialized cities. Meaning that, short of a time machine, we can’t simply infer that the behavior of current-day foragers/hunter-gatherers (Ryan uses the two terms as synonyms) is a reflection of prehistoric humans. At best, we can try to piece together narratives that seem plausible in light of available evidence, but in the hands of someone like Ryan these ultimately become unfalsifiable just-so stories, as he has a proclivity to dismiss any views opposing his own as stemming from people indoctrinated into the neo-Hobbesian view of human nature, which demands that modern life be seen as a point of progress in human development. Thus, any pushback can be readily set aside as mere squabbling from the brainwashed.
Ryan wants to make the case that the selfishness, tribalism, and aggressive tendencies seen as innate parts of human nature are actually just things conditioned in us by modernity, and that such traits are entirely absent in forager communities, which he takes to mean that man’s natural disposition is one of harmony, humility, and peace with his environment and fellow man. However, in one part of the book he also states that forager communities will early on engage in customs like the banter and roasting of members who show greater talents, as a means of keeping their egos in check and instilling in them a sense of humility. Ryan never contends with (or seems to realize) the fact that if such behavioral conditioning is needed, it calls into question his hypothesis that humility is the natural disposition of even the foragers he wants to hold up as the model for us all to strive for. It simply implies that they partake in a level of social conditioning to ensure a harmonious society, same as we do when we teach children to share and empathize with those around them, whether it’s done around the campfire or in the daycare center.
Ryan also engages in conjecture when he appeals to the peaceful bonobos as evidence that man (as a fellow member of the great ape family) is also a pacifist species when allowed to remain close to his natural (forager) state of being. What Ryan fails to address is that we are as closely related to the common chimpanzee as we are to bonobos, and the former—despite never having developed the agriculture that supposedly set us down the treacherous path of modernity—is still famously tribal, aggressive, and warlike in its demeanor. And that’s not to mention other great apes, like gorillas, which are hierarchical and routinely commit infanticide, and orangutans, whose males often mate via forced copulation of females (aka rape). In light of all this, the peacefulness of the bonobos seems to be the exception in the great ape family rather than the rule.
A final point of neglect that stuck out is that, while Ryan was quick to draw a comparison between humans and our evolutionary cousin the bonobo to support his narrative, he never says a word about our evolutionary forager sibling, the Neanderthal, who was displaced (by one means or another) by our supposedly pacifist prehistoric ancestors. While I’m sure there are ways to make this caveat fit his narrative, the fact that Ryan doesn’t even bother to try gives the impression that he would rather we simply ignore it as a data point in history so he doesn’t have to think about it, and hopes we do the same.
As I said before, I’m actually quite open to the idea that modernity has come with drawbacks that society is failing to deal with (the mental health crisis of the last few decades being a prime example), and I’m even open to the notion that there are lessons we can learn from prehistoric humans about life, happiness, leisure, and purpose. But I don’t think Christopher Ryan’s Civilized to Death does much to add to that important topic of conversation, and in many ways stands in the way of it, as it’s a topic that appears to be too far out of the author’s wheelhouse to deal with.
You can tell we just got past an election year when buzzwords like “class warfare” are getting thrown around without reservation. I’m told class warfare is the hostile means by which we in the lower bracket of the economic hierarchy unfairly try to undermine the integrity and work ethic of the wealthier individuals in the country by redistributing their wealth down to ourselves. How we’re doing this I’m a little fuzzy on, since as the years have gone by the only redistribution of income I have seen is the consistent loss of my own to utilities, a rising rent, and various other life necessities. Whatever devious scheme we poor folks are supposed to be up to, I’m obviously doing it wrong. (Apparently the memo informing me of when, where, and how we are to storm the Bastille got lost with my shipment of freshly polished battle-axes. And, honestly, what fun is any kind of “war” without battle-axes, anyway?)
I’m told that my distrust of both faceless conglomerates and faceless bureaucrats is contributing to all the vile, unjustified antagonism from my economic ranks against those who can afford to buy my home, car, and soul, thrice over. For that I’m sincerely sorry, and in the future I will take into consideration that the ability of these entities to influence significant factors in my personal life is no excuse to ignore how possessing such power must be a burden on their fragile humanitarian hearts. To put their minds at ease, I hereby declare to these caring, faceless conglomerates and bureaucrats that anytime the stress of controlling my finances and civil freedoms becomes too much to bear, I will be more than willing to take some of the load off their hands. It’s a small gesture on my part, really, but I think ultimately it’s the thought that counts.
All joking aside, I’m getting the impression that a small segment of the population is getting somewhat paranoid that any day now their neighbors from outside the Country Club roster will come to storm their gated community’s ivory entrances, demanding some sort of economic overhaul, or whatever. Often this concern lingers on the fear that we uncultured brutes might turn to violent rebellion to soothe our misguided aspirations. To ease these fears, let me inform any affluent citizens who might be reading this that they have nothing to worry about, because the majority of crimes we poor people commit are now, and will always be, against other poor people. Why? Well, firstly, because low-income individuals live in closer proximity to petty criminals (for instance, I’m almost certain that the young man who attempted, and failed, to rob me a few years ago lived within a few blocks of me). Secondly, robbing low-income individuals of the meager possessions they have carries little to no risk of getting caught, because essentially nobody gives a damn about Angelo’s stolen George Foreman Grill (honestly, he should take it as a compliment that anyone would even bother stealing that piece of crap) the way they do when some shady hedge fund manager swindles Joe Millionaire out of a few zeros from his bottom line.
If there is class warfare in this country, rest assured it is an intra-class warfare. We poor people will turn on each other before we will ever think of undermining our more socially powerful counterparts. As for the rich, your immediate fears ought to lie with your equally affluent competitors who actually have the means to put a real dent in your earnings. Trust me, waitress Susan asking for affordable healthcare coverage for her children will not be the catalyst that erodes your trust fund.
And if I’m wrong, I’ll see you all at the Bastille!
There is a docu-series on Netflix about former NFL player Aaron Hernandez, who was arrested and charged with murder in 2013. In 2015, Hernandez was found guilty of first-degree murder and sentenced to life in prison without the possibility of parole. In 2017, he was found dead in his cell, having committed suicide. He was 27 years old at the time of his death.
The life of Aaron Hernandez is certainly interesting enough to look into, in and of itself, but the real story doesn’t end at the young man’s death, as he would be posthumously diagnosed with chronic traumatic encephalopathy (CTE), which is speculated to have contributed to the violent and irrational behavior that led to his homicidal crimes.
CTE is a neurodegenerative disease commonly found among individuals who sustain repeated (concussion-level) blows to the head. Hence, it is no surprise that the disease has been in the news for years as an explanation for the destructive behavioral problems exhibited by athletes who played contact sports known for their high frequency of head trauma.
There are often-cited studies that report higher-than-average rates of CTE diagnosis among such athletes compared to the general population. However, because CTE can only be diagnosed through an autopsy (meaning the person in question has to already be dead in order to confirm whether they had the disease), skeptics argue that the statistics used in these studies are bound to be inflated, since the dead athletes being tested for CTE most likely exhibited the behavioral issues indicative of the disease to begin with. An unbiased diagnosis rate would require a large and diverse sample of athletes who played concussion-prone contact sports, all tested posthumously for CTE, with the results then compared to the rate of CTE diagnoses in the rest of the population that didn’t partake in such sports (which would also require a large and diverse sample of test subjects to avoid skewing the data through selection bias).
Obviously, this is an issue that will not reach a satisfying conclusion any time soon on the science alone, if ever, for the very cumbersome reasons of testing for the disease outlined above. But how much data would even be sufficient to convince us that some percentage of these athletes are at risk of suffering unalterable brain damage before we are willing to weigh the ethical considerations on the subject? Moreover, what percentage is considered an acceptable sacrifice in this situation? 50%? 25%? What if it’s definitively proven that only 5-10% of athletes who engage in these sports are going to sustain brain damage that will lead them to possibly hurt others and/or themselves? Is that a number we’re willing to accept as just part of an athlete’s life and experience?
I wasn’t personally raised in a household that cared a whole lot about sports, but I do still understand how all of us can get very attached to our preferred pastime, and get quite protective of it. And it’s not just about enjoying a game; it’s about the thrill of the competition, and the camaraderie between likeminded fans coming together to cheer for their team (at times with nothing in common except maybe their mutual dislike of the opposing team). Sports to a lot of people aren’t just games, but a form of community, and arguably even a shared worldview. And to be told that something that brings you joy in life is inherently harmful to the very group of people you’re idolizing (i.e. the athletes) can be enough to put anyone on the defensive, as it’s all too easy to interpret such arguments as a personal indictment against one’s very character.
Although I didn’t watch much conventional sports growing up, my home TV was often set to the biweekly professional wrestling shows from the 90s to the mid 2000s. I watched pro wrestling from a young age (possibly too young), and was enamored with the characters, storylines, theatrics, and yes, the violence of it all. If I’m being honest, I did eventually grow bored of it year by year, as the storylines got repetitive and I became desensitized to the spectacle of watching people genuinely put their bodies through hell in scripted fights for my entertainment. But I continued to tune in despite my waning interest, because it was a point of shared interest with my family and friends that I did not want to let go of. And I didn’t, until mid-2007.
If you’re a wrestling fan, you probably already guessed what I’m about to reference. In June 2007, WWE wrestler Chris Benoit murdered his wife and son before committing suicide in his Atlanta home. It was an event that shook the pro wrestling community, and left many people bewildered as to what could have compelled a man whom so many fans admired as a decent guy to do something so heinous.
We may never know what exactly motivated Benoit to do the horrible things he did that day, but a leading theory of the underlying cause is CTE, as an autopsy revealed the wrestler’s brain to be severely damaged, resembling that of an Alzheimer’s patient, from years of repeated head trauma and concussions. The findings sparked a new debate among wrestling fans, who asked whether it was right to hold the man fully responsible for his actions, or whether his state of mind was such that he had no control over them. Meanwhile, a different sort of debate crept into my own mind: Am I partly responsible for this?
After all, I cheered every head blow, steel chair collision, punch, kick, and fall for years and years, right along with everybody else. It was done for my enjoyment, and I never once questioned the ethics of it. These are adults, after all. They know the risks they’re taking on. I neither created this sport, nor controlled how it’s managed and presented. What they choose to do is beyond me, and if I stopped watching, it would still exist, completely indifferent and independent of me. All of this was and is true, yet it still didn’t feel right anymore. I simply couldn’t watch another match without feeling uncomfortable about the possible damage I was passively encouraging through my viewership.
My family and friends still watched, and I never tried to argue them (or anybody else) out of it. I didn’t go into detail about why I stopped watching, choosing to simply say I was bored with it (which was true enough) and not participating in the conversation if the topic came up. Everyone readily accepted that it wasn’t my thing anymore, and things moved on without issue.
The feeling of discomfort never left, though. There are even residual traces of defensiveness still lurking, ready to stand up for my past viewing habits, so I’m not being flippant when I say I understand the reflexive agitation football fans, soccer fans, boxing fans, etc. etc. etc., are feeling nowadays from the scrutiny aimed at their favorite sports, and the implied judgment accompanying screeds about the physical, measurable harm done for their entertainment.
Just as I had no intention of talking anybody out of watching pro wrestling 14 years ago, I have no intention of arguing for sports fans of any sort to give up their preferred pastime. I don’t believe attempting such a thing to even be possible, honestly. And I also don’t believe that a legal ban on specific sports is the productive way to go about mitigating the perceived harm being committed here, either. The only thing I ask of anyone is to consider what the value of your entertainment experience is, and, if its cost happens to be laced with bodily trauma, and pain, and agony, and tragedy for the athletes who make said entertainment possible, whether it’s a cost that’s worth paying.
I have an unhealthy obsession with conspiracy theories. Now, when I say this, please don’t misunderstand me. I don’t actually buy into the stated details of conspiracy theories; I’m just fascinated by how much devotion and faith people put into them. How a person will take several halfway demonstrable, halfway ludicrous details, and then loosely connect them into something which at first glance sounds like a plausible narrative, but on close inspection falls apart under the most basic level of scrutiny.
Despite what some might think, I am wholly unconvinced that either intelligence or education plays a significant role in deterring people from believing in conspiracy theories, because such theories are not really about filling the gaps of our mind’s ignorance and shortcomings. They’re about satisfying a base desire to witness something greater, higher, that is closed off to the majority of the “deluded” masses. This is what makes conspiracy theories appealing to their proponents.
I was still young when Lady Diana died in 1997, but I was old enough to take note of the reactions people around me had to the news. It took about four minutes after hearing the news for several members of my family to staunchly announce that they didn’t accept the “mainstream” story. Why didn’t they accept it? What tangible evidence did they have to make them doubt the news report? Essentially none, but it didn’t matter. Their suspicion was that the simple answer must be a distraction to cover up the real story. Or, as one person put it, “I cannot believe that there isn’t more to this whole thing.” This sentence, I believe, captures the mindset most of us have, most of the time, when we are confronted with some striking piece of news.
Of course, the official report of the incident was that Diana and her boyfriend died after crashing in a road tunnel in Paris, due to the driver losing control of the vehicle. But this just wasn’t grand enough for some people, who to this day maintain that there has to be more to it. And no investigation will be enough to convince any of them otherwise, because any investigator who comes up with a different conclusion will simply be taken as evidence of the greater conspiracy. Most conspiracy theories follow a similar line of reasoning, regardless of the facts or details presented to negate their favored narrative.
We have an innate aversion to simplicity. Just repeating a story we hear isn’t enough; we need to add more complex details onto it to make it more digestible for wider consumption; to refine it and move the narrative forward with facts we think ought to be included with the official details.
It can’t be that politicians are simply corrupt and self-serving; they must also be secretly operating under the direction of an unknown shadow government, which is menacingly pulling the strings behind the curtain. And (occasionally) this shadow government has to be made up of shape-shifting, inter-dimensional lizards whose bloodline traces back to ancient Babylon; or a cabal of cannibalistic pedophiles using the blood of their child victims to maintain their youth and power.
It’s not enough to say that life on earth is simply adaptive to its environment; there has to be more to it: some kind of grand purpose and intent operating on a level too complex, too powerful for our meager minds to fathom. This line of thinking is even stronger when we don’t have enough facts to draw any kind of clear conclusion; in such a case we’ll reason that even a conspiracy theory is better than no theory.
Simple reasons and answers are often not enough to do the job for us, because simplicity can never meet the expectations of our innately suspicious imaginations. What does satisfy our suspicion is a narrative that runs counter to the mainstream, one that only those of us of the most elite intellect can grasp: “The Illuminati may be fooling you, but it’ll never fool me.”
Part of the appeal of conspiracy theories is the layer of excitement they bring to everyday facts. It is stimulating beyond belief to lose oneself in all the various plots and details of a hidden world, even if its veracity is only verified by a very questionable set of complex circumstances; this just makes it more exciting. The other part of the appeal is the strange level of remote plausibility they bring to the table. For instance, there is no denying that people have conspired in the past (and still do today), often for ominous reasons (an example being the long, documented history of unethical human experimentation in the United States). And this air of remote plausibility is more than enough to keep people’s suspicions on high alert, except when it comes to scrutinizing the various details being used to support the particular conspiracy theory they have chosen to embrace.
We know that the human mind is in many ways constrained in its ability to rationalize the world; thus we are constantly seeking the higher, the greater, the unimaginable as our answer of choice. The strange thing is that as the answer we are seeking becomes more nuanced and complex, the simpler it will begin to seem to us, and we will insist that our highly elaborate, immensely complicated and circumstantial answer is really the most simple and obvious of them all. Because by that point we have already accepted the narrative of the conspiracy, where the grand conclusion is being used to fill in the details, instead of the observable details being used to arrive at the most probable conclusion (be it simple or complex).
A lot of what passes for Nietzsche’s image in popular thought is a caricature constructed by the Nazi propaganda machine in the 1930s (largely with the help of the philosopher’s own nationalistic, anti-Semitic sister, Elisabeth). Of course, if blame is to be assigned, then it is only fair to point out that much of the misinterpretation surrounding Nietzsche stems from the man’s own insistence on expressing his views in rather quick, often intentionally obscure musings and aphorisms, leaving his ideas wide open to be bastardized by opportunistic ideologues.
The reality is that even though it takes little effort to sanction an elitist system through Nietzsche’s philosophy, the actual details that accompany the man’s anti-egalitarian values—namely, anti-politics, anti-nationalism [especially anti-German], anti-group/herd mentality—are by definition incompatible with the belligerent, conformist, nationalistic fascism inherent to the Third Reich’s state ideology. Nietzsche’s views on the notion of nationalities and personal identities (and the oftentimes conflicted dynamics between the two) reveal a much more complex and nuanced perspective than the picture that has often been (and still is) presented of him as the patron saint of Nazism.
In Part Eight of Beyond Good and Evil (1886), titled “Peoples and Fatherlands”, Nietzsche outlines his analysis of European and Western development, and critiques the modern move towards democratic institutions as a step towards the cultivation of a true tyranny. Nietzsche comments that the tribal affiliations that once dominated Europe are eroding away in favor of a more borderless sentiment amongst hitherto disconnected peoples:
The Europeans are becoming more similar to each other … an essentially supra-national and nomadic type of man is gradually coming up, a type that possesses, physiologically speaking, a maximum art and power of adaptation as its typical distinction.[1]
For Nietzsche, this development is a direct result of the advent of modernity and modern ideas, which have made a person’s allegiance to a trifling tribe or nation unsatisfactory in light of modern man’s greater awareness of the world. Thus, a grander identity is needed, and a newer, more encompassing, international personal ideology is required to escape the limitations of the narrow worldview of one’s regional clan. Moreover, as identities and ideologies extend beyond the old local boundaries, a person’s interests will also evolve from the tribal to the global. Politically, one possible result of all this will be the development of a pluralistic society, out of which democracy will ascend as a means of appeasing the diverging—and converging—interests arising amongst the new, modern populace. It is within this context, Nietzsche argues, that democracy is born.
Nietzsche understands how this rise of democracy is looked upon as great progress by contemporary society, but the philosopher himself is wary of the implications such a system holds for humanity, stating that “this process will probably lead to results which would seem to be least expected by those who naively promote and praise it, the apostles of ‘modern ideas.’”[2] Nietzsche is distrustful of populist inclinations, because they unduly license the degenerate, weaker persons of society to hold back the progress of the more innovative value-creators, who will be forced to reside amongst the lowly plebeian masses. This sentiment is directly tied to Nietzsche’s thesis on the dichotomy of master-slave moralities, the relevant part of which can be summarized as follows:
Our egalitarian sentiment, according to Nietzsche, is a result of the poison we have all blindly swallowed. Our demand for universal moderation, our valuing of humility, our aversion to boastfulness as being too impolite in the presence of weaker, stupider individuals, and our desire to soften the feeling of inadequacy that comes from an opponent’s failures, are all manifestations of the original slave revolt in morality, promulgated by those who seek to vindicate the virtue of their inferiority by means of social cohesion—to rationalize away personal failure in favor of mass victimization.
The democratization of society is, to Nietzsche, a move towards the promotion of mediocrity. It will condition us to be content with the will of others as reasonably equivalent to our own, instead of asserting our own interest in opposition to the whims of the masses. In short, our striving to achieve a more egalitarian mindset will leave us too eager to settle for compromises with positions we fundamentally disagree with, rendering us potentially incapable of identifying and combating the ascension of any tyrannical entity that might see fit to stealthily encroach its power over our person:
The very same new conditions that will on the average lead to the leveling and mediocritization of man—to a useful, industrious, handy, multi-purpose herd animal—are likely in the highest degree to give birth to the exceptional human beings of the most dangerous and attractive quality.[3]
Nietzsche proposes that in a society whose primary aim is to create unanimous equality, the ultimate result will be an environment of obstinate complacency (the greatest form of oppression that can be leveled against a thinking person). All this will in turn lead to the sweeping infantilization of the individual, making her/him dependent on the body of the system as a whole for survival, rather than on her/his own strength and merit; a trend that will lead to a population “who will be poor in will, extremely employable, and as much in need of a master and commander as of their daily bread.”[4]
However, the degeneration will not be universal amongst all individuals. Nietzsche explains that “while the democratization of Europe leads to the production of a type that is prepared for slavery in the subtlest sense, in single, exceptional cases the strong human being will have to turn out stronger and richer than perhaps ever before.”[5] According to Nietzsche, in nature there exist those who can only dominate by virtue of their own values, and those who can only be dominated as a result of their inability to create values (hence, they must leech off the values of others). Both groups act as they do by the presence of their will to power, that is to say, by the very nature of their existence. As long as they exist, they cannot choose to act differently than the manner in which their nature—i.e. their will to power—dictates.
The problem Nietzsche sees with modernity is that our egalitarian-minded moral system has turned all of this upside-down, allowing the weaker plebeian caste (who cannot create any values of their own) to dominate the environment, in which the stronger noble caste (the natural value-creators) is conditioned to stoop to the level of the very masses it should be dominating. This creates a dilemma for those few contemporary men born possessing the noble character trait, whose instinct (their will to power) tells them to reject the moral values of their surroundings and create their own, but whose conscience (indoctrinated by the slave mentality of the lowly masses controlling the moral discourse) tells them that subverting their own will for the benefit of the herd is the highest virtue of the good modern man. Thus, when individuals do inevitably rise above the masses (because, in Nietzsche’s view, the masses cannot help but unwittingly condition themselves to be dominated by some sort of master), the resulting value-creators who ascend to power will be as much a perversion of the noble character as the degenerate culture that produced them; what will ensue is absolute tyranny:
I meant to say: the democratization of Europe is at the same time an involuntary arrangement for the cultivation of tyrants—taking that word in every sense, including the most spiritual.[6]
Reading these dire statements through the privileged viewpoint of the 21st century, an observer would be justified in marveling at the prophetic nature of the philosopher’s words in predicting the rise of the totalitarian systems that would follow a few decades after his death.
Fascism in both Italy and Germany emerged out of relatively democratic phases in the two nations’ histories. Likewise, the 1917 October Revolution in Russia that brought the Bolshevik faction to power in the unstable country was enabled by the indecisiveness of the democratically-minded Provisional Government that arose from the February Revolution earlier that year. In all of these examples the presence of a democratic political institution did not hinder the advent of repressive totalitarian regimes. Moreover (Nietzsche might argue), the presence of said democracies was instrumental in opening the door to these malignant forces, by having no mechanism by which to eject them from the political process besides the whims of a broken, infantilized population (whom Nietzsche describes as being “prepared for slavery in the subtlest sense”).
However, if one wants to be critical about the possibly prophetic nature of Nietzsche’s philosophy, it would also be apropos to point out that this sort of historical analysis is more the result of selective reasoning than objective inquiry. After all, it is equally true that every single one of the European democracies that yielded the totalitarian regimes of the 20th century was itself preceded by a non-democratic political entity, whose infrastructure crumbled despite its lack of concern for creating an egalitarian society. Furthermore, if the oppression of the totalitarian models of the last century is to be blamed on the insufficiency of the democratic institutions that preceded them, then consistency demands that we also blame the insufficiencies of those democratic institutions on the failures of the aristocratic power structures that preceded them; and so on, and so forth, ad infinitum.
A better way to approach Nietzsche’s position here is to consider that the philosopher may not be referring to political power at all, but to a psychological development: “I hear with pleasure that our sun is swiftly moving toward the constellation of Hercules—and I hope that man on this earth will in this respect follow the sun’s example?”[7] Hercules, of course, is the Roman demigod who is described as having returned from the underworld[8], and who eventually ascended to the realm of the gods by virtue of his strength and valor—a character whose legend must have served for Nietzsche as a fitting representation of the will to power. The fact that Nietzsche phrases the reference as a question indicates that he was doubtful man would follow the example set forth by the Roman demigod.
I mentioned before that Nietzsche’s popular image is heavily, and unjustifiably, linked with Nazism. The falsity of this association is demonstrated by Nietzsche’s own rejection of the purity of the German people, a sentiment that is antithetical to Nazi ideology: “The German soul is above all manifold, of diverse origins, more put together and superimposed than actually built.”[9] To Nietzsche the idea that Germany is to be cleansed of foreign elements is an absurdity in and of itself, since all things German (for him) are a mixture of originally non-German elements [a truth that I personally believe aptly pertains to all nations and ethnicities]. Nietzsche views the German nationalism emerging in his time as the result of an undefined people attempting to become a coherent identity; it is a compensation for a fault, which in its path “is at work trying to Germanize the whole of Europe”[10] [a statement that perhaps once again hints at Nietzsche’s “prophetic” qualities in predicting the coming decades].
The most surprising fact to anyone whose opinion of Nietzsche has been largely shaped by the false impression of the man as a Nazi precursor is the philosopher’s staunch abhorrence of European anti-Semitism. Nietzsche seems to understand the potential for his writings to be utilized by opportunistic anti-Semites, prompting him to purposefully herald the Jewish people as a superior specimen, in contrast to the anti-Semites who seek to expel them from the continent:
The Jews, however, are beyond any doubt the strongest, toughest, and purest race now living in Europe; they know how to prevail even under the worst conditions (even better than under favorable conditions), by means of virtues that today one would like to mark as vices.[11]
The irony here is that Nietzsche is attributing to the Jewish people every positive quality the anti-Semitic nationalists of Europe wish to attribute to themselves. Just how much of this is motivated by Nietzsche’s preemptive desire to separate himself from the bigoted views of some of his potential admirers is an open question, but what is certain is the philosopher’s complete denunciation of the conspiratorial propaganda the anti-Semites are eager to spread into public consciousness:
That the Jews, if they wanted it—or if they were forced into it, which seems to be what the anti-Semites want—could even now have preponderance, indeed quite literally mastery over Europe, that is certain; that they are not working and planning for this is equally certain.[12]
In other words, Nietzsche is of the opinion that if the Jewish people were as eager for world domination as the anti-Semites claim, they would already be dominating the world by now. That they are neither planning nor interested in this is evidenced by the continued harassment they have to endure from people who claim (and have been claiming for a good few centuries now) to constantly be a knife-edge away from “Jewish dominance.” Instead, Nietzsche suggests that the history of the Jewish people in Europe indicates a desire to at long last be accepted within the public realm:
Meanwhile they want and wish rather, even with some importunity to be absorbed and assimilated by Europe; they long to be fixed, permitted, respected somewhere at long last.[13]
He even goes so far as to insist that to achieve the long overdue inclusion of the Jewish people “it might be useful and fair to expel the anti-Semitic screamers from the country.”[14] I mentioned before the possibility that Nietzsche’s motivation for writing this screed against the anti-Semites of Europe is directly tied to his desire to preempt any possible conflation between his views and the views of some of his more questionable admirers (a move that, while well-intentioned, proved futile in the long run).
A more substantive challenge that can be raised against Nietzsche’s passionate defense of the Jewish people is the seeming contradiction it creates with the man’s staunch attacks against religion, in particular against Abrahamic monotheism, of which Judaism is the founding faith. A reasonable counter Nietzsche could make is that nowhere in his defense of the Jewish people does he defend any of the religious tenets of Judaism; rather, he is aiming to point out the prejudice unduly leveled against the Jews as an ethnic group (which is what their most vitriolic defamers classify them as). Another point of consideration is that Nietzsche’s defense of the Jewish people, as an ethnic group, is completely compatible with his broader worldview regarding master-slave moralities. As a quick summary, Nietzsche divides human society into two distinct castes: the aristocratic nobility (the value-creating masters) and the plebeian masses (the herd-minded slaves). Amongst the aristocratic nobility, who–according to Nietzsche–are the rightful arbiters of what is morally good, a further distinction is made between the knightly-aristocracy and the priestly-aristocracy;[15] the latter are the ones who have provided the intellectual means for the lowly plebeians to wage a slave revolt against the purer morality of the nobler caste—a slave revolt which has permeated and shaped the moral conscience of modern man. In the scenario described by Nietzsche, the ancient Hebrews occupy the role of the priestly-aristocracy, which created the opportunity for the revolting slave-morality of Christianity to pervert the nobleman’s superior morality.
But Germans and anti-Semites aren’t the only groups Nietzsche holds in low regard; his opinion of the English is equally negative, as he dismissively refers to the nation’s philosophical contributors as the archetypes of modern mediocrity:
There are truths that are recognized best by mediocre minds because they are most congenial to them; there are truths that have charm and seductive powers only for mediocre spirits: we come up against this perhaps disagreeable proposition just now, since the spirit of respectable but mediocre Englishmen …[16]
Nietzsche’s sentiment here could be due to his perception of the historical influence English thinkers have had in fostering the atmosphere for what he considers to be harmful modern ideals. Nietzsche’s reasoning may partly be justified by the fact that English parliamentary-style government has served as a model for many forms of European democracy; a system which, as discussed earlier, Nietzsche views as contributing to the “mediocritization of man.” This reading is supported by the philosopher’s persistent equating of lowly plebeian values with the English nation, in contrast to the superior (in Nietzsche’s eyes) French culture: “European noblesse—of feeling, of taste, of manners, taking the word, in short, in every higher sense—is the work and invention of France; the European vulgarity, the plebeianism of modern ideas, that of England.”[17] Here, Nietzsche’s personal biases leak through the prose, showing his preference for the Latin countries he spent a great deal of his creative career residing in, in hopes that the temperate climate would alleviate his poor health. France, in particular, is a place he developed a great deal of fondness for, an affection further encouraged by the fact that the German nationalists of his time (à la Richard Wagner) held French culture in very low regard. In contrast to the barbarism of the northern cultures of Europe, Nietzsche described the French as possessing a more refined and sophisticated taste and mannerism:
Even now one still encounters in France an advance understanding and accommodation of those rarer and rarely contented human beings who are too comprehensive to find satisfaction in any fatherlandishness and know how to love the south in the north and the north in the south.[18]
Of course, it can easily be argued that Nietzsche is engaging in a very selective form of cultural analysis in his heralding of France as a society that has transcended politics and nationalities. Furthermore, one is even justified in pointing out the apparent contradiction in Nietzsche’s reasoning, since the ideals of the French Revolution played a large part in nurturing the call for democratic reforms throughout the European continent—at least in spirit, if not in practice—a historical development Nietzsche claims to despise wholeheartedly. The inconsistency of condemning the English for their historic role in nurturing democratic principles, while failing to acknowledge France’s equal part in this modernization effort, is a shortcoming that cannot (and should not) be easily overlooked by even the casual reader.
On the face of things, Nietzsche’s opinions on nationalities and patriotism appear direct and concise, as he spends page after page polemically dissecting and chastising all who fall for such “infantile” ideals. However, the man’s mindset on the modern development of Western society seems somewhat murky at times. He writes as if he loathes the coming uniformity of society (a sentiment instilled through the growing influence of democratic institutions), but at the same time he condemns the narrow-minded tribalism on offer from the nationalists. This leaves open the question of what sort of political development Nietzsche would like to see come about to correct the course we are currently on. Moreover, is it even possible to derive any political ideals from a man whose philosophy is so staunchly anti-political to begin with? Will not any such attempt result in complete failure, on account that one cannot successfully build an ideological foundation on inherently polemical premises? I think Nietzsche’s writing on modern politics ought to be viewed more as social criticism than as a social framework. For instance, when it comes to European affairs, the philosopher distances himself from both the nationalist and democratic factions, but is astute enough to realize that the former is the final gasp of a dying sentiment, and that the latter will be the ultimate trend amongst modern man, because (above all else) “Europe wants to become one.”[19] Yet, despite the potential that lies in the aim of greater social unity, the underlying principles upon which this globalizing trend is based are something Nietzsche simply cannot support in good spirit.
[1] Nietzsche, Friedrich. Beyond Good and Evil, Part Eight “Peoples and Fatherlands,” section 242.
[2] Ibid.
[3] Ibid.
[4] Ibid.
[5] Ibid.
[6] Ibid.
[7] Ibid, section 243.
[8] Virgil, Aeneid, 6.395.
[9] Nietzsche, Beyond Good and Evil, “Peoples and Fatherlands,” section 244.
[10] Ibid.
[11] Ibid, section 251.
[12] Ibid.
[13] Ibid.
[14] Ibid.
[15] Nietzsche, Friedrich. On the Genealogy of Morals, “First Essay: ‘Good and Evil,’ ‘Good and Bad,’” 1887, section 7.
[16] Nietzsche, Beyond Good and Evil, “Peoples and Fatherlands,” section 253.
[17] Ibid.
[18] Ibid, section 254.
[19] Ibid, section 256.
Social life, and the social culture that surrounds it, is by necessity an idealization of extroverted personalities. Being outgoing, adventurous, flirtatious (i.e., sociable) is the go-to characteristic that storytellers reach for when they want to make a character likable. In contrast, if they want to convey that a character is not fully well-adjusted, the usual trope is to make her/him socially aloof (or downright inept), awkward, withdrawn, or bad at the basics of human interaction (somehow Sherlock Holmes can deduce all the intricacies of human behavior to get an accurate read on people’s personalities, right down to their favorite toilet paper brands, but can’t figure out that he himself is a total asshole, huh?). Given this pervasively negative portrayal of introversion by media and entertainment sources, it’s no surprise that many introverts will eagerly seek out any medium that affirms some level of humanity in the introverted individual.
Self-help books on Amazon that treat introversion not as a maladaptive flaw, but as a perfectly valid state of personality, garner a lot of support, both in their reviews and in their sales numbers. Online communities (which tend to skew heavily towards the introverted side of the personality scale anyway) will often share supportive words and studies showing that being an introvert doesn’t simply end at “not being social,” but encompasses a wide array of positive traits, too, such as thoughtfulness, self-sufficiency, and creative aptitude. One could even argue that, given the ease with which social media has taken over so much of modern personal communication, the digital age we’re enjoying caters much better to our introverted tendencies, given the control users of these platforms have to tailor interactions to their personal comfort levels.
Personally, I definitely lean more towards being an introvert than an extrovert, so I’m inclined to welcome any positive press bestowed on my fellow shut-ins (relax; we’re allowed to ironically use these demeaning terms among ourselves). But going right along with the introvert’s supposed knack for thoughtful introspection, I would be doing my tribe a disservice if I didn’t point out that for many people the introvert label has become somewhat of a cop-out to avoid uncomfortable situations, or to avoid taking steps towards any semblance of self-improvement on the social front.
Everybody has bouts of introversion; even the most socially lively among us. Usually these show up while we’re in the midst of new social surroundings and experiences. What seems to separate the self-identified extroverts from the self-identified introverts is the way they respond to said experiences. Extroverts will use the initial discomfort to energize themselves and try to turn the unfamiliar setting into something familiar (thereby increasing their comfort level with it), while introverts tend to see these social settings as a drain to their energy and will approach them like a tedious chore (thereby not concerning themselves with increasing their comfort level in the situation, but focusing on the comfort they’ll get to enjoy once they’re finally able to be alone again). I’m admittedly generalizing here for the sake of brevity, so calm down with the caveats and nuances I know you’re preparing to angrily type my way (we introverts do have a penchant for pedantry, after all).
With this bit of pop psychology aside, I want to get to a matter I have observed pretty prominently for a while now. A lot of us who identify as introverts often use the label as an excuse to cover for our shyness. As I said, everyone is introverted some of the time, but I’ve noticed that for many of us who define ourselves as introverts–not just as one of our personality traits, but as the defining trait of our identity–what we seem to be doing is using the now more socially acceptable fact of being an introvert to hide the still less acceptable fact of just being too shy.
What reason would any of us have to delude ourselves this way? Well, for starters, to say that you are an introvert is to say that avoiding social settings is a part of your nature, while admitting that you are just too shy for social settings might make you sound fearful, and therefore make you feel like a coward. It goes without saying that being shy doesn’t make anyone a lesser person, but it’s also unavoidable that most of us would rather not advertise our fears and insecurities to the rest of the world. With the rise of respectability given to genuine introversion, many of us see it as an opportunity to mask our social fears and anxieties behind it. Meanwhile, we continue to feel withdrawn and isolated, and continue to fall deeper into the despair of loneliness, making it much worse for ourselves because we’ve now fooled all those around us into believing that being alone is our preferred state of being. And because we have convinced others (and, on a surface level, ourselves) that we are innate introverts, whose default nature is to be away from others as much as possible, we eventually find it impossible to seek out what we truly crave at our core: companionship and camaraderie.
It took me some time to accept that deep down I wasn’t just an introvert comfortable in solitude, as much as I was also a shy kid who was afraid to engage in social settings, despite actually having a basic desire to do so. That shy kid eventually became a shy adult who embraced his more introverted qualities, because that was easier than confronting my honest fears on the matter and leaving myself vulnerable to the very sort of judgment that caused my shyness (and nurtured my introversion) to begin with.
Much like stage fright, I can’t promise that shyness ever really goes away. Whether its origins ultimately lie in nature or nurture (or a combination of both), once you mature through life with it, you’ll always feel some of its effects on you. But there are ways to lessen the sting of it, especially when it comes to your outward interactions with others. It takes effort (a lot of effort), as no book, seminar, or inspirational quote can do the job of remolding the way you see yourself, and the way the world interacts around you. But it can be done. And if you are a self-identified introvert reading this, I would ask you to consider whether, for you too, it is perhaps simple shyness that is at the root of what you believe to be an inherently introverted character.
And if you are considering finding ways to overcome the negative aspects of shyness that are keeping you from being as happy in life as you could potentially be, a giant step forward will be to admit the fact of your shyness to yourself. The next steps are more incremental, and involve making a combination of small and moderate changes to the way you think about socializing and interacting with others. One giant step backward, however, is to cling to things that allow you to hide from the reality of your fears and insecurities about achieving the social life that would satisfy you (whatever extent or comfort level that may be), and to pretend that your lack of social interaction is the result of being an innate introvert, when it probably has more to do with being a person whose shyness has caused them to avoid the initial discomfort of socializing. There is no shame in not wanting to be alone, but hiding from this want, and continuing to deny it to ourselves out of a misguided sense of loyalty to an identity we have adopted to cope with our shyness, is the best way to guarantee a lifelong refuge in a misery that need not be.
The first grocery store I saw when I moved to the United States was a meager-looking spectacle called Sellers Bros. in a rundown strip-mall area of southwest Houston, TX. The store’s shelves were as overcrowded with bargain, generic-name products as its aisles were with patrons shuffling from one end of the building to the next, holding tightly to the Lone Star Cards needed to feed their families for the month. The building’s somber-looking exterior held a passing resemblance to the apartment complexes that surrounded it only a few paces away—one of which my family was living in at the time, serving as our first exposure to the realities of the inner-city American life we had immigrated to, and were gradually assimilating into.
The majority of the neighborhood was composed of immigrant families. Though unlike my family, which originated east of the Atlantic Ocean, it was impossible not to notice that most of my neighbors hailed from south of the Rio Grande. As a result, while I had come to this country with the advantage of being able to speak English reasonably well—well enough to understand, and be understood by, the general Anglophone population anyway—this advantage proved of little value on the very street I called home during those years of my adolescence. It was an early education in a fact that many living in urban America are readily familiar with. Namely, that within the reality of American life reside smaller sets of conflicting realities, many of which can neither communicate with nor understand one another, and are set up so that they will rarely meet. Gulfton Street in Houston, Texas, occupies one such reality.
Tucked away between two major highways in southwest Houston, spanning a stretch of 3 to 4 miles of cracked concrete landscape, sits the street of Gulfton: the epicenter of the Gulfton Ghetto, as the area is occasionally called by the local media and by other Houstonians (though never by the neighborhood’s own inhabitants). To those who take a wrong turn off Bellaire and find themselves driving down Gulfton Street by accident, the insulting nickname will seem most warranted.
The immediate sights one is met with are panel after panel of gang graffiti, row upon row of low-rent apartment complexes, and concrete sidewalks that have been in desperate need of repair for a good few decades now. Surprisingly, there is a park/recreational center meant to give some relief to the area’s ongoing problem with juvenile delinquency, though anyone who has ever stepped into the park itself will be quickly robbed of any hopefulness at the prospect of this endeavor. In short, like many neighborhoods in urban America, Gulfton is a place that has been largely abandoned to the ravages of metropolitan entropy.
Underfunded and halfway fleshed-out improvement projects that have failed to live up to expectations are pointed to by the rest of the city as reasons not to bother with any future attempts at repairing the crumbling infrastructure. This leaves the residents who have given up on the idea of moving away to either wall themselves off from the unsavory conditions that surround them within their private residences (however meager they may be), or embrace those conditions by becoming a part of their destructive nature.
The first instinct any well-meaning person will have when confronted with a reality like Gulfton is to ask, “Can anything be done to fix this?” It’s an honest question, but it betrays a lot about the person asking it. The idea that there is any one thing that can resolve problems decades in the making is part of the problem to begin with. These sorts of problems have no single facet of origin; they are a delicate, interwoven mess of social, economic, and political barriers, erected and maintained through complex systems whose interests compete against and prop up each other in a multitude of ways. The problems of Gulfton, like the problems of similar neighborhoods and populations throughout this country, have no single cause; hence they can have no single solution to curb the path they are currently on.
“Why don’t the people living there work to fix things? It’s their neighborhood, after all. Don’t they care?”
Unfortunately, the reality of all urban areas is that they are landlocked and dependent on the larger metropolis that surrounds them. They don’t get to make decisions in a vacuum, and resources are finite and sparse in terms of what will be readily allocated to benefit them. A further issue is that once a neighborhood has fallen far enough to be regarded as “hopeless” by the officials and administrators who could possibly make a difference, the very hopelessness of said neighborhood is used as the reason against committing long-term funds to improve its conditions, on the basis that it would be unfair to use tax dollars from well-behaved citizens in more savory parts of the city to fund the activities of no-good thugs and gangsters in these low-income, high-crime areas. Local agencies will say they are not equipped to handle the expenses needed to undertake the sort of social projects necessary to overhaul the issues plaguing these areas, while federal agencies see these issues as strictly a local concern.
In the absence of a robust social safety net provided by city or state authorities to ensure the most basic of securities and public amenities, opportunistic forces will band together to construct their own safety nets, which for many young people take the form of gangs that prey on social instability, offering their quasi-organized crime structure as an alternative to festering in a decrepit social system. The reason youths are the most susceptible to this is that they are the most in need of some kind of functioning social order to orientate their lives around (and relieve their boredom), and to many even the violent and dangerous structure of gang life is preferable to the instability of no visible structure at all.
Some people have a natural aversion to hearing that any set of issues constitutes a systemic problem, requiring a systemic approach to resolve. To them, entertaining such a thought is little more than an attempt to shift responsibility away from individuals and let them avoid the consequences of their actions and/or apathy, leaving them no incentive to make things better of their own accord. I can understand the sentiment behind this aversion, though I find it largely misinformed.
In a place like Gulfton, how exactly do you expect the individuals living there to step up and fix the various problems that plague their environment? Should they pool their meager earnings together to pay for the ongoing structural damage to their concrete sidewalks and street signs, despite the fact that we’re talking about city property, and as a result an issue that needs to be addressed by the local government? How about the need to improve the resources available to the local schools, so that there can be robust after-school programs and activities for young people to occupy their time with, discouraging the pull of delinquency and gang activity? Should the low-income parents of these youths fund such programs directly, thereby taking away money that’s needed to pay for rent, utilities, food, clothing, etc.? Would that be an example of individuals stepping up to take personal responsibility for the conditions around them, or a neglect of one’s obligation to provide basic necessities for one’s own family first? If donating money is not the answer, surely we can get everyone to at least volunteer their time to improve their community, no? It’s not as if the sort of people who have to live in these neighborhoods are invariably stuck working jobs with little to no flexible hours or time off, after all.
Perhaps the answer is that all these folks ought to work harder to increase their earnings, so they aren’t hostage to their economic conditions. Yet, if they actually managed to do just that, what incentive would they have to spend their extra earnings on repairing a place like Gulfton, as opposed to–oh, I don’t know–simply moving away to a better part of town that already offers the basics of dignified living conditions?
Unless you are Bruce Wayne, sitting on an endless supply of inherited wealth, resources, and leisure time, individuals donating money and/or time will never be a solution to the problems that affect neighborhoods like Gulfton. These are problems that took a long time to manifest, and they require long-term investment and planning to resolve. They require layers upon layers of overarching organizational resources to properly oversee and track improvements, resources that no single individual or clustered group is capable of providing. Private businesses, local or otherwise, also offer little help in the matter, since there is no business incentive to invest in a place simply to improve the lives and environment of its residents; residents who will not be able to return the gesture because, at the end of the day, they’ll still be too poor to ever turn a profit for these businesses.
And it takes an astounding level of naivete not to realize this. The same sort of naivete that leads certain people to make inane points like, “If you like public programs, and think taxes should be higher to pay for them, why don’t you just volunteer more of your own money on an individual basis, instead of demanding everyone else do it through the tax code?” Because individual actions and donations will not solve systemic problems like the ones affecting neighborhoods like Gulfton, that’s why. Because many of the problems plaguing inner-city life are far too complex, and too interconnected with a multitude of surrounding factors, to be seriously brushed off with red herrings about individual responsibility.
Areas like Gulfton are the way they are because they have become culturally and economically alienated from the rest of their metropolitan centers, and from the rest of the country at large, and little is being done to incorporate them into the greater society that surrounds them. The full reasons for this alienation are legion, and the solutions that will be necessary will by definition be just as extensive, a reality that must be acknowledged by those who purport to take the issues of working, urban, and immigrant communities seriously.
If, on the other hand, you simply don’t care about places like Gulfton, then just say you don’t care, and stand by the convictions of your apathy. And stop pretending that there is a greater moral or ideological basis to what is essentially pure indifference toward the plight of people you can’t be bothered to give a shit about. It will make for a much more honest conversation.
In a not-too-distant previous life, when I thought that standing in front of dozens of apathetic teenagers in the hope of teaching them proper grammar, writing, and argumentation skills was a worthwhile vocation to pursue, I came up with a nifty little speech to start off every semester.
I would say:
I know exactly what you are thinking right now. It’s the same question every student, in every course, in every land thinks every time they enter a classroom.
Why do I need to learn this?
The simple answer is that it’s because the law requires you to; at least until you turn 18. For most of you that’s a good enough answer to put up with my incessant talking for a few months, scrape together enough effort to satisfy the course requirement, and move on to your next classroom, until the law finally says that you’ve gone through the motions long enough to be let loose into the real world, full of non-classroom-type duties and responsibilities. For most of you this answer is good enough. But there’s a few of you for whom this sort of reasoning is not anywhere near good enough to make you put up with what the education system expects of you for an hour and fifteen minutes of your day.
If you fall within that group, I want you to listen very closely. In life you will meet many people. A great number of these people will make prejudgments about you from the first moment they see you–both good and bad. The good prejudgments will work to your benefit, and the bad will be obstacles that can make your life very, very hard.
People will make prejudgments about you based on your height, your weight, your race, your gender, the way you dress, the way you stand, even the way you choose to cut your hair. The negative opinions formed by these prejudgments, no matter how unfair or shallow, will for the most part be things you have little control over. Except for one important component: the way you communicate. Yes, people will judge you by how you speak, too. And while you can’t do much about someone who simply hates you for the way you look, you can sure as hell do everything to deny them the pleasure of dismissing you for the way you communicate. Even if they still hate you at the end of the day for all the bigoted ways available to them, you should at the very least do everything in your power to make it impossible for them to dismiss you for the way you write, the way you argue–the way you speak! That is entirely within your power, and it is a power that’s learned, not inherited. This is your opportunity to learn it, if this is a power you wish to possess. If you don’t, any prejudgments others make about your person as a result of your decision right now will be entirely on you.
I’m biased, but I like to think it got the point across as well as anything else could. And while the point was of course to get the students to feel somewhat enthused about the lesson plan, there was also a deeper purpose to my little pep-talk. Namely, I was demonstrating the use of rhetoric to argue the case for learning about rhetoric (none of the students ever really picked up on this, though).
Rhetoric has a few technical (read: boring) definitions floating around, but the basic gist is that rhetoric is a form of discourse aimed at persuasion (typically of a person or audience). This much about rhetoric most philosophical commentators agree on, anyway. Opinions regarding the use and ethical standing of rhetoric have been more polarizing, however. Plato looked down on rhetoric as mere flattery that could be used to manipulate the masses, as its primary purpose was to convince you to side with an argument, not to impart knowledge or truth. His student Aristotle took a more favorable view, and considered rhetoric an important discipline (and art form), and a necessary part of any well-rounded civics education. Much of the writing and many of the social revolutions that emerged from the Enlightenment relied heavily on rhetoric to persuade the public to a new way of thinking about life (and liberty, and even the pursuit of happiness). The same goes for anti-Enlightenment reactionaries, who argued in favor of preserving the status quo in society.
In the modern world, rhetoric (in its purest form) is most readily seen in courtrooms and legislative bodies, and the political spheres that surround them. It’s no surprise that so many politicians start out as lawyers, and go on to use the same rhetorical tricks they learned in law school on the campaign trail. It’s for this reason that rhetoric takes on a negative connotation in many people’s minds.
Memorable (yet content-empty) slogans, propagated by conscience-devoid politicians whose only concern is scoring a victory in their (and their donors’) favor. Arguments put forth by their mouthpieces in the form of public commentators and pundits, serving the sole purpose of winning over the electorate’s hearts, often at the expense of their critical thought and long-term personal interests. Honorable mention also goes to the rhetorical tactics of self-professed experts who peddle pseudoscience and conspiracy theories, to the effect of fostering a perpetually misinformed populace for the sake of monetary gain. These can all be counted as examples in support of Plato’s skepticism towards rhetoric as a virtuous mode of discourse.
Even my speech above is arguably laced with unwarranted rhetorical hyperbole. (Honestly, most people you meet will probably not form good or bad opinions of you; they’ll look right past you with complete indifference if you offer no value to them as a person.) However, one should refrain from getting distracted by unwarranted equivocations. I sincerely believe there’s a big difference between educators using rhetoric to motivate their students to succeed in their coursework, and the sort of rhetoric that contributes to public policy meant to misinform the public (if you don’t, I hope you never get picked to serve on a jury).
I already mentioned the culpability of politicians who make use of rhetoric to spread propaganda for ideological gain. And while this is universally snubbed as sitting somewhere on the edge of morally questionable behavior, the only reason it’s done is because it works so well. In other words, people get manipulated by the bells and whistles of skilled rhetoricians because they don’t care to educate themselves about the hogwash they are being fed (usually because they already agree with and want to believe what’s being said to them, even if it’s factually baseless).
The public (at least its voting component) is the primary check on politicians in a democratic republic. However, given the ease with which we can be swayed by faint words of praise and reckless fearmongering, it’s not absurd to think that Plato may have been on to something when he expressed doubts about the public’s ability to contend with rhetoricians whose only purpose is to persuade, with complete disregard for the truth of their words.
A secondary check on the rhetoric of public officials is the part of the voting public that makes up the free press. The reason the founders of the United States explicitly protected the free press from the government in the First Amendment of the U.S. Constitution relates directly to the role the press (ideally) ought to play as the fact-checker holding those in power accountable. Unlike the public at large, a respectable free press has several internal mechanisms in play that work to sift credible information from the dubious. It’s also why the first thing clever rhetoricians do is undermine the very credibility of the free press. “Fake News” is a beautiful example of manipulative rhetoric at its finest, as it plays on the public’s distrust of media sources (i.e. it’s only reasonable to believe that some news outlets fail to overcome the biases of their presenters) and gives it a credulous dose of self-serving generalization (i.e. all news outlets that disagree with me are the biased ones, regardless of any evidence they present to support their position).
Any reasonable amount of critical thought on the subject shows that the fact that news sources can be mistaken (or even outright deceptive) does not warrant the conclusion that all media must be wrong and lying whenever they report something you don’t want to be true. Once again, it’s up to the public to follow up on the sources any reputable press will readily provide, and to check the merits of what’s being reported. Shouting “Fake News,” however, muddies this relationship between the public and the press: it paints all sectors of the press as untrustworthy in general, and allows people to lazily self-select only the media they are already disposed to agree with, without being burdened with any intellectual legwork.
Journalists are also rhetoricians by trade. Unlike politicians and lawyers, however, members of the free press ought to strive to belong to Aristotle’s more virtuous end of the rhetorical spectrum, which aims to persuade the masses towards truth and knowledge. As journalism moves more towards competing for public viewership in order to continue to operate–thereby having to appease the whims and tastes of the public, rather than simply seeking to inform it–the concept of fact-based reporting threatens to descend completely into the realm of vacuous rhetoric, meant to do little more than keep up viewer support (which, as mentioned, is prone to flimsy and fickle interests).
The elevation of online personalities whose sole journalistic experience is the ability to cultivate an audience around themselves on video-sharing sites like YouTube, under the neologism of “alternative media,” is an example of a free press where rhetoric takes precedence over fact-based reporting. This is not to smear those personalities who make every effort to be a respectable source of information, but the reality is that the environment of online news commentary is inherently prone to undermining the fact-checking mechanisms of traditional journalism, mostly by side-stepping them completely in favor of peddling rhetoric.
These online outlets have little in the way of field-based journalists doing the legwork to uncover newsworthy stories, let alone teams of fact-checkers tirelessly combing through sources and notes to determine the veracity of a story prior to its reporting. In truth, they rely almost entirely on the work of traditional journalists, whose reporting they present and provide opinionated commentary over, while every so often throwing in jabs at how ineffective traditional journalism is, despite most (if not all) of their actual “news” content coming through the efforts of that very journalism. The reason this matters is that it is a clear example of what could be a respectable profession, and a reliable venue of information for the public, sacrificing its responsibility to disseminate factual knowledge for the convenience of mindless rhetoric, because rhetoric offers popularity and financial gain in the form of viewer support and sponsorship.
Understanding the role of rhetoric–its values, its uses, and its prevalence–is vital to being able to identify the difference between an impassioned speaker fighting on behalf of a just cause and a demagogue looking to manipulate the mob to his advantage. It’s vital to being able to distinguish between journalists who go through many painstaking, sleepless nights to report a truth to the people as a public service, and pundit blowhards using the cover of journalism to propagate misinformation for their own gain and ego. In general, to understand the use of rhetoric is to be able to identify it and (if need be) ward yourself against its more dire influences.
Rhetoric is not, and should not be, a dirty word. Like most things, in benign and well-meaning hands it is a powerful tool of communication that can inspire immense good in the world. In the wrong hands, however, it can be the force that keeps us permanently free-falling into the abyss of credulity and self-destruction.
Recently the Republic of Ireland held a referendum to remove the longstanding offense of blasphemy from its constitution. While blasphemy still stands as a finable offense in the Republic under the 2009 Defamation Act, the referendum is nonetheless a demonstration that, as far as the Irish people are concerned, charges of blasphemy ought not to be a part of punishable civil law in their nation.
Friends in my adopted homeland here in the United States usually have a conception of Western Europe as a set of predominantly secular and progressive cultures. And speaking as someone who spent many years growing up in Western Europe, this conception isn’t wholly unfounded. As a result, it might astound many Americans to hear that some of these secular, progressive, ultra-liberal, borderline lefty countries still have enforceable blasphemy laws on the books. Granted, the actual enforceability of such laws is largely theoretical, given that they are usually undermined by far more salient laws allowing for freedom of religious expression and the freedom to believe in accordance with one’s personal conscience. Thus, blasphemy laws currently exist as a vestigial organ in European law books: without practical purpose or application, but present nonetheless.
“If these laws are unworkable, then why even bother to fret about them with referendums at all? Why not just continue to ignore them, and get on with your blaspheming ways?”
This could be a reasonable response, but it misses an important point concerning blasphemy laws. Putting aside the fact that it makes perfect sense to oppose the criminalization of blasphemy on principle alone, as unbecoming of any modern democratic nation, there is also the issue of the frail foundation on which the current laxity of these laws rests. To put it more plainly, the reason blasphemy charges are unworkable in most of the European nations that have them is precisely because the current sociopolitical climate is too secular and progressive to enforce them. However, as any student of history knows, sociopolitical climates are anything but static. So what happens if the political pendulum swings too far to the right, towards a political faction that views the protection of religious sensibilities as far more important to a nation’s cultural well-being than the free expression of its citizenry? Suddenly, these outdated blasphemy laws that have had no real force in civil law for almost two centuries become a very powerful weapon in the hands of reactionaries all too eager to use the existing rule of law to conform society to their line of quasi-pious thinking. And this is a potential threat that believers and unbelievers alike ought to be concerned about.
Blasphemy isn’t simply the act of professing one’s disbelief in religious claims whole cloth. Blasphemy is built into the very manner in which all religions profess the doctrines that make up their faiths.
Whenever polytheistic faiths, like certain sects of Hinduism, profess the existence of multiple gods, they are blaspheming against monotheistic religions that insist there is only one god, and none other (and vice versa). Within the monotheistic Abrahamic faiths, when Christians profess that Jesus Christ is the foretold messiah, they are blaspheming against the Jewish faith, which holds that the messiah is yet to come (and vice versa). When Muslims claim that Jesus, though a prophet and a messiah, is not the son of God, they are blaspheming against a central claim of Christianity. The Catholic Church’s stance on the supremacy of the Roman papacy is blasphemous to the Eastern Orthodox Churches, and the Protestant rejection of Catholic ecclesiastical authority is blasphemous to Catholics. The Methodists are blasphemers to the Calvinists, and just about every Christian sect considers Mormonism a heresy.
The obvious point here is that to take away the right to blaspheme is to make it impossible for religious pluralism to exist within a society. Perhaps this is fine as long as your religious opinion is the dominant one in the society you inhabit, but what happens if you find yourself just short of the majority opinion? What if a population shift occurs, and the very laws that enforced the thin-skinned sensibilities of your religious persuasion become the means by which the new dominant line of thought undermines your right to religious expression?
I could stop writing now and end on this appeal for mutual cordiality between people of all faiths, and on how it is in everyone’s self-interest to oppose blasphemy laws, but I fear that would leave things very much against the spirit of healthy discomfort that blasphemy really should elicit in a person when they come across it. On that note, allow me to address the elephant in the room that needs to be brought up whenever concern over religious offense of any sort, in law or public discourse, rears its head.
Undeniably, religions make bold claims for themselves. Claims that offer definitive answers on matters concerning life, death, and morality, all while wagering possession of a monopoly on Truth with a capital T. And they are always keen to wrap this all-knowing, all-encompassing bit of absolutist wisdom in a garb of self-proclaimed humility, as if to say, “No, no, don’t mind me…I’m simply professing to know the answers to all of life’s mysteries, ready-made with the consequences (read: threats) that will befall you if you don’t follow along with my modest creed.”
In short, religions by their inherent design claim to know things they couldn’t possibly know. But I, in turn, admit that I don’t know. I don’t know what the answers to life’s mysteries are; nor do I know which of today’s mysteries will remain mysterious forever, and which might become common knowledge for generations to come. I don’t know which moral answers yield the most objective good for humanity; nor can I say for sure that such answers are even completely knowable. The truths I do know come with a lowercase t, held provisionally in accordance with forthcoming evidence and reasoned arguments, and I don’t know that I can do anything other than reject the grammar of bolder Truth claims when confronted with them.
It is precisely because I don’t know that I am left with little recourse but to examine, question, dismiss, disbelieve, and (when I see fit) deride those who do claim to know, yet offer hardly a shred of evidence for their claims. It took centuries of debate, and the bloodshed of previous generations of thinkers, for any of us to be able to enjoy this simple yet powerful privilege of skepticism. A privilege I hold up as my right, and which I will speak up for without hesitation or apology. What you call blasphemy, I call critical thought. And if anyone can appeal to tradition as a means to protect religious sensibilities by legal means, I am fully within my rights to appeal to the tradition of cultural and intellectual pushback against religious doctrines and religious authorities, the very tradition that has made it possible for any sort of interfaith (and non-faith) social cohesion to exist in the modern world. A tradition that includes both the right to the profane and the blasphemous, and which cannot be abridged in any democratic republic worthy of the name.
Genuine self-scrutiny is a personal virtue that is much easier preached than practiced. Usually the furthest most of us are willing to go is a relativistic acknowledgment that differing opinions exist and that, all things considered, we would be willing to change our minds if these alternative viewpoints were to persuade us sufficiently. But, in my opinion, this sort of tacit relativism isn’t much in the way of self-scrutiny. To self-scrutinize is to actively challenge the values and ideals we hold dear to our person–to dare to shake the foundation holding up our most cherished beliefs, and to test whether the structure housing them is sturdy enough to withstand a direct attack. In contrast, the aforementioned acknowledgment that differing (and potentially equally valid) views exist is a very passive stance, as it relies strictly on an external source to come along and challenge our position(s), with no actual self-scrutiny involved in the process.
Up to this point, this very post can rightfully be characterized as the passive variant; i.e. it is me (an external source) attempting to challenge you to question the manner in which you view the world around you. Although there are occasionally posts on this blog in which I sincerely try to adopt stances opposed to my own, the truth is that I do this primarily to strengthen my own position by better understanding what I’m arguing against. This, too, is not self-scrutiny. And it would be dishonest to pretend otherwise.
To truly self-scrutinize I would have to pick a position–a value, an ideal–around which I orientate my worldview, and mercilessly strip it to the bone. The frustrating part of such a mental exercise is the inevitability of having to rely on generalizations of my own opinions in order to paraphrase them thoroughly enough, without getting trapped in a game of petty semantics. The important thing to remember is that the points I will be arguing over with myself in this post are admittedly stripped of their nuances regarding some obvious exceptions and caveats, so as not to lose focus on the underlying principles being discussed. Consider that a disclaimer for the more pedantic-minded among my readers (you know who you are).
First, it would be helpful to state the value around which I orientate my worldview before trying to poke holes in it. Above most else, for as long as I can remember, I have valued the egalitarian approach to most facets of human interaction. I truly do believe that the most effective, just, and fair way for a society to function is for its sociopolitical and judicial elements to strive for as equitable an approach to administering their societal roles as possible. I also recognize that this is more realistically an ideal for society to endeavor towards than an all-encompassing absolute; nonetheless, I still see it as a valuable ideal for modern society to strive towards, even if we must acknowledge that its perfect implementation may forever be out of our grasp.
Additionally, I should clarify that I do not claim this personal value of mine to be derived from anything higher than my own preferences for how I think society ought to be. Yes, it is subjective, because it is subject to my desires and interests; however, I would argue that this is true of just about any alternative or opposing viewpoint that may be brought up. Furthermore, the merits and benefits I believe to be implicit in my preference for an egalitarian society (though admittedly subjective) are, in my opinion, independently verifiable outside of my own internal desires. In short, I value egalitarianism because, having no just and tangible means by which to sift through who merits which position in the social hierarchy, I consider it important that (if nothing else, at least in the basic application of our political and judicial proceedings) we hold all members of society to an equal standard. Moreover, not that it matters to the validity of the egalitarian viewpoint, but I’m convinced that the majority of people reading this will have little trouble agreeing with the benefits of such a worldview (though probably more in principle, while leaving room for disagreement on the most practical means by which to apply said principle in a social framework).
Now, the immediate issue I see arising with this stance of mine is the objection that genuine egalitarianism can easily lead to outright conformity–especially enforced conformity–as a society built on the model of complete equality might find it difficult to function unless it actively sets out to maintain the equality it’s seeking to establish.
It is a harsh fact that large-scale human interaction is not naturally egalitarian; left to their own devices, there is little in the historical record to suggest that a society of people will not diversify into a multi-layered hierarchy, instinctively creating the very social disparity that the egalitarian mindset aims to combat. The most obvious response would be to insist that egalitarianism simply means that the basic functions of society (i.e. the laws) have to be applied equally, and that as long as those measures are upheld, the system can self-correct to its default setting. Yet this outlook is only convincing insofar as one is inclined to have faith in the sincerity of the law’s application, in terms of holding everyone in society to an equal standard. This also brings us to the issue of who is to be the arbiter entrusted with upholding the principles of an egalitarian system. The judicial system? The policymakers? The public at large? And does this not then bestow on these individuals a degree of authority (i.e. power and privilege) that creates a disparity which in itself violates the very premise of a truly egalitarian model?
“In a democratic society, the authority rests with the people in the society to ultimately decide on who is to be the arbiter(s) to ensure that equality is being upheld in said society on the people’s behalf.”
But maintaining social equality by means of representative democracy brings us to the issue of those in the minority opinion being subject to the whims of the majority. And is this not also in itself a violation of what an egalitarian society ought to be striving for?
When we play out the potential pitfalls of each of these concerns, what we end up with is the realization that, in practice, egalitarianism seems to function only when applied on a selective basis. Complete equality, across the board, on all matters, has the serious consequence of ending in either social gridlock (rendering all manner of progress on any issue impossible) or coercion (negating the benignity ideally associated with egalitarianism).
I’ve heard it said that in this sort of discussion it is important to differentiate between equality of outcome and equality of opportunity; that the latter is the truly worthwhile goal an egalitarian ought to be striving for in order to ensure a just and fair society. I’m not sure this does much to address the primary issue at hand.
If there exists no disparity in opportunity, but we reserve room for inequity in outcome, then will it not still be the case that a select number of individuals end up occupying a higher role in the social hierarchy than others? And once the foundation is laid for such a development, is it not just as likely that those who occupy the higher roles could put in place measures that serve their interests alone, or even come at the expense of those who fall into the lower social roles? Meaning that even though in this model all opportunity was equally available at first, the caveat that different people can have different outcomes–fall into more favorable and less favorable social conditions–fails to safeguard against the dilemma of those who rise high enough manipulating matters in society to their advantage, thereby stifling the outcome and opportunity potentials of future generations. If the rebuttal is that in a truly egalitarian society measures would be in place to prevent this, we fall back to the question of who exactly is to be the arbiter entrusted with upholding the principles of an egalitarian system, bringing us full circle to the line of inquiry in the preceding paragraphs. Hence, the distinction between equality of outcome and equality of opportunity does little to nothing to resolve the issues being discussed here.
All these objections are ones that, even as someone who considers himself an egalitarian, I can sympathize with. Mainly because I have no way to refute them without appealing to a personal intuition that these concerns are not endemic to an egalitarian model, and that it’s ultimately feasible to avoid such pitfalls so long as we leave room within the social system to be amenable to debate and revision. However, I have to admit that I’m not always entirely sure of this myself.
This problem brings me directly to a confrontation over what should be valued more in society: the complete equality of all people, or the autonomy of the individual? And is creating such a dichotomy even necessary, or can a balance be struck that satisfies the interests of both?
The threat that removing all disparity between individuals might stifle the distinct individuality of people is, I believe, worth worrying over. What good is a world where equality is triumphant but reigns on the merits of absolute sameness? Not to mention, what will happen to the human ingenuity all of us in modern life depend on for our survival as a society? The prospect of personal achievement is predicated on one’s ability to stand out above the fold, to create something unique and distinct from what is common. The possibility that this drive will be held suspect in a completely egalitarian world, in the name of preemptively combating all forms of perceived inequality, is not something I can dismiss simply because it’s inconvenient to my worldview, no matter how unpleasant it might be to my core values to acknowledge. Essentially, I believe it would be unwise to brush off the point that a world safeguarded to the point where no one falls is also, potentially, a world where no one rises.
When I started writing this post I had a standard set of points I knew I would raise to fulfill my interest in demonstrating a genuine attempt at unrestrained self-scrutiny. I know some readers might wonder why I’m not doing more to combat the objections I’ve raised here against my own egalitarian perspective, and the simple truth is that my belief that egalitarianism is practical and feasible rests almost entirely on the fact that I want both of those things to be true, as that would validate my presupposed worldview by fiat. Nonetheless, I do understand that reality does not depend on my personal whims and wishes. In all honesty, having actually reasoned out the premises here, I’m left wondering why, if for the sake of practicality we will always be forced to be selective to some extent in our approach to egalitarianism, we (myself included) even bother calling it egalitarianism at all. Perhaps there is a term out there that more honestly fits what most of us mean when we strive to uphold what we refer to as egalitarian principles. That, however, is a wholly separate discussion from my intentions here. My goal was to hold my own views and values to the fire and see where they end up. In that goal, I think I’ve succeeded…what results from it will take a bit more thinking on my part to figure out.