The eternal recurrence is most prominently developed by Friedrich Nietzsche in his 1883 Thus Spoke Zarathustra, where it serves primarily as a thought experiment proposed by the title character, Zarathustra, meant to mark a supreme achievement of human development: the ascension to a higher type of consciousness in man.
In Zarathustra, Nietzsche conceives of a cyclical universe in which every event recurs, across an infinite stretch of time, forever. Nietzsche’s intent is to focus his readers’ minds on a possible reality in which every action they had ever committed (all faults, setbacks, mistakes, and wrongdoings) was bound to be repeated by them an infinite number of times; in which they would be forced to endure their shame and grief over and over again, unable to change or improve on any past misdeed, for all eternity. And then to ask the question: “Would you be willing to bear such a reality?” Would a person be able to cope with knowing that s/he will have to helplessly relive all the pains, heartbreaks, bad decisions, and grief that s/he has already struggled through once in life? And would this person, aware of this eternal recurrence, still manage to affirm a will to live?
Nietzsche believed that most people alive would decisively shriek a unanimous “No!” to such a proposition, because it would seem too bleak and fatalistic a fate to have to eternally return to one’s life’s errors, infinitely doomed to recommit one’s sins (for lack of a better term). Nietzsche saw this as a reflection of the destitution modern man has surrendered himself to: the willful denial of one’s true existence. He contrasted this with what he called amor fati (Lat. love of fate):
My formula for greatness in a human being is amor fati: that one wants nothing to be different, not forward, not backward, not in all eternity. Not merely bear what is necessary, still less conceal it—all idealism is mendaciousness in the face of what is necessary—but love it (Ecce Homo, “Why I am So Clever,” section 10).
To be able to look at the compilation of one’s life, with all one’s mistakes and regrets, and still unashamedly proclaim one’s desire to relive it all as is (with no intent to alter one’s past actions), is, according to Nietzsche, the ultimate affirmation of life—a full embrace of one’s existence, a testament to the arrival of the overman (Ger. Übermensch).
Although the eternal recurrence was a central theme in Thus Spoke Zarathustra, Nietzsche seems to have somewhat abandoned the thought experiment in much of his later work (it receives scarcely any explicit mention in either Beyond Good and Evil or On the Genealogy of Morals). However, this appears to be a hasty conclusion, since Nietzsche does make continuous references to the basic sentiment found in his 1883 philosophical novel, and seems to expand on the same core concepts in his later writings.
This eternal return, and its importance in signifying the coming of the overman, is Nietzsche’s attempt to offer a possible redemption narrative for humanity: a means by which man can take the fatalistic nature of life and surpass its dire implications by ascending beyond them into a realm of complete oneness with all the facts and events that come together to compose one’s life story. Yet this redemption is not inevitable, for man (or “modern man,” as Nietzsche would say) is in a constant state of rejecting amor fati, moving away from self-acceptance in favor of finding acceptance in “higher” ideals that are imagined to dwell exterior and superior to oneself. This is the fate of what Zarathustra called the “last man”—the alternate destiny of mankind—the final descent into a sheepish, complacent shell of what man once was, living in fear of his own existence.
Much of what passes for Nietzsche’s image in popular thought is a caricature constructed by the Nazi propaganda machine in the 1930s (largely with the help of the philosopher’s own nationalistic, anti-Semitic sister, Elisabeth). Of course, if blame is to be assigned, then it is only fair to point out that many of the misinterpretations surrounding Nietzsche stem from the man’s own insistence on expressing his views in rather quick, often intentionally obscure musings and aphorisms, leaving his ideas wide open to be bastardized by opportunistic ideologues.
The reality is that even though it takes little effort to sanction an elitist system through Nietzsche’s philosophy, the actual details that accompany the man’s anti-egalitarian values—namely, his anti-politics, his anti-nationalism (especially his anti-German sentiment), his hostility to group/herd mentality—are by definition incompatible with the belligerent, conformist, nationalistic fascism inherent to the Third Reich’s state ideology. Nietzsche’s views on the notion of nationalities and personal identities (and the oftentimes conflicted dynamics between the two) reveal a much more complex and nuanced perspective than the picture that has often been (and still is) presented of him as the patron saint of Nazism.
In Part Eight of Beyond Good and Evil (1886), titled “Peoples and Fatherlands,” Nietzsche outlines his analysis of European and Western development, and critiques the modern move towards democratic institutions as a step towards the cultivation of a true tyranny. Nietzsche comments that the tribal affiliations that once dominated Europe are eroding away in favor of a more borderless sentiment amongst the hitherto disconnected peoples:
The Europeans are becoming more similar to each other … an essentially supra-national and nomadic type of man is gradually coming up, a type that possesses, physiologically speaking, a maximum of the art and power of adaptation as its typical distinction.
For Nietzsche, this development is a direct result of the advent of modernity and modern ideas, which have made a person’s allegiance to a trifling tribe or nation unsatisfactory in light of modern man’s greater awareness of the world. Thus a grander identity is needed: a newer, more encompassing, international personal ideology that escapes the limitations of the narrow worldview of one’s regional clan. Moreover, as identities and ideologies extend beyond the old local boundaries, a person’s interests will also evolve from the tribal to the global. Politically, one possible result of all this will be the development of a pluralistic society, out of which democracy will ascend as a means of appeasing the diverging—and converging—interests arising amongst the new, modern populace. It is within this context, Nietzsche argues, that democracy is born.
Nietzsche understands how this rise of democracy is looked upon as great progress by contemporary society, but the philosopher himself is wary of the implications such a system holds for humanity, stating that “this process will probably lead to results which would seem to be least expected by those who naively promote and praise it, the apostles of ‘modern ideas.’” Nietzsche is distrustful of populist inclinations because they unduly empower the degenerate, weaker persons of society to retard the progress of the more innovative value-creators, who will be forced to reside amongst the lowly plebeian masses. This sentiment is directly tied in with Nietzsche’s thesis on the dichotomy of master and slave moralities, the relevant part of which can be summarized as follows:
Our egalitarian sentiment, according to Nietzsche, is a result of the poison we have all blindly swallowed. Our demand for universal moderation, our valuing of humility, our aversion to boastfulness as too impolite in the presence of weaker, stupider individuals, and our desire to soften the feeling of inadequacy that follows an opponent’s failures, are all manifestations of the original slave revolt in morality, promulgated by those who seek to vindicate the virtue of their inferiority by means of social cohesion—to rationalize away personal failure in favor of mass victimization.
The democratization of society is, to Nietzsche, a move towards the promotion of mediocrity. It will condition us to be content with the will of others as reasonably equivalent to our own, instead of asserting our own interests in opposition to the whims of the masses. In short, our striving for a more egalitarian mindset will leave us too eager to compromise with positions we fundamentally disagree with, rendering us potentially incapable of identifying and combating the ascension of any tyrannical entity that might see fit to stealthily encroach on our person:
The very same new conditions that will on the average lead to the leveling and mediocritization of man—to a useful, industrious, handy, multi-purpose herd animal—are likely in the highest degree to give birth to the exceptional human beings of the most dangerous and attractive quality.
Nietzsche proposes that in a society whose primary aim is to create unanimous equality, the ultimate result will be an environment of obstinate complacency (the greatest form of oppression that can be leveled against a thinking person). All this will in turn lead to the sweeping infantilizing of individuals, making them dependent on the body of the system as a whole for their survival, rather than on their own strength and merit—a trend that will produce a population “who will be poor in will, extremely employable, and as much in need of a master and commander as of their daily bread.”
However, the degeneration will not be universal amongst all individuals. Nietzsche explains that “while the democratization of Europe leads to the production of a type that is prepared for slavery in the subtlest sense, in single, exceptional cases the strong human being will have to turn out stronger and richer than perhaps ever before.” According to Nietzsche, in nature there exist those who can only dominate by virtue of their own values, and those who can only be dominated as a result of their inability to create values (hence, they must leech off the values of others). Both groups act as they do through the presence of their will to power, that is to say, the very nature of their existence. As long as they exist, they cannot choose to act differently than the manner in which their nature—i.e. their will to power—dictates.
The problem Nietzsche sees with modernity is that our egalitarian-minded moral system has turned all of this upside down, allowing the weaker plebeian caste (who cannot create any values of their own) to dominate the environment, in which the stronger noble caste (the natural value-creators) is conditioned to stoop to the level of the very masses it should be dominating. This causes a dilemma for those few contemporary men born with the noble character trait: their instinct (their will to power) tells them to reject the moral values of their surroundings and create their own, but their conscience (indoctrinated by the slave mentality of the lowly masses controlling the moral discourse) tells them that subverting their own will for the benefit of the herd is the highest virtue of the good modern man. Thus, when individuals do inevitably rise above the masses (because, in Nietzsche’s view, the masses cannot help but unwittingly condition themselves to be dominated by some sort of master), the resulting value-creators who ascend to power will be as much a perversion of the noble character as the degenerate culture that has produced them; what will ensue is absolute tyranny:
I meant to say: the democratization of Europe is at the same time an involuntary arrangement for the cultivation of tyrants—taking that word in every sense, including the most spiritual.
Reading these dire statements by Nietzsche from the privileged viewpoint of the 21st century, an observer would be justified in marveling at the prophetic nature of the philosopher’s words in predicting the rise of the totalitarian systems that would follow a few decades after his death.
The rise of fascism in both Italy and Germany appeared to emerge out of relatively democratic phases in both nations’ histories. Likewise, the 1917 October Revolution in Russia, which brought the Bolshevik faction to power in the unstable country, was enabled by the indecisiveness of the democratically minded Provisional Government that arose from the February Revolution of the same year. In all of these examples the presence of a democratic political institution did not hinder the advent of repressive totalitarian regimes. Moreover (Nietzsche might argue), the presence of said democracies was instrumental in opening the door to these malignant forces, since they had no mechanism by which to eject them from the political process besides the whims of a broken, infantilized population (which Nietzsche describes as being “prepared for slavery in the subtlest sense”).
However, if one wants to be critical about the possibly prophetic nature of Nietzsche’s philosophy, it would also be apropos to point out that this sort of historical analysis is more the result of selective reasoning than objective inquiry. After all, it is equally true that every single one of the European democracies that yielded the totalitarian regimes of the 20th century was itself preceded by a non-democratic political entity, whose infrastructure crumbled despite its lack of concern for creating an egalitarian society. Furthermore, if the oppression of the totalitarian models of the last century is to be blamed on the insufficiency of the democratic institutions that preceded them, then consistency demands that we also blame the insufficiencies of these democratic institutions on the failures of the aristocratic power structures that preceded them; and so on, and so forth, ad infinitum.
A better way to approach Nietzsche’s position here is to consider that the philosopher may not be referring to political power at all, but to a psychological development: “I hear with pleasure that our sun is swiftly moving toward the constellation of Hercules—and I hope that man on this earth will in this respect follow the sun’s example?” Hercules, of course, is the Roman demigod who is described as having returned from the underworld and eventually ascended to the realm of the gods by virtue of his strength and valor—a character whose legend must have served Nietzsche as a fitting representation of the will to power. The fact that Nietzsche phrases the reference as a question indicates that he was doubtful that man would follow the example set forth by the demigod.
I mentioned before that Nietzsche’s popular image is heavily, and unjustifiably, linked with Nazism. The falsity of this association is demonstrated by Nietzsche’s own rejection of the purity of the German people, a sentiment that is antithetical to Nazi ideology: “The German soul is above all manifold, of diverse origins, more put together and superimposed than actually built.” To Nietzsche the idea that Germany is to be cleansed of foreign elements is an absurdity in and of itself, since all things German (for him) are a mixture of originally non-German elements [a truth that I personally believe aptly pertains to all nations and ethnicities]. Nietzsche views the German nationalism emerging in his time as the result of an undefined people attempting to become a coherent identity; it is a compensation for a fault, which in its path “is at work trying to Germanize the whole of Europe” [a statement that perhaps once again hints at Nietzsche’s “prophetic” qualities in predicting the coming decades].
The most surprising fact to anyone whose opinion of Nietzsche has been largely shaped by the false impression of him as a Nazi precursor is the philosopher’s staunch abhorrence of European anti-Semitism. Nietzsche seems to understand the potential for his writings to be utilized by opportunistic anti-Semites, leading him to purposefully herald the Jewish people as a superior specimen, in contrast to the anti-Semites who seek to expel them from the continent:
The Jews, however, are beyond any doubt the strongest, toughest, and purest race now living in Europe; they know how to prevail even under the worst conditions (even better than under favorable conditions), by means of virtues that today one would like to mark as vices.
The irony here is that Nietzsche is attributing to the Jewish people every positive quality the anti-Semitic nationalists of Europe wish to attribute to themselves. Just how much of this is motivated by Nietzsche’s preemptive desire to separate himself from the bigoted views of some of his potential admirers is an open question, but what is certain is the philosopher’s complete denunciation of the conspiratorial propaganda the anti-Semites are eager to spread into public consciousness:
That the Jews, if they wanted it—or if they were forced into it, which seems to be what the anti-Semites want—could even now have preponderance, indeed quite literally mastery over Europe, that is certain; that they are not working and planning for this is equally certain.
In other words, Nietzsche is of the opinion that if the Jewish people were as eager for world domination as the anti-Semites claim, they would already be dominating the world by now. That they are neither planning nor interested in this is evidenced by the continued harassment they have to endure from people who claim (and have been claiming for a good few centuries now) to be constantly a knife-edge away from “Jewish dominance.” Instead, Nietzsche suggests that the history of the Jewish people in Europe indicates a desire to be, at long last, accepted within the public realm:
Meanwhile they want and wish rather, even with some importunity to be absorbed and assimilated by Europe; they long to be fixed, permitted, respected somewhere at long last.
He even goes so far as to insist that to achieve the long overdue inclusion of the Jewish people “it might be useful and fair to expel the anti-Semitic screamers from the country.” I mentioned before the possibility that Nietzsche’s motivation for writing this screed against the anti-Semites of Europe is directly tied in with his desire to counter any possible conflation between his views and those of some of his more questionable admirers (a move that, while well-intentioned, proved futile in the long run).
A more substantive intellectual challenge to Nietzsche’s passionate defense of the Jewish people is the seeming contradiction it creates with the man’s staunch attacks against religion, in particular against Abrahamic monotheism, of which Judaism is the founding faith. A reasonable counter Nietzsche could make is that nowhere in his defense of the Jewish people does he defend any of the religious tenets of Judaism; rather, he aims to point out the prejudice unduly leveled against the Jews as an ethnic group (which is what their most vitriolic defamers classify them as). Another point of consideration is that Nietzsche’s defense of the Jewish people, as an ethnic group, is completely compatible with his broader worldview regarding master and slave moralities. As a quick summary: Nietzsche divides human society into two distinct castes, the aristocratic nobility (the value-creating masters) and the plebeian masses (the herd-minded slaves). Amongst the aristocratic nobility, who—according to Nietzsche—are the rightful arbiters of what is morally good, a further distinction is made between the knightly aristocracy and the priestly aristocracy; the latter are the ones who have provided the intellectual means for the lowly plebeians to wage a slave revolt against the purer morality of the nobler caste—a slave revolt that has permeated and shaped the moral conscience of modern man. In this scenario, the ancient Hebrews would occupy the role of the priestly aristocracy, which created the opportunity for the revolting slave morality of Christianity to pervert the nobleman’s superior morality.
But Germans and anti-Semites are not the only groups Nietzsche holds in low regard; his opinion of the English is equally negative, and he dismissively refers to the nation’s philosophical contributors as the archetypes of modern mediocrity:
There are truths that are recognized best by mediocre minds because they are most congenial to them; there are truths that have charm and seductive powers only for mediocre spirits: we come up against this perhaps disagreeable proposition just now, since the spirit of respectable but mediocre Englishmen …
Nietzsche’s sentiment here could be due to his perception of the historical influence English thinkers have had in fostering the atmosphere for what he considers to be harmful modern ideals. His reasoning may be partly justified by the fact that English parliamentary-style government has served as a model for many forms of European democracy, a system which, as discussed earlier, Nietzsche views as contributing to the “mediocritization of man.” This reading is supported by the philosopher’s persistent equating of lowly plebeian values with the English nation, in contrast to the superior (in Nietzsche’s eyes) French culture: “European noblesse—of feeling, of taste, of manners, taking the word, in short, in every higher sense—is the work and invention of France; the European vulgarity, the plebeianism of modern ideas, that of England.” Here Nietzsche’s personal biases leak through the prose, showing his preference for the Latin countries in which he spent a great deal of his creative career residing, in hopes that the temperate climate would alleviate his poor health. France, in particular, is a place he developed a great deal of fondness for, an affection further encouraged by the fact that the German nationalists of his time (à la Richard Wagner) held French culture in very low regard. In contrast to the barbarism of the northern cultures of Europe, Nietzsche describes the French as possessing a more refined and sophisticated taste and manner:
Even now one still encounters in France an advance understanding and accommodation of those rarer and rarely contented human beings who are too comprehensive to find satisfaction in any fatherlandishness and know how to love the south in the north and the north in the south.
Of course, it can easily be argued that Nietzsche is engaging in a very selective form of cultural analysis in his heralding of France as a society that has transcended politics and nationalities. Furthermore, one is even justified in pointing out the apparent contradiction in Nietzsche’s reasoning, since the ideals of the French Revolution played a large part in nurturing the call for democratic reforms throughout the European continent—at least in spirit, if not in practice—a historical development Nietzsche claims to despise wholeheartedly. The inconsistency of condemning the English for their historic role in nurturing democratic principles while failing to acknowledge France’s equal part in this modernization effort is a shortcoming that cannot (and should not) be easily overlooked by even the casual reader.
On the face of things, Nietzsche’s opinions of nationalities and patriotism appear direct and concise, as he spends page after page polemically dissecting and chastising all who fall for such “infantile” ideals. However, the man’s mindset on the modern development of Western society seems somewhat murky at times. He writes as if he loathes the coming uniformity of society (a sentiment instilled through the growing influence of democratic institutions), but at the same time he condemns the narrow-minded tribalism on offer from the nationalists. This leaves open the question of what sort of political development Nietzsche would like to see come about to reverse the wrongful course we are currently on. Moreover, is it even possible to derive any political ideals from a man whose philosophy is so staunchly anti-political to begin with? Will not any such attempt result in complete failure, on account that one cannot successfully build an ideological foundation on inherently polemical premises? I think Nietzsche’s primary contribution on the issue of modern politics ought to be viewed as social criticism rather than a social framework. For instance, when it comes to European affairs, the philosopher distances himself from both the nationalist and democratic factions, but is astute enough to realize that the former is the final gasp of a dying sentiment, and that the latter will be the ultimate trend amongst modern man, because (above all else) “Europe wants to become one.” Yet, despite the potential that lies within the aim of greater social unity, the underlying principles upon which this globalizing trend is based are something Nietzsche simply cannot support in good conscience.
 Nietzsche, Friedrich. Beyond Good and Evil, Part Eight, “Peoples and Fatherlands,” section 242.
 Ibid., section 243.
 Ibid., section 244.
 Ibid., section 251.
 Nietzsche, Friedrich. On the Genealogy of Morals, “First Essay: ‘Good and Evil,’ ‘Good and Bad,’” 1887, section 7.
 Nietzsche, Beyond Good and Evil, “Peoples and Fatherlands,” section 253.
In a not-too-distant previous life, when I thought that standing in front of dozens of apathetic teenagers in the hope of teaching them proper grammar, writing, and argumentation skills was a worthwhile vocation to pursue, I came up with a nifty little speech to start off every semester.
I would say:
I know exactly what you are thinking right now. It’s the same question every student, in every course, in every land thinks every time they enter a classroom.
Why do I need to learn this?
The simple answer is that the law requires you to; at least until you turn 18. For most of you, that’s a good enough answer to put up with my incessant talking for a few months, scrape together enough effort to satisfy the course requirement, and move on to your next classroom, until the law finally says that you’ve gone through the motions long enough to be let loose into the real world, full of non-classroom-type duties and responsibilities. But there are a few of you for whom this sort of reasoning is nowhere near good enough to make you put up with what the education system expects of you for an hour and fifteen minutes of your day.
If you fall within that group, I want you to listen very closely. In life you will meet many people. A great number of these people will make prejudgments about you from the first moment they see you–both good and bad. The good prejudgments will work to your benefit, and the bad will be obstacles that can make your life very, very hard.
People will make prejudgments about you based on your height, your weight, your race, your gender, the way you dress, the way you stand, even the way you choose to cut your hair. The negative opinions formed by these prejudgments, no matter how unfair or shallow, will for the most part be things you have little control over. Except for one important component: the way you communicate. Yes, people will judge you by how you speak, too. And while you can’t do much about someone who simply hates you for the way you look, you can sure as hell do everything to deny them the pleasure of dismissing you for the way you communicate. Even if they still hate you at the end of the day for all the bigoted reasons available to them, you should at the very least do everything in your power to make it impossible for them to dismiss you for the way you write, the way you argue–the way you speak! That is entirely within your power, and it is a power that’s learned, not inherited. This is your opportunity to learn it, if this is a power you wish to possess. If you don’t, any prejudgments others make about your person as a result of your decision right now will be entirely on you.
I’m biased, but I like to think it got the point across as well as anything else could. And while the point was of course to get the students to feel somewhat enthused about the lesson plan, there was also a deeper purpose to my little pep-talk. Namely, I was demonstrating the use of rhetoric to argue the case for learning about rhetoric (none of the students ever really picked up on this, though).
Rhetoric has a few technical (read: boring) definitions floating around, but the basic gist is that rhetoric is a form of discourse aimed at persuasion (typically of a person or audience). This is the part about rhetoric that most philosophical commentators agree on, anyway. Opinions regarding the use and ethical standing of rhetoric have been more polarizing, however. Plato looked down on rhetoric as mere flattery that could be used to manipulate the masses, as its primary purpose was to convince you to side with the argument, not to impart knowledge or truth. His student Aristotle took a more favorable view, considering rhetoric an important discipline (and art form) and a necessary part of any well-rounded civics education. Much of the writing and many of the social revolutions that emerged from the Enlightenment relied heavily on rhetoric to persuade the public to a new way of thinking about life (and liberty, and even the pursuit of happiness). The same goes for anti-Enlightenment reactionaries, who argued in favor of preserving the status quo in society.
In the modern world, rhetoric (in its purest form) is most readily seen in courtrooms and legislative bodies, and the political spheres that surround them. It’s no surprise that so many politicians start out as lawyers, and go on to use the same rhetorical tricks they learned in law school on the campaign trail. It’s for this reason that rhetoric takes on a negative connotation in many people’s minds.
Memorable (yet content-empty) slogans, propagated by conscience-devoid politicians whose only concern is scoring a victory in their (and their donors’) favor. Arguments put forth by their mouthpieces, in the form of public commentators and pundits, serving the sole purpose of winning over the electorate’s hearts, often at the expense of their critical thought and personal long-term interests. Honorable mentions also go to the rhetorical tactics of self-professed experts who peddle pseudoscience and conspiracy theories, to the effect of fostering a perpetually misinformed populace for the sake of monetary gain. These can all be counted as examples in support of Plato’s skepticism towards rhetoric as a virtuous mode of discourse.
Even my speech above is arguably laced with unwarranted rhetorical hyperbole. (Honestly, most people you meet will probably not form good or bad opinions of you; they’ll probably look right past you with complete indifference if you offer no value to them as a person.) However, one should refrain from getting distracted by unwarranted equivocations. I sincerely believe there’s a big difference between educators using rhetoric to motivate their students to succeed in their coursework, and the sort of rhetoric that contributes to public policy meant to misinform the public (if you don’t, I hope you never get picked to serve on any jury).
I already mentioned the culpability of politicians who make use of rhetoric to spread propaganda for ideological gain. And while this is universally snubbed as, at best, morally questionable behavior, the only reason it’s done is that it works so well. In other words, people get manipulated by the bells and whistles of skilled rhetoricians because they don’t care to educate themselves about the hogwash they are being fed (usually because they agree with, and want to believe, what’s being said to them, even if it’s factually baseless).
The public (at least its voting component) is the primary check on politicians in a democratic republic. However, given the ease with which we can be swayed by faint words of praise and reckless fearmongering, it’s not absurd to think that Plato may have been on to something when expressing doubts about the public’s ability to guard against rhetoricians whose only purpose is to persuade, with complete disregard for the truth of their words.
A secondary check on the rhetoric of public officials is the part of the voting public that makes up the free press. The reason the founders of the United States explicitly mentioned protection for the free press from the government in the first amendment of the U.S. Constitution relates back directly to the role the press (ideally) ought to have as the fact-checkers holding those in power accountable. Unlike the public, a respectable free press has several internal mechanisms in play that work to sift the credible from the credulous. It’s also why the first thing clever rhetoricians do is undermine the very credibility of the free press. “Fake News” is a beautiful example of manipulative rhetoric at its finest, as it plays on the public’s distrust of media sources (i.e. it’s only reasonable to believe that some news outlets fail to overcome the biases of their presenters) and gives it a credulous dose of self-serving generalization (i.e. all news outlets that disagree with me are the biased ones, regardless of any evidence they present to support their position).
Any reasonable amount of critical thought on the subject clearly shows that the fact that news sources can be mistaken (or even outright deceptive) does not therefore warrant the conclusion that all media must be wrong and lying when they report something you don’t want to be true. Once again, it’s up to the public to follow up on the sources any reputable press will readily provide for them to check the merits of what’s being reported. Shouting “Fake News,” however, makes it easier to muddy this relationship between the public and the press, by dismissing all sectors of the press as untrustworthy in general, and allows people to lazily self-select only the media they are already disposed to agree with, without having to be burdened with doing any intellectual legwork.
Journalists are also rhetoricians by trade. Unlike politicians and lawyers, however, members of the free press ought to strive to occupy Aristotle’s more virtuous end of the rhetorical spectrum, which aims to persuade the masses towards truth and knowledge. As journalism moves further towards competing for public viewership in order to continue to operate–thereby having to appease the whims and tastes of the public, rather than seeking simply to inform them–the concept of fact-based reporting threatens to descend completely into the realm of vacuous rhetoric meant to do little more than keep up viewer support (which, as mentioned, is prone to succumb to some flimsy and fickle interests).
The elevation of online personalities, whose sole journalistic experience is being able to cultivate an audience around themselves on video-sharing sites like YouTube, under the neologism of “alternative media,” is an example of a free press where rhetoric takes precedence over fact-based reporting. While I don’t mean to smear those personalities who make every effort to be a respectable source of information, the reality is that the environment of online news commentary is inherently prone to undermining the fact-checking mechanism of traditional journalism, mostly by side-stepping it completely in favor of peddling rhetoric.
These online outlets have little in the way of field-based journalists doing the legwork to uncover newsworthy stories, let alone teams of fact-checkers tirelessly looking through sources and notes to determine the veracity of a story prior to its reporting. In truth, they rely almost entirely on the work of traditional journalists, whose work they present and provide opinionated commentary over, while every so often throwing in jabs at how ineffective traditional journalism is, despite most (if not all) of their actual “news” content coming through the efforts of said traditional journalism. The reason this matters is that it is a clear example in which what could be a respectable profession, and a reliable venue of information for the public, is sacrificing its responsibility to dispense factual knowledge for the convenience of mindless rhetoric, because the latter offers popularity and financial gains in the form of viewer support and sponsorship.
Understanding the role of rhetoric–its values, its uses, and its prevalence–is vital in being able to identify the difference between an impassioned speaker fighting on behalf of a just cause, and a demagogue looking to manipulate the mob to his advantage. It’s vital in being able to distinguish between journalists who go through many painstaking, sleepless nights to report a truth to the people as a public service, and pundit blowhards using the cover of journalism to propagate misinformation for their own gains and egos. In general, to understand the use of rhetoric is to be able to identify it and (if need be) ward yourself against its more dire influences.
Rhetoric is not, and should not be, a dirty word. Like most things, in benign and well-meaning hands it is a powerful tool of communication that can inspire immense good in the world. In the wrong hands, however, it can be the barrier that keeps us permanently free-falling in the abyss of credulity and self-destruction.
Recently the Republic of Ireland held a referendum to repeal longstanding blasphemy offenses in its country. While blasphemy still stands as a finable offense in the Republic under the 2009 Defamation Act, the referendum is still a demonstration that, as far as the Irish people are concerned, charges of blasphemy ought not to be a part of punishable civil law in their nation.
Friends of my adopted homeland here in the United States usually have a conception of Western Europe as being made up of a set of predominantly secular and progressive cultures. And speaking as someone who spent many years growing up in Western Europe, this conception isn’t wholly unfounded. As a result, it might astound many Americans to hear that some of these secular, progressive, ultra-liberal, borderline lefty countries still have enforceable blasphemy laws in place. Granted, the actual enforceability of such laws is largely theoretical in nature, given that they are usually undermined by far more salient laws allowing for the freedom of religious expression and the freedom to believe in accordance with one’s personal conscience. Thus, blasphemy laws currently exist as a vestigial organ in European law books: without practical purpose or application, but still present nonetheless.
“If these laws are unworkable, then why even bother to fret about them with referendums at all? Why not just continue to ignore them, and get on with your blaspheming ways?”
This could be a reasonable response, but it misses an important point concerning blasphemy laws. Putting aside the fact that it makes perfect sense to oppose the criminality of blasphemy on principle alone as unbecoming of any modern democratic nation, there is also the issue of the frail foundation on which the laxity of these laws currently rests. To put it more plainly, the reason blasphemy charges are unworkable in most of the European nations that have them is precisely because the current sociopolitical climate is too secular and progressive to enforce them. However, as any student of history knows, sociopolitical climates are anything but static. So what happens if the political pendulum swings too far to the right, towards a political faction that views the protection of religious sensibilities as far more important to a nation’s cultural well-being than the free expression of its citizenry? Suddenly, these outdated blasphemy laws, which have had no real thrust in civil law for almost two centuries, become a very powerful weapon in the hands of reactionaries all too eager to use the existing rule of law to conform society to their line of quasi-pious thinking. And this is a potential threat both believers and unbelievers alike ought to be concerned about.
Blasphemy isn’t simply the act of professing one’s disbelief in religious claims, whole cloth. Blasphemy is inherent in the very way all religions profess the doctrines that make up their faiths.
Whenever polytheistic faiths, like certain sects of Hinduism, profess the existence of multiple gods, they are blaspheming against monotheistic religions which insist that there is only one god, and none other (and vice versa). Within the monotheistic Abrahamic faiths, when Christians profess that Jesus Christ is the foretold messiah, they are blaspheming against the Jewish faith, which holds that the messiah is yet to come (and vice versa). When Muslims claim that Jesus, though a prophet and a messiah, is not the son of God, they are blaspheming against a central claim of Christianity. The Catholic Church’s stance on the supremacy of the Roman papacy is blasphemous to the Eastern Orthodox Churches, and the Protestant rejection of Catholic ecclesiastical authority is blasphemous to Catholics. The Methodists are blasphemers to the Calvinists, and just about every Christian sect considers Mormonism a heresy.
The obvious point here is that to take away the right to blaspheme is to make it impossible for religious pluralism to exist within a society. Perhaps this is fine as long as your religious opinion is the dominant one in the society you inhabit, but what happens if you find yourself just short of the majority opinion? What if a population shift occurs, and the very laws that enforced the thin-skinned sensibilities of your religious persuasion become the means by which the new dominant line of thought undermines your right to religious expression?
I could stop writing now, and end on this appeal for mutual cordiality between people of all faiths, and how it is in everyone’s self-interest to oppose blasphemy laws, but I fear it would leave things very much against the spirit of the healthy discomfort that blasphemy really should elicit in a person coming across it. On that note, allow me to address the elephant in the room that needs to be brought up whenever concern regarding religious offense of any sort, in law or public discourse, rears its head.
Undeniably, religions make bold claims for themselves. Claims that offer definitive answers on matters concerning life, death, and morality, with a wager on possessing a monopoly on Truth with a capital T. And they are always keen to wrap this all-knowing, all-encompassing bit of absolutist wisdom in a garb of self-proclaimed humility, as if to say, “No, no, don’t mind me…I’m simply professing to know the answers to all of life’s mysteries, ready-made with the consequences (read: threats) that will befall you if you don’t follow along with my modest creed.”
In short, religions by their inherent design simply claim to know things they couldn’t possibly know. But I, in turn, admit that I don’t know. I don’t know what the answers to life’s mysteries are; nor do I know which of today’s mysteries will remain mysterious forever, and which might become common knowledge for generations to come. I don’t know which moral answers yield the most objective good for humanity; nor can I say for sure that such answers are even completely knowable. The truths I do know come with a lowercase t, held provisionally in accordance with forthcoming evidence and reasoned arguments, and I don’t know if I can do anything other than to reject the grammar of bolder Truth claims when confronted with them.
It is precisely because I don’t know that I am left with little recourse other than to examine, question, dismiss, disbelieve, and (when I see fit) deride those who do claim to know, but offer hardly a shred of evidence for their claims. It took centuries of debate and bloodshed by previous generations of thinkers for any of us to be able to enjoy this simple yet powerful privilege of skepticism. A privilege I do hold up as my right, and which I will speak up for without hesitation or apology. What you call blasphemy, I call critical thought. And if anyone can appeal to tradition as a means to protect religious sensibilities by legal means, then I am fully within my right to appeal to the tradition of cultural and intellectual pushback against religious doctrines and religious authorities that has made it possible for any sort of interfaith (and non-faith) social cohesion to exist in the modern world. A tradition that includes the right to the profane and the blasphemous, which cannot be allowed to be abridged in a democratic republic, for as long as one wishes to be part of any nation worthy of the claim.
I’ve heard it said that the hallmark of argumentation is being able to summarize an opposing viewpoint in a way that the person holding this view would agree with your summary of their position; thereby ensuring that you not only understand the viewpoint you are arguing against, but are also tackling the most robust interpretation of the opposing side.
This principle of charity in arguing has been around in debating circles for a long time, but has in the last few years gained traction under the neologism of steelmanning (an obvious play on its antonym, straw-manning, where one argues disingenuously against a position that an opponent never presented and does not hold). And on the face of it, this seems like a great development I can entirely get behind. Who would come out and seriously propose that one should not have a clear understanding of an opposing argument, let alone that one shouldn’t argue against an honest representation of said opposition? This is simply a case where, in principle (even if not in practice), the majority of reasonable people will be of one mind.
That’s all great so far. However (don’t look shocked, you knew this was coming when you read the title of the post), while it’s not hard to steelman the argument in favor of steelmanning, the way in which the concept has been thrown around lately leaves much to be desired for me personally. Whereas it’s meant to stand as an honorable demonstration of mutual respect between intellectual opponents, it has also taken on a form among some very, very lazy thinkers (who nonetheless fancy themselves stalwart intellects) whereby they demand that others strengthen their arguments for them, in ways they never did, and never could have done to begin with.
As a point of principle, if I’m feeling inclined to engage in an argument with others, I will argue against what they say. Not what I think they should say to make their side more compelling. Not even what I would say, were I hypothetically forced to switch to their side at gunpoint. But, strictly, the arguments they give to me to support the viewpoints they deem worthy to state aloud for public criticism and/or derision [no, despite what some people say, mockery does not immediately make one guilty of having committed an ad hominem, as long as the mocking follows a salient line of counterarguments; though weak debaters are usually prone to focus on any well-placed jabs made against them as a clever means to deflect from the fact that they’ve run out of things to say to support their position].
So when I come out and say…oh, I don’t know…promoting the concept of a white ethnostate is racist and fascistic, and I in turn get emails lecturing me about how I haven’t dealt with the most robust arguments in favor of the alt-right’s ethnostate position, I’m going to call bullshit on claims of my supposed failure to steelman such a clearly racist and fascistic position, because I didn’t pamper it first with a string of dishonest white nationalist euphemisms used to conceal a proposition invoking outright ethnic cleansing.
The fact that I can follow an argument from its premises to its unpalatable logical conclusion–whether or not its proponents have the reasoning capabilities or the guts to follow the same thread of their own argument–does not require me to waste my time thinking of ways to make these kinds of arguments more pleasant for mass consumption before I attempt to refute them (personally, I find it far more honest to deal with things in their unfiltered form). Nor am I required to do other people’s intellectual legwork for them, and bend over backwards to make their arguments stronger than they could ever hope to do on their own, so they can feel like they are being given a fair hearing in “the marketplace of ideas” (TM), where apparently every half-baked idea should be allowed to be spouted free of consequences.
Instead, I’d suggest that if you keep finding yourself in a position in which you have to call on people to give your arguments the most charitable interpretation, you should: 1. Consider the possibility that you are a lousy communicator on behalf of the positions you are looking to promote, and 2. Give some thought to the notion that it’s not really the case that people are misinterpreting your views as absurd, horrendous, or laughable, but that your views actually are exactly that.
If you feel the need to argue a point, go argue it. If you want to have controversial conversations, then have them. But if you’re going to spend as much time whining afterwards about how everyone’s just so mean and unfair to you because they won’t paint every inane thing you say in the best possible light–or take every opportunity to fellate your ego about how brave you are to say dumb shit people will take offense to–save us all the trouble (and the bandwidth) and keep your poorly constructed arguments to yourself.
Genuine self-scrutiny is a personal virtue that is much easier preached than practiced. Usually the furthest most of us are willing to go is a relativistic acknowledgment that differing opinions exist and that, all things considered, we would be willing to change our minds if these alternative viewpoints were to persuade us sufficiently. But, in my opinion, this sort of tacit relativism isn’t much in the way of self-scrutiny. To self-scrutinize is to actively challenge the values and ideals we hold dear to our person–to dare to shake the foundation holding up our most cherished beliefs, and test whether the structure in which we house our beliefs is sturdy enough to withstand a direct attack. In contrast, the aforementioned acknowledgment that differing (and potentially equally valid) views exist alongside our own is a very passive stance, as it strictly relies on an external source to come along and challenge our position(s), with no actual self-scrutiny being involved in the process.
Up to this point, this very post can be rightfully characterized as belonging to the passive variant; i.e. it’s me (an external source) attempting to challenge you to question the manner by which you view the world around you. Although there are occasionally posts on this blog in which I sincerely try to adopt stances opposed to my own, the truth is that I do this primarily to better strengthen my own position by being able to effectively understand what I’m arguing against. This, too, is not self-scrutiny. And it would be dishonest to pretend otherwise.
To truly self-scrutinize I would have to pick a position–a value, an ideal–around which I orient my worldview, and mercilessly strip it to the bone. The frustrating part of such a mental exercise is the inevitability of having to rely on generalizations of my own opinions in order to be able to paraphrase them thoroughly enough, without getting trapped in a game of petty semantics. The important thing to remember is that the points I will be arguing over with myself in this post are admittedly stripped of their nuances regarding some obvious exceptions and caveats, so as to not lose focus on addressing the underlying principles being discussed. Consider that a disclaimer for the more pedantic-minded among my readers (you know who you are).
First, it would be helpful if I stated a value around which I orient my worldview, prior to trying to poke holes in it. Above most else, for as long as I can remember, I have always valued the egalitarian approach to most facets of human interaction. I truly do believe that the most effective, just, and fair means for society to function is for its sociopolitical and judiciary elements to strive for as equitable an approach to administering their societal roles as possible. In this view, I also recognize that this can more realistically be considered an ideal for society to endeavor towards rather than an all-encompassing absolute–nonetheless, I still see it as a valuable ideal for modern society to be striving towards, even if we must acknowledge that its perfect implementation may forever be out of our grasp.
Additionally, I should clarify that I do not necessarily claim this personal value of mine to be derived from anything higher than my own personal preferences for how I think society ought to be. Yes, it is subjective, because it is subject to my desires and interests; however, I would argue that this is true of just about any alternative/opposing viewpoint that may be brought up. Furthermore, the merits and benefits I believe to be implicit in my personal preference for an egalitarian society (though admittedly subjective) are, in my opinion, independently verifiable outside of just my own internal desires. In short, I value egalitarianism because, having no just and tangible means by which to sift through who merits which position in the social hierarchy, I consider it important that (if nothing else, at least in the basic application of our political and judicial proceedings) we hold all members of society to an equal standard. Moreover, not that it matters to determining the validity of the egalitarian viewpoint, but I’m convinced that the majority of the people reading this will have little trouble agreeing with the benefits of such a worldview (though probably more in principle, while leaving room for disagreement on the most practical means by which to apply said principle in a social framework).
Now, the immediate issue I see arising with this stance of mine is the objection that genuine egalitarianism can easily lead to outright conformity–especially enforced conformity–as a society built on the model of complete equality might find it difficult to function unless it actively sets out to maintain the equality it’s seeking to establish.
It is a harsh fact that large-scale human interaction is not naturally egalitarian, meaning that, left to their own devices, there is little in the historical record to suggest that a society of people will not diversify itself into a multi-layered hierarchy, thereby instinctively creating the social disparity that the egalitarian mindset is aiming to combat. The most obvious response would be to insist that egalitarianism simply means that the basic functions of society (i.e. the laws) have to be applied equally, and that as long as such measures are upheld in society, the system can self-correct to its default setting. Yet, this outlook is only convincing as long as one is inclined to have faith in the sincerity of the application of the law, in terms of holding all in society to an equal standard. This also brings us to the issue of who is to be the arbiter warranted with upholding the principles of an egalitarian system. The judicial system? The policymakers? The public at large? And does this then bestow on these individuals a degree of authority (i.e. power and privilege) that thereby creates a disparity which in itself violates the very premise of a truly egalitarian model?
“In a democratic society, the authority rests with the people in the society to ultimately decide on who is to be the arbiter(s) to ensure that equality is being upheld in said society on the people’s behalf.”
But maintaining social equality by means of representative democracy brings us to the issue of having those in the minority opinion be subject to the whims of the majority. And is this not also in itself a violation of what an egalitarian society ought to be striving for?
When we play out the potential pitfalls of every one of these concerns what we end up with is the realization that, in practice, egalitarianism seems to only function when applied on a selective basis. Complete equality, across the board, on all matters, has the serious consequence of either ending up in a social gridlock (rendering all manners of progress on any issue impossible), or coercion (negating the benignity that is ideally associated with egalitarianism).
I’ve heard it said that in this sort of discussion it is important to differentiate between equality of outcome and equality of opportunity; that the latter is the truly worthwhile goal an egalitarian ought to be striving for in order to ensure a just and fair society. I’m not sure this does much to address the primary issue at hand.
If there exists no disparity in opportunity, but we reserve room for an inequity in outcome, then will it not be the case that you will still end up with a select number of individuals occupying a higher role in the social hierarchy than others? And once the foundation is laid for such a development, is it not just as likely that those who end up occupying a higher role could put in place measures that will be of interest to themselves alone, or even at the expense of those who fall into lower social roles? Meaning that even though in this model all opportunity was equally available at first, the caveat that different people can have different outcomes–fall into more favorable and less favorable social conditions–fails to safeguard against the potential dilemma of having those who manage to rise high enough manipulate matters in society to their advantage, thereby stifling the outcome and opportunity potentials of future generations. If the rebuttal is that in a truly egalitarian society measures would be in place to prevent this, we fall back to the question of who exactly is to be the arbiter warranted with upholding the principles of an egalitarian system, thus bringing us full-circle to the line of inquiry mentioned in the preceding paragraphs. Hence, making an equality of outcome vs. equality of opportunity distinction does little to nothing to resolve the issues being discussed here.
All these objections are ones that, even as someone who considers himself an egalitarian, I can sympathize with. Mainly because I don’t have any way to refute them without appealing to a personal intuition that these concerns are not endemic to an egalitarian model, and that it’s ultimately feasible to avoid such potential pitfalls when we leave room within the social system for it to be amenable to debate and revision. However, I have to also admit that I’m not always entirely sure of this myself.
This problem brings me directly to the confrontation of what should be valued more in society: the complete equality of all people, or the value of the autonomous individual? And whether creating such a dichotomy is necessary, or a balance can be struck in satisfying the interests of both entities?
The threat that removing all disparity between individuals might lead to a stifling of the distinct individuality of people is something I believe is worth worrying over. What good is a world where equality is triumphant but reigns on the merits of absolute sameness? Not to mention, what will happen to the human ingenuity all of us in modern life depend on for our survival as a society? The prospect of personal achievement is predicated on one’s ability to stand out from the crowd, and create something unique and distinct from that which is common. The possibility that this drive will be held suspect in a completely egalitarian world, in the name of preemptively combating all forms of perceived inequality, no matter how unpleasant it might be to my core values to acknowledge, is not something I can dismiss simply because it’s inconvenient to my worldview. Essentially, I believe it would be unwise to simply brush off the point that a world safeguarded to the point where no one falls is also potentially a world where no one rises.
When I started writing this post I had a standard set of points I knew I would raise to fulfill my interest of demonstrating a genuine attempt at unrestrained self-scrutiny. I know that some readers might wonder why I’m not doing more to combat the objections I’ve raised here against my own egalitarian perspective, and the simple truth is that I understand my desire for egalitarianism to be practical and feasible rests almost entirely on the fact that I want both of those things to be true, as it would validate my presupposed worldview by fiat. Nonetheless, I do understand that reality does not depend on my personal whims and wishes. In all honesty, having actually reasoned out the premises here, I’m left wondering why, if for the sake of practicality we will undoubtedly always be forced to be to some extent selective with our approach to egalitarianism, we (myself included) even bother calling it egalitarianism at all. Perhaps there is a term out there that more honestly fits what most of us mean when we strive to uphold what we refer to as egalitarian principles. That, however, is a wholly separate discussion from my intentions here. My goal was to hold my own views and values to the fire and see where they end up. In that goal, I think I’ve succeeded…what results from it will take a bit more thinking on my part to figure out.
The other day I got a chance to revisit John Wayne’s epic war film The Alamo. As one can assume from the title, the film depicts the events surrounding the 1836 Battle of the Alamo, whose legacy served to inspire popular support for the ongoing independence movement led by the white American colonists living in what was then Mexican territory. It would be an understatement to say that the film does not strive for historical accuracy. Rather, it focuses on the mythical nostalgia that has developed among the white Texan population since the battle (and persists to this day), fervently espousing a message of freedom and republicanism over tyrannical oppression as a likely allegory to the Cold War struggle taking place at the time of the film’s release in 1960.
In Gunfighter Nation, historian Richard Slotkin defines myths as “stories drawn from a society’s history that have acquired through persistent usage the power of symbolizing that society’s ideology and of dramatizing its moral consciousness” (p. 5). Within the history of Western expansionism, the Alamo stands as a hallmark of American fortitude, where the legacy of the event has all but displaced any concern for veracity by its admirers. This is the sentiment on which John Wayne builds his tale of The Alamo, operating chiefly within the framework of the Western genre that his own quasi-mythical persona helped create in American culture. The message that Wayne is adamant to reverberate throughout the film is one of nostalgia, as evidenced by how the plot begins and mounts its climax with Sam Houston prophetically commenting on the need for future generations to remember and uphold what is being done in 1836, to keep it in their hearts as the life of Texas.
Although the film’s setting is in Texas, depicting a Texan struggle for freedom from oppression, John Wayne’s constant reminiscing about republicanism—a clear attempt to mimic his perceived Jeffersonian ideal of democracy—transforms the entire narrative into a classic tale of American virtue relatable to all red-blooded patriots. It doesn’t take much to realize that Wayne’s Davy Crockett is not meant to be an accurate representation of his historical namesake, but an emblematic stand-in for Wayne’s personal principles (as seen by the dialogues his Crockett gives, where the lines often closely match Wayne’s 1977 patriotic oration America: Why I Love Her).
This is best seen in the first exchange between Colonel William Travis and Davy Crockett, where Crockett proclaims, “Republic, I like the sound of that word. It means that people can live free, talk free, go or come, buy or sell, be drunk or sober, however they choose. Some words give you a feeling. Republic is one of those words that make me tight in the throat.” Of course, the irony that Texas is being freed by slaveholding Americans is absent from Wayne’s proclamation. Instead, he focuses on the myth that Americans (in particular white Southerners) heralded the true spirit of the Texan cause: freedom. This is vital in establishing the message that we are viewing a battle between right and wrong, and since an independent Texas is presented as the land of opportunity, hope, and future, all who stand against it can only be on the side of despair and tyranny. The essential myth Wayne accomplishes here is the substitution of frontier Texas into contemporary America’s struggle against the evils of the world.
The film itself acknowledges its affirmation of myth over fact in a telling scene in which Crockett reads out a forged letter he had written under Santa Anna’s name, urging the American men to leave Texas at once. The pompous tone of the letter causes Crockett’s men to see it as a clear attempt at intimidation, and as men they are obligated to respond harshly to such antics. Crockett does immediately admit that he in fact wrote the letter, but justifies it on the grounds that its contents were in line with what Santa Anna might have written. Nevertheless, the men are so agitated at the possibility of Santa Anna addressing them so self-righteously that they readily take up the Texan cause for freedom and independence as their own. Never mind that the letter was a fake, created and existing solely in Crockett’s imagination. Moreover, no man present bothers to question how Crockett, a native of Tennessee, whose knowledge of Santa Anna stems solely from hearsay, could possibly know what sort of message Santa Anna would give to these Americans. And no one cares, because the reasoning behind established myth “is metaphorical and suggestive rather than logical and analytical” (Slotkin, p. 6).
The Alamo is a film that needs to be analyzed through the time in which it was made in order to fully grasp its underlying theme. In 1960, the United States was deeply engaged in its Cold War struggle against the Soviet Union, a conflict which to most Americans stood as the absolute battle between liberty and tyranny. Of course, in 1960, America had little idea of how the conflict would eventually unfold over the next three decades, thus it became a dire priority to raise American consciousness against the forces of oppression on the other side of the world. John Wayne, being a staunch anti-Communist, anti-Leftist patriot, creates a historical narrative that serves as a helpful analogy for the American people to grasp how the fight against tyrants is an American virtue that reaches deep into the country’s roots.
For Wayne, promoting such a message could also have been an attempt to atone for his failure to serve in World War II, an inconvenient truth for a man who built his career on portraying brave patriots who answered the call of duty for their country. The fact remains that John Wayne could only live up to his image in make-believe movies, never in real life, which perhaps fostered much of his simplistic dialogue promoting war against perceived tyranny. The opening scene of The Alamo starts with a harsh condemnation of Santa Anna as a malicious dictator, determined to “crush all who oppose his tyrannical rule.” Just as the Cold War narrative between the United States and the Soviet Union was simple, so is the narrative between Santa Anna’s Mexico and the American-Texan forces in 1836: it is simply a fight between right and wrong.
Little background information is given about any of the major characters involved in the fight for Texan independence. Nor is much said about why a large population of white Americans is living in Mexico to begin with, or how they are specifically being oppressed by their adopted country. Crockett and his men are the only white settlers shown actually immigrating to Texas, and the only background on Crockett is that he was in Congress before becoming a raccoon-hat wearing adventurer on the frontier. Even then, his time in Congress is portrayed as a mundane chapter in his life rather than a worthwhile endeavor on his part (Crockett’s negativity toward policymakers is likely a reflection of Wayne’s own frustration with contemporary politicians who, in his view, were not doing enough to combat the menace of the Soviet Union).
It also does not take much to see that Santa Anna is meant to be a representation of the archetypal Soviet dictator—though perhaps not so much on par with a Stalinist megalomaniac, as a boorish Khrushchev autocrat. As a result, John Wayne is attempting to blend the urgent threat of the present with a treacherous (yet, ultimately defeated) enemy of the past; hence, Crockett’s nostalgic musing about the state of his mind right before a noble, though hopeless, battle as “Not thinking; just remembering.” Yes, a battle may be lost, but the final outcome has always been victorious for those who choose the right path; the war will still be won in the end.
John Wayne’s The Alamo revolves heavily around the notion of cultural nostalgia, and how this looking toward the past serves to foster a positive consciousness toward the future. Wayne does not care to provide a reliable history lesson to his viewers, however. Instead he provides a needed myth that retells a known story the way he believes it ought to have happened, and ought to be seen. In that sense, he is foreshadowing the lines that will be uttered in one of his better cinematic works, The Man Who Shot Liberty Valance: “This is the West, sir. When the legend becomes fact, print the legend.” The Alamo is the legend, not just to Texas but to all freedom-loving republics (i.e. America as a whole), and for John Wayne, if it is to be remembered at all, it had better be done the right way: his way.
From its initial publication on November 24th, 1859, Charles Darwin’s On the Origin of Species revolutionized the scientific field through its presentation of evolutionary theory as the biological process capable of accounting for the diversity of life observed in the world. And the key means by which Darwin proposed evolution to be possible was a mechanism he called natural selection.
From the start, controversy arose against Darwin’s strictly naturalistic explanation for the emergence of new species, and opposition formed swiftly to denounce evolution by natural selection as an insufficient theory, unscientific in its analysis. Most of the early opposition was religious in nature, but a more legitimate note of dissent came from Darwin’s own colleague Alfred Russel Wallace, who criticized Darwin’s choice of the term natural selection as misleading to the general public, because it needlessly implied a selector in the process. Darwin countered Wallace’s objection by arguing that, for explanatory purposes, natural selection was a sufficient term, as it gives people a descriptive (albeit metaphorical) idea of how the wholly naturalistic phenomenon operates by comparison with the widely familiar practice of artificial selection.
Wallace himself was a proponent of evolution (often referred to as its co-discoverer along with Darwin), and was by no means opposed to the idea of natural selection. He simply preferred the phrase “survival of the fittest” as a much better alternative to natural selection, arguing:
Natural Selection is, when understood, so necessary and self-evident a principle, that it is a pity it should be in any way obscured; and it therefore seems to me that the free use of “survival of the fittest,” which is a compact and accurate definition of it, would tend much to its being more widely accepted, and prevent it being so much misrepresented and misunderstood.
Wallace thought that among the scientists in the field, who understood their work, the use of natural selection was not an issue, but among those who did not understand evolution and its process, the metaphor would fail to convey Darwin’s true meaning. Undoubtedly aware of the attacks his and Darwin’s theory was already being subjected to, Wallace must have been worried that confusing people about the function of natural selection with metaphorical language would only serve to move skeptical minds further away from embracing evolutionary theory.
Darwin responded by agreeing that natural selection can be misleading to some, and even decided to incorporate “survival of the fittest” alongside natural selection in subsequent editions of On the Origin of Species as a compromise to Wallace. But Darwin also commented that, through the continued use of natural selection, his intended meaning would become more widespread and weaken the sort of objections Wallace made. Despite these concessions, Darwin remained largely dismissive of Wallace’s concern, even bluntly responding that Wallace overstated the case for the opposition, and implying that certain individuals will misinterpret any term simply because they are too keen on scrutinizing matters that are trivial to the average person.
Darwin introduced the concept of descent with modification (i.e. evolution) in Chapter I of On the Origin of Species by drawing parallels to the artificial selection observed in animal domestication, something most of his readers would have been familiar with at the time. He does this as a means of easing his audience into his argument in Chapter IV, where he finally makes his case for natural selection. The confusion Wallace referred to can be seen here in Darwin’s parallel between artificial and natural selection, and in his statement that, “This preservation of favourable variations and the rejection of injurious variations, I call Natural Selection,” because it indicates the presence of intelligent oversight (as is the case for artificial selection), when in reality no such implication need be made for the process to function. Though in his exchange with Wallace, Darwin appeared to be shrugging the matter off as a nonissue, he nevertheless thought it important both to defend his use of natural selection and to clear up any confusion about his intent in later editions of the book: “It is difficult to avoid personifying the word Nature; but I mean by Nature, only the aggregate action and product of many natural laws, and by laws the sequence of events as ascertained by us. With a little familiarity such superficial objections will be forgotten.” He thereby reiterated his confidence that, by continually familiarizing the public with his true intended meaning for natural selection, the term could be salvaged and the misguided dissent would disappear.
Charles Darwin insisted that metaphorical terms are needed in science for the sake of expressing an idea, and that it is the general descriptive quality that ought to be focused on by readers, not so much the personification of abstract concepts. For example, when one says that particles are physically attracted to one another, few actually think there is some sort of conscious intimacy taking place between the consciousness-devoid matter. The same goes for the description that gravity pushes down on a table: nobody would claim that the result caused by the force is driven by a self-aware intention to hold on to the object. In the case of natural selection, while in a literal sense a misnomer, it is nevertheless an apt description of the mechanism taking place.
Despite what is often asserted within anti-Darwinian circles, evolution by natural selection is actually not a completely random phenomenon, in that there does occur a mode of selection. To explain it simply: Different variants exist among and within different species, exhibiting different traits; some of them will be better adapted to a given environment, thus they will better survive in said environment, leaving more descendants with the same beneficial traits than the less adapted variants do. The process is blind, unguided, and in the long run goalless, but not really random, in that nature itself non-randomly provides the setting in which the various random traits will either flourish or flounder. Thus, although the selector is an unintelligent and unaware agent, it is a selector nonetheless: a natural selector. Meaning that Darwin’s use of natural selection as a metaphorical expression to describe the mechanism of evolutionary theory is a fitting one, and an entirely justifiable one.
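The interplay described above (random variation filtered by a non-random environment) can be sketched in a toy simulation. Everything here is an illustrative assumption, not biology: a “trait” is a single number, the environment’s “optimum” is fixed at 0.8, and fitness is simply closeness to that optimum. The point the sketch makes is that even though variation is random, the population’s average trait drifts non-randomly toward the environmental optimum:

```python
import random

def simulate_selection(generations=50, pop_size=200, seed=42):
    """Toy model of selection acting on random variation.

    Each individual is a single 'trait' value in [0, 1]. Survival is
    non-random: individuals closer to a fixed environmental optimum
    leave more offspring, while offspring inherit the parental trait
    plus small random variation (a crude stand-in for mutation).
    """
    rng = random.Random(seed)
    optimum = 0.8  # the environment's fixed, non-random "setting"
    population = [rng.random() for _ in range(pop_size)]

    def fitness(trait):
        # Closer to the optimum means better adapted.
        return 1.0 - abs(trait - optimum)

    for _ in range(generations):
        # Non-random selection: the better-adapted half become parents.
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        # Random variation: offspring inherit a parent's trait plus noise.
        population = [
            min(1.0, max(0.0, rng.choice(parents) + rng.gauss(0, 0.02)))
            for _ in range(pop_size)
        ]
    return sum(population) / pop_size

# The mean trait drifts toward the optimum (0.8) despite random variation.
print(round(simulate_selection(), 2))
```

No individual in this sketch “chooses” anything, and no step involves foresight; yet the outcome is patterned rather than random, which is exactly the sense in which the environment acts as an unaware selector.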
Natural selection, as a term, is metaphorical only in the broad sense, but very descriptive in light of a proper understanding of the science involved in its function. Darwin was right to point out that, given enough promotion, a phrase will begin to take on the definition popularly assigned to it even among the most stubborn minds. The term Big Bang was originally coined as a dismissive mockery of the theory, and is neither accurate nor descriptive, but it has come into such wide use that the objections have been thoroughly forgotten, and nobody emphasizes its metaphorical implications. This leads into the main point, one that Darwin himself indirectly made to Wallace: for those who are opposed to the implications of evolution, no term or explanation will be satisfactory, and misconstruing natural selection is a means by which to either conform the concept to their personal liking or discredit it as insufficient. The same would happen with “survival of the fittest,” or any other alternative phrase that could be proposed. And it is through the merit of its work that science is judged, not by its ability to accommodate the ignorance of its detractors.
 Francis Darwin and A. C. Seward, eds., More Letters of Charles Darwin: A Record of His Work in a Series of Hitherto Unpublished Letters (London: John Murray, 1903), vol. 1, 270.
 Francis Darwin and A. C. Seward, eds., More Letters of Charles Darwin: A Record of His Work in a Series of Hitherto Unpublished Letters (London: John Murray, 1903), vol. 1, 272.
 Charles Darwin, The Origin of Species, ed. James Secord (Oxford: Oxford University Press, 2008), 111.
Friedrich Nietzsche wrote extensively about his interpretation of human development (as well as human degradation), and in his beautifully articulated fervor he often fell into the habit of overextending his narrow understanding of evolutionary theory.
One cannot erase from the soul of a human being what his ancestors liked most to do and did most constantly / It is simply not possible that a human being should not have the qualities and preferences of his parents and ancestors in his body, whatever appearances may suggest to the contrary (Beyond Good and Evil, “What is Noble,” Section 264).
The detrimental part of Nietzsche’s error above is his apparent endorsement of Lamarckian inheritance (an early evolutionary hypothesis which held that organisms can pass on traits acquired in their lifetimes to their offspring; it has been largely displaced as a scientifically viable theory in favor of Darwinian natural selection). In the same section, Nietzsche goes on to say that if one knows the character traits and likes of the parents, an accurate inference about the child’s personality traits and likes also becomes possible; emphasizing that it is only, “with the aid of the best education that one will at best deceive with regard to such a heredity.” Yet Nietzsche ignores the impact that environmental pressure plays on the development of a child’s psychology, i.e. the fact that people (in particular children) readily adopt the characteristics and traits that are prevalent in their surroundings (this is not an absolute rule, but a general tendency).
For example, I have always lived in working-class urban areas in the United States, where there reside quite a few immigrant households (my own included). And where there are immigrant households in the U.S., there are also first-generation Americans. By Nietzsche’s assessment these first-generationers should retain the “qualities and preferences” of their parents and ancestors, yet in reality, more often than not, they simply don’t.
If they were born here–or arrived here at a young age–went to American schools, associated with American peers, and indulged in American pop culture to any extent, their qualities and preferences will be inseparable from those of anyone else whose ancestry goes back several generations in this country. This will be true in regard to their most basic characteristics, such as their accents, their mannerisms, their values, their ideals, their politics, and their interaction with societal phenomena. What remains of the traditional ties to the parents’ mindset becomes solely a sentimental practice for the sake of the still unassimilated elders, rather than a reflection of sincere attachment to ancestral values.
Nietzsche might have countered by saying that this is just part of the deceptive education he warned about. But if we accept that people can be deceived about their likes and preferences by their surroundings, does it not also warrant the notion that people are deceived about their likes and preferences by their parents (i.e. childhood indoctrination), rather than having inherited them by Lamarckian means? In fact, under close scrutiny Nietzsche’s two opposing premises seem to be virtually identical, as long as one does away with the Lamarckian inheritance component in the first.
Nietzsche rejected free will as a viable factor in human psychology. Thus he may have been motivated to accept acquired inheritance as a necessity to explain human behavioral traits in a completely deterministic universe. But, if so, this is a needless exercise on his part, since the fact that people’s behaviors are determined by a combination of genetic factors (in a purely biological sense, not the abstract personal interests discussed above) and environmental ones is sufficient to offer a thorough explanation of the matter. However, I doubt that free will played any real role in Nietzsche’s reasoning on the subject.
More likely, Nietzsche saw Lamarckian inheritance as a more fitting addition to his greater philosophical aims. Charles Darwin had adamantly proposed that, in the grand scheme of things, the only coherent way to speak of evolution is on the level of populations, not individuals. To Nietzsche–who by all accounts had no trouble accepting either Darwin’s theory of natural selection, or the common descent of living organisms–this view would have been too naive to satisfy his want for a more inward self-reflection (he was, after all, more a philosopher than a scientist), and I suspect he probably saw it as antithetical to his own promotion of individual development and preservation over the preservation of the population as a whole.
Thus it might be safe to say that, in this case at least, Nietzsche fell into the same trap he had warned others of with so much rational eloquence. He overlooked the fact that the veracity of a conclusion cannot be determined by its conformity to our preferences, but must stand on its own merits.
As rare as it is for me to have interactions with Kronstadt Revolt (KR) readers, the few times it does happen, it occurs exclusively outside the actual confines of the blog (i.e. mostly emails). My best guess is that due to my low posting frequency they want to make sure there is actually someone still typing away at a keyboard behind the dashboard menu before fruitlessly putting a comment into moderation limbo that may never be read or approved by anyone (as an fyi, comment settings are set to only moderate the first comment you post, to make it easier to control spam from bots; after that first-time approval showing you’re human, your subsequent comments should post automatically. Update: comments are no longer moderated at all–go nuts, people!). Never mind that my Twitter updates are about as (in)frequent as my blog posts; it is the trend that has developed, and I’m happy to interact with readers who feel the need to check in on a thing or two, here and there.
By far the most viewed posts I have on KR are the ones about Friedrich Nietzsche (with Nietzsche’s Views on Women in particular getting the lion’s share of these views). Given the popularity of the subject, I suppose it makes sense that the majority of questions I get revolve around people either asking for clarification about Nietzschean philosophy, or challenging my interpretation of it. Neither of which I mind. Considering I opened up the conversation on the subject, it would be absurd of me to scoff at either people asking for more details, or questioning my perspective. (If nothing else, I can at least point them to better resources than myself on anything I personally fail to address; usually Nietzsche himself.)
Over the past few months, however, the sporadic question or two I find in my inbox about Nietzsche has more than a few times come attached with one other name: Jordan B. Peterson. These usually arrive not so much in the form of a question as an eager endorsement for me to explore the man’s views on similar topics (or just any of the wide range of social/psychological topics he covers). If nothing else, the man has an enthusiastic fan base, which has grown exponentially since his name started making the rounds on the online “memosphere” in late 2016. Since then his lectures have become increasingly popular on YouTube, and many people (mostly young men, but others, too, I’m sure) regard him as a foremost intellectual of our time, going so far as to credit him with re-instilling guidance in their lives.
In part, I’m writing this post to serve as a bookmark I can direct future inquiries to that may come my way regarding my thoughts on the man. Let me start off by saying that I was aware of Peterson somewhat before I was actually aware of Peterson. To put it less cryptically, I first saw the man in a YouTube segment back in 2011, where he opposed a set of atheist bus ads in Toronto, and where he stealthily mentioned that atheists like Richard Dawkins maybe should be oppressed (one might be inclined to assume he’s come a long way in the promotion of free speech, given that he has cultivated it as one of his leading mantras over the last two years; however, a general dislike of, and outright hostility toward, open atheism–let alone outright anti-theism–remains a common theme for Peterson to this day, despite his popularity with centrist-to-conservative-leaning atheists online).
Unfortunately, in the segment Peterson is never asked whether it’s warranted to be so hostile towards a limited bus ad campaign put on by atheist activists (on their own dime, no less), when one often can’t go 2 miles in most North American metropolitan centers without coming across scores of billboards, posters, films, books, songs, graffiti, church signs, church buildings, and motel room nightstands, all advertising on behalf of Christianity, with little worthwhile resistance from secular voices.
While I didn’t notice it at the time of my first viewing of that debate, I had also come across Peterson’s work a few years prior in the form of his 1999 Maps of Meaning, a book that left no impression on me due to its overemphasis on Jungian psychoanalysis (much of which rests on highly unfalsifiable assertions, which irks not just me, but modern psychology as well, since as a field it has largely moved away from Carl Jung’s theses and conclusions). The writing style in the book is also occasionally laced with a distinct tone of self-importance (i.e. repeated mentions of how grand the contents held within its pages truly are) that I find personally distracting. This is just a subjective matter of literary taste (so think of it as nothing more), but my take has always been that if a work is important/intelligent/paradigm-changing, it is better to let the work speak for itself than to boast about it to the reader within the very work. As a result I quickly forgot the book and the man who wrote it, and failed to recognize him as the “Canadian man opposed to atheist bus ads” I saw years later. I honestly never expected to come across him again, especially not with the large following his views have garnered since my first exposures to him.
Yet, since around early 2017, he has popped back up not just on my radar, but in a great deal of the sociopolitical/cultural discourse, causing me to try to familiarize myself with his views again (though with a bit more concentration than before). Peterson is a psychologist by trade, and a lot of his content deals with the dynamic between chaos and order as prominent in the lives of individuals struggling to find meaning in their existence. This may be why he’s been described as a surrogate father figure to a segment of millennials who feel directionless in the modern world; a viewpoint as much harped on by his critics as it is embraced by his admirers. His advice can range from the practical (“Clean your room; straighten yourself out first”), to dire warnings against the influence of cultural Marxism (lately, he’s been more keen on dropping the Cold War terminology in favor of a more updated “Neo-Marxism,” or just plain “postmodernism”–two distinct terms he has a habit of using interchangeably), to his more spiritual messages bemoaning the modern world’s loss of traditional (i.e. Christian) faith (essentially, he finds that there’s value in the historical/psychological meaning religion, in particular–if not exclusively–Christianity, offers to the human psyche; this social criticism of his is often tied in to his screeds against Marxism and postmodernism, too).
Because the questions directed at me about Peterson involve my thoughts on his thoughts about Nietzsche, I’ll write my quick take on what I’ve seen of him on the subject so far. He strikes me as someone who doesn’t so much read Nietzsche’s writings as read into Nietzsche’s writings (a habit I warn against in my own book) to make the philosopher’s views sound more sympathetic to his own.
Whenever he brings up Nietzsche in his lectures, it’s usually to point to the Prussian philosopher as an intellect who foresaw the nihilism that the Western world’s gradual move away from traditional (i.e. Christian) faith would lead to, and to cement Peterson’s personal views on why the preservation of Christianity (even if only as a metaphorical archetype to be aspired to) is important both for the individual, and for Western civilization as a whole. The caveat that he doesn’t usually bother to focus on in these lectures, however, is the fact that as far as Nietzsche was concerned, Christianity itself is ultimately a form of nihilism, precisely because its grounding foundation is imaginary and can therefore offer no lasting counter to the harsh empirical reality the modern age has forced on us. Nietzsche’s subsequent objections to contemporary secular philosophers attempting to create alternatives to Christian values were not due to their move away from Christianity as a moral framework, but to their continued reliance on what he considered to be fundamentally Christian morals. Hence, the philosopher’s wider intellectual project of wanting to create a transvaluation of all values, in which Christian concepts like Good, Evil, and Sin are to be displaced by a philosophy that affirms life, rather than fetishizes death.
In Nietzsche’s view, Christianity at its core will always be (and can never be anything more than) a death cult that inverts man’s base instincts and desires into absurd notions of sinfulness, rendering it a moral system entirely hostile to life. (As a reference, I offer every page, paragraph, and sentence of Nietzsche’s The Antichrist, whose German title can equally be translated as “The Anti-Christian.”)
I’ll grant that, given the many hours of lecture footage Peterson has up on YouTube where he explores numerous philosophical topics, it’s possible that I missed the part where he goes into depth regarding Nietzsche’s staunch anti-Christian position, and how it’s completely incompatible with his own defense of Christian moral values as a framework for society. But from all the footage I have seen (and it seemed like quite a bit at the time of viewing), Peterson always evokes Nietzsche as a kind of kindred spirit, who would have sided with him against the godless forces undermining Christian morals as a sound foundation of meaning for people. And, speaking as someone whose familiarity with Nietzsche is a bit more than the average layperson’s, this strikes me as mistaken at best, and downright deceitful at worst.
I’ve been warned that Jordan Peterson fans have a tendency to get cheeky when they come across even the mildest pushback against their favorite psychologist, so my preemptive retort is that, yes, my room is always in a state of unmatched tidiness, and my stance is so upright one would be mistaken to call me anything less than permanently erect. Hope that settles the matter.