Nietzsche on Free Will

There is some confusion and misunderstanding floating around concerning Friedrich Nietzsche’s thoughts on the concept of free will. By this I mean the willful refusal of many admirers of the philosopher to accept the fact that he wholeheartedly rejected the existence of anything akin to free will.

To Nietzsche, free will is a concept that cannot be separated from its religious underpinnings, thus: “God has been thoroughly refuted; ditto, ‘the judge,’ ‘the rewarder.’ Also his ‘free will'” (Beyond Good and Evil, “What is Religious,” section 53).

Since Nietzsche gives no credence to the religious worldview, he sees no reason why religious concepts ought not to be rejected right along with the rest of the divine packaging: “The desire for ‘freedom of the will’ in the superlative metaphysical sense, which still holds sway, unfortunately, in the minds of the half-educated” (Beyond Good and Evil, “On the Prejudices of Philosophers,” section 21).

He does acknowledge, however, that many of his irreligious peers still try to preserve some non-supernatural version of free will, a sentiment Nietzsche describes as the need of individuals to hold onto a sense of personal responsibility: “some will not give up their ‘responsibility,’ their belief in themselves, the personal right to their merits at any price” (Beyond Good and Evil, “On the Prejudices of Philosophers,” section 21). For the naturalistic thinkers who hold to the existence of free will, the mindset is an attempt to salvage the idea of accountability (their own, and that of others), and by extension, the institution of justice and due punishment for one’s actions.

But Nietzsche rejects this desire as a misdirected conflation of two separate issues; namely, a conflation of justice with punishment, and a further conflation of both of these with free will:

The idea, now so obvious, apparently so natural, even unavoidable, that had to serve as the explanation of how the sense of justice ever appeared on earth–“the criminal deserves punishment because he could have acted differently”–is in fact an extremely late and subtle form of human judgment and inference: whoever transposes it to the beginning is guilty of a crude misunderstanding of the psychology of more primitive mankind (On the Genealogy of Morals, “Second Essay,” section 4).

Nietzsche proposes that the origin of justice can be more accurately characterized as a form of trade, serving as a method to equalize two competing parties, and not necessarily as a punishment for one’s freely chosen actions. In fact, in such a framework the emphasis on punishing offenders is superseded by the notion that “every injury has its equivalent and can actually be paid back, even if only through the pain of the culprit” (On the Genealogy of Morals, “Second Essay,” section 4).

As already mentioned, Nietzsche’s rejection of free will is tied in with his general rejection of theism. And he feels that the efforts of atheistic philosophers to retain the faulty concept, while still proposing a godless reality, are misguided, not to mention counterproductive:

Surely, that philosophers’ invention, so bold and so fateful, which was then first devised for Europe, the invention of “free will,” of the absolute spontaneity of man in good and in evil, was devised above all to furnish a right to the idea that the interest of the gods in man, in human virtue, could never be exhausted (On the Genealogy of Morals, “Second Essay,” section 7).

Nietzsche argues that free will was originally invented as a concept to give religiously minded philosophers a means to allow for unconstrained supernatural intervention on the part of the various gods man had hitherto created. In short, free will is a trump card conveniently utilized to give deities a reason to exist:

The course of a completely deterministic world would have been predictable for the gods and they would have quickly grown weary of it—reason enough for those friends of the gods, the philosophers, not to inflict such a deterministic world on their gods! (On the Genealogy of Morals, “Second Essay,” section 7).

Now, a fair question for a reader to ask is how Nietzsche’s rejection of free will does not also dismantle much of Nietzsche’s own philosophy, in particular his conception of “the will to power” and his continuous call for individuals to create their own values in life. Although a good point, it nonetheless rests on a superficial reading of Nietzsche’s thoughts on the subject.

It is true that Nietzsche heralded the idea of individuality, but not in any sense that would imply self-improvement. He fervently maintained that “independence was for the very few” (Beyond Good and Evil, “The Free Spirit,” section 29), and even these individuals had no choice in the matter, because their instinct for individualistic expression is also deterministically confined, just as the herd instinct of the masses cannot help but subvert the independence of the few (On the Genealogy of Morals, “First Essay,” section 2). In this regard, there is nothing “free” about Nietzsche’s “will to power,” which is itself entirely instinctive, driven not by any conscious intent or choice, but by purely mechanical responses to environmental and genetic factors. Thus, in Nietzsche’s own language, the will to power is nothing more than the instinct for freedom (On the Genealogy of Morals, “Second Essay,” section 18), which is of course an instinct no one can freely choose to have.

Nietzsche understood how his views on this matter would make some uncomfortable (in particular his call for persons to abandon a concept like free will, upon which so much of the popular conception of personhood is based), to which he bluntly responded: “One should guard against thinking lightly of this phenomenon merely on account of its initial painfulness and ugliness” (On the Genealogy of Morals, “Second Essay,” section 18).

According to Nietzsche, free will, being fundamentally an illusion, leaves us no choice but to act as if we were free agents. The disdain individuals feel about the fact that their actions are entirely deterministic is therefore itself a causal result of the way human perception has evolved to relate to its environment. We have no free will, but we are determined to behave as if we do. Whatever “painfulness” or “ugliness” people imagine will result from acknowledging this point is moot on principle.

The Intellectual Value of Comic Books

Although the previous two decades saw a great surge in the respectability afforded to comic book characters brilliantly adapted to cinema screens, I don’t think the same level of appreciation carried over to the colorful, panel-style pages that all these characters originate from. What I mean is, while moviegoers might have cheered at the sight of the Avengers, I suspect very few people cared enough to go out and read up on the multitude of Avengers comics in publication since the mid-20th century. I would argue the same probably holds true for many of the other top comics-to-cinema franchises.

Some movie historians point to the success of the 1978 Superman movie, or Tim Burton’s 1989 Batman, as the beginning of the mainstream acceptance of comic book adaptations, but I’m not sure it’s reasonable to cast such a far-reaching net. Movie genres, I believe, come in arcs and trends, and I don’t think the recent rise of the comic book movie is any more linked to the success of the two aforementioned films than to the general rise in popularity of action movies throughout the 1980s and 1990s.

I’d argue that the precursor to the current comic book movie craze started just at the close of the 20th century, with a movie called Blade.

For readers too young to remember 1998, the first Blade movie was a humongous hit at the time of its release. Despite most moviegoers probably not being aware that they were in fact watching a comic book movie, Blade set the stage for Marvel’s superhero film adaptations that continue to this day. Moreover, it shifted the zeitgeist away from the notion that comic book movies need an air of lightheartedness and child-friendly whimsy, and showed that superheroes can be dark, serious, and directed as if they’re grounded in a reality that could plausibly overlap with our own (Christopher Nolan’s Batman trilogy would utilize a similar formula when adapting the caped crusader to the big screen in 2005’s Batman Begins).

Nevertheless, the theatrical success of Blade the movie didn’t elevate the Blade comic book with the wider audience. Nor did the mainstream embrace of the subsequent comic book movies that enjoyed massive commercial success lift most of their printed character counterparts to an equal footing with their cinematic namesakes.

Now, don’t misunderstand me. I am not making some nerd-elitist “we true fans liked it before it was cool” argument; in fact, I’d argue that some comic book characters, like Thor and Iron Man, are not just well adapted, but stronger works of storytelling in their big-screen form than they ever were on the printed page.

What I am saying is that, despite the mainstream acceptance and success of movies based on comic book characters, and the widespread enjoyment the public gets out of the stories being told therein, comic books themselves are still not afforded the intellectual respect of being viewed as something beyond children’s entertainment, regardless of the maturity or complexity of the actual story being told within the drawn panels. Furthermore, if a comic book does reach a point where it is mature enough, raw enough, complex enough that it crosses over into the domain of legitimate adult-approved entertainment, it immediately gets rebranded from mere “comic book” status up to the more reputable-sounding category of “graphic novel.”


Arguably, the differentiation between what counts as a comic book and what counts as a graphic novel could very well have its place. However, the truth remains that, while a lot of people are willing to defend the intellectual worth of graphic novels like Watchmen, Maus, or Sin City, not too many bother to stand up for the literary value of the common comic book; often this includes those of us who grew up enjoying comic books. And I would argue this seemingly minor oversight causes us to ignore a major contributor to a child’s introduction to the world of literature, one that can and does give rise to a lifelong appreciation of storytelling as a whole. Stories that can, and ought to, still be enjoyed well into adulthood.

For me, comic books were a gateway to appreciating the written word at a young age, and they laid the groundwork for understanding the importance of sentence structure when communicating one’s ideas through prose. I certainly didn’t realize it as a kid, as all I did was enjoy the stories I followed in the printed panels, but the seed was planted for me to have a foundation to grasp the classics of literature once I was mature enough to engage them firsthand. Nowadays, I am surrounded by the greats (and some not-so-greats) of the literary world on my bookshelves, but I still feel no shame in openly indulging in the cheap, department store comic I bought along with my morning Snickers bar.

To me, comic books are a form of literature. Like all literature, some of it is good and some of it is bad; some of it is fascinating, and some of it is corny; some of it is engaging, and some of it is dull. But to dismiss the entire medium, so critical in shaping one’s early sense of imagination and reading comprehension, just seems like a betrayal of the very foundations that introduced us to the world of literature to begin with.

Pronouncing Nietzsche

A reader sent a pretty good question to my inbox:

This will sound really really stupid but do you know how ‘Nietzsche’ is supposed to be pronounced? I mean the way he would have pronounced it himself. I always feel like I’m saying it wrong.

There is nothing stupid about asking something you genuinely don’t know the answer to, and I personally have little regard for individuals who make it a habit of putting down anyone eager to correct their confusion on a particular issue. Now that that’s out of the way, dear reader, let me address the question.

The most common mistake I hear is “NEE-chee” (with an ending that rhymes with “see” or “fee”), and it’s probably the way most native English speakers have been taught to say it, academic professors and laypeople alike. I suppose this mispronunciation is so widespread amongst Anglophones because the pronunciation of the man’s name is of no real consequence when it comes to analyzing his philosophy, except to those who happen to have a particular fixation on these sorts of issues. That last bit was not meant to be judgmental, just an observation on my part. And I can actually see how such fixations can be a healthy sign of a person’s intellectual curiosity, as long as you don’t start thinking of other people as your intellectual inferiors over something as trivial as mispronouncing a name from a language they don’t happen to speak.

The other mistake is to simply pronounce the name as “Nitch” (on the false assumption that the “e” is silent); this one’s rarer, but I’ve heard it said once or twice in college, so it’s worth mentioning.

The confusion people seem to have is over how the heck you’re supposed to say the ending of the philosopher’s name. This site gives a decent rendition of the standard German pronunciation (with audio included), and I encourage readers to follow the link to hear it for themselves. There, the pronunciation is transcribed as something close to “NEE-cheh”, which can be confusing to some English speakers because the closing “h” is relatively soft, coming across as a quick exhaling sound, so it sounds a little like you’re saying it under your breath (as you’ll hear in the audio recording at the link provided). This is made even more confusing by the fact that, depending on which German speaker you ask, the pronunciation can also come across sounding like “NEE-ché” (the ending “é” used as it is in French, but with a guttural stress, which brings it very close to the “NEE-cheh” pronunciation shown in the link).

In all the years I’ve been fluent in German (i.e. since early childhood), and all the time I’ve spent talking to native Germans (also since early childhood), I have always used the guttural “é” sound at the end. Keep in mind, though, that I learned to speak German in Hannover, Lower Saxony, which is often cited as about as close to an accent-neutral region as German gets (sort of the German equivalent of what Americans would call a “Midwestern accent” in their country). However, in college (here in the U.S.) I ran into several professors who also spoke fluent German, and they vehemently insisted that it’s supposed to be “NEE-cha”. Rather than pointlessly argue over it, I’ll just let people know about the supposed discrepancy, even though I almost never encountered it myself while communicating with German speakers.

In closing, this is a common question English speakers have when looking into the writings of Friedrich Nietzsche, and it’s always difficult to transcribe linguistic sounds from one language to another. I think the linked site’s phonetic transcription of “NEE-cheh” is a good compromise between the two (allegedly) disputing accounts of the German pronunciation of “Nietzsche”. Just keep in mind that the closing “h” is more of an ending breath than a proper syllable.

Or you can simply keep on pronouncing it as you always have, because how you say the name of any writer or philosopher shouldn’t have any bearing on how well you understand and analyze his/her ideas.

The Bum and the Professor: A Hypothetical Conversation

Bum:  “Spare some change?”

Professor:  “No.”

Bum:  “Not even a quarter, or a nickel?  No change at all?”

Professor:  “Sorry. If I had some, you can rest assured that I’d give it to you, but I just don’t have any.”

Bum:  “Why can I ‘rest assured’ of that? I don’t know you.”

Professor:  “True, but I know you, more or less. I have spent decades lecturing and writing on the plight of the underprivileged. So I understand your hardship enough to know that if I honestly had any money to spare, I wouldn’t hesitate to give it to you at once.”

Bum:  “All these decades you’ve spent lecturing and writing about someone like me, did no one ever pay you?”

Professor:  “Of course they did.”

Bum:  “And yet, you haven’t got a quarter or a nickel to spare for the guy who earned you a paycheck?”

Professor:  “I resent that remark. I’ll have you know that I have given a large sum of money over the years to various charities to help people in need.”

Bum:  “Good for you. That still doesn’t put either a quarter or a nickel in my hand, right now.”

Professor:  “You’re judging me for not being able to give you money, right now? A bit self-righteous for a man who spends his days begging for a portion of other people’s money, don’t you think?”

Bum:  “No judgment here, honestly. I’m just following your train of thought, which I admit can seem pretty ‘self-righteous’. Probably about as self-righteous as being told that someone knows me, just because they’ve written something about poor folks here and there.”

Professor:  “I see. Well, allow me to clarify: While I don’t know you personally, I do understand, because of my extensive research and studies on the subject, the hardship that comes along with residing within the parameters of today’s socioeconomic hegemony.”

Bum:  “Parameters of what?”

Professor:  “Socioeconomic hegemony. It’s a phrase I coined in one of my papers. Roughly it means that the conditions of a person’s environment are so dominating that they are naturally set up to be disadvantageous to the underprivileged in said environment. You understand?”

Bum:  “I understand what you said. I don’t understand what good it does to have it said.”

Professor:  “Identifying and defining a problem is the first step to having it resolved.”

Bum:  “When did you first write this?”

Professor:  “About 30 years ago.”

Bum:  “How long until it starts to ‘resolve’ the problem?”

Professor:  “It doesn’t work that way.”

Bum:  “Why not?”

Professor:  “Because social theories aren’t meant to fix people’s problems just by the power of the pen. People have different perspectives, and one social theory can yield innumerable sub-theories on how to implement reforms. Not to mention, there is always nuance to consider.”

Bum:  “So some other guy can come up with a different ‘social theory’ about the exact same problem your social theory talks about, and his would be just as good as yours.”

Professor:  “I think you’re getting confused; remember, we’re talking about hypothetical thought experiments here.”

Bum:  “So they’re imaginary.”

Professor:  “No, they are normative descriptors of reality.”

Bum:  “How do you know they’re describing reality, if they haven’t been tried out yet?  That is what hypothetical means, right?”

Professor:  “It’s more abstract than that.”

Bum:  “I bet. But I still don’t see the point of coming up with all of these social theories, if they can’t actually resolve the problems they’re addressing. Seems to me like a man might as well be doing nothing and still get the same results.”

Professor:  “I told you, social theories recognize a problem and allow for the future assembly of working models to be implemented by society.”

Bum:  “Hypothetically.”

Professor:  “Yes, hypothetically.”

Bum:  “See that building over there? 30 years ago I was part of the crew assembling the foundations of dozens of buildings just like it, all over town. Most of them are still around. People can use them, live in them. They can like them or hate them. But they can’t ignore them. If they decide to get rid of them, they have to put some physical effort into removing them from the spot we put them on. You understand what I’m driving at?”

Professor:  “Not really, no.”

Bum:  “Before we put down the foundation, when we were barely carving out the dimensions on the ground, the buildings were what you would call hypothetical. Now, 30 years later, I guess someone a little better with words than me, would say that these buildings are ‘descriptors of reality’, at least in the little, tiny spot of reality where they stand. You couldn’t describe the area where these buildings are without mentioning the buildings themselves.”

Professor:  “Okay, I get what you’re driving at, but you’re wrong. This is completely different from my academic discipline; you’re simply not comparing like with like.”

Bum:  “Yeah, probably. All I know is that 30 years ago, we identified a problem: no building in this spot. Now, 30 years later, the problem is resolved: the building is there, whether someone likes it or not. 30 years ago, you identified a problem; now, 30 years later, your identifying of the problem all those years ago hasn’t done squat to resolve whatever problem it is you felt needed to be identified in the first place, because if it had, I wouldn’t be sitting here like this, would I? So, let me ask you: are you sure your social theories are actually describing reality, or are you just defining reality to your liking, and cramming your social theories into it so you can have something to lecture people on?”

Professor:  “My theory is sound, but to understand it properly would take many years of study. Hence, this conversation is inconsequential. Here’s your quarter, and have a nice day.”

Bum:  “Much appreciated, good sir. You have yourself a good one, too.”

Lev Grossman’s The Magicians Trilogy

Years back, I had originally given up on Lev Grossman’s The Magicians trilogy halfway through Book Two because the main character, Quentin Coldwater, is such an insufferable, self-absorbed piece of shit that the thought of being trapped in his head for another book and a half seemed unbearable at the time. But the Covid quarantine got me to revisit the trilogy from the start, and while my first impression of Quentin remains unchanged for the first half of the trilogy, the character’s development by the end of Book Two actually softened me to his flaws and failures, to the point that by Book Three I found myself fully empathizing with him as the hero the narrative had seemed so eager to convince me he was not. Perhaps it was a clever ploy of reverse psychology, or of subverting expectations, on the part of the author, but whatever it was, it worked perfectly in the grand scheme of the narrative as a whole.

Throughout the books, we see Quentin be a lousy friend (practically dropping all his past contacts once he gets to Brakebills), a dishonest boyfriend, and a bit of a glory-hog whose concerns lie less with the safety of those around him than with fulfilling his own interest in coming out on top of the adventure he thinks he needs in order to escape the monotony of his life. But it’s in the aftermath of having experienced all of this (roughly at the close of Book Two) that we get to see a shift in his perspective. Which retroactively makes a lot of sense, given that he would need to face the consequences of his hubris before being able to set out on a genuine journey of growth and finally learn from his mistakes. As a character, it wouldn’t have made sense for him to have either the knowledge or the experience to deal with the situations around him maturely, nor would it have been realistic or relatable. In fact, I’m pretty sure that had Book One started out with a character who was mature, reserved, amicable, and fully resourceful right from the start, I probably would have complained that such a trope is too boring and lacks any real character depth to bother with (being a nitpicky critic comes so easily to us in the audience, doesn’t it?).

The lesson here is that some worthwhile reads pay off eventually, and deserve to be carried through to the end. Having gotten to the end of The Magicians trilogy, I see why the author wrote Quentin as such a little shit at the beginning of the story, and why it was even necessary to do so, regardless of how much it irked me on the first read.