Character Backgrounds: The Dilemma of Sharing Too Little, or Too Much

When writing a story, there exists a natural disconnect between how the author interprets the plot and how the audience reads it.  The obvious reason for this is that the author has the (mis)fortune of knowing the intended background details of the events and characters before they ever make their way onto the page, in ways that are not readily available to the reader.  The task for any decent writer is to convey these details in a way that makes for a compelling narrative that is neither overbearing for the reader, nor leaves them stranded in the dark regarding important plot and character developments.

Spotting moments when an author is being too reserved with details is fairly easy.  Anytime you’ve come across a part of a story or book that left you wondering, “Wait, who is this, and why are they suddenly in the middle of everything?  Where the hell did they come from?” you were essentially exposed to underdeveloped writing.  Be sure not to misunderstand what I’m saying, though.  Introducing new characters, and strategically withholding information about them, can be an effective writing technique to reinvigorate interest in the story, as a little mystery can go a long way in building much-needed suspense in an otherwise stale plot.

As an example, imagine a love story between two characters named Tom and Jill.  For over a hundred pages, you follow along as Tom sees Jill, falls in love with her, and tries desperately to impress her.  Jill is initially aloof regarding Tom’s advances, but slowly she starts to feel flattered by his affection for her, and agrees to give him a chance.  Things are going great for the two lovebirds for several more pages; then—just as the plot can’t bear the weight of any more Hallmark-moment clichés—a sudden wrench is thrown into the mix:

Nothing could tear Tom’s gaze away from Jill’s eyes.  The shape of them, their softness as she smiled, even the wrinkles that formed at the corners of her eyelids as she laughed, all worked to keep him in a hypnotic trance from which he could not—would not—escape.  Or so he thought.  Because the moment Susan Gallaghan walked by them, he felt his eyes wander from his beloved Jill’s enchanting eyes, to the rhythmic steps that paced along in front of him.

Let’s assume this is the first time this Susan character is ever mentioned in the plot.  The first thoughts any reader is going to have will be along the lines of:  “Who the hell is this Susan person?”, “Is she someone new to Tom?”, “Is she an old flame?”, “Is she a girl from his youth that he secretly pined after?”, “Is Tom actually a serial killer, and Susan his next victim?”  At this point, we, the audience, have no clue.  The fact that we have no clue is what makes it a brilliant writer’s trick, because now we are invested in the dilemma and subsequent resolution that is sure to follow.

But what if the drama never really follows the way you expect it to?  While the sudden introduction of this new character works to spark the reader’s interest in the development of the story, it can only carry the audience’s engagement so far.  If Susan keeps popping up in the same way, with the same vague acknowledgment from the established characters, the reader’s interest will quickly turn to frustration, and ultimately to disinterest.  You have to give the audience a reason why the things happening on the page are worth mentioning in the first place, and in the case of character development, this means divulging at least some connection between secondary plot-device characters (like Susan above) and the main protagonists.

Divulging a character’s background effectively in a narrative is not as easy as it may sound.  A lot of times it can come across as bloated, a poor attempt to force-feed too much information into the plot just for the sake of having the reader know why this person exists in the story.

Imagine if the mysterious introduction of Susan above were immediately followed by:

Tom immediately recognized Susan as his high school sweetheart, to whom he had lost his virginity on prom night.  The two of them went their separate ways soon after graduation, but Tom never quite got over his love for Susan.  Susan, for her part, had little trouble moving on from Tom.  So much so, that she moved away to study and travel abroad.  As she traveled the world, she gained an appreciation for herself, and how she didn’t need to define her identity by any one person that happened to be in her life.  Unlike Tom, Susan wasn’t validated by whether someone loved her; she felt complete knowing that she loved herself.  Even now as she walked past him with all the confidence of a young woman who intended to live her life to the fullest, Tom’s heart throbbed once again for the one that got away.  Though Susan didn’t recognize Tom, the two of them would be seeing a lot more of each other from here on out, since she was set to begin a new position in the very firm Tom worked at.

The problem here isn’t that this information is being revealed within the plot; it’s that there is no reason to have it laid out all at once, let alone right after the mysteriousness of Susan’s presence was so brilliantly executed.  All of this can be revealed over the course of several pages, if not several chapters.  Again, by all means give the necessary background to establish a character, but there is no need to lump it all together in one spot, because then your narrative will inevitably end up repeating itself again and again, every single time the information needs to be revisited.  Eventually, Tom and Susan will have a confrontation, where hints can be dropped regarding their past intimacy.  Rather than state that Susan is a confident and independent person, why not show it by the way she behaves and interacts with her surroundings and the other characters?  Pretty much everything stated in that one paragraph can be dispersed piecemeal throughout the story, without having to kill the suspense by revealing it all in one big swoop (especially right after the mystery character is introduced).

For a real literary example of an author doing a superb job of balancing the enigma of his characters with subtle background revelations throughout the plot, I would point to the characters of Mr. Croup and Mr. Vandemar in Neil Gaiman’s Neverwhere.  Even before the book’s otherworldly narrative is revealed, these two characters’ peculiar manner of dress and speech foreshadows a fantastical nature to their persons (and, by extension, the plot itself).  All of which is subtly explored in what essentially amounts to breadcrumbs’ worth of information over the course of a 300+ page story.  And at the end of it all, the mystery behind who/what Mr. Croup and Mr. Vandemar really are is never fully revealed, precisely because there is no reason for the story to do so.

Ultimately, it’s up to every writer to decide how much background exposition is too much for her/his characters, and how much is just enough not to stifle character and plot development.  That happy balance will largely depend on the sort of story you are trying to tell, and it may take several revisions to get it within the range you are aiming for.  But, while it’s not always straightforward in either case, being able to spot the problem in other written works means you are more than capable of applying that critical eye to your own.  Like a lot of writing advice, it simply starts with reading your writing not as an author, but as a reader, first and foremost.

Mindlessly Mindful: How Meditation Stifled My Creativity

Over the course of the last few years, the practice of mindfulness meditation has sparked a great deal of interest in private and public discourse.  For many, this discourse takes on the form of a full-scale spiritual reawakening in their lives–the rationale being to look back to what some would call time-tested wisdom as a guide for navigating modern life.  For others still, who belong to a more pragmatic mindset, the adoption of meditation into their daily routine is less about reaching an esoteric sense of enlightenment, and more about finding a means of focus for the cluttered thoughts they feel are clogging up their minds.

My own interest in mindfulness meditation began sometime in late 2016, and stemmed from a general curiosity regarding the positive results being attested to by its practitioners–a group spanning all sorts of different personalities, including (but not limited to) self-appointed gurus, public intellectuals, corporate bosses, average laborers, and everyone in between.  What piqued my curiosity most was how the underlying message from this diverse group of people was a resounding agreement that: “Yes, indeed, meditation works!”  The full definitions of how it “works!” and what it means for it to “work!” often vary as much as the individual backgrounds of meditation practitioners; however, there are some very clear commonalities among all the positive testimonials.

A greater sense of focus is one recurring benefit attested to by mindfulness meditators.  Specifically, a greater awareness and appreciation of the details encompassing the moment one happens to be currently occupying, as well as the multitude of thoughts that accompany it.  Another common theme among meditation circles is how it leads one to confront the (supposedly false) preconceptions surrounding the fundamental concept of the Self, and the illusory nature by which we think of our Self in relation to both our internal dialogue and the external world our Self interacts with (whether it is even coherent to think of the Self as an independent agent relating to the world, rather than as another component in an endless string of interacting effects that make up existence).

I spent weeks researching the practice and philosophy of mindfulness meditation to get a better understanding of it, until finally, on January 1st, 2017, I decided to put theory to practice and devote a significant portion of my free time to gaining some firsthand experience of what it truly means to incorporate meditation into my daily life.  Recently, on January 1st, 2019, this personal experiment of mine came to a full stop.

When I first set out on this personal journey I expected the possible results to go one of two ways:  1.  A net positive, wherein I would enjoy the benefits of reaching some semblance of self-awareness, self-discovery, and hopefully even personal growth (like so many others testified to having experienced through meditation).  2.  A net neutral, the results of which would be no more dire than having wasted some portion of my time on a fruitless exercise that offered no real benefits, but ultimately no harm.

Having now gone through it, I can’t say that what I experienced was neutral, since the practice definitely affected me on more than one level.  Unfortunately, from my perspective, the effects I felt leaned more towards a net negative as a whole; so much so that I decided to give up meditating completely, as something that may simply not be a suitable practice for someone like me.

Once I ceased meditating, a subsequent curiosity came over me: I wanted to find out if there were others who had had a similar (negative) experience to my own while practicing mindfulness meditation.  Surprisingly enough, the answer to that question seems to be a resounding, “No.”

I came across a few blog posts here and there of people saying they weren’t completely satisfied with what mindfulness meditation offered, or that it wasn’t what they expected, but they were still overall happy to have had the experience (even if they decided it wasn’t the right fit for them).  I also finally took the time to research the medical and psychological data regarding the long-term benefits of meditation (or, more aptly, the lack thereof) that I had intentionally avoided while engaging in the practice, so as not to be prematurely biased against it.  Yet, other than a general confirmation that little to no empirical evidence exists to validate its self-proclaimed benefits–possibly making meditation more comparable to a placebo effect than genuine self-awareness–I still didn’t come across reports that confirmed anything close to my personal (negative) experience.

I’m not going to go into deep detail regarding the exact nature of the mindfulness regimen I followed during this two-year period; partly because I’d rather be guilty of leaving details ambiguous than have every meditating Tom, Dick, and Mary who fancies her/himself a guru lecture me about how “real” meditation ought to be done.  If that is the sort of objection coming to mind as you read this, I am unfortunately failing to get the crux of my point across.

It’s not that I meditated and got no results from it, or that my results were drastically different from what I’ve read, heard, and observed others state about their own experiences while meditating.  In fact, my experiences were more or less in line with what the typical person claims to go through while practicing mindfulness exercises.  My problem with meditation–and mindfulness meditation, specifically–is what I view to be the negative impact it had on my creative wherewithal.

What exactly do I mean by this? Allow me to explain.

A heightened awareness of the current moment is one of the major benefits promoted in favor of meditation.  While I see how meditation might help those who have a habit of wearing their emotions on their sleeves–or maybe those who suffer from impulsive decision-making in general–I’m someone who came into it already relatively calm and collected, possessing a decent set of stress management skills to begin with.  Furthermore, I’m someone who relies on constructing imaginary plots, involving imaginary people, and projecting them into contrived scenarios that can resolve themselves any number of ways I see fit to write.  Now, seeing that creative writing is generally penned in the past tense, about things that have yet to be imagined, involving situations that do not exist, I never expected mindfulness meditation to offer much in the way of benefits in this part of my life.  But I wasn’t prepared for how downright harmful it could be to it, either.

Prior to incorporating meditation into my daily routine, the feeling that gave me satisfaction at the end of the day, when I went to bed, was having sat at my desk passionately typing away at my laptop’s keyboard long enough to lose my sense of self in the world I was creating.  And, slowly but surely, I felt this passion begin to erode the more progress I made with my meditative practice.  (Then subsequently return when I stopped meditating altogether.)

Sure, I got better at focusing on my breathing, as well as the various physical sensations that made up my moment-to-moment experiences, which in turn made me more aware of not just my thoughts, but the process by which these thoughts seemed to spontaneously manifest in my conscious monologue.  But all of this came at a cost.  Being more aware of my thoughts–moreover, being conscious of the act of thinking–made it harder to lose myself within those thoughts when I needed to weave together thoughtful writing.

And it wasn’t just writing.  Other creative outlets like painting became harder, too, because a large part of my painting process revolves around being able to foresee and focus on what shapes and images can be created (rather than what is present in the moment), and what method or color scheme will illustrate them best.  Being aware of the moment, and of the act of what I’m doing (in this case, sitting in a chair while painting), offered no benefit to the act itself, and ironically often served to distract from letting my thoughts roam towards conjuring up the inspiration needed to complete the project.

Yes, inspiration.  That is the key ingredient I felt slipping the deeper I delved into meditation.  Ironically, as a result I found myself feeling more frustrated and stressed when I sat down to do my work: traits I largely did not possess (at least not to the level I developed them) going into meditation.

Like a lot of bad side effects, it took time for the signs to come to the surface, at which point meditation had already become part of my daily routine (and, really, routines can be so hard to break once they’ve cemented themselves into our daily lives).  So I carried forward through all of 2017, and the first half of 2018, somewhat oblivious to the source of my depleting creative spark.  Then, last summer, I wrote a post on this blog titled The Pitfalls of Self-Help, after which I started to consider the possibility that all the positive testimonials I had heard in praise of mindfulness (which got me interested in it) were just as vacuous as the testimonials of people following any other self-help/self-awareness fad.

I started to seek out other mindfulness practitioners to see what insights they had to share, and was largely met with not-fully-thought-through regurgitations from self-proclaimed meditation gurus, whose wisdom sounded more like buzzwordy slogans from the reject bin of yesterday’s fortune cookie stash.

One particular conversation proved most enlightening.  The gist of it went something like:

Meditator:  “How you perceive of the Self is an illusion.”

Me:  “I perceive of my Self as a collection of atoms that make up the matter that is me; occupying a specific space in time that only I occupy.  In what sense is this an illusion?”

Meditator: “That’s not how people define the Self.  When people talk about a Self, they speak of it in terms of a separate entity that’s observing their doings, instead of being a part of it.  That’s an illusion.”

Me:  “But I just told you that doesn’t apply to how I, personally, conceive of the Self; as it pertains to me, or anyone else.”

Meditator:  “It does.  You’re trying to intellectually rationalize your perception.  In reality, you’re just not being honest with how you really perceive your Self, in everyday practice.”

I’m fine with accepting that I have blind spots regarding my own conscious and subconscious awareness.  What I take issue with is being told I have to accept the idea that someone else–with absolutely no firsthand access to my thoughts or perceptions–has figured out where all these blind spots are, how they pertain to my experiences, and how it all conveniently fits into her/his own preconceived generalizations and worldview.  In other words, feel free to tell me that I’m wrong in my opinion, but don’t condescendingly tell me you know what I’m really thinking, in order to make me and my thoughts conform to your philosophy.  That’s not awareness; that’s just bullshit.  And I hate to say it, but a lot of meditation seems to run very close to this level of discourse.

In the last half of 2018, as I drifted more and more away from seeing any value for keeping meditation in my life, I was given two further explanations by meditation practitioners for my lack of positive results:  1.  I’m not spiritual enough, and 2. I’m too straight-edge.

I’ll freely grant the truth of the first explanation as a strong possibility.  Even with the most elastic definition of the word “spiritual,” I can honestly say that it does not, and cannot, apply to me.  While I know there are efforts made to promote a secular form of spirituality, I still feel the need to point out that I have never believed in the supernatural, nor the mystical, and the values and passions I have in life I do not equate or think of in any deeper “spiritual” terms.  The things that give my life meaning and joy, are simply the things that give my life meaning and joy, and I see no reason why I need to lump on belabored spiritual terminologies that do little to further elucidate what is innately a tautological experience for everybody.  Apparently, this type of thinking doesn’t sit well with the sort of people who claim to get concrete benefits out of meditation.  In such circles, simply saying you appreciate any aspect of life, and your roles and perceptions in it, is an affirmation of your spirituality.  Which is fine, but to me that just redefines spiritual so broadly that it becomes meaningless as a term.  I’m not invested enough in the semantics behind it all to debate the issue, but it’s safe to say that I don’t personally consider myself to be a spiritual person (regardless of whether others want to see me as such).

As to the second point, concerning my lifestyle choices; on more than one occasion, it was suggested to me that meditation can only be truly of benefit when performed under the influence of psychedelics.  I have no way of knowing if this is true or not, as I do not partake in recreational drug use (though I support anyone else’s right to do so).  But I have to ask, how do you know that what you perceive to be a greater self-awareness while high on psychedelics isn’t just a drug-induced delusion that has no bearing on reality as it actually is?  If being on drugs, and then meditating, is the key to opening the door to a greater truth about life, how come no one has ever emerged from these drug-fueled meditative states with any tangible, verifiable realizations about the world?

How come, in all the centuries of taking mushrooms and meditating in caves, none of these yogis and gurus came out of the experience with something like “E=mc^2”, or the formula for penicillin, or even something as basic as “hey guys, guess what, the world is actually round” (in fact, there is a growing following of people online, at least some of whom I imagine are very prone to getting baked, who argue in favor of a flat earth)?  It’s always some esoteric and vague platitude, like “the Self is an illusion” (as long as both “Self” and “illusion” are defined in very particular terms) or “states of happiness and suffering both depend on consciousness to be realized” (no shit, you’re telling me people who are brain dead can’t feel happy or sad?–Brilliant!).  So, I must ask, what exactly is the point of a greater awareness, if said awareness has nothing tangible to say about the most fundamental, verifiable facts regarding the reality we inhabit?

And, look, perhaps there are those for whom such musings and conversations are of great value, and whose personal experiences have been greatly enriched by them.  If meditation has brought these people happiness, and impacted their personal growth as individuals positively, I would never argue to take it away from them on the basis that it wasn’t my cup of tea.  One underlying message here is that we’re all different, and what works for you may not work for me.

The other reason for writing this post is to speak to anyone who may have had a similar experience with meditation to my own, and who also struggled to find others voicing said experience.  Although I didn’t find much in the way of negative testimony regarding mindfulness meditation, I have a hard time believing that there isn’t someone–at least one person–in the world who, like myself, has tried this out and found it to have been more of a hindrance in her/his life than a benefit.  To that person (or persons) I’d like to say: there’s no point in struggling to move forward in a futile quest, and there’s no shame in walking away from something that is doing you no good.  There are many different ways to experience life and achieve personal fulfillment, and just because something is presented as a cure-all for what ails you doesn’t mean there aren’t better alternatives out there more suitable for you.

And if you think everything I’ve written is unwarranted drivel, let me know, and I’ll be sure to meditate on your concerns posthaste.

Understanding Perspective in Writing

Writers so easily get bogged down in what one could call the nuts and bolts of narrating a story–plot, setting, character development, etc.–that it becomes easy to overlook that narration itself is the very underpinning that defines the perspective by which a story is revealed to the reader.

Generally, most narratives are written from either a first-person or a third-person perspective.  Second-person exists, too, but is not often used as an exclusive character perspective, on account of how hard it is to construct a long-form narrative with it (not impossible, but definitely hard).  As an example, in many blog posts [including this one] I’ll often utilize the rhetorical second-person “you” in reference to the hypothetical reader scrolling through the text, but when doing so I usually don’t take long to resort to the first-person “I” in order to keep the prose coherent.  By and large, if you are writing some kind of narrative, especially in the realm of fiction, you’ll probably be doing it in first-person, or third-person.

Regular readers of KR know that I hate all forms of jargon.  Philosophical, political, literary–all of them; if you’re someone who always feels the need to express yourself using pretentious ten-dollar words and terms in lieu of the more straightforward ones available, I will always assume that you are probably someone who doesn’t know what s/he is talking about.  With that in mind, if you are not 100% sure about all these terms, let’s simplify it by saying that if your story’s narrator speaks using “I,” “me,” and “we,” your story is written in first-person.  The strength of writing in first-person comes from the ease with which the reader gets to empathize with the narrator, and in turn, with the narrative of the story being told.

“Tom went to the store, bought gum, and then shot himself with his revolver,” can be emotionally gripping, but not as emotionally gripping as, “I went to the store, bought gum, and then shot myself with my revolver,” because now you are not just being asked to read as a casual observer, but as the main character him/herself.  This is why first-person narratives are easier to immerse oneself in, as the prose has less of a descriptive barrier between narrator and reader, making it easier to become invested in the plot’s dilemmas and character arcs.

However, writing in first-person also has its drawbacks.  The perspective is by definition restricted to only one point-of-view.  Unless your character is some sort of clairvoyant deity, the narrative will be limited to whatever s/he sees and describes (and even if your character is an all-knowing god, a story written in first-person is still told through only one viewpoint, hence it’s still restricted).  Most stories have more than one character present; hence it’s not hard to see the issues that arise when you can only ever truly understand how one character is feeling, and have to rely on this one perspective to deduce the thoughts and intentions of all the other characters.

As an example, let’s say that the narrator character is in a conflict with side characters A and B.  What are character A and B’s thoughts on this conflict?  You don’t know.  You know what the narrator character thinks their thoughts might be, and that’s all.  This isn’t a problem in and of itself.  It can be used to create a wonderful sense of tension and suspense.  But it also means that a writer has to keep perspective consistency in mind within the plot, so that it doesn’t violate the logic of the first-person perspective that’s been set up so far.  This means that if side characters A and B had a conversation somewhere far away from the narrator character, the narrative has to be arranged so that the narrator character somehow gets wind of it if it’s going to be mentioned in the plot.  The narrator character can’t just mention it mid-conversation, because we–as the readers who have had direct purview of the narrator’s perspective–know that that’s not knowledge that could have been available to her/him.  It breaks internal logic, and it’s rhetorically lazy.

Another glaring handicap of first-person narratives is that everything in the setting is dependent on the description given by the narrator character.  This means that if the narrator is presented as someone not especially observant or articulate, it will seem weird to have her/him suddenly break into elaborately detailed descriptions of everything happening around her/him just so the reader can see what’s being looked at.  It can also be distracting, and work to undercut the immersion benefits of the first-person narrative mentioned earlier.

The ready alternative is to write in the third-person, and many writers’ workshops will tell you to do just that.  Third-person allows you to separate the narrator’s voice from the characters in your story.  This means that things like character actions and appearance, and setting descriptions, are not dependent on any one character’s observations.  They are instead voiced by an impartial, non-participatory “third person” giving all the details of the narrative’s happenings.  The obvious benefit of writing in the third-person is that it allows the writer to craft a multi-perspective plot that includes the inner thoughts of any character in the story, not just one narrator character.  Although a third-person narrative can have the effect of creating a buffer between the reader and a story’s protagonist, in contrast to how a first-person perspective can work to merge reader and character into one unified voice, it also gives the writer a greater sense of control over the details of the narrative, as well as a greater sense of freedom when it comes to how these details are dispensed to the reader.

The major setback to writing in a third-person perspective is the misstep of not understanding that the narrative comes in two very distinct forms, which for the sake of consistency should not be confused throughout the plot.

The first form is what is called third-person-limited.  The non-participatory narrator uses pronouns like “he,” “she,” and “they” (as opposed to the first-person “I,” “me,” and “we”), and will give descriptions from the aforementioned impartial point-of-view.  But, as the name implies, a third-person-limited perspective has its literary constraints.  Limited implies that while the narrative will give descriptive details to the reader independent of any one character’s subjective thoughts, its narrative scope is limited to the details of (usually) one main character, and what is shared will not step outside the purview of what is available to this main character.

If you’re thinking that this sounds a lot like a first-person perspective just with different pronoun usage, you are both right and wrong.  Similarities between the two are clearly present, but unlike a first-person perspective, third-person-limited does allow the narrative to explore the inner thoughts and motivations of the secondary characters, because they are not being described through the main character’s subjective perspective.  The limitation is that the secondary characters have to be in some sort of interaction or connection with the main character.  Of course, it is also possible to avoid being tied down to one and only one character, by re-centering on a different main character throughout the different scenes that make up the plot.  One just has to be careful not to get confused about which character is currently occupying this role (i.e. if character A is the main character in Scene 1, and Scene 2 switches to character B as the focal point, the third-person-limited narrative in Scene 2 can’t suddenly start referencing details revealed in Scene 1, because its point of focus, character B, will as of this point be ignorant of said details–even simply stating in the narrative, “Character B is ignorant of this fact revealed to Character A,” is a violation of the internal logic of a third-person-limited perspective).

On the opposite side of all of this stands the third-person-omniscient perspective.  For a writer, this perspective allows for the greatest amount of narrative freedom, in that you are not chained to the thoughts or whereabouts of any one character.  Think of the third-person-omniscient perspective as the god’s-eye view to third-person-limited’s bird’s-eye view of storytelling.  Want to explore multiple characters’ thoughts and feelings, without needing to relate them back to any given main character’s role within the scene?  No problem.  Want to jump around between character perspectives, and reference back to the reader things that only they (as the audience) are aware of within the greater plot?  Your only limitation is your creativity here.  However (oh, come now, you knew it was coming), it is important to keep in mind that too much freedom in your prose can also very easily tire out a reader.  Presenting multiple viewpoints can make it harder for readers to bond with any one character (let alone the intended main protagonists of the story), or to get invested in the dilemmas and outcomes that befall any of them.  In other words, too much information can create perspective fatigue, which is why even a narrative written from a third-person-omniscient perspective will often limit when it actually utilizes its omniscience.

I’ve spent some time here going over some of the strengths and drawbacks of the different narrative perspectives available to writers, not to argue for using one form over the other, but simply to give an overview of why someone might wish to choose one over the other (depending on what sort of story is being written).  The only real thing I am arguing for in this post is the importance of consistency in writing.  Meaning that whatever perspective you choose for your story’s narrative, you have to stick with it; otherwise you are setting yourself up for a grueling writing experience, and increasing the likelihood of the final draft being a frustrating mess to read (as much as it will be one to write).

It is perfectly fine to start out with one perspective, and then decide that the story is better served if written from a different perspective, but when faced with such a case the correct action is to take the time to go back to the beginning and rewrite everything to match the now better-fitting narrative for the story.

Consistency. Consistency. Consistency.  That is the only true lesson here.

The Art of Rhetoric: Its Virtues & Flaws

In a not-too-distant previous life, when I thought that standing in front of dozens of apathetic teenagers in the hope of teaching them proper grammar, writing, and argumentation skills was a worthwhile vocation to pursue, I came up with a nifty little speech to start off every semester.

I would say:

I know exactly what you are thinking right now.  It’s the same question every student, in every course, in every land thinks every time they enter a classroom.

Why do I need to learn this?

The simple answer is that it’s because the law requires you to; at least until you turn 18.  For most of you that’s a good enough answer to put up with my incessant talking for a few months, scrape together enough effort to satisfy the course requirement, and move on to your next classroom, until the law finally says that you’ve gone through the motions long enough to be let loose into the real world, full of non-classroom-type duties and responsibilities.  For most of you this answer is good enough.  But there are a few of you for whom this sort of reasoning is not anywhere near good enough to make you put up with what the education system expects of you for an hour and fifteen minutes of your day.

If you fall within that group, I want you to listen very closely.  In life you will meet many people.  A great number of these people will make prejudgments about you from the first moment they see you–both good and bad.  The good prejudgments will work to your benefit, and the bad will be obstacles that can make your life very, very hard.

People will make prejudgments about you based on your height, your weight, your race, your gender, the way you dress, the way you stand, even the way you choose to cut your hair.  The negative opinions formed by these prejudgments, no matter how unfair or shallow, will for the most part be things you have little control over.  Except for one important component:  The way you communicate.  Yes, people will judge you by how you speak, too.  And while you can’t do much about someone who simply hates you for the way you look, you can sure as hell do everything to deny them the pleasure of dismissing you for the way you communicate.  Even if they still hate you at the end of the day for all the bigoted ways available to them, you should at the very least do everything in your power to make it impossible for them to dismiss you for the way you write, the way you argue–the way you speak!  That is entirely within your power, and it is a power that’s learned, not inherited.  This is your opportunity to learn it, if this is a power you wish to possess.  If you don’t, any prejudgments others make about your person as a result of your decision right now will be entirely on you.

I’m biased, but I like to think it got the point across as well as anything else could.  And while the point was of course to get the students to feel somewhat enthused about the lesson plan, there was also a deeper purpose to my little pep-talk.  Namely, I was demonstrating the use of rhetoric to argue the case for learning about rhetoric (none of the students ever really picked up on this, though).

Rhetoric has a few technical (read: boring) definitions floating around, but the basic gist is that rhetoric is a form of discourse aimed at persuasion (typically of a person or audience).  This is the part about rhetoric that most philosophical commentators agree on, anyway.  Opinions regarding the use or ethical standing of rhetoric have been more polarizing, however.  Plato looked down on rhetoric as mere flattery that could be used to manipulate the masses, as its primary purpose was to convince you to side with the argument, not to impart knowledge or truth.  His student Aristotle took a more favorable view, and considered rhetoric to be an important discipline (and art form), and a necessary part of any well-rounded civics education.  Much of the writings and social revolutions that emerged from the Enlightenment relied heavily on rhetoric to persuade the public to a new way of thinking about life (and liberty, and even the pursuit of happiness).  The same goes for anti-Enlightenment reactionaries, who argued in favor of preserving the status quo in society.

In the modern world, rhetoric (in its purest form) is most readily seen in courtrooms and legislative bodies, and the political spheres that surround them.  It’s no surprise that so many politicians start out as lawyers, and go on to use the same rhetorical tricks they learned in law school on the campaign trail.  It’s for this reason that rhetoric takes on a negative connotation in many people’s minds.

Memorable (yet content-empty) slogans, propagated by conscience-devoid politicians whose only concern is scoring a victory in their (and their donors’) favor.  Arguments put forth by their mouthpieces in the form of public commentators and pundits, serving the sole purpose of winning over the electorate’s hearts, often at the expense of their critical thought and personal long-term interests.  Honorable mentions also go to the rhetorical tactics of self-professed experts who peddle pseudoscience and conspiracy theories to the effect of fostering a perpetually misinformed populace for the sake of monetary gains.  These can all be counted as examples in support of Plato’s skepticism towards rhetoric as a virtuous mode of discourse.

Even my speech above is arguably laced with unwarranted rhetorical hyperbole.  (Honestly, most people you meet will probably not form good or bad opinions of you; they’ll probably look right past you with complete indifference, if you offer no value to them as a person.)  However, one should refrain from getting distracted by unwarranted equivocations.  I sincerely believe there’s a big difference between educators using rhetoric to motivate their students to succeed in their coursework, and the sort of rhetoric that contributes to public policy meant to misinform the public (if you don’t, I hope you never get picked to serve on any jury).

I already mentioned the culpability of politicians making use of rhetoric to spread propaganda for ideological gains.  And while this is universally snubbed as somewhere on the edge of morally questionable behavior, the only reason it’s done is because it works so well.  In other words, people get manipulated by the bells and whistles of skilled rhetoricians because they don’t care to educate themselves about the hogwash they are being fed (usually because they agree with and want to believe what’s being said to them, even if it’s factually baseless).

The public (at least its voting component) is the primary check on politicians in a democratic republic.  However, given the ease with which we will readily be swayed by faint words of praise and reckless fearmongering, it’s not absurd to think that Plato may have been on to something when expressing doubts about the public’s ability to stand up to rhetoricians whose only purpose is to persuade, with complete disregard for the truth of their words.

A secondary check on the rhetoric of public officials is the part of the voting public that makes up the free press.  The reason the founders of the United States explicitly mentioned protection of the free press from the government in the first amendment of the U.S. Constitution relates directly back to the role the press (ideally) ought to have as the fact-checkers holding those in power accountable.  Unlike the public, a respectable free press has several internal mechanisms in play that work to sift credible information from the dubious.  It’s also why the first thing clever rhetoricians do is undermine the very credibility of the free press.  “Fake News” is a beautiful example of manipulative rhetoric at its finest, as it plays on the public’s distrust of media sources (i.e. it’s only reasonable to believe that some news outlets fail to overcome the biases of their presenters) and gives it a credulous dose of self-serving generalization (i.e. all news outlets that disagree with me are the biased ones, regardless of any evidence they present to support their position).

Any reasonable amount of critical thought on the subject clearly shows that the fact that news sources can be mistaken (or even outright deceptive) does not warrant the conclusion that all media must be wrong and lying when they report something you don’t want to be true.  Once again, it’s up to the public to follow up on the sources any reputable press will readily provide for them to check the merits of what’s being reported.  Shouting “Fake News,” however, makes it easier to muddy this relationship between the public and the press by painting all sectors of the press as untrustworthy in general, and allows people to lazily self-select only the media they are already disposed to agree with, without having to be burdened with doing any intellectual legwork.

Journalists are also rhetoricians by trade.  Unlike politicians and lawyers, however, members of the free press ought to strive to belong to Aristotle’s more virtuous end of the rhetorical spectrum, which aims to persuade the masses towards truth and knowledge.  As journalism moves more towards competing for public viewership in order to continue to operate–thereby having to appeal to the whims and tastes of the public, rather than seeking to simply inform them–the concept of fact-based reporting threatens to descend completely into the realm of vacuous rhetoric meant to do little more than keep up viewer support (which, as mentioned, is prone to succumbing to flimsy and fickle interests).

The elevation of online personalities, whose sole journalistic experience is being able to cultivate an audience around themselves on video-sharing sites like YouTube, under the neologism of “alternative media,” is an example of a free press where rhetoric takes precedence over fact-based reporting.  This is not to smear those personalities who make every effort to be a respectable source of information, but the reality is that the environment of online news commentary is inherently prone to undermining the fact-checking mechanisms of traditional journalism, mostly by side-stepping them completely in favor of peddling rhetoric.

These online outlets have little in the way of field-based journalists doing the legwork to uncover newsworthy stories, let alone teams of fact-checkers tirelessly looking through sources and notes to determine the veracity of a story prior to its reporting.  In truth, they rely almost entirely on the work of traditional journalists, which they present and provide opinionated commentary over, while every so often throwing in jabs at how ineffective traditional journalism is, despite most (if not all) of their actual “news” content coming through the efforts of said traditional journalism.  The reason this matters is that it is a clear example in which what could be a respectable profession, and a reliable venue of information for the public, is sacrificing its responsibility to dispense factual knowledge for the convenience of mindless rhetoric, because it offers popularity and financial gains in terms of viewer support and sponsorship.

Understanding the role of rhetoric–its values, its uses, and its prevalence–is vital to being able to identify the difference between an impassioned speaker fighting on behalf of a just cause, and a demagogue looking to manipulate the mob to his advantage.  It’s vital to being able to distinguish between journalists who go through many painstaking, sleepless nights to report a truth to the people as a public service, and pundit blowhards using the cover of journalism to propagate misinformation for their own gains and egos.  In general, to understand the use of rhetoric is to be able to identify it and (if need be) ward yourself against its more dire influences.

Rhetoric is not, and should not be, a dirty word.  Like most things, in benign and well-meaning hands it is a powerful tool of communication that can inspire immense good in the world.  In the wrong hands, however, it can be the barrier that keeps us permanently free-falling into the abyss of credulity and self-destruction.


The Muse, She Calls at Night

Depending on whom you ask, the severity of what it means to have writer’s block ranges from a minor annoyance to an anxiety-inducing migraine.  Everyone experiences a bit of writer’s block now and again.  Often it takes the form of not knowing how to verbalize getting from Point A to Point P in a piece of prose; or, at least, not knowing how to write it seamlessly enough that it would count as decent writing.  In these cases, it can take something as simple as finding the ideal word or phrase to turn it all around and unclog the ol’ writer’s pipelines.  Other times, just the act of persistent writing (followed by heavy editing) is enough to help get the creative juices flowing back onto the page.

As is to be expected, the people who feel the most emotionally committed to what they are trying to write tend to feel the most distraught when their creativity experiences a slowdown, or reaches a complete halt altogether.  If you find yourself in a situation like this, then you are fortunate in that there is plenty of advice out there for you.  Reading more (both related and unrelated works) to inspire your own writing is one piece of it.  As is the aforementioned idea of persevering through the block by sheer willpower and typing away until something halfway decent starts manifesting itself.  Exercise, eating a well-balanced meal, and getting enough sleep are probably somewhere on the list, too.  Someone once told me that it’s also worthwhile to try stepping away from one’s writing entirely to cure writer’s block.  Although I’m sure this might work for some, I’ve also seen it have the opposite effect of causing writers to lose the motivation to go back to an unfinished work the more time they spend away from it.

In this sea of helpful remedies to cure writer’s block, I would like to take a moment to share what helps me personally ward off this dreaded ailment.  It’s more of a writing guideline–or routine–I have found to be the most conducive to getting me where I need to be when confronted with the heavy hurdle of staring at a blank page.   And it can be plainly stated as:

Write at night, and edit by day. 

For me, there’s just something about writing in those last few hours before bedtime that gets my creativity firing at full capacity.  Maybe it’s the fatigue of the day, where my mind has already spent several hours going through a few rough drafts long before I ever start to write a word down.  Or perhaps it’s a combination of the still of the night and the dreamlike state of slumber already taking hold of my senses, steering my imagination where it needs to go.  I don’t really know what it is, but for me the writing muse comes at night.

Now, I also added the bit about editing by day, which shouldn’t be ignored.  While I might feel most inspired to write at night, I’m also more prone to making avoidable grammatical errors when I’m already drifting off to sleep.  This is why, after I hit save and turn in for the night, I’ll spend the next day (or two) going through what I’ve written to fix any spelling mistakes or cluttered diction, and to revise anything that might have looked decent when first written, but in the light of day reads like it’s been overworked, or is off in some other way.

Is this an obsessive-compulsive routine that needs to be followed to the letter for me to be able to write anything?  Of course not!  Plenty of things get written and edited by day, too, within a few short hours, with no creative hindrance whatsoever.  Just like there are nights when the muse decides to turn in early and doesn’t bother to come at all for one project, and barely manages to phone it in for another.  However, outliers shouldn’t be used to negate a general trend.

I will freely admit, though, that I have always been somewhat of a night owl, laced with infrequent bouts of insomnia.  Hence, it’s possible that I just happen to be the personality type for whom a habit of nightly writing comes the easiest, and you might not be.  But, if you are struggling with writer’s block, and none of the other remedies have offered you much relief in the matter, do feel welcome to try my personal guideline out for yourself.  Take the last two hours or so before your normal bedtime (no need to force wakefulness past your usual comfort level), and see if it helps unclog that cerebral blockage.  Just be prepared to possibly have to edit and revise a few things the next day, like a motherfucker!

Treatise on Blasphemy

Recently the Republic of Ireland held a referendum to repeal its longstanding blasphemy offenses.  While blasphemy still stands as a finable offense in the Republic under the 2009 Defamation Act, the referendum is nonetheless a demonstration that, as far as the Irish people are concerned, charges of blasphemy ought not to be a part of punishable civil law in their nation.

Friends in my adopted homeland here in the United States usually have a conception of Western Europe as being made up of a set of predominantly secular and progressive cultures.  And speaking as someone who spent many years growing up in Western Europe, this conception isn’t wholly unfounded.  As a result, it might astound many Americans to hear that some of these secular, progressive, ultra-liberal, borderline lefty countries still have enforceable blasphemy laws in place.  Granted, the actual enforceability of such laws is largely theoretical in nature, given that they are usually undermined by far more salient laws allowing for the freedom of religious expression and the freedom to believe in accordance with one’s personal conscience.  Thus, blasphemy laws currently exist as a vestigial organ in European law books: without practical purpose or application, but present nonetheless.

“If these laws are unworkable, then why even bother to fret about them with referendums at all?  Why not just continue to ignore them, and get on with your blaspheming ways?”

This could be a reasonable response, but it misses an important point concerning blasphemy laws.  Putting aside the fact that it makes perfect sense to oppose the criminality of blasphemy on principle alone, as unbecoming of any modern democratic nation, there is also the issue of the frail foundation on which the laxity of these laws currently rests.  To put it more plainly, the reason blasphemy charges are unworkable in most of the European nations that have them is precisely because the current sociopolitical climate is too secular and progressive to enforce them.  However, as any student of history knows, sociopolitical climates are anything but static.  So what happens if the political pendulum swings too far to the right, towards a political faction that views the protection of religious sensibilities as far more important to a nation’s cultural well-being than the free expression of its citizenry?  Suddenly, these outdated blasphemy laws that have had no real thrust in civil law for almost two centuries become a very powerful weapon in the hands of reactionaries all too eager to use the existing rule of law to conform society to their line of quasi-pious thinking.  And this is a potential threat both believers and unbelievers alike ought to be concerned about.

Blasphemy isn’t simply the act of professing one’s disbelief in religious claims, whole cloth.  Blasphemy is built into the very manner in which all religions profess the doctrines that make up their faiths.

Whenever polytheistic faiths, like certain sects of Hinduism, profess the existence of multiple gods, they are blaspheming against monotheistic religions which insist that there is only one god, and none other (and vice versa).  Within the monotheistic Abrahamic faiths, when Christians profess that Jesus Christ is the foretold messiah, they are blaspheming against the Jewish faith, which holds that the messiah is yet to come (and vice versa).  When Muslims claim that Jesus, though a prophet and a messiah, is not the son of God, they are blaspheming against a central claim of Christianity.  The Catholic Church’s stance on the supremacy of the Roman papacy is blasphemous to the Eastern Orthodox Churches, and the Protestant rejection of Catholic ecclesiastical authority is blasphemous to Catholics.  The Methodists are blasphemers to the Calvinists, and just about every Christian sect considers Mormonism a heresy.

The obvious point here is that to take away the right to blaspheme is to make it impossible for religious pluralism to exist within a society.  Perhaps this is fine as long as your religious opinion is the dominant one in the society you inhabit, but what happens if you find yourself just short of the majority opinion?  What if a population shift occurs, and the very laws that enforced the thin-skinned sensibilities of your religious persuasion become the means by which the new dominant line of thought undermines your right to religious expression?

I could stop writing now, and end on this appeal for mutual cordiality between people of all faiths, and how it is in everyone’s self-interest to oppose blasphemy laws, but I fear that would run very much against the spirit of healthy discomfort that blasphemy really should elicit in a person who comes across it.  On that note, allow me to address the elephant in the room that needs to be brought up whenever concern regarding religious offense of any sort, in law or public discourse, rears its head.

Undeniably, religions make bold claims for themselves.  Claims that offer definitive answers on matters concerning life, death, and morality, with a wager on possessing a monopoly on Truth with a capital T.  And they are always keen to wrap this all-knowing, all-encompassing bit of absolutist wisdom in a garb of self-proclaimed humility, as if to say, “No, no, don’t mind me…I’m simply professing to know the answers to all of life’s mysteries, ready-made with the consequences (read: threats) that will befall you if you don’t follow along with my modest creed.”

In short, religions by their inherent design simply claim to know things they couldn’t possibly know.  But I, in turn, admit that I don’t know.  I don’t know what the answers to life’s mysteries are; nor do I know which of today’s mysteries will remain mysterious forever, and which might become common knowledge for subsequent generations.  I don’t know which moral answers yield the most objective good for humanity; nor can I say for sure that such answers are even completely knowable.  The truths I do know come with a lowercase t, held provisionally in accordance with forthcoming evidence and reasoned arguments, and I don’t know if I can do anything other than reject the grammar of bolder Truth claims when confronted with them.

It is precisely because I don’t know that I am left with little recourse but to examine, question, dismiss, disbelieve, and (when I see fit) deride those who do claim to know, but offer hardly a shred of evidence for their claims.  It took centuries of debate and the bloodshed of previous generations of thinkers for any of us to be able to enjoy this simple — yet powerful — privilege of skepticism.  A privilege I do hold up as my right, and which I will speak up for without hesitation or apology.  What you call blasphemy, I call critical thought.  And if anyone can appeal to tradition as a justification for protecting religious sensibilities by legal means, I am fully within my rights to appeal to the tradition of cultural and intellectual pushback against religious doctrines and religious authorities that has made it possible for any sort of interfaith (and non-faith) social cohesion to exist in the modern world.  A tradition that includes the right to both the profane and the blasphemous, which cannot be allowed to be abridged in a democratic republic, for as long as one wishes to be part of any nation worthy of the claim.

Steelmanning: Argumentation for Lazy Intellectuals

I’ve heard it said that the hallmark of argumentation is being able to summarize an opposing viewpoint in such a way that the person holding it would agree with your summary of their position; thereby ensuring that you not only understand the viewpoint you are arguing against, but are also tackling the most robust interpretation of the opposing side.

This principle of charity in arguing has been around debate circles for a long time, but has in the last few years gained traction under the neologism of steelmanning (an obvious negation of its logical antonym, straw-manning, where one argues disingenuously against a position that an opponent never presented, and does not hold).  And on the face of it, this seems like a great development I can entirely get behind.  Who would come out and seriously propose that one should not have a clear understanding of an opposing argument, let alone that one shouldn’t argue against an honest representation of said opposition?  This is simply a case where, in principle (even if not in practice), the majority of reasonable people will be of one mind.

That’s all great so far.  However (don’t look shocked, you knew this was coming when you read the title of the post), while it’s not hard to steelman the argument in favor of steelmanning, the way in which the concept has been thrown around lately leaves much to be desired for me personally.  Whereas it’s meant to stand as an honorable demonstration of mutual respect between intellectual opponents, it has also taken on a form among some very, very lazy thinkers (who, nonetheless, fancy themselves stalwart intellects) whereby they demand that others strengthen their arguments for them, in ways they never did, and never could have done to begin with.

As a point of principle, if I’m feeling inclined to engage in an argument with others, I will argue against what they say.  Not what I think they should say to make their side more compelling.  Not even what I would say, were I hypothetically forced to switch to their side at gunpoint.  But, strictly, the arguments they actually give me to support the viewpoints they deem worthy of stating aloud for public criticism and/or derision.  [No, despite what some people say, mockery does not immediately make one guilty of having committed an ad hominem, as long as the mocking follows a salient line of counterarguments; though weak debaters are usually prone to focusing in on any well-placed jabs made against them as a clever means of deflecting from the fact that they’ve run out of things to say to support their position.]

So when I come out and say…oh, I don’t know…promoting the concept of a white ethnostate is racist and fascistic, and I in turn get emails lecturing me about how I haven’t dealt with the most robust arguments in favor of the alt-right’s ethnostate position, I’m going to call bullshit on claims of my supposed failure to steelman such a clearly racist and fascistic position, because I didn’t pamper it first with a string of dishonest white nationalist euphemisms used to conceal a proposition invoking outright ethnic cleansing.

The fact that I can follow an argument from its premises to its unpalatable logical conclusion–whether or not its proponents have the reasoning capabilities or the guts to follow the same thread of their own argument–does not require me to waste my time thinking of ways to make these kinds of arguments more pleasant for mass consumption before I attempt to refute them (personally, I find it far more honest to deal with things in their unfiltered form).  Nor am I required to do other people’s intellectual legwork for them, and bend over backwards to make their arguments stronger than they could ever hope to make them on their own, so they can feel like they are being given a fair hearing in “the marketplace of ideas” (TM), where apparently every half-baked idea should be allowed to be spouted free of consequences.

Instead, I’d suggest that if you keep finding yourself in a position in which you have to call on people to give your arguments the most charitable interpretation, you should: 1. consider the possibility that you are a lousy communicator on behalf of the positions you are looking to promote, and 2. give some thought to the notion that it’s not really the case that people are misinterpreting your views as absurd, horrendous, or laughable, but that your views actually are exactly that.

If you feel the need to argue a point, go argue it.  If you want to have controversial conversations, then have them.  But if you’re going to spend as much time whining afterwards about how everyone’s just so mean and unfair to you because they won’t paint every inane thing you say in the best possible light–or take every opportunity to fellate your ego about how brave you are to say dumb shit people will take offense to–save us all the trouble (and the bandwidth) and keep your poorly constructed arguments to yourself.