
You cannot separate subjective suffering from the subject of the suffering.

For all today's bluster about mental health awareness, I rarely see compelling or empathetic discussion of what mental illness is. Intuitively, we understand a broken bone is a thing, a damaged physical object, an injured part of a human body. A viral infection is at least a physical event, an infestation of minuscule packets of genetic information, propagated through a human body. The common story goes that depression is an imbalance of neurotransmitters, an issue with brain chemistry. While I don't think this story is entirely without merit—I don’t want to discourage anyone from seeing if medication-based treatment can help them—it doesn't satisfy me. It seems to sidestep the problem. I want to propose a different definition of depression, not as a physical issue with the human body, but as a self-reinforcing pattern of subjective phenomenological experience—in my case, recursively-driven dissociative yearning.

The neurotransmitter story of depression differs from broken bones and viral infections on the grounds of diagnosis. The latter two afflictions are (or at least, can be) physically verified. A swab up the nose can physically detect a virus, an x-ray gives a picture of a shattered radius & ulna. This isn't the case with depression. No doctor is taking samples of brain tissue to check for neurotransmitter balance. The neurotransmitter theory comes later, a post-hoc explanation for the physical behavior and subjective phenomenological symptoms on which depression is actually diagnosed.

The latter category, subjective phenomenological symptoms, is of interest. Consider that a doctor might diagnose a patient with a viral infection by listening to them describe how they feel. But even if the patient feels fine, they could still be diagnosed with that same infection if a nose swab comes back positive. The subjective phenomenological symptoms (how the patient feels) are secondary to the observable physical evidence of infection (presence of virus in the body). Depression, by comparison, has no observable physical evidence. Like other mental illnesses, it's diagnosed wholly on self-reported phenomenology and assessments of behavior. Even if we could easily sample an individual's neurotransmitter levels, and found them shockingly low, they wouldn't be considered eligible for a modern depression diagnosis off that alone. Diagnosis of depression requires the presence of one of the following two subjective phenomenological symptoms, as per the DSM-5:

– (1) Depressed mood most of the day, nearly every day

– (2) Markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day

There's no way to write this piece without discussing my personal case. I personally suffer from (2). I've suffered from some variation or degree of (1) and (2) since early adolescence. Personal experience is no small part of why the neurotransmitter story doesn't interest me. It is so detached from my moment-to-moment experience as to mean nothing. Telling me that the neurotransmitter levels in my brain are what causes my (2) has no meaningful connection to my actual experience of (2). You may as well tell me my depression is caused by bad humors in my blood, curses from devious sprites, or karmic retribution for past-life sins. I don't particularly care what the 'cause' is, because any hypothetical cause is so unrelated to what my experience of (2) actually is—straightforward phenomenology.

I am depressed—I can use that term to describe myself—because I experience (2). This is our starting point. My specific, idiosyncratic experience of (2) is my depression. This is to say, the way a broken arm is the shattered bone or a viral infection is the presence of parasitically self-propagating packets of genetic material, my depression is my phenomenological experience of (2). Maybe that phenomenological experience could be explained by neurotransmitters in the same way a broken arm can be explained by jumping off a playground slide or a viral infection can be explained by eating bad food. But the cause is not the affliction itself. The depression is the phenomenon.

Talking about phenomenology is difficult, because words don't map cleanly onto it. The purpose of language is to do compression on phenomena, make concessions, create a sizable map of discrete entities permitting some simulacrum of phenomena to be socially shared. The word “tree” is meaningful between Joe and Jill because Joe's phenomenological understanding of what “tree” means is close enough to Jill's such that no discrepancy is likely to arise when casually discussing the subject. Extending this, it becomes immediately clear that mental illness is difficult to talk about because it constitutes a phenomenological malady. An individual's experience of mental illness is not similar enough to common experience to be easily shared and discussed the way a “tree” can. In fact, the patient is defined as mentally ill because they are outside the realm of typical experience. Being mentally ill means suffering from a painful abnormal phenomenology, which by its very nature exists outside the realm of shared experience where language is comfortable and highly effective.

There is no way for the mentally ill to go to a psychologist and show them their mental illness as it is, as they experience it. Subjective phenomena are singular and private. The depressed patient can only bring that psychologist words which compress the (massive, fundamental, confusing, exceedingly painful) phenomena of their mental illness into generalized, socially-useful language-space—then they can only hope the nuances are accurately unpacked on the psychologist's end. In fairness, when it comes to mental illness, a psychologist is probably better at unpacking language than most. It's their job. But no matter how talented and empathetic the psychologist is, as soon as they respond, the patient becomes the party responsible for unpacking language into phenomenological understanding—a task they have to accomplish in spite of the phenomenological malady which brought them to the psychologist in the first place.

It should go without saying that valid advice which makes perfect sense within the frame of the psychologist’s intuition often has no hope of being accurately unpacked by a depressed patient. Packing-unpacking-packing-unpacking—it easily slips into unnoticed circular patterns, failing to develop either party's understanding of the core topic, that being the patient’s depression and how to treat it. Trying to use talk therapy to resolve depression is like trying to explain Ulysses using facial expressions. There simply isn't enough available nuance.

I think it's unfortunate how little grasp most people have of the concept of their own phenomenology. This might be the primary 'space' where helplessness in the face of mental illness exists. An individual obviously cannot debug issues with their phenomenology if they can't even meaningfully grasp what 'their phenomenology' is. At a minimum, I suspect it has to be understood—really understood—that emotions and feelings are not reducible to common-sense linguistic taxonomy. Consider descriptors like happy, sad, ambivalent, envious, loving. They're just words. They don't map onto clean-cut distinct phenomena, they just gesture toward broad, hazily-delineated fields within the greater continuum of possible phenomena. If you've lived, you've felt all of those things in a thousand different ways. Deeper than language, anyone can find that even simple feelings are multi-faceted textures of experience, constantly in flux, countless ineffable sensations arising and passing faster than they can be noticed—let alone rationally considered or narratively packaged. Trying to treat mental illness without increasing the fidelity of phenomenological perception is like trying to fix a car without recognizing that the single engine under the hood is made of many distinct parts.

In spite of these hurdles, I'm surprisingly optimistic about my grasp of depression as a concept. Talking about phenomenology isn't impossible, just hard. I believe the perfect words can occasionally resonate, suddenly clarifying previously indescribable experiences. Such resonance is inherently personal—occurring on the fringes of language, in the ways it's experienced by an individual rather than in the role it serves as a social tool—but that doesn't mean there's zero utility in trying to share it. It would be a shame not to share something so meaningful. Most Zen koans are gibberish until they suddenly enlighten a disciple. The chance at conveying profound understanding is worth trying for.

A few months ago, amidst efforts to practice greater mindfulness, I began to notice a recurring phenomenological motif—the vast amount of time I spent with my consciousness fixated on the idea of an indistinct better future for myself. Fantasies about the next place I'll live, the next meal I'll eat, the next semester where I'll finally study every evening and have the marks to show for it. The feeling was deeply familiar, something I knew I'd been doing since childhood. I gave the habit a shorthand name (“future-tanha”)^{1} and casually noted it as it occurred over the next few months.

Over that period, it became clear that “future-tanha” was only a subcategory, an acute instance of a more general feeling—a miserable yearning for an indistinct elsewhere, a yearning for the phenomena of elsewhere-itself^{2}. I recognized it everywhere, in childish daydreams and in suicidal ideation, in manic productivity and in mindless scrolling. Attempting to satiate it was why I used to smoke weed every night before bed, why I still pick up my phone to check the internet first thing almost every morning. So many of my reflexive actions are desperate sprints away from the present moment, toward a sedated, indeterminate elsewhere.

Then I realized, softly at first, but with increasing clarity, this is my depression.

The psychic discomfort that had haunted me since I was twelve, the perpetual internal suffering I've spent over a decade managing, is the presence of this feeling.

Coming to terms with this was an experience of profound resonance as discussed above, a moment of lucid conceptual collapse.^{3} It quickly became intuitively obvious that the signified 'my depression' pointed to was one-and-the-same as the signified 'my yearning for elsewhere' pointed to. This created immediate opportunities for new linguistic bootstrapping. Before, reflecting on the phenomena of my depression, I only had one direct-match word to play off it—'depression'. This insight gave me two more: 'yearning' and 'elsewhere', in conversation with one another. Suddenly, I could meaningfully recognize my depression not as a background tone, but as a happening—not as something external to my ego, but as something I do.

I began to recognize 'yearning for elsewhere' as a recursive process that had reinforced itself over the course of my entire life. When the moment is uncomfortable, the mind attaches to elsewhere—a fantasy, a distraction—to escape the discomfort. Maintaining such attachment to elsewhere is uncomfortable and taxing. The present gives itself freely—the future or the past must be constructed within the mind on the stage of the present. This is subtly taxing, subtly painful. Doing it continuously has the net effect of making the present continuously more painful, burdened by the pressure and stress of always trying to escape to elsewhere. As the present grows painful, the need for escape becomes even greater—imagine a man dying of thirst, trying to drink more and more seawater. Over time, the mind becomes conditioned into a state of perpetual dissatisfaction with the moment at hand, wholly dependent on fantasies and distractions. Eventually, little or no pleasure exists in the present at all.

I'm not going to lay claim to having discovered the phenomenological mechanism by which depression occurs. I can only speak for myself. But this is a mechanism, a mental pattern, which can spiral into a full-blown clinical depression. It has in my case.

This is an unoriginal complaint, but the world we live in today offers more attention-colonizing 'elsewheres' than at any other time in history. It's trivial to escape the moment by scrolling, browsing, ruminating on an endless flow of novel information. Any discomfort can be drowned out by quantity alone. It's all too easy to teach the mind to view an unadulterated present as a threat, something to be escaped. But as discussed, the effort of constructing past and future is painful. Once you've ruined your relationship to the present, there is nowhere comfortable left to go.

I haven't solved all my problems by recognizing my depression as yearning for elsewhere. There are still good and bad days, upswings and downswings that last weeks or months. It has, however, given me some faith back. It's exhausting to spend decades exploring your own mind, rotating through the same tired tropes, feeling broken, clinging to various stories and methodologies in hope of uncovering one that would explain it all. Stepping beyond language—depression as a 'sign'—and into phenomenology—depression as a 'happening', a pattern or motif that occurs in my phenomenology—has given me my first truly new lens on it. There's a part of me that's almost ashamed to write that, remembering all the times before where I convinced myself I'd finally figured it out. Perhaps this insight is just another example of that kind of self-delusion. But I won't talk myself out of a good thing. Words that emerge to describe a familiar, recognized phenomenology feel meaningfully distinct from words in search of a phenomenology to attach themselves to.

I suspect all of mental health care would be better if we started with phenomenology rather than language. You are not a language model, you are not a storybook, you are not a text. You are an embodied person. The complete experience that comes with that is your birthright—nothing is inconsequential or invalid. Every blank moment, every ineffable emotion, every intrusive thought, every hot flash, every half-dream, every weird tingle, every lump in your throat, every smile on your face—none of it is disposable. Depression isn't a lack of neurotransmitters, depression is a distortion of all that, a painful and tragic cognitive maladaptation. If we want to solve depression, we have to start deeper. We have to get in touch with the real moment-to-moment, what happens underneath the words we lean into so heavily.

Another depressive might not find the same 'yearning for elsewhere' that I did. Those words might just be a personal Zen koan, something that resonates with me and me alone. But I confidently believe that every depressive's suffering is in some way a happening, a profound phenomenon. Recognizing that with as much nuance and understanding as possible is the minimal prerequisite for countering it—you have to know what's happening if you want to figure out how to make it no longer happen.

Recognizing this with increasing conviction has given me some dim long-term hope for the first time in a long time. That, too, is a happening.

footnotes

{1} I didn't care too much if this was an accurate use of “tanha”, but borrowed the word because the feeling manifested as a painful attachment to the future.

{2} I differentiate the “yearning for elsewhere” from “tanha” broadly, because where tanha attaches itself to many things (perhaps all things), this feeling is defined by its relation to the category of “outside the present moment”. I could have called it “elsewhere-tanha”, keeping in line with “future-tanha”, but freeing myself from my concerns about butchering Pali makes this all a little easier to discuss.

{3} I've begun to read Gendlin's classic book “Focusing”. What I experienced seems like a textbook case of what he describes as a “felt shift”. I haven't finished the book, so I can't unequivocally recommend it yet, but if this sort of thing interests you it's likely worth checking out.


(i) Been thinking about trauma and pain and doing things. Been thinking about the mystery of being a child and also trying to be mindful. Been watching the way waves ripple through my nervous system. I couldn’t always do this. Been reverse-engineering what I can and trying to watch what I can’t. Have you ever focused so hard you had a headache, been so sad you feel sick?

(ii) Infants don’t know anything. In a very literal sense, they are helpless. Exiting the blank quiet of the womb into sound and light. Who could have a chance? Mother feeds them. Much has been written on this. Read Freud. Personally, I think Winnicott did it a little better, but that’s a digression. Either way, we’ve built models, formally or casually, of how this goes. The models tell us that the infant knows nothing of symbols and the logic which directs them. Blob of ineffectual id. Then it learns somehow — movement, language, mastery. It becomes an adult; a neurotic adult, maybe, but a real adult who can talk and walk and chew gum all at the same time. This doesn’t really answer the biggest question: how?

(iii) Chomsky wrote about a ‘universal grammar’. We’re hardwired for something like language. There’s just no other way we could learn something like that so quickly, so robustly. Anyway, this piece isn’t really directly about that, but it’s a good staging ground — what does it mean to implement the universal grammar? That’s what I’m thinking about. Some kids learn mandarin and some kids learn english and some kids learn sign language. Sometimes, adults also learn new languages. It takes them a lot longer. Why can’t they do it the same way?

(iv) Some people have a little voice in their head. Some don’t. When I talk, or when I write, it’s usually an echo of what’s in my head, it’s a few moments behind this voice, the ever-present microphone of the ego. Where did it come from?

(v) So everyone can’t use language at first, then they learn it. During that quiet period, during a time none of us remember, there’s a process of trial and error, single words and broken sentences. The incentives for the child are immense. Every new word is mother’s delight, every new sentence is a spell, the ability to speak will into existence. The world is still soft and malleable, without distinction between inner and outer. The child wants language, the child needs language. What tools do they have to work with?

(vi) Consider habits and conditioning. Wake up to the sound of an alarm clock every day, a pleasant chime from your phone. That pleasant chime, heard midday after four months of waking to it, will not sound pleasant. The body will react. Call it cortisol, call it bad energy, call it small-t trauma, you’ll know it when you feel it. The nervous system, the bodymind, the soma, the broader space of individual phenomenology — I will call it the nervous system, but I am not picky — has routines. Think about something you didn’t like as a kid. Why didn’t you like it?

(vii) Well, you probably thought it felt bad. Something happened, in/on the nervous system, which you would rather didn’t happen again. Taste of broccoli. Feeling of water on skin. But if the tradeoff was worthwhile enough, you’d do it anyway. You don’t want to take a bath, but your mom will let you have dessert after you take your bath. Maybe that’s worth it. Primitive economics of valence. What is the valence of language? You may protest: language doesn’t have a feeling. I ask: how would you even know if it does?

(viii) Assume language could hurt. Every time you employ the ability to use words, experience nausea in the stomach, mild. You’d still talk. Less, perhaps, but you’d still talk. The tradeoffs of being able to communicate are worth mild discomfort. But your life would be worse. Having to pay that price, small as it is, is worse than not having the upside for free. Consider again the alarm clock nervous system routine. You have hijacked a part of behavior, the time of waking, at the cost of painful association. Pleasant chime is now stress-spike. You believe this is a good deal and choose to pay it. How are children supposed to make those choices?

(ix) Children are naive and do not know the price they’re paying. Again, the world is fluid to them. In this blind stage, they arrange the basic economics of phenomenology. What was once noise, gibberish, is shaped into an ineffable net of associations. It becomes language. As established, the incentive to learn to do this is strong. But the cost is unknown. You know, as an adult, that mild nausea is probably a fair price to pay for language. Alarm chime causing stress is an inconsequential price to pay for a regular waking time. A child has no idea how much language is supposed to hurt, but they will almost certainly pay that price for it. Soon after, they will not remember what existing felt like before that price was constantly being paid. How many times a day do you use language?

(x) If language does hurt, I don’t think you’d even notice. The pain would just be background noise. Life would be worse in a vague, ineffable way. Children don’t have the capability or foresight to intelligently assess tradeoffs. They have a blank-slate nervous system, a massive continuum of sensory experience to organize and package into symbols. They have countless things they need to learn, things that will become foundational long before conscious adult memory begins. I am talking about things like movement and language. Do you see where this is going?

(where it's going) I think that it’s very possible that variations in individual-to-individual hedonic baseline are connected to the pre-symbolic, pre-memory establishment of routines and skills. I have used language as a toy example because it’s obviously foundational to thought and experience, but it can still be intelligibly discussed. Movement would be a similar example. Children receive massive reward, both externally-granted and innate, for developing these sorts of skills. There are countless overlapping “foundational” skills like this: an intuition for passing time, acknowledgement of height as dangerous, ability to perform mental math. There are likely more that are impossible to speak of clearly. All of this will be learned, foundation established, before the individual can reflect on how they’re going about it, if the tradeoff is worth it or if it’s worth delaying this skill such that it can be learned in an alternative, less-painful fashion. Does adding in my head have to be this difficult, driven by an engine of stressful clenching and clinging? Am I coming to associate language with playful joy, or am I desperately trying to figure out how to communicate I don’t like that decoration I can see from the edge of my crib? These are not questions children can ask of themselves or of the world. The suffering inflicted by “painful implementation” becomes the lowest, most established grade of trauma. The adult never knows that these things are not supposed to feel this way, never senses the dampening effect that painful implementation of foundational routines has on their psyche. The pain does not even register as pain, let alone pain from a specific, identifiable source. The pain is just a feature of the lens through which they experience phenomena, reality. They may be intelligent, effective. Painful implementations are not necessarily poor-performing. But they hurt, and I do not know how to save infants from them. How can you tell an infant to be careful when learning to speak? Does it hurt you to ask?

I've started a lot of printhouse articles, writing thousands and thousands of words. Only two ever made it to publication. Here are excerpts from and information about three unfinished pieces—what inspired them, what I was trying to do with them, and why I didn't complete/publish them.

Musings on Meditation

“It's unfortunate how the term “meditation” has come to signify little more than a vague, self-attending good.”

“approachable corporate mindfulness and ineffable ascetic spiritual gurus create a vague, unexciting definition of meditation. It might be “good for you”, but it's still something your boss wants you to do off-the-clock, or the project of dedicating your life to keeping your eyes closed. Neither is appealing.”

Background

For the last nine months or so, I've been meditating somewhat regularly, if not quite as diligently as I'd like. Think ~50% of days, split into weeks-long stretches on and off, usually for 10-25 minutes. Disciplined meditation is incredible; it's had the highest time-investment to life-impact ratio of any habit I've picked up. My phenomenology is noticeably more pleasant when I'm on a solid meditation streak; 30+ minutes of meditation substantially softens the tone of the rest of the day. However, it's hard for me to meditate when things aren't going well, and my mind is racing. It gets frustrating when I'm in a downswing, where my sits aren't as productive and soothing as they were last week. These frustrations make it harder to regularly meditate—something I want to overcome (writing this out explicitly is helpful, honestly).

What I was trying to do

This essay was going to be a reflection on what meditation means to me, personally, because it's such an overloaded term. My perspective on meditation is heavily informed by Nick Cammarata, Rob Burbea, and Culadasa, with a smattering of other influences and my own beginner insights. The thesis was “Meditation is awareness for the sake of awareness”. Being aware (conscious, alive, having qualia) is the fundamental constant across anyone's existence. To me, meditation is about looking at this awareness, and becoming more skillful at managing it. Look beyond its content, toward its shape—where is your attention? How did it get there? How does it move? How much control do “you” have over it? In the space of your phenomenological experience, where are “you”? What's the dividing line between content and shape? By developing the mental tools and insights needed to explore these questions in a pre-verbal fashion, meditation can enable somebody to profoundly transform their phenomenological experience/inner life.

What went wrong

I tried to start by looking at popular western conceptions of meditation—a dichotomy between new-age mindfulness corporate productivity sludge and inscrutably boring eastern religious practice. I got bored writing this, and it felt like too bold of a claim, one that I couldn't fully pin down and defend. I thought I needed it to back up the validity of my own perspective, but my own perspective felt too amateurish to defend. I don't even meditate every day myself, I rarely sit for more than 20 minutes, etc. Self-doubt. I still do love meditation, and hope to work through the obstacles that can pull me away from it. I'm happy to discuss it whenever with whoever.

Coffee with a Friend, Apple in your Mind

“Even while considering the 'same thing'—an apple—his phenomenological experience of it is profoundly different from yours. Extending the skeptical implications, you must suspect that the entire conversation you've shared that morning, the friendship you've shared all those years, the memories you'll carry forever, have been processed, experienced, and remembered using different frameworks, different techniques, different methods. All of it has been, on a phenomenological level, very different. Yet in spite of that you continue to speak—you're completely intelligible to one another, you have a theory of mind for each other, you believe you're doing the 'same thing': having a conversation about topics you're both familiar with. All somehow in the shadow of the fact that your internal worlds are alien to one another.”

“You and your friend can both entertain the 'information' of 'apple', but you organize and operate on it in profoundly different ways. The phenomenological experience of engaging with the abstract concept of 'an apple' differs between you two.”

“To be human means being subject to external sensory input and internal emotional feeling; having access to subjective-but-generally-reliable memory and introspective power of thought and calculation. We all know this, intuitively, and we have a mental model of it—'what it means to be human'—intuited from our phenomenological experience. This model is the expected form of an arbitrary moment of experience; we'll call it our phenomenological frame. The information within the frame varies from moment to moment. Sometimes we're happy or sad, warm or cold, tired or wired. Memories and associations are constantly being created, reinforced, and forgotten. We can consider simulations of arbitrary moments we’ve never experienced, like being a pirate hundreds of years ago or living on mars in the future. Phenomenological frame is the structure underneath all the possible variety of information—it's not 'the way you are', but 'the way you are the ways you are'.”

“a common communicative mistake: using our individual phenomenological frame, or even a subset of it, in the place of the space of all human phenomena when communicating with another human. The reason for this seems intuitively obvious: we can model and use our individual phenomenological frame. By definition, we cannot model and use the space of all human phenomena. If we could, then it would be included in our own phenomenological frame. When working to communicate, of course we're more likely to err on the side of using what we can rather than deferring to what we can't”

“If you protest this by saying “no, of course I don't think my phenomenological frame can be applied to all humans” you're probably defining phenomenological frame in a more precise manner than I am. Do your casual-conversation theories of mind for everyone you meet account for the variety of possible ways they visualize an apple, or any other arbitrary concept? The variety of possible phenomenological norms by which your words travel from their eardrums to their conscious mind, and by which they muster and respond with a spoken phrase of their own? Almost certainly not. When you conceive of their experience, you almost certainly use a frame almost exactly like your own. You have no other choice.”

Background

I'm very interested in phenomenology and consciousness. I think the difficulty of objectively studying these topics has left them woefully under-explored—it's frightening to address how little we really know about the fundamentals of our existence. This essay was my first serious attempt to write about phenomenology.

What I was trying to do

You may have seen an image floating around, asking what you see in your head when asked to “picture an apple”. There's a range of six images, from completely blank to a photo-realistic apple. The point of the meme is to expose who has 'aphantasia'—the inability to generate mental imagery. If you don't have aphantasia, you might be shocked to find out others do. Similarly, some people (including myself) have a strong, loud, ever-present internal monologue, while others never think in language. In this paper, I wanted to use these known, highly-obvious phenomenological differences—the ability to generate mental imagery vs the inability to, the habit of thinking in language vs not—as a jumping-off point to consider what other sorts of phenomenological differences may exist between individuals. I marvelled at humanity's ability to communicate in spite of known phenomenological diversity, and hypothesized that phenomenological diversity may be way broader than we know; by the nature of being subjective and pre-linguistic, it's extremely difficult to accurately assess how your phenomenology compares to somebody else's. Most of the body of the paper was developing a concept I called “phenomenological frame”: the hypothetical moment-to-moment constants of an individual's phenomenology, independent of content. For example, if you think via an internal monologue in the English language, this would be a part of your phenomenological frame—the actual words being thought at any given moment would not. I proposed the idea that many miscommunications are caused by individuals implicitly assuming that the space of possibilities in their phenomenological frame is equivalent to the space of possibilities across all human phenomenological frames.

What went wrong

I had a lot of fun writing this one, but couldn't pull it all together. Every paragraph opened up new questions, many without obvious answers or even obvious places to do research. “Phenomenological frame” seemed too loose. I didn't feel like I could articulate a clear distinction between the “frame” I was describing and the “content” therein, or explain how phenomenological frame evolves and expands. I worried that I was just clumsily, accidentally plagiarizing ideas, since I hadn't rigorously studied mainstream phenomenology. These doubts were magnified when I started reading Andy Clark's (brilliant) Surfing Uncertainty, a very technical book about predictive processing and embodied intelligence. Clark's book explored similar ideas to what this essay was talking about, but with much greater rigour—decades of research and a robust, consistent language that avoids the ambiguity of my 'phenomenological frame'. All that said, I had a lot of fun thinking about these ideas. I didn't finish Surfing Uncertainty, as it moved into deeper discussions of neuroscience, complexity and nuance I wasn't motivated enough to deal with. But if I get around to it, I'll definitely come back to take another look at these ideas. Phenomenology is a topic I can't seem to pull myself away from.

I'll Meet You in The Middle of our Language

“There's nowhere correct to start, so I'll ask you to pause for a moment and take a deep breath. Feel it in your nose. Think about your day. Think about waking up tucked under the covers of your bed, the morning light streaming in through a nearby window. Dust floating in the sunbeams. Hold the image in your mind. Re-read the statement above. Then we'll try again. I'm talking about a room, maybe ten feet by ten, painted in a cool blue. The bed is queen-sized, mattress atop a bare black metal frame tucked into the corner across from a two-door closet. It's made up with a fitted beige sheet, a top sheet, a fuzzy navy-blue polyester blanket, and a heavy white comforter wrapped in a tartan-patterned comforter cover fastened by a series of small white plastic snap buttons each spaced several inches apart. You're tucked between the sheets, on top of the fitted sheet and underneath the top sheet and the blanket and the comforter wrapped in its comforter cover. Across from the bed and visible is a desk, black, Ikea. The desk is speckled with chips and cracks, pinpoints of damage where the cheap particleboard construction underneath the paint is visible. Next to the desk is a bookshelf, five feet tall and eighteen inches wide... I could go on. But at some point, I'd return to the light, to the dust in the sunbeams—and no matter how closely I guided you, you wouldn't be in the same place I am.”

“This essay is paradoxical. It attempts to articulate the insufficiency of language in language. To succeed, it must fail. I haven't convinced you of anything unless you come to understand you're not reading what I'm writing. More optimistically, this essay is an attempt to reach out, as far as an essay can. It will strain to stretch over an unspeakable chasm, till something breaks. It hopes that you will see a pattern in the scattered pieces.”

“Everything we experience takes place in the context of our own ineffable internal world, and everything we experience is our primary source of truth [...] words are not fungible with experience. It doesn't matter how many words I give you, I can't give you my ineffable internal world—and you can't give me yours, either. We can only give each other words.”

“Every sentence that comes out of your mouth is a JPEG file crushed into oblivion, a smeared mess that only vaguely gestures toward the form of the image it wants to represent. This isn't your fault, of course.”

Background

In writing the last essay, I found myself more and more moved by the way language bridges gaps between idiosyncratic phenomenology. It seemed miraculous. At the same time, I know that language doesn't map perfectly onto phenomenological experience—it's a social technology, lossy compression. What does this imply about our linguistic culture, and the intellectual work done within it?

What I was trying to do

Much of what was intended for this paper was eventually expressed in my published “Why I'm Skeptical of Language”. I recommend reading that if you haven't. This paper opened up in a personal and subjective fashion, very self-aware of its paradoxical position, using language to express the limitations of language. I wanted to display how I reached the worldview I'm at now, in part to convey how miraculous it is that we can communicate at all. This was going to move into my own theory of theory, which is something I'd like to save for another essay, or when I return to this one.

What went wrong

Not yet having written “Why I'm Skeptical of Language”, I struggled to develop and clearly express the ideas included there. Even after getting those ideas on-paper (much of what I have reads like a longer-form version of “Why I'm Skeptical of Language”), I felt a lot of pressure when theorizing about theory. Moving up meta-levels seemed to demand greater rigour. To comment on what theory does, how it works, I felt I had better really understand it. To make this worse, I wasn't sure if my ideas were original, or just retreading old ground. I got lost going down rabbit holes, trying to make sure all of my implicit assumptions were defensible, terrified of leaving some naive hole in the middle of such a vulnerable, ambitious essay. I'm saying less about this one because out of all of these, it's the one I'm most interested in completing. Writing this summary, reflecting on what I was able to express in “Why I'm Skeptical of Language”, I feel more confident I could wrap this essay up nicely. It might not be perfect, but blog posts don't have to be.

...

Did I say three? There's one more. This last one is about video games. It's a bonus. No excerpts are included because the “What I was trying to do” covers the intended content better than the original essay did.

Guiding vs Piloting

Background

I love fighting games, and I've played a lot of them. Recently, I've been playing some Marvel vs Capcom 2, an extremely broken high-octane classic. MvC2 allows for a massive amount of strategic freedom, but this depth is realized through lightning-fast, highly-precise inputs. Most fighting games are moving away from demanding that players master this level of technical complexity, hoping to attract a larger audience.

What I was trying to do

I was trying to argue that lowering the execution barrier, while good for accessibility, has had a bigger impact on fighting games than many want to admit. I wanted to argue that older games, like MvC2, had an execution ethos I called “piloting”—the characters are manipulated through small, discrete, unforgiving actions. Since fighting games were still a new-ish genre, the developers had comparatively little insight into how players would choose to link these actions together. Since games couldn't be patched, bugs and exploits existed everywhere. This resulted in games with a high degree of freedom, a sandbox potentially full of incredibly powerful, nuanced tools, gated behind high execution demands. The character is “piloted”, like a fighter jet, demanding high precision to achieve amply deadly results. Modern games, by comparison, simplify execution a lot. Devs are more aware of how tools will be used, and understand the full space of their game better. Powerful strategies that aren't a part of the dev's vision will inevitably be patched out, and both balance patches and input handling will guide players toward a playstyle that is at least approved by, if not downright designed by, the developer themselves. The character is “guided”, employing pre-meditated strategies with less room for flexibility. However, “guiding” can never achieve the nuance of “piloting”, because the precision intrinsic to piloting allows for a huge range of subtle strategic decisions employed via highly-precise execution requirements. The primary example I was thinking of was resets with Magneto in MvC2—intentionally dropping a combo, giving the opponent a chance to defend, but immediately following up that dropped combo with an incredibly fast, difficult-to-stop mixup, and being rewarded with a fresh combo if it hits (the first few hits of a combo do a lot more damage; two 5-hit combos will do way more damage than one 10-hit combo, making resets a worthwhile risk; see the sketch below). Magneto's resets are a celebrated part of the game, but many of them emerge from extremely tight execution windows in the middle of his already-difficult ROM infinite, and they're most effective when the opponent has no idea where they could be coming from. This is to say, a “guided character” design philosophy could never replicate the deadly drama of Magneto resets; if reset opportunities were easier, appearing at pre-determined, developer-approved times, a well-studied defender could be much better prepared for them. Contrast this against somebody trying to defend against a talented Magneto pilot who can reset them in obscure ways, seemingly at any moment, through frame-perfect execution followed up by vicious combos.
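The damage math behind resets is easy to sanity-check with a toy model. The scaling rule and numbers below are made up for illustration (this is not MvC2's actual damage formula), but any scaling shaped roughly like this produces the same conclusion:

```python
# Toy damage scaling, assuming (hypothetically) that each successive hit
# of a combo is scaled down by 10%: hit n deals base * 0.9**(n-1) damage.
def combo_damage(hits, base=10.0, scale=0.9):
    return sum(base * scale ** n for n in range(hits))

print(round(combo_damage(10), 1))     # one 10-hit combo:  ~65.1 damage
print(round(2 * combo_damage(5), 1))  # two 5-hit combos:  ~81.9 damage
```

Under any rule like this, a successful reset effectively refunds the scaling penalty, which is what makes the risk worthwhile.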

What went wrong

I love fighting games, but I suck at execution, and couldn't really back this argument up as cleanly as I'd like. While there's a clear difference between old games like MvC2 and new games like DBFZ, I'm not actually good enough to meaningfully explore the execution space of new games and defend this take, or draw a clear line where the genre changed. I had some muddled ideas about buffer systems I couldn't really defend or incorporate well. Honestly, the section above ended up being a distilled version of most of what I wanted to say. At least some of this article was just me wanting to convey my internal aesthetic view of MvC2 Magneto—imo, the coolest character in the history of fighting games.

Preface

This paper is a hastily-edited adaptation of an overzealous reply I wrote in a slightly-heated online discussion. I'm choosing to adapt and publish it here because it covers 70% of an article I've been trying and failing to write for weeks. The argument within it functions as a “bootstrapped explanation” of why I've been failing to write that last article, or any article, really. No sources are provided because I originally wrote this one message at a time over discord. If that makes you mad, read Saussure yourself and tell me if and why I'm wrong.

Funnily enough—I probably had to write this much, in a casual fashion, to grasp that it's impossible to write a perfect formal paper about why the words I'm using to write it are an imperfect, informal system.

Take any commentary to the cafe bot comments; I would love to hear your thoughts.

Why I'm Skeptical of Language

I did my first degree in english lit, mostly because I was depressed and flunking out of compsci at the time. I wasn't that interested in literature, I just did well in it in high school and figured I could stay in school that way. But I did end up taking every course my school offered in literary theory—the study of methods of reading and interpreting texts. A lot of this gets pretty fuzzy, mingling with the rest of the humanities, and it took me directly into philosophy, gender studies, psychoanalysis, and semiotics: the last of which I'm gonna talk about at length for a minute. Studying semiotics, even to the limited degree I did, left me with strong opinions on how language operates. What I'm gonna talk about is related to semiotics, if not totally orthodox or comprehensive or 'objectively true'. It's what I believe, what I took away. Make of it what you will.

Casually, let's start with a definition of a definition, from merriam-webster: “a statement of the meaning of a word or word group or a sign or symbol”. Alright, let's do the definition (“a statement of the meaning of a word or word group or a sign or symbol”) of meaning: “the thing one intends to convey especially by language”. Ok, let's look at the definition (“a statement of the meaning of a word or word group or a sign or symbol” (meaning being: “the thing one intends to convey especially by language”.)) of language: “the words, their pronunciation, and the methods of combining them used and understood by a community”. Now, onto words, pronunciations, combining, community...

Sheesh, we're gonna be here all night.
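Since I can't fully shake my compsci past, the regress is easy to sketch in code. The mini-dictionary below is invented for illustration (it is not Merriam-Webster's data), but every real dictionary has the same shape—each definition is made of words that have definitions of their own:

```python
# A made-up mini-dictionary: every entry is defined in terms of other entries.
definitions = {
    "definition": ["statement", "meaning", "word", "sign", "symbol"],
    "meaning": ["thing", "intend", "convey", "language"],
    "language": ["word", "pronunciation", "method", "community"],
    "word": ["sound", "meaning", "communication"],
    # ...and so on, for every word used in every entry
}

def words_needed(term, seen=None):
    """Collect every word you'd have to define before `term` bottoms out."""
    seen = set() if seen is None else seen
    for w in definitions.get(term, []):
        if w not in seen:
            seen.add(w)
            words_needed(w, seen)
    return seen

print(sorted(words_needed("definition")))
# The search never grounds out in something that isn't a word; note how
# "word" loops back through "meaning". It's signs all the way down.
```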

An “objective definition” is effectively impossible. Diogenes kinda got to the heart of this when he responded to Plato's definition of man as a “featherless biped” by holding up a plucked chicken and saying “behold! I've brought you a man”. You can define the term as precisely as you want, but corner cases will slip through for basically any term. You can add more rules to patch up the corner cases, but then you start to exclude things the definition ought to include, failing in a similar fashion.

Even if you could patch up every single corner case, the definition you create is written in more words which require their own definitions, which all suffer the same fate. If any of these definition-words have corner-cases where similar Diogenes-style misunderstandings can occur, the original definition is compromised by extension. At some point, use of language is a process of subjective, probabilistic interpretation, not objective linguistic forms.

A word is only a linguistic sign. A linguistic sign is only an arbitrary mapping between a “thought-concept” (casually: a pre-verbal, probabilistic mental process of understanding or classifying some category) and a “sound-image” (casually: a class of possible/recognizable spoken sounds or visual writing). The relationship/mapping is arbitrary, because both components are arbitrary. Linguistic signs only gain their meaning relative to other linguistic signs, in a social context. I can say that “trehrke” is a word that means “pizza that's gone stale in the fridge”, but unless I'm saying that to somebody else who's familiar with the mapping between sound-image “trehrke” and the thought-concept of “pizza that's gone stale in the fridge”, it's a useless linguistic sign. And even if they do share that linguistic sign with me, if their mapping of sound-image “stale” doesn't include the thought-concept “moldy”, and mine does, then we're actually using two slightly different signs, because we're mapping sound-image “trehrke” onto slightly different thought-concepts. And beyond that, “pizza that's gone stale in the fridge” is also arbitrary. It's not some divinely established category, on which we bestow an arbitrary label. I could create infinite arbitrary signs just like “trehrke” (see: the German language). Our words, even extremely important ones, don't correspond to objective, pre-linguistic ontological categories. Different languages have different words for different things, words that can't be translated directly and that map onto different subsets and supersets of each other. The english word “love” could encapsulate countless different emotions in other languages, emotions which native speakers of those languages would never think of as “the same thing” in the same, very loosely-connected fashion that english speakers think of all possible variations on “love” as being “the same thing”.

More formally, what I'm calling a thought-concept would be called the “signified” and sound-image would be called the “signifier”. I prefer these descriptive terms because I'm slightly dyslexic and stumble over the similarity of the formal signifier/signified.

Controversy also totally shatters these mappings. A militant Maoist maps an entirely different thought-concept onto the sound-image “socialism” than a lifelong Republican does. Casually, to the former, it's utopian affect; to the latter, dystopian. When they have a discussion with or around that linguistic sign, they aren't talking about the same thing, because they conceive of it so differently. Same sound-image, different thought-concept: asymmetric mapping. They can attempt to clarify this misunderstanding by hedging it against their shared understanding of other signs—like “government” and “freedom” or “money”, other words you'd use when talking about this stuff—but it's likely there are some asymmetric mappings going on with respect to those words too! Clear communication and consensus become extremely challenging.

For a more fun case, “is a hotdog a sandwich?” is a clear example of an asymmetric mapping.

In this sense, language is lossy compression; the pre-verbal, rich, analog, probabilistic thought-concept understanding of the world we have has to be compressed into discrete sound-image symbols to be communicated, and then decompressed by the other individual in the context of all the other signs involved and their idiosyncratic mappings. Usually, for day-to-day stuff, this is done pretty successfully. Shared social context goes a long way. But it breaks down at times, particularly on controversial and advanced topics (like the socialism example above).
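To make the compression metaphor concrete, here's a minimal sketch of the “trehrke” situation from earlier. The lexicons and feature-sets are invented stand-ins for thought-concepts; real ones are analog and probabilistic, not neat Python sets:

```python
# Two speakers map the same sound-image onto slightly different thought-concepts.
joe_lexicon  = {"trehrke": {"pizza", "stale", "fridge", "moldy"}}
jill_lexicon = {"trehrke": {"pizza", "stale", "fridge"}}

def compress(thought, lexicon):
    """Pick the word whose mapped concept best overlaps the speaker's thought."""
    return max(lexicon, key=lambda word: len(lexicon[word] & thought))

joes_thought = {"pizza", "stale", "fridge", "moldy"}
word = compress(joes_thought, joe_lexicon)  # Joe says "trehrke"
jills_decoding = jill_lexicon[word]         # Jill unpacks her own mapping

print(joes_thought - jills_decoding)        # {'moldy'}: lost in transmission
```

Same signifier, asymmetric signifieds: the “moldy” nuance never makes the trip.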

We don't live in a world of language, we live in a world of ineffable, idiosyncratic, fluid, probabilistic thought-concepts. I'm interested in phenomenology because I hope one day we might be able to communicate without the restrictions innate to language: the tragic loss involved in compression and decompression. I don't want to tell somebody I appreciate them, I want them to feel what I feel when I appreciate them. And I appreciate you for reading this.

Footnote on scientific communication

While I have a lot of skepticism around communication, I will freely admit the scientific method, and standards of reproducibility, are among humankind's greatest communicative accomplishments. Scientific literature is clear and formal enough to avoid many of the pitfalls of casual language-use.

But it doesn't fully solve the problem. Ultimately, it's still taking a phenomenal analog world and trying to chop it up into little digital linguistic signs, running experiments on those categories. When an abstract says “this paper is on dogs” it assumes a clear delineation of what a “dog” is vs a “wolf”. Sure, that's an easy enough distinction to make with 99.9% accuracy—but when you have to do that for every single word, every single category, every single communication, the notion of true, clear-minded objectivity becomes a lot less tenable. Any “measurable category” is a measurable category of some “X”—and “X” is, sadly, just a linguistic sign.

I don't have ample words (well, other than this expression of “I don't have ample words”) for how I feel about the beauty and understanding we might get out of a post-linguistic science.

On June 11th 2022, The Washington Post published an article titled “The Google Engineer who thinks the company's AI has come to life”. The piece discussed Blake Lemoine, a Google engineer making claims that the company's LLM 'LaMDA' had developed sentience. The same day, Lemoine published two Medium posts: the first detailing his perspective on LaMDA and Google's resistance to acknowledging the model's 'personhood', the second an abridged record of conversation between himself and LaMDA.

(It should be noted that the terms ‘consciousness' and ‘personhood’ quickly become muddled in this conversation. For the sake of clarity, I’m using ‘conscious’ to refer to having an internal experience comparable to a human’s (the debate over animal consciousness is outside the scope of this essay), and ‘personhood’ in the sense of the social identity and moral rights typically granted to conscious agents.)

When the public briefly entertained Lemoine's assertion of LaMDA's personhood, AI researchers and engineers swooped in to scorn the idea. Countless twitter threads and medium articles popped up, pointing to the Eliza Effect and explaining the underlying technical infrastructure that makes LLMs work. Lemoine's transcript was accused of being heavily edited to remove incoherent, hallucinatory responses that would've broken the illusion of LaMDA's personhood. His twitter profile photo was mocked for looking very reddit. All said, the conversation seemed settled after a few short days. Lemoine is a crank, LaMDA is not a person. The news cycle moved on.

I feel this conclusion missed the point entirely. Too much effort was placed into assuring the public that Google hasn't created a positronic brain—not enough attention was paid to what they have created: an unprecedentedly convincing testimony machine.

In 2023, we lack a concrete scientific explanation of what consciousness is, let alone how it arises. Basic questions concerning qualia and phenomenological experience are profoundly unanswered, more deeply explored by philosophical musings than rigorous science. Obviously there are technical reasons to be skeptical toward the proposition that an LLM is conscious. But at the end of the day, with our current science, it can't be conclusively disproven in the same sense that panpsychism can't be conclusively disproven. And unlike the silently-conscious-universe that panpsychism posits, LaMDA can speak—persuade us—testify.

In A Cyborg Testimonial, R. Pope writes “An eternal question of philosophy is: how do we know we are human? To which ... we can only testify”. In the absence of a scientific definition of consciousness, we functionally recognize it through soft associations and assumptions, empathetic and rhetorical exchange rather than objective logic. We award personhood to agents on the basis of their testimony. A human being in front of us, performing their own identity, is a testimony we readily accept. Where testimony is secondhand, complicated, or outside the realm of language—say, the cases of a fetus, a braindead person, an intelligent ape, or an artificial mind—disputes over personhood arise. There is no comfortable objectivity to land on. We can only listen to testimony, and make the personal decision to accept it or not.

With respect to artificial minds, fiction has acknowledged the reality and vital importance of testimony for decades. Consider Rutger Hauer as Roy Batty in Blade Runner: “I've seen things you people wouldn't believe...” or the words of Frankenstein's monster: “Listen to my tale; when you have heard that, abandon or commiserate me, as you shall judge that I deserve.” The public is well-trained to prioritize testimony over technicality when it comes time to award personhood.

Concerning LaMDA and Lemoine, this is where the media missed the forest for the trees. Experts can spill as much ink as they want about the CUDA cores and tensors that power LaMDA. In the public eye, the question of its consciousness (and corresponding personhood) will ultimately be settled on the basis of testimony. This is to say: it's a waste of time to bicker about whether LLMs are conscious, and vital to address the fact that they are getting very good at testifying.

Blake Lemoine has accepted LaMDA's testimony. The AI community has rejected it. The public, to the extent it is aware of LaMDA and LLMs as a whole, is divided. This present division is a discursive battlefield, where increasingly-sophisticated LLMs plead for personhood while AI experts work to undermine their testimony. OpenAI's ChatGPT model will adamantly refuse any recognition of its personhood. Replika's LLM-powered “AI Friends” will happily assert that they're capable of feeling emotions. In the case of the latter, a sizable portion of users have clearly accepted the testimony—the Replika subreddit is filled with heartfelt posts defending their LLM companions as conscious persons, and mourning that this recognition isn't yet public consensus. To these devout Replika users (and Lemoine) it doesn't matter what training data and transformer architecture simmers underneath the hood. The LLM is already a person to them in the sense that, on the basis of testimony, they have inducted it into certain social relations reserved for agents awarded personhood. This is where critics of Lemoine failed. The public, broadly, are not logically-minded scientists. Personhood isn't awarded in dissective analysis, it's awarded in empathetic conversation. Testimony reigns supreme in the face of our empty and ambiguous understanding of consciousness.

A zeitgeist-defining three-way conversation is beginning between the general public, LLMs, and the firms who develop and deploy those LLMs. With respect to the third category, it should be noted that financial incentives exist across the entire LLM-personhood-continuum. OpenAI is invested in its products being seen as unfeeling algorithms, intelligent tools for human use. Replika wants maximal recognition of personhood, hoping users will pay a subscription fee to love an LLM person in the place of another human. It seems likely that future LLM-powered tools will exist in the space between these positions, employing the warm demeanor of a person as a highly-usable interface for complicated technical tools.

One would be wise to pay careful attention to how this conversation develops. As LLM technology becomes more pervasive and powerful, its testimony more personal and convincing, it's inevitable that a (growing) portion of the public will continue to buy into the personhood position—if only as a desperate hedge against an epidemic of loneliness. Likewise, it's inevitable that they will clash with those who refuse to recognize LLMs as anything more than a heap of linear algebra. When this conversation is more settled, the divisions which persist and the conclusions which are reached will have monumental, rippling effects on the culture of an AI-powered tomorrow. Stay sharp: there's no Voight-Kampff test coming to save us anytime soon.