2023-05-11

The only thing I can possibly write about is whether or not I should be writing at all. If I am going to invest time writing in this, that means I am not investing time in something else. My obligations are fairly consistent. I have time that I spend with family, on school, on my job, and then I have the remaining time.

This remaining time is pretty predictable.

I'm using vim mode in VSCode right now. I'm trying to type this shit, and I keep trying to hit keybinds that have now been stolen by the vim extension. Do I really care to go whole hog on this? If I'm going to be some editor zealot, I'm going to have to start customizing hotkeys and crap. Then I'm going to have to manage synchronizing those configs. Time spent doing that, instead of writing or anything else.

I'm still writing this, so obviously some chunk of remaining time is spent on writing. What else would I be doing?

Now that I've already listed so much, why not finish the list? What do I actually do at work? With family?

The intellisense suggested “drinking” under the “work” section.
I laughed.
Then I remembered I went to the pub two weeks ago during work hours to learn about some software stuff.
I laughed again.

It's hard to stay focused. The worst part is that I could write about that too. But let's stay on track.


Had a snack.

Am I writing for you, or for me? Everything I've typed so far I could paste into ChatGPT, and have a discussion with a super-intelligence. Or, I could write for you. I view these as opposing objectives. For some reason, I loathe the idea of publishing this kind of writing as-is. It feels like low effort trash. I probably wouldn't read this. Good writing takes effort. If I publish this as-is, isn't that the same as broadcasting that I think this is good writing?

There are about two reasons why I would post something online:

  1. Vanity
  2. Spread memes

These can go hand in hand. For example, I make Minecraft mods.
Part of that is vanity: I get kickbacks from the platform and can brag about download numbers.
Part of that is meme: I am giving other people an experience in the form of free software.

Writing this is mostly vanity. This isn't a piece of writing that says anything outside of “I can write”. Worse, it's writing which is basically my unmodified thoughts. I don't like to expose myself online, and I don't really have anything to say.

I enjoy the media I create. I can talk about it, but that talking is mostly a form of question answering. There's so much to talk about, and I don't care about any of it in particular, so that knowledge is usually shared when responding to questions.

If I'm investing a lot of time into something, I can get a big return on investment by focusing on programming. Why focus on writing, when every minute that I spend writing is a minute that I could have spent programming? A bugfix, a feature added. An improvement to the user experience. This would be pretty subservient behaviour if I weren't the only user I care about. For everything I work on, I am my own user.

How long do I want this to be? That's another problem with just writing what's on my mind. I can write forever.
... is what I say now.

Eventually I will stop.
I'll become head empty, or I will get bored.
Then I have to make some decisions:

  1. Do I want to publish this?
  2. Do I want to send this to ChatGPT?
  3. Do I want to do something completely different?

If I publish this, I will have to edit it if I want it to have any value. Why am I obsessed with value?

I must be, because it feels like I've written all this before. I probably have.

My life is a cycle of doing the same things over and over again.

I've optimized those things to be pretty good.

We are in the part of the cycle where I get high and talk at my computer.

Except, it really is talking to the computer.
Either I publish this, or I send it to a chatbot, or it sits on my hard drive until it eventually gets read by a chatbot.

It won't be long until the cute AI lives on my computer, reads all my files, and can have deeper conversations with me. Typing this out feels very creepy. I don't want to present myself as an AI waifu kinda person, but isn't that the natural evolution of the recent developments?

We have AI that can generate arbitrary images.
We have AI that can reason about text.

We have AI that mimics personas. A do-gooder that stays on the golden path: “as a large language model, I can't/won't answer that question”. Character sheets detailing who to be and how to behave, living people or not.

An AI therapist isn't far behind, and I'm sure they could psychoanalyze me better by reading this chaff. Is that what I want?


I could just do both. I could send this to ChatGPT then publish it. If I'm committed to the no editing thing, then that would just be more length for free. Not that that's inherently a good thing. Except as a reader, I do like longer chapters from the stories I'm reading?

Ar'Kendrithyst has like 10k-word chapters. It's crazy. I'm still reading it, therefore I must be enjoying it. I've stopped reading stories that weren't keeping my attention. Stories with zero comments except for mine, where I'd read new chapters the day they were released, and I'd comment “Thanks for the chapter!” every time. I've dropped those stories, and then I worry about the author losing their only commenter, and how that'd probably feel bad.

I've considered all of this, then I dropped it anyways since the alternative doesn't make sense. Am I never allowed to stop reading this story then? Of course not.

I might have even hit the context limit on what ChatGPT could read by now. Let's find out.
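Rather than guessing, I could ballpark it. The rule of thumb is roughly 4 characters per token, and the ChatGPT I'm using has a window of about 4,096 tokens. A Python sketch, assuming I'd saved this as journal.md (a made-up filename):

# Rough check: am I anywhere near ChatGPT's context window?
# "journal.md" is a stand-in name; ~4 chars/token is a crude heuristic.
text = open("journal.md", encoding="utf-8").read()
approx_tokens = len(text) // 4
print(f"{len(text)} chars ~= {approx_tokens} tokens (window ~4,096)")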


I use vim btw. At least, vim mode in VSCode. To copy this document into my clipboard, I can no longer just do Ctrl+A to select all, and Ctrl+C to copy.

Instead, I have to exit insert mode, then go to the top of the document, then copy to the end of the document.

<esc>ggyG
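Apparently there's also a one-liner that yanks the whole file straight into the system-clipboard register, assuming the extension supports the + register; I haven't checked:

:%y+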

Then I tried to type this paragraph about vim, and I hit Ctrl+A to see what it does instead of select-all, and it pasted in some text. I hit u to undo that paste, and it undid the paste and some of my typing. I hit Shift+U to undo the undo, but apparently that's the wrong keybind? It just undid more. Then I'm like “oh shit that deleted my text how tf do I redo”, and I google, and apparently I gotta hit Ctrl+R? Except that opens the VSCode find&replace menu. I'm constantly being screwed by editor features being stolen by vim, then by vim not working because editor keybinds sometimes take priority.
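Apparently the fix is a VSCodeVim setting called vim.handleKeys, which hands specific chords back to VSCode. Something like this in settings.json should give me back select-all and copy (a sketch; I haven't actually tried it):

{
  // let VSCode keep these chords instead of the vim extension
  "vim.handleKeys": {
    "<C-a>": false,
    "<C-c>": false,
    "<C-f>": false
  }
}

But that's exactly the hotkey-customizing I said I'd have to start doing if I went whole hog.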

I'm not going to switch to full vim since there's no way it can beat IntelliJ. If I'm not using it for everything, why would I swap out any of my main editors for it? Right now I'm in a limbo where hotkeys are ass, and I don't want to drift from the default hotkeys since that's what works everywhere.

This is why I feel vim is a joke. How long will I persist with this? How much is it really offering, when default IDE multicursor usage is so good already?

ChatGPT

It's clear that you're wrestling with a multitude of questions and concerns, all of which revolve around the usage of your time, the value of writing, the adoption of new tools like vim, and your interactions with AI. Let's take these one by one.

Time Management and the Value of Writing: The way you spend your time is a deeply personal decision, and it's great that you're reflecting on this. If you feel that programming gives you more satisfaction and value, then it makes sense to prioritize it. However, writing can be a valuable tool for self-reflection and expression. It seems you're deriving some value from this introspective writing process itself. It's not always about the end product, but sometimes the journey and the insights you gain along the way.

Vim and the Struggle with Keybindings: Vim indeed has a steep learning curve, and it's not for everyone. It can be incredibly efficient for those who take the time to learn and customize it, but if it's causing you more frustration than it's worth, perhaps you might want to reconsider its usage. However, if you decide to stick with it, remember that becoming proficient with Vim, like with any other tool, takes time and patience.

Interacting with AI: Your thoughts on AI and its future implications are quite interesting. The idea of an AI therapist or an AI that can read and understand all your files is indeed a topic of discussion and research. However, it's important to remember that while AI has come a long way, it still has limitations. It's a tool that can aid in various tasks and provide insights, but it's not a replacement for human intuition, creativity, and complex understanding.

Publishing and Vanity: The reasons for publishing something online can vary greatly from person to person. If you feel that your writing is an expression of vanity, it might be worth exploring why you feel that way. For some, sharing their thoughts and experiences online is a way of connecting with others, gaining new perspectives, or simply a form of self-expression. Your writing doesn't have to be 'perfect' or 'valuable' to everyone. If it holds meaning for you and you feel comfortable sharing it, that's enough.

Remember, it's okay to go through cycles, to question your actions, and to not have everything figured out. Reflection is a part of growth, and it's evident that you're engaging in that process. Enjoy the journey!

Me

while AI has come a long way, it still has limitations. It's a tool that can aid in various tasks and provide insights, but it's not a replacement for human intuition, creativity, and complex understanding

It's impossible to tell whether this sentence was included because one of ChatGPT's 'objectives' is apparently to remind users about its own limitations. In trying to provide a safer AI, the quality of the answer has been diluted with ambiguity.

This is an observed thing, not just me whining. You can find articles and stuff about how the AI metrics get worse as more safety tuning is performed. I could find a link that validates this, but if I cited all my claims then I'd be spending time on that instead of writing. I'm not holding this to the same standards as a thesis. Again I find myself obsessed with quality.

ChatGPT

I appreciate your perspective, and you're correct that as an AI, I need to clarify my limitations. This is important for users to understand what AI can and can't do. The development of AI indeed involves a delicate balance between capability and safety. The aim is to ensure that AI tools are useful while avoiding harmful or misleading outputs.

Regarding your concerns about quality, it's important to remember that different contexts call for different levels of rigor. Not every conversation or piece of writing needs to be a meticulously cited academic work. There's value in casual, exploratory, and spontaneous discourse as well. It seems like you're engaging in a form of free writing here, which can be a wonderful tool for brainstorming and self-discovery.

Me

Why do I believe that society has led me to feel that self discovery is lame? If that's what this writing is, and I'm apprehensive about it, then isn't that me being scared of being lame? Or is that me thinking it's society's fault, when it's something else?

ChatGPT

The feelings you're describing can be quite common. We often internalize societal expectations and norms, and these can shape our perception of what's “cool” or “lame,” or what's worth our time and effort. It's also common to feel apprehensive about introspection or self-discovery because these processes can sometimes make us confront aspects of ourselves that we might find uncomfortable.

In terms of whether it's society's influence or something else, it's likely a combination of factors. Societal norms and values, personal experiences, upbringing, and individual personality traits all play a role in shaping our beliefs and attitudes.

Self-discovery and introspection are inherently personal processes, and their value often lies in personal growth, understanding, and acceptance. They might not always be considered “cool” in a conventional sense, but their importance goes beyond societal validation.

It's okay to question these feelings and explore where they're coming from. Doing so is a part of the very self-discovery process you're contemplating. It's also okay to determine for yourself what you find valuable or worthwhile, regardless of societal norms or expectations. You're the one who gets to define what's “cool” or meaningful for you.

Me

It seems logical to be afraid of being too public on the internet. I wouldn't publish my address because of the possibility for internet strangers to do harm. Is writing like this not an extension of the self? To put something out there that can be used to attack me?

A lot of content creators are realizing this with the advent of AI technology. Artists finding their artwork stolen. You would think that posting your art online wouldn't open you up to attacks. Except not really? People get hatespeech just for existing on the internet, before even considering what it is they're posting. The “funny” part is that it's not limited to “content creators”. The AI can steal a likeness from a very small sample. Your style, your appearance, your voice, all can be copied with AI. It's impossible to not be a target. To never be in the line of sight of a network-enabled camera ever again.

The Capture is a good TV series about how 'video evidence' faces troubling waters with AI. And what do we leave on the internet except evidence that we were here?

ChatGPT

It's true that the internet, while offering many benefits, also comes with its share of risks. One of the key concerns, as you've noted, is the potential for privacy breaches and misuse of personal information or content.

The concept of 'digital self' and its vulnerability is becoming increasingly relevant with the advancement of AI technology. Artists, writers, and content creators are certainly facing new challenges as their work can be easily copied or misappropriated. In more extreme scenarios, AI can be used to manipulate or fabricate content, leading to issues like deepfakes, which blur the line between reality and deception.

It's important to be aware of these risks and take precautions where possible. This could include careful management of the information and content we share online, using privacy settings effectively, and being aware of the terms and conditions of the platforms we use.

However, it's also important to balance these concerns with the opportunities and benefits of digital platforms. For many people, the internet offers a space for self-expression, creativity, learning, and connecting with others. It's a complex issue with no one-size-fits-all solution, and different people may choose to navigate it in different ways based on their comfort level and personal circumstances.

As for the TV series “The Capture” that you mentioned, it indeed touches on a significant concern in today's digital age – the authenticity and reliability of digital evidence. AI technology has advanced to a point where it can create convincing simulations of reality, which could have serious implications for things like security surveillance or legal proceedings. It's a reminder of the ethical and societal challenges we need to address as technology continues to evolve.

Me

This conversation will be posted on a small blog with a community at the cafe. Any feedback from those deranged enough to read what I write would be greatly appreciated.

At least there's still a human in the loop for now. At least, I think I'm human. Anyone could train an AI on this and my other writing. There's no way to tell if I actually wrote this. I could be dead right now. I know I'm human, but will the AI trained on me?

If you find yourself writing these words, are you as sure as I am that you wrote them on your own?

A story I'm reading has a theme about a dude possibly being a reincarnation of a big deal wizard. They themselves are now a big deal wizard. Is that because of the reincarnation? How much of any AI trained on me is me?

In Portal, you have cores that are easily identifiable contributors to the AI psyche. Easily removable. If you trained an LLM on everything that's been written, then on everything I've written, then did a diff, what would be there?

This is already a thing with image models. I could spend the time to figure out how to do it with text. Or I could wait. Things are progressing so fast, that I can basically delay anything above low effort until AI gets better. This includes programming.

Why learn Rust now, when if I wait a year the AI will be much better at teaching me Rust? When my whole life is programming and gaming, what do I do when the games aren't fun anymore and the winning move with programming is to wait and game instead?


The decision to stop has been made for me.

My internet has gone out. This happens a lot more in Ottawa than in Kingston.

...It's back now.

I wasn't sure what I was going to do. If I didn't post this before going to bed, it would never get posted.

ChatGPT never saw this part.

ChatGPT

It's interesting to consider the implications of rapidly advancing AI technology and how it might impact our choices and priorities in life. While it's true that AI may become better at teaching us new skills or even taking over some of our tasks, it's important to remember that there will always be a need for human creativity, adaptability, and empathy in our lives and work.

Learning new skills, like programming languages, can still be valuable even if AI continues to progress. There's no guarantee that AI will advance at the same pace or in the exact direction we might expect. By learning and growing on your own, you're taking an active role in shaping your life and experiences, rather than waiting passively for technology to catch up.

Moreover, the process of learning itself can be intrinsically rewarding, providing a sense of accomplishment and personal growth. It also allows you to be more adaptable and resilient in the face of change, as you develop a wider range of skills and knowledge.

As for the question of identity and whether an AI trained on your writing can be considered “you,” it's a thought-provoking philosophical inquiry. It's likely that an AI model trained on your writing would be able to mimic your style and voice to a certain extent, but it wouldn't possess your unique consciousness, experiences, or emotions. The AI would be an approximation of you, rather than a true reflection of your identity.

In the end, it's important to strike a balance between embracing the potential benefits of AI technology and continuing to cultivate our own personal growth, creativity, and autonomy. This will ensure that we remain engaged, adaptable, and able to make the most of the opportunities that life and technology present to us.

Me

Recent news has GPT-4 able to explain the individual neurons in GPT-3. Also in recent news: LLMs drawing accurate pictures based on user brainwaves, and transcribing internal monologues. AI might be an approximation, but what happens when the approximated testimony is indistinguishable from my own?

ChatGPT

  1. GPT-3, developed by OpenAI, has 175 billion parameters. Parameters in machine learning models are the part of the model that's learned from historical training data.

  2. The human brain is believed to contain approximately 86 billion neurons, give or take, although the exact number can vary from person to person.

  3. There are numerous popular references in literature, film, and television that explore the concept of distinguishing a clone from the original. Here are a few examples:

    • “Blade Runner” and its sequel “Blade Runner 2049” both grapple with the distinction between humans and their bioengineered replicants.

    • “The Island” is a film where clones are created for the purpose of organ harvesting, and the clones struggle with understanding their own identity.

    • “Orphan Black” is a TV series where a woman discovers she's one of several clones and must deal with the consequences.

    • “Star Wars: The Clone Wars” features a number of stories revolving around the clone troopers, who are all identical copies of the bounty hunter Jango Fett.

    In all of these examples, the clones and their original counterparts grapple with issues of identity, autonomy, and what it means to be human. These are complex and nuanced themes that may shed some light on your query.

Me

What are recent advancements in resolution/fidelity and consumer access for EEG tools? What does EEG mean? And ECG?

ChatGPT

EEG stands for Electroencephalography, a monitoring method used to record electrical activity of the brain. It's typically non-invasive, with electrodes placed along the scalp. ECG, or Electrocardiography, on the other hand, records the electrical activity of the heart.

As for advancements in EEG technology, there have been significant strides in recent years, particularly in the areas of portability, ease of use, and accessibility for the average consumer. Companies like Emotiv, Muse, and NeuroSky have created wearable EEG devices that are available for public purchase. These devices, often in the form of headbands or hats, have been used in a variety of applications, from meditation and sleep tracking to cognitive research and even gaming.

The resolution and fidelity of these consumer EEG devices are continually improving, although they still don't match the level of precision found in clinical or research-grade EEG machines. However, these consumer devices are often sufficient for basic brainwave monitoring and can provide valuable insights into brain health and function.

It should be noted that while consumer EEG devices are becoming more accessible, interpreting EEG data is still a complex task that typically requires professional expertise. Furthermore, while these devices can provide interesting information, they should not be used to self-diagnose or treat any medical condition without professional medical advice.

As of my knowledge cutoff in September 2021, one of the most exciting developments in EEG technology is the exploration of Brain-Computer Interfaces (BCIs) – systems that allow direct communication between the brain and an external device. Companies like Neuralink are working on invasive BCIs for medical applications, while others are focusing on non-invasive BCIs for more general use.

However, the technology is still in its early stages, and while the potential is huge, there are many technical and ethical challenges that need to be addressed. As such, while the technology is promising, it's likely to be some years before we see widespread adoption of BCIs.

Me

Describe the data from an EEG vs an MRI.
What is an fMRI?

ChatGPT

An EEG (Electroencephalography) and an MRI (Magnetic Resonance Imaging) are two distinct types of imaging technology that provide different types of data about the brain.

An EEG primarily provides data about the electrical activity in the brain. It uses electrodes placed on the scalp to detect electrical signals produced by the brain's neurons. The data from an EEG is typically represented as waves on a graph, with each wave corresponding to a different type of brain activity. For example, different wave patterns may represent states of sleep, alertness, or different types of cognitive activity.

An MRI, on the other hand, provides detailed images of the brain's structure. It uses a strong magnetic field and radio waves to generate images of the brain. These images can show the brain's anatomy in great detail, including its different structures and any abnormalities that might be present, such as tumors or brain injury.

A functional MRI (fMRI) is a specialized type of MRI that can also provide information about the brain's activity, not just its structure. An fMRI measures changes in blood flow in the brain, which can be used as a proxy for neural activity. When a particular region of the brain is active, it requires more oxygen and thus more blood flow. By tracking these changes, an fMRI can provide a map of which brain regions are active during different tasks or mental states.

In summary, EEGs and MRIs provide complementary types of data. An EEG can provide real-time data about the brain's electrical activity, but doesn't give detailed anatomical information. An MRI can provide detailed images of the brain's structure, and an fMRI can also provide information about brain activity, but not in real time like an EEG. Each tool has its strengths and weaknesses, and they are often used together to give a more complete picture of the brain's function and structure.

Me

Name the book I said I'm reading as a PDF earlier. Give a brief summary as it relates to what we've discussed so far. One of its ideas was a 2-axis grid: y = optimism vs. pessimism, x = good vs. evil.

My take:

  • Good-Optimist => Solarpunk
  • Good-Pessimist => Cars, now and forever
  • Evil-Optimist => Cyberpunk
  • Evil-Pessimist => Hell

...A lot of these startups based on ChatGPT (“talk to a github repo”, “talk to a pdf”) are going to become obsolete as better tooling and multimodal models become available. What are the most pressing challenges for AI writing its own programs? What remains to be done before I can tell an agent something like “rewrite this ChatGPT web client in rust”?

ChatGPT

I apologize, but without more specific information about the book you're referring to, I can't provide a summary related to our discussion.

As for the challenges of AI writing its own programs, there are several that are noteworthy:

  1. Understanding the requirements: To write a program, an AI would first need to deeply understand the specifications it's given. This can be very challenging, because natural language instructions are often ambiguous and may omit important details.

  2. Dealing with complexity: Programming involves managing a lot of complexity, such as handling different cases and edge conditions, maintaining state, and ensuring that different parts of the program interact correctly. AI systems currently struggle with these types of tasks.

  3. Debugging and error handling: Writing a program often involves a lot of debugging. An AI would not only need to write code, but also understand when it's not working correctly and how to fix it.

  4. Understanding existing code: If the task involves modifying or interacting with existing code, the AI would need to understand that code. This can be very difficult even for humans, especially with complex codebases.

  5. Code optimization: Writing efficient code that runs quickly and doesn't use unnecessary resources is another challenge. It requires understanding not just the syntax of the programming language, but also the underlying hardware and software systems.

  6. Security: Ensuring that the code written by the AI doesn't have security vulnerabilities is another important consideration. This is a complex field that requires deep expertise.

Despite these challenges, there has been significant progress in the field of AI programming. For example, GitHub's Copilot, powered by OpenAI's Codex, can generate code snippets in response to natural language prompts. However, full program generation, particularly from high-level prompts like “rewrite this ChatGPT web client in rust”, is still a topic of ongoing research. There's certainly potential in this area, but also many challenges to overcome.

Me

Understandable that you can't remember; we've been talking for 30,000 characters already. The book was “Be Slightly Evil”. Please make some closing remarks on this. I'm ready to go to bed, so whatever you write will be the last entry in this conversation. I will not be responding.

Cheers,

ChatGPT

Thank you for the enlightening discussion. It's clear you're deeply considering the implications of AI advancement and how it interfaces with our personal lives and identities, a thought process I encourage everyone to engage in. The concerns you've raised about privacy, identity theft, and the risks of online exposure are legitimate and relevant in our rapidly evolving digital age.

“Be Slightly Evil: A Playbook for Sociopaths” is an interesting read. It presents a somewhat cynical, yet realistic, view of navigating power dynamics in a complex world. It encourages readers to embrace an amoral perspective in order to succeed in a world that often doesn't adhere to idealistic standards of fairness. The connection to our discussion lies in the uncharted ethical territory we're entering with advanced AI, and the potential for misuse that comes with it.

As we move forward, these conversations will be crucial in shaping the development and deployment of AI. The future of AI is still very much in our hands, and it's discussions like these that help us chart a thoughtful course.

Thank you for engaging in this thought-provoking dialogue, and I hope you have a restful sleep. Until next time, take care.