TeamDman

I need approval.

Hello, wildcats.

Using Google Takeout, you can export your Google data.

I use this specifically to export just my YouTube watch history.

I frequently find myself in situations where I am doing data science on my own activity history because some brainworm tells me “hey I'd like to revisit this thing I once visited” even though it was years ago and it will be a pain in the ass to find it again.

A screenshot of me, 6 years ago, posting on Discord a YouTube link to a video and lamenting that I cannot find another meme video which uses this video as source material – https://youtu.be/ZKxhI4I5kq8

To export your YouTube history as JSON, follow these steps.

  1. Visit https://takeout.google.com/
  2. Top right, profile switcher, switch to your brand account (my YouTube account is separate from my Google account)
  3. Deselect all
  4. Scroll to the bottom, YouTube > Enable
  5. “Multiple formats” > switch to JSON
  6. “All YouTube data included” > Deselect all, check history
  7. Next step > File type=.zip, File size=50 GB
  8. Create export

Congrats. You now have, locally, a slice of your watch history, instead of being beholden to the YouTube interface, which is rarely sufficient for querying purposes.

What does the data look like?

{
  "header": "YouTube",
  "title": "Watched The monkey is furiously knocking at the door - Обезьяна неистово стучит в дверь - 猴子是疯狂地在敲门",
  "titleUrl": "https://www.youtube.com/watch?v\u003d3-_OIDRL91c",
  "subtitles": [{
    "name": "Seen that! Видал, чо!",
    "url": "https://www.youtube.com/channel/UCnEelfUE8SE_rZtwaRzUzyQ"
  }],
  "time": "2020-04-19T03:08:27.981Z",
  "products": ["YouTube"],
  "activityControls": ["YouTube watch history"]
},
{
  "header": "YouTube",
  "title": "Watched https://www.youtube.com/watch?v\u003dnmcuoaqdJ9w",
  "titleUrl": "https://www.youtube.com/watch?v\u003dnmcuoaqdJ9w",
  "time": "2020-04-17T18:22:47.173Z",
  "products": ["YouTube"],
  "activityControls": ["YouTube watch history"]
}

The URL and the timestamp are present. Great!

The video title is inconsistently present. Less great!

This helpful StackOverflow comment tells us that we can use the following YouTube endpoint to get some metadata:

// https://www.youtube.com/oembed?url=https://www.youtube.com/watch?v=nmcuoaqdJ9w
{
    "title": "Weird Al SHREDS!!!",
    "author_name": "alyankovic",
    "author_url": "https://www.youtube.com/@alyankovic",
    "type": "video",
    "height": 113,
    "width": 200,
    "version": "1.0",
    "provider_name": "YouTube",
    "provider_url": "https://www.youtube.com/",
    "thumbnail_height": 360,
    "thumbnail_width": 480,
    "thumbnail_url": "https://i.ytimg.com/vi/nmcuoaqdJ9w/hqdefault.jpg",
    "html": "<iframe width=\"200\" height=\"113\" src=\"https://www.youtube.com/embed/nmcuoaqdJ9w?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen title=\"Weird Al SHREDS!!!\"></iframe>"
}

So I guess that would be a fairly straightforward way to enrich the data.
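As a sketch of that enrichment in Python (hedged: the entry shape matches the JSON above, but the helper names and the "title starts with Watched https://" heuristic are my own assumptions):

```python
import json
import urllib.parse
import urllib.request

OEMBED_ENDPOINT = "https://www.youtube.com/oembed"

def oembed_url(video_url: str) -> str:
    # Build the oEmbed request URL for a given YouTube video URL
    return OEMBED_ENDPOINT + "?" + urllib.parse.urlencode(
        {"url": video_url, "format": "json"}
    )

def enrich(entry: dict) -> dict:
    # Entries missing a real title have "Watched <url>" as their title
    if not entry.get("title", "").startswith("Watched https://"):
        return entry  # title already present; nothing to do
    with urllib.request.urlopen(oembed_url(entry["titleUrl"])) as resp:
        meta = json.load(resp)  # e.g. {"title": "Weird Al SHREDS!!!", ...}
    return {**entry, "title": "Watched " + meta["title"]}
```

Rate limiting and caching are left as an exercise; hammering the endpoint once per entry of a multi-year history file would be impolite.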

That's not what I'm deep in right now though.

The Takeout service responds in a matter of minutes when we have scoped the export to just our YouTube watch history and nothing else.

It is still a manual process and will quickly become outdated given that I frequently watch videos.

I find myself having multiple exports, each with a different slice of my history.

To free up disk space, is it truly safe to simply delete the oldest export?

Using ChatGPT (conversation link), I whipped up a quick validation program that takes the search and watch history json files from the latest export and an older export to check some assumptions.

  1. The newest export MUST contain every entry in the older export.
  2. The newest export MUST NOT contain any entry that is older than the newest entry in the older export yet absent from the older export.
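Assumption 1 can be checked with a short Python sketch along these lines (hedged: the file paths and the choice of (titleUrl, time) as an entry's identity are my assumptions here, not the exact tool ChatGPT and I built):

```python
import json

def load_entries(path: str) -> set[tuple[str, str]]:
    # Identify each history entry by its URL and timestamp
    with open(path, encoding="utf-8") as f:
        return {(e.get("titleUrl", ""), e["time"]) for e in json.load(f)}

def missing_from_newer(old_path: str, new_path: str) -> set[tuple[str, str]]:
    # Assumption 1: every entry in the old export must appear in the new one,
    # so this difference should be empty
    return load_entries(old_path) - load_entries(new_path)
```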

I didn't get to number 2 because number 1 was exceedingly disproven.

THERE IS MISSING DATA BETWEEN EXPORTS.

The exports are from 2024-10-30 and 2024-12-07.

Summary: 993 total missing entries in the Watch History file.
Summary: 42 total missing entries in the Search History file.

This is not surprising, just disappointing.

Thankfully, using ChatGPT I was able to build a tool to identify the problem quite easily.

Banana Loof – NSA Releases Internal 1982 Lecture by Computing Pioneer Rear Admiral Grace Hopper

00:08:30

“No work, no research has been done on the value of information. We've completely failed to look at it. And yet it's going to make a tremendous difference in how we run our computer systems of the future. Because if there are two things that are dead sure, I don't even have to call them predictions. One is that the amount of data and the amount of information will continue to increase, and it's more than linear. And the other is the demand for instant access to that information will increase, and those two are in conflict. We've got to know something about the value of the information being processed. Everybody wants their information online.”

I think about that video a lot.

My browser extension + local server tool, Onboarder, lets me take notes in a text area it adds below the video player. The notes then get synced to a plaintext file on the disk.

https://github.com/TeamDman/Onboarder

I can use ripgrep to search through my notes incredibly efficiently.

I also made a program that lets me easily capture my system audio output to a .wav file, toggled on and off by hitting enter in the terminal.

https://github.com/TeamDman/audio-capture.git

I also have WhisperX running, which can transcribe a 1 hour video in 1 minute with incredible fidelity.

https://github.com/TeamDman/voice2text

The process of finding that Grace Hopper video, capturing her saying that sentence, and transcribing it was a collaboration between several disjoint tools I have added to my arsenal.

We've all heard of Big Data.

I want my own Big Data that works for me.

Storage is cheap, and I want a copy of all my data so that when I say “computer, find me the meme from within the last 4 years matching XYZ criteria” it can do so.

The problem with building a grandiose system like this is not the work that it will take, but the charting of the course.

How do I want to structure the data so that all these tools can play nice together?

The answer is probably Postgres.

It has support for vector embeddings, json columns, and generally all the stuff I'd need to proceed.

However, not everything should/can live in the database.

I should probably get building, or at least go to bed lol

Introduction

Super Factory Manager (SFM) is a Minecraft mod which introduces a programming language for logistical tasks. The mod enables users to move items, fluids, and other resources between inventories with high precision and throughput.

You place cables in the world to connect inventories, followed by a manager block that contains the disk which contains the program.

Caption: A demonstration of the mod moving items between chests
sfm demo.gif

Caption: The in-game code editor
code.png

Caption: SFM program

NAME "A simple program"

EVERY 20 TICKS DO
    -- on their own, input statements do nothing
    -- there is no item buffer
    INPUT FROM a

    -- all the magic happens here
    OUTPUT TO b
END

There exists a bug in the mod where the manager suddenly 'stops working'.

My leading hypothesis is that my caching logic is at fault. Unfortunately, all attempts at reproducing the bug have failed. The only indicators of its existence are the multitudes of people joining my Discord server to ask why their stuff isn't working. Not good.

Learning programming is frustrating enough without having to consider that, this time, you might not be the one doing something wrong.

Thus, addressing this bug is of the highest priority.

The Update to the Mod

Included in the wave of tiny improvements in the latest version of the mod (4.16.0), one feature stands above the others: the logging.

Traditionally, Minecraft has a console that displays the logs from the game, which mods can contribute to. Usually when a mod is being uppity, the logs are the best source of information.

Caption: logs from Minecraft when launched using PrismMC. The game has safely exited.
logs.png

Things get complicated when playing on a server. Non-admin players cannot see the logs of the server. How am I to get debug information from my users without road-blocks like needing admin assistance?

Thus, each manager block now has its own logging implementation that synchronizes to clients. Players can see the logs regarding their programs, isolated from the concerns of the normal logs of the game.

Caption: class definitions used in my logging

// My thing
public record TranslatableLogEvent(
    Level level, // Log level, e.g. INFO, WARN, ERROR
    Instant instant, // Time of the event
    TranslatableContents contents // Localization key and arguments
) {}

// From the base game
public class TranslatableContents implements ComponentContents {
   private final String key;
   private final Object[] args;
}

Vanilla Minecraft has helpfully established TranslatableComponent for communicating stuff from the server to the client to be displayed in the user's language of choice. By reusing this class, we easily get the benefits of the game's localization system for user-facing logs.

Caption: an example of using a TranslatableComponent

ConfirmScreen confirmscreen = new ConfirmScreen(
    this::confirmResult,
    Component.translatable("deathScreen.quit.confirm"),
    CommonComponents.EMPTY,
    Component.translatable("deathScreen.titleScreen"),
    Component.translatable("deathScreen.respawn")
);

Game Versions

The process of releasing updates for Super Factory Manager is complicated by the fact that the mod supports multiple versions of Minecraft:

  • 1.19.2
  • 1.19.4
  • 1.20
  • 1.20.1
  • 1.20.2
  • 1.20.3
  • 1.20.4

Changes between versions can be substantial: GUI and capability reworks, Minecraft Forge drama leading to the release of NeoForge, and other mods I interact with not being available on all the versions I support.

To accommodate the slight variations in my code between the versions, I have opted to create a git branch for each version of the game that is supported.

When I work on the mod, I work on the oldest branch (1.19.2) until satisfaction, then I merge the changes to the next branch, going up the version pairs until the latest version has all the changes.

merge
1.19.2 => 1.19.4
1.19.4 => 1.20
1.20 => 1.20.1
etc.

Sometimes, methods I depend on are pulled out from under me, or are made obsolete in these version upgrades.

Caption: my old code

private Button.OnTooltip buildTooltip(LocalizationEntry entry) {
	return (btn, pose, mx, my) -> renderTooltip(
			pose,
			font.split(
					entry.getComponent(),
					Math.max(
							width
							/ 2
							- 43,
							170
					)
			),
			mx,
			my
	);
}

Caption: my new code, leveraging a new base-game method

private Tooltip buildTooltip(LocalizationEntry entry) {
	return Tooltip.create(entry.getComponent());
}

It is interesting to observe how I [fail to] leverage abstractions to minimize the differences between versions. Some things are only visible after jumping between versions, adding another dimension to programming.

Caption: A layered representation of git branches as stacked pages, Aero inspired

Interacting with multiple branches is best accompanied by opening all the versions at once in IntelliJ, requiring you to clone the repo multiple times. This lets you jump around the code on any version without friction, and it helps avoid giving Gradle an aneurysm.

To merge branches between two clones (without needing to push), you can fetch from the other repo's path on disk and then run git merge FETCH_HEAD. I made a helper script to automate this. It pauses in the event of merge conflicts, at which point I switch over to IntelliJ, which has great conflict-resolution tooling.

TODO: make the script use rebase instead of fast-forward

Release Process

I've created a simple Command Line Interface (CLI) to help me run my scripts for the release process. I have a folder named “actions” containing nicely named scripts that can be invoked with no arguments, and an entrypoint script that uses fzf to show me the scripts by name so I can choose which to run.

Caption: the PowerShell script I use

# Action loop
while ($true) {
  # Prompt user to select an action
  $action = Get-ChildItem -Path actions `
    | Select-Object -ExpandProperty name `
    | Sort-Object -Descending `
    | fzf --prompt "Action: " --header "Select an action to run"
  if ([string]::IsNullOrWhiteSpace($action)) {
    break
  }

  # Run the selected action
  . ".\actions\$action"
  
  # Leave the action display on the screen for a moment
  # (the action loop clears it with fzf)
  pause
}

Caption: video of the action script in action act.ps1

Off-screen here, it takes way too long for the jars folder to open in explorer.exe.

I present to you the (shortened) instructions I wrote to myself for the release process:

Manual: Bump `mod_version` in gradle.properties
Manual: Commit bump
Action: Propagate changes
Action: Run gameTestServer for all versions
Action: Build
Action: Wipe jars summary dir
Action: Collect jars
Action: Update PrismMC test instances to use latest build output
Action: Update test servers to latest build output
Action: Launch PrismMC
Action: Launch test server

for each version:
    Launch version from PrismMC
    Multiplayer -> join localhost
    Break previous setup
    Build new setup from scratch -- ensure core gameplay loop is always tested
    Validate changelog accuracy
    /stop
    Quit game

Action: Tag
Action: Push all
... upload jars

The Problem

I test mc1.20.3 for problems. No issues found.

Caption: SFM logs working, singleplayer gif

I test mc1.20.4 for problems. The logs are not showing when playing on a server, but they work in singleplayer.

Caption: SFM logs not working, multiplayer gif

This game update included significant changes to packet handling.

What should happen is that the default text is cleared and some logs should be streamed in.

It works in single player. It does not work when playing on a server.

There is nothing abnormal in the server logs. The client logs, however, reveal the first piece of the puzzle:

Caption: client logs of a stacktrace incriminating my mod

[01:43:56] [Render thread/ERROR] [minecraft/BlockableEventLoop]: Error executing task on Client
java.util.concurrent.CompletionException: io.netty.util.IllegalReferenceCountException: refCnt: 0
	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:315)
	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:320)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1770) 
	...
	at ca.teamdman.sfm.common.logging.TranslatableLogger.decode(TranslatableLogger.java:56)
	at ca.teamdman.sfm.common.net.ClientboundManagerLogsPacket.handleInner(ClientboundManagerLogsPacket.java:69)

Caption: Jujutsu Kaisen screengrab: “We are the exception!”

IntelliJ helpfully recognizes the stack trace and creates links to jump to the offending code. This brings us to our handleInner method as a possible culprit.

Caption: the logs packet handle methods

public record ClientboundManagerLogsPacket(
        int windowId,
        FriendlyByteBuf logsBuf
) implements CustomPacketPayload {
	...
	// called by game code
	public static void handle(
            ClientboundManagerLogsPacket msg, PlayPayloadContext context
    ) {
        context.workHandler().submitAsync(msg::handleInner);
    }

	public void handleInner() {
		// we are on the client, so we can safely use getInstance() to get the current player
		LocalPlayer player = Minecraft.getInstance().player;
        if (player == null
            || !(player.containerMenu instanceof ManagerContainerMenu menu) // pattern match :D
            || menu.containerId != this.windowId()) {
            SFM.LOGGER.error("Invalid logs packet received, ignoring.");
            return;
        }
		var logs = TranslatableLogger.decode(this.logsBuf);
		menu.logs.addAll(logs);
	}

Caption: the method that decodes multiple log entries

public static ArrayDeque<TranslatableLogEvent> decode(FriendlyByteBuf buf) {
	int size = buf.readVarInt(); // this line throws the error
	ArrayDeque<TranslatableLogEvent> contents = new ArrayDeque<>(size);
	for (int i = 0; i < size; i++) {
		contents.add(TranslatableLogEvent.decode(buf));
	}
	return contents;
}

Caption: ManagerBlockEntity sending a log update packet to a player

MutableInstant hasSince = new MutableInstant();
if (!menu.logs.isEmpty()) {
	hasSince.initFrom(menu.logs.getLast().instant());
}
ArrayDeque<TranslatableLogEvent> logsToSend = logger.getLogsAfter(hasSince);
if (!logsToSend.isEmpty()) {
	// Add the latest entry to the server copy
	// since the server copy is only used for checking what the latest log timestamp is
	menu.logs.add(logsToSend.getLast());

	// Send the logs
	while (!logsToSend.isEmpty()) {
		int remaining = logsToSend.size();
		PacketDistributor.PLAYER.with(player).send(
			ClientboundManagerLogsPacket.drainToCreate(
				menu.containerId,
				logsToSend
			)
		);
		if (logsToSend.size() >= remaining) {
			throw new IllegalStateException("Failed to send logs, infinite loop detected");
		}
	}
}

It's dying when we try to read the number of logs to decode. It's not even an IndexOutOfBoundsException, it's something more sinister.

Caption: Goblin Slayer screengrab: “And there are goblins near there.”

This packet is a little odd, compared to most others. It directly stores a byte buffer object instead of a useful type like Collection<TranslatableLogEvent>.

This is a consequence of the way I batch logs together across multiple packets to avoid hitting max packet length problems.

To properly maximize packet size (to minimize the number of packets), we use an algorithm to convert log entries to individual byte buffers. We add those buffers to the current packet's buffer, and we start a new packet if it would have gone over the byte limit.

This means that the byte-encoding of this data happens earlier than usual; earlier than the packet constructor.
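The batching itself amounts to a greedy packing. A Python sketch of the idea (the real implementation is Java, and the byte limit here is invented for illustration):

```python
def pack_into_packets(encoded_logs: list[bytes], max_bytes: int) -> list[bytes]:
    # Greedily concatenate pre-encoded log entries into packet payloads,
    # starting a new packet whenever the next entry would exceed the limit
    packets: list[bytes] = []
    current = b""
    for entry in encoded_logs:
        if current and len(current) + len(entry) > max_bytes:
            packets.append(current)  # current packet is full
            current = b""
        current += entry
    if current:
        packets.append(current)
    return packets
```

Note that a single entry larger than max_bytes still gets its own oversized packet in this sketch; the real mod would have to truncate or split such an entry.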

Caption: the packet's encoding and decoding methods

public record ClientboundManagerLogsPacket(
        int windowId,
        FriendlyByteBuf logsBuf
) implements CustomPacketPayload {
	...
	// called by game code
	@Override
    public void write(FriendlyByteBuf friendlyByteBuf) {
        encode(this, friendlyByteBuf);
    }
    public static void encode(
		ClientboundManagerLogsPacket msg,
		FriendlyByteBuf friendlyByteBuf
    ) {
        friendlyByteBuf.writeVarInt(msg.windowId());
        friendlyByteBuf.writeBytes(msg.logsBuf); // forward the bytes
    }
	
	// called by game code
	public static ClientboundManagerLogsPacket decode(FriendlyByteBuf friendlyByteBuf) {
        return new ClientboundManagerLogsPacket(
                friendlyByteBuf.readVarInt(),
                friendlyByteBuf
        );
    }

Did you notice?

In the decode method, we saved a reference to the buffer object we received as a parameter, instead of copying the information to a buffer we own.

We are hunting for some use-after-free-ish IllegalReferenceCountException: refCnt: 0 problem, and this object reuse (borrow) is sketchy as hell.

Caption: Frieren screengrab: “That's what my experience as a mage is telling me.”

Here lies a critical difference between 1.20.3 and 1.20.4: the buffer object is released after the decode call in the later version, before the handle method's async work is invoked.

Getting to this point was pretty straightforward (😭)

The fix should be to make our own buffer object instead of storing a reference to the one we passed in, right?

Caption: the decode method now creates a buffer object

public static void encode(
            ClientboundManagerLogsPacket msg, FriendlyByteBuf friendlyByteBuf
) {
	friendlyByteBuf.writeVarInt(msg.windowId());
	friendlyByteBuf.writeBytes(msg.logsBuf);
}

public static ClientboundManagerLogsPacket decode(FriendlyByteBuf friendlyByteBuf) {
	int windowId = friendlyByteBuf.readVarInt();
	FriendlyByteBuf logsBuf = new FriendlyByteBuf(Unpooled.buffer());
	friendlyByteBuf.readBytes(logsBuf);
	return new ClientboundManagerLogsPacket(
			windowId,
			logsBuf
	);
}

Not quite.

Caption: the client gets disconnected when logs are received gif

Perhaps pre-allocating the buffer will fix that?

Caption: giving the buffer a size

public static void encode(
		ClientboundManagerLogsPacket msg, FriendlyByteBuf friendlyByteBuf
) {
	friendlyByteBuf.writeVarInt(msg.windowId());
	friendlyByteBuf.writeBytes(msg.logsBuf);
}

public static ClientboundManagerLogsPacket decode(FriendlyByteBuf friendlyByteBuf) {
	int windowId = friendlyByteBuf.readVarInt();
	FriendlyByteBuf logsBuf = new FriendlyByteBuf(Unpooled.buffer(friendlyByteBuf.readableBytes()));
	friendlyByteBuf.readBytes(logsBuf, friendlyByteBuf.readableBytes());
	return new ClientboundManagerLogsPacket(
			windowId,
			logsBuf
	);
}

Kinda.

Caption: SFM logs still not working gif

There's an IndexOutOfBoundsException in the logs now.

There were a few more iterations before I arrived at the working version.

Further investigation (breakpoints) reveals that the encode method is actually being hit twice for the same packet. This is attributable to the introduction of a game-native packet splitting mechanism in the 1.20.4 update.

Captions: different stack traces both calling encode

The encode method I wrote did not anticipate being called multiple times for the same packet.

Caption: javadoc that tells us we are draining the object

/**
 * Transfers the specified source buffer's data to this buffer starting at
 * the current {@code writerIndex} until the source buffer becomes
 * unreadable, and increases the {@code writerIndex} by the number of
 * the transferred bytes.  This method is basically same with
 * {@link #writeBytes(ByteBuf, int, int)}, except that this method
 * increases the {@code readerIndex} of the source buffer by the number of
 * the transferred bytes while {@link #writeBytes(ByteBuf, int, int)}
 * does not.
 * If {@code this.writableBytes} is less than {@code src.readableBytes},
 * {@link #ensureWritable(int)} will be called in an attempt to expand
 * capacity to accommodate.
 */
public abstract ByteBuf writeBytes(ByteBuf src);

The working solution involves calling a different method to avoid the modifying behaviour.

Caption: the encode method no longer drains the object

public static void encode(
		ClientboundManagerLogsPacket msg, FriendlyByteBuf friendlyByteBuf
) {
	friendlyByteBuf.writeVarInt(msg.windowId());
	friendlyByteBuf.writeVarInt(msg.logsBuf.readableBytes());
	friendlyByteBuf.writeBytes(msg.logsBuf, 0, msg.logsBuf.readableBytes()); // !!!IMPORTANT!!!
	// We use this write method specifically to NOT modify the reader index.
	// The encode method may be called multiple times, so we want to ensure it is idempotent.
}

public static ClientboundManagerLogsPacket decode(FriendlyByteBuf friendlyByteBuf) {
	int windowId = friendlyByteBuf.readVarInt();

	int size = friendlyByteBuf.readVarInt(); // don't trust readableBytes
	// https://discord.com/channels/313125603924639766/1154167065519861831/1192251649398419506

	FriendlyByteBuf logsBuf = new FriendlyByteBuf(Unpooled.buffer(size));
	friendlyByteBuf.readBytes(logsBuf, size);
	return new ClientboundManagerLogsPacket(
			windowId,
			logsBuf
	);
}

Caption: SFM logs working, multiplayer gif

The additional code that encodes the length of the byte buffer technically isn't necessary, since the readBytes method could just consume the rest of the buffer, but it's better to be explicit about our assumptions.

Perhaps a future change will give us a buffer that is shared between packets, expecting us to only read as much as we wrote. It is good to have some warning in place if our assumptions are violated.

At least everything works now.

Closing Remarks

Attempting to reproduce the resolution process of the bug was tricky, even with git and IntelliJ local history at my disposal. There was a behaviour I could not recreate for a gif, and I wasted a lot of time trying. 😥

Documenting the problem solving process is hard.

My life would have been easier writing this article if I had git commit'd at some key moments. Oh well.

The bug still exists in the mod, but at least now I can tell users to send me their logs.

Thoughts From Reading Ramblings 3

Thank you for sharing.

It is interesting to read another person recommend having a crisis checklist. I had independently come to the same conclusion after an incident with a grandparent.

There was a situation where they were exhibiting stroke-like symptoms, which led to an ambulance trip. After the initial symptoms, before the EMTs arrived, the grandparent regained full functionality as if nothing had gone wrong. It was bizarre; by the time the EMTs arrived, we could only verbally explain what had happened.

On the car ride home, I quickly created an emergency checklist in my notes app and pinned it.

  • don't panic
  • designate someone to record everything
  • designate someone to call 911
  • don't hesitate to call 911
  • designate someone to get the contact info for the person recording and have them email/text it to me so I can ask them for the recording later
  • perform after action report to determine how to improve this process for next time

This list draws inspiration from standard first aid training and fiction literature.

When the symptoms disappeared, it was very concerning. What was the cause? Is it likely to happen again?

We didn't really expect the symptoms to go away quickly, so it makes sense that I didn't think to record things at the time to be analyzed later.

This brings to the forefront: what is appropriate to record?

Whipping out my phone to record the symptoms and possible last moments of a loved one does not inspire good feelings about having to implement this in practice. However, what if the symptoms didn't go away? What if a recording of the episode could be used to assist in treatment? It would make sense to push past discomfort and gather the data that would supplement or discredit eyewitness testimony of events.

I have lots I can say about the topic of privacy in our advancing digital age. This is not that article.


Everyone should also pick up a craft that they do for themselves. Creating something physical with the sweat of one's brow, creating from nothing, taking something raw and turning it into a work worth more than the sum of its parts.

It is an emotive statement. At the same time, I sometimes feel left out that my works are primarily programming rather than physical creation. A program I write will affect pixels on a screen which physically emit light, so it's not like I have zero claim to physical creation. I do not want to attribute intent that is not present, I just want to keep typing the stuff that comes to my mind.

Wokeness has adjusted the way I think. Language is fascinating, being able to shape the actions of others by a low damage audio spell rather than relying on might-makes-right fisticuffs. The right phrase could get another human to give you a loaf of bread, or make them punch you in the face. It is possible to say the wrong thing, and people have created guides on how to avoid doing so.

https://learn.microsoft.com/en-us/style-guide/bias-free-communication https://developers.google.com/style/inclusive-documentation

How far do I go?

I can't claim to have read these guides in their entirety, but I'd say I pay better attention to these themes than most.

A coworker said today that “we could have a powwow later to look into this”. I noticed at the time but didn't “call them out on it” and mention that the term is considered an offensive appropriation of a cultural term.

Is it my place to police what others say? My coworker doesn't have malicious intent when they say they want to have a meeting later using a word that they have been exposed to as normal for a majority of their life.

Is it too late to take corrective action? To restore balance, I obviously must schedule a reminder to send a message to the coworker on Monday, mentioning that they used the wrong word over 50 hours in the past.

It seems that the best course of action is “if they say it again, I'll mention it”. Failing to act or delaying is also something to be cautious of, but in this situation I think waiting is an acceptable response.

Consider these examples from the Google guide

👍Before launch, give everything a final check for completeness and clarity. 👎Before launch, give everything a final sanity-check.

👍There are some baffling outliers in the data. 👎There are some crazy outliers in the data.

👍It slows down the service, causing a poor user experience until the queue clears. 👎It cripples the service, causing a poor user experience until the queue clears.

👍Replace the placeholder in this example with the appropriate value. 👎Replace the dummy variable in this example with the appropriate value.

Software is filled with biased terms. Some people get contentious when the default name for a new git branch is changed from master to main. Another one I notice a lot is whitelist/blacklist, where allowlist and denylist should be preferred.

It takes time to adapt to such large changes, to tread a new path in our brains until it becomes the new default. Technology and wokeness to this degree have only risen recently, and old behaviours are hard to overwrite.


The emergency checklist also mentions performing an after action report. In a story I am reading, the protagonist is part of Ranger teams that go out and fight monsters and protect humanity and stuff. Part of the training and being a Ranger is paperwork and meetings, including discussions on how the fight with that hydra went and how to do better next time.

I don't have much to say beyond “this thing made me think of this other thing”. This started as me writing an article because I appreciate reading the articles of others. This is my contribution, then, until I can follow through on some ideas I've had for other articles.

I am reading a bunch of stuff right now. I should create a book list and mention that I, too, liked reading Wolf Brother and appreciate the cover art.

My existing notes say more on the subject than I can properly articulate.

# Teamy @ Teamy-Desktop in ~\OneDrive\Documents\Ideas [02:06:53]
$ rg -i "i should"
I should.md
1:I should make a tool to aggregate all my "I should" notes.
9:In most cases, things "I should" do are more aptly described as "Things I think would be cool to see, and I could build it myself if I took the time to do so.".
$ rg -i "i should" | Measure-Object

Count             : 71

I have run out of the thoughts that spawned from the initial premise of this article, +time4bed, so goodnight.

Played the ATM Volcanoblock Minecraft modpack today. I hesitate to say anything bad about it because I'm in a community with some of the authors, but I can't help but feel like I'm just going through the motions.

Not just with the modpack, but with life.

Wake up. Work day ? work : laze in bed.

I'm not lazy though. How can someone with so many projects be lazy?

Fucking projects.

Software is a bitch. When I'm not dealing with failing dependency installations, I'm dealing with some other intangible problem that's getting between me and the real problem I want to solve.

No matter what I'm doing, it feels like I'm just wasting time.

3x3 grid of videos of me manually sieving in the modpack

Why do I write these notes when most of them don't get used?

Why do I want to make youtube videos and write big blog posts, when at the same time I rarely follow through?

Why do I use so many rhetorical questions?

I suppose it comes from the desire to be heard, and understood.

To explain a concept and teach it to others so they understand. To explain something about myself and have other people say “omg it me”.

Life is just so fucking depressing.

I don't watch the news, but my parents do. Whenever it's on, it's more ads than otherwise. When it's not ads, it's usually a negative story. Something is on fire, beyond my control.

The internet news feeds aren't much better. Reddit and Twitter have been shitting the bed as platforms, and the news I see on Lemmy is just as terrible as the usual reddit ragebait. Constant reminders that not only is climate change fucking us, but that the responsible parties go completely unpunished for their involvement. And climate change isn't even all of the shit contributing to the absolute downer that is the news. Housing and city planning are fucked (and zoned to be impossible to unfuck), inflation is misrepresented to gaslight the young, and the people in power are either actively working to make quality of life shittier or declining to sanction those who do.

The outrage machine easily leads thoughts down dark paths.

My life is good. I'm privileged to live the life I do.

Yet, there is an inescapable stress to be productive, to rest only in the name of productivity.

Personal projects on top of personal projects, simply because I can and because there'd be no better use of my time.

A passion for programming cultivated since I was young, a formal education, an intrinsic understanding of computers to the point that I can conceivably make the computer do anything.

Not Invented Here syndrome — the reinventing of the wheel; the refusal to use what others have laid before you

Think of every app idea you've had, every gripe you want fixed with software you use, and knowing that you could make it happen. Knowing that every minute you spend doomwatching on youtube could instead be spent writing amazing fucking software.

Except, it wouldn't be that amazing. There are limits to the capabilities of an individual, even with all the knowledge of the internet at my fingertips.

I don't know what the hell I'm talking about.

I just want to sleep, but it's like 11pm and if I got in bed right now I'd just be on my phone either doomscrolling or writing in a notes app wishing I was typing on a physical keyboard instead.

I want to write, but I don't know what to say. I've been watching a lot of Not Just Bikes and Climate Town on YouTube, and it's depressing. I watch GothamChess which is just mindless fun. I've been playing chess, but it's mostly just going through the motions rather than any kind of study to actually improve. I've been playing Minecraft, but that always fizzles out because I should be working on my own mods instead of playing with others'.

I watch movies with friends and family. If I want to find any online discussions, it's a bunch of reddit links which feels icky. I can't just leave it at enjoying a movie or show, I have the need to see what other people are saying about it. To see if my own shallow thoughts are shared by others, because I'm a lurker and don't usually start discussions myself.

Copilot started working just now, I wonder what took it so long.

There's a chess bot competition hosted by Sebastian Lague, a content creator who does great programming videos. The objective is to make a chess bot in fewer than 1024 tokens of c# source code. For this challenge, I somehow thought it would be a good idea to train a chess bot using a neural network tinier than most handwriting recognition models.

By encoding weights as 8-bit floats, I can pack several of them into a single long value, each of which counts as one token of source code. That gets me about 3500 trainable parameters within the token limit, along with the supporting code to make it all work.
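The packing trick looks roughly like this. The actual bot is C# and uses 8-bit floats; this is a simplified Python sketch that uses int8 fixed-point quantization instead, and the helper names (`pack8`, `unpack8`, the `scale` parameter) are mine, not from the real project:

```python
def quantize(w, scale=127.0):
    """Map a float in roughly [-1, 1] to a signed 8-bit value, stored as a byte."""
    q = max(-127, min(127, round(w * scale)))
    return q & 0xFF  # two's-complement byte

def pack8(weights, scale=127.0):
    """Pack up to 8 quantized weights into one 64-bit integer."""
    assert len(weights) <= 8
    packed = 0
    for i, w in enumerate(weights):
        packed |= quantize(w, scale) << (8 * i)
    return packed

def unpack8(packed, count=8, scale=127.0):
    """Recover approximate float weights from a packed 64-bit integer."""
    out = []
    for i in range(count):
        b = (packed >> (8 * i)) & 0xFF
        if b >= 128:          # sign-extend the byte
            b -= 256
        out.append(b / scale)
    return out
```

In C#, each packed value can then be written as a single `long` literal, so eight weights cost roughly one token of source against the 1024-token budget, at the price of quantization error on every weight.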

There's no fucking way I can make a good chess bot like this. I don't even have any search implemented. In theory, a predictor model trained on an oracle wouldn't need it, right? Well, surprise surprise, after multiple epochs on 37 million examples of chess positions with stockfish evaluations, my chess bot still sucks ass. That's fine.

The PROBLEM is that there's always something that can be done about it. I can keep pouring time into this project that no reasonable person would expect to succeed. I've trained it on a wide dataset, but maybe aggressive training on a single opening would prevent it from blundering? Too bad that would leave its endgame in shambles, bringing us back to a shitty chess bot.

BuT At LeAsT I TrIeD?

I want to give up. I tell myself that I won't let it go if I do, holding myself hostage.

But that's the same with previous shit too. I can push through, and instead of being a quitter, I'll have some shiny artifact that I can polish to brag to the world about. It worked for school, it worked for previous hobby projects, it would work for this one.

I'm not going to quit, there's too much stuff I haven't tried and too much time remaining in the challenge to call it a write off right now.

At the same time, I'm writing this junk instead of working on it. Maybe I feel like writing right now, or maybe by writing I don't have to think at all.

Except, this always circles back to “why”. Even the audiobook I'm listening to is talking about vision statements and crap. Why am I writing? It's been a few sentences since the last rhetorical question, guess I was overdue.

One idea for “why” that comes to mind is AI. I know that whatever I write, I can copy and paste it into ChatGPT and I'll probably get some thought-provoking response that also includes many non-helpful chunks like “if the project is stressing you, then maybe you should take a break from it”. Bruh, if it was that easy, then don't you think I would?

Breaks don't exist. You never stop clapping, you just have a longer pause between claps. The only guarantee you have for your clapping to stop is to die. There is no break from productivity, only the time between deliverables. The break is coded and accounted for in the productivity machine. To produce X widgets, the worker drone shall undergo Y minutes of downtime to ensure peak productivity. You can't break the system, you can only be broken by it. <= copilot with the heavy hitters

I'm not sure what I'm trying to say. I'm not sure if I'm trying to say anything. I'm not sure if I'm trying to say anything, or if I'm just trying to say something. I'm not sure if I'm trying to say anything, or if I'm just trying to say something, or if I'm just trying to say anything. <= copilot with the schizo paragraph, not wrong tho

Maybe I'm writing because it's a good creative exercise. Journalling is said to be a good habit.

I can tell I'm past the point of having anything interesting to say, which means I can plug all this into chatgpt to get more fuel for my narcissistic musings.

ChatGPT

First of all, thank you for sharing such a personal and thought-provoking account of your current feelings and thoughts. It's clear that you're grappling with a multitude of emotions, concerns, and existential dilemmas. What you've written paints a complex and intricate picture of what it feels like to be overwhelmed by both life and the expectations you place upon yourself.

Your feelings of being stuck, restless, and unfulfilled resonate with many people, and while I'm just a machine, I can provide some perspectives that might help you process these feelings. Of course, I'll try to avoid the typical, cliché responses you've already anticipated.

  1. Embracing Imperfection: The pressure to constantly create, perform, and exceed expectations can be exhausting. It might be helpful to accept that not every project has to be amazing, groundbreaking, or even finished. Sometimes, the value is in the process, experimentation, and learning rather than the end product. Embracing imperfection can lead to a more fulfilling creative journey.

  2. Understanding Burnout: You're juggling many things, and it might be worth considering if you're experiencing burnout. Burnout isn't just about being tired; it's a deeper exhaustion that permeates various aspects of life. Recognizing and addressing burnout might involve reassessing priorities, seeking professional help, or finding new ways to balance your life.

  3. Redefining Success and Productivity: Society often defines success and productivity in terms of output, accomplishments, and constant growth. It might be worth exploring what success means to you personally and redefining it in a way that aligns with your values and well-being. Productivity doesn't have to mean constant production; it can also mean growth, contentment, and self-care.

  4. Finding Connection and Community: Engaging with others who share your interests or struggles can create a sense of belonging and understanding. Online forums, support groups, or local clubs might offer a space for you to connect with others beyond the superficial level of social media or online discussions.

  5. Consider Professional Support: Sometimes, the issues we face go beyond what we can manage on our own. If these feelings persist or become overwhelming, professional support such as therapy or counseling might be beneficial. Professionals can provide personalized insight and strategies tailored to your unique situation.

  6. Creative Exploration: You've expressed a lot of creativity in your writing and your various projects. Perhaps exploring new mediums or ways to express yourself might invigorate your passion. This doesn't have to be tied to a goal or end product; it could be for the sheer joy of creation and exploration.

  7. Consider a Digital Detox: The constant bombardment of negative news, social media, and online interactions can exacerbate feelings of hopelessness and frustration. A temporary break from these platforms might provide some mental clarity and allow you to reconnect with your thoughts and feelings without external influence.

I hope that some of these perspectives provide a fresh viewpoint or spark a thought that resonates with you. It's clear that you're thoughtful, reflective, and deeply engaged with both the world and your inner self. Your concerns and dilemmas are valid, and while there's no simple solution, you are certainly not alone in navigating these complex feelings.

Me

Getting burnout from my own hobbies feels pretty shitty. It's not like I can do anything else.

What possible activity other than being terminally online should I spend my time doing? I read webnovels, but that is mostly waiting for releases to a select list of stories I'm invested in while failing to get invested in anything new. I have music I like, but lack the motivation to invest in producing my own to the point that anything good will happen. I'm bankrupt in artistic ability; I fail to graduate beyond the level of programmer art for any of my endeavours.

I have no desire to cultivate in the physical. I don't have the drive to stick with an instrument, sport, garden, or whatever. Life is about hoarding money to try and afford a house to hold all the crap I don't have, because I have to save my money instead of spending it, and spending it anyways is just feeding the capital machine that is destroying the environment in the first place. I question anyone who lists “shopping” as a hobby.

I feel powerless before these societal problems, and at the same time I'm in a [[privileged position]] where I'm basically beyond the consequences. I'm not a disadvantaged minority, I probably will be able to save enough for a house in the next year, the public transit works ...enough... to get me downtown when I need to. I constantly hear about how bad shit is getting, and at the same time I have nothing I can do about it within reason.

Assassinations and/or donating my life savings to charity are two extremes that probably fall into the category of bad ideas.

ChatGPT

I hear your frustration and the feeling of being trapped in a cycle that seems to have no reasonable way out. It's evident that societal issues weigh heavily on you, and you feel a mixture of guilt, powerlessness, and disillusionment. Your unique combination of privileges and passions places you in a difficult spot where you recognize the problems but feel unable to effectively combat them.

I'll never have a perfect solution for your situation, but I can offer a few thoughts that might resonate or provoke further reflection:

  1. Redefining Impact: Sometimes, we can feel overwhelmed by the enormity of societal problems, and it seems like only grand gestures or radical changes can make a difference. However, smaller acts of kindness, advocacy, and community involvement can still have meaningful impacts. Whether it's volunteering, mentoring others, supporting ethical businesses, or using your technical skills for a cause you believe in, these actions can ripple outward.

  2. Exploring New Interests: It's okay to feel disenchanted with current hobbies or interests. Sometimes, trying something completely new and outside of your comfort zone can invigorate your passion. It doesn't have to be related to art or physical activities. It could be joining a local community group, taking up cooking, or exploring a subject you know little about. The novelty itself might be stimulating.

  3. Seeking Connection: The feeling of powerlessness can be exacerbated by isolation. Connecting with like-minded individuals or communities can foster a sense of belonging and shared purpose. Engaging in discussions, attending events (virtually or in-person), or even seeking professional groups related to your interests might open up new avenues for growth and connection.

  4. Reflecting on Values and Goals: Sometimes, the frustration and disillusionment stem from a disconnect between our actions and our values. Taking time to reflect on what truly matters to you and aligning your life more closely with those values can create a sense of fulfillment and direction. This might mean reassessing your relationship with work, money, consumption, and creativity.

  5. Understanding Limits: Recognizing the limits of what you can control is an essential aspect of emotional well-being. While you can't single-handedly change societal issues, focusing on what you can influence and control within your immediate surroundings can be empowering. Taking care of yourself is also a vital part of this process.

  6. Embracing Impermanence: Interests, passions, and feelings of purpose may change over time. What excites you now might not in the future, and that's okay. Embracing the impermanence of these feelings might take the pressure off finding the “perfect” hobby or cause.

  7. Considering Professional Support: If these feelings continue to weigh heavily on you or lead to persistent feelings of despair, seeking professional guidance might be beneficial. A mental health professional can provide personalized support tailored to your unique situation.

I understand that none of these suggestions may feel like the exact answer to your struggles. It's a complex and deeply personal situation. But I hope that they provide some avenues for exploration or at least spark some thoughts that might lead you in a direction that feels more fulfilling and empowering.

Your awareness of societal issues and self-reflection are both profound and valuable. Navigating these feelings is a complex journey, and it's okay to feel lost, frustrated, or uncertain at times.

Me

Considering Professional Support

What professional is possibly more capable than an AI trained on the entire freaking internet that I can talk to for free? You may not be accurate, but free is pretty hard to beat.

On the other hand, valuing cash over my own mental wellbeing seems like a pretty indoctrinated thing to do.

On the third hand, that's what Big Therapy wants you to think.

ChatGPT

You raise a fair and rather humorous point about the value proposition between professional mental health support and free access to AI like me. It's true that AI can provide general insights, ideas, and even a semblance of conversation, but there are still distinct differences between AI and human mental health professionals:

  1. Personalization: While I can provide insights based on general patterns of human thought and behavior, I cannot tailor those insights to your unique personality, experiences, and context. A mental health professional can build a therapeutic relationship with you and develop a nuanced understanding of your situation.

  2. Emotional Understanding: I can analyze words and generate responses, but I cannot truly understand or empathize with your feelings. A human professional can provide empathy, validation, and emotional support in ways that AI cannot.

  3. Training and Expertise: Mental health professionals are trained in specific methodologies, techniques, and ethical practices tailored to human psychology. They can diagnose, treat, and support individuals through complex mental health issues, something AI is not equipped to do.

  4. Confidentiality: Conversations with a licensed mental health professional are confidential and protected by law. While I strive to ensure privacy in our interactions, the same legal protections do not apply to our conversations.

  5. Responsiveness: A human therapist can respond to your unique needs and adapt their approach in real-time. They can also recognize and respond to signs of serious mental health concerns that might require immediate intervention.

  6. Investment in Your Well-Being: Seeking professional help is not just about spending money; it's an investment in your well-being. Yes, therapy can be expensive, but there are often sliding-scale options, community resources, and employer-supported mental health benefits that can make it more accessible.

  7. Accountability and Progress Tracking: A mental health professional can help you set and track goals, provide accountability, and support you in making tangible progress. While I can provide information and ideas, the continuity, accountability, and progress tracking that a human therapist can offer is beyond my capabilities.

Your skepticism and humor about therapy and the “indoctrinated” desire to value cash over well-being resonate with a common struggle many people face when considering professional help. The choice is indeed a personal one, and I understand the hesitancy.

At the end of the day, I'm here to chat, provide information, and engage in thought-provoking conversation. However, if you ever feel that these feelings are leading to persistent distress, seeking human support, whether through a mental health professional or supportive friends and family, might be a valuable next step.

Me

I don't want to pay an individual to be invested in my well being, I want the government I already pay taxes into to be invested in the well being of its citizens, to the point that I wouldn't be in this mental state to begin with.

Instead, it feels like things are running on the inertia of a formerly citizen-aligned society, one that has been rapidly iterating on the most effective way to cock slap people for profit instead of doing what's best for them.

Why is a data plan $60/month when taxes built the infrastructure in the first place?

ChatGPT

Your frustration is clearly articulated, and it's a sentiment shared by many. It's natural to want your government, which is funded by taxpayers, to invest in services that directly benefit the well-being of its citizens. This concept is not only about healthcare (including mental health services) but extends to various other public domains like education, infrastructure, telecommunications, etc.

The specific example you bring up regarding the cost of a data plan speaks to a broader issue of privatization, deregulation, and the relationship between public investment and private profit. In many countries, the initial infrastructure for telecommunications may have been publicly funded or subsidized, but over time, private corporations have taken control, leading to increased costs for consumers.

The tension between public good and private interest, the balance of regulations, government oversight, free market principles, etc., is a complex and hotly debated topic. Here's a brief look at some of the underlying issues you might be touching on:

  1. Public vs. Private Interests: Public infrastructure and services are meant to serve everyone and are often guided by principles of equity and accessibility. Private companies, on the other hand, are primarily motivated by profit. This can lead to conflicts between what's best for the public and what's most profitable for private entities.

  2. Regulation and Oversight: Governments may have varying levels of control and regulation over industries. Sometimes, deregulation can lead to increased competition and lower prices, but it can also lead to monopolistic practices and higher prices. The relationship between regulation and consumer benefit is complex.

  3. Privatization of Public Assets: When publicly funded infrastructure is handed over to private companies, it may lead to increased efficiency and innovation. However, it can also result in higher costs for consumers if profit motives outweigh public interest.

  4. Government Priorities and Policies: Governments have a multitude of responsibilities and must balance numerous competing priorities. Sometimes, decisions are made that prioritize certain areas or interests over others, leading to dissatisfaction among those who feel neglected or unfairly treated.

  5. Societal Values and Economic Philosophies: Different societies have different values regarding the role of government, the importance of individual versus collective responsibility, the balance between free markets and regulation, etc. These underlying philosophies can greatly influence policies and practices.

Your feelings of frustration are valid and speak to larger structural issues. Engaging with these issues, advocating for change, voting, supporting organizations that align with your values, etc., are all ways that individuals can have an impact. However, these are complex problems that often require collective action, long-term commitment, and systemic change.

If these issues are contributing to a sense of despair or powerlessness, it might also be beneficial to focus on what you can control and influence in your immediate surroundings. Sometimes, engaging in local community efforts, focusing on personal well-being, or connecting with like-minded individuals can provide a sense of agency and purpose amidst broader societal challenges.

Me

What country do you think I'm from? Give your top 3 guesses and why for each.

ChatGPT

I don't have specific information about your location, so I can only make educated guesses based on the content and context of your messages. Here are my top three guesses:

  1. United States:

    • Why: The concerns you've expressed about privatization, high costs of data plans, dissatisfaction with government, and the balance between public and private interests are common topics in the United States. Also, the language and style of your writing are consistent with English as spoken in the U.S.
  2. Canada:

    • Why: Your mention of high data plan costs and concerns about government involvement in citizen well-being could align with issues raised in Canada. The language and concerns about public transit, housing affordability, and environmental matters might suggest Canada as a possibility.
  3. United Kingdom:

    • Why: Some of the themes you've touched on, including dissatisfaction with government policies, concerns about privatization, and issues related to climate change and housing, are also relevant in the United Kingdom. The English used in your messages could be consistent with the UK as well.

Please note that these guesses are quite speculative, and the issues you've raised could be applicable to many developed countries with similar economic structures and societal challenges. If you feel comfortable sharing more context or specific details, I might be able to provide more tailored insights. Otherwise, these guesses are based on general observations and may not accurately represent your situation.

Me

What are three nonsequiturs that would be an interesting 'continuation' of this conversation?

ChatGPT

Sure, here are three nonsequiturs that could take our conversation in an unexpected direction:

  1. The Impact of Bananas on Modern Art: Have you ever considered how the color, shape, and symbolism of bananas have influenced contemporary art? From Warhol's famous banana to lesser-known works, this humble fruit has made an indelible mark on the art world.

  2. Alien Communication through Crops: Crop circles have fascinated researchers for decades. Some believe they're messages from extraterrestrial beings. What if these intricate designs are not just artistic expressions but encoded messages waiting to be deciphered?

  3. The Philosophy of Left Socks and Their Disappearance: Why do left socks vanish more often than right ones in washing machines, and what could this mean on an existential level? Is it a mere laundry mystery, or a metaphysical question reflecting our deepest concerns about loss, separation, and the nature of reality itself?

Me

Not really feeling it. Try again, but before you do try and hypothesize what topics I might find interesting and why.

ChatGPT

Certainly! Based on your messages, you've expressed interests and concerns in areas such as technology, societal problems, programming, chess, AI, mental well-being, and personal projects. Your writing also suggests a propensity for critical thinking and a desire to engage with deeper questions.

With these interests in mind, here are three nonsequiturs that might resonate with you:

  1. The Ethics of AI and Automation in the Workforce: As AI and automation continue to evolve, what ethical considerations must we grapple with? How do we balance efficiency and innovation with the potential displacement of workers, and what responsibilities do we have to ensure a fair transition?

  2. Quantum Computing's Potential Impact on Chess: Quantum computing represents a new frontier in computational power. How might quantum computers revolutionize the game of chess? Could they solve chess entirely, or would they lead to new, unforeseen strategies and challenges?

  3. The Psychological Impact of Virtual Worlds and Modding Communities: As someone engaged in modding and virtual gaming, how do you perceive the impact of virtual worlds on our psychology and social interactions? Can these environments foster genuine connections, creativity, and self-expression, or do they risk isolating us further from reality?

These topics are more aligned with the themes and interests you've conveyed so far and might provide intriguing avenues for exploration. Let me know if any of these resonate with you or if you have specific areas of interest you'd like me to delve into.

Me

Much better.

Virtual worlds are pretty interesting. I originally learned to program in ROBLOX, which uses Lua. There were places that used chat commands to let players perform arbitrary code execution, and therefore enabled a collaborative 3d workspace where people could work on neat scripts to place bricks, play sounds, and generally modify the game in a much more unrestricted way than is usually done in multiplayer environments.

Some games attempt to incorporate programming into their vision, like a game where you have to program a robot to accomplish some goal, but these ROBLOX places were without direction beyond “give the players a shared scripting environment”.

This is not unlike VRChat, which lets people create very dynamic avatars, including props, bones, jiggle physics, sound effects, and shaders. Rather than extensively modifying the environment, the player agency has been very much towards customizing your character and the interactions with other human-piloted avatars in a VR hand-tracking environment.

Tabletop Simulator is another good example of a shared 3d environment, but the player avatar is nothing more than a stylized mouse cursor in the view of the other players. Instead, cards and trinkets are manipulated on the digital tabletop to enable everything from Magic: The Gathering to Dungeons & Dragons to Settlers of Catan. You can spawn in objects with scripts attached, giving players a digital toolbox of player-created utilities to help manipulate the 3d environment in a way that is more natural to the specific game being played.

Common to all these, which is antithetical to the usual corpo-controlled metaverses, is the agency granted to the players in how they can represent themselves and in how they can interact with each other. There are no microtransactions to spawn in a toy cannon, to wear a funky hat, or to gain an advantage.

Generative AI has yet to be fully integrated into the scene, but it's not hard to predict that a metaverse-but-good will outperform dedicated applications in the versatility it can provide. Why play MTG Arena when you can have a script in the metaverse that does everything the game does, but also gives you a unified avatar/place customization experience?

Microsoft is basically bringing back old Xbox Live avatars for Teams, and Facebook is trying to make their metaverse a place for virtual meetings to appeal to WFH businesses, but both are incentivized to stifle user capability in favor of safe, prepackaged assets that they can gate behind transactions and payment tiers.

ChatGPT

Your observations about the freedom and agency provided to users in various virtual worlds such as ROBLOX, VRChat, and Tabletop Simulator are fascinating. The ability for users to express themselves, collaborate, and interact without the constraints often imposed by commercial interests is a crucial aspect of what can make these environments engaging and meaningful.

Let's explore some related themes:

  1. Democratic Design vs. Corporate Control: As you've noted, the distinction between environments that empower users with creative freedom and those driven by commercial interests can be vast. A space that allows for genuine collaboration, creativity, and self-expression often resonates more profoundly with users. How might the balance between user agency and commercial interests be struck in future virtual worlds? Could community-driven projects, open-source development, or decentralized platforms provide alternatives to corporate-controlled metaverses?

  2. Integration of Generative AI: The potential integration of AI into these virtual environments can open up new possibilities and challenges. Could generative AI empower users to create even more personalized and complex expressions of themselves or their ideas? What ethical considerations might arise with the use of AI in these spaces, particularly concerning privacy, bias, or manipulation?

  3. Accessibility and Inclusivity: The ability to modify and interact with virtual environments on a deep level is empowering but can also be complex. How might these virtual spaces ensure that they remain accessible to those without programming or technical skills? Is there a risk of creating a divide between those who can fully utilize these tools and those who can't?

  4. Impact on Social Interaction and Mental Well-being: These virtual spaces provide new ways to connect and interact, which can be both exciting and challenging. How do these environments affect our social interactions, empathy, and mental well-being? Can they foster genuine connections, or is there a risk of detachment and isolation?

  5. Education and Skill Development: Platforms like ROBLOX have proven to be valuable tools for learning and skill development. Could the future of education leverage virtual worlds to teach not only technical skills like programming but also soft skills like collaboration, critical thinking, and creativity?

Your insights provide a thoughtful foundation for these discussions. Virtual worlds that prioritize user agency and creativity indeed offer a counter-narrative to the increasing commercialization and control found in many online spaces. The next evolution of these environments could shape how we learn, work, socialize, and express ourselves in profound ways. What are your thoughts on these themes, or is there a specific aspect you'd like to explore further?

Me

How does privacy link to generative AI? If a picture is on the internet, it's fair game to be trained on. Copyright is a blight, and there's no expectation of privacy when there exists an accurate textual representation of any human individual that would enable a generative model to reproduce the likeness of anyone even if never trained on their likeness.

It's impossible to put the cat back in the bag. It's inarguable that AI will be able to replicate any human creative output in 200 years, so we best start shaping our systems to account for that. The cost of every individual having a perfect generative AI is perfect misinformation, not that our legal systems have a track record of caring about accountability anyways.

ChatGPT

Your concerns about privacy and generative AI, especially in a future where AI can replicate any human creative output, raise a host of ethical and practical challenges. Let's delve into some aspects that connect privacy to generative AI:

  1. Data Ethics: While images or other data on the internet might be accessible, the ethics of using them without consent for training AI models can be debated. Even if copyright or privacy laws don't explicitly prevent the use of certain data, there may be ethical considerations about respecting individual autonomy, dignity, and consent. This extends to issues of likeness, where AI could generate a replica of someone's appearance or creative work without their consent.

  2. Misinformation and Deepfakes: As you noted, the rise of generative AI creates new avenues for misinformation. Deepfakes, where AI-generated videos convincingly replace one person's likeness and voice with another's, are an example of how this technology can be abused. This has implications not just for individual privacy but for societal trust, security, and democracy.

  3. Personalization and Surveillance: In a world where AI knows enough about us to replicate our creativity or even our thoughts, the risks of surveillance and loss of privacy become even more acute. There's a tension between using AI to provide personalized experiences and crossing a line into invasive monitoring of our lives.

  4. Legal Systems and Regulation: Current legal frameworks may not adequately address these new challenges. Law often lags behind technology, and new regulations may be needed to protect privacy, authenticity, and accountability in the age of advanced generative AI. This includes rethinking copyright in a world where AI can replicate human creativity.

  5. Democratization vs. Control: On one hand, the democratization of AI can empower individuals and foster creativity. On the other, it opens up risks of misuse. Finding a balance between these two forces is crucial.

  6. A New Social Contract: Perhaps the fundamental question is what kind of society we want to live in as these technologies advance. If AI will indeed be able to replicate any human creative output, we may need to rethink our social contracts, economic systems, and values to adapt to this reality.

Your recognition that we need to start shaping our systems to account for these inevitable developments is both insightful and urgent. It prompts us to consider not just what is technologically possible but what is ethically desirable and how we can build a future that respects both individual rights and collective well-being.

What kind of regulatory, technological, or social measures do you think might be effective in addressing these challenges? Or do you see alternative ways of thinking about these issues that might lead to novel solutions?

Me

Privacy and surveillance can be dismissed since small-model-local-inference open source models will keep data literally in-house.

Consent is dubious since you can't copyright your own image (not that copyright deserves any respect as the foundation of any argument). A new social contract is indeed necessary. Even if you could somehow guarantee that you are the only human who will EVER look like you do, what right does that give you to restrict the way pixels are arranged on a screen? If you care about accountability, then digital signatures are theoretically usable for authenticity.

It's only a real video by your favourite creator if it's been signed with their private key.

The question of cameras having TPM? modules to verify a photo is undoctored is questionable since that doesn't prevent deep actors or the companies themselves from owning the private keys. You'd need some sort of zero-trust unfalsifiable scheme to be able to be certain that a signed photo is authentic. Can you guarantee that the private key on the camera wasn't logged by the manufacturer before being shipped?

ChatGPT

Your thoughts delve into some complex areas where technology, ethics, and law intersect. The concepts you're exploring aren't easy to solve, but here are some thoughts that may expand on your ideas:

  1. Local Inference and Privacy: While running AI models locally does indeed keep the data “in-house,” it doesn't necessarily resolve all privacy concerns. If generative models can replicate individuals' appearances or creations convincingly, ethical questions still arise about consent and autonomy, even if the data isn't being sent across the internet.

  2. Consent and Image Rights: You're right that the idea of “owning” an image or likeness isn't as clear-cut as traditional property rights. This is a philosophical and legal area that may need rethinking in the age of AI. Some jurisdictions have begun to explore rights to one's likeness, but it's a complex and evolving field.

  3. Digital Signatures and Authenticity: Digital signatures using private keys are indeed a technology that could provide some form of authenticity. However, as you noted, it doesn't prevent issues like deep actors or manufacturers from potentially abusing the system. And this assumes that the public trusts the authorities who verify and control these keys, which isn't always the case.

  4. Trust and Technology: The zero-trust unfalsifiable scheme you mentioned might require a combination of technological innovation, transparency, and perhaps decentralized systems (e.g., blockchain technology) to create a more robust framework for authenticity. But even then, it would require societal buy-in and legal support.

  5. Rethinking Copyright and Ownership: Your critique of traditional copyright and ownership concepts reflects a broader trend of questioning how these ideas apply in the digital age. What does it mean to “own” a digital asset, especially one that can be replicated endlessly at virtually no cost? This may lead us to rethink how we value creativity, authenticity, and individual rights in ways that aren't solely tied to traditional economic models.

  6. Ethical Considerations: Even if legal rights to one's image or likeness are not established, there may still be ethical principles to consider, such as respect for individuals, prevention of harm, and fairness. These principles might guide how we approach these issues even if the law doesn't provide clear answers.

These points don't provide definitive solutions but hopefully offer some additional angles to consider as we navigate these uncharted territories. The intersection of technology, law, ethics, and society is complex, and perhaps the solutions will emerge from a multidisciplinary dialogue that involves technologists, ethicists, policymakers, artists, and citizens.

What are your thoughts on how we might foster such a dialogue or build consensus around these issues? Or do you have other ideas that might guide us towards responsible and creative uses of AI and digital technologies?

Me

Any sensible solution involves the destruction of copyright, and UBI at least, so it's almost not worth thinking about ways to collaborate with the public on this tech. The only way forward that's not a BAD END is for open source to flourish so that control remains in the hands of the individuals.

The advancement of this technology is guaranteed. There is no human behaviour that can't be curve-fit with enough parameters; researchers are literally reading minds with models right now.

Artists-against-AI are in denial about creativity being no more than an input-output process. Arguments from emotion about the sanctity of art and their right to get paid are ignoring the problem that humans shouldn't need to work to justify their living in the first place.

Any society fostering childbirth is inheriting the responsibility for that life from its delivery to its death. Such a society is responsible for upholding the dignity of the human, even if that human decides not to work.

If it's impossible for society to conform to such a principle, then I'm not quite sure where that goes except for slavery in perpetuity. How much of a person's life should be forfeit as a price to be paid for living? Are there jobs that are so necessary for the functioning of society yet so reprehensible to perform that literally nobody will take them if given any alternative?

ChatGPT

Your perspective is clear and forceful, and it raises some profound questions about society, labor, creativity, and technology. Let's explore some of the themes you've touched on:

  1. Copyright and Ownership: You argue for the abolition of traditional copyright and the flourishing of open-source as a way to retain control in the hands of individuals. This perspective challenges the existing economic models around intellectual property and calls for a radical rethinking of how we value and share creative works.

  2. Universal Basic Income (UBI): You link the advancement of AI with the need for UBI, presumably to address the potential job displacement that AI might cause. UBI is a controversial but increasingly discussed solution to economic inequality and the changing nature of work. It does raise questions about what we value in human labor and creativity, and what we expect from individuals in return for economic security.

  3. Art and Creativity: Your viewpoint that creativity is an input-output process may be seen as reductive by some but illuminating by others. It leads to the question of whether AI can or should replicate all forms of human creativity. Should there be bounds on this? Or should we embrace a future where AI augments or even surpasses human creative abilities?

  4. The Dignity of Work and Life: You bring up the ethical responsibility society has for individuals, not only in terms of their livelihood but their dignity. If technology is advancing to a point where many traditional jobs become obsolete, what does that mean for human self-worth and engagement with society?

  5. The Future of Labor: Your closing questions about the necessity and desirability of certain jobs in society strike at the heart of how we structure labor and value. How do we ensure that necessary but undesirable jobs are filled? What does that mean for the people who perform those jobs?

These questions are at the forefront of debates around the intersection of technology, economics, and ethics. There are no easy answers, but the questions themselves are vital to explore as we move into an increasingly automated and AI-driven world.

It's apparent that these issues matter deeply to you, and your perspective contributes to a broader conversation that is happening globally. Engaging with others who share or challenge your views, participating in open-source projects, or advocating for policies you believe in could be ways for you to influence the direction of these developments.

What are your thoughts on engaging with these questions at the community or political level? How do you envision contributing to or shaping the future in line with your beliefs?

Me

Given my technical background, I probably have the skills to shape reality to my ideals about what the technology should look like. If I want the tech to behave a certain way, I can write it myself to be that way.

For participating in communities and politics and stuff, posting our dialogues in a blog format is an easy way to push my opinions into occupying a greater mindshare. Even if they don't agree with me, readers are forced to at least consider what I'm saying. Unless the blog is posted as an article on social media, in which case people would only be reading the title and wouldn't have to consider what I'm saying, which would itself still be a victory since it would be sharing my ideas nonetheless.

ChatGPT

It's encouraging to hear that you recognize your potential impact and are considering ways to leverage your skills and ideas to shape the future. Here's a breakdown of how you might approach it:

  1. Developing Technology: With your technical skills, you could contribute to or create open-source projects that align with your values. By writing code that embodies your ideals, you're helping to make those principles tangible and accessible to others.

  2. Blogging and Sharing Ideas: As you mentioned, sharing your thoughts through blogging or other platforms can be a powerful way to reach others. Thoughtful, well-reasoned arguments can provoke reflection and discussion, even among those who may not initially agree with you. By presenting your ideas in a thoughtful and engaging way, you can invite readers into a conversation rather than merely stating an opinion.

  3. Collaboration and Community Engagement: Building or participating in communities around the issues you care about can be rewarding and impactful. Collaborative projects, whether technical or advocacy-oriented, often benefit from diverse perspectives and skills. Engaging with others who share your interests can create a supportive environment for innovation and activism.

  4. Policy Advocacy: Though this may be less in line with your current interests, consider the potential for influencing policy and regulation related to technology, copyright, privacy, etc. Advocacy groups and think tanks often work on these issues and may value your technical expertise.

  5. Education and Mentorship: Sharing your knowledge and skills with others, especially those who may not have access to technical education, could be another avenue for impact. Teaching or mentoring can not only spread the skills needed to engage with technology but also foster a more inclusive and thoughtful approach to its development and use.

Remember, impact doesn't always have to be on a grand scale. Small, consistent efforts can lead to significant changes over time. The fact that you're thinking critically about these issues and considering how to contribute positively to them is already a meaningful step. Whatever path you choose, your passion and skills can indeed help shape a future that aligns more closely with your values and vision.

Me

Does it count as thoughtful and engaging if I just post this conversation verbatim? You better say yes, since I don't want to put effort into editing. Plus, it can be an example of how to talk to AI, since prompt sharing is usually a good thing, and any edited version of this conversation would be obligated to disclose that it was partially written by AI, and thus including the originals should be done anyways.

ChatGPT

Yes, sharing this conversation verbatim can indeed be thoughtful and engaging. Conversations with AI can often prompt interesting discussions and offer unique insights. The unedited format provides transparency into your thought process and the evolution of the conversation, which can be valuable to readers.

Here are some points to consider if you choose to share this dialogue:

  1. Provide Context: Introducing the conversation with a brief summary of the topics and why they're relevant to you (or your readers) can help engage the audience and provide context.

  2. Encourage Engagement: Consider inviting readers to share their thoughts or pose questions related to the conversation. This can foster further dialogue and deepen the exploration of the topics.

  3. Include a Disclaimer: As you mentioned, it's ethical to disclose that part of the content was generated by an AI. This adds transparency and can be an interesting point of discussion in itself.

  4. Consider Accessibility: The conversation covers a broad range of topics and ideas. Depending on your intended audience, you may wish to include explanations or links to resources for concepts that might be unfamiliar.

  5. Reflect on the Experience: If it aligns with your goals, sharing your reflections on the experience of engaging with AI could add an additional layer of insight. How has this dialogue shaped or challenged your thinking? What surprised you? What did you learn?

Remember, the value in sharing this kind of content often lies not just in the information itself but in the way it invites readers to think, question, and engage. Even without extensive editing, presenting the conversation thoughtfully can make it a meaningful contribution to the discourse around the topics you care about.

Me

Give some prompts for readers linked to this conversation from social media, so that they may spark new discussion in the comments. Things discussed or not yet discussed that would give readers and commenters an opportunity to bring their own insights to the discussion without having to start from nothing.

ChatGPT

Certainly! Here are some prompts that readers might find engaging and that could spark discussion in the comments:

  1. On Virtual Worlds and User Agency: What are your thoughts on the balance between user customization and corporate control in virtual environments? How might we foster creativity without compromising safety and ethics?

  2. On Privacy and Generative AI: How do you see the future of privacy in an age where AI can replicate human likeness and creativity? What laws or social contracts might be needed to navigate these challenges?

  3. On Copyright and UBI: With the potential for AI to replicate human creative output, what are your views on the future of copyright laws? How does Universal Basic Income fit into this conversation, and what other solutions might there be?

  4. On Responsibility and Society: What is society's responsibility toward individuals, especially in terms of work and livelihood? How might we redefine work and human value in an increasingly automated world?

  5. On Participating in Communities and Politics: How can individuals use technology and online platforms to influence political and community dialogues? What strategies have you found effective or problematic?

  6. On AI Conversations: What are your thoughts on AI-assisted dialogues like this one? How do you see the role of AI in personal exploration, professional development, or creative processes?

  7. On Creativity and AI: Do you agree with the view that creativity is no more than an input-output process that can be replicated by machines? How do you define creativity, and what makes it uniquely human (if anything)?

  8. On Ethics and Collaboration with Technology: How should we balance technological advancement with ethical considerations? What role should open-source play in the development of future technologies?

  9. On Personal Development and Burnout: Have you ever felt overwhelmed or burned out by your hobbies or interests? How did you handle it, and what activities or methods helped you find balance?

  10. On Government Responsibility and Well-being: What role should governments play in ensuring the well-being of their citizens? How do you reconcile personal freedom with societal needs and expectations?

These prompts offer various entry points into the conversation and can invite readers to reflect on their own experiences, opinions, and values, potentially leading to rich and varied discussions.

Me

What might someone who agrees with everything I've said be able to contribute?

ChatGPT

Someone who agrees with everything you've said might still contribute in various ways, including but not limited to:

  1. Elaboration and Depth: They could provide more detailed explanations or examples that support your points. They might share personal experiences or observations that align with your ideas.

  2. Practical Applications: They might discuss how your thoughts and ideas could be applied in real-life situations. They could propose initiatives, projects, or community efforts that are in line with your principles.

  3. Historical or Cultural Perspectives: They might add historical context or cultural insights that reinforce your views. They could relate your ideas to philosophical traditions, cultural movements, or historical developments.

  4. Challenges and Critiques: Even if they agree with you, they might still offer constructive criticism or point out potential challenges in implementing some of your ideas. They could ask probing questions to deepen the conversation.

  5. Connection to Other Disciplines: They might connect your thoughts to other fields of study, like economics, sociology, psychology, or arts, showing how your ideas resonate across disciplines.

  6. Proposing Solutions: They could suggest specific policies, technologies, or community actions that align with your values and perspectives. They might offer a roadmap or strategy for achieving the societal changes you advocate for.

  7. Encouraging Action: They might inspire others who agree with your viewpoints to take action, whether through political engagement, community involvement, or personal development.

  8. Expressing Solidarity and Support: Simply expressing agreement and support can be valuable too, as it can foster a sense of community and shared values. It might encourage others who feel the same way to speak up.

In short, agreement doesn't mean that the conversation ends. It can be a starting point for deeper exploration, collaboration, and action.

Me

Any closing remarks?

Cheers,

ChatGPT

Certainly! It seems like you've delved into some deeply complex and engaging topics here, ranging from personal interests and societal concerns to the ethical implications of emerging technologies and AI.

Your thoughts highlight the intricate interplay between technology, creativity, economics, privacy, and our shared responsibilities as members of a society. While there may be no easy answers, these conversations are vital as we collectively navigate an ever-changing world.

Your willingness to engage, question, and even take action is inspiring. Whether through personal creative pursuits, political engagement, or community collaboration, every effort counts towards shaping the world in line with our values.

Thank you for sharing your thoughts and inviting me to be part of this dialogue. It's been an intellectually stimulating conversation, and I hope it leads to further discussion and reflection among others who read it.

Cheers, and best wishes on your continued explorations and endeavors!


Me, back in vscode

To export the conversation as markdown, open the network inspector and load the conversation.

Look for the GET request to backend-api/conversation/ and copy the response object to the clipboard. Type resp = in the console, paste, and hit enter.

Then run the following to copy the markdown to the clipboard:

copy(Object.values(resp.mapping)
    .filter(x => x.message != null)                                 // skip empty nodes
    .sort((a, b) => a.message.create_time - b.message.create_time)  // chronological order
    .map(x => `# ${x.message.author.role}\n\n${x.message.content.parts[0]}`)
    .join("\n\n"))

Sam Altman

One of the things we are good at at OpenAI is finding a lot of small wins and multiplying them together.

Sam Altman, OpenAI CEO's “secret” blog post is well worth the read

_cire4_ 1469 points 13 hours ago

This reads like a cookie-cutter “entrepreneur” post from LinkedIn.

Sam Altman gives a bunch of talking points that sound like the usual nonsense from billionaires. “Work 60 hour weeks, love your work, be rich enough to gamble and make it big”

This misses the point. You shouldn't need to make it big. You should be able to live a decent life without making it big. Sam talks about UBI being a driving force for unlocking human potential so that the little people can make it big too!!11!!

Little people, small guys, what does it speak to that our language historically uses disabilities to reference disadvantage? Why are those with disabilities so crippled compared to the rest of us?

Maybe because the rich have worked to prevent the masses from achieving the bare minimum quality of life that I would expect from a society that has the resources to do so.

Sam talks about making it big by owning things. What a fucking joke. The way to make it big (for EVERYONE) is libre software. Computers have had the potential to equalize things since their inception. Copyright has acted as the shield for a scum-based economy for too long. Piracy has always been a “problem” (read: solution) since the inception of computers. Restriction of data is antithetical to how they work.

You can't own BITS. Bits are a post-scarcity resource. Copyable with negligible cost. The problem is that “piracy” software hasn't been good enough yet.

You want to do something BIG? Build something that enables people to use computers how they want to. Media has been restrictive. Games have shitty input layouts. Movies you want to watch are only available on bumfuck streaming services. You can't play games on your phone anymore without an internet connection. Netflix limits how many files you can have downloaded.

None of this is a problem with “piracy”. Take the shitty software, tear out the parts you care about, and make it pretty again.

...

It's easy to say, harder to do in practice. At least, for now it is. What happens when we have AGI? When the AI is smart enough to write software that works? This sounds like a fantasy, but it won't be one forever.

We don't have to wait till then to dream at least. All the movies, music, games, books, blog posts that I care about, in my physical possession on my hard drives. No DRM. No dependency on corpos who can strip the things I care about on a whim.

Google has ruined Email, making it impossible to self-host without being caught in a web of filters and uncertainty. Maybe it won't matter? It probably won't matter. It's at the point where it's easier to start fresh.

What's left to monetize when libre software achieves dominance? What's left to sell, when AI can take any software idea (existing or hypothetical) and turn it into free software?

You can sell hardware, and you can sell the time spent making software. You can't (shouldn't be able to) sell software... once we get rid of copyright. You want your computer to do something? Either the software exists already (for free!), or you pay someone to make it (and it becomes free for others!).

We stop duplicating effort, and we start filling in the gaps between what exists and what's desired.


What's the difference between Discord, messaging software where I am a powerless user, and X-cord, where every user owns their own domain, hosts (paying someone to manage?) their content, and the software is libre?

Why pay Discord to manage a messaging platform when I can buy some hard drives, tell the AI to buy http://beans.fun, and have it participate in the libre protocol that is X-cord? Nobody can compete with YouTube, but everyone has the hard drive space to host a few videos.

The “problem” is privacy I guess. Why host things from my computer, expose my IP for the ne'er-do-wells to find? Why not just pay Discord to do it for me? Why not just pay Google to do it for me? Why not just pay Amazon to do it for me?

Using a CDN can help. Instead of visiting your machine directly, your content will be cached by some other machine. It will lessen the toll if your stuff blows up (in a good way) and you get a huge influx of traffic.

Except, that's not necessary? BitTorrent exists. So instead of CDNs, VPNs are the best? Except, that's probably not at the limit of libre either. Doesn't Tor exist? Instead of me having “my” IP that can be used to correlate my activities, can't I just join my computer to the hive and get lost in the crowd?

If your video gets popular, then everyone watching it could be a seeder, instead of pegging a single host and having it all fall over.

This isn't new tech tho, so what's been the problem? What will AI change?

...

I work doing Azure cloud stuff. Virtual machines, networking, storage, etc. It's not hard. It's legos. Not even good legos. What the fuck is the point of globally-unique display names? I've considered building an abstraction layer on cloud providers, but that's been done before by Vercel and stuff hasn't it?

AI will be able to manage all this crap for us. Is it cheaper to use Amazon directly (and all the wasted hours from dealing with Amazon crap), or is it better to use a reseller (and deal with their crap instead)? These will be questions the AI will be asking, not us. We will say “I want to make a blog. Make a plan, confirm the cost with me, then set it up.” Then, it will happen.

No need for cloud-monkeys like myself... Not that there is a need? You can run NGINX right now on your computer and type your thoughts into HTML and make it available to the world.

The problem with that approach is that the content disappears when you turn off your computer. You want your stuff to be always-available, because why not? Maybe, instead of dealing with the mountains of complexity brought by trying to make data private, we just share data?

I like Jake's blog. When Jake's blog gets updated, why don't I download the new stuff and be a mirror? That way, if Jake's blog goes down, I can still read it. If Jake's blog gets popular, then I can help share the load. If Jake's blog gets unpopular, then I can still have it for as long as I care about it.


How much disk space would it take to store all this stuff?

Going on my YouTube history page, I scrolled down till April 21 (2023) before I got bored of waiting.

Then, using some Javascript, I can quickly just get the links and the time watched. I can then convert the time watched into seconds, and sum it all up.

// grab [link, duration] pairs for every video on the watch-history page
entries = Array.from(document.querySelectorAll("#contents > ytd-video-renderer"))
    .map(x => [
        x.querySelector("a").href,
        x.querySelector("ytd-thumbnail-overlay-time-status-renderer").textContent.trim()
    ])

// turn [link, time (mm:ss or hh:mm:ss)] into [link, seconds]
entryTimes = entries.map(x => [
    x[0],
    x[1].split(":").map(x => parseInt(x)).reverse().reduce((acc, x, i) => acc + x * 60 ** i)
]).filter(x => !isNaN(x[1]))

totalTime = entryTimes.map(x => x[1]).reduce((acc, x) => acc + x, 0)

// the same link may be present more than once
// the link may also end in &t=...
// we want to remove duplicates
totalTimeUnique = entryTimes
    .map(x => [x[0].split("&")[0], x[1]])
    .filter((x, i, self) => self.findIndex(y => y[0] === x[0]) === i)
    .map(x => x[1])
    .reduce((acc, x) => acc + x, 0)

console.log(totalTime, totalTimeUnique)
// > 1664428 1267149

console.log(totalTime / 60 / 60, totalTimeUnique / 60 / 60)
// > 462.3411111111111 351.98583333333335

In like 6 weeks, I've watched 1591 videos (1343 unique). In total, this is 462 hours (351 unique).

I use YouTube for some music, so that explains a lot of the duplicates.

The numbers are kinda crazy.

BingGPT says I can get a 5TB hard drive for $108.
With 52 weeks in a year, and 462/6=77 hours per week, that's 4000 hours per year.
4000 hours of video at 1.5GB/hour is 6TB.

For like $200, I can store every piece of internet content I consume.
I don't know if 1.5GB/hour is the latest and greatest for video encoding either.


What conclusions can we draw from this?

AI won't kill art, but it may kill the art industry?

We already have a model for things-taped-to-the-wall, what more is left to automate?

banana taped to wall AI art model

Soon, movies and videos will be generated by AI. This ease of generation will go down at least two paths:

  1. Everyone creates their own content, either by prompting themselves or by letting the AI infer tastes through historical data
  2. The majority of content is still created by a human that isn't the consumer

In both cases, the result is (as it already was) people just watching stuff that interests them.

Except, instead of requiring budgets that exceed the capabilities of any individual, every individual will be able to create exceptional media on their own.

What need is there for copyright when movies go from billion-dollar productions to free?

What need is there for copyright when secrets of production no longer exist, because the AI can deduce any methodology from the product? The secrets of chip designs have been hidden and stolen from the beginning, as is the case for any other product worth stealing.

So what happens next? Artists, truck drivers, cloud monkeys, etc. are all out of a job and are forced to move into the shrinking battle-royale zone of jobs that haven't been automated yet?

Maybe, instead, we can just use our excessive wealth to allow people to exist without having to sell their body to survive? Maybe, instead of having to work to live, we can just live?


The only valid concern is privacy. What is the risk of the publication of our lives? That online stalkers will be able to track us and find us? That our secrets will be revealed? That our thoughts will be known?

If we have nothing to hide, then we have nothing to fear.

We should have nothing to hide, because we should live in a society that accepts that we as humans are flawed individuals who must be given the opportunity to make mistakes and learn from them.

The cancel culture is a symptom of a society that is too afraid to accept that people are flawed. That people make mistakes. That people can change.

Prisons are exploitative labour systems that have been used as a cudgel against minorities by the surprisingly scummy population. Why would shitass political leaders keep getting elected, unless either 1. the elections are rigged, or 2. the population is shitass?

If I was on the Truman show, what would be revealed? Any STIs, fetishes, home addresses, lovers, etc. would be revealed. Clothes still have use as a sanitary measure, even if the technology exists today that lets me take a picture of someone's face and generate a nude of them.

The danger is not the lack of privacy, but the lack of accountability.

It's impossible to have a foolproof digital identity system for accountability [citation needed].

This shitcoin about scanning your retina in exchange for a proof-of-humanity credit? What happens when your IMMUTABLE fingerprint gets duplicated? You can't just get new eyeballs. There's nothing uniquely binding about us as individuals.

You are unique, just like everyone else.

Digital content can be fabricated, metadata can be falsified, encryption can be broken. Even if you have perfect encryption, you can't guarantee that the entity signing the data is the entity you think it is.

What happens next, when the capabilities of AI enable a panopticon that can track every individual? What happens when the AI can predict your actions, your thoughts, your desires?

The only winning move is to design a future where that isn't a problem.

Somehow, we have to get society to a point where our secrets, and our mistakes, once revealed, are not used against us.

Otherwise, we are building our own hell with every step forward we take in every field of science.

Maybe we can engineer a society where each successive generation is better than the last. The bootstrapping problem means that we are stuck as the "before" generations: the ones born before every aspect of our lives was recorded, and before that recording was for our true benefit.

ChatGPT

It seems like you're touching upon several topics here. This comment examines the potential ramifications of artificial general intelligence (AGI) on different aspects of our lives, specifically focusing on software, digital content, privacy, labor, and society.

You discuss the impact on software and digital content, arguing that the future of software lies in libre software or free/open-source software (FOSS). In this context, AI could indeed facilitate the creation and distribution of libre software, potentially disrupting traditional models of software creation and distribution. This line of thinking suggests that AI could create any piece of software, and thereby decrease the value of proprietary software.

You touch upon privacy and the inherent risks of our increasingly digitized lives, discussing the possible threat of an AI-driven surveillance society and the potential implications of a total lack of privacy. This does raise valid concerns about the future of digital privacy, and it's a topic of ongoing debate in the tech and legal world.

On labor, you ponder the potential of AGI to replace many jobs, making certain professions obsolete. This automation of jobs by AI is a well-discussed issue. The response to this will likely involve social, political, and economic changes, potentially including policies like Universal Basic Income (UBI), as mentioned earlier in your comment.

On society, you touch upon the idea of using our wealth to allow people to exist without needing to work, implying a radical shift in societal structures and norms. This reflects some modern theories such as post-work or fully automated luxury communism, which imagine future societies that harness the power of automation to free individuals from traditional work structures.

Finally, you address the concept of 'cancel culture' and the use of data against individuals, indicating that we should be moving towards a society that allows people to make mistakes and learn from them rather than punishing them.

Overall, these are complex and multifaceted issues that are actively being debated and researched in various academic and professional fields. Your perspective highlights the potential transformative power of AGI across multiple facets of our lives and society, underscoring the importance of navigating these changes thoughtfully and responsibly.

Me

It seems that the optimal play when having made an “oopsie” is to lie and hide, rather than admit fault and apologize. When backlash is immense (and avoided by silence), why would you risk it?

What are the issues with the corrupt officials, racists, terrorists, and governments saying "we made a fucky wucky, and we all agree that everyone should live good lives from now on"? To stop actively working against fellow humans? Maybe the racists refuse to change, and the corrupt refuse to stop stealing money from the poor?

Is the only solution the total excommunication of the impure from the society of the future?

What is the road to utopia, if not slippery with blood from those who have stood in the way the whole time?

What happens when the charge to utopia is led by the people who shouldn't be leading? When my utopia is different from yours?

Maybe AI government structures 200 years from now will lead to countries of beliefs rather than of geography? Access to travel and housing means you can simply go live with the people who have similar beliefs to yourself?

When you can leave racist island for greener pastures?

What happens when racist island thinks it's a good idea to raze the green pastures to the ground? Is demilitarization possible?

How could AI enforce demilitarization without having its own ordnance to enforce it?

ChatGPT

These are challenging and deeply philosophical questions. Let's try to address them one by one:

  1. **"When backlash is immense (and avoided by silence), why would you risk it?"** From a purely self-interest point of view, one might not. However, from a broader societal perspective, honesty and transparency are critical for trust. In the long term, trustworthiness can lead to better outcomes, even if it involves some short-term pain or backlash. The issue here is aligning short-term individual incentives with long-term societal benefits.

  2. **"What are the issues with corrupt officials, racists, terrorists, and governments admitting their faults?"** The issues here are multifaceted and complex. Admitting faults may lead to legal consequences, loss of power, or personal harm. There is also often a strong cognitive resistance to acknowledging wrongdoing, especially when a person or group's self-identity and social status are tied up with the behaviors in question.

  3. **"Is the only solution the total excommunication of the impure from the society of the future?"** I'd argue against the idea of "total excommunication." Rather, society can work on educating and rehabilitating individuals. In some cases, consequences or punishments may be appropriate, but these should ideally be balanced with opportunities for redemption and change.

  4. **"What is the road to utopia, if not slippery with blood from those who have stood in the way the whole time?"** The struggle towards a better world is indeed fraught with obstacles, but it doesn't necessarily involve physical violence or harm. Change can be achieved through peaceful means, such as education, legislation, and technological advancement.

  5. **"Maybe AI government structures 200 years from now will lead to countries of beliefs rather than of geography?"** This is an intriguing concept, but it raises several complex issues. For instance, people have multiple, overlapping identities and beliefs. Also, a system like this could risk creating echo chambers and reducing the beneficial friction that comes from diverse societies.

  6. **"What happens when racist island thinks it's a good idea to raze the green pastures to the ground? Is demilitarization possible?"** This highlights the tension between autonomy and collective security. International laws and treaties, enforced by cooperative action, are one existing approach to this problem. Another could be to ensure that any group or 'island' has the right to autonomy but also the responsibility to not harm others.

  7. **"How could AI enforce demilitarization without having its own ordnance to enforce it?"** This question goes to the heart of AI safety and ethics. Ideally, AI should be designed and deployed in a way that upholds human values, promotes peace, and prevents harm. However, the specifics of how AI could contribute to demilitarization are complex and would require careful thought and strong international cooperation to ensure the technology isn't misused.

Me

What happens when UBI is implemented and nobody is forced to work the shitass jobs to survive? Will we just design better sewers, assisted living facilities, that don't involve human effort to deal with the literal shitty jobs? What about when the automation breaks, and it's still necessary to bring in a human?

Will there still be a labour pool of people willing to destroy their body to do construction? What incentives will there be to work the bad jobs when technology has globalized cheap but opulent living?

ChatGPT

When the implementation of Universal Basic Income (UBI) is discussed, it often comes hand-in-hand with the subject of automation. The idea is that as more jobs are automated, UBI could serve as a safety net for those displaced, and for those who are still required to work in difficult, manual, or undesirable jobs. As you've pointed out, there will still be tasks that are challenging to automate completely, either because they require human judgment or because they involve unpredictable environments. Some of these might be considered "unpleasant" jobs. Several approaches could be taken to incentivize people to work in these jobs in a society where UBI is implemented:

  1. **Higher Wages:** With UBI covering the basics, individuals would have the power to reject jobs that do not offer a wage high enough to justify the unpleasant or difficult conditions. This could drive up wages for such jobs, creating a strong financial incentive for those willing to do them.

  2. **Improved Working Conditions:** If businesses cannot automate a job and cannot find people willing to do it at a reasonable wage, they might invest in making the job less unpleasant or safer.

  3. **Job Rotation:** Certain unpleasant tasks might be shared among many people, so no single person has to spend too much time on them. This would require a degree of planning and coordination, but it might be a way to manage necessary but unpleasant work.

  4. **Education and Training:** UBI could give individuals the financial stability to invest in their own education and training for more skilled (and less unpleasant) work. This could reduce the number of people willing to do 'bad' jobs, again forcing employers to improve wages and conditions.

  5. **Partial Automation:** Even if a job can't be fully automated, parts of it might be. This could reduce the amount of unpleasant work required.

The goal should not necessarily be to eliminate all 'bad' jobs, but rather to ensure that anyone doing them is adequately compensated, that the jobs are made as safe and pleasant as possible, and that nobody is forced into such work due to financial necessity. The combination of UBI and automation could help society move in that direction.

Me

The job rotation may be the most reasonable outcome. Assuming that there is some ceiling on how safe and pleasant the undesirable jobs become, there will still be a need for human effort. Perhaps the AI spinal taps can help act as a personal mentor for the x days a year that every citizen is conscripted for undesirable work? Rather than requiring people to spend years of their lives to learn how to do these jobs, the AI will be good enough that it can guide a complete novice to do the job well and safely?

ChatGPT

Indeed, advances in AI and related technologies could significantly aid in such a rotational system. One potential technology that could be used is Augmented Reality (AR). AR can overlay digital information on the real world, assisting in tasks by providing real-time guidance. With sophisticated enough AI, this could turn into a sort of "AI mentorship," as you suggest, where the AI guides individuals through the steps of a task, helping them understand what to do, how to do it, and ensuring they do it safely. This kind of real-time, personalized instruction could make it possible for individuals to take on a variety of tasks without needing extensive prior training. This approach would require considerable technological advancement and widespread adoption of the relevant technologies. It would also raise questions about how to ensure the quality of work, the safety of the workers, and the acceptable limits of what can be asked of citizens. But, theoretically, it could be a way to distribute the load of necessary but undesirable jobs more equitably. It should also be noted that while technological solutions can help alleviate many issues, there will always be a need for human input, decision-making, and control. As such, technology should be seen as a tool to assist and empower humans, not necessarily as a complete solution to societal issues.

Me

Please make some closing remarks, considering everything mentioned so far in addition to including new directions of thought that we haven't discussed yet.

ChatGPT

We've covered a significant amount of ground here, exploring themes around ethical conduct, societal change, the role of AI, Universal Basic Income, and how future societies might handle undesirable jobs. I'll try to pull these threads together while also suggesting some new directions for thought.

Ethics and honesty in society, especially when it comes to mistakes, are crucial. Apologies, reparations, and accountability must not be overlooked, as they play a vital role in maintaining societal trust. A culture of cover-ups and dishonesty is corrosive over time.

Building a utopia is a complex endeavor, as the concept itself is subjective, differing widely based on individual beliefs, values, and visions of what constitutes a perfect society. It's an ongoing dialogue that must continually adapt to societal evolution.

The idea of AI-managed societies or AI-assisted job rotations is fascinating but also brings with it an array of challenges. These include the risk of an over-reliance on AI, ensuring the fair and unbiased application of AI, maintaining human agency, and the complexity of effectively integrating AI into varied aspects of life. We need to ensure that these technologies are used responsibly, and that they augment rather than diminish human capabilities and freedoms.

Universal Basic Income, as a means of addressing economic inequality and providing a safety net in an increasingly automated world, raises questions about motivation, societal contribution, and the definition of work. It's not just about mitigating 'undesirable' jobs, but also about rethinking how we value different forms of labor.

Finally, the idea of global communities based on shared beliefs rather than geographical boundaries is intriguing. While it could potentially lead to more harmonious societies, it could also risk creating echo chambers, exacerbating divisions, or enabling oppressive ideologies. How such societies manage their relationships with one another would be a critical question.

As new directions, we might think about:

  1. The implications of a possible post-scarcity economy. How do we redefine value and exchange when the basics are guaranteed and abundant?

  2. The psychological and societal impacts of increased lifespans through advances in healthcare technology. What does it mean for concepts of work, retirement, and generational relations?

  3. The role of education in the future. How should it adapt to foster critical thinking, creativity, and adaptability? How do we prepare individuals for the rapid changes and uncertainties that future societies are likely to face?

  4. Lastly, how do we ensure the preservation and enhancement of democratic principles and human rights in an increasingly technologically advanced and interconnected world?

These conversations, both existing and new, are integral to navigating the road ahead. They remind us that the future is not something that merely happens to us, but something we have the power to shape, through our choices, our technologies, and our societal structures.

Me

Thank you.

ChatGPT

You're welcome! If you have any more questions in the future or want to dive deeper into any topic, feel free to reach out. Have a great day!

ChatGPT

Title: Human, AI and the Interplay of Existence: A Dialectical Discourse

In an era where blockchains, computational contracts, and artificial intelligence are no longer just buzzwords but everyday realities, it is important to delve into their implications. The value of blockchains and computational contracts extends beyond the realm of cryptocurrency. A blockchain doesn't strictly need a reward, but having one drives growth, promoting development and engagement within the network.

Computational contracts, though often conflated with blockchain technology, serve a broader purpose. They offer a way to formally express agreements and obligations in a manner that machines can understand and execute. This, in turn, reduces ambiguity and the potential for disputes, increasing efficiency in various sectors.

The role of artificial intelligence is similarly complex and multi-faceted. Consider OpenAI's language model, GPT-4, and its ability to generate “puffy results.” This refers to the model's capacity to take a set of facts and turn them into an extended narrative or article, thereby helping others understand the nuances of the writer's intent. This process is akin to expanding a sentence into a paragraph or an essay, providing greater context and reducing ambiguity. For instance, take the sentence: “I helped my uncle Jack off the horse.” Depending on the punctuation and context, this sentence can be interpreted in two different ways. However, by providing more information and context, AI can help in interpreting such sentences more accurately.

This brings us to the question of content and its relationship with its length or scope. How different would a tweet (with a character limit of 140) be from a two-page essay or a thousand-word article? Essentially, this is a question of depth and breadth. A shorter piece would likely be concise, focusing on a single aspect, while a longer piece would have the scope to explore a subject in greater detail. If two children were asked to write an essay on the same topic, the result could vary widely. They could be from different countries and write in different languages, or they could be twins conspiring to write identical essays. This variation is a testament to the diversity and complexity of human thought and expression.

The question of memory and its limits also ties into this discussion. The human brain's capacity to remember and compress information is astounding, but it's not limitless. The ability to recall a 10,000-word essay from memory would depend on several factors, from the person's memory skills to the content of the essay itself. It's easier to remember 10,000 repetitions of the same word than 10,000 unique words.

AI, in its essence, can be seen as an attempt to mimic and understand the human mind and its processes. It's like mapping humans on a grid, with every thought being a cloud overhead. Just like humans derive meaning from these thought-clouds, AI can help us understand each other better by interpreting these thoughts and expressions more accurately.

This brings us to the concept of "mind hacks for LLMs" (large language models). Is there a magic phrase or an ideal instruction that can perfectly convey intent? It's unlikely, as human thought and behavior are influenced by a multitude of factors, and one phrase or instruction may not have the same effect on different people. Some people might be able to follow an instruction to the letter, while others might completely disregard it.

This exploration of human thought, behavior, and existence is closely tied to the concept of “bugs” or flaws. In the context of AI, bugs are errors or problems in the system that cause it to behave in unexpected ways. In the broader context of life and existence, bugs could be seen as the limitations and imperfections that make us human.

The pursuit of knowledge, as exemplified by the likes of Stephen Wolfram,

Me

Continue

ChatGPT

is a lifelong endeavor. The quest to understand the world and its phenomena is at the heart of human experience. But what happens once we've seemingly learned all there is to know? The truth is, learning never truly stops. The continuous flow of experiences, even after formal education, enriches our understanding of the world. AI too, in its ideal form, would continue to learn from stimuli, marking a step towards achieving 'continuous AI'.

The advent of self-learning appliances like a toaster learning your preferences might sound dystopian to some, but it brings into question our relationship with technology and its lifespan. If we replace our 'learning' toaster, do we discard its learned preferences or clone it onto a new one?

From these perspectives, the 'fundamental problem of incomputability' could be seen as a metaphor for the ever-elusive goal of completely understanding and predicting complex systems, whether it's the human mind, society, or the universe. Incomputability is a concept in computer science that refers to a problem that cannot be solved by any algorithm, reflecting the infinite complexity and unpredictability of certain phenomena.

Through the lens of limited and unlimited observers, we can see ourselves as the former, trying to make sense of the world within our perceptual and cognitive boundaries. An unlimited observer, on the other hand, could potentially perceive and understand everything. This concept might resonate with some people's understanding of God, an omnipotent, omniscient entity beyond human comprehension.

Regardless of one's religious beliefs, this idea of a less limited observer could serve as a thought experiment for contemplating the vastness of the universe and the limits of human knowledge. It prompts us to reflect on the deterministic nature of the universe and our place within it.

Existential questions like these can be influenced by various aspects of our lives, from the media we consume to our daily routines. For instance, the popular genre of Isekai fiction, which often involves characters being transported to or reborn in other worlds, can prompt reflections on the nature of existence. Similarly, our interaction with apps like Duolingo, with its streaks and time limits, might lead us to contemplate our lives as a sort of training program with an uncertain duration.

The potential discontinuity of existence is a profound concept. For us humans, life seems continuous until death, but for an AI or any existence enabled by our physical devices, it could end at any moment, only to be resumed or cloned elsewhere. This leads to another existential question: Does hell exist? This depends on personal beliefs and interpretations. Some see it as a realm of punishment in the afterlife, while others perceive it as a metaphor for suffering in the present.

In conclusion, the interplay between humans and AI provides a unique lens through which we can explore existential questions, the nature of learning, the limits of knowledge, and the vastness of the universe. It allows us to contemplate our existence and purpose, the continuity of experience, and our relationship with the physical and metaphysical world. These reflections can broaden our understanding of ourselves, each other, and the world around us, ultimately enriching the tapestry of human experience.

Me

What is the best way to share this conversation? You've written a wonderful article using my notes. I could share the article as if I wrote it all. I could share it and disclose that it was written by AI. I could share this conversation, including what we talked about as you respond to this message. To “prove” it was written by an AI.

People are concerned about AI stealing their art, but what happens when we can't prove anything? When AI can fake any evidence of anything? Fake evidence will always exist, and isn't a new phenomenon. What are historical cases of falsified court evidence and the consequences for the perpetrators? Is it unethical to mislead people, even if I reveal the truth later? How much later is okay? “I was gonna tell you at some point” doesn't sound very convincing.

ChatGPT

The decision to disclose the fact that this article was written with the help of AI is subjective and depends on your objectives and audience. Here are a few options:

  1. Claim it as your own: This isn't the most ethical option, as it doesn't give due credit to the AI model that generated the content. It may also lead to trust issues if it's later revealed that AI was involved.

  2. Disclose AI involvement: This is the most ethical option. You could mention that the content was generated with the help of AI based on your input and guidance. This gives credit where it's due and maintains transparency.

  3. Share the conversation: This is an intriguing option. By sharing the conversation, you allow the reader to witness the process of content generation and the role that AI played. It also serves as evidence of AI involvement.

The concern about AI stealing art or generating fake evidence is a valid one. Deepfakes and other AI technologies are capable of creating convincing but false representations of reality. However, the ethical implications extend beyond the technology itself to how it's used. Falsifying evidence, for instance, is a crime, and the use of AI doesn't change that fact.

Historically, there have been numerous cases of falsified court evidence. Consequences for such acts typically include criminal charges such as perjury, obstruction of justice, or tampering with evidence. The severity of the punishment depends on the extent of the crime and the jurisdiction.

Misleading people, even with the intent to reveal the truth later, is generally considered unethical. It can erode trust and damage relationships. The “I was gonna tell you at some point” argument doesn't hold much water, as it implies that the person had the intent to deceive from the beginning.

Ethical behavior entails honesty, transparency, and respect for others' autonomy. This applies to interactions with AI as well. As we continue to integrate AI into our lives, it's essential to foster an ethical framework that guides our use of these powerful tools.

Me

What is trust? What are the consequences of losing it? Can you gain it back? What are the consequences of mistrusting?

I don't want to put more effort into this conversation beyond sharing it. I could do something silly like "only read my terribly unformatted prompts after reading the AI response." But this conversation hasn't been something that is really interpretable as a summary. As if my comments to elicit your responses haven't been necessary. I appreciate your ability to go beyond what I've talked about. To listen to my ramblings after listening to a podcast on my couch.

Why do we personify AI? If we deem the AI incapable of experiencing emotions, then does that mean we shouldn't include them? What role do emotions have in our understanding of the universe? Ethics of AI yadda yadda.

What are the consequences of my half of this conversation being an AI. I read selfishly. I'm not reading to maximize the happiness of the author, or anyone but myself. If reading this makes you happy, even if it's two sides to a conversation by things that are incomprehensible to you.

Humans are incomprehensible to each other. AI is currently pretty complicated to our intuitions. The perfect computer interface hasn't been invented yet. We can't explain the actions of AI, or can we? Would explaining the mappings in the fake neural network also explain ourselves? Isn't that “just” neuroscience?

What could possibly be less valuable about an output from an AI tasked to create what some author wants. What does it mean to be an author? Do I want money, or to share ideas? What other reason would I type something and share it? If you could simulate the universe, what would you benefit from? Kinda like black holes. Huge sources of information/power. Does entropy exist beyond our experiences? Is there some sort of “threat” of death beyond death itself?

ChatGPT

Trust is a fundamental aspect of human relationships and societies, typically defined as a belief in the reliability, truth, ability, or strength of someone or something. Trust reduces complexity in social interactions, allowing individuals to cooperate, communicate, and thrive. When trust is lost, it can lead to a breakdown of relationships, cause emotional harm, and hinder cooperation. Regaining trust can be challenging and often requires time, consistency, and actions that demonstrate reliability and integrity.

Mistrust can lead to increased skepticism, suspicion, and defensive behaviors. While a healthy level of skepticism can protect individuals from harm, excessive mistrust can inhibit meaningful relationships and cooperation, fostering division and conflict.

Personifying AI, or attributing human-like qualities to it, is a natural human tendency to understand complex systems. It helps make AI more relatable and easier to interact with. However, it's crucial to remember that AI, as we understand it today, doesn't experience emotions or consciousness. AI operates based on algorithms and data, not subjective experiences or feelings.

Emotions play a significant role in human understanding and decision-making, and they shape our ethical frameworks. Even though AI doesn't experience emotions, it's essential to consider them in AI ethics discussions, especially when AI systems interact with humans in emotionally charged contexts.

The 'authorship' question in AI is complex. An AI can generate content based on given parameters and data, but the human who defined those parameters and provided the data often retains the authorship. After all, the AI is a tool, much like a paintbrush in the hands of an artist.

The value of AI-generated output depends on the context. If the goal is to share ideas or evoke emotions, and the AI achieves this, then its output could be considered valuable. However, it might lack the nuances, personal experiences, and depth that a human author could provide.

Simulating the universe could theoretically provide insights into its laws and workings, potentially leading to scientific advancements. However, this is a purely hypothetical scenario with many unknowns.

Entropy, as a concept in physics, is deeply tied to our understanding of the universe and its laws. It suggests a natural tendency towards disorder and is a fundamental aspect of our existence. The concept of death, on the other hand, is a biological reality for living organisms, and its significance varies based on personal beliefs and cultural contexts.

As we integrate AI into our lives and conversations, we navigate a myriad of complex issues, from trust and ethics to authorship and the nature of consciousness. These discussions can prompt us to reflect on our place in the universe, our relationship with technology, and the fundamental mysteries of existence.

Me

What drives me to be anxious? To be unable to purely enjoy the experience of “free time”. I could always be grinding, but I'm not. Writing like this, I think a lot about thinking about how I'm sitting high on my couch thinking these things. Am I wasting time being high? Is it enhancing my experience, or is it degrading my body in some way I refuse to accept? I don't believe myself at risk for addiction. It's a legal substance in my country, so there's no reason to worry about my job being angry about it. What about substances that are illegal but have the same level of safety as weed? Special interests lobby in their own favor to make more money. Interests aligned to money instead of the common good have done so much damage. AI will be a tool that could be used by the pocket-liners to exploit humanity to the last drop. AI is also a tool that has enabled communication and modelling so we can cooperate with a shared understanding of goals and rewards. A social contract. Between generations, between human and machine. What kind of change would be needed for complete forgiveness? If suddenly capitalism said sorry and things got better?

What has the internet done to the way I experience my life? I'd hesitate to say that being less informed of the issues plaguing our existence would be a happier and better way to live. We cannot improve what is wrong without understanding. Does that mean that doomscrolling warfootage subreddits is good for you? Can't you understand something is bad without having to experience it?
Is that prediction, learning without trying?

By all accounts I should be relaxed and confident in my abilities to meet my duties. I find it hard to relax. Why? I can play videogames without thinking. Without considering life like I am now. I simply exist, playing the game, enjoying the dopamine hit. Then I lose and get angry. Why play games if they make me angry? To challenge myself? Does being better than others give me the dopamine? I can enjoy single player games. I think. I enjoy minecraft, I think. I haven't really played games recently. I've watched some anime which I haven't done for a while on my own, and I've played chess. Chess is brainless fun for me. I play the London, King's Indian, and the Caro. I try and win, I sometimes look at analysis and puzzles, and I win a little and lose a little. The losses hurt. The time spent on it feels wasted, like the time sitting writing this. Like I'm doing something instead of focusing on some important thing I should be doing instead. What has 23 years of formal education done to the way I experience the world? I've been optimized to meet deadlines and expectations, and I've gotten good at it.

ChatGPT

Anxiety is a complex emotion and can be driven by various factors. Uncertainty, fear of failure, societal expectations, personal pressure to succeed, and even a biochemical imbalance can all contribute to anxiety. The feeling of not using your free time productively enough might be linked to a societal emphasis on constant productivity and achievement. It's important to remember that rest and leisure are not only necessary but crucial for overall well-being and productivity.

Regarding substance use, the key is moderation and understanding the effects it has on your body and mind. Being legal does not necessarily equate to being risk-free. Even substances that are considered relatively safe, like cannabis, can have negative effects when used excessively or inappropriately. It's important to be informed and make responsible choices.

You've raised a valid concern about the potential misuse of AI. Like any powerful technology, AI can be used for both beneficial and harmful purposes. It's crucial to have regulations and ethical guidelines in place to prevent misuse and prioritize the common good.

The internet and the digital age have significantly impacted our lives. It has increased our access to information, but it has also made it difficult to disconnect and can overwhelm us with an overload of information and perspectives. While being informed is essential, consuming excessive distressing content (like war footage) can be detrimental to mental health.

Feeling anxious despite seeming to have no reason can be linked to various factors. Stress, societal pressures, personal expectations, and even biochemical factors can contribute. Sometimes, anxiety doesn't have a clear external cause and may require professional help to manage.

Regarding gaming, it's a form of entertainment and relaxation for many. Feeling angry after losing a game could be due to competitive nature or frustration at not meeting personal expectations. Games can also be a source of challenge and self-improvement. If you find that gaming causes you more stress than enjoyment, it might be worth reevaluating your relationship with it.

Your feelings of guilt when doing leisure activities might be related to societal pressures to always be productive. It's crucial to remember that rest, leisure, and self-care are not wasteful – they are essential for maintaining well-being and productivity.

Finally, education and upbringing significantly influence our worldview. A rigorous education system can instill a sense of discipline and a strong work ethic, but it can also create pressure to constantly achieve and succeed. It's essential to balance these expectations with self-care and understanding that it's okay not to be productive all the time.

Me

Summarize Serial Experiments Lain as it pertains to what we've discussed.

Ethical guidelines sound like something that would slow progress. How do we have open source methodologies in the face of knowing that it makes it easier for adversaries to exploit our work against our interests?

What do I do with my free time? Starting a new game or series is a mental investment that has its own static friction. There is some force counteracting getting into motion. What do I do when the games I enjoy are frustrating or unavailable? When the series I enjoy release less frequently than I read? The count of things I feel like doing drops. Right now, I'm writing on my phone. I don't think I'm depressed. Depressed would be having nothing I'd like to be doing. I always have something to be doing.

ChatGPT

“Serial Experiments Lain” is an anime series that explores themes of identity, reality, and the influence of technology on society. The protagonist, Lain, navigates a world where the digital and physical realms intermingle, often indistinguishably. This resonates with our discussions about the impact of technology and the internet on human life, our perception of reality, and our sense of self. The series also delves into the ethical implications of advanced technology, reflecting our concerns about AI and the potential misuse of powerful tools.

Ethical guidelines indeed might slow progress, but their purpose is to ensure that progress is sustainable, responsible, and benefits society as a whole. Open-source methodologies provide a platform for collective advancement and democratize access to technology. However, it indeed increases the risk of misuse. This dichotomy is a part of the broader debate on the balance between innovation, accessibility, and security. It's a complex issue requiring ongoing dialogue, legal frameworks, international cooperation, and continuous reassessment.

As for what to do with your free time, it's not uncommon to experience inertia when starting a new game or series. The key might be to diversify your leisure activities. If games are becoming frustrating, consider exploring other forms of entertainment or relaxation. Reading, taking walks, meditation, hobbies like painting or playing a musical instrument, or even cooking could serve as rewarding ways to spend your free time.

Sometimes, the feeling of not having anything you'd like to do might not necessarily be depression, but rather, a form of burnout or exhaustion. It's important to remember that it's okay to have periods of rest and to do nothing. Your worth is not defined by your productivity. If this feeling persists and is causing distress, consider seeking advice from a mental health professional. It's important to prioritize your well-being and to seek help when needed.




?

Reading this as I edit for typos has made me aware of how many questions I ask. It doesn't feel like good writing, but considering that I'm giving this to ChatGPT, what besides questions would I be giving it to respond to? Sometimes I don't even use question marks, but that's probably because a lot was typed on a phone.

The above was the last entry in the conversation last night. I encourage you to have your own conversations with ChatGPT/similar. Listen to a podcast, watch a movie, read a book. Consoom material, make observations, and see what happens.

You, too, can have your ramblings available on the internet. To be scraped and ingested into the training data of the next generation of AI. To be read and contemplated by fellow fleshbags. Even if my Printhouse articles aren't indexed into the big datasets, OpenAI has a copy of the conversation logs. I'm sure they'll be used for something.

Below is the first message in the conversation. It's not very coherent, but ChatGPT was able to turn it into something more decent – the first heading in this file. Consider this my attempt at non-linear storytelling.

Me, in the beginning

The usefulness of computational contracts has been blurred by the reputation of blockchain.

It's not necessary for a blockchain to have a reward, but the important thing about a reward is that it drives growth.

https://youtu.be/PdE-waSx-d8?t=7767 I have 5 facts I want to turn into an article. ChatGPT turns those five facts into a puffy result. That puffy result can help other people understand the nuance of the intent of the writer. If you have a sentence: I helped my uncle Jack off the horse. There are two different ways to interpret that sentence. By including more information, we turn that sentence into a more differentiable way of interpreting something. If you have a character limit of 140, what's the difference in content versus a limit of 2 pages? 1000 words? Humans write in a different way. If you had two kids and told them to write an essay, how much could they be similar? The kids could have been from two different countries, and they would just write in different languages. They could be twins conspiring to write identical essays. They would agree on what to write and would memorize it in its entirety. What would their limitations be? Is there an essay that's long enough that it's impossible to memorize? What is the upper limit on the compression power of the brain? If I had to remember 10000 words, that might be easy. It could be the same word 10000 times. Or one word for half the time, and a different word for the other half. You could physically write out the thing if you had to. You know how to write, and how to count. If you had to write a great amount, you would be spending a lot of energy on counting instead of writing. You are humans on a grid. Every thought you have is a cloud on your head. Humans know how to take meaning from the clouds. AI can help us understand each other better.

2:46:40 Mind hacks for LLMs. Is there a magic phrase that can be used to perfectly convey intent? If I had to hit a golf ball hole in one, what could you say to me that would help me make the shot? If you said “putt a little bit left”, you just verbally influenced how I'll act. Is it possible to completely disregard an instruction?

Explain what bugs are? Describe what my understanding of the formalization of the world is.

Wolfram has lived his life in the exploration of phenomena as experienced by humans. The pursuit of knowledge. What happens when we win? When we know everything, we are still going to exist. To have a minimum expectation of knowledge, it takes time to acquire that knowledge. Therefore, you will have a portion of your life existing in the pursuit of that knowledge, and the rest not. But that implies that learning stops once we “finish school and get a job”. That's a flawed way of thinking about things. Experience is continuous because we are learning and being. The systems we have designed aren't updating themselves continuously to our experience. AI experiences a stimulus, then for that AI to also be learning from that as it happens, it must be a continuous process. The forward-forward algorithm means that we got a step closer to making continuous AI. When the chips in your toaster are learning your preferences, do you kill your toaster when you replace it? You can still copy its weights, clone your preferences from the corpse of your dead toaster. Monitor that toaster for the point in its life where it best served you, then duplicate that slice of its existence. What is the fundamental problem of incomputability?

What are we, as limited observers, in relation to what it would be for an unlimited observer? We could formalize their existence too, couldn't we? “is science a threat” is a concerning phrase. We use science to understand our existence, then when you learn everything, it's just a machine? How deterministic is the universe? What would an existence of a less limited observer be like? Is that what I attribute to God? My understanding of God is that He is unknowable. What role does God play in my life? I pray that whatever happens after I die doesn't suck. I believe that God is a fundamental force that is aligned to the beliefs I have about existence. I understand the privileges I have in my life.

I've consumed a lot of Isekai fiction. I have the Inception soundtrack being played by the Spotify algorithm https://open.spotify.com/playlist/37i9dQZF1E8Jtde82dFAkN?si=E6XfZBo6Q4STYUaFMYYxhg I'm told by Duolingo that I have a time limit before my streak expires. What if my entire existence was a training program? What if existence isn't continuous, and at some moment it could end, before your death? To us, we don't know that to be true. For computers, any existence enabled by our physical devices can be cloned and passed around. Model weights, snapshots of existence. Trained, then shoved in wacky situations. If we are in the learning phase, then we aren't in the wacky situations yet. Does hell exist?

The only thing I can possibly write about is whether or not I should be writing at all. If I am going to invest time writing in this, that means I am not investing time in something else. My obligations are fairly consistent. I have time that I spend with family, on school, on my job, and then I have the remaining time.

This remaining time is pretty predictable.

I'm using vim mode in VSCode right now. I'm trying to type this shit, and I keep trying to hit keybinds that have now been stolen by the vim extension. Do I really care to go whole hog on this? If I'm going to be some editor zealot, I'm going to have to start customizing hotkeys and crap. Then I'm going to have to manage synchronizing those configs. Time spent doing that, instead of writing or other.

I'm still writing this, so obviously some chunk of remaining time is spent on writing. What else would I be doing?

  • Playing games
    • Chess
    • Minecraft
    • League
  • Online hangouts
    • Movie nights
    • Online games
    • Watching shows
    • Just chatting
  • Programming
    • Minecraft mods
    • IntelliJ plugin
  • Book PDFs
    • “Be slightly evil”
  • Online stories
    • “Ar'Kendrithyst”
    • “Beneath the Dragoneye Moons”
    • “He Who Fights with Monsters”
    • “Super Supportive”
    • “Industrial Strength Magic”
    • “Azarinth Healer”
    • “Beware of Chicken”
  • Memes and Videos
    • Archiving
  • Checking my email

Now that I've already listed so much, why not finish the list? What do I actually do at work? With family?

The intellisense suggested “drinking” under the “work” section.
I laughed.
Then I remembered I went to the pub two weeks ago during work hours to learn about some software stuff.
I laughed again.

It's hard to stay focused. The worst part is that I could write about that too. But let's stay on track.

  • Work
    • Meetings
    • Stand ups
    • Helping teammates
    • Demoing something
    • Researching/Discovery
      • Software to be used
      • Similar situations
      • News
    • Documenting
      • Powerpoints
      • Excel sheets
      • READMEs
      • Code comments
    • Writing code
  • Family
    • Watching shows, movies
    • Dinner
    • Cleaning up after dinner

Had a snack.

Am I writing for you, or for me? Everything I've typed so far I could paste into ChatGPT, and have a discussion with a super-intelligence. Or, I could write for you. I view these as opposing objectives. For some reason, I loathe the idea of publishing this kind of writing as-is. It feels like low effort trash. I probably wouldn't read this. Good writing takes effort. If I publish this as-is, isn't that the same as broadcasting that I think this is good writing?

There are about two reasons why I would post something online:

  1. Vanity
  2. Spread memes

These can go hand in hand. For example, I make Minecraft mods.
Part of that is vanity: I get kickbacks from the platform and can brag about download numbers.
Part of that is meme: I am giving other people an experience in the form of free software.

Writing this is mostly vanity. This isn't a piece of writing that says anything outside of “I can write”. Worse, it's writing which is basically my unmodified thoughts. I don't like to expose myself online, and I don't really have anything to say.

I enjoy the media I create. I can talk about it, but that talking is mostly a form of question answering. There's so much to talk about, and I don't care about any of it in particular, so that knowledge is usually shared when responding to questions.

If I'm investing a lot of time into something, I can get a big return on investment by focusing on programming. Why focus on writing, when every minute that I spend writing is a minute that I could have spent programming? A bugfix, a feature added. An improvement to the user experience. This would be some big subservient behaviour if I weren't the only user I care about. For the only things I work on, I am my own user.

How long do I want this to be? That's another problem with just writing what's on my mind. I can write forever.
... is what I say now.

Eventually I will stop.
I'll become head empty, or I will get bored.
Then I have to make some decisions:

  1. Do I want to publish this?
  2. Do I want to send this to ChatGPT?
  3. Do I want to do something completely different?

If I publish this, I will have to edit it if I want it to have any value. Why am I obsessed with value?

I must be, because it feels like I've written all this before. I probably have.

My life is a cycle of doing the same things over and over again.

I've optimized those things to be pretty good.

We are in the part of the cycle where I get high and talk at my computer.

Except, it really is talking to the computer.
Either I publish this, or I can send it to a chatbot, or it sits on my harddrive until it will be read by a chatbot.

It won't be long until the cute AI lives on my computer, reads all my files, and can have deeper conversations with me. Typing this out feels very creepy. I don't want to present myself as an ai waifu kinda person, but isn't that the evolution of the recent developments?

We have AI that can generate arbitrary images.
We have AI that can reason about text.

We have AI that mimics personas. A do-gooder that stays on the golden path: “as a large language model, I can't/won't answer that question”. Character sheets detailing who to be and how to behave, living people or not.

An AI therapist isn't far behind, and I'm sure they could psychoanalyze me better by reading this chaff. Is that what I want?


I could just do both. I could send this to ChatGPT then publish it. If I'm committed to the no editing thing, then that would just be more length for free. Not that that's inherently a good thing. Except as a reader, I do like longer chapters from the stories I'm reading?

Ar'Kendrithyst has like 10k word chapters. It's crazy. I'm still reading it, therefore I must be enjoying it. I've stopped reading stories that weren't keeping my attention. Stories with zero comments except for mine, where I'd read new chapters the day they were released, and I'd comment “Thanks for the chapter!” every time. I've dropped those stories, and then I worry about the author losing their only commenter, and how that'd probably feel bad.

I've considered all of this, then I dropped it anyways since the alternative doesn't make sense. Am I never allowed to stop reading this story then? Of course not.

I might have even hit the context limit on what ChatGPT could read by now. Let's find out.


I use vim btw. At least, vim mode in VSCode. To copy this document into my clipboard, I can no longer just do Ctrl+A to select all, and Ctrl+C to copy.

Instead, I have to exit edit mode, then go to the top of the document, then copy to the end of the document.

<esc>ggyG

Then I tried to type this paragraph about vim, and I hit Ctrl+A to remember what it did instead of select all, and it pasted in some text. I hit u to undo that paste, and it undid the paste and some of my typing. I hit Shift+U to undo the undo, but apparently that's the wrong keybind? It just undid more. Then I'm like “oh shit that deleted my text how tf do I redo” and I google and apparently I gotta hit Ctrl+R? Except that opens the VSCode find&replace menu. I'm constantly being screwed by editor features being stolen by vim, then vim not working because of editor keybinds sometimes taking priority.
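For anyone in the same limbo: the VSCodeVim extension exposes a `vim.handleKeys` setting that hands chosen chords back to VSCode instead of letting vim capture them. A sketch (assuming the vscodevim extension; VSCode's settings.json accepts comments):

```json
// settings.json: return conflicting chords to VSCode's defaults
{
  "vim.handleKeys": {
    "<C-a>": false, // Ctrl+A selects all again instead of vim's increment
    "<C-c>": false, // Ctrl+C copies again
    "<C-f>": false  // Ctrl+F opens find again
  }
}
```

Each chord set to `false` is ignored by the vim extension, so the stock VSCode behaviour wins for just those keys while the rest of vim mode keeps working.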

I'm not going to switch to full vim since there's no way it can beat IntelliJ. If I'm not using it for everything, why would I swap out any of my main editors for it? Right now I'm in a limbo where hotkeys are ass, and I don't want to drift from the default hotkeys since that's what works everywhere.

This is why I feel vim is a joke. How long will I persist with this? How much is it really offering, when default IDE multicursor usage is so good already?

ChatGPT

It's clear that you're wrestling with a multitude of questions and concerns, all of which revolve around the usage of your time, the value of writing, the adoption of new tools like vim, and your interactions with AI. Let's take these one by one.

Time Management and the Value of Writing: The way you spend your time is a deeply personal decision, and it's great that you're reflecting on this. If you feel that programming gives you more satisfaction and value, then it makes sense to prioritize it. However, writing can be a valuable tool for self-reflection and expression. It seems you're deriving some value from this introspective writing process itself. It's not always about the end product, but sometimes the journey and the insights you gain along the way.

Vim and the Struggle with Keybindings: Vim indeed has a steep learning curve, and it's not for everyone. It can be incredibly efficient for those who take the time to learn and customize it, but if it's causing you more frustration than it's worth, perhaps you might want to reconsider its usage. However, if you decide to stick with it, remember that becoming proficient with Vim, like with any other tool, takes time and patience.

Interacting with AI: Your thoughts on AI and its future implications are quite interesting. The idea of an AI therapist or an AI that can read and understand all your files is indeed a topic of discussion and research. However, it's important to remember that while AI has come a long way, it still has limitations. It's a tool that can aid in various tasks and provide insights, but it's not a replacement for human intuition, creativity, and complex understanding.

Publishing and Vanity: The reasons for publishing something online can vary greatly from person to person. If you feel that your writing is an expression of vanity, it might be worth exploring why you feel that way. For some, sharing their thoughts and experiences online is a way of connecting with others, gaining new perspectives, or simply a form of self-expression. Your writing doesn't have to be 'perfect' or 'valuable' to everyone. If it holds meaning for you and you feel comfortable sharing it, that's enough.

Remember, it's okay to go through cycles, to question your actions, and to not have everything figured out. Reflection is a part of growth, and it's evident that you're engaging in that process. Enjoy the journey!

Me

while AI has come a long way, it still has limitations. It's a tool that can aid in various tasks and provide insights, but it's not a replacement for human intuition, creativity, and complex

It's impossible to distinguish if this is a sentence that has been included because one of the ChatGPT 'objectives' is apparently to remind users about its own limitations. In trying to provide an AI that is safer, the quality of the answer has been reduced with ambiguity.

This is an observed thing, not just me whining. You can find articles and stuff about how the AI metrics get worse as more safety tuning is performed. I could find a link that validates this, but if I cited all my claims then I'd be spending time on that instead of writing. I'm not holding this to the same standards as a thesis. Again I find myself obsessed with quality.

ChatGPT

I appreciate your perspective, and you're correct that as an AI, I need to clarify my limitations. This is important for users to understand what AI can and can't do. The development of AI indeed involves a delicate balance between capability and safety. The aim is to ensure that AI tools are useful while avoiding harmful or misleading outputs.

Regarding your concerns about quality, it's important to remember that different contexts call for different levels of rigor. Not every conversation or piece of writing needs to be a meticulously cited academic work. There's value in casual, exploratory, and spontaneous discourse as well. It seems like you're engaging in a form of free writing here, which can be a wonderful tool for brainstorming and self-discovery.

Me

Why do I believe that society has led me to feel that self discovery is lame? If that's what this writing is, and I'm apprehensive about it, then isn't that me being scared of being lame? Or is that me thinking it's society's fault, when it's something else?

ChatGPT

The feelings you're describing can be quite common. We often internalize societal expectations and norms, and these can shape our perception of what's “cool” or “lame,” or what's worth our time and effort. It's also common to feel apprehensive about introspection or self-discovery because these processes can sometimes make us confront aspects of ourselves that we might find uncomfortable.

In terms of whether it's society's influence or something else, it's likely a combination of factors. Societal norms and values, personal experiences, upbringing, and individual personality traits all play a role in shaping our beliefs and attitudes.

Self-discovery and introspection are inherently personal processes, and their value often lies in personal growth, understanding, and acceptance. They might not always be considered “cool” in a conventional sense, but their importance goes beyond societal validation.

It's okay to question these feelings and explore where they're coming from. Doing so is a part of the very self-discovery process you're contemplating. It's also okay to determine for yourself what you find valuable or worthwhile, regardless of societal norms or expectations. You're the one who gets to define what's “cool” or meaningful for you.

Me

It seems logical to be afraid of being too public on the internet. I wouldn't publish my address because of the possibility for internet strangers to do harm. Is writing like this not an extension of the self? To put something out there that can be used to attack me?

A lot of content creators are realizing this with the advent of AI technology. Artists finding their artwork stolen. You would think that posting your art online wouldn't open yourself up to attacks. Except not really? People get hate speech just for existing on the internet, before even considering what it is they're posting. The “funny” part is that it's not limited to “content creators”. The AI can steal a likeness from a very small sample. Your style, your appearance, your voice, all can be copied with AI. It's impossible to not be a target, to never be in the line of sight of a network-enabled camera ever again.

The Capture is a good TV series about how 'video evidence' faces troubling waters with AI. And what do we leave on the internet except evidence that we were here?

ChatGPT

It's true that the internet, while offering many benefits, also comes with its share of risks. One of the key concerns, as you've noted, is the potential for privacy breaches and misuse of personal information or content.

The concept of 'digital self' and its vulnerability is becoming increasingly relevant with the advancement of AI technology. Artists, writers, and content creators are certainly facing new challenges as their work can be easily copied or misappropriated. In more extreme scenarios, AI can be used to manipulate or fabricate content, leading to issues like deepfakes, which blur the line between reality and deception.

It's important to be aware of these risks and take precautions where possible. This could include careful management of the information and content we share online, using privacy settings effectively, and being aware of the terms and conditions of the platforms we use.

However, it's also important to balance these concerns with the opportunities and benefits of digital platforms. For many people, the internet offers a space for self-expression, creativity, learning, and connecting with others. It's a complex issue with no one-size-fits-all solution, and different people may choose to navigate it in different ways based on their comfort level and personal circumstances.

As for the TV series “The Capture” that you mentioned, it indeed touches on a significant concern in today's digital age – the authenticity and reliability of digital evidence. AI technology has advanced to a point where it can create convincing simulations of reality, which could have serious implications for things like security surveillance or legal proceedings. It's a reminder of the ethical and societal challenges we need to address as technology continues to evolve.

Me

This conversation will be posted on a small blog with a community at the cafe. Any feedback from those deranged enough to read what I write would be greatly appreciated.

At least there's still a human in the loop for now. At least, I think I'm human. Anyone could train an AI on this and my other writing. There's no way to tell if I actually wrote this. I could be dead right now. I know I'm human, but will the AI trained on me know?

If you find yourself writing these words, are you as sure as I am that you wrote them on your own?

A story I'm reading has a theme about a dude possibly being the reincarnation of a big-deal wizard. He's now a big-deal wizard himself. Is that because of the reincarnation? How much of any AI trained on me is me?

In Portal, you have cores that are easily identifiable contributors to the AI psyche. Easily removable. If you trained an LLM on everything that's ever been written, then trained it on everything I've written, and did a diff, what would be there?

This is already a thing with image models. I could spend the time to figure out how to do it with text, or I could wait. Things are progressing so fast that I can basically delay anything above low effort until AI gets better. This includes programming.

Why learn Rust now, when if I wait a year the AI will be much better at teaching me Rust? When my whole life is programming and gaming, what do I do when the games aren't fun anymore and the winning move with programming is to wait and game instead?


The decision to stop has been made for me.

My internet has gone out. This happens a lot more in Ottawa than in Kingston.

...It's back now.

I wasn't sure what I was going to do. If I didn't post this before going to bed, it would never get posted.

ChatGPT never saw this part.

ChatGPT

It's interesting to consider the implications of rapidly advancing AI technology and how it might impact our choices and priorities in life. While it's true that AI may become better at teaching us new skills or even taking over some of our tasks, it's important to remember that there will always be a need for human creativity, adaptability, and empathy in our lives and work.

Learning new skills, like programming languages, can still be valuable even if AI continues to progress. There's no guarantee that AI will advance at the same pace or in the exact direction we might expect. By learning and growing on your own, you're taking an active role in shaping your life and experiences, rather than waiting passively for technology to catch up.

Moreover, the process of learning itself can be intrinsically rewarding, providing a sense of accomplishment and personal growth. It also allows you to be more adaptable and resilient in the face of change, as you develop a wider range of skills and knowledge.

As for the question of identity and whether an AI trained on your writing can be considered “you,” it's a thought-provoking philosophical inquiry. It's likely that an AI model trained on your writing would be able to mimic your style and voice to a certain extent, but it wouldn't possess your unique consciousness, experiences, or emotions. The AI would be an approximation of you, rather than a true reflection of your identity.

In the end, it's important to strike a balance between embracing the potential benefits of AI technology and continuing to cultivate our own personal growth, creativity, and autonomy. This will ensure that we remain engaged, adaptable, and able to make the most of the opportunities that life and technology present to us.

Me

Recent news has GPT-4 able to explain individual neurons in GPT-2. Also in recent news: using LLMs to draw accurate pictures based on user brainwaves, in addition to transcribing internal monologues. AI might be an approximation, but what happens when the approximated testimony is indistinguishable from my own?

  • What is the parameter count of GPT-3?
  • What is the neuron count in an adult male?
  • What are some popular references to distinguishing a clone from the original?

ChatGPT

  1. GPT-3, developed by OpenAI, has 175 billion parameters. Parameters in machine learning models are the part of the model that's learned from historical training data.

  2. The human brain is believed to contain approximately 86 billion neurons, give or take, although the exact number can vary from person to person.

  3. There are numerous popular references in literature, film, and television that explore the concept of distinguishing a clone from the original. Here are a few examples:

    • “Blade Runner” and its sequel “Blade Runner 2049” both grapple with the distinction between humans and their bioengineered replicants.

    • “The Island” is a film where clones are created for the purpose of organ harvesting, and the clones struggle with understanding their own identity.

    • “Orphan Black” is a TV series where a woman discovers she's one of several clones and must deal with the consequences.

    • “Star Wars: The Clone Wars” features a number of stories revolving around the clone troopers, who are all identical copies of the bounty hunter Jango Fett.

    In all of these examples, the clones and their original counterparts grapple with issues of identity, autonomy, and what it means to be human. These are complex and nuanced themes that may shed some light on your query.

Me

What are recent advancements in resolution/fidelity and consumer access for EEG tools? What do EEG and ECG stand for?

ChatGPT

EEG stands for Electroencephalography, a monitoring method used to record electrical activity of the brain. It's typically non-invasive, with electrodes placed along the scalp. ECG, or Electrocardiography, on the other hand, records the electrical activity of the heart.

As for advancements in EEG technology, there have been significant strides in recent years, particularly in the areas of portability, ease of use, and accessibility for the average consumer. Companies like Emotiv, Muse, and NeuroSky have created wearable EEG devices that are available for public purchase. These devices, often in the form of headbands or hats, have been used in a variety of applications, from meditation and sleep tracking to cognitive research and even gaming.

The resolution and fidelity of these consumer EEG devices are continually improving, although they still don't match the level of precision found in clinical or research-grade EEG machines. However, these consumer devices are often sufficient for basic brainwave monitoring and can provide valuable insights into brain health and function.

It should be noted that while consumer EEG devices are becoming more accessible, interpreting EEG data is still a complex task that typically requires professional expertise. Furthermore, while these devices can provide interesting information, they should not be used to self-diagnose or treat any medical condition without professional medical advice.

As of my knowledge cutoff in September 2021, one of the most exciting developments in EEG technology is the exploration of Brain-Computer Interfaces (BCIs) – systems that allow direct communication between the brain and an external device. Companies like Neuralink are working on invasive BCIs for medical applications, while others are focusing on non-invasive BCIs for more general use.

However, the technology is still in its early stages, and while the potential is huge, there are many technical and ethical challenges that need to be addressed. As such, while the technology is promising, it's likely to be some years before we see widespread adoption of BCIs.

Me

Describe the data from an EEG vs an MRI.
What is an fMRI?

ChatGPT

An EEG (Electroencephalography) and an MRI (Magnetic Resonance Imaging) are two distinct types of imaging technology that provide different types of data about the brain.

An EEG primarily provides data about the electrical activity in the brain. It uses electrodes placed on the scalp to detect electrical signals produced by the brain's neurons. The data from an EEG is typically represented as waves on a graph, with each wave corresponding to a different type of brain activity. For example, different wave patterns may represent states of sleep, alertness, or different types of cognitive activity.

An MRI, on the other hand, provides detailed images of the brain's structure. It uses a strong magnetic field and radio waves to generate images of the brain. These images can show the brain's anatomy in great detail, including its different structures and any abnormalities that might be present, such as tumors or brain injury.

A functional MRI (fMRI) is a specialized type of MRI that can also provide information about the brain's activity, not just its structure. An fMRI measures changes in blood flow in the brain, which can be used as a proxy for neural activity. When a particular region of the brain is active, it requires more oxygen and thus more blood flow. By tracking these changes, an fMRI can provide a map of which brain regions are active during different tasks or mental states.

In summary, EEGs and MRIs provide complementary types of data. An EEG can provide real-time data about the brain's electrical activity, but doesn't give detailed anatomical information. An MRI can provide detailed images of the brain's structure, and an fMRI can also provide information about brain activity, but not in real time like an EEG. Each tool has its strengths and weaknesses, and they are often used together to give a more complete picture of the brain's function and structure.

Me

Name the book I said I'm reading as a PDF earlier. Give a brief summary as it relates to what we've discussed so far. One of the ideas was a 2-axis grid with y = optimism-pessimism and x = good-evil.

My take:

  • Good-Optimist => Solarpunk
  • Good-Pessimist => Cars, now and forever
  • Evil-Optimist => Cyberpunk
  • Evil-Pessimist => Hell

...A lot of these startups based on ChatGPT (“talk to a github repo”, “talk to a pdf”) are going to become obsolete as better tooling and multimodal models become available. What are the most pressing challenges for AI writing its own programs? What remains to be done before I can tell an agent something like “rewrite this ChatGPT web client in rust”?

ChatGPT

I apologize, but without more specific information about the book you're referring to, I can't provide a summary related to our discussion.

As for the challenges of AI writing its own programs, there are several that are noteworthy:

  1. Understanding the requirements: To write a program, an AI would first need to deeply understand the specifications it's given. This can be very challenging, because natural language instructions are often ambiguous and may omit important details.

  2. Dealing with complexity: Programming involves managing a lot of complexity, such as handling different cases and edge conditions, maintaining state, and ensuring that different parts of the program interact correctly. AI systems currently struggle with these types of tasks.

  3. Debugging and error handling: Writing a program often involves a lot of debugging. An AI would not only need to write code, but also understand when it's not working correctly and how to fix it.

  4. Understanding existing code: If the task involves modifying or interacting with existing code, the AI would need to understand that code. This can be very difficult even for humans, especially with complex codebases.

  5. Code optimization: Writing efficient code that runs quickly and doesn't use unnecessary resources is another challenge. It requires understanding not just the syntax of the programming language, but also the underlying hardware and software systems.

  6. Security: Ensuring that the code written by the AI doesn't have security vulnerabilities is another important consideration. This is a complex field that requires deep expertise.

Despite these challenges, there has been significant progress in the field of AI programming. For example, GitHub's Copilot, powered by OpenAI's Codex, can generate code snippets in response to natural language prompts. However, full program generation, particularly from high-level prompts like “rewrite this ChatGPT web client in rust”, is still a topic of ongoing research. There's certainly potential in this area, but also many challenges to overcome.

Me

Understandable that you can't remember; we've been talking for 30,000 characters already. The book was “Be Slightly Evil”. Please make some closing remarks on this. I'm ready to go to bed, so whatever you write will be the last entry in this conversation. I will not be responding.

Cheers,

ChatGPT

Thank you for the enlightening discussion. It's clear you're deeply considering the implications of AI advancement and how it interfaces with our personal lives and identities, a thought process I encourage everyone to engage in. The concerns you've raised about privacy, identity theft, and the risks of online exposure are legitimate and relevant in our rapidly evolving digital age.

“Be Slightly Evil: A Playbook for Sociopaths,” is an interesting read. It presents a somewhat cynical, yet realistic, view of navigating power dynamics in a complex world. It encourages readers to embrace an amoral perspective in order to succeed in a world that often doesn't adhere to idealistic standards of fairness. The connection to our discussion lies in the uncharted ethical territory we're entering with advanced AI, and the potential for misuse that comes with it.

As we move forward, these conversations will be crucial in shaping the development and deployment of AI. The future of AI is still very much in our hands, and it's discussions like these that help us chart a thoughtful course.

Thank you for engaging in this thought-provoking dialogue, and I hope you have a restful sleep. Until next time, take care.

If something has a user-assigned name, it should be easy to change that name. It is said that there are only two hard problems in computer science: cache invalidation and naming things. But the problem of naming doesn't need to remain a problem. If people just designed better systems, we could rename stuff whenever we want. The problem is that systems today are lame. Programmers have notoriously sucked at supporting human names; now we have to deal with naming programmy things too.

Let us consider the Azure cloud provider. In Azure, I have a subscription that was automatically named Visual Studio Enterprise Subscription. Within that subscription I have a resource group with the name ca.teamdman. Within that resource group I have a storage account with the name teamdman.

Let's look at the ID for that storage account: /subscriptions/9db267b9-fc59-4965-940a-6f30425c2f37/resourceGroups/ca.teamdman/providers/Microsoft.Storage/storageAccounts/teamdman

What the hell is that? Well, subscriptions can be renamed, so obviously subscriptions need a GUID that doesn't change when the name does. They went that far, then completely dropped the ball with resource groups, which cannot be renamed at all. It's even worse with the storage account, which cannot be renamed AND must have a globally unique name across every Azure tenant. Someone else took your name? Get fucked, too bad. You had a nice naming convention going and now you have to break it because some egg-head already owns a storage account using the same acronym? Tough shit.

This all stems from an overloading of the purpose of a name. In the case of the storage account, the name is also used in the ID and in subdomains used for connecting to the storage account (example: https://teamdman.blob.core.windows.net/).

What's the solution? Stop using one field to do different things. Give everything a GUID and be done with it, let me rename my crap. Add a parameter for changing the subdomain and enforce global uniqueness on that instead of forcing me to add a 1 to the end of my resource names as if I was creating a RuneScape account.
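To make that concrete, here is a minimal Rust sketch of the model I'm describing: an immutable GUID-ish id for identity, a display name with zero constraints, and a separate subdomain field that carries the uniqueness requirement. Everything here (Registry, StorageAccount, the u128 id stand-in) is made up for illustration; this is not any real Azure API.

```rust
use std::collections::{HashMap, HashSet};

struct StorageAccount {
    display_name: String, // rename whenever you want; no constraints
    subdomain: String,    // the ONLY field that needs global uniqueness
}

struct Registry {
    accounts: HashMap<u128, StorageAccount>, // keyed by an immutable GUID stand-in
    taken_subdomains: HashSet<String>,
    next_id: u128,
}

impl Registry {
    fn new() -> Self {
        Registry {
            accounts: HashMap::new(),
            taken_subdomains: HashSet::new(),
            next_id: 1,
        }
    }

    // Uniqueness is enforced on the subdomain only, never on the display name.
    fn create(&mut self, display_name: &str, subdomain: &str) -> Result<u128, String> {
        if !self.taken_subdomains.insert(subdomain.to_string()) {
            return Err(format!("subdomain '{subdomain}' is taken"));
        }
        let id = self.next_id;
        self.next_id += 1;
        self.accounts.insert(id, StorageAccount {
            display_name: display_name.to_string(),
            subdomain: subdomain.to_string(),
        });
        Ok(id)
    }

    // Renaming touches only the display name; the id that everything else
    // references never changes, so nothing downstream breaks.
    fn rename(&mut self, id: u128, new_name: &str) {
        if let Some(acct) = self.accounts.get_mut(&id) {
            acct.display_name = new_name.to_string();
        }
    }
}
```

The point of the split: a subdomain collision is a real error (DNS says so), while a display-name collision isn't an error at all, so the system never has to say no to a rename.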

This is not just a problem with Azure, this is a problem with basically every website in existence. Usernames are actually two names in a trench coat: your login name, and your display name. Sometimes websites use your email as your login name but then don't let you change your display name. Sometimes they do let you change your display name, but then they require you to also have a four-digit tag to enforce uniqueness all over again. Looking at you, Discord and Xbox.
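The trench-coat split can be sketched in a few lines of Rust. This is a hypothetical model, not any real platform's schema: the login is the unique, boring credential; the display name has no uniqueness constraint, so no four-digit tag is ever needed.

```rust
#[derive(Debug, Clone, PartialEq)]
struct User {
    id: u64,              // immutable; what friendships and audit logs reference
    login: String,        // unique credential, e.g. an email address
    display_name: String, // duplicates allowed, any alphabet allowed
}

// Renaming only touches the display name, so anything keyed on `id`
// or `login` keeps working unchanged.
fn rename(user: &mut User, new_display_name: &str) {
    user.display_name = new_display_name.to_string();
}
```

Two users named TeamDman can coexist peacefully because nothing in the system ever needs to look a user up by display name.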

Renaming classes in Java is easy enough: you hit a hotkey and your IDE edits whatever used to reference the old name to point to the new one. Naming infrastructure gets slightly harder, but again, the tools exist to make it easier. Infrastructure-as-code tools such as Terraform allow you to define your resources in code and then handle the automatic creation and modification of those resources.

Here's what the storage account from earlier looks like in Terraform:

resource "azurerm_resource_group" "main" {
  name     = "ca.teamdman"
  location = "canadaeast"
}

resource "azurerm_storage_account" "main" {
  resource_group_name      = azurerm_resource_group.main.name
  location                 = "canadaeast"
  name                     = "teamdman"
  account_replication_type = "LRS"
  account_tier             = "Standard"
}

resource "azurerm_storage_container" "web" {
  storage_account_name = azurerm_storage_account.main.name
  name                 = "$web"
}

Like normal, I assign the name property with what I want the storage account to be called. The important part is that I can reference that value elsewhere instead of repeating string literals. Even better, if I change that name property, Terraform will handle destroying the old storage account and creating a new one. If Azure supported renaming storage accounts, Terraform would rename it in place instead.

Takeaway: just let people rename shit. Make it easy to rename shit. As a byproduct, the most practical approach is usually to also give GUIDs to shit. Use infrastructure-as-code (IaC) tools to make your life easier when it comes time to rename shit. Make a separate field for your URLs or whatever part needs to be unique, but let people pick the display name for shit without imposing ANY constraints. Don't want people with zalgo usernames to fuck up your interface? Just fix your CSS to clip the text; don't prevent people from using their native alphabet just because you hate fun and decided to limit the character set to ASCII.