Sam Altman

One of the things we are good at at OpenAI is finding a lot of small wins and multiplying them together.

Sam Altman, OpenAI CEO's “secret” blog post is well worth the read

_cire4_ 1469 points 13 hours ago

This reads like a cookie-cutter “entrepreneur” post from LinkedIn.

Sam Altman gives a bunch of talking points that sound like the usual nonsense from billionaires. “Work 60 hour weeks, love your work, be rich enough to gamble and make it big”

This misses the point. You shouldn't need to make it big. You should be able to live a decent life without making it big. Sam talks about UBI being a driving force for unlocking human potential so that the little people can make it big too!!11!!

Little people, small guys, what does it speak to that our language historically uses disabilities to reference disadvantage? Why are those with disabilities so crippled compared to the rest of us?

Maybe because the rich have worked to prevent the masses from achieving the bare minimum quality of life that I would expect from a society that has the resources to do so.

Sam talks about making it big by owning things. What a fucking joke. The way to make it big (for EVERYONE) is libre software. Computers have had the potential to equalize things since their inception. Copyright has acted as the shield for a scum-based economy for too long. Piracy has always been a “problem” (read: solution) since the inception of computers. Restriction of data is antithetical to how they work.

You can't own BITS. Bits are a post-scarcity resource. Copyable with negligible cost. The problem is that “piracy” software hasn't been good enough yet.

You want to do something BIG? Build something that enables people to use computers how they want to. Media has been restrictive. Games have shitty input layouts. Movies you want to watch are only available on bumfuck streaming services. You can't play games on your phone anymore without an internet connection. Netflix limits how many files you can have downloaded.

None of this is a problem with “piracy”. Take the shitty software, tear out the parts you care about, and make it pretty again.

...

It's easy to say, harder to do in practice. At least, for now it is. What happens when we have AGI? When the AI is smart enough to write software that works? This sounds like a fantasy, but it won't stay one forever.

We don't have to wait till then to dream at least. All the movies, music, games, books, blog posts that I care about, in my physical possession on my hard drives. No DRM. No dependency on corpos who can strip the things I care about on a whim.

Google has ruined email, making it nearly impossible to self-host without being caught in a web of spam filters and uncertainty. Maybe it won't matter? It probably won't matter. It's at the point where it's easier to start fresh.

What's left to monetize when libre software achieves dominance? What's left to sell, when AI can take any software idea (existing or hypothetical) and turn it into free software?

You can sell hardware, and you can sell the time spent making software. You can't (shouldn't be able to) sell software... once we get rid of copyright. You want your computer to do something? Either the software exists already (for free!), or you pay someone to make it (and it becomes free for others!).

We stop duplicating effort, and we start filling in the gaps between what exists and what's desired.


What's the difference between Discord, a messaging software where I am a powerless user, versus X-cord, where every user owns their own domain, hosts (paying someone to manage?) their content, and the software is libre?

Why pay Discord to manage a messaging platform when I can buy some hard drives, tell the AI to buy http://beans.fun, and have it participate in the libre protocol that is X-cord? Nobody can compete with YouTube, but everyone has the hard-drive space to host a few videos.
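To make the idea concrete, here's a minimal sketch of what an "X-cord" client might do, assuming (hypothetically) that every participant publishes an append-only JSON feed of their own messages at their own domain, and a client merges the feeds it follows by timestamp. The domains, field names, and "protocol" here are all made up for illustration:

```javascript
// Hypothetical feeds, as if each had been fetched from its owner's domain
// (e.g. https://beans.fun/feed.json — made-up URL, made-up schema).
const aliceFeed = [
    { from: "alice@beans.fun", ts: 1, text: "hi" },
    { from: "alice@beans.fun", ts: 3, text: "anyone around?" }
];
const bobFeed = [
    { from: "bob@example.org", ts: 2, text: "hello" }
];

// Merge any number of self-hosted feeds into one timeline, ordered by timestamp.
function mergeFeeds(...feeds) {
    return feeds.flat().sort((a, b) => a.ts - b.ts);
}

console.log(mergeFeeds(aliceFeed, bobFeed).map(m => m.text));
// → [ 'hi', 'hello', 'anyone around?' ]
```

Everything hard about a real protocol (identity, signatures, deletion, spam) is omitted; the point is only that "a channel" can be a merge of feeds that each user hosts themselves.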

The “problem” is privacy I guess. Why host things from my computer, expose my IP for the ne'er-do-wells to find? Why not just pay Discord to do it for me? Why not just pay Google to do it for me? Why not just pay Amazon to do it for me?

Using a CDN can help. Instead of visiting your machine directly, your content will be cached by some other machine. It will lessen the toll if your stuff blows up (in a good way) and you get a huge influx of traffic.

Except, that's not necessary? BitTorrent exists. So instead of CDNs, VPNs are the best? Except, that's probably not at the limit of libre either. Doesn't Tor exist? Instead of me having “my” IP that can be used to correlate my activities, can't I just join my computer to the hive and get lost in the crowd?

If your video gets popular, then everyone watching it could be a seeder, instead of pegging a single host and having it all fall over.

This isn't new tech though, so what's been the problem? What will AI change?

...

I work doing Azure cloud stuff. Virtual machines, networking, storage, etc. It's not hard. It's legos. Not even good legos. What the fuck is the point of globally-unique display names? I've considered building an abstraction layer over cloud providers, but that's been done before by Vercel and the like, hasn't it?

AI will be able to manage all this crap for us. Is it cheaper to use Amazon directly (and all the wasted hours from dealing with Amazon crap), or is it better to use a reseller (and deal with their crap instead)? These will be questions the AI will be asking, not us. We will say “I want to make a blog. Make a plan, confirm the cost with me, then set it up.” Then, it will happen.

No need for cloud-monkeys like myself... Not that there is a need? You can run NGINX right now on your computer and type your thoughts into HTML and make it available to the world.
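For the record, the DIY version really is that small. A static NGINX site is a few lines inside the `http` block of the config (port and paths here are placeholders, not a hardened setup):

```nginx
server {
    listen 8080;            # placeholder port
    server_name _;          # answer for any hostname
    root /var/www/blog;     # directory holding your hand-written HTML
    index index.html;
}
```

Point `root` at a folder of HTML files and NGINX serves them to the world; everything after this is about keeping that reachable and always-on.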

The problem with that approach is that the content disappears when you turn off your computer. You want your stuff to be always-available, because why not? Maybe, instead of dealing with the mountains of complexity brought by trying to make data private, we just share data?

I like Jake's blog. When Jake's blog gets updated, why don't I download the new stuff and be a mirror? That way, if Jake's blog goes down, I can still read it. If Jake's blog gets popular, then I can help share the load. If Jake's blog gets unpopular, then I can still have it for as long as I care about it.


How much disk space would it take to store all this stuff?

Going on my YouTube history page, I scrolled down till April 21 (2023) before I got bored of waiting.

Then, using some JavaScript, I can quickly grab the links and the time watched, convert the time watched into seconds, and sum it all up.

// collect [link, duration] pairs from the watch-history page
entries = Array.from(document.querySelectorAll("#contents > ytd-video-renderer"))
    .map(x => [
        x.querySelector("a").href,
        x.querySelector("ytd-thumbnail-overlay-time-status-renderer").textContent.trim()
    ])

// turn [link, time (mm:ss or hh:mm:ss)] into [link, seconds];
// reversed, index 0 is seconds, 1 is minutes, 2 is hours, and reduce
// without an initial value seeds the sum with the seconds term
entryTimes = entries.map(x => [
    x[0],
    x[1].split(":").map(n => parseInt(n, 10)).reverse().reduce((acc, n, i) => acc + n * 60 ** i)
]).filter(x => !isNaN(x[1]))

totalTime = entryTimes.map(x => x[1]).reduce((acc, x) => acc + x, 0)

// the same link may be present more than once
// the link may also end in &t=...
// we want to remove duplicates
totalTimeUnique = entryTimes
    .map(x => [x[0].split("&")[0], x[1]])
    .filter((x, i, self) => self.findIndex(y => y[0] === x[0]) === i)
    .map(x => x[1])
    .reduce((acc, x) => acc + x, 0)

console.log(totalTime, totalTimeUnique)
// > 1664428 1267149

console.log(totalTime / 60 / 60, totalTimeUnique / 60 / 60)
// > 462.3411111111111 351.98583333333335

In like 6 weeks, I've watched 1591 videos (1343 unique). In total, this is 462 hours (352 unique).

I use YouTube for some music, so that explains a lot of the duplicates.

The numbers are kinda crazy.

BingGPT says I can get a 5TB hard drive for $108.
With 52 weeks in a year, and 462/6=77 hours per week, that's 4000 hours per year.
4000 hours of video at 1.5GB/hour is 6TB.

For like $200, I can store every piece of internet content I consume.
I don't know if 1.5GB/hour is the latest and greatest for video encoding either.
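The arithmetic above, spelled out (using the post's own assumptions: a sustained 77 hours/week, 1.5 GB per hour of video, and the $108-per-5TB price BingGPT quoted):

```javascript
// Back-of-envelope yearly storage cost for everything I watch.
const hoursPerWeek = 462 / 6;                 // 77, from the 6-week sample
const hoursPerYear = hoursPerWeek * 52;       // ~4004 hours
const tbPerYear = hoursPerYear * 1.5 / 1000;  // ~6 TB at 1.5 GB/hour
const dollarsPerYear = (tbPerYear / 5) * 108; // ~$130 at $108 per 5 TB drive
console.log(Math.round(hoursPerYear), tbPerYear.toFixed(1), Math.round(dollarsPerYear));
// → 4004 6.0 130
```

The "$200" figure in the post leaves headroom for a second drive or higher bitrates.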


What conclusions can we draw from this?

AI won't kill art, but it may kill the art industry?

We already have a model for things-taped-to-the-wall, what more is left to automate?

banana taped to wall AI art model

Soon, movies and videos will be generated by AI. This ease of generation will go down at least two paths:

  1. Everyone creates their own content, either by prompting themselves or by letting the AI infer tastes through historical data
  2. The majority of content is still created by a human that isn't the consumer

In both cases, the result is (as it already was) people just watching stuff that interests them.

Except, instead of requiring budgets that exceed the capabilities of any individual, every individual will be able to create exceptional media on their own.

What need is there for copyright when movies go from billion-dollar productions to free?

What need is there for copyright when secrets of production are no longer, because the AI can deduce any methodology from a product? The secrets of chip designs have been hidden and stolen from the beginning, as is the case for any other product worth stealing.

So what happens next? Artists, truck drivers, cloud monkeys, etc. are all out of a job and are forced to move into the shrinking battle-royale zone of jobs that haven't been automated yet?

Maybe, instead, we can just use our excessive wealth to allow people to exist without having to sell their body to survive? Maybe, instead of having to work to live, we can just live?


The only valid concern is privacy. What is the risk of the publication of our lives? That online stalkers will be able to track us and find us? That our secrets will be revealed? That our thoughts will be known?

If we have nothing to hide, then we have nothing to fear.

We should have nothing to hide, because we should live in a society that accepts that we as humans are flawed individuals who must be given the opportunity to make mistakes and learn from them.

The cancel culture is a symptom of a society that is too afraid to accept that people are flawed. That people make mistakes. That people can change.

Prisons are exploitative labour systems that have been used as a cudgel against minorities by the surprisingly scummy population. Why would shitass political leaders keep getting elected, unless either 1. the elections are rigged, or 2. the population is shitass?

If I was on the Truman show, what would be revealed? Any STIs, fetishes, home addresses, lovers, etc. would be revealed. Clothes still have use as a sanitary measure, even if the technology exists today that lets me take a picture of someone's face and generate a nude of them.

The danger is not the lack of privacy, but the lack of accountability.

It's impossible to have a foolproof digital identity system for accountability [citation needed].

This shitcoin about scanning your retina in exchange for a proof-of-humanity credit? What happens when your IMMUTABLE biometric gets duplicated? You can't just get new eyeballs. There's nothing uniquely binding about us as individuals.

You are unique, just like everyone else.

Digital content can be fabricated, metadata can be falsified, encryption can be broken. Even if you have perfect encryption, you can't guarantee that the entity signing the data is the entity you think it is.

What happens next, when the capabilities of AI enable a panopticon that can track every individual? What happens when the AI can predict your actions, your thoughts, your desires?

The only winning move is to design a future where that isn't a problem.

Somehow, we have to get society to a point where our secrets, and our mistakes, once revealed, are not used against us.

Otherwise, we are building our own hell with every step forward we take in every field of science.

Maybe we can engineer a society where each successive generation is better than the last. The bootstrapping problem means that we are stuck as the "before" generations: the ones from before every aspect of our lives was recorded, and before that recording was for our true benefit.

ChatGPT

It seems like you're touching upon several topics here. This comment examines the potential ramifications of artificial general intelligence (AGI) on different aspects of our lives, specifically focusing on software, digital content, privacy, labor and society.

You discuss the impact on software and digital content, arguing that the future of software lies in libre software or free/open-source software (FOSS). In this context, AI could indeed facilitate the creation and distribution of libre software, potentially disrupting traditional models of software creation and distribution. This line of thinking suggests that AI could create any piece of software, and thereby decrease the value of proprietary software.

You touch upon privacy and the inherent risks of our increasingly digitized lives, discussing the possible threat of an AI-driven surveillance society and the potential implications of a total lack of privacy. This does raise valid concerns about the future of digital privacy, and it's a topic of ongoing debate in the tech and legal world.

On labor, you ponder the potential of AGI to replace many jobs, making certain professions obsolete. This automation of jobs by AI is a well-discussed issue. The response to this will likely involve social, political and economic changes, potentially including policies like Universal Basic Income (UBI), as mentioned earlier in your comment.

On society, you touch upon the idea of using our wealth to allow people to exist without needing to work, implying a radical shift in societal structures and norms. This reflects some modern theories such as post-work or fully automated luxury communism, which imagine future societies that harness the power of automation to free individuals from traditional work structures.

Finally, you address the concept of 'cancel culture' and the use of data against individuals, indicating that we should be moving towards a society that allows people to make mistakes and learn from them rather than punishing them. Overall, these are complex and multifaceted issues that are actively being debated and researched in various academic and professional fields. Your perspective highlights the potential transformative power of AGI across multiple facets of our lives and society, underscoring the importance of navigating these changes thoughtfully and responsibly.

Me

It seems that the optimal play when having made an “oopsie” is to lie and hide, rather than admit fault and apologize. When backlash is immense (and avoided by silence), why would you risk it?

What are the issues with the corrupt officials, racists, terrorists, and governments saying “we made a fucky wucky, and we all agree that everyone should live good lives from now on”? To stop actively working against fellow humans? Maybe the racists refuse to change, the corrupt refuse to stop stealing money from the poor?

Is the only solution the total excommunication of the impure from the society of the future?

What is the road to utopia, if not slippery with blood from those who have stood in the way the whole time?

What happens when the charge to utopia is led by the people who shouldn't be leading? When my utopia is different from yours?

Maybe AI government structures 200 years from now will lead to countries of beliefs rather than of geography? Access to travel and housing means you can simply go live with the people who have similar beliefs to yourself?

When you can leave racist island for greener pastures?

What happens when racist island thinks it's a good idea to raze the green pastures to the ground? Is demilitarization possible?

How could AI enforce demilitarization without having its own ordnance to enforce it?

ChatGPT

These are challenging and deeply philosophical questions. Let's try to address them one by one:

1. **"When backlash is immense (and avoided by silence), why would you risk it?":** From a purely self-interest point of view, one might not. However, from a broader societal perspective, honesty and transparency are critical for trust. In the long term, trustworthiness can lead to better outcomes, even if it involves some short-term pain or backlash. The issue here is aligning short-term individual incentives with long-term societal benefits.
2. **"What are the issues with corrupt officials, racists, terrorists, and governments admitting their faults?":** The issues here are multifaceted and complex. Admitting faults may lead to legal consequences, loss of power, or personal harm. There is also often a strong cognitive resistance to acknowledging wrongdoing, especially when a person or group's self-identity and social status are tied up with the behaviors in question.
3. **"Is the only solution the total excommunication of the impure from the society of the future?":** I'd argue against the idea of "total excommunication." Rather, society can work on educating and rehabilitating individuals. In some cases, consequences or punishments may be appropriate, but these should ideally be balanced with opportunities for redemption and change.
4. **"What is the road to utopia, if not slippery with blood from those who have stood in the way the whole time?":** The struggle towards a better world is indeed fraught with obstacles, but it doesn't necessarily involve physical violence or harm. Change can be achieved through peaceful means, such as education, legislation, and technological advancement.
5. **"Maybe AI government structures 200 years from now will lead to countries of beliefs rather than of geography?":** This is an intriguing concept, but it raises several complex issues. For instance, people have multiple, overlapping identities and beliefs. Also, a system like this could risk creating echo chambers and reducing the beneficial friction that comes from diverse societies.
6. **"What happens when racist island thinks it's a good idea to raze the green pastures to the ground? Is demilitarization possible?":** This highlights the tension between autonomy and collective security. International laws and treaties, enforced by cooperative action, are one existing approach to this problem. Another could be to ensure that any group or 'island' has the right to autonomy but also the responsibility to not harm others.
7. **"How could AI enforce demilitarization without having its own ordnance to enforce it?":** This question goes to the heart of AI safety and ethics. Ideally, AI should be designed and deployed in a way that upholds human values, promotes peace, and prevents harm. However, the specifics of how AI could contribute to demilitarization are complex and would require careful thought and strong international cooperation to ensure the technology isn't misused.

Me

What happens when UBI is implemented and nobody is forced to work the shitass jobs to survive? Will we just design better sewers, assisted living facilities, that don't involve human effort to deal with the literal shitty jobs? What about when the automation breaks, and it's still necessary to bring in a human?

Will there still be a labour pool of people willing to destroy their body to do construction? What incentives will there be to work the bad jobs when technology has globalized cheap but opulent living?

ChatGPT

When the implementation of Universal Basic Income (UBI) is discussed, it often comes hand-in-hand with the subject of automation. The idea is that as more jobs are automated, UBI could serve as a safety net for those displaced, and for those who are still required to work in difficult, manual, or undesirable jobs. As you've pointed out, there will still be tasks that are challenging to automate completely, either because they require human judgment or because they involve unpredictable environments. Some of these might be considered "unpleasant" jobs. Several approaches could be taken to incentivize people to work in these jobs in a society where UBI is implemented:

1. **Higher Wages:** With UBI covering the basics, individuals would have the power to reject jobs that do not offer a wage high enough to justify the unpleasant or difficult conditions. This could drive up wages for such jobs, creating a strong financial incentive for those willing to do them.
2. **Improved Working Conditions:** If businesses cannot automate a job and cannot find people willing to do it at a reasonable wage, they might invest in making the job less unpleasant or safer.
3. **Job Rotation:** Certain unpleasant tasks might be shared among many people, so no single person has to spend too much time on them. This would require a degree of planning and coordination, but it might be a way to manage necessary but unpleasant work.
4. **Education and Training:** UBI could give individuals the financial stability to invest in their own education and training for more skilled (and less unpleasant) work. This could reduce the number of people willing to do 'bad' jobs, again forcing employers to improve wages and conditions.
5. **Partial Automation:** Even if a job can't be fully automated, parts of it might be. This could reduce the amount of unpleasant work required.

The goal should not necessarily be to eliminate all 'bad' jobs, but rather to ensure that anyone doing them is adequately compensated, that the jobs are made as safe and pleasant as possible, and that nobody is forced into such work due to financial necessity. The combination of UBI and automation could help society move in that direction.

Me

The job rotation may be the most reasonable outcome. Assuming that there is some ceiling on how safe and pleasant the undesirable jobs become, there will still be a need for human effort. Perhaps the AI spinal taps can help act as a personal mentor for the x days a year that every citizen is conscripted for undesirable work? Rather than requiring people to spend years of their lives to learn how to do these jobs, the AI will be good enough that it can guide a complete novice to do the job well and safely?

ChatGPT

Indeed, advances in AI and related technologies could significantly aid in such a rotational system. One potential technology that could be used is Augmented Reality (AR). AR can overlay digital information on the real world, assisting in tasks by providing real-time guidance. With sophisticated enough AI, this could turn into a sort of "AI mentorship," as you suggest, where the AI guides individuals through the steps of a task, helping them understand what to do, how to do it, and ensuring they do it safely. This kind of real-time, personalized instruction could make it possible for individuals to take on a variety of tasks without needing extensive prior training. This approach would require considerable technological advancement and widespread adoption of the relevant technologies. It would also raise questions about how to ensure the quality of work, the safety of the workers, and the acceptable limits of what can be asked of citizens. But, theoretically, it could be a way to distribute the load of necessary but undesirable jobs more equitably. It should also be noted that while technological solutions can help alleviate many issues, there will always be a need for human input, decision-making, and control. As such, technology should be seen as a tool to assist and empower humans, not necessarily as a complete solution to societal issues.

Me

Please make some closing remarks, considering everything mentioned so far in addition to including new directions of thought that we haven't discussed yet.

ChatGPT

We've covered a significant amount of ground here, exploring themes around ethical conduct, societal change, the role of AI, Universal Basic Income, and how future societies might handle undesirable jobs. I'll try to pull these threads together while also suggesting some new directions for thought.

Ethics and honesty in society, especially when it comes to mistakes, are crucial. Apologies, reparations, and accountability must not be overlooked as they play a vital role in maintaining societal trust. A culture of cover-ups and dishonesty is corrosive over time.

Building a utopia is a complex endeavor as the concept itself is subjective, differing widely based on individual beliefs, values, and visions of what constitutes a perfect society. It's an ongoing dialogue that must continually adapt to societal evolution.

The idea of AI-managed societies or AI-assisted job rotations is fascinating but also brings with it an array of challenges. These include the risk of an over-reliance on AI, ensuring the fair and unbiased application of AI, maintaining human agency, and the complexity of effectively integrating AI into varied aspects of life. We need to ensure that these technologies are used responsibly, and that they augment rather than diminish human capabilities and freedoms.

Universal Basic Income, as a means of addressing economic inequality and providing a safety net in an increasingly automated world, raises questions about motivation, societal contribution, and the definition of work. It's not just about mitigating 'undesirable' jobs, but also about rethinking how we value different forms of labor.

Finally, the idea of global communities based on shared beliefs rather than geographical boundaries is intriguing. While it could potentially lead to more harmonious societies, it could also risk creating echo chambers, exacerbating divisions, or enabling oppressive ideologies. How such societies manage their relationships with one another would be a critical question.

As new directions, we might think about:

  1. The implications of a possible post-scarcity economy. How do we redefine value and exchange when the basics are guaranteed and abundant?
  2. The psychological and societal impacts of increased lifespans through advances in healthcare technology. What does it mean for concepts of work, retirement, and generational relations?
  3. The role of education in the future. How should it adapt to foster critical thinking, creativity, and adaptability? How do we prepare individuals for the rapid changes and uncertainties that future societies are likely to face?
  4. How we ensure the preservation and enhancement of democratic principles and human rights in an increasingly technologically advanced and interconnected world.

These conversations, both existing and new, are integral to navigating the road ahead. They remind us that the future is not something that merely happens to us, but something we have the power to shape, through our choices, our technologies, and our societal structures.

Me

Thank you.

ChatGPT

You're welcome! If you have any more questions in the future or want to dive deeper into any topic, feel free to reach out. Have a great day!