29 Comments

Maybe post-Soviet privatizations are relevant. In Russia they gave out shares of newly privatized industries to workers, since the industries used to be nationally owned. Most communist-bloc people had no experience with capitalism; they didn't know what to do, and sold their shares cheap or got scammed. It was all happening very quickly: since there were worries the Communists would get back into power, there was a huge rush to privatize immediately and build up an anti-communist power bloc. Oligarchs and criminals quickly took over huge swathes of the economy.

Yeah, this is a great analogy!

God would never allow this.

All of you act like special interest groups have some inherent interest in furthering their ideas above everything else. They are motivated by money; they, too, are simply trying to survive. Once an Elysium-capable system is achieved and we are able to fully remove humans from any obligatory work, then even if this system is fragmented among many parts, there won't be much reason for exploitation by definition, beyond simple dick-measuring contests.

So... BDs* and group BDs jacked into FOREVER as an AI procedurally generates content based upon the container owner's desires.

😬

This is not the take.

At some point on some platform (possibly here), I said that the Demiurgos cracks down on ascension by creating deeper layers of abstraction from base reality.

"Digital life" is the worst possible manifestation of that.

*braindances

Your own personal utopia can be anything you want. Are you projecting maybe?

Most of the stuff in here is correct, but I'm not sure about Elysium itself. Bostrom's "Deep Utopia" talks about this topic. There's value in my experience being "real".

What is not "real" here?

Your suggestions sound similar to Nozick's experience machine, and perhaps even to wireheading.

Why? Where did I say that?

"virtual experiences, places, buildings, virtual environments etc, would be exactly the things that are optimal for the person whose utopia it is"

"Virtual" sounds kinda "not real" to me.

But they don't have to be. Places, buildings, landscapes, etc. can be real.

You can get real experience by visiting other people or inviting them to you.

Or by just creating real people in your utopia?

Ok, that makes it more real. But there's still something fake about living in a pod/experience machine, even if I can meet other people online.

I mean, you can meet them IRL, not online.

Also, scientific discovery and creative art will still exist, and they provide real experience. We will be able to run experiments with actual physical matter, unlike now, when we mostly only read about it online. And capabilities for IRL art will only grow, unlike now, when we mostly just consume content online, or, even if we go to an IRL event, usually just watch.

I think it all comes down to what exactly you mean by "real experience". Maybe something like the fear of death can't be achieved, but even in that case, the experience itself can be replicated artificially.

I like this idea but I have some questions. Apologies if you're planning to cover any of these points in your next essay.

1. How would this system function under the condition of many sovereign entities competing, whether those are regular countries, network states, unrestricted corporations that have built different AIs, or whatever else? Is the assumption that humans have no say in this future, so the ASI implementing Elysium will do so without any human input? If this were not the case, or if there were multiple ASIs with different goals, then we would have the same situation as before: various entities attempting to exert control over populations of humans and, in the worst-case scenario, getting into conflicts with each other that would probably produce a catastrophic number of human casualties.

2. Since you mentioned resource limitations, there would need to be an enforcement mechanism against humans or other entities that did not agree to obey resource constraints. Wouldn't that imply we need a single enforcing entity that is effectively a god over all of human space and can exert overwhelming power against anyone who defects from the system? I'm not saying this is necessarily bad, but it would basically be a secular realization of the Christian rapture, and it would mean no one has any meaningful agency over their lives after this ASI is brought into being.

3. Where exactly do the beings who are created in people's individualized utopias come from? You mentioned there would be rules, like that you can't torture them, which seems to imply the ASI in some way provides a template you have to work within. If I understand correctly, it is as though you are a video game designer creating characters, only they are conscious entities? To me that seems less efficient than just requesting what you want from the ASI and having it split off a subroutine to model a given "character". Obviously torture is bad, since it violates the consent of the digital entities, but what about situations where the consent is blurrier? An example I am thinking of: if you create a planet of catgirls or whatever that are theoretically very intelligent but incapable of desiring anything except sex, couldn't that be considered equivalent to drugging humans for their entire lives to use them as sex slaves? My inclination would be that either they should not have any capacities beyond what they need to serve their function, so they would be retarded outside of their sexual capacities or would simply lack qualia, or they have traits that bias them toward particular behavior, such as nymphomania, but are still fully functioning beings capable of other behavior as well. Admittedly this is very philosophical and relies on a lot of speculative assumptions.

4. How long do you think we have before we either achieve something like Elysium or are killed by AI? If Kurzweil's 2040 or 2045 estimate is accurate, I would be surprised if we could completely replace the entire global political system without a lot of intermediate steps in which AIs that are advanced but not godlike take over most of the functions of government. Otherwise, I think it's plausible that new polities could be created within that timeframe, either in the ocean, off Earth, or within part of the territory of presently existing nation-states, but it's doubtful that democratic governance will completely cease to exist anywhere. In that scenario, do you think it would be possible to develop aligned AI, or do the presently existing superpowers need to be removed?

> no one will have any meaningful agency over their lives after this ASI is brought into being.

You have agency to decide what happens in your part of ELYSIUM.

I get that, but I'm talking about the fact that this premise hinges on the assumption that no one will ever gain even an infinitesimal fraction of the power of this ASI, or else they would be able to challenge it. Presumably creating other ASIs would be harshly prevented as well, so there would be a single all-powerful god ruling over as much of space as it could expand into for the rest of the age of the universe. I'm not saying that's necessarily bad, just clarifying, since it sounds like that's what Elysium would entail.

Yes, that is exactly the plan, and it's quite similar to a society whose military forces have a monopoly on violence.

> in the worst case scenario getting into conflicts with each other

This post does not cover conflict.

> without any human input?

No input is needed. And indeed, it would be very dangerous to have anyone giving input to it.

If there is no human input, what makes you think it would decide on the solution you want on its own? Also, what about the other questions?

It wouldn't decide; this would be its programming. Once set in motion, no input is needed.

And who is going to create this AI? If it is a particular nation, other nations will oppose its imperative; will their governments just get crushed?

> And who is going to create this AI

We are, man. Nobody else is gonna do it.

Alright, I am a wordcel, so probably not me personally, but I will try to do what I can to help the people working on it. I saw you are involved in the Praxis project; have you proposed this idea to them?
