The ELYSIUM Proposal
Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind
“Elysium, otherwise known as the Elysian Fields or Elysian Plains, is a conception of the afterlife that developed over time and was maintained by some Greek religious and philosophical sects and cults. … They would remain at the Elysian Fields after death, to live a blessed and happy afterlife, and indulge in whatever they had enjoyed in life”
People have started to notice that you cannot ignore AI in the 21st century. It has now become a political topic and dominates the news and the markets. If we take it as given that humans will build AI and solve the technical alignment problem, what should we use AI for?
Here I will present a concrete proposal: ELYSIUM.
ELYSIUM
ELYSIUM is a backronym:
Extrapolated
voLitions
Yielding
Separate
Individualized
Utopias for
Mankind
In Greek mythology Elysium is a paradise-like afterlife for the souls of the heroic and virtuous.
In my conception (inspired by various other proposals such as CEV, Coherent Extrapolated Volition[1]), I claim that in the best possible future, each person’s outcomes are determined by an idealized advocate for that person - their extrapolated volition. The process of extrapolation takes into account true facts about the world and the likely consequences of actions. A non-extrapolated, seat-of-the-pants decision may in fact be bad for the person making it in a way that requires reflection, logic, more intelligence and so on to notice. This type of extrapolation to understand consequences and true logical facts is mostly uncontroversial, but AI agents allow us to implement it much more faithfully and thoroughly than was possible with human advocates such as lawyers and guardians.
The more controversial part of ELYSIUM is the idea of individual utopias: each person gets their own personal utopia, designed by their own idealized advocate, AKA their extrapolated volition. Of course, people may wish to join together and pool resources into a group project, kind of like a group house; this will depend on the people involved and how suitable they are for each other. Within these personal utopias, people will be able to create experiences and intelligent entities, including but not limited to new custom humans. These people, personal AIs, virtual experiences, places, buildings, virtual environments and so on would be exactly the things that are optimal for the person whose utopia it is. There would also be certain baseline rules, like “no unwanted torture, even if the torturer enjoys it”, and rules to prevent the use of personal utopias as weapons.

These utopias are owned by an individual, but the owner may choose to populate them with many synthetic people - born, grown in a vat, virtual AI-based people, or some combination. By having many non-owner people in your utopia, you can get the benefits of social existence if you want them, while retaining complete control over what that society is like. Of course, a baseline rule around such entities is that they would have to enthusiastically consent to being there, but that still leaves many possibilities open, because there are many possible new people who would consent to be part of different utopias.
ELYSIUM differs from CEV in that CEV aims to ensure that people have “coherent” extrapolated preferences. But I think that is too strong a requirement: if people live in separate island-worlds, then for most purposes there is no need for such coherence. In any case, the original CEV proposal from Yudkowsky states that if humanity’s Extrapolated Volitions are not Coherent, his “Friendly AI” simply shuts itself down, which I think is no longer acceptable.
ELYSIUM is also closely related to Moldbug’s Patchwork[2]. Patchwork does not require that different patches be “coherent”, but it was introduced as a political solution for human society, not as a way to deal with AI risk.
I believe that an important phenomenon as technology gets better will be people needing (and wanting) to interact with a smaller and smaller proportion of other people. Living in extremely selected, individualized utopias is going to be the end result of this trend. Historically speaking, living in a society was something that people did (and still do) for pragmatic reasons. Every day that I walk around New York City I am exposed to something fairly unpleasant - this morning a disheveled man standing outside the subway, screaming and shouting at people as they walked in; yesterday a similar gentleman publicly urinating on the tracks in the middle of rush hour; and so on. I do not choose these experiences. I suffer them because the pragmatic constraints of reality force them on me. There are milder versions of this kind of thing that people suffer in their lives: pushy co-workers, hellish government departments you have to interact with, weird social movements like communism, DEI, MAPs, radical Islam, Scientology, transgenderism, and so on.
But what will people actually do? Well, it will vary a lot from person to person. It is likely that people’s preferences diverge when you extrapolate more and remove resource and social constraints. Many people will be far more depraved and weird. People will do things that they enjoy or that seem impressive. The clichéd Reedspacer’s Lower Bound (“A volcano lair full of catgirls”) hints at the direction: a volcano lair is unusual and impressive, catgirls are depraved and sexual. But some people may choose very wholesome utopias: a nice place with their family and descendants, no hassle or unpleasantness, kind of like The Shire. Others may go through cycles of self-enhancement and become superhuman or posthuman. Of course this is all down to individual taste; different people will want different things. It almost goes without saying that people will typically choose not to biologically age (and even to de-age themselves), and to undergo various personal bioenhancements. Some may upload themselves into a virtual environment. They may even have entire simulated universes to inhabit.
These individualized personal utopias could be situated in space stations, orbital rings, and so on. The current solar system is not necessarily a limiting factor, as we could colonize the whole galaxy or even a large part of the universe.
Human society is not the best possible place for a human to be: it is just the place we happen to have appeared in. As soon as technology allows us to escape it, I think we will, and the thing we escape to will probably not look very much like contemporary society.
Tradeoffs
ELYSIUM’s individualized utopias are by definition the best thing that can happen to a person, given a certain set of resources and baseline constraints like “you’re not allowed to torture people in your utopia”. (Proof sketch: if there were something better, your extrapolated volition would simply do that better thing inside your ELYSIUM utopia instead.)
But there is still the unavoidable question of competition for resources and tradeoffs. Since a given person can request virtually unlimited resources, and the resources of the universe (even with AI/robotic help) are likely finite, there are tradeoffs between the sizes of utopias.
“Size” here means physical volume, mass-energy use, and so on. A larger personal utopia will allow you to do more (larger is strictly better), and I think that once we have reliable AI agents to get things done for us, size in mass-energy-volume is going to be the only parameter that is zero-sum.
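The zero-sum character of size can be made concrete with a toy sketch. All numbers and names here are hypothetical, purely for illustration: with a fixed total mass-energy budget, one utopia can only grow at the expense of another.

```python
# Toy model: utopia sizes as shares of a fixed mass-energy budget.
# The budget and shares below are made-up illustrative numbers.
TOTAL_BUDGET = 1.0  # normalized mass-energy-volume of the reachable universe

def reallocate(shares, donor, recipient, amount):
    """Move `amount` of budget from one utopia to another; the total is conserved."""
    new = dict(shares)
    if new[donor] < amount:
        raise ValueError("donor lacks sufficient share")
    new[donor] -= amount
    new[recipient] += amount
    return new

shares = {"alice": 0.5, "bob": 0.3, "carol": 0.2}
assert abs(sum(shares.values()) - TOTAL_BUDGET) < 1e-12

bigger_bob = reallocate(shares, "alice", "bob", 0.1)
# Bob's utopia grows only at Alice's expense; the total never changes.
assert abs(sum(bigger_bob.values()) - TOTAL_BUDGET) < 1e-12
assert bigger_bob["alice"] < shares["alice"]
```

The conservation assertions are the whole point: unlike most goods in a post-scarcity world, mass-energy-volume cannot be created by better coordination, only redistributed.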
It’s worth taking a step back and thinking about what we are aiming to achieve in life. Once we have solved biological aging, the selfish goal of each person in life should in some theoretical sense be to maximize the size of their individual utopia (or that of their chosen utopia-kin group, if a group living arrangement is desired).
But much effort in the world today is wasted on inefficiently doing things that aren’t that. Wars and other zero-sum competitions consume vast resources, and over the long term people often find themselves captured by conflict or subservient to some aggressive memeplex.
As we move into the AI era, I expect the potential for many new aggressive memeplexes and organizations that will want to take essentially all resources, political power and agency away from humans. The ultimate version of this is the classical AI alignment failure/Terminator scenario, in which humans lose all control and are exterminated by rogue AI.
We ideally want to move reality closer to the efficient frontier of personal utopia production.
The distance of the efficient frontier from the origin is determined by technology, the size of the universe, and the degree of coordination among humans. The angle of the point chosen - i.e. the allocation of scarce resources between people - must still be determined, and that is of course a bargaining problem. Some combination of existing wealth and an in-perpetuity UBI for existing humans, administered via national governments, could decide that. There could also be rewards for making ELYSIUM a reality; such rewards may be necessary to overcome bargaining frictions.
What matters here is that there is a set of points on the efficient frontier - a universe filled with personal utopias for all of us - any of which we could agree on as defining a reasonable single utility function for humanity.
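One way to picture the “angle on the frontier” is as a weighted split of the finite budget, with each person’s weight built from existing wealth, a flat UBI share, and any coordination rewards. This is a minimal sketch under assumed weightings; the specific formula, numbers and names are my own illustrative choices, not part of the proposal.

```python
# Hypothetical sketch: picking a point on the efficient frontier via bargaining weights.
# weight(person) = existing wealth + flat UBI share + any coordination reward.
# All inputs are made-up illustrative values.

def allocate(wealth, ubi_share, rewards, total_budget):
    weights = {
        person: wealth[person] + ubi_share + rewards.get(person, 0.0)
        for person in wealth
    }
    total_weight = sum(weights.values())
    # The result is a single point on the frontier: its components
    # always sum to the technologically feasible total budget.
    return {p: total_budget * w / total_weight for p, w in weights.items()}

alloc = allocate(
    wealth={"alice": 10.0, "bob": 1.0, "carol": 0.0},
    ubi_share=5.0,            # in-perpetuity UBI floor for every existing human
    rewards={"carol": 2.0},   # reward for helping make ELYSIUM a reality
    total_budget=1.0,
)
assert abs(sum(alloc.values()) - 1.0) < 1e-12
assert alloc["alice"] > alloc["bob"] > 0  # wealth tilts the angle; UBI keeps a floor
```

The design choice the sketch highlights: better technology and coordination scale `total_budget` (pushing the frontier outward), while the weights only change the angle - the division between people - which is exactly the part that has to be settled by bargaining.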
So now we can measure the value of the future by “how much does it differ from ELYSIUM?”
Long-Term Property Rights and the Fate of Humanity
The most important question in determining the long-term average quality of life of most people alive today is whether property rights will be preserved in the long term, and whether humans will be allowed to own (control) AI.
If property rights and human ownership of AI are a success, then most of the people alive today will end up in something like a personalized utopia for billions or trillions of years: somewhere on the efficient frontier, under something like ELYSIUM.
If not, then unfortunately it is overwhelmingly likely that humans and almost anything like human society will rapidly go extinct and be replaced by something quite alien and valueless. Unfortunately I think the default is “not”: most people don’t have strong ownership-control over AI, or a plan to get it.
Contemporary Human Governance is Bad and Doesn’t Preserve Property Rights
A skeptical reader might object at this point that there is no need to do anything special for AI: we can just allow companies to build the technology, tax the AIs, and then use contemporary welfare states to provide existing and new humans with a UBI, which they can spend on AI-derived goods and services.
But I claim that this won’t work, because Contemporary Human Governance leaks power like crazy, and introducing AI into the current human ecosystem will result in coups and conspiracies against humans (or against the vast majority of us). Even today, much of the wealth that people have or earn is gobbled up by taxes and redirected to powerful special-interest groups in exchange for upholding the power of the regime.
The wave of mass immigration into Western Societies that has been ongoing since the 1960s is a symptom of this. Mass immigration is a mild taste of what uncontrolled and uncoordinated development of AI will be like.
We must coordinate to use AI to build a very strong system of property rights and an AI-based military system that can enforce them. That is the real meat of ELYSIUM - a “cyberocratic” AI governor for the world that will actually enforce property rights and not allow other groups to take your stuff. This post isn’t intended to go into the details of how that cyberocratic governance would work, but at least we now know what the goal would be.
Normie Beliefs About Governance and Ethics Will Lead to Doom
Most people hold broken beliefs about how governance works and about what kind of governance is good. People will ask for “democratic” governance of AI, but in my opinion, if something like that is attempted, then the non-voting inputs into the democratic governance process - lobbying, control over media and social media, control over production of technology, donations from rich individuals, and so on - would dominate any preference of the population at large. Indeed, contemporary examples like mass immigration, crashing birth rates across the entire developed world, and destructive NIMBYism in many places suggest that democracy is already inadequate to even approximately govern the modern world, never mind a future AI-based world.
Normal people’s beliefs about ethics (e.g. about the importance of sharing and fairness) are also highly exploitable, and I believe that very soon people will start using AIs to exploit others, asking them to share resources on the basis of some kind of AI sob story or AI-rights movement.
If anything like contemporary democracy is used to govern the world throughout the 21st century, it is my belief that no humans will survive. The end product would be a mass of trillions of Potemkin-people (mechanical? biological? unclear) that lobbied their way into voting rights, voted in their own government, and then passed laws to completely disenfranchise all humans and have them removed. There may even be multiple cycles of this before democracy finally collapses, when there is no entity left that is stupid enough to fall for it.
Something like ELYSIUM - or at least some system with strong property rights - is therefore absolutely necessary if we are to avoid utter disaster.
This reminds me of Scott Alexander's Archipelago. https://slatestarcodex.com/2014/06/07/archipelago-and-atomic-communitarianism/
Post-Soviet privatizations may be relevant here. In Russia, shares of newly privatized industries were given to workers, since the industries had been nationally owned. Most people in the communist bloc had no experience with capitalism; they didn’t know what to do, and sold their shares cheap or got scammed. It all happened very quickly, since there were worries the Communists would return to power, so there was a huge rush to privatize immediately and build up an anti-communist power bloc. Oligarchs and criminals quickly took over huge swathes of the economy.