Optimizing the simulation software we live in
A simulated universe can be optimized in many ways. Given those optimizations, a sufficiently advanced civilization, despite being descended from spear-flinging cavemen, could write software for a sufficiently powerful computer that simulates a world much like the one we inhabit. Here are a few thoughts on those optimizations.
Preface
I will only discuss optimizations that apply to simulations other than video games, because there are already many great books describing the techniques modern video games use to simulate large virtual worlds. Michael Abrash is a favorite author of such works for many people, myself included.
In addition, I assume that the world we want to simulate is like the one we currently inhabit, or at the very least one able to foster intelligent life; if we didn't live in such a world, simulation theory wouldn't exist. I also assume that intelligence can emerge from computation, regardless of substrate. I take that idea as an axiom and suggest the reader do the same for the rest of the article.
I'd be remiss not to clarify the meaning of simulation as used in this article. When most people think about simulations they imagine a world like in The Matrix, where conscious beings can be plugged into a supercomputer. The simulations I discuss exclude any sort of I/O or interaction with the outside world. They just run and do their thing, but the same optimizations I lay out can also be applied to the plug-in kind; adapting them is left as an exercise for the reader.
This article is not intended only for programmers; it is meant for everyone. I did my best to explain the programming concepts mentioned here in simple terms, so don't be discouraged if the article seems too technical. In addition, this post is simply an exploration of primitive ideas on the topic and is by no means a rigorous evaluation of the question at hand.
Finally, I will not argue in this article whether we are currently in a simulation. If you wish to read about that, you can read Nick Bostrom's original simulation argument. What I will discuss are the various methods that could be applied to optimize a simulation that looks and feels like the world we currently inhabit.
Speed doesn't matter
Software can be optimized in many ways. These revolve around three general concepts: speed, size, and power consumption. I won't discuss the last one in this article since it's a lot harder to estimate and is mostly derived from the other two.
So, let's start with speed.
Firstly, the simulation doesn't have to run in real time: the computer can spend millions of years computing the next millisecond of the universe, and the beings inside will still experience it as one millisecond. Until that single tick is computed, no processes take place in the simulated world. This makes simulation more feasible, since the supercomputer doesn't have to keep pace with the outside world; it could take its sweet time, spending billions of years on a single timestep.
Example: imagine you have The Sims installed on a slow, underpowered computer. Would the lag make any difference to the citizens of the Sims world? A tick, or timestep, is the smallest unit of time that can exist in a given universe, and the inhabitants' perception of time is measured only in those ticks. If one tick in the Sims universe takes four "real" ticks to compute, everything inside simply happens four times slower. Since every atomic interaction depends on the computer's calculations, everything is effectively paused until the next state of the world is calculated and "written to memory".
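Here is a minimal Python sketch of that idea, under the assumption of a fixed-timestep loop; the `World` class and `advance_one_tick` method are hypothetical stand-ins, not any real engine's API.

```python
# Sketch: simulated time advances by a fixed tick per update, no matter how
# long the host computer takes to compute that update.
import time

TICK = 1e-3  # one simulated millisecond per update (arbitrary choice)


class World:
    def __init__(self):
        self.sim_time = 0.0  # time as experienced inside the simulation

    def advance_one_tick(self):
        # ... arbitrarily expensive physics would go here ...
        self.sim_time += TICK


world = World()
wall_start = time.time()
for _ in range(1000):
    world.advance_one_tick()
wall_elapsed = time.time() - wall_start

# The inhabitants only ever "see" world.sim_time; wall_elapsed could be
# nanoseconds or millions of years without changing anything inside.
print(f"simulated: {world.sim_time:.3f} s, wall clock: {wall_elapsed:.3f} s")
```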
Size matters
The universe is mostly nothing with infrequent somethings scattered around in small groups and even less common concentrations of somethings. In other words, the universe is mostly empty space dotted with celestial objects in clumps and black holes. That's good for us, aspiring simulation developers, because it means most of the universe doesn't matter.
Despite this, there are still lots of atoms in the universe, all of which need simulating. The more stuff we have in the simulation program, the more memory we need. And memory, compared to CPU time, is a larger problem because we have less space than we have time.
Our computer needs to be significantly smaller than the universe it simulates. To achieve this, we can compress the world's data, which could shrink the required memory by a factor of billions. The larger the actual world, the more there is to gain, since any redundancy in the data is simply compressed away. Even so, the size of such a computer would still largely depend on how big the actual world is.
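As a toy illustration of why emptiness compresses so well, here is a small Python sketch; the grid, its contents, and the `lookup` helper are invented purely for illustration, not a real storage scheme.

```python
# Dense representation: one entry per cell, almost all of them empty.
dense = [["empty"] * 1000 for _ in range(1000)]
dense[3][7] = "star"
dense[512][90] = "planet"

# Sparse representation: a dictionary keyed only by occupied coordinates.
sparse = {(3, 7): "star", (512, 90): "planet"}


def lookup(cells, coords):
    # Anything not stored explicitly is empty space by convention.
    return cells.get(coords, "empty")


print(lookup(sparse, (3, 7)))    # "star"
print(lookup(sparse, (42, 42)))  # "empty"
# A million cells shrink to two stored entries: the emptier the universe,
# the better this kind of compression pays off.
```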
There are 3 cases of size optimization, depending on the size of the universe. I'll list them, but will only consider the best case. Let's get the easiest one to dismiss out of the way: the world is as big as modern physics says it is. If so, we'd either need an absurdly large computer to simulate everything, or we'd have to resort to using a black hole as our supercomputer. Both options are close to infeasible, though it might come as a surprise to some that the black hole one is closer to reality.[1]
In the second scenario, the entire world is only the size of the observable universe. For that, a computer the size of a galaxy[2] should suffice. As far as humans on Earth are concerned, it doesn't even matter if the world extends beyond the observable universe; if we can never interact with anything outside it, it might as well not exist. Which leads us to the next case.
The third (and best) scenario: the world is only as big as our solar system, and the simulation constantly plays tricks on us to squeeze as much performance out of the supercomputer as possible. For example, whenever the simulation detects that a conscious being is looking at the stars with a telescope, it materializes textured billboards in the lens of the observer. We could also assume that unconscious beings are simple automatons that don't need tricking. The simulation can apply the hundreds of other tricks game developers use to squeeze 60 frames per second out of the GPU (occlusion and frustum culling around every conscious being, BSP trees, different levels of detail for every object...). This further reduces the required speed and scale of our machine.
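A minimal sketch of the "only render what a conscious observer can resolve" trick, in the spirit of game-engine level-of-detail selection; the thresholds, detail levels, and the telescope factor are invented for illustration.

```python
import math


def detail_level(observer_pos, object_pos, has_telescope=False):
    # Pick how much detail to simulate based on how well the observer
    # can resolve the object (assumed magnification factor of 1000x).
    distance = math.dist(observer_pos, object_pos)
    effective = distance / 1000.0 if has_telescope else distance
    if effective < 1.0:
        return "full particle simulation"
    if effective < 1e6:
        return "coarse physics approximation"
    return "textured billboard"  # distant stars as painted dots


# A nearby rock gets real physics; a distant star gets a flat image.
print(detail_level((0, 0, 0), (0.5, 0, 0)))         # full particle simulation
print(detail_level((0, 0, 0), (1e12, 0, 0)))        # textured billboard
print(detail_level((0, 0, 0), (1e12, 0, 0), True))  # still a billboard, just sharper
```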
In this last case, the main problem would be the number of conscious, observing brains that the simulation has to constantly monitor and trick. The more conscious eyes you have on the world, the more of it you have to render in significant detail, which lessens the opportunities for optimization, so you'd need to keep that number in check. Maybe through some kind of fundamental rule of the universe that makes conscious beings produce fewer offspring past some threshold. Perhaps we wouldn't even be able to observe a world that lacked such a rule, because it would have run out of RAM and crashed long ago. So we might have a sort of natural selection of anthropic simulations.
Of course, there is a limit to how much of the world can be faked. The program would still have to perform operations down to the level of quarks (or whatever smaller particles we discover/invent in the future) to correctly produce chemical and physical reactions.
A material mind hand-crafting something like that is far-fetched and would take immense effort, but a mind kicking off a simulation that generates fundamental rules and then a universe based on them seems plausible. Stephen Wolfram seems to be doing something like that, though on a much smaller scale.
Our universe might be one of the surviving simulations: one that managed to be interesting by allowing conscious beings to develop, while also managing not to crash the computer, because its rules make such a world cheap enough to simulate.
We have less space than time
Even though CPU time is a smaller problem than memory size, that doesn't mean we can freely trade the former away to optimize the latter. The universe has a time limit, and our simulation must run long enough for intelligent life to develop. It took our world 13.75 billion years to produce a majestic species such as humans, so it's reasonable to use that as the measure of how much simulated time we need.
The simulation hardware needs to be powered, preferably by a star[3], so we can give it until the last star in the universe dies. That gives it roughly 100 trillion outside-world years to simulate 13.75 billion years. Since 100 trillion is 100,000 billion, the simulation can spend up to 100,000 / 13.75 ≈ 7272 outside-world years on each simulated year, or equivalently, about 7272 outside-world seconds computing each second of the simulated universe. Compared to modern video games, which must produce each simulated second within one real second (budgeting about 1/60 of a second per frame), that is a relative cakewalk.
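The time budget from the paragraph above, spelled out in a few lines of Python (the ratio of years is the same as the ratio of seconds):

```python
# Back-of-the-envelope slowdown budget.
outside_years_available = 100e12   # ~100 trillion years until the last stars die
simulated_years_needed = 13.75e9   # age of a universe like ours

slowdown_budget = outside_years_available / simulated_years_needed
# ~7272 outside-world seconds available per simulated second
print(f"{slowdown_budget:.1f}x slower than real time is still fast enough")
```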
In contrast, the memory requirements are much harder to satisfy. The more memory we need, the more material we'll need to build the supercomputer that houses our simulation. There are 10^78 to 10^82 atoms in the observable universe and around 10^57 atoms in our solar system. Let's assume that each atom takes up a kilobyte of memory and that we store all the data on 1-terabyte microSD cards. A terabyte is 10^9 kilobytes, so one card can hold the data for 10^9 atoms. That means we would need at least 10^57 / 10^9 = 10^48 microSD cards for the simulation. The volume of a microSD card is 165 × 10^-9 m^3, so the memory component of the supercomputer would occupy about 165 × 10^39 m^3. For comparison, the volume of the Earth is about 10^21 m^3. So, unless we store that data in something as compact as DNA, the size of the computer would exceed the size of our solar system (which, in our best-case scenario, we assumed to be roughly the full extent of the universe).
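The same back-of-the-envelope arithmetic as a Python snippet, using the assumptions stated above (1 kilobyte per atom, 1 TB microSD cards):

```python
# Memory estimate for a solar-system-sized universe.
atoms_in_solar_system = 1e57
bytes_per_atom = 1e3          # assumption: 1 kilobyte of state per atom
card_capacity_bytes = 1e12    # a 1 TB microSD card
card_volume_m3 = 165e-9       # 15 mm x 11 mm x 1 mm

atoms_per_card = card_capacity_bytes / bytes_per_atom   # 1e9 atoms per card
cards_needed = atoms_in_solar_system / atoms_per_card   # 1e48 cards
total_volume_m3 = cards_needed * card_volume_m3         # ~1.65e41 m^3

earth_volume_m3 = 1e21
print(f"cards needed: {cards_needed:.0e}")
print(f"storage volume: {total_volume_m3:.2e} m^3 "
      f"(~{total_volume_m3 / earth_volume_m3:.0e} Earth volumes)")
```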
Conclusion
I hope I managed to say something new about the topic of simulation optimizations, or at least to provide conversation fodder at parties. I believe it is feasible to construct a simulation that closely resembles our world, provided the very unlikely but entirely possible best cases I listed accurately mirror our reality. This wouldn't require a huge change in the kind of technology we have now, but it would require enormous scale. It seems more plausible that humans can make a lot more of the same thing than that we can make a much better new thing. In other words, the kind of technology such projects require is within our reach.
There are two main things that make our job of developing such simulations easier: the fact that the universe can run for a long time and that game developers have already thought of amazing optimizations to help us save on simulation timesteps.
[1] Joe Armstrong talks about a black hole computer in his talk "The Mess We're In" to explain why his Grunt configuration didn't work.
[2] I have no idea of the actual estimate, but here's how my thinking went when writing this: if I can compress the data of the universe and make it billions of times smaller, then given that there are billions of galaxies in the world, one galaxy should be enough to store all that information.
[3] It is assumed that the power consumption of our supercomputer won't exceed the energy generated by something like a Dyson sphere attached to a star.