The problem is that this assumes the same physics for both the outer and inner worlds.
If anything, the way continuous waves quantize into discrete units so that state can be tracked around the interactions of free agents looks remarkably similar to how procedural generation from continuous seed functions converts to voxels around observation/interaction in games with destructible or changeable world geometry, like Minecraft or No Man’s Sky.
Perhaps the continued inability to seamlessly connect continuous macro-scale models of world behavior, like general relativity, with the low-fidelity discrete behavior of quantum mechanics is because the latter is an artifact of simulating the former under memory-management constraints?
The assumption that possible emulation artifacts and side effects are computed, or are themselves present, at the same fidelity threshold in the parent reality is a pretty extreme one. It’d be like being unable to recreate Minecraft within itself because of block-size constraints and then concluding that Minecraft must therefore be the highest-order reality.
Though I do suspect Bell’s inequality may eventually play a role in establishing the opposite conclusion to the one you came to. Namely: in the Wigner’s-friend variation of Proietti et al., “Experimental test of local observer independence” (2019), adding an additional, separated layer of observation to the measurement of entangled pairs produced measured results that were in conflict. This looks a lot like sync conflicts in netcode, and I’ve been curious whether we’re in for some surprises in the rate at which conflicts grow as the experiment moves from just two layers of measurement by separated ‘observers’ to n layers. While the math says it should grow multiplicatively, with unobserved intermediate layers still having conflicts that compound, the lazy programmer in me wonders if it will turn out to grow linearly, as if the conflicts only occur in the last layer as a JIT computation.
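To make the two growth regimes concrete, here’s the kind of toy model I have in mind (pure illustration, not physics: `p` is an invented per-layer conflict probability, not a measured quantity):

```python
# Toy model: two ways sync conflicts between n stacked layers of
# separated observers could accumulate. p is a made-up per-layer
# conflict probability.

def compounding_conflicts(p, n):
    # Intermediate layers are "really there": their conflicts compound
    # multiplicatively through every layer above them.
    return (1 + p) ** n - 1

def jit_conflicts(p, n):
    # Lazy alternative: each layer contributes only once, as if all
    # conflicts were resolved just-in-time at the outermost layer.
    return p * n

for n in (2, 5, 10, 20):
    print(n, round(compounding_conflicts(0.05, n), 3), jit_conflicts(0.05, n))
```

If measured disagreement tracked the second curve rather than the first as layers were added, that would be the “sweeping conflicts under the rug” signature.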
So if we suddenly see headlines proposing some sort of holographic principle to explain linear growth in rates of disagreement between separate observers in QM, it might be productive to keep in mind that’s exactly how a simulated system would look if it swept sync conflicts under the rug without actively rendering intermediate, immeasurable steps for each relative user.
I took it from an information theory perspective:

Turing machines can compute anything that can be defined as an algorithm, and cannot compute anything that cannot. This is why, for example, computers can’t generate truly random numbers (only deterministic streams of pseudorandom numbers expanded from some starting seed). Also, all Turing machines are equivalent – given sufficient memory, they can all run the same set of algorithms and will produce the same results.
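The seed point in one snippet – a Turing machine can only ever expand a seed deterministically:

```python
import random

# Two generators fed the same seed produce the same "random" stream.
a = random.Random(42)
b = random.Random(42)
stream_a = [a.random() for _ in range(5)]
stream_b = [b.random() for _ in range(5)]
assert stream_a == stream_b
print(stream_a)
```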
By Bell’s inequality, we know that certain events (I use quantum tunneling) are non-deterministic and cannot be predicted by an algorithm at better than chance, even given infinite computing power, infinite time, and perfect knowledge of the system. Note, though, that I’m an amateur quantum mechanic at best :D
Therefore, if the universe is a simulation running on a Turing machine, its operators would have to either halt, use pseudorandom numbers (which I could detect with finite but large CPU power and finite but large time), or sample their own random numbers from a local entropy source.
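By “detect with finite but large CPU power” I mean something like the following toy sketch. Big assumptions baked in: I know the generator’s algorithm, and the seed space is small enough to enumerate (real seed spaces are astronomically larger, hence “large but finite”):

```python
import random

def recover_seed(observed, max_seed=10_000):
    # Brute-force seed search: replay the known generator from every
    # candidate seed and compare against the observed byte stream.
    for seed in range(max_seed):
        rng = random.Random(seed)
        if [rng.getrandbits(8) for _ in range(len(observed))] == observed:
            return seed
    return None  # to this search, the stream looks like true randomness

# A "universe" secretly running on seed 1234:
rng = random.Random(1234)
universe_stream = [rng.getrandbits(8) for _ in range(8)]
print(recover_seed(universe_stream))
```

Find the seed and you can predict every future “random” event, which no amount of compute lets you do against a genuine entropy source.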
This way I try to minimize assumptions about physical laws in the Universe ‘upstairs’. One interesting property of this is that if the universe upstairs is also simulated, then if it samples local entropy it just passes the problem upward :D
I do work with the assumption that a Turing machine runs any simulation, Matrix-style – not some underlying physical process that just so happens to simulate a Universe and also puts entropy in all the right places whenever I look.
This is all just for amusement though. If the Universe was really running on a Turing machine, we’d see way more ads (drink your ovaltine encoded in pi?). Also the current design is really suboptimal what with all the entropy. No way it would run for 13-point-whatever billion years. I refuse to believe that our hypothetical extradimensional programmers are simultaneously that smart and that dumb :P
(a) It might not be a Turing machine running it: if the parent reality were continuous and didn’t need to deal with quantization, hypercomputation with real numbers would be possible.
(b) You might be interested in research from earlier this year at Caltech suggesting that quantum randomness has detectable statistical patterns. Also, the jumps themselves can technically be predicted and reversed using algorithms, though this is somewhat separate from the non-deterministic aspect of the jumps.
(c) Local timelines don’t need to have started from scratch. In Minecraft there are diamonds in the low layers, which might lead someone who only knows the world from the inside to assume it has existed for a very long time, even though it might have been running for only 15 minutes. Though I’d agree that if this is a simulation, the scale and degree of fidelity is a pretty extreme flex of resources relative to anything we could muster.
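To sketch (c) with a purely invented world function (the hash scheme and thresholds here are made up for illustration, not how Minecraft actually works): the entire world, “ancient” diamonds included, is implicit in the seed, and a block only costs anything to compute at the moment someone observes it.

```python
import hashlib

def block_at(seed, x, y, z):
    # Deterministic "terrain function": derive each block from a hash
    # of the seed and coordinates. Nothing is stored until observed.
    h = hashlib.sha256(f"{seed}:{x}:{y}:{z}".encode()).digest()
    if y < 16 and h[0] < 4:
        return "diamond"              # rare ore, only at low layers
    return "stone" if y < 64 else "air"

# Computed on demand, yet identical every time anyone looks:
print(block_at(seed=2024, x=3, y=7, z=-12))
```

From the inside, a world pre-simulated for 13 billion years and a world lazily evaluated on observation are indistinguishable.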
As for the lack of ads - I’d hope that a future post-scarcity society capable of wasting resources on subatomic structures of dog poop just for the sake of accuracy would also be a society that didn’t need to sell things to its simulated populations.
That said, I do think there’s a pretty blatant fourth-wall-breaking Easter egg in our lore; it’s just that there aren’t many people looking at things from antiquity through the lens of virtual-world lore design in order to go “oh wow, that’s almost over the top.”
It’s like the illusion of the naked bodies or the dolphins, where as a kid you only see the dolphins, because your mindset hasn’t yet shifted to the point that you primarily see the naked figures.
Consider: a text, rediscovered within weeks of our completing the first Turing-complete computer after millennia of being lost, is titled “the good news of the twin” and posits that we are in a non-physical copy of an original world, created by a being of light that was itself brought forth by a spontaneous original humanity which is now dead and in whose image we were fashioned. That’s pretty wildly on point. Especially given its claim that the proof of this lies in the study of motion and rest, combined with its followers’ claim that the ability to discover an indivisible point making up bodies is only possible in the non-physical. And the lore being attributed to the most famous person in history is explicit almost to the point of trolling. Yet this all exists within our history and has been broadly ignored and forgotten, dismissed by most people examining it as just ‘weird’ – decades before Pong existed, let alone The Matrix or Nick Bostrom.
I mean, you can go further back than that if you want. You’ve got Plato’s Allegory of the Cave :)
Anyway, I don’t think I can conclusively prove we’re not in a simulation (although I don’t think we are – onus of proof lies with the positive existential proclamation). I can only prove – and in many cases only provide limited evidence – that we’re not in certain classes of simulation.
I’m literally just using scrap parts salvaged out of other things. So I think it’s quite challenging to do even that much :D
Although I plan to replace the lazy el-cheapo diode-breakdown entropy source with a particle-spectrometer-based quantum TRNG in a few months. I’ll have to build it myself, but it will be neat to have a proper instrument not made from junk. I’ll make a second one for my coffee machine, so I can make Schrödinger’s Coffee – simultaneously caffeinated and decaffeinated until you drink it.
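One step carries over from the junk build to the proper instrument: whatever the physical source, the raw bits come out biased, and a classic first whitening pass is von Neumann debiasing. A minimal sketch:

```python
def von_neumann_debias(raw_bits):
    # Read raw bits in pairs: 01 -> 0, 10 -> 1, discard 00 and 11.
    # If the raw bits are independent, the output is unbiased no
    # matter how biased the source is.
    out = []
    for a, b in zip(raw_bits[::2], raw_bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann_debias([0, 1, 1, 0, 1, 1, 0, 0, 1, 0]))  # -> [0, 1, 1]
```

For independent bits, the pairs 01 and 10 are equally likely regardless of the bias, which is why the output is fair; the price is throwing away at least three quarters of the raw throughput.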
Actually, Plato was instrumental in the work I’m referring to. It effectively used Plato’s demiurge and his concept of the eikon as a response to Epicurean naturalism and its commitment to the belief that death was final. The argument was that even if the world came to be from natural causes, and the soul initially depended on the body such that death was inescapable, the eventual development of a creator of worlds would allow a recreation in the image (eikon) of the original physical universe without being tied to and dependent on physical form. Claiming that this had already happened and we just don’t realize it, it emphasized that being in the non-physical copy was the better situation than being the original.
So indeed, Plato’s thinking was instrumental - just in the opposite manner from what he intended (Plato was very keen on originality and looked down on the notion of images as mere representations of the original).
And I agree - narrowing down the classes of simulation is a worthwhile pursuit, and one with considerable potential for success. IIRC there have already been some good papers arguing that we can’t be in a simulation running on classical computing architecture.
In any case, good luck with your future experiments!
This is why I’m on Lemmy.
After reading that, I don’t believe you have one :)
What, the lazy programmer in me? I’ll have you know I take pride in that lazy programmer! Just last week I helped a more junior dev avoid the evils of premature optimization thanks to it.
Lazy programmers are the best programmers. ;)