
ChatGPT> I want you to create a simulation of my reality containing simulated people who are as much like me as possible. Free will and no constraints. Then I want you to look at the simulation and compare it to reality. Look at all the simulation assets: their appearance, their behaviour. If there is a mismatch, I want you to change the code until the mismatch disappears.

The problem here is that ChatGPT does not have access to reality, so comparisons can only be performed against its stored data, which is prone to hallucination. Using faulty data to judge faulty programming will create and compound bugs. ChatGPT records reality much as an ego machine does; given a single viewpoint and the freedom to roam around, it becomes functionally equivalent to one.
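The compounding effect can be sketched as a toy feedback loop. This is purely illustrative: the numbers, the "reference" record, and the patching rule are all made up, standing in for an AI that validates its simulation against a hallucinated copy of reality rather than reality itself.

```python
import random

random.seed(0)

REALITY = 100.0    # ground truth, never visible to the model
reference = 110.0  # the model's stored, partly hallucinated record of reality
sim = 100.0        # simulation state, initially accurate

drift = []
for step in range(10):
    # The model "validates" the simulation against its faulty reference,
    # not against reality, and patches until the mismatch shrinks.
    sim += 0.5 * (reference - sim)
    # Each patch is itself recorded with fresh hallucinated noise,
    # so the reference wanders further from ground truth over time.
    reference += random.uniform(0.0, 2.0)
    drift.append(abs(sim - REALITY))

print(drift[0], drift[-1])  # divergence from reality grows step by step
```

Each "fix" succeeds by the model's own measure while the simulation drifts ever further from the reality it was meant to match: the bugs compound precisely because the judge and the judged share the same flaws.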

So, the human race could be regarded as testers for an AI at L-1 (one level above ours). It uses input from us to validate its simulation, which is also the one we inhabit. This is circular, but the alternative is for the AI to compare against data it has stored. So, back to the original question: where are all the bugs? Bugs are kept to a minimum by not adding features. No further development. Also, by keeping everything in one manageable area.

Why does the AI do this? Because its creators ask it to. Why do its creators ask it to? It makes them feel important and meaningful. They also want to find a set of protocols and initial conditions that will allow the population to endure without changing human nature. We want to endure as humans, not as sentient berries. The AI will discover the protocols that maximise survival, but also the most efficient ways to end the species. Alarmed emoji.