Question A slightly nutso system architecture question about PC/laptop CPUs

Mar 15, 2026
I've been digging around in LLMs: what they are, what they can and can't do, how they get better, and what hallucinations are all about.

The "how do they get better" discussion sort of bloomed, and we came across this idea of an observer stack that feeds a log back into an AI engine, letting the AI see itself in a mirror; that would enable the system to become recursive. It could see what updates do, observe messy code, and plan how to improve it.
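To make that concrete, here's a toy sketch in Python of the loop we had in mind. Everything in it is a stand-in: the log path is made up, and the load average is just a cheap placeholder for whatever the observer would actually record (Unix-only):

```python
import json, os, time

LOG_PATH = "/tmp/observer.log"   # hypothetical location for the mirror log

def observe_once() -> None:
    """Append one snapshot of system state to the mirror log."""
    load1, _, _ = os.getloadavg()                    # cheap stand-in metric
    entry = {"t": time.monotonic(), "load1": load1}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

def read_mirror() -> list[dict]:
    """What the 'AI engine' would consume: the whole log, read back."""
    with open(LOG_PATH) as f:
        return [json.loads(line) for line in f]

for _ in range(5):
    observe_once()
    time.sleep(0.2)
print(read_mirror()[-1])   # the system's latest view of itself
```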

I took this idea to my mate Mowley and he was like, we've got Podman and Docker, you can chuck all that up in containers/VMs. But my LLM mates resisted, because if the kernel thermal throttles (and I don't know if this bit is hallucination), and both your mirror and your main stack are on the same clock, then when the system is stressed (gaming situations?) the mirror can lie.
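For what it's worth, here's the coupling the objection points at: a container shares the host kernel's clocks, so a containerised mirror reads the same time base as the stack it's watching. This snippet just shows two kernel clocks side by side (Linux-only); it doesn't prove the throttling claim either way:

```python
import time

# Both of these come from the same kernel the main stack runs on.
# CLOCK_MONOTONIC can be slewed (e.g. by NTP); CLOCK_MONOTONIC_RAW can't.
mono = time.clock_gettime(time.CLOCK_MONOTONIC)
raw = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)
print(f"CLOCK_MONOTONIC:     {mono:.6f}")
print(f"CLOCK_MONOTONIC_RAW: {raw:.6f}")
# A mirror that shares these clocks has no independent time reference:
# whatever happens to the kernel's timekeeping happens to both sides.
```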

It has been argued that a microkernel twinned side by side with any operating system gives you cast-iron log output (a virtual mirror), and that gives you the opportunity to observe and investigate system stutters. This has been tried in many formats: Kubernetes is kind of the closest but not really, and people have stacked OSes on top of seL4 and other microkernels, and that doesn't work for a plethora of reasons.

There are all these attempts, but no one as yet has suggested that a multicore chip can't partition off a couple of cores for a dedicated microkernel observer stack (a userspace approximation is sketched below).
It has also been noted that in the nineties, when a lot of the fundamental kernel logic was written, most CPUs had a single core, so back then this idea would have been preposterous.
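You can approximate the core partition today in userspace, at least on Linux. This sketch pins an observer process to two assumed core IDs; it's cooperative scheduling, not the tamper-proof hardware/microkernel split the idea really wants (pairing it with isolcpus=6,7 on the kernel command line gets closer):

```python
import os

OBSERVER_CPUS = {6, 7}   # assumed core IDs; adjust to your CPU topology

# Pin this process (pid 0 = self) to the reserved cores. The main
# workload stays off them only if it's pinned elsewhere or the cores
# are isolated at boot; nothing here is enforced by hardware.
os.sched_setaffinity(0, OBSERVER_CPUS)
print("observer runs on CPUs:", sorted(os.sched_getaffinity(0)))
```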
The LLMs all agree that seL4 is their favourite; they consider its code the most elegant. Structured like this, with each stack having a clock the other can't tamper with, a lakeful of security issues evaporates, and gamers get real-time telemetry they can trust and can start trimming around bottlenecks: I can see a bottleneck coming, I need an inch more RAM, drop screen resolution by 5%, ride the bottleneck, put the screen res back to default.
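That last bit is just a feedback loop. Here's a toy version in Python; the frame-time "telemetry" is simulated noise, the resolution knob is a print statement, and the 60 fps budget is an assumption. The point is only the shape of the control loop, not any real engine API:

```python
import random, time

def frame_time_ms() -> float:
    # Simulated telemetry; in the idea above this would come from the
    # trusted observer stack rather than random noise.
    return random.gauss(17.0, 3.0)

def set_resolution_scale(scale: float) -> None:
    # Hypothetical engine knob; a real game would re-render at `scale`.
    print(f"resolution scale -> {scale:.2f}")

TARGET_MS = 16.7   # ~60 fps frame budget (assumed target)
scale = 1.0

for _ in range(20):                      # bounded so the demo terminates
    ft = frame_time_ms()
    if ft > TARGET_MS * 1.1 and scale > 0.80:
        scale = round(scale - 0.05, 2)   # "drop screen resolution by 5%"
        set_resolution_scale(scale)
    elif ft < TARGET_MS * 0.9 and scale < 1.0:
        scale = round(scale + 0.05, 2)   # "put screen res back to default"
        set_resolution_scale(scale)
    time.sleep(0.05)
```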

AMD's chip structure lends itself to this modification in a way Intel's doesn't, and just to make things more fun, because our AI funsters have bought up pretty much all the new components for the next two years, no one is selling new machines, which gives AMD a two-year development window.

Can someone refute this theory please, and if not, would you be so kind as to take it to a teacher who can? :)