Could someone explain exactly how you can see the better quality of more pixels on a 2-megapixel monitor (1920 x 1080)?
Because resizing a larger image to a smaller size looks better than an image taken at that smaller size (if you aren't just doing a nearest neighbour resize anyway).
It's like downsampling in games, or as Nvidia calls it DSR and AMD calls it VSR. The game is rendered at a higher resolution, say 4K, then resized down to 1080p to fit the monitor. The resulting 1080p image looks much better than if the game had just been rendered at 1080p, even without using any kind of AA.
Same thing if you resize a 6 MP or 12 MP image to 1080p using bicubic resampling, Lanczos resampling or something: by essentially averaging a larger number of pixels, you get smoother lines and tonal transitions than if the photo had been taken with a sensor that can only capture 1080p.
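A minimal sketch of that difference, assuming a hypothetical 12 MP source file called photo_12mp.jpg and using Pillow's resize filters (the file names and output paths are just placeholders):

```python
# Minimal sketch using Pillow; "photo_12mp.jpg" is a placeholder file name.
from PIL import Image

img = Image.open("photo_12mp.jpg")  # e.g. a 4000 x 3000 (12 MP) capture

# Nearest neighbour copies a single source pixel per output pixel:
# edges stay jagged and fine detail aliases.
# (Aspect-ratio cropping is ignored here for brevity.)
nearest = img.resize((1920, 1080), Image.Resampling.NEAREST)

# Lanczos weights and averages many source pixels per output pixel,
# which is what smooths lines and tonal transitions.
lanczos = img.resize((1920, 1080), Image.Resampling.LANCZOS)

nearest.save("downsized_nearest.png")
lanczos.save("downsized_lanczos.png")
```

Open the two outputs side by side at 100% and the Lanczos version should show noticeably cleaner diagonals and gradients.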
How does decreasing that diameter increase the number of photons each pixel can detect and therefore increase color accuracy and reduce noise?
It doesn't; the noise actually gets increased, not reduced, as you increase pixel count while keeping the same physical sensor size. That's why compact cameras in particular had to keep making their noise reduction more aggressive after the 3 MP mark, until some later advances reduced it a bit, but now we're back to the stage where processed images look noisier the more MP they have.
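Here's a rough sketch of the photon side of it, assuming an idealised sensor where shot noise (Poisson-distributed photon arrival) is the only noise source and every design collects the same total light; the photon count is made up just to show the trend:

```python
# Rough sketch: photon shot noise per pixel when the same total light is
# split across more, smaller pixels. Idealised: no read noise, no noise reduction.
import numpy as np

rng = np.random.default_rng(0)
total_photons = 3_000_000_000  # same light and sensor area in every case (made-up figure)

for megapixels in (3, 6, 12, 24):
    pixels = megapixels * 1_000_000
    mean_per_pixel = total_photons / pixels             # smaller pixels catch fewer photons
    sample = rng.poisson(mean_per_pixel, size=100_000)  # shot noise is Poisson-distributed
    snr = sample.mean() / sample.std()                  # per-pixel signal-to-noise ratio
    print(f"{megapixels:>2} MP: {mean_per_pixel:7.1f} photons/pixel, per-pixel SNR ~ {snr:.1f}")
```

Per-pixel SNR scales roughly with the square root of the photons each pixel collects, so quadrupling the pixel count on the same sensor roughly halves it.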