Although I didn't read the article, I'm assuming that, typically, a screen's red and green pixels have too much blue in them, simply because the spectra (absorptive or emissive) of the materials we use aren't perfect.
In the following, "eye" is what you actually see, "scr" is the drive intensity of the screen's pixels, and "img" is the intended color.
If:
R(eye) = R(scr)
G(eye) = G(scr)
B(eye) = B(scr) + r * R(scr) + g * G(scr)
Then you can never display a color where B(img) < r * R(img) + g * G(img). I.e., you can't display less blue light than the unwanted blue leaking from the red and green pixels.
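As a tiny sketch of that constraint (r and g here are made-up leak fractions, not measurements):

    # Hypothetical leak fractions: the blue leaked per unit of red/green
    # subpixel drive. Made-up values, purely for illustration.
    r, g = 0.05, 0.08

    def blue_drive(R_img, G_img, B_img):
        """Blue subpixel drive needed so that B(eye) == B(img)."""
        return B_img - r * R_img - g * G_img

    print(blue_drive(0.2, 0.2, 0.5))  #  0.474: fine
    print(blue_drive(1.0, 1.0, 0.0))  # -0.13: impossible, a pixel can't emit negative light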
Instead, the color of all the screen's pixels is given a slight blue bias, so that every color in the image stays displayable and transitions stay smooth.
However, if you add yellow pixels, you can do:
R(eye) = R(scr) + Y(scr)
G(eye) = G(scr) + Y(scr)
B(eye) = B(scr) + r * R(scr) + g * G(scr)
You can create a function mapping RGB(img) to RGBY(scr) that's smooth (if not linear) and gives nicer-than-normal results where the smaller of R and G is much higher than B, i.e. colors near yellow.
The screen still only needs 3 color values per pixel; it just uses those values to decide the intensities of the 4 subpixel colors via the mapping function.
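Here's a rough sketch of such a mapping (same made-up leak fractions as above, and assuming the yellow subpixel stimulates red and green perception equally while leaking no blue):

    def rgb_to_rgby(R, G, B, r=0.05, g=0.08):
        """Map image RGB to screen RGBY, moving just enough of the shared
        R/G drive into Y to cancel the unwanted blue leakage.
        Piecewise-linear and continuous; a real mapping would be tuned to
        the actual subpixel spectra."""
        excess = r * R + g * G - B  # leakage that B(scr) can't absorb
        if excess <= 0:
            return R, G, B - r * R - g * G, 0.0  # no yellow needed
        # Each unit moved from both R and G into Y removes (r + g) of leakage.
        Y = min(R, G, excess / (r + g))
        Rs, Gs = R - Y, G - Y
        Bs = max(0.0, B - r * Rs - g * Gs)
        return Rs, Gs, Bs, Y

    print(rgb_to_rgby(1.0, 1.0, 0.0))  # (0.0, 0.0, 0.0, 1.0): pure yellow, now displayable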
That doesn't seem to be what their screen is doing, however. Carrying information about 4 colors per pixel is a waste, as there are only 3 degrees of freedom.
Of course, this could work for subpixel colors other than yellow, depending on what spectra are available from known materials.
I'm thinking that simply increasing thickness and combining different absorptive materials should yield ever-improving color distinction, although at the cost of wasted power.
E.g., if 100 um of green filter material lets 90% of green light through and 10% of blue light through, then 200 um of the same filter lets 81% of green light through and only 1% of blue light through, although power efficiency decreases by 10%.
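A quick sketch of that scaling (transmission falls off exponentially with thickness, per the Beer-Lambert law), including the backlight fraction wasted:

    # Transmission through thickness d_um, given the transmission t100
    # of a 100 um reference layer (Beer-Lambert: exponential in thickness).
    def transmission(t100, d_um):
        return t100 ** (d_um / 100.0)

    for d in (100, 200, 300):
        green = transmission(0.90, d)  # wanted light
        blue = transmission(0.10, d)   # unwanted leak
        print(d, round(green, 3), round(blue, 4), "wasted:", round(1 - green, 3))
    # 100 um: green 0.9,   blue 0.1,    wasted 0.1
    # 200 um: green 0.81,  blue 0.01,   wasted 0.19
    # 300 um: green 0.729, blue 0.001,  wasted 0.271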
However, that's too simple, as visible light contains more than 3 different wavelengths. But if you combine absorptive materials whose peak transmission wavelengths sit either side of the desired color, some combination of thicknesses of the two will put the combined peak transmission at exactly the desired wavelength. Increase the thickness of this stack, increase the backlight intensity accordingly, and your color will become purer.
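To sketch that, model each filter's transmission as a Gaussian in wavelength (a crude stand-in for real filter spectra), with the exponent standing in for relative thickness; the combined peak lands at a thickness-weighted average of the two individual peaks:

    import numpy as np

    wl = np.linspace(400.0, 700.0, 601)  # wavelength grid, nm

    def gauss(peak_nm, width_nm):
        # Toy transmission curve; real filter spectra are messier.
        return np.exp(-((wl - peak_nm) / width_nm) ** 2)

    f1 = gauss(530.0, 40.0)  # peaks below the target color
    f2 = gauss(575.0, 40.0)  # peaks above the target color
    for a, b in [(1, 1), (2, 1), (1, 2)]:  # relative thicknesses
        combined = f1 ** a * f2 ** b
        print(a, b, wl[np.argmax(combined)])
    # (1,1) -> 552.5 nm, (2,1) -> 545.0 nm, (1,2) -> 560.0 nm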
Of course, absorbed light (in addition to being wasted power) is turned into heat, so the thicker the filter, the hotter your screen will get.
I'm not really sure how thick LCD screen color filters usually are.
One other thing to mention - if you're slightly colorblind (like me), you may have slightly different absorption spectra for your three cone types. You may then find the RGBY screen looks nicer. But a screen tailored specifically to your cones would look far better still. (IIRC they can grow the pigment from your DNA, and the absorption spectra of many common genetic colorblindness alleles are known.)
Unfortunately, if a photo's color is perfectly tuned to a person with normal color vision, some colorblind people may actually get less information from the photo than they would from looking at the real-world scene. For example, I'm looking into a way to use two pigments that look identical to a person with normal vision but can be distinguished by people with slightly different cone spectra (like me). That way I could write signs that only people like me can read. But photos probably won't preserve this distinction.
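That pigment trick is just linear algebra: a normal observer's three cone responses are a 3xN linear map of an N-bin spectrum, so any spectral difference lying in that map's null space is invisible to them, yet generally visible to an observer with shifted pigments. A toy sketch, with random matrices standing in for real measured cone fundamentals:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 31  # wavelength bins

    normal = rng.random((3, N))          # toy L/M/S cone sensitivities
    anomalous = normal.copy()
    anomalous[0] += 0.2 * rng.random(N)  # shifted L-cone pigment

    # A direction in the null space of the normal observer's matrix:
    _, _, Vt = np.linalg.svd(normal)
    d = Vt[-1]  # normal @ d ~ 0, guaranteed to exist since N > 3

    # Two physically valid (nonnegative) spectra differing by that direction:
    base = np.ones(N)
    s1, s2 = base, base + 0.5 * d
    print(normal @ (s2 - s1))     # ~zero: the two look identical to normal vision
    print(anomalous @ (s2 - s1))  # nonzero: distinguishable with shifted pigments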