I think the claim, as I understand it, is that the video source from the GPU is rendered at the higher resolution but is then "right-scaled" by the GPU down to the display's native resolution (the technique is usually called supersampling; Nvidia ships it as DSR). This is actually a nice way of leveraging the muscle of a GPU to produce a nicer, sharper image on the screen. I've known gamers to bump their GPU resolution to 4K and "right-scale" it to their native resolution to get better visuals.
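To make the idea concrete, here's a minimal sketch (my own illustration, not anything from an actual driver) of the downscaling step: pretend the GPU rendered a frame at 2x the native resolution, then box-filter it, averaging each 2x2 block of pixels into one output pixel. The names and sizes here are hypothetical.

```python
import numpy as np

def downscale_2x(img):
    """Average each 2x2 pixel block into one output pixel (box filter).

    img: array of shape (H, W, C) where H and W are even.
    Returns an array of shape (H//2, W//2, C).
    """
    h, w = img.shape[:2]
    # Split rows and columns into pairs, then average over each pair.
    return img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

# Stand-in for a frame "rendered" at 2x native: native 4x4, render 8x8, RGB.
hi_res = np.random.rand(8, 8, 3)
native = downscale_2x(hi_res)
print(native.shape)  # (4, 4, 3)
```

Each native pixel ends up as the average of four rendered samples, which is why edges look smoother than rendering at native resolution directly. Real drivers use fancier filters than a plain box average, but the shape of the operation is the same.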
Of course, this is something you would only do if you have GPU to spare, heh. In other words, wouldn't you rather render at a lower resolution and spend the GPU's memory and horsepower on more detail on screen? Or, this may be Nvidia's finest API package EVER. I do trust them to squeeze more out of their hardware than anyone else ever could.
Can't wait to find out the facts.