This post is very in-depth; it will probably go right over most of your heads. That's okay, enjoy 😊
•While Nvidia has heavily promoted ray-traced effects on its GeForce RTX 2080 and RTX 2080 Ti graphics cards, the deep-learning super-sampling (DLSS) tech that those cards' Tensor cores unlock has proven a more immediate and divisive point of discussion. We all want to know whether it works and what tradeoffs it makes between image quality and performance.
•Nvidia DLSS is the specific DNN model devised to solve the inherent issues, like blurring and transparency, with TAA (temporal antialiasing). DLSS can deliver either much higher quality than TAA at a given set of input samples, or much faster performance at a lower input sample count, all while inferring a visual result of similar quality to TAA using roughly half the shading work. For example, at 4K resolution, DLSS delivered twice the performance of TAA in Epic's Unreal Engine 4 Infiltrator demo. Of course, the prerequisite is a training process in which the DNN learns how to produce the desired result from a 'large number of super high-quality examples'.
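To make the "half the shading work, roughly twice the speed" claim concrete, here is a back-of-the-envelope sketch. The numbers and the `pixels_shaded` helper are purely illustrative assumptions, not NVIDIA benchmarks, and real frame time depends on much more than shading count:

```python
# Illustrative arithmetic only: rough shading-cost comparison behind the
# "half the shading work" claim. Sample counts here are hypothetical.

def pixels_shaded(width, height, samples_per_pixel):
    """Total shading invocations for one frame (hypothetical helper)."""
    return width * height * samples_per_pixel

# TAA shades every pixel of the 4K frame once.
taa_work = pixels_shaded(3840, 2160, 1)        # 8,294,400 shades

# DLSS shades a lower input sample count (here assumed to be half),
# then the Tensor-core network infers the full-resolution result.
dlss_work = pixels_shaded(3840, 2160, 0.5)     # 4,147,200 shades

print(taa_work / dlss_work)  # → 2.0, i.e. roughly 2x the shading throughput
```

If shading dominates the frame budget, halving the shaded samples is where the roughly 2x Infiltrator result could come from.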
•To train the network, we collect thousands of "ground truth" reference images rendered with the gold-standard method for perfect image quality: 64x supersampling (64xSS). 64x supersampling means that instead of shading each pixel once, we shade at 64 different offsets within the pixel and then combine the outputs, producing an image with ideal detail and anti-aliasing quality. We also capture matching raw input images rendered normally. Next, we train the DLSS network to match the 64xSS output frames by going through each input, asking DLSS to produce an output, measuring the difference between its output and the 64xSS target, and adjusting the weights in the network based on that difference, through a process called backpropagation. After many iterations, DLSS learns on its own to produce results that closely approximate the quality of 64xSS, while also learning to avoid the problems with blurring, disocclusion, and transparency that affect classical approaches like TAA. •Article finishes in comments•
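The training loop described above can be sketched end to end on a toy scale. The "renderer" here is just random data, the 64xSS target is a box-filtered average of 64 offsets per pixel exactly as described, and the "network" is a single linear weight and bias trained by gradient descent (the one-layer case of backpropagation). All of this is a stand-in for the real DLSS DNN, chosen so the example is self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy "ground truth": 64x supersampling of one 8x8 pixel tile. ---
# Shade at 8x8 = 64 offsets per pixel (here: random values standing in
# for a renderer), then average each 8x8 block down to one pixel.
hi_res = rng.random((64, 64)).astype(np.float32)
target = hi_res.reshape(8, 8, 8, 8).mean(axis=(1, 3))  # 64xSS reference, 8x8

# Matching raw input: a single centered sample per pixel (aliased frame).
raw_input = hi_res[4::8, 4::8]                          # also 8x8

# --- Toy "network": one weight and bias, trained to match the 64xSS
# target by minimizing mean squared error with gradient descent. ---
w, b = 0.1, 0.0
lr = 0.5
for step in range(500):
    out = w * raw_input + b                  # forward pass: network output
    err = out - target                       # difference from 64xSS target
    grad_w = 2.0 * np.mean(err * raw_input)  # backprop: dLoss/dw
    grad_b = 2.0 * np.mean(err)              # backprop: dLoss/db
    w -= lr * grad_w                         # adjust weights from the error
    b -= lr * grad_b

loss = float(np.mean((w * raw_input + b - target) ** 2))
```

After many iterations the loss shrinks toward the best this tiny model can do; the real network has millions of weights and sees thousands of image pairs, but the measure-difference-then-backpropagate cycle is the same.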