
Testing Nvidia’s DLSS 2.0: Higher Frame Rates for Free?

Nvidia’s Deep Learning Super Sampling versus AMD’s Radeon Image Sharpening. DLSS versus RIS. Acronym versus acronym. That’s nothing new in the graphics wars: Since mid-2019, AMD and Nvidia have been trading blows. Graphics cards are super-competitive again in both the low end and the midrange of the market, with cards from Big Green such as the GeForce RTX 2070 Super providing killer frame rates for the Nvidia faithful, while the Radeon RX 5600 XT has entered the field as AMD’s strongest value proposition for high-performance gaming in ages.

With these two closely competing cards filling most niches for 1080p and 1440p play—along with about half a dozen others split between the two companies—the first half of 2020 has seen the conversation shift from the number of teraflops each GPU packs under its shroud to what kind of extra software features the cards offer once installed in your rig. Given that there are four different GPU models just to cover the $200-to-$300 range, GPU manufacturers like AMD and Nvidia need to do everything they can to differentiate themselves. Sometimes this amounts to game exclusives; other times, it comes down to whole new approaches to the way games are rendered by a GPU, like Nvidia’s DLSS and the deep-learning network behind it.

(Image: DLSS on Nvidia)

Since the release of those two cards mentioned above (and a few more), both companies have started touting the capabilities offered by their new, respective image-improvement technologies: DLSS for Nvidia, and RIS for AMD. They are not the same thing, though. Plus, as though things weren’t already confusing enough, two other sharpening approaches—Nvidia’s Freestyle, and the open-source post-processing project ReShade—are part of the fray, each with its own method.

What exactly are these technologies, and how much visual clarity can they really add to your favorite games? In this multi-part deep-dive series, we’ll be painstakingly testing, retesting, and screen-grabbing each tech and sharpener (at all the relevant resolutions) to see which ones do the job best.

First Off: Anti-Aliasing vs. Sharpening

Before we dive in, let’s start with a quick clarification on the technologies we’ll be talking about.

If you’re a game hound, anti-aliasing is a familiar term. It refers to one of several techniques with the same goal: smoothing out the jagged edges around a character, a background, or an object in a video game to make it look as close as possible to something you’d see in the real world. The most common implementations of anti-aliasing in modern gaming are known as FXAA (fast approximate anti-aliasing), TAA (temporal anti-aliasing), MSAA (multisample anti-aliasing), and SMAA (enhanced subpixel morphological anti-aliasing).
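To make that concrete, here’s a minimal Python sketch of the oldest form of the idea, supersampling: draw more pixels than you display, then average them down so hard edges blend into in-between shades. (This illustrates the principle only; FXAA, TAA, MSAA, and SMAA each take cleverer, cheaper shortcuts, and none of them works literally like this.)

```python
# Minimal sketch of supersampling-style anti-aliasing: render at a higher
# resolution, then average each 2x2 block down to one display pixel so a
# jagged edge blends into intermediate shades. Illustration only.
import numpy as np

def downsample_2x(supersampled: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into a single output pixel."""
    h, w = supersampled.shape
    return supersampled.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A hard diagonal edge "rendered" at 8x8 (1.0 = lit, 0.0 = dark).
hi_res = np.tril(np.ones((8, 8)))
print(downsample_2x(hi_res))  # 4x4 output; edge blocks land between 0 and 1
```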

Anti-aliasing is resource-intensive, though. These different flavors of traditional anti-aliasing can suppress your graphics performance by significant margins, depending on the game you’re playing, how well optimized it is, and your specific hardware setup. Smoothing those polygon edges is one of the most taxing jobs for your graphics card, which is why any small gain in the technology can lead to significant leaps in frame rates.

(Image: MechWarrior 5)

In so many words, DLSS offloads the work of anti-aliasing to an AI model, using the Tensor cores on GeForce RTX cards to run a network trained on Nvidia’s servers. To clarify: DLSS can be both an upscaling and an anti-aliasing technology in one, depending on the resolution you’re playing at. (Upscaling is the practice of improving the quality of an image that is blurry or rendered at a lower resolution.) This happens through a complex AI-based neural network, known as NGX, that is trained on tens of thousands of still images from a game. The AI uses that training to display a cleaner, more efficiently rendered image than traditional anti-aliasing techniques can produce.
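To see what plain upscaling looks like without any intelligence behind it, here’s a short Python sketch that blows a 1080p-sized frame up to 4K with simple bilinear interpolation. Nvidia’s pitch is that the trained network reconstructs far more detail than arithmetic like this ever could; the code below is our own illustration and bears no resemblance to NGX itself.

```python
# Naive upscaling for comparison: stretch a low-resolution frame to a
# higher resolution with bilinear interpolation. DLSS replaces this kind
# of simple math with a trained neural network; this is illustration only.
import numpy as np

def bilinear_upscale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)          # sample positions in the source
    xs = np.linspace(0, in_w - 1, out_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, in_h - 1), np.minimum(x0 + 1, in_w - 1)
    wy, wx = ys - y0, xs - x0                     # fractional blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy)[:, None] + bot * wy[:, None]

frame_1080p = np.random.rand(1080, 1920)          # stand-in for a rendered frame
frame_4k = bilinear_upscale(frame_1080p, 2160, 3840)
print(frame_4k.shape)                             # (2160, 3840)
```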

In contrast, AMD’s RIS, Nvidia’s Freestyle, and ReShade all fall under a newer category of image-improvement techniques known as “sharpeners.” Though each technology aims for the same end (better-looking games that don’t hurt performance, and in some cases may even improve it), the ways in which they approach the problem are quite different. DLSS is an anti-aliasing/upscaling hybrid that uses artificial intelligence and a neural-net supercomputer to determine how an image can be upscaled from a lower rendered resolution to the target resolution (generally 1080p, 1440p, or 4K) without the performance cost of rendering natively.

(Image: AMD Radeon Image Sharpening)

Sharpeners, on the other hand, affect the visual fidelity of a game at the post-processing level, activating only once the GPU has fully rendered a frame. With in-game object edges intelligently sharpened by an algorithm, players can run a game at a lower render resolution that reads, to the eye, as nearly indistinguishable from a native-resolution render. This kind of technique saves on performance without sacrificing the visual fidelity that gamers expect when playing at higher resolutions.
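For a feel of what a post-process sharpening pass does, here’s a tiny Python sketch using classic unsharp masking: subtract a blurred copy of the finished frame from the original to exaggerate local contrast around edges. To be clear, RIS, Freestyle, and ReShade each use their own, more adaptive filters; this is just the general idea, not any vendor’s algorithm.

```python
# Generic post-process sharpening (unsharp masking): boost local contrast
# by subtracting a blurred copy of the frame. Runs after the GPU has fully
# rendered the frame, which is what makes it so cheap. Illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(frame: np.ndarray, amount: float = 0.6) -> np.ndarray:
    blurred = gaussian_filter(frame, sigma=1.0)   # low-pass copy of the frame
    return np.clip(frame + amount * (frame - blurred), 0.0, 1.0)

rendered = np.random.rand(1080, 1920)             # stand-in for a finished frame
sharpened = unsharp_mask(rendered)                # the post-processing pass
```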

With that lesson out of the way, let’s take a closer look at each approach.

DLSS 1.0: Nvidia’s First Try

DLSS 1.0 was an anti-aliasing technique that aimed to replace traditional technologies like FXAA, SMAA, and TAA, and was first released roughly six months after the GeForce RTX hardware that powers it hit shelves.

We’ve explained the rudiments of how DLSS works in some of our roundups of current graphics cards, but here it is in a nutshell. First, Nvidia feeds a DLSS-enabled game through its neural-network AI supercomputers. These powerful computers run every scene in the title thousands of times, analyzing areas where images can be sharpened and edges cleaned up to produce a crisper-looking image at lower resolutions.

(Image: DLSS 1.0 in Control)

The main goal of all this effort? It’s to make a game rendered at a lower resolution look just as good as one natively rendered at a higher resolution. This efficiency can boost a game’s frame rates by up to 33 percent in some titles, all without sacrificing the graphical fidelity that gamers paid all that extra money (for a robust GeForce RTX video card) to experience.

Was it perfect? We’ll get into the qualitative aspects later, but the tech itself had a whole barrel of caveats attached upon release. While the performance gains were certainly nothing to sniff at, it was far from becoming a permanent replacement for traditional anti-aliasing technologies at the time.

The first caveat: the number of games that supported DLSS. Because Nvidia needed to train every game (at every resolution) that wanted to use DLSS through its own supercomputers, developers’ ability to use it in their titles was limited by Nvidia’s bandwidth (and still is today, though less so). The result of this bottleneck: More than a year after the feature was first announced, fewer than 30 games on the market had the option to turn it on.

The second issue with DLSS 1.0 was artifacting, or more specifically, “haloing.” This by-product was first noticed in Battlefield V, the first game to carry support for the technology. It manifested as a sort of “smudging” of textures around fine edges, like you might have found on the crosshairs of a gun, or in the details of a character’s watch. It wasn’t overly noticeable unless you were looking for it, especially in games with a lot of fast-moving action. But in certain titles, it was pronounced enough that the performance benefit wasn’t worth having characters, items, or landscapes look degraded as the trade-off.

(Image: DLSS 1.0 haloing)

Nvidia took the criticisms of its first go-around to heart, and with this 2020 release, the company seems to have learned a lot about what went wrong the first time, and how to avoid repeating those mistakes in DLSS 2.0.

DLSS 2.0: A Strong Course Correction

Earlier this week, Nvidia unveiled the next phase, dubbed DLSS 2.0.

(Image: DLSS 2.0 AI rendering)

Fully explaining the technical improvements Nvidia’s engineers have made from DLSS 1.0 to DLSS 2.0 would take a master’s thesis, but here are the core promises:

1. The network is much easier to train than before, which should theoretically mean more games support it than supported DLSS 1.0.

2. The performance gains should be higher than before.

3. The visual quality and overall fidelity of the renders have been improved.

4. Users will have a greater level of control over how DLSS behaves on a per-game basis.

Plus, with the addition of an Unreal Engine 4 integration, developers and programmers will be able to build their games from the ground up to use DLSS in more efficient ways than ever.

(Image: The DLSS 2.0 network)

Now sure, Nvidia makes lots of claims when it comes to its latest technology releases. But how does DLSS 2.0 actually stack up in performance when pitted against non-DLSS-enabled versions of the same game? We dug in to find out.

Benchmarking DLSS 2.0: Let’s Take ‘Control’

In many ways, the sleeper hit Control (from the developers of Alan Wake) seems to have been made from the start with ray tracing in mind. Unlike games that shipped without ray tracing and had it patched in after the fact (Shadow of the Tomb Raider and Battlefield V, to name a couple), Control had ray tracing baked into the core of its engine, which means you can…wait for it…control the lighting scheme more deeply than in any other ray-traced title to date.

(Image: DLSS 2.0 in Control)

In this game, you can configure almost every aspect of how the ray tracing behaves, including what types of reflections are cast and whether lighting diffuses at indirect angles.

(Image: Control’s ray-tracing settings)

For most users, it’s enough to use one of the available presets. But for reviewers like me, it offers up a perfect opportunity to see whether DLSS 2.0 can deliver on what Nvidia has been claiming the original 1.0 iteration would do for over a year now: offload the computational cost of ray tracing to an AI network so games can look pretty and run fast at the same time.

For this reason, Control is currently the best game to test the feature set of DLSS 2.0, especially considering the game is one of the few that has a DLSS “slider” that lets us configure the resolutions that we want our PC to run at.

In this testing, we cranked every ray-tracing setting to full, and selected the Ultra preset on a system running an Intel Core i7-7700K CPU with 16GB of Corsair DDR4 memory and an Nvidia GeForce RTX 2080 Super card. We then ran it through three tests at native resolutions: 3,840 by 2,160, 2,560 by 1,440, and 1,920 by 1,080. At each native resolution, we ran three tests at different DLSS render levels: 66 percent, 58 percent, and 50 percent render resolution.
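For reference, here’s what those slider settings work out to in raw pixels, assuming the percentage applies to each axis of the output resolution (so 50 percent of 4K means a quarter of the pixels); the little helper below is our own:

```python
# What each DLSS render level means in pixels, assuming the slider
# percentage scales each axis of the output resolution (our reading).
def render_resolution(out_w: int, out_h: int, scale: float) -> tuple[int, int]:
    return round(out_w * scale), round(out_h * scale)

for scale in (0.66, 0.58, 0.50):
    w, h = render_resolution(3840, 2160, scale)
    print(f"{scale:.0%} of 4K renders at {w} x {h}")
# 66% -> 2534 x 1426, 58% -> 2227 x 1253, 50% -> 1920 x 1080
```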

(Chart: DLSS 2.0 benchmark results)

As you can see, the effect on performance of turning on DLSS 2.0 was substantial. At its most aggressive, the 50 percent render setting (Performance Mode) represents a 184 percent increase in frame rate over native 4K (54fps versus 19fps), while even the highest-quality setting (66 percent) still grants a boost of 94 percent over the native resolution.
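For transparency, those percentages fall out of the standard percent-change formula; the quick calculation below reproduces the 4K Performance Mode figure from the frame rates cited above:

```python
# Percent increase = (new - old) / old * 100, using the 4K numbers above.
def percent_gain(dlss_fps: float, native_fps: float) -> float:
    return (dlss_fps - native_fps) / native_fps * 100

print(round(percent_gain(54, 19)))  # 184: Performance Mode vs. native 4K
# The 94 percent figure for the 66 percent setting follows the same formula.
```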

This trend continued in the 1440p and 1080p results, both of which reflected an equally huge jump in speeds once the DLSS feature was turned on. Admittedly, the percentage gains start to taper off at 1080p, and Nvidia told us this was likely to happen. At that resolution, the CPU handles much more of the heavy lifting relative to the GPU, which means the effectiveness of any tech contained in the GPU (the Tensor cores) drops accordingly.
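A toy model makes that bottleneck effect easy to see: treat each frame as taking roughly the longer of the CPU’s time and the GPU’s time, with DLSS only shrinking the GPU term. All the millisecond figures below are invented purely for illustration:

```python
# Toy bottleneck model: a frame takes about max(CPU time, GPU time), and
# DLSS only reduces the GPU side. Every number here is made up.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000 / max(cpu_ms, gpu_ms)

# 4K: heavily GPU-bound, so halving GPU work nearly doubles the frame rate.
print(fps(cpu_ms=8, gpu_ms=50), "->", fps(cpu_ms=8, gpu_ms=25))  # 20.0 -> 40.0
# 1080p: the CPU becomes the wall, so the same GPU saving helps far less.
print(fps(cpu_ms=8, gpu_ms=10), "->", fps(cpu_ms=8, gpu_ms=5))   # 100.0 -> 125.0
```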

However, this is only half the story. While rendering an image at a lower resolution is a cheap and easy way to gain free performance, DLSS needs to look good, too, right?

DLSS: Today, Tomorrow, and Beyond

In the next part of our investigation into DLSS 2.0, we’re going to pit its performance against AMD’s competing Radeon Image Sharpening (RIS) software, as well as Freestyle and ReShade. Not only that, but we’ll have a complete evaluation of how DLSS actually helps games look better versus these other technologies in a side-by-side screenshot comparison.

My early impressions, though, are that Nvidia has created something special here: a nascent technology that could upend the value of graphics cards down the line. I’m a hard reviewer to please in this department; I’ve made my feelings about DLSS known on multiple occasions in more than a few of our video card reviews, and this is the first time I’m sincerely impressed with what the tech has to offer. And I’m not only impressed by what it has to offer today, but also excited about the promise it holds for tomorrow.

The one big, big caveat right now is the number of supported games. Nvidia says the new neural network is much easier to train on individual games, which should, in theory, mean a lot more games will work with DLSS 2.0 in the near future. For now, though, the technology is debuting with just four niche titles (Control, MechWarrior 5: Mercenaries, Wolfenstein: Youngblood, and Deliver Us The Moon), which offers nothing more than a tiny taster of its potential. Four games is a good start, but a long, long way from mainstream relevance or a key reason to buy a GeForce RTX card.

Either way, stay tuned as we dive deeper into this exciting new technology, and give AMD and RIS their time on the bench to respond in kind.
