We've been waiting to reexamine Nvidia's Deep Learning Super Sampling for a long time, partly because we wanted new games to come out featuring Nvidia's updated algorithm. We also wanted to ask Nvidia as many questions as we could to really dig into the current state of DLSS.

Today's article is going to cover everything. We'll be looking at the latest titles to use DLSS, focusing primarily on Control and Wolfenstein: Youngblood, to see how Nvidia's DLSS 2.0 (as we're calling it) stacks up. This will include our usual suite of visual comparisons pitting DLSS against native image quality, resolution scaling, and various other post-processing techniques. Then, of course, there will be a look at performance across all of Nvidia's RTX GPUs.

We'll also briefly revisit the original launch games that used DLSS to see what's changed there, and there will be plenty of discussion of the RTX ecosystem, Nvidia's marketing, expectations, disappointments, and so on. Strap yourselves in because this is going to be a comprehensive look at where DLSS stands today.

First, having covered the topic in detail before, here's a recap of where we're up to with DLSS...

Nvidia advertised DLSS as a key feature of GeForce RTX 20 series GPUs when they launched in September 2018. The idea was to improve gaming performance for those wanting to play at high resolutions with high quality settings, such as ray tracing. It did this by rendering the game at a lower than native resolution, for example 1440p if your target resolution was 4K, then upscaling it back to the native res using the power of AI and deep learning. The goal was for this upscaling algorithm to provide native-level image quality with higher performance, giving RTX GPUs more value than they otherwise had at the time.
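To put the render-resolution savings in perspective, here's a quick back-of-the-envelope comparison (our own illustration, not Nvidia's figures) of how many pixels the GPU actually shades at common render resolutions relative to native 4K:

```python
# Rough pixel-count comparison for common render resolutions vs. native 4K.
# Illustrative only: DLSS's real cost also includes the upscaling pass itself.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "1800p": (3200, 1800),
    "4K":    (3840, 2160),
}

native = resolutions["4K"][0] * resolutions["4K"][1]

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.2f} MP, {pixels / native:.0%} of native 4K")
# 1440p, for example, works out to roughly 44% of the pixels of native 4K.
```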

This AI algorithm also leveraged a new feature of RTX Turing GPUs: the tensor cores. While these cores are most likely included on the GPU to make it more suitable for data center and workstation use cases, Nvidia found a way to use this hardware feature for gaming. Later, Nvidia decided to ditch the tensor cores for their cheaper Turing GPUs in the GTX 16 series, so DLSS ended up only being supported on 20 series RTX products.

While all this sounded promising, the execution within the first nine months was far from perfect. Early DLSS implementations looked bad, producing a blurry image with artifacts. Battlefield V was a particularly egregious case, but even Metro Exodus failed to impress.

One of the major issues with the initial version of DLSS is that it did not provide a better experience than existing resolution scaling techniques. The implementation in Battlefield V, for instance, looked worse and performed worse than a simple resolution upscale. In Metro Exodus it was more on par with these techniques, but it wasn't impressive either. As DLSS was locked down to certain quality settings and resolutions on certain GPUs, and was only supported in a very limited selection of games, it didn't make sense to use DLSS instead of resolution scaling.

Following the disappointing result, Nvidia decided to throw the original version of DLSS in the bin, at least that's what it sounded like based on our discussions with the company. Instead, for the short term they released a better sharpening filter for their Freestyle tools that would enhance the resolution scaling experience, while they worked on a new version of DLSS in-house.

The first step towards DLSS 2.0 was the release of Control. This game doesn't use the "final" version of the new DLSS, but what Nvidia calls an "approximation" of the work-in-progress AI network. This approximation was worked into an image processing algorithm that runs on the standard shader cores, rather than Nvidia's special tensor cores, but attempts to provide a DLSS-like experience. For the sake of simplicity, we're going to call this DLSS 1.9, and we'll talk about it more when we look at DLSS in Control.

Late in 2019 though, Nvidia got around to finalizing the new DLSS -- we feel the upgrade is significant enough to warrant calling it DLSS 2.0. There are fundamental changes to the way DLSS works in this version, including the removal of all restrictions, so DLSS now works at any resolution and quality setting on all RTX GPUs. It also no longer requires per-game training, instead using a generalized training system, and it runs faster. These changes required a significant update to the DLSS SDK, so it's not backwards compatible with original DLSS titles.

So far, we've seen two titles that use DLSS 2.0: Wolfenstein: Youngblood and the indie title Deliver Us the Moon. We'll be focusing primarily on Youngblood as it's a major release.

Nvidia tells us that DLSS 2.0 is the version that will be used in all DLSS-enabled games going forward; the shader core version, DLSS 1.9, was a one-off and will only be used for Control. But we think it's still important to talk about what Nvidia has done in Control, both to see how DLSS has evolved and to see what is possible with a shader core image processing algorithm, so let's dive into it.

Control + DLSS

With a target resolution of 4K, DLSS 1.9 in Control is impressive, more so when you consider this is an approximation of the full technology running on the shader cores. The game allows you to select from two render resolutions, which at 4K gives you the choice of 1080p or 1440p, depending on the balance of performance and image quality you want.

DLSS with a 1440p render resolution is the better of the two options. It doesn't provide the same level of sharpness or clarity as native 4K, but it gets pretty close. It's also close overall to a scaled 1800p image. In some areas DLSS is better, in others it's worse, but the slightly softer image DLSS produces is quite similar to a modest resolution scale. Unlike previous versions of DLSS, though, it doesn't suffer from any oil painting artifacts or weird reconstructions when the upscale from 1440p to 4K is applied. The output quality is very good.

We can also see that DLSS rendering at 1440p is better than simply playing the game at 1440p. Some of the differences are subtle and require zooming in to see the better edge handling and cleaner lines, but the differences are there: we'd much rather play with 1440p DLSS than at native 1440p.

That's not to say DLSS 1.9 is perfect, because it does seem to be using a temporal reconstruction technique, taking multiple frames and combining them into one for higher detail images. This is evident when viewing some of the fine details around the Control game world, particularly grated air vents, which trouble the image processing algorithm and produce flickering that isn't present in either the native or 1800p scaled images. Super fine wires and lines throughout the environment seem to consistently give DLSS the most trouble, although image quality for larger objects is decent.
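Nvidia hasn't published the internals of Control's shader-based reconstruction, so as a rough illustration of what generic temporal accumulation looks like, and why sub-pixel detail like grates and thin wires can flicker, here's a minimal sketch. Every function and parameter name is our own invention, not Control's actual code:

```python
import numpy as np

def temporal_accumulate(history, current, motion_vectors, alpha=0.1):
    """Blend the current (upscaled) frame with a reprojected history buffer.

    history        -- previous accumulated frame at output resolution (H, W, 3)
    current        -- current frame upsampled to output resolution (H, W, 3)
    motion_vectors -- per-pixel offsets into the previous frame (H, W, 2)
    alpha          -- weight given to the new frame (higher = less ghosting,
                      but more shimmer on fine detail like thin wires)
    """
    h, w = current.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    # Reproject: find where each output pixel was located last frame.
    prev_y = np.clip(ys + motion_vectors[..., 1], 0, h - 1).astype(int)
    prev_x = np.clip(xs + motion_vectors[..., 0], 0, w - 1).astype(int)
    reprojected = history[prev_y, prev_x]

    # Exponential blend: detail accumulates over several frames, which is why
    # sub-pixel features that alias differently every frame can flicker.
    return alpha * current + (1 - alpha) * reprojected
```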

Previously we found that DLSS targeting 4K was able to produce image quality similar to an 1800p resolution scale, and with Control's implementation that hasn't changed much, although as we've just been discussing we do think the quality is better overall and basically equivalent to (or occasionally better than) the scaled version. But the key difference between older versions of DLSS and this new version is the performance.

... the key difference between older versions of DLSS and this new version is the performance.

Previously, running DLSS came with a performance hit relative to whatever resolution it was rendering at. So 4K DLSS, which used a 1440p render resolution, was slower than running the game at native 1440p because the upscaling algorithm consumed a substantial amount of processing time. This ended up delivering around 1800p-like performance with 1800p-like image quality, hence our original lack of enthusiasm.

However, this shader-processed version is significantly less performance intensive. 4K DLSS with a 1440p render resolution performed on par with native 1440p, so there is a significant improvement over the 1800p-like performance we got previously. This also clearly makes DLSS 1.9 the best upscaling technique we have, because it offers superior image quality to 1440p with performance on par with 1440p; there's next to no performance hit here.

Another way to look at it is that we get an 1800p-like image with the performance of 1440p, which is better than we can achieve with any resolution scaling option.
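To make the "upscaling cost" idea concrete, here's a rough frame-time model. Every millisecond figure below is a hypothetical placeholder chosen for illustration, not a measurement:

```python
def effective_fps(render_ms, upscale_ms=0.0):
    """Convert per-frame cost (render pass + upscaling pass) into FPS."""
    return 1000.0 / (render_ms + upscale_ms)

# Hypothetical frame costs for a single GPU (placeholders, not benchmarks):
native_1440p_ms = 8.0    # shading cost at 1440p
native_1800p_ms = 11.0   # shading cost at 1800p
old_dlss_pass_ms = 3.0   # heavy upscale pass in early DLSS titles
new_dlss_pass_ms = 0.3   # near-free shader upscale in Control's DLSS 1.9

print("Native 1800p:  %.0f FPS" % effective_fps(native_1800p_ms))
print("Old 4K DLSS:   %.0f FPS" % effective_fps(native_1440p_ms, old_dlss_pass_ms))
print("DLSS 1.9 4K:   %.0f FPS" % effective_fps(native_1440p_ms, new_dlss_pass_ms))
print("Native 1440p:  %.0f FPS" % effective_fps(native_1440p_ms))
```

With numbers like these, the old approach lands at roughly 1800p-like frame rates, while a near-free upscale pass lands almost exactly on native 1440p performance, which matches what we measured.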

You'll notice that we haven't mentioned image sharpening and how that factors in. With previous DLSS iterations, scaled 1800p provided better non-sharpened image quality with fewer artifacts at the same performance level, which made 1800p a better base rendering choice to then sharpen from. With sharpening, you want to start from the best base image you can and go from there, which is why we preferred using 1800p + sharpening to match 4K rather than using either unsharpened or sharpened DLSS. The results were better.

But with "DLSS 1.9/2.0," the tables have turned and now it makes more sense to sharpen the DLSS image if you want to try and further match native image quality. That's because unlike with previous versions, at equivalent performance levels we do get better image quality with DLSS this time. Sharpening DLSS rendering at 1440p to try and emulate 4K gives you much better results than trying to sharpen a simple 1440p upscale.
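For reference, post-process sharpening of this kind is conceptually just an unsharp mask: boost the difference between the image and a blurred copy of itself. Nvidia's Freestyle filter is more sophisticated than this, but a minimal sketch of the basic idea (our own example code, not Nvidia's filter) looks like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, strength=0.5, radius=1.0):
    """Basic post-process sharpening: add back high-frequency detail.

    image    -- float array in [0, 1] with shape (H, W, 3)
    strength -- how aggressively to boost edges (too high causes halos/ringing)
    radius   -- blur radius; controls which detail frequencies get boosted
    """
    blurred = gaussian_filter(image, sigma=(radius, radius, 0))
    detail = image - blurred              # high-frequency component
    return np.clip(image + strength * detail, 0.0, 1.0)
```

The reason the base image matters so much is visible in that `detail` term: sharpening can only amplify detail that is already present, so starting from a better reconstruction (DLSS 1.9/2.0) gives it more to work with than starting from a plain 1440p upscale.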

While DLSS in Control is nice for anyone targeting 4K and using a 1440p render resolution, the results outside of this specific combination are lackluster. When targeting 4K and using the lower 1080p render resolution, temporal artifacts become more obvious and at times jarring. The image is also softer than with a 1440p render resolution, which is to be expected, although performance is solid.

Using DLSS with a target resolution below 4K is also not a good idea. When targeting 1440p we're presented with either 960p or 720p as the render resolutions, and when rendering at either of these resolutions there's simply not enough detail to reconstruct a great looking image. Even the higher quality option, 960p, delivered something far from a native 1440p image. This algorithmic approximation of DLSS is just not suited to these lower resolutions.

With this in mind, let's explore the performance benefit we get from Control's optimal configuration: a target resolution of 4K while rendering at 1440p. We're comparing to Control running at 1800p, which is the equivalent visual quality option, to see the pure performance benefit. The benchmarks were run on our Core i9-9900K test rig with 16GB of RAM, and we used Ultra settings with 2x MSAA when DLSS was disabled.

The performance gains we see with each RTX GPU are fairly consistent across the board. When visual quality is made roughly equal, DLSS provides between 33 and 41 percent more performance, which is a very respectable uplift.

This first batch of results playing Control with the shader version of DLSS is impressive. That raises the question: why did Nvidia feel the need to go back to an AI model running on tensor cores for the latest version of DLSS? Couldn't they just keep working on the shader version and open it up to everyone, such as GTX 16 series owners? We asked Nvidia that question, and the answer was pretty straightforward: Nvidia's engineers felt they had reached the limits of the shader version.

Concretely, switching back to the tensor cores and using an AI model allows Nvidia to achieve better image quality, better handling of pain points like motion, better low resolution support, and a more flexible approach. Apparently the implementation for Control required a lot of hand tuning and was found not to work well with other types of games, whereas DLSS 2.0, back on the tensor cores, is more generalized and more easily applicable to a wide range of games without per-game training.

Not needing per-game training for DLSS 2.0 is huge...

Not needing per-game training for DLSS 2.0 is huge. This allows the new model to take all of the knowledge and information it's learned across a broad variety of games and apply it all at once, rather than relying on a specific set of training data from a single game. This has delivered better image quality, but it also gives Nvidia another advantage: generalized DLSS updates.

While the first version of DLSS required individual updates on a per-game basis to improve quality – and this rarely happened – DLSS 2.0 should improve over time as Nvidia trains the AI model as a whole. Nvidia told us that starting with this new version, they'll be able to update DLSS via Game Ready drivers without the need for game patches. We'll see whether that materializes, but it's an improvement over what was previously possible.

Another benefit of not needing per-game training is that it makes DLSS faster and easier to integrate. This should mean more DLSS games, but we'll wait and see whether that materializes.

Wolfenstein: Youngblood + DLSS

Time to take an in-depth look at DLSS 2.0 in Wolfenstein: Youngblood. Compared to previous DLSS implementations, there are now three quality options to choose from: Quality, Balanced and Performance. All of them upscale the game from a lower render resolution, so the 'Quality' mode isn't a substitute for the still absent DLSS 2X mode that was announced at launch.
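The game's menus don't spell out the exact render resolution behind each mode, but DLSS 2.0's presets are generally described as fixed fractions of the output resolution (roughly 67%, 58% and 50% per axis for Quality, Balanced and Performance). Treat the scale factors in this small sketch as assumptions for illustration rather than confirmed values:

```python
# Approximate per-axis scale factors for DLSS 2.0's presets (assumed values
# for illustration; the actual factors are set by the DLSS SDK / the game).
SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def render_resolution(out_w, out_h, mode):
    """Return the internal render resolution DLSS would upscale from."""
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in SCALE:
    print(mode, render_resolution(3840, 2160, mode))
# Quality     -> (2560, 1440)
# Balanced    -> (2227, 1253)
# Performance -> (1920, 1080)
```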

The big question is whether DLSS 2.0 is any good, and we're pleased to say that it is.

In fact, DLSS 2.0 is extremely impressive, far exceeding our expectations for this sort of upscaling technology. When targeting a native 4K resolution, DLSS 2.0 delivers image quality equivalent to the native presentation. Despite DLSS rendering at an actual resolution below 4K, the final results are as good as, or in some circumstances better than, the native 4K image.

We're hesitant to say that the image quality DLSS provides is better than native, because Youngblood's existing anti-aliasing techniques like SMAA T1x and TSSAA T8X aren't great, and produce a bit of blur across what should be a very sharp native 4K image. When comparing DLSS directly to, say, TSSAA T8X, the DLSS image is sharper, and we should note here that the results we're showing are with the game's built-in sharpening setting turned off.

Still, when putting DLSS up against SMAA without a temporal component, so just regular SMAA, the level of clarity and sharpness DLSS provides is quite similar to the SMAA image. Again, there are advantages here – SMAA does have a fair few remaining jagged edges and some shimmering, which is mostly cleaned up with DLSS – but when comparing detail levels we'd say native 4K and DLSS are similar. We suspect that with a really good post-process anti-aliasing, like we've seen in some other games (e.g. Shadow of the Tomb Raider), we'd see DLSS and native 4K looking almost identical.

And while it may not always be superior to native 4K, being at worst equivalent to 4K is a huge step forward for DLSS. As we talked about extensively in previous features, older DLSS implementations were only good enough to produce an 1800p-like image, frequently with weird artifacts like thin wires and tree branches getting 'thickened', along with a general oil painting effect that we didn't like. None of those issues are present here; this just straight up looks like a native image.

We should stress that native 4K and DLSS 4K don't look identical. This isn't a black box algorithm that can magically pull true native 4K out of a hat. 4K DLSS does look slightly different to native 4K: some areas may have a small increase in detail, others may have a minor decrease. But it's no longer a situation where the DLSS image is noticeably worse; the two images are, to our eyes, equivalent, with neither being clearly better than the other in all situations.

There are some areas where you may notice DLSS actually improves the image quality, such as some finely patterned areas and other elements with thin lines. This is because Nvidia trains the AI using super sampled images with the clearest possible forms of these details. On the other hand, there are still some areas where the algorithm struggles, one being how DLSS handles the fire elements towards the end of the game's built-in benchmark tool. But these are minor problems and a far cry from the issues with DLSS 1.0.

As mentioned earlier, there are three quality modes in this latest revision of DLSS, and at 4K the differences between them are very subtle. Quality is slightly sharper than Balanced, which is slightly sharper than Performance. We think Balanced is a great place to be at 4K, and realistically all of them are a negligible amount away from the native image.

The other really impressive aspect of DLSS 2.0 is that it's also fully functional at lower resolutions.

Take 1440p for instance. The Quality DLSS mode, like at 4K, provides essentially a native 1440p image while rendering at a lower resolution. Everything we've just been talking about at 4K also applies here, which is unlike any previous version of DLSS, where quality quickly fell away at lower resolutions. Even with Control's shader implementation this was a significant issue, but it's not with DLSS 2.0.

At 1440p the limitations of the lower quality DLSS modes do become a bit more apparent. While the Performance mode is fine at 4K, we think the quality does suffer more here and we wouldn't recommend it over either Balanced or Quality, which are both fine. Quality is the mode we would opt to use at 1440p given it delivers the closest presentation to native.

DLSS 2.0 is also effective at 1080p in the same way it is at 1440p and 4K, providing essentially native image quality, particularly when using the Quality mode. Similar to 1440p, we don't think the Performance mode is particularly effective, so we'd stick to Balanced or Quality, the latter of which is the most impressive and delivers image quality equivalent to native.

DLSS 2.0 performance

Let's take a look at performance, once more using our Core i9-9900K test rig, Uber settings, ray tracing disabled (because it doesn't make much sense in a fast paced game like this) and TSSAA T8X anti-aliasing when DLSS is disabled.

Here we're looking at the average performance gain we saw when playing with a 4K target resolution across the six RTX GPUs that support 4K gaming, from the RTX 2060 Super to the RTX 2080 Ti. Using the Quality mode, on average we saw a 24% improvement to average FPS over native 4K and a 27% improvement in 1% lows, with image quality equivalent to native 4K. Using Balanced, the numbers increased to around a 35% improvement, and with the Performance mode, a 47% improvement. All of these modes, we'd say, deliver image quality essentially identical to native 4K, if not better.
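As a quick sanity check on what those percentages mean in practice, here's the simple arithmetic converting the FPS uplifts above into equivalent frame-time savings:

```python
def fps_gain_to_frametime_saving(gain_pct):
    """A +X% FPS gain corresponds to a 1 - 1/(1 + X) reduction in frame time."""
    return 1.0 - 1.0 / (1.0 + gain_pct / 100.0)

for mode, gain in [("Quality", 24), ("Balanced", 35), ("Performance", 47)]:
    saving = fps_gain_to_frametime_saving(gain)
    print(f"{mode}: +{gain}% FPS -> {saving:.0%} less frame time")
# Quality:     +24% FPS -> 19% less frame time
# Balanced:    +35% FPS -> 26% less frame time
# Performance: +47% FPS -> 32% less frame time
```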

DLSS Modes vs. RTX GPU @ 4K

The reason we're using an average across the six GPUs is that the gains are very consistent regardless of which RTX GPU you have. This chart shows the actual results from all six GPUs, and you'll see the lines match up. There is a slight tendency for lower performance GPUs to gain more from DLSS (we saw up to a 25% gain for the RTX 2060 Super compared to 22% for the RTX 2080 Ti using the Quality mode), but it's for the most part equivalent.

What about at 1440p? Clearly we aren't getting performance gains as good here. The Quality mode provided a 16% gain on average, and the Balanced mode a 23% gain. The Performance mode did deliver a 30% gain, below what Balanced achieves at 4K, but we don't feel the Performance mode delivers image quality equivalent to native, so we're not getting a pure 30% performance gain as image quality does decrease somewhat.

DLSS Modes vs. RTX GPU @ 1440p

There is a slight tendency for lower performance GPUs to gain more from DLSS at 1440p. We saw up to a 26% gain using the Balanced mode with an RTX 2060, compared to just 17% with the RTX 2080 Ti.

At 1080p, gains drop further again, to just 10% on average across seven GPUs using the Quality mode, with the same trend of lower power GPUs seeing higher gains. We suspect what's going on is that frame rates become so high that DLSS struggles to scale well.

DLSS Modes vs. RTX GPU @ 1080p

For example, the RTX 2060 at native 1080p hit 150 FPS, and saw around a 15% improvement to that when using the DLSS Quality mode. This is similar to what we saw with the RTX 2080 at native 1440p: again around 150 FPS, and again around a 15% improvement using the Quality mode. So this does make us think that the lower your baseline performance is, the more you can benefit from DLSS, but we'll have to test more games in the future to confirm that finding.

On the other hand, we saw larger gains when ray tracing was enabled. We didn't do extensive testing with this feature, but with the RTX 2060 we saw up to a 2X improvement at 4K using the Performance mode.

Regardless of the configuration, though, we were able to achieve performance gains, with DLSS effectively giving you a free performance boost at the same level of image quality. That's very impressive and will be especially useful either when you want to game at high resolutions, or when you want to really crank up the visual effects, like using ray tracing.

DLSS ecosystem, closing thoughts

There are lots of genuine positives to take away from how DLSS performs in its latest iteration. After analyzing DLSS in Youngblood, there's no doubt that the technology works. The first version of DLSS was unimpressive, but it's almost the opposite with DLSS 2.0: the upscaling ability of this new AI-driven algorithm is remarkable and gives Nvidia a real weapon for improving performance with virtually no impact on visuals.

DLSS now works with all RTX GPUs, at all resolutions and quality settings, and delivers effectively native image quality when upscaling, while actually rendering at a lower resolution. It's mind blowing. It's also exactly what Nvidia promised at launch. We're just glad we're finally getting to see it now.

Provided we get the same excellent image quality in future DLSS titles, Nvidia could be in a position to provide an additional 30 to 40 percent of extra performance by leveraging those tensor cores. We'd have no problem recommending gamers use DLSS 2.0 in all enabled titles because with this version it's basically a free performance button.

The visual quality is impressive enough that we'd have to start benchmarking games with DLSS enabled -- provided the image quality we're seeing today holds up in other DLSS games -- similar to how we have benchmarked some games with different DirectX modes based on which API performs better on AMD or Nvidia GPUs. It's also apparent from the Youngblood results that the AI network tensor core version is superior to the shader core version in Control. In a perfect world, we would get the shader version enabled for non-RTX GeForce GPUs, but Nvidia told us that's not in their plans and the shader version hasn't worked well in other games.

Clearly, a year and a half after DLSS launched, even Nvidia would admit this hasn't gone to plan. This is nearly an identical situation to Nvidia's RTX ray tracing. The feature has been heavily advertised as a 'must have' for PC gamers, but the first few games to support the tech didn't impress, and it's taken nearly a year to get half-decent game implementations, which as of today can be counted on one hand.

But as with ray tracing, while it's nice to eventually get DLSS support in games, doing so weeks or months after the game's launch is virtually worthless. We can't imagine too many people going back to play Youngblood months after release specifically for DLSS, even less so given its mediocre reviews.

We have no doubt that DLSS will be a fantastic inclusion in games beyond today, but we've got to say that looking back, Nvidia went too far in promising what they were unable to deliver. It was common to run into promotional slides like the one above, showing off the magical free performance DLSS would provide.

The huge slate of promised DLSS games looks almost laughable today, with most of them never getting DLSS. Those are major releases that Nvidia advertised would support DLSS but it never came to fruition. We asked them about this and their response was that initial DLSS implementations were "more difficult than we expected, and the quality was not where we wanted it to be," so they decided to focus on improving DLSS instead of adding it to more games.

Nvidia also told us that older DLSS titles will require game-side updates to get 2.0-level benefits due to the new SDK, and that's in the hands of developers, but we doubt those updates will happen. Battlefield V and Metro Exodus appear to have the same DLSS implementations as when we first tested these titles, flaws and all.

Some readers criticized our original features, claiming that we didn't understand DLSS because the magic of AI would see these titles improve with more deep learning and training. Well, a year later and it hasn't improved in these games at all.

On the positive side, Nvidia claims DLSS is much easier to integrate now, and getting DLSS into games on launch day with quality equivalent to Youngblood's implementation should be very achievable.

DLSS is at a tipping point. The recently released DLSS 2.0 is clearly an excellent technology and a superb revision that fixes many of the initial issues. It will be a genuine selling point for RTX GPUs moving forward, particularly if Nvidia can get DLSS into a significant number of games. By the time Nvidia's next generation of GPUs comes around, DLSS should be ready for prime time and AMD might need to respond in a big way.

Shopping Shortcuts:
  • GeForce RTX 2070 Super on Amazon
  • GeForce RTX 2080 Super on Amazon
  • GeForce RTX 2060 Super on Amazon
  • GeForce RTX 2080 Ti on Amazon
  • AMD Ryzen 9 3900X on Amazon
  • AMD Ryzen 5 3600 on Amazon