Alongside their spring driver update, AMD this morning is also unveiling the first nugget of information about the next generation of their FidelityFX Super Resolution (FSR) technology. Dubbed FSR 2.0, it takes the logical next step of incorporating temporal data, giving FSR more information to work with and thus improving its ability to reconstruct fine detail. And while AMD is being coy with specifics in today’s early teaser, at a high level this technology should put AMD much closer to competing with NVIDIA’s temporal-based DLSS 2.0 upscaling technology, as well as Intel’s forthcoming XeSS upscaling tech.

AMD’s current version of FSR, which is now being referred to as FSR 1.0, was released by the company last summer. Implemented as a compute shader, FSR 1.0 is a (relatively) simple spatial upscaler, which can only use data from the current frame to generate a higher resolution frame. Spatial upscaling’s simplicity is great for compatibility (see: today’s RSR announcement), but it’s limited by the data it has access to, which is why more advanced multi-frame techniques can generate more detailed images. For that reason, AMD has been very careful with their image quality claims for FSR 1.0, treating it more like a supplement to other upscaling methods than a rival to NVIDIA’s class-leading DLSS 2.0.

However, it’s been clear from the start that if AMD wanted to truly go toe-to-toe with NVIDIA’s upscaling technology, they’d need to develop their own temporal-based upscaling tech, and that’s exactly what AMD is doing. Set to launch in Q2 (Computex, anyone?), AMD is developing, and will be releasing, a new generation of FSR that incorporates both spatial and temporal data for more detailed images.

Given that FSR 2.0 is an AMD technology, the company isn’t really beating around the bush here as to why they’re doing it: using temporal data allows for higher quality images. And while it goes beyond the scope of today’s teaser from AMD, like DLSS (and XeSS), they are clearly going to be relying on motion vectors as the heart of their temporal data. This means that, like DLSS/XeSS, developers will need to build FSR 2.0 into their engines in order to provide it with the necessary motion vector data. That’s a notable trade-off compared to the free-wheeling FSR 1.0, but it’s nonetheless a good trade-off to make if it produces better upscaled images.
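To give a rough sense of what that engine-side work entails – this is purely illustrative, as AMD hasn’t published the FSR 2.0 API or implementation details – the fundamental piece a renderer must supply is a per-pixel motion vector: how far each surface point moved on screen since the previous frame. Only the engine, with its knowledge of geometry and camera transforms, can compute that; a post-process pass can’t reconstruct it after the fact, which is why temporal upscalers can’t be bolted on the way FSR 1.0/RSR can. A minimal C++ sketch:

```cpp
// Minimal sketch of the engine-side data a temporal upscaler needs.
// This is NOT AMD's FSR 2.0 API (which hasn't been published); it just
// shows how a per-pixel motion vector falls out of the engine's own
// current- and previous-frame transforms.

#include <cstdio>

struct Vec2 { float x, y; };

// Given the same surface point's normalized device coordinates (NDC,
// spanning [-1, 1]) in the previous and current frames, return the
// screen-space motion in pixels (Y-flip conventions omitted for brevity).
Vec2 MotionVector(Vec2 ndcCurrent, Vec2 ndcPrevious, int width, int height) {
    return { (ndcPrevious.x - ndcCurrent.x) * 0.5f * (float)width,
             (ndcPrevious.y - ndcCurrent.y) * 0.5f * (float)height };
}

int main() {
    // A point that moved slightly rightward between frames at 1920x1080:
    // the vector points back to where this pixel's history lives.
    Vec2 mv = MotionVector({0.10f, 0.0f}, {0.08f, 0.0f}, 1920, 1080);
    std::printf("motion vector: (%.1f, %.1f) px\n", mv.x, mv.y); // (-19.2, 0.0)
}
```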

And although FSR 2.0 won’t launch for a few months, AMD is already taking some effort to underscore how it will be different from DLSS 2.0. In particular, AMD’s technique does not require machine learning hardware (e.g., tensor/matrix cores) on the rendering GPU. That’s especially important for AMD, since they don’t have that kind of hardware on their RDNA 2 GPUs. As a result, FSR 2.0 can conceptually be used on a much wider array of hardware, including older AMD GPUs and rival GPUs – a sharp departure from DLSS 2.0’s NVIDIA-only compatibility.

Even if AMD doesn’t require dedicated ML hardware in client GPUs, however, that doesn’t mean they aren’t using ML as part of the upscaling process. To be sure, as temporal AA/upscaling has been the subject of research in games for over a decade now, there are multiple temporal-style methods that don’t rely on ML. At the same time, however, the image quality benefits of using a neural network have proven to be significant, which is why both DLSS and XeSS incorporate neural networks. So at this point I would be more surprised if AMD didn’t use one.
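For context on what such a non-ML method looks like – the following is generic TAA-style logic from the public literature, not AMD’s actual algorithm – the core loop is: reproject last frame’s accumulated result along the motion vector, clamp it against the current frame’s local neighborhood to reject stale history (the source of ghosting), and blend:

```cpp
// A minimal sketch of classic non-ML temporal accumulation, the family
// of techniques that predates DLSS. Generic TAA-style logic only; this
// is not AMD's FSR 2.0 algorithm, which hasn't been disclosed.

#include <algorithm>
#include <cstdio>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> px; // single-channel for brevity
    float at(int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return px[y * w + x];
    }
};

float TemporalResolve(const Image& current, const Image& history,
                      int x, int y, float mvx, float mvy) {
    // 1) Reproject: fetch where this pixel was last frame. (Real
    //    implementations sample bilinearly; nearest-pixel for brevity.)
    float hist = history.at(x + (int)mvx, y + (int)mvy);

    // 2) Neighborhood clamp: constrain history to the min/max of the
    //    current pixel's 3x3 neighborhood to suppress ghosting.
    float lo = 1e9f, hi = -1e9f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            float c = current.at(x + dx, y + dy);
            lo = std::min(lo, c);
            hi = std::max(hi, c);
        }
    hist = std::clamp(hist, lo, hi);

    // 3) Exponential blend: mostly history, refreshed by the new frame.
    const float alpha = 0.1f; // weight of the current frame
    return alpha * current.at(x, y) + (1.0f - alpha) * hist;
}

int main() {
    Image cur{4, 4, std::vector<float>(16, 0.5f)};
    Image hist{4, 4, std::vector<float>(16, 0.8f)};
    // The clamp pulls the stale 0.8 history down to the current 0.5.
    std::printf("resolved: %.3f\n", TemporalResolve(cur, hist, 1, 1, 0, 0));
}
```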

If AMD is using a neural network, then at a high level this sounds quite similar to Intel’s universal version of XeSS, which runs inference on a neural net as a pixel shader, making heavy use of the DP4a instruction to get the necessary performance. These days DP4a support is found in the past few generations of discrete GPUs, making its presence near-ubiquitous. And while DP4a doesn’t offer the kind of performance that dedicated ML hardware does – or the same range of precisions, for that matter – it’s a fast way to do the math that’s still good enough to enable temporal upscaling and to improve on FSR 1.0’s image quality.
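For reference, DP4a itself is a simple operation: a dot product of two vectors of four packed 8-bit integers, accumulated into a 32-bit integer, all in one instruction. Shown below as plain C++ (the signed variant) purely for illustration – packing four values into each 32-bit register is what makes int8 neural network inference tolerably fast in an ordinary pixel or compute shader:

```cpp
// What a (signed) DP4a instruction computes, emulated in plain C++:
// dot product of two 4-lane int8 vectors, plus a 32-bit accumulator.

#include <cstdint>
#include <cstdio>

int32_t dp4a(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        // Extract byte i of each operand as a signed 8-bit lane.
        int8_t ai = (int8_t)(a >> (8 * i));
        int8_t bi = (int8_t)(b >> (8 * i));
        acc += (int32_t)ai * (int32_t)bi;
    }
    return acc;
}

int main() {
    // Lanes (1, 2, 3, 4) . (10, 20, 30, 40) = 300
    uint32_t a = 0x04030201u;
    uint32_t b = 0x281E140Au;
    std::printf("%d\n", dp4a(a, b, 0)); // prints 300
}
```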

Update: According to a report from Computerbase.de, AMD has confirmed to the news site that they are not using any kind of neural network. So it seems AMD is indeed going with a purely non-ML (and DP4a-free) temporal upscaling implementation. Color me surprised.

As for licensing, AMD is also confirming today that FSR 2.0 will be released as an open source project on their GPUOpen portal, similar to how FSR 1.0 was released last year. So developers will have full access to the source code for the image upscaling technology, to implement and modify as they see fit.

Finally, as part of AMD’s teaser the company has released a set of PNG screenshots of Deathloop rendered with both FSR 1.0 and an early version of FSR 2.0. Though early screenshots should always be taken with a grain of salt – they’ve been cherry-picked for a reason – the difference between FSR 1.0 and FSR 2.0 in performance mode is easy enough to pick up on.


FSR 1.0 vs. FSR 2.0

Meanwhile the difference versus native is less clear (which is the idea), though it should be noted that even native 4K is already running temporal AA here.


Native (4K) vs. FSR 2.0

Ultimately, today’s announcement is a teaser for more information to come. At next week’s GDC, AMD will be hosting a session on the 23rd called “Next-Generation Image Upscaling for Games”, where the company will present its research into image upscaling in a developer-oriented context. According to AMD, we should expect a little more technical information from that session, while the full details of the technology will be held back until closer to launch.

If all goes according to plan, FSR 2.0 will launch next quarter.

Comments

  • Zoolook - Thursday, April 7, 2022 - link

    Nvidia usually pulls shit like that, AMD usually doesn't, but we'll see.
  • emn13 - Thursday, March 17, 2022 - link

    Since this is a TAA-esque tech, just like DLSS, the issue may well be software support, rather than hardware. Even DLSS probably could have been ported to other hardware; I suspect NVidia was trying to differentiate their flagship products rather than being intrinsically limited. After all, AI inference algorithms run just fine on all kinds of GPUs, not just DLSS-supporting ones.
  • Yojimbo - Thursday, March 17, 2022 - link

    DLSS 1.9 was the 2.0 DLSS running without the tensor cores. It was not nearly as good as the 2.0 that came out soon afterwards. I don't know how much better it could have gotten if NVIDIA had continued to develop it, but the fact that tensor cores are much better for deep learning applications is not a marketing gimmick; it's a reality. With the tensor cores, the GPU can run inference on more complicated networks within the allotted time between frames, and so get more accurate results and thus higher quality from a lower input resolution, resulting in higher performance as well.
  • mode_13h - Monday, March 28, 2022 - link

    Tensor cores are also more power-efficient, assuming the complexity of your network is constant. And burning lots of power on DLSS could trigger/worsen clock throttling.
  • rmfx - Thursday, March 17, 2022 - link

    The improvement is mind-blowing.

    V1 was a total joke.
    V2 (if as good as shown here) is a true DLSS contender that will make me buy AMD.
  • Makaveli - Thursday, March 17, 2022 - link

    Agreed, I think RDNA 3 is going to surprise a lot of people.
  • haukionkannel - Thursday, March 17, 2022 - link

    It definitely looks better now. As it should! Native resolution is still king, but these look genuinely usable, much like recent DLSS versions.
    Do you remember how bad first-generation DLSS was... It was really bad. It looks like this is AMD's DLSS 2.0 moment, when things start to click into place.
  • Makaveli - Thursday, March 17, 2022 - link

    You are correct, and I already tried to remind the NV fanboys that they were comparing DLSS 2.xxx to FSR 1.0 – give AMD time to work on it. Now that time is here.
  • Wereweeb - Thursday, March 17, 2022 - link

    V1 had no artifacts, since it didn't use a temporal algorithm.
  • Zizy - Thursday, March 17, 2022 - link

    It did have shimmering. Avoided ghosting though.
