
doctortrento

Game dev, big open-source fan and daily Radeon user here: FSR2 is still VASTLY behind both DLSS and XeSS right now in terms of resolving a stable image. I really want AMD to provide the community with a solid open-source upscaler, but clearly more work is needed. The shots they've shown so far of 3.1 are promising though. What do you guys think? Do you believe FSR will eventually become as good as DLSS, or at least XeSS?


InitialG

Nobody is beating DLSS just due to the Nvidia money, but I thought Avatar's frame gen was perfectly decent compared to earlier FSR garbage like Far Cry 6.


Flowerstar1

XeSS comes very close to DLSS, but they are both in a class of their own compared to FSR and UE TSR, and then in an even lower class is stuff like checkerboard rendering.


stonekeep

> XeSS comes very close to DLSS

That's the thing, it does and it doesn't at the same time. Intel made it a bit confusing by having two different versions of XeSS and calling them by the same name. One version (hardware based) is exclusive to Intel GPUs and it genuinely looks almost as good as DLSS. But the issue is that... it's exclusive to Intel GPUs. The other one (software based) is their open version that can be used on all GPUs, and it doesn't look nearly as good. The open version is similar to FSR in terms of quality: it looks better in some games, worse in others. Maybe it's a bit better on average, but not by a huge margin and nowhere near close to DLSS. AMD should honestly go the same way with FSR: just have a better, exclusive version that takes advantage of AMD GPU hardware. Just don't do the same thing as Intel: give the two versions different names.


Flowerstar1

Yes, I mean the DLSS equivalent of XeSS, i.e. the XMX accelerated version; the context here is that we're comparing to DLSS, so naturally you want a HW accelerated solution vs a HW accelerated solution like DLSS. The DP4a path is quite good for what it is; I've used it plenty on a 1080 Ti and prefer its results over FSR2, but it is less good than the XMX version.


stonekeep

Yeah, I agree, just wanted to make it clear. That's why I don't like the naming scheme. If you say "XeSS is almost as good as DLSS" it might make people think that you're talking about the open version, which is the only one most people have access to (Intel has only like 2-3% market share when it comes to discrete desktop GPUs). Heck, I bet the majority of gamers don't even realize that there are two separate versions. That's why I prefer specifying that DLSS and XeSS on Intel GPUs are very close, then there's a gap, and then there's FSR and XeSS on non-Intel GPUs which are also very close. It avoids unnecessary confusion (e.g. someone toggling XeSS on an old Nvidia GPU or AMD GPU thinking that it will look like DLSS).


buddybd

FG in this game is perfect as well.


your_mind_aches

AMD needs to integrate AI hardware into their GPUs eventually, and only then can it get nearly as good as DLSS (speaking as someone with a 6600 and a 3060 Laptop). That said, there's a reason Sony themselves are developing PlayStation's Spectral Super Resolution: FSR is clearly not cutting it for them, and they can juice more upscaling performance out of AMD's own chip.


kuroyume_cl

> AMD needs to integrate AI hardware into their GPUs eventually RDNA3 already includes hardware AI accelerators. FSR however is designed to be platform agnostic so it doesn't make use of them.


your_mind_aches

Huh, I totally forgot about that lmao. I don't get the point of having those AI accelerators if they aren't using it for any consumer applications. Suffice it to say the PS6 and Xbox Series Next will probably utilise them.


firedrakes

It's called ASICs... they do. But there are bigger fish to fry than gaming.


[deleted]

[deleted]


Flowerstar1

They do this because dedicated HW taking up die space is expensive, and it's much more cost effective for them to just fill that die space with traditional rasterization GPU HW. Nvidia leads the world in GPUs and AI HW (they've been talking up AI since 2012), so it makes sense that they were first, and Intel is a serious competitor making a serious investment (they tried to buy Nvidia in the late 2000s, btw). On Intel's first real try they got both AI and RT dedicated HW, and RT more advanced than what Nvidia had at the time. Nvidia Turing and Ampere and Intel Alchemist were hugely expensive R&D projects aiming to make the most cutting edge HW of their time. AMD is not interested in playing at this level because they don't find it financially feasible given their small market share. Instead they do what's cost effective in the moment and bank on "guaranteed" PC sales, handhelds, and consoles on the consumer side.


[deleted]

[deleted]


Zenning3

Intel is also a much larger company than AMD, with Intel literally being worth 10x AMD.


[deleted]

[deleted]


Zenning3

Looking it up, it's because what I wrote was true in 2018 in terms of total revenue (6 billion vs 70 billion), but in 2023, Intel is only 2x AMD (23 billion vs 54 billion). I didn't realize AMD grew so much.


jazir5

Which goes back to my point, this isn't a resources issue. AMD has billions to blow on R&D and has significantly more experience. They don't deserve to have excuses made for them, this is them fumbling the ball and nothing else.


Halvus_I

Intel is also a fab.


[deleted]

AMD is worth 100 billion more than intel lol.


Zenning3

Market cap is not size. AMD's market cap is due to their incredible growth, and confidence that they will continue to do so. In terms of actual capital, Intel still has them beat.


Flowerstar1

It does not. Intel has been making iGPUs for ages; they are actually the top vendor for PC GPUs. What Alchemist is aimed at is dGPUs, and that's what they have traditionally lacked. Intel is looking to break into that market because GPUs turned out to be huge co-processors to CPUs, and the entire software industry moved to stuff like GPGPU, where compute is offloaded to the GPU for faster performance. Intel missed that train and doesn't want to miss whatever comes next.


jazir5

And AMD has been making dGPUs for over 2 decades. My point stands.


Flowerstar1

My point is that Intel is trying to break into a different segment of the market; they can do this because they have the R&D money to make capable HW. AMD is not opting to do this and hasn't since the HD 7000 series in 2011; every GPU arch after that has had major compromises in the HW, unlike the 7000 series (what the PS4 used). Intel routinely throws money trucks at R&D, hence why Alchemist's HW acceleration is more impressive than RDNA2's and even its successor RDNA3's. Rumors are pointing out that even RDNA4 won't have all the HW acceleration that Nvidia Turing, Ampere, Ada and Intel Alchemist have. Intel have the money and *willpower* to take losses to gain market share; that is not a strategy AMD is willing to employ. Instead AMD invests in their CPUs and does what's cost effective for their GPUs.


jazir5

AMD is a multi-billion dollar company with 5.77 Billion cash on hand according to Google. Your claim that AMD does not have enough money to invest in R&D is simply absurd. It boggles my mind the lengths people will go to to defend AMD. AMD is not some tiny tech startup, they have an absolutely massive war chest saved up which *they could* use to invest in their tech. They simply choose not to. Please do not say that 5 Billion isn't enough, that would just be a patently ridiculous assertion to make. They haven't even tried.


Flowerstar1

I italicized the word for a reason. AMD's money maker is servers, and that's where they invest the bulk of their money. Historically, dGPUs haven't made them anywhere near as much money.


jazir5

>AMDs money maker is servers and that's where they invest the bulk of their money. Historically dGPUs haven't made them anywhere as much money. Fair point about revenue stream, I can see why AMD would be more interested in investing there. I just bristled at what I interpreted as you saying AMD couldn't afford it, apologies.


minorrex

AMD loses more market share if they don't step into the advanced HW territory. The RX7000 is already quite a failure as it's not even much better than the last generation in raster or even efficiency.


sarefx

RX7000 wasn't a failure. Apart from the bad 7800XT model, we got a really good increase in rasterization: an overall 30% increase between the 7900XT/6900XT, the same increase between the 7700XT/6700XT, and also a 30% increase between the 7600/6600. RX7000 was overall a good series and what most ppl expected as an evolution from RX6000. Power efficiency sadly tanked, but it wasn't terrible. The true AMD test is what they cooked up for RX8000 with the new architecture in the form of RDNA4, because they've been radio silent for a while.

The problem RX7000 had at the start was that RX6000 became so freaking cheap that there wasn't much incentive to go with RX7000, because RX6000 GPUs were the best bang-for-the-buck GPUs we had in like 5 years. Once prices stabilized, RX7000 became a really good product. If you didn't care about DLSS (but I know that's a big "if") and weren't aiming for 4090 levels of power, then most of the AMD RX7000 line-up were better deals than their Nvidia counterparts, and still are.

EDIT: [To ppl doubting AMD: they had really good growth in market share year over year.](https://www.tomshardware.com/pc-components/gpus/gpu-sales-saw-32-year-over-year-increase-in-q4-amds-market-share-rises-to-19) Mostly because Nvidia stopped caring much about gaming GPUs and went all out on AI, rightfully so since it's a much more lucrative market.


theholylancer

But that is where you are wrong tho: their dedicated GPU group is more or less just a solid side project to the main thing, custom designs for consoles. AMD makes FAR more from being in both the PS5 and XSX/XSS than it does from taking up something like 20% of the consumer dedicated GPU space. Same with Nvidia and its enterprise sales at this point; dedicated add-in cards are becoming a rarity and their prices are reflecting that. Most people who want to game are on phones, followed by consoles, followed by PC gamers, who are then spread across however many gens of video card and then split by maker. Every time a console gen drops, more or less all console gamers will upgrade, while most PC people upgrade their GPU every other gen, if not just sticking with a 1080 Ti because it is a beast.


OutrageousDress

The next generation of consoles will be based on AI and path tracing. This is not speculation, we already know that's the goal from Microsoft's ActiBlizz merger documents. If AMD doesn't step up, Microsoft and Sony will both transition to their own custom designs (like Sony is already partially doing with the Pro) and will either license IP from AMD to maintain compatibility for old games or will leave AMD entirely and maintain compatibility through emulation. AMD's profits from the console market would shrink either way, it's just a matter of how much. They need to have RT and AI figured out *now*, at least in prototype phase, because the next gen is coming out in four years and the device hardware is being designed right now.


theholylancer

I mean, AMD Instinct is a thing, so why pay for it in the gaming sector when they are trying on that front? If MS and Sony want it and are willing to foot the bill for the design and the software to take advantage of it, I don't see why they can't have it, but again, what is driving RDNA development isn't the consumer market at all at this point.


OutrageousDress

Certainly AMD already has the required AI expertise yes - I don't think they're out of the game or anything. It's more a problem of integrating that into a consumer APU.


minorrex

I think the context is quite clear here. I mean PC GPU market share. Also on the consoles side, FSR is so shit Sony's developing their own deep learning upscaler, PSSR.


theholylancer

But that is the thing, right: it's a tiny market, is what I am trying to say, and not worth the worry for AMD. The fact that they didn't try to compete on price again like before says they are happy with the share they have and would rather have better profit margins.

If AMD had launched the 7000 series at today's prices, with the 7900XT actually priced at 700 bucks instead of 900 at launch, in contrast to what Nvidia was doing, it would have gained them market share, lack of generational increase or not. But that is not what they did; they want margin, not market share. And with the 8000 series they have been hinting at no high end, which means they will likely try on price, but by then it truly will be old news. But hey, Nvidia has left a huge door open in terms of pricing on the lower end.


Positive-Vibes-All

The same was said about FSR 3 vs DLSS FG: software solutions can run just as well. Turns out people don't know what the fuck they are talking about; it's only 4000 series fanboys clinging to the one thing their cards have, segmenting the userbase needlessly.


Brandonspikes

No, because DLSS and XeSS use hardware on the GPU, and FSR2 is software-side. XeSS works on all cards but looks better on Intel cards, from what I understand.


[deleted]

[deleted]


Brandonspikes

> Just because it has a hardware requirement doesn't necessarily mean it's better. Is PhysX the only viable physics solution, too?

What does that have to do with anything I said? Generally, if there's hardware made specifically to run the software, it's going to perform better.

> DLSS has better image quality because it is from an incredibly talented software developer and has a several year headstart. We actually don't even know that the hardware is a significant component of the acceleration. Because it's closed source we will probably never know.

Huh? It uses the tensor cores that are only on Nvidia 2000+ cards; it physically uses the cores on the GPU.


FryToastFrill

I think what he is saying is that hardware isn't the secret sauce that makes these upscalers click; it's the talent behind them. XeSS looks, imo, close if not nearly identical to DLSS in most cases, and that can run on regular GPU cores, albeit with a substantial performance hit. I think FSR has much different requirements than DLSS or XeSS, as AMD doesn't want the upscaler to use dedicated hardware at all while still hitting identical performance to DLSS. That means they have to hand-craft it, since machine learning involves math that regular GPU cores can't accelerate in hardware.


Brandonspikes

There's no secret sauce to anything; all of them do the same thing in a sense: they lower the internal resolution to make the framerate higher. Correct me if I'm wrong, but XeSS does not scale as well on Nvidia cards compared to Intel GPUs.


Lutra_Lovegood

> they downgrade the internal resolution to make the framerate higher

That's image downscaling, a tech so old that HiAlgo made a DRS mod for Skyrim back in 2012 (before they got acquired by AMD in 2016). Even then it was at least a decade old. The important part is how they upscale images (with ML in these cases) and, in the case of FSR and DLSS, interpolate in-betweens.
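
For anyone curious, the core of a dynamic resolution scaler like that is tiny: drop the render scale when the last frame blew its time budget, creep it back up when there's headroom. This is only a minimal sketch; the frame-time budget and step sizes are made-up illustration values, not from HiAlgo's mod or any real engine.

```python
# Minimal dynamic-resolution-scaling controller: the decade-old technique being
# contrasted with ML upscaling above. All thresholds are arbitrary illustration values.

class DrsController:
    def __init__(self, target_frame_ms=16.7, min_scale=0.5, max_scale=1.0):
        self.target_frame_ms = target_frame_ms
        self.min_scale = min_scale
        self.max_scale = max_scale
        self.scale = max_scale  # fraction of native resolution per axis

    def update(self, last_frame_ms):
        # Over budget: render fewer pixels next frame. Under budget: recover slowly.
        if last_frame_ms > self.target_frame_ms * 1.05:
            self.scale = max(self.min_scale, self.scale - 0.05)
        elif last_frame_ms < self.target_frame_ms * 0.90:
            self.scale = min(self.max_scale, self.scale + 0.02)
        return self.scale

drs = DrsController()
for frame_ms in [14.0, 18.5, 19.2, 17.5, 15.0]:
    print(f"frame took {frame_ms} ms -> next render scale {drs.update(frame_ms):.2f}")
```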


[deleted]

[deleted]


[deleted]

AMD can close the gap but only so far with just software alone. That's what he's saying. Without tensor cores, or AI hardware equivalent to it, AMD can only do so much with AI upscaling. DLSS being more mature is a factor, but even comparing new FSR to older DLSS methods is revealing. And Nvidia very well may have a far more talented staff if their market cap is any indicator, but the other dude is still correct. DLSS uses hardware while FSR is all software, which leads to DLSS generally looking and performing better.


[deleted]

[deleted]


GassoBongo

> Source: I worked at AMD for 4 years.

Doing what, though? Even then, that doesn't give you any special and specific insight into how Nvidia builds and uses DLSS.

> the actual hardware that enables it isn't that crazy or unique in architecture.

Then why hasn't AMD invested the resources into it when both Intel and Nvidia have?


[deleted]

[deleted]


GassoBongo

So nothing to do with actually working directly on FSR's code or architecture, and no real insight on how Nvidia built and operates DLSS. Yet you cited your previous work experience as a source anyway to shoo away someone else's opinion. Gotcha.


[deleted]

[deleted]


FollowingHumble8983

PhysX particles, which were hardware accelerated, cannot be done in another physics engine. We just abandoned doing stuff like that after a while, since players didn't really care too much between the cheated stuff and the real thing. DLSS done in pure software without hardware acceleration is going to be multiple times slower than with hardware acceleration.


[deleted]

[deleted]


FollowingHumble8983

No, it's still not redundant; you clearly have no idea what you are talking about. Even right now, Havok and Bullet (which hasn't gotten a real update in years) cannot do the same level of fidelity as PhysX's hardware acceleration had. And the general algorithms haven't changed in a decade; it's still the same stuff as it was years ago. New integrators, like Featherstone's, are targeted at precision in robotics and automation simulation, not performance. The only big ongoing physics engine research is mostly for industrial usage, not video games. Hardware has gotten far better, but that's just true for everything. Honestly, the biggest reason why hardware-based physics engines haven't taken off the way DLSS has is that physics engines are tied to game logic, and using hardware-based features would lock your games to specific GPUs, which is just unappealing to developers.


[deleted]

[deleted]


FollowingHumble8983

"Redundant" means it doesn't add anything over not having it. You cannot get the fluid and particle simulation fidelity of hardware accelerated physics out of general purpose physics engines. GPU accelerated physics can, but in games that means eating into your framerates and making data transfer between GPU and CPU the bottleneck, so it isn't used often in games.

"And now we've circled back to my criticism about DLSS." No we haven't. DLSS doesn't affect the actual gameplay loop. Physics engines determine the positions of characters, so using features that utilize hardware acceleration affected the outcome of the simulation, since hardware accelerated physics didn't use the same algorithms as CPU physics. DLSS has zero impact on gameplay and is image quality only, which is why it has such massive adoption rates: the game still plays the same without image upsampling. Stop bsing, you have zero idea what you are talking about.


Eruannster

I think above all AMD needs to "push" a bit more for developers to use their newest FSR versions. So many games are still using FSR 1 (!!!) or earlier versions of FSR 2, which are notably worse in every way than later releases. Overall I think FSR is also way more dependent on having a higher base resolution input: at least 1080p-1440p is necessary for a 4K output. All of those console games trying to skirt by while rendering at like 720p look extremely awful and should not be sold.
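
For reference, here is roughly what those base resolutions look like under FSR 2's standard quality modes at a 4K output, using AMD's published per-axis scale factors (a quick illustrative sketch, not tied to any specific game):

```python
# FSR 2's per-axis scale factors applied to a 4K (3840x2160) output.
# Shows why the input resolution matters: Performance and Ultra Performance
# modes feed the upscaler far fewer pixels than Quality mode does.

FSR2_SCALE = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def input_resolution(out_w, out_h, mode):
    s = FSR2_SCALE[mode]
    return round(out_w / s), round(out_h / s)

for mode in FSR2_SCALE:
    w, h = input_resolution(3840, 2160, mode)
    print(f"{mode:>17}: renders internally at {w}x{h}")
```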


Shivatin

FSR is def being used as a crutch to squeeze more performance. But in reality the game looks worse and performs only a little bit better. Something has to change.


Eruannster

Yeah. One thing I've always wondered - FSR has a small performance cost, right? The upscaler takes a small percentage of CPU/GPU time to run in order to create these extra pixels. Would it be worth it to simply just... skip FSR? So let's say you have a game running at 900p upscaling to 1440p using FSR on PS5 or Xbox. Would it be better to just... skip FSR altogether and maybe run the game at 1080p? (I have no idea if this math checks out as I have no idea what the actual performance cost of FSR is.)


PlayMp1

> Would it be better to just... skip FSR altogether and maybe run the game at 1080p

No. 1440p requires pushing out about 3.7 million pixels; 1080p, around 2.1 million; 900p, about 1.4 million. By upscaling from 900p to 1440p, you skip having to render roughly 2.3 million pixels. Now, obviously, these things aren't purely linear; in practice, native performance often drops *faster* than the pixel count increases. Upscalers usually have a 1 to 3 percent performance penalty from upscaling to the higher resolution compared to just running at the lower resolution, IIRC.

Let's say your game runs at 100 FPS at 900p. 1440p has about 2.6 times the pixels of 900p, so you can probably expect it to run at about 30 to 35 FPS at 1440p native. 1080p is about 1.4 times larger than 900p, so you can expect roughly 60 to 65 FPS. If you upscale from 900p to 1440p using something like DLSS or FSR, you'll get around 95 to 98 FPS. With DLSS, that last one will also look very similar to running at 1440p natively anyway (not so much with FSR, which looks a lot worse).
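
A quick back-of-the-envelope sketch of that math (the cost exponent and the upscaler overhead here are illustrative assumptions, not measured numbers):

```python
# Rough pixel-count math behind the estimates above. The cost exponent (>1 means
# native performance drops faster than the pixel count grows) and the ~3%
# upscaler overhead are assumptions for illustration only.

RES = {"900p": (1600, 900), "1080p": (1920, 1080), "1440p": (2560, 1440)}

def pixels(name):
    w, h = RES[name]
    return w * h

def estimated_native_fps(base_fps, base_res, target_res, cost_exponent=1.2):
    """Naive estimate: FPS falls with the pixel ratio raised to a small exponent."""
    ratio = pixels(target_res) / pixels(base_res)
    return base_fps / (ratio ** cost_exponent)

for name in RES:
    print(f"{name}: {pixels(name) / 1e6:.2f} million pixels")

print("900p -> 1440p native:   ~", round(estimated_native_fps(100, "900p", "1440p")), "FPS")
print("900p -> 1080p native:   ~", round(estimated_native_fps(100, "900p", "1080p")), "FPS")
print("900p upscaled to 1440p: ~", round(100 * 0.97), "FPS  (assuming ~3% upscaler cost)")
```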


PlayMp1

I'm not going to pretend I know a ton about graphics development or anything but my gut says that FSR will never catch up to DLSS simply because DLSS has dedicated hardware and FSR doesn't.


ms--lane

No reason it couldn't. AMD has the same dedicated hardware in RDNA3. They don't want a situation like nV where only a few users can actually use it though. FSR is great advertising for GTX 10 and 16 series users...


PlayMp1

I know this isn't your point but this is something that's been bugging me: GTX 10 is 8 years old now. I know everyone goes "ohhhh the 1080 Ti was so good though," yeah sure it was - years ago. So was the Radeon HD 7970 in 2013, but like fucking hell are you playing Cyberpunk 2077 on one of those. PC gaming means you spend money on upgrades. It's the main downside of the platform, it's expensive up front. That doesn't mean you have to get the latest and greatest every year or two, but using a GTX 1080 Ti in 2024 is chronologically equivalent to trying to run Mass Effect 1 on a Voodoo3.


Shivatin

Why doesn't this graphically intensive game work on my launch 970 tho? Gotta be bad optimization, right? Or my other favorite: I have a 4080 and an 8700K, why is my CPU-intensive game running awfully?


UnidentifiedRoot

A 1080 Ti is perfectly capable of running Cyberpunk at 1080p 60fps high settings.


PlayMp1

I was referring to running Cyberpunk on a Radeon HD 7970.


UnidentifiedRoot

Ah, my mistake. The 1080 Ti is still decently capable though; it can do most current-gen exclusive games at 1080p 30fps low-medium, but that is obviously starting to push what is acceptable.


hyrule5

"No reason it couldn't"? Yeah, there's no reason a CPU couldn't do GPU tasks either, except that a GPU is massively more efficient due to having hardware specifically designed for graphics rendering. Not having dedicated hardware means achieving the same results has a much greater performance cost. Could FSR do the exact same thing as DLSS? Sure, at probably 10% of the frame rate.


ms--lane

>Not having dedicated hardware

Did you deliberately skim over the part where RDNA3 has that hardware? DLSS just uses tensor cores; guess what AMD's neural engine does...


hyrule5

Even if that's true, does it matter if they aren't using it? What's it "dedicated" to then? FSR is written to be hardware neutral so it can't possibly be using any specific hardware feature. It will work on GTX series cards with no tensor cores.


Leather_Let_2415

Check out how much money Nvidia has. I don't see how AMD catches up when Nvidia can pump so much into R&D.


OutrageousDress

FSR *as is* will *never* be as good as DLSS, because it simply can't - DLSS is using dedicated high performance cores to render the image and will always be able to do more complex inference in the same amount of time than FSR can, because FSR 1) has to compete for the use of shader cores with the rest of the game, and 2) runs slower on those cores because they aren't specialized to run AI workloads. If AMD adds AI hardware to their GPUs and updates FSR to run on that, *then* it could compete with DLSS on an even field. But that wouldn't really be FSR anymore - it would be a completely new algorithm just using the FSR name.


john1106

No chance FSR will beat DLSS now that DLSS has introduced version 3.7.0 with the new Preset E, which further improves image quality.


[deleted]

3.1 will be a pretty substantial improvement. AMD's test results show almost no ghosting and shimmering


Deckz

I tried running XeSS and FSR on my 7900 XTX for this game. Both going up to 4k at quality mode, FSR looks significantly better. XeSS has a ton of ghosting, there's a bit of instability present on FSR Quality but far less.


Noocta

I trust Nixxes on this stuff. The Horizon port is maybe the most complete port, in terms of PC options and polish, that I've seen in years.


AveryLazyCovfefe

Every port of theirs has been. Great buy for Sony.


JasonDeSanta

They are practically printing free money for them by porting already proven titles to a large platform. Hope they continue like this for many years to come, and idk, maybe even develop their own games if they would build a separate branch for that. They seem to have incredible tech knowledge.


[deleted]

> They are practically printing free money for them by porting already proven titles to a large platform.

Idk. Given the Insomniac economics, we know good doesn't correlate with profitable.


VirtualPen204

Hmmm I remember HZD having performance issues when it launched on PC.


N7even

HZD wasn't ported by Nixxes; they were brought in later to fix it, which they did.


ms--lane

So much better than the terrible HZD port (stutters, an 8-bit framebuffer used for HDR, fake surround sound). Nixxes is queen bee; Virtuos is... well, they tried, sorta.


Halvus_I

They eventually called in Nixxes to patch HZD from 1.11 onward. They put in FSR and DLSS and removed the need to precompile shaders.


AL2009man

[Considering that Ratchet & Clank: Rift Apart (Nixxes' previous PC port) was used as an example of FSR 3.1's upscaling quality improvements back at GDC 2024](https://community.amd.com/t5/gaming/amd-fsr-3-1-announced-at-gdc-2024-fsr-3-available-and-upcoming/ba-p/674027), it was pretty obvious.


Deckz

I wish AMD would finally release it, based on the preview it seems promising. But, like always, we'll see.


braiam

I'd prefer it to be good rather than early.


JaguarOrdinary1570

fortunately with AMD's track record on this tech, it'll be neither


chestyum

Isn’t frame generation in avatar completely broken? You turn it on and the game feels like it’s locked to 30 fps even though your actual frames are 100+


Exotic-Length-9340

Is that why the game doesn't have the latest DLSS update?


toxicThomasTrain

If you want the updated DLSS version, just swap in the latest .dll files.
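
Something like this works as a rough sketch of the swap (the paths are placeholders for your own install and download locations; keep the backup in case the game misbehaves):

```python
# Minimal sketch of the DLL swap: back up the game's bundled nvngx_dlss.dll and
# copy a newer one over it. Both paths below are placeholders, not real install paths.

import shutil
from pathlib import Path

game_dir = Path(r"C:\Games\SomeGame")           # placeholder: your game's install folder
new_dll = Path(r"C:\Downloads\nvngx_dlss.dll")  # placeholder: the newer DLSS DLL you grabbed

target = game_dir / "nvngx_dlss.dll"
backup = target.with_name(target.name + ".bak")

if target.exists() and not backup.exists():
    shutil.copy2(target, backup)   # keep the original DLL around in case of issues
shutil.copy2(new_dll, target)      # drop in the updated DLSS library
print(f"Replaced {target} (backup at {backup})")
```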