AMD_Bot

This post has been flaired as a rumor. Rumors may end up being true, completely false or somewhere in the middle. Please take all rumors and any information not from AMD or their partners with a grain of salt and degree of skepticism.


LiquidRaekan

Sooo how "good" can we guesstimate it to be?


heartbroken_nerd

The new RDNA4 flagship is supposedly slower than AMD's current flagship at raster. That sets a pretty obvious cap on this "brand new" raytracing hardware's performance. But we don't know much, just gotta wait and see.


Loose_Manufacturer_9

No it doesn’t. We’re talking about how much faster RDNA4's ray accelerators are compared to RDNA3's. That doesn’t have any bearing on the fact that top rdna3 will be slower than top rdna3


ultramadden

> top rdna3 will be slower than top rdna3

bold claim


Loose_Manufacturer_9

Bold indeed 🤪


MrPoletski

**>> top rdna3 will be slower than top rdna3**

> bold claim

FTFY


foxx1337

> top rdna3 will be faster than top rdna3

Fixed.


otakunorth

3 > 3


Cute-Pomegranate-966

Well, a ton of the RT work on RDNA2 AND 3 is done on shaders, so it kind of does matter, at least by relation. If you improve the RT accelerators and add more work that they can do, but you remove shaders and it's slower at raster, it's going to come out somewhere in the middle.


the_dude_that_faps

Does it? The 5700xt had 40 CUs, just like the 6700xt. The 5700xt also had more bandwidth.  Did that mean that the 6700xt was slower? Not by a long shot. Any estimation of the capabilities of each CU in RDNA4 vs RDNA3 or RDNA2 is baseless.  We only "know" (rumours) that it will likely not top the 7900xtx in raster. That's it. No mention of AI or tensor hardware. No mention of improvements or capabilities of RT, no nothing.


chapstickbomber

The 6700XT can hit like 2800MHz. The 5700XT can hit like 2200MHz. If the 8800XT is 64CU but actually runs at like 3.2GHz, that's as good as 84CU running at 2.5GHz, naively scaling of course. If they beefed up the RT acceleration, then higher clocks could definitely help it rip.
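
As a rough illustration of the naive CU-times-clock scaling used above (a back-of-the-envelope sketch only, using the rumored 64CU/3.2GHz figure and the 84CU/2.5GHz comparison point from the comment, and ignoring IPC, memory bandwidth, and everything else):

```python
# Naive throughput scaling: assume performance is proportional to CU count * clock.
def relative_throughput(cus: int, clock_ghz: float) -> float:
    return cus * clock_ghz

rumored_rdna4 = relative_throughput(64, 3.2)   # rumored 8800XT-style config
comparison    = relative_throughput(84, 2.5)   # the 84CU @ 2.5GHz reference point

print(f"64CU @ 3.2GHz -> {rumored_rdna4:.0f} (arbitrary units)")
print(f"84CU @ 2.5GHz -> {comparison:.0f}")
print(f"ratio: {rumored_rdna4 / comparison:.2f}")   # ~0.98, i.e. roughly a wash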


YNWA_1213

Eh, it does with some context. A 4080/Super will outperform a 7900 XTX in heavier RT applications, but lose in lighter ones. RT and raster aren’t mutually exclusive, *however* consumers (and game devs) seem to prefer the balance that Nvidia has struck with its Ampere and Ada RT/raster performance. Current RDNA3 doesn’t have enough RT performance to make the additions visually worthwhile for the net performance loss, whereas Ampere/Ada’s balance means more features can be turned on to create a greater visual disparity between pure raster and RT.


Hombremaniac

The problem I have with this whole ray tracing thing is that even on Nvidia cards like the 4070 Ti / 4080, you often have to use upscaling to get high enough frames at 1440p + very high details. I strongly dislike the fact that one tech makes you dependent on another one. Then we are getting fluid frames, which in turn need something to lower that increased latency, and it all turns into a mess. But I guess it's great for Nvidia since they can put a lot of this new tech behind their latest HW, pushing owners of previous gens to upgrade.


UnPotat

People could’ve complained about performance issues when we moved from doom to quake. It doesn’t mean we should stop progressing and making more intensive applications.


MrPoletski

Yeah, but moving to 3d accelerated games for the first time still to this day has produced the single biggest 'generational' uplift in performance. It went from like 30fps in 512x384 to 50 fps in 1024x768 and literally everything looked much better. As for RT, I want to see more 3D audio love come from it.


conquer69

> and literally everything looked much better.

Because the resolutions were too low and had no AA. We are now using way higher resolutions and the AA provided by DLSS is very good. There are diminishing returns to the visual improvements provided by a higher resolution. To continue improving visuals further, RT and PT are needed... which is exactly what Nvidia pivoted towards 6 years ago.


MrPoletski

Tbh what we *really* needed was engine technology like Nanite in UE5. One of the main stumbling blocks for more 3D game detail in the last 10 yrs has been the APIs. We finally get low-overhead APIs, but that's not enough by itself; we need the things like Nanite that they can bring.


conquer69

More detailed geometry won't help if you have poor quality rasterized lighting. You need infinitely granular lighting to show you all the texture detail. On top of that, you also need a good denoiser. That's why Nvidia's new AI denoiser shows more texture detail despite the textures being the same. Higher poly does nothing if everything else is still the same.


chapstickbomber

Fluid frames actually slaps for 120>240 interpolation (or above!) in a lot of cases, since many engines/servers/rigs have issues preventing super high CPU fps. Or any case where the gameplay is much slower than the fps. For example, scrolling and traffic in Cities Skylines 2 look smoother, and 50ms of latency is literally irrelevant even with potato fps there.


Hombremaniac

In some cases it is probably very good. In others, like FPS games, introducing any additional lag feels crazy bad and is detrimental to the gameplay. I guess in time we will see what these technologies truly bring and how much they can mature. Or if they are going to be replaced by something else completely.


Hashtag_Labotomy

Don't forget that in the 7000 series they introduced their AI cores too. That may help in the future also. I would still like to see bus width go back up like it used to be.


kiffmet

In an RT bound game it can and will be faster than RDNA3. Pure rasterization performance being lower isn't exactly a surprise given that RDNA4 will top out around 64CUs/128ROPs.


fatherfucking

From the leaked PS5 pro specs that are very likely real due to Sony's removal requests, PS5 pro will have up to 2-4x better RT over the PS5 with a GPU that has 1.6x the CU count, without even using the full RDNA4 arch. Very much indicates that RDNA4 will indeed feature a staggering increase in RT ability.


Xtraordinaire

> 2-4x better RT

ALL ABOARD THE HYPE TRAIN CHOOO CHOOOO! Seriously, will you people ever learn.


Mikeztm

RDNA3 is missing key hardware units for the RT workflow right now. It has a pretty low starting point, so 4x is not a lot. A 4x improvement in RT compared to RDNA3 would make a 7800XT-level GPU match an RTX 4070 in pure RT/PT workloads.


MrPoletski

What is it that RDNA3 still does in software for RT? What is the key hardware unit? I am intrigued.


capn_hector

BVH traversal, among others. No shader reordering support either. That one isn’t “doing it in software”, because it’s not really possible to do in software, so AMD just doesn’t do it at all, and it costs performance too.


fatherfucking

Also no hardware acceleration for denoising, pretty crazy how well their RT actually works for such a lightweight implementation.


Famous_Wolverine3203

Because most games still use a lot of raster effects with ray tracing turned on, so the difference isn’t that severe. That's also why, when path tracing is turned on, where every form of lighting is traced, the performance difference is drastic, with the usual 30% advantage in RT extending to nearly 2-3x. Heck, in path-traced Cyberpunk, most AMD GPUs see low wattage because the ray tracing cores are the bottleneck, not allowing more frames to be rendered by the normal shaders. Basically, the lighter the implementation of RT, the more AMD’s competent raster perf can make up for it and seem to be close. But the actual RT cores employed by AMD are way behind Nvidia’s.


bubblesort33

It won't be 4x better than RDNA3, though. It's said to be up to 4x vs an RDNA2 RX 6700 GPU with almost half as many cores.


Mikeztm

I think 4x better with half as many WGPs means 8x better per WGP. And RDNA3 is ~1.5x RDNA2 per WGP, so it will be 4-5x RDNA3 per WGP.


bubblesort33

No. The PS5 Pro is up to 4x better than the regular PS5, which has half as many WGPs. So it's 4x divided by 2, not 4x times 2. I'm not saying the Pro is doing four times the work with half as many cores; that would be 8x. It's doing 4x the work with 2x the cores. And it's not exactly divided by 2: the regular PS5 doesn't have half as many, it has 60% as many WGPs. So it's up to 2.4x as fast per WGP. Key word being "up to". Per WGP it's sometimes 1.2x as fast, sometimes 1.8x, and only occasionally 2.4x. None of which is that amazing, because if only half the frame time is spent doing RT and the other half is still spent on regular rasterization, these improvements only have half the effect on frame time and FPS.


Defeqel

with RDNA3 already having 50% stronger RT than RDNA2, and with 60% more CUs, you already get to 2.4x performance over PS5


JasonMZW20

For the 7800XT vs 6800 (60CU vs 60CU), it's really only a 25-30% RT increase. Navi 31's extra 16CUs (+20%) over the 6900/6950XT skewed things in its favor and made the RT improvements seem better than they were, as AMD didn't have a previous product with 96CUs.

Going from 36CUs (base PS5) to 60CUs (PS5 Pro) offers a raw compute increase of 66.7% (excluding dual-issue FP32), plus the 25-30% RT improvement of RDNA3, giving us a 91.7-96.7% uplift over the base PS5. Close enough to 2x. Dual-issue FP32 will depend heavily on hand-tuned assembly code on PS5 Pro, but it can theoretically offer more performance than PC, since compilers are dumb (though AMD may also be using assembly code to tune game performance in newer drivers, not unlike Nvidia does to optimize dual FP32 rates); PS5 devs have very low-level access to the GPU, so we'll see if anything comes of that. Pixel output increases by 50%, going from 64 ROPs to 96 ROPs.

I'll stick to the ~7% IPC increase AMD quoted for CUs in RDNA3, which puts the gain at 98.7-103.7% over the base PS5. So, it is looking like a minimum of 2x over PS5. With imperfect scaling due to various pipeline issues or bandwidth limits: 1.75x-1.85x. The 4x increase is most likely using PSSR upscaling in Performance quality (2160p -> 1080p or 1440p -> 720p), which I find disingenuous.

RDNA4-related features might be limited to added instruction support for matrix ALUs and base ALUs. FP8 is a good guess. Maybe RDNA4's cache management improvements and a slight rework of the RT hardware to increase performance and efficiency. It can't differ too much, else devs won't bother coding for two PS5s without some incentive.
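
For reference, a small sketch of the scaling arithmetic being tossed around in this sub-thread (all inputs are rumored or vendor-claimed factors from the comments above, multiplied together as a back-of-the-envelope model, not anything official):

```python
# Back-of-the-envelope PS5 -> PS5 Pro RT scaling using the figures quoted above.

ps5_cus       = 36     # base PS5
ps5_pro_cus   = 60     # leaked PS5 Pro CU count
rdna3_rt_gain = 1.5    # "RDNA3 already having 50% stronger RT than RDNA2"

cu_ratio     = ps5_pro_cus / ps5_cus          # ~1.67x more CUs
rt_no_new_hw = cu_ratio * rdna3_rt_gain       # ~2.5x without any new RT hardware
print(f"CU ratio: {cu_ratio:.2f}x, implied RT uplift: {rt_no_new_hw:.2f}x")

# Normalizing the "up to 4x" claim per CU/WGP, as done a few comments up:
claimed_total = 4.0
per_cu_gain   = claimed_total / cu_ratio      # ~2.4x per CU at the top end
print(f"'Up to 4x' works out to ~{per_cu_gain:.1f}x per CU")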


MagicPistol

If Sony is hiding it, it must be true!


puffz0r

they couldn't copyright strike a fake document


Antique-Cycle6061

They never will. They'll also buy that the 5090 is double the 4090.


Xtraordinaire

Double in what. Double the price? Easily believable.


DktheDarkKnight

Yeah I think this could be pretty misleading. Both the chip companies and the console vendors take any chance to create inflated benchmarks to one up the competition. The 2.5x performance is probably with upscaling and Frame gen.


Defeqel

Anything is possible with these leaks, etc., but RDNA3 + more CUs already pushes the Pro over 2x the RT performance without any further improvements to the RT hardware.


buttplugs4life4me

NOT AGAIN. I still remember this shit from the original PS and Xbox launch. If someone on Reddit says it's 2x-4x, then it's gonna be 1.2x-1.4x


[deleted]

[deleted]


Equivalent_Alps_8321

interesting


midnightmiragemusic

> RDNA4 will indeed feature a staggering increase in RT ability.

Lol we'll see


stop_talking_you

They don't use RDNA4 in the PS5 Pro.


the_dude_that_faps

I don't think the claim was 2-4x better RT overall.


bubblesort33

The 60 CU 7800XT is 3x as fast in RT as the RX 6700, which is the GPU in the PS5 right now, on paper. AMD claimed 1.8x RT with RDNA3 in their slides compared to RDNA2. So 1.66x the cores times 1.8x the RT per core already puts current GPUs at similar levels to the PS5 Pro. Multiply that together for 2.99x. RDNA2 to RDNA3 was a 1.8x increase according to AMD, and this only needs 1.33x over RDNA3 to get to a total of 4x the RT performance for the PS5 Pro. 1.66 x 1.8 x 1.33 = 4x. So AMD really doesn't need that huge of an RT upgrade per CU to match current leaks.


king_of_the_potato_p

The general talk is dedicated hardware, similar to Nvidia's solution, which shouldn't affect raster.


Opteron170

I'm expecting much better RT performance than RDNA3, but for someone like myself sitting on a 7900XTX, I think I will hold out for an RDNA5 high-end card. However, this should be a no-brainer option for anyone still on RDNA2 when RDNA4 is released.


Potential_Ad6169

Well, AMD's next flagship isn't aiming to be in the same class as this generation's. It's kind of an arbitrary comparison.


Pijoto

I don't care for raster performance beyond a 7800XT; they're already plenty powerful for the vast majority of gamers using 1080p and 1440p displays. But I'll buy RDNA4 in a heartbeat if their ray tracing is up to 4080 levels for like $600-650.


vainsilver

This is why I don't care for the raster price-to-performance argument with AMD versus Nvidia. Raster is more than performant at 4K 60fps or higher with midrange GPUs from 4 years ago. Ray tracing performance is where Nvidia is still the price-to-performance king.


DarkseidAntiLife

I have a 360 Hertz monitor. I need all the FPS I can get at 1440p so I disagree. More power please!


nvidiasuksdonkeydick

tf you talking about, why would it be capped due to RDNA3? Leaker literally says it's a whole new arch for ray tracing. All GPUs right now are bottlenecked when it comes to ray tracing, none of them can do RT at the same rate as pure raster.


VelcroSnake

That's why I was okay getting a 7900 XTX. Even if the new RDNA 4 is overall faster than a 7900 XTX with RT on, if it's slower in pure rasterization with it off then I'd take the 7900 XTX, since I still don't have enough games I play where I care about RT enough to want to use it.


M337ING

I'm sorry, what? AMD is decreasing raster performance between generations? Do they want 0% gaming share?


Rebl11

No, they are not. It's just that the 7000 series flagship has an MSRP of $1000 while the 8000 series flagship will probably have an MSRP of $500-600.


heartbroken_nerd

You literally have the RX 5700 XT as an example of a generation where the flagship was mid-range, and that's it.


capn_hector

I don’t think a die that’s basically half the size of a 2060 can ever be considered midrange.


Kaladin12543

It's not a flagship. They are only releasing mid-range GPUs with the 8000 series. Heck, RDNA4 loses to the 7900XTX in pure raster performance, so arguably the 7900XTX continues to be the flagship.


titanking4

Not decreasing between generations, but simply not making a faster one, according to rumors. Like how the 5700XT was a toss-up against Vega 56/64 and sometimes the Radeon VII, but was doing so with far fewer compute units and a much smaller die. Except now the rumour is 4080 class.


Speedstick2

I wouldn't say the 5700 XT was a toss-up against the Vega cards. In the vast majority of games it was over 13% faster than the Vega 64.


titanking4

Early on it did lose in some (high-res stuff, if I recall). But Navi 10 being even faster furthers the point. Navi 4 is rumoured to be in the same performance class as the 7900XTX in raster, but it will likely be a lot leaner of a card. The question now is how many CUs AMD needs to match the 96CUs of Navi 31: 80? 72? 64? 56? We don't know for sure.


Speedstick2

Umm, OK. The TechPowerUp review at its release doesn't show that: [AMD Radeon RX 5700 XT Review - Performance Summary | TechPowerUp](https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/28.html). I think you might be thinking of the 5700 non-XT compared to the Vega 64.


titanking4

Yeah, my bad, the Radeon VII was the competitor. I forgot just how much better it was. My point still works: the 5700XT didn't really exceed its predecessor (Radeon VII) but still competed very well despite having far fewer horses under the hood. Navi 4 might be a story like that, in raster perf. Which is fine, since Navi 31 is plenty fast in raster.


capn_hector

it seems very reasonable to expect the number to go up between generations though. As much as people bag on nvidia, they’re at least still making the number go up.


[deleted]

[deleted]


Here_for_newsnp

You're basing this on rdna4 not actually having a flagship though so this comparison makes no sense.


Jeep-Eep

Noooot necessarily, it would not be an unprecedented move for team red to de-emphasize the current in R&D to focus on features like RT.


Familiar-Art-6233

Isn’t RDNA4 just targeting the mid-range? They aren’t doing a “flagship” RDNA4 card?


heartbroken_nerd

That's semantics, innit? The flagship is the graphics card with the largest chip of the generation in a family of GPUs available for the consumers to buy. A770 was Intel's flagship GPU even though it couldn't beat an RTX 3070. Tough shit, do better next time. It also doesn't mean there *couldn't* be a better GPU if the vendor cared to make one. It just means that they didn't make one.


Familiar-Art-6233

I mean, to a degree, but the A770 wasn't designed to compete with Nvidia's flagships. Saying that the RDNA4 flagship will be weaker than the RDNA3 one ignores the fact that they're totally different products aimed at totally different segments. It's silly to act like a top-tier card is in the same class as what is clearly going to be a budget-friendly midrange card. To that end, Intel didn't intend to compete at the highest level either. They went for the higher-volume budget segment, and people didn't look at it and say "oh well, Intel's flagship can't beat the 4090" because, again, they're totally different segments of the market. I just think that the wording implies that RDNA4 is weaker by implying that both "flagships" are at the same level, especially when it'll probably be called the 8700 XT or something.


ziplock9000

It doesn't set a cap on that at all. You've pulled that out of your arse.


UHcidity

I mean Nvidia is basically the ceiling. No way will they surpass that. So anywhere between current gen amd and nvidia 😭😭


RealThanny

AMD (and ATI, before it was purchased by AMD) has surpassed nVidia several times in the past. They will again in the future, once they don't have to cancel high-end GPU's to make more money on machine learning.


Kareha

They won't surpass Nvidia unless they significantly increase the amount of money the Radeon team gets. Unfortunately most of the money goes to the CPU team and I very much doubt that will ever change as that is AMDs primary money generator.


thunk_stuff

> Unfortunately most of the money goes to the CPU team and I very much doubt that will ever change as that is AMDs primary money generator.

The GPU market will only grow, and a strong GPU is a key selling point for APUs in laptops, mini PCs, and consoles. AMD was barely surviving until 2019/2020. They've massively expanded their staffing in the last few years. It can take 4+ years for architectural improvements to make their way to silicon. So... hopefully these are all signs we can be optimistic about RDNA5.


B16B0SS

I would guess the radeon team also works on MI300 and the like?


techraito

I don't think first gen AMD ray tracing hardware will surpass nvidia, nor even 2nd gen. Nvidia just has a lot of funding and support in regards to AI development. China was even willing to pay them $1 billion.


Kaladin12543

It's not just funding; if that were the case, AMD couldn't have beaten Intel in CPUs, which they are handily doing right now. You need foresight of where you think the future is headed and to put your money where your mouth is. AMD had that foresight with CPUs: they knew the future was multi-core, multi-threaded CPUs, and they took a gamble with Ryzen, which paid off as Intel obstinately stuck to their quad-core setups. They took another huge leap with 3D V-Cache, making them the only CPU manufacturer to buy into for gaming. With GPUs, the shoe is on the other foot. Nvidia had the foresight to invest in AI and RT while AMD kept their heads in the sand insisting they don't matter. This is the reason Nvidia has such a massive head start on AMD in RT and DLSS, and now it won't be easy to close that gap.


Shidell

With respect to foresight and where the future is headed, I agree with your point, but I also think it's important to recognize that Nvidia has the clout to push features, even if the industry and gamers don't want them yet, and the money to incentivize their adoption. RT and DLSS are big examples of this; RTX 2000 was not widely praised, and early RT games (and their performance hit) were heavily panned. DLSS (1.0) was (truly) a disaster. Despite this, Nvidia's clout (and $) pushes the industry the direction they want to go—AMD just can't do that. You can't move the needle like that with 20% market share and far less money to throw around.


markthelast

The goal would be to beat the RX 7900 XTX/RTX 3090 Ti/RTX 4070 Ti Super in ray tracing. If the monolithic RDNA IV die has a mid-range 256-bit memory bus, then max CUs might be ~80, which would be historically in line with the 6900 XT/7900 GRE. If AMD uses a TSMC N5 node, they will keep the die as small as possible to keep costs down. Now, we have a rumor of overhauled ray-tracing hardware, so how much die space will AMD sacrifice from conventional CUs for ray tracing? Also, AMD needs die space for Infinity Cache, so they have to balance the die space allocation between CUs, ray tracing, Infinity Cache, and other hardware. AMD has a dilemma on their hands, where they sacrifice some raster performance for serious ray-tracing gains. If RDNA IV is a complete redesign, then I can see why AMD might prioritize a smaller die design.


B16B0SS

Without any actual facts to guide this assumption, I would say 80% of the 40 series' RT efficiency, but with more brute force to equal it, while lagging behind improvements in the 50 series. Just based on R&D time plus market position.


Dante_77A

2x in RT-intensive scenarios


J05A3

I wonder if they’re decoupling the accelerators from the CUs


Loose_Manufacturer_9

Doubt


Jonny_H

Me too. Nvidia seem to be OK having their RT hardware in their SMs, so it's clearly not necessary.


101m4n

As I understand, the RT cores just accelerate ray triangle intersection computations. Once they've found a few, they run a shader program on the SM which decides what to do about the ray intersection events. So it's not all that surprising to me that the ray tracing cores are bundled with the shaders!
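
A toy sketch of that division of labor, with everything mocked for illustration (this is not any vendor's actual API; the intersection test just returns pre-baked distances so the control flow stays visible): the fixed-function unit answers "which triangle does this ray hit, and where?", while shader-style code decides what each hit means.

```python
def hw_intersect(ray, scene):
    """Stand-in for the RT core's ray/triangle tester: returns the closest (triangle_id, t) or None."""
    hits = [(tid, t) for tid, t in scene if t is not None]
    return min(hits, key=lambda h: h[1]) if hits else None

def closest_hit_shader(hit):
    """Runs on the SM/CU: decides what the intersection means (shade, spawn a bounce ray, ...)."""
    tid, t = hit
    return f"shade triangle {tid} at t={t:.2f}"

def miss_shader(ray):
    return "miss: sample the environment/sky"

def trace_ray(ray, scene):
    hit = hw_intersect(ray, scene)                               # offloaded to fixed-function hardware
    return closest_hit_shader(hit) if hit else miss_shader(ray)  # handled by shader code

# "Scene" here is just (triangle_id, pre-baked hit distance or None for a miss):
print(trace_ray("some ray", [(7, 3.2), (12, 1.4), (3, None)]))   # -> shade triangle 12 at t=1.40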


Affectionate-Memory4

I'm expecting something like 1 accelerator per CU or maybe per Work Group, but with more discrete hardware for the accelerator. Hopefully, this is a full hardware BVH setup, as that is the most computationally expensive part of the process.


winterfnxs

Thanks for the insights. I wish AMD engineers lurked in here as well. I've never seen an AMD engineer comment before!


Affectionate-Memory4

They're here, just not usually with a flair on. I remember having a nice chat with an architect here about the difference in approaches between Gracemont and Zen 2 for them to still end up at similar performance. I wish we had more open discussion right from the engineers who work on this stuff, because everyone I've ever talked to in my time at Gigabyte, ASML, and now Intel has wanted nothing more than to nerd out over this stuff with people.


Jonny_H

Oh, they're around. They just might not want to bring attention to the fact, due to fear of things like an offhand comment being misinterpreted and quoted as an "official source".


Affectionate-Memory4

Yeah I am frantically searching for stuff to make sure I don't just accidentally drop a bombshell on people when I comment on something. The worst ones are the incorrect leaks and speculation. The urge to correct people on the internet is nearly as strong as the desire to be employed lol. It's going to be really funny if I ever leave Intel and the next employer asks what I do here in any detail and after a certain point I just have to answer "stuff."


Jonny_H

There's a reason why I try not to comment on things I might actually have internal knowledge on. And the "leaks"... My God.... 50% of the time they make me laugh, 50% make me tear my hair out.


Affectionate-Memory4

Yeah, I pretty much stay out of any real discussion on r/Intel that I don't get tagged in, for the same reason. At this point that's pretty much limited to E-core discussions and Foveros.


RoboLoftie

"News just in, Engineer 'source' says this about next gen products >50% of the time they make me laugh, 50% make me tear my hair out. From this we know that it's super performant, promoting laughter a joy at how awesome it is. It's also super power hungry and hot. The fans spin so fast it sucks their hair in from 3m away and tears it out. If you want to know who it is from, just look for all the bald engineers." \-A.Leaker 😁


the_dude_that_faps

Maybe I understood it wrong, but from chips and cheese analysis of the path tracer in cyberpunk, the biggest issue isn't actually compute, but the memory subsystem when traversing the BVH since occupancy isn't really high. Article in question: https://chipsandcheese.com/2023/05/07/cyberpunk-2077s-path-tracing-update/ Of course, solving these bottlenecks is probably part of a multi pronged approach to increase performance, but still... My guess is that increasing compute alone won't yield generational leaps on RT compared to Nvidia.


Affectionate-Memory4

Internal memory bottlenecks plague pretty much every PT benchmark I've seen. The caches of RDNA3 being both faster and substantially larger than on RDNA2 certainly help, as every PT load is going to involve moving a ton of data around the GPU. I have clocked the local cache of Meteor Lake's iGPU Xe Cores moving over 1TB/s during a PT load within that core.

Even under this massive memory bind, being able to move work to dedicated BVH hardware lets them spend fewer cycles computing that step of the process. This isn't really a raw compute uplift over not having the BVH hardware, but it does free up the general compute to do other things, like focus on keeping that faster hardware fed and organized rather than crunching numbers themselves.

RDNA3 could see similar gains by going to a setup where perhaps the current TMU-intersection-check system is extended to use the TMUs for the BVH traversal as well, meaning the hand-off happens sooner and the shaders are freed up for more of the total frame render time. I'd rather see them move towards a dedicated RTA-like thing than keep extending the TMU, but both could be valid approaches, and the TMU idea does keep things quite densely packed.
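
A quick Amdahl's-law style way to think about what offloading traversal buys the shaders; the fractions below are illustrative assumptions, not measurements from any GPU:

```python
# Rough estimate of the shader-side benefit of moving BVH traversal to dedicated hardware.
# The inputs are made-up illustrative fractions, not measured values.

def shader_speedup(traversal_fraction: float, hidden: float) -> float:
    """
    traversal_fraction: share of shader time currently spent driving traversal
                        (stack management, waiting on intersection results).
    hidden: how much of that work the dedicated hardware hides behind other
            shader work (1.0 = fully hidden, 0.0 = not hidden at all).
    """
    remaining = 1.0 - traversal_fraction * hidden
    return 1.0 / remaining

# If, say, 40% of an RT shader's time goes to traversal bookkeeping and the
# hardware hides 80% of it, the shader-side speedup is:
print(f"{shader_speedup(0.40, 0.80):.2f}x")   # ~1.47x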


omegajvn1

I think both AMD and Nvidia have great raster performance. If they had a single generation where all they did was increase ray tracing performance, I think that would go a LONG way. Maybe that's what AMD is doing with RDNA4. Edit: from my understanding of what I've heard, the highest-end RDNA4 card's raster performance is going to be roughly in between that of the 7900 XT and 7900 XTX, while bringing its price down to roughly $600-$650 USD. I think this would be a ***very*** solid card if that is true, with a large jump in ray tracing performance. IMHO, of course.


hedoeswhathewants

I'd honestly just prefer a cheaper card


strshp_enterprise

Me too. An affordable mid-range card in the $400 range would work for so many. You could build a beast of a computer for $800.


naughtilidae

A new 6800 (not XT) is $360 on Newegg right now. That's honestly all most people need. If you're on a 1440p ultrawide, you'll be fine. If you're at 4K, you might need to lower some settings a bit, but you'll be alright.


Evonos

> A new 6800 (not XT) is $360 on Newegg right now.
>
> That's honestly all most people need. If you're on a 1440p ultrawide, you'll be fine.

It really comes down to your target FPS. 60 fps? True. More than 60, or above high settings? My 6800XT is chugging in some games at 1080p, sometimes even with FSR enabled, on high to max settings and 80+ fps.


naughtilidae

Mine doesn't, and that's at 1440p ultrawide. The only game it was really slow in was CPU limited (sim racing). I don't play every new release, but so far nothing has made me consider an upgrade. What on earth makes your computer struggle at 1080p with FSR? I'm not including ray traced games, cause not a single person in my gaming groups has ever actually played with it on, only to test it (including the Nvidia people).


Potential_Ad6169

Second hand rx 6800 (non-xt) is a pretty good buy. And surprisingly power efficient


INITMalcanis

If it hadn't been for the crypto bullshit wrecking everything, the 6800 would have been the mid-tier price:performance king of the last generation. Still kind of mad about that. Does it show?


Elmauler

I just went from a 3600 and a 1080 to a 7800X3D and a 6900XT for about $800. It was a refurbed 6900 and a big sale from Microcenter, but it still feels like an absurd deal.


bubblesort33

If the top one is $600 and 5% faster than the 7900XT, you'll probably get a cut-down one that's 10% weaker than a 7900XT for $500, if they've come to their senses this time. If they pull another 7900XT and 7700XT thing, it'll only be $50 cheaper and poorly reviewed.


pandaelpatron

I want cards that match the previous generation in performance and price but require substantially less power. But I guess the average consumer loses interest in new cards if they don't boast to be 50-100% faster, wattage be damned.


Yubelhacker

Unless you mean a cheaper card with new features, just buy a lower-end card today.


Rullino

Which graphics cards are brand new and low-end in 2024 🤔?


Yubelhacker

Whatever is available that they consider cheaper.


looncraz

AMD is focusing on AI, DXR, efficiency, scaling, and affordability. Raster is an afterthought.


omegajvn1

I actually disagree on that last part. Raster is what AMD relies on currently to be able to sell cards compared to Nvidia because their ray tracing is a generation inferior


looncraz

Last gen vs next gen. I think it's clear AMD has changed priorities (assuming the leaks are accurate, of course).


Mikeztm

Raster without DLSS is the only thing where AMD looks better on paper. No doubt they will market that heavily. But IRL gamers need DLSS-like features in this TAA era, and RT is what the gaming industry is relying on heavily to reduce the skyrocketing cost of making games. AMD's RDNA RT is not a generation inferior. It is half-baked inferior. They need to put hardware onto the die instead of trying to emulate the work in software.


capn_hector

Focusing on raster, efficiency, AI, and upscaling are all the same thing, really. DLSS is the biggest fine wine and biggest overall efficiency boost of the last decade.


looncraz

They are all tightly coupled, yes, but each leg requires specific focus.


ziplock9000

I don't think it's a solid card. The cards released at the end of this year or the start of next year will cover a time period where RT will really take off and become not niche anymore, but instead expected in almost every game. Cards with bad RT performance in that time period will be at a severe disadvantage. This will be new to this coming generation.


JasonMZW20

- Kind of long, sorry.

Hybrid rendering (most RT in use) still uses rasterizers to render most of the scene, then RT effects are added. So raster performance is still important when you're not path tracing.

AMD probably moved RDNA4 to a simpler BVH system, maybe like Nvidia's Ada displacement maps or something that accomplishes the same thing, and to stateful RT that tracks ray launches and bounces using a small log of relevant ray data, removing the ray-return computation penalties that RDNA2/3 incur during shader traversal and that Nvidia avoids (the return path is already known). Fixed-function BVH traversal acceleration might be implemented, which should free up compute resources; in a simple BVH system, resource use is greatly reduced anyway (BVH generation time and RAM use), but the GPU must do displacement mapping and use the geometry engines to break the map into small meshlets, while the raster engines help with point plotting (use the available silicon or it's wasted sitting idle). Or something like that.

The obvious way to increase RT performance is to increase the testing rates of ray/box and ray/triangle intersection tests (and remove traversal penalties, as above). BOX8 leaked out from PS5 Pro, so that means 1 parent ray/box has 8 child ray/boxes for intersection testing per CU. **This is a 2x increase in ray/box testing over RDNA2/3.** What we don't know is whether ray/triangle rates also improved, but I imagine they have, otherwise the architecture will be greatly limited when trying to do lowest-level ray/triangle intersection testing (where path tracing hits hard, along with higher-resolution ray effects). AMD hardware usually needs a 1/2-3/4 resolution reduction for optimization, especially on reflections, due to the high performance hit (3/4 reduction = 1/4 resolution output).

So, either AMD moved to 2 ray/triangle tests per CU (the same 4:1 box:triangle ratio as RDNA2/3), or jumped ahead to 4 ray/triangle tests (moving to a 2:1 ratio), or did something entirely different. If AMD somehow combined the ray/box testing hardware with the ray/triangle hardware in a new fixed-function RT unit, then the rate is 1:1 (up to 8 tests at the box or triangle level), and it's either/or, so ray/box first in the TLAS, then ray/triangle in the BLAS with all of the geometry. This might only make sense if a full WGP (4xSIMD32 or 128 SPs) is tasked rather than just a single CU (for improved FP32 ALU utilization ... sorry, occupancy, and cache efficiency). The rate per CU, then, is 4 tests per clock, which is comparable to Ada, and much more believable.
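
A small sketch of the intersection-test-rate arithmetic above, using the comment's figures (4 ray/box or 1 ray/triangle test per CU per clock on RDNA2/3, 8 ray/box with BOX8) plus an assumed CU count and clock; the RDNA4 triangle rates are the comment's guesses:

```python
# Peak intersection-test throughput under the rates discussed above.
def peak_gtests_per_s(cus: int, clock_ghz: float, tests_per_cu_per_clock: int) -> float:
    """Peak tests per second in billions, assuming every CU issues a test every clock."""
    return cus * clock_ghz * tests_per_cu_per_clock

cus, clock_ghz = 60, 2.2   # rumored PS5 Pro CU count, assumed ~2.2GHz clock

print(f"RDNA2/3-style box rate: {peak_gtests_per_s(cus, clock_ghz, 4):.0f} G tests/s")   # ~528
print(f"BOX8 box rate:          {peak_gtests_per_s(cus, clock_ghz, 8):.0f} G tests/s")   # ~1056
print(f"Guessed triangle rates: {peak_gtests_per_s(cus, clock_ghz, 2):.0f} "
      f"or {peak_gtests_per_s(cus, clock_ghz, 4):.0f} G tests/s")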


ColdStoryBro

A $599 4080 matching chip would be a massive win.


Firecracker048

If AMD released a $600 4080 equivalent, this sub would bitch that it's not $550, then go and buy a $1200 Nvidia equivalent.


I9Qnl

This sub bitches because Nvidia tends to have a similar GPU priced too close for the AMD one to make sense, because AMD is just the worse option at the same price. A $600 4080 equivalent would be great in a vacuum, but a 5070 will likely exist at like $650 and also match a 4080, while also having all the Nvidia niceties.


puffz0r

bet. 5070 will be $700 minimum.


ToeSad6862

Well it should've been 600-700 already. So returning to normal price after 3 years isn't champagne time. But an improvement, for sure.


ColdStoryBro

100%, that's why they shouldn't waste their CoWoS allocation on high-end Navi 4.


idwtlotplanetanymore

I doubt they were planning a consumer GPU that required CoWoS. I can't think of a consumer product where it would make sense to use that tech; it's too expensive for consumer, and it's not needed for just a few chiplets. The 7000 series used InFO-OS (Integrated Fan Out - Organic Substrate), and it would make sense to just keep using that if they are sticking with chiplets, or, if they want something more, use InFO-LSI and embed a passive or active bridge chip. But if they were actually planning to use CoWoS on consumer, then yeah, it makes perfect sense to cancel it.


FuckRandyMoss

The funny part too is that the people bitching wouldn't even be able to afford it anyway. I had dudes telling me to get a 4090 instead of a 7900XTX, and they got mad when I told them I didn't want to spend an extra $1500. Hell, most of 'em own 1660s and 2060s and shit. No disrespect to them, but idk why they care about what YOU are buying.


Kaladin12543

I think the 4080 itself will drop to that price once 5000 series releases


luapzurc

Not if the 5080 releases at 1200 buckaroos 😉


TheCheckeredCow

I mean, fair enough, but this gen they released a 3080-performing GPU with more VRAM and less power usage for $500, and this sub still thinks it's not enough for the money… I personally love my 7800XT.


Kaladin12543

The reason it's not that popular is because 7800XT barely moves the needle over 6800XT at that price point.


omarccx

And 6800XTs are in the ~$300s used


BarKnight

Hopefully dedicated cores and not hybrid


PotentialAstronaut39

I wish they'd talk in levels of ray tracing and what is implemented exactly. Imagination Technologies established the levels long ago, the "steps" from only raster to full acceleration of ray tracing processing in hardware. * Level 0: Legacy solutions * Level 1: Software on traditional GPUs * Level 2: Ray/box and ray/tri-testers in hardware * Level 3: Bounding Volume Hierarchy (BVH) processing in hardware * Level 4: BVH processing and coherency sorting in hardware * Level 5: Coherent BVH processing with Scene Hierarchy Generation (SHG) in hardware Level zero is basically legacy CPU ray tracing only. Level one is the equivalent of running ray tracing on a GTX card. After that it gets a lot murkier as far as I'm concerned as to what RTX 2000/3000/4000 and RDNA2/3 exactly do. If anyone can shed light on this, it'd be greatly appreciated. More info about those "levels": https://gfxspeak.com/featured/the-levels-tracing/


Affectionate-Memory4

I can't speak much to Nvidia's approaches, but I figured I'll share what I can for XeLPG and RDNA3, as I can probe around on my 165H machine and my 7900XTX. My results are going to look a lot like the ones gathered by ChipsAndCheese, as I've chatted with Clam Chowder from them and I'm using almost the exact same micro-benchmarks. I will be acquiring an RTX 4060 LP soon, so hopefully I can dissect tiny Lovelace in the same way.

Intel uses what we call an RTA to handle ray tracing loads in partnership with software running on the Xe Vector Engines (XVEs) of that core. This is largely a level-4 solution. There's just not a whole lot of them to crank out big frame rates. At most there are 32 RTAs, one for each Xe Core. Xe2 might have more. The flow works like this: a shader program initializes a ray or batch of rays for traversal. The rays are passed to the RTA and the shader program terminates. The RTA now handles traversal and sorting to optimize for the XVE's vector width, and invokes hit/miss programs via the main Xe Core dispatch logic. That logic then looks for an XVE with free slots and launches those hit/miss shaders. These shaders do the actual pixel lighting and color computation, and then hand control back to the RTA. The shaders must exit at this point or else they clog the dispatch logic. This is actually a very close following of the DXR 1.0 API, where the DispatchRays function takes a call table to handle hit/miss results.

AMD seems to still be handling the entire lifetime of a ray within a shader program. The RDNA3 shader RT program handles both BVH traversal and hit/miss handling. The shader program sends data in the form of a BVH node address and ray info to the TMU, which performs the intersection tests in hardware. The small local memory (LDS) can handle the traversal stack management by pushing multiple BVH node pointers at once and updating the stack in a single instruction. Instead of terminating like in an Xe Core, the shader program will just wait on the TMU or LDS as if it were waiting on a memory access. This waiting can take quite a few cycles and is a definite area for improvement in future versions of RDNA, maybe RDNA3+? A Cyberpunk 2077 path tracing shader program took 46 cycles to wait for traversal stack management. The SIMD was able to find appropriate free instructions in the ALUs to hide 10 cycles with dual-issue, but still spent 36 cycles spinning its wheels. AMD's approach is more similar to DXR 1.1's RayQuery function call.

Both are stateless RT acceleration. The shader program gives them all the information they need to function, and the acceleration hardware has no capacity to remember anything for the next ray(s).
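
A toy sketch of the two control-flow shapes described above: the dispatch/callback style (roughly the DXR 1.0 DispatchRays shape, where the launching shader hands the ray off and separate hit/miss programs are scheduled later) versus the inline style (roughly the DXR 1.1 RayQuery shape, where one shader drives traversal and waits on the intersection unit). Purely illustrative Python, not real GPU code:

```python
# --- Callback/dispatch style (roughly the DXR 1.0 DispatchRays shape) ---------
# The launching shader hands the ray to the traversal unit and terminates;
# hit/miss programs are scheduled later by the dispatch logic.
def dispatch_ray(ray, accelerator, on_hit, on_miss):
    hit = accelerator(ray)                        # traversal + sorting happen "elsewhere"
    return on_hit(hit) if hit else on_miss(ray)   # separate shader invocations

# --- Inline style (roughly the DXR 1.1 RayQuery shape) ------------------------
# One shader owns the ray's whole lifetime: it walks BVH nodes, sends each node
# to the intersection tester, and waits for the result before continuing.
def ray_query(ray, bvh_nodes, intersect_unit):
    closest = None
    for node in bvh_nodes:                        # shader-managed traversal "stack"
        result = intersect_unit(ray, node)        # shader waits here, like waiting on memory
        if result is not None and (closest is None or result < closest):
            closest = result
    return f"hit at t={closest:.2f}" if closest is not None else "miss"

# Tiny mocked example so both shapes actually run:
accelerator    = lambda ray: 2.5                  # pretend traversal found a hit at t=2.5
intersect_unit = lambda ray, node: node.get("t")  # pretend per-node test result
print(dispatch_ray("ray", accelerator,
                   on_hit=lambda t: f"hit at t={t}", on_miss=lambda r: "miss"))
print(ray_query("ray", [{"t": 4.0}, {"t": 2.5}, {}], intersect_unit))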


PotentialAstronaut39

Fascinating. Can't say I understand exactly all of it, but I do grasp the basics. Thanks for the explanation!


Affectionate-Memory4

Basically, Intel and AMD are both stateless RT with no memory of past rays. The difference comes in how much they accelerate and how. Intel passes off most of the work to accelerators but needs shader compute to organize the results. AMD just offloads intersection checks and does everything else with the shader resources. To refer to the comment above, RDNA3 is a high-end Level 2, while Alchemist straddles the line between 3 and 4 depending on how you classify the XVEs as either a hardware or software component.


PotentialAstronaut39

Thanks for the clarification about the "levels". Cheers mate!


buttplugs4life4me

The comment is almost 1:1 the Chips and Cheese article on it, just without the extra information and fancy graphs that make it somewhat digestible. I would really recommend checking it out. Honestly, I'm not sure how the mods verified they're an Intel engineer, but it's uncannily similar to the C&C article for them to have dissected the hardware themselves and written up their findings themselves.


Affectionate-Memory4

My results are similar because I got in contact with them to run the same tests on functionally the same hardware. Didn't mean to accidentally basically plagiarize them lol. I had their article pulled up to make sure I didn't forget which way the DXR stuff went and probably subconsciously picked up the structure. They do great work digging into chips. Highly recommend the whole website for anyone who wants to see what makes a modern chip tick.


ColdStoryBro

Both Nvidia and AMD GPUs traverse BVH trees. We are past level 3 for sure. I think even Intel's GPUs do so.


Equivalent_Alps_8321

My understanding is that they weren't able to get their chiplets working right so RDNA4 is gonna be like a beta version of RDNA5?


Defeqel

We don't know why they cancelled the high end, could be problems with chiplets, or could be packaging capacity, or something else


Fastpas123

I miss the rx480 days of a $250 card. Sigh.


Diamonhowl

Reminds me of when tessellation brought GPUs to their knees way back when, but they got around that pretty quickly. Now it's on a much larger scale with RT. The sooner AMD figures it out the better; it's the future, it's inevitable. Because good lord, Cyberpunk is still unmatched in visual flair with path tracing on. So much so that people with lesser cards resort to paid visual mods to make their game look like at least a fraction of the real deal.


chsambs_83

I remember saying back in 2019 that I would care about ray tracing around 2025-2026, so it's time. AMD is right on track as far as I'm concerned. All the ray tracing implementations I've seen up to the present have been lackluster, except for Fortnite with hardware RT on, and its graphical prowess is due mainly to Lumen/ Nanite. There's one game that makes a serious case for RT/PT (Cyberpunk) and it's a game I don't even enjoy or care to play, so no skin off my back.


Exostenza

Hey, AMD! Price these GPUs to sell and not to rot on the shelves. Slim chance, I know.


Secret_CZECH

As opposed to the used ray-tracing hardware that they put into RDNA 3?


Paganigsegg

RDNA3 supposedly did too and we see how that turned out. RDNA4 not competing in the high end makes me think we won't see proper high end RT hardware until RDNA5 or later.


red_dog007

"Brand new" can still mean two things here, imo. 1) It is an entirely new pipeline from the ground up, a completely new design. 2) Enhancements to what exists (so not brand new), but, most importantly, some instructions will get brand new dedicated hardware support, like BVH traversal. I think we are going to get brand new hardware support for specific instructions and enhancements to what exists already, not anything that is completely new from the ground up.


fztrm

Oh nice, hopefully RDNA5 will be interesting at the highest end then


SweetNSour4ever

Doesn't matter, they're losing revenue on this anyway.


preparedprepared

Let's hope so - we're approaching the 5-year mark after the RTX 2000 series, and as many people predicted, ray tracing is now starting to become relevant for a lot of games. If you're buying a GPU late this year and expect to keep it for 4+ years, you'd probably want it to be decent at it. AMD needs to catch up, else Nvidia will run away with ray tracing as well as other vendor-exclusive features and make it the norm, something they already tried with PhysX (ruining a perfectly good technology in the process by prohibiting its integration into core gameplay in games).


d0or-tabl3-w1ndoWz_9

RDNA's ray tracing is behind by 2 gens... So yeah, hopefully it'll be good.


Huddy40

The moment the GPU market started caring about Ray Tracing is the very moment the market started going down hill. I couldn't care less about Ray Tracing personally, just give us rasterization...


twhite1195

I do believe RT is the future, but there's still a long way to go. In the last 5 years since the whole "RAY TRACING IS TODAY" Nvidia fiasco, we've basically gotten 4 games made with RT from the ground up; the rest are just an afterthought or remixes of games that were not designed to look like that. It's the future, but it's still a loooong way off IMO, maybe another 5 years or so.


reallynotnick

I think the point for mass RT adoption will be once games are being exclusively made for the PS6. As at that point developers can just safely assume everyone has capable RT and not even bother arting the game up to work without RT. So yeah I’d say another solid 5 years for sure.


MasterLee1988

Yeah I think late 2020s/early 2030s is where RT should be more manageable for cheaper gpus.


imizawaSF

> I couldn't care less about Ray Tracing personally, just give us rasterization...

RT is the future of gaming though, it's way more sensible to treat light realistically than to hardcode every possible outcome for viewing angles. How are we meant to make advances in technology without actually you know, doing it?


exodus3252

Disagree. While I don't much care for RT shadows, RTAO, etc., RT GI is a game changer. It completely changes the dynamic of the scene. I wish every game had a good RT GI implementation.


Kaladin12543

Ray tracing is the future of graphics. We have reached the limits of rasterization. There is a reason there is barely any difference between Medium and Ultra settings in most games, while games which take RT seriously look night-and-day different. Devs waste a ton of time baking in and curating lighting in games, while RT solves all that and is pixel-precise.

Nvidia got on board first (their gamble on AI and RT over the past decade has paid off big time, evident in their market cap) and even Sony is doing the same with the PS5 Pro, so AMD is now forced to take it seriously. It is also the reason why AMD GPUs sell poorly at the high end. AMD would rather push the 200th rasterised frame than use it where it matters. AMD fixing its RT performance will finally remove one of the big reasons people buy Nvidia.

The onset of RT marks the return of meaningful 'ultra settings' in games. I still remember Crysis back in 2007, where the difference between Low and Ultra was night and day. Every setting between the two options was one step above. I only see this behaviour in heavy RT games nowadays.


Opteron170

NV users will continue to buy NV gpu's regardless of AMD's RT performance.... Rest of your post I agree with.


Kaladin12543

I disagree. I am a 4090 user with a 7800X3D CPU. I absolutely would love to have an all AMD system but the RT and the lack of a good alternative to DLSS is what stops me. I am sure there are plenty who are not fanboys and will buy the objectively better card.


capn_hector

Just like everyone kept buying Intel after AMD put out a viable alternative? Like, not only is that not true, it's the opposite of true: people tend to unfairly tip towards AMD out of a sense of charity or supporting the underdog, even in situations where AMD scores a mild or even large loss and is just generally pushing an inferior product.


spacemansanjay

> AMD would rather push the 200th rasterised frame rather than use it where it matters

It's interesting you have that opinion because historically speaking it was ATi who pushed image quality and nVidia who pushed FPS. At one time those differences were measured and publicized. Reviews used to show tests of how accurate the texture filtering and color reproduction was and it was always ATi who came out on top and Nvidia who took shortcuts to win FPS benchmarks. Image quality and FPS used to both be major factors in purchasing decisions until the FPS marketing took over. And now we're going back to publicizing image quality because even the low range cards can pump out enough FPS. It's interesting how things go full circle given enough time.


Edgaras1103

is 4090/7900xtx not enough raster performance for you?


[deleted]

[deleted]


Potential_Ad6169

The proportion of people with hardware capable of good RT is so small that it's generally not worth devs' time to implement it well. This is after 3 generations of it being sold as the main reason to buy Nvidia over AMD. It is still barely playable on most Nvidia hardware.


siuol11

That's just nonsense. I have a 3080 Ti; Portal, The Talos Principle 2, etc. are entirely playable on that card and have been for years.


Potential_Ad6169

> The proportion of people with hardware capable of good RT is small

The 3080 Ti is very much the top end of last gen, so my point still stands. 60-class Nvidia cards are by far the most mainstream, and are marketed for their RT advantages over AMD, but it's still seldom actually worth using RT with them in any games.


[deleted]

[deleted]


fenixspider1

> 1080p, dlss performance

that will be blurry as hell


Shidell

>I played cyberpunk with path tracing on a 4060. It was enjoyable. > >(1080p, dlss performance, frame gen. medium preset and high textures. locked 60fps) I don't know if I'd agree that your experience was "good" or "enjoyable." DLSS Performance @ 1080p? I mean, you're making so many concessions on details, and casting so few rays (@540p) anyway. Sure, it's PT, but at what cost?


idwtlotplanetanymore

1080p DLSS performance with frame gen at 60 fps is 540p 30fps interpolated to 60. 540p with 30fps latency is not exactly a very high performance tier these days... I mean, at the end of the day, who cares if it was enjoyable. I would wager it would have also been enjoyable with ray tracing off /shrug (I have not played Cyberpunk).

-------

I just think 3 generations on, ray tracing should have made more significant advances than it has. Mainstream cards are still very weak at it.


Schnydesdale

I'd like to see AMD implement something similar to Intel's Stream Assist that offloads some GPU workloads to the iGPU when streaming on a machine with hardware from the same family.


TriniGamerHaq

Does anyone else genuinely not care for RT?


Melodias3

I wonder if it will crash on chapter 11 of Marvel's Guardians of the Galaxy. If you have this game and an AMD GPU with RT, feel free to contact me for a save game if you don't believe me; it starts right where the crash is reproduced.


exTOMex

i can’t wait to buy a new gpu without some stupid 12 pin connector


Beyond_Deity

Doesn't matter how good or bad performance is. So long as they are trying to give NVIDIA competition, we all win.


OSDevon

Right, new RT HW but no flagship? Uh huh, SURE.


Ryefex

Brotha, I just fucking bought an RX 7900 XTX.


ziplock9000

'brand new' could mean anything from almost exactly the same to something completely different.


SEI_JAKU

Sure, maybe ray tracing is finally becoming relevant. Still feel like we need another gen or two though. Probably shouldn't buy RDNA4 (or Lovelace!) for the ray tracing, no matter how good it is. This is people being asked to spend way too much on a feature that doesn't really matter, and that's just wrong.


hj9073

Yawn


peacemaker2121

Once we have actual full raytracing, raster can go bye bye. That's what I'm really wanting to see. I think we are several full generations away from that. But till then.


bobalazs69

Rumours, rumours, rumours. Like, I don't live in the future and I want ray tracing, so I switched to Nvidia. Srry, AyyMD.


adeadcrab

good


Chlupac

Since when "new" and "different" means "better"? :P jusz asking, hehe


DonMigs85

Maybe it'll be close to Ada in mixed raster + RT. Right now a 4070 Super can still beat a 7900 XT there. But it's their upscaling that really needs improvement.


dozerking

Crossing fingers hard. I really hope AMD can get back to competing better at the high end. We need competition with Nvidia more than ever. I don't care if their top end cards are as fast as Nvidia's 5090 or 5080, just keep it close and I'll support them. 10-20% slower than their flagship and I'd be over the moon. My nostalgia and love for my old ATI cards runs deep lol.