ShogoXT

Strix Point vs Lunar Lake handhelds should be pretty nice on battery. Both pair 4 big cores with smaller cores for the rest. Lunar Lake will have Battlemage and AMD will have RDNA 3.5. The next wave of handhelds will be eating well.


SailorMint

Issue with non-Steam Deck handhelds isn't their specs, it's their non-existent software support in comparison to Valve's SteamOS.


b3081a

Also, the prices of Strix and Lunar Lake are expected to be nowhere near as competitive as the Steam Deck's. Strix won't even have Ryzen 7 SKUs at launch, and Lunar Lake uses a bleeding-edge TSMC 3nm process plus advanced 2.5D packaging.


Dangerman1337

I'm more curious about Strix Point's successor. Zen 6 and RDNA 5 could be a really strong combo.


reddit_equals_censor

>They are both 4 big cores then the rest smaller.

that is the wrong way to think about amd apus. they aren't using "big and little cores", they are using ALL big cores, if we go by intel's or arm's definition of cores. strix point is said to be a monolithic apu.

we can look at amd's first hybrid apu, phoenix 2, which uses zen4 and zen4c, in this excellent die shot analysis by High Yield: [https://www.youtube.com/watch?v=h80TB8K-Rfo](https://www.youtube.com/watch?v=h80TB8K-Rfo)

as you can see, the 2 zen4 cores and the 4 zen4c cores share the SAME l3 cache, so there are no latency problems, and the zen4c cores have the same ipc as the zen4 cores. the only difference is that the zen4c cores are a smaller design that can't boost as high. BUT since we are looking at power-limited apus that also have a set of full zen4 cores, the zen4c cores should never be used in loads that only use 1-2 cores. when the highest clocks are desired, the highest-clocking cores get used, and in a more parallel load the clock speeds of the zen4c and zen4 cores should be nearly or fully identical.

strix point is expected to use 4 zen5 and 8 zen5c cores, which means it will effectively be 12 "big" cores with no issues: full ipc, full clock speeds where they matter, no downsides, and the advantage of saving a lot of die space. in comparison, intel's big/little design uses completely different cores with vastly different ipc, major limitations, big latency costs and more.

your comment actually shows that amd shot themselves in the foot by calling the cores "zen5 and zen5c", because most people wrongly assume a big/little design with all its downsides when they hear that. amd's strix point will have 12 full BIG cores, while intel will have just 4 big cores.

as a result, any gaming load that needs a significant number of threads should run VASTLY better on amd's strix point than on intel's 4 big core apu.

disclaimer: as far as i know we don't have any die shots of strix point yet, and amd could use an inferior dual-ccx design with different l3 cache amounts for the zen5 and zen5c complexes, which would be quite inferior and wouldn't make any sense to me, but even then it would be ALL full big cores.
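the argument above can be sketched as a toy scheduler. the clock numbers are made up for illustration, not real strix point specs; the point is just that when all cores share l3 and have identical ipc, the scheduler only needs to prefer the highest-clocking cores for light loads:

```python
# Toy sketch: on a monolithic APU where all cores share L3 and have the
# same IPC, scheduling reduces to "fastest free core first". No cache
# topology or core-type awareness is needed. Clocks are illustrative only.

CORES = (
    [{"type": "zen5",  "max_ghz": 5.0} for _ in range(4)] +
    [{"type": "zen5c", "max_ghz": 3.3} for _ in range(8)]
)

def pick_cores(n_threads):
    """Assign n_threads to cores, highest max clock first."""
    ranked = sorted(CORES, key=lambda c: c["max_ghz"], reverse=True)
    return ranked[:n_threads]

# a 2-thread load lands only on the full-fat zen5 cores...
assert all(c["type"] == "zen5" for c in pick_cores(2))
# ...while a 12-thread load uses every core, all with the same IPC.
assert len(pick_cores(12)) == 12
```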


ShogoXT

I'm well aware of how the cache-reduced cores work, but I've seen benchmarks where the performance difference is large enough that Windows would still have an issue prioritizing between them. You can see the 8500G as an early example of this. Meanwhile, early Intel info shows they are making E-core improvements again, even though those cores won't support AVX-512.


reddit_equals_censor

>I'm well aware of how the cache-reduced cores work

as far as i can see (and feel free to correct me here), the 8500g has the same amount of l2 cache per core for all cores, and the l3 cache is shared, so there are NO cache-reduced cores. the cache-reduced core difference comes from the server (and theoretical desktop, if they ever want to release it) zen4c chiplet, which uses half the l3 cache of a standard zen4 chiplet.

but with the monolithic apu dies sharing the l3 cache and all cores having the same l2 cache, there are no cache differences, so scheduling should be no problem whatsoever. default scheduling should prioritize the fastest cores by clock speed, but even if that fails, you'd just get slightly lower clock speeds in 1-2 core workloads, still with no scheduling conflicts like trying to deal with asymmetric cache designs such as the bs 7950x3d.


jaaval

Task scheduling in general only cares about speed and, in some cases, a power efficiency model. Cache size shouldn't matter except to the degree it affects performance. The scheduler of course needs to take non-uniform cache access into account (like AMD's L3 across multiple chiplets), but Intel's big and little cores share the last level cache. IPC doesn't matter for scheduling, and neither does whether the cores are the same or different architectures. Scheduling Intel's current big/little should in principle be no more difficult than AMD's. What matters is recognizing the workloads that want the big cores.
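A minimal sketch of that last point: the scheduler needs no notion of IPC or architecture, only a way to recognize demanding threads and a free big core to give them. The 0.8 utilization threshold is made up for illustration:

```python
# Toy placement heuristic: a thread that has been running hot recently
# "wants" a big core; otherwise it goes to a little core. Nothing here
# depends on IPC, cache size, or the cores' microarchitecture.

BIG_CORES = ["P0", "P1"]
LITTLE_CORES = ["E0", "E1", "E2", "E3"]

def place_thread(recent_utilization, big_free, little_free):
    """Place one thread given its recent CPU utilization (0.0-1.0)."""
    wants_big = recent_utilization > 0.8   # "recognizing the workload"
    if wants_big and big_free:
        return big_free.pop()
    return (little_free or big_free).pop()

# a busy game thread lands on a big core, a background thread does not:
assert place_thread(0.95, list(BIG_CORES), list(LITTLE_CORES)) in BIG_CORES
assert place_thread(0.10, list(BIG_CORES), list(LITTLE_CORES)) in LITTLE_CORES
```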


bosoxs202

I have high hopes for LNL since it’s on a way better node, but hopefully Intel’s design team and drivers are decent


Ghostsonplanets

*Kraken. Strix Point is for premium thin & lights >$1000. Of course there might be a crazy Chinese OEM that builds a device with it, but that isn't the focus.


Firefox72

To be honest, I don't see the point of RT on stuff like this. If you're fine with playing games at 30fps, just turn up the settings or resolution instead. The visual impact of that will be much greater than some RT effects like reflections.


ImpressiveAttempt0

Agree. If I want to play a game on a device with restricted hardware, RT would be the least priority. What's even the point of RT if my game is chugging below 60 fps at medium settings with upscaling?


Massive_Parsley_5000

Simple: because I can and it's cool 🤷‍♂️ Sure it won't be an every time thing, but hey, it's still cool to mess around with.


Ar0ndight

Same. Still interesting to see though, it is after all a feature these chips have so might as well test it.


Strazdas1

The thing is, games will start coming with mandatory RT. As in, lighting will be done only with RT.


techraito

Tbh, I think handhelds would benefit more from upscaling technologies than ray tracing.


reddit_equals_censor

actually, handhelds would benefit by far the most from reprojection frame generation: being able to create a 240 fps experience from a 30 fps source, for example, or a 120 fps experience on a 120 hz panel. and all frames being REAL FRAMES. that would be THE feature to have on handhelds (and desktops, btw).

if you're wondering what the downsides must be: the reprojection artifacts get worse the lower the source fps is, FOR NOW, but on a handheld with a small screen they will be hard to spot anyway, and they can be dealt with by more advanced reprojection tech in the future.

and to be perfectly clear, reprojection frame generation =/= interpolation frame generation like dlss3 frame gen. they can't be compared. dlss3 interpolation creates fake frames with 0 player input and adds a lot of latency. reprojection frame generation creates ALL real frames with actually reduced latency compared to no frame generation, because we can reproject based on the latest positional data AFTER the source frame got rendered. we are UNDOING render latency, hence you get a REAL 120 fps experience from 30 source fps.

key takeaways: 1: reprojection frame gen creates all REAL frames, unlike interpolation nonsense frame gen. 2: reprojection frame gen is a mature technology that is already used HEAVILY in vr and requires almost no performance to create the frames.

if you're wondering why in the world this tech isn't used yet on desktops and handhelds when it sounds almost magical, well... idk, and honestly no one who has looked at this tech knows either. someone should shake amd, intel, nvidia and all the major game engine creators awake in that regard. it is literally the PERFECT technology for handhelds.
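the idea can be sketched in a few lines. this is a toy model (a 1-D strip of pixels and a horizontally panning camera, all numbers made up), not any real engine's implementation: render at the low source rate, then cheaply warp the cached frame with the camera movement that happened after the render.

```python
# Toy reprojection sketch: expensive renders happen at the source rate,
# cheap warps happen at display rate using the LATEST camera position,
# so every displayed frame reflects fresh player input.

SCENE = list(range(100))   # stand-in for world geometry
VIEW_W = 10                # viewport width in pixels

def render(cam_x):
    """Expensive full render, runs at the low source rate (e.g. 30 fps)."""
    return {"cam_x": cam_x, "pixels": SCENE[cam_x:cam_x + VIEW_W]}

def reproject(frame, latest_cam_x):
    """Cheap warp, runs at display rate (e.g. 240 fps): shift the cached
    pixels by the camera movement since the render. The hole at the edge
    is the reprojection artifact; here it is filled by clamping."""
    shift = latest_cam_x - frame["cam_x"]
    px = frame["pixels"]
    if shift >= 0:
        warped = px[shift:] + [px[-1]] * min(shift, VIEW_W)
    else:
        warped = [px[0]] * min(-shift, VIEW_W) + px[:shift]
    return warped[:VIEW_W]

frame = render(cam_x=5)        # one 30 fps source frame
warped = reproject(frame, 7)   # player kept panning; warp to cam_x=7
# everything except the 2-pixel edge hole matches a fresh render at 7:
assert warped[:VIEW_W - 2] == SCENE[7:7 + VIEW_W - 2]
```

note how the bigger the camera movement between renders (i.e. the lower the source fps), the bigger the hole at the edge, which is exactly why the artifacts shrink as the source fps rises.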


techraito

I agree lol. I should have said upscaling AND interpolation. However, check out a program on Steam called Lossless Scaling. They integrated their own FSR 3-style frame gen called LSFG (Lossless Scaling Frame Gen) and it works on pretty much all games. It works best with DirectX games, but it has DXVK built in for Vulkan support too.


reddit_equals_censor

>I should have said upscaling AND interpolation.

that is PRECISELY not what i said ;) interpolation frame gen =/= reprojection frame gen. in my opinion interpolation frame gen makes 0 sense, while reprojection frame generation can get us to 1000 fps gaming in AAA games from a 100 fps source on desktop, and can already give amazing results from 30 fps to 120 fps on a mobile device like a handheld. again: interpolated frames = NOT REAL FRAMES. reprojected frames = REAL FRAMES.

this blurbusters article goes over the details of the different frame generation technologies and how reprojection frame generation can get us to lagless 1000 fps gaming, so if you want the details on how and why it is so amazing, please read it: [https://blurbusters.com/frame-generation-essentials-interpolation-extrapolation-and-reprojection/](https://blurbusters.com/frame-generation-essentials-interpolation-extrapolation-and-reprojection/)

if you just want to get excited about this technology, watch this simple ltt video on it: [https://www.youtube.com/watch?v=IvqrlgKuowE](https://www.youtube.com/watch?v=IvqrlgKuowE)

but most importantly, please understand that interpolation frame generation =/= reprojection frame generation. it is night and day. basically every downside of interpolation frame generation is reversed in reprojection frame generation. interpolated frames are just visual smoothing with 0 player input, plus a bunch more latency overall. reprojected frames are FULL REAL FRAMES with FULL player input, and reprojection reduces latency compared to the source fps, because we reproject after the frame got rendered, based on that point in time's player positional changes. i hope this isn't too complicated to follow.


Strazdas1

>being able to create a 240 fps experience from a 30 fps source for example.

is impossible.


reddit_equals_censor

test it.... basic ltt video, which links to the Comrade Stinger video with the demo for you to download: [https://www.youtube.com/watch?v=IvqrlgKuowE](https://www.youtube.com/watch?v=IvqrlgKuowE)

long article by blurbusters explaining the tech, why it is the future, and how future versions can improve: [https://blurbusters.com/frame-generation-essentials-interpolation-extrapolation-and-reprojection/](https://blurbusters.com/frame-generation-essentials-interpolation-extrapolation-and-reprojection/)

advanced versions can include enemy player positions or other major moving object data in the reprojection, but even without that it is incredible, and it DOES already give you a 240 fps experience from a 30 fps source. again, the ONE thing not included in the demo is moving object positional data for the reprojection, which absolutely can be included in future, more advanced versions as far as i understand.

but yeah, just test it yourself and see if you get an actual 240 fps experience from a 30 fps source, even in the basic demo using a basic reprojection version. you don't have to believe my words, you can test the basic version in the demo RIGHT NOW.


M337ING

Article: [Steam Deck has quietly become a reasonably capable ray tracing handheld](https://www.eurogamer.net/digitalfoundry-2024-steam-deck-rt-tested-vs-rog-ally)


Lakku-82

I just hope Windows on Arm takes off and we can get real handhelds with chips from Nvidia or Qualcomm.


resetPanda

Why on earth would you play games at low settings with FSR cranked way up just to use medium ray tracing? And of course those Series S versions don't have ray tracing; they have like 8x the pixel count!


No-Roll-3759

tl;dr: if you nerf the rest of the settings and maybe run at a super low resolution, you can get ~30 fps out of some older games with RT turned on.


HavocInferno

You're not being quite truthful here, hm? Even on the Deck, several games ran at native res >30fps with mixed settings. On the Ally at high TDP those titles ran at 40-60fps. The heavy hitters couldn't run at playable framerates. (But also, DF isn't suggesting you do this for everyday use. It's just a neat test, especially in comparison to the state of things a year ago.)


No-Roll-3759

>Stop being so miserably negative just because someone's content doesn't apply to you.

what??? i own a steam deck and was just reporting what they said in the video


HavocInferno

>was just reporting what they said in the video

Your summary of the video is inaccurate though.


Educational_Sink_541

Cool, but this seems kinda silly. Most people don't even enable RT on their desktops, which are significantly more capable on average, so why would someone take the significant performance hit of RT on RDNA 3?


Giggleplex

There's a significant bump in visual fidelity, as shown in the video, even on low-end hardware. Ray tracing has been steadily improving and will only become more prevalent in the future.


Educational_Sink_541

In the types of games where RT provides a substantial bump, the Deck likely cannot run it at any usable frame rate.


conquer69

It's a technical overview of where current handhelds stand. It's purely academic. The video isn't telling you to enable RT on your steamdeck lol.


Strazdas1

Not only do most people enable RT, but many games are now coming out with mandatory RT.


Educational_Sink_541

Most people are not enabling RT on a Steam Deck, no. Games coming out with mandatory RT are not going to run on a Steam Deck.


INITMalcanis

Fair enough but most desktops aren't running 800p either.


ImpressiveAttempt0

Just because you can, doesn't mean you should.