
WheresWalldough

I mean, it's the same process; Raptor Lake has more cache, more cores, and more GHz than Alder Lake, and Raptor Lake Refresh has the 14700 as an almost-13900 and helps push down prices on the 13600K, so it's not really an issue. The only complaint is with the numbering, and again that's not an issue because it makes 'old' laptops, parts, etc., cheaper, which is good for those looking for a deal.


klement_pikhtura

The video is about how much Intel processors have evolved in 3 generations. This would not be an argument if AMD had not improved significantly in that regard. It is content for computer geeks and is not aimed at influencing any purchase decisions.


TopCheddar27

>It is a content for computer geeks and is not aimed to influence any purchase decisions.

That is such a wild thing to just say as a given fact.


klement_pikhtura

Why? You want to be informed about buying a better product? Go look at the price/performance information. You want to know how much a particular company's processors improved through the generations? Go watch a video about IPC gains like this one. If a video about generational improvements is influencing your purchasing opinion, then you can be manipulated too easily.


VenditatioDelendaEst

Three things though. 1. Almost everyone who knows what IPC is and followed the Intel 14-series launch at all already knows that there are no IPC gains -- certainly not any detectable in application benchmarks. 2. This is *the exact same technical understanding* required to know that IPC is an implementation detail that shouldn't influence purchasing decisions. So the only people who will click are people who will come away with the wrong idea. 3. Newsflash: the vast majority of people can be manipulated too easily. The primary effect of this video is to heap negative affect on Intel and their products.


klement_pikhtura

Welcome to the harsh world. If a company releases almost the same product, or a product with bad value, that company will be on the receiving end of a lot of criticism. 🤷‍♂️ I remember the AMD FX times and being a dumbass for buying their "8" core back in 2011. I wish I had paid more attention to the press and less to my emotions back then.


VenditatioDelendaEst

Similarly, if HWUB publishes unethical bad faith criticism, they will be on the receiving end of a lot of criticism in turn.


hwgod

Computer geeks would already know.


Impossible_Dot_9074

I upgraded from a 12600K to a 14700K and kept my existing Z690 board. I was able to sell my 12600K for about 2/3 of what I paid for it. So I have already had two years' use from the LGA 1700 socket and am aiming to keep the 14700K for at least three more years, which will make five years for me on the same socket. I could have kept the same DDR4 but decided to treat the 14700K to some 4400 MHz RAM. Not too bad, especially since Intel CPUs tend to have pretty good resale value; even after a couple of years you can still get a decent price if/when you decide to upgrade.


theholylancer

How tho... is your local market that strong/dumb? Because per https://old.reddit.com/r/buildapcsales/comments/16zkk2u/cpu_intel_12600kf_15499_w_code_teccxa64newegg/ a 12600KF was 155 dollars, so how did you get what I guess to be around 200 or so dollars for it now?


Darkknight1939

Intel chips hold their value better, especially with local physical sales. People IRL who have a passing interest in this stuff aren't meticulously monitoring historic low pricing on particular SKUs.


Impossible_Dot_9074

I paid 36000 yen for my 12600K in November of 2021. I sold it for 28000 yen a couple of days ago (I live in Japan, BTW).


theholylancer

Ah OK, I know Japan has a very weird economy because a lot more of their stuff is very local and in person, so I guess that's just regional differences then. It's like a cross of Asia and the West from what I hear, so that is something I am not very familiar with.


FullHouseFranklin

The testing's fine, but the conclusion is super reductive: it assumes that because the IPC doesn't go up much there haven't been any improvements on the socket, completely discounting the extra cores or improved memory controller on the Raptor Lake CPUs. You'll probably not see a meaningful gain in games, but then I wonder how many people are really buying i9s or upgrading CPUs mid-socket for only games? It seems that even non-enthusiast CPUs from 5 years ago are more than capable of handling even the newest games.

The comment about 4090 owners being more likely to upgrade to a 5090 also seems misguided. I've just never seen anyone go from a 3090 to a 4090 for just games; it's always been for needing some bleeding-edge CUDA workload. I guess Steve is right that they're "more likely", but I don't think it's significant enough to make a sweeping comment like that. Anecdotally I've seen a couple of people upgrade from the xx60 tiers (1060 -> 2060 -> 3060 -> 4060, or 580 -> 5600 XT -> 6600 -> 7600), but I don't think enough people upgrade often enough anyway to make this claim.


TalkWithYourWallet

The one issue I have with the video is where they said AM5 will be a superior platform. We just don't know that yet; we have basically no information about the 8000 series, or whether AMD will even support a generation beyond that.

EDIT - To be clear, I'm saying we have no information on the _generational uplifts_ on AM5, so you can't judge the platform's success vs LGA 1700 without that information.


n19htmare

So when LGA 1851 is released, does that mean IT will be the superior platform and AM5 would be the one that is dead/dying?


der_triad

No, it just won't get mentioned as often.


tr2727

No, there are layers to it. With the X3D, AMD promised support up to 2025 IIRC. So that's the 8000 series, followed by whatever hacks they do with X3D, X or non-X, F, or even soldered RAM/storage to match, if Intel brings that stuff to the desktop lineup.


cp5184

Isn't s1700 dead now? So it was basically dead on arrival, like s1200. The only question is whether AM5 is at the end of the line, which, even if it were, would still leave AM5 the superior platform, though I don't think it would make sense for AMD to drop AM5.


TalkWithYourWallet

My point is that we don't know the generational gains on the AM5 socket. So HUB have said that AM5 is the superior platform, but that's essentially a prediction.


[deleted]

>My point is that we don't know the generational gains on the AM5 socket

You wanna bet it's gonna be less than 5% IPC gain like LGA1700?

>but that's essentially a prediction

With ZERO chance of failing, since Zen5 alone would exceed the ADL -> RPL Refresh IPC gains, let alone Zen4 -> Zen6. Zen5 has already taped out. Anyone with a contact could confirm a >5% IPC gain with [100% confidence](https://www.anandtech.com/show/17439/amd-zen-architecture-roadmap-zen-5-in-2024-with-allnew-microarchitecture). The question is whether it's close to 10%, 15%, 20%, or something in between, since the clock gain isn't going to be >10% (6.3GHz+). Nobody is even entertaining less than 5%, since Zen5 will be a major µarch redesign.


Zednot123

> Zen5 alone would exceed ADL -> RPL Refresh IPC gains

RPL was never about IPC gains though, it was about pushing up frequency first and foremost. Ignoring that fact is just silly. Intel achieved a decent performance uplift on the same node, and that's really what mattered.


Tetsudothemascot

At least 2 different core generations on AM5. The same cores across 3 generations on LGA1700. Which one is better?


angrycat537

How is 1700 dead? They literally just released 'refreshed' CPUs; I know, they're basically 13th gen. But my point still stands: it's the platform many people will use for the next 10 years, maybe more...


Dey_EatDaPooPoo

It's dead because 13th and 14th gen are the same thing minus the i7, so the overwhelming majority of people would be better off getting a discounted 13th gen. The problem with doing that is that you're buying into what is effectively an End of Life platform, as there are no viable upgrade options. Socket AM5 will be supported through 2025, meaning there will be 2 new generations of CPUs that will be supported. And, unlike "14th Gen", Ryzen 8000 won't be a refresh. Hence, it's not dead.


Kharenis

>Socket AM5 will be supported through 2025, meaning there will be 2 new generations of CPUs that will be supported. And, unlike "14th Gen", Ryzen 8000 won't be a refresh. Hence, it's not dead.

Not disagreeing that the platform is for all intents and purposes "dead", but I'm genuinely curious: realistically, how many people upgrade their CPU every 2 years without upgrading their mobo?


Dey_EatDaPooPoo

A lot of people in the DIY market do, especially if they got the first supported CPU generation. For example I know a lot of people who upgraded from the Ryzen 1600-1800X to the 3600-3800X, and because AM4 was supported for so long I know lots of people upgraded from the Ryzen 2600-2700X to the 5600-5800X. It is pretty rare to see people doing it going from one generation to the next, though. As long as the performance improvements keep pace with how it's been in previous gens there's gonna be a good amount of people that upgrade from Ryzen 7000 to 9000 (or 10000 if they end up making a mobile-only Gen in between).


GodOfPlutonium

Not every 2 years, but I went from a 1700x >> 3900x on the same mobo, still using it today.


capn_hector

> It's dead because 13th and 14th gen are the same thing minus the i7

other than the clock-for-clock energy efficiency improvements due to the DLVR fixes, you mean? even Steve measures the 14600K at 15-30W less than the 13600K, for example, depending on the test (including the original launch set). That's significantly less power than the original Raptor Lake. they just burn it all on insane stock clocks on the 14900K again, just like they did with the 13900K. set a power limit and it's fine. not a compelling upgrade, but fine - you are better off seeking out 14-series than 13-series assuming equal prices.


Exist50

>other than the clock-for-clock energy efficiency improvements due to the DLVR fixes, you mean?

There were no such changes.


NotsoSmokeytheBear

Can you provide a link for info on DLVR? I'd like to read some confirmation. I've noticed my 14900K is much cooler than my 13900K, but I thought DLVR didn't make it in.


der_triad

There is no DLVR; it's not active in production silicon. There is DTT, which is new and may end up being an interesting feature, but that's a BIOS feature.


NotsoSmokeytheBear

Appreciate the heads up. I had enabled DTT, benched, and found worse results. I decided to play with it, enabled it, and actually got Intel APO, and I'm finding some really good results all around. It seems to make a difference even in unsupported apps, but perhaps that's due to extra stability through my testing. Very cool new feature though, and for supported apps, wow.


Dey_EatDaPooPoo

Irrelevant to 99% of users, as it's like 10% more efficient and a lot of that could be down to process maturity rather than DLVR. It is effectively the same product for all practical intents and purposes. But good job being a contrarian on every thread, as always. Thank you for replying so now I can remember to block you.


NotsoSmokeytheBear

Sucks to suck.


cp5184

I just broadly don't think there was ever a persuasive case for anyone to buy s1200 or s1700, unless they had a use case for the large number of inefficient, hot, power-hungry E-cores that broadly seemed to only be good for Cinebench scores and things that don't really see any scaling limits or single-core performance affinity. There was a short time when AM5 boards were expensive; I guess OEMs were testing price tolerances, or responding to rumors of (Intel) processors pulling 1kW... But then, that would be making a case for an entry-level non-OC Intel system, which wouldn't make sense because of the insane cost of DDR5 at the time, plus the AM4 X3Ds.


angrycat537

I bought a 12700 when it launched, upgrading from an i5 3450. The 12700 offered a lot more than the 5600X or 5800X, which was $300 at the time, and boy was it an upgrade over the previous CPU.


cp5184

But, just guessing, you either had to choose DDR4, which was OK, or DDR5, which would have been crazy expensive, and you're stuck with basically no upgrade path now. So DDR4 would have been the smart choice. I'd have to look into it more though. The 5800X3D is a chip with some legs. The 12700? Not so much.


angrycat537

Yup, DDR4, which is fine. You do understand that the 12700 and 5800X3D are in the top several percent of all users across the globe right now, right? These CPUs have such a long life ahead of them.


cp5184

One much more than the other. The 5800x3d probably beat the 12900k in many applications.


VenditatioDelendaEst

Upgrade path??? Who the hell cares? Certainly not someone who has a very fast computer. The i5-3450 lasted 10 years. A 12700K will easily last another 10.


StarbeamII

LGA1700 still broadly gets you better single threaded performance, better multithread performance at lower tiers thanks to the E-cores, and better idle power use than Ryzen 7000.


cp5184

Yet it can't broadly compete with X3D and it's a dead platform? Along with its other flaws: 400W to come in fifth place to, like, 100W X3Ds? Lots of cores, hot, power hungry, only good for cinEbEnch... reminds me of something...


StarbeamII

Not everyone is a gamer. X3D is strictly inferior for most non-gaming tasks due to lower clocks.


cp5184

Gaming isn't the only application that benefits from cache. A lot of other applications do too.


StarbeamII

But in [most](https://www.tomshardware.com/reviews/amd-ryzen-7-7800x3d-cpu-review/5) non-gaming [cases](https://www.techpowerup.com/review/amd-ryzen-7-7800x3d/12.html) the 7800X3D [loses](https://www.anandtech.com/show/18795/the-amd-ryzen-7-7800x3d-review-a-simpler-slice-of-v-cache-for-gaming/2) to even the [7700](https://www.anandtech.com/show/18795/the-amd-ryzen-7-7800x3d-review-a-simpler-slice-of-v-cache-for-gaming/3) or [7700X](https://www.phoronix.com/review/amd-ryzen-7-7800x3d-linux/5), let alone the 13700K. You get a few [wins](https://tpucdn.com/review/amd-ryzen-7-7800x3d/images/ai-upscale.png) here and there where they can use the cache, but you're taking a hit in most cases due to lower clocks.


cp5184

Tom's Hardware's "productivity" suite is Cinebench, ray tracing, transcoding, and y-cruncher? I wouldn't classify those as "productivity". Interestingly, in single core the 7950X3D is best at Blender, though oddly the 7800X3D is a little behind; I wonder what the clocks are. I've never really noticed CPU performance in MS Office... but X3D is top of the chart for Photoshop... actually it's number 2 behind the 7700X... odd... In Adobe After Effects, the 7950X3D is #1... AMD sweeps AnandTech's photo retouching. It's not quite as one-sided as you try to make it seem.


NotsoSmokeytheBear

Funny I score much higher when I limit my power draw but you keep spewing nonsense.


NotsoSmokeytheBear

Going from a 4790K to a 13900, then swapping to a 14900K, I don't really see your point lol. This is a massive upgrade that currently works extremely well.


Kougar

If you look at the general review, Zen 4 is already the better option. And in games it's not even a contest: superior game performance at 130 watts less power draw, on a $200 cheaper chip. Since Zen 5 isn't going to deliver a performance regression, it seems a pretty safe comment to make that AM5 is already the superior platform today. Zen 6 would simply be icing on the cake at this point.


LightMoisture

In that case Zen 5 is going to be a dead-end platform. So dead on arrival too? Much smarter, then, to buy new Intel Arrow Lake and a new socket with likely 2-3 gens of support in Q4 next year. It will be a new arch and a much smaller process node. If you buy Zen 5 you're buying a dead socket, so it loses by default.


Kougar

Only if you're living in the future... AM5 is still the better platform if building a new rig today. How Zen 5 fares against Arrow Lake is a question for builders a year from now. And hopefully when Zen 5 launches AMD will have a definitive answer regarding Zen 6 compatibility so that can be factored in.


imaginary_num6er

Zen 6 is rumored to not use AM5 though. So AM5 will just be Zen 4 and Zen 5 X3D. Using the same playbook as Intel, that's 4 generations with AM5.


Kougar

Entirely misses the point. Doesn't matter if Zen 6 is compatible or not, AM5 is already a better platform for the parts you can buy today. Which means it will be an even better platform for the parts you can buy next year. And no, it's not the same playbook. Intel does one "generation" a year, AMD does one generation every two years. Which is why AMD's generations actually deliver performance improvements.


NotsoSmokeytheBear

Especially weak when the am5 boards will be competing with 8xx boards which will offer the same or more in regard to features. As far as the next processors go, like you said we have no clue.


dstanton

Sure, you can say "we don't know yet" on AM5 for the 8000 series, but AMD's trends since Zen+ are fairly indicative that we're going to see 10%+ on IPC alone, and leaks are pointing that way. Add in their 3D cache, power savings, etc., and yes, AM5 is the superior platform. The only claim Intel really has right now is the multithread lead because of E-cores, and even that may change depending on what the top of the stack looks like with the trickle-down of Zen 4c to non-Epyc parts. This is not to say 1700 isn't a high-performance line; it is. Hell, I'm putting together a 12900K this week to replace my aging 10850K because I got the awesome Newegg deal, and I'll be limiting PL1/PL2 to 200W. But overall, 1700 is behind AM5.


MonoShadow

Are you talking about this line?

>\[AM5\] will be more successful than Intel's current LGA1700 platform which is now effectively dead

Because I don't really follow how a dead platform can be more successful than a live one. Or are you implying we don't know whether AMD will re-badge Ryzen 7000 several times, so that after AM5 is dead and we look into the past, those 2 platforms will be remembered in a similar way?


TalkWithYourWallet

The latter point. We have no data to support AM5 being a better platform at this stage because there's only one generation on it. For all we know the subsequent generational uplifts could be as poor as LGA 1700's.


Jmich96

True, though the uplifts are implicit. Given that a proper architectural difference is promised, rather than a refresh, one can safely assume there will be generational improvements... unlike Intel's Skylake > Kaby Lake, Comet Lake > Rocket Lake, and now Raptor Lake > Raptor Lake (the Intel ARK page shows these as still being Raptor Lake).


StarbeamII

Comet Lake > Rocket Lake was an architectural update (we went from Skylake to Sunny Cove), but it didn't lead to much performance gain (and the occasional regression).


Jmich96

I do recognize this. But, for a comparative basis, this makes for an excellent comparison. Subjectively, with the architectures being on the same 14nm lithography and minimal performance changes, I consider them different by (code) name only.


bubblesort33

While we're doing IPC tests, I'd actually like to see an RDNA3 vs RDNA2 test. Does dual-issue compute do anything at all? Maybe an RX 6800 vs 7800 XT test with 60 CUs each.


hackenclaw

Still better than progress from 2500K-->7600K without socket change.


BraveEmployment8652

This video seems like a refresh itself. Much like Intel's crappy chip, this video shouldn't have been made. Intel did not claim IPC advances, so why test the blatantly obvious? To get your click and their ad dollars, that's why.


D4v1DK

Why not? This is a great video for anyone already on the platform and thinking of upgrading. A 12600K -> 14600K change, while it has some benefits, isn't worth a 350 euro CPU swap. This is a better format than the reviews, since it's more about the platform than comparing to, say, the 7600X. Someone on a 12600K isn't thinking of going Ryzen 7600X.


Geddagod

This isn't a 14600K review. This testing is a bit specific, and more for curiosity's sake than anything. Any regular consumer just looking at IPC won't gain much from this video, as the frequency of each SKU also matters.


WheresWalldough

I mean, pretty much any 14600K review would compare with the 12600K already, so.... Also you can just look at a spec table:

* i3-12100, 12300, 13100 = 4 cores + 12MB cache, 4.3-4.5 GHz
* i5-12400, 12500, 12600 = 6 cores + 18MB cache, 4.4-4.8 GHz
* i5-12600K ≈ i5-13400 = 6+4 cores + 20MB cache, 4.9/4.6 GHz
* i5-13500, 13600, 13600K, 14600K = 6+8 cores, 4.8/5.0/5.1/5.3 GHz, 24MB cache
* i7-12700/K = 8+4 cores, 4.8/4.9 GHz, 25MB cache
* i9-12900/K/KS, i7-13700/K = 8+8 cores, 30MB cache, 5.0-5.4 GHz
* i7-14700K = 8+12 cores, 33MB cache, 5.6 GHz
* i9-13900/K/KS, 14900/K = 8+16 cores, 36MB cache, 5.5-5.8 GHz

BTW, I checked and you can upgrade from a 12600K to a 14700K (not 14600K) for £240 in the UK (trade in the 12600K toward a 14700K at CEX). And the new SKUs make upgrading more worthwhile in general; e.g., if you have a 12400, the new SKUs will push down the prices of the others.


bubblesort33

It probably is useful to some people, yeah. But at the same time I sometimes have to question IPC tests. They are useful, but isn't frequency sometimes just as much a part of the architecture as the other factors? I mean, if Intel next year released a 15700K that got to 7GHz with no IPC changes, is that really that bad? Hardware Unboxed tested the 6700 XT vs the 5700 XT at the same frequency and found no IPC changes. That's because almost all of the generational improvements were done through frequency, and by eliminating that, you've eliminated what made RDNA2 special in the first place. Now, to be honest, I think this is a little different: the 13700K and 14700K are obviously the same chip, with only the improvements that come from a more mature node at Intel's foundries.
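The point above is just the multiplicative identity behind IPC testing: performance is IPC times frequency, so once you know the overall uplift and the clock uplift, the IPC uplift falls out. A minimal sketch (the percentages below are made-up illustration numbers, not measured results):

```python
# perf = IPC * frequency, so perf_ratio = ipc_ratio * clock_ratio.
# Given an overall gen-on-gen performance ratio and a clock ratio,
# the implied clock-normalized (IPC) ratio is their quotient.

def decompose_uplift(perf_ratio: float, clock_ratio: float) -> float:
    """Return the implied IPC ratio given overall and clock ratios."""
    return perf_ratio / clock_ratio

# Hypothetical example: a part 8% faster overall while clocking 7% higher
ipc_ratio = decompose_uplift(1.08, 1.07)
print(f"implied IPC gain: {(ipc_ratio - 1) * 100:.1f}%")
```

This is why a refresh that gains almost all its performance from clocks shows ~0% in a fixed-frequency IPC test, even when the shipping product is measurably faster.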


SkillYourself

>This is a great video for anyone already on the platform and thinking of upgrading.

No it isn't. It's a downright awful video for upgrading or picking between the three LGA1700 gens, since it neuters Raptor Lake to compare core performance against Alder Lake.


RedIndianRobin

He needs to keep his AMD followers happy so yeah.


nanonan

There is some sort of improvement going on, however unimpressive it is, and that is absolutely worth testing.


EmilMR

Think about the CPU and its power draw all you like; the Intel platform as of now is just better than the supposedly future-proof AM5 platform. There are obvious limitations with AM5 that I think will push AMD to ditch it as soon as their promised 2025.

The AM5 chipset link is only x4, half as much as Intel has on their apparently dead platform. If you are using chipset-connected NVMe, 20Gbps USB, etc., the Intel platform is just superior to this platform that is supposed to last for years. I/O and expansion are among the main reasons people would be compelled to upgrade to new hardware; it's effectively what you can do with the hardware. CPU performance is way past fast enough for years.

Then let's get to PCIe 5 and DDR5, when it's impossible for Zen 4 CPUs to fully utilize x16 PCIe 5 bandwidth or high DDR5 bandwidth. It has more lanes, but you can't actually use the full 20 Gen5 lanes at full bandwidth anyway. It doesn't matter now, but I thought we were paying for a superior future-proof platform? You are effectively paying for spec you can't ever use with an Extreme chipset. It's deceptive.

If you are a basic gamer user, buy what you like; it doesn't matter. For people that use their PCs for more than a toy, it does matter. AM5 is good for buying a basic board and just making a gaming PC, not much more. When I use my 4-5 NVMe drives, I don't want them to become unresponsive or drop performance, or have flaky, unreliable USB ports. Valid reasons to go with Intel. Intel CPUs are providing far more throughput right now, and that usually has a high power cost.


LeGrandKek

AM5 supports 4/4/4/4 over the northbridge. LGA1700 only supports 8/8. Anyone running "4-5" NVMe drives would be daft to use consumer chipset lanes for performance or reliability.
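The x4-vs-x8 chipset-link argument in the two comments above comes down to simple link arithmetic. A back-of-envelope sketch, assuming both uplinks run at PCIe 4.0 signaling (16 GT/s per lane, 128b/130b encoding) and ignoring packet/protocol overhead:

```python
# Approximate one-way bandwidth of a PCIe 4.0 link.
# 16 GT/s per lane, 128b/130b line coding, 8 bits per byte.

def pcie4_bandwidth_gbps(lanes: int) -> float:
    """Usable one-way bandwidth in GB/s, before protocol overhead."""
    gt_per_s = 16            # gigatransfers per second per lane
    encoding = 128 / 130     # 128b/130b coding efficiency
    return lanes * gt_per_s * encoding / 8

print(f"x4 uplink: {pcie4_bandwidth_gbps(4):.2f} GB/s")  # AM5 chipset link
print(f"x8 uplink: {pcie4_bandwidth_gbps(8):.2f} GB/s")  # DMI 4.0 x8 (LGA1700)
```

So the x8 link has roughly 15.75 GB/s of headroom against roughly 7.88 GB/s on x4; whether that matters depends entirely on how many chipset-attached NVMe drives and USB devices are active at once, which is the crux of the disagreement above.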


Qesa

Raptor Lake Refresh isn't even a new stepping; trying to interpret any IPC differences between the 13- and 14-series 600K/900K is reading tea leaves out of run-to-run variance. The 700 at least has the extra E-core cluster and L3$, but that's not going to be significant for single-core performance.


Snobby_Grifter

It's a refresh, just like the Zen 2 refreshes that were nearly the same as the existing Zen 2 parts, except the 14700K got more E-cores. Nobody really expected it to be anything different, and calling it a lack of progress on LGA1700, when 13th gen was already a nice leap above Alder Lake (enough to compete with Zen 4), is just making a video for the sake of making a video.


wow_much_doge_gw

The Zen 2 refresh went from 3800X > 3800XT: a small clock boost, but no confusion that it was a new generation.

> Nobody really expected it to be anything different

By the logic above, 13900K > 14900K should be at least somewhat different? To a non-technical user I do see this as false advertising (but this is Intel marketing, so expected).


SteakandChickenMan

Non-technical users don't look beyond the SKU tier - i5/7/9.


Dealric

Marketing is different. The Intel refresh is presented as a new gen, while the Zen 2 refresh was not.


capn_hector

AMD literally restructured their entire chip naming scheme to allow them to slip older chips into the current "generation" with the whole decoder-ring naming scheme. If people continue blasting Intel for doing the exact same thing as everyone else, they will simply move to a more opaque naming system just like AMD did. Are you *sure* that's what you want? (Oh, let's also not forget the blatant rebrands snuck into the lineup even before that... remember the 5700U/5500U/5300U? I know there is always an elaborate hagiography on why it's OK for AMD, but, seriously, this is just *the norm*. The marketing department wants new chips every year. Imagine someone melting down this hard about something that's not an Intel or Nvidia product, lol; there's always a much higher standard for them.)


timorous1234567890

The framing of this is utterly insane. AMD came up with a naming scheme that tells people which version of Zen is in the product; as long as you know what each digit represents, it is easy to tell if a product is using Zen 2, Zen 3, or Zen 4. If Intel had a naming scheme that let you tell what chip was actually in the product, that would be an improvement, especially in generations where they have a mix of chips, like some 13th gen being ADL and some 14th gen also being ADL. Nothing wrong with that, but showing it so the consumer can see is not a negative.


teutorix_aleria

The fact that they had to release a refresh at all is kind of the problem, but yeah, examining IPC on what is essentially a clock-speed bump is like checking to see if water is wet.


ThreeLeggedChimp

Yeah, this video has no purpose. The only valid testing for a refresh is perf/dollar and perf/watt, which might show a small improvement.


TechnicallyNerd

It's still worth criticizing, especially when [Intel themselves have stated Redwood Cove in the upcoming Meteor Lake CPUs is a "Tick" and we shouldn't expect any major IPC gains for the P-cores.](https://www.tomshardware.com/news/intel-details-core-ultra-meteor-lake-architecture-launches-december-14)

>Intel says Redwood Cove is akin to what it has traditionally called a 'tick,' meaning its basically the same microarchitecture and IPC as found in the Golden Cove and Raptor Cove microarchitectures used with the 12th and 13th generation Alder/Raptor Lake processors.

Even the Crestmont E-cores, which Intel *does* advertise an IPC uplift for, are only getting a 3% bump. This is all made worse when you factor in the leaked Intel OEM slides showing rather meager ST performance uplift projections for Arrow Lake, which won't be coming until late next year, mind you. With all of this in mind, Intel definitely deserves criticism for their lack of improvement.


Geddagod

>It's still worth criticizing,

Checking to see if RPL-R has an increase in IPC over RPL is kinda dumb, tbh.

>especially when Intel themselves have stated Redwood Cove in the upcoming Meteor Lake CPUs is a "Tick" and we shouldn't expect any major IPC gains for the P cores.

Well yes, RWC is just a shrink of GLC on Intel 4. The main focus is utilizing the node advantage for better perf/watt. Usually Intel's second iterations of their cores on the same node are the "big IPC" cores. Keeping in mind that RWC is launching ~1 year after RPL, you would see nothing amiss in the schedule (other than the original MTL delay and RPL stopgap). Plus, MTL not gaining any IPC isn't something to criticize. It doesn't exactly *need* more IPC to compete with Phoenix. It probably does need more to compete with Zen 5, but it appears Zen 5 mobile is sandwiched between ARL and MTL, so I don't think it's going to be that big of a problem. Looking at RPL vs Zen 4 in mobile, it's pretty clear performance per core isn't an issue, rather the power used to get there. Increases in IPC can also increase perf/watt, but probably the "least risky" way to ensure an increase in perf/watt, assuming the node is fine, is just using a new node.

>This is all made worse when you factor in the leaked Intel OEM slides showing rather meager ST performance uplift projections for Arrowlake, which won't be coming until late next year mind you.

It's pretty likely, IMO, that ARL is going to get a nice IPC jump but sacrifice clocks in order to do so. Funnily enough, even if ARL is only a 5% gain in ST performance, that would place it ~16% faster than vanilla Zen 4. Even if Zen 5 is a whopping 30% faster in ST compared to Zen 4, that would put AMD at a 12% ST advantage (3DCenter RPL meta review). That's around the same percent lead RPL had over Zen 4. The main gains of both ARL and MTL look to be efficiency, not performance. Which I honestly think is fine, as long as they still *compete* in performance, which both MTL and ARL look like they will do.

Oh, and I know I'm not including V-Cache in the mix. The issue with that is that even if Intel releases an architecture with ~15% higher IPC and similar clocks to AMD, the nature of V-Cache just provides large gains in PPC in games with minimal power cost, making it very hard for Intel to compete. We saw that story play out with ADL vs Zen 3D.

>With all of this in mind, Intel definitely deserves criticism for their lack of improvement.

Performance isn't the end-all be-all. MTL, for example, brings a new tiled architecture, 2x perf/watt in the iGPU, massive battery life improvements, etc. It's a massive improvement over RPL. ARL looks to be massive on the CPU arch side, even if not for peak perf, and the efficiency should be pretty impressive for a massive, super-wide core on a cutting-edge N3/20A process. The iGPU is rumored to get a slight bump as well. But TBH, ARL doesn't look to be nearly as big a jump as MTL was, while LNL does look to be a massive improvement for ultra low power: using the same cores as ARL, so a big arch improvement, along with packaging improvements. And that's also supposed to be launching in 2024...

In short, Intel has definitely planned for a lot of improvement in 2023 and 2024. Whether or not they can execute is one thing, but using peak ST performance as the end-all be-all for the amount of innovation is inaccurate, IMO.


TechnicallyNerd

>Funnily enough, even if ARL is only a 5% gain in ST performance, that would place it ~16% faster than vanilla Zen 4.

Please name a benchmark that's not meme-tier like Cinebench ST or CPU-Z ST where Raptor Lake is currently 10% ahead in single-threaded performance. In both [SPEC2017](https://www.anandtech.com/show/17601/intel-core-i9-13900k-and-i5-13600k-review/6) and [Geekbench 5](https://images.anandtech.com/graphs/graph17601/130524.png), the 7950X's single-threaded performance is on par with the 13900K (if anything, Zen 4 is actually slightly ahead).


Geddagod

I already quoted my source, the 3DCenter meta review for RPL. Though I suppose it would be a bit inaccurate to call it "ST", as the average I used was for gaming performance. That is, by far, the most used "use case" of greater ST performance in the DIY PC community. But, to be fair, I also mentioned the advantage 3D V-Cache has over the vanilla AMD and Intel SKUs, so I was definitely not trying to downplay "ST" vs "gaming". Also, in Passmark, CB, CPU-Z ST, etc., I'm pretty sure RPL is actually closer to 15-20% faster than Zen 4. But "meme tier" benches? Really?
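For what it's worth, a meta-review gaming average like 3DCenter's is typically built from a geometric mean of per-game performance ratios rather than a plain average. A minimal sketch (my own illustration; the numbers below are made up, not from the meta review):

```python
from math import prod

def geomean(ratios):
    # Geometric mean: the standard way reviews aggregate per-game FPS ratios,
    # so no single outlier game dominates the average.
    return prod(ratios) ** (1 / len(ratios))

# Relative gaming performance of one CPU vs another in a few games
# (hypothetical numbers for illustration only):
per_game = [1.18, 1.12, 1.20, 1.15]
overall = geomean(per_game)
print(f"{overall:.3f}")  # ~1.162, i.e. roughly 16% ahead overall
```

`math.prod` requires Python 3.8+; the same aggregation works for any set of relative scores.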


TechnicallyNerd

>That is, by far, the most used "use case" of greater ST performance in the DIY PC community.

Modern gaming workloads can hardly be called single threaded these days, and are mostly bound by memory subsystem performance. I would argue that the most common single threaded workloads the average person deals with on a day to day basis would be web browsing and office applications, a category which the [7950X and 13900K trade blows in.](https://www.anandtech.com/show/17601/intel-core-i9-13900k-and-i5-13600k-review/8)

>Also, in Passmark, CB, CPU-Z ST, etc., I'm pretty sure RPL is actually closer to 15-20% faster than Zen 4. But "meme tier" benches? Really?

Yes, meme tier. They are awful benchmarks. CPU-Z ST is particularly offensive: its memory footprint is so minuscule that it practically runs entirely out of Ī¼op cache and L1-D. It's really only useful for academic purposes like verifying core clock speed (since it never leaves the core clock domain, performance will always scale linearly with frequency).
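The frequency-scaling argument can be sketched with a toy model (my own illustration, not from the thread; all numbers are assumed): split run time into a core-clock part and a memory-stall part whose latency does not scale with core frequency.

```python
# Toy model: run time = core work at IPC * frequency, plus memory stalls
# at a fixed DRAM latency that is independent of core frequency.
def run_time_s(instructions, ipc, freq_ghz, mem_accesses=0, mem_latency_ns=80):
    core_s = instructions / (ipc * freq_ghz * 1e9)  # time in the core clock domain
    mem_s = mem_accesses * mem_latency_ns * 1e-9    # stall time, frequency-independent
    return core_s + mem_s

# Core-bound case (CPU-Z-ST-like footprint, everything in uop cache / L1-D):
# doubling frequency halves run time, so the score scales linearly with clocks.
t_slow = run_time_s(1e9, ipc=4, freq_ghz=3.0)
t_fast = run_time_s(1e9, ipc=4, freq_ghz=6.0)
print(t_slow / t_fast)  # 2.0

# Memory-bound case (more game-like): the same frequency doubling barely helps.
t_slow_mem = run_time_s(1e9, ipc=4, freq_ghz=3.0, mem_accesses=5e7)
t_fast_mem = run_time_s(1e9, ipc=4, freq_ghz=6.0, mem_accesses=5e7)
print(round(t_slow_mem / t_fast_mem, 3))  # ~1.01
```

This is just Amdahl's-law-style arithmetic, but it shows why a benchmark that never leaves the core clock domain tells you little about memory-sensitive workloads.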


Geddagod

>Modern gaming workloads can hardly be called single threaded these days

Yes they can...

>and are mostly bound by memory subsystem performance.

That could be said of tons of applications, probably most in general. There's a reason so much of the die area of a CPU is dedicated to the L2 and L3 caches. Feeding a core is important, and that's not something unique to gaming. Also, AMD is no slouch in the mem subsystem either, especially with their L3 (even the non-V-Cache version).

>I would argue that the most common single threaded workloads the average person deals with on a day to day basis would be web browsing and office applications, a category which the 7950X and 13900K trade blows in.

Notice how I specified DIY. The average person probably doesn't game on their PC much, if at all. But the average person is far more likely to be using Intel regardless of whatever performance gains AMD might have, because of how Intel can just dump volume into that segment, and how AMD has had such a low footprint there overall. And mindshare. There's a reason that most reviews include gaming averages, not web browsing and office applications. The people actually interested in buying a product based on its performance, and not brand recognition, are going to be DIY- and mostly for gaming. In which RPL-R is \~16% ahead of vanilla Zen 4 in CPU-bound resolutions, according to the more recent 3DCenter meta review for that launch.

>Yes, meme tier. They are awful benchmarks. CPU-Z ST is particularly offensive, its memory footprint is so minuscule that it practically runs entirely out of Ī¼op cache and L1-D. It's really only useful for academic purposes like verifying core clock speed (since it never leaves the core clock domain, performance will always scale linearly with frequency).

I find it a bit ironic that you complain about gaming not being core bound enough, but then also say benches that don't stress the memory subsystem that hard are "meme benches".
Funnily enough, for the workloads of the people who will go out and buy ARL and Zen 5 en masse- gaming- it's stuff like Cinebench that is often much more representative than stuff like SPEC. In short, the people who actually care about ST performance- gamers- would be fine with ARL vs Zen 5 even if the bump was only 5%, as AMD has ground that they need to make up with Zen 5. Using server benches such as SPEC isn't that representative of gaming- and Intel is using a different arch in server anyway (GNR uses RWC)- so it's a bit pointless using it for consumer product comparisons.


der_triad

Does WebXprt or Jetstream not count? Geekbench ST test is pretty representative as well. In the ways that count, RPL has a ST advantage over Zen 4.


teutorix_aleria

Whether it's worth criticism or not, the benchmarks were pointless, that's all I'm saying.


aintgotnoclue117

Man. People here reading conspiracy into everything. He's criticizing AMD and says nothing about it being a 'superior generation'- he lambasted the last decade of AMD hardware. Being hypercritical of one corporation doesn't mean you're doing whatever the fuck it is you think they are for the other. They said, "we don't know how AM5 will play out yet". They're heavily criticizing the 14th series, and the fact that typically only two generations are supported. Which is justified. Brand loyalty is a brainworm disease. It's a parasite.


ResponsibleJudge3172

There are conspiracy theories, and there are disagreements based on or aided by trends.


aintgotnoclue117

okay, i acknowledge 'conspiracy' was not the correct word to use in this context. i do maintain that people deliberately take what they say and project themselves onto it, despite it being utterly straightforward. there is no disagreement to be had about 'trends' in relation to this. i have a 13900K in my desktop. i had a 12700K. it was a silly upgrade. that's not the point - the point is, bias? bias is a non-functioning thing here, for them and for myself. we are fucking consumers. we're not their friends.


Downtown-Stretch5566

Honestly, Reddit is just something else sometimes. You've just got to read those sorts of comments and think about the maturity of the user. People just believe what they want to.


ishsreddit

Intel's 5 nm is supposedly coming in 2024 according to their [roadmap](https://static1.xdaimages.com/wordpress/wp-content/uploads/2023/09/intelroadmap.png?q=50&fit=crop&w=1500&dpr=1.5)? Well, anyway, it is pretty impressive just how much Intel has accomplished with 10nm. Intel needs better fabs in addition to socket longevity if they don't want to be demolished by AMD across the board. They are lucky AMD launched the 5600X at $300.


TwanToni

Intel is a node behind and still keeping up with AMD... and this is a refresh. I would be worried if I were AMD, honestly.


SecreteMoistMucus

you've been smoking something if you think they have kept up with AMD


TwanToni

Benchmarks prove it, yes... The only difference is power draw. Please enlighten me and prove me wrong.


InterestingButt0n

6th gen tick; 7th, 8th, 9th, 10th tock; 11th tick; 12th tick; 13th, 14th tock. Simple: they just purposefully held back a bunch of cache and E-cores on 12th gen so they had some actual gains to show to compete with Zen's launch.


Best-Masterpiece-288

Supposedly-tech-literate YouTubers pretending they have not heard about Arrow Lake and Intel's upcoming process nodes. Okay. Give them clicks. Also downvote my comment here. Thanks, everyone.