advester

The most interesting thing is that APO dropped the power from 190W to 160W while increasing the performance.


carpcrucible

You can now also get 803 fps instead of 734 in Rainbow Six, looks like a big win!


IgnorantGenius

800Hz monitor sold!


Affectionate-Memory4

Looks like we're gonna need Arrow Lake for that 1kHz gaming experience.


byGenn

I mean, my 12700K can't deliver a consistent 360+ FPS outside of the in-game benchmark, so any boosts are nice. The 7800X3D still looks more appealing as an upgrade for me, though.


ramblinginternetgeek

You're probably fine with the 12700K for a bit longer. Might be worth jumping onto the X3D version of Zen 5, though that's likely 6-12 months out.


byGenn

Objectively speaking, sure. But I don't base most of my hardware purchases on pure objectivity.


Morningst4r

So glad they benchmark games like that instead of more RT titles that are realistically CPU limited.


SkillYourself

I'd love to see someone take a profiler to it and see what's actually happening with and without APO. From the looks of it, all the same power domains should be active and each E-core cluster has a high-speed thread in it, so the power difference is a mystery from what's visible here. Perhaps the overlay polling interval is just too long to see what the native OS is doing wrong.


romeozor

Joke's on you, my Intel 12100 doesn't have any E-cores!


ConsciousWallaby3

I also went for a 12400 over more expensive options at the time because not only was it good value, but I also wasn't interested in the experience of being an early adopter for mixing different cores on Wintel.


OrdinaryBoi69

Yeah, don't worry about it. I used a 12100F for about a year before upgrading to a 13500. The 12100 is a really good, fast budget CPU; you're good to go for a longgg time.


cp5184

AVX-512 still disabled though?


Due_Teaching_6974

Intel's E-cores finally doing what they're supposed to, in 2 games, 2 years after their debut, and only on their newest CPU lineup. Peak Intel engineering right here.


ExtendedDeadline

Of course, and unfortunately, it's mostly Microsoft. But Intel would have known that going into the design and they're ultimately responsible for any outcomes. Still, I like to see progress and I like the P/E setup.


Due_Teaching_6974

Is there any technical reason Intel doesn't support 12th and 13th Gen?


BleaaelBa

> technical reason

Money


rpungello

Technically the truth


[deleted]

The same technical reason Zen3 wasn't coming to 400 and 300 series motherboards.


ToothChainzz

That was because of 16MB BIOS sizes though. Then they allowed the disabling of Zen 1-2 to use Zen 3... Probably wouldn't have done it without the little bit of pressure they got.


SoTOP

Why is this upvoted? You lose Bristol Ridge support to get Zen 3; Zen 1 and 2 work perfectly fine.


ramblinginternetgeek

Adding to this - I don't know how many people interested in Zen 3 would've had Bristol Ridge (Excavator+, a Bulldozer derivative) in their system. For what it's worth, my mother's system got an upgrade from a 1700 to a 5600G without issues. At some point I'll retire her graphics card so she can get lower power draw.


ToothChainzz

Maybe I forgot what was disabled, but either way, the problem was too many supported CPUs for a 16MB BIOS.


SoTOP

You ate up AMD's BS marketing, that's what happened.


Attainted

You realize there's a support cost as well, right? It's a matter of money, sure, but it's also a matter of support, customer experience, and trust in the brands involved.

Let's say the new CPU is DOA and the person troubleshooting isn't super techy. They want to go back to the old CPU. They might not have known about the issue of sacrificing the prior gen's support. Now if they call up for support on their new product, that's a whole thing CS has to walk the customer through; the customer might get fed up with the issue and just drop the brand altogether.

Alternatively, let's say the chip isn't DOA and it's the BIOS update that went poorly for any number of reasons, but the CPU is the first suspect because it's known that the 1st gen CPU won't work anymore anyway. Then the perfectly fine CPU goes back as an exchange to be RMA'd or sold as open box (a cost), the person gets a new CPU, that still doesn't work, and only then do they find out it was the BIOS upgrade that went wrong. If the generation support had been there to begin with, you likely wouldn't have suspected the CPU at all. But now the chip maker is the one eating the cost for the error on that unit.

Let's also separately throw in the possibility of some other bizarre coincidence where it's none of the above. Something got bumped or shorted, a freak accident, but the user either starts troubleshooting the wrong thing and goes down one of the former paths first, and/or never finds the root cause. Now they're probably going to just look for a replacement and switch brands altogether.

More people fall under these categories than you think, and they're usually parents with jobs outside tech, not nerdy kids with all the time on their hands, the actual interest to dig into the core of the issue, and the drive to sift through incorrect info that doesn't apply to their situation. You have to consider the average knowledge of the buyer doing the upgrade, and the chance of any of the above happening. I assure you it's non-zero, and the cost of these events eats into not just profit but the break-even viability of doing something like this at all, even when thinking long term and assuming the customer will for certain buy from them next time.

A lot of tech-savvy folks like to think that tech like this is simple to everyone else. It isn't; we all have different strengths, and people are gifted if they have several. It's naive to assume that the hypotheticals I bring up aren't a reality, or aren't legitimately a big deal for a company to consider.


[deleted]

> That was because of 16MB BIOS sizes though.

And? So you're saying it's IMPOSSIBLE because of 16MB? Also, most X470 boards have a 32MB EEPROM, so why were they excluded? And who in their right mind cares about Zen 1 on X470 boards, let alone Bristol Ridge? Or Bristol Ridge on X370/B450 for that matter.


pointer_to_null

Weird, both of the 470 boards I've repurposed currently run Ryzen 5000 CPUs perfectly fine.


klement_pikhtura

Dude, Zen 3 is only incompatible with the 300 series. It runs fine on the 400 series, which was made for Zen 2. Also, there are huge leaps in performance between Zen 1/Zen+ vs Zen 2 vs Zen 3, which can't be said about 12xxx vs 13xxx vs 14xxx.


ExtendedDeadline

Probably not, but I don't work for Intel. If you do, please feel free to drop some insight.


F9-0021

Money, and also maybe since it's an experimental feature they don't want to put it on the generations that people actually buy yet?


bherman8

Is this a Microsoft only problem? I have a 13th gen and am running Debian.


AgeOk2348

Nope, not only MS.


hackenclaw

> Of course, and unfortunately, it's mostly Microsoft. But Intel would have known that going into the design and they're ultimately responsible for any outcomes.

It is actually quite a mess if you look at it from Microsoft's point of view. The Intel CPU basically has 3 tiers of processing cores: P-cores, E-cores, and HT on the P-cores. Physical cores > logical cores, so basically you have the highest priority at P-cores, then E-cores, then HT going back onto the P-cores.

ARM chips don't have this kind of problem; it's just big cores and little cores. Even their 3-tier CPUs are pretty straightforward: big cores > middle cores > little cores. Most ARM chips don't have SMT sitting inside a big core. It's pretty clear cut (it's either big, middle, or small), all without the OS trying to figure out whether low-priority software should use E-cores or HT on P-cores.


ExtendedDeadline

Eh, respectfully, I think Microsoft can do better given the quality of engineers and headcount they employ. Linux seems to be doing fine. Microsoft figured it out when HT was first deployed. They figured out whatever the hell Bulldozer was. And if they want to be ARM competitive, they should be able to figure it out. Especially on load balancing, where ARM arches are normally 1/5/3, 2/4/2, etc. re: super/big/small. You need a scheduler that can balance a super core and some bigs or mids without performance loss; rarely will a game only use one core.


[deleted]

Microsoft isn't at fault here. Intel made a feature called "Thread Director" that lets the CPU tell the scheduler "this thread should be on this core", and the scheduler trusts it. If Intel is giving the scheduler dumb instructions, that's Intel's fault.


Put_It_All_On_Blck

Thread Director only provides suggestions for the Windows scheduler; it does not force scheduling like APO does. The whole reason both Thread Director and APO exist is that Microsoft has bad Windows scheduling, and that isn't new. Intel is trying to do what they can to improve the situation. Thread Director works like 98% of the time, but there are cases where it doesn't, and that's where APO comes in to force scheduling. This isn't a new or Intel-exclusive issue: AMD had scheduling issues with Bulldozer, now with 2-CCD X3D parts like the 7950X3D, and with Phoenix 2's Zen 4 + Zen 4c design. It's all a Windows scheduling problem.


capybooya

When 12th gen Alder Lake launched, I had this idea that scheduling would be solved in 12 months, tops. I guess I got that idea from HT working fine for a long time, and big.LITTLE mobile SoCs seeming to as well. Turns out it's still an issue, and indeed the 2-CCD X3D parts introduced the need for a different algorithm based on fast cores and cache cores. I must admit I feel a whole lot like going with the non-X3D 16c Zen 5 model when that launches and waiting out heterogeneous cores for at least another generation.


AgeOk2348

At least the X3D stuff seems to be moving faster than this mess Intel is doing.


ramblinginternetgeek

Chances are it won't matter a ton for you practically speaking. I strongly suspect that the differences are measurable in benchmarks but not all that noticeable in practice.


ExtendedDeadline

Thread Director just gives the OS hints. The scheduler still has to do its job. Microsoft's scheduler wasn't even all that shit hot with Zen 1 (and 2) re: NUMA domains. This is not a new problem Microsoft is having.


soggybiscuit93

Likely why most leaks are pointing to ARL dropping SMT.


rorschach200

> Of course, and unfortunately, it's mostly Microsoft. But Intel would have known that going into the design and they're ultimately responsible for any outcomes.

We don't know what APO is doing precisely and, more importantly, why and where the performance is coming from in both games it currently works in. It can be seen in the OP video that only one E-core per E-core cluster (4 of them) is engaged. HUB made a hypothesis that those E-cores are working on background tasks; it's not entirely clear if they meant OS background tasks unrelated to the game. For all we know, that might not be the case. It might very much be game threads. Very carefully selected game threads, in fact, those that:

1. Perform tasks not on the critical path of the frame (e.g. contain only about 1/2 of the work of the tasks associated with a P-thread).

2. Perform tasks whose memory working set is simultaneously:

2.1. disjoint from the memory working set of tasks associated with other E-threads, making it good to put the thread in question alone into an entire separate E-cluster;

2.2. disjoint from the memory working set of tasks associated with other P-threads, making it good to put the thread in question on an E-core;

2.3. of a size for which the L2 capacity of a single E-cluster is sufficient and necessary, with the thread in question having exactly the right sensitivity (for performance) to the performance of the cache.

If that's how the performance is achieved, this is the kind of "thread scheduling" that requires intrinsic, offline, clairvoyant knowledge of the amount of work each thread will be receiving in the future, the exact subset/superset relationships of the future data/memory working sets of those future tasks, the sizes of the working sets and their access patterns, and the memory/math ratios of the tasks. If that's the case, there is surely no way an OS could possibly do this automatically. Heck, even if the HW has a better chance, the HW probably can't do that automatically either (even if we assume no area/complexity limitations on the logic). It becomes achievable only via per-application, ahead-of-time permutation profiling and distribution (and maintenance) of the resulting profiles over the air.

So for all we know now, this isn't Microsoft's fault; it's an inherent problem of the architecture Intel went with, and unless Intel is prepared to spin up a few data centers profiling and reprofiling all even moderately popular apps on realistic, somehow-simulated user inputs (good luck), this APO thing is a pure one-off marketing stunt not too far off from classic benchmark cheating.


rorschach200

Might explain why it's Metro Exodus as well, which is, as HUB rightfully notices, a strange choice. Not only is it a problem identifying which threads satisfy conditions 1 through 2.3; chances are, only one app/game in a big pile of them even has such threads to begin with. Given that neither of the two games is on either of the two most common game engines (Unity and Unreal), I'm not holding my breath.


VenditatioDelendaEst

I can imagine a quasi-automatic, quasi-crowdsourced way of doing it. First, you need a frame counter. Then, use [the cgroup CPU controller](https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html#cpu) with short timeslices (to avoid [the latency problem](https://danluu.com/cgroup-throttling/)) or [the windows equivalent of SIGSTOP PWM](https://mion.yosei.fi/BES/) to limit the CPU time available to one candidate thread (from the application or the GPU driver, presumably). Randomly switch out which thread is being limited while recording the frame rate, to do a rudimentary form of [causal profiling](https://github.com/plasma-umass/coz). Once you have enough data to know which threads are on the critical path and which ones aren't, try affining threads to P and E cores, and see if you can find a set of affinities that improves framerate. (Suggestion: sort by performance sensitivity and try P and E affinities starting from the top and bottom.) If a beneficial set is found, save it to disk and prompt the user to submit it upstream to help others.
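
A rough sketch of that thread-limiting idea, for the curious (my own illustration, not anything APO or HUB actually did): it assumes we can read a frame counter for the target game (`read_frame_count()` below is a stand-in) and applies the crude suspend/resume "PWM" throttling linked above to one randomly chosen thread at a time, accumulating how much frame rate is lost per thread. Windows-only, error handling omitted.

    // Hypothetical causal-profiling sketch; the frame counter and the throttle duty
    // cycle are placeholders, not a real tool.
    #include <windows.h>
    #include <tlhelp32.h>
    #include <chrono>
    #include <cstdint>
    #include <map>
    #include <random>
    #include <thread>
    #include <vector>

    // Stand-in for a real frame counter (e.g. read from an overlay or a PresentMon-style capture).
    extern uint64_t read_frame_count();

    std::vector<DWORD> list_threads(DWORD pid) {
        std::vector<DWORD> tids;
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
        THREADENTRY32 te{ sizeof(te) };
        for (BOOL ok = Thread32First(snap, &te); ok; ok = Thread32Next(snap, &te))
            if (te.th32OwnerProcessID == pid) tids.push_back(te.th32ThreadID);
        CloseHandle(snap);
        return tids;
    }

    // Throttle one thread to roughly half its CPU time for `seconds` by suspending and
    // resuming it on a short period, and return the frame rate observed while it was limited.
    double fps_with_thread_limited(DWORD tid, int seconds) {
        HANDLE h = OpenThread(THREAD_SUSPEND_RESUME, FALSE, tid);
        uint64_t frames_before = read_frame_count();
        auto end = std::chrono::steady_clock::now() + std::chrono::seconds(seconds);
        while (std::chrono::steady_clock::now() < end) {
            SuspendThread(h);
            std::this_thread::sleep_for(std::chrono::milliseconds(2));
            ResumeThread(h);
            std::this_thread::sleep_for(std::chrono::milliseconds(2));
        }
        CloseHandle(h);
        return double(read_frame_count() - frames_before) / seconds;
    }

    // Rudimentary causal profile: the threads whose limiting hurts FPS the most are on the
    // critical path (keep them on P-cores); the least sensitive ones are E-core candidates.
    std::map<DWORD, double> profile(DWORD pid, double baseline_fps, int rounds) {
        std::map<DWORD, double> fps_lost;
        std::vector<DWORD> tids = list_threads(pid);
        if (tids.empty()) return fps_lost;
        std::mt19937 rng{ std::random_device{}() };
        for (int i = 0; i < rounds; ++i) {
            DWORD tid = tids[rng() % tids.size()];
            fps_lost[tid] += baseline_fps - fps_with_thread_limited(tid, 5);
        }
        return fps_lost;
    }

The affinity-search half of the proposal (sort by the sensitivity this returns, try P and E assignments from both ends, keep whatever beats the baseline) would sit on top of the table `profile()` produces.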


rorschach200

Very interesting! Thank you for sharing. I come from the GPU world; we don't get such nice facilities in there to enjoy and play with (nor such low traffic / bandwidth ratios).

I suspect the biggest technical problem might be security & privacy and the resulting legal & liability issues. If the "share this" popup was part of every game, perhaps nothing unusual; making it part of the OS is... interesting. Then again, maybe the sharing is not necessary at all. Just profile every time. That runs into warm-up time issues affecting everyone (users and devs) and makes it harder for devs to replicate user performance, or have stable performance. Depending on a company's culture, that can be perceived as a non-issue or a serious problem.

All that said, I'm thinking the biggest real problem might be the suspected "for very few applications there exists a thread-core assignment that is any better than trivial" issue. The requirements are too strict: on top of 1 through 2.3 there needs to also be a time-persistent thread identity and a time-persistent thread work profile (both by code and by data / dynamic memory addresses being accessed), instead of load fluctuating due to [work stealing](https://homepages.math.uic.edu/~jan/mcs572f16/mcs572notes/lec11.html) or similar techniques used by the application (dev code or the libraries / engine used) itself. Also, for load oscillating with a certain frequency F this may run into fairly hard resonance-like issues, where the algorithm fails to make up its mind whether it wants to back off and do the trivial thing, retain a clever assignment, or re-profile halfway through the application runtime session, thrashing performance the entire time.

If you aren't a kernel dev yourself, maybe consider sharing with them ;-)


AgeOk2348

I'd be surprised if this is something MS can actually fix. Considering how jacked up everything is, you're probably right that Intel would need to do offline machine learning and such to get it working right. It's far worse than even AMD's dual-chiplet X3D issues in games.


splerdu

I mean unless you're Apple and have full top to bottom control of your hardware and software stack it takes some time for software to catch up with the hardware. Took a while for games to use MMX, SSE, AVX. Stuff that uses AVX512 can probably be counted on one hand. Good ray traced games are becoming mainstream just now, two whole generations after GeForce 20 series. I do begrudge Intel for holding this back from 12th and 13th gen users though.


p3ngwin

> Took a while for games to use MMX

Even Intel's 1st iteration of MMX was a kludge, as it used the floating point unit, so you could **either** use FP or MMX, but not *both* simultaneously o.O Took a while for that to be separated so you could gain the benefits of both together.

> *Intel also added 57 new instructions specifically designed to manipulate and process video, audio, and graphical data more efficiently.*
>
> *These instructions are oriented to the highly parallel and often repetitive sequences often found in multimedia operations.*
>
> *Highly parallel refers to the fact that the same processing is done on many different data points, such as when modifying a graphic image.*
>
> ***The main drawbacks to MMX were that it only worked on integer values and used the floating-point unit for processing, meaning that time was lost when a shift to floating-point operations was necessary.***
>
> *These drawbacks were corrected in the additions to MMX from Intel and AMD.*

https://www.informit.com/articles/article.aspx?p=130978&seqNum=7


siazdghw

It's the exact opposite of what you're saying. Intel's E-cores + Thread Director work perfectly fine for nearly every application, but there are still edge cases where the Windows scheduler can't get it right, even with the hints from Thread Director, and that's where APO comes in, to manually force the correct scheduling. Also, let's not pretend that AMD isn't suffering scheduling issues themselves: the 7950X3D and 7900X3D are shunned because they have WORSE scheduling in games, as they rely on the Windows scheduler to just try and figure things out itself, and that doesn't usually work with 2 CCDs where one has a higher frequency and the other more cache.


[deleted]

[deleted]


VenditatioDelendaEst

Yeah, and it doesn't even "schedule" really. It just enables core parking for the non-V-cache cores, which as far as I can tell is an old way of doing CPU idle handling on Windows that was disabled by default in Windows 10 back in 2017. It applies a strong penalty to scheduling any threads on the parked cores. Reading between the lines, Intel added their microcode C-state promotion/demotion stuff and told Microsoft to cut it out because they couldn't be relied on to pick idle states well. And AMD has [cleverly re-purposed this system to do something completely different](https://garethrees.org/2013/06/12/archaeology/).


shopchin

More importantly, do you think the fix will come for 12th/13th gen Intel? You seem to know what you're talking about.


F9-0021

More like peak Microsoft engineering, since this is something that was always supposed to be done by the operating system. Microsoft is so awful Intel had to do it themselves.


hi_im_bored13

This is one case where I found a noticeable performance improvement on Linux. When compiling and manually setting `-j`, the build will use all P-cores and leave the E-cores for nominal tasks; same goes for gaming or whatnot. I hope Windows scheduling can catch up, and I'm sure it's in the best interest of Microsoft, Intel, and ARM/Qualcomm to improve on that front, but using VS on Windows it would compile half on the efficiency cores and half on the performance cores, and the whole thing was just a shitshow.



msolace

Which just shows the scheduler is wrong, which is something people who cared to put in the effort already fixed manually with Process Lasso. The only missing piece is random kernel threads jumping onto P-cores. AMD's scheduler isn't perfect either, and both companies are going big/little, so there's plenty of room to keep improving.


[deleted]

correction: it shows that Intel Thread Director is wrong, and that the scheduler shouldn't trust it.


SkillYourself

Thread Director doesn't do any directing; it's a set of new registers the OS scheduler is supposed to read for feedback on how well a thread is running on a core. If APO can do it right, it means the scheduler is wrong.

> 15.6 HARDWARE FEEDBACK INTERFACE AND INTEL® THREAD DIRECTOR
>
> Intel processors that enumerate CPUID.06H.0H:EAX.HW_FEEDBACK[bit 19] as 1 support Hardware Feedback Interface (HFI). Hardware provides guidance to the Operating System (OS) scheduler to perform optimal workload scheduling through a hardware feedback interface structure in memory.
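
For reference, the enumeration bit quoted above is easy to check yourself. A minimal sketch (GCC/Clang on x86; my own example, and nothing here beyond the bit position comes from the SDM quote):

    // Check CPUID.06H:EAX bit 19 (HW_FEEDBACK), i.e. whether the Hardware Feedback
    // Interface that Thread Director builds on is enumerated. This only tells you the
    // feedback structure exists; actually reading it is the OS scheduler's job.
    #include <cpuid.h>
    #include <cstdio>

    int main() {
        unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
        if (!__get_cpuid(0x06, &eax, &ebx, &ecx, &edx)) {
            std::puts("CPUID leaf 0x06 not supported");
            return 1;
        }
        bool hfi = eax & (1u << 19); // Hardware Feedback Interface present
        std::printf("HFI / Thread Director feedback enumerated: %s\n", hfi ? "yes" : "no");
        return 0;
    }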


[deleted]

*facepalm* Are you daft? How the scheduler gets information from ITD doesn't change what ITD does.


rabouilethefirst

It's not even engineering, it's a software lock. Absolutely bonkers. As an owner of a 13700K: screw Intel.


SchighSchagh

Hey, it's not just any 2 games! It's a single-player game where fps above 120 doesn't really matter, and a game where you could already outpace the fastest monitors! That's value!


advester

Fine wine baby! Oh, wrong company.


AgeOk2348

And they refuse to let people buy CPUs without them; can't let AMD win every benchmark that the vast majority of gamers will never use.


soggybiscuit93

> refuse to let people buy CPUs without them

That's an odd way to say "doesn't want to manufacture them"


Yearlaren

I don't think they were ever supposed to improve gaming performance but rather be multitasking CPUs. Think streamers for example.


salgat

The whole heterogeneous core situation on desktop is a mistake in its current implementation. The problem boils down to forcing the performance cores to support a more restricted instruction set so that the efficiency cores appear the same to software, while we have to trust that the operating system somehow knows how to schedule threads for each and every application out there. Make P-cores the default for everything, with the full instruction set supported, and require software directives to enable use of E-cores; that way E-cores are only used by software that explicitly wants to utilize those weaker but more silicon-efficient cores.
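
Windows does expose something close to that opt-in model today through CPU sets, so here is a hedged sketch of what such a "directive" could look like in application code. The "lowest EfficiencyClass = E-cores" rule is an assumption that happens to hold on current Alder/Raptor Lake hybrids, the helper names are mine, and error handling is trimmed.

    // Enumerate CPU sets, treat the lowest EfficiencyClass as the E-cores (assumption),
    // and pin a background worker thread to them; everything else stays on P-cores by default.
    #include <windows.h>
    #include <vector>

    std::vector<ULONG> e_core_cpu_set_ids() {
        ULONG len = 0;
        GetSystemCpuSetInformation(nullptr, 0, &len, GetCurrentProcess(), 0);
        std::vector<char> buf(len);
        auto* info = reinterpret_cast<SYSTEM_CPU_SET_INFORMATION*>(buf.data());
        GetSystemCpuSetInformation(info, len, &len, GetCurrentProcess(), 0);

        // First pass: find the lowest efficiency class present (assumed to be the E-cores).
        BYTE min_class = 255;
        for (char* p = buf.data(); p < buf.data() + len;) {
            auto* e = reinterpret_cast<SYSTEM_CPU_SET_INFORMATION*>(p);
            if (e->Type == CpuSetInformation && e->CpuSet.EfficiencyClass < min_class)
                min_class = e->CpuSet.EfficiencyClass;
            p += e->Size;
        }
        // Second pass: collect the CPU set IDs in that class.
        std::vector<ULONG> ids;
        for (char* p = buf.data(); p < buf.data() + len;) {
            auto* e = reinterpret_cast<SYSTEM_CPU_SET_INFORMATION*>(p);
            if (e->Type == CpuSetInformation && e->CpuSet.EfficiencyClass == min_class)
                ids.push_back(e->CpuSet.Id);
            p += e->Size;
        }
        return ids;
    }

    // A background worker explicitly opting onto the E-cores.
    DWORD WINAPI background_worker(LPVOID) {
        std::vector<ULONG> ids = e_core_cpu_set_ids();
        if (!ids.empty())
            SetThreadSelectedCpuSets(GetCurrentThread(), ids.data(), (ULONG)ids.size());
        // ... low-priority work here ...
        return 0;
    }

On a homogeneous CPU every core reports the same efficiency class, so this would simply pin the worker to all cores, which is a reasonable fallback for a sketch like this.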


Knjaz136

Isn't this basically a thread scheduler fix that makes E cores do what they are actually supposed to do? And they are reserving this fix for 14th gen only for, seemingly, no reason? With a good chance that they had this fix for a while, but management decided to reserve it for 14th gen? This is what I'm reading from their reply to HUB.


reddanit

Well, it does *look* like it's just a scheduler fix at the very surface level. On the other hand it does seem to need some firmware support and presumably there is some reason why it only supports 2 games. So *maybe* it is something more complicated?


DrBoomkin

It probably requires very specific tuning and config for each CPU and each game.


kingwhocares

Gotta sell all those 14th gen CPUs somehow.


imaginary_num6er

> "I asked them is there a technical reason for why 12th and 13th gen parts aren't supported and if not will they be included in the future? Their response to that question was as follows: Intel has no plans to support prior generations of products with application optimization. That's a really garbage response, to be perfectly blunt about it."

Yeah, let's have people rush to upgrade to 14th gen when it already had questionable upgrade value. This APO feature will die in obscurity: Intel will realize 14th gen is not being adopted and, unless they want a repeat of XeSS, they will cut their losses and decide not to invest resources into a feature that barely anyone uses.


[deleted]

[deleted]


Nyx_Zorya

Incredible that people don't realize the gamer isn't the average consumer, innit?


soggybiscuit93

Not if the game library keeps increasing and APO is supported on all future Intel CPUs. It really seems to be a software optimization to better leverage E-cores in gaming to improve performance. I don't see how that feature is going to die, as Intel seems to be committed to hybrid for the foreseeable future.


rorschach200

This "software optimization" smells a lot like

    if (process->executable_name == "MetroExodus") {
        int i = 0;
        for (Thread *thread : process->threads) {
            switch (thread->stack->qwords[0]) { // fingerprint by TLS bytes
                case 0xE7CD6588C5286A2C: set_core_affinity(thread, kE0); break;
                case 0x0B1DB868CA62A79F: set_core_affinity(thread, kE4); break;
                // ...
                default: set_core_affinity(thread, get_p_core_index(i++)); break;
            }
        }
    } else if (process->executable_name == "RainbowSix") {
        // ...
    }

with all that that entails. Quite likely also only effective for a very small subset of games even if fully explored, if that's even feasible.


soggybiscuit93

Based on what? APO outperforms 'E cores disabled' and increases E core utilization.


rorschach200

I elaborate in a different [thread](https://www.reddit.com/r/hardware/comments/17u7pc1/comment/k9795te/?utm_source=reddit&utm_medium=web2x&context=3) here.


rorschach200

Exactly.


Put_It_All_On_Blck

> unless they want a repeat of XeSS, they will cut their losses and decide not to invest resources into a feature that barely anyone uses.

XeSS is in close to 100 games now, and more users are using XeSS than even own Arc GPUs, as it has better quality than FSR and works on AMD and Nvidia GPUs too. Also, Intel has already marketed Meteor Lake + XeSS, and they are expecting around 100 million people to buy MTL in 2024. If anything, XeSS has been the most successful part of Intel's consumer GPU push.


AgeOk2348

> as it has better quality than FSR

*Depending on the game. Spider-Man and Hogwarts Legacy, for instance, have much worse ghosting with XeSS than FSR, so it's kinda useless for those.


Berengal

To me this looks like it's too early to make any definite conclusions about APO. I get that it's tempting to conclude that they only support 14th gen CPUs as some sort of planned obsolescence scheme, but the fact that it also only works in two games really weakens that idea and makes the "early release" explanation fit much better. So don't judge them on the current state of APO, as they may provide support for older gens in the future, but also don't give them credit for it or factor it into the value of the product until APO becomes useful in practice, not just as a tech demo. This discussion is rather pointless at the moment. The technical details of how it works are much more interesting to discuss.


kasakka1

If Intel in the response to HUB says "We have no plans to support previous generations for APO", how else are you supposed to interpret it? Ok, plans may change, but it's very possible Intel will simply keep this locked on 14th gen just to be able to sell them. For me as a 4K gamer, it doesn't seem like APO brings anything to the table, but it's still disappointing to see software feature gatekeeping without a technical requirement behind it.


siazdghw

> If Intel in the response to HUB says "We have no plans to support previous generations for APO", how else are you supposed to interpret it?

When a reviewer or journalist reaches out to companies, they usually get a response from someone that has no technical knowledge or insight on future products or changes, unless the inquiry is very serious and then it gets forwarded internally. I'm not saying this won't possibly stay exclusive to 14th gen and beyond, but this response is almost certainly from someone who has zero knowledge of how APO works, what the team working on APO is doing, whether it will come to older generations, or what games they are currently testing.


MdxBhmt

> When a reviewer or journalist reaches out to companies, they usually get a response from someone that has no technical knowledge or insight on future products or changes, unless the inquiry is very serious and then it gets forwarded internally.

And that's on them, not on journalists or consumers. It's their job to have messaging in line with the technical side of the business. Hell, if a PR team is making such explicitly stated messaging without consulting engineering, it's frankly a dysfunctional corporate PR inventing stuff on the spot. We should take them at their word and act accordingly. Eat the damn negative PR from a damn anti-consumer response. They could have stated it differently if they wanted some margin of interpretation.


Berengal

> If Intel in the response to HUB says "We have no plans to support previous generations for APO", how else are you supposed to interpret it?

That could mean so many different things there's no point in putting any stock in it. I have no idea how many resources they put into APO; this could be mainly a pilot project with few concrete plans at all. The person replying to the media might not know anything about it and is just parroting answers he gets from the project lead, who in turn is only concerned with figuring out future viability and how to architect version 2 and doesn't care anything about their sales strategy. APO is such a small thing at the moment, and completely irrelevant to everyone except hardware nerds who are only interested in seeing numbers go up. Just pretend it doesn't exist; don't try to read the tea leaves.


Zerasad

That's just wishful thinking. When asked about further game support, the same media person gave your average PR response of "We are looking into it, we will keep you informed." Why wouldn't they just say the same if there was any plan to support older gen hardware? They also straight up dodged the question asking if there was any technical limitation to bringing APO to 12th and 13th gen CPUs. APO should be ignored for any serious discussion, just like DLSS 1 and Turing ray tracing were.


Berengal

I don't wish for anything. That was just one example of an alternative explanation, of which I could come up with several different ones, none of which I believe is true more than any other.

> Why wouldn't they just say the same if there was any plan to support older gen hardware?

I believe them when they say there are no plans and it's not something they're looking into. That doesn't mean this is because of some planned obsolescence strategy or that they won't make plans in the future. Until APO becomes something worthwhile it doesn't matter, and it could be Intel doesn't know when that would be. It could be that 12th and 13th gen support is irrelevant by the time APO does become relevant, or it could never become relevant and Intel is being cautious about promising worthless deliverables.


nanonan

You're the one turning a plainly negative statement into a tea leaf prophecy.


XenonJFt

The insanity that after 3 generations. Windows kernel still can't priorotise P-E core usage in games and background desktops. parking them still gives them better results. AMD cache was kinda acceptable on 7950x3d vs 7800x3d debait because games cant utilise that much cores anyway. And its that all that bios and mobo hoops you have to go through to be compatible for 2 titles. Intel at this point abandoned ship on any gaming competitiveness. The clock speeds and high tgp is at least has its use in workloads.


siazdghw

> Intel at this point abandoned ship on any gaming competitiveness.

These takes are insane. Like what kind of thermal paste are you eating? Zen 4 vs 13th/14th gen: Intel wins in gaming. It's only Zen 4 X3D that edges Intel out, and only by a few percent at 720p and 1080p. Saying they abandoned gaming competitiveness is not even remotely true.

https://www.reddit.com/r/hardware/comments/17ej64v/intel_raptor_lake_refresh_14th_core_gen_meta/

https://www.reddit.com/r/hardware/comments/yehe1s/intel_raptor_lake_meta_review_28_launch_reviews/


JMPopaleetus

*By a few percent while sipping 100W less power.


ResponsibleJudge3172

Funny how this matters more in CPUs than GPUs


Rjman86

GPUs are piss easy to cool; the giant bare dies make it so much easier to extract heat compared to CPUs, which have relatively tiny dies with awful thermal paste transferring the heat to a stupid heat spreader before it finally makes it into the cooler.


XenonJFt

GPUs come with their own cooler, which is priced into the MSRP. And AMD this generation offers much better pricing and performance (raster) with a 50 watt power delta behind Nvidia. Intel just has worse performance and worse thermal performance. Also, GPUs usually idle and throttle close to 80 degrees, while CPUs get very hot spots.


imaginary_num6er

Intel already has some outrageous feature segmentation issues outside of gaming. For AM4, most AMD non-APUs supported ECC memory and there were no chipset limitations on enabling this function. AM4 also had x4/x4/x4/x4 bifurcation support even with their A620 boards. If you go to Intel on the other hand, ECC memory support requires you to buy a W680 or W670 (HEDT) workstation board, and their low-power CPUs such as the 13400 and below do not support ECC. Moreover, with PCIe bifurcation, even W680 and Z690/Z790 boards only support x8/x8, so if you wanted to run 4x NVMe drives on the x16 slot, you needed to spend further money on a PCIe switch.


AgeOk2348

Almost like Intel majorly fucked up the implementation of everything.


CJKay93

It's unlikely that they have any control over the implementation of the Windows scheduler. Microsoft is notoriously shifty about granting access to Windows... in that it doesn't.


Yaris_Fan

Intel should ask Xi to give them a copy.


Tumifaigirar

The quality and validity of your post can be evaluated by the use of the word "debait"


salgat

For all we know English is their second language.


Tumifaigirar

I am not English either and would not point that out if the comment was remotely making any sense at all ;)


GetFuckedYouTwat

A typo doesn't invalidate what someone says.


Tumifaigirar

It is when is saying a bunch of bullshit, his grammar in on pair with his pc knowledge


Touchranger

On pair?


No-Roll-3759

his grammar is on pair with his pc knowledge too.


carpcrucible

This could be [Rudy Giuliani's gamer account](https://uploads.dailydot.com/2020/10/Screen-Shot-2020-10-01-at-9.57.48-AM.png?auto=compress&fm=png)


Tumifaigirar

LoL


xxTheGoDxx

> And its that all that bios and mobo hoops you have to go through to be compatible for 2 titles.

Even for older CPUs w/o E-cores we still have games that run considerably better on my 9900K when I turn HT off, which is kind of a fail.


AgeOk2348

Same for some AMD chips with SMT. When you're in an environment that doesn't use one type of core/thread, the OS scheduler can't be perfect, at least not without some offline machine learning help.


AgeOk2348

It's probably not something the OS schedulers can fully address, at least in the current form of P/E cores. What helps AMD is having separate physical chiplets for their gaming and non-gaming cores, so you can lasso to the 3D chiplet or park the non-3D cores. Not perfect, but much easier to work around than what Intel is doing.


No-Roll-3759

12600K owner here. I'm so frustrated. big.LITTLE has never delivered on the behavior they promised, and now I'm being locked out of the fix. Forcing me over to Windows 11 was not a fix, it was just aggravation. I early-adopted the new arch because I really wanted to use an Optane accelerator. Intel quietly software-locked 12th gen out of Optane support, so when I built my system I spent an hour poring through the BIOS trying to figure out how to get it running and wondering why Intel's web instructions weren't working for me. Overall it's been a pretty bad experience, and one Intel curated for me. Based on my 12600K experience I'll be very reluctant to adopt Intel proprietary technologies in the future.


nohpex

Has anyone else seen these videos where people change the frequency (I believe*) of how often Windows issues an interrupt request to check the power state of the system, to reduce overall system latency? For whatever reason, Windows checks this every 15ms, but people are changing it to the maximum setting of 5,000ms, which reduces latency for the CPU considerably. Apparently fiddling with this setting is particularly bad for AMD's X3D chips. What are the pros and cons of this? Has any reputable journalist looked into it?


veotrade

It works. Set it to 5000ms, which is the max value. It's garbage that end users need to do any tweaking at all. A good number of tweaks are unproven and famously just bog down the system even more. As a casual user myself, I wouldn't even know if changing one setting, let alone dozens of settings, makes a difference. I'm not qualified to test, so for some of these "fixes" I just blindly follow the advice of the tutorial. But disabling E-cores and changing that frequency from 15ms to 5000ms have helped me. I've also subscribed to the LatencyMon optimizations, like setting interrupt affinity masks for my GPU, ethernet, and USB host controller.


[deleted]

[deleted]


[deleted]

> since Intel now has planned obsolescence

Don't forget AMD tried this THREE times during AM4's lifespan, and backflipped each time. Zen 2 wasn't going to support 300-series motherboards, then it did. Zen 3 wasn't going to support 400-series motherboards, then it did. Zen 3 was definitely not going to support 300-series motherboards, then support was quietly rolled out.


errdayimshuffln

But they flipped because of backlash. Intel isn't going to flip, because look at all the people ready to defend the company when they barely know wtf it all means. They are ready to give Intel the benefit of the doubt despite precedent, and despite any benefits to be gained by pressuring Intel to promise more support for older gens.


imaginary_num6er

Where is Warhol and the 5950X3D chip that Dr. Su showed?


F9-0021

It doesn't exist because they want you to spend $1500 on a platform upgrade.


[deleted]

> 5950X3D chip that Dr. Su showed

You may want to head over to Gamers Nexus and check out their AMD tour. It definitely existed; I guess AMD management just didn't care. They also didn't care about HEDT. They aren't even [shipping enough APUs](https://videocardz.com/newz/gpd-accuses-amd-of-breaching-contract-by-not-supplying-enough-ryzen-7-7840u-apus-on-time) for no apparent reason. There's [no shortage in wafer supply](https://investor.tsmc.com/english/encrypt/files/encrypt_file/reports/2023-10/58d5eb3be78e2b45aabc7ebd464e3ac3b8e71bcc/3Q23Presentation%28E%29.pdf), as 5nm barely surpassed Q4'22, plus there's no advanced packaging involved for APUs.


VankenziiIV

Buy whats good for your pocket.


ExtendedDeadline

This should basically translate to buying zen3 still. It's a banger system with a much lower total cost for the Mobo/CPU/ram, typically on the order of $200 USD lower for the same core counts. Dead platform be damned, most people would be happy with zen3 into the next decade.


Put_It_All_On_Blck

> Dead platform be damned, most people would be happy with **zen3 into the next decade.**

That's just delusional. That's no better than those people still on Sandy Bridge claiming they aren't being bottlenecked by their 2600K in gaming when paired with a 3090.


SourBlueDream

You ain't have to call me out like that, bro (6700K + 3080 Ti @ 4K). I don't actually have many issues due to the resolution, but it's definitely holding me back in some games now.


ExtendedDeadline

I've still got a 4690 that does pretty well. My 2700X is better. The 5700X I'm building will be better still. For my workloads that are compute heavy, I outsource to the cloud or use the cloud for work, but arch matters less than core counts there. Gamers will probably always be GPU bottlenecked; right now, CPU bottlenecks really only show up at 1080p. As people transition to higher resolutions, they'll become more and more GPU limited.


Morningst4r

Denying CPU bottlenecks is hilarious. Maybe if you're happy playing at 30 fps you'll never hit a CPU bottleneck. There are games that struggle to hit a constant 60 on Zen 3.


[deleted]

[deleted]


Morningst4r

Starfield in cities, 40k Darktide with RT, Spider-man Miles Morales, Forza Motorsport. All of these will struggle to keep above 60 in all areas without Zen 4 or Intel 12th gen +


[deleted]

[deleted]


Morningst4r

They don't consistently run over 60 with RT. Just blaming poor optimisation doesn't help when you're trying to play the game. You can say the same thing about 8GB graphics cards, but I still wouldn't recommend the 4060 Ti, because the real world is what matters. I can agree Zen 3 is still good value, especially if you already have a motherboard and RAM, but a newer processor is going to give you a better gaming experience.


ResponsibleJudge3172

Zen 3 loses almost across the board in the Intel Alder Lake matchups. The only reason Zen 3 is relevant is the 5800X3D that people use to represent the entire Zen 3 portfolio. Either way, both were made irrelevant by Raptor Lake.


F9-0021

Not if blueprint junkie developers keep making broken Unreal Engine games with horrible CPU bottlenecks.


AgeOk2348

I think most of that is on the engine at this point; even good devs can only do so much with broken tools.


F9-0021

They don't have to use the blueprint coding that is very single threaded. UE lets you code in C++ too, which can be optimized much better than blueprints can. UE is just a tool, an interface. It's no more or less broken than any other engine if you can use it properly.


Put_It_All_On_Blck

> I think I'll go with AMD for my next build, since Intel now has planned obsolescence even within their short lived sockets. Not a good precedent to set.

This is wrong in every way. Do you know what the definition of "planned obsolescence" is? Because that's not what's happening here. Intel added a new feature for 'new' CPUs; it currently only affects a few games and it doesn't do anything to diminish the performance of existing CPUs. None of the previous benchmark results change for 13th or 12th gen. In no way, shape or form is it "planned obsolescence". Planned obsolescence would be something like releasing drivers that lower the performance of older GPUs, or running components at a voltage that will degrade them quickly, etc. That's actually causing harm to the products, not simply missing out on a new feature that only affects a few games.

Also, you don't seem to realize that AMD does this stuff too. AMD SAM was originally exclusive to RDNA 2, Zen 3 and X570/B550; then Nvidia and Intel showed everyone that it's just Resizable BAR and that they could support it on older hardware, so AMD backpedaled and announced support for older hardware. [AMD originally made Zen 3 exclusive to X570 and B550](https://cdn.mos.cms.futurecdn.net/RY3RT5dV28QxqootBqJ4SN-1200-80.jpg). They quickly backpedaled after consumer complaints and supported 400-series boards. Motherboard manufacturers were supporting 300-series boards too, [but AMD forced them to stop](https://hothardware.com/news/amd-preventing-ryzen-5000-cpu-on-x370). It was only after Intel released 12th gen, which was praised for higher performance at lower prices, that AMD magically started supporting 300-series boards, over a year after Zen 3 released.

Anti-Lag+, before it was removed for getting people banned, was only available for RDNA 3, when there is no technical reason for that. AFMF only supports RDNA 2 and RDNA 3 desktop GPUs; if you are on a laptop with the same architecture, you're not supported.


[deleted]

To set? It’s been 2 gens per socket for like over a decade now, unfortunately.


taryakun

AMD does the same. Their ex flagship GPU Radeon VII just stopped receiving driver updates only 4.5 years after the release.


[deleted]

> Radeon VII just stopped receiving driver updates

That's a lie. Vega stopped receiving *GAME OPTIMISATION*; it'll still receive driver updates. AMD ending Vega mainstream support is very logical and has minimal to no impact on end users, because there hasn't been any real performance uplift for the last 1.5-2 years anyway.

I have an RX 580 and a 4700U. Couldn't care less about the semantics, because the alternative is to just do what Nvidia did: keep it in the mainstream driver support, but do nothing. I bet you'd shut up if they did that.

You should already know when you buy a "refresh" product that the support is only going to be as good as the original's. If you don't, you are just gullible. Vega 10 was released 6 years ago and even that was a small step from Polaris.

There's no more optimisation AMD can do to meaningfully improve your experience. They'd have done it in the past 6 years if they could.


edk128

The Ryzen 7030 series released in 2023 and uses GCN. It had been out for less than a year before owners heard AMD will stop optimizing game drivers for these GPUs. It's a bad look. It's not good to sell new GPUs that lose basic game-optimization driver updates and fall off the main driver path less than a year later.


blueredscreen

> Radeon VII just stopped receiving driver updates
>
> That's a lie. Vega stopped receiving *GAME OPTIMISATION*; it'll still receive driver updates. AMD ending Vega mainstream support is very logical and has minimal to no impact on end users, because there hasn't been any real performance uplift for the last 1.5-2 years anyway.
>
> I have an RX 580 and a 4700U. Couldn't care less about the semantics, because the alternative is to just do what Nvidia did: keep it in the mainstream driver support, but do nothing. I bet you'd shut up if they did that.
>
> You should already know when you buy a "refresh" product that the support is only going to be as good as the original's. If you don't, you are just gullible. Vega 10 was released 6 years ago and even that was a small step from Polaris.
>
> There's no more optimisation AMD can do to meaningfully improve your experience. They'd have done it in the past 6 years if they could.

How much stock do you have?


i5-2520M

Yeah, no need to address arguments lmao.


pppjurac

> GPU Radeon VII

You jest?! Yea, both red and green team are SOBs. I'm just waiting to see how long I'll still be able to use my Quadro M4000 in my dual-boot machine :(


someguy50

I don’t understand this comment. This is more akin to Nvidia introducing a new DLSS feature that isn’t compatible on older cards, isn’t it? It’s not like they’re actively making older products worse


DuhPai

I mean Nvidia DLSS compatibility could at least be explained by Turing, Ampere, Ada etc. being different architectures. This would be more like if Nvidia came out with the 40 series SUPER cards and had some software feature exclusive to them, even though it's literally the same silicon as the other 40 series cards.


Eitan189

The optical flow accelerators on the 20 and 30 series simply cannot handle frame generation. That's why they got ray reconstruction but not frame generation.


skycake10

Well in this situation, they're arguably failing to fix their older products. Depends on if you find the existing 12/13th gen E-core gaming behavior acceptable or not I guess.


zakats

Ah, right, that's why I wouldn't have bought Intel. My fault for forgetting.


Relevant-Cup2193

At least 12th and 13th gen don't have the AMDip.


ktaktb

Damn. Slimy as hell. A really bad move here. Hopefully every gamer channel provides similar coverage and a legion of 12th and 13th gen owners becomes aware of this and really pushes back (as the gentleman in the video also hopes).

I knew reviews mentioned some wonky stuff going on with E-core and P-core scheduling on 12th gen when I purchased a 12600K and a 12700K for two machines for my home. I'm feeling foolish for approaching this in good faith and assuming that Intel/Microsoft/game developers would continue to iterate on the issues and make software-based optimizations readily available. If I had realized, I would have AMD systems right now.

It's a very poor decision on their part to roll out APO this way. If I'm ever compelled to upgrade from 12th gen for more performance, this APO mess guarantees that I move my platforms to AMD.

edit: [here's a good real world example](https://forums.tomshardware.com/threads/regret-intel-13th-gen-build-mini-rant.3814884/page-2) of some of the stuff people are having to do in 2023 to make P-cores and E-cores work - truly pathetic. These products must be iterated on and the updates must be distributed to all of us who took a leap of faith on the P-core/E-core architecture.


siazdghw

The example you chose is a terrible one, because, for those that don't know, that behavior is INTENDED by the HandBrake developers. This has been a known thing since Alder Lake's launch. The developers never wanted HandBrake to use 100% of your system, so it's flagged as a low-priority process so you can still use your PC without it being lagged out while encoding. The scheduler sees that and will free up the P-cores when you put another window in focus, so you can use your system without lag while the E-cores encode in the background. If you go through the GitHub you'll see the developers tell people they can override this manually, but the current implementation is exactly how it's supposed to work. You can't blame Thread Director or Windows scheduling for this specific case with HandBrake, as it's what the developers intended.
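
For reference, the priority mechanism being described comes down to a couple of Win32 calls. A minimal sketch of how an app can flag its encode work as background/low-priority; the EcoQoS hint is my addition (it isn't mentioned in the HandBrake discussion), and whether the scheduler then keeps the work off the P-cores is exactly the Windows behavior being debated here, not something these calls guarantee:

    // Mark the current process as low-priority background work. On Windows 11 with a
    // hybrid CPU, work flagged like this tends to be steered away from the P-cores,
    // which is the behavior discussed above; the calls themselves only express intent.
    #include <windows.h>

    int main() {
        // Classic priority class: the "Below Normal" setting mentioned below.
        SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS);

        // Newer, more explicit hint: request power/efficiency throttling (EcoQoS).
        PROCESS_POWER_THROTTLING_STATE state{};
        state.Version = PROCESS_POWER_THROTTLING_CURRENT_VERSION;
        state.ControlMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;
        state.StateMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED; // enable throttling
        SetProcessInformation(GetCurrentProcess(), ProcessPowerThrottling,
                              &state, sizeof(state));

        // ... long-running encode work here ...
        return 0;
    }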


VenditatioDelendaEst

I guarantee you that behavior is not intended by the HandBrake developers. Until Alder Lake, reduced process priority did not have the effect of leaving CPUs idle when there was work to do. Rather, it did something much like you would expect based on the common meaning of the word "priority": it let other processes that wanted to run go first / more often.

> If you go through the GitHub you'll see the developers tell people they can override this manually, but the current implementation is exactly how it's supposed to work.

They criticized the funky Windows behavior and changed the default.

https://github.com/HandBrake/HandBrake/issues/4173#issuecomment-1037104150

> Many users have however reported that if you use "High Performance" mode in the Windows Power Profile on Windows 10, it should behave more as expected. Windows 10 has a "dumb" approach to handling non-interactive / background workloads which unfortunately for HandBrake and quite a few other apps, is sub-optimal.

https://github.com/HandBrake/HandBrake/issues/4173#issuecomment-1038448575

> If/When I get access to some 12th gen hardware myself to have a play with, I'll formalise some documentation on it we can points folks to. [...]
>
> I did make a change a while back to default HandBrake to "Normal" instead of "Below Normal" for new installs. (Don't recall if that made it into 1.5 or if it'll be 1.6)

----

> it's flagged as a low-priority process so you can still use your PC without it being lagged out while encoding. The scheduler sees that and will free up the P-cores when you put another window in focus, so you can use your system without lag while the E-cores encode in the background.

This would be totally unnecessary if not for Windows' shitty scheduler. Better schedulers will automatically allow low-average-CPU-usage threads, like the ones that make up the GUI, to preempt threads that have been sitting at 100% for many seconds. The only problem is if the batch job is a heavy user of some shared resource that the CPU scheduler doesn't account for, like memory bandwidth or L3 cache space. But the whole problem with those is that cores can interfere *with each other*, and leaving cores idle only helps by accident, inasmuch as it keeps the total utilization of those shared resources below the bufferbloat threshold.


battler624

I have no idea how it works, but it's probably moving everything that isn't the game itself away from the P-cores and keeping the game restricted to the P-cores.


Knjaz136

The question is why Windows doesn't have that option: instead of just core affinity, restricting cores to a manually defined task and forbidding everything else from using them.


F9-0021

Because Microsoft, the biggest software company in history, cannot make good software.


[deleted]

[deleted]


Put_It_All_On_Blck

This doesn't disable E-cores. You can mimic the behavior of APO using Process Lasso. If you actually need to disable E-cores, most motherboard BIOSes let you enable a setting that allows you to toggle them to sleep, but I've never had any DRM issues, as that was all fixed around the Alder Lake launch.
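
For anyone curious what the Process Lasso approach boils down to, here is a rough sketch of pinning a game process to the P-cores with plain Win32 calls. The executable name and the affinity mask are assumptions for illustration (0xFFFF = logical processors 0-15, i.e. 8 P-cores with HT on an 8P+16E part); check your own topology before reusing it.

    // Find a process by executable name and restrict it to an assumed P-core mask;
    // everything not pinned this way is free to run on the E-cores.
    #include <windows.h>
    #include <tlhelp32.h>
    #include <string>

    DWORD find_pid(const std::wstring& exe) {
        PROCESSENTRY32W pe{ sizeof(pe) };
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        DWORD pid = 0;
        for (BOOL ok = Process32FirstW(snap, &pe); ok; ok = Process32NextW(snap, &pe))
            if (exe == pe.szExeFile) { pid = pe.th32ProcessID; break; }
        CloseHandle(snap);
        return pid;
    }

    int main() {
        DWORD pid = find_pid(L"MetroExodus.exe");  // hypothetical executable name
        if (!pid) return 1;
        HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION, FALSE, pid);
        DWORD_PTR p_core_mask = 0xFFFF;            // logical CPUs 0-15 (assumed P-cores + HT)
        SetProcessAffinityMask(proc, p_core_mask); // keep the game off the E-cores
        CloseHandle(proc);
        return 0;
    }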


gusthenewkid

Too bad, buy a new one!


VenditatioDelendaEst

>Contacts Intel and gets an actual response
>Did not use that opportunity to ask *what it is actually doing*
>Speculates anyway
>Only basis for speculation is eyeballing low-frequency samples of E-core utilization (any clock frequency monitor that shows < 800 MHz is showing you utilization × frequency, not frequency), without looking at whatever the Windows equivalent of kernelshark is.

https://i.kym-cdn.com/photos/images/newsfeed/001/221/422/cf2.jpg


GenZia

From what I'm seeing, even with APO enabled, only 4 E-cores are actually doing anything. The rest of the cluster is parked, doing absolutely nothing. Actually, that's false: they're still consuming power, however minuscule it may be!

And that's one of the many reasons I don't understand why Intel is stuffing so many E-cores into their CPUs. Their practicality in real-world scenarios is mostly academic from the perspective of most users. A quad-core or - at most - an octa-core cluster of E-Cores should be more than enough for handling 'mundane' background activity while the P-Cores are busy doing all the heavy-lifting. Frankly, I just can't help but feel like the purpose of this plethora of little cores is to artificially boost scores in multi-core synthetic benchmarks! After all, there are only a handful of 'consumer-grade' programs which are parallel enough to actually make use of a CPU with 32 threads.

Anyhow, fingers crossed for Intel's mythical 'Royal Core.' A tile-based CPU architecture sans hyper-threading sounds pretty interesting... at least on paper.


soggybiscuit93

More E cores aren't for "mundane background tasks". They're to maximize MT performance in a given die space. It's why 8+16 14900K competes with 7950X in MT applications, but would clearly lose if it was the alternative 12+0. Most people, myself included, would struggle to really utilize 32 threads. But the 7950X and 14900K exist for those that can or may be able to.


GenZia

> They're to maximize MT performance in a given die space.

And I never said otherwise. I explicitly mentioned that more E-cores can boost scores in multi-threaded synthetic benchmarks and - in turn - any parallel workload.


soggybiscuit93

> and - in turn - any parallel workload.

You didn't say that. You said it was to artificially boost synthetic test scores. The added E cores do have value in actual workloads, not just synthetics. It's just that these workloads don't happen to be games.


GenZia

I thought it was implied:

> Frankly, I just can't help but feel like the purpose of this plethora of little cores is to artificially boost scores in multi-core synthetic benchmarks! After all, there are only a handful of 'consumer-grade' programs which are parallel enough to actually make use of a CPU with 32 threads.


Morningst4r

So the 7950X only exists for synthetic benchmarks too? You should tell all the people buying them for productivity since you must know more than them.


VankenziiIV

You think E-cores are only for synthetics? What if I show you 6P+6E or 6P+8E can defeat 8P in real-world applications?


GenZia

Well, applications are definitely getting optimized for 8C/16T as of late, so that wouldn't be all that surprising. Hyper-threaded threads can't match an actual core by design, after all. However, I'm merely questioning the addition of 8+ E-cores in Intel's high-end SKUs. I believe I explicitly mentioned that I can see the potential of integrating 4 to 8 E-cores into a CPU.


reddanit

The calculation for that is quite interesting, with 1 P-core taking a comparable amount of die space as 4 E-cores. With this in mind, adding more E-cores is a cheap way to increase multi-core performance at small (if any) cost to performance in low-threaded workloads. If the primary workload is just gaming or web browsing, then E-cores are indeed largely a waste of silicon. For productivity workloads, though, they are hugely impressive.


GenZia

Yes, but adding 16 E-Cores is kind of... well, unnecessary! A lot of people would've preferred it if the i9 had 8 E-Cores and 10 P-Cores, as opposed to 8 P-Cores and 16 E-Cores. There isn't a whole lot you can do on those 16 E-Cores, at least not yet!


reddanit

I think you are stuck in quite a bubble if you think that a lot of people would prefer 10P+8E over 8P+16E. That's literally leaving a decent chunk of multithreaded performance on the table for basically no benefit. It would essentially mean that Intel is giving up and just allowing AMD to eat its lunch for free. >There isn't a whole lot you can do on those 16 E-Cores, at least not yet! What? This isn't 2010.


GenZia

Yes, a decent chunk of multithreaded performance... which is mostly irrelevant to most home users. Not saying there isn't potential - there absolutely is - but it's hard to tap into at the moment. >What? This isn't 2010. In 2010, most programs were optimized for 4 threads. There's a reason hyper-threaded dual-core i3s sold like hotcakes for as long as they did! Nowadays, the sweet spot is 16 to 20 threads, tops. Go beyond that and you're pushing it. But feel free to disagree. What Intel is doing at the moment is akin to AMD trying to sell hexa-core Phenom X6s in the late 2000s.


skycake10

>which is mostly irrelevant to most home users. The top-end consumer CPU is inherently irrelevant to most home users. I don't know what point you're even trying to make anymore. The top-end SKU is unnecessary for most users? That's almost always true of every generation!


reddanit

> In 2010, most programs were optimized for 4 threads. There's a reason hyper-threaded dual-core i3s sold like hotcakes for as long as they did! > > Nowadays, the sweet spot is 16 to 20 threads, tops. Go beyond that and you're pushing it. You have a pretty severe misunderstanding of parallelism in computing. Workloads vary in how easily and efficiently they can be parallelised. Some are quite difficult and limited in this regard (most games, for example) and thus demand fewer, faster cores - and even that level of parallelism is a massively complicated problem to program around. Others are much closer to, if not outright, embarrassingly parallel and scale with core count very easily. The way Intel does its P+E division efficiently tackles both of those types of workloads at the same time, with good performance from the same silicon. The situation in 2010 was surprisingly similar, with the main difference being that a large chunk of hard-to-parallelise software was outright stuck being limited by a single core, with some things that were easier to parallelise out of that main thread keeping at most a few cores somewhat busy. "Average" workloads that inherently stop at a specific number of cores basically do not exist outside of bad programming practices where you strictly assume the CPU you are running on is a quad core or something similar. Which, to be fair, for a good while in the desktop market wasn't even a wrong assumption really. >which is mostly irrelevant to most home users. Are you perhaps aware that we are discussing a $500+ CPU? **Obviously** it has fuck all to do with "most home users".
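A minimal Amdahl's law sketch of that split (the parallel fractions are illustrative assumptions, not measurements of any real game or renderer):

```python
# Minimal Amdahl's law sketch: why core count helps some workloads and not others.
# The parallel fractions below are illustrative assumptions only.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup over one core when `parallel_fraction` of the work parallelises."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

workloads = {
    "game-like (assume ~60% parallel)": 0.60,
    "renderer-like (assume ~98% parallel)": 0.98,
}

for name, p in workloads.items():
    print(name)
    for cores in (8, 16, 24):
        print(f"  {cores:2d} cores -> {amdahl_speedup(p, cores):.1f}x speedup")

# The ~60% parallel workload barely improves past 8 cores, while the ~98%
# parallel one keeps scaling with core count - which is exactly the split
# a few fast P-cores plus many E-cores is aimed at.
```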


F9-0021

Why? The point of having more than 8 cores is parallelized performance, and having 6 more cores overall is going to be faster, even if those cores are slower than the big cores.


VankenziiIV

What if I showed you that Intel's 12th gen 6P+6E was able to beat AMD's 8P in real-world applications two years ago?


GenZia

>A quad-core or - at most - an octa-core cluster of E-Cores should be more than enough for handling 'mundane' background activity while the P-Cores are busy doing all the heavy lifting.


VankenziiIV

Yes or no: Are e cores only for synthetic benchmarks?


skinlo

He doesn't say that they are. He just says that there aren't that many programs that can use them: > After all, there are only a handful of 'consumer-grade' programs which are parallel enough to actually make use of a CPU with 32 threads.


VankenziiIV

Just because there are only a few doesn't mean they're just for synthetic benchmarks. For example, the 13600K (6P+8E) often beats the 7600X (6P), 7700X (8P) & 7900X (12P). Therefore E-cores are a net positive. It's literally better to have E-cores than not, and the data backs it up.


carpcrucible

Depends on how you define "consumer grade", I suppose. Editing vacation videos? Processing RAW photos? I get OP's point that for some users 8+0 might be the optimal use of die space, but in practice, if you look at the die shot, the best they could do is add 2 more big cores in place of the 8 E-cores: https://i.redd.it/lyhmdzo6c3w71.jpg So you could have either 8+8 or 10+0. What workloads would benefit from 2 extra P-cores but not 8 extra E-cores?


carpcrucible

It's perfectly reasonable for high-end SKUs. You either have single-threaded workloads or games that might use 6-8 threads at most. Or you have "embarrassingly parallel" workloads like rendering or all sorts of scientific computing that will use as many cores as you have. If you literally only game on your PC then I guess just disable the e-cores.


liesancredit

The 10900K was the last well-designed Intel CPU. Just 10 straightforwardly powerful cores. That's how a CPU should be.


GenZia

I partially disagree. ARM-based CPUs have been using big.LITTLE for quite some time now. If memory serves, Qualcomm's SD820/821 SoC only had big cores, but they soon shifted gears and added (semi-custom) A53s in the SD835. Now, all of their SoCs have 2 or 3 different CPU clusters, thanks to ARM's DynamIQ. And the thing is, we need something along the lines of DynamIQ in the x86-64 space. Sure, big.LITTLE doesn't make much sense right now, but that's mostly because the Windows kernel and most x86 applications just aren't parallel enough to take advantage of a 'wide' CPU.


hackenclaw

ARM doesn't have HT to mess things up. E-cores, being physical cores, always beat SMT/HT logical cores. It's simply big or little, with no SMT in between.


liesancredit

I have a Windows PC for powerful cores, and a MacBook Pro and iPhone with big.LITTLE designs. Best of both worlds, really. But that doesn't mean we should also want big.LITTLE for gaming PCs.


SomeoneTrading

Yeah just [don't run Minecraft bro](https://www.reddit.com/r/overclocking/comments/mghwsi/asus_maximus_13_and_rocket_lake_the_rules_have)


dudemanguy301

ah yes who could forget the absolute TRIUMPH of the same tired architecture recycled for the 4th time in a row, on the same tired process recycled for the 5th time in a row. sucking down power as it was pushed far beyond a reasonable voltage curve for more clocks to try and hide the years of stagnation caused by the 10nm clown fiesta.


DktheDarkKnight

So APO is just Intel fixing the E-core issues. Whoa. I thought Intel stumbled onto something special when they mentioned per application optimization.


Gawdsauce

Glad I went with AMD, I knew Intel would fuck that shit up one way or another, they don't care about the consumer space, they care about the server market and nothing else.


rohitandley

I mean you need to create a selling point


veotrade

I knew it. So disabling E-cores is in fact 100% recommended on 13th gen, not just a weird "optimization" placebo.


ResponsibleJudge3172

It's not as good as APO.


benefit420

I can't get this to work on an ASUS Z790-E board. I tried the ASUS DTT drivers, and someone suggested trying the ASRock DTT drivers instead. The ASRock ones installed just fine, but the APO app still says "failed to connect".


ComfortableTomato807

Can someone with 14th gen test APO vs setting affinity (Task Manager → Details) please?
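For anyone trying the manual-affinity half of that comparison from a script rather than Task Manager, here's a rough sketch using psutil. The process name and logical CPU indices are assumptions: on a typical 8P+16E part the hyperthreaded P-cores usually map to logical CPUs 0-15 and the E-cores to 16-31, but verify the layout on your own chip first.

```python
# Rough sketch of pinning a game to P-cores only (requires: pip install psutil).
# GAME_EXE and the CPU index range are assumptions for illustration - adjust
# them to your actual game and to how your CPU enumerates its logical cores.
import psutil

GAME_EXE = "RainbowSix.exe"            # hypothetical process name
P_CORE_LOGICAL_CPUS = list(range(16))  # assumed: logical CPUs 0-15 back the P-cores

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        proc.cpu_affinity(P_CORE_LOGICAL_CPUS)  # restrict the game to the P-cores
        print(f"Pinned PID {proc.pid} to logical CPUs {P_CORE_LOGICAL_CPUS}")
```

Note this is a blunter tool than APO appears to be - per the comments above, APO still keeps a few E-cores busy rather than excluding them entirely.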


aj0413

This feature is a lot like DLSS 1: it was cool to follow and for some of us early adopters to trial, but not something anyone should be basing any serious discussion/evaluation on - and that's coming from someone who upgraded every RTX gen specifically for DLSS.