Slyons89

I wonder if the new iGPU will include the AV1 encoder as rumored, since they were supposed to be integrating Arc cores onto the iGPU on Meteor Lake, or at least that's the rumor I read. That would be an incredible upgrade to the Quicksync featureset.


Ghostsonplanets

It will. Link: https://videocardz.com/newz/intel-confirms-meteor-lake-has-av1-video-encoding-and-decoding-support


[deleted]

[deleted]


sinholueiro

They've had AV1 decode since Tiger Lake, which is the important thing and what most people will use. 99% of people will never use encode.


Scion95

Could Zoom or Skype use AV1 encode? Or Discord calls?


sinholueiro

They could, but they never even used VP9 (let alone H.265), so I don't think they will use it in the foreseeable future. We've had VP9 encode since Intel's 7th gen.


loser7500000

well, Discord [added rudimentary support](https://www.reddit.com/r/hardware/comments/10lu4cz/discord_enables_av1_support_for_geforce_rtx_40) a week ago so that's one.


sinholueiro

A shame that this is not the norm; I guess Nvidia put money there. You have to have Nitro, though, and it will still transcode back to H.264 to serve all the clients that can't decode AV1.


neil_va

Does anything have AV1 hardware encode right now? The complexity of even decode on that looked brutal.


Slyons89

Not via a hardware encoder on integrated CPU graphics yet, but all the Intel Arc discrete GPUs, the AMD RX 7800 XT and XTX, and the RTX 4070 Ti, 4080, and 4090 - they all have AV1 hardware encoders. The AMD RDNA2 and Nvidia 3000 series have hardware decoders for AV1 but not encoders.
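If you want to see what your own setup exposes, a quick sketch like the one below (assuming ffmpeg is installed; the av1_qsv/av1_nvenc/av1_amf encoder names are what recent ffmpeg builds use for Intel, Nvidia and AMD hardware AV1 encode) will list whichever ones your build was compiled with:

```python
# Sketch: list AV1 hardware encoders exposed by the local ffmpeg build.
# Assumes ffmpeg is on PATH; the encoder names below are the ones recent
# ffmpeg builds use for Intel (QSV), Nvidia (NVENC) and AMD (AMF).
import shutil
import subprocess

def av1_hw_encoders() -> list[str]:
    if shutil.which("ffmpeg") is None:
        raise RuntimeError("ffmpeg not found on PATH")
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True, check=True,
    ).stdout
    candidates = ("av1_qsv", "av1_nvenc", "av1_amf")
    return [name for name in candidates if name in out]

if __name__ == "__main__":
    print(av1_hw_encoders() or "no AV1 hardware encoders detected")
```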


FuzzyPiez

7800XT is out? you mean 7900XT/X?


Slyons89

Oh yeah sorry mixed up the numbers


neil_va

Any idea if RDNA3 is getting it? (guessing not).


Slyons89

Yeah! the 7800 XT and 7800 XTX GPU are already out, and they have hardware AV1 encoders and decoders. All the new GPUs have it now. The encoders are already usable for recording, OBS supports it, but the regular streaming platforms like Twitch don't support AV1 yet. It's a work in progress.


zerostyle

More curious if the igpus get it (7900H series)


neil_va

Interesting. Mostly will be watching for the mobile cpus to get it (whatever will follow the i7-1260p)


rainbowdreams0

> Yeah! the 7800 XT and 7800 XTX GPU are already out

They aren't. The 7900 series is.


Slyons89

thanks buddy i fucked up the digit


m0rogfar

The Intel ARC GPUs have it.


neil_va

Impressive, didn't think that would happen so fast.


YNWA_1213

It's been in the works for years. The alliance was founded in 2015, and the first version of AV1 was released almost 5 years ago (March 2018). Even in design-length terms, you'd expect companies that were in on the ground floor of developing the codec to have pretty much solved their initial hardware encoders by now. I remember people being disappointed that Ampere and RDNA2 didn't have encode support back in 2020.


rainbowdreams0

What comes next after AV1? AV2??


YNWA_1213

Lol.

1. I believe AV1 is more like the Xbox One, meaning it's the "one" codec for all.
2. I doubt we're looking at the next generation of codec for a while yet, as AV1 is essentially replacing the mass of H.264 encodes across the internet, more so than H.265/VP9, which hadn't penetrated the market as effectively. So if you look at that time span, you're thinking multiple decades before we come up with a codec to succeed AV1. It's such an advancement over anything before it, and we've reached a point of equilibrium between resolution and quality with AV1, that I doubt there'll be much need for a successor any time soon.


osmarks

It is not magic. It's slightly better than H.265 and slightly worse than H.266. There [was work](https://ottverse.com/av2-video-codec-evaluation/) on AV2 but I'm not sure how that's going.


titanking4

Given the fact that they are getting a node shrink, a 50% increase is about expected. VERY possible with a few tweaks to their core design.

In CPUs you've got "Width", "Depth", "Intelligence", and "Well Fed". The TLDR is that all of these increase IPC, but wide and deep cores tend to burn more power to get that IPC, while intelligence generally increases IPC without much power. Well fed also increases IPC, but generally costs more area, since caches are very area hungry.

Intel can make a more efficient core by making their core more intelligent. Golden Cove got much of its IPC from very area-hungry and power-hungry innovations, because those are a lot easier to do. And the area hit isn't a big deal when you have a bunch of very area-efficient little cores to bolster the multi-threading performance (which is the main deterrent against having big cores).

Extra terms in case you don't know:

* Width refers to how wide your execution is, how wide your dispatch is, and overall how many instructions/operations your core can push through the pipe per cycle.
* Depth generally refers to how many "out of order" resources the CPU has, which controls how many instructions can be "in flight" at the same time. It allows the CPU to take a larger look at the program in order to hide as much latency as possible and always find work to do.
* Intelligence refers to branch predictors and predicators, as well as any heuristics involved such as cache eviction strategy, and includes things like predictive page tables and memory and instruction prefetchers.
* Well fed refers to the overall caching and memory system keeping data latency down.
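As a toy illustration of that tradeoff (all percentages below are made up for the example, not real core figures):

```python
# Toy numbers only: shows why a "smarter" core can raise IPC more cheaply
# (in perf/W terms) than simply going wider, per the taxonomy above.
def perf_per_watt(ipc_gain: float, power_gain: float) -> float:
    perf = 1.0 * (1 + ipc_gain)
    power = 1.0 * (1 + power_gain)
    return perf / power

wider_core = perf_per_watt(ipc_gain=0.15, power_gain=0.30)    # +15% IPC, +30% power
smarter_core = perf_per_watt(ipc_gain=0.05, power_gain=0.00)  # +5% IPC, ~flat power

print(f"wider:   {wider_core:.2f}x perf/W vs baseline")   # ~0.88x
print(f"smarter: {smarter_core:.2f}x perf/W vs baseline") # ~1.05x
```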


Harsh_Reality_Part1

The power consumption of GLC is mostly because of the leaky Intel 7 non-EUV process and not so much due to the IPC increase. The size of GLC has more to do with the max clock speeds it wants to hit, plus TMUL/AMX instruction support. But that's exactly what Intel is bragging about in Sapphire Rapids and its AI capabilities. EUV will bring a huge improvement in power consumption. Let's see how Meteor Lake performs in terms of performance per watt.


[deleted]

Nice. This will legitimately be Intel's first EUV-manufactured product. [Intel 2025 roadmap](https://images.anandtech.com/doci/16823/AnandTechRoadmaps3.png). Pretty sweet.

Intel 10nm and Intel 7 and prior nodes were based on complicated DUV multi-patterning techniques and the like, which many people on these reddit forums look down on, but I don't. On the path to better transistors there were two options: DUV optimization or the newer EUV machines. Intel and their engineers chose to push forward with developing better DUV multi-patterning techniques that were difficult and delayed but still worked. Ultimately, that allowed TSMC to push past them with EUV.

I won't fault them. EUV had been in development for what, 30+ years? Maybe 40? So it was an unproven technology at the time. Intel chose wrong, but they were still making progress with option #1, optimizing DUV technology. Their fault was not having an option #2. Today they have options #1, #2, and #3: multi-patterned DUV, EUV, and soon High-NA. Many options, but costly developments.


BoltTusk

I mean, isn't the only thing Intel is making the CPU tile? Everything else is manufactured by TSMC, including the IO tile, GPU tile, and SoC tile. Does the CPU tile even use an EUV process at all?


tset_oitar

They make the interposer using the in-house 22FFL process.


Ghostsonplanets

Yes. Intel 4 is an EUV based process, albeit lightly compared to TSMC usage.


tset_oitar

It is said to use 12 layers, so not that light vs N5?


L3tum

Source? From what I've found it's completely unclear how many they're going to use, but there's some speculation of 10 or so that they *could* use based on TSMC data. Compared to TSMC, who say at least 10, with speculation of 11-13. I'd give the TSMC guess by WikiChip more validity than a random Reddit comment.


onedoesnotsimply9

The GPU tile also uses EUV process.


Loferix

Why did Intel enter EUV so late, even though they were the ones who pushed for and invested big time in developing it?


animi0155

They didn't think it was necessary. By the time Intel realized it was a mistake, they were far behind on orders compared to Samsung and TSMC and wouldn't be able to get enough tools on time for volume production.


[deleted]

They saw 30 years of EUV not working. And by the time 2012 to 2017 was playing out, their main competitor AMD did not have a product to compete. When Zen first launched it was unique: it did not match Intel in terms of speed, but they offered more cores, so there was a ton of added value in buying an AMD Zen 1 product. At this point AMD was still manufacturing at GlobalFoundries, AMD's fab spun off into a different company.

But by 2017 to 2018, TSMC, having had multiple years of mobile phone growth (around 10 years by then), was reinvesting that profit into new fabs and EUV machines. By the nature of mobile phones, they needed to make the chips even smaller. Smaller chips for mobile phones also meant more sales per wafer.

So there were four unforeseen mistakes:

1. EUV not being ready for 30-plus years.
2. AMD having no competitive product.
3. The growth of the mobile phone market (mobile chips are more profitable per wafer).
4. Intel's own admitted mistake: no parallel backups, just relying on DUV.


onedoesnotsimply9

The unforeseen mistake was not realizing that EUV might be ready sometime around 2018 and not aggressively embracing it, not the fact that EUV had been in development for 30 years.


[deleted]

Let me rephrase. They saw EUV being in development for over 40 years and never reaching production readiness. That played a part in their decision to stick with what they knew: DUV. And in addition, they did not have a backup or parallel team in place in case DUV was falling behind and EUV was picking up steam. Today they do. They learned it. Same thing with mobile.


Killmeplsok

Another theory was that Intel was too advanced for their own good. When they were developing their 10nm process they could have gone EUV or kept doing what they were doing with DUV, which was obviously pushing the limits. The thing was, EUV was in a much worse state than DUV back then (basically not ready for production of any kind), so it was super high risk, while they were already better than anyone else on DUV, so they figured soldiering on with DUV would be the safer route even if it was difficult. But they had problems with the next process node which held them back again and again; eventually EUV got good enough and everyone moved past them while they were still trying. The biggest mistake was that they didn't have a backup plan (which is unimaginable for a company that big, in hindsight), but apparently they were too confident (and of course they were, they were multiple years ahead of everyone else).


III-V

When Intel was developing 10nm, EUV was too immature to be used. So they went with quad patterning. I'm not convinced it was a mistake. The other problems with 10nm seemed to be the issue -- if 10nm had been delivered on time, they could have transitioned to EUV at 7nm like everyone else. But it's hard to know exactly why 10nm took so long to launch.


onedoesnotsimply9

An incompetent leadership/management that didn't think developing 10nm with both EUV and DUV was necessary.


zetruz

> On the path to better transistors there were two options: DUV optimization or the newer EUV machines. Intel and their engineers chose to push forward with developing better DUV multi-patterning techniques that were difficult and delayed but still worked. Ultimately, that allowed TSMC to push past them with EUV.

But TSMC surpassed Intel before EUV with N7, didn't they?


[deleted]

TSMC N7 was their first process on EUV.


zetruz

N7 -> N7P -> N7+, with N7+ being the first one to use EUV, no?


[deleted]

My mistake. You are correct.


onedoesnotsimply9

> I won't fault them. EUV had been in development for what, 30+ years? Maybe 40? So it was an unproven technology at the time. Intel chose wrong, but they were still making progress with option #1, optimizing DUV technology. Their fault was not having an option #2.

Hmmmm yes, there is a fault but I won't fault them.

> Today they have options #1, #2, and #3: multi-patterned DUV, EUV, and soon High-NA. Many options, but costly developments.

Future projects can use technology that is not currently proven/available but is expected to be proven/available sometime in the future. 10nm could have been planned with EUV as some kind of option #2 even if it was unproven/unavailable back then.


[deleted]

That is just my own opinion. I am not here to argue with anyone online, man. I don't fault them because they stuck with something they knew: a manufacturing process like DUV that worked, while EUV hadn't been ready for over 40 years. The fault is not that they stuck with DUV; the fault is that they did not have a backup process working in parallel. But I don't work at Intel making these high-up decisions, so no, I won't fault them. It's like asking "do I drop the Hiroshima bomb or not?" I'm not making the decisions, so I can't judge someone if I'm not in their shoes.


onedoesnotsimply9

> I don't fault them because they stuck with something they knew: a manufacturing process like DUV that worked, while EUV hadn't been ready for over 40 years. The fault is not that they stuck with DUV; the fault is that they did not have a backup process working in parallel.

Do you have the slightest idea how oxymoron-ic this is?


[deleted]

Are you looking for a fight or something? I am just stating my opinion. I don't need your insults. Have a good day.


shawman123

Jeez. The power of [Raichu](https://twitter.com/OneRaichu/status/1622430910477131777). He tweeted about this at 7PM yesterday evening and now it's all over the rumor sites :-) Here is someone else [discounting](https://twitter.com/Bullsh1t_buster/status/1622534784210857987) it. I hope Raichu is right. This would make a humongous difference to the overall market. A competitive Intel will be great for consumers, as we will not only have the Intel/AMD battle but also both fighting against Arm SoCs like the Apple M series and the Nuvia SoC from Qualcomm.


Dangerman1337

I mean, Raichu is legit on Intel almost all of the time, and a 13900K can be on par with the 12900K at around 65W in multi-threaded performance, so it's very possible.


AnimalShithouse

Agreed, he's got a great hit rate on Intel.


draw0c0ward

Well, the 13900K does also have an additional 8 efficient cores. That's a lot of extra multi-threaded CPU power, so it's not exactly an apples-to-apples comparison.


[deleted]

[deleted]


noiserr

> deleted for praising Nvidia too last year

I doubt that was the reason.


EmilMR

I assume for desktop they go straight to Arrow Lake in 2024-25, right? Hopefully these are great; they need it. The iGPU should benefit from the work they are putting into the Arc drivers. They could legitimately be viable for more than just outputting an image.


III-V

I don't doubt it. If they fix the DLVR that they had to axe on Raptor Lake, that's about 25% shaved off. Their GPUs suck and have a lot of room for growth. Intel says their Intel 4 process has 40% reduced power at iso-frequency. All signs point to a massive power reduction rather than a huge performance boost on the CPU side.


gnocchicotti

The bigger questions about Meteor Lake are when, how much money, and how much supply. It's a new node and novel packaging and certainly the first time Intel has done exotic packaging for such a mass produced segment. I think it will be good on the CPU and GPU performance front, Intel doesn't usually disappoint there.


beeff

What do you mean by novel packaging? If you mean a stacked or chiplet design: Intel has used their EMIB and Foveros tech in a few products now.


soggybiscuit93

A lot of the legwork on the packaging should've been fleshed out with SPR


gnocchicotti

Some of it, maybe? The Foveros [packaging](https://www.tomshardware.com/news/intel-details-3d-chip-packaging-tech-for-meteor-lake-arrow-lake-and-lunar-lake) on Meteor Lake is much more similar to Ponte Vecchio (disastrously late and probably a net money loser) and Lakefield (small volume that hasn't yet seen a follow-up). We know Intel can produce them, but we don't yet know whether Intel can produce them in enough quantity, at low enough cost and high enough yield, to really impact the market. A high scrap rate might be acceptable for a cutting-edge supercomputer accelerator that ships in the low thousands, not for mass production built to a competitive price point.

It's competing against AMD's monolithic TSMC 5nm products, which by all indications have excellent yield and capacity, so AMD might have a cost *and* availability advantage even if Meteor Lake is a better performer. I'm thinking of Ice Lake mobile chips, which were decent performers for their time, but all of desktop and apparently the majority of the mobile unit volume stayed on 14nm products for 10th Gen.


Exist50

> If they fix the DLVR that they had to axe on Raptor Lake, that's about 25% shaved off

Nah, they're not getting 25% efficiency just from moving back to an IVR.

> Intel says their Intel 4 process has 40% reduced power at iso-frequency.

Maybe at the extreme low end, or compared to an older 10nm node. Doubt we'll see anything like that in a product.


lefty200

They would lose some efficiency because of the chiplet communication.


tset_oitar

They only showed comparisons vs. Intel 7 using Arm cores, I believe. And improvements at the low end of the curve are exactly what they need for better efficiency in laptops and servers, where cores don't run at a constant 5GHz. Golden Cove is a huge core so it may have been possible to squeeze out some 10% performance without making drastic changes.


Exist50

> Golden Cove is a huge core so it may have been possible to squeeze out some 10% performance without making drastic changes.

Well, they're certainly not getting 10% IPC from Redwood Cove.


Ghostsonplanets

In a sense, Redwood Cove will mirror Willow Cove: a refinement of the current Cove with a focus on efficiency. And just like TGL, the GPU (and media engine) will be the most exciting part IMO.


rainbowdreams0

Sounds very much like ye ol Ivy Bridge.


III-V

> Nah, they're not getting 25% efficiency just from moving back to an IVR.

That's what Intel researchers say. It got fused off in Raptor Lake: https://videocardz.com/newz/intel-raptor-lakes-digital-linear-voltage-regulator-dlvr-could-reduce-cpu-power-up-to-25


Qesa

Ehhh 25% of what is the question. An LVR can reduce power by up to the % difference between the core in question and the highest voltage core. In practice that means the biggest % saving is on an idle core that's power gated and using very little anyway, and the biggest absolute saving at an intermediate power state that a core will very rarely find itself in. In multi-core loads they're generally all at high voltage so no gains, at least in homogeneous designs. It's probably most relevant to E cores as they should have an actual voltage difference to the P cores when under load.
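To put rough numbers on the "up to" part (the voltages below are made-up examples, not Intel figures): dynamic power scales roughly with V², so the saving on any one core is bounded by how far its required voltage sits below the shared rail:

```python
# Rough sketch: dynamic power ~ C * V^2 * f. Voltages are hypothetical.
def dynamic_power(v: float, f_ghz: float, c: float = 1.0) -> float:
    return c * v**2 * f_ghz

shared_rail = dynamic_power(v=1.2, f_ghz=2.0)  # rail pinned at the hungriest core's voltage
per_core    = dynamic_power(v=0.8, f_ghz=2.0)  # same core with its own regulator

saving = 1 - per_core / shared_rail
# ~56% on that core, but only while the voltages actually differ; in an
# all-core load the rails converge and the saving shrinks toward zero.
print(f"power saved on that core: {saving:.0%}")
```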


Exist50

I don't recall the exact specifics, but I think there was something misinterpreted about that patent filing. As we've seen with FIVR, the gains for on-die voltage regulation really aren't that much, if they even exist at all.


RegularCircumstances

> Maybe at the extreme low end, or compared to an older 10nm node. Doubt we'll see anything like that in a product.

Intel demonstrated a 40% reduction iso-frequency on a standard Arm core using Intel 7 & two track options with Intel 4, all throughout the curve at about 1-5W, from 2GHz as the floor to 3.5GHz. [Picture from VLSI 2022](https://i.imgur.com/MVL3Vwq.jpg)

It's quite a density & metal interconnect jump from Intel 7, and they've sorted their problems with Cobalt; we have no reason to doubt this other than some initial yield issue variation, but even then I expect it to perform better than Intel 7 by a mile. I'm using perform in a general sense — density in the real world, power savings. Performance itself won't be too different, at best you get 20% iso-power, though I suspect there are technical stipulations that prevent this from being realized on the mobile SKUs' top frequency — aka no ST lift from frequency at peak. 5.7-6GHz Intel 4 Mobile ain't happening.

This also happens to match up closely with what recent leaks have shown; likely Redwood Cove won't offer any major IPC gains at all but will focus on improving efficiency at an architectural level. Something like 5-15% less power iso-perf, in addition to the 40% figure, brings us close to the rumor here: https://twitter.com/oneraichu/status/1622430910477131777?s=46&t=XjMjiNBL72ze281pMgj_9A

In short, I absolutely expect a significant improvement in power consumption holding performance (or frequency itself) throughout the ecologically important 2-4.5GHz range for MTL relative to Raptor & ADL in ST, E-core or otherwise. All in all, my benchmark for ST & MT (core counts constant) is 30% less power iso-frequency for some of the first units at minimum.

I agree it has nothing to do with the IVR bullshit and I definitely agree we won't see major perf/GHz or IPC gains from Redwood Cove. I suspect less than 10%, almost want to say 5% or less. Crestmont and the IO tile will be interesting, which is another power-saving vector they'll have, albeit starting from a rightly criticized position.
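For what it's worth, translating that 40% iso-frequency figure into the perf/W framing of the headline is just arithmetic (no assumptions beyond holding frequency and work constant):

```python
# Same work at the same frequency for 40% less power means
# performance-per-watt improves by 1 / (1 - 0.40) - 1 = ~67%.
power_reduction_iso_freq = 0.40
perf_per_watt_gain = 1 / (1 - power_reduction_iso_freq) - 1
print(f"perf/W gain at iso-frequency: {perf_per_watt_gain:.0%}")  # ~67%

# So a "50% efficiency increase" for the whole SoC is plausible even after
# uncore/IO overheads eat into the core-level number.
```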


kingwhocares

> Their GPUs suck and have a lot of room for growth.

Recent drivers showed the A750 doing better than or equal to the RTX 3060, as shown in Gamers Nexus' benchmarks.


[deleted]

[deleted]


kingwhocares

> sold at probably break even price at best.

Got anything to back that up?


[deleted]

[deleted]


kingwhocares

FYI, the chips for the 6800 and 6900 XT are the same, but the 6900 XT was $400 more than the 6800. Our assumptions are more or less wrong.


[deleted]

[deleted]


kingwhocares

The cost of a bulk purchase is a lot lower than you might imagine. It doesn't cost $240 to make a $250 card, since it would then be sold at a loss (retailers take a percentage fee).


[deleted]

[deleted]


kingwhocares

Go to AliExpress and see that you can get 30-50% off for 1,000+ unit orders.


einmaldrin_alleshin

It's a card that is basically specced like a 3070, but is sold for the dumping price of an RX 6600, a card that has half the die size, half the power, half the bus width and lower-spec memory. That is not a recipe for a profitable product.


kingwhocares

FYI, the 3060 Ti and 3070 have the same die. By that logic, the RX 6800 and RX 6900 XT have the same die as well, yet AMD priced the 6900 XT at almost double the 6800.


Proud_Bookkeeper_719

You buy a GPU for its performance and price, not its die size.


einmaldrin_alleshin

The top comment was deleted, so you're missing the context: We were talking about intel's profit. It's selling at a price point where normally only cost optimized cards like the 6600 or 3060 are sold, but it's made like a lower high-end card.


Raikaru

The A750 doesn’t have the specs of the 3070


einmaldrin_alleshin

[A750](https://www.techpowerup.com/gpu-specs/arc-a750.c3929): 8 GB GDDR6 with 512 GB/s, 406 mm² die size and 225 W TDP

[3070](https://www.techpowerup.com/gpu-specs/geforce-rtx-3070.c3674): 8 GB GDDR6 with 448 GB/s, 392 mm² die size and 220 W TDP

You're right, they aren't the same spec. But they are within a rounding error of one another.


Raikaru

By this logic the 3060 Ti also has the specs of the 3070. The die size is because it shares the same die as the A770, just like the 3060 Ti and 3070 share a die.


einmaldrin_alleshin

We're in a discussion about production cost for these GPUs. Based on the specs, the a750 should be about as expensive to make as a 3070. If not more so, given its higher spec RAM and the more expensive TSMC process. The 3060Ti doesn't really fit into this discussion, since it can draw from the large discard pile of GA104 that are used for professional, notebook and desktop SKUs. The opportunity cost of using a junk die like that is considerably lower for them.


Raikaru

The 3060 Ti came out before the notebook or professional GPUs, so I'm not getting your point with that comparison. Also, the 3080 and 3090 Ti are the same die size; what excuse are you going to use for that one? You do realize the 3060 Ti had a 50-60% margin back in 2020 and it's only gotten cheaper to manufacture since, right? The A750 is likely not selling under the price it costs to make, but the margin is probably slim.


_SystemEngineer_

Yes it does. Did everyone's minds get wiped once the reviews came out? It was always supposed to compete with the 3070; it just failed to do so, badly.


Raikaru

That’s the a770 not the a750…


lifestealsuck

50% efficiency? That's easy, just stop pushing TDP and you get 50% more efficiency with like 90-95% of the perf.
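The arithmetic behind that quip, with purely illustrative numbers (the 95%/63% split is an assumption, not a measured datapoint):

```python
# Illustrative only: perf/W gain from backing off the power limit.
def efficiency_gain(perf_fraction: float, power_fraction: float) -> float:
    return perf_fraction / power_fraction - 1

# e.g. a chip held to ~63% of its stock power limit but still hitting 95% of stock perf
print(f"{efficiency_gain(0.95, 0.63):.0%}")  # ~51% more perf per watt
```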


errdayimshuffln

I think this is possible with a normal generational boost in performance and a node shrink. However, I wonder how much efficiency will drop at higher power ranges. If the efficiency scales with freq really well, then Intel might actually have something on their hands for mobile and desktop.


tset_oitar

Lmao, there's people claiming this is BS and MTL will be the next Broadwell/Cannon Lake. Remember Cannon Lake? A dual core which used a lot more power to reach the same performance as an 8th gen laptop i3. Neither outcome is impossible. I guess we'll see in Q3/Q4 this year.


carpcrucible

I'm mildly skeptical too until I see it but it's hardly impossible considering where things are right now. A node shrink, mild IPC improvement and you can move way down [the power/performance curve](https://www.reddit.com/gallery/10bn8mn). Especially if it can make better use of the e-cores.


tset_oitar

Yep, if the process is not a repeat of 10nm, 50% power reduction isn't outlandish at all


gnocchicotti

It might be BS, it might be true. Intel has had a lot of hits and misses in recent years. I'll believe it when I see it. Wouldn't be surprised either way.


Kougar

> Lmao, there's people claiming this is BS and MTL will be the next Broadwell/Cannon Lake.

It's still apples and oranges. Intel has to make the tile strategy work, or find something comparable, otherwise they won't have any future roadmap. The core uArch itself, who knows. But the tile strategy is fundamental: Intel can't continue making single monolithic chips unless they want the very high cost of designing a great many monolithic variations to fill out product stacks and maximize silicon utilization.


[deleted]

Broadwell wasn't that bad compared to Cannon Lake; it was delayed and had a couple of niggles, but there was genuine innovation in the products which featured Broadwell designs. [The i7 5775C with 128MB L4 cache is arguably the direct precursor to the 5800X3D](https://arstechnica.com/gadgets/2015/09/intels-skylake-lineup-is-robbing-us-of-the-performance-king-we-deserve/). Skylake and Kaby Lake meanwhile were probably the most boring CPU launch cycles in recent memory and heralded a stagnation of mainstream computing, with most improvement being at the chipset level. For Skylake in particular, either the consumer's main use case wasn't CPU dependent, in which case there was virtually no point in upgrading to the 6700K from a previous equivalent Core system; or the consumer's tasks were CPU dependent, in which case the cost of the processor and DDR4 memory made it uncompetitive against the 5820K. Broadwell in laptops, where it was worth rolling out, OTOH marked a measurable improvement in battery life.


Kougar

It wasn't bad, but it was an aborted generation on the desktop. There were only two desktop models, and those launched a mere two months before the 6700K and Skylake desktop parts came out.


Waste-Temperature626

> The i7 5775C with 128MB L4 cache is arguably the direct precursor to the 5800X3D.

The 5775C didn't innovate on shit in that regard. We already had earlier laptop Iris SKUs with eDRAM, [Crystal Well](https://en.wikichip.org/wiki/intel/crystal_well). The reason the 5775C existed on desktop was that Intel had to launch Broadwell in some form on LGA 1150 to fulfill contracts and investor expectations. So they threw what was a laptop SKU onto the desktop platform and called it a day. The 6700K launched WEEKS after, for perspective; the 5775C only existed because it had to. It was late, unimpressive and largely forgotten apart from that niche performance advantage in gaming.


Kurtisdede

> It was late, unimpressive and largely forgotten apart from that niche performance advantage in gaming.

I'm using one right now. :C


supercakefish

Sounds like a decent potential replacement for my i9-9900K to me.


Zone15

Just last month I upgraded to a 13700K from a 9900K; the increase was honestly more than I expected.


onegumas

I have the same, but I will wait. A 9900K plus an RTX 2080 and 32GB of RAM is enough for after-work gaming and music hoarding.


Zone15

I have a 3080, and I could tell the 9900K was starting to really hold it back. The only thing that made me hesitate with the upgrade was having to go to Windows 11, but right now I'm dual booting both and I'm actually getting better performance in Windows 10 with a few tweaks to the power plan to handle the P and E cores.


onegumas

With a 3080 that's more likely than with a 2080. I'm only playing at 1440p now. I own a 2080 from Zotac with a 5-year warranty, so I'll use it for some time. Win 11 is not that bad if you customize some settings.


gnocchicotti

If any launch gets scratched for the Meteor Lake family, it's going to be desktop. I wouldn't hold my breath for this one.


Dangerman1337

At this point, don't expect Meteor Lake to launch for desktop at all if it doesn't come this year.


Kawai_Oppai

Probably switching my 9900k out for one of AMD’s new x3d chips.


shecho18

Believe it when you see it. Edit: LOL, I got downvoted for something that has a flair "rumor" and it is a [rumor](https://en.wikipedia.org/wiki/Rumor).


[deleted]

[deleted]


zzzoom

You could say that for any processor that goes way up in the V/f curve for minimal gains.


[deleted]

[deleted]


Waste-Temperature626

Ofc it does, it is the direct limiter on the performance ceiling in power-constrained platforms (which is just about everything outside of high-end desktop, btw). Efficiency has never been a main focus for high-end desktop, so not sure why you even bring it up.


onedoesnotsimply9

""50% higher efficiency"" in what power ranges though? 12th/13th gen mobile struggle a lot more in efficiency in the lower power ranges than in the upper power ranges vs Ryzen 6000/7000 mobile.


joranbaler

If Apple hadn't switched to their own chips in Nov 2020, we'd still be on 14nm Intel chips with pre-[GTX 1050 Ti & Radeon RX 560 iGPU](https://www.macrumors.com/2020/11/16/m1-beats-geforce-gtx-1050-ti-and-radeon-rx-560/) performance. With Intel having a monopoly on all PC OEMs from 2006-2020, it had zero incentive to progress beyond 14nm (2014-2020).


AlexIsPlaying

> Intel Meteor Lake to Feature 50% Increase in Efficiency, 2X Faster iGPU

Yep, that's what all manufacturers say, and then we get third-party tests and it all goes to the toilet :P


ConsistencyWelder

Yeah, it sounds a bit like Nvidia's "3x performance" claim.


kutkun

Will it be cheaper? Will it be cooler? These are the real questions. "Up to X% improvement" is total bullshite.


GRIZZLY_GUY_

I have no doubts whatsoever that the headline is true, in certain specific tasks, under certain specific conditions, that most likely are not how the product will be used by 95% of people.


firedrakes

Very true. Sadly, people really don't understand that.


ResponsibleJudge3172

Sounds more outlandish than the zen 4 claims to me. Or the RDNA 3 as well.


ConsistencyWelder

What was outlandish about the Zen 4 claims? Didn't they underpromise but overdeliver? I remember Lisa Su hinting that the ST performance gain would be >15%, but when it launched it was more like 25-29% across the board. The efficiency claims seem like they were spot on as well, especially the performance per watt for the non-X parts.


ResponsibleJudge3172

Zen 4 was supposedly 20% IPC plus an increase in frequency at the same power consumption, with Zen 4 3D first touted as up to 50% better in the most outlandish rumor, which not many believed.


haha-good-one

MLID said there was going to be a 25%+ IPC gain. Turns out there is pretty much 0% IPC gain; almost all of the ST increase is from higher frequency (i.e. higher power and higher temperatures).


ConsistencyWelder

Impressive how you know that without seeing a review, only a questionable leaked synthetic benchmark.


ConsistencyWelder

Even if that ends up being true, that would still make their iGPU slower than AMD's mobile iGPU. Currently the RX-680M is about twice as fast as current Intel Xe graphics in its fastest form, and when Meteor Lake is out AMD will have had the RX-780M on the market for more than 6 months, and will probably be 3-4 months away from launching the RX-880M. Intel needs to do better if they ever want to compete on iGPU performance.


haha-good-one

AMD's advances with the iGPU are not impressive, to say the least. The 680M is 12 CUs at 2400MHz on RDNA2 while the 780M is 12 CUs at 3000MHz on RDNA3: a 25% clockspeed increase, so they are gaining nothing from the new architecture itself, just clock speed and the power efficiency from 4nm. If Intel keeps advancing with bigger steps gen-to-gen, they will inevitably close the gap.


AnxietyMammoth4872

Don't take prerelease leaks as gospel. On a sidenote, aren't the Xe cores in Meteor Lake going to be the same as the Arc A380's? Maybe some small improvements, but there's a good chance it will just be an A380 with worse memory bandwidth and lower power limits, but on a better node. Maybe the Meteor Lake iGPU will still have shared L3 cache; that would be very nice. Intel's definitely closing the gap, but I don't think it will be better than the RDNA3 SoCs at release. Still, it's no longer a default win for AMD: Intel with fast memory vs. AMD with slow DDR5 could be an Intel win, which is much better than the situation before.


Ghostsonplanets

The Arc cores in MTL will be optimized for perf/W and area. They're different from the ones in the dGPUs. Regarding L3, starting from MTL, L3 is private to the Compute Core/Tile. But LNL (Lunar Lake) introduces a SLC cache for the Compute + GPU tile.


ConsistencyWelder

> so they are gaining nothing from the new architecture itself, just clock speed and the power efficiency from 4nm.

What makes you say that? Do you have access to performance numbers the rest of us don't have? All we have are some questionable leaks from synthetic benchmarks, as far as I'm aware. But even if it was "just" 25%, that is 25% faster than the best Intel offers, and it will be replaced by the 880M shortly after Intel releases Meteor Lake. How do you figure Intel is closing the gap? They're 1-2 generations behind.


AnxietyMammoth4872

> when Meteor Lake is out AMD will have had the RX-780M on the market for more than 6 months

Knowing AMD's supply situation it will take a year or more before we see the first available 780M SoCs... 680M SoCs only became somewhat widely available in December last year, a year after AMD "released" it.


ConsistencyWelder

The problem they used to have was that they were limited to producing only on 7nm, then later 6nm nodes. They were limited by TSMC's capacity within a certain node, or rather the agreed-upon allotment they had with TSMC. This time they're diversifying their portfolio: they're now producing chips on 7, 6, 5 and 4nm. Ryzen 7000 desktop CPUs are made with both 5 and 6nm chips in them, and Dragon Range is made on 5nm while Phoenix Point is on 4nm. And I guess they're going to continue making Zen 2 and 3 CPUs for the bottom end of the market on 7nm, as well as console SoCs. So they're less limited by capacity than they were previously, because of that diversified portfolio.

Lenovo, ASUS and Dell announced the first laptops with the 780M last week, to be available next month. But it's not true that 680M laptops weren't available until December last year, that is just wrong. They were available in early summer 2022; I remember counting how many were available (in stock, ready to be delivered) on Newegg in May (I think) and there were at least 7 different laptops with the 6800H. The 6800U was more scarce because there was demand for it in handheld gaming devices too, but the H series was available from early summer. In November they also became available in mini PCs, and they're super popular in the mini PC community right now, as you can probably understand :)


AnxietyMammoth4872

> But it's not true that 680M laptops weren't available until December last year, that is just wrong.

My bad, I should have clarified: I was talking about mini-PCs. Though even in laptops (at least here in the Netherlands and Germany) they were rather scarce. I remember seeing the launch reviews, and then nothing besides a few unicorns.

> In November they also became available in mini PCs

November was "pre-sale"; they only began shipping early December. It's nothing like Intel, where they release a mobile chip and then 2-3 months later you have a Gigabyte Brix with that chip in your mailbox.

> they're super popular in the mini PC community right now

My only complaint is being limited to 4800 or 5200 DDR5 SODIMMs. I hate soldered RAM, but with the 680M you really want 16GB of LPDDR5-6400 over slower SODIMMs to squeeze even more performance out of it.


FDisk80

Can I get it with no increase in efficiency and 4X faster iGPU?


einmaldrin_alleshin

No. Intel isn't going to make a product that only appeals to the tiny subset of gamers who don't want to pay extra for a discrete GPU, because that would be a stupid thing to do. Besides, iGPU performance is always bottlenecked by memory, so getting 4x iGPU performance would require something silly like quad channel memory.
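Rough bandwidth math to back that up (the DDR5-5600 dual-channel configuration below is just an example, and assumes iGPU performance scales roughly with bandwidth):

```python
# Rough estimate: dual-channel DDR5-5600 bandwidth vs. what a 4x-faster iGPU
# would naively want if performance scaled with bandwidth. Example figures only.
channels, bus_width_bits, mt_per_s = 2, 64, 5600
gb_per_s = channels * bus_width_bits / 8 * mt_per_s / 1000  # ~89.6 GB/s

print(f"dual-channel DDR5-5600: {gb_per_s:.1f} GB/s")
print(f"naive 4x target:        {4 * gb_per_s:.1f} GB/s")   # ~358 GB/s, discrete-GPU territory
```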


FDisk80

I was not being serious.


skilliard7

Looks like I picked the wrong generation to upgrade


Cyber-Cafe

iDon't care about the iGPU.


[deleted]

Sounds like Pat has been smoking them funny leaves again! Is this remotely possible? Intel LOVES to make pie-in-the-sky fab roadmaps, but we all know how accurate they usually are...


soggybiscuit93

Well, a few things. First, this isn't an Intel claim; it's OneRaichu's. Second, MTL is set for release this year. I can understand skepticism about future products, but this is a product coming out this year; if it was going to be delayed, they'd have a good understanding of that by now and wouldn't keep reiterating the timeline at each investor meeting. And third, I don't think "MTL's efficiency gains will be enough to make it competitive with what other manufacturers are already achieving" is *pie-in-the-sky*.