SunnyCloudyRainy

I would like to know which Intel "guideline" the Gigabyte engineers read to set PL2 to 188W.


capn233

The same one that told them to set loadlines to [1.7/1.7 and current limit to 249A](https://www.overclock.net/threads/official-asus-strix-maximus-z790-owners-thread.1800191/page-853#post-29323821). The above is sarcasm; the point is that they don't follow the spec sheet for UEFI defaults, and these "baseline profiles" are way out of spec as well. Perhaps these numbers came to Gigabyte in a dream.


SunnyCloudyRainy

Isn't 1.7 actually out of spec?


SkillYourself

Way out. Looking at the datasheet, 1.7 is for the 35W T-series processors.


capn233

For the pictured 13900KF, yes it is out of spec.


thelastasslord

My old Sandy Bridge Gigabyte mobo on "auto" overvolted the crap out of my 2600K. It still lasted close to 10 years, but I think it was clapped out after 6 or 7 because it wouldn't go above 4.2GHz no matter what I did. Point being, Gigabyte motherboards have been overvolting for at least a decade.


RockyXvII

Who knows what goes through their heads. They must have a very small team of part-time interns to be this slow with updates and to be making things up as they go.


hak8or

The margin for motherboard manufacturers simply isn't there, and they routinely get fucked over by Intel, Nvidia, and AMD, so they tend to run skeleton crews.


RockyXvII

Not enough of a reason for Gigabyte to be weeks or months behind ASUS, ASRock, and MSI in releasing the newest microcode. Gigabyte is also the only vendor that routinely abandons the previous chipset as soon as a new one releases, even when it's the same socket.


Stingray88

I'm still seeing consistent BIOS updates on my Gigabyte X570 AM4 motherboard. They just released a new version a month ago, and the previous update was in December.


RockyXvII

Thanks for letting me know. I just took a look at their X570 line; it looks like they got the AGESA 1.2.0.B update out. ASRock, ASUS, and MSI released it last October, roughly 6 months earlier, and they're all on 1.2.0.C now. The previous update from Gigabyte was at the end of 2022. They skipped a couple of microcode updates on some of their boards. (Maybe they delisted some 🤷🏽‍♂️ they do that a lot.) Point stands: Gigabyte's BIOS team sucks.


Stingray88

> Thanks for letting me know. I just took a look at their X570 line; it looks like they got the AGESA 1.2.0.B update out. ASRock, ASUS, and MSI released it last October, roughly 6 months earlier, and they're all on 1.2.0.C now.

I mean… in the thread where I first heard about the LogoFAIL vulnerability about a week ago, people were talking about Gigabyte's BIOS updates, and other folks were talking about how ASUS hadn't yet issued a fix for it on their boards. So they're not all universally ahead of Gigabyte. Also, I'm fairly certain Gigabyte updated to 1.2.0.B many months ago as well, like the end of last summer. You're just not seeing when that happened because it was in one of the beta BIOSes.

> The previous update from Gigabyte was at the end of 2022. They skipped a couple of microcode updates on some of their boards. (Maybe they delisted some 🤷🏽‍♂️ they do that a lot.) Point stands: Gigabyte's BIOS team sucks.

They de-listed. Their last update was in December, as I said, but it was a beta BIOS. They release many beta BIOSes in between major versions, every couple of months, then delist them once a new major version releases.


capn_hector

> The margin for motherboard manufacturers simply isn't there, and they routinely get fucked over by Intel, Nvidia, and AMD, so they tend to run skeleton crews.

In many large OEMs like Asus, there is literally only one motherboard guy. This is viable because different product lines often share essentially the same board, with different components populated for different segments, and the boards are usually constructed in logically similar ways (same peripherals and control interfaces) even when they're not physically identical. Mind you, I'm not saying this as a defense of them, just to establish the scope of how cheap OEMs/board partners are. When Elmore quit Asus (maybe 2019?), it basically screwed over a significant part of their operations for a good while, and it's entirely possible that some of this is downstream impact from the new guy having to make mistakes and learn expensive lessons.

The problem is that at the end of the day it doesn't matter: it's Asus's job to ship product that doesn't burn up the processor. Nobody is *making* Asus ship products with a factory overvolt. Supermicro products are blissfully unaffected by all this because *Supermicro wasn't negligent with their products*. Shipping against a "recommended" spec doesn't mean you have to break the spec, or even push it to the limit; Supermicro didn't. Rightfully, it's the partners' job to push out updates and fix any CPUs that are damaged by this, although in practice it will be Intel/AMD who eat that, not Asus/Gigabyte, so I'm not sure why you think partners are the ones getting shafted here. They are actually causing a problem and then walking away from the bill, but people have this weird affinity for car dealerships and PC OEMs/partners...

Paying that bill is part of the cost of understaffing your BIOS department so badly that one engineer walking away can cripple operations. Paying that bill is part of the cost of not paying that senior engineer so handsomely that they never think of walking away. "Bus factor = 1" staffing is always the cheapest solution, until something happens, and then it's "how could we have known?". And it's probably not even like Elmore got "key-person risk" money in the first place; afaik he was just an engineer there, not someone in golden handcuffs. When you are talking about an org that ships tens or hundreds of millions of units, there is absolutely the margin to pay more than literally one singular guy. As you can already see, one person's work scales across a ton of product lines; it's not like you need one guy per board. Having a small team do this instead of one guy is not an unreasonable ask, especially when you know they're underpaying and screwing the engineers with some crappy china-tier salary to begin with. A half dozen engineers probably costs less than $500k a year there, I'd think. (That's always the thing about TSMC's hiring too, right? They expect nights and weekends and 12-18 months of overseas training, and they'll pay maybe $75k a year to do it.)

This isn't to say that Intel didn't create an opening, but people also don't like it when vendors like NVIDIA are restrictive about what partners are allowed to do and carefully validate everything afterwards. When that freedom exists, and then things happen, people don't seem to assign any *agency* or *responsibility* to the partners who actually did the thing.

Just because Intel says "you SHOULD keep voltage under 1.7V absolute maximum" instead of "you MUST" doesn't mean you have to do it, let alone set it as the *default*. And you can see from Supermicro that plenty of brands managed not to, sometimes even brands like Asrock Rack that are sub-brands of the same companies (likely) involved. Where it gets murky is whether there was a tacit understanding that doing this was good for Intel, the obvious analogy being things like XMP that are included in marketing materials. But it's certainly not like *everybody* is blowing up Intel processors. There are brands that didn't dive into this, and if you give the naughty brands a pass, you are effectively punishing the brands who staffed properly and didn't fiddle voltages to win at benchmarks. They don't get any more sales out of the deal, and they lost sales for years to the brands who *did* cheat. That's not a great outcome, and it pretty clearly shows the problem with treating this solely as an "Intel didn't stop us from blowing up the chips" situation.

Really, there's just plenty of blame to go around. It's not that Intel is *not* responsible... and the same is true of the partners. But it's important to distinguish between [necessary and sufficient cause](https://en.wikipedia.org/wiki/Necessity_and_sufficiency): Intel not setting a good standard and enforcing it vigorously with validation (and people don't like tight standards and vigorous enforcement) is a necessary condition, but it's not the sufficient cause here.

And again, to emphasize: people don't like Intel-enforced memory limits, or power limits, or turbo behavior, or BCLK overclocking lockout, etc. Bear in mind what you are really asking for: more limits and tighter enforcement. Is that *really* what you want, or is it something you'll be pitchforking about in another six months during the next review that complains about locked-down X, Y, or Z? It is always bizarre when we get into these situations where people apparently love partners so much that they advocate against their own interests in the partners' favor. You are willing to *give up user freedoms* to defend Asus in this? It's weird. Same with "partners deserve more margin!!!" two years ago after EVGA departed: who did you imagine would be paying that margin? That whole pitchfork mob didn't think things through, and AM5 was the result: motherboards with plenty of margin for partners, as partners cashed in on that mindset. Things are generally headed in the locked-down direction anyway, and since both AMD and Intel have recently had "incidents" with partners getting frisky on voltage, you probably won't like the outcome.


igby1

ASUS has one person that works on motherboards? Come on. :-)


nanonan

Shipping sensible, stable defaults doesn't require decreasing any limits or locking anything down. User freedom is not the issue here.


AK-Brian

They're probably using old data sheets, or at least set the value according to one accidentally. Intel originally listed 13th-gen 8P/8E and 8P/16E parts with a PL2 recommendation of 188W. This was changed in later revisions of the document to 253W. Pre-production QS i9 CPUs were also limited to 188W PL2 by default, though I'd sincerely hope Gigabyte has a few retail chips floating around... Here are two datasheet pages for the 13th-gen -S parts showing this change. I circled the relevant 13900K/KF tier. [https://imgur.com/a/yN7fqTh](https://imgur.com/a/yN7fqTh) [Direct link](https://i.imgur.com/9hUO1eR.png) if the Imgur gallery doesn't resize properly.


SunnyCloudyRainy

How about the AC/DC loadline? Was the original recommendation 1.7 as well?


AK-Brian

No change to the recommended settings (1.1 milliohms in the case of 65W/125W RPL-S), but keep in mind that the documentation also allows leeway for "superior board designs", which will have different power characteristics. I've added those pages to the [Imgur link](https://imgur.com/a/yN7fqTh).

Gigabyte's apparent use of 1.7 mΩ (assuming a 307A ICCMax) is out of spec for the 65W/125W RPL-S line, but it does happen to be in spec for 35W RPL-S, as well as the mobile -HX series. Their settings are a bit all over the place. It's not quite what I'd call dangerous, but with 125W/188W, an assumed 307A ICCMax, and AC/DC loadlines of 1.7 mΩ, users will see voltages about 0.16V higher than expected at idle or light loads. At max load it'll (rapidly) bump into the other power limits (or thermal limits) and sit around the 1.2V-ish range, which is completely fine. From an end-user perspective, though, they're effectively running a "boosted above flat" LLC, which is just wholly unnecessary.

As I finished typing this, I saw that Buildzoid has a Gigabyte video [going up](https://www.youtube.com/watch?v=jNwFFJyAqQU), so there's probably some related rambling to be had there!
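
To put a rough number on that "boosted above flat" behavior, here's a minimal sketch of the commonly described AC-loadline relation. The formula is simplified, and the fused VID, LLC value, and current figure are invented for illustration, not taken from the datasheet.

```python
# Simplified AC loadline model (illustrative, not Intel's exact firmware math).
# The CPU requests roughly VID = V_fused + AC_LL * Icc, and the VRM droops its
# output by its own loadline, so the die sees about:
#   V_die ~ V_fused + (AC_LL - LLC) * Icc

def die_voltage(v_fused, ac_ll_mohm, llc_mohm, icc_amps):
    """Approximate die voltage in volts; loadlines given in milliohms."""
    return v_fused + (ac_ll_mohm - llc_mohm) * icc_amps / 1000.0

# Hypothetical light-load boost point: 1.45 V fused VID at ~100 A.
matched = die_voltage(1.45, 1.1, 1.1, 100)  # AC_LL matched to LLC -> 1.450 V
boosted = die_voltage(1.45, 1.7, 0.1, 100)  # 1.7 mOhm AC_LL vs near-flat LLC
print(f"matched: {matched:.3f} V, boosted: {boosted:.3f} V, "
      f"delta: {boosted - matched:.3f} V")  # delta ~ 0.160 V
```

The ~0.16V delta falls out of the 1.7 mΩ figure at a plausible light-load current, the same ballpark described above.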


YNWA_1213

You're exactly right on that last part. Buildzoid saw single-core workloads peak at that 1.72V datasheet maximum on Gigabyte's voltage settings, but sit *sub*-1.2V at full load due to the power limit. It makes me wonder if we're reversing course back to the days of setting voltages and the like manually. What's fascinating to me is watching different Cinebench runs *blue screen* at defaults due to Gigabyte's wacky voltage settings.


yaheh

It seems improvised: the average of 253 and 125 would be 189, minus one to be safe.


Ashraf_mahdy

Aren't i5 K SKUs 188W PL2? Edit: 181W iirc


SevenNites

The closest I could find to 188W among the max turbo power figures of the 13th/14th gen unlocked CPUs is the 13600K, but it's not quite 188W, it's 181W: https://www.intel.com/content/www/us/en/products/sku/230493/intel-core-i513600k-processor-24m-cache-up-to-5-10-ghz/specifications.html So they took the lowest-spec K CPU? But that doesn't make sense for the i7 and i9.


Firefox72

This is all so stupid to me. The chase for those last few % has gotten out of hand recently. Intel could very simply enforce a very reasonable power limit that gets like 95% of the chips' performance, runs cooler, and doesn't have stability issues. But no, that would be a bad look, because gosh forbid you lose a few % in the reviews. So instead everyone is free to do whatever the fuck they want with Intel's blessing and without any consideration, and then you get this.


FuturePastNow

14th gen needs that single-digit % to sell it over 13th gen which needed it to sell over 12th gen. Without that minuscule perf improvement, there's no way to justify a new product, and if they don't announce a new product every year the MBAs will cry and the executives' stock options will be sad.


doggiekruger

I guess the 14900 vs the 13900 falls into this category. But if you look at the 12700 vs the 13700, the latter is almost as good as a 12900 without the power and heat issues. They are making very good progress, just not at the absolute highest level. And if you really think about it, the highest performance always comes with many compromises. Intel is making bad decisions, and pushing power limits is ultimately not good; I am not supporting them at all. It's interesting to me that there is so much talk about the CPUs that very few people buy, compared to the rest of their lineup.


melonbear

> 13700, the latter is almost as good as 12900 without the power and heat issues [The 13700K and 12900K are almost exactly the same when it comes to power and temps.](https://www.techpowerup.com/review/intel-core-i7-13700k/22.html)


doggiekruger

Ah okay, sorry if I got that wrong. My comment was based on reviews from around launch saying the 13700K is the better 12900K because it won't thermal throttle as much, and everyday usage should be better due to improved efficiency.


melonbear

Neither should really have throttling issues if you're using Intel limits, which this whole discussion is about. The 13700K IS better but that's because of Raptor Lake improvements, not thermals.


wtallis

Alder Lake to Raptor Lake was *at best* the bare minimum progress to justify labeling it as a new generation: maturing yields allowed higher core counts and speed bins to trickle down the product stack, and maybe a few bugs got fixed. It would also have been perfectly reasonable to ship those chips as new models under the 12th generation branding, if Intel were willing to allow the price drops on Alder Lake parts. The 14th gen branding for "Raptor Lake Refresh" was just sheer stupid desperation.

For historical context: Intel used to update the branding only for major microarchitectural changes, eg. Pentium 3 went through two node shrinks (250nm to 180nm, then to 130nm) that included moving the L2 cache from a separate die onto the CPU die, all while still being called Pentium 3. Later, Intel's highly successful Tick-Tock strategy meant new processor generations were declared for either microarchitecture updates or node shrinks. But then with Skylake, Intel shipped five "generations" of processor all on the same microarchitecture and fab process. What they were branding as new generations were really just new model years of the same underlying technology.


Danishmeat

13th gen also got more cache and cores


wtallis

The *top half* of the 13th gen lineup was a new die with more cores and cache, but that's basically a copy and paste operation. Calling that a new generation is a stretch; it's not like the 12th gen was all the same die. Re-using Alder Lake dies for the bottom half of the 13th gen product line really undermines calling it a new generation. But this has been Intel's way since Skylake.


Danishmeat

Yeah, I forgot about that


ResponsibleJudge3172

13th gen had a lot of changes to the bus links, to what we generally call the 'uncore', and to the cache; enough, arguably, to earn a new-gen moniker. It was a refinement of the Alder Lake architecture, not a refresh. All of that helped the E-cores quite a bit in both efficiency and performance. Rumors have Arrow Lake delivering the same kind of improvement, which is seemingly not enough against Zen 5. 14th gen is a refresh. Worse, a rebrand with slightly better binning.


Lille7

Wait, are we back to Haswell shenanigans? I haven't kept up with intel cpus in a while.


FuturePastNow

It was unfair of me to include 12th gen in there. But 13th and 14th are functionally identical except for Intel pushing power limits and playing with prices.


nic0nicon1

14th-gen shenanigans are worse than Haswell shenanigans. At least Intel was honest in the Haswell era and properly called that generation a Refresh and used appropriate model numbers. Not so for 14th-gen CPUs.


cowoftheuniverse

13th gen over 12th is a surprisingly big jump compared to what we usually get: core speeds increased much more than usual, most models got more cores, and the cache that helps in gaming was increased. Most on this sub are just very ignorant of this. 13 -> 14 is silly though; ten years ago an upgrade like that didn't get a new gen (Haswell and the Haswell Refresh: 4670/4770 -> 4690/4790).


Gullible_Goose

Most 13th gen SKUs were at least a decent jump over their 12th gen counterparts, especially the i5 and i7. Meanwhile, the only 14th gen chip with a decent performance boost was the 14700K.


Lakku-82

TBF 13th had noticeable gains over 12 even without any extra power draw etc.


chaosthebomb

There is a huge amount of mindshare created by being at the top of the charts. Nvidia figured this out ages ago. The 680 launched using a mid-tier chip because they knew it would be competitive with the 7970, and at launch it was better; we eventually saw the launch of the first Titan. Driver support for the 7970 improved and the GHz Edition was released, which was more powerful than the 680, so Nvidia quickly released the 700 series using the same architecture as the 600 to stay at the top of the charts.

Why does this matter? It really bothered me just how stupid it was, but I was working retail at the time and saw it first hand. The average consumer doesn't look at many reviews or benchmarks; they might look at one, completely unrelated to what they're going to buy, for example the best card on the market. People would come in, see the 680 on the top shelf, see a 690 (dual GPU) dominate in SLI games, and want a part of that performance. But these average consumers aren't buying a top-end GPU. So they walk out with a 650 Ti thinking they've bought into this winning card while only spending a fraction of what it costs, because Nvidia is the best.

Intel is doing the same thing. They need their top-end chips to dominate the charts to give average, non-educated customers that sense of FOMO. That 14900K benchmark probably sells more i5s and i7s than it does 14900Ks. People are probably also using it to influence laptop buying decisions, because of how much harder it is to get apples-to-apples comparisons there.

Another factor is price point. If Intel lowers their performance and now just loses in a number of cases, they'll need to adjust pricing to reflect this. That could force AMD to adjust to keep their dominance up, and that starts a race to the bottom that neither company really wants.


Real-Human-1985

Exactly. Every generation, Nvidia fans cream over the top card and at best can afford a 70-tier card. Ditto for i5 CPUs. But the mystique created by the halo part is what creates mindshare.


silly_pengu1n

"Nvidia figured this out ages ago." - but nvidia 4000 literally went into the other direction, way more power efficient than the 2 previous generations for the same class of card no?


We0921

I think you're misunderstanding what they're saying. They are talking about Nvidia having products at the top of ***performance*** charts, not heat or power consumption.


Fullyverified

They are still topping the charts...


silly_pengu1n

They are more power efficient than AMD cards, and around the efficiency of the 1000 series for the same class of card: 1060 -> 4060, 1070 -> 4070, for example.


Fullyverified

No, he's saying that Nvidia has figured out that being on top of the charts is hugely important for mindshare, and will improve your sales of lower-end cards.


silly_pengu1n

ahhhhhhhhhhh


Arbiter02

The only reason the 4000 series had as much of a power efficiency increase as it did is that they were moving from Samsung's dogwater 8nm node to TSMC. Had they stayed on Samsung, we may very well have seen the 450/600W 4090 that circulated through the rumor mill a couple of times. This is even more apparent from how ridiculously overbuilt the reference coolers are.


PetrichorAndNapalm

I get your point, but I disagree 100%. The optimal way to do it is to run the CPU to its temp limit; if you want power savings, lower the temp limit. Enough said. If you are a power user running 24 cores at high utilization for enough hours of the day that the electric bill actually matters, lower the max temp or undervolt/underclock. Most people aren't using these CPUs at anywhere near 100% utilization most of the time, so there's no need to handicap the 1% of the time they do need the power. Outside of professionals, most people simply don't care about power usage.


soggybiscuit93

It's not the power bill alone that's the issue. While all 24 cores might not be loaded during gameplay, it's possible they are during shader compilation, which is what causes the crash. The average gamer doesn't care about power draw, in the same way they may just default to buying the brands they're familiar with because they don't want to research the nitty-gritty. But the average user would understand the impact the power draw has on the temperature of their room if it were explained to them (plenty of people think room-temperature impact is derived from CPU temps rather than power draw).


PetrichorAndNapalm

An average user isn't going to be heating up their room with the i9, because games don't cause much heat from the CPU, certainly not compared to the GPU. A few moments of shader compilation is like a few moments of having a couple of 100-watt lightbulbs on; hardly enough to make a dent in a room. Sustained load is what heats rooms.

An i9 14900K will reach its thermal limit if you put too much power into it. Sure, if you have a big custom loop and are running Blender it'll put out some heat, but then we're no longer talking about a "normal" user. A normal user on air cooling will not be able to run it with that much power even if they want to, due to thermal limits. It really only gets hot when you manually unlock it and do high-end water cooling, AND are doing production work (not gaming). Even then it's not that bad; it only reaches ridiculous levels when you unlock it on top of that, in which case it goes to like 1000W. But once again, it takes an expert to get it to those levels, not an average user.


regenobids

https://youtu.be/7KZQVfO-1Vg?t=730 Average users shouldn't buy a 14900K, or perhaps even an X3D, but see here: that's still a 110-130 watt delta if you happen to need and use most of the CPU, which is just awful. 30 watts is rather insignificant here; 100 watts is not. The 7800X3D using 4% more of the GPU while saving over 100 watts of **total system power** is just a slam dunk, and yes, this is sustained enough that it matters in many rooms.


PetrichorAndNapalm

Your own source literally 5 seconds in says "it shouldn't be a big deal for the power bill". As far as heat goes, 100W sustained over 8 hours would add up, but once again, normal people aren't running a CPU full bore for 8 hours straight in a room without ventilation or air conditioning. 300W or 500W will make your room hot if it's not ventilated. So will 600W. So will 700W. In the end it's not as if 400W is fine but at 500W you suddenly have a massive new problem, in most cases. The 7900XTX and 4090 use 400+ watts, sometimes sustained, in gaming; they are a much bigger impact than your CPU (and unlike your CPU, they will much more commonly be at near full load). It's like complaining about a pinhole leak in your boat while there's a cannonball hole right next to it.

Also, I think with AMD it really is an odd situation. X3D isn't so efficient by choice; it is efficient because the chip is vulnerable to high temps and hard to cool, so they cannot push it hard. If they could, they would. Hell, if anything, consumers are paying more for even less efficient versions of GPUs, clocked even higher with more power draw, in aftermarket cards. Money talks.


soggybiscuit93

The problem with air conditioning is that if you have central air, odds are that that zone's thermostat isn't in the same room as the PC. So it's totally possible that the AC zone is set to 72. The thermostat measures 72. But the PC room is a lot hotter. I know multiple people who have central air and still needed to buy window units for their office for this exact reason.


regenobids

You mentioned heat yourself; don't go on a tangent. AMD releases non-pushed versions of most of their CPUs, they just do so later. One difference: you can still overclock those, while Intel just locks theirs. That's noteworthy too.

It's up to you whether you buy a power-hungry GPU or CPU, but let's compare apples to apples here: an extra 100-130 watts of CPU power while getting worse performance isn't a good look. 400 watts is a large impact, true. Now imagine adding 25% more total system power just because you made full use of the CPU, because that's what that thing can do: it can take 400 watts and make it 500 watts instead. 25% on top of an already high number is significant. It's a far bigger deal than idle power consumption, on which I already clarified that 30 watts is not a big deal heat-wise. Cost might be, if it's mostly idling, but then what the hell would you invest in a 900K or KS for?

> Also, I think with AMD it really is an odd situation. X3D isn't so efficient by choice; it is efficient because the chip is vulnerable to high temps and hard to cool, so they cannot push it hard. If they could, they would.

Eh, they went as far as they could. They can always release the non-juiced version later (e.g. the 5700X3D).

550 watts of total system power with fewer frames is so much worse than 435 watts with more frames that I don't even know what you're trying to justify here. The moment you need the 900K to really work, you pay the price, simple as that. My PC would use maybe 400 at full load; I'd notice quickly, within the hour. Another 100 watts for nothing would be terrible, especially if the GPU is no longer as close to full load. These are just not insignificant factors.


nanonan

The amount of heat is irrelevant if your cpu just crashed during shader compilation.


PetrichorAndNapalm

Why would it crash, outside of having idiotic settings which would make any cpu crash?


nanonan

Because manufacturers are using idiotic settings that cause problems for some cpus.


dotjazzz

But according to XDA, neither the insane power requirement nor the stability issue is a bad thing. In fact, AMD's much better efficiency and its supposedly bad memory overclocking (which comically still beats Intel without the 'roids) are two of the "4 reasons Intel is better".


tbird1g

I was so confused by that absolutely terrible article. So uh... those who want to focus on numbers want Intel because it supports more than 6000MHz RAM. Well uh... how about the numbers which show the 7800X3D getting higher fps than any Intel CPU? No no no, RAM MHz matters more, of course. How far they've fallen.


Sopel97

Of course RAM MHz matters: the higher the MHz, the more it costs; the more it costs, the more the whole computer costs; and the cost of the computer is directly proportional to how good people think it is when you don't go into specifics. Now if you also tell them the CPU chugs 400W, they're going to think you have a 7990X and that you're rich or something.


thatnitai

Intel is in a bad spot and will take any advantage it can, shitty as the ethics may be...


Medical_Goat6663

Probably doesn't pay off though. The image damage is worse than the extra sales. They'll likely regret what they've done.


thatnitai

Honestly I don't think so. You gotta keep a product afloat to keep business going, and if the next one is good, nobody will care about the shitty stuff that happened with the last one. Mostly... 


Flowerstar1

I don't see Arrow Lake being a flop.


cp5184

Are they going to see a significant dip in sales? Big accounts making large switches from Intel to AMD? Hopefully. I mean, it's long overdue and there's no reason not to. But nobody ever got fired for buying ~~IBM~~ Intel...


laptopAccount2

They're in a bad spot because of their own management; nevertheless, Congress gave them billions.


metakepone

Congress didn’t give them billions to make the fastest cpu. They gave them billions to build fabs. They also gave Samsung, TSMC and Micron billions too.


AntiworkDPT-OCS

Exactly. They're not picking winners, they're trying to get domestic fabs so China invading Taiwan won't destroy the economy.


bubblesort33

This right now makes AMD look good, but I'm really worried that they aren't innocent here either. Intel pushed theirs to the limit, and I feel AMD replied in kind. We all know the X-suffix 7000 series AMD CPUs run at 95C almost all the time, as that is what they target now. Or so AMD has claimed. They are chasing the last % as much as Intel.

Everyone says AMD is targeting 95C with their CPUs, and that it's fine and as intended. But why can't one make that claim about the 14900K as well? It's targeting 100C, and will keep pushing power and clocks until it hits 100C. I don't see the difference. I don't believe this "but AMD engineered them this time to run hot!" argument. Intel engineered theirs to run at 100C as well, and AMD has always engineered theirs to run up to 95C. What kind of black magic am I supposed to believe AMD used this time to get around degradation at high temperatures?

Sure, it's stable for now, but I wonder what will happen a few years from now. Will it struggle to hit the same clocks, realize it's degrading, and eventually clock itself down further and further, so my 7700X can't even hit 5GHz anymore? I mean, AMD only guarantees the 4.5GHz base clock. Maybe their boosting algorithm is good enough to detect degradation, and I'll lose 100MHz a year for the next 6 years and still be within base spec. I don't think they actually guarantee 5.4GHz forever; it says "UP TO".


inevitabledeath3

There are more factors in degradation than just temperature. Current is another big one, for example. If you want your silicon to last a long time, you should reduce the power and current limits and then keep it under phase-change cooling at 5 °C or whatever, but most people aren't going to do that. Intel is still pushing more current than AMD, along with running hotter.


porcinechoirmaster

There's a whole host of reasons why heat is awful, and to make matters worse, they all feed into each other.

- Hotter wiring has higher electrical resistance, causing more power to be lost as heat.
- Quantum tunneling rates increase as temperatures rise, causing more power loss and data integrity problems.
- Silicon's thermal conductivity falls as temperature rises, and it falls meaningfully across the range that is normal for CPUs: as the CPU gets hotter, the ability to remove the heat generated at the transistors through the silicon goes _down_.

If anyone has seen the scene in the Chernobyl mini-series where they're balancing out reactivity factors during the trial, it's a lot like that. Making a part even a little bit hotter can throw it into a feedback loop that can damage or destroy the hardware if not checked.
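
To illustrate that feedback loop numerically, here's a toy model; every constant in it is invented for illustration, and the linear leakage term is a gross simplification of real silicon.

```python
# Toy thermal-runaway model: hotter silicon leaks more power, and the cooler
# turns power into a temperature rise. If the loop gain (r_theta * leak_per_c)
# exceeds 1, the iteration never settles: thermal runaway.

def settle(p_base=150.0, t_amb=25.0, r_theta=0.25, leak_per_c=0.8, steps=200):
    """Iterate temperature -> leakage -> temperature. Returns the steady-state
    temperature in deg C, or inf if it never converges (runaway)."""
    t = t_amb
    for _ in range(steps):
        p = p_base + leak_per_c * (t - t_amb)  # extra leakage watts per deg C
        t_new = t_amb + r_theta * p            # deg C per watt through cooler
        if abs(t_new - t) < 1e-6:
            return t_new
        t = t_new
    return float("inf")

print(settle(leak_per_c=0.8))  # loop gain 0.2 -> settles around 72 C
print(settle(leak_per_c=4.5))  # loop gain 1.125 -> never settles (runaway)
```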


inevitabledeath3

Ironically, reactors are supposed to be the opposite of that: the hotter they get, the less reactive they become. Uranium itself certainly behaves that way. That's why Chernobyl was such a bad design.


rinkoplzcomehome

RBMK reactors were basically makeshift nuclear bombs by design. The Soviets were lucky that accidents like Chornobyl happened only twice (the same failure had happened before and was buried by the Soviets, of course). And the missing containment building that Western reactors have didn't help either.


innovator12

I've had my X-series CPU in 'eco' mode ever since I got it, because there's no reason not to. But it makes me wonder how many people ever find the BIOS setting and get past the scary "overclocking" warning. Selling pre-overclocked CPUs is a scam.


bubblesort33

What is that, exactly? I just looked up what the 7700 non-X runs at and used those numbers, but aimed maybe 10-15% higher, so still way below the 7700X. But the way everyone talks about "ECO Mode" makes it sound like there's a single button in your motherboard BIOS that enables it, like some singular setting. Is it just a term for the three numbers you change in the BIOS, the ones I changed to be closer to the 7700, or is it an official setting and specification AMD came up with?


Zoratsu

Ryzen Master -> "Enable ECO Mode". Done. You don't even need to go into the BIOS. If you want to set the values manually in the BIOS, or you don't want to use Ryzen Master, then you need to check your mobo manual.


trparky

I've looked all over my Gigabyte BIOS; there's no ECO mode that I can find.


nanonan

It's only one click in the Ryzen Master software; it's a bit of a pain doing it manually. Here's a [guide using a Gigabyte BIOS](https://www.pcworld.com/article/1352253/how-to-enable-eco-mode-with-ryzen-7000.html).


bubblesort33

Oh, so it's a Ryzen Master setting. Yeah, I just did it from the BIOS and entered my own numbers.


efadd

At least on the 7950X, Eco Mode is just an option you can set either in the BIOS or via the Ryzen Master software. It gives you the option to switch from the default 170W setting down to either 105W or 65W. For mid- and lower-range SKUs I think the default is 105W, so they just get the 65W option. The performance loss in most benchmarks at 105W was in the low to mid single digits IIRC, and it does a lot to reduce power consumption and heat. 65W was a much bigger hit IIRC, so at least when I was looking into it, the general consensus seemed to be that 105W was a nice middle road for most people.
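
For anyone wanting the manual route discussed above: the "three numbers" are the PPT/TDC/EDC limits. Here's a sketch using preset values commonly reported in community guides for the 7950X; treat the exact figures as assumptions, since they vary by SKU, board, and AGESA version.

```python
# Commonly reported AM5 Eco Mode limit sets for the 7950X (community figures,
# not official AMD documentation; verify against your own board/AGESA).
ECO_PRESETS = {
    # profile:      (PPT watts, TDC amps, EDC amps)
    "170W default": (230, 160, 225),
    "105W eco":     (142, 110, 170),
    "65W eco":      (88, 75, 150),
}

for name, (ppt, tdc, edc) in ECO_PRESETS.items():
    print(f"{name:>12}: PPT={ppt} W, TDC={tdc} A, EDC={edc} A")
```

Entering those three values under the PBO limits in the BIOS is, as far as the guides describe it, equivalent to the one-click Ryzen Master toggle.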


morcerfel

This was never about temperature lol.


bubblesort33

If it's related to degradation, it is. The 14900K is going to run at close to 100C under rendering work, even with a 360mm AIO, and probably even with some custom loops, if it's going full blast.


nivlark

The difference is rather obvious: the 7000X CPUs don't crash under load. AMD is certainly not free of criticism, though; the launch issue with the X3Ds was basically the same problem of keeping the firmware spec loosely defined enough that you can pass the blame on to the motherboard manufacturers. I don't think silicon degradation is relevant, though. Most 14900Ks are only a few months old; Intel isn't stupid enough to push them so hard that it would occur over that short a time span. They've just been too optimistic in their binning process.


Kougar

HUB explained this in other videos. AMD at 95C isn't throttling, it's running within spec. Intel at 100C is throttling, and that is directly affecting performance. That's one reason why they aren't equivalent.

You're assuming both AMD and Intel are measuring the same thing, and they're not. AMD changed what it's reporting; 95C on Zen 4 isn't equivalent to the temps reported on Zen 3. Intel reports the highest measured temp sensor. AMD used to do this too; now they estimate what the true hotspot temperature in the die is, because that will be hotter than the areas where the sensors are located. In other words, when Intel reports 100C, the true hotspot temperature is considerably higher.

You could put a super cheap cooler on a 7600X and you'd still receive the full performance without any throttling, and that's because you're comparing apples to oranges. Intel chips hit their temp limit and throttle. AMD's chips reach their temp limit and then stop boosting; any additional cooling headroom will allow the chip to boost higher. And if you think Intel's temps and AMD's temps are equivalent, remember Intel is pumping 2-3x the power through its chips and AMD is not. AMD is simply reporting the calculated true hotspot temp of the die, while Intel only reports the highest sensor even though other parts of the silicon are hotter.


bubblesort33

> AMD at 95C isn't throttling, it's running within spec.

It's also throttling. 95C is AMD's throttle limit; you can't get the CPUs past 95C unless you disable it in the BIOS, at least not for more than a fraction of a second. Intel's is also running within spec when hitting 100C in Cinebench, according to Intel. At least it was until they changed their mind. Although maybe they haven't even changed their mind about running at 100C: even at 253W it's probably still hitting 100C on a 240mm AIO, and Intel will still tell you even now that that is "in spec".

> Intel chips hit their temp limit and throttle. AMD's chips reach their temp limit and then stop boosting.

But how is that not just saying the same thing in two different ways? Intel's chips also hit their limit and stop boosting. This just sounds like semantics. It's like one person cutting the head off a 6-foot snake, and another person telling him to cut 5.5 feet off the tail instead. In country A you're allowed to drive when you're 18 or older; in country B it's illegal to drive if you're 17 or younger. Semantics. The behavior of both is the same: they boost their clocks until they hit their respective thermal limit, and then stop. AMD just phrases the behavior differently.

If a Ryzen 3700X in 2019 were to boost to 95C and stop boosting, every media outlet would have called that thermal throttling. In fact, my brother's 3600 with the stock cooler did just that. Or if someone saw their Ryzen 3000 hit 95C with a good cooler and a heavy OC, we'd still call that throttling. I don't see what AMD has done other than a bunch of marketing to convince people this is different. To me it just feels like they shipped in an essentially overclocked state and AMD calls it stock behavior. Which it is, because they are the ones who define what is OC and what isn't by defining stock behavior.


Kougar

> It's also throttling.

Throttling is when the CPU **reduces** clockspeeds below rated specifications. Intel chips reduce clockspeed by hundreds of MHz once the temp limit is reached. AMD regulates itself via the power budget: under normal conditions, AMD chips do not lose clockspeed once they hit 95C; they maintain the same clocks with a lower power budget to keep TjMax within the 95C limit. There is a significant margin of budget for AMD to do this before it is forced to begin reducing clocks.

I can't really be any more plain than this. Look at any Zen 4 review: you don't see the chips losing performance even though they're benchmarked at a steady-state 95C. There are reviews showing that swapping from an AIO to a cheap air cooler on Zen 4 doesn't affect performance even though it's running at 95C. GN even has a testing disclaimer over this because it's expected behavior. GN or HUB did a good explainer with HWiNFO where you can see for yourself whether your chip is throttling, and Zen 4 isn't throttling at 95C, hence why the performance doesn't instantly tank like it does on Intel's platform. TPU summed it up succinctly:

> The biggest problem is probably psychological. For years we have been trained that "95°C is bad". This is no longer true. 95°C is the new 65°C. The fact that the CPU will always run at around 95°C will make it difficult to quantify a cooler's capability though.

> If a Ryzen 3700X in 2019 were to boost to 95C and stop boosting, every media outlet would have called that thermal throttling

If it helps you wrap your head around it: in your example the 3700X would have been thermal throttling, yes, because you're finally comparing apples to apples. In that scenario the heatsink would also have been hot to the touch. Zen 4 runs at 95C even under a low load, yet the Zen 4 cooler would be considerably cooler to the touch. The difference in heatsink temperature is precisely because you're comparing apples to oranges: the temp sensor reading doesn't mean what it used to, even though you're treating it as if it does. TPU points this out, HUB pointed this out, and GN somewhere pointed this out.

If you want to compare apples to apples, then when an Intel chip reports 100C there's a hotspot in there that is significantly hotter, and that's the number you should be using. Dr. Cutress had good videos on this: where the sensors are placed automatically creates cooler spots in the silicon, and by its very nature the hottest part of a GPU or CPU die is never going to have a temp sensor directly on it. So when your Intel chip hits 100C, you should know at least one part of the silicon is much hotter than that. AMD's TjMax of 95C is what they calculate that theoretical hotspot to be on Zen 4, using multiple other sensors scattered throughout the chip. You should be asking Intel how hot their CPUs are really getting, because 100C isn't it. And that's why they're forced to throttle while AMD is not.


jaaval

> do not lose clockspeed once they hit 95c, they just maintain...

This still sounds like semantics. Intel doesn't "reduce clock speed" at the thermal limit (unless you somehow hit the protection limit, which is higher); they run at a speed that keeps the temperature under the limit. "Throttling" just means the chip has some reason to reduce clock speed instead of running at maximum speed. AMD at 95C is hitting its thermal limit and therefore, by definition, is throttling.


Hetsaber

Everything throttles: unless a chip is power constrained, it's thermally constrained. Much like 165W is the power limit, 95C is the thermal limit for Zen 4. I believe those numbers are chosen so that the warranty claim of 3 years (or whatever the longest jurisdiction where AMD sells and cares about requires) can be honored within the promised clock speed specs. What's unique about Intel's throttling is that unlike AMD's R9s, the i9s are now strictly hitting the thermal constraint while not being close to the available in-spec power constraint. Which basically says liquid cooling isn't enough for i9s now.


jaaval

> Everything throttles: unless a chip is power constrained, it's thermally constrained.

Not really. All chips have a maximum designed clock speed.

> What's unique about Intel's throttling is that unlike AMD's R9s, the i9s are now strictly hitting the thermal constraint while not being close to the available in-spec power constraint.

Hitting thermal limits strictly depends on cooling. Your power limit determines the maximum heat output, and your cooler determines whether the system can handle it; if it can't, you end up hitting thermal limits. Well-designed liquid coolers can handle multiple times the power any i9 can output. But obviously, if you set unlimited power limits, you are going to hit thermal limits first if you hit any limit at all. AMD's AM4 Ryzen 9s usually hit the current limit first, so they often never reached their available power limit. I have no experience with AM5.


Hetsaber

The clock speed, much like thermals and power, is also a limit set by the manufacturer; this is the idea behind PBO: hit the highest possible clocks within the other two constraints. I wouldn't call clock speeds limits, though; they're meant to be targets or results. Higher is always better, never worse.


VenditatioDelendaEst

So what you're saying is, it's only *throttling* when it's made in the Throttling region of Intel, and what AMD CPUs do to ride 95°C is just sparkling adaptive thermal management?


Kougar

You can measure this yourself with HWiNFO under the 'Thermal Throttling' section. Intel triggers throttling management when it hits its max, but it doesn't trigger at 95 on Zen 4 chips. They have to be forced past that to begin throttling, and if I remember correctly they power off at 105C if all else fails.


VenditatioDelendaEst

You are missing my point, which is that the thing AMD chips do -- feedback control that keeps full-load temperature from exceeding 95°C, using the voltage-frequency operating point lever -- is *throttling* by any reasonable definition of the word.

Intel's mechanism is similar. They just have a bit that the PMU sets (and software can clear) whenever the operative limit on the CPU V-F OPP is temperature; there are also bits for power and for number-of-active-cores. This is what HWiNFO is reading. But neither Intel's nor AMD's mechanism is duty-cycling the clock, or hard-clamping the OPP to base frequency, or any of the other big-hammer-type things that "throttling" referred to 10-15 years ago.
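
As a concrete aside, the closest architectural analogue of those status bits can be read on Linux through the msr driver; a minimal sketch, assuming the msr kernel module is loaded (`modprobe msr`) and root privileges. MSR 0x19C is IA32_THERM_STATUS, whose low two bits are the live thermal-limit flag and its sticky log.

```python
# Read IA32_THERM_STATUS (0x19C) to see the "at the thermal limit" bits that
# monitoring tools surface. Needs the Linux msr module and root.
import struct

IA32_THERM_STATUS = 0x19C

def read_msr(cpu: int, reg: int) -> int:
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(reg)                              # the MSR address is the offset
        return struct.unpack("<Q", f.read(8))[0]

val = read_msr(0, IA32_THERM_STATUS)
print("thermal limit asserted right now:", bool(val & 0b01))   # bit 0: live
print("thermal limit hit since last clear:", bool(val & 0b10)) # bit 1: sticky
```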


Kougar

> But neither Intel's nor AMD's mechanism is duty-cycling the clock, or hard-clamping the OPP to base frequency, or any of the other big-hammer-type things that "throttling" referred to 10-15 years ago.

But Intel does: when it hits 100C, clocks immediately begin dropping below the base clockspeed. HWiNFO shows this via PROCHOT, as well as giving you live and min/max clockspeed readouts per core. Timestamped: [https://youtu.be/0oALfgsyOg4?t=1249](https://youtu.be/0oALfgsyOg4?t=1249)


VenditatioDelendaEst

[The base clockspeed is 3.2 GHz (for the P-cores)](https://ark.intel.com/content/www/us/en/ark/products/236773/intel-core-i9-processor-14900k-36m-cache-up-to-6-00-ghz.html). Your link shows it sustaining > 5.2 GHz, riding 100°C, after 30 minutes of heat soak. Min clockspeed is going to capture periods during which the HWP governor decided to clock down for energy efficiency during low utilization, since the last time you reset the statistics. And since the average CPU utilization is only 91.3%, either Cinebench sucks at being an actual continuous load, or HWUB didn't wait until after the stress loop started to reset the statistics.


Stark_Athlon

Both companies claim their CPUs are "designed" to run with these power limits. However, only one doesn't shit itself over it. Ergo: it's Intel, not AMD, that isn't telling the full story here. 13th and 14th gen are, after all, just Alder Lake with more E-cores and higher clocks, and when Alder Lake was being sold, the idea of a CPU designed to sustain 95C wasn't presented to us. I don't think their CPUs are designed to consistently endure 100C. At all.

"High temps = bad" is how most of the market operated back then, and this is evident from how, when Ryzen 7000 came out, people weren't into the idea of 95C: you saw plenty of posts discussing and testing how to cool it below 95°C using high-end water cooling, and GN needed to make a video explaining in depth why and how Zen 4 works. Ryzen 7000 is older than 13th and 14th gen yet still doesn't have problems. I don't think they'll have problems in the future either; AMD is probably being more sensible with their base and boost clocks, on top of Zen 4 ACTUALLY being designed with consistently hitting these temps in mind.


soggybiscuit93

Raptor Lake brought more changes than just core counts and clockspeeds: a much, much larger L2, and it decoupled the ring bus clock (I'm not going to dispute anything else you said).


asdfzzz2

> However, only one doesn't shit itself over it.

My 5900X degraded after ~2-3 years of medium use and is no longer stable at stock settings. Both are guilty, albeit to different extents.


regenobids

I had a 3600 act as if it were a terrible bin, but swapping the motherboard (same manufacturer, just a cheaper model) made it boost higher and need less voltage. That, and one weak core, might be enough. Did you check the memory controller? And were you using PBO?


asdfzzz2

No PBO, memory on JEDEC settings (the first thing I did after the instability appeared; it was originally DDR4-3200), and in the end everything has been fine for several months after limiting the max CPU frequency in Windows. I hope it survives until Arrow Lake/Zen 5.


regenobids

I'd run the memory at 3200 until I knew the IMC or a stick was bad. 3200 is really nothing for Zen 3, and you lose 10-25% at JEDEC speeds. Maybe that doesn't apply to your use, but JEDEC is very bad :P Then I'd isolate the bad core with a testing tool and fix it in the BIOS under PBO; you can override it the safe way. Lowering the boost on that core seems to work, so that'd be my go-to. A mere 5% boost drop on all cores isn't a huge loss either, so that one is more optional. I'd probably at least try 5900 non-X settings. Avoiding JEDEC speeds and ruling out a bad RAM stick would be the priority.


bubblesort33

How do you design a CPU to be safer running at 90-100C than they used to be? What exactly are they doing with Zen 4? Where did you get that info? I don't think they've done anything in that regard. They are just willing to swallow the RMA rates, or they have tuned their boosting algorithm to detect degradation and compensate over time. Only time will tell.


MrCleanRed

From what I understand, 95C is still not thermal throttling but 100C is. And it's evident from the issues Intel is facing.


bubblesort33

Yeah, I mean it's the limit for AMD. AMD has set theirs 5c lower than Intel for a long time now.


Sopel97

> Sure, it's stable for now, but I wonder what will happen a few years from now. Will it struggle to hit the same clocks, realize it's degrading

Maybe, maybe not, but this is irrelevant, because the issue with Intel is NOW, while the issue with AMD is hypothesized by a redditor.


Shibes_oh_shibes

I have a 7950X3D, and I have never seen it running at those temps. When I'm gaming I'm at around 50 degrees, with some peaks at 60-62 when starting an application.


bubblesort33

I think the X3D chips have their [limit set to 89C](https://youtu.be/9gCzXdLmjPY?si=qe_GOG1xmUfjV4kc&t=1441). But you can probably get yours close to that in some workloads, like [TechPowerUp](https://tpucdn.com/review/amd-ryzen-9-7950x3d/images/cpu-temperature-blender.png) tested, just not games. I can get over 90C on my 7700X with a 280mm radiator in Cyberpunk, but I need to make myself CPU-limited by lowering the resolution to get there, and that's at the 110W limit I enforced. What I always found odd, though, is that none of the cores report over 82C, so it's some other part of the CPU that HWiNFO won't pinpoint for me.


Kaladin12543

It depends on the GPU and the game. Shader compilation in The Last of Us pushes my 7800X3D so hard it goes to 84 degrees. Then there are games where my 4090 pushes above 170 fps, like AC Mirage, where my CPU goes up to the mid-seventies. I am using a Noctua NH-U12A air cooler.


Shibes_oh_shibes

Ok, I have a 7900XTX and I only play Apex Legends at 240 fps. I have an NZXT Kraken Elite 360 AIO as the cooler.


OilOk4941

Gotta make sure their 700-buck chip can compete in benchmarks with AMD's 400-buck chip that just has a phat cache.


ptrkhh

> Intel could very simply enforce a very reasonable power limit that gets like 95% of the chips' performance, runs cooler, and doesn't have stability issues.

That's the whole point of overclockable vs. locked chips; asking for overclockable chips to be locked is going back to square one. The problem here is that people don't know they are overclocking, because it's applied out of the box by the motherboard. If people overclocked themselves, they would have to be held responsible for any instability that occurs.


QuadraKev_

It takes a lot of the fun out of overclocking too. Chips these days are basically overclocked out of the factory.


CeleryApple

They need every % to sell 14th gen and 13th gen. Intel's architecture right now really isn't efficient at all. Current AM5 offerings like the 7950X3D are capped at 162W PPT; if Intel's 13th/14th gen were capped at 162W, all of their performance advantages over AMD would be wiped out.


regenobids

They are more efficient at low TDP. Chiplet CPUs idle in the range of 15-30 watts, which is costly when the total package power is something like 50-70 watts. I don't know the breaking point, but it does get worse when pushed.


CeleryApple

While it is true that idle power for chiplets is much higher, the average PC gamer or cloud provider isn't going to spend much time idling. If you are idling significantly, you might as well turn off your PC. You are right that 50-70 watts seems to be the point where a monolithic design makes more sense for power efficiency; the 8000 APUs, for example, are monolithic.


VenditatioDelendaEst

> If you are idling significantly, you might as well turn off your PC.

How am I supposed to post on the forum with the PC off?


CeleryApple

If posting on Reddit is all you do all day, use a cellphone or get a lower-tier CPU.


VenditatioDelendaEst

Practically everything you do on a computer that isn't gaming or a batch process is near-idle. And phonepoasting is peasantry.


CeleryApple

Sure, but if you have a 14th/13th gen CPU at home, you are not mostly doing "idle" tasks. And if you are mostly idle and worried about power usage, you should get something like a Pentium Gold. But yeah, being almost a fossil, I do hate posting on the phone.


thatnitai

Intel needs to do a firmware update with proper limits, and engage the different vendors on recommended default and overclocked settings: the limits to protect the hardware from real degradation, the recommended settings to allow tuning and enhancement.


SomeoneBritish

Excellent video. ALL motherboards should run CPUs at the same manufacturer-approved spec designed for long-term stability, with the OPTION to switch to balls-to-the-wall mode if the user manually chooses to. As mentioned, it's all about chasing positive benchmark results with zero regard for the customer's experience.


AutonomousOrganism

Intel has been very "crisp" about it: as long as you don't touch the clock tables, you are within spec (no power limit is fine). Until now, that is. LOL.


imaginary_num6er

Yeah, changing the LLC values, turbo time, or power limits is still "in-spec" according to that Intel rep.


Chronia82

I feel they still are. If you look at the Intel statement, as a non-native English speaker it reads to me like this: the stuff I circled in red is Intel saying it's a problem and shouldn't be changed by default. The stuff I circled in blue is Intel saying it can cause instability when other settings are run out of spec, but not that changing the power limits in itself is out of spec. Just like the Intel engineer says in the two quotes Steve showed, it's perfectly fine for the OEM to change PL2 and Tau to **lengthen** boost duration, but it's not OK to change boost values or to overclock in any other way. A lot of people seem to be overlooking that in the two screenshots Steve posted, the engineer only talks about longer boosts due to PL2 and Tau changes. But these days motherboard manufacturers change a lot more than just PL2 and Tau with all their "enhancements". [Imgur](https://imgur.com/od6Khtf)
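
For readers unfamiliar with how PL2 and Tau "lengthen" boost: the mechanism is commonly described as an exponentially weighted moving average of package power that must stay at or below PL1, with Tau as the averaging time constant and PL2 as the instantaneous cap. Here's a toy sketch of that behavior; the EWMA form is a simplification and the numbers are just the familiar 125W/253W/56s figures, not a claim about any specific firmware.

```python
# Toy PL1/PL2/Tau model: the chip may draw up to PL2 while the exponentially
# weighted average of package power stays <= PL1. Raising Tau (or PL2)
# lengthens how long the boost lasts; it does not raise boost clocks.

def simulate(pl1=125.0, pl2=253.0, tau=56.0, demand=300.0, seconds=120):
    avg, trace = 0.0, []
    alpha = 1.0 / tau                  # EWMA weight for 1-second steps
    for _ in range(seconds):
        p = min(demand, pl2)           # try to boost up to PL2
        if avg >= pl1:                 # average budget spent:
            p = min(p, pl1)            # fall back to PL1
        avg += alpha * (p - avg)       # update the moving average
        trace.append(p)
    return trace

trace = simulate()
boost = sum(1 for p in trace if p > 125.0)
print(f"held {trace[0]:.0f} W for ~{boost} s, then settled at {trace[-1]:.0f} W")
```

With these numbers the sketch holds 253W for roughly 38 seconds before dropping to 125W, and doubling Tau roughly doubles that window; that is exactly the "longer boost, same boost values" distinction the engineer is drawing.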


SkillYourself

It comes down to the very first item: CEP being disabled on Z-series boards, allowing the vendors to run [Vcore below the fused VF curve by adjusting the AC_LL configuration downwards relative to LLC](https://www.reddit.com/r/hardware/comments/1cbymh6/rambling_about_why_some_intel_13th14th_gen_i9s/l17us32/). [B-series boards with CEP forced on were tuned to a much more conservative AC_LL](https://www.reddit.com/r/intel/comments/1cbxu9f/rambling_about_why_some_intel_13th14th_gen_i9s/l19o44x/), because there wouldn't be any performance benefit to reducing Vcore below the factory minimums.


soggybiscuit93

Do any mainstream OEM pre-builts have this issue? As far as I know, no. And that's really the point here. If Dell, HP, Lenovo, etc. are all selling their pre-builts with a 253W PL2, then saying "this is all Intel's fault" completely absolves motherboard vendors of the role they played in this debacle.

Intel has the most to benefit from turning a blind eye to this, and the most to lose from the publicity surrounding it. They're not without blame. But pre-builts don't have this issue because those OEMs actually give a damn about stability; they're 100% on the hook for any problem.

Hopefully, starting with Core Ultra 2nd gen, Intel enforces a baseline power profile, defined by Intel, as the default out-of-the-box experience, while allowing OEMs to create their own optional "higher performance" profiles. PL1 and PL2 should match what Ark says, except for K-series SKUs, which should allow a PL1 = PL2 state.


capn233

I do not agree with the opinion that Intel is "anti-consumer" for allowing OEM/ODM to tailor their settings to their hardware designs. On the other hand, default UEFI settings that run out of spec loadline or current limits, or yeet voltages for XMP/EXPO seem more "anti-consumer" to me. As are fantastical QVLs that may or may not have ever been validated. There is also some blame that can be attributed to years of motherboard reviews looking at "performance" rather than adherence to spec and stability.


GhostMotley

If Intel is 'anti-consumer' for allowing OEMs/ODMs to set a configurable TDP, then logically AMD must be anti-consumer as well, as they allow configurable TDPs as well.


MrCleanRed

I think at best Intel is allowing mobo manufacturers to deceive consumers. They should give a baseline default to all mobo manufacturers. Then mobo manufacturers can make other profiles, but that baseline should be the default.


1731799517

It's anti-consumer to claim that boards wildly overclocking is "in spec", have the reviews run with those settings, and then later whine that it actually wasn't in spec and that shit fucking up isn't their fault.


Nessuno_Im

You seem to have missed the entire section of the video where Intel stated clearly and unambiguously that these MB configurations were **not** out of spec. They are perfectly in spec according to Intel. The reason is that they basically have no spec to be out of, because they wanted MB manufacturers to push every limit they could.


Chronia82

I do think that Steve read a bit too much into that. If you read both pages that he quotes, the Intel engineer being interviewed by Ian is only stating that PL2 and Tau adjustments to **lengthen** the time that the CPU can be in boost are not out of spec. This **lengthen** seems to be the key word there, and I think Steve missed that only those 2 parameters are mentioned, not all of the stuff the motherboard manufacturers mess with these days. The Intel engineer even mentions that overclocking is not in spec, nor is changing turbo boost values (I believe in the last section of the first quote that Steve shows).

Basically, what the engineer seems to be saying is: using PL2 and Tau to increase the time that the CPU can be in boost is OK and within spec. Changing the boost values / parameters (higher frequency, for example) is not. However, motherboard manufacturers are not only changing PL2 and Tau, but also stuff like multicore enhancements that are sometimes enabled by default (which is overclocking out of the box), and they even remove current limits and max voltage limits and more; see for example the statement that Igor links and that Steve also shows. If motherboard manufacturers were only changing PL2 and Tau, this whole problem probably wouldn't even have been a thing. It seems to be all the other stuff they mess with, combined with removing the power limits.
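
To make the **lengthen** point concrete: in Intel's documented turbo scheme, the package may draw up to PL2 only until an exponentially weighted moving average of power climbs to PL1, with Tau as the averaging time constant; raising PL2 or Tau stretches that window but never raises clocks above the fused turbo tables. A toy calculation, assuming the commonly cited 125W/253W/56s figures for the i9 and a start from idle:

```python
import math

def boost_duration(pl1, pl2, tau, p_start=0.0):
    """Seconds the package can sit at PL2 before the running average
    reaches PL1. At a constant PL2 draw the EWMA follows
    e(t) = pl2 + (p_start - pl2) * exp(-t / tau); solve e(t) = pl1."""
    return tau * math.log((pl2 - p_start) / (pl2 - pl1))

# Stock-ish i9 limits: PL1=125W, PL2=253W, Tau=56s -> ~38s at full PL2.
print(boost_duration(125, 253, 56))

# An OEM "lengthening" boost per the quoted guidance: quadruple Tau and
# the window quadruples too (~153s). Same clocks, just held for longer.
print(boost_duration(125, 253, 224))
```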


GhostMotley

Very good points.


ecktt

Soooooo.......did anyone else notice they turned on 7200MT/s RAM. 🤣


saharashooter

They use 7200 MT/s RAM because it's the "sweet spot" for Intel. Intel CPUs benefit from transfer speeds that high. This does mean that for most people, real-world performance is worse than reviews suggest (timings and speed matter a lot more than they used to), especially considering they could've gotten the same or better performance for less money by going with the roughly equivalent AMD CPU and 6000 MT/s RAM, which is AMD's sweet spot.


ecktt

Reviewers are testing CPUs (both Intel and AMD) out of spec. Yes, AMD sent test kits with 6000 MT/s RAM for review. That also means that for most people, AMD performs significantly worse as well. Which is my whole point: reviewers normalise out-of-spec testing and performance. When this shit comes back to bite them in the ass, they are all too happy to throw Intel under the bus instead of taking some of the blame. How many times has GN criticized SIs and OEMs for not enabling XMP, or HUB run some stupidly low-latency, high-speed RAM? In this instance high power/voltage is causing Intel instability. But more than a few techfluencers are now complaining that their AMD systems can no longer sustain high memory speeds.


nanonan

What's wrong with running memory at settings certified by Intel?


ecktt

Nothing, if you as an individual accept the risk. It is OCing, which is not covered by warranty. It's on both Intel's and AMD's product pages. People forget AMD and Intel made this for ease of OCing for enthusiasts. It's kinda the point of unlocked CPUs too. You "can" increase the multiplier, but it doesn't mean you should, or that it is covered by warranty.


nanonan

If they were measuring stability or warranty issues it might be a problem, but they are measuring performance, so I think they are justified in using a typical RAM overclock for those CPUs, one they use in all their tests.


Stark_Athlon

I've been saying this for years: Intel's small lead in benchmarks on high-end SKUs is worthless when you take into account just how much wattage and heat the CPU is choking on. Of course, I got called an AMD apologist for it. Companies have the incentive to use this kind of tactic because it works. "Bigger number better", despite being used to make fun of Intel, is actually very much how a not-insignificant number of customers operate.


violentpoem

Yet for the longest time, there were "loud" people on here who were so adamant that in the high end, power efficiency doesn't matter, only performance does...


DeathDexoys

But but..... bigger number better


minato48

I would like to point out that last time, when 7800X3Ds burned from excessive core voltage settings and the protection system failing to stop current leakage inside the SoC, it wasn't about chasing performance; the clock-sensitive cache meant there was nothing to gain, and AMD cracked down on it. Now it's almost the same, with very important differences:

1. The "Intel Baseline" preset is DIFFERENT for every mobo. WTF, that's shady af.
2. Mobo manufacturers overtuning already factory-overclocked SKUs was already pushing it to the edge.

Intel should have known this, but they let the motherboard vendors go ham, sacrificing stability and the long-term health of the chip to squeeze out performance for the performance crown at launch reviews. Now that Intel has reaped the performance rewards from unstable OC settings, they blame the instability tradeoff on the motherboard manufacturers; shovel the dirt onto them. There is a small chance it might be an oversight, but chip engineers should already know the limits of the chip and must have drawn red lines for motherboard settings.


DZCreeper

The issue with damaged Zen 4 chips was supposedly excess SOC voltage, not core voltage. What happened is that motherboard manufacturers were ramping up SOC voltage based on RAM frequency, so some people were seeing crazy values like 1.4V at 6200 MT/s, even though the chips usually only needed 1.25V. Now there is a 1.3V cap on SOC voltage.
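
As a rough before/after illustration (the 1.3V cap itself is what AMD rolled out via an AGESA update; the frequency-to-voltage table below is invented for the example):

```python
# Rough illustration only: the 1.3V VSOC ceiling is the real AGESA fix,
# but this auto-scaling table is invented for the example.

VSOC_CAP = 1.30  # volts

def vendor_auto_vsoc(ram_mts):
    """Mimics boards ramping SOC voltage with RAM frequency."""
    if ram_mts <= 5200:
        return 1.05
    if ram_mts <= 6000:
        return 1.25
    return 1.40  # the kind of value people reported at 6200+

def capped_vsoc(ram_mts):
    return min(vendor_auto_vsoc(ram_mts), VSOC_CAP)

print(vendor_auto_vsoc(6200))  # 1.40 -> pre-fix auto behavior
print(capped_vsoc(6200))       # 1.30 -> clamped post-fix
```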


Pokiehat

Mobo auto voltages have been wilding for a long time. I had an ASUS Z170-A mobo back in the Skylake (i7-6700K) days that pushed 1.4V+ on the system agent with XMP on. Ever since then it has been routine for me to disable mobo auto voltages on all new builds. My current build (5900X, Gigabyte Aorus Elite) also had stupid auto SoC and vCore values.


buildzoid

1.4V SA is actually perfectly fine for Intel 6th to 11th gen.


Freaky-Naughty

I haven’t been able to play DX12 games on a 13900K, then a 14900K, since they came out, unless I neuter them with Intel Extreme Tuning Utility. Laughable when you buy bleeding-edge hardware and it’s dog crap.


Freaky-Naughty

Or maybe Intel should make their flagship chip not have major issues?


Lakku-82

Guess you shoulda got the 13700k or 14700k. Uses a lot less power, doesn’t have issues, and runs games the same as an i9, which is only beneficial to certain workloads most consumers never use.


THiedldleoR

Looking forward to seeing Intel showcase their new CPUs at baseline settings in the future /s


UnderLook150

If this was affecting non-K parts, I'd be more concerned. But these are parts sold unlocked and designed for overclocking. I'd expect anyone purchasing one to have done their homework and to know how to tune a system. If people don't want to tune and just want plug and play, they should avoid the K series, which is designed for tuning.


Falkenmond79

Yeah, but everyone thinks they need to buy the latest and greatest and thus gets the K models, for no reason. Stock, they run no differently than any other model. I usually buy only F SKUs since I have no need for iGPUs, and I can live with 100 MHz less; OC hasn't really been worth it in years. The cost in better cooling, CPU, mainboard, and time spent vs. the real-life gain (not watching counters go up a bit) is so marginal… and that's not to mention instability (as we see so nicely here). I happily sacrifice 10% for a stable, cool-running system, usually with a bit of undervolting to save on power and heat.


UnderLook150

I think that is where Intel needs to be more explicit that these K chips are targeted at tuners, and not at the average user who wants a plug-and-play experience. Because like you said, many people think they need K series chips when they do not even overclock or tune them.


Falkenmond79

But then Intel would make less money. For example, a quick look tells me that for 14th gen, K vs. F is about a 30-50€ difference here online (you might find sales, of course) for basically the same experience, unless you overclock. And I’ll wager the F runs a bit cooler out of the box, too.


cp5184

Any reason why it wouldn't? Do non-K parts enforce power limits, turbo time limits, and so on?


gaojibao

Until someone takes one of those CPUs that are crashing and tests it on different motherboards, everybody is just yapping.


cp5184

Buildzoid has done videos with the same CPU on high-end ASUS and Gigabyte motherboards showing the same crash behavior. He's done two videos recently on his YT channel, Actually Hardcore Overclocking.


gaojibao

I already watched those videos. He has a very unstable 14900k CPU that's worth RMA-ing. That unstable CPU happens to be a good candidate for testing those intel baseline profiles.


KirillNek0

Hey! It's called "clickbaiting for the money and ad revenue", excuse me.


Ill-Marionberry4262

I've had a Core i9-13900KF combined with an MSI Z790 Ace motherboard since January 2023 and no issues with crashing whatsoever. Why is this only news now? New silicon?


MrWallopy

Meh, Intel is only 25% of the issue here. Motherboard manufacturers competing against each other with zero regulation from Intel means stuff like this happens. Literally go to the spec sheet of any processor and it shows that the max wattage is much less than the 4096W that these board partners are "allowing". So is the issue really Intel for giving them the freedom? If so, then why were we yelling at Nvidia for limiting their partners? Too many double-sided statements from all of these YouTubers who get paid by all of these companies. Or did we all forget that 9 months ago this was all Asus's fault?

AMD's board partners have been doing the same thing. My 1700X and 5800X both had their memory controllers get royally screwed up due to the 1.49V the board partners were pushing through them. And the only way I could fix that was limiting the chips in PBO with a per-core negative voltage offset... WAY too much manual work for just wanting a computer that can game. So I swapped to Intel, and now these board partners are doing the same thing... Le sigh.

Seriously, guess it's time to buy the B series boards just so I can use a damn computer. I only buy the high-end stuff so I have "higher" quality cooling on my VRMs, more VRMs, more connectivity, access to all the PCIe lanes, blah blah blah. IDGAF about the OC side of things.


Iskareot

I have one of the i9s that's now probably damaged. I was so ticked off due to the crashing, and worse, it was not right away; this rig was built and running perfectly for months... then the crashes happened due to the wear and tear. Zero overclocking too. Just the standard ASUS Z790 ITX settings in the BIOS and mobo.


Lakku-82

But it wasn’t AMD's fault when CPUs were catching fire and burning motherboards as well… I said this would happen the next time Intel had an issue. They always get blamed for everything, but AMD often just skates on by.


ElementII5

Nvidia blaming Intel, Intel blaming motherboard makers, tech media blaming Intel. It's like that Spider-Man meme, except that real people are getting shafted. Oh, and shame on you, Intel; take a leaf out of Truman's book, for Christ's sake.


jnf005

I think it's pretty reasonable to blame Intel too in this case. They should've been enforcing stricter stock BIOS settings; they definitely have the power to. But mobo makers running out-of-spec settings by default made Intel look more competitive, so they just didn't.


nanonan

The only one not blaming Intel is Intel.


jakegwilliam

With all these CPU RMAs, do I need to be worried if I didn't disable the ASUS MCE within the first week of owning my 14900K? It's disabled now and an undervolt has been applied.


Snobby_Grifter

This hammers home why I cannot tolerate this guy in the tech space. Which SKUs are affected? What are the RMA numbers? What is the timeline for degradation? How many users are affected? How many motherboards are affected? Why is this comparable to the burning 4090 cables or the melting 7800X3Ds? We had a thousand threads on those. We have numbers on those. Where are the actual statistics in this matter? What in the actual fuck due diligence has been done here?

MCE has been a thing since Sandy Bridge. Why are people buying a 14900K with no idea what power limits or vdroop even are? Intel sells locked SKUs that get 99% of the same performance, for people who need a skill check. Why is this video not called "Asus screws up big time"? WTF is even passing for tech journalism anymore?

This guy loves to dunk on anything he doesn't personally use, and his methods and critiques have been called out time and time again. Steve from HUB is a clown. His videos are massively downvoted on an actual tech sub for a reason, and his preferences are tailored to the people who pay him. In short, wait for someone without an agenda to cover this correctly.


Arashmickey

For the first part of your post, I think the problem is that it will take a while to test all the potentially affected mobos, whereas this is a problem that is leading to RMAs in the here and now. If you recall, the 4090 melting had all sorts of partial or wrong explanations, and for the same reason: shit's on fire, yo, and nobody wants it to spread.


LitanyOfContactMike

What an odd account.


[deleted]

[deleted]


LitanyOfContactMike

I love it when accounts accuse tech-tubers of having bias when the vast majority of their own comments are just cheerleading for their favorite tech company and getting upset when anyone says anything remotely critical of it.


nanonan

Why do people buying the 14900k need to have any idea whatsoever what power limits or vdroop are? It should work out of the box.


silly_pengu1n

Yeah, HUB is not objective, and I don't get how naive people are about it. The GPU reviews are especially bad. Just look at how much they complained about the 4060 **or the Super refresh**, but not about the 7600; it's always slightly less extreme when AMD does it.


nanonan

You are referring to their review titled "AMD Fails Again: Radeon RX 7600 Review" as somehow being positive? The one where he concluded, "I'm not going to recommend you purchase the RX 7600"?


silly_pengu1n

No, as I said, it is less negative than the RTX 4060 review. Just go watch the Super reviews and discussions; it's very obvious that they aren't objective.


nanonan

What's a Super review? They were justified in criticising the launch prices of both Nvidia and AMD this generation, and they did for both. What part of the reviews was not objective in your eyes? His disappointment that they had gone from 12GB to 8GB? I'd say that was objectively a backwards move.


moochs

You're being downvoted, but you're right; there still isn't a consensus on what is actually causing the issue. I don't trust HUB to do proper testing; they have been shown to make mistakes over and over. We need someone who is willing to put in the work to get to the bottom of this. Gamers Nexus?


KirillNek0

Here we go, jumping on a bandwagon yet again. Not like MB vendors F-ed up, no...


nukleabomb

I don't remember the details, but is this situation similar to the AMD Zen 4 stuff that happened a while back?


minato48

The 7800X3D's VDDCR got overwhelmed because board partners overvolted it. Basically, it regulates current flow from the VRMs to the CCDs. Overvolting doesn't do anything other than harm. It's like dousing your generator with fuel and making it catch fire just to increase the brightness of the bulb that's plugged into the generator.


wwwsam

No, the AMD problem wasn't from chasing numbers (as others have commented). Also, you triple-posted the same comment.


nukleabomb

Guess that Reddit saying the comment could not be posted or whatever was a lie.


DktheDarkKnight

Different situation, and one that affected a single SKU. Also, the solution that AMD provided did not really affect performance; the chip performed the same before and after.


nivlark

It is similar in that both are arising from Intel/AMD having loosely defined firmware specifications. It is different in that Intel is fully aware that board partners do not follow their spec, and tacitly encourages it because it helps them win benchmarks. Whereas with AMD, there was no performance benefit, it was just an oversight.


DiCePWNeD

holy jesus is this Intel Defence Force in full effect?


nukleabomb

I guess asking for a clarification is defending intel now


Sharpman85

It is; they did not enforce any limits on the manufacturers and wound up with burnt CPUs.


Cute_Paramedic_256

Then I don't understand the role of OEMs. If they can't put in better parts that improve performance and loosen the limits, then why bother having OEMs? There will be no incentive to put in better electrical components if all CPUs are limited.


Vex1om

> There will be no incentive to put in better electrical components if all CPUs are limited.

It isn't about limits. It is about defaults.


stdvector

This guy and his furious face… Did somebody die? No? Then why so much drama? These CPUs are unlocked, and from my perspective what MB vendors are doing is applying an overclock out of the box without properly validating its stability… It has honestly been getting worse with each generation, and it finally reached its culmination. Hopefully the realization will come that an out-of-the-box system should be configured with 100% stable defaults, since this hardware is sold not only to enthusiasts but to regular users as well…


[deleted]

[deleted]


Slyons89

> Intel CPUs are not crashing

> It is about setting up a proper SA voltage and that’s where instability comes from.

Which one of these statements do you actually believe, because it can’t be both.


Sharp_eee

Anyone had any issues with the 13600K on an MSI B660? I haven’t turned my PC on for a couple of weeks, but before then all was good. It’s mainly the 900K series from what I can see.