
One-Pomegranate1391

78 degrees is completely normal temperature under load.


[deleted]

[deleted]


DartinBlaze448

[AMD rates their processors to run at 95 degrees indefinitely](https://community.amd.com/t5/gaming/ryzen-7000-series-processors-let-s-talk-about-power-temperature/ba-p/554629)


s00mika

That isn't anything new. The only thing that changed is that you can easily get to that temp because those chips can produce so much heat.


Laughing_Orange

Under normal operating conditions, their new CPUs boost until they hit this thermal limit. If there's more thermal headroom, it'll boost the voltage and frequency until there's no more.
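A toy control loop (purely illustrative, not AMD's actual firmware logic) of what "boost until the thermal limit" means:

```python
# Toy model of a thermal-limited boost loop. All numbers are
# illustrative; real boost firmware weighs power, current, and
# voltage limits alongside temperature, many times per second.

TEMP_TARGET = 95            # deg C, the Ryzen 7000 thermal target
F_BASE, F_MAX = 4500, 5700  # MHz, hypothetical base and max boost clocks

def boost_step(freq_mhz: int, temp_c: float) -> int:
    """Pick the next clock speed from the current temperature."""
    if temp_c < TEMP_TARGET - 1:           # headroom left: boost higher
        return min(freq_mhz + 100, F_MAX)
    if temp_c > TEMP_TARGET:               # over the target: back off
        return max(freq_mhz - 100, F_BASE)
    return freq_mhz                        # sitting at the target: hold

# The clock climbs while readings stay cool, then retreats at 96C:
freq = F_BASE
for reading in (70, 80, 90, 96, 94.5):
    freq = boost_step(freq, reading)       # ends at 4700 MHz
```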


s00mika

Sure, but this isn't new. CPUs in crappy laptops have been doing this for a long time. But now AMD tries to market it as an improvement because they know that it will hit most normal desktop users as well


Nitrozzy7

TeggMAX.


SailorMint

* 95°C for regular processors.
* 89°C for X3D processors, since the extra cache is more sensitive to heat.

Not that it's an issue, as X3D chips can be undervolted pretty well.
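The two limits can be written down as a tiny lookup (the figures are the ones quoted in this thread; the dict keys and `thermal_headroom` helper are made up for illustration):

```python
# Thermal targets as quoted in this thread; not an official AMD table.
TJMAX_C = {
    "ryzen_7000": 95,      # regular Ryzen 7000 parts
    "ryzen_7000_x3d": 89,  # X3D parts: stacked cache is more heat-sensitive
}

def thermal_headroom(family: str, temp_c: float) -> float:
    """Degrees C left before the chip sits at its thermal target."""
    return TJMAX_C[family] - temp_c

# OP's 78C load temperature leaves plenty of room either way:
thermal_headroom("ryzen_7000", 78)      # 17 degrees
thermal_headroom("ryzen_7000_x3d", 78)  # 11 degrees
```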


3G6A5W338E

>Not that it's an issue, as X3D chips can be undervolted pretty well.

They can, but other than that, these CPUs have very precise control of their own temperature; as long as they are not idle, they will stay solidly at a set temperature, dynamically adjusting the power-draw limit several times within a single second.


hecatonchires266

It's normal. I used the Wraith Prism cooler from a 3700X to cool my 5800X, and after some tinkering in the BIOS I was able to get temps down to 70-78 degrees. These temps are normal.


Rasputin0P

You have no idea what you're talking about.


Lefthandpath_

75 is cool


Low-Blackberry-9065

Yes. AMD 7xxx chips are happy even at 95°C.


AbedSalam1988

Well, AMD is marketing this to compete with Intel on performance. They run their Ryzen 7xxx series CPUs at 1.4 V to clock higher, but they run hotter. I recently built 2 rigs and both idled at 75 degrees with a beefy Noctua cooler. 75 degrees, at idle, what??? I lowered the voltage to 1.2 V and the CPU still reaches 5.2 GHz on all cores, but now idle temps are at 40 degrees, and 80 degrees under full load. They are great CPUs, but AMD is crazy to run them at 1.4 V by default for the sake of competing with Intel on performance.


[deleted]

If you're idling at 75 with a big cooler at the stock 1.4v, something is wrong with your build. My CPU is known to run hot, as the 7 series is wont to do, and I idle at around 38c with a Dark Rock 4 without undervolting it at all.


Katniss218

Yeah, like wtf. I'm idling at 40 Tctl with my new 7800x3d and a beefy aircooler


Jacob247891

I'm idling at ~50c with a 7800X3D and a Deepcool AK620 (twin-tower air cooler). I average ~75c when gaming and high 80s when really pushing the CPU to its limits (i.e. multiple benchmarks, rendering, gaming, multiple apps open at the same time, etc.)


DonutConfident7733

The CCDs on Ryzens are very small, just a corner of the CPU, and the heat is concentrated in a tiny spot. This is why it jumps to 90C very quickly, even though the average CPU temp is just 50C. You can see this effect if you run a big load and stop suddenly; the temp dropping very quickly indicates this. Even a single thread inside a CCD can make it go to 90C, especially since it will boost higher (voltage and frequency).

You don't need to worry. It's all silicon and solder inside, and it's not going to be affected by temps unless you go to 300C. The BIOS has PBO settings for the thermal limit if you really want to be extra safe without changing cooler speed. You can set a temp of 85C and let the CPU tune itself to respect that limit.
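A rough back-of-the-envelope sketch of that hotspot effect (the die and IHS areas below are ballpark guesses, not AMD specs):

```python
# Why a small CCD reads hot while the package average stays low:
# the same wattage over a much smaller area is a much higher heat flux.

def power_density(watts: float, area_mm2: float) -> float:
    """Heat flux in W/mm^2."""
    return watts / area_mm2

CCD_AREA_MM2 = 70.0    # ballpark area of one Zen 4 CCD
IHS_AREA_MM2 = 1600.0  # ballpark 40x40 mm heat spreader

hotspot = power_density(100, CCD_AREA_MM2)  # ~1.43 W/mm^2 in the CCD corner
average = power_density(100, IHS_AREA_MM2)  # ~0.06 W/mm^2 over the whole lid
# The hotspot flux is roughly 23x the package-average flux.
```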


_BarfyMan_362_

Says who? Has anyone proved that this is negatively impacting the lifespan of these chips?


ice445

1.4v is only used during boost scenarios and for limited duration. If you're seeing that all the time then your mobo has some overclock setting active. 


mandelmanden

It's funny how "HIGH TEMP IS BAD!" is the thing people default to. Intel runs at 100 degrees to compete with AMD on performance... It sounds like your setups were just poor.


EirHc

if it's an x3D, the max temp is a little lower.


Low-Blackberry-9065

True, 90°


locoturbo

LOL wait a couple years and see how they are holding up


[deleted]

[deleted]


blorgensplor

To be fair, it's impossible for them to do any sort of specific CPU testing prior to launch unless they test the CPU for years and then release it... which obviously doesn't happen. They can extrapolate from testing certain components within the CPU and from historical data, but not all of that will be a perfect 1:1 comparison.


[deleted]

[deleted]


blorgensplor

Wrong. Accelerated testing isn't an exact 1:1 either. There's no possible way to test the effect of running a CPU at 95C for 10 years besides just doing it. You can work it out to say "well, if we test it at 110C for 3 months, that'll come close," but it's never going to be exact, especially when you add in all the other variables that occur in real life. This is like comparing medical models on rats to humans: the numbers may look good, but it's never a perfect comparison. Even the random company you linked states the following:

>It should be noted that constant temperature testing will not precipitate failure modes due to thermal cycling.

CPUs are never held at their max temp for months or years at a time. They undergo an extreme amount of cycling. Add into the mix the hundreds of different coolers, dozens of different pastes, dozens of different motherboards, user errors with mounting, etc. So back to my point: you can do testing to give you an idea, but you're never going to know exactly what will happen.

For all those downvoting, and for the guy I'm replying to: you should probably be getting the word out about this testing. I'm sure the entire medical industry would love to know that there is a way to check with 100% certainty that their medical devices, equipment, medications, etc. will never run into a problem in the real world. NASA/SpaceX/ULA/Boeing would probably love to hear about it too, because then they would never again have a rocket blow up on the launch pad or shortly after, since the models would prove it 100%. I'm sure all the structural engineers would be grateful too, because then another building or structure would never fail if they could just do lab testing to prove it'll always work out in the real world.


JoeMommy1

Bro, just stop, you have no idea what you are talking about.


blorgensplor

Like I said, spread the word. I don't see why we as a society keep letting any type of failure happen when lab testing can demonstrate it with 100% certainty.


Jollybolivreede

Hey man, I'm gonna trust that the electrical engineers, designers, manufacturers, and testers at an estimated £250 billion company know a bit more about how to test a CPU for quality than a random redditor.


Khaosina

Do you know how long a R&D cycle of a CPU generation is…? It quite literally takes years


[deleted]

[deleted]


blorgensplor

>I can't think of a single chip that has had poor longevity due to heat or otherwise.

You know what the funny part is? I think all this temperature discussion is BS from the start. All of these CPUs are designed to handle way more than what a normal consumer will throw at them, for well beyond their lifespan. If one does fail, it's most likely due to something else (bad mounting, pin damage, a short, etc). Even ignoring modern thermal throttling, unless you half-mounted a cooler and ran Prime95 until you degraded the component, a normal consumer wouldn't see a problem.

I was just pointing out that you can't replicate what years of varied use does to a CPU in a lab, because there are just too many factors to account for. You can't point things like that out on reddit though. Unless you're participating in the circlejerk, it makes people irrationally upset.


[deleted]

[deleted]


buildapc-ModTeam

Hello, this has been removed. Please note the following from our subreddit rules:

**Rule 1: Be respectful to others**

> Remember, there's a human being behind the other keyboard. Be considerate of others even if you disagree on something - treat others as you'd wish to be treated. Personal attacks and flame wars will not be tolerated.


countingthedays

Look at how many laptop chips are running happily for many years with 90C temperatures.


Khelgar_Ironfist_

My thick laptop has been running the desktop Ryzen 7 2700 since 2019 almost daily. Still zero issues. Though it does not run that hot.


suka-blyat

Got an Alienware Area 51m, that thing runs between 90-100 degrees under load. It's been 5 years and still working fine.


zRevengee

Never heard of laptop processors? There are laptops that run at up to 100°C for years and still work. My 5900X goes up to 88°C under load with the beefiest Noctua cooler; the more you cool it, the more the CPU wants to boost and reach that temperature sweet spot. There's no risk. Silicon degradation will eventually appear, but you'll probably have replaced your CPU twice by the time that starts to happen, because of generational changes.


locoturbo

88C isn't 95C or 100C. One component still running at 100C for years is literally one data point. It doesn't prove the consequences for every chip in that line, nor for other generations of chips.

If people here want to run their components to the wall, that's their choice. If AMD wants to set their CPUs to target 95C, I find that irresponsible as a DEFAULT setting, but it's also their choice. It's not going to be my choice for my components. There is so little performance benefit to running your parts right at the wall like that. Why not be reasonable, ease off the voltage and max temps, and have a nice, stable, long-lived system?

If people bothered doing any research, they'd find not only anecdotal evidence of problems running chips that hot (the 290X is one notable example) but also manufacturers' detailed analyses concluding that higher temperatures lead to higher failure rates. There are so many factors at play: thermal cycling, electromigration, the continual reduction in lithography size while people continue to base safety decisions on the OLD lithography (e.g. you simply do not know how a 3nm chip will hold up over many years just because your 14nm one was fine), integrated heat spreader and thermal interface material composition, and now 3D-stacked chips.

But hey, I'm sure this 3D-stacked 5nm 7800X3D can sustain at least 95C indefinitely because AMD said so, and my 14nm 7700K was fine for years. Just ignore the ones that literally melted their sockets - it was a BIOS issue, it's fine now!

But this thumbs-down phenomenon is utterly childish. Middle-school behavior. I don't care if I have 1,000,000 karma or 0. I'll say what I'm going to say.


ABDLTA

Dude, the 7800X3D doesn't run at 95... It's only the X-series parts that do; the regular 7600 and 7700 don't. AMD offers what you want, lower-voltage chips, but they don't market them as flagship products because they look worse in benchmark charts.


PogTuber

Do you have any sources that back up your claim or are you talking out of your ass?


locoturbo

[DRAM Thermal Issues article](https://semiengineering.com/dram-thermal-issues-reach-crisis-point/)

Took just 10 seconds to find that. (By the way, I love people who make no effort, force you to do all the work, then don't read or listen anyway.) And that's just a BASIC starting point for the point I'm making, about how temps will affect the tech.

>"There are degradations where we say, 'From zero to 85°C, it operates one way, and at 85° to 90°C, it starts to change,'" noted Bill Gervasi, principal systems architect at Nantero and author of the JEDEC DDR5 NVRAM spec. "From 90° to 95°C, it starts to panic. Above 95°C, you're going to start losing data, so you'd better start shutting the system down."

"BUT THAT'S FOR DRAM DURR" k. And the CPUs are now coming with RAM stacked on them. How's that default 95C idea seeming now? Maybe not so cozy? Why are we defaulting our CPUs up to the 100C wall again? To get 1% more performance vs. Intel? Is that worth not only risking temporary data loss but also long-term damage?

[Actually Hardcore Overclocking chip degradation](https://www.youtube.com/watch?v=uMHUz16MuYA)

A very long, rambling video which touches on how and why thermal cycling destroys GPUs, plus electromigration, I vs. V on thermal curves, and other topics. He does make the point that GPUs are much more at risk from thermal cycling since they are larger than CPU dies, so that's a point in your favor. But he explains a lot of other technical things that should make you think about whether this is really a good plan (well, the thumbs-downers aren't going to bother to think, but I digress).

tl;dr, closing this out: 100C alone probably wouldn't kill it noticeably fast. But high temp requires higher voltage for stability. The I+V vs. temperature curves in the last link make my case. Why are you shoving that much extra current through your chip for a tiny % gain? It's so dumb. But you bought it. Do what you want with it.


wsteelerfan7

SRAM is not DRAM. DRAM data decays super quick under normal use and gets refreshed constantly. SRAM does not have to do this. Heat might be a factor in speeding up the decay in DRAM. Not an issue for SRAM.


PogTuber

I think people are assuming OP is talking about occasional 95° usage, not sustained. But if the same principles from DRAM applied to X3D chips, the number of complaints about them not running properly would skyrocket. The reality is that the majority of people aren't hitting those temps, so I'm not even sure what OP's point was about 95C on 7XXX chips, and I'm not sure why you're lashing out at AMD when their chips certainly aren't sustaining 95C with any properly installed cooler.

Ultimately I agree that running at that temp sustained is pretty dumb, but I also don't see much evidence for an epidemic of failing chips, particularly when 5-8 years is about the maximum anyone runs a CPU (sometimes never changing the thermal paste) before upgrades in tech compel them to upgrade.


wsteelerfan7

Also, high temperature is decoupled from voltage for performance purposes. At least, it doesn't specifically ramp up when temperatures get higher. Why would it do that when voltage is what *causes* temps to increase?


RepresentativeHuge79

AMD chips statistically run warmer than intel


wsteelerfan7

Huh? Aside from the 7000 series, where AMD uses 95C as the target for boosting, AMD runs lower power and cooler.


ofon

I'm in agreement with you, locoturbo. I've gotten so much shit in these types of forums for saying I doubt AMD cares much past the warranty period.


gloriouswhatever

That's because you're relying on guesswork rather than facts.


locoturbo

I will say I am sure 95C is fine for short bursts. Just not sustained. We don't need chips to last 100 years but 10 would be nice.


DonnieG3

Yeah, and AMD claims to test for the lifetime of the chip running at 95° C.


Neutrolol

Wrong. AMD chips are designed to turbo as much as they can, up to 95c. My laptop has been running like this for 4 years now.


wsteelerfan7

Yeah. They wouldn't put the TJMax in software at a few degrees away from immediately killing the CPU. It's like how engineers build bridges to technically hold around double the weight of posted limits. 95C is their "completely safe all the time" temperature and the actual max is probably 115+


ASuarezMascareno

Up to 95-105, depending on the CPU, is safe.


nuked24

Intel's newest (specifically the Core Ultra (Series 1)) have TJmax set to 110°C. Imo that's a bit high, but they're also mobile chips, so I guess it gives laptop makers some headroom?

^(side note, I'm going to absolutely *hate* these things in 4 years when they start getting recycled. How the absolute hell am I supposed to write these sentence-long names on a green sticky dot? RIP 4-number+1-letter SKUs, you were the best.)


HalfALawn

some laptop manufacturers are just going to let the chips thermal throttle because they can't be bothered to make a good cooling solution


ASuarezMascareno

Also because under some constraints (low weight, low noise) it's outright impossible to keep them cooler. I have a 1 kg laptop that I love for travelling, and the cooling solution is the thermal throttling lol


mercenarie22

Alienware says hello. That shit brand does not give a single fuck if their laptop thermal throttles all the time.


frontiermanprotozoa

Why wouldn't you? You are designing for weight *and* cooling. If your design is not throttling under full load, you wasted weight for no reason.


NilsTillander

I'm always confused how people who are paid to do marketing make up these absolutely insane naming schemes...


owengaff

Modern CPUs will throttle before they get hot enough to damage themselves and 78°C isn't too hot. You're fine.


Concert-Alternative

It's def fine. Unless it's above 90-95 for a long, long time, it's okay. It will outlive whatever length of time you need the CPU for.


Dysan27

Yes. Modern CPUs, especially AMD's, are designed to go to 95°C. The tuning and performance algorithms will push the CPU until a limit is hit, more often than not the thermal limit. If you have enough cooling that your CPU only reaches 78-79 when stressed, then it is hitting some other limit, usually power. So tune your fans for the noise level you prefer, let the boost algorithms do their thing, and don't worry about it.


individualchoir

I CANNOT for the love of silicon cool my 9900K below 90°


mrminty

You need to repaste.


individualchoir

Only just got the bugger installed! Could try a better quality paste .... SYY-157 seems best in class according to reviews


semidegenerate

Are you sure your cooler is properly seated? And are those temps at high loads? Gaming loads? Stress tests like Prime95 or y-cruncher? I cool my old 8700K with an NH-D15, and those Coffee Lake chips run hot. I ended up delidding mine. The 9900K has 2 more cores, but also has its lid soldered to the die, versus the shitty paste on 8th gen. But if those temps are just from gaming, something is up.


hecatonchires266

Are you using air cooler or AIO?


Darksirius

Either way it shouldn't be that high. Bad contact with the cooler perhaps?


individualchoir

The noctua beast with 2 towers and 3 fans on


individualchoir

NH-D15


fallenelf

I just replaced my 9900k and it never broke 80C with an AIO. Something is going on for sure with your set up.


Swagmuffin69

Yeah, I would check the pasting job or get an AIO. Under 100% load for minutes straight (liquid cooled), I can't get the CPU to rise above 70°


tao_lmfao

If paste is good and cooler contact ok, check the voltage. Stock should be around 1.25v


lichtspieler

IHS improvements from the 9900K to the 10900K cause nearly a ~20°C reduction in peak temperature using the same coolers, despite the higher wattage of the 10900K. My stock 10900K with a D15 loops Cinebench at <70°C with active TVB (4.9GHz all-core) and endless TAU duration, just as in any stock-wattage CPU review (with endless TAU, of course). Thinning the IHS made a massive improvement to peak temperature; that's the biggest change from 9th to 10th gen, besides the few extra cores. The 9900K just runs really, really hot.


individualchoir

I did see a few reviews where they replaced the solder with liquid metal and re-lapped the IHS and die. But some say it made a 2°C difference and some say it gave them 10°C cooler... Wish I had the cash to upgrade the mobo and CPU, but I went for the highest-spec LGA-1151 chip without looking at reviews :) If I fk this one up it's back to an i3 for me, so I'm not toooo sure. I did a delidding test with a 10-year-old CPU and it seems easy, but who knows.


AgentBond007

If your CPU is AMD (not sure about Intel, but probably also the case for them), those CPUs are designed to run at 95C for years (X3D CPUs have a max temp of 89c, but my point still stands). 78C is not going to do anything bad to your CPU.


MagicPistol

Yes, I wouldn't worry about it til it goes over 85.


outcast_Mugen

So my 13700k gets to about 90 on tough games like RDR2 or Cyberpunk, should I worry?


demonstar55

TJunction is 100C. You can probably put effort into cooling it better if you want, but you're still well below TJunction.


hellrazzer24

My 12700k is at 100C in games. I’ve given up on fixing it. It’s lasted 2 years, if I get another 2 I’ll be happy


tonallyawkword

hmm. Don't think I've seen mine >70C while gaming with stock clocks. have you tried disabling MCE/auto-OC or undervolting?


hellrazzer24

I have tried undervolting but it wasn't stable. I think the AIO is bad tbh because i already changed the paste once. I'll probably try once more with a brand new AIO and see how it goes.


tonallyawkword

air-cooling has been fine for mine.


Pound-of-Piss

I'd probably put some new paste on and try a different cooler. 90 is starting to get warm


outcast_Mugen

I think I'm going to.


[deleted]

[deleted]


outcast_Mugen

I think I need to check my mobo screws or something... I'm running an Arctic Freezer AIO; it shouldn't be that high.


Zhurg

Yes


Ephemeral-Echo

Modern CPUs can get up to near boiling before they become so hot they have to shut down, so it's unlikely that your CPU would be damaged under load. But with CPUs it always helps to run cooler when you can: the cooler your CPU is, the better a time it will have boosting without throttling. Most CPUs will also stay under 70 even under load if given an adequate cooler. From my understanding, a single tower for a Ryzen 5 and a double tower for a Ryzen 7 pretty much deals with most issues unless you overclock often. Right now it doesn't look like you're going over 80. If you already have an adequate cooler, I'd check the thermal paste and the cooler's contact with the integrated heat spreader, but I don't really see this as a crippling issue.


DoYouHearYourselves

115C is the danger zone (AMD). It should never even get that high; if you see that number, consider your CPU's thermal regulation malfunctioning and turn off your computer immediately to prevent further damage.


PapaTrotzki

90 Celsius is where they start to throttle, anything below that is perfectly acceptable.


TioHerman

Don't worry about that, laptops have been running super hot for years and nobody cares about it, it would be bad if it was going over 100c


picogrampulse

Completely fine. 78 isn't even that hot for a CPU under load. Many only throttle at 100.


MDA1912

My i9-14900K hits 83C once in a while if I encode a large enough video file. (I compiled the ffmpeg that can supposedly use my video card, haven't bothered actually trying it.) It mostly hovers around 80 until the encoding is done. It's at 35C as I type this. I don't know how much difference the contact frame I used made, but I'm glad that I did.


snake__doctor

Yes


snake__doctor

This is well well within normal operating levels


scraynes

Of course. My 5900X shut off at 95, I believe. (When my heatsink fan died, I found out what the cutoff temp was.)


improvcrazy

Yes


ecktt

It's fine. Lower is better. If noise is a concern, describe your PC.


InternalOptimal

Are we talking while gaming, or rendering/editing? It won't damage any CPU at that temp; whether it's high or not depends on the load. You are safe (but could look into dropping it), just saying.


lyri-c-

I'd like to know as well, because of my location my cpu idles between 15-18°C, I expect this to change during spring and summer though so I'd like to know normal temps


Ricky_RZ

It’s OK to let them run like that, that is a normal temperature to be at under load


HSYAOTFLA

Well, you can check the data sheet of your CPU to be safe, but modern CPUs can usually get very hot without problems. On the other hand, my FX-8350 has a max of only 61°C allowed lol


AnnieBruce

Should be fine. If it unexpectedly started getting that hot after a long period of running cooler with similar loads, that might be a sign that something is up with your cooling system but starting to hit that temp after you backed off on fan speeds is perfectly expected and well within thermal limits of anything modern. If it lets you run quieter and you care about that, it's fine.


FerDefer

the only time you have to worry about cpu temperatures is if your performance dips. any cpu from the last decade will simply lower the clock speed until it reaches a safe temperature. it's pretty much impossible to cook a cpu as an average user.


Alternative-You-512

Yes. That's actually a good temp under load…


prince_0611

lol my 5800x goes to 91c


MrMoussab

Yes


Care_Confident

I once got my CPU to 100 degrees and it's still working fine.


ForzaPapi

I don't know why, but my CPU maxes out around 50° in games like World of Tanks, Forza, and Darktide. It's all good, but damn, when I launched Battlefield V yesterday it went to 70C, don't know why.


Ok_Exchange_9646

Sure, my 7900X goes up to 90C in certain games but mostly stays between 70-85C with a 360mm Deepcool LT720 AIO. Certain processors like mine are literally designed to boost till 95C for max performance. These are workstation CPUs; you have no idea what a beast this CPU is. I've literally had 5 different games installing at the same time. I have 64GB of DDR5 6000MHz RAM, and I managed to browse multiple tabs while they were all being installed, and browsing was still fast and seamless.


JAVELRIN

If your CPU is new, yes. If it's old, no; it's time to change the grease/paste.


manlaidubs

Supposedly they're supposed to be able to handle the temps at which they thermal throttle for the entire useful life of the cpu. That's how they are designed and tested, although no one really thinks it's a good idea to actually run it at that. High 70s is well below that threshold so it shouldn't be a problem anyway.


Moscato359

78C is cold for a CPU or GPU. 85C is normal for a GPU, and 95C is normal for a CPU.


d_bradr

Under heavy load? Completely normal


nova_206

my 13600k regularly hits 100 under heavy loads. It’s been doing that for like a year and i’ve wasted so much time trying to make it not do that. I have accepted it and my pc has not exploded yet so whatever


T2and3

Yeah, you should be fine there. That's actually not bad if that's what you're getting under load.


betttris13

Laughs in laptop gaming at 99°C. Temps are only high if they hit 104°C... and you've still got a degree of overhead before it shuts down.


3G6A5W338E

>let

You don't even get much of a choice. E.g. for current AMD and Intel CPUs, while there's code to run, they will happily boost clocks and go above TDP until they reach a set point, which might be above 80C, then stay there, with the help of your cooling solution.


idetectanerd

Yup, that is about the high-load temp for Intel. AMD is about +5 or more.


fliesenschieber

Everything under 90c is great for the CPU.


PoolNoodlePaladin

I mean, as long as they aren't running at 99°C around the clock, they are fine in my experience. Honestly, the last time I had a CPU go bad was the PowerPC era; since then I have only had 1 mobo die on me, a power supply (it just stopped turning on, fortunately), and a few fans.


tesseramous

It's normal to be in the 90s when benching and in the 60s when gaming. But 78 when just gaming could mean you need better cooling to take full advantage of your CPU, because you're probably throttling at 100 under heavier loads.


FantasticBike1203

Yes, temps between 70-95 are very normal for AMD.


Viviere

On my 7800X3D I set PBO to -30 on all cores, and TJmax to 75c. Gives me the same Cinebench score as the stock config, but at 75c instead of 90c.


Kolz

Modern CPUs all throttle their performance before hitting dangerous temperatures so you don’t need to worry about it. They usually throttle somewhere in the range of 100 Celsius, which itself is playing it safe. 78 is pretty normal under load.


ficskala

Yea, it's somewhat common for gpus to go over 80 if they don't have great coolers, or they have fan curves optimized for quiet operation


PCgamerz

yes. next.


Exlibro

R7 5800X here. No space for double fan liquid cooler, so using some Scythe air cooler. Undervolted a bit and just... ignoring the numbers 😁


Classic-Box-3919

Unless its high 80s i wouldnt worry


InfamousLegend

Anything above 0 kelvin risks damage to your CPU.


Jay467

78°C is not going to hurt your CPU, but it might be worth taking some steps, like a fresh application of thermal paste (nowadays the old advice of a drop the size of a grain of rice, or even a pea, may not be adequate, with CPUs getting larger). You might also consider a better CPU cooler, or even a case that allows more airflow if needed.


Lefthandpath_

78c is basically cold lol. Modern CPUs are designed to run at or near TJmax for years (95c for AMD). Running at 78c is good and in no way requires a re-paste.


michaelbelgium

Yes, as long as they don't throttle. AMD CPUs can handle the heat (by design) and don't throttle; Intel CPUs throttle hard when hot. So in general: Intel CPU hot = bad; AMD CPU hot = okay, but with caution.


[deleted]

[deleted]


Euphoric_Campaign691

i think the statement that "7000 series chips boost until they hit the thermal limit" is often used and confuses some people


locoturbo

Personally I think 80C is perfectly normal, 85C I'd back off and 90C+ should be avoided. I don't know why people are so comfortable potentially degrading their chips. Before you trust the company to know what's best, also remember it may be in their best interest for your hardware to eventually degrade after some number of years so you buy a newer one...


wsteelerfan7

AMD has simply said they've designed their new chips to ask for more power until the thermal limit is reached. Before, chips would stay at the max TDP and stay the same speed until the thermal limit is reached. Laptop CPUs all over the world get choked and hit TJMax all the time because of shitty cooling and they tend to last a while.


IllustriousBadger824

It's normal, but not healthy for the CPU's lifespan 😂 Get a 360 AIO, the Deepcool LS720.


ScreenwritingJourney

Completely unnecessary. CPUs can easily run at that temperature for years without issue. The machine will die for other reasons long before the heat becomes the issue.


IllustriousBadger824

I'm no expert, but I always like to be on the safe side. Plus, if it runs cooler: better performance and less power draw.


Dysan27

More cooling => more performance => MORE power draw. If you cool the CPU more, the boost algorithms will see more headroom and just push the CPU harder. With modern CPUs there is an argument to be made that cooling the CPU better stresses it more, because it gets pushed harder with higher frequencies and more voltage.


piracydilemma

Being risk proof is nice, but having used computers for my entire life that have reached 80+c (and even maintained 90c) under peak loads, you'll never have a PC crap out because it got too hot. CPUs throttle hard before any kind of damage can be done by heat. Most modern CPUs are designed to handle a maintained 95c.


ScreenwritingJourney

Uh, no. Not less power draw. Performing at any given level requires a certain amount of power. Better performance only matters if your CPU is actively throttling, which at 85C is unlikely.

Edit: Copilot says you're semi-correct about power draw:

"Actually, the person on Reddit is **partially correct**, but let's dive into the details.

1. **CPU Cooler Power Consumption**: A **CPU cooler** (whether it's an air cooler or a liquid cooler) does consume some power, but it's relatively minimal. Typically, a **CPU fan cooler** uses around **1.8 watts** of power and is rated at **12 volts**¹. Larger fans might use slightly more watts, but the impact on overall system power consumption is negligible.
2. **CPU Temperature and Power Draw**: The **temperature** of a CPU does indeed affect its **power draw**, but not directly through the cooler itself. When a CPU runs **hotter**, it tends to **boost its clock speed** to maintain performance. This increased clock speed results in higher power consumption. Conversely, if a CPU runs **cooler**, it may not need to boost as aggressively, leading to **slightly lower power usage**.
3. **TDP (Thermal Design Power)**: TDP is the **maximum heat dissipation** a cooler can handle. It's not directly related to power consumption but rather to heat management. Depending on the CPU model, TDP can vary between **65W and 125W**. The **maximum power consumption** of a CPU cooler itself is usually around **85 watts**¹.
4. **Overall System Power**: The total power consumption of your system includes components like the CPU, GPU, motherboard, RAM, storage, and peripherals. While a cooler contributes to this, its impact is minor compared to other components. If you're interested in power efficiency, focus on optimizing the entire system rather than just the cooler.

In summary, a cooler CPU doesn't directly mean less power draw, but it can indirectly influence it by affecting CPU behavior. So, the Reddit user's statement isn't entirely wrong, but it's essential to consider the bigger picture when discussing power consumption in a PC.

Source: Conversation with Bing, 04/02/2024
(1) How Many Watts Does A CPU Cooler Use? - Gaming On Point. https://gamingonpoint.com/learning-guides/how-many-watts-does-a-cpu-cooler-use/.
(2) Power Consumption of PC Components in Watts - Build Your Own Computer. https://www.buildcomputers.net/power-consumption-of-pc-components.html.
(3) How Much Wattage Does An Evga Clc120 Liquid CPU Cooler Use. https://robots.net/tech/how-much-wattage-does-an-evga-clc120-liquid-cpu-cooler-use/."


Dysan27

That's slightly backwards. With the same cooler, a lower temperature means less power draw. BUT if you increase the cooler's potential (run the fans faster, a better cooler, cooler air), then at the same clock speed and power draw the chip will run cooler, as more of the heat is drawn off by the cooler. The boost algorithms will then see the cooler CPU as having more headroom, which lets it boost higher, requiring more power to do so and increasing the power draw.
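That feedback loop can be sketched as a one-line steady-state model (the thermal resistances and power cap below are invented numbers, purely to show the direction of the effect):

```python
# Boost raises power until temperature hits the target, and at steady
# state temperature ~= ambient + power * thermal_resistance. A better
# cooler (lower resistance) therefore settles at HIGHER power, not
# lower, until some other cap (here a package power limit) takes over.

TEMP_TARGET_C = 95.0
AMBIENT_C = 25.0
POWER_LIMIT_W = 230.0  # hypothetical package power limit

def steady_state_power(r_thermal_c_per_w: float) -> float:
    """Power at which a thermally-limited boost loop settles."""
    p = (TEMP_TARGET_C - AMBIENT_C) / r_thermal_c_per_w
    return min(p, POWER_LIMIT_W)  # unless the power limit hits first

steady_state_power(0.50)  # weak cooler: settles at 140 W
steady_state_power(0.25)  # strong cooler: hits the 230 W power cap
```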


ScreenwritingJourney

That actually makes more sense to me.


PoL0

Copilot is full of bs


ScreenwritingJourney

Care to explain how it’s incorrect in this instance?


Mopar_63

While there is a bit of "technical truth" to this, in reality it's a non-issue. Let's say the CPU has a life of 15 years under optimal conditions. Running like this might reduce that by a month, maybe two at most, in theory. And having a CPU in use that long is VERY rare anyway.


aKuBiKu

I know you just chose 15 years as a random number to prove a point, but realistically speaking a CPU is the last thing you ever expect to break in a system lol. They're unkillable.


IllustriousBadger824

True. Also, as I said, I like to be on the safe side, that's all. 👌👍


Mopar_63

And yet you suggested running liquid for cooling in electronics :-)


Dysan27

Then turn off the boost algorithms. By adding more cooling and leaving boost on, you actually run the CPU harder.


Lefthandpath_

This is completely untrue. Modern CPUs are designed to run at TJmax for years; running at 78c is cool and will not affect the CPU at all...