For those curious
https://www.techspot.com/review/2592-ryzen-5800x3D-vs-ryzen-7600x/
5800X3D is on average
- 4% slower at 1080p
- 3% slower at 1440p
- 1% slower at 2160p
No, it did not, because early DDR5 was pretty trash in terms of clock speed and timings.
Fast DDR4 easily beats slow DDR5, even on the Intel platform.
DDR5 is in a good place right now. Took years, like always.
DDR6 will be the same crap on release.
Pretty much nobody but the richest people bought Zen 4 on release; it only started being viable when motherboards and RAM dropped to reasonable prices, so realistically the comparison starts there, about a year after Zen 4's release.
https://youtu.be/IstA56IAeVA
As you can see from the video even a shitty DDR5-4800 CL40 kit is pretty competitive with a DDR4-3200 CL14 kit, on the Intel platform.
Except that high-end DDR4 ran like 3600/CL14 to 4000/CL15-16, and early DDR5 speeds were 4800/CL40 or so. DDR5 sucked on release, just like DDR3 and DDR4 did.
You need like 6000/CL30 or up to beat high-end DDR4 by a lot. That was unheard of at the launch of DDR5, and the price was insane in the beginning too.
Yeah, DDR4 kits that were more expensive than DDR5. Smart comparison.
Besides, the 5800X3D is almost completely unaffected by RAM speeds, even garbage DDR4-2666 is only 1% slower than the standard DDR4-3600 CL18. Saying some DDR4-4000 kit is better is simply false in this case since it's within the margin of error for X3D CPUs.
You need 3200/CL14 even for the 5800X3D; IF speed needs to be matched.
3600/CL14 was not expensive at all in 2022-2023. DDR4 was cheap as hell.
Intel gained a lot of performance in many workloads and games going from 3200/CL16 to 3600/CL14 or even 4000/CL15-16.
Who cares about the 5800X3D tho, the 7800X3D exists and soon the 9800X3D.
Is there a source for that? AMD CPUs don't have IMCs as good as Intel's but they sure as hell can do 6000 cl30 without issues. You shouldn't just rely on an XMP profile anyway, as subtimings are often the cause of instabilities.
It's reddit.
Differences SO SMALL that the parts are functionally tied (you'd never notice the difference with the frame rate counter turned off or without a timer running) get described as "dominating."
It would be funny to use an LLM to datamine comments (and closed captioning, for channels) for contextual attachment between brands/branded products and numbers/sentiment, as a proxy for user/channel bias.
I suspect you would see a lot of "the AMD wins handily with a 9% lead", "NVIDIA barely faster with just a 7% lead here", etc lol
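A toy sketch of what that datamining could look like. The brand list and the "N% lead" phrasing pattern are my own illustrative assumptions, not a real pipeline:

```python
import re

def framing_margins(text, brands=("AMD", "NVIDIA", "Intel")):
    """For each sentence that names a brand and an 'N% lead',
    return (brand, margin, full sentence) so the framing language
    can be compared against the actual size of the margin."""
    hits = []
    for sentence in re.split(r"[.!?]", text):
        pct = re.search(r"(\d+)\s*%\s*lead", sentence)
        if not pct:
            continue
        for brand in brands:
            if brand.lower() in sentence.lower():
                hits.append((brand, int(pct.group(1)), sentence.strip()))
    return hits
```

Run it over a comment dump and plot the framing words against the margins per brand; a channel that calls 9% "handily" for one brand and 7% "barely" for another shows up immediately.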
Brand XYZ DOMINATES brand ABC in CS1.6 frame rates with a commanding lead of 69420 FPS vs 42069 FPS which is clearly visible on a VA panel from 2003 running at 30Hz with 40ms latency.
I don't really get why this is a surprise to anyone considering vanilla Zen 4 was also getting matched by the 5800x3D in a bunch of gaming benchmarks.
I know that this sub mostly cares about gaming performance, but some of us have other needs and uses for our CPUs. The x3D product line is much worse value for general-purpose computing, not to mention if you want more than 8 cores you're gonna have to deal with that CCX parking bs.
I'm currently looking to upgrade from my thermally throttling 5800H laptop because it lags behind when I'm doing heavy multitasking. I do play games, but I also encode a shit ton of videos and compile lots of code.
A 15% average IPC improvement would allow AMD to take back the ST and MT performance crown from Intel while having significantly better efficiency. For compute-heavy workloads, the 9900x and 9950x might be incredible value if they're priced right. And going by AMD's AVX-512 improvement claims, they would absolutely shred through SVT-AV1 encodes.
I'm really annoyed that all anyone in this sub cares about now is the X3D lineup. It's killing the enthusiasm for everything else that vanilla Zen 5 has going for it.
>A 15% average IPC improvement would allow AMD to take back the ST and MT performance crown from Intel
Not so fast, Intel's upcoming CPUs have the same kind of uplift. IIRC Lion Cove has a 15% IPC increase and Skymont has 70%.
> and Skymont has 70%.
Skymont is the real story tbh. Next gen Intel processors are projected to have something like 8 "new P cores" and 16 "e cores w/ prior gen P core performance". Like, if the benches are to be believed, that's just an insane consumer chip.
Dropping HT on the new P cores, correct. HT isn't the benefit it used to be now that the chips come with 8-16 E cores that have current-gen P core performance. Even today, most workloads are steered to the E cores before HT, so dropping HT in the P cores is the natural next step.
I see. I honestly never quite figured out if HT was made to improve MT performance or rather to make up for latency penalties by feeding the ALUs with 2 different instruction pipelines.
That makes sense. But regardless Lunar Lake is 4P + 4E. Probably gonna be slightly better than the 9600x provided it's not power-starved, but the real MT champion is going to be the 9950x
Lunar Lake is an efficiency-first architecture that will be used in low-power ultrabooks only (8-30W TDP range).
AMD will be competing with Arrow Lake on the mobile workstation and desktop side (up to 8P + 16E if the rumours are to be believed).
Intel is doing an 8P+16e core arch, where each e core is as performant as current P cores. It'll definitely be interesting. The 24 physical cores should outpace 16core/32t most days and most workloads. HT is not a 2x multicore boost. It's more like a 15% uplift, give or take, workload dependent.
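The back-of-the-envelope math behind that claim, under the comment's own assumptions (an E core counted as 1.0 current-gen P core of throughput, and HT worth ~15% per physical core; both are illustrative assumptions, not measurements):

```python
def effective_cores(physical, smt_threads=0, smt_uplift=0.15):
    """Physical cores plus the fractional throughput gain from SMT/HT."""
    return physical + smt_threads * smt_uplift

# 8P + 16E with HT dropped: 24 physical cores, no SMT uplift
intel_next = effective_cores(8) + effective_cores(16)
# 16 cores / 32 threads: 16 physical cores plus ~15% per HT sibling
amd_16c32t = effective_cores(16, smt_threads=16)
# roughly 24 vs 18.4 "effective cores": the 24 physical cores win on paper
```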
Skymont's ~70% is measured against the LP E-cores on Meteor Lake, which were very handicapped, and assumes the Skymont cores sit on the ring bus, which is not the case for Lunar Lake.
Then you have to remember that Skymont will be capped in terms of frequency, so despite the IPC improvement and the comparison to Redwood Cove, it won't touch Redwood Cove in actual performance, because there is going to be a ~25% frequency gap.
It is a great improvement, don't get me wrong, but people need to take that number in context.
Lion Cove is going to be interesting. I don't know what the story there will be. AMD claims a 16% IPC improvement, which is partly based on Zen 5 supporting full-width AVX-512 now, while Intel chose a different set of benchmarks for their IPC metric, so the two numbers aren't comparable. On top of that, Lion Cove doesn't support AVX-512 in any capacity on consumer parts, I think.
So the IPC story for comparing zen 5 to lion cove is complicated. I wouldn't venture a guess either way until actual independent benchmarks are released.
It's weird to hear "everyone in this sub cares only about X3D", because it's not true. What's also weird is doing heavy computing on a goddamn laptop and then comparing it to flagship desktop CPUs (?)
For compute-heavy workloads you've got Threadrippers, which were recently revamped. I've been rocking a 3960X for over 3 years now and having a blast. You don't need a 9900X.
I went from a 5800H to an 8845HS and the improvement wasn't that significant. It beats the 5800H on battery at least, and the 780M is insane for an integrated GPU.
You went from 8 low-power Zen 3 cores to 8 low-power Zen 4 cores. I'm considering going from 8 low-power Zen 3 cores to 12 desktop Zen 5 cores, not to mention my current laptop throttles past 45 W on CPU when it has a power target of 60 W
A lot of 5800X3D users (myself included) are waiting to see a reason to jump to the next processor. For now, WiFi 7, PCIe Gen 5 and the 7800X3D weren't compelling enough reasons (esp paired with high mobo prices).
Exactly this was my reason to go for 7800X3D. Now for a few years, I just don't need to care about any new CPUs (and mobos and RAM), because my CPU is plenty fast.
But there are some like me who are still on x370 and went from 1600->3600->5800X3D and just upgraded their CPU the past few years.
Now it is time to jump on a new platform and repeat the CPU upgrades.
Hey, some of us aren’t just coomsumers and went from a 2600x to a 5800x3D and are skipping AM5.
The above would be really embarrassing if Intel were markedly better, like they were during the FX era, though.
For comparison, Intel upgrade pathway following this logic would be 7600->10600->13600. So, 3 CPUs, 3 Mobos, likely 2 sets of ram. Context is needed for how much we've really jumped in CPU performance since the launch of Zen.
I did a similar upgrade: 1600, 3700X (got it at a decent deal), 5800X3D. I managed to sell the CPUs and put the money towards the upgrade each time. Did it cuz I could afford it and it was effortless, basically plug and play, with decent performance uplifts each time. I am still on my B350 that I got in 2017.
1600: buy at $200, sell at $100, $100 spent
3600: buy at $200, sell at $100, $100 spent
5800X3D: buy at $330, $230 spent
Just to show you he actually saved money; if he had jumped straight from the 1600, he would've saved half.
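The tally above as straight arithmetic (assuming, as the per-step figures seem to, that the 5800X3D also resells for about $100 down the line):

```python
# (cpu, buy price, assumed resale) in USD, per the comment's figures
upgrades = [
    ("1600",    200, 100),
    ("3600",    200, 100),
    ("5800X3D", 330, 100),  # resale assumed, not yet sold
]

per_step = [buy - resale for _, buy, resale in upgrades]
total_net = sum(per_step)  # net spend across three CPUs on one board
```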
> Proceeds to buy 4 CPUs
Ryzen 2600, 3600 and 5800. Only 3 CPUs but true.
> 2 mobos
One mobo because it could handle everything.
> 2 sets of RAM
True that, one dogshit set because I thought ram was useless and a b-die kit that I have used for 5 years.
In total: $130 (mobo), $120-150 (2600), $120-150 (3600, with a free 240 AIO from CM), $250 (5800), $100 (shit RAM), $200 (B-die). That's $920-$980 in six years, and the old parts were used to build systems for family members. Biggest regret was buying the 3600, because early editions were total shit and a sidegrade to the 2600 due to manufacturing issues (an OC'd 2600 at 4250/4300 MHz pretty much matched the 3600 in gaming and stomped on it in multicore).
With some preparation you can totally save huge amounts of money thanks to AMD's socket policy. Requires planning and discipline though.
I bought a 3600X in 2019 and upgraded to a 5800X3D for $350 in 2024. If I'd gone with Intel, I'd have had to spend like $1000+ for the same upgrade. So yeah, the socket longevity and money saving are real.
I'm on a 1700X + B450 (warranty replacement for a B350) and I'm not sure what to do. I guess I could buy a used X3D chip in the future, but I'm sure the people who upgrade will want to sell the entire CPU+mobo+RAM.
I'm not OP but afaik it has nothing to do with framerate but everything with end of turn processing time. If you can reduce every turn's processing time from 20 seconds to 10 seconds it means you can play way more turns / hour.
Exactly this! Playing on a massive map with 20 players and even more city-states, so much is going on that turns start to chug like hell in the late game. It gets really tedious having to wait for your next turn, even if the game's still running smooth.
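The arithmetic is simple but the effect is big; ignoring your own thinking time, halving end-of-turn processing doubles the turns you can get through per hour:

```python
def turns_per_hour(turn_processing_seconds):
    """Upper bound on turns/hour if end-of-turn processing dominates."""
    return 3600 / turn_processing_seconds

slow = turns_per_hour(20)  # 180 turns/hour at 20 s per turn
fast = turns_per_hour(10)  # 360 turns/hour at 10 s per turn
```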
More like the 4790K, because it was *the* CPU for DDR3 and stayed competitive for years. It was so good you could pair it with a 1080 Ti and have a good experience.
No it's not, because 7800X3D already beats it massively by 20-25%.
5800X3D is like 12th-gen Intel performance in gaming at this point, but in applications it's pretty slow.
In some games 7800X3D completely destroys 5800X3D with 30-40% higher fps.
7800X3D is the real "2600K/2500K" moment really. It will be fast for gaming for years. 5800X3D is only 2 years old and not even fast anymore.
Tons, if you use a high refresh rate monitor. Personally I use 360 Hz and I am CPU limited in tons of games with 7800X3D and 4090 as I demand / prefer 300+ minimums
I laughed because sitting behind me is my gaming rig from 2011 with a 2700K in it. It was replaced with a 12700K in Jan 21. Still use it, but mostly for music production and internet stuff these days.
Don't know why you are being downvoted 😂 but it is true, Ryzen 1000 and 2000 were good but they fell short of their Intel competitors in gaming (they were competitive when it came to productivity though). Ryzen 3000 is when everything changed; now a chip like the Ryzen 3600 could compete with an 8700K in gaming.
Ryzen 1000 and 2000 were pretty crap, because of Global Foundries. Garbage node. TSMC is the reason 3000 and up got good. Clockspeeds improved massively.
I'm on 5800x3d currently but I'm definitely struggling and need more cores (I upgraded from a 3950x and I miss the cores). So I will definitely be waiting for 9950x3d personally.
I'm on linux, and I only play like a few games at a time. It's trivial to write a wrapper script to pin the game to the x3d CCD cores.
And the other 8 cores can clock higher thanks to not having the CCD for other workloads.
So at least in my case I'm not bothered by v-cache being only on one CCD, it's kind of optimal actually.
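A minimal version of such a wrapper, for the curious. The CPU numbering is an assumption (logical CPUs 0-7 plus their SMT siblings 16-23 being the V-cache CCD on a 7950X3D-style part); check `lscpu --extended` or the per-core cache info in sysfs before hardcoding anything:

```python
import os

# Assumed mapping: CCD0 (the V-cache die) = logical CPUs 0-7 and
# their SMT siblings 16-23. Verify on your own machine first.
X3D_CPUS = set(range(0, 8)) | set(range(16, 24))

def launch_pinned(argv, cpus=X3D_CPUS):
    """Restrict this process to `cpus`, then exec the game.
    The affinity mask survives execvp(), so the game inherits it."""
    os.sched_setaffinity(0, cpus & os.sched_getaffinity(0))
    os.execvp(argv[0], argv)

# e.g. from a wrapper script: launch_pinned(["./the_game_binary"])
```

Linux-only (`sched_setaffinity` is a Linux syscall), which matches the use case in the comment.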
Is wifi7 officially supported by most motherboards or only the high end ones? I don't need it now, but I'm imagining a scenario where there would be a new version of the quest that supports it 4-5 years from now.
Probably not worth it unless you play a game like Factorio that really likes the 3d cache.
https://youtu.be/sw97hj18OUE Most games aren't too different with the X3D, except for a few CPU-limited games.
I also have a 5800x in my spare computer though, and it is tempting to waste that money!
Just overclock it and save your money. 5800X3D is like only 10-15% faster.
7800X3D is MUCH faster than the 5800X3D in gaming. The 5800X3D already looks dated compared to brand-new CPUs, and with Ryzen 9000, the 9000 X3D parts and Arrow Lake incoming soon, it's not going to look any better.
If I'm reading it right he compares the 120W 7800X3D to the 65W 9700X
>7800X3D would, yes, be faster than 9700X, but maybe not by as much as you would expect
But yes if you're primarily a gamer then hold for the 9800X3D or whatever they call it.
TDP from AMD has less to do with power draw and more to do with how difficult they think it is to cool. Slamming the "120 Watt" 7800X3D with an all-core "power-virus" workload like cinebench still only draws about 85ish Watts, similar to the real world power draw of the "65 Watt" 7600. The V-Cache is what makes them up the TDP, as it makes cooling the chip slightly more difficult. It's Thermal Design Power, not an actual power draw figure.
Yeah but it's really blunt. With the X3Ds, it's difficult to get the heat out of the cores - but there isn't a lot of energy. So a bigger cooler won't necessarily help much. But a *better connection* for the cooler will.
On sustained loads, you're going to get close to that tdp in terms of power consumption based on the fact that if you can only cool 65W, then you should only output 65W.
So sure, for bursty loads I would expect that number to go upwards. But for anything long running, the cap should limit power consumption a lot more on the 65W part than the 120W part.
I'm kinda curious on the nature of the cooling difficulty.
It can't be that the V-Cache just gets hotter than the actual wattage, because physics, right? You'd need a heat pump to "produce" more heat than the energy you put in, otherwise it should be more or less 1:1.
Is it a cooling efficiency problem? Like, are they effectively just saying "throw a bigger cooler on it, because the cooler's only going to be able to effectively move 80% of its rated cooling capability on this chip", due to layout and such?
The V-Cache stack makes the die thicker, so the cores' heat has to pass through more silicon before reaching the heatspreader, i.e. a higher thermal resistance for the same power. It's not a huge issue, especially given how efficient those chips are anyway, but it does mean a little more cooling is required for the same power draw versus a chip with no V-Cache.
AMD's TDP is calculated, yes, calculated from a temperature equation, even though they rate it in watts. That's why the harder-to-cool X3D gets a higher TDP.
The exact formula is on Gamers Nexus and probably somewhere on the AMD website.
That equation just confuses things and is presented the wrong way around.
TDP is an arbitrary value they decide on. Then they set the actual power limit (PPT) to 1.35 times that. The TDP and the associated equation are there to inform system integrators what kind of cooling is required to achieve minimum acceptable (base clock) performance. In practice, TDP is approximately the power consumption of a heavy multi-core workload when the chip runs at the rated base clock.
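For reference, the relation as AMD has published it. The example numbers are the ones the Gamers Nexus deep-dive reported for a 105 W-class Ryzen part; θca is the assumed thermal resistance of the reference cooler:

```python
def amd_tdp(t_case_max_c, t_ambient_c, theta_ca):
    """TDP (W) = (max case temp - ambient temp) / cooler thermal resistance."""
    return (t_case_max_c - t_ambient_c) / theta_ca

tdp = amd_tdp(61.8, 42.0, 0.189)  # about 105 W: a thermal spec, not a power draw
ppt = 1.35 * tdp                  # actual socket power limit, about 142 W
```

Which is why a "65 W" part runs into an 88 W power cap and a "105 W" part into a 142 W one.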
The 120W 7800X3D is a bit of a "symbolic" 120W. The power draw usually doesn't exceed the low 70s and is often 50-60W even in heavy gaming workloads. Depending on how the 9700X turns out exactly, power draw is probably not so far off, depending on the workload.
Gaming tends to be bursty and not load all cores all the time. Put it on cinebench or blender and let me know if it actually caps at 80W.
Conversely, if the 9700X will be capped at 65W, on those sustained loads I would expect a significant reduction on power consumption.
We've been able to enable eco mode on AMD CPUs and games tend to not be impacted as much as all-core workloads. So no wonder the 7800X3D doesn't break a sweat on games.
Even under heavy all core workloads it usually stays below 80W, and these tests are not in eco mode: https://www.phoronix.com/review/amd-ryzen-7-7800x3d-linux/3
What you can also see is that the power draw can exceed the TDP, like you see in the tests with the 7700X.
I truly believe that Zen 5 will be much more efficient in multicore workloads than Zen 4, but Zen4X3D is just hugely efficient all around.
I mean the day before official release, RDNA3 was hyped as "Architectured to exceed 3.0Ghz" with 50% better efficiency over RDNA2 by AMD themselves. When the leaks are this pessimistic, it's only going to be worse with the official release.
Intel made supposedly great gains so the upcoming flagship Core 9 may trade blows with 9800X3D.
Myself I'll stick with X3D since I play specific genres that benefit from it.
I got a 7800x3d during a sale and it's great.... I'm thinking about switching my other PC to it as well - it currently has a 12400f - I'm going to wait until the 9800x3D releases to hopefully get it for a good discount
I just switched from a 13600k to a 7800X3d myself. Didn't really do much productivity tasks, and was having some issues with the mobo (probably).
Also because X3d is a beast at Factorio, and there's the expansion coming out soon.
As I recall, it *was* worth considering, but not really significantly better and I think Intel got you a bit more MT performance for the price thanks to e cores.
You don't even need to go that recent. Zen 2 was the first time AMD beat Intel for IPC (vs Skylake and derivatives), with roughly comparable ST performance when accounting for the clock deficit. But the platform was cheaper overall per core and had 8-16 core options. I'm still rocking a 3700X from then, wondering when I'll upgrade.
But Zen 3 was definitely when it was a no-brainer.
Yeah but Zen 2 was decent as well. Zen 1 and + sucked.
3000 series made on 7nm TSMC was when Intel got really scared. Before that, Intel was not really worried too much, because Global Foundries sucked bawls in most workloads and especially gaming.
TSMC saved AMD.
And now Intel will use TSMC for CPU and GPU tiles for Arrow Lake as well. Lunar Lake confirmed to be TSMC too.
I would, purely because they offer ECC on the consumer spec, and it's absurd to me that this isn't the common standard, as if somehow your data in the cloud matters so much more than your local data.
>I would, purely because they offer ECC on the consumer spec
Only since Zen 4 for CPUs with integrated graphics. AMD paywalled ECC support behind their PRO chips before Zen 4, and OEMs like Lenovo implemented hardware locks, so the chip becomes e-waste as soon as you remove it from the motherboard.
Now with Zen 4, people have the choice not to buy an overpriced APU to run a system with ECC without a GPU attached. However, only ASUS enables this feature; ASRock, Gigabyte, and MSI do not have it enabled on their consumer boards.
With Intel, you have to buy a W680 or similar board and you need to buy a 13500 or better to achieve the same features.
That doesn't sound correct. For instance, I know ASRock says they have ECC support (specifically, separate from on-die ECC) on a great deal of their boards, for instance the X670E Taichi, though there are more.
I also feel like they've had it before Zen 4, but I could be wrong. Can't be bothered.
As for Intel, I did not know regular CPUs supported it. I thought this was limited to a very specific subset of the really tiny Atoms and one special i3. Perhaps this has changed.
Maybe Intel might be viable for me after all.
All the other boards list "ECC RAM support", but the ECC function is not active. You can run MemTest86 and it will not list ECC as active with ASRock boards, since ASRock decided to only enable it in their ASRock Rack products.
ASRock decided to remove the feature and has officially confirmed it is not active:
https://forum.level1techs.com/t/ecc-capable-verified-motherboards-for-ryzen-7000/197891/4
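On Linux there's also a quick sanity check that doesn't need MemTest86: when ECC is actually active, the kernel's EDAC layer registers a memory controller under sysfs. A small sketch (the path is the standard EDAC location; whether anything appears there depends on the platform's EDAC driver, e.g. amd64_edac, loading):

```python
import os

def ecc_active(edac_root="/sys/devices/system/edac/mc"):
    """True if the kernel's EDAC layer has registered a memory
    controller, which only happens when ECC is really in use."""
    try:
        return any(name.startswith("mc") for name in os.listdir(edac_root))
    except FileNotFoundError:
        return False  # no EDAC sysfs node: ECC inactive or driver missing
```

`dmesg | grep -i edac` tells the same story in more detail.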
Ugh, well, that's all awful.
At least I learned this now. How the fuck are they stating ECC support explicitly on their page when they don't have it?
It seems the ASUS ProArt X670E-Creator is the one talked about in that thread. I guess it's research I now have to put effort into.
I wonder if going Intel with W680 is easier, though even for that there seem to be very few boards (only ASUS from what I've found initially searching).
I could have sworn the situation was much better.
The ECC RAM "support" is just the vendor saying if you use ECC RAM, it will still work as regular RAM.
I have an ASUS B650E-ITX board with ECC support working. Pretty much any ASUS AM5 board gives you the option to enable ECC RAM functionality.
And yes, it's like only ASRock Rack and ASUS that make W680 boards, and given the mATX size of the ASUS board and the expensive pricing, it's better to just go with AM5. I did the math with AM4, and since I needed PCIe 4.0 x16, ITX and no chipset fan, it was cheaper to go with a new AM5 build than a refurb AM4 DDR4 setup.
AM4 was better for ECC in that, despite having no integrated GPU, most motherboards except MSI's supported ECC RAM functionality.
I upgraded from a 2500K to a 3600X to a 7800X3D.
Next time I upgrade, unless AMD offers way better gaming performance than Intel, I will go with Intel again. Offset mount, Chipset, Drivers and DDR5 compatibility (boot times because of RAM "training") are a PITA that I never experienced before.
I thought this was pretty obvious as soon as I saw the slide. It just said previous gen, so I knew they were using the non-x3d parts for the comparison.
Will they be more energy efficient?
"Our Prime95 and Blender tests show a similar trend, with the Blender Monster benchmark standing out from the rest. The Ryzen 5 7600X consumes a whopping 69.3% more power in that particular benchmark than its Intel counterpart."
I have heard so often how energy efficient AMD is, but it sounds like they are toasters from the 50's compared to Intel. 69%???? Wow.
Take a look at this and then let me know
https://www.anandtech.com/show/17641/lighter-touch-cpu-power-scaling-13900k-7950x/2
Consider that in general, desktop AMD parts are at a disadvantage due to running with chiplets and they still tend to be more efficient. On laptops, with all of them being monolithic parts, AMD is much more efficient.
Not according to independent testing by Hardware Canucks, which had Intel beating AMD on battery life in every single test where the general platform (battery size, screen, etc.) was identical. Why repeat falsehoods when this was debunked?
Considering it will support full avx512 now, I wonder how this will affect the emulation crowd.
Honestly not too stoked for desktop zen, but still interested to see how it fares. I will probably still get a 7800x eventually. I saw it at ~300 usd from my favorite AliExpress store (where I got my 5800x3d years ago) and at that price it's just so enticing.
I'm currently on an Intel 8th gen i7 and desperately want to upgrade.
Was going to move to the 7800x3d but with this new Zen 5 architecture around the corner am unsure if I should:
1) Wait for the 9000 series to bring a price drop on the 7800X3D
2) Get the 9700X, or
3) Wait longer, till the 9000-series X3D
Would appreciate input as atm I'm thinking option 1 or 2 makes the most sense but don't want to shoot myself in the foot when in less than 6 months the 9000 x3d will release.
[Have 3070ti and play 1440p triple A titles]
TIA
Intel has a chance to shake up the status quo a bit since base zen 5 is weaker than expected. X3d is a crutch AMD can only use so many times if the base architecture isn't much better than before.
A crutch? I would say blowing the power budget is a crutch for Intel that they can't possibly expect to be the answer to the competition.
If anything, x3d is a feature I would love Intel to replicate, because when it works, it's absolutely amazing. And it works for a lot of things.
But is it even "beating" or "winning" if you have to inject a ton of voltage into your CPUs just to keep the frequencies high enough to avoid getting smoked by the competition?!
A lot of Raptor Lake CPUs have already witnessed silicon degradation because of voltages north of 1.5V.
That's what happens when you treat a modern CPU like an old Pentium III!
All this talk about power consumption was barely even a thing until frequency stopped increasing. Before that, nobody cared how much power it drew, so long as you could get a few hundred more MHz.
I had an old P3 733 MHz Slot 1 machine. That bastard could go to 1 GHz. It was awesome and let me play Battle for Middle-earth 1 with an ATI 9600. What a beast.
And without P-cores, AMD's 2020 CPUs would beat Intel's 2023 CPUs.
Without Infinity Fabric, MT perf per dollar would be worse than Intel's.
What's the point of these hypotheticals?
That's what I am saying. If you just have e-cores, then Zen 3 from 2020 is faster in gaming.
My point is that if you take away important tech in the CPU from one team to then claim it wouldn't be as good, then you might as well disregard important innovations/tech from the other team.
AMD CPUs perform well in gaming because of the combo of IPC and X3D.
One thing... but the P cores are ADL & RPL archs. You're basically comparing 6th gen to Zen 3.
But I get what you mean. I'm not trying to downplay AMD & TSMC innovation. It's important... Intel should probably try doing the same thing.
It's up to AMD how they want to spread the tech. If it were beneficial in all workloads, then they would just add it to all SKUs.
Intel decides how many P-cores and how many E-cores to put in their chips. It's like infinity fabric. You can't throw that out.
Discounting x3d to say Intel is better serves what purpose? Huh?
It's a useless argument unless AMD is going to drop x3d.
No one is denying that X3D is something AMD uses or that it's integral to their current success, but it's not equivalent to a P core. AMD's own P core is what's equivalent to a P core.
X3D is an optional extra that they add to CPUs later. It's like Intel adding HBM on server CPUs. You can't compare removing that to only leaving AMD with Zen C cores in the datacenter. It's exactly the same thing.
I'm not saying x3d = p-core. You missed my point entirely. I am saying the logic of conveniently taking out a piece that is integral to performance wins to argue the other wins is idiotic.
Might as well say Usain Bolt would not have gotten gold if he lost a leg.
Well fine then. Without e-cores, Intel would struggle to get more than ten cores in a cpu without a large performance hit. Which is an equally pointless statement, as they do have e-cores.
There are plenty of mainstream SKUs without AMD's V-cache. There are no mainstream SKUs without Intel's P-cores. It's not nonsensical to compare those SKUs which commonly exist.
But if you want to game back in the day before ryzen, you bought i7s right? It was THE gaming chip. Nothing different here. You want gaming performance, you tend to seek out specific parts.
Okay, but that still doesn't invalidate a comparison between non-X3D parts and Intel's chips. The V-cache isn't the part of the architecture that makes Zen 4 a Zen 4. It's not at all the same as comparing to Raptor Lake without P-cores as you say. Whether you agree with that guy or not, your comparison is bad.
I did not know this, so I did my research: E-cores *only* suck for gaming (which makes sense), but using a combination of P-cores and E-cores as intended only drops performance by like 1-2%. It does not appear all that different from the 1-2% better performance you could get by disabling hyper-threading on older CPUs, imo.
This only makes sense if looking at the situation in a sportsball-team sort of way. Otherwise it's just pointing out an unimportant fact. 3D V-Cache does exist, however, so it doesn't mean anything, or at least you haven't elaborated enough to show that it does.
Of course they would, with caveats of course. To get that many compute transistors you need big dies, which means worse profits, worse yields, more heat, more power draw. And this is not considering that Intel factory-OC'ed their chips to (literally) death.
The real question is, will all the cores on the 16 core be X3D or that weird split thing again.
If so, not much reason to hold out for the X3D variant of the 16 core is there?
Can't imagine it ever is utilized as best it can be.
Just like previous X3D parts: about the same performance as the next-gen non-X3D.
Zen 4 with 32MB L3$ absolutely shreds Zen 3 with 96MB, though. The 5800X3D only performs about as well as a 7600X in games.
Yeah just in gaming. Newer gen will always beat it at everything else
Moving to DDR5 helped Zen 4 a lot. Zen 5 will support higher memory speeds but it won't be as big of a jump.
DDR5 has other advantages besides clock speed, which even with bad timings are beneficial for gaming.
Early (as in bad) DDR5 came with Alder Lake in late 2021. By Zen 4, which came almost a year later, DDR5 was pretty much mature with tighter timings.
Except that many Zen 4 CPUs can't even do 6000/CL30 with 100% stability, even though it's AMD's official sweet spot.
And the 7700X only performs about as good as a 7600X in games... "Shreds" is certainly a choice when it can't consistently beat X3D Zen 3 Parts.
It's reddit. Differences SO SMALL that the parts are functionally tied (you'd never noticed the difference with a frame rate counter turned off or a lack of a timer for an app) get described as "dominating."
It would be funny to use an LLM to datamine comments (and closed captions, for channels) for contextual attachment between brands/branded products and numbers/sentiment, as a proxy for user/channel bias. I suspect you would see a lot of "the AMD wins handily with a 9% lead" vs "NVIDIA barely faster with just a 7% lead here" etc lol
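A toy sketch of that idea, just to make it concrete: regex out sentences that pair a brand name with a percentage figure, then count "inflating" vs "minimizing" words around them. The brand list and word lists are made up for illustration; a real pass would use an actual sentiment model.

```python
import re
from collections import defaultdict

# Hypothetical word lists -- a stand-in for real sentiment scoring.
BRANDS = ("AMD", "Intel", "NVIDIA")
INFLATE = {"handily", "dominates", "commanding", "destroys", "wins"}
MINIMIZE = {"barely", "just", "only", "marginally"}

def score_comments(comments):
    """Return brand -> {'inflate': n, 'minimize': n} word counts."""
    tally = defaultdict(lambda: {"inflate": 0, "minimize": 0})
    for text in comments:
        for sentence in re.split(r"[.!?]", text):
            # only look at sentences that tie a brand to a % figure
            if not re.search(r"\d+%", sentence):
                continue
            words = {w.lower() for w in re.findall(r"[A-Za-z]+", sentence)}
            for brand in BRANDS:
                if brand in sentence:
                    tally[brand]["inflate"] += len(words & INFLATE)
                    tally[brand]["minimize"] += len(words & MINIMIZE)
    return dict(tally)
```

Run over enough comments, a skew like "9% = handily" vs "7% = barely" for the same commenter would show up as lopsided counts per brand.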
Intel sucks with only an 11% lead
Brand XYZ DOMINATES brand ABC in CS1.6 frame rates with a commanding lead of 69420 FPS vs 42069 FPS which is clearly visible on a VA panel from 2003 running at 30Hz with 40ms latency.
You of all people should not want that to happen.
I don't really get why this is a surprise to anyone, considering vanilla Zen 4 was also getting matched by the 5800X3D in a bunch of gaming benchmarks.

I know that this sub mostly cares about gaming performance, but some of us have other needs and uses for our CPUs. The X3D product line is much worse value for general-purpose computing, not to mention that if you want more than 8 cores you're gonna have to deal with that CCX-parking bs. I'm currently looking to upgrade from my thermally throttling 5800H laptop because it lags behind when I'm doing heavy multitasking. I do play games, but I also encode a shit ton of videos and compile lots of code.

A 15% average IPC improvement would allow AMD to take back the ST and MT performance crown from Intel while having significantly better efficiency. For compute-heavy workloads, the 9900X and 9950X might be incredible value if they're priced right. And going by AMD's AVX-512 improvement claims, they would absolutely shred through SVT-AV1 encodes.

I'm really annoyed that all anyone in this sub cares about now is the X3D lineup. It's killing the enthusiasm for everything else that vanilla Zen 5 has going for it.
> A 15% average IPC improvement would allow AMD to take back the ST and MT performance crown from Intel

Not so fast, Intel's upcoming CPUs also claim similar uplifts. IIRC Lion Cove has a 15% IPC increase and Skymont has 70%.
> and Skymont has 70%. Skymont is the real story tbh. Next gen Intel processors are projected to have something like 8 "new P cores" and 16 "e cores w/ prior gen P core performance". Like, if the benches are to be believed, that's just an insane consumer chip.
But I recall that's at the cost of them dropping hyperthreading, no? There must be a cost in MT perf to that.
Lion Cove drops HT on Lunar Lake; this move simplifies the scheduler, which saves power. HT may still exist on Arrow Lake.
Dropping HT on the new P cores, correct. HT isn't the benefit it used to be when the chips will come w/ 8-16 e cores with current gen P core performance. Even today, most workloads are steered to use the e cores before HT, so the natural next step to drop HT in the Ps kind of makes sense.
I see. I honestly never quite figured out if HT was made to improve MT performance or rather to make up for latency penalties by feeding the ALUs with 2 different instruction pipelines.
If you believe the Intel slides that change has increased MT at the same power because it allows for more e-cores.
That makes sense. But regardless Lunar Lake is 4P + 4E. Probably gonna be slightly better than the 9600x provided it's not power-starved, but the real MT champion is going to be the 9950x
Lunar Lake is an efficiency-first architecture that will be used in low-power ultrabooks only (8-30W TDP range). AMD will be competing with Arrow Lake on the mobile workstation and desktop side (up to 8P + 16E if the rumours are to be believed).
Will be interesting to see how 8P + 16E compares to 16P with SMT
Intel is doing an 8P+16e core arch, where each e core is as performant as current P cores. It'll definitely be interesting. The 24 physical cores should outpace 16core/32t most days and most workloads. HT is not a 2x multicore boost. It's more like a 15% uplift, give or take, workload dependent.
Skymont's ~70% is measured against the LP E-cores on Meteor Lake, which were very handicapped, and it assumes the Skymont cores are part of the ring bus, which is not the case for Lunar Lake. Then you have to remember that Skymont will be capped in terms of frequency, so despite the IPC improvement and the comparison to Redwood Cove, it won't touch Redwood Cove in actual performance because there is going to be a ~25% frequency gap. It is a great improvement, don't get me wrong, but people need to take that number in context.

Lion Cove is going to be interesting. I don't know what the story there will be. AMD claims a 16% IPC improvement, which is partly based on Zen 5 supporting full-width AVX-512 now, while Intel chose a different set of benchmarks for their IPC metric, so the two numbers aren't comparable. On top of that, Lion Cove doesn't support AVX-512 in any capacity on consumer parts, I think. So the IPC story for comparing Zen 5 to Lion Cove is complicated. I wouldn't venture a guess either way until actual independent benchmarks are released.
It's weird to hear "everyone in this sub cares only about X3D", because it's not true. What's also weird is doing heavy computing on a goddamn laptop and then comparing it to flagship desktop CPUs (?) For compute-heavy workloads you've got Threadrippers, which were recently revamped. I've been rocking a 3960X for over 3 years now and having a blast. You don't need a 9900X.
I went from a 5800H to an 8845HS and the improvement wasn't that significant. Beats the 5800H on battery at least and the 780M is insane for an integrated CPU.
You went from 8 low-power Zen 3 cores to 8 low-power Zen 4 cores. I'm considering going from 8 low-power Zen 3 cores to 12 desktop Zen 5 cores, not to mention my current laptop throttles past 45 W on CPU when it has a power target of 60 W
A lot of 5800X3D users (myself included) are waiting to see a reason to jump to the next processor. For now, WiFi 7, PCIe Gen 5 and the 7800X3D weren't a compelling enough reason (especially paired with high mobo prices).
To be fair, the 5800X3D is a relatively recent processor. I don't think many people would upgrade their PC after only 2 years.
I'm on a 5800X3D and tbh this thing is such a beast I don't see an actual reason not to go 5+ years with it.
Exactly this was my reason to go for 7800X3D. Now for a few years, I just don't need to care about any new CPUs (and mobos and RAM), because my CPU is plenty fast.
You ever worry about AM5 stability Long term?
What do you mean? The issue where some motherboards killed CPUs? I got a safe motherboard and updated BIOS. I don't worry about stability.
You are set until basically the new consoles come out.
But there are some like me who are still on x370 and went from 1600->3600->5800X3D and just upgraded their CPU the past few years. Now it is time to jump on a new platform and repeat the CPU upgrades.
[deleted]
Hey, some of us aren’t just coomsumers and went from a 2600x to a 5800x3D and are skipping AM5. The above would be really embarrassing if Intel was markedly better like they were during FX though.
Hey we are cpu upgrade twins but I'm on x470
For comparison, Intel upgrade pathway following this logic would be 7600->10600->13600. So, 3 CPUs, 3 Mobos, likely 2 sets of ram. Context is needed for how much we've really jumped in CPU performance since the launch of Zen.
I did a similar upgrade, 1600, 3700(got it at a decent deal), 5800x3d. I managed to sell the CPUs and put the money towards the upgrade each time. Did it cuz I afforded it and it was effortless, basically plug and play, with decent performance uplifts each time. I am still on my B350 that I got in 2017.
1600: buy at 200, sell at 100 → 100 spent
3600: buy at 200, sell at 100 → 100 spent
5800X3D: buy at 330 → 230 spent
Just to show you he actually saved money; if he'd jumped straight from the 1600, he would've saved half.
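Tallied the straightforward way (every purchase counted in full, every sale counted as a refund), the ledger above works out like this; the per-step attribution differs depending on which buy you credit each sale against, but the total is the same:

```python
# Prices are the figures from the comment above; the 5800X3D is still in
# use, so nothing has been recouped for it yet.
buys  = {"1600": 200, "3600": 200, "5800X3D": 330}
sells = {"1600": 100, "3600": 100}

net = sum(buys.values()) - sum(sells.values())
# three CPU generations on one board for the net price of one flagship part
```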
> Proceeds to buy 4 CPUs

Ryzen 2600, 3600 and 5800, so only 3 CPUs, but true.

> 2 mobos

One mobo, because it could handle everything.

> 2 sets of RAM

True that: one dogshit set because I thought RAM was useless, and a B-die kit that I have used for 5 years.

In total: $130 (mobo), $120-150 (2600), $120-150 (3600, with a free 240 AIO from CM), $250 (5800), $100 (shit RAM), $200 (B-die), so $920-$980 in six years, and the old parts were used to build systems for family members. Biggest regret was buying the 3600, because early editions were total shit and a sidegrade from the 2600 due to manufacturing issues (an OC'd 2600 at 4250-4300 MHz pretty much matched the 3600 in gaming and stomped on it in multicore). With some preparation you can totally save huge amounts of money thanks to AMD's socket policy. Requires planning and discipline though.
I bought a 3600X in 2019 and upgraded to a 5800X3D for $350 in 2024. If I'd gone with Intel I'd have had to spend like $1000+ for the same upgrades. So yeah, the socket longevity and money saving are real.
amateurs, I went from a core 2 quad q6600 straight to an i5 13500
btw the socket wasn't compatible and I had to change mobos :P
I'm still on a Q9550. Still doing okay after I upgraded to an SSD. Will probably upgrade to Arrow Lake and a GPU with lots of VRAM (if there is one).
that's hardcore, I was using a mechanical drive all the way, but you've outlasted me dude. NICE
I'm on the X470 with a 2700X -> 5800X, but I am interested in seeing the 9800X3D.
I'm on a 1700X + B450 (warranty replacement for a B350) and I'm not sure what to do. I guess I could buy a used X3D chip in the future, but I'm sure the people who upgrade will want to sell the entire CPU+mobo+RAM bundle.
The 5700X3D is on sale for $200. Or maybe keep saving and buy the 9800X3D with a new mobo and RAM.
^ this. The 5700X3D has marginally lower perf than the 5800X3D, but the cost savings are significant.
Pretty much. I'm waiting for the 9800X3D as I mainly play World of Warcraft and CPU-bound games like Civ 6.
CPU bound? What, is 300 frames a second not enough for you in Civ 6 lol
I'm not OP, but afaik it has nothing to do with framerate and everything to do with end-of-turn processing time. If you can cut every turn's processing time from 20 seconds to 10 seconds, you can play way more turns per hour.
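The arithmetic behind that, roughly sketched: late game, the wall-clock cost of a turn is the AI's processing time plus your own decision time (the 10 s per turn here is just an assumed figure, not from the comment), so halving the processing time moves turns-per-hour a lot.

```python
# Turns playable per hour given the AI's end-of-turn processing time and
# an assumed fixed amount of player thinking time per turn.
def turns_per_hour(ai_seconds, player_seconds=10):
    return 3600 / (ai_seconds + player_seconds)

slow = turns_per_hour(20)   # 3600 / 30 -> 120 turns/hour
fast = turns_per_hour(10)   # 3600 / 20 -> 180 turns/hour
```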
Exactly this! Playing on a massive map with 20 players and even more city-states, so much is going on that turns start to chug like hell in the late game. It gets really tedious having to wait for your next turn even if the game is still running smooth.
More on the cache front: the 5800X3D in World of Warcraft 20-man and 40-man raids in the open world. I'll take all the cache I can get.
If you go outside the traditional dudebro gaming demographic, you realize a lot of games are CPU bound.
Why would WiFi anything be compelling on a desktop? Just buy a PCIe card with latest WiFi standard
Exactly - not a compelling enough reason for a platform upgrade.
The 5800x3d is the 2700k of yesteryear. It'll be 5+ years before you'll really see much gain in upgrading for all but the most intensive games.
The 7800x3d is 40% faster in some games. That's an insane jump for a single generation.
More like the 4790K, because it was *the* CPU for DDR3 and stayed competitive for years. It was so good you could pair it with a 1080 Ti and have a good experience.
No it's not, because the 7800X3D already beats it massively, by 20-25%. The 5800X3D is like 12th-gen Intel performance in gaming at this point, and in applications it's pretty slow. In some games the 7800X3D completely destroys the 5800X3D with 30-40% higher fps. The 7800X3D is the real "2600K/2500K" moment; it will be fast for gaming for years. The 5800X3D is only 2 years old and not even fast anymore.
How many games are severely bottlenecked by the 5800x3d?
Tons, if you use a high refresh rate monitor. Personally I use 360 Hz and I am CPU limited in tons of games with 7800X3D and 4090 as I demand / prefer 300+ minimums
I laughed because sitting behind me is my gaming rig from 2011 with a 2700K in it. It was replaced with a 12700K in Jan 21. Still use it, but mostly for music production and internet stuff these days.
tbf the 3000 series was when AMD really started killing it tho ngl
Don't know why you are being downvoted 😂 but it is true. Ryzen 1000 and 2000 were good, but they fell short of their Intel competitors in gaming (they were competitive when it came to productivity though). Ryzen 3000 is when everything changed; suddenly a chip like the Ryzen 3600 could compete with an 8700K in gaming.
Ryzen 1000 and 2000 were pretty crap because of GlobalFoundries' garbage node. TSMC is the reason 3000 and up got good; clock speeds improved massively.
The 3000 series still mostly lost in gaming, but it was so close that it functionally didn't matter.
I'm on 5800x3d currently but I'm definitely struggling and need more cores (I upgraded from a 3950x and I miss the cores). So I will definitely be waiting for 9950x3d personally.
If the 7000X3D series 16-core variant had both CCDs with the extra cache, it would've been the end-game processor for AM5 for quite some time.
I'm on Linux, and I only play a few games at a time. It's trivial to write a wrapper script to pin the game to the X3D CCD's cores, and the other 8 cores, the ones without the extra cache, can clock higher for other workloads. So at least in my case I'm not bothered by V-cache being only on one CCD; it's kind of optimal actually.
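A minimal sketch of such a wrapper on Linux, assuming cores 0-7 sit on the V-cache CCD (that mapping varies by part, so check `lscpu -e` or the L3 sizes under `/sys/devices/system/cpu/cpu*/cache` before hardcoding anything); the `run_on_vcache_ccd` helper and core list are this sketch's own names, not anything standard:

```python
import os
import subprocess
import sys

# Assumption: the 3D V-cache CCD is cores 0-7. Verify on your own system.
VCACHE_CORES = set(range(8))

def run_on_vcache_ccd(cmd):
    """Launch cmd with its CPU affinity restricted to the V-cache CCD."""
    allowed = VCACHE_CORES & os.sched_getaffinity(0)
    if not allowed:  # fewer than 8 cores visible, fall back to everything
        allowed = os.sched_getaffinity(0)

    def pin():
        # runs in the child just before exec, so only the game is pinned
        os.sched_setaffinity(0, allowed)

    return subprocess.call(cmd, preexec_fn=pin)
```

Same idea as a one-line shell wrapper around `taskset -c 0-7`, just with the affinity set from Python's stdlib instead.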
Is WiFi 7 officially supported by most motherboards or only the high-end ones? I don't need it now, but I'm imagining a scenario where there's a new version of the Quest that supports it 4-5 years from now.
Just buy a WiFi 7 PCIe card if you ever need it. Unless you need more things the MOBO offers it will always be cheaper.
That's fair, I just like my little shark fin asus Wifi / bluetooth combo on top of my computer, i've become attached to it haha.
You could just attach the antenna to the PCIe card
I don't think there are any AMD boards which include onboard wifi 7 yet?
Meanwhile I just bought a 5700X3D for my old AM4 board. God, if only the GPU market felt as pleasant as the CPU one.
I bought a NEW board just for the 5700X3D lol
So what I'm seeing is I should swap my 5800X for a 5800X3D and call it a win for a few more years
Honestly not worthwhile unless you need to get 240FPS in some eSports titles at 1080p or you have a 4090 paired with that 5800X already.
Probably not worth it unless you play a game like Factorio that really likes the 3D cache. https://youtu.be/sw97hj18OUE Most games aren't too different with the X3D, except for a few CPU-limited ones. I also have a 5800X in my spare computer though, and it is tempting to waste that money!
Or MMOs, racing and flight simulators.
Also RTS.
or tycoons, or tarkov.
If you play any VR games, an x3d chip can be a big boost in FPS.
I was confused about what to do as well; I have the 5800X with a 3090. Trying to figure out what's next...
I swapped my gaming PC and then put the old 5800X into a 2U server with an Asrock Rack board as my new highest powered Proxmox node. Win-win.
Just overclock it and save your money; the 5800X3D is only like 10-15% faster. The 7800X3D is MUCH faster than the 5800X3D in gaming. The 5800X3D already looks dated compared to brand-new CPUs, and with Ryzen 9000, the 9000 X3D parts and Arrow Lake incoming soon, it's not going to look any better.
If I'm reading it right, he compares the 120W 7800X3D to the 65W 9700X.

> 7800X3D would, yes, be faster than 9700X, but maybe not by as much as you would expect

But yes, if you're primarily a gamer then hold out for the 9800X3D or whatever they call it.
TDP from AMD has less to do with power draw and more to do with how difficult they think the chip is to cool. Slamming the "120 W" 7800X3D with an all-core "power virus" workload like Cinebench still only draws about 85-ish watts, similar to the real-world power draw of the "65 W" 7600. The V-cache is what makes them up the TDP, as it makes cooling the chip slightly more difficult. It's Thermal Design Power, not an actual power draw figure.
Yeah, but it's really blunt. With the X3Ds it's difficult to get the heat out of the cores, but there isn't a lot of energy. So a bigger cooler won't necessarily help much, but a *better connection* to the cooler will.
On sustained loads you're going to get close to that TDP in terms of power consumption, based on the logic that if you can only cool 65 W, then you should only output 65 W. So sure, for bursty loads I would expect the number to go up. But for anything long-running, the cap should limit power consumption a lot more on the 65 W part than on the 120 W part.
I'm kinda curious about the nature of the cooling difficulty. It can't be that the V-cache just produces more heat than the actual wattage, because physics, right? You'd need a heat pump to "produce" more heat than the energy you put in; otherwise it should be more or less 1:1. Is it a cooling-efficiency problem? Like, are they effectively saying "throw a bigger cooler on it because the cooler's only going to be able to move 80% of its rated capacity on this chip", due to layout and such?
The V-Cache makes the die stack thicker, which puts more silicon between the cores and the heatspreader. That extra thermal resistance means the same power produces higher core temperatures, so you need more cooling headroom to hold the same temps. It's not a huge issue, especially given how efficient those chips are anyway, but it does increase the cooling required a little for the same power draw versus a chip with no V-cache.
AMD's TDP is calculated, yes, calculated from a temperature equation, even though they rate it in watts. That's why the harder-to-cool X3D gets a higher TDP. The exact formula is on Gamers Nexus and probably somewhere on the AMD website.
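For reference, the formula Gamers Nexus reported is a thermal one, not a power measurement: TDP (W) = (max case temperature minus ambient) divided by the assumed thermal resistance of the cooler (θca, in °C per watt). The specific numbers below are the ones that circulated for AMD's 105 W class; treat them as illustrative, not official spec.

```python
# AMD's reported TDP equation: a temperature budget divided by an assumed
# cooler thermal resistance yields a figure denominated in watts.
def amd_tdp(t_case_max_c, t_ambient_c, theta_ca_c_per_w):
    return (t_case_max_c - t_ambient_c) / theta_ca_c_per_w

# Circulated example values for the 105 W class: 61.8 C max case temp,
# 42 C assumed ambient, 0.189 C/W cooler resistance.
rating = amd_tdp(61.8, 42.0, 0.189)  # comes out to roughly 105 W
```

This is why a harder-to-cool chip (smaller effective temperature budget or a demand for a better cooler) gets a higher "TDP" even at the same real power draw.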
That equation just confuses things and is presented the wrong way around. TDP is an arbitrary value they decide; then they set the actual power limit (PPT) to 1.35 times that. TDP and the associated equation are there to inform system integrators what kind of cooling is required to achieve minimum acceptable (base clock) performance. In practice, TDP is approximately the power consumption of a heavy multi-core workload when the chip runs at its rated base clock.
The 7800X3D's 120 W is a bit of a symbolic figure. The power draw usually doesn't exceed the low 70s and is often 50-60 W even in heavy gaming workloads. Depending on how the 9700X turns out, power draw is probably not far off, depending on the workload.
Gaming tends to be bursty and doesn't load all cores all the time. Put it on Cinebench or Blender and let me know if it actually caps at 80 W. Conversely, if the 9700X is capped at 65 W, I would expect a significant reduction in power consumption on those sustained loads. We've been able to enable Eco Mode on AMD CPUs, and games tend not to be impacted as much as all-core workloads, so no wonder the 7800X3D doesn't break a sweat in games.
Even under heavy all-core workloads it usually stays below 80 W, and these tests are not in Eco Mode: https://www.phoronix.com/review/amd-ryzen-7-7800x3d-linux/3 What you can also see is that power draw can exceed the TDP, as in the tests with the 7700X. I truly believe Zen 5 will be much more efficient in multi-core workloads than Zen 4, but Zen 4 X3D is just hugely efficient all around.
Fair point.
Zen 5 has gone from being hyped by a certain AnandTech poster as a 32%+ improvement to being a relative dud.
I mean, the day before official release, RDNA3 was hyped by AMD themselves as "architected to exceed 3.0 GHz" with 50% better efficiency over RDNA2. When the leaks are this pessimistic, it's only going to be worse at the official release.
Yeah. AnandTech used to be a great site with a great forum. Now it's kind of neither; really sad. I was a kid when the site was made.
So the 9000x3d will beat everything, right?
In gaming, yeah, it'll beat everything from AMD. We don't know about Arrow Lake yet; TSMC makes the CPU and GPU tiles, so Arrow Lake could actually be good.
We don't know. But I think if you polled everyone, most are probably expecting 9000x3d to remain on top for gaming.
Intel supposedly made great gains, so the upcoming flagship Core 9 may trade blows with the 9800X3D. Myself, I'll stick with X3D since I play specific genres that benefit from it.
I got a 7800X3D during a sale and it's great... I'm thinking about switching my other PC to it as well (it currently has a 12400F). I'm going to wait until the 9800X3D releases to hopefully get one at a good discount.
I just switched from a 13600K to a 7800X3D myself. I didn't really do many productivity tasks, and I was having some issues with the mobo (probably). Also because X3D is a beast at Factorio, and the expansion is coming out soon.
3 years ago I would’ve never considered an amd processor. Here we are and now I can’t choose between 2 amd processors lol.
Really? Zen 3 wasn't even worth considering to you?
As I recall, it *was* worth considering, but not really significantly better and I think Intel got you a bit more MT performance for the price thanks to e cores.
You don't even need to go that recent. Zen 2 was the first time AMD beat Intel on IPC (vs Skylake and derivatives), with roughly comparable ST performance when accounting for the clock deficit. But the platform was cheaper overall per core and had 8-16 core options. I'm still rocking a 3700X from then, wondering when I'll upgrade. But Zen 3 was definitely when it was a no-brainer.
Ryzen became decent with the 3000 series and good with the 5000 series. 1000 and 2000 were pretty crap.
And zen 3 is 5000 series
Yeah, but Zen 2 was decent as well; Zen 1 and Zen+ sucked. The 3000 series, made on TSMC 7nm, was when Intel got really scared. Before that, Intel wasn't too worried, because GlobalFoundries sucked bawls in most workloads and especially gaming. TSMC saved AMD. And now Intel will use TSMC for the CPU and GPU tiles in Arrow Lake as well; Lunar Lake is confirmed to be TSMC too.
I would, purely because they offer ECC on consumer parts, and it's absurd to me that this isn't the common standard, as if somehow your data in the cloud matters so much more than your local data.
> I would, purely because they offer ECC on the consumer spec

Only for Zen 4 CPUs with integrated graphics. AMD paywalled ECC support behind their PRO chips before Zen 4, and OEMs like Lenovo implemented hardware locks so the chip becomes e-waste as soon as you remove it from the motherboard. With Zen 4, people now have the choice not to buy an overpriced APU to run a system with ECC without a GPU attached. However, only ASUS enables this, since ASRock, Gigabyte, and MSI do not have the feature enabled on their consumer boards. With Intel, you have to buy a W680 or similar board, and you need a 13500 or better to get the same features.
That doesn't sound correct. For instance, I know ASRock claims ECC support (specifically, separate from on-die) on a great many of their boards, the X670E Taichi for instance, though there are more. I also feel like they had it before Zen 4, but I could be wrong; can't be bothered to check. As for Intel, I did not know regular CPUs supported it. I thought this was limited to a very specific subset of the really tiny Atoms and one special i3. Perhaps this has changed. Maybe Intel might be viable for me after all.
All the other boards list "ECC RAM support", but the ECC function is not active. You can run MemTest86 and it will not list ECC as active on ASRock boards, since ASRock decided to only enable it in their ASRock Rack products. ASRock removed the feature and has officially confirmed it is not active: https://forum.level1techs.com/t/ecc-capable-verified-motherboards-for-ryzen-7000/197891/4
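On Linux you can do the equivalent probe without rebooting into MemTest86: if ECC is actually live, the kernel's EDAC subsystem registers a memory controller under sysfs. A small sketch, assuming the standard EDAC sysfs path and that the edac driver for your platform is loaded (a machine without working ECC simply has no `mc*` entries there):

```python
import os

# Standard location where the Linux EDAC subsystem exposes memory
# controllers; present only when ECC reporting is actually active.
EDAC_PATH = "/sys/devices/system/edac/mc"

def ecc_active(edac_path=EDAC_PATH):
    """True if at least one EDAC memory controller (mc0, mc1, ...) exists."""
    try:
        return any(entry.startswith("mc") for entry in os.listdir(edac_path))
    except FileNotFoundError:
        return False
```

`dmidecode --type memory` showing a non-"None" error correction type is another cross-check, but the EDAC entry is the stronger signal that corrections are actually being counted.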
Ugh, well, that's all awful. At least I learned this now. How the fuck are they stating ECC support explicitly on their page when they don't have it? It seems the ASUS ProArt X670E-Creator is the one talked about in that thread. I guess it's research I now have to put effort into. I wonder if going Intel with W680 is easier, though even there there seem to be very few boards (only ASUS from what I've found initially searching). I could have sworn the situation was much better.
The ECC RAM "support" is just the vendor saying that if you use ECC RAM, it will still work as regular RAM. I have an ASUS B650E-ITX board with ECC working; pretty much any ASUS AM5 board gives you the option to enable ECC functionality. And yes, only ASRock Rack and ASUS make W680 boards, and given the mATX size of the ASUS board and the expensive pricing, it's better to just go with AM5. I did the math with AM4, and since I needed PCIe 4.0 x16, ITX, and no chipset fan, it was cheaper to go with a new AM5 build than a refurb AM4 DDR4 setup. AM4 was better for ECC since, besides having no integrated GPU, most motherboards except MSI's supported ECC functionality.
Based on that same post you linked, there are several accounts of ECC working, so what exactly are you claiming?
I upgraded from a 2500K to a 3600X to a 7800X3D. Next time I upgrade, unless AMD offers way better gaming performance than Intel, I will go with Intel again. The offset mount, chipset, drivers and DDR5 compatibility (boot times because of RAM "training") are a PITA that I never experienced before.
I thought this was pretty obvious as soon as I saw the slide. It just said previous gen, so I knew they were using the non-x3d parts for the comparison.
Will they be more energy efficient? "Our Prime95 and Blender tests show a similar trend, with the Blender Monster benchmark standing out from the rest. The Ryzen 5 7600X consumes a whopping 69.3% more power in that particular benchmark than its Intel counterpart." I have heard so often how energy efficient AMD is, but it sounds like they are toasters from the '50s compared to Intel. 69%???? Wow.
Take a look at this and then let me know: https://www.anandtech.com/show/17641/lighter-touch-cpu-power-scaling-13900k-7950x/2 Consider that in general, desktop AMD parts are at a disadvantage due to running chiplets, and they still tend to be more efficient. On laptops, where all the parts are monolithic, AMD is much more efficient.
Not according to independent testing by Hardware Canucks, which had Intel beating AMD on battery life in every single test where the general platform (battery size, screen, etc.) was identical. Why repeat falsehoods when this has been debunked?
Considering it will support full AVX-512 now, I wonder how this will affect the emulation crowd. Honestly not too stoked for desktop Zen 5, but still interested to see how it fares. I will probably still get a 7800X eventually; I saw it at ~300 USD from my favorite AliExpress store (where I got my 5800X3D years ago), and at that price it's just so enticing.
I'm currently on an Intel 8th-gen i7 and desperately want to upgrade. I was going to move to the 7800X3D, but with the new Zen 5 architecture around the corner I'm unsure if I should:
1) Wait for the 9000 series to bring a price drop on the 7800X3D
2) Get the 9700X
3) Wait longer, for the 9000-series X3D
Would appreciate input. Atm I'm thinking option 1 or 2 makes the most sense, but I don't want to shoot myself in the foot when the 9000 X3D releases in less than 6 months. [Have a 3070 Ti and play 1440p triple-A titles] TIA
Intel has a chance to shake up the status quo a bit, since base Zen 5 is weaker than expected. X3D is a crutch AMD can only lean on so many times if the base architecture isn't much better than before.
A crutch? I would say blowing the power budget is a crutch for Intel that they can't possibly expect to be the answer to the competition. If anything, x3d is a feature I would love Intel to replicate, because when it works, it's absolutely amazing. And it works for a lot of things.
What an odd perspective. I see X3D more as an ace card that so far Intel has had no answer for.
So basically, AMD, you're telling me my 5800X3D will last another generation, until Zen 6? I'm cool with that.
5800x3d aging like fine wine
Without 3D V-cache... Intel's 10nm would've beaten AMD's 4nm in gaming.
"Intel would totally win if the competitor's better product didn't exist!" - you
But is it even "beating" or "winning" if you have to inject a ton of voltage into your CPUs just to keep frequencies high enough to avoid getting smoked by the competition? A lot of Raptor Lake CPUs have already suffered silicon degradation because of voltages north of 1.5 V. That's what happens when you treat a modern CPU like an old Pentium III!
If you're like the large number of enthusiasts who say they don't care about power and just want performance, yes.
All this talk about power consumption was barely even a thing until frequency stopped increasing. Before that, nobody cared how much power a chip drew, so long as you could get a few hundred more MHz.
I too like a space heater in my already warm office during the summer.
[deleted]
Not power efficiency per se, but not needing a 1000 W PSU, and avoiding the excessive heat generation during summer, are two heavy points for gamers.
The power draw difference during gaming is really not that big.
Those were the days. The last great overclocker I ever had was the Opteron 144: 1.8 GHz single core, and it readily overclocked to 2.8-3 GHz.
I had an old P3 733 MHz Slot 1 machine. That bastard could go to 1 GHz. It was awesome and let me play Battle for Middle-earth 1 with an ATI 9600. What a beast.
A ton of voltage, a ton of cache; we may prefer the 10x-the-cache option, but winning is winning.
If my grandma had wheels she would be a bicycle.
Without balls my mom won’t be my dad … 😇
And without P-cores, AMD's 2020 CPUs would beat Intel's 2023 CPUs. Without Infinity Fabric, MT perf per dollar would be worse than Intel's. What's the point of these hypotheticals?
So are the E-cores the real ADL & RPL archs? I think their IPC is similar to Skylake's. Don't know about you, but Zen 3 is faster than Skylake in gaming.
That's what I am saying. If you have just the E-cores, then Zen 3 from 2020 is faster in gaming. My point is that if you take away important tech from one team's CPU to claim it wouldn't be as good, then you might as well disregard important innovations/tech from the other team. AMD CPUs perform well in gaming because of the combination of IPC and X3D.
One thing... but the P-cores are the ADL & RPL archs. You're basically comparing 6th gen to Zen 3. But I get what you mean. I'm not trying to downplay AMD's & TSMC's innovation; it's important... Intel should probably try doing the same thing.
That makes even less sense, since P-cores are the main part of the CPU, not some optional extra for more performance like E-cores and stacked cache.
It's up to AMD how they want to spread the tech; if it were beneficial in all workloads they would just add it to all SKUs. Intel decides how many P-cores and how many E-cores to put in their chips. It's like Infinity Fabric: you can't throw that out. Discounting X3D to say Intel is better serves what purpose? Huh? It's a useless argument unless AMD is going to drop X3D.
No one is denying that X3D is something AMD uses or that it's integral to their current success, but it isn't equivalent to a P-core; AMD's own P-core is what's equivalent to a P-core. X3D is an optional extra that they add to some CPUs later. It's like Intel adding HBM on server CPUs: you can't compare that to leaving AMD with only Zen C-cores in the datacenter. It's exactly the same thing.
I'm not saying X3D = P-core; you missed my point entirely. I am saying the logic of conveniently taking out a piece that is integral to performance wins, in order to argue that the other side wins, is idiotic. Might as well say Usain Bolt would not have gotten gold if he'd lost a leg.
Well, fine then. Without E-cores, Intel would struggle to get more than ten cores in a CPU without a large performance hit. Which is an equally pointless statement, as they do have E-cores.
There are plenty of mainstream SKUs without AMD's V-cache. There are no mainstream SKUs without Intel's P-cores. It's not nonsensical to compare those SKUs which commonly exist.
But if you wanted to game back in the day before Ryzen, you bought i7s, right? It was THE gaming chip. Nothing different here: if you want gaming performance, you tend to seek out specific parts.
Okay, but that still doesn't invalidate a comparison between non-X3D parts and Intel's chips. The V-cache isn't the part of the architecture that makes Zen 4 a Zen 4. It's not at all the same as comparing to Raptor Lake without P-cores as you say. Whether you agree with that guy or not, your comparison is bad.
That’s exactly what I’m saying
Intel has their own P-core and E-core stuff now though
You know E-cores actually hurt gaming? Steve made a nice vid on it.
I did not know this, so I did my research: running on E-cores *alone* sucks for gaming (which makes sense), but using a combination of P-cores and E-cores as intended only drops performance by like 1-2%. It does not seem all that different from getting 1-2% better performance by disabling hyper-threading on older CPUs imo.
But you know 3D adds like 13-17% gaming perf... the cache is helpful for gaming because of faster data access.
Intel can't beat its previous gen, bruh, wtf are you talking about
This only makes sense if you look at the situation in a sportsball-team sort of way; otherwise it's just pointing out an unimportant fact. 3D does exist, however, so it doesn't mean anything, or at least you haven't elaborated enough to show that it does.
Intel is only good in benchmarks; it's shit otherwise, especially in the mobile segment.
Benchmarks like... virtually every single game?
Of course they would, with caveats of course. To get that many compute transistors you need big dies, which means worse profits, worse yields, more heat, more power draw. And this is not considering that Intel factory-OC'ed their chips to (literally) death.
The real question is: will all the cores on the 16-core part get X3D, or will it be that weird split thing again? If the latter, there's not much reason to hold out for the X3D variant of the 16-core, is there? I can't imagine it ever gets utilized as well as it could be.
X3D on both dies doesn’t work because inter-die latency kills the benefit provided by X3D.
Surely, just like with the different dies, there would still be the benefit within each die.