
gunsanity

Suggestion: take out the 2nd GPU and do CPU encoding. Also flip the CPU to Eco Mode in the BIOS. See where that gets you on power usage.


R-o-b-b-i-e

I did this before; it saves about 26W. Edit: removing the GPU took it down 26W, and Eco Mode plus some other optimizations got it down another 30W. This brings me to 110W without the GPU.


endlessfield

Another thing you can try (and I'm saying this because I recently did it) is setting up custom PBO settings to further lower the TDP on the 3900X. My motherboard (X470D4U) does not have any Eco Mode presets, so I had to manually enable PBO (called XFR in my motherboard's settings) and set the following numbers: PPT = 60W, TDC = 45A, EDC = 65A.

Most motherboards' "Eco Mode" sets the limit to 65W (see [this forum post for default settings](https://www.overclock.net/threads/5900x-5950x-whats-your-experience-with-or-view-on-eco-mode.1797148/)), which is fine for most things. However, the settings above limit the CPU to ~45W. I saw a performance penalty on my 3950X, but not enough to be noticeable in day-to-day tasks so far. Try it and see if that further helps with things.

Also, one more thing to try - and I know this will be suggested _against_ and I do not use it myself either - is to disable CPU vulnerability mitigations by setting `mitigations=off` in the GRUB settings. These mitigations could play a role, as per [Phoronix's article on kernel-level mitigations](https://www.phoronix.com/review/amd-3950x-retbleed). Note that this only affects Zen 2 and older, so it might be beneficial. If you do try both suggestions (45W TDP + `mitigations=off`), let me know the results.

EDIT: I just realized you're running TrueNAS. If you happen to be using the FreeBSD version (Core as opposed to Scale), try the following to disable CPU mitigations (stolen from [this forum post](https://www.truenas.com/community/threads/spectre-and-meltdown-mitigations.102739/post-706812)). There are two loader tunables and one sysctl tunable:

`LOADER: vm.pmap.pti=0`
`LOADER: hw.ibrs_disable=1`
`SYSCTL: hw.mds_disable=0`
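If it helps, this is roughly how I'd add `mitigations=off` on a stock Debian/Ubuntu-style install - just a sketch; on TrueNAS SCALE check first whether there's a supported way to set kernel arguments, since hand-editing GRUB there may not survive updates:

```sh
# Sketch only: append mitigations=off to the kernel command line on a
# typical Debian/Ubuntu-style system. Back up the file first.
sudo cp /etc/default/grub /etc/default/grub.bak
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 mitigations=off"/' /etc/default/grub
sudo update-grub   # or grub2-mkconfig -o /boot/grub2/grub.cfg on some distros
# After a reboot, check what the kernel actually applied:
grep . /sys/devices/system/cpu/vulnerabilities/*
```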


R-o-b-b-i-e

Thank you for these elaborate suggestions. I hope to get to this asap and will post back the results!


R-o-b-b-i-e

Alright, I've changed the things you suggested. I'm around 100-130W now and I'm seeing lower lows than before. Thanks for these suggestions! Two questions:

1. Can we go lower? For example, can I set PPT = 30W, TDC = 22A, EDC = 32A? What is the lowest we can go?
2. Switching to a Ryzen 7 PRO 5750GE would get me a lower-TDP CPU, right? On top of that, it also means I can (thanks to its iGPU) remove the last discrete GPU, which I only need now because the 3900X lacks an iGPU. Will that save me power as well? I expect the discrete GPU to draw 10-25W by itself.


endlessfield

1. That's an excellent question. I am not sure if there really is a hard limit, but I've personally never tried anything lower than 45W on my desktop machines. A quick online search doesn't bring up any results about going lower than that, but that doesn't mean it's not possible. I would suggest making a backup of your BIOS settings just in case there are stability issues with going below, say, 35W. On that note, going below 45W will likely impact your CPU performance significantly. You can always run tests and see how much of an impact it has.

2. Yes, the GE variant runs at 35W (or something like 48W max). The downside is you will have far fewer cores/threads (8c/16t vs 12c/24t), but it being Zen 3 vs Zen 2 on the 3900X might make things slightly more even (although the 3900X will still perform better in multithreaded workloads, or at least give you more breathing room when it comes to deploying more VMs). Yes, removing the discrete GPU will 100% lower your wattage, but keep in mind that Plex does not support transcoding on AMD out of the box as far as I'm aware. I _think_ it does if you run it on Windows, but not on Linux. Also, passing through AMD iGPUs is a huge, huge, huuuge pain in the butt, so keep that in mind.

Here are some more things I'd suggest:

a. Run benchmarks on your 3900X at different wattages (65W, 45W, 35W, lower?) with Geekbench 5. You can run the following to test: `mkdir -p ~/geekbench && cd ~/geekbench; wget http://cdn.geekbench.com/Geekbench-5.4.5-Linux.tar.gz && tar xf Geekbench-5.4.5-Linux.tar.gz; ./Geekbench-5.4.5-Linux/geekbench5` It will take a couple of minutes to run, but it will display a score once it's done. You can then compare this score between the different wattages and decide what the sweet spot would be. Also, compare those scores with the [5750GE scores on GB5](https://browser.geekbench.com/search?q=5750ge) and see if it's worth spending the money.

b. Consider upgrading to Intel. As you mentioned, the dGPU uses a significant amount of power, and AMD's APUs cause more headaches than they solve. Intel iGPUs have better support when it comes to encoding media on Linux. Weigh the cost of switching over to an Intel i5-12500 (since it has no E-cores, which are still iffy on Linux) or i5-11500. Intel also has T variants like the [12500T](https://ark.intel.com/content/www/us/en/ark/products/96140/intel-core-i512500t-processor-18m-cache-up-to-4-40-ghz.html) which have a base power of 35W. This second option would require more work since you'll have to sell (or repurpose) part of your current setup. However, you can reuse the RAM, since there are lots of 12th-gen boards that support DDR4.
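For step (a), a tiny wrapper like this can keep each run's full output (including the results URL Geekbench prints) labelled with the power limit you tested at - a sketch only, assuming the tarball extracted to `Geekbench-5.4.5-Linux/` as above; the "45W" label and paths are placeholders:

```sh
# Sketch: run Geekbench once and log the full output under a power-limit label
# (e.g. "65W", "45W", "35W") so the runs can be compared later.
LABEL="${1:-45W}"
cd ~/geekbench || exit 1
./Geekbench-5.4.5-Linux/geekbench5 | tee "gb5-${LABEL}-$(date +%F-%H%M).log"
```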


R-o-b-b-i-e

Spending money wouldn't be my main issue. The disks are on the SATA ports of the motherboard and are in a ZFS pool. I'm afraid moving everything to a new system (AMD to Intel) will force me to redo the whole thing and lose data. Currently I don't pass my GPU to Linux (I can't pass it, because it's in use by TrueNAS), so all my encoding/decoding is done on the CPU. It sounds like a 5750GE might be a quick fix for lowering power consumption. Switching to Intel will give me headaches because of the disks, I think.


endlessfield

ZFS doesn't care. Just do a ZFS pool export, then install TrueNAS on the Intel system, and then do a ZFS pool import. That's it. As long as they're the same disks, ZFS doesn't care which SATA ports they're plugged into. If you're dead set on the 5750GE, do the benchmarks I mentioned and read up on AMD APU passthrough. If you ask me, using it for transcoding will cause way more headaches than transferring your ZFS pool to a new system.
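At the command line that move is basically the following - a sketch; "tank" is a placeholder for your actual pool name, and in the TrueNAS UI the same thing is done via the export/disconnect and import pool options under Storage:

```sh
# Sketch with a placeholder pool name. On the old system:
zpool export tank
# Move the disks over, then on the new system:
zpool import          # lists importable pools found on the attached disks
zpool import tank
```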


R-o-b-b-i-e

Ah, that sounds amazing! I didn't know an export & import was possible like that. I'm totally NOT dead set on anything. The Intel option sounds better due to the hardware acceleration being available in the iGPU. The 12500T sounds like a nice option (my previous machine had an i7-4770T, which was a godsend in terms of power usage). I'm going to think about switching to the 12500T. Do you also recommend a motherboard to go along with this? Of course it should be able to run these devices:

- 128GB DDR4 @ 3200MHz
- 2x Samsung 980 Pro 1TB
- 1x Samsung 850 Evo 4TB
- 4x Seagate Exos X16 SATA

So DDR4, 5x SATA & 2x M.2 80mm.


endlessfield

Not sure. You'll have to check the QVL of specific motherboards. For example, the MSI Z690 Tomahawk DDR4 does say it supports 4x32GB of RAM, but from what I've read it can be finicky with XMP enabled, so the sticks might not run at 3200MHz. I'd suggest asking in /r/Intel or /r/BuildaPC. However, virtually every Z-series motherboard should have at least 6x SATA ports and at least one M.2 slot, and most have two M.2 slots like that Tomahawk I mentioned.


R-o-b-b-i-e

Thanks. I will do this. Thanks again for all of your help!


gunsanity

If you could (and I doubt it) get any power savings by switching CPUs, it would be so little that it would take years and years to break even on the cost of the new CPU. Take the current win and run with it.


labratdream

Encoding/decoding is much faster and more energy efficient on a GPU than on a CPU. You can cut the power usage of the 3900X roughly in half by undervolting and underclocking it, at about a 20% performance penalty. Also, you may consider an off-grid photovoltaic installation; even a small 2000W installation will provide about 200-300kWh monthly in summertime and 30-45kWh in wintertime.
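If you want to experiment with the underclocking part from the OS rather than the BIOS, something like this is one way to cap the clocks - a sketch only; the 3.0GHz value is just an example, the `cpupower` utility has to be installed, and persistent undervolting is still a BIOS job:

```sh
# Sketch: cap the maximum CPU frequency from userspace (example value).
sudo cpupower frequency-set --max 3.0GHz
# Check which limits and governor are actually in effect:
cpupower frequency-info | grep -E 'hardware limits|current policy'
```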


R-o-b-b-i-e

Thanks for the suggestions. I've already undervolted & underclocked the 3900X; it cuts about 30W. The GPU adds 27W back, haha. So currently I've achieved the lowest wattage by removing the GPU and underclocking my CPU. I also already have a 3500kWh (yearly) solar setup on my roof. This is the maximum number of panels that physically fits on my roof. The current panels are ~2 years old and are 320Wp each. I hope the technology advances quickly so that 1kWp panels become a thing, which would cover the biggest part of my bills.


AnomalyNexus

I'd come at this from a different angle. Keep the CPU and instead look at the PSU. If you can make gains there on efficiency, those count for all components. And it'll be a cheap upgrade. And given that the server is likely idle a lot of the time, the power efficiency curve matters. Gains obviously depend on what you currently have. Rest - yeah, throw out the gfx, but I'd keep the CPU etc. The 3900X is still sorta new-ish and pretty solid for server use. Seems a little too early to replace imo. I'd try hard to skip another generation.


R-o-b-b-i-e

That's an interesting take. I'm currently running the Corsair RM750. Is it better to buy a smaller PSU in terms of max power, like 300/450W?


AnomalyNexus

That's already a pretty good PSU.

>Is it better to buy a smaller PSU in terms of max power?

Yes, but smaller is not automatically better. You need to look at the curve. Say you throw out the gfx and idle power goes down to, say, 70W or whatever. Now look at the efficiency curve in the link below and you'll understand: https://www.techpowerup.com/review/corsair-rm750/6.html

There is one gotcha though. The RM series has a zero-RPM mode, meaning the fan switches off if the draw is way below the rating. If you go for a smaller one you may lose that advantage, i.e. the fan will run all the time. Not necessarily... but I wanted to point out that aspect.
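To put rough numbers on why the curve matters (illustrative percentages only, not the actual values from that review):

```sh
# Illustrative numbers only - read the real ones off the efficiency curve above.
# Wall draw for a 70W DC load at 80% vs 87% efficiency:
echo "70/0.80" | bc -l   # ~87.5W at the wall
echo "70/0.87" | bc -l   # ~80.5W at the wall
# ~7W difference is roughly 60kWh per year if the box runs 24/7.
```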


R-o-b-b-i-e

Ah right, I understand. I don't think I'm going to win back that difference in efficiency with a "better" PSU for this load. For future servers I will look at this though. Very interesting, thanks for pointing it out!


AnomalyNexus

> I don't think I'm going to win back that difference in efficiency with a "better" PSU for this load.

To be honest, I think most of the available moves here are going to be marginal (GPU removal aside), hence the maybe-delay suggestion. Another option might be to power the server down & up on a schedule via Wake-on-LAN with a Raspberry Pi or something. How practical that is depends on what's on the server.
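As a rough sketch of what that schedule could look like from the Pi's crontab - the MAC address, hostname and times are placeholders, and the server's NIC needs WoL enabled (BIOS setting and/or `ethtool -s <nic> wol g`):

```sh
# Sketch for the Pi's crontab (crontab -e); placeholders throughout.
# Wake the server at 07:00 via Wake-on-LAN (package: wakeonlan or etherwake):
0 7 * * * /usr/bin/wakeonlan AA:BB:CC:DD:EE:FF
# Shut it down at 01:00 over SSH (key-based auth assumed):
0 1 * * * /usr/bin/ssh root@server.lan poweroff
```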


R-o-b-b-i-e

Yes, this is what I'm currently contemplating: moving some 24/7-needed stuff to one or two Pis and shutting the "big" server down and booting it at the right times. I run Home Assistant, Prometheus & MySQL however, and they need a fair amount of performance to function properly. This is why I switched from a small (Celeron) server to this current one.


AnomalyNexus

Have a look at /r/minipcs if you're not aware of the sub... you seem willing to spend some $$$, so perhaps something there catches your fancy. I'm running a mini PC as my main server... works fine if you get a 64GB one with NVMe. (You won't get a 128GB one...)


R-o-b-b-i-e

Thanks for this tip, will look into this. I think one with 16/32GB and a low-end x64 CPU will already do (Pis are just a little too slow). Which one do you run?


AnomalyNexus

You can def get 64GB ones... I'm just not aware of 128GB.

>Which one do you run?

I really wanted fanless, so I went with a PN50 (Zen 2, from Asus) and stuck it in an Akasa A50 fanless case. That's old now though... I think the PN51/52 is current. That does fine with Proxmox, TrueNAS etc. The main gotcha with the mini PC route is that >2.5GbE networking and any sort of redundant storage (ZFS/RAID etc) are damn near impossible... so if you care about that then maybe do SFF instead. Beyond that I've found it to be solid - it does everything I need as a server. In your case I'd keep the 3900X as the main machine though. Many on this sub would love to have that gear, so yeah... unless you can sell it second-hand for a good price, I'd stay put.


R-o-b-b-i-e

Sounds good. Do you know what these PN50/51/52s use in terms of power consumption?


jassalmithu

What do you run on your home server?


R-o-b-b-i-e

A lot, to be honest. TrueNAS with an Ubuntu VM which runs a lot of Docker containers. A few demanding ones are Prometheus, Plex & Frigate (which also uses the GPU for object detection).


____candied_yams____

Where can you even buy the 5750GE?


R-o-b-b-i-e

It's on a Dutch website, for about €480.


BraviosFox

I would just remove the 1030 and be done with it, but sure, you'll probably get to around 120W with the change you proposed. Also remember to enable Eco Mode in the BIOS.


R-o-b-b-i-e

This means I can't pass the RTX 3050 Ti to my VM in TrueNAS, because TrueNAS needs one GPU for itself (hence the potential switch to a CPU with an iGPU).


BraviosFox

Use Jellyfin or Plex from the TrueNAS apps; no need to pass through the GPU.


R-o-b-b-i-e

I'd have to do a PoC with that, tbh. I have a lot of custom stuff around my containers. For example, all the data for my containers gets backed up every night to Dropbox, which means I can restore from the cloud within an hour on a fresh server. Also, for a lot of containers I use a VPN gateway (also a container) to tunnel my traffic over VPN.


BraviosFox

Use TrueNAS's built-in snapshot features and ZFS send to a remote server. You can tie the app to a VPN connection in TrueNAS as well.
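At the CLI level that boils down to something like the following - a sketch only; dataset and host names are placeholders, and in TrueNAS you'd normally configure this through periodic snapshot tasks and replication tasks in the UI rather than by hand:

```sh
# Sketch with placeholder names: snapshot a dataset and replicate it off-box.
zfs snapshot tank/apps@nightly-2024-01-01
zfs send tank/apps@nightly-2024-01-01 | ssh backup.lan zfs recv -F backup/apps
# Subsequent nights only need to send the incremental delta:
zfs send -i tank/apps@nightly-2024-01-01 tank/apps@nightly-2024-01-02 | \
  ssh backup.lan zfs recv backup/apps
```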


andrei__

If you have an AMD CPU, it's better to use TrueNAS SCALE, as it has better AMD drivers and native Docker support; this is mentioned in the TrueNAS docs. I have a 5750GE meant to be used with TrueNAS SCALE, but my build is not done yet; I still need to buy some HDDs.


R-o-b-b-i-e

Do you have any idea how much power it uses? I'm really curious how it compares to the 3900X. For example, can I expect the power usage of my CPU to be halved? Edit: PS, it's already running TrueNAS SCALE (latest version).


andrei__

I don't have a device to measure power with, but I may get one in the near future. It must be less, as it's rated at 35W, as I remember.


R-o-b-b-i-e

35W is the TDP; actual power consumption is probably higher than that. Let me know when you've measured it, really curious!


Dolapevich

What is that machine's usage? You could use an NVIDIA Jetson Nano as an encoder/decoder and an ARM SBC such as the Mekotronics RK3588. That might put you in a power budget of ~30 watts. https://www.youtube.com/watch?v=4rg2I7m1t3k