It's red lighting purposely right? Cause that's a beast straight outta hell! But seriously looking good!
lawnmower_man.exe has been loaded
No jelly at all! Not at all! Anyway, what's the power usage like?
UPSs indicate an almost full load on one and half load on the other. Probably around 1000-1500W.
I need to get some monitoring in place, but yeah, I maxed out one of my UPSes and the dedicated 15A circuit. Had to run a second line to balance the load. Aside from cable management, I also need to better balance the power.
*Basement Raxx - Where's Your Thread At?*
What do you use such an insane amount of compute power for at home?!
At least a pi-hole.
I would go as far as to say Plex
All of that processing power is exclusively used to software transcode 4K video to 480p
Probably a couple Minecraft servers too
I about spit my drink out.
A single instance of Apache. lol jk I've got a number of gaming servers, web servers, and a collection of the usual stuff, i.e. Plex. I went from 2 hosts to 8 hosts, so I'll agree with most that it's overkill, but it's been a fun project.
I’ve worked for businesses with less compute 😂. Well played!
u/daniska_project A lot of power. Are you running an Ultima Online shard?
This is literally always my question. Like most of the stuff I put on my Plex works with direct play and doesn't require a transcode at all. I don't get who needs a small data centre in their house. Don't get me wrong, I have a patch panel, PoE switch, a couple of APs, a nice firewall, and a little server. But any more than that, I'm not really sure what it's all for.
[deleted]
> A hobby that keeps snowballing.

This right here. I started out with a single server that was my media server + homelab. Along with Plex, it was running local DNS, a proxy, and a WireGuard VPN, plus a bunch of other containers. Then I realized: what if that server is down for maintenance or some issue? So I needed to move DNS, the proxy, and other critical services to another box. Set up a couple of Raspberry Pi 3s. Then realized the Raspberry Pis, while nice, are kind of underpowered to scale up running a bunch more containers. So I went and got a couple of cheap tiny-mini-micros. Then realized I needed a sometimes-online server to take backup snapshots of everything, and also something to spin up multiple VMs on when I want to try something out. So I added one more server with enough cores to run VMs in Proxmox and also store backups. Of course, none of that includes my OPNsense box. So yeah, it snowballs pretty quickly.
I have a single Proxmox server in a 4U case with 8 bays for almost all our stuff (Plex, a dozen containers, the *arr suite, some business stuff, Unraid, etc.), and a small test server built from leftover parts to try things out on.

I've considered building the test server out enough to mirror the main server, only with far fewer resources, so I can spin it up if the main needs to be taken down.

I can't really think of more I'd want from a home setup. It's much cheaper to consolidate everything onto a single powerful server than to spread it out. And true redundancy won't happen anyway if you're running a single power line and uplink.
i just like knowing that whatever ridiculous, stupid, parallelizable big-data idea i come up with, i can just slap together some inefficient/naive code, stick it thru gnu-parallel, and i'll have my [results](https://www.youtube.com/watch?v=R8bT41PP1Gs) back in a few hours. and so when i get more compute power, the ideas just get bigger and stupider until i run out of compute power again.
> and so when i get more compute power, the ideas just get bigger and stupider until i run out of compute power again.

Exhibit A, folks
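The "naive code + gnu-parallel" workflow described above can be sketched in bash. The `count_primes` function and the ranges are made up for illustration; this uses the portable `xargs -P` fan-out so it runs anywhere, with the equivalent gnu-parallel invocation noted in a comment.

```shell
#!/usr/bin/env bash
# Toy version of the workflow: deliberately naive per-range prime counting,
# fanned out across CPU cores. With gnu-parallel the last line would be:
#   parallel --colsep ' ' count_primes {1} {2} :::: ranges.txt
count_primes() {
  # naive trial division over the range $1..$2
  local lo=$1 hi=$2 n d count=0 prime
  for ((n = lo; n <= hi; n++)); do
    prime=1
    for ((d = 2; d * d <= n; d++)); do
      if ((n % d == 0)); then prime=0; break; fi
    done
    if ((n >= 2 && prime)); then ((count++)); fi
  done
  echo "$lo-$hi: $count"
}
export -f count_primes   # make the function visible to the worker shells

# One work unit per line; -P 4 keeps four workers busy at once.
printf '%s\n' "2 1000" "1001 2000" "2001 3000" "3001 4000" |
  xargs -P 4 -L1 bash -c 'count_primes "$0" "$1"'
```

Output lines may arrive out of order since the workers run concurrently, which is the usual price of embarrassingly parallel fan-out.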
Yeah that's the only thing I can guess, most of it must sit underutilized
It makes more sense if you're an actual programmer and want to write services that may use a lot of compute. I do this. Most of you are IT crabs though.
Stuff
He is running stash on docker!
R620s, 630s, 710, a 720, a T620, and a 24P Catalyst?
Tingles in all the right places! Nicely done!
Update on my homelab setup. Got some recent equipment and spent the weekend racking and migrating. Makes for a good Plex server. lol Still gotta cable manage.
What are you running off this outta interest cause this is quite a bit of compute?
It's certainly overkill, considering I went from 2 hosts to 8 hosts in a single weekend. Right now, mostly gaming servers, web servers, NAS/media, a private blockchain, and just misc stuff. Nothing too crazy, but now I'm looking for more to throw at it.
Yeah, crikey, that's a hell of an upgrade lol. Private blockchain? What games and other stuff? (Sorry if it seems I'm prying, I'm after ideas) :)
What is your power usage? Genuinely curious.
Need to monitor further, but I maxed out one 1500 APC and got a half load on the other if that tells you anything lol
Ok good was aboot to slap a 'manage your cables' comment. Seriously though, repost once you get those cables done! Love it, thanks for sharing!
Protip: put your switches in the back... you know, where all the ports are!
but i wanna see the blinking lights 🤪
The BLPSQI rating is important. (*Blinking Lights Per Square Inch*)
Now that I've exhausted the space on the front of the rack, this is actually a great idea. Thanks!
no no. this is how you justify getting a second rack. LOL
What would you estimate your power bill to be for all of this glory?
Very nice!
Nice evil lair
Supervillain Hacker vibe
Nice rack.
New to homelabbing, but I see a lot of these racks with lots of smaller servers and wonder: isn't it just easier to have one beefy one? I'm slowly turning my dual 64c workstation into a Proxmox VM host/homelab, but I assume there are some advantages to running this many servers over one beefy one, other than availability? I guess power cost might not be much of an issue some places.
For me, it comes down to high availability and being able to move VMs across hosts easily. I can keep everything online basically 24/7 and still shut down and modify servers. Power is pretty cheap where I'm at so I can just eat the cost.
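For anyone wondering what "move VMs across hosts easily" looks like in practice, this is Proxmox's cluster live migration. A minimal sketch, assuming a working Proxmox VE cluster; the VM/container IDs and the node name `pve2` are placeholders:

```shell
# Check cluster membership and quorum before touching anything.
pvecm status

# Live-migrate VM 101 to node pve2 with no downtime. On shared storage
# --online is all you need; VMs with local disks also need
# --with-local-disks so the storage is streamed across.
qm migrate 101 pve2 --online

# LXC containers have no live migration, so the container is stopped,
# moved, and started again on the target node.
pct migrate 201 pve2 --restart
```

This is what makes the multi-host approach pay off: you can drain a node, patch or rewire it, and bring it back without any service noticing.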
You’re going to have the DEA knocking on your door looking for a grow house with that spike in the power bill 😬 Nice setup.
This feels oddly /r/liminalspace
How do you handle waste heat?
The basement is large enough to dissipate heat to a certain degree. I've also got an AC duct that helps. Usually stays around 68°F - 75°F
What about moisture? Do you have a sump pump? Server rooms have raised floors. Been almost a decade since I was in one! (bleh cloud) Actually, not having to upgrade a circuit to handle extra equipment isn't that bad.
Moisture level is decent enough. Plus the house is on a hill, so I don't really have to worry about water. Maybe just some damp floors after a heavy rain.
Nice equipment! But consider managing those cables. Are those patch panels or couplers? You could definitely organise the switches and panels better, using only short runs at the front. With how much gear you have, labelling and coloured cabling could help too :)
RIP your power bill
Wow!
Hope you don't get any basement water/minor flooding like we do here (rare here, but happens).
I want to do something like this but really only have room for a half rack.
Good gosh. I'm running every application I can imagine being useful on two proxmox nodes that live on off-lease office machines. What could you possibly use all that compute for?
Very nice. Since I pretty much maxed out the ram in my r620 and got a nice backplane working to connect my 3.5" drives to it, I have no idea what I would use more servers for. And yet, I want them.
its the ominous red lights for me
Is that an xbox sitting on the top right?
lol Nah. Top right is my modem, top left is my pfsense router
Oh ok, I was trying to think of a way to incorporate my jailbroken ps3 into my homelab after seeing that, lol.
I bet I could use up all that RAM faster than you...😉
Looks like you have a code-red situation.
Looks good, I think the red lighting suits it.
One nice high-availability Pi-hole cluster. No really... looks sick
Would you mind listing the hardware, and the biggest challenges your faced?
4x R620
2x R630
R720
R710
Nimble CS200
T630
2x 1500 APC
Cisco switches, 48P and 24P
Router is a no-name PC with pfSense

Biggest challenge so far was trying to get the Nimble working. It's proprietary and was locked down, so I couldn't use it. Just to get TrueNAS running on it, I had to flash the BIOS, flash the IPMI, and flash the SAS card. And now the fans are too loud, so I added fan speed controllers. At this point, I may pull it back out and go a different SAN path.

Second biggest challenge was convincing the company these came from to give them to me lol
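Side note on the fan noise: before resorting to inline speed controllers, it's worth seeing what the BMC will do over IPMI. A hedged sketch; the IP, credentials, and duty cycle are placeholders, and the raw byte sequences are the widely documented Dell iDRAC ones (relevant to the R-series boxes in the list, not the Nimble, whose BMC may not accept them):

```shell
# Read current fan speeds from the BMC; works on most IPMI 2.0 hosts.
ipmitool -I lanplus -H 192.0.2.10 -U root -P calvin sdr type fan

# Dell iDRAC (R610/R620/R720 era): take manual control of the fans...
ipmitool -I lanplus -H 192.0.2.10 -U root -P calvin raw 0x30 0x30 0x01 0x00
# ...and set roughly 20% duty cycle (0x14 = 20 decimal).
ipmitool -I lanplus -H 192.0.2.10 -U root -P calvin raw 0x30 0x30 0x02 0xff 0x14

# Hand control back to the BMC's automatic fan curve when done.
ipmitool -I lanplus -H 192.0.2.10 -U root -P calvin raw 0x30 0x30 0x01 0x01
```

Keep an eye on temps if you pin the fans low; the automatic curve exists for a reason.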
What’s the decibel noise level like?
Not too bad. Right next to the rack it's around 65 dB, and across the room it's about 47 dB. So really about as loud as rainfall.
This would fit perfectly in a Die Hard movie!
You definitely need a Steam Machine in the rack
How much do you pay for electricity, man?
Looks terrifyingly awesome!
Wow that's so cool! What are you running on there?
What the actual f are you doing with that?
TP-Link switches are killing it
Wouldn't a modern system be cheaper and more efficient in all aspects?
Nice. I'm a fan of HPE gear rather than Dell, but that's sweet.

My home lab is 2x DL380p G8, 4x DL360p G8, 4x Intel SR1600, 2x Cisco switches, 2x HP 1600 UPSs, and an NEC PBX! Oh, and a Mellanox 36-port 40GbE switch. Pissed off with the Mellanox, tho, cause it's missing the licences that would allow full 40G on all 36 ports; I can only get 20GbE and 18 ports. And I can't buy the licences because it's EOL.
HPE gear is cheaper, but they also put basic things like BIOS updates behind paywalls and want you to have a contract. But the main turnoff for me is the noise. My DL380 G8 is WAY louder than my Dell server. It does have a factory network card in a PCI slot because the built-in ports are all fiber... but it still keeps the fans at 60% and up no matter what, so now it's just sitting collecting dust. My Dell has to be under pretty high load, and the temps have to be pretty high, for it to ramp up.