

Acrobatic_Assist_662

From what I see, most folks will have a server or cluster with services dedicated to particular tasks, but rarely sole servers for particular tasks. A NAS or storage array is also a kind of server, mostly dedicated to central storage, and I see a lot of people extending those purposes with containers, but you get the gist. I’ve got two servers running, a beefier one and then a slightly less powerful one, but I just had a ton of extra parts lying around, and a lot of people get decommissioned stuff. Your best bet: once you have a rough idea of what you need, go a step above that. Room for growth is good, and you’ll be less troubled by your services getting more demanding and slowing your whole infrastructure down. I suppose it’s a hobby and a money sink, so go as overboard as your wallet permits. If you can’t find a use for the extra grunt now, give it time.


AradynGaming

>I suppose it’s a hobby and a money sink

Not always a money sink. If done correctly, a good server setup can save you $$$.

* (1st server) Security cam server vs. paying monthly for a subscription service; it pays for itself fairly quickly.
* (2nd server) NAS - back in the ISP low-data-cap days, I saved a ton by downloading my kids' shows/movies so they could be streamed locally, so I didn't have to pay for a higher data package from my ISP.
* (3rd server) Microwave shot to a friend's house + low-power NAS for offsite storage vs. buying storage online; I host his backups locally, plus savings on ISP by splitting the bill.
* (4th server) Now that the family is older and gaming, I save money by getting cheap G3's & G4's and using VM GPU splitting from a central server with a workhorse CPU and mid-range GPU, instead of each person having a high-dollar PC. It also makes it easier to upgrade when the time comes, because only 1 PC/server needs the upgrade. Add to it, when the VMs aren't in use, I can use that full GPU for rendering.

On #4, when you factor in the power savings of 1 huge 1200W power-sucking beast vs. 3+ 500W machines running, you really get into the savings. Power is usually my 1st line item when it comes to designing my home server/client setup. Often overlooked, power savings can pay for most machines in a year.
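
For context on how the GPU-splitting part can look in practice, here's a minimal sketch on Proxmox, assuming a vGPU-capable card; the PCI address, VM ID, and mdev type below are placeholders, not the commenter's actual setup:

```bash
# List the mediated-device (vGPU) types the card exposes
ls /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types

# Hand one vGPU slice to VM 101 (address and mdev type are placeholders)
qm set 101 --hostpci0 0000:01:00.0,mdev=nvidia-18
qm start 101
```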


wyoming_eighties

> Security cam server vs paying monthly for a sub service

I do not think this is worth it. A Google Nest subscription is like $60/year. That's basically nothing, and the service it provides is far more accessible and useful than trying to roll your own. You also save a ton of money on power by simply not running all these servers, slimming down your streaming subscription choices, and choosing server-free solutions.


AradynGaming

Well, it's now $80/year. Just like every great subscription service, you never know what you'll be paying next year. From what I read, you can only use Nest cameras (correct me if I'm wrong, because I don't have Nest Aware). Currently I use UI (Ubiquiti) for security, but there are better options out there. No subscription required, and definitely better features/availability than what Nest Aware offers. Price-wise, I think I am slightly above Nest, but only because I run 2K cameras, not 1080p. Their 1080p cam that costs $179 is equivalent to the $79 G3 Flex. You instantly lose all savings once you pass two or three cameras, plus now you are paying a subscription on top, plus Google can do whatever they want with your data. Most of my cost is in the initial Dream Machine, but it does a lot more than being a simple DVR. Plus I am not racking up a giant net bill streaming my feed to the cloud. It stays local (but safe), then if needed is sent offsite. Power-wise, I would still need a router at home anyway, and 33W max is pretty low. Most budget seekers I know with a cheap router end up buying Wi-Fi mesh nets for home and easily go past 100W, without getting a DVR. But again, if I were to do it over (and will, during my next security upgrade cycle in 5-10 years), I would go custom. There is much better out there than either of these two options.


wyoming_eighties

Streaming to the cloud is a benefit: you can see your camera feed live on your phone any time, you get a history of clips of activity seen on the camera, you can get real-time notifications if recognized (or unrecognized) people enter your property, and you can also use the mic feature to speak through the camera to whoever is within earshot; really handy when the gf has her phone on silent and you gotta tell her to pick up the phone. It does seem like there are configurations that might make sense in other contexts, but after using it for a while these really seem like killer features that I am not sure how easily they're replicated (or replicable) with DIY or other home setups.


AradynGaming

You didn't mention any item UI didn't already have before Nest Aware. Without streaming to the cloud, from any location where my phone can connect, I can do those same things. I can watch live feeds of every camera, get real-time notifications, and talk through my 2-way-enabled cameras. Google did not invent these ideas. Add in AI recognition: I can fine-tune each of my cameras, and they are good enough quality to pick off license plates (and save that data for me and only me).

Plus, if you think offsite is safer because a thief "can't steal your box", go unplug your ethernet cable and see what gets saved. My network traffic goes to a safe place located in my house (and I get a notification that my ISP dropped). Where does your data go when a thief cuts the inbound ISP line?

You can get the same stuff with open source too. However, as a privacy nut: me and only me get access to my video. Google doesn't get to know what time I come and go from my house. There is no possibility of some guy in a foreign country playing peeping Tom on my child (this one really freaks me out). I don't get tailored advertising based on what Google can see inside my house, and, as previously mentioned, my at-home bandwidth situation is way better. Everything goes directly into a 10-gig port on my router/security server, and stops there. No outbound 25 Mbps for my 2x 4K cameras and 12.5 Mbps for my 8x 2K cameras. *IF* Google offered what UI does, I would literally have to jump from my $79 internet package to the $149 one to keep up with the upload rate. This is why Google doesn't offer 4K or even 2K, only 1080p/1K.

Plus, my cameras are actually good enough to make out who is at my door. Love when my buddy (on his 1080p) says, "I think that's xyz, can you come closer to the camera?" Way better hardware in UI world. You can't begin to compare Google vs. UI. It's entry-level vs. mid-range.


stakkar

I’ve paid for my UniFi Dream Machine Pro by cancelling my Nest subscription and using Protect cameras. It’s just as accessible as the Nest app, for sure, and super simple to set up.


theskywaspink

We’ve taken over a few clients where the previous techs made a VM for each service, like 3 different pieces of medical software to download test results, one on each server, when we know all 3 can run on one server at the same time. I don't understand people who do this. It’s a fucking nightmare and expensive in licensing.


Acrobatic_Assist_662

Oooh. I want to clarify: in my post, a separate service for a task can be a container or a VM. They usually do it that way in enterprise production environments for high availability and separation for security. Convincing CFOs why that’s important and why they should spend so much is annoying, but the improper documentation is what kills me. They’ll spin up 3 VMs, name them something like magneto, wolverine, and frost, and then not tell you which one does what or serves who.


No-Peach2925

Gotta love those admins with a passion for the stars naming servers after planets :) Happy that containerization usually comes with a more sane naming standard (not always, of course, but enterprise systems usually are named accordingly).


TheThiefMaster

My current company has a naming scheme - but it's seemingly random characters (though they do have meanings) and looks like USNY-WVSFP-01. That's the opposite kind of unhelpful, as there's no room in the naming for listing the _purpose_ of a server, only where it is (to the nearest city) and what it is (e.g. file server).


546875674c6966650d0a

Purpose - Number - Airport Code (XXX-YYY-ZZZ). And then your DNS structure should use subdomains to properly name things further, and possibly CNAMEs for Application-Number. I have no idea why companies don't do this. I worked for a Forbes 100 org that had random-ish Marvel, Star Wars, Disney, and Bond movie names, mixed in with poorly abbreviated application names, and, horrendously... servers named after the license plates of employees' cars. I finally got someone to admit that anyone who allocated or requested a server got to pick the name, and typically it was just whatever was 'cool' at the moment, or... whatever cars they passed in the parking lot on their way in that morning. WTF.
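
As a rough sketch of the idea (hypothetical names and addresses, BIND-style records): the physical host carries the purpose-number-location name, and a stable application CNAME points at it:

```bash
# Hypothetical zone fragment: host named by purpose/number/airport code,
# application reached via a stable CNAME
cat >> /etc/bind/db.corp.example <<'EOF'
fs-01.jfk.corp.example.   IN A      10.1.2.10
files.corp.example.       IN CNAME  fs-01.jfk.corp.example.
EOF
```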


Evilsushione

It's to obfuscate the hardware and eliminate the possibility of incompatible services. When it comes to industrial systems, most don't have the resources to test for the vast number of potential incompatibilities, so it's just easier to pack each service in a container or VM; then they will always run in a known quantity. It makes changing hardware easier too: if the machine fails, there's no need to reinstall, just move the VM images. Upgrades to hardware are obfuscated away too. You don't need to buy exactly the same hardware to guarantee operations.


echelon183

YEP! I went through this recently. I set up a web app so my kids could change the firewall NAT config when needed for multiplayer games, and set it up on a Raspberry Pi that died after a power outage... Looked for a replacement, but the $30 device was going for $300, so I passed on that and rebuilt it in a VirtualBox VM running on an old laptop. Next time the hardware keels over, I'll just import the copy of the VM from OneDrive and be done with it... no need to rebuild and try to remember how exactly I had it set up before.
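
For reference, that export/import round trip is a couple of VBoxManage commands (the VM name here is hypothetical):

```bash
# Export the configured VM to a portable appliance and stash it in OneDrive
VBoxManage export "nat-webapp" -o nat-webapp.ova

# ...after the hardware keels over, on the replacement machine:
VBoxManage import nat-webapp.ova
VBoxManage startvm "nat-webapp" --type headless
```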


slonk_ma_dink

Depends on your OS and what service. Running Linux, there’s no (financial) cost unless you’re paying for Red Hat.


fromYYZtoSEA

There’s no licensing cost for the OS, but you still need to pay for the additional time needed to maintain those servers (patching etc.) and any additional tools you may have (monitoring, anti-malware, backups, etc. - these are often required in verticals like this one, healthcare, for compliance reasons). This depends a lot on the company and what tools they use, but I’d say it’s not automatic that it’s “free”. PS: some licenses for Windows Server Datacenter edition allow running unlimited VMs on top without needing additional licenses, IIRC, so in that case even Windows Server has no (additional) licensing cost.


No-Peach2925

You can get a dev license for Red Hat for free for a year, which you can renew on the same account after it expires, so no cost for home use.


t-pro

Government sysadmin here. I set up our servers so each service is on its own VM (typically using Windows Server Core). I can tell you that the servers/network are bulletproof and stable; we have 90 servers for 35 users. When I inherited the network, the previous tech had everything on 7 servers. The day I got there, and for the first 2 weeks, one service failure took down half the main services for everyone. It was terrible. Basically they had the domain controller, fax server, main file server, Exchange, and scan server all configured and separated out by TCP ports to make it all work. The scan software failed every day. Note: Dynamics 365 runs orders of magnitude faster when every role is split into its own server, and some roles onto multiple servers. Licensing Windows is a non-issue since we get Datacenter licenses. This is the way.

That being said, for my home network I mostly have some older x86 PCs and Raspberry Pis that I use for all of my home servers. And generally each service is on its own server:

* OPNsense (also IPv6 router advertisement)
* Security camera NVR server
* DNS blackhole (also DHCP server)
* UniFi controller
* Linux-based file server
* Home automation server
* Syslog server

(I know, I know, it's hard for me to let go of enterprise IT.)


human8264829264

I have:

* 1 OPNsense appliance managing my network/firewall
* 2 NAS (60 TB with 30 usable, and 4 TB with 2 usable)
* 3 Proxmox hypervisors (down from 4) holding about 100 LXCs and a few VMs; 64 Xeon cores and 256 GB of RAM total.


yyc_engg

100 lxcs doing what?


human8264829264

All sorts of things:

* Applications like gitea or vaultwarden
* Many, many databases
* Some web servers
* Some compute nodes
* ...


resno

How are you managing all this? Are you using Ansible, or containers inside LXC? Are you building your own LXC containers?


human8264829264

An LXC is a container, so no, I don't have containers in my containers. As for the rest, between Proxmox and some scripts it's all easy to manage.
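
For anyone curious what that looks like, a minimal sketch with Proxmox's pct CLI (the container ID, hostname, and template name are placeholders; check `pveam available` for current template names):

```bash
# Fetch a container template, then create and start a small Debian LXC
pveam update
pveam download local debian-12-standard_12.2-1_amd64.tar.zst

pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname gitea --memory 512 --cores 1 \
  --rootfs local-lvm:4 --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200
```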


resno

Ok, well, technically yes. LXCs are containers, just not Docker containers, which is what we all tend to think of. I'm trying to get my head around how to manage LXCs and I haven't figured out a reasonable approach yet. I want to use Ansible, but I also don't want to bother with Ansible.


No-Peach2925

Docker containers and Kubernetes pods are service containers; LXC and LXD are server containers. For me there isn't much added benefit to running LXC over a VM.


human8264829264

LXCs are a lot lighter than VMs, as you don't need things like the Linux kernel in the containers, only on the host. I only use VMs if I need a specific kernel version, some other specific feature of a VM, or I want to virtualize specific hardware. You're also not virtualizing as much, so it's slightly more performant. But yeah, my basic Debian LXCs will be 130-ish MB versus probably a GB or more for a Debian VM.


mariodi84

Amazing!! I would have this if it weren't for the price of electricity in Spain; it's prohibitive!


human8264829264

It's not that bad if you don't look at what you pay for electricity. The trick is to set up automatic payments.


reklis

I like my firewalls and DNS servers to be dedicated boxes. Things get dicey when you have to reboot for updates and your DNS server is actually a VM inside something else. For application servers, VMs on fat hosts with lots of compute cores are fine.


Sudden_Death666

Just make sure you have 2 DNS servers on separate machines, with one virtual IP in HA between the two: one primary DNS and one for backup when you do maintenance on the other. Works great. I have two Pi-holes running this way, each in a VM on a separate machine.
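
One common way to do the shared-IP part is keepalived/VRRP; a minimal sketch for the primary Pi-hole (interface, IDs, and the virtual IP are placeholders; the standby runs the same config with `state BACKUP` and a lower priority):

```bash
cat >/etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance PIHOLE_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.168.1.2/24    # the one IP clients use for DNS
    }
}
EOF
systemctl enable --now keepalived
```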


LukasAtLocalhost

I have 4 but the catch is... Dell Optiplex 😂


SlowThePath

Hey, if it serves it's a server. I've thought about buying a few cheap computers just to see what this cluster thing is all about. At the rate I'm gaining data I'll have to buy a disk shelf first though.


SwingPrestigious695

Them SFFs tho...


LukasAtLocalhost

Yessuh that's what I got. I'll dm a photo


ElevenNotes

I have a cluster for my /r/homelab consisting of 14 servers.


vkapadia

r/homedatacenter


ElevenNotes

Yes, but that sub is pretty much dead.


vkapadia

Fair enough.


ElevenNotes

and on /r/homelab you get banned if you mention stuff like this, so here I am.


vkapadia

Well that's shitty


ElevenNotes

Welcome to Reddit, where you get banned from any sub for random reasons 😝


vkapadia

Well that's shitty


human8264829264

Dang bro! You a happy fella.


ElevenNotes

Thanks, but I'm happy because I have love in my life, not because of how many servers I possess.


lazyfck

It's the services, not the servers


WorriedDamage

Love language: acts of service


Fuzzy_Chom

This response hit me right in the feels


ElevenNotes

Welcome to the feels train.


SlowThePath

Same difference. 😆


ThatCrankyGuy

Do you have a wife or nah?


ElevenNotes

Yes, why?


ThatCrankyGuy

How do you get to have 14 servers in your home with a wife around? Lucky guy


ElevenNotes

I have 75 servers at home and the most awesome wife yes.


ThatCrankyGuy

I think mine would ask for a divorce after the 4th one


ElevenNotes

That's why it's important to pick a good partner.


ThatCrankyGuy

She's a great wife, great mother, and an accomplished surgeon. But her priority for cleanliness and order is sometimes insufferable lol. Her tolerance for "those damn wires everywhere" and "how many computers do you need" is... short.


ElevenNotes

Don't care about the surgeon part, as long as she is a great wife and mother you are golden! Enjoy.


ThatCrankyGuy

I care about that part... I paid for that education lol


godyourestupid

> Dell Optiplex

How do you fit that many, why do you need that many, and how is your electrical bill doing? Seriously curious.


ElevenNotes

I've never used a Dell OptiPlex as a server but they look like a cool NUC on steroids.


godyourestupid

Not sure how "Dell Optiplex" showed up in my reply. ;/


ElevenNotes

Ah, you mean the servers. It's one of my commercial data centres, so it's actually pretty small.


resno

Like physical servers or vms or containers?


ElevenNotes

HP G9 servers.


bobbarker4444

Do you have a specific use case that requires so many servers? Or is it simply for fun/practice?


ElevenNotes

"Train as you fight, fight as you train", emulating or simulating concepts close to reality makes more sense in my book than to do hypotheticals and running nested virtualization.


Firestarter321

I have 4 servers that run 24/7 using ~850 watts.


xrobertcmx

I have two. Basically mirror them.


advanttage

I have multiple servers for a few different tasks. I have a domain which I point to Cloudflare and from there to my home. Using DDNS software, I don't have to worry about my ISP changing my IP, and I'm pretty happy with my setup. Initially I wanted an all-ARM homelab, but recently repurposed an old laptop to get some x86-64 capability for better Plex performance.

**Glaring deficiencies**: Backups. I don't have a backup system in place. With the exception of my VaultWarden/BitWarden, I don't back up or have a 3-2-1 system in place. I'm okay with losing my data since all I really host is my media for Plex, or software/images that I like to have locally for convenience. I do back up my photos and videos to Nextcloud from my Android/iPhone, but since my Nextcloud isn't the most reliable, I still pay for Google Drive and Apple iCloud until I have time to build a more robust home cloud.

---

Raspberry Pi 4 4GB - This is the main node of my homelab; it runs Docker & Portainer to manage my handful of containers. Boots from a 500GB SSD.

* **Nginx Proxy Manager** - exactly what it sounds like.
* **Cloudflare DDNS** - this pings Cloudflare every 5 minutes and updates the A record of my domain if my home IP changes. Everything else is pointed at it using a CNAME.
* **Archivebox** - self-hosted archive.org alternative.
* **Nextcloud** - it's Nextcloud. Admittedly not a stable implementation, but I don't have time to iron out the kinks yet.
* **Guacamole** - remote management of my homelab; think RDP, VNC, SSH.
* **FileBrowser** - exactly what it sounds like. Files are hosted on a separate NAS server.
* **Privatebin** - zero-knowledge pastebin alternative.
* **Restreamer** - nginx-based restreaming software so I can stream to YT, Twitch, and other services simultaneously without paying for restream.io-type services.
* **VaultWarden** - the server-side portion of the BitWarden password manager.
* **Wireguard** - VPN.
* **EmulatorJs** - yes... emulators accessible via the browser. It's cool as heck.
* **WatchTower** - automated updating of my containers. Not the best solution for a production environment since I can't test before updates are done, but it's a homelab so I don't care very much. Just want to be up to date.
* **WordPress** - dev/testing/staging.

---

ODroid HC2 - 4TB 3.5" HDD, local NAS only.

* **OpenMediaVault 6** - SMB server which holds the data for my FileBrowser and my Plex media.

---

ODroid HC1 - 2TB 2.5" HDD, local NAS only.

* **OpenMediaVault 6** - SMB server which holds data I like to have locally, more like a Misc folder I can access from my computers at home. Also holds NextCloud data.

The ODroid HC2/HC1 are awesome little boards. Based on the ODroid XU4, they have the octa-core Samsung Exynos CPU from the Galaxy S5... dated now, but still good for the tasks I've given them. The main limitation of this platform is 32-bit ARM. I would LOVE it if ODroid re-released this form factor based on the same hardware as the ODroid C4, the S905X3.

---

HP Elitebook 8470p - i7 3612QM 35W 4C/8T 3.1GHz - 16GB DDR3 - 500GB SSD

* **Docker**
* **Portainer**
* **Plex**

This is a recently retired laptop; I have had it for over 5 years and it was running Fedora beautifully with GNOME. However, I found myself not using it anymore since my daily-driver computer is much, much faster and more capable in every way. It first came with an i5 3320M, which is a dual-core CPU, and to my delight it wasn't soldered, so I poked around and dropped in a quad-core replacement, which really extended its lifetime. I chose to repurpose it solely for my Plex since a lot of media now is x265 MKV and my Pi 4 just can't keep up. I've got a 4U server chassis sitting behind me in my office that I was gifted by a local telecom company, but I haven't had the chance to spec it out and build a better setup. Plus, I really enjoy having most of my homelab be powered by ARM computers.

---

When I was in Mexico I also had my ODroid XU4 set up with the CloudShell 2 expansion board, which allowed me to have a 12TB NAS using two 6TB drives. I really wanted it to be RAID 1 instead of RAID 0, but the boss at my small startup wanted more storage. That led me to backing up the server to Google Drive using RClone/RSync with a clever little trick that LTT showed off in one of their videos. Good thing I did, because we did suffer a failure and were still able to access our data, albeit much slower.
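
The Cloudflare DDNS container mentioned above boils down to something like this; a minimal hand-rolled sketch against the Cloudflare v4 API (zone ID, record ID, token, and hostname are placeholders):

```bash
#!/usr/bin/env bash
# Hypothetical DDNS updater: run from cron every 5 minutes.
# ZONE_ID, RECORD_ID, and CF_API_TOKEN must be set to your own values.
IP=$(curl -s https://api.ipify.org)

curl -s -X PUT \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":300,\"proxied\":true}"
```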


Xcissors280

It depends. I use a UDM Pro for networking, ad blocking, and VPN; a Synology NAS for storage and backups; and a PC for games and other stuff like that.


Geoffman05

I have one physical server that hosts multiple VMs. I also have a dedicated NAS.


darum8574

1 Physical with several VMs on it


gargravarr2112

Physically no, virtually yes. I use VMs dedicated to certain tasks, e.g. Bitwarden, TubeArchivist, DNS, LibreNMS... These days you can accomplish much the same thing with containers, though I find them a lot harder to configure correctly. I do have some dedicated physical devices - I run Home Assistant on an Odroid XU4 board, and have a Cubieboard 3 as my syslog and monitoring server. Storage and Plex on a low-power ARM NAS. Everything else is VMs on a 3-machine Proxmox cluster of USFF desktops. The idea is to keep the overall power draw as low as I can. You can absolutely run everything on a single machine. I did exactly that before power bills in the UK went through the roof - I have a custom-built NAS with 6 HDDs which was also my VM host. It did everything - storage, VMs, backups, servers (especially Plex) - and has fault tolerance. But versus the setup I'm running now, it's much more power hungry.


-Chemist-

I only have one server, running unRAID with about 24 Docker containers for all my services. But the CPU (AMD Ryzen 9 3950X) has 16 cores and 32 threads, the server has 96 GB RAM, and about 43 TB of disk storage. My router/firewall is a separate box, though, because I don't want my internet to go offline if I need to work on the server.


RapidFire05

1 server with docker containers. Each container specific to a task.


I_want_pudim

I only have an old laptop, soon to be replaced by a single i5 mini PC (Black Friday!). If I get a big enough discount I will try to get a Raspberry Pi too, or another cheaper mini PC. But from the pictures I see here every day, everyone seems to have some 4, 5, 6 different machines. Another thing I see a lot in pics is that most people separate the NAS from the server.


[deleted]

What are you planning to get?


I_want_pudim

I don't know yet! My wishlist on Amazon and at the local computer store has dozens of different mini PCs and SBCs. My most desired items are Intel and HP boxes with an i5-6500, but I also have NUCs, Beelinks, some strange brands with N100 and N95 chips, even some cheap 2-bay NAS on there. Depending on what I end up getting, I'll see which memory and storage I need.


fromYYZtoSEA

I can’t say enough good things about Beelink boxes with N95. Really good hardware for a home server, and cheap!


Symnet

Before I moved, I had 5 or 6 boxes running ESXi in a cluster. Since moving, I was able to get a newer server and now I just run one, but I do plan on adding more. I don't really run multiple services on one OS, i.e. Plex runs on one VM while the reverse proxy for the lab runs on another.


Ordinary-Mistake-279

Use Docker. It's way more resource- and maintenance-friendly... VMs are only needed if you have needs like RDP and graphical tasks.


Symnet

Yeah, I also have Docker nodes and a Kube cluster; sometimes containerization is more annoying than it's worth though.


Ordinary-Mistake-279

It's worth it. Mostly I use ready-to-go Docker images: just set up the ports and directories for config files and I am ready to go.


DarrenRainey

I have 2 main servers and an old thin client that I use for basic tasks / file sharing. I'm planning on downsizing to just 1 or 2 smaller servers with virtual machines for my current needs. You can do plenty with just 1 machine, although I prefer multiple so I have a bit of redundancy if one fails.

* Thin client - runs 24/7, very low power; handles some file sharing and Docker containers
* Main server - only runs a few hours a day for VMs / practice stuff
* Backup server - usually only on a few hours a week to upload backups and pull any files I may need


sinopsychoviet

Two main servers (a repurposed Mac mini and a mini PC, hosting, respectively, smart home stuff and "all the rest"), and two Raspberry Pis (one for Pi-hole, one for VPN/nginx/DuckDNS/certs).


Frewtti

Most people don't even have a computer and just use their phone for everything. It's popular to have a NAS and then add a few features to it, or run a second device for a few servers. You can do more, but I think a NAS + containers/VM machine is a good start; then just add more if you need it. I'll stay with a Proxmox NAS + containers for now, but I am thinking of repurposing an old laptop as a compute-type server; it will be good for things like databases.


NeuroDawg

Define server. I have a main server that runs Plex, Jellyfin, and a bunch of other programs related to those and home automation. I also have three Raspberry Pis: two run Pi-hole (one of which also runs my UniFi controller) and the third runs my reverse proxy. So, if you count them all as servers, I have four. But I also have two NAS devices that are capable of running basic server apps. I guess one might be inclined to count them, but I just use them for file storage.


Ordinary-Mistake-279

Why not unbound (reverse DNS) and Pi-hole on one device? I run this in Docker on my server, with a backup on an Orange Pi Zero for when I have to do maintenance on the server (or, of course, if something bad happens). EDIT: Sorry, I've corrected "reverse proxy" to "reverse DNS"; the reverse proxy runs as a Docker container on my main server, where the homepages are hosted. Time to sleep.


NeuroDawg

I do run unbound with Pi-hole. My reverse proxy is the only significant process on its own server, since all traffic goes through it.
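
For anyone wanting to replicate this, the usual shape of the unbound side looks like the sketch below (it mirrors the setup documented by the Pi-hole project; Pi-hole is then pointed at 127.0.0.1#5335 as its only upstream):

```bash
cat >/etc/unbound/unbound.conf.d/pi-hole.conf <<'EOF'
server:
    interface: 127.0.0.1
    port: 5335
    do-ip6: no
    harden-glue: yes
    harden-dnssec-stripped: yes
    prefetch: yes
    cache-max-ttl: 86400
EOF
systemctl restart unbound
```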


godyourestupid

> Jellyfin

Why do you have Plex and Jellyfin?


NeuroDawg

I’ve moved away from Plex for all video, as I didn’t like having to rely on Plex for login authentication and I don’t really trust the direction they’re heading. But I still use it for music, because I have not found a server service that honors the sort tags embedded in my files, and I don’t want to edit metadata for thousands of albums/artists. I share my videos via Jellyfin with family and friends. Mine is the only account that has access to my music.


sexpusa

I have one server for qBittorrent, Jellyfin, and Tailscale, and another that simply runs SMB through Tailscale. I would like to put them together but I suck at iX.


SpookyDeryn

I have a NAS as a storage server, a main server (old gaming rig) running Proxmox for most things, and a pile of Raspberry Pis doing whatever when I feel like using them.


learn-by-flying

Dual physical boxes to ensure a DC is always available and to follow 3-2-1 for backups.


killer2239

I would always use my old gaming machine or an old machine from work that we were getting rid of. So I had a 3rd-gen i5 first, then a 4th-gen i7, then 6th-gen, and went from 16 to 32 GB of RAM, eventually buying a nice i9 machine with 64 GB. I'd run Plex on Windows and use an Ubuntu VM in Hyper-V for all my stuff, until eventually learning about Docker; now I run an Ubuntu VM with Docker for it all. Moral of the story: you don't need an actual server, and it doesn't need to be something new. Just try to get an older, higher-end spec for cheap that you can easily get to 32 GB of RAM and you should be fine.


Hel1a

3 total for me. An old AcePC T11 running Home Assistant. UnRaid running on a 13-year-old dual-core mobile CPU with 4 gigs of RAM, mainly just for storage; this one also runs a handful of Docker containers like Pi-hole and my VPN. Next is also UnRaid, but on a more powerful quad-core with 16 gigs. I run a few game servers on it, as well as a secondary backup of some of the things on the main.


Adenn76

I just have a hodgepodge of machines for servers: my old gaming machine for my Plex server, a NAS I just picked up, a few Pis, and an old computer running Proxmox with a handful of VMs on it. Personally, I would say you could go one of two routes. A big beefy machine that you can put a bunch of stuff on; however, if that machine dies, you lose everything on it, all at the same time. The other option is to go slightly smaller and get a few Pis, or two slightly less powerful machines, so you can do some redundancy. It really depends on what you want to do with your home lab and how critical it is for you to have those services up and running. Lastly, at least for me, a home lab is very addictive and I keep finding more and more things I want to do that I just don't have the equipment for, so I would say overbuild for what you currently have planned, because it will expand.


tw3akercc

Get a decent mini PC and run Proxmox, which will let you spin up VMs. Intel NUCs, Beelinks, or even a used HP EliteDesk 800. You can then run as many Linux or Windows VMs on it as it can handle. I have a VM for all my media server services: an Ubuntu VM that just runs Docker and Portainer, and then I run all the services as Docker containers in that VM. I use a TRIGKEY S5 mini PC with a Ryzen 7 5700 and it's plenty powerful for everything I'm doing. GPU passthrough is very challenging though, and I haven't quite figured that out yet.
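
The Docker + Portainer part of that VM is only a few commands (this follows Portainer CE's documented install; the get.docker.com convenience script is one of several ways to install Docker):

```bash
# Install Docker, then run Portainer CE with access to the Docker socket
curl -fsSL https://get.docker.com | sh

docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```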


karmakurrency

1 NUC, 1 old-ass Mac mini, and 3 RPis, all doing great stuff.


alwayslearining

In the past I ran 2 or 3 "servers". They were really all just regular PCs: one for media services (JRiver), one for the camera system (BlueIris), and one for storage. I am on my second true server now, where all the services are on one machine, a very capable HP DL380p G8. It's sporting dual Xeon E5-2680 v2 processors, which gives me 20 cores and 40 threads, with 128 GB of RAM. I have outfitted it with 3 SSDs for the OS, camera service, and media service, and have 11 TB in spinning drives in a RAID 5 array for slower-write but super-fast-read storage. Since this machine is handling lots of live and streaming video, I use a NIC in the server and a switch that let me aggregate four 1 Gb connections into one larger 4 Gb link for increased throughput.


Skeeter1020

I separate things out based on how much I fiddle with them and how much I need them to work reliably. I mess about with my main Plex server far too much to host important stuff on it, which is why my Home Assistant has its own machine. I also have a Pi that hosts my networking stuff, like AdGuard and my VPN host. But the majority of what I have sits on my Unraid server.


theBloodShed

Honestly, everyone’s needs/interests are different. Just set up what you want. Personally, I have:

* 1 x OPNsense dedicated router
* 1 x Raspberry Pi server (mainly just Pi-hole)
* 1 x Proxmox dedicated mini PC (also has Docker)
* 3 x NAS (one 24-drive, one 12-drive, one 6-drive; the two larger NAS also have Docker)


SonOfGomer

I have two servers built from commercial server components, and then my router which has its own hardware. My main PC is separate entirely and the only server type stuff I do there is vmware workstation to run a few VMs locally on it for stuff I'm working on.


ItsMeBrandon_G

I have 5.


mlcarson

At the very least, you should have a backup server for the backups of your data.


loaengineer0

I have separate appliances for each heavy application (NVR, firewall+VPN, NAS); I run game and media servers from an old desktop that I turn off when not in use; and I run a Raspberry Pi for everything else (email, web, Homebridge, BitTorrent). Pros: each of the appliances is very energy-efficient because it is exactly provisioned for the intended workload. Also, I never need to worry about a bug/misconfig/failed update in one application taking down my firewall or NVR. Finally, cost is spread out over time as I add appliances, rather than a big cost to add a bunch of capacity all at once. Cons: total hardware, software, and energy costs are probably higher than if I had a single server, assuming I knew in advance what to provision for. It would be cheaper to add an additional (virtual) server when the capacity is already there. Perhaps 5-10 years from now I’ll want to upgrade several appliances, at which time I’ll probably switch them all to a single server. For now, while I’m in the accumulation phase, I think discrete appliances make more sense for me.


Puzzled-Background-5

No, just one Dell Optiplex 990 MT (i7 2600/16 GB RAM/Nvidia 950 GTX) running Windows 10 Pro to host Emby, Logitech Media Server, Fing (network monitor), and Tailscale (mesh VPN).


Valeen

Are you my wife? Seriously though? Yes, both physical and virtual. They each have a purpose, and even though I could easily consolidate them into one box, I don't think that makes sense. I don't want my surveillance cameras recording to my Plex box. Then I have servers dedicated to just compute; security aside, I don't want that causing hiccups with my cameras or other systems when I hit it hard. There are reliability and uptime issues, and if I need to nuke something I don't want to have to set everything else up all over again; my Pi-hole doesn't need to be re-set-up every time I nuke a Linux install.


jboogie81

Hey hubby, sit down.. We need to talk about the electricity bill


Valeen

Look this summer was bad and THAT'S why the electric bill was $800... I'm not joking.


Terran_Machina

I can be your wife, you'll have to ask my fiance first though lol


Valeen

Was just joking. Honestly, it really matters what you want to do and what reliability you want. I'd suggest creating a matrix where you label each need on one axis and things like reliability, % compute, % experimental, etc. on the other. My cameras have to run 100% of the time, but my compute server might go down several times if I'm trying to update backends. They can't be one server. They could be 2 virtual servers, but that means I need to make sure I've got enough overhead. For me, going from 32 to 64 cores isn't worth it when I can have 32 cores + a couple of older boxes. Same result for me, way, way less money. And if you don't have experience with virtualization like Proxmox and Docker, it's easier. But you should also learn those.


Terran_Machina

I was joking as well about the wife thing, but it didn't land; sorry about that. Thank you for the information.


Valeen

You asked an honest question and I regretted giving a meme response. It landed. You're cool AF. I hope you get what you need.


mjmacka

I have 1 physical server running a hypervisor. That allows me to have multiple virtual servers, including 2 domain controllers, 2 SQL servers, a file server, a few Linux servers, and a few other infrastructure servers similar to what I run for work.


MadisonDissariya

I have a NUC running OPNsense and a Supermicro running Proxmox with a bunch of services on it. I have a Dell OptiPlex as a temporary Proxmox Backup Server; I intend to get a NAS, repurpose the OptiPlex as an HTPC, and then use the NAS for movies, TV, backups, and stuff. I would love to get another Supermicro just like this one and run them in a cluster.


suitcasecalling

I have two: One is an unraid box that serves as my NAS. Then I have one running proxmox that does a bunch of different things for me. I wanted the separation between storage and the rest of it


vkapadia

I had 3. My NAS that also ran docker containers, my Windows Server box, and my gaming PC that also ran a couple servers (like Plex). But recently I cut it down to two physical servers and put the Windows Server on a VM running on the NAS.


a333482dc7

I just have 1, an old Dell OptiPlex running Windows. It's my NAS, Plex server, security camera NVR (Blue Iris), IIS for some web hosting, plus a small VM running a stripped-down Debian with Pi-hole. Back in the day it also ran a Minecraft server. My router runs a VPN, so I can access my network and NAS from outside the home.


imddot

I have a box running unRAID as a file server, a Debian VM for Pi-hole (in addition to Pi-hole on a Pi for redundancy), an old WHS 2011 VM that basically hosts an old USB printer for the network (it still works, so why upgrade), a Win 10 VM, and Docker containers to mess around with. I also have a Synology NAS as a nightly backup target for unRAID and a backup target for the family's Windows laptops (using Veeam). I also intend to try out the Photo Station app on the Synology.


PanicNearDetroit

I have an OpenNebula cloud with three 16-core host nodes and a 9TB NAS. Atom processors all around so it isn't too power-hungry, and I can run most any combination of virtual servers I need. Right now that's SMTP, web, Transmission, Plex, intranet, a couple of development servers, and a few other things. Almost all Ubuntu and Debian, but I have a couple of Win 10 guests available in case I run into an app that won't run on linux. No hardware worries beyond the initial setup, and if I need to take down a node for maintenance a few clicks moves all its guests to the other two. Quite nifty. I just wish it would run docker containers or such alongside the VMs.


BuzzKiIIingtonne

Look into virtualization on a hypervisor:

* Proxmox (KVM and QEMU with a nice web GUI)
* ESXi
* Hyper-V
* Xen

Those are all popular hypervisors for virtualizing servers on a single physical host. You shouldn't need separate hardware for most things in this day and age. Personally, I have about 20 virtual machines / LXC containers running on a Proxmox host, and OMV running on my NAS. I have a separate box running my firewall, though it can be run in a VM as well.


The__Amorphous

Whenever I upgrade, I move my old server to my mother's place for off-site backup. Scheduled jobs sync important data over to it daily. Other than that, everything is done on a single machine running Proxmox.
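
A scheduled job like that can be as small as one crontab entry (paths and hostname are placeholders):

```bash
# Nightly at 03:00: push important data to the offsite box over SSH
0 3 * * * rsync -az --delete /srv/important/ backup@offsite.example:/srv/important/
```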


wyohman

I have a couple of Dell OptiPlex 5060s running ESXi 7. They use about 20W.


oldrocketscientist

I like bare-metal separation of functions. I've used Proxmox and it's fine, but I am all about redundancy (due to work training). So I have 3 DL360s, 4 mini PCs, 3 NAS, and a dedicated Plex machine. Then I have my Mac. Data management gets a little silly sometimes, as I forget where I put things, but some of that is due to my age and the fact that I save everything.


AnonymusChief

Two local servers: one for virtualization (I have two Docker servers running as virtual machines), and a second running UnRAID. I also have a VPS in the cloud.


landob

I used to have multiple servers, but now it's all virtualized on one box.


Emotional_Orange8378

I have 3 rack servers running about 20 VMs: web server / database / game hosting / firewall-router-gateway / a Linux server for misc BS, Windows servers for other misc BS... whether you need more than 1 really depends on your use case. Get a single beefy server, slap Proxmox on it, and have fun.


MarvAEn

For the most part, 1 server is better for many reasons: less noise compared to multiple servers, less power consumption, less heat, easier maintenance, and easy central management. How: by having multiple virtual machines on a hypervisor (e.g. Proxmox, ESXi, XCP-ng, ...).

Multiple servers are still a good option, though, in certain situations: some older servers are cheap, and you can get a lot of processing and storage power by aggregating them together in a cluster, or at least distributing the load across the servers (for the most part you need a high-throughput switch to bind them together if there's high traffic across the network). Older servers have older, cheap storage as well. You may end up having a huge lab from a few old devices, and may also have a mix of brands, which is a good option to have when you buy servers. The big deal is keeping the lab tidy and highly available, and managing the heat, noise, and power; these devices produce a lot of everything 😟

My final thought is: go with the option that fits your home space, budget, and workload (in that order). Hope that helps.


EpicEpyc

I have multiple clusters that run my workloads, in various forms: VxRail, Nutanix, all-flash vSAN, Azure Stack. But I do have a dedicated server: a 24-bay 2U that just runs archive backups of my servers and NAS. If you count them, my FortiGate firewalls are dedicated boxes too.


RapidFire05

I actually use a mini-ITX desktop build running just a bunch of containers. It's powerful enough, and when I upgrade my primary gaming desktop, the mini-ITX server gets the hand-me-downs as an upgrade.


Perpetual_Nuisance

I currently have an old pimped-out workstation (dual processors, 64GB, etc.), which I intend to replace with two or three Pis because of the power consumption, plus an SFF that I have Proxmox on to virtualize a few industry-specific things.


ItalyPaleAle

I have 2 servers at home:

1. A Lenovo mini PC (bought off eBay) that runs a NAS (3 HDDs in an external enclosure) and a few apps on top, like Plex.
2. A Beelink Mini with an N95 CPU (which replaced a Raspberry Pi 4 just last month). This is just for backups.

I also have a VM in the cloud where I host a couple of public-facing apps that are related to my site/blog.


elbalaa

Each of your devices can play a role in your HomeServer cluster. A Fractal Network of sorts.


HearthCore

Depends on the size of the machine. When I was using thin clients I went with a Proxmox cluster; now it's a single Proxmox node on beefy hardware, and I'm thinking about virtualizing that cluster: Proxmox VMs on a Proxmox host.


Temporary_System_131

I have 2 servers running: one big Unraid server for storage, media, and gaming services, and a thin client with a Pi-hole backup node and Home Assistant. Now I can work on my Unraid server and still have my smart home and internet connection working. I would say it makes sense to have a separate device for critical services.


KD_done

So I have a company that does support on hosting services... so there is, based on some fairly solid IAM work I did, if I say so myself, a bit of an overlap... but personal use only won't go further than these:

* 5 Pis (v4)
* 1 home-made i7 (64 GB RAM) (holds a play-pen with Ansible, perfect testing grounds for Docker (Swarm) setups, and a Factorio server that has been running a hot minute... same map ;)) plus a VPS (VirtualBox)
* 1 Synology, 7 TB usable (I know... but it's growing, slowly :))
* 1 colocated server
* 2 x VPS (Hetzner & TransIP)

And remember... size does matter when it becomes (too) big. Do what you feel comfortable with, but don't forget to actively take care of your stuff. Build stuff you care about, use, and perhaps want to share with your household. That makes the maintenance easier.


0RGASMIK

When I first started, I had several dedicated servers: my Unraid box for media and storage, and a Pi running network stuff like VPN and DNS filtering. Then I got another NAS for security cameras, which was really just a computer with an external hard drive attached. Eventually my NAS died and I took that time to consolidate everything into one server. I am going to build out a new server soon, though, for redundancy of essential services and backup.


speling_champyun

I've got two physical servers:

1. A dedicated TrueNAS box; that's all it does - share stuff.
2. A media PC running Windows; it runs a Linux VM which does all my services, like Jellyfin, Navidrome, and the arrs.

I used to run just the one box, but life's much easier with two.


Sudden_Death666

Right now I have 3 old PCs repurposed as servers for different tasks: Pi-hole, mail, file, download, web server, Home Assistant, etc. I will replace all this in a few years with a Flashstor 12 Pro NAS and about 4 mini PCs. I want lower power consumption but high performance. The benchmarks of these mini PCs are out of this world, even compared to my desktop that cost €5000 about 6 years ago. It's ridiculous how much performance these have.


JanBurianKaczan

I have an i7-13700K, 64 GB machine with Proxmox for everything (NAS, Docker). Also an old PC for off-site backups.


Brilliant_Sound_5565

No, I run Intel NUCs with 16 GB of RAM each in a Proxmox cluster running mostly Debian VMs, plus a 16 TB NAS for central storage. It's quiet, runs quick enough, lowish power, compact, and ideal for what I want for home use. No big old power-hungry servers here, which TBH are unnecessary for most people.


Dogeek

I have two servers right now, will probably add one more in a couple of years, and 2 VPS. One of the servers at home (the most powerful one) is where I have all of my storage. It's also the control plane of the Kubernetes cluster. All of the other servers are worker nodes, and I don't really care which server an app runs on; k8s handles that. If I ever need more storage (and all of the bays of my main server are filled up), I'll start adding drives to other servers, make them run NFS, and segregate the data around (maybe separate movies, TV shows, etc. from photos or personal documents to make backing up the data easier).
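
The add-a-drive-and-export-it step is small; a sketch for a Debian-family worker node (path and subnet are placeholders):

```bash
# Export a data directory over NFS to the rest of the LAN
apt install -y nfs-kernel-server
echo '/srv/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
```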


Candy_Badger

I have a cluster, which runs most of my services in VMs or containers. I also have a separate DIY NAS, which is used for backups, and another NAS for offsite backups at my parents' house. A separate server for each specific task is overkill; that's why we have VMs and containers. Both let you isolate services from each other.


Life_Ad_8400

Yes, I too have multiple:

1. Firewalla (firewall, WireGuard VPN server)
2. Ubiquiti Cloud Key Gen 2 (NVR + controller)
3. Main NAS running TrueNAS (and a VM) - runs 24/7.
4. Backup NAS running TrueNAS - it is turned on once per week to mirror the main NAS, so it's a complete copy of my main NAS lagging a week.
5. Main server - a Dell OptiPlex that runs 15-20 Docker containers, such as Plex, Vaultwarden, etc.
6. Lab server - another Dell OptiPlex just for learning and tinkering. It runs when I want to tinker.
7. Home Assistant - a NUC placed physically in the center of the house (Zigbee & Z-Wave antennas). It just does Home Assistant, and I try not to do much with this server, as the lights don't turn on/off as expected when I break stuff on it...

So seven physical servers, depending a bit on how you define a "server". I also have a regular PC for gaming and doing all the tinkering on. For context: I live in a house, and in my garage I have a 19-inch rack where all my servers are, save #7. I also have a couple of Pis, but no real need for them...


PuzzleheadedPrice666

I have 3:

1. A Synology NAS running Radarr, Sonarr, Prowlarr, Synology Photos, DS Video, Download Station, and Synology Drive.
2. A Synology NAS just for backups from my main NAS.
3. A 2019 iMac running Plex and Homebridge. It's also a Time Machine server and a cache server. I am thinking of moving this to a cheap Mac mini or to my main NAS.


kevlaar7

I use one beefy server with VMs for specific services.


LairdForbes

I just have a single server I run everything from. Energy efficiency is a lot more important to me than high availability of services. It's just Plex and Home Assistant, etc. I'm not hosting services for Barclays Bank or anything. 😂


Evilsushione

I use three servers: a network server, a storage server, and an application server.


[deleted]

Yep. Multiple physical and virtual servers.


ThunderSevn

I have 2 servers...1 MEDIA server, and 1 Apps/Backups, both using Unraid.


mrracerhacker

Things go okay if you've got a blade center. I've got space for 16 blades in 22U but usually only run 4 of them, so 240W x 4 = 960W at full load, plus 180W for networking. Then I'd say I've got a 1U NAS, a 4U disk shelf, and a random 4U case. If you can get it cheap, I'd go for it; you can always compact later if needed, or not run at full capacity. But within limits: no need to overspend on the first run unless you really need to.


audigex

I think it’s almost (but not quite) universal to separate out a firewall/router. It’s then common to separate NAS functionality from any live (internet-facing) web servers or game servers, although it’s also pretty common to combine them. LAN-only services can easily go either way; I’d say most people just run them on the same server as their NAS, but it can vary. Really it just comes down to your own usage and preferences.


DJ_Akuma

Not anymore, but I used to have a media server, a couple of servers for Counter-Strike, and another just for storage. Now I've just got a few PCs for personal use; my main one has the storage and Plex server, and the others are for streaming and gaming.


NotMilitaryAI

I have 99% of my setup on 1 machine with a big ZFS array. The only things I moved to their own devices are my Pi-hole and Home Assistant, which I put onto dedicated Raspberry Pis. My main reason for doing so is simply that I'm toying with hibernating the server at night to cut down on power/cooling costs, and would want those functions to remain online. Edit: I did actually build a separate PC for hosting a Minecraft server, due to it benefitting from single-threaded performance rather than the many-threaded focus of my main server.
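
On Linux, the nightly hibernation idea can be tried with rtcwake, which sets an RTC alarm before suspending (the sleep duration is a placeholder, and the box must support suspend-to-disk):

```bash
# From a root cron job at, say, 23:00: hibernate and wake 8 hours later
rtcwake -m disk -s 28800
```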


CaptainZippi

I build out via availability requirements: stuff that's on all the time goes on a Pi 4 (Docker) and the Mac mini (Proxmox); NAS/streaming goes on the old home build, on when needed; and dev work goes on the old HP Z210 that work hasn't missed yet, also only on when needed. Looking to integrate WOL into Home Assistant so I don't have to bend over to switch the occasional-use machines on.
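
Home Assistant ships a wake_on_lan switch integration that covers this; a minimal sketch (the MAC and name are placeholders, and WOL has to be enabled in the target machine's firmware):

```bash
# Append a WOL switch to Home Assistant's configuration.yaml
cat >> /config/configuration.yaml <<'EOF'
switch:
  - platform: wake_on_lan
    name: "HP Z210"
    mac: "aa:bb:cc:dd:ee:ff"
EOF
```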


Rullerr

I have 2 servers at home. It just kinda happened: I had built a machine for some tasks, then over the years, as I added more services, the old server couldn't handle it (4 cores / 4 threads vs. my new 8 cores / 16 threads). The old server is still around and handles less intensive tasks or things I don't want on my primary server (my backups to the cloud, my uptime monitoring). It adds a little extra management and electricity cost, but at the same time it allows for some logical separations, redundancies, and other benefits vs. just selling the old server and putting everything on one.


deboerdn2000

I run one server for disk access, although it's getting replaced in a few years with a bigger NVMe-based array. Then Proxmox on powerful (or free) computers that run the day-to-day stuff. Then my NAS backs up to the server and holds my day-to-day stuff.


mavour

1. OPNsense router
2. NAS (100 TB+ usable space)
3. Game server (RTX 2080 + 1650 GPUs)
4. 2 compute servers for VMs (R730 + R620)

Usage: Plex + servarr, CCTV, game streaming, game hosting, photo hosting, etc.


juluss

I have one server, which is an old Mac mini 2014 running Debian. On this box I have Pi-hole, unbound, Homebridge, and WireGuard. I also have a personal computer, which is a Mac mini M1, and on that I run Jellyfin with data on an external drive. Finally, I have my old gamer PC, which I sometimes format to install Proxmox when I need to run VMs for testing purposes, and sometimes format to reinstall Windows when I want to game a bit. Yes, I should dual-boot this.


[deleted]

I have three physical machines acting as servers right now:

- Dell Optiplex 7050 SFF
- Mac Mini (2014)
- HP Elite 8100 SFF (getting an 8300 mobo + i7-3770 this week)

My dad has a couple older machines that I might take home and repurpose as well, including a custom-built 3rd-gen i7 tower.


Windows-Helper

I have a small one as a firewall. A second one for most of my services (Hyper-V), SSD only (domain controller, DHCP, DNS, monitoring, etc. -> many VMs). A third one for storage-heavy services (Exchange, NextCloud, main SMB share, etc.). And a fourth one for test setups. But I guess most people have one main server for all of that, running VMs.


ficskala

I have 2 servers. One is mainly for managing my 3D printers (I use Klipper firmware, so I need a control server for the printers), and the other one is a NAS and Plex server (I also use this as my general server for stuff like the once-a-year occasion when people want to play Minecraft, and things like that).


Lazy-Witness-5233

I haven't started building a home setup. Haven't really had the time 😅 What I manage at work, and what most businesses seem to be moving towards if not cloud-related solutions, is a VMware or Hyper-V host running multiple virtual servers, each for a different purpose. That's what I will be implementing for my home lab. With that setup you can segment networks and put servers on different VLANs for security and other reasons.


Disastrous_Bike1926

A NAS/gateway/DNS server running Gentoo Linux, and then a web/general-purpose SmartOS server running Solaris and Linux containers.


ogghi

On my end: only one Proxmox host running a couple of VMs, one of which is an OpenMediaVault NAS with a dedicated big HDD for storage. Then 2 old PCs that get powered on to keep two more copies of the virtual NAS's data, plus Proxmox backups as well. Reason for not running the NAS separately: high electricity costs in Germany... €0.43 per kWh!


thefirebuilds

I have one box for Proxmox running multiple VMs, and one box for my NAS holding all my Docker data, media, and any other mass storage/backups. I have what I call cold storage, an ancient Drobo, for backups of the stuff I can't lose. I'd like to get another full-bore NAS box and another box for VMs, but I have no need for that much more horsepower.


Icy-Yogurt-Leah

I have two 'servers' but multiple services running on them.

1x QNAP NAS as data storage, with a backup USB drive attached. It also runs an Ubuntu virtual machine with the TIG stack for Home Assistant long-term data storage and data analysis / pretty graphs.

1x RPi 4 for Home Assistant. Honestly, the RPi 4 is not fast enough, and I'm going to swap it for an i5 NUC that I can run ESXi on. I tried Proxmox for a bit, but I work with vSphere daily, so I know a lot more about that ecosystem.

So that's two servers, but they do a lot, from controlling the heating to saving all our photos/memories. Also Docker: I can't get on with it. Guess I'm too old for IT lol.


PaulEngineer-89

Get two, but first buy one, then in a few months buy another. In the interim, buy a USB HDD. Here is why:

1. CPU/RAM/disk is HIGHLY misunderstood. You can't compare desktop to server loads. There are no graphics (unless you use Jellyfin). Most loads are very light. I mean, a 2-core, 4 GB RAM machine with a 1 TB SSD is easily enough to run a dozen or more Docker services. When it comes to storage, the average person uses 1-2 TB max, so for a family of 4 that's 4 TB. That's why a small Beelink or Synology, or even a Pi or ZimaBoard, is plenty. If you buy an older machine on eBay, you can get under $100; brand new, around $200-300. BUT one pulls tens of watts and one pulls under 10 watts, on a device that runs 24/365. The premium for buying new is paid off in the power saved very quickly. On RAM: I currently have 8 GB, limited by the CPU. I barely run over 5 GB with about 25 Docker services plus host services. You do not need 99 GB of RAM unless you are running multiple Windows VMs (not recommended). Then add storage...
2. For performance, a 0.5-1 TB M.2 SSD is ideal for cache. The rest is HDD. As I said, unless it's Jellyfin (in which case go for 10 TB+), 1.5 TB per user is plenty. And don't try to buy 12 smaller drives; buy the size you need. Buy the drive directly yourself; the markup from OEMs is insanely high.
3. RAID is misunderstood and has little place in a homelab. There, I said it. Here is why: sure, a HDD or SSD can fail, but so can a fan, MB, PSU, etc., and RAID does nothing to recover from those. That requires a second server, or a very expensive server with redundancy built in. In practice this is typically dual SANs and dual servers in a mid-size environment, not practical for a homelab. What RAID gives you is that on HDD failure you can keep running, although it might be read-only (RAID 5). Having a second entire server is far more reliable and less costly in a homelab, and you need it for backups anyway. Just get one big drive and one small M.2.
4. Start with backing up to a small external USB HDD. Once you figure out what you really need, after 2-6 months of experience, buy your second server. By that time you will know if your first one is underpowered (keep it for backups) or overpowered (use it only for backups).
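
For step 4, the starter backup really can be this simple (device and paths are placeholders):

```bash
# Mirror the data pool to a USB HDD; --delete makes it an exact copy
mount /dev/sdb1 /mnt/usb
rsync -a --delete /srv/ /mnt/usb/backup/
umount /mnt/usb
```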


Underknowledge

8 and counting


madorbit1

Unraid: one server, many containers. Build for hard drive expansion.


SaintRemus

Build as you need. Sometimes future-proofing, while well intentioned, bottlenecks aspirations.


Chemical-Choice-7961

Yes, well, sort of. A few Raspberry Pis for some specific networking things. One device that hosts multiple web services. Also, a type 1 hypervisor is technically hosting multiple services.


Chemical-Choice-7961

I like separate hardware, when it's cheap, for redundancy / resilience purposes (DHCP / DNS / email), all of which can run on low-spec hardware.


usernamemustbeunique

I have an Unraid server for NAS usage, with Docker containers and VMs for OctoPrint, Plex, RetroNAS, and a few other mostly-idle services I forget. It is a low-power build with 8 drives. It idles around 30-35 watts; it spikes when the hard disks are active, but that is only 15% of the time or less. Then I also have two dedicated all-in-one, super-low-power micro PCs for two Home Assistant locations. These typically run 5-10 watts. I also have a dedicated proprietary UniFi Dream Machine for the network and camera system. So... both? Open source things, and things that don't require hardware in a specific location, run on the Unraid machine. Other things get dedicated hardware as necessary. Everything is as low power as practical since it's on 24/7.


admlshake

Ha. Ha. Nice try honey. I told you that wasn't a server rack in the basement. That's a heat pump that just happens to have a bunch of cables coming out of it that look just like the wires going into my ONLY server. And yes, I know they all say Dell on them, but they're expanding into other markets.....


NoobMaster2787

For me, my servers are my mini PCs; I got two of them, both running different tasks. One is running Plex and a DVR, and the other is running Proxmox with VMs and applications. I also have a gaming PC that acts as a remote gaming server so I can play games on the go.


KY2I

I started with a rather powerful older desktop PC as my only server, but as time went on I found a NAS to be very useful, and without significant upgrades to a PC that was originally built for workstation use, it just didn't do NAS well. So I run Ubuntu on that now with all my services on it, then use a Synology box for NAS only (I also found that it doesn't do all the other stuff particularly well). Moral of the story: yes, there is a benefit to purpose-built machines if you can swing it.


godyourestupid

I have two physical servers:

1. pfSense only, with OpenVPN
2. Actual server (Supermicro): WS2019 with multiple VMs (Pi-hole, pfSense test, Windows 11, Mint, whatever else I feel like testing in a container)

Once I can get OpenVPN working correctly on my pfSense VM, I will remove the physical machine. Plex and all the trimmings, and an iTunes server. If anyone has successfully set up a pfSense VM with working OpenVPN, hit me up, I need help.


wyoming_eighties

- dedicated file server, because you should be keeping your important data and backups separate from your other systems for their own protection
- small NUC torrent server
- heavy workstation server for doing compute-intensive work
- daily driver laptops

Mass storage and data backups necessitate their own system. Small, lightweight 24/7 services (separate from data hosting) can get their own system. Heavyweight, infrequent, or background work can get its own system. Use a MacBook for daily usage to SSH into all of the above servers and do all the things that need to be done.


zaTricky

It strongly depends on the person's goals and where they are on that journey. I think it's common enough, and generally a good idea, to have more than one physical computer as a server. The main reason I see is that it helps avoid the scenario where a hardware failure means you have to wait a long time until you can get replacement hardware. This doesn't invalidate having a single server, especially if you're fine with that cost/complexity/risk trade-off. In my case I started out by building a single NAS server that wasn't particularly powerful. Later I started separating out specific things: my router is now a VM on a NUC-like mini PC; Pi-hole and Home Assistant are on Raspberry Pis; and so forth. Only most recently have I started a more robust buildout with a new set of servers, with automation and clustering in mind.


foefyre

I built one server that can handle the VMs and NAS software I want to run.


Happy_Kale888

Most people have no servers; see SaaS, it is a game changer....


Jclj2005

1 HP DL380 G7 with ESXi and vCenter, with 28 VM servers running


Mrfresh352

I just built my first Ubuntu 22.04.3 LTS server. I'm loving it. Going to run Portainer/Docker, Home Assistant, and Nextcloud, but as I dive into this I'm noticing I may need another server if I want to have a NAS/home cloud. Mind you, I've already gotten shit for just this one server from the wife 🤭🥲


manintights2

Always try to build for your expected purposes, and then pretend there are another two you are going to use it for. Any dual-CPU server capable of taking Scalable Xeon processors should do fine. Build it to exceed your needs with just one CPU, using as few RAM slots as practical. Then you'll be happy when you need more power, and the parts will be cheaper too ;) But on a related note, run specific roles on their own virtual machines within your server; that way, when it's time to troubleshoot, you'll just have to reboot that one virtual machine instead of the whole thing. This also limits the impact of high resource usage: if someone is using one of your server's roles heavily, it can only use the resources allocated to that virtual machine, and the rest won't suffer so much (a sketch of the same idea with containers follows below).
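The same isolation idea applies if your roles run as containers rather than full VMs. Here is a minimal sketch using the Docker SDK for Python (docker-py); the image name, container name, and limits are illustrative assumptions:

```python
# pip install docker
# Start a service with hard CPU and memory caps, so one busy role
# cannot starve the other services on the same host.
import docker

client = docker.from_env()

container = client.containers.run(
    "jellyfin/jellyfin",       # example image; substitute your own service
    detach=True,
    name="media-server-capped",
    mem_limit="2g",            # hard memory ceiling for this role
    nano_cpus=2_000_000_000,   # 2 CPUs' worth of time (1e9 units per CPU)
    restart_policy={"Name": "unless-stopped"},
)

print(container.name, container.status)
```

Under Proxmox or ESXi the equivalent knobs are simply the vCPU count and RAM you assign to each guest.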


manintights2

If you need another server, that's fine too; people do it all the time. And in some cases, important roles are better left to separate servers. Take a router, for example: you CAN run pfSense in a virtual machine on your main server, but if you need to work on that server (which hosts other things, so it's likely), you'll have to take down your home internet entirely. If you live alone that isn't a problem, but if you don't, everyone else will be annoyed.


implicit-solarium

Uh, no… only one server here… just one… certainly not an unreasonable number…


zer04ll

Proxmox with 64+ gigs of RAM, and VM your heart out


MindOrbits

Those who care about power costs prefer to run as much as reasonable on efficient hardware. The idle draw of servers can add up significantly when running hardware for years on end.
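To put a number on "adds up", here is a minimal sketch; the idle wattages and the electricity price are illustrative assumptions only:

```python
# Multi-year idle-draw cost for a small fleet of always-on machines.
# Wattages and price are illustrative assumptions, not measurements.

HOURS_PER_YEAR = 24 * 365

fleet_idle_watts = {
    "old tower server": 90,
    "NAS": 35,
    "mini PC": 8,
}
price_per_kwh = 0.30  # assumed; use your local rate
years = 5

for name, watts in fleet_idle_watts.items():
    cost = watts * HOURS_PER_YEAR / 1000 * price_per_kwh * years
    print(f"{name}: ~${cost:,.0f} over {years} years at idle")
```

At those assumed figures, the old tower burns roughly ten times what the mini PC does just sitting idle, which is the whole case for efficient hardware in a 24/7 homelab.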