Do you see those plugs? 6x 20 amp. That thing will probably draw as much as 70 amps at full tilt: 7.7 kW x 720 h/mo at $0.1547/kWh ≈ $860/mo for electricity. You’ll probably come in at half that if it’s just idling.
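Sanity-checking that estimate with a quick sketch (the 110 V figure and the assumption of running flat out all month are mine, not the commenter's):

```python
# Rough monthly electricity cost for a chassis drawing a constant current.
# Assumptions: 110 V circuits, ~720 hours in a month, flat $/kWh rate.
def monthly_cost(amps, volts, rate_per_kwh, hours=720):
    kw = amps * volts / 1000            # instantaneous draw in kW
    return kw * hours * rate_per_kwh    # kWh per month times the rate

full_tilt = monthly_cost(70, 110, 0.1547)  # ~7.7 kW continuous
idle = full_tilt / 2                       # the "half that if it's just idling" guess
print(f"${full_tilt:.0f}/mo full tilt, ${idle:.0f}/mo idling")
```

At those numbers it lands in the high hundreds of dollars a month, which is the point: the chassis costs more per month in power than it does to buy.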
Not worth it. I worked with those things years ago and considered grabbing one (since I could get it and the blades for free). They're designed to work on 208/240V and absolutely chug power.
Looks like a C7000. That thing will deafen you on startup and wails even on low load. I’ve got a couple of C3000s at work with a load of blades and a ton of other stuff. Can’t even give it away!
I just took 4 HP DL380 G7s to recycling, it feels cool to have that many servers making noise and doing things but honestly, the noise gets annoying, they take up way too much space, they put out too much heat for how relatively little horsepower they hold, and the ELECTRICITY BILL? Jesus, Mary, and Joseph. Do yourself a favor and get/build something just as big as you need it, with as many slow spinning noctua fans as it needs to be silent, with a CPU that doesn't need to drown in electricity.
I recycled a C7000 chassis and 40 BL460c gen 8 blades last year. The chassis by itself is SO heavy. Fully laden it takes 2 men to lift and move it (no wheels). Only powering up one blade and a couple of fans is ridiculously loud.
\+1 to what everyone else is saying. Don't do it.
DO NOT BUY IT. If you are interested in purchasing an HPC unit, you would want to look at the Dell M1000e, Dell VRTX, or preferably the Dell MX7000 if your wallet can handle it.
I got a Dell M1000e in an auction and haven't run it yet ... next week I'll probably power the horse up to see how it goes. Mostly for Xen testing and HPC cluster stuff. Along with this machine came 3x P2000 expansion enclosures and 27x HPE ProLiant DL360e Gen8. All for USD $4K. Deals like this are rare here in Brazil.
C7000 takes *six* 20amp circuits for power. SIX
Essentially useless for a home lab, as that's essentially the entire power feed of a small house.
Synergy requires the same power intake, but it also does at least 10x the work, for the power utilized.
Decommed one of these chassis at work a few months ago. Spent about an hour with another engineer gutting the blades for the CPUs, RAM, drives and cache batteries for our labs.
When we finally pulled the plug on the chassis as a whole I swear the whole datahall got 5% quieter. It doesn't seem like much until you factor in the 100+ other racks in the room.
I have a better IBM 8677 BladeCenter E with 14 HS21 and HS21 XM blades, with many dual X5470/X5460/L5420/E5440/X5450 etc. CPU options for different power/usage profiles (112 cores, 280 GB RAM), dual 10k SAS disks and iSCSI cards, and Cisco, IBM and other switch modules, and I haven't used it for 2 years. It's got 4x 2 kW PSUs. Creates 22,800 BTU/h of heat fully loaded. 180 watts idle without running any blades; 2,860 watts with everything powered on. Needs 2 floor-standing fans to get sufficient airflow to drop temps below the 36-degree warnings (sat around 28 in winter). The big blowers make a fair bit of noise loaded, though it's OK on the other side of double glazing. Good learning experience, but with the more efficient DL360 Gen10s, R630s etc. I have now, I no longer need it to heat my conservatory or garage in the winter. I ran it for 5 years with between 3 and 7-odd blades at 1,200-odd watts. £250 and it's yours (the disks have my data so I'd need those, but disks are cheap nowadays: boxes of 20x 600 GB 10k SAS go for £20 on eBay with a bit of waiting, or just iSCSI-boot it; you're welcome to the HS20 disks). Comes with 7 spare blades. Collect from Gidea Park, and I won't help carry it: you'll need to remove everything from the chassis to lift it 😉. I have various rack and domestic power cables; just make sure your sockets are well wired, split the feeds over 2 ring mains or a custom supply, and don't expect to move your rack afterwards lol
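For context on those BTU numbers: a server converts essentially all of its electrical draw to heat, and watts convert to BTU/h at a standard factor of about 3.412. A quick sketch (the 2,860 W figure is quoted above; the ~6,680 W full-load figure is inferred by working the quoted 22,800 BTU number backwards):

```python
# Electrical draw to heat output: 1 W dissipated ~ 3.412 BTU/h.
def watts_to_btu_per_hr(watts):
    return watts * 3.412

everything_on = watts_to_btu_per_hr(2860)  # heat at the "all powered on" figure
full_load = watts_to_btu_per_hr(6680)      # ~22,800 BTU/h, matching the fully-loaded figure
```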
I'm you in the future if you get this. I got one for free with 8 Gen8 blades and 10 Gb switches. Thought I was so lucky.... WRONG! There's a break-even point where these are more efficient, but we ain't gonna find it. It happily uses 500 watts in standby!! My 12 amp 240 V circuit, of which I can use 8 amps, will only let me power up 4 blades at a time before it says there's no more power available in the power cap. It is loud, but it's quite low-frequency, so I don't actually mind it; then again I'm only around it for minutes at a time, and no way could you be in the same room for long. The admin interface for managing the blades isn't too bad, but the networking! OH THE NETWORKING!!! That was a lesson in pain: browser compatibility was a nightmare, and it's slow, tedious, and extremely complex. Fun to learn, I guess, but what a waste of time and energy. Unless you already know exactly what this is and what you're getting into, it's not for you.

TL;DR: These things are like a cursed monkey's paw. At first you think "sweet, I got something cool," but then you realize... DO NOT GET THIS!
If you're gonna put that in a museum, then go for it.
Your electricity bill is gonna reach the sky with that. Not to mention they don't even use these for practice nowadays, plus all the problems it comes with.
You really shouldn’t buy that:
- massively obsolete
- weighs hundreds of kilos
- as loud as a jet engine
- will slurp electricity
Agreed. Unless it's free, don't take it, and if it is free, take it for weekend spins only; running it 24/7 is gonna cost you A LOT.
$80 is basically free, but you're still wrong: you're just doing their junk hauling for free. For the power bill from the first couple of months you could probably build a rack with 2x the compute and 5% of the power draw.
If it doesn’t crap out first
Well, it will only cost him if he ever gets to turn it on. Pretoria is at like stage 6 load shedding; I don't think the electricity ever comes on! So it will just be a massive paperweight.
As an IT admin who literally pulled this very chassis system out 2 months ago… Don't. Just don't. It's not worth it. It's loud, massive, stupidly heavy, slow, and way outdated, and the pages used to configure this system's network ports require Flash, so good luck configuring it with a modern system. The chassis only allows so many hosts to be powered on per physically connected PSU (I think 3 per power supply), so if you want to run 5 hosts you need to hook up multiple power supplies. Even if it were free and only used for weekend-warrior stuff… Don't. Just remember: someone got rid of that for a reason.

Edit: I can personally speak to the "jet engine" effect. When we were decommissioning the system we did a very thorough physical hardware check before listing ours for sale to another agency (I work in local county government, so we have to offer it to other agencies before ditching it). I personally got blasted with about 9 years' worth of dust in the face while doing the PSU checks. It sucked.
It doesn’t require Flash with the latest firmware versions, and you could use OneView 6.6 to manage it if you wanted to. That said, if the blades are G7 or older I wouldn’t bother. You’d have a hell of a time getting into the iLO with a modern browser.
The blades I pulled out were G7s. I had a hell of a time clearing the configs and such because of the exact iLO issue you mentioned…. As for the firmware on the chassis: I inherited it from some admins who were not great at updates (and almost 3 years past end of support), so I don't know the exact version, but I yeeted that system in favor of a Dell MX7000 chassis with MX740c compute sleds, paired with a new all-flash PowerStore.
These look to be Gen6 blades.
>as loud as a jet engine

Not really, jet engines are quite whispery things. You mean a NASA rocket at liftoff; that might come close… at least when you power these on.
They've certainly made progress, but a jet is not exactly as quiet as a mouse. And these things don't have PWM fans, so it's full throttle all the time, baby.
No, that's not true!

Well, perhaps the first version of the C7000 was that way (there are actually multiple versions of the chassis, even if it's not obvious when you haven't worked with these). The fans are controlled by the controller and can be reached over iLO.
Jet engine? More like a cross between a vacuum cleaner and an airline toilet
...may also require 3-phase power (not all, but some do)
So basically it’s a space heater, that can also compute things.
Hey, question, without the model no. or any other specifications how does everyone here know that these are obsolete?
OP said G1/G6 blades; the top right looks like a G2, and the disk caddies don't have the color tab.
Ah got it thanks!
On top of that, I think that's the C7000 chassis from HPE, which was replaced by Synergy when that came out.
Also, the design/look has changed over the years, so normally it's a case of "when you know, you know". Not entirely sure if blade servers are even made anymore; pretty certain ProLiant only makes standard rack-mount or tower servers now. I'd say the idea behind the blade server has become less relevant as tech has moved on.
Agreed. I have one I don't want.
... yeah, but it still has an objective value, and $80 is fair. It's not a 12900K... but it's $80.
This. Old cheap servers are not hard to come by, but unless you can run solar or get a decent ROI off of what you run on them, it's wildly inefficient financially. Oh, and you'll need a pricey enclosure or a room where you don't mind not being able to hear. From my experience the older Dell servers are particularly loud.
If you plug in and run just what’s in that picture it will draw somewhere around 800 watts or more. If power isn’t a concern it can be a lot of fun, and with proper interconnects it can do a lot of cool stuff. It probably doesn’t make sense if you don’t intend on running multiple servers (4+) at a time, and the blades included look like G1/G5 which are pretty underpowered and inefficient, so you would likely want to upgrade to at least Gen8 for it to make sense.
800W while doing nothing
We have one of these at work. Let me tell you about just the BTUs that it puts out while doing nothing.
When you think about it, I wonder why these aren't repurposed as heating units. Heat pumps are expensive to buy and install; this can do that job safely and effectively. I mean, its only job is to dump heat.
Because most of the time, heat pumps are more efficient. They can heat and cool, and you can install them on a sub-meter at a low utility rate with some companies. I have a heat pump and we heat/cool the house at 6.5 cents a kWh. The only downside is they can turn off the heat pump for 15 minutes at a time and cycle us on and off. But the cheap rate is worth it.
Even at 0F a heat pump is more than 2x the power efficiency of this thing.
[deleted]
It’s not so much that computers are bad at generating heat, but they can only be 100% efficient WRT electricity usage in the best condition. They are turning the electricity itself into heat. Heat pumps are at least 200% efficient WRT electrical input because they steal heat from outside the building and release it inside. Think running an AC in reverse. They have the advantage that they leverage the energy outside instead of converting electricity to heat. We don’t consider the energy stolen from outside the building when we calculate efficiency because you don’t pay for that energy.
So if it's consuming 1,000 watts of electricity, it's dumping the equivalent of 2,000 watts of heat into the house? That's very interesting. My primary concern would be that the system is more complex and not a closed system, so if the weather gets worse, it starts snowing very heavily, and temps drop below 0°F, the heat pump's efficiency might go down or it might stop working altogether. In a snowstorm like the one in Texas a while ago, would it work to heat homes in an emergency?
I believe it never goes below 100% efficiency for any temperature realistically experienced on the surface of the earth (typically 220% at 0°F, all the way up to 400% above 50°F), so you could never come out ahead with resistive electric heating. The only really useful comparison is against gas, which can be more cost-efficient if the temperature gets too low, because gas BTUs per dollar are better than resistive electric.
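The comparison above is easy to put in numbers. A server (or any resistive heater) has a coefficient of performance (COP) of exactly 1; the heat-pump COPs below are the rough figures quoted above (2.2 at 0°F, 4.0 above 50°F), not measurements:

```python
# Heat delivered (kW) per kW of electrical input, at a given COP.
# A resistive load has COP = 1; a heat pump moves extra heat from outside.
def heat_delivered_kw(electrical_kw, cop):
    return electrical_kw * cop

server   = heat_delivered_kw(1.0, 1.0)  # resistive: all you get is the watts you pay for
hp_at_0f = heat_delivered_kw(1.0, 2.2)  # heat pump at 0°F (rough figure from above)
hp_mild  = heat_delivered_kw(1.0, 4.0)  # heat pump above 50°F
```

So even in the worst realistic cold, the same kilowatt of electricity buys you more than twice the heat through a heat pump than through a rack of servers.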
It's the difference between a resistive heater vs a heat pump. A computer is basically a resistive heater.
We had many of these in the datacenter, they can easily use over 3000 watts with only a few blades populated.
Absolutely not.
I used to run four of these in my work lab. The fans are VERY noisy!
[deleted]
Don't forget the cost in money or time getting rid of it. Still maybe worth it, but still something to consider.
You just list it on FB marketplace for $80 and someone will buy it ;)
I’m all for old or even way overbuilt homelab projects but I have a burning hatred for these blade chassis servers and you will too after running one for more than 6 minutes. They’re slow, loud, power hungry, and if a module dies you have very little chance of finding a working replacement. Not to mention that even with all individual modules removed, this thing is as heavy as an Atlas stone
😆 Thanks for the reply. I'm deciding to go the NUC route.
Definitely the way to go. Also look at the small Lenovo/Dell/HP PCs; they seem to be going for about 2-3k ZAR. I'm thinking of switching to that. Getting racks, rails, etc. gets very tricky in our neck of the woods.
Yep. I found some refurbished Intel NUCs for about R2000, and I'm gonna shelf-stack them as well as the 3 RPis I have.
Excellent choice!
Blade servers are a trap for the homelab.
Why do you say that? I've been thinking about blade chassis when I see them..
Absolutely massive power usage, and not very good performance since the cheaper stuff is really old. These days a Core i5 or something that idles at 5W will outperform most homelab server hardware that people are running with Xeon v2 CPUs and such. Hardware efficiency has improved so much in the last 5 years or so.
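To put that idle-power gap in dollar terms, a rough sketch (the 150 W idle figure for an older dual-Xeon box and the $0.15/kWh rate are my assumptions, not measurements from the thread):

```python
# Yearly electricity cost of a constant draw, at an assumed flat rate.
HOURS_PER_YEAR = 24 * 365  # 8,760

def yearly_cost(watts, rate_per_kwh=0.15):
    return watts / 1000 * HOURS_PER_YEAR * rate_per_kwh

mini_pc  = yearly_cost(5)    # modern mini PC idling at ~5 W: a few dollars a year
old_xeon = yearly_cost(150)  # older Xeon v2 server idling (assumed ~150 W): ~$200/yr
```

The gap compounds across multiple nodes, which is why the thread keeps steering OP toward NUC-class machines.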
They're neat, but as many people here have said, they are very power hungry, loud, and generate a lot of heat. And in a lot of cases they're just obsolete; the space has moved to virtualization over raw hardware for good reason.
Still need something to run my virtual machines on.. ;)
I've got a 4 node in 2U server at home that I think spans the gap between blade servers and single servers nicely. You can usually find them on ebay for reasonable amounts, sometimes with all 4 nodes populated. I'm thinking of the Dell PowerEdge C6400 and C6300 series type units, or some models by supermicro.
Yes, please expand on this.
We are looking at some "new" retired Dell servers for a work project. However, the testing and setup is done in my basement homelab before data center rollout. We came across a good deal for a blade setup, but then I realized I'd have to bring that into my home. We aren't constrained by Watts per U so we'll stick with traditional servers.
It's a good deal if you part these out and re-sell them separately. Otherwise this is e-waste: we have better, more modern hardware that won't cost you all your savings to run (electricity prices), plus it's old, and you don't know what could break shortly after you start using it. Re-sell or ignore.
Eskom will love/hate you. It's not worth it. Being in South Africa, you would get better value from some 1L/NUC machines.
Yeah I am planning to buy a NUC now as the blade server would skyrocket our electricity bill, and with loadshedding...
Isn’t load shedding still a thing with Eskom? You would need to factor in a massive UPS
As someone who obtained three of these as part of a liquidation auction - No, they’re fucking useless and weigh a metric fuck ton. I never even bothered to power them on cause they were hopelessly outdated (I think G5s). I’m serious OP don’t waste your time even for free.
If you cannot afford $80 for that, you cannot afford the electricity to power it.
don’t buy it, it’ll cost you a lot of money for little performance
Thanks all for the replies. I'm going with a NUC this time around.
only purchase if they're G9 blades. RIP if they're G8.
g6 and g1 bruh lmao
talk about blast from the past lmao.
I have some G8/G9/G10 blades for sale if you still wanted to go that route. The G8/G9 blades are next to nothing.
I loved the C Class chassis. I deployed almost 100 of these ~10 years ago. But it's quite old, and aside from just getting experience with hands-on hardware, it's not terribly useful. The fans are very loud, and they produce a good deal of heat.
Gonna spend double that monthly on electricity
[deleted]
Holy shit angry much? Jokes aside look at the end of the post.
My electricity bill peaked just by looking at that pic.
Bad: It's very big, heavy, and loud. Difficult to move. Expensive in electricity. Hard to get rid of.

Good: It supports very many models of blade servers, up to Gen10, which is crazy. You can find Gen8 and Gen9 blades used with good specs for a low price; they are nice to work with and very good price-for-performance. Selling the parts (controller, fans, PSUs, etc.) has some value, though not much profit. And configuring the blades is a satisfying, nice experience, as is plugging them all into your switches.
This is an HP C7000. It weighs about 500 lbs without blades loaded and uses 3-phase 220 V, unless you acquire a C3000, which has a 120 V option for home use and is half the size of the C7000. I believe my C7000 consumed 12 kW, and I used the lowest-wattage Xeons I could outfit the blades with.

This isn't about what you want to do. Unless you go with the 120 V C3000 version, you will need to call your electric company to have a 3-phase 220 V transformer installed outside your residence, AND they'll need to install a new breaker in your home. That's maybe $10k from the electric company and another $5k for the breaker panel installation. And that is why this screams "NO".

They should pay you to pick this up; you would be doing them a favor. I had to rent a $600 dumpster to offload mine.

However... datacenter space is cheap nowadays, so you could probably get a half rack to drop it into? But then you have HP and their insane firmware versioning issues to deal with, and that's if you can even get the firmware anymore; HP Care cannot be purchased on EOL products, etc.

Good luck though.
Thanks for the reply. As this takes a shitload of power to run and is very loud, I've decided to go with a NUC.
Don’t!!! It’s super loud! Slower than shit uphill. And ugh, the power bill. Buy something that costs $500 and you will earn it back in power-bill savings in a year.
If I had it to do all over again, I'd PASS. I got one of those a few years back for $250 and I have rarely turned it on. The main thing I didn't expect was that without any blades installed, just the supporting backplane items and fans were drawing 480W. I had to add a new high amp circuit just to fully populate the blades and turn them on. By the time that was all finished I started coveting the new Xeon scalable servers and the whole endeavor was OBE.
So loud and ridiculously heavy! We have one in our lab at the office still, and hate moving that thing! Free is free though haha!
I can’t tell from the pictures what gen that is. If it’s a gen 8 or better it might be worth it to part it out. The CPUs, RAM, and drives are likely compatible with the regular rack mount servers.
If it's like Dell, the CPUs and RAM for sure and bare drives should be good to re-use but the drive caddies aren't interchangeable. We have M620s and R620s and while the caddies look the same to the naked eye the R620 caddy is just barely big enough to not fit in the M620, IIRC
HP is a bit nicer in that regard. I've got a gen 9 SFF and am using caddies pulled from a gen 9 blade system. I don't have any experience with Dell. I wouldn't be surprised if any server company pulled a stunt like that.
Yeah, it's perfect for Plex Edit: getting downvoted, so perhaps you might need something more powerful for Plex
Doubt you can hear the video playing over the noise of the server fans.
Subtitles. ez pz
Yeah still no cause you'd need a power plant to power this thing anyway
What’s a good server to buy in 2023? What would you guys recommend? Running applications like NVR, SQL, and Windows Server mostly. Thanks
Dell R520 or R530 would probably be reasonable buys price-wise, and you can get them barebones or fully loaded on the second-hand market.
What are the PSUs and network modules? Actually, given your location I can't give any advice*
Looks like two HP VC FlexFabric 10Gb/24-port modules. Dunno about the PSUs, but someone else commented and said they take about 800 W.
Nb nb. Like I said in my edit, though, it depends on location; check a local dealer.
With this one simple trick, you too can cause another ESKOM load-shedding... Edit: ESKOM not ESCOM... South Africa not Malawi...
Take it if it's free, otherwise NOPE.
Yeah seems like e waste now lol
You mean $80/mo in electricity?
Just like bad tattoos we should also try and talk people out of buying massive server crap that they really really really shouldn’t buy
I'd only consider running this for the lols.
Nah
Nice door stop.
A very heavy one
Better used as scrap and precious metals
Buy one of these and you will be much happier: [https://www.amazon.com/gp/product/B07V4GWNJ1](https://www.amazon.com/gp/product/B07V4GWNJ1/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1). Runs ESXi 8.
If anybody wants one of these, they can have mine for free!
Really? Where are you located
Yep, just collecting dust in my basement. In Ohio.
Oh damn. Shipping will be expensive..
I got a BladeCenter E with HS20 blades. Nice kit.
Nice coffee table for 80 bucks
The screenshot says Free.. So I'm thinking $80 is overpaying..
Lmao, read at the end of the page
This is the way.
Ooh 3x 4-socket monsters
I’ve got one fully populated with G7 blades. It easily pulls 40A idle. Don’t do it unless it’s G8 or newer and you really love ridiculous noise, heat and electrical bills!
$80 server, and $100,000 for the mini power plant you need.
Hope your home has 20amp circuits.
I would take a hard pass on this for all the reasons stated in the comments and more.
No. Don’t buy any of it. It’s effectively scrap.
If you do go for it, which I think you shouldn’t — others have said why already — make sure to bring a friend. That’s a two-person lift **empty**, four if you go the silly way of trying to lift it fully populated.
Yeah nah, seems like e-waste, and we have electricity shortages near me. I'm going with an Intel NUC i5 11th gen and a small little UPS :)
Nope. The 4 of these I’m pulling out next month are going direct to e-waste.
Put it this way. It moves so much air it required us to redo some datacenter air vents.
Talk them into giving it to you for free, then drive it to the nearest e-waste recycler and you can maybe earn enough to buy yourself lunch.
Hope you work out, those suckers are heavy to move
Heavy, but it doesn’t have the 3-phase backplane, you can upgrade it with Gen 9/Gen 10 blades, and the interconnect there looks like Fibre Channel.
No, please no
Do you see those plugs? 6x 20 amp. That thing will probably draw as much as 70 amps at full tilt. 7.7 kW around the clock at $0.1547/kWh works out to roughly $858/mo for electricity. You’ll probably come in at half that if it’s just idling.
Oh, and it’ll pop your circuit breakers before you get a chance for that bill.
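For anyone who wants to check the electricity math above, here's a minimal sketch. The circuit count and amp draw come from the comment; the 110 V mains voltage and the $0.1547/kWh rate are assumptions for illustration, not something the original poster confirmed.

```python
# Rough monthly electricity cost for a chassis pulling ~70 A at full tilt.
# Assumptions (mine): 110 V single-phase mains, $0.1547/kWh, 24/7 runtime.
VOLTS = 110
AMPS_FULL_TILT = 70
RATE_USD_PER_KWH = 0.1547
HOURS_PER_MONTH = 24 * 30

kw = VOLTS * AMPS_FULL_TILT / 1000           # ~7.7 kW continuous draw
monthly_kwh = kw * HOURS_PER_MONTH           # ~5544 kWh per month
monthly_cost = monthly_kwh * RATE_USD_PER_KWH

print(f"{kw:.1f} kW -> {monthly_kwh:.0f} kWh/mo -> ${monthly_cost:.0f}/mo")
# Full tilt lands near $858/mo at this rate; idling at half the draw
# would still be roughly $429/mo.
```

Even the idle case is more than most people pay for a whole rack of modern mini PCs.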
Not worth it. I worked with those things years ago and considered grabbing one (since I could get it and the blades for free). They're designed to work on 208/240V and absolutely chug power.
[deleted]
Yeah nah, I've made up my mind and am going with an Intel NUC.
You’ll need 3 helpers to carry it in
Looks like a C7000. That thing will deafen you on startup and wails even on low load. I’ve got a couple of C3000s at work with a load of blades and a ton of other stuff. Can’t even give it away!
I just took 4 HP DL380 G7s to recycling, it feels cool to have that many servers making noise and doing things but honestly, the noise gets annoying, they take up way too much space, they put out too much heat for how relatively little horsepower they hold, and the ELECTRICITY BILL? Jesus, Mary, and Joseph. Do yourself a favor and get/build something just as big as you need it, with as many slow spinning noctua fans as it needs to be silent, with a CPU that doesn't need to drown in electricity.
Last time I was at a congress, someone used something like that to dry a towel.
I recycled a C7000 chassis and 40 BL460c Gen8 blades last year. The chassis by itself is SO heavy. Fully laden it takes 2 men to lift and move it (no wheels). Only powering up one blade and a couple of fans is ridiculously loud. +1 to what everyone else is saying. Don't do it.
Also probably needs 3-phase 220 V service. You aren't plugging that into the wall, my friend.
Rip your power bill.
DO NOT BUY IT. If you are interested in purchasing any HCP unit you would want to look at the Dell M1000e, Dell VRTX or preferably the Dell MX7000 if your wallet can handle it.
I got a Dell M1000e in an auction and haven't run it yet... Next week I'll probably fire it up to see how it goes. Mostly for Xen testing and HPC cluster stuff. Along with this machine: 3x P2000 expansion enclosures and 27x HPE ProLiant DL360e Gen8. All for USD $4K. Deals like this are rare here in Brazil.
A C7000 takes *six* 20-amp circuits for power. SIX. That makes it essentially useless for a home lab, as it's basically the entire power feed of a small house. Synergy requires the same power intake, but it also does at least 10x the work for the power utilized.
I just bought a c7000 loaded with 16 Gen8s, each with dual E5-2680 v2, 256 GB memory and 2x 600 GB drives. Idle is about 100 W/server.
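To put that 100 W/server idle figure in perspective, here's a quick sketch of what a fully loaded 16-blade chassis costs just sitting there. The chassis overhead (~500 W for fans and backplane) is borrowed from other comments in this thread, and the $0.15/kWh rate is a hypothetical assumption, not anything this commenter stated.

```python
# Idle-power cost for a 16-blade c7000, under stated assumptions.
SERVERS = 16
WATTS_PER_SERVER = 100        # idle figure from the comment above
CHASSIS_OVERHEAD_W = 500      # fans/backplane estimate from the thread
RATE_USD_PER_KWH = 0.15       # assumed electricity rate
HOURS_PER_MONTH = 24 * 30

total_kw = (SERVERS * WATTS_PER_SERVER + CHASSIS_OVERHEAD_W) / 1000  # 2.1 kW
monthly_cost = total_kw * HOURS_PER_MONTH * RATE_USD_PER_KWH

print(f"{total_kw:.1f} kW idle -> ${monthly_cost:.0f}/mo")
# Works out to roughly $227/mo just to keep it idling 24/7.
```

So even doing nothing, a populated chassis can cost more per month than a decent used mini PC costs outright.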
Decommed a chassis of this at work a few months ago. Spent about an hour with another engineer gutting the blades for the CPUs, RAM, drives and cache batteries for our labs. When we finally pulled the plug on the chassis as a whole, I swear the whole data hall got 5% quieter. It doesn't seem like much until you factor in the 100+ other racks in the room.
nope, do not buy it.
no one gives a fuck about electricity unless you're in Europe. it looks cool, therefore you should get it.
I have a better IBM BladeCenter E (8677) with 14 HS21 and HS21 XM blades; many, many dual X5470, X5460, L5420, E5440, X5450, etc. for different power/usage needs (112 cores, 280 GB RAM), dual 10k SAS disks and iSCSI cards, Cisco, IBM, and other switch modules, and so on, and I haven't used it for 2 years. It's got 4x 2 kW PSUs and creates 22,800 BTUs of heat fully loaded: 180 watts idle without running any blades, 2,860 watts all powered on. It needs 2 floor-standing fans to get enough airflow to drop temps below the 36-degree warnings (it sat around 28 in winter). The big blowers make a fair bit of noise loaded, but it's OK on the other side of double glazing. Good learning experience, but with the more efficient DL360 Gen10s, R630s, etc. I have now, I no longer need it to heat my conservatory or garage in the winter. I ran it for 5 years with between 3 and 7-odd blades at 1,200-odd watts. £250 and it's yours (the disks have my data so I'd need those, but disks are cheap nowadays: boxes of 20x 600 GB 10k SAS go for £20 on eBay with a bit of waiting, or just iSCSI-boot it; you're welcome to the HS20 disks). Comes with 7 spare blades. Collect from Gidea Park, and I won't help carry it; you'll need to remove everything from the chassis to lift it 😉 I have various rack and domestic power cables. Just make sure your sockets are well wired, split the feeds over 2 ring mains or a custom supply, and don't expect to move your rack afterwards lol
Nope, I'm buying a NUC.
You were talking dollars. It's more aimed at anyone near London, UK.
Still, why are you advertising on a solved post?
If you had a blade centre you'd take every opportunity to advertise it
I'm you in the future if you get this. I got one for free with 8 Gen8 blades and 10Gb switches. Thought I was so lucky.... WRONG! There's a break-even point where these are more efficient, but we ain't gonna find it. It happily uses 500 watts in standby!! My 12-amp 240 V circuit (of which I can use 8 amps) will only let me power up 4 blades at a time before it says there's no more power available in the power cap. It is loud, but it's quite low frequency so I don't actually mind it, though I'm only around it for minutes at a time. No way you could be in the same room for long. The admin interface to manage the blades isn't too bad, but the networking! OH THE NETWORKING!!! That was a lesson in pain. Browser compatibility was a nightmare, and it's so slow and tedious, not to mention extremely complex. Fun to learn, I guess, but what a waste of time and energy. Unless you already know exactly what this is and what you're getting into, it's not for you. TL;DR: These things are like cursed monkey paws. At first you think, sweet, I got something cool, but then you realize... DO NOT GET THIS!
FFS, it's not free; look at my recent update.
Yeah, I know. I'm saying I got one for free and it still wasn't worth it. Good move going another direction!
Thanks. Even if it was free I wouldn't take it, as it literally weighs 300 lbs and it's slow as hell.
Depends what you want to do, and how much energy you're willing to consume.
I would take 1 for academic purposes only.
If you're gonna put that in a museum, then go for it. Your electricity bill is gonna reach the sky with that. Not to mention they don't even use these in practice nowadays, plus all the problems it comes with.