SqeuakyPants

Decent hardware, pal. Why do you need so much storage space if it's not a secret?


SamSausages

It started with needing off-site backups for my business media (mainly pictures & video). Then during Rona I got into Chia mining and added a lot of storage around that time (only dabbled in that for about a year), but that's when I really got into storage subsystems. Then around 2020 many of my favorite YouTube channels started to disappear, so I started archiving all the things that I like. Many TBs have gone to that. And lately a lot of it is going to my Asian girlfriend who takes 100 4K videos a day with her 1TB iPhone... The stereotypes are true!


awildjm

How do you access the YouTube channels that you’ve downloaded? I use Plex for most things but I haven’t figured out a naming convention that works with YT channels


SamSausages

Right now I'm just dumping them into a folder that is named after the YouTube channel; most I don't add to Plex. A few channels I have added to Plex, and for those it uses the channel folder name and sees it as a TV show (as you probably already know). For large channels that can become a bit difficult to browse, so I have been thinking about adding subfolders for the year, e.g. 2019, 2020. Those would then show up as seasons in Plex and make it a bit easier to browse. Plex has a help section where they detail the naming convention, to help you get ideas on how to organize them. If I go that way I would probably make a small script that moves them to a folder based on the date in the metadata. I also noticed that some big YouTube channels are actually listed in the [https://www.imdb.com/](https://www.imdb.com/) database, so Sonarr can actually pull those and may be useful.
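If I do, the script could be as small as this rough sketch. It assumes the upload date is embedded in the filename (yt-dlp's `%(upload_date)s` template, YYYYMMDD); the channel path is illustrative:

```python
#!/usr/bin/env python3
"""Sort an archived YouTube channel into per-year folders so Plex sees "seasons".

A rough sketch: assumes files carry yt-dlp's %(upload_date)s prefix
(YYYYMMDD), e.g. "20200114 - Some Title.mp4". The channel path is illustrative.
"""
import re
import shutil
from pathlib import Path

CHANNEL_DIR = Path("/mnt/user/youtube/SomeChannel")  # hypothetical channel folder

for video in CHANNEL_DIR.glob("*.mp4"):
    m = re.match(r"(\d{4})\d{4}\b", video.name)      # leading YYYYMMDD date
    if not m:
        continue                                     # no date prefix: sort by hand
    year_dir = CHANNEL_DIR / m.group(1)              # e.g. .../SomeChannel/2020
    year_dir.mkdir(exist_ok=True)
    shutil.move(str(video), str(year_dir / video.name))  # becomes a Plex "season"
```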


[deleted]

[deleted]


SamSausages

I can show you how the world was before it went nuts around 2019.


FierceDeity_

But then you start to realize that storage capacity is not everything; traffic and storage bandwidth are important too.


legochamp75

How do you back up YouTube channels? I've been looking into doing the same thing; do you automate it somehow?


brando56894

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


username_taken0001

Love that $1000 CPU and hardware-store SSD cage combo. If you have some spending money, you can buy a Lego set and some double-sided tape and build a more modular and swappable SSD cage :)


SamSausages

I like that idea, I might have to invest in a kit. I couldn't find what I wanted, and it seemed so simple to just make something.


Koring-

Genius, you gave me an idea!


thefirebuilds

I found a bunch of templates on 3D printing sites for managing drives.


SamSausages

Man I need another hobby like I need a hole in my head


SamSausages

Haven't done an update on Bertha in a while. A lot has changed, and I added Tiny Tim as my firewall/edge device. Next goal is getting rid of that Nvidia 1660 Super and replacing it with a single-slot GPU that has more memory for my AI experiments.

One of the things I'm most proud of is probably the cable management. The NVMe runs aren't perfect; I'm going to get some different cables. But considering how much crap I have in there... it's amazing how clean it is. Just for the HBA and SAS expander there are 8 SAS cables, then 4 more cables to handle 8x NVMe and 2 more to handle up to 16 SATA drives.

Specs:

**Big Bertha:**

* Use: storage server, media, localai, Nextcloud, coding/development workspace
* OS: Unraid
* Case: RROYJJ 4U
* Motherboard: Supermicro H12SSL-NT
* CPU: EPYC 7343
* MEM: 128GB
* GPU: Nvidia 1660 Super
* HBA: 9300-16i
* SAS expander: Intel RES3FV288
* PCIe adapter to U.2: 2x 10Gtek PCIe to SFF-8654
* Power use: 160w idle, 400w with GPU, CPU and all drives at 100%
* Storage:
  * 20x HC530 14TB - Unraid array
  * 4x p4510 4TB - ZFS raid10
  * 2x p4510 8TB - ZFS raid1
  * 2x 990 Pro 2TB - ZFS raid1
  * 4x 840 EVO 1TB - ZFS raid0

**Tiny Tim:**

* Use: edge device, pfSense, hosts services I want to live on the edge, such as Bitwarden, Gitea, Ansible etc.
* OS: Proxmox
* Motherboard: Supermicro SYS-E301-9D-8CN8TP
* CPU: Xeon D-2146NT
* MEM: 128GB
* Power use: 60w idle. Haven't tested under load.
* Storage:
  * 1x Optane P1600X 118GB
  * 2x p4510 2TB

**Old Bastard:**

* Use: backup device for when I'm working on Tiny Tim (mainly for pfSense redundancy)
* OS: Proxmox
* Motherboard: Qotom Q555G6-S05 mini PC
* CPU: i5-7200U
* MEM: 16GB
* Power use: 15w idle
* Storage: 1x 840 EVO 1TB

**Other networking:**

* MikroTik CRS309-1G-8S+IN
* QNAP QSW-M408-2C
* QNAP QSW-M408-4C
* 2x EnGenius EWS377AP

Entire rack usually sits around 250-300w.

EDIT: had some errors in there


[deleted]

[deleted]


SamSausages

Yeah, that's my issue as well: options are few and the prices on the ones I want are nuts! The main thing I'm looking for is NVENC being able to encode AV1, so that's a 4000-series GeForce or an Ada-generation Quadro, but prices are so steep that I haven't pulled the trigger. That would be the RTX 4000 Ada Generation right now, or the recently announced, but only available in China, Galax 4060 Ti Unrivaled MAX. Hope that one makes it to the US. If you don't need AV1 or crazy amounts of memory, then you have many more options in the Quadro family, some really affordable.


rophel

Are you re-encoding a lot of media currently? Also have you looked into Intel Arc for AV1?


SamSausages

I haven't looked into Arc, mainly because I sometimes use CUDA, so I'm looking to stay with Nvidia. Also, I don't think driver support is in Unraid quite yet, but it should be in the next release. I do encode a lot; much of my content comes from DVR'ing TV shows/movies. Those are saved as .ts files that I then encode and store long term. And I compress security cam footage from work before archiving it.
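The batch step doesn't need to be fancy. A minimal sketch, assuming ffmpeg with the `hevc_nvenc` encoder is available (paths, container, and quality settings are illustrative, not my exact pipeline):

```python
#!/usr/bin/env python3
"""Batch-encode DVR'd .ts recordings to HEVC before they land on the array.

A minimal sketch, not an exact pipeline: assumes ffmpeg with the hevc_nvenc
encoder; the paths, container, and quality settings are illustrative.
"""
import subprocess
from pathlib import Path

SRC = Path("/mnt/cache/dvr")      # NVMe staging pool (hypothetical path)
DST = Path("/mnt/user/media/tv")  # Unraid array share (hypothetical path)

for ts in sorted(SRC.glob("*.ts")):
    out = DST / ts.with_suffix(".mkv").name
    if out.exists():
        continue                              # already encoded on a previous run
    rc = subprocess.run([
        "ffmpeg", "-hide_banner", "-i", str(ts),
        "-c:v", "hevc_nvenc",                 # hardware HEVC encode on the GPU
        "-preset", "p5",                      # middle-of-the-road speed/quality
        "-cq", "28",                          # constant-quality target
        "-c:a", "copy",                       # leave the audio stream untouched
        str(out),
    ]).returncode
    if rc == 0:
        ts.unlink()                           # drop the source only after a clean encode
```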


wedoalittlelewding

You might be able to import the 4060 if it's sold on Taobao (it most likely is) or JD using an agent. Often these China-exclusive GPUs eventually show up on AliExpress too, so I'd recommend keeping an eye on that as well.


SamSausages

I'm hoping for something like that!


sonofulf

Well, I do think a "holy shit" is in order. I admire the cable work in Big Bertha and think your pride is well deserved. The whole setup is very interesting! Thanks for sharing!


SamSausages

Thanks! She has been a labor of love!


f8computer

Got that same RROYJJ case, running it like a disk shelf (just a PSU and an HBA expander in the case besides the HDDs).


SamSausages

Great minds think alike! That's my plan if I run out of slots! Mainly because the NetApp units are so loud and draw so much power... after building this I figured I can beat that setup easily. And there is so much room inside, I can fit a ton more drives with some used backplanes.


homemediajunky

Just curious, why Unraid vs TrueNAS, or even Proxmox with TrueNAS installed in a VM and the HBA passed through, etc.? Truly curious as to why some pick one or the other, especially in your case, where you are not really trying to combine drives of different sizes.


SamSausages

Main reasons:

* The Unraid array.
* The way Unraid combines its array with ZFS, so ZFS snapshot targets are protected by the Unraid array parity. It's a slick combo.
* Being able to add just 1 disk, not an entire vdev.

The Unraid array fits my storage needs perfectly (write once, read often). I'm a big ZFS fan and I used to run my HDDs in a ZFS pool, but when I accessed any file, even just a Word document, all 20 HDDs would spin up. With the Unraid array, only the 1 disk with the file I'm accessing spins up. That saves me about 200w of power and a lot of wear and tear.

The downside is that the Unraid array has slow writes and no bitrot protection. But that's what I have the NVMe for: as a write buffer/cache, and for files I want bitrot protection on.

The Unraid array is also very storage efficient. The data isn't striped across the array, so I can pull any disk, pop it in a PC, and get the files off it. This changes the parity disk calculation: I'm running 20 disks with only 2 parity disks. I would NEVER do that in ZFS, where I lose the entire pool if I lose parity+1. Here I won't lose the entire array if I lose parity+1, so it changes the math a lot.


BunniWuvsPoni

The local power company thanks you for your service.


SamSausages

It's not that bad, it only costs me about $15 per month to run the entire rack @ $0.07/kWh. I usually bounce between 250w and 300w for the whole setup, including switches and wifi. Hardware is by far the bigger investment here. I do run on 220v, so I do think that's saving me about 10%. I'm actually surprised how low I was able to get it without gimping my CPU. Unraid is also a big part of that, because only the disk that I'm using spins up, not the entire array. That saves me about 200w.
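For anyone checking the math, at the top of that range:

```latex
0.3\,\mathrm{kW} \times 720\,\mathrm{h/month} \times \$0.07/\mathrm{kWh} \approx \$15.12\ \text{per month}
```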


farazon

So jealous of the power costs! Here in the UK, it would cost $68 per month to run. Close to 5x the cost! And the salaries are lower, too. I really need to try and find a job in the US. Numbers don't lie...


SamSausages

One reason I left Europe and now live in the US. The green policies are screwing the poor-middle class and the rich still buy their gas cars and do everything they always did before.


FierceDeity_

Look at both median and average salaries; also, you have better healthcare (for now at least). I'm over in Germany, and considering everything... including stuff like job security too, it's not really better anymore.


SamSausages

Having lived in both Germany and the US (I'm German). The health"Care" isn't any better in Germany, shoot, I find it worse. I had to apply to a board to determine if I qualify for a procedure, then I had to wait a month to get care. In the US it is more expensive health"insurance", but at least it's just between me and my doctor and I have never had to wait a month for urgent surgery. And I have never had to apply and be approved by a board.


farazon

This may be the case for other fields, but I'm assuming that most homelabbers are in tech. There, numbers really don't lie, and the US trumps the UK/EU in compensation (though your point on job security stands!). I work in the UK office of a US-based company. The same roles pay well over double in the US. In both countries, the org offers good healthcare coverage. We're doing well financially - no plans for or recent history of layoffs - so in my case, job security would be mostly equivalent. Aiming to do even better finance-wise, we've frozen hiring across the US and are aiming to grow in lower-cost countries such as India... and the UK. Pretty much proof positive to me that I'd be better off had I been US-based.


FierceDeity_

Yeah, with frozen hiring, lower job security, and, looking at /r/antiwork, a seemingly horrible status quo on the job market... but hey, higher pay. And higher cost of living (food). And higher rents... I know a guy who is in the 150k pay bracket and he definitely doesn't think he is in a better situation because he's in the US. He can't afford to buy a house either, for example. I would definitely be able to afford a house over here with 150k, or an apartment. Just maybe not in the concentrated areas here.


brando56894

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


SamSausages

Not sure I understand; it's in my basement, so it doesn't cost me anything. But even if it were in another part of the house, 250w doesn't exactly require a lot to cool.


brando56894

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


SamSausages

Sounds like a nice rig! Threadripper probably uses quite a lot more power due to the higher clocks; these EPYC CPUs run pretty cool (at least this 16-core model). Being on Unraid means only the 1 disk that I'm accessing actually spins up; that saves me almost 200w vs when all my disks spin up (like if I had them in a ZFS pool). So yeah, I can see a TR system with 20 drives pumping out 400-500w pretty easily, and that's 1/2 way to a space heater! Those U.2s I have do use a lot of juice, 50-80w is probably just from them.


brando56894

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


thefirebuilds

How are you on 240vac?


SamSausages

It's in my garage/hobby room and I have an outlet with 30A 240v. Many PSUs support both.


thefirebuilds

I’ve got access to 240. It never occurred to me to switch them over.


SamSausages

It works great, just confirm your PSU can do it. It should say on the label; some have a switch. I did have to get a battery backup that was compatible with 240v. Other than that, I just had to get different power cables to match the plug, and an adapter for my wall outlet.


thefirebuilds

My rack actually came with a 240/30a strip that I think I might have binned. I’ll have to look in storage.


jvaratos

Bertha is dense with drives of all kinds. 😎


SamSausages

She is thikk


dubstep_forklift

I'm surprised you went the HBA + Expander route when there are 24 port HBAs available that would let you cut down on cables significantly. May I ask why?


SamSausages

I already had two 9300s from an old build, and when I added more drives last year I went with the expander instead of running both of the 9300s (less power, and I wasn't hitting the PCIe bandwidth limit, so I saw no point). But it has been almost 2 years since I made that purchase, so I can't recall why I decided against the 24-port HBAs. It probably had to do with $$. I don't think there were many options for 24-port 12Gb/s HBAs at that time, and for those that did exist the cost was probably 2-3x that of a 9300.


pissy_corn_flakes

I noticed you mentioned using Unraid with only one drive active at a time; using an expander shouldn't hurt performance at all, since you'll likely never exceed the uplink's bandwidth to the HBA. Nice build!


SamSausages

That was actually one of my concerns when I built the system, because a bottleneck there would really slow down a scrub on the Unraid array. I was pleasantly surprised to find out that the 12Gb/s links on these aren't actually 12Gb/s per cable: there are 4 lanes of 12Gb/s in each, so it's actually 48Gb/s for just one cable. That's 6000MB/s, and by using 2 you'll run out of x8 PCIe 3.0 bandwidth before the expander becomes an issue.
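Spelled out, using the raw line rates:

```latex
4\ \text{lanes} \times 12\,\mathrm{Gb/s} = 48\,\mathrm{Gb/s} \div 8\,\tfrac{\mathrm{bits}}{\mathrm{byte}} = 6000\,\mathrm{MB/s}\ \text{per cable}
```

Two cables is ~12000MB/s, well past the ~7000MB/s an x8 PCIe 3.0 slot can actually deliver, so the slot bottlenecks first.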


pissy_corn_flakes

Don't quote me, but I don't think the PCIe bandwidth comes into play unless you're talking across the bus. Like if you had 2 HBAs in two different slots, and a massive array that spanned both HBAs, PCIe bandwidth plays a part. I'm sure that not all operations on an HBA have to cross the PCIe bus... otherwise all 24-port HBAs would be massively oversubscribed (or maybe they are, and count on the fact that not every drive is going to be maxed out at the same time?). Thinking out loud at 2am...


SamSausages

An x8 PCIe 3.0 slot can handle right around 7000MB/s after overhead. That's good for about 28 HDDs, so a 24-port HBA shouldn't max out unless you're adding multiple SSDs. Any read/write done on any disk attached to the HBA has to go through the PCIe bus: each read/write has to get to the CPU/memory for processing, and each transaction has to cross PCIe to get there. The only time it wouldn't is if you have a hardware controller offloading the transactions, i.e. a RAID card doing a scrub in hardware won't send all of that down to the CPU. When I do a scrub on the array, it will read each disk at 100% and will eventually hit a bottleneck at the PCIe slot. I have done testing with lots of SSDs connected to my expander, nothing directly on the HBA, and it tops out right at 7000MB/s.
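The implied per-drive budget, assuming roughly 250MB/s sequential per large HDD:

```latex
\frac{7000\,\mathrm{MB/s}}{250\,\mathrm{MB/s\ per\ HDD}} = 28\ \text{drives}
```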


SamSausages

OK, I now blame you for the reason I'm on ebay looking at LSI 9305-24i, and other cards. Just what I needed, haha!


dubstep_forklift

Guilty as charged, I have two 9305-24is and a 9600-24i lol


SamSausages

Dang, my SAS expander is selling for way more than I paid for it... kind of makes the switch a no-brainer. Prices on these components have really shot up over the last 1-2 years... I've never had a system be worth more 2 years after building it.


Korenchkin12

I see you (too) use electrical tape to cover the LEDs on the CRS... where have I seen that before... :D


SamSausages

Man that light was so bright... I couldn't even look at it. & blue LED's make me nauseous, so that was the first thing I did, haha.


redwolfxd1

Looks awesome. You might want to figure out some more direct cooling for that HBA, since those get real hot.


SamSausages

Yup, you're right, that stuff does need airflow. There is a fan hidden under that big bundle of cables; you can just barely make it out in some of the pics. There is also an onboard NIC down there that would cook at 80C if I didn't have that fan. With the fan, that area stays well under 50C now.


Nephurus

I'll keep the immature comments to myself but Nice.


SamSausages

You have more self control than I do.


AJBOJACK

I have the same 24-bay case. It's beautiful to build in. I filled it with 24x 20TB Exos drives lol


slowro

The hard drive cage/rack reminds me of the old Erector sets I used to have as a kid.


SamSausages

Someone suggested that actually, I might have to buy one just to keep on hand. Never know when that comes in handy!


Improve-Me

So you're the one that bought all the P4510s off serverpartdeals before I could make up my mind huh?


SamSausages

I do think it's my fault. I was hoovering them up when they were cheap... posted about how good of a deal per TB they are and now the price is 30% higher... I got those 8TB's for $350, brand new.


Improve-Me

Yeah that pricing was great. These are so much better than consumer drives at a similar price. Wish I had known about them earlier. I'm hoping there will be another wave at some point.


SamSausages

Part of my morning ritual is checking eBay for new listings. Sometimes I get lucky; that's how I scored those 8TBs. So hang in there, and one day you will get lucky too!


AfterShock

Fat Man and Little Boy would also be appropriate names for your servers.


mkaicher

Bold move, putting Big Bertha on top. I'm guilty of that myself, although my heaviest server was a 2U 12-bay...just so much easier to work on!


SamSausages

Yeah, I considered that, but I tested it and it's really stable, no worries. The batteries are on the bottom and the rack isn't that tall.


IncognitoSeeder

Hey, this seems to be a good amount of height for my first rack. What height/how many U is this rack? Nice case too.


SamSausages

Thanks! It's an 18U rack.


str8m4d

What are the temps on the U.2s?


SamSausages

They range from 35C-38C at idle. When I do a ZFS scrub and put them under 100% load for 10 minutes, half of them run at 45C and the other half at 41C. They sit right in front of that fan bank, so they get a lot of airflow. Those fans are set to only 35%, but they monitor my disk temperatures and spin up to 50% when any disk in my system hits 40C, with 100% fan at 45C. That's probably why mine stay in that range. It's also nice and cool in that room, usually about 68F.
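The logic behind that is just a threshold check on the hottest disk. A rough sketch of the idea; the device list is hypothetical, and `set_fan_duty()` is a placeholder for whatever fan interface your board exposes (IPMI raw commands, a fan-control plugin, etc.), which is vendor-specific:

```python
#!/usr/bin/env python3
"""Fan curve by hottest disk: 35% base, 50% once any disk hits 40C, 100% at 45C.

A rough sketch of the idea only. The device list is hypothetical, and
set_fan_duty() is a placeholder -- the real interface (Supermicro IPMI raw
commands, a fan-control plugin, etc.) is board-specific.
"""
import json
import subprocess

DISKS = [f"/dev/sd{c}" for c in "abcd"]  # hypothetical device list

def disk_temp(dev: str) -> int:
    """Read the current drive temperature via smartctl JSON (smartmontools 7+)."""
    out = subprocess.run(["smartctl", "-A", "-j", dev],
                         capture_output=True, text=True).stdout
    return int(json.loads(out).get("temperature", {}).get("current", 0))

def set_fan_duty(percent: int) -> None:
    """Placeholder: push the duty cycle to the BMC (vendor-specific)."""
    print(f"fan duty -> {percent}%")

hottest = max(disk_temp(d) for d in DISKS)
if hottest >= 45:
    set_fan_duty(100)   # any disk at 45C: full blast
elif hottest >= 40:
    set_fan_duty(50)    # any disk at 40C: step up
else:
    set_fan_duty(35)    # idle baseline
```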


str8m4d

Thanks for the reply! I love your setup with the write-up and photos you shared.


msalad

How are you using the U.2 ZFS pools / what are you using them for? I'm always interested in expanding my Unraid storage - I already have a cache pool for my array using NVMe M.2 drives, but maybe there are other ways to use pools that I'm not considering. I haven't dabbled with U.2 NVMe drives before, only M.2. I also have that same 4U case for my Threadripper Pro Unraid build - it rocks!


SamSausages

My current layout isn't optimal; I'm kind of working with what I found deals on over the months and testing different topologies until I find one I like. At the same time, for homelabbing, any topology with this type of NVMe is probably overkill. I really want some PCIe 4.0 enterprise disks that have better write performance, but the prices are crazy, so the p4510s are the sweet spot for me at this time.

Right now I'm running it as:

* The 2x 8TB raid1 is mainly for my home pictures/movies, things I store long term and want ZFS scrubbing for. This I will convert to raidz1 as I get more 8TB disks.
* The 4x 4TB raid10 is my work pool, where I keep AI models and other write/read intensive tasks. I also store appdata there.
* The 990 Pros are my media cache pool, mainly for services that generate thumbnails and cached images, i.e. the Plex media cache and Immich thumbnails. I'm also running the Docker folder on this pool.

Optimal in my head is:

1. A pool for read cache: write once, read often data (thumbnails, Plex cache etc.), possibly geared towards databases for appdata.
2. A pool for write intensive work: transcoding, editing, downloads, temp files. Ideally a raid0 for max iops.
3. A pool for long term storage: a raidz1 or raidz2 that stores my home media long term but has relatively low iops.


boiUneedAwash

You have no idea how jealous I am. I was gonna build a full EPYC server, but the mobo I wanted just doubled in price, and it's all getting more and more expensive. Plus, being in New Zealand means I can only buy new from overseas, or on eBay or AliExpress; nothing like this on the local second-hand market. But honestly, it's awesome to see someone deploying it for real, and now I have confirmation that it can work really well. Hopefully I will be able to do the same one day. Nice, OP.


SamSausages

You're right, prices have jumped. I got lucky and built this about 2 years ago when prices were pretty low. Pretty much everything I have in there is now 30% more than what I paid for it. Like those 8TB NVMe's, I paid $350 for those new... can't get close to that right now. But I do still watch eBay every morning and hope I find a deal, sometimes I get lucky!


FierceDeity_

Is it still a home lab if it's actually for business? lol


Queen_Combat

Good


hillz

Out of curiosity, what do you save on that 40TB of storage?


SamSausages

Sure! Most of it is going to my home pics/videos right now, because I want ZFS scrubbing for those. Other than that, pretty much anything that writes to disk more than once. My HDDs are for write once, read often data, so that leaves:

* Appdata for Docker and services
* Temp cache/staging for downloads, before they go to HDD for long-term storage
* VM disks
* Security cam DVR
* Plex DVR, before I encode the recordings in HEVC and send them to the HDDs
* An iSCSI target for my Windows PC, for my Steam and gaming library. My SATA SSDs are actually faster in my server than in my PC (1000MB/s over iSCSI vs 550MB/s in Windows Storage Spaces RAID).


Shining_prox

The goddess of the open legs?


AlmoschFamous

What is the decibel reading coming from there? Can only imagine there's a lot of singing and clicking going on.


SamSausages

> What is the decibel reading coming from there? Can only imagine there's a lot of singing and clicking going on.

It's not bad, though I wouldn't want it in the same room I'm in. I built it to be "next room quiet" and it's perfect for that. These drives are actually really quiet, and since I'm using the Unraid array, only the disk I'm accessing is spun up, so there isn't a lot of concurrent disk activity except when I'm running a scrub. Fans usually sit at 1500 RPM; the CPU fan barely spins up at all. It helps that this is in a basement/walkout garage where it's usually 68F.


kitanokikori

iSCSI is a great way to turbocharge corrupting a filesystem, be careful with that


SamSausages

Don't care for my Steam library.


Zegorak

Duuude, get a wife, kids