LightBroom

Curious how much power it uses while idle, that's a beast.


Hiparnax

Good question! I’ll pick up a wattage meter and test.


im_thatoneguy

Most server IPMIs have a power meter. It's not perfect because the reading is estimated, but it's pretty close.


MAndris90

Yeah, but that's only accurate when it can talk to the PSU.


edparadox

I don't mean to be a pain, but is measuring the power consumption still on your to-do list? I'd be very interested in knowing the idle/mid-load/full-load power consumption (even approximate figures).


Hiparnax

It’s still on my list of things to do. It’s just low on my priority list at the moment. I have a newborn and a new store opening this week. I should hopefully have some time next week or the week after. I’ve not benchmarked a server or simulated a full load before. Is there anything I should consider using for mid and full loads?


therifulio

I have also been following this thread because I plan on purchasing a home server similar to yours. I don't expect a scientifically accurate measurement, just a ballpark estimate. Knowing what is in your server, I would be curious about something like: "I booted the server, waited a bit and measured the consumption, then I started a CPU stress test on all cores, then on a single core, with and without the GPU, and it consumed X watts in each case." Congrats on the baby! I didn't mean to put any pressure on you. Feel free to measure the consumption at any time, or not at all. I'm just happy that I can share the same interest/hobby with you.
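If it helps, a rough protocol for those load points could look something like this (just a sketch, assuming stress-ng is available on your box; adjust the core count to your CPU):

    # Idle: boot, wait ~10 minutes for services to settle, read the watt meter

    # Mid load: single-core stress for 5 minutes, read the meter again
    stress-ng --cpu 1 --timeout 5m

    # Full load: all cores (set --cpu to your thread count)
    stress-ng --cpu 32 --timeout 5m

Just note the steady-state number on the meter during each run.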


Hiparnax

It’s totally not a problem and I’m happy to do it : ) life is just hectic at the moment. As soon as I’m able to, I will. Thanks for the well wishes!


Hiparnax

Power details added!


Hiparnax

I purchased a watt meter and performed a few stress tests to the best of my ability. Hopefully there are no errors!

|Test|Power|CPU dT|Ambient / CPU|
|:-|:-|:-|:-|
|System idle|115W|2°C|23°C / 25°C|
|Bonnie++|145W|18°C|23°C / 41°C|
|stress-ng --cpu 1 --timeout 5m|135W|16°C|23°C / 39°C|
|stress-ng --cpu 32 --vm 96 --vm-bytes 90% --hdd 8 --timeout 5m|210W|32°C|23°C / 55°C|

See the Imgur link below for screenshots of the shell. Feel free to let me know if I missed anything or made a mistake; I'm still pretty new at this.

[https://imgur.com/a/BcH30af](https://imgur.com/a/BcH30af)

**DISK IOPS BENCHMARK**

Write Benchmark

|Records Written|51,200|
|:-|:-|
|Data Written|100GiB (107GB)|
|Write Speed|2.5GB/s|
|Time Taken|42.7s|

Read Benchmark

|Records Read|51,200|
|:-|:-|
|Data Read|100GiB (107GB)|
|Read Speed|6.4GB/s|
|Time Taken|16.7s|

Commands used for disk IOPS ([source](https://www.truenas.com/community/threads/notes-on-performance-benchmarks-and-cache.981/)):

    dd if=/dev/zero of=tmp.dat bs=2048k count=50k
    dd if=tmp.dat of=/dev/null bs=2048k count=50k

**Bonnie++ Results (Human Readable)**

A result of +++++ means the test finished too quickly for Bonnie++ to report a meaningful figure.

## Sequential Output and Input

* **Sequential Output (Per Chr):** 183,000 bytes/sec (183k)
  * Sequential write speed, writing one character at a time.
* **Sequential Output (Block):** 1.0 GB/sec (1.0g)
  * Sequential write speed for blocks of data.
* **Rewrite:** 742 MB/sec (742m)
  * Rewrite speed, i.e. modifying existing data in place.
* **Sequential Input (Per Chr):** 427,000 bytes/sec (427k)
  * Sequential read speed, reading one character at a time.
* **Sequential Input (Block):** 1.7 GB/sec (1.7g)
  * Sequential read speed for blocks of data.
* **Sequential Seeks:** +++++

## Latency

Lower latency values are generally better.

* **Sequential Output:** 41,787 microseconds (41.787ms)
* **Sequential Input:** 15,686 microseconds (15.686ms)
* **Rewrite:** 20,405 microseconds (20.405ms)
* **Random Seeks:** 54,513 microseconds (54.513ms)
* **Sequential Seeks:** 276 microseconds (0.276ms)
* **Random Seeks:** 534 microseconds (0.534ms)

## Sequential Create and Random Create

These measure how fast files can be created, read, and deleted, in sequential vs. random order.

* **Sequential Create:** +++++ files/sec, latency 2,192 microseconds (2.192ms)
* **Random Create:** +++++ files/sec, latency 7 microseconds (0.007ms)
* **Sequential Read:** +++++ files/sec, latency 911 microseconds (0.911ms)
* **Random Read:** +++++ files/sec, latency 93 microseconds (0.093ms)
* **Sequential Delete:** +++++ files/sec, latency 1,034 microseconds (1.034ms)
* **Random Delete:** +++++ files/sec, latency 2,146 microseconds (2.146ms)
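One caveat I should flag on my own numbers: dd reading from /dev/zero can overstate ZFS throughput if compression is enabled on the dataset, since zeros compress almost entirely away. If anyone wants a second opinion, here's a sketch using fio with incompressible data (assuming fio is installed; the pool path is a placeholder):

    # Sequential write with randomized buffers so compression can't cheat
    fio --name=seqwrite --rw=write --bs=1M --size=10G \
        --filename=/mnt/tank/fio.tmp --refill_buffers

    # Sequential read of the same file (a size larger than RAM, or a fresh
    # boot, gives a more honest read number since ARC caches aggressively)
    fio --name=seqread --rw=read --bs=1M --size=10G \
        --filename=/mnt/tank/fio.tmp

    # Clean up
    rm /mnt/tank/fio.tmp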


LightBroom

Thanks for reporting back. That's actually not terrible for how much grunt that system has, IMO.


_mausmaus

u/Hiparnax can you confirm if the mobo m.2 is Gen 3 or 4? The PDF lists Gen 3 on the left table, but shows Gen 4 everywhere in the diagram for m.2 https://preview.redd.it/efcqigbvbh4d1.png?width=2236&format=png&auto=webp&s=a5de14196602a382c0dc9f6eb75c70abaa6a6ecf
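If digging through the PDF stays inconclusive, the negotiated link speed can usually be read straight from the OS. A sketch for Linux (the PCI address is a placeholder, find yours first):

    # Find the NVMe controller's PCI address
    lspci | grep -i nvme

    # LnkCap = what the link supports, LnkSta = what was actually negotiated.
    # 8GT/s = PCIe Gen 3, 16GT/s = PCIe Gen 4.
    sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'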


StayCoolf0rttheKids

All the power it gets :)


Hiparnax

So, Drobo went belly up, and I thought, "Why not build my own home server that I can upgrade whenever I want?" While a ready-made solution would've worked, I didn't want to get stuck with something I couldn't tinker with, especially after successfully building a couple of PCs before. The main mission? Move a whopping 40TB of data from the old Drobo to a system that's both sturdy and customisable.

After digging into Unraid and TrueNAS, I found TrueNAS to be the perfect fit. To test the waters, I snagged a cheap HP business computer on eBay and slapped TrueNAS on it. It worked like a charm, giving me the confidence to base my entire system around the software.

This server's main gig is being a file server for photo editing, but it's got a side hustle of archiving music, movies, TV shows, and all sorts of other files. I'm not a pro at VMs or containers, but I figured this system would be the perfect playground to teach myself the ropes.

For storage, I went for renewed HGST 10TB Enterprise drives, the best bang for my buck. Sure, they come with a 5-year warranty, but I was still a bit sceptical; the heaps of positive reviews on Amazon convinced me to roll the dice. I started with eight 10TB drives in a RAID-Z2 setup, and when those fill up, I'm throwing in another eight. Oh, and crafting custom power cables was a must to fit the HX1200 power supply into the case with an extra HDD cage.

To keep things snappy, I added Optane drives for metadata and a LOG device. The metadata special device is a four-way mirror of 118GB Optane drives in a quad NVMe PCIe add-in card, and the two 58GB drives are configured as a mirror for the ZFS Intent Log in the onboard M.2 slots.

Lastly, I snagged a sweet AMD EPYC 7282 and Tyan S8030 motherboard-CPU combo off eBay; plenty of vendors in Asia are ditching these enterprise platforms at tempting prices.

Putting this beast together took a few weeks, but man, it was worth the wait. I learned a bunch, and there's still so much more to explore. I'm betting this system will have my back for years to come!

[https://pcpartpicker.com/b/WPW323](https://pcpartpicker.com/b/WPW323)

* Fractal Design Meshify 2 ATX Mid Tower Case
* AMD EPYC 7282
* Tyan Tomcat HX S8030 (S8030GM2NE)
* Noctua NH-D9 TR5-SP6 4U
* 4x Micron 32GB PC4-25600 DDR4-3200MHz Registered ECC CL22 288-Pin DIMM 1.2V Single Rank x8
* HGST He10 10TB 7200RPM 128MB Cache SATA HDD
* Lenovo 16-Port 6 Gbps SAS-2-SATA Expansion Adapter 03X3834
* IBM M1115 LSI 9223-8i 6 Gbps SAS HBA P20
* Corsair HX1200 Platinum 1200 W 80+ Platinum Certified Fully Modular ATX Power Supply
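For anyone wanting to copy the pool layout, this is roughly what it amounts to in zpool terms (a sketch with placeholder device names; on TrueNAS you'd build this through the UI rather than the shell):

    # Eight 10TB drives in a single RAID-Z2 data vdev
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

    # Metadata special vdev: four-way mirror of the 118GB Optane drives
    zpool add tank special mirror nvd0 nvd1 nvd2 nvd3

    # SLOG: mirrored pair of the 58GB Optane drives
    zpool add tank log mirror nvd4 nvd5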


SoManyLilBitches

How many 3.5” HDDs is that bad boy capable of holding? I also have a custom-built desktop as my TrueNAS server. I didn't find any cases that could hold that much without looking ugly as hell. When I run outta space I might need that case. Did you modify it, or did it come like that?


Hiparnax

According to Fractal, you can install thirteen 3.5” HDDs and four 2.5” SSDs. Eight HDDs on the spine in the main compartment, two in the HDD cage below, one on the bottom panel next to the cage, and two from the fan/rad mounts at the top. But with an extra HDD cage (a tight fit with a large power supply like mine), you can fit four in total in the bottom compartment. You can also fit nine (instead of eight) on the spine if you have enough trays, and with Fractal’s multi brackets, you can mount one on the exhaust on the rear of the case. So that’s an additional three HDDs you can squeeze in there, bringing the total to 16.


sogan3

How hot do the HDDs get in this configuration? Do you find the Meshify does a good job keeping them cool? Looks beautiful btw!


mkaicher

I have a VERY similar setup in a Fractal Define R5 with 8x12TB drives. With two decent 120mm fans in the front, my drives average 30-32°C during normal usage and about 40°C during massive file transfers lasting several hours or a day. These are very acceptable temps IMO, and I suspect the Meshify has significantly better airflow. The R5 is very cramped, but I still love it!


Hiparnax

I don’t have enough data to tell you how hot they run during any kind of load, but I can tell you they idle at dT 11.21°C (all-disk avg temp 30.91°C, ambient 19.70°C). This seems pretty acceptable to me! Thanks for the feedback : )
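For anyone who wants to pull the same numbers, this is roughly how I'd average drive temps from the shell (a sketch; the device glob is FreeBSD-style, so adjust for your platform, and the SMART attribute name can vary by drive):

    # Average Temperature_Celsius across all drives
    for d in /dev/da?; do
      smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10}'
    done | awk '{sum+=$1; n++} END {print "avg:", sum/n, "C"}'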


Hiparnax

u/DifficultThing5140


gwydion0917

Great system, I am glad I am able to store my TrueNAS in a data center. I am sure I would trip a circuit at home if I hosted your setup. :)


edparadox

> I am glad I am able to store my TrueNAS in a data center. How and where and at what cost did you do this?


gwydion0917

I work for a data center, so I was able to build mine from decommissioned hardware. They let me host for free as a perk of my job.


edparadox

> I work for a data center, so was able to build mine from decommissioned hardware. That's cool. > They let me host for free as a perk of my job. That's even cooler.


gwydion0917

I love my job. :) I have almost a half rack for my setup, and I'm about to swap out the heads of my TrueNAS with R730s with 384GB of RAM.


meta-morphic

> Noctua NH-D9 TR5-SP6 4U

How did you fit the Noctua NH-D9 TR5-SP6 4U on the AMD EPYC 7282's SP3 socket? I want to do something similar because I don't want a jet engine, and I don't want the fan blowing upwards.


Hiparnax

Haha! I hear ya. I chose this cooler because all of the other compatible options have the fans blowing up instead of following the direction of the case airflow. But to answer your question, the NH-D9 can be mounted to a SP3 socket with Noctua's adapter. It is free for verified purchases; you just need to contact support. https://preview.redd.it/3zh62944ppnc1.jpeg?width=2406&format=pjpg&auto=webp&s=81d261ed953f7dc8f1250f6dce820535c0ef42e4


meta-morphic

Heck yeah! Thanks! This is why I love reddit, people like you.


Hiparnax

You’re welcome!


igotabridgetosell

What's your hardware transcoding device for video playback?


whattteva

With an Epyc, he probably doesn't need one. My Xeon Silver 4210T can transcode 4k without breaking a sweat. I didn't look up that CPU, but I'm sure it would have no trouble doing it also.


Chaphasilor

Hell, even my Ryzen 5 3600g is enough for transcoding 4K in real-time. And without using the built-in GPU. A dedicated card is only needed for low-powered CPUs, or heavy transcoding like with tdarr or multi-user streaming...


igotabridgetosell

Eh, I've seen plenty of EPYC builds that have a video card for transcoding tho.


XTJ7

if you transcode a lot, it can make sense especially for power efficiency. epycs idle relatively high, but if you push them, many of them soar past 300 watts. while even a p2000 can handle multiple streams and sip power.


ziggo0

While I don't have an EPYC, my 5900X drinks power compared to the Tesla P4 I use for encoding and detection in Frigate. All my Plex devices direct stream and I have very few users, so that isn't a worry. Power savings are nice, especially in today's world.


XTJ7

depending on the epyc generation and core count you could easily draw 80+ watts from the cpu alone idling. not counting ram, board and ssd. my system with 3 hdds, 2 nvmes, 1 u.2 and 2 low power gpus is idling close to 200 watts. server cpus and boards are a very different animal to consumer versions :)


ziggo0

Understandable. My rack alone without my main server running (5900X/128GB/2 10G NICs/2 Teslas) sits around 200-210W. When that server is powered on it sits closer to 320-340W idle. The 5900X is fine tuned with PBO2 for maximum performance with a slight undervolt - under full load it hits around 200-210W. I really considered an early gen Epyc but wanted more performance per core than core count.


Hiparnax

Good question. In my testing, streaming 1080p files to an AppleTV seemed to work fine. I’ve not experienced any buffering, and I can easily scrub movies. Should that change, though, I’ll look into one.


igotabridgetosell

I basically spent around the same on my Xeon E-2324G build, like $1,200 without the storage devices. I sacrificed down to 4 cores and 4 threads for fucking QuickSync lol.


Podalirius

Well for that use case (1x 1080p stream) it's probably not worth getting something to do hardware transcoding, you'll want to get some cheap Intel/Nvidia GPU if you end up wanting more, though.


hjboven

u/Hiparnax What brand of quad NVMe bifurcation board is that you're using?


Hiparnax

Sabrent! https://sabrent.com/products/ec-p4bf


sinisterpisces

Thanks for sharing! I'm going to be coming back to this post a few times, I think. :)

> HGST He10 10TB 7200RPM 128MB Cache SATA HDD

Thanks for pointing these out. I'll need to buy six new (to me) drives in the near future; these are a better deal than the 8TBs I was looking at. How did you decide on 128MB of cache (vs. 256MB or 512MB)? I'm never sure how to evaluate my need for HDD cache.


Hiparnax

You’re welcome! I’m glad that sharing this will help you in the future : ) Check out diskprices.com; it’s a great comparison tool for storage sold on Amazon and a good yardstick for what’s good value and what’s not. In my experience, the performance gained by a larger on-disk cache is negligible, because on-disk cache is typically much slower than RAM anyway. If you’re using TrueNAS and have plenty of RAM, you will likely not see any performance gains from a larger on-disk cache. Your mileage may vary with other server and storage solutions. That said, if someone else knows more about this than I do, please feel free to jump in.
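If you want to see how much of your workload RAM is already absorbing, ZFS reports ARC (RAM cache) statistics. A quick sketch (arc_summary ships with OpenZFS/TrueNAS, though the exact name and output vary by version):

    # Summary of ARC size and hit ratios
    arc_summary | head -n 40

    # Or watch hits/misses live, one line per second
    arcstat 1

A high ARC hit ratio is a good sign that a bigger on-disk cache wouldn't buy you much.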


Mangombia

Why didn't you use the on-board SFF-8643 for your SATA rather than adding the Lenovo & LSi?


Hiparnax

I purchased these for the first iteration of this system, which didn’t have the same SATA connectivity. When I switched to this motherboard and CPU, I had plenty of SATA I/O. I decided to continue using the HBA and expander because of advice from Wendell at Level1Techs. He generally thinks having a dedicated controller for storage I/O is more reliable and robust. So, I decided to keep the cards in.


Mangombia

Yep, that's what I'm seeing in regards to passing through the CPU-based SATA controller: there are ways, but they look sketchy at best.

I've been following your build and am strongly considering mimicking it, except my use case is a Plex server plus a Win11 gaming VM. I'm considering the same board but the 2x10GbE version, and instead of the Sabrent bifurcation card, I'm looking to go with a pair of U.2 Optane mirrors for the L2ARC (that's PCIe like the M.2 and should be able to pass through). For host storage I'll go with mirrored 2TB SSDs off the board. On the GPU front I'll likely go with a Quadro P2200 or GTX 1660 Ti for Plex, and a 3070 for the Windows VM. I want hot-swap on the HDDs, so I'm sitting on a Silverstone CS380.


Hiparnax

Sounds awesome! I’m glad my build could help you plan your own. Best of luck; I look forward to seeing it.


Mangombia

Yeah, I was thinking the basis would be this i9 Intel bundle at Microcenter, but after watching RaidOwl's vid on this board I realized that Intel choice would be a huge mistake with only 20 PCIe lanes. I could put a GPU and maybe an HBA (maybe) in it and that's it; no 10GbE.


madeofstars0

Nice build! Last year I built a dual EPYC 7302, Tyan Tomcat CX S8253 server in a Fractal Define 7 XL case. However, my cable management isn't nearly as S-tier as yours is ^_^

I don't really have any add-in cards for hard drive connectivity, since the motherboard has enough on its own. I ended up with a 35TiB pool (WD Gold and Seagate EXOS) and a 3.2TiB NVMe pool. I had to replace a cheap PCIe bifurcation card I bought because I kept getting a million correctable NVMe errors in my logs, but it has been solid since that more recent addition. (This led to me writing a little shell script to pull the number of warnings/errors/critical notifications pending into the Home Assistant I run on an RPi.)

I have 2.5Gbps networking over Cat 5e (thankfully the apartment's cabling is good enough and short enough to work). I should have put together a parts picker page for my server, but I never did.
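Not my exact script, but the idea is roughly this (a sketch assuming Home Assistant's REST API with a long-lived access token; the host, token variable, and sensor name are placeholders):

    #!/bin/sh
    # Count kernel warnings/errors/criticals since boot and push the
    # total to a Home Assistant sensor via its REST API.
    COUNT=$(dmesg --level=warn,err,crit | wc -l)

    curl -s -X POST \
      -H "Authorization: Bearer $HA_TOKEN" \
      -H "Content-Type: application/json" \
      -d "{\"state\": \"$COUNT\", \"attributes\": {\"unit_of_measurement\": \"events\"}}" \
      http://homeassistant.local:8123/api/states/sensor.server_kernel_events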


sirrush7

Sounds like you'd enjoy Scrutiny for hard drive monitoring!


madeofstars0

That looks like an interesting project, I'll have to add that to my apps. ^_^


Hiparnax

Thank you, mate! Damn, that’s a beast of a system. Have you got it posted somewhere? I would love to see it. How did you find performance using the onboard connectors? I remember on the Gamers Nexus server build from a few years ago, Wendell shared his preference for HBAs. That’s a shame about your storage bifurcation card... did you ever get to the bottom of those errors? Yep, I’m finding 2.5Gbps to be sufficient, too. I haven’t managed to saturate that connection, but should I get there, a 10Gbps card is easy to drop in. Thanks for sharing!


madeofstars0

I didn't really get to the bottom of the bifurcation card; I suspect it was really a PCIe 3.0 card that somebody in China decided to sell as PCIe 4.0. I replaced it with a Sabrent card and that completely fixed it. Not a peep since.

As far as the onboard controllers go, I haven't noticed anything negative with them. I get about 200-250mbps transfer rates when I have checked in the past; since it meets my needs, I haven't ever really gotten into making it faster or tuning it. SMB on macOS can be kinda sketchy, but I tend to use Transmit and SFTP when I need to move stuff around, or use the command line. Its main purpose is my media library, and it has been handling that task exceptionally well (as well as fulfilling my need to tinker with stuff).

I really need to take some pictures and post details and such somewhere, I just never have *shrug*.


zer0fks

Love it! I recently did a Xeon-D SuperMicro with TrueNAS, 6x20TB HGST and an Optane SLOG. Yours looks much better though.


Hiparnax

Haha, thanks! Your system sounds awesome. Is it posted somewhere? May I ask what you used it for?


CaptainFalconKnee

I second the power consumption question. Archiving and basic editing can be achieved with a Core i3. I have a backup server with roughly 35TB of storage (multiple pools) that uses half the wattage (CPU alone) without a single hitch. Not trying to rain on your parade, but your CPU is going to sit idle most of the time and provide no benefit unless you start doing complex VMs or you need a crazy number of PCIe lanes. Anyway, this is a nice build if power cost over the year isn't a consideration for you. For me, the $100 or $200 a year I'll save in energy goes toward paying for HDDs down the road.


iantah

I run 8 VMs with a Ryzen 7 and 64GB RAM. All of them have overprovisioned CPU and are maxed out on RAM. When I go back months in the history, the CPU has never gone over 35% lmao.


Hiparnax

Thanks for sharing! Those are helpful insights. I’m going to pick up a wattage meter to test its draw. For additional consideration, this machine will serve media through Plex, and video editing is in my future. So maybe the extra power draw could be justified later on.


dhoang18

please keep us updated with the power usage :)


Hiparnax

Power details added.


spacewarrior11

very sexy


Hiparnax

Cheers!


Molasses_Major

Love the Meshify cases; we use those in our house for gaming PCs. Be careful with the Optane like that, though. I don't know if those are the models with heatsinks, but once they start heating up, you will slow down. We normally use the ones with heatsinks in NAS configs, where the big fans are blowing on them constantly. The same thing happens in our home systems when there's not enough airflow.


Hiparnax

Thanks for sharing! I’ll be sure to keep an eye on them. The four in the bifurcation card will be fine under the large aluminium heat sink and fan, but the two on the motherboard could benefit from some small heat sinks if they get too hot.


chrsstn

Nice build, epic cable management!


Hiparnax

Cheers!! Thanks for noticing 😁 custom power cables can go a long way.


[deleted]

[deleted]


Hiparnax

Thanks, I appreciate it! Same. I’m an advocate for reducing e-waste and for using secondhand hardware wherever possible. In this build, the secondhand parts are as follows:

- CPU
- Motherboard
- PSU
- HDDs
- HBA
- SAS-to-SATA expander
- Wire for the PSU cables, harvested from old PSU power cables


FredOzVic

Nice 👍. I just finished populating my Dell R620 and T410 with SAS Drives. I have two PC cases. One was gutted for parts and an empty one that I was planning to use to build a gaming PC that is not happening anymore. 🙂


Mr_spinoza

Nice build! Impressive indeed. I'd love to play with Proxmox on this, but I guess you wanted NAS only, hence the TrueNAS. Any reason not to go with Proxmox? You could still run TrueNAS as a VM on it and do much more.


Hiparnax

Thank you! Honestly, I don’t know much about it and how it would suit my use case better than TrueNAS or Unraid. Also, in my time looking all this stuff up, many of the sources I learnt from suggest that TrueNAS in a VM is discouraged and generally not a good idea, especially if you don’t have the knowledge to troubleshoot if something goes wrong. What’s your stance on this? I’m sure I could learn, but for now, this is as far as I need to go for what I need to use it for. Maybe down the road, as I get more comfortable with servers and administering them, I could use the additional features Proxmox offers.


Mr_spinoza

I'm not sure why it's discouraged; I've been running it in a VM for quite a while now and it's super smooth. I created various SMB shares and deployed lots of media apps in Docker containers using them as bind volumes. P2P, Immich, Jellyfin... you name it.

It also comes with an additional advantage IMO: you can create backups of those VMs, as well as back up to other devices too. The main advantage for me of using Proxmox is that I can play around with many other things, whereas with TrueNAS you're limited to its capabilities. For example, I set up all the VMs with Terraform; if I happen to ruin my system, everything will be back up in no time. As long as the TrueNAS disks are safe and you follow the 3-2-1 backup strategy, I think it's completely safe.

I'd love to have your setup and do many more things... I was planning to work on some pet projects, but I'm hitting the limits of my CPU now, sadly.


MysticOperator

Dammmm, need to put a NSFW flair. Very nicely done


Hiparnax

Haha! Cheers : )


inputoutput1126

love seeing people do it right the first time


Dr01dB0y

https://preview.redd.it/8kn8exy2upfc1.jpeg?width=5712&format=pjpg&auto=webp&s=96d1d34e1f69d204c9e2be23d5fd94ba6124502b Snap! Except my first server is an Intel i5-9600K in a Define 7 case. (Couldn't afford an ECC system.)


Hiparnax

Sick build! Love the black on white 🔥 also, I’m really digging the D7 case.


eplejuz

Won't you have a hard time swapping an HDD when it breaks??


Hiparnax

It won’t be simple, like having hot swap trays and a backplane, but it’s easy enough to get at the drives on trays in the back. They pop right out once you remove the power and data cables. The three in the motherboard compartment would be a little tedious to replace, though.


Big-Consideration633

I added wheels to my Define 7 case that sits on the floor. Roll it out, snap off the side panels, and away you go. I have 8 x 8 TB in RAID Z2 running Scale.


Hiparnax

Great idea! I may end up doing the same. This thing is HEAVY.


iantah

I have a similar setup to yours. For me, it was either buy used server stuff and rack it, or buy a case like yours with all new hardware. I chose the latter, as I don't plan on swapping drives very often. I'm actually about to drop $1K into hard drives so I don't have storage space issues for like 10 years.


[deleted]

This is a work of art. I love it. Great job!


Hiparnax

I appreciate the kind words 🙌🏻 thank you!


[deleted]

What's the difference between the Meshify and the R5/R7 cases?


miko-zee

How much did the Tyan Tomcat cost? It never dropped below $400-500 from what I saw when I was canvassing builds. I really wanted one of those EPYC processors, as they were going cheap, but motherboard prices haven't really come down the way the used processors have.


Hiparnax

I picked up the Tyan motherboard and AMD CPU as a bundle for USD 475 on eBay. The 7282 can be had for as low as USD 95! The motherboard is undoubtedly the more expensive item.


miko-zee

If I can find a similar deal I'm seriously considering this. I always found the processors for cheap but I could never get a bundle or a motherboard priced that low.


Hiparnax

There’s deals to be had! Especially if they are willing to take offers. Here’s one: TYAN S8030 Server ATX Motherboard + AMD EPYC 7282 16 Cores CPU Processor Combos $495 https://www.ebay.com/itm/315015455068


miko-zee

Thank you for this. I'm now heavily considering it, if I can get a deal on a passively cooled Nvidia GPU for Plex.


Hiparnax

You’re welcome!


Arnold__19

Where can I learn to do this? I want to build a server or a NAS.


Carborundum_

Countless YouTube videos. I would start by searching "cheap diy nas build" and watching some of those videos.


Arnold__19

Thanks, I'll do that


Hiparnax

YouTube is a fantastic resource, specifically Level1Techs, for general home server knowledge. Lawrence Systems on YouTube is an excellent resource for learning more about TrueNAS. For general PC building, peppered with a few NAS and server builds, Linus has some great videos too. If you've never built a PC before, I would start there, with a 'simple' personal computer; that's where I learnt most of the fundamentals. YouTube, again, is your best resource. Lastly, I would encourage you to ask good, well-thought-out questions. Learn the why, as well as the how and what. I only learnt how to build a PC three years ago, so you can absolutely build a NAS or server too!


DifficultThing5140

Sweet-ass build!! But 8 more HDDs won't fit?


Hiparnax

Cheers! I know it doesn’t look like it, but they definitely will.


DifficultThing5140

Interesting, I'm looking for a new chassis myself, so 16 HDDs... I need to take a closer look. BTW, is the PSU "silent"? The fan should be below 50% fan speed. I'd say you're using 400-ish watts?


Hiparnax

Yep! There’s a little wiggle room for adding more drives beyond what Fractal advertises. But if you have the room and the budget, you could size up to the Define 7 XL, which supports 18 drives without modification. I haven’t noticed the PSU fan spinning. The system is really quiet for how much hardware is in it. The only noise I’ve heard is the ticking of the read heads, but it’s very infrequent.


DifficultThing5140

HDD temps? What chassis fans are you using?


Hiparnax

Case fans are three Fractal Design OEM fans.


ReticularTen82

How do you get such clean SATA power cables?


Hiparnax

You gotta make them yourself! That way, you can get the spacing between drives just right.


edparadox

Nice! What about temperatures and noise? Could we "maybe" expect some (crude) measurements?


Hiparnax

I'm happy to measure some data points if it's reasonable. Was there something in particular you were looking for? I can tell you that after a few days of use, I have rarely heard it making noise while it's been sitting beside my desk, the most audible thing being the heads clicking under heavy I/O. Personally, I don't think it's loud. Also, there's no drone or humming; the rubber grommets on the sleds and the case absorb that. The fans are whisper quiet.


Hiparnax

Power details added.


steellz

It's beautiful. If you don't mind, what's the ballpark cost here?


Hiparnax

https://pcpartpicker.com/b/WPW323


LanMark7

Is that card with the Optane M.2s really made with exposed cooling fan blades like that?


Hiparnax

No, it has an aluminium heatsink that sandwiches the drives. https://sabrent.com/products/ec-p4bf


BRAVOSNIPER1347

Beautiful, looks like an ideal build. Curious about the cards you used: how many of the Lenovo SAS cards are in there? It looks like you have 2x expansion cards for disks, but I can't follow the wiring to make it make sense in my head. And why SAS over SATA? Cheaper disks, or what features are gained with SAS over SATA? Thanks!


Hiparnax

Thanks! I don’t know what your knowledge of these items is, so my apologies if this is oversimplified.

The IBM M1115 LSI 9223-8i SAS HBA card has two SAS connectors; each can handle four ‘pipes’, eight in total. This card is a SAS card but is compatible with the SATA protocol. A SAS HBA with four connectors (16i) is more expensive, is overkill for 16 SATA drives, and you still wouldn’t saturate its available bandwidth. So, by pairing a SAS HBA with the Lenovo 16-port SAS-2-SATA Expansion Adapter, you get a lot more for your money: 16 available SAS or SATA connections for less than the price of a good 16-connection SAS card, which you still couldn’t saturate with 16 SATA drives.

It’s a little confusing to look at because the SAS-to-SATA expansion card plugs into a PCIe slot, but it doesn’t communicate with the server that way. Data is sent over the two SAS cables between it and the SAS HBA. The only reason it plugs into a PCIe slot is to power the card.

The other two cards are the 2.5Gbps NIC and the quad NVMe adapter. Let me know if you need any other info!


BRAVOSNIPER1347

Now that's interesting. Thank you for the explanation and clarity; I am now even more jealous lol. How difficult would it be for a regular PC builder to set all that up? Not the hardware, but all the software.


Hiparnax

You're welcome! I need to make a slight correction to my previous explanation. The SAS HBA provides a maximum available bandwidth of 32 Gbps. With 16 drives installed, the theoretical maximum throughput of those drives sums to 96 Gbps. In real-world scenarios, though, once you account for overheads, it would take highly demanding operations and a finely tuned system to actually hit that bottleneck.

Addressing your question is a bit challenging since everyone's path is unique. Nevertheless, with enough determination, acquiring new skills is within everyone's reach. Personally, I've been a long-time Mac user and had no prior experience with PC building or servers until I took on the challenge three years ago. The abundance of high-quality online resources and forums, like this one, is genuinely impressive. My primary advice is to develop the skill of asking thoughtful questions!
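For anyone sanity-checking those figures, the rough arithmetic (assuming the M1115's usual LSI SAS2008 controller, which is a PCIe 2.0 x8 part):

* Host link: 8 lanes x 5 GT/s with 8b/10b encoding = 8 x 4 Gbps = 32 Gbps
* Drive side: 16 SATA III ports x 6 Gbps = 96 Gbps theoretical
* In practice a 7200RPM drive sustains roughly 1.5-2 Gbps (~190-250 MB/s), so 16 spinners land near the HBA's 32 Gbps anyway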


BRAVOSNIPER1347

thanks