Sunday 21 July 2013

FreeNAS with 8TB Drives for home lab

As explained in my previous blog, the N40L-based NAS is good for serving an HTPC or a small lab environment. However, I wanted to build a NAS with more disk spindles and room to add drives later. My requirements were:
  • The motherboard should support 6 or more SATA ports (a pre-Haswell requirement).
  • The motherboard and processor should not be too costly, and power requirements should be low.
Haswell CPUs and motherboards had not been released at that time. These parts were purchased between April and June 2013, which is why some of their costs are lower or higher than their market price today.

Building a Haswell-based NAS is not possible even today, unless you are happy to use a Haswell i5, which would be overkill for a NAS workload. We have to wait until Intel releases their Haswell Pentium/i3 processors, which is expected sometime in September.

I had to look for a CPU and board that could meet the above requirements. After a bit of surfing on the web, I found that the configuration below was in line with them.

Components: 
  • Motherboard: Gigabyte - F2A85XM-D3H  = 89$
  • CPU: AMD - A6 5400K 3.8 GHz = 64$
  • Memory - Kingston -8GB DDR3 - 1333 Mhz - x 1 = 62$
  • Memory -  Kingston - 4GB DDR3 - 1333 Mhz - x 1 - Used the existing one I had as spare.
  • Case: Cooler Master - HAF 912 = 78$ 
  • Power Supply - Thermaltake 450W - Used the existing one I had as spare.
  • Hdd: WD - 4 x 1 TB RED - Taken out from N40L
  • Hdd: WD - 2 x 2 TB RED - 64MB - 7200 RPM = 112$ each
  • SSD: Kingston - 2 x 60GB = 62$ each
  • Additional NIC: 1 x Intel 1000 GT PCI Adapter - 14$ (bought from ebay) 
  • Boot drive: Sandisk Cruzer Blade 8 GB Stick - 8$ 
Motherboard selection: 
This motherboard has 8 x SATA 3 (6Gbps) connectors and supports 4 x 8 GB memory modules. It takes AMD FM2-socket A-series APUs and has 1 x PCI, 1 x PCIe x4, 1 x PCIe x16 and 2 x fan connectors.
Full specification can be found here

CPU Selection:
I could have installed the A4-5300 CPU, which costs $10 less than the A6-5400K and would have been sufficient for the NAS OS load. However, I may re-purpose this system as a bare-metal hypervisor or as my desktop at a later stage, so I went ahead and bought the A6-5400K processor. This CPU has an overclocking feature, denoted by the "K", but I have no requirement for it, so it is running without overclocking enabled.

Case Selection:
My requirements for the case were:
  • 2 x 120mm fans in the front to provide airflow to the hard drives installed in the 3.5" brackets.
  • At least one exhaust fan at the rear.
  • Support for 10 or more hard drive bays.
I checked cases from various manufacturers against these requirements. However, they were either too costly for the specification or fell short on some requirement, such as the number of drive bays supported or having only a single 120mm front fan. I did not want a "real filer" kind of case, which would require rack mounts and the like that I do not have in my lab.

I like Cooler Master's simple but sturdy cases. I chose the HAF 912, which is my 3rd Cooler Master case. :)

The HAF 912 supports 2 x 2.5" SSDs/HDDs, 6 x 3.5" drives and 4 x 5.25" bays, which makes a total of 12 drive bays. By adding a 4-in-3 device module, one can easily add another 4 drives, plus 1 more HDD using a 5.25" to 3.5" converter. That makes this case support 13 hard drives in total.

This case supports 2 x 120mm fans in the front, 2 in the top, 1 in the side panel and 1 at the rear: 6 x 120mm fans in total. Full specification can be found here

Note: this case has a lot of airflow, as the "High Air Flow" in its name suggests. However, the lack of dust filters could be an issue in dusty environments. My house is not that dusty, so I am happy with this case.

Hardware Assembly:
Installation of the motherboard, CPU, power supply and hard drives went like a breeze. Since this case has a lot of room inside, installing the components is very easy.
I installed the 2 x Kingston SSDs in the bottom 2.5" bays and the 6 x 3.5" drives in their respective bay areas.
I installed 2 x 120mm fans in the front as intake, and 1 x 120mm at the rear plus 1 x 120mm in the top as exhaust fans.

NAS OS:
I tried to install NexentaStor Community Edition, which is an excellent NAS OS. The Community Edition supports up to 18TB and should not be used in production environments. This OS has a colorful GUI and dashboards, plus ZFS configuration wizards. Unfortunately, NexentaStor could not see the drives when they were configured in AHCI mode on this motherboard. After changing the controller to IDE mode, the OS could see the drives, but after a reboot the system started throwing errors. So I could not use the otherwise amazing NexentaStor NAS OS.
As mentioned in my previous post, I like NAS4Free's support for adding plugins like UPnP without much trouble. However, configuring the ZIL or read cache is not a straightforward process. I went through their wiki and other blogs but could not find a way to assign the ZIL to a specific SSD using the GUI. There may be a command-line option, but I wanted to configure it through the GUI. So I chose FreeNAS 8.3.1-p2 as the NAS OS.

Configuring ZFS:
I am using this NAS to keep the test VMs that I run from bare-metal hypervisors. The purpose of my lab is to study the various evaluation/trial components released in the virtualization space, so I am not holding any business-critical data or production VMs. Losing these test VMs would not be a big issue; I would just have to spend some time recreating them. However, since I like the ZFS filesystem and its SSD-based caching, I configured the drives as given below. For simplicity and my own understanding, I used the respective drive IDs given by FreeNAS in the volume names.

4 x 1 TB in RAIDZ1 + 1 SSD as ZIL + 1 SSD as read cache -> vol0d2d3d6d7
Since vol0d2d3d6d7 has SSDs serving as ZIL and read cache, I am using this volume with NFS and creating smaller datasets on it.
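Under the hood, that layout maps to a single ZFS pool with a RAIDZ1 data vdev plus dedicated log (ZIL) and cache (L2ARC) devices. A rough sketch of the equivalent commands follows; the FreeNAS GUI builds all of this for you, and the ada* device names here are hypothetical (FreeNAS actually references gptid/ labels):

```shell
# Hypothetical FreeBSD device names: ada0-ada3 are the four 1TB drives,
# ada4 and ada5 are the two Kingston SSDs.
zpool create vol0d2d3d6d7 \
    raidz1 ada0 ada1 ada2 ada3 \
    log ada4 \
    cache ada5

# Smaller datasets carved out of the pool for the NFS shares.
zfs create vol0d2d3d6d7/vms

# 'zpool status' should list the raidz1 vdev plus logs and cache sections.
zpool status vol0d2d3d6d7
```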


2 x 2 TB as a mirror -> vol1d6d7
vol1d6d7 is configured with iSCSI-based sharing.
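As a quick sanity check on capacity, here is my own back-of-the-envelope arithmetic for the two pools (ignoring ZFS metadata overhead and TB vs TiB differences):

```shell
# RAIDZ1 spends one drive's worth of space on parity;
# a two-way mirror keeps a single usable copy.
nfs_pool=$(( (4 - 1) * 1 ))      # 4 x 1TB RAIDZ1 -> 3 TB usable
iscsi_pool=2                     # 2 x 2TB mirror -> 2 TB usable
raw_total=$(( 4 * 1 + 2 * 2 ))   # 8 TB raw, matching the post title

echo "$(( nfs_pool + iscsi_pool )) TB usable out of ${raw_total} TB raw"
```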

So far I am happy with the NAS that I built from scratch; it is helping me learn the ZFS-based filesystem as well as serving storage for the test VMs I create. I have not faced any slowness while running more than 15 VMs at the same time. I could nearly peak the gigabit link, and the network throughput I have seen during the tests is close to 700Mbps.
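To put the 700Mbps figure in perspective, a simple conversion (8 bits per byte; gigabit Ethernet tops out at 1000Mbps, i.e. 125MB/s):

```shell
observed_mbps=700
mbytes=$(( observed_mbps / 8 ))              # ~87 MB/s (integer maths)
percent=$(( observed_mbps * 100 / 1000 ))    # 70% of the gigabit line rate

echo "~${mbytes} MB/s, about ${percent}% of a gigabit link"
```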
Testing the NAS:
I have not done any ZFS tweaking or modified any of the components. I ran a performance test of the filer using NASPT and got the test result below from a VM running with 2GB of virtual memory.

I do not have any other benchmark results to compare this filer's performance against.

If better throughput can be achieved with desktop-based components, I will be happy to evaluate it. Please leave your comments below.

Test Result: