Sunday, 21 July 2013

FreeNAS with 8TB Drives for home lab

As explained in my previous blog post, the N40L-based NAS is good for serving an HTPC or a small lab environment. However, I wanted to build a NAS with more disk spindles and room to add drives later. My requirements were:
  • The motherboard should support 6 or more SATA ports (a pre-Haswell requirement)
  • The motherboard and processor should not be too costly, and power requirements should be low.
Haswell CPUs and motherboards had not been released at that time. These parts were purchased between April and June 2013, which is why some of their costs are lower or higher than their market prices today.

Building a Haswell-based NAS is not possible even today, unless you are happy to put in a Haswell i5, which would be overkill for a NAS workload. We have to wait until Intel releases its Haswell Pentium/i3 processors, which is expected to happen somewhere around September.

I had to look for a CPU and board that could support the above requirements. After a bit of surfing on the web, I found that the configuration below would be in line with my requirements.

  • Motherboard: Gigabyte F2A85XM-D3H = 89$
  • CPU: AMD A6-5400K 3.8 GHz = 64$
  • Memory: Kingston 8GB DDR3-1333 x 1 = 62$
  • Memory: Kingston 4GB DDR3-1333 x 1 - used an existing one I had as a spare
  • Case: Cooler Master HAF 912 = 78$
  • Power Supply: Thermaltake 450W - used an existing one I had as a spare
  • HDD: WD 4 x 1 TB Red - taken out of the N40L
  • HDD: WD 2 x 1 TB Red, 64MB cache, 7200 RPM = 112$ each
  • SSD: Kingston 2 x 60GB = 62$ each
  • Additional NIC: 1 x Intel 1000 GT PCI adapter = 14$ (bought from eBay)
  • Boot drive: SanDisk Cruzer Blade 8 GB stick = 8$
Motherboard selection: 
This motherboard has 8 x SATA 3 (6Gbps) connectors and supports up to 4 x 8 GB of memory. It takes AMD FM2-socket APUs and has 1 x PCI, 1 x PCIe x4, 1 x PCIe x16, and 2 x fan connectors.
The full specification can be found here

CPU Selection:
I could have installed the A4-5300 CPU, which costs 10$ less than the A6-5400K and would have been sufficient for the NAS OS load. However, I may re-purpose this system as a bare-metal hypervisor or as my desktop system at a later stage, so I went ahead and bought the A6-5400K processor. This CPU is capable of overclocking, which is what the "K" denotes, but I don't have any need for that, so it is running without overclocking enabled.

Case Selection:
My requirements for the case were:
  • 2 x 120mm fans in the front to provide airflow to the hard drives installed in the 3.5" brackets
  • At least one exhaust fan at the rear.
  • Support for 10 or more hard drive bays.
I checked cases from various manufacturers that could fit the above requirements. However, they were either too costly for the specification or failed to meet some of the requirements, such as the number of drive bays supported or having only a single 120mm front fan. I did not want to install a "real filer" kind of case, which may require rack mounts and similar things I do not have in my lab.

I like Cooler Master's simple but sturdy cases. I chose the HAF 912, which is my 3rd Cooler Master case. :)

The HAF 912 supports 2 x 2.5" SSDs/HDDs, 6 x 3.5" drives, and 4 x 5.25" bays, for a total of 12 drive bays. By adding a 4-in-3 device module, one can easily fit another 4 drives, plus 1 more HDD using a 5.25" to 3.5" converter. That makes this case support 13 hard drives in total.

This case supports 2 x 120mm fans in the front, 2 in the top, 1 in the side panel, and 1 at the rear: 6 x 120mm fans in total. The full specification can be found here

Note: This case has a lot of airflow, as the "High Air Flow" in its name suggests. However, the lack of dust filters could be an issue in dusty environments. My house is not that dusty, so I am happy with this case.

Hardware Assembly:
Installation of the motherboard, CPU, power supply, and hard drives went like a breeze. Since this case has a lot of room inside, installing the components is very easy.
I installed the 2 Kingston SSDs in the bottom 2.5" bays and the 6 x 3.5" drives in their respective bay areas.
I installed 2 x 120mm fans in the front as intakes, plus 1 x 120mm at the rear and 1 x 120mm in the top as exhausts.

I first tried to install NexentaStor Community Edition, which is an excellent NAS OS. The Community Edition supports up to 18TB and should not be used in production environments. This OS has a colourful GUI with dashboards, plus ZFS configuration wizards. Unfortunately, NexentaStor could not see the drives configured in AHCI mode on this motherboard. After switching the controller to IDE mode, the OS could see the drives, but the system started throwing errors after a reboot. So I could not use the otherwise amazing NexentaStor NAS OS.
As mentioned in my previous post, I like NAS4Free's support for adding plugins like UPnP without much trouble. However, configuring a ZIL or read cache is not a straightforward process. I went through their wiki and other blogs but could not find a way to assign the ZIL to a specific SSD using the GUI. There may be a command-line option, but I wanted to configure it through the GUI. So I chose FreeNAS 8.3.1-p2 as the NAS OS.

Configuring ZFS:
I am using this NAS to keep the test VMs that I run from bare-metal hypervisors. The purpose of my lab is to study the various evaluation/trial components released in the virtualization space, so I do not keep any business-critical data or production VMs on it. Losing these test VMs would not be a big issue; I would just have to spend some time recreating them. However, since I like the ZFS file system and SSD-based caching, I configured the drives as given below. For simplicity and my own understanding, I used the respective drive IDs given by FreeNAS in the volume names.

4 x 1 TB in RAIDZ1 + 1 SSD as ZIL + 1 SSD as read cache -> vol0d2d3d6d7
Since vol0d2d3d6d7 has SSDs for its ZIL and read cache, I am using this volume with NFS and creating smaller datasets on it.

2 x 2 TB as a mirror -> vol1d6d7
vol1d6d7 is configured for iSCSI-based sharing.
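For reference, the layout above can be sketched as the zpool commands FreeNAS runs under the hood. This is only a sketch: FreeNAS does all of this through its GUI volume manager, and the device names (ada0..ada5 for the spinning drives, da0/da1 for the SSDs) are placeholders, not the actual names on my box.

```shell
# RAIDZ1 pool from the four 1 TB drives (device names are placeholders)
zpool create vol0d2d3d6d7 raidz1 ada0 ada1 ada2 ada3

# dedicate one SSD as the ZIL (separate log) and one as the L2ARC read cache
zpool add vol0d2d3d6d7 log da0
zpool add vol0d2d3d6d7 cache da1

# mirrored pool from the two remaining drives, used for iSCSI
zpool create vol1d6d7 mirror ada4 ada5
```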

So far I am happy with the NAS I have built from scratch; it is helping me learn the ZFS-based file system as well as serving storage for the test VMs I create. I have not faced any slowness while running more than 15 VMs at the same time. I could peak the gigabit link, and the network throughput I have seen during the tests is close to 700Mbps.
Testing the NAS:
I have not done any ZFS tweaking or modification to any of the components. I did a performance test of the filer using NASPT and got the test result below from a VM running with 2GB of virtual memory.

I do not have any other benchmark results to compare this filer's performance against.

If there is better throughput to be had with desktop-based components, I will be happy to evaluate it. Please leave your comments below.

Test Result:

Saturday, 4 May 2013

Home NAS build DIY using HP N40L

I am running a home lab with 3 systems which I use for exploring new technologies. I was looking for a NAS server that could be used as shared storage, and was debating whether to buy a ready-made NAS appliance or assemble one myself.
I saw a deal for the HP N40L which included a 250GB drive plus postage for 219$, which was really a good deal. I had heard about this MicroServer, which is decent hardware for a NAS-style setup. The HP N54L had just been launched at that time, hence the price difference of more than 80$. So I decided to buy known hardware that had been tested by lots of people for their NAS builds, and ordered the HP N40L. I was very excited about this new addition to my lab.

I made the following changes before turning the box into a NAS server.

Hardware Preparation: 
  • Removed the stock 2GB memory + 250GB hard drive that shipped with the server
  • Added 3 x 500GB WD drives in HDD slots 1, 2 and 3
  • Added 1 x 1TB WD Red drive in HDD slot 4
  • Added 2 x 4GB Transcend DDR3 memory
  • Added an Intel Gigabit CT Desktop Adapter
  • Using a 4GB SanDisk Cruzer as the NAS OS boot drive
Since the onboard NIC does not support jumbo frames, we need a PCIe NIC that supports jumbo frames, WOL, etc. Intel NICs are also known for their reliability and open-source driver support, so this card is worth the money. The NIC comes with both standard-profile and low-profile brackets; I removed the standard-profile bracket and replaced it with the low-profile one to fit inside this tiny server.

I used the onboard NIC for management purposes, to log on to this server's web interface. The Intel NIC was configured with a jumbo-frame MTU, WOL, etc. and was used for iSCSI mapping and wireless UPnP streaming.
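On a FreeBSD-based NAS OS, the jumbo-frame setting boils down to something like the following. The interface name em0 is a placeholder for whatever name the Intel NIC gets; in practice the NAS web GUI writes the equivalent setting for you.

```shell
# enable jumbo frames on the Intel NIC (interface name is a placeholder)
ifconfig em0 mtu 9000 up

# make the setting persist across reboots
echo 'ifconfig_em0="mtu 9000"' >> /etc/rc.conf
```

Note that the switch and the hypervisor's NIC must also be set to an MTU of 9000, or large frames will be dropped along the path.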

OS Preparation:
I was using Openfiler when I started the home lab. However, there had been no progress in their releases after 2.99 (the base distribution it was forked from is no longer available, etc.), so I was looking for an alternative open-source product. I learnt about FreeNAS and NAS4Free. These FreeBSD variants support ZFS as the file system, and I wanted to give that a try. Though I do not keep any business-critical data on the drives, I would like better IO and throughput, so I decided to go with the ZFS approach.

NAS4Free is a really good product which is very simple to implement and comes with multiple add-ons like UPnP, NFS, iSCSI, AFP, etc. So I installed it on a USB stick as explained here. Importing/adding new drives and creating new ZFS volumes, datasets, etc. is very simple with this NAS OS.

I was really happy to use this as my NAS plus UPnP server, connecting my iPad to listen to music, do wireless movie streaming, and so on.
While observing the server's performance using the limited options available within NAS4Free, I found that CPU and memory utilization were not very high. From a processing perspective, the AMD processor in this server is good enough for a NAS OS workload, yet utilization was still low. One reason behind this is iSCSI: with iSCSI, the file system lives on the initiator side, so much of the CPU and memory work is handled by the hypervisor connected to the iSCSI target, not by the NAS OS running on our magic NAS server.

So I thought of taking full advantage of the N40L's CPU and memory by configuring the server with NFS. I know iSCSI was not performing badly enough to force this change; however, improvisation and curiosity are two traits that keep us motivated to learn new things and technologies.

I created a couple of NFS shares from the drives added above. NAS4Free requires datasets to be created in order to share them over NFS. I did that and started testing file transfer rates between the iSCSI and NFS mounts by transferring a 3 GB ISO file. I was already aware that iSCSI would beat NFS in this kind of custom-built NAS environment, but the results were really shocking. I have not used any sophisticated IO tools to quantify the difference (something I plan to explore one of these weekends); I used a simple transfer with a stopwatch next to me.
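The stopwatch method above can also be scripted with the shell's time builtin. A minimal sketch, where the ISO path and the two mount points are placeholders for the iSCSI- and NFS-backed datastores:

```shell
# time the same 3 GB ISO copy onto each mount (paths are placeholders)
time cp ~/isos/test.iso /mnt/iscsi-datastore/
time cp ~/isos/test.iso /mnt/nfs-share/
```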

I did not apply any performance tuning or modification to the NAS4Free installation. It was installed on a 4 GB USB stick, and autotune was enabled while configuring the host name, IP address, etc.

The iSCSI-based transfer took approximately 1:50 minutes to move the 3GB of data. The NFS-based transfer took more than 10 minutes!!! That is more than 5 times slower than iSCSI.
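In throughput terms, the gap from those stopwatch figures works out roughly as follows (taking 3 GB as 3072 MB and using integer arithmetic):

```shell
# rough throughput from the stopwatch figures (3 GB ~= 3072 MB)
iscsi_secs=110   # 1 min 50 s
nfs_secs=600     # ~10 min
echo "iSCSI: ~$(( 3072 / iscsi_secs )) MB/s"   # ~27 MB/s
echo "NFS:   ~$(( 3072 / nfs_secs )) MB/s"     # ~5 MB/s
```

Around 27 MB/s is a believable number for gigabit iSCSI on untuned hardware, which makes the NFS figure look all the more wrong.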

To be continued...

Friday, 3 May 2013

Sharing my Experiences

I am starting my first blog, which I have been planning for quite some time... quite some years, in fact.

Here I will be sharing my everyday experiences related to technology, readings, photography, etc.

Stay tuned.