The purpose of this thread is to detail my Unraid server build. I will discuss what hardware components went into this build and why I chose Unraid as my OS of choice for this server. I hope this review helps others who may be looking at similar solutions.
Additional Photos: https://picasaweb.google.com/1073453...eat=directlink
(I might embed them later)
Introduction:
Years ago, when the MP3 file format was introduced, it changed the way people listened to music. People no longer needed to haul around huge stacks of CDs, and they had far more choice of what to listen to because an entire collection could be stored in a portable handheld device. This transition from physical to digital media also gave people more control over how their music was organized and played. Content could now be stored in a central location and streamed on demand to other devices.
I began ripping my collection of movies and TV shows for much the same reason. I liked the idea of having a main repository of digital content that could be accessed by other devices in the home. Once the data is ripped, you have more flexibility in how you watch your content: you can stream things to your iPad, another computer or a TV, with no commercials and no interruptions.
When I began archiving my digital media, hard drives were just beginning to hit the 1TB mark. I thought that much storage would be enough to last a couple of years. A bit later, high definition came onto the scene; picture quality once again took a giant step forward, and with it so did the storage required to archive an individual movie. I quickly outgrew my single 1TB drive, and at that point it made the most sense to purchase additional drives and simply shove them into my workstation. This was the most practical solution: there was no cost to adding storage beyond the price of the drive itself. It worked well for a short while and was fairly easy to manage with one or two drives, but as soon as you have six drives, each containing a multitude of different shares (Movies1, Movies2, etc.), this type of setup becomes a nightmare to manage.
Besides being difficult to manage, this type of setup has no safeguards in place to protect you from data loss: if one hard drive dies, you simply lose all the data on that drive. The need for data protection doesn't really hit you until you experience a drive failure and it's too late. That's what recently happened to me; I kept adding drives, and sooner or later one was destined to fail. Luckily I was able to recover most of the data. That drive failure was the last link in the chain of events that led me to look at more future-proof storage solutions.
Purpose of Build:
The main motivation behind this build was to come up with an easily scalable server that would serve as a central storage repository for all my digital data. I wanted to store all my movies, TV shows, photos and documents in a single location and have that data easily accessible to the other networked machines in the home. I also wanted a solution that provided fault tolerance, so that in the event of a drive failure I wouldn't have to worry about manually trying to restore data from a dying drive.
What is Unraid:
Unraid is an operating system based on Slackware Linux. There are three versions of the product that support different numbers of drives, similar to how Windows comes in Home, Premium and Ultimate flavors. Unraid is free for up to three drives; if you want to use additional drives, you have to pay for a license.
- Small: 3 drives, free
- Medium: 7 drives, $69
- Large: 24 drives, $119
When purchasing Unraid you can buy a USB key with Unraid pre-installed, or you can pay for a license and load Unraid onto a USB key of your choice.
Unraid runs off a USB key; there is nothing to install. Simply boot from the USB and access the GUI through a web interface. The key features of Unraid are drive pooling and parity protection, which guards against a single drive failure. Drive pooling refers to taking a collection of smaller drives and pooling their data, so that instead of dealing with a collection of smaller drives, the user appears to be dealing with one big drive. If you lose more than one drive at a time, you lose the data on those failed drives, but the rest of your data/array stays intact. Multiple simultaneous drive failures are unlikely, though they become more likely in systems with a greater number of disks.
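The parity mechanism is easy to picture with a toy example. The Python sketch below is my own illustration, not Unraid's actual code: a single dedicated parity block is the XOR of the corresponding blocks on every data drive, so any one lost drive can be rebuilt from the surviving drives plus parity.

```python
# Toy illustration of single-drive parity: the XOR principle behind a
# dedicated parity disk. Drive contents are simplified to short byte
# strings; real implementations work sector by sector.

def xor_bytes(blocks):
    """XOR corresponding bytes of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three "data drives" holding equal-sized blocks.
drives = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_bytes(drives)                    # what the parity drive stores

# Simulate losing drive 1: XOR the survivors with parity to rebuild it.
rebuilt = xor_bytes([drives[0], drives[2], parity])
assert rebuilt == b"BBBB"                     # the lost drive's data is back
```

This is also why losing two drives at once is fatal for those two drives (one equation, two unknowns), while the untouched drives still hold their own readable data.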
Before I settled on Unraid as my solution of choice I looked at WHS+FlexRAID, OpenMediaVault, Openfiler, SnapRAID, Amahi, FreeNAS, NAS4Free, napp-it, ZFS, etc. I never demoed these alternatives, but I did my fair share of research. Having looked at many of them, I don't think I can say that one solution is better than the next; it all depends on:
- What you are trying to achieve
- How much money you are willing to spend
- How much time you are prepared to invest getting things up and running
- The level of support you get when things go south
- Your level of technical expertise
- How much performance/redundancy is enough
Regardless of the solution you choose, there are bound to be trade-offs. Everyone's situation is a bit different: people have different budgets, technical expertise, and performance and fault tolerance needs. In my case, here are the features that were essential to me.
Must Haves
- Content is not striped among multiple drives
- Capability to easily expand the array
- Redundancy/Data Protection (Don't lose the whole array with multiple failures)
- Must support drive pooling/virtual shares
- Proven reputation (active community of people who can vouch for the product)
- Easily upgrade the existing drives in the array
Additional Benefits of Unraid
- Integrates with popular programs like SABnzbd, Sick Beard and CouchPotato via addons
- Small footprint (OS boots off a USB stick and is easy to upgrade)
- Power efficiency (drives that are not being used are spun down)
Potential Negatives
- Not a replacement for backups of critical data
- Maximum of one parity drive
- Write performance not as good as alternative solutions
- One developer/long release cycles
- Default user interface is lacking
- Costs money
- Some learning might be required (Linux commands etc)
Unraid had all the features I was looking for, and I personally didn't find the negatives to be a major detraction. It's also worth mentioning that the newest final release of Unraid is version 4.7. Version 5 has been in development for a long time; the beta has been out for over a year and provides some significant improvements. The majority of Unraid users, myself included, are running the version 5 beta. It has been very stable, and I have yet to have any issues with it.
Hardware:
Unraid supports a wide variety of hardware. Even so, I would suggest checking the Unraid compatibility page before purchasing parts for a new build.
Unraid Compatibility: http://lime-technology.com/wiki/inde..._Compatibility
Unraid Forums: http://lime-technology.com/forum/
Chances are that a motherboard not on Unraid's compatibility page would still work with Unraid. However, certain hardware configurations can result in odd performance issues. The benefit of choosing hardware that is verified to work with Unraid is the extra peace of mind: you know the hardware you purchased has been tested by another user and deemed to function without issues.
One of the nice things about building a server is that the hardware requirements are fairly modest. The latest and greatest hardware isn’t required so you can build your home server with a conservative budget and still have a very functional machine at the end of the day.
Some users prefer to run Unraid in a VM. The advantage of this approach is that it separates the raw data, which is stored on the Unraid VM, from all the different services/applications that interact with it. A virtualized setup also provides a lot of flexibility for those who like to tinker with different operating systems. The trade-off is that a virtualized setup typically involves greater complexity in the initial configuration and requires a beefier-specced system, which translates into higher costs.
One of my goals when building an Unraid server was to build something that met my current needs and was easily scalable. At first I was just going to go with bare-metal Unraid, but I wanted to plan out the build and purchase components that would allow me to run ESXi in the future, should I ever wish to pursue a virtualized setup.
I wanted hardware that supported ESXi right out of the gate, so I wouldn't have to worry about upgrading components to get additional functionality down the line. This meant spending more money on components during the initial build.
Parts List:
Case: Habey ESC 4242C
http://www.newegg.ca/Product/Product...82E16811321022
Motherboard: Supermicro MBD-X9SCM-F-O
http://www.newegg.ca/Product/Product...82E16813182253
CPU: Xeon E3-1230 (Sandy Bridge)
http://www.newegg.ca/Product/Product...16819115083CVF
RAM: Kingston 16GB (2x8GB) unbuffered ECC RAM
http://www.newegg.ca/Product/Product...82E16820139979
Power Supply: Corsair AX750
http://www.newegg.ca/Product/Product...6817139016&Tpk
SATA Controllers: 3x IBM M1015
(purchased on eBay at ~$100/card)
Cables: http://monoprice.com/products/search.asp?keyword=8189
Cooling: 3 x 120mm NF-P12 (http://www.memoryexpress.com/Products/MX34320)
2 x 80mm NF-R8 (http://www.memoryexpress.com/Products/MX34326)
Storage Configuration:
4 x 4TB Seagate Drives
1 x 4TB Hitachi Drive
2 x 3TB WD Red Drives
1 x 4TB Hitachi Drive (parity drive)
Total Storage: ~26 TB usable space (using 8 out of 24 drive bays)
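As a quick sanity check on the ~26 TB figure: with Unraid, usable capacity is simply the sum of the data drives, since parity lives on its own disk and that disk must be at least as large as the largest data drive. A back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the usable capacity above. In Unraid,
# the parity drive contributes no usable space; everything else does.

data_drives_tb = [4, 4, 4, 4, 4, 3, 3]    # 5 x 4TB data + 2 x 3TB WD Red
parity_tb = 4                             # 1 x 4TB Hitachi parity

usable_tb = sum(data_drives_tb)
assert parity_tb >= max(data_drives_tb)   # parity must cover the largest data disk

print(f"Usable: {usable_tb} TB across {len(data_drives_tb)} data drives")
# -> Usable: 26 TB across 7 data drives
```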
Prior to building this server I had a mix of 2TB and 4TB drives. I used the 4TB drives for this build and gave the older 2TB green drives to a family member.
The Case:
The Habey ESC-4242C is made by the same parent company that manufactures Norco rackmount cases, which have been popular with home users looking for rackmount solutions. This case has 24 hot-swappable bays, a 120mm fan wall for cooling, and a backplane that uses SFF-8088 connectors for easier cable management. In terms of functionality, I think this case has everything a home enthusiast would want in a scalable storage solution. The build quality leaves something to be desired, but that's understandable considering the case was never meant to compete with server-grade solutions that cost twice as much. The complaints I have are minor:
- The drive cages feel cheap
- It is difficult to remove some of the drive trays when they are empty based on the design of the ejection mechanism (described in greater detail in the full review of the case)
- No front-facing USB ports
Overall I am satisfied with the case, though so far I've only used it for a couple of months. Time will tell how reliable the backplane/electronics are.
The Motherboard:
One of the features I really wanted, especially for a server build, was IPMI (Intelligent Platform Management Interface). Prior to this build I had no clue what IPMI was, as I had dealt primarily with components designed for desktop use.
IPMI is a feature predominantly found on server motherboards that allows you to run a headless setup. Instead of having a monitor hooked up to the server, you can access the screen through a remote interface from another computer. I have seen other people rave about how they are never going to build a server without this feature again. Having completed my build, I can say that I made use of this feature and it came in very handy.
Being able to start up, shut down and administer your server remotely is a huge benefit. I opted for the Supermicro MBD-X9SCM-F-O because a lot of other people have used this motherboard for their Unraid/ESXi builds, so I knew it was a board that was tried and true. The motherboard has plenty of expansion slots for SATA controller cards and many positive reviews on sites like Newegg.
CPU:
The Xeon E3-1230 (Sandy Bridge) is an excellent quad-core server processor, similar to a desktop Core i7 equivalent. This CPU has plenty of cores/threads, which is important if you are considering running something like ESXi with multiple VMs.
Had I gone with a bare-metal Unraid setup, I would probably have purchased a cheaper Intel CPU like the G860, G2020 or a Core i3. These CPUs can be had for ~70 dollars, are fairly energy efficient, and have enough horsepower to convert media on the fly and deliver it to your mobile phone or tablet.
RAM:
The Supermicro motherboard only supports unbuffered memory. I went with Kingston 16GB (2x8GB) unbuffered ECC RAM because other people have used this RAM in their builds. I might add another 16GB in the future, depending on how much RAM is utilized once all my VMs are configured. For Unraid alone, 4GB should be more than enough, but with RAM being so cheap it makes more sense to get 8GB.
Power Supply:
Power supplies are one of the components people usually cheap out on, thinking all power supplies are created equal. I didn't want to make that mistake and then spend hours troubleshooting a failing power supply. I wanted a ~750-watt unit with enough power to support 24 drives. It was important that the power supply have a single 12V rail, and I also wanted something fully modular, so that if the power supply ever bit the dust I could just pull it out and install a new one with minimal rewiring. I went with the Corsair AX750 because it is manufactured by Seasonic, which is known to be a reliable brand, and it meets the criteria described above.
Sata Controller Cards:
The IBM M1015 is a popular expansion card among home server enthusiasts because of its flexibility, price and performance. The card can be flashed with different firmware to support various RAID configurations, or simply used as a plain SATA controller in a JBOD setup. These cards support 4TB drives and work well with different operating systems, including Linux, Windows and derivatives thereof. They typically sell for ~$100 on eBay, and each card lets you add 8 additional drives.
SSD:
Since I was going with an ESXi build, I wanted a fast OS drive. This drive stores the operating systems for the different VMs I plan to install. The most important thing to me was reliability. Many SSD reviews rely on benchmarks; in the real world, however, the differences in performance are often unnoticeable.
I decided to go with the Crucial M4 because it has been on the market for a few years and many people have used these drives in their builds. Had a Samsung or Intel drive been on sale at the time of my purchase, I might just as well have gone with one of those. In terms of performance, the only thing I really care about is access speed.
What is ESXi:
“VMware ESXi Server is computer virtualization software developed by VMware Inc. The ESXi Server is an advanced, smaller-footprint version of the VMware ESX Server, VMware's enterprise-level computer virtualization software product. Implemented within the VMware Infrastructure, ESXi can be used to facilitate centralized management for enterprise desktops and data center applications.”
If you have used VirtualBox or other virtualization software, ESXi will feel very familiar. Instead of running a virtualized operating system on top of a native operating system, ESXi passes hardware resources like RAM, CPU cores and adapter cards directly to the VMs.
You can get a free version of ESXi by registering an account at VMware's website:
http://communities.vmware.com/thread...art=0&tstart=0
There are some hardware limitations, but they shouldn't matter to the average user. The free license only supports a single physical CPU and is limited to a maximum of 32 gigabytes of RAM.
Why ESXi:
Virtualization is a good fit for people who like to tinker with different operating systems, or for those who currently run multiple servers, each servicing a particular need. Some people might have one computer that serves as a router/firewall and others acting as a file server, mail server, web server, etc. Virtualization provides an easier way to do all of these things on one machine; this in turn means less money spent on hardware, less money on the electric bill, and less time managing all that hardware.
What appealed to me about running things in a virtualized setup was the ability to separate my data from all the services/software that interact with it. That meant I could have a VM dedicated to running Unraid and then a separate Linux VM running programs like SABnzbd, CouchPotato, Subsonic, Plex, etc.
Planned VMs:
Unraid (main datastore for media)
Ubuntu Server (SABnzbd, Sick Beard, CouchPotato, Subsonic, etc.)
pfSense (firewall)
OS X (some software is only available on OS X)
Windows 7 (some software is only available on Windows)
Fedora/CentOS (Linux distro for certification study)
I might describe this in greater detail once everything has been configured. Currently I only have the Unraid storage server, Ubuntu Server and OS X VMs installed.
Usability:
While a certain degree of knowledge is required for any DIY server build, regardless of the software platform you are using, you do not have to be a Linux expert to set up an Unraid server.
Adding a disk to the array is very easy in Unraid. Prior to adding a disk, it is recommended that you run the preclear script. The preclear utility writes zeros to each sector of the disk; the idea is to catch faulty or failing disks before they are added to the array. The process takes ~40 hours for a 4TB drive, but in my mind that's not a big deal: you can install a couple of new drives, run the preclear utility, go to bed, and in a few sleeps be ready to add the drives to the array. I much prefer this approach, as it's easier to deal with a problem drive before it contains any data.
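That ~40-hour figure is roughly what you would expect from full-disk passes over the drive. The sketch below is a rough estimate only; the pass count (preclear also does a pre-read and a post-read verify around the zeroing pass) and the sustained speed are my assumptions, not measurements from this build.

```python
# Rough sanity check on the ~40-hour preclear time for a 4TB drive.
# Assumes 3 full-disk passes (pre-read, zeroing, post-read verify)
# at an assumed ~90 MB/s sustained; real drives vary.

def preclear_hours(size_tb, mb_per_sec, passes=3):
    """Estimate total preclear time for `passes` full-disk passes."""
    total_bytes = size_tb * 1e12 * passes
    seconds = total_bytes / (mb_per_sec * 1e6)
    return seconds / 3600

hours = preclear_hours(size_tb=4, mb_per_sec=90, passes=3)
print(f"~{hours:.0f} hours")
# -> ~37 hours, in the same ballpark as the ~40 hours observed
```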
My main computer runs Windows 7, and I didn't have to do anything special to access the data in Windows Explorer. By default Unraid has SMB sharing enabled, so the only thing you have to enter in Windows Explorer is the server's hostname or IP address.
Performance:
Because Unraid doesn't stripe data, its performance is limited by the read/write speed of the individual disk being read from or written to. There are advantages and disadvantages to striping. In a conventional striped RAID implementation you get fast read/write performance because your data is written to and read from multiple drives at the same time.
But in a striped RAID setup, if you lose more than the maximum allowable number of drives, you lose the contents of the entire array. With Unraid you don't have to worry about that. You can still lose the contents of more than one drive if you experience concurrent drive failures, but the rest of your array will be unaffected.
Unraid performs parity calculations in real time, which means that every time you add or modify a file on the array, the parity information has to be recalculated and written to the parity drive. This results in slower write speeds (~35MB/s), which is pretty slow when you consider you could be transferring terabytes of data to the array.
The slow write speed can be addressed in a couple of different ways. For the initial data transfer, first add the data drives to the server, then add the parity drive once you have transferred all of your initial data. The risk is that if a drive dies during the transfer period, it is unprotected and you lose the data on that drive. Without the parity drive I was getting transfer speeds of ~50-100MB/s over the network.
After you have transferred all of your data to the array and installed the parity drive, you are still capped by slow write speeds. This is where a cache drive comes in. A cache drive is a drive that exists outside the Unraid array. When transferring files or downloading content to the server, people use a cache drive to get better write performance: you achieve fast transfer speeds when time is of the essence, and the data is then offloaded from the cache drive to the array overnight, or whenever else the user finds preferable. Cron, the automatic job scheduler, takes care of this, so it isn't something the user has to be mindful of.
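The difference those speeds make over a large transfer is easy to quantify. Using the figures quoted above (~35MB/s parity-limited versus ~100MB/s without parity or via a cache drive), here is a quick comparison for a hypothetical 10 TB initial transfer:

```python
# Compare bulk-transfer times at the two write speeds quoted above:
# ~35 MB/s with real-time parity vs ~100 MB/s without parity / via cache.
# The 10 TB figure is a hypothetical example size.

def transfer_hours(data_tb, mb_per_sec):
    """Hours to move `data_tb` terabytes at a sustained MB/s rate."""
    return data_tb * 1e12 / (mb_per_sec * 1e6) / 3600

for label, speed in [("with parity", 35), ("cache / no parity", 100)]:
    print(f"10 TB {label}: {transfer_hours(10, speed):.0f} hours")
# -> 10 TB with parity: 79 hours
# -> 10 TB cache / no parity: 28 hours
```

Roughly three days versus a little over one, which is why deferring parity for the initial load (or using a cache drive afterwards) is worth the trouble.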
Performance outside of read/write speeds has been excellent. I haven't been running the system very long, but it has been rock solid. I have no issues transcoding or streaming the content on the Unraid server; so far things have been working without any hiccups.
User Interface:
One of the really nice things about Unraid is that it runs from a USB stick, so there is nothing that needs to be installed or configured to get it running on your system. The interface looks like something from circa 1998; it's not the prettiest thing to look at, but it's easy to use and gets the job done. It's possible to make the interface more user friendly through popular addons like "Simple Features" and "unMENU", which provide SMART information, email notifications and other important usage statistics.
Addons:
Unraid's functionality can be extended via free addons downloaded from the Unraid forums. These addons can improve the GUI or provide integration with popular apps like Plex, SABnzbd and CouchPotato.
Final Thoughts:
So far I've been very pleased with Unraid. I haven't had to mess around with configuration options or anything like that; things just work. When I access my content from Windows Explorer on my Windows 7 box, I can see all the files in my share(s), so I never have to wonder whether certain content is on another drive in another PC. Everything is central, organized and tidy. I haven't had any disk failures on the server, and that's good because most of the drives are fairly new, but I'm glad to know that when that day actually comes, Unraid will take care of it and I will never have to deal with the headaches of trying to recover data from a failing drive.
Even though I think Unraid is awesome, it might not be for everybody. If you're more comfortable in a Windows environment, WHS might be a better fit. If you want a prebuilt solution, check out some of the NAS boxes built by Synology. If you like the idea of Unraid but don't want to fork out the money, check out SnapRAID. All in all, Unraid gets two thumbs up from me; it's exactly the type of solution I was looking for. Yes, there are areas where the software could use improvement (a better interface, support for more than one parity drive), but even with these shortcomings I'm happy with my decision to purchase this product.