mushi_mushi's media server build/unraid review - Beyond.ca - Car Forums

Thread: mushi_mushi's media server build/unraid review

  1. #1
    Join Date
    Jan 2005
    Location
    Calgary
    Posts
    77
    Rep Power
    0

    Default mushi_mushi's media server build/unraid review

    The purpose of this thread is to detail my Unraid server build. I will discuss what hardware components went into this build and why I chose Unraid as the operating system for this server. I hope this review helps others who may be looking at similar solutions.



    Additional Photos: https://picasaweb.google.com/1073453...eat=directlink
    (I might embed them later)

    Introduction:

    Years ago, when the mp3 file format was introduced, it changed the way people listened to music. People no longer needed to haul huge stacks of CDs. Now people had a lot more choice of what to listen to because their entire collection could be stored in a portable handheld device. This transition from physical to digital media also gave people more control over how their music was organized and played. Content could now be stored in a central location and streamed on demand to other devices.

    I began ripping my collection of movies and TV shows for much the same reason. I liked the idea of having a main repository of digital content that could be accessed by other devices in the home. Once the data is ripped you have more flexibility in how you watch your content. You can stream things to your iPad, another computer or a TV, with no commercials and no interruptions.

    When I began archiving my digital media, hard drives were just beginning to hit the 1 TB mark. I thought that much storage would be enough to last a couple of years. A bit later high definition came onto the scene; picture quality once again took a giant step forward, and with it so did the storage requirements needed to archive an individual movie. I quickly outgrew my single 1TB drive, and at that time it made the most sense to purchase additional drives and simply shove them into my workstation. This solution was the most practical: there was no additional cost of adding storage other than the cost of the drive. It all worked well for a short while. This approach is fairly easy to manage with one or two drives, but as soon as you have 6 drives and each drive contains a multitude of different shares (Movies1, Movies2, etc.) it becomes a nightmare to manage.

    Besides being difficult to manage, this type of setup has no safeguards in place to protect you from data loss. If one hard drive dies you simply lose all the data on that drive. The need for data protection doesn't really hit you until you experience a drive failure and it's too late. Recently this is what happened to me: I kept adding drives, and sooner or later one was destined to fail. Luckily I was able to recover most of the data. The drive failure was the last thing in a chain of events that led me to look at more future-proof storage solutions.

    Purpose of Build:

    The main motivation behind this build was to come up with a server that would be easily scalable and would serve as a central storage repository for all my digital data. I wanted to store all my movies, TV shows, photos and documents in a single location and have that data easily accessible to the other networked machines in the home. I wanted a solution that provided fault tolerance, so that in the event of a drive failure I wouldn't have to worry about manually trying to restore data from a dying drive.


    What is Unraid:

    Unraid is an operating system based on Slackware Linux. There are 3 versions of the product that support different numbers of drives, similar to how Windows comes in Home, Premium and Ultimate flavors. Unraid is free for up to 3 drives; if you want to make use of additional drives you have to pay for a license.

    • Small: 3 drives, free
    • Medium: 7 drives, $69
    • Large: 24 drives, $119

    When purchasing Unraid you can buy a USB key with Unraid already installed, or you can pay for a license and load Unraid onto a USB key of your choice.

    Unraid runs off a USB key; there is nothing to install, simply boot from the USB and access the GUI through a web interface. The key features of Unraid are drive pooling and parity protection, which protects you from a single drive failure. Drive pooling refers to taking a collection of smaller drives and pooling their storage so that, to the user, they appear as one big drive rather than a collection of smaller ones. If you lose more than one drive at a time then you lose the data on those failed drives, but the rest of your data/array stays intact. You are unlikely to have multiple drive failures at the same time, but it becomes more likely in systems that use a greater number of disks.
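
    As a toy illustration of the parity idea (a made-up example, not Unraid's actual code): the parity drive stores the XOR of the corresponding data on every data drive, so any single missing drive can be rebuilt from the parity plus the surviving drives.

    # Toy sketch only: three "drives" each holding one byte of data.
    d1=$(( 0xA7 )); d2=$(( 0x3C )); d3=$(( 0x55 ))
    # The parity "drive" stores the XOR of all the data drives.
    parity=$(( d1 ^ d2 ^ d3 ))
    # If drive 2 dies, its contents can be rebuilt from parity and the survivors.
    rebuilt=$(( parity ^ d1 ^ d3 ))
    printf 'parity=0x%02X rebuilt d2=0x%02X (original was 0x%02X)\n' "$parity" "$rebuilt" "$d2"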

    Before I settled on Unraid as my solution of choice I looked at WHS+flexraid, Open Media Vault, openfiler, snapraid, amahi, FreeNAS, NAS4Free, napp-it, ZFS, etc. I never demoed these alternatives, but I did my fair share of research. Having looked at many of them, I don't think I can say that one solution is better than the next; it all depends on:
    • What you are trying to achieve
    • How much money you are willing to spend
    • How much time you are prepared to invest getting things up and running
    • The level of support you get when things go south
    • Your level of technical expertise
    • How much performance/redundancy is enough

    Regardless of the solution you choose there are bound to be trade-offs. Everyone’s situation is a bit different, people have different budgets, technical expertise, performance and fault tolerance needs. In my case here are some of the features which were essential to me.

    Must Haves
    • Content is not striped among multiple drives
    • Capability to easily expand the array
    • Redundancy/Data Protection (Don't lose the whole array with multiple failures)
    • Must support drive pooling/virtual shares
    • Proven reputation (active community of people who can vouch for the product)
    • Easily upgrade the existing drives in the array

    Additional Benefits of Unraid
    • Integrates with popular programs like sabnzb, sickbeard, couch potato via addons
    • Small footprint (OS boots off a USB stick and is easy to upgrade)
    • Power efficiency (drives that are not being used are spun down)

    Potential Negatives
    • Not for backing up critical data
    • Maximum of one parity drive
    • Write performance not as good as alternative solutions
    • One developer/long release cycles
    • Default user interface is lacking
    • Costs money
    • Some learning might be required (Linux commands etc)

    Unraid had all the features I was looking for, and I personally didn't find the negatives to be a major detraction. It's also important to mention that the newest final release of Unraid is version 4.7. Version 5 has been in development for a long time; the beta has been out for over a year and provides some significant improvements. The majority of Unraid users, including myself, are running the version 5 beta. It is very stable and I have yet to have any issues with it.

    Hardware:

    Unraid supports a wide variety of hardware, but even so I would suggest looking at the Unraid compatibility page before purchasing parts for a new build.

    Unraid Compatibility: http://lime-technology.com/wiki/inde..._Compatibility
    Unraid Forums: http://lime-technology.com/forum/

    Chances are that a motherboard that isn't on Unraid's compatibility page would still work with Unraid. However, certain hardware configurations can result in odd performance issues. The benefit of buying hardware that is verified to work with Unraid is the extra peace of mind: you know the hardware you purchased has been tested by another user and found to function without issues.

    One of the nice things about building a server is that the hardware requirements are fairly modest. The latest and greatest hardware isn’t required so you can build your home server with a conservative budget and still have a very functional machine at the end of the day.

    Some users prefer to run Unraid in a VM. The advantage of this approach is that it separates the raw data, which is stored on the Unraid VM, from all the different services/applications that interact with it. A virtualized setup also provides a lot of flexibility for those individuals who like to tinker with different operating systems. Typically, going with a virtualized setup involves greater complexity in the initial setup and requires a beefier-specced system, which translates into higher costs.

    One of my goals when building an Unraid server was to build something that met my current needs and was easily scalable. At first I was just going to go with bare-metal Unraid, but I wanted to plan out the build and purchase the appropriate components that would allow me to run ESXi in the future, had I ever wished to pursue a virtualized setup.

    I wanted to get hardware that supported ESXi right out of the gate, and not have to worry about upgrading components to get additional functionality sometime down the line. This meant spending more money on components during the initial build.

    Parts List:

    Case: Habey ESC 4242C
    http://www.newegg.ca/Product/Product...82E16811321022

    Motherboard: Supermicro MBD-X9SCM-F-O
    http://www.newegg.ca/Product/Product...82E16813182253

    CPU: Xeon 1230 Sandy Bridge
    http://www.newegg.ca/Product/Product...16819115083CVF

    RAM: Kingston 16gb (2x8gb) unbuffered ecc ram
    http://www.newegg.ca/Product/Product...82E16820139979

    Power Supply: Corsair AX750
    http://www.newegg.ca/Product/Product...6817139016&Tpk

    SATA Controllers: 3x IBM M1015
    (purchased on eBay at ~$100/card)

    Cables: http://monoprice.com/products/search.asp?keyword=8189

    Cooling: 3 x 120mm NF-P12 (http://www.memoryexpress.com/Products/MX34320)
    2 x 80mm NF-R8 (http://www.memoryexpress.com/Products/MX34326)

    Storage Configuration:

    4 x 4TB Seagate Drives
    1 x 4TB Hitachi Drive
    2 x 3TB WD Red Drives
    1 x 4TB Hitachi Drive (parity drive)

    Total Storage: ~26 TB Useable space (using 8 out of 24 drive bays)

    Prior to building this server I had a mix of 2TB and 4TB drives. I used the 4TB drives for this build and gave the older 2TB green drives to a family member.

    The Case:

    The Habey ESC 4242C is made by the same parent company that manufactures the Norco rackmount cases, which have been popular with home users looking for rackmount solutions. This case has 24 hot-swappable bays, a 120 mm fan wall for cooling and a backplane that utilizes the 8088 connector for easier cable management. In terms of functionality I think this case has all the features a home enthusiast is looking for in a scalable storage solution. The build quality leaves something to be desired, but that's understandable considering the case was never meant to compete with server-grade solutions that cost twice as much. The complaints I have are minor:
    • The drive cages feel cheap
    • It is difficult to remove some of the drive trays when they are empty based on the design of the ejection mechanism (described in greater detail in the full review of the case)
    • No front facing usb slots

    Overall I am satisfied with the case; so far I’ve only used it for a couple of months. Time will tell how reliable the backplane/electronics are.


    The Motherboard:

    One of the features that I really wanted to have, especially for a server build was IPMI (Intelligent Platform Management Interface). Prior to my build I had no clue what IPMI was as I had dealt primarily with components that were designed for desktop use.

    IPMI is a feature that is predominantly found on server motherboards and allows you to run a headless setup. Instead of having a monitor hooked up to the server, you can access the screen via a remote interface from another computer. I have seen other people rave about how they are never going to build a server without this feature again. Having completed my build, I can say that I made use of this feature and it came in very handy.

    Being able to start up, shut down and administer your server remotely is a huge benefit. I opted for the Supermicro MBD-X9SCM-F-O because a lot of other people have used this motherboard for their Unraid/ESXi builds, so I knew it was a board that was tried and true. The motherboard has plenty of expansion slots for SATA controller cards and many positive reviews on sites like Newegg.

    CPU:

    The Xeon 1230 (Sandy Bridge) is an excellent quad-core server processor, roughly the server equivalent of a desktop Core i7. This CPU has plenty of cores/threads, which is important if you are considering running something like ESXi with multiple VMs.

    Had I gone with a bare-metal Unraid setup I would probably have purchased a cheaper Intel CPU like the G860/G2020/Core i3. These CPUs can be had for ~$70, are fairly energy efficient and have enough horsepower to convert media on the fly and deliver it to your mobile phone or tablet.

    RAM:

    The Supermicro motherboard only supports unbuffered memory. I went with Kingston 16GB (2x8GB) unbuffered ECC RAM because other people have used this RAM in their builds. I might add another 16GB in the future, depending on how much RAM is utilized by the system once all my VMs are configured. 4GB should be more than enough memory to run Unraid alone, but with RAM being so cheap it makes more sense to get 8GB.

    Power Supply:

    Power supplies are one of the components that people usually cheap out on, thinking that all power supplies are created equal. I didn't want to make that mistake and then spend hours troubleshooting a failing power supply. I required a ~750 watt power supply because I wanted something with enough power to support 24 drives. It is important that the power supply have a single 12V rail. I also wanted something fully modular, so that if one day the power supply did bite the dust I could just pull it out and install a new one with minimal rewiring. I went with the Corsair AX750 because it is manufactured by Seasonic, which is known to be a reliable brand, and it meets the criteria described above.
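
    To put a rough number on the 24-drive requirement (a back-of-the-envelope figure, assuming roughly 2 A at 12 V per 3.5" drive during spin-up):

    # Rough spin-up budget, assuming ~2 A at 12 V per 3.5" drive:
    echo "$((24 * 2)) A on the 12 V rail, or about $((24 * 2 * 12)) W, just to spin up 24 drives"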

    Sata Controller Cards:

    The IBM M1015 is a popular expansion card choice among home server enthusiasts because of its flexibility, price and performance. This card can be flashed with different firmware to support various RAID configurations, or used as a plain controller (HBA) in a JBOD setup. These cards support 4TB drives and work well with different operating systems, including Linux and Windows and derivatives thereof. The cards typically retail for ~$100 on eBay, and each card gives you the ability to add 8 additional drives.

    SSD

    Since I was going with an ESXi build I wanted a fast OS drive. This OS drive would store the operating systems for the different VMs I plan to install. The most important thing to me was reliability. Many SSD reviews rely on benchmarks; in the real world, however, the differences in performance are often unnoticeable.

    I decided to go with the Crucial M4 because it's been on the market for a few years and many people have used these drives in their builds. Had a Samsung or Intel drive been on sale at the time of my purchase I might just as well have gone with one of them. In terms of performance the only thing I really care about is access speed.

    What is ESXI:

    “VMware ESXi Server is computer virtualization software developed by VMware Inc. The ESXi Server is an advanced, smaller-footprint version of the VMware ESX Server, VMware's enterprise-level computer virtualization software product. Implemented within the VMware Infrastructure, ESXi can be used to facilitate centralized management for enterprise desktops and data center applications.”

    If you have used VirtualBox or other virtualization software, ESXi will feel very familiar. Instead of running virtual machines on top of a native operating system, ESXi runs directly on the hardware and can pass resources like RAM, CPU cores and adapter cards straight to the VMs.
    You can get a free version of ESXi by registering an account at VMware's website:
    http://communities.vmware.com/thread...art=0&tstart=0

    There are some hardware limitations, which shouldn't matter to the average user. The free license only supports a single physical CPU and is limited to a maximum of 32 gigabytes of RAM.

    Why ESXI:

    Virtualization is a good fit for people who like to tinker with different operating systems, or for those who currently run multiple servers, each servicing a particular need. Some individuals might have a separate computer that serves as a router/firewall, another that acts as a file server, a mail server, a web server, etc. Virtualization provides an easier way to do all of these things on one machine; this in turn means less money spent on hardware, a smaller electric bill and less time managing all that hardware.

    What appealed to me about running things in a virtualized setup was the ability to separate my data from all the services/software that interact with it. That meant I could have a VM dedicated to running Unraid and then a separate Linux VM that would run programs like sabnzb, couch potato, subsonic, plex, etc.

    Planned VM’s:

    Unraid (main datastore for media)
    Ubuntu Server (sabnzb, sickbeard, couch potato, subsonic etc)
    Pfsense (firewall)
    osx (some software is only available on OSX)
    windows 7 (some software is only available on Windows)
    fedora/centos (Linux Distro for studying for certification)

    I might describe this in greater detail when everything has been configured. Currently I only have Unraid configured as my storage server, plus the Ubuntu Linux server and OS X VMs installed.

    Usability:

    While a certain degree of knowledge is required for any DIY server build, regardless of the software platform you are using, you do not have to be a Linux expert to set up an Unraid server.

    Adding a disk to the array is very easy in Unraid. Prior to adding a disk it is recommended that you run the preclear script, which writes zeros to each sector of the disk. The idea behind this utility is to catch faulty/failing disks before they are added to the array. The process takes ~40 hours for a 4TB drive, but in my mind it's not a big deal: you can install a couple of new drives, run the preclear utility, go to bed and in a few sleeps be ready to add the drives to the array. I much prefer this approach, as if there is something wrong with a drive it's easier to deal with the problem before the drive contains any data.
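
    For reference, this is roughly how the community preclear script is run from the Unraid console (a rough sketch; /dev/sdX is a placeholder for whatever letter your new drive gets, and the script path depends on where you copied it):

    # Run the preclear script inside screen so the ~40 hour job survives a dropped session.
    screen -S preclear
    # then, inside the screen session:
    ./preclear_disk.sh /dev/sdX
    # detach with Ctrl-A d, re-attach later with: screen -r preclear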

    My main computer runs Windows 7 and I didn't have to do anything special to access the data in Windows Explorer. Unraid's network shares are enabled by default, so the only thing you have to enter in Windows Explorer is the server's hostname or IP address to access the data.

    Performance:

    Because Unraid doesn't stripe data, its performance is limited to the read/write speed of the individual disk being read from or written to. There are advantages and disadvantages to striping data. In a conventional striped RAID implementation you get fast read/write performance because your data is written to and read from multiple drives at the same time.

    In a striped RAID setup, if you lose more than the maximum allowable number of drives you lose the contents of the entire array. With Unraid you don't have to worry about such things. You can still lose the content on more than one drive if you experience concurrent drive failures, but the rest of your array will be unaffected.

    Unraid performs parity calculations in real time: every time you add or modify a file on the array, the parity information has to be recalculated and written to the parity drive. This results in slower write speeds (~35 MB/s), which is pretty slow when you consider you could be transferring TBs of data to the array.

    This issue of slow write times can be addressed in a couple of different ways. When doing the initial data transfer, first add the data drives to the server and then add the parity drive once you have transferred all of your initial data. Doing this, you take the risk that if a drive on the server dies during the transfer period it is unprotected and you lose the data on that drive. Without the parity drive I was getting transfer speeds of ~50-100 MB/s over the network.

    After you have transferred all of your data to the array and installed the parity drive, you are still capped by slow write times. This is where a cache drive comes in. A cache drive is a drive that exists outside the Unraid array. When transferring files or downloading content to the server, people use a cache drive to get better write performance: you achieve faster transfer times when time is of the essence, and the data is then offloaded from the cache drive to the array overnight or whenever else the user prefers. Cron, the automatic job scheduler, takes care of this so it isn't something the user has to be mindful of.
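
    Unraid's cache feature handles this with its own scheduled "mover", but the idea can be sketched with a plain cron + rsync job (illustrative only; the share paths and schedule below are made up for the example):

    # Illustrative root crontab entry (crontab -e), not Unraid's bundled mover script:
    # at 3:40 AM every night, move completed files from the cache disk onto a data disk in the array.
    40 3 * * * rsync -a --remove-source-files /mnt/cache/Movies/ /mnt/disk1/Movies/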

    Performance outside of read/write speeds has been excellent. I haven't been running the system very long, but it has been rock solid. I have no issues transcoding/streaming the content on the Unraid server; so far things have been working without any hiccups.


    User Interface:

    One of the really nice things about Unraid is that it runs from a USB stick, so there is nothing that needs to be installed or configured to get Unraid to run on your system. The interface looks like something from circa 1998; it's not the prettiest thing to look at, but it's easy to use and gets the job done. It's possible to make the interface more user-friendly through popular addons like "Simple Features" and "unmenu". These addons provide SMART information, email notifications and other useful usage statistics.


    Addons:

    Unraid's functionality is extendable via free add-ons, which can be downloaded from the Unraid forums. These addons can improve the GUI or provide integration with popular apps like plex, sabnzb and couch potato.


    Final Thoughts:

    So far I've been very pleased with Unraid. I haven't had to mess around with any configuration options or anything like that; things just work. When I access my content from Windows Explorer on my Windows 7 box, I can see all the files in my share(s), so I never have to wonder if certain content is on another drive in another PC. Everything is central, organized and tidy. I haven't had any disk failures on the server, and that's good because most of the drives are fairly new, but I'm glad to know that when that day actually comes Unraid will take care of it and I will never have to deal with the headaches of trying to recover data from a failing drive.

    Even though I think Unraid is awesome, it might not be for everybody. If you're more comfortable in a Windows environment, WHS might be a better fit. If you want a prebuilt solution, go check out some of the NAS boxes built by Synology. If you like the idea of Unraid but don't want to fork out the money, check out SnapRAID. All in all, Unraid gets two thumbs up from me; it's exactly the type of solution I was looking for. Yes, there are several areas where the software could use improvement (better interface, support for more parity drives), but even with these shortcomings I'm happy with my decision to purchase this product.
    Last edited by mushi_mushi; 07-07-2013 at 09:04 PM.

  2. #2
    Join Date
    Jan 2005
    Location
    Calgary
    Posts
    77
    Rep Power
    0

    Default

    reserved

  3. #3
    Join Date
    Mar 2004
    Location
    Calgary AB
    My Ride
    2020 Subaru Forester Sport
    Posts
    2,982
    Rep Power
    42

    Default

    Interesting read.

    You mentioned running ESXi and then Unraid in a VM. How does Unraid interact with the actual drives in this setup? Last I checked ESXi didn't support drive passthrough, unless you are using DirectPath I/O for your RAID controller? Just curious how this all works from that perspective; that would prevent the one RAID controller from being used for anything other than the Unraid VM. Is that what you did? Trying to get my head around that part of it.

    Other comments from the whole read:

    1. Good on the IPMI mobo, I have a supermicro 1U server that I use for my Checkpoint firewall at home and having that connectivity is sure nice.

    2. ESXi is a good choice obviously. I don't know if Hyper-V has moved further in this or not, but last I checked you could run Hyper-V hosts on top of ESXi if you wanted to play with Hyper-V and learn it, but you couldn't run ESXi as a virtual host on top of Hyper-V. I run 2 x ESXi 5.1 hosts at home myself with shared storage.

    3. The SSD comment for ESXi OS is a little misplaced. You don't need an SSD for ESXi itself, but it seems like you are sharing the drive for the ESXi install + local datastore, so it makes sense that you want to get the best IO out of it. I run ESXi off USB at home as my datastores are shared.

    4. Don't know how Unraid works, is the cache drive for reads and writes, or separate? If it's for both, depending on how it functions, it may kill your SSD rather fast. I don't have any write cache on my setup, but I do have read cache from consumer SSDs. I heard the write cycles of write cache can kill consumer SSDs fast and cause corruption.

    5. You hit the power supply component right on the nose. Not too many people know this single-rail information. Many power supplies advertise multiple rails like it's a good thing. When running a ton of drives from a power supply, you want to have consistent voltage across all of them. Aside from Corsair, PC Power and Cooling also does single rail and that is what I use in my setup.

    Also, I'll throw this out there: Server 2012 has storage pooling as well nowadays, with the same concept of mixed drives and all that as Unraid, for those who prefer to stick with a Windows solution.

    I have a Norco 20 bay at home myself, half filled with 10 2TB Samsung Spinpoint F4 drives. My server is dedicated just for storage and has 32GB of ram as most of the ram is used for cache, 2 x 128GB Corsair SSDs for read cache, and a simple Pentium Dual core G630 Sandy Bridge CPU (yes they still make Pentium....if you need something lower than an i3 hehe). I also have an Areca 1280ML raid card that I use just to get my 24 sata connections. I run Nexenta at home which is based on Solaris and using ZFS with RAIDZ2 for my setup. The free community edition of Nexenta is limited to 18TB so I can't grow much until they bump that up, but still got nearly 6TB of free space so should last me for some time still. Primary reason why I run Nexenta is because it supports my QLogic HBAs and I can present block storage over Fiber, which almost none of these other solutions (like FreeNAS etc) support. I run both Block storage and my CIFS and NFS shares all from the same server. I use 2 dual port Fiber HBAs in my storage server and then a single dual port HBA in each of my ESX hosts and just do a direct fiber connection from my storage to my hosts, one from each HBA on the storage server. I am just throwing all my info out there for those who may want to run a similar setup as well, as it may fill your needs beyond what Unraid is able to offer. If it is just for storing data, I think Unraid is great, but I have never used it myself.

    Anyhoo, great to see other people getting into this and realizing the value. I feel like my current setup can last me for many years to come. Nexenta has previously bumped their limits for the community edition, so I'm hoping that they do it again in a few years so I can expand my array as it gets filled simply by throwing more disks into it, as buying Nexenta is not an option (multiple thousands...), or I could try other Solaris-based alternatives. And I am sure, just like Unraid, it's a web interface to manage everything.

  4. #4
    Join Date
    Apr 2012
    Location
    west side
    Posts
    342
    Rep Power
    13

    Default

    Great thread. I just have a DNS-323 with a couple of 2 TB drives in a RAID configuration, but I imagine I will upgrade to something like this one day.

  5. #5
    Join Date
    Jan 2005
    Location
    Calgary
    Posts
    77
    Rep Power
    0

    Default

    Originally posted by eblend
    Interesting read.

    You mentioned running ESXi and then Unraid in the VM. How does Unraid interact with the actual drives in this setup? Last I checked ESXi didn't support drive paththrough, unless your are using DirectPathIO for your raid controller?
    Yes, I'm using hardware passthrough. For hardware passthrough to work you have to have compatible hardware; the Supermicro motherboard supports this. The controller cards that I'm using had to be flashed with specific firmware to enable passthrough.

    I left some of these details out of the review because there are threads on other forums that go into much greater detail on how to do these things.

    More details about it here: http://lime-technology.com/forum/ind...?topic=14695.0
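
    As a quick sanity check (a rough illustration, not from the linked thread): once the controller is passed through with DirectPath I/O it shows up as an ordinary PCI device inside the guest.

    # Inside the Unraid VM, the passed-through M1015 (LSI SAS2008 based) should appear
    # as a normal PCI device, and its disks as plain /dev/sdX devices:
    lspci | grep -i lsi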


    1. Good on the IPMI mobo, I have a supermicro 1U server that I use for my Checkpoint firewall at home and having that connectivity is sure nice.
    I'm loving IPMI; it's a very useful feature. I know lots of people have monitors and keyboards connected to their servers, but I had nothing, so when things went wrong it would take 10 minutes just to connect everything to get a glimpse of what's going on during the boot process.


    3. The SSD comment for ESXi OS is a little misplaced. Don't need SSD for ESXi itself, but seems like you are sharing the drive for ESXi install + local datastore so that all makes sense that you want to get the best IO out of it. I run ESXi off USB at home as my datastores are shared.
    That's exactly what I meant. I boot ESXi off a USB as well and then have a bunch of different VMs boot from the SSD.

    4. Don't know how Unraid works, is the cache drive for reads and writes, or separate? If its for both, depending on how it functions, it may kill your SSD rather fast. I don't have any write cache on my setup, but I do have read cache from consumer SSDs. I heard the write cycles of write cache can kill consumer SSDs fast and cause corruption.
    The concept of a cache drive in Unraid is to keep your downloads/to-be-transferred data outside of the array. Typically you would do this because writing to the array is slow (~35MB/sec), so some people opt to write to a cache drive instead and then set up a cron job to copy the contents of the cache drive to the array overnight. Unraid doesn't require that you use a cache drive, but the option is there for people who want this sort of functionality.

    Also will throw this out there, but Server 2012 has storage pooling as well nowadays with the same concept as of mixed drives and all that as Unraid for those who prefer to stick with a Windows solution.
    I'm not against Windows-based solutions; in fact I set up a WHS v1 for a family member, which I will be upgrading to WHS 2011. Doesn't Windows Server 2012 Essentials cost ~$450? I'm thinking of using WHS with FlexRAID (http://www.flexraid.com/), which provides very similar functionality to Unraid but is also compatible with Windows systems.



    I run Nexenta at home which is based on Solaris and using ZFS with RAIDZ2 for my setup. The free community edition of Nexenta is limited to 18TB so I can't grow much until they bump that up, but still got nearly 6TB of free space so should last me for some time still.
    I've read lots of good things about ZFS. I really liked the reliability factor, but the main reason I decided against it was the striping of data and storage scalability. I can't remember exactly how things work, but if I recall, things are organized into vdevs, and in order to expand a vdev you have to replace all the drives in that vdev, or create a new vdev, which requires a minimum of 3 drives. Sounds like you have a pretty kick-ass setup. This is a very good thread (http://hardforum.com/showthread.php?t=1393939) that showcases many different types of server setups. You've probably seen it, as ZFS is a very popular choice on that forum.
    Last edited by mushi_mushi; 07-07-2013 at 09:16 PM.

  6. #6
    Join Date
    Mar 2004
    Location
    Calgary AB
    My Ride
    2020 Subaru Forester Sport
    Posts
    2,982
    Rep Power
    42

    Default

    Originally posted by mushi_mushi


    Im not against windows based solutions, in fact I set up a WHS V1 for a family member which I will be upgrading to WHS 2011. Doesnt Windows Server 2012 Essentials cost ~450. Im thinking of using WHS with flexraid/raid t (http://www.flexraid.com/)which provides very similar functionality to unraid but its also compatible with windows systems.
    I actually have no clue how much Windows Server costs; I would think more than $450. I have an MSDN sub from work so I get all that for free, so it's a viable option for some; for others it's way too expensive.

  7. #7
    Join Date
    Mar 2008
    Location
    Calgary
    My Ride
    Busa
    Posts
    404
    Rep Power
    17

    Default

    Well... you picked good hardware. Nothing beats the price and performance of the Supermicro boards with LSI SAS controllers. Hardware-wise I run an almost identical setup.

    But that's pretty much the only thing I like about your build. UnRaid is a ridiculously poor choice and it's a total waste of a really nice setup. It's made for budget builds where you don't have the luxury of matching drives or nice PCI-E SAS controllers. The whole idea behind UnRaid when they first started was JBOD+parity and as a result the performance is just shameful. If you were going to go with UnRaid from the beginning you could have saved yourself a ton of money on the build and gone with cheaper hardware.

    Instead you should run a Linux distro of your choosing and go with a nice RAID 6 setup. Depending on how you configure the array, and your knowledge level, you could easily see 5x the performance out of a RAID 6 array. You mention that not having the data striped was a "must have", but that makes no sense to me with your setup; what's the reasoning behind that?

    Maybe it's just me, but I wouldn't be running a bunch of VM's on my data tank. If a bare metal solution exists, why limit yourself to a VM and lose all of that performance to overhead? If you need a CentOS install for school you should probably run that from a VM on your workstation anyway. Or at the very least run the Linux RAID OS as your host OS and then run whatever VM's you need under that knowing full well you're going to lose a lot of performance due to reduced cache size.

    And lastly I would double the RAM. Nothing makes a NAS system snappy like having 32gb of RAM to use as write cache. RAM is cheap and a fraction of what you've invested into the rest of the hardware.

  8. #8
    Join Date
    May 2004
    Location
    Calgary
    My Ride
    Sentra
    Posts
    1,490
    Rep Power
    22

    Default

    ^^ It depends what you are using Unraid for. I also use Unraid, mind you on cheap P4 budget hardware, but its only purpose is media storage. I stream from it, so the performance is more than enough. Why limit yourself with RAID 6 and not be able to upgrade the storage pool size one disk at a time?

    It depends if you need the performance.

  9. #9
    Join Date
    Mar 2004
    Location
    Calgary AB
    My Ride
    2020 Subaru Forester Sport
    Posts
    2,982
    Rep Power
    42

    Default

    Originally posted by GoChris
    Why limit yourself with raid 6 and not being able to upgrade the storage pool size one disk at a time?

    It depends if you need the performance.
    Although I don't agree with UndrgroundRider, you can upgrade raid 6 with just one disk at a time if you have a decent controller, or using ZFS for example.

    As for the rest of the post by UndrgroundRider, I think you and the OP have very different needs, much like myself. If you are streaming movies and saving your music to a central location, and you already have a bunch of variable-size disks, Unraid can surely be used. Running it in a VM isn't an issue either, as he said he doesn't need the performance. It is an all-in-one box that he is using for all his stuff, and if you are limited on space or don't need anything more, why not just mix it all on one box and go with it? It's for his home setup, so I don't see an issue with that. With ESXi you get something like 96% of bare-metal performance, so I would also disagree with the overhead you speak of.

  10. #10
    Join Date
    Mar 2008
    Location
    Calgary
    My Ride
    Busa
    Posts
    404
    Rep Power
    17

    Default

    Originally posted by GoChris
    Why limit yourself with raid 6 and not being able to upgrade the storage pool size one disk at a time?
    Not sure what you're talking about; that's actually really straightforward to do with RAID 5/6 under Linux.

  11. #11
    Join Date
    May 2004
    Location
    Calgary
    My Ride
    Sentra
    Posts
    1,490
    Rep Power
    22

    Default

    Originally posted by UndrgroundRider


    Not sure what you're talking about, that's actually really straight forward to do with RAID 5/6 under Linux.
    Well, obviously not talking about anything I'm informed about, ha ha.

  12. #12
    Join Date
    Feb 2007
    Location
    Calgary
    Posts
    220
    Rep Power
    0

    Default

    Originally posted by eblend


    I actually have no clue how much Windows Server costs, I would think more than $450. I have an MSDN sub from work so I get all that for free, so it's a viable option for some, for other's its way to expensive.
    Depending on what your needs are, Win8 is viable for a home server; I am going to upgrade my HP MediaSmart eventually.

  13. #13
    Join Date
    Mar 2008
    Location
    Calgary
    My Ride
    Busa
    Posts
    404
    Rep Power
    17

    Default

    Originally posted by eblend
    As for the rest of the post by UndrgroundRider, I think you and the OP have very different needs, much like myself.

    ...

    With ESXi you get like 96% bare metal performance, so the overhead you speak of I would also disagree with.
    I'm not saying it's the end of the world; it's just a shame to have the perfect hardware for a high-performance RAID array and to squander it running Mickey Mouse UnRaid (for virtually no benefit). Also, let's not forget that UnRaid doesn't support dual parity, which is a must (IMO) for arrays that large. I would be pretty damn nervous rebuilding a 26TB array without dual parity.

    This is what his setup is capable of (if not more):
    # dd if=/dev/md0 of=/dev/null bs=10M count=102
    102+0 records in
    102+0 records out
    1069547520 bytes (1.1 GB) copied, 1.52643 s, 701 MB/s
    Instead of that, he'll likely see 1/5th of that and only because he is using the wrong software.

    On the topic of VM performance loss I was referring to reduced resources incurred from actually running the VM's. As you point out, the overhead loss strictly due to virtualization is marginal.

  14. #14
    Join Date
    Jan 2005
    Location
    Calgary
    Posts
    77
    Rep Power
    0

    Default

    ^^

    I agree that a RAID setup will have much better performance, but for a media server this is not something that's very important to me. Most of the data on a media server doesn't change very often; it's write once, read many times. I agree with everything you said in terms of the shortcomings of a RAID 4/Unraid solution; however, striped RAID implementations have their own potential drawbacks:

    • You can't expand your RAID array with the same level of flexibility.
    • If you lose more than the max number of allowable drives, your whole array is toast.
    • Because the data is striped, it's not readable outside the array.

    With Unraid I never have to worry about losing the entire array. If my controller or motherboard dies I can stick my hard drives in any machine and the data will still be readable. Because Unraid runs from a USB stick, you never have to worry about dealing with a failing OS drive.
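
    To illustrate (a rough sketch; the device and mount point are placeholders): each data disk on the 4.7/5.x versions discussed here carries an ordinary ReiserFS filesystem, so a single drive pulled from the array can be mounted read-only on any Linux box.

    # Mount one Unraid data disk on any Linux machine (partition name is a placeholder):
    mkdir -p /mnt/rescue
    mount -t reiserfs -o ro /dev/sdb1 /mnt/rescue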

    In an ideal world I would have a local backup of my content, but with larger data sets the cost of doing this becomes prohibitive. Currently I only have 7 data drives and one parity drive; for the time being it's sufficient, but like you mentioned, for arrays that consist of more drives it would be nice to have more parity. Had I gone with a RAID setup I probably would have done something similar to eblend's and gone the ZFS route.

  15. #15
    Join Date
    Mar 2008
    Location
    Calgary
    My Ride
    Busa
    Posts
    404
    Rep Power
    17

    Default

    Originally posted by mushi_mushi With unraid I never have to worry about losing the entire array. If my controller or motherboard dies I can stick my hard drives in any machines and the data will still be readable. Because unraid runs from a usb you never have to worry about dealing with a failing os drive.
    A lot of the things you list are not drawbacks of RAID or advantages specific to UnRaid. After all, UnRaid is just Linux with a modified software RAID driver. When you say "unraid runs from a usb," you actually mean Linux runs from a USB. Or when you say "I can stick my hard drives in any machines and the data will still be readable," that's a feature of using a software-based RAID stack. You can do the same thing with Linux software RAID. If your motherboard or controller ever dies, you can put the drives into any system and boot up a Linux live USB stick.
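
    For example, from any Linux live environment (a rough sketch; device names are placeholders):

    # Re-assemble a Linux software RAID set on a different machine / live USB.
    mdadm --assemble --scan      # scan all disks for md superblocks and assemble the array(s)
    cat /proc/mdstat             # confirm the array came up (e.g. /dev/md0)
    mount /dev/md0 /mnt          # then mount the filesystem as usual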

    Originally posted by mushi_mushi
    You can’t expand your raid array with the same level of flexibility.
    If you lose more than the max number of allowable drives your whole array is toast
    Because the data is striped it’s not readable outside the array
    I think you might be surprised at how flexible you can be with RAID 5/6. My RAID file system dates back 10+ years to when I was running an external SCSI Ultra3 data tank. Since that time I haven't reformatted once. I went from 4x 80GB SCSI drives in RAID 5, to 5x 500GB IDE drives in RAID 5, to 7x 500GB drives in RAID 6, to 7x 1TB SATA drives, and finally 7x 2TB drives. Same filesystem.

    Originally posted by mushi_mushi
    In an ideal world I would have a local backup of my content, but with larger data sets the costs to do this becomes prohibitive. Currently I only have 7 data drives and one parity drive, for the time being its sufficient but like you mentioned for arrays that consists of more drives it would be nice to be able to have greater parity. Had I gone with a raid setup I probably would have done a similar setup to eblends and gone the ZFS route.
    In general terms I like to have two parity drives once I hit the 7-drive mark, and I never exceed 10 drives in an array. The scary thing is that the bigger the drives get, the more you need that second drive. Bigger drives mean longer rebuild times. Rebuilding a 4TB drive with UnRaid is going to take a LONG TIME. Even with modern processors people report that 5-6MB/s is the standard for UnRaid. That's going to take 7.7 days, and to be perfectly fair, those are the optimistic numbers. Do a search on the unraid forums; lots of people are reporting rebuilds taking a week and a half for a 2TB drive. The window for a failure of a second drive is huge. Not to mention the fact that the failed drive was probably bought at the same time as the rest of the drives, and possibly even from the same batch. If you were storing more than just media I would be very concerned.
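
    Checking that math (treating 4 TB as 4,000,000 MB at the quoted ~6 MB/s):

    awk 'BEGIN { printf "%.1f days\n", 4e6 / 6 / 86400 }'   # prints 7.7 days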

    Instead let's look at a smarter solution. Let's say you like the idea of two parity drives, but you don't want to lose a full drive's worth of space from your array. With Linux software RAID you can partition each drive, say 90% low-redundancy and 10% high-redundancy. Then you make two arrays using the partitions, one as a RAID 5 and the other as a RAID 6. You can merge the resulting file systems using any number of techniques to suit your storage needs (e.g. UnionFS with extension masks to automatically put documents and small files on the RAID 6 array). Or there's nothing wrong with the old-fashioned way of mounting the high-redundancy array as a folder under the low-redundancy array.
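
    A rough sketch of what that split layout could look like with mdadm (all device and partition names are illustrative, and the partitions are assumed to have been created beforehand):

    # Each of 7 disks is pre-partitioned into a large "low redundancy" partition
    # (sdb1..sdh1, ~90%) and a small "high redundancy" partition (sdb2..sdh2, ~10%).
    mdadm --create /dev/md0 --level=5 --raid-devices=7 /dev/sd[b-h]1   # bulk media, single parity
    mdadm --create /dev/md1 --level=6 --raid-devices=7 /dev/sd[b-h]2   # documents, dual parity
    mkfs.ext4 /dev/md0 && mkfs.ext4 /dev/md1
    mkdir -p /mnt/tank && mount /dev/md0 /mnt/tank
    mkdir -p /mnt/tank/important && mount /dev/md1 /mnt/tank/important   # high-redundancy array mounted as a folder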

    ZFS is another great option. Its built-in data integrity features are pretty awesome. Although the one thing I don't like about ZFS is that the RAID is built into the filesystem. This prevents using other neat things like the device mapper, or DRBD for high availability.
    Last edited by UndrgroundRider; 07-08-2013 at 12:27 PM.

  16. #16
    Join Date
    Mar 2004
    Location
    Calgary
    My Ride
    Eagle Talon
    Posts
    857
    Rep Power
    21

    Default

    Interesting read, and a nice project

  17. #17
    Join Date
    May 2004
    Location
    Calgary
    My Ride
    Sentra
    Posts
    1,490
    Rep Power
    22

    Default

    Originally posted by UndrgroundRider


    I think you might be surprised at how flexible you can be with RAID 5/6. My RAID file system dates back over 10+ years to when I was running an external SCSI Ultra3 data tank. Since that time I haven't reformatted once. I went from 4x 80GB SCSI drives in RAID 5 to 5x 500GB IDE drives in RAID 5 to 7x 500GB drives in RAID 6, to 7x 1TB SATA drives, and finally 7x 2TB drives. Same filesystem.
    Did you have to upgrade all disks at once? If not, then wtf am I using unraid for lol. But that's a nice feature of unraid and flexraid, mismatched disks. It's much cheaper to only have to buy one disk when out of space and either add it to the array, or upgrade the smallest one.

  18. #18
    Join Date
    Sep 2008
    Location
    Edmonton, AB
    Posts
    259
    Rep Power
    16

    Default

    Cool build!

    Originally posted by eblend
    I actually have no clue how much Windows Server costs, I would think more than $450. I have an MSDN sub from work so I get all that for free, so it's a viable option for some, for other's its way to expensive.
    WSE2012 is the replacement for SBS2011 and it also has some nice features as a replacement for WHS v1 users but the price is a major turn off.

    I just setup WSE2012 in Hyper V a few weeks ago (I get MS student software from NAIT) and have been playing around with it.

    I wanted to try it out because I read that it will do the nightly backups a lot faster than WHS 2011 and WHS. My WHS backups started to get really slow... it drove me crazy (sometimes 30 mins each, x6 computers). WHS 2011 has been better, but it is still slow. WSE2012 seems to be about twice as quick as WHS 2011 after the initial backup.

    I'll have to update the old WHS thread with my new setup and not this thread!

    Originally posted by WhippWhapp
    Depending on what your needs are, Win8 is viable for a home server- I am going to upgrade my HP Mediasmart eventually.
    StableBit DrivePool seems to be more popular than using Storage Spaces, because they still seem to have a few problems (quirks) like the original Drive Extender in WHS. I haven't tried it yet myself but I might get around to playing with it.

  19. #19
    Join Date
    Mar 2008
    Location
    Calgary
    My Ride
    Busa
    Posts
    404
    Rep Power
    17

    Default

    Originally posted by GoChris


    Did you have to upgrade all disks at once? If not, then wtf am I using unraid for lol. But that's a nice feature of unraid and flexraid, mismatched disks. It's much cheaper to only have to buy one disk when out of space and either add it to the array, or upgrade the smallest one.
    You can easily add as many disks as you want at any time; the RAID can grow to fill the additional disks. The main drawback is mismatched disks: if you buy bigger disks, the RAID array won't fill them entirely. Although I will point out that UnRaid has the same limitation, in that the parity drive has to be the biggest drive in the array. If you add a 6TB drive to five 4TB drives, you're still going to have 2TB of unused space.
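
    For anyone curious what "growing" looks like in practice, a rough sketch with mdadm (device names are placeholders):

    # Grow an existing 7-disk md RAID 6 by one new disk.
    mdadm --add /dev/md0 /dev/sdi1            # add the new disk as a spare
    mdadm --grow /dev/md0 --raid-devices=8    # reshape the array onto it (runs in the background)
    cat /proc/mdstat                          # watch the reshape progress
    resize2fs /dev/md0                        # finally grow the ext3/ext4 filesystem to fill the new space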

    In my case I usually do upgrade all of the drives at once, because I wait until there's a big jump in storage technology. It doesn't make sense to mix and match an 80GB with a 500GB drive. Although when I switched to the 2TB drives I did that over time: I had a couple of 1TB drives fail and it was most cost-effective to just throw in a 2TB drive. At the time I wasn't utilizing the whole array so I was in no rush to add more capacity. Once all of the 1TB drives had been swapped out with 2TB drives I grew the array.

    So basically in the end the trade off is:


    Linux software RAID
    - Rock solid stability and extremely mature code base
    - Increased redundancy with dual parity
    - Huge performance gains (x1 for each drive in the array - minus some overhead)
    - Provides a block device, not a file system, so it can be used with LVM, DRBD, device mapper

    vs

    UnRaid
    - Ability to mix and match drives smaller than the parity drive

    And IMO the ability to mix and match drives is often overvalued, especially if you're starting with a set of matched drives. Take Mushi's build for example: by the time he fills that 26TB array (if ever) we're going to have 12TB drives. At that point his drives are going to be old and he'll want to upgrade them all anyway.

  20. #20
    Join Date
    Feb 2004
    Location
    Calgary
    My Ride
    '03 ep3
    Posts
    613
    Rep Power
    21

    Default

    Eblend,

    What was your reasoning for incorporating a fiber HBA into your setup?

    I am running a 16-drive RAID 50 on a 16-core Hyper-V host, and performance to my VMs isn't that bad, with burst reads of 200+ MB/s.

    In regards to rebuild times, my choice of dual RocketRAID 2320s was because they rebuild an 18TB array in less than 24 hours.

    I find it interesting how people's architecture of a storage solution varies from person to person.
    Last edited by SmAcKpOo; 07-09-2013 at 04:43 AM.
