[UCI-Linux] More hardware vs software SATA RAID

Harry Mangalam hjm at tacgi.com
Mon Apr 11 22:21:26 PDT 2005


Here's some more info on the HW vs SW RAID thread on linux that I 
began a while ago.  The SW RAID was working fine, but we wanted to 
populate the entire server chassis with disks. The on-board 
controller will only handle 4 disks, so it was either buy a second 
4-disk controller for the other four or get one controller to do them 
all.   After much weighing of the options and consulting, I decided 
to get a 3ware 8506 8port controller (~$470).  It also comes in a 
12port version if you want to really jam disks in there.  Wired up 
to all 8 250GB disks with the over-long SATA cables supplied, it 
looks like a nest of crimson snakes.  

The 8506-8 is a REAL HW RAID controller (controller takes care of 
everything), and presents itself to the OS as one huge SCSI disk.  
Under the covers it's whatever you configured it to be in the BIOS 
setup, where you can configure it as one of several RAID types (I 
again chose RAID5) with different stripe sizes, and designate a hot 
spare if you want REAL non-stop operation (assuming you also have 
your disks in a hot-swappable cage).  

AFAIK, linux can't be booted off a RAID 5 array, which doesn't matter 
all that much as I have a separate IDE system disk for that.  It's 
MUCH easier to configure into the system tho - treat it as you would 
any single disk - enter it in /etc/fstab, make swap on it, etc.  No 
special treatment needed at all.
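
A minimal sketch of what that looks like, assuming the array shows up 
as /dev/sda and you want it mounted on /raid (names and filesystem 
type are just examples - adjust to taste):

  # after partitioning it (fdisk /dev/sda), make a filesystem on it
  mkfs -t ext3 /dev/sda1

  # and add a perfectly ordinary /etc/fstab entry:
  /dev/sda1    /raid    ext3    defaults    0  2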

You will have to use 3ware's 3dm configuration package to communicate 
with the RAID controller (tell it who to mail in the event of a 
failure, etc.).  There's also a command-line util from 3ware that I 
haven't tried yet, but other than that, it's very easy to manage .... 
so far.

Like the SW RAID, it takes a while to initialize while it zeros the 
array.  Unlike the SW version, it doesn't allow you to use the array 
or system while this is going on (it's a BIOS operation), so you have 
to wait a couple of hours while it zeros out the 2 TB.  NB: in 
neither case are you protected against data loss until the array 
finishes initializing.
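
With the SW version you can at least keep using the machine and watch 
the initialization/resync progress while it grinds away, e.g.:

  # shows per-array resync progress and an estimated time to completion
  cat /proc/mdstat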

Unlike with mdadm, there is limited interaction with the RAID while 
it's running.  Unless you layer LVM (or the equivalent) on top, you 
can't resize the array on the fly the way you can an mdadm-managed 
one.
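
A rough sketch of what that extra layer looks like, assuming the 
3ware array is /dev/sda1 and you're using LVM2 (names and sizes are 
made up):

  pvcreate /dev/sda1                 # make the array an LVM physical volume
  vgcreate raidvg /dev/sda1          # build a volume group on it
  lvcreate -L 500G -n data raidvg    # carve out a logical volume
  mkfs -t ext3 /dev/raidvg/data
  # later on, grow the LV and then the filesystem (unmount first unless
  # your tools support online ext3 resizing)
  lvextend -L +200G /dev/raidvg/data
  resize2fs /dev/raidvg/data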

The speed of the RAID seems to be about the same as the SW RAID - I'm 
getting 25-50 MB/s copying from the IDE disk to the RAID partition on 
GB-sized files.
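
Those numbers are from informal copies; a quick-and-dirty check along 
the same lines (paths are made up):

  # time a ~1 GB copy from the IDE disk onto the array
  time cp /home/bigfile.dat /raid/
  # or do a raw sequential write straight onto the array's filesystem
  time dd if=/dev/zero of=/raid/ddtest bs=1M count=1000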

the bonnie++ benchmark gives this:  
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
sand.ess.uci.edu 7G 25666  52 31441  12 13151   4 28368  58 67291  13 384.3   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                  4 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
sand.ess.uci.edu,7G,25666,52,31441,12,13151,4,28368,58,67291,13,384.3,0,4,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

(that's probably going to be munged into incomprehensibility by the 
mailer - email me for a non-munged version if interested)
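
For reference, a run like the one above is invoked roughly as follows 
(mount point and user are assumptions; -s is the test file size in MB 
and -n the number of small files in multiples of 1024):

  bonnie++ -d /raid -s 7168 -n 4 -u nobody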



Overall, this looks to be slightly /slower/ than the SW RAID, but I 
never tested the SW RAID under heavy load - theoretically, the SW 
RAID should degrade faster under load than the HW controller.

So the upshot, based on my limited experience:  Linux SW RAID is 
cheaper than dirt, works extremely well, is surprisingly fast, and is 
well supported in the kernel for a variety of disk controllers (both 
RAID and cheap non-RAID).  
However:
  It is non-trivial to set up, altho it's not as technical as 
compiling your own kernel.  
  It will require a different approach to mounting and admin than IDE 
disks - you'll almost certainly have to write your own mdadm init 
script to bring things up in the right order.  The one I previously 
posted could be used as a skeleton (a minimal sketch of the idea 
follows this list).
  Be prepared to spend a few hours with a browser sorting out which 
controllers currently have the best Linux support - the difference 
between non-RAID, fake RAID, and real RAID controllers is important.  
  SATA is the way to go if you're considering this for anything beyond 
a learning experience. The cables & connections are a huge 
improvement over PATA and the density of disks you can stick in a 
chassis is amazing.  By next yr, you'll probably be able to build a 
12TB array in a desktop chassis from parts at Frys for under $5k.
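
For those who missed the earlier post, here's a minimal sketch of the 
idea (device names, partitions, and mount point are assumptions, not 
my actual script):

  #!/bin/sh
  # assemble the SW RAID after the controller drivers load, then mount it
  case "$1" in
    start)
      mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
      mount /dev/md0 /raid
      ;;
    stop)
      umount /raid
      mdadm --stop /dev/md0
      ;;
    *)
      echo "Usage: $0 {start|stop}"
      ;;
  esac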

Hardware RAID is much more expensive, not much faster, much more 
convenient, possibly more robust under load, fractionally harder to 
configure while running, and (unlike mdadm) does not allow on-the-fly 
resizing without another layer of SW (LVM or equivalent).
  In my googling, 3ware had the best reputation in terms of ease of 
use and kernel support.  It was trivial to install (simpler to 
install and config than a SCSI card). It was braindead simple to 
connect the disks (altho I had a momentary scare when the SATA plugs 
pulled off the RAID cage when I was disconnecting the cables - 
they're just pushed onto the circuit board leads).

HTH

-- 
Cheers, Harry
Harry J Mangalam - 949 856 2847 (vox; email for fax) - hjm at tacgi.com 
            <<plain text preferred>>

