
Software Fusion and RAID alternative

Status
Not open for further replies.
Joined: Jul 29, 2011 · Messages: 3
Motherboard: Dell Latitude 7390 2-in-1 · CPU: i5-8350U · Graphics: UHD 620
Since it hasn't been discussed much, I figured I'd mention an alternative to software-based "fusion" drive and RAID approaches. The HighPoint Rocket 640L (not the RocketRAID 640L) is a four-port PCIe SATA III card (PCIe 2.0 x2, so roughly 1000 MB/s maximum throughput) that is natively supported by ML (and UniBeast). It's a good option for those who would rather spend $70 than run a slightly non-standard build. The card is based on the Marvell 88SE9230 controller, which is actually more capable than the 88SE9235 on the RocketRAID 640L:

http://www.marvell.com/storage/system-solutions/assets/Marvell-88SE92xx-Product-Brief.pdf

It supports RAID 0, 1, and 10 as well as "HyperDuo", a Fusion Drive-style feature that dynamically moves frequently accessed content to an SSD. All of this is transparent to the OS, so there's no need for dd'ing or Chameleon plist edits. I've been running on the card for a week (using RAID 0, not HyperDuo) and everything seems stable.

A few issues I ran into in the course of my build:

* Firmware updates to the card destroy your RAID configuration. HighPoint doesn't mention that anywhere, and it ruined my day the first time through.

* The Gigabyte GA-Z77X-UD5H requires beta BIOS F15m for the card to boot in AHCI mode. I banged my head against the wall for days until I figured that out!

* I've disabled the on-board Marvell controllers, as they seemed to slow down boot-up, but that may have been an unrelated issue.

* You can find an updated BIOS (2.3.0.1043) for the controller on-line here: http://www.station-drivers.com/page/marvell.htm

Remember, flashing the card will kill your RAID configuration (and, therefore, your installation). The update didn't seem to affect performance, and it also required that I dd boot1h, which the stock BIOS does not. I did the flash, but I don't see any real reason to unless, like me, you think bigger version numbers are "better".
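For reference, the boot1h step mentioned above is a single dd of the stage-1 boot code onto the boot slice. The sketch below runs against a scratch file so it's safe to try anywhere; the real source path (/usr/standalone/i386/boot1h) and target device (e.g. /dev/rdisk0s2) are shown in a comment and will differ per system.

```shell
# Sketch of the boot1h step, using a scratch image instead of a real device.
# On an actual install the command is (device name is an example):
#   sudo dd if=/usr/standalone/i386/boot1h of=/dev/rdisk0s2
dd if=/dev/zero of=scratch.img bs=512 count=2 2>/dev/null   # stand-in "slice"
printf 'fake-boot1h-code' > boot1h.bin                      # stand-in boot file
dd if=boot1h.bin of=scratch.img conv=notrunc 2>/dev/null    # copy without truncating
head -c 16 scratch.img                                      # prints: fake-boot1h-code
rm -f scratch.img boot1h.bin
```

Note the conv=notrunc: without it, dd would truncate the target to the size of the boot file, which on a real slice would be catastrophic.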

Once you are up and running, the controller shows up as unknown. To get it recognized, replace /System/Library/Extensions/AppleAHCIPort.kext/Contents/Info.plist with the one attached. I got that from here:

http://benjenq.pixnet.net/blog/post/30988001-黑蘋果啓用硬體raid磁碟陣列

As long as you install the third-party SATA option in MultiBeast, there should be no need for the other change he shows (making the attached drives appear as internal).
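For the curious, the edit in that attached Info.plist boils down to adding the Marvell controller's PCI ID to one of AppleAHCIPort.kext's IOKit personalities. Below is a minimal sketch of that idea in Python. The 0x1b4b vendor / 0x9230 device ID is the 88SE9230's, but the personality names here ("GenericAHCI", "Marvell 88SE9230") are made up for illustration, and the exact match keys Apple uses vary by OS version, so treat this as a sketch, not a drop-in tool; it writes to a standalone file, not the live kext.

```python
import plistlib

# Illustrative only: build a tiny stand-in for AppleAHCIPort.kext's
# Info.plist rather than touching the real one under /System.
info = {
    "IOKitPersonalities": {
        "GenericAHCI": {
            "CFBundleIdentifier": "com.apple.driver.AppleAHCIPort",
            "IOClass": "AppleAHCI",
            "IOPCIClassMatch": "0x01060100",   # match any AHCI-class device
        }
    }
}

# Add a personality that matches the Marvell 88SE9230 by PCI ID
# (device 0x9230, vendor 0x1b4b -> "0x92301b4b").
marvell = dict(info["IOKitPersonalities"]["GenericAHCI"])
marvell.pop("IOPCIClassMatch", None)
marvell["IOPCIPrimaryMatch"] = "0x92301b4b"
info["IOKitPersonalities"]["Marvell 88SE9230"] = marvell

with open("Info-patched.plist", "wb") as f:
    plistlib.dump(info, f)

# Read it back to confirm the round trip.
with open("Info-patched.plist", "rb") as f:
    patched = plistlib.load(f)
print(patched["IOKitPersonalities"]["Marvell 88SE9230"]["IOPCIPrimaryMatch"])
# -> 0x92301b4b
```

On the real system you'd also need to repair permissions and rebuild the kext cache after swapping the file in, and a system update can silently revert the edit.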

The Syba SD-PEX40054 is based on the same controller, adds two eSATA ports, and goes for $50. I haven't tried it, but it may work. There is a sketchy review on Amazon recommending the Rocket 640L instead, but it could be a good science project for someone.

Hope this is useful!
 

Attachments

  • Info.plist
    7.4 KB · Views: 895
dlaporte - I am interested to learn your RAID performance numbers, and which drives are you using? If you have the Blackmagic Design Disk Speed Test, please share the read and write speeds you're seeing.

I have been using OS X software RAID 0 and RAID 10 for some time with good performance and no reliability issues.
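For anyone who wants a rough number without Blackmagic handy, a crude sequential-write figure can be had from dd. This is a sketch: the file path and size are arbitrary, and writes can land in the page cache, so the number may read high; Blackmagic's sustained figures are more trustworthy.

```shell
# Crude sequential-write test: write 64 MB of zeros to a scratch file and
# let dd report elapsed time and throughput on stderr. Not a substitute
# for Blackmagic, but enough for a ballpark comparison.
TESTFILE=./ddtest.bin            # put this on the volume you want to test
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=64 2>&1 | tail -1
rm -f "$TESTFILE"
```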

Thank you,
neil
 

Attachments

  • blackmagic.png
    152.7 KB · Views: 914
dlaporte - Well, you have my attention. I have put a card on order and I'm looking forward to a new experience. I have one machine handy here with a four-hard-drive RAID 0 that I will benchmark and then convert, to see what there is to see.

thank you,
neil
 
Just a quick update: I purchased the HighPoint Rocket 640L 4-port 6 Gb/s SATA PCI Express 2.0 host adapter. I initially tried to bring it up on an i7-3770K ASUS P8Z77-M Pro system. The card worked out of the box as a SATA controller, and I was able to swing four SATA cables over to it and boot the four-HDD OS X software RAID 0 set. However, I tried just about everything to get into the card BIOS and run the setup to configure the drives as a hardware RAID. I gave up and pulled the card out of that system. Of interest: the four HDDs in the software RAID do about 380 MB/s.

I then tried the card on my "Lab 2" system, an i7-920 with 6GB of memory on a GA-EX58-UD3R motherboard. This system has some slow 80GB hard drives, two of which were set up in an OS X software RAID 0. So I updated the RAID to 10.8.2 and benchmarked it (about 110 MB/s read and write). I was pleased that I was able to configure the Rocket 640L card on this system (I could get into the setup using the Ctrl-M key sequence). A few passes to learn by doing, and I had my two drives configured. I then used Disk Utility to partition and format them as a single virtual drive.

Net net: after cloning a 10.8.2 system over and rebooting to the new RAID 0, I found that the hardware RAID and software RAID performed about the same.

I even hacked the Info.plist (AppleAHCIPort.kext) to get the Marvell controller displayed in System Profiler.

So the card works on this machine. However, this is not very interesting, and I need access to some SATA III SSDs. Maybe more later if I can find some.

neil
 
Folks,

Sorry to dampen things a bit here, but before you plan to use SSDs in a RAID set, I would suggest having a look at this article first. It seems serious:
http://silvertonconsulting.com/blog/2011/05/19/potential-data-loss-using-ssd-raid-groups/

Humm... food for thought. However, that article is a year and a half old, and I have not seen an outcry from the "enterprise" or from hobbyists.

When there is a real reliability issue, the "sky is falling" alarm is soon sounded, accompanied by an increasingly steep ramp in RMA activity.

Admittedly, I do not have any great experience with SSD RAID 0 sets (only one off-line system with a boot SSD RAID 0 set), and I would be interested to learn if anyone is experiencing failures due to write endurance.

neil
 
The article suggests that the rebuild is the problem. In the case of RAID 0, if you lose one SSD, you lose the entire array. There is no rebuild, because there's nothing left to rebuild with :)
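That "lose one, lose all" property can be put in numbers: an n-drive stripe survives only if every drive survives, so even a small per-drive failure rate compounds. A quick sketch, using a hypothetical 3% annual per-drive failure rate (not a figure from the article):

```python
# Rough illustration with hypothetical numbers: if each drive independently
# has probability p of failing within a year, an n-drive RAID 0 array
# fails with probability 1 - (1 - p)**n.
def raid0_annual_failure_prob(p: float, n: int) -> float:
    """Probability that at least one of n independent drives fails."""
    return 1 - (1 - p) ** n

for n in (1, 2, 4):
    print(n, round(raid0_annual_failure_prob(0.03, n), 4))
# With a 3% per-drive rate: 1 drive -> 0.03, 2 -> 0.0591, 4 -> ~0.1147
```

So a four-drive stripe of 3%-rate drives fails almost four times as often as a single drive, and with no rebuild path.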

Another concern is the lack of TRIM support on RAID arrays that aren't using Intel's latest controller. I chose SandForce-based SSDs because of their background garbage collection. My understanding is weak, but I believe that may make up for whatever eventual performance degradation the lack of TRIM would cause. Does anyone have more info on that?


I have a setup at work; our card supports SSDs as well as TRIM. The data we store on the SSD SAN is daily video-editing work, so if we lost it, it wouldn't be the end of the world, and we also do an hourly clone of that data to an HDD-based NAS. For us the SSDs are a no-brainer: the speed advantage is through the roof for our 20 editors. We just have it budgeted that, at the very worst, we will be replacing 24 SSDs every year or two.

All that said, SSDs do have the glaring fault that all the drives start to fail at around the same time, but it's a small price to pay for the performance.
 

Neil,

Then you might be interested to learn about the write endurance of SSDs here. This is a really interesting, ongoing report on SSD reliability.
> http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm
Their best drive so far has been a Samsung 830 SSD, which has lasted over a year in their endurance tests. Some models have lasted on average only one to three months, even from acclaimed brands like OWC/Kingston/SanDisk, so it's definitely worth checking out the various brands before buying.
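As a back-of-envelope companion to those torture tests, endurance is often estimated as capacity × rated P/E cycles ÷ write amplification, divided by the daily write volume. All the numbers below are placeholder examples, not specs for any drive mentioned in this thread:

```python
# Back-of-envelope SSD endurance estimate. All inputs are hypothetical
# examples, not specs for any particular drive.
def endurance_days(capacity_gb: float, pe_cycles: int,
                   write_amplification: float, daily_writes_gb: float) -> float:
    """Days until the rated program/erase budget is consumed."""
    total_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_writes_gb / daily_writes_gb

# e.g. a 256 GB drive, 3000 P/E cycles, write amplification of 1.5,
# 50 GB written per day:
print(round(endurance_days(256, 3000, 1.5, 50)))  # -> 10240 days (~28 years)
```

Which is why, for typical desktop workloads, controller or firmware failure tends to kill a drive long before the flash wears out; RAID 0 of SSDs mostly multiplies that risk, not the wear.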


In the interest of comparing RAID performance across systems, I thought I'd share screencaps of two Thunderbolt RAID systems I am currently using, along with some videos I've done on Fusion Drive performance on my Gigabyte board:

2011 MacMini i5 with 4TB Pegasus R4 RAID

2012 Macmini i7 with 12TB Areca Thunderbolt RAID

Blackmagic Fusion Drive Speed Test on Z77X-UP5-TH
http://www.youtube.com/watch?v=TPlk2M5rBUk

Blackmagic & Superduper cloning speed test between Z77X-UP5-TH Fusion Drive & Thunderbolt
http://www.youtube.com/watch?v=X2mFzKyzBcc

As you'll be able to see, the Areca Thunderbolt setup is the fastest here, with speeds of over 700 MB/s on reads and 600 MB/s on writes.
 

Attachments

  • macmini-areca-12tb.jpg
    233.7 KB · Views: 595
  • macmini-i5-pegasusr4-blackmagic.jpg
    233.8 KB · Views: 617