Gigabyte Z170X-Designare: Advice for internal RAID, first CustoMac Pro build

Hi all, I'm in the process of collecting parts for my first CustoMac Pro build based on the January 2017 build recommendation here.

Some parts I have already, so they are a given unless I absolutely HAVE to change anything:

i7-6700K processor
Gigabyte Z170X Designare mobo
Fractal Design R5 case
EVGA G2 850W PSU
KFA2 980Ti HOF GPU
Kingston HyperX Predator 480GB boot drive (at least that is my intention)

I also have two Samsung 840 Pro 500GB drives and a Sonnet Tempo Pro RAID PCIe card that I could re-use, but I'm worried about using up all my PCIe lanes, as I will discuss below.

Last, but not least: I have six old Toshiba 5TB spinning disks that I'd like to repurpose, and I'm thinking about setting up an internal RAID in the case for storage - and preferably for work as well, if the speed gets decent.

I'm thinking I could mount the HyperX in the mobo's M.2 slot - but I've got the version with the PCIe adapter (it was actually cheaper), so I can choose. Any advice on this, given the scenario I'll describe next, would be most welcome, as all this PCIe stuff, lanes etc. is very new to me:

The Z170X Designare mobo is limited to around 20 PCIe lanes if I understand it correctly, and the 980Ti GPU alone needs at least 8 lanes (but 8 lanes should be sufficient for this GPU, right? At least that's what my research so far indicates, but please enlighten me if this is wrong).
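
A quick back-of-the-envelope check in Python suggests why x8 is usually considered enough for a single GPU. The per-lane figure is an assumed ballpark (PCIe 3.0 at roughly 985 MB/s per lane), not something from this board's documentation:

```python
# Rough PCIe 3.0 bandwidth per link width. The per-lane figure is an
# assumed ballpark, not taken from the Z170X spec sheet.
PCIE3_MBPS_PER_LANE = 985

for lanes in (4, 8, 16):
    print(f"x{lanes}: ~{lanes * PCIE3_MBPS_PER_LANE / 1000:.1f} GB/s")
# x4: ~3.9 GB/s, x8: ~7.9 GB/s, x16: ~15.8 GB/s
# Games rarely stream anywhere near 7.9 GB/s over the bus sustained,
# which is why x8 is generally considered enough for one card.
```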

Given that my PCIe lanes are already limited, I've been thinking about installing a RAID card. I gather this may increase the number of possible internal drives, but I'm not sure how many lanes the controller card will use - 4 or 8? It may depend on the card, too. If it uses 4 lanes but gives me the possibility of connecting 8 disks, then I assume this is because the RAID card comes with its own processor and RAM? (Again, please enlighten me if I'm wrong.)
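
For what it's worth, here is the sanity check I sketched for that lane question, assuming ~180 MB/s sequential per spinning disk and a PCIe 2.0 card at ~500 MB/s per lane - both assumed ballpark figures:

```python
# Can a x4 link feed eight spinning disks? Assumed ballpark figures.
HDD_MBPS = 180                    # sequential rate per 5TB HDD (assumed)
PCIE2_MBPS_PER_LANE = 500         # PCIe 2.0, per lane (assumed)

aggregate = 8 * HDD_MBPS          # ~1440 MB/s from eight drives
print("8 HDDs:", aggregate, "MB/s")
print("x4 link:", 4 * PCIE2_MBPS_PER_LANE, "MB/s")   # ~2000 MB/s
print("x8 link:", 8 * PCIE2_MBPS_PER_LANE, "MB/s")   # ~4000 MB/s
# Even x4 has headroom for eight HDDs: the card fans many SATA ports
# out onto few lanes, which by itself needs no onboard CPU or RAM -
# dedicated processors and cache serve the RAID logic, not the fan-out.
```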

Now, I've been looking into the HighPoint RocketRAID 2720SGL, which comes recommended by many here, and which sits in a comfortable price bracket. The number of available ports - 8 - also seems like a perfect fit for my needs.

With this, I assume I could create a RAID 10 array from my old Toshibas and a striped RAID 0 array from my two Samsung 840 Pro drives as a fast work drive - and sell the Sonnet Tempo Pro card, as it becomes superfluous - and with the RocketRAID still only use one 8-lane PCIe slot, right?
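
The usable-capacity arithmetic for those two arrays, assuming the SSD pair is striped and ignoring filesystem overhead:

```python
# Simple usable-capacity formulas (no filesystem overhead, no
# TB-vs-TiB rounding).
def raid10_usable(n, size_tb):
    return n * size_tb / 2        # mirror pairs, then stripe them

def raid0_usable(n, size_tb):
    return n * size_tb            # pure stripe, zero redundancy

print(raid10_usable(6, 5))        # 15.0 TB from six 5TB Toshibas
print(raid0_usable(2, 0.5))       # 1.0 TB from two 500GB 840 Pros
```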

Then again, I've seen some people dissing the 2720SGL because it isn't a "real" RAID card (whatever that means), or at least not a very good one, and saying that Arecas are the bee's knees.

So I began looking into those, as I discovered there are quite a few used Areca 1680s available second-hand on eBay for roughly the same price as a 2720SGL. But then I found a lot of people discussing problems with Arecas and consumer-class drives (which is what I have) because of some issue with TLER(?), and it seems like Arecas are for people who like to tinker and tune to squeeze every last drop of performance from their arrays. I, on the other hand, am willing to sacrifice some speed for maintenance-free convenience and reliability. Would the 2720SGL trump the Areca in this department?

Then there is the question of SATA II vs. SATA III. I believe the Areca 1680 is getting quite old now, and is "only" SATA II, while the 2720SGL is SATA III. Is this significant?
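
My rough reasoning on whether the link generation matters, using assumed typical real-world ceilings (about 300 MB/s for SATA II and 600 MB/s for SATA III after encoding overhead):

```python
SATA2_MBPS = 300   # 3 Gb/s link after 8b/10b overhead (assumed)
SATA3_MBPS = 600   # 6 Gb/s link after 8b/10b overhead (assumed)

HDD_SEQ = 180      # assumed sequential rate of a 5TB spinner
SSD_SEQ = 540      # assumed rate of a SATA SSD like the 840 Pro

print("HDD held back by SATA II?", HDD_SEQ > SATA2_MBPS)  # False
print("SSD held back by SATA II?", SSD_SEQ > SATA2_MBPS)  # True
# So SATA II would be fine for the spinners, but would roughly halve
# the throughput of each SSD.
```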

My last concern is power usage. The 980Ti GPU I have is reported to peak at around 350 W (roughly, from memory), and somewhere on the net I saw a claim that a 1680 RAID setup could pull somewhere between 300-400 W. So I'm also getting concerned that my 850 W PSU may be nearing its limits with the 1680. From the available specs, the power consumption of the 2720SGL seems to be only 11 W - significantly less than the Areca card, and a significant argument in the 2720SGL's favour if everything else were equal...
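
Summing the rough figures above against the 850 W PSU (the non-GPU numbers are my own assumed ballparks):

```python
budget_watts = {
    "980Ti (peak, from memory)": 350,
    "i7-6700K under load":       100,  # assumed ballpark
    "8 x 3.5in HDDs":            80,   # ~10 W each, assumed
    "SSDs, fans, mobo, misc":    60,   # assumed ballpark
}
subtotal = sum(budget_watts.values())
print("subtotal:", subtotal, "W of 850 W")              # ~590 W
print("plus a 300-400 W Areca setup:", subtotal + 350, "W-ish")
# The 2720SGL's ~11 W barely registers; the reported Areca figure
# would push an 850 W PSU uncomfortably close to its limit.
```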

Ok, given the above info, all comments and insight would be most welcome and appreciated! Thanks in advance! :)
 
Just to answer my own question regarding the 980Ti card needing more than 8 PCIe lanes - it seems the answer is no, according to a discussion I found here:

http://www.overclock.net/t/1553182/titan-x-980-ti-bottlenecked-by-x8/10:


"Simple answer

Yes you will have bottlenecks (restriction) with X4

No you will not have bottlenecks (restriction) with X8"
 
The 16 lanes for graphics use will usually come from the CPU. The 20 PCIe lanes you are talking about will come from the PCH (also known as chipset). So, you really have 36 PCIe lanes to work with. Every motherboard will divvy up those lanes differently and auto-switch some lanes depending on what is plugged into it. What I do and recommend is to go to the manufacturer's website and download the manual for the board you're looking at. There it will tell you exactly how the lanes are divided under different scenarios depending on what slots are in use.

Good luck!
 
Thanks a lot Crayonshin!

This is really helpful.

I've had a look in the manual, p. 29 (attached), and it seems you are correct: I have two PCIEX8 slots (so a total of 16 lanes) coming directly from the CPU, and probably another 20 coming from the PCH, as you say.

What I don't find obvious from the manual is how the auto-switching of lanes happens - except that the PCIe slots seem to eat up SATA connectors as they fill up. Which suits me just fine, as long as I can connect my drives through a RAID controller card instead.

So, I'm trying to work out how best to distribute my PCIe cards among the available PCIe slots:

The GPU goes into the PCIEX16 slot, I presume?

Then, I guess the RAID controller card should go into the second PCIEX8 slot?

Could I still put my Sonnet Tempo Pro into the last PCIEX4 slot without the 4 lanes creating a bottleneck for that SSD RAID with my old Samsung 840 Pro drives?
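
My own attempt at answering that with numbers, assuming ~540 MB/s per SATA SSD and a PCIe 2.0 card at ~500 MB/s per lane (the Tempo Pro being an older card, PCIe 2.0 seems the safe assumption):

```python
ssd_pair = 2 * 540        # assumed MB/s from two striped 840 Pros
x4_link = 4 * 500         # assumed PCIe 2.0 x4 ceiling in MB/s

print(f"SSD pair ~{ssd_pair} MB/s vs. x4 link ~{x4_link} MB/s")
# ~1080 vs. ~2000 MB/s: the x4 slot should not bottleneck the pair.
```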

Would I be better off getting a presumably more powerful and more expensive RAID controller card and letting that handle both a spinning-disk RAID and a striped SSD RAID with my Samsung 840 Pro drives? (Assuming that's something you can do on something like an Areca 1880?)

Or could I save some money and get decent enough performance by putting something like a HighPoint RocketRAID in one PCIe slot and the Sonnet Tempo in the other? And in that case, which one should get the 8 lanes and which the 4...?

And how about the HyperX card in the M.2 slot? On p. 29 it looks like it will pull its lanes from the PCH, and as such won't affect any of the PCIe cards pulling lanes from the CPU - am I understanding this correctly...?

Again, advance thanks for all advice, as this is completely new territory for me.
 

Attachments

  • mb_manual_ga-z170x-designare_e.pdf (26.5 MB)
OK, I found some more answers in Beskone's build description using the same Gigabyte Z170X-Designare motherboard here https://www.tonymacx86.com/threads/...-build-skylake-imac-17-1.213683/#post-1433150:

"One thing to note, although the Designare has 6 internal SATA connectors, using the M.2 slot for my Windows drive disables SATA 4 & 5. This is why I bought the SATA PCIe card; with my 4 SSDs taking up SATA 0-3 and the M.2 drive disabling SATA 4 & 5, I was a couple of ports short of what I needed."

As I'm intending to use the HyperX Predator SSD in the M.2 slot as my boot drive, I won't run into the issues with RAID and boot drives that Beskone had, but the same applies with regard to SATA 4 & 5 being disabled when the M.2 slot is populated - I will be running short of SATA connections for storage, so I'd better find a suitable RAID controller card for the PCIe x8 slot.
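
The port arithmetic that drives this conclusion, per the manual page quoted above:

```python
TOTAL_SATA = 6            # internal SATA connectors on the board
DISABLED_BY_M2 = 2        # SATA 4 & 5 go away when M.2 is populated

available = TOTAL_SATA - DISABLED_BY_M2
wanted = 6 + 2            # six Toshiba HDDs plus two 840 Pros
print(f"{available} ports free, {wanted} drives to connect, "
      f"shortfall {wanted - available}")   # shortfall 4 -> RAID card
```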
 
I just bought a second-hand (never used, in the box with manual and CD-ROM) Areca ARC-1224-8i RAID card locally for around $200, with two mini-SAS to SATA break-out cables. Seemed like a good deal. It will hopefully arrive in a couple of days.
 
Just to update this for anyone researching RAID solutions:

The Areca card is still going strong entering into 2019, but I recently had some serious issues with the RAID array.

I have to emphasise that, AFAIK, this was entirely my own fault, for being new to hardware RAID and failing to understand some of the finer details of its inner workings. So I'm posting here in the spirit of sharing knowledge with other people who are curious about implementing a hardware RAID solution in a Hackintosh without prior experience - no need to read further if you are a RAID expert, but this may be useful if you are just starting your research:

After going strong for almost two years, my Areca RAID array lost three disks and most of its data in a matter of days. Fortunately I had the entire array backed up to Google Drive with Arq 5, as I had to restore most of the contents from the backup. It just goes to confirm what has been repeated over and over elsewhere: a RAID is not a backup...

I'm not 100% sure about the cause, but I believe it was my use of Toshiba consumer drives in the Areca hardware RAID. I had read somewhere that somebody had successfully used consumer Toshiba 5TB drives in an Areca RAID solution, so I decided to take my chances, as enterprise drives of the same capacity were prohibitively expensive. I also happened to stumble across a local deal where I could get the 5TB Toshibas brand new for a ridiculous price that was too good to pass up.

This worked very well for almost two years; then all of a sudden I began to have issues, and things escalated quickly. It's possible I could have saved the array had I been more knowledgeable about what to do in the event of a failure, and if I had had a spare drive at hand to quickly replace the first failed drive. But after a short while I had three drives marked as failed (in an 8-drive RAID 6 array with no hot spares), and lost about 80% of the contents of the array.

After reading up on this, I suspect the cause of this catastrophic failure was the consumer drives, which don't support a vital feature called TLER (Time-Limited Error Recovery) that hardware RAIDs need in order to handle the drives correctly in the event of difficulties. From my basic understanding, TLER caps how long a drive spends on internal error recovery, and this is essential for hardware RAIDs, as I've just learned the hard way (but not for software RAIDs, which handle the timeout in software).

Without the TLER firmware, the hardware RAID will attempt to bring the problematic drive back to life indefinitely, and the entire array will eventually be brought to a halt and then completely down, instead of just the first problematic drive being marked as faulty and in need of a health check. Also, according to my limited understanding, once the RAID is attempting to rebuild itself because of the first failed drive, it becomes more vulnerable to additional failures, and things can escalate quickly. (Anyone more knowledgeable about this is welcome to correct me and elaborate if I'm wrong.)
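
As a toy illustration of the failure mode I think hit me (all timeout figures below are illustrative assumptions, not datasheet values):

```python
# Toy model: enterprise firmware caps error recovery (TLER) at a few
# seconds, consumer firmware may grind away for minutes, and many
# hardware controllers drop any drive that stays silent too long.
CONTROLLER_DROP_S = 8     # assumed controller patience, in seconds

def controller_verdict(recovery_s):
    if recovery_s <= CONTROLLER_DROP_S:
        return "bad sector reported, array stays up"
    return "drive unresponsive too long -> marked FAILED"

print("TLER drive (7 s):      ", controller_verdict(7))
print("consumer drive (120 s):", controller_verdict(120))
# One deep-recovery stall can eject a perfectly healthy disk; during
# the rebuild that follows, the next stall ejects another, and so on.
```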

I have just replaced the 8 x 5TB Toshiba consumer drives with 8 x 1TB WD RE4 enterprise drives (+1 spare drive in the event of issues!). The cost of eight 5TB enterprise drives would have been prohibitive for me, but I was able to find a reputable dealer on eBay selling brand-new 1TB WD RE4 drives for cheap, probably because they are no longer the latest and greatest model or capacity. This makes for a 6TB RAID array, which is plenty big enough for my current projects (still photo and video files). With the recent experience in mind, this is also a handier size to back up to other destinations. I will move the 5TB Toshiba drives on to a new FreeNAS storage server build, where they should work fine, given that FreeNAS uses software RAID with no requirement for TLER support in the disk firmware.
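
The capacity math behind the 6TB figure, for anyone following along (straight RAID 6 formula, no overhead counted):

```python
def raid6_usable(n_drives, size_tb):
    return (n_drives - 2) * size_tb    # two drives' worth of parity

print(raid6_usable(8, 1))   # 6 TB: the new 8 x 1TB WD RE4 array
print(raid6_usable(8, 5))   # 30 TB: what the old Toshiba array held
```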

I have been thinking about a FreeNAS-based storage server for a long time, given FreeNAS' support for the ZFS file system with its superior data-integrity features. But only after extensive tinkering with Hackintoshes have I gained the self-confidence needed to undertake a FreeNAS server build! :cool:
 
When I was in the planning stages of setting up a NAS, I looked into FreeNAS and even put together a test machine with it briefly. I absolutely hated it. I found it needlessly complicated - extremely complicated. I hated ZFS. Capacity expansion sucked. It's a notorious RAM and resource hog.

I ended up putting together an Xpenology NAS. It's a Synology clone - think Hackintosh, but for Synology's DSM operating system. I used extremely cheap, slow consumer-level drives in an 8-drive SHR-2 array (dual-disk redundancy), on a motherboard with an Intel Atom SoC that has a 14 W TDP. That was over 3 years ago, and it has been up and running 24/7 since. It's a fantastic NAS that gives me 800 MB/s reads and 300 MB/s writes (due to the very slow drives I used) over 10GbE. It also acts as my Plex server, serving up videos to my Apple TV.

I have expanded capacity since the initial setup by swapping 3 of the original 8TB drives for 12TB drives. It was an extremely simple process that just required a few clicks after physically swapping the drives. As far as I know, on ZFS, I would have had to swap out all 8 drives on the array to expand capacity. And when I tried this on my test build, it was so complicated to do, I just gave up.
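
A sketch of that difference, under simplified assumed rules: a classic fixed-stripe array is limited by its smallest member, while SHR-2 (by my reading of Synology's public calculator, so treat this as an approximation) yields roughly the total minus the two largest drives:

```python
def fixed_raid6_usable(sizes_tb):
    # classic dual-parity stripe: every member truncated to the smallest
    return (len(sizes_tb) - 2) * min(sizes_tb)

def shr2_usable(sizes_tb):
    # rough SHR-2 rule of thumb: total minus the two largest drives
    s = sorted(sizes_tb)
    return sum(s) - s[-1] - s[-2]

mixed = [8] * 5 + [12] * 3          # three of eight drives upgraded
print(fixed_raid6_usable(mixed))    # 48 TB: the upgrade buys nothing yet
print(shr2_usable(mixed))           # ~52 TB: a partial upgrade pays off
```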

Just a bit of a warning to do more research before investing too much in FreeNAS...
 
Thanks Pastrychef, I really value your opinion.

The storage server build isn't written in stone yet, so I'm still open to suggestions.

What really sold me on FreeNAS was the file-integrity stuff, and my intention was to build it with just the 8 drives I have and forget about expansion (given how inflexible ZFS sounds in that regard), but if it turns out to be too complicated to set up, then I'll have to re-evaluate.

I looked briefly at some of the Xpenology tutorials and came away with the impression that it looked harder than setting up a FreeNAS server, but your story is to the contrary, so I'll have to look up the Xpenology tutorials again :)
 

Yes, all the setup guides I've seen for Xpenology do make things seem complicated. But, in reality, it's not very difficult at all.
  1. Mount the boot loader image on a PC. (I don't know of a method to do this in macOS.)
  2. Edit the config file to enter the MAC addresses of your NICs.
  3. Edit the config file with a generated serial number.
  4. Save and write the img file to a USB flash drive.
  5. Plug the USB flash drive into your NAS and boot up.
  6. Search for the IP address of your NAS.
  7. Connect to your NAS via a web browser.
  8. Follow the on-screen instructions to install the Synology DSM.
  9. Done.
After that, you can use the GUI to configure whatever type of array you'd like - RAID, SHR, etc. The process is quick and extremely easy thanks to the fantastic interface.

Synology DSM is a popular choice for many businesses and can be found in many server rooms. Synology added support for Btrfs about 1-2 years ago, which I think brings feature parity with ZFS. My array pre-dates the availability of Btrfs, so I'm still on EXT4, but in 3+ years of 24/7 operation I haven't experienced any problems or data loss. I've tested by pulling a drive while the array was online, and it handled everything beautifully. I've never been brave enough to test by pulling two drives but, technically, it should be able to handle that too.
 